Implications of AI Involvement for Trust in Expert Advisory Workflows Under Epistemic Dependence
The increasing integration of AI-powered tools into expert workflows in domains such as medicine, law, and finance raises a critical question: how does AI involvement influence a user’s trust in the human expert, the AI system, and their combination? To investigate this, we conducted a user study (N=77) built around a simulated course-planning task, comparing conditions that differed in both the presence of AI and the specific mode of human-AI collaboration. Our results indicate that while the advisor’s ability to create a correct schedule is important, users’ perceptions of expertise and trust are also shaped by how the expert used the AI assistant. These findings raise important considerations for the design of human-AI hybrid teams, particularly when the adoption of recommendations depends on the end user’s perception of the recommender’s expertise.
💡 Research Summary
The paper investigates how the involvement of AI assistants influences users’ trust in human experts, the AI system, and the combined human‑AI team when users are epistemically dependent on the advisor. To explore this, the authors conducted a controlled online experiment with 77 participants using a simulated academic advising task: constructing a valid university course schedule. Three interaction modes were compared: (1) no AI assistance (advisor‑only), (2) reactive AI assistance, in which the system automatically monitors the advisor’s output and corrects errors without prompting, and (3) proactive AI assistance, in which the advisor explicitly invokes the AI to verify the schedule. Conditions also varied in whether the advisor made a mistake or produced a correct schedule, for five experimental conditions in total.
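The scheduling task that both AI modes verify is easy to picture in code. The paper’s actual constraint set is not described here, so the sketch below assumes two hypothetical rules (no overlapping meeting times, all prerequisites completed) purely to illustrate the kind of check such an assistant could perform; the `Course` class and `schedule_errors` function are invented for this example.

```python
from dataclasses import dataclass


@dataclass
class Course:
    code: str
    day: str                   # e.g. "Mon"
    start: int                 # start hour, 24h clock
    end: int                   # end hour, 24h clock
    prerequisites: tuple = ()  # course codes that must already be completed


def schedule_errors(schedule, completed):
    """Return a list of constraint violations for a candidate schedule.

    Hypothetical rules for illustration only:
      1. no two courses may overlap in time on the same day,
      2. every prerequisite must appear in the set of completed courses.
    """
    errors = []

    # Rule 1: pairwise time-conflict check.
    for i, a in enumerate(schedule):
        for b in schedule[i + 1:]:
            if a.day == b.day and a.start < b.end and b.start < a.end:
                errors.append(f"time conflict: {a.code} overlaps {b.code}")

    # Rule 2: prerequisite check.
    for course in schedule:
        for prereq in course.prerequisites:
            if prereq not in completed:
                errors.append(f"missing prerequisite {prereq} for {course.code}")

    return errors


# Example: one valid course and one with an unmet prerequisite.
plan = [
    Course("CS101", "Mon", 9, 11),
    Course("CS301", "Mon", 13, 15, prerequisites=("CS201",)),
]
print(schedule_errors(plan, completed={"CS101"}))
# -> ['missing prerequisite CS201 for CS301']
```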
Trust was measured along several dimensions. The authors employed the Hendriks et al. epistemic trust questionnaire to capture perceived expertise, benevolence, and integrity of the advisor. They also adapted the Riedl et al. instrument to obtain entity‑level trust scores for the advisor, the AI, and the combined team. Additionally, a global 0‑100 trust rating and a binary reuse‑intention question were collected.
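These measures reduce to a handful of per-participant scores: subscale means for expertise, benevolence, and integrity; entity-level trust ratings for the advisor, the AI, and the team; a 0-100 global rating; and a binary reuse intention. The sketch below shows one plausible way to aggregate raw item responses into those scores; the column names, item counts, and 7-point scale are assumptions for illustration, not the study’s actual coding scheme.

```python
import pandas as pd

# Hypothetical raw responses: one row per participant, items on a 1-7 scale.
# Column names and item counts are illustrative, not the study's instrument.
raw = pd.DataFrame({
    "expertise_1": [6, 3], "expertise_2": [7, 4],
    "benevolence_1": [5, 5], "benevolence_2": [6, 5],
    "integrity_1": [6, 4], "integrity_2": [6, 5],
    "trust_advisor": [6, 3], "trust_ai": [5, 6], "trust_team": [6, 4],
    "global_trust_0_100": [85, 40],   # global 0-100 trust rating
    "would_reuse": [1, 0],            # binary reuse intention
    "condition": ["advisor_only_correct", "proactive_ai_error"],
})


def subscale_mean(df, prefix):
    """Average all items sharing a subscale prefix, per participant."""
    items = [c for c in df.columns if c.startswith(prefix)]
    return df[items].mean(axis=1)


scores = pd.DataFrame({
    "condition": raw["condition"],
    "expertise": subscale_mean(raw, "expertise"),
    "benevolence": subscale_mean(raw, "benevolence"),
    "integrity": subscale_mean(raw, "integrity"),
    "advisor_trust": raw["trust_advisor"],
    "ai_trust": raw["trust_ai"],
    "team_trust": raw["trust_team"],
    "global_trust": raw["global_trust_0_100"],
    "would_reuse": raw["would_reuse"],
})
print(scores)
```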
The hypotheses centered on the expectation that (a) advisor errors would reduce perceived expertise, (b) proactive AI use would signal lower advisor competence even when outcomes were correct, and (c) benevolence and integrity would be higher for advisors who voluntarily consulted AI. The results partially confirmed these expectations. Advisor mistakes significantly lowered perceived expertise (supporting H2) and also reduced willingness to reuse the advisor, especially in the proactive‑AI‑error condition (partial support for H7). However, when the advisor produced a correct schedule, the presence or mode of AI assistance did not materially affect perceived expertise, benevolence, or integrity (refuting H3‑H5). The global trust rating showed no significant differences across conditions, aligning with H6. Notably, the proactive AI condition—where the advisor manually invoked the AI—was consistently rated lower on trust dimensions than the reactive AI or no‑AI conditions when errors occurred, suggesting that users interpret manual AI consultation as a cue of advisor uncertainty.
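The summary does not restate which statistical tests the authors ran, so the following sketch only illustrates the general shape of such a between-subjects comparison: an omnibus Kruskal-Wallis test of a trust score across conditions, followed by one pairwise Mann-Whitney follow-up. The condition labels and scores are hypothetical placeholders, and the five cells shown are one plausible reading of the design rather than the paper’s exact conditions.

```python
from scipy import stats

# Hypothetical perceived-expertise scores (1-7) grouped by condition.
# Labels are a plausible reading of the five cells, not the paper's exact design.
by_condition = {
    "advisor_only_correct": [6.5, 6.0, 5.5, 6.0],
    "advisor_only_error":   [4.0, 3.5, 4.5, 3.0],
    "reactive_ai_error":    [5.0, 4.5, 5.5, 4.0],
    "proactive_ai_correct": [6.0, 5.5, 6.5, 6.0],
    "proactive_ai_error":   [3.0, 3.5, 2.5, 4.0],
}

# Omnibus test: does perceived expertise differ across the five conditions?
h_stat, p_omnibus = stats.kruskal(*by_condition.values())
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_omnibus:.3f}")

# Example follow-up: proactive-AI-error vs. advisor-only-error.
u_stat, p_pair = stats.mannwhitneyu(
    by_condition["proactive_ai_error"],
    by_condition["advisor_only_error"],
    alternative="two-sided",
)
print(f"Mann-Whitney U={u_stat:.2f}, p={p_pair:.3f}")
```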
These findings map onto established theories of automation and responsibility allocation. According to Parasuraman’s framework, the question of who initiates monitoring and verification shapes perceived decision authority. When the advisor asks the AI to check their work, users infer a shift of epistemic responsibility toward the AI, thereby diminishing the advisor’s perceived competence. Conversely, automatic AI oversight preserves the advisor’s primary role, mitigating negative trust effects.
The authors derive three design implications. First, the timing and visibility of AI intervention should be engineered to maintain the advisor’s sense of responsibility—transparent cues that the advisor remains the primary decision‑maker can preserve expertise judgments. Second, error correction should favor reactive (automatic) AI mechanisms when possible, to avoid signaling advisor doubt. Third, clear communication about the AI’s capabilities, limitations, and role can sustain trust in the AI while protecting trust in the human expert.
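To make the first two implications concrete, the sketch below is a hypothetical interface fragment, not the study’s software: it contrasts a reactive mode, where the tool verifies every schedule automatically, with a proactive mode, where the advisor explicitly requests a check, and in both cases words the user-facing message so the advisor remains framed as the primary decision-maker.

```python
def verification_message(errors, mode):
    """Build a hypothetical user-facing status line for a checked schedule.

    mode: "reactive"  - the tool verified the schedule automatically,
          "proactive" - the advisor explicitly asked the tool to check it.
    Both messages keep the advisor framed as the primary decision-maker.
    """
    source = (
        "automatically checked in the background"
        if mode == "reactive"
        else "checked at the advisor's request"
    )
    if errors:
        detail = "; ".join(errors)
        return (f"Your advisor's schedule was {source}. "
                f"The tool flagged issues for the advisor to resolve: {detail}")
    return (f"Your advisor's schedule was {source}. "
            "No issues were found; the advisor's plan stands as proposed.")


# Example usage with hand-written error strings.
print(verification_message([], mode="reactive"))
print(verification_message(["missing prerequisite CS201 for CS301"], mode="proactive"))
```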
In sum, the study demonstrates that observed advisor performance is the dominant driver of trust outcomes, but the mode of AI involvement modulates where trust losses are allocated, especially under error conditions. These insights are valuable for designing human‑AI advisory systems in high‑stakes domains such as healthcare, law, and finance, where maintaining user confidence in both the expert and the supporting technology is critical for adoption and effective decision‑making.