Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs


The rapid and unprecedented dominance of Artificial Intelligence (AI), particularly through Large Language Models (LLMs), has raised critical trust challenges in high-stakes domains like politics. Biased LLM decisions and misinformation undermine democratic processes, and existing trust models fail to address the intricacies of trust in LLMs. Oversimplified, one-directional approaches have largely overlooked the many relationships between trustor (user) contextual factors (e.g., ideology, perceptions) and trustee (LLM) systemic elements (e.g., scientists, tool features). In this work, we introduce a bowtie model for holistically conceptualizing and formulating trust in LLMs, with a core component that comprehensively explores trust by tying together its two sides, namely the trustor and the trustee, along with their intricate relationships. We uncover these relationships within the proposed bowtie model and beyond, in its wider sociotechnical ecosystem, through a mixed-methods explanatory study that exploits a political discourse analysis tool (integrating ChatGPT) to explore and answer the following critical questions: 1) How do the trustor’s contextual factors influence trust-related actions? 2) How do these factors influence and interact with the trustee’s systemic elements? 3) How does trust itself vary across the trustee’s systemic elements? Our bowtie-based explanatory analysis reveals that past experiences and familiarity significantly shape the trustor’s trust-related actions; not all trustor contextual factors equally influence trustee systemic elements; and the trustee’s human-in-the-loop features enhance trust, while lack of transparency decreases it. Finally, this evidence is used to deliver recommendations, insights, and pathways towards building robust trusting ecosystems in LLM-based solutions.


💡 Research Summary

The paper addresses the pressing problem of trust in large language models (LLMs) when they are deployed in high‑stakes domains such as politics, where biased outputs and “hallucinations” can undermine democratic processes. Existing trust frameworks from organizational studies, sociology, and philosophy are criticized for being overly simplistic and one‑directional, focusing either on the trustor (the user) or the trustee (the technology) but not on the complex, bidirectional relationships that emerge in sociotechnical contexts. To fill this gap, the authors propose a novel “bowtie” model of trust in LLMs. The model visualizes trust as a two‑sided structure: on the left, user‑side contextual factors are organized into four broad categories—demographics, ideologies, background, and perceptions—and then broken down into specific variables such as education level, familiarity, and past experience. On the right, trustee‑side systemic elements are grouped into scientific discipline, scientists, products, and stakeholders, which are further detailed for the political‑discourse scenario (e.g., political scientists, data journalists, the LLM‑based discourse tool, hosting organizations). The central “bowtie core” ties these sides together and explicitly captures (1) intra‑relationships among user factors, (2) bidirectional inter‑relationships between user factors and trustee elements, and (3) intra‑relationships among trustee elements.
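To make the bowtie’s structure concrete, the sketch below encodes it as a simple data structure. This is an illustrative reading of the summary rather than the authors’ formalization; the specific factor and element names (e.g., `past_experience`, `llm_discourse_tool`) are assumptions drawn from the categories described above.

```python
from dataclasses import dataclass, field
from enum import Enum


class RelationKind(Enum):
    """The three relationship types captured by the bowtie core."""
    TRUSTOR_INTRA = "intra-relationship among trustor factors"
    INTER = "bidirectional inter-relationship between trustor and trustee"
    TRUSTEE_INTRA = "intra-relationship among trustee elements"


@dataclass
class Relation:
    source: str          # e.g. "past_experience" (illustrative name)
    target: str          # e.g. "llm_discourse_tool" (illustrative name)
    kind: RelationKind
    evidence: str = ""   # qualitative or quantitative support for the tie


@dataclass
class BowtieModel:
    # Left side: trustor contextual factors, grouped into the four categories
    trustor_factors: dict[str, list[str]] = field(default_factory=lambda: {
        "demographics": ["age", "gender", "occupation"],
        "ideologies": ["political_alignment"],
        "background": ["education_level", "familiarity", "past_experience"],
        "perceptions": ["perceived_transparency", "perceived_accountability"],
    })
    # Right side: trustee systemic elements, grouped into the four categories
    trustee_elements: dict[str, list[str]] = field(default_factory=lambda: {
        "scientific_discipline": ["political_science"],
        "scientists": ["political_scientists", "data_journalists"],
        "products": ["llm_discourse_tool", "human_in_the_loop", "transparency_cues"],
        "stakeholders": ["hosting_organizations"],
    })
    # Core: the relationships tying the two sides together
    core: list[Relation] = field(default_factory=list)


# Example: record one inter-relationship reported in the findings
model = BowtieModel()
model.core.append(Relation(
    source="past_experience",
    target="llm_discourse_tool",
    kind=RelationKind.INTER,
    evidence="past experience strongly shapes trust-related actions",
))
```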

Methodologically, the study adopts a mixed‑methods explanatory design. Quantitative data were collected through on‑site interactive tasks where 29 participants (balanced across gender, age, occupation, and education) performed trust‑related actions under varying conditions (e.g., presence/absence of transparency information, activation of human‑in‑the‑loop features). Statistical correlation analyses examined how user variables influenced trust‑related behaviors. Qualitative insights were gathered via semi‑structured online interviews, and deductive thematic analysis was used to uncover causal explanations for the observed quantitative patterns. The study received IRB approval (protocol 9995/2024).
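The summary does not specify which correlation statistics were used, so the following is a minimal sketch of the kind of analysis described, assuming Spearman rank correlations over per-participant variables; the column names and the CSV file are placeholders, not the study’s actual instrument.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-participant data (the study had n = 29 participants)
df = pd.read_csv("participants.csv")  # placeholder path, assumed columns below

user_factors = ["familiarity", "past_experience", "education_level"]
trust_actions = ["willingness_to_rely", "verification_requests"]

# Correlate each user contextual factor with each trust-related behavior
for factor in user_factors:
    for action in trust_actions:
        rho, p = spearmanr(df[factor], df[action])
        print(f"{factor} vs {action}: rho={rho:.2f}, p={p:.3f}")
```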

Key findings include: (1) past experience with AI systems and familiarity with LLMs are the strongest predictors of trust‑related actions such as willingness to rely on the model. (2) Not all user contextual factors affect trustee elements equally; ideological alignment strongly drives demands for transparency, whereas education level is more linked to expectations of human‑in‑the‑loop (HITL) functionalities. (3) HITL designs—allowing users to provide feedback, verify outputs, or intervene—significantly boost perceived transparency, accountability, and overall trust. Conversely, opacity in model reasoning or lack of explanatory cues sharply reduces trust. Qualitative comments reinforce these results, with participants emphasizing the need to “see why the model answered that way” and to “have a way to correct or verify outputs.”

Based on these insights, the authors propose four practical recommendations for policymakers, system designers, and researchers: (a) foster user familiarity through education and positive experience programs; (b) embed robust transparency mechanisms (e.g., uncertainty quantification, source citations, model rationale) into LLM interfaces; (c) standardize HITL interaction patterns to ensure user participation in the decision loop; and (d) tailor trust‑building strategies to users’ ideological and cultural backgrounds, recognizing that a one‑size‑fits‑all approach is insufficient.
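As a purely illustrative sketch (not the authors’ implementation), recommendations (b) and (c) could take a shape like the following in an LLM interface: each answer carries source citations and an uncertainty estimate, and a human-in-the-loop gate asks the user to verify, correct, or reject the output before it is accepted.

```python
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    sources: list[str]   # transparency: where the claim comes from
    confidence: float    # transparency: the model's uncertainty estimate


def human_in_the_loop_review(answer: Answer) -> Answer:
    """Show rationale cues and let the user verify or correct the output."""
    print(answer.text)
    print("Sources:", ", ".join(answer.sources) or "none provided")
    print(f"Model confidence: {answer.confidence:.0%}")
    decision = input("Accept (a), correct (c), or reject (r)? ").strip().lower()
    if decision == "c":
        answer.text = input("Enter corrected text: ")
    elif decision == "r":
        answer.text = ""
    return answer
```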

In conclusion, the bowtie model reconceptualizes trust as a bidirectional, multi‑layered construct, providing a systematic framework for mapping and measuring the intricate relationships between trustors and trustees in LLM‑driven political discourse tools. By integrating quantitative and qualitative evidence, the study demonstrates that trust can be meaningfully enhanced through targeted design interventions and policy measures, offering a blueprint that can be extended to other high‑risk AI applications.

