Understanding Human-AI Trust in Education

As AI chatbots become integrated into education, students are turning to these systems for guidance, feedback, and information. However, the anthropomorphic characteristics of these chatbots create ambiguity over whether students develop trust in them in ways similar to trusting a human peer or instructor (human-like trust, often linked to interpersonal trust models) or in ways similar to trusting a conventional technology (system-like trust, often linked to technology trust models). This ambiguity presents theoretical challenges, as interpersonal trust models may inappropriately ascribe human intentionality and morality to AI, while technology trust models were developed for non-social systems, leaving their applicability to conversational, human-like agents unclear. To address this gap, we examine how these two forms of trust, human-like and system-like, comparatively influence students’ perceptions of an AI chatbot, specifically perceived enjoyment, trusting intention, behavioral intention to use, and perceived usefulness. Using partial least squares structural equation modeling, we found that both forms of trust significantly influenced student perceptions, though with varied effects. Human-like trust was the stronger predictor of trusting intention, whereas system-like trust more strongly influenced behavioral intention and perceived usefulness; both had similar effects on perceived enjoyment. The results suggest that interactions with AI chatbots give rise to a distinct form of trust, human-AI trust, that differs from human-human and human-technology models, highlighting the need for new theoretical frameworks in this domain. In addition, the study offers practical insights for fostering appropriately calibrated trust, which is critical for the effective adoption and pedagogical impact of AI in education.


💡 Research Summary

The paper addresses a timely problem: as conversational AI chatbots become commonplace in educational settings, students develop trust in these agents, but it is unclear whether that trust resembles interpersonal trust (human‑like trust) or technology trust (system‑like trust). The authors first delineate the two constructs. Human‑like trust draws on classic interpersonal trust theory, emphasizing perceived intentionality, moral agency, and relational expectations. System‑like trust is rooted in technology trust literature, focusing on reliability, performance, security, and functional dependability. The central research questions ask how each form of trust predicts four outcome variables—perceived enjoyment, trusting intention, behavioral intention to use, and perceived usefulness—and which form is the stronger predictor for each outcome.

To answer these questions, the authors conducted a cross‑sectional survey with 312 undergraduate students from five Korean universities. Established scales were adapted to measure human‑like trust (5 items), system‑like trust (5 items), perceived enjoyment (3 items), trusting intention (3 items), behavioral intention to use (3 items), and perceived usefulness (4 items) on 7‑point Likert scales. Data were analyzed using partial least squares structural equation modeling (PLS‑SEM). Reliability (Cronbach’s α, composite reliability) and convergent validity (AVE) exceeded recommended thresholds, and the structural model showed good fit (SRMR = 0.045).
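The paper reports these measurement-quality statistics but not how they are computed. As a reference, the three checks follow standard textbook formulas; a minimal sketch (the loading values below are illustrative, not the paper's actual data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for one construct.
    items: (n_respondents, k_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized outer loadings; threshold typically >= 0.70."""
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean squared loading; convergent validity threshold >= 0.50."""
    return float((loadings ** 2).mean())

# Hypothetical loadings for a 5-item construct (not from the paper):
loadings = np.array([0.78, 0.81, 0.75, 0.80, 0.77])
print(composite_reliability(loadings), average_variance_extracted(loadings))
```

In practice, PLS-SEM software (e.g., SmartPLS or the R package `seminr`) reports these alongside the path estimates; the sketch only makes explicit what the reported thresholds refer to.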

Key findings:

  1. Human‑like trust had the strongest effect on trusting intention (β = 0.42, p < 0.001), indicating that when students attribute human‑like qualities to a chatbot, they are more willing to place trust in it.
  2. System‑like trust was the dominant predictor of behavioral intention to use (β = 0.38, p < 0.001) and perceived usefulness (β = 0.41, p < 0.001), suggesting that performance, reliability, and security drive actual adoption and perceived value.
  3. Both trust forms similarly influenced perceived enjoyment (human‑like β = 0.29, system‑like β = 0.27, p < 0.01), highlighting that enjoyment is a shared affective outcome of both relational and functional trust.

These results support the authors’ claim that interactions with AI chatbots generate a distinct “human‑AI trust” that cannot be fully captured by existing interpersonal or technology trust models. Theoretical implications include the need for a hybrid framework that integrates relational cues (transparency, empathy, perceived agency) with technical quality assurances (accuracy, speed, security). Practically, designers of educational chatbots should simultaneously cultivate human‑like trust (e.g., through natural language, personable tone, explainable reasoning) and system‑like trust (e.g., through robust performance, data privacy safeguards).

The paper acknowledges several limitations: the sample is confined to Korean undergraduates, limiting cross‑cultural generalizability; reliance on self‑report measures raises common‑method bias concerns; and the study does not differentiate among chatbot types or task difficulty, which could moderate trust dynamics. Future research directions proposed include multi‑national comparative studies, longitudinal tracking of trust evolution, linking trust to actual learning outcomes, and experimental manipulations of relational versus functional chatbot features.

In sum, the study makes a valuable contribution by empirically demonstrating that human‑like and system‑like trust exert distinct yet complementary influences on students’ attitudes toward AI chatbots. It calls for new theoretical models of human‑AI trust in education and offers actionable guidance for developers seeking to foster appropriately calibrated trust, thereby enhancing adoption and pedagogical effectiveness.

