Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to the iCub's answers
To investigate the functional and social acceptance of a humanoid robot, we carried out an experimental study with 56 adult participants and the iCub robot. Trust in the robot was considered a main indicator of acceptance in decision-making tasks characterized by perceptual uncertainty (e.g., evaluating the weight of two objects) and socio-cognitive uncertainty (e.g., evaluating which is the most suitable item in a specific context), and was measured by the participants' conformation to the iCub's answers to specific questions. In particular, we were interested in understanding whether specific (i) user-related features (i.e., desire for control), (ii) robot-related features (i.e., attitude towards the social influence of robots), and (iii) context-related features (i.e., collaborative vs. competitive scenario) may influence participants' trust towards the iCub robot. We found that participants conformed more to the iCub's answers when their decisions concerned functional issues than when they concerned social issues. Moreover, the few participants who conformed to the iCub's answers on social issues also conformed less on functional issues. Trust in the robot's functional savvy thus does not seem to be a prerequisite for trust in its social savvy. Finally, desire for control, attitude towards the social influence of robots, and type of interaction scenario did not influence trust in the iCub. Results are discussed in relation to the methodology of HRI research.
💡 Research Summary
This paper investigates how trust in a humanoid robot can serve as an indicator of both functional and social acceptance, using behavioral conformity rather than self‑report measures. Fifty‑six adult participants interacted with the iCub robot in a series of decision‑making tasks that were deliberately designed to create two distinct types of uncertainty. The “functional” tasks involved perceptual ambiguity (e.g., judging which of two objects is heavier), where the correct answer can be inferred from objective sensory information. The “social” tasks involved socio‑cognitive ambiguity (e.g., selecting the most appropriate item for a given context), where the correct answer depends on cultural norms, personal values, and contextual reasoning. After each trial, participants were shown the iCub’s answer and asked whether they would adopt that answer or stick with their own judgment. The proportion of trials in which participants adopted the robot’s answer—referred to as “conformity”—was taken as a behavioral proxy for trust.
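To make the behavioral metric concrete, here is a minimal sketch of how such a conformity rate could be computed from trial-level records, split by task type. The data frame, column names, and toy values are assumptions made for illustration; they are not the paper's actual dataset or coding scheme.

```python
import pandas as pd

# Hypothetical trial-level records: one row per participant x trial.
# Column names and values are illustrative only.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "task_type":   ["functional", "functional", "social", "social"] * 2,
    "conformed":   [1, 1, 0, 0, 1, 0, 0, 1],  # 1 = adopted the iCub's answer
})

# Conformity rate: proportion of trials in which the participant
# adopted the robot's answer, computed per task type.
conformity = (
    trials.groupby(["participant", "task_type"])["conformed"]
          .mean()
          .rename("conformity_rate")
          .reset_index()
)
print(conformity)
```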
In addition to the core task manipulation, the authors examined three potential moderators of trust: (i) desire for control, measured by a standard questionnaire assessing the extent to which individuals prefer to retain agency over outcomes; (ii) attitude toward the social influence of robots, gauging participants’ openness to robots shaping human decisions; and (iii) interaction scenario, operationalized as either a collaborative or competitive framing of the task. These variables were entered as covariates in a mixed‑effects model to test whether they predicted conformity beyond the functional versus social distinction.
The results were clear and theoretically informative. First, participants conformed significantly more often to the iCub’s answers in functional tasks than in social tasks (p < .01). This aligns with the intuition that people are more willing to rely on a robot when the domain is objectively measurable and the robot’s expertise can be empirically verified. Second, conformity in the social domain was generally low, indicating a reluctance to cede judgment to a machine when personal values and contextual nuance are at stake. Interestingly, the few participants who did conform in the social condition showed markedly lower conformity in the functional condition, suggesting that trust is domain-specific, and possibly inversely related across domains: confidence in the robot’s social savvy does not presuppose confidence in its functional savvy, and vice versa.
Crucially, none of the three moderator variables exerted a statistically significant effect on conformity (all p > .05). Desire for control, which prior literature has linked to technology avoidance, did not predict reduced reliance on the robot. Likewise, a positive attitude toward robots’ social influence did not translate into higher conformity, nor did the framing of the task as collaborative versus competitive alter trust levels. These null findings suggest that the nature of the task itself, whether fundamentally perceptual or socio-cognitive, dominates trust formation, outweighing individual differences and contextual framing.
Methodologically, the study employed a 2 × 2 mixed design (task type × scenario) with continuous covariates, analyzed using generalized linear models appropriate for binary outcomes. Effect sizes and confidence intervals were reported, lending robustness to the statistical conclusions. However, the sample size (N = 56) and the exclusive use of the iCub platform limit the generalizability of the findings. Future work should replicate the paradigm with larger, more diverse participant pools, incorporate robots with varying embodiment and interaction styles, and explore cross‑cultural differences in trust dynamics.
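As a rough sketch of this type of analysis, the snippet below fits a trial-level logistic regression of conformity on task type, scenario, and the two individual-difference covariates, using synthetic data. The variable names, the simulated effect, and the use of cluster-robust standard errors by participant (in place of a full random-effects GLMM, which would more closely match the mixed-effects analysis described above and is typically fit with lme4 in R) are assumptions of this illustration, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data; names and coding are illustrative assumptions.
rng = np.random.default_rng(0)
n_participants, n_trials = 56, 20
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "task_type": np.tile(["functional", "social"], n_participants * n_trials // 2),
    "scenario": np.repeat(
        rng.choice(["collaborative", "competitive"], n_participants), n_trials
    ),
    "desire_for_control": np.repeat(rng.normal(0, 1, n_participants), n_trials),
    "attitude_influence": np.repeat(rng.normal(0, 1, n_participants), n_trials),
})
# Simulate the reported pattern: higher conformity on functional trials.
p = np.where(df["task_type"] == "functional", 0.6, 0.2)
df["conformed"] = rng.binomial(1, p)

# Binomial GLM on the binary conformity outcome, with standard errors
# clustered by participant to account for repeated measures.
model = smf.glm(
    "conformed ~ task_type * scenario + desire_for_control + attitude_influence",
    data=df,
    family=sm.families.Binomial(),
).fit(cov_type="cluster", cov_kwds={"groups": df["participant"]})
print(model.summary())
```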
From an applied perspective, the findings carry several design implications. For domains where accuracy and objective measurement are paramount—such as manufacturing, medical diagnostics, or navigation—emphasizing the robot’s sensor precision, transparent feedback loops, and clear error reporting is likely to foster rapid trust acquisition. Conversely, in contexts that require nuanced judgment—education, counseling, hospitality—designers should prioritize explainability, user‑in‑the‑loop decision support, and mechanisms that allow users to retain agency, rather than relying on the robot’s autonomous recommendations.
In sum, the paper contributes a novel behavioral metric for robot trust, demonstrates a clear functional‑social split in conformity, and shows that individual predispositions and interaction framing play a minor role compared to task characteristics. These insights advance our theoretical understanding of trust in human‑robot interaction and provide concrete guidance for building robots that are accepted both functionally and socially.