Effects of Faults, Experience, and Personality on Trust in a Robot Co-Worker

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

To design trustworthy robots, we need to understand the factors that influence trust: people's attitudes, experience, and characteristics; the robot's physical design, reliability, and performance; and a task's specification and the circumstances under which it is performed, e.g., at leisure or under time pressure. As robots are used for a wide variety of tasks and applications, robot designers ought to be provided with evidence and guidance to inform their decisions, so as to achieve safe, trustworthy, and efficient human-robot interactions. In this work, the factors influencing trust in a collaborative manufacturing scenario are studied by conducting an experiment in which participants assembled and then disassembled a physical object together with a real robot. Objective and subjective measures were employed to evaluate the development of trust under faulty and non-faulty robot conditions, as well as the effects of previous experience with robots and of personality traits. Our findings highlight differences from other, more social, scenarios with robotic assistants (such as a home care assistant): the condition (faulty or not) does not have a significant impact on the human's perception of the robot in terms of human-likeness, likeability, trustworthiness, or even competence. However, personality and previous experience do have an effect on how the robot is perceived by participants, although that effect is relatively small.


💡 Research Summary

The paper investigates how trust in a robotic co‑worker is shaped in a collaborative manufacturing setting, focusing on three human‑centric factors: the presence of robot faults, participants’ prior experience with robots, and personality traits. Using a Baxter robot, participants assembled and then disassembled a lightweight plastic race‑car model while receiving step‑by‑step guidance from the robot. Four experimental conditions were created: a fault‑free condition (D) and three fault conditions (A, B, C) that introduced increasingly severe cognitive errors (e.g., inappropriate action suggestions). Time pressure was added to simulate a realistic industrial environment.

Trust was measured both objectively (task success, completion time, error counts) and subjectively through pre‑ and post‑experiment questionnaires assessing perceived human‑likeness, likability, trustworthiness, and competence. Participants also completed a prior‑experience questionnaire and a Big‑Five personality inventory (extraversion, agreeableness, conscientiousness, neuroticism, openness).

Four hypotheses guided the study: (1) robot condition (faulty vs. non‑faulty) would affect perception and performance; (2) prior robot/technology experience would affect perception and performance; (3) personality traits would affect perception and performance; (4) task type (industrial vs. social) would modulate perception.

Results contradicted hypothesis 1: statistical analysis showed no significant differences in any of the subjective trust dimensions between fault‑free and faulty conditions, nor in objective performance metrics. The authors interpret this as evidence that in goal‑oriented, high‑stakes manufacturing tasks, users are more tolerant of minor cognitive faults because task efficiency and safety dominate trust judgments.
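The summary does not name the exact statistical tests the authors ran. As a minimal, purely illustrative sketch of the kind of between-condition comparison described above, the snippet below runs a two-sided permutation test on the difference in mean trust ratings between a fault-free and a faulty group. Both the ratings and the choice of test are assumptions for illustration, not the study's data or method.

```python
import random


def mean(xs):
    return sum(xs) / len(xs)


def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Returns the fraction of label shuffles whose absolute mean
    difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_perm


# Hypothetical 5-point trust ratings (NOT the study's data)
fault_free = [4, 5, 4, 4, 3, 5, 4, 4]
faulty = [4, 4, 3, 5, 4, 4, 3, 4]

p = permutation_test(fault_free, faulty)
print(f"p = {p:.3f}")  # a large p would mirror the paper's null result
```

A permutation test is used here only because it needs no distributional assumptions and no external libraries; a t-test or Mann-Whitney U test on the same data would serve the same illustrative purpose.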

Hypothesis 2 received partial support. Participants with higher self‑reported robot experience rated the robot more favorably across all trust dimensions and were less negatively impacted by faults. Their objective performance (completion time, error rate) was marginally better, though not always statistically significant. This suggests that familiarity with robotic systems calibrates expectations and reduces the perceived severity of errors.

Hypothesis 3 was supported. Extraversion and openness positively correlated with higher ratings of human‑likeness and likability, while higher neuroticism correlated with lower trust scores, especially under faulty conditions. Conscientiousness showed a modest positive link to perceived competence. These findings align with prior work indicating that personality shapes how users interpret robot behavior and error tolerance.
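Correlations like those reported above are typically quantified with Pearson's r. As a self-contained sketch, the snippet below computes r between a Big-Five trait score and a trust rating; the per-participant numbers are invented for illustration and are not the study's data.

```python
from math import sqrt


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical per-participant scores (NOT the study's data):
# extraversion on a 1-5 Big-Five scale, trust on a 1-5 rating
extraversion = [2.0, 3.5, 4.0, 2.5, 4.5, 3.0, 5.0, 1.5]
trust_rating = [3.0, 3.5, 4.5, 3.0, 4.0, 3.5, 4.5, 2.5]

r = pearson_r(extraversion, trust_rating)
print(f"r = {r:.2f}")  # positive r: higher extraversion, higher trust
```

A value of r near +1 or -1 indicates a strong linear relationship; the direction of the reported correlations (positive for extraversion and openness, negative for neuroticism) is what the sign of r captures.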

Hypothesis 4 was not directly tested within the experiment but discussed by comparing the present results with earlier studies that used a home‑care robot (Care‑O‑Bot). In those social scenarios, faults dramatically reduced trust, whereas in the present industrial scenario they did not. The authors argue that task context (time pressure, criticality, performance focus) fundamentally alters the weight users assign to robot errors.

The discussion emphasizes design implications: in industrial co‑workers, ensuring consistent performance and providing clear feedback may be more critical than eliminating all minor cognitive faults. Designers should also consider user profiling—tailoring interaction styles or error‑handling strategies based on users’ experience levels and personality profiles could enhance trust.

Limitations include a relatively small, homogenous participant pool, the focus on only cognitive faults (excluding physical manipulation errors), and the short‑term nature of the interaction. Future work is suggested to explore long‑term trust development, a broader range of fault types, and cross‑cultural samples.

In conclusion, the study reveals that in collaborative manufacturing, robot faults do not significantly erode trust, whereas individual differences—particularly prior robot experience and personality—play a measurable role. These insights provide actionable guidance for engineers and managers seeking to integrate trustworthy robotic assistants into high‑performance industrial workflows.

