Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

How might messages about large language models (LLMs) found in public discourse influence the way people think about and interact with these models? To explore this question, we randomly assigned participants (N = 470) to watch short informational videos presenting LLMs as either machines, tools, or companions – or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. We found that participants who watched video messages presenting LLMs as companions reported believing that LLMs more fully possessed these capacities than did participants in other groups. In a follow-up study (N = 604), we replicated these findings and found nuanced effects of these videos on people’s reliance on LLM-generated responses when seeking out factual information. Together, these studies suggest that messages about LLMs – beyond technical advances – may shape what people believe about these systems and how they rely on LLM-generated responses.


💡 Research Summary

This paper investigates how different public messages about large language models (LLMs) shape people’s beliefs about the mental capacities of these systems and how they rely on them. The authors created three short (≈5‑minute) informational videos that framed LLMs as (1) machines, (2) tools, or (3) companions, drawing on Dennett’s hierarchy of physical, design, and intentional stances. In a pre‑registered between‑subjects experiment (Study 1, N = 470), participants were randomly assigned to watch one of the videos or no video at all, then completed a survey measuring attributions of 40 mental capacities (e.g., intentions, memory, fatigue) on a 1‑7 Likert scale, as well as secondary attitudes such as trust, confidence, and overall feelings toward LLMs.
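For concreteness, a minimal scoring sketch is shown below; the item names and the agency/experience grouping are illustrative placeholders assumed for this example, not the paper's actual questionnaire materials.

```python
# Hypothetical scoring sketch for a 40-item mental-capacity questionnaire.
# The item names and the agency/experience grouping are illustrative only.
import numpy as np

def score_participant(ratings, agency_items, experience_items):
    """Average 1-7 Likert ratings into agency and experience subscale scores."""
    agency = float(np.mean([ratings[item] for item in agency_items]))
    experience = float(np.mean([ratings[item] for item in experience_items]))
    return {"agency": agency, "experience": experience}

# Made-up ratings for a single participant.
ratings = {"intentions": 5, "planning": 4, "memory": 6,
           "fatigue": 2, "pain": 1, "joy": 3}
print(score_participant(ratings,
                        agency_items=["intentions", "planning", "memory"],
                        experience_items=["fatigue", "pain", "joy"]))
```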

The results showed that participants exposed to the “companions” framing attributed significantly higher levels of both agency‑related and experience‑related capacities to LLMs than those in the machine, tool, or control conditions. Effect sizes were medium (Cohen’s d ≈ 0.6) and the differences remained robust after correcting for multiple comparisons. The machine and tool framings also influenced secondary attitudes (e.g., trust, overall affect) but did not affect mental‑capacity attributions.
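As a rough illustration of this kind of analysis (not the authors' actual pipeline), the sketch below computes Cohen's d for a hypothetical companion-versus-control contrast on each capacity item and applies a Holm-Bonferroni correction across items; the data are simulated and the specific correction method is an assumption.

```python
# Simulated illustration of per-item effect sizes and multiple-comparison
# correction; this is not the paper's data or analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_items, n_per_group = 40, 120
companion = rng.normal(4.5, 1.2, size=(n_per_group, n_items))  # simulated ratings
control = rng.normal(3.8, 1.2, size=(n_per_group, n_items))

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

d_values = np.array([cohens_d(companion[:, j], control[:, j]) for j in range(n_items)])
p_values = np.array([stats.ttest_ind(companion[:, j], control[:, j]).pvalue
                     for j in range(n_items)])

# Holm-Bonferroni: step down through sorted p-values, stopping at the first failure.
alpha, m = 0.05, n_items
significant = np.zeros(m, dtype=bool)
for rank, idx in enumerate(np.argsort(p_values)):
    if p_values[idx] <= alpha / (m - rank):
        significant[idx] = True
    else:
        break

print(f"median d = {np.median(d_values):.2f}, "
      f"items surviving correction: {significant.sum()}")
```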

A follow‑up study (Study 2, N = 604) replicated the mental‑capacity findings and extended the investigation to behavioral reliance. After watching the same videos, participants were presented with two LLM‑generated answers to factual questions: one answer was logically consistent but factually incorrect, and the other was factually correct but logically inconsistent. Participants then indicated which answer they would rely on. The “machine” framing reduced participants’ willingness to rely on the logically inconsistent answer, suggesting that emphasizing the mechanistic nature of LLMs can promote more critical, vigilant use. The “companion” framing did not increase reliance on flawed answers, despite boosting mental‑capacity attributions.

The authors discuss the implications for AI literacy, safety, and design. Framing LLMs as companions may foster anthropomorphic expectations and potentially lead to over‑trust or over‑reliance in real‑world applications, whereas framing them as machines may encourage users to maintain a skeptical stance toward outputs that lack logical coherence. The paper acknowledges limitations, including the online survey context, the exclusive use of video as a communication medium, and the need to explore cultural, individual‑difference, and longitudinal effects.

Overall, the study provides empirical evidence that simple narrative framing—independent of any technical changes to the system—can causally shape public perceptions of AI’s mental attributes and influence how users interact with LLMs. This insight is valuable for policymakers, educators, and designers seeking to promote responsible AI use and to mitigate risks associated with anthropomorphizing advanced language models.

