A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs

Large Language Models (LLMs) are increasingly used for mental health support, yet little is known about how people with mental health challenges engage with them, how they evaluate their usefulness, and what design opportunities they envision. We conducted 20 semi-structured interviews with people in the UK who live with mental health conditions and have used LLMs for mental health support. Through reflexive thematic analysis, we found that participants engaged with LLMs in conditional and situational ways: for immediacy, the desire for non-judgement, self-paced disclosure, cognitive reframing, and relational engagement. Simultaneously, participants articulated clear boundaries informed by prior therapeutic experience: LLMs were effective for mild-to-moderate distress but inadequate for crises, trauma, and complex social-emotional situations. We contribute empirical insights into the lived use of LLMs for mental health, highlight boundary-setting as central to their safe role, and propose design and governance directions for embedding them responsibly within care ecosystems.


💡 Research Summary

This paper investigates how people living with mental health conditions in the United Kingdom actually use large language models (LLMs) such as ChatGPT, Claude, Gemini, and Grok for informal emotional support. The authors conducted semi‑structured interviews with twenty participants with diagnosed mental health conditions who had previously turned to LLMs for support. Using reflexive thematic analysis, they identified two overarching themes: (1) conditional, situational engagement with LLMs, and (2) boundary‑setting that delineates the safe scope of AI‑mediated support.

The first theme reveals five distinct motivations for using LLMs. Users value immediacy—the ability to start a conversation at any hour without waiting for an appointment. They appreciate the non‑judgmental stance of the model, which feels safer than disclosing to a human who might judge or stigmatize them. The self‑paced disclosure affordance lets users control the speed, depth, and timing of their narrative, effectively turning the interaction into a personal diary. Participants also report that LLMs help with cognitive reframing: the model’s questions or paraphrases prompt users to reinterpret negative thoughts, mirroring basic cognitive behavioural therapy (CBT) techniques. Finally, repeated interactions foster a sense of relational engagement; users describe the AI as a “digital companion” that offers a feeling of connection, even though they recognize it is not a real person.

The second theme captures how participants draw clear boundaries around AI use, informed by prior experience with human therapists. They deem LLMs useful for mild‑to‑moderate distress—everyday worries, intrusive thoughts, loneliness, or low‑level anxiety. However, they consistently assert that LLMs are inadequate for crises such as suicidal ideation, acute trauma, or complex social‑emotional dilemmas that require nuanced judgment, empathy, and professional accountability. Users also voice concerns about privacy, data handling, and hallucinations (fabricated or inaccurate responses) that could mislead vulnerable individuals. Consequently, they often feel the burden of self‑regulation, monitoring the conversation for signs of risk and deciding when to disengage and seek human help.

From these findings the authors derive concrete design and governance recommendations. First, transparency is essential: the system should clearly communicate its limitations, data sources, and when a “human‑in‑the‑loop” hand‑off is advisable. Second, safety guardrails—such as automated crisis detection and one‑click escalation to emergency services—should be built into the interface. Third, tools that let users set and adjust personal risk thresholds, review or delete conversation logs, and explicitly toggle “high‑risk mode” can empower agency while reducing the hidden cost of self‑monitoring. Fourth, LLMs should be positioned as complementary components within existing care ecosystems, with mechanisms for clinicians to review AI‑generated content and provide feedback. Finally, regulatory clarity is needed: companies must assume responsibility for safety testing, post‑deployment monitoring, and clear liability frameworks, while policymakers should develop standards for AI‑driven mental‑health tools.
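To make the guardrail recommendations concrete, the following Python sketch illustrates how crisis detection, escalation, user‑adjustable thresholds, and a transparency notice might be layered around an LLM chat endpoint. It is purely illustrative and not from the paper: all names (GuardrailConfig, crisis_score, RISK_KEYWORDS) are hypothetical, and a production system would use a validated risk classifier and clinically reviewed escalation flows rather than keyword matching.

```python
# Illustrative sketch of a guardrail layer around an LLM chat endpoint.
# Hypothetical names throughout: the paper states design goals
# (crisis detection, escalation, user-set thresholds, transparency),
# not an implementation.

from dataclasses import dataclass
from typing import Callable

# A real system would use a validated risk classifier, not keyword matching.
RISK_KEYWORDS = {"suicide", "kill myself", "self-harm", "overdose"}

@dataclass
class GuardrailConfig:
    risk_threshold: float = 0.5   # user-adjustable escalation sensitivity
    high_risk_mode: bool = False  # explicit user toggle for stricter checks

def crisis_score(message: str) -> float:
    """Crude lexical risk signal in [0, 1]."""
    text = message.lower()
    hits = sum(1 for kw in RISK_KEYWORDS if kw in text)
    return min(hits / 2.0, 1.0)

def respond(message: str, cfg: GuardrailConfig, llm: Callable[[str], str]) -> str:
    threshold = 0.25 if cfg.high_risk_mode else cfg.risk_threshold
    if crisis_score(message) >= threshold:
        # Escalation: surface human crisis resources instead of an AI reply.
        return ("It sounds like you may be in crisis. I am not able to provide "
                "the help you need. Please contact emergency services (999) or "
                "the Samaritans (116 123, UK).")
    reply = llm(message)
    # Transparency: attach a standing reminder of the system's limits.
    return reply + "\n\n[This is an AI assistant, not a substitute for professional care.]"
```

The point of the sketch is architectural rather than algorithmic: escalation and transparency live outside the model, in a layer the user can inspect and configure, shifting some of the self‑monitoring burden that participants described from the user onto the system.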

The paper acknowledges several limitations. The sample size is modest and geographically confined to the UK, which may limit cross‑cultural generalizability. Data rely on self‑reported experiences rather than objective usage logs, and the rapid evolution of LLM capabilities means findings may shift as newer models become available. Future work should incorporate longitudinal tracking, broader demographic representation, and controlled trials that embed LLMs within clinical pathways to assess therapeutic outcomes and safety in real‑world settings.

In sum, the study contributes three key insights to the HCI and digital mental‑health literature: (1) it grounds LLM use in everyday “situated care work” rather than passive adoption, (2) it reframes boundary‑setting as a form of competent, reflective user agency, and (3) it offers a roadmap for designing trustworthy, responsibly governed AI companions that augment—not replace—human mental‑health services.

