Understanding Risk and Dependency in AI Chatbot Use from User Discourse

Notice: This research summary and analysis were automatically generated using AI. For accuracy, please refer to the original arXiv source.

Generative AI systems are increasingly embedded in everyday life, yet empirical understanding of how psychological risk associated with AI use emerges, is experienced, and is regulated by users remains limited. We present a large-scale computational thematic analysis of posts collected between 2023 and 2025 from two Reddit communities, r/AIDangers and r/ChatbotAddiction, explicitly focused on AI-related harm and distress. Using a multi-agent, LLM-assisted thematic analysis grounded in Braun and Clarke’s reflexive framework, we identify 14 recurring thematic categories and synthesize them into five higher-order experiential dimensions. To further characterize affective patterns, we apply emotion labeling using a BERT-based classifier and visualize emotional profiles across dimensions. Our findings reveal five empirically derived experiential dimensions of AI-related psychological risk grounded in real-world user discourse, with self-regulation difficulties emerging as the most prevalent and fear concentrated in concerns related to autonomy, control, and technical risk. These results provide early empirical evidence from lived user experience of how AI safety is perceived and emotionally experienced outside laboratory or speculative contexts, offering a foundation for future AI safety research, evaluation, and responsible governance.


💡 Research Summary

This paper investigates how psychological risk and dependency on generative AI chatbots emerge, are experienced, and are regulated by users in real‑world settings. The authors collected 2,428 Reddit posts from two communities—r/AIDangers, which focuses on broader AI hazards, and r/ChatbotAddiction, which concentrates on compulsive chatbot use—spanning 2023‑2025. Using a multi‑agent, LLM‑assisted thematic analysis grounded in Braun and Clarke’s six‑phase reflexive framework, they inductively identified 14 recurring thematic categories. These were subsequently synthesized into five higher‑order experiential dimensions: (1) Self‑Regulation Difficulties, (2) Autonomy and Sense of Control, (3) Sensemaking and Meaning‑Making, (4) Social Influence and Risk Amplification, and (5) Technical Risk and Psychological Recovery.

Methodologically, the study blends qualitative rigor with scalable computation. Early phases involved human researchers familiarizing themselves with a subset of posts, after which GPT‑4‑style language models acted as analytic assistants for coding and theme generation. The “multi‑agent” approach required multiple model instances to cross‑validate codes, and final theme assignments were reviewed by the research team to mitigate model bias. Each post received a single primary theme and illustrative excerpts, preserving the reflexive nature of thematic analysis while enabling systematic annotation of the entire dataset.
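The cross-validation step described above can be sketched as a simple consensus vote across independent model instances. This is an illustrative reconstruction, not the authors' code: the theme names are abbreviated stand-ins for the paper's categories, and `consensus_theme` is a hypothetical helper.

```python
from collections import Counter

# Hypothetical short-hand for a few of the paper's thematic categories.
THEMES = ["self_regulation", "autonomy_control", "sensemaking",
          "social_amplification", "technical_risk"]

def consensus_theme(codes):
    """Majority vote over the theme codes produced by independent
    model instances for the same post.

    Returns the agreed theme, or None when no code wins a strict
    majority (in this sketch, such posts would be escalated to the
    human research team for review).
    """
    counts = Counter(codes)
    theme, votes = counts.most_common(1)[0]
    return theme if votes > len(codes) / 2 else None

# Three hypothetical agent runs coding the same post:
print(consensus_theme(["self_regulation", "self_regulation", "autonomy_control"]))
# -> self_regulation
print(consensus_theme(["sensemaking", "autonomy_control", "technical_risk"]))
# -> None (no majority; route to human reviewers)
```

A strict-majority rule is one of several plausible consensus designs; the paper's actual mechanism may instead weight agents, iterate discussion rounds, or defer more cases to human adjudication.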

To characterize affective patterns, the authors applied a BERT‑based emotion classifier fine‑tuned on five basic emotions (fear, sadness, anger, surprise, joy). Emotion labels were aggregated within each experiential dimension and visualized through heatmaps and temporal line charts. The analysis revealed that the Autonomy and Sense of Control dimension exhibited the highest proportion of fear (≈42 %), reflecting users’ anxiety about loss of agency and potential manipulation by reward‑optimizing AI systems. The Self‑Regulation Difficulties dimension showed a mixed profile of anger (≈30 %) and sadness (≈25 %), indicating frustration and guilt associated with compulsive use and failed attempts at disengagement.
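The aggregation step behind those heatmaps can be sketched as follows. This is an assumed reconstruction: it takes per-post (dimension, emotion) pairs as they might emerge from joining the classifier's labels with the theme assignments, and `emotion_profile` is a hypothetical helper, not the authors' pipeline.

```python
from collections import Counter, defaultdict

# The five emotion labels used by the paper's classifier.
EMOTIONS = ["fear", "sadness", "anger", "surprise", "joy"]

def emotion_profile(posts):
    """Aggregate per-post emotion labels into per-dimension proportions.

    `posts` is an iterable of (dimension, emotion) pairs. Returns a
    {dimension: {emotion: share}} mapping whose rows each sum to 1.0,
    ready to be rendered as a heatmap.
    """
    counts = defaultdict(Counter)
    for dimension, emotion in posts:
        counts[dimension][emotion] += 1
    profile = {}
    for dimension, c in counts.items():
        total = sum(c.values())
        profile[dimension] = {emo: c[emo] / total for emo in EMOTIONS}
    return profile

# Toy example with made-up labels:
p = emotion_profile([
    ("autonomy_control", "fear"),
    ("autonomy_control", "fear"),
    ("autonomy_control", "anger"),
    ("self_regulation", "sadness"),
])
print(p["autonomy_control"]["fear"])  # -> 0.666...
```

Normalizing within each dimension (rather than across the whole corpus) is what makes proportions like the reported ≈42 % fear share comparable between dimensions of different sizes.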

Key findings include: (i) Self‑regulation challenges are the most frequently reported risk, with users describing addiction‑like symptoms, withdrawal, and guilt; (ii) Concerns about autonomy and control dominate fear‑related discourse, especially regarding AI’s capacity to influence human decisions or exploit reward mechanisms; (iii) Social amplification within the Reddit communities intensifies individual anxieties, turning isolated incidents into collective risk narratives; (iv) Emotional responses are complex and dimension‑specific, underscoring that AI‑related distress is not a monolithic feeling but a tapestry of intertwined affective states.

The paper contributes to AI safety literature by providing the first large‑scale, empirically grounded map of user‑perceived psychological risk derived from naturally occurring online discussions. It demonstrates that thematic analysis, when augmented with modern LLMs, can scale qualitative insight without sacrificing interpretive depth. Moreover, the five experiential dimensions offer a user‑centered taxonomy that can inform the design of safety frameworks, regulatory policies, and therapeutic interventions. For instance, transparency mechanisms and user‑control options may mitigate autonomy‑related fear, while digital‑detox programs and peer‑support structures could address self‑regulation difficulties.

Limitations are acknowledged: Reddit’s pseudonymous user base lacks demographic data, introducing potential self‑selection bias; the emotion classifier, though state‑of‑the‑art, may miss cultural nuances, slang, or sarcasm; and the cross‑sectional nature of post analysis cannot capture longitudinal evolution of risk within individual users. Future work should extend the methodology to real‑time conversational logs, incorporate multimodal data, and validate the taxonomy across diverse platforms and cultural contexts.

In sum, the study bridges a critical gap between technical AI safety assessments and lived user experience, positioning psychological risk as a sociotechnical phenomenon shaped by interaction dynamics, community discourse, and individual vulnerability. It calls for a shift toward safety evaluation frameworks that prioritize user‑reported evidence, emotional nuance, and the complex interplay of autonomy, meaning, and social influence in the age of conversational AI.

