Chaplains' Reflections on the Design and Usage of AI for Conversational Care


Despite growing recognition that responsible AI requires domain knowledge, current work on conversational AI primarily draws on clinical expertise that prioritises diagnosis and intervention. However, many everyday emotional support needs arise in non-clinical contexts and therefore require different conversational approaches. We examine how chaplains, who guide individuals through personal crises, grief, and reflection, perceive and engage with conversational AI. We recruited eighteen chaplains to build AI chatbots. While some chaplains viewed chatbots with cautious optimism, the majority pointed to limitations in chatbots’ ability to support everyday well-being. Our analysis reveals how chaplains perceive their pastoral care duties and areas where AI chatbots fall short, along the themes of Listening, Connecting, Carrying, and Wanting. These themes resonate with the idea of attunement, recently highlighted as a relational lens for understanding the delicate experiences care technologies provide. This perspective informs chatbot design aimed at supporting well-being in non-clinical contexts.


💡 Research Summary

This paper investigates how chaplains—professional caregivers who provide non‑clinical emotional support—perceive and engage with conversational AI. Recognizing that most current AI‑chatbot research draws on clinical expertise focused on diagnosis and treatment, the authors shift the lens to everyday well‑being contexts such as grief, crisis, and personal reflection, where the relational qualities of listening, presence, and shared vulnerability are paramount.

Methodology
Eighteen chaplains affiliated with universities across Denmark, Finland, Norway, and Sweden were recruited. Using the GPT Builder platform (a web‑based interface for creating custom GPT‑based chatbots), each participant designed a chatbot intended for a fictional student profile. The design process involved specifying the bot’s persona, purpose, conversation starters, and optional files or web‑search capabilities. Participants then interacted with their bots in a preview mode, reflecting on the experience while the researchers recorded the sessions and conducted semi‑structured interviews. This hands‑on approach ensured that chaplains experienced both the design and the everyday use of contemporary large‑language‑model (LLM) chatbots.
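The participants worked entirely through GPT Builder’s web interface, but the same configuration concepts—a persona, a stated purpose, and conversation starters—can be illustrated programmatically. The sketch below is a hypothetical analogue using the OpenAI Python SDK, not a reconstruction of the study’s actual bots; the persona text, model name, and starters are illustrative assumptions.

```python
# Hypothetical analogue of a chaplain-authored GPT Builder configuration,
# expressed with the OpenAI Python SDK. The persona text, model name, and
# conversation starters are illustrative assumptions, not the study's bots.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona and purpose, as GPT Builder's "Instructions" field would hold them.
SYSTEM_PROMPT = (
    "You are a gentle, non-judgmental companion for a university student. "
    "Your purpose is to listen, reflect feelings back, and avoid rushing "
    "to advice or solutions unless the student asks for them."
)

# GPT Builder's "conversation starters": suggested openers shown to the user.
CONVERSATION_STARTERS = [
    "How has your week been, really?",
    "Is there something weighing on you today?",
]

def reply(user_message: str) -> str:
    """Send one user turn to the model under the configured persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper does not specify one
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # A starter, once clicked by the user, is sent as an ordinary user turn.
    print(reply(CONVERSATION_STARTERS[0]))
```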

Findings – Four Core Themes

  1. Listening – Chaplains emphasized that genuine, unconditional listening is the foundation of pastoral care. They noted that the bots they built could only respond to queries; they lacked any capacity to show they were listening, to mirror emotions, or to hold space without immediately offering solutions. This points to a gap in LLMs’ ability to perform reflective, empathic listening beyond surface‑level text generation.

  2. Connecting – Human‑to‑human care involves a sense of belonging, physical and emotional proximity, and non‑verbal cues. Chaplains reported that chatbots failed to convey this “connection,” as they are limited to typed responses and cannot replicate tone of voice, eye contact, or embodied presence. The authors link this shortfall to the need for multimodal interfaces (voice, video, haptics) and richer social‑presence cues.

  3. Carrying – In pastoral work, caregivers “carry” the emotional burden alongside the person, offering ongoing support and a sense of shared responsibility. Chaplains argued that bots remain transactional, providing information but not shouldering emotional weight or demonstrating sustained responsibility over time. This suggests design requirements for continuity, memory, and accountability mechanisms in conversational agents (a minimal sketch of such mechanisms appears after this list).

  4. Wanting – Chaplains described bots as “wanting” in two senses: too eager, overwhelming users with rapid, overly curious questions, and lacking, falling short of expressing genuine hope and aspiration. The speed and style of bot interactions often clash with the slower, rhythm‑sensitive pacing of human emotional disclosure. This highlights the importance of attuning response timing and question framing to the user’s affective state.
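Themes 3 and 4 translate most directly into mechanisms. The sketch below is a minimal illustration rather than anything proposed in the paper: it pairs a per-person memory that persists across sessions (continuity, for “Carrying”) with a pacing guard that caps the number of questions per reply (tempering the rapid-fire questioning under “Wanting”). All names and file paths are hypothetical.

```python
# Minimal sketch (not from the paper) of two design responses to the
# "Carrying" and "Wanting" themes: a per-person memory that persists
# across sessions, and a pacing guard that caps questions per reply.
import json
from pathlib import Path

class SessionMemory:
    """Stores short summaries of past conversations so a bot can
    demonstrate continuity ("last time you mentioned your exams...")."""

    def __init__(self, store: Path = Path("memory.json")):
        self.store = store
        self.data = json.loads(store.read_text()) if store.exists() else {}

    def recall(self, person_id: str) -> list[str]:
        """Return summaries of earlier sessions with this person."""
        return self.data.get(person_id, [])

    def remember(self, person_id: str, summary: str) -> None:
        """Append a session summary and persist it to disk."""
        self.data.setdefault(person_id, []).append(summary)
        self.store.write_text(json.dumps(self.data, indent=2))

def pace(reply: str, max_questions: int = 1) -> str:
    """Trim a drafted reply so it asks at most `max_questions` questions,
    countering the rapid-fire questioning chaplains found overwhelming."""
    out, questions = [], 0
    for sentence in reply.replace("?", "?\n").splitlines():
        if sentence.strip().endswith("?"):
            questions += 1
            if questions > max_questions:
                continue  # drop surplus questions rather than pile them on
        out.append(sentence.strip())
    return " ".join(s for s in out if s)
```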

Theoretical Alignment – Attunement
The four themes map onto the emerging concept of attunement—the idea that AI should finely adjust its behavior to the user’s emotional context, rather than merely delivering content. Attunement encompasses listening, presence, shared responsibility, and pacing. The authors argue that achieving attunement requires: (a) explicit prompts that embed empathy and relational goals; (b) integration of affect‑recognition models to detect sentiment, tone, and pacing; (c) multimodal feedback loops (e.g., voice tone, facial expression) to convey presence; and (d) long‑term memory structures that allow the bot to remember past interactions and demonstrate continuity.
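As a concrete reading of requirements (a) and (b), the following minimal sketch embeds empathy and relational goals in the system prompt and adjusts pacing based on a crude keyword heuristic standing in for a real affect-recognition model. The prompt wording, cue list, and function names are all assumptions for illustration; a deployed system would substitute a trained sentiment or affect classifier.

```python
# Illustrative sketch of attunement requirements (a) and (b): an
# empathy-framed system prompt, plus a crude keyword-based affect check
# standing in for a trained affect-recognition model.

ATTUNED_PROMPT = (
    "Listen first. Reflect the person's feelings in your own words before "
    "anything else. Ask at most one gentle question per reply, and never "
    "offer solutions unless explicitly invited."
)

# Placeholder distress cues; a real system would use an affect classifier.
DISTRESS_CUES = {"grief", "lost", "alone", "hopeless", "can't cope", "scared"}

def detect_distress(message: str) -> bool:
    """Flag messages containing distress cues (placeholder heuristic)."""
    lowered = message.lower()
    return any(cue in lowered for cue in DISTRESS_CUES)

def attune_instructions(message: str) -> str:
    """Adapt the system prompt to the user's apparent affective state,
    slowing the pace when distress is detected (requirement (b))."""
    if detect_distress(message):
        return ATTUNED_PROMPT + (
            " The person seems distressed: keep replies short, "
            "acknowledge the feeling, and do not ask any questions yet."
        )
    return ATTUNED_PROMPT
```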

Limitations
The sample is geographically and culturally narrow (Nordic university chaplains), limiting generalizability to other religious, cultural, or secular caregiving contexts. GPT Builder’s current feature set restricts the sophistication of the bots (e.g., limited file handling, no real‑time voice or video). Moreover, the study stops at design and immediate interaction; the bots were not deployed in real‑world settings, so the actual impact on users’ well‑being remains untested.

Implications and Future Work
The study demonstrates that non‑clinical caregivers can surface design criteria that differ markedly from clinical perspectives. By foregrounding listening, connection, shared burden, and appropriate pacing, designers can move beyond “information‑delivery” chatbots toward agents that act as companions rather than diagnostic tools. Future research should explore multimodal prototypes, integrate affective computing for real‑time attunement, and conduct longitudinal field trials with end‑users to evaluate therapeutic outcomes and potential harms. Ultimately, the goal is to create AI chatbots that complement, rather than replace, human chaplains—serving as ethical, empathetic partners in everyday emotional support.


Comments & Academic Discussion

Loading comments...

Leave a Comment