"Having Lunch Now": Understanding How Users Engage with a Proactive Agent for Daily Planning and Self-Reflection
Conversational agents have been studied as tools to scaffold planning and self-reflection for productivity and well-being. While prior work has demonstrated positive outcomes, we still lack a clear understanding of what drives these results and how users behave and communicate with agents that act as coaches rather than assistants. Such understanding is critical for designing interactions in which agents foster meaningful behavioral change. We conducted a 14-day longitudinal study with 12 participants using a proactive agent that initiated regular check-ins to support daily planning and reflection. Our findings reveal diverse interaction patterns: participants accepted or negotiated suggestions, developed shared mental models, reported progress, and at times resisted or disengaged. We also identified problematic aspects of the agent’s behavior, including rigidity, premature turn-taking, and overpromising. Our work contributes to understanding how people interact with a proactive, coach-like agent and offers design considerations for facilitating effective behavioral change.
💡 Research Summary
The paper presents a 14-day longitudinal study of a proactive, LLM‑driven coaching chatbot named PITCH that initiates twice‑daily check‑ins to support graduate students’ daily planning and evening self‑reflection. Twelve participants generated 336 conversational sessions (3,181 turns), which the authors analyzed using a mixed‑methods approach: a codebook‑based thematic analysis to surface patterns around planning, reflection, cooperation, and breakdown, and a dialogue‑act annotation to quantify user and system behaviors.
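As a rough illustration of the dialogue‑act quantification step, the sketch below tallies act frequencies per speaker. Both the turn data and the act labels are invented for illustration; they are not the paper's actual annotation scheme.

```python
from collections import Counter

# Hypothetical annotated turns: (speaker, dialogue-act label).
# Labels are illustrative only, not the paper's coding scheme.
turns = [
    ("agent", "propose_plan"),
    ("user", "accept"),
    ("agent", "ask_reflection"),
    ("user", "report_progress"),
    ("user", "reject"),
]

# Quantify user and system behaviors by counting act frequencies per speaker.
counts = Counter(turns)
for (speaker, act), n in counts.most_common():
    print(f"{speaker:>5}  {act:<16} {n}")
```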
The study addresses two research questions: (RQ1) how users engage with a proactive coaching agent, and (RQ2) which conversational behaviors of the agent lead to interaction breakdowns. Findings reveal that users are not passive recipients; they actively shape the dialogue by (1) accepting suggestions when they align with personal goals, (2) negotiating or modifying proposals to fit their context, and (3) outright rejecting or disengaging when the agent’s input feels irrelevant or intrusive. This negotiation behavior demonstrates that users treat the agent as a collaborative partner rather than a simple tool.
A second major insight is the emergence of shared mental models. When the agent remembered prior commitments, referenced earlier reflections, or adapted its tone based on user mood, participants reported higher trust, perceived accountability, and sustained engagement. Conversely, three agent‑centric issues repeatedly triggered breakdowns: (a) rigidity—relying on fixed scripts that ignored real‑time context, (b) premature turn‑taking—pushing the conversation forward before the user finished speaking or before the appropriate moment, and (c) overpromising—making suggestions that were unrealistic or beyond the system’s capabilities. In these moments, users either stalled the conversation, responded with dismissive language, or stopped interacting altogether.
Social perception played a pivotal role: participants who anthropomorphized the chatbot (seeing it as a social entity) engaged in deeper self‑reflection and were more forgiving of minor errors. Those who viewed it strictly as a utility were quick to abandon the interaction after a single failure. The authors therefore argue that proactive agents must balance initiative with deference to user control, offering “escape hatches” that let users pause, re‑frame, or terminate the dialogue without penalty.
Design implications derived from the data include: (1) implement context‑aware turn management that waits for user signals before advancing, (2) constrain the agent’s promises to achievable, concrete actions, (3) maintain a lightweight memory of past interactions to nurture shared mental models, and (4) provide explicit mechanisms for users to correct misunderstandings or opt out of a check‑in.
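To make these four implications concrete, here is a minimal Python sketch of one way they might translate into agent logic. Everything in it (the `CheckInAgent` and `Memory` classes, the opt-out keywords) is a hypothetical illustration under assumed requirements, not PITCH's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Memory:
    """Lightweight memory of past commitments (implication 3)."""
    commitments: list[tuple[date, str]] = field(default_factory=list)

    def recall_latest(self) -> str | None:
        return self.commitments[-1][1] if self.commitments else None


class CheckInAgent:
    """Hypothetical check-in loop; all names and keywords are illustrative."""

    OPT_OUT = {"skip", "not now", "stop"}  # explicit opt-out path (implication 4)

    def __init__(self) -> None:
        self.memory = Memory()

    def start_checkin(self) -> None:
        # Reference a prior commitment to nurture a shared mental model.
        prior = self.memory.recall_latest()
        if prior:
            print(f"Agent: Last time you planned to '{prior}'. How did it go?")
        else:
            print("Agent: Ready to plan your day? (say 'skip' to opt out)")

        # Context-aware turn management (implication 1): block on an explicit
        # user turn instead of advancing on a timer or a fixed script.
        reply = input("You: ").strip().lower()
        if reply in self.OPT_OUT:
            print("Agent: No problem, I'll check in later.")  # no penalty
            return

        # Constrain promises to concrete actions (implication 2): store the
        # user's own words and commit only to asking about them later.
        self.memory.commitments.append((date.today(), reply))
        print(f"Agent: Noted. I'll ask about '{reply}' at this evening's reflection.")


if __name__ == "__main__":
    CheckInAgent().start_checkin()
```

The opt-out check runs before anything is stored, mirroring the paper's point that users should be able to decline a check-in without penalty.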
Overall, the work contributes (i) a rich, anonymized dataset of real‑world proactive coaching conversations, (ii) a taxonomy of user engagement behaviors and agent‑induced breakdowns, and (iii) concrete guidelines for building more adaptive, socially attuned proactive conversational agents that can effectively support productivity and well‑being without causing frustration or disengagement. Future research should test these guidelines across diverse occupational settings, explore multimodal cues (e.g., voice, gesture) for richer grounding, and investigate long‑term behavioral outcomes beyond the two‑week study window.