Design Guidance Towards Addressing Over-Reliance on AI in Sensemaking


Sensemaking in collaborative work and learning is increasingly supported by GenAI systems; however, emerging evidence suggests that poorly designed GenAI systems tend to provide explicit instruction that groups passively follow, fostering over-reliance and eroding autonomous sensemaking. Group awareness tools (GATs) address this challenge through implicit guidance: rather than instructing groups on what to do, GATs externalize observable collaboration data through visualizations that reveal differences between group members. These differences create cognitive conflict, which triggers autonomous elaboration and discussion and thereby implicitly guides the emergence of autonomous sensemaking. Drawing on an initial literature search of existing GAT systems, this paper explores the design of GenAI-augmented GATs to support autonomous sensemaking in collaborative work and learning, presenting preliminary design principles for discussion.


💡 Research Summary

The paper addresses a growing concern in collaborative work and learning: generative AI (GenAI) systems often provide explicit, step‑by‑step instructions that groups follow passively, leading to over‑reliance and a loss of autonomous sensemaking. The authors argue that this problem can be mitigated by leveraging Group Awareness Tools (GATs), which traditionally offer implicit guidance by externalising observable collaboration data (e.g., contribution counts, interaction patterns) through visualisations that highlight differences among members. These differences create cognitive conflict, prompting groups to discuss, elaborate, and construct meaning on their own.

The central research question is whether GenAI can be integrated into GATs in a way that preserves this implicit guidance rather than turning the tool into another source of explicit instruction. To explore this, the authors conducted an initial literature review of existing GAT systems across the ACM Digital Library, IEEE Xplore, and Scopus, supplemented by backward snowballing. Their analysis identified three recurring design considerations that shape how GenAI should be incorporated.

Consideration 1: Where to Deploy GenAI
Structured collaboration metrics (e.g., number of edits, turn‑taking frequencies) are well‑served by rule‑based analytics; GenAI’s strength lies in interpreting unstructured artifacts such as discussion transcripts, document revisions, or multimodal recordings. The authors therefore propose hybrid architectures that combine deterministic, rule‑based pipelines for quantitative signals with large language model (LLM) components for qualitative interpretation. This avoids an end‑to‑end AI pipeline that could inadvertently produce prescriptive advice.
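
To make this concrete, the sketch below shows what such a hybrid pipeline might look like in Python. It is not the authors' implementation: the `llm.complete` client, field names, and prompt are hypothetical. Deterministic counters handle the structured signals, while the LLM is confined to a descriptive, non-prescriptive reading of the transcript.

```python
from dataclasses import dataclass

@dataclass
class AwarenessSignal:
    """One member's awareness data, mixing quantitative and qualitative parts."""
    member: str
    edit_count: int          # structured signal, counted deterministically
    turns_taken: int         # structured signal, counted deterministically
    discussion_summary: str  # unstructured signal, interpreted by the LLM

def count_edits(revision_log: list[dict], member: str) -> int:
    """Rule-based analytics: deterministic count of revisions per member."""
    return sum(1 for rev in revision_log if rev["author"] == member)

def count_turns(transcript: list[dict], member: str) -> int:
    """Rule-based analytics: deterministic count of speaking turns."""
    return sum(1 for turn in transcript if turn["speaker"] == member)

def interpret_discussion(transcript: list[dict], member: str, llm) -> str:
    """LLM component: qualitative reading of unstructured discussion.

    The prompt asks only for a description of demonstrated understanding,
    never for recommendations, so this stage cannot drift into the
    prescriptive advice the authors warn against.
    """
    utterances = "\n".join(t["text"] for t in transcript if t["speaker"] == member)
    prompt = (
        "Without giving any advice or recommendations, describe which "
        f"topics these utterances demonstrate understanding of:\n{utterances}"
    )
    return llm.complete(prompt)  # hypothetical chat-completion client

def build_signal(member: str, revision_log: list[dict],
                 transcript: list[dict], llm) -> AwarenessSignal:
    """Combine deterministic counts with the LLM's qualitative summary."""
    return AwarenessSignal(
        member=member,
        edit_count=count_edits(revision_log, member),
        turns_taken=count_turns(transcript, member),
        discussion_summary=interpret_discussion(transcript, member, llm),
    )
```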

Consideration 2: How to Present GenAI‑Generated Awareness
GATs succeed by surfacing differences that spark cognitive conflict. GenAI should augment, not replace, these visual cues. The paper illustrates a design where a traditional radar chart of self‑reported knowledge levels remains the primary visual, while GenAI's analysis of actual discussion content is encoded as background colour intensity on each axis. Darker shades indicate alignment between reported knowledge and demonstrated discussion, while lighter shades reveal misalignments. This secondary encoding preserves the original visual metaphor while adding a semantic layer that highlights otherwise invisible gaps, thereby deepening the conflict that drives autonomous sensemaking.
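
A minimal sketch of how this secondary encoding might be computed, assuming both the self-report and the GenAI assessment have been normalised to [0, 1]; the function name and scale are illustrative, not taken from the paper:

```python
def alignment_shade(self_reported: float, ai_assessed: float) -> float:
    """Map agreement between a self-reported knowledge level and the
    GenAI-assessed level from discussion content to a background
    intensity in [0, 1] for one radar-chart axis.

    1.0 -> darkest shade (report and discussion align);
    0.0 -> lightest shade (maximal mismatch, worth discussing).
    """
    gap = abs(self_reported - ai_assessed)
    return 1.0 - gap

# Example: a member rates their "statistics" knowledge at 0.9, but the
# GenAI reading of the transcript estimates 0.4 -> a lighter background
# (intensity 0.5) flags the gap without telling the group what to do.
shade = alignment_shade(0.9, 0.4)
```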

Consideration 3: Interaction Techniques for Exploration
Even with secondary encodings, users need affordances to interrogate the AI’s claims. The authors showcase a “hover‑for‑details” interaction: hovering over a lightly shaded segment reveals the AI’s estimated understanding level, a confidence score, and exemplar excerpts from the transcript that support the assessment. This design turns GenAI outputs into starting points for discussion rather than authoritative statements, encouraging groups to evaluate evidence, negotiate interpretations, and decide whether further dialogue is warranted.
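
The data behind such a tooltip might be organised as follows; this is an illustrative sketch with hypothetical field names, showing how AI outputs could be packaged as inspectable evidence rather than conclusions:

```python
from dataclasses import dataclass

@dataclass
class HoverDetails:
    """Evidence revealed when a user hovers a shaded radar segment."""
    estimated_level: float          # AI's estimate of demonstrated understanding
    confidence: float               # model's confidence in that estimate
    supporting_excerpts: list[str]  # transcript quotes behind the assessment

def tooltip_text(details: HoverDetails) -> str:
    """Render the evidence as a claim to be checked, not a verdict."""
    quotes = "\n".join(f'  "{q}"' for q in details.supporting_excerpts)
    return (
        f"Estimated understanding: {details.estimated_level:.0%} "
        f"(confidence {details.confidence:.0%})\n"
        f"Supporting excerpts:\n{quotes}"
    )
```

Surfacing the confidence score alongside the excerpts gives groups an explicit handle for deciding how much weight the AI's estimate deserves.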

Collectively, these considerations define a design space where GenAI enriches the differences presented to groups, preserves cognitive conflict, and supports autonomous elaboration. The authors present three preliminary design principles derived from the considerations: (1) employ hybrid pipelines that allocate quantitative tasks to rule‑based methods and qualitative tasks to LLMs; (2) visualise GenAI insights as complementary encodings that amplify, not simplify, observed disparities; and (3) provide interactive mechanisms that let users probe the AI’s evidence, fostering critical engagement.

The paper concludes by inviting workshop participants to discuss additional strategies for maintaining implicit guidance in AI‑supported sensemaking and to explore the transferability of this approach beyond collaborative education to other domains where collective reasoning is essential. The authors’ broader research agenda includes building prototypes that embody these principles, evaluating their impact on metacognitive regulation, and refining the hybrid architecture to balance transparency, controllability, and user agency.


Comments & Academic Discussion

Loading comments...

Leave a Comment