Cognitive Spillover in Human-AI Teams
AI is not merely a neutral tool in team settings; it influences the social and cognitive fabric of collaboration. Across two randomized experiments, we demonstrate that AI exposure produces causal spillover into human-human interaction, affecting shared language, collective attention, shared mental models, and social cohesion. These spillover effects occur robustly across settings, modalities, tasks, and AI qualities, suggesting that mere exposure to AI drives the influence. AI functions as an implicit “social forcefield,” influencing not only how people speak, but also how they think, what they attend to, and how they relate to each other. We argue for shifting the design paradigm from optimizing AI as a “tool” to understanding AI as a socially influential actor whose effects extend beyond the human-AI interface.
💡 Research Summary
The paper investigates a previously under‑explored phenomenon: the way artificial intelligence (AI) systems, when embedded in human‑AI teams, can influence not only the direct human‑AI interaction but also the subsequent human‑human dynamics that follow. The authors argue that AI functions as a “social forcefield,” leaving a cognitive “fingerprint” that spills over into multiple channels of team alignment—linguistic, attentional, shared mental models, and relational cohesion. To substantiate this claim, they conduct two rigorously controlled randomized experiments that differ in modality, task, and AI characteristics, thereby testing the robustness and generality of the effect.
Study 1 (Text‑based AI)
Participants interact with a ChatGPT‑4o assistant to draft responses to customer‑service complaints. The AI’s system prompt is manipulated to be either empathic or formal, creating systematic linguistic variation while keeping the underlying task constant. After the AI‑mediated phase, participants engage in a face‑to‑face debrief with a human partner. The authors measure the reuse of AI‑specific lexical items in the subsequent human‑human conversation, controlling for baseline alignment that naturally arises from shared task vocabulary. Results show a statistically significant increase in the reuse of empathic‑prompt language, demonstrating that subtle AI‑driven phrasing persists beyond the AI’s presence.
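The core measurement idea in Study 1 can be illustrated with a minimal sketch: identify vocabulary the AI introduced that is not part of the shared task material (the baseline control), then count how often that vocabulary resurfaces in the later human‑human conversation. The paper does not publish its exact pipeline; the function names and the toy texts below are illustrative assumptions, and a real analysis would use lemmatization and stop‑word handling.

```python
import re

def tokenize(text):
    """Lowercase word tokens; a deliberately simple stand-in for a real tokenizer."""
    return re.findall(r"[a-z']+", text.lower())

def ai_specific_terms(ai_text, baseline_text):
    """Terms the AI used that are absent from the shared task material,
    approximating the paper's control for task-driven vocabulary."""
    return set(tokenize(ai_text)) - set(tokenize(baseline_text))

def reuse_rate(human_text, ai_terms):
    """Fraction of tokens in later human-human talk drawn from AI-specific vocabulary."""
    tokens = tokenize(human_text)
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in ai_terms) / len(tokens)

# Toy data (invented for illustration, not from the study):
ai_reply = "I truly understand how frustrating this delay must feel"
task_brief = "Customer complains about a delayed delivery refund"
debrief = "I said the delay must feel frustrating, so I offered a refund"

terms = ai_specific_terms(ai_reply, task_brief)
print(round(reuse_rate(debrief, terms), 3))  # → 0.5
```

Subtracting the task brief's vocabulary is what separates genuine AI‑driven phrasing from alignment that would arise anyway because both partners discuss the same complaint.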
Study 2 (Voice‑based AI in Teams)
Teams of three to four members solve a complex problem while assisted by a voice‑based AI. Two AI attributes are crossed: helpfulness (high vs. low) and voice anthropomorphism (human‑like vs. synthetic). The experiment captures four alignment dimensions: (1) linguistic coordination (lexical overlap, syntactic similarity), (2) collective attention (topic timing and synchrony derived from transcripts), (3) shared mental models (validated survey instrument), and (4) social cohesion (pronoun usage and self‑report scales). Analyses reveal that both higher helpfulness and greater anthropomorphism boost linguistic alignment, increase temporal synchrony of topic discussion, improve shared‑mental‑model scores, and raise relational cohesion indicators. Crucially, these spillover effects remain observable even after the AI is turned off, and they are largely independent of participants’ explicit appraisals such as trust, perceived intelligence, or perceived team membership, suggesting an implicit, automatic mechanism.
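Two of the four dimensions above lend themselves to simple transcript-level proxies: lexical coordination can be approximated as vocabulary overlap between speakers, and the pronoun component of social cohesion as the share of plural first-person pronouns (“we”-talk). The sketch below shows one plausible operationalization; the metric choices (Jaccard overlap, we/I ratio) are assumptions, not the paper's exact instruments.

```python
import re

WE_PRONOUNS = {"we", "us", "our", "ours"}
I_PRONOUNS = {"i", "me", "my", "mine"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def lexical_overlap(utterances_a, utterances_b):
    """Jaccard similarity of two speakers' vocabularies: a crude
    proxy for lexical coordination between team members."""
    a, b = set(tokenize(utterances_a)), set(tokenize(utterances_b))
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def we_ratio(text):
    """Share of first-person pronouns that are plural -- higher values
    are often read as a marker of relational cohesion."""
    tokens = tokenize(text)
    we = sum(t in WE_PRONOUNS for t in tokens)
    i = sum(t in I_PRONOUNS for t in tokens)
    return we / (we + i) if (we + i) else 0.0

# Toy utterances (invented for illustration):
print(lexical_overlap("we should check the budget first",
                      "let's check our budget numbers"))
print(we_ratio("we did it together and our plan worked, i think"))
```

Comparing these scores between AI‑present and AI‑absent phases, and across the helpfulness and anthropomorphism conditions, mirrors the logic of the reported analyses at a toy scale.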
Theoretical Integration
The authors embed their findings in a distributed cognition framework that treats alignment as a multi‑layered, parallel process. AI, by providing linguistic and conceptual anchors, can synchronize team members’ representations, thereby aligning attention and mental models without requiring conscious deliberation. When the AI is perceived as an insider (e.g., through human‑like voice), it further strengthens affective cohesion, illustrating the bidirectional link between cognitive and relational alignment.
Contributions
- Causal Evidence of Spillover – Demonstrates that AI‑induced alignment extends beyond the dyadic interaction and leaves measurable traces in later human‑human communication.
- Multi‑Channel Impact – Shows that spillover is not limited to surface linguistic mimicry but also affects attention coordination, shared mental models, and affective cohesion.
- Robust Generalizability – Effects persist across text vs. voice modalities, across AI attribute manipulations, and across AI‑present vs. AI‑absent phases.
- Design Paradigm Shift – Argues for reconceptualizing AI from a mere tool to a socially influential actor, urging designers to anticipate and shape its broader team‑level consequences.
Limitations and Future Work
The studies rely on relatively short‑term laboratory tasks and a limited cultural sample, leaving open questions about long‑term team performance, cross‑cultural variability, and the role of explicit AI‑human identity boundaries. Future research could explore longitudinal deployments, diverse work contexts, and interventions that deliberately harness or mitigate AI‑driven spillover.
In sum, the paper provides the first comprehensive, experimentally validated account of AI‑driven cognitive spillover in human‑AI teams, highlighting the need to treat AI as an active participant in the social fabric of collaboration rather than a passive instrument.