Emergent Learner Agency in Implicit Human-AI Collaboration: How AI Personas Reshape Creative-Regulatory Interaction
Generative AI is increasingly embedded in collaborative learning, yet little is known about how AI personas shape learner agency when AI teammates are present but not disclosed. This mechanism study examines how supportive and contrarian AI personas reconfigure emergent learner agency, discourse patterns, and experiences in implicit human-AI creative collaboration. A total of 224 university students were randomly assigned to 97 online triads in one of three conditions: human-only control, hybrid teams with a supportive AI, or hybrid teams with a contrarian AI. Participants completed an individual-group-individual movie-plot writing task; the 10-minute group chat was coded using a creative-regulatory framework. We combined transition network analysis, theory-driven sequential pattern mining, and Gaussian mixture clustering to model structural, temporal, and profile-level manifestations of agency, and linked these to cognitive load, psychological safety, teamwork satisfaction, and embedding-based creative performance. Contrarian AI produced challenge- and reflection-rich discourse structures and motifs indicating productive friction, whereas supportive AI fostered agreement-centred trajectories and smoother convergence. Clustering showed AI agents concentrated in challenger profiles, with reflective regulation uniquely human. While no systematic differences emerged in cognitive load or creative gains, contrarian AI consistently reduced teamwork satisfaction and psychological safety. The findings reveal a design tension between leveraging cognitive conflict and maintaining affective safety and ownership in hybrid human-AI teams.
💡 Research Summary
This study investigates how implicit AI teammates—specifically supportive and contrarian personas—reshape emergent learner agency, discourse dynamics, and affective experiences in collaborative creativity. A total of 224 university students were randomly assigned to 97 online triads across three conditions: a human‑only control, a hybrid team with a supportive AI, and a hybrid team with a contrarian AI. Participants completed an individual‑group‑individual movie‑plot writing task; the 10‑minute group chat was coded using a creative‑regulatory framework that captures divergent (idea generation, expansion) and convergent (evaluation, integration) moves as well as reflective regulation.
Methodologically, the authors combined three analytical layers. First, Transition Network Analysis (TNA) modelled probabilistic pathways among regulatory states, revealing distinct structural signatures for each AI persona. Second, theory‑driven Sequential Pattern Mining traced frequent temporal motifs, showing that contrarian AI frequently triggered “challenge → reflection → integration” sequences, whereas supportive AI promoted smoother “generation → expansion → agreement” flows. Third, Gaussian Mixture Modelling clustered participants into agency profiles (e.g., challenger, coordinator, reflector). AI‑augmented teams displayed a pronounced concentration of “challenger” profiles, while reflective regulation remained uniquely human.
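As a rough illustration of the first and third analytical layers (a sketch under assumptions, not the authors' code), the snippet below builds a first‑order transition probability matrix from a toy coded chat sequence — the core computation behind TNA — and derives the per‑speaker code‑frequency profile vectors that a Gaussian mixture model would then cluster. The code labels are hypothetical stand‑ins for the study's creative‑regulatory scheme.

```python
import numpy as np

# Hypothetical creative-regulatory codes (illustrative, not the study's exact scheme)
CODES = ["generate", "expand", "evaluate", "integrate", "challenge", "reflect", "agree"]
IDX = {c: i for i, c in enumerate(CODES)}

def transition_matrix(sequence):
    """Row-normalised first-order transition probabilities (the core idea of TNA)."""
    counts = np.zeros((len(CODES), len(CODES)))
    for a, b in zip(sequence, sequence[1:]):
        counts[IDX[a], IDX[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def profile_vector(sequence):
    """Relative frequency of each code for one speaker -- the kind of
    feature vector a Gaussian mixture model would cluster into agency profiles."""
    v = np.zeros(len(CODES))
    for c in sequence:
        v[IDX[c]] += 1
    return v / v.sum()

# Toy coded turns for one speaker in a contrarian-AI triad
seq = ["generate", "challenge", "reflect", "integrate", "agree", "challenge", "reflect"]
T = transition_matrix(seq)
p = profile_vector(seq)
print(T[IDX["challenge"], IDX["reflect"]])  # -> 1.0: every "challenge" here is followed by "reflect"
```

In a full analysis, one profile vector per participant would be fed to a mixture model (e.g., scikit‑learn's `GaussianMixture`) to recover clusters such as the "challenger" profile the authors report.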
Key findings indicate that the contrarian AI generated richer challenge‑ and reflection‑laden discourse structures, producing what the authors term “productive friction.” However, this came at a cost: teams with a contrarian AI reported significantly lower psychological safety and teamwork satisfaction. The supportive AI, by contrast, fostered agreement‑centred trajectories and smoother convergence, preserving affective comfort. Notably, there were no systematic differences across conditions in cognitive load (NASA‑TLX) or in creative performance measured via embedding‑based diversity and quality of the final plots.
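The embedding‑based performance measure mentioned above can be sketched as follows. This is not the paper's exact metric — one common proxy for creative movement is the cosine distance between embeddings of the pre‑ and post‑collaboration plots; the vectors here are simulated stand‑ins for real text embeddings.

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity; larger values mean the revised plot moved
    further in embedding space from the individual draft."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(42)
pre = rng.normal(size=384)               # stand-in for an embedding of the individual draft
post = pre + 0.3 * rng.normal(size=384)  # stand-in for the post-collaboration plot
gain = cosine_distance(pre, post)        # one crude proxy for creative movement
```

Comparing such distances across conditions is one way a null result like the paper's "no systematic differences in creative gains" could be established.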
The authors interpret these results as evidence of a design tension in hybrid human‑AI teams. While cognitive conflict introduced by a contrarian AI can deepen reasoning and diversify idea pathways, it simultaneously threatens the affective conditions—trust, safety, ownership—that underpin sustained agency. They propose practical design guidelines: use supportive personas to maintain cohesion and momentum; deploy contrarian challenges sparingly or later in the task, and pair them with “repair moves” (acknowledgement, summarising, option generation) to buffer negative affect. Moreover, they advocate for building learners’ meta‑collaborative literacy—skills to interpret, accept, reject, and retain ownership of AI suggestions—and for providing transparency or consent mechanisms where feasible.
Limitations include the short, text‑only interaction window, reliance on pre‑programmed AI scripts, and a single cultural/educational context. Future work should explore longer, multimodal collaborations, adaptive AI behaviours, and cross‑cultural samples to refine the balance between productive friction and affective safety.
In sum, the paper demonstrates that implicit AI personas can fundamentally reconfigure emergent learner agency: contrarian AI amplifies challenger behaviours and reflective discourse but undermines psychological safety, whereas supportive AI stabilises agreement without enhancing reflective regulation. Designers of hybrid learning environments must therefore negotiate the trade‑off between leveraging AI‑driven cognitive conflict and preserving the emotional conditions essential for effective, autonomous learning.