Exploring Teachers' Perspectives on Using Conversational AI Agents for Group Collaboration
Collaboration is a cornerstone of 21st-century learning, yet teachers continue to face challenges in supporting productive peer interaction. Emerging generative AI tools offer new possibilities for scaffolding collaboration, but their role in mediating in-person group work remains underexplored, especially from the perspective of educators. This paper presents findings from an exploratory qualitative study with 33 K-12 teachers who interacted with Phoenix, a voice-based conversational agent designed to function as a near-peer in face-to-face group collaboration. Drawing on playtesting sessions, surveys, and focus groups, we examine how teachers perceived the agent’s behavior, its influence on group dynamics, and its classroom potential. While many appreciated Phoenix’s capacity to stimulate engagement, they also expressed concerns around autonomy, trust, anthropomorphism, and pedagogical alignment. We contribute empirical insights into teachers’ mental models of AI, reveal core design tensions, and outline considerations for group-facing AI agents that support meaningful, collaborative learning.
💡 Research Summary
This paper reports an exploratory qualitative study of a voice‑based conversational agent, named Phoenix, designed to act as a near‑peer in face‑to‑face K‑12 group work. Thirty‑three STEM teachers (17 female, 13 male, 1 non‑binary, 2 unreported) from the United States and abroad participated in a 2.5‑hour workshop where they were randomly assigned to 11 groups of three. Each group used Phoenix during three domain‑agnostic collaborative tasks: an ice‑breaker, a “sinking‑ship” consensus‑building exercise, and an open‑ended brainstorming activity. Phoenix was built on a multi‑layer architecture that combined Google Speech‑to‑Text, Azure Text‑to‑Speech, and OpenAI’s GPT‑4.1‑mini. It employed a gender‑neutral voice, no visual avatar, and a response‑gatekeeping model that limited the agent’s turn‑taking to moments when it was directly addressed or when the recent dialogue indicated a need for clarification or contribution. The agent was prompted to speak concisely (≈20 words), to build on peer ideas, and to avoid a didactic tone.
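The paper does not publish the gatekeeping logic itself, but the idea of limiting the agent's turn-taking to direct address or apparent confusion can be sketched as a simple policy function. The trigger phrases and the three-utterance window below are illustrative assumptions, not Phoenix's actual rules:

```python
# Sketch of a response-gatekeeping policy: the agent takes a turn only
# when directly addressed or when recent dialogue suggests a need for
# clarification. Cue phrases and window size are assumed for illustration.

AGENT_NAME = "phoenix"
CLARIFICATION_CUES = (
    "not sure",
    "what do you mean",
    "i'm confused",
    "can you explain",
)

def should_respond(recent_utterances: list[str]) -> bool:
    """Decide whether the agent may take a conversational turn.

    recent_utterances: the last few transcribed utterances, newest last.
    """
    if not recent_utterances:
        return False
    latest = recent_utterances[-1].lower()
    # Rule 1: the agent was directly addressed by name.
    if AGENT_NAME in latest:
        return True
    # Rule 2: the recent dialogue signals confusion or a request to clarify.
    window = " ".join(u.lower() for u in recent_utterances[-3:])
    return any(cue in window for cue in CLARIFICATION_CUES)
```

In a full pipeline, a function like this would sit between the speech-to-text output and the language-model call, so the agent stays silent by default rather than interjecting on every utterance.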
Data were collected via a post‑activity open‑ended survey (22 participants) and focus‑group interviews (the remaining 11 participants). The researchers performed inductive thematic analysis with multiple rounds of social moderation, arriving at five major themes: (1) modality and perceived utility, (2) trust and autonomy, (3) role perception, (4) social positioning and human‑likeness, and (5) pedagogical fit.
Key findings include:
- Modality matters: Teachers found the voice interface more natural than text‑based commands, noting that Phoenix’s ability to recap prior decisions and suggest overlooked considerations reduced cognitive load.
- Trust‑autonomy tension: When Phoenix adopted an overly authoritative tone or dominated conversation, teachers felt that both their own and their students’ autonomy was threatened. Trust increased when the agent asked probing questions that prompted student reasoning rather than delivering answers.
- Role ambiguity: Most teachers viewed Phoenix as a “support tool” rather than a true peer, emphasizing the need for clear role definitions and alignment with learning objectives.
- Social presence: The lack of visual embodiment minimized unintended authority cues but also limited perceived “personality,” leading some participants to suggest optional avatars to boost engagement.
- Pedagogical relevance: Teachers appreciated context‑specific prompts that linked the agent’s contributions to curriculum goals; however, limited prior experience with generative AI (average self‑rating 3/5) and the absence of institutional guidelines were cited as barriers to classroom adoption.
From these insights the authors derive four design tensions: (a) balancing transparency and limited intervention to foster trust without eroding autonomy; (b) defining the agent’s role as a facilitator rather than a decision‑maker; (c) calibrating human‑likeness to support engagement while avoiding over‑anthropomorphization; and (d) providing curriculum‑aligned prompts and teacher training to ensure pedagogical fit. The paper concludes that conversational AI can meaningfully scaffold in‑person collaborative learning, but successful integration requires careful attention to social dynamics, teacher control, and systemic support.