Do Teachers Dream of GenAI Widening Educational (In)equality? Envisioning the Future of K-12 GenAI Education from Global Teachers' Perspectives

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Generative artificial intelligence (GenAI) is rapidly entering K-12 classrooms worldwide, sparking urgent debates about its potential to either reduce or exacerbate educational inequality. Drawing on interviews with 30 K-12 teachers across the United States, South Africa, and Taiwan, this study examines how teachers navigate this tension. We found that teachers actively framed GenAI education as an equality-oriented practice: they used it to alleviate pre-existing inequalities while simultaneously working to prevent new inequalities from emerging. Despite these efforts, teachers confronted persistent systemic barriers, namely unequal infrastructure, insufficient professional training, and restrictive social norms, that individual initiative alone could not overcome. Teachers thus articulated normative visions for more inclusive GenAI education. By centering teachers' practices, constraints, and future visions, this study contributes a global account of how GenAI education is being integrated into K-12 contexts and highlights what is required to make its adoption genuinely equitable.


💡 Research Summary

This paper investigates how K‑12 teachers across three distinct education systems—the United States, South Africa, and Taiwan—are grappling with the dual promise and peril of generative artificial intelligence (GenAI) for educational equality. Drawing on 30 semi‑structured interviews (10 teachers per country), the authors explore three research questions: (1) how teachers integrate GenAI instruction to promote equity, (2) what structural challenges lie beyond teachers' individual control, and (3) what forms of support and future directions teachers envision from schools, industry, and policymakers.

Methodologically, the study adopts a sociotechnical HCI lens. Interviews lasted 60‑90 minutes, were transcribed, and subjected to a rigorous thematic coding process. An initial codebook of 45 sub‑categories—derived from prior AI‑education literature and emergent interview data—was refined through iterative team discussions into three overarching themes with 12 sub‑themes.

The first theme, "Equality‑Oriented GenAI Teaching Practices," reveals that teachers deliberately use GenAI as a lever to mitigate pre‑existing inequities. They provide personalized feedback to low‑resource learners, embed culturally and linguistically appropriate prompts to protect minority languages, and incorporate critical‑thinking modules that teach students to assess GenAI outputs, thereby preventing misuse such as plagiarism or bullying.

The second theme, “Structural Barriers,” captures systemic constraints that individual agency cannot overcome. Teachers across all three contexts cite (a) infrastructural divides—unequal broadband access, insufficient devices, and outdated hardware; (b) a lack of professional development and clear curricular guidance on GenAI; and (c) restrictive social norms, including stigma that AI use is cheating and parental anxiety about data privacy. These factors limit the reach of equity‑focused practices and risk turning GenAI into a privilege for well‑resourced schools.

The third theme, “Vision for Inclusive GenAI Education,” outlines teachers’ forward‑looking proposals. At the school level, they call for AI innovation hubs, resource redistribution, and peer‑learning networks. At the industry level, they request culturally localized user interfaces, free or low‑cost licenses for under‑served schools, and offline capabilities. At the policy level, they advocate for national AI literacy programs that extend to parents and communities, targeted infrastructure investment, and robust ethical‑privacy guidelines.

The authors claim three scholarly contributions: (1) extending AI‑education frameworks by foregrounding “equality‑oriented practice” as a distinct analytical lens; (2) empirically demonstrating that structural inequities—beyond teachers’ control—shape the equitable impact of GenAI; and (3) surfacing concrete design and policy recommendations that bridge HCI, learning sciences, and education policy.

Limitations include the modest sample size, reliance on self‑reported teacher perspectives, and the absence of direct measures of student learning outcomes or attitudes. The study also does not capture the voices of students, parents, or policymakers, which could enrich the understanding of multi‑stakeholder dynamics. Future work should incorporate multi‑method data (classroom observations, achievement metrics, student surveys) and broaden geographic coverage to test causal pathways and the efficacy of the proposed multi‑level support models.

In conclusion, teachers view GenAI simultaneously as a tool for reducing educational disparities and a potential source of new inequities. Their experiences underscore that without coordinated support from schools, technology firms, and governments, the promise of equitable GenAI‑enhanced learning will remain unrealized. The paper calls for a collective, sociotechnical effort to ensure GenAI becomes a catalyst for genuine educational equality rather than a divider.

