Open Mathematical Tasks as a Didactic Response to Generative Artificial Intelligence in Post-AI Contexts

Notice: This research summary and analysis were generated automatically using AI technology. For complete accuracy, please refer to the original arXiv source.

The widespread availability of generative artificial intelligence tools poses new challenges for school mathematics education, particularly regarding the formative role of traditional mathematical tasks. In post-AI educational contexts, many activities can be solved automatically, without engaging students in interpretation, decision-making, or mathematical validation processes. This study analyzes a secondary school classroom experience in which open mathematical tasks are implemented as a didactic response to this scenario, aiming to sustain students’ mathematical activity. Adopting a qualitative and descriptive-interpretative approach, the study examines the forms of mathematical work that emerge during task resolution, mediated by the didactic regulation device COMPAS. The analysis is structured around four analytical axes: open task design in post-AI contexts, students’ mathematical agency, human-AI complementarity, and modeling and validation practices. The findings suggest that, under explicit didactic regulation, students retain epistemic control over mathematical activity, even in the presence of generative artificial intelligence.


💡 Research Summary

The paper investigates how the widespread availability of generative artificial intelligence (AI) reshapes secondary‑school mathematics education and proposes a didactic response that preserves students’ mathematical agency. The authors define “post‑AI” contexts as learning environments where generative AI tools are readily accessible to both teachers and pupils, allowing many traditional, closed‑ended tasks to be solved automatically. This threatens the formative function of such tasks, which traditionally engage learners in interpretation, decision‑making, and validation.

Drawing on two theoretical strands—human‑AI complementarity (Hemmer et al., 2024) and the pedagogical affordances of open mathematical tasks—the authors argue that AI should be treated as a resource that supports rapid computation, example generation, and exploratory calculations, while learners retain responsibility for problem framing, assumption formulation, model construction, and result validation. Open tasks, characterized by initial indeterminacy, multiple solution pathways, and a requirement for validation, inherently resist immediate automation and thus compel students to engage in higher‑order reasoning.

To operationalise this vision, the study introduces the COMPAS regulatory device, a six‑phase framework (Comprehension, Organization, Modeling, Production, Analysis, Synthesis). Each phase explicitly delineates when AI may be consulted, ensuring that AI use is subordinate to human decision‑making. For example, the “Comprehension” phase requires students to interpret the situation without AI assistance; only in the “Production” phase may they invoke AI for calculations or to explore alternative results.
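The phase structure described above can be sketched as a small data structure. This is an illustrative sketch only: the six phase names come from the summary, but the per-phase AI permissions (beyond Comprehension = no AI and Production = AI allowed, which the text states) and the purpose strings are assumptions made for illustration, not details confirmed by the paper.

```python
# Sketch of the COMPAS regulatory device as a phase table.
# Phase names are from the paper's summary; the ai_allowed flags for
# Organization, Modeling, Analysis, and Synthesis are assumed here.
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    name: str
    ai_allowed: bool
    purpose: str

COMPAS = [
    Phase("Comprehension", False, "interpret the situation without AI"),
    Phase("Organization",  False, "identify variables and structure the problem"),
    Phase("Modeling",      False, "formulate assumptions and relations"),
    Phase("Production",    True,  "compute; AI may be consulted for calculations"),
    Phase("Analysis",      True,  "check AI outputs against the students' own model"),
    Phase("Synthesis",     False, "justify and communicate the validated result"),
]

def may_consult_ai(phase_name: str) -> bool:
    """Return whether the protocol permits AI use in the given phase."""
    for p in COMPAS:
        if p.name == phase_name:
            return p.ai_allowed
    raise ValueError(f"unknown phase: {phase_name}")

print(may_consult_ai("Comprehension"))  # False
print(may_consult_ai("Production"))     # True
```

Encoding the permissions explicitly mirrors the paper's central design point: AI use is subordinated to human decision-making by rule, not left to spontaneous classroom practice.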

Methodologically, the research adopts a qualitative, descriptive‑interpretative approach. An empirical case study was conducted in a Peruvian secondary school, where a series of open tasks (e.g., “How many bottles are needed?”) were implemented under the COMPAS protocol. Data sources included students’ written work, audio‑recorded classroom discourse, and teacher‑researcher field notes. Analysis was organised around four analytical axes: (a) design of open tasks in AI‑rich contexts, (b) students’ mathematical agency, (c) human‑AI complementarity in practice, and (d) modeling and validation practices.

Findings reveal that the deliberate omission of explicit numerical data in task prompts forces students to define variables, decide on reasonable assumptions, and construct relational models before any calculation can occur. This design effectively blocks the “plug‑and‑play” use of AI for instant answers. In the agency dimension, students actively negotiated assumptions, justified choices, and critiqued each other’s models during collaborative discussions. When AI was consulted, its outputs were treated as provisional evidence; students systematically compared AI‑generated numbers with their own models, identified inconsistencies, and revised assumptions accordingly.
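A minimal worked example in the spirit of the "How many bottles are needed?" task can show why omitting numerical data forces modeling before calculation. Every number below is an invented assumption for illustration; none comes from the study, and the "AI answer" is hypothetical.

```python
# Hypothetical model for "How many bottles are needed?".
# Students must first define variables and assumptions (the prompt
# supplies no numbers), then build a relation, then calculate.
import math

people = 30                # assumption: class of 30 students
litres_per_person = 0.5    # assumption: each person drinks 0.5 L
bottle_litres = 0.625      # assumption: 625 mL bottles

# Relational model constructed by the students:
bottles_needed = math.ceil(people * litres_per_person / bottle_litres)
print(bottles_needed)  # 24

# Per the study, an AI-generated number is treated as provisional
# evidence and compared against the students' own model:
ai_answer = 20  # hypothetical output from an AI tool
if ai_answer != bottles_needed:
    # discrepancy prompts students to revisit assumptions, not to
    # accept the AI figure at face value
    print("inconsistent: revise assumptions or reject AI output")
```

The point is that the calculation is trivial once the model exists; the formative work, which AI cannot shortcut, is choosing the variables and defending the assumptions.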

The complementarity analysis shows a clear asymmetry of roles: AI supplied instrumental support (quick calculations, example generation), while learners retained interpretive and evaluative authority. This distribution emerged not spontaneously but as a product of the task design and the explicit COMPAS regulation, which prevented premature closure of the problem‑solving process. The modeling‑validation phase was especially rich; students engaged in iterative cycles of hypothesis testing, using AI to explore alternative scenarios but ultimately deciding on the plausibility of results based on their own contextual reasoning.

Overall, the study demonstrates that, under explicit didactic regulation, open mathematical tasks can safeguard epistemic control even when generative AI is omnipresent. The authors conclude that educators should adopt open‑task designs coupled with structured regulatory frameworks like COMPAS to harness AI’s benefits without surrendering the formative core of mathematics education. They call for teacher professional development, policy support for open‑task curricula, and further cross‑cultural research to refine and scale these approaches as AI technologies continue to evolve.

