Cultivating higher-order cognitive abilities -- such as knowledge integration, critical thinking, and creativity -- in modern STEM education necessitates a pedagogical shift from passive knowledge transmission to active Socratic construction. Although Large Language Models (LLMs) hold promise for STEM Interdisciplinary education, current methodologies employing Prompt Engineering (PE), Supervised Fine-tuning (SFT), or standard Reinforcement Learning (RL) often fall short of supporting this paradigm. Existing methods are hindered by three fundamental challenges: the inability to dynamically model latent student cognitive states; severe reward sparsity and delay inherent in long-term educational goals; and a tendency toward policy collapse lacking strategic diversity due to reliance on behavioral cloning. Recognizing the unobservability and dynamic complexity of these interactions, we formalize the Socratic Interdisciplinary Instructional Problem (SIIP) as a structured Partially Observable Markov Decision Process (POMDP), demanding simultaneous global exploration and fine-grained policy refinement. To this end, we propose ERL4SIIP, a novel Evolutionary Reinforcement Learning (ERL) framework specifically tailored for this domain. ERL4SIIP integrates: (1) a dynamic student simulator grounded in a STEM knowledge graph for latent state modeling; (2) a Hierarchical Reward Mechanism that decomposes long-horizon goals into dense signals; and (3) a LoRA-Division based optimization strategy coupling evolutionary algorithms for population-level global search with PPO for local gradient ascent.
Deep Dive into an Evolutionary Reinforcement Learning-based AI Tutor for Socratic Interdisciplinary Instruction.
Fostering higher-order cognitive abilities such as knowledge integration, transfer, critical thinking, and creativity is widely regarded as a central aim of modern STEM education [27,40]. Research grounded in constructivist theory suggests that these abilities arise through active knowledge construction rather than passive reception, as learners reorganize ideas and resolve cognitive conflict [4]. Socratic pedagogy builds on this view by guiding students through purposeful questioning and sustained intellectual challenge, encouraging them to articulate reasoning, confront misconceptions, and gradually refine their understanding [5,29].
However, current AI tutors built on Large Language Models (LLMs) still fall short of translating these pedagogical ideals into practical instructional behavior. Although LLMs possess extensive conceptual knowledge [2], mainstream alignment approaches, such as Prompt Engineering (PE) and Supervised Fine-tuning (SFT), remain fundamentally static. SFT models, constrained by behavioral cloning, are prone to the so-called plot echo effect [43], converging toward safe and formulaic average responses rather than offering the adaptive variation needed for personalized scaffolding. More importantly, in the absence of a mechanism for long-horizon planning, these models are implicitly driven to optimize short-term conversational efficiency. As a result, they often default to direct answer delivery rather than sustained reasoning support, making it difficult to elicit genuine cognitive engagement [19].
While Reinforcement Learning (RL) in principle enables optimization beyond behavioral imitation, applying standard RL methods (e.g., PPO) to the Socratic Interdisciplinary Instructional Problem (SIIP) remains fundamentally challenging. First, instructional dialogue inherently forms a Partially Observable Markov Decision Process (POMDP) in which student cognition (such as misconceptions, confidence, or frustration) is latent and continuously evolving; collapsing this into a fully observable Markov Decision Process (MDP) breaks belief-state tracking and undermines adaptive response generation [32]. Second, evidence of conceptual growth is naturally sparse and delayed, so short-term conversational proxies often misrepresent true learning progress and expose models to Goodhart-style reward exploitation, where direct answers become a locally optimal but pedagogically shallow strategy [35]. Finally, the immense and compositional action space of natural language renders the optimization landscape highly non-convex, causing gradient-based RL to converge prematurely and collapse into limited behavioral modes, which reduces the strategic diversity needed to support varied learners and sustained reasoning.
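To make the POMDP framing concrete, the belief-state tracking that an MDP collapse would break can be illustrated with a standard Bayes filter. The states, observations, and probabilities below are illustrative assumptions, not the paper's actual simulator:

```python
import numpy as np

# Hypothetical latent student states and coarse observations.
STATES = ["misconception", "partial", "mastered"]
OBS = ["wrong", "vague", "correct"]

# T[a][s, s']: transition probability under tutor action a (assumed values).
T = {
    "probe": np.array([[0.8, 0.2, 0.0],
                       [0.1, 0.7, 0.2],
                       [0.0, 0.1, 0.9]]),
    "hint":  np.array([[0.5, 0.5, 0.0],
                       [0.0, 0.6, 0.4],
                       [0.0, 0.1, 0.9]]),
}
# O[s', o]: probability of observing o when the student is in state s'.
O = np.array([[0.70, 0.25, 0.05],
              [0.20, 0.50, 0.30],
              [0.02, 0.18, 0.80]])

def belief_update(belief, action, obs):
    """Bayes filter: b'(s') proportional to O(o|s') * sum_s T(s'|s,a) b(s)."""
    predicted = belief @ T[action]                 # predict step
    unnorm = predicted * O[:, OBS.index(obs)]      # correct step
    return unnorm / unnorm.sum()

b = np.array([1/3, 1/3, 1/3])          # uniform prior over latent cognition
b = belief_update(b, "hint", "correct")
```

A tutor that conditions its next question on `b` rather than on the last utterance alone is what the POMDP formulation buys; discarding the belief reduces the problem to reactive turn-by-turn response selection.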
To address these deficits, we introduce ERL4SIIP, an Evolutionary Reinforcement Learning (ERL) framework designed for Socratic Interdisciplinary Instruction. Rather than relying on static imitation, ERL4SIIP enables dynamic, belief-driven interaction by unifying three components: a Dynamic Student Simulator, grounded in a STEM knowledge graph, that models latent cognition and provides a high-fidelity POMDP environment; a Hierarchical Reward System that decomposes long-horizon instructional goals into dense, non-deceptive feedback to reduce reward hacking; and a LoRA-Division Based Optimization approach that separates exploration from exploitation, using population-level search to maintain diversity while gradient refinement stabilizes learning and prevents strategy collapse.
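The hierarchical decomposition of sparse terminal rewards into dense per-turn signals can be sketched as follows. The specific levels, signal names, and weights here are assumptions for illustration, not the paper's actual reward design:

```python
def hierarchical_reward(step, done):
    """Combine dense process/progress signals with a sparse terminal outcome.

    step: per-turn signals from the student simulator (hypothetical keys).
    done: whether the tutoring episode has ended.
    """
    # Level 1 (dense, per turn): process signals rewarding Socratic form and
    # penalizing direct answer delivery, a known reward-hacking shortcut.
    r_process = 0.1 * step["asked_question"] - 0.3 * step["gave_answer"]
    # Level 2 (dense, per turn): shaped progress from the simulator's latent
    # mastery estimate, available every turn instead of only at the end.
    r_progress = 0.5 * step["mastery_gain"]
    # Level 3 (sparse, terminal): the long-horizon learning outcome.
    r_outcome = 2.0 * step["final_mastery"] if done else 0.0
    return r_process + r_progress + r_outcome

# A mid-episode turn: the tutor asked a question and mastery rose slightly.
r = hierarchical_reward(
    {"asked_question": 1, "gave_answer": 0,
     "mastery_gain": 0.2, "final_mastery": 0.0},
    done=False,
)
```

Because Levels 1 and 2 fire every turn, the policy receives gradient signal long before the terminal outcome arrives, which is the sense in which the mechanism densifies the long-horizon goal.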
Our contributions are summarized as follows:
• We formalize SIIP as a POMDP and construct a student simulator that explicitly models latent knowledge dynamics. This mitigates the unobservability of student cognitive states and establishes a dynamic surrogate environment for pedagogical exploration beyond static datasets.
• We introduce a hierarchical reward mechanism that resolves the reward sparsity and reward-hacking challenges of Socratic teaching. This non-deceptive scheme decomposes long-horizon educational goals into dense process signals, ensuring alignment between pedagogical behaviors and student conceptual reorganization.
• We propose a LoRA-Division based ERL framework that decouples global exploration from local refinement. By projecting the search into a low-rank manifold, we make population-based evolutionary algorithms computationally feasible for LLMs, effectively preventing the strategy collapse common in standard RL.
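The division of labor between population-level evolution and gradient refinement can be sketched on a toy problem. Here each "LoRA adapter" is stood in for by a small parameter vector and fitness by a synthetic score; in the actual framework, fitness would come from rollouts against the student simulator and the gradient step from PPO, so everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, ELITE = 8, 6, 2          # toy adapter size and population shape
target = rng.normal(size=DIM)      # hypothetical optimum in adapter space

def fitness(theta):
    # Synthetic stand-in for a rollout-based teaching score (higher is better).
    return -np.sum((theta - target) ** 2)

population = [rng.normal(size=DIM) for _ in range(POP)]
for gen in range(50):
    scored = sorted(population, key=fitness, reverse=True)
    elites = scored[:ELITE]
    # Global exploration: keep elites, refill the population with mutants.
    mutants = [e + 0.1 * rng.normal(size=DIM)
               for e in elites for _ in range((POP - ELITE) // ELITE)]
    population = elites + mutants
    # Local exploitation: a gradient ascent step on the champion, standing in
    # for the PPO update applied to one adapter in the population.
    grad = -2.0 * (population[0] - target)   # gradient of the toy fitness
    population[0] = population[0] + 0.05 * grad

best = max(population, key=fitness)
```

Evolution maintains a diverse set of candidate policies (countering mode collapse), while the gradient step accelerates convergence of the current best; restricting the search to low-rank adapter parameters is what keeps the population affordable at LLM scale.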
Initial efforts leveraged PE to instantiate specific pedagogical tactics, such as refutation or induction, within LLMs [3,16]. While accessible, these heuristic-based methods lack structural constraints, frequently degenerating into repetitive patterns or hallucinations.
To capture more nuanced strategies, recent frameworks like SocraticLM [23] and PlatoLM [21] apply SFT to datasets derived from multi-agent simulations. Despite improved coherence, these methods are fundamentally constrained by behavior cloning: th
…(Full text truncated)…