Prompting Destiny: Negotiating Socialization and Growth in an LLM-Mediated Speculative Gameworld
We present an LLM-mediated role-playing game that supports reflection on socialization, moral responsibility, and educational role positioning. Grounded in socialization theory, the game follows a four-season structure in which players guide a child prince through morally charged situations and compare how the LLM-mediated NPC responds differently at each stage, prompting them to reason about how educational guidance shifts as socialization progresses. To approximate real educational contexts and discourage score-chasing, the system hides real-time evaluative scores and instead delivers delayed, end-of-stage growth feedback as reflective prompts. We conducted a user study (N=12) with gameplay logs and post-game interviews, analyzed via reflexive thematic analysis. Findings show how players negotiated responsibility and role positioning, and reveal an entry-load tension between open-ended expression and sustained engagement. We contribute design knowledge on translating sociological models of socialization into reflective AI-mediated game systems.
💡 Research Summary
The paper introduces “Prompting Destiny,” an LLM‑mediated role‑playing game designed to surface reflection on educational role positioning, moral responsibility, and socialization across developmental stages. Grounded in sociological theories of socialization, the authors map a four‑stage model onto a seasonal narrative (Spring‑Initiation, Summer‑Exploration, Autumn‑Consolidation, Winter‑Consequences). Players act as mentors guiding a child‑prince through morally charged dilemmas, while a GPT‑4 powered NPC generates context‑sensitive responses that differ across stages, illustrating how the same pedagogical action can have divergent meanings over time.
A central design choice is the “anti‑visualization” of evaluative scores. Instead of exposing real‑time metrics that encourage score‑chasing, the system records internal evaluation signals (narrative events, character trust changes, resource shifts) and presents them only at the end of each season as a “growth summary.” This delayed feedback mirrors real‑world education, where consequences often become visible only retrospectively, and it creates “sudden realization” moments that prompt deeper moral reasoning.
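The delayed-feedback mechanic described above can be sketched as a simple accumulator that records evaluation signals silently during a season and surfaces only a narrative summary at the season boundary. This is an illustrative reconstruction, not the paper's implementation; the class and field names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SeasonLog:
    """Hypothetical sketch of the 'anti-visualization' design: internal
    signals (narrative events, trust changes, resource shifts) are
    recorded but never shown mid-season; only a growth summary is
    revealed when the season ends."""
    season: str
    events: list = field(default_factory=list)
    trust_delta: int = 0
    resources_delta: int = 0

    def record(self, event: str, trust: int = 0, resources: int = 0) -> None:
        # Deliberately no UI update here: scores stay hidden so players
        # attend to the narrative rather than chasing a metric.
        self.events.append(event)
        self.trust_delta += trust
        self.resources_delta += resources

    def growth_summary(self) -> str:
        # Emitted only at the season boundary, phrased as reflective
        # prose rather than a running numeric score.
        return (f"{self.season}: {len(self.events)} notable moments; "
                f"trust shifted by {self.trust_delta}, "
                f"resources by {self.resources_delta}.")
```

A season's worth of silent `record` calls then collapses into one retrospective line, which is what produces the "sudden realization" moments the authors describe.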
To manage the cognitive burden of open‑ended input (the authors call this entry load), the interface blends structured choice trees with free‑text entries, adding lightweight justification templates in later stages to reduce linguistic load while preserving agency. The prototype is built in Unity with a Python orchestration layer that calls the LLM API, and it logs all player decisions for post‑hoc analysis.
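The orchestration layer's stage-conditioned prompting can be illustrated with a minimal prompt builder: the NPC prompt combines a stage persona with the player's action, and, in later stages, an optional justification captured through the lightweight templates mentioned above. The persona texts and function name here are assumptions for illustration, not taken from the paper.

```python
from typing import Optional

# Hypothetical stage personas; the paper does not publish its prompts.
STAGE_PERSONAS = {
    "Spring-Initiation": "The prince is impressionable and eager to please.",
    "Summer-Exploration": "The prince tests boundaries and questions advice.",
    "Autumn-Consolidation": "The prince weighs advice against his own values.",
    "Winter-Consequences": "The prince acts on internalized values; guidance lands only indirectly.",
}

def build_npc_prompt(stage: str, player_action: str,
                     justification: Optional[str] = None) -> str:
    """Compose a stage-conditioned prompt for the NPC model, so the same
    mentoring action elicits different responses across seasons."""
    parts = [
        "You are the child prince in a mentoring game.",
        f"Developmental stage: {stage}. {STAGE_PERSONAS[stage]}",
        f"The mentor's action: {player_action}",
    ]
    if justification:
        # Later-stage justification template: structured entry that
        # reduces free-text entry load while preserving agency.
        parts.append(f"The mentor's stated reason: {justification}")
    parts.append("Respond in character, reflecting how this stage interprets guidance.")
    return "\n".join(parts)
```

Because the stage persona is injected into every call, an identical player action is reinterpreted by the model season by season, which is how the system makes the same pedagogical act carry divergent meanings over time.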
A user study with twelve university participants (aged 21‑32, mixed gender) involved a single‑player session lasting roughly 30 minutes, followed by a semi‑structured interview. The authors applied Reflexive Thematic Analysis to interview transcripts and gameplay logs, generating five cross‑stage themes:

1. Uncontrollable Educational Impact – mismatches between intent and outcome that destabilize perceived control.
2. Social Role Shift – movement from directive mentor to negotiator, mediator, or observer as the prince resists.
3. Moral Situational Tension – dilemmas lacking clear "correct" answers, forcing participants to question their authority.
4. AI Cognitive Transformation – the prince evolves from a plot device into an emotionally salient other, shifting player focus from instrumental success to relational outcomes.
5. Self‑Reflection Upgrade – players draw parallels between their in‑game guidance and their real‑world educational practices, prompting identity re‑evaluation.
The delayed growth summaries were reported as effective “boundary markers” that encouraged retrospective sense‑making without turning the experience into a performance metric. Participants noted that the lack of real‑time scores reduced metric‑driven behavior and allowed them to attend to the narrative consequences of their choices. However, they also reported increasing entry load in later stages, suggesting a need for lightweight scaffolds (e.g., justification templates, contextual recall cues) to sustain engagement.
Design implications highlighted by the authors include: (a) using stage‑based temporal scaffolds to make the fluid process of socialization legible; (b) leveraging delayed, narrative‑focused feedback to model education as a cumulative, often invisible, social process; and (c) balancing open‑ended agency with cognitive load management through structured input aids.
Limitations are acknowledged: the sample size is small and culturally homogeneous (Chinese university students), the internal evaluation LLM is not a validated psychometric instrument, and the study examined only single‑player sessions. Future work should explore multi‑player collaborative scenarios, cross‑cultural deployments, and longer‑term feedback loops, as well as develop more rigorous methods for validating LLM‑generated reflective artifacts.
Overall, the paper contributes a novel design framework that translates sociological models of socialization into interactive AI‑mediated gameplay, demonstrating how stage‑based interaction design and delayed feedback can foster deeper reflection on educational responsibility and moral agency.