Self-Regulated Reading with AI Support: An Eight-Week Study with Students
College students increasingly use AI chatbots to support academic reading, yet we lack a granular understanding of how these interactions shape their reading experience and cognitive engagement. We conducted an eight-week longitudinal study with 15 undergraduates who used AI to support assigned readings in a course. We collected 838 prompts across 239 reading sessions and developed a coding schema categorizing prompts into four cognitive themes: Decoding, Comprehension, Reasoning, and Metacognition. Comprehension prompts dominated (59.6%), with Reasoning (29.8%), Metacognition (8.5%), and Decoding (2.1%) less frequent. Most sessions (72%) contained exactly three prompts, the minimum required by the reading assignment. Within sessions, students showed a natural cognitive progression from comprehension toward reasoning, but this progression was truncated. Across the eight weeks, students' engagement patterns remained stable, with substantial individual differences persisting throughout. Qualitative analysis revealed an intention-behavior gap: students recognized that effective prompting required effort but rarely applied this knowledge, with efficiency emerging as the primary driver of their behavior. Students also strategically triaged their engagement based on interest and academic pressures, exhibiting a novel pattern of reading through AI rather than with it: using AI-generated summaries as the primary material to filter which sections merited deeper attention. We discuss design implications for AI reading systems that scaffold sustained cognitive engagement.
💡 Research Summary
This paper presents an eight-week longitudinal investigation of how undergraduate students employ generative AI chatbots to support self-regulated reading tasks. Fifteen participants from an introductory AI course logged their interactions with AI tools (primarily ChatGPT, but also Gemini, Claude, and others) while completing weekly reading assignments. Across 239 reading sessions, the researchers collected 838 prompts and coded each prompt into one of four cognitive themes (Decoding, Comprehension, Reasoning, and Metacognition) derived from reading-comprehension taxonomies, Bloom's taxonomy, and self-regulated learning theory. The coding schema comprised ten sub-codes and achieved high inter-rater reliability (Cohen's κ = 0.82).
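For readers less familiar with the statistic, Cohen's κ compares the observed agreement between two coders (p_o) with the agreement expected by chance from their marginal code frequencies (p_e), via κ = (p_o - p_e) / (1 - p_e). The Python sketch below illustrates the computation on hypothetical toy labels drawn from the paper's four themes; it is not the study's annotation data or analysis code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: summed products of each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example (hypothetical labels, not the study's annotations):
a = ["Comprehension", "Reasoning", "Comprehension",
     "Metacognition", "Decoding", "Reasoning"]
b = ["Comprehension", "Reasoning", "Comprehension",
     "Reasoning", "Decoding", "Reasoning"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> kappa = 0.76
```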
Quantitative analysis revealed a pronounced skew toward comprehension prompts (59.6% of all prompts), followed by reasoning (29.8%), metacognition (8.5%), and decoding (2.1%). Most sessions (72%) contained exactly three prompts, the minimum required by the assignment, indicating that students prioritized efficiency over deeper engagement. Within a session, a natural progression from comprehension to reasoning was observable, but the transition to metacognitive reflection was rarely realized, suggesting a truncated cognitive trajectory. Over the eight‑week period, usage patterns remained remarkably stable; individual differences persisted, with no systematic shift toward more sophisticated prompting.
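As a quick check on these figures, the reported percentages can be converted back into approximate raw counts out of the 838 prompts; the arithmetic below is our own reconstruction, not a table from the paper.

```python
# Back-calculating approximate prompt counts from the reported shares.
# The percentages come from the paper; the rounding is ours.
total_prompts = 838
shares = {"Comprehension": 0.596, "Reasoning": 0.298,
          "Metacognition": 0.085, "Decoding": 0.021}
for theme, share in shares.items():
    print(f"{theme:>13}: ~{round(total_prompts * share)} prompts")
# -> roughly 499, 250, 71, and 18 prompts, summing to 838.
```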
Qualitative interviews with five participants uncovered an “intention‑behavior gap.” Although students articulated that well‑crafted prompts demand effort and yield better learning outcomes, they seldom applied this insight in practice, citing time pressure and assignment constraints. A novel “reading through AI” strategy emerged: students first consumed AI‑generated summaries, then selectively consulted the original text for sections they deemed interesting or essential. This triage approach reflects an efficiency‑driven mindset that favors rapid information extraction over thorough textual analysis.
The authors discuss three design implications for AI-supported reading environments. First, scaffolding mechanisms should monitor a learner's cognitive stage and proactively nudge them toward higher-order reasoning and metacognitive activities (e.g., posing reflective questions after a summary). Second, prompt-engineering guidance should be embedded directly into the workflow, offering templates or real-time feedback that lower the up-front effort of sophisticated prompting. Third, adaptive systems could personalize feedback based on each student's interaction patterns, encouraging balanced engagement across all four cognitive themes.
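To make the first implication concrete, a scaffolding component could track the highest-order theme a session has reached and suggest one step up. The sketch below is a minimal illustration under our own assumptions; the theme ordering follows the paper's schema, but the thresholds and messages are hypothetical, not a system the authors describe.

```python
from collections import Counter

# Themes ordered from lower- to higher-order engagement, per the schema.
ORDER = ["Decoding", "Comprehension", "Reasoning", "Metacognition"]

def suggest_nudge(session_codes):
    """Return a follow-up suggestion one level above the session's peak theme."""
    counts = Counter(session_codes)
    highest = max((ORDER.index(c) for c in counts), default=-1)
    if highest <= ORDER.index("Comprehension"):
        return "Try asking how these ideas connect, or whether the argument holds."
    if highest == ORDER.index("Reasoning"):
        return "Consider a reflective prompt: what is still unclear to you?"
    return None  # The session already reached metacognitive engagement.

# A session stuck at comprehension-level prompting triggers a nudge:
print(suggest_nudge(["Comprehension", "Comprehension", "Decoding"]))
```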
Overall, the study contributes an empirically grounded coding framework for AI‑reading interactions, evidence of stable yet limited cognitive engagement, and a clear articulation of the gap between students’ stated best practices and actual behavior. It highlights the risk that AI tools, while enhancing convenience and personalization, may inadvertently encourage shallow processing if not deliberately designed to sustain deeper cognitive involvement. Future work should expand the sample size, explore cross‑disciplinary contexts, and link interaction patterns to measurable learning outcomes.
Comments & Academic Discussion
Loading comments...
Leave a Comment