"Everyone's using it, but no one is allowed to talk about it": College Students' Experiences Navigating the Higher Education Environment in a Generative AI World

"Everyone's using it, but no one is allowed to talk about it": College Students' Experiences Navigating the Higher Education Environment in a Generative AI World
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Higher education students are increasingly using generative AI in their academic work, but existing institutional practices have not yet adapted to this shift. Through semi-structured interviews with 23 college students, our study examines the environmental and social factors that influence students' use of AI. Findings show that institutional pressures such as deadlines, exam cycles, and grading lead students to engage with AI even when they believe it undermines their learning. Social influences, particularly peer micro-communities, establish de facto AI norms regardless of official AI policies. Campus-wide "AI shame" is prevalent, often pushing AI use underground. Current institutional AI policies are perceived as generic, inconsistent, and confusing, resulting in routine noncompliance. Students also develop value-based self-regulation strategies, but environmental pressures create a gap between their intentions and their behaviors. Our findings frame student AI use as a situated practice, and we discuss implications for institutions, instructors, and system and tool designers seeking to effectively support student learning with AI.


💡 Research Summary

This paper presents a qualitative investigation into how college students navigate the higher‑education environment in a world where generative AI tools (e.g., ChatGPT, Perplexity, Claude) are widely available. The authors conducted semi‑structured interviews with 23 undergraduate, master’s, and doctoral students at a large public university in the United States between May and July 2025. Participants were recruited via Slack, Discord, campus flyers, and classroom announcements; all had prior experience with at least one generative AI system. Each interview lasted roughly an hour, was recorded on Zoom, transcribed verbatim, and analyzed using Reflexive Thematic Analysis.

The study is organized around three research questions: (1) What environmental factors influence students’ engagement with AI in academic contexts? (2) How do students feel about these influences? (3) What institutional or policy changes do students recommend? The analysis yielded four major thematic clusters.

1. Institutional and Academic Pressure
Students repeatedly described how deadlines, exam cycles, grading schemes, and the sheer volume of coursework create a “time‑pressure” environment that makes AI feel necessary. When a deadline looms, participants reported turning to AI as a shortcut to meet expectations, even if they believed it might undermine deep learning. This finding highlights that existing workload‑management policies are ill‑suited to an AI‑rich landscape; the pressure to produce quickly directly fuels AI adoption.

2. Peer Micro‑Communities and Informal Norms
Beyond formal rules, the dominant influence came from peers. Students described a campus‑wide "AI shame" culture: "Everyone's using it, but no one is allowed to talk about it." Within study groups and Discord channels, de facto norms emerged that encouraged covert AI use while simultaneously stigmatizing open discussion. These informal norms often overrode official policies, normalizing policy violations and creating a hidden ecosystem of AI reliance.

3. Perception of Institutional AI Policies
Current university AI policies were perceived as generic, outdated, and inconsistent across departments. Participants characterized them as "one‑size‑fits‑all" documents that lack concrete examples, making compliance difficult. Consequently, routine noncompliance was reported, with many students viewing bans as impractical and likely to push usage underground. The authors argue that a prohibition‑centric approach may backfire, fostering secrecy rather than responsible use.

4. Value‑Based Self‑Regulation and the Intention‑Behavior Gap
Students expressed personal values that emphasized learning integrity, originality, and critical thinking. Many described self‑imposed boundaries—such as limiting AI to brainstorming, double‑checking outputs, or avoiding AI for high‑stakes assessments. However, when confronted with intense academic pressure, these boundaries often collapsed, producing a clear intention‑behavior gap. This gap underscores the need for environmental redesign rather than relying solely on individual self‑control.

Student Recommendations
Participants advocated for collaborative policy development involving students, faculty, and administrators to produce transparent, context‑specific guidelines. They suggested increasing in‑person assessments (quizzes, oral exams) to better gauge learning when AI is pervasive. A strong call was made for campus‑wide AI‑literacy courses that teach both technical capabilities and ethical considerations. Finally, they urged AI tool designers to embed provenance and confidence indicators that help users verify and reflect on AI‑generated content.
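To make the last recommendation concrete, here is a minimal, hypothetical sketch of what provenance and confidence metadata attached to AI output could look like at the data level. The paper does not specify any implementation; the class, field names, and the 0.7 threshold below are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceTag:
    """Hypothetical metadata attached to one AI-generated passage."""
    model_name: str       # which model produced the text
    prompt_summary: str   # short description of the user's request
    confidence: float     # 0.0-1.0: the model's self-reported certainty
    sources: list[str] = field(default_factory=list)  # supporting citations, if any

def needs_verification(tag: ProvenanceTag, threshold: float = 0.7) -> bool:
    """Flag passages the user should double-check before relying on them."""
    return tag.confidence < threshold or not tag.sources

# A low-confidence, unsourced passage gets flagged for user review.
tag = ProvenanceTag(model_name="example-model",
                    prompt_summary="summarize study findings",
                    confidence=0.55)
assert needs_verification(tag)
```

Surfacing a flag like this in the interface would give students a built-in prompt to verify and reflect on AI-generated content, which is the kind of scaffolding participants asked designers for.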

Implications
The authors argue that AI use should be framed not as a form of academic misconduct but as a situated practice shaped by institutional structures and social cultures. Effective responses must therefore (a) redesign workload and assessment policies to reduce time‑pressure incentives, (b) acknowledge and reshape peer norms through open dialogue, (c) replace blanket bans with nuanced, actionable guidelines, and (d) provide scaffolding—through pedagogy and tool design—that supports students’ value‑aligned self‑regulation.

Conclusion
The study fills a gap in the literature by offering rich, qualitative insight into the environmental and social determinants of AI adoption among college students. It demonstrates that students’ AI behavior is largely a product of external pressures and community norms, and that policy, instructional design, and AI‑tool features must evolve in tandem to support responsible, learning‑enhancing AI use. Future work is suggested to expand the sample across institutions, quantify the prevalence of identified themes, and evaluate the impact of the proposed policy interventions.


Comments & Academic Discussion

Loading comments...

Leave a Comment