Student Perceptions of Large Language Models Use in Self-Reflection and Design Critique in Architecture Studio


This study investigates the integration of Large Language Models (LLMs) into the feedback mechanisms of the architectural design studio, shifting the focus from generative production to reflective pedagogy. Employing a mixed-methods approach with surveys and semi-structured interviews with 22 architecture students at the Singapore University of Technology and Design, the research analyzes student perceptions across three distinct feedback domains: self-reflection, peer critique, and professor-led reviews. The findings reveal that students engage with LLMs not as authoritative instructors, but as collaborative “cognitive mirrors” that scaffold critical thinking. In self-directed learning, LLMs help structure thoughts and overcome the “blank page” problem, though they are limited by a lack of contextual nuance. In peer critiques, the technology serves as a neutral mediator, mitigating social anxiety and the “fear of offending”. Furthermore, in high-stakes professor-led juries, students utilize LLMs primarily as post-critique synthesis engines to manage cognitive overload and translate abstract academic discourse into actionable design iterations.


💡 Research Summary

This paper investigates how architecture students perceive the integration of Large Language Models (LLMs) into the three core feedback loops of the design studio: self‑reflection, peer critique, and professor‑led juries. Using a sequential explanatory mixed‑methods design, the authors collected quantitative data from three online surveys (one for each feedback modality) and then conducted semi‑structured interviews with a purposive sample of 22 students from the Singapore University of Technology and Design. All participants had experience with studio critiques and had used LLMs for at least one studio‑related task; the sample had a balanced gender mix and spanned a range of study years.

Quantitative results show that a majority (58%) view LLMs primarily as “discussion partners” rather than as tutors or critics. Students most frequently engage LLMs during design iteration (41%) and after formal critiques (37%). In the self‑reflection domain, LLMs are used to check the clarity and justification of ideas, acting as a “cognitive mirror” that helps students overcome the “blank page” problem without delivering definitive answers.

In the peer‑critique context, LLMs are perceived as neutral mediators. Students report that the AI’s non‑judgmental stance reduces social anxiety and the “fear of offending” that often leads to vague or overly positive feedback. Interview excerpts illustrate that participants feel comfortable voicing tentative or critical thoughts to an LLM, which they see as a safe interlocutor.

For professor‑led juries, 95% of respondents identify the ideal LLM role as a “post‑critique reflection partner.” Here the technology is valued for its ability to synthesize dense and sometimes contradictory jury comments, translate academic jargon into actionable design recommendations, and alleviate cognitive overload. This post‑critique synthesis function enables students to process feedback more systematically and to generate concrete iteration plans.

The authors also acknowledge limitations of LLMs: a lack of contextual nuance, potential misinterpretation of design intent, and the risk of homogenizing creative output. Consequently, they argue that LLMs should not replace human instructors but should complement them, handling low‑level synthesis and emotional buffering while faculty retain responsibility for high‑order conceptual guidance and ethical mentorship.

Overall, the study contributes empirical evidence that LLMs can reframe feedback in architectural education from a predominantly hierarchical, anxiety‑inducing process to a more collaborative, reflective, and cognitively manageable one. It proposes a hybrid feedback model in which AI serves as a scaffold for reflection and a neutral conduit for critique, and it calls for further research with larger samples, varied LLM architectures, and longitudinal links to design performance to validate and extend these findings.

