Large Language Models in Game Development: Implications for Gameplay, Playability, and Player Experience


Authors: Keeryn Johnson, Muhammad Ahmed, Charlie Lang

Keeryn Johnson (keeryn.johnson@ucalgary.ca), Muhammad Ahmed (muhammad.ahmed3@ucalgary.ca), Charlie Lang (charlie.lang@ucalgary.ca), Sahib Thethi (sahib.thethi@ucalgary.ca), Wilson Zheng (wilson.zheng@ucalgary.ca), and Ronnie de Souza Santos (ronnie.desouzasantos@ucalgary.ca) — University of Calgary, Calgary, Alberta, Canada

Abstract
This paper investigates how the integration of large language models influences gameplay, playability, and player experience in game development. We report a collaborative autoethnographic study of two game projects in which LLMs were embedded as architectural components. Reflective narratives and development artifacts were analyzed using gameplay, playability, and player experience as guiding constructs. The findings suggest that LLM integration increases variability and personalization while introducing challenges related to correctness, difficulty calibration, and structural coherence across these concepts. The study provides preliminary empirical insight into how generative AI integration reshapes established game constructs and introduces new architectural and quality considerations within game engineering practice.

Keywords: LLMs, game development, Gameplay, Playability, Player Experience

ACM Reference Format:
Keeryn Johnson, Muhammad Ahmed, Charlie Lang, Sahib Thethi, Wilson Zheng, and Ronnie de Souza Santos. 2018. Large Language Models in Game Development: Implications for Gameplay, Playability, and Player Experience. In Companion Proceedings of the 34th ACM Symposium on the Foundations of Software Engineering (FSE '26), July 5-9, 2026, Montreal, Canada. ACM, New York, NY, USA, 6 pages.
https://doi.org/XXXXXXX.XXXXXXX

1 Introduction
Over the years, game development has been investigated within the software engineering literature, including discussions of processes, challenges, requirements engineering, testing, and quality assurance in the context of video games [2, 13, 23]. Game software engineering research reports sustained attention to requirements, usability, architecture, testing, and process management [2]. However, unlike general software systems, which are typically evaluated in terms of functional correctness and usability, video games are described as interactive systems whose main aim is to provide fun and entertainment [11]. In this context, general software characteristics alone are considered insufficient to achieve optimal player experience, indicating that game development must address additional concerns beyond traditional software projects [11]. Within this context, gameplay, playability, and player experience emerge as key constructs [9, 11, 20, 22]. Gameplay has been defined as the set of activities that can be performed by the player and by other entities in response to player actions within the virtual world [9].
Playability has been conceptualized as the instantiation of usability in games and as a construct that should be assessed throughout the development process to support player experience, with structured facets and measurable attributes [9, 11, 20, 22]. Player experience is related to engagement, enjoyment, and the interactive processes that occur during gameplay [15, 16]. In this sense, gameplay concerns the activities available in the game, playability concerns the quality with which these activities can be performed, and player experience concerns the perceptions and responses that emerge during interaction [9, 11, 15, 16, 20, 22].

Recently, advances in artificial intelligence, particularly large language models, have introduced new forms of content generation, adaptive agents, and mixed-initiative interaction into games [19, 25, 27]. Large language models have been applied to procedural content generation, dialogue systems, gameplay-related design tasks, and mixed-initiative development support [19, 25, 27]. These studies report opportunities for increased variability and personalization, while also identifying challenges related to unpredictability, coherence, and evaluation [19, 25, 27]. In this context, current research primarily documents applications and technical capabilities of generative models in game contexts [25, 27]. However, comparatively less attention has been given to exploring how the integration of large language models during development relates to established constructs such as gameplay, playability, and player experience.

Based on this problem, in this paper we adopt an autoethnographic approach [8, 24, 28] to describe our experience developing two games that integrate large language models as part of their structure, with the goal of analyzing how the use of LLMs affected gameplay, playability, and player experience. Accordingly, we address the following research question: RQ.
What is the impact of large language models on gameplay, playability, and player experience in the context of game development? By answering this question, we aim to provide preliminary empirical insights into how LLM integration influences gameplay structures, playability considerations, and player experience during development.

2 Method
This study adopts a collaborative autoethnographic approach to investigate how the integration of LLMs influenced gameplay, playability, and player experience in two game projects. Autoethnography is an empirical method that describes and systematically analyzes personal experience in order to understand broader practices [3, 8]. In contrast to traditional ethnography, where an external researcher observes a social setting, collaborative autoethnography positions members of the setting as co-authors who reflect on and interpret their own practice [3]. In this study, student developers document and analyze their lived experience integrating LLMs within a game software engineering context. Consistent with analytic expectations in autoethnographic research, narratives are not presented solely as personal accounts but are examined through a structured conceptual lens [8]. This work is situated within ethnographic traditions in software engineering that emphasize the analysis of socio-technical development practices and contextualized design processes [24, 28].

Context and Setting. The study was conducted in two educational settings: a third-year undergraduate course on software architecture and an undergraduate research project focused on software sustainability. Across these contexts, seven students developed two digital games over periods of three months and six months, respectively. Integration of LLMs was a mandatory component of both projects.
Students were required to incorporate generative models into the game architecture as functional elements rather than auxiliary tools. Implemented uses included dialogue generation, narrative branching, non-player character behavior, and procedural content generation. Google Gemini and OpenAI models were used to support these functionalities. Development activities were embedded within standard software engineering workflows, including architectural design, implementation, iterative refinement, and testing.

Participants and Authorship. Seven students participated in the projects, and five are co-authors of this paper. All teams contributed reflective narratives. Reflections were written individually and focused on development decisions, implementation challenges, and perceived effects of LLM integration on gameplay structures, playability characteristics, and player experience. Authorship reflects both direct involvement in development and intellectual contribution to the interpretation of experiences. The instructor acted as supervisor and co-author but did not include personal experience as primary data.

Game Artifacts. Two games, Wizdom Run and Sena, were developed as the empirical context of this study. In both cases, large language models were integrated as functional components that directly shaped gameplay structure and player interaction. Wizdom Run (Figure 1) is a study-oriented role-playing game implemented in Unity, structured around an auto-run campaign combined with combat mechanics. Players authenticate, create a character, and import personal study notes to initiate a campaign. The LLM processes imported documents using PDF extraction and generates multiple-choice questions across three difficulty levels. These questions are embedded into the core gameplay loop. During standard combat, correct answers replenish mana required for spell casting, while incorrect answers restrict available actions.
Boss battles introduce a turn-based card system in which answering questions correctly grants bonuses or abilities. Questions are generated through OpenAI models, structured into JSON format, and stored in a PostgreSQL backend to support persistence of campaign state and player progress. In this game, the LLM functions primarily as a dynamic content generator that personalizes progression and links knowledge assessment directly to combat mechanics.

Sena (Figure 2) integrates LLM-driven interaction within a staged learning and gameplay structure focused on sustainability in software engineering. The system combines structured instructional content, conversational interaction, quiz-based reinforcement, and a scenario-driven game mode. The LLM supports dialogue-based clarification, scenario interpretation, and feedback generation. In the game mode, players evaluate sustainability-related decisions, select professional roles, and observe cumulative consequences represented through changes in the game state. Unlike Wizdom Run, where the LLM primarily generates assessment questions tied to combat, Sena uses the LLM to support reflective dialogue and consequence-oriented reasoning within a structured decision-making environment. Across both artifacts, LLMs were embedded as architectural components rather than auxiliary tools. Their integration influenced content variability, pacing, difficulty progression, and the coupling between player decisions and system responses, forming the basis for the experiential analyses in this study.

Data Collection. The primary dataset consists of five reflective narratives, each approximately two pages in length. Reflections were written both during development and after project completion, allowing participants to document evolving perceptions of LLM integration. Students were guided by prompts asking them to reflect explicitly on gameplay, playability, and player experience.
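To make the "structured into JSON format" step above concrete, the following minimal Python sketch parses one generated multiple-choice question into a typed record and enforces a fixed shape before it could reach a backend. The field names, difficulty labels, and the parse_question helper are our illustrative assumptions, not Wizdom Run's actual schema or code.

```python
import json
from dataclasses import dataclass

# Illustrative sketch only: field names, difficulty labels, and the helper
# below are assumptions, not taken from the Wizdom Run codebase.

@dataclass
class Question:
    prompt: str
    options: list        # exactly four answer choices
    answer_index: int    # index of the correct option
    difficulty: str      # "easy", "medium", or "hard"

def parse_question(raw: str) -> Question:
    """Parse one LLM response into a typed record, enforcing the fixed
    shape that a gameplay loop and persistence layer would expect."""
    data = json.loads(raw)
    q = Question(
        prompt=data["prompt"],
        options=list(data["options"]),
        answer_index=int(data["answer_index"]),
        difficulty=data["difficulty"],
    )
    if len(q.options) != 4:
        raise ValueError("expected exactly four options")
    if not 0 <= q.answer_index < 4:
        raise ValueError("answer_index out of range")
    if q.difficulty not in {"easy", "medium", "hard"}:
        raise ValueError("unknown difficulty label")
    return q

# A well-formed response parses; anything off-schema raises before storage.
raw = ('{"prompt": "2 + 2 = ?", "options": ["3", "4", "5", "22"],'
       ' "answer_index": 1, "difficulty": "easy"}')
q = parse_question(raw)
```

Rejecting off-schema responses at this boundary is what lets the rest of the game logic stay deterministic, a point the participants return to in the Results section.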
In addition to narratives, design documents and code repositories were consulted to contextualize implementation decisions and support triangulation of reported experiences.

Analysis. Analysis combined thematic reading and content analysis [7]. First, narratives were read holistically to identify recurring themes related to the integration of LLMs. Second, segments of text were examined in relation to the constructs of gameplay, playability, and player experience, which functioned as sensitizing concepts guiding interpretation. Themes emerging across individual narratives were synthesized to characterize how LLM integration influenced structural aspects of gameplay, stability and coherence of playability, and perceived qualities of player experience. Experiences from both games were merged to identify common patterns rather than being treated as separate case comparisons. Analysis was conducted primarily by the instructor-researcher, with peer review and interpretive discussion involving co-authors to refine thematic interpretations.

Reflexivity. Given the collaborative autoethnographic design [3, 8], reflexivity was maintained throughout the study. Although the instructor served as course leader and supervisor, only student-authored narratives were treated as primary data. Triangulation across multiple individual accounts and consultation of development artifacts supported consistency of interpretation. Peer review among co-authors functioned as an additional mechanism to interrogate assumptions and refine themes.

Threats to Validity. This study is context-bound and does not aim for statistical generalizability, consistent with collaborative autoethnographic traditions [8].
Findings are derived from a small number of student-authored reflections within two undergraduate development settings, which limits external validity. Insider positionality and reliance on self-reported narratives introduce potential interpretive bias and selective recall [28]. These threats were mitigated through triangulation across multiple individual reflections, consultation of design and code artifacts, and peer discussion among co-authors during analysis. Additionally, the study focuses on developer perspectives rather than independent player evaluation; therefore, conclusions regarding gameplay, playability, and player experience reflect development experience rather than external user testing.

Ethical Considerations. All student participants contributed voluntarily and are co-authors of this manuscript. The reflections were written by the individuals represented in the study and are published with their explicit consent. In accordance with institutional guidelines, formal ethics approval was not required because the data consist of author-generated narratives. Participation was not linked to grading, and contributors retained the right to withdraw from publication.

3 Results
This section synthesizes participant reflections on how LLM integration influenced gameplay, playability, and player experience.

Impact on Gameplay Structures. LLM integration altered gameplay by coupling generated game elements directly to rule execution and progression systems. Rather than functioning as optional narrative additions, generated content and adaptive elements became operational components of advancement, resource management, and player actions. One participant emphasized that question generation in the game "directly drives skill point replenishment, boss encounters, and level progression, making it a core part of the gameplay" (P1).
In this conguration, generative outputs determine d access to resources and continuation of core mechanics, positioning the LLM as an embedded rule-enforcing component within the game system. At the same time, LLMs introduced structured variability across repeated playthroughs by dynamically generating content tied to player-provided inputs. Dierences in input materials altered the set of in-game elements, pacing, and exposur e to challenge while the overarching mechanics remained stable. As one participant Figure 1: WizdomRun game interface in use noted, “Each campaign oers a vastly dierent user experience, de- spite the underlying code remaining basic” (P1). Another observed that “each playthrough will dier in its questions and answers” (P2). Howev er , this variability was bounde d by both mo del congura- tion and system constraints. One de veloper reected that when the same document was reused, “you generally get the same kinds of questions” (P5), indicating that generative diversity was moderated by architectural decisions. In interactive contexts, player decisions were incorporated into subse quent system responses, such that “the LLM would be fed the decision that was made. . . allowing the system to generate a custom response” (P4). In general, these ac- counts show that LLMs aected gameplay by layering probabilistic content generation onto stable mechanical frameworks, expanding possible game states without replacing deterministic rule structures. Impact on Playability . While generative mechanisms expanded content exibility , they simultaneously required architectural con- tainment to preserve system coherence and functional reliability . Participants described the ne ed to constrain outputs to predictable formats to ensure seamless integration with backend systems and gameplay logic. One participant explained that “ensuring that LLM responses were formatted exactly as expe cted. . . 
allowed for the back- end design to remain cohesive and well-structured” (P1). Another noted that explicitly dening the expected return format “allowed me to create objects in the C# aspect of the game” (P4), ensuring reliable parsing and system stability . These reections indicate that maintaining playability depended on aligning generative outputs Conference’17, July 2017, W ashington, DC, USA Keeryn Johnson, Muhammad Ahmed, Charlie Lang, Sahib Thethi, Wilson Zheng, and Ronnie de Souza Santos Figure 2: Sena game interface in use with structured schemas and validation processes. Challenge cali- bration also emerged as a tension. Although generated elements were intentionally categorized to support escalating diculty (men- tioned by P5), inconsistencies in how challenge levels were pro- duced disrupted pacing. One participant obser ved that “some of the questions generate d for medium or hard felt like the easy set” (P3), challenging progression balance . Such mismatches required iterative adjustment to sustain coherent diculty curves. The reli- ability of generated content further ae cted perceived gameplay fairness. Because player outcomes were dir ectly tied to generate d elements, incorrect outputs undermined legitimacy . One partici- pant recalled that “a simple math question app eared. . . but none of the answer options were correct” (P2). Another note d that incorrect responses “would tarnish the player’s experience be cause they’re not actually learning; they’re just tr ying to gure out how the AI thinks” (P5). Participants also reported patterned output structures, where the correct option “would show up in the same multiple-choice slot more than once” (P3), potentially enabling unintended strategies. These accounts indicate that playability in LLM-integrated games is tightly coupled to output reliability , consistency , and controlled variability . Impact on Player Experience. 
LLM integration enhanced perceived personalization by aligning generated content with player-provided inputs and contextual queries. Allowing users to introduce their own materials expanded the scope of in-game elements and individualized progression pathways. One participant remarked that "the range of topics that players can test themselves on is endless" (P2). In conversational features, players could request clarification dynamically; as one developer noted, "the player could ask clarifying questions with additional context" (P4). Such mechanisms were perceived as increasing engagement through contextual responsiveness and adaptability. However, experiential quality depended on balancing variability with predictability. One participant reflected that "having variable gameplay to a degree where there's still some expectancy offers more variety," while also emphasizing that "games need some level of predictability to remain fair and playable" (P5). When contextual grounding was insufficient, the system could "simply answer like any publicly available LLM and didn't determine if the question was on topic" (P4), weakening coherence and trust. Incorrect outputs similarly disrupted immersion. These reflections suggest that player experience in generative game systems is shaped not only by personalization and dynamic content but also by perceived consistency.

4 Discussion
Existing studies on large language models in games primarily emphasize application domains such as procedural content generation, dialogue systems, and mixed-initiative support, often framing variability and personalization as key benefits while noting challenges of unpredictability and evaluation [19, 25, 27]. Our findings align with this body of work in that participants experienced increased variability and adaptive interaction as central effects of LLM integration.
However, our study extends prior work by exploring how such integration reconfigures established game constructs at the architectural level. Rather than focusing solely on expressive potential, the findings show that coupling probabilistic generation to deterministic rule systems affects progression mechanics, difficulty calibration, and perceived gameplay fairness. In contrast to the literature that treats unpredictability primarily as a content-level concern [19, 25, 27], our findings indicate that variability operates as a structural property that must be mediated through prompt engineering, schema enforcement, and validation mechanisms.

When situated within software engineering research on game development, which emphasizes architecture, testing, requirements management, and quality assurance in game contexts [2, 13, 23], our findings suggest that LLM integration introduces a distinct class of architectural and quality concerns. Traditional game software engineering highlights the importance of managing complexity and ensuring system quality through structured design and testing practices [2, 13]. In our study, variability becomes intertwined with rule consistency, progression control, and perceived gameplay fairness, shifting part of architectural control from fully pre-authored logic to generative subsystems. The need for prompt engineering, structured output schemas, and validation pipelines indicates that LLM components change coordination patterns within the architecture, requiring deliberate alignment between probabilistic generation and deterministic gameplay logic.
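A containment layer of the kind discussed here can be sketched in a few lines. The Python fragment below is a hypothetical illustration (the function name and policy are ours, not code from either studied game): it rejects ambiguous generated questions and shuffles answer slots so the correct option does not recur in a patterned position, the issue one participant (P3) reported.

```python
import random

# Hedged sketch of a validation/containment step between generation and
# gameplay; names and policy are illustrative assumptions, not taken from
# Wizdom Run or Sena.

def contain(question: dict, rng: random.Random) -> dict:
    """Validate one generated question and randomize its answer slots."""
    options = question["options"]
    correct = options[question["answer_index"]]
    # Reject ambiguous outputs where the marked answer text appears in more
    # than one slot. (Semantic wrongness, as in P2's report of a question
    # with no right option, still needs a separate check.)
    if options.count(correct) != 1:
        raise ValueError("correct answer duplicated or ambiguous")
    shuffled = options[:]
    rng.shuffle(shuffled)  # break positional patterns in the correct slot
    return {**question,
            "options": shuffled,
            "answer_index": shuffled.index(correct)}

q = {"prompt": "Which layer parses LLM output?",
     "options": ["renderer", "validator", "shader", "audio"],
     "answer_index": 1}
out = contain(q, random.Random(7))
```

The point of the sketch is the placement, not the policy: validation sits between the probabilistic generator and the deterministic rule system, so malformed outputs never become game state.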
The contribution of this study, therefore, lies in articulating how generative AI integration simultaneously expands gameplay variability and imposes new constraints on playability, quality assurance, and experiential trust within established software engineering practices for games.

Overall, our paper has implications for research and practice. From a research perspective, our findings suggest that future work should move beyond cataloguing generative applications and instead investigate how specific architectural integration strategies influence gameplay structure, playability stability, and player experience coherence. From a practice perspective, our findings indicate that effective LLM integration into games requires deliberate containment strategies embedded within standard software engineering workflows. Developers must treat correctness, formatting determinism, and difficulty alignment as gameplay-critical properties rather than peripheral implementation details. More broadly, our study suggests that integrating LLMs into games is not merely a matter of adding adaptive content but of redesigning the relationship between generative flexibility, structural control, and experiential reliability.

5 Conclusions and Future Work
This paper investigated how the integration of large language models influences gameplay, playability, and player experience within game development contexts through a collaborative autoethnographic study of two LLM-integrated games. Our findings indicate that LLM integration embeds generative mechanisms within core rule systems, increasing variability and personalization while introducing new constraints related to stability, correctness, and perceived gameplay. By coupling probabilistic generation with deterministic gameplay logic, LLM components require structured prompt design, validation, and calibration to preserve playability and coherence.
Overall, these results provide preliminary empirical insight into how generative AI integration intersects with established game constructs and software engineering concerns. As future work, we will conduct user-centered empirical studies to extend this investigation beyond developer reflections and investigate how LLM-driven variability, correctness, and architectural coupling influence engagement and overall player experience during gameplay.

Data Availability
Both games developed and analyzed in this study are open source. To preserve the integrity of the double-blind review process, links to the corresponding public repositories will be provided in this section after the review of the manuscript. Sena is publicly available online: https://www.senaplural.ca/

References
[1] Saiqa Aleem, Luiz Fernando Capretz, and Faheem Ahmed. 2016. Game development software engineering process life cycle: a systematic review. Journal of Software Engineering Research and Development 4, 1 (2016), 6.
[2] Apostolos Ampatzoglou and Ioannis Stamelos. 2010. Software engineering research for computer games: A systematic review. Information and Software Technology 52, 9 (2010), 888–901.
[3] Heewon Chang, Faith Ngunjiri, and Kathy-Ann C Hernandez. 2016. Collaborative Autoethnography. Routledge.
[4] Isaac Cheah, Anwar Sadat Shimul, and Ian Phau. 2022. Motivations of playing digital games: A review and research agenda. Psychology & Marketing 39, 5 (2022), 937–950.
[5] Jorge Chueca, Javier Verón, Jaime Font, Francisca Pérez, and Carlos Cetina. 2024. The consolidation of game software engineering: A systematic literature review of software engineering for industry-scale computer games. Information and Software Technology 165 (2024), 107330.
[6] Heather Desurvire, Martin Caplan, and Jozsef A Toth. 2004. Using heuristics to evaluate the playability of games. In CHI'04 Extended Abstracts on Human Factors in Computing Systems. 1509–1512.
[7] James W Drisko and Tina Maschi. 2016. Content Analysis. Oxford University Press.
[8] Carolyn Ellis, Tony E Adams, and Arthur P Bochner. 2011. Autoethnography: an overview. Historical Social Research/Historische Sozialforschung (2011), 273–290.
[9] Carlo Fabricatore. 2007. Gameplay and game mechanics: a key to quality in videogames. In ENLACES (MINEDUC Chile)-OECD Expert Meeting on Videogames and Education.
[10] Roberto Gallotta, Graham Todd, Marvin Zammit, Sam Earle, Antonios Liapis, Julian Togelius, and Georgios N Yannakakis. 2024. Large language models and games: A survey and roadmap. IEEE Transactions on Games (2024).
[11] Jose Luis González Sánchez, Natalia Padilla Zea, and Francisco L Gutiérrez. 2009. From usability to playability: Introduction to player-centred video game development process. In International Conference on Human Centered Design. Springer, 65–74.
[12] Jin Jeong and Tak Yeon Lee. 2025. Ligs: developing an LLM-infused game system for emergent narrative. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–12.
[13] Christopher M Kanode and Hisham M Haddad. 2009. Software engineering challenges in game development. In 2009 Sixth International Conference on Information Technology: New Generations. IEEE, 260–265.
[14] Xiaozhou Li, Zheying Zhang, and Kostas Stefanidis. 2021. A data-driven approach for video game playability analysis based on players' reviews. Information 12, 3 (2021), 129.
[15] David Marshall, Damien Coyle, Shane Wilson, and Michael Callaghan. 2013. Games, gameplay, and BCI: the state of the art. IEEE Transactions on Computational Intelligence and AI in Games 5, 2 (2013), 82–99.
[16] Frans Mäyrä and Laura Ermi. 2011. Fundamental components of the gameplay experience. Digarec Series 6 (2011), 88–115.
[17] Janne Paavilainen. 2020. Defining playability of games: functionality, usability, and gameplay.
In Proceedings of the 23rd International Conference on Academic Mindtrek. 55–64.
[18] Xiangyu Peng, Jessica Quaye, Sudha Rao, Weijia Xu, Portia Botchway, Chris Brockett, Nebojsa Jojic, Gabriel DesGarennes, Ken Lobb, Michael Xu, et al. 2024. Player-driven emergence in LLM-driven game narrative. In 2024 IEEE Conference on Games (CoG). IEEE, 1–8.
[19] Pankaj Pilaniwala, Girish Chhabra, and Prabhleen Kaur. 2024. The future of game development in the era of gen AI. In 2024 Artificial Intelligence for Business (AIxB). IEEE, 39–42.
[20] José Luis González Sánchez, Francisco Montero Simarro, Natalia Padilla Zea, and Francisco Luis Gutiérrez Vela. 2009. Playability as Extension of Quality in Use in Video Games. In I-USED.
[21] José Luis González Sánchez and Francisco Luis Gutiérrez Vela. 2014. Assessing the player interaction experiences based on playability. Entertainment Computing 5, 4 (2014), 259–267.
[22] José Luis González Sánchez, Francisco Luis Gutiérrez Vela, Francisco Montero Simarro, and Natalia Padilla-Zea. 2012. Playability: analysing user experience in video games. Behaviour & Information Technology 31, 10 (2012), 1033–1054.
[23] Ronnie ES Santos, Cleyton VC Magalhães, Luiz Fernando Capretz, Jorge S Correia-Neto, Fabio QB Da Silva, and Abdelrahman Saher. 2018. Computer games are serious business and so is their quality: particularities of software testing in game development from the perspective of practitioners. In Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. 1–10.
[24] Helen Sharp, Yvonne Dittrich, and Cleidson RB De Souza. 2016. The role of ethnographic studies in empirical software engineering. IEEE Transactions on Software Engineering 42, 8 (2016), 786–804.
[25] Penny Sweetser. 2024.
Large language models and video games: A preliminary scoping review. In Proceedings of the 6th ACM Conference on Conversational User Interfaces. 1–8.
[26] Swapnil Thakar, Saurabh Tiwari, and Santosh Singh Rathore. 2025. A human values perspective on playability issues of mobile games. International Journal of Human–Computer Interaction (2025), 1–28.
[27] Daijin Yang, Erica Kleinman, and Casper Harteveld. 2024. GPT for games: A scoping review (2020-2023). In 2024 IEEE Conference on Games (CoG). IEEE, 1–8.
[28] He Zhang, Xin Huang, Xin Zhou, Huang Huang, and Muhammad Ali Babar. 2019. Ethnographic research in software engineering: a critical review and checklist. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 659–670.
