A Digital Game Maturity Model (DGMM)
Game development is an interdisciplinary endeavor that spans artistic, software engineering, management, and business disciplines, and it is considered one of the most complex tasks in software engineering. This research facilitates a better understanding of the important dimensions of digital game development methodology. The increased popularity of digital games, the challenges game development organizations face in delivering quality games, and intense competition in the digital game industry together demand a game development maturity assessment. Consequently, this study presents a Digital Game Maturity Model for evaluating an organization's current development methodology. The model's framework consists of assessment questionnaires, a performance scale, and a rating method; the questionnaires collect information about current processes and practices. In general, this research contributes towards a comprehensive, unified strategy for game development maturity evaluation. Two case studies were conducted and their assessment results reported, demonstrating the maturity level of current development practices in two organizations.
💡 Research Summary
The paper addresses the growing need for a systematic assessment of game development processes in an industry characterized by high artistic, technical, and business complexity. Recognizing that traditional software maturity models such as CMMI and ISO/IEC 15504 (SPICE) do not fully capture the unique dynamics of game creation—particularly the iterative prototyping, rapid market feedback cycles, and cross‑disciplinary collaboration—the authors propose the Digital Game Maturity Model (DGMM).
DGMM is structured around five maturity levels (Initial, Managed, Defined, Quantitatively Managed, Optimizing) and eight core domains that reflect the full spectrum of game production: (1) Planning & Design, (2) Art & Graphics, (3) Engine & Technology, (4) Quality Assurance, (5) Project Management, (6) Human Resources, (7) Marketing & Business, and (8) Organizational Culture. For each domain, a set of 4–6 questionnaire items is defined, yielding a total of 42 items. Respondents rate each item on a 0‑4 Likert scale (0 = Never, 4 = Always). The raw scores are aggregated into domain averages and then into an overall mean. Pre‑defined threshold ranges map these averages to the five maturity levels, allowing a clear, numeric classification of an organization’s current state.
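The aggregation described above can be sketched in a few lines. The threshold ranges below are illustrative assumptions (the summary says only that pre-defined ranges exist), and the questionnaire responses are hypothetical:

```python
# Minimal sketch of the DGMM scoring procedure: per-domain averages of
# 0-4 Likert items, an overall mean, and a threshold-based level mapping.
# The cut-off values are illustrative, not taken from the paper.
from statistics import mean

# Hypothetical questionnaire responses: domain -> list of 0-4 Likert scores
responses = {
    "Planning & Design": [1, 2, 1, 1, 1],
    "Quality Assurance": [1, 1, 0, 1],
    "Project Management": [3, 4, 4, 3, 4],
}

# Illustrative upper bounds mapping the overall mean to the five levels
LEVELS = [
    (0.8, "Initial"),
    (1.6, "Managed"),
    (2.4, "Defined"),
    (3.2, "Quantitatively Managed"),
    (4.0, "Optimizing"),
]

def assess(responses):
    """Return per-domain averages, the overall mean, and a maturity level."""
    domain_avgs = {d: mean(scores) for d, scores in responses.items()}
    overall = mean(domain_avgs.values())
    level = next(name for upper, name in LEVELS if overall <= upper)
    return domain_avgs, overall, level

domain_avgs, overall, level = assess(responses)
print(f"Overall mean: {overall:.2f} -> {level}")
```

Averaging domain means (rather than pooling all 42 items) keeps each domain equally weighted regardless of how many items it contains.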
A distinctive feature of DGMM is its emphasis on quantitative metrics at the “Quantitatively Managed” and “Optimizing” stages. The model recommends the collection of key performance indicators (KPIs) such as bug density, sprint goal attainment, play‑test feedback turnaround time, code coverage, and asset reuse rates. These metrics serve two purposes: they provide objective evidence to complement the subjective questionnaire data, and they enable organizations to monitor continuous improvement over time.
To validate the model, the authors conducted two case studies. The first involved a small independent studio (≈30 staff) that primarily develops mobile titles. The DGMM assessment revealed low scores in Planning & Design (1.2) and Quality Assurance (1.0), indicating a lack of formal design documentation and insufficient automated testing. The studio’s overall maturity was classified as “Initial,” suggesting that basic process management practices are still emerging.
The second case study examined a large, multi‑disciplinary publisher (≈200 staff) responsible for AAA console games. This organization scored high in Project Management (3.6) and Marketing & Business (3.4), reflecting mature scheduling, risk‑management, and market‑analysis practices. However, the scores for Art & Graphics (2.3) and Engine & Technology (2.1) were modest, highlighting challenges in asset pipeline standardization and engine optimization. Both firms exhibited gaps in the quantitative stage: KPIs were either not defined or not consistently tracked, preventing progression to the “Optimizing” level.
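The gap analysis implied by the two case studies reduces to flagging domains whose average falls below a target. The 2.5 threshold below is an illustrative assumption; the domain scores are those reported above:

```python
# Sketch of a DGMM gap analysis: flag domains scoring below an
# illustrative improvement threshold. Scores are from the case studies.

studio_scores = {
    "indie studio": {"Planning & Design": 1.2, "Quality Assurance": 1.0},
    "AAA publisher": {"Project Management": 3.6, "Marketing & Business": 3.4,
                      "Art & Graphics": 2.3, "Engine & Technology": 2.1},
}

def improvement_targets(scores: dict, threshold: float = 2.5) -> list:
    """Return the domains scoring below the threshold, sorted by name."""
    return sorted(d for d, s in scores.items() if s < threshold)

for org, scores in studio_scores.items():
    print(org, "->", improvement_targets(scores))
```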
The discussion interprets these findings as evidence that DGMM can pinpoint specific strengths and weaknesses across the entire development lifecycle, thereby guiding targeted improvement initiatives. The authors acknowledge several limitations: the reliance on self‑reported questionnaire data introduces subjectivity; the model must be periodically updated to reflect emerging technologies (e.g., cloud gaming, AI‑driven content generation); and cultural factors—such as the tension between creative freedom and procedural rigor—are difficult to quantify.
In conclusion, DGMM offers a comprehensive, domain‑balanced framework for assessing game development maturity. It bridges the gap between traditional software process models and the artistic, fast‑paced nature of game production. The authors propose future work that includes integrating automated telemetry collection, applying machine‑learning techniques to analyze play‑test feedback, developing a cloud‑based dashboard for real‑time maturity tracking, and extending validation to a broader set of studios across different genres and platforms. By doing so, DGMM could become a standard benchmark for the industry, supporting higher quality, more predictable delivery, and sustained competitive advantage.