Agile Software Development Methods: A Comparative Review
Although agile software development methods have caught the attention of software engineers and researchers worldwide, scientific research on them remains scarce. The aim of this study is to order and make sense of the different agile approaches that have been proposed. The comparative review uses the following analytical perspectives: project management support, life-cycle coverage, type of practical guidance, adaptability in actual use, type of research objectives, and existence of empirical evidence. The results show that agile software development methods cover, without offering any rationale, different phases of the software development life-cycle, and that most of these methods fail to provide adequate project management support. Moreover, quite a few methods offer little concrete guidance on how to use their solutions or how to adapt them to different development situations. Empirical evidence after ten years of application remains limited. Based on these results, new directions for agile methods are outlined.
💡 Research Summary
The paper addresses the paradox that, despite the widespread enthusiasm for agile software development methods (ASDs) among practitioners and researchers, systematic scientific investigation of these methods remains limited. To bring order to the proliferating landscape of agile approaches, the authors conduct a comparative review that evaluates each method against six analytical dimensions: (1) Project Management Support, (2) Life‑Cycle Coverage, (3) Type of Practical Guidance, (4) Adaptability in Actual Use, (5) Type of Research Objectives, and (6) Existence of Empirical Evidence.
Methodology
The authors first define the six dimensions, drawing on established project‑management literature (e.g., PMBOK) for the first dimension and on software‑engineering life‑cycle models for the second. Practical guidance is examined in terms of concrete artefacts, prescribed roles, meeting structures, and tooling recommendations. Adaptability is assessed by looking for explicit mechanisms that allow a method to be tuned to different organizational sizes, cultures, or domains. Research objectives are classified as theoretical modeling, empirical validation, or pragmatic tool provision. Finally, the presence of empirical evidence is determined by counting peer‑reviewed studies, industrial case reports, and quantitative data that document real‑world outcomes.
Corpus
Twelve representative agile methods are selected for analysis: eXtreme Programming (XP), Scrum, Crystal, Feature‑Driven Development (FDD), Adaptive Software Development (ASD), Dynamic Systems Development Method (DSDM), Lean Software Development, Agile Modeling, OpenUP, Agile Unified Process (AUP), Kanban, and Scaled Agile Framework (SAFe). Each method is mapped onto a matrix that records whether it satisfies each of the six dimensions, and the authors discuss the patterns that emerge.
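The method-versus-dimension matrix described above can be sketched as a simple data structure. A minimal illustration in Python: the six dimension names come from the paper, but the boolean ratings below are invented placeholders, not the authors' actual assessments, and only three of the twelve methods are shown.

```python
# Hypothetical sketch of the paper's comparison matrix: each method is
# rated against the six analytical dimensions. All ratings here are
# invented placeholders for illustration only.
DIMENSIONS = [
    "project_management_support",
    "life_cycle_coverage",
    "practical_guidance",
    "adaptability",
    "research_objectives",
    "empirical_evidence",
]

matrix = {
    "XP":     {"project_management_support": False, "life_cycle_coverage": False,
               "practical_guidance": True,  "adaptability": False,
               "research_objectives": True, "empirical_evidence": False},
    "Scrum":  {"project_management_support": True,  "life_cycle_coverage": False,
               "practical_guidance": True,  "adaptability": False,
               "research_objectives": True, "empirical_evidence": False},
    "Kanban": {"project_management_support": False, "life_cycle_coverage": False,
               "practical_guidance": True,  "adaptability": True,
               "research_objectives": False, "empirical_evidence": False},
}

def coverage(method: str) -> float:
    """Fraction of the six dimensions a method satisfies."""
    ratings = matrix[method]
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

for name in matrix:
    print(f"{name}: {coverage(name):.0%}")
```

Recording each method as a row of booleans makes the patterns the authors discuss (e.g., a mostly empty empirical-evidence column) immediately visible when the matrix is printed or aggregated.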
Key Findings
- Project Management Support Is Generally Weak – Most methods concentrate on team collaboration, feedback loops, and engineering practices, while offering little or no guidance on schedule, cost, risk, or resource allocation. For instance, XP provides detailed practices for test‑driven development and pair programming but lacks any formal schedule‑control mechanism. Scrum introduces sprint planning and daily stand‑ups but does not prescribe systematic risk‑management or quality‑assurance processes.
- Uneven Life‑Cycle Coverage – Agile methods tend to focus on the implementation and testing phases. Only a few, such as DSDM and SAFe, attempt to address requirements analysis, design, deployment, and maintenance. Consequently, organizations that adopt a single method may need to supplement it with additional practices to achieve end‑to‑end coverage.
- Guidance Is Often Abstract – While many methods define roles (e.g., Scrum Master, Product Owner) and ceremonies (e.g., sprint review), they rarely provide concrete artefacts, templates, or decision‑making criteria. This abstraction makes it difficult for novice teams to translate the method into day‑to‑day activities without extensive interpretation.
- Adaptability Mechanisms Are Sparse – Although most methods claim to be “adaptable,” they seldom present systematic procedures for tailoring the method to specific contexts. Kanban is an exception, offering explicit flow‑management metrics (WIP limits, cumulative flow diagrams) that can be calibrated, but other methods rely on informal, experience‑based adjustments.
- Research Objectives Skew Toward Practice, Not Validation – The majority of the surveyed literature focuses on describing the method or providing a practitioner‑oriented guide, with far fewer studies dedicated to rigorous empirical validation. Even after a decade of real‑world use, peer‑reviewed evidence (e.g., controlled experiments, longitudinal case studies) remains scarce.
- Empirical Evidence Is Limited – The authors locate only a handful of quantitative studies that measure productivity gains, defect reduction, or team satisfaction attributable to a specific agile method. Most evidence is anecdotal or derived from industry white papers, which limits the ability to generalize findings across domains.
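The Kanban flow metrics cited in the adaptability finding above (WIP limits, cumulative flow diagrams) are concrete enough to sketch in a few lines. The board contents, column names, and WIP limit below are invented for illustration; the point is that these metrics are explicit and calibratable rather than informal.

```python
from collections import Counter

# Invented snapshot of a Kanban board: task -> current column.
board = {
    "T1": "todo", "T2": "doing", "T3": "doing",
    "T4": "review", "T5": "done", "T6": "done",
}

WIP_COLUMNS = {"doing", "review"}  # columns counted against the WIP limit
WIP_LIMIT = 3                      # hypothetical team policy

def wip(board: dict) -> int:
    """Number of tasks currently in work-in-progress columns."""
    return sum(1 for col in board.values() if col in WIP_COLUMNS)

def column_counts(board: dict) -> Counter:
    """Per-column task counts. A series of these daily snapshots, stacked
    over time, is exactly the data behind a cumulative flow diagram."""
    return Counter(board.values())

print(f"WIP: {wip(board)} (limit {WIP_LIMIT})")
print(dict(column_counts(board)))
```

A team "calibrates" the method, in the paper's sense, by adjusting `WIP_LIMIT` and watching how the column counts in successive snapshots respond.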
Discussion and Future Directions
The authors argue that the identified gaps hinder both academic progress and practical adoption. They propose several research avenues:
- Development of an integrated framework that couples agile engineering practices with traditional project‑management disciplines, thereby delivering a holistic governance model.
- Creation of phase‑by‑phase guidance, including concrete artefacts and checklists for requirements, design, deployment, and maintenance, to extend the life‑cycle coverage of existing methods.
- Formalization of adaptability through diagnostic models that assess organizational size, culture, and domain, followed by a “tailoring matrix” that prescribes which practices to retain, modify, or discard.
- Expansion of empirical research using multi‑project meta‑analyses, controlled field experiments, and long‑term case studies, with standardized performance metrics (e.g., cycle time, defect density, team morale).
- Development of educational curricula and tooling support that embed the detailed guidance missing from many methods, reducing the learning curve for new adopters.
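The standardized performance metrics proposed above become easy to compare across projects once comparable raw data is reported. A minimal sketch with invented figures: the metric definitions (mean cycle time, defects per KLOC) are conventional ones, not taken from the paper.

```python
from datetime import date

# Invented work items: (start of work, completion date).
items = [
    (date(2024, 1, 2), date(2024, 1, 9)),
    (date(2024, 1, 3), date(2024, 1, 8)),
    (date(2024, 1, 5), date(2024, 1, 15)),
]

defects_found = 12  # hypothetical defect count for the release
size_kloc = 8.0     # hypothetical delivered size, thousands of lines of code

# Cycle time: calendar days from start of work to completion, averaged.
cycle_times = [(done - start).days for start, done in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Defect density: defects per thousand delivered lines of code.
defect_density = defects_found / size_kloc

print(f"avg cycle time: {avg_cycle_time:.1f} days")
print(f"defect density: {defect_density:.2f} defects/KLOC")
```

Agreeing on definitions this precise is what would let the multi-project meta-analyses the authors call for aggregate results instead of comparing incompatible numbers.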
Conclusion
By systematically mapping twelve prominent agile methods against six well‑defined analytical dimensions, the paper reveals a consistent pattern: while agile approaches excel at fostering collaboration and rapid feedback, they often fall short in providing comprehensive project‑management support, full life‑cycle coverage, concrete implementation guidance, and robust empirical validation. Addressing these shortcomings through integrated frameworks, detailed tailoring mechanisms, and rigorous empirical studies will be essential for the next generation of agile methods to achieve both practical effectiveness and scientific credibility.