A Mapping Study on Software Process Self-Assessment Methods


Assessing processes is one of the best ways for an organization to start a software process improvement program. An alternative for organizations seeking lighter assessment methods is to perform self-assessments, which an organization can carry out to assess its own processes. In this context, the question arises: which software process self-assessment methods exist, and what kind of support do they provide? To answer this question, a mapping study on software process self-assessment methods was performed. As a result, a total of 33 methods were identified and analyzed, synthesizing information on their measurement framework, process reference model, and assessment process. We observed that most self-assessment methods are based on consolidated models, such as CMMI or ISO/IEC 15504, with a trend toward developing self-assessment methods specifically for SMEs. In general, they use simplified assessment processes, focusing on data collection and analysis. Most of the methods propose to collect data through questionnaires answered by managers or other team members involved in the process being assessed. However, we noted a lack of information on how most of the assessment methods (AMs) were developed and validated, which leaves their validity questionable. The results of our study may help practitioners interested in conducting software process self-assessments to choose a suitable self-assessment method. This research is also relevant for researchers, as it provides a better understanding of the existing self-assessment methods and their strengths and weaknesses.


💡 Research Summary

The paper conducts a systematic mapping study to answer the question “Which software process self‑assessment methods (AMs) exist and what kind of support do they provide?” Self‑assessment is positioned as a lightweight alternative to traditional external assessments, allowing organizations to evaluate their own software development processes without incurring high consultancy costs. The authors define two research questions: (1) What self‑assessment methods are currently available? (2) How are these methods structured in terms of measurement framework, reference model, and assessment process?

To answer these questions, the authors performed a comprehensive literature search covering the period from 2000 to 2023 across major academic and industry databases (IEEE Xplore, ACM Digital Library, Scopus, Google Scholar). Using the keyword combination “software process self‑assessment,” the initial result set comprised 1,254 records. After duplicate removal, title and abstract screening, and full‑text eligibility checks, 33 distinct self‑assessment methods were retained for analysis. For each method, five data items were extracted: (i) measurement framework (e.g., rating scales, maturity levels), (ii) reference model (e.g., CMMI, ISO/IEC 15504, Agile‑specific models), (iii) assessment process steps, (iv) data‑collection instruments (questionnaires, checklists, interview guides), and (v) target organization size (SME vs. large enterprise).
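
To make the extraction schema concrete, here is a minimal sketch of how these five data items could be recorded per method; the type names, fields, and example values are illustrative assumptions, not artifacts of the paper's extraction form.

```python
from dataclasses import dataclass, field
from enum import Enum


class OrgSize(Enum):
    """Target organization size extracted for each method."""
    SME = "small-/medium-sized enterprise"
    LARGE = "large enterprise"
    GENERIC = "not size-specific"


@dataclass
class SelfAssessmentMethod:
    """One record per identified method, mirroring the five data items."""
    name: str
    measurement_framework: str          # e.g. "maturity levels", "numeric score"
    reference_model: str                # e.g. "CMMI", "ISO/IEC 15504", "Agile"
    process_steps: list[str] = field(default_factory=list)
    instruments: list[str] = field(default_factory=list)  # questionnaires, checklists, ...
    target_size: OrgSize = OrgSize.GENERIC


# Hypothetical example record (values are illustrative, not from the study):
example = SelfAssessmentMethod(
    name="ExampleLite",
    measurement_framework="simplified maturity stages",
    reference_model="CMMI",
    process_steps=["preparation", "data collection", "analysis", "reporting"],
    instruments=["online questionnaire"],
    target_size=OrgSize.SME,
)
```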

The analysis reveals that the majority of methods (24 out of 33, roughly 73%) are built on established, consolidated reference models. CMMI and ISO/IEC 15504 (also known as SPICE) dominate, with a smaller number leveraging ISO/IEC 12207, ISO/IEC 25010, or Agile‑oriented frameworks. Measurement frameworks are typically expressed as maturity levels, numeric scores, or simplified competency matrices. Assessment processes are typically streamlined into four phases: preparation, data collection, analysis, and reporting. Data collection relies heavily on questionnaires—delivered either online or on paper—answered by managers, process owners, or team members directly involved in the assessed process. Some methods supplement questionnaires with automated tools such as spreadsheet templates or web‑based dashboards for result visualization and tracking.
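
To make the measurement side tangible, the following sketch shows one way questionnaire answers on a rating scale could be rolled up into a maturity level; the 0–4 scale and equal-width bands are assumptions for illustration, since each surveyed method defines its own rating rules.

```python
def maturity_level(answers: list[int], scale_max: int = 4) -> int:
    """Map questionnaire answers (0..scale_max per item) to a maturity level 1..5.

    Assumed rule: the mean achievement ratio over all items is bucketed
    into five equally sized bands. Real methods define their own rules.
    """
    if not answers:
        raise ValueError("no answers to aggregate")
    ratio = sum(answers) / (len(answers) * scale_max)   # 0.0 .. 1.0
    return min(5, int(ratio * 5) + 1)                   # bands of width 0.2


# A manager rates six practices on a 0-4 scale (hypothetical data):
print(maturity_level([3, 4, 2, 3, 4, 3]))  # -> 4 (ratio ~0.79)
```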

A notable trend identified in the last five years is the emergence of methods specifically tailored for small‑ and medium‑sized enterprises (SMEs). Twelve of the 33 methods explicitly target SMEs, emphasizing cost‑effectiveness, reduced procedural overhead, and a focus on core process areas (requirements management, design/implementation, testing, maintenance). These SME‑oriented methods often replace complex multi‑level rating schemes with a limited set of maturity stages or key capability indicators, making them more approachable for organizations with limited resources.

Despite the breadth of coverage, the study uncovers a significant weakness: over 60% of the identified methods provide little to no information about how they were developed, piloted, or validated. Details such as pilot study results, statistical reliability/validity analyses, or external expert reviews are frequently absent. The authors label this gap “lack of methodological transparency” and argue that it undermines confidence in the methods’ scientific soundness. They recommend that future research adopt rigorous validation protocols—controlled experiments, multi‑site case studies, or meta‑analyses—to establish empirical evidence of effectiveness. Moreover, the heavy reliance on quantitative questionnaires is critiqued; the authors suggest integrating qualitative techniques (interviews, observations) to capture richer contextual information about process performance.
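
As an example of the kind of reliability evidence the authors find missing, the sketch below computes Cronbach's alpha, a standard internal-consistency statistic for questionnaire instruments; the data layout (one list of respondent scores per item) and the sample values are assumptions of this illustration.

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for k questionnaire items (requires k >= 2).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    Each inner list holds one item's scores across all respondents.
    """
    k = len(items)
    n = len(items[0])

    def var(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    totals = [sum(item[j] for item in items) for j in range(n)]  # per respondent
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))


# Three 5-point items answered by four respondents (hypothetical data):
print(round(cronbach_alpha([[4, 3, 5, 4], [4, 2, 5, 3], [3, 3, 4, 4]]), 2))  # -> 0.83
```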

From a practitioner perspective, the paper proposes a practical checklist for selecting an appropriate self‑assessment method: (1) alignment of the reference model with organizational goals, (2) simplicity of the assessment process, (3) suitability for the organization’s size (SME‑specific vs. generic), and (4) presence of documented validation evidence. By applying this checklist, organizations can mitigate the risk of adopting a method whose results may be unreliable or misaligned with improvement objectives.
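
To show how such a checklist could drive an actual comparison, here is a small weighted-scoring sketch over the four criteria; the weights, candidate names, and ratings are purely hypothetical, intended only to illustrate how the checklist might rank candidate methods.

```python
# Checklist criteria from the paper; the weights are an illustrative assumption.
WEIGHTS = {
    "model_alignment": 0.35,   # reference model fits organizational goals
    "simplicity": 0.25,        # simplicity of the assessment process
    "size_fit": 0.25,          # SME-specific vs. generic fit
    "validation": 0.15,        # documented validation evidence
}


def checklist_score(ratings: dict[str, float]) -> float:
    """Weighted sum of 0..1 ratings per criterion (higher is better)."""
    return sum(WEIGHTS[c] * ratings.get(c, 0.0) for c in WEIGHTS)


# Hypothetical candidates, rated 0..1 on each criterion by the adopting team:
candidates = {
    "MethodA": {"model_alignment": 0.9, "simplicity": 0.4, "size_fit": 0.5, "validation": 0.8},
    "MethodB": {"model_alignment": 0.7, "simplicity": 0.9, "size_fit": 0.9, "validation": 0.2},
}
best = max(candidates, key=lambda m: checklist_score(candidates[m]))
print(best, round(checklist_score(candidates[best]), 2))
```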

The authors acknowledge several limitations. The search strategy, while extensive, may have missed methods described in gray literature, non‑English publications, or proprietary industry tools not indexed in academic databases. Additionally, the snapshot reflects the state of the art up to early 2023; newer methods may have emerged since then. To address these issues, the authors outline future work that includes continuous literature monitoring, systematic collection of real‑world case studies, and the development of a meta‑analytic framework to compare method effectiveness across contexts.

In conclusion, the mapping study systematically catalogs 33 software process self‑assessment methods, highlighting commonalities (use of established reference models, questionnaire‑driven data collection, four‑step assessment process) and emerging differentiators (SME focus, simplified maturity schemes). It also exposes a critical shortcoming: most methods lack documented development and validation processes, raising questions about their reliability. The findings serve both practitioners—by informing method selection—and researchers—by identifying gaps that warrant rigorous empirical investigation and methodological refinement.

