A Taxonomy to Assess and Tailor Risk-based Testing in Recent Testing Standards


This article provides a taxonomy for risk-based testing that serves as a tool to define, tailor, or assess risk-based testing approaches in general and to instantiate risk-based testing approaches for the current testing standards ISO/IEC/IEEE 29119, ETSI EG and OWASP Security Testing Guide in particular. We demonstrate the usefulness of the taxonomy by applying it to the aforementioned standards as well as to the risk-based testing approaches SmartTesting, RACOMAT, PRISMA and risk-based test case prioritization using fuzzy expert systems. In this setting, the taxonomy is used to systematically identify deviations between the standards’ requirements and the individual testing approaches so that we are able to position and compare the testing approaches and discuss their potential for practical application.


💡 Research Summary

The paper introduces a comprehensive taxonomy designed to define, tailor, and assess risk‑based testing (RBT) approaches. The authors first decompose RBT into six logical stages—risk identification, risk assessment, risk treatment (test design), test execution, result evaluation, and feedback—each characterized by specific inputs (assets, threats, vulnerabilities), processes (quantitative models, fuzzy logic, Bayesian inference), and outputs (risk scores, test objectives). From this decomposition they derive a seven‑dimensional taxonomy: (1) risk definition, (2) risk measurement, (3) risk prioritisation, (4) test selection and design, (5) test execution and management, (6) result analysis, and (7) feedback loop.
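The six-stage decomposition above can be sketched as a small data structure. This is an illustrative encoding only, assuming the stage names, inputs, and outputs as listed in the summary; it is not a schema taken from the paper.

```python
# Illustrative sketch: the six RBT stages as a data structure. Field values
# are examples drawn from the summary above, not an exhaustive schema.
from dataclasses import dataclass, field

@dataclass
class RBTStage:
    name: str
    inputs: list = field(default_factory=list)   # e.g. assets, threats
    outputs: list = field(default_factory=list)  # e.g. risk scores

PROCESS = [
    RBTStage("risk identification",
             inputs=["assets", "threats", "vulnerabilities"]),
    RBTStage("risk assessment", outputs=["risk scores"]),
    RBTStage("risk treatment (test design)", outputs=["test objectives"]),
    RBTStage("test execution"),
    RBTStage("result evaluation"),
    RBTStage("feedback"),
]

print([stage.name for stage in PROCESS])
```

A structure like this makes it easy to check, per approach or standard, which stages are covered and which inputs or outputs are missing.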

The taxonomy is then systematically applied to three contemporary testing standards: ISO/IEC/IEEE 29119, the ETSI Guide (ETSI EG), and the OWASP Security Testing Guide (STG). The analysis reveals that each standard treats risk differently. ISO 29119 mentions risk only as a factor for test prioritisation and leaves the measurement technique unspecified. The ETSI EG provides a domain‑specific probabilistic risk model for telecommunications, linking risk scores directly to test scope. The OWASP STG integrates vulnerability‑scanning outcomes with risk scores and supplies detailed risk‑driven test scenarios for web applications. These divergences illustrate that the choice of standard can dramatically shape the risk‑driven testing strategy adopted by an organisation.

Next, four prominent risk‑based testing approaches—SmartTesting, RACOMAT, PRISMA, and a fuzzy‑expert‑system‑based test‑case prioritisation method—are mapped onto the same taxonomy. SmartTesting combines business value and defect severity into a weighted risk score, but it lacks concrete guidance for the test‑design phase. RACOMAT employs Bayesian networks to model risk propagation and to generate probabilistic test priorities; however, it provides limited support for test execution management. PRISMA uses multi‑criteria decision‑making (MCDM) to aggregate diverse risk factors, yet it does not fully address the asset‑threat modelling required by many standards. The fuzzy‑expert‑system approach excels at handling uncertainty through fuzzy sets, translating imprecise risk assessments into test‑case rankings, but it falls short in delivering quantitative risk scores that align with standard‑defined metrics.
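To make the fuzzy idea concrete, here is a minimal fuzzy-style prioritisation sketch. The membership functions, rule base, and weighting are invented for this example and are not taken from the surveyed approach; the point is only to show how imprecise likelihood/impact estimates can be turned into a crisp ranking.

```python
# Minimal fuzzy-style test-case prioritisation (illustrative only; the
# membership functions and rule base below are invented for this sketch).

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(likelihood, impact):
    """Map crisp likelihood/impact in [0, 1] to a priority score in [0, 1]."""
    # Linguistic terms for both inputs: low, medium, high.
    low  = lambda x: triangular(x, -0.5, 0.0, 0.5)
    med  = lambda x: triangular(x,  0.0, 0.5, 1.0)
    high = lambda x: triangular(x,  0.5, 1.0, 1.5)
    # Rule base: (firing strength via fuzzy AND = min, output priority level).
    rules = [
        (min(low(likelihood),  low(impact)),  0.1),  # both low  -> low priority
        (min(low(likelihood),  med(impact)),  0.3),
        (min(med(likelihood),  low(impact)),  0.3),
        (min(med(likelihood),  med(impact)),  0.5),
        (min(high(likelihood), low(impact)),  0.5),
        (min(low(likelihood),  high(impact)), 0.6),  # impact weighted more
        (min(high(likelihood), med(impact)),  0.7),
        (min(med(likelihood),  high(impact)), 0.8),
        (min(high(likelihood), high(impact)), 0.9),  # both high -> high priority
    ]
    # Defuzzify with a weighted average of the fired rules.
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

# Hypothetical test cases: (likelihood, impact) estimates.
cases = {"login": (0.9, 0.8), "help page": (0.2, 0.1)}
ranked = sorted(cases, key=lambda c: fuzzy_priority(*cases[c]), reverse=True)
print(ranked)
```

Running this ranks "login" ahead of "help page", since its combined likelihood and impact fire the high-priority rules more strongly.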

To quantify the alignment (or “deviation”) between standards and the examined approaches, the authors construct a deviation matrix. Each taxonomy dimension receives a score from 0 (not satisfied) to 2 (fully satisfied). Summing the scores yields an overall conformity rating. The matrix shows that RACOMAT scores highly on risk measurement and prioritisation but poorly on test execution and feedback, indicating a need for supplemental process guidance. PRISMA performs well in prioritisation but lacks comprehensive risk identification, while the fuzzy‑based method shows strong handling of uncertainty but weak integration with standardised quantitative risk metrics. SmartTesting, despite solid risk identification and assessment, leaves a gap in test design specifications.
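The scoring scheme above is simple enough to sketch directly. The ratings used below are invented placeholders illustrating the pattern the summary ascribes to RACOMAT (strong on measurement and prioritisation, weak on execution and feedback); they are not the paper's actual results.

```python
# Sketch of the deviation-matrix scoring: each taxonomy dimension is rated
# 0 (not satisfied) to 2 (fully satisfied), and per-dimension ratings are
# summed into an overall conformity rating. Example ratings are invented.

DIMENSIONS = [
    "risk definition", "risk measurement", "risk prioritisation",
    "test selection and design", "test execution and management",
    "result analysis", "feedback loop",
]

def conformity(ratings):
    """Sum per-dimension ratings; every dimension must be rated 0, 1, or 2."""
    assert set(ratings) == set(DIMENSIONS), "rate every dimension"
    assert all(r in (0, 1, 2) for r in ratings.values())
    return sum(ratings.values())

# Hypothetical RACOMAT-like profile, for illustration only.
racomat = dict(zip(DIMENSIONS, [1, 2, 2, 1, 0, 1, 0]))
print(conformity(racomat), "out of", 2 * len(DIMENSIONS))
```

The maximum rating with seven dimensions is 14, so an approach's conformity can be read as a fraction of full coverage across the taxonomy.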

The paper derives several practical implications. First, organisations should explicitly match the risk definition and measurement expectations of their chosen standard with the capabilities of a selected RBT approach, filling any gaps with additional artefacts or processes. Second, a balanced coverage across all taxonomy dimensions is essential; over‑emphasis on a single dimension (e.g., prioritisation) can undermine overall test effectiveness. Third, the deviation matrix serves as a diagnostic tool, enabling teams to visualise mismatches and to plan targeted improvements in their testing pipelines. Finally, standards bodies can use the taxonomy as a reference framework to harmonise risk‑based testing guidance, thereby reducing inconsistencies across standards and facilitating broader industry adoption.

In summary, the authors provide a robust, multi‑dimensional taxonomy that not only clarifies the essential components of risk‑based testing but also offers a systematic method for evaluating and tailoring both standards and concrete testing approaches. By applying the taxonomy to ISO/IEC/IEEE 29119, ETSI EG, OWASP STG, and four leading RBT techniques, the paper demonstrates how to identify alignment gaps, position each approach relative to the standards, and make informed decisions about practical implementation. This work thus equips practitioners and standard‑setters with a valuable analytical instrument for advancing risk‑driven quality assurance in software engineering.

