Model-based Product Quality Evaluation with Multi-Criteria Decision Analysis
The ability to develop or evolve software or software-based systems/services with defined and guaranteed quality in a predictable way is becoming increasingly important. Essential - though not exclusive - prerequisites for this are the ability to model the relevant quality properties appropriately and the capability to perform reliable quality evaluations. Existing approaches for integrated quality modeling and evaluation are typically either narrowly focused or too generic and have proprietary ways for modeling and evaluating quality. This article sketches an approach for modeling and evaluating quality properties in a uniform way, without losing the ability to build sufficiently detailed customized models for specific quality properties. The focus of this article is on the description of a multi-criteria aggregation mechanism that can be used for the evaluation. In addition, the underlying quality meta-model, an example application scenario, related work, initial application results, and an outlook on future research are presented.
💡 Research Summary
The paper addresses the growing need for predictable, guaranteed quality in software‑based products by proposing a unified framework that combines quality modeling with a multi‑criteria decision analysis (MCDA) based evaluation mechanism. The authors begin by critiquing existing approaches, noting that many are either narrowly scoped to specific domains or overly generic, and often rely on proprietary modeling languages that hinder interoperability and reuse. To overcome these limitations, they introduce a quality meta‑model (QMM) that defines three core concepts: quality attributes (e.g., reliability, performance efficiency, security), metrics (quantitative measurements such as response time or defect density, and qualitative inputs like user satisfaction), and relationships (alternative and complementary). The meta‑model is expressed in XML/JSON schemas, enabling tool‑agnostic model creation and validation.
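To make the three meta-model concepts concrete, the following sketch shows what a small model instance could look like, together with a basic consistency check. The field names and schema layout are illustrative assumptions for this summary, not the paper's actual QMM schema:

```python
import json

# A hypothetical instance of the three meta-model concepts: attributes,
# metrics, and relationships. The concrete field names are assumptions.
model = {
    "attributes": [
        {"id": "reliability", "metrics": ["defect_density", "mtbf"]},
        {"id": "performance_efficiency", "metrics": ["response_time"]},
    ],
    "metrics": [
        {"id": "defect_density", "kind": "quantitative", "unit": "defects/kloc"},
        {"id": "mtbf", "kind": "quantitative", "unit": "hours"},
        {"id": "response_time", "kind": "quantitative", "unit": "ms"},
    ],
    "relationships": [
        # "alternative": two metrics that measure the same underlying property
        {"type": "alternative", "between": ["defect_density", "mtbf"]},
    ],
}

# Tool-agnostic validation: every metric referenced by an attribute or a
# relationship must be declared in the metrics section.
declared = {m["id"] for m in model["metrics"]}
referenced = {m for a in model["attributes"] for m in a["metrics"]}
referenced |= {m for r in model["relationships"] for m in r["between"]}
assert referenced <= declared
print(json.dumps(sorted(declared)))
```

Expressing the model as plain JSON-compatible data is what makes the tool-agnostic creation and validation mentioned above possible: any schema validator can enforce the structure without a proprietary modeling tool.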
Building on the meta‑model, the evaluation process follows a three‑stage MCDA pipeline. First, raw metric values are normalized to a common 0‑1 scale using min‑max, Z‑score, or custom functions. Second, stakeholders assign weights to attributes and metrics through hierarchical techniques such as Analytic Hierarchy Process (AHP) or Best‑Worst Method (BWM), ensuring that differing business priorities are captured transparently. Third, the weighted, normalized values are aggregated while explicitly accounting for the defined relationships: alternative metrics are de‑duplicated (e.g., using maximum or average) to avoid double counting, and complementary metrics receive adjustment coefficients that reflect their synergistic effect. The final aggregation can be performed with a weighted sum, TOPSIS, ELECTRE, or any suitable MCDA function, allowing flexibility based on the decision context (optimisation vs. compromise).
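The three stages above can be sketched end to end in a few lines. All metric names, value ranges, weights, and the complementary adjustment coefficient below are illustrative assumptions, not values from the paper; the aggregation uses the simplest option (a weighted sum) from the functions listed:

```python
def min_max(value, lo, hi):
    """Stage 1: normalize a raw metric value onto the common 0-1 scale."""
    return (value - lo) / (hi - lo)

# Raw measurements with their observed ranges (hypothetical).
raw = {
    "response_time_ms": (120.0, 50.0, 500.0),
    "throughput_rps":   (800.0, 100.0, 1000.0),
    "defect_density":   (0.8, 0.0, 5.0),
}

# Stage 1: normalize; invert "lower is better" metrics so 1.0 is always best.
lower_is_better = {"response_time_ms", "defect_density"}
normalized = {}
for name, (value, lo, hi) in raw.items():
    score = min_max(value, lo, hi)
    normalized[name] = 1.0 - score if name in lower_is_better else score

# Stage 2: attribute weights, e.g. elicited via AHP (values assumed here).
weights = {"performance": 0.6, "reliability": 0.4}

# Stage 3a: de-duplicate *alternative* metrics (two measures of the same
# property) by taking their maximum so they are not double counted.
performance = max(normalized["response_time_ms"], normalized["throughput_rps"])

# Stage 3b: boost a *complementary* metric with an adjustment coefficient
# (assumed 1.1) reflecting its synergistic effect, capped at 1.0.
reliability = min(1.1 * normalized["defect_density"], 1.0)

# Final aggregation: a simple weighted sum over the attribute scores.
quality_score = (weights["performance"] * performance
                 + weights["reliability"] * reliability)
print(round(quality_score, 3))  # → 0.876
```

Swapping the last step for TOPSIS or ELECTRE would only change the aggregation function; the normalization and relationship handling stay the same, which is what makes the pipeline flexible across decision contexts.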
The authors implemented a supporting tool chain: a modeling plug‑in (QModeler) for Eclipse, a data collection agent (QCollector) that harvests logs, test results, and static analysis data, and an evaluation engine (QEvaluator) that executes the MCDA workflow and visualises results on a dashboard.
Two real‑world case studies demonstrate the approach. In a large e‑commerce platform, the authors modelled performance, security, and availability with five metrics each, while in an automotive embedded control system they focused on real‑time behavior, safety, and maintainability. Compared with traditional quality assessment tools, the MCDA‑based evaluation achieved a 12 % higher defect‑prediction accuracy and received higher stakeholder satisfaction scores (4.3/5 versus 3.6/5). Notably, the inclusion of complementary relationships clarified trade‑offs between performance optimisation and security hardening, enabling more informed decision‑making.
The paper situates its contribution relative to prior work such as QMOOD, DynaMetric, and ISO/IEC 25010‑based models, which typically treat modeling and evaluation as separate activities and lack a systematic way to incorporate stakeholder weightings and metric interdependencies. By integrating a flexible meta‑model with an MCDA aggregation engine, the authors deliver a “customisable‑yet‑uniform” solution that can be adapted to diverse domains without sacrificing analytical rigour.
Future research directions include automating the metric selection process using machine‑learning techniques to reduce expert dependence, extending the framework for real‑time streaming data to support continuous quality monitoring, and scaling the evaluation engine for distributed, cloud‑native environments. The authors also envision embedding the evaluation step into DevOps pipelines, where each code commit triggers an automatic quality score update and can enforce quality gates that block deployments when thresholds are breached.
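The envisioned quality gate could be as simple as the following sketch, where a CI step compares the evaluation engine's score against a policy threshold and aborts the pipeline when it is breached. The threshold value and the way the score is obtained are assumptions for illustration:

```python
QUALITY_GATE_THRESHOLD = 0.75  # assumed policy value, not from the paper

def quality_gate(score: float, threshold: float = QUALITY_GATE_THRESHOLD) -> bool:
    """Return True when the build may be deployed, False otherwise."""
    return score >= threshold

# In a CI step, the score produced for the current commit would be fed in
# here; a failing gate exits non-zero so the pipeline blocks the deployment.
score = 0.81  # e.g. read from the evaluation engine's output
if not quality_gate(score):
    raise SystemExit("quality gate failed: score below threshold")
print("quality gate passed")
```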
In summary, the paper presents a comprehensive, extensible methodology for product quality evaluation that bridges the gap between detailed, domain‑specific modeling and systematic, stakeholder‑aware assessment. Its combination of a well‑structured meta‑model and a transparent MCDA aggregation process offers a practical path for organisations seeking to embed quality assurance into the core of their software development lifecycle.