An overview of metrics-based approaches to support software components reusability assessment
Objective: To present an overview of the current state of the art concerning metrics-based quality evaluation of software components and component assemblies. Method: Comparison of several approaches available in the literature, using a framework comprising several aspects, such as scope, intent, definition technique, and maturity. Results: The identification of common shortcomings of current approaches, such as ambiguity in definitions, inadequate specification formalisms, and insufficient validation of current quality models and metrics for software components. Conclusions: Quality evaluation of components and component-based infrastructures presents new challenges to the Experimental Software Engineering community.
💡 Research Summary
The paper provides a comprehensive overview of metric‑based approaches for assessing the reusability of software components and component assemblies in component‑based development (CBD). Recognizing that CBD promises cost reduction, faster time‑to‑market, and quality improvements through reuse, the authors argue that the opaque, black‑box nature of many commercial off‑the‑shelf (COTS) components makes traditional source‑code metrics (e.g., McCabe complexity) unsuitable. Instead, metrics must rely on publicly available interface information, contracts, and adaptability characteristics.
To systematically compare existing proposals, the authors introduce an evaluation framework consisting of five dimensions: Scope (granularity and artifact type, e.g., coarse‑grained vs. fine‑grained, black‑box vs. white‑box), Intent (the primary goals of the approach), Technique (how metrics are defined and validated, ranging from informal natural‑language descriptions to formal algebraic specifications), Critique (qualitative assessment of strengths and weaknesses), and Maturity (an ordinal scale indicating the development stage of the proposal). Using this framework, they survey a range of metric sets proposed in the literature for both individual components and component assemblies.
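The five-dimension framework above can be sketched as a small data model. This is an illustrative encoding, not the authors' own tooling: the maturity level names and the `most_mature` helper are assumptions made for the example (the paper only specifies that Maturity is an ordinal scale).

```python
from dataclasses import dataclass
from enum import IntEnum

class Maturity(IntEnum):
    """Hypothetical ordinal levels for the framework's Maturity dimension."""
    CONCEPTUAL = 1
    DEFINED = 2
    TOOL_SUPPORTED = 3
    VALIDATED = 4

@dataclass
class ApproachAssessment:
    """One surveyed metric proposal, scored along the five comparison dimensions."""
    name: str
    scope: str        # e.g. "fine-grained, black-box components"
    intent: str       # primary goal of the metric set
    technique: str    # from "natural language" up to "formal (algebraic)"
    critique: str     # qualitative strengths and weaknesses
    maturity: Maturity

def most_mature(assessments):
    """Return the proposal(s) at the highest maturity level in the survey."""
    top = max(a.maturity for a in assessments)
    return [a for a in assessments if a.maturity == top]
```

Using an ordinal `IntEnum` makes the maturity levels directly comparable, which is what lets a survey rank proposals along that dimension.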
The analysis uncovers three pervasive shortcomings. First, there is no widely accepted quality model for CBD. Although several extensions of ISO‑9126 have been suggested, none have achieved industry‑wide adoption, leaving metric definitions without a clear mapping to quality attributes such as maintainability, portability, or reliability. Second, metric definitions are often ambiguous. Many papers rely on natural‑language descriptions that omit crucial measurement rules (e.g., whether to count blank lines in LOC, which resources are considered in a resource‑utilization metric, or how interface complexity should be quantified). This leads to inconsistent implementations and incomparable results across tools. The authors illustrate the problem with two example metrics—Component Interface Complexity Metric (CICM) and Component Resource Utilization Metric (CRUM)—showing how vague wording prevents reproducible measurement. Third, validation is insufficient. The authors cite systematic reviews indicating that less than 2 % of software‑engineering publications report controlled experiments, and replication studies are even rarer. Consequently, most metric proposals lack empirical evidence of reliability, validity, or predictive power.
To address these gaps, the paper advocates for (1) the development of a standardized, CBD‑specific quality model, possibly through collaboration with standards bodies; (2) the formalization of metric definitions using set theory, algebra, or other mathematically rigorous methods, which would enable unambiguous tool implementation; and (3) the adoption of standardized experimental protocols and replication studies to build a robust evidence base. The authors also note that while formal approaches improve precision, they may raise the entry barrier for practitioners lacking advanced mathematical training, suggesting a need for tooling and education support.
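Recommendation (2) can be illustrated with a minimal sketch: once a component's interface is given as a mathematical structure (here, a set of operation signatures), a metric defined over that structure leaves no counting rule to interpretation. The model and the `interface_complexity` metric below are assumptions for illustration, not the paper's formalization.

```python
from typing import FrozenSet, Tuple

# Set-based component model: an interface is a set of operations,
# each a (name, parameter-types, return-type) tuple.
Operation = Tuple[str, Tuple[str, ...], str]
Interface = FrozenSet[Operation]

def interface_complexity(iface: Interface) -> int:
    """|operations| + total parameter count, defined directly on the model."""
    return len(iface) + sum(len(params) for (_, params, _) in iface)

# Example: a stack component exposing three operations.
stack: Interface = frozenset({
    ("push", ("int",), "void"),
    ("pop", (), "int"),
    ("size", (), "int"),
})
print(interface_complexity(stack))  # 3 operations + 1 parameter = 4
```

Any two tools implementing this definition must agree, because the metric is a total function of the formal model rather than of prose that each implementer interprets differently.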
Finally, the maturity assessment places most existing proposals at the “conceptual” level, with few reaching “validated” status. The paper’s roadmap—standard quality model, formal metric specification, and systematic empirical validation—offers a clear path for the experimental software‑engineering community to advance component reusability assessment from ad‑hoc heuristics to scientifically grounded, repeatable practices. This transition is essential for component assemblers to make data‑driven decisions rather than relying on personal experience or expert opinion, ultimately enhancing the reliability and efficiency of component‑based software systems.