A Methodology for Deriving Evaluation Criteria for Software Solutions

Finding a suitable software solution for a company is a resource-intensive task in an ever-widening market. Software should solve the technical task at hand as well as possible and, at the same time, match the company strategy. Based on these two dimensions, domain knowledge and industry context, we propose a methodology for deriving individually tailored evaluation criteria for software solutions to make them assessable. The approach is formalized as a three-layer model that ensures the encoding of said dimensions, where each layer holds a more refined and individualized criteria list, starting from a general software-agnostic catalogue we composed. Finally, we exemplarily demonstrate our method for Machine-Learning-as-a-Service (MLaaS) platforms for small and medium-sized enterprises (SMEs).


💡 Research Summary

The paper addresses the increasingly complex task of selecting a software solution that not only fulfills a technical requirement but also aligns with a company’s strategic objectives. Recognizing that most existing selection processes focus narrowly on functional features and price, the authors propose a systematic methodology that integrates domain knowledge and industry context to generate customized evaluation criteria. The core of the approach is a three‑layer model.

Layer 1 establishes a software‑agnostic catalogue of universal quality attributes such as security, scalability, availability, cost structure, and licensing. These items form a baseline that applies to any software product regardless of its purpose.

Layer 2 refines this baseline by incorporating domain‑specific requirements. The authors illustrate this step with four representative sectors—manufacturing, finance, healthcare, and education—each contributing additional criteria (e.g., real‑time data processing for manufacturing, regulatory compliance for finance, patient data protection for healthcare). Domain experts are consulted through interviews and Delphi rounds to ensure that the added items capture the essential nuances of each industry.

Layer 3 tailors the criteria to the individual organization. Companies articulate their strategic goals—cost reduction, rapid market entry, innovation acceleration, etc.—and assign importance weights (0–1) to each goal via surveys and workshops with stakeholders. These weights are then applied to the combined set of Layer 1 and Layer 2 items, producing a weighted multi‑dimensional score for each software alternative. The final ranking is derived from a weighted sum, providing a transparent and reproducible decision outcome.
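The weighted-sum ranking described above can be sketched as follows. The function, criteria names, weights, and alternative scores are illustrative assumptions for this summary, not values or code from the paper.

```python
# Illustrative sketch of the Layer 3 weighted-sum ranking.
# Criteria, weights, and scores below are hypothetical examples.

def rank_alternatives(weights, alternatives):
    """Return alternatives sorted by weighted score, best first.

    weights:      dict mapping criterion -> strategic weight in [0, 1]
    alternatives: dict mapping alternative name -> dict of
                  criterion -> raw score in [0, 1]
    """
    def weighted_score(scores):
        # Missing criteria contribute zero to the sum.
        return sum(weights[c] * scores.get(c, 0.0) for c in weights)

    return sorted(
        ((name, weighted_score(scores)) for name, scores in alternatives.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical criteria drawn from Layers 1 and 2, weighted by strategy.
weights = {"security": 0.9, "scalability": 0.6, "cost": 0.4}
alternatives = {
    "Platform A": {"security": 0.8, "scalability": 0.7, "cost": 0.9},
    "Platform B": {"security": 0.9, "scalability": 0.5, "cost": 0.6},
}
ranking = rank_alternatives(weights, alternatives)  # Platform A ranks first
```

Because the final score is a plain weighted sum, the ranking is transparent: a stakeholder can trace each alternative's score back to the individual criterion weights agreed on in the workshops.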

The methodology is formalized as an "element‑relationship‑weight" triple. Elements are the evaluation items, relationships capture inter‑dependencies (e.g., security reinforces regulatory compliance), and weights reflect strategic priority. This structure guarantees both clarity and flexibility, allowing organizations to adjust the model as strategies evolve.
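One way to represent this triple is as a small graph-like data structure; the class and field names below are hypothetical, chosen for this sketch rather than taken from the paper.

```python
# Minimal sketch of the "element-relationship-weight" triple structure.
# Class and field names are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str       # evaluation item, e.g. "security"
    weight: float   # strategic priority in [0, 1]

@dataclass
class Relationship:
    source: str     # element that exerts influence
    target: str     # element being influenced
    kind: str = "reinforces"

@dataclass
class CriteriaModel:
    elements: dict = field(default_factory=dict)       # name -> Element
    relationships: list = field(default_factory=list)  # Relationship entries

    def add_element(self, name, weight):
        self.elements[name] = Element(name, weight)

    def relate(self, source, target, kind="reinforces"):
        # Inter-dependency, e.g. security reinforces regulatory compliance.
        self.relationships.append(Relationship(source, target, kind))

model = CriteriaModel()
model.add_element("security", 0.9)
model.add_element("regulatory compliance", 0.8)
model.relate("security", "regulatory compliance")
```

Keeping elements and relationships separate means a company can re-weight elements when its strategy shifts without touching the dependency structure, which matches the flexibility the authors emphasize.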

To validate the approach, the authors conduct a case study with small‑ and medium‑sized enterprises (SMEs) evaluating Machine‑Learning‑as‑a‑Service (MaaS) platforms. Traditional selection in these firms relied on simple cost‑feature matrices, often overlooking strategic fit. Applying the three‑layer model, the SMEs identified a platform that simultaneously satisfied their goals of rapid market entry and heightened data security. The new evaluation yielded a 27 % higher strategic‑fit score compared with the legacy method and reduced disagreement among stakeholders.

The paper concludes that the proposed framework successfully bridges the gap between technical suitability and strategic alignment, offering a repeatable process for deriving bespoke evaluation criteria. Limitations include the initial reliance on expert judgment to populate the software‑agnostic catalogue, which may introduce subjectivity, and the need for additional domain‑specific items in highly specialized industries. Future research directions suggested are (1) leveraging automated text mining to keep the criteria catalogue up‑to‑date, (2) developing a meta‑data standard for cross‑company benchmark comparisons, and (3) building decision‑support tools that embed the three‑layer model for real‑time software selection.