Runtime Adaptability driven by Negotiable Quality Requirements
Diversity and dynamism are two common features of business and the web. Diversity results in users having different preferences for the quality requirements of a system. Diversity also enables alternative implementations of functional requirements, called variants, each providing different quality. The quality the system provides may vary because of the different variant components and changes in the environment. The challenge is to adapt dynamically to quality variations and to find the variant that best fulfills the multi-criteria quality requirements driven by user preferences and current runtime conditions. For service-oriented systems this challenge is amplified by their distributed nature and by the lack of control over the constituent services and their provided quality of service (QoS). We propose a novel approach to runtime adaptability that detects QoS changes, updates the system model with runtime information, and uses the model to select the variant to execute at runtime. We introduce negotiable maintenance goals to express user quality preferences in the requirements model and automatically interpret them quantitatively for system execution. Our lightweight selection strategy selects the variant that best fulfills the user-required multi-criteria QoS based on updated QoS values.
💡 Research Summary
The paper addresses the challenge of providing runtime adaptability for service‑oriented systems in which both user quality preferences and the quality of service (QoS) offered by constituent services can change dynamically. The authors introduce “negotiable maintenance goals” as an extension of goal‑oriented requirements engineering (GORE) to capture user‑specific, possibly conflicting, QoS preferences. Three types of negotiable goals are defined: High Priority (a single QoS attribute is given absolute priority, optionally with hard thresholds for all attributes), Distributed Priority (a set of attributes share the same priority while others are ignored), and Conditional Priority (improvements in one attribute may be compensated by degradations in another, expressed as percentage upgrades/degradations).
Each goal is translated into a mathematical macro that the system can evaluate at runtime. For example, “list IS HIGH PRIORITY” assigns a uniform priority of 1/m to every attribute in the list (m = number of attributes) and zero to all others; “list IS LESS THAN value” sets a threshold for negative attributes (lower is better), while “list IS GREATER THAN value” does the same for positive attributes. Conditional macros adjust thresholds and priorities based on percentage changes, handling both positive and negative QoS parameters.
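The macro translation described above can be sketched in Python. The function names and attribute names below are hypothetical, and the exact macro semantics in the paper may differ; this is a minimal sketch assuming priorities are stored as a dictionary over all QoS attributes and thresholds as upper bounds for negative attributes (lower is better).

```python
def high_priority(attrs, all_attrs):
    """Sketch of the 'list IS HIGH PRIORITY' macro: each of the m listed
    attributes gets a uniform priority of 1/m, every other attribute gets 0."""
    m = len(attrs)
    return {a: (1.0 / m if a in attrs else 0.0) for a in all_attrs}

def less_than(attrs, value, thresholds):
    """Sketch of 'list IS LESS THAN value': sets an upper-bound threshold
    for negative attributes (lower is better)."""
    for a in attrs:
        thresholds[a] = value
    return thresholds

# Hypothetical attribute set for illustration
ALL = ["response_time", "cost", "accuracy"]
prio = high_priority(["response_time"], ALL)   # response_time gets priority 1.0
thr = less_than(["response_time"], 4.0, {})    # "response_time IS LESS THAN 4 s"
thr = less_than(["cost"], 10.0, thr)           # "cost IS LESS THAN 10 cents"
```

A "list IS GREATER THAN value" macro would mirror `less_than` with lower-bound semantics for positive attributes such as accuracy.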
The overall adaptation loop follows four activities: Express, Find, (Re)Estimate, and Execute.
- Express – Users attach negotiable maintenance goals to functional requirements in a requirements model.
- Find – A Variant Finder matches each functional requirement with one or more concrete service compositions (variants). This can be manual or automated (e.g., using existing service composition tools).
- (Re)Estimate – A Change Detector monitors runtime QoS metrics (response time, cost, accuracy, etc.). When a significant change is detected, a QoS Estimator recomputes the cumulative QoS for every affected variant using formulas from prior work. If a parameter lacks a user‑defined threshold, the maximum observed value becomes the default threshold.
- Execute – A Dispatcher selects the variant to invoke based on the latest QoS values and the selection policies derived from the negotiable goals. For each variant j, the relative deviation of parameter P_i from its user threshold is computed as σ_Pij = (P_i^crt − threshold_Pi) × 100 / threshold_Pi. The policy combines these deviations with the priorities to rank variants and pick the best one.
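The Dispatcher's deviation-and-ranking step can be sketched as follows. The deviation formula matches the one above; the way deviations and priorities are combined into a single score is an assumption (a priority-weighted sum, minimized), not the paper's exact policy, and the attribute names are illustrative.

```python
def deviation(current, threshold):
    """Percentage deviation of a QoS value from its threshold:
    sigma = (P_crt - threshold) * 100 / threshold."""
    return (current - threshold) * 100.0 / threshold

def score(variant_qos, thresholds, priorities, negative_attrs):
    """Priority-weighted deviation score (assumed combination rule;
    lower is better). For positive attributes, where higher values are
    better, the sign of the deviation is flipped."""
    total = 0.0
    for attr, value in variant_qos.items():
        d = deviation(value, thresholds[attr])
        if attr not in negative_attrs:
            d = -d
        total += priorities.get(attr, 0.0) * d
    return total

def select_variant(variants, thresholds, priorities, negative_attrs):
    """Pick the variant with the best (lowest) weighted deviation score."""
    return min(variants,
               key=lambda v: score(variants[v], thresholds,
                                   priorities, negative_attrs))
```

A negative deviation for a negative attribute (e.g. response time under its threshold) improves a variant's score, which matches the intuition that staying below the user's bound is desirable.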
The authors illustrate the approach with an intelligent route‑planner scenario. The functional requirement “provide driving time between two locations” can be satisfied by three variants: (V1) traffic data from a road department, (V2) crowdsourced driver network, and (V3) satellite image analysis. Each variant is annotated with response time, cost, and an accuracy score (0‑10). Two example user profiles are presented: (a) a time‑critical user who sets response time as high priority (< 4 s) and cost < 10 cents, and (b) a cost‑sensitive user who requires cost = 0 regardless of latency. The system translates these preferences into macros, updates QoS values at runtime, and selects the appropriate variant for each user.
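The two user profiles from the route-planner scenario can be worked through concretely. The QoS numbers below are made up for illustration (the paper annotates each variant with response time, cost, and accuracy, but the values change at runtime); the selection logic is a simplified sketch of the dispatching policy.

```python
# Illustrative QoS values per variant (hypothetical numbers)
variants = {
    "V1_road_dept":   {"response_time": 2.0, "cost": 5.0,  "accuracy": 8.0},
    "V2_crowdsource": {"response_time": 3.5, "cost": 0.0,  "accuracy": 6.0},
    "V3_satellite":   {"response_time": 6.0, "cost": 20.0, "accuracy": 9.0},
}

def feasible(qos, upper_bounds):
    """A variant is feasible if it meets every hard upper-bound threshold."""
    return all(qos[a] <= b for a, b in upper_bounds.items())

# User (a): response time is high priority (< 4 s), cost < 10 cents.
# Among feasible variants, the one with the lowest response time wins.
time_user = {v: q for v, q in variants.items()
             if feasible(q, {"response_time": 4.0, "cost": 10.0})}
best_a = min(time_user, key=lambda v: time_user[v]["response_time"])

# User (b): cost must be 0; latency is ignored.
cost_user = [v for v, q in variants.items() if q["cost"] == 0.0]
```

With these numbers, V3 is excluded for user (a) by both thresholds, V1 beats V2 on response time, and only V2 satisfies user (b)'s zero-cost requirement.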
Key contributions:
- A formal, macro‑based method to capture and quantify multi‑criteria QoS preferences per user, enabling multiple simultaneous Service Level Objectives (SLOs).
- A lightweight variant selection algorithm that avoids heavyweight multi‑attribute utility or regression models, making it suitable for fast runtime decisions.
- An integrated change‑detection and model‑updating mechanism that keeps the computational model synchronized with the actual QoS of underlying services.
Limitations noted by the authors include the static nature of the initial variant set (new services require model regeneration) and the potential inability of simple macros to express highly non‑linear or context‑dependent user preferences. Future work is suggested to incorporate dynamic variant discovery, machine‑learning‑based preference inference, and hybrid optimization techniques to overcome these constraints.
Overall, the paper presents a pragmatic framework that bridges the gap between high‑level, negotiable quality requirements and low‑level, runtime service selection, offering a viable path toward truly adaptive service‑oriented applications.