Using Models at Runtime to Address Assurance for Self-Adaptive Systems

A self-adaptive software system modifies its behavior at runtime in response to changes within the system or in its execution environment. The fulfillment of the system requirements needs to be guaranteed even in the presence of adverse conditions and adaptations. Thus, a key challenge for self-adaptive software systems is assurance. Traditionally, confidence in the correctness of a system is gained through a variety of activities and processes performed at development time, such as design analysis and testing. In the presence of self-adaptation, however, some of the assurance tasks may need to be performed at runtime. This need calls for the development of techniques that enable continuous assurance throughout the software life cycle. Fundamental to the development of runtime assurance techniques is research into the use of models at runtime (M@RT). This chapter explores the state of the art for using M@RT to address the assurance of self-adaptive software systems. It defines what information can be captured by M@RT, specifically for the purpose of assurance, and puts this definition into the context of existing work. We then outline key research challenges for assurance at runtime and characterize assurance methods. The chapter concludes with an exploration of selected application areas where M@RT could provide significant benefits beyond existing assurance techniques for adaptive systems.


💡 Research Summary

Self‑adaptive software systems modify their structure or behavior at runtime in response to changes in the system itself or its execution environment. While traditional software assurance relies on design‑time activities such as static analysis, testing, and formal verification, these techniques cannot guarantee that an adaptive system will continue to satisfy its requirements after it has reconfigured itself. Consequently, assurance must be extended into the operational phase, giving rise to the notion of continuous assurance.

The chapter positions Models at Runtime (M@RT) as the cornerstone technology for achieving continuous assurance. An M@RT is a living, executable model that mirrors the current configuration, behavior, goals, and environmental context of the running system. By keeping this model synchronized with the actual system, it becomes possible to evaluate adaptation decisions against the system’s quality requirements before the changes are enacted. The authors categorize the information that can be captured by an M@RT for assurance purposes into four groups: (1) structural and interface metadata, (2) the set of currently active adaptation options and their parameters, (3) a probabilistic representation of the environment and its uncertainties, and (4) quantitative expressions of quality goals and constraints (e.g., reliability, performance, security).
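The four information groups above can be sketched as a single runtime model structure. The following is a minimal, hypothetical illustration (all class and field names are my own, not from the chapter) of how such a model could hold structural metadata, active adaptation options, an environment estimate, and quantitative goals, and check measurements against those goals before an adaptation is enacted:

```python
from dataclasses import dataclass, field

@dataclass
class QualityGoal:
    name: str                       # e.g. "latency"
    threshold: float                # quantitative constraint, e.g. 200 (ms)
    higher_is_better: bool = False  # False: measurement must stay below threshold

@dataclass
class AdaptationOption:
    name: str
    parameters: dict = field(default_factory=dict)

@dataclass
class RuntimeModel:
    # (1) structural and interface metadata
    components: dict = field(default_factory=dict)
    # (2) currently active adaptation options and their parameters
    options: list = field(default_factory=list)
    # (3) probabilistic view of the environment (event -> estimated probability)
    environment: dict = field(default_factory=dict)
    # (4) quantitative quality goals and constraints
    goals: list = field(default_factory=list)

    def violates(self, measurements: dict) -> list:
        """Return the names of goals whose thresholds the measurements break."""
        failed = []
        for g in self.goals:
            value = measurements.get(g.name)
            if value is None:
                continue  # no observation for this goal yet
            ok = value >= g.threshold if g.higher_is_better else value <= g.threshold
            if not ok:
                failed.append(g.name)
        return failed
```

For example, a model with a 200 ms latency goal would report a violation for a measured latency of 250 ms, which a planner could use to veto the corresponding adaptation before enacting it.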

Having identified the model content, the chapter outlines the principal research challenges that must be solved to make runtime assurance practical. Maintaining model‑system consistency in the face of asynchronous updates is essential; any lag can render verification results obsolete. The computational cost of performing verification or analysis at runtime must be bounded so that the assurance infrastructure does not jeopardize the system’s performance. Uncertainty handling is another critical issue: the model must be expressive enough to capture stochastic or non‑linear environmental dynamics. Finally, there is a need for metrics and evidence‑management mechanisms that can assess the trustworthiness of the assurance outcomes themselves.
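The consistency challenge in particular admits a simple guard: only trust an analysis result if the model was synchronized with the running system recently enough. The sketch below is an illustrative assumption of mine, not a mechanism from the chapter; the class names and the staleness-bound policy are hypothetical:

```python
import time

class SynchronizedModel:
    """Runtime model whose freshness is tracked against a staleness bound."""

    def __init__(self, max_staleness_s: float):
        self.max_staleness_s = max_staleness_s
        self.last_sync = None   # monotonic timestamp of the last update
        self.state = {}

    def sync(self, observed_state: dict, now: float = None):
        """Refresh the model from monitoring data."""
        self.state = dict(observed_state)
        self.last_sync = time.monotonic() if now is None else now

    def is_fresh(self, now: float = None) -> bool:
        if self.last_sync is None:
            return False
        now = time.monotonic() if now is None else now
        return (now - self.last_sync) <= self.max_staleness_s

def verify_if_fresh(model: SynchronizedModel, check, now: float = None):
    """Run `check` only when its verdict would not be obsolete.

    Returns None when the model is stale, signalling that a re-sync
    is required before any verification result can be trusted.
    """
    if not model.is_fresh(now):
        return None
    return check(model.state)
```

Bounding analysis overhead follows the same pattern: the verification step itself can be given a time budget, with the planner falling back to a conservative default action when the budget is exceeded.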

To address these challenges, the authors classify runtime assurance techniques into three complementary families. Formal‑verification‑based approaches apply model checking, theorem proving, or runtime synthesis to the M@RT, guaranteeing hard constraints such as safety or resource limits. Because exhaustive verification is often infeasible at runtime, these methods rely on abstraction, incremental checking, or selective verification of the most critical adaptation options. Statistical and machine‑learning‑based approaches continuously ingest execution logs and sensor readings, updating predictive models that estimate quality attributes (e.g., latency, failure probability). Such estimators can flag high‑risk adaptation choices before they are applied. Simulation and digital‑twin approaches create high‑fidelity virtual replicas of the system and its environment, allowing “what‑if” analyses of adaptation scenarios without affecting the live system. The digital twin stays synchronized with the real system, providing a sandbox for safety‑critical validation.
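The statistical family can be illustrated with a deliberately small sketch: an exponentially weighted failure-rate estimator, updated from logged execution outcomes per adaptation option, that flags high-risk choices before they are applied. The estimator design, option names, and thresholds below are my own illustrative assumptions, not taken from the chapter:

```python
class RiskEstimator:
    """EWMA estimate of per-option failure probability from execution logs."""

    def __init__(self, alpha: float = 0.2, risk_threshold: float = 0.3):
        self.alpha = alpha                  # smoothing factor for the EWMA
        self.risk_threshold = risk_threshold
        self.failure_rate = {}              # option name -> estimated P(failure)

    def observe(self, option: str, failed: bool):
        """Update the estimate from one logged outcome of applying `option`."""
        prev = self.failure_rate.get(option, 0.0)
        self.failure_rate[option] = prev + self.alpha * (float(failed) - prev)

    def is_high_risk(self, option: str) -> bool:
        """Flag options whose estimated failure rate exceeds the threshold."""
        return self.failure_rate.get(option, 0.0) > self.risk_threshold

# Illustrative usage: three failures out of four pushes "scale_down"
# over the risk threshold, while "scale_up" stays acceptable.
est = RiskEstimator()
for failed in [True, True, True, False]:
    est.observe("scale_down", failed)
est.observe("scale_up", False)
```

In a real deployment such an estimator would only gate the decision; hard safety constraints would still go through the formal-verification path described above, since statistical estimates offer probabilistic rather than guaranteed bounds.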

The chapter then surveys four representative application domains where M@RT‑enabled assurance can deliver tangible benefits beyond traditional techniques. In smart‑grid management, runtime models are used to verify voltage stability and load‑balancing constraints as the grid autonomously reconfigures in response to demand fluctuations. Autonomous vehicles employ digital twins to evaluate path‑planning and control adaptations under uncertain sensor inputs, ensuring safety properties are never violated. Cloud orchestration platforms leverage model‑based runtime verification to enforce Service Level Agreements while dynamically scaling services. Finally, medical‑IoT systems use statistical assurance on the M@RT to continuously monitor therapeutic algorithm adjustments, guaranteeing patient safety despite rapid physiological changes.

In summary, the chapter argues that integrating M@RT into the software lifecycle transforms assurance from a pre‑deployment activity into a continuous, runtime service. It identifies the essential model artifacts, delineates the open research problems (model consistency, analysis overhead, uncertainty, evidence management), and maps existing formal, statistical, and simulation‑based methods onto these challenges. The authors conclude by calling for further work on automated model evolution, lightweight verification algorithms, and robust evidence‑based trust frameworks to fully realize the promise of runtime assurance for self‑adaptive systems.

