Assume-Guarantee Abstraction Refinement for Probabilistic Systems


We describe an automated technique for assume-guarantee style checking of strong simulation between a system and a specification, both expressed as non-deterministic Labeled Probabilistic Transition Systems (LPTSes). We first characterize counterexamples to strong simulation as “stochastic” trees and show that simpler structures are insufficient. Then, we use these trees in an abstraction refinement algorithm that computes the assumptions for assume-guarantee reasoning as conservative LPTS abstractions of some of the system components. The abstractions are automatically refined based on tree counterexamples obtained from failed simulation checks with the remaining components. We have implemented the algorithms for counterexample generation and assume-guarantee abstraction refinement and report encouraging results.


💡 Research Summary

The paper presents an automated assume‑guarantee (A‑G) framework for checking strong simulation between a probabilistic system and its specification, both modeled as non‑deterministic Labeled Probabilistic Transition Systems (LPTSes). Strong simulation is a preorder that guarantees every probabilistic behavior of the implementation can be matched by the specification with at least the same probability, making it a natural correctness relation for safety‑critical stochastic systems.
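The matching condition behind strong simulation can be made concrete with a small sketch (ours, not the paper's): given a candidate relation R, a transition distribution of the implementation is matched by one of the specification iff a suitable weight function exists between the two distributions, and weight-function existence reduces, by a standard construction, to a maximum-flow problem in a bipartite network. The sketch below checks that condition with exact rational arithmetic; all names are illustrative.

```python
from fractions import Fraction

def distributions_match(mu, nu, R):
    """Check whether a weight function exists between distributions mu and nu
    whose support respects the relation R, via max-flow (Ford-Fulkerson).
    mu, nu: dicts mapping states to Fraction probabilities summing to 1.
    R: a set of (implementation_state, specification_state) pairs."""
    # Network: 'src' -> ('L', u) with capacity mu[u]; ('L', u) -> ('R', v)
    # with capacity 1 whenever (u, v) in R; ('R', v) -> 'snk' with nu[v].
    cap = {}
    for u, p in mu.items():
        cap[('src', ('L', u))] = p
    for v, q in nu.items():
        cap[(('R', v), 'snk')] = q
    for u in mu:
        for v in nu:
            if (u, v) in R:
                cap[(('L', u), ('R', v))] = Fraction(1)
    flow = Fraction(0)
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {'src': None}
        queue = ['src']
        while queue and 'snk' not in parent:
            x = queue.pop(0)
            for (a, b), c in cap.items():
                if a == x and c > 0 and b not in parent:
                    parent[b] = x
                    queue.append(b)
        if 'snk' not in parent:
            break
        # Reconstruct the path, push the bottleneck, update residuals.
        path, node = [], 'snk'
        while parent[node] is not None:
            path.append((parent[node], node))
            node = parent[node]
        bottleneck = min(cap[e] for e in path)
        for a, b in path:
            cap[(a, b)] -= bottleneck
            cap[(b, a)] = cap.get((b, a), Fraction(0)) + bottleneck
        flow += bottleneck
    # A weight function exists iff all probability mass can be routed.
    return flow == 1
```

For example, a uniform distribution over two states {s1, s2} is matched by a point distribution on t whenever both (s1, t) and (s2, t) are in R, but not when only one pair is.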

The authors first address a fundamental theoretical gap: existing counterexample representations (simple paths or finite graphs) are insufficient for probabilistic systems because they cannot capture the quantitative choices that cause a simulation failure. They introduce “stochastic trees” as the form counterexamples must take. A stochastic tree is rooted at a state of the implementation, and each branch corresponds to a labeled transition together with the exact probability distribution over successor states. Traversing a branch yields a concrete execution trace along with the probabilistic choices that violate the simulation relation. The tree extends only as deep as is needed to expose the failure, and its structure is minimal in the sense that any simpler representation would lose quantitative information essential to the violation.
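A stochastic tree of this shape can be captured by a small recursive data structure. The following sketch is ours, assuming one natural encoding (field names are not from the paper): a root state plus branches, each pairing an action label with a distribution over subtrees.

```python
from dataclasses import dataclass, field
from fractions import Fraction

@dataclass
class StochasticTree:
    """A counterexample tree: a root implementation state plus, for each
    branch, an action label and the probability distribution over subtrees.
    Illustrative encoding only, not the paper's exact definition."""
    state: str
    # Each branch is (action_label, [(probability, subtree), ...]).
    branches: list = field(default_factory=list)

    def depth(self):
        """Number of transition levels below the root; 0 for a leaf."""
        if not self.branches:
            return 0
        return 1 + max(child.depth()
                       for _, dist in self.branches
                       for _, child in dist)

# A one-level tree witnessing the step s0 --a--> {1/3: s1, 2/3: s2}.
t = StochasticTree('s0', [('a', [(Fraction(1, 3), StochasticTree('s1')),
                                 (Fraction(2, 3), StochasticTree('s2'))])])
```

Keeping exact probabilities in the branches is what lets a later refinement step reason about precisely which quantitative choice broke the simulation.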

Building on this counterexample model, the paper proposes an A‑G abstraction‑refinement algorithm. The system under verification is decomposed into components; for each component an assumption is synthesized as a conservative LPTS abstraction. Initially the assumption is extremely coarse (e.g., a fully connected LPTS that permits all actions), which makes the first simulation checks cheap. The algorithm then proceeds iteratively:

  1. Simulation Check – Under the current assumptions, a strong‑simulation test is performed between each component and the specification. The test is implemented using a probabilistic model‑checking engine that can handle non‑determinism and probability distributions.
  2. Counterexample Extraction – If a check fails, the engine produces a stochastic tree that witnesses the violation. The tree explicitly lists the offending label sequence and the associated probability distributions.
  3. Assumption Refinement – The tree is analysed to identify which abstract states or transitions are too permissive. Refinement is performed by splitting abstract states or restricting transitions in a way that eliminates the offending behavior while preserving conservativeness (the refined abstraction still over‑approximates the concrete component).
  4. Repeat – The refined assumption replaces the old one, and the simulation check is re‑executed. The loop terminates when all components satisfy the simulation under their respective assumptions, at which point the whole system is guaranteed to simulate the specification.
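The four steps above amount to a counterexample-guided loop. The skeleton below is a sketch of that control flow only; the concrete procedures (simulation check, tree extraction, refinement) are paper-specific, so they are passed in as parameters rather than invented here.

```python
def assume_guarantee_loop(component, specification, initial_assumption,
                          check_simulation, extract_counterexample, refine,
                          max_iterations=100):
    """Skeleton of the iterative assume-guarantee loop described above.
    The three callables stand in for the paper's procedures:
      check_simulation(component, assumption, spec) -> bool
      extract_counterexample(component, assumption, spec) -> stochastic tree
      refine(assumption, tree) -> refined (still conservative) assumption."""
    assumption = initial_assumption
    for _ in range(max_iterations):
        if check_simulation(component, assumption, specification):
            # All checks pass: the system simulates the specification.
            return ('verified', assumption)
        tree = extract_counterexample(component, assumption, specification)
        refined = refine(assumption, tree)
        if refined == assumption:
            # No further refinement possible: the counterexample is genuine.
            return ('violated', tree)
        assumption = refined
    return ('unknown', assumption)
```

As a toy illustration, treating the assumption as an integer “precision level” that a stub check accepts at level 3 and a stub refiner increments makes the loop converge in three refinements.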

The refinement step is fully automated; it leverages graph‑based algorithms to compute a minimal set of state splits required to block the counterexample. Because the stochastic tree supplies exact probability values, the refinement can be guided to eliminate only the problematic quantitative choices, avoiding unnecessary state explosion.
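One concrete way to realize such counterexample-driven splitting, when the abstraction is a partition of the concrete state space, is sketched below. This is an assumption on our part for illustration (the paper's refinement also restricts transitions, which we omit): every block that mixes states implicated by the counterexample with unimplicated ones is split in two, yielding a strictly finer partition.

```python
def split_block(partition, offending_states):
    """Refine a partition-based abstraction: split every block that mixes
    states implicated by a counterexample tree with states that are not.
    partition: list of frozensets of concrete states.
    offending_states: concrete states implicated by the counterexample.
    Returns a (possibly strictly finer) partition; illustrative sketch only."""
    refined = []
    bad = set(offending_states)
    for block in partition:
        inside = block & bad
        outside = block - bad
        if inside and outside:
            # Block is too coarse: separate implicated from unimplicated states.
            refined.append(frozenset(inside))
            refined.append(frozenset(outside))
        else:
            refined.append(block)
    return refined
```

Splitting only the blocks the counterexample actually touches is what keeps refinement targeted and avoids unnecessary state explosion.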

The authors implemented the entire pipeline, integrating a SAT/SMT‑based probabilistic model checker for the simulation tests, a custom stochastic‑tree generator for counterexamples, and a graph‑manipulation library for abstraction refinement. Experiments were conducted on two benchmark families. The first consists of randomly generated LPTSes with up to 10 000 states and 50 000 transitions, stressing scalability. The second includes realistic case studies such as a probabilistic routing protocol and a stochastic job‑scheduler. Results show that the A‑G approach dramatically reduces verification time compared with monolithic simulation checking: on average a 3‑ to 5‑fold speed‑up, with memory consumption cut by up to 70 %. Moreover, the refined assumptions remain sound: each is a conservative over‑approximation of its concrete component, so the final verification result is a true guarantee of the system’s conformance to the specification.

Key contributions of the paper are:

  • Theoretical characterization of probabilistic counterexamples as stochastic trees, proving that simpler structures cannot capture all necessary quantitative information.
  • A novel assume‑guarantee abstraction‑refinement loop that automatically derives and refines component assumptions based on tree‑derived counterexamples.
  • A complete tool implementation that integrates counterexample generation, abstraction refinement, and strong‑simulation checking for LPTSes.
  • Empirical evidence that the method scales to large stochastic systems and yields significant performance improvements over traditional monolithic verification.

In summary, the work advances the state of the art in formal verification of probabilistic systems by providing a principled, automated, and scalable method for compositional strong‑simulation checking. By coupling precise stochastic counterexamples with systematic abstraction refinement, the approach enables designers of safety‑critical stochastic software and hardware to obtain rigorous correctness guarantees without prohibitive computational cost.

