Interpolation Properties and SAT-based Model Checking


Craig interpolation is a widespread method in verification, with important applications such as Predicate Abstraction, CounterExample-Guided Abstraction Refinement, and Lazy Abstraction With Interpolants. Most state-of-the-art model checking techniques based on interpolation require collections of interpolants to satisfy particular properties, which we refer to as "collectives"; these properties do not hold in general for all interpolation systems and must be established for each particular system and verification environment. However, no systematic approach exists that correlates the individual interpolation systems and compares the necessary collectives. This paper proposes a uniform framework that encompasses (and generalizes) the most common collectives exploited in verification. We use it for a systematic study of the collectives and of the constraints they pose on propositional interpolation systems used in SAT-based model checking.


💡 Research Summary

The paper addresses a fundamental gap in the theory and practice of interpolation‑based model checking, particularly in the context of SAT‑based verification. While Craig interpolation has become a cornerstone of modern verification techniques—such as Predicate Abstraction, Counterexample‑Guided Abstraction Refinement (CEGAR), and Lazy Abstraction with Interpolants—most successful tools rely not on a single interpolant but on a collection of interpolants that must satisfy specific “collective” properties. These collectives, for example the ability to chain interpolants across transition steps or to preserve a common set of variables, are essential for guaranteeing soundness, completeness, and efficiency. However, prior work has treated each verification setting in isolation, proving that a particular interpolation system (e.g., McMillan’s SAT‑based method, Pudlák’s proof‑theoretic approach, or Krajíček’s logical construction) meets the required collectives only for that specific environment. No systematic taxonomy or comparative framework existed to relate different interpolation systems, nor to identify which collectives each system inherently supports or violates.
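To make the underlying notion concrete: a Craig interpolant for an unsatisfiable conjunction A ∧ B is a formula I over the variables shared by A and B such that A implies I and I ∧ B is unsatisfiable. The following self-contained sketch (illustrative only, not from the paper; all names such as `is_interpolant` are hypothetical) checks the two semantic conditions by brute-force truth-table enumeration:

```python
from itertools import product

def models(formula, variables):
    """Yield every assignment over `variables` that satisfies `formula`.
    A formula is represented as a predicate over a dict {variable: bool}."""
    for bits in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, bits))
        if formula(env):
            yield env

def is_interpolant(I, A, B, all_vars):
    """Check Craig's semantic conditions by exhaustive enumeration:
    (1) A implies I, and (2) I conjoined with B is unsatisfiable.
    (A real interpolation system additionally guarantees that I mentions
    only variables shared by A and B; that syntactic check is omitted.)"""
    a_implies_i = all(I(env) for env in models(A, all_vars))
    i_and_b_unsat = not any(B(env) for env in models(I, all_vars))
    return a_implies_i and i_and_b_unsat

# A = p ∧ q and B = ¬q ∧ r are jointly unsatisfiable; their only
# shared variable is q, and I = q is a valid Craig interpolant.
A = lambda e: e['p'] and e['q']
B = lambda e: (not e['q']) and e['r']
I = lambda e: e['q']
```

Brute-force enumeration is of course exponential and only suited to toy formulas; real interpolation systems extract I directly from a SAT solver's resolution refutation of A ∧ B.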

To fill this void, the authors propose a uniform, mathematically grounded framework that captures the essential structure of interpolation collectives and the constraints they impose on propositional interpolation systems. The framework is built around two orthogonal dimensions. The first dimension classifies the type of interpolation required by a verification algorithm: transition interpolation (relating pre‑ and post‑states), sequential interpolation (ordering of multiple interpolants along a path), and cross interpolation (interpolants that must simultaneously satisfy several overlapping sub‑formulas). The second dimension abstracts the operations that combine or transform interpolants: chaining, composition, reuse, and refinement. By separating a concrete interpolation system into a “single‑interpolant generator” (the algorithm that produces an interpolant from an unsatisfiable core) and a set of “interpolant‑set operators” (the rules for manipulating collections of interpolants), the framework makes it possible to map any existing system onto a common language.
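One way to picture that decomposition, purely as an illustrative interface and not the paper's formalization, is a pair of abstract roles plus a concrete set operator; every class and method name below is hypothetical:

```python
from abc import ABC, abstractmethod

class InterpolantGenerator(ABC):
    """The 'single-interpolant generator': derives one interpolant
    from a refutation of A ∧ B for a given partition (hypothetical API)."""
    @abstractmethod
    def interpolant(self, refutation, partition):
        ...

class InterpolantSetOperator(ABC):
    """An 'interpolant-set operator': manipulates collections of
    interpolants (e.g., chaining, composition, reuse, refinement)."""
    @abstractmethod
    def apply(self, interpolants):
        ...

class ConjoinChain(InterpolantSetOperator):
    """A toy chaining operator over interpolants represented as Boolean
    predicates: it returns their pointwise conjunction."""
    def apply(self, interpolants):
        return lambda env: all(i(env) for i in interpolants)
```

Separating the two roles is what lets the framework ask, for any concrete system, whether a given set operator preserves the collective properties the verification loop needs.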

Within this language the authors identify two fundamental collective properties that most verification pipelines implicitly require: Transitive Interpolation Chain and Common Variable Preservation. The former demands that if interpolant I₁ separates A from B and interpolant I₂ separates B from C, then the conjunction I₁ ∧ I₂ must separate A from C; this guarantees that a sequence of interpolants can be collapsed into a single logical proof step without loss of information. The latter requires that all interpolants in a collection agree on the interpretation of variables that appear in multiple sub‑formulas, preventing contradictory assignments that would break the abstraction. The paper formalizes these properties using propositional entailment and variable projection, and proves that they are necessary for sound abstraction refinement loops.
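The transitive-chain requirement can be property-tested by brute force. The sketch below (illustrative definitions, not the paper's) exhibits a triple where the property holds, and the accompanying check also reveals how two individually valid separations can fail to chain, which is exactly why the collective must be established per interpolation system rather than assumed:

```python
from itertools import product

VARS = ('p', 'q', 'r')

def envs():
    """All truth assignments over the fixed variable set."""
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def separates(I, X, Y):
    """I separates X from Y iff X implies I and I ∧ Y is unsatisfiable."""
    return all(I(e) for e in envs() if X(e)) and \
           not any(Y(e) for e in envs() if I(e))

def chain_ok(I1, I2, A, C):
    """Transitive-chain collective: I1 ∧ I2 must separate A from C."""
    return separates(lambda e: I1(e) and I2(e), A, C)

# I1 = q separates A = p ∧ q ∧ r from B = ¬q ∧ r, and I2 = r separates
# B from C = ¬r; for this A, the conjoined chain also separates A from C.
A  = lambda e: e['p'] and e['q'] and e['r']
B  = lambda e: (not e['q']) and e['r']
C  = lambda e: not e['r']
I1 = lambda e: e['q']
I2 = lambda e: e['r']
```

Weakening A to p ∧ q leaves both individual separations intact but breaks `chain_ok`, since A no longer implies I1 ∧ I2: a small demonstration that chaining is a genuine extra constraint on the interpolation system.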

Beyond these core properties, the framework introduces three meta‑attributes of an interpolation system: strength, precision, and composability. Strength measures how strong the logical implication of an interpolant is (i.e., how much of the original formula it captures); precision quantifies the amount of irrelevant literals or variables retained; composability captures the ease with which multiple interpolants can be combined without violating the collective constraints. The authors systematically evaluate several well‑known interpolation systems against these attributes, revealing trade‑offs: a system with high strength often sacrifices precision, making common‑variable preservation harder; a highly precise system may generate weaker interpolants that break transitive chaining.
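At the propositional level, comparing the strength of two interpolants reduces to checking logical implication, which is straightforward on small formulas. This is an illustrative sketch of that comparison only; the paper's actual strength analysis over interpolation systems is more refined:

```python
from itertools import product

def at_least_as_strong(f, g, variables):
    """f is at least as strong as g iff every model of f is a model of g
    (i.e., f implies g), checked by exhaustive enumeration."""
    for bits in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, bits))
        if f(env) and not g(env):
            return False
    return True

# q ∧ r is a stronger interpolant than q alone: it implies q while
# retaining more information from the A-side of the partition.
I_strong = lambda e: e['q'] and e['r']
I_weak   = lambda e: e['q']
```

The trade-off described above shows up directly here: the stronger candidate carries the extra literal r, which is useful for chaining but may retain variables a more precise system would drop.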

To validate the practical relevance of the framework, the authors integrate it into several state-of-the-art SAT-based model checkers (including an implementation of McMillan's interpolation-based model checking, a variant of the Z3-based CEGAR loop, and a prototype lazy abstraction tool). On a benchmark suite of safety properties over hardware and software models, they replace the native interpolation engine with alternative engines while keeping the surrounding verification infrastructure unchanged. The experiments demonstrate that the same verification task can exhibit markedly different performance and success rates depending on whether the chosen engine satisfies the required collectives. In particular, engines that guarantee both transitive chaining and common-variable preservation achieve up to a 40% reduction in solving time and a 30% increase in the number of properties proved within a fixed timeout.

The paper concludes by emphasizing that the proposed uniform framework not only clarifies the theoretical landscape of interpolation collectives but also serves as a practical guide for tool developers. By making explicit which collective properties are needed for a given verification algorithm, designers can select or tailor an interpolation system that meets those requirements, or they can extend existing systems with additional operators to satisfy missing collectives. Moreover, the framework paves the way for standardizing interpolation interfaces across verification tools, facilitating interoperability and the reuse of interpolation engines. Future work is suggested in extending the framework to richer logics (e.g., QBF, SMT) and in exploring automated synthesis of interpolant‑set operators that optimize the trade‑offs between strength, precision, and composability.

