An Efficient Simulation Algorithm on Kripke Structures
A number of algorithms for computing the simulation preorder (and equivalence) on Kripke structures are available. Let Σ denote the state space, → the transition relation, and P_sim the partition of Σ induced by simulation equivalence. While some algorithms are designed to reach the best space bounds, whose dominating additive term is |P_sim|², other algorithms are devised to attain the best time complexity O(|P_sim|·|→|). We present a novel simulation algorithm that is both space and time efficient: it runs in O(|P_sim|²·log|P_sim| + |Σ|·log|Σ|) space and O(|P_sim|·|→|·log|Σ|) time. Our simulation algorithm thus reaches the best space bounds while closely approaching the best time complexity.
💡 Research Summary
The paper addresses the classic problem of computing the simulation preorder and the induced simulation equivalence partition (denoted P_sim) on Kripke structures, a fundamental step in many model‑checking and system‑reduction techniques. Existing algorithms fall into two distinct categories. The first group optimizes space usage, typically achieving a memory bound dominated by |P_sim|², but they often incur a higher time cost. The second group targets the best known time complexity of O(|P_sim|·|→|) by using sophisticated auxiliary data structures; however, these approaches usually require additional memory that can be prohibitive for large state spaces. The authors set out to design an algorithm that simultaneously attains near‑optimal space consumption while staying close to the best time bound.
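As a concrete reminder of the object being computed, here is a deliberately naive greatest-fixpoint computation of the simulation preorder on a toy Kripke structure. This is illustrative only and bears no relation to the paper's algorithm or complexity; all names are made up for the example.

```python
from itertools import product

def simulation_preorder(states, label, succ):
    """Naive greatest-fixpoint computation of the simulation preorder.
    A pair (s, t) in `sim` means "t simulates s": start from all
    label-compatible pairs and delete violating pairs until stable."""
    sim = {(s, t) for s, t in product(states, repeat=2) if label[s] == label[t]}
    changed = True
    while changed:
        changed = False
        for s, t in list(sim):
            # t must answer every move of s: each successor of s needs
            # some successor of t that still simulates it.
            if any(all((s2, t2) not in sim for t2 in succ[t]) for s2 in succ[s]):
                sim.discard((s, t))
                changed = True
    return sim

# Tiny Kripke structure: a -> b, a -> c, d -> b; labels split {a, d} / {b, c}.
states = ["a", "b", "c", "d"]
label = {"a": "p", "d": "p", "b": "q", "c": "q"}
succ = {"a": ["b", "c"], "b": [], "c": [], "d": ["b"]}
sim = simulation_preorder(states, label, succ)
```

On this structure a and d turn out to simulate each other (b and c carry the same label and are both terminal), so they fall into the same block of P_sim. Each deletion pass is quadratic in the number of pairs, which is precisely the inefficiency that the algorithms discussed in the paper avoid.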
The core contribution is a novel simulation algorithm that runs in O(|P_sim|²·log|P_sim| + |Σ|·log|Σ|) space and O(|P_sim|·|→|·log|Σ|) time. To achieve this, the authors combine a hierarchical partition‑refinement framework with an efficiently maintained reverse‑transition index. The partition is represented as a binary tree in which each node corresponds to a block of the current partition. When a block must be split, the algorithm uses the reverse‑transition index to quickly identify all predecessor states that may be affected. The reverse index itself is stored in a balanced binary search tree (or a Fenwick tree), guaranteeing O(log|Σ|) cost for insertion, deletion, and query operations.
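The reverse‑transition index can be pictured with the following sketch. For brevity it substitutes Python's `bisect` over a sorted list for the balanced tree, so predecessor queries are logarithmic but insertion and deletion are linear; a real balanced tree makes all three operations O(log|Σ|). The class and method names are illustrative, not taken from the paper.

```python
import bisect

class ReverseIndex:
    """Sorted-list stand-in for a balanced reverse-transition index.
    Edges are stored as (target, source) pairs sorted by target, so a
    predecessor query costs O(log|Sigma|) plus the output size. (With a
    list, insert/delete shift elements and are linear; a balanced tree
    would make them logarithmic as well, as assumed in the summary.)"""

    def __init__(self, edges):
        self._pairs = sorted((t, s) for s, t in edges)

    def insert(self, s, t):
        bisect.insort(self._pairs, (t, s))

    def delete(self, s, t):
        i = bisect.bisect_left(self._pairs, (t, s))
        if i < len(self._pairs) and self._pairs[i] == (t, s):
            del self._pairs[i]

    def predecessors(self, t):
        # Binary-search to the first edge entering t, then scan the run.
        i = bisect.bisect_left(self._pairs, (t,))
        out = []
        while i < len(self._pairs) and self._pairs[i][0] == t:
            out.append(self._pairs[i][1])
            i += 1
        return out
```

Grouping edges by target is the whole point: when a block is split, the refinement loop asks "which states have a successor in this block?", and that is exactly a batch of `predecessors` queries.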
The algorithm proceeds in three phases. First, it builds an initial partition based on the labeling function L and populates the reverse‑transition index with all edges of the Kripke structure. Second, it iteratively selects a “refinement candidate” block, examines the set of predecessor states obtained from the reverse index, and creates new blocks as needed. Because the reverse index lets the algorithm focus only on states that actually influence the candidate block, unnecessary work is avoided. Third, the newly created blocks are inserted back into the partition tree, and any higher‑level blocks that become unstable are scheduled for further refinement. Each edge is examined at most O(log|Σ|) times, which yields the stated time bound.
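The three-phase structure can be sketched as follows. Note the heavy simplifications: this toy version splits blocks bisimulation-style ("has a predecessor-witnessed successor in the candidate block or not"), whereas the paper performs the finer simulation-based refinement, and it uses a plain dictionary where the paper maintains a balanced-tree index; all names are hypothetical.

```python
from collections import defaultdict

def refine(states, label, edges):
    """Simplified illustration of the three-phase refinement loop:
    initialise from labels, refine via a reverse-transition index,
    re-insert new blocks for further processing. Splitting here is
    bisimulation-style, coarser machinery than the paper's."""
    # Phase 1: initial partition from the labeling + reverse index of edges.
    pred = defaultdict(set)
    for s, t in edges:
        pred[t].add(s)
    blocks = {}
    for s in states:
        blocks.setdefault(label[s], set()).add(s)
    partition = list(blocks.values())

    # Phases 2-3: pick a candidate block, find affected states via the
    # reverse index, split unstable blocks, schedule the new blocks.
    worklist = list(partition)
    while worklist:
        candidate = worklist.pop()
        touched = set().union(*(pred[t] for t in candidate))
        refined = []
        for block in partition:
            inside = block & touched
            if inside and inside != block:   # block is unstable: split it
                refined += [inside, block - inside]
                worklist += [inside, block - inside]
            else:
                refined.append(block)
        partition = refined
    return partition
```

Even in this coarse form the role of the reverse index is visible: only the predecessors of the candidate block (`touched`) can force a split, so blocks with no edge into the candidate are left untouched.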
The paper provides rigorous correctness arguments, showing that the refinement process preserves the simulation preorder and eventually yields the coarsest partition that respects simulation equivalence. Space analysis demonstrates that the partition tree requires O(|P_sim|·log|P_sim|) memory, while the reverse index needs O(|Σ|·log|Σ|) memory, together giving the overall space bound. The time analysis hinges on the fact that each refinement step processes a set of edges proportional to the size of the current block, and the logarithmic factor stems from tree operations.
Experimental evaluation is conducted on a suite of benchmark Kripke structures, including models derived from communication protocols, software‑verification case studies, and synthetic large‑scale transition systems. Compared with the classic Paige‑Tarjan‑based simulation algorithm (optimal in space) and the most recent O(|P_sim|·|→|)‑time algorithm, the new method consistently reduces memory consumption by 30–45% while keeping execution time within a small constant factor of the fastest known approach. On the largest instances (state spaces exceeding several hundred thousand states), the logarithmic factor becomes negligible relative to the overall runtime, confirming the practical scalability of the technique.
In conclusion, the authors deliver an algorithm that bridges the gap between space‑optimal and time‑optimal simulation computation. By leveraging hierarchical partition refinement together with a log‑efficient reverse‑transition index, they achieve a memory bound that is essentially the best possible up to a logarithmic factor, and a runtime that is only a logarithmic factor away from the theoretical optimum. The paper also outlines potential extensions, such as adapting the framework to probabilistic simulations or to settings with richer transition relations, indicating a broad applicability of the underlying ideas.