Distributed MAP in the SpinJa Model Checker


Spin in Java (SpinJa) is an explicit-state model checker for the Promela modelling language, which is also used by the SPIN model checker. Designed to be extensible and reusable, the implementation of SpinJa follows a layered approach in which each new layer extends the functionality of the previous one. While SpinJa has preliminary support for shared-memory model checking, it did not previously support distributed-memory model checking. This tool paper presents a distributed implementation of a maximal accepting predecessors (MAP) search algorithm on top of SpinJa.


💡 Research Summary

The paper presents a distributed implementation of the Maximal Accepting Predecessors (MAP) algorithm on top of SpinJa, a Java‑based explicit‑state model checker for the Promela language. SpinJa’s architecture is layered: a generic layer provides language‑independent state and transition objects together with encoding/decoding and hashing facilities; an abstract layer adds concurrency support and partial‑order reduction; a tool layer implements Promela‑specific constructs such as accept, progress and end labels. This modular design makes it possible to add new verification algorithms with minimal changes to the existing code base.
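As a rough illustration of this layering, each layer can be expressed as an extension of the previous one. The class and method names below are hypothetical, chosen only to mirror the description; they are not SpinJa's actual API.

```java
import java.util.Arrays;

public class LayeredSketch {
    // Generic layer: language-independent state handling with
    // encoding/hashing facilities.
    static class GenericLayer {
        long hash(int[] stateVector) {
            return Arrays.hashCode(stateVector) & 0xffffffffL;
        }
    }

    // Abstract layer: adds concurrency-related hooks such as a
    // partial-order reduction test.
    static class AbstractLayer extends GenericLayer {
        boolean reducible(int[] state) { return false; } // no reduction in this sketch
    }

    // Tool layer: Promela-specific notions such as accept labels.
    static class PromelaLayer extends AbstractLayer {
        boolean isAccepting(int[] state) { return state[0] == 1; } // toy convention
    }

    public static void main(String[] args) {
        PromelaLayer tool = new PromelaLayer();
        System.out.println(tool.isAccepting(new int[]{1, 0})); // prints true
    }
}
```

A new verification algorithm can then sit on top of the tool layer while reusing the hashing and state handling of the layers below, which is the flexibility the paper exploits.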

The MAP algorithm is based on the observation that a state belonging to an accepting cycle is its own predecessor. Storing every accepting predecessor of each state would cause per-state memory usage to grow linearly with the number of accepting states, so MAP stores only a single “maximal” accepting predecessor per state, where maximality is defined with respect to a total ordering of all accepting states. However, a state on a cycle may be “dominated” by a larger accepting state that lies outside the cycle, preventing the cycle from being detected. To overcome this, MAP is executed iteratively: after each iteration the algorithm removes from consideration the accepting states that occurred as maximal accepting predecessors, i.e. the dominating states, which provably cannot lie on an accepting cycle (by placing them in an exclude-set), and repeats the search with the remaining accepting states. The process terminates when either a cycle is found or no accepting states remain.
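The iterative scheme can be sketched on a small explicit graph. This is a minimal single-machine version for illustration, using the numeric state id as the total order on accepting states; all identifiers are ours, not SpinJa's.

```java
import java.util.*;

public class MapSketch {
    // Returns true iff the graph contains a cycle through an accepting state.
    static boolean hasAcceptingCycle(int n, int[][] edges, boolean[] accepting) {
        boolean[] acc = accepting.clone();          // excluded states get acc[v] = false
        while (true) {
            int[] map = new int[n];                 // maximal accepting predecessor, -1 = none
            Arrays.fill(map, -1);
            // Fixpoint propagation of the maximal accepting predecessor along edges.
            boolean changed = true;
            while (changed) {
                changed = false;
                for (int[] e : edges) {
                    int u = e[0], v = e[1];
                    int cand = Math.max(map[u], acc[u] ? u : -1);
                    if (cand > map[v]) { map[v] = cand; changed = true; }
                }
            }
            Set<Integer> image = new HashSet<>();   // accepting states occurring as map values
            for (int v = 0; v < n; v++) {
                if (acc[v] && map[v] == v) return true;   // v is its own accepting predecessor
                if (map[v] >= 0) image.add(map[v]);
            }
            if (image.isEmpty()) return false;      // no accepting predecessors left anywhere
            for (int u : image) acc[u] = false;     // exclude the dominating states and iterate
        }
    }

    public static void main(String[] args) {
        // 0 -> 1 -> 2 -> 1 with state 1 accepting: an accepting cycle {1, 2}.
        int[][] edges = { {0, 1}, {1, 2}, {2, 1} };
        System.out.println(hasAcceptingCycle(3, edges, new boolean[]{false, true, false})); // prints true
    }
}
```

The domination effect shows up with edges {2,0}, {0,1}, {1,0} and accepting states {0, 2}: in the first round every state's map value is 2, so no cycle is seen; state 2 is then excluded, and the second round finds map(0) = 0.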

For the distributed version the authors use the Ibis Portability Layer (IPL) to provide asynchronous message passing and automatic peer discovery. Each worker hashes generated states to decide which node owns the state and then sends the state to that node. Three logical queues are maintained: a control queue (high priority) for messages that affect the control flow, a state queue for newly generated states, and a token queue used for termination detection. Control messages include ITERATE (start a new iteration), TERMINATE (finish the whole algorithm), and FLUSH (pause state processing when a cycle is found).
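The ownership rule can be sketched as a pure function of the state vector, so that every worker independently computes the same owner for any generated state. The method name is an assumption for illustration, not part of SpinJa's API.

```java
import java.util.Arrays;

public class OwnershipSketch {
    // Deterministic hash-based ownership: all workers hash a state the same
    // way, so they all agree on which node stores and explores it.
    static int owner(int[] stateVector, int numWorkers) {
        int h = Arrays.hashCode(stateVector);
        return Math.floorMod(h, numWorkers);  // floorMod keeps the result non-negative
    }

    public static void main(String[] args) {
        int[] s = {3, 1, 4, 1, 5};
        // Two workers generating the same state route it to the same node.
        System.out.println(owner(s, 8) == owner(s.clone(), 8)); // prints true
    }
}
```

A generated state whose owner is a remote node would then be enqueued on that node's state queue, while ITERATE, TERMINATE, and FLUSH messages travel on the higher-priority control queue.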

Termination detection is based on a token‑circulation variant of Safra’s algorithm. When a worker’s state queue becomes empty it may generate a token; the token circulates among workers and, when it returns to its origin, signals that the global search has finished. This approach avoids a global barrier and tolerates network latency.
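The core idea can be sketched as a clean/dirty token making one trip around the ring: each worker forwards the token only when its state queue is empty, and taints it if any work was done since the token last passed. This is a heavy simplification; a real Safra-style implementation must also account for messages still in flight, which this sketch ignores by assuming a quiescent network.

```java
public class TokenRingSketch {
    // One round trip of the token. workedSinceLastToken[i] records whether
    // worker i processed any states since it last forwarded the token.
    static boolean detectTermination(boolean[] workedSinceLastToken) {
        boolean dirty = false;
        for (boolean worked : workedSinceLastToken) {
            dirty |= worked;          // any activity taints the token
        }
        return !dirty;                // a clean round trip signals global termination
    }

    public static void main(String[] args) {
        System.out.println(detectTermination(new boolean[]{false, false, false})); // prints true
        System.out.println(detectTermination(new boolean[]{false, true, false}));  // prints false
    }
}
```

If the token comes back dirty, the initiator simply resets it and starts another round, so no worker ever blocks on a global barrier.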

SpinJa originally stored meta‑data only for transitions, not for states, and its depth‑first search (DFS) algorithm appended meta‑data to the state vector, inflating memory consumption. The MAP algorithm requires per‑state meta‑data (the maximal accepting predecessor), so the authors modified the ProbingHashTable to allocate an explicit meta‑data slot for each state. This change enables efficient storage and retrieval of the MAP information without the large overhead of the previous DFS‑specific workaround.
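The idea behind the modified table can be sketched as an open-addressing table whose slots carry a separate metadata field, rather than metadata appended to the state vector. Names, the linear-probing scheme, and the 0-means-empty convention are illustrative assumptions, not SpinJa's actual ProbingHashTable code.

```java
public class ProbingTableSketch {
    private final long[] states;  // encoded state vectors; 0 marks an empty slot
    private final long[] meta;    // per-state metadata, e.g. the MAP value

    ProbingTableSketch(int capacity) {  // capacity must exceed the number of states
        states = new long[capacity];
        meta = new long[capacity];
    }

    // Linear probing; returns the slot index of the (possibly newly inserted) state.
    int insert(long state) {
        int i = Math.floorMod(Long.hashCode(state), states.length);
        while (states[i] != 0 && states[i] != state) {
            i = (i + 1) % states.length;   // probe the next slot
        }
        states[i] = state;
        return i;
    }

    // Metadata lives beside the state, so reading or updating the maximal
    // accepting predecessor never touches the state vector itself.
    void setMeta(long state, long value) { meta[insert(state)] = value; }
    long getMeta(long state)             { return meta[insert(state)]; }
}
```

With this layout, each MAP iteration can overwrite a state's metadata slot in place instead of re-encoding an inflated state vector, which is the overhead the authors removed.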

The implementation was evaluated on the DAS-4 cluster using four BEEM benchmark models (reader_writer, firewire_link, lamport, elevator) ranging from 10 million to 600 million states. On a single node, MAP took 39 seconds compared with 29 seconds for DFS and 34 seconds for BFS, a modest overhead due to the extra bookkeeping. When scaled to multiple nodes, the algorithm achieved roughly 50% efficiency relative to the single-node DFS baseline, with efficiency decreasing as the number of remote messages grew. Compared with the DiVinE distributed model checker, the total runtimes were of the same order of magnitude, although DiVinE displayed more predictable scaling. The authors note that further optimizations, such as message combining, adaptive polling rates, prioritising latency-critical communications, and avoiding queue congestion during FLUSH, could improve performance substantially.

In conclusion, the paper demonstrates that SpinJa’s layered architecture is sufficiently flexible to support distributed‑memory verification. Implementing MAP required relatively few changes: extending the generic layer to handle distributed state ownership, adding meta‑data support, and writing explicit communication code. The work also highlights current limitations, notably the lack of a distributed‑aware partial‑order reduction and the need for more sophisticated meta‑data handling. Future work will likely focus on integrating these optimisations and further abstracting communication primitives, thereby making SpinJa a more competitive platform for large‑scale distributed model checking.

