Raw Report on the Model Checking Contest at Petri Nets 2012
This article presents the results of the Model Checking Contest held at Petri Nets 2012 in Hamburg. The contest aimed at a fair, experimental evaluation of the performance of model-checking techniques applied to Petri nets. This is the second edition, after a successful one in 2011. The participating tools were compared on several examinations (state space generation and evaluation of several types of formulae: structural, reachability, LTL, and CTL) run on a set of common models (Place/Transition and Symmetric Petri nets). After a short overview of the contest, this paper provides the raw results from the contest, model per model and examination per examination.
💡 Research Summary
The paper presents a comprehensive raw data report of the Model Checking Contest held at Petri Nets 2012 in Hamburg, the second edition following the successful 2011 event. Its primary purpose is to provide an unbiased, experimental comparison of various model‑checking techniques applied to Petri nets. The contest used a common benchmark suite consisting of both Place/Transition (P/T) nets and Symmetric Petri nets, each represented by a set of models of varying size and complexity (low, medium, high). In total, twelve model instances were evaluated.
Fifteen state‑of‑the‑art tools participated. For fairness, all tools were executed on identical hardware (an 8‑core 2.4 GHz CPU with 32 GB RAM) under a strict time limit of one hour per run. The tools fell into two broad categories: explicit‑state explorers and symbolic or hybrid approaches based on decision diagrams or other compression techniques.
Four examination categories were defined. (1) State‑space generation measured the total number of reachable markings, transition firings, memory consumption, and wall‑clock time. (2) Structural formula evaluation tested invariants such as token conservation and other net‑specific properties. (3) Reachability queries asked whether particular markings could be reached, focusing on safety‑type checks. (4) Temporal logic verification applied both Linear Temporal Logic (LTL) and Computation Tree Logic (CTL) formulas to assess dynamic behaviours.
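To make the first category concrete, here is a minimal sketch of explicit-state state-space generation: a breadth-first exploration that enumerates every reachable marking of a toy Place/Transition net. The net itself (two places, two transitions shuttling tokens between them) is a hypothetical example for illustration, not one of the contest benchmarks.

```python
from collections import deque

# Toy P/T net (illustrative, not a contest model): markings are tuples of
# token counts per place; each transition is a (pre, post) pair of vectors.
transitions = [
    ((1, 0), (0, 1)),  # t1: consume one token from p0, produce one in p1
    ((0, 1), (1, 0)),  # t2: the reverse move
]

def enabled(marking, pre):
    # A transition is enabled if every place holds at least its pre-tokens.
    return all(m >= p for m, p in zip(marking, pre))

def fire(marking, pre, post):
    # Firing removes the pre-tokens and adds the post-tokens.
    return tuple(m - p + q for m, p, q in zip(marking, pre, post))

def explore(initial):
    """Breadth-first explicit-state exploration of all reachable markings."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        for pre, post in transitions:
            if enabled(m, pre):
                succ = fire(m, pre, post)
                if succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
    return seen

print(len(explore((2, 0))))  # 3 markings: (2,0), (1,1), (0,2)
```

A reachability query of the kind used in category (3) reduces to asking whether a given marking is in the returned set; the memory cost of the `seen` set is exactly what causes the state-space explosion discussed below.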
Results reveal clear trade‑offs. Explicit‑state tools excelled on small‑scale models, delivering rapid answers, but they frequently ran out of memory on larger instances due to state‑space explosion. Symbolic tools, by contrast, managed memory far more efficiently on large models, though they incurred significant preprocessing overhead, which sometimes offset their advantage in overall runtime. Notably, tools that exploited symmetry reduction on the Symmetric Petri nets achieved up to a 70 % reduction in the explored state space while maintaining high accuracy on structural checks.
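The symmetry-reduction idea mentioned above can be sketched in a few lines. Assuming a hypothetical net whose places are fully interchangeable (a simplification; real Symmetric Petri net tools compute symmetries from the net structure), every marking can be stored under a canonical representative, so all permutations of a marking collapse into one stored state.

```python
def canonical(marking):
    # For fully interchangeable places, the sorted tuple is a canonical
    # representative of the marking's symmetry orbit.
    return tuple(sorted(marking))

# All permutations of the marking (2, 1, 0) belong to one orbit:
orbit = {(2, 1, 0), (0, 1, 2), (1, 0, 2), (2, 0, 1)}
reps = {canonical(m) for m in orbit}
print(len(reps))  # 1 -- the whole orbit is stored once
```

Storing only representatives is what shrinks the explored state space; the reduction factor depends on how much symmetry the model actually exhibits.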
Temporal‑logic evaluation exposed additional challenges. Several tools lacked support for specific LTL/CTL operators, resulting in missing entries in the result tables and highlighting the need for broader operator coverage in future tool development. Moreover, discrepancies were observed between tools on the same formula, traced back to differing handling of nondeterministic transitions during model interpretation.
Overall, the contest succeeded in delivering an objective performance portrait of contemporary Petri‑net model‑checking tools. It underscored the pivotal role of memory management and state‑space compression for scaling to large models, and it identified gaps in logical operator support and deterministic semantics handling. The authors propose extending future contests to include timed and colored Petri nets, as well as more complex real‑time constraints, to further stress‑test tool scalability and versatility.