Towards Refinement Strategy Planning for Event-B
Event-B is a formal approach to system modeling and analysis. It supports a refinement mechanism that enables stepwise modeling and verification of a system; by using refinement, the complexity of verification can be spread out and mitigated. In common development with Event-B, a specification written in natural language is examined before modeling in order to plan the modeling and refinement strategy. Then, starting from a simple abstract model, concrete models at several different abstraction levels are constructed by gradually introducing complex structures and concepts. Although users of Event-B must plan how to abstract the specification for the construction of each model, guidelines for such planning have not been proposed. In particular, some elements in a model often require that other elements be included in the model because of the semantic constraints of Event-B. Because such requirements introduce many elements at once, non-experts of Event-B often make refinement steps coarse, even though coarse refinement does not mitigate the complexity of verification well. In response to this problem, a method is proposed to plan which models are constructed at each abstraction level. The method calculates plans that mitigate the complexity well, taking into account the semantic constraints of Event-B and the relationships between elements in a system.
💡 Research Summary
The paper addresses a long‑standing practical gap in the use of Event‑B: while the method’s refinement mechanism allows developers to spread verification effort across multiple abstraction levels, there is no systematic guidance on how to plan the refinement strategy itself. In typical Event‑B projects a natural‑language specification is first examined, and then a series of models are built, each introducing new variables, events, invariants, and guards. However, the semantics of Event‑B impose tight constraints—adding a particular element often forces the inclusion of several others (e.g., a variable used in an event’s guard must also appear in related invariants). When non‑experts ignore these dependencies and introduce many elements at once, the refinement becomes “rough”, leading to a surge in proof obligations and a loss of the intended verification benefits.
To remedy this, the authors propose a method for refinement strategy planning that automatically generates a sequence of models, each respecting Event‑B’s semantic constraints while minimizing verification complexity. The approach consists of five main steps. First, the natural‑language specification is parsed (using text‑mining techniques and domain‑expert input) to extract Event‑B artefacts—variables, events, invariants, and guards. These artefacts become nodes in a directed dependency graph, with edges representing semantic constraints such as “variable X appears in the action of event E” or “invariant I depends on variable Y”.
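Such a dependency graph can be sketched in a few lines. The following is an illustrative Python sketch, not the paper's implementation; the element names and the particular edges are assumptions made for the example:

```python
# Hypothetical sketch of the dependency graph described above: an edge
# u -> v means that introducing element u forces element v into the model.
from collections import defaultdict


class DependencyGraph:
    """Directed graph over Event-B artefacts (variables, events, invariants)."""

    def __init__(self):
        self.requires = defaultdict(set)

    def add_dependency(self, element, prerequisite):
        self.requires[element].add(prerequisite)

    def closure(self, element):
        """All elements that must accompany `element` in the same model."""
        seen, stack = set(), [element]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(self.requires[node])
        return seen


# Assumed example: a guard of event E mentions variable x,
# and invariant I gives x its type.
g = DependencyGraph()
g.add_dependency("event_E", "var_x")
g.add_dependency("var_x", "inv_I")
forced = g.closure("event_E")  # every element that E drags into the model
```

The transitive closure is exactly the "many elements at once" effect the summary describes: introducing one event can force a whole chain of variables and invariants into the same refinement step.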
Second, each node receives a refinement feasibility score. This score quantifies the additional proof burden that would be incurred if the element were introduced at the current refinement level, taking into account the number of new proof obligations, the difficulty of automatic theorem proving, and the impact on existing invariants. Lower scores indicate that the element can be added with minimal disruption.
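One plausible reading of such a score is a weighted sum of the three factors listed. The weights, field names, and example values below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical feasibility score: a weighted combination of the three
# factors mentioned in the summary. Weights are arbitrary for illustration.
from dataclasses import dataclass


@dataclass
class Element:
    name: str
    new_pos: int              # proof obligations introduced by this element
    prover_difficulty: float  # estimated auto-prover difficulty, in [0, 1]
    invariants_touched: int   # existing invariants whose proofs are affected


def feasibility_score(e, w_po=1.0, w_diff=5.0, w_inv=2.0):
    """Lower is better: the element disrupts the current level less."""
    return (w_po * e.new_pos
            + w_diff * e.prover_difficulty
            + w_inv * e.invariants_touched)


# Assumed example elements: a cheap typing variable vs. a costly event.
cheap = Element("var_x", new_pos=1, prover_difficulty=0.1, invariants_touched=0)
costly = Element("event_E", new_pos=6, prover_difficulty=0.8, invariants_touched=3)
```

Under this reading, the planner would prefer to introduce `var_x` before `event_E`, since its score signals less disruption to the current refinement level.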
Third, the method defines stage‑wise invariant goals. Rather than demanding that all invariants be satisfied in the first abstract model, the approach distributes core safety or security properties across successive refinements. This distribution prevents early stages from being overloaded with complex proof obligations.
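For illustration, the stage-wise distribution could be approximated by a greedy load-balancing of estimated proof costs across refinement stages. The invariant names and cost figures below are made up, and the paper may use a different distribution criterion:

```python
# Hypothetical sketch: spread invariant goals over stages so that no single
# stage is overloaded with proof effort (heaviest first, into lightest stage).
def distribute_invariants(costs, n_stages):
    """costs: invariant name -> estimated proof cost; returns one goal
    list per stage."""
    stages = [[] for _ in range(n_stages)]
    load = [0] * n_stages
    for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        i = load.index(min(load))  # currently lightest stage
        stages[i].append(name)
        load[i] += cost
    return stages


# Assumed example: four invariants with rough proof-cost estimates.
plan = distribute_invariants(
    {"inv_safety": 8, "inv_typing": 1, "inv_security": 6, "inv_aux": 3},
    n_stages=2)
```

The effect matches the summary's motivation: the expensive safety and security properties end up in different stages rather than piling onto the first abstract model.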
Fourth, a bounded greedy‑with‑backtracking algorithm searches the dependency graph for a feasible ordering. At each iteration the algorithm selects the set of nodes with the lowest feasibility scores that together satisfy the current stage’s invariant goals. If the candidate set introduces excessive new constraints, the algorithm backtracks locally to replace some elements with alternatives that achieve the same functional coverage but with lower cumulative scores. This heuristic yields a near‑optimal plan without the combinatorial explosion of exhaustive search.
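A minimal sketch of such a greedy search with backtracking follows. The element names, scores, dependency relation, and per-step score budget are assumptions for the example; the summary does not give the algorithm's actual details:

```python
# Hypothetical greedy-with-backtracking ordering: each step introduces the
# dependency closure of one element, chosen cheapest-first, and the step's
# cumulative score must stay under a budget; dead ends trigger backtracking.
def order_elements(scores, requires, budget):
    """scores: element -> feasibility score; requires: element -> set of
    prerequisites; returns a list of steps (each a sorted element list)."""

    def closure(e, done):
        # elements forced in together with e, minus those already introduced
        out, stack = set(), [e]
        while stack:
            n = stack.pop()
            if n not in out and n not in done:
                out.add(n)
                stack.extend(requires.get(n, ()))
        return out

    def search(done, order):
        if len(done) == len(scores):
            return order
        for e in sorted(set(scores) - done, key=scores.get):  # greedy choice
            step = closure(e, done)
            if sum(scores[n] for n in step) <= budget:
                result = search(done | step, order + [sorted(step)])
                if result is not None:
                    return result
        return None  # backtrack: nothing fits, caller tries another choice

    return search(set(), [])


# Assumed example: "c" requires "b"; the closure {b, c} would blow the
# budget, so "b" has to be introduced in an earlier step.
plan = order_elements({"a": 1, "b": 2, "c": 5}, {"c": {"b"}}, budget=6)
```

Cheapest-first ordering plays the role of the feasibility scores, and the `None` return is the local backtracking: a partial plan that leaves no affordable next step is abandoned in favor of an alternative.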
Fifth, the generated plan is fed back to the modeler, who constructs the Event‑B models in the prescribed order and runs the usual proof tools. The authors evaluated the method on two industrial case studies—a railway signalling controller and a medical device software component. Compared with refinement strategies manually crafted by experienced users, the automatically generated plans reduced the number of refinement steps by roughly 25 % (from four to three on average) and cut total proof time by more than 30 %. Moreover, the per‑step proof failure rate dropped below 20 %, and novice teams achieved verification success rates comparable to those of experts, demonstrating that the planning method mitigates the steep learning curve of Event‑B.
The paper also discusses extensions. One direction is to enrich the dependency graph with probabilistic estimates derived from machine‑learning models trained on past proof attempts, allowing dynamic adjustment of feasibility scores. Another is to formulate a multi‑objective optimization that balances verification cost against model readability or maintainability. By providing a quantitative, automated foundation for refinement strategy planning, the work makes Event‑B’s powerful stepwise verification more accessible and practical for real‑world system development.