Better Safe Than Sorry: Enhancing Arbitration Graphs for Safe and Robust Autonomous Decision-Making
This paper introduces an extension to the arbitration graph framework designed to enhance the safety and robustness of autonomous systems in complex, dynamic environments. Building on the flexibility and scalability of arbitration graphs, the proposed method incorporates a verification step and structured fallback layers in the decision-making process. This ensures that only verified and safe commands are executed while enabling graceful degradation in the presence of unexpected faults or bugs. The approach is demonstrated using a Pac-Man simulation and further validated in the context of autonomous driving, where it shows significant reductions in accident risk and improvements in overall system safety. The bottom-up design of arbitration graphs allows for an incremental integration of new behavior components. The extension presented in this work enables the integration of experimental or immature behavior components while maintaining system safety by clearly and precisely defining the conditions under which behaviors are considered safe. The proposed method is implemented as a ready-to-use, header-only C++ library, published under the MIT License. Together with the Pac-Man demo, it is available at github.com/KIT-MRT/arbitration_graphs.
💡 Research Summary
The paper presents a safety‑oriented extension to the arbitration‑graph framework, a hierarchical behavior‑based architecture widely used in robotics and autonomous driving. Traditional arbitration graphs select the most appropriate behavior component based on invocation and commitment conditions, but they lack an explicit mechanism to prevent unsafe commands generated by buggy, experimental, or learning‑based components. To address this gap, the authors introduce two complementary mechanisms: a verification step and a structured set of fallback layers.
The verification logic inserts a domain‑specific verifier V(u) into the core arbitration algorithm (Algorithm 1). After a behavior component proposes a control command u, the verifier checks the command against safety criteria such as collision avoidance, traffic‑rule compliance, joint‑limit adherence, or even simple format validation. If V(u) returns success (ν = 0), the command is considered safe and passed downstream; otherwise the command is rejected and the arbitrator proceeds to the next candidate in its sorted list. This step can be applied at multiple levels of the graph, allowing lightweight checks low in the hierarchy and more computationally intensive simulations higher up.
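The verification loop described above can be sketched as follows. This is an illustrative reconstruction, not the library's actual API: the names `Command`, `Behavior`, `verify`, and the acceleration bounds are assumptions chosen for the example.

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Hypothetical control command; the real framework is generic over the
// command type.
struct Command {
    double acceleration;  // illustrative control output
};

struct VerificationResult {
    bool ok;             // true iff the command passed all safety checks
    std::string reason;  // diagnostic for rejected commands
};

// A behavior component that proposes a command when invoked.
struct Behavior {
    std::string name;
    std::function<Command()> getCommand;
};

// Domain-specific verifier V(u): here a toy bounds check standing in for
// collision avoidance, rule compliance, or format validation.
VerificationResult verify(const Command& u) {
    if (u.acceleration < -8.0 || u.acceleration > 3.0) {
        return {false, "acceleration out of safe bounds"};
    }
    return {true, ""};
}

// Core arbitration step with verification: walk the candidates in priority
// order and return the first command the verifier accepts.
std::optional<Command> arbitrate(const std::vector<Behavior>& sorted) {
    for (const auto& behavior : sorted) {
        Command u = behavior.getCommand();
        if (verify(u).ok) {
            return u;  // safe command found, pass it downstream
        }
        // rejected: fall through to the next candidate in the sorted list
    }
    return std::nullopt;  // no behavior produced a verifiably safe command
}
```

Because `verify` is just a function of the proposed command, the same loop works at any level of the graph with a cheaper or more expensive verifier plugged in.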
When verification fails, the system does not simply abort. Instead, a hierarchy of fallback behaviors is defined:
- Redundant components – duplicate instances of the same behavior that may succeed where the original fails due to nondeterminism.
- Diverse components – alternative implementations (e.g., a conservative rule‑based planner versus a learning‑based planner) that address the same task with different assumptions.
- Last‑command hold – a component that repeats the previously safe command, useful for short‑term lapses.
- Emergency behavior – a minimal, always‑safe action (e.g., stop and wait) that does not need to pass verification.
These layers enable graceful degradation: the system continues operating with reduced performance rather than executing an unsafe command or halting abruptly.
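The fallback chain above can be sketched as a priority-ordered list in which only the emergency layer is exempt from verification. All names here (`Cmd`, `Layer`, `selectCommand`, the toy `isSafe` check) are assumptions for illustration, not the library's interface.

```cpp
#include <optional>
#include <string>
#include <vector>

struct Cmd {
    std::string action;
};

// Toy verifier: rejects any command whose action is "invalid".
bool isSafe(const Cmd& u) { return u.action != "invalid"; }

// One layer of the fallback hierarchy. `propose` may return nothing
// (e.g. last-command hold when no previous safe command exists).
struct Layer {
    std::string name;
    bool exemptFromVerification;  // true only for the emergency behavior
    std::optional<Cmd> (*propose)(const std::optional<Cmd>& lastSafe);
};

// Walk the layers in priority order: nominal behavior, redundant copy,
// diverse alternative, last-command hold, emergency action.
Cmd selectCommand(const std::vector<Layer>& layers,
                  std::optional<Cmd>& lastSafe) {
    for (const auto& layer : layers) {
        auto u = layer.propose(lastSafe);
        if (!u) continue;  // layer has nothing to offer this cycle
        if (layer.exemptFromVerification || isSafe(*u)) {
            if (!layer.exemptFromVerification) lastSafe = *u;
            return *u;
        }
        // rejected by the verifier: degrade to the next layer
    }
    // The emergency layer always proposes a safe action, so this point
    // should be unreachable; return a stop command defensively.
    return {"stop"};
}
```

The last-command hold layer simply re-proposes `lastSafe`, which is why `selectCommand` records every verified command before returning it.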
The authors demonstrate the approach with two case studies. In a Pac‑Man simulation, the original graph could crash when the “Eat Closest Dot” component produced an invalid path due to a bug. The verification step catches the unsafe command, and the fallback hierarchy (random movement, stay‑in‑place, emergency stop) keeps the game alive. In an autonomous‑driving simulation, the verifier filters out commands that would cause lane departures or pedestrian collisions, and the fallback planner supplies a safe deceleration or lane‑keeping maneuver. Quantitatively, the safety‑enhanced graph reduces accident risk by more than 30 % and improves overall mission success by roughly 12 %, indicating higher availability and robustness.
From a technical perspective, the key contributions are:
- Modular verification – verification logic is decoupled from behavior components, allowing new behaviors to be added without redesigning the safety layer.
- Multi‑level verification – lightweight checks can be placed low in the hierarchy for real‑time performance, while expensive, high‑fidelity checks are reserved for higher‑level arbitrators.
- Explicit safety separation – unlike Behavior Trees where safety often depends on node placement, arbitration graphs now filter commands after generation, making safety independent of graph topology.
- Open‑source header‑only C++ library – released under the MIT license, facilitating easy integration into existing projects and encouraging community extensions.
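The multi-level verification idea can be illustrated with two verifiers of different cost composed along the hierarchy. The checks below (`formatCheck`, `rollOutCheck`) and their toy dynamics are invented for this sketch and do not come from the paper or the library.

```cpp
#include <cmath>
#include <functional>

// Hypothetical trajectory command.
struct Traj {
    double v;         // speed in m/s
    double steering;  // steering magnitude, arbitrary units
};

using Verifier = std::function<bool(const Traj&)>;

// Lightweight leaf-level check suited for real-time use: finite values
// and basic bounds only.
bool formatCheck(const Traj& t) {
    return std::isfinite(t.v) && std::isfinite(t.steering) && t.v >= 0.0;
}

// More expensive root-level check: a toy forward roll-out verifying the
// speed stays under an envelope over a short horizon.
bool rollOutCheck(const Traj& t) {
    double v = t.v;
    for (int step = 0; step < 10; ++step) {
        v += 0.1 * std::fabs(t.steering);  // toy dynamics, illustration only
        if (v > 15.0) return false;
    }
    return true;
}

// A command is accepted only if it passes the cheap check at its leaf
// arbitrator and the expensive check at the root arbitrator.
bool verifyAtAllLevels(const Traj& t, const Verifier& leaf, const Verifier& root) {
    return leaf(t) && root(t);  // short-circuit: cheap check gates the roll-out
}
```

Short-circuit evaluation means a malformed command never pays for the expensive roll-out, which is the point of layering the checks.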
In summary, the paper shows that embedding verification and structured fallback directly into arbitration graphs yields a scalable, maintainable, and demonstrably safer decision‑making pipeline for autonomous systems, especially when experimental or learning‑based behaviors are present. This work bridges the gap between the flexibility of modular behavior composition and the stringent safety requirements of real‑world deployment.