Expecting the Unexpected: Developing Autonomous-System Design Principles for Reacting to Unpredicted Events and Conditions
When developing autonomous systems, engineers and other stakeholders go to great lengths to prepare the system for all foreseeable events and conditions. However, these systems are still bound to encounter events and conditions that were not considered at design time. For reasons of safety, cost, or ethics, it is often highly desirable that such new situations be handled correctly upon first encounter. In this paper we first justify our position that there will always exist unpredicted events and conditions, driven, among other factors, by: new inventions in the real world; the diversity of worldwide system deployments and uses; and the non-negligible probability that multiple seemingly unlikely events, which may be neglected at design time, will not only occur but occur together. We then argue that despite this unpredictability, handling these events and conditions is indeed possible. Hence, we offer and exemplify design principles that, when applied in advance, can enable systems to deal with unpredicted circumstances in the future. We conclude with a discussion of how this work, together with a broader theoretical study of the unexpected, can contribute toward a foundation of engineering principles for developing trustworthy next-generation autonomous systems.
💡 Research Summary
The paper tackles a fundamental challenge in autonomous‑system engineering: no matter how exhaustive the design and verification effort, a deployed system will inevitably encounter events and conditions that were not anticipated at design time. The authors identify four primary sources of such “unpredicted” situations: (1) the emergence of new objects, technologies, or inventions after deployment; (2) deliberate omission of low‑probability scenarios due to cost, schedule, or reliance on external mitigations; (3) worldwide deployment in environments that differ from the developers’ experience; and (4) the combinatorial explosion of variables and actors in rich, dynamic settings, which makes exhaustive analysis infeasible. They also note that malicious attacks constitute a special class of unforeseen conditions.
Despite this inevitability, the authors argue that autonomous systems can still handle first‑time encounters with unexpected events, drawing an analogy to how humans react to surprise. To operationalize this claim, they propose a set of design principles grouped into three overarching categories: (i) reactive and proactive behaviors, (ii) knowledge and skill acquisition, and (iii) viewing the system as a social entity. Each category is broken down into concrete practices, illustrated with examples ranging from a self‑driving car turning away from a tsunami wave to a factory robot probing an unknown obstacle.
1. Reactive and Proactive Behaviors
- High‑level behavioral rules: Abstract directives such as “when danger is sensed, retreat”, “when under attack, seek shelter”, and “when the situation is not understood, slow down”. These rules are mapped to concrete sensor‑actuator pairs (e.g., tsunami detection triggers a U‑turn for a vehicle).
- Probing: The system actively explores unfamiliar objects using available sensors, manipulators, or remote resources (e.g., internet image lookup, nearby security cameras). This may require equipping the platform with extra hardware or dynamic access to shared infrastructure.
- Self‑reflection: Continuous monitoring of its own state and history enables the system to recognize repeated failures, avoid futile loops, and select alternative actions based on past outcomes. The authors stress the importance of encoding desirability orders among states (soft goals) and reasoning about causality.
- Physical and logical look‑ahead: Enhanced perception (360° cameras, external video feeds) and runtime simulation provide spatial and temporal foresight, reducing the surprise factor.
- Alternative solution planning: Redundancy is extended beyond hardware to include multiple mission‑level strategies (e.g., rerouting a passenger to a train station if the road is blocked).
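The first bullet above, mapping abstract directives onto concrete sensor-actuator pairs, can be sketched as a small priority-ordered rule table. This is a hypothetical illustration, not the paper's implementation; the rule names, percept keys, and actions (`u_turn`, `drive_to_garage`, etc.) are assumptions chosen to match the examples in the text.

```python
# Illustrative sketch: abstract behavioral rules ("retreat", "seek shelter",
# "slow down") checked in priority order and mapped to concrete actions.
# All names here are assumptions for the sake of the example.

RULES = [
    # (condition over percepts,            abstract directive, concrete action)
    (lambda p: p.get("danger"),            "retreat",          "u_turn"),
    (lambda p: p.get("under_attack"),      "seek_shelter",     "drive_to_garage"),
    (lambda p: not p.get("understood", True), "slow_down",     "reduce_speed"),
]

def react(percepts: dict) -> str:
    """Return the concrete action of the first abstract rule that fires."""
    for condition, directive, action in RULES:
        if condition(percepts):
            return action
    return "continue_mission"  # no rule fired: proceed as planned
```

For instance, a percept dictionary reporting danger (the tsunami scenario) yields `u_turn`, while an uninterpretable situation yields `reduce_speed`. Because the rules are stated over abstract conditions rather than enumerated scenarios, the same table covers events never seen at design time.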
2. Knowledge and Skill Acquisition
- Self‑knowledge of capabilities: Sensors and actuators are represented as globally accessible resources, enabling any planning component to invoke them. BDI (Belief‑Desire‑Intention) architectures are suggested for this purpose.
- General world knowledge: Basic physics, object dynamics, speed limits, and other domain‑independent facts are either stored locally or fetched from cloud services, forming a “common‑sense” layer.
- Run‑time knowledge acquisition: Real‑time data streams (weather alerts, traffic conditions, disaster warnings) are ingested to anticipate upcoming obstacles.
- Learning and adaptivity: Systems continuously learn from their own successes and failures, as well as from peer systems, sharing models of novel objects (e.g., unfamiliar agricultural machinery) to improve first‑time handling.
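The "self-knowledge of capabilities" idea above, where sensors and actuators are advertised as globally accessible resources that any planning component can discover and invoke, might look like the following minimal registry. This is a sketch under assumed names (`CapabilityRegistry`, the effect labels, the example devices); the paper points to BDI architectures rather than prescribing code.

```python
# Hypothetical capability registry: devices advertise the effects they can
# provide, so a planner can look them up by needed effect at run time.

class CapabilityRegistry:
    def __init__(self):
        self._capabilities = {}

    def register(self, name, invoke, provides):
        """Advertise a sensor/actuator under the effects it can deliver."""
        self._capabilities[name] = {"invoke": invoke, "provides": set(provides)}

    def find(self, needed_effect):
        """Return every registered capability offering the requested effect."""
        return [name for name, cap in self._capabilities.items()
                if needed_effect in cap["provides"]]

    def invoke(self, name, *args):
        return self._capabilities[name]["invoke"](*args)

registry = CapabilityRegistry()
registry.register("front_camera", lambda: "image", provides=["visual_observation"])
registry.register("lidar", lambda: "point_cloud",
                  provides=["visual_observation", "range"])
```

The point of the indirection is that a planner improvising a response to an unforeseen situation (e.g., probing an unknown object) need not have been hard-wired to a particular sensor: it asks the registry which available capability can supply the observation it needs.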
3. Social Entity Perspective
- Responsibility delineation: The paper stresses explicit definition of roles among stakeholders (vehicle, operator, service provider, passenger) and suggests formal verification of responsibility assignments to avoid gaps during crises.
- Mimicking others: Observing the behavior of nearby agents (human drivers, other autonomous units) and adapting accordingly (e.g., switching lanes when the empty lane is actually blocked).
- Help‑seeking and support: Designing interfaces for the system to request assistance from humans or other machines, specifying who to contact, what information to transmit, and how control may be transferred.
- Passive acceptance of help: Even when the system does not recognize its own need, it should expose sufficient identity and state information (visual tags, status displays) so that external agents can intervene autonomously.
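The help-seeking interface described above, specifying whom to contact, what information to transmit, and whether control may be transferred, can be pictured as a structured message. The field names and values below are illustrative assumptions, not a protocol from the paper.

```python
# Hypothetical help-request message: when the system cannot resolve a
# situation on its own, it emits a structured request exposing its identity
# and state (supporting passive help as well), an ordered contact list, and
# its control-transfer policy. All field names are assumptions.

import json

def build_help_request(system_id, situation, contacts, allow_remote_control):
    return json.dumps({
        "system_id": system_id,            # identity, also usable by passive helpers
        "situation": situation,            # what the system knows about the problem
        "contacts": contacts,              # whom to ask, in order of preference
        "transfer_control": allow_remote_control,  # may a helper take over?
    })

request = build_help_request(
    system_id="av-0042",
    situation={"status": "immobilized", "cause": "unknown_obstacle"},
    contacts=["fleet_operator", "roadside_assistance"],
    allow_remote_control=True,
)
```

Making the message machine-readable serves both bullets at once: an addressed contact can act on an explicit request, while a nearby agent that merely observes the exposed identity and state can intervene without being asked.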
The authors situate these principles within a broader research agenda they term “Autonomics”, a prospective engineering foundation for next‑generation autonomous systems. They argue that handling the unexpected requires not only a catalogue of practices but also a formal theoretical treatment of unpredictability, including models that capture the probability of rare event conjunctions and the logical structure of responsibility. Future work should therefore focus on (a) formalizing the notion of “unexpected” within system specifications, (b) developing verification techniques that can certify compliance with the proposed principles, and (c) conducting large‑scale empirical studies to validate the effectiveness of the guidelines in real‑world deployments.
In summary, the paper moves beyond traditional exhaustive testing and simulation, advocating for autonomous systems that are equipped with high‑level reactive rules, rich self‑knowledge, continuous learning pipelines, and socially aware interaction mechanisms. By embedding these capabilities during design, engineers can endow machines with the ability to cope with truly unforeseen events, thereby advancing safety, robustness, and public trust in autonomous technologies.