Iterated Belief Change Due to Actions and Observations

In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent's beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be non-elementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research.


💡 Research Summary

The paper “Iterated Belief Change Due to Actions and Observations” tackles the problem of how an autonomous agent’s belief state evolves when it both acts in the world and receives possibly noisy observations, especially when its initial beliefs may be erroneous. Traditional belief change research distinguishes between belief revision (AGM) – which deals with incorporating new, possibly contradictory information while preserving consistency – and belief update (KM, after Katsuno and Mendelzon) – which models changes in the world itself. However, in realistic action domains an agent frequently experiences a mixture of both: some actions force a revision of its epistemic state, while others reflect a genuine change in the external environment that should be handled by an update. The interaction between these two processes can be highly non‑trivial, and existing frameworks either treat them in isolation or rely on ad‑hoc meta‑rules for their combination.

The authors adopt a transition‑system perspective. A world is represented by a set of interpretations, and an agent’s epistemic attitude is captured by a total preorder over these interpretations (the more preferred, the more plausible). Actions are classified into two categories. Revision actions trigger an AGM‑style minimal change of the preorder to accommodate a newly observed fact that conflicts with the current belief set. Update actions correspond to genuine world transitions; they are modeled by a KM‑style shift of the preorder that reflects the temporal evolution of the environment. Crucially, the paper does not simply apply revision then update (or vice versa) in a black‑box fashion; instead it defines a systematic interaction between them.
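This representation can be sketched concretely. The following minimal snippet (illustrative names and encodings, not code from the paper) stores an epistemic state as a ranking function over interpretations, where a lower rank means greater plausibility and the belief set is read off as the minimally ranked interpretations:

```python
from itertools import product

ATOMS = ("p", "q")  # an illustrative two-atom propositional vocabulary

def all_interpretations():
    """Every truth assignment over ATOMS, encoded as a tuple of booleans."""
    return list(product([True, False], repeat=len(ATOMS)))

def belief_set(ranking):
    """The belief set: the set of minimally ranked (most plausible) worlds."""
    lowest = min(ranking.values())
    return {w for w, r in ranking.items() if r == lowest}

# A total preorder in which the agent believes p but is undecided about q:
# worlds satisfying p (w[0] is True) sit at rank 0, all others at rank 1.
ranking = {w: (0 if w[0] else 1) for w in all_interpretations()}
```

A ranking of this kind induces exactly one total preorder (worlds of equal rank are equally plausible), which is why it is a convenient data structure for the preorder-based semantics described above.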

To formalize this interaction the authors introduce a set of rationality postulates. The most important are:

  • R1 (Consistency after revision) – if an observation is compatible with the revised belief set, the resulting beliefs remain consistent.
  • U1 (Preservation after update) – beliefs that were firmly held before an update should be retained as far as possible, unless contradicted by the new world state.
  • IR (Iterated interchange) – the order of a revision followed by an update should be interchangeable with an update followed by a revision, provided the same information is ultimately incorporated. This captures the intuition that the agent’s epistemic state should not depend on an arbitrary sequencing of epistemic and ontic changes.
  • Additional closure and compositionality conditions (C1‑C4) guarantee that sequences of revisions and updates produce a well‑behaved preorder.
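In symbols (hypothetical notation introduced here for readability, since the summary states the postulates only informally): write $\Psi$ for an epistemic state with belief set $Bel(\Psi)$, $\Psi * \varphi$ for revision by an observation $\varphi$, and $\Psi \diamond a$ for update by an action $a$. The first three postulates might then read roughly as:

```latex
\begin{itemize}
  \item[(R1)] If $\varphi$ is satisfiable, then $Bel(\Psi * \varphi)$ is consistent.
  \item[(U1)] If $Bel(\Psi) \models \psi$ and $\psi$ is not contradicted by the
              effects of $a$, then $Bel(\Psi \diamond a) \models \psi$.
  \item[(IR)] $(\Psi * \varphi) \diamond a = (\Psi \diamond a) * \varphi'$,
              where $\varphi'$ expresses the same information as $\varphi$
              after $a$ has occurred.
\end{itemize}
```

The caveat in (IR) matters: an observation made before an action may need to be "carried through" the action's effects before it can be incorporated afterwards, which is what the phrase "provided the same information is ultimately incorporated" captures.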

The core technical contribution is the shift operation on total preorders. Given a current preorder, a shift moves a selected set of interpretations forward (making them more plausible) while pushing others backward, according to the nature of the action. For a revision action, the shift removes interpretations that violate the new observation from the top of the order, thereby achieving minimal change. For an update action, the shift inserts interpretations that reflect the new world state at the front, effectively re‑ordering the plausibility landscape to match the ontic transition. By defining revision and update as particular instances of this generic shift, the authors obtain a unified operator that automatically satisfies all the rationality postulates, including the non‑trivial IR property.
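One way such a shift might be implemented is sketched below, assuming worlds are encoded as tuples of booleans and an epistemic state is a map from worlds to plausibility ranks (lower = more plausible). The function names and encodings are illustrative, not the paper's:

```python
from itertools import product

ATOMS = ("p", "q")
WORLDS = list(product([True, False], repeat=len(ATOMS)))

def normalize(ranking):
    """Compress ranks to consecutive integers starting at 0."""
    remap = {old: i for i, old in enumerate(sorted(set(ranking.values())))}
    return {w: remap[r] for w, r in ranking.items()}

def revise(ranking, obs_models):
    """Revision as a shift: move the models of the observation in front of
    all other worlds, preserving relative order inside each group."""
    bump = max(ranking.values()) + 1
    return normalize({w: (r if w in obs_models else r + bump)
                      for w, r in ranking.items()})

def update(ranking, transition):
    """Update as a shift: transport each world through the action's
    transition function; a successor keeps the best rank it inherits.
    (Only successor worlds appear in the result of this sketch.)"""
    new = {}
    for w, r in ranking.items():
        succ = transition(w)
        new[succ] = min(r, new.get(succ, r))
    return normalize(new)
```

For example, starting from a ranking in which the ¬p-worlds are most plausible, revising by the models of p promotes the p-worlds to rank 0, while updating by an action that makes p true maps every world to a p-successor. Each pass touches every world a constant number of times (plus a sort over the distinct ranks in `normalize`).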

The paper presents three main theorems. The first establishes that the shift‑based operator is well‑defined and yields a total preorder after any finite sequence of actions. The second proves that the operator satisfies each of the proposed postulates, with a detailed proof that IR holds because the shift operation is commutative up to a re‑labeling of the moved sets. The third theorem provides a representation result: any belief change process that meets the postulates can be represented as a particular instantiation of the shift operation, thereby showing the operator’s expressive completeness.

In the related‑work discussion, the authors compare their approach with iterated revision frameworks such as Darwiche‑Pearl, Nayak’s work on iterated revision, and recent action‑theoretic models that embed AGM revision inside planning. They argue that those models either lack a principled treatment of updates (they focus solely on epistemic changes) or require complex meta‑level rules to handle mixed sequences, leading to computational overhead and unintuitive behavior. By contrast, the shift‑based operator works directly on the underlying preorder, making the interaction between revision and update transparent and computationally tractable (the shift can be implemented in linear time with respect to the size of the preorder).

The authors illustrate their theory with two case studies. The first involves a mobile robot navigating a grid while receiving occasional faulty sensor readings; the robot must revise its map when a sensor indicates an obstacle that contradicts its current belief, and update its belief when it actually moves to a new cell. The second case study models a smart‑home system where actions like “turn on the heater” may require belief revision if the system incorrectly believes the temperature is already high, while the passage of time constitutes an update reflecting the actual temperature change. In both scenarios, the shift‑based operator yields belief trajectories that respect the rationality postulates and avoid the pathological belief oscillations observed in earlier models.
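The robot trajectory can be traced with a deliberately tiny model (one atom and invented function names; this is an illustration, not the paper's formalization): the sensor reading induces a revision-style shift, the robot's move induces an update-style shift, and the belief trajectory settles rather than oscillating.

```python
def most_plausible(ranking):
    """The worlds the agent currently considers most likely."""
    low = min(ranking.values())
    return {w for w, r in ranking.items() if r == low}

def sense_obstacle(ranking):
    """Revision step: a sensor reading 'obstacle ahead' shifts the
    obstacle-world (True) in front of every world contradicting it."""
    bump = max(ranking.values()) + 1
    return {w: (r if w else r + bump) for w, r in ranking.items()}

def move_to_clear_cell(ranking):
    """Update step: the robot moves into a cell it knows to be clear,
    so every world transitions to the no-obstacle world (False)."""
    return {False: min(ranking.values())}

# Worlds answer one question: is there an obstacle in the cell ahead?
initial = {False: 0, True: 1}          # the robot believes the path is clear
after_revision = sense_obstacle(initial)     # sensor contradicts that belief
after_update = move_to_clear_cell(after_revision)  # the robot moves on
```

After the revision the robot believes there is an obstacle; after the update it believes its new cell is clear, and no further change reintroduces the discarded belief.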

The paper concludes by acknowledging limitations. The current framework assumes deterministic actions and full observability of the propositional facts involved in revisions; it does not yet handle probabilistic observations, partial observability, or multi‑agent belief merging. Moreover, while the shift operation is linear in the number of interpretations, scaling to domains with huge interpretation spaces may still be challenging. The authors outline future work directions: extending the shift to probabilistic preorders, integrating partial observation models (e.g., belief revision with evidence), and exploring distributed versions for multi‑agent systems where agents must negotiate conflicting revisions and updates.

In summary, the paper delivers a novel, mathematically grounded operator for iterated belief change that unifies revision and update through a simple yet powerful shift on total preorders, provides a clear set of rationality criteria, and demonstrates both theoretical completeness and practical applicability in action‑rich environments.