A normative account of defeasible and probabilistic inference
In this paper, we provide further evidence for the contention that logical consequence should be understood in normative terms. Hartry Field and John MacFarlane have covered the classical case. We extend their work, examining what it means for an agent to be obliged to infer a conclusion when faced with uncertain information or when reasoning within a non-monotonic (defeasible) logical framework, which allows, e.g., inferences to be drawn from premises that are considered true unless evidence to the contrary is presented.
💡 Research Summary
The paper advances the normative account of logical consequence originally articulated by Hartry Field and John MacFarlane, extending it to contexts where information is uncertain and reasoning is non‑monotonic (defeasible). The authors begin by critiquing the classical view of consequence as a purely truth‑preserving relation and reaffirm the idea that logical inference can be understood as a normative duty: agents are obligated to draw conclusions that are rationally warranted given their premises.
The first major contribution is a formal treatment of probabilistic inference within this normative framework. By representing the premises as a probability distribution P, the authors introduce a “probabilistic obligation rule”: if the conditional probability P(C | Γ) of a conclusion C given the premises Γ meets or exceeds a normative threshold θ (e.g., 0.95 or 0.99), the agent is obligated to accept C. The threshold is treated as a normative parameter reflecting a required level of confidence. The paper also defines a priority function that resolves conflicts between probabilistic obligations and classical logical obligations. This function aggregates factors such as the magnitude of θ, the reliability of the premises, and contextual constraints (time pressure, computational resources) to determine which duty takes precedence.
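To make the threshold rule concrete, here is a minimal sketch of the condition P(C | Γ) ≥ θ ⟹ the agent is obligated to accept C. The class and method names (`ProbObligation`, `is_obligated`) are hypothetical illustrations, not the paper's formalism, and the posterior probability is assumed to be already computed.

```python
# Minimal sketch of the probabilistic obligation rule: an agent is
# obligated to accept C whenever P(C | Γ) >= theta. Names are
# illustrative, not taken from the paper.

from dataclasses import dataclass

@dataclass
class ProbObligation:
    conclusion: str
    prob_given_premises: float  # P(C | Γ), assumed already computed
    theta: float = 0.95         # normative confidence threshold

    def is_obligated(self) -> bool:
        """Obligation to accept C holds iff P(C | Γ) meets the threshold."""
        return self.prob_given_premises >= self.theta

# Example: 97% posterior confidence against a 0.95 threshold.
o = ProbObligation("disease A present", prob_given_premises=0.97)
print(o.is_obligated())  # True
```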
The second major contribution addresses defeasible (non‑monotonic) reasoning. The authors formalize two prominent defeasible systems—Defeasible Logic and Default Logic—by introducing “defeasible obligation rules.” When a set of premises Γ and a defeasible rule r (Γ ⇒ C) are present, the agent has an obligation to infer C as long as no counter‑evidence appears. If a defeater is later introduced, a meta‑rule withdraws the original obligation and triggers a re‑evaluation based on the updated premise set. This captures the everyday practice of drawing conclusions that are “generally true unless contradicted.”
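A hedged sketch of such a defeasible obligation rule Γ ⇒ C follows, using the stock Tweety example rather than the paper's own cases: the obligation to infer C stands only while no known defeater is among the established facts, and introducing a defeater withdraws it. The data structure is an assumption for illustration only.

```python
# Sketch of a defeasible obligation rule Γ ⇒ C: the obligation to infer C
# stands only while no known defeater applies. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class DefeasibleRule:
    premises: frozenset       # Γ
    conclusion: str           # C
    defeaters: set = field(default_factory=set)

    def obligation_holds(self, facts: set) -> bool:
        """Infer C iff all premises hold and no defeater is among the facts."""
        return self.premises <= facts and not (self.defeaters & facts)

rule = DefeasibleRule(frozenset({"bird(tweety)"}), "flies(tweety)",
                      defeaters={"penguin(tweety)"})
facts = {"bird(tweety)"}
print(rule.obligation_holds(facts))  # True: obligation to infer C
facts.add("penguin(tweety)")         # defeater introduced
print(rule.obligation_holds(facts))  # False: obligation withdrawn
```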
A central problem arises when probabilistic and defeasible obligations clash. To maintain normative coherence, the paper proposes a meta‑level priority hierarchy. The hierarchy ranks obligations according to (1) the absolute size of the probabilistic threshold, (2) the credibility of the defeasible rule (often tied to its generality), and (3) situational factors such as urgency or cognitive load. The highest‑ranked obligation is enforced, while lower‑ranked duties serve as auxiliary guidance.
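One natural way to read the hierarchy is as a lexicographic ranking over the three factors, with the highest-ranked obligation enforced. The sketch below assumes such a reading and a simple numeric scoring; neither is the paper's own definition.

```python
# Illustrative sketch of the meta-level priority hierarchy: obligations are
# ranked lexicographically by (threshold size, rule credibility, urgency).
# The scoring scheme here is an assumption, not the paper's definition.

def priority_key(obligation: dict) -> tuple:
    """Higher tuples win; max() then picks the duty to enforce."""
    return (obligation.get("theta", 0.0),        # (1) probabilistic threshold
            obligation.get("credibility", 0.0),  # (2) defeasible-rule credibility
            obligation.get("urgency", 0.0))      # (3) situational factors

obligations = [
    {"name": "accept C (probabilistic)", "theta": 0.97, "urgency": 0.5},
    {"name": "infer C' (defeasible)", "credibility": 0.8, "urgency": 0.5},
]
enforced = max(obligations, key=priority_key)
print(enforced["name"])  # the highest-ranked obligation is enforced
```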
The authors illustrate the theory with two applied case studies. In a medical diagnostic system, patient data provide probabilistic evidence (e.g., a 97% chance of disease A) that triggers a probabilistic obligation to recommend treatment X. Simultaneously, a defeasible rule encodes the clinical heuristic “hypertensive patients usually respond to drug Y.” When new adverse‑effect reports emerge, the defeasible rule is defeated, and the system revises its recommendation according to the priority hierarchy. A second case study examines legal reasoning, where established case law functions as a defeasible rule, but novel evidence can overturn a precedent, again resolved by the normative priority mechanism. Both examples demonstrate that the combined framework can handle real‑world inference tasks where certainty and generality coexist.
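The medical case can be walked through as a toy script that combines the two kinds of duty sketched above: the probabilistic obligation (97% ≥ 0.95) persists, while the defeasible heuristic is withdrawn once adverse‑effect reports arrive. All variable names and the 0.95 threshold are mock values chosen for illustration.

```python
# Toy walk-through of the medical case study under the assumptions above.

theta = 0.95
p_disease_a = 0.97                   # P(disease A | patient data)
recommend_x = p_disease_a >= theta   # probabilistic obligation fires

adverse_reports = False

def recommend_y(hypertensive: bool) -> bool:
    # Defeasible heuristic: hypertensive patients usually respond to drug Y,
    # unless the defeater (adverse-effect reports) is present.
    return hypertensive and not adverse_reports

print(recommend_x, recommend_y(True))  # True True: both duties in force
adverse_reports = True                 # defeater arrives
print(recommend_x, recommend_y(True))  # True False: heuristic defeated,
                                       # recommendation revised per hierarchy
```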
In the concluding discussion, the paper argues that treating logical consequence as a normative duty enriches both philosophical understanding and practical design of intelligent systems. By integrating probabilistic confidence and defeasible generality, the authors provide a more realistic model of rational agency than either a purely truth‑preserving or a purely defeasible approach alone. They suggest future work on dynamic adjustment of the confidence threshold, multi‑agent norm conflict resolution, and empirical validation of the normative model through cognitive experiments. The overall contribution is a robust, formally grounded account of how agents ought to infer in the face of uncertainty and defeasibility, offering a bridge between formal logic, epistemology, and applied AI.