A unified framework for order-of-magnitude confidence relations
The aim of this work is to provide a unified framework for ordinal representations of uncertainty lying at the crossroads between possibility and probability theories. Such confidence relations between events are commonly found in non-monotonic reasoning, inconsistency management, and qualitative decision theory. They start either from probability theory, making it more qualitative, or from possibility theory, making it more expressive. We show that these two trends converge to a class of genuine probability theories. We provide characterization results for these useful tools, which preserve the qualitative nature of possibility rankings while enjoying the expressive power of additive representations.
💡 Research Summary
The paper tackles a long‑standing divide between two major qualitative approaches to uncertainty: possibility theory, which excels at ranking events but lacks additive structure, and probability theory, which provides a fully additive measure but often forces a quantitative view that discards the intuitive ordinal information. The authors propose a unified mathematical framework called an “order‑of‑magnitude confidence relation” (OM‑relation) that simultaneously preserves the qualitative nature of possibility rankings and inherits the expressive power of additive probabilities.
The core object of study is a binary confidence relation ⪰ defined on the power set of a finite outcome space Ω. For any two events A and B, A ⪰ B means that A is at least as credible as B. The authors enrich this relation with a notion of “order of magnitude”: instead of comparing raw numerical values, events are compared according to the number of magnitude steps that separate their associated confidence levels. In practice this is modeled by representing probabilities as series in an infinitesimal ε:
P(A) = Σ_k ε^k·p_k(A)
where k indexes the magnitude level (order) and p_k is a conventional probability distribution confined to the k‑th level. Because ε is infinitesimal, terms of lower order dominate all higher‑order ones (ε^k ≫ ε^(k+1)), mirroring the way possibility theory treats a higher possibility level as strictly more credible than any lower level, regardless of the number of events sharing that level.
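This dominance of lower-order terms means that comparing two events amounts to comparing their coefficient vectors lexicographically. A minimal sketch, assuming a finite Ω and a hypothetical two-level list `levels` of per-level distributions (the names `om_value`, `om_geq`, and the numbers are ours, not the paper's):

```python
from fractions import Fraction

# Hypothetical two-level example: level 0 carries outcome 'a';
# level 1 splits its mass between 'b' and 'c'.
levels = [{'a': Fraction(1)},
          {'b': Fraction(2, 3), 'c': Fraction(1, 3)}]

def om_value(event):
    """Coefficient vector (p_0(A), p_1(A), ...) of the series
    P(A) = sum_k eps^k * p_k(A): entry k sums the level-k masses
    of the outcomes in the event."""
    return tuple(sum(p.get(w, Fraction(0)) for w in event) for p in levels)

def om_geq(a, b):
    """A ⪰ B: since eps is infinitesimal, eps^k dwarfs eps^(k+1),
    so lexicographic comparison of the coefficient vectors (which is
    exactly Python's tuple ordering) decides the relation."""
    return om_value(a) >= om_value(b)
```

Here `om_geq({'a'}, {'b', 'c'})` holds even though {'b', 'c'} contains more outcomes, because the level-0 coefficient of {'a'} dominates the entire level-1 mass.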
Four axioms are introduced to capture the desired behavior:
- Monotonicity – If A ⊆ B then B ⪰ A.
- Pre‑additivity – For C disjoint from both A and B, A ⪰ B iff A ∪ C ⪰ B ∪ C. This guarantees that adding the same irrelevant evidence to both events does not flip the ordering.
- Comparability – For any A, B, either A ⪰ B or B ⪰ A (or both). This makes the relation a total preorder, a common assumption in qualitative decision theory.
- Order‑distinguishability – Events that belong to the same magnitude class are ordered exactly as in the underlying possibility ranking.
The first major theorem proves that any relation satisfying these axioms can be represented by an OM‑probability measure of the form above. The construction proceeds by assigning a distinct ε‑weight to each “level” of the possibility ranking and then distributing ordinary probability mass within each level. The resulting measure respects the original ordinal ranking (because lower ε‑powers dominate) while also being fully additive, allowing standard probabilistic operations such as expectation and Bayesian updating.
Conversely, the second theorem shows that any OM‑probability induces a unique possibility ranking: by discarding the ε‑weights and retaining only the order of the dominant term for each event, one recovers a possibility function Π that reproduces the original qualitative ordering. Thus the two formalisms are shown to be mathematically equivalent under the proposed axioms.
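The converse direction can be sketched in code: discard the coefficients and keep only the order of each event's dominant term. This is an illustrative reading of the theorem, not the paper's construction; the helper names and example numbers are ours:

```python
from fractions import Fraction

# Hypothetical two-level OM-measure over Ω = {a, b, c}.
levels = [{'a': Fraction(1)},
          {'b': Fraction(2, 3), 'c': Fraction(1, 3)}]

def dominant_order(event):
    """Order of the first non-zero term of P(A): the index k of the
    lowest level touching the event. Smaller order = more possible.
    Events touching no level (e.g. the empty set) get order
    len(levels), i.e. they are treated as impossible."""
    for k, p in enumerate(levels):
        if any(w in p for w in event):
            return k
    return len(levels)

def possibility_geq(a, b):
    """Π(A) ≥ Π(B) iff A's dominant term has lower (or equal) order;
    the numerical coefficients are discarded, leaving a pure
    possibility ranking."""
    return dominant_order(a) <= dominant_order(b)
```

Note that {'b'} and {'c'} become equally possible once the coefficients 2/3 and 1/3 are discarded: only the level survives the translation, as the theorem requires.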
A particularly interesting subclass, called “big‑stepped probabilities,” is examined. In this case each magnitude level contains at most one outcome with non‑zero mass, so the probability distribution collapses to a lexicographic ordering that is essentially identical to a pure possibility ranking, yet it still satisfies full additivity. This subclass is computationally attractive because it avoids the combinatorial explosion of distributing mass across many events at the same level, making it suitable for implementations in inconsistency management, non‑monotonic reasoning, and qualitative decision support systems.
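A standard concrete reading of big-steppedness is that each outcome's mass exceeds the sum of all strictly smaller masses, so comparing two events reduces to comparing their single most probable elements. A sketch with hypothetical weights (8, 4, 2, 1, normalised by 15; the numbers are ours):

```python
from fractions import Fraction

# Hypothetical big-stepped distribution: each mass exceeds the sum
# of all strictly smaller ones (8 > 4+2+1, 4 > 2+1, 2 > 1).
p = {'a': Fraction(8, 15), 'b': Fraction(4, 15),
     'c': Fraction(2, 15), 'd': Fraction(1, 15)}

# Verify the big-stepped property mechanically.
masses = sorted(p.values(), reverse=True)
big_stepped = all(m > sum(masses[i + 1:]) for i, m in enumerate(masses))

def prob(event):
    """Ordinary additive probability of an event."""
    return sum(p[w] for w in event)
```

Because of the property, `prob({'a'}) > prob({'b', 'c', 'd'})`: the single best element of an event outweighs any coalition of strictly worse ones, which is exactly the behaviour of a possibility ranking, realised inside a fully additive measure.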
The authors compare their framework with existing approaches such as lexicographic probabilities and hybrid possibility‑probability models. Lexicographic probabilities preserve the order of importance but sacrifice additivity, whereas hybrid models often require two separate representations and a translation mechanism, increasing complexity. The OM‑framework, by contrast, unifies the two in a single structure: it retains a clear ordinal hierarchy (as in possibility theory) and simultaneously supports additive calculations (as in probability theory). This dual capability enables, for example, the computation of expected utilities while still allowing a decision maker to express preferences purely in terms of ordinal rankings.
Practical implications are illustrated through two case studies. In inconsistency handling, the OM‑measure automatically gives higher‑order contradictions overwhelming influence, ensuring that resolution procedures respect the most credible information first. In qualitative decision making, a user can provide a possibility‑based ranking of outcomes; the system translates this ranking into an OM‑probability, computes expected utilities, and returns a recommendation, thereby bridging the gap between human‑friendly input and machine‑friendly computation.
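The decision-making pipeline described above can be sketched end to end: take an ordinal ranking, translate it into a big-stepped probability, and compute an expected utility. The translation rule (mass proportional to 2^-i for the i-th most plausible outcome) and all names and numbers are illustrative assumptions of ours, not the paper's procedure:

```python
from fractions import Fraction

# Hypothetical user input: an ordinal possibility ranking of outcomes
# (most plausible first) and a utility for each outcome.
ranking = ['sunny', 'cloudy', 'rain']
utility = {'sunny': 10, 'cloudy': 4, 'rain': -5}

# Translate the ranking into a big-stepped probability: the i-th most
# plausible outcome gets mass proportional to 2^-i, so each mass
# dominates the combined mass of everything less plausible.
weights = [Fraction(2) ** -i for i in range(len(ranking))]
total = sum(weights)
p = {w: wt / total for w, wt in zip(ranking, weights)}

# Standard additive machinery now applies to the ordinal input.
expected_utility = sum(p[w] * utility[w] for w in ranking)
```

With these numbers p = {sunny: 4/7, cloudy: 2/7, rain: 1/7}, which is big-stepped (4/7 > 2/7 + 1/7), and the expected utility is 43/7: ordinal human input, quantitative machine output.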
In conclusion, the paper delivers a rigorous, axiomatic bridge between possibility and probability theories. By introducing order‑of‑magnitude confidence relations, it shows that the qualitative richness of possibility rankings can be embedded in a fully additive probabilistic framework without loss of information. This contribution not only advances the theoretical understanding of uncertainty representation but also offers concrete tools for AI systems that must reason with both ordinal knowledge and quantitative inference.