Conservative Inference Rule for Uncertain Reasoning under Incompleteness

In this paper we formulate the problem of inference under incomplete information in very general terms. This includes modelling the process responsible for the incompleteness, which we call the incompleteness process. We allow the process behaviour to be partly unknown. Then we use Walley's theory of coherent lower previsions, a generalisation of the Bayesian theory to imprecision, to derive the rule to update beliefs under incompleteness that logically follows from our assumptions, and that we call the conservative inference rule. This rule has some remarkable properties: it is an abstract rule to update beliefs that can be applied in any situation or domain; it gives us the opportunity to be neither too optimistic nor too pessimistic about the incompleteness process, which is a necessary condition to draw conclusions that are both reliable and strong enough to be useful; and it is a coherent rule, in the sense that it cannot lead to inconsistencies. We give examples to show how the new rule can be applied in expert systems, in parametric statistical inference, and in pattern classification, and discuss more generally the view of incompleteness processes defended here as well as some of its consequences.


💡 Research Summary

The paper tackles the pervasive problem of reasoning when the available information is incomplete, a situation that arises in sensor failures, missing questionnaire responses, data transmission errors, and many other real‑world contexts. Rather than assuming that the mechanism generating the missingness is fully known—as is customary in classical Bayesian analysis—the authors introduce the notion of an “incompleteness process” whose behavior may be only partially specified. To model this partial ignorance they adopt Peter Walley’s theory of coherent lower previsions, a framework that generalizes precise probabilities to interval‑valued (imprecise) assessments while preserving a consistency condition called coherence.
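
For readers new to the framework: a lower prevision assigns to each gamble (a bounded real-valued payoff) a supremum acceptable buying price, and the matching upper prevision follows by conjugacy. The standard coherence condition, shown below in textbook form (it belongs to Walley's general theory and is not a formula quoted from this paper), demands that no finite combination of acceptable transactions can produce a sure loss:

```latex
% Conjugacy between lower and upper previsions:
\[
  \overline{P}(f) \;=\; -\,\underline{P}(-f).
\]
% Coherence (Walley, 1991): for all integers n \ge 0, m \ge 0 and all
% gambles f_0, f_1, \dots, f_n in the domain of \underline{P},
\[
  \sup_{\omega \in \Omega}\,
  \Bigl[\, \sum_{i=1}^{n} \bigl( f_i(\omega) - \underline{P}(f_i) \bigr)
         \;-\; m \bigl( f_0(\omega) - \underline{P}(f_0) \bigr) \Bigr]
  \;\ge\; 0.
\]
```

A precise (Bayesian) prevision is the special case in which the lower and upper values coincide.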

Within this setting the authors derive a novel updating rule, the Conservative Inference Rule (CIR), that follows logically from three minimal assumptions: (i) a prior coherent lower prevision representing the analyst's initial beliefs, (ii) a set of admissible incompleteness processes (the “model class”) that captures all plausible ways data can be lost or distorted, and (iii) the requirement that the updating operation itself remain coherent. For any event of interest, CIR takes the infimum (for the lower bound) and the supremum (for the upper bound), over all admissible incompleteness processes, of the corresponding conditional previsions. In effect, it reports the tightest probability interval that is guaranteed to hold under every admissible process, thereby avoiding both over‑optimism (assuming more about the process than is actually known, which could lead to unwarranted conclusions) and over‑pessimism (hedging against processes the model class already rules out, which would render the inference needlessly weak).
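
To make the infimum/supremum construction concrete, here is a minimal numerical sketch in Python. It assumes a finite state space, a precise prior, and an incompleteness process reduced to two state‑dependent missingness rates known only up to intervals; the names and numbers are invented for illustration, and the paper's formulation in terms of general coherent lower previsions is considerably more general:

```python
import itertools

# Minimal sketch of conservative inference on a finite space.  The
# quantity of interest is P(diseased | test result missing).  Each
# admissible incompleteness process -- here a pair of state-dependent
# missingness rates -- yields one Bayesian posterior; the conservative
# bounds take the min and max over all admissible processes.
prior = {"healthy": 0.9, "diseased": 0.1}

def posterior_diseased(miss):
    """Posterior of 'diseased' given a missing result, under one
    process with miss[x] = P(result withheld | true state x)."""
    joint = {x: prior[x] * miss[x] for x in prior}
    return joint["diseased"] / sum(joint.values())

def grid(lo, hi, n=21):
    """Evenly spaced grid approximating an interval of rates."""
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

# Partial knowledge of the process (hypothetical interval bounds):
# a healthy patient's result goes missing with rate in [0.1, 0.3],
# a diseased patient's with rate in [0.2, 0.6].
admissible = itertools.product(grid(0.1, 0.3), grid(0.2, 0.6))

vals = [posterior_diseased({"healthy": mh, "diseased": md})
        for mh, md in admissible]
print(f"conservative interval: [{min(vals):.3f}, {max(vals):.3f}]")
```

The resulting interval, roughly [0.069, 0.400], sits strictly between the answer obtained by assuming the missingness ignorable (the single value 0.1) and the nearly vacuous interval obtained by assuming nothing about the process, which is precisely the "neither too optimistic nor too pessimistic" behaviour described above.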

A major theoretical contribution is the proof that CIR preserves coherence: the posterior lower prevision obtained after applying CIR is again a coherent lower prevision, guaranteeing that subsequent inferences cannot generate contradictions. This property distinguishes CIR from naïve extensions of Bayesian updating that often break down when data are missing not at random or when the missingness mechanism is only partially known.

The authors illustrate the versatility of CIR through three detailed case studies. In expert systems, CIR allows diagnostic rules to be updated in the presence of missing symptoms without resorting to ad‑hoc imputation or discarding cases, leading to more reliable decision support. In parametric statistical inference, CIR yields interval estimates for parameters that automatically widen to reflect uncertainty about the missingness mechanism, yet remain informative enough to guide inference. In pattern classification, CIR treats missing features as sources of imprecision, producing class‑probability intervals that outperform traditional “complete‑case” or “single‑imputation” strategies in terms of both predictive accuracy and calibration (see the sketch below). Each example is accompanied by simulation results that demonstrate CIR's ability to reduce error rates while maintaining logical consistency.
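
As an illustration of the classification case, the sketch below handles a missing feature conservatively by conditioning on each possible completion and reporting the interval of class posteriors, in the spirit of credal classification. The naive‑Bayes model, feature names, and probabilities are all hypothetical and are not taken from the paper's experiments:

```python
import itertools

# Toy sketch of conservative classification with a missing feature.
# Model and numbers are hypothetical, not the paper's experiments.
classes = ["spam", "ham"]
prior = {"spam": 0.4, "ham": 0.6}

# P(feature value | class) for two binary features (made-up values).
lik = {
    "f1": {"spam": {0: 0.2, 1: 0.8}, "ham": {0: 0.7, 1: 0.3}},
    "f2": {"spam": {0: 0.5, 1: 0.5}, "ham": {0: 0.9, 1: 0.1}},
}

def posterior(instance):
    """Naive-Bayes posterior over classes for a fully observed instance."""
    score = {c: prior[c] for c in classes}
    for feat, val in instance.items():
        for c in classes:
            score[c] *= lik[feat][c][val]
    total = sum(score.values())
    return {c: score[c] / total for c in classes}

def conservative_posterior(observed, missing):
    """Interval of P(spam | instance) over all completions of the
    missing binary features: a conservative treatment that makes no
    assumption about why the features are missing."""
    vals = []
    for combo in itertools.product([0, 1], repeat=len(missing)):
        full = dict(observed, **dict(zip(missing, combo)))
        vals.append(posterior(full)["spam"])
    return min(vals), max(vals)

lo, hi = conservative_posterior({"f1": 1}, ["f2"])
print(f"P(spam | f1=1, f2 missing) lies in [{lo:.3f}, {hi:.3f}]")
```

When the interval straddles the decision threshold, as it does here, the classifier suspends judgement between the classes rather than committing to a single guess; this is how class‑probability intervals translate into cautious predictions.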

Beyond the technical developments, the paper discusses the philosophical stance underlying the incompleteness‑process model: acknowledging “uncertainty about uncertainty” and refusing to pretend that the missingness mechanism is fully known. By embedding this meta‑uncertainty into the formalism, CIR offers a principled middle ground between the extreme positions of assuming perfect knowledge of the missingness process and assuming complete ignorance. The authors argue that this balanced approach is essential for drawing reliable yet sufficiently strong conclusions in domains where data are inevitably imperfect.

In summary, the work provides a rigorous, general‑purpose rule for updating beliefs under incomplete information, grounded in the theory of coherent lower previsions. Its theoretical soundness, demonstrated coherence, and applicability across expert systems, statistical estimation, and machine learning make it a significant contribution to the literature on imprecise probability and uncertain reasoning.