Introducing doubt in Bayesian model comparison


There are things we know, things we know we don’t know, and then there are things we don’t know we don’t know. In this paper we address the latter two issues in a Bayesian framework, introducing the notion of doubt to quantify the degree of (dis)belief in a model given observational data in the absence of explicit alternative models. We demonstrate how a properly calibrated doubt can lead to model discovery when the true model is unknown.


💡 Research Summary

The paper tackles a fundamental limitation of conventional Bayesian model comparison: the inability to account for models that are not explicitly included among the set of candidates. Traditional Bayesian model selection evaluates the posterior probability of each candidate model by combining prior model probabilities with the evidence (marginal likelihood) derived from the data. This framework assumes that the true data‑generating process is represented within the candidate set, an assumption that is rarely satisfied in practice, especially in complex scientific domains where the space of plausible models is vast and often ill‑defined.
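For concreteness, the standard update described above can be written as follows (a sketch of the usual posterior model probability, with \(\pi_k\) the prior probability of candidate \(M_k\), \(p(d_{\text{obs}} \mid M_k)\) its evidence, and \(d_{\text{obs}}\) the observed data):

\[
p(M_k \mid d_{\text{obs}}) = \frac{\pi_k \, p(d_{\text{obs}} \mid M_k)}{\sum_{j=1}^{K} \pi_j \, p(d_{\text{obs}} \mid M_j)}.
\]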

To address this gap, the authors introduce a meta-level quantity called “doubt” (denoted \(D\)). Doubt is defined as the probability that the true model lies outside the explicitly considered model set \(\{M_1, M_2, \dots, M_K\}\). Formally, they augment the candidate set with an additional “unknown” model carrying its own prior probability \(\pi_D\). The posterior doubt after observing data \(d_{\text{obs}}\) is then obtained from a Bayesian update that mirrors the standard model-wise update:

\[
D \equiv p(\text{unknown} \mid d_{\text{obs}})
= \frac{\pi_D \, p(d_{\text{obs}} \mid \text{unknown})}
       {\pi_D \, p(d_{\text{obs}} \mid \text{unknown}) + \sum_{k=1}^{K} \pi_k \, p(d_{\text{obs}} \mid M_k)},
\]

where \(\pi_k\) and \(p(d_{\text{obs}} \mid M_k)\) are the prior probability and evidence of candidate model \(M_k\), and \(p(d_{\text{obs}} \mid \text{unknown})\) is the evidence assigned to the unknown model, which cannot be computed from an explicit likelihood and therefore requires a calibration choice.
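The update can be sketched numerically as below. This is not code from the paper: the function name `posterior_doubt`, the example numbers, and in particular the value `log_evidence_unknown` assigned to the unknown model are illustrative assumptions; the paper's point is precisely that this last quantity has to be supplied by calibration rather than computed from a likelihood.

```python
import numpy as np

def posterior_doubt(log_evidences, prior_probs, log_evidence_unknown, prior_doubt):
    """Posterior doubt: probability that the true model lies outside the candidate set.

    log_evidences        : log marginal likelihoods ln p(d_obs | M_k) for the K candidates
    prior_probs          : prior model probabilities pi_k (summing, with prior_doubt, to 1)
    log_evidence_unknown : assumed/calibrated log evidence for the 'unknown' model
    prior_doubt          : prior doubt pi_D
    """
    log_evidences = np.asarray(log_evidences, dtype=float)
    prior_probs = np.asarray(prior_probs, dtype=float)

    # Work in log space for numerical stability, then normalise.
    log_terms = np.concatenate((
        np.log(prior_probs) + log_evidences,           # candidate models
        [np.log(prior_doubt) + log_evidence_unknown],  # the extra 'unknown' model
    ))
    log_terms -= log_terms.max()                       # guard against under/overflow
    weights = np.exp(log_terms)
    posteriors = weights / weights.sum()
    return posteriors[-1], posteriors[:-1]             # (posterior doubt, candidate posteriors)

# Hypothetical numbers for illustration only: two candidates, 1% prior doubt,
# and a benchmark evidence for the unknown model chosen by the analyst.
doubt, model_posteriors = posterior_doubt(
    log_evidences=[-120.0, -123.5],
    prior_probs=[0.495, 0.495],
    log_evidence_unknown=-118.0,
    prior_doubt=0.01,
)
print(doubt, model_posteriors)
```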

