Predictively Oriented Posteriors
We advocate for a new statistical principle that combines the most desirable aspects of both parameter inference and density estimation. This leads us to the predictively oriented (PrO) posterior, which expresses uncertainty as a consequence of predictive ability. Doing so leads to inferences which predictively dominate both classical and generalised Bayes posterior predictive distributions: up to logarithmic factors, PrO posteriors converge to the predictively optimal model average. Whereas classical and generalised Bayes posteriors only achieve this rate if the model can recover the data-generating process, PrO posteriors adapt to the level of model misspecification. This means that they concentrate around the true model in the same way as Bayes and Gibbs posteriors if the model can recover the data-generating distribution, but do not concentrate in the presence of non-trivial forms of model misspecification. Instead, they stabilise towards a predictively optimal posterior whose degree of irreducible uncertainty admits an interpretation as the degree of model misspecification – a sharp contrast to how Bayesian uncertainty and its existing extensions behave. Lastly, we show that PrO posteriors can be sampled from by evolving particles based on mean field Langevin dynamics, and verify the practical significance of our theoretical developments on a number of numerical examples.
💡 Research Summary
The paper introduces a novel statistical principle that places predictive performance at the core of Bayesian inference, leading to the Predictively Oriented (PrO) posterior. Traditional Bayesian analysis first constructs a posterior over parameters (Πₙ) and then derives a predictive distribution by averaging the model likelihoods with respect to Πₙ. This approach quantifies uncertainty about the parameters, and it yields optimal predictions only when the model class contains the true data-generating distribution (i.e., the model is well-specified). Under misspecification, the posterior still collapses asymptotically to a point mass, so the resulting predictive distribution becomes overly confident and often sub-optimal.
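To make the averaging step concrete, here is a minimal sketch of a classical posterior predictive, computed by Monte Carlo averaging of model likelihoods over posterior draws. The conjugate N(θ, 1) location model with N(0, 1) prior is a hypothetical illustration, not an example from the paper:

```python
import numpy as np

# Minimal sketch (a hypothetical conjugate example, not from the paper):
# classical Bayes for a N(theta, 1) location model with a N(0, 1) prior.
# The posterior Pi_n has closed form; the predictive is then obtained by
# averaging the model likelihood over posterior draws.

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)   # observed data

# Conjugate posterior over theta: N(post_mean, post_var)
n = x.size
post_var = 1.0 / (1.0 + n)                    # prior precision 1 plus n unit data precisions
post_mean = post_var * x.sum()

# Posterior predictive density at a new point via Monte Carlo: average the
# likelihood p_theta(x_new) over draws theta ~ Pi_n -- the averaging step
# described above.
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=10_000)

def predictive_density(x_new, thetas):
    """Monte Carlo average of the N(theta, 1) likelihood at x_new."""
    lik = np.exp(-0.5 * (x_new - thetas) ** 2) / np.sqrt(2.0 * np.pi)
    return lik.mean()

print(predictive_density(1.5, theta_draws))
```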
PrO reverses this order: it directly minimizes a proper scoring rule evaluated on the averaged predictive distribution, while regularizing with a Kullback–Leibler (KL) divergence to a prior. Formally, the PrO posterior Qₙ solves
Qₙ = arg min_{Q∈𝒫(Θ)} [ Σᵢ₌₁ⁿ S(∫_Θ p_θ dQ(θ), xᵢ) + λ⁻¹ KL(Q ‖ Π) ],

where S is a strictly proper scoring rule (e.g., the log score), ∫_Θ p_θ dQ(θ) is the Q-averaged predictive distribution, Π is the prior, and λ > 0 controls the strength of the KL regularization. Uncertainty under Qₙ is therefore whatever spread over Θ best serves prediction, rather than a statement of belief about a single true parameter.
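The abstract notes that PrO posteriors can be sampled by evolving particles with mean field Langevin dynamics. The sketch below illustrates that idea for the log score in a deliberately misspecified toy problem; the N(θ, 1) model, N(0, 1) prior, and all tuning constants are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

# Minimal sketch of sampling a PrO-style posterior with mean-field Langevin
# particle dynamics, under the log score, for a toy N(theta, 1) location
# model with a N(0, 1) prior. All constants (K, eta, tau) are illustrative
# assumptions, not choices prescribed by the paper.

rng = np.random.default_rng(1)
# Misspecified setting: the data are a two-component mixture, but the model
# is a single Gaussian, so no single theta recovers the data distribution.
x = np.concatenate([rng.normal(-2.0, 1.0, 50), rng.normal(2.0, 1.0, 50)])

K, eta, tau, steps = 200, 5e-2, 0.1, 3000
theta = rng.normal(0.0, 1.0, size=K)        # particles approximating Q_n

def lik(xs, thetas):
    """N(theta, 1) likelihood matrix with shape (len(xs), len(thetas))."""
    return np.exp(-0.5 * (xs[:, None] - thetas[None, :]) ** 2) / np.sqrt(2.0 * np.pi)

for _ in range(steps):
    L = lik(x, theta)                        # p_theta_k(x_i)
    P_hat = L.mean(axis=1, keepdims=True)    # particle-averaged predictive at each x_i
    # Drift from the fit term: minus the theta-gradient of the first
    # variation of sum_i S(P_Q, x_i) under the log score, i.e.
    # sum_i grad_theta p_theta(x_i) / P_hat(x_i), averaged over data points
    # here for numerical stability.
    drift_fit = ((L / P_hat) * (x[:, None] - theta[None, :])).mean(axis=0)
    drift_prior = -theta                     # grad log pi(theta) for a N(0, 1) prior
    theta = theta + eta * (drift_fit + tau * drift_prior) \
        + np.sqrt(2.0 * eta * tau) * rng.normal(size=K)

# Under misspecification the particles should spread over both data modes
# instead of collapsing to a point, so the posterior retains predictive
# uncertainty that reflects the model-data mismatch.
print(f"particle mean {theta.mean():+.2f}, particle sd {theta.std():.2f}")
```

In this toy setting the particles settle near the two data modes rather than at their average, which mirrors the paper's claim that the residual spread of a PrO posterior reflects the degree of model misspecification.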