Asymptotics for parametric martingale posteriors
The martingale posterior framework is a generalization of Bayesian inference where one elicits a sequence of one-step-ahead predictive densities instead of the likelihood and prior. Posterior sampling then involves imputing unseen observables, and can be carried out in an expedient and parallelizable manner using predictive resampling, without requiring Markov chain Monte Carlo. Recent work has investigated the use of plug-in parametric predictive densities, combined with stochastic gradient descent, to specify a parametric martingale posterior. This paper investigates the asymptotic properties of this class of parametric martingale posteriors. In particular, two central limit theorems based on martingale limit theory are introduced and applied. The first is a predictive central limit theorem, which enables a significant acceleration of the predictive resampling scheme through a hybrid sampling algorithm based on a normal approximation. The second is a Bernstein-von Mises result, which is novel for martingale posteriors, and provides methodological guidance on attaining desirable frequentist properties. We demonstrate the utility of the theoretical results in simulations and a real data example.
💡 Research Summary
This paper develops the asymptotic theory for a class of parametric martingale posteriors, a recent generalization of Bayesian inference that replaces the traditional likelihood‑prior pair with a sequence of one‑step‑ahead predictive densities. The authors focus on the “plug‑in” setting where the predictive density is taken from a parametric family pθ(y) and the parameter estimate is updated online using a natural‑gradient stochastic approximation:
θ_N = θ_{N‑1} + (N‑1)^{-1} I(θ_{N‑1})^{-1} s(θ_{N‑1}, Y_N),
where s(θ, y) is the score function and I(θ) the Fisher information. Because the score has mean zero under the plug-in predictive, the sequence {θ_N} is a martingale; the learning rate (N−1)^{-1} satisfies the classic stochastic-approximation conditions ∑(N−1)^{-1} = ∞ and ∑(N−1)^{-2} < ∞, which ensure that θ_N converges almost surely to a random limit θ_∞, whose law given the observed data defines the martingale posterior.
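The recursion above can be sketched concretely. As an illustrative assumption (not the paper's general setting), take a normal location model N(θ, σ²) with known σ, for which I(θ) = 1/σ² and s(θ, y) = (y − θ)/σ², so the natural gradient I(θ)^{-1} s(θ, y) reduces to y − θ. Each forward pass of predictive resampling then yields one draw of θ_∞:

```python
import numpy as np

def predictive_resample(theta_n, n, N_max, sigma=1.0, rng=None):
    """One predictive-resampling pass for a normal location model
    N(theta, sigma^2) with known sigma (an illustrative choice).
    Starting from the estimate theta_n fitted to n observations,
    impute Y_{n+1}, ..., Y_{N_max} from the plug-in predictive and
    apply theta_i = theta_{i-1} + (i-1)^{-1} I^{-1} s(theta_{i-1}, Y_i).
    Here I^{-1} s simplifies to (y - theta)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta_n
    for i in range(n + 1, N_max + 1):
        y = rng.normal(theta, sigma)           # impute Y_i from the plug-in predictive
        theta = theta + (y - theta) / (i - 1)  # natural-gradient step, rate (i-1)^{-1}
    return theta

# Repeating the forward pass B times (embarrassingly parallel) gives a
# sample from the martingale posterior, centered at the current estimate.
rng = np.random.default_rng(0)
draws = np.array([predictive_resample(0.5, n=100, N_max=5000, rng=rng)
                  for _ in range(200)])
```

Since {θ_N} is a martingale, the draws concentrate around the initial estimate θ_n, with spread governed by the remaining step sizes.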
Two central limit theorems (CLTs) are established.
- Predictive CLT – Treating the initial data Y₁:n as fixed, the authors study the limit as the imputed population size N→∞. Under mild regularity (square‑integrable scores, continuous positive Fisher information) and a uniform L² integrability condition on the natural gradient Z_N = I(θ_{N‑1})^{-1}s(θ_{N‑1}, Y_N), they prove
V_N^{-1}(θ_∞ – θ_N) ⇒ N(0,1),
where V_N² = ∑_{i=N+1}^∞ E[(θ_i − θ_{i−1})² | Y₁:N] = ∑_{i=N+1}^∞ (i−1)^{-2} E[Z_i² | Y₁:N] is the conditional variance of the remaining martingale increments, so that for large N the limit θ_∞ is approximately normal about the current estimate θ_N.
- Bernstein–von Mises – As the observed sample size n → ∞, the martingale posterior admits a Bernstein–von Mises-type normal approximation. This result, novel for martingale posteriors, provides methodological guidance on attaining desirable frequentist properties.
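The predictive CLT is what enables the hybrid sampler mentioned in the abstract: resample exactly up to a moderate switch point, then replace the remaining (infinite) tail of updates by a single normal draw. A minimal sketch, again under the illustrative assumption of a normal location model with known σ, where the increment (i−1)^{-1}(Y_i − θ_{i−1}) has conditional variance σ²/(i−1)², giving the closed-form tail variance V_N² ≈ σ²/N (the name `N_switch` is ours):

```python
import numpy as np

def hybrid_draw(theta_n, n, N_switch, sigma=1.0, rng=None):
    """Hybrid sampler sketch for a normal location model with known sigma:
    predictive-resample exactly up to N_switch, then draw the limit from
    the CLT approximation theta_inf | theta_N ~ N(theta_N, V_N^2), with
    V_N^2 = sigma^2 * sum_{i>N} (i-1)^{-2} ~= sigma^2 / N_switch
    (model-specific closed form, used here for illustration)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta_n
    for i in range(n + 1, N_switch + 1):       # exact resampling up to N_switch
        y = rng.normal(theta, sigma)
        theta = theta + (y - theta) / (i - 1)
    V2 = sigma**2 / N_switch                   # tail variance of remaining increments
    return rng.normal(theta, np.sqrt(V2))      # one-shot normal approximation

rng = np.random.default_rng(1)
draws = np.array([hybrid_draw(0.5, n=100, N_switch=500, rng=rng)
                  for _ in range(200)])
```

Truncating at N_switch = 500 instead of iterating to a very large N replaces thousands of imputation steps per draw with one Gaussian sample, while leaving the draw's mean and total variance essentially unchanged.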