Variance bounding and geometric ergodicity of Markov chain Monte Carlo kernels for approximate Bayesian computation


Approximate Bayesian computation has emerged as a standard computational tool when dealing with the increasingly common scenario of completely intractable likelihood functions in Bayesian inference. We show that many common Markov chain Monte Carlo kernels used to facilitate inference in this setting can fail to be variance bounding, and hence geometrically ergodic, which can have consequences for the reliability of estimates in practice. This phenomenon is typically independent of the choice of tolerance in the approximation. We then prove that a recently introduced Markov kernel in this setting can inherit variance bounding and geometric ergodicity from its intractable Metropolis–Hastings counterpart, under reasonably weak and manageable conditions. We show that the computational cost of this alternative kernel is bounded whenever the prior is proper, and present indicative results on an example where spectral gaps and asymptotic variances can be computed, as well as an example involving inference for a partially and discretely observed, time-homogeneous, pure jump Markov process. We also supply two general theorems, one of which provides a simple sufficient condition for lack of variance bounding for reversible kernels and the other provides a positive result concerning inheritance of variance bounding and geometric ergodicity for mixtures of reversible kernels.


💡 Research Summary

This paper investigates the convergence properties of Markov chain Monte Carlo (MCMC) kernels that are commonly employed in Approximate Bayesian Computation (ABC), a methodology used when the likelihood function is intractable. The authors first demonstrate that many of the standard ABC-MCMC kernels, such as the naïve ABC Metropolis–Hastings (MH) algorithm, pseudo-marginal ABC, and various adaptive proposal schemes, can fail to be variance bounding. Variance bounding is a technical condition that guarantees a finite asymptotic variance for all square-integrable test functions; it is closely linked to geometric ergodicity, which ensures that the chain converges to its stationary distribution at a geometric rate. By establishing a simple sufficient condition for the lack of variance bounding in reversible kernels, the authors show that the problem is largely independent of the tolerance parameter ε that controls the quality of the ABC approximation. In particular, when the probability of accepting any move becomes arbitrarily small on regions of the state space with positive posterior mass, the spectral gap of the transition operator vanishes and the chain cannot be geometrically ergodic.
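To make the failure mechanism concrete, here is a minimal sketch of a naive ABC-MH step with a symmetric random-walk proposal; the toy Gaussian model, `prior_logpdf`, `simulate`, and the tuning constants are illustrative assumptions, not the paper's setup. A move is accepted only when a single synthetic data set lands within the tolerance of the observation, so wherever that hit probability decays (in the tails of this toy model, for instance) the chain rejects almost every proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_mh_step(theta, y_obs, eps, prior_logpdf, simulate, prop_sd=0.5):
    """One naive ABC-MH step: accept the proposal only if the MH ratio
    accepts AND one synthetic data set falls within eps of y_obs."""
    theta_prop = theta + prop_sd * rng.normal()
    # Symmetric proposal, so the MH ratio reduces to the prior ratio.
    if np.log(rng.uniform()) < prior_logpdf(theta_prop) - prior_logpdf(theta):
        y_syn = simulate(theta_prop)      # a single synthetic data set
        if abs(y_syn - y_obs) <= eps:     # "hit": within the tolerance
            return theta_prop
    return theta                          # otherwise the chain holds in place

# Illustrative toy model (assumption): theta ~ N(0, 1), y | theta ~ N(theta, 1).
prior_logpdf = lambda t: -0.5 * t ** 2
simulate = lambda t: t + rng.normal()

theta = 0.0
for _ in range(1000):
    theta = abc_mh_step(theta, y_obs=1.0, eps=0.1,
                        prior_logpdf=prior_logpdf, simulate=simulate)
```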

Having identified this deficiency, the paper turns to a recently introduced pseudo-marginal ABC kernel. This kernel replaces the intractable MH acceptance ratio with an unbiased estimator obtained by simulating synthetic data. Under three modest assumptions, namely that (i) the prior distribution is proper, (ii) the likelihood estimator has a finite second moment, and (iii) the underlying reversible MH kernel satisfies a suitable minorisation condition, the new kernel inherits variance bounding and geometric ergodicity from its exact MH counterpart. In other words, the pseudo-marginal kernel retains a positive spectral gap and hence a finite asymptotic variance for every square-integrable function. The authors also prove a general result for mixtures of reversible kernels: if each component kernel is variance bounding and the mixture weights are strictly positive, then the mixture kernel is also variance bounding. This theorem provides a theoretical foundation for constructing composite proposal strategies without sacrificing convergence guarantees.
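The sketch below follows the summary's description of such a kernel: the intractable ABC likelihood is estimated by the fraction of N synthetic data sets falling within the tolerance, and the estimate attached to the current state is reused rather than refreshed. This is a generic pseudo-marginal construction under assumed interfaces (`prior_pdf`, `simulate`, and the choice of `N` are illustrative); the specific kernel studied in the paper may differ in how simulations are allocated.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_lik_hat(theta, y_obs, eps, simulate, N):
    """Unbiased ABC-likelihood estimate: the fraction of N synthetic
    data sets that land within eps of the observation."""
    hits = sum(abs(simulate(theta) - y_obs) <= eps for _ in range(N))
    return hits / N

def pm_abc_mh_step(theta, lhat, y_obs, eps, prior_pdf, simulate,
                   N=100, prop_sd=0.5):
    """One pseudo-marginal ABC-MH step. Reusing lhat for the current
    state (never recomputing it) is what makes the chain target the
    intended ABC posterior exactly."""
    theta_prop = theta + prop_sd * rng.normal()
    lhat_prop = abc_lik_hat(theta_prop, y_obs, eps, simulate, N)
    # Assumes the chain is initialised at a state with lhat > 0.
    ratio = (prior_pdf(theta_prop) * lhat_prop) / (prior_pdf(theta) * lhat)
    if rng.uniform() < ratio:
        return theta_prop, lhat_prop
    return theta, lhat
```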

The theoretical contributions are illustrated with two concrete examples. The first example uses a simple Gaussian-Gaussian model where the exact spectral gap and asymptotic variance can be computed analytically. The authors show that the naïve ABC-MH kernel loses its spectral gap as ε decreases, whereas the pseudo-marginal kernel retains a positive gap and yields stable variance estimates. The second example involves inference for a time-homogeneous, pure jump Markov process that is only partially observed, at discrete time points. This model features a multimodal posterior and a non-trivial observation mechanism. The pseudo-marginal kernel is shown to have bounded computational cost (provided the prior is proper) and to converge geometrically, as evidenced by empirical autocorrelation diagnostics and estimated spectral gaps.
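For a process of the second example's type, the simulator that drives ABC must draw the state of a pure jump process at the observation times. Below is a minimal Gillespie-style sketch; `rate_fn` and `jump_fn` are hypothetical placeholders, and the birth-death rates in the usage example are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_pure_jump(x0, rate_fn, jump_fn, obs_times):
    """Simulate a time-homogeneous pure jump Markov process and record
    its state at each (sorted) observation time."""
    t, x, record = 0.0, x0, []
    for t_obs in obs_times:
        while True:
            lam = rate_fn(x)                 # total jump intensity at x
            if lam <= 0.0:                   # absorbing state: no more jumps
                break
            dt = rng.exponential(1.0 / lam)  # exponential holding time
            if t + dt > t_obs:               # next jump lands past t_obs
                break
            t += dt
            x = jump_fn(x)
        record.append(x)
        t = t_obs  # valid by memorylessness of the exponential holding time
    return record

# Illustrative birth-death process (assumption): births at rate b,
# deaths at rate d * x; the state never steps below zero.
b, d = 2.0, 0.5
rate_fn = lambda x: b + d * x
jump_fn = lambda x: x + 1 if rng.uniform() < b / (b + d * x) else x - 1
path = simulate_pure_jump(x0=5, rate_fn=rate_fn, jump_fn=jump_fn,
                          obs_times=[1.0, 2.0, 3.0])
```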

In addition to the main results, the paper supplies two auxiliary theorems. The first gives a straightforward sufficient condition for the failure of variance bounding in any reversible kernel, based on the existence of sets of positive mass on which the acceptance probability is arbitrarily small. The second establishes that variance bounding and geometric ergodicity are preserved under convex combinations of reversible kernels, provided each component satisfies the property.
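As a reading aid, the two statements can be paraphrased in notation of our choosing; the paper's exact hypotheses may be stronger or stated differently.

```latex
% Paraphrases in our own notation; see the paper for the exact statements.
% (1) Lack of variance bounding: if a reversible kernel P can hold in
%     place with probability arbitrarily close to one on sets of
%     positive mass, i.e.
\operatorname{ess\,sup}_{\theta \sim \pi} \, P(\theta, \{\theta\}) = 1,
% then P is not variance bounding (and hence not geometrically ergodic).
% (2) Mixtures: for \pi-reversible kernels P_1, \dots, P_k, the mixture
P = \sum_{i=1}^{k} w_i \, P_i, \qquad w_i > 0, \qquad \sum_{i=1}^{k} w_i = 1,
% is variance bounding (resp. geometrically ergodic) when each P_i is.
```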

Overall, the study highlights a subtle but critical pitfall in the design of ABC-MCMC algorithms: without careful attention to the acceptance mechanism, one may inadvertently construct chains that are not geometrically ergodic, leading to unreliable Monte Carlo estimates. The pseudo-marginal ABC kernel presented here offers a practical remedy, inheriting the desirable convergence properties of the exact MH algorithm while remaining computationally feasible. The authors suggest that future work could explore high-dimensional extensions, adaptive tuning of the unbiased estimator, and automated diagnostics for variance bounding in complex scientific applications.

