Kernel estimators of asymptotic variance for adaptive Markov chain Monte Carlo
We study the asymptotic behavior of kernel estimators of asymptotic variances (or long-run variances) for a class of adaptive Markov chains. The convergence is studied both in $L^p$ and almost surely. The results also apply to Markov chains and improve on the existing literature by imposing weaker conditions. We illustrate the results with applications to the $\operatorname{GARCH}(1,1)$ Markov model and to an adaptive MCMC algorithm for Bayesian logistic regression.
💡 Research Summary
The paper investigates the asymptotic behavior of kernel estimators for long‑run (asymptotic) variance in the context of adaptive Markov chain Monte Carlo (MCMC) algorithms. Long‑run variance quantifies the variability of ergodic averages and is essential for constructing confidence intervals, assessing efficiency, and comparing samplers. While kernel‑based variance estimators are well‑studied for fixed‑kernel Markov chains, the adaptive setting—where the transition kernel changes over time—poses additional challenges because standard mixing and ergodicity assumptions may no longer hold.
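To make the object concrete: for a real-valued functional $g$, the long-run variance is $\sigma^2=\lim_{n\to\infty} n\operatorname{Var}\!\big(\tfrac1n\sum_{i=1}^n g(X_i)\big)$, and a kernel (lag-window) estimator weights the empirical autocovariances by a window function. The sketch below is a generic illustration in Python, not the paper's construction; the name `kernel_lrv`, the Bartlett and Parzen windows, and the $n^{1/3}$ bandwidth rule are standard textbook choices assumed here.

```python
import numpy as np

def kernel_lrv(x, kernel="bartlett", bandwidth=None):
    """Lag-window (kernel) estimator of the long-run variance of a 1-D chain:
    sigma2_hat = gamma_hat(0) + 2 * sum_{k=1}^{b_n} w(k / b_n) * gamma_hat(k),
    where gamma_hat(k) is the empirical lag-k autocovariance."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if bandwidth is None:
        bandwidth = int(n ** (1 / 3))  # common textbook rate, not from the paper
    xc = x - x.mean()
    # empirical autocovariances gamma_hat(0), ..., gamma_hat(bandwidth)
    gammas = np.array([xc[: n - k] @ xc[k:] / n for k in range(bandwidth + 1)])
    lags = np.arange(1, bandwidth + 1) / bandwidth
    if kernel == "bartlett":
        w = 1.0 - lags                              # triangular (Bartlett) window
    elif kernel == "parzen":
        w = np.where(lags <= 0.5,
                     1 - 6 * lags**2 + 6 * lags**3,  # Parzen window, inner part
                     2 * (1 - lags) ** 3)            # Parzen window, outer part
    else:
        raise ValueError(f"unknown kernel: {kernel}")
    return gammas[0] + 2.0 * (w @ gammas[1:])
```

On i.i.d. data this reduces (up to the window weights) to the sample variance; the interesting regime is correlated MCMC output, where the weighted autocovariance sum captures the inflation of the variance of ergodic averages.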
Problem Setting and Assumptions
Consider an adaptive chain $\{X_n\}_{n\ge0}$ on a measurable space $(\mathcal X,\mathcal F)$ with a family of transition kernels $\{P_\theta\}_{\theta\in\Theta}$. At iteration $n$ the chain uses the kernel $P_{\theta_n}$, where the parameter $\theta_n$ is updated online from the past trajectory. The authors adopt the classic “diminishing adaptation” condition $|\theta_{n+1}-\theta_n|\to0$ a.s. together with $\sum_n|\theta_{n+1}-\theta_n|<\infty$, and a “containment” condition guaranteeing that the chain spends a non-negligible proportion of time in a small set uniformly over $\theta$. Crucially, they replace the usual uniform geometric ergodicity assumption with a weaker uniform drift (or Foster-Lyapunov) condition: there exist a function $V\ge1$, constants $\lambda\in(0,1)$ and $b<\infty$, and a petite set $C$ such that for every $\theta$,

$$P_\theta V \le \lambda V + b\,\mathbf 1_C.$$
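For intuition about how these conditions arise in practice, here is a minimal adaptive random-walk Metropolis sketch (an illustration assumed here, not the algorithm analyzed in the paper): the adapted parameter is a log proposal scale driven by a Robbins-Monro update, and the summable gain sequence $(n+1)^{-1.1}$ enforces $\sum_n|\theta_{n+1}-\theta_n|<\infty$ by construction; the 0.234 acceptance target is a conventional heuristic.

```python
import numpy as np

def adaptive_rwm(logpi, x0, n_iter, rng=None):
    """Adaptive random-walk Metropolis with a Robbins-Monro scale update.

    The log proposal scale theta_n is the adapted parameter; the update steps
    are bounded by gamma_n = (n+1)^{-1.1}, which is summable, so
    sum_n |theta_{n+1} - theta_n| < infinity (diminishing adaptation)."""
    rng = np.random.default_rng(rng)
    x, lp = float(x0), logpi(x0)
    theta = 0.0                      # log of the proposal standard deviation
    chain = np.empty(n_iter)
    for n in range(n_iter):
        prop = x + np.exp(theta) * rng.standard_normal()
        lp_prop = logpi(prop)
        alpha = np.exp(min(0.0, lp_prop - lp))   # Metropolis acceptance prob.
        if rng.uniform() < alpha:
            x, lp = prop, lp_prop
        # Robbins-Monro step toward the classical 0.234 acceptance target;
        # the summable gain enforces the adaptation conditions in the text.
        theta += (n + 1) ** (-1.1) * (alpha - 0.234)
        chain[n] = x
    return chain, np.exp(theta)
```

Feeding the resulting chain into a lag-window estimator such as `kernel_lrv` above yields the kind of asymptotic-variance estimate whose convergence the paper studies.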