Limit theorems for some adaptive MCMC algorithms with subgeometric kernels: Part II


We prove a central limit theorem for a general class of adaptive Markov Chain Monte Carlo algorithms driven by sub-geometrically ergodic Markov kernels. We discuss in detail the special case of stochastic approximation. We use the result to analyze the asymptotic behavior of an adaptive version of the Metropolis Adjusted Langevin algorithm with a heavy tailed target density.


💡 Research Summary

The paper addresses a gap in the theory of adaptive Markov Chain Monte Carlo (MCMC) methods when the underlying transition kernels are only sub-geometrically ergodic, i.e., when their convergence to the stationary distribution proceeds at a polynomial or otherwise slower-than-exponential rate. After a concise introduction that situates the work within the existing literature on adaptive MCMC (notably the diminishing-adaptation and containment conditions of Roberts and Rosenthal, and the sub-geometric ergodicity framework of Fort, Moulines and others), the authors lay out a rigorous set of assumptions.
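To make the diminishing-adaptation idea concrete, here is a minimal illustrative sketch (not the paper's algorithm): a random-walk sampler whose log proposal scale is updated by a Robbins–Monro stochastic-approximation rule with step sizes \(\gamma_n \to 0\). The Cauchy-like heavy-tailed target, the exponent 0.6, and the target acceptance rate 0.44 are illustrative assumptions, not choices taken from the paper.

```python
import math
import random

def log_target(x):
    # Illustrative heavy-tailed (Cauchy-like) log-density, up to a constant.
    return -math.log(1.0 + x * x)

def adaptive_rw_chain(n_steps, x0=0.0, theta0=0.0, target_acc=0.44, seed=0):
    """Random-walk Metropolis with a stochastic-approximation update of
    theta = log(proposal scale); adaptation diminishes as gamma_n -> 0."""
    rng = random.Random(seed)
    x, theta = x0, theta0
    for n in range(1, n_steps + 1):
        scale = math.exp(theta)
        prop = x + scale * rng.gauss(0.0, 1.0)
        log_alpha = log_target(prop) - log_target(x)
        accepted = rng.random() < math.exp(min(0.0, log_alpha))
        if accepted:
            x = prop
        # Diminishing adaptation: gamma_n = n^(-0.6) satisfies
        # sum gamma_n = infinity and sum gamma_n^2 < infinity.
        gamma = n ** -0.6
        theta += gamma * ((1.0 if accepted else 0.0) - target_acc)
    return x, theta
```

Because the increments of theta shrink at the Robbins–Monro rate, the kernel sequence changes less and less over time, which is the mechanism the paper's limit theorems rely on.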

First, each kernel (P_{\theta}), indexed by a tuning parameter (\theta), satisfies a sub-geometric drift condition of the form
\[
P_{\theta} V(x) \le V(x) - c\,\phi\big(V(x)\big) + b\,\mathbf{1}_C(x),
\]
where (V \ge 1) is a drift function, (\phi) is a concave increasing rate function (polynomial rates correspond to (\phi(v) = c\,v^{\alpha}) with (\alpha \in (0,1))), (C) is a small set, and the constants hold uniformly in (\theta).
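The abstract's motivating application is an adaptive Metropolis Adjusted Langevin Algorithm (MALA) for a heavy-tailed target. The sketch below is a hypothetical illustration of that setting, not the paper's construction: it uses a Cauchy-like target, a drift-truncated Langevin proposal (a common stabilizing device in heavy tails), and a stochastic-approximation update of the log step size toward an assumed acceptance rate of 0.574.

```python
import math
import random

def log_target(x):
    # Illustrative heavy-tailed (Cauchy-like) log-density, up to a constant.
    return -math.log(1.0 + x * x)

def grad_log_target(x):
    # Gradient of -log(1 + x^2).
    return -2.0 * x / (1.0 + x * x)

def adaptive_mala(n_steps, x0=0.0, theta0=0.0, target_acc=0.574, seed=0):
    """Adaptive MALA sketch: theta = log(step size sigma^2) is adapted by
    a diminishing stochastic-approximation rule."""
    rng = random.Random(seed)
    x, theta = x0, theta0
    for n in range(1, n_steps + 1):
        sigma2 = math.exp(theta)

        def mean(z):
            # Truncating the Langevin drift keeps proposals stable in the tails.
            d = 0.5 * sigma2 * grad_log_target(z)
            return z + max(-1.0, min(1.0, d))

        prop = mean(x) + math.sqrt(sigma2) * rng.gauss(0.0, 1.0)

        def log_q(frm, to):
            # Gaussian proposal log-density up to a constant that cancels
            # in the Metropolis-Hastings ratio.
            return -((to - mean(frm)) ** 2) / (2.0 * sigma2)

        log_alpha = (log_target(prop) - log_target(x)
                     + log_q(prop, x) - log_q(x, prop))
        accepted = rng.random() < math.exp(min(0.0, log_alpha))
        if accepted:
            x = prop
        theta += n ** -0.6 * ((1.0 if accepted else 0.0) - target_acc)
    return x, theta
```

In heavy tails the un-truncated Langevin drift can explode, which is one reason MALA on such targets is only sub-geometrically ergodic; that is precisely the regime the paper's central limit theorem covers.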

