Langevin type limiting processes for Adaptive MCMC
Adaptive Markov Chain Monte Carlo (AMCMC) is a class of MCMC algorithms in which the proposal distribution changes at every iteration of the chain. In this setting it is important to verify that the resulting Markov chain still has the intended stationary distribution. In this paper we discuss a diffusion approximation to a discrete-time AMCMC. This approximation differs from the diffusion approximation of Gelman, Gilks and Roberts (1997), where the dimension of the state space is sent to infinity. In our approach the time parameter is sped up in such a way that the limit (as the mesh size goes to 0) is a non-trivial continuous-time diffusion process.
💡 Research Summary
The paper addresses a fundamental theoretical gap in Adaptive Markov Chain Monte Carlo (AMCMC) methods: while traditional MCMC algorithms have well‑established convergence guarantees, the adaptive nature of AMCMC—where the proposal distribution is updated at every iteration—makes it non‑trivial to verify that the chain still targets the intended stationary distribution. The authors propose a diffusion‑approximation framework that differs from the classic “diffusion limit” of Gelman, Gilks, and Roberts (1997). Instead of letting the state‑space dimension grow to infinity, they keep the dimension fixed and accelerate the time scale so that, as the discretisation mesh size Δ → 0, the rescaled discrete‑time chain converges to a non‑degenerate continuous‑time diffusion process.
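The speeding-up idea can be sketched with a minimal, non-adaptive toy example (our own illustration, not the paper's construction): for a standard normal target, a random-walk Metropolis chain with proposal standard deviation √Δ, run for ⌊T/Δ⌋ steps, is well known to approximate the Langevin diffusion dX_t = ½∇log π(X_t) dt + dW_t as Δ → 0. The function name and parameters below are illustrative.

```python
import math
import random

def rwm_rescaled(delta, n_steps, seed=0):
    """Random-walk Metropolis targeting N(0, 1) with proposal sd sqrt(delta).

    One chain step corresponds to a time increment of `delta`, so n_steps
    steps cover the continuous-time horizon T = n_steps * delta.  As
    delta -> 0 the rescaled path approximates the Langevin (here
    Ornstein-Uhlenbeck) diffusion dX_t = -X_t/2 dt + dW_t.
    """
    rng = random.Random(seed)
    log_pi = lambda z: -0.5 * z * z   # log N(0, 1) density, up to a constant
    x, sd = 0.0, math.sqrt(delta)
    path = []
    for _ in range(n_steps):
        y = x + sd * rng.gauss(0.0, 1.0)            # symmetric proposal
        if rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x))):
            x = y                                    # Metropolis accept
        path.append(x)
    return path
```

With Δ small and a long horizon, the path should look stationary with the target's moments; e.g. the time-averaged variance along `rwm_rescaled(0.01, 200_000)` should be close to 1, the variance of N(0, 1).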
Model Setup and Scaling.
The discrete‑time AMCMC is represented as a bivariate Markov chain \((X_n, \theta_n)_{n \ge 0}\), where \(X_n\) denotes the state of interest and \(\theta_n\) encodes the adaptive parameters (e.g., the proposal scale). The authors consider a time‑rescaled process

\[
X^{\Delta}(t) = X_{\lfloor t/\Delta \rfloor}, \qquad t \ge 0,
\]

where \(\Delta > 0\) is the mesh size of the discretisation.
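As a concrete (hypothetical) instance of such a bivariate chain, the sketch below implements a simple adaptive random-walk Metropolis sampler in which \(\theta_n\) is a log proposal scale adapted Robbins–Monro style toward a target acceptance rate; the standard normal target, the step sizes, and the 0.44 acceptance goal are illustrative choices, not taken from the paper.

```python
import math
import random

def adaptive_rwm(n_iter, seed=0, target_accept=0.44):
    """One (X_n, theta_n) chain: X_n is the state, theta_n a log proposal scale.

    The scale is adapted with diminishing step sizes gamma_n = n^{-0.6},
    so the adaptation vanishes and the chain can still target N(0, 1).
    """
    rng = random.Random(seed)
    log_pi = lambda z: -0.5 * z * z   # target: N(0, 1), up to a constant
    x, log_theta = 0.0, 0.0           # (X_0, theta_0)
    for n in range(1, n_iter + 1):
        y = x + math.exp(log_theta) * rng.gauss(0.0, 1.0)
        accepted = rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x)))
        if accepted:
            x = y
        # Robbins-Monro update: push the acceptance rate toward target_accept
        log_theta += n ** -0.6 * ((1.0 if accepted else 0.0) - target_accept)
    return x, math.exp(log_theta)
```

Because the adaptation is diminishing, long runs behave approximately like draws from the target: averaging the final states of many independent chains recovers the moments of N(0, 1).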