Markovian stochastic approximation with expanding projections

Stochastic approximation is a framework unifying many random iterative algorithms occurring in a diverse range of applications. The stability of the process is often difficult to verify in practical applications and the process may even be unstable without additional stabilisation techniques. We study a stochastic approximation procedure with expanding projections similar to Andradóttir [Oper. Res. 43 (1995) 1037-1048]. We focus on Markovian noise and show stability and convergence under general conditions. Our framework also incorporates the possibility to use a random step size sequence, which allows us to consider settings with a non-smooth family of Markov kernels. We apply the theory to stochastic approximation expectation maximisation with particle independent Metropolis-Hastings sampling.


💡 Research Summary

This paper addresses the stability and convergence of stochastic approximation (SA) algorithms when the underlying noise is generated by a family of Markov kernels that may lose ergodicity near certain critical parameter values. Classical Robbins‑Monro SA updates of the form θ_{i+1}=θ_i+γ_{i+1}h(θ_i) become unstable when the required expectations cannot be computed exactly and are replaced by Monte‑Carlo estimates using a Markov chain X_{i+1}∼P_{θ_i}(X_i,·). Existing stabilization techniques—fixed projections onto a compact set or adaptive projections that enlarge the feasible region—either cause frequent “restarts” or introduce spurious attractors on the boundary.
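To make the setting concrete, here is a minimal sketch of a Robbins–Monro iteration driven by Markovian noise. The specific mean field H(θ, x) = x − θ and the AR(1) kernel are illustrative choices, not taken from the paper; the root of h(θ) = E_π[H(θ, X)] is the stationary mean of the chain.

```python
import numpy as np

# Hedged sketch: Robbins-Monro SA with Markovian noise (illustrative setup).
# Goal: solve h(theta) = E_pi[H(theta, X)] = 0 with H(theta, x) = x - theta,
# where X is an AR(1) Markov chain whose stationary mean is mu = 2.0.
rng = np.random.default_rng(0)

def step_markov(x, rho=0.5, mu=2.0):
    # AR(1) kernel P(x, .): mean reversion to mu, so E_pi[X] = mu.
    return mu + rho * (x - mu) + rng.normal(scale=0.1)

theta, x = 0.0, 0.0
for i in range(1, 20001):
    x = step_markov(x)            # X_{i+1} ~ P(X_i, .)
    gamma = 1.0 / i               # deterministic Robbins-Monro step sizes
    theta += gamma * (x - theta)  # noisy update: theta_{i+1} = theta_i + gamma_{i+1} H(theta_i, X_{i+1})

print(theta)  # converges near the stationary mean 2.0
```

Because the chain's samples are correlated, the usual i.i.d. martingale-difference arguments do not apply directly; this is exactly the Markovian-noise regime the paper's stability analysis targets.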

The authors propose a new “expanding projections” scheme. A sequence of nested projection sets {R_i}⊂Θ is defined, growing to cover the whole parameter space. At each iteration a tentative update θ*_{i+1}=θ_i+Γ_{i+1}H(θ_i,X_{i+1}) is computed, where H is the noisy observation of the mean field h and Γ_{i+1} is a random step size. If θ*_{i+1}∈R_{i+1} the update is accepted; otherwise a measurable projection variable θ_{proj,i+1} (e.g., the previous iterate or the orthogonal projection onto R_{i+1}) is used to force the new parameter back into the feasible set. The random step size allows the method to handle families of kernels {P_θ} that are not smooth in θ, a situation that arises in particle methods and other non‑differentiable Monte‑Carlo schemes.
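The accept-or-project logic above can be sketched as follows. The growing radii r_i = 1 + 0.1·i, the orthogonal projection onto a ball, and the two-dimensional test problem are all illustrative assumptions; they stand in for the paper's general sets R_i and projection variable θ_{proj,i+1}.

```python
import numpy as np

# Hedged sketch of the expanding-projections update (all names illustrative).
rng = np.random.default_rng(1)

def radius(i):
    # Nested projection sets R_i = {theta : |theta| <= r_i}, with r_i -> infinity,
    # so the sets eventually cover the whole parameter space Theta = R^2.
    return 1.0 + 0.1 * i

def project(theta, r):
    # Orthogonal projection onto the ball of radius r: one valid choice of
    # the measurable projection variable theta_proj (the previous iterate
    # would be another).
    norm = np.linalg.norm(theta)
    return theta if norm <= r else theta * (r / norm)

target = np.array([3.0, -1.0])  # root of the mean field in this toy example
theta = np.zeros(2)
x = np.zeros(2)
for i in range(1, 5001):
    x = 0.5 * x + rng.normal(size=2)   # Markov chain step, X_{i+1} ~ P(X_i, .)
    H = (target + x) - theta           # noisy observation of h(theta) = target - theta
    Gamma = 1.0 / i                    # step size (the paper allows it to be random)
    tentative = theta + Gamma * H      # tentative update theta*_{i+1}
    if np.linalg.norm(tentative) <= radius(i):
        theta = tentative              # accept: theta*_{i+1} in R_{i+1}
    else:
        theta = project(tentative, radius(i))  # otherwise force back into R_{i+1}
```

Early on, the small sets R_i keep the iterates bounded; once r_i exceeds the norm of the root, projections stop firing and the scheme behaves like unconstrained Robbins–Monro, which is why no spurious boundary attractors arise.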

Stability analysis hinges on constructing a Lyapunov function w:Θ→[0,∞) that decreases, on average, along the mean field h.

