Dynamic Gauss Newton Metropolis Algorithm
GNM: The MCMC Jagger. A rocking awesome sampler.

This Python package is an affine-invariant Markov chain Monte Carlo (MCMC) sampler based on the dynamic Gauss-Newton-Metropolis (GNM) algorithm. The GNM algorithm specializes in sampling highly non-linear posterior probability distributions of the form $e^{-||f(x)||^2/2}$, and the package is an implementation of this algorithm. On top of the back-off strategy of the original GNM algorithm, the package adds dynamic hyper-parameter optimization, which improves the performance of the back-off and therefore of the sampling. It also includes a Jacobian tester, an error-bar creator, and several other convenience features.

The introduction states the problem and gives an installation guide. Usage of the Python package is then explained, followed by the algorithm itself, and finally examples using exponential time series that demonstrate the performance of the algorithm and the back-off strategy.
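The Jacobian tester mentioned above checks a user-supplied Jacobian against finite differences. The sketch below illustrates the idea only; the function names (`f`, `log_posterior`, `numerical_jacobian`) and the exponential residual are illustrative assumptions, not the package's actual API.

```python
import numpy as np

def f(x):
    # Illustrative nonlinear residual: a small exponential time series
    t = np.linspace(0.0, 1.0, 5)
    return np.exp(-x[0] * t) - x[1]

def log_posterior(x):
    # Log of the target density e^{-||f(x)||^2 / 2} (up to a constant)
    r = f(x)
    return -0.5 * np.dot(r, r)

def numerical_jacobian(f, x, eps=1e-6):
    """Central finite differences, one column per parameter.
    Useful for checking an analytic Jacobian before sampling."""
    x = np.asarray(x, dtype=float)
    cols = []
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        cols.append((f(x + dx) - f(x - dx)) / (2.0 * eps))
    return np.column_stack(cols)
```

Comparing `numerical_jacobian(f, x)` against an analytic Jacobian (entry by entry, to a small tolerance) is the essence of such a tester.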
💡 Research Summary
The paper introduces the Dynamic Gauss-Newton-Metropolis (GNM) algorithm, a novel affine-invariant Markov chain Monte Carlo (MCMC) sampler specifically designed for posterior distributions that can be expressed as $p(x)\propto\exp\!\big(-||f(x)||^{2}/2\big)$. Such posteriors arise frequently in nonlinear least-squares problems, hierarchical Bayesian models, and many scientific applications where the likelihood is a Gaussian error model applied to a nonlinear forward map $f$. The authors argue that conventional samplers (Metropolis-Hastings, Adaptive Metropolis, Hamiltonian Monte Carlo (HMC), and even the original Gauss-Newton-Metropolis method) struggle with these targets because they either lack affine invariance or require delicate tuning of proposal scales and mass matrices.
The core of the algorithm is a local quadratic approximation of the log-posterior obtained via the Gauss-Newton method. At the current state $x_t$, the residual vector $f(x_t)$ and its Jacobian $J(x_t)$ are computed. The Gauss-Newton approximation yields a surrogate Hessian $H_t \approx J(x_t)^{T} J(x_t)$ and a Newton-type step $\Delta_t = -H_t^{-1} J(x_t)^{T} f(x_t)$. Using these quantities, a proposal distribution is defined as a multivariate normal

$$ y \sim \mathcal{N}\!\left(x_t + \Delta_t,\; H_t^{-1}\right). $$
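The quantities above can be sketched in a few lines of NumPy. This is a minimal illustration of the Gauss-Newton proposal mechanism, not the package's implementation: the function and variable names (`gn_proposal`, `f_toy`, `jac_toy`) are assumptions, and the full algorithm additionally applies the back-off strategy to scale rejected steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def gn_proposal(x, f, jac, rng=rng):
    """Draw y ~ N(x + Delta_t, H_t^{-1}) with
    H_t = J^T J and Delta_t = -H_t^{-1} J^T f(x).
    Sketch only: assumes J has full column rank so H_t is invertible."""
    r = f(x)
    J = jac(x)
    H = J.T @ J                            # surrogate Hessian
    delta = -np.linalg.solve(H, J.T @ r)   # Newton-type step
    cov = np.linalg.inv(H)                 # proposal covariance
    return rng.multivariate_normal(x + delta, cov)

# Toy residual with an analytic Jacobian (illustrative only)
def f_toy(x):
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1]])

def jac_toy(x):
    return np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])

y = gn_proposal(np.array([1.0, 1.0]), f_toy, jac_toy)
```

The proposal is then accepted or rejected with a Metropolis-Hastings correction so that the chain targets the exact posterior despite the approximate (Gaussian) proposal.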