Static Parameter Estimation for ABC Approximations of Hidden Markov Models


In this article we focus on maximum likelihood estimation (MLE) for the static parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of computational complexity or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood, and hence use approximate Bayesian computation (ABC) approximations of the original HMM. ABC approximations are biased, but the bias can be controlled to arbitrary precision via a parameter \epsilon>0; the bias typically goes to zero as \epsilon \searrow 0. We first establish that the bias in the log-likelihood and gradient of the log-likelihood of the ABC approximation, for a fixed batch of data, is no worse than \mathcal{O}(n\epsilon), where n is the number of data points; hence, for computational reasons, one might expect reasonable parameter estimates using such an ABC approximation. Turning to the computational problem of estimating $\theta$, we propose, using the ABC-sequential Monte Carlo (SMC) algorithm in Jasra et al. (2012), an approach based upon simultaneous perturbation stochastic approximation (SPSA). Our method is investigated on two numerical examples.


💡 Research Summary

This paper tackles the problem of estimating static parameters θ in hidden Markov models (HMMs) when the conditional observation density p(yₖ | xₖ, θ) is either analytically intractable or computationally prohibitive to evaluate. Instead of attempting to compute the exact likelihood, the authors adopt an Approximate Bayesian Computation (ABC) framework, which replaces the true conditional density with a simulated‐based approximation pε(yₖ | xₖ, θ). The approximation accepts simulated observations that lie within a tolerance ε of the actual data, thereby introducing a controllable bias that vanishes as ε → 0.
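The acceptance mechanism above can be made concrete with a minimal Monte Carlo sketch. The sampler `simulate` and the uniform (indicator) acceptance kernel below are illustrative assumptions; the paper's construction applies to general HMM observation models, and here a scalar Gaussian toy model stands in so the estimate can be sanity-checked against the exact density:

```python
import numpy as np

def abc_likelihood(y_obs, x, theta, simulate, eps, n_sim=1000, rng=None):
    """Monte Carlo estimate of the ABC conditional likelihood
    p_eps(y | x, theta): the proportion of draws from the exact
    conditional law landing within eps of the observation, scaled by
    the length of the 1-d acceptance interval.

    `simulate(x, theta, n, rng)` is a user-supplied sampler for
    p(. | x, theta); sampling is the only model access ABC requires."""
    rng = np.random.default_rng() if rng is None else rng
    u = simulate(x, theta, n_sim, rng)      # draws from p(. | x, theta)
    accepted = np.abs(u - y_obs) < eps      # uniform kernel of width 2*eps
    return accepted.mean() / (2.0 * eps)    # density-scale estimate

# Toy model: y | x ~ N(x, theta^2); the density is known here, so the
# estimator can be checked against the exact value phi(0.3) ~ 0.3814.
sim = lambda x, th, n, rng: rng.normal(x, th, size=n)
est = abc_likelihood(y_obs=0.3, x=0.0, theta=1.0, simulate=sim,
                     eps=0.05, n_sim=200_000, rng=np.random.default_rng(1))
```

As eps shrinks, the smoothing bias vanishes but the Monte Carlo variance of the acceptance-rate estimate grows, which is the bias-cost trade-off governed by ε in the paper.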

The first theoretical contribution establishes that, for a fixed data batch of length n, the bias in both the log‑likelihood ℓε(θ)=∑ₖlog pε(yₖ | θ) and its gradient ∇ℓε(θ) is bounded by O(n ε). This result shows that the error introduced by the ABC approximation scales linearly with the tolerance and the number of observations, providing a clear guideline for selecting ε: a sufficiently small ε yields a negligible bias while still offering computational savings.
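Schematically, the bound follows by decomposing the total bias into per-observation terms; the constants below are illustrative, not the paper's sharp ones:

```latex
\bigl| \ell(\theta) - \ell^{\epsilon}(\theta) \bigr|
  = \Bigl| \sum_{k=1}^{n} \bigl[ \log p_{\theta}(y_k \mid y_{1:k-1})
      - \log p^{\epsilon}_{\theta}(y_k \mid y_{1:k-1}) \bigr] \Bigr|
  \le \sum_{k=1}^{n} C\,\epsilon \;=\; C\, n\, \epsilon,
```

with an analogous term-by-term argument for the gradient \nabla\ell^{\epsilon}(\theta).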

To exploit this approximation for maximum‑likelihood estimation, the authors combine the ABC‑Sequential Monte Carlo (SMC) algorithm of Jasra et al. (2012) with Simultaneous Perturbation Stochastic Approximation (SPSA). The ABC‑SMC routine supplies estimates of the ABC likelihood (itself within O(n ε) of the true likelihood) for any candidate θ, while SPSA furnishes a stochastic gradient estimate from only two likelihood evaluations per iteration, regardless of the dimension of θ. The resulting algorithm proceeds as follows:

  1. Initialise θ₀, and choose diminishing step‑size sequences aₖ and perturbation magnitudes cₖ (as in standard SPSA).
  2. At iteration k, draw a random perturbation vector Δₖ with independent symmetric Bernoulli entries.
  3. Form two perturbed parameter vectors θₖ⁺ = θₖ + cₖΔₖ and θₖ⁻ = θₖ − cₖΔₖ.
  4. Run ABC‑SMC separately for θₖ⁺ and θₖ⁻ to obtain approximate log‑likelihoods ℓε(θₖ⁺) and ℓε(θₖ⁻).
  5. Compute the SPSA gradient estimate gₖ, whose i‑th component is gₖᵢ = (ℓε(θₖ⁺) − ℓε(θₖ⁻)) / (2 cₖ Δₖᵢ).
  6. Update the parameter by an ascent step, θₖ₊₁ = θₖ + aₖ gₖ, and repeat until the iterates stabilise.
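The iteration above can be sketched generically. The routine below is a minimal SPSA ascent loop, with a cheap noisy quadratic standing in for the ABC‑SMC log‑likelihood estimate (the step‑size exponents follow the standard Spall schedules; all function names and tuning constants here are illustrative, not the paper's):

```python
import numpy as np

def spsa_maximise(loglik, theta0, n_iter=200, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, rng=None):
    """SPSA ascent on a (possibly noisy) log-likelihood estimate.

    `loglik(theta)` may be a noisy estimate, e.g. from ABC-SMC; SPSA
    needs only two evaluations per iteration, whatever the dimension
    of theta. Gains a_k = a/(k+1)^alpha, c_k = c/(k+1)^gamma."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(n_iter):
        a_k = a / (k + 1) ** alpha
        c_k = c / (k + 1) ** gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        l_plus = loglik(theta + c_k * delta)
        l_minus = loglik(theta - c_k * delta)
        g = (l_plus - l_minus) / (2.0 * c_k * delta)       # simultaneous gradient estimate
        theta = theta + a_k * g                            # ascent step (maximisation)
    return theta

# Toy check: maximise a noisy quadratic peaked at (1, -2).
rng = np.random.default_rng(0)
noisy = lambda th: -np.sum((th - np.array([1.0, -2.0])) ** 2) + 0.01 * rng.normal()
theta_hat = spsa_maximise(noisy, theta0=np.zeros(2), n_iter=500, rng=rng)
```

In the paper's setting, each call to `loglik` would run ABC‑SMC at the perturbed parameter value, so the noise in the two evaluations is exactly the Monte Carlo noise SPSA is designed to tolerate.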
