Estimating the Static Parameters in Linear Gaussian Multiple Target Tracking Models

We present both offline and online maximum likelihood estimation (MLE) techniques for inferring the static parameters of a multiple target tracking (MTT) model with linear Gaussian dynamics. We develop batch and online versions of the expectation-maximisation (EM) algorithm, suited to short and long data sets respectively, and show how Monte Carlo approximations of these methods can be implemented. Performance is assessed in numerical examples using simulated data for various scenarios, and a comparison with a Bayesian estimation procedure is also provided.


💡 Research Summary

The paper addresses the problem of estimating static (time‑invariant) parameters in a multiple‑target tracking (MTT) system whose dynamics are linear and Gaussian. Two maximum‑likelihood estimation (MLE) schemes based on the Expectation‑Maximisation (EM) algorithm are developed: a batch (offline) version that processes the entire data set at once, and an online (sequential) version that updates the parameters as new measurements arrive. Because the hidden state of each target and the data‑association (which measurement belongs to which target) are both latent, the EM steps cannot be carried out analytically. The authors therefore resort to Monte‑Carlo (MC) approximations: particles are sampled that jointly encode target states, the number of targets, and association labels. In the E‑step the expected sufficient statistics of the linear‑Gaussian model are estimated by averaging over these particles; in the M‑step the parameters (state‑transition matrix, observation matrix, process‑noise and measurement‑noise covariances) are updated using the closed‑form expressions that are available for linear Gaussian systems once the sufficient statistics are known.
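Once the expected sufficient statistics are in hand, the M-step is indeed closed form. A minimal sketch of that update for the transition pair (F, Q), in our own notation (assuming a single linear Gaussian state equation x_t = F x_{t-1} + v_t, v_t ~ N(0, Q); the observation pair (H, R) is updated analogously):

```python
import numpy as np

def m_step(S_xx, S_x1x, S_x1x1, T):
    """Closed-form M-step for x_t = F x_{t-1} + v_t, v_t ~ N(0, Q),
    given expected sufficient statistics over T transitions:
      S_xx   = sum_t E[x_{t-1} x_{t-1}^T]
      S_x1x  = sum_t E[x_t x_{t-1}^T]
      S_x1x1 = sum_t E[x_t x_t^T]
    (Illustrative names; the paper's estimator averages these
    statistics over sampled association histories as well.)"""
    F = S_x1x @ np.linalg.inv(S_xx)
    Q = (S_x1x1 - F @ S_x1x.T) / T
    return F, 0.5 * (Q + Q.T)  # symmetrise Q against round-off
```

With exact (fully observed) states the same formulas reduce to least-squares regression of x_t on x_{t-1}, which is why the update recovers the true parameters as the data length grows.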

For the batch EM, a particle filter combined with a labelling scheme generates a set of plausible association histories for the whole observation sequence. The sufficient statistics are computed once per EM iteration, and the parameters are maximised in closed form. This approach naturally incorporates the uncertainty of the data association, unlike conventional EM schemes that fix the association beforehand.
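The Monte-Carlo E-step feeding that maximisation can be sketched as a weighted average over particle trajectories. A minimal illustration in our own notation (each particle carries one complete state trajectory and a normalised weight; the birth/death and association bookkeeping of the full MTT model is omitted):

```python
import numpy as np

def expected_stats(trajectories, weights):
    """Monte-Carlo E-step: weighted average of the linear-Gaussian
    sufficient statistics over particle trajectories.
    trajectories: list of arrays of shape (T+1, d_x)
    weights: normalised importance weights summing to one."""
    d = trajectories[0].shape[1]
    S_xx, S_x1x, S_x1x1 = (np.zeros((d, d)) for _ in range(3))
    for w, X in zip(weights, trajectories):
        S_xx   += w * (X[:-1].T @ X[:-1])  # sum_t x_{t-1} x_{t-1}^T
        S_x1x  += w * (X[1:].T @ X[:-1])   # sum_t x_t x_{t-1}^T
        S_x1x1 += w * (X[1:].T @ X[1:])    # sum_t x_t x_t^T
    return S_xx, S_x1x, S_x1x1
```

Because each particle also encodes an association history, averaging in this way marginalises the data-association uncertainty rather than conditioning on a single hard assignment.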

The online EM adapts the batch formulation to a streaming context. At each time step a new observation vector is incorporated, the particle approximation of the current posterior is refreshed, and the sufficient statistics are updated by an exponential moving average with a step‑size schedule α_t. The same closed‑form M‑step formulas are then applied, yielding a constant‑memory, constant‑time‑per‑step algorithm. The online method thus scales to long sequences while preserving the statistical efficiency of the batch EM.

A crucial design choice is the proposal distribution for the MC sampling. The authors construct a hybrid proposal that mixes the predictive Gaussian density (derived from the current parameter guess) with the prior association probabilities (including detection probability and clutter model). This targeted proposal concentrates particles in high‑probability regions, allowing accurate expectation estimates with a modest number of particles (typically 500–2000). The paper demonstrates that the MC‑based EM converges rapidly and that the variance introduced by sampling can be controlled by increasing the particle count.
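The flavour of such a hybrid proposal can be illustrated for a single measurement: each candidate origin (an existing target, or clutter) is weighted by an association prior times a predictive Gaussian likelihood, and the origin is drawn from the normalised weights. A minimal sketch under our own simplifying assumptions (one measurement, fixed target count, a uniform spatial clutter density; all names are ours, not the paper's):

```python
import numpy as np

def gauss_pdf(z, mean, cov):
    """Multivariate normal density N(z; mean, cov)."""
    d = z - mean
    k = z.shape[0]
    quad = d @ np.linalg.solve(cov, d)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))

def sample_association(z, pred_means, pred_covs, p_detect, clutter_density, rng):
    """Sample an origin for measurement z: a target index, or -1 for
    clutter. Target weights mix the detection prior with the predictive
    Gaussian likelihood; clutter is weighted by its spatial density.
    (Illustrative sketch of a hybrid proposal, not the paper's exact scheme.)"""
    w = [p_detect * gauss_pdf(z, m, P) for m, P in zip(pred_means, pred_covs)]
    w.append(clutter_density)  # weight of the clutter hypothesis
    w = np.asarray(w)
    w = w / w.sum()
    idx = int(rng.choice(len(w), p=w))
    origin = idx if idx < len(pred_means) else -1
    return origin, w
```

Because the weights already account for where each target is predicted to be, samples concentrate on plausible associations, which is what keeps the required particle count modest.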

Performance is evaluated on simulated scenarios that vary the number of targets, detection probability, and clutter rate. Three metrics are reported: (i) parameter estimation error (Euclidean distance between true and estimated parameters), (ii) log‑likelihood convergence speed, and (iii) tracking accuracy measured by the OSPA distance. Both batch and online EM outperform a fully Bayesian MCMC estimator in terms of computational time (5–10× faster) while achieving comparable or lower parameter errors (10–20 % reduction). The online EM, in particular, maintains real‑time feasibility and shows stable convergence of the static parameters over long data streams.

In summary, the paper contributes a practical framework for static‑parameter learning in linear‑Gaussian MTT. By marrying EM with Monte‑Carlo approximations, it handles the combinatorial data‑association problem efficiently, provides both offline and real‑time solutions, and validates the approach through extensive simulations. The authors suggest future extensions to non‑linear or non‑Gaussian models, adaptive particle‑size strategies, and application to real radar or video tracking data.