Second Order Statistics Analysis and Comparison between Arithmetic and Geometric Average Fusion
Two fundamental approaches to information averaging are based on linear and logarithmic combination, yielding the arithmetic average (AA) and geometric average (GA) of the fusing initials, respectively. In the context of target tracking, the two most common formats of data to be fused are random variables and probability density functions, namely $v$-fusion and $f$-fusion, respectively. In this work, we analyze and compare the second order statistics (including variance and mean square error) of AA and GA in terms of both $v$-fusion and $f$-fusion. The case of weighted Gaussian mixtures representing multitarget densities in the presence of false alarms and misdetections (whose weights do not necessarily sum to one) is also considered; the result turns out to differ significantly from the single-target case. In addition to exact derivation, exemplifying analysis and illustrations are provided.
💡 Research Summary
The paper provides a rigorous comparative study of two fundamental information-averaging schemes—arithmetic averaging (AA) and geometric averaging (GA)—as they are applied to sensor-network target tracking. The authors distinguish between variable-based fusion (v-fusion), where the local estimates are random variables, and density-based fusion (f-fusion), where the local information is represented by probability density functions (PDFs) or probability hypothesis densities (PHDs). For v-fusion, AA is defined as a weighted linear combination of the estimates, while GA is the weighted product (or, equivalently, the exponential of the weighted sum of logarithms). The bias analysis shows that AA preserves unbiasedness when all inputs are unbiased, whereas GA generally does not: the logarithm is undefined for non-positive values, and by the AM–GM inequality the weighted geometric average never exceeds the weighted arithmetic average, so GA is biased low whenever the inputs differ.
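A minimal Monte-Carlo sketch of the two v-fusion rules and of the bias claim; the true value, variances, weight, and sample size below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not the paper's): two unbiased local
# estimates of the same true value, with positive support in practice.
truth = 50.0
n = 200_000
x1 = rng.normal(truth, 5.0, size=n)   # unbiased local estimate 1
x2 = rng.normal(truth, 7.0, size=n)   # unbiased local estimate 2
w = 0.5                               # fusion weight on the first estimate

# AA v-fusion: weighted linear combination of the estimates.
aa = w * x1 + (1 - w) * x2

# GA v-fusion: weighted product, i.e. the exponential of the weighted
# log-sum; only defined when both inputs are positive.
ga = np.exp(w * np.log(x1) + (1 - w) * np.log(x2))

# AA stays unbiased, while GA lands strictly below the truth (AM-GM).
print(aa.mean(), ga.mean())
```

Running this shows the AA sample mean sitting at the truth while the GA sample mean falls slightly below it, illustrating the systematic downward bias of GA for differing inputs.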
The variance analysis for AA yields a closed-form expression involving the individual variances and the pairwise covariances. By introducing the variance ratio α = Σ₂/Σ₁ and the correlation coefficient ρ, the authors derive tight upper and lower bounds: the AA variance never exceeds the largest individual variance and can be lower than the smallest variance when ρ < 1/√α, with optimal weights given by an inverse-variance rule (a generalisation of Millman's equation). When ρ is larger, the lower bound collapses to the minimum of the two variances. GA's variance requires the covariance of log-transformed variables, which lacks a simple analytic form; the authors therefore resort to Monte-Carlo simulations to illustrate the behaviour. Numerical experiments with approximately Gaussian variables (means 50 and 60, variances 100 and 200) and with Poisson variables (rates 12 and 10) confirm that AA can be either larger or smaller than GA depending on the chosen weights and the correlation structure.
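The closed-form AA variance and the correlation-corrected inverse-variance weight can be sketched as follows; the variances 100 and 200 echo the paper's Gaussian example, while the ρ values are assumed for illustration:

```python
import numpy as np

def aa_variance(w, s1, s2, rho):
    """Variance of w*x1 + (1-w)*x2 given Var(x1)=s1, Var(x2)=s2, Corr=rho."""
    return w**2 * s1 + (1 - w)**2 * s2 + 2 * w * (1 - w) * rho * np.sqrt(s1 * s2)

def optimal_weight(s1, s2, rho):
    """Weight minimising the AA variance: the inverse-variance rule with a
    correlation correction (the standard BLUE for two correlated estimates)."""
    c = rho * np.sqrt(s1 * s2)
    return (s2 - c) / (s1 + s2 - 2 * c)

s1, s2 = 100.0, 200.0          # variances from the paper's example
alpha = s2 / s1                # variance ratio, here 2

rho = 0.3                      # below 1/sqrt(alpha) ~ 0.707 ...
w_star = optimal_weight(s1, s2, rho)
v_star = aa_variance(w_star, s1, s2, rho)
assert v_star < min(s1, s2)    # ... fused variance beats the best input

rho = 0.9                      # above 1/sqrt(alpha) ...
v_best = min(aa_variance(w, s1, s2, rho) for w in np.linspace(0, 1, 1001))
# ... and the best weight in [0, 1] can do no better than min(s1, s2).
```

With ρ = 0.3 the optimal weight lands near 0.73 and the fused variance near 84.6, below the smaller input variance of 100; with ρ = 0.9 the constrained optimum sits at the boundary and collapses to 100, matching the bound behaviour described above.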
For the mean-square-error (MSE) analysis, the AA MSE is expressed as a quadratic form in the individual MSEs and a cross-term involving a correlation factor β. Because GA's MSE cannot be derived analytically for v-fusion (again due to the logarithm), the authors use Monte-Carlo simulations to compare the two. The results demonstrate that, as in the variance case, AA may outperform GA or vice versa, with the greatest discrepancy occurring at intermediate weight values.
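A Monte-Carlo MSE comparison in the spirit of the above can be sketched as follows; the truth value, noise levels, the bias on the second estimate, and the correlation are all assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

truth, n = 50.0, 100_000
z = rng.normal(size=(2, n))
# x1 is unbiased; x2 carries a small bias (+2) and is correlated with x1
# (correlation 0.4), so its MSE differs from its variance.
x1 = truth + 5.0 * z[0]
x2 = truth + 2.0 + 7.0 * (0.4 * z[0] + np.sqrt(0.84) * z[1])

results = {}
for w in (0.25, 0.5, 0.75):
    aa = w * x1 + (1 - w) * x2
    ga = np.exp(w * np.log(x1) + (1 - w) * np.log(x2))
    results[w] = (np.mean((aa - truth) ** 2), np.mean((ga - truth) ** 2))

# Neither rule dominates: which average wins can flip with the weight w.
```

For AA the sample MSE can be checked against the closed form (variance plus squared bias of the combination); for GA only the simulated value is available, mirroring the paper's resort to Monte-Carlo runs.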
In the f‑fusion setting, AA corresponds to a weighted linear mixture of PDFs, while GA corresponds to a normalized weighted product of PDFs (the log‑linear pool). The support of the AA result is the union of the individual supports, whereas the GA result is confined to their intersection, which may be empty if the PDFs do not overlap. The paper discusses the special case of weighted Gaussian mixtures representing multitarget densities where the mixture weights need not sum to one (due to false alarms and missed detections). In this scenario, GA behaves differently from the classic Covariance Intersection (CI) method, leading to distinct performance characteristics.
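A sketch of the two f-fusion pools for two scalar Gaussians (parameters assumed for illustration). For a single Gaussian component the log-linear pool happens to reproduce the CI functional form; the paper's point is that this picture changes for unnormalised multitarget mixtures:

```python
import numpy as np

def gauss(x, m, v):
    """Scalar Gaussian density with mean m and variance v."""
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

# Two local Gaussian posteriors (illustrative parameters, not the paper's).
m1, v1 = 0.0, 1.0
m2, v2 = 4.0, 2.0
w = 0.5
x = np.linspace(-6.0, 12.0, 3001)

# AA f-fusion (linear pool): a two-component mixture, generally non-Gaussian,
# supported on the union of the individual supports.
f_aa = w * gauss(x, m1, v1) + (1 - w) * gauss(x, m2, v2)

# GA f-fusion (log-linear pool): the normalised weighted product, supported
# only on the intersection of the supports. For Gaussians it is again
# Gaussian, with CI-style fused precision and mean.
prec = w / v1 + (1 - w) / v2
v_ga = 1.0 / prec
m_ga = v_ga * (w * m1 / v1 + (1 - w) * m2 / v2)
f_ga = gauss(x, m_ga, v_ga)
```

With these numbers the GA result is a single Gaussian with mean and variance both 4/3, sitting between the two inputs, while the AA result keeps both modes; both densities integrate to one over the grid.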
Throughout the manuscript, the authors provide “Remarks” that summarise key findings: (1) AA’s variance is bounded above by the largest input variance and can be lower than the smallest input variance under certain correlation conditions; (2) the variance of GA can be either larger or smaller than that of AA depending on the weight choice; (3) the MSE of AA can similarly be better or worse than GA’s. The paper concludes that neither averaging rule is universally superior; the optimal choice depends on the statistical dependence among inputs, the weighting strategy, and whether the data are represented as variables or densities. Future work is suggested on extending the analysis to multidimensional spaces, non‑Gaussian models, and real‑time distributed implementations.