On computation of a common mean


Combining several independent measurements of the same physical quantity is one of the most important tasks in metrology. Small samples, biased input estimates, reported uncertainties that are not always adequate, and unknown error distributions make a rigorous solution very difficult, if not impossible. For this reason, many methods to compute a common mean and its uncertainty have been proposed, each with its own advantages and shortcomings. Most of them are variants of the weighted average (WA) approach, differing in how the WA and its standard deviation are computed. The median estimate has also become increasingly popular in recent years. In this paper, these two methods, in their most widely used modifications, are compared using simulated and real data. To overcome some problems of known approaches to computing the WA uncertainty, a new combined estimate is proposed. It is shown that the proposed method can yield a more robust and realistic estimate suitable for both consistent and discrepant measurements.


💡 Research Summary

The paper addresses the long‑standing problem of combining several independent measurements of the same physical quantity into a single “common mean” (CM) together with a realistic estimate of its uncertainty. In metrology and many scientific fields, the available data are often limited to a set of measured values (x_i) and their reported standard deviations (s_i); correlations are unknown, sample sizes are small, and the error distributions may be non‑Gaussian. Under these constraints the classical weighted average (WA) remains the most widely used estimator, but the literature offers several competing formulas for the uncertainty of the WA.

The authors first review the standard WA formulation: weights (p_i = 1/s_i^2) lead to the estimate (\bar{x}_w = \sum p_i x_i / \sum p_i). Two traditional uncertainty estimates are considered. The first, (\sigma_1 = 1/\sqrt{\sum p_i}), depends only on the reported uncertainties and ignores the scatter of the data. The second, (\sigma_2 = \sigma_1 \sqrt{H/(n-1)}), incorporates the chi‑square statistic (H = \sum p_i (x_i-\bar{x}_w)^2) and therefore reflects the actual dispersion of the measurements. While (\sigma_1) is appropriate when the reported uncertainties are larger than the observed scatter, (\sigma_2) is preferable when the opposite holds.
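The weighted average and the two traditional uncertainty estimates above can be sketched in a few lines of Python. This is an illustrative implementation of the formulas quoted in the summary, not the authors' code; the function name and return convention are our own.

```python
import math

def weighted_average(x, s):
    """Inverse-variance weighted average with two uncertainty estimates.

    x, s : sequences of measured values x_i and reported standard
    deviations s_i. Returns (xw, sigma1, sigma2, H) following the
    formulas in the text (illustrative sketch only).
    """
    p = [1.0 / si**2 for si in s]                 # weights p_i = 1/s_i^2
    P = sum(p)
    xw = sum(pi * xi for pi, xi in zip(p, x)) / P # weighted average
    sigma1 = 1.0 / math.sqrt(P)                   # from reported uncertainties only
    H = sum(pi * (xi - xw)**2 for pi, xi in zip(p, x))  # chi-square statistic
    n = len(x)
    sigma2 = sigma1 * math.sqrt(H / (n - 1))      # reflects the observed scatter
    return xw, sigma1, sigma2, H
```

For a consistent data set (scatter comparable to the reported s_i), sigma2 will be close to sigma1; for discrepant data H grows and sigma2 exceeds sigma1, which is exactly the distinction the text draws.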

A third approach, denoted (\sigma_3), selects between (\sigma_1) and (\sigma_2) based on a chi‑square significance level (Q). If (H) is below the critical value (\chi^2(Q, n-1)) the method adopts (\sigma_1); otherwise it adopts (\sigma_2). The authors point out that (\sigma_3) inherits a subjective element (choice of (Q)) and can exhibit abrupt jumps for small changes in the data, making it unstable in practice.
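The selection rule for (\sigma_3) can be sketched as follows. This is our own illustrative reading of the rule described above: the critical value (\chi^2(Q, n-1)) is passed in by the caller (from tables or, e.g., `scipy.stats.chi2.ppf`), since the paper's exact convention for (Q) is not reproduced in this summary.

```python
import math

def combined_uncertainty_sigma3(x, s, chi2_crit):
    """Select between sigma_1 and sigma_2 via a chi-square test.

    chi2_crit : the critical value chi^2(Q, n-1) for the chosen
    significance level Q (supplied by the caller).
    Illustrative sketch of the sigma_3 rule described in the text.
    """
    n = len(x)
    p = [1.0 / si**2 for si in s]
    P = sum(p)
    xw = sum(pi * xi for pi, xi in zip(p, x)) / P
    sigma1 = 1.0 / math.sqrt(P)
    H = sum(pi * (xi - xw)**2 for pi, xi in zip(p, x))
    sigma2 = sigma1 * math.sqrt(H / (n - 1))
    # Data consistent with the reported uncertainties -> sigma_1;
    # otherwise the scatter-based sigma_2.
    return sigma1 if H < chi2_crit else sigma2
```

Note how a data set with H just below `chi2_crit` and one with H just above it get different estimators, which is the abrupt-jump instability the authors criticize.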

To overcome these drawbacks, the paper proposes a new combined uncertainty estimator; its formula is given in the original paper.

