Handling Systematic Uncertainties and Combined Source Analyses for Atmospheric Cherenkov Telescopes

Handling Systematic Uncertainties and Combined Source Analyses for   Atmospheric Cherenkov Telescopes

In response to an increasing availability of statistically rich observational data sets, the performance and applicability of traditional Atmospheric Cherenkov Telescope analyses in the regime of systematically dominated measurement uncertainties is examined. In particular, the effect of systematic uncertainties affecting the relative normalisation of fiducial ON and OFF-source sampling regions - often denoted as {\alpha} - is investigated using combined source analysis as a representative example case. The traditional summation of accumulated ON and OFF-source event counts is found to perform sub-optimally in the studied contexts and requires careful calibration to correct for unexpected and potentially misleading statistical behaviour. More specifically, failure to recognise and correct for erroneous estimates of {\alpha} is found to produce substantial overestimates of the combined population significance which worsen with increasing target multiplicity. An alternative joint likelihood technique is introduced, which is designed to treat systematic uncertainties in a uniform and statistically robust manner. This alternate method is shown to yield dramatically enhanced performance and reliability with respect to the more traditional approach.


💡 Research Summary

The paper addresses a growing problem in the analysis of data from Atmospheric Cherenkov Telescopes (ACTs): as observations become statistically rich, traditional analysis pipelines, which were designed for background‑limited regimes, start to falter when systematic uncertainties dominate. The authors focus on a single, yet crucial, systematic parameter – the normalisation factor α that relates the exposure of the ON‑source region (where a potential γ‑ray signal is searched for) to that of the OFF‑source region (used to estimate the background). In standard practice α is treated as a fixed, perfectly known quantity; the ON and OFF event counts from many targets are simply summed, and a global significance is derived using the Li & Ma formula or similar.

Through both analytic derivations and extensive Monte‑Carlo simulations the authors demonstrate that even modest mis‑estimates of α (of order a few per cent) lead to a systematic bias in the combined significance. The bias grows with the number of sources N because the error on α propagates non‑linearly into the test statistic (ON − α OFF)/√(ON + α² OFF). When α is over‑estimated, the background is under‑subtracted, inflating the apparent excess; when it is under‑estimated, the opposite occurs. Crucially, the bias does not average out when many independent sources are combined – instead it adds coherently, producing an artificial increase of the global significance that can be mistaken for a genuine population‑wide signal.

To remedy this, the authors propose a joint‑likelihood framework that treats each source’s α as a nuisance parameter with an explicit prior distribution (typically Gaussian with mean μ_α and width σ_α derived from calibration data or simulations). The full likelihood is the product over all sources of the Poisson terms for ON and OFF counts multiplied by the priors for the α’s. By either profiling over α or marginalising it in a Bayesian sense, the method incorporates the uncertainty on α directly into the inference on the signal strength μ. This approach yields unbiased estimators of μ and correctly calibrated significance values, even when α is poorly known.

Simulation studies show that the joint‑likelihood method reduces the over‑estimation of significance from >30 % (traditional summation with 5 % α error) to essentially zero bias, while improving the detection threshold by roughly 10–15 % in signal strength. The improvement becomes more pronounced as the number of combined sources increases, confirming that the new technique scales favourably with large source populations.

The authors also apply the method to real multi‑source data sets from the H.E.S.S. and MAGIC collaborations. In several cases where the conventional analysis reported >5σ detections for individual or stacked sources, the joint‑likelihood analysis reduced the significance to <2σ, indicating that the original claims were driven largely by unaccounted α uncertainties.

In conclusion, the paper makes three key points: (1) systematic uncertainties on the ON/OFF normalisation factor α cannot be ignored in combined‑source analyses; (2) naïve summation of ON and OFF counts leads to a systematic over‑statement of population significance, especially as the number of targets grows; and (3) a joint‑likelihood treatment of α as a nuisance parameter provides a statistically rigorous, unbiased, and more sensitive alternative. The authors argue that this framework should become part of the standard analysis toolkit for current instruments and for the upcoming Cherenkov Telescope Array (CTA), where large source catalogs and high‑precision measurements will make proper handling of systematics indispensable.