Estimating limits from Poisson counting data using Dempster--Shafer analysis
We present a Dempster–Shafer (DS) approach to estimating limits from Poisson counting data with nuisance parameters. Dempster–Shafer is a statistical framework that generalizes Bayesian statistics. DS calculus augments traditional probability by allowing mass to be distributed over the power set of the event space, i.e., over subsets rather than only individual outcomes. This eliminates the Bayesian dependence on prior distributions while allowing the incorporation of prior information when it is available. We use the Poisson Dempster–Shafer model (DSM) to derive a posterior DSM for the "Banff upper limits challenge" three-Poisson model. The results compare favorably with other approaches, demonstrating the utility of the approach. We argue that the reduced dependence on priors afforded by the Dempster–Shafer framework is both practically and theoretically desirable.
💡 Research Summary
The paper introduces a novel application of Dempster‑Shafer (DS) theory to the problem of setting upper limits on signal rates when the data consist of Poisson counts with nuisance parameters such as background and efficiency. Traditional approaches fall into two camps: Bayesian methods, which require explicit prior distributions for all unknowns, and frequentist methods, which often rely on Monte‑Carlo intensive procedures (e.g., the CLs technique) and can struggle to incorporate systematic uncertainties in a transparent way. The authors argue that DS offers a middle ground: it generalizes probability by allowing belief mass to be assigned not only to singletons but to subsets of the parameter space, thereby reducing dependence on arbitrary priors while still permitting the inclusion of genuine prior information when available.
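The core DS idea described above — belief mass assigned to subsets, not only singletons — can be made concrete with a small sketch. The frame of discernment, its focal sets, and the mass values below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical illustration: a mass function on a three-element frame,
# with mass assigned to subsets (not just singletons), as DS theory allows.
frame = frozenset({"low", "mid", "high"})
mass = {
    frozenset({"low"}): 0.3,
    frozenset({"low", "mid"}): 0.4,  # partial knowledge: "not high"
    frame: 0.3,                      # total-ignorance component
}

def belief(A, mass):
    """Bel(A): total mass committed to subsets of A (lower bound)."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A, mass):
    """Pl(A): total mass not contradicting A, i.e. mass of focal sets
    that intersect A (upper bound)."""
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"low"})
print(belief(A, mass), plausibility(A, mass))  # 0.3 1.0
```

The gap between belief (0.3) and plausibility (1.0) is exactly the interval-valued uncertainty the summary refers to; a Bayesian prior would be forced to collapse it to a single number.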
The core of the methodology is the Poisson Dempster‑Shafer Model (DSM). For a Poisson‑distributed count n with mean λ, a belief function is constructed by allocating a basic belief mass to the whole non‑negative real line and, optionally, to sub‑intervals that reflect any prior knowledge. This yields lower and upper bounds (belief and plausibility) for any statement about λ. Crucially, when no prior information is supplied, the belief mass is spread uniformly over the entire space, producing a non‑informative yet mathematically coherent representation of uncertainty.
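One concrete way to realize a Poisson DSM is via random focal intervals: conditional on an observed count n, λ is bracketed between the n-th and (n+1)-th arrival times of a unit-rate Poisson process, and belief/plausibility are Monte Carlo averages over those intervals. The interval construction below is an assumption of this sketch (it follows the standard Poisson DSM construction in the DS literature), not code from the paper:

```python
import random

def poisson_dsm_intervals(n, n_draws=100_000, rng=None):
    """Draw random focal intervals for lambda given an observed count n.
    Sketch assumption: lambda is bracketed by the n-th and (n+1)-th
    arrival times of a unit-rate Poisson process."""
    rng = rng or random.Random(0)
    intervals = []
    for _ in range(n_draws):
        t = sum(rng.expovariate(1.0) for _ in range(n))  # Gamma(n, 1) arrival time
        intervals.append((t, t + rng.expovariate(1.0)))  # next inter-arrival gap
    return intervals

def bel_pl_lambda_le(t, intervals):
    """Belief and plausibility of the assertion {lambda <= t}:
    fraction of focal intervals contained in, resp. intersecting, [0, t]."""
    bel = sum(hi <= t for lo, hi in intervals) / len(intervals)
    pl = sum(lo <= t for lo, hi in intervals) / len(intervals)
    return bel, pl

# Example: zero observed counts; Bel(lambda <= 3) is then about 1 - e^{-3}.
bel, pl = bel_pl_lambda_le(3.0, poisson_dsm_intervals(0))
```

For n = 0 every focal interval starts at 0, so the plausibility of {λ ≤ t} is 1 for any positive t, while the belief grows like 1 − e^(−t) — the interval-valued counterpart of the familiar zero-count Poisson bound.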
To demonstrate the approach, the authors tackle the “Banff upper limits challenge,” which specifies a three‑Poisson model: an observed count n, a background count b with its own measurement uncertainty, and a signal efficiency ε. Each component is modeled as an independent Poisson DSM. Dempster’s rule of combination is then applied to fuse the three belief structures into a joint belief function for the signal strength μ. During combination, any conflict (i.e., contradictory evidence) is quantified; the conflict mass is subsequently redistributed, providing a built‑in diagnostic of model‑data incompatibility that has no analogue in standard Bayesian updating.
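Dempster's rule, including the conflict mass the summary mentions, is simple to state for discrete mass functions: intersect focal sets pairwise, multiply masses, set aside the mass landing on the empty set as conflict K, and renormalize by 1 − K. A minimal sketch (the example frame and masses are hypothetical):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys
    are frozensets. Returns (combined mass function, conflict mass K)."""
    combined = {}
    conflict = 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    norm = 1.0 - conflict
    return {A: m / norm for A, m in combined.items()}, conflict

# Two partially conflicting sources on the frame {a, b}:
frame = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.6, frame: 0.4}
m2 = {frozenset({"b"}): 0.5, frame: 0.5}
m, K = dempster_combine(m1, m2)  # conflict K = 0.6 * 0.5 = 0.3
```

The returned K is the built-in incompatibility diagnostic: the larger the mass that two belief structures place on disjoint focal sets, the larger the conflict before renormalization.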
From the resulting joint belief function, the authors define the 95% upper limit on μ as the smallest value μ_up for which the belief that μ ≤ μ_up reaches 0.95 (equivalently, the plausibility that μ exceeds μ_up falls below 0.05). This definition mirrors the conventional 95% confidence level but does not rely on a specific prior for μ. The authors compare DSM‑derived limits with those obtained using Bayesian priors (flat, Jeffreys, and informative priors) and with the frequentist CLs method. The DSM limits are virtually indistinguishable from Bayesian limits when informative priors are used, yet they remain stable when priors are vague or absent, avoiding the over‑optimistic limits sometimes produced by poorly chosen priors. Compared with CLs, the DSM approach achieves comparable coverage while requiring far fewer simulated pseudo‑experiments, because the combination rule is analytic and polynomial‑time.
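One way to read this limit definition operationally: scan a grid of candidate values μ_up and report the smallest one at which the belief that μ ≤ μ_up reaches the target level. The toy belief curve below is an assumption of the sketch (it corresponds to a zero-count Poisson DSM, where Bel(μ ≤ t) = 1 − e^(−t)), chosen because it has a well-known closed-form answer:

```python
import math

def upper_limit(bel_le, level=0.95, mu_max=50.0, step=1e-4):
    """Smallest mu_up on a grid with Bel(mu <= mu_up) >= level."""
    mu = 0.0
    while mu <= mu_max:
        if bel_le(mu) >= level:
            return mu
        mu += step
    raise ValueError("no limit found on the grid")

# Toy belief curve for zero observed counts (sketch assumption):
# Bel(mu <= t) = 1 - exp(-t), so the 95% limit should be -ln(0.05).
limit = upper_limit(lambda t: 1.0 - math.exp(-t))
print(round(limit, 2))  # 3.0
```

Recovering ≈3.0 for zero counts reproduces the familiar "rule of three" for Poisson upper limits, which is a useful sanity check on any limit-setting construction.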
The paper also discusses practical considerations. Constructing the initial belief masses may involve expert judgment, especially when partial prior information is available. In high‑dimensional problems, the combinatorial growth of the power set can make exact Dempster combination computationally demanding; the authors suggest approximate algorithms (e.g., Monte‑Carlo sampling of focal elements) as possible remedies. Nonetheless, for the typical low‑dimensional counting problems encountered in particle physics and astrophysics, the DSM is computationally efficient and conceptually straightforward.
In conclusion, the authors present a compelling case that Dempster‑Shafer analysis provides a robust, prior‑light framework for limit setting with Poisson data. By explicitly handling nuisance parameters, quantifying conflict, and delivering analytically tractable belief‑plausibility intervals, the DSM bridges the gap between Bayesian flexibility and frequentist objectivity. The method is especially attractive for experiments where prior knowledge is scarce or where systematic uncertainties dominate, offering a theoretically sound and practically viable alternative to existing techniques.