Priors for New Physics


The interpretation of data in terms of multi-parameter models of new physics, using the Bayesian approach, requires the construction of multi-parameter priors. We propose a construction that uses elements of Bayesian reference analysis. Our idea is to initiate the chain of inference with the reference prior for a likelihood function that depends on a single parameter of interest, which is a function of the parameters of the physics model. The reference posterior density of the parameter of interest induces, on the parameter space of the physics model, a class of posterior densities. We propose to continue the chain of inference with a particular density from this class, namely the one for which indistinguishable models are equiprobable, and to use it as the prior for subsequent analysis. We illustrate our method by applying it to the constrained minimal supersymmetric Standard Model and two non-universal variants of it.


💡 Research Summary

The paper addresses a fundamental challenge in Bayesian analyses of new‑physics models at the LHC: how to construct a prior distribution over a high‑dimensional parameter space in a principled, data‑driven way. Conventional choices such as flat or logarithmic priors are arbitrary in many dimensions and can lead to pathological posterior behavior, especially near the boundaries of the parameter space. To overcome this, the authors adopt the framework of Bayesian reference analysis, which defines a “reference prior” that maximizes the influence of the data relative to the prior.

The methodology proceeds in four steps. First, a simple counting experiment is considered, where the observed number of events N follows a Poisson distribution with mean µ + s, where µ is the expected Standard Model background and s ≥ 0 is the expected signal from new physics. The background µ is assigned an evidence‑based Gamma prior, reflecting minimal prior knowledge. By marginalizing over µ, a one‑parameter marginal likelihood p(N|s) is obtained in closed form (Eq. 7). Second, the reference prior for the signal s is derived using Jeffreys’ rule, which in this context reduces to a sum over Poisson probabilities weighted by coefficients v_{ik} (Eq. 10). Third, the reference posterior p(s|N) is formed by multiplying the marginal likelihood by the reference prior and normalizing (Eq. 12). This posterior is the most “objective” inference about s that the data can support.
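The first two steps can be sketched numerically. The block below is a minimal illustration, not the paper's exact construction: it assumes a Gamma(α, β) background prior with hypothetical shape and rate values, uses the closed-form mixture that results from expanding Poisson(N | µ + s) and integrating out µ (a sum of Poisson terms in s times negative-binomial terms, a stand-in for the paper's Eq. 7 with coefficients v_{ik}), and approximates the Jeffreys/reference prior by a numerical Fisher-information calculation rather than the paper's analytic sum (Eq. 10).

```python
import math

def marginal_likelihood(N, s, alpha=1.0, beta=1.0):
    """p(N|s): Poisson(N | mu + s) marginalised over a Gamma(alpha, beta)
    prior on the background mean mu (shape alpha, rate beta).
    Closed form: sum_{k=0}^{N} Pois(k|s) * NegBin(N-k | alpha, 1/(1+beta))."""
    total = 0.0
    for k in range(N + 1):
        m = N - k
        if s > 0:
            log_pois = -s + k * math.log(s) - math.lgamma(k + 1)
        else:
            log_pois = 0.0 if k == 0 else float("-inf")
        # Gamma-Poisson (negative binomial) factor from integrating out mu.
        log_nb = (math.lgamma(m + alpha) - math.lgamma(m + 1) - math.lgamma(alpha)
                  + alpha * math.log(beta) - (m + alpha) * math.log(1.0 + beta))
        if log_pois > float("-inf"):
            total += math.exp(log_pois + log_nb)
    return total

def jeffreys_prior(s, alpha=1.0, beta=1.0, n_max=200, eps=1e-5):
    """Unnormalised Jeffreys/reference prior pi(s) ~ sqrt(Fisher information),
    with the s-derivative of log p(N|s) taken by finite differences."""
    info = 0.0
    for N in range(n_max):
        p = marginal_likelihood(N, s, alpha, beta)
        p2 = marginal_likelihood(N, s + eps, alpha, beta)
        if p <= 0.0 or p2 <= 0.0:
            continue
        dlogp = (math.log(p2) - math.log(p)) / eps
        info += p * dlogp ** 2
    return math.sqrt(info)
```

The reference posterior of the third step is then just the product `marginal_likelihood(N, s) * jeffreys_prior(s)`, normalized over s.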

The crucial step is the mapping from the one-dimensional posterior for s to a prior over the full set of model parameters θ. The authors impose an equiprobability condition: all points in θ-space that predict the same expected signal s(θ) must receive the same prior weight. Consequently, the multi-parameter prior is taken to be proportional to the reference posterior evaluated at s(θ), possibly corrected by a Jacobian factor to ensure uniformity on the constant-s hypersurfaces. In practice, this is implemented by Monte Carlo sampling of θ, computing s(θ) for each sample, and assigning weights according to p(s(θ)|N). The resulting prior is invariant under reparameterizations and inherits the good frequentist coverage properties of the reference prior.
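A minimal sketch of this sampling-and-weighting scheme follows. Everything model-specific here is a hypothetical placeholder: `signal_yield` stands in for the simulated LHC prediction s(θ), and `reference_posterior` is a plain Poisson likelihood with a fixed known background rather than the paper's full reference posterior. Only the structure — uniform θ samples weighted by p(s(θ)|N), so that equal-signal points get equal weight — reflects the method.

```python
import math
import random

def signal_yield(theta):
    """Toy stand-in for s(theta); a real analysis would take this from
    simulated event counts at the SUSY point theta = (m0, m1/2)."""
    m0, mhalf = theta
    return 50.0 * math.exp(-(m0 + mhalf) / 500.0)  # hypothetical falling yield

def reference_posterior(s, N=5, mu=3.0):
    """Placeholder for p(s|N): Poisson likelihood with known background mu,
    standing in for the reference posterior of the paper."""
    return math.exp(-(mu + s)) * (mu + s) ** N / math.factorial(N)

def equiprobability_prior_samples(n=10000, seed=0):
    """Sample theta uniformly, then weight each point by p(s(theta)|N):
    all theta predicting the same signal receive the same weight."""
    rng = random.Random(seed)
    samples, weights = [], []
    for _ in range(n):
        theta = (rng.uniform(0, 2000), rng.uniform(0, 2000))  # GeV ranges, illustrative
        samples.append(theta)
        weights.append(reference_posterior(signal_yield(theta)))
    z = sum(weights)
    return samples, [w / z for w in weights]
```

The normalized weights define the induced prior on θ-space; subsequent inference reuses the weighted sample.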

The authors demonstrate the approach on three supersymmetric models: the constrained MSSM (CMSSM) with two free parameters (m_0, m_{1/2}) and two non-universal Higgs mass extensions, each with five free parameters. For each model, the expected signal s(θ) is obtained from simulated LHC analyses (e.g., jets + missing E_T). The derived priors are then used in full Bayesian inference, yielding posterior distributions that are stable against prior choice and free of artificial spikes that often appear with naïve flat priors. Model comparison via Bayes factors also becomes less sensitive to arbitrary prior specifications.

In addition to parameter estimation, the paper proposes a novel Bayesian measure of signal significance. By defining a loss function δ(µ,s) that quantifies the penalty for insisting on the background‑only hypothesis when a signal may be present, and averaging this loss over the joint posterior p(µ,s|N), one obtains a scalar d(N). Large values of d(N) indicate that rejecting the background‑only hypothesis would have avoided a substantial expected loss, providing an intuitive, prior‑consistent alternative to traditional p‑values.
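The averaging itself is straightforward once posterior samples of (µ, s) are available. The sketch below uses toy Gamma-distributed draws in place of a real joint posterior, and a hypothetical loss δ(µ, s) = s/√µ (the missed signal in units of the background fluctuation) rather than the paper's specific loss function; only the structure d(N) = E[δ(µ, s) | N] is taken from the text.

```python
import math
import random

def expected_loss(posterior_samples, delta):
    """d(N): average of delta(mu, s) over joint posterior samples (mu_i, s_i).
    A large value argues for rejecting the background-only hypothesis."""
    return sum(delta(mu, s) for mu, s in posterior_samples) / len(posterior_samples)

# Toy posterior: independent Gamma draws, purely illustrative.
rng = random.Random(1)
samples = [(rng.gammavariate(3.0, 1.0), rng.gammavariate(2.0, 1.0))
           for _ in range(5000)]

# Hypothetical loss: missed signal relative to the background fluctuation.
d = expected_loss(samples, lambda mu, s: s / math.sqrt(mu))
```

With a loss that vanishes identically, d(N) is zero, as it should be; with any positive loss it grows with the posterior support for a signal.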

Overall, the work offers a coherent, mathematically justified solution to the prior‑selection problem in multi‑parameter new‑physics analyses. It combines the objectivity of reference priors with a physically motivated equiprobability condition, delivering priors that are data‑driven, transformation‑invariant, and possess good frequentist coverage. While computationally demanding—requiring extensive sampling of high‑dimensional parameter spaces—the approach is feasible with modern high‑performance computing resources. Future extensions envisaged include multi‑channel counting experiments, more elaborate loss functions, and global scans of supersymmetric parameter spaces, which would further solidify the method as a standard tool for Bayesian new‑physics inference at the LHC and beyond.

