A decision between Bayesian and Frequentist upper limit in analyzing continuous Gravitational Waves
Given the sensitivity of current ground-based Gravitational Wave (GW) detectors, any continuous-wave signal we can realistically expect will be at or below the level of the background noise. Hence, any analysis of detector data must rely on statistical techniques to separate the signal from the noise. While at the current sensitivity of our detectors we do not expect to detect any true GW signals in our data, we can still set upper limits (ULs) on their amplitude. These upper limits tell us the weakest signal strength we would have been able to detect. When setting upper limits with the two popular methods, Bayesian and Frequentist, there is always the question of how realistic the results are. In this paper, we try to estimate how realistically we can set upper limits using the above-mentioned methods, and which one, if either, is preferred for our future data analysis work.
💡 Research Summary
The paper addresses the practical problem of setting upper limits (ULs) on the amplitude of continuous gravitational waves (CWs) when the expected signals are far below the noise floor of current ground‑based detectors such as LIGO and Virgo. Because direct detection is unlikely with present sensitivities, the community instead reports how strong a signal could be before it would have been detected with a given confidence. Two widely used statistical frameworks—Frequentist and Bayesian—are examined side‑by‑side, and their performance is evaluated on simulated data that mimics realistic detector noise.
Theoretical foundation
The authors model the data as x(t)=n(t)+h(t), where n(t) is zero‑mean, stationary Gaussian noise and h(t) is a deterministic CW signal characterized by four parameters: intrinsic amplitude h₀, inclination angle ι, polarization angle ψ, and initial phase Φ₀. By constructing the likelihood ratio Λ and taking its logarithm, they derive the well‑known F‑statistic, which is the maximum of the log‑likelihood over the four amplitude parameters. Under the Gaussian noise assumption, 2F follows a non‑central χ² distribution with four degrees of freedom and non‑centrality ρ² = (h|h), i.e., the squared optimal signal‑to‑noise ratio.
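The distributional claim above can be checked numerically. A minimal sketch using SciPy's non-central χ² (the value ρ² = 25, i.e. an optimal SNR of 5, is an illustrative assumption, not a number from the paper):

```python
import numpy as np
from scipy.stats import chi2, ncx2

# Under the Gaussian-noise assumption, 2F follows a non-central
# chi^2 distribution with 4 degrees of freedom and non-centrality
# rho^2 = (h|h), the squared optimal SNR.
rho2 = 25.0                      # assumed squared optimal SNR (SNR = 5)
dist = ncx2(df=4, nc=rho2)

# The mean of 2F is df + nc = 4 + rho^2:
print(dist.mean())               # 29.0

# Probability of exceeding a noise-only threshold, here the 99th
# percentile of the central chi^2 (the rho^2 = 0 case):
threshold = chi2(df=4).ppf(0.99)
print(dist.sf(threshold))
```

In the noise-only limit (ρ² = 0) the distribution reduces to a central χ² with four degrees of freedom, which is what the injection procedures below compare against.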
Frequentist upper‑limit procedure
The Frequentist UL is defined as the smallest h₀ such that, in 95 % of repeated experiments, the detection statistic exceeds the value observed in the actual data. Practically, the authors implement a Monte‑Carlo injection campaign:
- Compute the F‑statistic of the actual data with a perfectly matched template (F*).
- Choose an initial guess for h₀ from a parameter‑estimation routine.
- Randomly assign the remaining angles (Φ₀, ψ, cos ι) because the data are assumed to be pure noise.
- Inject an artificial signal with the chosen parameters into a 0.1 Hz frequency band around the target pulsar frequency.
- Re‑compute 2F (denoted F′) and repeat the injection 150 times.
- Determine the fraction X of injections for which F′ > F*. If X lies outside the 90‑95 % or 95‑98 % windows, adjust h₀ by a heuristic factor (×1.05 or ×0.90) and repeat.
- Refine the estimate with 1000 injections per run, repeating the whole loop six times to obtain a smooth interpolation of X versus h₀.
The final h₀ that yields X≈0.95 is taken as the 95 % Frequentist UL. This approach is computationally intensive because each injection requires a full F‑statistic evaluation; the authors report several thousand such evaluations per pulsar.
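The loop above can be sketched in a few lines. This is a toy model, not the paper's pipeline: instead of running a full F‑statistic evaluation per injection, it draws 2F′ directly from the non‑central χ² model, the mapping from h₀ to squared SNR is an illustrative assumption, and the paper's heuristic ×1.05/×0.90 adjustment is replaced by a plain log‑space bisection:

```python
import numpy as np
from scipy.stats import chi2, ncx2

rng = np.random.default_rng(42)

def snr2_from_h0(h0):
    """Toy mapping from amplitude to squared optimal SNR.
    Assumption: rho^2 is proportional to h0^2; the reference scale
    1e-25 is illustrative, not from the paper."""
    return (h0 / 1e-25) ** 2

def confidence(h0, two_f_star, n_inj=1000):
    """Fraction X of injections whose 2F' exceeds the observed 2F*.
    The random angles (Phi0, psi, cos iota) are subsumed here by
    drawing 2F' from the non-central chi^2 model directly."""
    two_f_prime = ncx2(df=4, nc=snr2_from_h0(h0)).rvs(n_inj, random_state=rng)
    return np.mean(two_f_prime > two_f_star)

# Observed statistic from noise-only data (simulated here):
two_f_star = chi2(df=4).rvs(random_state=rng)

# Log-space bisection on h0 until the confidence reaches X ~ 0.95:
lo, hi = 1e-27, 1e-23
for _ in range(40):
    mid = np.sqrt(lo * hi)
    if confidence(mid, two_f_star) < 0.95:
        lo = mid
    else:
        hi = mid
print(f"95% Frequentist UL (toy): h0 ~ {hi:.2e}")
```

In the real pipeline each call to `confidence` would involve hundreds of signal injections and F‑statistic evaluations, which is exactly where the computational cost reported by the authors comes from.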
Bayesian upper‑limit procedure
In the Bayesian framework, the posterior probability for h₀ given the data s is proportional to the product of the likelihood and a prior on the parameters. The likelihood is exactly the exponential of the negative half‑quadratic form derived from the F‑statistic (Eq. 29). The authors adopt a flat prior on h₀ (P(h₀)=const) and uniform priors on the angular parameters. The posterior is then integrated over ψ, ι (via μ=cos ι), and Φ₀:
I = ∫₀^∞ dh₀ ∫_{−1}^{1} dμ ∫_{−π/4}^{π/4} dψ ∫₀^{2π} dΦ₀ G(h₀, ψ, μ, Φ₀),
where G encodes the Gaussian likelihood. The 95 % Bayesian UL is the value h₀^{max} satisfying
∫₀^{h₀^{max}} dh₀ ∫_{−1}^{1} dμ ∫_{−π/4}^{π/4} dψ ∫₀^{2π} dΦ₀ G(h₀, ψ, μ, Φ₀) / I = 0.95.
Numerically, this reduces to a four‑dimensional integral that can be evaluated with standard quadrature or Markov‑Chain Monte‑Carlo methods. Crucially, only a single evaluation of the F‑statistic on the actual data is required, making the Bayesian UL far less demanding computationally.
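The grid-quadrature route can be sketched directly. Everything model-specific here is an assumption: the Gaussian factor `G` is a stand-in for the paper's Eq. 29, the response factor `A(μ, ψ, Φ₀)` is schematic rather than the true antenna pattern, and the flat prior on h₀ is truncated at an arbitrary upper bound so the quadrature is finite:

```python
import numpy as np

sigma = 1.0      # noise scale (illustrative)
x_hat = 0.5      # matched-filter output from the actual data (toy value)

# Flat prior on h0, truncated at 10 (arbitrary) for the quadrature:
h0 = np.linspace(0.0, 10.0, 400)
mu = np.linspace(-1.0, 1.0, 41)          # mu = cos(iota)
psi = np.linspace(-np.pi / 4, np.pi / 4, 41)
phi0 = np.linspace(0.0, 2 * np.pi, 41)

# Schematic amplitude response; NOT the true antenna pattern.
M, P, F0 = np.meshgrid(mu, psi, phi0, indexing="ij")
A = 0.5 * (1 + M**2) * np.cos(2 * P) ** 2 * (1 + 0.1 * np.cos(F0))

def marginal(h):
    """Likelihood marginalized over (mu, psi, Phi0). On uniform
    grids the cell volumes cancel in the normalized posterior,
    so a plain sum suffices for the angular integrals."""
    return np.exp(-((h * A - x_hat) ** 2) / (2 * sigma**2)).sum()

post = np.array([marginal(h) for h in h0])
cdf = np.cumsum(post)
cdf /= cdf[-1]                            # normalize: divide by I
h0_max = h0[np.searchsorted(cdf, 0.95)]   # 95% Bayesian UL
print(f"95% Bayesian UL (toy): h0_max ~ {h0_max:.2f}")
```

Note that the data enter only through `x_hat`, computed once, which is the source of the computational advantage over the injection-based Frequentist loop.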
Simulation and results
The authors generate 100 synthetic pulsar signals at realistic LIGO sensitivities, but they deliberately set the data to pure noise (no injected signal). Both methods are applied to each data set. The Frequentist ULs are systematically higher (more conservative) by roughly 10–15 % compared with the Bayesian ULs obtained with a flat prior. When a Jeffreys prior (∝1/h₀) is used, the Bayesian UL rises by about 20 %, illustrating the sensitivity of the Bayesian result to prior choice. In terms of runtime, the Frequentist pipeline required several hours on a 16‑core workstation per pulsar, whereas the Bayesian pipeline completed in under half an hour.
Discussion
The paper highlights several trade‑offs:
- Conservatism vs efficiency – The Frequentist method explicitly controls the false‑alarm probability, yielding a more conservative UL at the cost of massive Monte‑Carlo simulations. The Bayesian method is computationally cheap but its results depend on the prior; a physically motivated prior is essential to avoid bias.
- Robustness to non‑Gaussian noise – Real detector data exhibit non‑stationary, non‑Gaussian artifacts. The Bayesian framework can incorporate more sophisticated noise models directly into the likelihood, whereas the Frequentist approach would need to redesign the injection‑based calibration.
- Practicality for large‑scale searches – Upcoming observing runs will involve thousands of targets and months of data. The authors argue that a Bayesian UL, possibly supplemented by occasional Frequentist cross‑checks, offers a scalable solution.
Conclusions and recommendations
The authors recommend adopting the Bayesian upper‑limit calculation as the primary tool for future continuous‑wave searches, given its computational tractability and flexibility. They suggest using a flat prior (or a physically motivated variant) and performing systematic prior‑sensitivity studies. The Frequentist method remains valuable as a validation technique, especially when a highly conservative UL is required for reporting. Future work should focus on extending the Bayesian analysis to incorporate realistic, non‑Gaussian noise models and on quantifying the impact of detector calibration uncertainties on UL estimates.