Event Weighted Tests for Detecting Periodicity in Photon Arrival Times
This paper treats the problem of detecting periodicity in a sequence of photon arrival times, which occurs, for example, in attempting to detect gamma-ray pulsars. A particular focus is on how auxiliary information, typically source intensity, background intensity, and the incidence angles and energies associated with each photon arrival, should be used to maximize the detection power. We construct a class of likelihood-based tests, score tests, which give rise to event weighting in a principled and natural way, and derive expressions quantifying the power of the tests. These results can be used to compare the efficacies of different weight functions, including cuts in energy and incidence angle. The test is targeted toward a template for the periodic lightcurve, and we quantify how deviation from that template affects the power of detection.
💡 Research Summary
The paper addresses the classic problem of detecting a periodic signal hidden in a stream of photon arrival times, a situation that arises most prominently in the search for gamma‑ray pulsars with instruments such as Fermi‑LAT. The authors begin by modelling the photon stream as a non‑homogeneous Poisson process whose intensity is the sum of a periodic signal component λ_s(t,θ,E) and a background component λ_b(t,θ,E). The variables t, θ and E denote the arrival time, incidence angle and measured energy of each photon, respectively, and therefore each event carries a three‑dimensional auxiliary vector that can be exploited to improve detection power.
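To make the model concrete, here is a minimal simulation sketch of such a photon stream using Lewis–Shedler thinning. All parameter values and the sinusoidal pulse shape are illustrative assumptions for this sketch, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) model parameters, not values from the paper.
T = 1.0e4      # observation span [s]
f0 = 0.5       # pulsar rotation frequency [Hz]
lam_b = 2.0    # constant background rate [photons/s]
lam_s = 0.2    # mean signal rate [photons/s]

def rate(t):
    """Total intensity: constant background plus a sinusoidally
    modulated signal (a simple stand-in for lambda_b + lambda_s)."""
    return lam_b + lam_s * (1.0 + np.cos(2.0 * np.pi * f0 * t))

# Lewis-Shedler thinning: simulate at the peak rate, then keep each
# candidate event with probability rate(t) / rate_max.
rate_max = lam_b + 2.0 * lam_s
t_cand = np.sort(rng.uniform(0.0, T, rng.poisson(rate_max * T)))
keep = rng.uniform(size=t_cand.size) < rate(t_cand) / rate_max
arrival_times = t_cand[keep]
print(f"simulated {arrival_times.size} photons over {T:.0f} s")
```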
Instead of treating all photons equally or applying crude cuts on energy or angle, the authors derive a likelihood function L(α,β) that depends on a signal amplitude parameter α (which is zero under the null hypothesis of no periodicity) and a background scaling parameter β. Taking the score (the derivative of the log-likelihood with respect to α) at α = 0 yields a pair of test statistics, each a weighted sum of sinusoidal basis functions:
S = Σ_i w_i cos(2πf t_i) and S′ = Σ_i w_i sin(2πf t_i)
where f is the trial frequency and w_i is a weight assigned to photon i. Crucially, the weight emerges naturally from the likelihood derivation as the ratio of the signal to background probability densities evaluated at the photon’s auxiliary variables:
w_i = λ_s(t_i,θ_i,E_i) / λ_b(t_i,θ_i,E_i)
In practice λ_s and λ_b are estimated from prior spectral models, instrument response functions, or from the data itself using an iterative background‑estimation scheme. This formulation makes the test a score test, which is locally most powerful (in the Neyman–Pearson sense) for detecting a weak periodic component when the auxiliary information is correctly modelled.
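For readers who want the intermediate step, here is a schematic version of the standard inhomogeneous-Poisson score computation (the notation is ours and need not match the paper's). Writing the total intensity as λ = β λ_b + α λ_s h(2πf t) for a periodic template h, the log-likelihood of the observed events is

log L(α,β) = Σ_i log λ(t_i,θ_i,E_i) − ∫ λ(t,θ,E) dt dθ dE

Differentiating with respect to α and evaluating at α = 0 gives the score

∂ log L/∂α |_{α=0} = Σ_i [λ_s(t_i,θ_i,E_i) / (β λ_b(t_i,θ_i,E_i))] h(2πf t_i) − ∫ λ_s h dt dθ dE

The integral term is deterministic, so the data-dependent part of the score is a weighted sum of the template evaluated at the photon phases. For a single-harmonic template this reduces to the statistics S and S′ above, with weights equal to the signal-to-background ratio (the constant factor 1/β does not affect the test).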
The paper then explores several concrete weight choices: (1) unit weight (the traditional unweighted Rayleigh test), (2) binary cuts w_i = 1_{E_i>E_thr} or w_i = 1_{θ_i<θ_thr}, and (3) the continuous likelihood‑ratio weight described above. For each case the authors compute the expected value and variance of S under both the null and the alternative hypothesis, thereby obtaining analytic expressions for the detection power as a function of signal‑to‑background ratio, number of photons, and the degree of mismatch between the assumed periodic light‑curve template h(t) and the true signal shape.
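A short sketch of how the three weightings might be computed and fed into the S/S′ statistics. The synthetic event list, thresholds, and spectral/angular models here are hypothetical stand-ins; in a real analysis λ_s and λ_b would come from instrument response and spectral fits:

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_power(t, w, f):
    """Weighted Rayleigh statistic: squared length of the weighted
    phasor sum, i.e. S^2 + S'^2 at trial frequency f."""
    phase = 2.0 * np.pi * f * t
    return np.sum(w * np.cos(phase))**2 + np.sum(w * np.sin(phase))**2

# Synthetic event list (purely illustrative).
n = 5000
t = np.sort(rng.uniform(0.0, 1.0e4, n))   # arrival times [s]
E = rng.pareto(1.5, n) + 0.1              # energies [GeV], power-law-like
theta = rng.rayleigh(0.5, n)              # incidence angles [deg]

# (1) Unit weights: the classical unweighted Rayleigh test.
w_unit = np.ones(n)

# (2) Binary cuts; the thresholds 1 GeV and 1 deg are hypothetical.
w_cut = ((E > 1.0) & (theta < 1.0)).astype(float)

# (3) Likelihood-ratio weights from a toy model in which the signal is
# spectrally harder and more angularly concentrated than the background.
lam_s = E**-2.0 * np.exp(-theta**2)
lam_b = E**-2.5
w_lr = lam_s / lam_b

for name, w in [("unit", w_unit), ("cut", w_cut), ("LR", w_lr)]:
    print(name, rayleigh_power(t, w, f=0.5))
```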
A key insight is that the power loss due to template mismatch scales with the inner product ⟨h_true, h_assumed⟩; if the template deviates substantially, the expected signal contribution to S is reduced proportionally, and the test behaves like a sub‑optimal matched filter. The authors quantify this effect and provide guidance on how finely the template must be sampled to avoid significant degradation.
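A tiny numerical illustration of the overlap effect, under the simplifying assumption that the retained signal amplitude equals the normalized overlap between the true and assumed templates (the pulse shapes are invented for the example):

```python
import numpy as np

# Normalized overlap between an assumed single-harmonic template and a
# sharper, von Mises-like true pulse, computed over one cycle.
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
h_assumed = np.cos(phi)
h_true = np.exp(5.0 * np.cos(phi))
h_true -= h_true.mean()   # remove the DC component before correlating

overlap = np.mean(h_true * h_assumed) / np.sqrt(
    np.mean(h_true**2) * np.mean(h_assumed**2))
print(f"fraction of signal amplitude retained: {overlap:.2f}")
```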
Simulation studies using realistic gamma‑ray pulsar parameters (power‑law spectra, instrument point‑spread functions, and time‑varying exposure) confirm the analytic predictions. The likelihood‑ratio weighting consistently outperforms the unweighted and cut‑based versions, delivering a 20–30 % increase in power for low signal‑to‑background regimes (S/B ≈ 0.1) and even larger gains when the background is highly anisotropic. Moreover, the distribution of the weighted score converges rapidly to a normal distribution, allowing straightforward p‑value calculation via standard Z‑scores; the authors provide the necessary variance‑normalization formulas for practical pipeline implementation.
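A minimal p-value sketch under the standard weighted-Rayleigh null normalization, in which each phasor component has variance Σ w_i²/2, so that 2(S² + S′²)/Σ w_i² is asymptotically χ² with two degrees of freedom; a production pipeline should use the paper's own variance-normalization formulas rather than this simplification:

```python
import numpy as np
from scipy import stats

def weighted_rayleigh_pvalue(t, w, f):
    """P-value under the assumed null normalization: with uniform
    phases, S and S' each have mean 0 and variance sum(w**2)/2, so
    2*(S**2 + S'**2)/sum(w**2) is asymptotically chi-squared(2)."""
    phase = 2.0 * np.pi * f * t
    S_c = np.sum(w * np.cos(phase))
    S_s = np.sum(w * np.sin(phase))
    z2 = 2.0 * (S_c**2 + S_s**2) / np.sum(w**2)
    return stats.chi2.sf(z2, df=2)

# Sanity check: null (uniform) arrival times give p roughly Uniform(0,1).
rng = np.random.default_rng(2)
t0 = rng.uniform(0.0, 1.0e4, 2000)
print(weighted_rayleigh_pvalue(t0, np.ones_like(t0), f=0.5))
```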
Finally, the authors discuss practical considerations: (i) the need for accurate models of λ_s and λ_b, (ii) computational cost of evaluating weights for millions of photons (mitigated by pre‑computing lookup tables in (θ,E) space), and (iii) the flexibility to incorporate additional auxiliary variables such as event quality flags. They argue that the proposed framework subsumes traditional cut‑based methods as special cases and offers a principled path toward maximal detection sensitivity in current and future high‑energy astrophysics missions.
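As one example of mitigation (ii), a weight lookup table over (θ, E) might be precomputed along these lines; the grid ranges, resolutions, and toy ratio function are our assumptions, not the paper's implementation:

```python
import numpy as np

# Precompute lambda_s/lambda_b on a (theta, E) grid once, then assign
# each photon a weight by bin lookup instead of re-evaluating the
# models per event.  Grid ranges and resolutions are illustrative.
theta_edges = np.linspace(0.0, 5.0, 51)    # incidence angle [deg]
logE_edges = np.linspace(-1.0, 3.0, 81)    # log10(E / GeV)

def build_weight_table(ratio_fn):
    """Evaluate the signal-to-background ratio at all bin centers."""
    tc = 0.5 * (theta_edges[:-1] + theta_edges[1:])
    ec = 0.5 * (logE_edges[:-1] + logE_edges[1:])
    return ratio_fn(tc[:, None], 10.0**ec[None, :])

def lookup_weights(table, theta, E):
    """Map each photon's (theta, E) to its precomputed weight."""
    i = np.clip(np.digitize(theta, theta_edges) - 1, 0, table.shape[0] - 1)
    j = np.clip(np.digitize(np.log10(E), logE_edges) - 1, 0, table.shape[1] - 1)
    return table[i, j]

# Toy ratio: spectrally harder, angularly concentrated signal.
table = build_weight_table(lambda th, E: E**0.5 * np.exp(-th**2))
print(lookup_weights(table, np.array([0.3, 2.0]), np.array([1.0, 50.0])))
```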
In summary, the paper delivers a rigorous, likelihood‑based derivation of event‑weighted periodicity tests, demonstrates their superior statistical power, and supplies the analytical tools needed to compare alternative weighting schemes and to assess the impact of template inaccuracies. This work represents a significant methodological advance for pulsar searches and other time‑domain analyses where each photon carries rich ancillary information.