A data-driven method to estimate contamination from light ion beam transmutation at colliders
Collisions of relativistic light ions such as oxygen, neon, and magnesium have been proposed as a way to examine the system-size dependence of dynamics typically associated with the quark-gluon plasma produced in collisions of heavier ions such as xenon, gold, or lead. Recent efforts at both the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have produced large datasets of proton-oxygen, oxygen-oxygen, and neon-neon collisions, catalyzing intense interest in experimental backgrounds associated with light ion collisions. In particular, electromagnetic dissociation of light ions while they are circulating in a collider can result in beam contamination that is difficult to simulate precisely. Here we propose a data-driven method for evaluating the potential impact of beam contaminants on physics analyses. The method exploits the time-dependence and smaller size of contaminant ion species to define control regions that can be used to quantify potential contamination effects. A simple model is used to illustrate the method and to study its robustness. This method can inform studies of recent LHC and RHIC data and could also be useful for future light ion programs at the LHC and beyond.
💡 Research Summary
The paper addresses a practical problem that has emerged with the recent light‑ion collision programs at RHIC and the LHC: electromagnetic dissociation (EMD) of the circulating ion beams can produce daughter ions (most notably ⁴He) that retain the same charge‑to‑mass ratio as the parent ion and therefore continue to circulate alongside the primary beam. Over the course of a fill these “transmutation” products accumulate, giving rise to contaminant collisions such as He‑O, He‑He, d‑O, etc. Because the contaminant species have fewer participating nucleons, they populate the low‑multiplicity region of the event‑by‑event track‑count (Ntrk) distribution, potentially biasing any analysis that relies on system‑size scaling or that assumes a pure O‑O (or Ne‑Ne) sample. Simulating the absolute contamination level from first principles is extremely challenging, requiring detailed knowledge of the EMD cross sections, beam optics, and decay kinematics.
To circumvent these difficulties the authors propose a data‑driven method that exploits two experimentally accessible, approximately uncorrelated variables: (i) the elapsed time within a fill, and (ii) an event‑level observable that scales with the number of participating nucleons, for which they choose the total reconstructed track count Ntrk. The method is conceptually analogous to the ABCD technique used in background estimation.
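For context, the standard ABCD technique partitions events by cuts on two uncorrelated variables into four regions and predicts the yield in the signal region A from the three control regions. A minimal sketch (the function and variable names here are illustrative, not taken from the paper):

```python
def abcd_estimate(n_b: float, n_c: float, n_d: float) -> float:
    """Predicted background yield in signal region A.

    Assumes the two discriminating variables are independent for the
    background, so that N_A ≈ N_B * N_C / N_D.
    """
    if n_d == 0:
        raise ValueError("control region D is empty")
    return n_b * n_c / n_d

# Example: 200 events in B, 150 in C, 100 in D -> 300 predicted in A.
print(abcd_estimate(200, 150, 100))  # -> 300.0
```

In the present method the two variables play analogous roles: elapsed time within the fill and the multiplicity observable Ntrk.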
- Reference Control Region (early-time, low-contamination): The first few minutes of a fill (t₀ → t₁) are assumed to contain negligible transmutation products. The Ntrk distribution measured in this interval defines the shape of the pure-signal (e.g., O-O) collisions. By construction this shape does not evolve with time for genuine signal events; only the overall rate falls as the beam intensity decays.
- High-Purity Control Region (high-Ntrk tail): A second region is defined by selecting events with Ntrk > Ncut, where Ncut is chosen such that contaminant collisions essentially never reach this multiplicity. This region therefore contains only pure signal events at any fill time. Comparing the integrated yields in the high-Ntrk tail at early and later times then gives a scaling factor that tracks the decay of the pure-signal rate over the course of the fill.
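The logic of the two control regions can be sketched with a toy model. This is a minimal illustration under invented assumptions: the Poisson multiplicity shapes, the value of Ncut, and the event counts are all hypothetical and are not taken from the paper. The tail yield ratio scales the early-time Ntrk shape to the later time slice; the excess of observed low-Ntrk events over that prediction estimates the contamination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Ntrk shapes: the signal (e.g. O-O) reaches high multiplicity,
# while the contaminant (e.g. He-O) populates only low multiplicity.
def sample_signal(n):
    return rng.poisson(60, size=n)

def sample_contaminant(n):
    return rng.poisson(12, size=n)

# "Early" time slice: essentially pure signal.
# "Late" time slice: decayed signal rate plus accumulated contamination.
early = sample_signal(1_000_000)
late = np.concatenate([sample_signal(400_000), sample_contaminant(80_000)])

NCUT = 80  # chosen so contaminant events essentially never exceed it

# Scaling factor from the pure high-Ntrk tail: tracks the signal-rate decay.
f = (late > NCUT).sum() / (early > NCUT).sum()

# Predicted signal yield at low Ntrk in the late slice; the excess of the
# observed low-Ntrk yield over this prediction estimates the contamination.
pred_low = f * (early <= NCUT).sum()
obs_low = (late <= NCUT).sum()
contamination = obs_low - pred_low
print(f"estimated contaminant events: {contamination:.0f} (true: 80000)")
```

The estimate carries a statistical uncertainty driven by the event counts in the high-Ntrk tails, which motivates choosing Ncut as low as the contaminant multiplicity reach allows.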