Off-line data quality monitoring for the GERDA experiment
GERDA is an experiment searching for the neutrinoless ββ decay of Ge-76. The experiment uses an array of high-purity germanium detectors, enriched in Ge-76, directly immersed in liquid argon. GERDA recently started physics data taking with eight enriched coaxial detectors. The status of the experiment has to be closely monitored in order to promptly identify possible instabilities or problems. The on-line slow control system is complemented by a regular off-line monitoring of data quality. This ensures that data are qualified for use in the physics analysis and allows data sets which do not meet the minimum quality standards to be rejected. The off-line data monitoring is entirely performed within the software framework GELATIO. In addition, a relational database, complemented by a web-based interface, was developed to support the off-line monitoring and to automatically provide the information needed to assess data quality on a daily basis. The concept and the performance of the off-line monitoring tools were tested and validated during the one-year commissioning phase.
💡 Research Summary
The GERDA experiment searches for neutrinoless double‑beta decay of 76Ge using an array of high‑purity germanium detectors that are operated “naked” in liquid argon. Because the physics signal is extremely rare, the integrity of the recorded data must be continuously verified. In addition to the real‑time slow‑control system that monitors temperatures, pressures, detector currents and alarms, the collaboration has implemented a comprehensive off‑line data‑quality monitoring framework.
Raw charge‑pulse waveforms are digitised at 100 MHz and stored on a DAQ server in the underground laboratory. Every night the files are copied to a surface file server where they are automatically processed by the GERDA analysis framework GELATIO. GELATIO uses a modular approach to extract physics‑relevant parameters such as rise time, amplitude and energy from each pulse. All extracted quantities are written into a MySQL relational database that is tightly coupled to GELATIO: the database can be queried to select events of interest, and the resulting event list can be fed back into GELATIO as an input file.
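As a rough illustration of this coupling, the sketch below shows how extracted parameters could be queried from such a relational database and written out as an event list for re-processing. The table and column names (tier1_events, run_id, channel, timestamp, energy, is_test_pulse) and the connection details are hypothetical placeholders, not the actual GERDA schema.

```python
# Minimal sketch of querying the parameter database and exporting an
# event list.  Table, column and connection names are illustrative
# placeholders, not the real GERDA schema.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="monitor",
                               password="secret", database="gerda_monitoring")
cur = conn.cursor()

# Select events of interest, e.g. physical events in a given energy window
cur.execute(
    "SELECT run_id, channel, timestamp FROM tier1_events "
    "WHERE energy BETWEEN %s AND %s AND is_test_pulse = 0",
    (1000.0, 3000.0),
)

# Write the selection as a plain-text event list that could be fed back
# into the analysis framework as an input file
with open("event_list.txt", "w") as out:
    for run_id, channel, timestamp in cur.fetchall():
        out.write(f"{run_id} {channel} {timestamp}\n")

cur.close()
conn.close()
```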
The off‑line monitoring focuses on four principal aspects; illustrative code sketches for the first three are given after the list.
- Duty‑cycle monitoring – A “Run” is defined by a specific detector configuration and bias voltage. During a Run, data acquisition should be uninterrupted except for scheduled calibrations with 228Th sources (once per week, < 2 h). By analysing the timestamps of the raw files, the system computes the fraction of live time. During the one‑year commissioning, after excluding periods dedicated to hardware upgrades, the experiment achieved a duty cycle above 90 %, consistent with expectations for a low‑rate experiment.
- Rate monitoring – The system distinguishes physical events from the injected test pulses (0.1 Hz, fixed amplitude). Test pulses dominate the data stream (> 80 % of all triggers) and provide a high‑statistics probe of the electronic chain. The counting rate of physical events is expected to be stable at a few mHz per detector. Sudden increases (noise bursts) appear as spikes in the rate plot and are automatically flagged. The muon veto system provides a logical tag for each event, allowing the experiment to measure a muon‑induced background of ≈ 0.5 counts day⁻¹ detector⁻¹, in agreement with Monte‑Carlo predictions.
- Read‑out electronics performance – Four key indicators are continuously evaluated: (i) baseline amplitude versus time, (ii) baseline RMS (noise level), (iii) test‑pulse amplitude (global gain), and (iv) test‑pulse width (FWHM, electronic noise). Drifts in the baseline voltage may signal changes in detector leakage current or amplifier gain; RMS excursions point to electronic noise fluctuations; variations in test‑pulse amplitude or width indicate gain drifts or capacitance changes. When any indicator exceeds predefined limits, the corresponding data segment is marked “invalid” and excluded from physics analyses.
- Web interface and automated reporting – A dedicated web portal queries the MySQL database and produces daily reports that include numeric summaries (counting rates), one‑dimensional histograms, and two‑dimensional scatter plots (energy vs. time). Operators can browse these plots, run custom SQL queries, and download selected event lists. The reports provide a concise, human‑readable overview of the experiment’s health and enable rapid response to emerging issues.
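The duty‑cycle bookkeeping described in the first item essentially compares the time covered by raw data files with the Run duration, after removing scheduled calibration periods from the denominator. Below is a minimal sketch of that arithmetic, assuming the file start/stop times are already available as timestamp pairs; the function and input format are illustrative, not part of GELATIO.

```python
from datetime import datetime, timedelta

# Illustrative sketch: compute the live-time fraction of a Run from the
# start/stop timestamps of its raw files, excluding calibration periods.
# The input format (lists of (start, stop) pairs) is an assumption.

def live_time_fraction(file_spans, calibration_spans, run_start, run_stop):
    """Fraction of the Run covered by data files, after removing
    scheduled calibration time from the denominator."""
    total = (run_stop - run_start).total_seconds()
    calib = sum((b - a).total_seconds() for a, b in calibration_spans)
    live = sum((b - a).total_seconds() for a, b in file_spans)
    return live / (total - calib)

# Example usage with made-up timestamps: one 2-hour calibration break
run_start = datetime(2011, 11, 1)
run_stop = datetime(2011, 11, 8)
files = [(run_start, run_start + timedelta(days=3)),
         (run_start + timedelta(days=3, hours=2), run_stop)]
calibrations = [(run_start + timedelta(days=3),
                 run_start + timedelta(days=3, hours=2))]
print(f"duty cycle: {live_time_fraction(files, calibrations, run_start, run_stop):.1%}")
```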
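The automatic flagging of noise bursts mentioned in the second item can be pictured as a search for time bins whose physical‑event rate lies far above the typical few‑mHz level. The bin width and threshold in this sketch are arbitrary choices, not the cuts actually used by the collaboration.

```python
import numpy as np

# Illustrative sketch: flag noise bursts as time bins whose physical-event
# rate exceeds a multiple of the median rate.  The bin width (1 h) and the
# threshold factor are arbitrary placeholders.

def flag_noise_bursts(event_times, bin_width=3600.0, threshold=10.0):
    """event_times: 1-D array of physical-event timestamps in seconds.
    Returns the start times of bins flagged as noise bursts."""
    event_times = np.asarray(event_times)
    edges = np.arange(event_times.min(), event_times.max() + bin_width, bin_width)
    counts, _ = np.histogram(event_times, bins=edges)
    rates = counts / bin_width
    return edges[:-1][rates > threshold * np.median(rates)]
```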
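Similarly, the “invalid” flag of the third item amounts to a band check on the four electronics indicators. The limits and parameter names below are placeholders for illustration only.

```python
# Illustrative sketch: mark a data segment invalid if any monitored
# electronics indicator leaves its allowed band.  The limits below are
# placeholders, not the values actually used by the experiment.
LIMITS = {
    "baseline_mean":    (-0.05, 0.05),  # V, drift band around nominal
    "baseline_rms":     (0.0,   0.01),  # V, noise level
    "pulser_amplitude": (0.95,  1.05),  # relative to nominal gain
    "pulser_fwhm":      (0.0,   5.0),   # keV, electronic resolution
}

def segment_is_valid(indicators):
    """indicators: dict mapping indicator name -> measured value."""
    for name, (low, high) in LIMITS.items():
        if not (low <= indicators[name] <= high):
            return False
    return True

# A segment failing any single check would be excluded from physics analysis
print(segment_is_valid({"baseline_mean": 0.01, "baseline_rms": 0.004,
                        "pulser_amplitude": 1.00, "pulser_fwhm": 3.2}))
```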
The monitoring tools were exercised throughout the commissioning phase. Early runs with non‑enriched detectors exhibited occasional noise bursts and electronic instabilities; these were promptly identified via the baseline and rate plots, and corrective actions (e.g., firmware updates, grounding improvements) were taken. Subsequent runs showed a marked reduction in such anomalies, and the duty cycle, rate stability, and electronic performance remained within the prescribed tolerances.
In conclusion, the GERDA collaboration has built a robust off‑line data‑quality monitoring system that complements the real‑time slow‑control infrastructure. By integrating GELATIO, a MySQL database, and a user‑friendly web interface, the experiment can automatically assess detector live time, event rates, and electronic stability on a daily basis. The system proved its reliability during a full year of commissioning and is now ready for routine use in Phase I physics data taking, ensuring that only high‑quality data enter the neutrinoless double‑beta decay analysis.