A Comparative Analysis of the CERN ATLAS ITk MOPS Readout: A Feasibility Study on Production and Development Setups
The upcoming High-Luminosity upgrade of the Large Hadron Collider (LHC) necessitates a complete replacement of the ATLAS Inner Detector with the new Inner Tracker (ITk). This upgrade imposes stringent requirements on the associated Detector Control System (DCS), which is responsible for the monitoring, control, and safety of the detector. A critical component of the ITk DCS is the Monitoring of Pixel System (MOPS), which supervises the local voltages and temperatures of the new pixel detector modules. This paper introduces a dedicated testbed and verification methodology for the MOPS readout, defining a structured set of test cases for two DCS-readout architectures: a preliminary Raspberry Pi-based controller, the "MOPS-Hub Mock-up" (MH Mock-up), and the final production FPGA-based "MOPS-Hub" (MH). The methodology specifies the measurement chain for end-to-end latency, jitter, and data integrity across CAN and UART interfaces, including a unified time-stamping scheme, non-intrusive signal taps, and a consistent data-logging and analysis pipeline. This work details the load profiles and scalability scenarios (baseline operation, full-crate stress, and CAN Interface Card channel isolation), together with acceptance criteria and considerations for measurement uncertainty to ensure reproducibility. The objective is to provide a clear, repeatable procedure to qualify the MH architecture for production and deployment in the ATLAS ITk DCS. A companion paper will present the experimental results and the comparative analysis obtained using this testbed.
💡 Research Summary
The paper presents a comprehensive methodology for qualifying the read‑out electronics of the ATLAS Inner Tracker (ITk) Monitoring of Pixel System (MOPS) before large‑scale production for the High‑Luminosity LHC upgrade. Two architectures are considered: the early‑stage “MOPS‑Hub Mock‑up” that uses a Raspberry Pi and Ethernet to aggregate up to four CAN Interface Cards (CICs), and the final production “MOPS‑Hub” which employs a custom FPGA board, a high‑speed e‑Link, and eight CICs (sixteen channels). Because the DCS must reliably monitor thousands of voltage and temperature sensors under harsh radiation conditions, the authors develop a dedicated test‑bed—named the “Readout Hub”—to measure end‑to‑end latency, jitter, and data integrity across the 1.2 V, 125 kbit/s CAN bus and the UART interface used to deliver processed data to the control room.
The Readout Hub is built around an STM32F767ZIT6 microcontroller (216 MHz Cortex‑M7) that provides a single high‑precision clock for all timestamps, eliminating timing errors introduced by conventional PC‑based USB tools. A non‑intrusive CAN‑Tap board with a level‑shifter converts the low‑voltage differential CAN signals to 5 V logic for the STM32, while external MCP2515 CAN controllers are interfaced via SPI to allow simultaneous handling of multiple CAN channels. The firmware records the moment a CAN message is received (t1) and the moment the device under test (DUT) emits the corresponding UART packet (t2); the difference Δt = t2 – t1 is the measured latency. The UART payload is compared with the original CAN payload to detect any data loss or corruption.
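The latency and integrity computation described above can be sketched on the analysis side. The following is a minimal illustration, not the authors' actual pipeline: record names, the synthetic timestamps, and the peak-to-peak jitter definition are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One logged event pair: CAN receive time t1, UART emit time t2 (both in µs),
    plus the payload seen on each interface (hypothetical log format)."""
    t1_us: float
    t2_us: float
    can_payload: bytes
    uart_payload: bytes

def analyze(records):
    """Compute per-message latency Δt = t2 - t1 and count integrity failures."""
    latencies = [r.t2_us - r.t1_us for r in records]
    corrupted = sum(1 for r in records if r.uart_payload != r.can_payload)
    mean_us = sum(latencies) / len(latencies)
    jitter_us = max(latencies) - min(latencies)  # simple peak-to-peak jitter
    return mean_us, jitter_us, corrupted

# Two synthetic records for illustration
recs = [
    Record(0.0, 5200.0, b"\x01\x02", b"\x01\x02"),
    Record(10000.0, 15600.0, b"\x03\x04", b"\x03\x04"),
]
mean_us, jitter_us, bad = analyze(recs)  # mean 5400 µs, jitter 400 µs, 0 corrupted
```

In the real testbed both timestamps come from the single STM32 timer, so differencing them cancels any absolute clock offset; only resolution and drift remain as error sources.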
Three test scenarios are defined:
- Baseline Performance – Four CICs (eight channels) are operated at a modest 10 Hz per channel (≈1 kB/s). The acceptance criterion is an average round-trip latency ≤ 7 ms and zero packet loss.
- Full-Crate Stress Test – All eight CICs (sixteen channels) are driven at the maximum CAN bandwidth (125 kbit/s per channel). The system must keep latency ≤ 15 ms and a packet-loss rate ≤ 0.1 % under this load.
- CIC Functionality and Isolation – Each CIC's A and B channels are toggled independently to verify galvanic isolation and to confirm that the CAN-Tap introduces no cross-talk. The voltage level shifters and shielding are evaluated for effectiveness.
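The pass/fail limits of the two load scenarios can be expressed as a small check, sketched below. The function and dictionary names are hypothetical; only the numeric limits come from the scenarios above.

```python
# Acceptance limits quoted in the test plan (illustrative encoding).
LIMITS = {
    "baseline":   {"latency_ms": 7.0,  "loss_frac": 0.0},    # ≤ 7 ms, zero loss
    "full_crate": {"latency_ms": 15.0, "loss_frac": 0.001},  # ≤ 15 ms, ≤ 0.1 %
}

def passes(scenario, mean_latency_ms, sent, received):
    """Return True when a run meets both the latency and packet-loss criteria."""
    lim = LIMITS[scenario]
    loss = (sent - received) / sent
    return mean_latency_ms <= lim["latency_ms"] and loss <= lim["loss_frac"]

ok_baseline = passes("baseline", 5.4, 10_000, 10_000)      # True: within limits
ok_stress = passes("full_crate", 14.2, 10_000, 9_985)      # False: 0.15 % loss > 0.1 %
```

A check like this is what would sit at the end of the logging and analysis pipeline, turning raw latency statistics into a qualification verdict per scenario.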
Measurement uncertainty is carefully quantified: timer resolution (±0.5 µs), clock drift (±10 ppm), and the additional load imposed by the test‑bed on the CAN bus (< 1 %). The total uncertainty is kept below 1 %.
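The quoted error sources can be combined in a short budget calculation. The sketch below assumes the contributions are independent (root-sum-square combination) and evaluates them at a latency near the 7 ms baseline limit; both assumptions are illustrative, not stated in the paper.

```python
import math

latency_us = 7000.0   # measured interval near the 7 ms baseline limit
timer_res_us = 0.5    # ± timer resolution quoted above
drift_ppm = 10.0      # ± clock drift quoted above

# Drift scales with the measured interval: 10 ppm of 7 ms is 0.07 µs.
drift_us = latency_us * drift_ppm * 1e-6

# Root-sum-square combination of the two independent timing contributions.
total_us = math.sqrt(timer_res_us**2 + drift_us**2)

# Relative uncertainty of the latency measurement (fraction of 7 ms).
relative = total_us / latency_us  # ≈ 7e-5, far below the 1 % budget
```

At these values the timer resolution dominates, and the combined timing uncertainty is orders of magnitude below the stated 1 % total budget, which is instead limited by the test-bed's added bus load.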
Preliminary expectations, based on architectural analysis, indicate that the FPGA‑based MOPS‑Hub will meet all criteria, while the Raspberry Pi prototype will suffer from non‑deterministic Linux scheduling, leading to increased jitter and occasional data loss under stress. The methodology provides a repeatable, hardware‑centric validation framework that can be incorporated into the quality‑control flow for CIC production and later used during system integration and radiation‑hardness testing. Although experimental results are deferred to a companion paper, this work establishes the necessary procedures, acceptance limits, and uncertainty handling required to qualify the MOPS‑Hub for deployment in the ATLAS ITk DCS.