In-situ benchmarking of fault-tolerant quantum circuits. I. Clifford circuits
Benchmarking physical devices and verifying logical algorithms are important tasks for scalable fault-tolerant quantum computing. Numerous protocols exist for benchmarking devices before running actual algorithms. In this work, we show that both physical and logical errors of fault-tolerant circuits can even be characterized in-situ using syndrome data. To achieve this, we map general fault-tolerant Clifford circuits to subsystem codes using the spacetime code formalism and develop a scheme for estimating Pauli noise in Clifford circuits using syndrome data. We give necessary and sufficient conditions for the learnability of physical and logical noise from given syndrome data, and show that we can accurately predict logical fidelities from the same data. Importantly, our approach requires only a polynomial sample size, even when the logical error rate is exponentially suppressed by the code distance, and thus gives an exponential advantage over methods that use only logical data, such as direct fidelity estimation. We demonstrate the practical applicability of our methods in various scenarios using synthetic data as well as experimental data from a recent demonstration of fault-tolerant circuits by Bluvstein et al. [Nature 626, 7997 (2024)]. Our methods provide an efficient, in-situ way of characterizing a fault-tolerant quantum computer to help gate calibration, improve decoding accuracy, and verify logical circuits.
💡 Research Summary
This paper addresses two central challenges for scalable fault‑tolerant quantum computing: (i) characterizing the physical error processes that affect a device and (ii) verifying the performance of logical circuits built on top of error‑correcting codes. Existing benchmarking protocols either require additional calibration experiments or rely on direct logical fidelity estimation, which becomes infeasible when the logical error rate is exponentially suppressed by the code distance. The authors propose a unified, in‑situ approach that exploits the syndrome data already generated during error‑correction cycles.
The core technical contribution is twofold. First, they map any fault‑tolerant Clifford circuit onto a subsystem stabilizer code in a higher‑dimensional "spacetime" picture. In this picture, each gate, measurement, and idle period corresponds to a set of physical qubits arranged along a time axis, and the entire circuit becomes a single static code whose stabilizer outcomes are exactly the syndrome bits recorded during the run. This mapping, previously introduced in the literature, allows the authors to treat circuit‑level noise as noise on a static code, thereby reducing the problem of learning circuit noise to a well‑studied static‑code learning problem.
Second, building on the framework of Wagner et al., they develop an efficient algorithm for learning Pauli noise from syndrome data alone. They assume a local Pauli noise model: the total noise channel is a composition of many local channels, each acting on a constant‑size subset of qubits (the “r‑local, c‑sparse” model). Under this model the number of independent parameters scales linearly with the number of physical qubits, making learning feasible. The algorithm groups all errors that produce the same syndrome into a “syndrome class” and estimates the total probability of each class directly from observed frequencies. By solving a linear system that relates class probabilities to the underlying local error rates, the method recovers the full set of physical error parameters up to a known gauge freedom.
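As a concrete (and heavily simplified) illustration of this idea, the sketch below considers three independent bit-flip locations monitored by two overlapping parity checks; the check supports and error rates are invented for illustration and are not taken from the paper. Under independent noise the expectation of $(-1)^{\text{syndrome}}$ factorizes, so the log-moments are linear in $\log(1-2p_i)$, and solving that small linear system recovers the local rates from syndrome statistics alone:

```python
import numpy as np

# Toy version of syndrome-based noise learning (not the paper's full
# algorithm).  Assumed model: three independent bit-flip locations with
# rates p1, p2, p3, and two parity checks s1 = e1 XOR e2, s2 = e2 XOR e3.
rng = np.random.default_rng(0)
p_true = np.array([0.01, 0.02, 0.03])    # hypothetical local error rates
shots = 400_000

e = rng.random((shots, 3)) < p_true      # sample independent error events
s1 = e[:, 0] ^ e[:, 1]                   # syndrome bit of check 1
s2 = e[:, 1] ^ e[:, 2]                   # syndrome bit of check 2

# Moments for check 1, check 2, and their product; the product moment
# has support {e1, e3} (e2 cancels) and is needed for full rank.
m = np.array([np.mean(1 - 2 * s1),
              np.mean(1 - 2 * s2),
              np.mean(1 - 2 * (s1 ^ s2))])

# Row j marks which error locations enter moment j; E[(-1)^s] then
# factorizes as a product of (1 - 2*p_i) over the marked locations.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

x = np.linalg.solve(A, np.log(m))        # x_i = log(1 - 2*p_i)
p_est = (1 - np.exp(x)) / 2
print(np.round(p_est, 4))
```

In the paper's setting the same logic is applied at scale: syndrome-class frequencies play the role of the moments `m`, and the linear system is built from the spacetime code's check structure rather than this hand-picked matrix.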
The authors rigorously derive necessary and sufficient conditions for learnability. Physically, the stabilizer (or gauge) group must be able to distinguish all error patterns that affect the syndrome; mathematically this translates to the measurement matrix having full rank. For logical error‑rate learning they require that the circuit be fault‑tolerant with respect to the assumed Pauli noise, ensuring that logical errors manifest as a linear function of the syndrome‑class probabilities. Under these conditions, the logical error rate can be predicted from syndrome data alone, without ever measuring logical qubits.
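The full-rank condition can be probed numerically in the same toy setting. Supports of moment products combine as GF(2) sums of the check rows, so a hypothetical identifiability test (my construction, not the paper's exact procedure) is to ask whether those combined supports span the parameter space over the reals:

```python
import numpy as np
from itertools import product

def spans_full_rank(H):
    """Toy identifiability test: enumerate GF(2) combinations of the
    check rows (supports of all syndrome-moment products) and check
    whether they span the full parameter space over the reals."""
    combos = []
    for coeffs in product([0, 1], repeat=H.shape[0]):
        v = np.zeros(H.shape[1], dtype=int)
        for c, row in zip(coeffs, H):
            if c:
                v = v ^ row          # supports combine by XOR
        combos.append(v)
    return np.linalg.matrix_rank(np.array(combos)) == H.shape[1]

# Two overlapping checks on three error locations (same toy example):
# the nonzero supports {110, 011, 101} span R^3, so all rates are learnable.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
print(spans_full_rank(H))

# A single check cannot separate its two constituent rates:
H_bad = np.array([[1, 1, 0]])
print(spans_full_rank(H_bad))
```

The brute-force enumeration over all $2^m$ check subsets is only viable for toy instances; at scale the same question is answered with standard linear algebra on the measurement matrix, which is the form the paper's condition takes.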
A central result is the sample‑complexity analysis. In the low‑error regime, when the physical error rate is below the code threshold, the number of circuit executions needed to estimate both physical and logical error rates to a given precision scales polynomially with the spacetime volume of the circuit (i.e., the number of measured stabilizers). By contrast, direct logical fidelity estimation requires a number of samples that grows exponentially with the code distance because the logical error probability becomes exponentially small. Thus the proposed method offers an exponential advantage in the regime where fault tolerance is most valuable.
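A back-of-envelope version of this scaling (a plain Hoeffding bound with a union bound over moments, not the paper's exact constants) shows why the required shot count stays modest even for many estimated quantities:

```python
import math

def shots_needed(num_moments, eps, delta):
    """Hoeffding + union bound: a number of shots sufficient to estimate
    each of `num_moments` syndrome moments to additive error eps, with
    overall failure probability at most delta.  Illustrative only; the
    paper's analysis carries its own constants and error model."""
    return math.ceil(math.log(2 * num_moments / delta) / (2 * eps ** 2))

# The shot count grows only logarithmically in the number of moments,
# i.e. polynomially in the spacetime volume of the circuit:
print(shots_needed(10_000, 0.01, 0.05))
print(shots_needed(1_000_000, 0.01, 0.05))
```

A 100-fold increase in the number of moments barely moves the bound, whereas direct logical fidelity estimation would need roughly $1/p_L$ shots, which blows up as the logical error rate $p_L$ is suppressed with distance.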
The authors validate their theory with both synthetic simulations and real experimental data. They apply the algorithm to data from a recent demonstration of a fault‑tolerant logical GHZ‑state preparation (Bluvstein et al., Nature 626, 7997, 2024). Using only the recorded syndrome outcomes, they recover physical error rates that agree with independently calibrated values and predict logical error rates that match the directly measured logical fidelities. Additional simulations on quantum low‑density parity‑check (qLDPC) codes illustrate that the method remains efficient even for codes with highly non‑local stabilizers.
Beyond the immediate results, the paper outlines several practical implications. Since syndrome data are already collected during normal error‑correction operation, no extra experimental overhead is required. The learned noise model can be fed back into gate‑calibration routines, used to improve decoder performance (e.g., by informing belief‑propagation or neural‑network decoders), and employed as a diagnostic tool for hardware developers. Moreover, the framework is extensible: Part II of the work promises to handle circuits that include non‑Clifford resources such as magic‑state injection, enabling verification of classically hard algorithms.
In summary, this work introduces a scalable, in‑situ benchmarking protocol that leverages syndrome information to learn both physical Pauli error channels and logical error rates for fault‑tolerant Clifford circuits. By converting circuit‑level noise to a static‑code problem and exploiting the structure of local Pauli noise, the authors achieve polynomial‑sample learning where previous methods required exponential effort. The approach is validated on state‑of‑the‑art experimental data and promises broad applicability to future fault‑tolerant quantum processors.