Multi-Layer Cycle Benchmarking for high-accuracy error characterization
Accurate noise characterization is essential for reliable quantum computation. Effective Pauli noise models have emerged as powerful tools, offering a detailed description of the error processes with a manageable number of parameters, which guarantees the scalability of the characterization procedure. However, a fundamental limitation in the learnability of Pauli fidelities impedes full high-accuracy characterization of both general and effective Pauli noise, thereby restricting, e.g., the performance of noise-aware error mitigation techniques. We introduce Multi-Layer Cycle Benchmarking (MLCB), an enhanced characterization protocol that improves the learnability associated with effective Pauli noise models by jointly analyzing multiple layers of Clifford gates. We show a simple experimental implementation and demonstrate that, in realistic scenarios, MLCB can reduce the number of unlearnable noise degrees of freedom by up to $75\%$, improving the accuracy of sparse Pauli-Lindblad noise models and boosting the performance of error mitigation techniques like probabilistic error cancellation. Our results highlight MLCB as a scalable, practical tool for precise noise characterization and improved quantum computation.
💡 Research Summary
Accurate noise characterization is a prerequisite for reliable quantum computation, yet existing scalable techniques face a fundamental learnability limitation: a substantial fraction of the parameters describing Pauli noise cannot be uniquely identified because of gauge freedoms. Standard Cycle Benchmarking (CB) mitigates this issue by measuring products of Pauli eigenvalues with high multiplicative precision, but still leaves many degrees of freedom (DOF) unlearnable, especially in multi‑qubit, parallel gate layers where crosstalk and correlated errors are prevalent.
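The multiplicative precision of CB comes from fitting an exponential decay of a Pauli expectation value versus circuit depth, with state preparation and measurement (SPAM) errors absorbed into a depth-independent prefactor. A minimal sketch with hypothetical numbers (the values of `f_true`, `A_true`, and the depths are illustrative, not from the paper):

```python
import numpy as np

# Toy illustration of the CB fitting step: the expectation value of a
# Pauli observable after m repetitions of the twirled layer decays as
# A * f**m, where f is the Pauli eigenvalue being estimated and the
# prefactor A absorbs SPAM error (hypothetical numbers throughout).
f_true, A_true = 0.98, 0.92
depths = np.array([2, 4, 8, 16, 32])
signal = A_true * f_true**depths      # idealized, noiseless expectation values

# Linear fit in the log domain: log(signal) = log(A) + m * log(f).
slope, intercept = np.polyfit(depths, np.log(signal), 1)
f_est = np.exp(slope)                 # SPAM-robust eigenvalue estimate
```

Because only the slope enters the estimate, the SPAM-dependent intercept drops out, which is the source of CB's SPAM robustness.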
The authors introduce Multi‑Layer Cycle Benchmarking (MLCB), an extension of CB that simultaneously analyzes several Clifford gate layers rather than treating each layer in isolation. By interleaving different layers and performing Pauli twirling and SPAM‑robust readout on each, MLCB extracts additional algebraic constraints linking the Pauli eigenvalues of the various layers. The key insight is that the orbits of Pauli operators under different Clifford layers overlap; when an orbit contains operators with identical support, the corresponding eigenvalue products can be combined to reduce the effective orbit size. Consequently, many eigenvalues that were previously only accessible through low‑accuracy, gauge‑dependent methods become indirectly constrained by high‑accuracy measurements from other layers. The authors show that, for realistic hardware, this strategy can cut the number of unlearnable DOF by up to 75%.
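The orbit argument above can be made concrete for a single CZ gate. This is a hypothetical minimal sketch (the lookup table and helper functions are illustrative constructions, not the paper's code): conjugation by CZ maps X on one qubit to X⊗Z, so XI and XZ sit in one orbit but have different supports, and a single layer can only learn their product.

```python
# Conjugation of two-qubit Pauli strings by CZ, phases ignored:
# X on one qubit picks up a Z on the other; Z-type Paulis are fixed.
CZ_ACTION = {
    "XI": "XZ", "XZ": "XI", "IX": "ZX", "ZX": "IX",
    "ZI": "ZI", "IZ": "IZ", "ZZ": "ZZ",
}

def orbit(pauli, action):
    """Cycle of a Pauli under repeated conjugation by a Clifford layer."""
    seen, p = [], pauli
    while p not in seen:
        seen.append(p)
        p = action.get(p, p)  # Paulis missing from the table are fixed points
    return seen

def support(pauli):
    """Set of qubits on which the Pauli acts non-trivially."""
    return frozenset(i for i, c in enumerate(pauli) if c != "I")

# Layer 1 applies the CZ: XI and XZ share an orbit but differ in support,
# so only the product f_XI * f_XZ is learnable from this layer alone.
orb1 = orbit("XI", CZ_ACTION)

# Layer 2 applies no CZ on this pair, so XI is a fixed point and f_XI is
# learnable on its own; combined with the layer-1 product, f_XZ follows.
orb2 = orbit("XI", {})
```

This is the mechanism MLCB exploits at scale: a second layer whose orbits overlap the first supplies the missing high-accuracy constraint.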
The paper validates MLCB on the 20‑qubit IQM Garnet™ processor, which implements a square lattice of superconducting qubits. The device’s native two‑qubit gate layer consists of parallel CZ gates, supplemented by single‑qubit Clifford gates for twirling. Using MLCB, the authors obtain a richer set of Pauli eigenvalue products than standard CB, enabling the reconstruction of a much larger portion of the underlying Pauli channel. In particular, local crosstalk terms (e.g., XZ, YZ on neighboring qubits) that were previously unlearnable become identifiable with high precision.
To demonstrate the practical impact of the improved characterization, the authors focus on the Sparse Pauli‑Lindblad (SPL) model, a scalable effective noise model that assumes the Lindbladian is generated by low‑weight, local Pauli operators (weight ≤ 2). The SPL model has O(n) parameters (≈ 21n for a square lattice) instead of the exponential 4ⁿ − 1 of a full Pauli channel. The relationship between the SPL rates λ_k and the Pauli eigenvalues f_α is linear in the logarithmic domain: log f_α = −2 ∑_k M_{αk} λ_k, where M_{αk} ∈ {0, 1} is the symplectic inner product between the Pauli α and the generator of the k‑th Lindblad term. By feeding the high‑accuracy eigenvalue products obtained via MLCB into a non‑negative least‑squares inversion, the authors recover the SPL rates with significantly reduced statistical error compared to rates inferred from standard CB or low‑accuracy protocols.
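The inversion step can be sketched on a two-qubit toy instance. Everything below is a hypothetical construction (the choice of measured Paulis, generators, and rates is illustrative, not taken from the paper); it only demonstrates the linear relation log f_α = −2 ∑_k M_{αk} λ_k and its non-negative least-squares inversion:

```python
import numpy as np
from scipy.optimize import nnls

paulis = ["XI", "IX", "XX"]          # Paulis whose eigenvalues were measured
generators = ["ZI", "IZ", "ZZ"]      # SPL Lindblad generators (weight <= 2)

def anticommutes(p, q):
    """Symplectic inner product (mod 2) of two Pauli strings."""
    return sum(a != "I" and b != "I" and a != b for a, b in zip(p, q)) % 2

# M[a, k] = 1 iff Pauli a anticommutes with generator k.
M = np.array([[anticommutes(p, g) for g in generators] for p in paulis], float)

lam_true = np.array([0.010, 0.015, 0.005])  # hypothetical SPL rates
f = np.exp(-2.0 * M @ lam_true)             # synthetic noiseless eigenvalues

# Non-negative least squares recovers the rates from the log-eigenvalues;
# the constraint lam >= 0 keeps the fitted channel physical.
lam_est, residual = nnls(M, -0.5 * np.log(f))
```

In practice f carries statistical error, and the accuracy of lam_est is bounded by the accuracy of the eigenvalue estimates fed in, which is exactly where MLCB's extra high-precision constraints help.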
The authors then assess how the refined SPL model influences error mitigation, focusing on Probabilistic Error Cancellation (PEC). PEC requires an accurate estimate of the noise rates to construct a quasi‑probabilistic inverse channel; any error in λ_k propagates exponentially into the sampling overhead and residual bias. Using the MLCB‑derived SPL parameters, the authors implement PEC on benchmark circuits and observe a ~30% reduction in mean‑squared error relative to PEC based on standard CB data, while keeping the same number of samples. This improvement is most pronounced for circuits with strong crosstalk, confirming that the additional constraints supplied by MLCB directly translate into more effective mitigation.
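The dependence of PEC on accurate noise rates can be illustrated on a single qubit. This is a hypothetical sketch (the error probabilities are made up, and the single-qubit Walsh–Hadamard construction is a standard textbook device, not the paper's implementation): a Pauli channel is diagonal in the Pauli basis, so its inverse is obtained by inverting the eigenvalues, and the resulting quasi-probabilities fix the sampling overhead γ.

```python
import numpy as np

# W maps Pauli error probabilities (ordered I, X, Y, Z) to channel
# eigenvalues: W[a, b] = +1 if P_a and P_b commute, -1 otherwise.
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], float)

p = np.array([0.97, 0.01, 0.01, 0.01])  # hypothetical Pauli error probabilities
f = W @ p                               # channel eigenvalues
q = W @ (1.0 / f) / 4.0                 # quasi-probabilities of the inverse
                                        # (W is symmetric with W @ W = 4*I)
gamma = np.abs(q).sum()                 # PEC sampling overhead
```

Since q contains negative entries, sampling the inverse channel requires sign bookkeeping, and the variance grows with γ²; a misestimated λ_k shifts f, hence q, and leaves a residual bias that the ~30% MSE reduction reported above quantifies.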
Beyond PEC, the paper discusses the broader implications: MLCB retains the SPAM‑robustness and multiplicative precision of standard CB, while providing a systematic way to “gauge‑fix” the otherwise ambiguous Pauli parameters through multi‑layer correlations. The protocol requires only modest experimental overhead—additional layers are executed sequentially, and the post‑processing involves constructing and solving a linear system whose size scales with the number of learned products, not with the Hilbert space dimension. Therefore, MLCB can be integrated into existing quantum control stacks with minimal software changes and scales naturally to larger processors.
In summary, Multi‑Layer Cycle Benchmarking offers a practical, scalable solution to the long‑standing learnability bottleneck in Pauli‑noise characterization. By leveraging the structure of multiple Clifford layers, it dramatically reduces the number of unlearnable degrees of freedom, yields more accurate parameters for sparse, physically motivated noise models, and consequently enhances the performance of advanced error mitigation techniques such as PEC. The work bridges the gap between high‑accuracy, small‑scale tomography and coarse, large‑scale benchmarking, providing a valuable tool for the next generation of quantum hardware.