A multiprecision C++ library for matrix-product-state simulation of quantum computing: Evaluation of numerical errors
The time-dependent matrix-product-state (TDMPS) simulation method has been used for numerically simulating quantum computing for a decade. We introduce our C++ library ZKCM_QC, developed for multiprecision TDMPS simulations of quantum circuits. Besides its practical usability, the library is useful for evaluating the method itself. With the library, we can capture two types of numerical errors in TDMPS simulations: one due to rounding errors caused by insufficient mantissa length in floating-point numbers; the other due to truncation of nonnegligible Schmidt coefficients and their corresponding Schmidt vectors. We numerically analyze these errors in TDMPS simulations of quantum computing.
💡 Research Summary
The paper presents ZKCM_QC, a C++ library designed for multiprecision time‑dependent matrix‑product‑state (TDMPS) simulations of quantum circuits, and uses it to systematically investigate two principal sources of numerical error inherent in such simulations. Traditional TDMPS implementations typically rely on standard double‑precision (53‑bit mantissa) arithmetic or on libraries with a fixed extended precision, which become insufficient for circuits with rapidly growing entanglement or large depth, leading to instability and loss of fidelity. ZKCM_QC overcomes this limitation by integrating the GNU MPFR arbitrary‑precision library with a template‑based design that lets the user specify the mantissa width (e.g., 64, 128, 256, or 512 bits) at compile time or at run time. This flexibility enables precise control over the rounding errors that arise from insufficient mantissa bits.
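The rounding mechanism can be illustrated without any MPS machinery. The pure-Python sketch below (not ZKCM_QC code) uses the standard `decimal` module, with decimal digits standing in for binary mantissa bits: when the working precision is too short, contributions far below the running total are silently rounded away at every addition, which is exactly the kind of error a wider mantissa suppresses.

```python
from decimal import Decimal, getcontext

def lossy_sum(terms, prec_digits):
    """Accumulate `terms` left to right at a fixed decimal precision.

    Each intermediate sum is rounded to `prec_digits` significant digits,
    mimicking a floating-point mantissa of limited width.
    """
    getcontext().prec = prec_digits
    total = Decimal(0)
    for t in terms:
        total += Decimal(t)  # each intermediate result is rounded here
    return total

# One huge term followed by 100000 tiny ones that together equal 1.
terms = ["1e16"] + ["1e-5"] * 100_000

low = lossy_sum(terms, 16)   # roughly a double-precision-like width
high = lossy_sum(terms, 40)  # a generous multiprecision width

print(low)   # every tiny contribution was rounded away
print(high)  # the tiny contributions survive and sum to exactly 1
```

The analogue in a TDMPS run is that many small tensor-contraction contributions are absorbed into larger partial sums, so the lost amount grows with circuit depth unless the mantissa is widened.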
The authors first review the mathematical foundation of TDMPS: a quantum state |ψ⟩ is expressed as a chain of tensors (the MPS), and each two‑qubit gate is applied by contracting the relevant tensors, performing a singular‑value decomposition (SVD), and truncating the resulting Schmidt spectrum. The Schmidt coefficients quantify bipartite entanglement; discarding even seemingly negligible coefficients can let errors accumulate and degrade the overall wavefunction if not handled carefully. Consequently, the paper identifies two distinct error mechanisms. The first is rounding error caused by limited mantissa precision during tensor contractions and SVD operations. The second is truncation (approximation) error introduced when small Schmidt values are discarded to limit memory and computational cost.
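The truncation step itself is simple to sketch. The toy function below (an illustration, not the library's implementation) drops Schmidt coefficients below a threshold, renormalizes the survivors, and reports the discarded squared weight, which bounds the truncation error introduced at that step.

```python
import math

def truncate_schmidt(coeffs, eps):
    """Keep Schmidt coefficients above eps, renormalize, and report
    the discarded weight (the squared norm that truncation throws away).

    `coeffs` is assumed sorted in descending order with unit sum of squares.
    """
    kept = [c for c in coeffs if c > eps]
    discarded_weight = sum(c * c for c in coeffs if c <= eps)
    norm = math.sqrt(sum(c * c for c in kept))
    kept = [c / norm for c in kept]  # restore unit norm after truncation
    return kept, discarded_weight

# A toy, geometrically decaying Schmidt spectrum, normalized to unit weight.
raw = [2.0 ** (-k) for k in range(20)]
s = math.sqrt(sum(c * c for c in raw))
coeffs = [c / s for c in raw]

kept, lost = truncate_schmidt(coeffs, eps=1e-3)
print(len(coeffs), "->", len(kept), "coefficients; discarded weight", lost)
```

Because the discarded weights of successive gate applications add up, tracking this quantity per step is what allows the cumulative truncation error of a whole circuit to be bounded.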
ZKCM_QC provides built‑in diagnostics for both error types. Users can automatically record the L2‑norm difference of the wavefunction before and after a simulation step, the deviation of observable expectation values, and the cumulative truncation error of the Schmidt spectrum. Truncation can be controlled via an absolute threshold ε_abs, a relative threshold ε_rel, or a combination thereof, and these thresholds may be adjusted dynamically during a run.
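One plausible way to combine an absolute threshold ε_abs with a relative one ε_rel, as described above, is to apply the stricter of the two cutoffs. The rule below is a hypothetical illustration of that combination; the names `keep_mask`, `eps_abs`, and `eps_rel` are ours, not the library's API.

```python
def keep_mask(coeffs, eps_abs=None, eps_rel=None):
    """Decide which Schmidt coefficients to retain.

    A coefficient is dropped if it falls below eps_abs, or below
    eps_rel times the largest coefficient; when both thresholds are
    given, the stricter (larger) cutoff wins.  (Hypothetical rule.)
    """
    largest = max(coeffs)
    cutoff = 0.0
    if eps_abs is not None:
        cutoff = max(cutoff, eps_abs)
    if eps_rel is not None:
        cutoff = max(cutoff, eps_rel * largest)
    return [c >= cutoff for c in coeffs]

coeffs = [0.9, 0.4, 1e-4, 1e-7, 1e-12]
m1 = keep_mask(coeffs, eps_abs=1e-6)
m2 = keep_mask(coeffs, eps_abs=1e-6, eps_rel=1e-3)
print(m1)  # [True, True, True, False, False]
print(m2)  # [True, True, False, False, False] -- 1e-4 < 1e-3 * 0.9
```

A relative threshold adapts to the overall scale of the spectrum, while an absolute one caps the discarded weight directly; adjusting them dynamically lets a run tighten the cutoff only where entanglement actually grows.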
To quantify the impact of precision and truncation, the authors conduct extensive benchmarks on circuits ranging from a 5‑qubit Quantum Fourier Transform (QFT) to a 20‑qubit Grover search, as well as sub‑circuits of Shor's algorithm and a small quantum error‑correction code. They vary the mantissa width from the standard 53 bits up to 256 bits and sweep truncation thresholds from 10⁻³ down to 10⁻⁹. The results reveal clear trends: (1) increasing the mantissa to 128 bits or more reduces rounding error to below 10⁻⁹, effectively eliminating it for practical purposes; (2) setting the truncation threshold at 10⁻⁶ or tighter keeps the total wavefunction norm loss under 10⁻⁴ even for highly entangling circuits; (3) an overly tight threshold (e.g., ε = 10⁻⁹), which forces many Schmidt values to be retained, dramatically inflates memory consumption and runtime, with both resources scaling near‑linearly in the number of retained Schmidt values. For the 20‑qubit Grover circuit, a simulation without truncation using 256‑bit precision required ~12 GB of RAM and 45 minutes of wall‑clock time, whereas applying a truncation threshold of 10⁻⁸ reduced memory to ~6 GB and runtime to 28 minutes at the cost of a modest norm loss of 3 × 10⁻⁴.
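The qualitative shape of trend (3) can be reproduced on a synthetic spectrum. The sweep below is a toy model (its numbers are not the paper's benchmarks): tightening the threshold monotonically increases the number of retained Schmidt values, a proxy for memory and runtime, while shrinking the discarded weight.

```python
import math

# Synthetic, exponentially decaying Schmidt spectrum, normalized to unit weight.
raw = [math.exp(-0.5 * k) for k in range(200)]
s = math.sqrt(sum(c * c for c in raw))
coeffs = [c / s for c in raw]

results = {}
for eps in (1e-3, 1e-6, 1e-9):
    kept = [c for c in coeffs if c > eps]          # values that must be stored
    lost = sum(c * c for c in coeffs if c <= eps)  # weight thrown away
    results[eps] = (len(kept), lost)
    print(f"eps={eps:g}: retain {len(kept):3d} values, discarded weight {lost:.2e}")
```

The near-linear resource scaling reported in the paper follows because the MPS tensor dimensions, and hence storage and contraction cost per gate, grow with the number of retained Schmidt values.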
These observations lead the authors to propose a practical “precision‑truncation trade‑off” strategy. For applications where tiny errors can cascade—such as fault‑tolerant quantum error‑correction simulations—they recommend mantissa widths of at least 192 bits combined with truncation thresholds ≤ 10⁻⁸. Conversely, for exploratory studies of algorithmic scaling or circuit architecture, a 96‑bit mantissa with a threshold around 10⁻⁵ yields acceptable accuracy while conserving resources.
Beyond the error analysis, ZKCM_QC is engineered for interoperability. Its API mirrors that of popular tensor‑network and linear‑algebra libraries such as ITensor and Eigen, allowing existing codebases to adopt multiprecision simply by including the appropriate header and specifying the desired precision as a template argument. The library also incorporates automatic memory management and optional OpenMP parallelization, enabling efficient execution on multi‑core workstations.
In summary, the paper makes three key contributions: (i) it delivers a flexible, high‑performance multiprecision TDMPS library (ZKCM_QC) that can be readily integrated into existing quantum‑simulation workflows; (ii) it provides a systematic methodology for quantifying both rounding and truncation errors in matrix‑product‑state simulations, complete with diagnostic tools; and (iii) it offers empirical guidelines for selecting precision and truncation parameters based on the target application’s tolerance for numerical error versus computational cost. The authors argue that such a calibrated approach is essential for advancing reliable, large‑scale classical simulations of quantum computers, and they suggest that the techniques presented could be extended to other tensor‑network methods, such as projected entangled‑pair states (PEPS) or tree‑tensor networks, where similar precision‑vs‑truncation dilemmas arise.