A Case for Variability-Aware Policies for NISQ-Era Quantum Computers

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Recently, IBM, Google, and Intel showcased quantum computers ranging from 49 to 72 qubits. While these systems represent a significant milestone in the advancement of quantum computing, existing and near-term quantum computers are not yet large enough to fully support quantum error correction. Such systems, with a few tens to a few hundred qubits, are termed Noisy Intermediate-Scale Quantum (NISQ) computers, and they can provide benefits for a class of quantum algorithms. In this paper, we study the problems of qubit allocation (mapping program qubits to machine qubits) and qubit movement (routing qubits from one location to another to perform entanglement). We observe that the error rates of different qubits and links vary, and that this variation can affect decisions about qubit movement and qubit allocation. We analyze characterization data for the IBM-Q20 quantum computer gathered over 52 days to understand and quantify the variation in error rates, and find that there is indeed significant variability in the error rates of the qubits and the links connecting them. We define reliability metrics for NISQ computers and show that device variability has a substantial impact on overall system reliability. To exploit the variability in error rates, we propose Variation-Aware Qubit Movement (VQM) and Variation-Aware Qubit Allocation (VQA), policies that optimize the movement and allocation of qubits to avoid the weaker qubits and links and guide more operations toward the stronger ones. We show that our variation-aware policies improve the reliability of the NISQ system by up to 2.5x.


💡 Research Summary

The paper investigates how variability in qubit and coupling‑link error rates on today’s noisy intermediate‑scale quantum (NISQ) devices can be exploited to improve overall system reliability. Using publicly available calibration data from IBM’s 20‑qubit processor (IBM‑Q20) collected over 52 days, the authors quantify the spread in coherence times (T1, T2), single‑qubit gate errors (≈10⁻³) and two‑qubit gate errors (≈10⁻²). They find that error rates differ by up to a factor of seven across different links, and that these rates fluctuate over time, demonstrating that “all qubits are not created equal.”

To translate this hardware variability into system‑level benefits, the authors introduce two reliability metrics: Mean Instructions Before Failure (MIBF) and Probability of Successful Trial (PST). MIBF measures how many instructions a program can execute before the first error, while PST captures the chance that a full execution finishes without any error. These metrics provide a concrete way to assess the impact of variability on algorithmic outcomes.
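Under a simple independent-error model, the two metrics can be sketched as below. The gate counts and error rates are illustrative placeholders (not measured IBM-Q20 values), and MIBF is modeled here as the mean of a geometric distribution with a uniform per-instruction error rate.

```python
# Sketch of the reliability metrics under an independent-error model.
# Error rates below are illustrative, not measured IBM-Q20 values.

def pst(error_rates):
    """Probability of Successful Trial: chance that every instruction
    in one full execution completes without error."""
    p = 1.0
    for e in error_rates:
        p *= 1.0 - e
    return p

def mibf(avg_error_rate):
    """Mean Instructions Before Failure for a uniform per-instruction
    error rate (expectation of a geometric distribution)."""
    return 1.0 / avg_error_rate

# A toy 6-instruction program: four 1-qubit gates (~1e-3 error) and
# two 2-qubit gates (~1e-2 error), matching the orders of magnitude
# quoted above.
errs = [1e-3] * 4 + [1e-2] * 2
print(round(pst(errs), 4))              # ≈ 0.9762
print(round(mibf(sum(errs) / len(errs))))  # ≈ 250 instructions
```

Note how the two-qubit gates dominate: their error rate is an order of magnitude higher, so a program's PST is driven largely by its two-qubit gate count and the quality of the links it uses.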

Building on this analysis, the paper proposes two variation‑aware policies. Variation‑Aware Qubit Movement (VQM) selects a routing path for SWAP operations not by minimizing the number of swaps, but by maximizing the product of link success probabilities along the path. A tunable parameter limits how many extra swaps may be taken, allowing a trade‑off between latency and reliability. In a simple five‑qubit example, VQM chooses a longer path (A‑E‑D‑C) with a 56.7 % success probability instead of the shorter A‑B‑C path that only yields 42 % success. Simulations show that VQM can increase MIBF by up to 1.5×.
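A minimal brute-force sketch of this routing idea follows. The five-qubit ring topology and the individual link reliabilities are assumptions chosen so that the two candidate paths reproduce the quoted 42% and 56.7% probabilities; the paper's actual router is not reproduced here.

```python
# Variation-Aware Qubit Movement sketch: among all simple paths within
# `extra` hops of the shortest path, pick the one maximizing the product
# of link success probabilities. Topology and reliabilities are assumed.

def all_paths(links, src, dst, path=None):
    """Enumerate simple paths from src to dst over undirected links."""
    path = [src] if path is None else path
    if src == dst:
        yield path
        return
    for a, b in links:
        nxt = b if a == src else a if b == src else None
        if nxt is not None and nxt not in path:
            yield from all_paths(links, nxt, dst, path + [nxt])

def path_reliability(rel, path):
    r = 1.0
    for u, v in zip(path, path[1:]):
        r *= rel[frozenset((u, v))]
    return r

def vqm_route(rel, src, dst, extra=1):
    """Best path within the hop budget, scored by success probability."""
    links = [tuple(k) for k in rel]
    paths = list(all_paths(links, src, dst))
    shortest = min(len(p) for p in paths)
    budget = [p for p in paths if len(p) <= shortest + extra]
    return max(budget, key=lambda p: path_reliability(rel, p))

# Assumed link reliabilities on a 5-qubit ring A-B-C-D-E-A.
rel = {frozenset('AB'): 0.6, frozenset('BC'): 0.7, frozenset('CD'): 0.7,
       frozenset('DE'): 0.9, frozenset('AE'): 0.9}
print(vqm_route(rel, 'A', 'C', extra=1))  # ['A', 'E', 'D', 'C'], ≈ 0.567
```

With `extra=0` the router would fall back to the shorter but weaker A-B-C path (0.6 × 0.7 = 0.42), which is exactly the latency-versus-reliability trade-off the tunable parameter controls.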

Variation‑Aware Qubit Allocation (VQA) extends existing allocation heuristics by evaluating each possible mapping of logical to physical qubits with respect to the combined success probability of the involved links. The policy prefers mappings that place logical qubits on physically strong qubits and use high‑quality couplings, even if the number of required SWAPs is unchanged. Experiments on a suite of small quantum kernels demonstrate that VQA can raise PST by as much as 2.5×, dramatically improving the likelihood of a successful run.
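The scoring step can be illustrated with an exhaustive search over mappings, shown below. The line topology, link reliabilities, and brute-force enumeration are all assumptions for illustration; the paper extends existing allocation heuristics rather than enumerating every mapping.

```python
# Variation-Aware Qubit Allocation sketch: score each mapping of logical
# to physical qubits by the product of success probabilities of the
# physical links its two-qubit gates would use. Brute force is feasible
# only at toy scale; real allocators use heuristics.
from itertools import permutations

def vqa_allocate(logical_gates, phys_rel, n_logical):
    phys = sorted({q for link in phys_rel for q in link})
    best, best_score = None, -1.0
    for perm in permutations(phys, n_logical):
        score = 1.0
        for a, b in logical_gates:           # each entry: a 2-qubit gate
            link = frozenset((perm[a], perm[b]))
            score *= phys_rel.get(link, 0.0)  # 0 if qubits not adjacent
        if score > best_score:
            best, best_score = perm, score
    return dict(enumerate(best)), best_score

# Assumed 4-qubit line P0-P1-P2-P3 with unequal link reliabilities.
phys_rel = {frozenset(('P0', 'P1')): 0.99,
            frozenset(('P1', 'P2')): 0.90,
            frozenset(('P2', 'P3')): 0.95}
mapping, score = vqa_allocate([(0, 1), (1, 2)], phys_rel, n_logical=3)
print(mapping, round(score, 3))  # prefers the P0-P1-P2 segment
```

Both candidate segments need zero SWAPs, yet the allocator still prefers P0-P1-P2 (0.99 × 0.90 ≈ 0.891) over P1-P2-P3 (0.90 × 0.95 = 0.855), which is the essence of choosing strong qubits and couplings at equal SWAP cost.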

The authors also conduct a case study comparing two execution strategies for programs that occupy less than half the available qubits: (1) running two copies concurrently on the full device (increasing trial throughput but using weaker qubits) versus (2) running a single copy on the strongest region of the chip (lower throughput but higher per‑trial success). Results indicate that, under realistic variability, the single‑copy strong‑region strategy can deliver more successful trials per unit time, highlighting how variability‑aware decisions can influence higher‑level scheduling and partitioning policies.
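The trade-off can be made concrete with a back-of-envelope model, sketched below. All numbers are assumed for illustration (the paper reports its own measurements): the model posits that splitting the chip between two concurrent copies forces both onto weaker mappings, lowering each copy's PST.

```python
# Assumed-number model of the case study: successful trials per unit time
# for two execution strategies on the same device.

def successful_trials(psts, cycles):
    """Expected successful trials over `cycles` device runs, where each
    cycle executes one trial per entry in `psts` concurrently."""
    return cycles * sum(psts)

# Strategy 1: two concurrent copies; both land on constrained mappings
# (assumed PSTs 0.35 and 0.20 once the chip is split between them).
two_copies = successful_trials([0.35, 0.20], cycles=100)  # ≈ 55

# Strategy 2: one copy on the strongest region (assumed PST 0.60).
one_copy = successful_trials([0.60], cycles=100)          # ≈ 60
print(two_copies, one_copy)
```

Under these assumed numbers the single strong-region copy wins despite half the throughput; if the weak region were only mildly worse, the two-copy strategy would win instead, which is why the decision has to be variability-aware rather than fixed.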

Overall, the paper makes three key contributions: (1) a thorough empirical characterization of qubit and link variability on a real NISQ processor; (2) the definition of system‑level reliability metrics (MIBF, PST) that capture the effect of hardware variation on algorithmic success; and (3) the design and evaluation of VQM and VQA, which together can improve NISQ reliability by up to 2.5× without any hardware changes. The work suggests that, in the pre‑error‑correction era, software‑level awareness of hardware heterogeneity is essential for extracting maximal performance from noisy quantum devices, and it opens avenues for dynamic, real‑time variability‑aware compilation as quantum hardware scales up.

