It's Quick to be Square: Fast Quadratisation for Quantum Toolchains
Many of the envisioned use-cases for quantum computers involve optimisation processes. While there are many algorithmic primitives to perform the required calculations, all eventually lead to quantum gates operating on quantum bits, with an order as determined by the structure of the objective function and the properties of target hardware. When the structure of the problem representation is not aligned with the structure and boundary conditions of the executing hardware, various overheads degrading the computation may arise, potentially negating any quantum advantage. Therefore, automatic transformations of problem representations play an important role in quantum computing when descriptions (semi-)targeted at humans must be cast into forms that can be "executed" on quantum computers. Mathematically equivalent formulations are known to result in substantially different non-functional properties depending on hardware, algorithm and detailed properties of the problem. Given the current state of noisy intermediate-scale quantum (NISQ) hardware, these effects are considerably more pronounced than in classical computing. Likewise, efficiency of the transformation itself is relevant because possible quantum advantage may easily be eradicated by the overhead of transforming between representations. In this paper, we consider a specific class of higher-level representations, that is, PUBOs, and devise novel automatic transformation mechanisms into widely used QUBOs that substantially improve efficiency and versatility over the state of the art. In addition, we conduct a comprehensive investigation of industry-relevant problem formulations and their conversion into a quantum-specific representation, identifying significant obstacles in scaling behaviour and demonstrating how these can be circumvented.
💡 Research Summary
The paper addresses a critical bottleneck in quantum‑enabled optimisation: converting high‑degree pseudo‑Boolean functions (PUBOs) into quadratic unconstrained binary optimisation (QUBO) form, which is required by current NISQ hardware that only supports two‑qubit interactions. While many algorithmic primitives exist for solving optimisation problems, the transformation from a high‑level problem description to a hardware‑compatible representation can dominate the overall runtime and, if inefficient, can erase any quantum advantage.
The authors first formalise PUBOs as multilinear polynomials over binary variables and review existing quadratisation techniques, notably the "dense", "medium" and "sparse" variable-pair selection strategies used in the quark library. These methods repeatedly scan all monomials to find a pair of variables to replace with an auxiliary binary variable and add a penalty term p(x_i, x_j, y) = 3y + x_i x_j - 2 x_i y - 2 x_j y. Although they can produce high-quality QUBOs, the classical preprocessing cost grows dramatically (days for modest instances).
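The penalty term quoted above is the standard Rosenberg-style gadget: it evaluates to zero exactly when the auxiliary variable y equals the product x_i x_j, and to at least one otherwise, so minimising the penalised polynomial enforces the substitution. A minimal exhaustive check over all eight binary assignments (illustrative, not from the paper):

```python
from itertools import product

def penalty(xi, xj, y):
    # Penalty term from the summary: p(x_i, x_j, y) = 3y + x_i x_j - 2 x_i y - 2 x_j y
    return 3*y + xi*xj - 2*xi*y - 2*xj*y

# Verify the defining property on all 2^3 assignments
for xi, xj, y in product((0, 1), repeat=3):
    p = penalty(xi, xj, y)
    if y == xi * xj:
        assert p == 0   # consistent assignment: no penalty
    else:
        assert p >= 1   # inconsistent assignment: penalised by at least 1
```

In practice the penalty is multiplied by a sufficiently large scale factor so that violating the substitution can never pay off against the objective's coefficients.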
To overcome this, the paper introduces a novel graph‑based data structure. Each variable becomes a node, and each occurrence of a variable pair in a monomial becomes an edge labelled with the monomial’s unique index, yielding a multigraph G_f = (V_f, E_f). This representation enables O(1) average‑case access to the multiplicity β_f(i,j) (the number of edges between two nodes) and avoids repeated full scans of the polynomial.
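The multigraph idea can be sketched with ordinary hash maps: every co-occurring variable pair gets one edge per monomial it appears in, so the multiplicity β_f(i, j) is just a counter keyed by the pair. This is an illustrative re-implementation of the summarised idea, not the paper's exact data structure:

```python
from collections import defaultdict
from itertools import combinations

def build_pair_multigraph(monomials):
    """Build edge multiplicities beta_f(i, j) from a list of monomials.

    Each monomial is an iterable of variable indices; every pair of
    variables co-occurring in a monomial contributes one edge, labelled
    with the monomial's index.
    """
    beta = defaultdict(int)    # (i, j) -> multiplicity beta_f(i, j)
    edges = defaultdict(list)  # (i, j) -> monomial indices (edge labels)
    for t, mono in enumerate(monomials):
        for i, j in combinations(sorted(mono), 2):
            beta[(i, j)] += 1
            edges[(i, j)].append(t)
    return beta, edges

# Example: f = x0 x1 x2 + x0 x1 x3 -> pair (0, 1) occurs in both monomials
beta, edges = build_pair_multigraph([(0, 1, 2), (0, 1, 3)])
assert beta[(0, 1)] == 2 and edges[(0, 1)] == [0, 1]
```

Because lookups into `beta` are average-case O(1), a substitution step can query and update pair multiplicities without rescanning the whole polynomial.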
The proposed algorithm proceeds iteratively: (1) locate a monomial of maximal degree, (2) within that monomial select the most frequent variable pair (the “dense‑first” heuristic), (3) replace the pair with a new auxiliary variable y_h, (4) add the corresponding penalty term to a cumulative penalty polynomial, and (5) update the multigraph by removing edges associated with the replaced pair and inserting edges that involve y_h. The loop terminates when all monomials are of degree ≤ 2, at which point the resulting quadratic polynomial together with the accumulated penalties constitutes a valid quadratisation.
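The five steps above can be sketched as a compact reference implementation. This is an illustrative version that recomputes pair counts each round rather than maintaining the paper's incremental multigraph, and it uses a unit penalty weight (to be scaled in practice); monomials are modelled as frozensets of variable indices mapped to coefficients:

```python
from collections import Counter
from itertools import combinations

def quadratise(poly, n_vars):
    """Dense-first quadratisation sketch; poly maps frozensets -> coefficients.

    Auxiliary variables y_h receive fresh indices >= n_vars. The penalty
    polynomial accumulates p(x_i, x_j, y) = 3y + x_i x_j - 2 x_i y - 2 x_j y
    for each substitution.
    """
    poly = dict(poly)
    penalty = {}
    next_aux = n_vars
    while True:
        # (1) locate a monomial of maximal degree
        top = max(poly, key=len)
        if len(top) <= 2:
            break  # all monomials quadratic or lower: done
        # (2) among its pairs, pick the one occurring in most high-degree monomials
        counts = Counter()
        for mono in poly:
            if len(mono) > 2:
                counts.update(combinations(sorted(mono), 2))
        i, j = max(combinations(sorted(top), 2), key=lambda p: counts[p])
        # (3) substitute a fresh auxiliary y for x_i x_j wherever the pair occurs
        y = next_aux
        next_aux += 1
        new_poly = {}
        for mono, c in poly.items():
            if len(mono) > 2 and i in mono and j in mono:
                mono = (mono - {i, j}) | {y}
            new_poly[mono] = new_poly.get(mono, 0) + c
        poly = new_poly
        # (4) add the penalty terms enforcing y = x_i x_j
        for term, c in ((frozenset({y}), 3), (frozenset({i, j}), 1),
                        (frozenset({i, y}), -2), (frozenset({j, y}), -2)):
            penalty[term] = penalty.get(term, 0) + c
        # (5) here the multigraph update is implicit: recomputing `counts`
        # each round stands in for the paper's incremental edge bookkeeping
    return poly, penalty, next_aux - n_vars
```

For a single cubic monomial x0 x1 x2 this produces one auxiliary variable and a purely quadratic result, with the four penalty terms attached.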
Complexity analysis shows that each iteration now costs O(log n) (for priority-queue operations on edge multiplicities) rather than O(T·n²) in the naïve implementation, where T is the number of monomials and n the number of variables. Consequently, the overall runtime drops from the prohibitive superlinear growth of the naïve implementation to near-linear behaviour in practice.
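The O(log n) per-iteration bound comes from keeping pair multiplicities in a max-heap. A common way to realise this, sketched here with Python's `heapq` (a standard lazy-deletion pattern, assumed rather than taken from the paper), is to push a fresh entry on every multiplicity change and discard stale entries when they surface at the top:

```python
import heapq

class PairHeap:
    """Max-multiplicity pair selection via a lazy max-heap.

    Stale heap entries are skipped on pop instead of being deleted, so
    each multiplicity update after a substitution costs one O(log n) push.
    """
    def __init__(self):
        self.mult = {}   # current multiplicity beta_f(i, j) per pair
        self.heap = []   # entries (-multiplicity, pair), possibly stale

    def update(self, pair, mult):
        self.mult[pair] = mult
        heapq.heappush(self.heap, (-mult, pair))

    def pop_max(self):
        while self.heap:
            neg, pair = heapq.heappop(self.heap)
            if self.mult.get(pair) == -neg:  # skip superseded entries
                del self.mult[pair]
                return pair, -neg
        return None

h = PairHeap()
h.update((0, 1), 2)
h.update((1, 2), 5)
h.update((1, 2), 3)                    # newer value supersedes the old entry
assert h.pop_max() == ((1, 2), 3)      # stale (1, 2) entry is skipped
assert h.pop_max() == ((0, 1), 2)
```

The trade-off is extra heap entries in exchange for cheap updates; amortised over a run, each pair-selection and bookkeeping step stays logarithmic in the number of live pairs.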
Empirical evaluation covers three families of industrially relevant problems: (i) Job‑Shop Scheduling instances (degree‑4 PUBOs with up to 800 variables), (ii) k‑SAT formulations (k = 3–5), and (iii) Max‑Cut and other graph‑based optimisation problems. The new method is compared against the three quark strategies. Results indicate:
- Transformation time reduced by 4–5 orders of magnitude (seconds vs. days).
- The generated QUBOs contain ~10 % fewer binary variables and achieve a quadratic‑term density d₂ of 0.85–0.95, i.e., they are densely connected, which is favourable for many quantum solvers.
- When fed into a QAOA simulator (depth‑3 circuits), the dense‑first quadratisations lead to 30 %–45 % lower circuit depth and gate count compared with sparse‑first reductions, while preserving or slightly improving solution quality.
The authors also integrate the transformation into a full quantum toolchain (compilation → transpilation → execution) and demonstrate that the preprocessing step accounts for less than 5 % of total runtime on current hardware, effectively eliminating the classical overhead barrier.
In conclusion, the paper delivers a graph‑theoretic quadratisation framework that dramatically accelerates PUBO→QUBO conversion, produces high‑quality quadratic models, and thereby restores the potential quantum speed‑up for realistic optimisation workloads. Future work is suggested on parallelising the graph updates, automatically tuning penalty‑scale factors, and extending the approach to emerging quantum architectures such as digital annealers and trapped‑ion processors.