Linear-time classical approximate optimization of cubic-lattice classical spin glasses
Demonstrating quantum speedup for approximate optimization of classical spin glasses is of current interest. Such a demonstration must be done with respect to the best-known scaling of classical heuristics at a given optimality gap of a given problem. For cubic-lattice classical Ising spin glasses, recent theoretical and experimental developments open the possibility of showing quantum speedup for approximate optimization with quantum annealing. It is therefore desirable to understand the optimality-gap range over which such a speedup should be searched for. Here we show that on cubic-lattice tile-planting models, classical meta-heuristics that are linear-time by construction can reach optimality gaps at which simulated annealing and parallel tempering exhibit super-linear scaling. This implies that the optimality gaps achieved by linear-time classical meta-heuristics can serve as useful upper bounds for the optimality-gap range over which quantum speedups in approximate optimization should be searched for. We also explain how classical heuristics with fixed scaling that is beyond-cubic can provide upper bounds to optimality-gap ranges for beyond-quadratic quantum speedups in approximate optimization. These results encourage the development of classical heuristics with fixed scaling that achieve optimality gaps as small as possible.
💡 Research Summary
The paper addresses the challenge of establishing quantum speedup for approximate optimization of classical Ising spin‑glass problems on a three‑dimensional cubic lattice. Demonstrating such a speedup requires a clear benchmark: the best known scaling of classical heuristics at a given optimality gap ε = (E − E_gs)/|E_gs|. The authors propose a linear‑time (O(N)) meta‑heuristic that partitions the lattice into equal‑size contiguous subsystems, optimizes each subsystem independently using any chosen local optimizer (e.g., a fixed‑time simulated annealing run or a tensor‑network contraction), and then stitches the results together. Because the number of subsystems scales linearly with the number of spins, the overall algorithm’s runtime is guaranteed to be linear, which is the theoretical lower bound for any optimization procedure that must assign a value to each spin.
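The partition-and-optimize construction can be sketched in a few lines. The following is a minimal toy sketch, not the paper's implementation: it uses a small periodic ±J cubic lattice, and it replaces the tensor-network subsystem optimizer with fixed-work greedy single-spin descent inside each block; names such as `optimize_block` and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: side length L, subsystem side ell (the paper reports ell = 5).
L, ell = 6, 3
# Random +/-1 couplings on the three forward bonds of each site, periodic boundaries.
J = rng.choice([-1.0, 1.0], size=(3, L, L, L))

def energy(s):
    """H = -sum_<ij> J_ij s_i s_j, one coupling array per lattice direction."""
    return -sum(np.sum(J[d] * s * np.roll(s, -1, axis=d)) for d in range(3))

def local_field(s, x, y, z):
    """h_i = sum_j J_ij s_j over the six neighbors of site (x, y, z)."""
    h = 0.0
    for d in range(3):
        fwd = [x, y, z]; fwd[d] = (fwd[d] + 1) % L
        bwd = [x, y, z]; bwd[d] = (bwd[d] - 1) % L
        h += J[d][x, y, z] * s[tuple(fwd)] + J[d][tuple(bwd)] * s[tuple(bwd)]
    return h

def optimize_block(s, x0, y0, z0, sweeps=20):
    """Fixed-work greedy descent restricted to one ell^3 subsystem -- a crude
    stand-in for the paper's tensor-network subsystem optimizer."""
    for _ in range(sweeps):
        for x in range(x0, x0 + ell):
            for y in range(y0, y0 + ell):
                for z in range(z0, z0 + ell):
                    # Flipping spin i lowers the energy iff s_i * h_i < 0.
                    if s[x, y, z] * local_field(s, x, y, z) < 0:
                        s[x, y, z] *= -1

s = rng.choice([-1.0, 1.0], size=(L, L, L))
e0 = energy(s)
# One pass over the (L/ell)^3 contiguous blocks; each block costs a constant
# amount of work, so the total runtime is O(N) in the number of spins.
for x0 in range(0, L, ell):
    for y0 in range(0, L, ell):
        for z0 in range(0, L, ell):
            optimize_block(s, x0, y0, z0)
print(e0, "->", energy(s))
```

Because each block receives a fixed amount of work and the number of blocks grows linearly with the number of spins, the linear runtime holds regardless of which local optimizer is plugged in.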
The study focuses on the “tile‑planting” model introduced in prior work, which allows systematic control of problem hardness via a parameter p₆ that mixes three base instance classes (F₂₂, F₄₂, and F₆). Two families of instances, gallus_26 (mix of F₂₂ and F₆) and gallus_46 (mix of F₄₂ and F₆), are generated for p₆ ranging from 0.8 to 1.0. In this range, Markov‑chain Monte‑Carlo (MCMC) hardness for exact optimization is known to increase monotonically, providing a convenient testbed for approximate methods.
Using a tensor‑network implementation of the subsystem optimizer with subsystem size ℓ = 5 and an inverse temperature β = 2, the authors evaluate the average optimality gap ε_lin achieved by the linear‑time meta‑heuristic across twenty random instances at each p₆ value. They find that ε_lin decreases as p₆ grows, reaching an average of about 7.5 % for the hardest instances (p₆ ≈ 0.95). This empirical bound is substantially tighter than the theoretical worst‑case guarantee of ≈ 11.8 % for any polynomial‑time approximation algorithm (assuming P ≠ NP).
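The gap metric itself is simple to evaluate. A minimal sketch using the definition ε = (E − E_gs)/|E_gs| from above, with made-up energies (a found energy 7.5 % above a planted ground state), not numbers from the paper:

```python
def optimality_gap(E, E_gs):
    """Relative optimality gap eps = (E - E_gs) / |E_gs|."""
    return (E - E_gs) / abs(E_gs)

# Hypothetical planted ground-state energy and a found energy 7.5% above it.
E_gs = -1000.0
E = E_gs * (1 - 0.075)  # = -925.0
print(optimality_gap(E, E_gs))  # -> 0.075
```

Tile planting makes E_gs known by construction, which is what allows ε to be measured exactly rather than estimated.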
To assess the relevance of ε_lin for quantum‑speedup claims, the same hardest instances are also tackled with two widely used classical heuristics: simulated annealing (SA) and parallel tempering with iso‑energetic cluster moves (PT+ICM). For target gaps in the range 0.5 %–2 %, both SA and PT+ICM exhibit super‑linear scaling (time ∝ N^α with α > 1). Consequently, any quantum algorithm that hopes to outperform these classical methods must achieve a gap smaller than the ε_lin bound while also beating the super‑linear scaling. In other words, ε_lin serves as an upper bound on the ε‑search space for quantum speedup: if a quantum algorithm can only reach ε > ε_lin, it cannot claim a genuine scaling advantage because the linear‑time classical meta‑heuristic already operates at the optimal O(N) bound.
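The super-linear scaling claim corresponds to fitting a power law t(N) ∝ N^α to time-to-target data and finding α > 1, which is a straight-line fit in log-log space. A sketch with synthetic timings (the constants and exponent are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical (N, time-to-target) pairs; real data would come from SA or
# PT+ICM runs at a fixed target gap. A power law t = a * N**alpha is a
# straight line in log-log space, so alpha is the least-squares slope.
N = np.array([1_000, 8_000, 27_000, 64_000, 125_000], dtype=float)
t = 2e-6 * N**1.3  # synthetic super-linear data with alpha = 1.3

alpha, log_a = np.polyfit(np.log(N), np.log(t), 1)
print(f"fitted alpha = {alpha:.2f}")  # alpha > 1 indicates super-linear scaling
```

An exponent α > 1 at a given target ε is exactly the regime in which a provably O(N) heuristic reaching the same ε rules out a classical baseline for quantum-speedup claims.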
The authors further discuss the role of fixed-scaling classical heuristics beyond linear time (e.g., O(N^c) with c > 1). By the same logic as in the linear-time case, the optimality gaps such algorithms achieve upper-bound the ε ranges over which beyond-quadratic quantum speedups could be demonstrated: a beyond-quadratic speedup over an O(N^c) baseline can only be claimed at gaps where all known classical methods scale worse than that fixed baseline. This conceptual framework clarifies that quantum advantage is only plausible when the classical baseline at the same ε is provably super-linear.
The paper’s implications are twofold. First, it demonstrates that linear‑time meta‑heuristics can achieve non‑trivial optimality gaps (single‑digit percent) on hard cubic‑lattice spin‑glass instances, thereby furnishing a practical benchmark for quantum annealing experiments. Second, by explicitly comparing ε_lin to the scaling behavior of SA and PT+ICM, the work narrows the parameter space where quantum annealers (such as D‑Wave systems that have reported ε ≈ 1 %–2 % on >5,000‑spin cubic lattices) could plausibly exhibit a speedup. The authors suggest that further improvements—e.g., larger subsystems, more sophisticated local optimizers, or multi‑pass bootstrapping—might push ε_lin below the 1 % level, tightening the bound even further.
In conclusion, the study provides a concrete, experimentally accessible methodology for establishing upper bounds on the optimality‑gap region relevant to quantum speedup claims. By showing that a simple, provably linear‑time classical algorithm can already achieve ε values comparable to those attained by state‑of‑the‑art quantum annealers, the work emphasizes that any demonstration of quantum advantage must be benchmarked against such linear‑time baselines and must operate in ε regimes where all known classical heuristics are demonstrably super‑linear. This contributes a valuable perspective to the ongoing discourse on quantum versus classical performance in combinatorial optimization.