Branch-and-Bound Tensor Networks for Exact Ground-State Characterization
Characterizing the ground-state properties of disordered systems, such as spin glasses and combinatorial optimization problems, is fundamental to science and engineering. However, computing exact ground states and counting their degeneracies are generally NP-hard and #P-hard problems, respectively, posing a formidable challenge for exact algorithms. Recently, tensor network methods, which exploit high-dimensional linear algebra and massive hardware parallelization, have emerged as a rapidly developing paradigm for efficiently solving these tasks. Despite their success, these methods are fundamentally constrained by the exponential growth of space complexity, which severely limits their scalability. To address this bottleneck, we introduce the Branch-and-Bound Tensor Network (BBTN) method, which seamlessly integrates the adaptive search framework of branch-and-bound with the efficient contraction of tropical tensor networks, significantly extending the reach of exact algorithms. We show that BBTN significantly surpasses existing state-of-the-art solvers, setting new benchmarks for exact computation. It pushes the boundaries of tractability to previously unreachable scales, enabling exact ground-state counting for $\pm J$ spin glasses up to $64 \times 64$ and solving Maximum Independent Set problems on King’s subgraphs up to $100 \times 100$. For hard instances, BBTN dramatically reduces the computational cost of standard Tropical Tensor Networks, compressing years of runtime into minutes. Furthermore, it outperforms leading integer-programming solvers by over 30$\times$, establishing a versatile and scalable framework for solving hard problems in statistical physics and combinatorial optimization.
💡 Research Summary
The paper introduces a novel algorithm called Branch‑and‑Bound Tensor Network (BBTN) that combines the classic branch‑and‑bound (B&B) search paradigm with tropical tensor network contraction to compute exact ground‑state energies and degeneracies of hard combinatorial problems. Computing exact ground states of spin glasses and solving maximum independent set (MIS) problems exactly are NP‑hard, and counting the optimal configurations is #P‑hard; existing tensor‑network approaches, while exploiting massive GPU parallelism, suffer from exponential memory growth tied to the network’s treewidth. The common mitigation, slicing, fixes a pre‑selected set of variables V_f and enumerates all 2^{|V_f|} resulting sub‑networks, which reduces memory but incurs an exponential time blow‑up, making it impractical beyond very small instances.
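To make the slicing trade-off concrete, here is a minimal sketch (not the paper's code; all names are illustrative, and the "network contraction" is replaced by brute-force enumeration on a tiny Ising loop): fixing a set of variables V_f splits one problem into 2^{|V_f|} independent sub-problems, each cheaper in memory, whose results are combined by taking the minimum.

```python
# Illustrative sketch of "slicing": fix the variables in V_f and solve
# one sub-problem per assignment, then take the minimum over slices.
# The Ising instance and helper names below are ours, for illustration only.
from itertools import product

edges = {(0, 1): 1, (1, 2): -1, (2, 3): 1, (3, 0): -1}  # couplings J_ij

def energy(spins):
    # Standard Ising convention: E = -sum_{ij} J_ij * s_i * s_j
    return -sum(J * spins[i] * spins[j] for (i, j), J in edges.items())

def min_energy_sliced(n, fixed_vars):
    best = float("inf")
    # One sub-problem per assignment of the sliced (fixed) variables:
    # 2^{|V_f|} slices in total, each enumerating only the free spins.
    for assign in product((-1, +1), repeat=len(fixed_vars)):
        fixed = dict(zip(fixed_vars, assign))
        free = [v for v in range(n) if v not in fixed]
        for rest in product((-1, +1), repeat=len(free)):
            spins = dict(fixed)
            spins.update(zip(free, rest))
            best = min(best, energy(spins))
    return best

print(min_energy_sliced(4, fixed_vars=[0]))   # same optimum, via 2 slices
```

The minimum is identical for any choice of V_f; only the memory/time split changes, which is why slicing alone becomes untenable once |V_f| must be large.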
BBTN addresses both memory and time bottlenecks through two key mechanisms. First, pruning: using a global upper bound (the best energy found so far) and local lower bounds derived within the B&B framework (via a boundary‑equivalence principle and logical inference), BBTN discards any branch whose lower bound exceeds the global bound. This dramatically shrinks the search tree, especially for problems with sparse feasible regions. Second, online branching: rather than fixing a uniform set of variables across all branches, BBTN dynamically selects a subset R_f of the current branching region R based on a memory‑overflow metric ρ(T) = max(0, log₂|T| − log₂ T_target). The algorithm chooses R_f to minimize the branching factor γ, obtained by solving the characteristic equation γ^{ρ(T)} = Σ_i γ^{ρ(T_i)}. This yields an unbalanced tree with far fewer leaf sub‑networks while keeping the largest intermediate tensor below a prescribed memory threshold.
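The overflow metric and the characteristic equation can be sketched in a few lines. This is an assumption-laden illustration, not the paper's implementation: it computes ρ(T) from a tensor's element count and solves γ^{ρ(T)} = Σ_i γ^{ρ(T_i)} for γ > 1 by bisection, assuming the parent overflow ρ(T) exceeds every child's ρ(T_i).

```python
# Hypothetical sketch of the memory-overflow metric and branching factor.
# rho(T) = max(0, log2|T| - log2 T_target); gamma solves
# gamma^{rho(T)} = sum_i gamma^{rho(T_i)}, assuming rho(T) > max_i rho(T_i).
import math

def rho(tensor_size, target_size):
    # Overflow in bits of the largest intermediate tensor past the target.
    return max(0.0, math.log2(tensor_size) - math.log2(target_size))

def branching_factor(rho_parent, rho_children, tol=1e-9):
    # Root of f(g) = g**rho_parent - sum(g**r for r in children), g > 1.
    f = lambda g: g ** rho_parent - sum(g ** r for r in rho_children)
    lo, hi = 1.0 + 1e-12, 2.0
    while f(hi) <= 0:          # grow the bracket until the sign flips
        hi *= 2.0
    while hi - lo > tol:       # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For instance, a parent with ρ(T) = 3 splitting into two children with ρ(T_i) = 1 gives γ³ = 2γ, i.e. γ = √2: the smaller γ is, the flatter the resulting search tree, which is exactly what the online branching step minimizes.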
The workflow proceeds as follows: (1) encode the problem as a tensor network T(G) over the min‑sum tropical algebra (⊕ = min, ⊙ = +); (2) initialize the global bound to +∞; (3) at each node, apply pruning to eliminate infeasible or sub‑optimal configurations; (4) apply online branching to generate child sub‑networks with selected fixed variables; (5) recurse until all leaves are reached; (6) combine the contraction results of all leaves under the tropical sum (i.e., take their minimum) to obtain the exact optimum and, when paired with ordinary real‑valued tensors, the exact count of optimal configurations.
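Step (1) is the heart of the encoding, and it can be illustrated with the simplest possible network: a 1-D Ising chain, whose tropical contraction is a min-plus transfer-matrix product. This is a minimal sketch with illustrative names, not the paper's code; it only shows how replacing (+, ×) with (min, +) turns tensor contraction into exact minimization.

```python
# Tropical (min, +) contraction of a 1-D Ising chain: matrix "product"
# uses min in place of + and + in place of *. Names are illustrative.
import numpy as np

def tropical_matmul(A, B):
    # (A ⊙ B)[i, j] = min_k (A[i, k] + B[k, j])
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def chain_ground_energy(couplings):
    s = np.array([-1.0, 1.0])          # spin values s ∈ {-1, +1}
    vec = np.zeros((1, 2))             # tropical identity row vector
    for J in couplings:
        M = -J * np.outer(s, s)        # bond tensor M[s, s'] = -J * s * s'
        vec = tropical_matmul(vec, M)
    return float(np.min(vec))          # tropical sum over the last spin

print(chain_ground_energy([1, -1, 1]))  # -> -3.0 (every bond satisfiable)
```

On a chain every bond can be satisfied, so the tropical contraction returns −Σ|J_ij| exactly; on lattices with loops the same algebra runs over a genuine network contraction, which is where treewidth-driven memory growth (and hence BBTN's branching) enters.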
The authors evaluate BBTN on two benchmark families. The first is the ±J Ising spin glass on 2‑D and 3‑D lattices, where couplings J_{ij}=±1 are drawn uniformly and a small external field h_i=0.5 is added. The second is the maximum independent set problem on King’s subgraphs (KSG), a graph class used to benchmark neutral‑atom quantum processors. They consider both random KSG with filling factor 0.8 (high degeneracy) and structured KSG that encode integer factorization instances (low degeneracy).
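For readers who want to reproduce the flavor of the first benchmark family, a ±J instance is easy to generate. The function below is our own illustrative generator, not the authors' instance set: it draws J_ij = ±1 uniformly on the bonds of an L×L grid and attaches the uniform field h_i = 0.5 described above.

```python
# Illustrative generator for a ±J spin-glass instance on an L×L grid
# with uniform external field h (names and layout are ours).
import random

def pm_j_instance(L, h=0.5, seed=0):
    rng = random.Random(seed)
    couplings = {}
    for x in range(L):
        for y in range(L):
            if x + 1 < L:   # horizontal bond
                couplings[((x, y), (x + 1, y))] = rng.choice((-1, 1))
            if y + 1 < L:   # vertical bond
                couplings[((x, y), (x, y + 1))] = rng.choice((-1, 1))
    fields = {(x, y): h for x in range(L) for y in range(L)}
    return couplings, fields
```

The small field breaks the global spin-flip symmetry, so ground states (and their count) are well defined without the trivial factor of two.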
Performance results are striking. For 2‑D spin glasses, BBTN solves instances up to N=65 (4,225 spins) on a single GPU with a memory target of 2³¹ bytes, while slicing would require thousands of years of CPU time for N≈60 and traditional B&B fails beyond N=15. In 3‑D lattices and random regular graphs, BBTN remains tractable up to 8×8×8 lattices (512 spins). For MIS/MWIS on KSG, BBTN handles graphs up to N=100 (≈10,000 vertices) and consistently outperforms the state‑of‑the‑art integer‑programming solver SCIP by a factor of about 30. Even for structured KSG encoding 16‑bit integer factorization, BBTN finds optimal solutions within seconds, whereas slicing would take months or years.
Importantly, because BBTN contracts both tropical and ordinary tensors, it naturally yields the exact ground‑state degeneracy, enabling detailed statistical‑physics analyses of energy landscapes and providing insight into the solution‑space structure of combinatorial problems.
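How degeneracy counting rides along with the tropical optimum can be sketched with a semiring of (energy, count) pairs: ⊕ keeps the lower energy and adds counts on ties, ⊙ adds energies and multiplies counts. The chain transfer-matrix example below is a hedged illustration under that pairing, not the paper's implementation.

```python
# (energy, count) pair semiring: tropical min on energies, with
# multiplicities added on ties (⊕) and multiplied under composition (⊙).
def t_add(a, b):
    if a[0] < b[0]:
        return a
    if b[0] < a[0]:
        return b
    return (a[0], a[1] + b[1])   # tie: same energy, counts add

def t_mul(a, b):
    return (a[0] + b[0], a[1] * b[1])   # energies add, counts multiply

def chain_min_and_count(couplings):
    spins = (-1, 1)
    vec = {s: (0.0, 1) for s in spins}           # identity pairs
    for J in couplings:
        new = {}
        for s2 in spins:
            acc = (float("inf"), 0)              # tropical zero
            for s1 in spins:
                acc = t_add(acc, t_mul(vec[s1], (-J * s1 * s2, 1)))
            new[s2] = acc
        vec = new
    out = (float("inf"), 0)
    for s in spins:
        out = t_add(out, vec[s])
    return out                                    # (min energy, degeneracy)

print(chain_min_and_count([1, -1]))  # -> (-2.0, 2): two ground states
```

The two ground states of this three-spin chain are related by a global spin flip; on the benchmark instances the same bookkeeping, carried through the full network contraction, yields the exact ground-state degeneracy discussed above.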
In summary, BBTN achieves a three‑fold breakthrough: (i) it dynamically controls memory usage through adaptive branching; (ii) it constructs a highly unbalanced search tree that eliminates the exponential blow‑up inherent to slicing; and (iii) it leverages the high‑throughput GPU implementation of tropical tensor contraction without modification. This makes BBTN the first scalable exact algorithm capable of solving large‑scale spin‑glass ground‑state and MIS problems that were previously out of reach, opening new avenues for both theoretical investigations in disordered systems and practical optimization on emerging quantum‑hardware platforms.