Scalable Preparation of Matrix Product States with Sequential and Brick Wall Quantum Circuits
Preparing arbitrary quantum states requires exponential resources. Matrix Product States (MPS) admit more efficient constructions, particularly when accuracy is traded for circuit complexity. Existing approaches to MPS preparation mostly rely on heuristic circuits that are deterministic but quickly saturate in accuracy, or on variational optimization methods that reach high fidelities but scale poorly. This work introduces an end-to-end MPS preparation framework that combines the strengths of both strategies within a single pipeline. Heuristic staircase-like and brick wall disentangler circuits provide warm-start initializations for variational optimization, enabling high-fidelity state preparation for large systems. Target MPSs are either specified as physical quantum states or constructed from classical datasets via amplitude encoding, using step-by-step singular value decompositions or tensor cross interpolation. The framework incorporates entanglement-based qubit reordering, reformulated as a quadratic assignment problem, and low-level optimizations that reduce depths by up to 50% and CNOT counts by 33%. We evaluate the full pipeline on datasets of varying complexity across systems of 19-50 qubits and identify trade-offs between fidelity, gate count, and circuit depth. Optimized brick wall circuits typically achieve the lowest depths, while the optimized staircase-like circuits minimize gate counts. Overall, our results provide principled and scalable protocols for preparing MPSs as quantum circuits, supporting utility-scale applications on near-term quantum devices.
💡 Research Summary
This paper addresses the fundamental challenge of quantum state preparation (QSP) by presenting a complete, end‑to‑end pipeline that converts a target Matrix Product State (MPS) into an efficient quantum circuit. The authors identify two dominant bottlenecks in existing approaches: deterministic heuristic circuits quickly saturate in fidelity, while variational optimization suffers from poor scalability and barren‑plateau gradients. Their solution integrates the strengths of both strategies in a four‑stage workflow.
First, the target amplitudes—either from a known quantum ground state or from a classical dataset encoded via amplitude encoding—are compressed into an MPS. Two compression methods are supported: a full‑amplitude singular‑value‑decomposition (SVD) that scales as O(N χ³) and a tensor‑cross‑interpolation (TCI) algorithm that requires only O(N χ²) amplitude queries, enabling the handling of very large low‑rank data.
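The sweep of step-by-step SVDs can be sketched in a few lines of numpy. This is a generic illustration of sequential truncated-SVD compression, not the authors' implementation; the function names and the `1e-12` rank cutoff are our choices.

```python
import numpy as np

def state_to_mps(psi, n_qubits, chi_max):
    """Compress a length-2**n state vector into a chain of MPS tensors
    by sweeping left-to-right with truncated SVDs (generic sketch)."""
    tensors = []
    rest = np.asarray(psi).reshape(1, -1)       # (left bond, remaining amplitudes)
    for _ in range(n_qubits - 1):
        bond = rest.shape[0]
        rest = rest.reshape(bond * 2, -1)       # split off one physical leg
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(chi_max, int(np.count_nonzero(s > 1e-12)))
        tensors.append(u[:, :keep].reshape(bond, 2, keep))
        rest = s[:keep, None] * vh[:keep]       # absorb singular values rightward
    tensors.append(rest.reshape(-1, 2, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the MPS back into a dense state vector (for verification)."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)
```

Each truncation `keep = min(chi_max, rank)` is where accuracy is traded for bond dimension; a GHZ-like state, for instance, compresses exactly at χ = 2.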
Second, the authors reorder the qubits to reduce entanglement bottlenecks. By computing the quantum mutual information (QMI) matrix I_ij for the MPS, they formulate the cost function C(π) = Σ_ij I_ij |π(i) − π(j)|^η (with η = 1) and recognize its minimization as a Quadratic Assignment Problem (QAP). Using fast approximate QAP solvers and a 2‑opt heuristic from the SciPy library, they obtain a near‑optimal permutation π that places strongly correlated qubits close together on the MPS chain, thereby lowering the required bond dimension and truncation error without repeated SVD reconstructions.
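This QAP formulation maps directly onto SciPy's `quadratic_assignment` solver: take A as the QMI matrix and B as the chain-distance matrix |i − j|, so that the QAP objective trace(AᵀPBPᵀ) equals C(π). A minimal sketch, with helper names of our own (the paper additionally refines the result with 2‑opt):

```python
import numpy as np
from scipy.optimize import quadratic_assignment

def ordering_cost(qmi, perm):
    """C(pi) = sum_ij I_ij |pi(i) - pi(j)|, i.e. eta = 1."""
    perm = np.asarray(perm)
    dist = np.abs(np.subtract.outer(perm, perm))
    return float((qmi * dist).sum())

def reorder_qubits(qmi):
    """Assign qubit i to chain position perm[i] with SciPy's fast
    approximate 'faq' QAP solver (sketch; a 2-opt pass can refine it)."""
    n = qmi.shape[0]
    idx = np.arange(n)
    dist = np.abs(np.subtract.outer(idx, idx)).astype(float)  # B matrix
    res = quadratic_assignment(qmi, dist, method="faq")       # minimizes C(pi)
    return res.col_ind, res.fun
```

For a QMI matrix whose only strong entry couples the first and last qubit, the solver pulls that pair together on the chain, which is exactly what lowers the bond dimension between them.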
Third, the reordered MPS is mapped to a quantum circuit via two families of heuristic “disentangler” constructions. The Sequential Matrix Product Disentangler (SMPD) creates a staircase‑like layout based on left, right, or mixed canonical forms, while the Brick‑Wall Matrix Product Disentangler (BMPD) arranges nearest‑neighbor gates in a brick‑wall pattern, yielding shallower depth. Both families exploit recent isometric decompositions that implement any two‑qubit unitary with only two CNOTs (instead of the standard three) and apply gate‑pruning techniques to eliminate operations that do not affect fidelity, reducing both depth and total CNOT count.
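The depth difference between the two layouts comes purely from gate placement: a staircase sweep applies its nearest-neighbour gates one after another, while a brick-wall layer applies all its non-overlapping gates in parallel. A small layout sketch (function names are ours, and this ignores gate contents entirely):

```python
def staircase_layers(n):
    """One staircase sweep: gates (0,1),(1,2),... share qubits,
    so they run sequentially and one sweep costs depth n-1."""
    return [[(i, i + 1)] for i in range(n - 1)]

def brickwall_layers(n, n_layers):
    """Brick-wall pattern: alternating even/odd layers of disjoint
    nearest-neighbour gates, so each layer costs depth 1."""
    layers = []
    for l in range(n_layers):
        start = l % 2
        layers.append([(i, i + 1) for i in range(start, n - 1, 2)])
    return layers
```

On 8 qubits, a single staircase sweep spends depth 7 on 7 gates, whereas two brick-wall layers place the same 7 gates at depth 2, which is why the BMPD family yields shallower circuits per layer.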
Fourth, the heuristic circuits serve as warm‑starts for variational optimization. Two optimizers are employed: the Evenbly‑Vidal (EV) method, which iteratively optimizes each layer in the tensor‑network picture, and a Riemannian (R) optimizer that performs gradient descent directly on the unitary manifold. Because the initial parameters already approximate the target state, the optimization avoids barren plateaus and converges rapidly. The authors also explore a looped scheme where a new layer is generated heuristically, the whole circuit is re‑optimized, and the process repeats, allowing progressive improvement of both fidelity and circuit efficiency.
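The core move of a Riemannian optimizer is to keep every gate exactly unitary during gradient descent: project the Euclidean gradient onto the tangent space of the unitary manifold, step, then retract back onto the manifold. The sketch below is a generic single step of this technique (QR retraction, a common choice), not the paper's specific optimizer:

```python
import numpy as np

def riemannian_step(U, euclid_grad, lr=0.1):
    """One Riemannian gradient-descent step on the unitary manifold:
    tangent-space projection followed by a QR retraction (generic sketch)."""
    # Project: G - U G^dag U is (twice) the tangent component at U.
    rgrad = euclid_grad - U @ euclid_grad.conj().T @ U
    # Step in the embedding space, then retract with QR to restore unitarity.
    Q, R = np.linalg.qr(U - lr * rgrad)
    # Fix QR's column-phase ambiguity so the retraction is well defined.
    d = np.diag(R)
    return Q * (d / np.abs(d))
```

Because the warm-started circuit already lies near the target, a few such steps per gate suffice; starting from random unitaries instead is where barren plateaus bite.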
Experimental evaluation is performed on systems of 19–50 qubits using four classical datasets of increasing complexity: Gaussian and Lévy probability distributions, the chaotic Lorenz attractor, and S&P 500 time series. Key findings include: (i) BMPD combined with EV yields the shallowest circuits (roughly 0.6× the depth of the unoptimized O(N χ²) construction) while achieving >99% fidelity; (ii) SMPD combined with R minimizes total CNOT count (≈ 0.7× the baseline) with comparable fidelity; (iii) QAP‑based qubit reordering reduces the maximal bond dimension by ~15% on average, translating into lower computational cost in both the compression and optimization stages; (iv) a modest number (2–3) of optimization‑loop iterations improves fidelity by 5–10% without substantially increasing gate count.
The paper’s contributions are threefold. It unifies compression, qubit reordering, heuristic initialization, and variational refinement into a single, scalable framework. It introduces a practical QAP formulation for qubit placement, leveraging decades of combinatorial‑optimization research to overcome a previously expensive step. Finally, it demonstrates that modern two‑CNOT isometric decompositions and gate‑pruning can bring MPS‑based state preparation within the reach of near‑term noisy intermediate‑scale quantum (NISQ) devices.
Overall, the work provides a principled, resource‑aware protocol for preparing high‑fidelity MPS on quantum hardware, opening the door to utility‑scale applications in quantum chemistry, quantum machine learning, and quantum simulation where classical MPS representations are already successful.