Loop Series Expansions for Tensor Networks

Belief propagation (BP) can be a useful tool to approximately contract a tensor network, provided that the contributions from any closed loops in the network are sufficiently weak. In this manuscript we describe how a loop series expansion can be applied to systematically improve the accuracy of a BP approximation to a tensor network contraction, in principle converging arbitrarily close to the exact result. More generally, our result provides a framework for expanding a tensor network as a sum of component networks in a hierarchy of increasing complexity. We benchmark this proposal for the contraction of iPEPS, either representing the ground state of an AKLT model or with randomly defined tensors, where it is shown to improve in accuracy over standard BP by several orders of magnitude whilst incurring only a minor increase in computational cost. These results indicate that the proposed series expansions could be a useful tool to accurately evaluate tensor networks in cases that otherwise exceed the limits of established contraction routines.


💡 Research Summary

The paper introduces a systematic method to improve the accuracy of tensor‑network (TN) contractions that are initially approximated by belief propagation (BP). BP is a message‑passing algorithm that, once its messages reach a fixed point on every edge of the network, yields a scalar approximation Z ≈ ∏ₙ T̃ₙ, where each T̃ₙ is the tensor Tₙ contracted with the converged messages on its edges (the Bethe free‑energy approximation). While BP is computationally cheap, it entirely neglects the contributions of closed loops, which can be significant in two‑dimensional and higher‑dimensional networks.
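For intuition, here is a minimal numpy sketch (assumed conventions, not the authors' code) of a single BP message update on a network where each tensor has three virtual legs, as on the hexagonal double‑layer networks considered later; the function and variable names are illustrative.

```python
# Minimal sketch (assumed conventions, not the paper's code): one BP message
# update for a rank-3 tensor T[i, j, k] on a hexagonal-lattice network.
import numpy as np

def bp_update(T, msg_j, msg_k):
    """Outgoing message on leg i, from incoming messages on legs j and k:
    m_out[i] proportional to sum_{j,k} T[i, j, k] * msg_j[j] * msg_k[k]."""
    m_out = np.einsum('ijk,j,k->i', T, msg_j, msg_k)
    return m_out / np.linalg.norm(m_out)  # normalise for numerical stability

# Sweeping this update over all directed edges until the messages stop
# changing gives a BP fixed point; the converged messages then define the
# local scalars T~_n whose product approximates the contraction.
```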

To remedy this, the authors adopt the loop‑series expansion originally proposed by Chertkov and Chernyak. For each edge (r, s) they define two complementary projectors: P_rs, projecting onto the BP ground‑state subspace (the outer product of the two converged messages on that edge), and P^C_rs = I − P_rs, projecting onto the excited subspace. Choosing for every edge either the ground or the excited projector expresses the original network as a sum over 2ᴹ configurations, where M is the number of edges. A configuration’s “degree” is the number of excited edges it contains; degree 0 corresponds to the pure BP vacuum. Crucially, any configuration containing a dangling excitation (an excited edge attached to a tensor whose remaining edges all carry ground‑state projectors) has zero weight, a result that follows directly from the BP fixed‑point equations. Consequently, only closed‑loop excitations contribute non‑zero terms.
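A small sketch of how such edge projectors can be built, assuming the two converged messages on an edge are stored as plain vectors m_rs (r→s) and m_sr (s→r); the names and normalisation convention are assumptions for illustration.

```python
# Sketch (assumed normalisation): ground-state and excited projectors on an
# edge, built from the converged BP messages m_rs (r -> s) and m_sr (s -> r).
import numpy as np

def edge_projectors(m_rs, m_sr):
    """P projects onto the rank-1 'BP vacuum' subspace spanned by the
    converged messages (it satisfies P @ P = P); P_C = I - P projects onto
    the complementary, excited subspace."""
    P = np.outer(m_rs, m_sr) / np.dot(m_sr, m_rs)
    P_C = np.eye(len(m_rs)) - P
    return P, P_C

# Inserting the identity I = P + P_C on every one of the M edges expands the
# network into 2**M configurations; only configurations whose excited edges
# form closed loops contribute.
```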

The weight W(δₓ) of a configuration of degree x is obtained by contracting a small sub‑network in which the external indices are fixed to the BP messages and the internal indices are projected onto the excited subspace. The authors show that for configurations whose excited edges are disjoint, the total weight factorises as the product of the individual weights, allowing the whole expansion to be built from connected excitations. They further argue (Appendix A) that when the BP approximation is already reasonably accurate, the weights decay exponentially with degree, W(δₓ) ≈ e^{−kx}, with a positive constant k. This exponential suppression makes a low‑order truncation (e.g., up to degree 14) sufficient to achieve high precision.
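As a toy illustration of how a truncated series correction might be assembled from such weights: the degrees (6, 10, 14, the shortest loop lengths on a hexagonal lattice), multiplicities, and the exp(−kx) decay below are made‑up numbers, not results from the paper.

```python
# Toy sketch (illustrative numbers, not data from the paper): summing the
# weights of closed-loop excitations up to a maximum degree to correct the
# bare BP value Z_bp.
import numpy as np

def truncated_series(Z_bp, loop_weights, max_degree):
    """loop_weights: dict mapping degree x -> list of weights W(delta_x)
    of the closed-loop configurations of that degree."""
    correction = 1.0
    for degree, weights in loop_weights.items():
        if degree <= max_degree:
            correction += sum(weights)
    return Z_bp * correction

# Hypothetical weights decaying roughly as e^{-k x} with k = 1:
k = 1.0
loop_weights = {6: [np.exp(-6 * k)],
                10: [np.exp(-10 * k)] * 3,
                14: [np.exp(-14 * k)] * 6}
print(truncated_series(1.0, loop_weights, max_degree=14))
```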

The method is applied to infinite projected entangled‑pair states (iPEPS) on a hexagonal lattice. The closed network T = ⟨ψ|ψ⟩ is formed by contracting a PEPS with its conjugate, yielding a network built from two unique tensors, a and b (with the physical indices already contracted). Three observables are targeted: the free‑energy density f, the transfer matrix T_AB (obtained by cutting a single edge), and the two‑site reduced density matrix ρ_AB (obtained by inserting impurity tensors with open physical legs). For each observable the loop‑excitation contributions are computed up to a chosen degree, and each contribution is multiplied by its exponential suppression factor.
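A brief sketch of how the double‑layer tensors of the norm network can be formed (assumed index conventions; A is a hypothetical rank‑4 PEPS tensor with one physical leg and three virtual legs of dimension D):

```python
# Sketch (assumed index conventions): building a double-layer tensor of the
# norm network T = <psi|psi> from a hexagonal-lattice PEPS tensor
# A[p, i, j, k], where p is the physical leg and i, j, k are virtual legs.
import numpy as np

def double_layer(A):
    """Contract the physical leg of A with its conjugate and fuse the
    ket/bra virtual legs pairwise, giving a rank-3 tensor of bond
    dimension D**2."""
    D = A.shape[1]
    a = np.einsum('pijk,pIJK->iIjJkK', A, A.conj())
    return a.reshape(D * D, D * D, D * D)
```

Under these assumptions, the two unique tensors a and b of the summary would be obtained in this way from the two inequivalent PEPS tensors of the hexagonal unit cell.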

