Computing quantum entanglement with machine learning
Entanglement calculations in quantum field theories are extremely challenging and typically rely on the replica trick, in which the problem is rephrased as the study of defects. We demonstrate that the use of deep generative models drastically outperforms standard Monte Carlo algorithms. Remarkably, such a machine-learning method enables high-precision estimates of Rényi entropies in three dimensions for very large lattices. Moreover, we propose a new paradigm for studying lattice defects with flow-based sampling.
💡 Research Summary
This paper presents a novel machine learning-based algorithm for computing quantum entanglement in lattice field theories, specifically focusing on Rényi entropies. Entanglement is a fundamental concept with wide-ranging applications, but its calculation in quantum field theories is notoriously difficult. The standard approach uses the replica trick, which recasts the calculation of Rényi entropies as the computation of ratios of partition functions for systems with and without a specific geometric defect (a branch cut connecting replicas). Traditional Monte Carlo methods struggle to compute such partition function ratios directly.
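For reference, the replica-trick relation described above can be written in the standard notation (not taken verbatim from the paper):

```latex
S_n(A) = \frac{1}{1-n}\,\ln \operatorname{Tr}\rho_A^{\,n},
\qquad
\operatorname{Tr}\rho_A^{\,n} = \frac{Z_n(A)}{Z^n},
```

where $\rho_A$ is the reduced density matrix of subsystem $A$, $Z_n(A)$ is the partition function on the $n$-sheeted replica geometry with a branch cut along $A$, and $Z$ is the ordinary partition function. Computing $S_n$ therefore reduces to estimating the partition-function ratio $Z_n(A)/Z^n$, which is exactly the quantity standard Monte Carlo sampling cannot access directly.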
The core innovation of this work is the application of Normalizing Flows (NFs), a type of deep generative model, to this problem. NFs learn an invertible transformation that maps samples from a simple prior distribution to samples from a complex target distribution. A key advantage is that this process allows for direct estimation of the ratio between the partition functions of the prior and target systems.
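To illustrate how a flow yields a partition-function ratio, here is a minimal toy sketch (not the paper's code or model): the prior is a standard Gaussian with action $S_0(z) = z^2/2$, the target is a Gaussian of width `sigma`, and the exactly trained flow is the affine map $x = \sigma z$. The prior expectation of the reweighting factor $w = \exp(-S_1(f(z)) + S_0(z) + \log|\det \partial f/\partial z|)$ equals the ratio of the target and prior partition functions.

```python
import numpy as np

# Toy sketch: estimating a partition-function ratio with a normalizing flow.
# Prior: standard Gaussian, action S0(z) = z^2 / 2, Z0 = sqrt(2*pi).
# Target: Gaussian of width sigma, action S1(x) = x^2 / (2*sigma^2),
#         Z1 = sigma * sqrt(2*pi), so the exact ratio Z1/Z0 is sigma.

rng = np.random.default_rng(0)
sigma = 2.0

def S0(z):
    return 0.5 * z**2

def S1(x):
    return 0.5 * x**2 / sigma**2

z = rng.standard_normal(10_000)   # samples from the prior
x = sigma * z                     # push samples through the (exact) flow
log_det = np.log(sigma)           # log-Jacobian of the affine flow

# Reweighting factor whose prior expectation is Z_target / Z_prior.
w = np.exp(-S1(x) + S0(z) + log_det)
ratio = w.mean()
print(ratio)
```

Because the flow here is exact, every weight equals `sigma` and the estimator has zero variance; an imperfectly trained flow would give the same expectation with nonzero variance, which is what training aims to reduce.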
The authors’ most significant contribution is the introduction of the “defect coupling layer.” They recognize that the change in Rényi entropy due to a variation in subsystem size is governed by local modifications around the endpoint of the branch cut in replica space—a “defect.” Instead of constructing an NF that acts on the entire lattice, which is computationally prohibitive for large systems, they design a model that transforms only a small, carefully selected set of degrees of freedom (active sites) in the immediate vicinity of this defect. The transformation parameters for these active sites are determined by a neural network that takes the values of nearby “frozen sites” as input. This localization strategy drastically reduces the number of degrees of freedom the NF must process, leading to massive gains in computational efficiency.
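A drastically simplified, hypothetical sketch of this idea is an affine coupling transformation that updates only a few "active" sites near the defect, with scale and shift parameters computed from the surrounding "frozen" sites (here a random linear map stands in for the paper's neural network, and the field lives on a 1D toy lattice rather than the (2+1)-dimensional one):

```python
import numpy as np

# Simplified sketch of a defect coupling layer: only the active sites
# adjacent to the defect are transformed; the transformation parameters
# are functions of nearby frozen sites, so the map stays invertible and
# its Jacobian is cheap to compute.

rng = np.random.default_rng(1)
L = 16
phi = rng.standard_normal(L)          # toy 1D field configuration

defect = 8                            # lattice position of the defect endpoint
active = np.array([defect - 1, defect, defect + 1])   # sites the flow updates
frozen = np.array([defect - 2, defect + 2])           # conditioning sites

# Stand-in for a neural network: frozen sites -> (log-scale, shift) per active site.
W_s = 0.1 * rng.standard_normal((len(active), len(frozen)))
W_t = 0.1 * rng.standard_normal((len(active), len(frozen)))

def coupling_forward(field):
    out = field.copy()
    s = W_s @ field[frozen]           # log-scales for the active sites
    t = W_t @ field[frozen]           # shifts for the active sites
    out[active] = field[active] * np.exp(s) + t
    log_det = s.sum()                 # log-Jacobian of the affine update
    return out, log_det

def coupling_inverse(field_out):
    out = field_out.copy()
    s = W_s @ field_out[frozen]       # frozen sites are untouched, so s, t agree
    t = W_t @ field_out[frozen]
    out[active] = (field_out[active] - t) * np.exp(-s)
    return out

phi_new, log_det = coupling_forward(phi)
phi_back = coupling_inverse(phi_new)
```

The point of the construction is visible in the shapes: the network only ever sees `len(frozen)` inputs and produces `len(active)` outputs, independent of the lattice volume `L`, which is why localizing the flow around the defect decouples its cost from the system size.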
The method is numerically validated for a (2+1)-dimensional scalar 𝜙⁴ theory at its critical point. Performance benchmarks show that the proposed NF approach consistently outperforms other flow-based samplers like Non-Equilibrium Monte Carlo (NEMC) and Stochastic Normalizing Flows (SNF). In terms of the total simulation time required to achieve a given accuracy for the entropic c-function, the NF method is superior across all lattice volumes tested, with the advantage becoming more pronounced for larger systems. Furthermore, for a fixed amount of statistical data, the NF provides more precise estimates of the c-function across different subsystem sizes.
In conclusion, this work successfully demonstrates that machine learning, particularly through the tailored use of Normalizing Flows with defect coupling layers, offers a powerful and efficient new paradigm for computing entanglement entropies in lattice field theories. While the defect coupling layer does not fundamentally solve the general scaling issues of NFs with system size, it reduces the practical computational cost to a point where NFs become highly competitive for a wide range of physically relevant volumes. The authors emphasize that the philosophy of their approach—identifying and acting only on the physically relevant local region—is a general strategy that can be applied to the study of other defect-related observables in lattice theories, such as interfaces in spin models or topological features in gauge theories.