Non-Perturbative Trivializing Flows for Lattice Gauge Theories

Continuous normalizing flows are known to be highly expressive and flexible, which allows for easier incorporation of large symmetry groups and makes them a powerful computational tool for lattice field theories. Building on previous work, we present a general continuous normalizing flow architecture for matrix Lie groups that is equivariant under group transformations. We apply this to lattice gauge theories in two dimensions as a proof of principle and demonstrate competitive performance, showing its potential as a tool for future lattice computations.


💡 Research Summary

This paper introduces a fully gauge‑equivariant continuous normalizing flow (CNF) framework for lattice gauge theories, extending the expressive power of normalizing flows to matrix Lie groups such as SU(N). The authors formulate the flow as an ordinary differential equation (ODE) on the group manifold, \( \dot U(t) = Z_{\theta}(U, t)\, U(t) \), where the vector field \( Z_{\theta} \) is parameterized by a neural network and takes values in the Lie algebra \( \mathfrak{su}(N) \). By using a basis of the algebra, the divergence needed for the change‑of‑variables formula can be computed analytically, guaranteeing that the log‑density evolves according to \( \dot L = -\nabla \cdot \dot U \).
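
The coupled evolution of a link and its log‑density can be illustrated on a single SU(2) variable. The sketch below is not the paper's code: it uses a toy vector field in place of the neural network \( Z_{\theta} \), and it estimates the divergence along the algebra directions by finite differences rather than the analytic basis expansion described above. Helper names such as `Z_field`, `expm_su`, and `divergence` are mine.

```python
import numpy as np

# su(2) basis T_a = -i * sigma_a / 2: traceless and anti-Hermitian (a normalization
# choice for this sketch), so that tr(T_a^dagger T_b) = delta_ab / 2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sigma]

def proj_su_n(M):
    """Project a matrix onto su(N): anti-Hermitian and traceless."""
    A = 0.5 * (M - M.conj().T)
    return A - np.trace(A) / M.shape[0] * np.eye(M.shape[0])

def Z_field(U, beta=2.0):
    """Toy algebra-valued vector field standing in for the network Z_theta."""
    return proj_su_n(beta * U)

def expm_su(A):
    """Matrix exponential of an anti-Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(-1j * A)            # -iA is Hermitian
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

def component(Z, Ta):
    """Coefficient of Z along T_a, using tr(T_a^dagger T_b) = delta_ab / 2."""
    return 2.0 * np.real(np.trace(Ta.conj().T @ Z))

def divergence(U, eps=1e-5):
    """Divergence of Z along the group directions U -> exp(eps T_a) U,
    here by central finite differences (the paper obtains it analytically)."""
    div = 0.0
    for Ta in T:
        Zp = Z_field(expm_su(+eps * Ta) @ U)
        Zm = Z_field(expm_su(-eps * Ta) @ U)
        div += (component(Zp, Ta) - component(Zm, Ta)) / (2.0 * eps)
    return div

# one Lie-Euler step of the coupled (U, log q) system for a single link
rng = np.random.default_rng(0)
U = expm_su(proj_su_n(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))))
log_q, h = 0.0, 0.01
log_q -= h * divergence(U)                    # d(log q)/dt = -div Z
U = expm_su(h * Z_field(U)) @ U               # dU/dt = Z(U) U, stays in SU(2)
print("unitarity violation:", np.linalg.norm(U @ U.conj().T - np.eye(2)))
```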

A central technical contribution is the adaptation of the adjoint sensitivity method to Lie‑group dynamics. The adjoint state \( A(t) \) lives in the same algebra and obeys its own ODE, allowing gradients with respect to the network parameters to be obtained by integrating backward in time. This approach dramatically reduces memory consumption while preserving exact gradients for arbitrary network architectures.
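
To make the mechanics concrete, here is a minimal flat‑space sketch of the adjoint method, not the Lie‑group version used in the paper: the state, the adjoint, and the parameter gradient are integrated backward jointly, so no intermediate states need to be stored. The vector field \( f(x) = \tanh(Wx) \), the quadratic toy loss, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D, T, steps = 4, 1.0, 200
h = T / steps
W = rng.normal(size=(D, D)) * 0.3            # hypothetical flow parameters
x0 = rng.normal(size=D)

def f(x, W):
    return np.tanh(W @ x)

def df_dx(x, W):
    s = 1.0 - np.tanh(W @ x) ** 2            # tanh' evaluated at Wx
    return s[:, None] * W                     # Jacobian df/dx

def df_dW_vjp(a, x, W):
    s = 1.0 - np.tanh(W @ x) ** 2
    return np.outer(a * s, x)                 # a^T (df/dW) as a matrix

# forward pass: only the final state is kept
x = x0.copy()
for _ in range(steps):
    x = x + h * f(x, W)
loss = 0.5 * np.dot(x, x)                     # toy loss standing in for the KL objective

# backward pass: integrate (x, a, dL/dW) jointly backward in time
a = x.copy()                                  # a(T) = dL/dx(T)
grad_W = np.zeros_like(W)
for _ in range(steps):
    grad_W += h * df_dW_vjp(a, x, W)          # dL/dW = integral of a^T df/dW
    a = a + h * (df_dx(x, W).T @ a)           # da/dt = -(df/dx)^T a, reversed in time
    x = x - h * f(x, W)                       # recover x(t) by running the ODE backward

# finite-difference check of one gradient component (should roughly agree)
eps = 1e-5
Wp = W.copy(); Wp[0, 1] += eps
xp = x0.copy()
for _ in range(steps):
    xp = xp + h * f(xp, Wp)
print(grad_W[0, 1], (0.5 * np.dot(xp, xp) - loss) / eps)
```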

To keep the numerical integration on the group manifold, the authors employ a Crouch‑Grossman Runge‑Kutta scheme, which updates the group element via matrix exponentials and thus remains within SU(N) up to machine precision. This scheme simultaneously handles scalar quantities (log‑densities) and matrix‑valued variables, providing a stable and accurate integration pipeline.
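
As an illustration of this kind of geometric integrator, the sketch below implements one explicit second‑order Crouch‑Grossman step for \( \dot U = Z(U,t)\,U \) with Heun‑type coefficients. The tableau, the toy vector field, and the helper names are assumptions made for this sketch, not necessarily the scheme used in the paper; the scalar log‑density would be advanced with the same stage values using ordinary Runge‑Kutta weights.

```python
import numpy as np
from scipy.linalg import expm

def proj_su_n(M):
    """Traceless anti-Hermitian projection (stand-in for the network output)."""
    A = 0.5 * (M - M.conj().T)
    return A - np.trace(A) / M.shape[0] * np.eye(M.shape[0])

def Z_field(U, t):
    """Toy time-dependent algebra-valued vector field."""
    return (1.0 + t) * proj_su_n(U)

def crouch_grossman_step(U, t, h, Z_field):
    """One explicit second-order Crouch-Grossman step (Heun-type coefficients:
    c = [0, 1], a21 = 1, b = [1/2, 1/2]); every update is a matrix exponential
    of an algebra element, so the result stays on the group."""
    K1 = Z_field(U, t)
    U2 = expm(h * K1) @ U                     # internal stage on the group
    K2 = Z_field(U2, t + h)
    return expm(0.5 * h * K2) @ expm(0.5 * h * K1) @ U

# integrate from a random SU(2) element and monitor unitarity and determinant
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, t, h = expm(proj_su_n(M)), 0.0, 0.05
for _ in range(20):
    U = crouch_grossman_step(U, t, h, Z_field)
    t += h
print("unitarity error:", np.linalg.norm(U @ U.conj().T - np.eye(2)))
print("determinant:", np.linalg.det(U))
```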

Gauge equivariance is enforced by constructing the vector field from gauge‑invariant loop observables. The Wilson plaquette and a set of larger loops (including rectangular and L‑shaped loops) are traced, and their derivatives with respect to each link serve as building blocks. Because the derivative of a gauge‑invariant loop with respect to a link transforms covariantly under gauge transformations, the resulting flow satisfies \( Z_{\theta}(\Omega \cdot U) = \Omega\, Z_{\theta}(U)\, \Omega^{-1} \). Since the prior distribution is the Haar measure, which is already gauge‑invariant, the flowed model distribution is automatically gauge‑invariant as well.
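
The gauge invariance of traced loops that underlies this construction is easy to verify numerically. The following sketch (my own illustration, not the paper's code) builds random SU(2) links on a small 2D periodic lattice, applies a random gauge transformation \( U_\mu(x) \to \Omega(x)\, U_\mu(x)\, \Omega(x+\hat\mu)^\dagger \), and checks that the traced plaquettes are unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 4  # 4x4 periodic lattice

def random_su2():
    """Random SU(2) matrix from a normalized quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# links U[mu, x, y] and a site-dependent gauge transformation Omega[x, y]
U = np.array([[[random_su2() for _ in range(L)] for _ in range(L)] for _ in range(2)])
Omega = np.array([[random_su2() for _ in range(L)] for _ in range(L)])

def plaquette_trace(U, x, y):
    """Re tr of the 1x1 Wilson loop based at site (x, y)."""
    xp, yp = (x + 1) % L, (y + 1) % L
    P = U[0, x, y] @ U[1, xp, y] @ U[0, x, yp].conj().T @ U[1, x, y].conj().T
    return np.real(np.trace(P))

def gauge_transform(U, Omega):
    """U_mu(x) -> Omega(x) U_mu(x) Omega(x + mu)^dagger on the periodic lattice."""
    V = np.empty_like(U)
    for x in range(L):
        for y in range(L):
            V[0, x, y] = Omega[x, y] @ U[0, x, y] @ Omega[(x + 1) % L, y].conj().T
            V[1, x, y] = Omega[x, y] @ U[1, x, y] @ Omega[x, (y + 1) % L].conj().T
    return V

V = gauge_transform(U, Omega)
before = np.array([[plaquette_trace(U, x, y) for y in range(L)] for x in range(L)])
after = np.array([[plaquette_trace(V, x, y) for y in range(L)] for x in range(L)])
print("max deviation of traced plaquettes:", np.abs(before - after).max())
```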

Training minimizes the reverse Kullback‑Leibler divergence \( D_{\mathrm{KL}}(q_T \,\|\, p) = \mathbb{E}_{U \sim q_T}\big[\log q_T(U) - \log p(U)\big] \), where \( q_T \) is the model density obtained by integrating the flow to time \( T \) and \( p(U) \propto e^{-S[U]} \) is the target Boltzmann distribution. Because the expectation is taken over samples drawn from the model itself, training is self‑supervised: no configurations from the target distribution are needed, and the unknown normalization of \( p \) only shifts the loss by a constant.
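
A minimal sketch of how such a reverse‑KL objective is typically estimated from model samples (the function names and the random placeholder batch are mine; the paper's exact loss and diagnostics may differ): given the flowed log‑densities \( \log q_T(U) \) and actions \( S[U] \) of a batch, the loss is the batch mean of \( \log q_T + S \), and the importance weights give an effective sample size.

```python
import numpy as np

def reverse_kl_loss(log_q, action):
    """Reverse-KL estimator up to the constant log partition function:
    E_{U ~ q_T}[log q_T(U) + S(U)], using samples drawn from the model itself."""
    return np.mean(log_q + action)

def effective_sample_size(log_q, action):
    """Normalized ESS from the importance weights w = exp(-S - log q_T)."""
    log_w = -action - log_q
    log_w -= log_w.max()                      # stabilize the exponentials
    w = np.exp(log_w)
    return (w.sum() ** 2) / (len(w) * (w ** 2).sum())

# hypothetical batch of model log-densities and actions (random placeholders)
rng = np.random.default_rng(0)
log_q = rng.normal(size=128)
action = rng.normal(size=128)
print(reverse_kl_loss(log_q, action), effective_sample_size(log_q, action))
```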

