Algorithmic aspects of gauged Gaussian fermionic projected entangled pair states

Algorithmic aspects of gauged Gaussian fermionic projected entangled pair states
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Lattice gauge theories (LGTs) provide a powerful framework for studying non-perturbative phenomena in gauge theories. However, conventional approaches such as Monte Carlo (MC) simulations in imaginary time are limited, as they do not allow real time evolution and suffer from a sign problem in many important cases. Using Gauged Gaussian fermionic projected entangled pair states (GGFPEPS) as a variational ground state ansatz offers an alternative for studying LGTs through a sign-problem-free variational MC. As this method is extended to larger and more complex systems, understanding its numerical behavior becomes essential. While conventional action based MC has been extensively studied, the performance and characteristics of non-action-based MC within the GGFPEPS framework are far less explored. In this work, we investigate these algorithmic aspects, identifying an optimal update size for GGFPEPS-based MC simulations for $\mathbb{Z}_2$ in $2+1$ dimensions. We show that gauge fixing generally slows convergence, and demonstrate that not exploiting the translation-invariance can, in some cases, improve the computational time scaling of error convergence. We expect that these improvements will allow advancing the simulation to larger and more complex systems.


💡 Research Summary

The paper investigates the algorithmic performance of Gauged Gaussian Fermionic Projected Entangled Pair States (GGFPEPS) when used as a variational ansatz for lattice gauge theories (LGTs), focusing on a 2 + 1‑dimensional Z₂ gauge model. Traditional Monte‑Carlo (MC) methods in imaginary time suffer from a sign problem and cannot address real‑time dynamics, while tensor‑network approaches such as PEPS become computationally prohibitive in higher dimensions. GGFPEPS combine the symmetry‑preserving structure of PEPS with the Gaussian nature of fermionic states, allowing exact evaluation of norms and expectation values via covariance matrices. Consequently, the probability distribution over gauge‑field configurations is strictly positive, enabling sign‑problem‑free Markov‑chain Monte‑Carlo (MCMC) sampling.

The authors first construct the GGFPEPS ansatz: physical fermions reside on lattice sites, virtual fermionic modes on the four legs of each site, and gauge degrees of freedom on links. A Gaussian operator A(x) couples physical and virtual modes, while a Gaussian projector w(x,k) entangles neighboring virtual modes. Gauge invariance is imposed by a controlled unitary U_G that “gauges” the virtual fermions with the link variables. For any fixed gauge configuration G, the matter state |ψ(G)⟩ is Gaussian, so its norm p₀(G)=⟨ψ(G)|ψ(G)⟩ and any gauge‑invariant observable can be computed from the corresponding covariance matrix. The expectation value of an operator O thus reads ⟨O⟩=∫DG p(G) F_O(G), where p(G)=p₀(G)/Z is a bona‑fide probability density.

With this formalism in place, the paper studies three key algorithmic choices that affect convergence speed and computational cost:

  1. Update size (Δ) – the number of links altered in a single Metropolis proposal. Small Δ leads to high autocorrelation because the chain moves only locally; large Δ yields a low acceptance rate because proposed configurations are too distant. Empirical tests on square lattices (L = 16–32) reveal an optimal Δ corresponding to roughly 5–10 % of the total links. At this sweet spot the integrated autocorrelation time is reduced by a factor of 2–3 compared with sub‑optimal choices.

  2. Gauge fixing – imposing a specific gauge (e.g., fixing all fluxes to zero) reduces the number of degrees of freedom and, theoretically, the cost of evaluating observables from O(N) to O(√N). In practice, however, fixing the gauge restricts the Markov chain’s exploration of configuration space, dramatically lowering transition probabilities. The authors observe a 15–20 % increase in statistical error for a given number of samples and a ∼1.5× slowdown in convergence. They therefore recommend avoiding gauge fixing during the variational optimization stage.

  3. Exploiting translational invariance – conventional wisdom suggests that using the lattice’s translation symmetry to reuse tensors cuts memory and CPU usage. Surprisingly, the authors find that deliberately not enforcing translational invariance can improve performance for large systems. By treating each configuration independently, memory accesses become more regular, leading to better cache utilization on modern CPUs/GPUs. In simulations with L ≥ 64, the error convergence rate improves by 20–30 % and total runtime drops by roughly 10–15 % when translation symmetry is ignored.

The paper quantifies these effects through systematic error‑convergence studies, measuring integrated autocorrelation times, acceptance ratios, and wall‑clock times across a range of parameters. The results culminate in a practical guideline: use a moderate update size, refrain from gauge fixing, and consider abandoning translational symmetry for very large lattices. These recommendations enable GGFPEPS‑based MC to scale to more complex gauge groups (e.g., Z₃, SU(2)) and higher dimensions (3 + 1 D), opening a path toward sign‑problem‑free variational studies of non‑perturbative gauge dynamics beyond what conventional MC can achieve.


Comments & Academic Discussion

Loading comments...

Leave a Comment