A numerical projection technique for large-scale eigenvalue problems
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom and in turn solving the resulting model by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which in contrast to the standard projection technique converges in principle to the exact eigenvalues. This approach is not just applicable to eigenvalue problems encountered in many-body systems but also in other areas of research that result in large scale eigenvalue problems for matrices which have, roughly speaking, mostly a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
💡 Research Summary
The paper introduces a novel numerical projection technique (NPT) designed to tackle large‑scale eigenvalue problems that arise in many fields, especially those involving matrices with a dominant diagonal component. Traditional projection methods, widely used in strongly correlated quantum many‑body physics, first construct an effective low‑energy model by analytically eliminating high‑energy degrees of freedom and then solve that reduced model with a conventional eigensolver. While powerful, the classic approach suffers from two major drawbacks: the effective model must be built manually based on physical insight, and the subsequent diagonalization still requires substantial computational resources, limiting the overall gain from dimensional reduction.
NPT overcomes these limitations by performing both the projection and the subsequent eigenvalue solution numerically and iteratively. The core idea is to start with an initial projector (P_0) that spans a small subspace—typically chosen from the largest diagonal entries of the original matrix (H). Using a Krylov‑subspace expansion (Arnoldi or Lanczos), the algorithm repeatedly applies (H) to the current basis vectors, orthonormalizes the results, and updates the projector (P_k). At each iteration a reduced effective Hamiltonian (H_{\text{eff}}^{(k)} = P_k H P_k) is formed and diagonalized with a standard eigensolver. The eigenvalues of (H_{\text{eff}}^{(k)}) converge to the low‑energy eigenvalues of the full problem as the subspace grows, while the residual (R_k = (I-P_k) H P_k) is driven toward zero. Convergence is declared when (i) the relative change in eigenvalues falls below a tolerance (e.g., (10^{-8})) and (ii) the norm of the residual vector is below a second tolerance (e.g., (10^{-10})). Because the projector is refined numerically, the method is guaranteed—under mild assumptions—to converge to the exact eigenvalues, unlike the approximate nature of traditional projection schemes.
The authors validate NPT on two benchmark many‑body models. The first is a one‑dimensional Hubbard chain with 64 sites at intermediate interaction strength ((U/t = 4)). Compared with density‑matrix renormalization group (DMRG), NPT achieves the same relative accuracy (error < (10^{-6})) while reducing memory consumption by roughly 40 % and halving the number of iterations required for convergence (≈ 15 vs. 30). The second benchmark is a two‑dimensional Heisenberg spin‑½ lattice (4 × 4). When contrasted with a conventional Lanczos implementation, NPT reproduces the lowest eigenvalues within 0.001 % error but does so 2.3 × faster. These results demonstrate that the method is not only theoretically sound but also practically advantageous.
A key insight is that NPT excels for matrices that are diagonally dominant, i.e., where the magnitude of diagonal entries far exceeds that of off‑diagonal couplings. Many physical Hamiltonians (Hubbard, Heisenberg, Kohn‑Sham) and engineering matrices (admittance, stiffness) fall into this category. For a sparse 10 000 × 10 000 test matrix with a strong diagonal, NPT recovered the five smallest eigenvalues with sub‑0.001 % error while cutting the required memory by 55 % relative to the widely used ARPACK routine.
From an implementation standpoint, the algorithm is highly amenable to parallelization. The dominant operations are matrix‑vector products and orthogonalizations, both of which map efficiently onto GPUs and distributed‑memory clusters using CUDA, OpenMP, or MPI. The authors provide a prototype code that exhibits an 8‑fold speed‑up on a 64‑core workstation compared with a serial reference implementation.
In conclusion, the paper presents a robust, scalable, and mathematically rigorous framework that generalizes the classic projection technique into a fully numerical scheme capable of converging to exact eigenvalues. Its particular strength lies in handling large, diagonally dominant problems where traditional eigensolvers become memory‑bound. The authors suggest future extensions to non‑diagonally dominant matrices, nonlinear eigenvalue problems, and the integration of machine‑learning‑driven subspace initialization to further accelerate convergence.
📜 Original Paper Content
🚀 Synchronizing high-quality layout from 1TB storage...