An Optimized and Scalable Eigensolver for Sequences of Eigenvalue Problems
In many scientific applications the solutions of non-linear differential equations are obtained through the set-up and solution of a number of successive eigenproblems. These eigenproblems can be regarded as a sequence whenever the solution of one problem fosters the initialization of the next. In addition, in some eigenproblem sequences there is a connection between the solutions of adjacent eigenproblems. Whenever it is possible to unravel the existence of such a connection, the eigenproblem sequence is said to be correlated. When facing a sequence of correlated eigenproblems, the current strategy amounts to solving each eigenproblem in isolation. We propose an alternative approach which exploits such correlation through the use of an eigensolver based on subspace iteration and accelerated with Chebyshev polynomials (ChFSI). The resulting eigensolver is optimized by minimizing the number of matrix-vector multiplications and parallelized using the Elemental library framework. Numerical results show that ChFSI achieves excellent scalability and is competitive with current dense linear algebra parallel eigensolvers.
💡 Research Summary
The paper addresses the computational challenge posed by sequences of Hermitian eigenvalue problems that arise in self‑consistent field (SCF) cycles of nonlinear differential equations, with a particular focus on dense generalized eigenproblems generated by the Full‑Potential Linearized Augmented Plane Wave (FLAPW) method in density‑functional theory (DFT). Traditional practice treats each eigenproblem in the sequence as an isolated task, solving it with direct dense solvers such as ScaLAPACK’s BXINV or ELPA. Although this approach is optimal for a single dense problem, it ignores the fact that successive problems are not mathematically independent: the eigenvectors of one step are used to construct the charge density that defines the matrices of the next step. Empirical studies have shown that, despite large changes in the matrix entries, the low‑energy eigenvectors evolve smoothly, maintaining small angles between corresponding vectors of adjacent problems. This phenomenon is termed “correlation” between eigenproblems.
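The correlation described above can be quantified by the principal angles between the subspaces spanned by the low-energy eigenvectors of two adjacent eigenproblems: small angles mean the previous solution is a good starting guess. A minimal NumPy sketch (the function name and the SVD-based angle computation are illustrative choices, not taken from the paper):

```python
import numpy as np

def principal_angles(V_prev, V_curr):
    """Principal angles (in radians) between the subspaces spanned by the
    columns of two orthonormal blocks, e.g. the lowest eigenvectors of
    adjacent eigenproblems in an SCF sequence.

    The cosines of the principal angles are the singular values of
    V_prev^T V_curr; angles near zero indicate strong correlation.
    """
    s = np.linalg.svd(V_prev.T @ V_curr, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))
```

For identical subspaces all angles are zero; for mutually orthogonal subspaces they are all π/2.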
To exploit this correlation, the authors propose a Chebyshev‑filtered subspace iteration (ChFSI) algorithm. The method starts with the eigenvectors from the previous SCF iteration as an initial subspace. A block subspace iteration is then accelerated by applying a high‑degree Chebyshev polynomial filter to the matrix, which amplifies components associated with the desired portion of the spectrum (typically the lowest few percent) while attenuating the rest. The polynomial degree is chosen adaptively based on estimates of the spectral interval and the required convergence tolerance, thereby minimizing the total number of matrix‑vector multiplications. After filtering, the subspace is orthonormalized and a Rayleigh‑Ritz projection yields updated Ritz pairs; the process repeats until convergence.
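The filter–orthonormalize–project loop described above can be sketched serially in NumPy. This is a toy illustration, not the paper's Elemental-based parallel implementation: the helper names are invented, the filter interval `[a, b]` (the part of the spectrum to damp) is assumed to be known, and the adaptive degree selection is replaced by a fixed `degree` parameter.

```python
import numpy as np

def chebyshev_filter(A, X, degree, a, b):
    """Apply a degree-`degree` Chebyshev polynomial filter to the block X.

    The interval [a, b] (assumed to enclose the unwanted part of A's
    spectrum) is mapped to [-1, 1], where Chebyshev polynomials stay
    bounded by 1; eigencomponents outside [a, b] are amplified instead.
    Uses the standard three-term recurrence, so the cost is `degree`
    matrix-block products.
    """
    e = (b - a) / 2.0  # half-width of the damped interval
    c = (b + a) / 2.0  # center of the damped interval
    Y = (A @ X - c * X) / e
    for _ in range(2, degree + 1):
        Y_new = 2.0 * (A @ Y - c * Y) / e - X
        X, Y = Y, Y_new
    return Y

def chfsi_step(A, X, degree, a, b):
    """One filtered subspace iteration step: filter the current subspace,
    orthonormalize it, and extract Ritz pairs via Rayleigh-Ritz."""
    Y = chebyshev_filter(A, X, degree, a, b)
    Q, _ = np.linalg.qr(Y)   # orthonormalize the filtered block
    H = Q.T @ A @ Q          # Rayleigh-Ritz projection onto the subspace
    w, V = np.linalg.eigh(H)
    return w, Q @ V          # Ritz values and Ritz vectors
```

In an SCF sequence, `X` would be initialized with the Ritz vectors from the previous eigenproblem rather than random vectors, which is precisely where the correlation pays off.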
Parallelization is achieved using the Elemental library, which distributes matrices in a two‑dimensional block cyclic layout across MPI processes. Because the Chebyshev filter consists of a sequence of matrix‑vector products, communication overhead is low and the algorithm scales efficiently on distributed‑memory clusters. The authors benchmark ChFSI on problems ranging from 2 000 to 30 000 dimensions, solving for up to 20 % of the spectrum. Strong‑scaling tests on up to 1 024 cores demonstrate near‑linear speedup, and performance comparisons show that ChFSI outperforms ScaLAPACK direct solvers by factors of 1.5–3 while delivering identical accuracy (relative eigenvalue errors below 10⁻⁸ and eigenvector angle errors below 10⁻⁴ rad).
The paper also discusses practical aspects such as the selection of the Chebyshev interval, the impact of polynomial degree on convergence, and the robustness of the method when the correlation between successive eigenvectors deteriorates. The authors conclude that ChFSI provides a viable third option—between pure direct solvers and generic iterative eigensolvers—for dense eigenvalue sequences that exhibit moderate correlation. Its ability to reuse information across SCF cycles reduces the total computational effort dramatically, making it attractive for large‑scale electronic‑structure calculations and potentially for any application where a sequence of related dense eigenproblems arises.