NeuraLSP: An Efficient and Rigorous Neural Left Singular Subspace Preconditioner for Conjugate Gradient Methods
Numerical techniques for solving partial differential equations (PDEs) are integral to many fields across science and engineering. Such techniques usually involve solving large, sparse linear systems, where preconditioning methods are critical. In recent years, neural methods, particularly graph neural networks (GNNs), have demonstrated their potential through accelerated convergence. Nonetheless, to extract connective structure, existing techniques aggregate discretized system matrices into graphs, and suffer from rank inflation and suboptimal convergence rates. In this paper, we present NeuraLSP, a novel neural preconditioner combined with a novel loss metric that leverages the left singular subspace of the system matrix's near-nullspace vectors. By compressing spectral information into a fixed low-rank operator, our method exhibits both theoretical guarantees and empirical robustness to rank inflation, affording up to a 53% speedup. Beyond the theoretical guarantees for our newly formulated loss function, comprehensive experimental results across diverse families of PDEs substantiate these advances.
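To fix the setting the abstract describes, the sketch below shows where a preconditioner enters a conjugate gradient solve, using SciPy's `cg` on a 1-D Poisson system. The Jacobi (diagonal) preconditioner here is only a stand-in for illustration; NeuraLSP's operator is learned, and none of these names come from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# 1-D Poisson matrix: a classic sparse SPD test system.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# A preconditioner enters CG as a LinearOperator M that approximates A^{-1}.
# Here: simple Jacobi (diagonal) scaling as an illustrative stand-in for a
# learned preconditioner such as NeuraLSP's low-rank operator.
d_inv = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda r: d_inv * r)

x, info = spla.cg(A, b, M=M)  # info == 0 means CG converged
```

A better preconditioner shrinks the effective condition number seen by CG, which is exactly the lever the learned operator pulls.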
💡 Research Summary
NeuraLSP introduces a novel neural preconditioner for solving large, sparse linear systems that arise from the discretization of partial differential equations (PDEs). The method departs from traditional algebraic multigrid (AMG) techniques, which construct the prolongation matrix P by aggregating variables based on the strength‑of‑connection graph of the system matrix A. Instead, NeuraLSP learns a low‑rank operator directly from the left singular subspace of a set of smoothed test vectors S, which approximate the near‑nullspace (smooth error) of A.
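The construction described above, smoothing random test vectors to expose near-nullspace error and then extracting their left singular subspace, can be sketched as follows. This is a minimal NumPy illustration on a toy Poisson matrix; the smoother choice, rank, and sweep count are assumptions, not the paper's settings.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, k = 200, 8

# Toy SPD system (illustrative; NeuraLSP targets general PDE discretizations).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Smoothed test vectors S: start random, then apply weighted-Jacobi sweeps
# on A x = 0. The sweeps damp high-frequency components, so the surviving
# directions approximate the near-nullspace (smooth error) of A.
S = rng.standard_normal((n, k))
d_inv = 1.0 / A.diagonal()
omega = 2.0 / 3.0
for _ in range(20):
    S -= omega * (d_inv[:, None] * (A @ S))

# Left singular subspace of S: an orthonormal basis for the dominant
# directions of the smoothed vectors. Truncating to rank r fixes the
# subspace dimension up front, sidestepping rank inflation by construction.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
r = 4
U_r = U[:, :r]
```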
The core contribution is the Neural Left Singular Subspace (NLSS) loss, which trains the preconditioner to align its fixed low-rank operator with the left singular subspace of the smoothed test vectors S.
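The NLSS loss itself is not reproduced in this excerpt. As a purely hypothetical sketch of what a subspace-alignment objective can look like, the function below measures the Frobenius distance between orthogonal projectors onto the range of a candidate low-rank operator and onto a target basis `U_r`. All names and the specific formula are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def subspace_alignment_loss(M_theta, U_r):
    """Hypothetical alignment objective (NOT the paper's NLSS definition):
    squared Frobenius distance between the orthogonal projector onto the
    column space of the candidate operator M_theta and the projector onto
    the target left singular subspace U_r."""
    Q, _ = np.linalg.qr(M_theta)   # orthonormal basis of range(M_theta)
    P_m = Q @ Q.T                  # projector onto the learned range
    P_u = U_r @ U_r.T              # projector onto the target subspace
    return np.linalg.norm(P_m - P_u, "fro") ** 2

# Sanity check: identical column spaces give (numerically) zero loss.
rng = np.random.default_rng(1)
U_r, _ = np.linalg.qr(rng.standard_normal((50, 4)))
C = rng.standard_normal((4, 4))          # almost surely invertible mixing
loss = subspace_alignment_loss(U_r @ C, U_r)
```

A projector-distance objective of this shape is basis-invariant, which matters because singular bases are only defined up to rotation within the subspace.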