A geometrically informed algebraic multigrid preconditioned iterative approach for solving high-order finite element systems
Algebraic multigrid (AMG) is conventionally applied in a black-box fashion, agnostic to the underlying geometry. In this work, we propose that using geometric information – when available – to assist in setting up the AMG hierarchy is beneficial, especially for solving linear systems arising from high-order finite element discretizations. High-order problems draw considerable interest from both the scientific and engineering communities, but lack efficient solvers, at least among open-source codes, tailored for unstructured high-order discretizations targeting large-scale, real-world applications. For geometric multigrid, it is known that applying p-coarsening before h-coarsening can provide better scalability, but setting up p-coarsening is non-trivial in AMG. We develop a geometrically informed algebraic multigrid (GIAMG) method, as well as an associated high-performance computing program, which is able to set up a grid hierarchy that includes p-coarsening at the top grids with minimal geometric information from the user. A major advantage of using p-coarsening with AMG – beyond the benefits known in the context of geometric multigrid (GMG) – is the increased sparsification of coarse-grid operators. We extensively evaluate GIAMG on 3D Helmholtz and incompressible flow problems, and demonstrate mesh-independent convergence and excellent parallel scalability. We also compare the performance of GIAMG with existing AMG packages, including Hypre and ML.
💡 Research Summary
The paper introduces a novel preconditioning framework called Geometrically Informed Algebraic Multigrid (GIAMG) designed specifically for high‑order finite element (HO‑FEM) discretizations, which are increasingly popular in computational fluid dynamics, wave propagation, and other scientific domains due to their superior dispersion and dissipation properties. Traditional algebraic multigrid (AMG) methods treat the linear system as a black box, building coarse‑grid hierarchies solely from matrix entries. While this works well for low‑order, sparse matrices, it becomes problematic for high‑order systems: the matrices are denser, lose the M‑matrix property, and AMG‑generated coarse operators become increasingly dense, causing memory‑bandwidth bottlenecks and poor scalability on large parallel machines.
GIAMG tackles these issues by injecting a small amount of geometric information—available from most high‑order FEM libraries—into the AMG setup phase. The essential data are (1) a mapping of which degrees‑of‑freedom (DOFs) belong to each element, and (2) the ordering of those DOFs, which implicitly encodes the polynomial order (p) of the basis functions. Using this “local‑to‑global” (l2g) map, the algorithm first performs p‑coarsening: within each element the polynomial degree is reduced (e.g., p = 4 → 2 → 1) while the mesh topology remains unchanged. This step dramatically sparsifies the system because high‑order basis functions generate many inter‑element couplings; after reducing p, the number of non‑zeros drops sharply. The p‑coarsening is purely element‑local, so it does not require any mesh‑level geometric coarsening.
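The element-local nature of p-coarsening can be illustrated with a small sketch. Assuming a hierarchical (modal) local ordering in which the first entries of each element's local-to-global list correspond to the lower-order basis functions, the p-coarsening transfer operator simply selects those DOFs; the helper below is hypothetical and not the GIAMG API, but it shows how a 0/1 prolongation can be assembled from the l2g map alone, with no mesh coarsening.

```python
import numpy as np
from scipy import sparse

def p_coarsen_prolongation(l2g, n_fine, n_coarse_per_elem):
    """Build a 0/1 prolongation P from low-p (coarse) DOFs to high-p (fine) DOFs.

    Assumes a hierarchical local ordering: within each element, the first
    `n_coarse_per_elem` entries of the local-to-global map are the lower-order
    basis functions. (Illustrative helper, not the actual GIAMG interface.)
    """
    # Union of retained global DOFs across elements (shared DOFs counted once).
    keep = sorted({g for elem in l2g for g in elem[:n_coarse_per_elem]})
    coarse_id = {g: i for i, g in enumerate(keep)}
    rows = list(keep)                       # retained fine-grid indices
    cols = [coarse_id[g] for g in keep]     # their coarse-grid numbering
    data = np.ones(len(keep))               # pure injection: entries are 0 or 1
    return sparse.csr_matrix((data, (rows, cols)), shape=(n_fine, len(keep)))
```

Because the selection is purely local to each element, no geometric coarsening of the mesh is needed, matching the summary's description of the p-coarsening stage.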
Once the basis order reaches a low value (typically linear), GIAMG switches to a conventional h‑coarsening using a Smoothed‑Aggregation AMG (SA‑AMG) library (Saena). The restriction (R) and prolongation (P) operators for this stage are built from the low‑order system; because the low‑order basis is either nodal (yielding simple interpolation) or modal (yielding injection), the entries of P and R are limited to 0 or 1, further preserving sparsity. Coarse‑grid operators are obtained via Galerkin projection (A_c = R A_f P), guaranteeing consistency with the fine‑grid physics.
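The Galerkin projection step can be sketched in a few lines. The example below uses a 1D Laplacian as a stand-in fine operator and a 0/1 piecewise-constant aggregation prolongation (pairs of DOFs), mirroring the injection-like structure described above; it is a minimal illustration of A_c = R A_f P with R = P^T, not the Saena implementation.

```python
import numpy as np
from scipy import sparse

# Stand-in fine operator: 1D Laplacian stencil on 8 DOFs.
n = 8
A_f = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# 0/1 prolongation: aggregate each consecutive pair of fine DOFs.
rows = np.arange(n)
cols = rows // 2
P = sparse.csr_matrix((np.ones(n), (rows, cols)), shape=(n, n // 2))
R = P.T                      # symmetric problem: restriction is the transpose

# Galerkin coarse operator, consistent with the fine-grid physics.
A_c = (R @ A_f @ P).tocsr()
```

For this toy problem the coarse operator reproduces the same tridiagonal stencil on half the DOFs, which is the consistency property the Galerkin construction guarantees.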
The multigrid V‑cycle is employed as a preconditioner for the Conjugate Gradient (CG) method, which is appropriate for the symmetric positive‑definite problems considered (Helmholtz with appropriate shift, incompressible Stokes). Each level uses a Jacobi‑accelerated Chebyshev smoother; the coarsest level is solved directly with SuperLU. The authors emphasize that the Chebyshev smoother is the dominant cost, and they explore the impact of smoother iterations and aggressiveness of p‑reduction on overall convergence.
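The smoother at each level can be sketched with the textbook Jacobi-preconditioned Chebyshev recurrence; this is a generic formulation, not the paper's actual code, and the eigenvalue bounds `lmin`/`lmax` (for D^{-1}A on the modes the smoother targets) are assumed inputs.

```python
def chebyshev_smooth(A, b, x, d_inv, lmin, lmax, iters=3):
    """Jacobi-preconditioned Chebyshev smoother (textbook recurrence).

    A      : SPD sparse matrix; b, x : numpy vectors
    d_inv  : elementwise inverse of diag(A) (the Jacobi preconditioner)
    lmin, lmax : bounds on the spectrum of D^{-1} A to be damped
    """
    theta = 0.5 * (lmax + lmin)        # center of the target interval
    delta = 0.5 * (lmax - lmin)        # half-width of the target interval
    sigma = theta / delta
    rho = 1.0 / sigma
    r = d_inv * (b - A @ x)            # preconditioned residual
    p = r / theta
    x = x + p
    for _ in range(iters - 1):
        rho_new = 1.0 / (2.0 * sigma - rho)
        r = d_inv * (b - A @ x)
        p = rho_new * rho * p + (2.0 * rho_new / delta) * r
        x = x + p
        rho = rho_new
    return x
```

Each pass costs one sparse matrix-vector product plus vector updates, which is why the summary identifies the smoother as the dominant cost and why the number of smoothing iterations is a key tuning knob.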
Implementation details include an MPI‑based parallel framework that distributes matrices and vectors across processes, reconstructs the l2g map at each coarser level, and discards it after hierarchy construction. The code is released under the MIT license on GitHub and is built on top of the open‑source spectral/hp element library Nektar++ and the SA‑AMG library Saena, but can be interfaced with other FEM packages (deal.II, MFEM, MOOSE) and AMG back‑ends (Hypre, ML).
Experimental evaluation focuses on two benchmark problems in three dimensions: (a) a high‑frequency Helmholtz operator with polynomial order p = 8, and (b) the incompressible Navier‑Stokes equations discretized with p = 5. For each case the authors report mesh‑independent iteration counts, strong scaling up to 16 384 cores, and a clear advantage over leading AMG packages. Specifically, GIAMG achieves 1.5–2× faster convergence than Hypre’s BoomerAMG and Trilinos‑ML while using 30–40 % less memory on the coarse levels. The p‑coarsening stage is shown to be responsible for most of the sparsification: coarse‑grid non‑zeros drop by a factor of 3–5 compared with a pure AMG hierarchy, which translates directly into reduced communication volume and higher parallel efficiency.
The paper also discusses limitations. GIAMG currently assumes symmetric positive‑definite matrices; extending it to fully non‑symmetric Navier‑Stokes operators would require either a split scheme (e.g., velocity‑correction) or a non‑symmetric AMG variant. Moreover, the need for an explicit l2g map means the method is not a pure black‑box AMG; automatic extraction of element‑wise DOF grouping from the matrix alone remains an open research problem. Future work outlined includes (i) developing robust strategies for non‑symmetric systems, (ii) designing algorithms to infer geometric grouping directly from algebraic data, and (iii) exploring GPU‑accelerated kernels for the Chebyshev smoother and coarse‑grid solves.
In summary, GIAMG provides a practical, scalable, and open‑source preconditioner that bridges the gap between geometric multigrid’s p‑coarsening advantages and AMG’s flexibility on unstructured meshes. By leveraging minimal geometric information, it achieves significant sparsification, maintains mesh‑independent convergence, and delivers strong parallel performance on modern supercomputers, making it a valuable tool for large‑scale high‑order simulations in engineering and scientific computing.