A Neumann-Neumann Acceleration with Coarse Space for Domain Decomposition of Extreme Learning Machines
💡 Research Summary
This paper addresses the computational bottleneck of Extreme Learning Machines (ELMs) when applied to large‑scale partial differential equations (PDEs). While ELMs fix hidden‑layer weights and solve for the output layer via a least‑squares problem, the size of that problem grows quickly with the number of training points, limiting scalability. The authors previously introduced Domain Decomposition for ELMs (DDELM), which partitions the physical domain into non‑overlapping subdomains, trains a local ELM on each subdomain, and couples them through an auxiliary interface variable μ. DDELM reduces the global problem to a Schur‑complement system on the interface, but the resulting linear system can still be large and poorly conditioned, especially as the number of subdomains increases.
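The reduction described above follows the generic Schur-complement pattern: eliminate the local unknowns to obtain a smaller system on the interface variable μ. A minimal NumPy sketch of that structure, using random SPD stand-in blocks rather than the paper's actual DDELM operators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n interior (local ELM) unknowns, m interface unknowns mu.
n, m = 8, 3

# Generic SPD block system [[A, B], [B.T, C]] [[x], [mu]] = [[f], [g]].
# A stands in for the local least-squares blocks, mu for the auxiliary
# interface variable; all operators are random stand-ins, not the paper's.
M = rng.standard_normal((n + m, n + m))
K = M @ M.T + (n + m) * np.eye(n + m)
A, B, C = K[:n, :n], K[:n, n:], K[n:, n:]
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Eliminate the interior unknowns: the interface problem is governed by the
# Schur complement S = C - B.T A^{-1} B.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = C - B.T @ Ainv_B
mu = np.linalg.solve(S, g - B.T @ Ainv_f)   # interface solve
x = Ainv_f - Ainv_B @ mu                    # local back-substitution
```

The interface system involving S is much smaller than the monolithic one, but, as the summary notes, its size and conditioning still degrade as the number of subdomains grows.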
The contribution of this work is twofold: (1) the construction of a coarse space for the interface problem, and (2) a Neumann‑Neumann (NN) acceleration that exploits this coarse space. The interface degrees of freedom are split into “coarse” (corner) variables Π and “non‑coarse” variables Δ. By eliminating the flux terms associated with the coarse variables, the authors obtain a reduced Schur complement system on Δ that contains an embedded coarse problem on Π. This mirrors the structure of classical substructuring methods such as FETI‑DP and BDDC, where a coarse problem is essential for scalability.
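The elimination of the coarse (corner) variables Π from the interface system can be sketched in the same generic way. The following NumPy fragment uses a random SPD stand-in for the interface operator; only the block-elimination structure, with the coarse problem embedded in the reduced operator, mirrors the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interface operator S, partitioned into non-coarse (Delta) and
# coarse/corner (Pi) degrees of freedom; an SPD random stand-in, purely to
# illustrate the FETI-DP/BDDC-style elimination structure.
nD, nP = 6, 2
M = rng.standard_normal((nD + nP, nD + nP))
S = M @ M.T + (nD + nP) * np.eye(nD + nP)

S_DD, S_DP = S[:nD, :nD], S[:nD, nD:]
S_PD, S_PP = S[nD:, :nD], S[nD:, nD:]      # S_PP is the embedded coarse problem
b = rng.standard_normal(nD + nP)
b_D, b_P = b[:nD], b[nD:]

# Eliminate the coarse variables Pi: each application of the reduced operator
# on the Delta variables embeds a solve with the coarse block S_PP.
S_red = S_DD - S_DP @ np.linalg.solve(S_PP, S_PD)
u_D = np.linalg.solve(S_red, b_D - S_DP @ np.linalg.solve(S_PP, b_P))
u_P = np.linalg.solve(S_PP, b_P - S_PD @ u_D)
```

Because every application of the reduced operator on Δ passes through a coarse solve on Π, global information propagates across all subdomains in each iteration, which is what makes this kind of coarse problem essential for scalability.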
The NN acceleration is derived by observing that the operator A K⁺ + B (where K⁺ is the pseudo‑inverse of the local block K) acts as a Dirichlet‑to‑Neumann map, while its transpose acts as a Neumann‑to‑Dirichlet map. The authors define a preconditioner P = Ā K̄⁺ B̄ that represents the latter map on the coarse variables. Multiplying the second block row of the reduced system by P modifies the normal equations to include the term θ PᵀP + (1‑θ)I, where θ ∈ [0, 1] is a weighting parameter that blends the preconditioned and unpreconditioned formulations.
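One plausible reading of the θ-blended term is as a weighting of the normal equations, where P approximates the inverse of the forward map so that θ = 1 recovers the fully preconditioned system and θ = 0 the plain one. The sketch below uses synthetic operators (an ill-conditioned SPD matrix F standing in for the Dirichlet-to-Neumann map, and an inexact inverse P for the Neumann-to-Dirichlet direction); only the blending structure follows the summary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned stand-in F for the Dirichlet-to-Neumann map A K^+ + B, and
# an inexact approximation P to its inverse (the Neumann-to-Dirichlet
# direction). All operators here are synthetic.
n = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
F = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T          # cond(F) ~ 1e4
P = np.linalg.inv(F) + 1e-6 * rng.standard_normal((n, n))

theta = 0.9                                          # weight, assumed in [0, 1]
W = theta * (P.T @ P) + (1 - theta) * np.eye(n)      # blended weighting term
G = F.T @ W @ F                                      # weighted normal equations

print(np.linalg.cond(F.T @ F))   # plain normal equations
print(np.linalg.cond(G))         # blended version is better conditioned
```

With an accurate P, the PᵀP term flattens the spectrum of the normal equations, while the (1−θ)I term keeps the operator well defined when P is only an approximate inverse.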