Operator Learning with Domain Decomposition for Geometry Generalization in PDE Solving
Neural operators have become increasingly popular for solving \textit{partial differential equations} (PDEs) due to their superior capability to capture intricate mappings between function spaces over complex domains. However, the data-hungry nature of operator learning inevitably poses a bottleneck for their widespread application. At the core of the challenge lies the lack of transferability of neural operators to new geometries. To tackle this issue, we propose operator learning with domain decomposition, a local-to-global framework for solving PDEs on arbitrary geometries. Under this framework, we devise an iterative scheme, \textit{Schwarz Neural Inference} (SNI). This scheme partitions the problem domain into smaller subdomains, on which local problems can be solved with neural operators, and stitches the local solutions together to construct a global solution. Additionally, we provide a theoretical analysis of the convergence rate and error bound. We conduct extensive experiments on several representative PDEs with diverse boundary conditions and achieve remarkable geometry generalization compared to alternative methods. These analyses and experiments demonstrate the proposed framework's potential for addressing challenges related to geometry generalization and data efficiency.
💡 Research Summary
This paper tackles a fundamental limitation of neural operators: their poor ability to generalize to unseen geometries. The authors propose a "local-to-global" framework that integrates neural operator learning with classical domain decomposition methods (DDM). The workflow consists of three stages.

First, a synthetic dataset is generated on a family of simple polygons (up to n vertices). Random Dirichlet/Neumann boundary conditions, coefficient fields, and source terms are sampled, and high-fidelity solutions are obtained with a traditional FEM solver. Lie-point symmetry transformations (rotations, scalings, value shifts) are applied to augment the data and to keep the input distribution bounded.

Second, a neural operator—specifically the transformer-based GNO-T—is trained to approximate the local solution operator that maps a polygon together with its boundary data to the corresponding PDE solution. The choice of operator architecture is orthogonal to the framework; any expressive operator that can handle variable-size inputs would work.

Third, at inference time the target domain (which may be arbitrarily complex) is partitioned into K overlapping subdomains using a graph partitioner such as METIS, followed by d layers of neighbor expansion to create the overlap. For each subdomain, the current global iterate provides interface Dirichlet data; together with the global boundary data this forms the local boundary input Bₖⁿ. A preprocessing transform Tₖ (e.g., translation, scaling) maps the local problem back into the training distribution, the trained operator G† is invoked, and a post-processing inverse transform ˜Tₖ restores the physical scale. The local predictions are then combined via an additive Schwarz update:
uⁿ⁺¹ = uⁿ + τ ∑ₖ Rₖᵀ( ŵₖⁿ⁺¹ − Rₖ uⁿ ),
where τ∈(0,1) controls the step size, Rₖ restricts the global field to subdomain k, and Rₖᵀ extends it by zero. This iterative scheme, named Schwarz Neural Inference (SNI), is proved to converge under the same conditions as the classical additive Schwarz method for elliptic PDEs. The authors derive a linear convergence rate and an error bound that separates the neural‑operator approximation error from the classical Schwarz interface error, showing that the total error scales as O(ε_G + ε_S).
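To make the Schwarz update concrete, here is a minimal one-dimensional sketch of a single SNI-style sweep. This is an illustration, not the paper's implementation: a toy 1-D Laplace problem stands in for the PDE, a single Jacobi smoothing step stands in for the trained neural operator G†, and the names `schwarz_update`, `local_solve`, and `subs` are hypothetical.

```python
import numpy as np

def schwarz_update(u, subdomains, local_solve, tau=0.5):
    """One additive Schwarz sweep: u + tau * sum_k R_k^T (w_k - R_k u)."""
    correction = np.zeros_like(u)
    for idx in subdomains:                    # idx plays the role of R_k (restriction)
        u_local = u[idx]                      # R_k u: restrict the global iterate
        w_local = local_solve(u_local)        # stand-in for the neural operator G†
        correction[idx] += w_local - u_local  # R_k^T extends the update by zero
    return u + tau * correction

def local_solve(u_loc):
    """Toy 'local solver': one Jacobi sweep on a 1-D Laplace stencil,
    holding the local boundary (interface/global) values fixed."""
    w = u_loc.copy()
    w[1:-1] = 0.5 * (u_loc[:-2] + u_loc[2:])
    return w

u = np.zeros(8)
u[0] = u[-1] = 1.0                            # global Dirichlet data: u = 1 at both ends
subs = [np.arange(0, 5), np.arange(3, 8)]     # two overlapping subdomains
for _ in range(200):
    u = schwarz_update(u, subs, local_solve)
# u approaches the exact solution u = 1 of this toy boundary-value problem
```

Note how the overlap (indices 3 and 4) is where the subdomains exchange interface data across iterations, which is what drives convergence of the Schwarz iteration; in SNI the same role is played by the interface Dirichlet data drawn from the current global iterate.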
The theoretical analysis is complemented by extensive experiments on four representative PDEs: the 2-D Laplace equation, a nonlinear Poisson equation, a steady Navier-Stokes flow, and a reaction-diffusion system. Test domains include random composite polygons, domains with holes, and realistic engineering shapes never seen during training. Baselines comprise Fourier Neural Operators, DeepONet, a globally trained GNO-T, and recent geometry-parameterization approaches. Across all benchmarks, SNI achieves substantially lower relative L² errors (often a 30–70% reduction) and requires fewer training samples to reach a given accuracy. Moreover, the iterative process converges in a modest number of Schwarz iterations (typically fewer than 10), and the memory footprint is reduced because each local inference handles only a small subdomain.
Limitations are acknowledged: the current implementation focuses on 2‑D problems; extending to 3‑D complex geometries would increase computational cost and demand more sophisticated mesh handling. The framework also relies on a single type of local operator; exploring the sensitivity to different architectures (e.g., Fourier, DeepONet) is left for future work.
In summary, the paper introduces a novel, theoretically grounded method that combines neural operators with domain decomposition to overcome geometry‑generalization barriers. By learning on simple building blocks and stitching solutions via a Schwarz‑style iteration, it delivers data‑efficient, accurate, and scalable PDE solvers applicable to arbitrary shapes—an advancement with significant implications for scientific computing and industrial simulation.