Expression Rates of Neural Operators for Linear Elliptic PDEs in Polytopes


We study the approximation rates of a class of deep neural network approximations of operators which arise as data-to-solution maps $\mathcal{S}$ of linear elliptic partial differential equations (PDEs) and act between pairs $X,Y$ of suitable infinite-dimensional spaces. We prove expression rate bounds for approximate neural operators $\mathcal{G}$ with the structure $\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}$, with linear encoders $\mathcal{E}$ and decoders $\mathcal{R}$. We focus in particular on DeepONets emulating the coefficient-to-solution maps for elliptic PDEs set in polygons and in some polyhedra. Exploiting the regularity of the solution sets of elliptic PDEs in polytopes, we show algebraic rates of convergence for problems with data of finite regularity, and exponential rates for analytic data.


💡 Research Summary

The paper investigates the approximation capabilities of a class of deep neural network–based operators—specifically DeepONets—when used to emulate the coefficient-to-solution map of linear, second-order elliptic partial differential equations posed on polygonal (2-D) and polyhedral (3-D) domains. The authors consider the data-to-solution operator $\mathcal{S}: K \subset X \to Y$, where $K$ is a compact set of admissible diffusion coefficients in a Banach space $X$ (typically a subset of $L^{\infty}(\Omega)$) and $Y$ is a Hilbert space of weak solutions (usually $H^{1}_{0}(\Omega)$). Their goal is to construct neural operators $\mathcal{G}$ of the form
$$
\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E},
$$
with a linear encoder $\mathcal{E}$, a neural network approximator $\mathcal{A}$ acting between finite-dimensional spaces, and a linear decoder $\mathcal{R}$.
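The encoder/approximator/decoder structure above can be sketched in code. The following is a minimal illustration, not the paper's construction: the sensor locations, network sizes, and the sine output basis are all hypothetical choices, and the network weights are untrained random placeholders. It only shows how $\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}$ composes, with a linear $\mathcal{E}$ (point evaluation of the coefficient at fixed sensors) and a linear $\mathcal{R}$ (expansion in a fixed basis vanishing on the boundary, mimicking $H^1_0$).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m sensor points (encoder), p output basis functions
# (decoder), one hidden layer in the approximator.
m, p, hidden = 16, 8, 32

# Linear encoder E: sample the coefficient a(x) at m fixed sensor points in [0, 1].
sensors = np.linspace(0.0, 1.0, m)

def encode(a):
    """E: map a coefficient function a to its values at the sensor points."""
    return a(sensors)

# Approximator A: a small fully connected network mapping R^m -> R^p.
# Weights are random placeholders; in practice they would be trained.
W1 = rng.normal(size=(hidden, m)) / np.sqrt(m)
b1 = np.zeros(hidden)
W2 = rng.normal(size=(p, hidden)) / np.sqrt(hidden)
b2 = np.zeros(p)

def approximate(z):
    """A: finite-dimensional neural network between R^m and R^p."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

# Linear decoder R: expand the p coefficients in a fixed sine basis, so the
# reconstructed function vanishes at x = 0 and x = 1 (a stand-in for H^1_0).
def decode(c, x):
    """R: map c in R^p to sum_k c_k sin(k pi x), evaluated at the points x."""
    basis = np.sin(np.pi * np.outer(x, np.arange(1, p + 1)))  # shape (len(x), p)
    return basis @ c

def G(a, x):
    """Neural operator G = R o A o E applied to coefficient a, evaluated at x."""
    return decode(approximate(encode(a)), x)

# Example: evaluate G on a smooth coefficient at a grid of query points.
x_query = np.linspace(0.0, 1.0, 101)
u_approx = G(lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x), x_query)
```

The point of the decomposition is that all approximation error is concentrated in the finite-dimensional map $\mathcal{A}$, while $\mathcal{E}$ and $\mathcal{R}$ only move between the infinite-dimensional spaces $X$, $Y$ and their finite-dimensional surrogates.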

