Neural-POD: A Plug-and-Play Neural Operator Framework for Infinite-Dimensional Functional Nonlinear Proper Orthogonal Decomposition

Reading time: 5 minutes
...

📝 Original Info

  • Title: Neural-POD: A Plug-and-Play Neural Operator Framework for Infinite-Dimensional Functional Nonlinear Proper Orthogonal Decomposition
  • ArXiv ID: 2602.15632
  • Date: 2026-02-17
  • Authors: ** Author information was not included in the source data; see the original paper for the full author list. **

📝 Abstract

The rapid development of AI for Science is often hindered by the "discretization" problem, where learned representations remain restricted to the specific grids or resolutions used during training. We propose Neural Proper Orthogonal Decomposition (Neural-POD), a plug-and-play neural operator framework that constructs nonlinear, orthogonal basis functions in infinite-dimensional space using neural networks. Unlike classical Proper Orthogonal Decomposition (POD), which is limited to linear subspace approximations obtained through singular value decomposition (SVD), Neural-POD formulates basis construction as a sequence of residual minimization problems solved through neural network training. Each basis function is obtained by learning to represent the remaining structure in the data, following a process analogous to Gram-Schmidt orthogonalization. This neural formulation introduces several key advantages over classical POD: it enables optimization in arbitrary norms (e.g., $L^2$, $L^1$), learns mappings between infinite-dimensional function spaces that are resolution-invariant, generalizes effectively to unseen parameter regimes, and inherently captures nonlinear structures in complex spatiotemporal systems. The resulting basis functions are interpretable, reusable, and readily integrated into both reduced order modeling (ROM) and operator learning frameworks such as deep operator networks (DeepONet). We demonstrate the robustness of Neural-POD on several complex spatiotemporal systems, including the Burgers' and Navier-Stokes equations. We further show that Neural-POD serves as a high-performance, plug-and-play bridge between classical Galerkin projection and operator learning, enabling consistent integration with both projection-based reduced order models and DeepONet frameworks.

💡 Deep Analysis

📄 Full Content

Proper Orthogonal Decomposition (POD), also known as principal component analysis (PCA), is a foundational technique in computational modeling for extracting dominant coherent structures from high-dimensional data. Historically, it emerged from early ideas in statistics and stochastic-process theory (PCA and the Karhunen-Loève viewpoint) and was later adopted in fluid mechanics and turbulence as a dominant way to identify energetic coherent modes from spatiotemporal measurements and simulations [1][2][3][4]. POD supports a broad range of methodologies, including reduced order modeling (ROM), where POD modes provide low-rank representations that enable efficient simulation while preserving essential dynamics [5][6][7][8], and scientific machine learning (SciML), where POD-based decompositions offer structured low-dimensional representations for learning operators and dynamics (for example, POD-DeepONet architectures [9]). Beyond ROM and operator learning, POD is widely used for data compression, feature extraction and modal analysis in complex spatiotemporal systems. Classical POD is typically computed via singular value decomposition (SVD), yielding $L^2$-optimal linear basis functions from snapshot data [10][11][12][13]. Yet, when modern applications demand robustness for different discretizations, diverse regimes and strongly nonlinear behaviours, traditional POD faces persistent obstacles.
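To make the classical baseline concrete, here is a minimal NumPy sketch of snapshot POD via the SVD; the snapshot data below is synthetic and purely illustrative, not taken from the paper.

```python
# Minimal sketch of classical snapshot POD: stack snapshots as columns and
# extract L2-optimal linear modes via the singular value decomposition (SVD).
import numpy as np

# Synthetic snapshot matrix: 256 spatial points, 100 time snapshots (illustrative data).
x = np.linspace(0.0, 1.0, 256)
t = np.linspace(0.0, 1.0, 100)
snapshots = np.array([np.sin(np.pi * x) * np.exp(-ti)
                      + 0.3 * np.sin(3 * np.pi * x) * np.cos(4 * np.pi * ti)
                      for ti in t]).T                      # shape (256, 100)

# Thin SVD: columns of U are the POD modes, singular values rank their energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

r = 5                                   # number of retained modes
modes = U[:, :r]                        # linear POD basis (tied to this grid/resolution)
coeffs = modes.T @ snapshots            # temporal coefficients by projection
reconstruction = modes @ coeffs         # rank-r approximation of the snapshots

rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"rank-{r} relative reconstruction error: {rel_err:.3e}")
```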

These obstacles are well recognized. First, POD is often resolution-dependent: modes learned on one discretization may lose optimality or even effectiveness when the grid changes, limiting transferability across resolutions [14,15]. Second, POD may lose accuracy even for in-distribution interpolation and can fail more severely for out-of-distribution extrapolation beyond the training regime, which limits its reliability in previously unseen scenarios [16,17]. Third, although POD is optimal in an $L^2$ sense within the training dataset, its linear subspace structure can be too restrictive to represent the nonlinear features that drive many physical systems [5,18]. As a consequence, sharp gradients, discontinuities and other strongly nonlinear structures may be poorly captured, especially when such features dominate the dynamics [19]. Taken together, these challenges complicate deployment in multi-resolution regimes and hinder real-time simulation, where one needs a compact representation that generalizes reliably as dynamical or physical regimes change.

To address these limitations, we propose neural proper orthogonal decomposition (Neural-POD), a drop-in framework that replaces linear POD modes with nonlinear basis functions parameterized by neural networks. The key idea is to preserve the progressive, mode-by-mode error reduction of POD while upgrading the representational power and enabling transfer across resolutions and regimes. Given snapshot data, Neural-POD learns the first mode by minimizing a reconstruction loss between the snapshots and a neural representation with a learnable time-dependent coefficient; subsequent modes are learned sequentially by training new networks on the residual from the previous approximation [20][21][22][23]. This iterative procedure yields an orthogonal set of Neural-POD modes while directly retaining the structures most relevant to the data. Crucially, after training, only the Neural-POD model parameters (and the associated low-dimensional coefficients) need to be stored, neither resolution-specific basis vectors nor the full snapshot matrix, which makes the representation compact and easy to deploy. This compactness enables rapid online evaluation and supports real-time or many-query tasks, where one repeatedly updates reduced states or operators under limited computational budgets.
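A minimal sketch of this sequential, residual-driven construction is shown below, assuming a PyTorch setup in which each mode is a small coordinate MLP phi_k(x) paired with a learnable time-dependent coefficient a_k(t); the network sizes, optimizer settings, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of sequential, residual-driven mode
# construction: mode k is a rank-one term phi_k(x) * a_k(t) fit to the residual
# left by modes 1..k-1, analogous to a Gram-Schmidt-style progression.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mode_net():
    # Small MLP mapping a spatial coordinate x -> mode value phi(x).
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

# Synthetic snapshot data u(x_i, t_j) on a grid (stand-in for simulation output).
x = torch.linspace(0, 1, 256).unsqueeze(1)                 # (256, 1) spatial coordinates
t = torch.linspace(0, 1, 100)
u = torch.sin(torch.pi * x) * torch.exp(-t) \
    + 0.3 * torch.sin(3 * torch.pi * x) * torch.cos(4 * torch.pi * t)   # (256, 100)

n_modes, modes, coeffs = 3, [], []
residual = u.clone()

for k in range(n_modes):
    phi = make_mode_net()
    a = torch.randn(t.numel(), requires_grad=True)          # learnable time coefficient a_k(t)
    opt = torch.optim.Adam(list(phi.parameters()) + [a], lr=1e-3)

    for step in range(2000):
        opt.zero_grad()
        approx = phi(x) * a                                  # rank-one term, shape (256, 100)
        loss = torch.mean((residual - approx) ** 2)          # L2 reconstruction loss on residual
        loss.backward()
        opt.step()

    with torch.no_grad():
        residual = residual - phi(x) * a                     # pass remaining structure to next mode
    modes.append(phi)
    coeffs.append(a.detach())
    print(f"mode {k + 1}: residual MSE = {torch.mean(residual ** 2).item():.3e}")
```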

Neural-POD offers additional benefits that directly target the above challenges. First, the training objective is flexible, allowing optimization in task-relevant norms (for example $L^2$, $L^1$ or $H^1$) rather than being prescribed by a fixed $L^2$ criterion. Second, representing modes as neural functions admits resolution independence and improves both in-distribution and out-of-distribution performance across multi-resolution settings [24,25]. Third, the learned representation can be reused across parameter variations with a substantially reduced retraining burden, enabling efficient parametric studies. Finally, Neural-POD can serve as a pretrained, drop-in component inside broader operator-learning frameworks [26], providing a structured latent representation that is both compact to store and fast to evaluate for real-time deployment.
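The flexibility in the training norm can be pictured as simply swapping the per-mode loss. The helper below is a hypothetical illustration of that choice (the paper's exact objectives may differ), with the $H^1$ variant approximated by a finite-difference penalty on spatial gradients.

```python
# Hypothetical sketch of choosing the per-mode training norm; x_grid is assumed
# to be a uniformly spaced column of spatial coordinates with shape (n, 1).
import torch

def mode_loss(residual, approx, x_grid, norm="L2"):
    diff = residual - approx
    if norm == "L2":
        return torch.mean(diff ** 2)
    if norm == "L1":                                  # more robust to outliers / sharp features
        return torch.mean(diff.abs())
    if norm == "H1":                                  # also penalize errors in spatial gradients
        dx = x_grid[1, 0] - x_grid[0, 0]
        d_diff = (diff[1:, :] - diff[:-1, :]) / dx    # finite-difference derivative in x
        return torch.mean(diff ** 2) + torch.mean(d_diff ** 2)
    raise ValueError(f"unknown norm: {norm}")
```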

These properties position Neural-POD as a bridge between projection-based ROM and modern deep learning. In classical ROM, Neural-POD can be inserted into Galerkin projection frameworks to provide a more expressive and transferable basis while retaining the interpretability and structure of projection methods [27][28][29][30][31]. In operator learning, Neural-POD offers a principled low-dimensional representation that can be reused within frameworks such as DeepONet.
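As a rough illustration of this plug-and-play role, the sketch below reuses frozen, pretrained Neural-POD modes as a trunk basis in a POD-DeepONet-style model [9], with a branch network predicting modal coefficients from sensor samples of the input. The class name, shapes, and branch architecture are assumptions for illustration, not the paper's exact design.

```python
# Sketch: pretrained Neural-POD modes as a fixed trunk basis, evaluated at
# arbitrary query points, combined with a learned branch network (POD-DeepONet style).
import torch
import torch.nn as nn

class NeuralPODDeepONet(nn.Module):
    def __init__(self, trained_modes, n_sensors):
        super().__init__()
        self.modes = nn.ModuleList(trained_modes)     # frozen phi_k networks from Neural-POD
        for phi in self.modes:
            for p in phi.parameters():
                p.requires_grad_(False)               # reuse the pretrained basis as-is
        self.branch = nn.Sequential(                  # maps input sensor values to modal coefficients
            nn.Linear(n_sensors, 128), nn.Tanh(),
            nn.Linear(128, len(trained_modes)))

    def forward(self, u_sensors, x_query):
        # u_sensors: (batch, n_sensors) samples of the input function
        # x_query:   (n_query, 1) evaluation points, at any resolution
        coeffs = self.branch(u_sensors)                                   # (batch, n_modes)
        trunk = torch.cat([phi(x_query) for phi in self.modes], dim=1)    # (n_query, n_modes)
        return coeffs @ trunk.T                                           # (batch, n_query) output field
```

Because the trunk is evaluated pointwise, the same frozen basis can be queried on a coarse or fine grid without retraining, which is the resolution-invariance property emphasized above.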

Reference

This content is AI-processed based on open access ArXiv data.
