Flow Matching Operators for Residual-Augmented Probabilistic Learning of Partial Differential Equations

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Learning probabilistic surrogates for partial differential equations remains challenging in data-scarce regimes: neural operators require large amounts of high-fidelity data, while generative approaches typically sacrifice resolution invariance. We formulate flow matching in an infinite-dimensional function space to learn a probabilistic transport that maps low-fidelity approximations to the manifold of high-fidelity PDE solutions via learned residual corrections. We develop a conditional neural operator architecture based on feature-wise linear modulation for flow matching vector fields directly in function space, enabling inference at arbitrary spatial resolutions without retraining. To improve stability and representational control of the induced neural ODE, we parameterize the flow vector field as a sum of a linear operator and a nonlinear operator, combining lightweight linear components with a conditioned Fourier neural operator for expressive, input-dependent dynamics. We then formulate a residual-augmented learning strategy where the flow model learns probabilistic corrections from inexpensive low-fidelity surrogates to high-fidelity solutions, rather than learning the full solution mapping from scratch. Finally, we derive tractable training objectives that extend conditional flow matching to the operator setting with input-function-dependent couplings. To demonstrate the effectiveness of our approach, we present numerical experiments on a range of PDEs, including the 1D advection and Burgers’ equation, and a 2D Darcy flow problem for flow through a porous medium. We show that the proposed method can accurately learn solution operators across different resolutions and fidelities and produces uncertainty estimates that appropriately reflect model confidence, even when trained on limited high-fidelity data.
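The abstract describes parameterizing the flow vector field as a sum of a lightweight linear operator and a conditioned Fourier neural operator, with FiLM-style (feature-wise linear modulation) conditioning on the input function. The sketch below illustrates that decomposition in plain numpy for a 1D field; the parameter names, the diagonal linear part, and the single truncated Fourier layer are illustrative assumptions, not the paper's exact architecture. Because the Fourier layer acts on a fixed number of low modes, the same parameters apply at any spatial resolution, which mirrors the claimed resolution invariance.

```python
import numpy as np

def spectral_conv_1d(r, weights, n_modes):
    """Toy Fourier layer: FFT, keep/scale the lowest n_modes, inverse FFT.

    Acting only on low frequency modes makes the layer discretization-agnostic:
    the same `weights` work for any grid size with at least n_modes frequencies.
    """
    r_hat = np.fft.rfft(r)
    out_hat = np.zeros_like(r_hat)
    out_hat[:n_modes] = weights[:n_modes] * r_hat[:n_modes]
    return np.fft.irfft(out_hat, n=r.shape[-1])

def vector_field(r, t, a, params):
    """v(r, t; a) = A r + N(r, t; a): linear operator plus conditioned nonlinear part.

    The FiLM-style scale/shift derived from the input function `a` and time `t`
    is a hypothetical stand-in for the paper's conditioning mechanism.
    """
    linear = params["A"] * r                      # lightweight (diagonal) linear operator
    scale = 1.0 + params["gamma"] * a.mean()      # FiLM scale from the input function a
    shift = params["beta"] * t                    # time-dependent FiLM shift
    nonlinear = np.tanh(scale * spectral_conv_1d(r, params["W"], params["n_modes"]) + shift)
    return linear + nonlinear
```

Evaluating the same `params` on grids of different sizes returns a field of matching shape, so inference at new resolutions requires no retraining.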


💡 Research Summary

The paper introduces a novel framework for learning probabilistic solution operators of partial differential equations (PDEs) that is both data‑efficient and resolution‑invariant. Traditional neural operators such as DeepONet or the Fourier Neural Operator (FNO) excel at mapping infinite‑dimensional input functions to high‑fidelity solutions, but they require large amounts of high‑resolution training data and produce deterministic outputs, necessitating separate post‑hoc uncertainty quantification. Conversely, generative models based on GANs, diffusion, or flow‑matching can model distributions over solutions, yet they often depend on a fixed discretisation, need retraining for new inputs, and can be unstable or computationally expensive.

The authors address these limitations by formulating flow matching in function space and by learning probabilistic residual corrections between a cheap low‑fidelity surrogate and the desired high‑fidelity solution. The key idea is to treat the residual $r = u_{\text{HF}} - u_{\text{LF}}$ as a random field and to construct a continuous‑time transport map that pushes a simple Gaussian prior on $r$ to the true residual distribution. This transport is defined by the ODE

$$\frac{\mathrm{d} r_t}{\mathrm{d} t} = v_\theta(r_t, t \mid a), \qquad r_0 \sim \mathcal{N}(0, C),$$

where $v_\theta$ is the learned flow matching vector field conditioned on the input function $a$; integrating from $t = 0$ to $t = 1$ transports a prior sample into a sample from the residual distribution.
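The training recipe implied by conditional flow matching is to regress the vector field onto a closed-form target velocity along a simple probability path. The numpy sketch below uses the straight-line (rectified) path between a Gaussian prior sample and a residual sample; the function names are hypothetical, and the paper's input-function-dependent couplings are not modeled here. Sampling then amounts to integrating the learned ODE, shown with forward Euler.

```python
import numpy as np

def cfm_pair(r1, rng):
    """Sample one conditional flow matching training triple (t, r_t, v_target).

    r1 is a residual sample u_HF - u_LF (the data endpoint). The regression
    loss is ||v_theta(r_t, t) - v_target||^2. Straight-line path assumed;
    this is a sketch, not the paper's exact coupling.
    """
    r0 = rng.standard_normal(r1.shape)   # Gaussian prior sample
    t = rng.uniform()                    # time drawn uniformly from [0, 1]
    r_t = (1.0 - t) * r0 + t * r1        # linear interpolant between prior and data
    v_target = r1 - r0                   # constant target velocity along this path
    return t, r_t, v_target

def integrate_flow(v_field, r0, n_steps=50):
    """Push a prior sample through dr/dt = v(r, t) with forward Euler."""
    r, dt = r0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        r = r + dt * v_field(r, k * dt)
    return r
```

At inference time one adds the integrated residual back onto the low-fidelity solution, so the model only has to transport mass over the (typically small) correction rather than the full solution.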

