DAWN-FM: Data-Aware and Noise-Informed Flow Matching for Solving Inverse Problems
Inverse problems, which involve estimating parameters from incomplete or noisy observations, arise in various fields such as medical imaging, geophysics, and signal processing. These problems are often ill-posed, requiring regularization techniques to stabilize the solution. In this work, we employ Flow Matching (FM), a generative framework that integrates a deterministic process to map a simple reference distribution, such as a Gaussian, to the target distribution. Our method, DAWN-FM: Data-AWare and Noise-Informed Flow Matching, incorporates data and noise embedding, allowing the model to access explicit representations of the measured data and to account for noise in the observations, making it particularly robust in scenarios where data is noisy or incomplete. By learning a time-dependent velocity field, FM not only provides accurate solutions but also enables uncertainty quantification by generating multiple plausible outcomes. Unlike pretrained diffusion models, which may struggle in highly ill-posed settings, our approach is trained specifically for each inverse problem and adapts to varying noise levels. We validate the effectiveness and robustness of our method through extensive numerical experiments on tasks such as image deblurring and tomography. The code is available at: https://github.com/ahxmeds/DAWN-FM.git.
💡 Research Summary
The paper introduces DAWN‑FM, a novel framework that leverages Flow Matching (FM) for solving inverse problems such as image deblurring and computed tomography (CT) reconstruction. Traditional inverse‑problem approaches rely on regularization, variational methods, or pretrained diffusion models, but these often struggle when the forward operator is ill‑conditioned or when measurements are heavily corrupted by noise. DAWN‑FM addresses these challenges by making the generative process explicitly aware of both the observed data and the noise level.
Flow Matching is a deterministic ODE‑based generative model that learns a time‑dependent velocity field sθ(xₜ, t) to transport samples from a simple Gaussian prior π₀ to a target distribution π₁. The authors adopt a simple linear interpolation trajectory xₜ = (1‑t)x₀ + t x₁, where the true velocity is v = x₁ − x₀. The network is trained to minimize the mean‑squared error between sθ(xₜ, t) and v, with t sampled uniformly on [0, 1].
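The training objective described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the network is stood in for by a generic callable `model(x, t)`, and the function names (`fm_training_pair`, `fm_loss`, `euler_sample`) are placeholders chosen here for clarity. Sampling at inference time integrates the learned velocity field from the prior; a basic forward-Euler integrator is shown as one common choice.

```python
import numpy as np

def fm_training_pair(x0, x1, t):
    """Linear-interpolation point x_t = (1 - t) x0 + t x1 and its
    constant target velocity v = x1 - x0 (hypothetical helper)."""
    xt = (1.0 - t) * x0 + t * x1
    v = x1 - x0
    return xt, v

def fm_loss(model, x0, x1, t):
    """Mean-squared error between the predicted velocity model(x_t, t)
    and the true velocity v along the linear trajectory."""
    xt, v = fm_training_pair(x0, x1, t)
    pred = model(xt, t)
    return np.mean((pred - v) ** 2)

def euler_sample(model, x0, n_steps=10):
    """Transport a prior sample x0 toward the target by forward-Euler
    integration of dx/dt = model(x, t) over t in [0, 1]."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * model(x, t)
    return x
```

In DAWN-FM the velocity network additionally receives data and noise embeddings as conditioning inputs; the sketch above shows only the unconditional FM objective for clarity.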