Solving Inverse Problems with Flow-based Models via Model Predictive Control
Flow-based generative models provide strong unconditional priors for inverse problems, but guiding their dynamics for conditional generation remains challenging. Recent work casts training-free conditional generation in flow models as an optimal control problem; however, solving the resulting trajectory optimisation is computationally and memory intensive, requiring differentiation through the flow dynamics or adjoint solves. We propose MPC-Flow, a model predictive control framework that formulates inverse problem solving with flow-based generative models as a sequence of control sub-problems, enabling practical optimal control-based guidance at inference time. We provide theoretical guarantees linking MPC-Flow to the underlying optimal control objective and show how different algorithmic choices yield a spectrum of guidance algorithms, including regimes that avoid backpropagation through the generative model trajectory. We evaluate MPC-Flow on benchmark image restoration tasks, spanning linear and non-linear settings such as in-painting, deblurring, and super-resolution, and demonstrate strong performance and scalability to massive state-of-the-art architectures via training-free guidance of FLUX.2 (32B) in a quantised setting on consumer hardware.
💡 Research Summary
The paper introduces MPC-Flow, a model-predictive-control (MPC) framework for solving inverse problems with pre-trained flow-based generative models. Continuous normalising flows (CNFs) trained via flow matching provide a powerful unconditional prior: a time-dependent vector field $v_\theta(x, t)$ defines an ODE that transports samples from a simple base distribution to the data distribution. Existing training-free conditioning methods simply inject data-fidelity terms into the flow dynamics, but they lack theoretical guarantees and often suffer from instability and high memory consumption.
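To make the unconditional prior concrete, here is a minimal sketch of sampling from a flow-matching model by integrating the ODE $\dot x = v_\theta(x, t)$ with forward Euler. The vector field `toy_v` is a hypothetical stand-in for a trained network $v_\theta$; the function name and step count are illustrative, not from the paper.

```python
import numpy as np

def sample_flow(v_theta, x0, n_steps=100):
    """Integrate dx/dt = v_theta(x, t) from t=0 to t=1 with forward Euler,
    transporting a base sample x0 to (approximately) the data distribution."""
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * v_theta(x, t)
    return x

# Toy linear field standing in for a trained network: v(x, t) = -x
# contracts samples toward the origin, so x(1) ~ exp(-1) * x(0).
toy_v = lambda x, t: -x
x1 = sample_flow(toy_v, np.array([1.0, -2.0]))
```

In practice $v_\theta$ is a large neural network and the integrator may be higher-order, but the sampling loop has exactly this shape.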
Recent work reformulated conditional sampling as a deterministic optimal-control problem: minimise the control energy $\int_0^1 \|u(t)\|^2 \, dt$ plus a terminal loss $\Phi(x(1))$ (e.g., a Gaussian-likelihood term) subject to the controlled dynamics $\dot x = v_\theta(x, t) + u(t)$. Solving this problem directly requires back-propagation through the entire flow trajectory or solving adjoint equations, both of which scale poorly ($O(N)$ memory, $O(N^2)$ compute) for long horizons and large models.
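Assembled from the pieces above, the global control problem reads (the Gaussian terminal loss shown is one illustrative choice for a linear inverse problem $y = Ax + \varepsilon$, matching the "e.g." in the text):

```latex
\min_{u}\;\; \int_0^1 \|u(t)\|^2 \, dt \;+\; \Phi(x(1))
\quad \text{s.t.} \quad \dot x(t) = v_\theta(x(t), t) + u(t),\;\; x(0) = x_0,
```

with, for instance, $\Phi(x(1)) = \tfrac{1}{2\sigma^2}\|y - A\,x(1)\|^2$ for observations $y$, forward operator $A$, and noise level $\sigma$. The $O(N)$ memory / $O(N^2)$ compute cost arises because the terminal gradient $\nabla \Phi(x(1))$ must be propagated back through all $N$ discretisation steps of this trajectory.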
MPC-Flow tackles this bottleneck by decomposing the global optimisation into a sequence of short-horizon sub-problems, exactly as in classical MPC used in robotics and process control. At each time step $t$ the current state $\hat x_t$ is taken as the initial condition, a planning horizon $H$ is chosen, and an optimal control trajectory over the window $[t, t+H]$ is computed; only the first control action is executed before the window shifts forward and the optimisation is repeated.
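The receding-horizon loop can be sketched as follows on a toy 1-D problem. This is an illustrative reconstruction, not the paper's exact algorithm: the gradient of the horizon loss is estimated here by finite differences as a cheap stand-in for back-propagation or adjoint solves, the terminal loss is applied at the end of the short window rather than at $t = 1$, and all names (`mpc_flow_step`, `rollout_loss`) and hyperparameters are assumptions.

```python
import numpy as np

def mpc_flow_step(v, x, t, H, dt, terminal_loss, iters=50, lr=0.5, eps=1e-4):
    """Plan controls over the short window [t, t + H*dt], execute only the first.

    The MPC surrogate objective is control energy plus the terminal loss
    evaluated at the end of the window (a proxy for the true loss at t = 1).
    """
    u = np.zeros((H,) + x.shape)  # one control per step of the horizon

    def rollout_loss(u):
        z = x.copy()
        for k in range(H):  # short Euler rollout under the controlled dynamics
            z = z + dt * (v(z, t + k * dt) + u[k])
        return terminal_loss(z) + dt * np.sum(u ** 2)  # fidelity + energy

    for _ in range(iters):  # gradient descent on the horizon controls
        base = rollout_loss(u)
        g = np.zeros_like(u)
        for idx in np.ndindex(u.shape):  # finite-difference gradient estimate
            up = u.copy()
            up[idx] += eps
            g[idx] = (rollout_loss(up) - base) / eps
        u -= lr * g
    return x + dt * (v(x, t) + u[0])  # apply only the first control, then replan

# Toy setup: prior field v(x, t) = -x, scalar observation y with A = identity.
y = np.array([0.5])
loss = lambda z: np.sum((z - y) ** 2)
x, dt, N = np.array([2.0]), 0.1, 10
for i in range(N):
    x = mpc_flow_step(lambda z, t: -z, x, i * dt,
                      H=min(3, N - i), dt=dt, terminal_loss=loss)
```

Because each sub-problem only differentiates through $H \ll N$ steps, memory stays bounded by the window length; swapping the finite-difference estimate for true gradients (or gradient-free updates) recovers the spectrum of regimes mentioned in the abstract.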