Guaranteeing Higher Order Convergence Rates for Accelerated Wasserstein Gradient Flow Schemes
In this paper, we study higher-order-accurate-in-time minimizing movement schemes for Wasserstein gradient flows. We introduce a novel accelerated second-order scheme, leveraging the differential structure of the Wasserstein space in both Eulerian and Lagrangian coordinates. For sufficiently smooth energy functionals, we show that our scheme provably achieves an optimal quadratic convergence rate. Under the weaker assumptions of Wasserstein differentiability and $λ$-displacement convexity (for any $λ\in \mathbb{R}$), we show that our scheme still achieves a first-order convergence rate and enjoys strong numerical stability. In particular, we show that the energy is nearly monotone in general, while when the energy is $L$-smooth and $λ$-displacement convex (with $λ>0$), we prove that the energy is non-increasing and the norm of the Wasserstein gradient decreases exponentially along the iterates. Taken together, our work provides the first fully rigorous proof of accelerated second-order convergence rates for smooth functionals and shows that the scheme performs no worse than the classical JKO scheme for functionals that are $λ$-displacement convex and Wasserstein differentiable.
💡 Research Summary
The paper addresses the long‑standing problem of obtaining rigorous higher‑order time discretizations for Wasserstein gradient flows, which are evolution equations of the form
∂ₜρ – ∇·(ρ∇(δϕ/δρ)) = 0 on (0,∞)×ℝᵈ. The classical Jordan‑Kinderlehrer‑Otto (JKO) scheme provides an unconditionally stable, first‑order (O(τ)) variational time stepping, but no provably faster scheme has been known; existing higher‑order proposals either lack quantitative error bounds or achieve only a sub‑optimal O(√τ) rate.
The authors introduce a novel accelerated second‑order scheme by exploiting the differential structure of the Wasserstein space in both Eulerian and Lagrangian coordinates. They lift the energy functional ϕ:𝒫₂(ℝᵈ)→ℝ∪{+∞} to a functional on the Hilbert space H:=L²(ℝᵈ;ρ₀) via ϕ♯ρ₀(X)=ϕ(X♯ρ₀). In this lifted setting the gradient flow becomes an ordinary differential equation
dX/dt = –∇ϕ♯ρ₀(X) in H, where X(t,·) is the Lagrangian map pushing forward the reference measure ρ₀. This reformulation allows the authors to apply classical finite‑difference ideas directly in a linear space.
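The lifted ODE can be illustrated with a minimal particle sketch. Here we take a hypothetical potential energy ϕ(ρ) = ∫V dρ with V(x) = |x|²/2 (our own example, not one from the paper), so the lifted gradient is simply ∇V applied pointwise to the Lagrangian map, and we discretize ρ₀ by samples:

```python
import numpy as np

# Hypothetical example energy: phi(rho) = ∫ V drho with V(x) = |x|^2 / 2.
# Lifted to H = L^2(rho_0): phi♯rho_0(X) = E_{rho_0}[V(X)], gradient ∇V(X) pointwise.
def grad_lifted(X):
    return X  # ∇V(x) = x for V(x) = |x|^2 / 2

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # particle approximation of the identity map on rho_0

# Forward Euler on dX/dt = -∇phi♯rho_0(X): a first-order baseline, not the
# accelerated scheme itself.
tau, n_steps = 0.1, 50
for _ in range(n_steps):
    X = X - tau * grad_lifted(X)

# The particles (and hence the pushed-forward measure) contract toward the
# minimizer of V, here the origin.
```

Because H is a flat Hilbert space, any classical ODE integrator can be applied to the lifted equation; the paper's contribution is showing which discretizations retain rigorous rates at the level of the measures X♯ρ₀.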
The accelerated scheme is defined variationally as
X_{τ}^{n+1} = argmin_{ξ∈H} ½ϕ♯ρ₀(ξ) + ½⟨∇ϕ♯ρ₀(X_{τ}^{n}), ξ⟩ + (1/2τ)‖ξ − X_{τ}^{n}‖²_H,
which yields the trapezoidal update
X_{τ}^{n+1} = X_{τ}^{n} – (τ/2)(∇ϕ♯ρ₀(X_{τ}^{n}) + ∇ϕ♯ρ₀(X_{τ}^{n+1})),
i.e., the first‑order optimality condition of the minimization problem above.
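A trapezoidal update of this form can be sketched as follows. The inner fixed-point solve and the quadratic test energy (again V(x) = |x|²/2, so the lifted gradient is the identity) are our own illustrative choices, not the paper's solver; the sketch only shows the second-order behavior of the update rule:

```python
import numpy as np

def grad(X):
    # Assumed lifted gradient for the toy energy V(x) = |x|^2 / 2.
    return X

def trapezoidal_step(X, tau, n_inner=20):
    """One implicit trapezoidal step, solved by fixed-point iteration
    (contractive whenever tau * L / 2 < 1 for an L-smooth energy)."""
    g = grad(X)
    X_next = X - tau * g  # explicit Euler as the initial guess
    for _ in range(n_inner):
        X_next = X - 0.5 * tau * (g + grad(X_next))
    return X_next

def solve(x0, tau, T=1.0):
    X = np.array([x0])
    for _ in range(round(T / tau)):
        X = trapezoidal_step(X, tau)
    return X[0]

# For dX/dt = -X the exact flow is e^{-t} x0; halving tau should cut the
# error by about 4, consistent with a quadratic rate in tau.
exact = np.exp(-1.0)
err_coarse = abs(solve(1.0, 0.10) - exact)
err_fine = abs(solve(1.0, 0.05) - exact)
print(err_coarse / err_fine)  # close to 4
```

On this linear test problem the halved step size reduces the error by roughly a factor of four, matching the O(τ²) rate the paper proves for sufficiently smooth functionals.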