Constructive conditional normalizing flows


Motivated by applications in conditional sampling, given a probability measure $μ$ and a diffeomorphism $ϕ$, we consider the problem of simultaneously approximating $ϕ$ and the pushforward $ϕ_{#}μ$ by means of the flow of a continuity equation whose velocity field is a perceptron neural network with piecewise constant weights. We provide an explicit construction based on a polar-like decomposition of the Lagrange interpolant of $ϕ$. The latter involves a compressible component, given by the gradient of a particular convex function, which can be realized exactly, and an incompressible component, which – after approximating via permutations – can be implemented through shear flows intrinsic to the continuity equation. For more regular maps $ϕ$ – such as the Knöthe-Rosenblatt rearrangement – we provide an alternative, probabilistic construction inspired by the Maurey empirical method, in which the number of discontinuities in the weights does not scale exponentially with the ambient dimension.


💡 Research Summary

The paper addresses the constructive realization of conditional normalizing flows by approximating a given diffeomorphism ϕ and its push‑forward ϕ#μ using the flow of a continuity equation whose velocity field is a perceptron neural network with piecewise‑constant weights. The authors present two main results.
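
For intuition, here is a minimal Python sketch, a toy under assumptions of our own (ReLU activation, a single hidden neuron, explicit Euler integration) rather than the paper's construction, of how a piecewise-constant control generates a flow map: each constancy interval of the weights contributes one elementary flow, and switching the weights concatenates (composes) these flows.

```python
import numpy as np

def sigma(z):
    """ReLU activation (our choice; the paper's perceptron fields use a nonlinearity)."""
    return np.maximum(z, 0.0)

def flow_map(x0, controls, dt=1e-3):
    """Euler approximation of the time-T flow of  x'(t) = w * sigma(<a, x> + b),
    where the weights theta(t) = (w, a, b) are constant on each subinterval.

    controls: list of (duration, w, a, b), one tuple per constancy interval.
    """
    x = np.asarray(x0, dtype=float).copy()
    for duration, w, a, b in controls:
        for _ in range(int(round(duration / dt))):
            x += dt * w * sigma(a @ x + b)
    return x

# Two switches of the control: each piece below generates a divergence-free
# shear flow, and concatenating the pieces composes the corresponding maps.
controls = [
    (1.0, np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.0),  # x1' = relu(x2)
    (1.0, np.array([0.0, 1.0]), np.array([1.0, 0.0]), 0.0),  # x2' = relu(x1)
]
print(flow_map([0.5, 0.5], controls))  # ~ [1.0, 1.5]
```

Shear pieces like these are also the reason the incompressible component of the abstract can be implemented at all: their velocity fields are divergence-free, so each piece preserves volume.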

The first theorem (Theorem 1.1) works for any C¹ diffeomorphism ϕ and a bounded density ρ. For any rectangular domain Ω and tolerance ε>0, they construct a piecewise‑constant control θ(t) with finitely many switches such that the time‑T flow ϕ_T^θ approximates ϕ in Lᵖ(Ω) norm and the push‑forward measures in total variation (TV) distance, each within ε. The construction proceeds by first approximating ϕ with a Lagrange interpolant ϕ_ε, then performing a polar‑like factorization of ϕ_ε into three components: a compressible map g_ε = ∇φ (the gradient of a convex function) and two measure‑preserving maps m¹_ε, m²_ε. The compressible part can be implemented exactly by a flow of the neural ODE with piecewise‑constant parameters (Lemma 2.9). The measure‑preserving parts are approximated by permutations of a fine cubical tiling (Lemma 2.6); each permutation is realized as a finite composition of explicit divergence‑free “swap” flows (Lemma 2.8). By concatenating these three flows, the authors obtain ϕ_T^θ, and Lemma 2.11 provides stability estimates that transfer the Lᵖ error into a TV error for the push‑forward measures.
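
The reduction of a permutation to elementary swaps in Lemma 2.8 has a simple combinatorial shadow. The sketch below (our own illustrative code, not from the paper) factors any permutation of tiling cells into adjacent transpositions via bubble sort; each transposition plays the role of one divergence-free swap flow exchanging two neighboring cubes.

```python
def adjacent_swaps(perm):
    """Factor a permutation of {0, ..., n-1} into adjacent transpositions
    via bubble sort. Each swap (i, i+1) is the combinatorial analogue of a
    divergence-free swap flow exchanging two neighboring cubes of the tiling."""
    perm, swaps = list(perm), []
    for _ in range(len(perm)):
        for i in range(len(perm) - 1):
            if perm[i] > perm[i + 1]:
                perm[i], perm[i + 1] = perm[i + 1], perm[i]
                swaps.append((i, i + 1))
    return swaps

# Applying these transpositions in reverse order to the identity arrangement
# reproduces the original permutation (each transposition is an involution).
print(adjacent_swaps([2, 0, 3, 1]))  # [(0, 1), (2, 3), (1, 2)]
```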

The second theorem (Theorem 1.2) targets smoother diffeomorphisms belonging to the Sobolev space Hˢ with s > d/2 + 2 and finite action A_s(ϕ). They write ϕ as the time‑1 flow of a vector field u(t,·) ∈ L²((0,1); Hˢ) and discretize this field probabilistically, in the spirit of the Maurey empirical method, so that the number of discontinuities in the weights of the resulting piecewise‑constant control does not scale exponentially with the ambient dimension.
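
The dimension-independence characteristic of Maurey-type arguments can be seen in a generic setting (our illustration, not the paper's proof): the empirical mean of N bounded i.i.d. samples approximates its expectation in L² at rate 1/√N, with a constant that does not depend on the ambient dimension d.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_error(d, N, trials=200):
    """Mean L2 error of approximating f = E[X] (= 0 here by symmetry) by the
    average of N i.i.d. samples X_i drawn uniformly from the unit sphere in R^d.
    Maurey-type bounds give error <= sup||X_i|| / sqrt(N), uniformly in d."""
    errs = []
    for _ in range(trials):
        X = rng.normal(size=(N, d))
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # project onto the sphere
        errs.append(np.linalg.norm(X.mean(axis=0)))    # distance to E[X] = 0
    return float(np.mean(errs))

for d in (2, 20, 200):
    print(d, empirical_error(d, N=100))  # stays near 1/sqrt(100) = 0.1 for every d
```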

