ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation


Diffusion models have achieved remarkable generation quality, but they suffer from significant inference cost due to their reliance on multiple sequential denoising steps, motivating recent efforts to distill this inference process into a few-step regime. However, existing distillation methods typically approximate the teacher trajectory with linear shortcuts, which struggle to match its constantly changing tangent directions as velocities evolve across timesteps, leading to quality degradation. To address this limitation, we propose ArcFlow, a few-step distillation framework that explicitly employs non-linear flow trajectories to approximate pre-trained teacher trajectories. Concretely, ArcFlow parameterizes the velocity field underlying the inference trajectory as a mixture of continuous momentum processes. This enables ArcFlow to capture velocity evolution and extrapolate coherent velocities to form a continuous non-linear trajectory within each denoising step. Importantly, this parameterization admits an analytical integration of the non-linear trajectory, which circumvents numerical discretization errors and yields a high-precision approximation of the teacher trajectory. To train this parameterization into a few-step generator, we implement ArcFlow via trajectory distillation on pre-trained teacher models using lightweight adapters. This strategy ensures fast, stable convergence while preserving generative diversity and quality. Built on large-scale models (Qwen-Image-20B and FLUX.1-dev), ArcFlow fine-tunes fewer than 5% of the original parameters and achieves a 40x speedup with 2 NFEs over the original multi-step teachers without significant quality degradation. Experiments on benchmarks demonstrate the effectiveness of ArcFlow both qualitatively and quantitatively.


💡 Research Summary

ArcFlow tackles the long‑standing trade‑off between inference speed and image quality in diffusion‑based text‑to‑image generation. Conventional few‑step distillation methods approximate the teacher’s multi‑step denoising trajectory with linear shortcuts, which fail to capture the continuously changing velocity (tangent) directions across timesteps, leading to noticeable quality loss. ArcFlow replaces this linear approximation with an explicit non‑linear flow trajectory by parameterizing the velocity field as a mixture of continuous momentum processes.

Specifically, the velocity at latent xₜ and time t is expressed as a weighted sum over K modes: vθ(xₜ,t)=∑ₖπₖ(xₜ)·vₖ(xₜ)·γₖ(xₜ)^{1−t}. Here, vₖ denotes a basic velocity vector, γₖ a momentum factor controlling exponential decay or growth over time, and πₖ a gating probability that ensures the mixture weights sum to one. Theorem 1 proves that with K≥N modes, the model can exactly match the teacher's velocity at any N sampled timesteps, guaranteeing the capacity to represent the teacher's trajectory.
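As a concrete illustration, the mixture can be sketched in a few lines of NumPy. Here `pi`, `v`, and `gamma` stand in for the network's gating, velocity, and momentum outputs at a fixed latent xₜ; treating them as precomputed arrays (rather than functions of xₜ, as in the paper) is a simplifying assumption of this sketch:

```python
import numpy as np

def mixture_velocity(t, pi, v, gamma):
    """Sketch of the ArcFlow velocity parameterization
    v_theta(x_t, t) = sum_k pi_k * v_k * gamma_k**(1 - t).

    pi:    (K,) gating weights at latent x_t, assumed to sum to one
    v:     (K, D) per-mode basic velocity vectors
    gamma: (K,) per-mode momentum factors (> 0)

    In the paper pi, v, gamma are predicted by the network as functions
    of x_t; here they are taken as given arrays for illustration.
    """
    decay = gamma ** (1.0 - t)                             # (K,) exponential time factor
    return (pi[:, None] * v * decay[:, None]).sum(axis=0)  # (D,) mixture velocity
```

Note that with a single mode (K=1) and γ=1 the time dependence vanishes and the field reduces to a constant velocity, i.e. exactly the linear shortcut that prior distillation methods use.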

Because each mode follows an exponential time dependence, the overall velocity field admits a closed‑form integral. The authors derive an analytic transition operator Φ that maps a latent from source time ts to target time te in a single step: Φ(x_{ts},ts,te;θ)=∑ₖπₖ(x_{ts})·vₖ(x_{ts})·C(γₖ,ts,te), where C(γ,ts,te) = (γ^{1−te}−γ^{1−ts})/ln γ for γ≠1 and reduces to ts−te when γ=1. This analytic solution eliminates discretization error and enables exact propagation with only two function evaluations (2 NFEs).
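Under the same toy setup (precomputed `pi`, `v`, `gamma` at the source latent), the closed-form update can be sketched as below. Writing the map as a displacement, x_{te} = x_{ts} + Φ, is an assumption about the sign convention; the coefficient C follows the formula above, including its γ→1 limit:

```python
import numpy as np

def C(gamma, ts, te):
    """Closed-form integral coefficient for one momentum mode:
    (gamma**(1-te) - gamma**(1-ts)) / ln(gamma), with the
    gamma -> 1 limit equal to ts - te (per the summary's convention)."""
    if np.isclose(gamma, 1.0):
        return ts - te
    return (gamma ** (1.0 - te) - gamma ** (1.0 - ts)) / np.log(gamma)

def analytic_step(x, pi, v, gamma, ts, te):
    """One-shot transition from time ts to te: each mode's exponential
    time profile is integrated exactly, so no numerical ODE solver
    (and hence no discretization error) is involved."""
    coeff = np.array([C(g, ts, te) for g in gamma])        # (K,) exact integrals
    phi = (pi[:, None] * v * coeff[:, None]).sum(axis=0)   # (D,) displacement
    return x + phi
```

Because C is the exact time integral of each mode's γ^{1−t} profile, a single call to `analytic_step` replaces what a numerical solver would approximate with many small Euler steps, which is what makes the 2-NFE regime feasible.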

Training employs a mixed‑integration curriculum. For each interval

