Efficient Diffusion as Low Light Enhancer
The computational burden of the iterative sampling process remains a major challenge in diffusion-based Low-Light Image Enhancement (LLIE). Current acceleration methods, whether training-based or training-free, often lead to significant performance degradation, highlighting the trade-off between performance and efficiency. In this paper, we identify two primary factors contributing to this degradation: fitting errors and the inference gap. Our key insight is that fitting errors can be mitigated by linearly extrapolating the incorrect score functions, while the inference gap can be reduced by shifting the Gaussian flow to a reflectance-aware residual space. Based on these insights, we design the Reflectance-Aware Trajectory Refinement (RATR) module, a simple yet effective module that refines the teacher trajectory using the reflectance component of images. Building on this, we introduce Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT), an efficient and flexible distillation framework tailored for LLIE. Our framework matches the performance of previous multi-step diffusion-based methods in just 2 steps, while establishing new state-of-the-art (SOTA) results with 8 or 4 steps. Comprehensive experiments on 10 benchmark datasets validate the effectiveness of our method, which consistently outperforms existing SOTA methods.
💡 Research Summary
The paper tackles the long‑standing efficiency bottleneck of diffusion‑based low‑light image enhancement (LLIE). While diffusion models have demonstrated impressive visual quality for LLIE, their iterative denoising process typically requires hundreds or thousands of steps, making them impractical for real‑time or edge‑device deployment. Existing acceleration techniques—either post‑hoc samplers or step‑wise distillation—reduce the number of steps but inevitably cause a noticeable drop in performance.
The authors identify two fundamental sources of degradation: (1) fitting error, the inevitable mismatch between the teacher model’s learned score function and the ideal score that would perfectly fit the data; and (2) inference gap, the distributional shift that arises because diffusion models are trained on a pure Gaussian noise flow, whereas LLIE demands a more deterministic transformation from a low‑light image to its well‑exposed counterpart.
To address these issues, the paper proposes three intertwined ideas. First, it mitigates fitting error by linearly extrapolating the teacher’s score function. A scaling factor ω∈(0,1] blends the teacher’s raw score ϵη with an ideal reference ˜ϵ, yielding a corrected term ω·ϵη+(1−ω)·˜ϵ that is less biased and more suitable for distillation. Second, it reduces the inference gap by shifting the diffusion trajectory into a residual space. Instead of using pure Gaussian noise ϵ, the method defines a residual ˜ϵ = (x_t – α_t·˜x₀)/σ_t, where ˜x₀ is an intermediate image lying between the low‑light input and the clean target. This residual space provides a distribution that is closer to the true data manifold, easing the student’s learning task.
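The two corrections above can be sketched as plain functions. This is a minimal illustration of the formulas as summarized here, not the paper's implementation; the function names are hypothetical:

```python
def residual_reference(x_t, x0_tilde, alpha_t, sigma_t):
    """Residual-space reference: ~eps = (x_t - alpha_t * x0_tilde) / sigma_t.

    x0_tilde is an intermediate image between the low-light input and the
    clean target; anchoring the flow on it narrows the inference gap.
    """
    return (x_t - alpha_t * x0_tilde) / sigma_t


def refined_score(eps_teacher, eps_ref, omega=0.5):
    """Linear extrapolation of the teacher score toward the reference:
    omega * eps_teacher + (1 - omega) * ~eps, with omega in (0, 1]."""
    return omega * eps_teacher + (1.0 - omega) * eps_ref
```

Both functions are dtype-agnostic, so they apply equally to scalars or to image tensors broadcast elementwise.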
The practical realization of the residual shift is the Reflectance-Aware Trajectory Refinement (RATR) module. Inspired by Retinex theory, the authors decompose the low-light image y into illumination and reflectance components. The illumination map h′ is approximated by the maximum channel of y, while a non-learning denoiser ψ(·) supplies a noise estimate z′ = |y − ψ(y)|. The refined intermediate image is then ˜x₀ = y − z′·h′. This reflectance-based ˜x₀ is injected into the teacher's trajectory, producing a refined path ˜x_η = ω·x_η + (1−ω)·˜x, where ˜x follows the residual dynamics.
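A rough sketch of the RATR computation, assuming an HWC image in [0, 1] and using a simple box blur as a stand-in for the unspecified non-learning denoiser ψ(·) (the paper's actual choice of denoiser may differ):

```python
import numpy as np


def box_blur(img, k=5):
    """Box-blur stand-in for the non-learning denoiser psi(.)."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)


def ratr_intermediate(y):
    """Reflectance-aware intermediate image: x0_tilde = y - z' * h'."""
    h = np.max(y, axis=-1, keepdims=True)  # illumination h' ~ max channel
    z = np.abs(y - box_blur(y))            # noise estimate z' = |y - psi(y)|
    return y - z * h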
With a refined teacher trajectory in hand, the authors' third idea is trajectory distillation. They define a second-order trajectory decoder Gθ that maps a latent at time t to an earlier time s using the student's score function ϵθ. By also considering an intermediate step u between t and s, the decoder takes a higher-order jump along the trajectory, letting the student reproduce the refined teacher path in far fewer sampling steps.
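The second-order jump can be sketched as a generic Heun-style update built on DDIM steps. This is an assumption-laden illustration of how a two-evaluation decoder from t through u to s typically works, not the paper's exact Gθ; the schedule and function names are hypothetical:

```python
def ddim_step(x_t, eps, alpha_s, sigma_s, alpha_t, sigma_t):
    """One deterministic DDIM update from time t to time s."""
    x0_pred = (x_t - sigma_t * eps) / alpha_t  # predicted clean image
    return alpha_s * x0_pred + sigma_s * eps


def second_order_decode(x_t, score_fn, t, u, s, schedule):
    """Heun-like jump t -> u -> s using two student score evaluations.

    schedule(t) returns the pair (alpha_t, sigma_t).
    """
    a_t, s_t = schedule(t)
    a_u, s_u = schedule(u)
    a_s, s_s = schedule(s)
    eps_t = score_fn(x_t, t)                       # first evaluation at t
    x_u = ddim_step(x_t, eps_t, a_u, s_u, a_t, s_t)
    eps_u = score_fn(x_u, u)                       # second evaluation at u
    # average the two evaluations for a second-order jump to s
    return ddim_step(x_t, 0.5 * (eps_t + eps_u), a_s, s_s, a_t, s_t)
```

With only two score evaluations per jump, a student distilled against such a decoder can traverse the refined teacher trajectory in the 2–8 steps reported in the abstract.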