High-accuracy sampling for diffusion models and log-concave distributions
We present algorithms for diffusion model sampling that obtain $δ$-error in $\mathrm{polylog}(1/δ)$ steps, given access to $\widetilde O(δ)$-accurate score estimates in $L^2$. This is an exponential improvement over all previous results. Specifically, under minimal data assumptions, the complexity is $\widetilde O(d\,\mathrm{polylog}(1/δ))$ where $d$ is the dimension of the data; under a non-uniform $L$-Lipschitz condition, the complexity is $\widetilde O(\sqrt{dL}\,\mathrm{polylog}(1/δ))$; and if the data distribution has intrinsic dimension $d_\star$, then the complexity reduces to $\widetilde O(d_\star\,\mathrm{polylog}(1/δ))$. Our approach also yields the first $\mathrm{polylog}(1/δ)$ complexity sampler for general log-concave distributions using only gradient evaluations.
💡 Research Summary
The paper introduces a novel meta‑algorithm called First‑Order Rejection Sampling (FORS) that achieves high‑accuracy sampling for diffusion models and general log‑concave distributions using only gradient (score) information. Traditional diffusion samplers that rely on score estimates require a number of steps that scales polynomially in the target error δ (e.g., 1/δ or 1/δ²), because discretization of stochastic differential equations introduces bias that cannot be eliminated without density evaluations. The authors observe that the problem of generating a sample from a density proportional to q(x)·e^{w(x)} can be solved without ever computing w(x) if one can draw unbiased estimates of w(x). This observation is a continuous‑space analogue of the classic Bernoulli‑factory problem.
FORS works as follows. For each candidate point x drawn from a proposal distribution q, a Poisson random variable J∼Poisson(2B) is sampled, where B bounds the magnitude of the unbiased estimator W_i of w(x). Then J i.i.d. copies W₁,…,W_J are drawn from a distribution W_x with E[W_i] = w(x), and the candidate is accepted with probability ∏_{i=1}^{J} (B+W_i)/(2B). Since each factor lies in [0, 1], this product is a valid acceptance probability, and averaging over J and the W_i gives an overall acceptance probability of exactly e^{w(x)−B}, so accepted candidates are distributed proportionally to q(x)·e^{w(x)} even though w(x) is never computed.
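The acceptance test described above can be sketched in a few lines of Python. This is a minimal illustration of the Poisson-based acceptance rule, not the paper's implementation: the function names, the constant-w estimator, and the bound B below are all hypothetical choices made for the example. The key identity it relies on is E[∏_{i=1}^{J}(B+W_i)/(2B)] = e^{w(x)−B} when J∼Poisson(2B) and E[W_i] = w(x).

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for moderate rates.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def fors_accept(w_estimator, B, rng):
    """One FORS-style acceptance test for a fixed candidate x.

    w_estimator() returns an unbiased estimate W of w(x) with |W| <= B.
    Returns True with probability exp(w(x) - B), without ever
    evaluating w(x) itself.
    """
    J = sample_poisson(2 * B, rng)
    # Each factor (B + W_i) / (2B) lies in [0, 1] because |W_i| <= B,
    # so the product is a valid (random) acceptance probability.
    p = 1.0
    for _ in range(J):
        p *= (B + w_estimator()) / (2 * B)
    return rng.random() < p
```

For a sanity check, if w(x) = 0.3 and B = 1, the empirical acceptance rate over many trials should concentrate around e^{0.3−1} ≈ 0.497, matching the identity above.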