Posterior Sampling by Combining Diffusion Models with Annealed Langevin Dynamics

Reading time: 1 minute

📝 Original Info

  • Title: Posterior Sampling by Combining Diffusion Models with Annealed Langevin Dynamics
  • ArXiv ID: 2510.26324
  • Date: 2025-10-30
  • Authors: Not available (author information was not included in the provided text)

📝 Abstract

Given a noisy linear measurement $y = Ax + \xi$ of a distribution $p(x)$, and a good approximation to the prior $p(x)$, when can we sample from the posterior $p(x \mid y)$? Posterior sampling provides an accurate and fair framework for tasks such as inpainting, deblurring, and MRI reconstruction, and several heuristics attempt to approximate it. Unfortunately, approximate posterior sampling is computationally intractable in general. To sidestep this hardness, we focus on (local or global) log-concave distributions $p(x)$. In this regime, Langevin dynamics yields posterior samples when the exact scores of $p(x)$ are available, but it is brittle to score-estimation error, requiring an MGF bound (sub-exponential error). By contrast, in the unconditional setting, diffusion models succeed with only an $L^2$ bound on the score error. We prove that combining diffusion models with an annealed variant of Langevin dynamics achieves conditional sampling in polynomial time using merely an $L^4$ bound on the score error.
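
As a rough illustration of the setting described in the abstract (not the paper's exact algorithm or guarantees), below is a minimal sketch of annealed Langevin posterior sampling for a Gaussian linear measurement model $y = Ax + \xi$. The function names (`annealed_langevin_posterior`, `gaussian_prior_score`), the geometric noise schedule, and the step-size rule are illustrative assumptions; a closed-form score of a standard Gaussian prior stands in for a learned diffusion score model.

```python
import numpy as np

def gaussian_prior_score(x, sigma):
    # Score of the sigma-smoothed standard Gaussian prior N(0, I):
    # grad_x log N(x; 0, (1 + sigma^2) I) = -x / (1 + sigma^2).
    # In practice this would be replaced by a learned (approximate) score model.
    return -x / (1.0 + sigma**2)

def annealed_langevin_posterior(y, A, noise_std, score_prior, sigmas,
                                steps_per_level=50, step_scale=1e-2, rng=None):
    """Generic annealed Langevin sampler targeting p(x | y) for y = A x + xi,
    xi ~ N(0, noise_std^2 I), given an (approximate) prior score function."""
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[1]
    x = rng.standard_normal(d)
    for sigma in sigmas:                  # anneal from large to small noise level
        eps = step_scale * sigma**2       # step size shrinks with the noise level
        for _ in range(steps_per_level):
            # Posterior score = prior score + Gaussian likelihood score.
            grad = score_prior(x, sigma) + A.T @ (y - A @ x) / noise_std**2
            x = x + 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal(d)
    return x

# Toy usage: recover x from a noisy linear measurement under a Gaussian prior.
rng = np.random.default_rng(0)
d, m = 8, 4
A = rng.standard_normal((m, d)) / np.sqrt(m)
x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(m)
sigmas = np.geomspace(1.0, 0.01, num=10)
sample = annealed_langevin_posterior(y, A, 0.1, gaussian_prior_score, sigmas, rng=rng)
print(sample)
```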

💡 Deep Analysis

📄 Full Content

Reference

This content is AI-processed based on open access ArXiv data.
