Direct Soft-Policy Sampling via Langevin Dynamics
Soft policies in reinforcement learning define policies as Boltzmann distributions over state-action value functions, providing a principled mechanism for balancing exploration and exploitation. However, realizing such soft policies in practice remains challenging. Existing approaches either depend on parametric policies with limited expressivity or employ diffusion-based policies whose intractable likelihoods hinder reliable entropy estimation in soft policy objectives. We address this challenge by directly realizing soft-policy sampling via Langevin dynamics driven by the action gradient of the Q-function. This perspective leads to Langevin Q-Learning (LQL), which samples actions from the target Boltzmann distribution without explicitly parameterizing the policy. However, directly applying Langevin dynamics suffers from slow mixing in high-dimensional and non-convex Q-landscapes, limiting its practical effectiveness. To overcome this, we propose Noise-Conditioned Langevin Q-Learning (NC-LQL), which integrates multi-scale noise perturbations into the value function. NC-LQL learns a noise-conditioned Q-function that induces a sequence of progressively smoothed value landscapes, enabling sampling to transition from global exploration to precise mode refinement. On OpenAI Gym MuJoCo benchmarks, NC-LQL achieves competitive performance compared to state-of-the-art diffusion-based methods, providing a simple yet powerful solution for online RL.
💡 Research Summary
The paper tackles the long‑standing problem of realizing soft policies—policies defined as Boltzmann distributions over state‑action values—in practical reinforcement learning. Traditional actor‑critic methods rely on parametric Gaussian actors, which lack the expressivity to capture multimodal Boltzmann distributions, while recent diffusion‑based policies achieve higher expressivity at the cost of intractable policy densities and expensive entropy estimation.
The authors observe that the score function of a soft policy, ∇ₐ log π_soft(a|s), is exactly the action‑gradient of the Q‑function, ∇ₐ Q(s,a). This insight eliminates the need for a separate score estimator: the Q‑network itself provides the exact score of the target distribution. Consequently, they propose Langevin Q‑Learning (LQL), an actor‑free algorithm that samples actions directly from the Boltzmann distribution by running Langevin dynamics driven by ∇ₐ Q(s,a). The update rule is
aₜ = aₜ₋₁ + (ε/2) ∇ₐ Q(s,aₜ₋₁) + √ε zₜ, zₜ∼N(0,I).
In the limit of an infinitesimal step size ε and infinitely many steps T, the resulting a_T follows the exact soft policy. LQL therefore removes the actor network, avoids explicit entropy estimation, and guarantees that, in this limit, the sampled actions are exact draws from the target Boltzmann distribution.
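As a minimal sketch, the Langevin update above can be run with a toy quadratic Q in place of a learned Q-network. Everything here (the `grad_q` stand-in, the step size, the number of steps) is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

def langevin_sample(grad_q, a0, eps=1e-2, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics:
    a_t = a_{t-1} + (eps/2) * grad_Q(a_{t-1}) + sqrt(eps) * z_t."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.array(a0, dtype=float)
    for _ in range(n_steps):
        z = rng.standard_normal(a.shape)
        a = a + 0.5 * eps * grad_q(a) + np.sqrt(eps) * z
    return a

# Toy stand-in for grad_a Q(s, a): take Q(s, a) = -|a|^2 / 2, whose Boltzmann
# distribution (at unit temperature) is the standard normal N(0, I).
grad_q = lambda a: -a

samples = np.stack([langevin_sample(grad_q, np.zeros(2), rng=np.random.default_rng(i))
                    for i in range(200)])
# samples should be approximately N(0, I) draws (mean near 0, std near 1).
```

Because the target here is Gaussian, convergence is fast; the paper's point is precisely that for a non-convex learned Q-landscape this vanilla chain mixes slowly, motivating the annealed variant below.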
However, vanilla Langevin dynamics suffers from slow mixing in high‑dimensional, non‑convex Q‑landscapes typical of deep RL. The dynamics can become trapped in local modes because they rely solely on local gradients. To address this, the authors introduce Noise‑Conditioned LQL (NC‑LQL). They define a sequence of noise scales σ₁ > σ₂ > … > σ_L ≈ 0 and construct a noise‑conditioned Q‑function:
Q_NC(s, ã, σ_i) = E_{a∼p(a|ã,s,σ_i)}[Q(s,a)],
the expected value of the original Q over actions consistent with the noise-perturbed action ã at scale σ_i. Larger noise scales induce smoother value landscapes, so annealed Langevin sampling can move from coarse, global exploration at σ₁ to precise mode refinement near σ_L.
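The annealed procedure can be sketched with a Gaussian-smoothed analytic stand-in for the learned noise-conditioned Q. The two-mode landscape, the ε ∝ σ² step-size schedule, and all names below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

MODES = np.array([-2.0, 2.0])  # hypothetical bimodal action landscape

def grad_q_nc(a, sigma):
    """Action-gradient of a stand-in noise-conditioned Q: the base Q is a
    log-mixture of two unit-variance Gaussians centered at MODES, and
    smoothing at noise scale sigma widens each component to variance
    1 + sigma^2, flattening the barrier between the modes."""
    var = 1.0 + sigma ** 2
    w = np.exp(-(a - MODES) ** 2 / (2.0 * var))  # per-mode responsibilities
    return float(w @ (MODES - a)) / (var * w.sum())

def annealed_langevin(sigmas, steps_per_scale=100, rng=None):
    """Run Langevin dynamics through a decreasing schedule
    sigma_1 > ... > sigma_L, shrinking the step size with sigma^2."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal()       # broad initial action
    for sigma in sigmas:            # coarse-to-fine annealing
        eps = 0.1 * sigma ** 2      # assumed step-size schedule
        for _ in range(steps_per_scale):
            a = (a + 0.5 * eps * grad_q_nc(a, sigma)
                 + np.sqrt(eps) * rng.standard_normal())
    return a
```

At the large-σ scales the two modes blur into one broad basin, so chains can cross between them; the final small-σ scale then refines the sample within whichever mode it reached, matching the global-exploration-to-mode-refinement behavior described above.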