Optimistic Training and Convergence of Q-Learning -- Extended Version
Recent work shows that Q-learning with linear function approximation is stable, in the sense of bounded parameter estimates, under the $(\varepsilon,\kappa)$-tamed Gibbs policy; $\kappa$ is the inverse temperature, and $\varepsilon>0$ is introduced for additional exploration. Under these assumptions it also follows that there is a solution to the projected Bellman equation (PBE). Left open are uniqueness of the solution and criteria for convergence outside of the standard tabular or linear MDP settings. The present work extends these results to other variants of Q-learning and clarifies prior work: a one-dimensional example shows that under an oblivious training policy there may be no solution to the PBE, or multiple solutions, and in each case the algorithm is not stable under oblivious training. The main contribution is to show that far more structure is required for convergence. An example is presented for which the basis is ideal, in the sense that the true Q-function lies in the span of the basis; nevertheless, there are two solutions to the PBE under the greedy policy, and hence also under the $(\varepsilon,\kappa)$-tamed Gibbs policy for all sufficiently small $\varepsilon>0$ and all $\kappa\ge 1$.
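The setting of the abstract can be sketched concretely. The following is a minimal, illustrative implementation of Q-learning with linear function approximation on a synthetic toy MDP, explored with a softmax (Gibbs) policy. The normalization of the logits by $\max(1,\|\theta\|)$ is one plausible reading of the "taming"; the paper's exact definition may differ, and all MDP data below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP (sizes, dynamics, and features are illustrative, not from the paper).
n_s, n_a, d = 4, 2, 3
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a] = next-state distribution
R = rng.standard_normal((n_s, n_a))                # one-step rewards
phi = rng.standard_normal((n_s, n_a, d))           # basis: Q_theta(s, a) = phi[s, a] @ theta
gamma = 0.9

def gibbs_policy(theta, s, eps=0.1, kappa=5.0):
    # Softmax over Q-values with inverse temperature kappa.  Dividing the
    # logits by max(1, ||theta||) is an assumed form of "taming" that keeps
    # them bounded; the eps-mixture with the uniform distribution supplies
    # the additional exploration.
    logits = kappa * (phi[s] @ theta) / max(1.0, np.linalg.norm(theta))
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return (1.0 - eps) * p + eps / n_a

theta = np.zeros(d)
s = 0
for n in range(1, 20001):
    a = rng.choice(n_a, p=gibbs_policy(theta, s))
    s_next = rng.choice(n_s, p=P[s, a])
    # Standard Q(0) temporal-difference update with vanishing gain 1/n.
    td = R[s, a] + gamma * (phi[s_next] @ theta).max() - phi[s, a] @ theta
    theta += (1.0 / n) * td * phi[s, a]
    s = s_next
```

On this randomly generated example the iterates stay bounded, but, as the abstract stresses, boundedness of $\theta_n$ does not by itself imply convergence to a unique or useful PBE solution.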
💡 Research Summary
This paper revisits the stability and convergence properties of Q‑learning when linear function approximation is employed together with the so‑called (ε, κ)‑tamed Gibbs policy. Earlier work established that under this policy the parameter vector remains ultimately bounded, which in turn guarantees the existence of at least one solution to the projected Bellman equation (PBE). However, those results left two critical questions unanswered: (i) is the PBE solution unique, and (ii) does the algorithm actually converge to a useful fixed point?
The authors address both questions through a combination of analytical arguments and explicit counterexamples. First, they formalize the mean-flow vector field $\bar f(\theta)=\mathbb{E}\big[\big(r + \gamma \max_{a'} Q_\theta(S',a') - Q_\theta(S,A)\big)\,\phi(S,A)\big]$, whose roots are precisely the solutions of the PBE under the training policy.
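The mean-flow field described above can be written down directly for a small tabular model. The sketch below (synthetic MDP data, with a fixed "oblivious" state-action distribution `mu` playing the role of the training distribution) evaluates $\bar f(\theta)$ as the averaged TD update and runs a few crude Euler steps on the ODE $\dot\theta = \bar f(\theta)$; note that convergence of such steps is not guaranteed in general, which is exactly the phenomenon the paper's counterexamples exhibit.

```python
import numpy as np

# Synthetic instance (not the paper's example): bar_f averages the TD update
# over a fixed state-action distribution mu; roots of bar_f are PBE solutions.
rng = np.random.default_rng(1)
n_s, n_a, d = 3, 2, 2
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # transition kernel
R = rng.standard_normal((n_s, n_a))                # rewards
phi = rng.standard_normal((n_s, n_a, d))           # linear basis
mu = np.full((n_s, n_a), 1.0 / (n_s * n_a))        # oblivious training distribution
gamma = 0.9

def bar_f(theta):
    qmax = (phi @ theta).max(axis=1)               # max_a Q_theta(s, a), per state
    out = np.zeros(d)
    for s in range(n_s):
        for a in range(n_a):
            td = R[s, a] + gamma * P[s, a] @ qmax - phi[s, a] @ theta
            out += mu[s, a] * td * phi[s, a]
    return out

# Euler steps on d(theta)/dt = bar_f(theta); a root (if one is reached)
# solves the PBE for this training distribution.
theta = np.zeros(d)
for _ in range(200):
    theta += 0.05 * bar_f(theta)
residual = np.linalg.norm(bar_f(theta))
```

Because $\bar f$ is only piecewise linear in $\theta$ (through the max), it can have no root or several roots, which is how the one-dimensional and ideal-basis counterexamples in the paper arise.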