(Doubly) Exponential Lower Bounds for Follow the Regularized Leader in Potential Games
Follow the regularized leader (FTRL) is the premier algorithm for online optimization. However, despite decades of research on its convergence in constrained optimization, and in potential games in particular, its behavior has hitherto remained poorly understood. In this paper, we establish that FTRL can take exponential time to converge to a Nash equilibrium in two-player potential games for any (permutation-invariant) regularizer and potentially vanishing learning rate. By known equivalences, this translates to an exponential lower bound for certain mirror descent counterparts, most notably the multiplicative weights update. On the positive side, we establish the potential property for FTRL and obtain an exponential upper bound $\exp(O_\epsilon(1/\epsilon^2))$ for any no-regret dynamics executed in a lazy, alternating fashion, matching our lower bound up to factors in the exponent. Finally, in multi-player potential games, we show that fictitious play, the extreme version of FTRL, can take doubly exponential time to reach a Nash equilibrium. This constitutes an exponentially stronger lower bound for this foundational learning algorithm in games.
💡 Research Summary
This paper investigates the convergence behavior of the Follow‑the‑Regularized‑Leader (FTRL) algorithm in potential games, providing both lower‑bound hardness results and matching upper‑bound guarantees. The authors focus on three settings: (i) two‑player potential games, (ii) a class of lazy, alternating no‑regret dynamics, and (iii) multi‑player potential games under fictitious play (FP), which can be viewed as the η→∞ limit of FTRL.
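For concreteness, the FTRL update referenced throughout admits the standard closed form below (standard notation, not quoted from the paper): at each round, a player maximizes cumulative payoff regularized by R,

$$x_{t+1} = \operatorname*{argmax}_{x \in \Delta} \left\{ \Big\langle x, \sum_{s=1}^{t} u_s \Big\rangle - \frac{1}{\eta_t}\, R(x) \right\},$$

where $u_s$ is the payoff vector observed at round $s$ and $\Delta$ is the simplex of mixed strategies. As $\eta_t \to \infty$ the regularizer's influence vanishes and the update simply best-responds to the cumulative (equivalently, time-averaged) payoffs, which is exactly fictitious play.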
Two‑player exponential lower bound.
For any m×m two‑player potential game, any permutation‑invariant regularizer R (including entropy, Euclidean, log‑barrier, Tsallis, etc.), and any non‑increasing learning‑rate schedule η(t)=t^{‑α} with α∈
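To make these objects concrete, here is a minimal simulation sketch of FTRL with the entropic regularizer (which coincides with multiplicative weights) under a schedule η(t)=t^{−α}. The 2×2 identical-interest game and all parameter values are illustrative assumptions, not the paper's lower-bound construction:

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax."""
    w = np.exp(v - v.max())
    return w / w.sum()

def ftrl_entropic(A, B, T=5000, alpha=0.5):
    """FTRL with the entropic regularizer (= multiplicative weights) in a
    two-player game with payoff matrices A (row player) and B (column
    player), under the vanishing learning-rate schedule eta_t = t**(-alpha)."""
    m, n = A.shape
    Sx, Sy = np.zeros(m), np.zeros(n)  # cumulative payoff vectors
    for t in range(1, T + 1):
        eta = t ** (-alpha)
        # Entropic FTRL has the closed form x_t = softmax(eta_t * S_t).
        x, y = softmax(eta * Sx), softmax(eta * Sy)
        Sx += A @ y    # payoff of each row action against the current y
        Sy += B.T @ x  # payoff of each column action against the current x
    return x, y

# Illustrative run: a 2x2 identical-interest game (hence a potential game)
# whose potential is maximized at the first pure profile.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x, y = ftrl_entropic(A, A)
print("row:", np.round(x, 3), "col:", np.round(y, 3))
```

On this small instance the dynamics settle on the potential-maximizing pure equilibrium quickly; the paper's lower bound exhibits carefully constructed potential games on which such convergence instead takes exponentially long.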