PL conditions do not guarantee convergence of gradient descent-ascent dynamics


We give an example of a function satisfying a two-sided Polyak-Łojasiewicz condition but for which a gradient descent-ascent flow line fails to converge to the saddle point, circling around it instead.


💡 Research Summary

The paper investigates the convergence properties of Gradient Descent-Ascent (GDA) dynamics in the context of min-max optimization when the objective function satisfies a two-sided Polyak-Łojasiewicz (PL) condition. The authors begin by recalling that the PL inequality, originally introduced for non-convex minimization, lower-bounds the squared gradient norm by the suboptimality gap and thereby guarantees linear convergence of gradient descent. Recent works have attempted to extend this guarantee to saddle-point problems by assuming that the PL condition holds separately with respect to each player's variable (the "two-sided PL" condition). Under this assumption one would expect that the GDA flow, which simultaneously performs gradient descent on the minimizing variable and gradient ascent on the maximizing variable, converges to a stationary saddle point.
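For reference, the two-sided PL condition can be stated as follows (this is the standard formulation from the minimax-optimization literature, with moduli $\mu_1, \mu_2 > 0$; it is not quoted verbatim from the paper):

$$
\|\nabla_x f(x,y)\|^2 \;\ge\; 2\mu_1 \Big( f(x,y) - \min_{x'} f(x',y) \Big),
\qquad
\|\nabla_y f(x,y)\|^2 \;\ge\; 2\mu_2 \Big( \max_{y'} f(x,y') - f(x,y) \Big).
$$

The first inequality says $f(\cdot, y)$ satisfies the PL condition for every fixed $y$, and the second says $-f(x, \cdot)$ satisfies it for every fixed $x$.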

To challenge this expectation, the authors construct an explicit two-dimensional counterexample: a function satisfying the two-sided PL condition along which a GDA flow line circles around the saddle point instead of converging to it.
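The circling behavior at the heart of the counterexample can be illustrated numerically. The sketch below uses the classic bilinear toy function $f(x,y) = xy$, which is *not* the paper's example (it does not satisfy the two-sided PL condition, since the inner min and max are unbounded), but it is the simplest function whose GDA flow orbits the saddle point at the origin rather than converging to it:

```python
import numpy as np

# Simultaneous gradient descent-ascent on the bilinear toy f(x, y) = x * y.
# The continuous flow is dx/dt = -df/dx = -y, dy/dt = +df/dy = x, whose
# exact solutions are circles around the saddle at the origin; the
# forward-Euler discretization below spirals slowly outward.

def gda_trajectory(x0, y0, step=0.01, n_steps=5000):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        gx, gy = y, x                         # df/dx = y, df/dy = x
        x, y = x - step * gx, y + step * gy   # descent in x, ascent in y
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = gda_trajectory(1.0, 0.0)
radii = np.hypot(xs, ys)  # distance of each iterate from the saddle
print(f"initial radius: {radii[0]:.3f}, final radius: {radii[-1]:.3f}")
# The distance to the saddle never shrinks: the iterates orbit the origin.
```

A short calculation confirms the outward spiral: each Euler step multiplies the squared radius by $1 + h^2$, so the discrete iterates drift away from the saddle even though the exact flow merely circles it.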

