Dynamic Regret via Discounted-to-Dynamic Reduction with Applications to Curved Losses and Adam Optimizer
We study dynamic regret minimization in non-stationary online learning, with a primary focus on follow-the-regularized-leader (FTRL) methods. FTRL is important for curved losses and for understanding adaptive optimizers such as Adam, yet dynamic regret analyses for FTRL remain comparatively underexplored. To address this, we build on the discounted-to-dynamic reduction and present a modular way to obtain dynamic regret bounds for FTRL-type algorithms. Specifically, we focus on two representative curved-loss problems: online linear regression and online logistic regression. Our method not only simplifies existing proofs of the optimal dynamic regret for online linear regression, but also yields new dynamic regret guarantees for online logistic regression. Beyond online convex optimization, we apply the reduction to analyze the Adam optimizer, obtaining optimal convergence rates in stochastic, non-convex, and non-smooth settings. The reduction also enables a more fine-grained treatment of Adam with two discount parameters $(\beta_1,\beta_2)$, leading to new results for both clipped and clip-free variants of Adam.
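As background for the Adam analysis, here is a minimal NumPy sketch of the standard Adam update with its two discount parameters $(\beta_1,\beta_2)$; the step size, hyperparameter values, and toy objective are illustrative assumptions rather than choices from the paper, and the clipped variant mentioned above is not shown.

```python
# Minimal sketch of the standard Adam update (illustrative only).
import numpy as np

def adam_step(x, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: discounted first/second moment estimates with
    bias correction, followed by a coordinate-wise scaled step."""
    m = beta1 * m + (1 - beta1) * g        # discounted gradient average
    v = beta2 * v + (1 - beta2) * g**2     # discounted squared-gradient average
    m_hat = m / (1 - beta1**t)             # bias correction
    v_hat = v / (1 - beta2**t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

# Toy usage: minimize f(x) = ||x||^2 / 2, whose gradient is x.
x = np.array([1.0, -2.0])
m = np.zeros_like(x)
v = np.zeros_like(x)
for t in range(1, 1001):
    g = x                                  # gradient of the toy objective
    x, m, v = adam_step(x, g, m, v, t, lr=0.05)
print(x)  # approaches the minimizer at the origin
```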
💡 Research Summary
This paper tackles the problem of dynamic regret minimization in non‑stationary online learning, focusing on algorithms that belong to the Follow‑the‑Regularized‑Leader (FTRL) family. The authors build upon the recently introduced discounted‑to‑dynamic (D2D) reduction, but instead of applying it after a fully tuned discounted‑regret bound is derived, they propose a modular “template‑level” approach. At this level they keep the key components—stability terms (Λₜ), comparator‑dependent terms (φₜ), and the discounted loss aggregation—explicit, allowing flexible tuning later on. This yields a clean, reusable theorem (Theorem 1) that translates any discounted‑regret bound of the form
\[
\mathrm{Regret}^{\lambda}_{T}(u)\;\le\;\Lambda_T+\varphi_T(u)
\]
into a corresponding dynamic regret guarantee.
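As a rough illustration of how such a template is used (the interval decomposition and notation below are assumed for exposition, not taken verbatim from the paper), a discounted-to-dynamic reduction typically partitions the horizon into intervals $I_1,\dots,I_K$ on which the comparator $u_t$ is held fixed, applies the discounted bound on each interval, and sums:

\[
\sum_{t=1}^{T}\ell_t(x_t)-\sum_{t=1}^{T}\ell_t(u_t)
\;\lesssim\;
\sum_{i=1}^{K}\Big(\Lambda_{I_i}+\varphi_{I_i}\big(u^{(i)}\big)\Big),
\]

with the discount parameter then tuned to balance the stability terms $\Lambda$ against the comparator-dependent terms $\varphi$, typically as a function of the path length of $u_{1:T}$.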