Improved Dimension Dependence for Bandit Convex Optimization with Gradient Variations

Notice: This research summary and analysis were generated automatically using AI technology. For absolute accuracy, please refer to the original arXiv source.

Gradient-variation online learning has drawn increasing attention due to its deep connections to game theory and optimization. It has been studied extensively in the full-information setting but remains underexplored under bandit feedback. In this work, we focus on gradient variation in Bandit Convex Optimization (BCO) with two-point feedback. By proposing a refined analysis of the non-consecutive gradient variation, a fundamental quantity in gradient-variation analysis with bandits, we improve the dimension dependence for both convex and strongly convex functions compared with the best known results (Chiang et al., 2013). Our improved analysis of the non-consecutive gradient variation also implies other favorable problem-dependent guarantees, such as gradient-variance and small-loss regret bounds. Beyond the two-point setup, we demonstrate the versatility of our technique by achieving the first gradient-variation bound for one-point bandit linear optimization over hyper-rectangular domains. Finally, we validate the effectiveness of our results on more challenging tasks such as dynamic/universal regret minimization and bandit games, establishing the first gradient-variation dynamic and universal regret bounds for two-point BCO and fast convergence rates in bandit games.
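To make the two-point feedback setting concrete, the following is a minimal sketch of the classical two-point gradient estimator commonly used in this literature (a standard textbook construction, not code from the paper): the learner queries the loss at two symmetric perturbations of the current point and forms a finite-difference estimate of the gradient.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-2, rng=None):
    """Classical two-point bandit gradient estimator: query f at x + delta*u
    and x - delta*u for a random unit direction u, and return
    (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u,
    which in expectation approximates the gradient of a smoothed version of f."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Sanity check on a quadratic f(x) = ||x||^2, whose true gradient is 2x:
# averaging many independent estimates should concentrate around 2x.
f = lambda x: float(x @ x)
x = np.ones(5)
est = np.mean([two_point_gradient_estimate(f, x, rng=np.random.default_rng(i))
               for i in range(2000)], axis=0)
```

For a quadratic loss the estimator is exactly unbiased, so the averaged estimate above should be close to `2 * x`; for general smooth losses it is unbiased only for the `delta`-smoothed surrogate, which is the source of the dimension-dependent factors the paper's analysis targets.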


💡 Research Summary

The paper tackles the long‑standing challenge of reducing the dimension dependence in bandit convex optimization (BCO) when the learner’s performance is measured against the gradient variation of the loss sequence. In the two‑point feedback setting, Chiang et al. (2013) introduced a non‑consecutive gradient‑variation term, a bandit analogue of the full‑information gradient variation $V_T = \sum_{t=2}^{T}\sup_{x\in\mathcal{X}}\|\nabla f_t(x)-\nabla f_{t-1}(x)\|^2$, whose analysis governs the attainable dimension dependence.

