Robustness of Anytime Bandit Policies

This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. (2009) exhibit a policy such that, with probability at least 1 − 1/n, the regret of the policy is of order log(n). They also showed that this property is not shared by the popular UCB1 policy of Auer et al. (2002). This work first answers an open question: it extends this negative result to any anytime policy. The second contribution of this paper is to design anytime robust policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms.


💡 Research Summary

The paper investigates the concentration properties of regret in stochastic multi‑armed bandit problems, focusing on the distinction between “anytime” policies (which do not know the horizon in advance) and “horizon‑aware” policies (which do). Audibert et al. (2009) previously showed that when the total number of plays n is known, one can design a policy whose regret is O(log n) with probability at least 1 − 1/n. They also demonstrated that the popular UCB1 algorithm does not enjoy this high‑probability guarantee. The present work first resolves an open question by proving that this negative result extends to any anytime policy under the standard assumption that rewards are bounded in [0, 1]. As a second contribution, the paper constructs anytime robust policies for restricted bandit problems, in which additional assumptions are placed on the set of possible distributions of the arms.
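To make the object of study concrete, the following is a minimal sketch of the UCB1 index policy of Auer et al. (2002), the anytime algorithm whose regret deviations the paper analyzes. The Bernoulli arm means, horizon, and the `ucb1` function name are illustrative choices, not taken from the paper; the index formula (empirical mean plus an exploration bonus of sqrt(2 ln t / n_i)) is the standard one.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Simulate UCB1 on Bernoulli arms with the given means.

    Returns the pseudo-regret: the sum over rounds of the gap
    between the best mean and the mean of the arm played.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of times each arm was played
    sums = [0.0] * k      # cumulative reward collected per arm
    best = max(means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # play each arm once to initialise the indices
        else:
            # UCB1 index: empirical mean + sqrt(2 ln t / n_i)
            arm = max(
                range(k),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret

# Average pseudo-regret grows roughly logarithmically in the horizon,
# but, as the paper shows, its deviations can be much larger for any
# anytime policy than for a well-tuned horizon-aware one.
print(ucb1([0.5, 0.6], 10_000))
```

Note that UCB1 never uses the horizon inside the index, which is exactly what makes it an anytime policy and hence subject to the paper's negative result.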

