Contextual Bandit Algorithms with Supervised Learning Guarantees
We address the problem of learning in an online, bandit setting where the learner must repeatedly select among $K$ actions, but only receives partial feedback based on its choices. We establish two new facts: First, using a new algorithm called Exp4.P, we show that it is possible to compete with the best in a set of $N$ experts with probability $1-\delta$ while incurring regret at most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is tested empirically on a large-scale, real-world dataset. Second, we give a new algorithm called VE that competes with a possibly infinite set of policies of VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln (1/\delta))})$ with probability $1-\delta$. These guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing supervised learning type guarantees for the contextual bandit setting.
💡 Research Summary
The paper tackles the classic contextual bandit problem, where at each of $T$ rounds a learner observes a context, selects one of $K$ actions, and receives feedback only for the chosen action. While this setting has been extensively studied, most prior work provides either expected regret bounds or high‑probability guarantees that are loose or limited to specific algorithmic families. The authors make two major contributions that bring bandit learning closer to the type of guarantees familiar from supervised learning.
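The interaction protocol described above can be sketched as a simple loop. This is a minimal illustration, not code from the paper; the `learner`/`env` interface and the `UniformLearner`/`ToyEnv` names are hypothetical placeholders.

```python
import random

def run_bandit(learner, env, T):
    """One pass of the contextual bandit protocol: at each round the
    learner observes a context, picks one of K actions, and sees the
    reward of the chosen action only (partial feedback)."""
    total = 0.0
    for t in range(T):
        x = env.context(t)      # learner observes a context
        a = learner.act(x)      # selects one of K actions
        r = env.reward(t, a)    # feedback only for the chosen action
        learner.update(x, a, r)
        total += r
    return total

# Hypothetical stand-ins used only to exercise the loop.
class UniformLearner:
    def __init__(self, K, seed=0):
        self.K = K
        self.rng = random.Random(seed)
    def act(self, x):
        return self.rng.randrange(self.K)
    def update(self, x, a, r):
        pass  # a real learner would update its policy here

class ToyEnv:
    def context(self, t):
        return t % 3
    def reward(self, t, a):
        return 1.0 if a == 0 else 0.0
```

The point of the loop is that `env.reward(t, a)` is queried only at the chosen `a`; the learner never observes the rewards of the other $K-1$ actions.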
First, they introduce Exp4.P, a variant of the well‑known Exp4 algorithm. The key innovation is the incorporation of confidence‑adjusted weight updates. For each expert in a pool of $N$ experts, the algorithm constructs an unbiased importance‑weighted estimate of that expert's reward and augments it with a Hoeffding‑type confidence term that depends on the failure probability $\delta$. By updating the expert weights multiplicatively using these confidence‑adjusted estimates, the algorithm ensures that, with probability at least $1 - \delta$, the cumulative regret against the best expert is bounded by
$$O\!\left(\sqrt{KT\ln(N/\delta)}\right).$$
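The update just described can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' reference implementation: `experts(t)` returning an $(N, K)$ advice matrix and `get_reward(t, a)` returning rewards in $[0, 1]$ are hypothetical interfaces, and the constants mirror the $\sqrt{\ln(N/\delta)/(KT)}$ confidence scaling only in spirit.

```python
import numpy as np

def exp4p(T, K, experts, get_reward, delta, seed=0):
    """Sketch of Exp4.P-style confidence-adjusted weight updates.

    experts(t)      -> (N, K) array; each row is an expert's probability
                       distribution over the K actions at round t.
    get_reward(t,a) -> observed reward in [0, 1] for the chosen action.
    """
    rng = np.random.default_rng(seed)
    N = experts(0).shape[0]
    p_min = np.sqrt(np.log(N) / (K * T))           # uniform exploration floor
    conf = np.sqrt(np.log(N / delta) / (K * T))    # Hoeffding-type confidence scale
    w = np.ones(N)                                 # expert weights
    total = 0.0
    for t in range(T):
        xi = experts(t)                            # (N, K) expert advice
        q = w / w.sum()
        p = (1.0 - K * p_min) * (q @ xi) + p_min   # action distribution, floored
        a = rng.choice(K, p=p)
        r = get_reward(t, a)
        total += r
        r_hat = np.zeros(K)
        r_hat[a] = r / p[a]                        # importance-weighted reward estimate
        y = xi @ r_hat                             # per-expert estimated reward
        v = (xi / p).sum(axis=1)                   # per-expert confidence (variance) term
        w = w * np.exp((p_min / 2.0) * (y + conf * v))
    return total
```

The confidence term `conf * v` is what distinguishes this from plain Exp4: experts whose estimates are less reliable (higher importance-weighting variance) receive a larger upward adjustment, which is what drives the high-probability rather than merely in-expectation regret guarantee.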