The Non-Bayesian Restless Multi-Armed Bandit: a Case of Near-Logarithmic Regret


In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are $N$ arms, with rewards on all arms evolving at each time as Markov chains with known parameters. A player seeks to activate $K \geq 1$ arms at each time in order to maximize the expected total reward obtained over multiple plays. RMAB is a challenging problem that is known to be PSPACE-hard in general. We consider in this work the even harder non-Bayesian RMAB, in which the parameters of the Markov chain are assumed to be unknown \emph{a priori}. We develop an original approach to this problem that is applicable when the corresponding Bayesian problem has the structure that, depending on the known parameter values, the optimal solution is one of a prescribed finite set of policies. In such settings, we propose to learn the optimal policy for the non-Bayesian RMAB by employing a suitable meta-policy which treats each policy from this finite set as an arm in a different non-Bayesian multi-armed bandit problem for which a single-arm selection policy is optimal. We demonstrate this approach by developing a novel sensing policy for opportunistic spectrum access over unknown dynamic channels. We prove that our policy achieves near-logarithmic regret (the difference in expected reward compared to a model-aware genie), which leads to the same average reward that can be achieved by the optimal policy under a known model. This is the first such result in the literature for a non-Bayesian RMAB.


💡 Research Summary

The paper tackles the restless multi‑armed bandit (RMAB) problem in a non‑Bayesian setting, where the transition probabilities and reward distributions of the underlying Markov chains are unknown a priori. While the classic Bayesian RMAB assumes full knowledge of these parameters and is already PSPACE‑hard, the authors consider the even more challenging scenario in which the decision maker must learn the model while simultaneously maximizing cumulative reward. Their key insight is that, for many practical RMAB instances, the optimal Bayesian policy belongs to a finite, pre‑specified set of candidate policies; the particular member of this set that is optimal depends on the unknown parameters.
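To make the setting concrete: the paper's motivating application (opportunistic spectrum access, per the abstract) involves channels that evolve as Markov chains whether or not they are sensed. The sketch below is not from the paper; it is a minimal illustration, assuming the common two-state Gilbert-Elliott channel model, of what one "restless" arm with unknown transition probabilities looks like.

```python
import random

def make_restless_arm(p01, p11, seed=None):
    """Minimal sketch of one restless arm as a two-state Markov chain
    (state 1 = 'good', reward 1; state 0 = 'bad', reward 0).

    p01 = P(bad -> good), p11 = P(good -> good) are the transition
    probabilities the player does NOT know a priori.  The state advances
    every slot whether or not the arm is played -- that is what makes
    the bandit 'restless' rather than 'rested'."""
    rng = random.Random(seed)
    state = rng.random() < 0.5          # start from a random state
    def step():
        nonlocal state
        p = p11 if state else p01       # transition depends on current state
        state = rng.random() < p
        return 1 if state else 0        # reward is observed only when sensed
    return step
```

Because the chain keeps evolving between plays, the player's belief about an unplayed arm drifts toward the chain's stationary distribution, which is what makes even the known-parameter (Bayesian) version of the problem hard.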

Leveraging this structure, the authors construct a meta‑policy that treats each candidate policy as an arm in a separate non‑Bayesian multi‑armed bandit (MAB) problem. Within each candidate policy, the selection of the \(K\) active arms at each time step is itself a "single‑arm‑selection" problem that can be solved optimally because each arm evolves independently as a Markov chain. The meta‑policy therefore reduces the original RMAB to a two‑level learning problem: (1) a lower‑level MAB that chooses the \(K\) arms given a fixed policy, and (2) an upper‑level MAB that chooses which policy to follow. The upper‑level learner employs a standard regret‑minimizing algorithm such as UCB1 or KL‑UCB, maintaining for each policy \(i\) the cumulative reward \(\hat R_i(t)\) and the number of times it has been selected \(n_i(t)\), and selecting the policy with the largest confidence‑bound index
\[
\frac{\hat R_i(t)}{n_i(t)} + \sqrt{\frac{2 \ln t}{n_i(t)}}.
\]
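The two-level scheme described above can be sketched in code. This is not the paper's implementation; it is a minimal illustration of the upper-level learner, assuming the standard UCB1 index and assuming each candidate policy, when run for an epoch, reports its average per-step reward in \([0,1]\) (the names `ucb1_meta_policy` and `epoch_len` are hypothetical).

```python
import math

def ucb1_meta_policy(policies, horizon, epoch_len=100):
    """Upper-level learner: treat each candidate policy as an arm and, each
    round, run the policy with the largest UCB1 confidence-bound index.

    `policies` is a list of callables; policies[i](epoch_len) runs candidate
    policy i for one epoch (internally choosing the K active arms under its
    fixed rule) and returns the average per-step reward, assumed in [0, 1]."""
    m = len(policies)
    n = [0] * m        # n_i(t): number of times policy i was selected
    r = [0.0] * m      # R_hat_i(t): cumulative reward collected under policy i
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= m:
            i = t - 1  # initialization: run each candidate policy once
        else:
            # largest index: empirical mean + exploration bonus
            i = max(range(m),
                    key=lambda j: r[j] / n[j] + math.sqrt(2 * math.log(t) / n[j]))
        reward = policies[i](epoch_len)
        n[i] += 1
        r[i] += reward
        total += reward
    return total, n
```

Because UCB1 plays every suboptimal arm only \(O(\log t)\) times, the meta-policy settles on the candidate policy that is optimal for the (unknown) true parameters, which is the mechanism behind the near-logarithmic regret claimed in the paper.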

