Online Algorithms for the Multi-Armed Bandit Problem with Markovian Rewards
We consider the classical multi-armed bandit problem with Markovian rewards. When played, an arm changes its state in a Markovian fashion; when not played, its state remains frozen. The player receives a state-dependent reward each time it plays an arm. The number of states and the state transition probabilities of an arm are unknown to the player. The player's objective is to maximize its long-term total reward by learning the best arm over time. We show that, under certain conditions on the state transition probabilities of the arms, a sample-mean-based index policy achieves logarithmic regret uniformly over the total number of trials. The result shows that sample-mean-based index policies can be applied to learning problems under the rested Markovian bandit model without loss of optimality in the order. Moreover, a comparison between Anantharam's index policy and UCB shows that, by choosing a small exploration parameter, UCB can have a smaller regret than Anantharam's index policy.
💡 Research Summary
The paper studies a classic multi‑armed bandit (MAB) problem in which each arm evolves according to a Markov chain when it is pulled, while it remains frozen when not selected. This “rested” Markovian bandit model captures many real‑world scenarios such as channel selection in wireless networks or task allocation in robotics, where the state of a resource changes only upon use. The player does not know the number of states, the transition probabilities, or the state‑dependent reward function of any arm; it must learn these from observed rewards while trying to maximize cumulative gain.
The authors first formalize the problem. For each arm i, let S_i be a finite state space, P_i(s,s′) the unknown transition matrix, and r_i(s) a bounded reward obtained when the arm is in state s and is played. When an arm is not played its state stays unchanged (the "rested" assumption). The goal is to minimize the expected regret R(T) = T·μ* − E[Σ_{t=1}^T r_{α(t)}(t)], where α(t) is the arm played at time t, r_{α(t)}(t) is the reward it yields, and μ* is the largest mean reward among the arms under their stationary distributions.
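To make the rested model concrete, here is a minimal simulation sketch: each arm is a Markov chain that transitions only when pulled, and a sample-mean UCB-style index (mean reward plus an exploration bonus with constant L) picks the arm to play. The class and function names, the specific chains, and the exploration constant are illustrative assumptions, not the paper's exact policy or notation.

```python
import math
import random

class RestedArm:
    """An arm whose state evolves (Markovian) only when it is played."""
    def __init__(self, P, rewards, state=0, rng=None):
        self.P = P              # transition matrix: P[s][s'] = Pr(s -> s')
        self.rewards = rewards  # bounded state rewards r_i(s)
        self.state = state
        self.rng = rng or random.Random()

    def pull(self):
        # Collect the reward of the current state, then transition.
        r = self.rewards[self.state]
        self.state = self.rng.choices(
            range(len(self.P)), weights=self.P[self.state])[0]
        return r

def ucb_play(arms, horizon, L=2.0):
    """Sample-mean index policy: play argmax_i mean_i + sqrt(L ln t / n_i)."""
    n = [0] * len(arms)          # pull counts
    total = [0.0] * len(arms)    # cumulative observed reward per arm
    reward_sum = 0.0
    for t in range(1, horizon + 1):
        if t <= len(arms):
            i = t - 1            # initialization: play each arm once
        else:
            i = max(range(len(arms)),
                    key=lambda j: total[j] / n[j]
                    + math.sqrt(L * math.log(t) / n[j]))
        r = arms[i].pull()
        n[i] += 1
        total[i] += r
        reward_sum += r
    return reward_sum, n
```

With a clearly better arm, the policy's pull counts concentrate on it while the exploration term keeps the inferior arm's count growing only logarithmically, which is the behavior the paper's regret bound quantifies. A smaller L shrinks the exploration bonus, which is the mechanism behind the abstract's remark that a small exploration parameter can reduce UCB's regret.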