Flickering Multi-Armed Bandits
We introduce Flickering Multi-Armed Bandits (FMAB), a new MAB framework where the set of available arms (or actions) can change at each round, and the available set at any time may depend on the agent’s previously selected arm. We model this constrained, evolving availability using random graph processes, where arms are nodes and the agent’s movement is restricted to its local neighborhood. We analyze this problem under two random graph models: an i.i.d. Erdős–Rényi (ER) process and an Edge-Markovian process. We propose and analyze a two-phase algorithm that employs a lazy random walk for exploration to efficiently identify the optimal arm, followed by a navigation and commitment phase for exploitation. We establish high-probability and expected sublinear regret bounds for both graph settings. We show that the exploration cost of our algorithm is near-optimal by establishing a matching information-theoretic lower bound for this problem class, highlighting the fundamental cost of exploration under local-move constraints. We complement our theoretical guarantees with numerical simulations, including a scenario of a robotic ground vehicle scouting a disaster-affected region.
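The exploration phase described above rests on a lazy random walk over a graph whose edges flicker each round. A minimal sketch of that dynamic, assuming the i.i.d. Erdős–Rényi setting (each edge redrawn independently with probability `p` every round) and a standard laziness probability of 1/2 — the helper `lazy_random_walk_step` and all parameter values here are illustrative, not taken from the paper:

```python
import random

def lazy_random_walk_step(current, n, p, rng):
    """One round of a lazy random walk on a fresh G(n, p) sample.

    Each round an i.i.d. Erdos-Renyi graph is drawn, so only the edges
    incident to `current` need to be sampled: each exists independently
    with probability p. With probability 1/2 the walker stays put
    (laziness); otherwise it moves to a uniformly random neighbor that
    happens to be available this round, if any.
    """
    neighbors = [u for u in range(n) if u != current and rng.random() < p]
    if not neighbors or rng.random() < 0.5:
        return current  # lazy step, or no edge is currently available
    return rng.choice(neighbors)

# Toy run: 200 rounds on 10 arms with edge probability 0.3.
rng = random.Random(0)
pos = 0
visited = {pos}
for _ in range(200):
    pos = lazy_random_walk_step(pos, n=10, p=0.3, rng=rng)
    visited.add(pos)
```

The local-move constraint is what makes exploration costly here: the walker can only reach arms that are currently adjacent, so covering all `n` arms takes many rounds even in this friendly i.i.d. setting.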
💡 Research Summary
The paper introduces a novel online learning framework called Flickering Multi‑Armed Bandits (FMAB), in which the set of actions available to the learner changes over time and the availability at each round depends on the arm selected in the previous round. This models situations such as a ground robot navigating a disaster‑affected area where roads may open or close unpredictably, and the robot can only move to neighboring locations that are currently reachable.
Formally, there are n arms, each with an unknown reward distribution.
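The round-by-round interaction this setup implies can be sketched as follows. This is a hypothetical simplification, not the paper's protocol: rewards are assumed Bernoulli, the availability graph is redrawn i.i.d. each round with edge probability `avail_prob`, staying on the current arm is assumed always allowed, and the final greedy policy is a deliberately naive placeholder:

```python
import random

def fmab_round(arm, means, avail_prob, rng):
    """One illustrative round of a flickering bandit.

    The agent pulls its current arm and observes a Bernoulli reward
    with unknown mean. The next round's available set is the current
    arm (staying put) plus every arm adjacent to it in a freshly drawn
    random graph, where each edge exists independently with
    probability avail_prob.
    """
    reward = 1.0 if rng.random() < means[arm] else 0.0
    available = {arm} | {a for a in range(len(means))
                         if a != arm and rng.random() < avail_prob}
    return reward, available

rng = random.Random(1)
means = [0.2, 0.5, 0.9]   # hidden from the learner in the real problem
arm, total = 0, 0.0
for _ in range(100):
    reward, available = fmab_round(arm, means, avail_prob=0.4, rng=rng)
    total += reward
    arm = max(available)  # naive policy: drift toward the highest-index arm
```

The key difference from a classical bandit is visible in the loop: the learner cannot jump to an arbitrary arm, only to one in `available`, which depends on where it currently sits.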