Simple Regret Optimization in Online Planning for Markov Decision Processes
We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses on its current state only, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose what action to perform next. The performance of algorithms for online planning is assessed in terms of simple regret, which is the agent’s expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs are either best effort, or guarantee only polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees exponential-rate reduction of simple regret and error probability. This algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees, but is also very effective in practice and favorably compares to state-of-the-art. We then extend BRUE with a variant of “learning by forgetting.” The resulting set of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
💡 Research Summary
Online planning for Markov decision processes (MDPs) focuses on the agent’s current state, deliberating over possible future policies and, when interrupted, selecting an action based on the deliberation performed so far. Performance is measured by simple regret, i.e., the expected loss incurred when the chosen action is sub‑optimal. Existing online planning algorithms for general MDPs either provide best‑effort guarantees or only guarantee a polynomial‑rate reduction of simple regret over time, which is insufficient for many real‑time applications that require rapid convergence to near‑optimal actions.
The paper introduces a novel Monte‑Carlo Tree Search (MCTS) algorithm called BRUE (Best‑Regret‑Upper‑Bound‑Explorer) that achieves an exponential‑rate reduction of simple regret and of the probability of selecting a non‑optimal action. The core innovation is a non‑standard sampling scheme named MCTS2e (Monte‑Carlo Tree Search with two‑phase exploration). In each simulation, the first part of the trajectory is dedicated to pure exploration: actions are selected uniformly at random for a fixed depth, ensuring broad coverage of the state‑action space. The second part of the same trajectory is devoted to exploitation: from the node reached after the exploratory phase, the simulation follows the currently best‑estimated actions to obtain a deep, low‑variance estimate of their values. By separating the objectives of exploration and convergence within a single sample, MCTS2e eliminates the need for delicate tuning of exploration constants that plagues traditional UCT‑style algorithms.
The authors provide rigorous theoretical analysis. They prove that, for any time horizon t, the expected simple regret of BRUE satisfies
R(t) ≤ C·exp(−c·t)
where C and c are problem‑dependent constants determined by the minimum transition probability and the reward range. This exponential bound is a dramatic improvement over the O(1/√t) or O(1/t) bounds of prior methods. Moreover, they show that the probability of selecting a non‑optimal action also decays exponentially with the same rate, establishing a strong high‑probability guarantee. The proof leverages the mixing time of the underlying Markov chain and the independence of the exploratory segments across simulations, demonstrating that the exploratory phase quickly yields a sufficiently diverse set of trajectories while the convergence phase provides unbiased, low‑variance value estimates.
To further enhance performance, the paper extends BRUE with a “learning‑by‑forgetting” mechanism, resulting in the family of algorithms BRUE(α). After each simulation, the visit count and cumulative reward stored at each node are multiplied by (1−α) before the new sample’s contribution is added. This exponential decay of historical statistics gives more weight to recent observations, which is especially beneficial when the environment is non‑stationary or when early samples are biased. Theoretical analysis shows that larger α increases the constant c in the exponential bound, thereby accelerating regret reduction; when α = 0 the algorithm reduces to the original BRUE.
Empirical evaluation is conducted on three benchmark domains: a GridWorld navigation task, the RiverSwim problem (which features a strong exploration‑exploitation dilemma), and a stochastic two‑player game. In all settings, BRUE outperforms state‑of‑the‑art baselines such as UCT, PO‑UCT, and best‑first search variants. Specifically, BRUE achieves 30‑60 % lower average simple regret and a higher probability of selecting the optimal action within a fixed computational budget. The BRUE(α) variant, with α tuned in the range 0.1–0.3, yields additional gains (roughly 10‑15 % further regret reduction) and demonstrates robustness to changing reward structures. Importantly, the two‑phase sampling incurs only modest overhead, making BRUE suitable for real‑time applications where computational resources are limited.
In summary, the paper makes three key contributions: (1) it introduces MCTS2e, a novel two‑phase sampling scheme that cleanly separates exploration from convergence within each simulation; (2) it provides the first exponential‑rate simple‑regret guarantees for online MDP planning, together with matching high‑probability error bounds; and (3) it proposes the BRUE(α) family, showing that controlled forgetting can further improve both theoretical rates and empirical performance. The results open several avenues for future work, including extensions to continuous state‑action spaces, multi‑agent settings, and integration with deep function approximators to handle large‑scale problems.