BFTS: Thompson Sampling with Bayesian Additive Regression Trees


Contextual bandits are a core technology for personalized mobile health interventions, where decision-making requires adapting to complex, non-linear user behaviors. While Thompson Sampling (TS) is a preferred strategy for these problems, its performance hinges on the quality of the underlying reward model. Standard linear models suffer from high bias, while neural network approaches are often brittle and difficult to tune in online settings. Conversely, tree ensembles dominate tabular data prediction but typically rely on heuristic uncertainty quantification, lacking a principled probabilistic basis for TS. We propose Bayesian Forest Thompson Sampling (BFTS), the first contextual bandit algorithm to integrate Bayesian Additive Regression Trees (BART), a fully probabilistic sum-of-trees model, directly into the exploration loop. We prove that BFTS is theoretically sound, deriving an information-theoretic Bayesian regret bound of $\tilde{O}(\sqrt{T})$. As a complementary result, we establish frequentist minimax optimality for a “feel-good” variant, confirming the structural suitability of BART priors for non-parametric bandits. Empirically, BFTS achieves state-of-the-art regret on tabular benchmarks with near-nominal uncertainty calibration. Furthermore, in an offline policy evaluation on the Drink Less micro-randomized trial, BFTS improves engagement rates by over 30% compared to the deployed policy, demonstrating its practical effectiveness for behavioral interventions.


💡 Research Summary

The paper addresses contextual bandits for mobile health interventions, where decisions must adapt to complex, nonlinear user behavior. Standard linear reward models are biased, while neural‑network‑based Thompson Sampling (TS) is computationally heavy and fragile in online settings. Tree ensembles excel on tabular data but lack a principled Bayesian uncertainty estimate required for TS. To bridge this gap, the authors introduce Bayesian Forest Thompson Sampling (BFTS), the first contextual bandit algorithm that directly incorporates Bayesian Additive Regression Trees (BART) – a fully probabilistic sum‑of‑trees model – into the exploration loop.

In the problem setting, at each round $t$ a context $X_t \in \mathcal{X} \subseteq \mathbb{R}^d$ is observed, the learner selects an action $A_t$ from a finite set of $K$ arms, and receives a noisy reward $R_t = f(X_t, A_t) + \epsilon_t$, where the unknown mean-reward function $f$ is assigned a BART sum-of-trees prior. At each round, Thompson Sampling draws one reward function from the BART posterior and plays the action that maximizes the sampled reward for the current context.

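The Thompson Sampling loop described above can be sketched in a few lines. Note the assumptions: the reward surface `true_reward` is a made-up toy function, and a random-feature Bayesian linear regression per arm stands in for the BART sum-of-trees posterior (which would require an MCMC sampler); only the sample-then-act-greedily structure matches the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, T = 3, 2, 500  # arms, context dimension, horizon

def true_reward(x, a):
    # Hypothetical nonlinear reward surface, not from the paper.
    return np.sin(x[0] + a) + 0.5 * x[1] * a

# Stand-in posterior: per-arm Bayesian linear regression on M random
# cosine features, replacing the BART posterior used by BFTS.
M = 20
W = rng.normal(size=(M, d))
b = rng.uniform(0, 2 * np.pi, size=M)

def features(x):
    return np.cos(W @ x + b)

prec = [np.eye(M) for _ in range(K)]   # posterior precision per arm
xty = [np.zeros(M) for _ in range(K)]  # accumulated phi * reward per arm

regret = 0.0
for t in range(T):
    x = rng.normal(size=d)
    phi = features(x)
    # Thompson step: draw one reward function per arm from its posterior,
    # then act greedily with respect to the draws.
    sampled = []
    for a in range(K):
        cov = np.linalg.inv(prec[a])
        mean = cov @ xty[a]
        theta = rng.multivariate_normal(mean, cov)
        sampled.append(theta @ phi)
    a_t = int(np.argmax(sampled))
    r_t = true_reward(x, a_t) + 0.1 * rng.normal()
    # Conjugate Gaussian update for the played arm only.
    prec[a_t] += np.outer(phi, phi)
    xty[a_t] += phi * r_t
    regret += max(true_reward(x, a) for a in range(K)) - true_reward(x, a_t)

print(f"average regret after {T} rounds: {regret / T:.3f}")
```

Swapping the random-feature posterior for draws from a BART MCMC chain recovers the BFTS recipe: the exploration comes entirely from posterior uncertainty, with no tuned exploration bonus.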
