POMDP-lite for Robust Robot Planning under Uncertainty
Reading time: 1 minute
📝 Original Info
- Title: POMDP-lite for Robust Robot Planning under Uncertainty
- ArXiv ID: 1602.04875
- Date: 2016-02-24
- Authors: Min Chen, Emilio Frazzoli, David Hsu, and Wee Sun Lee
📝 Abstract
The partially observable Markov decision process (POMDP) provides a principled general model for planning under uncertainty. However, solving a general POMDP is computationally intractable in the worst case. This paper introduces POMDP-lite, a subclass of POMDPs in which the hidden state variables are constant or only change deterministically. We show that a POMDP-lite is equivalent to a set of fully observable Markov decision processes indexed by a hidden parameter and is useful for modeling a variety of interesting robotic tasks. We develop a simple model-based Bayesian reinforcement learning algorithm to solve POMDP-lite models. The algorithm performs well on large-scale POMDP-lite models with up to $10^{20}$ states and outperforms the state-of-the-art general-purpose POMDP algorithms. We further show that the algorithm is near-Bayesian-optimal under suitable conditions.
📄 Full Content
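The key structural idea in the abstract is that, because the hidden parameter is constant, a POMDP-lite reduces to a finite set of fully observable MDPs plus a Bayes filter over which MDP is the true one. Below is a minimal sketch of that reduction, not the paper's algorithm: the two candidate transition models, the state/action spaces, and the toy trajectory are all illustrative assumptions.

```python
import numpy as np

# Two candidate MDPs over states {0, 1} and actions {0, 1}, indexed by a
# hidden parameter theta in {0, 1}. T[theta, s, a, s'] = P(s' | s, a; theta).
# These toy dynamics are invented for illustration.
T = np.array([
    # theta = 0: action 0 tends to keep the state, action 1 tends to flip it
    [[[0.9, 0.1], [0.1, 0.9]],
     [[0.1, 0.9], [0.9, 0.1]]],
    # theta = 1: the effects of the two actions are swapped
    [[[0.1, 0.9], [0.9, 0.1]],
     [[0.9, 0.1], [0.1, 0.9]]],
])

def update_belief(belief, s, a, s_next, T):
    """One step of Bayes filtering over the constant hidden parameter theta:
    multiply the prior by the likelihood of the observed transition under
    each candidate MDP, then renormalize."""
    posterior = belief * T[:, s, a, s_next]
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])  # uniform prior over the two candidate MDPs
# A short, fixed trajectory consistent with theta = 1
for s, a, s_next in [(0, 0, 1), (1, 0, 0), (0, 0, 1)]:
    belief = update_belief(belief, s, a, s_next, T)

print(belief)  # the belief concentrates on theta = 1
```

Because theta never changes, the filter only ever sharpens, and planning can then be done in the (fully observable) MDPs weighted by this belief; the paper's Bayesian RL algorithm additionally has to trade off exploration (actions that identify theta) against exploitation, which this sketch omits.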
📝 Reference
This content was AI-processed from open-access ArXiv data.