POMDP Planning for Problems with Large State Spaces

Motion planning under uncertainty is critical as robots leave highly structured factory floors for our homes, offices, and outdoor environments. The Partially Observable Markov Decision Process (POMDP) is a general and mathematically principled framework for planning in uncertain and partially observable domains.

However, POMDPs are often considered impractical for robotics problems because of the extremely large planning space. Due to uncertainty, the robot never knows its exact state. Therefore, instead of planning with respect to a single state, a POMDP planner plans with respect to the set of states consistent with the available information. It represents this set as a probability distribution over states, called a belief. A POMDP planner thus plans in the belief space, a continuous space whose dimensionality grows with the number of states, which makes planning computationally demanding.
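To make the notion of a belief concrete, the following is a minimal sketch (in Python, using NumPy) of the Bayesian belief update a POMDP planner performs after each action and observation. The function name and the array layouts for the transition model T and observation model Z are illustrative assumptions, not part of any particular solver.

    import numpy as np

    def belief_update(b, a, o, T, Z):
        """Bayesian belief update for a discrete POMDP.

        b: current belief over states, shape (S,); b[s] = P(state = s)
        a: action index
        o: observation index
        T: transition model, shape (A, S, S); T[a, s, s'] = P(s' | s, a)
        Z: observation model, shape (A, S, O); Z[a, s', o] = P(o | s', a)
        """
        predicted = b @ T[a]                   # predict: distribution over next states
        unnormalized = Z[a][:, o] * predicted  # correct: weight by observation likelihood
        return unnormalized / unnormalized.sum()

With a noisy sensor, repeated updates of this kind concentrate the belief on the states most consistent with the observations received so far, which is exactly the distribution the planner must reason over.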

Our goal is to create practical POMDP algorithms and software for common robotic tasks. To this end, we have developed a new point-based POMDP algorithm, Successive Approximations of the Reachable Space under Optimal Policies (SARSOP), which exploits the notion of optimally reachable belief spaces to improve computational efficiency. We have successfully applied SARSOP to simulations of a set of common robotic tasks, including instances of coastal navigation, grasping, mobile robot exploration, and target tracking, all modeled as POMDPs with large numbers of states. Our simulation results indicate that SARSOP substantially outperforms one of the fastest existing point-based algorithms.
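SARSOP belongs to the family of point-based solvers, which sample a set of beliefs and perform value backups only at those points, representing the value function by a set of alpha-vectors. As a rough illustration of this machinery, here is a sketch of the standard point-based backup at a single belief. It is not SARSOP itself, whose contribution lies in how beliefs are sampled near the optimally reachable space and how alpha-vectors are pruned; the names and array shapes are assumptions matching the sketch above.

    import numpy as np

    def point_based_backup(b, alphas, T, Z, R, gamma):
        """Standard point-based value backup at belief b.

        alphas: list of alpha-vectors, each shape (S,), representing the
                current piecewise-linear lower bound on the value function.
        R: reward model, shape (A, S); R[a, s] = immediate reward.
        gamma: discount factor in (0, 1).
        Returns the alpha-vector of the best one-step lookahead at b.
        """
        A, O = T.shape[0], Z.shape[2]
        best_alpha, best_value = None, -np.inf
        for a in range(A):
            alpha_a = R[a].astype(float).copy()
            for o in range(O):
                # Back-project each alpha-vector through the model for (a, o).
                g = [T[a] @ (Z[a][:, o] * alpha) for alpha in alphas]
                # Keep the back-projection that scores best at this belief.
                alpha_a += gamma * max(g, key=lambda v: b @ v)
            if b @ alpha_a > best_value:
                best_alpha, best_value = alpha_a, b @ alpha_a
        return best_alpha

Repeatedly applying such backups at well-chosen beliefs drives the lower bound toward the optimal value function; the planner then acts greedily with respect to the resulting alpha-vectors.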

Example simulation environments: integrated exploration (15,517 states) and homecare (5,408 states).
References

H. Kurniawati, D. Hsu, and W. S. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proc. Robotics: Science & Systems, 2008.

Last updated: 29 October 2011