POMDPs in Robotics: State of The Art, Challenges, and Opportunities

Autonomous robots must be able to make good decisions in the presence of sensor and outcome uncertainty. The Partially Observable Markov Decision Process (POMDP) is a general and principled framework for addressing such decision problems. POMDP-based methods have been used widely in robotics, with a number of successes and failures. The main objective of this workshop is to bring researchers together to discuss recent developments in POMDPs and the remaining open problems that limit their applicability. We will explore whether POMDPs can become an “everyday tool” in robotics, and discuss why this hasn't happened yet and what we can do to overcome these roadblocks. We will also discuss existing applications of POMDPs in robotics and existing tools for solving them, share “tips and tricks” for applying POMDPs to physical robots, and examine the gap between theory and practice. We hope to bring together researchers working on POMDP-based methods, from theory to applications, in order to share knowledge, explore ways to incorporate new methods, and identify interesting new problems to tackle.
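As a concrete illustration of the belief-state reasoning at the heart of POMDPs, below is a minimal Bayesian belief-update sketch using the classic “tiger” problem. This example is for illustration only; the states, observations, and probabilities are standard textbook values, not material from this workshop.

```python
import numpy as np

# Classic "tiger" POMDP: the tiger is behind one of two doors.
# States: 0 = tiger-left, 1 = tiger-right.
# Observation model for the "listen" action:
#   P(hear-left  | tiger-left)  = 0.85
#   P(hear-left  | tiger-right) = 0.15
P_OBS = np.array([[0.85, 0.15],   # row 0: P(hear-left  | state)
                  [0.15, 0.85]])  # row 1: P(hear-right | state)

def belief_update(belief, obs):
    """Bayes filter: b'(s) is proportional to P(obs | s) * b(s).

    Listening does not move the tiger, so no transition step is needed here.
    """
    unnormalized = P_OBS[obs] * belief
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])      # uniform prior over the tiger's location
b = belief_update(b, obs=0)   # the robot hears the tiger on the left
print(b)                      # belief shifts toward tiger-left: [0.85 0.15]
```

The agent never observes the state directly; it maintains a probability distribution (belief) over states and updates it after each observation. Planning then happens over this belief space, which is what makes POMDPs both principled and computationally challenging.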

We solicit “Late Breaking” contributions to this workshop on any of the topics discussed above. This could include applications of POMDP-based methods in robotic domains, new POMDP methods, or other relevant material. The submitted papers will be peer-reviewed, and the selected papers will be presented as a short talk and as a poster in the poster session.

Please submit papers of at most 3 pages in RSS format here by May 12th, 2017.
Each accepted paper will be presented as a 3-minute spotlight presentation and a poster. Posters should be printed on paper. Each poster board is 30"x20", set up in portrait orientation.

Schedule (July 15th, 2017)

09.00-09.45 POMDP tutorial
09.45-10.15 Jonathan How: Action and Observation Abstractions for Tractable Dec-POMDP Planning
10.15-10.30 Spotlight talks (3 min spotlight per paper)
10.30-11.00 Coffee break + interactive session
11.00-11.30 David Hsu: Robust Decision Making under Uncertainty: Online POMDP Planning and Beyond
11.30-11.45 Spotlight talks (3 min spotlight per paper)
11.45-12.15 Interactive session
12.15-14.00 Lunch
14.00-14.30 Ron Alterovitz: Motion Planning under Uncertainty for Medical Robots
14.30-15.00 Leslie Kaelbling: POMDPs for robots in the factory and in the wild
15.00-15.30 Coffee break
15.30-16.00 Suman Chakravorty: A Separation Principle for Stochastic Optimal Control and its Implications
16.00-16.30 Sergey Levine: Deep learning of robotic skills for partially observed tasks
16.30-17.30 Panel discussion

Invited Speakers

David Hsu (NUS)

Jonathan How (MIT)

Leslie Pack Kaelbling (MIT)

Ron Alterovitz (UNC Chapel Hill)

Sergey Levine (UC Berkeley)

Suman Chakravorty (Texas A&M)


Organizers

Ali-akbar Agha-mohammadi
NASA-JPL, Caltech

Christopher Amato
Northeastern University

Hanna Kurniawati
University of Queensland

