Planning, Control, Estimation


Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions

Multi-agent planning problems in domains with partial observability can be represented using Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs), a general framework for such problems. However, solving Dec-POMDPs is often intractable due to poor scaling in the number of agents, states, and observations. This is particularly limiting for real-world robotics applications, where continuous state and observation spaces must be discretized, leading to doubly-exponential solution time complexity.

This paper introduces a new framework for solving multi-agent planning problems by extending the Dec-POMDP to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP), which allows the use of macro-actions offering significant scalability improvements over primitive actions. We first formally introduce the Dec-POSMDP framework and then evaluate its performance on a complex multi-agent package delivery domain.
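
As a rough illustration of the macro-action abstraction (not the paper's actual algorithm or API; all names below are hypothetical), each agent acts on its own local belief and makes high-level decisions only when a macro-action's termination condition fires, which is where the semi-Markov structure arises:

    # Sketch of decentralized macro-action execution. All names here
    # (MacroAction, Agent, update_belief) are hypothetical illustrations.

    class MacroAction:
        """A temporally extended action with a termination condition on beliefs."""
        def __init__(self, low_level_policy, terminates):
            self.low_level_policy = low_level_policy  # local belief -> primitive action
            self.terminates = terminates              # local belief -> bool

    class Agent:
        """Each agent acts on its own local belief; no communication mid-execution."""
        def __init__(self, high_level_policy, initial_belief):
            self.high_level_policy = high_level_policy  # local belief -> MacroAction
            self.belief = initial_belief
            self.current = None

        def step(self, observation):
            self.belief = update_belief(self.belief, observation)  # local filtering
            if self.current is None or self.current.terminates(self.belief):
                # High-level decisions occur only at macro-action completion,
                # giving the problem its semi-Markov (Dec-POSMDP) structure.
                self.current = self.high_level_policy(self.belief)
            return self.current.low_level_policy(self.belief)

    def update_belief(belief, observation):
        # Placeholder for a domain-specific belief update (e.g., a Bayes filter).
        return belief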


Vision-Based Pose Estimation of Quadcopters

State estimation and control of autonomous vehicles are highly active research areas. In the field of unmanned aerial systems, there is particular interest in solving these problems for fixed-pitch quadcopters, due to their recent availability, low cost, and accessible dynamical models. State estimation on quadcopters is typically performed using onboard sensors (IMUs) or, in a lab environment, motion capture systems. However, motion capture systems are expensive and non-portable, as they require an array of infrared cameras installed in a lab space.

This work aims to solve the state estimation problem for quadcopters using a relatively inexpensive single-camera system, applying vision processing to identify key feature points of the vehicle and image-based pose estimation techniques to obtain translation and rotation measurements. We use marker-based measurements of a quadcopter and an Unscented Kalman Filter (UKF) to track the translational and rotational states. The paper concludes with results for a quadcopter following a complex trajectory, using both accurate motion capture measurements and the vision-based pose measurements.
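
For concreteness, a minimal sketch of UKF tracking of the translational states is shown below, using the open-source filterpy library with an assumed constant-velocity process model and synthetic stand-in measurements; the paper's actual filter, models, and noise parameters may differ:

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    dt = 1.0 / 30.0  # assumed camera frame rate

    def fx(x, dt):
        # Constant-velocity process model over state [x, y, z, vx, vy, vz].
        F = np.eye(6)
        F[0, 3] = F[1, 4] = F[2, 5] = dt
        return F @ x

    def hx(x):
        # The vision pipeline measures translation only.
        return x[:3]

    points = MerweScaledSigmaPoints(n=6, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=dt, fx=fx, hx=hx,
                                points=points)
    ukf.R = np.eye(3) * 0.01   # assumed measurement noise of the pose estimator
    ukf.Q = np.eye(6) * 1e-4   # assumed process noise

    # Synthetic stand-in for camera-derived position measurements.
    zs = np.random.default_rng(0).normal(0.0, 0.1, size=(100, 3))
    for z in zs:
        ukf.predict()
        ukf.update(z)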


Optimal Racing Line Control

Optimal control of car racing lines has been of interest to researchers for the last several years. The problem of designing optimal trajectories for race cars and high-performance vehicles has been analyzed using a wide range of kinematic and dynamical models. The optimal racing line problem can be considered a dual of the optimal vehicle control problem, and solving it efficiently could be extremely useful in future autonomous vehicle systems.

This paper investigates optimal control of vehicles on a race track using the GPOPS MATLAB toolbox, starting with a relatively simple kinematic system model. Extensions are then made to a dynamical model that includes tire friction forces, allowing more accurate trajectories to be obtained. Following this, an improved initial-guess generation method is implemented, allowing problems with up to 18 phases to be solved in GPOPS. The paper concludes with a brief discussion and analysis of improving performance in GPOPS, as the complexity of the problem requires significant code optimization to obtain solutions in reasonable time.
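
As a point of reference, below is a minimal Python sketch of a kinematic bicycle model, representative of the kind of simple kinematic starting point described above; the wheelbase value and control parameterization are assumptions, not taken from the paper:

    import numpy as np

    L = 2.5  # wheelbase [m] (assumed)

    def kinematic_bicycle(state, control):
        """state = [x, y, heading, speed]; control = [acceleration, steering angle]."""
        x, y, psi, v = state
        a, delta = control
        return np.array([
            v * np.cos(psi),        # x position rate
            v * np.sin(psi),        # y position rate
            v * np.tan(delta) / L,  # heading rate
            a,                      # speed rate
        ])

    def integrate(state, control, dt):
        # Simple forward-Euler step; a pseudospectral tool such as GPOPS
        # would instead enforce these dynamics as collocation constraints.
        return state + dt * kinematic_bicycle(state, control)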


Reinforcement Learning-based Quadcopter Control

Interest in control of unmanned rotorcraft has gained momentum recently, primarily due to the increased availability of inexpensive experimental platforms and of control system software that allows rapid execution and testing of system response. Fixed-pitch quadcopters (rotorcraft with four propellers) have become especially popular within the hobby community, due to their intrinsically symmetric structure and the relative ease of controller design.

This paper investigates various methods of controlling quadcopters, starting with a relatively simple linearized system plant. Extensions are then made using modern control techniques, allowing waypoint-based trajectory control of the quadcopters. Following this, reinforcement learning methods are investigated, allowing control of nonlinear system dynamics that are unknown a priori.
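
As an illustration of the linearized starting point, the sketch below computes an LQR gain for a quadcopter's altitude channel modeled as a double integrator; the mass and weighting matrices are assumed values, not taken from the paper:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    m = 1.0  # vehicle mass [kg] (assumed)

    # State: [altitude, vertical velocity]; input: net vertical thrust deviation.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0 / m]])
    Q = np.diag([10.0, 1.0])  # state weights (assumed)
    R = np.array([[0.1]])     # control weight (assumed)

    # Solve the continuous-time algebraic Riccati equation for the optimal gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # feedback law: u = -K x
    print("LQR gain:", K)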


Learning and Planning Using Bayesian Nonparametrics

In collaboration with Duke University, we have developed and implemented a method for learning unknown nonlinear target dynamics. In the video above (an experiment conducted in the RAVEN lab at MIT ACL), we demonstrate a set of cameras tracking a team of target vehicles while simultaneously learning their motion models using a Dirichlet Process-Gaussian Process (DP-GP) mixture model.
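
A greatly simplified sketch of the Gaussian-process ingredient of this approach is shown below: regressing observed target velocities on positions with a single GP using scikit-learn and synthetic data. The full DP-GP method additionally infers mixture-component assignments via a Dirichlet process, which is omitted here:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    positions = rng.uniform(-1.0, 1.0, size=(200, 2))  # stand-in tracking data
    velocities = np.stack([-positions[:, 1],           # synthetic circular flow
                           positions[:, 0]], axis=1)
    velocities += rng.normal(0.0, 0.05, size=velocities.shape)

    # Fit one GP velocity-field component (a single component of the mixture).
    kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel).fit(positions, velocities)

    # Predict the target's velocity (with uncertainty) at a query position.
    mean, std = gp.predict(np.array([[0.2, -0.3]]), return_std=True)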