I am a research staff member at IBM, where I work on AI and sequential decision-making problems.
I was a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology, where I worked with Prof. Jonathan P. How on scalable machine learning methods to represent knowledge and uncertainty for multiagent planning. I received my Ph.D. in Electrical and Computer Engineering from Duke University in 2014. During my Ph.D. studies, I worked under the supervision of Prof. Lawrence Carin on developing scalable Bayesian nonparametric methods for learning and sequential decision making under uncertainty. I received my B.S. and M.S. in Electrical Engineering from Huazhong University of Science and Technology in 2005 and 2007, respectively. My research interests include statistical machine learning, AI, and robotics.
I co-organized a postdoc/student workshop for the MURI project on "Nonparametric Bayesian Models to Represent Knowledge and Uncertainty for Decentralized Planning" at MIT.
More recently, I co-organized an AAAI Spring Symposium on "Challenges and Opportunities in Multiagent Learning for the Real World" at Stanford University.
Here is my CV.

Publications:
M. Riemer, I. Cases, R. Ajemian, M. Liu, I. Rish, Y. Tu, and G. Tesauro. Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference. In International Conference on Learning Representations (ICLR), 2019 [acceptance rate: 31%]
S. Omidshafiei, D. K. Kim, M. Liu, G. Tesauro, M. Riemer, C. Amato, M. Campbell, and J. P. How. Learning to Teach in Cooperative Multiagent Reinforcement Learning. The 33rd AAAI Conference on Artificial Intelligence (AAAI), 2019 [acceptance rate: 16.2%, Outstanding Student Paper Honorable Mention]
H. Wei and P. Zhu and M. Liu and J. P. How and S. Ferrari. Automatic Pan-Tilt Camera Control for Learning Dirichlet Process Gaussian Process (DPGP) Mixture Models of Multiple Moving Targets. IEEE Transactions on Automatic Control, 2019
M. Riemer, M. Liu and G. Tesauro. Learning Abstract Options. Neural Information Processing Systems (NIPS), 2018 [acceptance rate: 21%]
M. Liu, G. Chowdhary, B. da Silva, S. Liu, and J. P. How. Gaussian Processes for Learning and Control: A Tutorial with Examples. IEEE Control Systems Magazine, 2018
M. C. Machado, C. Rosenbaum, X. Guo, M. Liu, G. Tesauro and M. S. Campbell. Eigenoption Discovery through the Deep Successor Representation. In International Conference on Learning Representations (ICLR), 2018 [acceptance rate: 34%]
M. Liu, M. C. Machado, G. Tesauro, and M. S. Campbell. The Eigenoption Critic Framework. In NIPS Workshop on Hierarchical Reinforcement Learning, 2017
M. Liu, K. Sivakumar, S. Omidshafiei, C. Amato and J. P. How. Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions. (Video) IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017 [acceptance rate: 45%]
Y. Chen, M. Everett, M. Liu and J. P. How. Socially Aware Motion Planning with Deep Reinforcement Learning. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017 (Video, MIT News, IEEE Spectrum) [Winner-Best Student Paper Award and Finalist-Best Paper Award on Cognitive Robotics]
T. Banerjee, M. Liu and J. P. How. Quickest Change Detection Approach to Optimal Control in Markov Decision Processes with Model Changes. American Control Conference (ACC), 2017
Y. Chen, M. Liu, M. Everett and J. P. How. Decentralized Non-communicating Multiagent Collision Avoidance with Deep Reinforcement Learning. The International Conference on Robotics and Automation (ICRA), 2017 [acceptance rate: 41%, Finalist-Best Multi-Robot System Paper]
S. Omidshafiei, S. Liu, M. Everett, B. Lopez, C. Amato, M. Liu, J. P. How and J. Vian. Semantic-level Decentralized Multi-Robot Decision-Making using Probabilistic Macro-Observations. (Video) The International Conference on Robotics and Automation (ICRA), 2017
S. Omidshafiei, C. Amato, M. Liu, J. P. How and J. Vian. Scalable Accelerated Decentralized Multi-Robot Policy Search in Continuous Observation Spaces. The International Conference on Robotics and Automation (ICRA), 2017
Y. Chen, S. Liu, M. Liu, J. Miller and J. P. How. Motion Planning with Diffusion Maps. (Video) IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2016 [acceptance rate: 48%]
H. Wei, W. Lu, P. Zhu, S. Ferrari, M. Liu, R. H. Klein, S. Omidshafiei, and J. P. How. Information Value in Nonparametric Dirichlet Process Gaussian Process Mixture Models of Target Dynamics. Automatica
Y. Chen, M. Liu, and J. P. How. Augmented Dictionary Learning for Motion Prediction. The International Conference on Robotics and Automation (ICRA), 2016 [acceptance rate: 34.7%]
M. Liu, C. Amato, E. Anesta, J. D. Griffith, and J. P. How. Learning for Decentralized Control of Multiagent Systems in Large, Partially-Observable Stochastic Environments (Supplementary Material.) The 30th AAAI Conference on Artificial Intelligence (AAAI), 2016 [acceptance rate: 26%]
Y. Chen, M. Liu, S. Liu, J. Miller, and J. P. How. Predictive Modeling of Pedestrian Motion Patterns with Bayesian Nonparametrics. AIAA Guidance, Navigation, and Control Conference (GNC), 2016
M. Liu and J. P. How. Policy Based Reinforcement Learning in DEC-POMDPs with Bayesian Nonparametrics In NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, 2015 [contributed talk & best poster award]
M. Liu, C. Amato, X. Liao, L. Carin, and J. P. How. Stick-Breaking Policy Learning in DEC-POMDPs (Supplementary Material.) Int. Joint Conf. on Artificial Intelligence (IJCAI), 2015 [acceptance rate: 28.8%]
M. Liu, C. Amato, E. Anesta, J. D. Griffith, and J. P. How. Learning for Multiagent Decentralized Control in Large Partially Observable Stochastic Environments. The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), 2015
M. Liu. Efficient Bayesian Nonparametric Methods for Model-Free Reinforcement Learning in Centralized and Decentralized Sequential Environments. Ph.D. Thesis, Duke University, 2014
G. Chowdhary, M. Liu, R. C. Grande, T. J. Walsh, J. P. How, and L. Carin. Off-Policy Reinforcement Learning with Gaussian Processes. IEEE/CAA Journal of Automatica Sinica, 2014
G. Chowdhary, M. Liu, R. C. Grande, T. J. Walsh, and J. P. How. Off-Policy Reinforcement Learning with Gaussian Processes. The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), 2013
T. Campbell, M. Liu, B. Kulis, J. P. How, and L. Carin. Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture. NIPS, 2013 [acceptance rate: 25.4%]
M. Liu, X. Liao, and L. Carin. Online Expectation Maximization for Reinforcement Learning in POMDPs. Int. Joint Conf. on Artificial Intelligence (IJCAI), 2013 [acceptance rate: 28%]
M. Liu, G. Chowdhary, J. P. How, and L. Carin. Transfer Learning for Reinforcement Learning with Dependent Dirichlet Process and Gaussian Process. Poster in NIPS Workshop on Bayesian Nonparametric Models (BNPM) for Reliable Planning And Decision-Making Under Uncertainty, 2012
M. Liu, T. Campbell, J. P. How, and L. Carin. DDP-GP: A Sequential Bayesian Nonparametric Method for Vehicular Trajectory Clustering. MURI Workshop, 2012
M. Liu, X. Liao, and L. Carin. Infinite Regionalized Policy Representation. ICML, 2011 [acceptance rate: 25.8%]
M. Liu, L. Belfore, Y. Shen, and M. Scerbo. Uterine Contraction Modeling and Simulation. in Selected Papers Presented at MODSIM World 2009 Conference and Expo, NASA, 2010.
M. Liu and Y. Shen. Multi-frame Super Resolution Based on Block Motion Vector Processing and Kernel Constrained Convex Set Projection. (Code.) In Proc. of SPIE Visual Communications and Image Processing (VCIP), 2009
X. Li, Y. Jiang, and M. Liu. A Near Optimum Detection in Alpha-Stable Impulse Noise. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2009
X. Li, J. Sun, L. Jin, and M. Liu. Bi-parameter CGM Model for Approximation of Alpha-Stable PDF. IEE Electronics Letters, 2008
M. Liu, Y. Shen, and M. Scerbo. A Survey of Computerized Fetal Heart Rate Monitoring and Interpretation Techniques. In 2008 Modeling and Simulation Capstone Conference, Norfolk, VA, 2008 [Special Recognition Award, Eastern Virginia Medical School/Medical Track]
M. Liu, H. Cao, X. Li, and B. Wang. Super Resolution Reconstruction Based on Motion Estimation Error and Edge Adaptive Constraints. Journal of Image and Graphics (Chinese) 2007.
M. Liu, H. Cao, and X. Li. An Adaptive Algorithm for Super Resolution Reconstruction of Video Image. Application Research of Computers (Chinese), 2007
M. Liu, H. Cao, X. Li, and S. Yi. Super Resolution Reconstruction Based on Motion Estimation Error and Edge Adaptive Constraints. In SPIE Proc. Visual Information Processing XV, page 62460B, 2006.