Kenji Kawaguchi

Ph.D. student
Massachusetts Institute of Technology (MIT)
Computer Science and Artificial Intelligence Laboratory (CSAIL)
Learning and Intelligent Systems Group (LIS)


Recent Publications

Refereed Conference Papers

Kenji Kawaguchi. Deep Learning without Poor Local Minima. In Advances in Neural Information Processing Systems (NIPS), 2016.
[pdf] [BibTeX] [Spotlight Video] [Talk] Selected for NIPS oral presentation (top 2% of submissions)

Kenji Kawaguchi. Bounded Optimal Exploration in MDP. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016.
[pdf] [BibTeX]

Kenji Kawaguchi, Leslie Pack Kaelbling and Tomás Lozano-Pérez. Bayesian Optimization with Exponential Convergence. In Advances in Neural Information Processing Systems (NIPS), 2015.
[pdf] [BibTeX] [Code]

Refereed Journal Articles

Kenji Kawaguchi, Yu Maruyama and Xiaoyu Zheng. Global Continuous Optimization with Error Bound and Fast Convergence. Journal of Artificial Intelligence Research (JAIR), 56: 153-195, 2016.
[pdf] [BibTeX] [Code]

Xiaoyu Zheng, Hiroto Itoh, Kenji Kawaguchi, Hitoshi Tamaki and Yu Maruyama. Application of Bayesian nonparametric models to the uncertainty and sensitivity analysis of source term in a BWR severe accident. Reliability Engineering & System Safety, 138: 253-262, 2015.
[pdf] [BibTeX]

Jun Ishikawa, Kenji Kawaguchi and Yu Maruyama. Analysis for iodine release from unit 3 of Fukushima Dai-ichi nuclear power plant with consideration of water phase iodine chemistry. Journal of Nuclear Science and Technology, 52(3):308-315, 2015.
[pdf] [BibTeX]

Preprints

Kenji Kawaguchi, Bo Xie and Le Song. Deep Semi-Random Features for Nonlinear Function Approximation. arXiv preprint arXiv:1702.08882, 2017.
[pdf] [BibTeX] [Code]

Qianli Liao, Kenji Kawaguchi and Tomaso Poggio. Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. arXiv preprint arXiv:1610.06160, 2016.
[pdf] [BibTeX]


Full Publication List


Code

Bayesian optimization with exponential convergence

Global Continuous Optimization with Error Bound and Fast Convergence courtesy of Erich Merrill

- Due to a licensing issue, I cannot release this second code myself; Erich Merrill generously made his own implementation of our algorithm publicly available.

Deep Semi-Random Features for Nonlinear Function Approximation


Research Interests

Machine Learning, Deep Learning, Convex/Nonconvex Optimization, Bayesian Optimization, Model-based Reinforcement Learning


Education

Massachusetts Institute of Technology             2014/09 - Present
Ph.D. student, Electrical Engineering and Computer Science
Advisors: Leslie Pack Kaelbling and Tomás Lozano-Pérez.

Massachusetts Institute of Technology             2016/02
M.S., Electrical Engineering and Computer Science
Advisors: Leslie Pack Kaelbling and Tomás Lozano-Pérez.
Thesis: Towards Practical Theory: Bayesian Optimization and Optimal Exploration


Awards

Funai Overseas Scholarship
Funai Foundation for Information Technology (FFIT)
April 2014 - August 2016

Nakajima Foundation Fellowship
December 2013

Additional Honors & Awards