Kenji Kawaguchi. Deep Learning without Poor Local Minima.
In Advances in Neural Information Processing Systems (NeurIPS), 2016.
[pdf] [BibTeX] [Spotlight Video] [Talk] Selected for NeurIPS oral presentation (top 2% of submissions).
Kenji Kawaguchi and Jiaoyang Huang. Gradient Descent Finds Global Minima for Generalizable Deep Neural Networks of Practical Sizes.
In Proceedings of the 57th Allerton Conference on Communication, Control, and Computing (Allerton), to appear, 2019.
[pdf] [BibTeX] [Video]
Kenji Kawaguchi*, Bo Xie*, Vikas Verma, and Le Song. Deep Semi-Random Features for Nonlinear Function Approximation.
In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
Kenji Kawaguchi, Leslie Pack Kaelbling, and Yoshua Bengio. Generalization in Deep Learning. In Mathematics of Deep Learning, Cambridge University Press, to appear.
Preprint available at: arXiv preprint arXiv:1710.05468, 2017.
[pdf] [BibTeX] [Code]
Kenji Kawaguchi, Jiaoyang Huang, and Leslie Pack Kaelbling. Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning. Neural Computation, accepted, 2019.
Kenji Kawaguchi, Yu Maruyama, and Xiaoyu Zheng. Global Continuous Optimization with Error Bound and Fast Convergence.
Journal of Artificial Intelligence Research (JAIR), 56, 153-195, 2016.
Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, and Hrushikesh Mhaskar. Theory of Deep Learning III: explaining the non-overfitting puzzle.
Massachusetts Institute of Technology CBMM Memo No. 73, 2018.
Qianli Liao, Kenji Kawaguchi, and Tomaso Poggio. Streaming Normalization: Towards Simpler and More Biologically-Plausible Normalizations for Online and Recurrent Learning.
Massachusetts Institute of Technology CBMM Memo No. 57, 2016.
Invited talk at an international conference:
• Minisymposium on Theoretical Foundations of Deep Learning, ICIAM 2019, Spain.
Other invited talks:
• TBD, AI Seminar Series at Carnegie Mellon University (CMU), 2019.
• TBD, Harvard University / Professor Horng-Tzer Yau lab, 2019.
• TBD, Carnegie Mellon University (CMU) / SAILING lab, 2019.
• Elimination of All Bad Local Minima in Deep Learning, the PhILMs center (PNNL/SNL with Brown/Stanford/MIT/UCSB), 2019.
• Generalization in Deep Learning, MIT / Professor David Sontag lab, 2017.
• Deep Learning without Poor Local Minima, Google Research Cambridge, 2017.
• Deep Learning without Poor Local Minima, MIT / Professor Tomaso Poggio lab, 2016.
• Deep Learning without Poor Local Minima, MIT / Machine Learning Tea, 2016.
• "Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning" at IAS Workshop on Theory of Deep Learning: Where next?, organized by Sanjeev Arora, Joan Bruna, Rong Ge, Suriya Gunasekar, Jason Lee, Bin Yu, 2019.
• "Deep Learning without poor local minima" at CRM/CIFAR Deep Learning Summer School, organized by Aaron Courville and Yoshua Bengio, 2016.
Invited research visits:
• TTIC, Chicago, 2019 (3 weeks).
• Microsoft Research (MSR), Redmond, 2018 (3 weeks).
• Invited to join the participant list for the proposal of a 2021 program at the Isaac Newton Institute for Mathematical Sciences, University of Cambridge, by the organizing team of Prof. Peter Bartlett, Prof. Arnulf Jentzen, Prof. Anders Hansen, Prof. Gitta Kutyniok, Prof. Stephane Mallat, and Prof. Carola Schönlieb.
Program Committee Member:
• AAAI Conference on Artificial Intelligence (AAAI), 2019.
• Conference on Uncertainty in Artificial Intelligence (UAI), 2019.
• AAAI Conference on Artificial Intelligence (AAAI), 2020.
Invited Journal Reviewer: Journal of Machine Learning Research (JMLR), Neural Computation (MIT Press), IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS).
Invited Conference Reviewer: Conference on Neural Information Processing Systems (NeurIPS), 2019.
Massachusetts Institute of Technology (present)
Ph.D. student, Electrical Engineering and Computer Science
Advisor: Leslie Pack Kaelbling
Thesis committee: Yoshua Bengio and Suvrit Sra
GPA 5.0/5.0 (+ highest internal grades: all A+ except one class in which A was the highest grade offered)
Massachusetts Institute of Technology (2016/02)
M.S., Electrical Engineering and Computer Science
Advisors: Leslie Pack Kaelbling and Tomás Lozano-Pérez.
Thesis: Towards Practical Theory: Bayesian Optimization and Optimal Exploration
GPA 5.0/5.0 (+ highest internal grades)
Funai Overseas Scholarship, April 2014 - August 2016
Nakajima Foundation Fellowship
Additional Honors & Awards:
• AAAI Student Scholarship 2018
• NeurIPS Travel Award 2016
• A research fund from the Japan Science and Technology Agency
• More than 10 awards for leadership, sports, and academics