
Liu Ziyin

Email: liu.ziyin.p (at) gmail.com / ziyinl (at) mit.edu
Office: Room 26-209, MIT


I am a researcher at MIT and NTT Research. My research lies at the mysterious and fantastic union (and intersection) of mathematics, physics, neuroscience, and artificial intelligence. At MIT, I work with Prof. Isaac Chuang, and I also collaborate with Prof. Tomaso Poggio in the BCS department. My research focuses on the theoretical foundations of deep learning. Before coming to MIT, I received my PhD in physics from the University of Tokyo under the supervision of Prof. Masahito Ueda, and before that a Bachelor's degree in physics and mathematics from Carnegie Mellon University. I also serve as an Area Chair for NeurIPS and ICLR. Personally, I am interested in art, literature, and philosophy, and I play Go. Also, Paul Liang is my great great great friend. If you have questions, want to collaborate, or just want to say hi, feel free to send me an email.

About my name: When writing my name in publications, I stick to the Eastern convention, where the family name comes first and the given name last, so please write my name as "Liu Ziyin." At the same time, because my given name is Ziyin, feel free to just call me "Ziyin."

Recent talks:
Universal Phenomena, Irreversibility, and Thermodynamics in Deep Representation Learning
How does physics help understand deep learning?

Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).
Master's thesis: Mean-field learning dynamics of deep neural networks (2020).

Research Interests

I am particularly interested in identifying scientific principles of artificial intelligence (what is a principle?), and I think tools and intuitions from other fields of science can be of great help. Broadly speaking, I work to advance a number of fields at once.

For my selected works, see below; for my full list of publications and preprints, see publications.

NTT interns I work(ed) with


Tutorial / Notes

Proof of a perfect platonic representation hypothesis (2025)

Selected Work

(* denotes equal contribution or corresponding author)

  1. Topological Invariance and Breakdown in Learning [arXiv]
    Yongyi Yang, Tomaso Poggio, Isaac Chuang, Liu Ziyin*
    Demos: Topological invariance demo 1, demo 2, demo 3
  2. A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws [arXiv]
    Hong-Yi Wang, Di Luo, Tomaso Poggio, Isaac L. Chuang, Liu Ziyin*
  3. Neural Thermodynamics I: Entropic Forces in Deep and Universal Representation Learning [arXiv]
    Liu Ziyin*, Yizhou Xu*, Isaac Chuang
    NeurIPS 2025
  4. Formation of Representations in Neural Networks [paper]
    Liu Ziyin, Isaac Chuang, Tomer Galanti, Tomaso Poggio
    ICLR 2025 (spotlight: top 5% of all submissions)
  5. Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent [arXiv]
    Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu
    NeurIPS 2024
  6. Symmetry Induces Structure and Constraint of Learning [arXiv]
    Liu Ziyin
    ICML 2024
  7. Sparsity by Redundancy: Solving L1 with SGD [arXiv]
    Liu Ziyin*, Zihao Wang*
    ICML 2023
  8. Exact Solutions of a Deep Linear Network [arXiv]
    Liu Ziyin, Botao Li, Xiangming Meng
    NeurIPS 2022
  9. SGD Can Converge to Local Maxima [arXiv]
    Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
    ICLR 2022 (spotlight: top 5% of all submissions)
  10. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent [arXiv]
    Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
    ICML 2021
  11. Neural Networks Fail to Learn Periodic Functions and How to Fix It [arXiv]
    Liu Ziyin, Tilman Hartwig, Masahito Ueda
    NeurIPS 2020
  12. Deep Gamblers: Learning to Abstain with Portfolio Theory [arXiv]
    Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
    NeurIPS 2019

This page has been accessed several times since July 07, 2018.