Liu Ziyin

Email: liu.ziyin.p (at) gmail.com / ziyinl (at) mit.edu
Office: Room 26-209, MIT

CV



I am a Postdoctoral Fellow at MIT and NTT Research. At MIT, I work with Prof. Isaac Chuang, and I also collaborate with Prof. Tomaso Poggio in the BCS department. My research focuses on the theoretical foundations of deep learning. Before coming to MIT, I received my PhD in physics from the University of Tokyo under the supervision of Prof. Masahito Ueda, and my Bachelor's degree in physics and mathematics from Carnegie Mellon University. Personally, I am interested in art, literature, and philosophy, and I also play Go. If you have questions, want to collaborate, or just want to say hi, feel free to send me an email. Also, NTT Research might be hiring new interns; please consider applying.

Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).
Master thesis: Mean-field learning dynamics of deep neural networks (2020).

Research Interest

I am particularly interested in identifying scientific principles of artificial intelligence (what is a principle?), and I think that tools and intuitions from other fields of science can be of great help. Broadly speaking, I work to advance the following fields:

Recent Preprints

  1. Remove Symmetries to Control Model Expressivity
     Liu Ziyin*, Yizhou Xu*, Isaac Chuang
     Preprint 2024
     [paper] [arXiv]
  2. When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
     Yizhou Xu, Liu Ziyin
     Preprint 2024
     [paper] [arXiv]
  3. Law of Balance and Stationary Distribution of Stochastic Gradient Descent
     Liu Ziyin*, Hongchao Li*, Masahito Ueda
     Preprint 2023
     [paper] [arXiv]
  4. Probabilistic Stability of Stochastic Gradient Descent
     Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
     Preprint 2023
     [paper] [arXiv]

Publications

  1. Loss Symmetry and Noise Equilibrium of Stochastic Gradient Descent
     Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu
     NeurIPS 2024
     [paper] [arXiv]
  2. Symmetry Induces Structure and Constraint of Learning
     Liu Ziyin
     ICML 2024
     [arXiv]
  3. Zeroth, first, and second-order phase transitions in deep neural networks
     Liu Ziyin, Masahito Ueda
     Physical Review Research 2023
     [arXiv]
  4. Exact Solutions of a Deep Linear Network
     Liu Ziyin, Botao Li, Xiangming Meng
     Journal of Statistical Mechanics: Theory and Experiment, 2023
     [paper] [arXiv]
  5. On the stepwise nature of self-supervised learning
     James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
     ICML 2023
     [paper] [arXiv]
  6. Sparsity by Redundancy: Solving L1 with SGD
     Liu Ziyin*, Zihao Wang*
     ICML 2023
     [paper] [arXiv]
  7. What shapes the loss landscape of self-supervised learning?
     Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
     ICLR 2023
     [paper] [arXiv]
  8. Exact Solutions of a Deep Linear Network
     Liu Ziyin, Botao Li, Xiangming Meng
     NeurIPS 2022
     [paper] [arXiv]
  9. Posterior Collapse of a Linear Latent Variable Model
     Zihao Wang*, Liu Ziyin*
     NeurIPS 2022 (oral: 1% of all submissions)
     [paper] [arXiv]
  10. Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
     Liu Ziyin, Masahito Ueda
     Physical Review Research (2022)
     [paper] [arXiv]
  11. Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
     Liu Ziyin, Kentaro Minami, Kentaro Imajo
     ICAIF 2022 (3rd ACM International Conference on AI in Finance)
     [paper] [arXiv]
  12. Power Laws and Symmetries in a Minimal Model of Financial Market Economy
     Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
     Physical Review Research (2022)
     [paper] [arXiv]
  13. Logarithmic landscape and power-law escape rate of SGD
     Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
     ICML 2022
     [paper] [arXiv]
  14. SGD with a Constant Large Learning Rate Can Converge to Local Maxima
     Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
     ICLR 2022 (spotlight: 5% of all submissions)
     [paper] [arXiv]
  15. Strength of Minibatch Noise in SGD
     Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
     ICLR 2022 (spotlight: 5% of all submissions)
     [paper] [arXiv]
  16. On the Distributional Properties of Adaptive Gradients
     Zhang Zhiyi*, Liu Ziyin*
     UAI 2021
     [paper] [arXiv]
  17. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
     Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
     ICML 2021
     [paper] [arXiv]
  18. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
     Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
     ACM Multimedia 2021
     NeurIPS 2020 Workshop on Meta Learning
     [arXiv] [code]
  19. Neural Networks Fail to Learn Periodic Functions and How to Fix It
     Liu Ziyin, Tilman Hartwig, Masahito Ueda
     NeurIPS 2020
     [paper] [arXiv]
  20. Deep Gamblers: Learning to Abstain with Portfolio Theory
     Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
     NeurIPS 2019
     [paper] [arXiv] [code]
  21. Think Locally, Act Globally: Federated Learning with Local and Global Representations
     Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
     NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
     [paper] [arXiv] [code]
  22. Multimodal Language Analysis with Recurrent Multistage Fusion
     Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
     EMNLP 2018 (oral presentation)
     [paper] [supp] [arXiv] [slides]

I also serve as a reviewer for the following venues: ICML, IJCAI, CVPR, ICCV, AISTATS, UAI, NeurIPS, ICLR, TMLR, IEEE-TSP, TPAMI, KDD, IEEE-TNNLS, JMLR, SIAM-SDM...


This page has been accessed several times since July 07, 2018.