Liu Ziyin

Email: liu.ziyin.p (at) gmail.com / ziyinl@mit.edu
Office: Room 26-209, MIT



I am an NTT Postdoctoral Fellow at MIT, where I work with Prof. Isaac Chuang. My research focuses on the theoretical foundations of deep learning. Before coming to MIT, I was a PhD student in the Ueda lab at the University of Tokyo. Personally, I am interested in art, literature, and philosophy, and I also play Go. If you have questions, want to collaborate, or just want to say hi, please send me an email.

Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).
Master's thesis: Mean-field learning dynamics of deep neural networks (2020).

Research Interest

I am particularly interested in identifying the scientific principles underlying artificial intelligence, and I believe that tools and intuitions from other fields of science can be of great help.

Past Research

So far, I have worked on the following problems. In deep learning, I strive to establish solid methods:
  • achieving L1 penalty with gradient descent
  • a horse-race-inspired blackbox method for uncertainty estimation in deep learning
  • a theoretically motivated data augmentation method for financial portfolio construction

I also do research on statistical physics:
  • a universal thermodynamic uncertainty relation
  • econophysics
Recent Preprints

1. The Implicit Bias of Gradient Noise: A Symmetry Perspective
   Liu Ziyin, Mingze Wang, Lei Wu
   Preprint 2024
   [paper] [arXiv]
2. When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
   Yizhou Xu, Liu Ziyin
   Preprint 2024
   [paper] [arXiv]
3. Law of Balance and Stationary Distribution of Stochastic Gradient Descent
   Liu Ziyin*, Hongchao Li*, Masahito Ueda
   Preprint 2023
   [paper] [arXiv]
4. Probabilistic Stability of Stochastic Gradient Descent
   Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
   Preprint 2023
   [paper] [arXiv]

Publications

1. Symmetry Leads to Structure and Constraint of Learning
   Liu Ziyin
   ICML 2024
   [paper] [arXiv]
2. Zeroth, first, and second-order phase transitions in deep neural networks
   Liu Ziyin, Masahito Ueda
   Physical Review Research 2023
   [paper] [arXiv]
3. Exact Solutions of a Deep Linear Network
   Liu Ziyin, Botao Li, Xiangming Meng
   Journal of Statistical Mechanics: Theory and Experiment, 2023
   [paper] [arXiv]
4. On the stepwise nature of self-supervised learning
   James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
   ICML 2023
   [paper] [arXiv]
5. Sparsity by Redundancy: Solving L1 with SGD
   Liu Ziyin*, Zihao Wang*
   ICML 2023
   [paper] [arXiv]
6. What shapes the loss landscape of self-supervised learning?
   Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
   ICLR 2023
   [paper] [arXiv]
7. Exact Solutions of a Deep Linear Network
   Liu Ziyin, Botao Li, Xiangming Meng
   NeurIPS 2022
   [paper] [arXiv]
8. Posterior Collapse of a Linear Latent Variable Model
   Zihao Wang*, Liu Ziyin*
   NeurIPS 2022 (oral: 1% of all submissions)
   [paper] [arXiv]
9. Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
   Liu Ziyin, Masahito Ueda
   Physical Review Research (2022)
   [paper] [arXiv]
10. Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
    Liu Ziyin, Kentaro Minami, Kentaro Imajo
    ICAIF 2022 (3rd ACM International Conference on AI in Finance)
    [paper] [arXiv]
11. Power Laws and Symmetries in a Minimal Model of Financial Market Economy
    Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
    Physical Review Research (2022)
    [paper] [arXiv]
12. Logarithmic landscape and power-law escape rate of SGD
    Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
    ICML 2022
    [paper] [arXiv]
13. SGD with a Constant Large Learning Rate Can Converge to Local Maxima
    Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
    ICLR 2022 (spotlight: 5% of all submissions)
    [paper] [arXiv]
14. Strength of Minibatch Noise in SGD
    Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
    ICLR 2022 (spotlight: 5% of all submissions)
    [paper] [arXiv]
15. On the Distributional Properties of Adaptive Gradients
    Zhang Zhiyi*, Liu Ziyin*
    UAI 2021
    [paper] [arXiv]
16. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
    Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
    ICML 2021
    [paper] [arXiv]
17. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
    Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
    ACM Multimedia 2021
    NeurIPS 2020 Workshop on Meta Learning
    [arXiv] [code]
18. Neural Networks Fail to Learn Periodic Functions and How to Fix It
    Liu Ziyin, Tilman Hartwig, Masahito Ueda
    NeurIPS 2020
    [paper] [arXiv]
19. Deep Gamblers: Learning to Abstain with Portfolio Theory
    Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
    NeurIPS 2019
    [paper] [arXiv] [code]
20. Think Locally, Act Globally: Federated Learning with Local and Global Representations
    Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
    NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
    [paper] [arXiv] [code]
21. Multimodal Language Analysis with Recurrent Multistage Fusion
    Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
    EMNLP 2018 (oral presentation)
    [paper] [supp] [arXiv] [slides]

I serve as a reviewer for the following venues: ICML, IJCAI, CVPR, ICCV, AISTATS, UAI, NeurIPS, ICLR, TMLR, IEEE-TSP, TPAMI, KDD, IEEE-TNNLS, JMLR, SIAM-SDM...


This page has been accessed several times since July 07, 2018.