Liu Ziyin
Email: liu.ziyin.p (at) gmail.com / ziyinl (at) mit.edu
Office: Room 26-209, MIT
CV
I am a Postdoctoral Fellow at MIT and NTT Research. At MIT, I work with Prof. Isaac Chuang and also collaborate with Prof. Tomaso Poggio in the BCS department. My research focuses on the theoretical foundations of deep learning. Before coming to MIT, I received my PhD in physics from the University of Tokyo under the supervision of Prof. Masahito Ueda, and my Bachelor's degree in physics and mathematics from Carnegie Mellon University. Personally, I am interested in art, literature, and philosophy. I also play Go. If you have questions, want to collaborate, or just want to say hi, feel free to send an email. Also, NTT Research might be hiring new interns; please consider applying.
Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).
Master's thesis: Mean-field learning dynamics of deep neural networks (2020).
Research Interests
I am particularly interested in identifying scientific principles of artificial intelligence (what is a principle?), and I think tools and intuitions from other fields of science can be of great help. Broadly speaking, I work to advance the following fields:
- Physics of Learning (symmetry breaking, phase transitions, fluctuation-dissipation relations, etc., in neural networks)
- Empirical Science of AI
- Design of Efficient and Principled Algorithms
Recent Preprints
- Remove Symmetries to Control Model Expressivity
Liu Ziyin*, Yizhou Xu*, Isaac Chuang
Preprint 2024
[paper] [arXiv]
- When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
Yizhou Xu, Liu Ziyin
Preprint 2024
[paper] [arXiv]
- Law of Balance and Stationary Distribution of Stochastic Gradient Descent
Liu Ziyin*, Hongchao Li*, Masahito Ueda
Preprint 2023
[paper] [arXiv]
- Probabilistic Stability of Stochastic Gradient Descent
Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
Preprint 2023
[paper] [arXiv]
Publications
- Loss Symmetry and Noise Equilibrium of Stochastic Gradient Descent
Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu
NeurIPS 2024
[paper] [arXiv]
- Symmetry Induces Structure and Constraint of Learning
Liu Ziyin
ICML 2024
[arXiv]
- Zeroth, first, and second-order phase transitions in deep neural networks
Liu Ziyin, Masahito Ueda
Physical Review Research 2023
[arXiv]
- Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangming Meng
Journal of Statistical Mechanics: Theory and Experiment, 2023
[paper] [arXiv]
- On the stepwise nature of self-supervised learning
James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
ICML 2023
[paper] [arXiv]
- Sparsity by Redundancy: Solving L1 with SGD
Liu Ziyin*, Zihao Wang*
ICML 2023
[paper] [arXiv]
- What shapes the loss landscape of self-supervised learning?
Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
ICLR 2023
[paper] [arXiv]
- Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangming Meng
NeurIPS 2022
[paper] [arXiv]
- Posterior Collapse of a Linear Latent Variable Model
Zihao Wang*, Liu Ziyin*
NeurIPS 2022 (oral: 1% of all submissions)
[paper] [arXiv]
- Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
Liu Ziyin, Masahito Ueda
Physical Review Research (2022)
[paper] [arXiv]
- Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
Liu Ziyin, Kentaro Minami, Kentaro Imajo
ICAIF 2022 (3rd ACM International Conference on AI in Finance)
[paper] [arXiv]
- Power Laws and Symmetries in a Minimal Model of Financial Market Economy
Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
Physical Review Research (2022)
[paper] [arXiv]
- Logarithmic landscape and power-law escape rate of SGD
Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
ICML 2022
[paper] [arXiv]
- SGD with a Constant Large Learning Rate Can Converge to Local Maxima
Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
ICLR 2022 (spotlight: 5% of all submissions)
[paper] [arXiv]
- Strength of Minibatch Noise in SGD
Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
ICLR 2022 (spotlight: 5% of all submissions)
[paper] [arXiv]
- On the Distributional Properties of Adaptive Gradients
Zhang Zhiyi*, Liu Ziyin*
UAI 2021
[paper] [arXiv]
- Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
ICML 2021
[paper] [arXiv]
- Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
ACM Multimedia 2021
NeurIPS 2020 Workshop on Meta Learning
[arXiv] [code]
- Neural Networks Fail to Learn Periodic Functions and How to Fix It
Liu Ziyin, Tilman Hartwig, Masahito Ueda
NeurIPS 2020
[paper] [arXiv]
- Deep Gamblers: Learning to Abstain with Portfolio Theory
Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
NeurIPS 2019
[paper] [arXiv] [code]
- Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
[paper] [arXiv] [code]
- Multimodal Language Analysis with Recurrent Multistage Fusion
Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
EMNLP 2018 (oral presentation)
[paper] [supp] [arXiv] [slides]
I also serve as a reviewer for ICML, IJCAI, CVPR, ICCV, AISTATS, UAI, NeurIPS, ICLR, TMLR, IEEE-TSP, TPAMI, KDD, IEEE-TNNLS, JMLR, SIAM-SDM...