Liu Ziyin
Recent Preprints
- Emergence of Hebbian Dynamics in Regularized Non-Local Learners
David Koplow, Tomaso Poggio, Liu Ziyin
Preprint 2025
[arXiv]
- Heterosynaptic Circuits Are Universal Gradient Machines
Liu Ziyin, Isaac Chuang, Tomaso Poggio
Preprint 2025
[arXiv]
- Parameter Symmetry Breaking and Restoration Determines the Hierarchical Learning in AI Systems
Liu Ziyin, Yizhou Xu, Tomaso Poggio, Isaac Chuang
Preprint 2025
[arXiv]
- Self-Assembly of a Biologically Plausible Learning Circuit
Qianli Liao*, Liu Ziyin*, Yulu Gan*, Brian Cheung, Mark Harnett, Tomaso Poggio
Preprint 2024
[arXiv]
Tutorial / Notes
Proof of a perfect Platonic representation hypothesis (2025)
Publications
- Neural Thermodynamics I: Entropic Forces in Deep and Universal Representation Learning
Liu Ziyin*, Yizhou Xu*, Isaac Chuang
NeurIPS 2025
[arXiv]
- Law of Balance and Stationary Distribution of Stochastic Gradient Descent
Liu Ziyin*, Hongchao Li*, Masahito Ueda
Physical Review E
[arXiv]
- Compositional Generalization Requires More Than Disentangled Representations
Qiyao Liang, Daoyuan Qian, Liu Ziyin, Ila Fiete
ICML 2025
[arXiv]
- Understanding the Emergence of Multimodal Representation Alignment
Megan Tjandrasuwita, Chanakya Ekbote, Liu Ziyin, Paul Pu Liang
ICML 2025
[arXiv]
- Formation of Representations in Neural Networks
Liu Ziyin, Isaac Chuang, Tomer Galanti, Tomaso Poggio
ICLR 2025 (spotlight: top 5% of submissions)
[paper]
- Remove Symmetries to Control Model Expressivity
Liu Ziyin*, Yizhou Xu*, Isaac Chuang
ICLR 2025
[paper]
- When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
Yizhou Xu*, Liu Ziyin*
ICLR 2025
[paper]
- Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent
Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu
NeurIPS 2024
[paper] [arXiv]
- Symmetry Induces Structure and Constraint of Learning
Liu Ziyin
ICML 2024
[arXiv]
- Zeroth, first, and second-order phase transitions in deep neural networks
Liu Ziyin, Masahito Ueda
Physical Review Research 2023
[arXiv]
- Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangming Meng
Journal of Statistical Mechanics: Theory and Experiment 2023
[paper] [arXiv]
- On the stepwise nature of self-supervised learning
James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
ICML 2023
[paper] [arXiv]
- Sparsity by Redundancy: Solving L1 with SGD
Liu Ziyin*, Zihao Wang*
ICML 2023
[paper] [arXiv]
- What shapes the loss landscape of self-supervised learning?
Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
ICLR 2023
[paper] [arXiv]
- Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangming Meng
NeurIPS 2022
[paper] [arXiv]
- Posterior Collapse of a Linear Latent Variable Model
Zihao Wang*, Liu Ziyin*
NeurIPS 2022 (oral: top 1% of submissions)
[paper] [arXiv]
- Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
Liu Ziyin, Masahito Ueda
Physical Review Research 2022
[paper] [arXiv]
- Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
Liu Ziyin, Kentaro Minami, Kentaro Imajo
ICAIF 2022 (3rd ACM International Conference on AI in Finance)
[paper] [arXiv]
- Power Laws and Symmetries in a Minimal Model of Financial Market Economy
Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
Physical Review Research 2022
[paper] [arXiv]
- Logarithmic landscape and power-law escape rate of SGD
Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
ICML 2022
[paper] [arXiv]
- SGD with a Constant Large Learning Rate Can Converge to Local Maxima
Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
ICLR 2022 (spotlight: top 5% of submissions)
[paper] [arXiv]
- Strength of Minibatch Noise in SGD
Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
ICLR 2022 (spotlight: top 5% of submissions)
[paper] [arXiv]
- On the Distributional Properties of Adaptive Gradients
Zhang Zhiyi*, Liu Ziyin*
UAI 2021
[paper] [arXiv]
- Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
ICML 2021
[paper] [arXiv]
- Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
ACM Multimedia 2021
NeurIPS 2020 Workshop on Meta Learning
[arXiv] [code]
- Neural Networks Fail to Learn Periodic Functions and How to Fix It
Liu Ziyin, Tilman Hartwig, Masahito Ueda
NeurIPS 2020
[paper] [arXiv]
- Deep Gamblers: Learning to Abstain with Portfolio Theory
Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
NeurIPS 2019
[paper] [arXiv] [code]
- Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
[paper] [arXiv] [code]
- Multimodal Language Analysis with Recurrent Multistage Fusion
Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
EMNLP 2018 (oral presentation)
[paper] [supp] [arXiv] [slides]