Liu Ziyin

Recent Preprints

  1. Emergence of Hebbian Dynamics in Regularized Non-Local Learners
    David Koplow, Tomaso Poggio, Liu Ziyin
    Preprint 2025
    [arXiv]
  2. Heterosynaptic Circuits Are Universal Gradient Machines
    Liu Ziyin, Isaac Chuang, Tomaso Poggio
    Preprint 2025
    [arXiv]
  3. Parameter Symmetry Breaking and Restoration Determines the Hierarchical Learning in AI Systems
    Liu Ziyin, Yizhou Xu, Tomaso Poggio, Isaac Chuang
    Preprint 2025
    [arXiv]
  4. Self-Assembly of a Biologically Plausible Learning Circuit
    Qianli Liao*, Liu Ziyin*, Yulu Gan*, Brian Cheung, Mark Harnett, Tomaso Poggio
    Preprint 2024
    [arXiv]

Tutorial / Notes

Proof of a perfect Platonic representation hypothesis (2025)

Publications

  1. Neural Thermodynamics I: Entropic Forces in Deep and Universal Representation Learning
    Liu Ziyin*, Yizhou Xu*, Isaac Chuang
    NeurIPS 2025
    [arXiv]
  2. Law of Balance and Stationary Distribution of Stochastic Gradient Descent
    Liu Ziyin*, Hongchao Li*, Masahito Ueda
    Physical Review E
    [arXiv]
  3. Compositional Generalization Requires More Than Disentangled Representations
    Qiyao Liang, Daoyuan Qian, Liu Ziyin, Ila Fiete
    ICML 2025
    [arXiv]
  4. Understanding the Emergence of Multimodal Representation Alignment
    Megan Tjandrasuwita, Chanakya Ekbote, Liu Ziyin, Paul Pu Liang
    ICML 2025
    [arXiv]
  5. Formation of Representations in Neural Networks
    Liu Ziyin, Isaac Chuang, Tomer Galanti, Tomaso Poggio
    ICLR 2025 (spotlight, top 5% of submissions)
    [paper]
  6. Remove Symmetries to Control Model Expressivity
    Liu Ziyin*, Yizhou Xu*, Isaac Chuang
    ICLR 2025
    [paper]
  7. When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
    Yizhou Xu*, Liu Ziyin*
    ICLR 2025
    [paper]
  8. Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent
    Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu
    NeurIPS 2024
    [paper] [arXiv]
  9. Symmetry Induces Structure and Constraint of Learning
    Liu Ziyin
    ICML 2024
    [arXiv]
  10. Zeroth, first, and second-order phase transitions in deep neural networks
    Liu Ziyin, Masahito Ueda
    Physical Review Research (2023)
    [arXiv]
  11. Exact Solutions of a Deep Linear Network
    Liu Ziyin, Botao Li, Xiangming Meng
    Journal of Statistical Mechanics: Theory and Experiment (2023)
    [paper] [arXiv]
  12. On the stepwise nature of self-supervised learning
    James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
    ICML 2023
    [paper] [arXiv]
  13. Sparsity by Redundancy: Solving L1 with SGD
    Liu Ziyin*, Zihao Wang*
    ICML 2023
    [paper] [arXiv]
  14. What shapes the loss landscape of self-supervised learning?
    Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
    ICLR 2023
    [paper] [arXiv]
  15. Exact Solutions of a Deep Linear Network
    Liu Ziyin, Botao Li, Xiangming Meng
    NeurIPS 2022
    [paper] [arXiv]
  16. Posterior Collapse of a Linear Latent Variable Model
    Zihao Wang*, Liu Ziyin*
    NeurIPS 2022 (oral, top 1% of submissions)
    [paper] [arXiv]
  17. Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
    Liu Ziyin, Masahito Ueda
    Physical Review Research (2022)
    [paper] [arXiv]
  18. Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
    Liu Ziyin, Kentaro Minami, Kentaro Imajo
    ICAIF 2022 (3rd ACM International Conference on AI in Finance)
    [paper] [arXiv]
  19. Power Laws and Symmetries in a Minimal Model of Financial Market Economy
    Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
    Physical Review Research (2022)
    [paper] [arXiv]
  20. Logarithmic landscape and power-law escape rate of SGD
    Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
    ICML 2022
    [paper] [arXiv]
  21. SGD with a Constant Large Learning Rate Can Converge to Local Maxima
    Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
    ICLR 2022 (spotlight, top 5% of submissions)
    [paper] [arXiv]
  22. Strength of Minibatch Noise in SGD
    Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
    ICLR 2022 (spotlight, top 5% of submissions)
    [paper] [arXiv]
  23. On the Distributional Properties of Adaptive Gradients
    Zhang Zhiyi*, Liu Ziyin*
    UAI 2021
    [paper] [arXiv]
  24. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
    Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
    ICML 2021
    [paper] [arXiv]
  25. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
    Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
    ACM Multimedia 2021
    NeurIPS 2020 Workshop on Meta Learning
    [arXiv] [code]
  26. Neural Networks Fail to Learn Periodic Functions and How to Fix It
    Liu Ziyin, Tilman Hartwig, Masahito Ueda
    NeurIPS 2020
    [paper] [arXiv]
  27. Deep Gamblers: Learning to Abstain with Portfolio Theory
    Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
    NeurIPS 2019
    [paper] [arXiv] [code]
  28. Think Locally, Act Globally: Federated Learning with Local and Global Representations
    Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
    NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
    [paper] [arXiv] [code]
  29. Multimodal Language Analysis with Recurrent Multistage Fusion
    Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
    EMNLP 2018 (oral presentation)
    [paper] [supp] [arXiv] [slides]