Liu Ziyin

Recent Preprints

  1. Heterosynaptic Circuits Are Universal Gradient Machines
     Liu Ziyin, Isaac Chuang, Tomaso Poggio
     Preprint 2025
     [arXiv]
  2. Parameter Symmetry Breaking and Restoration Determines the Hierarchical Learning in AI Systems
     Liu Ziyin, Yizhou Xu, Tomaso Poggio, Isaac Chuang
     Preprint 2025
     [arXiv]
  3. Self-Assembly of a Biologically Plausible Learning Circuit
     Qianli Liao*, Liu Ziyin*, Yulu Gan*, Brian Cheung, Mark Harnett, Tomaso Poggio
     Preprint 2024
     [arXiv]

Tutorial / Notes

Proof of a Perfect Platonic Representation Hypothesis (2025)

Publications

  1. Emergence of Hebbian Dynamics in Regularized Non-Local Learners
     David Koplow, Tomaso Poggio, Liu Ziyin
     ICML 2026
     [arXiv]
  2. A Universal Compression Theory: Lottery Ticket Hypothesis and Superpolynomial Scaling Laws
     Hong-Yi Wang, Di Luo, Tomaso Poggio, Isaac L. Chuang, Liu Ziyin*
     ICLR 2026
     [arXiv]
  3. Neural Thermodynamics I: Entropic Forces in Deep and Universal Representation Learning
     Liu Ziyin*, Yizhou Xu*, Isaac Chuang
     NeurIPS 2025
     [arXiv]
  4. Law of Balance and Stationary Distribution of Stochastic Gradient Descent
     Liu Ziyin*, Hongchao Li*, Masahito Ueda
     Physical Review E
     [arXiv]
  5. Compositional Generalization Requires More Than Disentangled Representations
     Qiyao Liang, Daoyuan Qian, Liu Ziyin, Ila Fiete
     ICML 2025
     [arXiv]
  6. Understanding the Emergence of Multimodal Representation Alignment
     Megan Tjandrasuwita, Chanakya Ekbote, Liu Ziyin, Paul Pu Liang
     ICML 2025
     [arXiv]
  7. Formation of Representations in Neural Networks
     Liu Ziyin, Isaac Chuang, Tomer Galanti, Tomaso Poggio
     ICLR 2025 (spotlight: 5% of all submissions)
     [paper]
  8. Remove Symmetries to Control Model Expressivity
     Liu Ziyin*, Yizhou Xu*, Isaac Chuang
     ICLR 2025
     [paper]
  9. When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
     Yizhou Xu*, Liu Ziyin*
     ICLR 2025
     [paper]
  10. Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent
     Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu
     NeurIPS 2024
     [paper] [arXiv]
  11. Symmetry Induces Structure and Constraint of Learning
     Liu Ziyin
     ICML 2024
     [arXiv]
  12. Zeroth, First, and Second-Order Phase Transitions in Deep Neural Networks
     Liu Ziyin, Masahito Ueda
     Physical Review Research (2023)
     [arXiv]
  13. Exact Solutions of a Deep Linear Network
     Liu Ziyin, Botao Li, Xiangming Meng
     Journal of Statistical Mechanics: Theory and Experiment (2023)
     [paper] [arXiv]
  14. On the Stepwise Nature of Self-Supervised Learning
     James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
     ICML 2023
     [paper] [arXiv]
  15. Sparsity by Redundancy: Solving L1 with SGD
     Liu Ziyin*, Zihao Wang*
     ICML 2023
     [paper] [arXiv]
  16. What Shapes the Loss Landscape of Self-Supervised Learning?
     Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
     ICLR 2023
     [paper] [arXiv]
  17. Exact Solutions of a Deep Linear Network
     Liu Ziyin, Botao Li, Xiangming Meng
     NeurIPS 2022
     [paper] [arXiv]
  18. Posterior Collapse of a Linear Latent Variable Model
     Zihao Wang*, Liu Ziyin*
     NeurIPS 2022 (oral: 1% of all submissions)
     [paper] [arXiv]
  19. Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
     Liu Ziyin, Masahito Ueda
     Physical Review Research (2022)
     [paper] [arXiv]
  20. Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
     Liu Ziyin, Kentaro Minami, Kentaro Imajo
     ICAIF 2022 (3rd ACM International Conference on AI in Finance)
     [paper] [arXiv]
  21. Power Laws and Symmetries in a Minimal Model of Financial Market Economy
     Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
     Physical Review Research (2022)
     [paper] [arXiv]
  22. Logarithmic Landscape and Power-Law Escape Rate of SGD
     Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
     ICML 2022
     [paper] [arXiv]
  23. SGD with a Constant Large Learning Rate Can Converge to Local Maxima
     Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
     ICLR 2022 (spotlight: 5% of all submissions)
     [paper] [arXiv]
  24. Strength of Minibatch Noise in SGD
     Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
     ICLR 2022 (spotlight: 5% of all submissions)
     [paper] [arXiv]
  25. On the Distributional Properties of Adaptive Gradients
     Zhang Zhiyi*, Liu Ziyin*
     UAI 2021
     [paper] [arXiv]
  26. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
     Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
     ICML 2021
     [paper] [arXiv]
  27. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
     Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
     ACM Multimedia 2021
     NeurIPS 2020 Workshop on Meta Learning
     [arXiv] [code]
  28. Neural Networks Fail to Learn Periodic Functions and How to Fix It
     Liu Ziyin, Tilman Hartwig, Masahito Ueda
     NeurIPS 2020
     [paper] [arXiv]
  29. Deep Gamblers: Learning to Abstain with Portfolio Theory
     Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
     NeurIPS 2019
     [paper] [arXiv] [code]
  30. Think Locally, Act Globally: Federated Learning with Local and Global Representations
     Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
     NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
     [paper] [arXiv] [code]
  31. Multimodal Language Analysis with Recurrent Multistage Fusion
     Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
     EMNLP 2018 (oral presentation)
     [paper] [supp] [arXiv] [slides]