Stata Center. 32 Vassar Street

Cambridge, MA 02139

Office G32-578

jerryzli AT mit DOT edu

My CV (last updated 2/19/2018)

I am a Ph.D. student studying theoretical computer science at MIT. My advisor is Ankur Moitra. I did my master's at MIT under the wonderful supervision of Nir Shavit. I am partially supported by an NSF Graduate Research Fellowship. My primary research interests are in learning theory and distributed algorithms, but I am broadly interested in many other things in TCS. I particularly like applications of analysis and analytic techniques to TCS problems.

As an undergrad at the University of Washington, I worked on the complexity of branching programs, and on proving hardness of techniques used for naturally arising learning problems in database theory and AI.

In my free time I enjoy being remarkably mediocre at ultimate frisbee, chess, and piano, amongst other things.

**Spectral Signatures for Backdoor Attacks on Deep Nets**

(by contribution) Brandon Tran, Jerry Li, Aleksander Mądry

manuscript

**Privately Learning High-Dimensional Distributions**

Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman

manuscript

**Byzantine Stochastic Gradient Descent**

Dan Alistarh, Zeyuan Allen-Zhu, Jerry Li

manuscript

**SEVER: A Robust Meta-Algorithm for Stochastic Optimization**

Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart

manuscript

**On the limitations of first order approximation in GAN dynamics**

Jerry Li, Aleksander Mądry, John Peebles, Ludwig Schmidt

preliminary version in PADL 2017 as *Towards Understanding the Dynamics of Generative Adversarial Networks*

to appear, ICML 2018

**Fast and Sample Near-Optimal Algorithms for Learning Multidimensional Histograms**

Ilias Diakonikolas, Jerry Li, Ludwig Schmidt

to appear, COLT 2018

**Distributionally Linearizable Data Structures**

Dan Alistarh, Trevor Brown, Justin Kopinsky, Jerry Li, Giorgi Nadiradze

to appear, SPAA 2018

**Mixture Models, Robustness, and Sum of Squares Proofs**

Samuel B. Hopkins, Jerry Li

to appear, STOC 2018

**Robustly Learning a Gaussian: Getting Optimal Error, Efficiently**

Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart

SODA 2018

**Communication-Efficient Distributed Learning of Discrete Distributions**

Ilias Diakonikolas, Elena Grigorescu, Jerry Li, Abhiram Natarajan, Krzysztof Onak, Ludwig Schmidt

NIPS 2017, **Oral Presentation**

**QSGD: Communication-Optimal Stochastic Gradient Descent, with Applications to Training Neural Networks**

Dan Alistarh, Demjan Grubić, Jerry Li, Ryota Tomioka, Milan Vojnovic

preliminary version in OPT 2016

NIPS 2017, **Spotlight Presentation**

**Invited for presentation at NVIDIA GTC**

[code] [poster] [video]

**Being Robust (in High Dimensions) can be Practical**

Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart

ICML 2017

[code]

**ZipML: An End-to-end Bitwise Framework for Dense Generalized Linear Models**

(by contribution) Hantian Zhang*, Jerry Li*, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang

*equal contribution

ICML 2017

**The Power of Choice in Priority Scheduling**

Dan Alistarh, Justin Kopinsky, Jerry Li, Giorgi Nadiradze

PODC 2017

**Robust Sparse Estimation Tasks in High Dimensions**

Jerry Li

COLT 2017

merged with this paper

**Robust Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities**

Jerry Li, Ludwig Schmidt.

COLT 2017

**Sample Optimal Density Estimation in Nearly-Linear Time**

Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt.

SODA 2017

TCS+ talk by Ilias, which discusses the piecewise polynomial framework and our results at a high level.

**Robust Estimators in High Dimensions, without the Computational Intractability**

Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart

FOCS 2016

**Invited to Highlights of Algorithms 2017**

**Invited to appear in special issue of SIAM Journal on Computing for FOCS 2016.**

MIT News, USC Viterbi News

**Fast Algorithms for Segmented Regression**

Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt

ICML 2016

**Replacing Mark Bits with Randomness in Fibonacci Heaps**

Jerry Li, John Peebles.

ICALP 2015

**Fast and Near-Optimal Algorithms for Approximating Distributions by Histograms**

Jayadev Acharya, Ilias Diakonikolas, Chinmay Hegde, Jerry Li, Ludwig Schmidt.

PODS 2015

**The SprayList: A Scalable Relaxed Priority Queue**

Dan Alistarh, Justin Kopinsky, Jerry Li, Nir Shavit.

PPoPP 2015, **Best Artifact Award**

See also the full version

[code]

Slashdot, MIT News

**On the Importance of Registers for Computability**

Rati Gelashvili, Mohsen Ghaffari, Jerry Li, Nir Shavit.

OPODIS 2014

The following two papers are subsumed by the journal paper below.

**Model Counting of Query Expressions: Limitations of Propositional Methods**

Paul Beame, Jerry Li, Sudeepa Roy, Dan Suciu.

ICDT 2014

**Invited to appear in special issue of ACM Transactions on Database Systems for ICDT 2014.**

**Lower bounds for exact model counting and applications in probabilistic databases**

Paul Beame, Jerry Li, Sudeepa Roy, and Dan Suciu.

UAI 2013, selected for plenary presentation.

**Exact Model Counting of Query Expressions: Limitations of Propositional Methods**

Paul Beame, Jerry Li, Sudeepa Roy, Dan Suciu.

ACM Transactions on Database Systems (TODS), Vol. 42, Issue 1, pages 1:1-1:46, March 2017.

**Efficient training of neural networks**

Dan Alistarh, Jerry Li, Ryota Tomioka, Milan Vojnovic

in submission

**The SprayList: A Scalable Relaxed Priority Queue**

Jerry Li.

Master's thesis

**Solutions of the Stochastic Dirichlet Problem**

Jerry Li.

My undergraduate thesis, a literature review of the basics of stochastic calculus.

**Tracking Serial Criminals with a Road Metric**

Mark Bun, Jerry Li, Ian Zemke.

Our 2010 MCM submission, which was awarded an Outstanding Winner prize (the top prize).

**Robustly Learning a Gaussian in High Dimensions: Getting Optimal Error, Efficiently**

SODA 2018, January 2018

**Mixture Models, Robustness, and Sum-of-Squares Proofs**

Microsoft Research Redmond, December 2017

MIT Algorithms and Complexity Seminar, November 2017

**QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding**

NIPS 2017, December 2017

**Being Robust (in High Dimensions) can be Practical**

ICML 2017, August 2017

**Robust Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities**

COLT 2017, July 2017

**Efficient Robust Sparse Estimation in High Dimensions**

COLT 2017, July 2017. Joint with Simon Du.

**Robust Estimators in High Dimensions without the Computational Intractability** [slides]

**Quantized Stochastic Gradient Descent**

MIT ML Tea, October 2016

**Fast Algorithms for Segmented Regression** [slides]

ICML 2016 [video]

**Fast and Near-Optimal Algorithms for Approximating Distributions by Histograms** [slides]

PODS 2015

**Model Counting of Query Expressions: Limitations of Propositional Methods** [slides]

ICDT 2014

MIT Theory Lunch, 2014

TA for 6.852, Distributed Algorithms, Fall 2014.

TA for the UW Math REU under Dr. James Morrow, Summer 2013.

TA for MATH 334/5/6, Advanced Accelerated Second Year Honors Calculus, 2012-2013.

TA for CS 373, Algorithms and Data Structures, Spring 2012.

TA for CS 344, Databases, Winter 2012.

I'm participating in Algorithms Office Hours. If you're affiliated with MIT, and have algorithmic questions, please contact us!

I am on the steering committee for SLOGN*

I organized the Great Ideas in Theoretical Computer Science (aka theory lunch) in the 2013-2014 academic year.

I stole the boombox from the Glorious Office 3 times, then promptly lost it back each time.