Welcome to my page! I am a PhD candidate at the MIT Operations Research Center, advised by Rahul Mazumder. My research lies at the intersection of machine learning and optimization. In particular, I develop fast optimization algorithms for more interpretable yet expressive machine learning models. For instance, check out our sparse learning toolkit L0Learn.
Before coming to MIT, I earned my master's at UIUC, where I worked with ChengXiang Zhai on improving information recall in search engines. I also did a research internship with the Core Machine Learning team at Amazon, where my work with Charles Elkan reduced the time for serving machine learning models by $\sim$ 3000x.
PhD in Operations Research, 2021
Massachusetts Institute of Technology
MS in Computer Science, 2016
University of Illinois at Urbana-Champaign
BE in Electrical & Computer Eng., 2014
American University of Beirut
L0Learn fits regularization paths for L0-regularized regression. Specifically, it can solve the following class of problems for $q \in \{1, 2\}$: $$ \min_{\beta} \frac{1}{2} || y - X \beta ||^2 + \lambda ||\beta||_0 + \gamma||\beta||_q^q,$$ over a two-dimensional grid of $\lambda$ and $\gamma$ values. Path-wise optimization can be done using either cyclic coordinate descent or local combinatorial search. The core of the toolkit is implemented in C++ and employs many computational tricks and heuristics, leading to running times that are very competitive with those of popular Lasso solvers.
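To make the coordinate descent approach concrete, here is a minimal NumPy sketch of cyclic coordinate descent for the $q = 2$ (L0L2) case of the objective above. For each coordinate, the exact minimizer reduces to a hard-threshold test: set $\beta_j$ to the ridge-adjusted least-squares update only if doing so lowers the objective by more than $\lambda$. This is a toy illustration under our own naming, not L0Learn's optimized C++ implementation or its API.

```python
import numpy as np

def l0l2_coordinate_descent(X, y, lam, gam, n_iters=100):
    """Cyclic coordinate descent for
       0.5*||y - X b||^2 + lam*||b||_0 + gam*||b||_2^2  (illustrative sketch)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                      # residual y - X @ beta, updated incrementally
    col_sq = (X ** 2).sum(axis=0)     # per-column squared norms
    for _ in range(n_iters):
        for j in range(p):
            # Remove coordinate j's contribution to get the partial residual.
            r += X[:, j] * beta[j]
            rho = X[:, j] @ r
            denom = col_sq[j] + 2.0 * gam
            # The coordinate-wise minimum is nonzero iff beta_j = rho/denom
            # decreases the objective by more than lam, i.e. rho^2 > 2*lam*denom.
            beta[j] = rho / denom if rho ** 2 > 2.0 * lam * denom else 0.0
            r -= X[:, j] * beta[j]
    return beta

# Toy demo: recover a 2-sparse signal from noisy measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[[3, 11]] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = l0l2_coordinate_descent(X, y, lam=5.0, gam=0.01)
```

Because each coordinate step minimizes the objective exactly in that coordinate, the objective is non-increasing across sweeps; the C++ core of L0Learn builds on this basic scheme with additional heuristics and, optionally, local combinatorial search.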
We provide an easy-to-use R interface for L0Learn. For more information on installation and usage, please check L0Learn’s Vignette.