My research is at the interface of Machine Learning, Statistics, and Optimization. I am interested in formalizing the process of learning, in analyzing learning models, and in deriving and implementing the resulting learning methods. A significant thrust of my research is the development of theoretical and algorithmic tools for online prediction and decision-making. My recent interests include understanding neural networks and, more generally, learning in overparametrized models.

The research group currently focuses on:

  1. Statistical Learning: We study the problem of building a good predictor from an i.i.d. sample. While much is understood in this classical setting, our current focus is on deep learning models. In particular, we study measures of complexity of neural networks that govern their out-of-sample performance, as well as statistical and computational aspects of interpolation methods and the phenomenon of benign overfitting in overparametrized models (a toy numerical illustration of the latter appears after this list).
  2. Online Learning: We aim to develop robust prediction methods that do not rely on the i.i.d. or stationary nature of the data. In contrast to the well-studied setting of Statistical Learning, methods that predict in an online fashion are arguably more intricate to design and analyze. This field has beautiful connections to Statistical Learning and to the theory of empirical processes.
  3. Contextual Bandits and Reinforcement Learning: In these problems, data are collected in an active manner and feedback is limited. Our work focuses on understanding the sample complexity of such problems and on developing computationally efficient methods. Among the highlights is a recent reduction from Contextual Bandits to Supervised Learning (a sketch of one such reduction appears below). In addition to bridging decision-making and classical machine learning, the new contextual bandit methods perform well in applications.
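
To make the interpolation phenomenon from item 1 concrete, here is a minimal numerical sketch of a minimum-norm least-squares interpolator that fits noisy labels exactly yet still predicts far better than the trivial predictor. The spiked-covariance data model, the dimensions, and the noise level are illustrative assumptions chosen for this sketch, not a setting taken from any particular paper.

```python
import numpy as np

# Toy illustration of benign overfitting: a minimum-norm linear interpolator
# fits noisy labels exactly, yet its test error stays close to the noise level.
# The spiked-covariance model and all constants below are illustrative choices.

rng = np.random.default_rng(0)
n, k, d = 50, 5, 2000           # samples, signal dimensions, total dimensions
sigma = 1.0                     # label noise standard deviation

# Features: k strong directions (variance 1) carrying the signal, plus many
# weak directions (variance tail_var) that end up absorbing the noise.
tail_var = 0.005
scales = np.concatenate([np.ones(k), np.full(d - k, np.sqrt(tail_var))])
w_star = np.concatenate([np.ones(k), np.zeros(d - k)])

def sample(m):
    X = rng.normal(size=(m, d)) * scales
    y = X @ w_star + sigma * rng.normal(size=m)
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(5000)

# Minimum-norm interpolating least squares: w = X^+ y (Moore-Penrose pseudoinverse).
w_hat = np.linalg.pinv(X_train) @ y_train

train_mse = np.mean((X_train @ w_hat - y_train) ** 2)   # essentially zero
test_mse = np.mean((X_test @ w_hat - y_test) ** 2)
null_mse = np.mean(y_test ** 2)                         # risk of predicting zero

print(f"train MSE {train_mse:.1e} | test MSE {test_mse:.2f} | "
      f"null MSE {null_mse:.2f} | noise level {sigma**2:.2f}")
```

In this toy setting the many weak feature directions absorb the label noise, so fitting the training data exactly does not destroy out-of-sample performance: the test error lands near the noise level rather than near the null risk.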
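As a companion illustration for item 3, the following sketch shows one way a contextual bandit method can be driven by a supervised regression oracle: per-action ridge regression predicts rewards from the context, and an inverse-gap-weighting rule converts those predictions into a randomized action choice. The linear-Gaussian environment, the choice of ridge regression as the oracle, and the constants gamma and lam are illustrative assumptions for this sketch, not the specific reduction referenced above.

```python
import numpy as np

# Sketch of a contextual bandit driven by a supervised regression oracle:
# the oracle predicts each action's reward, and inverse-gap weighting turns
# those predictions into a randomized action choice. All constants and the
# simulated environment are illustrative assumptions.

rng = np.random.default_rng(0)
d, n_actions, T = 5, 4, 5000
gamma = 50.0                     # exploration parameter for inverse-gap weighting
lam = 1.0                        # ridge regularization for the oracle

theta = rng.normal(size=(n_actions, d))            # unknown reward parameters

# Per-action online ridge regression oracle.
A = np.stack([lam * np.eye(d)] * n_actions)        # Gram matrices
b = np.zeros((n_actions, d))

total_reward, total_optimal = 0.0, 0.0
for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)            # context

    # Oracle predictions of each action's reward for this context.
    w = np.stack([np.linalg.solve(A[a], b[a]) for a in range(n_actions)])
    pred = w @ x

    # Inverse-gap weighting: actions with nearly-best predictions get more mass.
    best = int(np.argmax(pred))
    p = 1.0 / (n_actions + gamma * (pred[best] - pred))
    p[best] = 0.0
    p[best] = 1.0 - p.sum()

    a_t = rng.choice(n_actions, p=p)
    r_t = theta[a_t] @ x + 0.1 * rng.normal()      # bandit feedback: chosen action only

    # Update the regression oracle with the observed (context, reward) pair.
    A[a_t] += np.outer(x, x)
    b[a_t] += r_t * x

    total_reward += r_t
    total_optimal += np.max(theta @ x)

print(f"average regret per round: {(total_optimal - total_reward) / T:.3f}")
```

The parameter gamma governs the exploration-exploitation trade-off in this sketch: larger values concentrate the action distribution on the empirically best action, while smaller values spread probability mass toward actions whose predicted rewards are close behind.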