My research group works at the interface of Machine Learning, Statistics, and Optimization. We are interested in formalizing the process of learning, in analyzing learning models, and in deriving and implementing the resulting learning methods. A significant thrust of our research is on developing theoretical and algorithmic tools for online prediction, decision-making, and reinforcement learning. Our recent interests include reinforcement learning and decision making, neural networks and overparametrized models, and large language models.

The research group currently focuses on:

  1. Reinforcement Learning and Decision Making: In these problems, data are collected in an active manner and feedback is limited. Our work focuses on understanding sample complexity, on developing computationally efficient methods, and on bridging supervised learning and decision making. In the last few years, we have developed a theory of decision making based on the Decision-Estimation Coefficient (DEC), a quantity that governs the sample complexity of RL and interactive decision making (see course notes for an introduction). Recent interests include reinforcement learning for large language models, in the context of fine-tuning and reasoning.
  2. Online Learning: We aim to develop robust prediction methods that do not rely on the i.i.d. or stationary nature of data. Compared to the well-studied setting of Statistical Learning, methods that predict in an online fashion are arguably more intricate, and their analysis is nontrivial. This field has beautiful connections to Statistical Learning and the theory of empirical processes.
  3. Statistical Learning: We study the problem of learning with i.i.d. data. Our recent work has focused on understanding the phenomenon of benign overfitting in overparametrized models. Our current interests include diffusion models, as well as bridging classical statistical ideas with new techniques in decision making and interactive learning.
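
For readers curious about the DEC mentioned above, one common (offset) form of the Decision-Estimation Coefficient, due to Foster, Kakade, Qian, and Rakhlin, can be sketched as follows; the specific notation here (model class $\mathcal{M}$, reference model $\overline{M}$, decision space $\Pi$, scale parameter $\gamma$) is borrowed from that line of work rather than fixed by this page:

```latex
\[
\mathsf{dec}_{\gamma}(\mathcal{M}, \overline{M})
  \;=\; \inf_{p \in \Delta(\Pi)} \; \sup_{M \in \mathcal{M}} \;
  \mathbb{E}_{\pi \sim p}\Big[
      f^{M}(\pi_{M}) - f^{M}(\pi)
      \;-\; \gamma \, D_{\mathsf{H}}^{2}\big(M(\pi), \overline{M}(\pi)\big)
  \Big]
\]
```

Here $f^{M}(\pi)$ is the mean reward of decision $\pi$ under model $M$, $\pi_{M}$ is the optimal decision for $M$, and $D_{\mathsf{H}}^{2}$ is the squared Hellinger distance; the quantity trades off exploration (suboptimality) against information gained about the underlying model.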
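
As a minimal illustration of the online setting (a textbook sketch, not code from the group), the classical exponential weights algorithm for prediction with expert advice makes no i.i.d. assumption on the losses; the function name and parameters below are illustrative:

```python
import numpy as np

def exponential_weights(loss_matrix, eta=0.5):
    """Prediction with expert advice via exponential weights.

    loss_matrix: (T, K) array, where loss_matrix[t, k] is the loss of
    expert k at round t (losses assumed to lie in [0, 1]).
    Returns the learner's expected cumulative loss.
    """
    T, K = loss_matrix.shape
    log_w = np.zeros(K)  # log-weights, kept in log space for stability
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                       # distribution over experts
        total_loss += p @ loss_matrix[t]   # learner's expected loss
        log_w -= eta * loss_matrix[t]      # multiplicative update
    return total_loss
```

Against an arbitrary (even adversarially chosen) loss sequence, a suitably tuned version of this update guarantees regret of order $\sqrt{T \log K}$ relative to the best single expert in hindsight.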