Dhruv Rohatgi

Hi! I'm a fourth-year graduate student in EECS at MIT. I'm very fortunate to be advised by Ankur Moitra. My current research interests are in computational statistics and reinforcement learning theory.

I am grateful to have been supported by an NDSEG Fellowship and an MIT Akamai Presidential Fellowship.

To reach me: [first initial][last name]@mit.edu

Publications and Preprints

  1. Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning
    with Ankur Moitra and Noah Golowich
    FOCS 2024 ♦ 65th Annual IEEE Symposium on Foundations of Computer Science (to appear).
  2. Lasso with Latents: Efficient Estimation, Covariate Rescaling, and Computational-Statistical Gaps
    with Jonathan Kelner, Frederic Koehler, and Raghu Meka
    COLT 2024 ♦ 37th Annual Conference on Learning Theory (to appear).
  3. Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles
    with Ankur Moitra and Noah Golowich
    STOC 2024 ♦ 56th Annual ACM Symposium on Theory of Computing (to appear).
  4. Provable Benefits of Score Matching
    with Chirag Pabbaraju, Anish Sevekari, Holden Lee, Ankur Moitra, and Andrej Risteski
    NeurIPS 2023 (Spotlight Presentation) ♦ 37th Conference on Neural Information Processing Systems.
  5. Feature Adaptation for Sparse Linear Regression
    with Jonathan Kelner, Frederic Koehler, and Raghu Meka
    NeurIPS 2023 (Spotlight Presentation) ♦ 37th Conference on Neural Information Processing Systems.
  6. Learning in Observable POMDPs, without Computationally Intractable Oracles
    with Ankur Moitra and Noah Golowich
    NeurIPS 2022 ♦ 36th Conference on Neural Information Processing Systems.
  7. Provably Auditing Ordinary Least Squares in Low Dimensions
    with Ankur Moitra
    ICLR 2023 ♦ 11th International Conference on Learning Representations.
  8. Distributional Hardness Against Preconditioned Lasso via Erasure-Robust Designs
    with Jonathan Kelner, Frederic Koehler, and Raghu Meka
    NeurIPS 2022 ♦ 36th Conference on Neural Information Processing Systems.
  9. Planning in Observable POMDPs in Quasipolynomial Time
    with Ankur Moitra and Noah Golowich
    STOC 2023 ♦ 55th Annual ACM Symposium on Theory of Computing.
  10. Robust Generalized Method of Moments: A Finite Sample Viewpoint
    with Vasilis Syrgkanis
    Preliminary version selected for Oral Presentation at MLECON Workshop (NeurIPS 2021).
    NeurIPS 2022 ♦ 36th Conference on Neural Information Processing Systems.
  11. On the Power of Preconditioning in Sparse Linear Regression
    with Jonathan Kelner, Frederic Koehler, and Raghu Meka
    FOCS 2021 ♦ 62nd Annual IEEE Symposium on Foundations of Computer Science.
  12. Truncated Linear Regression in High Dimensions
    with Costis Daskalakis and Manolis Zampetakis
    NeurIPS 2020 ♦ 34th Conference on Neural Information Processing Systems.
  13. Constant-Expansion Suffices for Compressed Sensing with Generative Priors
    with Costis Daskalakis and Manolis Zampetakis
    NeurIPS 2020 (Spotlight Presentation) ♦ 34th Conference on Neural Information Processing Systems.
  14. Regarding two conjectures on clique and biclique partitions
    with John C. Urschel and Jake Wellens
    The Electronic Journal of Combinatorics (2021).
  15. Near-Optimal Bounds for Online Caching with Machine-Learned Advice
    SODA 2020 ♦ 31st Annual ACM-SIAM Symposium on Discrete Algorithms.
  16. Conditional Hardness for Approximate Earth Mover Distance
    APPROX 2019 ♦ 22nd International Conference on Approximation Algorithms for Combinatorial Optimization Problems.
  17. Off-diagonal Ordered Ramsey Numbers of Matchings
    The Electronic Journal of Combinatorics (2019).

Website design adapted from Frederic Koehler and Manolis Zampetakis.