About

I'm a Technical Staff member at MIT Lincoln Laboratory working on applications of machine learning to cybersecurity. I completed a PhD in machine learning at MIT, advised by Tamara Broderick; my PhD research focused mainly on issues of approximate computation and robustness. Prior to graduate school, I completed my undergraduate degree at Brown University and worked for a year at Vision Systems, Inc. in Providence, RI.

Refereed Publications

  • Measuring the robustness of Gaussian processes to kernel choice.
    W. T. Stephenson, S. Ghosh, T. D. Nguyen, M. Yurochkin, S. K. Deshpande, and T. Broderick. AISTATS. 2022.
    Paper
  • Can we globally optimize cross-validation loss? Quasiconvexity in ridge regression.
    W. T. Stephenson, Z. Frangella, M. Udell, and T. Broderick. NeurIPS. 2021.
    Paper
  • Approximate cross-validation with low-rank data in high dimensions.
    W. T. Stephenson, M. Udell, and T. Broderick. NeurIPS. 2020.
    Paper
  • Approximate cross-validation for structured models.
S. Ghosh*, W. T. Stephenson*, T. D. Nguyen, S. K. Deshpande, and T. Broderick (* denotes equal contribution). NeurIPS. 2020.
    Paper
  • Approximate cross-validation in high dimensions with guarantees.
    W. T. Stephenson and T. Broderick. AISTATS. 2020.
    Paper
  • A Swiss army infinitesimal jackknife.
    R. Giordano, W. T. Stephenson, R. Liu, M. I. Jordan, and T. Broderick. AISTATS. 2019.
    (Notable paper award) Paper
  • Sensitivity of Bayesian inference to data perturbations.
L. Masoero*, W. T. Stephenson*, and T. Broderick (* denotes equal contribution). Symposium on Advances in Approximate Bayesian Inference. 2018.
    Paper
  • Understanding covariance estimates in expectation propagation.
W. T. Stephenson and T. Broderick. NIPS Workshop on Advances in Approximate Bayesian Inference. 2016.
    Paper
  • Scalable adaptation of state complexity for nonparametric hidden Markov models.
    M. Hughes, W. T. Stephenson, and E. Sudderth. NIPS. 2015.
    Paper

Invited talks

Awards

  • AISTATS 2022, top 10% of peer reviewers
  • NeurIPS 2020, top 10% of peer reviewers
  • NeurIPS 2019, top 30% of peer reviewers