Modeling with structured low-rank matrices is ubiquitous in applied statistical problems arising in science and engineering. In this talk, I will describe two such problems from modern applied statistics, with a focus on statistical modeling and scalable computational algorithms.
The first part of the talk describes a general recommender system problem, where the task is to predict a user's rating of an item based on (a) their ratings of other items and similar decisions made by other users, and (b) additional meta-features characterizing properties of items and/or users. Under the assumption that the underlying latent factors driving the ratings are low-rank, I will present a disciplined optimization framework for this task, along with scalable algorithms that leverage first-order methods in convex optimization and novel numerical linear algebra techniques.
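As a minimal sketch of one first-order method in this spirit (not necessarily the algorithm presented in the talk), consider soft-thresholded SVD iterations for nuclear-norm-regularized matrix completion; the function name `soft_impute` and the regularization parameter `lam` are illustrative assumptions:

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=100):
    """Iteratively fill missing entries and soft-threshold singular values.

    X    : ratings matrix (arbitrary values where mask is False)
    mask : boolean array, True where an entry is observed
    lam  : soft-thresholding level on singular values (illustrative)
    """
    Z = np.zeros_like(X)
    for _ in range(n_iters):
        # Keep observed entries from the data; impute the rest from Z
        filled = np.where(mask, X, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)  # shrink singular values toward zero
        Z = (U * s) @ Vt              # low-rank update
    return Z

# Tiny demo: a rank-1 matrix with ~30% of entries hidden
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 1)) @ rng.normal(size=(1, 5))
mask = rng.random(M.shape) < 0.7
Z = soft_impute(M, mask, lam=0.05)
```

Each iteration costs one SVD of the current imputed matrix; the scalability techniques alluded to above concern precisely how to avoid forming and factorizing such dense matrices at large scale.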
The second part of the talk revisits the classical low-rank Factor Analysis (FA) problem, widely used in statistics, econometrics and psychometrics, through a modern optimization lens. Here, one seeks to approximate an observed covariance matrix ($\Sigma$) by the sum of a low-rank Positive Semidefinite (PSD) component ($\Theta$) and a diagonal matrix ($\Phi$) with nonnegative entries, subject to both $\Sigma - \Phi$ and $\Sigma - \Theta$ being PSD. I will describe a flexible family of rank-constrained, nonlinear Semidefinite Optimization-based formulations for this task, introduce a novel reformulation of the general rank-constrained FA estimation problem, and present a unified algorithmic framework for the proposed formulations.
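With the notation above, the decomposition can be written as a rank-constrained semidefinite problem; the squared Frobenius loss below is one illustrative choice of discrepancy measure, not necessarily the one used in the talk:

```latex
\begin{equation*}
\begin{aligned}
\min_{\Theta,\,\Phi}\quad & \bigl\| \Sigma - (\Theta + \Phi) \bigr\|_F^2 \\
\text{s.t.}\quad & \Theta \succeq 0,\quad \operatorname{rank}(\Theta) \le r, \\
& \Phi = \operatorname{diag}(\phi_1,\dots,\phi_p),\quad \phi_i \ge 0, \\
& \Sigma - \Phi \succeq 0,\quad \Sigma - \Theta \succeq 0.
\end{aligned}
\end{equation*}
```

The rank constraint on $\Theta$ is what makes the problem nonconvex and motivates the reformulation and algorithmic framework mentioned above.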
(The first part of the talk is joint work with Trevor Hastie and Will Fithian (Stanford); the second part is joint work with Dimitris Bertsimas (MIT).)