This project is about moment-matching algorithms for topic models. Thanks to their computational speed and theoretical guarantees, these algorithms have recently emerged as strong competitors to approximate inference techniques for graphical models, such as variational inference and Gibbs sampling.

In brief, moment-matching algorithms construct moment / cumulant tensors from the data and match them to their theoretical expressions in order to learn model parameters such as the topic matrix. The construction of these algorithms is twofold: (a) derivation of the moment / cumulant tensors and (b) design of joint-diagonalization-type algorithms for matching the tensors. The two steps can be carried out independently and then combined. In our recent paper

A. Podosinnikova, F. Bach, S. Lacoste-Julien. Rethinking LDA: Moment Matching for Discrete ICA. NIPS, 2015.

we analyze both steps. For the tensor step, we focus on the latent Dirichlet allocation (LDA) and discrete independent component analysis (DICA) models. Importantly, we show that the latter model is closely related, and in some cases equivalent, to the former, and that its cumulants have potentially better sample complexity than the LDA moments. For the algorithmic step, we demonstrate that the joint diagonalization algorithm outperforms the tensor power method and the spectral algorithm, which were previously proposed for this problem. Although we work with tensors, we obtain fast algorithms by projecting these tensors onto vectors.
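As a toy illustration of the tensor step (and not the estimators from the paper), the simplest such statistic is the empirical second-order word co-occurrence moment computed from bag-of-words counts; its population counterpart factors through the topic matrix, which is what moment matching exploits. The `cooccurrence_moment` helper below is a hypothetical name introduced just for this sketch:

```python
import numpy as np

def cooccurrence_moment(X):
    """Empirical second-order word co-occurrence moment.

    X: (n_docs, vocab_size) matrix of word counts. For a document with
    count vector x and length L, the average over distinct token pairs
    is (x x^T - diag(x)) / (L (L - 1)); we then average over documents.
    Toy illustration only; the paper works with refined LDA-moment /
    DICA-cumulant estimators.
    """
    V = X.shape[1]
    M = np.zeros((V, V))
    for x in X:
        L = x.sum()
        if L < 2:
            continue  # no word pairs in documents of length < 2
        M += (np.outer(x, x) - np.diag(x)) / (L * (L - 1))
    return M / X.shape[0]
```

Each per-document term sums to one over distinct token pairs, so the averaged matrix is a distribution over word pairs; matching it (and its third-order analogue) to the model's theoretical expression is what constrains the topic matrix.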

Our code is available. It contains implementations of the joint diagonalization algorithm (Matlab/C++), the tensor power method (Matlab), and the spectral algorithm (Matlab), for both the LDA moments and the DICA cumulants. If you are interested in reproducing our experiments and/or obtaining our datasets, check this repo. We used this Matlab/MEX/C implementation of LDA. If you are only interested in our Matlab/MEX/C++ implementation of the joint diagonalization algorithm, it is available here.
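The tensor-projection trick mentioned above can be sketched numerically in a stylized setting with orthonormal components (real topic vectors are not orthogonal and would first require a whitening step); everything below is illustrative NumPy, not our released code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic symmetric third-order tensor T = sum_k w_k a_k (x) a_k (x) a_k
# with orthonormal components a_k (a stylized, already-whitened setting).
d, K = 5, 3
A = np.linalg.qr(rng.standard_normal((d, K)))[0]  # orthonormal columns a_k
w = np.array([1.0, 2.0, 3.0])
T = np.einsum('k,ik,jk,lk->ijl', w, A, A, A)

# Projecting the tensor onto a vector v gives the matrix T(I, I, v),
# whose eigenvectors are the a_k, with eigenvalues w_k * <a_k, v>.
v = rng.standard_normal(d)
M = np.einsum('ijl,l->ij', T, v)
eigvals, eigvecs = np.linalg.eigh(M)

# The K eigenvectors with largest |eigenvalue| recover the a_k up to sign.
idx = np.argsort(-np.abs(eigvals))[:K]
A_hat = eigvecs[:, idx]
```

Projecting onto several vectors yields a family of matrices sharing the same eigenvectors, and jointly diagonalizing that family, rather than eigendecomposing a single projection, is what makes the approach robust in practice.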

Other versions of this paper: arXiv and HAL.

This project is about a new algorithm (TRPCA) for robust principal component analysis (PCA). PCA is a widely used dimensionality-reduction tool in machine learning, statistics, and data analysis. However, it is well known that PCA is strongly affected by outliers, and much effort has been devoted to robustifying it. The main idea behind TRPCA is to minimize the trimmed reconstruction error over the Stiefel manifold, which yields a parameter-free and computationally fast algorithm. More details can be found on the project's web page or in the paper
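To make the idea concrete, here is a minimal alternating sketch of minimizing a trimmed reconstruction error, assuming centered data; the function name `trimmed_pca`, the simple alternation, and all parameters are illustrative assumptions and do not reproduce the TRPCA algorithm from the paper:

```python
import numpy as np

def trimmed_pca(X, k, m, n_iter=50, seed=0):
    """Illustrative trimmed-reconstruction-error PCA (not the paper's TRPCA).

    X: (n, d) centered data; k: subspace dimension; m: number of points
    treated as inliers (the trimming level). Alternates between
    (1) selecting the m points with the smallest reconstruction error
    under the current orthonormal basis U, and (2) refitting U as the
    top-k principal directions of those points.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = np.linalg.qr(rng.standard_normal((d, k)))[0]  # point on the Stiefel manifold
    for _ in range(n_iter):
        err = ((X - X @ U @ U.T) ** 2).sum(axis=1)  # per-point reconstruction error
        inliers = np.argsort(err)[:m]               # trim the largest errors
        _, _, Vt = np.linalg.svd(X[inliers], full_matrices=False)
        U = Vt[:k].T
    return U
```

Because the sum of the m smallest reconstruction errors ignores the worst-fitting points, a few gross outliers cannot pull the subspace away, unlike in standard PCA; the orthonormality constraint on U is what places the optimization on the Stiefel manifold.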

A. Podosinnikova, S. Setzer, M. Hein. Robust PCA: Optimization of the Robust Reconstruction Error over the Stiefel Manifold. GCPR, 2014.

The code is available on GitHub.