Algorithms & Complexity Seminar, MIT : 2017-18

Organizers: Akshay Degwekar, Pritish Kamath, Govind Ramnarayan

The Algorithms & Complexity Seminar for the 2017-18 year will usually (unless otherwise stated) meet on Wednesdays 4pm-5pm in 32-G575 (Theory Lab on the 5th floor of the Stata Center). The style and format of these meetings are variable. Please feel free to contact the organizers to find out more details. To receive announcements of upcoming talks, subscribe to the mailing list by either visiting the mailman page or sending an empty email to compalgsem-subscribe@lists.csail.mit.edu.

Upcoming Talks

• Wednesday, October 25, 2017: Pritish Kamath (MIT). [Time: 4:30-5:30pm]
Topic. Non-Interactive Agreement & Dimension Reduction for Polynomials
Abstract. The "Non-interactive Agreement Distillation" problem, specified by a joint distribution $P(x,y)$ and a target alphabet size $k$, is defined as follows: Two players, Alice and Bob, observe sequences $(X_1, ... , X_n)$ and $(Y_1, ... , Y_n)$ respectively where $\{(X_i, Y_i)\}$ are drawn i.i.d. from $P(x,y)$. Both players look at their share of randomness, and output an element of $[k]$. Their goal is to maximize the probability that their outputs are the same, while ensuring that their outputs are marginally uniform.
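To make the setup concrete, here is a small Monte Carlo sketch (an illustration only, not from the talk) for the doubly symmetric binary source with correlation $\rho$ and target alphabet $k = 2$, using the simple strategy in which each player outputs its first bit; the function names are mine:

```python
import random

def sample_dsbs(rho, n, rng):
    """Draw n i.i.d. pairs from the doubly symmetric binary source:
    X_i is uniform in {0, 1}; Y_i equals X_i with probability (1 + rho) / 2."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.randint(0, 1)
        y = x if rng.random() < (1 + rho) / 2 else 1 - x
        xs.append(x)
        ys.append(y)
    return xs, ys

def estimate_agreement(rho, n, trials, seed=0):
    """Estimate Pr[Alice's output == Bob's output] for the trivial k = 2
    strategy in which each player outputs its first coordinate.
    Each output is marginally uniform, as the problem requires."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        xs, ys = sample_dsbs(rho, n, rng)
        agree += xs[0] == ys[0]
    return agree / trials

# For this particular strategy the agreement probability is exactly (1 + rho) / 2;
# the interesting question is how much better a clever k-partition can do.
est = estimate_agreement(rho=0.6, n=5, trials=20000)
```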

Given $P(x,y)$ and $k$, what is the largest correlation (or agreement probability) that the players can achieve? In the special case where $P(x,y)$ is the distribution of $\rho$-correlated Gaussians, the largest achievable correlation is also known as the "Optimal Noise-Stability of a $k$-partition" and has received wide interest in Computer Science. More generally, it is also related to the notions of common information studied in Information Theory.

It turns out that this value is not well understood, even in the special case of correlated Gaussians! Moreover, this value is a priori not even "computable", as there is no immediate upper bound on the number of samples the players need to draw in order to achieve the best possible correlation!

In this work, we obtain explicit (computable) bounds on the number of samples needed to get $\varepsilon$-close to the maximum achievable correlation (or even for the more general problem of "non-interactive simulation"). This implies computability of the maximum achievable correlation. [Our bounds significantly improve upon the results obtained recently by De, Mossel & Neeman, using fundamentally different techniques.]

At the heart of our result is a new technique that we call "Dimension Reduction for Polynomials". It is analogous to the Johnson-Lindenstrauss lemma, which is a dimension reduction for vectors. This technique is mostly elementary, and we believe it to be of independent interest.
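For comparison, the Johnson-Lindenstrauss lemma itself admits a short sketch (my own illustration of the vector case, not the talk's polynomial variant): a random Gaussian projection to $m \ll d$ dimensions approximately preserves norms, and hence pairwise distances.

```python
import math, random

def jl_project(vectors, m, seed=0):
    """Map d-dimensional vectors to m dimensions with a random Gaussian
    matrix scaled by 1/sqrt(m) -- a standard Johnson-Lindenstrauss map."""
    rng = random.Random(seed)
    d = len(vectors[0])
    A = [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(d)] for _ in range(m)]
    return [[sum(row[j] * v[j] for j in range(d)) for row in A] for v in vectors]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Norms survive the projection up to a small multiplicative error,
# even though the target dimension m is much smaller than d.
rng = random.Random(1)
d, m = 1000, 400
v = [rng.gauss(0, 1) for _ in range(d)]
w = jl_project([v], m)[0]
ratio = norm(w) / norm(v)
```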

This talk will discuss the motivational aspects of the problem and its moral and technical connections to other problems in computer science, and will mainly focus on the new technique of "dimension reduction for polynomials".

• Wednesday, November 1, 2017: Andrej Risteski (MIT).
Topic. TBA
Abstract. TBA

Past Talks

• Wednesday, September 6, 2017: Dor Minzer (Tel Aviv University).
Topic. An approach for 2-to-1 Games Conjecture via expansion on the Grassmann Graph
Abstract. The PCP theorem characterizes the computational class NP in a way that makes it possible to prove that approximation problems are NP-hard.

One of the fundamental open questions in PCP theory is whether a special type of PCP, namely 2-to-1 Games, is still NP-hard. This conjecture is a variant of Khot's well-known Unique Games conjecture.

A recent line of study suggests a new attack on the 2-to-1 games conjecture (albeit with imperfect completeness). This approach is based on a mathematical object called the Grassmann Graph, and relies on an unproven combinatorial hypothesis on it. This, in turn, leads to the study of edge expansion in this graph. More specifically, to the study of sets of vertices in the Grassmann Graph, whose edge expansion is not optimal.
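Concretely, the Grassmann graph has the $k$-dimensional subspaces of $\mathbb{F}_2^n$ as its vertices, with two subspaces adjacent when their intersection has dimension $k-1$. A small sketch (an illustration only, not code from the talk) that builds the graph for $n = 4$, $k = 2$:

```python
from itertools import combinations, product

def span(basis, n):
    """All F_2-linear combinations of the basis vectors (as bit tuples)."""
    space = {tuple([0] * n)}
    for v in basis:
        space |= {tuple(a ^ b for a, b in zip(u, v)) for u in space}
    return frozenset(space)

def grassmann_graph(n, k):
    """Vertices: the k-dimensional subspaces of F_2^n, each stored as the
    frozenset of its 2^k vectors.  Edges: pairs of subspaces whose
    intersection has dimension k - 1, i.e. 2^(k-1) common vectors."""
    nonzero = [v for v in product([0, 1], repeat=n) if any(v)]
    subspaces = {S for basis in combinations(nonzero, k)
                 for S in [span(basis, n)] if len(S) == 2 ** k}
    verts = sorted(subspaces, key=sorted)
    edges = [(i, j) for i, j in combinations(range(len(verts)), 2)
             if len(verts[i] & verts[j]) == 2 ** (k - 1)]
    return verts, edges

# Gr(4, 2) over F_2: 35 subspaces, and every vertex has exactly 18 neighbors.
verts, edges = grassmann_graph(4, 2)
```

The expansion questions in the talk concern vertex sets of this graph (for large $n$) whose edge boundary is smaller than optimal.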

It turns out that the study of the combinatorial hypothesis is in fact equivalent to the study of edge expansion (Barak, Kothari). Thus, extending these results would complete the approach, proving that it is NP-hard to distinguish 2-to-1 games with value close to 1 from 2-to-1 games with arbitrarily small value.

This talk discusses this line of study.

Based on joint works with Irit Dinur, Subhash Khot, Guy Kindler and Muli Safra.

• Wednesday, September 27, 2017: Paul Hand (Rice U.). [Room: 32-D463 (Star)]
Topic. Deep Compressed Sensing
Abstract. Combining principles of compressed sensing with deep neural network-based generative image priors has recently been empirically shown to require 10X fewer measurements than traditional compressed sensing in certain scenarios. As deep generative priors (such as those obtained via generative adversarial training) improve, analogous improvements in the performance of compressed sensing and other inverse problems may be realized across the imaging sciences.

In joint work with Vladislav Voroninski, we provide a theoretical framework for studying inverse problems subject to deep generative priors. In particular, we prove that with high probability, the non-convex empirical risk objective for enforcing random deep generative priors subject to compressive random linear observations of the last layer of the generator has no spurious local minima, and that for a fixed network depth, these guarantees hold at order-optimal sample complexity.

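The objective in question, $\min_z \|A G(z) - y\|^2$, can be sketched on a toy instance (an illustration under simplifying assumptions: a one-layer ReLU "generator" rather than the multilayer networks the theorem covers, and plain gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: a one-layer ReLU generator G(z) = relu(W z) mapping a
# k-dimensional latent code to a d-dimensional signal, observed through a
# compressive random measurement matrix A with m < d rows.
k, d, m = 5, 50, 25
W = rng.normal(size=(d, k)) / np.sqrt(k)
A = rng.normal(size=(m, d)) / np.sqrt(m)

def G(z):
    return np.maximum(W @ z, 0.0)

z_true = rng.normal(size=k)
y = A @ G(z_true)                      # compressive observations of G(z_true)

# Minimize the non-convex empirical risk f(z) = ||A G(z) - y||^2 by plain
# gradient descent; grad f = 2 W^T [1{Wz > 0} * (A^T (A G(z) - y))].
z = rng.normal(size=k)
residual0 = np.linalg.norm(A @ G(z) - y)
lr = 0.002
for _ in range(10000):
    pre = W @ z
    r = A @ np.maximum(pre, 0.0) - y
    z = z - lr * 2.0 * (W.T @ ((pre > 0) * (A.T @ r)))
residual = np.linalg.norm(A @ G(z) - y)
```

The paper's landscape result is what justifies attacking such a non-convex objective with first-order methods at all; this sketch only shows the objective and the descent step, not the guarantee.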
• Thursday, September 28, 2017: Yuval Dagan (Technion). [Time: 3-4pm]
Topic. Trading information complexity for error
Abstract. We consider the standard two-party communication model. The central problem studied in this article is how much one can save in information complexity by allowing an error. Our major result is that the $\varepsilon$-error randomized communication complexity of set disjointness is $n[C_{\mathrm{DISJ}} - \Theta(h(\varepsilon))] + o(n)$, where $C_{\mathrm{DISJ}} \approx 0.4827$ is the constant of set disjointness found by Braverman et al.
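Here $h(\varepsilon)$ denotes the binary entropy function. A quick sketch (illustration only; the constant hidden inside $\Theta(\cdot)$ is left symbolic) of how the per-coordinate saving behaves:

```python
import math

def binary_entropy(eps):
    """h(eps) = -eps*log2(eps) - (1 - eps)*log2(1 - eps), with h(0) = h(1) = 0."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)

# Constant of set disjointness (Braverman et al.), to four decimal places.
C_DISJ = 0.4827

# The theorem says the per-coordinate cost is C_DISJ - Theta(h(eps)):
# since h(eps) -> 0 as eps -> 0, a vanishing error tolerance buys only a
# vanishing per-coordinate saving, but the saving kicks in quickly
# because h has infinite slope at 0.
per_coordinate_saving = {eps: binary_entropy(eps) for eps in (0.001, 0.01, 0.1)}
```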

Joint work with Yuval Filmus, Hamed Hatami and Yaqiao Li.

• Wednesday, October 11, 2017: Lijie Chen (MIT).
Topic. On The Power of Statistical Zero Knowledge
Abstract. We examine the power of statistical zero knowledge proofs (captured by the complexity class $\mathsf{SZK}$) and their variants. First, we give the strongest known relativized evidence that $\mathsf{SZK}$ contains hard problems, by exhibiting an oracle relative to which $\mathsf{SZK}$ (indeed, even $\mathsf{NISZK}$) is not contained in the class $\mathsf{UPP}$, containing those problems solvable by randomized algorithms with unbounded error. This answers an open question of Watrous from 2002 [Aar]. Second, we "lift" this oracle separation to the setting of communication complexity, thereby answering a question of Göös et al. (ICALP 2016). Third, we give relativized evidence that perfect zero knowledge proofs (captured by the class $\mathsf{PZK}$) are weaker than general zero knowledge proofs. Specifically, we exhibit oracles relative to which $\mathsf{SZK}$ does not belong to $\mathsf{PZK}$, $\mathsf{NISZK}$ does not belong to $\mathsf{NIPZK}$, and $\mathsf{PZK}$ is not equal to $\mathsf{coPZK}$. The first of these results answers a question raised in 1991 by Aiello and Håstad (Information and Computation), and the second answers a question of Lovett and Zhang (2016). We also describe additional applications of these results outside of structural complexity.

The technical core of our results is a stronger hardness amplification theorem for approximate degree, which roughly says that composing the gapped-majority function with any function of high approximate degree yields a function with high threshold degree.

Joint work with Adam Bouland, Dhiraj Holden, Justin Thaler, and Prashant Nalini Vasudevan.