Recent Trends in Theoretical Computer Science

The workshop will bring together leading researchers from different areas of Theoretical Computer Science, with the goal of presenting the latest research to a broad TCS audience from Chicago and surrounding areas and of increasing interaction across these areas.

When: January 31st, 2020
Where: Toyota Technological Institute at Chicago (TTIC)
6045 S Kenwood Ave,
Chicago, IL 60637
Organizers: Arturs Backurs and Sepideh Mahabadi (TTIC)
Register
All researchers interested in theoretical computer science are invited to attend. Attending the workshop is free, but please register via this form before January 15th.
Confirmed Speakers:
• Nima Anari (Stanford University)
• Sebastien Bubeck (Microsoft Research Redmond)
• Arkadev Chattopadhyay (Tata Institute of Fundamental Research)
• Ilias Diakonikolas (University of Wisconsin-Madison)
• Sebastian Forster (University of Salzburg)
• Ankit Garg (Microsoft Research India)
• Nika Haghtalab (Cornell University)
• Prahladh Harsha (Tata Institute of Fundamental Research)
• Piotr Indyk (Massachusetts Institute of Technology)
• Sanjeev Khanna (University of Pennsylvania)
• Dakshita Khurana (University of Illinois at Urbana-Champaign)
• Pravesh Kothari (Carnegie Mellon University)
• Robert Krauthgamer (The Weizmann Institute of Science)
• Marvin Künnemann (Max Planck Institute for Informatics)
• Tengyu Ma (Stanford University)
• Rafael Oliveira (University of Toronto and University of Waterloo)
• Sofya Raskhodnikova (Boston University)
• Noga Ron-Zewi (University of Haifa)
• Thatchaphol Saranurak (Toyota Technological Institute at Chicago)
• Balasubramanian Sivan (Google Research)
• Omri Weinstein (Columbia University)
• Henry Yuen (University of Toronto)
Schedule
8:30 -- 9:00 Registration + Light Breakfast
Session 1
9:00 -- 9:20 Thatchaphol Saranurak A Deterministic Algorithm for Balanced Cut with Applications

Finding a minimum balanced cut is a powerful method for implementing the divide-and-conquer paradigm on graphs. We give a new deterministic algorithm for finding an approximate minimum balanced cut in almost-linear time. Previous algorithms are either randomized or take quadratic time. This implies near-optimal deterministic algorithms for a whole range of important graph problems, including approximate max flow, electric flow, Laplacian solvers, expander decomposition, and dynamic connectivity.

Joint work with Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, and Richard Peng.
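
To fix ideas, here is a brute-force reference implementation of the minimum balanced cut problem itself; the balance parameter 1/3 and the 6-cycle example are our own illustrative assumptions. It runs in exponential time and is only meant to pin down the objective that the talk's almost-linear-time deterministic algorithm approximates.

```python
from itertools import combinations

def min_balanced_cut(n, edges, balance=1/3):
    """Exhaustively find a minimum balanced cut of an n-vertex graph.

    A side S is 'balanced' if min(|S|, n - |S|) >= balance * n.
    Exponential time: a reference point only, not the talk's algorithm.
    """
    best = None
    for size in range(1, n // 2 + 1):
        if min(size, n - size) < balance * n:
            continue  # this side would be too small
        for S in combinations(range(n), size):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            if best is None or cut < best[0]:
                best = (cut, S)
    return best  # (cut size, one side of the cut), or None

# Example: a 6-cycle; the best balanced cut removes 2 edges.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(min_balanced_cut(6, edges))
```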

9:20 -- 9:40 Sanjeev Khanna Sublinear Algorithms for Graph Coloring

In this talk, we will examine a classical graph coloring problem through the lens of sublinear algorithms. It is well-known that any graph with maximum degree Delta admits a proper vertex coloring with (Delta + 1) colors that can be found via a simple sequential greedy algorithm in linear time and space. But can one find such a coloring via a sublinear algorithm? In this talk, we will present results that answer this question in the affirmative for several well-studied models of sublinear computation including graph streaming, sublinear time, and massively parallel computation (MPC). We obtain these results by proving a palette sparsification theorem which says that if every vertex independently samples O(log n) colors, then almost certainly a (Delta + 1) coloring can be found using only the sampled colors. We will show that this palette sparsification theorem naturally leads to essentially optimal sublinear algorithms in each of the above-mentioned models of computation.

This is based on joint work with Sepehr Assadi and Yu Chen.
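
A minimal sketch of the palette sparsification idea, under our own toy assumptions (a small cycle graph, sampling constant c = 5, and a simple greedy completion; the theorem guarantees that a proper coloring from the sampled lists exists with high probability, not that greedy finds it, so the sketch returns None on failure):

```python
import math, random

def palette_sparsification_demo(adj, c=5, seed=0):
    """Illustrate palette sparsification on a graph given as an
    adjacency list {v: set of neighbors}.

    Each vertex samples about c*log(n) colors from {0, ..., Delta};
    we then try to complete a proper coloring using only the sampled
    lists, via a greedy heuristic.
    """
    random.seed(seed)
    n = len(adj)
    Delta = max(len(nbrs) for nbrs in adj.values())
    k = max(1, math.ceil(c * math.log(n + 1)))
    lists = {v: random.sample(range(Delta + 1), min(k, Delta + 1))
             for v in adj}
    color = {}
    for v in adj:  # greedy list coloring from the sampled palettes
        used = {color[u] for u in adj[v] if u in color}
        free = [x for x in lists[v] if x not in used]
        if not free:
            return None  # greedy failed; the theorem is existential
        color[v] = free[0]
    return color

# Example: 8-cycle (Delta = 2, so Delta + 1 = 3 colors suffice).
adj = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
print(palette_sparsification_demo(adj))
```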

9:40 -- 10:00 Sebastian Forster Computing and Testing Small Connectivity in Near-Linear Time and Queries via Fast Local Cut Algorithms

Consider the following "local" cut-detection problem in a directed graph: We are given a seed vertex x and need to remove at most k edges so that at most \nu edges can be reached from x (a "local" cut) or output \bot to indicate that no such cut exists. If we are given query access to the input graph, then this problem can in principle be solved without reading the whole graph and with query complexity depending on k and \nu. In this paper we consider a slack variant of this problem where, when such a cut exists, we can output a cut with up to O(k\nu) edges reachable from x.

We present a simple randomized algorithm spending O(k^2\nu) time and O(k\nu) queries for the above variant, improving in particular a previous time bound of k^{O(k)}\nu by Chechik et al. [SODA '17]. We also extend our algorithm to handle an approximate variant. We demonstrate that these local algorithms are versatile primitives for designing substantially improved algorithms for classic graph problems by providing the following three applications. (1) A randomized algorithm for the classic k-vertex connectivity problem that takes near-linear time when k=polylog(n), namely \tilde O(m+nk^3) time in undirected graphs and \tilde O(mk^2) time in directed graphs. Our techniques also yield an improved approximation scheme. (2) Property testing algorithms for k-edge and k-vertex connectivity with query complexities that are near-linear in k, exponentially improving the state of the art. This resolves two open problems, one by Goldreich and Ron [STOC '97] and one by Orenstein and Ron [Theor. Comput. Sci. '11]. (3) A faster algorithm for computing the maximal k-edge-connected subgraphs, improving prior work of Chechik et al. [SODA '17].
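
To make the problem statement concrete, here is a brute-force checker for the exact (non-slack) local cut-detection problem; the graph, k, and \nu below are our own toy choices, and the exponential running time is precisely what the talk's O(k^2\nu)-time algorithm avoids:

```python
from itertools import combinations

def local_cut_bruteforce(edges, x, k, nu):
    """Brute-force reference for local cut detection in a directed graph
    (edge list): find at most k edges whose removal leaves at most nu
    edges reachable from the seed x, or return None (the 'bot' answer).
    """
    def edges_reachable_from_x(removed):
        remaining = [e for e in edges if e not in removed]
        seen, stack = {x}, [x]
        while stack:  # DFS over the remaining edges
            u = stack.pop()
            for a, b in remaining:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return sum(1 for a, b in remaining if a in seen)

    for r in range(k + 1):  # try all subsets of at most k edges
        for removed in combinations(edges, r):
            if edges_reachable_from_x(set(removed)) <= nu:
                return list(removed)
    return None

# Example: a directed path out of x=0; cutting its first edge isolates x.
path = [(i, i + 1) for i in range(10)]
print(local_cut_bruteforce(path, x=0, k=1, nu=0))  # [(0, 1)]
```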

10:00 -- 10:20 Prahladh Harsha High-Dimensional Expanders and Some Coding Applications

Expander graphs, over the last few decades, have played a pervasive role in almost all areas of theoretical computer science. Recently, various high-dimensional analogues of these objects have been studied in mathematics, and even more recently there have been some surprising applications in computer science, especially in coding theory. In this talk, we will explore these high-dimensional expanders from a spectral viewpoint and give an alternate characterization in terms of random walks. We will then see an application of high-dimensional expanders to local testability and list-decoding.

10:20 -- 10:45 Break
Session 2
10:45 -- 11:05 Dakshita Khurana New Techniques for Zero-Knowledge Proofs

Zero-knowledge is one of the central areas of research in cryptography. A zero-knowledge proof allows a prover to convince a verifier of the truth of an NP statement while revealing no information beyond the truth of the statement. In the first part of this talk, I will give an overview of how zero-knowledge is defined and of the techniques we know for building zero-knowledge proof systems. In the second part of the talk, I will give an overview of a new non-black-box simulation technique. I will describe how this gives us the first constructions of witness-hiding proof systems requiring only a single message each from the verifier and the prover. Part 2 of the talk is based on joint work with Nir Bitansky and Omer Paneth.
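
For readers new to the area, a runnable rendition of the classic zero-knowledge protocol for Graph Isomorphism may help make the definition concrete. This is textbook material, not the new non-black-box technique of the talk; the graphs and permutation below are our own toy choices, and the "commitment" is sent in the clear as a simplification.

```python
import random

def permute_graph(edges, pi):
    """Apply a vertex permutation pi (a dict) to an undirected edge set."""
    return {frozenset((pi[u], pi[v])) for u, v in edges}

def zk_graph_iso_round(G0, G1, pi, n, rng):
    """One round of the classic ZK proof for Graph Isomorphism; the
    prover knows pi with pi(G0) = G1.

    Prover: send H = sigma(G1) for a fresh random sigma.
    Verifier: challenge with a random bit b.
    Prover: reveal a permutation mapping G_b to H (never pi itself).
    """
    sigma = dict(zip(range(n), rng.sample(range(n), n)))
    H = permute_graph(G1, sigma)                      # prover's message
    b = rng.randrange(2)                              # verifier's challenge
    if b == 1:
        reveal = sigma                                # sigma maps G1 to H
    else:
        reveal = {u: sigma[pi[u]] for u in range(n)}  # sigma o pi maps G0 to H
    G_b = G0 if b == 0 else G1
    return permute_graph(G_b, reveal) == H            # verifier's check

# Example: G1 is a relabeling of a 5-cycle G0 by pi.
n = 5
G0 = {frozenset((i, (i + 1) % n)) for i in range(n)}
pi = {0: 2, 1: 4, 2: 1, 3: 3, 4: 0}
G1 = permute_graph(G0, pi)
rng = random.Random(0)
print(all(zk_graph_iso_round(G0, G1, pi, n, rng) for _ in range(20)))
```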

11:05 -- 11:25 Noga Ron-Zewi Local Proofs Approaching the Witness Length

Interactive oracle proofs are a hybrid between interactive proofs and probabilistically-checkable proofs, where the prover is allowed to interact with a verifier by sending relatively long messages to the verifier, who in turn is only allowed to query a few of the bits that were sent.

In this work we construct, for any NP relation for which membership can be decided in polynomial time and bounded polynomial space (e.g., SAT, Hamiltonicity, Clique, Vertex-Cover), interactive oracle proofs in which the proof length approaches the witness length. The number of rounds, as well as the number of queries made by the verifier, are constant. Our proof leverages recent constructions of high-rate locally testable tensor codes. In particular, we bypass the barrier imposed by the low rate of multiplication codes (e.g., Reed-Solomon, Reed-Muller or AG codes) - a core component in prior constructions.

Joint work with Ron Rothblum.

11:25 -- 11:45 Pravesh Kothari Efficient List-Decodable Estimation Via Sum-of-Squares

List-decodable learning studies high-dimensional statistical estimation in the setting where an α (< 1/2) fraction of the input sample is drawn according to the model and the remaining (possibly majority) points are arbitrary, potentially adversarially chosen outliers. Since only a minority of the sample are inliers, it is not possible to recover the unknown parameters uniquely. Thus, the goal in list-decodable learning is to recover a small list of parameters - of size a fixed constant depending on α - that contains the true parameters.

In this talk, I will survey a unified approach to list-decodable learning based on the sum-of-squares method. This single approach yields state-of-the-art efficient algorithms for mean estimation, clustering spherical mixture models, linear regression, and subspace recovery in the list-decodable setting.

Based on joint works with Ainesh Bakshi, Sushrut Karmalkar, Adam Klivans, and Jacob Steinhardt.
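
A toy illustration of the list-decodable setting (the dimensions, fraction alpha, and decoy clusters below are our own assumptions, and the candidate list is computed from the planted partition purely for illustration, which a real algorithm of course cannot do):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, alpha = 20, 10000, 0.25

# alpha*N inliers from N(mu, I); the rest are split among three decoy
# Gaussians. An adversary can always make the outliers look like genuine
# clusters, so no algorithm can identify mu uniquely: a list is necessary.
mu = np.full(d, 5.0)
centers = [mu] + [rng.normal(size=d) * 10 for _ in range(3)]
counts = [int(alpha * N)] + [int((1 - alpha) * N / 3)] * 3
sample = np.vstack([rng.normal(c, 1.0, size=(m, d))
                    for c, m in zip(centers, counts)])

# The empirical mean of the whole sample is far from mu...
print("error of empirical mean:", np.linalg.norm(sample.mean(axis=0) - mu))

# ...but a list of four candidate means (computed here from the planted
# partition, which is hidden from a real algorithm) contains a good one.
offsets = np.cumsum([0] + counts)
candidates = [sample[offsets[i]:offsets[i + 1]].mean(axis=0)
              for i in range(4)]
print("best error in the list:",
      min(np.linalg.norm(c - mu) for c in candidates))
```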

11:45 -- 1:15 Lunch (Provided)
Session 3
1:15 -- 1:35 Ilias Diakonikolas Distribution-Independent PAC Learning of Halfspaces with Massart Noise

I will describe recent algorithmic progress on the problem of learning halfspaces in the distribution-independent PAC model, when the labels of the examples have been corrupted by Massart noise.
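
A sketch of the Massart noise model itself, with our own toy choices of target halfspace and flip-rate function (the model only requires that each label be flipped with some probability eta(x) <= eta, possibly depending adversarially on x):

```python
import numpy as np

def massart_sample(n, d, w, eta, rng):
    """Draw n labeled examples for the halfspace sign(<w, x>) under
    Massart noise: each label is flipped independently with probability
    eta(x) <= eta that may depend on x. Here a toy adversary flips most
    aggressively near the decision boundary, where it hurts most.
    """
    X = rng.normal(size=(n, d))
    margins = X @ w
    y = np.sign(margins)
    flip_prob = eta * np.exp(-np.abs(margins))  # always <= eta
    flips = rng.random(n) < flip_prob
    y[flips] *= -1
    return X, y

rng = np.random.default_rng(0)
w = np.ones(5) / np.sqrt(5)
X, y = massart_sample(100000, 5, w, eta=0.3, rng=rng)
print("fraction of flipped labels:", np.mean(y != np.sign(X @ w)))
```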

1:35 -- 1:55 Piotr Indyk Learning-based algorithms for nearest neighbor search

Learning-based algorithms use machine learning to improve their performance, by adapting their behavior to the properties of the input distribution. In this talk I will give a brief overview of this emerging field, as well as describe our recent work on learning-based algorithms for nearest neighbor search (NNS). Our approach is inspired by recent theoretical work on NNS for general metric spaces using space partitions (Andoni et al. 2018b,c). Specifically, we develop a new framework for building space partitions by reducing the problem to balanced graph partitioning followed by supervised classification. We instantiate this general approach with the KaHIP graph partitioner and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS, our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods as well as classic, data-oblivious LSH.

Joint work with Yihe Dong, Ilya Razenshteyn and Tal Wagner. To appear in ICLR'20.
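
A toy rendition of the two-stage pipeline, with k-means standing in for the balanced graph partitioning (KaHIP) step of the paper and a small scikit-learn MLP as the classifier; all parameters below are our own assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
base = rng.normal(size=(5000, 32)).astype(np.float32)
queries = rng.normal(size=(100, 32)).astype(np.float32)

# Step 1: partition the dataset (k-means here as a stand-in for the
# balanced graph partitioning step used in the paper).
k = 16
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(base)

# Step 2: train a classifier that extends the partition to all of space.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(base, labels)

# Query time: probe the few most likely bins and scan only those points.
def ann_query(q, n_probes=3):
    probs = clf.predict_proba(q.reshape(1, -1))[0]
    bins = clf.classes_[np.argsort(probs)[-n_probes:]]
    cand_idx = np.flatnonzero(np.isin(labels, bins))
    cand = base[cand_idx]
    return cand_idx[np.argmin(np.linalg.norm(cand - q, axis=1))]

true_nn = [np.argmin(np.linalg.norm(base - q, axis=1)) for q in queries]
hits = np.mean([ann_query(q) == t for q, t in zip(queries, true_nn)])
print("recall@1 probing 3 of 16 bins:", hits)
```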

1:55 -- 2:15 Tengyu Ma Optimization of Regularized Wide Neural Networks

We will show that gradient descent can optimize a regularized infinite-width two-layer neural network in polynomially many iterations. We will also briefly discuss recent progress in the theory of deep learning as context for this result, as well as interesting open questions.
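
To fix notation, here is a minimal numpy sketch of plain gradient descent on an l2-regularized two-layer ReLU network. The width, step size, and toy data are our own assumptions; the talk concerns the infinite-width regime and polynomial iteration bounds, which this demo does not establish.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 5, 1024              # samples, input dim, (large) width
lam, lr, steps = 1e-3, 2e-3, 3000   # l2 penalty, step size, iterations

# Toy regression data generated by a single ground-truth ReLU neuron.
X = rng.normal(size=(n, d))
y = np.maximum(X @ (np.ones(d) / np.sqrt(d)), 0.0)

# Two-layer ReLU network f(x) = sum_j a_j * relu(<w_j, x>).
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.normal(size=m) / np.sqrt(m)

def reg_loss():
    H = np.maximum(X @ W.T, 0.0)
    return (0.5 * np.mean((H @ a - y) ** 2)
            + 0.5 * lam * (a @ a + (W * W).sum()))

print("initial loss:", reg_loss())
for _ in range(steps):
    H = np.maximum(X @ W.T, 0.0)          # hidden activations, (n, m)
    r = H @ a - y                         # residuals
    grad_a = H.T @ r / n + lam * a
    grad_W = ((r[:, None] * (X @ W.T > 0)) * a).T @ X / n + lam * W
    a -= lr * grad_a
    W -= lr * grad_W
print("final loss:", reg_loss())
```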

2:15 -- 2:35 Balasubramanian Sivan Strategizing against No-regret Learners

How should a player who repeatedly plays a game against a no-regret learner strategize to maximize his utility? We study this question and show that under some mild assumptions, the player can always guarantee himself a utility of at least what he would get in a Stackelberg equilibrium of the game. When the no-regret learner has only two actions, we show that the player cannot get any higher utility than the Stackelberg equilibrium utility. But when the no-regret learner has more than two actions and plays a mean-based no-regret strategy, we show that the player can get strictly more than the Stackelberg equilibrium utility. We provide a characterization of the player's optimal game-play against a mean-based no-regret learner as the solution to a control problem. When the no-regret learner's strategy also guarantees him no swap regret, we show that the player cannot get anything more than the Stackelberg equilibrium utility.

Joint work with Yuan Deng and Jon Schneider.
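
A toy simulation of the setting (the payoff matrices and commitment strategies are our own arbitrary assumptions): a multiplicative-weights learner, which is mean-based, faces an optimizer who commits to a fixed mixed strategy, and a well-chosen mixed commitment earns strictly more than either pure action:

```python
import numpy as np

# Row player = no-regret learner (multiplicative weights); column player
# = optimizer committing to a fixed mixed strategy. A is the learner's
# payoff matrix, B the optimizer's (rows = learner actions).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
B = np.array([[0.0, 2.0],
              [1.0, 0.0]])

def average_utility(commit, T=20000, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    logw = np.zeros(A.shape[0])  # log-weights avoid overflow
    total = 0.0
    for _ in range(T):
        p = np.exp(logw - logw.max())
        p /= p.sum()
        i = rng.choice(len(p), p=p)               # learner's action
        j = rng.choice(len(commit), p=commit)     # optimizer's action
        total += B[i, j]
        logw += eta * A[:, j]                     # full-information MW update
    return total / T

# The learner converges to a best response against the commitment, so
# the mixed commitment below beats both pure commitments.
for q in ([1.0, 0.0], [0.0, 1.0], [0.45, 0.55]):
    print(q, "->", round(average_utility(np.array(q)), 3))
```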

2:35 -- 2:55 Ankit Garg Learning arithmetic circuits in the average case via lower bounds

The problem of learning arithmetic circuits is the following: given a polynomial as a black box that is promised to have a small arithmetic circuit computing it, can we find this arithmetic circuit? This problem is hard in the worst case, and so previous works have focused on regimes where the NP-hardness does not kick in (e.g., constant top fan-in). We focus on the average case, where one assumes certain non-degeneracy conditions on the circuit (formally, these amount to assuming the circuit parameters lie outside a certain variety, so if they are chosen randomly according to any reasonable distribution, the non-degeneracy condition is satisfied with high probability). Kayal and Saha gave a meta framework for designing such algorithms for circuit classes where we have lower bounds. They carried this out for depth-3 circuits (sums of products of linear forms), and we (in joint work with Kayal and Saha) carry it out for depth-4 powering circuits (sums of powers of low-degree polynomials). I will talk about the meta framework and then about our specific results. I will also talk about a potential application to learning mixtures of general Gaussians.

2:55 -- 3:20 Break
Session 4
3:20 -- 3:40 Henry Yuen (Cancelled) Connes' Embedding Problem through the lens of complexity theory

The Connes Embedding Problem (CEP) is a long-standing question in the field of operator algebras, and is known to be equivalent to a number of other conjectures across mathematics. Remarkably, the CEP is also connected to fundamental questions in quantum information theory. In particular, a positive resolution to the CEP implies the existence of an algorithm to approximately compute the optimal winning probability of nonlocal games. This motivates an intriguing complexity-theoretic approach to exploring the CEP: obtaining lower bounds on the complexity of nonlocal games implies limits on the CEP, whereas upper bounds give evidence towards its positive resolution.

I will give an overview of this fascinating connection between the Connes Embedding Problem, nonlocal games, and computational complexity, and I will discuss some new results about the complexity of nonlocal games.

3:40 -- 4:00 Marvin Künnemann A Fine-grained Analogue of Schaefer's Theorem in P

Recent years have seen a surge of methods to analyze the complexity landscape within the polynomial-time regime, allowing us to classify which problems are unlikely to have faster solutions than currently known. Unfortunately, most of these results establish relationships only between isolated problems rather than between general classes of problems. The most important exception is a recent completeness result that shows that the fundamental Orthogonal Vectors problem (OV) is complete for the class of model-checking first-order properties [Gao, Impagliazzo, Kolokolova, and Williams TALG'19].

In this talk, we discuss advances towards a detailed understanding of this class of problems, to strengthen our class-based understanding of hardness within P. In particular, for a notable subclass, we determine precisely which first-order properties admit a polynomial-factor improvement over the natural baseline algorithm (conditional on a plausible assumption on the hardness of finding cliques in hypergraphs). This establishes a dichotomy in P that is analogous to Schaefer's Theorem for constraint satisfaction problems.

Joint work with Karl Bringmann and Nick Fischer.
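
For context, the natural baseline algorithm for Orthogonal Vectors, the complete problem mentioned above, is the quadratic-time scan sketched below (instance sizes are our own toy choices); fine-grained hardness results ask when a polynomial-factor improvement over such baselines is possible:

```python
import numpy as np

def ov_baseline(A, B):
    """The natural quadratic-time baseline for Orthogonal Vectors (OV):
    given two sets of d-dimensional 0/1 vectors, return the indices of a
    pair with inner product 0, or None if no such pair exists.
    """
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if not np.dot(a, b):   # <a, b> == 0: orthogonal pair found
                return i, j
    return None

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(500, 20))
B = rng.integers(0, 2, size=(500, 20))
print(ov_baseline(A, B))
```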

4:00 -- 4:20 Omri Weinstein An Adaptive Step Toward the Multiphase Conjecture

In 2010, Patrascu proposed the Multiphase problem as a candidate for proving *polynomial* lower bounds on the operational time of dynamic data structures, and showed it would imply unconditional lower bounds on many important dynamic data structure problems. After 10 years, there has been almost no progress on this conjecture. We show an \tilde\Omega(\sqrt{n}) cell-probe lower bound on the Multiphase problem for data structures with general (adaptive) updates, and queries with unbounded but "layered" adaptivity. This captures all known set-intersection data structures and significantly strengthens previous Multiphase lower bounds, which only captured non-adaptive data structures. Our main technical result is a communication lower bound on a 4-party variant of Patrascu's Number-On-Forehead game, using information complexity techniques.

Joint work with Young Kun Ko.
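
A rough rendition of the Multiphase problem may help (the exact parameterization in Patrascu's conjecture differs; the sets below are toy choices). The two naive strategies trade Phase II work against Phase III work, and the conjecture asserts that some phase must incur a polynomial cost:

```python
# Phase I: preprocess k sets S_1, ..., S_k over universe {0, ..., n-1}.
# Phase II: receive a set T. Phase III: answer one query
# "does S_i intersect T?". Python ints serve as bitsets.

class MultiphaseNaive:
    def __init__(self, sets):
        self.bitsets = [sum(1 << x for x in S) for S in sets]  # Phase I

    # Strategy 1: just store T (Phase II: O(|T|)); each query then does
    # one n-bit intersection (Phase III: O(n / wordsize)).
    def update_lazy(self, T):
        self.T = sum(1 << x for x in T)

    def query_lazy(self, i):
        return (self.bitsets[i] & self.T) != 0

    # Strategy 2: precompute all answers (Phase II: O(k * n / wordsize));
    # each query is then a table lookup (Phase III: O(1)).
    def update_eager(self, T):
        t = sum(1 << x for x in T)
        self.answers = [(s & t) != 0 for s in self.bitsets]

    def query_eager(self, i):
        return self.answers[i]

ds = MultiphaseNaive([{0, 1}, {2, 3}, {1, 4}])
ds.update_lazy({1})
print(ds.query_lazy(0), ds.query_lazy(1))   # True False
ds.update_eager({4})
print(ds.query_eager(2))                    # True
```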

4:20 -- 4:40 Arkadev Chattopadhyay The Log-Approximate-Rank Conjecture is False

We construct a simple and total XOR function F on 2n variables that has only O(n) spectral norm, O(n^2) approximate rank and O(n^{2.5}) approximate nonnegative rank. We show it has polynomially large randomized bounded-error communication complexity of \Omega(\sqrt{n}). This yields the first exponential gap between the logarithm of the approximate rank and randomized communication complexity for total functions. Thus, F witnesses a refutation of the Log-Approximate-Rank Conjecture (LARC), which was posed by Lee and Shraibman (2007) as a very natural analogue for randomized communication of the still unresolved Log-Rank Conjecture for deterministic communication. The best known previous gap for any total function between the two measures was a recent fourth-power separation by Goos, Jayram, Pitassi and Watson (2017).

Additionally, our function F refutes Grolmusz's Conjecture (1997) and a variant of the Log-Approximate-Nonnegative-Rank Conjecture, suggested by Kol, Moran, Shpilka and Yehudayoff (2014), both of which are implied by the LARC. The complement of F has exponentially large approximate nonnegative rank. This answers a question of Lee (2012) and Kol et al. (2014), showing that approximate nonnegative rank can be exponentially larger than approximate rank. The function F also falsifies a conjecture about parity measures of Boolean functions made by Tsang, Wong, Xie and Zhang (2013). The latter conjecture implied the Log-Rank Conjecture for XOR functions.

Remarkably, after our manuscript was made public, two groups of researchers, Anshu-Boddu-Touchette (2018) and Sinha-de-Wolf (2018), independently showed that the function F refutes even the Quantum-Log-Approximate-Rank Conjecture.

(This is joint work with Nikhil Mande and Suhail Sherif.)

4:40 -- 5:00 Rafael Oliveira Towards a theory of non-commutative optimization

Scaling problems, such as operator scaling and tensor scaling, have recently found several applications in diverse areas of math and CS. They have been used to give efficient algorithms for non-commutative rational identity testing, for computing the optimal constant in Brascamp-Lieb inequalities, for the one-body quantum marginal problem, for the Paulsen problem, and for the search version of Horn's problem, to name a few. Scaling problems are natural geodesically convex optimization problems on Riemannian manifolds that arise from the symmetries of non-commutative groups, and the algorithms (and their analyses) developed in these previous works relied heavily on this general form of convexity.

In this talk we discuss our recent systematic development of a theory of non-commutative optimization, which greatly extends ordinary (Euclidean) convex optimization. We will see how non-commutative optimization unifies and generalizes the algorithms developed for the uniform and non-uniform versions of the operator scaling and tensor scaling problems.

Based on joint work with Peter Buergisser, Cole Franks, Ankit Garg, Michael Walter and Avi Wigderson.
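
The simplest commutative ancestor of these scaling problems is matrix scaling, solvable by the classical Sinkhorn iteration sketched below (matrix size and iteration count are our own toy choices); the talk's operator and tensor scaling algorithms are genuinely non-commutative generalizations of this idea:

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Matrix scaling, the classical commutative ancestor of operator and
    tensor scaling: find positive diagonal X, Y such that X A Y is
    (approximately) doubly stochastic, by alternately fixing row sums
    and column sums.
    """
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(iters):
        x = 1.0 / (A @ y)      # make the row sums equal 1
        y = 1.0 / (A.T @ x)    # make the column sums equal 1
    return x, y

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 0.1   # strictly positive, hence scalable
x, y = sinkhorn(A)
S = np.diag(x) @ A @ np.diag(y)
print(S.sum(axis=0))  # columns sum to ~1
print(S.sum(axis=1))  # rows sum to ~1
```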

5:00 -- 5:20 Break
Session 5
5:20 -- 5:40 Sofya Raskhodnikova Approximating the Distance to Monotonicity of Boolean Functions

We study the problem of approximating the distance to monotonicity of Boolean functions over the hypercube domain. For a function f:{0,1}^n -> {0,1}, the distance to monotonicity of f is the minimum fraction of points at which f has to be modified to make it monotone. We give a nonadaptive algorithm that, given f:{0,1}^n -> {0,1} whose distance to monotonicity is at least \alpha, makes poly(n, 1/\alpha) queries and returns an O(\sqrt{n} polylog n)-multiplicative approximation to the distance to monotonicity of f. Furthermore, we show that for any constant k > 0, approximating the distance to monotonicity up to n^{0.5 - k}-factor requires 2^{n^k} nonadaptive queries, thereby ruling out a poly(n, 1/\alpha)-query nonadaptive algorithm for such approximations. This answers a question of Seshadhri (Property Testing Review '14) for the case of nonadaptive algorithms. Approximating the distance to a property is closely related to tolerantly testing that property. Our lower bound stands in contrast to standard (non-tolerant) testing of monotonicity that can be done nonadaptively with O(\sqrt{n} polylog n) queries.

We obtain our lower bound by proving an analogous bound for erasure-resilient testers. Our method yields the same lower bounds for other properties (unateness and being a k-junta). These lower bounds improve exponentially on the existing lower bounds for these properties.

Joint work with Ramesh Krishnan Pallavoor and Erik Waingarten.
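
For very small n, the distance to monotonicity can be computed exactly by brute force, which makes the quantity being approximated concrete (the anti-dictator example is our own choice; the doubly exponential running time is exactly what the talk's poly(n, 1/\alpha)-query algorithms avoid):

```python
from itertools import product

def distance_to_monotonicity(f, n):
    """Exact distance to monotonicity of f: {0,1}^n -> {0,1} (given as a
    truth table indexed by integers) via brute force over all monotone
    functions. Doubly exponential in n: a reference point only.
    """
    N = 1 << n
    def monotone(g):
        # Monotone iff setting any 0-bit to 1 never decreases the value.
        return all(g[x] <= g[x | (1 << i)]
                   for x in range(N) for i in range(n))
    best = N + 1
    for bits in product((0, 1), repeat=N):
        if monotone(bits):
            best = min(best, sum(bits[x] != f[x] for x in range(N)))
    return best / N

# Example: the anti-dictator f(x) = 1 - x_1 on 3 variables; a closest
# monotone function is the all-zeros function, at distance 1/2.
f = [1 - (x & 1) for x in range(8)]
print(distance_to_monotonicity(f, 3))  # 0.5
```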

5:40 -- 6:00 Sebastien Bubeck How to trap a gradient flow

I will discuss a new strategy to find stationary points of non-convex functions in low-dimensional spaces. In particular we resolve an open problem from 1993 by Stephen A. Vavasis on the complexity of this problem in 2D.

Joint work with Dan Mikulincer.

6:00 -- 6:20 Nika Haghtalab Smoothed Analysis of Online Learning

In the smoothed analysis of algorithms, an adversary chooses an arbitrary input which is subjected to a small random perturbation before being presented to an algorithm. The goal is then to design an algorithm that always has good expected performance, for a worst-case adversary and on average over the perturbation. In this work, we apply the smoothed analysis framework to the problem of regret minimization in online learning with adaptive adversaries, and show that under this framework, online learnability is equivalent to offline learnability. That is, just as in offline (i.i.d.) learning, the VC dimension of a hypothesis class characterizes the regret obtainable against smoothed adaptive adversaries. This is in stark contrast to worst-case regret bounds, which are characterized by the Littlestone dimension of the hypothesis class; the Littlestone dimension of even simple classes with constant VC dimension can be unbounded.

This is based on a joint work with Tim Roughgarden and Abhishek Shetty.

6:20 -- 6:40 Robert Krauthgamer Coresets for Clustering in Graphs of Bounded Treewidth

We initiate the study of coresets for clustering in graph metrics, i.e., the shortest-path metrics of edge-weighted graphs, which are useful for example for road networks and data visualization. A coreset is a small weighted summary of the data that maintains a (1+epsilon)-approximation to the clustering objective, and it leads to significantly improved running time, storage, or communication in various settings.

I will discuss how every graph G of treewidth tw(G) admits a coreset of size O(tw(G)) for k-Median clustering (omitting dependence on k and epsilon). Moreover, this coreset can be constructed in near-linear time. We also have a matching lower bound Omega(tw(G)), which justifies restricting the graph's topology.

Joint work with Vladimir Braverman, Lingxiao Huang, Shaofeng H.-C. Jiang, and Xuan Wu.

6:40 -- 7:00 Nima Anari Limited Correlations, Fractional Log-concavity, and Fast Mixing Random Walks

The analysis of random walks on high-dimensional expanders has recently enabled us to answer open questions about sampling/counting in combinatorial objects like matroids. The key to this connection was showing that matroids have “spectrally negative” correlations. In this talk, I will show how to go beyond the setting of matroids and analyze random walks in the presence of limited positive correlations. I will show how to apply this method to the hardcore model on bounded-degree graphs and the monomer distribution in monomer-dimer systems. 

Based on joint work with Kuikui Liu and Shayan Oveis Gharan.
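
A minimal sketch of the relevant Markov chain, Glauber dynamics for the hardcore model, on our own toy choice of graph and fugacity (the talk's contribution is the mixing analysis, not the chain itself):

```python
import random

def glauber_hardcore(adj, lam, steps, seed=0):
    """Glauber dynamics for the hardcore model: sample an independent set
    I of the graph with probability proportional to lam^|I|. Each step
    picks a uniformly random vertex and resamples its occupancy
    conditioned on its neighborhood.
    """
    rng = random.Random(seed)
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        if not (adj[v] & occupied):            # v has no occupied neighbor
            # Conditional law: occupied with probability lam / (1 + lam).
            if rng.random() < lam / (1 + lam):
                occupied.add(v)
    return occupied

# Example: sample independent sets of the 4x4 grid at fugacity 1.
def grid(k):
    adj = {(i, j): set() for i in range(k) for j in range(k)}
    for i in range(k):
        for j in range(k):
            for di, dj in ((0, 1), (1, 0)):
                if i + di < k and j + dj < k:
                    adj[(i, j)].add((i + di, j + dj))
                    adj[(i + di, j + dj)].add((i, j))
    return adj

sample = glauber_hardcore(grid(4), lam=1.0, steps=100000)
print(len(sample), sorted(sample))
```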