The workshop will bring together leading researchers from different areas of Theoretical Computer Science (TCS),
with the goal of presenting the latest research to a broad TCS audience from Chicago and the surrounding area,
and of increasing interaction between different areas of the field.

All researchers interested in theoretical computer science are invited to attend.
Attending the workshop is free, but please register via this form before January 15th.

Posters

We also call for posters to be presented at the workshop.
Please fill out this form by December 31st to present a poster.
Posters on work submitted to STOC 2020 are not allowed.

Confirmed Speakers:

• Sebastien Bubeck (Microsoft Research Redmond)

Non-Stochastic Multi-Player Multi-Armed Bandits

I will motivate a cooperative multi-player version of the classical multi-armed bandit problem.
I will give a low-communication strategy for the players that achieves the optimal sqrt(T) regret.
Finally, I will present some tantalizing open problems for the scenario with *no* communication at all.
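As an illustrative aside (not from the talk): the regret benchmark in this abstract is the gap between a player's cumulative loss and that of the best fixed arm in hindsight. A minimal sketch, with a made-up oblivious loss sequence and a naive uniform player:

```python
import random

def regret(losses, plays):
    """Regret of a play sequence against the best fixed arm in hindsight."""
    T, K = len(plays), len(losses[0])
    learner = sum(losses[t][plays[t]] for t in range(T))
    best_fixed = min(sum(losses[t][a] for t in range(T)) for a in range(K))
    return learner - best_fixed

random.seed(0)
T, K = 1000, 3
# A fixed (oblivious) adversarial loss sequence with entries in [0, 1].
losses = [[random.random() for _ in range(K)] for _ in range(T)]
plays = [random.randrange(K) for _ in range(T)]  # naive uniform player
print(regret(losses, plays))  # positive: uniform play is far from optimal
```

A sqrt(T)-regret strategy such as EXP3 would drive this gap down to O(sqrt(TK log K)); the uniform player above is only a sanity check of the definition.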

• Arkadev Chattopadhyay (Tata Institute of Fundamental Research)

• Ilias Diakonikolas (University of Wisconsin-Madison)

Distribution-Independent PAC Learning of Halfspaces with Massart Noise

I will describe recent algorithmic progress on the problem
of learning halfspaces in the distribution-independent PAC model,
when the labels of the examples have been corrupted by Massart noise.

• Sebastian Forster (University of Salzburg)

Computing and Testing Small Connectivity in Near-Linear Time and
Queries via Fast Local Cut Algorithms

Consider the following "local" cut-detection problem in a directed
graph: We are given a seed vertex x and need to remove at most k edges
so that at most \nu edges can be reached from x (a "local" cut) or
output \bot to indicate that no such cut exists. If we are given query
access to the input graph, then this problem can in principle be solved
without reading the whole graph and with query complexity depending on k
and \nu. In this paper we consider a slack variant of this problem
where, when such a cut exists, we can output a cut with up to O(k\nu)
edges reachable from x.
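The problem statement above can be made concrete with a brute-force sketch (ours, not the paper's: it enumerates edge subsets and is exponential in k, whereas the algorithm of the talk runs in O(k^2\nu) time):

```python
from itertools import combinations

def reachable_edges(edges, x):
    """Edges (u, v) whose tail u is reachable from x in the digraph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return [e for e in edges if e[0] in seen]

def local_cut(edges, x, k, nu):
    """Search exhaustively for at most k edges whose removal leaves at
    most nu edges reachable from x; return the cut, or None for 'bot'."""
    for r in range(k + 1):
        for cut in combinations(edges, r):
            rest = [e for e in edges if e not in cut]
            if len(reachable_edges(rest, x)) <= nu:
                return cut
    return None

edges = [("x", "a"), ("a", "b"), ("a", "c"), ("b", "c")]
print(local_cut(edges, "x", 1, 0))  # cutting ("x", "a") isolates x
```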

We present a simple randomized algorithm spending O(k^2\nu) time and
O(k\nu) queries for the above variant, improving in particular upon a
previous time bound of k^{O(k)}\nu by Chechik et al. [SODA '17]. We
also extend our algorithm to handle an approximate variant. We
demonstrate that these local algorithms are versatile primitives for
designing substantially improved algorithms for classic graph problems
by providing the following three applications. (1) A randomized
algorithm for the classic k-vertex connectivity problem that takes
near-linear time when k=polylog(n), namely \tilde O(m+nk^3) time in
undirected graphs and \tilde O(mk^2) time in directed graphs. Our
techniques also yield an improved approximation scheme. (2) Property
testing algorithms for k-edge and -vertex connectivity with query
complexities that are near-linear in k, exponentially improving the
state of the art. This resolves two open problems, one by Goldreich and
Ron [STOC '97] and one by Orenstein and Ron [Theor. Comput. Sci. '11].
(3) A faster algorithm for computing the maximal k-edge connected
subgraphs, improving prior work of Chechik et al. [SODA '17].

• Ankit Garg (Microsoft Research India)

Learning arithmetic circuits in the average case via lower bounds

The problem of learning arithmetic circuits is the following: given a
polynomial as a black box that is promised to have a small arithmetic circuit
computing it, can we find this arithmetic circuit? This problem is hard in the
worst case, so previous works have focused on regimes where the NP-hardness does not
kick in (e.g., constant top fan-in). We focus on the average case, where one assumes
certain non-degeneracy conditions on the circuit (formally, these amount to assuming that the
circuit parameters lie outside a certain variety; hence, if they are chosen randomly
according to any reasonable distribution, the non-degeneracy condition is satisfied
with high probability). Kayal and Saha gave a meta-framework for designing such algorithms for circuit
classes for which we have lower bounds. They carried this out for depth-3 circuits (sums
of products of linear forms), and we (in joint work with Kayal and Saha) carry it out for depth-4
powering circuits (sums of powers of low-degree polynomials). I will talk about the meta-framework
and then about our specific results.
I will also talk about a potential application to learning mixtures of general Gaussians.
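As a much simpler warm-up (not the algorithm of the talk): black-box access means we may only evaluate the hidden polynomial, never inspect its circuit. Even dense univariate interpolation already shows how evaluations pin down a hidden "circuit" of known degree; the polynomial f below is a hypothetical stand-in:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Black-box access: we may only evaluate f, never look inside it.
f = lambda v: 3 * v**3 - 2 * v + 5      # hidden toy "circuit", degree 3
points = [(v, f(v)) for v in range(4)]  # degree + 1 evaluations
print(lagrange_eval(points, 10))        # 2985, which equals f(10)
```

The real learning problem is much harder: one must recover a structured circuit (e.g., a sum of powers of low-degree polynomials) in many variables, not just a coefficient list.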

• Nika Haghtalab (Cornell University)

• Prahladh Harsha (Tata Institute of Fundamental Research)

• Piotr Indyk (Massachusetts Institute of Technology)

• Sanjeev Khanna (University of Pennsylvania)

• Pravesh Kothari (Carnegie Mellon University)

• Marvin Künnemann (Max Planck Institute for Informatics)

• Dakshita Khurana (Columbia University)

New Techniques for Zero-Knowledge Proofs

Zero-knowledge is one of the central areas of research in cryptography.
A zero-knowledge proof allows a prover to convince a verifier about the truth
of an NP statement while revealing no information beyond the truth of the statement.
In the first part of this talk, I will give an overview of how zero-knowledge is
defined and of the different techniques we know for building zero-knowledge
proof systems. In the second part of the talk, I will give an overview of a new
non-black-box simulation technique. I will describe how this gives us the first
constructions of witness hiding proof systems requiring only a single message each
from the verifier and the prover. Part 2 of the talk is based on joint work with
Nir Bitansky and Omer Paneth.
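To make the definition concrete, here is a sketch of the classic zero-knowledge protocol for graph isomorphism due to Goldreich, Micali, and Wigderson; it is a standard textbook example, not the construction of the talk, and for brevity the interaction is collapsed (in the real protocol the prover's first message is committed before the challenge):

```python
import random

def permute(edges, pi):
    """Relabel the vertices of an undirected edge set by the permutation pi."""
    return frozenset(tuple(sorted((pi[u], pi[v]))) for (u, v) in edges)

def compose(p, q):
    """The permutation that applies q first, then p."""
    return [p[q[i]] for i in range(len(p))]

def zk_round(g0, g1, sigma, n):
    """One round: the prover knows sigma with permute(g1, sigma) == g0
    and convinces the verifier without revealing sigma.
    Returns True iff the verifier accepts."""
    pi = random.sample(range(n), n)        # prover's fresh random relabeling
    h = permute(g0, pi)                    # first message (should be committed)
    b = random.randrange(2)                # verifier's random challenge bit
    rho = pi if b == 0 else compose(pi, sigma)
    return permute((g0, g1)[b], rho) == h  # verifier checks the opening

n = 4
g1 = frozenset({(0, 1), (1, 2), (2, 3)})
sigma = [1, 2, 3, 0]                       # the prover's secret witness
g0 = permute(g1, sigma)
print(all(zk_round(g0, g1, sigma, n) for _ in range(20)))  # True
```

Each round is zero-knowledge because a simulator can produce the same transcript distribution by guessing b in advance, without ever knowing sigma.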

• Tengyu Ma (Stanford University)

Designing Explicit Regularizers for Deep Models

I will outline some new results on designing explicit regularizers to
improve the generalization performance of deep neural networks. We derive
data-dependent generalization bounds for deep neural networks. We empirically
regularize the bounds and obtain improved generalization performance, especially
when we lack other sources of regularization, such as implicit/algorithmic
regularization, data augmentation, etc. I will also touch on recent results
on applying these techniques to imbalanced datasets.

Based on joint work with Colin Wei, Kaidi Cao, Adrien Gaidon, and Nikos Arechiga.

• Rafael Oliveira (University of Toronto and University of Waterloo)

• Sofya Raskhodnikova (Boston University)

Approximating the Distance to Monotonicity of Boolean Functions

We study the problem of approximating the distance to monotonicity
of Boolean functions over the hypercube domain. For a function
f:{0,1}^n -> {0,1}, the distance to monotonicity of f is the minimum
fraction of points at which f has to be modified to make it monotone.
We give a nonadaptive algorithm that, given f:{0,1}^n -> {0,1} whose
distance to monotonicity is at least \alpha, makes poly(n, 1/\alpha)
queries and returns an O(\sqrt{n} polylog n)-multiplicative approximation
to the distance to monotonicity of f. Furthermore, we show that for any
constant k > 0, approximating the distance to monotonicity up to
n^{0.5 - k}-factor requires 2^{n^k} nonadaptive queries, thereby ruling out a
poly(n, 1/\alpha)-query nonadaptive algorithm for such approximations. This
answers a question of Seshadhri (Property Testing Review '14) for the case of
nonadaptive algorithms. Approximating the distance to a property is closely
related to tolerantly testing that property. Our lower bound stands in contrast
to standard (non-tolerant) testing of monotonicity that can be
done nonadaptively with O(\sqrt{n} polylog n) queries.
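For very small n, the distance to monotonicity defined above can be computed exactly by brute force; a sketch (ours, purely to illustrate the quantity being approximated):

```python
from itertools import product

def is_monotone(f, n):
    """f maps n-bit tuples to {0, 1}; monotone iff raising any
    coordinate from 0 to 1 never decreases the value."""
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if f[x] > f[y]:
                    return False
    return True

def distance_to_monotonicity(f, n):
    """Minimum fraction of points to change, by brute force over all
    2^(2^n) Boolean functions (feasible only for tiny n)."""
    points = list(product((0, 1), repeat=n))
    best = len(points)
    for values in product((0, 1), repeat=len(points)):
        g = dict(zip(points, values))
        if is_monotone(g, n):
            best = min(best, sum(f[x] != g[x] for x in points))
    return best / len(points)

# The "anti-dictator" f(x) = 1 - x_1 on n = 2 variables.
f = {x: 1 - x[0] for x in product((0, 1), repeat=2)}
print(distance_to_monotonicity(f, 2))  # 0.5
```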

We obtain our lower bound by proving an analogous bound for erasure-resilient
testers. Our method yields the same lower bounds for other properties (unateness
and being a k-junta). These lower bounds improve exponentially on the existing lower
bounds for these properties.

Joint work with Ramesh Krishnan Pallavoor and Erik Waingarten

• Noga Ron-Zewi (University of Haifa)

Local Proofs Approaching the Witness Length

Interactive oracle proofs are a hybrid between interactive proofs
and probabilistically-checkable proofs, where the prover is allowed
to interact with a verifier by sending relatively long messages to
the verifier, who in turn is only allowed to query a few of the bits
that were sent.

In this work we construct, for any NP relation for which membership
can be decided in polynomial time and bounded polynomial space
(e.g., SAT, Hamiltonicity, Clique, Vertex-Cover, etc.), interactive
oracle proofs in which the proof length approaches the witness length.
The number of rounds, as well as the number of queries made by the verifier,
are constant. Our proof leverages recent constructions of high-rate locally
testable tensor codes. In particular, we bypass the barrier imposed by the
low rate of multiplication codes (e.g., Reed-Solomon, Reed-Muller or AG codes)
- a core component in prior constructions.

Joint work with Ron Rothblum.

• Thatchaphol Saranurak (Toyota Technological Institute at Chicago)

• Balasubramanian Sivan (Google Research)

Strategizing against No-regret Learners

How should a player who repeatedly plays a game against a no-regret
learner strategize to maximize his utility? We study this question and
show that under some mild assumptions, the player can always guarantee
himself a utility of at least what he would get in a Stackelberg equilibrium
of the game. When the no-regret learner has only two actions, we show that
the player cannot get any higher utility than the Stackelberg equilibrium
utility. But when the no-regret learner has more than two actions and plays
a mean-based no-regret strategy, we show that the player can get strictly
higher than the Stackelberg equilibrium utility. We provide a characterization
of the optimal game-play for the player against a mean-based no-regret learner
as a solution to a control problem. When the no-regret learner's strategy also
guarantees him no swap regret, we show that the player cannot get anything
higher than the Stackelberg equilibrium utility.

Joint work with Yuan Deng and Jon Schneider.
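A toy sketch (ours, not from the paper) of the Stackelberg benchmark in the abstract: the leader commits to a mixed strategy over two actions and the follower best-responds, with ties broken in the leader's favor (the standard convention):

```python
def stackelberg_value(A, B, grid=10001):
    """Grid search over the leader's commitment p = Pr[action 0] in a
    2 x m bimatrix game (A: leader payoffs, B: follower payoffs); the
    follower best-responds, breaking ties in the leader's favor."""
    m = len(A[0])
    best = float("-inf")
    for t in range(grid):
        p = t / (grid - 1)
        follower = [p * B[0][j] + (1 - p) * B[1][j] for j in range(m)]
        top = max(follower)
        value = max(p * A[0][j] + (1 - p) * A[1][j]
                    for j in range(m) if follower[j] == top)
        best = max(best, value)
    return best

# Myopic best response to the leader's dominant row yields only 1 here,
# but committing to a 50/50 mix earns the leader 2.5.
A = [[1, 3], [0, 2]]   # leader payoffs
B = [[1, 0], [0, 1]]   # follower payoffs
print(stackelberg_value(A, B))  # 2.5
```

Against a no-regret learner, the results of the talk say the player can always secure at least this commitment value, and sometimes strictly more.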

• Omri Weinstein (Columbia University)

An Adaptive Step Toward the Multiphase Conjecture

In 2010, Patrascu proposed the Multiphase problem as a candidate for proving *polynomial*
lower bounds on the operational time of dynamic data structures, and showed it would imply
unconditional lower bounds on many important dynamic data structure problems. After 10
years, there has been almost no progress on this conjecture. We show an \tilde{\Omega}(\sqrt{n})
cell-probe lower bound on the Multiphase problem for data structures with general (adaptive)
updates, and queries with unbounded but "layered" adaptivity. This captures all known
set-intersection data structures and significantly strengthens previous Multiphase lower
bounds, which only captured non-adaptive data structures. Our main technical result is a
communication lower bound on a 4-party variant of Patrascu's Number-On-Forehead game,
using information complexity techniques.