My Projects

Here are summaries of some of my research projects. This page is still under construction. (In the following, I often use abbreviations such as [NW’94] or [IW’98] (with links to the papers) to refer to well-known previous work, and full author names to refer to recent papers.)

Note that a single paper may contribute to multiple projects below.

Project Links:

Algorithmic Method for Proving Circuit Lower Bounds

The algorithmic approach, which connects algorithm design to circuit lower bounds, lies behind the breakthrough result that $\mathsf{NEXP} \not\subset \mathsf{ACC}^0$ [Wil’14]. The most notable feature of this approach is that it seems to avoid all previous barriers for circuit lower bounds, and it puts together nearly every gem in complexity theory, ranging from the time hierarchy theorem to $\mathsf{IP} = \mathsf{PSPACE}$. A recent breakthrough [Murray-Williams’18] significantly improved the state-of-the-art lower bounds from this paradigm by showing $\mathsf{NTIME}[n^{\mathrm{polylog}(n)}] \not\subset \mathsf{ACC}^0$.
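Concretely, the core connection can be phrased as follows (an informal paraphrase from memory, not the precise statement; see [Wil’14] for the exact parameters):

```latex
% Informal statement of the algorithms-to-lower-bounds connection
% (paraphrased; the precise quantitative form is in [Wil'14]).
\textbf{Theorem (informal).}
If satisfiability of $n$-input, $2^{n^{o(1)}}$-size $\mathsf{ACC}^0$ circuits
can be decided in $2^{n}/n^{\omega(1)}$ time
(i.e., non-trivially faster than brute-force enumeration of assignments),
then $\mathsf{NEXP} \not\subset \mathsf{ACC}^0$.
```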

Some of my papers are devoted to obtaining better circuit lower bounds using the algorithmic method. They are summarized below.

1. Tighter Connections between Circuit-Analysis Algorithms and Circuit Lower Bounds

  • In previous work on the connection between circuit-analysis algorithms and circuit lower bounds, to prove a lower bound against $\mathcal{C}$ circuits, one usually has to design algorithms for $\widetilde{\mathcal{C}}$ circuits, where $\widetilde{\mathcal{C}}$ is a circuit class larger than $\mathcal{C}$.

  • In [Chen-Williams’19], we draw a connection to PCPs of Proximity (PCPPs) and show a tighter connection between circuit-analysis algorithms and circuit lower bounds: in order to prove lower bounds against $\mathcal{C}$ circuits, it suffices to start from a non-trivial derandomization of an XOR (an AND or OR also suffices) of two $\mathcal{C}$ circuits. The PCPP-based techniques in this paper turned out to be very useful in subsequent work.

2. Average-Case Circuit Lower Bounds from Algorithmic Method

3. Construction of Rigid Matrices using Algorithmic Method

4. Almost-Everywhere Circuit Lower Bounds via Algorithmic Method

  • All previous lower bounds proved via the algorithmic method are infinitely-often: for a certain hard function $f$, they only guarantee that $f$ is hard on infinitely many input lengths. Ideally, one would want $f$ to be hard on all sufficiently large input lengths. There have been attempts to address this issue: in [Fortnow-Santhanam’11], an intermediate notion called a significant separation was introduced, and it was shown that $\mathsf{NEXP} \not\subset \mathsf{ACC}^0$ can be strengthened to a significant separation.

  • In [Chen-Lyu-Williams’20], we proved $\mathsf{E}^{\mathsf{NP}} \not\subset \text{i.o.-}\mathsf{ACC}^0$, the first almost-everywhere separation proved via the algorithmic method. Our framework can also be used to extend several other known infinitely-often lower bounds to almost-everywhere lower bounds. For example, we also proved that there is a function $f \in \mathsf{E}^{\mathsf{NP}}$ that cannot be $(1/2 + 1/2^{n^{o(1)}})$-approximated by $2^{n^{o(1)}}$-size $\mathsf{ACC}^0$ circuits on all sufficiently large input lengths.

Optimal Derandomization from Hardness Assumptions

The classic hardness-to-randomness paradigm ([Yao’82, BM’84, NW’94, IW’98, IW’99]) shows that derandomization results (such as $\mathsf{P} = \mathsf{BPP}$) can be derived from hardness assumptions (such as $\mathsf{E} \not\subset \text{i.o.-}\mathsf{SIZE}[2^{o(n)}]$). Some of my past papers are concerned with achieving the fastest possible derandomization under (very strong but still plausible) hardness assumptions.

1. What is the optimal derandomization from non-uniform hardness assumptions?

  • The classic line of work shows that exponential circuit lower bounds imply $\mathsf{P} = \mathsf{BPP}$; however, the polynomial overhead can be huge. A quick calculation shows that even starting from $\mathsf{TIME}[2^n] \not\subset \text{i.o.-}\mathsf{SIZE}[2^{0.99 n}]$, one can only derandomize $\mathsf{BPTIME}[T(n)]$ in at least $\mathsf{TIME}[T(n)^7]$ time based on [IW’99]. Under hardness assumptions against non-uniform Merlin-Arthur protocols, [Doron-Moshkovitz-Oh-Zuckerman’20] showed that $\mathsf{BPTIME}[T(n)] \subseteq \mathsf{TIME}[T(n)^2]$.

  • In [Chen-Tell’20], we proved that under the standard assumption that one-way functions exist, together with a necessary assumption against deterministic algorithms with advice, $\mathsf{BPTIME}[T(n)] \subseteq \mathsf{TIME}[T(n) \cdot n]$. Moreover, the overhead of $n$ is optimal under the Nondeterministic Strong Exponential-Time Hypothesis.
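The polynomial overhead in the classical PRG-based approach comes from enumerating all seeds of a pseudorandom generator. A back-of-the-envelope version of this accounting, with generic parameters (the constant $c$ below is illustrative, not the exact exponent from [IW’99]):

```latex
% Generic seed-enumeration accounting for PRG-based derandomization.
% A PRG $G : \{0,1\}^{s} \to \{0,1\}^{T}$ fooling size-$T$ circuits,
% built from a hard truth table, has seed length $s = c \log T$ for
% some constant $c$ determined by the hardness assumption.
% Derandomizing a $\mathsf{BPTIME}[T]$ machine by running it on the
% output of $G$ for all $2^{s}$ seeds and taking a majority vote costs
\[
  2^{s} \cdot \bigl( T + t_{G} \bigr)
  \;=\; T^{c} \cdot \bigl( T + \mathrm{poly}(T) \bigr)
  \;=\; T^{O(1)} ,
\]
% where $t_{G}$ is the time to evaluate $G$: polynomial, but with an
% exponent that can be large.
```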

2. What is the optimal derandomization from uniform hardness assumptions?

  • The derandomization results mentioned in the last bullet all assume hardness against non-uniform algorithms (algorithms with advice). Can we obtain similar derandomization results under hardness assumptions against uniform algorithms? In a classic paper, [IW’98] proved that $\mathsf{EXP} \ne \mathsf{BPP}$ implies that $\mathsf{BPP}$ can be derandomized on average in sub-exponential time. That is, for every $\mathsf{BPP}$ algorithm $L$, there is a sub-exponential-time deterministic algorithm that successfully derandomizes $L$ with probability $1 - n^{-\omega(1)}$, with respect to every polynomial-time samplable distribution. There have been several follow-up papers to [IW’98], but the fastest derandomization obtained from a uniform hardness assumption is still a quasi-polynomial-time derandomization of $\mathsf{BPP}$ on average (see [TV’07]).

  • In [Chen-Rothblum-Tell-Yogev’20], we proved that under the randomized Exponential-Time Hypothesis, one can derandomize $\mathsf{BPP}$ in $n^{\mathrm{polyloglog}(n)}$ time on average.
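The notion of average-case derandomization used above (following [IW’98, TV’07]) can be stated as:

```latex
% Average-case derandomization with respect to samplable distributions.
% A deterministic algorithm $A$ derandomizes a $\mathsf{BPP}$ language
% $L$ on average if, for every polynomial-time samplable distribution
% ensemble $\{D_n\}$ and all sufficiently large $n$,
\[
  \Pr_{x \sim D_n}\bigl[\, A(x) = L(x) \,\bigr] \;\geq\; 1 - n^{-\omega(1)} .
\]
```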

Hardness Magnification

The recent surprising phenomenon of hardness magnification (HM) (similar phenomena were known before, but interest in them has been revived since [Oliveira-Santhanam’18]) shows that innocent-looking weak circuit lower bounds for specific problems such as MCSP can imply major circuit lower bounds such as $\mathsf{NP} \not\subset \mathsf{P/poly}$. For example, if a sparse variant of MCSP does not have $n \cdot \mathrm{polylog}(n)$-size circuits, then $\mathsf{NP} \ne \mathsf{P}$ ([McKay-Murray-Williams’19]). Several of my past papers aimed to better understand this phenomenon; they are summarized below.

1. What properties of sparse-MCSP enable such a hardness magnification phenomenon?

2. How close are we to using HM theorems to prove new lower bounds?

  • Most previous work requires proving either a super-cubic De Morgan formula lower bound for sparse-MCSP, for which the best known lower bound is quadratic, or an $n \cdot \mathrm{polylog}(n)$-size circuit lower bound, which seems a bit far away from current techniques.

  • In [Chen-Jin-Williams’20], we established several sharp thresholds for HM theorems. For example, we showed that sparse-MCSP cannot be computed by sub-quadratic-size randomized De Morgan formulas, while improving that to super-quadratic would imply $\mathsf{PSPACE} \not\subset \mathsf{NC}^1$.

3. What prevents us from proving the lower bounds required by HM theorems?

  • The natural proof barrier does not seem to apply here, since HM theorems only work for special functions. So one natural question stemming from the hardness magnification phenomenon is whether there is another inherent barrier that prevents us from using current techniques to prove the lower bounds required by HM theorems.

  • In [Chen-Hirahara-Oliveira-Pich-Rajgopal-Santhanam’20], we formulated a concrete barrier called the Locality Barrier. Roughly speaking, the locality barrier says that if a lower bound method is robust enough to handle small-fan-in oracles, then it cannot be used to prove the lower bounds required by HM theorems. Unfortunately, most lower bound techniques we are aware of (random restrictions, the approximation method, communication-complexity-based lower bounds, etc.) are subject to this barrier.

4. How close are we to proving super-polynomial $\mathsf{TC}^0$ lower bounds?

  • One frontier question in circuit complexity is to prove super-polynomial lower bounds for $\mathsf{TC}^0$ circuits (circuits consisting entirely of majority gates). Certain $n^{1+1/\exp(d)}$-size lower bounds are known for a certain $\mathsf{NC}^1$-complete problem $P_{\mathsf{NC}^1}$ against depth-$d$ $\mathsf{TC}^0$ circuits [IPS’97]. [AK’10] proved that if one can show an $n^{1+1/d}$-size lower bound for $P_{\mathsf{NC}^1}$ against depth-$d$ $\mathsf{TC}^0$ circuits, then $\mathsf{NC}^1 \ne \mathsf{TC}^0$.

  • In [Chen-Tell’19], we showed that any $n^{1+1/\exp(o(d))}$-size lower bound for $P_{\mathsf{NC}^1}$ against depth-$d$ $\mathsf{TC}^0$ circuits would already imply $\mathsf{NC}^1 \ne \mathsf{TC}^0$. That is, qualitatively, we are very close to the threshold between barely-super-linear hardness and super-polynomial hardness.

Hardness of Approximation in Fine-Grained Complexity, with Connection to Communication Complexity

See [my master’s thesis] for now.