# jemdoc: menu{MENU}{index.html}, showsource, analytics{UA-31107207-1}
= My Projects

Here are summaries of some of my research projects. This page is still *under construction*. (In the following I often use abbreviations such as [https://ieeexplore.ieee.org/document/21916 \[NW'94\]] or [https://ieeexplore.ieee.org/document/743524 \[IW'98\]] (with links to the papers) to refer to well-known previous work, and use full author names to refer to recent papers). Note that a single paper may contribute to multiple projects below.

=== Project Links:
- [projects.html\#algorithmic-method Algorithmic Method for Proving Circuit Lower Bounds]
- [projects.html\#optimal-derand Optimal Derandomization from Hardness Assumptions]
- [projects.html\#hardness-magnification Hardness Magnification]

~~~
{}{raw}
<a name="algorithmic-method"></a>
~~~

== Algorithmic Method for Proving Circuit Lower Bounds

The algorithmic approach, which connects algorithm design to circuit lower bounds, lies behind the recent breakthrough result that $\mathsf{NEXP} \not\subset \mathsf{ACC}^0$ [https://dl.acm.org/doi/10.1145/2559903 \[Wil'14\]]. The most notable feature of this approach is that it seems to avoid all previously known barriers for proving circuit lower bounds, and it puts together nearly every gem in complexity theory, ranging from the time hierarchy theorem to $\mathsf{IP} = \mathsf{PSPACE}$. A recent breakthrough [https://eccc.weizmann.ac.il/report/2017/188/ \[Murray-Williams'18\]] significantly improved the state-of-the-art lower bounds from this paradigm by showing $\mathsf{NTIME}[n^{\mathrm{polylog}(n)}] \not\subset \mathsf{ACC}^0$.

Some of my papers are devoted to obtaining better circuit lower bounds using the algorithmic method. They are summarized below.

=== *1. Tighter Connections between Circuit-Analysis Algorithms and Circuit Lower Bounds*

- In previous work on the connection from circuit-analysis algorithms to circuit lower bounds, in order to prove a lower bound for $\mathcal{C}$ circuits, one usually has to design algorithms for $\widetilde{\mathcal{C}}$ circuits, where $\widetilde{\mathcal{C}}$ is a circuit class larger than $\mathcal{C}$.
- *In [https://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10841 \[Chen-Williams'19\]]*, we draw a connection to [https://people.seas.harvard.edu/~salil/research/shortPCP.pdf PCPs of Proximity (PCPPs)] and show a tighter connection between circuit-analysis algorithms and circuit lower bounds: in order to prove lower bounds for $\mathcal{C}$ circuits, it suffices to start from a non-trivial derandomization of an XOR (an AND or an OR also suffices) of two $\mathcal{C}$ circuits. The PCPP-based techniques in this paper turned out to be very useful in subsequent work.

=== *2. Average-Case Circuit Lower Bounds from the Algorithmic Method*

- Most previous work using the algorithmic method only achieves /worst-case/ lower bounds, while average-case lower bounds (w.r.t. the uniform distribution) have more applications, especially for building PRGs; the notion of approximation used below is recalled in the note after this list. [https://eccc.weizmann.ac.il/report/2017/173/ \[R.Chen-Oliveira-Santhanam'18\]] took the first step in this direction and proved that $\mathsf{NEXP}$ cannot be $1/2 + 1/\mathrm{polylog}(n)$-approximated by $\mathsf{ACC}^0$.
- *In [https://eccc.weizmann.ac.il/report/2019/031/ \[Chen'19\]]*, we proved that $\mathsf{NTIME}[n^{\mathrm{polylog}(n)}]$ cannot be $1/2 + 1/\mathrm{polylog}(n)$-approximated by $\mathsf{ACC}^0$, improving on both [https://eccc.weizmann.ac.il/report/2017/173/ \[R.Chen-Oliveira-Santhanam'18\]] and [https://eccc.weizmann.ac.il/report/2017/188/ \[Murray-Williams'18\]].
- *In [https://eccc.weizmann.ac.il/report/2020/010/ \[Chen-Ren'20\]]*, building crucially on [https://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10841 \[Chen-Williams'19\]] and [https://eccc.weizmann.ac.il/report/2019/031/ \[Chen'19\]], we proved that $\mathsf{NTIME}[n^{\mathrm{polylog}(n)}]$ cannot be $1/2 + 1/\mathrm{poly}(n)$-approximated by $\mathsf{ACC}^0$.
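
For concreteness, here is the standard notion that the statements above refer to; this is a generic textbook-style definition introduced only for illustration, and the symbols $\gamma$ and $\varepsilon$ are placeholders rather than parameters taken from the papers above. A function $f$ is $\gamma$-approximated by a circuit class $\mathcal{C}$ on inputs of length $n$ if some circuit $C \in \mathcal{C}$ satisfies

\(
\Pr_{x \sim \{0,1\}^n}[ C(x) = f(x) ] \ge \gamma.
\)

An average-case lower bound of the form "$f$ cannot be $(1/2 + \varepsilon)$-approximated by $\mathcal{C}$" thus says that every circuit in the class agrees with $f$ on at most a $1/2 + \varepsilon$ fraction of inputs, barely better than the agreement of $1/2$ that a constant function already achieves.
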
=== *3. Construction of Rigid Matrices Using the Algorithmic Method*

#- A rigid matrix is a matrix having high Hamming distance to all low-rank matrices. It is a long-standing open question to give an explicit construction of rigid matrices.

=== *4. Almost-Everywhere Circuit Lower Bounds via the Algorithmic Method*

- All previous lower bounds proved via the algorithmic method are /infinitely-often/: for a certain hard function $f$, they only guarantee that $f$ is hard on infinitely many input lengths. Ideally, one would want $f$ to be hard on all sufficiently large input lengths. There have been attempts to address this issue: in [http://people.cs.uchicago.edu/~fortnow/papers/robust.pdf \[Fortnow-Santhanam'11\]], an intermediate notion called a /significant separation/ was introduced, and it was shown that $\mathsf{NEXP} \not\subset \mathsf{ACC}^0$ can be made a significant separation.
- *In [https://eccc.weizmann.ac.il/report/2020/150/ \[Chen-Lyu-Williams'20\]]*, we proved $\mathsf{E}^{\mathsf{NP}} \not\subset i.o.\textrm{-}\mathsf{ACC}^0$, which is the first /almost-everywhere/ separation proved via the algorithmic method. Our framework can also be used to extend several other known infinitely-often lower bounds to almost-everywhere lower bounds. For example, we also proved that there is a function $f \in \mathsf{E}^{\mathsf{NP}}$ which cannot be $1/2+1/2^{n^{o(1)}}$-approximated by $2^{n^{o(1)}}$-size $\mathsf{ACC}^0$ circuits on all sufficiently large input lengths.

~~~
{}{raw}
<a name="optimal-derand"></a>
~~~

== Optimal Derandomization from Hardness Assumptions

The classic hardness-to-randomness paradigm (\[[https://ieeexplore.ieee.org/document/4568378 Yao'82], [https://ieeexplore.ieee.org/document/4568382 BM'84], [https://ieeexplore.ieee.org/document/21916 NW'94], [https://ieeexplore.ieee.org/document/743524 IW'98], [https://dl.acm.org/doi/10.1145/258533.258590 IW'99]\]) shows that derandomization results (such as $\mathsf{P} = \mathsf{BPP}$) can be derived from hardness assumptions (such as $\mathsf{E} \not\subset i.o.\textrm{-}\mathsf{SIZE}[2^{o(n)}]$). Some of my past papers are concerned with achieving the fastest possible derandomization under (very strong but still plausible) hardness assumptions.

=== *1. What is the optimal derandomization from non-uniform hardness assumptions?*

- The classic line of work shows that exponential circuit lower bounds imply $\mathsf{P} = \mathsf{BPP}$. However, the polynomial overhead can be huge: a quick calculation (sketched in the note after this list) shows that even starting from $\mathsf{TIME}[2^n] \not\subset i.o.\textrm{-}\mathsf{SIZE}[2^{0.99 \cdot n}]$, one can only derandomize $\mathsf{BPTIME}[T(n)]$ in at least $\mathsf{TIME}[T(n)^7]$ time based on [https://dl.acm.org/doi/10.1145/258533.258590 \[IW'99\]]. Under hardness assumptions against non-uniform Merlin-Arthur protocols, [https://eccc.weizmann.ac.il/report/2019/099/ \[Doron-Moshkovitz-Oh-Zuckerman'20\]] showed that $\mathsf{BPTIME}[T(n)] \subset \mathsf{TIME}[T(n)^2]$.
- *In [https://eccc.weizmann.ac.il/report/2020/148/ \[Chen-Tell'20\]]*, we proved that under the standard assumption that one-way functions (OWFs) exist, together with a *necessary* assumption against deterministic algorithms with advice, $\mathsf{BPTIME}[T(n)] \subset \mathsf{TIME}[T(n) \cdot n]$. Moreover, the overhead of $n$ is optimal under the [https://eccc.weizmann.ac.il//report/2015/148/ Nondeterministic Strong Exponential-Time Hypothesis].
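
To give a sense of where the polynomial overhead in these results comes from, here is a rough back-of-the-envelope sketch of the standard PRG-based derandomization; the symbols $\ell$, $t_G$ and $c$ are generic placeholders, and the calculation is only illustrative rather than taken from any of the papers above. The hardness assumption is used to construct a pseudorandom generator $G : \{0,1\}^{\ell} \to \{0,1\}^{T(n)}$ whose output looks random to circuits of size roughly $T(n)$ (in particular, to the circuit obtained by fixing the input of the randomized algorithm). One then derandomizes a $\mathsf{BPTIME}[T(n)]$ algorithm by enumerating all $2^{\ell}$ seeds, running the algorithm with each output of $G$ in place of its random coins, and taking a majority vote, for a total running time of roughly

\(
2^{\ell} \cdot ( T(n) + t_G ),
\)

where $t_G$ is the time needed to evaluate $G$ on a single seed. Even with a seed length of $\ell = c \cdot \log T(n)$ for some constant $c \ge 1$, this is at least $T(n)^{c+1}$, so the exponent of the final deterministic running time is dictated by the constant $c$ and by $t_G$, both of which depend on the strength of the hardness assumption and on the efficiency of the PRG construction; the specific overheads quoted in the bullets above arise from exactly these parameters.
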
=== *2. What is the optimal derandomization from uniform hardness assumptions?*

- The derandomization results mentioned in the last bullet all assume hardness against non-uniform algorithms (algorithms with advice). Can we obtain similar derandomization results under hardness assumptions against uniform algorithms? In a classic paper, [https://ieeexplore.ieee.org/document/743524 \[IW'98\]] proved that $\mathsf{EXP} \ne \mathsf{BPP}$ implies that $\mathsf{BPP}$ can be derandomized /on average/ in sub-exponential time. That is, for every $\mathsf{BPP}$ algorithm $L$, there is a sub-exponential time deterministic algorithm that successfully derandomizes $L$ with probability $1 - n^{-\omega(1)}$, with respect to /every/ polynomial-time samplable distribution. There are several follow-up papers to [https://ieeexplore.ieee.org/document/743524 \[IW'98\]], but the fastest derandomization they obtain from a uniform hardness assumption is still a quasi-polynomial time derandomization of $\mathsf{BPP}$ on average (see [https://ieeexplore.ieee.org/document/1004348 \[TV'07\]]).
- *In [https://eccc.weizmann.ac.il/report/2019/169/ \[Chen-Rothblum-Tell-Yogev'20\]]*, we proved that under the randomized Exponential-Time Hypothesis, one can derandomize $\mathsf{BPP}$ in $n^{\mathrm{polyloglog}(n)}$ time on average.

~~~
{}{raw}
<a name="hardness-magnification"></a>
~~~

== Hardness Magnification

The recent surprising phenomenon of hardness magnification (similar phenomena were known before, but interest in them has been revived since [https://eccc.weizmann.ac.il/report/2018/139/ \[Oliveira-Santhanam'18\]]) shows that innocent-looking weak circuit lower bounds for specific problems such as MCSP (the Minimum Circuit Size Problem: given the truth table of a Boolean function and a size parameter $s$, decide whether the function has a circuit of size at most $s$) can imply major circuit lower bounds such as $\mathsf{NP} \not\subset \mathsf{P/poly}$. For example, if a sparse variant of MCSP does not have $n \cdot \mathrm{polylog}(n)$-size circuits, then $\mathsf{NP} \ne \mathsf{P}$ ([https://people.csail.mit.edu/rrw/MCSP-MKTP-stoc19.pdf \[McKay-Murray-Williams'19\]]). Several of my past papers aimed to better understand this phenomenon and are summarized below.

=== *1. What properties of sparse-MCSP enable such a hardness magnification phenomenon?*

- *In [https://eccc.weizmann.ac.il/report/2019/075/ \[Chen-McKay-Murray-Williams'19\]] and [https://eccc.weizmann.ac.il/report/2019/118/ \[Chen-Jin-Williams'19\]]*, we showed a similar phenomenon for equally sparse problems in $\mathsf{NP}$. This suggests that perhaps the most important property of these hard problems is their sparsity; a counting sketch of this sparsity is given in the note after this list.
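
To give a sense of the sparsity referred to above, here is a standard counting estimate; the parameters are only illustrative, and the precise sparse variants of MCSP differ across the papers above. A circuit of size $s$ (with $s \ge n$) can be described by roughly $O(s \log s)$ bits, so at most $2^{O(s \log s)}$ of the Boolean functions on $n$ inputs have circuits of size $s$. Hence, for truth tables of length $N = 2^n$ and a small size parameter such as $s = N^{o(1)}$, the number of YES instances of length $N$ is at most

\(
2^{O(s \log s)} = 2^{N^{o(1)}},
\)

a vanishing fraction of all $2^N$ inputs of that length. It is in this sense that such MCSP variants (and the equally sparse $\mathsf{NP}$ problems mentioned above) are sparse languages.
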
=== *2. How close are we to using HM theorems to prove new lower bounds?*

- Most previous work requires proving either super-cubic De Morgan formula lower bounds for sparse-MCSP, for which the best known lower bound is quadratic, or an $n \cdot \mathrm{polylog}(n)$-size circuit lower bound, which seems a bit far away from current techniques.
- *In [https://eccc.weizmann.ac.il/report/2020/065/ \[Chen-Jin-Williams'20\]]*, we established several sharp thresholds for HM theorems. For example, we showed that sparse-MCSP cannot be computed by sub-quadratic-size randomized De Morgan formulas, while improving that to super-quadratic would imply $\mathsf{PSPACE} \not\subset \mathsf{NC}^1$.

=== *3. What prevents us from proving the lower bounds required by hardness magnification (HM) theorems?*

- The natural proofs barrier does not seem to apply here, since HM theorems only work for special functions. So one natural question stemming from the hardness magnification phenomenon is to understand whether there is another inherent barrier that prevents us from using current techniques to prove the lower bounds required by HM theorems.
- *In [https://eccc.weizmann.ac.il/report/2019/168/ \[Chen-Hirahara-Oliveira-Pich-Rajgopal-Santhanam'20\]]*, we formulated a concrete barrier called the /Locality Barrier/. Roughly speaking, the locality barrier says that if a lower bound method is robust enough to handle small-fan-in oracles, then it cannot be used to prove the lower bounds required by HM theorems. Unfortunately, it seems that most lower bound techniques we are aware of (random restrictions, the approximation method, communication-complexity-based lower bounds, etc.) are subject to this barrier.

=== *4. How close are we to proving super-polynomial $\mathsf{TC}^{0}$ Lower Bounds?*

- One frontier question in circuit complexity is to prove super-polynomial lower bounds for $\mathsf{TC}^0$ circuits (circuits consisting entirely of Majority gates). Certain $n^{1+1/\exp(d)}$-size lower bounds are known for a certain $\mathsf{NC}^1$-complete problem $P_{\mathsf{NC}^1}$ against depth-$d$ $\mathsf{TC}^0$ circuits [https://dl.acm.org/doi/10.1137/S0097539792282965 \[IPS'97\]]. [https://dl.acm.org/doi/10.1145/1706591.1706594 \[AK'10\]] proved that for $P_{\mathsf{NC}^1}$, if one can show $n^{1+1/d}$-size lower bounds against depth-$d$ $\mathsf{TC}^0$ circuits, then $\mathsf{NC}^1 \ne \mathsf{TC}^0$.
- *In [https://eccc.weizmann.ac.il/report/2018/199/ \[Chen-Tell'19\]]*, we showed that any $n^{1+1/\exp(o(d))}$-size lower bound for $P_{\mathsf{NC}^1}$ against depth-$d$ $\mathsf{TC}^0$ circuits would already imply $\mathsf{NC}^1 \ne \mathsf{TC}^0$. That is, qualitatively we are very close to the threshold between /barely-super-linear/ hardness and /super-polynomial/ hardness.

== Hardness of Approximation in Fine-Grained Complexity, with Connections to Communication Complexity

See [http://www.mit.edu/~lijieche/master-thesis.pdf \[my master's thesis\]] for now.

# == Hardness of Counting Distinct Elements Privately in Distributed Settings

#In the summer of 2020, I did an internship at Google MTV, supervised

#*$\mathsf{CountDistinct}$ in distributed settings*: There are $n$ users, each holding one element, who want to estimate how many distinct elements they hold in total, subject to the constraint of differential privacy.