Hardness vs. Randomness: Superfast and Non-Black-Box

The classic hardness-to-randomness paradigm ([Yao’82, Blum-Micali’84, Nisan-Wigderson’94, Impagliazzo-Wigderson’97, Impagliazzo-Wigderson’98]) shows that derandomization results (such as \(\mathsf{P} = \mathsf{BPP}\)) can be derived from hardness assumptions (such as \(\mathsf{E} = \mathsf{TIME}[2^{O(n)}] \not\subset i.o.\textrm{-}\mathsf{SIZE}[2^{o(n)}]\)). Some of my past papers ([Chen-Tell’21a], [Chen-Tell’21b]) are motivated by the following two fundamental questions regarding derandomization.

1. What is the fastest derandomization?

2. What is the minimum hardness assumption for derandomization?

1. What is the optimal derandomization from plausible hardness assumptions?

  • Background. The classic line of work shows that exponential circuit lower bounds imply \(\mathsf{P} = \mathsf{BPP}\). However, the polynomial overhead can be huge: a quick calculation shows that, even starting from \(\mathsf{TIME}[2^n] \not\subset i.o.\textrm{-}\mathsf{SIZE}[2^{0.99 \cdot n}]\), one can only derandomize \(\mathsf{BPTIME}[T(n)]\) in deterministic time at least \(T(n)^c\) for a constant \(c \ge 7\), based on [Impagliazzo-Wigderson’97] (a rough sketch of where this overhead comes from appears at the end of this bullet).
    In exciting recent work, [Doron-Moshkovitz-Oh-Zuckerman’20] showed that, under hardness assumptions against non-uniform Merlin-Arthur protocols, \(\mathsf{BPTIME}[T(n)] \subset \mathsf{TIME}[T(n)^2]\).
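    To illustrate where a polynomial overhead comes from, here is a rough back-of-the-envelope sketch (the precise bookkeeping behind the constant \(c \ge 7\) is omitted). Fixing an input \(x\), the randomized algorithm becomes a distinguisher circuit of size roughly \(T(n)\), so we need a PRG \(G \colon \{0,1\}^{\ell} \to \{0,1\}^{T(n)}\) fooling such circuits, and the deterministic simulation enumerates all seeds:
    \[
      \text{deterministic time} \;\approx\; 2^{\ell} \cdot \big( T(n) + \text{time to compute } G \big).
    \]
    Even when the hardness assumption gives seed length \(\ell = c' \cdot \log T(n)\), the hardness amplification and the Nisan-Wigderson design inside [Impagliazzo-Wigderson’97] force \(c'\) to be a constant noticeably larger than \(1\), so the seed enumeration alone already costs \(T(n)^{c'}\) and the final bound has the form \(T(n)^{c}\).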

  • Near-optimal worst-case derandomization of \(\mathsf{BPTIME}[T(n)]\). An immediate question following [DMOZ’20] is whether one can further improve the quadratic blow-up in their work. In [Chen-Tell’21a], we proved that, under the standard assumption that one-way functions (OWFs) exist, together with a necessary hardness assumption against deterministic algorithms with advice, \(\mathsf{BPTIME}[T(n)] \subset \mathsf{TIME}[T(n) \cdot n]\). Moreover, the overhead of \(n\) is optimal under the Nondeterministic Strong Exponential-Time Hypothesis (NSETH).
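    For context (stated roughly from the standard formulation, not quoted from [Chen-Tell’21a]): NSETH postulates that for every \(\varepsilon > 0\) there is a \(k\) such that \(k\)-TAUT \(\notin \mathsf{NTIME}[2^{(1-\varepsilon) n}]\), i.e., co-nondeterministic algorithms cannot beat brute force for refuting \(k\)-SAT.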

  • Randomness is indistinguishable from useless. Given the matching lower bound under NSETH, further improving on the overhead of \(n\) above appears hard (or impossible), since doing so would refute NSETH. One way to get around this barrier is to slightly weaken the correctness requirement of derandomization.
    In [Chen-Tell’21b], we show that under plausible hardness assumptions, one can derandomize a \(T(n)\)-time randomized algorithm \(A\) to obtain a \(T(n) \cdot n^{o(1)}\)-time deterministic algorithm \(B\), such that no polynomial-time uniform adversary \(S\) can find a mistake (an input \(x\) such that \(A(x) \ne B(x)\)) with non-negligible probability (see the displayed condition at the end of this bullet). In other words, this derandomization is indistinguishable from being correct, and it has very small overhead. Arguably, such derandomization already suffices for many practical applications.
    PRGs are not enough. Surprisingly, due to their black-box nature, one can show that no PRG can yield the small-overhead derandomization mentioned above. Instead, we rely on a non-black-box approach based on targeted PRGs to achieve this strong derandomization. See Section 3 for more details.
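    In slightly more formal terms (a hedged paraphrase of the guarantee, so the exact quantifiers in [Chen-Tell’21b] may differ): for every probabilistic polynomial-time adversary \(S\) that is given \(1^n\) and outputs an \(n\)-bit input \(x\),
    \[
      \Pr_{x \leftarrow S(1^n)}\big[\, A(x) \ne B(x) \,\big] \le \mathrm{negl}(n),
    \]
    so on every input that can be found efficiently, the deterministic simulation agrees with \(A\).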

2. What is the minimum hardness assumption for showing \(\mathsf{prP} = \mathsf{prBPP}\)?

  • Background. Classic work shows that circuit lower bounds for \(\mathsf{E} = \mathsf{TIME}[2^{O(n)}]\) are equivalent to the existence of pseudorandom generators (PRGs) for \(\mathsf{P}_{/\mathsf{poly}}\) (a.k.a. black-box derandomization). However, it is not clear whether such circuit lower bounds are also necessary for derandomization, since a derandomization algorithm can be white-box, in the sense that it may heavily exploit the structure of the given input circuit.
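    For concreteness, one common form of this equivalence (recalled roughly, ignoring infinitely-often subtleties): \(\mathsf{E} \not\subset \mathsf{SIZE}[2^{\varepsilon n}]\) for some constant \(\varepsilon > 0\) if and only if there is a PRG \(G \colon \{0,1\}^{O(\log n)} \to \{0,1\}^{n}\), computable in \(\mathrm{poly}(n)\) time, that fools all circuits of size \(n\).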

  • Derandomization from new hardness assumptions. In [Chen-Tell’21b], we introduce a new type of hardness assumption, almost-all-input hardness against uniform probabilistic algorithms, and show that such assumptions are both sufficient and necessary for \(\mathsf{prP} = \mathsf{prBPP}\). In other words, this new type of assumption could be the right assumption for derandomization.
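    As a rough sketch of the flavor of this notion (paraphrased, so the exact time bounds and success probabilities in [Chen-Tell’21b] may differ): a function \(f\), computable in some fixed deterministic time bound, is almost-all-input hard against probabilistic time-\(T\) algorithms if every probabilistic \(T\)-time algorithm \(A\) fails to compute \(f\) on all but finitely many inputs, i.e., \(\Pr[A(x) = f(x)] < 2/3\) for all but finitely many \(x\).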

3. A new hardness vs. randomness framework based on targeted PRGs

  • PRGs vs. Targeted PRGs. A targeted PRG \(G\) (introduced by [Goldreich’11]) takes as input a circuit \(C\) over \(n\)-bit inputs and a short seed of length \(s\), with the guarantee that its output distribution \(G(C,U_s)\) and the uniform distribution \(U_n\) are indistinguishable by \(C\) itself. In particular, unlike standard PRGs, a targeted PRG is a white-box object, in the sense that its output can depend on the given circuit \(C\).
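    Spelled out (with the error parameter \(\varepsilon\) left implicit): for every circuit \(C\) over \(n\)-bit inputs,
    \[
      \Big|\, \Pr_{z \sim U_s}\big[ C(G(C,z)) = 1 \big] - \Pr_{r \sim U_n}\big[ C(r) = 1 \big] \,\Big| \le \varepsilon ,
    \]
    and a targeted PRG with small seed length \(s\) yields derandomization by enumerating all \(2^{s}\) seeds and taking the majority vote of \(C(G(C,z))\).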

  • A general framework for constructing targeted PRGs/HSGs. The main conceptual contribution of [Chen-Tell’21b] is a general framework for constructing targeted PRGs (or HSGs) from a new type of hardness assumption. This new framework helps us obtain both faster derandomization (see the “randomness is indistinguishable from useless” part of Section 1) and new hardness assumptions for \(\mathsf{prP} = \mathsf{prBPP}\) (see Section 2). We believe this new framework will find other applications in complexity theory and in TCS more broadly.
    Our construction crucially builds on the doubly efficient proof system of [Goldwasser-Kalai-Rothblum’15]. Very roughly speaking, the targeted PRG is constructed from the honest prover’s strategy function when the GKR protocol is run on the given input circuit (a schematic of the resulting argument appears at the end of this bullet).
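    At a very high level, the analysis follows the reconstruction paradigm (this is a hedged paraphrase, not the exact argument of [Chen-Tell’21b]):
    \[
      \text{efficient distinguisher for } G(C, U_s)
      \;\Longrightarrow\;
      \text{fast probabilistic algorithm computing the hard function on the relevant input}
      \;\Longrightarrow\;
      \text{contradiction with almost-all-input hardness},
    \]
    where the first implication exploits the efficiency of the honest prover and verifier in the GKR protocol.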