Publications — Pritish Kamath
2017

Improved Bounds for Universal One-Bit Compressive Sensing [pdf]
Jayadev Acharya, Arnab Bhattacharyya, Pritish Kamath
International Symposium on Information Theory (ISIT), 2017
Abstract. Unlike compressive sensing, where the measurement outputs are assumed to be real-valued and have infinite precision, in one-bit compressive sensing measurements are quantized to one bit: their signs. In this work, we show how to recover the support of sparse high-dimensional vectors in the one-bit compressive sensing framework with an asymptotically near-optimal number of measurements. We also improve the bounds on the number of measurements for approximately recovering vectors from one-bit compressive sensing measurements. Our results are universal, namely the same measurement scheme works simultaneously for all sparse vectors.
Our proof of optimality for support recovery is obtained by showing an equivalence between the task of support recovery using 1-bit compressive sensing and a well-studied combinatorial object known as Union-Free Families.
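The measurement model itself is easy to simulate. The sketch below is illustrative only: the i.i.d. Gaussian matrix is one standard choice of a universal measurement scheme, and all dimensions are made up. It shows that only the signs of the linear measurements survive quantization, which is why at best the direction of $x$, and in particular its support, can be recovered.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 1000, 5, 200        # ambient dimension, sparsity, number of measurements
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)          # a k-sparse signal

A = rng.normal(size=(m, n))   # one fixed measurement matrix, shared for all signals
y = np.sign(A @ x)            # one-bit measurements: only the signs are kept

# Signs are invariant under positive scaling of the signal, so only the
# direction of x (and hence its support) can ever be recovered.
assert np.array_equal(y, np.sign(A @ (5.0 * x)))
```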

Query-to-Communication Lifting for $\mathrm{P}^{\mathrm{NP}}$ [pdf]
Mika Göös, Pritish Kamath, Toniann Pitassi, Thomas Watson
Conference on Computational Complexity (CCC), 2017
Abstract. We prove that the $\mathrm{P}^{\mathrm{NP}}$-type query complexity (alternatively, decision list width) of any Boolean function $f$ is quadratically related to the $\mathrm{P}^{\mathrm{NP}}$-type communication complexity of a lifted version of $f$. As an application, we show that a certain "product" lower bound method of Impagliazzo and Williams (CCC 2010) fails to capture $\mathrm{P}^{\mathrm{NP}}$ communication complexity up to polynomial factors, which answers a question of Papakonstantinou, Scheder, and Song (CCC 2014).

Compression in a Distributed Setting [pdf coming soon]
Badih Ghazi, Elad Haramaty, Pritish Kamath, Madhu Sudan
Innovations in Theoretical Computer Science (ITCS), 2017
Abstract. Motivated by an attempt to understand the formation and development of (human) language, we introduce a "distributed compression" problem. In our problem a sequence of pairs of players from a set of $K$ players are chosen and tasked to communicate messages drawn from an unknown distribution $Q$. Arguably languages are created and evolve to compress frequently occurring messages, and we focus on this aspect. The only knowledge that players have about the distribution $Q$ is from previously drawn samples, but these samples differ from player to player. The only common knowledge between the players is restricted to a common prior distribution $P$ and some constant number of bits of information (such as a learning algorithm). Letting $T_\varepsilon$ denote the number of iterations it would take for a typical player to obtain an $\epsilon$approximation to $Q$ in total variation distance, we ask whether $T_\varepsilon$ iterations suffice to compress the messages down roughly to their entropy and give a partial positive answer.
We show that a natural uniform algorithm can compress the communication down to an average cost per message of $O(H(Q) + \log (D(P  Q) + O(1))$ in $\widetilde{O}(T_\varepsilon)$ iterations while allowing for $O(\varepsilon)$error, where $D(\cdot  \cdot)$ denotes the KLdivergence between distributions. For large divergences this compares favorably with the static algorithm that ignores all samples and compresses down to $H(Q) + D(P  Q)$ bits, while not requiring $T_\varepsilon\cdot K$ iterations that it would take players to develop optimal but separate compressions for each pair of players. Along the way we introduce a "datastructural" view of the task of communicating with a natural language and show that our natural algorithm can also be implemented by an efficient data structure, whose storage is comparable to the storage requirements of $Q$ and whose query complexity is comparable to the lengths of the message to be compressed. Our results give a plausible mathematical analogy to the mechanisms by which human languages get created and evolve, and in particular highlights the possibility of coordination towards a joint task (agreeing on a language) while engaging in distributed learning.
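The static benchmark above is an instance of the standard cross-entropy identity: coding with ideal codeword lengths $\log_2(1/P(x))$ built for the prior $P$, while messages are actually drawn from $Q$, costs the entropy of $Q$ plus a divergence penalty. A small numerical check, with made-up distributions; the penalty below is computed as $\sum_x Q(x)\log_2(Q(x)/P(x))$, the term the abstract writes as $D(P \| Q)$.

```python
import numpy as np

# Made-up distributions over a 4-message universe.
P = np.array([0.5, 0.25, 0.125, 0.125])   # common prior the static code is built for
Q = np.array([0.1, 0.2, 0.3, 0.4])        # true message distribution

H_Q  = -np.sum(Q * np.log2(Q))            # entropy of Q
div  =  np.sum(Q * np.log2(Q / P))        # divergence penalty for coding with P
cost = -np.sum(Q * np.log2(P))            # expected ideal codeword length under P

# Cost of the static code = entropy of Q plus the divergence penalty.
assert abs(cost - (H_Q + div)) < 1e-12
```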
2016

The Optimality of Correlated Sampling [pdf]
Mohammad Bavarian, Badih Ghazi, Elad Haramaty, Pritish Kamath, Ronald L. Rivest, Madhu Sudan
(Manuscript), 2016
Abstract. In the correlated sampling problem, two players, say Alice and Bob, are given two distributions, say $P$ and $Q$ respectively, over the same universe and access to shared randomness. The two players are required to output two elements, without any interaction, sampled according to their respective distributions, while trying to minimize the probability that their outputs disagree. A well-known protocol due to Holenstein, with close variants (for similar problems) due to Broder, and to Kleinberg and Tardos, solves this task with disagreement probability at most $2\delta/(1+\delta)$, where $\delta$ is the total variation distance between $P$ and $Q$. This protocol has been used in several different contexts including sketching algorithms, approximation algorithms based on rounding linear programming relaxations, the study of parallel repetition and cryptography.
In this note, we give a surprisingly simple proof that this protocol is in fact tight. Specifically, for every $\delta \in (0,1)$, we show that any correlated sampling scheme should have disagreement probability at least $2\delta/(1+\delta)$. This partially answers a recent question of Rivest.
Our proof is based on studying a new problem we call constrained agreement. Here, Alice is given a subset $A \subseteq [n]$ and is required to output an element $i \in A$, Bob is given a subset $B \subseteq [n]$ and is required to output an element $j \in B$, and the goal is to minimize the probability that $i \neq j$. We prove tight bounds on this question, which turn out to imply tight bounds for correlated sampling. Though we settle basic questions about the two problems, our formulation also leads to several questions that remain open.

Decidability of Non-Interactive Simulation of Joint Distributions [pdf]
Badih Ghazi, Pritish Kamath, Madhu Sudan
Foundations of Computer Science (FOCS), 2016
Abstract. We present decidability results for a subclass of "non-interactive" simulation problems, a well-studied class of problems in information theory. A non-interactive simulation problem is specified by two distributions $P(x,y)$ and $Q(u,v)$: The goal is to determine if two players, Alice and Bob, that observe sequences $X^n$ and $Y^n$ respectively, where $\{(X_i, Y_i)\}_{i=1}^n$ are drawn i.i.d. from $P(x,y)$, can generate pairs $U$ and $V$ respectively (without communicating with each other) with a joint distribution that is arbitrarily close in total variation to $Q(u,v)$. Even when $P$ and $Q$ are extremely simple: e.g., $P$ is uniform on the triples $\{(0,0), (0,1), (1,0)\}$ and $Q$ is a "doubly symmetric binary source", i.e., $U$ and $V$ are uniform $\pm 1$ variables with correlation, say, $0.49$, it is open if $P$ can simulate $Q$.
In this work, we show that whenever $P$ is a distribution on a finite domain and $Q$ is a $2 \times 2$ distribution, then the non-interactive simulation problem is decidable: specifically, given $\delta > 0$ the algorithm runs in time bounded by some function of $P$ and $\delta$ and either gives a non-interactive simulation protocol that is $\delta$-close to $Q$ or asserts that no protocol gets $O(\delta)$-close to $Q$. The main challenge to such a result is determining explicit (computable) convergence bounds on the number $n$ of samples that need to be drawn from $P(x,y)$ to get $\delta$-close to $Q$. We invoke contemporary results from the analysis of Boolean functions such as the invariance principle and a regularity lemma to obtain such explicit bounds.

Communication complexity of permutation-invariant functions [pdf]
Badih Ghazi, Pritish Kamath, Madhu Sudan
Symposium on Discrete Algorithms (SODA), 2016
Abstract. Motivated by the quest for a broader understanding of upper bounds in communication complexity, at least for simple functions, we introduce the class of "permutation-invariant" functions. A partial function $f:\{0,1\}^n \times \{0,1\}^n\to \{0,1,?\}$ is permutation-invariant if for every bijection $\pi:\{1,\ldots,n\} \to \{1,\ldots,n\}$ and every $\mathbf{x}, \mathbf{y} \in \{0,1\}^n$, it is the case that $f(\mathbf{x}, \mathbf{y}) = f(\mathbf{x}^{\pi}, \mathbf{y}^{\pi})$. Most of the commonly studied functions in communication complexity are permutation-invariant. For such functions, we present a simple complexity measure (computable in time polynomial in $n$ given an implicit description of $f$) that describes their communication complexity up to polynomial factors and up to an additive error that is logarithmic in the input size. This gives a coarse taxonomy of the communication complexity of simple functions.
Our work highlights the role of the well-known lower bounds of functions such as 'Set-Disjointness' and 'Indexing', while complementing them with the relatively lesser-known upper bounds for 'Gap-Inner-Product' (from the sketching literature) and 'Sparse-Gap-Inner-Product' (from the recent work of Canonne et al. [ITCS 2015]). We also present consequences to the study of communication complexity with imperfectly shared randomness where we show that for total permutation-invariant functions, imperfectly shared randomness results in only a polynomial blowup in communication complexity after an additive $O(\log \log n)$ overhead.
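As a concrete instance of the definition, Set-Disjointness is permutation-invariant: relabeling the coordinates by the same bijection $\pi$ on both inputs never changes the value. A quick randomized check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

def disjointness(x, y):
    # Set-Disjointness: 1 iff the supports of x and y do not intersect.
    return int(not np.any(x & y))

for _ in range(100):
    x = rng.integers(0, 2, size=n)
    y = rng.integers(0, 2, size=n)
    pi = rng.permutation(n)
    # Applying the same permutation to both inputs preserves the value.
    assert disjointness(x, y) == disjointness(x[pi], y[pi])
```

Indeed, the orbit of a pair $(\mathbf{x}, \mathbf{y})$ under simultaneous permutations is determined by the triple $(|\mathbf{x}|, |\mathbf{y}|, \langle \mathbf{x}, \mathbf{y} \rangle)$, so a total permutation-invariant function depends only on these three numbers.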
2015

Communication with partial noiseless feedback [pdf]
Bernhard Haeupler, Pritish Kamath, Ameya Velingker
International Workshop on Randomization and Computation (RANDOM), 2015
Abstract. We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves sending a sequence of bits over a channel while receiving feedback bits from Bob for a $\delta$ fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of $0 \le \delta \le 1$, such that Alice can send a message to Bob via some protocol with a $\delta$ fraction of noiseless feedback. The case $\delta = 1$ corresponds to full feedback, in which the result of Berlekamp ['64] implies that the maximum tolerable error rate is $1/3$, while the case $\delta = 0$ corresponds to no feedback, in which the maximum tolerable error rate is $1/4$, achievable by use of a binary error-correcting code.
In this work, we show that for any $\delta \in (0,1]$ and $\gamma \in [0, 1/3)$, there exists a randomized communication scheme with noiseless $\delta$-feedback, such that the probability of miscommunication is low, as long as no more than a $\gamma$ fraction of the rounds are corrupted. Moreover, we show that for any $\delta \in (0, 1]$ and $\gamma < f(\delta)$, there exists a deterministic communication scheme with noiseless $\delta$-feedback that always decodes correctly as long as no more than a $\gamma$ fraction of rounds are corrupted. Here $f$ is a monotonically increasing, piecewise linear, continuous function with $f(0) = 1/4$ and $f(1) = 1/3$. Also, the rate of communication in both cases is constant (dependent on $\delta$ and $\gamma$ but independent of the input length).
2013

Arithmetic Circuits: A chasm at depth three [pdf] [FOCS slides] [FOCS video] [TCS+ video]
Ankit Gupta, Pritish Kamath, Neeraj Kayal, Ramprasad Saptharishi
Foundations of Computer Science (FOCS), 2013 (invited to SICOMP)
Abstract. We show that, over $\mathbb{Q}$, if an $n$variate polynomial of degree $d = n^{O(1)}$ is computable by an arithmetic circuit of size $s$ (respectively by an algebraic branching program of size $s$) then it can also be computed by a depth three circuit (i.e. a $\Sigma \Pi \Sigma$circuit) of size $\exp(O(\sqrt{d \log d \log n \log s}))$ (respectively of size $\exp(O(\sqrt{d \log n \log s}))$). In particular this yields a $\Sigma \Pi \Sigma$ circuit of size $\exp(O(\sqrt{d} \cdot \log d))$ computing the $d \times d$ determinant $\mathrm{Det}_d$. It also means that if we can prove a lower bound of $\exp(\omega(\sqrt{d} \cdot \log^{3/2} d))$ on the size of any $\Sigma \Pi \Sigma$circuit computing the $d \times d$ permanent ${\rm Perm}_d$ then we get superpolynomial lower bounds for the size of any arithmetic circuit computing ${\rm Perm}_d$. We then give some further results pertaining to derandomizing polynomial identity testing and circuit lower bounds.
The $\Sigma \Pi \Sigma$ circuits that we construct have the property that (some of) the intermediate polynomials have degree much higher than $d$. Indeed such a counterintuitive construction is unavoidable: it is known that in any $\Sigma \Pi \Sigma$ circuit $C$ computing either ${\rm Det}_d$ or ${\rm Perm}_d$, if every multiplication gate has fanin at most $d$ (or any constant multiple thereof) then $C$ must have size at least $\exp(\Omega(d))$.

Approaching the chasm at depth four [pdf]
Ankit Gupta, Pritish Kamath, Neeraj Kayal, Ramprasad Saptharishi
Conference on Computational Complexity (CCC), 2013 (to appear in J.ACM)
(co-winner of the Best Paper Award)
Abstract. Agrawal-Vinay (FOCS 2008), Koiran (TCS 2012) and Tavenas (MFCS 2012) have recently shown that an $\exp(\omega(\sqrt{n}\log n))$ lower bound for depth four homogeneous circuits computing the permanent with bottom layer of $\times$ gates having fanin bounded by $\sqrt{n}$ translates to a superpolynomial lower bound for general arithmetic circuits computing the permanent. Motivated by this, we examine the complexity of computing the permanent and determinant via such homogeneous depth four circuits with bounded bottom fanin.
We show here that any homogeneous depth four arithmetic circuit with bottom fanin bounded by $\sqrt{n}$ computing the permanent (or the determinant) must be of size $\exp(\Omega(\sqrt{n}))$.
Before 2013 (Undergraduate Research)

Preservation under substructures modulo bounded cores [pdf]
Abhisekh Sankaran, Bharat Adsul, Vivek Madan, Pritish Kamath, Supratik Chakraborty
International Workshop on Logic, Language, Information and Computation (WoLLIC), 2012
Abstract. We investigate a model-theoretic property that generalizes the classical notion of preservation under substructures. We call this property preservation under substructures modulo bounded cores, and present a syntactic characterization via $\Sigma^0_2$ sentences for properties of arbitrary structures definable by FO sentences. Towards a sharper characterization, we conjecture that the count of existential quantifiers in the $\Sigma^0_2$ sentence equals the size of the smallest bounded core. We show that this conjecture holds for special fragments of FO and also over special classes of structures. We present a (not FO-definable) class of finite structures for which the conjecture fails, but for which the classical Łoś-Tarski preservation theorem holds. As a fallout of our studies, we obtain combinatorial proofs of the Łoś-Tarski theorem for some of the aforementioned cases.

Faster algorithms for alternating refinement relations [pdf]
Krishnendu Chatterjee, Siddhesh Chaubal, Pritish Kamath
Computer Science and Logic (CSL), 2012
Abstract. One central issue in the formal design and analysis of reactive systems is the notion of refinement, which asks whether all behaviors of the implementation are allowed by the specification. The local interpretation of behavior leads to the notion of simulation. Alternating transition systems (ATSs) provide a general model for composite reactive systems, and the simulation relation for ATSs is known as alternating simulation. The simulation relation for fair transition systems is called fair simulation. In this work our main contributions are as follows:
- We present an improved algorithm for fair simulation with Büchi fairness constraints; our algorithm requires $O(n^3 \cdot m)$ time as compared to the previously known $O(n^6)$-time algorithm, where $n$ is the number of states and $m$ is the number of transitions.
- We present a game-based algorithm for alternating simulation that requires $O(m^2)$ time as compared to the previously known $O((n \cdot m)^2)$-time algorithm, where $n$ is the number of states and $m$ is the size of the transition relation.
- We present an iterative algorithm for alternating simulation that matches the time complexity of the game-based algorithm, but is more space-efficient than the game-based algorithm.

Using dominances for solving the protein family identification problem [pdf]
Noël Malod-Dognin, Mathilde Le Boudic-Jamin, Pritish Kamath, Rumen Andonov
Workshop on Algorithms in Bioinformatics (WABI), 2011
Abstract. Identification of protein families is a computational biology challenge that needs efficient and reliable methods. Here we introduce the concept of dominance and propose a novel combined approach based on the Distance Alignment Search Tool (DAST), which contains an exact algorithm with bounds. Our experiments show that this method successfully finds the most similar proteins in a set without solving all instances.
