
Publication


Featured research published by Ankit Garg.


Symposium on the Theory of Computing | 2013

From information to exact communication

Mark Braverman; Ankit Garg; Denis Pankratov; Omri Weinstein

We develop a new local characterization of the zero-error information complexity function for two-party communication problems, and use it to compute the exact internal and external information complexity of the 2-bit AND function: IC(AND,0) = C<sub>∧</sub> ≅ 1.4923 bits, and IC<sup>ext</sup>(AND,0) = log<sub>2</sub> 3 ≅ 1.585 bits. This leads to a tight (upper and lower bound) characterization of the communication complexity of the set intersection problem on subsets of {1,...,n} (the players are required to compute the intersection of their sets), whose randomized communication complexity tends to C<sub>∧</sub> ⋅ n ± o(n) as the error tends to zero. The information-optimal protocol we present has an infinite number of rounds. We show this is necessary by proving that the rate of convergence of the r-round information cost of AND to IC(AND,0) = C<sub>∧</sub> behaves like Θ(1/r<sup>2</sup>), i.e. that the r-round information complexity of AND is C<sub>∧</sub> + Θ(1/r<sup>2</sup>). We leverage the tight analysis obtained for the information complexity of AND to calculate and prove the exact communication complexity of the <i>set disjointness</i> function Disj<sub>n</sub>(X,Y) = ¬⋁<sub>i=1</sub><sup>n</sup> AND(x<sub>i</sub>,y<sub>i</sub>) with error tending to 0, which turns out to be C<sub>DISJ</sub> ⋅ n ± o(n), where C<sub>DISJ</sub> ≅ 0.4827. Our rate of convergence results imply that an asymptotically optimal protocol for set disjointness will have to use ω(1) rounds of communication, since every r-round protocol will be sub-optimal by at least Ω(n/r<sup>2</sup>) bits of communication. We also obtain the tight bound of (2/ln 2) ⋅ k ± o(k) on the communication complexity of disjointness of sets of size ≤ k. An asymptotic bound of Θ(k) was previously shown by Håstad and Wigderson.
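As a quick numeric sanity check (ours, not part of the paper), the two closed-form constants quoted in the abstract can be evaluated directly:

```python
import math

# External information complexity of AND: IC_ext(AND,0) = log2(3)
print(round(math.log2(3), 3))       # 1.585 bits

# Per-element constant in the small-set disjointness bound (2/ln 2) * k
print(round(2 / math.log(2), 4))    # 2.8854 bits per element
```

The remaining constants (C<sub>∧</sub> ≅ 1.4923, C<sub>DISJ</sub> ≅ 0.4827) have no simple closed form; they come from the paper's optimization over protocols.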


Symposium on the Theory of Computing | 2015

Small Value Parallel Repetition for General Games

Mark Braverman; Ankit Garg

We prove a parallel repetition theorem for general games with value tending to 0. Previously Dinur and Steurer proved such a theorem for the special case of projection games. We use information theoretic techniques in our proof. Our proofs also extend to the high value regime (value close to 1) and provide alternate proofs for the parallel repetition theorems of Holenstein and Rao for general and projection games respectively. We also extend the example of Feige and Verbitsky to show that the small-value parallel repetition bound we obtain is tight. Our techniques are elementary in that we only need to employ basic information theory and discrete probability in the small-value parallel repetition proof.


Computer Science Symposium in Russia | 2013

Information Lower Bounds via Self-reducibility

Mark Braverman; Ankit Garg; Denis Pankratov; Omri Weinstein

We use self-reduction methods to prove strong information lower bounds on two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product (IP). In our first result we affirm the conjecture that the information cost of GHD is linear even under the uniform distribution, which strengthens the Ω(n) bound recently shown by Kerenidis et al. and answers an open problem of Chakrabarti et al. In our second result we prove that the information cost of IP<sub>n</sub> is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound recently proved by Braverman and Weinstein.


International Colloquium on Automata, Languages and Programming | 2014

Public vs Private Coin in Bounded-Round Information

Mark Braverman; Ankit Garg

We precisely characterize the role of private randomness in the ability of Alice to send a message to Bob while minimizing the amount of information revealed to him. We give an example of a (randomized) message which can be transmitted while revealing only I bits of information using private randomness, but requires Alice to reveal I + log I − O(1) bits of information if only public coins are allowed. This gives the first example of an ω(1) additive separation between these two models. Our example also shows that the one-round compression construction of Harsha et al. [HJMR07] cannot be improved.


Foundations of Computer Science | 2015

Near-Optimal Bounds on Bounded-Round Quantum Communication Complexity of Disjointness

Mark Braverman; Ankit Garg; Young Kun Ko; Jieming Mao; Dave Touchette

We prove a near-optimal round-communication trade-off for the two-party quantum communication complexity of disjointness. For protocols with r rounds, we prove a lower bound of Ω(n/r) on the communication required for computing disjointness of input size n, which is optimal up to logarithmic factors. The previous best lower bound was Ω(n/r<sup>2</sup>), due to Jain, Radhakrishnan and Sen. Along the way, we develop several tools for quantum information complexity, one of which is a lower bound for quantum information complexity in terms of the generalized discrepancy method. As a corollary, we get that the quantum communication complexity of any boolean function f is at most 2<sup>O(QIC(f))</sup>, where QIC(f) is the prior-free quantum information complexity of f (with error 1/3).


Conference on Innovations in Theoretical Computer Science | 2018

Alternating Minimization, Scaling Algorithms, and the Null-Cone Problem from Invariant Theory

Peter Bürgisser; Ankit Garg; Rafael Mendes de Oliveira; Michael Walter; Avi Wigderson

Alternating minimization heuristics seek to solve a (difficult) global optimization task through iteratively solving a sequence of (much easier) local optimization tasks on different parts (or blocks) of the input parameters. While popular and widely applicable, very few examples of this heuristic are rigorously shown to converge to optimality, and even fewer to do so efficiently. In this paper we present a general framework which is amenable to rigorous analysis, and expose its applicability. Its main feature is that the local optimization domains are each a group of invertible matrices, together naturally acting on tensors, and the optimization problem is minimizing the norm of an input tensor under this joint action. The solution of this optimization problem captures a basic problem in Invariant Theory, called the null-cone problem. This algebraic framework turns out to encompass natural computational problems in combinatorial optimization, algebra, analysis, quantum information theory, and geometric complexity theory. It includes and extends to high dimensions the recent advances on (2-dimensional) operator scaling. Our main result is a fully polynomial time approximation scheme for this general problem, which may be viewed as a multi-dimensional scaling algorithm. This directly leads to progress on some of the problems in the areas above, and a unified view of others. We explain how faster convergence of an algorithm for the same problem will allow resolving central open problems. Our main techniques come from Invariant Theory, and include its rich non-commutative duality theory, and new bounds on the bitsizes of coefficients of invariant polynomials. They enrich the algorithmic toolbox of this very computational field of mathematics, and are directly related to some challenges in geometric complexity theory (GCT).
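The simplest instance of this alternating scheme is classical matrix (Sinkhorn) scaling, the commutative special case underlying the operator scaling the abstract mentions. A minimal pure-Python sketch (illustrative only, not the paper's algorithm): alternately normalize rows and columns, each an easy local step, driving the matrix toward a doubly stochastic one.

```python
def sinkhorn_scale(A, iters=500):
    """Alternating minimization in miniature: repeatedly rescale the rows,
    then the columns, of a positive matrix so each sums to 1."""
    A = [row[:] for row in A]
    for _ in range(iters):
        for row in A:                          # local step 1: fix the rows
            s = sum(row)
            for j in range(len(row)):
                row[j] /= s
        for j in range(len(A[0])):             # local step 2: fix the columns
            s = sum(row[j] for row in A)
            for row in A:
                row[j] /= s
    return A

A = sinkhorn_scale([[1.0, 2.0], [3.0, 4.0]])
print([sum(row) for row in A])                       # row sums, each ≈ 1
print([sum(row[j] for row in A) for j in range(2)])  # column sums, each ≈ 1
```

For a strictly positive matrix this converges; deciding convergence versus divergence in the general (non-commutative, tensor) setting is exactly the null-cone problem the paper studies.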


Theory of Computing Systems / Mathematical Systems Theory | 2016

Information Lower Bounds via Self-Reducibility

Mark Braverman; Ankit Garg; Denis Pankratov; Omri Weinstein

We use self-reduction methods to prove strong information lower bounds on two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product (IP). In our first result we affirm the conjecture that the information cost of GHD is linear even under the uniform distribution, which strengthens the Ω(n) bound recently shown by Kerenidis et al. (2012), and answers an open problem from Chakrabarti et al. (2012). In our second result we prove that the information cost of IP<sub>n</sub> is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound recently proved by Braverman and Weinstein (ECCC 18:164, 2011). Our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past (Chakrabarti et al. 2001; Bar-Yossef et al., J. Comput. Syst. Sci. 68(4):702–732, 2004; Barak et al. 2010) used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.


Conference on Theory of Quantum Computation, Communication and Cryptography | 2016

Lower Bound on Expected Communication Cost of Quantum Huffman Coding

Anurag Anshu; Ankit Garg; Aram Wettroth Harrow; Penghui Yao

Data compression is a fundamental problem in quantum and classical information theory. A typical version of the problem is that the sender Alice receives a (classical or quantum) state from some known ensemble and needs to transmit it to the receiver Bob with average error below some specified bound. We consider the case in which the message can have a variable length and the goal is to minimize its expected length. For classical messages this problem has a well-known solution given by Huffman coding. In this scheme, the expected length of the message is equal to the Shannon entropy of the source (up to a constant additive factor) and the scheme succeeds with zero error. This is a single-shot result which implies the asymptotic result, viz. Shannon's source coding theorem, by encoding each state sequentially. For the quantum case, the asymptotic compression rate is given by the von Neumann entropy. However, we show that there is no one-shot scheme which is able to match this rate, even if interactive communication is allowed. This is a relatively rare case in quantum information theory where the cost of a quantum task is significantly different from its classical analogue. Our result has implications for direct sum theorems in quantum communication complexity and for one-shot formulations of the Quantum Reverse Shannon theorem.
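The classical baseline the abstract invokes can be seen in a few lines. A minimal Huffman sketch (our own illustrative helper, not from the paper): merge the two least likely subtrees until one remains; each merge pushes its symbols one level deeper, and the resulting expected length lies within one bit of the Shannon entropy.

```python
import heapq
import math

def huffman_code_lengths(probs):
    """Codeword lengths of an optimal (Huffman) prefix code: repeatedly
    merge the two least likely subtrees; each merge adds one bit of depth
    to every symbol contained in them."""
    heap = [[p, [i]] for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        for i in a[1] + b[1]:
            lengths[i] += 1
        heapq.heappush(heap, [a[0] + b[0], a[1] + b[1]])
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]
lengths = huffman_code_lengths(probs)                    # [1, 2, 3, 3]
expected = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)
print(expected, entropy)                                 # 1.75 1.75
```

For this dyadic source the expected length meets the entropy exactly; the paper's point is that no quantum analogue of this one-shot guarantee exists.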


Foundations of Computer Science | 2016

A Deterministic Polynomial Time Algorithm for Non-commutative Rational Identity Testing

Ankit Garg; Leonid Gurvits; Rafael Mendes de Oliveira; Avi Wigderson


Symposium on the Theory of Computing | 2016

Communication lower bounds for statistical estimation problems via a distributed data processing inequality

Mark Braverman; Ankit Garg; Tengyu Ma; Huy L. Nguyen; David P. Woodruff

Collaboration

Top Co-Authors

Avi Wigderson
Institute for Advanced Study

Leonid Gurvits
Los Alamos National Laboratory

Anurag Anshu
National University of Singapore

Aram Wettroth Harrow
Massachusetts Institute of Technology