Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ragesh Jaiswal is active.

Publication


Featured research published by Ragesh Jaiswal.


Foundations of Computer Science | 2006

Approximately List-Decoding Direct Product Codes and Uniform Hardness Amplification

Russell Impagliazzo; Ragesh Jaiswal; Valentine Kabanets

We consider the problem of approximately locally list-decoding direct product codes. For a parameter k, the k-wise direct product encoding of an N-bit message msg is an N^k-length string over the alphabet {0,1}^k, indexed by k-tuples (i_1, ..., i_k) ∈ {1, ..., N}^k, so that the symbol at position (i_1, ..., i_k) of the codeword is msg(i_1) ... msg(i_k). Such codes arise naturally in the context of hardness amplification of Boolean functions via the Direct Product Lemma (and the closely related Yao's XOR Lemma), where typically k ≪ N (e.g., k = poly log N). We describe an efficient randomized algorithm for approximate local list-decoding of direct product codes. Given access to a word which agrees with the k-wise direct product encoding of some message msg in at least an ε fraction of positions, our algorithm outputs a list of poly(1/ε) Boolean circuits computing N-bit strings (viewed as truth tables of log N-variable Boolean functions) such that at least one of them agrees with msg in at least a 1 − δ fraction of positions, for δ = O(k^{−0.1}), provided that ε = Ω(poly(1/k)); the running time of the algorithm is polynomial in log N and 1/ε. When ε ≥ e^{−k^α} for a certain constant α > 0, we get a randomized approximate list-decoding algorithm that runs in time quasi-polynomial in 1/ε (i.e., (1/ε)^{poly log 1/ε}). By concatenating the k-wise direct product codes with Hadamard codes, we obtain locally list-decodable codes over the binary alphabet, which can be efficiently approximately list-decoded from fewer than a 1/2 − ε fraction of corruptions as long as ε = Ω(poly(1/k)).
As an immediate application, we get uniform hardness amplification for P^{NP_∥}, the class of languages reducible to NP through one round of parallel oracle queries: if there is a language in P^{NP_∥} that cannot be decided by any BPP algorithm on more than a 1 − 1/n^{Ω(1)} fraction of inputs, then there is another language in P^{NP_∥} that cannot be decided by any BPP algorithm on more than a 1/2 + 1/n^{ω(1)} fraction of inputs.
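For concreteness, the k-wise direct product encoding defined above can be sketched in a few lines of Python. This is a toy illustration only (the codeword has N^k symbols, so materializing it is feasible only for tiny N and k); the function name and the dict representation are choices made here, not from the paper.

```python
from itertools import product

def direct_product_encode(msg, k):
    """k-wise direct product encoding of an N-bit message.

    One symbol per k-tuple (i_1, ..., i_k) in {0, ..., N-1}^k;
    the symbol at that position is (msg[i_1], ..., msg[i_k]).
    Returned as a dict mapping index tuples to bit tuples.
    """
    n = len(msg)
    return {idx: tuple(msg[i] for i in idx)
            for idx in product(range(n), repeat=k)}

msg = [1, 0, 1]                       # N = 3
code = direct_product_encode(msg, 2)  # k = 2, so 3^2 = 9 symbols
# The symbol at position (0, 2) is (msg[0], msg[2]) = (1, 1)
```

Any single symbol of the codeword depends on only k bits of msg, which is what makes *local* decoding of such codes meaningful.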


Symposium on the Theory of Computing | 2008

Uniform direct product theorems: simplified, optimized, and derandomized

Russell Impagliazzo; Ragesh Jaiswal; Valentine Kabanets; Avi Wigderson

The classical Direct-Product Theorem for circuits says that if a Boolean function f : {0,1}^n → {0,1} is somewhat hard to compute on average by small circuits, then the corresponding k-wise direct product function f^k(x_1,...,x_k) = (f(x_1),...,f(x_k)) (where each x_i ∈ {0,1}^n) is significantly harder to compute on average by slightly smaller circuits. We prove a fully uniform version of the Direct-Product Theorem with information-theoretically optimal parameters, up to constant factors. Namely, we show that for given k and ε, there is an efficient randomized algorithm A with the following property. Given a circuit C that computes f^k on at least an ε fraction of inputs, the algorithm A outputs with probability at least 3/4 a list of O(1/ε) circuits such that at least one of the circuits on the list computes f on more than a 1 − δ fraction of inputs, for δ = O((log 1/ε)/k). Moreover, each output circuit is an AC^0 circuit (of size poly(n, k, log 1/δ, 1/ε)), with oracle access to the circuit C. Using the Goldreich-Levin decoding algorithm [5], we also get a fully uniform version of Yao's XOR Lemma [18] with optimal parameters, up to constant factors. Our results simplify and improve those in [10]. Our main result may be viewed as an efficient approximate, local, list-decoding algorithm for direct-product codes (encoding a function by its values on all k-tuples) with optimal parameters. We generalize it to a family of derandomized direct-product codes, which we call intersection codes, where the encoding provides values of the function only on a subfamily of k-tuples. The quality of the decoding algorithm is then determined by sampling properties of the sets in this family and their intersections. As a direct consequence of this generalization, we obtain the first derandomized direct product result in the uniform setting, allowing hardness amplification with only a constant (as opposed to a factor of k) increase in the input length.
Finally, this general setting naturally allows the decoding of concatenated codes, which further yields nearly optimal derandomized amplification.
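The direct product function f^k at the heart of the theorem above is itself simple to state in code. A minimal Python sketch (the helper name and the use of parity as the example f are illustrative assumptions):

```python
def direct_product(f, k):
    """Given f: {0,1}^n -> {0,1}, return f^k mapping a k-tuple of
    inputs (x_1, ..., x_k) to the tuple (f(x_1), ..., f(x_k))."""
    def fk(*xs):
        assert len(xs) == k
        return tuple(f(x) for x in xs)
    return fk

parity = lambda x: sum(x) % 2   # an example f on n-bit tuples
f2 = direct_product(parity, 2)
# f2((1, 0, 1), (0, 0, 1)) == (0, 1)
```

Computing f^k correctly requires getting all k coordinates right, which is why it is plausibly much harder on average than f itself.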


Foundations of Computer Science | 2009

Bounded Independence Fools Halfspaces

Ilias Diakonikolas; Parikshit Gopalan; Ragesh Jaiswal; Rocco A. Servedio; Emanuele Viola

We show that any distribution on {−1,+1}^n that is k-wise independent fools any halfspace (a.k.a. threshold) h : {−1,+1}^n → {−1,+1}, i.e., any function of the form h(x) = sign(∑_{i=1}^n w_i x_i − θ).
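The halfspace definition above is easy to make concrete. A minimal Python sketch (the function name and the convention sign(0) = +1 are choices made here for illustration, not from the paper):

```python
def halfspace(w, theta):
    """Return h(x) = sign(sum_i w_i * x_i - theta) on x in {-1,+1}^n.
    Ties are broken as sign(0) = +1 (a convention chosen here)."""
    def h(x):
        s = sum(wi * xi for wi, xi in zip(w, x)) - theta
        return 1 if s >= 0 else -1
    return h

# Example: majority on 3 bits is the halfspace with w = (1,1,1), theta = 0
maj = halfspace([1, 1, 1], 0)
# maj((1, 1, -1)) == 1 and maj((-1, -1, 1)) == -1
```

"Fooling" here means that the expectation of h under the k-wise independent distribution is close to its expectation under the uniform distribution on {−1,+1}^n.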


Algorithmica | 2014

A Simple D^2-Sampling Based PTAS for k-Means and Other Clustering Problems

Ragesh Jaiswal; Amit Kumar; Sandeep Sen
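The D^2-sampling in the title refers to the seeding rule popularized by k-means++: each new center is drawn with probability proportional to its squared distance from the nearest center chosen so far. A minimal Python sketch of that rule (the function name and the 2-D tuple representation of points are illustrative assumptions, not the paper's algorithm in full):

```python
import random

def d2_sample(points, k, rng=random):
    """Pick k initial centers by D^2-sampling: the first uniformly,
    each subsequent one with probability proportional to its squared
    distance from the nearest already-chosen center."""
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
              for px, py in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = d2_sample(pts, 2)
```

Because far-away points get proportionally more sampling weight, the chosen centers tend to spread across well-separated clusters, which is what the PTAS analyses exploit.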



International Cryptology Conference | 2007

Chernoff-type direct product theorems

Russell Impagliazzo; Ragesh Jaiswal; Valentine Kabanets



SIAM Journal on Computing | 2009

Uniform Direct Product Theorems: Simplified, Optimized, and Derandomized

Russell Impagliazzo; Ragesh Jaiswal; Valentine Kabanets; Avi Wigderson



Theory of Cryptography Conference | 2009

Security Amplification for Interactive Cryptographic Primitives

Yevgeniy Dodis; Russell Impagliazzo; Ragesh Jaiswal; Valentine Kabanets



Journal of Cryptology | 2008

Chernoff-Type Direct Product Theorems

Russell Impagliazzo; Ragesh Jaiswal; Valentine Kabanets



Information Processing Letters | 2015

Improved analysis of D^2-sampling based PTAS for k-means and other clustering problems

Ragesh Jaiswal; Mehul Kumar; Pulkit Yadav



International Workshop on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques | 2012

Analysis of k-Means++ for Separable Data

Ragesh Jaiswal; Nitin Garg


Collaboration


Dive into Ragesh Jaiswal's collaborations.

Top Co-Authors

Anup Bhattacharya

Indian Institute of Technology Delhi


Nir Ailon

Technion – Israel Institute of Technology


Avi Wigderson

Institute for Advanced Study


Ilias Diakonikolas

University of Southern California


Arindam Pal

Tata Consultancy Services


Sandeep Sen

Indian Institute of Technology Delhi
