Madhu Sudan
Microsoft
Publication
Featured research published by Madhu Sudan.
international symposium on information theory | 2002
Ari Juels; Madhu Sudan
We describe a simple and novel cryptographic construction that we refer to as a fuzzy vault. A player Alice may place a secret value κ in a fuzzy vault and “lock” it using a set A of elements from some public universe U. If Bob tries to “unlock” the vault using a set B of similar length, he obtains κ only if B is close to A, i.e., only if A and B overlap substantially. In contrast to previous constructions of this flavor, ours possesses the useful feature of order invariance, meaning that the ordering of A and B is immaterial to the functioning of the vault. As we show, our scheme enjoys provable security against a computationally unbounded attacker. Fuzzy vaults have potential application to the problem of protecting data in a number of real-world, error-prone environments. These include systems in which personal information serves to authenticate users, e.g., for the purposes of password recovery, as well as biometric authentication systems, in which readings are inherently noisy as a result of the refractory nature of image capture and processing.
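The construction can be sketched in a few lines: κ is embedded in the coefficients of a polynomial p over a finite field, the genuine points (a, p(a)) for a ∈ A are hidden among random chaff points lying off p, and unlocking with B succeeds when enough of B's matches lie on p. The sketch below is a toy, assuming a tiny field and brute-force subset interpolation in place of the Reed–Solomon decoding the actual construction relies on; all parameters are illustrative.

```python
import itertools
import random

P = 97  # toy prime field; a real vault would use a much larger field


def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-order coefficients first) mod P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y


def poly_mul(a, b):
    """Multiply two coefficient vectors mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out


def interpolate(pts):
    """Lagrange interpolation: coefficients of the unique polynomial of
    degree < len(pts) passing through pts, over GF(P)."""
    coeffs = [0] * len(pts)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = poly_mul(basis, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for k, b in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * b) % P
    return coeffs


def lock(secret_coeffs, A, n_chaff, rng):
    """Genuine points (a, p(a)) for a in A, hidden among chaff off p."""
    vault = [(a, poly_eval(secret_coeffs, a)) for a in A]
    used = set(A)
    while len(vault) < len(A) + n_chaff:
        x = rng.randrange(P)
        if x in used:
            continue
        used.add(x)
        y = rng.randrange(P)
        if y != poly_eval(secret_coeffs, x):  # chaff must lie off p
            vault.append((x, y))
    rng.shuffle(vault)  # order invariance: the vault is just a point set
    return vault


def unlock(vault, B, degree):
    """Brute-force stand-in for Reed-Solomon decoding: interpolate
    (degree+1)-subsets of the matched vault points and keep a polynomial
    that many matched points agree with."""
    cand = [pt for pt in vault if pt[0] in set(B)]
    for subset in itertools.combinations(cand, degree + 1):
        coeffs = interpolate(list(subset))
        agree = sum(1 for x, y in cand if poly_eval(coeffs, x) == y)
        if agree >= degree + 3:  # demand more agreement than chance allows
            return coeffs
    return None
```

With κ stored in, say, the constant coefficient, a set B sharing most of A's elements recovers the coefficients exactly, while a mostly disjoint B leaves only chaff to interpolate.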
Journal of the ACM | 1998
Sanjeev Arora; Carsten Lund; Rajeev Motwani; Madhu Sudan; Mario Szegedy
We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided “proof” with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [1998] whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence, we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [1991] and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige et al. [1996] and Arora and Safra [1998] and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
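The verifier model (logarithmic randomness, constantly many probes) can be illustrated with a deliberately simple toy that is not the PCP construction itself: here a “proof” is a claimed 3-coloring of a graph, and the verifier spends roughly log₂ m random bits choosing one of m edges and examines only the two proof symbols at its endpoints. Unlike a real PCP, this toy only guarantees rejection probability 1/m per probe for a bad proof, so repetition is used to amplify it.

```python
import random


def verifier(edges, proof, rng):
    """One probe: use ~log2(len(edges)) random bits to pick an edge and
    examine only the two proof symbols at its endpoints."""
    u, v = edges[rng.randrange(len(edges))]
    return proof[u] != proof[v]


def amplified_verifier(edges, proof, rng, trials):
    """Independent repetition drives the acceptance probability of a bad
    proof toward 0, while a correct proof is still accepted always."""
    return all(verifier(edges, proof, rng) for _ in range(trials))
```

A proper coloring passes every probe (one-sided error, as in the theorem), while a proof violating some edge is caught with probability growing in the number of trials.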
foundations of computer science | 1992
Sanjeev Arora; Carsten Lund; Rajeev Motwani; Madhu Sudan; Mario Szegedy
The class PCP(f(n), g(n)) consists of all languages L for which there exists a polynomial-time probabilistic oracle machine that uses O(f(n)) random bits, queries O(g(n)) bits of its oracle and behaves as follows: if x ∈ L then there exists an oracle y such that the machine accepts for all random choices, but if x ∉ L then for every oracle y the machine rejects with high probability. Arora and Safra (1992) characterized NP as PCP(log n, (log log n)^O(1)). The authors improve on their result by showing that NP = PCP(log n, 1). The result has the following consequences: (1) MAX SNP-hard problems (e.g. metric TSP, MAX-SAT, MAX-CUT) do not have polynomial time approximation schemes unless P = NP; and (2) for some ε > 0 the size of the maximal clique in a graph cannot be approximated within a factor of n^ε unless P = NP.
SIAM Journal on Computing | 1996
Ronitt Rubinfeld; Madhu Sudan
The study of self-testing and self-correcting programs leads to the search for robust characterizations of functions. Here the authors make this notion precise and show such a characterization for polynomials. From this characterization, the authors get the following applications. Simple and efficient self-testers for polynomial functions are constructed. The characterizations provide results in the area of coding theory by giving extremely fast and efficient error-detecting schemes for some well-known codes. This error-detection scheme plays a crucial role in subsequent results on the hardness of approximating some NP-optimization problems.
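For univariate polynomials over a prime field, the kind of robust characterization at issue can be sketched via the classical finite-difference identity: f has degree at most d exactly when the order-(d+1) difference ∑_{i=0}^{d+1} (−1)^i C(d+1, i) f(x + i·h) vanishes for every pair (x, h). A self-tester then just samples random pairs and checks the identity; the field size and trial counts below are illustrative choices, not the paper's parameters.

```python
import math
import random

P = 101  # prime field; the identity below needs P > d + 1


def degree_test(f, d, trials, rng):
    """Spot-check the characterization: f: GF(P) -> GF(P) has degree <= d
    iff the order-(d+1) finite difference
        sum_{i=0}^{d+1} (-1)^i * C(d+1, i) * f(x + i*h)
    vanishes mod P for every pair (x, h).  Sample random pairs."""
    for _ in range(trials):
        x, h = rng.randrange(P), rng.randrange(P)
        s = sum((-1) ** i * math.comb(d + 1, i) * f((x + i * h) % P)
                for i in range(d + 2)) % P
        if s != 0:
            return False  # witness that f violates the characterization
    return True
```

A genuine degree-d polynomial passes every trial; a function of much higher degree fails a constant fraction of random trials, so a modest number of repetitions detects it.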
IEEE Transactions on Information Theory | 1996
Andres Albanese; Johannes Blömer; Jeff Edmonds; Michael Luby; Madhu Sudan
We introduce a new method, called priority encoding transmission, for sending messages over lossy packet-based networks. When a message is to be transmitted, the user specifies a priority value for each part of the message. Based on the priorities, the system encodes the message into packets for transmission and sends them to (possibly multiple) receivers. The priority value of each part of the message determines the fraction of encoding packets sufficient to recover that part. Thus even if some of the encoding packets are lost en route, each receiver is still able to recover the parts of the message for which a sufficient fraction of the encoding packets are received. For any set of priorities for a message, we define a natural quantity called the girth of the priorities. We develop systems for implementing any given set of priorities such that the total length of the encoding packets is equal to the girth. On the other hand, we give an information-theoretic lower bound that shows that for any set of priorities the total length of the encoding packets must be at least the girth. Thus the system we introduce is optimal in terms of the total encoding length. This work has immediate applications to multimedia and high-speed networks, especially those with bursty sources and multiple receivers with heterogeneous capabilities. Implementations of the system show promise of being practical.
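A minimal sketch of the priority mechanism, assuming a prime field of byte values and plain Lagrange interpolation (it ignores the paper's girth-optimal encoding-length guarantee): segment j, with priority threshold k_j, becomes the coefficients of a polynomial of degree < k_j, and each packet carries one evaluation of every segment polynomial, so any k_j received packets suffice to recover segment j.

```python
P = 257  # prime just above 256, so every byte is a field symbol


def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-order coefficients first) mod P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y


def poly_mul(a, b):
    """Multiply two coefficient vectors mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out


def interpolate(pts):
    """Lagrange interpolation of the coefficient vector through pts."""
    coeffs = [0] * len(pts)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = poly_mul(basis, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for k, b in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * b) % P
    return coeffs


def encode(segments, thresholds, n_packets):
    """Packet i carries one evaluation (at point i) of every segment's
    polynomial; segment j uses degree < thresholds[j], so its priority
    fixes the fraction of packets sufficient to recover it."""
    out = []
    for i in range(1, n_packets + 1):  # evaluation points 1..n
        out.append((i, [poly_eval(seg + [0] * (k - len(seg)), i)
                        for seg, k in zip(segments, thresholds)]))
    return out


def decode_segment(received, j, k):
    """Recover segment j from any k received packets."""
    pts = [(x, vals[j]) for x, vals in received[:k]]
    return interpolate(pts)
```

A high-priority segment (small threshold) survives heavy packet loss, while a low-priority one needs more of the encoding packets, matching the graceful-degradation behavior the abstract describes.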
foundations of computer science | 1995
Benny Chor; Oded Goldreich; Eyal Kushilevitz; Madhu Sudan
We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual database gets no information on the identity of the item retrieved by the user. For a single database, achieving this type of privacy requires communicating the whole database, or n bits (where n is the number of bits in the database). Our schemes use the replication to gain substantial saving. In particular, we have: a two-database scheme with communication complexity O(n^{1/3}); a scheme for a constant number, k, of databases with communication complexity O(n^{1/k}); and a scheme for (1/3) log₂ n databases with polylogarithmic (in n) communication complexity.
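The flavor of these schemes shows up already in the classical two-server warm-up (this is not the O(n^{1/3}) construction, just the basic replication trick): the user sends a uniformly random subset S of indices to one server and S with the target index i flipped to the other; each server returns the XOR of the bits it was asked for, and either query alone is a uniformly random subset, revealing nothing about i.

```python
import random


def xor_subset(db, S):
    """One server's answer: XOR of the database bits indexed by S."""
    acc = 0
    for i in S:
        acc ^= db[i]
    return acc


def pir_query(db, i, rng):
    """Two-server PIR warm-up: either query set alone is a uniformly
    random subset, so neither server learns anything about i."""
    n = len(db)
    S1 = {j for j in range(n) if rng.random() < 0.5}
    S2 = S1 ^ {i}  # symmetric difference: flip membership of i only
    # S1 and S2 differ exactly in i, so the two answers XOR to db[i]
    return xor_subset(db, S1) ^ xor_subset(db, S2)
```

The price is O(n) communication for the queries themselves; the point of the paper's schemes is to drive that cost down to O(n^{1/3}) and beyond using more structured queries.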
Journal of the ACM | 1998
David R. Karger; Rajeev Motwani; Madhu Sudan
We consider the problem of coloring k-colorable graphs with the fewest possible colors. We present a randomized polynomial time algorithm that colors a 3-colorable graph on n vertices with min{O(Δ^{1/3} log^{1/2} Δ log n), O(n^{1/4} log^{1/2} n)} colors, where Δ is the maximum degree of any vertex. Besides giving the best known approximation ratio in terms of n, this marks the first nontrivial approximation result as a function of the maximum degree Δ. This result can be generalized to k-colorable graphs to obtain a coloring using min{O(Δ^{1−2/k} log^{1/2} Δ log n), O(n^{1−3/(k+1)} log^{1/2} n)} colors. Our results are inspired by the recent work of Goemans and Williamson, who used an algorithm for semidefinite optimization problems, which generalize linear programs, to obtain improved approximations for the MAX CUT and MAX 2-SAT problems. An intriguing outcome of our work is a duality relationship established between the value of the optimum solution to our semidefinite program and the Lovász ϑ-function. We show lower bounds on the gap between the optimum solution of our semidefinite program and the actual chromatic number; by duality this also demonstrates interesting new facts about the ϑ-function.
Archive | 2001
Nadia Creignou; Sanjeev Khanna; Madhu Sudan
Preface
1. Introduction
2. Complexity Classes
3. Boolean Constraint Satisfaction Problems
4. Characterizations of Constraint Functions
5. Implementation of Functions and Reductions
6. Classification Theorems for Decision, Counting and Quantified Problems
7. Classification Theorems for Optimization Problems
8. Input-Restricted Constraint Satisfaction Problems
9. The Complexity of the Meta-Problems
10. Concluding Remarks
Bibliography
Index
symposium on the theory of computing | 1997
Sanjeev Arora; Madhu Sudan
NP = PCP(log n, 1) and related results crucially depend upon the close connection between the probability with which a function passes a low degree test and the distance of this function to the nearest degree d polynomial. In this paper we study a test proposed by Rubinfeld and Sudan [30]. The strongest previously known connection for this test states that a function passes the test with probability δ for some δ > 7/8 iff the function has agreement ≈ δ with a polynomial of degree d. We present a new, and surprisingly strong, analysis which shows that the preceding statement is true for arbitrarily small δ, provided the field size is polynomially larger than d/δ. The analysis uses a version of Hilbert irreducibility, a tool of algebraic geometry. As a consequence we obtain an alternate construction for the following proof system: a constant-prover 1-round proof system for NP languages in which the verifier uses O(log n) random bits, receives answers of size O(log n) bits, and has an error probability of at most 2^{−log^{1−ε} n}. Such a proof system, which implies the NP-hardness of approximating Set Cover to within Ω(log n) factors, has already been obtained by Raz and Safra [29]. Raz and Safra obtain their result by giving a strong analysis, in the sense described above, of a new low-degree test that they present. A second consequence of our analysis is a self-tester/corrector for any buggy program that (supposedly) computes a polynomial over a finite field. If the program is correct only on a δ fraction of inputs, where δ = 1/|F|^{0.5}, then the tester/corrector determines δ and generates O(1/δ) values for every input, such that one of them is the correct output. In fact, our results yield something stronger: given the buggy program, we can construct O(1/δ) randomized programs such that one of them is correct on every input, with high probability. Such a strong self-corrector is a useful tool in complexity theory with some applications known.
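The self-correction idea can be sketched for a univariate polynomial with the classical random-line corrector, which handles only small error rates ((d+1)δ < 1/2), not the much stronger δ ≈ 1/|F|^{0.5} regime of this paper: query the buggy program along the random line t ↦ x + t·h at t = 1, …, d+1 (each query point is individually uniform, so by a union bound all answers are right with probability at least 1 − (d+1)δ) and interpolate the value at t = 0. The field size and the buggy program below are toy choices.

```python
import random
from collections import Counter

P = 101  # toy prime field


def self_correct(prog, d, x, rng):
    """One corrected evaluation: query prog on the random line
    t -> x + t*h at t = 1..d+1 and Lagrange-interpolate the value
    at t = 0, without ever trusting prog(x) itself."""
    h = rng.randrange(1, P)
    pts = [(t, prog((x + t * h) % P)) for t in range(1, d + 2)]
    val = 0
    for i, (ti, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (tj, _) in enumerate(pts):
            if j != i:
                num = num * (-tj) % P
                den = den * (ti - tj) % P
        val = (val + yi * num * pow(den, P - 2, P)) % P
    return val


def robust_value(prog, d, x, rng, reps=15):
    """Majority vote over independent corrected runs; a run fails only
    when one of its d+1 uniform queries hits a corrupted input."""
    votes = Counter(self_correct(prog, d, x, rng) for _ in range(reps))
    return votes.most_common(1)[0][0]
```

Even at inputs where the program is wrong, the corrected value agrees with the polynomial the program mostly computes.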
symposium on the theory of computing | 1994
Avrim Blum; Prasad Chalasani; Don Coppersmith; Bill Pulleyblank; Prabhakar Raghavan; Madhu Sudan
We are given a set of points p_1, …, p_n and a symmetric distance matrix (d_{ij}) giving the distance between p_i and p_j. We wish to construct a tour that minimizes ∑_{i=1}^{n} ℓ(i), where ℓ(i) is the latency of p_i, defined to be the distance traveled before first visiting p_i. This problem is also known in the literature as the deliveryman problem or the traveling repairman problem. It arises in a number of applications including disk-head scheduling, and turns out to be surprisingly different from the traveling salesman problem in character. We give exact and approximate solutions to a number of cases, including a constant-factor approximation algorithm whenever the distance matrix satisfies the triangle inequality.
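For intuition, the objective can be stated in a few lines of code, with a brute-force exact solver usable only for tiny instances (the paper's contribution is the approximation algorithms, not this enumeration):

```python
from itertools import permutations


def total_latency(order, dist):
    """Sum of latencies: each point contributes the distance traveled
    before it is first visited (the start point contributes 0)."""
    total, traveled = 0, 0
    for a, b in zip(order, order[1:]):
        traveled += dist[a][b]
        total += traveled
    return total


def min_latency(dist):
    """Exact minimum-latency tour from point 0 by enumeration --
    O((n-1)!) time, so only for tiny instances."""
    n = len(dist)
    tours = ((0,) + rest for rest in permutations(range(1, n)))
    best = min(tours, key=lambda order: total_latency(order, dist))
    return best, total_latency(best, dist)
```

Note how the objective front-loads the tour: early edges are counted once per remaining point, which is why good latency tours visit nearby points first and why the problem behaves so differently from TSP.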