Ran Raz
Weizmann Institute of Science
Publications
Featured research published by Ran Raz.
symposium on the theory of computing | 1997
Ran Raz; Shmuel Safra
We introduce a new low-degree test, one that uses the restriction of low-degree polynomials to planes (i.e., affine subspaces of dimension 2), rather than the restriction to lines (i.e., affine subspaces of dimension 1). We prove the new test to be of a very small error probability (in particular, much smaller than constant). The new test enables us to prove a low-error characterization of NP in terms of PCP. Specifically, our theorem states that, for any given ε > 0, membership in any NP language can be verified with O(1) accesses, each reading a logarithmic number of bits, and such that the error probability is 2^{-log^{1-ε} n}. Our results are in fact stronger, as stated below. One application of the new characterization of NP is that approximating SET-COVER to within a logarithmic factor is NP-hard. Previous analyses of low-degree tests, as well as previous characterizations of NP in terms of PCP, have managed to achieve, with a constant number of accesses, an error probability of, at best, a constant. The proof of the small error probability of our new low-degree test is, nevertheless, significantly simpler than previous proofs. In particular, it is combinatorial and geometrical in nature, rather than algebraic.
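In display form, the quantitative claim above reads as follows (a paraphrase of the abstract in standard PCP notation, not the paper's numbered theorem): for every fixed \(\epsilon > 0\), every language in NP has a PCP verifier that makes \(O(1)\) queries, each answered by \(O(\log n)\) bits, with perfect completeness and soundness error

\[
\delta(n) \;=\; 2^{-\log^{1-\epsilon} n} \;\longrightarrow\; 0 \quad (n \to \infty),
\]

i.e., the error is sub-constant even though the verifier reads only \(O(\log n)\) proof bits in total.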
SIAM Journal on Computing | 1998
Ran Raz
symposium on the theory of computing | 1999
Ran Raz
Characterizing the class NP, by itself or with respect to other computational-complexity classes, is perhaps one of the most fundamental avenues of research in the theory of computer science. Since the early days, when the classes P and NP were defined and the question was posed as to whether they are the same or differ, many problems have been shown to be NP-complete, thereby increasing the weight on finding stricter characterizations for the class NP. NP has since been given a few alternative characterizations, the one most commonly applied being Cook's [Coo71], which characterizes NP in terms of efficient verification of proofs (or nondeterministic computations). A new perspective, by which improved characterizations of NP can be obtained, has recently been proposed. The motivation for it stems from questions regarding the hardness of approximation versions of problems whose exact computation is known to be NP-hard. This avenue of research was initiated by [FGL+91], which introduced a new methodology for proving hardness results for approximation problems. The method takes advantage of results in a seemingly unrelated area, that of interactive proofs [GMR89, Bab85, BGKW88, LFKN92, Sha92, BFL91], but interprets those results with quite a different perspective in mind. Much effort has since been invested towards a better understanding of this methodology, and the class NP has consequently gained stricter characterizations [AS92, ALM+92, BGS95], which are referred to as characterizations of NP in terms of PCP (or, in short, PCP characterizations of NP). The PCP characterization of NP, though it has taken around 20 years to be formulated, now seems the most natural extension of the old characterization of NP [Coo71] if one has in mind proving hardness results for approximation problems. This characterization has already been used to obtain quite a few hardness results for approximation problems [FGL+91, AS92, ALM+92, PY91, LY94, BGLR93, KLS93, BGS95, Hås96a, Hås96b, Hås97]. The previous characterization of NP in terms of the PCP hierarchy [AS92, ALM+92] seemed at first the best possible up to constant factors. A stronger characterization was later conjectured in [BGLR93]; one that, as an outcome, implies NP-hardness of approximating SET-COVER to within logarithmic factors [LY94, BGLR93].
symposium on the theory of computing | 2005
Ran Raz
We show that a parallel repetition of any two-prover one-round proof system (MIP(2,1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The constant in the exponent (in our analysis) depends only on the original probability of error and on the total number of possible answers of the two provers. The dependency on the total number of possible answers is logarithmic, which was recently proved to be almost the best possible [U. Feige and O. Verbitsky, Proc. 11th Annual IEEE Conference on Computational Complexity, IEEE Computer Society Press, Los Alamitos, CA, 1996, pp. 70--76].
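In display form (a hedged paraphrase of the statement above, with constants left implicit as in the abstract): if \(G\) is a two-prover one-round game and \(s\) is the total number of possible answers of the two provers, then there is a constant \(W = W(\mathrm{val}(G)) < 1\), depending only on the original probability of error, such that the \(n\)-fold parallel repetition \(G^{\otimes n}\) satisfies

\[
\mathrm{val}(G^{\otimes n}) \;\le\; W^{\,n/\log s}.
\]

The \(n/\log s\) exponent is the logarithmic dependency on the number of answers that Feige and Verbitsky showed to be close to optimal.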
conference on computational complexity | 2004
Ran Raz; Amir Shpilka
Communication complexity has become a central complexity model. In that model, we count the number of communication bits needed between two parties in order to solve certain computational problems. We show that for certain communication complexity problems, quantum communication protocols are exponentially faster than classical ones. More explicitly, we give an example of a communication complexity relation (a promise problem) P such that: 1) the quantum communication complexity of P is O(log m); 2) the classical probabilistic communication complexity of P is Ω(m^{1/4}/log m) (where m is the length of the inputs). This gives an exponential gap between quantum communication complexity and classical probabilistic communication complexity. Only a quadratic gap was previously known. Our problem P is of a geometrical nature, and is a finite-precision variation of the following problem: Player I gets as input a unit vector x ∈ R^n and two orthogonal subspaces M_0, M_1 ⊆ R^n. Player II gets as input an orthogonal matrix T : R^n → R^n. Their goal is to answer 0 if T(x) ∈ M_0 and 1 if T(x) ∈ M_1 (and any answer in any other case). We give an almost tight analysis for the quantum communication complexity and for the classical probabilistic communication complexity of this problem.
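The exponential quantum saving comes from a simple one-way protocol (a standard sketch of the idea, not a quotation from the paper): Player I encodes the unit vector \(x \in \mathbb{R}^n\) as a quantum state on \(\lceil \log n \rceil\) qubits,

\[
|x\rangle \;=\; \sum_{i=1}^{n} x_i \,|i\rangle,
\]

and sends it to Player II, who applies the unitary \(T\) and measures with the projectors onto \(M_0\) and \(M_1\). Under the promise that \(T(x)\) lies in (or very near) one of the two orthogonal subspaces, the measurement returns the correct bit with high probability, using only \(O(\log n)\) qubits of communication.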
foundations of computer science | 1997
Ran Raz; Pierre McKenzie
We show how to extract random bits from two or more independent weak random sources in cases where only one source is of linear min-entropy and all other sources are of logarithmic min-entropy. Our main results are as follows:

1. A long line of research, starting with Nisan and Zuckerman [14], gives explicit constructions of seeded extractors, that is, extractors that use a short seed of truly random bits to extract randomness from a weak random source. For every such extractor E, with seed of length d, we construct an extractor E′, with seed of length d′ = O(d), that achieves the same parameters as E but only requires the seed to be of min-entropy larger than (1/2 + δ)·d′ (rather than fully random), where δ is an arbitrarily small constant.

2. Fundamental results of Chor and Goldreich, and of Vazirani [6, 21], show how to extract Ω(n) random bits from two (independent) sources of length n and min-entropy larger than (1/2 + δ)·n, where δ is an arbitrarily small constant. We show how to extract Ω(n) random bits (with optimal probability of error) when only one source is of min-entropy (1/2 + δ)·n and the other source is of logarithmic min-entropy.

3. A recent breakthrough of Barak, Impagliazzo and Wigderson [4] shows how to extract Ω(n) random bits from a constant number of (independent) sources of length n and min-entropy larger than δn, where δ is an arbitrarily small constant. We show how to extract Ω(n) random bits (with optimal probability of error) when only one source is of min-entropy δn and all other (constant number of) sources are of logarithmic min-entropy.

4. A very recent result of Barak, Kindler, Shaltiel, Sudakov and Wigderson [5] shows how to extract a constant number of random bits from three (independent) sources of length n and min-entropy larger than δn, where δ is an arbitrarily small constant. We show how to extract Ω(n) random bits, with sub-constant probability of error, from one source of min-entropy δn and two sources of logarithmic min-entropy.

5. In the same paper, Barak, Kindler, Shaltiel, Sudakov and Wigderson [5] give an explicit coloring of the complete bipartite graph of size 2^n x 2^n with two colors, such that there is no monochromatic subgraph of size larger than 2^{δn} x 2^{δn}, where δ is an arbitrarily small constant. We give an explicit coloring of the complete bipartite graph of size 2^n x 2^n with a constant number of colors, such that there is no monochromatic subgraph of size larger than 2^{δn} x n^5.

We also give improved constructions of mergers and condensers. In particular:

1. We show that using a constant number of truly random bits, one can condense a source of length n and min-entropy rate δ into a source of length Ω(n) and min-entropy rate 1-δ, where δ is an arbitrarily small constant.

2. We show that using a constant number of truly random bits, one can merge a constant number of sources of length n, such that at least one of them is of min-entropy rate 1-δ, into one source of length Ω(n) and min-entropy rate slightly less than 1-δ, where δ is any small constant.
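For orientation, the objects in the lists above have standard definitions (notation mine, not the paper's). The min-entropy of a source \(X\) is

\[
H_\infty(X) \;=\; \min_{x \in \mathrm{supp}(X)} \log_2 \frac{1}{\Pr[X = x]},
\]

and a function \(E : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m\) is a seeded \((k,\varepsilon)\)-extractor if \(E(X, U_d)\) is \(\varepsilon\)-close in statistical distance to the uniform distribution \(U_m\) for every source \(X\) with \(H_\infty(X) \ge k\). Result 1 above weakens the seed requirement from a uniform \(U_d\) to any seed of min-entropy at least \((1/2 + \delta) \cdot d'\).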
symposium on the theory of computing | 1995
Ran Raz
We give a deterministic polynomial-time algorithm for polynomial identity testing in the following two cases: 1. Non-commutative arithmetic formulas: the algorithm gets as input an arithmetic formula in the non-commuting variables x_1, ..., x_n and determines whether or not the output of the formula is identically 0 (as a formal expression). 2. Pure arithmetic circuits: the algorithm gets as input a pure arithmetic circuit (as defined by N. Nisan and A. Wigderson (1996)) in the variables x_1, ..., x_n and determines whether or not the output of the circuit is identically 0 (as a formal expression). We also give a deterministic polynomial-time identity-testing algorithm for non-commutative algebraic branching programs as defined by N. Nisan (1991). One application is deterministic polynomial-time identity testing for multilinear arithmetic circuits of depth 3. Finally, we observe an exponential lower bound for the size of pure arithmetic circuits for the permanent and for the determinant. (Only lower bounds for the depth of pure circuits were previously known, by N. Nisan and A. Wigderson (1996).)
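For contrast with the deterministic algorithms above, the classical randomized approach to identity testing is the Schwartz-Zippel evaluation test. A minimal commutative sketch in Python (this illustrates the standard randomized baseline, not the paper's deterministic non-commutative algorithm; the function names are mine):

    import random

    def is_identically_zero(poly, n_vars, degree, p=2**61 - 1, trials=20):
        # Schwartz-Zippel: if the polynomial computed by `poly` has total
        # degree <= `degree` and is not identically zero, then a uniformly
        # random point in (Z_p)^n_vars evaluates to zero with probability
        # at most degree/p per trial.
        for _ in range(trials):
            point = [random.randrange(p) for _ in range(n_vars)]
            if poly(point) % p != 0:
                return False  # witness found: not the zero polynomial
        return True  # identically zero, up to error (degree/p)^trials

    # Usage: (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
    f = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
    print(is_identically_zero(f, n_vars=2, degree=2))  # prints True

The point of the paper is that in the non-commutative and pure-circuit settings this use of randomness can be removed entirely.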
Journal of the ACM | 2010
Dana Moshkovitz; Ran Raz
We prove tight lower bounds, of up to n^ε, for the monotone depth of functions in monotone-P. As a result we achieve the separation of the following classes. 1. monotone-NC ≠ monotone-P. 2. For every i ≥ 1, monotone-NC^i ≠ monotone-NC^{i+1}. 3. More generally: for any integer function D(n), up to n^ε (for some ε > 0), we give an explicit example of a monotone Boolean function that can be computed by polynomial-size monotone Boolean circuits of depth D(n), but that cannot be computed by any (fan-in 2) monotone Boolean circuits of depth less than Const·D(n) (for some constant Const). Only a separation of monotone-NC^1 from monotone-NC^2 was previously known. Our argument is more general: we define a new class of communication complexity search problems, referred to below as DART games, and we prove a tight lower bound for the communication complexity of every member of this class. As a result we get lower bounds for the monotone depth of many functions. In particular, we get the following bounds: 1. For st-connectivity, we get a tight lower bound of Ω(log^2 n). That is, we get a new proof of Karchmer and Wigderson's theorem, as an immediate corollary of our general result. 2. For the k-clique function, with k ≤ n^ε, we get a tight lower bound of Ω(k log n). This lower bound was previously known for k ≤ log n [1]. For larger k, however, only a bound of Ω(k) was previously known.
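The bridge from communication complexity to circuit depth used here is the standard Karchmer-Wigderson correspondence (stated in my notation, for orientation): for a monotone Boolean function f, Alice holds an input x with f(x) = 1, Bob holds an input y with f(y) = 0, and they must agree on a coordinate witnessing the difference,

\[
\mathrm{KW}^{+}_{f} \;=\; \bigl\{ (x, y, i) \;:\; f(x) = 1,\ f(y) = 0,\ x_i = 1,\ y_i = 0 \bigr\},
\]

and the minimal monotone circuit depth of f equals the deterministic communication complexity of this search problem:

\[
\mathrm{depth}^{+}(f) \;=\; \mathrm{CC}\bigl(\mathrm{KW}^{+}_{f}\bigr).
\]

A tight communication lower bound for the search problem (as proved here for every DART game) therefore yields a tight monotone-depth lower bound directly.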
SIAM Journal on Computing | 2000
Maria Luisa Bonet; Toniann Pitassi; Ran Raz
conference on computational complexity | 2008
Ran Raz; Amir Yehudayoff
We show that the NP-complete language 3Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query. Previously, by the parallel repetition theorem, there were PCP theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP theorems with sub-constant error and almost-linear size, but with a constant number of queries that is larger than 2. As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following: (1) 3Sat cannot be efficiently approximated to within a factor of 7/8 + o(1), unless P = NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 7/8 + ε for any constant ε > 0, under polynomial reductions. (2) 3Lin cannot be efficiently approximated to within a factor of 1/2 + o(1), unless P = NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 1/2 + ε for any constant ε > 0, under polynomial reductions. (3) A PCP theorem with amortized query complexity 1 + o(1) and amortized free-bit complexity o(1). Previously, the best known amortized query complexity and free-bit complexity were 1 + ε and ε, respectively, for any constant ε > 0. One of the new ideas that we use is a new technique for doing the composition step in the (classical) proof of the PCP theorem, without increasing the number of queries to the proof. We formalize this as a composition of new objects that we call Locally Decode/Reject Codes (LDRCs). The notion of LDRC was implicit in several previous works, and we make it explicit in this work. We believe that the formulation of LDRCs and their construction are of independent interest.
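Since the verifier performs only projection tests, its two queries define a Label Cover (projection) game; formally (a standard formulation, not quoted from the paper), each random string of the verifier selects a query pair \((q_1, q_2)\) together with a projection \(\pi : \Sigma_1 \to \Sigma_2\), and an answer pair \((a_1, a_2)\) is accepted iff

\[
\pi(a_1) \;=\; a_2.
\]

In this language, the theorem says that deciding satisfiability of a 3Sat instance of size \(n\) reduces to distinguishing projection games of size \(n^{1+o(1)}\) with value 1 from those with value \(o(1)\).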