Publication


Featured research published by Dana Moshkovitz.


ACM Transactions on Algorithms | 2006

Algorithmic construction of sets for k-restrictions

Noga Alon; Dana Moshkovitz; Shmuel Safra

This work addresses k-restriction problems, which unify combinatorial problems of the following type: the goal is to construct a short list of strings in Σ^m that satisfies a given set of k-wise demands. For every k positions and every demand, there must be at least one string in the list that satisfies the demand at these positions. Problems of this form frequently arise in different fields of Computer Science. The standard approach for deterministically solving such problems is via almost k-wise independence or k-wise approximations for other distributions. We offer a generic algorithmic method that yields considerably smaller constructions. To this end, we generalize a previous work of Naor et al. [1995]. Among other results, we enhance the combinatorial objects at the heart of their method, called splitters, and construct multi-way splitters, using a new discrete version of the topological Necklace Splitting Theorem [Alon 1987]. We utilize our methods to show improved constructions for group testing [Ngo and Du 2000] and generalized hashing [Alon et al. 2003], and an improved inapproximability result for SET-COVER under the assumption P ≠ NP.
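To pin down the setup, here is a brute-force sketch (in Python; the function name and data layout are illustrative, not from the paper) that checks whether a candidate list of strings satisfies a set of k-wise demands. The paper's contribution is constructing much shorter lists deterministically, not this naive check.

```python
from itertools import combinations

def satisfies_k_restrictions(strings, m, k, demands):
    """Check that a list of strings in Sigma^m meets every k-wise demand.

    A demand is a predicate on a k-tuple of symbols; the requirement is
    that for every choice of k positions and every demand, some string
    in the list satisfies the demand at those positions.
    """
    for positions in combinations(range(m), k):
        for demand in demands:
            if not any(demand(tuple(s[i] for i in positions)) for s in strings):
                return False
    return True

# Example demands over Sigma = {0, 1} with k = 2: for every pair of
# positions, both the pattern (0, 1) and the pattern (1, 0) must appear.
strings = ["0011", "0101", "1010", "1100"]
demands = [lambda t: t == ("0", "1"), lambda t: t == ("1", "0")]
print(satisfies_k_restrictions(strings, m=4, k=2, demands=demands))  # True
```

Even this toy instance shows why short lists are nontrivial: a single string can never satisfy two mutually exclusive demands at the same positions, so the list length must grow with the demand set.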


Theory of Computing | 2012

The Projection Games Conjecture and the NP-Hardness of ln n-Approximating Set-Cover

Dana Moshkovitz

We suggest the research agenda of establishing new hardness of approximation results based on the “projection games conjecture”, i.e., an instantiation of the Sliding Scale Conjecture of Bellare, Goldwasser, Lund and Russell to projection games.


Symposium on the Theory of Computing | 2006

On basing one-way functions on NP-hardness

Adi Akavia; Oded Goldreich; Shafi Goldwasser; Dana Moshkovitz

We consider the possibility of basing one-way functions on NP-hardness; that is, we study possible reductions from a worst-case decision problem to the task of average-case inverting a polynomial-time computable function f. Our main findings are the following two negative results: (1) If, given y, one can efficiently compute |f⁻¹(y)|, then the existence of a (randomized) reduction of NP to the task of inverting f implies that coNP ⊆ AM; thus, such reductions cannot exist unless coNP ⊆ AM. (2) For any function f, the existence of a (randomized) non-adaptive reduction of NP to the task of average-case inverting f implies that coNP ⊆ AM. Our work builds upon and improves on the previous works of Feigenbaum and Fortnow (SIAM Journal on Computing, 1993) and Bogdanov and Trevisan (44th FOCS, 2003), while capitalizing on the additional computational structure of the search problem associated with the task of inverting polynomial-time computable functions. We believe that our results illustrate the gain of directly studying the context of one-way functions rather than inferring results for it from the general study of worst-case to average-case reductions.


Conference on Computational Complexity | 2014

AM with Multiple Merlins

Scott Aaronson; Russell Impagliazzo; Dana Moshkovitz

We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right. We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms nearly matching this lower bound are known, but their running times had never been previously explained. Brandão and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games. In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k) = AM for all k = poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame.
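For intuition about the central object, the value of a free game can be computed exactly for tiny instances by enumerating deterministic Merlin strategies. The Python sketch below (names and data layout are hypothetical) is exponential-time and only meant to pin down the definition; the paper's point is that a quasipolynomial-time approximation suffices.

```python
from itertools import product

def free_game_value(Qx, Qy, A_ans, B_ans, V):
    """Exact value of a free game by enumerating deterministic strategies.

    Arthur draws questions x in Qx and y in Qy independently and
    uniformly; the two Merlins answer with functions a = A(x), b = B(y);
    the value is the maximum, over all strategy pairs, of the
    acceptance probability under predicate V(x, y, a, b).
    """
    best = 0.0
    for A in product(A_ans, repeat=len(Qx)):        # strategy of Merlin 1
        for B in product(B_ans, repeat=len(Qy)):    # strategy of Merlin 2
            wins = sum(V(x, y, A[i], B[j])
                       for i, x in enumerate(Qx)
                       for j, y in enumerate(Qy))
            best = max(best, wins / (len(Qx) * len(Qy)))
    return best

# Toy game: accept iff a XOR b == x AND y (the CHSH predicate);
# the best classical strategies win on 3 of the 4 question pairs.
val = free_game_value([0, 1], [0, 1], [0, 1], [0, 1],
                      lambda x, y, a, b: (a ^ b) == (x & y))
print(val)  # 0.75
```

The non-communication assumption is what makes the product structure over (A, B) valid; with a shared strategy the game would no longer be free.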


Foundations of Computer Science | 2008

Two Query PCP with Sub-Constant Error

Dana Moshkovitz; Ran Raz

We show that the NP-complete language 3Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query. Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems with sub-constant error and almost-linear size, but with a constant number of queries larger than 2. As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following: (1) 3Sat cannot be efficiently approximated to within a factor of 7/8 + o(1), unless P = NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 7/8 + ε for any constant ε > 0, under polynomial reductions. (2) 3Lin cannot be efficiently approximated to within a factor of 1/2 + o(1), unless P = NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 1/2 + ε for any constant ε > 0, under polynomial reductions. (3) A PCP Theorem with amortized query complexity 1 + o(1) and amortized free bit complexity o(1). Previously, the best known amortized query complexity and free bit complexity were 1 + ε and ε, respectively, for any constant ε > 0. One of the new ideas we use is a new technique for doing the composition step in the (classical) proof of the PCP Theorem without increasing the number of queries to the proof. We formalize this as a composition of new objects that we call Locally Decode/Reject Codes (LDRCs). The notion of an LDRC was implicit in several previous works, and we make it explicit in this work. We believe that the formulation of LDRCs and their construction are of independent interest.
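The projection-test acceptance predicate itself is easy to state in code. The Python sketch below (the data layout is hypothetical) shows one verifier round: sample a constraint, read one symbol from each of two proof tables, and accept iff the projection maps the first answer to the second. It says nothing, of course, about how the actual construction achieves sub-constant error at almost-linear size.

```python
import random

def projection_test(proof_A, proof_B, constraints, rng=random):
    """One round of a two-query projection test.

    Each constraint (i, j, pi) names one position in each proof table
    and a projection pi: the first answer proof_A[i] determines at most
    one accepting answer, namely pi[proof_A[i]], for the second query.
    """
    i, j, pi = rng.choice(constraints)
    return pi[proof_A[i]] == proof_B[j]

# Toy instance: pi collapses alphabet {0, 1, 2} to parity.
pi = {0: 0, 1: 1, 2: 0}
proof_A = [2, 1, 0]
proof_B = [pi[a] for a in proof_A]            # honest second proof
constraints = [(0, 0, pi), (1, 1, pi), (2, 2, pi)]
print(all(projection_test(proof_A, proof_B, constraints)
          for _ in range(20)))  # True: an honest proof always passes
```

The asymmetry is the point: the first query may use a large alphabet while each of its answers pins down a unique accepting answer for the second query.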


Symposium on the Theory of Computing | 2006

Sub-constant error low degree test of almost-linear size

Dana Moshkovitz; Ran Raz

Given a function f : F^m → F over a finite field F, a low degree tester tests its agreement with an m-variate polynomial of total degree at most d over F. The tester is usually given access to an oracle A providing the supposed restrictions of f to affine subspaces of constant dimension (e.g., lines, planes, etc.). The tester makes very few (probabilistic) queries to f and to A (say, one query to f and one query to A), and decides whether to accept or reject based on the replies. We wish to minimize two parameters of a tester: its error and its size. The error bounds the probability that the tester accepts although the function is far from a low degree polynomial. The size is the number of bits required to write the oracle replies for all possible queries of the tester. Low degree testing is a central ingredient in most constructions of probabilistically checkable proofs (PCPs) and locally testable codes (LTCs). The error of the low degree tester is related to the soundness of the PCP, and its size is related to the size of the PCP (or the length of the LTC). We design and analyze new low degree testers that have both sub-constant error o(1) and almost-linear size n^{1+o(1)} (where n = |F|^m). Previous constructions of sub-constant error testers had polynomial size [13, 16]. These testers enabled the construction of PCPs with sub-constant soundness, but polynomial size [13, 16, 9]. Previous constructions of almost-linear size testers obtained only constant error [13, 7]. These testers were used to construct almost-linear size LTCs and almost-linear size PCPs with constant soundness [13, 7, 5, 6, 8].
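To make the object concrete, here is a minimal line-vs-point tester over a small prime field, sketched in Python with an honest oracle (all names and parameters are illustrative, not the paper's construction). The line oracle answers with the restriction of f to a random affine line, and the tester accepts iff that answer agrees with f at a random point of the line.

```python
import random

P = 13            # a small prime, so arithmetic mod P is a field
M = 2             # number of variables

def f(point):
    """Honest point oracle: a degree-2 polynomial in 2 variables over F_13."""
    x, y = point
    return (3 * x * x + 2 * x * y + 5 * y + 1) % P

def restrict_to_line(a, b):
    """Honest line oracle: the table of f on the line {a + t*b : t in F_P},
    i.e. the values of a low-degree univariate polynomial in t."""
    return [f(tuple((a[i] + t * b[i]) % P for i in range(M))) for t in range(P)]

def line_point_test():
    """Pick a random affine line and a random point on it; accept iff
    the line oracle and the point oracle agree there."""
    a = tuple(random.randrange(P) for _ in range(M))
    b = tuple(random.randrange(P) for _ in range(M))
    line_vals = restrict_to_line(a, b)
    t = random.randrange(P)
    point = tuple((a[i] + t * b[i]) % P for i in range(M))
    return line_vals[t] == f(point)

print(all(line_point_test() for _ in range(100)))  # True: honest oracles pass
```

A cheating line oracle that answers with arbitrary (not consistent-with-f) tables is what the soundness analysis must handle; the honest case above only illustrates the query pattern whose total answer length determines the tester's size.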


SIAM Journal on Computing | 2008

Sub-Constant Error Low Degree Test of Almost-Linear Size

Dana Moshkovitz; Ran Raz

Given (the table of) a function f : F^m → F over a finite field F, …


Symposium on the Theory of Computing | 2011

NP-hardness of approximately solving linear equations over reals

Subhash Khot; Dana Moshkovitz


Computational Complexity | 2010

Sub-Constant Error Probabilistically Checkable Proof of Almost-Linear Size

Dana Moshkovitz; Ran Raz


Foundations of Computer Science | 2014

Parallel Repetition from Fortification

Dana Moshkovitz

Collaboration


Dive into Dana Moshkovitz's collaborations.

Top Co-Authors

Ran Raz (Weizmann Institute of Science)

Ofer Grossman (Massachusetts Institute of Technology)

Scott Aaronson (Massachusetts Institute of Technology)

Alantha Newman (Massachusetts Institute of Technology)