

Publication


Featured research published by Ronitt Rubinfeld.


Symposium on the Theory of Computing | 1990

Self-testing/correcting with applications to numerical problems

Manuel Blum; Michael Luby; Ronitt Rubinfeld

Suppose someone gives us an extremely fast program P that we can call as a black box to compute a function f. Should we trust that P works correctly? A self-testing/correcting pair for f allows us to: (1) estimate the probability that P(x) ≠ f(x) when x is randomly chosen; (2) on any input x, compute f(x) correctly as long as P is not too faulty on average. Furthermore, both (1) and (2) take time only slightly more than the original running time of P. We present general techniques for constructing simple-to-program self-testing/correcting pairs for a variety of numerical functions, including integer multiplication, modular multiplication, matrix multiplication, inverting matrices, computing the determinant of a matrix, computing the rank of a matrix, integer division, modular exponentiation, and polynomial multiplication.
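The self-correcting mechanism for a random self-reducible function can be made concrete with a small sketch. Everything named below (the modulus, the particular linear function, the 2%-faulty program) is an illustrative assumption, not taken from the paper; the sketch only shows the core idea, f(x) = f(x + r) − f(r) for random r, combined with a majority vote.

```python
import random

random.seed(0)
P_MOD = 1_000_003  # illustrative prime modulus

def true_f(x):
    """The linear function f(x) = 7x mod p that P is supposed to compute."""
    return (7 * x) % P_MOD

def faulty_P(x):
    """Black-box program for f that is wrong on about 2% of inputs."""
    return 0 if x % 50 == 0 else true_f(x)

def self_correct(P, x, trials=21):
    """Random self-reducibility for linear f: f(x) = f(x + r) - f(r) with r
    uniform, so each call to P lands on a random (hence likely correct)
    input; a majority vote over independent trials boosts confidence."""
    votes = {}
    for _ in range(trials):
        r = random.randrange(P_MOD)
        guess = (P((x + r) % P_MOD) - P(r)) % P_MOD
        votes[guess] = votes.get(guess, 0) + 1
    return max(votes, key=votes.get)

x = 50                                          # an input where the raw program errs
print(faulty_P(x) == true_f(x))                 # False
print(self_correct(faulty_P, x) == true_f(x))   # True (with high probability)
```

Note that the corrector never needs to know where P errs; it only needs P to be correct on most random inputs.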


SIAM Journal on Computing | 1996

Robust Characterizations of Polynomials with Applications to Program Testing

Ronitt Rubinfeld; Madhu Sudan

The study of self-testing and self-correcting programs leads to the search for robust characterizations of functions. Here the authors make this notion precise and show such a characterization for polynomials. From this characterization, the authors get the following applications. Simple and efficient self-testers for polynomial functions are constructed. The characterizations provide results in the area of coding theory by giving extremely fast and efficient error-detecting schemes for some well-known codes. This error-detection scheme plays a crucial role in subsequent results on the hardness of approximating some NP-optimization problems.
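The degree-1 special case of such a robust characterization is the classic linearity test: a function that satisfies f(x) + f(y) = f(x + y) on most random pairs must be close to a linear function. A minimal sketch, where the field size, trial count, and example functions are illustrative choices:

```python
import random

P = 101  # small prime field Z_p (illustrative)

def linearity_test(f, trials=200):
    """Check f(x) + f(y) == f(x + y) on random pairs over Z_p.
    Robustness means passing almost always implies closeness to a
    genuinely linear function, not just consistency on the samples."""
    for _ in range(trials):
        x, y = random.randrange(P), random.randrange(P)
        if (f(x) + f(y)) % P != f((x + y) % P):
            return False  # reject: found a violated triple
    return True  # accept: consistent with linearity on all sampled pairs

random.seed(1)
linear = lambda x: (13 * x) % P
quadratic = lambda x: (x * x) % P  # far from every linear map over Z_p
print(linearity_test(linear))      # True
print(linearity_test(quadratic))   # False (w.h.p.)
```

For the quadratic, a random pair violates the identity unless x or y is zero, so a couple hundred trials reject it with overwhelming probability.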


Symposium on the Theory of Computing | 1994

On the learnability of discrete distributions

Michael J. Kearns; Yishay Mansour; Dana Ron; Ronitt Rubinfeld; Robert E. Schapire; Linda Sellie

We introduce and investigate a new model of learning probability distributions from independent draws. Our model is inspired by the popular Probably Approximately Correct (PAC) model for learning boolean functions from labeled examples [24], in the sense that we emphasize efficient and approximate learning, and we study the learnability of restricted classes of target distributions. The distribution classes we examine are often defined by some simple computational mechanism for transforming a truly random string of input bits (which is not visible to the learning algorithm) into the stochastic observation (output) seen by the learning algorithm. In this paper, we concentrate on discrete distributions over {0,1}^n. The problem of inferring an approximation to an unknown probability distribution on the basis of independent draws has a long and complex history in the pattern recognition and statistics literature. For instance, the problem of estimating the parameters of a Gaussian density in high-dimensional space is one of the most studied statistical problems. Distribution learning problems have often been investigated in the context of unsupervised learning, in which a linear mixture of two or more distributions is generating the observations, and the final goal is not to model the distributions themselves, but to predict from which distribution each observation was drawn. Data clustering methods are a common tool here. There is also a large literature on nonparametric density estimation, in which no assumptions are made on the unknown target density. Nearest-neighbor approaches to the unsupervised learning problem often arise in the nonparametric setting. While we obviously cannot do justice to these areas here, the books of Duda and Hart [9] and Vapnik [25] provide excellent overviews and introductions to the pattern recognition work, as well as many pointers for further reading. See also Izenman's recent survey article [16].
Roughly speaking, our work departs from the traditional statistical and pattern recognition approaches in two ways. First, we place explicit emphasis on the computational complexity of distribution learning. It seems fair to say that while previous research has provided an excellent understanding of the information-theoretic issues involved in dis-


Foundations of Computer Science | 2000

Testing that distributions are close

Tuğkan Batu; Lance Fortnow; Ronitt Rubinfeld; Warren D. Smith; Patrick White

Given two distributions over an n-element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n^{2/3} ε^{-4} log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small (less than max(ε²/(32·∛n), ε/(4√n))) or large (more than ε) in L₁-distance. We also give an Ω(n^{2/3} ε^{-2/3}) lower bound. Our algorithm has applications to the problem of checking whether a given Markov process is rapidly mixing. We develop sublinear algorithms for this problem as well.
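The core of such closeness testers can be sketched with collision statistics: self-collisions within a sample estimate ‖p‖₂², and cross-collisions between two samples estimate ⟨p, q⟩, so ‖p − q‖₂² = ‖p‖² + ‖q‖² − 2⟨p, q⟩ can be estimated from samples alone. This is a simplified reading (the paper's full algorithm adds a bucketing step to convert the L₂ guarantee into the stated L₁ one), and the distributions and sample sizes below are illustrative.

```python
import random
from itertools import combinations

def self_collisions(sample):
    """Fraction of colliding pairs within one sample; its expectation is
    the collision probability ||p||_2^2 of the source distribution."""
    m = len(sample)
    return sum(a == b for a, b in combinations(sample, 2)) / (m * (m - 1) / 2)

def cross_collisions(s1, s2):
    """Fraction of colliding cross pairs; its expectation is <p, q>."""
    return sum(a == b for a in s1 for b in s2) / (len(s1) * len(s2))

def l2_dist_sq_estimate(s1, s2):
    """Estimate ||p - q||_2^2 = ||p||^2 + ||q||^2 - 2<p, q> from samples."""
    return self_collisions(s1) + self_collisions(s2) - 2 * cross_collisions(s1, s2)

random.seed(0)
m = 2000
same1 = [random.randrange(10) for _ in range(m)]     # both uniform on {0..9}
same2 = [random.randrange(10) for _ in range(m)]
far = [10 + random.randrange(10) for _ in range(m)]  # disjoint support
print(round(l2_dist_sq_estimate(same1, same2), 2))   # near 0.0
print(round(l2_dist_sq_estimate(same1, far), 2))     # near 0.2
```

For the disjoint-support pair the cross-collision term vanishes, so the estimate approaches ‖p‖² + ‖q‖² = 0.1 + 0.1.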


Symposium on the Theory of Computing | 1991

Self-testing/correcting for polynomials and for approximate functions

Peter Gemmell; Richard J. Lipton; Ronitt Rubinfeld; Madhu Sudan; Avi Wigderson

The study of self-testing/correcting programs was introduced in [8] in order to allow one to use program P to compute function f without trusting that P works correctly. A self-tester for f estimates the fraction of x for which P(x) = f(x); and a self-corrector for f takes a program that is correct on most inputs and turns it into a program that is correct on every input with high probability. Both access P only as a black box and in some precise way are not allowed to compute the function f. Self-correcting is usually easy when the function has the random self-reducibility property. One class of functions that has this property is the class of multivariate polynomials over finite fields [4] [12]. We extend this result in two directions. First, we show that polynomials are random self-reducible over more general domains: specifically, over the rationals and over noncommutative rings. Second, we show that one can get self-correctors even when the program satisfies weaker conditions, i.e., when the program has more errors, or when the program behaves in a more adversarial manner by changing the function it computes between successive calls. Self-testing is a much harder task. Previously it was known how to self-test for a few special examples of functions, such as the class of linear functions. We show that one can self-test the whole class of polynomial functions over Z_p for prime p.
We initiate the study of self-testing (and self-correcting) programs which only approximately compute f . This setting captures in particular the digital computation of real valued functions. We present a rigorous framework and obtain the first results in this area: namely that the class of linear functions, the log function and floating point exponentiation can be self-tested. All of the above functions also have self-correctors.


Symposium on the Theory of Computing | 1998

Spot-checkers

Funda Ergün; Sampath Kannan; S. Ravi Kumar; Ronitt Rubinfeld; Mahesh Viswanathan

On Labor Day weekend, the highway patrol sets up spot-checks at random points on the freeways with the intention of deterring a large fraction of motorists from driving incorrectly. We explore a very similar idea in the context of program checking to ascertain with minimal overhead that a program output is reasonably correct. Our model of spot-checking requires that the spot-checker must run asymptotically much faster than the combined length of the input and output. We then show that the spot-checking model can be applied to problems in a wide range of areas, including problems regarding graphs, sets, and algebra. In particular, we present spot-checkers for sorting, convex hull, element distinctness, set containment, set equality, total orders, and correctness of group and field operations. All of our spot-checkers are very simple to state and rely on testing that the input and/or output have certain simple properties that depend on very few bits. Our results also give property tests as defined by Rubinfeld and Sudan (1996, SIAM J. Comput. 25, 252–271), Rubinfeld (1994, "Proc. 35th Foundations of Computer Science," pp. 288–299), and Goldreich et al. (1998, J. Assoc. Comput. Mach. 45, 653–750).
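The sortedness check illustrates the "very few bits" flavor well: pick a random index and binary-search for its value; indices on consistent search paths form a sorted subsequence, so an array far from sorted fails many probes. A sketch, where the sample count and the index tie-break for duplicates are illustrative choices rather than the paper's exact procedure:

```python
import random

def search_finds(arr, i):
    """Binary-search for arr[i] (ties broken by index); succeed only if
    the search lands exactly on position i."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid == i:
            return True
        if (arr[mid], mid) < (arr[i], i):
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def spot_check_sorted(arr, samples=100):
    """Accept every sorted array; reject arrays far from sorted w.h.p.,
    reading only O(samples * log n) entries of arr."""
    return all(search_finds(arr, random.randrange(len(arr)))
               for _ in range(samples))

random.seed(0)
print(spot_check_sorted(list(range(100))))         # True
print(spot_check_sorted(list(range(100, 0, -1))))  # False (w.h.p.)
```

On a fully reversed array almost every probed index sends the binary search the wrong way, so a handful of probes suffices to reject.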


Foundations of Computer Science | 2001

Testing random variables for independence and identity

Tuğkan Batu; Eldar Fischer; Lance Fortnow; Ravi Kumar; Ronitt Rubinfeld; Patrick White

Given access to independent samples of a distribution A over [n] × [m], we show how to test whether the distributions formed by projecting A to each coordinate are independent, i.e., whether A is ε-close in the L₁ norm to the product distribution A₁ × A₂ for some distributions A₁ over [n] and A₂ over [m]. The sample complexity of our test is Õ(n^{2/3} m^{1/3} poly(ε^{-1})), assuming without loss of generality that m ≤ n. We also give a matching lower bound, up to poly(log n, ε^{-1}) factors. Furthermore, given access to samples of a distribution X over [n], we show how to test if X is ε-close in L₁ norm to an explicitly specified distribution Y. Our test uses Õ(n^{1/2} poly(ε^{-1})) samples, which nearly matches the known tight bounds for the case when Y is uniform.
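For the second problem, testing identity to an explicitly specified Y, the naive plug-in baseline below makes the task concrete; note it uses Θ(n) samples rather than the Õ(√n) of the actual test, and the distributions, sample sizes, and threshold are illustrative.

```python
import random
from collections import Counter

def l1_to_known(sample, q):
    """Plug-in L1 distance between the empirical distribution of `sample`
    and an explicitly specified distribution q over range(len(q))."""
    counts = Counter(sample)
    m = len(sample)
    return sum(abs(counts[i] / m - q[i]) for i in range(len(q)))

def identity_test(sample, q, eps=0.25):
    """Accept if the sampled distribution looks eps-close to q in L1."""
    return l1_to_known(sample, q) <= eps / 2

random.seed(0)
n, m = 20, 5000
uniform = [1 / n] * n
from_uniform = [random.randrange(n) for _ in range(m)]
from_skewed = [random.randrange(n // 2) for _ in range(m)]  # half the support
print(identity_test(from_uniform, uniform))  # True
print(identity_test(from_skewed, uniform))   # False
```

The skewed source puts all its mass on half the domain, giving plug-in L₁ distance close to 1, far above the acceptance threshold.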


Journal of Computer and System Sciences | 2006

Tolerant property testing and distance approximation

Michal Parnas; Dana Ron; Ronitt Rubinfeld

In this paper we study a generalization of standard property testing where the algorithms are required to be more tolerant with respect to objects that do not have, but are close to having, the property. Specifically, a tolerant property testing algorithm is required to accept objects that are ε₁-close to having a given property P and reject objects that are ε₂-far from having P, for some parameters 0 ≤ ε₁ < ε₂ ≤ 1. Another related natural extension of standard property testing that we study is distance approximation. Here the algorithm should output an estimate of the distance of the object to P, where this estimate is sufficiently close to the true distance of the object to P. We first formalize the notions of tolerant property testing and distance approximation and discuss the relationship between the two tasks, as well as their relationship to standard property testing. We then apply these new notions to the study of two problems: tolerant testing of clustering and distance approximation for monotonicity. We present and analyze algorithms whose query complexity is either polylogarithmic or independent of the size of the input.
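The definitions can be illustrated on a toy property, "the bit-string is all zeros", whose distance is simply the fraction of ones: estimate the distance by sampling and threshold between ε₁ and ε₂. The property, sample count, and parameter values below are illustrative, not from the paper.

```python
import random

def distance_estimate(s, samples=500):
    """Estimate the distance of bit-string s to the all-zeros property,
    i.e. the fraction of ones, by sampling random positions."""
    return sum(s[random.randrange(len(s))] for _ in range(samples)) / samples

def tolerant_test(s, eps1, eps2):
    """Accept strings eps1-close to all-zeros and reject strings eps2-far,
    by thresholding a distance estimate at the midpoint of [eps1, eps2]."""
    return distance_estimate(s) <= (eps1 + eps2) / 2

random.seed(0)
close = [1] * 10 + [0] * 990   # distance 0.01: should be accepted
far = [1] * 300 + [0] * 700    # distance 0.30: should be rejected
print(tolerant_test(close, 0.05, 0.20))  # True
print(tolerant_test(far, 0.05, 0.20))    # False
```

A standard (intolerant) tester may reject the first string because it is not exactly all zeros; tolerance is exactly the requirement to accept it anyway.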


Principles of Distributed Computing | 2001

Selective private function evaluation with applications to private statistics

Ran Canetti; Yuval Ishai; Ravi Kumar; Michael K. Reiter; Ronitt Rubinfeld; Rebecca N. Wright

Motivated by the application of private statistical analysis of large databases, we consider the problem of selective private function evaluation (SPFE). In this problem, a client interacts with one or more servers holding copies of a database x = x₁, …, x_n in order to compute f(x_{i₁}, …, x_{i_m}), for some function f and indices i = i₁, …, i_m chosen by the client. Ideally, the client must learn nothing more about the database than f(x_{i₁}, …, x_{i_m}), and the servers should learn nothing. Generic solutions for this problem, based on standard techniques for secure function evaluation, incur communication complexity that is at least linear in n, making them prohibitive for large databases even when f is relatively simple and m is small. We present various approaches for constructing sublinear-communication SPFE protocols, both for the general problem and for special cases of interest. Our solutions not only offer sublinear communication complexity, but are also practical in many scenarios.


Foundations of Computer Science | 1996

Short paths in expander graphs

Jon M. Kleinberg; Ronitt Rubinfeld

Graph expansion has proved to be a powerful general tool for analyzing the behavior of routing algorithms and the interconnection networks on which they run. We develop new routing algorithms and structural results for bounded-degree expander graphs. Our results are unified by the fact that they are all based upon, and extend, a body of work asserting that expanders are rich in short, disjoint paths. In particular, our work has consequences for the disjoint paths problem, multicommodity flow, and graph minor containment. We show: (i) A greedy algorithm for approximating the maximum disjoint paths problem achieves a polylogarithmic approximation ratio in bounded-degree expanders. Although our algorithm is both deterministic and on-line, its performance guarantee is an improvement over previous bounds in expanders. (ii) For a multicommodity flow problem with arbitrary demands on a bounded-degree expander, there is a (1+ε)-optimal solution using only flow paths of polylogarithmic length. It follows that the multicommodity flow algorithm of Awerbuch and Leighton runs in nearly linear time per commodity in expanders. Our analysis is based on establishing the following: given edge weights on an expander G, one can increase some of the weights very slightly so that the resulting shortest-path metric is smooth, in the sense that the min-weight path between any pair of nodes uses a polylogarithmic number of edges. (iii) Every bounded-degree expander on n nodes contains every graph with O(n/log^{O(1)} n) nodes and edges as a minor.

