
Publication


Featured research published by Rocco A. Servedio.


Journal of Computer and System Sciences | 2004

Learning DNF in time 2^{Õ(n^{1/3})}

Adam R. Klivans; Rocco A. Servedio

Using techniques from learning theory, we show that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^{1/3} log s). This upper bound matches, up to a logarithmic factor, the longstanding lower bound given by Minsky and Papert in their 1968 book Perceptrons. As a consequence of this upper bound we obtain the fastest known algorithm for learning polynomial size DNF, one of the central problems in computational learning theory.
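
To make explicit the standard reasoning step that connects the degree bound to the running time (a sketch; the abstract states only the endpoints):

```latex
% A polynomial threshold function (PTF) of degree d is f(x) = sign(p(x))
% for a real polynomial p of degree d. A degree-d PTF is a halfspace over
% the n^{O(d)} monomials of degree at most d, and halfspaces are learnable
% in time polynomial in the number of features (e.g., via linear
% programming), so:
\[
  \deg_{\mathrm{PTF}}(f) = O\!\left(n^{1/3}\log s\right)
  \;\Longrightarrow\;
  \text{learning time } n^{O(n^{1/3}\log s)} = 2^{\tilde{O}(n^{1/3})}
  \quad \text{for } s = \mathrm{poly}(n).
\]
```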


Foundations of Computer Science | 2002

Learning intersections and thresholds of halfspaces

Adam R. Klivans; Ryan O'Donnell; Rocco A. Servedio

We give the first polynomial time algorithm to learn any function of a constant number of halfspaces under the uniform distribution to within any constant error parameter. We also give the first quasipolynomial time algorithm for learning any function of a polylog number of polynomial-weight halfspaces under any distribution. As special cases of these results we obtain algorithms for learning intersections and thresholds of halfspaces. Our uniform distribution learning algorithms involve a novel non-geometric approach to learning halfspaces; we use Fourier techniques together with a careful analysis of the noise sensitivity of functions of halfspaces. Our algorithms for learning under any distribution use techniques from real approximation theory to construct low degree polynomial threshold functions.
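
The noise-sensitivity quantity at the heart of the uniform-distribution analysis can be estimated empirically. A minimal sketch (the intersection-of-halfspaces target and all parameters here are illustrative choices, not from the paper):

```python
import random

def noise_sensitivity(f, n, delta, trials=20000):
    """Monte Carlo estimate of NS_delta(f) = Pr[f(x) != f(y)], where x is
    uniform over {-1,1}^n and y flips each bit of x independently with
    probability delta."""
    disagree = 0
    for _ in range(trials):
        x = [random.choice((-1, 1)) for _ in range(n)]
        y = [-xi if random.random() < delta else xi for xi in x]
        disagree += f(x) != f(y)
    return disagree / trials

# Illustrative target: an intersection (AND) of two random halfspaces.
n = 20
w1 = [random.gauss(0, 1) for _ in range(n)]
w2 = [random.gauss(0, 1) for _ in range(n)]

def target(x):
    """+1 iff x lies in both halfspaces, else -1."""
    return 1 if (sum(a * b for a, b in zip(w1, x)) >= 0
                 and sum(a * b for a, b in zip(w2, x)) >= 0) else -1

print(noise_sensitivity(target, n, delta=0.05))
```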


Journal of Machine Learning Research | 2003

Smooth boosting and learning with malicious noise

Rocco A. Servedio

We describe a new boosting algorithm that generates only smooth distributions, which do not assign too much weight to any single example. We show that this new boosting algorithm can be used to construct efficient PAC learning algorithms which tolerate relatively high rates of malicious noise. In particular, we use the new smooth boosting algorithm to construct malicious noise tolerant versions of the PAC-model p-norm linear threshold learning algorithms described by Servedio (2002). The bounds on sample complexity and malicious noise tolerance of these new PAC algorithms closely correspond to known bounds for the online p-norm algorithms of Grove, Littlestone and Schuurmans (1997) and Gentile and Littlestone (1999). As special cases of our new algorithms we obtain linear threshold learning algorithms which match the sample complexity and malicious noise tolerance of the online Perceptron and Winnow algorithms. Our analysis reveals an interesting connection between boosting and noise tolerance in the PAC setting.
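
To illustrate what "smooth" means here, a simplified capped reweighting step (a sketch of the general idea only, not the paper's exact algorithm; the update rule and constants are assumptions for illustration):

```python
def smooth_update(weights, correct, eps, beta=0.7):
    """One illustrative round of a smooth-boosting-style reweighting:
    multiplicatively down-weight the examples the current weak hypothesis
    classifies correctly, then cap every probability at 1/(eps*m) so no
    single example carries too much mass, i.e., the distribution stays
    (1/eps)-smooth."""
    m = len(weights)
    cap = 1.0 / (eps * m)
    new = [w * beta if ok else w for w, ok in zip(weights, correct)]
    total = sum(new)
    probs = [w / total for w in new]
    # One clipping pass for illustration; in general clip-and-renormalize
    # may need to be repeated until no probability exceeds the cap.
    probs = [min(p, cap) for p in probs]
    total = sum(probs)
    return [p / total for p in probs]

# Ten examples; the weak hypothesis errs on the first two.
dist = smooth_update([1.0] * 10, [False, False] + [True] * 8, eps=0.2)
print(dist)  # a distribution in which no single example dominates
```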


Symposium on the Theory of Computing | 2001

Learning DNF in time 2^{Õ(n^{1/3})}

Adam R. Klivans; Rocco A. Servedio

Using techniques from learning theory, we show that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^{1/3} log s). This upper bound matches, up to a logarithmic factor, the longstanding lower bound given by Minsky and Papert in their 1968 book Perceptrons. As a consequence of this upper bound we obtain the fastest known algorithm for learning polynomial size DNF, one of the central problems in computational learning theory.


International Conference on Machine Learning | 2008

Random classification noise defeats all convex potential boosters

Philip M. Long; Rocco A. Servedio

A broad class of boosting algorithms can be interpreted as performing coordinate-wise gradient descent to minimize some potential function of the margins of a data set. This class includes AdaBoost, LogitBoost, and other widely used and well-studied boosters. In this paper we show that for a broad class of convex potential functions, any such boosting algorithm is highly susceptible to random classification noise. We do this by showing that for any such booster and any nonzero random classification noise rate η, there is a simple data set of examples which is efficiently learnable by such a booster if there is no noise, but which cannot be learned to accuracy better than 1/2 if there is random classification noise at rate η. This negative result is in contrast with known branching program based boosters which do not fall into the convex potential function framework and which can provably learn to high accuracy in the presence of random classification noise.
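
The "convex potential" view can be made concrete in a few lines. A minimal sketch (with φ(z) = e^{-z} this recovers AdaBoost's example reweighting; the margin values below are toy inputs):

```python
import math

def potential_weights(margins, phi_prime):
    """Distribution used by a convex potential booster: example i gets
    weight proportional to -phi'(margin_i), i.e., the negated gradient of
    the total potential sum_i phi(y_i * F(x_i)) with respect to the margin."""
    raw = [-phi_prime(m) for m in margins]
    total = sum(raw)
    return [r / total for r in raw]

# phi(z) = exp(-z) has phi'(z) = -exp(-z), recovering AdaBoost's
# weight w_i proportional to exp(-y_i * F(x_i)).
adaboost_phi_prime = lambda z: -math.exp(-z)

margins = [2.0, 0.5, -1.0]   # y_i * F(x_i) for three examples
print(potential_weights(margins, adaboost_phi_prime))
# The misclassified example (margin -1.0) receives the most weight; the
# coordinate-wise step then picks the weak hypothesis with least weighted
# error under this distribution.
```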


Journal of Computer and System Sciences | 2004

Learning functions of k relevant variables

Elchanan Mossel; Ryan O'Donnell; Rocco A. Servedio

We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function that depends on an unknown set of k out of n Boolean variables. We give an algorithm for learning such functions from uniform random examples that runs in time roughly n^{kω/(ω+1)}, where ω < 2.376 is the matrix multiplication exponent. We thus obtain the first polynomial-factor improvement on the naive n^k time bound which can be achieved via exhaustive search. Our algorithm and analysis exploit new structural properties of Boolean functions.
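
Plugging in the value of ω makes the polynomial-factor savings concrete:

```latex
% Worked comparison of the exponents in the abstract:
\[
  n^{\frac{\omega}{\omega+1}k} \approx n^{0.704\,k}
  \quad \text{for } \omega < 2.376,
  \qquad \text{vs.} \qquad
  n^{k} \text{ for naive exhaustive search over all } \binom{n}{k}
  \text{ candidate variable subsets.}
\]
```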


SIAM Journal on Computing | 2011

Testing Fourier Dimensionality and Sparsity

Parikshit Gopalan; Ryan O'Donnell; Rocco A. Servedio; Amir Shpilka; Karl Wimmer

We present a range of new results for testing properties of Boolean functions that are defined in terms of the Fourier spectrum. Broadly speaking, our results show that the property of a Boolean function having a concise Fourier representation is locally testable. We give the first efficient algorithms for testing whether a Boolean function has a sparse Fourier spectrum (small number of nonzero coefficients) and for testing whether the Fourier spectrum of a Boolean function is supported in a low-dimensional subspace of F_2^n. In both cases we also prove lower bounds showing that any testing algorithm—even an adaptive one—must have query complexity within a polynomial factor of our algorithms, which are nonadaptive. Building on these results, we give an "implicit learning" algorithm that lets us test any subproperty of Fourier concision. We also present some applications of these results to exact learning and decoding. Our technical contributions include new structural results about sparse Boolean functions and new analysis of the pairwise independent hashing of Fourier coefficients from [V. Feldman, P. Gopalan, S. Khot, and A. Ponnuswami, Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006, pp. 563-576].


Foundations of Computer Science | 1999

Boosting and hard-core sets

Adam R. Klivans; Rocco A. Servedio

This paper connects two fundamental ideas from theoretical computer science: hard-core set construction, a type of hardness amplification from computational complexity, and boosting, a technique from computational learning theory. Using this connection we give fruitful applications of complexity-theoretic techniques to learning theory and vice versa. We show that the hard-core set construction of R. Impagliazzo (1995), which establishes the existence of distributions under which Boolean functions are highly inapproximable, may be viewed as a boosting algorithm. Using alternate boosting methods we give an improved bound for hard-core set construction which matches known lower bounds from boosting and thus is optimal within this class of techniques. We then show how to apply Impagliazzo's techniques to give a new version of Jackson's celebrated Harmonic Sieve algorithm for learning DNF formulae under the uniform distribution using membership queries. Our new version has a significant asymptotic improvement in running time. Critical to our arguments is a careful analysis of the distributions which are employed in both boosting and hard-core set constructions.


International Symposium on Information Theory | 2004

LP decoding corrects a constant fraction of errors

Jon Feldman; Tal Malkin; Rocco A. Servedio; Cliff Stein; Martin J. Wainwright


Symposium on the Theory of Computing | 2002

Learnability beyond AC^0

Jeffrey C. Jackson; Adam R. Klivans; Rocco A. Servedio
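
The basic sampling primitive behind the Fourier-spectrum testers in the SIAM Journal on Computing paper above can be sketched as follows (an illustrative estimator of a single coefficient, not the paper's tester; the parity target and sample counts are toy choices):

```python
import random

def estimate_fourier_coefficient(f, alpha, n, samples=20000):
    """Estimate f_hat(alpha) = E_x[ f(x) * chi_alpha(x) ] for a Boolean
    function f: {0,1}^n -> {-1,1}, where chi_alpha(x) = (-1)^(alpha . x)
    is the character indexed by alpha in F_2^n."""
    total = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        chi = (-1) ** sum(a & b for a, b in zip(alpha, x))
        total += f(x) * chi
    return total / samples

# Toy target: the parity of the first two bits, whose spectrum is 1-sparse.
n = 6
f = lambda x: (-1) ** (x[0] ^ x[1])
print(estimate_fourier_coefficient(f, [1, 1, 0, 0, 0, 0], n))  # ~ 1.0
print(estimate_fourier_coefficient(f, [1, 0, 0, 0, 0, 0], n))  # ~ 0.0
```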

Collaboration


Dive into Rocco A. Servedio's collaborations.

Top Co-Authors

Ilias Diakonikolas | University of Southern California
Ryan O'Donnell | Carnegie Mellon University
Adam R. Klivans | University of Texas at Austin
Philip M. Long | National University of Singapore
Constantinos Daskalakis | Massachusetts Institute of Technology