Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shahar Mendelson is active.

Publication


Featured research published by Shahar Mendelson.


Annals of Statistics | 2005

Local Rademacher complexities

Peter L. Bartlett; Olivier Bousquet; Shahar Mendelson

We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical version of Rademacher averages, in the sense that the Rademacher averages are computed from the data, on a subset of functions with small empirical error. We present some applications to classification and prediction with convex function classes, and with kernel classes in particular.
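The data-dependent quantity at the heart of this line of work, the empirical Rademacher average, is easy to estimate by Monte Carlo for a finite class. A minimal illustrative sketch (not from the paper; the finite class, the fixed sample, and the `n_draws` parameter are assumptions of this note):

```python
import numpy as np

def empirical_rademacher(preds, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher average
        R_hat(F) = E_sigma sup_{f in F} (1/n) sum_i sigma_i f(x_i),
    where preds is a (|F|, n) array holding each function's values
    on the fixed sample x_1, ..., x_n.  The "local" variant studied
    in the paper would first restrict the rows of preds to functions
    with small empirical error."""
    rng = np.random.default_rng(seed)
    num_f, n = preds.shape
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher signs
    # For each draw of signs, take the supremum over the class,
    # then average over draws.
    sups = (sigma @ preds.T / n).max(axis=1)
    return sups.mean()
```

For the class of all ±1 labelings of four points the average is exactly 1 (the class can match every sign pattern), while the class containing only the zero function gives exactly 0.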


Lecture Notes in Computer Science | 2003

A few notes on statistical learning theory

Shahar Mendelson

In these notes our aim is to survey recent (and not so recent) results regarding the mathematical foundations of learning theory. The focus in this article is on the theoretical side and not on the applicative one; hence, we shall not present examples which may be interesting from the practical point of view but have little theoretical significance. This survey is far from being complete and it focuses on problems the author finds interesting (an opinion which is not necessarily shared by the majority of the learning community). Relevant books which present a more evenly balanced approach are, for example, [1], [4], [34], [35].


European Conference on Computational Learning Theory | 2001

Rademacher and Gaussian Complexities: Risk Bounds and Structural Results

Peter L. Bartlett; Shahar Mendelson

We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.


Combinatorica | 2007

Complexity measures of sign matrices

Nati Linial; Shahar Mendelson; Gideon Schechtman; Adi Shraibman

In this paper we consider four previously known parameters of sign matrices from a complexity-theoretic perspective. The main technical contributions are tight (or nearly tight) inequalities that we establish among these parameters. Several new open problems are raised as well.


Journal of the ACM | 2015

Learning without Concentration

Shahar Mendelson

We obtain sharp bounds on the estimation error of the Empirical Risk Minimization procedure, performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a “small-ball” assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the “noise level” of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.
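For orientation, the small-ball assumption mentioned here is usually stated in roughly the following form (a sketch of the standard formulation; the constants $\kappa$ and $\delta$ do not appear in this abstract):

```latex
% Small-ball condition: every function in the relevant class
% (e.g. differences of class members) keeps mass at least \delta
% away from zero, at scale \kappa times its L_2 norm.
\[
  \Pr\bigl( |f(X)| \ge \kappa \, \|f\|_{L_2} \bigr) \ge \delta
  \qquad \text{for every } f \in F - F .
\]
```

This replaces boundedness or tail-decay requirements: it asks only that no function in the class is too concentrated near zero, which is why heavy-tailed classes and targets are admissible.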


Annals of Probability | 2005

A probabilistic approach to the geometry of the ℓₚⁿ-ball

Franck Barthe; Olivier Guédon; Shahar Mendelson; Assaf Naor

This article investigates, by probabilistic methods, various geometric questions on Bₚⁿ, the unit ball of ℓₚⁿ. We propose realizations in terms of independent random variables of several distributions on Bₚⁿ, including the normalized volume measure. These representations allow us to unify and extend the known results of the sub-independence of coordinate slabs in Bₚⁿ. As another application, we compute moments of linear functionals on Bₚⁿ, which gives sharp constants in Khinchine's inequalities on Bₚⁿ and determines the 2-constant of all directions on Bₚⁿ. We also study the extremal values of several Gaussian averages on sections of Bₚⁿ (including the mean width and the ℓ-norm), and derive several monotonicity results as p varies. Applications to balancing vectors in ℓ₂ and to covering numbers of polyhedra complete the exposition.
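The kind of representation the abstract describes can be sketched concretely. Assuming the independent-coordinate form associated with this line of work (coordinates with density proportional to exp(−|t|^p), plus an independent exponential variable in the normalization), uniform samples from the ball fall out directly; the function name and interface below are this note's own:

```python
import numpy as np

def uniform_lp_ball(n, p, size, seed=0):
    """Sample `size` points uniformly from the unit ball of l_p^n.

    Sketch of the independent-random-variable representation: take
    g_1, ..., g_n i.i.d. with density proportional to exp(-|t|^p)
    and W an independent standard exponential; then
        g / (|g_1|^p + ... + |g_n|^p + W)^(1/p)
    is uniformly distributed on the ball."""
    rng = np.random.default_rng(seed)
    # |g| = G^(1/p) with G ~ Gamma(1/p, 1) has density ~ exp(-t^p) on t > 0;
    # attaching a uniform random sign gives density ~ exp(-|t|^p) on the line.
    signs = rng.choice([-1.0, 1.0], size=(size, n))
    g = signs * rng.gamma(1.0 / p, size=(size, n)) ** (1.0 / p)
    w = rng.exponential(size=(size, 1))
    denom = ((np.abs(g) ** p).sum(axis=1, keepdims=True) + w) ** (1.0 / p)
    return g / denom
```

Every sample lands strictly inside the ball: writing S for the sum of |g_i|^p, the p-th power of the output's ℓₚ norm is S/(S + W) < 1.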


IEEE Transactions on Information Theory | 2002

Rademacher averages and phase transitions in Glivenko-Cantelli classes

Shahar Mendelson

We introduce a new parameter which may replace the fat-shattering dimension. Using this parameter we are able to provide improved complexity estimates for the agnostic learning problem with respect to any Lₚ norm. Moreover, we show that if fat_ε(F) = O(ε⁻ᵖ) then F displays a clear phase transition which occurs at p = 2. The phase transition appears in the sample complexity estimates, covering numbers estimates, and in the growth rate of the Rademacher averages associated with the class. As a part of our discussion, we prove the best known estimates on the covering numbers of a class when considered as a subset of Lₚ spaces. We also estimate the fat-shattering dimension of the convex hull of a given class. Both these estimates are given in terms of the fat-shattering dimension of the original class.


Conference on Learning Theory | 2002

Localized Rademacher Complexities

Peter L. Bartlett; Olivier Bousquet; Shahar Mendelson

We investigate the behaviour of global and local Rademacher averages. We present new error bounds which are based on the local averages and indicate how data-dependent local averages can be estimated without a priori knowledge of the class at hand.


Annals of Statistics | 2010

Regularization in kernel learning

Shahar Mendelson; Joe Neeman

Supported in part by Australian Research Council Discovery Grant DP0559465 and by Israel Science Foundation Grant 666/06.


Journal of the European Mathematical Society | 2017

Sparse recovery under weak moment assumptions

Guillaume Lecué; Shahar Mendelson

We prove that i.i.d. random vectors that satisfy a rather weak moment assumption can be used as measurement vectors in compressed sensing, and the number of measurements required for exact reconstruction is the same as the best possible estimate, exhibited by a random Gaussian matrix. We also prove that this moment condition is necessary, up to a

Collaboration


Dive into Shahar Mendelson's collaborations.

Top Co-Authors


Guillaume Lecué

University of Marne-la-Vallée


Emanuel Milman

Technion – Israel Institute of Technology
