Andrea Caponnetto
University of Genoa
Publications
Featured research published by Andrea Caponnetto.
Neural Computation | 2004
Lorenzo Rosasco; Ernesto De Vito; Andrea Caponnetto; Michele Piana; Alessandro Verri
In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of the expected risk for a convex loss function in the case of classification. The main outcome of our analysis is that for classification, the hinge loss appears to be the loss of choice. Other things being equal, the hinge loss leads to a convergence rate practically indistinguishable from the logistic loss rate and much better than the square loss rate. Furthermore, if the hypothesis space is sufficiently rich, the bounds obtained for the hinge loss are not loosened by the thresholding stage.
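As an illustrative sketch (my own, not taken from the paper), the three convex margin-based losses compared in this abstract can be written as functions of the margin m = y·f(x), where y ∈ {−1, +1} is the label and f(x) the predictor output:

```python
import math

def hinge_loss(m):
    """Hinge loss: max(0, 1 - m), the loss used by support vector machines."""
    return max(0.0, 1.0 - m)

def logistic_loss(m):
    """Logistic loss: log(1 + exp(-m))."""
    return math.log(1.0 + math.exp(-m))

def square_loss(m):
    """Square loss: (1 - m)^2."""
    return (1.0 - m) ** 2

# All three are convex in m. The hinge loss vanishes for margins >= 1,
# while the square loss also penalizes confidently correct predictions (m > 1).
for m in (-1.0, 0.0, 1.0, 2.0):
    print(f"m={m:+.1f}  hinge={hinge_loss(m):.3f}  "
          f"logistic={logistic_loss(m):.3f}  square={square_loss(m):.3f}")
```

The comparison in the paper concerns the estimation-error bounds these losses induce, not their pointwise values; the sketch only shows the losses themselves.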
Analysis and Applications | 2006
Ernesto De Vito; Lorenzo Rosasco; Andrea Caponnetto
We study the discretization of inverse problems defined by a Carleman operator. In particular, we develop a discretization strategy for this class of inverse problems and we give a convergence analysis. Learning from examples, as well as the discretization of integral equations, can be analyzed in our setting.
Analysis and Applications | 2011
Ming Li; Andrea Caponnetto
We consider a wide class of error bounds developed in the context of statistical learning theory which are expressed in terms of functionals of the regression function, for instance, its norm in a reproducing kernel Hilbert space or another functional space. These bounds are unstable in the sense that a small perturbation of the regression function can induce an arbitrarily large increase of the relevant functional and make the error bound useless. Using a known result involving the Fano inequality, we show how stability can be recovered.
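A minimal numerical sketch of the instability described above (my own illustration, not from the paper): a perturbation that is uniformly small can make a derivative-based functional, used here as a stand-in for a smoothness norm, arbitrarily large as the perturbation's frequency grows.

```python
import math

def sup_norm(f, xs):
    """Maximum absolute value of f on the grid xs."""
    return max(abs(f(x)) for x in xs)

def derivative_l2_norm(f, xs):
    """Crude L2 norm of f' on the grid via forward finite differences."""
    h = xs[1] - xs[0]
    diffs = [(f(xs[i + 1]) - f(xs[i])) / h for i in range(len(xs) - 1)]
    return math.sqrt(sum(d * d for d in diffs) * h)

# Uniform grid on [0, 2*pi].
xs = [2.0 * math.pi * i / 10000 for i in range(10001)]

eps, freq = 1e-3, 1000  # tiny amplitude, high frequency
perturbation = lambda x: eps * math.sin(freq * x)

print(sup_norm(perturbation, xs))            # small: about eps
print(derivative_l2_norm(perturbation, xs))  # large: about eps * freq * sqrt(pi)
```

Increasing `freq` makes the derivative norm grow without bound while the sup norm stays at `eps`, which is the sense in which a functional-based bound can be destroyed by a small perturbation.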
Journal of Machine Learning Research | 2005
Ernesto De Vito; Lorenzo Rosasco; Andrea Caponnetto; Umberto De Giovannini; Francesca Odone
Journal of Machine Learning Research | 2008
Andrea Caponnetto; Charles A. Micchelli; Massimiliano Pontil; Yiming Ying
Journal of Machine Learning Research | 2004
Ernesto De Vito; Lorenzo Rosasco; Andrea Caponnetto; Michele Piana; Alessandro Verri
Analysis and Applications | 2010
Andrea Caponnetto; Yuan Yao
Archive | 2005
Andrea Caponnetto; Ernesto De Vito
Journal of Machine Learning Research | 2006
Andrea Caponnetto; Alexander Rakhlin
Archive | 2005
Andrea Caponnetto; Alexander Rakhlin