Charles A. Micchelli
State University of New York System
Publications
Featured research published by Charles A. Micchelli.
Neural Computation | 2005
Charles A. Micchelli; Massimiliano Pontil
In this letter, we provide a study of learning in a Hilbert space of vector-valued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type.
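As a concrete illustration of one case covered by the abstract above, the following sketch (my own minimal example, not code from the paper; the Gaussian scalar kernel, the coupling matrix B, the regularization value, and all function names are assumptions) fits a regularized interpolant in a vector-valued RKHS whose operator-valued kernel has the separable form K(x, x') = k(x, x') B, with B a symmetric positive definite matrix on the output space Y:

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    """Scalar Gaussian kernel matrix between two sets of points."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * width ** 2))

def fit_vector_valued(X, Y, B, lam=1e-6, width=1.0):
    """Coefficients of the regularized interpolant in the RKHS of
    Y-valued functions with separable kernel K(x, x') = k(x, x') * B.

    Solves (k(X, X) kron B + lam * I) vec(C) = vec(Y); the rows of C
    are the coefficient vectors c_i in f(x) = sum_i k(x, x_i) B c_i.
    """
    m, d = Y.shape
    K = gaussian_kernel(X, X, width)
    A = np.kron(K, B) + lam * np.eye(m * d)
    c = np.linalg.solve(A, Y.reshape(-1))  # row-major vec matches kron(K, B)
    return c.reshape(m, d)

def predict(Xnew, X, C, B, width=1.0):
    """Evaluate f(x) = sum_i k(x, x_i) B c_i at the new points."""
    K = gaussian_kernel(Xnew, X, width)
    return K @ C @ B  # B is symmetric

# Example: fit coupled sin/cos targets at five 1-D points.
X = np.linspace(0.0, 4.0, 5).reshape(-1, 1)
Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 0])])
B = np.array([[2.0, 1.0], [1.0, 2.0]])  # couples the two output components
C = fit_vector_valued(X, Y, B, lam=1e-8, width=0.5)
Yhat = predict(X, X, C, B, width=0.5)
```

With a small regularization parameter the fit nearly reproduces the training outputs; in the limit of vanishing regularization it recovers the minimal norm interpolant to the data discussed in the abstract.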
SIAM Journal on Numerical Analysis | 1983
Olin G. Johnson; Charles A. Micchelli; George Paul
Dubois, Greenbaum, and Rodrigue proposed using a truncated Neumann series as an approximation to the inverse of a matrix A for the purpose of preconditioning conjugate gradient iterative approximations to Ax = b. If we assume that A has been symmetrically scaled to have unit diagonal and is thus of the form (I - G), then the Neumann series is a power series in G with unit coefficients. The incomplete inverse was thought of as a replacement for the incomplete Cholesky decomposition suggested by Meijerink and van der Vorst in the family of methods ICCG(n). The motivation for the replacement was the desire to have a preconditioned conjugate gradient method which involved only vector operations and which utilized long vectors. We here suggest parameterizing the incomplete inverse to form a preconditioning matrix whose inverse is a polynomial in G. We then show how to select the parameters to minimize the condition number of the product of the polynomial and (I - G). Theoretically the resulting algorithm...

Journal of the American Statistical Association | 1996
M. Gasca; Charles A. Micchelli

Inverse Problems | 2011
Charles A. Micchelli; Lixin Shen; Yuesheng Xu

SIAM Journal on Numerical Analysis | 1979
Avraham A. Melkman; Charles A. Micchelli

Conference on Learning Theory | 2005
Andreas Argyriou; Charles A. Micchelli; Massimiliano Pontil

International Conference on Machine Learning | 2006
Andreas Argyriou; Raphael Hauser; Charles A. Micchelli; Massimiliano Pontil

Archive | 1990
Wolfgang Dahmen; Charles A. Micchelli; M. Gasca

SIAM Journal on Numerical Analysis | 2002
Zhongying Chen; Charles A. Micchelli; Yuesheng Xu

Machine Learning | 2007
Charles A. Micchelli; Massimiliano Pontil
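The truncated Neumann series preconditioner described in the 1983 SIAM abstract above can be sketched as follows. This is a minimal illustration using the unit-coefficient series of Dubois, Greenbaum, and Rodrigue; the paper's optimized polynomial coefficients are not implemented here, the test matrix is a scaled 1-D Laplacian of my choosing, and all function names are assumptions:

```python
import numpy as np

def neumann_preconditioner(A, n_terms=3):
    """Truncated Neumann series approximation to A^{-1}.

    Assumes A is symmetric positive definite and symmetrically scaled
    to unit diagonal, so A = I - G with spectral radius of G below 1.
    Returns a function applying M^{-1} v = (I + G + ... + G^n_terms) v
    using only matrix-vector products.
    """
    G = np.eye(A.shape[0]) - A

    def apply_Minv(v):
        acc = v.copy()
        term = v.copy()
        for _ in range(n_terms):
            term = G @ term
            acc = acc + term
        return acc

    return apply_Minv

def preconditioned_cg(A, b, apply_Minv, tol=1e-10, max_iter=500):
    """Textbook preconditioned conjugate gradient iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Scaled 1-D Laplacian: unit diagonal, eigenvalues in (0, 2),
# so the Neumann series in G = I - A converges.
n = 50
A = np.eye(n) - 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)
x = preconditioned_cg(A, b, neumann_preconditioner(A, n_terms=3))
```

Since the preconditioner is applied only through products with G, the iteration uses nothing but vector operations on long vectors, which is the practical motivation the abstract describes.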