Clint Scovel
Los Alamos National Laboratory
Publications
Featured research published by Clint Scovel.
Annals of Statistics | 2007
Ingo Steinwart; Clint Scovel
For binary classification we establish learning rates up to the order of n^{-1} for support vector machines (SVMs) with hinge loss and Gaussian RBF kernels. These rates are in terms of two assumptions on the considered distributions: Tsybakov’s noise assumption to establish a small estimation error, and a new geometric noise condition which is used to bound the approximation error. Unlike previously proposed concepts for bounding the approximation error, the geometric noise assumption does not employ any smoothness assumption.

1. Introduction. In recent years support vector machines (SVMs) have been the subject of many theoretical considerations. Despite this effort, their learning performance on restricted classes of distributions is still widely unknown. In particular, it is unknown under which nontrivial circumstances SVMs can guarantee fast learning rates. The aim of this work is to use concepts like Tsybakov’s noise assumption and local Rademacher averages to establish learning rates up to the order of n^{-1} for nontrivial distributions. In addition to these concepts that are used to deal with the stochastic part of the analysis we also introduce a geometric assumption for distributions that allows us to estimate the approximation properties of Gaussian RBF kernels. Unlike many other concepts introduced for bounding the approximation error, our geometric assumption is not in terms of smoothness but describes the concentration and the noisiness of the data-generating distribution near the decision boundary. Let us formally introduce the statistical classification problem. To this end let us fix a subset X ⊂ R^d. We write Y := {−1, 1}. Given a finite training set …
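For orientation, a sketch of the setting in standard notation (the formulation below is assumed for illustration rather than quoted from the paper): the SVM analyzed here minimizes a regularized empirical hinge risk over the reproducing kernel Hilbert space H_σ of a Gaussian RBF kernel,

$$ k_\sigma(x, x') = \exp\bigl(-\sigma^2 \|x - x'\|^2\bigr), \qquad f_{T,\lambda} = \arg\min_{f \in H_\sigma} \; \lambda \|f\|_{H_\sigma}^2 + \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i f(x_i)\bigr), $$

and the learning rates bound how fast the classification risk of sign(f_{T,λ}) approaches the Bayes risk, with exponents governed by the Tsybakov and geometric noise parameters.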
IEEE Transactions on Information Theory | 2006
Ingo Steinwart; Don R. Hush; Clint Scovel
Although Gaussian radial basis function (RBF) kernels are one of the most often used kernels in modern machine learning methods such as support vector machines (SVMs), little is known about the structure of their reproducing kernel Hilbert spaces (RKHSs). In this work, two distinct explicit descriptions of the RKHSs corresponding to Gaussian RBF kernels are given and some consequences are discussed. Furthermore, an orthonormal basis for these spaces is presented. Finally, it is discussed how the results can be used for analyzing the learning performance of SVMs.
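A minimal numerical sketch in Python (the width convention and names below are illustrative assumptions, not the paper's code): the Gaussian RBF kernel and its Gram matrix, whose positive definiteness is exactly what guarantees the existence of the RKHS that the paper describes explicitly.

import numpy as np

def gaussian_rbf_gram(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(-sigma^2 * ||X[i] - X[j]||^2)."""
    sq = np.sum(X**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-sigma**2 * np.maximum(sq_dists, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))              # 50 points in R^3
K = gaussian_rbf_gram(X, sigma=0.5)

# Positive (semi)definiteness of every such Gram matrix is what makes
# k_sigma a reproducing kernel; the smallest eigenvalue should be >= 0
# up to floating-point round-off.
print(np.linalg.eigvalsh(K).min())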
Journal of Multivariate Analysis | 2009
Ingo Steinwart; Don R. Hush; Clint Scovel
In most papers establishing consistency for learning algorithms it is assumed that the observations used for training are realizations of an i.i.d. process. In this paper we go far beyond this classical framework by showing that support vector machines (SVMs) only require that the data-generating process satisfies a certain law of large numbers. We then consider the learnability of SVMs for α-mixing (not necessarily stationary) processes for both classification and regression, where for the latter we explicitly allow unbounded noise.
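For reference, the standard definition of the mixing coefficients involved (stated as background, not quoted from the paper): a process (Z_i) is α-mixing when

$$ \alpha(n) \;=\; \sup_{k \ge 1}\, \sup\Bigl\{\, \bigl|P(A \cap B) - P(A)\,P(B)\bigr| \;:\; A \in \sigma(Z_1,\dots,Z_k),\; B \in \sigma(Z_{k+n}, Z_{k+n+1}, \dots) \Bigr\} \;\longrightarrow\; 0, $$

so that observations far apart in time become approximately independent, which is strictly weaker than assuming an i.i.d. sample.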
Machine Learning | 2003
Don R. Hush; Clint Scovel
This paper studies the convergence properties of a general class of decomposition algorithms for support vector machines (SVMs). We provide a model algorithm for decomposition, and prove necessary and sufficient conditions for stepwise improvement of this algorithm. We introduce a simple “rate certifying” condition and prove a polynomial-time bound on the rate of convergence of the model algorithm when it satisfies this condition. Although it is not clear that existing SVM algorithms satisfy this condition, we provide a version of the model algorithm that does. For this algorithm we show that when the slack multiplier C satisfies √(1/2) ≤ C ≤ mL, where m is the number of samples and L is a matrix norm, then it takes no more than 4LC^2m^4/ε iterations to drive the criterion to within ε of its optimum.
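For context, the optimization problem that decomposition methods attack, in its standard form (the paper's exact formulation and constants may differ): the SVM dual

$$ \max_{\alpha \in [0, C]^m} \;\; \sum_{i=1}^{m} \alpha_i \;-\; \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j\, y_i y_j\, k(x_i, x_j) \quad \text{subject to} \quad \sum_{i=1}^{m} y_i \alpha_i = 0, $$

where each iteration fixes most coordinates of α and re-optimizes only a small working set; roughly speaking, the rate certifying condition requires that the chosen working set capture a guaranteed fraction of the improvement still available.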
conference on learning theory | 2005
Ingo Steinwart; Clint Scovel
We establish learning rates to the Bayes risk for support vector machines (SVMs) using a regularization sequence λ_n = n^{-α}, where α ∈ (0,1) is arbitrary. Under a noise condition recently proposed by Tsybakov these rates can become faster than n^{-1/2}. In order to deal with the approximation error we present a general concept called the approximation error function which describes how well the infinite sample versions of the considered SVMs approximate the data-generating distribution. In addition we discuss in some detail the relation between the classical approximation error and the approximation error function. Finally, for distributions satisfying a geometric noise assumption we establish some learning rates when the used RKHS is a Sobolev space.
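Concretely, a sketch of the estimator in standard notation (assumed for illustration): with regularization sequence λ_n = n^{-α} the SVM solves

$$ f_n \;=\; \arg\min_{f \in H} \; n^{-\alpha} \|f\|_{H}^2 + \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i f(x_i)\bigr), \qquad \alpha \in (0,1), $$

and the approximation error function tracks how quickly the infinite-sample version of this problem approaches the Bayes risk as the regularization parameter tends to zero.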
Journal of Nonlinear Science | 1995
Robert I. McLachlan; Clint Scovel
We use recent results on symplectic integration of Hamiltonian systems with constraints to construct symplectic integrators on cotangent bundles of manifolds by embedding the manifold in a linear space. We also prove that these methods are equivariant under cotangent lifts of a symmetry group acting linearly on the ambient space and consequently preserve the corresponding momentum. These results provide an elementary construction of symplectic integrators for Lie-Poisson systems and other Hamiltonian systems with symmetry. The methods are illustrated on the free rigid body, the heavy top, and the double spherical pendulum.
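As background, the constrained formulation being integrated (standard notation, not quoted from the paper): with the manifold embedded as the zero set of constraints g(q) = 0 in a linear space, the dynamics take the form

$$ \dot q = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial q} - g'(q)^{\mathsf T} \lambda, \qquad g(q) = 0, $$

and symplectic integrators for this constrained system (for example of RATTLE type) then yield symplectic integrators on the cotangent bundle of the manifold.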
Siam Review | 2013
Houman Owhadi; Clint Scovel; Timothy John Sullivan; Mike McKerns; M. Ortiz
We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
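As a sketch of the central object (notation assumed for illustration): given an admissible set A of pairs (f, μ) of response functions and input distributions compatible with the stated assumptions and information, the optimal bound on a probability of failure is

$$ \mathcal{U}(\mathcal{A}) \;=\; \sup_{(f,\mu) \in \mathcal{A}} \; \mu\bigl[\, f(X) \ge a \,\bigr], $$

an optimization over functions and measures which, under the general conditions referred to above, reduces to a finite-dimensional problem over measures supported on finitely many points.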
Archive | 1991
Clint Scovel
In this paper I review several techniques to construct symplectic integration algorithms. I also discuss algorithms for systems with other invariants, such as Lie-Poisson structures, reversible systems, and volume-preserving flows. Numerical results are presented.
IEEE Geoscience and Remote Sensing Letters | 2010
James Theiler; Clint Scovel; Brendt Wohlberg; Bernard R. Foy
We derive a class of algorithms for detecting anomalous changes in hyperspectral image pairs by modeling the data with elliptically contoured (EC) distributions. These algorithms are generalizations of well-known detectors that are obtained when the EC function is Gaussian. The performance of these EC-based anomalous change detectors is assessed on real data using both real and simulated changes. In these experiments, the EC-based detectors substantially outperform their Gaussian counterparts.
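A minimal Python sketch of the Gaussian special case (the function names and data here are illustrative assumptions, not the paper's code): score each pixel by the Mahalanobis distance of the joint observation minus the two marginal distances; the EC-based detectors of the paper replace this Gaussian quadratic-form combination with the form appropriate to an elliptically contoured model.

import numpy as np

def gaussian_anomalous_change(X, Y):
    """X, Y: (n_pixels, n_bands) spectra from two co-registered images.
    Returns a per-pixel anomalousness score: Mahalanobis distance of the
    joint observation (x, y) minus the marginal distances of x and of y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)

    def mahal(U):
        C = np.cov(U, rowvar=False)
        return np.einsum('ij,jk,ik->i', U, np.linalg.inv(C), U)

    return mahal(np.hstack([Xc, Yc])) - mahal(Xc) - mahal(Yc)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                        # "before" image pixels
Y = 0.5 * (X @ rng.normal(size=(5, 5))) + 0.1 * rng.normal(size=(1000, 5))
scores = gaussian_anomalous_change(X, Y)
print(scores[:5])   # larger scores flag pixels whose change is unusual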
Journal of Machine Learning Research | 2002
Adam Cannon; J. Mark Ettinger; Don R. Hush; Clint Scovel
We extend the VC theory of statistical learning to data dependent spaces of classifiers. This theory can be viewed as a decomposition of classifier design into two components: the first is a restriction to a data dependent hypothesis class and the second is empirical risk minimization within that class. We define a measure of complexity for data dependent hypothesis classes and provide data dependent versions of bounds on error deviance and estimation error. We also provide a structural risk minimization procedure over data dependent hierarchies and prove consistency. We use this theory to provide a framework for studying the trade-offs between performance and computational complexity in classifier design. As a consequence we obtain a new family of classifiers with dimension independent performance bounds and efficient learning procedures.
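Schematically (notation assumed for illustration): the decomposition picks a hypothesis class H(D) from the training data D and then minimizes empirical risk within it,

$$ \hat h \;=\; \arg\min_{h \in \mathcal{H}(D)} \; \frac{1}{m} \sum_{i=1}^{m} \mathbf{1}\bigl[\, h(x_i) \ne y_i \,\bigr], $$

with the complexity measure for data dependent classes controlling how far this empirical risk can deviate from the true error.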