Publication


Featured research published by Vladimir Cherkassky.


International Symposium on Neural Networks | 2005

A combined SVM and LDA approach for classification

Tao Xiong; Vladimir Cherkassky

This paper describes a new large margin classifier, named SVM/LDA. This classifier can be viewed as an extension of the support vector machine (SVM) that incorporates some global information about the data. The SVM/LDA classifier can also be seen as a generalization of linear discriminant analysis (LDA) obtained by incorporating the idea of (local) margin maximization into the standard LDA formulation. We show that existing SVM software can be used to solve the SVM/LDA formulation. We also present empirical comparisons of the proposed algorithm with SVM and LDA using both synthetic and real-world benchmark data.
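The abstract does not reproduce the exact SVM/LDA formulation or its reduction to standard SVM software. The Python sketch below only illustrates the underlying idea of adding an LDA-style within-class-scatter penalty to a large-margin objective; the function name svm_lda_fit and the parameters lam, C, lr are hypothetical.

import numpy as np

def svm_lda_fit(X, y, lam=0.1, C=1.0, lr=0.01, epochs=500):
    """Sketch: margin + within-class-scatter objective, labels y in {-1, +1}.

    Minimizes  0.5*||w||^2 + lam * w^T S_w w + (C/n) * sum_i hinge(y_i*(w.x_i + b))
    by plain subgradient descent.  This is NOT the paper's formulation, only an
    illustration of mixing LDA-style global information into a margin loss.
    """
    n, d = X.shape
    Sw = np.zeros((d, d))                              # within-class scatter matrix
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc / n
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                     # margin violators
        grad_w = w + 2 * lam * (Sw @ w) - (C / n) * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -(C / n) * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b                                        # predict with sign(w.x + b)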


Archive | 2007

Learning from data

Vladimir Cherkassky; Filip M. Mulier

An interdisciplinary framework for learning methodologies covering statistics, neural networks, and fuzzy logic, this book provides a unified treatment of the principles and methods for learning dependencies from data. It establishes a general conceptual framework in which various learning methods from statistics, neural networks, and fuzzy logic can be applied, showing that a few fundamental principles underlie most new methods being proposed today in statistics, engineering, and computer science. Complete with over one hundred illustrations, case studies, and examples, this is an invaluable text.


NeuroImage | 2005

Support vector machines for temporal classification of block design fMRI data

Stephen M. LaConte; S.C. Strother; Vladimir Cherkassky; Jon E. Anderson; Xiaoping Hu

This paper treats support vector machine (SVM) classification applied to block design fMRI, extending our previous work with linear discriminant analysis [LaConte, S., Anderson, J., Muley, S., Ashe, J., Frutiger, S., Rehm, K., Hansen, L.K., Yacoub, E., Hu, X., Rottenberg, D., Strother S., 2003a. The evaluation of preprocessing choices in single-subject BOLD fMRI using NPAIRS performance metrics. NeuroImage 18, 10-27; Strother, S.C., Anderson, J., Hansen, L.K., Kjems, U., Kustra, R., Siditis, J., Frutiger, S., Muley, S., LaConte, S., Rottenberg, D. 2002. The quantitative evaluation of functional neuroimaging experiments: the NPAIRS data analysis framework. NeuroImage 15, 747-771]. We compare SVM to canonical variates analysis (CVA) by examining the relative sensitivity of each method to ten combinations of preprocessing choices consisting of spatial smoothing, temporal detrending, and motion correction. Important to the discussion are the issues of classification performance, model interpretation, and validation in the context of fMRI. As the SVM has many unique properties, we examine the interpretation of support vector models with respect to neuroimaging data. We propose four methods for extracting activation maps from SVM models, and we examine one of these in detail. For both CVA and SVM, we have classified individual time samples of whole brain data, with TRs of roughly 4 s, thirty slices, and nearly 30,000 brain voxels, with no averaging of scans or prior feature selection.
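The abstract does not spell out the four proposed map-extraction methods. One natural candidate for a linear SVM, sketched below under that assumption, is to read the trained weight vector itself as a voxel-wise map; the dimensions and block labels in the snippet are hypothetical and shrunken for speed.

import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical block-design data (a real session would have TRs of ~4 s,
# thirty slices, and ~30,000 in-brain voxels).
rng = np.random.default_rng(0)
n_samples, n_voxels = 200, 3000
X = rng.normal(size=(n_samples, n_voxels))            # one row per whole-brain time sample
y = np.tile(np.repeat([1, -1], 10), n_samples // 20)  # alternating task/rest blocks

clf = LinearSVC(C=1.0).fit(X, y)

# For a linear SVM the decision rule is sign(w.x + b), so the weight vector w
# lives in voxel space and can itself be viewed as a spatial map; in practice
# it would be re-embedded into the 3-D brain mask and thresholded for display.
activation_map = clf.coef_.ravel()                    # one weight per voxel
print(activation_map.shape)                           # (3000,)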


IEEE Transactions on Neural Networks | 1996

Comparison of adaptive methods for function estimation from samples

Vladimir Cherkassky; Don Gehring; Filip M. Mulier

The problem of estimating an unknown function from a finite number of noisy data points has fundamental importance for many applications. This problem has been studied in statistics, applied mathematics, engineering, artificial intelligence, and, more recently, in the fields of artificial neural networks, fuzzy systems, and genetic optimization. In spite of many papers describing individual methods, very little is known about the comparative predictive (generalization) performance of various methods. We discuss subjective and objective factors contributing to the difficult problem of meaningful comparisons. We also describe a pragmatic framework for comparisons between various methods and present a detailed comparison study comprising several thousand individual experiments. Our approach to comparisons is biased toward general (nonexpert) users. Our study uses six representative methods described using a common taxonomy. Comparisons performed on artificial data sets provide some insight into the applicability of various methods. No single method proved to be the best, since a method's performance depends significantly on the type of the target function and on the properties of the training data.
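The six methods and the common taxonomy are not listed in this abstract. The snippet below is only a minimal sketch of the kind of comparison protocol described: stand-in regressors from scikit-learn, a hypothetical 1-D target function, noisy training data, and a noise-free test set to measure generalization error.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

def target(x):                                # hypothetical smooth 1-D target
    return np.sin(2 * np.pi * x)

X_train = rng.uniform(0, 1, size=(100, 1))
y_train = target(X_train).ravel() + rng.normal(scale=0.2, size=100)
X_test = np.linspace(0, 1, 500).reshape(-1, 1)
y_test = target(X_test).ravel()               # noise-free test set measures generalization

methods = {
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "MLP": MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
    "tree": DecisionTreeRegressor(max_depth=4, random_state=0),
}
for name, model in methods.items():
    model.fit(X_train, y_train)
    print(name, mean_squared_error(y_test, model.predict(X_test)))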


IEEE Transactions on Neural Networks | 1999

Model complexity control for regression using VC generalization bounds

Vladimir Cherkassky; Xuhui Shao; Filip M. Mulier; Vladimir Vapnik

It is well known that for a given sample size there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error. Hence, any method for learning from finite samples needs to have some provisions for complexity control. Existing implementations of complexity control include penalization (or regularization), weight decay (in neural networks), and various greedy procedures (also known as constructive, growing, or pruning methods). There are numerous proposals for determining optimal model complexity (model selection) based on various (asymptotic) analytic estimates of the prediction risk and on resampling approaches. Nonasymptotic bounds on the prediction risk based on Vapnik-Chervonenkis (VC) theory have been proposed by Vapnik. This paper describes the application of VC-bounds to regression problems with the usual squared loss. An empirical study is performed for settings where the VC-bounds can be rigorously applied, i.e., linear models and penalized linear models where the VC-dimension can be accurately estimated and the empirical risk can be reliably minimized. Empirical comparisons between model selection using VC-bounds and classical methods are performed for various noise levels, sample sizes, target functions, and types of approximating functions. Our results demonstrate the advantages of VC-based complexity control with finite samples.
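A minimal sketch of VC-based complexity control for regression follows, assuming the practical penalization factor commonly cited in Cherkassky and Mulier's textbook; the exact bound and constants used in the paper may differ.

import numpy as np

def vc_penalization_factor(h, n):
    """Practical VC penalization factor for regression with squared loss.

    Uses the commonly cited form  r(p, n) = 1 / (1 - sqrt(p - p*ln(p) + ln(n)/(2n)))_+
    with p = h / n  (h = VC-dimension, n = sample size).  Treat the constants
    as an illustrative assumption rather than the paper's exact expression.
    """
    p = h / n
    inner = p - p * np.log(p) + np.log(n) / (2 * n)
    denom = 1.0 - np.sqrt(inner)
    return np.inf if denom <= 0 else 1.0 / denom

# Complexity control: pick the polynomial degree minimizing the penalized
# empirical risk  R_emp * r(h, n), taking h = degree + 1 for polynomials.
rng = np.random.default_rng(0)
n = 50
x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=n)
for degree in range(1, 9):
    coeffs = np.polyfit(x, y, degree)
    r_emp = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(degree, r_emp * vc_penalization_factor(degree + 1, n))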


Neural Computation | 1995

Self-organization as an iterative kernel smoothing process

Filip M. Mulier; Vladimir Cherkassky

Kohonen's self-organizing map, when described in a batch processing mode, can be interpreted as a statistical kernel smoothing problem. The batch SOM algorithm consists of two steps. First, the training data are partitioned according to the Voronoi regions of the map unit locations. Second, the units are updated by taking weighted centroids of the data falling into the Voronoi regions, with the weighting function given by the neighborhood. Then the neighborhood width is decreased and steps 1 and 2 are repeated. The second step can be interpreted as a statistical kernel smoothing problem where the neighborhood function corresponds to the kernel and the neighborhood width corresponds to the kernel span. To determine the new unit locations, kernel smoothing is applied to the centroids of the Voronoi regions in the topological space. This interpretation leads to some new insights concerning the role of the neighborhood and dimensionality reduction. It also strengthens the algorithm's connection with the principal curve algorithm. A generalized self-organizing algorithm is proposed, where the kernel smoothing step is replaced with an arbitrary nonparametric regression method.
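A minimal sketch of one batch-SOM iteration, written directly as the two steps above (Voronoi partition, then kernel smoothing of the region centroids over the topological space); the function name and the 1-D grid are illustrative assumptions.

import numpy as np

def batch_som_step(units, grid, data, width):
    """One batch SOM iteration.

    units : (k, d) current unit locations in data space
    grid  : (k,)   unit coordinates in a 1-D topological space
    width : neighborhood width (kernel span), decreased between iterations
    """
    # Step 1: partition the data by the Voronoi regions of the units.
    nearest = np.argmin(((data[:, None, :] - units[None, :, :]) ** 2).sum(-1), axis=1)
    k = len(units)
    counts = np.bincount(nearest, minlength=k).astype(float)
    sums = np.zeros_like(units)
    np.add.at(sums, nearest, data)
    centroids = sums / np.maximum(counts, 1)[:, None]  # centroid of each Voronoi region

    # Step 2: kernel smoothing of the centroids over the topological space,
    # with a Gaussian neighborhood playing the role of the kernel.
    kernel = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * width ** 2))
    weights = kernel * counts[None, :]                  # weight regions by occupancy
    return (weights @ centroids) / weights.sum(1, keepdims=True)

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
grid = np.arange(20, dtype=float)
units = data[:20].copy()
for width in np.linspace(10.0, 0.5, 30):                # shrinking neighborhood width
    units = batch_som_step(units, grid, data, width)

The generalized algorithm proposed in the paper would replace the Gaussian smoothing step with an arbitrary nonparametric regression of the centroids on the grid coordinates.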


IEEE Transactions on Neural Networks | 1997

The Nature of Statistical Learning Theory

Vladimir Cherkassky



Neural Computation | 2003

Comparison of model selection for regression

Vladimir Cherkassky; Yunqian Ma

We discuss empirical comparison of analytical methods for model selection. Currently, there is no consensus on the best method for finite-sample estimation problems, even for the simple case of linear estimators. This article presents empirical comparisons between classical statistical methods (the Akaike information criterion, AIC, and the Bayesian information criterion, BIC) and the structural risk minimization (SRM) method, based on Vapnik-Chervonenkis (VC) theory, for regression problems. Our study is motivated by empirical comparisons in Hastie, Tibshirani, and Friedman (2001), which claim that the SRM method performs poorly for model selection and suggest that AIC yields superior predictive performance. Hence, we present empirical comparisons for various data sets and different types of estimators (linear, subset selection, and k-nearest neighbor regression). Our results demonstrate the practical advantages of VC-based model selection; it consistently outperforms AIC for all data sets. In our study, the SRM and BIC methods show similar predictive performance. This discrepancy (between empirical results obtained using the same data) is caused by methodological drawbacks in Hastie et al. (2001), especially in their loose interpretation and application of the SRM method. Hence, we discuss methodological issues important for meaningful comparisons and practical application of the SRM method. We also point out the importance of accurate estimation of model complexity (VC-dimension) for empirical comparisons and propose a new practical estimate of model complexity for k-nearest-neighbor regression.
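The precise AIC/BIC variants and SRM constants used in the article are not given in this abstract. The sketch below uses the classical Gaussian-noise forms of AIC and BIC and the same practical VC penalization factor as in the complexity-control sketch above, applied to choosing a polynomial degree on hypothetical data.

import numpy as np

def aic(rss, n, d):   # classical AIC for Gaussian noise, up to constants
    return n * np.log(rss / n) + 2 * d

def bic(rss, n, d):   # classical BIC (Schwarz criterion)
    return n * np.log(rss / n) + d * np.log(n)

def srm(rss, n, d):   # VC-penalized empirical risk (illustrative constants)
    p = d / n
    denom = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2 * n))
    return np.inf if denom <= 0 else (rss / n) / denom

rng = np.random.default_rng(2)
n = 30
x = rng.uniform(-1, 1, n)
y = x ** 3 - 0.5 * x + rng.normal(scale=0.2, size=n)

for name, crit in [("AIC", aic), ("BIC", bic), ("SRM", srm)]:
    scores = []
    for degree in range(1, 9):
        rss = np.sum((np.polyval(np.polyfit(x, y, degree), x) - y) ** 2)
        scores.append(crit(rss, n, degree + 1))
    print(name, "selects degree", 1 + int(np.argmin(scores)))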


Archive | 1994

From Statistics to Neural Networks

Vladimir Cherkassky; Jerome H. Friedman; Harry Wechsler

Predictive learning has been traditionally studied in applied mathematics (function approximation), statistics (nonparametric regression), and engineering (pattern recognition). Recently the fields of artificial intelligence (machine learning) and connectionism (neural networks) have emerged, increasing interest in this problem, both in terms of wider application and methodological advances. This paper reviews the underlying principles of many of the practical approaches developed in these fields, with the goal of placing them in a common perspective and providing a unifying overview.


Neural Networks | 1991

Constrained topological mapping for nonparametric regression analysis

Vladimir Cherkassky; Hossein Lari-Najafi

The idea of using Kohonen's self-organizing maps is applied to the problem of nonparametric regression analysis, that is, evaluation (approximation) of an unknown function of N-1 variables given a number of data points (possibly corrupted by random noise) in N-dimensional input space. Simple examples show that the original Kohonen algorithm performs poorly for regression problems of even low dimensionality, due to the fact that topologically correct ordering of units in N-dimensional space may violate the natural topological ordering of the projections of those units onto the (N-1)-dimensional subspace of independent variables. A modification of the original algorithm called the constrained topological mapping algorithm is proposed for regression analysis applications. Given a number of data points in N-dimensional input space, the proposed algorithm performs correct topological mapping of units (as the original algorithm does) and at the same time preserves the topological ordering of the projections of these units onto the (N-1)-dimensional subspace of independent coordinates. Simulation examples illustrate good performance (i.e., accuracy, convergence) of the proposed algorithm for approximating 2- and 3-variable functions. Moreover, for multivariate problems the proposed neural approach may alleviate “the curse of dimensionality,” that is, reduce the size of the training set required for evaluation of the unknown function (of many variables), by increasing the number of units (knots) in the topological map.
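A toy 1-D illustration of the constraint follows: SOM-style units fitted to (x, y) data whose x-projections are forced to stay ordered after every update. The actual constrained topological mapping algorithm enforces the constraint differently; the function name, update rule, and neighborhood schedule below are assumptions made for the sketch.

import numpy as np

def ctm_1d(x, y, n_units=10, iters=50):
    """Toy sketch of constrained topological mapping for 1-D regression.

    Units live in the 2-D (x, y) data space; after each batch update their
    x-projections are re-ordered, which captures (only in spirit) the
    ordering constraint described in the abstract.
    """
    data = np.column_stack([x, y])
    units = data[np.argsort(x)][np.linspace(0, len(x) - 1, n_units).astype(int)]
    grid = np.arange(n_units, dtype=float)
    for it in range(iters):
        width = n_units * (1.0 - it / iters) + 0.5          # shrinking neighborhood
        nearest = np.argmin(((data[:, None] - units[None]) ** 2).sum(-1), axis=1)
        kernel = np.exp(-((grid[:, None] - grid[nearest][None, :]) ** 2) / (2 * width ** 2))
        units = (kernel @ data) / kernel.sum(1, keepdims=True)
        units = units[np.argsort(units[:, 0])]              # enforce ordered x-projections
    return units                                            # ordered knots approximating y = f(x)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=200)
knots = ctm_1d(x, y)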

Collaboration


Dive into Vladimir Cherkassky's collaboration.

Top Co-Authors

Yunqian Ma (University of Minnesota)
Lichen Liang (University of Minnesota)
Tao Xiong (University of Minnesota)
Vladimir M. Krasnopolsky (National Oceanic and Atmospheric Administration)
Dimitri P. Solomatine (Delft University of Technology)