Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Vladimir Koltchinskii is active.

Publication


Featured research published by Vladimir Koltchinskii.


IEEE Transactions on Information Theory | 2001

Rademacher penalties and structural risk minimization

Vladimir Koltchinskii

We suggest a penalty function to be used in various problems of structural risk minimization. This penalty is data dependent and is based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets). The standard complexity penalties, used in learning problems and based on the VC-dimensions of the classes, are conservative upper bounds (in a probabilistic sense, uniformly over the set of all underlying distributions) for the penalty we suggest. Thus, for a particular distribution of training examples, one can expect better performance of learning algorithms with the data-driven Rademacher penalties. We obtain oracle inequalities for the theoretical risk of estimators, obtained by structural minimization of the empirical risk with Rademacher penalties. The inequalities imply some form of optimality of the empirical risk minimizers. We also suggest an iterative approach to structural risk minimization with Rademacher penalties, in which the hierarchy of classes is not given in advance, but is determined in the data-driven iterative process of risk minimization. We prove probabilistic oracle inequalities for the theoretical risk of the estimators based on this approach as well.
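Concretely, for a finite class the data-driven penalty can be estimated by Monte Carlo over random signs. A minimal sketch, assuming the class is represented by a precomputed table of per-example losses (the `losses` layout, the number of sign draws, and the `select_by_srm` helper are hypothetical illustrations, not the paper's construction):

```python
import numpy as np

def rademacher_penalty(losses, n_draws=200, rng=None):
    """Monte Carlo estimate of the empirical Rademacher penalty
    sup_f |n^{-1} sum_i eps_i * loss_f(X_i)| for a finite class.

    losses: (n_functions, n_samples) array, one row of per-example
    losses for each function in the class.
    """
    rng = np.random.default_rng(rng)
    n = losses.shape[1]
    sups = np.empty(n_draws)
    for k in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=n)     # Rademacher signs
        sups[k] = np.abs(losses @ eps).max() / n  # sup over the class
    return sups.mean()

def select_by_srm(loss_tables):
    """Pick the class F_j minimizing (min empirical risk in F_j) + penalty."""
    best_j, best_val = None, np.inf
    for j, losses in enumerate(loss_tables):
        value = losses.mean(axis=1).min() + rademacher_penalty(losses)
        if value < best_val:
            best_j, best_val = j, value
    return best_j, best_val
```

The Monte Carlo average over sign draws approximates the conditional expectation over Rademacher signs given the data, which is what makes the penalty distribution-dependent rather than a worst-case VC-type bound.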


arXiv: Probability | 2000

Rademacher Processes and Bounding the Risk of Function Learning

Vladimir Koltchinskii; Dmitriy Panchenko

We construct data dependent upper bounds on the risk in function learning problems. The bounds are based on local norms of the Rademacher process indexed by the underlying function class, and they do not require prior knowledge about the distribution of training examples or any specific properties of the function class. Using Talagrand-type concentration inequalities for empirical and Rademacher processes, we show that the bounds hold with high probability, with the exceptional probability decreasing exponentially fast as the sample size grows. In typical situations that are frequently encountered in the theory of function learning, the bounds give a nearly optimal rate of convergence of the risk to zero.
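For orientation, a standard global (non-localized) bound of this flavor for a class $\mathcal{F}$ of $[0,1]$-valued functions is the following well-known inequality: with probability at least $1-\delta$, simultaneously for all $f\in\mathcal{F}$,

\[
Pf \;\le\; P_n f \;+\; 2\,\mathbb{E}_{\varepsilon}\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\varepsilon_i f(X_i) \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}},
\]

where the $\varepsilon_i$ are i.i.d. Rademacher signs and the middle term is the data-dependent empirical Rademacher complexity; the paper's bounds refine this term using local norms of the Rademacher process.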


IEEE Transactions on Information Theory | 2001

On inverse problems with unknown operators

Sam Efromovich; Vladimir Koltchinskii

Consider the problem of recovering a smooth function (signal, image) $f \in \mathcal{F} \subset L_2([0,1]^d)$ passed through an unknown filter and then contaminated by noise. A typical model discussed in the paper is described by the stochastic differential equation $dY_f^{\varepsilon}(t) = (Hf)(t)\,dt + \varepsilon\,dW(t)$, $t \in [0,1]^d$, $\varepsilon > 0$, where $H$ is a linear operator modeling the filter and $W$ is a Brownian motion (sheet) modeling the noise. The aim is to recover $f$ with asymptotically (as $\varepsilon \to 0$) minimax mean integrated squared error. Traditionally, the problem is studied under the assumption that the operator $H$ is known; then the ill-posedness of the problem is the main concern. In this paper, a more complicated and more realistic case is considered where the operator is unknown; instead, a training set of $n$ pairs $\{(e_l, Y(e_l)^{\sigma}),\ l = 1, 2, \dots, n\}$ is available, where $\{e_l\}$ is an orthonormal system in $L_2$ and the $Y(e_l)^{\sigma}$ are the solutions of stochastic differential equations of the above type with $f = e_l$ and $\varepsilon = \sigma$. An optimal (in a minimax sense over the considered operators and signals) data-driven recovery of the signal is suggested. The influence of $\varepsilon$, $\sigma$, and $n$ on the recovery is thoroughly studied; in particular, we discuss the interesting case of larger noise during training and present formulas for threshold levels of $n$ beyond which no improvement in the recovery of input signals occurs. We also discuss the case where $H$ is an unknown perturbation of a known operator and describe a class of perturbations for which the accuracy of recovery of the signal is asymptotically the same (up to a constant) as in the case of a precisely known operator.
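A hypothetical numerical sketch of the plug-in recovery idea, under the strong simplifying assumption that $H$ acts diagonally on the training basis $\{e_l\}$; the coefficients, noise levels, and threshold rule below are illustrative, not the paper's minimax estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, eps = 50, 0.1, 0.05                  # training size, training and signal noise

# Simplifying assumption: H e_l = h_l e_l, so each training pair
# (e_l, Y(e_l)^sigma) yields a noisy observation of the coefficient h_l.
h = 1.0 / (1.0 + np.arange(1, n + 1))          # unknown filter coefficients
h_hat = h + sigma * rng.standard_normal(n)     # estimates from the training set

theta = np.exp(-0.3 * np.arange(1, n + 1))     # coefficients of the smooth signal f
y = h * theta + eps * rng.standard_normal(n)   # noisy coefficients of Y_f^eps

# Plug-in recovery with hard thresholding: invert only where the
# estimated filter coefficient stands out above the training noise.
keep = np.abs(h_hat) > 2.0 * sigma
theta_hat = np.where(keep, y / np.where(keep, h_hat, 1.0), 0.0)

ise = np.sum((theta_hat - theta) ** 2)         # Parseval: integrated squared error
print(f"kept {keep.sum()} of {n} frequencies, ISE = {ise:.5f}")
```

The thresholding step reflects the trade-off the paper quantifies: when the training noise $\sigma$ is large, high-frequency filter coefficients cannot be estimated reliably, and inverting them would amplify rather than reduce the error.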


Bernoulli | 2000

Random matrix approximation of spectra of integral operators

Vladimir Koltchinskii; Evarist Giné

Let $H$ be an integral operator on $L_2(S,\mathcal{S},P)$ with a symmetric kernel $h$, let $X_1, X_2, \dots$ be i.i.d. with law $P$, and let $H_n$ be the $n \times n$ random matrix with entries $n^{-1}h(X_i, X_j)$ and $\tilde{H}_n$ the matrix obtained by deleting its diagonal. It is proved that the $\ell_2$ distance between the ordered spectrum of $H_n$ and the ordered spectrum of $H$ tends to zero a.s. if and only if $H$ is Hilbert–Schmidt. Rates of convergence and distributional limit theorems for the difference between the ordered spectra of the operators $H_n$ (or $\tilde{H}_n$) and $H$ are also obtained under somewhat stronger conditions. These results apply in particular to the kernels of certain functions $H = \varphi(L)$ of partial differential operators $L$ (heat kernels, Green functions). This paper is dedicated to Richard M. Dudley on his sixtieth birthday.
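The almost-sure spectral approximation is easy to check numerically. A small sketch (kernel and sample size are illustrative) for $h(x,y)=\min(x,y)$ on $L_2([0,1])$ with $P$ uniform, where the operator's eigenvalues are $((k-\frac{1}{2})\pi)^{-2}$:

```python
import numpy as np

def h(x, y):
    # Symmetric Hilbert-Schmidt kernel: Green's function of -u'' with
    # u(0) = 0, u'(1) = 0, whose eigenvalues are 1 / ((k - 1/2)^2 pi^2).
    return np.minimum(x, y)

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(size=n)                         # i.i.d. sample from P

Hn = h(X[:, None], X[None, :]) / n              # random matrix (h(X_i, X_j) / n)
np.fill_diagonal(Hn, 0.0)                       # tilde H_n: diagonal deleted

approx = np.sort(np.linalg.eigvalsh(Hn))[::-1][:5]
exact = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
print("top matrix eigenvalues:", np.round(approx, 5))
print("operator eigenvalues:  ", np.round(exact, 5))
```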


Annals of Probability | 2006

Concentration inequalities and asymptotic results for ratio type empirical processes

Evarist Giné; Vladimir Koltchinskii

Let $\mathcal{F}$ be a class of measurable functions on a measurable space $(S,\mathcal{S})$ with values in $[0,1]$ and let $P_n = n^{-1}\sum_{i=1}^{n}\delta_{X_i}$ be the empirical measure based on an i.i.d. sample $(X_1,\dots,X_n)$ from a probability distribution $P$ on $(S,\mathcal{S})$. We study the behavior of suprema of the following type: \[\sup_{r_{n}<\sigma_{P}f\leq \delta_{n}}\frac{|P_{n}f-Pf|}{\phi(\sigma_{P}f)},\] where $\sigma_P f \geq \mathrm{Var}^{1/2}_P f$ and $\phi$ is a continuous, strictly increasing function with $\phi(0)=0$. Using Talagrand's concentration inequality for empirical processes, we establish concentration inequalities for such suprema and use them to derive several results about their asymptotic behavior, expressing the conditions in terms of expectations of localized suprema of empirical processes. We also prove new bounds for expected values of sup-norms of empirical processes in terms of the largest $\sigma_P f$ and the $L_2(P)$ norm of the envelope of the function class, which are especially suited for estimating localized suprema. With this technique, we extend to function classes most of the known results on ratio type suprema of empirical processes, including some of Alexander's results for VC classes of sets. We also consider applications of these results to several important problems in nonparametric statistics and in learning theory (including general excess risk bounds in empirical risk minimization and their versions for $L_2$-regression and classification and ratio type bounds for margin distributions in classification).
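A small simulation of such a ratio-type supremum for a hypothetical class of indicators $f_t = \mathbf{1}\{x \le t\}$ under the uniform distribution, with $\phi(u)=u$; the localization band $(r_n, \delta_n]$ below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
X = np.sort(rng.uniform(size=n))

# Class of indicators f_t = 1{x <= t} under P = Uniform[0, 1]:
# P f_t = t and sigma_P f_t = sqrt(t (1 - t)) = Var_P^{1/2} f_t.
t = np.linspace(0.001, 0.999, 999)
Pf = t
Pnf = np.searchsorted(X, t, side="right") / n  # empirical CDF at t
sigma = np.sqrt(t * (1.0 - t))

phi = lambda u: u                              # phi continuous, increasing, phi(0) = 0
r_n, d_n = np.sqrt(np.log(n) / n), 0.5         # illustrative localization band

band = (sigma > r_n) & (sigma <= d_n)
sup_ratio = np.max(np.abs(Pnf - Pf)[band] / phi(sigma[band]))
print(f"ratio-type supremum over the band: {sup_ratio:.4f}")
```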


NeuroImage | 2006

fMRI pattern classification using neuroanatomically constrained boosting.

Manel Martínez-Ramón; Vladimir Koltchinskii; Gregory L. Heileman; Stefan Posse

Pattern classification in functional MRI (fMRI) is a novel methodology to automatically identify differences in distributed neural substrates resulting from cognitive tasks. Reliable pattern classification is challenging due to the high dimensionality of fMRI data, the small number of available data sets, interindividual differences, and dependence on the acquisition methodology. Thus, most previous fMRI classification methods were applied in individual subjects. In this study, we developed a novel approach to improve multiclass classification across groups of subjects, field strengths, and fMRI methods. Spatially normalized activation maps were segmented into functional areas using a neuroanatomical atlas, and each map was classified separately using local classifiers. A single multiclass output was produced using a weighted aggregation of the classifiers' outputs. An AdaBoost technique was applied, modified to find the optimal aggregation of a set of spatially distributed classifiers. This AdaBoost combined the region-specific classifiers to achieve improved classification accuracy with respect to conventional techniques. Multiclass classification accuracy was assessed in an fMRI group study with an interleaved motor, visual, auditory, and cognitive task design. Data were acquired across 18 subjects at different field strengths (1.5 T, 4 T) and with different pulse sequence parameters (voxel size and readout bandwidth). Misclassification rates of the boosted classifier were between 3.5% and 10%, whereas for the single classifier they were between 15% and 23%, suggesting that the boosted classifier provides better generalization ability together with better robustness. The high computational speed of boosting classification makes it attractive for real-time fMRI to facilitate online interpretation of dynamically changing activation patterns.
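As an illustration of the weighted aggregation, here is a generic binary AdaBoost sketch over hypothetical region-restricted stumps; it is not the authors' multiclass algorithm, and `regions`, the stump family, and the quantile thresholds are assumptions made for the example:

```python
import numpy as np

def boost_regional_classifiers(X, y, regions, n_rounds=20):
    """Minimal binary AdaBoost sketch aggregating region-specific stumps.

    X: (n_samples, n_voxels) activation maps; y: labels in {-1, +1};
    regions: list of column-index arrays, one per anatomical region.
    Each round fits the best mean-activation stump within each region
    and boosts the stump with the lowest weighted error.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                    # example weights
    ensemble = []                              # (alpha, region, threshold, sign)
    for _ in range(n_rounds):
        best = None
        for r, cols in enumerate(regions):
            score = X[:, cols].mean(axis=1)    # region-mean activation
            for thr in np.quantile(score, [0.25, 0.5, 0.75]):
                for s in (1.0, -1.0):
                    pred = s * np.sign(score - thr)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, r, thr, s)
        err, r, thr, s = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # classifier weight
        pred = s * np.sign(X[:, regions[r]].mean(axis=1) - thr)
        w *= np.exp(-alpha * y * pred)         # re-weight examples
        w /= w.sum()
        ensemble.append((alpha, r, thr, s))

    def predict(Xnew):
        agg = np.zeros(len(Xnew))
        for alpha, r, thr, s in ensemble:
            agg += alpha * s * np.sign(Xnew[:, regions[r]].mean(axis=1) - thr)
        return np.sign(agg)
    return predict
```

Each round reweights misclassified maps, so later rounds concentrate on regions that discriminate the remaining hard examples, mirroring the region-specific aggregation described above.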


Journal of Theoretical Probability | 1994

Komlós–Major–Tusnády approximation for the general empirical process and Haar expansions of classes of functions

Vladimir Koltchinskii



Archive | 1998

Asymptotics of Spectral Projections of Some Random Matrices Approximating Integral Operators

Vladimir Koltchinskii



conference on learning theory | 2002

Some Local Measures of Complexity of Convex Hulls and Generalization Bounds

Olivier Bousquet; Vladimir Koltchinskii; Dmitriy Panchenko



Annals of Probability | 2004

Weighted uniform consistency of kernel density estimators

Evarist Giné; Vladimir Koltchinskii; Joel Zinn


Collaboration


Dive into Vladimir Koltchinskii's collaborations.

Top Co-Authors


Peter Dorato

University of New Mexico


Jon A. Wellner

University of Washington


Richard Nickl

University of Cambridge
