Publications


Featured research published by Vladimir Vovk.


conference on learning theory | 1995

A game of prediction with expert advice

Vladimir Vovk

We consider the following problem. At each point of discrete time the learner must make a prediction; he is given the predictions made by a pool of experts. Each prediction and the outcome, which is disclosed after the learner has made his prediction, determine the incurred loss. It is known that, under weak regularity conditions, the learner can ensure that his cumulative loss never exceeds cL + a ln n, where c and a are constants, n is the size of the pool, and L is the cumulative loss incurred by the best expert in the pool. We find the set of those pairs (c, a) for which this is true.
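The cL + a ln n guarantee can be illustrated with a small sketch: an exponentially weighted average forecaster, a close relative of the Aggregating Algorithm studied in the paper (the AA's exact substitution function is not reproduced here). For square loss with predictions and outcomes in [0, 1] and learning rate η = 1/2, this simple variant already achieves c = 1 and a = 2:

```python
import math

def exp_weights_forecast(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster for square loss on [0, 1].

    expert_preds: list of rounds, each a list of n expert predictions in [0, 1].
    outcomes:     list of outcomes in [0, 1].
    Returns (learner's cumulative loss, best expert's cumulative loss).
    """
    n = len(expert_preds[0])
    log_w = [0.0] * n                 # log-weights; start from the uniform prior
    learner_loss = 0.0
    expert_loss = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        m = max(log_w)                # normalise in log-space for stability
        w = [math.exp(lw - m) for lw in log_w]
        total = sum(w)
        p = sum(wi * xi for wi, xi in zip(w, preds)) / total  # weighted mean
        learner_loss += (p - y) ** 2
        for i, xi in enumerate(preds):
            loss = (xi - y) ** 2
            expert_loss[i] += loss
            log_w[i] -= eta * loss    # exponential weight update
    return learner_loss, min(expert_loss)
```

For square loss the weighted-mean prediction with η = 1/2 guarantees cumulative loss at most L + 2 ln n; the Aggregating Algorithm's substitution function improves the additive term to (ln n)/2.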


Bernoulli | 1999

Prequential probability: principles and properties

A. Philip Dawid; Vladimir Vovk

Forecaster has to predict, sequentially, a string of uncertain quantities (X1, X2, ...), whose values are determined and revealed, one by one, by Nature. Various criteria may be proposed to assess Forecaster's empirical performance. The weak prequential principle requires that such a criterion should depend on Forecaster's behaviour or strategy only through the actual forecasts issued. A wide variety of appealing criteria are shown to respect this principle. We further show that many such criteria also obey the strong prequential principle, which requires that, when both Nature and
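As a small illustration of the weak prequential principle, here are two assessment criteria (the function names and the binning scheme are illustrative choices, not the paper's) that depend on Forecaster only through the issued forecasts and the realised outcomes:

```python
def brier_score(forecasts, outcomes):
    """Cumulative Brier score. It is a function of the probabilities actually
    issued and the realised binary outcomes only, so two strategies issuing
    the same forecasts receive the same score -- the weak prequential
    principle is respected."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes))

def calibration_gap(forecasts, outcomes, lo, hi):
    """Miscalibration on the rounds where the issued forecast fell in
    [lo, hi): observed frequency of 1s minus the average forecast.
    Again a function of forecasts and outcomes only."""
    rounds = [(p, y) for p, y in zip(forecasts, outcomes) if lo <= p < hi]
    if not rounds:
        return 0.0
    return (sum(y for _, y in rounds) - sum(p for p, _ in rounds)) / len(rounds)
```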


NeuroImage | 2011

Machine learning classification with confidence: Application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression

Ilia Nouretdinov; Sergi G. Costafreda; Alexander Gammerman; Alexey Ya. Chervonenkis; Vladimir Vovk; Vladimir Vapnik; Cynthia H.Y. Fu

There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction.
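A toy sketch of the transductive conformal predictor idea (not the paper's MRI pipeline; the one-dimensional nearest-neighbour nonconformity measure below is an illustrative choice):

```python
import math

def tcp_pvalues(train, x_new, labels):
    """Transductive conformal predictor sketch for 1-D classification.
    Nonconformity of an example = distance to the nearest other example
    with the same label divided by distance to the nearest example with a
    different label.
    train: list of (x, y) pairs with scalar x and label y.
    Returns {label: p-value}; the prediction set at confidence 1 - eps is
    every label whose p-value exceeds eps.
    """
    pvals = {}
    for y_try in labels:
        bag = train + [(x_new, y_try)]      # transduction: try each label
        scores = []
        for i, (xi, yi) in enumerate(bag):
            same = [abs(xi - xj) for j, (xj, yj) in enumerate(bag)
                    if j != i and yj == yi]
            diff = [abs(xi - xj) for j, (xj, yj) in enumerate(bag)
                    if j != i and yj != yi]
            d_same = min(same) if same else math.inf
            d_diff = min(diff) if diff else math.inf
            scores.append(d_same / d_diff if d_diff > 0 else math.inf)
        a_new = scores[-1]
        # p-value: fraction of bag members at least as strange as the new one
        pvals[y_try] = sum(1 for a in scores if a >= a_new) / len(scores)
    return pvals
```

A high p-value for exactly one label yields a confident singleton prediction; when several p-values exceed eps, the predictor honestly reports an uncertain (multi-label) prediction set.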


conference on learning theory | 1997

Derandomizing stochastic prediction strategies

Vladimir Vovk

In this paper we continue the study of the games of prediction with expert advice with uncountably many experts. A convenient interpretation of such games is to construe the pool of experts as one “stochastic predictor”, who chooses one of the experts in the pool at random according to the prior distribution on the experts and then replicates the (deterministic) predictions of the chosen expert. We notice that if the stochastic predictor's total loss is at most L with probability at least p then the learner's loss can be bounded by cL + a ln(1/p) for the usual constants c and a. This interpretation is used to revamp known results and obtain new results on tracking the best expert. It is also applied to merging overconfident experts and to fitting polynomials to data.


conference on learning theory | 1998

Universal portfolio selection

Vladimir Vovk; Chris Watkins

A typical problem in portfolio selection in stock markets is that it is not clear which of the many available strategies should be used. We apply a general algorithm of prediction with expert advice (the Aggregating Algorithm) to two different idealizations of the stock market. One is the well-known game introduced by Cover in connection with his “universal portfolio” algorithm; the other is a more realistic modification of Cover’s game introduced in this paper, where the market’s participants are allowed to take “short positions”, so that the algorithm may be applied to currency and futures markets. Besides applying the Aggregating Algorithm to a countable (or finite) family of arbitrary investment strategies, we also apply it, in the case of Cover’s game, to the uncountable family of “constant rebalanced portfolios” considered by Cover. We generalize Cover’s worst-case bounds for his “universal portfolio” algorithm (which can be regarded as a special case of the Aggregating Algorithm corresponding to learning rate 1) to the case of learning rates not exceeding 1. Finally, we discuss a general approach to designing investment strategies in which, instead of making statistical or other assumptions about the market, natural assumptions of computability are made about possible investment strategies; this approach leads to natural extensions of the notion of Kolmogorov complexity.
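Cover's game and the role of constant rebalanced portfolios can be sketched in a few lines. The two-asset setting and the grid approximation below are simplifications for illustration, not the paper's construction:

```python
def universal_portfolio_wealth(price_relatives, grid=101):
    """Toy sketch of Cover's universal portfolio for two assets, which in
    the Aggregating Algorithm view corresponds to learning rate 1: the
    universal wealth is the average, over constant rebalanced portfolios
    (CRPs) b in [0, 1], of the wealth each CRP would have achieved.
    The integral over b is approximated on a uniform grid.
    price_relatives: list of (r1, r2) per period, where r is
                     today's price divided by yesterday's.
    Returns (universal_wealth, best_crp_wealth).
    """
    bs = [i / (grid - 1) for i in range(grid)]
    wealth = [1.0] * grid                        # wealth of each CRP
    for r1, r2 in price_relatives:
        # a CRP rebalances to fractions (b, 1 - b) every period
        wealth = [w * (b * r1 + (1 - b) * r2) for w, b in zip(wealth, bs)]
    return sum(wealth) / grid, max(wealth)
```

For two assets, Cover's worst-case bound guarantees that the universal wealth after n periods is at least the best CRP's wealth divided by n + 1.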


Statistical Science | 2006

The Sources of Kolmogorov’s Grundbegriffe

Glenn Shafer; Vladimir Vovk

Andrei Kolmogorov's Grundbegriffe der Wahrscheinlichkeitsrechnung put probability's modern mathematical formalism in place. It also provided a philosophy of probability: an explanation of how the formalism can be connected to the world of experience. In this article, we examine the sources of these two aspects of the Grundbegriffe: the work of the earlier scholars whose ideas Kolmogorov synthesized.


Journal of Artificial Intelligence Research | 2011

Regression conformal prediction with nearest neighbours

Harris Papadopoulos; Vladimir Vovk; Alexander Gammerman

In this paper we apply Conformal Prediction (CP) to the k-Nearest Neighbours Regression (k-NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods, which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. The regions produced by any Conformal Predictor are automatically valid; however, their tightness and therefore usefulness depend on the nonconformity measure used by each CP. In effect, a nonconformity measure evaluates how strange a given example is compared to a set of other examples based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k-Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.
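A minimal sketch of the inductive variant with the basic |y − ŷ| nonconformity measure (the paper's six normalised measures are more refined; the scalar-feature setting and function names here are illustrative):

```python
import math

def icp_knn_interval(train, calib, x_new, k=3, eps=0.1):
    """Inductive conformal predictor sketch with a k-NN regressor and the
    basic nonconformity measure |y - yhat|.
    train, calib: disjoint lists of (x, y) with scalar x.
    eps: significance level; the interval is valid at confidence 1 - eps.
    Returns a predictive interval (lo, hi) for the label of x_new.
    """
    def knn_predict(x):
        # average label of the k nearest training examples
        neigh = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        return sum(y for _, y in neigh) / len(neigh)

    # nonconformity scores on the calibration set
    scores = sorted(abs(y - knn_predict(x)) for x, y in calib)
    # (1 - eps) quantile over m + 1 values, per the ICP construction
    m = len(scores)
    idx = min(m - 1, math.ceil((1 - eps) * (m + 1)) - 1)
    q = scores[idx]
    yhat = knn_predict(x_new)
    return yhat - q, yhat + q
```

The half-width q is the same for every test object; the paper's normalised measures instead scale it by a local estimate of difficulty, which is what tightens the regions.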


foundations of computer science | 2002

On-line confidence machines are well-calibrated

Vladimir Vovk

Transductive Confidence Machine (TCM) and its computationally efficient modification, Inductive Confidence Machine (ICM), are ways of complementing machine-learning algorithms with practically useful measures of confidence. We show that when TCM and ICM are used in the on-line mode, their confidence measures are well-calibrated, in the sense that predictive regions at confidence level 1 − δ will be wrong with relative frequency at most δ (approaching δ in the case of randomised TCM and ICM) in the long run. This is not just an asymptotic phenomenon: actually, the error probability of randomised TCM and ICM is δ at every trial and errors happen independently at different trials.
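The calibration property for the randomised (smoothed) predictors can be checked empirically. The sketch below uses i.i.d. Gaussian nonconformity scores as a stand-in for a real nonconformity measure; under exchangeability the smoothed p-values are uniform, so errors at level δ occur with frequency about δ:

```python
import random

def online_error_frequency(n=2000, delta=0.1, seed=0):
    """Simulate the on-line calibration property of smoothed conformal
    p-values. Toy nonconformity score: the observation itself, drawn i.i.d.
    Returns the relative frequency of errors at significance level delta.
    """
    rng = random.Random(seed)
    scores, errors = [], 0
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)                   # new nonconformity score
        scores.append(a)
        gt = sum(1 for s in scores if s > a)
        eq = sum(1 for s in scores if s == a)     # includes the score itself
        p = (gt + rng.random() * eq) / len(scores)  # smoothed p-value
        if p <= delta:                            # error at level delta
            errors += 1
    return errors / n
```

Because each smoothed p-value is exactly uniform and independent of the past, the simulated error frequency concentrates around delta, matching the non-asymptotic claim in the abstract.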


theory and applications of models of computation | 2006

On-line regression competitive with reproducing kernel Hilbert spaces

Vladimir Vovk

We consider the problem of on-line prediction of real-valued labels, assumed bounded in absolute value by a known constant, of new objects from known labeled objects. The prediction algorithm’s performance is measured by the squared deviation of the predictions from the actual labels. No stochastic assumptions are made about the way the labels and objects are generated. Instead, we are given a benchmark class of prediction rules some of which are hoped to produce good predictions. We show that for a wide range of infinite-dimensional benchmark classes one can construct a prediction algorithm whose cumulative loss over the first N examples does not exceed the cumulative loss of any prediction rule in the class plus O(√N).


Finance and Stochastics | 2012

Continuous-time trading and the emergence of probability

Vladimir Vovk

Collaboration


Dive into Vladimir Vovk's collaborations.

Top Co-Authors


Alexander Shen

University of Montpellier
