Yuri Kalnishkan
Royal Holloway, University of London
Publications
Featured research published by Yuri Kalnishkan.
Algorithmic Learning Theory | 2007
Steven Busuttil; Yuri Kalnishkan
This paper deals with the problem of making predictions in the online mode of learning where the dependence of the outcome y_t on the signal x_t can change with time. The Aggregating Algorithm (AA) is a technique that optimally merges experts from a pool, so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool. We apply the AA to the case where the experts are all the linear predictors that can change with time. KAARCh is the kernel version of the resulting algorithm. In the kernel case, the experts are all the decision rules in some reproducing kernel Hilbert space that can change over time. We show that KAARCh suffers a cumulative square loss that is almost as good as that of any expert that does not change very rapidly.
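To make the merging step concrete, here is a minimal sketch of the Aggregating Algorithm for the bounded square-loss game with a fixed finite pool of experts. This is a simplification: the paper's experts are time-varying linear predictors, whereas the pool below is arbitrary; the substitution function is the standard one for square loss on [0, 1], and all names are illustrative.

```python
import numpy as np

def aggregating_algorithm(expert_preds, outcomes, eta=2.0):
    """Merge experts with the AA for the square-loss game, outcomes in [0, 1].

    expert_preds: (T, K) array of the K experts' predictions at each step;
    outcomes:     (T,) array of observed outcomes y_t in [0, 1].
    eta = 2 is the largest learning rate at which the square loss is mixable.
    """
    T, K = expert_preds.shape
    log_w = np.zeros(K)                         # uniform prior over experts
    preds = np.empty(T)
    for t in range(T):
        f = expert_preds[t]
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # generalized prediction g(y) = -(1/eta) ln sum_k w_k exp(-eta (f_k - y)^2)
        g0 = -np.log(w @ np.exp(-eta * f ** 2)) / eta
        g1 = -np.log(w @ np.exp(-eta * (f - 1) ** 2)) / eta
        # substitution function for the square-loss game on [0, 1]
        preds[t] = np.clip(0.5 + (g0 - g1) / 2, 0.0, 1.0)
        log_w -= eta * (f - outcomes[t]) ** 2   # exponential weight update
    return preds
```

With a uniform prior over K experts, the standard AA guarantee for this mixable game is that the learner's cumulative square loss exceeds that of each expert by at most (ln K)/eta.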
Theoretical Computer Science | 2013
Fedor Zhdanov; Yuri Kalnishkan
This paper derives an identity connecting the square loss of ridge regression in on-line mode with the loss of the retrospectively best regressor. Some corollaries about the properties of the cumulative loss of on-line ridge regression are also obtained.
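The abstract does not state the identity itself. Assuming it takes the form familiar from the authors' related work on kernel ridge regression, where each on-line residual is deflated by the factor 1 + x_t' A_{t-1}^{-1} x_t with A_t = aI + sum_{s<=t} x_s x_s', the following sketch checks that form numerically (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, a = 200, 5, 1.0                      # steps, dimension, ridge parameter
X = rng.standard_normal((T, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(T)

A = a * np.eye(d)                          # A_0 = aI
b = np.zeros(d)
lhs = 0.0
for x_t, y_t in zip(X, y):
    A_inv = np.linalg.inv(A)
    gamma = b @ A_inv @ x_t                # on-line ridge regression prediction
    lhs += (y_t - gamma) ** 2 / (1.0 + x_t @ A_inv @ x_t)
    A += np.outer(x_t, x_t)                # A_t = A_{t-1} + x_t x_t'
    b += y_t * x_t

# loss of the retrospectively best regularised regressor
w = np.linalg.solve(a * np.eye(d) + X.T @ X, X.T @ y)
rhs = a * w @ w + np.sum((y - X @ w) ** 2)
print(lhs, rhs)                            # coincide up to numerical error
```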
Algorithmic Learning Theory | 2001
Yuri Kalnishkan; Michael V. Vyugin; Volodya Vovk
The paper introduces a way of reconstructing a loss function from predictive complexity. We show that a loss function and the expectations of the corresponding predictive complexity w.r.t. the Bernoulli distribution are related through the Legendre transformation. It is shown that if two loss functions specify the same complexity then they are equivalent in a strong sense. The expectations are also related to the so-called generalized entropy.
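The generalized entropy mentioned here is the least expected loss any single prediction can achieve under a Bernoulli(p) source; roughly speaking, the expected complexity per symbol approaches it. A small illustrative sketch (function names are hypothetical): for the square-loss game it evaluates to p(1-p), and for the logarithmic game to the Shannon entropy.

```python
import numpy as np

def generalized_entropy(loss, p, grid=np.linspace(1e-6, 1 - 1e-6, 100_001)):
    """H(p): least expected loss of a fixed prediction under Bernoulli(p)."""
    return np.min(p * loss(grid, 1) + (1 - p) * loss(grid, 0))

square = lambda g, w: (w - g) ** 2                           # square-loss game
logloss = lambda g, w: -np.log(g) if w else -np.log(1 - g)   # logarithmic game

for p in (0.1, 0.3, 0.5):
    print(p, generalized_entropy(square, p), p * (1 - p))    # equal: p(1-p)
    print(p, generalized_entropy(logloss, p),
          -p * np.log(p) - (1 - p) * np.log(1 - p))          # Shannon entropy
```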
Algorithmic Learning Theory | 1999
Yuri Kalnishkan
In this paper we introduce a general method that allows one to prove tight linear inequalities between different types of predictive complexity, thus generalising our previous results. The method relies upon probabilistic considerations and makes it possible to describe, in geometrical terms, the sets of coefficients that correspond to true inequalities. We also apply this method to the square-loss and logarithmic complexities and describe relations between them that were not covered by our previous research.
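For intuition only (this is not the paper's probabilistic method, which yields tight inequalities): the simplest linear inequalities between complexities come from pointwise domination of loss functions. For binary outcomes the square loss never exceeds the logarithmic loss measured in nats, which gives K_sq(x) <= K_log(x) + O(1). A quick check of the pointwise inequality:

```python
import numpy as np

# For every prediction gamma in (0, 1) and each outcome omega in {0, 1}:
#   (omega - gamma)^2 <= -ln(gamma)      if omega = 1
#   (omega - gamma)^2 <= -ln(1 - gamma)  if omega = 0
gammas = np.linspace(1e-6, 1 - 1e-6, 10_000)
for omega in (0, 1):
    sq = (omega - gammas) ** 2
    lg = -np.log(gammas if omega == 1 else 1 - gammas)
    assert np.all(sq <= lg)
print("square loss <= logarithmic loss for all predictions and both outcomes")
```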
Algorithmic Learning Theory | 2008
Alexey V. Chernov; Yuri Kalnishkan; Fedor Zhdanov; Vladimir Vovk
This paper compares two methods of prediction with expert advice, the Aggregating Algorithm and Defensive Forecasting, in two different settings. The first setting is traditional, with a countable number of experts and a finite number of outcomes. Surprisingly, these two methods of fundamentally different origin lead to identical procedures. In the second setting the experts can give advice conditional on the learner's future decision. Both methods can be used in the new setting and give the same performance guarantees as in the traditional setting. However, whereas Defensive Forecasting can be applied directly, the AA requires substantial modifications.
European Conference on Machine Learning | 2007
Steven Busuttil; Yuri Kalnishkan
Consider the online regression problem where the dependence of the outcome y_t on the signal x_t changes with time. Standard regression techniques, like Ridge Regression, do not perform well in tasks of this type. We propose two methods to handle this problem: WeCKAAR, a simple modification of an existing regression technique, and KAARCh, an application of the Aggregating Algorithm. Empirical results on artificial data show that in this setting, KAARCh is superior to WeCKAAR and standard regression techniques. On options implied volatility data, the performance of both KAARCh and WeCKAAR is comparable to that of the proprietary technique currently being used at the Russian Trading System Stock Exchange (RTSSE).
Conference on Learning Theory | 2002
Yuri Kalnishkan; Michael V. Vyugin
This paper investigates the behaviour of the constant c(β) from the Aggregating Algorithm. Some conditions for mixability are derived, and it is shown that for many non-mixable games c(β) still converges to 1 as β tends to 1. The condition c(β) → 1 is shown to imply the existence of weak predictive complexity, and it is proved that many games specify complexity up to √n.
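For illustration, assuming the standard closed form of this constant for the absolute-loss game, c(β) = ln(1/β) / (2 ln(2/(1+β))): the game is not mixable, so c(β) > 1 for all β in (0, 1), yet c(β) → 1 as β → 1, the condition the abstract links to weak predictive complexity.

```python
import numpy as np

def c_abs(beta):
    """Assumed closed form of c(beta) for the absolute-loss game."""
    return np.log(1 / beta) / (2 * np.log(2 / (1 + beta)))

for beta in (0.5, 0.9, 0.99, 0.999):
    print(beta, c_abs(beta))   # approaches 1 from above as beta -> 1
```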
Conference on Learning Theory | 2007
Yuri Kalnishkan; Vladimir Vovk; Michael V. Vyugin
In this paper the concept of asymptotic complexity of languages is introduced. This concept formalises the notion of learnability in a particular environment and generalises Lutz and Fortnow's concepts of predictability and dimension. Asymptotic complexities in different prediction environments are then compared by describing the set of all pairs of asymptotic complexities w.r.t. different environments. A geometric characterisation in terms of generalized entropies is obtained, and the results of Lutz and Fortnow are thus generalised.
Algorithmic Learning Theory | 2004
Yuri Kalnishkan; Vladimir Vovk; Michael V. Vyugin
It is well known that there exists a universal (i.e., optimal to within an additive constant if allowed to work infinitely long) algorithm for lossless data compression (Kolmogorov, Levin). The game of lossless compression is an example of an on-line prediction game; for some other on-line prediction games (such as the simple prediction game) a universal algorithm is known not to exist. In this paper we give an analytic characterisation of those binary on-line prediction games for which a universal prediction algorithm exists.
Algorithmic Learning Theory | 2002
Yuri Kalnishkan; Michael V. Vyugin
This paper shows that if the curvature of the boundary of the set of superpredictions for a game vanishes in a nontrivial way, then there is no predictive complexity for the game. This is the first result concerning the absence of complexity for games with convex sets of superpredictions. The proof is further employed to show that certain variants of weak predictive complexity do not exist for some games. In the case of the absolute-loss game we reach a tight demarcation between the existing and non-existing variants of weak predictive complexity.
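A small sketch of the dichotomy, assuming the boundary of the superprediction set is parametrised by the prediction γ via the losses at the two outcomes (an illustration, not the paper's proof): the square-loss boundary has strictly positive curvature everywhere, while for the absolute-loss game, known to have no predictive complexity, the boundary is a straight segment and the curvature vanishes identically.

```python
import sympy as sp

# Curvature of the boundary of the superprediction set for a binary game,
# parametrised as (loss at outcome 0, loss at outcome 1) as gamma varies.
g = sp.symbols('gamma', positive=True)

def curvature(x, y):
    x1, y1 = sp.diff(x, g), sp.diff(y, g)
    x2, y2 = sp.diff(x1, g), sp.diff(y1, g)
    return sp.simplify(sp.Abs(x1 * y2 - y1 * x2)
                       / (x1 ** 2 + y1 ** 2) ** sp.Rational(3, 2))

print(curvature(g ** 2, (1 - g) ** 2))   # square loss: positive for all gamma
print(curvature(g, 1 - g))               # absolute loss: identically zero
```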