Nikolay I. Nikolaev
University of London
Publications
Featured research published by Nikolay I. Nikolaev.
IEEE Transactions on Neural Networks | 2010
Derrick Takeshi Mirikitani; Nikolay I. Nikolaev
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
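The recursive second-order idea can be illustrated with a small sketch: accumulate a Gauss-Newton approximation of the Hessian from the per-step output gradient and take a damped (Levenberg-Marquardt-style) weight step scaled by a noise hyperparameter. This is only a minimal numpy illustration with an assumed interface (w, H, g, err, noise_var, damping); the paper's full Bayesian derivation, covariance recursions, and hyperparameter adaptation are not reproduced here.

```python
import numpy as np

def recursive_lm_step(w, H, g, err, noise_var=1.0, damping=1e-3):
    """One sequential, damped second-order update (illustrative sketch).

    w         : (n,) current weight vector
    H         : (n, n) accumulated Gauss-Newton Hessian approximation
    g         : (n,) gradient of the model output w.r.t. the weights at this step
    err       : scalar prediction error (target - output) at this step
    noise_var : noise hyperparameter (assumed fixed here; adapted in the paper)
    damping   : Levenberg-Marquardt damping term
    """
    H = H + np.outer(g, g) / noise_var                      # accumulate curvature
    grad = g * err / noise_var                              # gradient of the squared-error term
    step = np.linalg.solve(H + damping * np.eye(len(w)), grad)
    return w + step, H
```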
IEEE Transactions on Evolutionary Computation | 2001
Nikolay I. Nikolaev; Hitoshi Iba
This paper presents an approach to regularization of inductive genetic programming tuned for learning polynomials. The objective is to achieve optimal evolutionary performance when searching high-order multivariate polynomials represented as tree structures. We show how to improve the genetic programming of polynomials by balancing its statistical bias with its variance. Bias reduction is achieved by employing a set of basis polynomials in the tree nodes for better agreement with the examples. Since this often leads to over-fitting, such tendencies are counteracted by decreasing the variance through regularization of the fitness function. We demonstrate that this balance facilitates the search as well as enables discovery of parsimonious, accurate, and predictive polynomials. The experimental results given show that this regularization approach outperforms traditional genetic programming on benchmark data mining and practical time-series prediction tasks.
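As a rough illustration of the bias-variance balance described above, a fitness function can score a candidate polynomial by its training error plus a regularization penalty that shrinks its coefficients. The penalty form and weighting below are illustrative assumptions, not the paper's exact fitness function.

```python
import numpy as np

def regularized_fitness(y_true, y_pred, coefficients, reg_lambda=0.01):
    """Lower is fitter: squared error plus a coefficient-shrinkage penalty."""
    mse = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    penalty = reg_lambda * np.sum(np.asarray(coefficients) ** 2)
    return mse + penalty
```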
Congress on Evolutionary Computation | 2000
Hitoshi Iba; Nikolay I. Nikolaev
The problem of identifying the trend in financial data series in order to forecast them for profit increase is addressed using genetic programming (GP). We enhance a GP system that searches for polynomial models of financial data series and relate it to a traditional GP manipulating functional models. Two of the key issues in the development are: 1) preprocessing of the series, which includes data transformations and embedding; and 2) design of a proper fitness function that navigates the search by favouring parsimonious and predictive models. The two GP systems are applied to stock market analysis and examined with real Tokyo Stock Exchange data. Using statistical and economic measures to evaluate the results, we show that the GP could evolve profitable polynomials.
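The preprocessing step can be pictured with a short sketch: transform the price series into log returns and build a delay embedding so that each lagged window becomes an input vector and the next return becomes the target. The window size, delay, and transform are illustrative choices, not the settings used in the paper.

```python
import numpy as np

def embed_returns(prices, dim=5, delay=1):
    """Turn a price series into (inputs, targets) via log returns and delay embedding."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))   # log-return transform
    X, y = [], []
    for t in range(dim * delay, len(r)):
        X.append(r[t - dim * delay:t:delay])                # lagged returns as inputs
        y.append(r[t])                                      # next return as target
    return np.array(X), np.array(y)
```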
Intelligent Data Analysis | 1998
Nikolay I. Nikolaev; Vanio Slavov
This article proposes a study of inductive Genetic Programming with Decision Trees (GPDT). The theoretical underpinning is an approach to the development of fitness functions for improving the search guidance. The approach relies on analysis of the global fitness landscape structure with a statistical correlation measure. The basic idea is that the fitness landscape can be made informative enough to enable efficient search navigation. We demonstrate that with a careful design of the fitness function the global landscape becomes smoother, its correlation increases, and the search is facilitated. Another claim is that the fitness function must not only mitigate navigation difficulties but also maintain decision trees with low syntactic complexity and high predictive accuracy.
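One common statistical correlation measure of landscape structure, in the spirit of the analysis above, is the autocorrelation of fitness values along a random walk generated by a variation operator: higher autocorrelation indicates a smoother, more informative landscape. The walk generator and the operator below are placeholders, not the paper's exact measure.

```python
import numpy as np

def random_walk_fitnesses(start, mutate, fitness, steps=1000):
    """Fitness trace of a random walk driven by the variation operator under study."""
    x, trace = start, []
    for _ in range(steps):
        trace.append(fitness(x))
        x = mutate(x)
    return trace

def fitness_autocorrelation(trace, lag=1):
    """Lag-k autocorrelation of the fitness trace (closer to 1 = smoother landscape)."""
    f = np.asarray(trace, dtype=float) - np.mean(trace)
    denom = float(np.dot(f, f))
    return float(np.dot(f[:-lag], f[lag:]) / denom) if denom > 0 else 0.0
```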
European Conference on Machine Learning | 1997
Nikolay I. Nikolaev; Vanio Slavov
This paper proposes an empirical study of inductive Genetic Programming with Decision Trees. An approach to the development of fitness functions for efficient navigation of the search process is presented. It relies on analysis of the fitness landscape structure and suggests measuring its characteristics with statistical correlations. We demonstrate that this approach increases the global landscape correlation and thus mitigates the search difficulties. Another claim is that the elaborated fitness functions help to produce decision trees with low syntactic complexity and high predictive accuracy.
European Conference on Genetic Programming | 1998
Nikolay I. Nikolaev; Vanio Slavov
This paper presents the fundamental concepts of inductive Genetic Programming, an evolutionary search method especially suitable for inductive learning tasks. We review the components of the method and propose new approaches to some open issues, such as the sensitivity of the operators to the topology of the genetic program trees, the coordination of the operators, and the investigation of their performance. The genetic operators are examined by correlation and information analysis of the fitness landscapes. The performance of inductive Genetic Programming is studied with population diversity and evolutionary dynamics measures, using hard instances for induction of regular expressions.
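Population diversity, one of the performance measures mentioned above, can be sketched as the mean pairwise distance between genetic programs. The serialization-based distance below is only a toy stand-in for whatever structural distance between program trees is appropriate.

```python
from itertools import combinations

def tree_string_distance(a, b):
    """Toy distance: fraction of mismatched symbols between two serialized trees (assumption)."""
    n = max(len(a), len(b))
    return sum(x != y for x, y in zip(a.ljust(n), b.ljust(n))) / n if n else 0.0

def population_diversity(programs, distance=tree_string_distance):
    """Mean pairwise distance over the population; higher means more diverse."""
    pairs = list(combinations(programs, 2))
    return sum(distance(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
```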
International Symposium on Neural Networks | 2005
Nikolay I. Nikolaev; Peter Tino
This paper presents an approach to sequential training of the relevance vector machine (RVM) suitable for Bayesian learning from time series. The key idea is to perform simultaneous incremental optimization of both the weight parameters and their prior hyperparameters using data arriving successively, one point at a time. Algorithms for efficient sequential regularized dynamic-learning-rate training of the weights and gradient-descent training of their corresponding individual priors are derived. It is shown that this fast sequential RVM can outperform similar Bayesian kernel methods, such as the batch RVM, fast RVM, variational RVM, and Gaussian processes, on multistep-ahead forecasting of time series.
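The "one point at a time" flavor of the method can be sketched as follows: for each arriving observation, take a regularized gradient step on the kernel weights and a gradient step on their individual prior precisions. The kernel, learning rates, and hyperparameter update rule here are simplified assumptions and not the paper's derivation.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian kernel features of a single input against fixed centers."""
    d = np.linalg.norm(centers - np.asarray(x, dtype=float), axis=1)
    return np.exp(-(d ** 2) / (2.0 * width ** 2))

def sequential_step(w, alpha, x, y, centers, lr_w=0.05, lr_a=0.01):
    """One sequential update of weights w and per-weight prior precisions alpha (both positive arrays)."""
    phi = rbf_features(x, centers)
    err = y - w @ phi
    w = w + lr_w * (err * phi - alpha * w)                      # regularized weight step
    # descent on the negative log Gaussian prior term with respect to each alpha_i
    alpha = np.maximum(alpha - lr_a * (0.5 * w ** 2 - 0.5 / alpha), 1e-6)
    return w, alpha
```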
European Conference on Principles of Data Mining and Knowledge Discovery | 1999
Nikolay I. Nikolaev; Hitoshi Iba
This paper presents an approach to automated discovery of high-order multivariate polynomials by inductive Genetic Programming (iGP). Evolutionary search is used for learning polynomials represented as non-linear multivariate trees. Optimal search performance is pursued by balancing the statistical bias and the variance of iGP. We reduce the bias by extending the set of basis polynomials for better agreement with the examples. Possible overfitting due to the reduced bias is counteracted by a variance component, implemented as a regularizing factor of the error in an MDL fitness function. Experimental results demonstrate that regularized iGP discovers accurate, parsimonious, and predictive polynomials when trained on practical data mining tasks.
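The regularized MDL fitness can be pictured as a code-length trade-off: a residual-error term weighted by a regularizing factor plus a description-length term that grows with tree size. The exact coding scheme and weighting used in the paper are not reproduced here.

```python
import numpy as np

def regularized_mdl_fitness(y, y_hat, tree_size, reg=1.0):
    """Lower is fitter: regularized residual code length plus model code length."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    error_cost = 0.5 * n * np.log(rss / n + 1e-12)   # code length of the residuals
    model_cost = 0.5 * tree_size * np.log(n)         # code length of the polynomial tree
    return reg * error_cost + model_cost
```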
Genetic Programming and Evolvable Machines | 2001
Nikolay I. Nikolaev; Hitoshi Iba
An accelerated polynomial construction technique for genetic programming is proposed. This is a horizontal technique for gradual expansion of a partial polynomial during traversal of its tree-structured representation. The coefficients of the partial polynomial and the coefficient of the new term are calculated by a rapid recursive least squares (RLS) fitting method. When used for genetic programming (GP) of polynomials, this technique not only enables fast estimation of the coefficients but also leads to power series models that differ from those of traditional Koza-style GP and from those of the previous GP with polynomials, STROGANOFF. We demonstrate that the accelerated GP is successful in that it evolves solutions with greater generalization capacity than STROGANOFF and traditional GP on symbolic regression, pattern recognition, and financial time-series prediction tasks.
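The coefficient fitting can be illustrated with a standard recursive least squares update, in which each training example refines the current coefficient vector without refitting from scratch; the interface below is an assumption, not the paper's exact procedure.

```python
import numpy as np

def rls_update(theta, P, phi, y, forget=1.0):
    """One recursive least squares step.

    theta : (k,) current polynomial coefficients
    P     : (k, k) inverse correlation matrix of the basis terms
    phi   : (k,) values of the basis terms for one example
    y     : target value for that example
    """
    gain = P @ phi / (forget + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, phi @ P)) / forget
    return theta, P
```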
Archive | 1998
Vanio Slavov; Nikolay I. Nikolaev
This paper proposes a study of the performance of inductive genetic programming with decision trees. The investigation concerns the influence of the fitness function, the genetic mutation operator, and the categorical distribution of the examples in inductive tasks on the search process. The approach uses statistical correlations in order to clarify two aspects: the global and the local search characteristics of the fitness landscape structure. The work is motivated by the fact that the structure of the fitness landscape is the only information that helps to navigate the search space of an inductive task. It was found that analysis of the landscape structure allows the landscape to be tuned and the exploratory power of the operator on it to be increased.