
Publication


Featured research published by Derrick Takeshi Mirikitani.


IEEE Transactions on Neural Networks | 2010

Recursive Bayesian Recurrent Neural Networks for Time-Series Modeling

Derrick Takeshi Mirikitani; Nikolay I. Nikolaev

This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
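The sequential covariance (inverse-Hessian) update at the heart of such recursive training is easiest to see in the linear case, where it reduces to recursive least squares. The sketch below is illustrative only: the toy model, dimensions, and noise levels are assumptions, and the paper's actual algorithm uses RTRL derivatives of an RNN together with Bayesian re-estimation of the noise and prior hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Recursive second-order weight estimation, shown on a *linear* model where
# the Gauss-Newton/covariance recursion reduces to recursive least squares.
w_true = np.array([1.5, -0.5])
w = np.zeros(2)
P = np.eye(2) * 100.0            # prior weight covariance (acts as regularizer)

for _ in range(300):
    x = rng.uniform(-1, 1, size=2)                  # for an RNN, x would be
    y = w_true @ x + 0.01 * rng.standard_normal()   # the RTRL derivative vector
    e = y - w @ x                                   # innovation (prediction error)
    K = P @ x / (1.0 + x @ P @ x)                   # gain
    w = w + K * e                                   # sequential weight update
    P = P - np.outer(K, x @ P)                      # covariance (inverse-Hessian) update

print(np.round(w, 2))   # close to w_true
```

The same recursion, damped and regularized with re-estimated hyperparameters, gives the stable sequential Hessian estimation the abstract refers to.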


Neurocomputing | 2010

Letters: Efficient online recurrent connectionist learning with the ensemble Kalman filter

Derrick Takeshi Mirikitani; Nikolay I. Nikolaev

One of the main drawbacks of online learning for recurrent neural networks (RNNs) is the high computational cost of training. Much effort has been spent on reducing the computational complexity of online learning algorithms, usually focusing on the real-time recurrent learning (RTRL) algorithm. Significant reductions in the complexity of RTRL have been achieved, but at the cost of degraded model performance. We take a different approach to complexity reduction in online learning of RNNs through a sequential Bayesian filtering framework and propose the ensemble Kalman filter (EnKF) for derivative-free parameter estimation. The EnKF provides an online training solution that, under certain assumptions, can reduce the computational complexity by two orders of magnitude relative to the original RTRL algorithm without sacrificing the modeling potential of the network. Through forecasting experiments on observed data from nonlinear systems, it is shown that the EnKF-trained RNN outperforms other RNN training algorithms in terms of real computational time and also leads to models that produce better forecasts.
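A minimal sketch of derivative-free EnKF parameter estimation follows, with a stand-in linear model in place of the RNN; the model, ensemble size, jitter, and noise levels are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble Kalman filter treating the model parameters as the state:
# no derivatives are needed, only ensemble forecasts of the observation.
w_true = np.array([1.0, -2.0, 0.5])
N, d = 50, 3                        # ensemble size, parameter dimension
W = rng.standard_normal((N, d))     # initial parameter ensemble
R = 0.01                            # observation noise variance

for _ in range(200):
    x = rng.uniform(-1, 1, size=d)
    y = w_true @ x + np.sqrt(R) * rng.standard_normal()
    W += 0.001 * rng.standard_normal((N, d))    # small parameter jitter
    Y = W @ x                                   # ensemble predictions
    dW = W - W.mean(axis=0)
    dY = Y - Y.mean()
    Cwy = dW.T @ dY / (N - 1)                   # parameter-observation covariance
    Cyy = dY @ dY / (N - 1) + R                 # innovation variance
    K = Cwy / Cyy                               # Kalman gain (vector)
    # perturbed-observation analysis step, one update per ensemble member
    W += np.outer(y + np.sqrt(R) * rng.standard_normal(N) - Y, K)

w_hat = W.mean(axis=0)
print(np.round(w_hat, 2))
```

Because the gain is built from ensemble statistics alone, the same loop applies unchanged when `Y` comes from a nonlinear recurrent model, which is what makes the approach attractive for RNN training.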


Neural Computing and Applications | 2011

Nonlinear maximum likelihood estimation of electricity spot prices using recurrent neural networks

Derrick Takeshi Mirikitani; Nikolay I. Nikolaev

Electricity spot prices are complex processes characterized by nonlinearity and extreme volatility. Previous work on nonlinear modeling of electricity spot prices has shown encouraging results, and we build on this area by proposing an Expectation Maximization algorithm for maximum likelihood estimation of recurrent neural networks utilizing the Kalman filter and smoother. This involves inference of both the parameters and the hyper-parameters of the model, taking into account model uncertainty and noise in the data. The Expectation Maximization algorithm uses a forward filtering and backward smoothing (Expectation) step, followed by a hyper-parameter estimation (Maximization) step. The model is validated across two data sets from different power exchanges. It is found that, after learning the a posteriori hyper-parameters, the proposed algorithm outperforms the real-time recurrent learning and extended Kalman filtering algorithms for recurrent networks, as well as other contemporary models that have previously been applied to the modeling of electricity spot prices.
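The filter/smoother EM recursion can be sketched on a scalar linear-Gaussian state space, where both steps are closed-form. The paper applies the same idea to recurrent networks, so the model, transition coefficient, and hyper-parameter values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# EM for x_t = a*x_{t-1} + q_t, y_t = x_t + r_t:
# E-step = Kalman filter + RTS smoother, M-step = noise re-estimation.
a, Q_true, R_true, T = 0.9, 0.1, 0.5, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t-1] + np.sqrt(Q_true) * rng.standard_normal()
y = x + np.sqrt(R_true) * rng.standard_normal(T)

Q, R = 1.0, 1.0                                  # poor initial hyper-parameters
for _ in range(50):
    # E-step: forward Kalman filter
    mf = np.zeros(T); Pf = np.zeros(T)
    m, P = 0.0, 1.0
    for t in range(T):
        m, P = a * m, a * a * P + Q              # predict
        K = P / (P + R)                          # gain
        m, P = m + K * (y[t] - m), (1 - K) * P   # update
        mf[t], Pf[t] = m, P
    # E-step: RTS backward smoother
    ms = mf.copy(); Ps = Pf.copy(); cross = np.zeros(T)
    for t in range(T - 2, -1, -1):
        Pp = a * a * Pf[t] + Q                   # one-step predicted variance
        G = a * Pf[t] / Pp                       # smoother gain
        ms[t] = mf[t] + G * (ms[t+1] - a * mf[t])
        Ps[t] = Pf[t] + G * G * (Ps[t+1] - Pp)
        cross[t+1] = G * Ps[t+1] + ms[t] * ms[t+1]   # E[x_t * x_{t+1} | Y]
    Ex2 = Ps + ms * ms                           # E[x_t^2 | Y]
    # M-step: closed-form noise hyper-parameter updates
    Q = np.mean(Ex2[1:] - 2 * a * cross[1:] + a * a * Ex2[:-1])
    R = np.mean((y - ms) ** 2 + Ps)

print(round(float(Q), 2), round(float(R), 2))    # near Q_true, R_true
```

Starting from deliberately wrong noise levels, the iteration recovers values close to the generating hyper-parameters, which is exactly the role the Maximization step plays in the proposed training algorithm.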


International Symposium on Neural Networks | 2007

Recursive Bayesian Levenberg-Marquardt Training of Recurrent Neural Networks

Derrick Takeshi Mirikitani; Nikolay I. Nikolaev

This paper develops a Bayesian approach to recursive second-order training of recurrent neural networks. A general recursive Levenberg-Marquardt algorithm is elaborated using Bayesian regularization. Individual local regularization hyperparameters, as well as an output noise hyperparameter, are re-estimated in order to maximize the weight posterior distribution and to produce a well-generalizing network model. The proposed algorithm performs a computationally stable sequential Hessian estimation with RTRL derivatives. Experimental investigations using benchmark and practical data sets show that the developed algorithm outperforms the standard RTRL and extended Kalman training algorithms for recurrent networks, as well as feedforward and finite impulse response neural filters, on time-series modeling.


International Conference on Networking and Services | 2007

Energy Reduction in Wireless Sensor Networks through Measurement Estimation with Second Order Recurrent Neural Networks

Incheon Park; Derrick Takeshi Mirikitani

Wireless sensor networks (WSNs) are real-time databases of real-world phenomena. As WSNs generally rely on batteries for power, the nodes of the network have a limited operational lifetime, so efficient power consumption is of utmost importance in the operation and maintenance of the network. This paper summarizes work in progress on efficient energy consumption during sensor data collection through time-series modeling. A previous energy-efficient approach in WSNs, which uses a linear time-series model for measurement prediction, is reviewed, and a new model based on a nonlinear machine learning approach is proposed.
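The measurement-estimation idea can be sketched as a dual-prediction scheme: sensor and sink run identical predictors, and a reading is transmitted only when it deviates from the prediction by more than a tolerance. The last-value predictor and synthetic signal below are illustrative stand-ins for the recurrent model the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Dual prediction: skip the radio whenever the sink's prediction is already
# within eps of the true reading, trading a bounded error for saved energy.
T, eps = 500, 0.15
signal = np.sin(np.linspace(0, 20, T)) + 0.02 * rng.standard_normal(T)

sent = 0
last = signal[0]            # shared state: last transmitted value
recon = np.zeros(T)         # what the sink believes the reading is
for t in range(T):
    pred = last             # both sides predict "same as last update"
    if abs(signal[t] - pred) > eps:
        last = signal[t]    # transmit: sink replaces prediction with reading
        sent += 1
    recon[t] = last

print(sent, "of", T, "readings transmitted")
```

By construction the sink's reconstruction error never exceeds `eps`, and every suppressed transmission is energy saved; a better (nonlinear) predictor suppresses more transmissions at the same tolerance.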


International Conference on Machine Learning and Applications | 2008

Dynamic Modeling with Ensemble Kalman Filter Trained Recurrent Neural Networks

Derrick Takeshi Mirikitani; Nikolay I. Nikolaev

The ensemble Kalman filter is a contemporary data assimilation algorithm used in the geoscience community. The filter's popularity most likely stems from its simplicity, its low computational cost, and its superior performance over the extended Kalman filter in strongly nonlinear, high-dimensional assimilation tasks. Due to these attractive characteristics, we investigate the performance and suitability of the filter for training neural networks in time-series forecasting applications. Through modeling experiments on observed data from nonlinear systems, it is shown that the ensemble Kalman filter trained recurrent neural network outperforms other neural time-series models trained with the extended Kalman filter and gradient descent learning.


International Conference on Artificial Neural Networks | 2009

Modeling Dst with Recurrent EM Neural Networks

Derrick Takeshi Mirikitani; Lahcen Ouarbya

Recurrent neural networks have been used extensively for space weather forecasts of geomagnetospheric disturbances. One of the major drawbacks for reliable forecasts has been the use of training algorithms that are unable to account for model uncertainty and noise in the data. We propose a probabilistic training algorithm based on the Expectation Maximization framework for parameterization of the model, which makes use of a forward filtering and backward smoothing Expectation step, and a Maximization step in which the model uncertainty and measurement noise estimates are computed. Through numerical experimentation it is shown that the proposed model allows for reliable forecasts and also outperforms other neural time-series models trained with the extended Kalman filter and gradient descent learning.


OCEANS 2007 - Europe | 2007

Day Ahead Ocean Swell Forecasting With Recursively Regularized Recurrent Neural Networks

Derrick Takeshi Mirikitani

Day-ahead forecasts of ocean swell amplitude at fixed deep-water observation platforms could provide critical decision-making information for a large number of coastal ocean activities. Currently, the hourly measurements of wave height data provided by fixed deep-water observation platforms tend to be irregular and contaminated with noise. This data quality issue has been problematic for previous approaches to wave amplitude forecasting. This paper proposes a solution to the data quality issue through recursively regularized weight estimation for a recurrent multilayer perceptron neural network. Experimentation has shown that the proposed model outperforms standard feedforward models, as well as extended Kalman filter trained recurrent neural models, in a next-day forecasting task.


Intelligent Data Engineering and Automated Learning | 2011

Evolving recurrent neural models of geomagnetic storms

Derrick Takeshi Mirikitani; Lisa Tsui; Lahcen Ouarbya

Genetic algorithms for training recurrent neural networks (RNNs) have not yet been considered for modeling the dynamics of magnetospheric plasma. We provide a discussion of the previous state of the art in modeling Dst. Then, a recurrent neural network trained by a genetic algorithm is proposed for geomagnetic storm forecasting. The exogenous inputs to the RNN consist of three parameters, bz, n, and v, which represent the southward and azimuthal components of the interplanetary magnetic field (IMF), the density of electromagnetic particles, and the velocity of the particles respectively. The proposed model is compared to a model used in operational forecasts on a series of geomagnetic storms that so far have been difficult to forecast. It is shown that the proposed evolutionary method of training the RNN outperforms the operational model which was trained by gradient descent.
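A minimal evolutionary sketch of the training idea follows, evolving the three weights of a one-neuron recurrent model against self-generated data. It uses elitism with Gaussian mutation only (no crossover), and the model, population sizes, and mutation scale are illustrative assumptions rather than the paper's GA.

```python
import numpy as np

rng = np.random.default_rng(4)

# Evolve theta = (w, u, v) of the recurrence h_t = tanh(w*h_{t-1} + u*x_t),
# y_t = v*h_t, with fitness = forecast mean squared error.
def forecast(theta, xs):
    w, u, v = theta
    h, ys = 0.0, []
    for x in xs:
        h = np.tanh(w * h + u * x)
        ys.append(v * h)
    return np.array(ys)

xs = rng.uniform(-1, 1, 80)
target = forecast(np.array([0.5, 1.0, 1.0]), xs)   # data from known weights
mse = lambda th: float(np.mean((forecast(th, xs) - target) ** 2))

pop = rng.uniform(-2, 2, (40, 3))                  # random initial population
for gen in range(150):
    fit = np.array([mse(th) for th in pop])
    elite = pop[np.argsort(fit)[:10]]              # selection: keep best 10
    children = elite[rng.integers(0, 10, 30)] + 0.1 * rng.standard_normal((30, 3))
    pop = np.vstack([elite, children])             # elitism + Gaussian mutation

best = min(pop, key=mse)
print(round(mse(best), 4))                         # small forecast error
```

No gradient of the recurrent model is ever computed, which is the appeal of evolutionary training when derivatives are unstable or unavailable.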


International Symposium on Neural Networks | 2010

Unscented grid filtering and Elman recurrent networks

Nikolay Y. Nikolaev; Derrick Takeshi Mirikitani; Evgueni N. Smirnov

This paper develops an unscented grid-based filter for improved recurrent neural network modeling of time series. The filter approximates the weight posterior distribution directly as a linear mixture using deterministic unscented sampling. The weight posterior is obtained in one step, without linearisation through derivatives. An expectation maximisation algorithm is formulated for evaluation of the complete data likelihood and for finding the state noise and observation noise hyperparameters. Empirical investigations show that the proposed unscented grid filter compares favourably to other similar filters on recurrent network modeling of two real-world time series of environmental importance.
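Deterministic unscented sampling itself is easy to illustrate: 2n+1 sigma points reproduce a Gaussian's mean and covariance exactly, and propagating them through a nonlinearity approximates the transformed moments without any derivatives. The nonlinearity, moments, and scaling parameter below are illustrative choices, not the paper's filter.

```python
import numpy as np

# Standard unscented transform: sigma points from a Cholesky factor of the
# scaled covariance, symmetric about the mean, with matching weights.
def unscented_points(mean, cov, kappa=1.0):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

mean = np.array([0.2, -0.1])
cov = np.array([[0.05, 0.01], [0.01, 0.02]])
f = lambda x: np.tanh(x)                 # example nonlinearity (elementwise)

pts, w = unscented_points(mean, cov)
ym = w @ f(pts)                          # unscented estimate of E[f(X)]

rng = np.random.default_rng(5)
mc = f(rng.multivariate_normal(mean, cov, 200_000)).mean(axis=0)
print(np.round(ym, 3), np.round(mc, 3))  # the two estimates agree closely
```

Five deterministic evaluations here stand in for hundreds of thousands of Monte Carlo samples, which is why the sampling is attractive for approximating a weight posterior in one step.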
