
Publication


Featured research published by Rohitash Chandra.


Neurocomputing | 2012

Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction

Rohitash Chandra; Mengjie Zhang

Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms to solve them. Cooperative coevolution has been effective for evolving neural networks. Different problem decomposition methods in cooperative coevolution determine how a neural network is decomposed and encoded, which affects its performance. A good problem decomposition method should provide enough diversity and also group interacting variables, which are the synapses in the neural network. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show improved accuracy when compared to some of the methods from the literature.
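The decompose-and-collaborate loop described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: a toy quadratic objective stands in for an Elman network loss, and all names and parameters here are hypothetical.

```python
import random

# Sketch of cooperative coevolution: a weight vector is split into
# subcomponents, each evolved by its own subpopulation. A candidate
# subcomponent is evaluated by combining it with the current best
# members of the other subpopulations (its collaborators).

def fitness(weights):
    # Stand-in objective (sphere function, lower is better),
    # not an actual neural network training error.
    return sum(w * w for w in weights)

def cooperative_coevolution(dim=8, n_sub=4, pop_size=10, generations=50, seed=0):
    rng = random.Random(seed)
    sub_dim = dim // n_sub
    # One subpopulation per subcomponent; individuals are weight slices.
    pops = [[[rng.uniform(-1, 1) for _ in range(sub_dim)]
             for _ in range(pop_size)] for _ in range(n_sub)]
    best = [pop[0][:] for pop in pops]  # current best collaborator per subcomponent

    def full_vector(i, candidate):
        # Assemble a complete weight vector using `candidate` for slice i.
        parts = [candidate if j == i else best[j] for j in range(n_sub)]
        return [w for part in parts for w in part]

    for _ in range(generations):
        for i in range(n_sub):               # round-robin over subcomponents
            for k in range(pop_size):
                child = [w + rng.gauss(0, 0.1) for w in pops[i][k]]
                if fitness(full_vector(i, child)) < fitness(full_vector(i, pops[i][k])):
                    pops[i][k] = child       # greedy replacement
            # Update the best collaborator for this subcomponent.
            pops[i].sort(key=lambda c: fitness(full_vector(i, c)))
            best[i] = pops[i][0][:]
    return fitness(full_vector(0, best[0]))
```

The key design point is that a subcomponent is never scored in isolation: its fitness is only defined in the context of the collaborators supplied by the other subpopulations.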


IEEE Transactions on Neural Networks | 2015

Competition and Collaboration in Cooperative Coevolution of Elman Recurrent Neural Networks for Time-Series Prediction

Rohitash Chandra

Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while keeping them genetically isolated. Problem decomposition is an important aspect of using CC for neuroevolution. CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents. Different problem decomposition methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration, and competition are needed for CC, as multiple subpopulations are used to represent the problem. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improved performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.
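The island-based competition idea can be sketched as follows. This is a hedged toy example, not the paper's algorithm: a quadratic objective replaces recurrent-network training, and different mutation scales stand in for the two problem decomposition methods.

```python
import random

# Sketch of island-based competition: two islands evolve the same
# problem with different search behaviour; after each round the island
# holding the better solution injects it into the other island,
# replacing that island's worst individual.

def evolve_round(pop, sigma, rng, fitness):
    for k in range(len(pop)):
        child = [w + rng.gauss(0, sigma) for w in pop[k]]
        if fitness(child) < fitness(pop[k]):
            pop[k] = child
    pop.sort(key=fitness)  # best individual first

def competitive_islands(dim=6, pop_size=8, rounds=30, seed=1):
    rng = random.Random(seed)
    fitness = lambda w: sum(x * x for x in w)  # stand-in objective
    islands = [[[rng.uniform(-1, 1) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(2)]
    sigmas = [0.05, 0.2]  # stand-ins for two decomposition methods
    for _ in range(rounds):
        for i in (0, 1):
            evolve_round(islands[i], sigmas[i], rng, fitness)
        # Competition: the better island's best replaces the other's worst.
        winner = min((0, 1), key=lambda i: fitness(islands[i][0]))
        islands[1 - winner][-1] = islands[winner][0][:]
    return min(fitness(islands[i][0]) for i in (0, 1))
```

The migration step is what couples the islands: whichever search behaviour is working best at a given stage propagates its solution, which is the "competition" the abstract refers to.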


Neurocomputing | 2012

On the issue of separability for problem decomposition in cooperative neuro-evolution

Rohitash Chandra; Marcus Frean; Mengjie Zhang

Cooperative coevolution divides an optimisation problem into subcomponents and employs evolutionary algorithms for evolving them. Problem decomposition has been a major issue in using cooperative coevolution for neuro-evolution. Efficient problem decomposition methods group interacting variables into the same subcomponents. It is important to find out which problem decomposition methods group subcomponents efficiently, and how the neural network behaves during training in terms of the interaction among its synapses. In this paper, the interdependencies among the synapses are analysed and a problem decomposition method is introduced for feedforward neural networks on pattern classification problems. We show that the neural network training problem is partially separable and that the level of interdependence changes during the learning process. The results confirm that the proposed problem decomposition method has improved performance compared to its counterparts.
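The notion of separability used here can be made concrete with a small check. This is an illustrative sketch, not the paper's interdependency analysis: for a separable pair of variables, the effect on the objective of changing both equals the sum of the individual effects, and any deviation from additivity signals interaction.

```python
# Sketch of a pairwise interaction check: measure how far a pair of
# variables deviates from additive (separable) behaviour under a
# given objective. Objectives below are illustrative stand-ins.

def interaction(f, x, i, j, delta=0.5):
    # Deviation from additivity; ~0 means the pair (i, j) is separable.
    def bump(*idx):
        y = x[:]
        for k in idx:
            y[k] += delta
        return f(y)
    base = f(x)
    joint = bump(i, j) - base
    separate = (bump(i) - base) + (bump(j) - base)
    return abs(joint - separate)

separable = lambda x: x[0] ** 2 + x[1] ** 2        # no cross-term
nonseparable = lambda x: x[0] ** 2 + x[0] * x[1]   # cross-term x0*x1
```

Applied during training, a measure of this kind would indicate which synapses interact and therefore belong in the same subcomponent, which is the intuition behind the decomposition method the abstract describes.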


Neurocomputing | 2011

Encoding subcomponents in cooperative co-evolutionary recurrent neural networks

Rohitash Chandra; Marcus Frean; Mengjie Zhang; Christian W. Omlin

Cooperative coevolution employs evolutionary algorithms to solve a high-dimensional search problem by decomposing it into low-dimensional subcomponents. Efficient problem decomposition methods or encoding schemes group interacting variables into the same subcomponents so that the subcomponents can be solved separately where possible. It is important to find out which encoding schemes group subcomponents efficiently, and the nature of the neural network training problem in terms of its degree of non-separability. This paper introduces a novel encoding scheme in cooperative coevolution for training recurrent neural networks. The method is tested on grammatical inference problems. The results show that the proposed encoding scheme achieves better performance when compared to a previous encoding scheme.


Applied Soft Computing | 2012

Crossover-based local search in cooperative co-evolutionary feedforward neural networks

Rohitash Chandra; Marcus Frean; Mengjie Zhang

Cooperative coevolution has been a major approach to neuro-evolution. Memetic algorithms apply local search to selected individuals in a population. This paper presents a new cooperative coevolution framework that incorporates crossover-based local search. The proposed approach makes effective use of local search without adding to the computational cost in the sub-populations of cooperative coevolution. The relationship between the intensity of, and the interval between, local search applications is empirically investigated, and a heuristic for adapting the local search intensity during evolution is presented. The method is used for training feedforward neural networks on eight pattern classification problems. The results show improved performance in terms of optimisation time, scalability and robustness for most of these problems.
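The intensity/interval interplay described above can be sketched with a minimal memetic loop. This is a hedged toy example under a stand-in quadratic objective; the parameter names (`interval`, `intensity`) are illustrative, not taken from the paper.

```python
import random

# Sketch of memetic evolution with crossover-based local search:
# a plain evolutionary loop, plus a local-search phase applied to the
# best individual every `interval` generations for `intensity` steps.
# The local search recombines the best individual with random
# population members (arithmetic crossover) and keeps improvements.

def memetic_search(dim=6, pop_size=12, generations=40,
                   interval=5, intensity=10, seed=2):
    rng = random.Random(seed)
    fitness = lambda w: sum(x * x for x in w)  # stand-in objective
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for gen in range(generations):
        for k in range(pop_size):
            child = [w + rng.gauss(0, 0.1) for w in pop[k]]
            if fitness(child) < fitness(pop[k]):
                pop[k] = child
        pop.sort(key=fitness)  # best individual first
        if gen % interval == 0:
            # Crossover-based local search on the current best individual.
            for _ in range(intensity):
                mate = rng.choice(pop[1:])
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(pop[0], mate)]
                if fitness(child) < fitness(pop[0]):
                    pop[0] = child
    return fitness(pop[0])
```

Raising `intensity` refines the best solution more aggressively at extra per-phase cost, while widening `interval` spends more budget on global exploration; the paper's heuristic adapts this trade-off during evolution.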


Soft Computing | 2012

Adapting modularity during learning in cooperative co-evolutionary recurrent neural networks

Rohitash Chandra; Marcus Frean; Mengjie Zhang

Adaptation during evolution has been an important focus of research in training neural networks. Cooperative coevolution has played a significant role in improving the standard evolution of neural networks by organising the training problem into modules and solving them independently. The number of modules required to represent a neural network is critical to the success of evolution. This paper proposes a framework for adapting the number of modules during evolution, called adaptive modularity cooperative coevolution. It is used for training recurrent neural networks on grammatical inference problems. The results show that the proposed approach performs better than its counterparts as the dimensionality of the problem increases.


International Symposium on Neural Networks | 2014

Competitive two-island cooperative coevolution for training Elman recurrent networks for time series prediction

Rohitash Chandra

Problem decomposition is an important aspect of using cooperative coevolution for neuro-evolution. Cooperative coevolution employs different problem decomposition methods to decompose the neural network training problem into subcomponents. Different problem decomposition methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration and competition are characteristics that are needed for cooperative coevolution, as multiple sub-populations are used to represent the problem. This paper presents a competitive two-island cooperative coevolution method for training recurrent neural networks on chaotic time series problems. Neuron-level and synapse-level problem decomposition are used in the respective islands. The results show improved performance when compared to standalone cooperative coevolution and other methods from the literature.


Robotica | 2016

The forward kinematics of the 6-6 parallel manipulator using an evolutionary algorithm based on generalized generation gap with parent-centric crossover

Luc Rolland; Rohitash Chandra

In this paper, a fast and efficient evolutionary algorithm, G3-PCX (generalized generation gap with parent-centric crossover), is applied to solve the forward kinematics problem (FKP) of general parallel manipulators modelled as the 6-6 hexapod, whose fixed and mobile platforms are non-planar and non-symmetrical. The two platforms are connected by six linear actuators, each located between a ball joint and a universal joint. The forward kinematics are formulated from the inverse kinematics as a position-based equation system, which is converted into an objective function expressing the sum of squared errors on the kinematic chain lengths and mobile platform distances. In less than one second, the 16 unique real solutions are computed with improved accuracy compared to previous methods.
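The objective function described above can be sketched directly from the hexapod geometry. This is an illustrative formulation under assumed conventions (Z-Y-X Euler angles, made-up anchor coordinates), not the paper's exact equation system; the optimizer itself (G3-PCX) is omitted.

```python
import math

# Sketch of the forward-kinematics objective for a 6-6 hexapod:
# given six base anchors, six platform anchors (platform frame), and
# measured leg lengths, a candidate pose (x, y, z, roll, pitch, yaw)
# is scored by the sum of squared errors between the leg lengths it
# implies and the measured ones. Zero at a true FKP solution.

def rotation(roll, pitch, yaw):
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # Z-Y-X (yaw-pitch-roll) rotation matrix.
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def leg_lengths(pose, base, platform):
    x, y, z, roll, pitch, yaw = pose
    R = rotation(roll, pitch, yaw)
    lengths = []
    for b, p in zip(base, platform):
        # Platform anchor transformed into the world frame.
        w = [(x, y, z)[i] + sum(R[i][j] * p[j] for j in range(3))
             for i in range(3)]
        lengths.append(math.dist(w, b))
    return lengths

def fkp_objective(pose, base, platform, measured):
    # Sum of squared leg-length errors for the candidate pose.
    return sum((l - m) ** 2
               for l, m in zip(leg_lengths(pose, base, platform), measured))
```

An evolutionary algorithm such as G3-PCX would then minimise `fkp_objective` over the six pose variables, with each real FKP solution appearing as a global minimum of value zero.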


International Symposium on Neural Networks | 2014

Multi-objective cooperative coevolution of neural networks for time series prediction

Shelvin Chand; Rohitash Chandra

The use of neural networks for time series prediction has been an important focus of recent research. Multi-objective optimization techniques have been used for training neural networks for time series prediction. Cooperative coevolution is an evolutionary computation method that decomposes the problem into subcomponents and has shown promising results for training neural networks. This paper presents a multi-objective cooperative coevolutionary method for training neural networks in which the training data set is processed to obtain the different objectives for multi-objective evolutionary training. We use different time lags as the multi-objective criteria. The trained multi-objective neural network can predict the original time series for preprocessed data sets distinguished by their time lags. The proposed method outperforms conventional cooperative coevolutionary methods for training neural networks, as well as other methods from the literature, on benchmark problems.
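The data preprocessing step described here, where each time lag yields one objective, can be sketched as follows. The lag values and function names are illustrative assumptions, not the paper's choices.

```python
# Sketch of preparing multi-objective training data: the same series
# is windowed with several time lags (embedding window sizes), and
# each lagged dataset provides one objective for training.

def embed(series, lag):
    # State-space reconstruction: each input is `lag` consecutive
    # values; the target is the value that follows the window.
    inputs = [series[i:i + lag] for i in range(len(series) - lag)]
    targets = [series[i + lag] for i in range(len(series) - lag)]
    return inputs, targets

def multi_objective_datasets(series, lags=(2, 3, 5)):
    # One (inputs, targets) pair per objective, keyed by its time lag.
    return {lag: embed(series, lag) for lag in lags}
```

A multi-objective trainer would then score a candidate network once per lagged dataset, so that no single embedding dominates the evolved solution.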


International Symposium on Neural Networks | 2013

Adaptive problem decomposition in cooperative coevolution of recurrent networks for time series prediction

Rohitash Chandra

Cooperative coevolution employs different problem decomposition methods to decompose the neural network problem into subcomponents. The efficiency of a problem decomposition method depends on the neural network architecture and the nature of the training problem. The adaptation of problem decomposition methods has recently been proposed, showing that different problem decomposition methods are needed at different phases of the evolutionary process. This paper employs an adaptive problem decomposition framework in cooperative coevolution for training recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz and Sunspot chaotic time series are used. The results show improved performance in most cases; however, there are some limitations when compared to standard cooperative coevolution and other methods from the literature.
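One common way to realise the phase-dependent adaptation described above is to change the decomposition when progress stalls. The sketch below illustrates that idea under stated assumptions: a toy quadratic objective instead of a recurrent-network loss, and a simple merge-on-stagnation rule that is hypothetical rather than the paper's framework.

```python
import random

# Sketch of adapting the problem decomposition during evolution:
# start with many small subcomponents (synapse level) and merge them
# into fewer, larger ones when improvement stagnates.

def adaptive_decomposition(dim=8, generations=60, stall_limit=5, seed=3):
    rng = random.Random(seed)
    fitness = lambda w: sum(x * x for x in w)  # stand-in objective
    weights = [rng.uniform(-1, 1) for _ in range(dim)]
    n_sub = dim                   # synapse level: one variable per subcomponent
    best, stall = fitness(weights), 0
    for _ in range(generations):
        size = dim // n_sub
        for s in range(n_sub):    # mutate one subcomponent at a time
            trial = weights[:]
            for j in range(s * size, (s + 1) * size):
                trial[j] += rng.gauss(0, 0.1)
            if fitness(trial) < fitness(weights):
                weights = trial
        f = fitness(weights)
        stall = stall + 1 if f >= best - 1e-6 else 0
        best = min(best, f)
        if stall >= stall_limit and n_sub > 1:
            n_sub //= 2           # merge subcomponents (coarser decomposition)
            stall = 0
    return best
```

Fine-grained decomposition explores aggressively early on; merging later lets interacting variables be optimised jointly, which is the trade-off motivating adaptation across evolutionary phases.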

Collaboration


Dive into Rohitash Chandra's collaborations.

Top Co-Authors

Mengjie Zhang
Victoria University of Wellington

Marcus Frean
Victoria University of Wellington

Kavitesh K. Bali
University of the South Pacific

Ravneil Nand
University of the South Pacific

Gary Wong
University of the South Pacific

Nicholas Rollings
University of the South Pacific

Ratneel Deo
University of the South Pacific