Nazri Mohd Nawi
Universiti Tun Hussein Onn Malaysia
Publication
Featured research published by Nazri Mohd Nawi.
Neurocomputing | 2009
Rozaida Ghazali; Abir Jaafar Hussain; Nazri Mohd Nawi; Baharuddin Mohamad
This research focuses on using various higher order neural networks (HONNs) to predict upcoming trends in financial signals. Two HONN models were used: the Pi-Sigma neural network and the ridge polynomial neural network. Furthermore, a novel HONN architecture combining the properties of higher order and recurrent neural networks was constructed, called the dynamic ridge polynomial neural network (DRPNN). Extensive simulations for the prediction of financial signals one and five steps ahead were performed. The simulation results indicate that the DRPNN in most cases demonstrated advantages in capturing chaotic movement in the signals, with an improvement in profit return and more rapid convergence than the other network models.
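For readers unfamiliar with the building block, a Pi-Sigma unit computes a product of linear summing units passed through a nonlinearity, which yields higher-order input terms without an exponential growth in the number of weights. A minimal sketch (layer sizes and the sigmoid choice are illustrative assumptions, not values from the paper):

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Forward pass of a k-th order Pi-Sigma unit.

    x : (n,) input vector
    W : (k, n) weights of the k linear summing units
    b : (k,) biases of the summing units
    Output is sigmoid(prod_j (W[j] @ x + b[j])): a product of sums, giving
    higher-order input interactions with only k*(n+1) trainable weights.
    """
    sums = W @ x + b                                  # k first-order summing units
    return 1.0 / (1.0 + np.exp(-np.prod(sums)))       # sigmoid of their product

# Toy usage: a 3rd-order Pi-Sigma unit on a 4-dimensional input.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
print(pi_sigma_forward(x, W, np.zeros(3)))
```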
International Conference on Computational Science and Its Applications | 2013
Nazri Mohd Nawi; Abdullah Khan; Mohammad Zubair Rehman
The back-propagation neural network (BPNN) algorithm is one of the most widely used and popular techniques for training feed-forward neural networks. The traditional BP algorithm has some drawbacks, such as easily getting stuck in local minima and slow convergence. Nature-inspired meta-heuristic algorithms provide derivative-free means of optimizing complex problems. This paper proposes a new meta-heuristic training method based on cuckoo search (CS), an algorithm modelled on the behaviour of cuckoo birds, to train BP networks with a fast convergence rate while avoiding the local minima problem. The performance of the proposed Cuckoo Search Back-Propagation (CSBP) algorithm is compared with an artificial bee colony trained BP algorithm and other hybrid variants, using the OR and XOR datasets. The simulation results show that the computational efficiency of the BP training process is greatly enhanced when coupled with the proposed hybrid method.
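The abstract gives no update equations; below is a minimal sketch of a standard cuckoo search loop (Lévy flights via Mantegna's algorithm) minimising a generic loss over flat weight vectors. Population size, step scale, and abandon fraction are illustrative assumptions, not values from the paper.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(shape, beta=1.5, rng=None):
    """Mantegna's algorithm for Lévy-flight step lengths."""
    rng = rng or np.random.default_rng(1)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(loss, dim, n_nests=15, pa=0.25, alpha=0.01, iters=200, seed=2):
    """Minimise `loss` over weight vectors of length `dim`."""
    rng = np.random.default_rng(seed)
    nests = rng.normal(0.0, 1.0, (n_nests, dim))   # candidate weight vectors
    fit = np.array([loss(n) for n in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        # New cuckoo eggs: Lévy flight biased toward the current best nest.
        new = nests + alpha * levy_step((n_nests, dim), rng=rng) * (nests - best)
        new_fit = np.array([loss(n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        worst = fit.argsort()[-max(1, int(pa * n_nests)):]
        nests[worst] = rng.normal(0.0, 1.0, (len(worst), dim))
        fit[worst] = [loss(n) for n in nests[worst]]
    return nests[fit.argmin()]

# In CSBP, loss would be the BP network's output error on e.g. the XOR data.
```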
FGIT-DTA/BSBT | 2010
Nazri Mohd Nawi; Rajesh Ransing; Mohd Najib Mohd Salleh; Rozaida Ghazali; Norhamreeza Abdul Hamid
The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, it converges slowly, mainly because it relies on gradient descent. Previous research demonstrated that in feed-forward networks the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain in the activation function, with the gain value changing adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, multilayer feed-forward networks are assessed, and a physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster: by an improvement ratio of nearly 2.8 on the Wisconsin breast cancer dataset and 1.76 on the diabetes problem, 65% better on the thyroid datasets, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
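Concretely, the 'gain' is the scale factor c inside the activation; a minimal sketch of how it steepens the sigmoid and scales its derivative, and hence every back-propagated error term (the per-node adaptation rule itself is not reproduced here):

```python
import numpy as np

def sigmoid(net, c=1.0):
    """Logistic activation with gain c; larger c gives a steeper slope."""
    return 1.0 / (1.0 + np.exp(-c * net))

def sigmoid_grad(net, c=1.0):
    """d/d(net) of sigmoid(net, c) = c * f * (1 - f): the gain multiplies
    the slope, and therefore every delta term in back-propagation."""
    f = sigmoid(net, c)
    return c * f * (1.0 - f)

print(sigmoid_grad(0.0, c=1.0), sigmoid_grad(0.0, c=2.0))  # 0.25 vs 0.5
```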
International Conference on Software Engineering and Computer Systems | 2012
M. Z. Rehman; Nazri Mohd Nawi
The traditional gradient descent back-propagation neural network algorithm is widely used for solving many practical applications. Despite providing successful solutions, it suffers from slow convergence and sometimes gets stuck at local minima. Several modifications have been suggested to improve the convergence rate of the gradient descent back-propagation algorithm, such as careful selection of the initial weights and biases, learning rate, momentum, network topology, activation function, and the 'gain' value in the activation function. Previous researchers demonstrated that in feed-forward networks the slope of the activation function is directly influenced by the 'gain' parameter. This research proposes an algorithm that improves the performance of the back-propagation algorithm by adaptively changing the momentum value while keeping the 'gain' parameter fixed for all nodes in the neural network. The performance of the proposed Gradient Descent Method with Adaptive Momentum (GDAM) is compared with that of the Gradient Descent Method with Adaptive Gain (GDM-AG) and Gradient Descent with Simple Momentum (GDM). The learning rate is kept fixed, and the sigmoid activation function is used throughout the experiments. The efficiency of the proposed method is demonstrated by simulations on three classification problems. The results show that GDAM outperforms the previous methods, with an accuracy ratio of 1.0 on the classification problems, and can be used as an alternative approach to training BPNNs.
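The abstract does not state the momentum adaptation rule; the sketch below uses an assumed error-driven schedule (grow the momentum while the error falls, shrink it when it rises) purely to illustrate where adaptive momentum enters the weight update.

```python
import numpy as np

def gdam_step(w, grad, velocity, alpha, err, prev_err,
              lr=0.1, alpha_min=0.1, alpha_max=0.9):
    """One weight update with adaptively changed momentum.

    The adaptation rule here is an assumption for illustration only;
    GDAM's exact schedule is defined in the paper, not reproduced here.
    """
    alpha = min(alpha * 1.05, alpha_max) if err < prev_err else max(alpha * 0.7, alpha_min)
    velocity = alpha * velocity - lr * grad     # momentum-smoothed step
    return w + velocity, velocity, alpha

# Toy usage on a single weight vector.
w, v, a = np.zeros(3), np.zeros(3), 0.5
w, v, a = gdam_step(w, np.array([0.2, -0.1, 0.4]), v, a, err=0.8, prev_err=1.0)
```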
International Conference on Information Computing and Applications | 2010
Nazri Mohd Nawi; Rozaida Ghazali; Mohd Najib Mohd Salleh
Improving the training efficiency of artificial neural network algorithms has been studied in many previous papers. This paper presents a new approach to improving the training efficiency of back propagation neural network algorithms. The proposed algorithm (GDM/AG) adaptively modifies the gradient-based search direction by introducing a gain parameter into the activation function. It is shown that this modification significantly enhances the computational efficiency of the training process. The proposed algorithm is generic and can be implemented in almost all gradient-based optimization processes. Its robustness is shown by comparing the convergence rates and effectiveness of gradient descent methods with and without the proposed modification on heart disease data.
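Read literally, the modification changes the search direction itself, not just the step length; a minimal sketch of that idea, assuming the gain simply re-scales each node's gradient components (the paper's actual derivation is more involved):

```python
import numpy as np

def gain_scaled_direction(grad, gains):
    """Sketch of the GDM/AG idea: each node's gain re-scales the gradient
    components of the weights feeding that node, which changes the
    direction of the search step, not only its length.

    grad  : (n_out, n_in) gradient of the error w.r.t. a weight matrix
    gains : (n_out,) per-node gains (their adaptation rule is in the paper)
    """
    return -(gains[:, None] * grad)

# A node with gain 2.0 gets twice the step of a node with gain 1.0.
print(gain_scaled_direction(np.ones((2, 3)), np.array([1.0, 2.0])))
```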
Ubiquitous Computing | 2011
Norhamreeza Abdul Hamid; Nazri Mohd Nawi; Rozaida Ghazali; Mohd Najib Mohd Salleh
The back propagation (BP) algorithm is a very popular learning approach for feedforward multilayer perceptron networks. However, the most serious problems associated with BP are the local minima problem and slow convergence. Over the years, many improvements and modifications of the back propagation learning algorithm have been reported. In this research, we propose a new modified back propagation learning algorithm that introduces adaptive gain together with adaptive momentum and an adaptive learning rate into the weight update process. Through computer simulations, we demonstrate that the proposed algorithm gives a better convergence rate and finds a good solution earlier than the conventional back propagation algorithm. We use two common benchmark classification problems to illustrate the improvement in convergence time.
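A plausible form of the combined weight update, with the learning rate, momentum, and per-node gains all adapted during training (the exact adaptation schedules are given in the paper, not the abstract):

```latex
\Delta w_{ij}(t) \;=\; -\,\eta(t)\,
  \frac{\partial E}{\partial w_{ij}}\bigg|_{c_j(t)}
  \;+\; \alpha(t)\,\Delta w_{ij}(t-1)
```

Here the error gradient is evaluated with node j's current gain c_j(t), while eta(t) is the adapted learning rate and alpha(t) the adapted momentum.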
ICSS | 2014
Nazri Mohd Nawi; M. Z. Rehman; Abdullah Khan
Metaheuristic algorithms such as the bat algorithm are becoming popular methods for solving many hard optimization problems. This paper investigates the use of the bat algorithm in combination with the back-propagation neural network (BPNN) algorithm to solve the local minima problem in the gradient descent trajectory and to increase the convergence rate. The performance of the proposed bat based back-propagation (Bat-BP) algorithm is compared with the artificial bee colony BPNN algorithm (ABC-BP) and the plain BPNN algorithm, with the OR and XOR datasets used for training the network. The simulation results show that the computational efficiency of the BPNN training process is greatly enhanced when combined with the bat algorithm.
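The abstract does not restate the bat-algorithm equations; the sketch below shows the standard frequency/velocity/position update (Yang's formulation), which in Bat-BP would act on flattened BPNN weight vectors. The frequency range is an illustrative assumption, and loudness/pulse-rate handling is omitted for brevity.

```python
import numpy as np

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0, rng=np.random.default_rng(3)):
    """One standard bat-algorithm move for a population of candidate solutions.

    x      : (n_bats, dim) current positions (e.g. flattened weight vectors)
    v      : (n_bats, dim) current velocities
    x_best : (dim,) best solution found so far
    f_i = f_min + (f_max - f_min) * beta with beta ~ U(0, 1);
    v_i <- v_i + (x_i - x_best) * f_i;  x_i <- x_i + v_i.
    """
    f = f_min + (f_max - f_min) * rng.random((x.shape[0], 1))  # per-bat frequency
    v = v + (x - x_best) * f        # pull each bat toward the best solution
    return x + v, v
```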
International Conference on Computational Science and Its Applications | 2013
Nazri Mohd Nawi; Abdullah Khan; Mohammad Zubair Rehman
The back propagation neural network (BPNN) algorithm is a widely used technique for training artificial neural networks, and a very popular optimization procedure for finding optimal weights during training. However, traditional back propagation optimized with the Levenberg-Marquardt training algorithm has some drawbacks, such as getting stuck in local minima and network stagnancy. This paper proposes an improved Levenberg-Marquardt back propagation (LMBP) algorithm, integrated and trained with the Cuckoo Search (CS) algorithm, to avoid the local minima problem and achieve fast convergence. The performance of the proposed Cuckoo Search Levenberg-Marquardt (CSLM) algorithm is compared with Artificial Bee Colony (ABC) and similar hybrid variants. The simulation results show that the proposed CSLM algorithm performs better than the other algorithms used in this study in terms of convergence rate and accuracy.
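For reference, the core Levenberg-Marquardt step that CSLM builds on is the damped Gauss-Newton update below; how the cuckoo search phase is interleaved with it is defined in the paper, not in this sketch.

```python
import numpy as np

def lm_update(w, J, e, mu):
    """One Levenberg-Marquardt step for a flat weight vector w.

    J  : (n_samples, n_weights) Jacobian of the residuals w.r.t. the weights
    e  : (n_samples,) residual vector
    mu : damping term; large mu behaves like gradient descent,
         small mu approaches the Gauss-Newton step
    """
    H = J.T @ J + mu * np.eye(J.shape[1])      # damped approximate Hessian
    return w - np.linalg.solve(H, J.T @ e)
```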
International Multi-Topic Conference | 2012
Habib Shah; Rozaida Ghazali; Nazri Mohd Nawi
Social-insect techniques have gained increasing attention from researchers because of their nature-inspired behaviour and their use of agents to train neural networks. Chief among them are Swarm Intelligence (SI), Ant Colony Optimization (ACO), and, more recently, the Artificial Bee Colony (ABC) algorithm, which offer straightforward ways of solving combinatorial problems and training NNs; these social-insect-based techniques are mostly used to find optimal weight values in NN learning. NNs trained by the standard and well-known Backpropagation (BP) algorithm face difficulties such as trapping in local minima and slow convergence, and may sometimes fail altogether. To overcome these weaknesses, population-based (social insect) algorithms are used to train NNs by minimizing the network output error. Here, a hybrid of the nature-inspired ant and bee agent techniques is used to train ANNs. The simulation results of the hybrid algorithm are compared with the ABC and BP training algorithms. The experimental results show that the proposed Hybrid Ant Bee Colony (HABC) algorithm improves the classification accuracy for the prediction of volcano time-series data.
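The abstract gives no update equations for HABC; the sketch below shows only the standard ABC employed-bee neighbourhood move that such hybrids build on (how the ant component is blended in cannot be inferred from the abstract alone).

```python
import numpy as np

def employed_bee_move(foods, i, rng=np.random.default_rng(4)):
    """Standard ABC neighbourhood search around food source i.

    v_ij = x_ij + phi * (x_ij - x_kj), with phi ~ U(-1, 1), a random
    partner k != i, and one randomly chosen dimension j. In NN training
    the food sources encode candidate weight vectors.
    """
    n, dim = foods.shape
    k = rng.choice([idx for idx in range(n) if idx != i])
    j = rng.integers(dim)
    phi = rng.uniform(-1, 1)
    candidate = foods[i].copy()
    candidate[j] += phi * (foods[i, j] - foods[k, j])
    return candidate
```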
Asian Conference on Intelligent Information and Database Systems | 2013
Habib Shah; Rozaida Ghazali; Nazri Mohd Nawi
This paper proposes the Global Artificial Bee Colony (GABC) algorithm, a globalised form of the standard Artificial Bee Colony (ABC) algorithm, for training neural networks (NNs). Training a NN with the standard backpropagation (BP) algorithm is computationally intensive, and one of the crucial problems with BP is that it can yield networks with suboptimal weights because of the many local optima in the solution space. To overcome this, the GABC algorithm is used in this work to train multilayer perceptrons (MLPs) for classification problems, and the performance of GABC is benchmarked against MLP training with the typical BP, ABC, and Particle Swarm Optimization (PSO) algorithms on Boolean function classification. The experimental results show that MLP-GABC performs better than the standard BP, ABC, and PSO for the classification task.
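The abstract does not specify how GABC globalises the standard move; one common form, shown below purely as an assumption, adds a term that pulls each candidate toward the global best food source.

```python
import numpy as np

def global_bee_move(foods, i, best, c=1.5, rng=np.random.default_rng(5)):
    """A common way to globalise the ABC move: add a global-best term.

    v_ij = x_ij + phi * (x_ij - x_kj) + psi * (best_j - x_ij),
    with phi ~ U(-1, 1) and psi ~ U(0, c). Whether GABC uses exactly
    this form is not stated in the abstract; treat it as illustrative.
    """
    n, dim = foods.shape
    k = rng.choice([idx for idx in range(n) if idx != i])
    j = rng.integers(dim)
    phi, psi = rng.uniform(-1, 1), rng.uniform(0, c)
    candidate = foods[i].copy()
    candidate[j] += phi * (foods[i, j] - foods[k, j]) + psi * (best[j] - foods[i, j])
    return candidate
```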