Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Norhamreeza Abdul Hamid is active.

Publication


Featured research published by Norhamreeza Abdul Hamid.


FGIT-DTA/BSBT | 2010

An Improved Back Propagation Neural Network Algorithm on Classification Problems

Nazri Mohd Nawi; Rajesh Ransing; Mohd Najib Mohd Salleh; Rozaida Ghazali; Norhamreeza Abdul Hamid

The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it relies on the gradient descent method. Previous research demonstrated that in feed-forward networks the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm that improves the performance of back propagation by introducing an adaptive gain for the activation function, where the gain value changes adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multilayer feed-forward neural networks are assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set, with an improvement ratio of nearly 2.8, by a factor of 1.76 on the diabetes problem, 65% better on the thyroid data set, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back propagation algorithm.
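
As a rough illustration of the gain idea, the sketch below shows how a gain parameter steepens the slope of a logistic activation; the function form is standard, but the paper's exact update rules are not reproduced here.

    import numpy as np

    def sigmoid(x, gain=1.0):
        # Logistic activation with gain c: f(x) = 1 / (1 + exp(-c * x)).
        # A larger gain steepens the slope, which acts much like
        # scaling the learning rate and the incoming weights.
        return 1.0 / (1.0 + np.exp(-gain * x))

    # The slope at the origin grows linearly with the gain:
    # f'(x) = c * f(x) * (1 - f(x)), so f'(0) = c / 4.
    for c in (0.5, 1.0, 2.0):
        h = 1e-6
        slope = (sigmoid(h, c) - sigmoid(-h, c)) / (2 * h)
        print(f"gain={c}: slope at 0 ~ {slope:.3f}")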


Ubiquitous Computing | 2011

Accelerating Learning Performance of Back Propagation Algorithm by Using Adaptive Gain Together with Adaptive Momentum and Adaptive Learning Rate on Classification Problems

Norhamreeza Abdul Hamid; Nazri Mohd Nawi; Rozaida Ghazali; Mohd Najib Mohd Salleh

The back propagation (BP) algorithm is a very popular learning approach for feed-forward multilayer perceptron networks. However, the most serious problems associated with BP are the local minima problem and slow convergence. Over the years, many improvements and modifications of the back propagation learning algorithm have been reported. In this research, we propose a new modified back propagation learning algorithm that introduces adaptive gain together with adaptive momentum and an adaptive learning rate into the weight update process. Through computer simulations, we demonstrate that the proposed algorithm achieves a better convergence rate and finds a good solution earlier than the conventional back propagation algorithm. We use two common benchmark classification problems to illustrate the improvement in convergence time.
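
A minimal sketch of a weight update combining the three adaptive terms named above; the adaptation schedules themselves are not given in the abstract, so fixed illustrative values are used (hypothetical names throughout).

    import numpy as np

    def update_weights(w, grad, prev_delta, lr=0.1, momentum=0.9, gain=1.0):
        # Gradient step scaled by the learning rate and gain, plus a
        # momentum term that re-applies part of the previous step.
        delta = -lr * gain * grad + momentum * prev_delta
        return w + delta, delta

    w, prev = np.zeros(3), np.zeros(3)
    grad = np.array([0.2, -0.1, 0.05])
    for _ in range(3):
        w, prev = update_weights(w, grad, prev)
    print(w)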


International Journal of Modern Physics: Conference Series | 2012

Solving Local Minima Problem in Back Propagation Algorithm Using Adaptive Gain, Adaptive Momentum and Adaptive Learning Rate on Classification Problems

Norhamreeza Abdul Hamid; Nazri Mohd Nawi; Rozaida Ghazali; Mohd Najib Mohd Salleh

This paper presents a new method that keeps the back propagation algorithm from getting stuck in local minima and addresses the slow convergence caused by neuron saturation in the hidden layer. In the proposed algorithm, each training pattern has its own activation functions for the neurons in the hidden layer, which are adjusted by adapting the gain parameters together with the momentum and learning rate values during the learning process. The efficiency of the proposed algorithm is compared with conventional back propagation gradient descent and the current back propagation gradient descent with adaptive gain by simulation on three benchmark problems, namely iris, glass, and thyroid.
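
For the gain adaptation itself, one plausible reading is a gradient descent step on the gain parameter; the sketch below derives dE/dc for a sigmoid with gain c (variable names are illustrative, not the paper's).

    import numpy as np

    def adapt_gain(c, net, out, err, lr_gain=0.01):
        # For f(net) = 1 / (1 + exp(-c * net)) and E = 0.5 * err**2
        # with err = target - out: dE/dc = -err * out * (1 - out) * net.
        dE_dc = -err * out * (1.0 - out) * net
        return c - lr_gain * dE_dc

    c, net, target = 1.0, 0.8, 1.0
    out = 1.0 / (1.0 + np.exp(-c * net))
    c = adapt_gain(c, net, out, target - out)
    print(c)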


INNS-CIIS | 2015

Second Order Back Propagation Neural Network (SOBPNN) Algorithm for Medical Data Classification

Nazri Mohd Nawi; Norhamreeza Abdul Hamid; Nursyafika Harsad; Azizul Azhar Ramli

Gradient-based methods are among the most widely used error minimization methods for training back propagation neural networks (BPNN). Some second order learning methods work with a quadratic approximation of the error function derived from the Hessian matrix, and achieve improved convergence rates in many cases. This paper introduces an improved second order back propagation algorithm that calculates the Hessian matrix efficiently by adaptively modifying the search direction. It shows that a simple modification to the initial search direction, i.e. the gradient of the error with respect to the weights, can substantially improve training efficiency. The efficiency of the proposed SOBPNN is verified by simulations on five medical data classification problems. The results show that SOBPNN significantly improves the learning performance of BPNN.
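
As a hedged sketch of what a second order step looks like, the code below uses a Gauss-Newton approximation of the Hessian (H ~ J^T J) on a toy least squares problem; the paper's own Hessian calculation and search direction modification are not reproduced.

    import numpy as np

    def second_order_step(w, jacobian, residual, damping=1e-3):
        # Newton-like step: solve (J^T J + damping * I) dw = J^T r.
        H = jacobian.T @ jacobian + damping * np.eye(len(w))
        g = jacobian.T @ residual
        return w - np.linalg.solve(H, g)

    # Toy linear model: residual r = X @ w - y, so the Jacobian is X.
    X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    y = np.array([1.0, 2.0, 3.0])
    w = np.zeros(2)
    w = second_order_step(w, X, X @ w - y)
    print(w)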


DaEng | 2014

Carving Linearly JPEG Images Using Unique Hex Patterns (UHP)

Nurul Azma Abdullah; Rosziati Ibrahim; Kamaruddin Malik Mohamad; Norhamreeza Abdul Hamid

Many studies have addressed the problem of fragmented JPEG files. However, many fragmentation scenarios have yet to be solved. This paper discusses the use of pattern matching to identify single linearly fragmented JPEG images. The main contribution of this paper is the introduction of Unique Hex Patterns (UHP) to carve single linearly fragmented JPEG images.
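
The paper's Unique Hex Patterns are its own contribution; the sketch below only shows generic marker-based carving using the standard JPEG start-of-image (FF D8 FF) and end-of-image (FF D9) byte sequences.

    # Generic marker-based JPEG carving sketch; UHP would match more
    # specific byte patterns than the bare SOI/EOI markers used here.
    SOI = b"\xff\xd8\xff"
    EOI = b"\xff\xd9"

    def carve_jpegs(data: bytes):
        images, pos = [], 0
        while True:
            start = data.find(SOI, pos)
            if start < 0:
                break
            end = data.find(EOI, start)
            if end < 0:
                break
            images.append(data[start:end + len(EOI)])
            pos = end + len(EOI)
        return images

    # Usage: carve_jpegs(open("disk.img", "rb").read())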


International Conference on Software Engineering and Computer Systems | 2011

Learning Efficiency Improvement of Back Propagation Algorithm by Adaptively Changing Gain Parameter together with Momentum and Learning Rate

Norhamreeza Abdul Hamid; Nazri Mohd Nawi; Rozaida Ghazali; Mohd Najib Mohd Salleh

In some practical neural network applications, a fast response to external events within an extremely short time is highly demanded. However, back propagation (BP) based on the gradient descent optimization method clearly cannot satisfy this in several applications, due to the serious problems associated with BP: slow learning convergence and confinement to shallow local minima. Over the years, many improvements and modifications of the back propagation learning algorithm have been reported. In this research, we modify the existing back propagation learning algorithm with adaptive gain by adaptively changing the momentum coefficient and learning rate. The simulation results indicate that the proposed algorithm speeds up convergence and slides the network through shallow local minima compared to the conventional BP algorithm. We use three common benchmark classification problems to illustrate the improvement.


Information Integration and Web-based Applications & Services | 2011

Temperature forecasting with a dynamic higher-order neural network model

Noor Aida Husaini; Rozaida Ghazali; Lokman Hakim Ismail; Norhamreeza Abdul Hamid; Mustafa Mat Deris; Nazri Mohd Nawi

This paper presents the application of a combined approach of Higher Order Neural Networks and Recurrent Neural Networks, the so-called Jordan Pi-Sigma Neural Network (JPSN), to comprehensive temperature forecasting. In the present study, one-step-ahead forecasts are made for daily temperature measurements using five years of historical temperature data. We also examine the effects of the network parameters, viz. the learning factors, the higher order terms, and the number of neurons in the input layer, on selecting the best network architecture, using several performance measures. The comparison results show that the JPSN model provides an excellent fit and reasonable forecasts, and can therefore be used as a temperature forecasting tool.
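
A hedged sketch of a Jordan Pi-Sigma forward pass: summing units feed a single product unit, and the previous output is fed back as a Jordan-style context input (parameter shapes and names are assumptions, not the paper's).

    import numpy as np

    def jpsn_forward(x, context, W, b):
        z = np.append(x, context)          # inputs plus context unit
        sums = W @ z + b                   # sigma layer: linear sums
        net = np.prod(sums)                # pi unit: product of the sums
        return 1.0 / (1.0 + np.exp(-net))  # sigmoid output

    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 4)) * 0.1      # 2 summing units, 3 inputs + context
    b = np.zeros(2)
    context = 0.0
    for x in ([0.1, 0.2, 0.3], [0.2, 0.3, 0.4]):
        context = jpsn_forward(np.array(x), context, W, b)
    print(context)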


Soft Computing | 2018

RMIL/AG: A New Class of Nonlinear Conjugate Gradient for Training Back Propagation Algorithm

Sri Mazura Muhammad Basri; Nazri Mohd Nawi; Mustafa Mamat; Norhamreeza Abdul Hamid

The conventional back propagation (BP) algorithm is generally known for certain disadvantages, such as slow training, a tendency to get trapped in local minima, and sensitivity to the initial weights and biases. This paper introduces a new class of efficient second order conjugate gradient (CG) methods for training BP, called Rivaie-Mustafa-Ismail-Leong (RMIL)/AG. RMIL/AG uses an adaptive gain parameter in the activation function to modify the gradient-based search direction. The efficiency of the proposed method is verified by simulation on four classification problems. The results show that the computational efficiency of the proposed method is better than that of the conventional BP algorithm.
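
The RMIL coefficient is usually stated as beta_k = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2; the sketch below builds a search direction from it, without the /AG gain modification described in the abstract.

    import numpy as np

    def rmil_direction(g_new, g_old, d_old):
        # RMIL conjugate gradient coefficient with a common
        # non-negativity safeguard (restart when beta < 0).
        beta = g_new @ (g_new - g_old) / (d_old @ d_old)
        return -g_new + max(beta, 0.0) * d_old

    g_old = np.array([1.0, -2.0])
    g_new = np.array([0.5, -1.0])
    d_old = -g_old                 # first direction is steepest descent
    print(rmil_direction(g_new, g_old, d_old))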


Soft Computing | 2016

Optimizing Weights in Elman Recurrent Neural Networks with Wolf Search Algorithm

Nazri Mohd Nawi; M. Z. Rehman; Norhamreeza Abdul Hamid; Abdullah Khan; Rashid Naseem; Jamal Uddin

This paper presents a Metahybrid algorithm that combines Wolf Search (WS) with the Elman Recurrent Neural Network (ERNN). ERNN is one of the most efficient recurrent neural network models; however, since it uses the gradient descent technique during training, it is not free of the local minima and slow convergence problems. This paper uses a new metaheuristic search algorithm, called Wolf Search (WS) and based on wolves' predatory behavior, to train the weights in ERNN in order to achieve faster convergence and avoid local minima. The performance of the proposed Metahybrid Wolf Search Elman Recurrent Neural Network (WRNN) is compared with the Bat with back propagation (Bat-BP) algorithm and other hybrid variants on benchmark classification datasets. The simulation results show that the proposed Metahybrid WRNN algorithm performs better in terms of CPU time, accuracy, and MSE than the other algorithms.
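
A simplified, hedged sketch of Wolf Search minimizing a loss over a weight vector (a toy quadratic stands in for the ERNN error); the visual radius, step size, and escape probability are illustrative assumptions.

    import numpy as np

    def wolf_search(loss, dim, n_wolves=5, iters=100, radius=1.0,
                    step=0.3, p_escape=0.25, seed=0):
        rng = np.random.default_rng(seed)
        wolves = rng.uniform(-2, 2, size=(n_wolves, dim))
        for _ in range(iters):
            for i in range(n_wolves):
                # Move toward the best better-scoring wolf in visual range.
                dists = np.linalg.norm(wolves - wolves[i], axis=1)
                visible = [j for j in range(n_wolves)
                           if j != i and dists[j] < radius
                           and loss(wolves[j]) < loss(wolves[i])]
                if visible:
                    best = min(visible, key=lambda j: loss(wolves[j]))
                    wolves[i] += step * (wolves[best] - wolves[i])
                else:
                    wolves[i] += step * rng.normal(size=dim)   # prey locally
                if rng.random() < p_escape:                    # escape jump
                    wolves[i] += rng.uniform(-radius, radius, size=dim)
        return min(wolves, key=loss)

    print(wolf_search(lambda w: np.sum((w - 1.0) ** 2), dim=3))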


International Journal on Advanced Science, Engineering and Information Technology | 2011

The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems

Norhamreeza Abdul Hamid; Nazri Mohd Nawi; Rozaida Ghazali

Collaboration


Dive into Norhamreeza Abdul Hamid's collaborations.

Top Co-Authors

Nazri Mohd Nawi, Universiti Tun Hussein Onn Malaysia
Mohd Najib Mohd Salleh, Universiti Tun Hussein Onn Malaysia
M. Z. Rehman, Universiti Tun Hussein Onn Malaysia
Abdullah Khan, Universiti Tun Hussein Onn Malaysia
Ameer Saleh Hussein, Universiti Tun Hussein Onn Malaysia
Azizul Azhar Ramli, Universiti Tun Hussein Onn Malaysia
Faridah Hamzah, Universiti Tun Hussein Onn Malaysia