Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rozaida Ghazali is active.

Publication


Featured research published by Rozaida Ghazali.


Neurocomputing | 2009

Non-stationary and stationary prediction of financial time series using dynamic ridge polynomial neural network

Rozaida Ghazali; Abir Jaafar Hussain; Nazri Mohd Nawi; Baharuddin Mohamad

This research focuses on using various higher order neural networks (HONNs) to predict the upcoming trends of financial signals. Two HONN models were used: the Pi-Sigma neural network and the ridge polynomial neural network. Furthermore, a novel HONN architecture that combines the properties of both higher order and recurrent neural networks was constructed, called the dynamic ridge polynomial neural network (DRPNN). Extensive simulations for one- and five-step-ahead prediction of financial signals were performed. Simulation results indicate that, in most cases, the DRPNN demonstrated advantages in capturing chaotic movement in the signals, with an improvement in profit return and faster convergence over the other network models.
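
As a rough illustration of the higher order building block referred to above, the sketch below implements a k-th order Pi-Sigma unit in NumPy. The weight shapes and the sigmoid output stage are assumptions for illustration, not the authors' exact formulation.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def pi_sigma_unit(x, W, b):
        """k-th order Pi-Sigma unit: k linear summing units whose outputs are
        multiplied together, then passed through a sigmoid.
        W has shape (k, n_inputs), b has shape (k,)."""
        linear_sums = W @ x + b                # the "sigma" layer: k weighted sums
        return sigmoid(np.prod(linear_sums))   # the "pi" layer: product + nonlinearity

    # toy usage: a 3rd-order unit on a 5-dimensional input
    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    W = rng.normal(scale=0.1, size=(3, 5))
    b = np.zeros(3)
    print(pi_sigma_unit(x, W, b))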


Expert Systems With Applications | 2011

Dynamic Ridge Polynomial Neural Network: Forecasting the univariate non-stationary and stationary trading signals

Rozaida Ghazali; Abir Jaafar Hussain; Panos Liatsis

This paper considers the prediction of noisy time series data, specifically financial signals. A novel Dynamic Ridge Polynomial Neural Network (DRPNN) for financial time series prediction is presented, which combines the properties of both higher order and recurrent neural networks. To address the stability and convergence problems of the proposed DRPNN, a stability and convergence condition is derived to ensure that the network possesses a unique equilibrium state. To provide a more accurate comparative evaluation in terms of profit earning, the empirical testing in this work encompasses not only the traditional NMSE criterion, which is concerned with how well the forecasts fit their targets, but also financial metrics, where the objective is to use the network's predictions to generate profit. Extensive simulations for one- and five-step-ahead prediction of stationary and non-stationary time series were performed. The forecasts made by the DRPNN show substantial profits on historical financial signals when compared to various neural networks: the Pi-Sigma Neural Network, the Functional Link Neural Network, the feedforward Ridge Polynomial Neural Network, and the Multilayer Perceptron. Simulation results indicate that, in most cases, the DRPNN demonstrated advantages in capturing chaotic movement in the financial signals, with an improvement in profit return and faster convergence over the other network models.
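
The abstract contrasts fit-based and profit-based evaluation. Below is a minimal sketch of both kinds of metric: the NMSE follows its standard definition, while the sign-based trading rule is an illustrative assumption rather than the paper's exact financial metric.

    import numpy as np

    def nmse(y_true, y_pred):
        """Normalised mean squared error: MSE divided by the variance of the target."""
        return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

    def sign_trading_return(returns_true, returns_pred):
        """Illustrative profit metric: go long when the predicted return is positive,
        short when it is negative, and accumulate the realised returns."""
        positions = np.sign(returns_pred)
        return np.sum(positions * returns_true)

    # toy usage on a short synthetic return series
    rng = np.random.default_rng(0)
    true_r = rng.normal(scale=0.01, size=250)
    pred_r = true_r + rng.normal(scale=0.01, size=250)
    print(nmse(true_r, pred_r), sign_trading_return(true_r, pred_r))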


FGIT-DTA/BSBT | 2010

An Improved Back Propagation Neural Network Algorithm on Classification Problems

Nazri Mohd Nawi; Rajesh Ransing; Mohd Najib Mohd Salleh; Rozaida Ghazali; Norhamreeza Abdul Hamid

The back propagation algorithm is one of the most popular algorithms for training feedforward neural networks. However, its convergence is slow, mainly because it relies on gradient descent. Previous research demonstrated that in feedforward networks the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain for the activation function, where the gain value changes adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multilayer feedforward neural networks are assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster, with an improvement ratio of nearly 2.8 on the Wisconsin breast cancer data, 1.76 on the diabetes problem, 65% better on the thyroid data set, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
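
A minimal sketch of the role of the gain parameter: with a logistic activation, the slope at a node scales linearly with its gain, so adapting the gain per node changes the size of the back-propagated error signal much like a node-specific learning rate. The code below only illustrates this relationship; it is not the authors' update rule.

    import numpy as np

    def logistic(net, gain=1.0):
        """Logistic activation with a gain parameter c: f(net) = 1 / (1 + exp(-c * net))."""
        return 1.0 / (1.0 + np.exp(-gain * net))

    def logistic_derivative(net, gain=1.0):
        """df/dnet = c * f * (1 - f): the slope scales linearly with the gain c."""
        f = logistic(net, gain)
        return gain * f * (1.0 - f)

    # a larger gain steepens the activation and therefore enlarges the error
    # signal propagated through that node during back propagation
    for c in (0.5, 1.0, 2.0):
        print(c, logistic_derivative(0.0, c))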


International Joint Conference on Neural Networks | 2006

Application of Ridge Polynomial Neural Networks to Financial Time Series Prediction

Rozaida Ghazali; Abir Jaafar Hussain; Wael El-Deredy

This paper presents a novel application of the ridge polynomial neural network to forecasting the future trends of financial time series data. The prediction capability of the ridge polynomial neural network was tested on four data sets: the US/EU exchange rate, the UK/EU exchange rate, the JP/EU exchange rate, and the IBM common stock closing price. The performance of the network is benchmarked against that of the multilayer perceptron, the functional link neural network, and the pi-sigma neural network. The predictions demonstrated that the ridge polynomial neural network brings in a higher return than the other models. The network is able to find an appropriate input-output mapping for various chaotic financial time series, with good learning speed and generalization capability.


Neural Computing and Applications | 2008

The application of ridge polynomial neural network to multi-step ahead financial time series prediction

Rozaida Ghazali; Abir Jaafar Hussain; Panos Liatsis; Hissam Tawfik

Motivated by the slow learning properties of multilayer perceptrons (MLPs), which utilize computationally intensive training algorithms such as the backpropagation learning algorithm and can get trapped in local minima, this work deals with ridge polynomial neural networks (RPNNs), which maintain the fast learning properties and powerful mapping capabilities of single-layer higher order neural networks. The RPNN is constructed from a number of Pi-Sigma units of increasing order, which are used to capture the underlying patterns in financial time series signals and to predict future trends in the financial market. In particular, this paper systematically investigates a method of pre-processing the financial signals in order to reduce the influence of their trends. The performance of the networks is benchmarked against that of MLPs, functional link neural networks (FLNN), and Pi-Sigma neural networks (PSNN). Simulation results clearly demonstrate that RPNNs generate higher profit returns with fast convergence on various noisy financial signals.
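
The construction described above, Pi-Sigma units of increasing order whose outputs are summed before a single output nonlinearity, can be sketched as follows. The shapes, random weights, and final sigmoid are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def rpnn_forward(x, units):
        """Ridge polynomial network: sum the outputs of Pi-Sigma units of order
        1..N, then apply one output nonlinearity.  `units` is a list of (W, b)
        pairs, where the k-th pair has W of shape (k, n_inputs)."""
        total = 0.0
        for W, b in units:
            total += np.prod(W @ x + b)   # k-th order Pi-Sigma unit: product of k sums
        return sigmoid(total)

    # toy network of order 3 on a 4-dimensional input
    rng = np.random.default_rng(1)
    x = rng.normal(size=4)
    units = [(rng.normal(scale=0.1, size=(k, 4)), np.zeros(k)) for k in range(1, 4)]
    print(rpnn_forward(x, units))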


2011 Developments in E-systems Engineering | 2011

Prediction of Earthquake Magnitude by an Improved ABC-MLP

Habib Shah; Rozaida Ghazali

Different algorithms have been used for training neural networks (NNs), such as back propagation (BP), gradient descent (GD), particle swarm optimization (PSO), and ant colony optimization (ACO). Most of these algorithms focus on the NN weight values, activation functions, and network structure to provide optimal outputs. Ordinary BP is a well-known technique that updates the weight values to minimize error, but it still has drawbacks such as trapping in local minima and slow convergence. Therefore, in this work a population-based algorithm, the Improved Artificial Bee Colony (IABC), is proposed to improve the training process of the Multilayer Perceptron (MLP) and overcome these issues through optimal weight values. The population-based approach, modelled on the behaviour of social insects, makes MLP training attractive. The work investigates an improved weight initialization technique using IABC-MLP. The performance of IABC-MLP is benchmarked against an MLP trained with standard BP. The experimental results show that IABC-MLP performs better than BP-MLP on earthquake time series data.
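
A minimal sketch of how an artificial-bee-colony style search can tune network weights: candidate weight vectors act as food sources, and a neighbour solution is produced by perturbing one candidate against a randomly chosen partner, kept only if its fitness improves. The fitness function, colony size, and the omission of the paper's specific improvements are all simplifying assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    def fitness(weights, loss_fn):
        # ABC conventionally maximises fitness; convert a loss into a fitness score
        return 1.0 / (1.0 + loss_fn(weights))

    def abc_step(population, loss_fn):
        """One employed-bee pass: each food source (weight vector) is perturbed
        in one dimension against a random partner and replaced greedily."""
        n, dim = population.shape
        for i in range(n):
            partner = rng.integers(n)
            while partner == i:
                partner = rng.integers(n)
            j = rng.integers(dim)                       # perturb a single dimension
            phi = rng.uniform(-1.0, 1.0)
            candidate = population[i].copy()
            candidate[j] += phi * (population[i, j] - population[partner, j])
            if fitness(candidate, loss_fn) > fitness(population[i], loss_fn):
                population[i] = candidate
        return population

    # toy usage: "train" a 10-weight model on a quadratic surrogate loss
    loss = lambda w: np.sum(w ** 2)
    pop = rng.normal(size=(20, 10))
    for _ in range(100):
        pop = abc_step(pop, loss)
    print(min(loss(w) for w in pop))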


Ubiquitous Computing | 2011

Accelerating Learning Performance of Back Propagation Algorithm by Using Adaptive Gain Together with Adaptive Momentum and Adaptive Learning Rate on Classification Problems

Norhamreeza Abdul Hamid; Nazri Mohd Nawi; Rozaida Ghazali; Mohd Najib Mohd Salleh

The back propagation (BP) algorithm is a very popular learning approach for feedforward multilayer perceptron networks. However, the most serious problems associated with BP are the local minima problem and slow convergence. Over the years, many improvements and modifications of the back propagation learning algorithm have been reported. In this research, we propose a modified back propagation learning algorithm that introduces an adaptive gain together with adaptive momentum and an adaptive learning rate into the weight update process. Through computer simulations, we demonstrate that the proposed algorithm gives a better convergence rate and finds a good solution earlier than the conventional back propagation algorithm. We use two common benchmark classification problems to illustrate the improvement in convergence time.
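
As a rough sketch of the kind of weight update involved: a momentum term reuses part of the previous weight change, and the step size can be adapted from the error trend. The specific adaptation rule below (grow the step while the error falls, shrink it otherwise) is an illustrative assumption, not the paper's rule.

    import numpy as np

    def update_weights(w, grad, prev_delta, lr, momentum):
        """Gradient step with a momentum term: part of the previous weight change
        is reused, which smooths the trajectory and speeds convergence."""
        delta = -lr * grad + momentum * prev_delta
        return w + delta, delta

    def adapt_hyperparams(lr, momentum, error, prev_error, grow=1.05, shrink=0.7):
        """Illustrative adaptation: enlarge the step while the error keeps falling,
        cut it back (and damp the momentum) when the error increases."""
        if error < prev_error:
            return lr * grow, momentum
        return lr * shrink, momentum * shrink

    # toy usage: one update on a 3-weight vector
    w, delta = np.zeros(3), np.zeros(3)
    lr, mom = 0.1, 0.9
    grad = np.array([0.5, -0.2, 0.1])
    w, delta = update_weights(w, grad, delta, lr, mom)
    lr, mom = adapt_hyperparams(lr, mom, error=0.8, prev_error=1.0)
    print(w, lr, mom)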


International Conference on Adaptive and Natural Computing Algorithms | 2007

Dynamic Ridge Polynomial Neural Networks in Exchange Rates Time Series Forecasting

Rozaida Ghazali; Abir Jaafar Hussain; Dhiya Al-Jumeily; Madjid Merabti

This paper proposes a novel dynamic system which utilizes Ridge Polynomial Neural Networks for the prediction of exchange rate time series. We performed a set of simulations covering three univariate exchange rate signals: the JP/EU, JP/UK, and JP/US time series. The forecasting performance of the novel Dynamic Ridge Polynomial Neural Network is compared with that of the Multilayer Perceptron and the feedforward Ridge Polynomial Neural Network. The simulation results indicate that the proposed network demonstrates advantages in capturing noisy movement in the exchange rate signals, with a higher profit return.


Applied Mechanics and Materials | 2012

Using Artificial Bee Colony to Improve Functional Link Neural Network Training

Yana Mazwin Mohmad Hassim; Rozaida Ghazali

Artificial neural networks have emerged as an important tool for classification and have been widely used to classify non-linearly separable patterns. The most popular artificial neural network model is the Multilayer Perceptron (MLP), which is able to perform classification tasks with significant success. However, the complexity of the MLP structure, together with problems such as local minima trapping, overfitting, and weight interference, has made neural network training difficult. Thus, an easy way to avoid these problems is to remove the hidden layers. This paper presents the ability of the Functional Link Neural Network (FLNN) to overcome the structural complexity of the MLP through its single-layer architecture, and proposes Artificial Bee Colony (ABC) optimization for training the FLNN. The proposed technique is expected to provide a better learning scheme for a classifier and thereby more accurate classification results.
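
A minimal sketch of the single-layer idea: the input is functionally expanded (a trigonometric expansion is one common FLNN choice and an assumption here), and a single weight vector maps the enhanced pattern straight to the output, so there is no hidden layer to train.

    import numpy as np

    def functional_expansion(x):
        """Trigonometric functional expansion of each input feature
        (a common FLNN choice; the exact basis used in the paper may differ)."""
        return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

    def flnn_forward(x, w, b):
        """Single-layer classifier over the expanded pattern: sigmoid(w . phi(x) + b)."""
        phi = functional_expansion(x)
        return 1.0 / (1.0 + np.exp(-(w @ phi + b)))

    # toy usage on a 3-dimensional input (3 features x 3 basis functions = 9 weights)
    rng = np.random.default_rng(7)
    x = rng.normal(size=3)
    w = rng.normal(scale=0.1, size=9)
    print(flnn_forward(x, w, 0.0))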


International Conference on Innovations in Information Technology | 2006

Dynamic Ridge Polynomial Neural Network for Financial Time Series Prediction

Abir Jaafar Hussain; Rozaida Ghazali; Dhiya Al-Jumeily; Madjid Merabti

This paper presents a novel type of higher-order polynomial recurrent neural network called the dynamic ridge polynomial neural network. The aim of the proposed network is to improve the performance of the ridge polynomial neural network by accommodating a recurrent link structure. The network is tested on the prediction of non-linear and non-stationary financial signals. Two exchange rate time series, between the British pound and the euro and between the US dollar and the euro, are used in the simulations. Simulation results showed that dynamic ridge polynomial neural networks generate higher profit returns with fast convergence when used to predict noisy financial time series.
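
A minimal sketch of the recurrent link described above: the previous network output is appended to the current external input before the ridge polynomial units are evaluated, so the prediction at time t depends on the prediction at t-1. The shapes, random weights, and single feedback value are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def drpnn_step(x_t, y_prev, units):
        """One time step of a dynamic ridge polynomial network: the previous output
        y_prev is fed back as an extra input dimension, then the Pi-Sigma units of
        increasing order are summed as in the feedforward RPNN."""
        z = np.append(x_t, y_prev)                     # external input + feedback link
        total = sum(np.prod(W @ z + b) for W, b in units)
        return sigmoid(total)

    # toy usage: run the network over a short sequence
    rng = np.random.default_rng(3)
    dim = 4 + 1                                        # 4 external inputs + 1 feedback
    units = [(rng.normal(scale=0.1, size=(k, dim)), np.zeros(k)) for k in range(1, 3)]
    y = 0.0
    for t in range(5):
        y = drpnn_step(rng.normal(size=4), y, units)
    print(y)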

Collaboration


Dive into Rozaida Ghazali's collaborations.

Top Co-Authors

Nazri Mohd Nawi, Universiti Tun Hussein Onn Malaysia
Mustafa Mat Deris, Universiti Tun Hussein Onn Malaysia
Lokman Hakim Ismail, Universiti Tun Hussein Onn Malaysia
Yana Mazwin Mohmad Hassim, Universiti Tun Hussein Onn Malaysia
Habib Shah, Universiti Tun Hussein Onn Malaysia
Abir Jaafar Hussain, Liverpool John Moores University
Noor Aida Husaini, Universiti Tun Hussein Onn Malaysia
Ayodele Lasisi, Universiti Tun Hussein Onn Malaysia
Waddah Waheeb, Universiti Tun Hussein Onn Malaysia