Ali Kattan
Universiti Sains Malaysia
Publications
Featured research published by Ali Kattan.
international conference on intelligent systems, modelling and simulation | 2010
Ali Kattan; Rosni Abdullah; Rosalina Abdul Salam
This paper presents a novel technique for the supervised training of feed-forward artificial neural networks (ANN) using the Harmony Search (HS) algorithm. HS is a stochastic meta-heuristic inspired by the improvisation process of musicians. Unlike Backpropagation, HS is not trajectory-driven. By modifying an existing improved version of HS and adopting a suitable ANN data representation, we propose a training technique in which two of the HS probabilistic parameters are determined dynamically from the best-to-worst (BtW) harmony ratio in the current harmony memory instead of from the improvisation count. This is better suited to ANN training, since parameter values and termination then depend on the quality of the attained solution. We have empirically tested and verified the technique by training an ANN on a benchmarking problem. In terms of overall training time and recognition, the results show that our method is superior to both the original improved HS and standard Backpropagation.
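The BtW-driven improvisation loop described above can be sketched as a minimal Harmony Search on a generic minimization problem. The specific PAR mapping (`par = 0.3 + 0.6 * btw`), the bandwidth, and all parameter defaults below are illustrative assumptions, not the paper's exact formulas.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, iters=2000):
    """Minimal HS sketch: the pitch adjustment rate (PAR) is derived each
    iteration from the best-to-worst (BtW) fitness ratio of the current
    harmony memory, not from the improvisation count."""
    lo, hi = bounds
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(h) for h in hm]
    for _ in range(iters):
        best, worst = min(fit), max(fit)
        btw = best / worst if worst > 0 else 1.0   # BtW ratio in [0, 1]
        par = 0.3 + 0.6 * btw                      # illustrative mapping
        new = []
        for d in range(dim):
            if random.random() < hmcr:             # memory consideration
                x = hm[random.randrange(hms)][d]
                if random.random() < par:          # pitch adjustment
                    x += random.uniform(-0.05, 0.05) * (hi - lo)
            else:                                  # random selection
                x = random.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        fn = f(new)
        w = fit.index(max(fit))
        if fn < fit[w]:                            # replace worst harmony
            hm[w], fit[w] = new, fn
    b = fit.index(min(fit))
    return hm[b], fit[b]
```

For ANN training, `f` would be the network error for a candidate weight vector; here any continuous objective works.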
Applied Mathematics and Computation | 2013
Ali Kattan; Rosni Abdullah
In solving global optimization problems for continuous functions, researchers often rely on metaheuristic algorithms to overcome the computational drawbacks of the existing numerical methods. A metaheuristic is an evolutionary algorithm that does not require the functions in the problem to satisfy specific conditions or mathematical properties. A recently proposed metaheuristic is the harmony search algorithm, which was inspired by the music improvisation process and has been applied successfully in the solution of various global optimization problems. However, the overall performance of this algorithm and its convergence properties are quite sensitive to the initial parameter settings. Several improvements of the harmony search algorithm have been proposed to incorporate self-adaptive features. In these modified versions of the algorithm, the parameters are automatically tuned during the optimization process to achieve superior results. This paper proposes a new dynamic and self-adaptive harmony search algorithm in which two of the optimization parameters, the pitch adjustment rate and the bandwidth, are auto-tuned. These two parameters have substantial influence on the quality of the final solution. The proposed algorithm utilizes two new quality measures to dynamically drive the optimization process: the current best-to-worst ratio of the harmony memory fitness function and the improvisation acceptance rate. The key difference between the proposed algorithm and most competing methods is that the values of the pitch adjustment rate and bandwidth are determined independently of the current improvisation count and therefore vary dynamically rather than monotonically. The results demonstrate the superiority of the proposed algorithm over various other recent methods based on several common benchmarking functions.
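The parameter-update idea can be illustrated with a small helper: PAR grows with the best-to-worst ratio, while the bandwidth shrinks as the improvisation acceptance rate rises. The linear mappings and parameter names below are assumptions for illustration, not the formulas from the paper.

```python
def update_parameters(btw, acc_rate,
                      par_min=0.0, par_max=1.0,
                      bw_min=1e-4, bw_max=1.0):
    """Illustrative auto-tuning of the pitch adjustment rate (PAR) and
    bandwidth (bw) from the best-to-worst ratio and the improvisation
    acceptance rate, independent of the improvisation count."""
    par = par_min + (par_max - par_min) * btw       # exploit more as memory converges
    bw = bw_max - (bw_max - bw_min) * acc_rate      # finer steps as acceptance rises
    return par, bw
```

Because both inputs can rise and fall during a run, the two parameters vary dynamically rather than monotonically, matching the key difference noted above.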
computational intelligence communication systems and networks | 2011
Ali Kattan; Rosni Abdullah
The authors previously published a parallel and distributed implementation method for the supervised training of feed-forward artificial neural networks using the Harmony Search algorithm. That implementation was intended to address the training of larger pattern-classification problems. The implementation platforms included both a homogeneous and a heterogeneous system of Master-Slave processing nodes. The heterogeneous implementation used a node benchmarking score, obtained via independent software, to determine the load-balancing ratios for the different processing nodes. This paper proposes an enhanced alternative benchmarking technique based on the actual workload execution times of each heterogeneous processing node. Using the same pattern-classification problem on the same heterogeneous platform setup as the previous technique, results show that the proposed technique attains a higher speedup than the former.
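The time-based benchmarking idea can be sketched as follows: each node's share of the workload is inversely proportional to its measured execution time on the same benchmark workload. The function names and the remainder-assignment rule are illustrative.

```python
def load_ratios(exec_times):
    """Load-balancing ratios for heterogeneous nodes, inversely
    proportional to each node's measured execution time."""
    speeds = [1.0 / t for t in exec_times]      # faster node -> larger share
    total = sum(speeds)
    return [s / total for s in speeds]

def partition(n_patterns, exec_times):
    """Assign pattern counts per node; any rounding remainder
    goes to the fastest node."""
    ratios = load_ratios(exec_times)
    counts = [int(n_patterns * r) for r in ratios]
    counts[ratios.index(max(ratios))] += n_patterns - sum(counts)
    return counts
```

A node that takes 4 s on the benchmark thus receives a quarter of the patterns given to a node that takes 1 s.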
international conference on intelligent systems, modelling and simulation | 2010
Ali Kattan; Rosni Abdullah; Rosalina Abdul Salam
Feed-Forward Artificial Neural Networks (FFANN) can be trained using a Genetic Algorithm (GA). GA offers a stochastic global optimization technique that may suffer from two major shortcomings: slow convergence and impractical data representation. The effect of these shortcomings is more pronounced for larger FFANNs with larger datasets. Using a non-binary, real-coded data representation, we offer a twofold enhancement to the generational GA used for training FFANNs. The first is a new strategy for processing the strings of the population, allowing the fittest string to survive unchanged into the next population depending on its age. The second speeds up fitness computation through the utilization of known parallel processing techniques for matrix multiplication. The implementation was carried out on a master-slave architecture of commodity computers connected via Ethernet. Using a well-known benchmarking dataset, results show that the proposed technique is superior to the standard GA in terms of both overall convergence time and processing time.
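The age-based survival strategy can be sketched as follows (minimization is assumed; the `max_age` threshold, the reset rule, and all names are illustrative assumptions, not the paper's exact scheme):

```python
def select_survivor(pop, fit, ages, max_age):
    """Age-based elitism sketch: the fittest string passes unchanged to
    the next population only while its age is below max_age; once it is
    too old, its age resets and it rejoins normal selection."""
    best = min(range(len(pop)), key=lambda i: fit[i])
    if ages[best] < max_age:
        ages[best] += 1
        return pop[best]        # survives unchanged this generation
    ages[best] = 0              # too old: no free pass this time
    return None
```

Capping the elite's age keeps the benefit of elitism while limiting the risk of one string dominating the population for too long.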
international conference on intelligent systems, modelling and simulation | 2011
Ali Kattan; Rosni Abdullah
The authors previously published a novel technique for the supervised training of feed-forward artificial neural networks using the Harmony Search algorithm. This paper proposes a parallel and distributed implementation method to speed up execution and address the training of larger pattern-classification benchmarking problems. The proposed method is a hybrid technique that combines the merits of two common parallel and distributed training methods, namely network partitioning and pattern partitioning. Experiments are carried out on a large pattern-classification benchmarking problem using two Master-Slave parallel systems: a homogeneous system using a cluster computer and a heterogeneous system using a set of commodity computers connected via a switched network. Results show that the proposed method attains a considerable speedup over the sequential implementation.
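Pattern partitioning, one of the two combined methods, can be sketched serially: each slave would evaluate the same candidate network on its own chunk of the training patterns, and the master sums the partial errors. The serial loop below stands in for the message-passing version; the names and the round-robin split are illustrative.

```python
def split_patterns(patterns, n_nodes):
    """Round-robin split of the training patterns among n_nodes slaves."""
    return [patterns[i::n_nodes] for i in range(n_nodes)]

def distributed_fitness(weights, pattern_chunks, eval_error):
    """Pattern-partitioning sketch: one partial error per slave chunk,
    summed by the master into the total fitness."""
    partial = [eval_error(weights, chunk) for chunk in pattern_chunks]
    return sum(partial)
```

Because the total error is a sum over patterns, partitioning the patterns leaves the fitness value unchanged while dividing the evaluation work.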
Journal of intelligent systems | 2018
Ahmed A. Abusnaina; Rosni Abdullah; Ali Kattan
Abstract The mussels wandering optimization (MWO) is a recent population-based metaheuristic optimization algorithm ecologically inspired by the movement behavior of mussels. MWO has been used successfully to solve several optimization problems. This paper proposes an enhanced version of MWO, known as the enhanced mussels wandering optimization (E-MWO) algorithm. E-MWO aims to overcome the shortcomings of MWO, such as its lack of explorative ability and its susceptibility to premature convergence. In addition, E-MWO incorporates a self-adaptive feature for setting the value of a sensitive algorithm parameter. It is then adapted for the supervised training of artificial neural networks, with real-world pattern-classification problems considered. The obtained results indicate that the proposed method is a competitive alternative in terms of classification accuracy and achieves superior results in training time.
distributed frameworks for multimedia applications | 2006
Ali Kattan; Rosni Abdullah; Rosalina Abdul Salam
Every Java developer must sooner or later decide whether to use an integrated development environment (IDE) instead of a set of discrete programming tools. Given the plethora of such IDEs, this paper identifies the key features to look for in selecting a suitable one, and considers what should be used and by whom. Although IDEs are meant to facilitate program development and increase productivity, migrating developers may fall into a set of common pitfalls that can hinder productivity. These pitfalls are identified and discussed in view of the available IDE features.
computational intelligence communication systems and networks | 2009
Ali Kattan; Rosni Abdullah; Rosalina Abdul Salam
This paper presents work in progress that aims to reduce the overall training and processing time of feed-forward multi-layer neural networks. If the network is large, processing is expensive in terms of both time and space. We suggest a cost-effective and presumably faster processing technique that utilizes a heterogeneous distributed system composed of a set of commodity computers connected by a local area network. Neural network computations can be viewed as a set of matrix multiplication processes, which can be adapted to utilize existing matrix multiplication algorithms tailored for such systems. With Java technology as the implementation means, we discuss the factors that should be considered to achieve this goal, highlighting issues that might affect the proposed implementation.
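The view of a feed-forward pass as a chain of matrix products, which is what makes distributed matrix-multiplication algorithms applicable, can be sketched in plain Python. The sigmoid activation is assumed for illustration.

```python
import math

def forward(x, weights):
    """Feed-forward pass as repeated matrix-vector products: each layer
    multiplies the activations by its weight matrix (one row per neuron)
    and applies a sigmoid. Distributing these products across nodes is
    the parallelization opportunity discussed above."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    a = x
    for W in weights:                 # one weight matrix per layer
        a = [sigmoid(sum(w * v for w, v in zip(row, a))) for row in W]
    return a
```

In a distributed setting, the inner matrix-vector product is what gets split across processing nodes, with each node computing the rows it owns.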
digital enterprise and information systems | 2013
Ali Kattan; Rosni Abdullah
Archive | 2011
Ali Kattan; Rosni Abdullah