Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hossam Faris is active.

Publication


Featured research published by Hossam Faris.


Advances in Engineering Software | 2017

Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems

Seyedali Mirjalili; Amir Hossein Gandomi; Seyedeh Zahra Mirjalili; Shahrzad Saremi; Hossam Faris; Seyed Mohammad Mirjalili

A novel optimization algorithm called Salp Swarm Algorithm (SSA) is proposed. A Multi-objective Salp Swarm Algorithm (MSSA) is proposed to solve multi-objective problems. Both algorithms are tested on several mathematical optimization functions. Two challenging engineering design problems are solved: airfoil design and marine propeller design. The qualitative and quantitative results prove the efficiency of SSA and MSSA. This work proposes two novel optimization algorithms called Salp Swarm Algorithm (SSA) and Multi-objective Salp Swarm Algorithm (MSSA) for solving optimization problems with single and multiple objectives. The main inspiration of SSA and MSSA is the swarming behaviour of salps when navigating and foraging in oceans. These two algorithms are tested on several mathematical optimization functions to observe and confirm their effective behaviours in finding the optimal solutions for optimization problems. The results on the mathematical functions show that the SSA algorithm is able to improve the initial random solutions effectively and converge towards the optimum. The results of MSSA show that this algorithm can approximate Pareto optimal solutions with high convergence and coverage. The paper also considers solving several challenging and computationally expensive engineering design problems (e.g. airfoil design and marine propeller design) using SSA and MSSA. The results of the real case studies demonstrate the merits of the algorithms proposed in solving real-world problems with difficult and unknown search spaces.
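
The core of single-objective SSA is a simple leader/follower position update. Below is a minimal sketch of that update applied to a toy sphere function; the function and parameter names are illustrative and this is not the authors' reference implementation.

```python
import numpy as np

def sphere(x):
    # Toy objective: minimise the sum of squares.
    return np.sum(x ** 2)

def ssa(obj, dim=10, n_salps=30, max_iter=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_salps, dim))       # salp chain positions
    fitness = np.apply_along_axis(obj, 1, X)
    best = X[fitness.argmin()].copy()                  # food source F
    best_fit = fitness.min()

    for l in range(1, max_iter + 1):
        c1 = 2 * np.exp(-(4 * l / max_iter) ** 2)      # balances exploration/exploitation
        for i in range(n_salps):
            if i == 0:                                  # leader moves around the food source
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 < 0.5, best + step, best - step)
            else:                                       # followers track the salp ahead of them
                X[i] = 0.5 * (X[i] + X[i - 1])
        X = np.clip(X, lb, ub)
        fitness = np.apply_along_axis(obj, 1, X)
        if fitness.min() < best_fit:
            best_fit = fitness.min()
            best = X[fitness.argmin()].copy()
    return best, best_fit

best, best_fit = ssa(sphere)
print(best_fit)
```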


Soft Computing | 2018

Optimizing connection weights in neural networks using the whale optimization algorithm

Ibrahim Aljarah; Hossam Faris; Seyedali Mirjalili

The learning process of artificial neural networks is considered one of the most difficult challenges in machine learning and has attracted many researchers recently. The main difficulty of training a neural network is its nonlinear nature and the unknown best set of main controlling parameters (weights and biases). The main disadvantages of conventional training algorithms are local optima stagnation and slow convergence speed. This makes stochastic optimization algorithms reliable alternatives for alleviating these drawbacks. This work proposes a new training algorithm based on the recently proposed whale optimization algorithm (WOA). It has been proved that this algorithm is able to solve a wide range of optimization problems and outperform current algorithms, which motivated our attempts to benchmark its performance in training feedforward neural networks. For the first time in the literature, a set of 20 datasets with different levels of difficulty is chosen to test the proposed WOA-based trainer. The results are verified by comparisons with the back-propagation algorithm and six evolutionary techniques. The qualitative and quantitative results prove that the proposed trainer is able to outperform the current algorithms on the majority of datasets in terms of both local optima avoidance and convergence speed.
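
A trainer of this kind treats every weight and bias as one flat decision vector and asks the metaheuristic to minimise the training error. The sketch below illustrates the idea with a single-hidden-layer network and a compact whale-optimization-style loop; the layer sizes, bounds and iteration counts are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def decode(vec, n_in, n_hid, n_out):
    """Split a flat parameter vector into (W1, b1, W2, b2)."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def mse(vec, X, y, n_in, n_hid, n_out):
    # X: (n_samples, n_in) inputs, y: (n_samples, n_out) 0/1 targets.
    W1, b1, W2, b2 = decode(vec, n_in, n_hid, n_out)
    h = np.tanh(X @ W1 + b1)                        # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    return np.mean((out - y) ** 2)

def woa_train(X, y, n_hid=5, n_whales=30, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    P = rng.uniform(-1.0, 1.0, (n_whales, dim))     # whale population = weight vectors
    fit = np.array([mse(p, X, y, n_in, n_hid, n_out) for p in P])
    best, best_fit = P[fit.argmin()].copy(), fit.min()

    for t in range(max_iter):
        a = 2 - 2 * t / max_iter                    # decreases linearly from 2 to 0
        for i in range(n_whales):
            A, C = 2 * a * rng.random() - a, 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                      # exploit: encircle the best whale
                    P[i] = best - A * np.abs(C * best - P[i])
                else:                               # explore: move around a random whale
                    rand = P[rng.integers(n_whales)]
                    P[i] = rand - A * np.abs(C * rand - P[i])
            else:                                   # spiral update toward the best whale
                l = rng.uniform(-1, 1)
                P[i] = np.abs(best - P[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        fit = np.array([mse(p, X, y, n_in, n_hid, n_out) for p in P])
        if fit.min() < best_fit:
            best, best_fit = P[fit.argmin()].copy(), fit.min()
    return best, best_fit
```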


Applied Intelligence | 2016

Training feedforward neural networks using multi-verse optimizer for binary classification problems

Hossam Faris; Ibrahim Aljarah; Seyedali Mirjalili

This paper employs the recently proposed nature-inspired algorithm called Multi-Verse Optimizer (MVO) for training the Multi-layer Perceptron (MLP) neural network. The new training approach is benchmarked and evaluated using nine different bio-medical datasets selected from the UCI machine learning repository. The results are compared to five classical and recent evolutionary metaheuristic algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), FireFly (FF) Algorithm and Cuckoo Search (CS). In addition, the results are compared with two well-regarded conventional gradient-based training methods: the conventional Back-Propagation (BP) and the Levenberg-Marquardt (LM) algorithms. The comparative study demonstrates that MVO is very competitive and outperforms other training algorithms in the majority of datasets in terms of improved local optima avoidance and convergence speed.
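
For reference, the following is a hedged sketch of the two MVO mechanisms such a trainer relies on: white/black-hole exchange of solution components driven by normalised fitness, and wormhole jumps around the best universe controlled by the WEP and TDR schedules. In the training setting, each universe would be a flat MLP weight vector. The parameter values follow commonly used MVO defaults, not necessarily the paper's settings.

```python
import numpy as np

def mvo_step(U, fitness, best, t, T, lb, ub, rng, wep_min=0.2, wep_max=1.0, p=6.0):
    """One MVO iteration over the universe matrix U (one row per candidate)."""
    n, dim = U.shape
    wep = wep_min + t * (wep_max - wep_min) / T          # wormhole existence probability
    tdr = 1 - (t ** (1 / p)) / (T ** (1 / p))            # travelling distance rate
    # Normalised inflation rates; lower fitness means a better universe here.
    ni = (fitness - fitness.min()) / (np.ptp(fitness) + 1e-12)
    probs = (1 - ni) + 1e-12                             # better universes donate more often
    probs = probs / probs.sum()
    for i in range(n):
        for j in range(dim):
            if rng.random() < ni[i]:                     # object arrives through a white hole
                donor = rng.choice(n, p=probs)           # roulette-wheel selected universe
                U[i, j] = U[donor, j]
            if rng.random() < wep:                       # wormhole toward the best universe
                step = tdr * ((ub - lb) * rng.random() + lb)
                U[i, j] = best[j] + step if rng.random() < 0.5 else best[j] - step
    return np.clip(U, lb, ub)
```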


Neural Computing and Applications | 2018

Training radial basis function networks using biogeography-based optimizer

Ibrahim Aljarah; Hossam Faris; Seyedali Mirjalili; Nailah Al-Madi

Training artificial neural networks is considered one of the most challenging machine learning problems, mainly due to the presence of a large number of solutions and changes in the search space for different datasets. Conventional training techniques mostly suffer from local optima stagnation and degraded convergence, which make them impractical for datasets with many features. The literature shows that stochastic population-based optimization techniques suit this problem better and are a reliable alternative because of their high local optima avoidance and flexibility. For the first time, this work proposes a new learning mechanism for radial basis function networks based on the biogeography-based optimizer, one of the most well-regarded optimizers in the literature. To prove the efficacy of the proposed methodology, it is employed to solve 12 well-known datasets and compared to 11 current training algorithms, including gradient-based and stochastic approaches. The paper also considers changing the number of neurons and investigating the performance of the algorithms on radial basis function networks with different numbers of parameters. A statistical test is also conducted to judge the significance of the results. The results show that the biogeography-based optimizer trainer is able to substantially outperform the current training algorithms on all datasets in terms of classification accuracy, speed of convergence, and entrapment in local optima. In addition, the comparison of trainers on radial basis function networks with different numbers of neurons reveals that the biogeography-based optimizer trainer is able to train radial basis function networks with different numbers of structural parameters effectively.
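
As an illustration of what such a trainer optimises, the sketch below lays out one common candidate-solution encoding for an RBF network (centres, widths and output weights concatenated into a single vector) and a misclassification-rate fitness. The shapes and names are assumptions rather than the paper's exact formulation; any population-based optimizer such as BBO would evolve these vectors.

```python
import numpy as np

def decode_rbf(vec, n_features, n_neurons, n_classes):
    """Split a flat candidate vector into centres, widths and output weights."""
    i = 0
    centres = vec[i:i + n_neurons * n_features].reshape(n_neurons, n_features)
    i += n_neurons * n_features
    widths = np.abs(vec[i:i + n_neurons]) + 1e-6         # keep widths strictly positive
    i += n_neurons
    W = vec[i:i + n_neurons * n_classes].reshape(n_neurons, n_classes)
    return centres, widths, W

def rbf_error(vec, X, y, n_neurons, n_classes):
    """Fitness: misclassification rate of the decoded RBF network on (X, y)."""
    centres, widths, W = decode_rbf(vec, X.shape[1], n_neurons, n_classes)
    # Gaussian activations: one column per hidden neuron.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2 * widths ** 2))
    pred = H @ W
    return np.mean(pred.argmax(axis=1) != y)
```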


International Journal of Advanced Research in Artificial Intelligence | 2015

A Comparison between Regression, Artificial Neural Networks and Support Vector Machines for Predicting Stock Market Index

Alaa F. Sheta; Sara Elsir M. Ahmed; Hossam Faris

Obtaining accurate predictions of a stock index significantly helps decision makers take correct actions to develop a better economy. The inability to predict fluctuations of the stock market might cause serious profit loss. The challenge is that we always deal with a dynamic market that is influenced by many factors, including political, financial and reserve occasions. Thus, stable, robust and adaptive approaches that can provide models capable of accurately predicting the stock index are urgently needed. In this paper, we explore the use of Artificial Neural Networks (ANNs) and Support Vector Machines (SVM) to build prediction models for the S&P 500 stock index. We also show how traditional models such as multiple linear regression (MLR) behave in this case. The developed models are evaluated and compared based on a number of evaluation criteria.
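
A minimal, hedged sketch of this kind of comparison using scikit-learn is shown below; the synthetic lag-3 series stands in for the real S&P 500 data, and MSE is the only evaluation criterion shown, so it does not reproduce the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Build lagged features: predict the next value from the previous three.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 1, 600)) + 1000            # placeholder "index" series
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]
split = int(0.8 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "MLR": LinearRegression(),
    "ANN": MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    "SVM": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, model.predict(X_te)))
```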


Knowledge-Based Systems | 2017

Evolutionary Population Dynamics and Grasshopper Optimization approaches for feature selection problems

Majdi M. Mafarja; Ibrahim Aljarah; Ali Asghar Heidari; Abdelaziz I. Hammouri; Hossam Faris; Ala’ M. Al-Zoubi; Seyedali Mirjalili

Searching for the optimal subset of features is known as a challenging problem in the feature selection process. To deal with the difficulties involved in this problem, a robust and reliable optimization algorithm is required. In this paper, the Grasshopper Optimization Algorithm (GOA) is employed as a search strategy to design a wrapper-based feature selection method. The GOA is a recent population-based metaheuristic that mimics the swarming behaviour of grasshoppers. In this work, an efficient optimizer based on the simultaneous use of the GOA, selection operators, and Evolutionary Population Dynamics (EPD) is proposed in the form of four different strategies to mitigate the premature convergence and stagnation drawbacks of the conventional GOA. In the first two approaches, one of the top three agents and a randomly generated one are selected to reposition a solution from the worst half of the population. In the third and fourth approaches, to give the low-fitness solutions a chance to reform the population, Roulette Wheel Selection (RWS) and Tournament Selection (TS) are utilized to select the guiding agent from the first half. The proposed GOA_EPD approaches are employed to tackle various feature selection tasks and are benchmarked on 22 UCI datasets. The comprehensive results and various comparisons reveal that EPD has a remarkable impact on the efficacy of the GOA, and that the selection mechanisms enhance the capability of the proposed approach to outperform other optimizers and find the best solutions with improved convergence trends. Furthermore, the comparative experiments demonstrate the superiority of the proposed approaches when compared to other similar methods in the literature.
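
A wrapper-based method like this needs a fitness that scores each candidate feature subset. The sketch below shows a common form of that fitness, thresholding a continuous agent position into a feature mask and trading classification error against subset size; the KNN evaluator and the 0.99/0.01 weighting are illustrative assumptions, not necessarily the paper's choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def feature_selection_fitness(position, X, y, alpha=0.99, threshold=0.5):
    """Lower is better: weighted sum of classification error and feature ratio."""
    mask = position > threshold          # continuous agent position -> binary feature mask
    if not mask.any():                   # empty subsets are penalised outright
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(clf, X[:, mask], y, cv=5).mean()
    error = 1.0 - acc
    feature_ratio = mask.sum() / len(mask)
    return alpha * error + (1 - alpha) * feature_ratio
```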


Neural Computing and Applications | 2018

A multi-verse optimizer approach for feature selection and optimizing SVM parameters based on a robust system architecture

Hossam Faris; Mohammad A. Hassonah; Ala’ M. Al-Zoubi; Seyedali Mirjalili; Ibrahim Aljarah

Support vector machine (SVM) is a well-regarded machine learning algorithm widely applied to classification tasks and regression problems. SVM is founded on statistical learning theory and structural risk minimization. Despite the high prediction rate of this technique in a wide range of real applications, the efficiency of SVM and its classification accuracy highly depend on the parameter settings as well as the selected subset of features. This work proposes a robust approach based on a recent nature-inspired metaheuristic called the multi-verse optimizer (MVO) for selecting optimal features and optimizing the parameters of SVM simultaneously. In fact, the MVO algorithm is employed as a tuner to manipulate the main parameters of SVM and find the optimal set of features for this classifier. The proposed approach is implemented and tested on two different system architectures. MVO is benchmarked and compared with four classic and recent metaheuristic algorithms using ten binary and multi-class labeled datasets. Experimental results demonstrate that MVO can effectively reduce the number of features while maintaining a high prediction accuracy.
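
The simultaneous optimization hinges on how a candidate solution is encoded. The following sketch shows one plausible layout in which the first two genes of a universe map to the SVM's C and gamma and the remaining genes are thresholded into a feature mask; the parameter ranges and the cross-validated fitness are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode_and_score(vec, X, y):
    """Genes in [0, 1]: vec[0] -> C, vec[1] -> gamma, vec[2:] -> feature mask."""
    C = 10 ** (4 * vec[0] - 2)           # C in [1e-2, 1e2]
    gamma = 10 ** (4 * vec[1] - 3)       # gamma in [1e-3, 1e1]
    mask = vec[2:] > 0.5
    if not mask.any():                   # no features selected -> worst possible score
        return 0.0
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()   # fitness to maximise
```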


International Journal of Computer Integrated Manufacturing | 2013

Modelling hot rolling manufacturing process using soft computing techniques

Hossam Faris; Alaa F. Sheta; Ertan Öznergiz

The steel-making industry is becoming more competitive due to high demand. In order to protect market share, automation of the manufacturing process is vital and represents a challenge. Empirical mathematical modelling of the process has been used to design mill equipment and ensure productivity and service quality, but this modelling approach suffers from problems associated with complexity and time consumption. Evolutionary computing techniques show significant capabilities in modelling complex non-linear systems. In this research, symbolic regression modelling via genetic programming is used to develop relatively simple mathematical models for the non-linear hot rolling industrial process. Three models are proposed, for the rolling force, torque and slab temperature. A set of simple mathematical functions representing the dynamical relationship between the inputs and outputs of these models is presented. Moreover, the performance of the symbolic regression models is compared to the known empirical models for the hot rolling system. A comparison with experimental data collected from the Ereğli Iron and Steel Factory in Turkey is conducted to verify the promising model performance. Genetic programming shows better performance compared to other soft computing approaches, such as neural networks and fuzzy logic.
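
As a rough illustration of the symbolic-regression idea (not the authors' setup or data), here is a short sketch using the gplearn package on synthetic inputs; the package choice, the GP settings and the hidden "true" relation are all assumptions.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor   # assumes the gplearn package is installed

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(200, 3))        # stand-ins for hot rolling process inputs
y = 3.1 * X[:, 0] * X[:, 1] + 0.7 / X[:, 2]     # hidden "true" relation GP should recover

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)
print(gp._program)   # the evolved closed-form expression
```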


Neural Computing and Applications | 2018

Grey wolf optimizer: a review of recent variants and applications

Hossam Faris; Ibrahim Aljarah; Mohammed Azmi Al-Betar; Seyedali Mirjalili

Grey wolf optimizer (GWO) is one of the recent swarm intelligence metaheuristics. It has been widely tailored to a wide variety of optimization problems due to its impressive characteristics compared with other swarm intelligence methods: it has very few parameters, and no derivative information is required in the initial search. It is also simple, easy to use, flexible and scalable, and has a special capability to strike the right balance between exploration and exploitation during the search, which leads to favourable convergence. Therefore, GWO has quickly gained substantial research interest from several domains. Thus, in this review paper, several research publications using GWO are overviewed and summarized. Initially, introductory information about GWO is provided, illustrating its natural inspiration and the related optimization conceptual framework. The main operations of GWO are procedurally discussed, and its theoretical foundation is described. Furthermore, recent versions of GWO are discussed in detail, categorized into modified, hybridized and parallelized versions. The main applications of GWO are also thoroughly described; they belong to the domains of global optimization, power engineering, bioinformatics, environmental applications, machine learning, networking and image processing, among others. The open source software of GWO is also provided. The review concludes with a summary of the main foundations of GWO and suggests several possible future directions that can be further investigated.
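
For context, the core GWO position update that the reviewed variants build on can be sketched as follows: every wolf moves toward a combination of the three best wolves (alpha, beta and delta), with the coefficient a decaying from 2 to 0 to shift the search from exploration to exploitation. Names and shapes here are illustrative.

```python
import numpy as np

def gwo_update(wolves, alpha, beta, delta, a, rng):
    """One GWO iteration: move every wolf toward the alpha, beta and delta leaders."""
    new_positions = np.empty_like(wolves)
    for i, X in enumerate(wolves):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(X.shape) - a    # |A| > 1 encourages exploration
            C = 2 * rng.random(X.shape)
            D = np.abs(C * leader - X)
            guided.append(leader - A * D)
        new_positions[i] = np.mean(guided, axis=0)  # average of the three guided moves
    return new_positions
```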


International Conference on Computational Collective Intelligence | 2014

A Genetic Programming Based Framework for Churn Prediction in Telecommunication Industry

Hossam Faris; Bashar Al-Shboul; Nazeeh Ghatasheh

Customer defection is critically important since it leads to serious business loss. Therefore, investigating methods to identify defecting customers (i.e. churners) has become a priority for telecommunication operators. In this paper, a churn prediction framework is proposed with the aim of enhancing the ability to forecast customer churn. The framework combines two heuristic approaches: Self-Organizing Maps (SOM) and Genetic Programming (GP). First, SOM is used to cluster the customers in the dataset and remove outliers representing abnormal customer behaviours. After that, GP is used to build an enhanced classification tree. The dataset used for this study contains anonymized real customer information provided by a major local telecom operator in Jordan. Our work shows that the proposed method surpasses various state-of-the-art classification methods for this particular dataset.
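
A hedged sketch of this two-stage pipeline is shown below, using the minisom and gplearn packages as stand-ins for the authors' SOM and GP implementations; the grid size, the distance cut-off used to drop outliers, and the GP settings are illustrative assumptions.

```python
import numpy as np
from minisom import MiniSom                       # assumed SOM implementation
from gplearn.genetic import SymbolicClassifier    # assumed GP classifier

def som_gp_churn(X, y, grid=8, keep_quantile=0.95, seed=0):
    """Cluster customers with a SOM, drop atypical ones, then fit a GP classifier."""
    som = MiniSom(grid, grid, X.shape[1], sigma=1.0, learning_rate=0.5,
                  random_seed=seed)
    som.train_random(X, 5000)
    # Distance of each customer to its best-matching SOM unit.
    dist = np.linalg.norm(X - som.quantization(X), axis=1)
    keep = dist <= np.quantile(dist, keep_quantile)   # drop the most atypical behaviours
    gp = SymbolicClassifier(population_size=500, generations=20, random_state=seed)
    gp.fit(X[keep], y[keep])
    return gp
```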

Collaboration


Dive into Hossam Faris's collaborations.
