Publication


Featured research published by Shahryar Rahnamayan.


Information Sciences | 2013

Diversity enhanced particle swarm optimization with neighborhood search

Hui Wang; Hui Sun; Changhe Li; Shahryar Rahnamayan; Jeng-Shyang Pan

Particle Swarm Optimization (PSO) has shown effective performance in solving various benchmark and real-world optimization problems. However, it suffers from premature convergence due to a rapid loss of diversity. To enhance its performance, this paper proposes a hybrid PSO algorithm, called DNSPSO, which employs a diversity-enhancing mechanism and neighborhood search strategies to achieve a trade-off between exploration and exploitation. A comprehensive experimental study is conducted on a set of benchmark functions, including rotated multimodal and shifted high-dimensional problems. Comparison results show that DNSPSO obtains promising performance on the majority of the test problems.


Computers & Mathematics With Applications | 2007

A novel population initialization method for accelerating evolutionary algorithms

Shahryar Rahnamayan; Hamid R. Tizhoosh; M.M.A. Salama

Population initialization is a crucial task in evolutionary algorithms because it affects both the convergence speed and the quality of the final solution. When no information about the solution is available, random initialization is the most commonly used method for generating candidate solutions (the initial population). This paper proposes a novel initialization approach that employs opposition-based learning to generate the initial population. Experiments conducted over a comprehensive set of benchmark functions demonstrate that replacing random initialization with opposition-based population initialization can accelerate convergence.
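The initialization scheme described above can be sketched as follows. It relies on the standard opposition-based learning definition, where the opposite of a candidate x in the box [a, b] is a + b - x per dimension; the function names and the minimization setup are illustrative, not taken from the paper's code.

```python
import random

def opposite(x, a, b):
    """Opposite point of x within the per-dimension interval [a, b]."""
    return [ai + bi - xi for xi, ai, bi in zip(x, a, b)]

def opposition_based_init(pop_size, a, b, fitness):
    """Opposition-based population initialization (sketch):
    generate a random population, compute its opposite population,
    and keep the pop_size fittest candidates from the union
    (assuming minimization)."""
    pop = [[random.uniform(ai, bi) for ai, bi in zip(a, b)]
           for _ in range(pop_size)]
    opp = [opposite(x, a, b) for x in pop]
    union = pop + opp
    union.sort(key=fitness)   # ascending fitness: best first
    return union[:pop_size]
```

Because every random candidate is evaluated together with its opposite, the kept population starts closer to the optimum on average, which is the claimed source of the convergence speed-up.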


Applied Soft Computing | 2008

Opposition versus randomness in soft computing techniques

Shahryar Rahnamayan; Hamid R. Tizhoosh; M.M.A. Salama

Many soft computing methods need random numbers, used either as initial estimates or during the learning and search process. Recently, results for evolutionary algorithms, reinforcement learning, and neural networks have indicated that the simultaneous consideration of randomness and opposition is more advantageous than pure randomness. This new scheme, called opposition-based learning, has the apparent effect of accelerating soft computing algorithms. This paper proves this advantage both mathematically and experimentally and, as an application, applies it to accelerate differential evolution (DE). By taking advantage of random numbers and their opposites, the optimization, search, or learning process in many soft computing techniques can be accelerated when there is no a priori knowledge about the solution. The mathematical proofs and the results of the conducted experiments confirm each other.


Information Sciences | 2011

Enhancing particle swarm optimization using generalized opposition-based learning

Hui Wang; Zhijian Wu; Shahryar Rahnamayan; Yong Liu; Mario Ventresca

Particle swarm optimization (PSO) has been shown to yield good performance for solving various optimization problems. However, it tends to suffer from premature convergence when solving complex problems. This paper presents an enhanced PSO algorithm called GOPSO, which employs generalized opposition-based learning (GOBL) and Cauchy mutation to overcome this problem. GOBL provides faster convergence, and the Cauchy mutation, with its long tail, helps trapped particles escape from local optima. The proposed approach uses a scheme similar to opposition-based differential evolution (ODE), with opposition-based population initialization and generation jumping using GOBL. Experiments are conducted on a comprehensive set of benchmark functions, including rotated multimodal problems and shifted large-scale problems. The results show that GOPSO obtains promising performance on the majority of the test problems.
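A minimal sketch of the GOBL transformation referenced above, assuming the commonly published form: the transformed point is k*(min_j + max_j) - x_j with k drawn uniformly from [0, 1] and the interval bounds taken dynamically from the current population. The function name and the per-individual k are illustrative assumptions.

```python
import random

def gobl(population):
    """Generalized opposition-based learning (sketch): for each
    dimension j, map x_j to k*(min_j + max_j) - x_j, where k ~ U(0, 1)
    and min_j/max_j are the current population's per-dimension bounds."""
    dims = len(population[0])
    lo = [min(x[j] for x in population) for j in range(dims)]
    hi = [max(x[j] for x in population) for j in range(dims)]
    transformed = []
    for x in population:
        k = random.random()   # one random scale per individual
        transformed.append([k * (lo[j] + hi[j]) - x[j] for j in range(dims)])
    return transformed
```

With k = 1 this reduces to plain opposition within the population's dynamic interval; randomizing k generalizes the opposite point to a whole segment of candidates.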


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Gaussian Bare-Bones Differential Evolution

Hui Wang; Shahryar Rahnamayan; Hui Sun; Mahamed G. H. Omran

Differential evolution (DE) is a well-known algorithm for global optimization over continuous search spaces. However, choosing optimal control parameters is a challenging task because they are problem-dependent. To minimize the effects of the control parameters, a Gaussian bare-bones DE (GBDE) and its modified version (MGBDE) are proposed, both of which are almost parameter-free. To verify the performance of these approaches, 30 benchmark functions and two real-world problems are utilized. The conducted experiments indicate that MGBDE performs significantly better than, or at least comparably to, several state-of-the-art DE variants and some existing bare-bones algorithms.
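The "bare-bones" idea above replaces DE's parameterized mutation with Gaussian sampling. A sketch, assuming the usual bare-bones form: each component is drawn from a normal distribution centred midway between the individual and the best solution, with standard deviation equal to their distance (the function name is illustrative).

```python
import random

def gbde_mutation(x, best):
    """Gaussian bare-bones mutation (sketch): sample each component from
    N((best_j + x_j)/2, |best_j - x_j|), so no scale factor F needs tuning.
    As x approaches best, the sampling variance shrinks to zero."""
    return [random.gauss((bi + xi) / 2.0, abs(bi - xi))
            for xi, bi in zip(x, best)]
```

The self-shrinking variance is what makes the scheme nearly parameter-free: exploration is wide early on and narrows automatically as the population converges toward the best solution.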


IEEE International Conference on Evolutionary Computation | 2006

Opposition-Based Differential Evolution Algorithms

Shahryar Rahnamayan; Hamid R. Tizhoosh; M.M.A. Salama

Evolutionary Algorithms (EAs) are well-known optimization approaches for coping with non-linear, complex problems. These population-based algorithms, however, suffer from a general weakness: they are computationally expensive due to the slow nature of the evolutionary process. This paper presents novel schemes to accelerate the convergence of evolutionary algorithms. The proposed schemes employ opposition-based learning for population initialization and for generation jumping. To investigate the performance of the proposed schemes, Differential Evolution (DE), an efficient and robust optimization method, is used. The main idea is general and applicable to other population-based algorithms such as Genetic Algorithms, Swarm Intelligence, and Ant Colonies. A set of test functions including unimodal and multimodal benchmarks is employed for experimental verification. Details of the proposed schemes and the conducted experiments are given. The results are highly promising.
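The generation-jumping scheme mentioned above can be sketched as follows, assuming the commonly published form: after a generation, with some jumping probability, the opposite population is computed against the population's current per-dimension min/max (the dynamic interval), and the fittest half of the union survives. The function name and the default jump rate are illustrative assumptions.

```python
import random

def generation_jumping(population, fitness, jump_rate=0.3):
    """Opposition-based generation jumping (sketch): with probability
    jump_rate, build the opposite population over the population's
    current per-dimension bounds, then keep the len(population) fittest
    individuals from the union (assuming minimization)."""
    if random.random() >= jump_rate:
        return population
    dims = len(population[0])
    lo = [min(x[j] for x in population) for j in range(dims)]
    hi = [max(x[j] for x in population) for j in range(dims)]
    opposite = [[lo[j] + hi[j] - x[j] for j in range(dims)]
                for x in population]
    union = population + opposite
    union.sort(key=fitness)   # ascending fitness: best first
    return union[:len(population)]
```

Using the shrinking population bounds rather than the original search box keeps the opposite points inside the region the search has already converged to.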


IEEE Congress on Evolutionary Computation | 2007

Quasi-oppositional Differential Evolution

Shahryar Rahnamayan; Hamid R. Tizhoosh; M.M.A. Salama

In this paper, an enhanced version of opposition-based differential evolution (ODE) is proposed. ODE utilizes opposite numbers in population initialization and generation jumping to accelerate differential evolution (DE). In this work, quasi-opposite points are used instead of opposite numbers, so the new extension is called quasi-oppositional DE (QODE). The proposed mathematical proof shows that in a black-box optimization problem, quasi-opposite points have a higher chance of being closer to the solution than opposite points. A test suite of 15 benchmark functions is employed to compare the performance of DE, ODE, and QODE experimentally. Results confirm that QODE performs better than ODE and DE overall. Details of the proposed approach and the conducted experiments are provided.
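A sketch of the quasi-opposite point referenced above, assuming the usual definition: a point drawn uniformly between the interval centre c = (a + b)/2 and the opposite point a + b - x, per dimension (the function name is illustrative).

```python
import random

def quasi_opposite(x, a, b):
    """Quasi-opposite point (sketch): for each dimension, sample
    uniformly between the interval centre c = (a+b)/2 and the
    opposite point a + b - x."""
    point = []
    for xi, ai, bi in zip(x, a, b):
        c = (ai + bi) / 2.0
        opp = ai + bi - xi
        lo, hi = min(c, opp), max(c, opp)
        point.append(random.uniform(lo, hi))
    return point
```

Compared with the deterministic opposite point, the quasi-opposite point is biased toward the centre of the interval, which is the geometric basis of the paper's probability argument.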


IEEE International Conference on Evolutionary Computation | 2006

Opposition-Based Differential Evolution for Optimization of Noisy Problems

Shahryar Rahnamayan; Hamid R. Tizhoosh; M.M.A. Salama

Differential evolution (DE) is a simple, reliable, and efficient optimization algorithm. However, it suffers from a weakness: it loses efficiency when optimizing noisy problems, and many real-world optimization problems involve noisy environments. This paper presents a new algorithm to improve DE's efficiency on noisy optimization problems. It employs opposition-based learning for population initialization, generation jumping, and improving the population's best member. A set of commonly used benchmark functions is employed for experimental verification. Details of the proposed algorithm and the conducted experiments are given. The new algorithm outperforms DE in terms of convergence speed.


Information Sciences | 2015

Metaheuristics in large-scale global continues optimization

Sedigheh Mahdavi; Mohammad Ebrahim Shiri; Shahryar Rahnamayan

Metaheuristic algorithms are widely recognized as effective approaches for solving high-dimensional optimization problems. These algorithms provide effective tools with important applications in business, engineering, economics, and science. This paper surveys state-of-the-art metaheuristic algorithms and their current applications in the field of large-scale global optimization. The paper mainly covers the fundamental algorithmic frameworks, such as decomposition and non-decomposition methods. More than 200 papers were carefully reviewed to prepare this comprehensive survey.


Journal of Parallel and Distributed Computing | 2013

Parallel differential evolution with self-adapting control parameters and generalized opposition-based learning for solving high-dimensional optimization problems

Hui Wang; Shahryar Rahnamayan; Zhijian Wu

Solving high-dimensional global optimization problems is a time-consuming task because of the high complexity of the problems. To reduce the computational time for high-dimensional problems, this paper presents a parallel differential evolution (DE) based on Graphics Processing Units (GPUs). The proposed approach, called GOjDE, employs self-adapting control parameters and generalized opposition-based learning (GOBL). The self-adapting parameter strategy helps avoid manually tuning the control parameters, and GOBL is beneficial for improving the quality of candidate solutions. Simulation experiments are conducted on a set of recently proposed high-dimensional benchmark problems with dimensions of 100, 200, 500, and 1,000. Simulation results demonstrate that GOjDE is better than, or at least comparable to, six other algorithms, and that employing GPUs can effectively reduce computational time. The obtained maximum speedup is 75.

Collaboration


Dive into Shahryar Rahnamayan's collaborations.

Top Co-Authors

Hui Wang — Nanchang Institute of Technology
Sedigheh Mahdavi — University of Ontario Institute of Technology
Kalyanmoy Deb — Michigan State University
Hojjat Salehinejad — University of Ontario Institute of Technology
Amin Ibrahim — University of Ontario Institute of Technology
Hui Sun — Nanchang Institute of Technology
Farid Bourennani — University of Ontario Institute of Technology