Publication


Featured research published by Ponnuthurai N. Suganthan.


IEEE Transactions on Evolutionary Computation | 2011

Differential Evolution: A Survey of the State-of-the-Art

Swagatam Das; Ponnuthurai N. Suganthan

Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.
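The mutation scheme the abstract describes, perturbing a member with the scaled difference of randomly selected, distinct members and mixing via binomial crossover, is the classic DE/rand/1/bin step. A minimal sketch (function name and parameter defaults are illustrative, not from the paper):

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin trial vector for target index i.

    Mutation perturbs with the scaled difference of two distinct,
    randomly selected members; binomial crossover then mixes the
    mutant with the target vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    # three distinct indices, all different from the target i
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True  # guarantee at least one mutant component
    return np.where(cross, mutant, pop[i])

rng = np.random.default_rng(0)
pop = rng.standard_normal((10, 5))
trial = de_rand_1_bin(pop, 0, rng=rng)
```

Note that no separate probability distribution is needed for the offspring: the population's own spread drives the perturbation.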


IEEE Transactions on Evolutionary Computation | 2009

Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization

A.K. Qin; V. L. Huang; Ponnuthurai N. Suganthan

Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.
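The core of the strategy adaptation described above is bookkeeping: strategies that recently produced promising trial vectors are selected with higher probability. A minimal sketch in that spirit (class name, smoothing constant, and strategy labels are illustrative, not SaDE's exact update rule):

```python
import random

class StrategySelector:
    """SaDE-style adaptation sketch: strategies that produced more
    improving trials recently are selected more often."""

    def __init__(self, strategies, eps=0.01):
        self.strategies = strategies
        self.success = {s: 0 for s in strategies}
        self.failure = {s: 0 for s in strategies}
        self.eps = eps  # keeps every strategy's probability nonzero

    def probabilities(self):
        rates = {s: (self.success[s] + self.eps) /
                    (self.success[s] + self.failure[s] + 2 * self.eps)
                 for s in self.strategies}
        total = sum(rates.values())
        return {s: r / total for s, r in rates.items()}

    def pick(self, rng):
        probs = self.probabilities()
        return rng.choices(self.strategies,
                           weights=[probs[s] for s in self.strategies])[0]

    def record(self, strategy, improved):
        if improved:
            self.success[strategy] += 1
        else:
            self.failure[strategy] += 1

sel = StrategySelector(["rand/1/bin", "current-to-best/2/bin"])
for _ in range(50):
    sel.record("rand/1/bin", True)            # kept producing better offspring
    sel.record("current-to-best/2/bin", False)
probs = sel.probabilities()
```

After the loop, `probs` strongly favors the strategy whose trials survived, which is the learning effect the paper exploits.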


Swarm and Evolutionary Computation | 2011

Multiobjective evolutionary algorithms: A survey of the state of the art

Aimin Zhou; Bo-Yang Qu; Hui Li; Shi-Zheng Zhao; Ponnuthurai N. Suganthan; Qingfu Zhang

A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
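The notion of a Pareto optimal set that MOEAs approximate rests on Pareto dominance: a solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal sketch of that test and the resulting non-dominated filter (minimization assumed; function names are illustrative):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in all objectives,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only the objective vectors not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (3, 3) is dominated by (2, 2); the rest trade off the two objectives
front = non_dominated([(1, 5), (2, 2), (4, 1), (3, 3)])
```

A population-based MOEA approximates the whole `front` in a single run, which is why these algorithms suit multiobjective problems so well.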


Congress on Evolutionary Computation | 2005

Self-adaptive differential evolution algorithm for numerical optimization

A.K. Qin; Ponnuthurai N. Suganthan

In this paper, we propose a novel self-adaptive differential evolution algorithm (SaDE), where the choice of learning strategy and the two control parameters F and CR are not required to be pre-specified. During evolution, the suitable learning strategy and parameter settings are gradually self-adapted according to the learning experience. The performance of the SaDE is reported on the set of 25 benchmark functions provided by CEC2005 special session on real parameter optimization.
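In this early SaDE formulation the control parameters F and CR are not pre-specified: F is drawn fresh for every trial, while CR is sampled around a running mean of recently successful CR values. A sketch of that sampling step (the Gaussian means and widths here are illustrative constants, not a claim about the paper's exact settings):

```python
import random

def sample_parameters(cr_memory, rng):
    """Per-trial control parameters in the SaDE spirit: F is drawn
    fresh each time; CR is drawn around the mean of CR values that
    recently produced improving offspring (cr_memory)."""
    F = rng.gauss(0.5, 0.3)
    crm = sum(cr_memory) / len(cr_memory) if cr_memory else 0.5
    CR = min(1.0, max(0.0, rng.gauss(crm, 0.1)))  # keep CR in [0, 1]
    return F, CR

rng = random.Random(42)
successful_cr = [0.9, 0.85, 0.95]  # CRs that recently led to better offspring
F, CR = sample_parameters(successful_cr, rng)
```

Over many generations the CR distribution drifts toward values that work for the problem at hand, removing the manual tuning step.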


Applied Soft Computing | 2011

Differential evolution algorithm with ensemble of parameters and mutation strategies

Rammohan Mallipeddi; Ponnuthurai N. Suganthan; Quan-Ke Pan; Mehmet Fatih Tasgetiren

Differential evolution (DE) has attracted much attention recently as an effective approach for solving numerical optimization problems. However, the performance of DE is sensitive to the choice of the mutation strategy and associated control parameters. Thus, to obtain optimal performance, time-consuming parameter tuning is necessary. Different mutation strategies with different parameter settings can be appropriate during different stages of the evolution. In this paper, we propose to employ an ensemble of mutation strategies and control parameters with the DE (EPSDE). In EPSDE, a pool of distinct mutation strategies along with a pool of values for each control parameter coexists throughout the evolution process and competes to produce offspring. The performance of EPSDE is evaluated on a set of bound-constrained problems and is compared with conventional DE and several state-of-the-art parameter adaptive DE variants.
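The ensemble idea described above amounts to simple per-member bookkeeping: each population member carries a (strategy, F, CR) combination drawn from fixed pools, keeps it while it produces better offspring, and is reassigned from the pools when it fails. A sketch under those assumptions (pool contents and function name are illustrative):

```python
import random

def assign_or_keep(member_config, improved, strategy_pool, F_pool, CR_pool, rng):
    """EPSDE-style bookkeeping sketch: a member keeps its
    (strategy, F, CR) combination while it keeps producing better
    offspring, and is reinitialized from the pools when it fails."""
    if improved and member_config is not None:
        return member_config
    return (rng.choice(strategy_pool), rng.choice(F_pool), rng.choice(CR_pool))

strategies = ["rand/1/bin", "best/2/bin", "current-to-rand/1"]
F_pool = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
CR_pool = [0.1, 0.3, 0.5, 0.7, 0.9]

rng = random.Random(7)
cfg = assign_or_keep(None, False, strategies, F_pool, CR_pool, rng)   # fresh draw
kept = assign_or_keep(cfg, True, strategies, F_pool, CR_pool, rng)    # success: keep
```

Because successful combinations survive and failing ones are replaced, the pools effectively compete to produce offspring throughout the run.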


Pattern Recognition | 2005

Rapid and brief communication: Evolutionary extreme learning machine

Qin-Yu Zhu; A.K. Qin; Ponnuthurai N. Suganthan; Guang-Bin Huang

Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than the traditional gradient-based learning algorithms, was proposed recently for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed which uses the differential evolutionary algorithm to select the input weights and Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach is able to achieve good generalization performance with much more compact networks.
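The analytic step the abstract relies on, determining output weights via the Moore-Penrose generalized inverse, is short enough to show directly. A minimal ELM sketch using `numpy.linalg.pinv` (the network size and toy regression target are illustrative; the paper's hybrid additionally evolves the input weights with DE rather than leaving them purely random):

```python
import numpy as np

def elm_train(X, y, hidden, rng):
    """Minimal ELM sketch: random input weights and biases stay fixed;
    output weights come analytically from the Moore-Penrose
    pseudoinverse of the hidden-layer output matrix."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y    # MP generalized inverse solves H @ beta = y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])             # toy 1-D regression target
W, b, beta = elm_train(X, y, hidden=30, rng=rng)
pred = elm_predict(X, W, b, beta)
```

Replacing the random draw of `W` and `b` with a DE search is exactly where the paper's hybrid gains its more compact networks.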


Information Sciences | 2011

A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem

Quan-Ke Pan; M. Fatih Tasgetiren; Ponnuthurai N. Suganthan; Tay Jin Chua

In this paper, a discrete artificial bee colony (DABC) algorithm is proposed to solve the lot-streaming flow shop scheduling problem with the criterion of total weighted earliness and tardiness penalties under both the idling and no-idling cases. Unlike the original ABC algorithm, the proposed DABC algorithm represents a food source as a discrete job permutation and applies discrete operators to generate new neighboring food sources for the employed bees, onlookers and scouts. An efficient initialization scheme, which is based on the earliest due date (EDD), the smallest slack time on the last machine (LSL) and the smallest overall slack time (OSL) rules, is presented to construct the initial population with certain quality and diversity. In addition, a self-adaptive strategy for generating neighboring food sources based on insert and swap operators is developed to enable the DABC algorithm to work on discrete/combinatorial spaces. Furthermore, a simple but effective local search approach is embedded in the proposed DABC algorithm to enhance the local intensification capability. Through the analysis of experimental results, the highly effective performance of the proposed DABC algorithm is shown against the best performing algorithms from the literature.
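The insert and swap operators that generate neighboring food sources act on job permutations, so they are easy to make concrete. A sketch of both moves and a random-neighbor step (function names are illustrative; the paper's self-adaptive strategy additionally learns which operator to prefer):

```python
import random

def insert_move(perm, i, j):
    """Remove the job at position i and reinsert it at position j."""
    p = list(perm)
    job = p.pop(i)
    p.insert(j, job)
    return p

def swap_move(perm, i, j):
    """Exchange the jobs at positions i and j."""
    p = list(perm)
    p[i], p[j] = p[j], p[i]
    return p

def random_neighbor(perm, rng):
    """DABC-style neighboring food source: apply insert or swap
    at two random distinct positions of a job permutation."""
    i, j = rng.sample(range(len(perm)), 2)
    op = rng.choice([insert_move, swap_move])
    return op(perm, i, j)

rng = random.Random(3)
schedule = [0, 1, 2, 3, 4, 5]       # a food source: one job permutation
neighbor = random_neighbor(schedule, rng)
```

Both moves preserve the permutation property, which is why they keep the search inside the feasible space of job sequences.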


Congress on Evolutionary Computation | 1999

Particle swarm optimiser with neighbourhood operator

Ponnuthurai N. Suganthan

In recent years population based methods such as genetic algorithms, evolutionary programming, evolution strategies and genetic programming have been increasingly employed to solve a variety of optimisation problems. Recently, another novel population based optimisation algorithm - namely the particle swarm optimisation (PSO) algorithm, was introduced by R. Eberhart and J. Kennedy (1995). Although the PSO algorithm possesses some attractive properties, its solution quality has been somewhat inferior to other evolutionary optimisation algorithms (P. Angeline, 1998). We propose a number of techniques to improve the standard PSO algorithm. Similar techniques have been employed in the context of self organising maps and neural-gas networks (T. Kohonen, 1990; T.M. Martinez et al., 1994).
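The key departure from standard PSO here is the neighbourhood operator: each particle is attracted toward the best position found in its local neighbourhood rather than by the whole swarm. A sketch of one such update for a single particle (coefficient values are illustrative defaults, not the paper's settings):

```python
import random

def pso_step(x, v, pbest, nbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO update for a single particle, using a neighbourhood
    best (nbest) in place of the usual global best."""
    rng = random.Random() if rng is None else rng
    new_v = [w * vi
             + c1 * rng.random() * (pi - xi)   # pull toward personal best
             + c2 * rng.random() * (ni - xi)   # pull toward neighbourhood best
             for xi, vi, pi, ni in zip(x, v, pbest, nbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

rng = random.Random(0)
x, v = [1.0, -2.0], [0.0, 0.0]
new_x, new_v = pso_step(x, v, pbest=[0.5, -1.0], nbest=[0.0, 0.0], rng=rng)
```

Restricting the social pull to a neighbourhood slows the spread of the current best and helps the swarm avoid premature convergence, the same intuition behind the self-organising-map and neural-gas topologies the paper cites.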


Systems, Man, and Cybernetics | 2012

An Adaptive Differential Evolution Algorithm With Novel Mutation and Crossover Strategies for Global Numerical Optimization

Sk. Minhazul Islam; Swagatam Das; Saurav Ghosh; Subhrajit Roy; Ponnuthurai N. Suganthan

Differential evolution (DE) is one of the most powerful stochastic real parameter optimizers of current interest. In this paper, we propose a new mutation strategy, a fitness-induced parent selection scheme for the binomial crossover of DE, and a simple but effective scheme of adapting two of its most important control parameters with an objective of achieving improved performance. The new mutation operator, which we call DE/current-to-gr_best/1, is a variant of the classical DE/current-to-best/1 scheme. It uses the best of a group (whose size is q% of the population size) of randomly selected solutions from the current generation to perturb the parent (target) vector, unlike DE/current-to-best/1 that always picks the best vector of the entire population to perturb the target vector. In our modified framework of recombination, a biased parent selection scheme has been incorporated by letting each mutant undergo the usual binomial crossover with one of the p top-ranked individuals from the current population and not with the target vector with the same index as used in all variants of DE. A DE variant obtained by integrating the proposed mutation, crossover, and parameter adaptation strategies with the classical DE framework (developed in 1995) is compared with two classical and four state-of-the-art adaptive DE variants over 25 standard numerical benchmarks taken from the IEEE Congress on Evolutionary Computation 2005 competition and special session on real parameter optimization. Our comparative study indicates that the proposed schemes improve the performance of DE by a large magnitude such that it becomes capable of enjoying statistical superiority over the state-of-the-art DE variants for a wide variety of test problems. Finally, we experimentally demonstrate that, if one or more of our proposed strategies are integrated with existing powerful DE variants such as jDE and JADE, their performances can also be enhanced.
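The DE/current-to-gr_best/1 mutation described above can be sketched directly: the target is pulled toward the best of a random group of size q% of the population, plus the usual scaled difference of two other members. A minimal sketch (minimization assumed; F and q defaults are illustrative, not the paper's adapted values):

```python
import numpy as np

def current_to_gr_best_1(pop, fitness, i, F=0.8, q=0.15, rng=None):
    """DE/current-to-gr_best/1 mutation sketch: perturb the target
    toward the best of a random group of size q*NP, rather than the
    best of the whole population."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)
    group = rng.choice(n, size=max(1, int(q * n)), replace=False)
    gr_best = group[np.argmin(fitness[group])]   # group best, not global best
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    return pop[i] + F * (pop[gr_best] - pop[i]) + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(1)
pop = rng.standard_normal((20, 4))
fitness = np.sum(pop ** 2, axis=1)   # sphere function, to be minimized
mutant = current_to_gr_best_1(pop, fitness, 0, rng=rng)
```

Because the attractor is only the best of a small random group, the pull toward any single point is weaker than in DE/current-to-best/1, trading some greediness for diversity.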


IEEE Swarm Intelligence Symposium | 2005

Novel composition test functions for numerical global optimization

J. J. Liang; Ponnuthurai N. Suganthan; Kalyanmoy Deb

In the evolutionary optimization field, there exist some algorithms taking advantage of the known property of the benchmark functions, such as local optima lying along the coordinate axes, global optimum having the same values for many variables and so on. Multiagent genetic algorithm (MAGA) is an example for this class of algorithms. In this paper, we identify shortcomings associated with the existing test functions. Novel hybrid benchmark functions, whose complexity and properties can be controlled easily, are introduced and several evolutionary algorithms are evaluated with the novel test functions.
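The shortcomings identified above, local optima lying along the coordinate axes and a global optimum with identical values in every variable, are typically remedied by shifting and rotating the basis functions before composing them. A minimal sketch of that construction on a single shifted, rotated sphere (function names are illustrative; the paper's composition functions blend several such components):

```python
import numpy as np

def make_shifted_rotated(base, shift, rotation):
    """Shift moves the optimum off the origin (so its coordinates all
    differ), and rotation takes the landscape's structure off the
    coordinate axes."""
    def f(x):
        return base(rotation @ (np.asarray(x) - shift))
    return f

def sphere(z):
    return float(np.sum(z ** 2))

rng = np.random.default_rng(0)
d = 5
shift = rng.uniform(-2, 2, d)                     # distinct optimum coordinates
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal rotation
f = make_shifted_rotated(sphere, shift, Q)
```

An algorithm that exploits axis-aligned optima or equal optimum coordinates gains nothing on `f`, which is exactly the property the novel test functions enforce.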

Collaboration

Dive into Ponnuthurai N. Suganthan's collaboration network.

Top co-authors:

- Swagatam Das (Indian Statistical Institute)
- Quan-Ke Pan (Northeastern University)
- Rammohan Mallipeddi (Kyungpook National University)
- Bo-Yang Qu (Zhongyuan University of Technology)
- A.K. Qin (Nanyang Technological University)
- Mostafa Z. Ali (Jordan University of Science and Technology)
- Shi-Zheng Zhao (Nanyang Technological University)