Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jingqiao Zhang is active.

Publication


Featured research published by Jingqiao Zhang.


IEEE Transactions on Evolutionary Computation | 2009

JADE: Adaptive Differential Evolution With Optional External Archive

Jingqiao Zhang; Arthur C. Sanderson

A new differential evolution (DE) algorithm, JADE, is proposed to improve optimization performance by implementing a new mutation strategy, "DE/current-to-pbest", with an optional external archive and by updating control parameters in an adaptive manner. DE/current-to-pbest is a generalization of the classic "DE/current-to-best", while the optional archive operation utilizes historical data to provide information about the direction of progress. Both operations diversify the population and improve convergence performance. The parameter adaptation automatically updates the control parameters to appropriate values and avoids requiring a user's prior knowledge of the relationship between the parameter settings and the characteristics of optimization problems. It thus helps improve the robustness of the algorithm. Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance on a set of 20 benchmark problems. JADE with an external archive shows promising results for relatively high-dimensional problems. In addition, the results clearly show that no fixed control parameter setting is suitable for various problems, or even for different optimization stages of a single problem.
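The "DE/current-to-pbest/1" mutation with optional archive described above can be sketched as follows. This is an illustrative Python sketch assuming minimization; the function name and interface are hypothetical, not the authors' reference implementation:

```python
import numpy as np

def current_to_pbest(pop, fit, i, F, archive=None, p=0.05, rng=None):
    """Sketch of "DE/current-to-pbest/1" with optional archive (minimization).

    v_i = x_i + F * (x_pbest - x_i) + F * (x_r1 - x_r2),
    where x_pbest is drawn at random from the 100p% best individuals and
    x_r2 is drawn from the union of the population and the archive.
    """
    rng = rng or np.random.default_rng()
    n = len(pop)
    n_top = max(1, int(round(p * n)))
    pbest = pop[rng.choice(np.argsort(fit)[:n_top])]   # one of the p-best
    r1 = rng.choice([j for j in range(n) if j != i])
    union = pop if archive is None or len(archive) == 0 else np.vstack([pop, archive])
    r2 = rng.choice([j for j in range(len(union)) if j not in (i, r1)])
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - union[r2])
```

Setting p = 1/n recovers the classic random-base mutation, while p covering the whole population leans toward "DE/current-to-best", which is the sense in which the strategy generalizes it.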


Congress on Evolutionary Computation | 2009

An adaptive coevolutionary Differential Evolution algorithm for large-scale optimization

Zhenyu Yang; Jingqiao Zhang; Ke Tang; Xin Yao; Arthur C. Sanderson

In this paper, we propose a new algorithm, named JACC-G, for large-scale optimization problems. The motivation is to improve our previous work on the grouping and adaptive weighting based cooperative coevolution algorithm, DECC-G [1], which uses a random grouping strategy to divide the objective vector into subcomponents and solves each of them in a cyclical fashion. The adaptive weighting mechanism is used to adjust all the subcomponents together at the end of each cycle. In the new JACC-G algorithm: (1) a recent and efficient Differential Evolution (DE) variant, JADE [2], is employed as the subcomponent optimizer to seek better performance; (2) the adaptive weighting is time-consuming and expected to work only in the first few cycles, so a detection module is added to prevent applying it arbitrarily; (3) JADE is also used to optimize the weight vector in the adaptive weighting process, instead of the basic DE used in the previous DECC-G. The efficacy of the proposed JACC-G algorithm is evaluated on two sets of widely used benchmark functions of up to 1000 dimensions.
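The random grouping step that JACC-G inherits from DECC-G can be sketched as below: each cycle, a fresh random permutation of the variable indices is split into equal-sized subcomponents, so interacting variables have a chance of landing in the same group. The helper name is illustrative, not taken from the papers:

```python
import numpy as np

def random_grouping(dim, group_size, rng=None):
    """Partition `dim` decision variables into subcomponents of size
    `group_size` via a fresh random permutation (called once per cycle)."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(dim)
    return [perm[k:k + group_size] for k in range(0, dim, group_size)]
```

A 1000-dimensional problem with group size 100 thus yields ten subcomponents per cycle, each handed to the subcomponent optimizer (JADE, in JACC-G).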


World Congress on Computational Intelligence | 2008

Differential evolution for discrete optimization: An experimental study on Combinatorial Auction problems

Jingqiao Zhang; Viswanath Avasarala; Arthur C. Sanderson; Tracy Mullen

Differential evolution (DE) mutates solution vectors by the weighted difference of other vectors using arithmetic operations. As these operations cannot be directly extended to discrete combinatorial space, DE algorithms have traditionally been applied to optimization problems where the search space is continuous. In this paper, we use JADE, a self-adaptive DE algorithm, for winner determination in combinatorial auctions (CAs), where users place bids on combinations of items. To adapt JADE to discrete optimization, we use a rank-based representation schema that produces only feasible solutions and a regeneration operation that constricts the problem search space. It is shown that JADE compares favorably to a local stochastic search algorithm, Casanova, and a genetic algorithm-based approach, SGA.


World Congress on Computational Intelligence | 2008

Self-adaptive multi-objective differential evolution with direction information provided by archived inferior solutions

Jingqiao Zhang; Arthur C. Sanderson

We propose a new self-adaptive differential evolution algorithm for multi-objective optimization problems. To address the challenges in multi-objective optimization, we introduce an archive to store recently explored inferior solutions, whose differences from the current population are utilized as direction information toward the optimum, and we also consider a fairness measure in calculating crowding distances, preferring solutions whose distances to their nearest neighbors are large and close to uniform. As a result, the obtained solutions can spread well over the computed non-dominated front, and the front can be moved quickly toward the Pareto-optimal front. In addition, the control parameters of the algorithm are adjusted in a self-adaptive manner, avoiding parameter tuning for problems of different characteristics. The proposed algorithm, named JADE2, achieves better or at least competitive results compared to NSGA-II and GDE3 on a set of twenty-two benchmark problems.
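For reference, the standard NSGA-II crowding distance that the paper's fairness measure builds on can be sketched as follows. This is only the baseline measure; the paper's fairness-adjusted variant is not reproduced here:

```python
import numpy as np

def crowding_distance(objs):
    """Standard NSGA-II crowding distance for a front of objective
    vectors `objs` (shape: n_solutions x n_objectives).  Boundary
    solutions get infinite distance; interior solutions sum the
    normalized gaps between their neighbors along each objective."""
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = objs[order[-1], k] - objs[order[0], k]
        if span == 0:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (objs[order[j + 1], k] - objs[order[j - 1], k]) / span
    return dist
```

A fairness measure of the kind described would additionally penalize solutions whose neighbor gaps are large but very unequal, favoring a uniformly spread front.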


Congress on Evolutionary Computation | 2007

An approximate Gaussian model of Differential Evolution with spherical fitness functions

Jingqiao Zhang; Arthur C. Sanderson

An analytical method is proposed to study the evolutionary stochastic properties of the population in differential evolution (DE) for a spherical function model. Properties of mutation and selection are developed, based on which a Gaussian approximate model of DE is introduced to facilitate mathematical derivations. The evolutionary dynamics and the convergence behavior of DE are investigated based on the derived analytical formulae and their appropriateness is verified by experimental results. It is shown that the lower limit of mutation factor should be as high as 0.68 to avoid premature convergence if the initial population is isotropically normally distributed and infinitely far from the optimum (i.e., the function landscape becomes a hyper-plane). The lower limit, however, may be decreased if the population becomes closer to the optimum and an accordingly smaller mutation factor is beneficial to speed up the convergence. This motivates future research to improve DE by dynamically adapting control parameters as evolution search proceeds.


Congress on Evolutionary Computation | 2007

DE-AEC: A differential evolution algorithm based on adaptive evolution control

Jingqiao Zhang; Arthur C. Sanderson

A new differential evolution algorithm, DE-AEC, is proposed based on adaptive evolution control utilizing the information provided by a surrogate model. The algorithm is useful for optimization problems with expensive function evaluations, because it can significantly reduce the number of true function evaluations. Specifically, DE-AEC generates multiple offspring for each parent and chooses the promising one based on the accuracy and the predicted function value of the current surrogate model. The model's accuracy is also used as an indicator of potential false convergence, and special measures are taken to improve convergence reliability. Simulation results on a set of fifteen test functions show that, compared to an already improved DE algorithm, DE-AEC reduces the number of true function evaluations by 30%-80% for fourteen functions in achieving either low-level (10^-2) or high-level (10^-8) accuracy.
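The core evolution-control step, generating several offspring per parent and pre-selecting by the surrogate's prediction, can be sketched as below. The interface is hypothetical, and the paper's additional use of the surrogate's accuracy is omitted for brevity:

```python
import numpy as np

def preselect(parent, make_offspring, surrogate, k=3):
    """Generate k candidate offspring for one parent and keep only the
    candidate with the best (lowest) surrogate-predicted value, so that a
    single true (expensive) evaluation is spent per parent."""
    candidates = [make_offspring(parent) for _ in range(k)]
    predictions = [surrogate(c) for c in candidates]
    return candidates[int(np.argmin(predictions))]
```

The true objective is then evaluated only on the pre-selected candidate, which is how the scheme cuts the count of expensive evaluations.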


Archive | 2009

Surrogate Model-Based Differential Evolution

Jingqiao Zhang; Arthur C. Sanderson

In many practical applications, optimization problems have a demanding limitation on the number of function evaluations that are expensive in terms of time, cost and/or other limited resources. Adaptive differential evolution such as JADE is capable of speeding up the evolutionary search by automatically evolving control parameters to appropriate values. However, as a population-based method by nature, adaptive differential evolution still typically requires a great number of function evaluations, which is a challenge for the increasing computational cost of today’s applications. In general, computational cost increases with the size, complexity and fidelity of the problem model and the large number of function evaluations involved in the optimization process may be cost prohibitive or impractical without high performance computing resources. One promising way to significantly relieve this problem is to utilize computationally cheap surrogate models (i.e., approximate models to the original function) in the evolutionary computation.


Archive | 2009

Theoretical Analysis of Differential Evolution

Jingqiao Zhang; Arthur C. Sanderson

Differential evolution has proven to be a simple yet efficient optimization approach since its invention by Storn and Price in 1995 [1], [2]. Despite its success in various practical applications, only a few theoretical results [2], [13], [14], [15] have been obtained concerning its stochastic behavior and most of them focus on the mutation and crossover operations while omitting detailed analysis of selection that is immediately related to objective function values and characteristics. The control parameters of DE, both the mutation factor and the crossover probability, are sensitive to the characteristics of different problems and the varied landscapes of a single problem at different evolutionary search stages. Thus, it is necessary to consider the selection operation in investigating the effect of control parameters on the stochastic convergence behavior of differential evolution.


Archive | 2009

Related Work and Background

Jingqiao Zhang; Arthur C. Sanderson

This chapter introduces background information on several topics. First, the basic concepts of evolutionary algorithms are overviewed in Sect. 2.1. The procedure of classic differential evolution is then described in Sect. 2.2 to serve as a basis for theoretical analysis and algorithm development in later chapters. Different parameter control mechanisms are summarized in Sect. 2.3. Multi-objective optimization is introduced in Sect. 2.4 as an application domain of parameter adaptive differential evolution. Finally, the no-free-lunch theorem and domain knowledge utilization are briefly discussed in Sect. 2.5.


Archive | 2009

Parameter Adaptive Differential Evolution

Jingqiao Zhang; Arthur C. Sanderson

The performance of differential evolution is affected by its control parameters, which in turn are dependent on the landscape characteristics of the objective function. As is clear from extensive experimental studies and theoretical analysis of simple spherical functions, inappropriate control parameter settings may lead to false or slow convergence and therefore degrade the optimization performance of the algorithm. To address this problem, one method is to automatically update control parameters based on feedback from the evolutionary search. As better parameter values tend to generate offspring that are more likely to survive, parameter adaptation schemes usually follow the principle of propagating parameter values that have produced successful offspring solutions.
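A JADE-style adaptation step embodying this principle can be sketched as below: running means of the crossover rate and mutation factor are pulled toward the values that produced surviving offspring, with the mutation factor averaged by a Lehmer mean, which favors larger F values. Names and the learning rate `c` are illustrative:

```python
import numpy as np

def adapt_parameters(mu_cr, mu_f, s_cr, s_f, c=0.1):
    """Update the running means of the crossover rate (mu_cr) and the
    mutation factor (mu_f) from the successful values of the last
    generation (s_cr, s_f).  mu_cr uses an arithmetic mean, mu_f a
    Lehmer mean sum(F^2)/sum(F), which biases toward larger F."""
    if s_cr:
        mu_cr = (1 - c) * mu_cr + c * np.mean(s_cr)
    if s_f:
        s_f = np.asarray(s_f)
        mu_f = (1 - c) * mu_f + c * (np.sum(s_f ** 2) / np.sum(s_f))
    return mu_cr, mu_f
```

New per-individual parameters are then sampled around these means each generation, so settings that keep producing successful offspring come to dominate without any manual tuning.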

Collaboration


Dive into Jingqiao Zhang's collaboration.

Top Co-Authors

Arthur C. Sanderson
Rensselaer Polytechnic Institute

Tracy Mullen
Pennsylvania State University

Zhenyu Yang
University of Science and Technology of China

Ke Tang
University of Science and Technology

Xin Yao
University of Science and Technology