Wei-Jie Yu
Sun Yat-sen University
Publication
Featured research published by Wei-Jie Yu.
IEEE Transactions on Systems, Man, and Cybernetics | 2014
Wei-Jie Yu; Meie Shen; Wei-Neng Chen; Zhi-Hui Zhan; Yue-Jiao Gong; Ying Lin; Ou Liu; Jun Zhang
The performance of differential evolution (DE) largely depends on its mutation strategy and control parameters. In this paper, we propose an adaptive DE (ADE) algorithm with a new mutation strategy, DE/lbest/1, and a two-level adaptive parameter control scheme. The DE/lbest/1 strategy is a variant of the greedy DE/best/1 strategy. However, in DE/lbest/1 the population is mutated under the guidance of multiple locally best individuals instead of the single globally best individual used in DE/best/1. This strategy is beneficial to the balance between fast convergence and population diversity. The two-level adaptive parameter control scheme is implemented mainly in two steps. In the first step, the population-level parameters F_p and CR_p for the whole population are adaptively controlled according to the optimization states, namely, the exploration state and the exploitation state, in each generation. These optimization states are estimated by measuring the population distribution. Then, the individual-level parameters F_i and CR_i for each individual are generated by adjusting the population-level parameters. The adjustment is based on the individual's fitness value and its distance from the globally best individual. In this way, the parameters can be adapted not only to the overall state of the population but also to the characteristics of different individuals. The performance of the proposed ADE is evaluated on a suite of benchmark functions. Experimental results show that ADE generally outperforms four state-of-the-art DE variants on different kinds of optimization problems. The effects of the ADE components, the parameter properties of ADE, the search behavior of ADE, and the parameter sensitivity of ADE are also studied. Finally, we investigate the capability of ADE for solving three real-world optimization problems.
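As a rough illustration of how DE/lbest/1 differs from DE/best/1, the sketch below replaces the single global-best base vector with the best individual of a small neighborhood. The neighborhood definition (a random subset of the population), minimization, and a single scale factor F are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def de_lbest_1(pop, fitness, F=0.5, neighborhood_size=5, rng=None):
    """One generation of DE/lbest/1-style mutation (sketch).

    For each target vector, the base vector is the best individual of a
    small random neighborhood (an assumption; the paper's neighborhood
    definition may differ), plus a scaled difference of two distinct
    random individuals. Minimization is assumed.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n):
        # Locally best individual: best of a random neighborhood.
        neighbors = rng.choice(n, size=neighborhood_size, replace=False)
        lbest = neighbors[np.argmin(fitness[neighbors])]
        # Two distinct random individuals, different from i and lbest.
        candidates = [j for j in range(n) if j not in (i, lbest)]
        r1, r2 = rng.choice(candidates, size=2, replace=False)
        mutants[i] = pop[lbest] + F * (pop[r1] - pop[r2])
    return mutants
```

Compared with DE/best/1, each mutant is attracted to a different local leader, which keeps several promising regions alive at once instead of collapsing the search onto one point.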
Systems, Man and Cybernetics | 2009
Wei-Jie Yu; Xiao-Min Hu; Jun Zhang; Rui-Zhang Huang
In the ant colony system (ACS) algorithm, ants build tours mainly depending on the pheromone information on edges. The parameter settings for pheromone updating in ACS have a direct effect on the performance of the algorithm. However, it is a difficult task to choose proper pheromone decay parameters α and ρ for ACS. This paper presents a novel version of the ACS algorithm that achieves self-adaptive parameter control in the pheromone updating rules. The proposed adaptive ACS (AACS) algorithm employs the Average Tour Similarity (ATS) as an indicator of the optimization state of the ACS. Instead of using fixed values of α and ρ, their values are adaptively adjusted according to the normalized value of ATS. The AACS algorithm has been applied to optimize several benchmark TSP instances. The solution quality and the convergence rate compare favorably with those of the ACS using fixed values of α and ρ. Experimental results confirm that our proposed method is effective and outperforms the conventional ACS.
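A minimal sketch of the idea follows, assuming closed tours and measuring similarity as the Jaccard overlap of their edge sets; the linear mapping from the normalized ATS to α and ρ is illustrative only, not the paper's exact adjustment rule.

```python
import itertools

def average_tour_similarity(tours):
    """Average pairwise similarity of the ants' tours, measured as the
    Jaccard overlap of their edge sets (an assumption; other similarity
    measures are possible). Result lies in [0, 1]."""
    def edges(tour):
        return {frozenset((tour[k], tour[(k + 1) % len(tour)]))
                for k in range(len(tour))}
    edge_sets = [edges(t) for t in tours]
    sims = [len(a & b) / len(a | b)
            for a, b in itertools.combinations(edge_sets, 2)]
    return sum(sims) / len(sims)

def adapt_pheromone_params(ats, lo=0.05, hi=0.5):
    """Map the normalized ATS to the pheromone decay parameters
    (illustrative linear mapping, not the paper's rule): when tours are
    very similar, i.e. the colony is converging, decay more strongly to
    encourage exploration; when tours are diverse, decay less."""
    alpha = lo + ats * (hi - lo)
    rho = lo + ats * (hi - lo)
    return alpha, rho
```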
Systems, Man and Cybernetics | 2018
Ya-Hui Jia; Wei-Neng Chen; Tianlong Gu; Huaxiang Zhang; Hua-Qiang Yuan; Ying Lin; Wei-Jie Yu; Jun Zhang
With the rapid development of e-commerce, the logistics industry has become a crucial component of the e-commerce ecosystem. Impelled by both economic and environmental benefits, logistics companies demand automated tools more urgently than ever. In this paper, a dynamic logistics dispatching system is proposed. The underlying model of the dispatching system is the dynamic vehicle routing problem, which allows new orders to be received as the working day progresses. With this feature, the system becomes more practical than systems based on traditional static vehicle routing models, but it is also more challenging because the vehicles must be scheduled in a dynamic way. The core of the system is a specially designed set-based particle swarm optimization algorithm. According to the characteristics of the problem, a new encoding scheme is defined using sets and possibilities, and a local refinement method is designed to accelerate the convergence of the algorithm. In addition, two more techniques, 1) region partition and 2) an archive strategy, are incorporated into the dispatching system to reduce the complexity of the problem and to facilitate the optimization process, helping the dispatcher control the vehicles in real time. The proposed system is tested on various benchmarks with different scales. Experimental results show that the proposed dispatching system is effective.
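To give a flavor of the set-based encoding, the sketch below assumes a generic set-based PSO in which a position is a crisp set of elements (for example, arcs of a route) and a velocity assigns each element a possibility in [0, 1]; the specific possibility-assignment rule is an assumption, not the operator used in the dispatching system.

```python
import random

def set_based_velocity_update(velocity, position, pbest, gbest,
                              w=0.7, c1=1.5, c2=1.5):
    """Rough sketch of a set-based velocity update.

    Assumptions throughout: `position`, `pbest`, and `gbest` are crisp
    sets of elements; `velocity` is a dict mapping elements to
    possibilities in [0, 1]. The assignment rule below is illustrative,
    not the paper's exact operator.
    """
    new_v = {}
    # Inertia term: keep previous possibilities, scaled down by w.
    for e, p in velocity.items():
        new_v[e] = min(1.0, w * p)
    # Cognitive and social terms: elements that appear in pbest/gbest but
    # not in the current position receive a random scaled possibility.
    for coeff, guide in ((c1, pbest), (c2, gbest)):
        for e in guide - position:
            cand = min(1.0, coeff * random.random())
            new_v[e] = max(new_v.get(e, 0.0), cand)
    return new_v
```

A new position would then be built element by element, preferring elements with high possibilities, which is where the local refinement and archive techniques described above come into play.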
Soft Computing | 2018
Wei-Jie Yu; Zhi-Hui Zhan; Jun Zhang
Artificial bee colony (ABC) is a recent swarm intelligence algorithm. Several greedy ABC variants have been developed to enhance the exploitation capability, but greedy variants are usually less reliable and may cause premature convergence, especially without proper control of the greediness degree. In this paper, we propose an adaptive ABC algorithm (AABC), which is characterized by a novel greedy position update strategy and an adaptive control scheme for adjusting the greediness degree. The greedy position update strategy incorporates the information of the top t solutions into the search process of the onlooker bees. Such a greedy strategy is beneficial to fast convergence. To adapt the greediness degree to different optimization scenarios, the proposed adaptive control scheme further adjusts the number of top solutions considered for selection in each iteration of the algorithm. The adjustment is based on the current search tendency of the bees. In this way, by combining the greedy position update process and the adaptive control scheme, the convergence performance and the robustness of the algorithm can be improved at the same time. A set of benchmark functions is used to test the proposed AABC algorithm. Experimental results show that the components of AABC can significantly improve the performance of the classic ABC algorithm. Moreover, AABC performs better than, or at least comparably to, some existing ABC variants as well as other state-of-the-art evolutionary algorithms.
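A minimal sketch of such a greedy onlooker-bee update follows, assuming minimization, the classic single-dimension ABC perturbation, and a guide drawn uniformly from the current top-t solutions; these details are illustrative rather than the paper's exact update equation.

```python
import numpy as np

def onlooker_update_top_t(pop, fitness, t, rng=None):
    """Greedy onlooker-bee position update (sketch): each candidate is
    pulled toward a solution drawn at random from the current top-t
    solutions, on one randomly chosen dimension. Minimization assumed."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    top = np.argsort(fitness)[:t]          # indices of the t best solutions
    new_pop = pop.copy()
    for i in range(n):
        guide = pop[rng.choice(top)]       # greedy guidance from the elite
        j = rng.integers(d)                # dimension to perturb
        phi = rng.uniform(-1.0, 1.0)
        new_pop[i, j] = pop[i, j] + phi * (guide[j] - pop[i, j])
    return new_pop
```

A small t makes the search greedier and faster to converge; a larger t keeps more diversity, which is exactly the quantity the adaptive control scheme tunes.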
Information Sciences | 2018
Wei-Jie Yu; Jing-Yu Ji; Yue-Jiao Gong; Qiang Yang; Jun Zhang
Multimodal optimization problems (MMOPs) require finding multiple optima simultaneously, so population diversity is a critical issue that should be considered when designing an evolutionary optimization algorithm for MMOPs. Taking advantage of evolutionary multiobjective optimization in maintaining good population diversity, this paper proposes a tri-objective differential evolution (DE) approach to solve MMOPs. Given an MMOP, we first transform it into a tri-objective optimization problem (TOP). The three optimization objectives are constructed based on 1) the objective function of the MMOP, 2) the individual distance information measured against a set of reference points, and 3) the shared fitness based on a niching technique. The first two objectives are mutually conflicting, so that the advantage of evolutionary multiobjective optimization can be fully exploited. The population diversity is greatly improved by the third objective, constructed by the niching technique, which is insensitive to niching parameters. Mathematical proofs are given to demonstrate that the Pareto-optimal front of the TOP contains all global optima of the MMOP. Subsequently, DE-based multiobjective optimization techniques are applied to solve the converted TOP. Moreover, a modified solution comparison criterion and an adaptive ranking strategy for DE are introduced to improve the accuracy of the solutions. Experiments have been conducted on 44 benchmark functions to evaluate the performance of the proposed approach. The results show that the proposed approach achieves competitive performance compared with several state-of-the-art multimodal optimization algorithms.
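The structure of the transformation can be sketched as follows; the concrete formulas for the distance-based objective and the shared fitness are simplified assumptions (including non-negative objective values and a single sharing radius), not the definitions used in the paper.

```python
import numpy as np

def tri_objectives(x, f, reference_points, population, sigma_share=0.1):
    """Construct three objectives for one solution x (illustrative sketch):
      f1 -- the original objective value (to be minimized),
      f2 -- a distance term measured against a set of reference points,
      f3 -- a shared fitness that penalizes crowded regions (niching)."""
    f1 = f(x)
    # Distance to the nearest reference point; chosen so that f1 and f2
    # tend to conflict across the search space (assumption).
    f2 = min(np.linalg.norm(x - r) for r in reference_points)
    # Classic fitness sharing: count neighbors within sigma_share and
    # penalize crowding (assumes f(x) >= 0; the paper's definition may differ).
    dists = np.linalg.norm(population - x, axis=1)
    niche_count = np.sum(np.maximum(0.0, 1.0 - dists / sigma_share))
    f3 = f1 * niche_count
    return np.array([f1, f2, f3])
```

Under such a construction, every global optimum of the original problem ends up non-dominated in the transformed space, which is the property the paper proves formally.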
IEEE Transactions on Systems, Man, and Cybernetics | 2018
Yong-Feng Ge; Wei-Jie Yu; Ying Lin; Yue-Jiao Gong; Zhi-Hui Zhan; Wei-Neng Chen; Jun Zhang
Nowadays, large-scale optimization problems are ubiquitous in many research fields. To deal with such problems efficiently, this paper proposes a distributed differential evolution with adaptive mergence and split (DDE-AMS) on subpopulations. The novel mergence and split operators are designed to make full use of the limited population resource, which is important for large-scale optimization. They are performed adaptively based on the performance of the subpopulations. During the evolution, once a subpopulation finds a promising region, the current worst-performing subpopulation merges into it. If the merged subpopulation cannot continuously provide competitive solutions, it is split in half. In this way, the number of subpopulations is adaptively adjusted and better-performing subpopulations obtain more individuals. Thus, the population resource can be adaptively allocated to the subpopulations during the evolution. Moreover, the proposed algorithm is implemented in a parallel master–slave manner. Extensive experiments are conducted on 20 widely used large-scale benchmark functions. Experimental results demonstrate that the proposed DDE-AMS achieves competitive or even better performance compared with several state-of-the-art algorithms. The effects of the DDE-AMS components, adaptive behavior, scalability, and parameter sensitivity are also studied. Finally, we investigate the speedup ratios of DDE-AMS with different computation resources.
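The control flow of the adaptive mergence and split can be sketched roughly as below; the performance measure, the improvement flag, and the stagnation threshold are assumptions made for illustration, not the paper's exact criteria.

```python
def decide_merge_or_split(performance, improved, stagnation, split_threshold=10):
    """Decide the mergence/split action for the next generation (sketch).

    performance: per-subpopulation score, higher is better (assumption)
    improved:    per-subpopulation flag, True if it just found a promising region
    stagnation:  generations since each subpopulation last improved
    Returns ('merge', worst, best), ('split', idx), or None.
    """
    best = max(range(len(performance)), key=performance.__getitem__)
    worst = min(range(len(performance)), key=performance.__getitem__)
    # Mergence: when a subpopulation finds a promising region, the current
    # worst-performing subpopulation merges into it.
    if len(performance) > 1 and improved[best] and best != worst:
        return ('merge', worst, best)
    # Split: a merged subpopulation that can no longer provide competitive
    # solutions is split in half, releasing individuals for exploration.
    if stagnation[best] > split_threshold:
        return ('split', best)
    return None
```

The master process would apply the returned action before dispatching the subpopulations to the slave processes for the next generation.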
Genetic and Evolutionary Computation Conference | 2013
Wei-Jie Yu; Jun Zhang; Wei-Neng Chen
In this paper, we propose a novel greedy position update strategy for the ABC algorithm. The greedy position update strategy is implemented mainly in two steps. In the first step, good solutions randomly chosen from the top t solutions in the current population are used to guide the search process of onlooker bees. In the second step, the new parameter t is adaptively adjusted in each iteration of the algorithm. The adjustment is simply based on determining whether the globally best solution is obtained by the employed bees or the onlooker bees. The effect of the proposed greedy position update strategy is evaluated on a set of benchmark functions. Experimental results show that the proposed strategy can significantly improve the performance of the classic ABC algorithm. In addition, ABC using the proposed strategy exhibits very competitive performance when compared with some existing ABC variants.
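A minimal sketch of such an adjustment rule is shown below; the direction and step size of the change in t are assumptions, since the abstract only states that the decision depends on whether the employed bees or the onlooker bees produced the global best.

```python
def adjust_top_t(t, best_from_onlooker, t_min=1, t_max=20, step=1):
    """Adjust the greediness parameter t once per iteration (sketch).

    If the onlooker bees, which are guided by the top-t solutions,
    produced the current global best, the greedy guidance seems to be
    paying off, so t is reduced to make the search greedier; otherwise
    t is enlarged to restore diversity. Both the direction and the step
    are illustrative assumptions.
    """
    if best_from_onlooker:
        return max(t_min, t - step)
    return min(t_max, t + step)
```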
International Conference on Information Science and Technology | 2017
Jing-Yu Ji; Wei-Jie Yu; Wei-Neng Chen; Zhi-Hui Zhan; Jun Zhang
This paper proposes a novel multi-objective optimization approach for solving multimodal optimization problems (MMOPs). An MMOP at hand is first transformed into a bi-objective optimization problem. The two objectives are constructed to be totally conflicting by using the distance information and the objective function value. In this way, the multiple optima of an MMOP are converted into the non-dominated solutions of the transformed bi-objective optimization problem. Then, multi-objective optimization techniques based on differential evolution are applied to solve the bi-objective problem. In addition, a modified solution comparison criterion is proposed to improve the accuracy level of the final solutions. The performance of the proposed approach is evaluated on a suite of benchmark functions. Experimental results show that the proposed approach is very competitive compared with six state-of-the-art multimodal optimization algorithms on most of the benchmark functions.
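One common way to build two such conflicting objectives is sketched below, with a distance term pulling the objectives apart and the normalized objective value shifting both; the exact construction in the paper may differ, so the formulas, the single reference point, and the normalization bounds are assumptions.

```python
import numpy as np

def bi_objectives(x, f, reference_point, f_min, f_max):
    """Transform one MMOP solution into two conflicting objectives
    (illustrative construction, both to be minimized).

    g1 grows with the distance to a reference point, g2 shrinks with it,
    and both are shifted by the normalized objective value, so only
    solutions with good (low) objective values can remain non-dominated
    anywhere along the distance axis.
    """
    f_norm = (f(x) - f_min) / (f_max - f_min + 1e-12)   # normalized fitness
    dist = np.linalg.norm(np.asarray(x) - np.asarray(reference_point))
    g1 = dist + f_norm
    g2 = -dist + f_norm
    return g1, g2
```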
Congress on Evolutionary Computation | 2017
Xin Situ; Wei-Neng Chen; Yue-Jiao Gong; Ying Lin; Wei-Jie Yu; Zhiwen Yu; Jun Zhang
Taxi dispatch is a critical issue for taxi companies in modern life. This paper formulates the problem as a taxi-passenger matching model and proposes a parallel ant colony optimization algorithm to optimize the model. As the search space is large, we develop a region-dependent decomposition strategy to divide and conquer the problem. To maintain global performance, a critical region is defined to handle the communications and interactions between the subregions. The experimental results verify that the proposed algorithm is effective, efficient, and extensible, and that it outperforms the traditional global-perspective greedy algorithm in terms of both accuracy and efficiency.
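The decomposition idea can be illustrated with a toy sketch that splits the plane along one vertical line and treats points near the border as the critical region; the actual partitioning and critical-region definition in the paper are more general.

```python
def partition_with_critical_region(points, x_split, margin):
    """Region-dependent decomposition (toy sketch using one vertical split).

    Taxis/passengers are (x, y) tuples. Points within `margin` of the
    border form the critical region, which is handled jointly so that
    matches across the border are not lost when the subregions are
    optimized in parallel.
    """
    left, right, critical = [], [], []
    for p in points:
        if abs(p[0] - x_split) <= margin:
            critical.append(p)
        elif p[0] < x_split:
            left.append(p)
        else:
            right.append(p)
    return left, right, critical
```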
IEEE Transactions on Evolutionary Computation | 2017
Zi-Jia Wang; Zhi-Hui Zhan; Ying Lin; Wei-Jie Yu; Hua-Qiang Yuan; Tianlong Gu; Sam Kwong; Jun Zhang
The multimodal optimization problem (MMOP), which aims to find multiple optimal solutions simultaneously, is one of the most challenging problems in optimization. There are two general goals in solving MMOPs. One is to maintain population diversity so as to locate as many global optima as possible, while the other is to increase the accuracy of the solutions found. To achieve these two goals, a novel dual-strategy differential evolution (DSDE) with affinity propagation clustering (APC) is proposed in this paper. The novelties and advantages of DSDE include the following three aspects. First, a dual-strategy mutation scheme is designed to balance exploration and exploitation in generating offspring. Second, an adaptive selection mechanism based on APC is proposed to choose diverse individuals from different optimal regions for locating as many peaks as possible. Third, an archive technique is applied to detect and protect stagnated and converged individuals. These individuals are stored in the archive to preserve the promising solutions found and are reinitialized to explore new areas. The experimental results show that the proposed DSDE algorithm is better than or at least comparable to state-of-the-art multimodal algorithms when evaluated on the benchmark problems from CEC2013, in terms of locating more global optima, obtaining higher-accuracy solutions, and converging faster.
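A rough sketch of a dual-strategy mutation is given below, pairing an exploratory DE/rand/1 rule with an exploitative DE/best/1 rule; the per-individual coin-flip selection and the use of the global best (rather than a niche best identified by APC) are simplifications assumed for illustration.

```python
import numpy as np

def dual_strategy_mutation(pop, fitness, F=0.5, p_exploit=0.5, rng=None):
    """Dual-strategy mutation (sketch): each individual is mutated either
    with an exploratory DE/rand/1 rule or an exploitative DE/best/1 rule.
    The coin-flip choice and the global best guide are assumptions;
    DSDE's actual selection and niche-level bests may differ.
    Minimization assumed."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    best = np.argmin(fitness)
    mutants = np.empty_like(pop)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(others, size=3, replace=False)
        if rng.random() < p_exploit:
            mutants[i] = pop[best] + F * (pop[r1] - pop[r2])   # exploitation
        else:
            mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])     # exploration
    return mutants
```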