Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Feng Zou is active.

Publication


Featured research published by Feng Zou.


Information Sciences | 2014

Teaching–learning-based optimization with dynamic group strategy for global optimization

Feng Zou; Lei Wang; Xinhong Hei; Debao Chen; Dongdong Yang

Global optimization remains one of the most challenging tasks for evolutionary computation and swarm intelligence. In recent years, there have been some significant developments in these areas regarding the solution of global optimization problems. In this paper, we propose an improved teaching–learning-based optimization (TLBO) algorithm with a dynamic group strategy (DGS) for global optimization problems. Unlike the original TLBO algorithm, DGSTLBO enables each learner to learn from the mean of its corresponding group, rather than the mean of the class, in the teacher phase. Furthermore, each learner employs either the random learning strategy or the quantum-behaved learning strategy within its group in the learner phase. Regrouping occurs dynamically after a certain number of generations, helping to maintain the diversity of the population and discourage premature convergence. To verify the feasibility and effectiveness of the proposed algorithm, experiments are conducted on 18 numerical benchmark functions in 10, 30, and 50 dimensions. The results show that the proposed DGSTLBO algorithm is an effective method for global optimization problems.
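The teacher-phase update described above can be sketched as follows. This is a minimal illustration of the standard TLBO step with the DGS twist of learning from the group mean rather than the class mean; the function name, array shapes, and random-number choices are assumptions for illustration, not details from the paper.

```python
import numpy as np

def teacher_phase_step(group, teacher, rng):
    """One TLBO teacher-phase update for a group of learners.

    In the DGS variant, the mean is taken over the learner's own group
    rather than over the whole class (sketch; details are assumptions).
    """
    mean = group.mean(axis=0)            # group mean, not class mean
    tf = rng.integers(1, 3)              # teaching factor: 1 or 2
    r = rng.random(group.shape)          # per-dimension random weights
    return group + r * (teacher - tf * mean)

rng = np.random.default_rng(0)
group = rng.random((5, 3))               # 5 learners, 3 dimensions
teacher = group[0]                       # best learner acts as teacher
new_group = teacher_phase_step(group, teacher, rng)
```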


Engineering Applications of Artificial Intelligence | 2013

Multi-objective optimization using teaching-learning-based optimization algorithm

Feng Zou; Lei Wang; Xinhong Hei; Debao Chen; Bin Wang

Two major goals in multi-objective optimization are to obtain a set of nondominated solutions as close as possible to the true Pareto front (PF) and to maintain a well-distributed solution set along the Pareto front. In this paper, we propose a teaching-learning-based optimization (TLBO) algorithm for multi-objective optimization problems (MOPs). In our algorithm, we adopt the nondominated sorting concept and the mechanism of crowding distance computation. The teacher of the learners is selected from among the current nondominated solutions with the highest crowding distance values, and the centroid of the nondominated solutions in the current archive is selected as the Mean of the learners. The performance of the proposed algorithm is investigated on a set of benchmark problems and real-life application problems, and the results show that the proposed algorithm is a competitive method for multi-objective optimization.
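The crowding-distance mechanism the abstract adopts (borrowed from NSGA-II) can be sketched as follows; the helper name and the small worked example are illustrative assumptions.

```python
import numpy as np

def crowding_distance(objs):
    """Crowding distance of each solution in an (n x m) objective matrix.

    Standard NSGA-II-style computation: boundary solutions get infinite
    distance; interior solutions sum normalized gaps to their neighbours.
    """
    n, m = objs.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objs[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf   # keep the extremes
        span = objs[order[-1], j] - objs[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (objs[order[k + 1], j]
                               - objs[order[k - 1], j]) / span
    return dist

# four solutions on a simple two-objective trade-off
objs = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.0], [4.0, 1.0]])
d = crowding_distance(objs)
```

Selecting the teacher then amounts to picking a nondominated solution with the largest crowding distance, which biases the search toward sparse regions of the front.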


Information Sciences | 2015

An improved teaching-learning-based optimization algorithm for solving global optimization problem

Debao Chen; Feng Zou; Zheng Li; Jiangtao Wang; Suwen Li

Teaching-learning-based optimization (TLBO) is a recently proposed population-based algorithm that simulates the process of teaching and learning. Compared with other evolutionary algorithms, TLBO has fewer parameters that must be determined during the renewal process, and is very efficient for certain optimization problems. However, as a population-based algorithm, certain complex problems cause TLBO to exhibit local convergence phenomena. Therefore, to improve the global performance of TLBO, we have designed local learning and self-learning methods to enhance the search ability of TLBO. In the learner phase, every individual learns from both the teacher of the current generation and other individuals. Whether these individuals are neighbours or random individuals from the whole class is determined probabilistically. In the self-learning phase, individuals either renew their positions according to their own gradient information, or randomly exploit new positions according to a design based on the means and variances. To maintain local diversity, all individuals are rearranged after a set number of iterations. The proposed algorithm is tested on a number of functions, and its performance is compared with that of other well-known optimization algorithms. The results indicate that the improved TLBO attains good performance.


Neurocomputing | 2014

An improved teaching-learning-based optimization with neighborhood search for applications of ANN

Lei Wang; Feng Zou; Xinhong Hei; Dongdong Yang; Debao Chen; Qiaoyong Jiang

The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. The performance of TLBO is maintained by the teaching and learning process, but when the learners cannot find a better position than the old one over some successive iterations, the population might become trapped in local optima. In this paper, an improved teaching-learning-based optimization algorithm with neighborhood search (NSTLBO) is presented. In the proposed method, a ring neighborhood topology is introduced into the original TLBO algorithm to maintain the exploration ability of the population. Unlike the traditional method of utilizing global information, the mutation of each learner is restricted to a certain neighboring area so as to fully utilize the whole space and avoid over-congestion around local optima. Moreover, a mutation operation is introduced into NSTLBO during duplicate elimination in order to maintain the diversity of the population. To verify the performance of the proposed algorithm, thirty-two benchmark functions are utilized. Finally, three application problems of artificial neural networks (ANNs) are examined. The results on the thirty-two benchmark functions and the three ANN applications indicate that the proposed algorithm yields promising outcomes.
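The ring neighborhood topology mentioned above can be sketched as a simple index computation; the helper name and the `radius` parameter are assumptions for illustration, not details from the paper.

```python
def ring_neighbours(i, n, radius=1):
    """Indices of learner i's neighbours on a ring of n learners.

    Each learner only interacts with learners within `radius` steps
    along the ring, rather than with the whole class (sketch).
    """
    return [(i + k) % n for k in range(-radius, radius + 1) if k != 0]

# learner 0 in a class of 10 sees learners 9 and 1 on the ring
```

Restricting each learner's update to such a neighbourhood slows the spread of information around the ring, which is what preserves population diversity.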


Neural Computing and Applications | 2014

A hybridization of teaching–learning-based optimization and differential evolution for chaotic time series prediction

Lei Wang; Feng Zou; Xinhong Hei; Dongdong Yang; Debao Chen; Qiaoyong Jiang; Zijian Cao

Chaotic time series prediction problems have some very interesting properties, and their prediction has received increasing interest in recent years. Prediction of chaotic time series based on phase space reconstruction theory has been applied in many research fields. It is well known that prediction of a chaotic system is a nonlinear, multivariable and multimodal optimization problem for which global optimization techniques are required in order to avoid local optima. In this paper, a new hybrid algorithm named teaching–learning-based optimization (TLBO)–differential evolution (DE), which integrates TLBO and DE, is proposed for chaotic time series prediction. DE is incorporated to update the previous best positions of individuals, forcing TLBO to jump out of stagnation by virtue of its strong searching ability. The proposed hybrid algorithm speeds up convergence and improves the algorithm's performance. To demonstrate the effectiveness of our approach, ten benchmark functions and three typical chaotic nonlinear time series prediction problems are used in the simulations. The conducted experiments indicate that TLBO–DE performs significantly better than, or at least comparably to, TLBO and some other algorithms.
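The DE component the hybrid builds on can be sketched with the classic DE/rand/1/bin step; in the paper it is applied to the learners' previous best positions, while this sketch shows only the standard operator (function name and parameter defaults are assumptions).

```python
import numpy as np

def de_rand_1(pop, f=0.5, cr=0.9, rng=None):
    """One classic DE/rand/1/bin step over a population.

    Each individual gets a mutant built from three distinct random
    individuals, then binomial crossover mixes mutant and parent.
    """
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    trial = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])   # differential mutation
        mask = rng.random(d) < cr                 # binomial crossover
        mask[rng.integers(d)] = True              # at least one gene crosses
        trial[i, mask] = mutant[mask]
    return trial

rng = np.random.default_rng(1)
pop = rng.random((6, 4))
trial = de_rand_1(pop, rng=rng)
```

In the hybrid, a selection step would keep each trial vector only if it improves on the stored best position, which is what pushes stagnated learners to new regions.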


Applied Soft Computing | 2015

Teaching-learning-based optimization with learning experience of other learners and its application

Feng Zou; Lei Wang; Xinhong Hei; Debao Chen

Highlights: the learning experience of other learners is introduced into TLBO to improve its performance; a random learning strategy is designed for both the Learner Phase and the Teacher Phase; 18 benchmark functions and two real-world problems are used in the experimental study; the results indicate that the proposed algorithm yields promising outcomes. To improve the global performance of the standard teaching-learning-based optimization (TLBO) algorithm, an improved TLBO algorithm (LETLBO) with the learning experience of other learners is proposed in this paper. In LETLBO, two random probabilities are used to determine the learning methods of learners in different phases. In the Teacher Phase, learners improve their grades by utilizing the mean information of the class and the learning experience of other learners according to a random probability. In the Learner Phase, a learner acquires knowledge either from another learner randomly selected from the whole class or from the mutual learning experience of two randomly selected learners, again according to a random probability. Moreover, the area copying operator used in the Producer-Scrounger model is applied to part of the learners to increase the learning speed. The feasibility and effectiveness of the proposed algorithm are tested on 18 benchmark functions and two practical optimization problems. The merits of the improved method are compared with those of some other evolutionary algorithms (EAs); the results show that the proposed algorithm is an effective method for global optimization problems.


Applied Soft Computing | 2012

An improved group search optimizer with operation of quantum-behaved swarm and its application

Debao Chen; Jiangtao Wang; Feng Zou; Weibo Hou; Chunxia Zhao

Group search optimizer (GSO) is a novel swarm intelligence (SI) algorithm for continuous optimization problems. The framework of the algorithm is mainly based on the producer-scrounger (PS) model. Compared with ant colony optimization (ACO) and particle swarm optimization (PSO), GSO places more emphasis on imitating the searching behavior of animals. In the standard GSO algorithm, more than 80% of the individuals are chosen as scroungers, and the producer is their one and only destination. When the producer cannot find a better position than the old one over some successive iterations, the scroungers will almost all move to the same place, and the group might become trapped in local optima, even though a small number of rangers are used to improve its diversity. To improve the convergence performance of GSO, an improved GSO optimizer that applies a quantum-behaved operator to scroungers according to a certain probability is presented in this paper. In the method, the scroungers are divided into two parts: the scroungers in the first part update their positions with the operators of QPSO, and the remainder keep searching for opportunities to join the resources found by the producer. The operators of QPSO are utilized to improve the diversity of the population. The improved GSO algorithm (IGSO) is tested on several benchmark functions and applied to train a single multiplicative neuron model. The results of the experiments indicate that IGSO is competitive with some other EAs.
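The quantum-behaved operator applied to part of the scroungers can be sketched with the standard QPSO position update; the function name, the contraction-expansion coefficient `beta`, and the example shapes are assumptions for illustration, not values from the paper.

```python
import numpy as np

def qpso_update(x, pbest, gbest, mbest, beta=0.75, rng=None):
    """QPSO-style position update (sketch of the quantum-behaved operator).

    Each particle is drawn around a local attractor p, with a jump
    length scaled by its distance to the mean best position mbest.
    """
    rng = rng or np.random.default_rng()
    phi = rng.random(x.shape)
    p = phi * pbest + (1 - phi) * gbest            # local attractor
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

rng = np.random.default_rng(2)
x = rng.random((8, 3))                             # 8 scroungers, 3 dims
pbest = rng.random((8, 3))
gbest = pbest[0]
mbest = pbest.mean(axis=0)
x_new = qpso_update(x, pbest, gbest, mbest, rng=rng)
```

The heavy-tailed `log(1/u)` factor occasionally produces long jumps, which is what lets these scroungers escape the crowd around the producer.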


Information Sciences | 2017

Learning backtracking search optimisation algorithm and its application

Debao Chen; Feng Zou; Renquan Lu; Peng Wang

The backtracking search algorithm (BSA) is a recently proposed evolutionary algorithm (EA) that has been used for solving optimisation problems. The structure of the algorithm is simple, and only a single control parameter must be determined. To improve its convergence performance and extend its application domain, a new algorithm called the learning BSA (LBSA) is proposed in this paper. In this method, the globally best information of the current generation and the historical information in the BSA are combined to renew individuals according to a random probability, and the remaining individuals have their positions renewed by learning knowledge from the best individual, the worst individual, and another random individual of the current generation. The algorithm has two main advantages. First, some individuals update their positions under the guidance of the best individual (the teacher), which makes convergence faster; second, learning from different individuals, and especially avoiding the worst individual, increases the diversity of the population. To test the performance of the LBSA, benchmark functions from CEC2005 and CEC2014 were used, and the algorithm was also applied to train artificial neural networks for chaotic time series prediction and nonlinear system modelling problems. To evaluate LBSA against some other EAs, several comparisons between LBSA and classical algorithms were conducted. The results indicate that LBSA performs well relative to the other algorithms and improves on the performance of BSA.
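The baseline operator that LBSA starts from, the standard BSA mutation toward a historical population, can be sketched as follows; the function name and example shapes are illustrative assumptions, and LBSA's additional best/worst-guided learning rules are not reproduced here.

```python
import numpy as np

def bsa_mutation(pop, hist_pop, rng=None):
    """Standard BSA mutation: move the population toward a historical
    population snapshot, scaled by a random amplitude factor F."""
    rng = rng or np.random.default_rng()
    f = 3.0 * rng.standard_normal()        # amplitude control factor
    return pop + f * (hist_pop - pop)

rng = np.random.default_rng(3)
pop = rng.random((6, 4))                   # current population
hist = rng.random((6, 4))                  # historical population snapshot
mutant = bsa_mutation(pop, hist, rng=rng)
```

Because `hist_pop` comes from an earlier generation, this single operator gives BSA its memory, and it is also why the algorithm needs only one control parameter.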


Applied Soft Computing | 2011

A multi-objective endocrine PSO algorithm and application

Debao Chen; Feng Zou; Jiangtao Wang

A novel multi-objective endocrine particle swarm optimization algorithm (MOEPSO), based on the regulation of the endocrine system, is proposed. In the method, the releasing hormone (RH) of the endocrine system is encoded as the particle swarm and is supervised by the corresponding stimulating hormone (SH). For multi-objective problems, the new SH is composed of the Pareto optimal solutions determined by the feedback of the RH and SH of the current generation. In each generation, the RH is divided into different classes according to the SH; the best positions of the different classes, the best position of the current generation, and the best positions that the particles have achieved so far are used simultaneously to generate the new RH. The effectiveness of the method is tested by simulation experiments on some unconstrained and constrained benchmark multi-objective Pareto optimization problems. The results indicate that the designed method is efficient for some multi-objective optimization problems.


Neurocomputing | 2016

Teaching-learning-based optimization with variable-population scheme and its application for ANN and global optimization

Debao Chen; Renquan Lu; Feng Zou; Suwen Li

A teaching-learning-based optimization algorithm (TLBO) that uses a variable population size in a triangular form (VTTLBO) is proposed in this paper. The main goal of the proposed method is to decrease the computational cost of the original TLBO and to extend it to optimizing the parameters of artificial neural networks (ANNs). In the proposed algorithm, the evolutionary process is divided into equal periods according to the maximal generation number, and the population size in each period changes in the form of a triangle. In the linearly increasing phase of the population size, new individuals are generated from a Gaussian distribution using the adaptive mean and variance of the population. In the linearly decreasing phase, some highly similar individuals are deleted. To compare the performance of the proposed method with other methods, a saw-tooth teaching-learning-based optimization algorithm is also designed by simulating the basic principle of the saw-tooth genetic algorithm (STGA), and some other EAs with fixed population sizes are also simulated. A variety of benchmark problems, as well as system modeling and prediction problems with ANNs, are tested in this paper; the results indicate that the computational cost of the given method is small and that its convergence accuracy and speed are high.
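The rise-then-fall population schedule described above can be sketched as a simple per-generation size function; the exact shape of the triangle, the helper name, and the parameter values are assumptions for illustration.

```python
def triangular_sizes(min_size, max_size, period, generations):
    """Population size per generation under a triangular schedule.

    Within each period the size ramps linearly up to max_size, then
    linearly back down to min_size (sketch of the VTTLBO idea).
    """
    sizes = []
    half = period // 2
    for g in range(generations):
        t = g % period
        if t <= half:                      # linear increase phase
            s = min_size + (max_size - min_size) * t / half
        else:                              # linear decrease phase
            s = max_size - (max_size - min_size) * (t - half) / (period - half)
        sizes.append(round(s))
    return sizes

# e.g. oscillate between 10 and 30 learners with a 10-generation period
schedule = triangular_sizes(10, 30, 10, 20)
```

New Gaussian-sampled individuals would be added while the size grows, and near-duplicates removed while it shrinks, which is where the computational savings come from.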

Collaboration


Dive into Feng Zou's collaborations.

Top Co-Authors

Debao Chen
Huaibei Normal University

Renquan Lu
Guangdong University of Technology

Xinhong Hei
Beijing Jiaotong University

Jiangtao Wang
Huaibei Normal University

Suwen Li
Huaibei Normal University

Peng Wang
Huaibei Normal University

Wujie Yuan
Huaibei Normal University

Xude Wang
Huaibei Normal University