Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Lishan Kang is active.

Publication

Featured research published by Lishan Kang.


Systems, Man, and Cybernetics | 2008

A New Evolutionary Algorithm for Solving Many-Objective Optimization Problems

Xiufen Zou; Yu Chen; Minzhong Liu; Lishan Kang

In this paper, we focus on the study of evolutionary algorithms for solving multiobjective optimization problems with a large number of objectives. First, a comparative study of a newly developed dynamical multiobjective evolutionary algorithm (DMOEA) and some modern algorithms, such as the indicator-based evolutionary algorithm, multiple single objective Pareto sampling, and nondominated sorting genetic algorithm II, is presented by employing the convergence metric and relative hypervolume metric. For three scalable test problems (namely, DTLZ1, DTLZ2, and DTLZ6), which represent some of the most difficult problems studied in the literature, the DMOEA shows good performance in both converging to the true Pareto-optimal front and maintaining a widely distributed set of solutions. Second, a new definition of optimality (namely, L-optimality) is proposed in this paper, which not only takes into account the number of improved objective values but also considers the values of improved objective functions if all objectives have the same importance. We prove that L-optimal solutions are subsets of Pareto-optimal solutions. Finally, the new algorithm based on L-optimality (namely, MDMOEA) is developed, and simulation and comparative results indicate that well-distributed L-optimal solutions can be obtained by utilizing the MDMOEA but cannot be achieved by applying L-optimality to make a posteriori selection within the huge set of Pareto nondominated solutions. We can conclude that our new algorithm is suitable for tackling many-objective problems.
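
For reference, the Pareto dominance relation underlying the Pareto-optimal front discussed above can be sketched in a few lines of Python (minimization is assumed); the paper's L-optimality relation refines this notion by also weighing how many objectives improve, and is not reproduced here.

def pareto_dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Example: (1, 2, 3) dominates (1, 3, 3); neither of (1, 5) and (5, 1) dominates the other.
print(pareto_dominates((1, 2, 3), (1, 3, 3)))  # True
print(pareto_dominates((1, 5), (5, 1)))        # False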


Genetic Programming and Evolvable Machines | 2000

Evolutionary Modeling of Systems of Ordinary Differential Equations with Genetic Programming

Hongqing Cao; Lishan Kang; Yuping Chen; Jingxian Yu

This paper describes an approach to the evolutionary modeling problem of ordinary differential equations including systems of ordinary differential equations and higher-order differential equations. Hybrid evolutionary modeling algorithms are presented to implement the automatic modeling of one- and multi-dimensional dynamic systems respectively. The main idea of the method is to embed a genetic algorithm in genetic programming where the latter is employed to discover and optimize the structure of a model, while the former is employed to optimize its parameters. A number of practical examples are used to demonstrate the effectiveness of the approach. Experimental results show that the algorithm has some advantages over most available modeling methods.
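
The nesting described above (an outer search over model structures, an inner GA over each structure's numeric parameters) can be illustrated with a minimal, self-contained Python sketch. The candidate structures, GA settings, and synthetic data below are illustrative assumptions, and a fixed list of expressions stands in for GP's tree search.

import random

# Synthetic observed trajectory (generated here from y' = -0.5*y with Euler steps)
dt, steps = 0.1, 50
data = [1.0]
for _ in range(steps):
    data.append(data[-1] + dt * (-0.5) * data[-1])

# Outer "structure" layer: a few candidate right-hand sides with free parameters
# (a stand-in for the GP tree search used in the paper).
structures = {
    "a*y":         (1, lambda p, y: p[0] * y),
    "a*y + b":     (2, lambda p, y: p[0] * y + p[1]),
    "a*y*y + b*y": (2, lambda p, y: p[0] * y * y + p[1] * y),
}

def fit_error(rhs, params):
    """Integrate the candidate ODE with Euler steps and accumulate squared error."""
    y, err = data[0], 0.0
    for target in data[1:]:
        y = y + dt * rhs(params, y)
        err += (y - target) ** 2
    return err

def ga_tune(rhs, n_params, pop_size=30, gens=60):
    """Inner GA: optimize the numeric parameters of a fixed model structure."""
    pop = [[random.uniform(-2, 2) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fit_error(rhs, p))
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)])
        pop = parents + children
    best = min(pop, key=lambda p: fit_error(rhs, p))
    return best, fit_error(rhs, best)

for name, (n_params, rhs) in structures.items():
    params, err = ga_tune(rhs, n_params)
    print(f"{name:12s} params={[round(p, 3) for p in params]} error={err:.5f}")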


Theoretical Computer Science | 1999

On the convergence rates of genetic algorithms

Jun He; Lishan Kang

This paper discusses the convergence rates of genetic algorithms by using the minorization condition from Markov chain theory. We classify genetic algorithms into two kinds: one with time-invariant genetic operators, the other with time-variant genetic operators. For the former case, we obtain a bound on the convergence rate on a general state space; for the latter case, we bound the convergence rate on a finite state space.
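
As background, the minorization-condition argument from Markov chain theory that the paper builds on takes the following standard form (a textbook sketch, not the paper's exact bounds): if the transition kernel P satisfies

\[
P(x, A) \;\ge\; \beta\, \nu(A) \qquad \text{for all states } x \text{ and measurable sets } A,
\]

for some $\beta \in (0, 1]$ and probability measure $\nu$, then for the stationary distribution $\pi$,

\[
\bigl\| P^{n}(x, \cdot) - \pi \bigr\|_{\mathrm{TV}} \;\le\; (1 - \beta)^{n}.
\]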


International Symposium on Intelligence Computation and Applications | 2007

A fast particle swarm optimization algorithm with cauchy mutation and natural selection strategy

Changhe Li; Yong Liu; Aimin Zhou; Lishan Kang; Hui Wang

The standard Particle Swarm Optimization (PSO) algorithm is an evolutionary algorithm in which each particle learns from its own previous best solution and the group's previous best to optimize problems. One problem with PSO is its tendency to become trapped in local optima. In this paper, a fast particle swarm optimization (FPSO) algorithm is proposed by combining PSO with a Cauchy mutation and an evolutionary selection strategy. The idea is to introduce the Cauchy mutation into PSO in the hope of preventing PSO from becoming trapped in a local optimum through the long jumps made by the Cauchy mutation. FPSO has been compared with another improved PSO called AMPSO [12] on a set of benchmark functions. The results show that FPSO is much faster than AMPSO on all the test functions.
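
A minimal Python sketch of the general idea, not the exact FPSO of the paper, is given below: a standard PSO velocity and position update, followed by a Cauchy-mutated copy of each particle that replaces the original only if it improves the objective. The mutation scale, coefficient values, and sphere test function are illustrative assumptions.

import math
import random

def sphere(x):
    """Illustrative benchmark objective (minimization)."""
    return sum(v * v for v in x)

def cauchy():
    """Sample from a standard Cauchy distribution (the source of long jumps)."""
    return math.tan(math.pi * (random.random() - 0.5))

dim, swarm_size, iterations = 10, 20, 200
w, c1, c2 = 0.72, 1.49, 1.49          # common PSO coefficients (assumed)
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
vel = [[0.0] * dim for _ in range(swarm_size)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)[:]

for _ in range(iterations):
    for i in range(swarm_size):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        # Cauchy mutation: a long-jump copy of the particle, kept only if better
        mutant = [x + cauchy() for x in pos[i]]
        if sphere(mutant) < sphere(pos[i]):
            pos[i] = mutant
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)[:]

print("best objective value found:", sphere(gbest))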


Computational Biology and Chemistry | 1999

The kinetic evolutionary modeling of complex systems of chemical reactions

Hongqing Cao; Jingxian Yu; Lishan Kang; Yuping Chen; Yongyan Chen

To overcome the drawbacks of most available methods for kinetic analysis, this paper proposes a hybrid evolutionary modeling algorithm called HEMA to automatically build kinetic models, in the form of systems of ordinary differential equations (ODEs), for complex systems of chemical reactions. The main idea of the algorithm is to embed a genetic algorithm (GA) into genetic programming (GP), where GP is employed to optimize the structure of a model, while the GA is employed to optimize its parameters. The experimental results on two chemical reaction systems show that, by running HEMA, the computer can automatically discover kinetic models that are appropriate for describing the kinetic characteristics of the reacting systems. These models not only fit the kinetic data very well, but also give good predictions.


Simulated Evolution and Learning | 2006

A new approach to solving dynamic traveling salesman problems

Changhe Li; Ming Yang; Lishan Kang

The Traveling Salesman Problem (TSP) is one of the classic NP-hard optimization problems. The Dynamic TSP (DTSP) is arguably even more difficult than the general static TSP, and it is arguably even more widely applicable to real-world problems than its static equivalent. However, its investigation is still at a preliminary stage, and there are many open questions to be investigated. This paper proposes an effective algorithm to solve the DTSP. Experiments show that the algorithm is effective, as it can find very high quality solutions even within a very short time step.


International Conference on Evolutionary Multi-Criterion Optimization | 2003

A new MOEA for multi-objective TSP and Its convergence property analysis

Zhenyu Yan; Linghai Zhang; Lishan Kang; Guangming Lin

Evolutionary Multi-objective Optimization (EMO) is becoming a hot research area, and quite a few aspects of Multi-objective Evolutionary Algorithms (MOEAs) have been studied and discussed. However, there is still little literature discussing the roles of the search and selection operators in MOEAs. This paper studies their roles on a representative combinatorial Multi-objective Problem (MOP): the multi-objective TSP. In the new MOEA, we adopt an efficient search operator, which has the properties of both crossover and mutation, to generate new individuals, and we choose two kinds of selection operators, Family Competition and Population Competition, applied with given probabilities to realize selection. The simulation experiments show that this new MOEA obtains good, uniformly distributed solutions representing the Pareto front and outperforms SPEA in almost every simulation run on this problem. Furthermore, we analyze its convergence property using a finite Markov chain and prove that it converges to the Pareto front with probability 1. We also find that the convergence property of MOEAs is closely related to the search and selection operators.
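
A minimal Python sketch of the multi-objective TSP setting studied here: each tour is evaluated against two independent cost matrices, and only mutually nondominated tours are kept as the approximation of the Pareto front. The paper's search operator and the two competition-based selection schemes are not reproduced; the random tours and small instance size are illustrative assumptions.

import random

def tour_costs(tour, cost_a, cost_b):
    """Evaluate one tour under two objectives (total length under each cost matrix)."""
    edges = list(zip(tour, tour[1:] + tour[:1]))
    return (sum(cost_a[i][j] for i, j in edges),
            sum(cost_b[i][j] for i, j in edges))

def dominates(u, v):
    """Pareto dominance between two objective vectors (minimization)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

# Two random cost matrices over 8 cities (purely illustrative)
n = 8
cost_a = [[0 if i == j else random.randint(1, 20) for j in range(n)] for i in range(n)]
cost_b = [[0 if i == j else random.randint(1, 20) for j in range(n)] for i in range(n)]

# Score random tours and keep only the nondominated ones as a Pareto archive
archive = []
for _ in range(500):
    tour = random.sample(range(n), n)
    f = tour_costs(tour, cost_a, cost_b)
    if any(dominates(g, f) for _, g in archive):
        continue
    archive = [(t, g) for t, g in archive if not dominates(f, g)] + [(tour, f)]

for tour, f in sorted(archive, key=lambda item: item[1]):
    print(f, tour)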


Intelligent Systems Design and Applications | 2009

A Scalability Test for Accelerated DE Using Generalized Opposition-Based Learning

Hui Wang; Zhijian Wu; Shahryar Rahnamayan; Lishan Kang

In this paper, a scalability test over eleven scalable benchmark functions, provided by the current workshop (Evolutionary Algorithms and other Metaheuristics for Continuous Optimization Problems - A Scalability Test), is conducted for accelerated DE using generalized opposition-based learning (GODE). The average error of the best individual in the population is reported for dimensions 50, 100, 200, and 500 in order to compare with the results of the other algorithms participating in this workshop. The current work is based on opposition-based differential evolution (ODE) and our previous work on accelerating PSO by generalized opposition-based learning (OBL).
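
A minimal Python sketch of the generalized opposition-based learning step that GODE applies alongside standard differential evolution: for each candidate x within the current bounds [a, b], a generalized opposite point k*(a + b) - x is also evaluated, and the fitter points of the union survive. This is one common form of generalized OBL, assumed here; the DE mutation and crossover themselves are omitted, and the sphere objective, bounds, and population size are illustrative assumptions.

import random

def sphere(x):
    """Illustrative benchmark objective (minimization)."""
    return sum(v * v for v in x)

def generalized_opposite(x, lo, hi, k):
    """Generalized opposition point of x within the bounds [lo, hi], per dimension."""
    return [k * (l + h) - xi for xi, l, h in zip(x, lo, hi)]

dim, pop_size = 10, 20
lo, hi = [-5.0] * dim, [5.0] * dim
population = [[random.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]

# One opposition step: evaluate the population together with its generalized
# opposites and keep the best pop_size individuals of the union.
k = random.random()
opposites = [generalized_opposite(x, lo, hi, k) for x in population]
population = sorted(population + opposites, key=sphere)[:pop_size]

print("best objective value after one opposition step:", sphere(population[0]))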


Journal of Electroanalytical Chemistry | 1999

A new approach to the estimation of electrocrystallization parameters

Jingxian Yu; Hongqing Cao; Yongyan Chen; Lishan Kang; Hanxi Yang

To overcome the drawbacks of traditional methods for estimating electrocrystallization parameters, we propose a genetic algorithm using a novel crossover operator based on the non-convex linear combination of multiple parents to estimate the electrocrystallization parameters A (the nucleation rate constant), N0 (the nucleation density) and D (the diffusion coefficient of Zn2+ ions) simultaneously in the general current-time expression of Scharifker and Mostany for nucleation and growth, by fitting the whole current transients for zinc electrodeposition onto a glassy carbon electrode immersed in acetate solutions. By running the algorithm, we obtained, for different step potentials, D values close to 2.10×10^-6 cm^2 s^-1, which are comparable to reported values. The values of A obtained for all step potentials are identical, 1.41×10^9 s^-1, which indicates that zinc deposition onto a glassy carbon electrode follows three-dimensional instantaneous nucleation and growth. In addition, from the values of N0 obtained, one can observe that an increase in step potential leads to a higher N0. These results show that our algorithm works stably and effectively in solving the problem of estimating the electrocrystallization parameters and, more importantly, it can be extended easily into a general algorithm for estimating multiple parameters in an arbitrary chemical model.
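
The multi-parent crossover mentioned above can be sketched as follows: a child is a linear combination of several parent parameter vectors whose coefficients sum to 1 but are not restricted to [0, 1], so the child may fall outside the parents' convex hull. The coefficient range, the number of parents, and the example parameter vectors (loosely based on the magnitudes reported in the abstract) are illustrative assumptions, not the paper's exact operator.

import random

def nonconvex_crossover(parents, low=-0.5, high=1.5):
    """Non-convex linear combination of several parent vectors.
    Coefficients sum to 1, but individual coefficients may lie outside [0, 1]."""
    m = len(parents)
    coeffs = [random.uniform(low, high) for _ in range(m - 1)]
    coeffs.append(1.0 - sum(coeffs))          # force the coefficients to sum to 1
    dim = len(parents[0])
    return [sum(coeffs[i] * parents[i][d] for i in range(m)) for d in range(dim)]

# Example: recombine three candidate parameter vectors (A, N0, D)
parents = [[1.2e9, 4.0e5, 2.3e-6],
           [1.6e9, 3.1e5, 1.9e-6],
           [1.4e9, 5.2e5, 2.1e-6]]
print(nonconvex_crossover(parents))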


Congress on Evolutionary Computation | 2004

Benchmarking algorithms for dynamic travelling salesman problems

Lishan Kang; Aimin Zhou; Bob McKay; Yan Li; Zhuo Kang

Dynamic optimisation problems are becoming increasingly important; meanwhile, progress in optimisation techniques and in computational resources is permitting the development of effective systems for dynamic optimisation, resulting in a need for objective methods to evaluate and compare different techniques. The search for effective techniques may be seen as a multi-objective problem, trading off time complexity against effectiveness; hence benchmarks must be able to compare techniques across the Pareto front, not merely at a single point. We propose benchmarks for the dynamic travelling salesman problem, adapted from the CHN-144 benchmark of 144 Chinese cities for the static travelling salesman problem. We provide an example of the use of the benchmark and illustrate the information that can be gleaned from analysing algorithm performance on the benchmarks.
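
One simple way to turn a static TSP instance into a dynamic benchmark, given only the description above, is to perturb a few city coordinates at each time step and re-measure the length of the incumbent tour. The Python sketch below illustrates that setting with random cities and a nearest-neighbour tour; it is not the paper's actual benchmark generator, and the instance size, change rate, and heuristic are illustrative assumptions.

import math
import random

def tour_length(tour, cities):
    """Total Euclidean length of a closed tour over the given city coordinates."""
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def nearest_neighbour_tour(cities):
    """Greedy construction heuristic used as the incumbent solution."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(1)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
tour = nearest_neighbour_tour(cities)

# Dynamic benchmark loop: at each step, move a few cities and track how the
# quality of the incumbent tour degrades until it is rebuilt.
for step in range(5):
    for idx in random.sample(range(len(cities)), 3):
        cities[idx] = (random.uniform(0, 100), random.uniform(0, 100))
    stale = tour_length(tour, cities)          # old tour on the changed instance
    tour = nearest_neighbour_tour(cities)      # re-optimize after the change
    fresh = tour_length(tour, cities)
    print(f"step {step}: stale tour {stale:.1f} vs re-optimized {fresh:.1f}")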

Collaboration

Lishan Kang's top co-authors.

Sanyou Zeng, China University of Geosciences
Jun He, Aberystwyth University
Hui Shi, China University of Geosciences
Guang Chen, China University of Geosciences