Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Zhuo Kang is active.

Publication


Featured research published by Zhuo Kang.


Congress on Evolutionary Computation | 2004

Benchmarking algorithms for dynamic travelling salesman problems

Lishan Kang; Aimin Zhou; Bob McKay; Yan Li; Zhuo Kang

Dynamic optimisation problems are becoming increasingly important; meanwhile, progress in optimisation techniques and in computational resources is permitting the development of effective systems for dynamic optimisation, resulting in a need for objective methods to evaluate and compare different techniques. The search for effective techniques may be seen as a multi-objective problem, trading off time complexity against effectiveness; hence benchmarks must be able to compare techniques across the Pareto front, not merely at a single point. We propose benchmarks for the dynamic travelling salesman problem, adapted from the CHN-144 benchmark of 144 Chinese cities for the static travelling salesman problem. We provide an example of the use of the benchmark, and illustrate the information that can be gleaned from analysing algorithm performance on the benchmarks.
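
A minimal sketch of how a dynamic TSP benchmark in this spirit could be built, by perturbing a static instance over time; the swap-based change schedule and parameter names below are illustrative assumptions, not the CHN-144 specification used in the paper:

```python
import random

def make_dynamic_tsp(cities, swaps_per_change=2, seed=0):
    """Build a dynamic TSP instance by swapping city positions over time.

    `cities` is a list of (x, y) coordinates (e.g. 144 static city locations).
    Each call to `step()` applies a few random swaps, producing a new instance
    whose optimum drifts gradually."""
    rng = random.Random(seed)
    coords = list(cities)

    def step():
        for _ in range(swaps_per_change):
            i, j = rng.randrange(len(coords)), rng.randrange(len(coords))
            coords[i], coords[j] = coords[j], coords[i]
        return list(coords)

    return step

def tour_length(coords, tour):
    """Total Euclidean length of a closed tour over the current coordinates."""
    total = 0.0
    for k in range(len(tour)):
        (x1, y1), (x2, y2) = coords[tour[k]], coords[tour[(k + 1) % len(tour)]]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total
```

An algorithm under test would be re-scored with tour_length after each call to step(), which exposes the trade-off between adaptation time and tour quality.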


International Journal of Computer Mathematics | 2002

A Robust Algorithm for Solving Nonlinear Programming Problems

Yan Li; Lishan Kang; Hugo de Garis; Zhuo Kang; Pu Liu

In this paper, we introduce a new algorithm for solving nonlinear programming (NLP) problems. It extends Guo's algorithm [1] with enhanced capabilities for solving NLP problems. These capabilities include: a) extending the variable subspace, b) adding a search process over subspaces and normalized constraints, c) using an adaptive penalty function, and d) adding the ability to deal with integer NLP problems, 0-1 NLP problems, and mixed-integer NLP problems which have equality constraints. These four enhancements make the algorithm more robust and universal in solving nonlinear programming problems. This paper presents results of numerical experiments which show that the new algorithm is not only more robust and universal than its competitors, but also performs at a higher level than the other algorithms reported in the literature.
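
One of the enhancements listed above, the adaptive penalty function (c), can be illustrated roughly as below; the violation measure and the update rule are assumptions for the sketch, not the formulation used in the paper:

```python
def penalized_objective(f, constraints, penalty):
    """Return a function computing f(x) plus a penalty proportional to the
    total constraint violation. `constraints` are functions g_i with the
    convention g_i(x) <= 0 when feasible (minimization assumed)."""
    def evaluate(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty * violation
    return evaluate

def adapt_penalty(penalty, best_is_feasible, factor=2.0):
    """Simple adaptive rule: tighten the penalty while the best solution is
    infeasible, relax it once the best solution becomes feasible."""
    return penalty * factor if not best_is_feasible else penalty / factor
```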


Technology of Object-Oriented Languages and Systems | 2001

Automatic data mining by asynchronous parallel evolutionary algorithms

Jiandong Li; Zhuo Kang; Yan Li; Hongqing Cao; Pu Liu

How to automatically discover high-level knowledge modeled by complicated functions, ordinary differential equations and difference equations in databases is a very important and difficult task in KDD research. In this paper, high-level knowledge modeled by ordinary differential equations (ODEs) is discovered in dynamic data automatically by an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example is used to demonstrate the potential of APHEMA. The results show that the dynamic models discovered automatically in dynamic data by computer can sometimes compare with the models discovered by humans.
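
A rough sketch of the evaluation step inside such an evolutionary modeling loop: a candidate right-hand side for dx/dt is scored by integrating it and comparing the trajectory with the observed series. The forward Euler integration, the squared-error measure, and the toy data are simplifying assumptions for illustration, not the method of the paper:

```python
def model_error(rhs, times, observed, x0):
    """Score a candidate ODE model dx/dt = rhs(x, t) against observed data.

    Integrates the candidate with a forward Euler step and returns the sum of
    squared deviations from the observations; an evolutionary modeling
    algorithm would minimize this error over a population of candidate models."""
    x, error = x0, 0.0
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        x = x + dt * rhs(x, times[k - 1])   # forward Euler step
        error += (x - observed[k]) ** 2
    return error

# Example: score a hypothetical candidate dx/dt = 1.0 * x on a short toy series.
times = [0.0, 0.1, 0.2, 0.3]
observed = [1.0, 1.1, 1.22, 1.35]
print(model_error(lambda x, t: 1.0 * x, times, observed, observed[0]))
```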


Congress on Evolutionary Computation | 2000

Asynchronous parallelization of Guo's algorithm for function optimization

Lishan Kang; Zhuo Kang; Yan Li; Pu Liu; Yuping Chen

Recently Tao Guo (1999) proposed a stochastic search algorithm in his PhD thesis for solving function optimization problems. He combined the subspace search method (a general multi-parent recombination strategy) with the population hill-climbing method. The former maintains a global search over the whole space, and the latter maintains the convergence of the algorithm. Guo's algorithm has many advantages, such as the simplicity of its structure, the high accuracy of its results, the wide range of its applications, and the robustness of its use. In this paper a preliminary theoretical analysis of the algorithm is given and some numerical experiments are performed using Guo's algorithm to demonstrate the theoretical results. Three asynchronous parallel algorithms with different granularities for MIMD machines are designed by parallelizing Guo's algorithm.
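
A minimal sketch of the two ingredients named above, multi-parent subspace recombination plus population hill-climbing; the coefficient range and the replace-the-worst rule are illustrative assumptions rather than the exact algorithm from Guo's thesis:

```python
import random

def subspace_offspring(parents, low=-0.5, high=1.5, rng=random):
    """Draw a point from the affine subspace spanned by the parents.

    The coefficients sum to 1 but may be negative, so the trial point can lie
    outside the parents' convex hull, which gives the global search effect."""
    coeffs = [rng.uniform(low, high) for _ in parents]
    while abs(sum(coeffs)) < 1e-9:            # avoid dividing by ~0
        coeffs = [rng.uniform(low, high) for _ in parents]
    s = sum(coeffs)
    coeffs = [c / s for c in coeffs]
    dim = len(parents[0])
    return [sum(c * p[d] for c, p in zip(coeffs, parents)) for d in range(dim)]

def minimize(f, population, n_parents=4, iters=10000, rng=random):
    """Population hill-climbing: an offspring only enters the population
    if it is better than the current worst individual."""
    for _ in range(iters):
        parents = rng.sample(population, n_parents)
        child = subspace_offspring(parents, rng=rng)
        worst = max(range(len(population)), key=lambda i: f(population[i]))
        if f(child) < f(population[worst]):
            population[worst] = child
    return min(population, key=f)
```

Asynchronous parallel variants would run this accept/replace loop concurrently on parts of the population, which is the granularity dimension the paper explores.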


International Symposium on Intelligence Computation and Applications | 2007

A new evolutionary decision theory for many-objective optimization problems

Zhuo Kang; Lishan Kang; Xiufen Zou; Minzhong Liu; Changhe Li; Ming Yang; Yan Li; Yuping Chen; Sanyou Zeng

In this paper the authors point out that Pareto Optimality is unfair, unreasonable and imperfect for Many-objective Optimization Problems (MOPs) under the hypothesis that all objectives have equal importance. The key contribution of this paper is a new definition of optimality for MOPs, called Ɛ-optimality, based on a new concept called Ɛ-dominance, which considers not only the difference between the numbers of superior and inferior objectives of two feasible solutions, but also the values of the improved objective functions, under the hypothesis that all objectives in the problem have equal importance. Two new evolutionary algorithms are given, in which Ɛ-dominance is used as a selection strategy and the winning score as an elite strategy for searching for Ɛ-optimal solutions. Two benchmark problems are designed for testing the new concepts of many-objective optimization. Numerical experiments show that the new definition of optimality is better suited to many-objective optimization problems than Pareto Optimality, which is widely used in the evolutionary computation community.
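
A small sketch of a comparison in this spirit, for minimization: it counts how many objectives each solution wins and, in case of a tie, weighs the total improvement against a threshold Ɛ. The exact Ɛ-dominance definition in the paper may differ; this is only an illustrative assumption:

```python
def eps_dominates(fx, fy, eps=0.0):
    """Return True if objective vector fx Ɛ-dominates fy (minimization).

    fx wins outright if it is superior on more objectives than fy; on a tie in
    counts, fx wins only if its total improvement margin exceeds eps."""
    better = [a - b for a, b in zip(fx, fy) if a < b]   # objectives fx improves
    worse  = [a - b for a, b in zip(fx, fy) if a > b]   # objectives fx worsens
    if len(better) != len(worse):
        return len(better) > len(worse)
    return -sum(better) - sum(worse) > eps              # net improvement > eps
```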


Computational Intelligence and Security | 2006

Automatic Programming Methodology for Program Reuse

Zhuo Kang; Yan Li; Lishan Kang

This paper investigates program representation for program reuse. A new gene structure is proposed: head + body + tail, which allows a program to have the necessary complexity and incorporates a learning mechanism into the search process. A new homeotic gene structure is also proposed; it not only makes subroutine calls easy, but also supports automatic programming. Based on the concept of distinct homeotic genes, a multi-cellular structure is proposed, which can be used to describe complex multi-level programs and to implement complex subroutine calls. An estimation-of-distribution mutation operator is proposed to guide the search; it fuses a statistical learning mechanism into the search process to accelerate convergence and improve the quality of solutions. Numerical experiments show that the new automatic programming method is very practical.


World Congress on Computational Intelligence | 2008

Convergence properties of E-optimality algorithms for many-objective optimization problems

Zhuo Kang; Lishan Kang; Changhe Li; Yuping Chen; Minzhong Liu

In this paper the authors point out that Pareto Optimality is unfair, unreasonable and imperfect for Many-objective Optimization Problems (MOPs) under the hypothesis that all objectives have equal importance, and propose a new evolutionary decision theory. The key contribution is a new definition of optimality for MOPs, called E-optimality, based on a new concept called E-dominance, which considers not only the difference between the numbers of superior and inferior objectives of two feasible solutions, but also the values of the improved objective functions, under the hypothesis that all objectives in the problem have equal importance. Two new evolutionary algorithms for finding E-optimal solutions are proposed. Because the E-dominance relation <E is not transitive, a new approach is needed to analyze the convergence properties of the algorithms. A Boolean function, better, is defined and used as a selection strategy. The convergence theorems of the new evolutionary algorithms are proved. Numerical experiments show that the new evolutionary decision theory outperforms Pareto decision theory on many-objective function optimization problems.
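
A small sketch of how a Boolean comparison could drive selection without any global ranking, which matters because a non-transitive relation cannot be sorted consistently; the predicate passed in is a placeholder, not the better function defined in the paper:

```python
import random

def steady_state_step(population, better, mutate, rng=random):
    """One steady-state selection step driven purely by pairwise comparisons.

    Two individuals are drawn at random; the loser of the better() comparison
    is replaced by a mutated copy of the winner. No sorting is performed, so
    the scheme works even when the comparison relation is not transitive."""
    i, j = rng.sample(range(len(population)), 2)
    winner, loser = (i, j) if better(population[i], population[j]) else (j, i)
    population[loser] = mutate(population[winner])
    return population
```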


Computational Intelligence | 2001

Automatic discovery of scientific laws in observed data by asynchronous parallel evolutionary algorithm

Yan Li; Zhuo Kang; Lishan Kang; Hongqing Cao; Pu Liu

How to automatically discover high-level knowledge, such as laws of natural science, in observed data is a very important and difficult task in scientific research. High-level knowledge modeled by ordinary differential equations (ODEs) is discovered in observed dynamic data automatically by an asynchronous parallel evolutionary algorithm called AP-HEMA. A numerical example is used to demonstrate the potential of AP-HEMA. The results show that the dynamic models discovered automatically in the observed dynamic data by computer can sometimes compare with models discovered by humans.


Wuhan University Journal of Natural Sciences | 2009

An improved GT algorithm for solving complicated dynamic function optimization problems

Qing Zhang; Yan Li; Zhuo Kang; Lishan Kang

An improved Guo Tao algorithm (IGT algorithm) is proposed for solving complicated dynamic function optimization problems, and a constrained function optimization benchmark problem with two dynamic parameters is designed. The results achieved by the IGT algorithm are compared with those of the Guo Tao algorithm (GT algorithm) and show that the new algorithm provides better results, preliminarily demonstrating its efficiency in complicated dynamic environments.


International Symposium on Advances in Computation and Intelligence | 2008

A Parallel Self-adaptive Subspace Searching Algorithm for Solving Dynamic Function Optimization Problems

Yan Li; Zhuo Kang; Lishan Kang

In this paper, a parallel self-adaptive subspace searching algorithm is proposed for solving dynamic function optimization problems. The new algorithm, called DSSSEA, uses a re-initialization strategy to gather global information about the landscape when a change in fitness is detected, and a parallel subspace searching strategy to maintain diversity and speed up convergence so that the optimal solution can be found before it changes. Experimental results show that DSSSEA can track the moving optimal solutions of dynamic function optimization problems efficiently.
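
A minimal sketch of the two ideas mentioned in the abstract, detecting a landscape change and then rescattering most of the population; sentinel-based detection and the keep-one-best rule are assumptions for illustration, not necessarily how DSSSEA implements them:

```python
import random

def landscape_changed(f, sentinels, cached, tol=1e-9):
    """Re-evaluate a few fixed sentinel points; if any fitness value differs
    from its cached value, the objective function has changed."""
    return any(abs(f(s) - v) > tol for s, v in zip(sentinels, cached))

def reinitialize(population, bounds, f, rng=random):
    """Keep the current best individual and rescatter the rest uniformly over
    the search space to regain global information about the new landscape."""
    best = min(population, key=f)
    fresh = [[rng.uniform(lo, hi) for lo, hi in bounds]
             for _ in range(len(population) - 1)]
    return [best] + fresh
```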

Collaboration


Dive into Zhuo Kang's collaboration.

Top Co-Authors

Lishan Kang
China University of Geosciences

Changhe Li
China University of Geosciences

Ming Yang
China University of Geosciences