Publication


Featured research published by Chi Kin Chow.


IEEE Transactions on Evolutionary Computation | 2009

A Genetic Algorithm That Adaptively Mutates and Never Revisits

Shiu Yin Yuen; Chi Kin Chow

A novel genetic algorithm is reported that is non-revisiting: it remembers every position that it has searched before. An archive is used to store all the solutions that have been explored. Unlike other memory schemes in the literature, a novel binary space partitioning tree archive design is advocated. Not only is the design an efficient method to check for revisits; it also constitutes, in itself, a novel adaptive mutation operator that has no parameter. To demonstrate the power of the method, the algorithm is evaluated on 19 well-known benchmark functions. The results are as follows. (1) Though it only uses finite-resolution grids, when compared with a canonical genetic algorithm, a generic real-coded genetic algorithm, a canonical genetic algorithm with a simple diversity mechanism, and three particle swarm optimization algorithms, it shows a significant improvement. (2) The new algorithm also shows superior performance compared to the covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art method for adaptive mutation. (3) It can work with problems that have large search spaces, with dimensions as high as 40. (4) The CPU overhead of the binary space partitioning tree design is insignificant for applications with expensive or time-consuming fitness evaluations, and for such applications the memory usage due to the archive is acceptable. (5) Though the adaptive mutation is parameter-less, it shows and maintains stable, good performance, whereas the performance of the other algorithms compared is highly dependent on suitable parameter settings.
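
The following is a minimal sketch of the non-revisiting idea: an archive remembers every evaluated solution at a finite grid resolution, rejects revisits, and redirects a revisiting offspring to a nearby unvisited cell. A hash set over grid cells stands in for the paper's binary space partitioning tree, and the class name, the assumed [0, 1]^d search space, and the resolution parameter are all illustrative assumptions rather than the authors' design.

```python
# Minimal sketch of a non-revisiting archive on a finite-resolution grid.
# The paper uses a binary space partitioning (BSP) tree; a hash set over
# grid cells is used here only to illustrate the revisit check.
import random


class NonRevisitingArchive:
    def __init__(self, resolution=1024):
        self.resolution = resolution      # grid cells per dimension
        self.visited = set()              # grid cells already evaluated

    def _cell(self, x):
        # Map a real-valued solution in [0, 1]^d to a grid cell.
        return tuple(min(int(xi * self.resolution), self.resolution - 1) for xi in x)

    def is_revisit(self, x):
        return self._cell(x) in self.visited

    def record(self, x):
        self.visited.add(self._cell(x))

    def mutate_to_unvisited(self, x, rng=random):
        # Parameter-less "adaptive mutation" stand-in: nudge one coordinate a
        # cell at a time until an unvisited cell is reached (the BSP tree does
        # this far more efficiently by descending to an unvisited region).
        cell = list(self._cell(x))
        d = len(cell)
        while tuple(cell) in self.visited:
            i = rng.randrange(d)
            cell[i] = min(max(cell[i] + rng.choice((-1, 1)), 0), self.resolution - 1)
        return [(c + 0.5) / self.resolution for c in cell]
```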


IEEE Transactions on Evolutionary Computation | 2011

An Evolutionary Algorithm That Makes Decision Based on the Entire Previous Search History

Chi Kin Chow; Shiu Yin Yuen

In this paper, we report a novel evolutionary algorithm that enhances its performance by utilizing the entire previous search history. The proposed algorithm, named the history driven evolutionary algorithm (HdEA), employs a binary space partitioning tree structure to memorize the positions and the fitness values of the evaluated solutions. Benefiting from the space partitioning scheme, a fast fitness function approximation using the archive is obtained. The approximation is used to improve the mutation strategy in HdEA. The resultant mutation operator is parameter-less, anisotropic, and adaptive. Moreover, the mutation operator naturally avoids the generation of out-of-bound solutions. The performance of HdEA is tested on 34 benchmark functions with dimensions ranging from 2 to 40. We also provide a performance comparison of HdEA against eight benchmark evolutionary algorithms, including a real-coded genetic algorithm, differential evolution, two improved differential evolution variants, the covariance matrix adaptation evolution strategy, two improved particle swarm optimization variants, and an estimation of distribution algorithm. The experimental results show that HdEA outperforms the other algorithms for multimodal function optimization.
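
A rough sketch of the history-driven idea follows: the archive of already evaluated solutions provides a cheap fitness estimate, and that estimate is used to pick the most promising of several trial mutants before spending a real evaluation. A brute-force nearest-neighbor estimate stands in for the BSP-tree-based approximation described in the abstract; all function names, the Gaussian mutation, and the parameters are illustrative assumptions. Minimization is assumed.

```python
# Sketch of archive-guided mutation in the spirit of HdEA (not the paper's
# exact operator): estimate fitness from the search history, then keep the
# trial mutant with the best estimate.
import math
import random


def approx_fitness(archive, x):
    # Nearest-neighbor fitness estimate from the search history.
    # archive: list of (position, fitness) pairs already evaluated.
    nearest = min(archive, key=lambda rec: math.dist(rec[0], x))
    return nearest[1]


def history_guided_mutation(archive, parent, sigma=0.1, trials=5, rng=random):
    # Generate several Gaussian perturbations of the parent and keep the one
    # whose *estimated* fitness is best, so only one real evaluation is
    # spent per offspring.
    candidates = [[xi + rng.gauss(0.0, sigma) for xi in parent] for _ in range(trials)]
    return min(candidates, key=lambda c: approx_fitness(archive, c))


if __name__ == "__main__":
    def sphere(x):                                    # toy objective (minimize)
        return sum(xi * xi for xi in x)

    rng = random.Random(0)
    archive = []
    for _ in range(20):                               # seed the search history
        p = [rng.uniform(-5, 5) for _ in range(2)]
        archive.append((p, sphere(p)))
    child = history_guided_mutation(archive, [1.0, -2.0], rng=rng)
    archive.append((child, sphere(child)))            # every real evaluation enters the history
```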


Congress on Evolutionary Computation | 2007

A non-revisiting Genetic Algorithm

Shiu Yin Yuen; Chi Kin Chow

The Genetic Algorithm (GA) is a revisiting stochastic algorithm: a solution that has been visited before may be visited again, and its fitness has to be evaluated each time. Since fitness evaluation is the most computationally intensive process in the execution of the GA, revisits should be minimized or eliminated. In this paper, a novel dynamic binary partitioning tree archive is proposed to eliminate all revisits. It works as follows. When the GA generates a solution, the tree is accessed. A leaf node is appended to the tree if the solution has not been visited before and so has no record in the tree. Otherwise, a search is initiated from the leaf node that duplicates the solution to find the nearest neighbor solution in the search space that has not been visited. During this process, whole sub-trees may be pruned if all the leaf nodes they contain have been visited. The search naturally implements a self-adaptive mutation mechanism, so the GA requires no other mutation parameter or mutation scheme. Experimental results reveal that this new GA is superior in performance to the standard GA with revisits, and that the tree archive is not memory intensive.
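
The tree mechanism can be pictured with the compact sketch below: each leaf stores one evaluated solution, inserting a distinct solution into an occupied leaf splits the region between the two points, and inserting a duplicate is reported as a revisit. The splitting rule shown, and the omission of the nearest-unvisited-neighbor search and sub-tree pruning described above, are simplifications, so this is an illustrative assumption rather than the paper's exact archive.

```python
# Sketch of a binary partitioning archive node; not the paper's exact design.
class BSPNode:
    def __init__(self):
        self.point = None      # solution stored at a leaf (None if empty or internal)
        self.dim = None        # splitting dimension (internal nodes only)
        self.split = None      # splitting threshold (internal nodes only)
        self.left = None
        self.right = None

    def insert(self, x):
        """Insert solution x; return False if x is an exact revisit."""
        if self.dim is not None:                          # internal node: descend
            child = self.left if x[self.dim] <= self.split else self.right
            return child.insert(x)
        if self.point is None:                            # empty leaf: store x
            self.point = list(x)
            return True
        if self.point == list(x):                         # revisit detected
            return False
        # Split this leaf along the dimension where x and the stored point
        # differ most, at their midpoint, and push both points into the children.
        self.dim = max(range(len(x)), key=lambda i: abs(x[i] - self.point[i]))
        self.split = (x[self.dim] + self.point[self.dim]) / 2.0
        old, self.point = self.point, None
        self.left, self.right = BSPNode(), BSPNode()
        self.left.insert(old if old[self.dim] <= self.split else list(x))
        self.right.insert(list(x) if old[self.dim] <= self.split else old)
        return True


# Usage: the second insertion of the same point is flagged as a revisit.
root = BSPNode()
assert root.insert([0.2, 0.7]) is True
assert root.insert([0.2, 0.7]) is False
```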


IEEE Transactions on Evolutionary Computation | 2012

A Multiobjective Evolutionary Algorithm That Diversifies Population by Its Density

Chi Kin Chow; Shiu Yin Yuen

Most existing multiobjective evolutionary algorithms (MOEAs) assume the existence of Pareto-optimal solutions/Pareto-optimal objective vectors in a neighborhood of an obtained Pareto-optimal set (PS)/Pareto-optimal front (PF). Obviously, this assumption does not work well on multiobjective problems (MOPs) whose true PF and true PS take the form of multiple segments, i.e., truly disconnected MOPs (TYD-MOPs). Moreover, these MOEAs commonly involve more than three control parameters, and some of them even involve nine control parameters; the stability of their performance with respect to parameter settings is generally unknown. In this paper, we propose a MOEA, namely the multiobjective density driven evolutionary algorithm (MODdEA), which can handle TYD-MOPs. MODdEA stores all evaluated solutions in a binary space partitioning (BSP) tree. Benefiting from the BSP scheme, a fast solution density estimate from the archive is naturally obtained. MODdEA uses this estimated density together with the nondominated rank to probabilistically select mating individuals, which relaxes the neighborhood assumption on the PF in a parameter-less manner. Moreover, two genetic operators, extended arithmetic crossover and diversified mutation, are proposed to enhance the explorative search ability of the algorithm. MODdEA is examined on two test problem sets. The first consists of six TYD-MOPs; the second consists of 17 benchmark MOPs commonly examined by existing MOEAs. Compared with 14 test MOEAs, MODdEA has superior performance on TYD-MOPs and is competitive on MOPs whose true PF and PS form a single connected segment.
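
The density-and-rank selection idea can be sketched as follows: individuals in sparse, little-explored regions and with good nondomination rank receive higher mating probability. A fixed-radius count over the archive stands in for the BSP-tree density estimate, and the particular weighting formula is an illustrative assumption, not the formula used in the paper.

```python
# Sketch of density- and rank-driven mating selection in the spirit of MODdEA.
import math
import random


def solution_density(archive, x, radius=0.5):
    """Number of archived (previously evaluated) solutions near x."""
    return sum(1 for p in archive if math.dist(p, x) <= radius)


def select_mating_pair(population, ranks, archive, rng=random):
    # population: decision vectors; ranks: nondomination rank per individual
    # (0 is best). Lower rank and lower local density both raise the selection
    # weight, pushing the search toward sparse regions of the front.
    weights = [
        1.0 / ((1 + ranks[i]) * (1 + solution_density(archive, population[i])))
        for i in range(len(population))
    ]
    return rng.choices(population, weights=weights, k=2)
```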


Pattern Recognition Letters | 2007

A fast marching formulation of perspective shape from shading under frontal illumination

Shiu Yin Yuen; Yuen Yan Tsui; Chi Kin Chow

An adaptation of the fast marching method to perspective Shape from Shading under frontal illumination is proposed, together with a heuristic to handle occlusion. The method outperforms that of Tankus et al. [Tankus, A., Sochen, N., Yeshurun, Y., 2005. Shape-from-shading under perspective projection. Internat. J. Comput. Vision 63(1), 21-43] in both time and accuracy. Accurate methods for generating test images are also reported.


Genetic and Evolutionary Computation Conference | 2013

Which algorithm should I choose at any point of the search: an evolutionary portfolio approach

Shiu Yin Yuen; Chi Kin Chow; Xin Zhang

Many good evolutionary algorithms have been proposed in the past. However, given a problem, one is frequently at a loss as to which algorithm to choose. In this paper, we propose a novel algorithm portfolio approach to address this problem. A portfolio of evolutionary algorithms is first formed. Artificial Bee Colony (ABC), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Composite DE (CoDE), Particle Swarm Optimization (PSO2011) and Self-adaptive Differential Evolution (SaDE) are chosen as component algorithms. Each algorithm runs independently with no information exchange. At any point in time, the algorithm with the best predicted performance is run for one generation, after which the performance is predicted again. The best algorithm runs for the next generation, and the process goes on. In this way, algorithms switch automatically as a function of the computational budget. This novel algorithm is named the Multiple Evolutionary Algorithm (MultiEA). Experimental results on the full set of 25 CEC2005 benchmark functions show that MultiEA outperforms (i) the Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO); (ii) the Population-based Algorithm Portfolio (PAP); and (iii) a multiple algorithm approach which chooses an algorithm randomly (RandEA). The properties of the prediction measures are also studied. The proposed portfolio approach is generic; it can be applied to portfolios composed of non-evolutionary algorithms as well.
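
The portfolio loop can be sketched as below: every component algorithm keeps its own state, and at each step the one with the best predicted performance receives the next generation of the budget. The "prediction" here is simply each algorithm's current best fitness, which is only a placeholder for the convergence-curve-based predictor studied in the paper, and the RandomSearch class is a toy stand-in for the real component algorithms (ABC, CMA-ES, CoDE, PSO2011, SaDE).

```python
# Sketch of a MultiEA-style portfolio loop with a placeholder predictor.
import random


class RandomSearch:
    """Toy stand-in for a component algorithm."""
    def __init__(self, func, dim, seed=0):
        self.func, self.dim = func, dim
        self.rng = random.Random(seed)
        self.best = float("inf")

    def step(self):
        # One "generation": evaluate a handful of random points.
        for _ in range(10):
            x = [self.rng.uniform(-5, 5) for _ in range(self.dim)]
            self.best = min(self.best, self.func(x))
        return self.best


def run_portfolio(algorithms, budget):
    """algorithms: objects with .step() -> best fitness so far (minimization)."""
    history = {a: float("inf") for a in algorithms}
    for a in algorithms:                # give every algorithm one generation first
        history[a] = a.step()
        budget -= 1
    while budget > 0:
        best = min(algorithms, key=lambda a: history[a])   # predicted winner
        history[best] = best.step()                        # run it for one generation
        budget -= 1
    return min(history.values())


if __name__ == "__main__":
    def sphere(x):
        return sum(xi * xi for xi in x)
    portfolio = [RandomSearch(sphere, dim=5, seed=s) for s in range(3)]
    print(run_portfolio(portfolio, budget=100))
```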


Applied Soft Computing | 2012

Parameter control system of evolutionary algorithm that is aided by the entire search history

Shing Wa Leung; Shiu Yin Yuen; Chi Kin Chow

In solving problems with evolutionary algorithms (EAs), the performance of an EA is affected by its properties, and since those properties depend on the parameter setting, users need to tune the parameters to optimize performance on different problems. When the user has no prior knowledge of the problem, parameter tuning is very difficult and time consuming: one has to try different combinations of parameter values to find the best setting. One way to address this is to control the parameters during the EA run. This paper proposes a new adaptive parameter control system, called the Parameter Control system using entire Search History (PCSH). It is a general add-on system that is not restricted to a specific class of EA; users are only required to know the range of each parameter. It automatically adjusts the parameters of an EA according to the entire search history, in a parameter-less manner. To illustrate its performance, PCSH is applied to control the parameters of three common classes of EAs: (1) the canonical Genetic Algorithm (GA), (2) Particle Swarm Optimization (PSO) and (3) Differential Evolution (DE). For GA, we show that PCSH can automatically control the crossover operator, the crossover values (uniformly sampled from the range) and the mutation operator. For DE, we show that PCSH can automatically control the crossover operator, the crossover values and the differential amplification factor (uniformly sampled from the ranges). For PSO, we show that PCSH can automatically control the two learning factors and the inertia weight (uniformly sampled from the ranges). Moreover, no special provision is needed at initialization. 34 benchmark functions are used to evaluate the performance comprehensively. The test results show that, on most of the benchmark functions, the performance of the test EAs is improved or comparable after adopting PCSH. This shows that PCSH maintains or improves the performance of the test EAs while relieving the algorithm designer of much of the burden of parameter setting.
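
A minimal sketch of such an add-on controller is given below, assuming only that the host EA can ask for parameter values before producing an offspring and report back whether the offspring improved. Sampling is biased toward values whose past use produced improvements; this credit-assignment rule, the 0.5 exploration probability, and all names are illustrative assumptions, not the mechanism described in the paper.

```python
# Sketch of a generic bolt-on parameter controller in the spirit of PCSH:
# the user supplies only the parameter ranges.
import random


class ParameterController:
    def __init__(self, ranges, memory=50, rng=None):
        self.ranges = ranges                         # e.g. {"F": (0.0, 1.0), "CR": (0.0, 1.0)}
        self.successes = {k: [] for k in ranges}     # values that led to improvement
        self.memory = memory
        self.rng = rng or random.Random()

    def sample(self):
        params = {}
        for name, (lo, hi) in self.ranges.items():
            if self.successes[name] and self.rng.random() < 0.5:
                # Re-use a perturbed, historically successful value.
                base = self.rng.choice(self.successes[name])
                value = base + self.rng.gauss(0.0, 0.1 * (hi - lo))
                params[name] = min(max(value, lo), hi)
            else:
                params[name] = self.rng.uniform(lo, hi)   # explore the full range
        return params

    def feedback(self, params, improved):
        # Record values whose offspring improved on its parent.
        if improved:
            for name, value in params.items():
                self.successes[name].append(value)
                del self.successes[name][:-self.memory]   # keep a bounded history
```

The host EA calls sample() before each variation step and feedback() after evaluating the offspring, so the controller remains independent of the specific EA being controlled.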


International Journal of Computer Vision | 2009

Recovering Shape by Shading and Stereo Under Lambertian Shading Model

Chi Kin Chow; Shiu Yin Yuen

A method that integrates shape from shading and stereo is reported for Lambertian objects. A rectification is proposed to convert any lighting direction from oblique to orthographic. A sparse stereo method is reported that directly uses depth information and has no foreshortening problem. The method completely solves three difficult problems in stereo, namely recovering depth at occlusions, matching at places with similar shading, and matching at smooth silhouettes. The method has been tested on both synthetic and real images and shows superior performance compared with two recent stereo algorithms. It is also a method based on the physics of image formation.


Congress on Evolutionary Computation | 2009

Continuous non-revisiting genetic algorithm

Shiu Yin Yuen; Chi Kin Chow

The non-revisiting genetic algorithm (NrGA) is extended to handle continuous search spaces. The extended model, the continuous NrGA (cNrGA), employs the same tree-structured archive as NrGA to memorize the evaluated solutions, in which the search space is divided into non-overlapping partitions according to the distribution of the solutions. cNrGA is a bi-modulus evolutionary algorithm consisting of the genetic algorithm module (GAM) and the adaptive mutation module (AMM). When GAM generates an offspring, the offspring is sent to AMM and is mutated according to the density of the solutions stored in the memory archive. For a point in the search space with high solution density, it is inferred that the point is likely to be close to the optimum, and hence a near search is suggested; alternatively, a far search is recommended for a point with low solution density. Benefiting from the space partitioning scheme, a fast solution-density approximation is obtained. Also, the adaptive mutation scheme naturally avoids the generation of out-of-bound solutions. The performance of cNrGA is tested on 14 benchmark functions with dimensions ranging from 2 to 40. It is compared with a real-coded GA, differential evolution, the covariance matrix adaptation evolution strategy and two improved particle swarm optimization variants. The simulation results show that cNrGA outperforms the other algorithms for multi-modal function optimization.
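
The near/far search idea can be sketched as follows: dense regions of the search history suggest proximity to an optimum and trigger a small-step (near) mutation, while sparse regions trigger a large-step (far) mutation. The fixed-radius density estimate, the step-size rule, and the box clipping are illustrative assumptions replacing the paper's BSP-partition-based scheme, which avoids out-of-bound offspring by construction.

```python
# Sketch of density-driven mutation in the spirit of cNrGA's AMM.
import math
import random


def density_driven_mutation(archive, x, bounds, rng=random, radius=0.5):
    # Local density: archived solutions within `radius` of x.
    density = sum(1 for p in archive if math.dist(p, x) <= radius)
    # Step size shrinks as density grows (near search) and grows when the
    # region is largely unexplored (far search).
    step = [(hi - lo) / (2.0 * (1 + density)) for lo, hi in bounds]
    child = [xi + rng.gauss(0.0, s) for xi, s in zip(x, step)]
    # Clip to the box as a simple substitute for the partition-based bounds.
    return [min(max(ci, lo), hi) for ci, (lo, hi) in zip(child, bounds)]
```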


World Congress on Computational Intelligence | 2008

A non-revisiting particle swarm optimization

Chi Kin Chow; Shiu Yin Yuen

In this article, a non-revisiting particle swarm optimization (NrPSO) is proposed. NrPSO is an integration of the non-revisiting scheme and a standard particle swarm optimization (PSO). It guarantees that no updated position has been evaluated before. This property leads to two advantages: 1) it reduces the computational cost of evaluating a time-consuming and expensive objective function, and 2) it helps prevent premature convergence. The non-revisiting scheme acts as a self-adaptive mutation, and particles generically switch between local search and global search. In addition, since the adaptive mutation scheme of NrPSO involves no parameter, whereas other PSO variants involve at least two performance-sensitive parameters, the performance of NrPSO is more reliable. The simulation results show that NrPSO outperforms four PSO variants on both uni-modal and multi-modal functions with dimensions up to 40. We also illustrate that the overhead and archive size of NrPSO are insignificant, so NrPSO is practical for real-world applications. In addition, it is shown that the performance of NrPSO is insensitive to the specific parameter values chosen.
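
A single NrPSO-style update can be sketched as below: the standard PSO velocity and position rules are applied, and the resulting position is checked against a finite-resolution record of visited positions; a revisited position is nudged until it lands in an unvisited cell. The coefficients, the [0, 1]^d search space, and the grid-cell archive are illustrative assumptions standing in for the paper's BSP-tree-based adaptive mutation.

```python
# Sketch of one NrPSO-style position update with a revisit check.
import random


def nrpso_update(x, v, pbest, gbest, visited, resolution=1024,
                 w=0.72, c1=1.49, c2=1.49, rng=random):
    cell = lambda p: tuple(min(int(pi * resolution), resolution - 1) for pi in p)
    # Standard PSO velocity and position update (positions assumed in [0, 1]^d).
    v = [w * vi + c1 * rng.random() * (pb - xi) + c2 * rng.random() * (gb - xi)
         for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x = [min(max(xi + vi, 0.0), 1.0) for xi, vi in zip(x, v)]
    # Non-revisiting step: perturb until the position maps to an unvisited cell.
    while cell(x) in visited:
        i = rng.randrange(len(x))
        x[i] = min(max(x[i] + rng.gauss(0.0, 1.0 / resolution), 0.0), 1.0)
    visited.add(cell(x))
    return x, v
```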

Collaboration


Dive into Chi Kin Chow's collaborations.

Top Co-Authors

Shiu Yin Yuen (City University of Hong Kong)
Shing Wa Leung (City University of Hong Kong)
K.F. Fong (City University of Hong Kong)
Xin Zhang (City University of Hong Kong)
C.K. Lee (City University of Hong Kong)
Chun Ki Fong (City University of Hong Kong)
Hoi Shan Lam (City University of Hong Kong)
Yang Lou (City University of Hong Kong)
Yuen Yan Tsui (City University of Hong Kong)
Shifeng Chen (Chinese Academy of Sciences)