Chi-Chung Cheung
Hong Kong Polytechnic University
Publications
Featured research published by Chi-Chung Cheung.
International Symposium on Neural Networks | 2010
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up learning, but their performance is still limited by the local minimum problem and the error overshooting problem. This paper proposes an Enhanced Two-Phase method that addresses these two problems to improve the performance of existing fast learning algorithms. The proposed method detects the occurrence of the above problems and assigns an appropriate fast learning algorithm to resolve each of them. In our investigation, the proposed method significantly improved the performance of different fast learning algorithms in terms of convergence rate and global convergence capability across different problems. The convergence rate can be increased by up to 100 times compared with existing fast learning algorithms.
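For intuition, the sketch below shows one way such a two-phase switch could be wired around an existing training loop; the detection thresholds and the split into an "aggressive" and a "stable" fast learning algorithm are illustrative assumptions, not the paper's exact criteria.

    def enhanced_two_phase_step(train_epoch, optimizers, state, window=10,
                                stagnation_tol=1e-4, overshoot_factor=1.5):
        # train_epoch(optimizer) -> (training error, gradient norm) for one epoch.
        # optimizers: {'aggressive': ..., 'stable': ...}  (hypothetical split).
        # state: {'current': 'aggressive' or 'stable', 'history': [errors...]}.
        error, grad_norm = train_epoch(optimizers[state['current']])
        state['history'].append(error)
        recent = state['history'][-window:]
        if len(recent) == window:
            if error > overshoot_factor * min(recent):
                # Error overshooting: the error jumped well above its recent
                # minimum, so switch to the more conservative algorithm.
                state['current'] = 'stable'
            elif max(recent) - min(recent) < stagnation_tol and grad_norm < 1e-6:
                # Local minimum: the error has stagnated and the gradient is
                # tiny, so switch to the algorithm better at escaping.
                state['current'] = 'aggressive'
        return error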
Congress on Evolutionary Computation | 2014
Man-Fai Leung; Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui
This paper presents a new algorithm that extends Particle Swarm Optimization (PSO) to deal with multi-objective problems. It makes two main contributions. The first is that the square root distance (SRD) between particles and leaders is proposed as the criterion for local best selection. This new criterion makes the swarm explore the whole Pareto front more uniformly. The second contribution is the procedure for updating the archive members. When the external archive is full and a new member is to be added, the existing archive member with the smallest SRD value among its neighbors is deleted. With this arrangement, the non-dominated solutions are well distributed. In our performance investigation, the proposed algorithm performed better than two well-known multi-objective PSO algorithms, MOPSO-σ and MOPSO-CD, in terms of different standard measures.
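A minimal sketch of the archive-update step follows; the exact square root distance (SRD) formula is defined in the paper, so the per-objective distance used below is only a plausible stand-in, and the data layout (dicts with an 'objectives' field) is hypothetical.

    import numpy as np

    def srd(a, b):
        # Placeholder distance: sum of square roots of per-objective gaps.
        return float(np.sum(np.sqrt(np.abs(np.asarray(a) - np.asarray(b)))))

    def update_archive(archive, candidate, capacity):
        # archive: list of dicts, each holding an 'objectives' vector.
        archive.append(candidate)
        if len(archive) <= capacity:
            return archive
        objs = [m['objectives'] for m in archive]
        # For each member, take the SRD to its nearest neighbour; the member
        # with the smallest value sits in the densest region and is removed.
        nearest = [min(srd(oi, oj) for j, oj in enumerate(objs) if j != i)
                   for i, oi in enumerate(objs)]
        archive.pop(int(np.argmin(nearest)))
        return archive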
International Symposium on Neural Networks | 2011
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up learning, but they sometimes fail to converge properly because of the local minimum problem. This paper proposes a new algorithm that provides a systematic way to exploit the characteristics of different fast learning algorithms so that a learning process converges reliably at a fast learning rate. Our performance investigation shows that the proposed algorithm always converges with a fast learning rate in two popular complicated applications, whereas other popular fast learning algorithms exhibit very poor global convergence capability in these two applications.
Congress on Evolutionary Computation | 2015
Man-Fai Leung; Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui
This paper presents a new Multi-Objective Particle Swarm Optimization (MOPSO) algorithm with two new components: leader selection and crossover. The new leader selection algorithm, called Space Expanding Strategy (SES), guides particles toward the boundaries of the objective space in each generation so that the objective space can be expanded rapidly. In addition, crossover is adopted instead of mutation to enhance convergence and maintain the stability of the generated solutions (exploitation). The performance of the proposed MOPSO algorithm was compared with three popular multi-objective algorithms on fifteen standard test functions, using hypervolume, spread, and inverted generational distance as performance measures. The investigation found that the proposed algorithm generally performed better than the other three algorithms, and that the proposed crossover generally performed better than three popular mutation operators.
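As a rough illustration of replacing mutation with crossover, the sketch below blends a particle with its selected leader using a simple BLX-style crossover; the paper's actual operator and the SES leader selection are not reproduced here, and the bounds handling is an assumption.

    import numpy as np

    def blend_crossover(particle, leader, lower, upper, alpha=0.5, rng=None):
        # BLX-alpha style crossover between a particle and its leader; the
        # child is sampled from a box slightly larger than the one spanned
        # by the two parents.
        rng = rng or np.random.default_rng()
        p, l = np.asarray(particle, float), np.asarray(leader, float)
        span = np.abs(p - l)
        child = rng.uniform(np.minimum(p, l) - alpha * span,
                            np.maximum(p, l) + alpha * span)
        return np.clip(child, lower, upper)  # keep the offspring inside the search bounds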
International Symposium on Neural Networks | 2013
Chi-Chung Cheung; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) algorithm, which is very popular in supervised learning, is extensively applied in training feed-forward neural networks. Many modifications have been proposed to speed up the convergence of the standard BP algorithm, but they seldom focus on improving the global convergence capability. This paper proposes a new algorithm called Wrong Output Modification (WOM) to improve the global convergence capability of a fast learning algorithm. When a learning process is trapped by a local minimum or a flat-spot area, the algorithm looks for outputs that lie at the opposite extreme from their target outputs and modifies them systematically so that they move closer to their targets; the weights of the corresponding neurons are then changed accordingly. These changes are intended to let the learning process escape from such local minima or flat-spot areas and then converge. The performance investigation shows that the proposed algorithm can be applied to different fast learning algorithms, and that their global convergence capabilities are improved significantly compared with the original algorithms. Moreover, some statistical data obtained from this algorithm can be used to identify the difficulty of a learning problem.
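A rough sketch of the wrong-output detection idea for sigmoid outputs in [0, 1] is given below; the extreme threshold and the amount by which an output is pulled toward its target are illustrative assumptions, not the paper's exact rules.

    import numpy as np

    def wrong_output_modification(outputs, targets, extreme=0.9, pull=0.5):
        # An output is 'wrong' when its target is near one extreme of the
        # sigmoid range but the actual output sits near the opposite extreme.
        outputs, targets = np.asarray(outputs, float), np.asarray(targets, float)
        wrong = ((targets >= extreme) & (outputs <= 1 - extreme)) | \
                ((targets <= 1 - extreme) & (outputs >= extreme))
        # Pull the wrong outputs part of the way toward their targets; the
        # modified values can then be used to adjust the output-layer weights
        # before normal training resumes.
        modified = outputs.copy()
        modified[wrong] += pull * (targets[wrong] - outputs[wrong])
        return modified, wrong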
International Symposium on Neural Networks | 2012
Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui; Hau-Ting Tse
This paper proposes a new approach called output monitoring and modification (OMM) to address the local minimum problem for existing gradient-descent algorithms (such as BP, Rprop and Quickprop) in training feed-forward neural networks. OMM monitors the learning process; when the process is trapped in a local minimum, OMM changes some incorrect output values to escape from it. This modification can be repeated with different parameter settings until the learning process converges to the global optimum. The simulation experiments show that a gradient-descent learning algorithm with OMM has a much better global convergence capability than one without OMM, while their convergence rates are similar. In one benchmark problem (application), the global convergence capability was increased from 1% to 100%.
International Symposium on Neural Networks | 2012
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui
The backpropagation (BP) algorithm is the most popular supervised learning algorithm and is extensively applied in training feed-forward neural networks. Many BP modifications have been proposed to increase the convergence rate of the standard BP algorithm, and Quickprop is one of the most popular fast learning algorithms. Quickprop converges very quickly; however, it is easily trapped in a local minimum and thus cannot reach the global minimum. This paper proposes a new fast learning algorithm modified from Quickprop. By addressing the drawbacks of Quickprop, the new algorithm provides a systematic approach to improving both the convergence rate and the global convergence capability of Quickprop. Our performance investigation shows that the proposed algorithm always converges faster than Quickprop. The improvement in global convergence capability is especially large: in one learning problem (application), it increased from 4% to 100%.
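For context, the standard Quickprop update (Fahlman, 1988) that the paper modifies is sketched below; the proposed modification itself is not reproduced here, and the fallback behaviour is a common convention rather than the paper's choice.

    import numpy as np

    def quickprop_step(w, grad, prev_grad, prev_step, lr=0.01, mu=1.75):
        # Quickprop fits a parabola through the current and previous error
        # derivatives and jumps to its minimum:
        #     dw = grad / (prev_grad - grad) * prev_step
        denom = prev_grad - grad
        safe = np.where(np.abs(denom) > 1e-12, denom, 1e-12)
        step = grad / safe * prev_step
        # The growth factor mu caps how much larger a step may be than the last one.
        cap = mu * np.abs(prev_step)
        step = np.clip(step, -cap, cap)
        # First epoch (or a dead step): fall back to a plain gradient-descent step.
        step = np.where(prev_step == 0, -lr * grad, step)
        return w + step, step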
International Symposium on Neural Networks | 2014
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) algorithm is very popular in supervised learning for feed-forward neural networks. However, it is sometimes slow and easily trapped in a local minimum or a flat-spot area (known as the local minimum problem and the flat-spot problem, respectively). Many modifications have been proposed to speed up its convergence rate, but they seldom improve the global convergence capability. Some fast learning algorithms have been proposed recently to solve these two problems; Wrong Output Modification (WOM) is one new algorithm that improves the global convergence capability significantly. However, WOM has some limitations, so it cannot solve the local minimum and flat-spot problems fully effectively. In this paper, enhancements are proposed to further improve the performance of WOM by (a) changing the mechanism used to escape from a local minimum or a flat-spot area and (b) adding a fast checking procedure to identify the existence of a local minimum or a flat-spot area. The performance investigation shows that the proposed enhancements improve the performance of WOM significantly when it is applied to different fast learning algorithms. Moreover, WOM with these enhancements is also applied to a very popular second-order learning algorithm, the Levenberg-Marquardt (LM) algorithm, and the performance investigation shows that it significantly improves the performance of LM.
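The sketch below illustrates one plausible way such a fast check could distinguish a flat-spot area (saturated output neurons) from a local minimum (small gradient despite large error); the thresholds and the assumption of logistic outputs are illustrative, not the paper's procedure.

    import numpy as np

    def diagnose(outputs, targets, grad_norm, error, err_tol=0.05,
                 grad_tol=1e-5, sat_tol=1e-3):
        # Assumes logistic (sigmoid) output neurons, so the activation
        # derivative is out * (1 - out); a tiny derivative on a clearly wrong
        # output signals a flat-spot area.
        outputs, targets = np.asarray(outputs, float), np.asarray(targets, float)
        deriv = outputs * (1.0 - outputs)
        clearly_wrong = np.abs(outputs - targets) > 0.5
        if error > err_tol and np.any(clearly_wrong & (deriv < sat_tol)):
            return 'flat-spot'
        if error > err_tol and grad_norm < grad_tol:
            return 'local-minimum'
        return 'ok'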
International Symposium on Neural Networks | 2013
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Although many modifications of BP have been proposed to speed up learning, they seldom address the local minimum and flat-spot problems. This paper proposes a new algorithm called Local-minimum and Flat-spot Problem Solver (LFPS) to solve these two problems. It uses a systematic approach to check whether a learning process is trapped by a local minimum or a flat-spot area and then escapes from it, so a learning process using LFPS can keep finding an appropriate path toward the global minimum. The performance investigation shows that the proposed algorithm always converges in different learning problems (applications), whereas other popular fast learning algorithms sometimes exhibit very poor global convergence capability.
Congress on Evolutionary Computation | 2016
Hiu-Hin Tam; Man-Fai Leung; Zhenkun Wang; Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui
The Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) has been used for a decade to solve complex multi-objective problems (MOPs) by decomposing them into a set of single-objective sub-problems. MOEA/D-AGR is one of the recently introduced improvements; it replaces the original replacement scheme of MOEA/D with an adaptive global replacement (GR) scheme in which the neighborhood replacement size Tr increases over the generations, shifting the focus from solution diversity to convergence. However, this replacement scheme only considers the one-way shift toward convergence across all sub-problems: once the algorithm has moved from a flat region of the fitness landscape to a steeper one while focusing on convergence, it is hard to recover solution diversity. This paper proposes a new adaptive GR scheme that prolongs the period over which Tr increases so that it can adapt to such changes in the fitness landscape. To compensate for the shortened period devoted to solution convergence, a local search based on the Simulated Annealing (SA) algorithm is applied to individuals whose solution values have stopped improving. To suppress the amount of local search in the early stage, when the focus is on solution diversity, fuzzy logic is used to control the frequency of local search according to the average change of the objective values. To demonstrate the performance of the proposed algorithm, several common benchmark MOPs are used to compare it with several state-of-the-art MOEA/D algorithms in terms of IGD. The performance investigation found that the proposed algorithm generally performed better than the compared algorithms.
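A very rough sketch of the three ingredients (an adaptive Tr schedule, an SA-based local search, and an activity-based stand-in for the fuzzy frequency controller) is given below; all parameter values and function shapes are assumptions rather than the paper's definitions.

    import math
    import random

    def replacement_size(gen, max_gen, tr_min=2, tr_max=20):
        # Prolonged, monotone growth of the neighbourhood replacement size Tr.
        return int(tr_min + (tr_max - tr_min) * min(1.0, gen / (0.9 * max_gen)))

    def sa_local_search(x, evaluate, neighbour, temp=1.0, cooling=0.95, iters=20):
        # Plain simulated annealing applied to a single stagnated individual
        # (minimisation of a scalarised sub-problem objective).
        best, best_f = x, evaluate(x)
        cur, cur_f = best, best_f
        for _ in range(iters):
            cand = neighbour(cur)
            cand_f = evaluate(cand)
            if cand_f < cur_f or random.random() < math.exp((cur_f - cand_f) / temp):
                cur, cur_f = cand, cand_f
                if cur_f < best_f:
                    best, best_f = cur, cur_f
            temp *= cooling
        return best

    def local_search_rate(avg_improvement, low=0.01, high=0.3):
        # Stand-in for the fuzzy controller: search rarely while solutions are
        # still improving (diversity phase), more often once improvement stalls.
        return high if avg_improvement < 1e-4 else low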