Sin Chun Ng
Open University of Hong Kong
Publications
Featured research published by Sin Chun Ng.
International Conference on Neural Information Processing | 2006
Yunfeng Wu; Cong Wang; Sin Chun Ng; Anant Madabhushi; Yixin Zhong
Breast cancer is one of the leading causes of mortality among women, and early diagnosis is of significant clinical importance. In this paper, we describe several linear fusion strategies, in particular the Majority Vote, Simple Average, Weighted Average, and Perceptron Average, which are used to combine a group of component multilayer perceptrons with optimal architecture for the classification of breast lesions. In our experiments, we use the criteria of mean squared error, absolute classification error, relative error ratio, and the Receiver Operating Characteristic (ROC) curve to evaluate and compare the performance of the four fusion strategies. The experimental results demonstrate that the Weighted Average and Perceptron Average strategies achieve better diagnostic performance than the Majority Vote and Simple Average methods.
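A minimal sketch of three of the averaging-style fusion rules named above, applied to the outputs of hypothetical component classifiers; the inverse-error weighting scheme, data shapes, and all names are illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative fusion of component classifier outputs (assumed shapes).
import numpy as np

def majority_vote(preds):
    """preds: (n_classifiers, n_samples) array of 0/1 class labels."""
    return (preds.mean(axis=0) >= 0.5).astype(int)

def simple_average(scores):
    """scores: (n_classifiers, n_samples) array of class-1 probabilities."""
    return scores.mean(axis=0)

def weighted_average(scores, val_errors):
    """Weight each classifier inversely to its validation error (assumed scheme)."""
    w = 1.0 / (np.asarray(val_errors) + 1e-12)
    w /= w.sum()
    return w @ scores

# Three component MLPs scoring four hypothetical lesions:
scores = np.array([[0.9, 0.2, 0.6, 0.4],
                   [0.8, 0.3, 0.5, 0.6],
                   [0.7, 0.1, 0.7, 0.5]])
print(simple_average(scores))
print(weighted_average(scores, [0.10, 0.15, 0.20]))
```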
Mathematical Problems in Engineering | 2014
Yunfeng Wu; Xin Luo; Fang Zheng; Shanshan Yang; Suxian Cai; Sin Chun Ng
This paper presents a novel adaptive linear and normalized combination (ALNC) method that combines component radial basis function networks (RBFNs) to perform better function approximation and regression. The fusion weights are optimized by solving a constrained quadratic programming problem. Based on the instantaneous errors generated by the component RBFNs, the ALNC performs a selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method effectively helps the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity of approximation (characterized by the normalized correlation coefficient) than the popular simple average, weighted average, and Bagging methods.
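A minimal sketch of solving for per-instance fusion weights by constrained quadratic programming, as the abstract describes. The normalization constraint follows the paper; the exact objective below (squared linear combination of the instantaneous component errors) and the non-negativity bounds are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def alnc_weights(instant_errors):
    """Fusion weights for one instance: minimize the squared combined error
    of the component RBFNs subject to sum(w) = 1 and w >= 0 (assumed form)."""
    e = np.asarray(instant_errors, dtype=float)
    n = e.size
    objective = lambda w: (w @ e) ** 2
    constraints = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x

# Three component RBFNs with instantaneous errors on one instance:
print(alnc_weights([0.30, -0.05, 0.12]))  # weight shifts toward low-error nets
```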
International Symposium on Neural Networks | 2010
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up the learning of the original BP, but their performance is still limited by the local minimum problem and the error overshooting problem. This paper proposes an Enhanced Two-Phase method that addresses these two problems to improve the performance of existing fast learning algorithms. The proposed method detects when each problem occurs and assigns an appropriate fast learning algorithm to solve it. In our investigation, the proposed method significantly improves the performance of different fast learning algorithms in terms of the convergence rate and the global convergence capability on different problems. The convergence rate can be increased by up to 100 times compared with the existing fast learning algorithms.
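A hedged sketch of the two-phase idea: monitor the training error to detect a plateau (a possible local minimum) or a sudden rise (error overshooting), and hand the next epoch to a suitable fast learning rule. The thresholds, window length, and phase labels are illustrative assumptions, not the paper's detection criteria:

```python
def choose_phase(error_history, stall_eps=1e-5, window=10):
    """Classify the current state of training from recent epoch errors."""
    if len(error_history) < window:
        return "normal"
    if error_history[-1] > error_history[-2]:
        return "overshoot"   # error rose: damp or switch the update rule
    recent = error_history[-window:]
    if max(recent) - min(recent) < stall_eps:
        return "stalled"     # error plateau: try to escape a local minimum
    return "normal"
```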
Congress on Evolutionary Computation | 2014
Man-Fai Leung; Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui
This paper presents a new algorithm that extends Particle Swarm Optimization (PSO) to multi-objective problems. It makes two main contributions. The first is that the square root distance (SRD) between particles and leaders is proposed as the criterion for local best selection. This new criterion makes all swarms explore the whole Pareto front more uniformly. The second contribution is the procedure for updating the archive members: when the external archive is full and a new member is to be added, the existing archive member with the smallest SRD value among its neighbors is deleted. With this arrangement, the non-dominated solutions remain well distributed. In our performance investigation, the proposed algorithm performed better than two well-known multi-objective PSO algorithms, MOPSO-σ and MOPSO-CD, in terms of different standard measures.
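A sketch of the SRD-based archive update. The abstract does not give the SRD formula, so the definition below (sum of square-rooted per-objective absolute differences) and the nearest-neighbor crowding measure are assumptions:

```python
import numpy as np

def srd(a, b):
    """Assumed square root distance between two objective vectors."""
    return np.sqrt(np.abs(np.asarray(a) - np.asarray(b))).sum()

def prune_full_archive(archive):
    """archive: list of objective vectors. Delete the member with the
    smallest SRD to its nearest neighbor, i.e. the most crowded one."""
    nearest = [min(srd(a, b) for j, b in enumerate(archive) if j != i)
               for i, a in enumerate(archive)]
    archive.pop(int(np.argmin(nearest)))
    return archive

archive = [[0.1, 0.9], [0.12, 0.88], [0.5, 0.5], [0.9, 0.1]]
print(prune_full_archive(archive))  # the crowded pair loses one member
```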
International Conference on Tools with Artificial Intelligence | 2005
Yunfeng Wu; Jinming Zhang; Cong Wang; Sin Chun Ng
We introduce a non-parametric linear decision fusion called the perceptron average (PA) for breast cancer diagnosis, and compare its accuracy against the weighted average fusion. The PA fusion achieves a higher overall diagnostic accuracy than the weighted average fusion, generalizes better across different training data sizes, and covers a larger area under its receiver operating characteristic curve.
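A sketch of one plausible reading of the perceptron average: fusion weights over the component classifier scores, trained with the perceptron update rule. The 0.5 decision threshold, learning rate, and epoch count are assumptions:

```python
import numpy as np

def perceptron_average_fit(scores, labels, lr=0.1, epochs=50):
    """scores: (n_samples, n_classifiers) component outputs in [0, 1];
    labels: 0/1 ground truth. Returns the learned fusion weights."""
    w = np.full(scores.shape[1], 1.0 / scores.shape[1])
    for _ in range(epochs):
        for x, y in zip(scores, labels):
            pred = 1 if w @ x >= 0.5 else 0
            w += lr * (y - pred) * x   # perceptron update on the fusion weights
    return w
```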
ACM Inroads | 2010
Andrew K. Lui; Sin Chun Ng; Yannie H. Y. Cheung; Prabhat Gurung
This paper describes a project aimed at promoting independent learning among CS1 students. The project used Lego Mindstorms robots as the tool for building a course that could engage students at various levels of learning independence. Based on the Staged Self-Directed Learning Model proposed by Grow, the course aimed to take students to higher levels of learning independence. The Lego Mindstorms robots proved versatile in achieving this objective.
International Symposium on Neural Networks | 2011
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up the learning of the original BP, but these modifications sometimes fail to converge because of the local minimum problem. This paper proposes a new algorithm that provides a systematic approach to exploiting the characteristics of different fast learning algorithms, so that a learning process converges reliably at a fast rate. Our performance investigation shows that the proposed algorithm always converges quickly in two popular, complicated applications, whereas other popular fast learning algorithms show very poor global convergence capability in these two applications.
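A hedged sketch of the systematic-switching idea: run one fast learning algorithm and fall back to the next when the error stops improving. The stall test and the notion of an `algorithms` list of per-epoch update callables are illustrative assumptions, not the paper's mechanism:

```python
def hybrid_train(net, data, algorithms, max_epochs=1000,
                 stall_eps=1e-6, target_err=1e-4):
    """algorithms: list of callables, each running one training epoch on
    `net` with a different fast learning rule and returning the epoch error."""
    errors, idx = [], 0
    for _ in range(max_epochs):
        err = algorithms[idx](net, data)
        errors.append(err)
        if err < target_err:
            break                                 # converged
        if len(errors) > 10 and errors[-11] - err < stall_eps:
            idx = (idx + 1) % len(algorithms)     # stalled: switch rule
    return net, errors
```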
international symposium on neural networks | 2007
Yunfeng Wu; Sin Chun Ng
Regression is an important data mining problem. In this paper, we present a new unbiased linear fusion method that combines component predictors to solve regression problems. The assigned fusion weight coefficients are normalized, and are updated by estimating the prediction errors between the component predictors and the desired regression values. The empirical results of our regression experiments on five synthetic and four benchmark data sets show that the proposed fusion method improves prediction accuracy in terms of mean-squared error, and also yields regression curves with better fidelity, in terms of the normalized correlation coefficient, compared with the popular simple average and weighted average fusion rules.
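A minimal sketch of normalized, error-driven fusion weights for regression; the inverse-mean-squared-error weighting stands in for the paper's exact update rule and is an assumption:

```python
import numpy as np

def fuse_regressors(preds, target):
    """preds: (n_predictors, n_samples) component predictions;
    target: (n_samples,) desired regression values."""
    mse = ((preds - target) ** 2).mean(axis=1)   # per-predictor error estimate
    w = 1.0 / (mse + 1e-12)
    w /= w.sum()                                 # normalized fusion weights
    return w, w @ preds                          # weights and fused prediction
```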
International Symposium on Neural Networks | 2008
Chi-Chung Cheung; Sin Chun Ng
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications have been proposed to improve the performance of BP, and BP with Magnified Gradient Function (MGFPROP) is a fast learning algorithm that improves both the convergence rate and the global convergence capability of BP [19]. MGFPROP outperforms many benchmark fast learning algorithms in different adaptive problems [19]. However, the performance of MGFPROP is limited by the error overshooting problem. This paper presents a new approach called BP with Two-Phase Magnified Gradient Function (2P-MGFPROP) to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. 2P-MGFPROP is modified from MGFPROP: it divides the learning process into two phases and adjusts the parameter setting of MGFPROP according to the phase of the learning process. In simulations on two different adaptive problems, 2P-MGFPROP outperforms MGFPROP with its optimal parameter setting in terms of the convergence rate, and the improvement can be up to 50%.
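A hedged sketch of the magnified-gradient idea behind MGFPROP and its two-phase variant: the sigmoid derivative in the output delta is raised to the power 1/m (m >= 1), so small gradients are magnified, and 2P-MGFPROP changes m between the two phases. The phase-switch rule and the values of m below are illustrative assumptions:

```python
def magnified_delta(output, target, m):
    """Output-layer delta with the sigmoid derivative magnified by power 1/m."""
    deriv = output * (1.0 - output)             # standard sigmoid derivative
    return (target - output) * deriv ** (1.0 / m)

def pick_m(epoch, switch_epoch=200, m_phase1=2.0, m_phase2=1.0):
    """Assumed two-phase schedule: magnify early, revert to plain BP later."""
    return m_phase1 if epoch < switch_epoch else m_phase2
```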
IEEE Region 10 Conference | 2006
Yunfeng Wu; Cong Wang; Sin Chun Ng
The merits of linear decision fusion in multiple-learner systems are widely accepted, and practical applications are rich in the literature. In this paper, we present a new linear decision fusion strategy named Bagging·LMS, which uses the least-mean-square (LMS) algorithm to update the fusion parameters in Bagging ensemble systems. In regression experiments on four synthetic and two benchmark data sets, we compared this method with the Bagging-based simple average and adaptive mixture of experts ensemble methods. The empirical results show that the Bagging·LMS method significantly reduces the regression errors relative to the other two types of Bagging ensembles, which indicates the superiority of the proposed Bagging·LMS method.
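A minimal sketch of LMS-updated fusion weights over bagged component predictors; the step size, initialization, and data shapes are illustrative assumptions:

```python
import numpy as np

def bagging_lms(preds, target, mu=0.01, epochs=20):
    """preds: (n_samples, n_predictors) outputs of the bagged predictors;
    target: (n_samples,) desired values. Returns LMS-trained fusion weights."""
    w = np.full(preds.shape[1], 1.0 / preds.shape[1])
    for _ in range(epochs):
        for x, d in zip(preds, target):
            e = d - w @ x      # instantaneous fusion error
            w += mu * e * x    # least-mean-square weight update
    return w
```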