Kwok-Wo Wong
City University of Hong Kong
Publications
Featured research published by Kwok-Wo Wong.
international conference on electronics circuits and systems | 1999
A.S.Y. Wong; Kwok-Wo Wong; C.S. Leung
We propose a strictly local, unified sequential method for principal component analysis. Any principal component analysis algorithm for linear feedforward neural networks can be used as the weight-updating equation in our method. When principal components are extracted one by one, we suggest that the initial weight vector for the next extraction be orthogonal to the eigen-subspace already extracted. Simulation results show that both the convergence and the precision of the extraction are improved. Our method is also capable of extracting the full eigenspace using the neural network approach.
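A minimal sketch of the extraction scheme described above, under two assumptions not stated in the abstract: Oja's rule stands in for the generic weight-updating equation, and deflation removes the contribution of already-extracted components. The function name and all constants are illustrative.

```python
import numpy as np

def sequential_pca(X, n_components, lr=0.01, epochs=50, seed=0):
    """Extract principal components one by one; each new weight vector
    starts orthogonal to the eigen-subspace already extracted."""
    rng = np.random.default_rng(seed)
    _, d = X.shape
    W = np.zeros((n_components, d))
    for k in range(n_components):
        w = rng.standard_normal(d)
        # Key step from the abstract: orthogonalise the initial vector
        # against the components found so far.
        w -= W[:k].T @ (W[:k] @ w)
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:
                # Deflation: remove what earlier components explain.
                x_res = x - W[:k].T @ (W[:k] @ x)
                y = w @ x_res
                w += lr * y * (x_res - y * w)  # Oja's rule as a stand-in
            w /= np.linalg.norm(w)
        W[k] = w
    return W
```

On synthetic data, the rows of W can be checked against the eigenvectors of np.cov(X, rowvar=False).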
international conference on pattern recognition | 2002
Kwok-Wo Wong; Chi-Sing Leung; Sheng-Jiang Chang
The problem of handwritten digit recognition is dealt with by multilayer feedforward neural networks with different types of neuronal activation functions. Three types of activation functions are adopted in the network, namely, the traditional sigmoid function, a sinusoidal function, and a periodic function that can be considered a combination of the first two. To speed up the learning, as well as to reduce the network size, an extended Kalman filter algorithm with pruning is used to train the network. Simulation results show that periodic activation functions perform better than monotonic ones in solving multi-cluster classification problems such as handwritten digit recognition.
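The abstract does not give the exact form of the combined activation, so the sketch below uses one plausible stand-in, a sigmoid with an additive sinusoidal term, alongside the two baseline activations; the mixing weight alpha is purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sinusoid(x):
    return np.sin(x)

def combined_periodic(x, alpha=0.5):
    # Hypothetical combination of the two baselines: the oscillatory
    # term makes the response non-monotonic, so one neuron can respond
    # to several disjoint input intervals.
    return sigmoid(x) + alpha * np.sin(x)
```

A non-monotonic response lets a single hidden neuron separate several disjoint clusters, which is consistent with the advantage reported for multi-cluster problems.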
Neural Processing Letters | 2000
Arnold Shu-Yan Wong; Kwok-Wo Wong; Chi-Sing Leung
When increasing numbers of principal components are extracted using the sequential method proposed by Bannour and Azimi-Sadjadi [1], the accumulated extraction error becomes dominant and affects the extraction of the remaining principal components. To improve this, we suggest that the initial weight vector for the extraction of the next component be orthogonal to the eigen-subspace spanned by the already extracted weight vectors. Simulation results show that both the convergence and the accuracy of the extraction are improved. Our improved method is also capable of extracting the full eigenspace accurately.
International Journal of Neural Systems | 2000
Sheng-Jiang Chang; Chi-Sing Leung; Kwok-Wo Wong; John Sum
The training of neural networks using the extended Kalman filter (EKF) algorithm is plagued by high computational complexity and storage requirements that may become prohibitive even for networks of moderate size. In this paper, we present a local EKF training and pruning approach that solves this problem. In particular, the by-products obtained along with the local EKF training can be utilized to measure the importance of the network weights. Compared with the original global approach, the proposed local EKF training and pruning approach results in a much lower computational complexity and storage requirement. Hence, it is more practical for solving real-world problems. The performance of the proposed algorithm is demonstrated on one medium-scale and one large-scale problem, namely, sunspot data prediction and handwritten digit recognition.
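A sketch of how the EKF by-products can drive pruning. The saliency below, a weight squared over the corresponding diagonal entry of the EKF error covariance P, is an assumed stand-in in the spirit of optimal-brain-surgeon measures; the paper's exact criterion is not given in the abstract.

```python
import numpy as np

def ekf_prune(weights, P, frac=0.1):
    """Zero out the least salient weights using the EKF error covariance.

    A small weight that is poorly determined by the data (large diagonal
    entry in P relative to the weight's magnitude) changes the training
    error little when removed, so it is a good pruning candidate.
    """
    saliency = weights ** 2 / np.diag(P)
    n_prune = int(frac * weights.size)
    prune_idx = np.argsort(saliency)[:n_prune]  # lowest saliency first
    pruned = weights.copy()
    pruned[prune_idx] = 0.0
    return pruned, prune_idx
```

Because P is maintained by the filter anyway, the importance measure comes essentially for free, which is the efficiency argument made in the abstract.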
Pattern Recognition Letters | 2000
Sheng-Jiang Chang; Kwok-Wo Wong; Chi-Sing Leung
A self-organizing network is used to perform invariance extraction and recognition of handwritten digits. To extract the invariance effectively, we propose to combine the trace learning rule and the on-line dual extended Kalman filter (DEKF) algorithm. Furthermore, a new activation function is suggested to replace the traditional sigmoid activation function so as to reduce the sensitivity of the extracted features to samples with large variance. Computer simulations show that both the learning speed and the recognition rate are improved using a compact network.
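A minimal sketch of the trace learning rule named above (the DEKF training itself is not shown; the learning rate and trace constant are illustrative): the instantaneous output in a Hebbian update is replaced by a decaying temporal average, so features that persist across successive transformed views of the same digit are reinforced.

```python
import numpy as np

def trace_rule_update(w, x_seq, lr=0.01, eta=0.8):
    """One pass of the trace learning rule over a temporal sequence
    of input vectors (e.g. shifted/rotated views of one digit)."""
    y_trace = 0.0
    for x in x_seq:
        y = w @ x
        y_trace = (1.0 - eta) * y + eta * y_trace  # decaying temporal trace
        w = w + lr * y_trace * x                   # Hebbian update with trace
        w = w / np.linalg.norm(w)                  # keep the weight bounded
    return w
```

Tying the update to the trace rather than the instantaneous output is what pushes the learned features toward transformation invariance.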
international conference on electronics circuits and systems | 1998
A.S.Y. Wong; Kwok-Wo Wong; C.S. Leung
By using the fact that the derivatives of the i-th network output with respect to the weights connected to the j-th output neuron (i ≠ j) are zero, a modified RLS method is proposed for principal and minor components analysis. After the extraction of the significant components of the input vectors, the error covariance matrix obtained in the learning process is used to perform minor components analysis. The minor components found are then pruned so as to achieve a higher compression ratio. Simulation results show that both the convergence speed and the compression ratio are improved. These indicate that our method effectively combines the extraction of principal components with the pruning of minor components.
international conference on acoustics speech and signal processing | 1998
Arnold Shu-Yan Wong; Kwok-Wo Wong; Chi-Sing Leung
In combining principal and minor components analysis, a parallel extraction method based on the recursive least squares algorithm is suggested to extract the principal components of the input vectors. After the extraction, the error covariance matrix obtained in the learning process is used to perform minor components analysis. The minor components found are then pruned so as to achieve a higher compression ratio. Simulation results show that both the convergence speed and the compression ratio are improved, which in turn indicates that our method effectively combines the extraction of the principal components and the pruning of the minor components.
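The two 1998 abstracts above describe essentially the same pipeline: extract the significant components, then reuse the error covariance built up during learning to identify and prune minor components. A batch stand-in is sketched below (a plain eigendecomposition replaces the recursive extraction, the per-component variance plays the role of the learning by-product, and the threshold is illustrative).

```python
import numpy as np

def extract_then_prune_minor(X, n_extract, var_threshold=1e-3):
    """Extract n_extract components, then drop the minor ones."""
    C = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]           # sort by descending variance
    vals, vecs = vals[order], vecs[:, order]
    W = vecs[:, :n_extract]                  # extracted components
    keep = vals[:n_extract] > var_threshold  # minor components fail this test
    return W[:, keep], vals[:n_extract][keep]
```

Dropping the minor components shortens the code per input vector, which is where the higher compression ratio comes from.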
IEE Proceedings - Vision, Image, and Signal Processing | 2002
Kwok-Wo Wong; Chi-Sing Leung; Sheng-Jiang Chang
Electronics Letters | 1998
Kai-Chiu Kan; Kwok-Wo Wong
Electronics Letters | 1998
Arnold Shu-Yan Wong; Kwok-Wo Wong; Chi-Sing Leung