Chi-Tat Leung
City University of Hong Kong
Publications
Featured research published by Chi-Tat Leung.
IEEE Transactions on Power Systems | 1996
Tommy W. S. Chow; Chi-Tat Leung
This paper presents a novel technique for electric load forecasting based on neural weather compensation. Our proposed method is a nonlinear generalization of the Box-Jenkins approach for nonstationary time-series prediction. A weather compensation neural network is implemented for one-day-ahead electric load forecasting; it accurately predicts the change in actual electric load consumption from the previous day. The results, based on historical load demand for Hong Kong Island, indicate that this methodology provides a more accurate load forecast, with a 0.9% reduction in forecast error.
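As a rough illustration of the idea, the sketch below trains a small network to predict the day-over-day change in load from day-over-day weather changes and adds that compensation to the previous day's load. The data, feature set, and network size are placeholders, not the paper's.

```python
# A minimal sketch of one-day-ahead forecasting via weather compensation,
# assuming synthetic stand-ins for the weather and load series.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = 365
weather_delta = rng.normal(size=(days, 3))   # e.g. changes in temp, humidity, wind
load = 100 + 5 * weather_delta[:, 0] + rng.normal(scale=0.5, size=days)

# Target is the *change* in load from the previous day, not the load itself.
load_delta = load[1:] - load[:-1]
X, y = weather_delta[1:], load_delta

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-1], y[:-1])

# Forecast: previous day's load plus the predicted compensation.
forecast = load[-2] + model.predict(X[-1:])
print(forecast)
```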
IEEE Transactions on Systems, Man, and Cybernetics | 1999
Siu-Yeung Cho; Tommy W. S. Chow; Chi-Tat Leung
A neural-based crowd estimation system for surveillance in complex scenes at an underground station platform is presented. Estimation is carried out by extracting a set of significant features from sequences of images; these feature indexes are then modeled by a neural network to estimate the crowd density. The learning phase is based on our proposed hybrid of least-squares and global search algorithms, which provides both a global search characteristic and a fast convergence speed. Promising experimental results are obtained in terms of accuracy and the real-time response capability needed to alert operators automatically.
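A minimal sketch of the feature-then-regress pipeline is given below; the two toy feature indexes and the standard MLP trainer stand in for the paper's feature set and hybrid least-squares/global-search algorithm.

```python
# A minimal sketch of feature-based crowd density estimation, using
# hypothetical hand-crafted features per frame and synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Toy feature indexes: foreground ratio and edge density."""
    fg_ratio = (frame > frame.mean()).mean()       # crude foreground fraction
    edges = np.abs(np.diff(frame, axis=0)).mean()  # crude edge density
    return np.array([fg_ratio, edges])

rng = np.random.default_rng(1)
frames = rng.random((200, 64, 64))           # stand-in image sequence
counts = rng.integers(0, 50, size=200)       # stand-in ground-truth crowd counts

X = np.stack([frame_features(f) for f in frames])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, counts)
print(model.predict(X[:3]))                  # estimated crowd densities
```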
Neurocomputing | 1997
Yat-Fung Yam; Tommy W. S. Chow; Chi-Tat Leung
An algorithm for determining the optimal initial weights of feedforward neural networks, based on a linear algebraic method, is developed. The optimal initial weights are evaluated by a least squares method at each layer. With the optimal initial weights, the initial error is substantially smaller, and the number of iterations required to reach the error criterion is therefore reduced. For a character recognition task, the number of iterations required by a network started with the optimal weights is only 53.9% of that required with random weights. In addition, the time required for the initialization process is negligible compared to the training process.
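The sketch below illustrates the layer-wise idea under one assumption: desired layer outputs are mapped back through the inverse of the sigmoid activation, and the layer's weights are then solved by linear least squares.

```python
# A minimal sketch of least-squares weight initialization for one layer.
# The logit back-assignment of targets is an assumption for illustration.
import numpy as np

def ls_init(inputs: np.ndarray, desired_outputs: np.ndarray) -> np.ndarray:
    """Solve W so that sigmoid(inputs @ W) ~= desired_outputs."""
    eps = 1e-6
    d = np.clip(desired_outputs, eps, 1 - eps)
    targets = np.log(d / (1 - d))                    # inverse sigmoid (logit)
    W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
    return W

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))               # layer inputs
T = rng.uniform(0.1, 0.9, size=(100, 3))    # desired layer outputs
W0 = ls_init(X, T)                          # initial weights for this layer
print(W0.shape)                             # (5, 3)
```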
Artificial Intelligence | 1999
Chi-Tat Leung; Tommy W. S. Chow
A novel adaptive regularization parameter selection (ARPS) method is proposed to enhance the performance of the regularization method. The proposed ARPS method enables gradient descent type training to tunnel through some of the undesired sub-optimal solutions on the composite error surface by changing the value of the regularization parameter. Such sub-optimal solutions are inherently introduced by regularized objective functions; the proposed ARPS method is therefore able to enhance the regularization method without getting stuck at them.
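The sketch below illustrates the general mechanism on a simple linear model; the stall test and the halving rule are illustrative stand-ins, not the paper's exact ARPS schedule.

```python
# A minimal sketch of adapting the regularization parameter during
# gradient descent on a regularized squared-error objective.
import numpy as np

def regularized_loss(w, X, y, lam):
    err = X @ w - y
    return err @ err + lam * (w @ w)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
y = rng.normal(size=50)
w = rng.normal(size=4)
lam, lr = 0.1, 0.01
prev = np.inf
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) + 2 * lam * w
    w -= lr * grad
    loss = regularized_loss(w, X, y, lam)
    if prev - loss < 1e-6:   # stalled: likely a sub-optimal solution
        lam *= 0.5           # change lambda to reshape the error surface
    prev = loss
print(loss)
```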
International Symposium on Neural Networks | 1997
Chi-Tat Leung; Tommy W. S. Chow
A hybrid learning algorithm for backpropagation networks, based on global search and least squares methods, is presented to increase the speed of convergence. The proposed algorithm comprises a global search part and a least squares part. The global search part trains the backpropagation network over a reduced weight space; the remaining weights are calculated by the linear least squares method. Two problems, nonlinear function approximation and a modified XOR, are used to demonstrate the fast global search performance of the proposed algorithm. The results indicate that the proposed algorithm speeds up the learning process by as much as 4670% in terms of iterations and does not become trapped in local minima.
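The sketch below captures the split: only the hidden-layer weights are searched (here by random restarts rather than the paper's global search method), while the output weights are obtained in closed form by linear least squares.

```python
# A minimal sketch of the hybrid idea: search a reduced weight space
# (hidden weights only) and solve the rest by linear least squares.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])              # nonlinear target function

best_err, best = np.inf, None
for trial in range(50):                            # search hidden weights only
    W_h = rng.normal(size=(2, 10))
    H = np.tanh(X @ W_h)                           # hidden activations
    W_o, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights in closed form
    err = np.mean((H @ W_o - y) ** 2)
    if err < best_err:
        best_err, best = err, (W_h, W_o)
print(best_err)
```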
Neurocomputing | 1997
Chi-Tat Leung; Tommy W. S. Chow
A novel robust fourth-order cumulants cost function was introduced to improve fitting to an underlying function from small data sets with a high level of Gaussian noise, because higher-order statistics have the unique property of suppressing Gaussian noise processes of unknown spectral characteristics. The proposed cost function was validated on the prediction of benchmark sunspot data, and an excellent result was obtained: the cost function enables the network to achieve a very low training error and an excellent generalization property. Our results indicate that a network trained with the proposed cost function can provide up to a 74% reduction in the normalized test error on the benchmark.
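The noise-suppression property rests on a standard fact: the fourth-order cumulant kappa_4 = E[e^4] - 3(E[e^2])^2 of a zero-mean Gaussian is exactly zero. The sketch below demonstrates this on synthetic residuals; the paper's full cost function built on this statistic is not reproduced here.

```python
# A minimal sketch of the fourth-order cumulant of residuals; it
# vanishes for Gaussian noise, which is what the cost function exploits.
import numpy as np

def fourth_order_cumulant(e: np.ndarray) -> float:
    """kappa_4 of a zero-mean sample: E[e^4] - 3 (E[e^2])^2."""
    e = e - e.mean()
    return np.mean(e ** 4) - 3.0 * np.mean(e ** 2) ** 2

rng = np.random.default_rng(5)
gauss = rng.normal(size=100_000)
laplace = rng.laplace(size=100_000)
print(fourth_order_cumulant(gauss))    # ~0: Gaussian noise is suppressed
print(fourth_order_cumulant(laplace))  # clearly non-zero for non-Gaussian noise
```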
International Symposium on Neural Networks | 1996
Chi-Tat Leung; T.W.S. Chow
A novel robust fourth-order cumulants cost function is introduced to improve fitting to the underlying function in small data sets with a high level of Gaussian noise. The neural network learns by gradient descent optimization, with a constraint term introduced into the cost function. The proposed cost function was applied to benchmark sunspot series prediction and nonlinear system identification, and excellent results were obtained: the neural network achieves a lower training error and an excellent generalization property. Our proposed cost function enables the network to provide up to a 73% reduction in the normalized test error on the benchmark.
Neural Processing Letters | 1996
Chi-Tat Leung; Tommy W. S. Chow; Yat-Fung Yam
A novel Least Cumulants Method is proposed to tackle the problem of fitting to an underlying function in small data sets with a high noise level, because higher-order statistics provide a unique means of suppressing Gaussian noise processes of unknown spectral characteristics. The standard backpropagation algorithm is in fact a least squares based algorithm, which does not perform well on noisy data sets. The proposed method is more robust to noise because an entirely new objective function based on higher-order statistics is introduced. The proposed objective function was validated on the prediction of benchmark sunspot data, and excellent results were obtained: it enables the network to achieve a very low training error and an excellent generalization property. Our results indicate that a network trained with the proposed objective function can provide up to a 73% reduction in the normalized test error on the benchmark.
Engineering Applications of Artificial Intelligence | 1996
Tommy W. S. Chow; Chi-Tat Leung
This paper presents a neural network in which a nonlinear dilation function in the input layer is introduced to emulate the nonlinearity of human sensory organs such as the ear. The introduction of the dilation function reduces the redundancy in the information contained in the input variables. This results in a minimization of the prediction error, as well as of the error variance. Applications to time-series prediction and to the instantaneous VAr prediction of an electric arc furnace are included to corroborate the performance of the nonlinear dilation network.
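The abstract does not specify the dilation function; the sketch below assumes a logarithmic form, in the spirit of the ear's roughly logarithmic loudness response, applied to inputs before they reach the network.

```python
# A minimal sketch of a nonlinear dilation at the input layer; the
# logarithmic form is an assumption, not the paper's exact function.
import numpy as np

def dilate(x: np.ndarray, alpha: float = 10.0) -> np.ndarray:
    """Compress large input magnitudes, expanding resolution near zero."""
    return np.sign(x) * np.log1p(alpha * np.abs(x)) / np.log1p(alpha)

x = np.linspace(-1.0, 1.0, 5)
print(dilate(x))   # dilated inputs fed to the network in place of x
```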
IFAC Proceedings Volumes | 1994
Tommy W. S. Chow; Chi-Tat Leung
This paper presents a neural network that introduces a nonlinear dilation function emulating the nonlinearity of human sensory organs such as the ear. The introduction of the dilation function reduces redundancy in the information content of the input variables. This results in minimizing the prediction error as well as the error variance. Applications to time-series prediction and to the instantaneous VAr prediction of an electric arc furnace are included to corroborate the performance of the nonlinear dilation network.