Publication


Featured research published by Tin Yau Kwok.


IEEE Transactions on Neural Networks | 1997

Constructive algorithms for structure learning in feedforward neural networks for regression problems

Tin Yau Kwok; Dit Yan Yeung

In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole problem as a state-space search, we first describe the general issues in constructive algorithms, with special emphasis on the search strategy. A taxonomy, based on the differences in the state transition mapping, the training algorithm, and the network architecture, is then presented.
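As a rough, self-contained illustration of the constructive idea this survey covers, the sketch below grows a single-hidden-layer regression network one tanh unit at a time, fitting each new unit's output weight to the current residual and stopping once the error is judged satisfactory. It is an assumption-laden toy, not any specific algorithm reviewed in the paper; all names, tolerances, and the random candidate search are choices made here for illustration only.

```python
# Toy constructive loop (illustrative only; settings and helper choices are assumptions).
# Start with an empty hidden layer and add one tanh unit per step: each candidate uses a
# random input weight vector, its output weight is set by least squares on the current
# residual, and the best candidate is kept. Search stops when the fit is "satisfactory".
import numpy as np

rng = np.random.default_rng(0)

# Small one-dimensional regression problem.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

hidden = []                                   # installed units: (input_weights, bias, output_weight)
residual = y.copy()
for step in range(20):                        # each state transition adds one hidden unit
    if np.mean(residual ** 2) < 1e-3:         # "satisfactory solution" criterion
        break
    best = None
    for _ in range(50):                       # crude candidate search
        w, b = rng.standard_normal(X.shape[1]), rng.standard_normal()
        h = np.tanh(X @ w + b)
        beta = (h @ residual) / (h @ h)       # least-squares output weight for this candidate
        err = np.mean((residual - beta * h) ** 2)
        if best is None or err < best[0]:
            best = (err, w, b, beta)
    _, w, b, beta = best
    hidden.append((w, b, beta))
    residual = residual - beta * np.tanh(X @ w + b)

print(f"units added: {len(hidden)}, final training MSE: {np.mean(residual ** 2):.4f}")
```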


IEEE Transactions on Neural Networks | 1997

Objective functions for training new hidden units in constructive neural networks

Tin Yau Kwok; Dit Yan Yeung

In this paper, we study a number of objective functions for training new hidden units in constructive algorithms for multilayer feedforward networks. The aim is to derive a class of objective functions for which both the objective value and the corresponding weight updates can be computed in O(N) time, where N is the number of training patterns. Moreover, even though input weight freezing is applied during the process for computational efficiency, the convergence property of the constructive algorithms using these objective functions is still preserved. We also propose a few computational tricks that can be used to improve the optimization of the objective functions in practical situations. Their relative performance in a set of two-dimensional regression problems is also discussed.
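For a concrete sense of what an O(N) objective for a new hidden unit can look like, here is one well-known candidate of this kind, the squared correlation between the candidate unit's activations and the current residual (familiar from cascade-correlation). It is shown purely as an illustration and is not claimed to be one of the objectives derived in the paper; names and the tanh choice are assumptions.

```python
# Illustrative O(N) candidate objective (not necessarily one of the paper's): the squared
# correlation between a candidate unit's activations h = tanh(X @ w + b) and the residual
# error left by the existing network. Evaluating it touches each of the N training
# patterns once, so each optimization step over the candidate's weights costs O(N).
import numpy as np

def candidate_objective(w, b, X, residual):
    """Squared correlation between tanh(X @ w + b) and the residual (to be maximized)."""
    h = np.tanh(X @ w + b)
    hc = h - h.mean()
    rc = residual - residual.mean()
    return (hc @ rc) ** 2 / ((hc @ hc) * (rc @ rc) + 1e-12)
```

With the existing units' input weights frozen, only the candidate's weights are optimized against such an objective, which keeps each update linear in the number of training patterns.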


IEEE Transactions on Neural Networks | 1996

Use of bias term in projection pursuit learning improves approximation and convergence properties

Tin Yau Kwok; Dit Yan Yeung

In a regression problem, one is given a multidimensional random vector X, the components of which are called predictor variables, and a random variable Y, called the response. A regression surface describes a general relationship between X and Y. A nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). The regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL) formulates PPR using a two-layer feedforward neural network. The smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. We demonstrate that PPL networks in their original form do not have the universal approximation property for any finite R, and thus cannot converge to the desired function even with an arbitrarily large number of hidden units. However, by including a bias term in each linear projection of the predictor variables, PPL networks can regain these capabilities, independent of the exact choice of R. Experimentally, we show that this modification increases the rate of convergence with respect to the number of hidden units, improves the generalization performance, and makes the network less sensitive to the setting of R. Finally, we apply PPL to chaotic time series prediction and obtain superior results compared with the cascade-correlation architecture.
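The modification described above is small enough to show directly. In the sketch below, each hidden unit applies a univariate function to a projection of the inputs, and the with_bias flag toggles the bias term whose presence restores the approximation capabilities. The polynomial smoother is only a generic stand-in for the paper's Hermite-function smoother, and the names are assumptions.

```python
# PPL-style output as a sum of ridge functions f(x) = sum_k g_k(a_k . x [+ b_k]).
# The bias term b_k in each projection is the modification discussed in the abstract;
# the polynomial smoother here is only a placeholder for the Hermite-function smoother.
import numpy as np

def ppl_output(X, units, with_bias=True):
    """units: list of (a, b, coeffs); coeffs parameterize the univariate smoother g_k."""
    out = np.zeros(X.shape[0])
    for a, b, coeffs in units:
        z = X @ a + (b if with_bias else 0.0)    # linear projection, optionally with bias
        out += np.polyval(coeffs, z)             # g_k applied to the projection
    return out
```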


International Conference on Artificial Neural Networks | 1996

Bayesian Regularization in Constructive Neural Networks

Tin Yau Kwok; Dit Yan Yeung

In this paper, we study the incorporation of Bayesian regularization into constructive neural networks. The degree of regularization is automatically controlled in the Bayesian inference framework and hence does not require manual setting. Simulations show that regularization, with input training using a full Bayesian approach, produces networks with better generalization performance and lower susceptibility to over-fitting as the network size increases. Regularization with input training under MacKay's evidence framework, however, does not produce significant improvement on the problems tested.
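For reference, the evidence-framework update that the abstract contrasts with the full Bayesian treatment usually takes the following form. This is the standard MacKay re-estimation for a weight-decay regularizer, shown only as background; it is not the paper's own derivation, and symbol names are assumptions.

```python
# Standard MacKay evidence-framework re-estimation of the hyperparameters for an
# objective beta * E_D + alpha * E_W (sketch; not the paper's exact procedure).
import numpy as np

def reestimate_hyperparameters(alpha, beta, H_data, E_D, E_W, N):
    """One re-estimation step.

    H_data : Hessian of the unregularized data error E_D at the current weight minimum
    E_D    : 0.5 * sum of squared residuals;  E_W : 0.5 * sum of squared weights
    N      : number of training patterns
    """
    lam = beta * np.linalg.eigvalsh(H_data)      # eigenvalues of beta * (Hessian of E_D)
    gamma = np.sum(lam / (lam + alpha))          # effective number of well-determined weights
    alpha_new = gamma / (2.0 * E_W)              # new weight-decay strength
    beta_new = (N - gamma) / (2.0 * E_D)         # new inverse noise variance
    return alpha_new, beta_new
```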


International Symposium on Neural Networks | 1993

Experimental analysis of input weight freezing in constructive neural networks

Tin Yau Kwok; Dit Yan Yeung

An important research problem in constructive network algorithms is how to train the new network after the addition of a hidden unit. Previous empirical analyses performed on the cascade-correlation architecture indicate that the effectiveness of freezing differs across problem domains, and hence the evidence is not conclusive. A series of experiments with the single-hidden-layer network on a number of artificial pattern classification problems is described. The performance of the network is compared with and without input weight freezing, and against standard backpropagation. Drawbacks of freezing are identified, and some directions for future work are discussed.
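The freezing mechanism compared in these experiments can be made concrete with a small sketch: once a hidden unit has been installed, its input-to-hidden weights are simply excluded from later gradient updates. Names, shapes, and the learning rate below are illustrative assumptions, not the paper's experimental setup.

```python
# Gradient step that skips the frozen rows of the input-to-hidden weight matrix
# (purely illustrative of the freezing mechanism, not the paper's training setup).
import numpy as np

def update_input_weights(W_in, grad_W_in, frozen, lr=0.01):
    """W_in, grad_W_in: (n_hidden, n_inputs); frozen: boolean mask over hidden units."""
    W_new = W_in.copy()
    W_new[~frozen] -= lr * grad_W_in[~frozen]    # only non-frozen units' input weights move
    return W_new
```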


International Symposium on Neural Networks | 1994

Constructive neural networks: some practical considerations

Tin Yau Kwok; Dit Yan Yeung

Based on a Hilbert space point of view, we proposed in our previous work a novel objective function for training new hidden units in a constructive feedforward neural network. Moreover, we proved that if the hidden unit functions satisfy the universal approximation property, the network so constructed incrementally, using the proposed objective function and with input weight freezing, still preserves the universal approximation property with respect to L^2 performance criteria. In this paper, we provide experimental support for the feasibility of using this objective function. Experiments are performed on two chaotic time series with encouraging results. In passing, we also demonstrate that engineering problems are not to be neglected in practical implementations. We identify the plateau problem, and then show that by suitably transforming the objective function and modifying the quickprop algorithm, significant improvement can be obtained.
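For readers unfamiliar with quickprop, the remark about modifying it refers to Fahlman's secant-based weight update; a minimal version is sketched below for context. The paper's actual transformation of the objective and its modification of this rule are not reproduced here, and the fallback and growth-limit choices are assumptions.

```python
# Minimal quickprop-style update (Fahlman's secant rule) as background for the abstract's
# remark; this is a generic sketch, not the modified algorithm used in the paper.
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_dw, lr=0.01, mu=1.75):
    """Per-weight secant step, clipped by the maximum growth factor mu; plain gradient
    descent is used wherever the secant step is undefined or unsafe."""
    dw = np.empty_like(w)
    denom = prev_grad - grad
    for i in range(w.size):
        if prev_dw[i] != 0.0 and denom[i] != 0.0:
            step = grad[i] / denom[i] * prev_dw[i]                      # secant (quickprop) step
            step = np.clip(step, -mu * abs(prev_dw[i]), mu * abs(prev_dw[i]))
        else:
            step = -lr * grad[i]                                        # fallback: gradient step
        dw[i] = step
    return w + dw, dw
```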


Neural Processing Letters | 1995

Improving the approximation and convergence capabilities of projection pursuit learning

Tin Yau Kwok; Dit Yan Yeung

One nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). In this method, the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL), proposed by Hwang et al., formulates PPR using a two-layer feedforward neural network. One of the main differences between PPR and PPL is that the smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. While the convergence property of PPR is already known, that for PPL has not been thoroughly studied. In this paper, we demonstrate that PPL networks do not have the universal approximation and strong convergence properties for any finite R. However, by including a bias term in each linear combination of the predictor variables, PPL networks can regain these capabilities, independent of the exact choice of R. It is also shown experimentally that this modification improves the generalization performance in regression problems, and creates smoother decision surfaces for classification problems.
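Since both this paper and the 1996 journal version above rest on Hermite-function smoothers of a fixed highest order R, here is a small sketch of how such a smoother can be evaluated on the bias-augmented projection. Normalization conventions and names are assumptions; this is not code from the papers.

```python
# Orthonormal Hermite functions h_0..h_R and a smoother g(z) = sum_r c_r h_r(z), evaluated
# on the projection z = a . x + b (the bias b being the modification studied in the paper).
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_functions(z, R):
    """Return an (R+1, len(z)) array with h_r(z) = H_r(z) exp(-z^2/2) / sqrt(2^r r! sqrt(pi))."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    out = np.empty((R + 1, z.size))
    for r in range(R + 1):
        coeffs = np.zeros(r + 1)
        coeffs[r] = 1.0                                # select the physicists' polynomial H_r
        norm = sqrt((2.0 ** r) * factorial(r) * sqrt(pi))
        out[r] = hermval(z, coeffs) * np.exp(-z * z / 2.0) / norm
    return out

def smoother(z, c):
    """g(z) with highest order R = len(c) - 1."""
    return c @ hermite_functions(z, len(c) - 1)
```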


International Conference on Speech, Image Processing and Neural Networks | 1994

A theoretically sound learning algorithm for constructive neural networks

Tin Yau Kwok; Dit Yan Yeung

In this paper, we analyse the problem of learning in constructive neural networks from a Hilbert space point of view. A novel objective function for training new hidden units using a greedy approach is derived. More importantly, we prove that a network so constructed incrementally still preserves the universal approximation property with respect to L^2 performance criteria. While theoretical results obtained so far on the universal approximation capabilities of multilayer feedforward networks only provide existence proofs, our results move one step further by providing a theoretically sound procedure for constructive approximation while still preserving the universal approximation property.
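The construction the abstract refers to can be summarized in one line: after n-1 hidden units the network realizes f_{n-1}, the next unit g_n is chosen greedily against the proposed objective, and the resulting sequence converges to the target f in the L^2 sense. The notation below is an assumption made for this summary, not taken from the paper.

```latex
% Sketch of the incremental construction (notation is an assumption):
f_n = f_{n-1} + \alpha_n g_n,
\qquad g_n \in \arg\max_{g \in \mathcal{G}} J\bigl(g \mid f - f_{n-1}\bigr),
\qquad \|f - f_n\|_{L^2} \longrightarrow 0 \ \text{as}\ n \to \infty .
```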


Archive | 1995

Constructive feedforward neural networks for regression problems: a survey

Tin Yau Kwok; Dit Yan Yeung


International Symposium on Neural Networks | 1995

Efficient cross-validation for feedforward neural networks

Tin Yau Kwok; Dit Yan Yeung

Collaboration


Dive into Tin Yau Kwok's collaborations.

Top Co-Authors


Dit Yan Yeung

Hong Kong University of Science and Technology
