Network

Latest external collaborations at the country level.

Hotspot

Research topics where John Sum is active.

Publication


Featured research published by John Sum.


IEEE Transactions on Neural Networks | 1999

On the Kalman filtering method in neural network training and pruning

John Sum; Chi-Sing Leung; Gilbert H. Young; Wing-Kay Kan

When using the extended Kalman filter (EKF) approach to train and prune a feedforward neural network, one usually encounters two problems: how to set the initial condition, and how to use the result obtained to prune the network. In this paper, some cues on setting the initial condition are presented and illustrated with a simple example. Then, based on three assumptions -- 1) the size of the training set is large enough; 2) the training converges; and 3) the trained network model is close to the actual one -- an elegant equation linking the error sensitivity measure (the saliency) and the result obtained via the extended Kalman filter is derived. The validity of the derived equation is then verified with a simulated example.
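
The link the paper derives is between the EKF results and a pruning saliency. Below is a minimal sketch of that idea, assuming (as a heuristic, not the paper's exact equation) that the inverse of the EKF error covariance P approximates the Hessian of the training error at convergence, so an optimal-brain-damage style saliency can be read off the trained weights and P:

```python
import numpy as np

def ekf_saliency(w, P):
    """Saliency of each weight from EKF training results.

    Heuristic sketch: if P^{-1} approximates the Hessian of the
    training error, the cost of pruning weight i is roughly
    s_i = w_i^2 / (2 * P_ii) under a diagonal approximation.
    """
    return w ** 2 / (2.0 * np.diag(P))

def prune_least_salient(w, P, frac=0.1):
    """Zero out the fraction of weights with the smallest saliency."""
    s = ekf_saliency(w, P)
    k = int(len(w) * frac)
    idx = np.argsort(s)[:k]          # indices of least salient weights
    w_pruned = w.copy()
    w_pruned[idx] = 0.0
    return w_pruned, idx
```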


IEEE Transactions on Neural Networks | 1999

Analysis for a class of winner-take-all model

John Sum; Chi-Sing Leung; Peter Kwong-Shun Tam; Gilbert H. Young; Wing-Kay Kan; Lai-Wan Chan

Recently, we proposed a simple winner-take-all (WTA) neural network circuit. Assuming no external input, we derived an analytic equation for its network response time. In this paper, we further analyze the network response time for a class of winner-take-all circuits involving self-decay and show that the network response time of this class of WTA circuits is the same as that of the simple WTA model.
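
A toy simulation makes the notion of "network response time" concrete. The dynamics below (mutual inhibition plus a self-decay term rho) are illustrative, not the paper's exact circuit; the response time is measured as the first time every losing unit falls below a small threshold:

```python
import numpy as np

def wta_response_time(u0, rho=0.1, beta=1.0, dt=1e-3, tol=1e-3, t_max=100.0):
    """Euler simulation of an illustrative WTA circuit with self-decay:
        du_i/dt = -rho * u_i - beta * sum_{j != i} max(u_j, 0)
    Returns the first time all non-winning activations drop below tol.
    """
    u = np.asarray(u0, dtype=float)
    winner = int(np.argmax(u))
    t = 0.0
    while t < t_max:
        g = np.maximum(u, 0.0)
        inhibition = g.sum() - g       # input from all other units
        u += dt * (-rho * u - beta * inhibition)
        t += dt
        if np.all(np.delete(u, winner) < tol):
            return t
    return None  # did not settle within t_max

print(wta_response_time([0.9, 0.8, 0.5, 0.2]))
```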


IEEE Transactions on Neural Networks | 2010

Convergence and Objective Functions of Some Fault/Noise-Injection-Based Online Learning Algorithms for RBF Networks

Kevin Ho; Chi-Sing Leung; John Sum

In the last two decades, many online fault/noise injection algorithms have been developed to attain fault-tolerant neural networks. However, little theoretical work on their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that these six online algorithms converge almost surely. Moreover, the true objective functions they minimize are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error; thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
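
As an example of the kind of algorithm analyzed, here is a sketch of scheme 2): online LMS training of an RBF output layer with multiplicative weight noise injected at each step. The step size, noise level, and toy data are arbitrary choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(x, centers, width):
    """Gaussian RBF hidden-layer outputs for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def train_with_weight_noise(X, y, centers, width=0.2, lr=0.05,
                            sigma_b=0.1, epochs=50):
    """Online training with multiplicative weight noise injection:
    each forward pass uses w_i * (1 + b_i), b_i ~ N(0, sigma_b^2)."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = rbf_features(x, centers, width)
            w_noisy = w * (1.0 + sigma_b * rng.standard_normal(len(w)))
            e = t - w_noisy @ h        # error under the injected fault
            w += lr * e * h            # LMS step
    return w

# toy usage: fit y = sin(2*pi*x) on [0, 1]
X = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * X)
w = train_with_weight_noise(X, y, centers=np.linspace(0.0, 1.0, 15))
```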


IEEE Transactions on Neural Networks | 1999

On the regularization of forgetting recursive least square

Chi-Sing Leung; Gilbert H. Young; John Sum; Wing-Kay Kan

In this paper, the regularization effect of employing the forgetting recursive least square (FRLS) training technique on feedforward neural networks is studied. We derive our result from the corresponding equations for the expected prediction error and the expected training error. By comparing these error equations with those obtained previously for the weight decay method, we find that the FRLS technique has an effect identical to that of the simple weight decay method. This finding suggests that the FRLS technique is another online approach for realizing the weight decay effect. We also show that, under certain conditions, both the model complexity and the expected prediction error of a model trained by the FRLS technique are better than those of one trained by the standard RLS method.
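
For reference, the standard FRLS recursion on a linear-in-parameter model looks as follows; the forgetting factor lam < 1 is the quantity whose regularizing effect the paper relates to weight decay (the feature matrix and initialization here are generic choices):

```python
import numpy as np

def frls(X, d, lam=0.99, delta=100.0):
    """Forgetting recursive least squares.

    X : (N, n) matrix of feature vectors, one row per sample.
    d : (N,) desired outputs.
    lam : forgetting factor (lam = 1 recovers standard RLS).
    """
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)              # large initial covariance
    for x, t in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)        # gain vector
        w = w + k * (t - w @ x)
        P = (P - np.outer(k, Px)) / lam
    return w
```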


IEEE Transactions on Neural Networks | 1997

Yet another algorithm which can generate topography map

John Sum; Chi-Sing Leung; Lai-Wan Chan; Lei Xu

This paper presents an algorithm that forms a topographic map resembling the self-organizing map. The idea stems from defining an energy function that reveals the local correlation between neighboring neurons: the larger the value of the energy function, the higher the correlation of the neighboring neurons. On this account, the proposed algorithm is defined as gradient ascent on this energy function. Simulations on two-dimensional maps are illustrated.
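
A sketch of the idea, assuming an illustrative correlation energy rather than the paper's exact one: with Gaussian activations a_i(x) and 4-neighbour pairs i~j on the grid, stochastic gradient ascent on E = E_x[sum_{i~j} a_i(x) a_j(x)] pulls neighbouring units toward inputs they respond to jointly:

```python
import numpy as np

rng = np.random.default_rng(1)

def topo_map(X, grid=(8, 8), sigma=0.3, lr=0.5, steps=5000):
    """Stochastic gradient ascent on an illustrative correlation energy
    E(W) = E_x[sum_{i~j} a_i(x) a_j(x)], a_i(x) = exp(-|x-w_i|^2/(2 sigma^2))."""
    gy, gx = grid
    W = rng.uniform(0.0, 1.0, size=(gy, gx, 2))
    for _ in range(steps):
        x = X[rng.integers(len(X))]
        a = np.exp(-((W - x) ** 2).sum(axis=-1) / (2.0 * sigma ** 2))
        nbr = np.zeros_like(a)         # sum of 4-neighbour activations
        nbr[1:, :] += a[:-1, :]; nbr[:-1, :] += a[1:, :]
        nbr[:, 1:] += a[:, :-1]; nbr[:, :-1] += a[:, 1:]
        grad = (a * nbr)[..., None] * (x - W) / sigma ** 2
        W += lr * grad                 # ascend the energy
    return W

X = rng.uniform(0.0, 1.0, size=(1000, 2))
W = topo_map(X)
```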


IEEE Transactions on Neural Networks | 2009

On Objective Function, Regularizer, and Prediction Error of a Learning Algorithm for Dealing With Multiplicative Weight Noise

John Sum; Chi-Sing Leung; Kevin Ho

In this paper, an objective function for training a functional link network to tolerate multiplicative weight noise is presented. Basically, the objective function is similar in form to other regularizer-based functions, consisting of a mean square training error term and a regularizer term. Our study shows that under some mild conditions the derived regularizer is essentially the same as a weight decay regularizer. This explains why applying weight decay can also improve the fault tolerance of a radial basis function (RBF) network with multiplicative weight noise. In accordance with the objective function, a simple learning algorithm for a functional link network with multiplicative weight noise is derived. Finally, the mean prediction error of the trained network is analyzed. Simulated experiments on two artificial data sets and a real-world application are performed to verify the theoretical results.
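
The structure of such an objective is easy to reproduce for a linear-in-parameter (functional link) model. Averaging the squared error over independent, zero-mean multiplicative noise b_i with variance sigma_b^2 gives exactly a mean square error term plus a data-weighted, weight-decay style term (a generic derivation in the spirit of the paper, not its exact objective):

```python
import numpy as np

def noisy_objective(w, Phi, t, sigma_b):
    """Expected MSE when weights suffer w_i -> w_i * (1 + b_i),
    E[b_i] = 0, Var(b_i) = sigma_b^2, b_i independent:

        J(w) = MSE(w) + sigma_b^2 * sum_i w_i^2 * mean(phi_i^2)

    Phi : (N, m) matrix of basis-function outputs, t : (N,) targets.
    """
    mse = np.mean((t - Phi @ w) ** 2)
    reg = sigma_b ** 2 * np.sum(w ** 2 * np.mean(Phi ** 2, axis=0))
    return mse + reg
```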


IEEE Transactions on Parallel and Distributed Systems | 2003

Analysis on a mobile agent-based algorithm for network routing and management

John Sum; Hong Shen; Chi-Sing Leung; Gilbert S. Young

Ant routing is a method for network routing in agent technology. Although its effectiveness and efficiency have been demonstrated and reported in the literature, its properties have not yet been well studied. This paper presents some preliminary analysis of an ant algorithm with regard to its population growth property and jumping behavior. The results conclude that as long as the value $\max_j |\Omega_j|$ is known, the practitioner is able to design the algorithm parameters, such as the number of agents created for each request, $k$, and the maximum allowable number of jumps of an agent, in order to meet the network constraint.
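
The practical upshot is a budgeting exercise: agents live at most a bounded number of jumps, so the concurrent population is bounded by the spawn rate times the agent lifetime. A back-of-the-envelope sketch (an illustrative bound, not the paper's):

```python
def peak_agent_population(k, d_max, request_rate):
    """Upper bound on live agents in steady state: each request spawns
    k agents, each makes at most d_max jumps (one per time step), so
    at most request_rate * k * d_max agents are alive at once."""
    return request_rate * k * d_max

# e.g. 10 requests per step, 5 agents per request, at most 8 jumps each
print(peak_agent_population(k=5, d_max=8, request_rate=10))  # 400
```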


IEEE Transactions on Neural Networks | 2010

On the Selection of Weight Decay Parameter for Faulty Networks

Chi-Sing Leung; Hong-Jiang Wang; John Sum

The weight-decay technique is an effective approach for handling overfitting and weight fault. For fault-free networks, without an appropriate value of the decay parameter, the trained network is either overfitted or underfitted; however, many existing results on the selection of the decay parameter focus on fault-free networks only. It is well known that the weight-decay method can also suppress the effect of weight fault. For the faulty case, using a test set to select the decay parameter is not practical because there is a huge number of possible faulty networks for a trained network. This paper develops two mean prediction error (MPE) formulae for predicting the performance of faulty radial basis function (RBF) networks. Two fault models, multiplicative weight noise and open weight fault, are considered. Our MPE formulae involve the training error and the trained weights only; hence we do not need to generate a huge number of faulty networks and measure their test errors. The MPE formulae allow us to select appropriate values of the decay parameter for faulty networks. Our experiments show that, although there are small differences between the true test errors (from the test set) and the MPE values, the MPE formulae can accurately locate the appropriate value of the decay parameter for minimizing the true test error of faulty networks.
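
The selection procedure can be sketched as follows: train the RBF output weights by ridge regression for each candidate decay parameter, then score each candidate with a closed-form error estimate computed from the training data and trained weights only. The estimate below covers the multiplicative weight noise fault model and is a generic expression in the spirit of the paper's MPE formulae, not their exact ones:

```python
import numpy as np

def rbf_design(X, centers, width):
    # (N, m) matrix of Gaussian RBF outputs for scalar inputs
    return np.exp(-(X[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

def select_decay(X, t, centers, width, sigma_b, lambdas):
    """Return (lambda, predicted_error) minimizing a noise-aware
    prediction-error estimate, without sampling faulty networks."""
    Phi = rbf_design(X, centers, width)
    n, m = Phi.shape
    best = None
    for lam in lambdas:
        w = np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(m), Phi.T @ t)
        mse = np.mean((t - Phi @ w) ** 2)
        # extra expected error from weight noise w_i -> w_i(1 + b_i)
        noise = sigma_b ** 2 * np.sum(w ** 2 * np.mean(Phi ** 2, axis=0))
        if best is None or mse + noise < best[1]:
            best = (lam, mse + noise)
    return best
```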


IEEE International Conference on High Performance Computing, Data and Analytics | 2007

Lifetime performance of an energy efficient clustering algorithm for cluster-based wireless sensor networks

Yung-Fa Huang; Wun-He Luo; John Sum; Lin-Huang Chang; Chih-Wei Chang; Rung-Ching Chen

This paper proposes a fixed clustering algorithm (FCA) to improve energy efficiency in wireless sensor networks (WSNs). In order to reduce the energy each sensor consumes in sending data, the proposed algorithm uniformly divides the sensing area into clusters, with the cluster head deployed at the center of each cluster area. Simulation results show that the proposed algorithm clearly reduces the energy consumption of the sensors and extends the lifetime of the network by nearly 80% compared to random clustering (RC).
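
The core of the scheme is a deterministic assignment: partition the field into equal cells and attach each sensor to the head at its cell's centre. A minimal sketch (field size and grid resolution are illustrative):

```python
import numpy as np

def fixed_clusters(sensors, area=100.0, grid=4):
    """Divide an area x area field into grid x grid equal cells, place
    one cluster head at each cell centre, and label every sensor with
    the index of its cell's head."""
    cell = area / grid
    heads = np.array([[(i + 0.5) * cell, (j + 0.5) * cell]
                      for i in range(grid) for j in range(grid)])
    ij = np.floor(sensors / cell).clip(0, grid - 1).astype(int)
    labels = ij[:, 0] * grid + ij[:, 1]
    return heads, labels

rng = np.random.default_rng(2)
sensors = rng.uniform(0.0, 100.0, size=(200, 2))
heads, labels = fixed_clusters(sensors)   # heads[labels]: each sensor's head
```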


Hybrid Intelligent Systems | 2003

New analysis on mobile agents based network routing

Wenyu Qu; Hong Shen; John Sum

In this paper, we consider the problem of mobile agent-based network routing. We analyze the probability of success (the probability that an agent can find the destination) and the population growth of mobile agents under an ant-routing algorithm. First, we give an estimate of the probability of success, $P(d)$, that an agent finds the destination in $d$ jumps as $P(d) \le \frac{1}{n}\left(1-\frac{1}{n}\right)^{d}\left(\frac{\sigma_1-1}{\sigma_1}\right)^{d-1}$, where $n$ is the number of nodes in the network, and $\sigma_1$ and $\sigma_n$ are the largest and smallest node degrees in the network, respectively. The probability of success that $k$ agents find the destination in $d$ jumps is estimated as $P^*(d) \le 1 - \left(\frac{n-1}{n+\sigma_1-1}\right)^k$, where $k$ is the number of agents generated per request. Second, the distribution of mobile agents in the network is analyzed: $\vec{p}(t) = (I + A + \dots + A^{t-1})\,km\,\vec{e}$ for $0 < t \le d$, where $A$ is a matrix derived from the connectivity matrix. We further estimate that the number of agents running in the network is less than $\frac{n^2 \sigma_1 km}{n+\sigma_1-1}$, and that the population of mobile agents running on each host, $p_j(t)$, satisfies $km + (1-\xi^{d-1})\left[1 - \frac{1}{n(1-\xi)}\right](D_j-1)km \le p_j(t) \le km + (1-\zeta^{d-1})\left[1 - \frac{1}{n(1-\zeta)}\right](D_j-1)km$, where $\xi = \|A\|_1 = \max_{1\le j\le n}\|a_j\|_1$, $\zeta = \min_{1\le j\le n}\|a_j\|_1$, $a_j$ is the $j$th column of matrix $A$, and $D_j$ is the degree of the $j$th host.
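
The population formula is straightforward to evaluate numerically. The sketch below computes $\vec{p}(t) = (I + A + \dots + A^{t-1})\,km\,\vec{e}$ for an illustrative random-walk matrix on a small graph; the paper derives $A$ from the connectivity matrix, so the particular choice of $A$ here is an assumption:

```python
import numpy as np

def agent_distribution(A, k, m, t):
    """Evaluate p(t) = (I + A + ... + A^{t-1}) * k * m * e."""
    n = A.shape[0]
    S = np.eye(n)
    power = np.eye(n)
    for _ in range(t - 1):
        power = power @ A
        S += power
    return S @ (k * m * np.ones(n))

# illustrative 4-node ring; A = row-normalized connectivity matrix
C = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A = C / C.sum(axis=1, keepdims=True)
print(agent_distribution(A, k=3, m=2, t=5))
```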

Collaboration


Dive into John Sum's collaborations.

Top Co-Authors

Chi-Sing Leung

The Chinese University of Hong Kong

Gilbert H. Young

The Chinese University of Hong Kong

Lai-Wan Chan

The Chinese University of Hong Kong

Hong Shen

University of Adelaide

Ruibin Feng

City University of Hong Kong

Wing-Kay Kan

The Chinese University of Hong Kong

Chi-Sing Leung

City University of Hong Kong
