Publication


Featured research published by A. Luk.


international symposium on neural networks | 2003

Fast convergence for backpropagation network with magnified gradient function

S.C. Ng; C.C. Cheung; Shu Hung Leung; A. Luk

This paper presents a modified back-propagation algorithm using magnified gradient function (MGFPROP), which can effectively speed up the convergence rate and improve the global convergence capability of back-propagation. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function. From the convergence analysis, it is shown that the new algorithm retains the gradient descent property but gives faster convergence than that of the back-propagation algorithm. Simulation results show that, in terms of the convergence rate and the percentage of global convergence, the new algorithm always outperforms the standard back-propagation algorithm and other competing techniques.
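
A minimal sketch of the magnified-gradient idea described above, assuming a sigmoid activation whose derivative term y(1 - y) is raised to a power 1/S with S >= 1 so that small derivatives near saturation are magnified; the exact magnification function and the multi-layer training loop in the paper may differ, and train_step below is a hypothetical single-layer illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_derivative(y, S=2.0):
    # The standard sigmoid derivative y * (1 - y) is tiny near saturation.
    # Raising it to the power 1/S (S >= 1) magnifies small values while keeping
    # the descent direction -- the core idea the abstract attributes to MGFPROP.
    # The exact magnification function used in the paper may differ.
    return (y * (1.0 - y)) ** (1.0 / S)

def train_step(W, x, t, lr=0.5, S=2.0):
    # One magnified-gradient update for a single-layer sigmoid network
    # (illustrative only, not the paper's full algorithm).
    y = sigmoid(x @ W)                        # forward pass
    err = y - t                               # output error
    delta = err * magnified_derivative(y, S)  # magnified error signal
    W -= lr * np.outer(x, delta)              # gradient-descent-style update
    return W, 0.5 * np.sum(err ** 2)          # updated weights and squared error
```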


international symposium on neural networks | 1995

Fast and Global Convergent Weight Evolution Algorithm based on Modified Back-propagation

Sin Chun Ng; Shu Hung Leung; A. Luk

The conventional back-propagation algorithm is basically a gradient-descent method; it suffers from local minima and slow convergence. A new weight evolution algorithm based on modified back-propagation can effectively avoid local minima and speed up the convergence rate. The idea is to perturb those weights linked with the output neurons whose squared errors are above average; thus local minima can be escaped. The modified back-propagation changes the partial derivative of the activation function so as to magnify the backward-propagated error signal, and thus the convergence rate can be accelerated.
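
A rough sketch of the weight-perturbation step described above, assuming the "evolution" is a small uniform offset applied only to the weights feeding output neurons whose squared error is above average; the selection rule and offset size are assumptions, not taken from the paper.

```python
import numpy as np

def weight_evolution_step(W_out, output_errors, offset=0.05, rng=None):
    # Perturb only the columns of W_out (hidden-to-output weights) that feed
    # output neurons with above-average squared error, nudging the network
    # out of a presumed local minimum. The offset size is an illustrative assumption.
    rng = np.random.default_rng() if rng is None else rng
    sq_err = output_errors ** 2
    above_avg = sq_err > sq_err.mean()                  # neurons to perturb
    perturb = rng.uniform(-offset, offset, W_out.shape)
    return W_out + perturb * above_avg                  # zero for below-average columns
```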


international symposium on neural networks | 1999

Lotto-type competitive learning and its stability

A. Luk; Sandra Lien

This paper introduces a generalised idea of lotto-type competitive learning (LTCL), where one or more winners exist. The winners are divided into tiers, with each tier being rewarded differently. All losers are penalised equally. A set of dynamic LTCL equations is then introduced to assist in the study of the stability of the simplified idea of LTCL. It is shown that if there exists a K-orthant in the LTCL state space which is an attracting invariant set of the network flow, then the flow converges to a fixed point.
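
A small sketch of one lotto-type competitive update under the description above, with one unit per winning tier for simplicity: the nearest units are rewarded at tier-specific rates, and all remaining losers are penalised equally; the tier sizes, rates and penalty are illustrative assumptions.

```python
import numpy as np

def ltcl_update(W, x, tier_rates=(0.05, 0.02), penalty=0.01):
    # W: (n_units, dim) weight vectors, x: (dim,) input vector.
    d = np.linalg.norm(W - x, axis=1)         # distance of each unit to the input
    order = np.argsort(d)                     # nearest first
    rates = np.full(len(W), -penalty)         # losers: pushed away from x equally
    for tier, rate in enumerate(tier_rates):  # winners: earlier tiers rewarded more
        rates[order[tier]] = rate
    return W + rates[:, None] * (x - W)
```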


international symposium on neural networks | 1999

Rival rewarded and randomly rewarded rival competitive learning

A. Luk; Sandra Lien

This paper introduces two new competitive learning paradigms, namely rival rewarded and randomly rewarded rival competitive learning. In both cases, the winner is rewarded as in classical competitive learning. In the rival rewarded case, the nearest rival can be rewarded in a number of ways. In the randomly rewarded case, the rival is randomly rewarded or penalised in a number of ways. For both paradigms, it is shown experimentally that they behave similarly to rival penalised competitive learning.
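
A sketch covering both paradigms as described: the winner is rewarded as in classical competitive learning, while its nearest rival is either rewarded at a smaller rate (rival rewarded) or rewarded or penalised at random (randomly rewarded rival); the learning rates and the coin-flip rule are assumptions.

```python
import numpy as np

def rival_update(W, x, lr_win=0.05, lr_rival=0.01, random_rival=False, rng=None):
    # W: (n_units, dim) weight vectors, x: (dim,) input vector.
    rng = np.random.default_rng() if rng is None else rng
    d = np.linalg.norm(W - x, axis=1)
    winner, rival = np.argsort(d)[:2]              # nearest unit and its nearest rival
    W[winner] += lr_win * (x - W[winner])          # reward the winner
    sign = rng.choice([1.0, -1.0]) if random_rival else 1.0
    W[rival] += sign * lr_rival * (x - W[rival])   # reward (or randomly penalise) the rival
    return W
```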


international symposium on neural networks | 2000

Dynamics of the generalised lotto-type competitive learning

A. Luk; Sandra Lien

In the generalised lotto-type competitive learning algorithm, more than one winner exists, and the winners are divided into tiers, with each tier being rewarded differently. All the losers are penalised equally. It is possible to formulate a set of equations that is useful in studying the various dynamic aspects of generalised lotto-type competitive learning.


international joint conference on neural networks | 2006

Convergence Analysis of Generalized Back-propagation Algorithm with Modified Gradient Function

Sin Chun Ng; Shu Hung Leung; A. Luk; Yunfeng Wu

In this paper, we further investigate the convergence properties of the generalized back-propagation algorithm using the magnified gradient function (MGFPROP). The idea of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function. The algorithm can effectively speed up the convergence rate and reduce the chance of being trapped in premature saturation. From the convergence analysis, it is shown that MGFPROP retains the gradient descent property, converges faster, and has better global search capability than the standard back-propagation algorithm.


international symposium on neural networks | 2002

An integrated algorithm of magnified gradient function and weight evolution for solving local minima problem

Sin Chun Ng; Shu Hung Leung; A. Luk

This paper presents the integration of magnified gradient function and weight evolution algorithms in order to solve the local minima problem. The combination of the two algorithms gives a significant improvement in terms of the convergence rate and global search capability as compared to some common fast learning algorithms such as the standard backpropagation, Quickprop, resilient propagation, SARPROP, and genetic algorithms.
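
An illustrative outline of how the two algorithms might be interleaved according to the abstract: train with the magnified gradient function and, whenever the error stops improving (a presumed local minimum), apply a weight-evolution perturbation. The helpers net.mgf_backprop_epoch and net.weight_evolution and the stall criterion are hypothetical, not APIs from the paper.

```python
def train_mgf_with_weight_evolution(net, data, epochs=1000, patience=20):
    # net.mgf_backprop_epoch(data) is assumed to run one epoch of magnified-gradient
    # back-propagation and return the training error; net.weight_evolution() is
    # assumed to perturb selected weights as in the weight evolution algorithm.
    best_err, stall = float("inf"), 0
    for _ in range(epochs):
        err = net.mgf_backprop_epoch(data)
        if err < best_err - 1e-6:
            best_err, stall = err, 0
        else:
            stall += 1
        if stall >= patience:        # presumed local minimum: evolve the weights
            net.weight_evolution()
            stall = 0
    return net
```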


international symposium on neural networks | 1998

Convergence of the generalized back-propagation algorithm with constant learning rates

S.C. Ng; Shu Hung Leung; A. Luk

A new generalized back-propagation algorithm which can effectively speed up the convergence rate and reduce the chance of being trapped in local minima was previously proposed by the authors (1996). In this paper, we analyze the convergence of the generalized back-propagation algorithm. The weight sequences in the generalized back-propagation algorithm can be approximated by a certain ordinary differential equation (ODE). When the learning rate tends to zero, the interpolated weight sequences of generalized back-propagation converge weakly to the solution of the associated ODE.
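
A schematic statement of the ODE-approximation argument the abstract refers to, written in assumed notation (the paper's own symbols and technical conditions are not reproduced here):

```latex
% Assumed notation, not taken from the paper.
% With a constant learning rate \eta, the generalized back-propagation update is
\[
  w_{k+1} = w_k - \eta\, \widetilde{\nabla} E(w_k),
\]
% where \widetilde{\nabla} E denotes the generalized (magnified) error gradient.
% The piecewise-linear interpolation w^{\eta}(t) of the weight sequence then
% converges weakly, as \eta \to 0, to the solution of the associated ODE
\[
  \dot{w}(t) = -\,\widetilde{\nabla} E\bigl(w(t)\bigr).
\]
```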


international symposium on neural networks | 1997

Weight evolution algorithm with dynamic offset range

S.C. Ng; Shu Hung Leung; A. Luk

The main problems of gradient-descent algorithms such as backpropagation are their slow convergence rate and the possibility of being trapped in local minima. In this paper, a weight evolution algorithm with a dynamic offset range is proposed to remedy these problems. The idea of weight evolution is to evolve the network weights in a controlled manner during the learning phase of backpropagation so as to jump to regions of smaller mean squared error whenever backpropagation stops at a local minimum. If the algorithm is repeatedly trapped in a local minimum, the offset range for weight evolution is incremented to allow a larger weight space to be searched. When the local minimum is bypassed, the offset range is reset to its initial value. It can be proved that this method can always escape local minima and guarantee convergence to the global solution. Simulation results show that the weight evolution algorithm with dynamic offset range gives a faster convergence rate and global search capability.
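
A sketch of the dynamic-offset loop described above: perturb the weights within [-offset, offset] when training stalls, widen the range if it keeps stalling, and reset it once the local minimum is bypassed. The helpers net.backprop_epoch and net.perturb_weights and the stall test are hypothetical.

```python
def weight_evolution_dynamic_offset(net, data, epochs=2000, offset0=0.01,
                                    grow=2.0, patience=20):
    # net.backprop_epoch(data) is assumed to run one back-propagation epoch and
    # return the mean squared error; net.perturb_weights(offset) is assumed to add
    # a random value in [-offset, offset] to each weight.
    offset, best_err, stall = offset0, float("inf"), 0
    for _ in range(epochs):
        err = net.backprop_epoch(data)
        if err < best_err - 1e-6:                      # progress: local minimum bypassed
            best_err, stall, offset = err, 0, offset0  # reset the offset range
        else:
            stall += 1
        if stall >= patience:                          # trapped: evolve within the range
            net.perturb_weights(offset)
            offset *= grow                             # widen the search range next time
            stall = 0
    return net
```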


international symposium on neural networks | 1996

A generalized backpropagation algorithm for faster convergence

S.C. Ng; Shu Hung Leung; A. Luk

The conventional backpropagation algorithm is basically a gradient-descent method; it has the problems of local minima and slow convergence. A new generalized backpropagation algorithm which can effectively speed up the convergence rate and reduce the chance of being trapped in local minima is introduced in this paper. The new algorithm changes the derivative of the activation function so as to magnify the backward-propagated error signal; thus the convergence rate can be accelerated and local minima can be escaped.

Collaboration


Dive into A. Luk's collaborations.

Top Co-Authors

Shu Hung Leung
City University of Hong Kong

S.C. Ng
City University of Hong Kong

Sin Chun Ng
Open University of Hong Kong

Chi Yin Chung
City University of Hong Kong