Andrew K. Lui
Open University of Hong Kong
Publications
Featured research published by Andrew K. Lui.
web intelligence | 2006
Sheung-On Choy; Andrew K. Lui
Collaborative tagging on the Web has been quickly gaining ground as a new paradigm for Web information retrieval, discovery, and filtering. A number of successful deployments of collaborative tagging systems effectively recruit the activity of human users to collect and annotate vast amounts of Web resources. These systems lead to an emergent categorization of Web resources in terms of tags and create a different kind of Web directory. However, the current ways of exploring the tagging space are limited and cannot realize its full value. This paper presents our methodology, observations, and experimental results on how to improve the user experience of exploring information captured by collaborative tagging systems.
technical symposium on computer science education | 2004
Andrew K. Lui; Reggie Kwan; Maria Poon; Yannie H. Y. Cheung
The Perform approach aims to improve the success rate of weak students in a first programming course. The approach, based on constructivism, takes tight control of the mental model construction process in weak students and helps them navigate around many conceptual pitfalls in programming fundamentals. The paper discusses the application of constructivism to programming, exposes common hazards in the learning process, explains why weak students are weak, and then suggests several guidelines that can help weak students attain at least foundation-level programming skills. The paper ends with a summary of our experience of the effects of the Perform approach.
international symposium on neural networks | 2010
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up the learning of the original algorithm. However, the performance of these modifications is still not promising because of the local minimum problem and the error overshooting problem. This paper proposes an Enhanced Two-Phase method that addresses these two problems to improve the performance of existing fast learning algorithms. The proposed method detects when the above problems occur and assigns an appropriate fast learning algorithm to resolve each of them. In our investigation, the proposed method significantly improved the performance of different fast learning algorithms in terms of the convergence rate and the global convergence capability on different problems. The convergence rate can be increased by up to 100 times compared with the existing fast learning algorithms.
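The abstract does not give the detection rules the Enhanced Two-Phase method uses, but the idea of inspecting the error curve to decide which problem is occurring can be sketched as follows. The window size, threshold, and phase names here are illustrative assumptions, not the paper's actual criteria:

```python
def detect_phase(error_history, window=10, eps=1e-4):
    """Heuristic sketch (assumed, not the paper's rule): classify the
    current training phase from recent error values.

    - a rising error over the window suggests error overshooting;
    - a flat error curve suggests a local minimum / flat-spot area;
    - otherwise training is making normal progress.
    """
    recent = error_history[-window:]
    if len(recent) < window:
        return "normal"           # not enough history to judge yet
    if recent[-1] > recent[0]:
        return "overshoot"        # error is going up: overshooting
    if abs(recent[0] - recent[-1]) < eps:
        return "local_minimum"    # error has stalled: trapped
    return "normal"
```

A dispatcher built on this could then hand the trapped network to whichever fast learning algorithm suits the detected phase, as the paper describes.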
international conference on advanced learning technologies | 2007
Andrew K. Lui; Siu Cheung Li; Sheung On Choy
Content analysis is often employed by teachers and researchers to analyse online discussion forums for purposes such as assessment, evaluation, and educational research. Automating content analysis is desirable so that it can be carried out efficiently on large amounts of data. This paper evaluates text categorization and examines whether the attainable accuracy can satisfy the requirements of common content analysis tasks. It shows that even simple text categorization techniques can support tasks such as monitoring online learning progress. Methods of augmenting text categorization with other techniques are also discussed.
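As an illustration of the kind of "simple text categorization technique" the paper evaluates, here is a minimal multinomial Naive Bayes classifier for forum posts. The post texts and the labels ("question"/"discussion") are invented for the example; the paper's actual categories and features are not given in the abstract:

```python
import math
from collections import Counter

def train_nb(labelled_posts):
    """Train a multinomial Naive Bayes model from (text, label) pairs."""
    label_counts = Counter()
    word_counts = {}          # label -> Counter of word frequencies
    vocab = set()
    for text, label in labelled_posts:
        label_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.lower().split():
            wc[w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def classify(model, text):
    """Return the most probable label, using Laplace (add-one) smoothing."""
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        wc = word_counts[label]
        n_words = sum(wc.values())
        score = math.log(n_docs / total_docs)   # log prior
        for w in text.lower().split():
            score += math.log((wc[w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Trained on a handful of labelled posts, a model like this can flag which messages in a forum are questions, which is one way automated monitoring of learning progress could work.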
congress on evolutionary computation | 2014
Man-Fai Leung; Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui
This paper presents a new algorithm that extends Particle Swarm Optimization (PSO) to deal with multi-objective problems. It makes two main contributions. The first is that the square root distance (SRD) computation among particles and leaders is proposed as the criterion for local best selection. This new criterion makes all swarms explore the whole Pareto front more uniformly. The second contribution is the procedure for updating the archive members. When the external archive is full and a new member is to be added, the existing archive member with the smallest SRD value among its neighbors is deleted. With this arrangement, the non-dominated solutions remain well distributed. In our performance investigation, the proposed algorithm performed better than two well-known multi-objective PSO algorithms, MOPSO-σ and MOPSO-CD, on different standard measures.
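The archive update step can be sketched as follows. Since the abstract does not define the SRD computation precisely, this sketch approximates "smallest SRD among its neighbors" with the Euclidean distance to each member's nearest neighbor in objective space, removing the most crowded member until the archive fits its capacity:

```python
import math

def dist(a, b):
    """Euclidean distance between two objective vectors (assumed proxy for SRD)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prune_archive(archive, capacity):
    """Shrink the external archive to `capacity` members by repeatedly
    deleting the member whose nearest-neighbour distance is smallest,
    i.e. the most crowded non-dominated solution."""
    archive = list(archive)
    while len(archive) > capacity:
        nn = [min(dist(p, q) for j, q in enumerate(archive) if j != i)
              for i, p in enumerate(archive)]
        archive.pop(nn.index(min(nn)))
    return archive
```

Deleting the most crowded member first is what keeps the surviving non-dominated solutions spread out along the front, matching the distribution goal stated in the abstract.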
ACM Inroads | 2010
Andrew K. Lui; Sin Chun Ng; Yannie H. Y. Cheung; Prabhat Gurung
This paper describes a project aiming at promoting independent learning among CS1 students. The project used Lego Mindstorms robots as the tool for building a course that could engage students of various levels of learning independence. Based on the Staged Self-Directed Learning Model proposed by Grow, the course hoped to take students to higher levels of learning independence. Lego Mindstorms robots proved their versatility in achieving this objective.
international symposium on neural networks | 2011
Chi-Chung Cheung; Sin Chun Ng; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up learning. However, these modifications sometimes cannot converge properly because of the local minimum problem. This paper proposes a new algorithm that provides a systematic approach to exploiting the characteristics of different fast learning algorithms, so that a learning process converges reliably at a fast rate. Our performance investigation shows that the proposed algorithm always converges quickly in two popular complicated applications, whereas other popular fast learning algorithms show very poor global convergence capability in these applications.
Proceedings 24th Australian Computer Science Conference. ACSC 2001 | 2001
Michael J. Owen; Andrew K. Lui; Edward H. S. Lo; Mark W. Grigg
The use of progressive, on-demand image dissemination techniques can support efficient dissemination of very large images across networks. In this paper we examine the effectiveness of various design options for such on-demand dissemination systems. We show that the choice of design options can have a profound impact on the efficient use of client, server, and network resources. Based on our performance evaluation experiments, we recommend tiling images with a session-based progressive wavelet compression algorithm, delivering the compressed tile data in a round-robin manner, and performing custom client-side virtual memory management for image data to improve retrieval speed.
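The round-robin delivery recommendation can be sketched as a simple schedule: send the first progressive-compression layer of every tile before any tile's second layer, so the whole image refines uniformly rather than one region completing first. The tile and layer identifiers here are illustrative:

```python
def round_robin_schedule(tiles, n_layers):
    """Order (tile, layer) transmissions layer-by-layer across all tiles,
    so every tile receives its coarse layers before any tile is refined."""
    return [(tile, layer) for layer in range(n_layers) for tile in tiles]
```

With two tiles and two layers, the schedule interleaves the tiles at each quality level, which is the behaviour the recommendation describes.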
congress on evolutionary computation | 2015
Man-Fai Leung; Sin Chun Ng; Chi-Chung Cheung; Andrew K. Lui
This paper presents a new Multi-Objective Particle Swarm Optimization (MOPSO) algorithm that has two new components: leader selection and crossover. The new leader selection algorithm, called Space Expanding Strategy (SES), guides particles toward the boundaries of the objective space in each generation so that the objective space can be expanded rapidly. In addition, crossover is adopted instead of mutation to enhance convergence and maintain the stability of the generated solutions (exploitation). The performance of the proposed MOPSO algorithm was compared with three popular multi-objective algorithms on fifteen standard test functions, using hypervolume, spread, and inverted generational distance as performance measures. The investigation found that the proposed algorithm generally performed better than the other three, and that the proposed crossover generally performed better than three popular mutation operators.
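The abstract does not specify which crossover operator the paper proposes, so as a generic illustration of crossover in a real-valued swarm, here is a standard BLX-α (blend) crossover; treat the operator choice and the α parameter as assumptions for illustration only:

```python
import random

def blend_crossover(p1, p2, alpha=0.5, rng=random.Random(0)):
    """BLX-alpha crossover: for each dimension, sample the child gene
    from an interval extending alpha times the parents' range beyond
    both parents, giving controlled variation around the parents."""
    child = []
    for x, y in zip(p1, p2):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child
```

Unlike mutation, which perturbs a single solution blindly, a crossover like this recombines two good particles, which is one way such an operator can stabilize the generated solutions.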
international symposium on neural networks | 2013
Chi-Chung Cheung; Andrew K. Lui; Sean Shensheng Xu
The backpropagation (BP) algorithm, which is very popular in supervised learning, is extensively applied in training feed-forward neural networks. Many modifications have been proposed to speed up the convergence of the standard BP algorithm. However, they seldom focus on improving the global convergence capability. This paper proposes a new algorithm called Wrong Output Modification (WOM) to improve the global convergence capability of a fast learning algorithm. When a learning process is trapped by a local minimum or a flat-spot area, this algorithm looks for outputs that have gone to the opposite extreme from their target outputs, and then modifies such outputs systematically so that they move close to their targets, changing the weights of the corresponding neurons accordingly. It is hoped that these changes let the learning process escape from such local minima or flat-spot areas and then converge. The performance investigation shows that the proposed algorithm can be applied to different fast learning algorithms, and their global convergence capabilities are improved significantly compared with the original algorithms. Moreover, some statistical data obtained from this algorithm can be used to identify the difficulty of a learning problem.
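The core WOM idea — find outputs at the wrong extreme and pull them toward their targets — can be sketched on a vector of sigmoid outputs in [0, 1]. The threshold and step size below are hypothetical parameters for illustration; the paper's actual modification rule is not given in the abstract:

```python
def wrong_output_modification(outputs, targets, threshold=0.8, step=0.5):
    """Sketch of WOM on sigmoid outputs in [0, 1]: any output that sits
    at the opposite extreme from its target (|o - t| > threshold) is
    moved a fraction `step` of the way toward the target; outputs that
    are merely inaccurate are left for ordinary training to fix."""
    modified = []
    for o, t in zip(outputs, targets):
        if abs(o - t) > threshold:      # output went to the wrong extreme
            o = o + step * (t - o)      # nudge it toward the target
        modified.append(o)
    return modified
```

In a full implementation, the modified outputs would then be back-propagated so that the weights of the affected neurons change, giving the trapped learning process a push out of the local minimum or flat-spot area.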