Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lai-Wan Chan is active.

Publication


Featured research published by Lai-Wan Chan.


Intelligent Data Engineering and Automated Learning | 2002

Support Vector Machine Regression for Volatile Stock Market Prediction

Haiqin Yang; Lai-Wan Chan; Irwin King

Recently, Support Vector Regression (SVR) has been introduced to solve regression and prediction problems. In this paper, we apply SVR to financial prediction tasks. In particular, financial data are usually noisy and the associated risk is time-varying, so our model extends standard SVR with margin adaptation. By varying the margins of the SVR, we can reflect changes in the volatility of the financial data. Furthermore, we analyze the effect of asymmetrical margins, which allows the downside risk to be reduced. Our experimental results show that using the standard deviation to calculate a variable margin gives good predictive results on the Hang Seng Index.
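The variable-margin idea can be sketched in a few lines of numpy: a rolling standard deviation serves as the volatility proxy, and an enlarged down-margin illustrates the asymmetry used to curb downside risk. The window size, scale factor, and down-margin bias below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def adaptive_margins(prices, window=5, k=1.0, down_bias=1.5):
    """Time-varying SVR margins from a rolling standard deviation.

    Returns (up_margin, down_margin) arrays; the wider down-margin
    (scaled by down_bias) illustrates the asymmetry used to reduce
    downside risk. window, k, and down_bias are assumed values.
    """
    prices = np.asarray(prices, float)
    std = np.array([prices[max(0, t - window + 1):t + 1].std()
                    for t in range(len(prices))])
    return k * std, down_bias * k * std

def eps_insensitive_loss(y_true, y_pred, up, down):
    """Epsilon-insensitive loss with asymmetric, time-varying margins."""
    err = np.asarray(y_pred, float) - np.asarray(y_true, float)
    over = np.maximum(err - up, 0.0)      # over-prediction beyond the up-margin
    under = np.maximum(-err - down, 0.0)  # under-prediction beyond the down-margin
    return (over + under).mean()
```

In a full SVR, these per-sample margins would replace the single fixed epsilon of the standard epsilon-insensitive loss.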


IEEE Transactions on Speech and Audio Processing | 1995

Tone recognition of isolated Cantonese syllables

Tan Lee; P. C. Ching; Lai-Wan Chan; Y. H. Cheng; Brian Mak

Tone identification is essential for the recognition of the Chinese language, especially for Cantonese, which is well known for being very rich in tones. The paper presents an efficient method for tone recognition of isolated Cantonese syllables. Suprasegmental feature parameters are extracted from the voiced portion of a monosyllabic utterance, and a three-layer feedforward neural network is used to classify these feature vectors. Using a phonologically complete vocabulary of 234 distinct syllables, the recognition accuracies for the single-speaker and multispeaker cases are 89.0% and 87.6%, respectively.


Neural Computation | 2006

An Adaptive Method for Subband Decomposition ICA

Kun Zhang; Lai-Wan Chan

Subband decomposition ICA (SDICA), an extension of ICA, assumes that each source is represented as the sum of some independent subcomponents and dependent subcomponents, which have different frequency bands. In this article, we first investigate the feasibility of separating the SDICA mixture in an adaptive manner. Second, we develop an adaptive method for SDICA, namely band-selective ICA (BS-ICA), which finds the mixing matrix and the estimate of the source independent subcomponents. This method is based on the minimization of the mutual information between outputs. Some practical issues are discussed. For better applicability, a scheme to avoid the high-dimensional score function difference is given. Third, we investigate one form of the overcomplete ICA problem with sources having specific frequency characteristics, which BS-ICA can also be used to solve. Experimental results illustrate the success of the proposed method for solving both the SDICA and the overcomplete ICA problems.
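The source model behind SDICA, each source being a sum of subcomponents occupying different frequency bands, can be illustrated with a simple FFT-based subband split. The band edges here are arbitrary illustrative choices.

```python
import numpy as np

def subband_split(s, edges):
    """Split a real signal into subcomponents occupying disjoint
    frequency bands (the source model behind SDICA). `edges` are
    normalized frequency boundaries in (0, 0.5); the subcomponents
    sum back to the original signal."""
    S = np.fft.rfft(s)
    f = np.fft.rfftfreq(len(s))
    parts = []
    lo = 0.0
    for hi in list(edges) + [0.51]:       # 0.51 covers the Nyquist bin
        mask = (f >= lo) & (f < hi)       # select one band
        parts.append(np.fft.irfft(S * mask, n=len(s)))
        lo = hi
    return parts
```

Because the band masks partition the spectrum, the split is exact: summing the subcomponents reconstructs the signal.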


IEEE Transactions on Neural Networks | 2001

Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks

Chi-Sing Leung; Ah-Chung Tsoi; Lai-Wan Chan

Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Although the standard RLS algorithm has an implicit weight decay term in its energy function, this weight decay effect decreases linearly as the number of learning epochs increases, and therefore diminishes as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. In the first, the true weight decay RLS (TWDRLS) algorithm, we consider a modified energy function in which the weight decay effect remains constant, irrespective of the number of learning epochs. The second, the input perturbation RLS (IPRLS) algorithm, is derived by requiring the prediction performance to be robust to input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.
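A minimal sketch of the constant-weight-decay idea on a linear model (the paper applies it to multilayer networks; the decay strength `lam`, the initialization `delta`, and the exact form of the decay step are assumed here, not taken from the paper):

```python
import numpy as np

def rls_weight_decay(X, y, lam=1e-3, delta=100.0):
    """Recursive least squares with an explicit weight-decay shrinkage
    applied at every step, so the decay effect does not fade with the
    number of samples. Linear-model sketch; lam and delta are assumed."""
    d = X.shape[1]
    w = np.zeros(d)
    P = delta * np.eye(d)                  # inverse correlation estimate
    for x, t in zip(X, y):
        k = P @ x / (1.0 + x @ P @ x)      # gain vector
        e = t - w @ x                      # a-priori error
        w = w + k * e                      # standard RLS update
        w = w - lam * (P @ w)              # constant weight-decay shrinkage
        P = P - np.outer(k, x @ P)         # covariance update
    return w
```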


IEEE Transactions on Neural Networks | 1999

Analysis for a class of winner-take-all model

John Sum; Chi-Sing Leung; Peter Kwong-Shun Tam; Gilbert H. Young; Wing-Kay Kan; Lai-Wan Chan

Recently, we proposed a simple winner-take-all (WTA) neural network circuit. Assuming no external input, we derived an analytic equation for its network response time. In this paper, we further analyze the network response time for a class of winner-take-all circuits involving self-decay and show that it is the same as that of the simple WTA model.
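The kind of WTA dynamics analyzed here can be simulated in a few lines: each unit receives self-excitation and a shared global inhibition, and the unit with the largest initial activity ends up the sole winner. The gain, step size, and iteration count are illustrative, not the paper's circuit parameters.

```python
import numpy as np

def wta(u0, beta=2.0, dt=0.01, steps=5000):
    """Euler simulation of a simple winner-take-all circuit: each unit
    gets self-excitation (gain beta) and global inhibition equal to the
    summed rectified activity. Parameters are illustrative."""
    u = np.array(u0, float)
    for _ in range(steps):
        a = np.maximum(u, 0.0)            # rectified activations
        du = -u + beta * a - a.sum()      # leak + self-excitation - inhibition
        u = u + dt * du
    return u
```

Differences between units grow over time, so the smaller units are driven below zero one by one until only the initially largest unit remains active.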


Signal Processing Systems | 2008

Discovering Biclusters by Iteratively Sorting with Weighted Correlation Coefficient in Gene Expression Data

Li Teng; Lai-Wan Chan

We propose a framework for biclustering gene expression profiles. The framework applies the dominant-set approach to create sets of sorting vectors for sorting the rows of the data matrix, so that coexpressed rows of gene expression vectors can be gathered. We iteratively sort and transpose the gene expression data matrix to gather blocks of coexpressed subsets. A weighted correlation coefficient is used to measure similarity at both the gene level and the condition level; the weights are updated in each iteration using the sorting vector of the previous one. In this way, a highly correlated bicluster is located at one corner of the rearranged gene expression data matrix. We applied our approach to synthetic data and three real gene expression data sets with encouraging results. In addition, we propose the average correlation value (ACV) to evaluate the homogeneity of a bicluster or a data matrix. This criterion conforms to the intuitive biological notion of a coexpressed set of genes or samples and is compared with the mean squared residue score. ACV is found to be more appropriate for both additive and multiplicative models.
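The ACV criterion can be sketched directly from its description: the larger of the mean absolute off-diagonal row-row and column-column correlations. This is one reading of the definition, so details such as the use of absolute values should be treated as assumptions.

```python
import numpy as np

def acv(M):
    """Average correlation value of a matrix: the larger of the mean
    absolute off-diagonal correlation over row pairs and over column
    pairs. A value near 1 indicates a homogeneous (coexpressed) block."""
    def mean_offdiag_abscorr(A):
        C = np.abs(np.corrcoef(A))         # pairwise |correlation| of rows
        n = C.shape[0]
        return (C.sum() - n) / (n * n - n) # drop the unit diagonal
    return max(mean_offdiag_abscorr(M), mean_offdiag_abscorr(M.T))
```

An additive bicluster (each row is a shifted copy of a base pattern) scores 1, while a random matrix scores close to 0, matching the intended use as a homogeneity measure.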


Intelligent Data Engineering and Automated Learning | 2000

Applying Independent Component Analysis to Factor Model in Finance

Siu-ming Cha; Lai-Wan Chan

The factor model is a very useful and popular model in finance. In this paper, we show the relation between the factor model and blind source separation, and we propose using Independent Component Analysis (ICA) as a data mining tool to construct the underlying factors and hence obtain the corresponding sensitivities for the factor model.
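A minimal sketch of the proposal: run ICA on a return matrix R to recover independent factors F and a sensitivity (loading) matrix B with R ≈ B F. The symmetric FastICA variant below (tanh nonlinearity) is a standard substitute for whichever ICA algorithm the paper used, so treat it as an assumption.

```python
import numpy as np

def ica_factor_model(R, n_iter=200, seed=0):
    """Estimate the factor model R ~ B @ F by ICA: rows of F are
    independent factors, B holds the sensitivities. Minimal symmetric
    FastICA sketch (whitening + tanh fixed-point iteration)."""
    R = np.asarray(R, float)
    Rc = R - R.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Rc))
    V = E @ np.diag(d ** -0.5) @ E.T            # whitening matrix
    Z = V @ Rc
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Z.shape[0],) * 2)
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt                              # symmetric decorrelation
    F = W @ Z                                   # independent factors
    B = np.linalg.pinv(W @ V)                   # sensitivities (loadings)
    return F, B
```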


IEEE Transactions on Neural Networks | 1997

Yet another algorithm which can generate topography map

John Sum; Chi-Sing Leung; Lai-Wan Chan; Lei Xu

This paper presents an algorithm that forms a topographic map resembling the self-organizing map. The idea stems from defining an energy function that reveals the local correlation between neighboring neurons: the larger the value of the energy function, the higher the correlation of the neighboring neurons. Accordingly, the proposed algorithm is defined as gradient ascent on this energy function. Simulations on two-dimensional maps are illustrated.


International Conference on Pattern Recognition | 1998

Intra-block algorithm for digital watermarking

F. Y. Duan; Irwin King; Lai-Wan Chan; Lei Xu

We present a variant of the DCT-based block algorithm proposed in Hsu and Wu (1996) for signal embedding in digital images. Instead of inter-block relations, our algorithm uses intra-block relations to generate the watermarked image. We describe the algorithm and its performance against translation and cropping. The features of our method are: (1) the watermark is perceptually invisible; (2) there is little loss of relevant information from the original image; (3) the watermark can be retrieved using a secret key; and (4) the watermark is robust against translation and area cropping.
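The intra-block idea can be sketched as follows: one bit is embedded by enforcing an ordering between two mid-band DCT coefficients of the same 8x8 block, and extracted by reading that ordering back. The coefficient positions and the enforcement gap are illustrative choices, not the paper's parameters.

```python
import numpy as np

def dct2_matrix(N=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5)[None, :] * n[:, None] / N)
    C[0] /= np.sqrt(2.0)
    return C

def embed_bit(block, bit, p=(2, 1), q=(1, 2), gap=4.0):
    """Embed one bit in an 8x8 block by enforcing an ordering between
    two mid-band DCT coefficients of the SAME block (the intra-block
    relation). Positions p, q and the gap are illustrative."""
    C = dct2_matrix(8)
    D = C @ block @ C.T                     # forward 2-D DCT
    lo, hi = (q, p) if bit else (p, q)
    if D[hi] < D[lo] + gap:                 # enforce D[hi] >= D[lo] + gap
        mid = (D[p] + D[q]) / 2.0
        D[hi], D[lo] = mid + gap / 2.0, mid - gap / 2.0
    return C.T @ D @ C                      # inverse 2-D DCT

def extract_bit(block, p=(2, 1), q=(1, 2)):
    """Read the bit back from the coefficient ordering."""
    C = dct2_matrix(8)
    D = C @ block @ C.T
    return int(D[p] > D[q])
```

Because the relation is internal to each block, extraction survives block-aligned translation and cropping, which is the property the paper emphasizes.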


IEEE Transactions on Neural Networks | 1997

Stability and statistical properties of second-order bidirectional associative memory

Chi-Sing Leung; Lai-Wan Chan; Edmund Man Kit Lai

In this paper, a bidirectional associative memory (BAM) model with second-order connections, namely the second-order bidirectional associative memory (SOBAM), is first reviewed. The stability and statistical properties of the SOBAM are then examined. We use an example to illustrate that the stability of the SOBAM is not guaranteed, so the conventional energy approach cannot be used to estimate its memory capacity. We therefore develop the statistical dynamics of the SOBAM. Given that a small number of errors appear in the initial input, the dynamics shows how the number of errors varies during recall. We use the dynamics to estimate the memory capacity, the attraction basin, and the number of errors in the retrieved items. Extension of the results to higher-order bidirectional associative memories is also discussed.
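With the standard second-order correlation encoding (assumed here, not taken from the paper), the SOBAM recall step reduces to y = sgn(sum_p y_p (x_p . x)^2), and symmetrically for the backward pass:

```python
import numpy as np

def sobam_recall(X, Y, x, iters=5):
    """Second-order BAM recall sketch. X (P x n) and Y (P x m) hold P
    bipolar pattern pairs; with second-order correlation encoding, the
    update is y = sgn(sum_p y_p (x_p . x)^2) and its mirror image."""
    x = np.array(x, float)
    for _ in range(iters):
        y = np.sign(((X @ x) ** 2) @ Y)   # forward pass through 2nd-order weights
        x = np.sign(((Y @ y) ** 2) @ X)   # backward pass
    return x, y
```

Squaring the overlaps makes the stored pattern's contribution dominate much more strongly than in a first-order BAM, which is the capacity advantage the statistical dynamics quantifies.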

Collaboration


Dive into Lai-Wan Chan's collaborations.

Top Co-Authors

Kun Zhang (Carnegie Mellon University)
Irwin King (The Chinese University of Hong Kong)
Chi-Sing Leung (The Chinese University of Hong Kong)
John Sum (National Chung Hsing University)
P. C. Ching (The Chinese University of Hong Kong)
Haiqin Yang (The Chinese University of Hong Kong)
Tan Lee (The Chinese University of Hong Kong)
Lei Xu (Shanghai Jiao Tong University)
Furui Liu (The Chinese University of Hong Kong)