
Publication


Featured research published by Korris Fu-Lai Chung.


Neurocomputing | 2007

Applying the improved fuzzy cellular neural network IFCNN to white blood cell detection

Wang Shitong; Korris Fu-Lai Chung; Fu Duan

Although algorithm NDA based on the fuzzy cellular neural network (FCNN) has demonstrated basic superiority in its adaptability and easy hardware realization for microscopic white blood cell detection [Wang Shitong, Wang Min, A new algorithm NDA based on fuzzy cellular neural networks for white blood cell detection, IEEE Trans. Inf. Technol. Biomed., accepted], it still does not preserve the boundary integrity of a white blood cell very well. In this paper, an improved version of FCNN called IFCNN is proposed to tackle this issue. The distinctive characteristic of IFCNN is that it incorporates a novel fuzzy status term, containing useful information from beyond a white blood cell, into its state equation, thereby enhancing boundary integrity. Our theoretical analysis shows that IFCNN is globally stable, and the experimental results demonstrate its clear advantage over FCNN in preserving boundary integrity.
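The state-equation idea in the abstract can be sketched as follows: a cellular neural network updates each cell from its neighbours, and an extra context term (standing in for the paper's "fuzzy status") injects information from beyond the cell. All coefficients, the global-mean context term, and the grid size below are illustrative assumptions, not the paper's actual IFCNN equations.

```python
import numpy as np

def fcnn_step(state, image, alpha=0.5, beta=0.2, dt=0.1):
    """One Euler step of a simplified cellular-neural-network update.

    Each cell's state moves toward a mix of its 4-neighbour average
    (the CNN coupling) and the input pixel; `beta` plays the role of
    the extra "fuzzy status" term, pulling the state toward the global
    background mean. Coefficients are illustrative, not the paper's.
    """
    padded = np.pad(state, 1, mode="edge")          # replicate borders
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    background = image.mean()                        # crude global context
    dstate = (-state + alpha * neigh + (1 - alpha) * image
              + beta * (background - state))
    return state + dt * dstate

# toy 5x5 "image" with a bright blob in the centre
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
s = np.zeros_like(img)
for _ in range(50):
    s = fcnn_step(s, img)
```

After a few dozen steps the state inside the blob settles well above the background, while the context term keeps the transition at the blob boundary from smearing outward.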


Intelligent Data Analysis | 2005

Fuzzy taxonomy, quantitative database and mining generalized association rules

Wang Shitong; Korris Fu-Lai Chung; Shen Hongbin

Mining association rules from databases has remained a hot topic in the data mining community in recent years. Due to the existence of multiple levels of abstraction (i.e., taxonomic structures) among the attributes of databases, several algorithms were proposed to mine generalized Boolean association rules over all levels of presumed crisp taxonomic structures. However, fuzzy taxonomic structures may be more suitable in many practical applications. In [9], the authors proposed an approach to mine generalized Boolean association rules with such fuzzy taxonomic structures. The main contribution of this paper is to extend their idea to mine generalized association rules from quantitative databases with fuzzy taxonomic structures. A new fuzzy taxonomic quantitative database model is presented, and experimental results on realistic databases are demonstrated to validate this new model.
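The core idea of the fuzzy-taxonomy setting — an item contributes support to its ancestors in proportion to the membership degree of the is-a edge — can be sketched as below. The taxonomy, degrees, and max-combination rule are illustrative assumptions; the paper's quantitative-attribute model is not reproduced.

```python
# Fuzzy taxonomy: child -> {ancestor: membership degree of the is-a edge}.
# Degrees are invented for illustration (a tomato is "0.7 vegetable,
# 0.3 fruit" in this toy example).
taxonomy = {"tomato": {"vegetable": 0.7, "fruit": 0.3},
            "apple":  {"fruit": 1.0}}

def extend(transaction):
    """Map a transaction to item -> degree, propagating support to
    ancestors and max-combining when several children hit the same one."""
    degrees = {}
    for item in transaction:
        degrees[item] = max(degrees.get(item, 0.0), 1.0)
        for anc, mu in taxonomy.get(item, {}).items():
            degrees[anc] = max(degrees.get(anc, 0.0), mu)
    return degrees

def fuzzy_support(itemset, transactions):
    """Support = mean over transactions of the min degree in the itemset."""
    total = sum(min(extend(t).get(i, 0.0) for i in itemset)
                for t in transactions)
    return total / len(transactions)

db = [{"tomato"}, {"apple"}, {"tomato", "apple"}]
print(round(fuzzy_support({"fruit"}, db), 2))  # → 0.77
```

Note that "fruit" earns fractional support from the tomato-only transaction (0.3) and full support whenever an apple is present, which is exactly what a crisp taxonomy cannot express.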


Soft Computing | 2004

Note on the relationship between probabilistic and fuzzy clustering

W. Shitong; Korris Fu-Lai Chung; S. Hongbin; Z. Ruiqiang

In this short communication, based on the Rényi entropy measure, a new Rényi-information-based clustering algorithm A is presented. Algorithm A and the well-known fuzzy clustering algorithm FCM follow the same clustering track. This fact builds a bridge between probabilistic clustering and fuzzy clustering, and the fruitful research results on the Rényi entropy measure may help us to further understand the essence of fuzzy clustering.
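For reference, the FCM "clustering track" the note compares against is the alternating centre/membership update below; the Rényi-information algorithm A itself is not reproduced here, and the toy data is invented.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate centre and membership updates.
    Returns centres V (c x d) and membership matrix U (c x n)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                 # random fuzzy partition
    for _ in range(iters):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)     # centre update
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=0)                             # membership update
    return V, U

# two tight groups around (0,0) and (5,5)
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1]], float)
V, U = fcm(X)
```

Each column of U is a probability-like membership vector summing to one, which is the common ground with probabilistic clustering that the note exploits.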


Soft Computing | 2006

Robust maximum entropy clustering algorithm with its labeling for outliers

Wang Shi-tong; Korris Fu-Lai Chung; Deng Zhaohong; Hu Dewen; Wu Xisheng

In this paper, a novel robust maximum entropy clustering algorithm RMEC, an improved version of the maximum entropy algorithm MEC [2–4], is presented to overcome MEC's drawbacks: it is very sensitive to outliers and cannot easily label them. Algorithm RMEC incorporates Vapnik's ε-insensitive loss function and the new concept of weight factors into its objective function; consequently, its new update rules are derived according to Lagrangian optimization theory. Compared with algorithm MEC, the main contributions of algorithm RMEC lie in its much better robustness to outliers and the fact that it can effectively label outliers in the dataset using the obtained weight factors. Our experimental results demonstrate its superior performance in enhancing robustness and labeling outliers in the dataset.
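A minimal sketch of the two ingredients the abstract names — Vapnik's ε-insensitive loss and per-point weight factors used to label outliers. The one-centre setting, the weighting formula, and the 0.5 threshold are illustrative assumptions, not the Lagrangian-derived RMEC update rules.

```python
import numpy as np

def eps_insensitive(r, eps=0.5):
    """Vapnik's ε-insensitive loss: zero inside the ε-tube, linear outside."""
    return np.maximum(np.abs(r) - eps, 0.0)

def outlier_weights(X, centre, eps=0.5):
    """Weight each point by how far outside the ε-tube it lies; weights
    near 0 mark outliers. This 1-D, one-centre weighting is a stand-in
    for the weight factors RMEC derives from its objective function."""
    loss = eps_insensitive(X - centre, eps)
    return 1.0 / (1.0 + loss)

X = np.array([0.0, 0.1, -0.2, 0.05, 8.0])    # last point is an outlier
w = outlier_weights(X, centre=np.median(X))  # robust centre estimate
outliers = w < 0.5                           # True -> labelled as outlier
print(outliers.tolist())  # → [False, False, False, False, True]
```

Inliers sit entirely inside the ε-tube and keep weight 1, so the loss (and hence the clustering objective) is unaffected by small noise, while the distant point's weight collapses and flags it.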


Expert Systems With Applications | 2011

A subspace decision cluster classifier for text classification

Yan Li; Edward Hung; Korris Fu-Lai Chung

In this paper, a new classification method (SDCC) for high dimensional text data with multiple classes is proposed. In this method, a subspace decision cluster classification (SDCC) model consists of a set of disjoint subspace decision clusters, each labeled with a dominant class to determine the class of new objects falling in the cluster. A cluster tree is first generated from a training data set by recursively calling a subspace clustering algorithm, the Entropy Weighting k-Means algorithm. Then, the SDCC model is extracted from the subspace decision cluster tree. Various tests, including the Anderson-Darling test, are used to determine the stopping condition of the tree growing. A series of experiments on real text data sets has been conducted. The results show that the new classification method (SDCC) outperforms existing methods such as decision tree and SVM. SDCC is particularly suitable for large, high dimensional sparse text data with many classes.
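The decision-cluster idea — label each cluster with its dominant training class, then classify a new point by the cluster it falls into — can be sketched as follows. The tree growing, entropy weighting, and Anderson-Darling stopping test are omitted; the cluster centres here are fixed by hand.

```python
import numpy as np
from collections import Counter

def dominant_labels(X, y, centres):
    """Label each cluster with the dominant class among the training
    points assigned to it (the 'decision cluster' idea)."""
    assign = np.argmin(
        np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
    return [Counter(y[assign == k]).most_common(1)[0][0]
            for k in range(len(centres))]

def predict(X, centres, labels):
    """Classify each point by the label of its nearest decision cluster."""
    assign = np.argmin(
        np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
    return [labels[k] for k in assign]

X = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], float)
y = np.array(["spam", "spam", "ham", "ham"])
centres = np.array([[0.0, 0.5], [5.0, 5.5]])   # hand-picked for the sketch
labels = dominant_labels(X, y, centres)
print(predict(np.array([[0.2, 0.3], [4.8, 5.2]]), centres, labels))
```

Because each cluster carries a single precomputed label, prediction is one nearest-centre lookup, which is what makes the approach attractive for large, high-dimensional data.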


Applied Soft Computing | 2007

Robust fuzzy clustering neural network based on ε-insensitive loss function

Shitong Wang; Korris Fu-Lai Chung; Deng Zhaohong; Hu Dewen

In this paper, as an improvement of the fuzzy clustering neural network FCNN proposed by Zhang et al., a novel robust fuzzy clustering neural network RFCNN is presented to cope with the sensitivity of clustering when outliers exist. The new algorithm is based on Vapnik's ε-insensitive loss function and quadratic programming optimization. Our experimental results demonstrate that RFCNN has much better robustness to outliers than FCNN.


Australasian Joint Conference on Artificial Intelligence | 2008

Building a Decision Cluster Classification Model for High Dimensional Data by a Variable Weighting k-Means Method

Yan Li; Edward Hung; Korris Fu-Lai Chung; Joshua Zhexue Huang

In this paper, a new classification method (ADCC) for high dimensional data is proposed. In this method, a decision cluster classification model (DCC) consists of a set of disjoint decision clusters, each labeled with a dominant class that determines the class of new objects falling in the cluster. A cluster tree is first generated from a training data set by recursively calling a variable weighting k-means algorithm. Then, the DCC model is selected from the tree. The Anderson-Darling test is used to determine the stopping condition of the tree growing. A series of experiments on both synthetic and real data sets has shown that the new classification method (ADCC) performed better in accuracy and scalability than the existing methods of k-NN, decision tree and SVM. It is particularly suitable for large, high dimensional data with many classes.
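The variable-weighting ingredient can be illustrated in isolation: a per-feature weight in the distance lets an informative dimension dominate a noisy one. The weights below are set by hand; the actual algorithm learns them from the within-cluster dispersion of each feature.

```python
import numpy as np

def weighted_assign(X, centres, w):
    """Assign points by per-feature weighted squared distance, the core
    step of variable-weighting k-means: informative features get larger
    weights so noisy dimensions barely affect the assignment."""
    d = ((X[:, None, :] - centres[None, :, :]) ** 2 * w).sum(axis=2)
    return d.argmin(axis=1)

# feature 0 separates the clusters; feature 1 is large-scale noise
X = np.array([[1.0, 50.0],
              [9.0, 10.0]])
centres = np.array([[0.0, 0.0],
                    [10.0, 60.0]])
plain = weighted_assign(X, centres, np.array([1.0, 1.0]))    # noise wins
weighted = weighted_assign(X, centres, np.array([1.0, 0.0])) # feature 0 wins
print(plain.tolist(), weighted.tolist())  # → [1, 0] [0, 1]
```

With equal weights the noisy second feature flips both assignments; zeroing its weight recovers the grouping implied by the informative feature, which is the effect the learned weights approximate.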


IEEE International Conference on Fuzzy Systems | 2012

Double indices induced FCM clustering and its integration with fuzzy subspace clustering

Jun Wang; Shitong Wang; Zhaohong Deng; Korris Fu-Lai Chung

As one of the most popular algorithms for cluster analysis, fuzzy c-means (FCM) and its variants have been widely studied. In this paper, a novel generalized version called double indices-induced FCM (DI-FCM) is developed from another perspective. DI-FCM introduces a power exponent r into the constraints of the objective function such that the fuzziness index m is generalized, and a new criterion for selecting an appropriate fuzziness index m is defined. Furthermore, it can be explained from the viewpoint of entropy that the power exponent r facilitates the introduction of entropy-based constraints into fuzzy clustering algorithms. As an attractive and judicious application, DI-FCM is integrated with a fuzzy subspace clustering (FSC) algorithm so that a new fuzzy subspace clustering algorithm called double indices-induced fuzzy subspace clustering (DI-FSC) is proposed for high-dimensional data. DI-FSC replaces the commonly used Euclidean distance with a feature-weighted distance, resulting in two fuzzy matrices in the objective function. A convergence proof of DI-FSC is also established by applying Zangwill's convergence theorem. Several experiments on both artificial and real data were conducted, and the experimental results show the effectiveness of the proposed algorithm.
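The generalized constraint that the memberships' r-th powers sum to one can be shown as a simple renormalisation step: r = 1 recovers the ordinary FCM normalisation. This is only the constraint, not the paper's full DI-FCM update derived from the objective function.

```python
import numpy as np

def memberships(d, m=2.0, r=1.0):
    """Membership update under the generalized constraint sum_k u_k^r = 1.
    r = 1 gives the ordinary FCM normalisation; other r values show the
    'double index' generalisation as a renormalisation step only."""
    u = d ** (-2.0 / (m - 1))          # the usual inverse-distance kernel
    u = u / (u ** r).sum() ** (1.0 / r)  # enforce sum of u^r equal to 1
    return u

d = np.array([1.0, 2.0])               # distances from one point to 2 centres
u1 = memberships(d, r=1.0)             # → sums to 1
u2 = memberships(d, r=2.0)             # → squares sum to 1
```

Raising r spreads the constraint mass differently across clusters, which is the lever DI-FCM uses to generalize the fuzziness index m.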


Journal of the Association for Information Science and Technology | 2010

A new context-dependent term weight computed by boost and discount using relevance information

Edward Kai Fung Dang; Robert W. P. Luk; James Kingsley A. Allan; Kei Shiu Ho; Stephen Chi-fai Chan; Korris Fu-Lai Chung; Dik Lun Lee

A huge number of informal messages are posted every day in social network sites, blogs, and discussion forums. Emotions seem to be frequently important in these texts for expressing friendship, showing social support or as part of online arguments. Algorithms to identify sentiment and sentiment strength are needed to help understand the role of emotion in this informal communication and also to identify inappropriate or anomalous affective utterances, potentially associated with threatening behavior to the self or others. Nevertheless, existing sentiment detection algorithms tend to be commercially oriented, designed to identify opinions about products rather than user behaviors. This article partly fills this gap with a new algorithm, SentiStrength, to extract sentiment strength from informal English text, using new methods to exploit the de facto grammars and spelling styles of cyberspace. Applied to MySpace comments and with a lookup table of term sentiment strengths optimized by machine learning, SentiStrength is able to predict positive emotion with 60.6% accuracy and negative emotion with 72.8% accuracy, both based upon strength scales of 1–5. The former, but not the latter, is better than baseline and a wide range of general machine learning approaches.
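The lexicon-lookup idea described here can be sketched minimally: score a text's positive and negative strength on separate 1–5 scales by the strongest matching term, defaulting to 1 (no sentiment). The tiny lexicon and tokenisation below are invented for illustration and bear no relation to the optimized term weights the abstract mentions.

```python
# Hypothetical toy lexicon: term -> signed strength (positive 2..5,
# negative -2..-5); everything else scores 0.
lexicon = {"love": 4, "great": 3, "hate": -4, "bad": -2}

def sentiment_strength(text):
    """Return (positive, negative) strengths on 1-5 scales: each is the
    strongest matching term of that polarity, or 1 if none matches."""
    scores = [lexicon.get(w.strip(".,!?").lower(), 0) for w in text.split()]
    pos = max([s for s in scores if s > 0], default=1)
    neg = min([s for s in scores if s < 0], default=-1)
    return pos, -neg   # report both on a positive 1-5 scale

print(sentiment_strength("I love this but the ending was bad!"))  # → (4, 2)
```

Reporting positive and negative strength separately, rather than one net score, matches the dual 1–5 scales described in the abstract and lets a single message be both strongly positive and mildly negative.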


Pattern Recognition | 2005

A multiple classifier approach to detect Chinese character recognition errors

Kei Yuen Hung; Robert W. P. Luk; Daniel S. Yeung; Korris Fu-Lai Chung; Wenhao Shu

Detection of recognition errors is important in many areas, such as improving recognition performance, saving manual effort for proof-reading and post-editing, and assigning appropriate weights for retrieval in constructing digital libraries. We propose a novel application of multiple classifiers for the detection of recognition errors. A need for multiple classifiers emerges when a single classifier cannot improve recognition-error detection performance compared with the current detection scheme using a simple threshold mechanism. Although the single classifier does not improve recognition-error detection, it serves as a baseline for comparison, and the related study of useful features for error detection suggests three distinct cases where improvement is needed. For each case, the multiple classifier approach assigns a classifier to detect the presence or absence of errors, and additional features are considered for each case. Our results show that the recall rate (70-80%) of recognition errors, the precision rate (80-90%) of recognition error detection, and the saving in manual effort (75%) were better than the corresponding performance using a single classifier or a simple threshold detection scheme.
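The contrast between the simple threshold baseline and case-specific detectors can be sketched as follows; the thresholds, features, and the OR-combination are illustrative stand-ins for the paper's per-case trained classifiers.

```python
def threshold_detect(confidence, threshold=0.7):
    """The simple baseline: flag a recognised character as a likely
    error when the recogniser's confidence falls below a threshold."""
    return confidence < threshold

def detect_error(features, detectors):
    """Multiple-detector variant: flag an error if any case-specific
    detector fires (a simple OR-combination; the paper assigns one
    classifier per problem case, each with its own extra features)."""
    return any(d(features) for d in detectors)

detectors = [
    lambda f: f["confidence"] < 0.7,   # case 1: low recogniser confidence
    lambda f: f["top2_gap"] < 0.1,     # case 2: top-2 candidates too close
]

# The baseline misses a confident-but-ambiguous character;
# the case-specific detector catches it.
print(threshold_detect(0.9),
      detect_error({"confidence": 0.9, "top2_gap": 0.05}, detectors))
```

The point of the decomposition is that each detector sees only the features relevant to its case, so a confidently wrong recognition (which no single confidence threshold can flag) is still caught by the ambiguity detector.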

Collaboration


Dive into Korris Fu-Lai Chung's collaborations.

Top Co-Authors

Stephen Chi-fai Chan (Hong Kong Polytechnic University)
Robert W. P. Luk (Hong Kong Polytechnic University)
Jacky K. H. Shiu (Hong Kong Polytechnic University)
Daniel S. Yeung (Hong Kong Polytechnic University)
Edward Hung (Hong Kong Polytechnic University)
Kei Yuen Hung (Hong Kong Polytechnic University)
Wang Shitong (Hong Kong Polytechnic University)
Wenhao Shu (Hong Kong Polytechnic University)