Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xuchun Li is active.

Publication


Featured research published by Xuchun Li.


Engineering Applications of Artificial Intelligence | 2008

AdaBoost with SVM-based component classifiers

Xuchun Li; Lei Wang; Eric Sung

The use of SVM (Support Vector Machine) as the component classifier in AdaBoost may seem to go against the grain of the Boosting principle, since SVM is not an easy classifier to train. Moreover, Wickramaratna et al. [2001. Performance degradation in boosting. In: Proceedings of the Second International Workshop on Multiple Classifier Systems, pp. 11-21] show that AdaBoost with strong component classifiers is not viable. In this paper, we show that AdaBoost incorporating properly designed RBFSVM (SVM with the RBF kernel) component classifiers, which we call AdaBoostSVM, can perform as well as SVM. Furthermore, the proposed AdaBoostSVM demonstrates better generalization performance than SVM on imbalanced classification problems. The key idea of AdaBoostSVM is that, over the sequence of trained RBFSVM component classifiers, the kernel width σ starts large (implying weak learning) and is reduced progressively as the Boosting iteration proceeds. This effectively produces a set of RBFSVM component classifiers whose model parameters are adaptively different, resulting in better generalization than an AdaBoost approach whose SVM component classifiers use a fixed (optimal) σ value. On benchmark data sets, we show that our AdaBoostSVM approach outperforms other AdaBoost approaches using component classifiers such as Decision Trees and Neural Networks. AdaBoostSVM can be seen as a proof of concept of the idea proposed in Valentini and Dietterich [2004. Bias-variance analysis of support vector machines for the development of SVM-based ensemble methods. Journal of Machine Learning Research 5, 725-775] that AdaBoost with heterogeneous SVMs could work well. Moreover, we extend AdaBoostSVM to the Diverse AdaBoostSVM to address the reported accuracy/diversity dilemma of the original AdaBoost. By designing parameter-adjusting strategies, the distributions of accuracy and diversity over the RBFSVM component classifiers are tuned to maintain a good balance between them, and promising results have been obtained on benchmark data sets.
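
The σ-adjustment idea is simple enough to sketch. Below is a minimal, illustrative Python loop (not the authors' code) for AdaBoost with RBF-SVM component classifiers in which σ shrinks whenever the current component is too weak; labels are assumed to be in {-1, +1}, and sigma_ini, sigma_min and sigma_step are hypothetical parameters.

```python
import numpy as np
from sklearn.svm import SVC

def adaboost_svm(X, y, sigma_ini=10.0, sigma_min=0.5, sigma_step=0.9, max_iter=50):
    """Sketch of the AdaBoostSVM idea: RBF-SVM components with a shrinking sigma."""
    n = len(y)                        # y assumed in {-1, +1}
    w = np.full(n, 1.0 / n)           # boosting weights over samples
    sigma = sigma_ini
    learners, alphas = [], []
    for _ in range(max_iter):
        clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = w[pred != y].sum()
        if err > 0.5:                 # component too weak: make the kernel narrower
            sigma *= sigma_step
            if sigma < sigma_min:
                break
            continue
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, np.array(alphas)

def adaboost_svm_predict(learners, alphas, X):
    votes = sum(a * clf.predict(X) for clf, a in zip(learners, alphas))
    return np.sign(votes)
```

Here gamma = 1/(2σ²), so a large σ gives a wide, smooth kernel (a weak component) and a shrinking σ strengthens later components, mirroring the annealing described above.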


Neural Networks | 2005

2005 Special Issue: Generalized 2D principal component analysis for face image representation and recognition

Hui Kong; Lei Wang; E.K. Teoh; Xuchun Li; Jian-Gang Wang; Ronda Venkateswarlu

In the tasks of image representation, recognition and retrieval, a 2D image is usually transformed into a long 1D vector and modelled as a point in a high-dimensional vector space. This vector-space model brings much convenience and many advantages. However, it also leads to problems such as the curse-of-dimensionality dilemma and the Small Sample Size problem, and thus poses a series of challenges: for example, how to deal with numerical instability in image recognition, how to improve accuracy while lowering the computational complexity and storage requirement in image retrieval, and how to enhance image quality while reducing transmission time in image transmission. In this paper, these problems are solved, to some extent, by the proposed Generalized 2D Principal Component Analysis (G2DPCA). G2DPCA overcomes the limitations of the recently proposed 2DPCA (Yang et al., 2004) in the following aspects: (1) the essence of 2DPCA is clarified and a theoretical proof of why 2DPCA is better than Principal Component Analysis (PCA) is given; (2) 2DPCA often needs many more coefficients than PCA to represent an image, so a Bilateral-projection-based 2DPCA (B2DPCA) is proposed to remedy this drawback; (3) a Kernel-based 2DPCA (K2DPCA) scheme is developed and the relationship between K2DPCA and KPCA (Scholkopf et al., 1998) is explored. Experimental results in face image representation and recognition show the excellent performance of G2DPCA.
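
As a rough illustration of the bilateral-projection part, the sketch below (a simplified reading of B2DPCA, with hypothetical function and parameter names) builds the two directional covariance matrices of an image stack and projects every image from both sides, so that only k_rows x k_cols coefficients per image are kept.

```python
import numpy as np

def b2dpca(images, k_rows, k_cols):
    """Bilateral-projection 2DPCA sketch: A -> L.T @ A @ R for each image matrix A.
    `images` is an (n, h, w) array of face images."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Directional covariance matrices: (h, h) for rows, (w, w) for columns
    cov_rows = sum(a @ a.T for a in centered) / len(images)
    cov_cols = sum(a.T @ a for a in centered) / len(images)
    _, vec_r = np.linalg.eigh(cov_rows)     # eigh returns eigenvalues in ascending order
    _, vec_c = np.linalg.eigh(cov_cols)
    L = vec_r[:, ::-1][:, :k_rows]          # top-k left projection directions
    R = vec_c[:, ::-1][:, :k_cols]          # top-k right projection directions
    coeffs = np.einsum('hk,nhw,wl->nkl', L, centered, R)   # (n, k_rows, k_cols)
    return coeffs, L, R, mean

def b2dpca_reconstruct(coeffs, L, R, mean):
    return np.einsum('hk,nkl,wl->nhw', L, coeffs, R) + mean
```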


international conference on image processing | 2004

Multilabel SVM active learning for image classification

Xuchun Li; Lei Wang; Eric Sung

Image classification is an important task in computer vision. However, how to assign suitable labels to images is a subjective matter, especially when some images can be categorized into multiple classes simultaneously. Multilabel image classification addresses the setting in which each image can have one or multiple labels. Since manually labelling images is time-consuming and expensive, and multilabel images are particularly costly to annotate, we propose a multilabel SVM active learning method to reduce the human labelling effort. We also propose two selection strategies: the Max Loss strategy and the Mean Max Loss strategy. Experimental results on both artificial data and real-world images demonstrate the advantage of the proposed method.
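
The abstract does not spell out the two strategies, so the following is only a hedged sketch of the pool-based selection step under a one-vs-rest linear SVM model; the mean hinge-style score stands in for the paper's Max Loss / Mean Max Loss criteria, and select_queries and batch_size are illustrative names.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def select_queries(X_labeled, Y_labeled, X_pool, batch_size=10):
    """Pick the pool images whose multilabel predictions look least reliable.
    Y_labeled is a binary indicator matrix of shape (n_labeled, n_labels)."""
    clf = OneVsRestClassifier(LinearSVC()).fit(X_labeled, Y_labeled)
    margins = clf.decision_function(X_pool)              # (n_pool, n_labels)
    # Samples lying inside the margin (|f| < 1) of many binary SVMs are informative.
    loss = np.maximum(0.0, 1.0 - np.abs(margins)).mean(axis=1)
    return np.argsort(loss)[-batch_size:]                # indices to send for labelling
```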


international symposium on neural networks | 2005

Generalized 2D principal component analysis

Hui Kong; Xuchun Li; Lei Wang; Eam Khwang Teoh; Jian-Gang Wang; Ronda Venkateswarlu

A two-dimensional principal component analysis (2DPCA) was proposed by J. Yang et al. (2004), who demonstrated its superiority over conventional principal component analysis (PCA) in face recognition, but a theoretical proof of why 2DPCA is better than PCA had not been given until now. In this paper, the essence of 2DPCA is analyzed and a framework of generalized 2D principal component analysis (G2DPCA) is proposed to extend the original 2DPCA in two directions: a bilateral-projection-based 2DPCA (B2DPCA) scheme and a kernel-based 2DPCA (K2DPCA) scheme are introduced. Experimental results in face recognition show its excellent performance.


international symposium on neural networks | 2005

A study of AdaBoost with SVM based weak learners

Xuchun Li; Lei Wang; Eric Sung

In this article, we focus on designing an algorithm, named AdaBoostSVM, that uses SVM as the weak learner for AdaBoost. To obtain a set of effective SVM weak learners, this algorithm adaptively adjusts the kernel parameter in SVM instead of using a fixed one. Compared with existing AdaBoost methods, AdaBoostSVM has the advantages of easier model selection and better generalization performance. It also provides a possible way to handle the over-fitting problem in AdaBoost. An improved version called Diverse AdaBoostSVM is further developed to deal with the accuracy/diversity dilemma in Boosting methods. By implementing parameter-adjusting strategies, the distributions of accuracy and diversity over these SVM weak learners are tuned to achieve a good balance. To the best of our knowledge, a mechanism that can conveniently and explicitly balance this dilemma has not been seen in the literature. Experimental results demonstrate that both proposed algorithms achieve better generalization performance than AdaBoost using other kinds of weak learners. Benefiting from the balance between accuracy and diversity, Diverse AdaBoostSVM achieves the best performance. In addition, experiments on unbalanced data sets show that AdaBoostSVM performs much better than SVM.
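
One plausible way to make the accuracy/diversity trade-off explicit is a disagreement filter inside the boosting loop, as sketched below; the measure and the acceptance rule are illustrative, not the exact strategy of the paper.

```python
import numpy as np

def disagreement(pred_new, ensemble_preds):
    """Average fraction of training samples on which a candidate weak learner
    disagrees with the weak learners already kept (an illustrative diversity measure)."""
    if not ensemble_preds:
        return 1.0
    return float(np.mean([np.mean(pred_new != p) for p in ensemble_preds]))

# Inside a Diverse-AdaBoostSVM-style loop, a candidate RBF-SVM would be kept only
# if it is both accurate enough and diverse enough, e.g.:
#     if err < 0.5 and disagreement(pred, kept_preds) > diversity_threshold:
#         accept the weak learner; otherwise adjust sigma and retrain.
```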


british machine vision conference | 2005

Discriminant Low-dimensional Subspace Analysis for Face Recognition with Small Number of Training Samples

Hui Kong; Xuchun Li; Jian-Gang Wang; Eam Khwang Teoh; Chandra Kambhamettu

In this paper, a framework of Discriminant Low-dimensional Subspace Analysis (DLSA) is proposed to deal with the Small Sample Size (SSS) problem in face recognition. Firstly, it is rigorously proven that the null space of the total covariance matrix, S_t, is useless for recognition. Therefore, a framework of Fisher discriminant analysis in a low-dimensional space is developed by projecting all the samples onto the range space of S_t. Two algorithms are proposed in this framework: Unified Linear Discriminant Analysis (ULDA) and Modified Linear Discriminant Analysis (MLDA). ULDA extracts discriminant information from three subspaces of this low-dimensional space. MLDA adopts a modified Fisher criterion which avoids the singularity problem of conventional LDA. Experimental results on a large combined database demonstrate that the proposed ULDA and MLDA both achieve better recognition accuracy than other state-of-the-art LDA-based algorithms.
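
A minimal sketch of the range-space projection followed by a regularized Fisher step is given below; the ridge term eps*I stands in for the paper's modified criterion, and mlda is a hypothetical function name.

```python
import numpy as np
from scipy.linalg import eigh

def mlda(X, y, eps=1e-6):
    """Project onto range(S_t), then solve a regularized Fisher criterion there."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, _ = np.linalg.svd(Xc.T, full_matrices=False)
    P = U[:, s > eps]                    # basis of the range space of S_t (d x r)
    Z = Xc @ P                           # samples in the low-dimensional space
    classes = np.unique(y)
    r = Z.shape[1]
    Sw, Sb = np.zeros((r, r)), np.zeros((r, r))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)             # class mean (the global mean is already zero)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc, mc)
    # Generalized eigenproblem Sb v = lambda (Sw + eps*I) v, largest eigenvalues last
    _, V = eigh(Sb, Sw + eps * np.eye(r))
    W = V[:, ::-1][:, :len(classes) - 1]
    return P @ W, mean                   # features: (x - mean) @ (P @ W)
```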


international symposium on neural networks | 2005

Sequential bootstrapped support vector machines - a SVM accelerator

Xuchun Li; Yan Zhu; Eric Sung

Support vector machines have obtained much success in machine learning, but training one requires solving a quadratic programming (QP) problem, so the training time increases dramatically with the size of the training set. Hence, a standard SVM with batch learning has difficulty handling large-scale problems. In this paper, we introduce an SVM accelerator, called sequential bootstrapped SVM (SeqSVM), to speed up SVM training. SeqSVM first trains an SVM classifier on a small part of the training samples and then keeps selecting the so-called convex hull samples from the given large training set to retrain this SVM until all convex hull samples are selected. The key principle in our method is to help the SVM pick the convex hull sample that is wrongly classified by the current SVM and furthest from the current SVM solution. The convex hull sample that disagrees most with the SVM solution will lie on the convex hull of each class distribution, and, in the case of linearly separable classes, all support vectors lie on the convex hull. Two difficulties have to be overcome. The first is that SeqSVM may take too many iterations if there are too many support vectors. The second is that when the class distributions are not separable, it is not easy to pick convex hull samples. In this paper, we show how these two difficulties are overcome. Experimental results on both an artificial database and benchmark databases demonstrate the effectiveness of the proposed algorithm in reducing the learning time of SVM on the whole training set.
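
A minimal sketch of such a sequential working-set loop, assuming binary labels in {-1, +1}, is shown below; the margin-violation heuristic approximates the convex-hull selection described above, and init_size and batch are illustrative parameters.

```python
import numpy as np
from sklearn.svm import SVC

def seq_svm(X, y, init_size=100, batch=50, max_rounds=200, seed=0):
    """SeqSVM-style accelerator sketch: grow a small working set instead of
    solving one large QP over the whole training set."""
    rng = np.random.default_rng(seed)
    active = set(rng.choice(len(y), size=min(init_size, len(y)), replace=False).tolist())
    clf = SVC(kernel="rbf", gamma="scale")
    for _ in range(max_rounds):
        idx = np.fromiter(active, dtype=int)
        clf.fit(X[idx], y[idx])
        f = clf.decision_function(X)       # signed scores for all training samples
        violation = -y * f                 # positive where the current SVM is wrong
        violation[idx] = -np.inf           # ignore samples already in the working set
        worst = np.argsort(violation)[-batch:]
        worst = worst[violation[worst] > 0]   # wrongly classified, furthest from the solution
        if len(worst) == 0:                # everything outside the set is classified correctly
            break
        active.update(worst.tolist())
    return clf
```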


acm multimedia | 2005

A novel framework for SVM-based image retrieval on large databases

Lei Wang; Xuchun Li; Ping Xue; Kap Luk Chan

In this paper, a novel framework is proposed to deliver fast, robust, and generally applicable SVM-based image retrieval for large databases. A quick test scheme is developed and, after analyzing the relationship between the two, on-line kernel learning is employed to realize it. An upper bound on the maximum test scope is then derived to speed up testing further. The general applicability is well maintained because this framework does not require a kernel function or index structure to be pre-defined. Taking advantage of this framework, a more sophisticated SVM can be used to improve retrieval performance while keeping the response time short. Experimental results on large image databases verify the effectiveness and efficiency of the proposed framework.


international conference on biometrics | 2006

Ensemble LDA for face recognition

Hui Kong; Xuchun Li; Jian-Gang Wang; Chandra Kambhamettu

Linear Discriminant Analysis (LDA) is a popular feature extraction technique for face image recognition and retrieval. However, it often suffers from the small sample size problem when dealing with high-dimensional face data. Two-step LDA (PCA+LDA) [1][2][3] is a class of conventional approaches to address this problem, but in many cases these LDA classifiers are overfitted to the training set and discard some useful discriminative information. In this paper, by analyzing the overfitting problem of the two-step LDA approach, a framework of Ensemble Linear Discriminant Analysis (EnLDA) is proposed for face recognition with a small number of training samples. In EnLDA, a Boosting-LDA (B-LDA) scheme and a Random Sub-feature LDA (RS-LDA) scheme are combined to construct the ensemble of weak LDA classifiers. By combining these weak LDA classifiers with majority voting, recognition accuracy can be significantly improved. Extensive experiments on two public face databases verify the superiority of the proposed EnLDA over state-of-the-art algorithms in recognition accuracy.
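
As a hedged illustration of the random sub-feature branch, the sketch below trains LDA members on random feature subsets and combines them by majority vote; the Boosting-LDA branch is omitted, and the function names and parameters are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rs_lda_ensemble(X, y, n_members=20, feat_frac=0.5, seed=0):
    """Train each weak LDA classifier on a random subset of the feature dimensions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    k = max(1, int(feat_frac * d))
    members = []
    for _ in range(n_members):
        feats = rng.choice(d, size=k, replace=False)
        members.append((feats, LinearDiscriminantAnalysis().fit(X[:, feats], y)))
    return members

def predict_majority(members, X):
    """Combine member predictions by majority vote (labels assumed to be
    non-negative integers)."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```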


Collaboration


Dive into Xuchun Li's collaborations.

Top Co-Authors

Lei Wang (Information Technology University)
Eric Sung (Nanyang Technological University)
Hui Kong (Nanyang Technological University)
Jian-Gang Wang (Nanyang Technological University)
E.K. Teoh (Nanyang Technological University)
Eam Khwang Teoh (Nanyang Technological University)
Kap Luk Chan (Nanyang Technological University)
Ping Xue (Nanyang Technological University)
Yan Zhu (Nanyang Technological University)