Cheng-Hsuan Li
National Chiao Tung University
Publication
Featured research published by Cheng-Hsuan Li.
IEEE Transactions on Geoscience and Remote Sensing | 2012
Cheng-Hsuan Li; Bor-Chen Kuo; Chin-Teng Lin; Chih-Sheng Huang
Recent studies show that hyperspectral image classification techniques that use both spectral and spatial information are more suitable, effective, and robust than those that use only spectral information. Using a spatial-contextual term, this study modifies the decision function and constraints of a support vector machine (SVM) and proposes two kinds of spatial-contextual SVMs for hyperspectral image classification. One machine, which is based on the concept of Markov random fields (MRFs), uses the spatial information in the original space (SCSVM). The other machine uses the spatial information in the feature space (SCSVMF), i.e., the nearest neighbors in the feature space. The SCSVM is better able to classify pixels of different class labels with similar spectral values and to deal with data that have no clear numerical interpretation. To evaluate the effectiveness of SCSVM, the experiments in this study compare its performance with those of other classifiers: an SVM, a context-sensitive semisupervised SVM, a maximum likelihood (ML) classifier, a Bayesian contextual classifier based on MRFs (ML_MRF), and a nearest neighbor classifier. Experimental results show that the proposed method achieves good classification performance on well-known hyperspectral images (the Indian Pine site (IPS) and the Washington, DC mall data sets). The overall classification accuracy on the IPS data set with 16 classes is 95.5%. The kappa accuracy is up to 94.9%, and the average accuracy of each class is up to 94.2%.
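The SCSVM formulation itself modifies the SVM's decision function and constraints; as a simplified illustration of the spatial-contextual idea only (not the authors' formulation), one can append each pixel's 4-neighborhood spectral mean to its spectral vector before training a standard SVM:

```python
import numpy as np

def augment_with_spatial_mean(cube):
    """Append the 4-neighborhood spectral mean to each pixel.

    cube: (H, W, B) hyperspectral image.
    Returns an (H, W, 2B) array: [spectral vector, neighborhood mean].
    Edge pixels reuse their own spectrum via edge padding.
    """
    padded = np.pad(cube, ((1, 1), (1, 1), (0, 0)), mode="edge")
    up = padded[:-2, 1:-1]
    down = padded[2:, 1:-1]
    left = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    neigh_mean = (up + down + left + right) / 4.0
    return np.concatenate([cube, neigh_mean], axis=-1)
```

The augmented pixels can then be flattened to an (H*W, 2B) matrix and fed to any off-the-shelf SVM.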
IEEE Transactions on Fuzzy Systems | 2011
Cheng-Hsuan Li; Bor-Chen Kuo; Chin-Teng Lin
Research has shown fuzzy c-means (FCM) clustering to be a powerful tool for partitioning samples into different categories. However, the objective function of FCM is based only on the sum of distances of samples to their cluster centers, which is equal to the trace of the within-cluster scatter matrix. In this study, we propose a clustering algorithm based on both within- and between-cluster scatter matrices, extended from linear discriminant analysis (LDA), and apply it to unsupervised feature extraction (FE). Our proposed methods comprise between- and within-cluster scatter matrices modified from the between- and within-class scatter matrices of LDA; the scatter matrices of LDA are special cases of our proposed unsupervised scatter matrices. The results of experiments on both synthetic and real data show that the proposed clustering algorithm can generate clustering results similar to or better than those of 11 popular clustering algorithms: K-means, K-medoid, FCM, Gustafson-Kessel, Gath-Geva, possibilistic c-means (PCM), fuzzy PCM, possibilistic FCM, fuzzy compactness and separation, a fuzzy clustering algorithm based on a fuzzy treatment of finite mixtures of multivariate Student's t distributions, and a fuzzy mixture of Student's t factor analyzers. The results also show that the proposed FE outperforms principal component analysis and independent component analysis.
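For reference, the standard FCM baseline that the proposed scatter-matrix-based algorithm extends alternates two updates — cluster centers from memberships, then memberships from distances; a minimal sketch:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means.

    X: (n, d) data; c: number of clusters; m: fuzzifier (> 1).
    Returns (centers, U) with memberships U of shape (n, c),
    each row summing to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # Weighted cluster centers from current memberships.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Squared distance of every sample to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        d2 = np.maximum(d2, 1e-12)
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

The objective being minimized is the membership-weighted within-cluster distance sum, which is exactly the trace-of-within-cluster-scatter quantity the abstract refers to.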
International Geoscience and Remote Sensing Symposium | 2010
Cheng-Hsuan Li; Chin-Teng Lin; Bor-Chen Kuo; Hui-Shan Chu
The support vector machine (SVM) is one of the most powerful techniques for supervised classification. However, the performance of an SVM depends on choosing a proper kernel function and proper kernel parameters. Applying k-fold cross-validation (CV) to find a near-optimal parameter is extremely time consuming, and the search range and fineness of the grid must be determined in advance. In this paper, an automatic method for selecting the parameter of the RBF kernel function is proposed. In the experiments, the proposed method requires far less time than k-fold CV to select the parameter. Moreover, the corresponding SVMs achieve accuracy better than or at least equal to that of SVMs whose parameter is determined by k-fold CV.
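The paper's automatic selection procedure is not reproduced here; as an illustration of parameter selection without grid search, a widely used closed-form alternative is the median-distance heuristic for the RBF width (an assumption of this sketch, not the authors' method):

```python
import numpy as np

def rbf_gamma_median_heuristic(X):
    """Set gamma = 1 / (2 * median^2) of pairwise Euclidean distances.

    A standard heuristic for K(x, y) = exp(-gamma * ||x - y||^2); it
    avoids the grid search (and its preset search range) that k-fold
    cross-validation requires.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # Median over distinct pairs only (exclude the zero diagonal).
    iu = np.triu_indices(len(X), k=1)
    med = np.sqrt(np.median(d2[iu]))
    return 1.0 / (2.0 * med ** 2)
```

The resulting gamma can be passed directly to any RBF-kernel SVM implementation.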
Journal of Information Science and Engineering | 2012
Cheng-Hsuan Li; Hsin-Hua Ho; Yu-Lung Liu; Chin-Teng Lin; Bor-Chen Kuo; Jin-Shiuh Taur
The support vector machine (SVM) is one of the most powerful techniques for supervised classification. However, the performance of an SVM depends on choosing a proper kernel function and proper kernel parameters. Applying k-fold cross-validation (CV) to find a near-optimal parameter is extremely time consuming, and the search range and fineness of the grid must be determined in advance. In this paper, an automatic method for selecting the parameter of the normalized kernel function is proposed. In the experiments, the proposed method requires far less time than k-fold CV to select the parameter. Moreover, the corresponding SVMs achieve accuracy better than or at least equal to that of SVMs whose parameter is determined by k-fold CV.
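The normalized kernel itself has a simple closed form, K̃(x, y) = K(x, y) / √(K(x, x) K(y, y)), which rescales any base kernel so that every point has unit self-similarity; a minimal sketch for a precomputed Gram matrix:

```python
import numpy as np

def normalize_kernel(K):
    """Normalize a Gram matrix so its diagonal becomes 1.

    K: (n, n) symmetric positive semidefinite kernel matrix with a
    strictly positive diagonal.
    """
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)
```

This is equivalent to projecting all feature-space images onto the unit sphere, so only angular information between samples remains.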
International Geoscience and Remote Sensing Symposium | 2011
Cheng-Hsuan Li; Hui-Shan Chu; Bor-Chen Kuo; Chin-Teng Lin
Feature extraction plays an essential role in hyperspectral image classification. Linear discriminant analysis (LDA) is a commonly used feature extraction (FE) method for resolving the Hughes phenomenon in classification. The Hughes phenomenon (also called the curse of dimensionality) is often encountered when the dimensionality of the space grows while the size of the training set is fixed, especially in the small sample size problem. Recent studies show that spatial information can greatly improve classification performance. Hence, for hyperspectral image classification, it is necessary not only to use the available spectral information but also to exploit the spatial information. In this paper, spatial information is acquired through the concept of the Markov random field (MRF) and is used to form the membership values of every pixel in the hyperspectral image. The experimental results on two hyperspectral images, the Washington DC Mall and the Indian Pine Site, show that the proposed method can yield better classification performance than LDA in the small sample size problem.
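For reference, the within- and between-class scatter matrices that LDA (and its membership-weighted extensions) build on can be formed directly from labeled samples; a minimal sketch:

```python
import numpy as np

def lda_scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices.

    X: (n, d) samples; y: (n,) integer class labels.
    LDA seeks projections maximizing between-class scatter relative
    to within-class scatter.
    """
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        # Scatter of class samples around their own class mean.
        Sw += (Xc - mc).T @ (Xc - mc)
        # Scatter of class means around the overall mean.
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb
```

A useful sanity check is that Sw + Sb equals the total scatter matrix of the data.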
IEEE International Conference on Fuzzy Systems | 2011
Hui-Shan Chu; Bor-Chen Kuo; Cheng-Hsuan Li; Chin-Teng Lin
Linear discriminant analysis (LDA) is a commonly used feature extraction (FE) method for resolving the Hughes phenomenon in classification. The Hughes phenomenon (also called the curse of dimensionality) is often encountered when the dimensionality of the space grows while the size of the training set is fixed, especially in the small sample size problem. Recent studies show that spatial information can greatly improve classification performance. Hence, for hyperspectral image classification, it is necessary not only to use the available spectral information but also to exploit the spatial information. In this paper, a semisupervised feature extraction method is proposed, which is based on the scatter matrices of fuzzy-type LDA and exploits semilabeled information. The experimental results on two hyperspectral images, the Washington DC Mall and the Indian Pine Site, show that the proposed method can yield better classification performance than LDA in the small sample size problem.
International Geoscience and Remote Sensing Symposium | 2010
I-Ling Chen; Cheng-Hsuan Li; Bor-Chen Kuo; Hsiao-Yun Huang
In kernel methods, it is very important to choose a proper kernel function to avoid overlapping data. Based on this fact, this paper applies a unified kernel optimization framework to hyperspectral image classification in order to enlarge the margin between different classes, employing, within that framework, the Fisher discriminant criterion formulated in a pairwise manner as the objective function for optimizing the kernel in kernel-based nonparametric weighted feature extraction. The experimental results show the superiority of the optimized kernel over the RBF kernel tuned by 5-fold cross-validation, especially in the small sample size problem.
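The pairwise Fisher discriminant criterion used as the objective rewards large between-class separation relative to within-class spread; for two classes along a single (kernel-induced) dimension it reduces to:

```python
import numpy as np

def fisher_criterion_1d(a, b):
    """Fisher ratio J = (mean_a - mean_b)^2 / (var_a + var_b).

    a, b: one-dimensional samples of the two classes; larger J means
    the classes are better separated relative to their spread.
    """
    num = (a.mean() - b.mean()) ** 2
    den = a.var() + b.var()
    return num / den
```

Kernel optimization then amounts to searching kernel parameters (or combination weights) that maximize such pairwise ratios in the induced feature space.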
International Geoscience and Remote Sensing Symposium | 2008
Cheng-Hsuan Li; Bor-Chen Kuo; Chin-Teng Lin; Chih-Cheng Hung
Feature extraction is usually applied for dimension reduction in hyperspectral data classification problems. Many studies show that nonparametric weighted feature extraction (NWFE) is a powerful tool for extracting hyperspectral image features. The detection of class boundaries is an important part of NWFE, and the weighted mean is defined for this purpose. In this paper, a kernel-based feature extraction is proposed based on a new class boundary detection mechanism. The soft-margin support vector machine (SVM) binary classifier and the support vector domain description (SVDD) are applied to detect the boundaries between two classes and within one class, respectively. The results of real data experiments show that the proposed method outperforms the original NWFE.
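The nonparametric weighted mean at the heart of NWFE emphasizes samples near class boundaries by weighting them inversely to distance; a minimal sketch of that weighting (the SVM/SVDD boundary detectors proposed in the paper are not reproduced):

```python
import numpy as np

def nonparametric_weighted_mean(x, Xc, eps=1e-12):
    """Distance-weighted local mean of class samples Xc as seen from x.

    x: (d,) query sample; Xc: (m, d) samples of one class.
    Weights are inverse Euclidean distances normalized to sum to 1,
    so samples close to x (i.e., near the boundary) dominate the mean.
    """
    d = np.sqrt(((Xc - x) ** 2).sum(axis=1)) + eps
    w = 1.0 / d
    w /= w.sum()
    return w @ Xc
```

In NWFE these local means replace the global class means when building the scatter matrices, which sharpens the representation of class boundaries.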
Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing | 2009
Bor-Chen Kuo; Chun-Hsiang Chuang; Cheng-Hsuan Li; Chin-Teng Lin
In a typical supervised classification task, the size of the training data fundamentally affects the generality of a classifier. Given a finite, fixed-size training set, the classification result may degrade as the number of features (the dimensionality) increases. Many studies have demonstrated that multiple classifier systems (MCS), or so-called ensembles, can alleviate small sample size and high dimensionality concerns and obtain better and more robust results than single models. One effective approach for generating an ensemble of diverse base classifiers is the use of different feature subsets, as in the random subspace method (RSM). The objective of this research is to develop a novel ensemble technique based on clustering algorithms to strengthen RSM. The results of real data experiments show that the proposed method obtains sound performance, especially when fewer classifiers are used.
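A minimal sketch of the plain RSM baseline, using a nearest-class-mean base learner as a stand-in (this is not the paper's cluster-based variant): each base model sees a random subset of features, and the ensemble takes a majority vote.

```python
import numpy as np

def rsm_predict(X_train, y_train, X_test, n_models=11, subspace=2, seed=0):
    """Random subspace ensemble with a nearest-class-mean base learner.

    Each model is trained on `subspace` randomly chosen feature columns;
    the ensemble prediction is the majority vote over all models.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for _ in range(n_models):
        # Random feature subset for this base model.
        cols = rng.choice(X_train.shape[1], size=subspace, replace=False)
        means = np.stack([X_train[y_train == c][:, cols].mean(axis=0)
                          for c in classes])
        # Assign each test sample to the nearest class mean.
        d2 = ((X_test[:, cols][:, None, :] - means[None]) ** 2).sum(-1)
        pred = d2.argmin(axis=1)
        votes[np.arange(len(X_test)), pred] += 1
    return classes[votes.argmax(axis=1)]
```

Because every model sees different features, the base classifiers disagree in useful ways, which is what gives the ensemble its robustness in high-dimensional, small-sample settings.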
International Conference on Technologies and Applications of Artificial Intelligence | 2010
I-Ling Chen; Kai-Chih Pai; Bor-Chen Kuo; Cheng-Hsuan Li
One of the most popular and simple pattern classification algorithms is the k-nearest neighbor rule. However, it often fails to work well when patterns of different classes overlap in some regions of the feature space. To overcome this problem, many recent studies have developed adaptive or discriminative metrics to improve its classification performance. In this paper, we propose a simple adaptive nearest neighbor rule whose distance measure serves two objectives: to separate overlapping data and to avoid the influence of outliers. The experimental results show that our method is robust to the choice of k and outperforms the k-nearest neighbor classifier.
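The baseline being improved here is the plain k-nearest neighbor rule; a minimal sketch (the proposed adaptive distance measure is not reproduced):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-NN with Euclidean distance and majority vote.

    X_train: (n, d) training samples; y_train: (n,) labels;
    X_test: (m, d) query samples. Returns (m,) predicted labels.
    """
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    preds = []
    for row in nn:
        labels, counts = np.unique(y_train[row], return_counts=True)
        preds.append(labels[counts.argmax()])
    return np.array(preds)
```

Adaptive variants like the one proposed replace the fixed Euclidean distance with a locally scaled measure, so that overlapping regions and outliers distort the neighborhoods less.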