Jiang-She Zhang
Xi'an Jiaotong University
Publications
Featured research published by Jiang-She Zhang.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000
Yee Leung; Jiang-She Zhang; Zongben Xu
In pattern recognition and image processing, the major application areas of cluster analysis, human eyes seem to possess a singular aptitude to group objects and find important structures in an efficient and effective way. Thus, a clustering algorithm simulating a visual system may solve some basic problems in these areas of research. From this point of view, we propose a new approach to data clustering by modeling the blurring effect of lateral retinal interconnections based on scale space theory. In this approach, a data set is considered as an image with each light point located at a datum position. As we blur this image, smaller light blobs merge into larger ones until the whole image becomes one light blob at a low enough level of resolution. By identifying each blob with a cluster, the blurring process generates a family of clusterings along the hierarchy. The advantages of the proposed approach are: 1) The derived algorithms are computationally stable and insensitive to initialization, and they are totally free from solving difficult global optimization problems. 2) It facilitates the construction of new checks on cluster validity and provides the final clustering with a significant degree of robustness to noise in data and change in scale. 3) It is more robust in cases where hyperellipsoidal partitions may not be assumed. 4) It is suitable for the task of preserving the structure and integrity of the outliers in the clustering process. 5) The clustering is highly consistent with that perceived by human eyes. 6) The new approach provides a unified framework for scale-related clustering algorithms derived from many different fields such as estimation theory, recurrent signal processing on self-organizing feature maps, information theory and statistical mechanics, and radial basis function neural networks.
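The blurring process can be approximated numerically by letting each data point climb uphill on a Gaussian-blurred intensity surface and grouping points that settle on the same blob. The following minimal Python sketch illustrates this idea at a single scale sigma; the function and parameter names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def scale_space_clusters(X, sigma, n_iter=100, tol=1e-6):
    """Illustrative sketch: move each point toward the local (Gaussian-weighted)
    mean of the data until convergence, then merge points whose limits coincide.
    Not the paper's exact blurring algorithm."""
    centres = X.astype(float).copy()
    for _ in range(n_iter):
        moved = np.zeros_like(centres)
        for i, c in enumerate(centres):
            w = np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * sigma ** 2))
            moved[i] = (w[:, None] * X).sum(axis=0) / w.sum()
        converged = np.max(np.abs(moved - centres)) < tol
        centres = moved
        if converged:
            break
    # points whose limits coincide (up to a fraction of sigma) form one blob
    labels = np.unique(np.round(centres / (0.1 * sigma)), axis=0,
                       return_inverse=True)[1]
    return labels, centres

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    labels, _ = scale_space_clusters(X, sigma=0.5)
    print(np.unique(labels))  # expect two blobs at this scale
```

Running the sketch at larger sigma merges the two blobs into one, which is the hierarchy of clusterings the abstract describes.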
IEEE Transactions on Fuzzy Systems | 2004
Jiang-She Zhang; Yiu-Wing Leung
A possibilistic approach was proposed in a previous paper for C-means clustering, and two algorithms realizing this approach were reported in two previous papers. Although the possibilistic approach is sound, these two algorithms tend to find identical clusters. In this paper, we modify and improve these algorithms to overcome their shortcoming. The numerical results demonstrate that the improved algorithms can determine proper clusters and they can realize the advantages of the possibilistic approach.
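For reference, the typicality update at the core of possibilistic C-means has the standard form u_ij = 1 / (1 + (d_ij^2 / eta_i)^(1/(m-1))), where eta_i is a per-cluster scale parameter. The sketch below computes this quantity only; it does not reproduce the improved algorithms proposed in the paper.

```python
import numpy as np

def pcm_memberships(X, centers, eta, m=2.0):
    """Standard possibilistic C-means typicality values (illustrative only).
    X: (n, d) data, centers: (c, d), eta: (c,) per-cluster scales."""
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # (c, n)
    return 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))
```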
Pattern Recognition Letters | 2008
Chun-Xia Zhang; Jiang-She Zhang
This paper presents a novel ensemble classifier generation technique, RotBoost, which is constructed by combining Rotation Forest and AdaBoost. Experiments conducted with 36 real-world data sets from the UCI repository, with a classification tree adopted as the base learning algorithm, demonstrate that RotBoost produces ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Bias and variance decompositions of error are employed to gain more insight into the considered classification methods: RotBoost simultaneously reduces the bias and variance terms of a single tree, and the reduction it achieves is much greater than that of the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost in that it is well suited to parallel execution.
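A rough sketch of the RotBoost idea, assuming scikit-learn is available, is given below: each ensemble member applies a Rotation-Forest style block-diagonal PCA rotation built from random feature subsets and then runs AdaBoost with tree base learners in the rotated space. Names, parameters, and the voting rule are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def rotboost_fit(X, y, n_rotations=5, n_boost=10, k_subsets=3, seed=None):
    """Illustrative RotBoost-style sketch: Rotation-Forest rotations + AdaBoost.
    Assumes more samples than features per subset and integer class labels."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    models = []
    for _ in range(n_rotations):
        perm = rng.permutation(n_features)
        subsets = np.array_split(perm, k_subsets)
        # block-diagonal rotation assembled from per-subset PCAs
        R = np.zeros((n_features, n_features))
        for s in subsets:
            pca = PCA().fit(X[:, s])
            R[np.ix_(s, s)] = pca.components_.T
        booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                                     n_estimators=n_boost)
        booster.fit(X @ R, y)
        models.append((R, booster))
    return models

def rotboost_predict(models, X):
    votes = np.array([b.predict(X @ R) for R, b in models]).astype(int)
    # majority vote across the rotated boosters (integer labels assumed)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```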
European Journal of Operational Research | 2007
Yong-Jun Wang; Jiang-She Zhang; Gai-Ying Zhang
A dynamic clustering-based differential evolution algorithm (CDE) for global optimization is proposed to improve the performance of the differential evolution (DE) algorithm. As the population evolves, the CDE algorithm gradually shifts from exploring promising areas in the early stages to exploiting solutions with high precision in the later stages. Experiments on 28 benchmark problems, including 13 high-dimensional functions, show that the new method is able to find near-optimal solutions efficiently. Compared with other existing algorithms, CDE improves solution accuracy with less computational effort.
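For context, the sketch below shows the standard DE/rand/1/bin scheme on which CDE builds; the clustering step that CDE adds (periodically using cluster information from good solutions to regenerate part of the population) is deliberately omitted, and all names are illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9,
                           n_gen=200, seed=None):
    """Minimal DE/rand/1/bin sketch (not the paper's CDE algorithm)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # rand/1 mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]

# e.g. differential_evolution(lambda x: np.sum(x ** 2), [(-5, 5)] * 10)
```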
IEEE Geoscience and Remote Sensing Letters | 2014
Yee Leung; Junmin Liu; Jiang-She Zhang
Extending the adaptive intensity-hue-saturation (AIHS) method, an improved AIHS (IAIHS) method is proposed for pansharpening in this letter. In the IAIHS method, the amount of spatial detail injected into each band of the multispectral (MS) image is determined by a weighting matrix, which is defined on the basis of the edges of the panchromatic and MS images and the proportions between the MS bands. Experiments carried out on QuickBird and IKONOS satellite images show that the IAIHS method can maintain spectral quality while providing spatial quality comparable to the AIHS and additive wavelet luminance proportional methods.
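The injection mechanism can be illustrated with a simplified IHS-style sketch: the intensity image is a weighted mean of the MS bands and the detail Pan − I is injected into each band. The edge-adaptive weighting matrix that distinguishes IAIHS is replaced here by a simple proportional gain, and the array shapes and names are assumptions.

```python
import numpy as np

def ihs_pansharpen(ms, pan, weights=None):
    """Simplified IHS-style detail injection (not the IAIHS weighting scheme).
    ms: (rows, cols, bands) multispectral image upsampled to Pan resolution,
    pan: (rows, cols) panchromatic image."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    bands = ms.shape[-1]
    if weights is None:
        weights = np.full(bands, 1.0 / bands)
    intensity = (ms * weights).sum(axis=-1)               # weighted mean of MS bands
    detail = pan - intensity                               # spatial detail to inject
    gains = ms / (intensity[..., None] + 1e-12)            # proportional per-band gain
    return ms + gains * detail[..., None]
```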
Computational Statistics & Data Analysis | 2008
Chun-Xia Zhang; Jiang-She Zhang
Based on the boosting-by-resampling version of AdaBoost, a local boosting algorithm for classification tasks is proposed in this paper. Its main idea is that in each iteration, a local error is calculated for every training instance, and a function of this local error is used to update the probability that the instance is selected for the next classifier's training set. When classifying a novel instance, the similarity information between it and each training instance is taken into account. Meanwhile, a parameter is introduced into the process of updating the probabilities assigned to training instances so that the algorithm can be more accurate than AdaBoost. Experimental results on synthetic data and several benchmark real-world data sets from the UCI repository show that the proposed method improves the prediction accuracy and the robustness to classification noise of AdaBoost. Furthermore, the diversity-accuracy patterns of the ensemble classifiers are investigated by kappa-error diagrams.
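One way to read the "local error" is as the disagreement of the current classifier averaged over each instance's nearest neighbours; a minimal sketch of that quantity is shown below. The paper's exact weight-update function and parameters are not reproduced here.

```python
import numpy as np

def local_errors(X, y_true, y_pred, k=5):
    """Illustrative local error: mean misclassification of the current
    classifier over the k nearest neighbours of each training instance."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    errs = (y_true != y_pred).astype(float)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]              # skip the instance itself
    return errs[nbrs].mean(axis=1)
```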
IEEE Transactions on Image Processing | 2014
Junmin Liu; Yijun Chen; Jiang-She Zhang; Zongben Xu
Recently, the low-rank representation (LRR) method has achieved great success in subspace clustering, which aims to cluster data points that lie in a union of low-dimensional subspaces. Given a set of data points, LRR seeks the lowest-rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR only considers the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph is incorporated into LRR, leading to the proposed Laplacian regularized LRR (LapLRR). An efficient optimization procedure based on the alternating direction method of multipliers is developed for LapLRR. Experimental results on synthetic and real data sets demonstrate that the performance of LRR is enhanced by the manifold regularization.
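A manifold regularizer of this kind is typically built from a nearest-neighbour graph Laplacian L, adding a term of the form tr(Z L Z^T) to the LRR objective over the representation Z. The sketch below only constructs such a Laplacian; the ADMM solver itself is not shown, and the neighbourhood and kernel-width choices are assumptions.

```python
import numpy as np

def knn_graph_laplacian(X, k=5, sigma=1.0):
    """Illustrative k-NN graph Laplacian L = D - W with Gaussian edge weights,
    as used for manifold (Laplacian) regularization."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                  # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                  # symmetrize the affinity
    return np.diag(W.sum(1)) - W
```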
IEEE Transactions on Geoscience and Remote Sensing | 2012
Junmin Liu; Jiang-She Zhang
Endmember extraction is very important in hyperspectral image analysis: the accurate identification of endmembers enables target detection, classification, and efficient spectral unmixing. Although a number of endmember extraction algorithms have been proposed, such as the two state-of-the-art algorithms vertex component analysis (VCA) and the simplex growing algorithm (SGA), it is still a rather challenging task. In this paper, a new maximum simplex volume method based on the Householder transformation (HT), referred to as maximum volume by HT (MVHT), is presented for endmember extraction. The proposed algorithm provides consistent results with low computational complexity, overcoming both the inconsistent results of VCA and the high computational cost that SGA incurs in calculating the simplex volume. A comparative study among the three endmember extraction algorithms, VCA, SGA, and MVHT, is conducted on both simulated and real hyperspectral data. The experimental results demonstrate that the proposed MVHT algorithm generally provides competitive or even better performance than VCA and SGA.
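The maximum-simplex-volume criterion itself can be illustrated with a naive greedy selection, shown below; this sketch deliberately ignores the Householder-transformation machinery that gives MVHT its efficiency and consistency, and all names are hypothetical.

```python
import numpy as np

def greedy_max_volume_endmembers(Y, p):
    """Naive simplex-growing style sketch: add, one at a time, the pixel that
    maximizes the (squared) volume of the growing simplex, measured by the
    Gram determinant of the edge vectors. Illustrative only, not MVHT."""
    n, _ = Y.shape                                          # n pixels, d spectral bands
    idx = [int(np.argmax(np.linalg.norm(Y, axis=1)))]       # start from the largest pixel
    for _ in range(p - 1):
        best, best_vol = None, -1.0
        E = Y[idx]
        for j in range(n):
            if j in idx:
                continue
            V = np.vstack([E, Y[j]])
            edges = V[1:] - V[0]
            vol = np.linalg.det(edges @ edges.T)            # squared volume up to a constant
            if vol > best_vol:
                best, best_vol = j, vol
        idx.append(best)
    return idx                                              # indices of selected endmembers
```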
IEEE Transactions on Systems, Man, and Cybernetics | 2003
Jiang-She Zhang; Yiu-Wing Leung
In many applications of C-means clustering, the given data set often contains noisy points. These noisy points will affect the resulting clusters, especially if they are far away from the data points. In this paper, we develop a pruning approach for robust C-means clustering. This approach identifies and prunes the outliers based on the sizes and shapes of the clusters so that the resulting clusters are least affected by the outliers. The pruning approach is general, and it can improve the robustness of many existing C-means clustering methods. In particular, we apply the pruning approach to improve the robustness of hard C-means clustering, fuzzy C-means clustering, and deterministic-annealing C-means clustering. As a result, we obtain three clustering algorithms that are the robust versions of the existing ones. In addition, we integrate the pruning approach with the fuzzy approach and the possibilistic approach to design two new algorithms for robust C-means clustering. The numerical results demonstrate that the pruning approach can achieve good robustness.
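A rough sketch of the pruning idea for hard C-means is given below: after each assignment step, the points farthest from their cluster centres are excluded from the centre update. The quantile-based pruning rule and the parameter names are assumptions for illustration, not the paper's exact size-and-shape-based criterion.

```python
import numpy as np

def pruned_c_means(X, c, prune_frac=0.05, n_iter=50, seed=None):
    """Illustrative robust hard C-means: prune the farthest points before
    updating the centres (not the paper's exact pruning rule)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), c, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        dist = d2[np.arange(len(X)), labels]
        keep = dist <= np.quantile(dist, 1.0 - prune_frac)  # drop the worst points
        for k in range(c):
            mask = keep & (labels == k)
            if mask.any():
                centres[k] = X[mask].mean(0)                 # update on pruned data only
    return centres, labels
```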
Applied Mathematics and Computation | 2008
Chun-Xia Zhang; Jiang-She Zhang; Guan-Wei Wang
This paper investigates the performance of the Rotation Forest ensemble method in improving the generalization ability of a base predictor for regression problems through experiments on several benchmark data sets, and compares it with Bagging, Random Forest, AdaBoost.R2, and a single regression tree. The sensitivity of Rotation Forest to the choice of its parameters is also studied. On the considered regression data sets, AdaBoost.R2 generally outperforms Rotation Forest, and both are better than Random Forest and a single tree. Between Bagging and Rotation Forest, there is no clear winner. Furthermore, pruning the trees appears to degrade the performance of all the considered methods.