Fuyuan Cao
Shanxi University
Publication
Featured research published by Fuyuan Cao.
Pattern Recognition | 2011
Liang Bai; Jiye Liang; Chuangyin Dang; Fuyuan Cao
Due to data sparseness and attribute redundancy in high-dimensional data, clusters of objects often exist in subspaces rather than in the entire space. To address this issue effectively, this paper presents a new optimization algorithm for clustering high-dimensional categorical data, an extension of the k-modes clustering algorithm. In the proposed algorithm, a novel weighting technique for categorical data is developed to calculate two weights for each attribute (or dimension) in each cluster, and the weight values are used to identify the subsets of important attributes that categorize different clusters. The convergence of the algorithm under an optimization framework is proved. The performance and scalability of the algorithm are evaluated experimentally on both synthetic and real data sets. The experimental studies show that the proposed algorithm is effective in clustering categorical data sets and also scalable to large data sets, owing to its linear time complexity with respect to the number of data objects, attributes, or clusters.
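As a rough illustration of the per-cluster attribute-weighting idea described above, the Python sketch below scores each attribute by how concentrated the cluster is on its mode value and uses those scores in a weighted matching dissimilarity. The function names and the frequency-based weighting are illustrative assumptions, not the paper's actual two-weight formulation.

```python
import numpy as np

def weighted_matching_dissimilarity(x, mode, weights):
    """Weighted simple-matching dissimilarity between an object and a cluster
    mode: a mismatch on a heavily weighted attribute costs more."""
    mismatch = np.array([xi != mi for xi, mi in zip(x, mode)], dtype=float)
    return float(np.dot(weights, mismatch))

def frequency_based_weights(cluster, mode):
    """Illustrative attribute weights for one cluster: an attribute whose mode
    value covers most of the cluster is treated as more important."""
    cluster = np.asarray(cluster, dtype=object)
    freq = np.array([(cluster[:, j] == mode[j]).mean() for j in range(cluster.shape[1])])
    return freq / freq.sum()

# toy usage: attribute 0 is pure in the cluster, so mismatching on it is costly
cluster = [["a", "x", "p"], ["a", "y", "p"], ["a", "x", "q"]]
mode = ["a", "x", "p"]
weights = frequency_based_weights(cluster, mode)
print(weights, weighted_matching_dissimilarity(["a", "z", "p"], mode, weights))
```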
Pattern Recognition | 2012
Jiye Liang; Xingwang Zhao; Deyu Li; Fuyuan Cao; Chuangyin Dang
In cluster analysis, one of the most challenging problems is determining the number of clusters in a data set, which is a basic input parameter for most clustering algorithms. To solve this problem, many algorithms have been proposed for either numerical or categorical data sets. However, these algorithms are not very effective for a mixed data set containing both numerical and categorical attributes. To overcome this deficiency, a generalized mechanism is presented in this paper by integrating Rényi entropy and complement entropy. The mechanism can uniformly characterize within-cluster entropy and between-cluster entropy and identify the worst cluster in a mixed data set. To evaluate clustering results for mixed data, an effective cluster validity index is also defined in this paper. Furthermore, by introducing a new dissimilarity measure into the k-prototypes algorithm, we develop an algorithm to determine the number of clusters in a mixed data set. The performance of the algorithm has been studied on several synthetic and real-world data sets. Comparisons with other clustering algorithms show that the proposed algorithm is more effective in detecting the optimal number of clusters and generates better clustering results.
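The sketch below illustrates, at a toy level, how complement entropy (for categorical attributes) and Rényi entropy (for numerical attributes) can be combined into a single within-cluster disorder score for mixed data. The concrete functions and the histogram-based entropy estimate are assumptions for illustration; the paper's generalized mechanism and validity index are defined differently.

```python
import numpy as np

def complement_entropy(values):
    """Complement entropy of a categorical attribute within a cluster:
    sum_v p(v) * (1 - p(v)); 0 for a pure cluster, larger for mixed ones."""
    _, counts = np.unique(np.asarray(values, dtype=object), return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p)))

def renyi_entropy(values, bins=8, alpha=2.0):
    """Rényi entropy of a numerical attribute, estimated from a histogram;
    alpha=2 gives the quadratic Rényi entropy used here for illustration."""
    hist, _ = np.histogram(np.asarray(values, dtype=float), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def within_cluster_disorder(numerical, categorical):
    """Toy within-cluster disorder for mixed data: average attribute-wise
    Rényi entropy (numerical part) plus complement entropy (categorical part)."""
    numerical = np.asarray(numerical, dtype=float)
    categorical = np.asarray(categorical, dtype=object)
    num = np.mean([renyi_entropy(numerical[:, j]) for j in range(numerical.shape[1])])
    cat = np.mean([complement_entropy(categorical[:, j]) for j in range(categorical.shape[1])])
    return num + cat

# toy usage: the scattered cluster should receive the larger disorder score
rng = np.random.default_rng(0)
tight = within_cluster_disorder(rng.normal(0, 0.1, (50, 2)), [["a", "x"]] * 50)
loose = within_cluster_disorder(rng.normal(0, 5.0, (50, 2)), rng.choice(["a", "b", "c"], (50, 2)))
print(f"tight: {tight:.3f}  scattered: {loose:.3f}")
```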
Knowledge Based Systems | 2012
Fuyuan Cao; Jiye Liang; Deyu Li; Liang Bai; Chuangyin Dang
Clustering is one of the most important data mining techniques; it partitions data according to a similarity criterion. The problem of clustering categorical data has recently attracted much attention from the data mining research community. As an extension of the k-Means algorithm, the k-Modes algorithm has been widely applied to categorical data clustering by replacing means with modes. In this paper, the limitations of the simple matching dissimilarity measure and Ng's dissimilarity measure are analyzed using illustrative examples. Based on ideas from biological and genetic taxonomy and the rough membership function, a new dissimilarity measure for the k-Modes algorithm is defined. A distinct characteristic of the new dissimilarity measure is that it takes into account the distribution of attribute values over the whole universe. A convergence study and the time complexity of the k-Modes algorithm based on the new dissimilarity measure indicate that it can be effectively used for large data sets. The results of comparative experiments on synthetic data sets and five real data sets from UCI show the effectiveness of the new dissimilarity measure, especially on data sets with biological and genetic taxonomy information.
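The following sketch conveys the general idea of a dissimilarity measure that looks at how attribute values are distributed over the whole data set, so that a match on a very common value counts for less than a match on a rare one. It is an illustrative stand-in, not the rough-membership-based definition given in the paper.

```python
import numpy as np
from collections import Counter

def distribution_aware_dissimilarity(x, y, data):
    """Illustrative distribution-aware dissimilarity between two categorical
    objects: mismatches count fully, while a match on a value that is common
    in the whole data set still contributes some dissimilarity, so agreement
    on rare values is treated as stronger evidence of similarity."""
    data = np.asarray(data, dtype=object)
    n = len(data)
    d = 0.0
    for j, (xj, yj) in enumerate(zip(x, y)):
        if xj != yj:
            d += 1.0
        else:
            d += Counter(data[:, j])[xj] / n  # frequent shared values are weak evidence
    return d

# toy usage: the two objects match on attribute 0, but "a" is very common
data = [["a", "x"], ["a", "y"], ["a", "z"], ["b", "x"]]
print(distribution_aware_dissimilarity(["a", "x"], ["a", "y"], data))
```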
Expert Systems With Applications | 2012
Liang Bai; Jiye Liang; Chuangyin Dang; Fuyuan Cao
The leading partitional clustering technique, k-modes, is one of the most computationally efficient clustering methods for categorical data. However, the performance of the k-modes algorithm, which converges to numerous local minima, strongly depends on the initial cluster centers. Currently, most cluster-center initialization methods are designed for numerical data. Because categorical data lack the geometric structure these methods rely on, they are not applicable to categorical data. This paper proposes a novel initialization method for categorical data, which is applied to the k-modes algorithm. The method integrates distance and density to select initial cluster centers and overcomes the shortcomings of existing initialization methods for categorical data. Experimental results illustrate that the proposed initialization method is effective and can be applied to large data sets, owing to its linear time complexity with respect to the number of data objects.
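A minimal sketch of an initialization strategy in the same spirit: take the densest object as the first center, then repeatedly add the object with the largest product of density and distance to the nearest chosen center. The density definition and selection criterion here are simplifications, and the pairwise-distance matrix is quadratic, unlike the linear-time method reported above.

```python
import numpy as np

def matching_distance(a, b):
    """Simple-matching distance: number of attributes on which two objects differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def density_distance_init(data, k):
    """Illustrative density-and-distance initialization for categorical data:
    start from the densest object, then repeatedly add the object whose
    density times distance to the nearest chosen center is largest."""
    data = [tuple(row) for row in data]
    m = len(data[0])
    dist = np.array([[matching_distance(a, b) for b in data] for a in data], dtype=float)
    density = 1.0 - dist.mean(axis=1) / m       # close to many objects -> high density
    centers = [int(np.argmax(density))]
    while len(centers) < k:
        nearest = dist[:, centers].min(axis=1)  # distance to the closest chosen center
        score = density * nearest
        score[centers] = -np.inf                # never re-pick a chosen center
        centers.append(int(np.argmax(score)))
    return [data[i] for i in centers]

# toy usage
data = [["a", "x"], ["a", "x"], ["a", "y"], ["b", "z"], ["b", "z"]]
print(density_distance_init(data, 2))
```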
IEEE Transactions on Fuzzy Systems | 2010
Fuyuan Cao; Jiye Liang; Liang Bai; Xingwang Zhao; Chuangyin Dang
A fundamental assumption often made in unsupervised learning is that the problem is static, i.e., the description of the classes does not change with time. However, many practical clustering tasks involve changing environments. It is hence recognized that methods and techniques for analyzing evolving trends in changing environments are of increasing interest and importance. Although the problem of clustering numerical time-evolving data is well explored, clustering categorical time-evolving data remains a challenging issue. In this paper, we propose a generalized clustering framework for categorical time-evolving data, which is composed of three algorithms: a drifting-concept detecting algorithm that detects the difference between the current sliding window and the last sliding window, a data-labeling algorithm that decides the most appropriate cluster label for each object in the current sliding window based on the clustering results of the last sliding window, and a cluster-relationship-analysis algorithm that analyzes the relationship between clustering results at different time stamps. The time-complexity analysis indicates that these algorithms are efficient for large data sets. Experiments on a real data set show that the proposed framework not only accurately detects drifting concepts but also attains clustering results of better quality. Furthermore, compared with an existing framework, the proposed one needs fewer parameters, which is favorable for specific applications.
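The sketch below shows one simple way such a drifting-concept check could look: compare the attribute-value distributions of two consecutive sliding windows and flag a drift when their average total-variation distance exceeds a threshold. The distance and threshold are assumptions for illustration; the paper's detecting algorithm is its own.

```python
from collections import Counter

def value_distribution(window, attr):
    """Relative frequency of each value of one attribute within a sliding window."""
    counts = Counter(obj[attr] for obj in window)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def drift_detected(last_window, current_window, threshold=0.2):
    """Flag a drifting concept when the attribute-value distributions of two
    consecutive windows differ by more than a threshold on average
    (total-variation distance per attribute)."""
    n_attrs = len(last_window[0])
    diffs = []
    for a in range(n_attrs):
        p, q = value_distribution(last_window, a), value_distribution(current_window, a)
        values = set(p) | set(q)
        diffs.append(0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in values))
    return sum(diffs) / n_attrs > threshold

# toy usage: the current window shifts from mostly ("a", "x") to mostly ("b", "y")
last = [("a", "x")] * 8 + [("b", "y")] * 2
current = [("b", "y")] * 8 + [("a", "x")] * 2
print(drift_detected(last, current))  # True for this toy shift
```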
Neurocomputing | 2013
Fuyuan Cao; Jiye Liang; Deyu Li; Xingwang Zhao
Traditional clustering algorithms consider all dimensions of an input data set equally. However, in high-dimensional data, points are often highly clustered in subspaces, meaning that classes of objects are categorized in subspaces rather than in the entire space. Subspace clustering is an extension of traditional clustering that seeks to find clusters in different subspaces within a data set. In this paper, a weighting k-modes algorithm is presented for subspace clustering of categorical data, and its time complexity is analyzed as well. In the proposed algorithm, an additional step is added to the k-modes clustering process to automatically compute the weight of every dimension in each cluster by using complement entropy. Furthermore, the attribute weights can be used to identify the subsets of important dimensions that categorize different clusters. The effectiveness of the proposed algorithm is demonstrated on real and synthetic data sets.
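As a toy version of the weighting step, the sketch below assigns each attribute in a cluster a weight that grows as its within-cluster complement entropy shrinks. The normalization and the placement of this step inside the k-modes iteration are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def complement_entropy(values):
    """Complement entropy of one categorical attribute within a cluster."""
    _, counts = np.unique(np.asarray(values, dtype=object), return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p)))

def attribute_weights(cluster):
    """Illustrative per-cluster attribute weights: attributes with low
    within-cluster complement entropy (nearly a single value) get larger weights."""
    cluster = np.asarray(cluster, dtype=object)
    disorder = np.array([complement_entropy(cluster[:, j]) for j in range(cluster.shape[1])])
    closeness = 1.0 - disorder
    return closeness / closeness.sum()

# toy usage: attribute 0 is pure in this cluster, attribute 1 is mixed
print(attribute_weights([["a", "x"], ["a", "y"], ["a", "z"]]))
```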
IEEE Transactions on Fuzzy Systems | 2012
Jiye Liang; Liang Bai; Chuangyin Dang; Fuyuan Cao
K-means is a partitional clustering technique that is well known and widely used for its low computational cost. Representative algorithms include the hard k-means and the fuzzy k-means. However, the performance of these algorithms tends to be affected by skewed data distributions, i.e., imbalanced data. They often produce clusters of relatively uniform sizes, even if the input data have clusters of varied sizes, a phenomenon called the "uniform effect." In this paper, we analyze the causes of this effect and illustrate that it is more likely to occur in the fuzzy k-means clustering process than in the hard k-means clustering process. As the fuzzy index m increases, the "uniform effect" becomes more evident. To prevent the "uniform effect," we propose a multicenter clustering algorithm in which multiple centers, rather than a single center, are used to represent each cluster. The proposed algorithm consists of three subalgorithms: the fast global fuzzy k-means, Best M-Plot, and grouping-multicenter algorithms. They are used, respectively, to address three important problems: 1) how to obtain reliable cluster centers from a data set; 2) how to determine the number of clusters these centers represent; and 3) how to judge which cluster centers represent the same cluster. The experimental studies on both synthetic and real data sets illustrate the effectiveness of the proposed clustering algorithm in clustering balanced and imbalanced data.
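The sketch below illustrates only the multicenter representation itself: each cluster is described by several sub-centers, and a point is assigned to the cluster owning its nearest sub-center. How the sub-centers are produced and grouped (the fast global fuzzy k-means, Best M-Plot, and grouping steps) is not reproduced here.

```python
import numpy as np

def assign_with_multicenters(points, cluster_centers):
    """Assign each point to the cluster owning its nearest sub-center, so a
    single cluster may be represented by several centers instead of one."""
    labels = []
    for p in np.asarray(points, dtype=float):
        best_cluster, best_dist = None, np.inf
        for cluster_id, centers in cluster_centers.items():
            d = np.min(np.linalg.norm(np.asarray(centers, dtype=float) - p, axis=1))
            if d < best_dist:
                best_cluster, best_dist = cluster_id, d
        labels.append(best_cluster)
    return labels

# toy usage: the elongated cluster 0 is covered by two sub-centers
centers = {0: [[0.0, 0.0], [4.0, 0.0]], 1: [[10.0, 10.0]]}
print(assign_with_multicenters([[1.0, 0.5], [5.0, 0.0], [9.0, 9.0]], centers))
```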
Fuzzy Sets and Systems | 2013
Liang Bai; Jiye Liang; Chuangyin Dang; Fuyuan Cao
In this paper, we present a new fuzzy clustering algorithm for categorical data. In the algorithm, the objective function of the fuzzy k-modes algorithm is modified by adding between-cluster information so that we can simultaneously minimize the within-cluster dispersion and enhance the between-cluster separation. To obtain locally optimal solutions of the modified objective function, the corresponding update formulas for the membership matrix and the cluster prototypes are rigorously derived. The convergence of the proposed algorithm under the optimization framework is proved. The performance of the proposed algorithm is studied on several real data sets from UCI. The experimental results illustrate that the algorithm is effective and suitable for categorical data sets.
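For context, the sketch below shows the standard fuzzy-membership update used by fuzzy k-modes-style algorithms with a simple-matching dissimilarity; the paper's contribution is a modified objective that additionally rewards between-cluster separation, which this sketch does not include.

```python
import numpy as np

def matching_distance(a, b):
    """Simple-matching dissimilarity between two categorical objects."""
    return sum(ai != bi for ai, bi in zip(a, b))

def fuzzy_memberships(objects, prototypes, alpha=1.5):
    """Standard fuzzy-membership update for categorical data: the degree of
    membership of an object in a cluster decreases with its distance to the
    cluster prototype, controlled by the fuzziness exponent alpha > 1."""
    k, n = len(prototypes), len(objects)
    u = np.zeros((k, n))
    for i, x in enumerate(objects):
        d = np.array([matching_distance(z, x) for z in prototypes], dtype=float)
        if np.any(d == 0):                       # the object coincides with a prototype
            u[d == 0, i] = 1.0 / np.sum(d == 0)
        else:
            ratios = (d[:, None] / d[None, :]) ** (1.0 / (alpha - 1.0))
            u[:, i] = 1.0 / ratios.sum(axis=1)
    return u

# toy usage
objects = [("a", "x"), ("a", "y"), ("b", "z")]
prototypes = [("a", "x"), ("b", "z")]
print(fuzzy_memberships(objects, prototypes).round(2))
```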
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Liang Bai; Jiye Liang; Chuangyin Dang; Fuyuan Cao
As a leading partitional clustering technique,
Expert Systems With Applications | 2011
Fuyuan Cao; Jiye Liang