Qingxin Zhu
University of Electronic Science and Technology of China
Publication
Featured research published by Qingxin Zhu.
Knowledge Based Systems | 2012
Shiping Wang; Qingxin Zhu; William Zhu; Fan Min
Rough sets are efficient for data pre-processing in data mining. However, some important problems in rough sets, such as attribute reduction, are NP-hard, and the algorithms used to solve them are mostly greedy. As a generalization of linear independence in vector spaces, matroids provide well-established platforms for greedy algorithms. In this paper, we apply matroids to rough sets through an isomorphism from equivalence relations to 2-circuit matroids. First, a matroid is induced by an equivalence relation, and several equivalent characterizations of the independent sets of the induced matroid are obtained through rough sets. Second, an equivalence relation is induced by a matroid, and the relationship between these two inductions is studied. Third, an isomorphism from equivalence relations to 2-circuit matroids is established, which lays a sound foundation for studying rough sets through matroidal approaches. Finally, attribute reduction is equivalently formulated with the rank functions and closure operators of matroids. These results show the potential for designing attribute reduction algorithms using matroidal approaches.
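As a concrete illustration of the rough-set notions this abstract builds on, here is a minimal sketch of classical lower and upper approximations computed from an equivalence relation given as a partition (all names and data are illustrative, not taken from the paper):

```python
# Sketch: classical rough-set approximations from an equivalence relation,
# represented by its partition into equivalence classes.

def lower_approximation(partition, X):
    """Union of equivalence classes fully contained in X."""
    return set().union(*[B for B in partition if B <= X] or [set()])

def upper_approximation(partition, X):
    """Union of equivalence classes that intersect X."""
    return set().union(*[B for B in partition if B & X] or [set()])

partition = [{1, 2}, {3, 4}, {5}]   # equivalence classes of U = {1,...,5}
X = {1, 2, 3}
print(lower_approximation(partition, X))  # {1, 2}
print(upper_approximation(partition, X))  # {1, 2, 3, 4}
```

A set is exactly definable when the two approximations coincide; the gap between them is what attribute reduction tries to preserve while dropping attributes.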
Information Sciences | 2013
Shiping Wang; Qingxin Zhu; William Zhu; Fan Min
Coverings are a useful form of data, and covering-based rough sets provide an effective tool for dealing with such data. Covering-based rough sets have been widely used in attribute reduction and rule extraction. However, few quantitative analyses of covering-based rough sets have been conducted, while many advances in classical rough sets have been obtained through quantitative tools. In this paper, the upper approximation number is defined as a measurement to quantify covering-based rough sets, and a pair of upper and lower approximation operators is constructed using this number. The operators not only inherit some important properties of existing approximation operators, but also exhibit new quantitative characteristics. It is interesting to note that the upper approximation number of a covering approximation space is similar to the dimension of a vector space or the rank of a matrix.
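A minimal sketch of how such a quantitative measure might look, assuming the upper approximation number of a subset X counts the covering blocks that intersect X (an assumption made for illustration; the paper's exact definition may differ in detail):

```python
# Sketch: a covering of U and an upper-approximation-number-style count,
# here taken as the number of covering blocks meeting X (an assumption).

def upper_approximation_number(covering, X):
    return sum(1 for K in covering if K & X)

covering = [{1, 2}, {2, 3}, {3, 4, 5}]  # blocks may overlap, unlike a partition
print(upper_approximation_number(covering, {1}))     # 1
print(upper_approximation_number(covering, {2, 3}))  # 3
```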
Pattern Recognition | 2015
Shiping Wang; Witold Pedrycz; Qingxin Zhu; William Zhu
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of data and increasing the efficiency of learning algorithms. Specifically, feature selection in the absence of class labels, namely unsupervised feature selection, is both challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, which is treated as a matrix factorization problem. The advantages of this work are four-fold. First, dwelling on the technique of matrix factorization, a unified framework is established for feature selection, feature extraction and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient technique for dealing with high-dimensional data. Third, an effective method for feature selection with numeric data is put forward that does not rely on a discretization process. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection; in this regard, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms outperform the others on almost all of the datasets.
Highlights:
- Propose a new feature selection criterion based on matrix factorization.
- Present a fast convergent algorithm for matrix factorization under certain constraints.
- Incorporate kernel tricks into feature selection problems.
- Construct a unified framework for feature extraction, feature selection and clustering.
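The subspace-learning view can be sketched by writing selection as a factorization X ≈ X S H, where S is a 0/1 column-selection matrix and H reconstructs the discarded features. The toy code below is illustrative only and does not implement the paper's criterion or update rules:

```python
# Sketch: feature selection as matrix factorization X ~ X S H,
# with S a 0/1 column-selection matrix (illustrative toy example).

def selection_matrix(d, selected):
    """d x k indicator matrix picking the columns in `selected`."""
    return [[1 if j == s else 0 for s in selected] for j in range(d)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

X = [[1.0, 2.0, 2.0],
     [0.0, 1.0, 1.0]]            # 2 samples, 3 features; feature 3 duplicates 2
S = selection_matrix(3, [0, 1])  # keep features 0 and 1
H = [[1, 0, 0],
     [0, 1, 1]]                  # rebuilds feature 3 from feature 2
print(matmul(X, S))                     # [[1.0, 2.0], [0.0, 1.0]]
print(matmul(matmul(X, S), H) == X)     # True: X factorizes exactly here
```

When features are redundant, as above, a good selection plus a reconstruction factor loses nothing; the criterion in the paper scores selections by how well such a reconstruction is possible.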
Information Sciences | 2014
Shiping Wang; William Zhu; Qingxin Zhu; Fan Min
Covering-based rough sets provide an efficient means of dealing with covering data, which occur widely in practical applications. Boolean matrix decomposition has frequently been applied to data mining and machine learning. In this paper, three types of existing covering approximation operators are represented by Boolean matrices and then used in Boolean matrix decomposition. First, we define two characteristic matrices of a covering. Through these Boolean characteristic matrices, the three types of existing covering approximation operators are concisely and equivalently represented. Second, these operator representations are applied to Boolean matrix decomposition, which is closely related to nonnegative matrix factorization, a popular and efficient technique in machine learning. We provide a necessary and sufficient condition for a square Boolean matrix to decompose into the Boolean product of another matrix and its transpose, and then develop an algorithm for this Boolean matrix decomposition. Finally, the three covering approximation operators are axiomatized using Boolean matrices. This work presents an interesting viewpoint from which to investigate covering-based rough set theory and its applications.
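A minimal sketch of the Boolean product underlying such decompositions, checking whether a square Boolean matrix equals the Boolean product of a candidate A with its transpose (helper names are illustrative):

```python
# Sketch: Boolean matrix product, where entry (i, j) is the OR over k
# of A[i][k] AND B[k][j], and the A o A^T decomposition check.

def bool_product(A, B):
    return [[int(any(a and b for a, b in zip(row, col))) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 0],
     [1, 1],
     [0, 1]]
print(bool_product(A, transpose(A)))
# [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
```

Any matrix of the form A ∘ A^T is necessarily symmetric with an all-ones diagonal when every row of A is nonzero; the paper's necessary and sufficient condition characterizes exactly which square Boolean matrices admit such a factorization.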
Information Sciences | 2014
Hai-Sheng Li; Qingxin Zhu; Ri-Gui Zhou; Ming-Cui Li; Lan Song; Hou Ian
In this study, we propose a new representation method for multidimensional color images, called the n-qubit normal arbitrary superposition state (NASS), where n qubits represent the colors and coordinates of 2^n pixels (e.g., a three-dimensional color image of 1024 × 1024 × 1024 using only 30 qubits). Based on NASS, we present an (n+1)-qubit normal arbitrary superposition state with relative phases (NASSRP) and an (n+2)-qubit normal arbitrary superposition state with three components (NASSTC) for lossless and lossy quantum compression, respectively. We also design three general quantum circuits to generate NASS, NASSRP and NASSTC states, where we retrieve an image from a quantum system using different projection measurement operators. Finally, we define the quantum compression ratio and analyze lossless and lossy quantum compression algorithms for multidimensional quantum images. For the first time, we implement the compression of multidimensional color images on a quantum computer, thus addressing both the theoretical and practical aspects of image processing on a quantum computer.
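The qubit counts quoted above follow from simple arithmetic, sketched below (the helper name is hypothetical, and the compression machinery itself is not reproduced here):

```python
# Sketch: qubit counts in the NASS representation, where n qubits encode
# the colors and coordinates of 2^n pixels (helper name is hypothetical).
from math import log2

def nass_qubits(*dims):
    """Qubits needed when every image dimension is a power of two."""
    return int(sum(log2(d) for d in dims))

print(nass_qubits(1024, 1024, 1024))      # 30: a 1024x1024x1024 color image
print(nass_qubits(1024, 1024, 1024) + 1)  # 31: the (n+1)-qubit NASSRP variant
```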
Knowledge Based Systems | 2015
Shiping Wang; Witold Pedrycz; Qingxin Zhu; William Zhu
Dimensionality reduction is an important and challenging task in machine learning and data mining. It can facilitate data clustering, classification and information retrieval. As an efficient technique for dimensionality reduction, feature selection aims to find a small feature subset that preserves the most relevant information. In this paper, we propose a new criterion, called maximum projection and minimum redundancy feature selection, to address unsupervised learning scenarios. First, feature selection is formalized with the use of projection matrices and then characterized equivalently as a matrix factorization problem. Second, an iterative update algorithm and a greedy algorithm are proposed to tackle this problem. Third, kernel techniques are considered and a corresponding algorithm is also put forward. Finally, the proposed algorithms are compared with four state-of-the-art feature selection methods. Experimental results reported for six publicly available datasets demonstrate the superiority of the proposed algorithms.
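The greedy side of such a method can be sketched with simple stand-in scores in the spirit of "maximize relevance, minimize redundancy"; the variance and correlation scores below are not the paper's projection-based criterion, only an illustration of the greedy loop:

```python
# Sketch: greedy feature selection that rewards high variance (a stand-in
# for relevance) and penalizes correlation with already-selected features
# (a stand-in for redundancy).

def variance(col):
    m = sum(col) / len(col)
    return sum((x - m) ** 2 for x in col) / len(col)

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def greedy_select(columns, k):
    selected = []
    while len(selected) < k:
        def score(j):
            red = max((abs(correlation(columns[j], columns[s]))
                       for s in selected), default=0.0)
            return variance(columns[j]) - red
        j = max((j for j in range(len(columns)) if j not in selected), key=score)
        selected.append(j)
    return selected

cols = [[1, 2, 3, 4], [2, 4, 6, 8], [1, 0, 1, 0]]  # column 1 duplicates column 0
print(greedy_select(cols, 2))  # [1, 0]: the duplicate is never picked twice
```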
International Journal of Approximate Reasoning | 2013
Shiping Wang; William Zhu; Qingxin Zhu; Fan Min
Covering is a common form of data representation, and covering-based rough sets serve as an efficient technique for processing this type of data. However, many important problems in covering-based rough sets, such as covering reduction, are NP-hard, so most algorithms to solve them are greedy. Matroids provide well-established platforms for the foundation and implementation of greedy algorithms; it is therefore natural to integrate covering-based rough sets with matroids. In this paper, we propose four matroidal structures of coverings and establish their relationships with rough sets. First, four different viewpoints are presented to construct these four matroidal structures, based on 1-rank matroids, bigraphs, upper approximation numbers and transversals, and the respective advantages of each structure for rough sets are explored. Second, the connections among the four matroidal structures are studied; it is interesting to find that they coincide with each other. Third, a converse view is provided to induce a covering by a matroid, and we study the relationship between this induction and the one from a covering to a matroid. Finally, some important concepts of covering-based rough sets, such as approximation operators, are equivalently formulated by these matroidal structures. These results demonstrate the potential for combining covering-based rough sets with matroids.
Highlights:
- Four matroidal structures of a covering are proposed from four different viewpoints.
- The connections among these four matroidal structures are studied.
- A covering is induced by a matroid, and this induction is related to the induction of a matroid by a covering.
- Some important concepts of covering-based rough sets, such as approximation operators, are equivalently formulated by these matroidal structures.
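One of the four viewpoints, transversals, rests on the standard notion of a transversal matroid: its independent sets are the partial transversals of a set family, i.e. sets whose elements can be matched to distinct blocks containing them. A minimal independence check via augmenting paths (the paper's constructions go well beyond this single test):

```python
# Sketch: independence in the transversal matroid of a covering, tested
# by bipartite matching (Kuhn's augmenting-path algorithm).

def is_partial_transversal(covering, X):
    match = {}  # block index -> element currently matched to it

    def try_assign(x, seen):
        for i, K in enumerate(covering):
            if x in K and i not in seen:
                seen.add(i)
                # take a free block, or evict and re-seat its occupant
                if i not in match or try_assign(match[i], seen):
                    match[i] = x
                    return True
        return False

    return all(try_assign(x, set()) for x in X)

covering = [{1, 2}, {2, 3}, {3, 4, 5}]
print(is_partial_transversal(covering, {1, 2, 3}))  # True
print(is_partial_transversal(covering, {4, 5}))     # False: one block holds both
```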
Journal of Applied Mathematics | 2013
Shiping Wang; Qingxin Zhu; William Zhu; Fan Min
Coverings are a widely used form of data representation, and covering-based rough set theory provides a systematic approach to such data. In this paper, graphs are connected with covering-based rough sets. Specifically, we convert some important concepts in graph theory, including vertex covers, independent sets, edge covers and matchings, into concepts in covering-based rough sets. At the same time, the corresponding problems in graphs are transformed into problems in covering-based rough sets. For example, finding a minimal edge cover of a graph is translated into finding a minimal general reduct of a covering. The main contributions of this paper are threefold. First, any graph is converted to a covering, and two graphs induce the same covering if and only if they are isomorphic. Second, some new concepts are defined in covering-based rough sets to correspond with those in graph theory; the upper approximation number is essential for describing these concepts. Finally, from a new viewpoint of covering-based rough sets, the general reduct is defined, and its equivalent characterization of the edge cover is presented. These results show the potential of the connection between covering-based rough sets and graphs.
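One natural graph-to-covering conversion, stated as an assumption since the paper's exact construction is not reproduced here: take the edge set as the universe and let each vertex contribute the block of its incident edges.

```python
# Sketch: a graph-to-covering conversion (an assumption for illustration):
# universe = edges, one block per vertex = that vertex's incident edges.

def graph_to_covering(vertices, edges):
    return {v: {e for e in edges if v in e} for v in vertices}

vertices = {1, 2, 3}
edges = {frozenset({1, 2}), frozenset({2, 3})}   # the path 1 - 2 - 3
covering = graph_to_covering(vertices, edges)
print(covering[2] == edges)   # True: vertex 2 touches every edge
```

Under this reading, a vertex cover of the graph corresponds to a subfamily of blocks whose union is the whole universe, which is the kind of translation the abstract describes.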
Mathematical Problems in Engineering | 2014
Shujiao Liao; Qingxin Zhu; Fan Min
In recent years, the theory of decision-theoretic rough sets and its applications have been studied, including the attribute reduction problem. However, most researchers focus only on decision cost rather than test cost. In this paper, we study the attribute reduction problem with both types of cost in decision-theoretic rough set models. A new definition of attribute reduct is given, and attribute reduction is formulated as an optimization problem that aims to minimize the total cost of classification. Both backtracking and heuristic algorithms for the new problem are then proposed. The algorithms are tested on four UCI (University of California, Irvine) datasets. Experimental results demonstrate the efficiency and effectiveness of both algorithms. This study provides new insight into the attribute reduction problem in decision-theoretic rough set models.
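A toy sketch of the objective, minimizing test cost plus decision cost by exhaustive search over attribute subsets (a stand-in for the paper's backtracking and heuristic algorithms; the cost tables below are made up):

```python
# Sketch: total-cost-minimizing attribute subset, with hypothetical test
# costs per attribute and a hypothetical decision cost per subset.
from itertools import combinations

test_cost = {'a': 2, 'b': 3, 'c': 5}
decision_cost = {frozenset(): 20, frozenset('a'): 12, frozenset('b'): 10,
                 frozenset('c'): 6,  frozenset('ab'): 4, frozenset('ac'): 3,
                 frozenset('bc'): 2, frozenset('abc'): 1}

def best_subset(attrs):
    candidates = (frozenset(s) for r in range(len(attrs) + 1)
                  for s in combinations(attrs, r))
    return min(candidates,
               key=lambda s: sum(test_cost[a] for a in s) + decision_cost[s])

print(sorted(best_subset('abc')))  # ['a', 'b']: total cost 2 + 3 + 4 = 9
```

Measuring more attributes lowers decision cost but raises test cost, so the optimum here is a middle-sized subset; the paper's backtracking algorithm prunes this exponential search.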
Information Sciences | 2014
Shiping Wang; Qingxin Zhu; William Zhu; Fan Min
Rough sets are efficient for attribute reduction and rule extraction in data mining. However, many important problems in rough sets, including attribute reduction, are NP-hard, so the algorithms to solve them are often greedy. Matroids, generalized from linear independence in vector spaces, provide well-established platforms for greedy algorithm design. In this paper, we use graph and matrix approaches to study rough sets through matroids. First, we construct an isomorphism from equivalence relations to 2-circuit matroids, and then propose graph representations of lower and upper approximations through the graphic matroid; we also study graph representations of the approximations via the dual of the matroid. Second, in light of the fact that the relational matrix is a representable matrix of the matroid induced by an equivalence relation, matrix representations of lower and upper approximations are obtained with the representable matrix of the matroid. In summary, borrowing from matroids, this work presents two interesting views, graph and matrix, from which to investigate rough sets.
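The matrix view can be sketched with the Boolean relational matrix of an equivalence relation: the upper approximation of a set is then a Boolean matrix-vector product over characteristic vectors (illustrative only; the paper works with representable matrices of the induced matroids):

```python
# Sketch: upper approximation as a Boolean matrix-vector product, where
# M is the relational matrix of an equivalence relation and x_vec is the
# characteristic vector of a subset X.

def upper_from_matrix(M, x_vec):
    return [int(any(m and v for m, v in zip(row, x_vec))) for row in M]

# relational matrix of the equivalence relation with classes {0, 1} and {2}
M = [[1, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
print(upper_from_matrix(M, [1, 0, 0]))   # [1, 1, 0]: the whole class of 0
```

The lower approximation follows by duality: it is the complement of the upper approximation of the complement, so both operators reduce to Boolean matrix arithmetic.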