Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xi-Zhao Wang is active.

Publication


Featured research published by Xi-Zhao Wang.


IEEE Transactions on Fuzzy Systems | 2008

Attributes Reduction Using Fuzzy Rough Sets

Eric C. C. Tsang; Degang Chen; Daniel S. Yeung; Xi-Zhao Wang; John W. T. Lee

Fuzzy rough sets generalize traditional rough sets to deal with both fuzziness and vagueness in data. Existing research on fuzzy rough sets has concentrated mainly on the construction of approximation operators; less effort has been devoted to attribute reduction of databases with fuzzy rough sets. This paper focuses on attribute reduction with fuzzy rough sets. After analyzing previous work on the topic, we introduce formal concepts of attribute reduction with fuzzy rough sets and study the structure of attribute reducts in full. An algorithm using a discernibility matrix to compute all attribute reducts is developed. Along these lines, we set up a solid mathematical foundation for attribute reduction with fuzzy rough sets. Experimental results show that the proposed approach is feasible and valid.
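The discernibility-matrix idea behind the paper can be illustrated in the classical (crisp) rough set setting: for every pair of objects with different decisions, record the attributes that distinguish them, then find a set of attributes that hits every entry. A minimal sketch, with a made-up toy decision table; the paper works with fuzzy rough sets and computes all reducts, whereas the greedy hitting set below finds only one:

```python
# Crisp discernibility-matrix reduct computation (illustrative sketch;
# the paper's fuzzy rough version replaces equality tests with fuzzy
# similarity degrees).
from itertools import combinations

def discernibility_matrix(objects, decisions, attrs):
    """For each pair of objects with different decisions, record the
    attributes on which their values differ."""
    entries = []
    for i, j in combinations(range(len(objects)), 2):
        if decisions[i] != decisions[j]:
            diff = {a for a in attrs if objects[i][a] != objects[j][a]}
            if diff:
                entries.append(diff)
    return entries

def greedy_reduct(entries, attrs):
    """Greedy minimal hitting set: repeatedly pick the attribute that
    covers the most uncovered matrix entries."""
    uncovered = list(entries)
    reduct = set()
    while uncovered:
        best = max(attrs, key=lambda a: sum(a in e for e in uncovered))
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return reduct

# Toy decision table: attributes a, b, c and a binary decision.
objs = [{"a": 0, "b": 0, "c": 1},
        {"a": 0, "b": 1, "c": 1},
        {"a": 1, "b": 1, "c": 0}]
dec = [0, 1, 1]
attrs = ["a", "b", "c"]
m = discernibility_matrix(objs, dec, attrs)
print(greedy_reduct(m, attrs))  # {'b'}
```

On this toy table, attribute b alone discerns every decision-relevant pair, so the reduct is {b}.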


Information Sciences | 2007

Learning fuzzy rules from fuzzy samples based on rough set technique

Xi-Zhao Wang; Eric C. C. Tsang; Suyun Zhao; Degang Chen; Daniel S. Yeung

Although traditional rough set theory is a powerful mathematical tool for modeling incompleteness and vagueness, its performance in dealing with initially fuzzy data is usually poor. This paper attempts to improve that performance by extending the traditional rough set approach to the fuzzy environment. The extension is twofold: knowledge representation and knowledge reduction. First, we provide new definitions of fuzzy lower and upper approximations by considering the similarity between objects. Second, we extend a number of underlying concepts of knowledge reduction (such as the reduct and the core) to the fuzzy environment and use these extensions to propose a heuristic algorithm that learns fuzzy rules from initially fuzzy data. Finally, we provide numerical experiments to demonstrate the feasibility of the proposed algorithm. One of the main contributions of this paper is showing that the fundamental relationship between the reducts and the core of rough sets still holds after the proposed extension.
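Fuzzy lower and upper approximations of this kind can be sketched in the standard min-max form, where the lower approximation measures certain membership and the upper approximation measures possible membership under a fuzzy similarity relation. The similarity matrix and fuzzy set below are made up, and the paper's actual similarity-based definitions differ in detail:

```python
# Standard min-max fuzzy rough approximations (illustrative sketch; the
# paper's definitions are a refinement of this form).

def fuzzy_lower(R, A, x):
    """Degree to which x certainly belongs to A: inf_y max(1 - R(x,y), A(y))."""
    return min(max(1.0 - R[x][y], A[y]) for y in range(len(A)))

def fuzzy_upper(R, A, x):
    """Degree to which x possibly belongs to A: sup_y min(R(x,y), A(y))."""
    return max(min(R[x][y], A[y]) for y in range(len(A)))

# Toy universe of three objects: R is a fuzzy similarity relation,
# A is a fuzzy set over the same universe.
R = [[1.0, 0.8, 0.2],
     [0.8, 1.0, 0.4],
     [0.2, 0.4, 1.0]]
A = [0.9, 0.7, 0.1]
print(fuzzy_lower(R, A, 0), fuzzy_upper(R, A, 0))  # 0.7 0.9
```

As expected, the membership of object 0 in A (0.9) is bracketed by its lower and upper approximation degrees.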


Information Sciences | 2008

Induction of multiple fuzzy decision trees based on rough set technique

Xi-Zhao Wang; Junhai Zhai; Shu-Xia Lu

The integration of fuzzy sets and rough sets yields a hybrid soft-computing technique that has been applied successfully in many fields such as machine learning, pattern recognition, and image processing. The key to this technique is how to construct and use the fuzzy attribute reduct in fuzzy rough set theory. Given a fuzzy information system, we may find many fuzzy attribute reducts, each of which can contribute differently to decision-making. If only one fuzzy attribute reduct, perhaps the most important one, is selected to induce decision rules, useful information hidden in the other reducts is unavoidably lost. To make full use of the information provided by every individual fuzzy attribute reduct in a fuzzy information system, this paper presents a novel induction of multiple fuzzy decision trees based on rough set technique. The induction consists of three stages. First, several fuzzy attribute reducts are found by a similarity-based approach; then a fuzzy decision tree is generated for each fuzzy attribute reduct according to the fuzzy ID3 algorithm. Finally, the fuzzy integral is used as a fusion tool to integrate the generated trees, combining the outputs of the multiple fuzzy decision trees into the final decision. An illustration of the proposed fusion scheme is given. A numerical experiment on real data indicates that the proposed multiple-tree induction is superior to single-tree induction based on an individual reduct, or on the entire feature set, for learning problems with many attributes.
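Fusion by fuzzy integral can be sketched with the Choquet integral, one common choice of fuzzy integral; the measure values, tree names, and scores below are invented for illustration and are not the paper's:

```python
# Choquet-integral fusion of classifier scores (illustrative sketch).
# The fuzzy measure assigns a weight to every subset of classifiers,
# allowing interaction effects that a weighted average cannot express.

def choquet(values, measure):
    """Choquet integral of scores `values` (source -> score) with
    respect to a fuzzy measure `measure` (frozenset -> weight).
    Scores are visited in ascending order; each increment is weighted
    by the measure of the set of sources still at or above it."""
    total, prev = 0.0, 0.0
    remaining = set(values)
    for src, v in sorted(values.items(), key=lambda kv: kv[1]):
        total += (v - prev) * measure[frozenset(remaining)]
        prev = v
        remaining.discard(src)
    return total

# Two hypothetical fuzzy decision trees voting for one class.
scores = {"tree1": 0.6, "tree2": 0.9}
g = {frozenset(): 0.0,
     frozenset({"tree1"}): 0.5,
     frozenset({"tree2"}): 0.7,
     frozenset({"tree1", "tree2"}): 1.0}
print(round(choquet(scores, g), 4))  # 0.81
```

Here the fused score 0.81 = 0.6 * g({tree1, tree2}) + (0.9 - 0.6) * g({tree2}), so the stronger tree's extra confidence is discounted by its individual measure.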


IEEE Transactions on Neural Networks | 2007

Localized Generalization Error Model and Its Application to Architecture Selection for Radial Basis Function Neural Network

Daniel S. Yeung; Wing W. Y. Ng; Defeng Wang; Eric C. C. Tsang; Xi-Zhao Wang

The generalization error bounds found by current error models, based on the number of effective parameters of a classifier and the number of training samples, are usually very loose. These bounds are intended for the entire input space. However, the support vector machine (SVM), radial basis function neural network (RBFNN), and multilayer perceptron neural network (MLPNN) are local learning machines that treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model which bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. The model is then used to develop an architecture selection technique that maximizes a classifier's coverage of unseen samples subject to a given generalization error threshold. Experiments on 17 University of California at Irvine (UCI) data sets show that, in comparison with cross-validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.
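The stochastic sensitivity measure at the heart of the localized model can be approximated by brute force: perturb each training sample within a Q-neighborhood and average the squared change in the network output. The paper derives closed-form expressions for RBFNNs; the Monte Carlo estimate below and its toy linear "network" are purely illustrative:

```python
# Monte Carlo estimate of a stochastic sensitivity measure
# (illustrative; the paper uses closed-form expressions for RBFNNs).
import random

def stochastic_sensitivity(f, X, Q, n_perturb=500, seed=0):
    """Estimate E[(f(x + dx) - f(x))^2], with each coordinate of dx
    uniform in [-Q, Q], averaged over the training samples in X."""
    rng = random.Random(seed)
    total = 0.0
    for x in X:
        fx = f(x)
        for _ in range(n_perturb):
            xp = [xi + rng.uniform(-Q, Q) for xi in x]
            total += (f(xp) - fx) ** 2
    return total / (len(X) * n_perturb)

# Sanity check on a linear "network" f(x) = x1 + x2: the exact
# sensitivity is d * Q^2 / 3 = 0.06 for d = 2 and Q = 0.3.
X = [[0.0, 0.0], [0.5, -0.5], [-1.0, 1.0], [0.3, 0.7]]
est = stochastic_sensitivity(lambda x: x[0] + x[1], X, Q=0.3)
```

Growing Q widens the neighborhood covered by the bound while increasing the sensitivity term, which is the trade-off the architecture selection exploits.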


Systems, Man and Cybernetics | 1999

A comparative study on heuristic algorithms for generating fuzzy decision trees

Xi-Zhao Wang; Daniel S. Yeung; Eric C. C. Tsang

Fuzzy decision tree induction is an important way of learning from examples with fuzzy representation. Since the construction of an optimal fuzzy decision tree is NP-hard, research on heuristic algorithms is necessary. In this paper, three heuristic algorithms for generating fuzzy decision trees, one of them proposed by the authors, are analyzed and compared. The comparison is twofold: an analytic comparison based on expanded-attribute selection and reasoning mechanism, and an experimental comparison based on the size of the generated trees and learning accuracy. The purpose of this study is to explore the comparative strengths and weaknesses of the three heuristics and to offer useful guidelines on how to choose an appropriate heuristic for a particular problem.
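The expanded-attribute selection step in fuzzy ID3-style heuristics replaces crisp example counts with membership sums. A minimal sketch of one common formulation of fuzzy entropy, with membership-weighted class frequencies; the branch name, memberships, and labels are invented, and the heuristics compared in the paper differ in exactly this selection criterion:

```python
# Fuzzy entropy of a fuzzy subset of examples (illustrative sketch of
# one common fuzzy ID3 formulation).
import math

def fuzzy_entropy(memberships, labels):
    """Entropy where class frequencies are weighted by membership
    degrees instead of crisp counts."""
    total = sum(memberships)
    ent = 0.0
    for c in set(labels):
        p = sum(m for m, y in zip(memberships, labels) if y == c) / total
        if p > 0:
            ent -= p * math.log2(p)
    return ent

# Four examples with degrees of membership in a hypothetical fuzzy
# branch such as "humidity is high".
mu = [1.0, 0.5, 0.0, 0.5]
ys = ["play", "play", "skip", "skip"]
print(round(fuzzy_entropy(mu, ys), 4))  # 0.8113
```

The heuristic would expand the attribute whose fuzzy partition minimizes the membership-weighted average of such entropies.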


Information Sciences | 2011

Particle swarm optimization for determining fuzzy measures from data

Xi-Zhao Wang; Yu-Lin He; Ling-Cai Dong; Huan-Yu Zhao

Fuzzy measures and fuzzy integrals have been used successfully in many real applications, and determining the fuzzy measures is the most difficult problem in these applications. Although some methodologies exist for solving this problem, such as genetic algorithms, gradient descent algorithms, and neural networks, it is hard to say which is more appropriate or more feasible: each method has its advantages and limitations. It is therefore necessary to develop new methods or techniques to learn distinct fuzzy measures. In this paper, we make the first attempt to design a special particle swarm algorithm to determine a type of general fuzzy measure from data, and demonstrate that the algorithm is effective and efficient. Furthermore, we extend this algorithm to identify and revise other types of fuzzy measures. To test our algorithms, we compare them with the basic particle swarm algorithm, gradient descent algorithms, and genetic algorithms from the literature. In addition, to verify whether our algorithms are robust in noisy situations, a number of numerical experiments are conducted. Theoretical analysis and experimental results show that, for determining fuzzy measures, particle swarm optimization is feasible and performs better than the existing genetic and gradient descent algorithms.
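The particle swarm core that such an approach specializes can be sketched as a plain global-best PSO. The version in the paper adds handling for the constraints specific to fuzzy measures (monotonicity with respect to set inclusion), which is omitted here; the parameter values and the sphere-function sanity check are illustrative:

```python
# Plain global-best particle swarm optimization (illustrative core;
# the paper's algorithm adds fuzzy-measure constraint handling).
import random

def pso(fitness, dim, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Sanity check: minimize the sphere function; best_f should approach 0.
best, best_f = pso(lambda p: sum(x * x for x in p), dim=3)
```

For fuzzy measure identification, each particle's position would encode the measure values of the nonempty subsets, with fitness measuring the fit of the resulting fuzzy integrals to the data.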


Neurocomputing | 2011

Upper integral network with extreme learning mechanism

Xi-Zhao Wang; Aixia Chen; Hui-Min Feng

The upper integral is a type of non-linear integral with respect to non-additive measures, representing the maximum potential efficiency of a group of interacting features. The value of an upper integral can be evaluated by solving a linear programming problem. Considering the upper integral as a classifier, this paper first investigates its implementation and performance. By fusing multiple upper integral classifiers with a single-layer neural network, the paper then builds an upper integral network as a classification system. The learning mechanism of the extreme learning machine (ELM) is used to train this single-layer network. A comparison of performance between a single upper integral classifier and the upper integral network is given on a number of benchmark databases.


Soft Computing | 2012

Dynamic ensemble extreme learning machine based on sample entropy

Junhai Zhai; Hong-yu Xu; Xi-Zhao Wang

The extreme learning machine (ELM) has been proposed as a new learning algorithm for single-hidden-layer feed-forward neural networks. By randomly selecting input weights and hidden-layer biases, ELM overcomes many drawbacks of traditional gradient-based learning algorithms, such as local minima, improper learning rates, and low learning speed. However, ELM suffers from instability and over-fitting, especially on large datasets. In this paper, a dynamic ensemble extreme learning machine based on sample entropy is proposed, which alleviates the problems of instability and over-fitting to some extent and increases prediction accuracy. The experimental results show that the proposed approach is robust and efficient.
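The base ELM that such an ensemble is built from reduces training to a single least-squares solve: fix random hidden weights, compute the hidden activations, and solve for the output weights analytically. A pure-Python sketch on a made-up regression task; the network size and target are illustrative, and real implementations use the Moore-Penrose pseudo-inverse rather than the normal equations used here:

```python
# Minimal extreme learning machine (illustrative sketch).
import math
import random

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def elm_train(X, T, n_hidden, seed=0):
    """ELM training: random hidden weights, analytic output weights."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    H = [[math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + bj)
          for w, bj in zip(W, b)] for x in X]
    # Least squares via normal equations: (H^T H) beta = H^T T.
    HtH = [[sum(h[i] * h[j] for h in H) for j in range(n_hidden)]
           for i in range(n_hidden)]
    HtT = [sum(h[i] * t for h, t in zip(H, T)) for i in range(n_hidden)]
    beta = gauss_solve(HtH, HtT)
    return W, b, beta

def elm_predict(x, W, b, beta):
    return sum(bj * math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + wb)
               for w, wb, bj in zip(W, b, beta))

# Toy regression: learn y = x1 + x2 from 80 random points.
rng = random.Random(1)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(80)]
T = [x[0] + x[1] for x in X]
W, b, beta = elm_train(X, T, n_hidden=10)
mse = sum((elm_predict(x, W, b, beta) - t) ** 2 for x, t in zip(X, T)) / len(X)
```

Because only the output weights are learned, training is fast but sensitive to the random hidden layer, which is the instability that ensemble schemes such as the one proposed here aim to damp.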


IEEE Transactions on Knowledge and Data Engineering | 2010

Building a Rule-Based Classifier—A Fuzzy-Rough Set Approach

Suyun Zhao; Eric C. C. Tsang; Degang Chen; Xi-Zhao Wang

The fuzzy-rough set (FRS) methodology, a useful tool for handling discernibility and fuzziness, has been widely studied. Some researchers have studied the rough approximation of fuzzy sets, while others have focused on one application of FRS: attribute reduction (i.e., feature selection). Building classifiers with FRS, another of its applications, has received less attention. In this paper, we build a rule-based classifier using a generalized FRS model after proposing a new concept named the "consistence degree," which is used as the critical value for keeping discernibility information invariant during rule induction. First, we generalize the existing FRS to a model robust to misclassification and perturbation by incorporating a controlled threshold into the knowledge representation of FRS. Second, we propose the "consistence degree" and show, by strict mathematical reasoning, that it is a reasonable critical value for reducing redundant attribute values in a database. Employing this concept, we then design a discernibility vector to develop the rule-induction algorithms. The induced rule set can function as a classifier. Finally, experimental results show that the proposed rule-based classifier is feasible and effective on noisy data.


Pattern Recognition | 2008

Feature selection using localized generalization error for supervised classification problems using RBFNN

Wing W. Y. Ng; Daniel S. Yeung; Michael Firth; Eric C. C. Tsang; Xi-Zhao Wang

A pattern classification problem usually involves using high-dimensional features that make the classifier very complex and difficult to train. With no feature reduction, both training accuracy and generalization capability will suffer. This paper proposes a novel hybrid filter-wrapper-type feature subset selection methodology using a localized generalization error model. The localized generalization error model for a radial basis function neural network bounds from above the generalization error for unseen samples located within a neighborhood of the training samples. Iteratively, the feature making the smallest contribution to the generalization error bound is removed. Moreover, the novel feature selection method is independent of the sample size and is computationally fast. The experimental results show that the proposed method consistently removes large percentages of features with statistically insignificant loss of testing accuracy for unseen samples. In the experiments for two of the datasets, the classifiers built using feature subsets with 90% of features removed by our proposed approach yield average testing accuracies higher than those trained using the full set of features. Finally, we corroborate the efficacy of the model by using it to predict corporate bankruptcies in the US.

Collaboration


Dive into Xi-Zhao Wang's collaborations.

Top Co-Authors

Daniel S. Yeung (Hong Kong Polytechnic University)
Eric C. C. Tsang (Hong Kong Polytechnic University)
Degang Chen (North China Electric Power University)