Publications


Featured research published by Ludmila I. Kuncheva.


Archive | 2004

Combining Pattern Classifiers

Ludmila I. Kuncheva

A monograph on combining pattern classifiers, covering the fundamentals of classifier combination, fusion of class-label and continuous-valued classifier outputs, classifier selection, and diversity in classifier ensembles.


Machine Learning | 2003

Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy

Ludmila I. Kuncheva; Christopher J. Whitaker

Diversity among the members of a team of classifiers is deemed to be a key issue in classifier combination. However, measuring diversity is not straightforward because there is no generally accepted formal definition. We have found and studied ten statistics which can measure diversity among binary classifier outputs (correct or incorrect vote for the class label): four averaged pairwise measures (the Q statistic, the correlation, the disagreement and the double fault) and six non-pairwise measures (the entropy of the votes, the difficulty index, the Kohavi-Wolpert variance, the interrater agreement, the generalized diversity, and the coincident failure diversity). Four experiments have been designed to examine the relationship between the accuracy of the team and the measures of diversity, and among the measures themselves. Although there are proven connections between diversity and accuracy in some special cases, our results raise some doubts about the usefulness of diversity measures in building classifier ensembles in real-life pattern recognition problems.
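Two of the pairwise measures listed above are easy to state concretely. Below is a minimal Python sketch (an illustration, not the paper's code) of the Q statistic and the disagreement measure for one pair of classifiers, using the convention that 1 means a correct vote and 0 an incorrect one:

```python
def pair_diversity(a, b):
    """Q statistic and disagreement measure for two classifiers.

    a, b: sequences of 0/1, where 1 means the classifier labelled
    the example correctly and 0 means it did not.
    """
    n11 = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    n00 = sum(1 for x, y in zip(a, b) if x == 0 and y == 0)
    n10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    n01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
    # Q ranges over [-1, 1]; 0 for statistically independent outputs.
    q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)
    # Disagreement: fraction of examples on which the two differ.
    dis = (n01 + n10) / len(a)
    return q, dis
```

Averaging either quantity over all pairs in the ensemble gives the corresponding averaged pairwise measure from the abstract.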


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

A theoretical study on six classifier fusion strategies

Ludmila I. Kuncheva

We look at a single point in feature space, two classes, and L classifiers estimating the posterior probability for class ω1. Assuming that the estimates are independent and identically distributed (normal or uniform), we give formulas for the classification error for the following fusion methods: average, minimum, maximum, median, majority vote, and oracle.
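Most of these fusion rules can be sketched directly. The snippet below (an illustration, not the paper's derivation) fuses L estimates of the posterior P(ω1 | x) into a single value, with the final decision being ω1 when the fused value exceeds 0.5; the oracle is omitted since it needs the true label:

```python
import statistics

def fuse(estimates, rule):
    """Fuse L estimates of P(omega_1 | x) into one value.

    estimates: list of posterior estimates, one per classifier.
    rule: one of "average", "minimum", "maximum", "median", "majority".
    """
    rules = {
        "average": lambda e: sum(e) / len(e),
        "minimum": min,
        "maximum": max,
        "median": statistics.median,
        # Majority vote: fraction of classifiers that individually
        # prefer omega_1 (i.e. whose estimate exceeds 0.5).
        "majority": lambda e: sum(p > 0.5 for p in e) / len(e),
    }
    return rules[rule](estimates)
```

For example, with estimates [0.6, 0.4, 0.7], the average, median, maximum, and majority rules all decide ω1, while the minimum rule decides the other class.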


Archive | 2010

Fuzzy Classifier Design

Ludmila I. Kuncheva

This book about fuzzy classifier design briefly introduces the fundamentals of supervised pattern recognition and fuzzy set theory. Fuzzy if-then classifiers are defined and some theoretical properties thereof are studied. Popular training algorithms are detailed. Non if-then fuzzy classifiers include relational, k-nearest neighbor, prototype-based designs, etc. A chapter on multiple classifier combination discusses fuzzy and non-fuzzy models for fusion and selection.
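A fuzzy if-then classifier of the kind the book defines can be illustrated in a few lines. The single input, triangular membership functions, and winner-takes-all decision below are simplifying assumptions for the sketch, not the book's notation:

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_classify(x, rules):
    """Single-input fuzzy if-then classifier.

    rules: list of ((a, b, c), class_label) pairs, each read as
    "IF x is A_i THEN class is c_i" with A_i a triangular fuzzy set.
    The winning class is the one whose rule fires most strongly.
    """
    strengths = [(triangular(x, a, b, c), label)
                 for (a, b, c), label in rules]
    return max(strengths)[1]
```

A rule base such as [((0, 1, 2), "low"), ((1, 3, 5), "high")] then assigns x = 0.9 to "low" and x = 3 to "high".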


IEEE Transactions on Systems, Man, and Cybernetics | 2002

Switching between selection and fusion in combining classifiers: an experiment

Ludmila I. Kuncheva

This paper presents a combination of classifier selection and fusion, using statistical inference to switch between the two. Selection is applied in those regions of the feature space where one classifier strongly dominates the others from the pool (a scheme called clustering-and-selection, CS), and fusion is applied in the remaining regions. The decision templates (DT) method is adopted for the classifier fusion part. The proposed combination scheme (called CS+DT) is compared experimentally against its two components, and also against majority vote, naive Bayes, two joint-distribution methods (BKS and a variant due to Wernecke (1988)), the dynamic classifier selection (DCS) algorithm DCS_LA based on local accuracy (Woods et al. (1997)), and simple fusion methods such as maximum, minimum, average, and product. Based on the results with five data sets with homogeneous ensembles (multilayer perceptrons, MLPs) and ensembles of different classifiers, we offer a discussion on when to combine classifiers and how classifier selection (static or dynamic) can be misled by the differences in the classifier team.
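The switching logic can be sketched for one region of the feature space. The fixed dominance margin and plain averaging below are simplified stand-ins for, respectively, the statistical test and the decision templates method used in the paper:

```python
def cs_plus_fusion_decision(region_accuracies, estimates, margin=0.1):
    """Switch between selection and fusion in one region (a sketch).

    region_accuracies: per-classifier accuracy estimated on training
    points falling in this region of the feature space.
    estimates: the classifiers' estimates of P(omega_1 | x) for the
    current input.
    margin: hypothetical threshold standing in for the paper's
    statistical inference; select only when the best classifier beats
    the runner-up by at least this much.
    """
    order = sorted(range(len(region_accuracies)),
                   key=lambda i: region_accuracies[i], reverse=True)
    best, second = order[0], order[1]
    if region_accuracies[best] - region_accuracies[second] >= margin:
        # Selection: trust the dominant classifier alone.
        return estimates[best]
    # Fusion: plain averaging stands in for decision templates here.
    return sum(estimates) / len(estimates)
```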


Pattern Analysis and Applications | 2003

Limits on the majority vote accuracy in classifier fusion

Ludmila I. Kuncheva; Christopher J. Whitaker; Catherine A. Shipp; Robert P. W. Duin

We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule’s Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over an individual classifier with accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial although not straightforwardly related to the accuracy. The pattern of success showed that for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.
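The baseline these limits are compared against, L independent classifiers each with accuracy p, has a closed-form majority vote accuracy given by the binomial tail (this is the standard Condorcet-style formula, not the paper's bounds):

```python
from math import comb

def majority_vote_accuracy(p, L):
    """Majority vote accuracy of L independent classifiers, each
    correct with probability p, for odd L: the probability that more
    than half of them are correct."""
    return sum(comb(L, k) * p**k * (1 - p)**(L - k)
               for k in range((L + 1) // 2, L + 1))
```

With p = 0.6 and L = 3 this gives 0.648, already above the individual accuracy; the paper's point is that suitable negative dependence can push the ensemble well beyond this independent-classifier figure.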


Multiple Classifier Systems | 2004

Classifier Ensembles for Changing Environments

Ludmila I. Kuncheva

We consider strategies for building classifier ensembles for non-stationary environments where the classification task changes during the operation of the ensemble. Individual classifier models capable of online learning are reviewed. The concept of "forgetting" is discussed. Online ensembles and strategies suitable for changing environments are summarized.
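One common concrete form of forgetting, offered here only as an illustration and not necessarily one of the specific models the paper reviews, is a multiplicative weight update in the style of weighted majority: members that err on the newest example are down-weighted, so old performance fades and the ensemble can track a drifting concept.

```python
def update_weights(weights, correct, beta=0.8):
    """One step of an exponential-forgetting weight update (a sketch).

    weights: current positive weight per ensemble member.
    correct: per-member booleans, True if that member classified the
    newest example correctly.
    beta: forgetting factor in (0, 1); smaller values forget faster.
    """
    new = [w if ok else w * beta for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]  # renormalise to sum to 1
```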


Information Fusion | 2002

Relationships between combination methods and measures of diversity in combining classifiers

Catherine A. Shipp; Ludmila I. Kuncheva

This study looks at the relationships between different methods of classifier combination and different measures of diversity. We considered 10 combination methods and 10 measures of diversity on two benchmark data sets. The relationship was sought on ensembles of three classifiers built on all possible partitions of the respective feature sets into subsets of pre-specified sizes. The only positive finding was that the Double-Fault measure of diversity and the measure of difficulty both showed reasonable correlation with Majority Vote and Naive Bayes combinations. Since both these measures have an indirect connection to the ensemble accuracy, this result was not unexpected. However, our experiments did not detect a consistent relationship between the other measures of diversity and the 10 combination methods.


IEEE Transactions on Evolutionary Computation | 2000

Designing classifier fusion systems by genetic algorithms

Ludmila I. Kuncheva; Lakhmi C. Jain

We suggest two simple ways to use a genetic algorithm (GA) to design a multiple-classifier system. The first GA version selects disjoint feature subsets to be used by the individual classifiers, whereas the second version selects (possibly) overlapping feature subsets, and also the types of the individual classifiers. The two GAs have been tested with four real data sets: heart, Satimage, letters, and forensic glasses. We used three-classifier systems and basic types of individual classifiers (the linear and quadratic discriminant classifiers and the logistic classifier). The multiple-classifier systems designed with the two GAs were compared against classifiers using: all features; the best feature subset found by the sequential backward selection method; and the best feature subset found by a GA. The GA design can be made less prone to overtraining by including penalty terms in the fitness function accounting for the number of features used.
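The first encoding can be sketched compactly: assigning each feature to exactly one classifier (or to none) makes the subsets disjoint by construction. The mutation-only loop with truncation selection below is a toy stand-in for the paper's GA, and the fitness function is a user-supplied placeholder for the ensemble's validation accuracy:

```python
import random

def evolve_disjoint_subsets(n_features, n_classifiers, fitness,
                            pop_size=20, generations=50, seed=0):
    """Toy GA for disjoint feature-subset selection (a sketch).

    A chromosome is a list of length n_features with genes in
    0..n_classifiers, where gene g > 0 assigns the feature to
    classifier g and 0 leaves it unused.
    fitness: function mapping a chromosome to a score to maximise.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, n_classifiers) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            gene = rng.randrange(n_features)  # single point mutation
            child[gene] = rng.randint(0, n_classifiers)
            children.append(child)
        pop = survivors + children            # elitist replacement
    return max(pop, key=fitness)
```

A penalty for the number of features used, as the abstract suggests, would simply be subtracted inside the supplied fitness function.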


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Evaluation of Stability of k-Means Cluster Ensembles with Respect to Random Initialization

Ludmila I. Kuncheva; Dmitry P. Vetrov

Many clustering algorithms, including cluster ensembles, rely on a random component. Stability of the results across different runs is considered to be an asset of the algorithm. The cluster ensembles considered here are based on k-means clusterers. Each clusterer is assigned a random target number of clusters, k, and is started from a random initialization. Here, we use 10 artificial and 10 real data sets to study ensemble stability with respect to random k and random initialization. The data sets were chosen to have a small number of clusters (two to seven) and a moderate number of data points (up to a few hundred). Pairwise stability is defined as the adjusted Rand index between pairs of clusterers in the ensemble, averaged across all pairs. Nonpairwise stability is defined as the entropy of the consensus matrix of the ensemble. An experimental comparison with the stability of the standard k-means algorithm was carried out for k from 2 to 20. The results revealed that ensembles are generally more stable, markedly so for larger k. To establish whether stability can serve as a cluster validity index, we first looked at the relationship between stability and accuracy with respect to the number of clusters, k. We found that such a relationship strongly depends on the data set, varying from almost perfect positive correlation (0.97, for the glass data) to almost perfect negative correlation (-0.93, for the crabs data). We propose a new combined stability index to be the sum of the pairwise individual and ensemble stabilities. This index was found to correlate better with the ensemble accuracy. Following the hypothesis that a point of stability of a clustering algorithm corresponds to a structure found in the data, we used the stability measures to pick the number of clusters. The combined stability index gave the best results.
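Pairwise stability as defined in the abstract, the adjusted Rand index averaged over all pairs of clusterers, can be computed with a short pure-Python sketch (the ARI formula below is the standard Hubert-Arabie form):

```python
from math import comb
from collections import Counter
from itertools import combinations

def adjusted_rand_index(u, v):
    """Adjusted Rand index between two partitions given as label
    vectors u and v over the same n points; 1 for identical
    partitions (up to label permutation), about 0 for chance."""
    n = len(u)
    nij = Counter(zip(u, v))          # contingency table counts
    a = Counter(u)                    # row sums
    b = Counter(v)                    # column sums
    sum_ij = sum(comb(c, 2) for c in nij.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

def pairwise_stability(partitions):
    """Average ARI over all pairs of partitions in the ensemble."""
    pairs = list(combinations(partitions, 2))
    return sum(adjusted_rand_index(u, v) for u, v in pairs) / len(pairs)
```

Note that the ARI is invariant to relabelling, so two clusterers that find the same grouping under different cluster labels still count as perfectly stable.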

Collaboration

Top Co-Authors

Robert P. W. Duin, Delft University of Technology