Publication


Featured research published by Robert P. W. Duin.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Statistical pattern recognition: a review

Anil K. Jain; Robert P. W. Duin; Jianchang Mao

The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
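
As an illustration of the design stages the review enumerates (representation, feature extraction, classifier design and learning, training/test selection, performance evaluation), here is a minimal sketch; it is not from the paper, only an assumed scikit-learn pipeline on the library's built-in digits data.

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)                              # pattern representation: 8x8 pixel vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,   # selection of training and test samples
                                          random_state=0)
clf = make_pipeline(
    StandardScaler(),                                            # normalisation of the sensed features
    PCA(n_components=20),                                        # feature extraction / dimension reduction
    LogisticRegression(max_iter=1000),                           # classifier design and learning
)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te))) # performance evaluation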


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

On combining classifiers

Josef Kittler; Mohamad Hatef; Robert P. W. Duin; Jiri Matas

We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions, the sum rule, outperforms the other classifier combination schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically.
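
A small numerical sketch of the fixed combining rules compared in the paper (sum, product, max, min, majority vote); the posterior values and the function below are invented for illustration, not taken from the paper.

import numpy as np

def combine(posteriors, rule):
    # Fixed combining rules over per-classifier posterior estimates for one object.
    # posteriors: shape (n_classifiers, n_classes); returns the winning class index.
    P = np.asarray(posteriors, dtype=float)
    if rule == "sum":                      # the sum (mean) rule
        scores = P.sum(axis=0)
    elif rule == "product":                # follows from assuming independent representations
        scores = P.prod(axis=0)
    elif rule == "max":
        scores = P.max(axis=0)
    elif rule == "min":
        scores = P.min(axis=0)
    elif rule == "majority":               # each classifier casts one vote
        scores = np.bincount(P.argmax(axis=1), minlength=P.shape[1])
    else:
        raise ValueError(rule)
    return int(np.argmax(scores))

# Three classifiers, three classes; one near-zero estimate vetoes class 0 under
# the product rule, while the sum rule absorbs the single poor estimate.
P = [[0.90, 0.10, 0.00],
     [0.80, 0.20, 0.00],
     [0.01, 0.59, 0.40]]
print(combine(P, "sum"), combine(P, "product"), combine(P, "majority"))   # -> 0 1 0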


Machine Learning | 2004

Support Vector Data Description

David M. J. Tax; Robert P. W. Duin

Data domain description concerns the characterization of a data set. A good description covers all target data but includes no superfluous space. The boundary of a dataset can be used to detect novel data or outliers. We present the Support Vector Data Description (SVDD), which is inspired by the Support Vector Classifier. It obtains a spherically shaped boundary around a dataset and, analogously to the Support Vector Classifier, it can be made flexible by using other kernel functions. The method is made robust against outliers in the training set and is capable of tightening the description by using negative examples. We show characteristics of the Support Vector Data Description using artificial and real data.
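
scikit-learn does not ship SVDD itself; assuming an RBF kernel (for which k(x, x) is constant and the SVDD boundary coincides with that of the one-class SVM), the sketch below uses sklearn.svm.OneClassSVM as a stand-in, with invented Gaussian target data.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_target = rng.normal(size=(200, 2))                     # target class only (assumed data)
X_new = np.vstack([rng.normal(size=(50, 2)),             # fresh target objects
                   rng.uniform(-6, 6, size=(50, 2))])    # candidate outliers

# nu upper-bounds the fraction of target objects rejected by the description
# (the error of the first kind) and lower-bounds the fraction of support vectors
svdd_like = OneClassSVM(kernel="rbf", gamma=0.2, nu=0.05).fit(X_target)
pred = svdd_like.predict(X_new)                          # +1 = accepted, -1 = rejected as outlier
print("rejected:", int((pred == -1).sum()), "of", len(X_new))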


Pattern Recognition Letters | 1999

Support vector domain description

David M. J. Tax; Robert P. W. Duin

This paper shows the use of a data domain description method, inspired by the support vector machine of Vapnik, called the support vector domain description (SVDD). This data description can be used for novelty or outlier detection. A spherically shaped decision boundary around a set of objects is constructed by a set of support vectors describing the sphere boundary. It has the possibility of transforming the data to new feature spaces without much extra computational cost. By using the transformed data, this SVDD can obtain more flexible and more accurate data descriptions. The error of the first kind, the fraction of the training objects which will be rejected, can be estimated immediately from the description without the use of an independent test set, which makes this method data efficient. The support vector domain description is compared with other outlier detection methods on real data.
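
The claim that the error of the first kind can be read from the description itself can be illustrated, again with the one-class SVM as a stand-in rather than the authors' implementation, by the fraction of training objects that become support vectors; the data, parameters and this interpretation of the estimate are assumptions for illustration.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_target = rng.normal(size=(300, 2))                     # target objects only (assumed data)
model = OneClassSVM(kernel="rbf", gamma=0.3, nu=0.1).fit(X_target)

# The fraction of training objects that become support vectors serves as an
# estimate of the error of the first kind, read directly from the description;
# compare it with the empirical rejection rate on the same training set.
sv_fraction = len(model.support_) / len(X_target)
train_rejected = float((model.predict(X_target) == -1).mean())
print(f"estimated first-kind error: {sv_fraction:.3f}, training rejection rate: {train_rejected:.3f}")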


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Multiclass linear dimension reduction by weighted pairwise Fisher criteria

Marco Loog; Robert P. W. Duin

We derive a class of computationally inexpensive linear dimension reduction criteria by introducing a weighted variant of the well-known K-class Fisher criterion associated with linear discriminant analysis (LDA). It can be seen that LDA weights contributions of individual class pairs according to the Euclidean distance of the respective class means. We generalize upon LDA by introducing a different weighting function.
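
A sketch of the idea, not the authors' code: the classical between-class scatter decomposes into a prior-weighted sum of pairwise mean-difference outer products, and the generalization inserts a weight depending on the pairwise Mahalanobis distance between class means. The 1/d^2 weight used below is only an illustrative choice, not necessarily the weighting function proposed in the paper.

import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris

def weighted_pairwise_fisher(X, y, weight_fn, n_components):
    # Weighted pairwise Fisher criterion: the classical between-class scatter
    # S_B = sum_{i<j} p_i p_j (m_i - m_j)(m_i - m_j)^T is generalized by a weight
    # depending on the Mahalanobis distance between the two class means.
    # weight_fn = lambda d: 1.0 recovers ordinary multiclass LDA.
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / len(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    Sw = sum(p * np.cov(X[y == c], rowvar=False, bias=True)   # pooled within-class scatter
             for c, p in zip(classes, priors))
    Sw_inv = np.linalg.inv(Sw)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            diff = means[i] - means[j]
            delta = np.sqrt(diff @ Sw_inv @ diff)              # pairwise Mahalanobis distance
            Sb += priors[i] * priors[j] * weight_fn(delta) * np.outer(diff, diff)
    _, vecs = eigh(Sb, Sw)                                     # generalized eigenproblem Sb v = lambda Sw v
    return vecs[:, ::-1][:, :n_components]                     # directions with largest eigenvalues

X, y = load_iris(return_X_y=True)
# illustrative weight that de-emphasises class pairs that are already far apart
W = weighted_pairwise_fisher(X, y, weight_fn=lambda d: 1.0 / (d * d), n_components=2)
X_reduced = X @ W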


Pattern Recognition | 2000

Combining multiple classifiers by averaging or by multiplying

David M. J. Tax; Martijn van Breukelen; Robert P. W. Duin; Josef Kittler

In classification tasks it may be wise to combine observations from different sources. Not only does this decrease the training time, it can also increase the robustness and the performance of the classification. Combining is often done by simply (weighted) averaging of the outputs of the different classifiers. Using equal weights for all classifiers then results in the mean combination rule. This works very well in practice, but the combination strategy lacks a fundamental basis as it cannot readily be derived from the joint probabilities. This contrasts with the product combination rule, which can be obtained from the joint probability under the assumption of independence. In this paper we show differences and similarities between the mean combination rule and the product combination rule in theory and in practice.
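
A small simulation, not from the paper, assuming two conditionally independent feature sets and roughly equal class priors (the setting in which the product of posteriors is the theoretically exact combination); the data and classifiers are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
X1 = rng.normal(loc=y[:, None], scale=1.0, size=(n, 3))   # first source of observations
X2 = rng.normal(loc=y[:, None], scale=1.0, size=(n, 3))   # second, conditionally independent source

tr, te = train_test_split(np.arange(n), test_size=0.5, random_state=0)
P1 = LogisticRegression().fit(X1[tr], y[tr]).predict_proba(X1[te])
P2 = LogisticRegression().fit(X2[tr], y[tr]).predict_proba(X2[te])

mean_pred = (P1 + P2).argmax(axis=1)     # mean combination rule
prod_pred = (P1 * P2).argmax(axis=1)     # product rule; exact under independence and equal priors
for name, pred in [("classifier 1", P1.argmax(axis=1)), ("classifier 2", P2.argmax(axis=1)),
                   ("mean rule", mean_pred), ("product rule", prod_pred)]:
    print(name, round(float((pred == y[te]).mean()), 3))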


Pattern Analysis and Applications | 2002

Bagging, Boosting and the Random Subspace Method for Linear Classifiers

Marina Skurichina; Robert P. W. Duin

Recently, bagging, boosting and the random subspace method have become popular combining techniques for improving weak classifiers. These techniques are designed for, and usually applied to, decision trees. In this paper, contrary to common opinion, we demonstrate that they may also be useful in linear discriminant analysis. Simulation studies, carried out for several artificial and real data sets, show that the performance of the combining techniques is strongly affected by the small-sample-size properties of the base classifier: boosting is useful for large training sample sizes, while bagging and the random subspace method are useful for critical training sample sizes. Finally, a table describing the possible usefulness of the combining techniques for linear classifiers is presented.
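
A hedged scikit-learn sketch of the three combining techniques with a linear base classifier: logistic regression stands in for the Fisher-type linear discriminants studied in the paper, and the random subspace method is emulated with BaggingClassifier by subsampling features without bootstrapping; the data set and sizes are assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# small training set relative to the dimensionality, the regime studied in the paper
X, y = make_classification(n_samples=200, n_features=40, n_informative=10, random_state=0)
base = LogisticRegression(max_iter=1000)          # stand-in for a Fisher-type linear classifier

ensembles = {
    "single linear":   base,
    "bagging":         BaggingClassifier(base, n_estimators=50, random_state=0),
    "boosting":        AdaBoostClassifier(base, n_estimators=50, random_state=0),
    "random subspace": BaggingClassifier(base, n_estimators=50, max_features=0.5,
                                         bootstrap=False, random_state=0),   # feature subsets, no bootstrap
}
for name, clf in ensembles.items():
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))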


International Conference on Pattern Recognition | 2002

The combining classifier: to train or not to train?

Robert P. W. Duin

When more than a single classifier has been trained for the same recognition problem the question arises how this set of classifiers may be combined into a final decision rule. Several fixed combining rules are used that depend on the output values of the base classifiers only. They are almost always suboptimal. Usually, however, training sets are available. They may be used to calibrate the base classifier outputs, as well as to build a trained combining classifier using these outputs as inputs. It depends on various circumstances whether this is useful, in particular whether the training set is used for the base classifiers as well and whether they are overtrained. We present an intuitive discussion on the use of trained combiners, relating the question of the choice of the combining classifier to a similar choice in the area of dissimilarity based pattern recognition. Some simple examples are used to illustrate the discussion.
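
The fixed-versus-trained contrast can be sketched with scikit-learn's VotingClassifier (a fixed averaging rule) and StackingClassifier (a trained combiner on the base outputs); the base classifiers and data below are assumptions, and the stacker's cv option reflects the overtraining concern raised above by training the combiner on cross-validated base outputs.

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
bases = [("lr", LogisticRegression(max_iter=1000)),
         ("nb", GaussianNB()),
         ("knn", KNeighborsClassifier())]

fixed = VotingClassifier(bases, voting="soft")            # fixed rule: average the base outputs
trained = StackingClassifier(bases,                       # trained combiner on top of the base outputs
                             final_estimator=LogisticRegression(),
                             cv=5)                        # combiner is trained on cross-validated base
                                                          # outputs, guarding against overtrained bases
for name, clf in [("fixed (mean rule)", fixed), ("trained (stacking)", trained)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))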


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Linear dimensionality reduction via a heteroscedastic extension of LDA: the Chernoff criterion

Robert P. W. Duin; Marco Loog

We propose an eigenvector-based heteroscedastic linear dimension reduction (LDR) technique for multiclass data. The technique is based on a heteroscedastic two-class technique which utilizes the so-called Chernoff criterion, and successfully extends the well-known linear discriminant analysis (LDA). The latter, which is based on the Fisher criterion, is incapable of dealing with heteroscedastic data in a proper way. For the two-class case, the between-class scatter is generalized so as to capture differences in (co)variances. It is shown that the classical notion of between-class scatter can be associated with Euclidean distances between class means. From this viewpoint, the between-class scatter is generalized by employing the Chernoff distance measure, leading to our proposed heteroscedastic measure. Finally, using the results from the two-class case, a multiclass extension of the Chernoff criterion is proposed. This criterion combines separation information present in the class mean as well as the class covariance matrices. Extensive experiments and a comparison with similar dimension reduction techniques are presented.
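
For reference, a sketch of the Chernoff distance between two Gaussian class models, the two-class quantity the criterion builds on; the exact construction of the multiclass directed scatter matrix in the paper is not reproduced here, and the example data are invented.

import numpy as np

def chernoff_distance(m1, S1, m2, S2, alpha=0.5):
    # Chernoff distance between Gaussian class models N(m1, S1) and N(m2, S2).
    # alpha = 0.5 gives the Bhattacharyya distance. With equal covariances only
    # the first (mean) term survives, which is what the Fisher criterion measures;
    # the second term captures the covariance differences that LDA ignores.
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    S = (1 - alpha) * np.asarray(S1, float) + alpha * np.asarray(S2, float)
    diff = m1 - m2
    mean_term = 0.5 * alpha * (1 - alpha) * diff @ np.linalg.solve(S, diff)
    cov_term = 0.5 * (np.linalg.slogdet(S)[1]
                      - (1 - alpha) * np.linalg.slogdet(S1)[1]
                      - alpha * np.linalg.slogdet(S2)[1])
    return mean_term + cov_term

# two classes with identical means but different covariances:
# LDA sees no separation, the Chernoff criterion does
m = np.zeros(2)
S1, S2 = np.eye(2), np.diag([4.0, 0.25])
print(chernoff_distance(m, S1, m, S2))   # > 0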


Pattern Analysis and Applications | 2003

Limits on the majority vote accuracy in classifier fusion

Ludmila I. Kuncheva; Christopher J. Whitaker; Catherine A. Shipp; Robert P. W. Duin

We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule's Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over an individual classifier with accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial although not straightforwardly related to the accuracy. The pattern of success showed that for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.
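
The independent-classifier baseline against which these limits are stated is the classical binomial majority-vote accuracy; a short sketch, assuming an odd number of classifiers so no ties occur.

from math import comb

def majority_vote_accuracy(p, L):
    # Accuracy of majority voting over L independent classifiers of equal accuracy p
    # (L odd, so ties cannot occur): the vote is correct when more than half are correct.
    return sum(comb(L, m) * p**m * (1 - p)**(L - m) for m in range(L // 2 + 1, L + 1))

for L in (3, 5, 9):
    print(L, round(majority_vote_accuracy(0.6, L), 4))   # grows with L when p > 0.5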

Collaboration


Dive into Robert P. W. Duin's collaborations.

Top Co-Authors

David M. J. Tax
Delft University of Technology

Marco Loog
Delft University of Technology

Pavel Paclík
Delft University of Technology

Marina Skurichina
Delft University of Technology

Mauricio Orozco-Alzate
National University of Colombia

Wan-Jui Lee
Delft University of Technology

Isneri Talavera
Delft University of Technology