Robert Burduk
Wrocław University of Technology
Publications
Featured research published by Robert Burduk.
Archive | 2013
Robert Burduk; Konrad Jackowski; Marek Kurzynski; Michal Wozniak; Andrzej Zolnierek
Computer recognition systems are nowadays one of the most promising directions in artificial intelligence, and this book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition and reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Biometrics; Data Stream Classification and Big Data Analytics; Features, Learning, and Classifiers; Image Processing and Computer Vision; Medical Applications; Miscellaneous Applications; Pattern Recognition and Image Processing in Robotics; Speech and Word Recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers include researchers as well as students of computer science, artificial intelligence, and robotics.
Pattern Analysis and Applications | 2010
Robert Burduk
The paper considers the problem of classification error in multistage pattern recognition. The classification model is based primarily on the Bayes rule and secondarily on the notion of fuzzy numbers. Adopting a probability-fuzzy model, two concepts of hierarchical rules are proposed. The first approach considers a local criterion denoting the probabilities of misclassification at particular nodes of the tree. The second approach considers a globally optimal strategy that minimises the mean probability of misclassification over the whole multistage recognition process. The probability of misclassification is derived for a multiclass hierarchical classifier under the assumption that the features at different nodes of the tree are class-conditionally statistically independent and that we have fuzzy information on object features instead of exact information. A numerical example illustrating this difference concludes the work.
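The crisp-information core of such a multistage classifier can be sketched as a Bayes rule applied at each node of a small decision tree. The priors, Gaussian class-conditional densities, and two-stage tree below are illustrative assumptions; the paper's fuzzy-feature extension is not shown.

```python
import math

# Two-stage Bayes classifier: the root node separates class 0 from the
# superclass {1, 2}; the second node then separates class 1 from class 2.
PRIORS = {0: 0.4, 1: 0.3, 2: 0.3}
MU_ROOT = {0: -2.0, 1: 2.0, 2: 2.0}   # means of the feature used at the root
MU_LEAF = {1: -1.5, 2: 1.5}           # means of the feature used at stage two

def gauss(x, mu, sigma=1.0):
    """Gaussian density, used as the class-conditional feature density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x1, x2):
    # root node: class 0 vs. superclass {1, 2}, Bayes rule on feature x1
    p0 = PRIORS[0] * gauss(x1, MU_ROOT[0])
    p12 = sum(PRIORS[c] * gauss(x1, MU_ROOT[c]) for c in (1, 2))
    if p0 >= p12:
        return 0
    # second node: class 1 vs. class 2, Bayes rule on feature x2
    p1 = PRIORS[1] * gauss(x2, MU_LEAF[1])
    p2 = PRIORS[2] * gauss(x2, MU_LEAF[2])
    return 1 if p1 >= p2 else 2
```

A locally optimal strategy applies this rule node by node; the globally optimal strategy described in the paper instead chooses the node rules jointly to minimise the overall misclassification probability.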
Pattern Recognition Letters | 2013
Robert Burduk
The article presents a new approach to calculating the weights of base classifiers in a committee of classifiers. The obtained weights are interpreted in the context of interval-valued sets. The work proposes four different ways of calculating weights that consider both the correctness and incorrectness of the classification. The proposed weights have been used in algorithms which combine the outputs of base classifiers; we use outputs represented at both the rank and measure levels. The experiments involved several data sets available in the UCI repository and two data sets with generated distributions, and compare algorithms that calculate the weights according to resubstitution with the algorithms proposed in the work. The ensemble of classifiers has also been compared with the base classifiers entering the committee.
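As an illustrative sketch (not the paper's four weighting formulas), a weight derived from counts of correct and incorrect validation decisions can be applied to measure-level outputs (class supports) like this; all counts and supports below are assumed:

```python
# Derive base-classifier weights from validation correctness counts, then
# combine measure-level outputs (per-class supports) with those weights.

def accuracy_weight(correct, incorrect):
    """Weight a classifier by its validation accuracy."""
    return correct / (correct + incorrect)

def weighted_supports(supports, weights):
    """Sum per-classifier class supports, scaled by classifier weights."""
    n_classes = len(supports[0])
    total = [0.0] * n_classes
    for s, w in zip(supports, weights):
        for k in range(n_classes):
            total[k] += w * s[k]
    return total

# classifier 1 got 90/100 validation cases right, classifier 2 got 60/100
weights = [accuracy_weight(90, 10), accuracy_weight(60, 40)]
# measure-level outputs of the two classifiers for one test object
supports = [[0.2, 0.8], [0.7, 0.3]]
fused = weighted_supports(supports, weights)
decision = max(range(len(fused)), key=fused.__getitem__)
```

Here the fused supports are [0.60, 0.90], so the combiner picks class 1; rank-level combination would proceed analogously with class rankings in place of supports.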
Pattern Analysis and Applications | 2006
Robert Burduk; Marek Kurzynski
In this paper we present the decision rules of a two-stage binary Bayesian classifier. The loss function in our case is fuzzy-valued and depends on the stage or node of the decision tree. The decision rules minimize the mean risk, i.e., the mean value of the fuzzy loss function. The model is based first on the notion of a fuzzy random variable and second on the subjective ranking of fuzzy numbers defined by Campos and González. The paper also presents the influence of the choice of the parameter λ in the selected fuzzy number comparison method on the classification results. Finally, an example illustrating the study developed in the paper is considered.
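For intuition about the λ dependence: the Campos-González λ-average index of a triangular fuzzy number has a simple closed form, and the choice of λ can flip the ranking of two fuzzy losses. The sketch below uses assumed triangular losses and this closed-form reduction; it is illustrative, not the paper's exact computation.

```python
# Campos-González λ-average index for a triangular fuzzy number (a, b, c):
# integrating the lower and upper α-cut endpoints over α gives (a + b) / 2
# and (b + c) / 2, so the index is a convex mix of the two averages.

def cg_index(a, b, c, lam):
    """λ = 0 weights only lower α-cut endpoints (pessimistic), λ = 1 only upper."""
    lower_avg = (a + b) / 2.0   # integral of the lower α-cut endpoint
    upper_avg = (b + c) / 2.0   # integral of the upper α-cut endpoint
    return (1.0 - lam) * lower_avg + lam * upper_avg

# two assumed fuzzy losses; which one ranks smaller depends on λ
loss1 = (1.0, 2.0, 6.0)   # spread wide to the right
loss2 = (2.0, 3.0, 4.0)   # narrow
assert cg_index(*loss1, lam=0.0) < cg_index(*loss2, lam=0.0)  # 1.5 vs 2.5
assert cg_index(*loss1, lam=1.0) > cg_index(*loss2, lam=1.0)  # 4.0 vs 3.5
```

Since the Bayes rules pick the action with the smaller ranked fuzzy loss, such a flip is exactly how λ can change the classification result.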
Central European Journal of Medicine | 2012
Robert Burduk; Michal Wozniak
The paper presents a comparative study of selected recognition methods for a medical decision problem: acute abdominal pain diagnosis. We consider whether it is worth using expert knowledge and a learning set at the same time. The article examines two groups of decision tree approaches to the problem under consideration. The first does not use expert knowledge and generates a classifier only on the basis of the learning set. The second utilizes expert knowledge to specify the decision tree structure and the learning set to determine the mode of decision making in each node, based on Bayes decision theory. All classifiers are evaluated on the basis of computer experiments.
Pattern Analysis and Applications | 2012
Robert Burduk
The paper considers the problem of classification error in pattern recognition. The classification model is based primarily on the Bayes rule and secondarily on the notion of intuitionistic or interval-valued fuzzy sets. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have intuitionistic or interval-valued fuzzy information on object features instead of exact information. The probability of the intuitionistic or interval-valued fuzzy event is represented by a real number. Additionally, the obtained results are compared with a bound on the probability of error based on information energy. A numerical example concludes the work.
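The crisp-information baseline of this analysis can be computed directly: under class-conditional independence, the Bayes misclassification probability is the total non-maximal class mass summed over all feature vectors. A minimal sketch with assumed priors and binary features (the paper's fuzzy-information extension is not shown):

```python
from itertools import product

# Bayes error for two classes with two class-conditionally independent
# binary features; all probabilities below are illustrative assumptions.
priors = {0: 0.5, 1: 0.5}
# P(feature_j = 1 | class) for the two independent features
p1 = {0: [0.8, 0.7], 1: [0.3, 0.2]}

def joint(c, x):
    """P(class = c, features = x) under class-conditional independence."""
    p = priors[c]
    for j, xj in enumerate(x):
        p *= p1[c][j] if xj == 1 else 1.0 - p1[c][j]
    return p

# Bayes error: for each feature vector, the mass of the non-chosen class
pe = sum(min(joint(0, x), joint(1, x)) for x in product((0, 1), repeat=2))
```

For these assumed parameters pe evaluates to 0.25; with fuzzy rather than exact feature information, the paper replaces the crisp joint probabilities with probabilities of fuzzy events.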
asian conference on intelligent information and database systems | 2015
Robert Burduk; Krzysztof Walkowiak
The selection of classifiers is one of the important problems in the creation of an ensemble of classifiers. The paper presents a static selection in which a new method of calculating the weights of individual classifiers is used. The obtained weights can be interpreted in the context of interval logic: particular weights are not given precisely; instead, their lower and upper values are used. A number of experiments have been carried out on several data sets from the UCI repository.
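A minimal sketch of the idea, with an assumed fixed margin around validation accuracy as the interval construction (the paper's actual weight intervals are computed differently): only classifiers whose lower weight bound clears a threshold enter the ensemble.

```python
# Static ensemble selection with interval-valued weights: each base
# classifier gets [lower, upper] bounds around its validation accuracy.

def interval_weight(correct, total, margin=0.05):
    """Illustrative interval: accuracy +/- a fixed margin, clipped to [0, 1]."""
    acc = correct / total
    return (max(0.0, acc - margin), min(1.0, acc + margin))

# assumed validation results: (correct, total) per base classifier
validation = {"knn": (86, 100), "tree": (74, 100), "svm": (91, 100)}
threshold = 0.75

intervals = {name: interval_weight(c, n) for name, (c, n) in validation.items()}
# keep only classifiers whose lower bound clears the threshold
selected = sorted(name for name, (lo, hi) in intervals.items() if lo >= threshold)
```

With these numbers the tree (interval [0.69, 0.79]) is rejected despite an upper bound above the threshold, which is the cautious behaviour lower-bound selection encodes.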
international conference on computational collective intelligence | 2012
Robert Burduk
This paper presents a recognition algorithm with random selection of features. In the proposed classification procedure, the choice of weights is one of the main problems. We propose a weighted majority vote rule in which weights are represented by an interval-valued fuzzy set (IVFS); in our approach each weight has a lower and an upper membership function. The described algorithm was tested on one data set from the UCI repository, and the obtained results are compared with the most popular majority vote and the weighted majority vote rule.
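A hedged sketch of such an interval-weighted majority vote: assumed [lower, upper] weights are summed per voted class, and classes are compared by the midpoint of the aggregated interval (one simple convention, not necessarily the paper's).

```python
# Weighted majority vote where each classifier's weight is an interval,
# i.e. an interval-valued fuzzy membership [lower, upper].

def interval_vote(votes, weights):
    """votes[i] = class predicted by classifier i; weights[i] = (lo, hi)."""
    totals = {}
    for cls, (lo, hi) in zip(votes, weights):
        alo, ahi = totals.get(cls, (0.0, 0.0))
        totals[cls] = (alo + lo, ahi + hi)
    # rank classes by the midpoint of their aggregated interval
    return max(totals, key=lambda c: sum(totals[c]) / 2.0)

# five base classifiers with assumed votes and interval weights
votes = ["A", "B", "B", "A", "A"]
weights = [(0.6, 0.8), (0.9, 1.0), (0.8, 0.9), (0.2, 0.4), (0.3, 0.5)]
winner = interval_vote(votes, weights)
```

Note that class B wins here despite receiving fewer raw votes than A, which is precisely how interval weights can overturn a plain majority vote.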
Archive | 2004
Robert Burduk
The paper deals with the decision rules for a Bayesian hierarchical classifier based on a decision-tree scheme. For a given tree skeleton and the features to be used, the optimal (Bayes) decision rules (strategy) at each non-terminal node are presented. The case is considered in which a loss (utility) function is described using fuzzy numbers. The globally optimal Bayes strategy has been calculated for the case when the loss function depends on the node of the decision tree. The model is based on the notion of a fuzzy random variable and on a crisp ranking method for fuzzy numbers. The obtained results are illustrated with a calculation example.
asian conference on intelligent information and database systems | 2014
Robert Burduk
This paper presents a modification of the AdaBoost algorithm that allows for imprecision in the calculation of weights. In our approach the obtained weight values are varied within a certain range, which represents the uncertainty of the calculation of the weight of each element of the learning set. We use boosting by reweighting, where each weak classifier is based on the recursive partitioning method. A number of experiments have been carried out on eight data sets available in the UCI repository and on two randomly generated data sets. The obtained results are compared with those of the original AdaBoost algorithm using appropriate statistical tests.
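The standard AdaBoost reweighting step, plus a range-based perturbation of the kind described above, can be sketched as follows. The multiplicative perturbation scheme is an illustrative assumption, not the paper's exact method; the example assumes one misclassified object out of four.

```python
import math
import random

def reweight(weights, correct):
    """One AdaBoost reweighting round given per-example correctness flags.

    Assumes 0 < weighted error < 0.5, as for a usable weak classifier.
    """
    err = sum(w for w, ok in zip(weights, correct) if not ok)
    alpha = 0.5 * math.log((1.0 - err) / err)   # weak-classifier weight
    new = [w * math.exp(-alpha if ok else alpha)
           for w, ok in zip(weights, correct)]
    z = sum(new)                                # normalisation constant
    return [w / z for w in new], alpha

def perturb(weights, spread, rng):
    """Jitter each example weight within +/- spread of itself, renormalise."""
    jittered = [w * (1.0 + rng.uniform(-spread, spread)) for w in weights]
    z = sum(jittered)
    return [w / z for w in jittered]

w = [0.25] * 4
w, alpha = reweight(w, [True, True, True, False])  # one misclassified example
w_imprecise = perturb(w, 0.1, random.Random(0))    # imprecision step
```

After the exact update the misclassified example carries half the total weight (the classic AdaBoost property); the perturbation step then moves each weight within its range before the next round.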