Robert Susmaga
Poznań University of Technology
Publications
Featured research published by Robert Susmaga.
European Journal of Operational Research | 1999
Augustinos I. Dimitras; Roman Słowiński; Robert Susmaga; Constantin Zopounidis
A large number of methods like discriminant analysis, logit analysis, recursive partitioning algorithm, etc., have been used in the past for the prediction of business failure. Although some of these methods lead to models with a satisfactory ability to discriminate between healthy and bankrupt firms, they suffer from some limitations, often due to the unrealistic assumption of statistical hypotheses or due to a confusing language of communication with the decision makers. This is why we have undertaken research aiming at weakening these limitations. In this paper, the rough set approach is used to provide a set of rules able to discriminate between healthy and failing firms in order to predict business failure. Financial characteristics of a large sample of 80 Greek firms are used to derive a set of rules and to evaluate its prediction ability. The results are very encouraging, compared with those of discriminant and logit analyses, and prove the usefulness of the proposed method for business failure prediction. The rough set approach discovers relevant subsets of financial characteristics and represents in these terms all important relationships between the image of a firm and its risk of failure. The method analyses only facts hidden in the input data and communicates with the decision maker in the natural language of rules derived from his/her experience. © 1999 Elsevier Science B.V. All rights reserved.
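As an illustration of the rule-based discrimination described above, the sketch below applies a couple of if-then rules to a firm's financial ratios. The attribute names, thresholds and rules are hypothetical stand-ins, not the rules actually induced from the Greek sample.

```python
# A minimal sketch of rule-based failure prediction; the ratios, thresholds
# and rules below are hypothetical illustrations, not the induced rules.

def classify_firm(firm):
    """Apply a small set of if-then rules to a firm's financial ratios."""
    # Hypothetical rule 1: low profitability combined with high debt.
    if firm["net_income_to_assets"] < 0.02 and firm["debt_to_assets"] > 0.7:
        return "failing"
    # Hypothetical rule 2: sound liquidity and profitability.
    if firm["current_ratio"] > 1.5 and firm["net_income_to_assets"] > 0.05:
        return "healthy"
    return "unknown"  # no rule matches: abstain rather than guess

if __name__ == "__main__":
    firm = {"net_income_to_assets": 0.01,
            "debt_to_assets": 0.85,
            "current_ratio": 0.9}
    print(classify_firm(firm))  # -> failing
```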
Lecture Notes in Computer Science | 1998
Bartłomiej Prędki; Roman Słowiński; Jerzy Stefanowski; Robert Susmaga; Szymon Wilk
This paper briefly describes the ROSE software package. It is an interactive, modular system designed for analysis and knowledge discovery based on rough set theory, running under 32-bit operating systems on PC computers. It implements classical rough set theory as well as its extension based on the variable precision model. It includes generation of decision rules for classification systems and knowledge discovery.
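The variable precision extension mentioned above can be illustrated with a minimal sketch: an indiscernibility class is admitted to the lower approximation of a decision class when at least a fraction beta of its members belong to that class. The toy table and threshold are assumptions, not output of ROSE.

```python
# A minimal sketch of the variable precision rough set model; the data and
# the beta threshold are illustrative assumptions.
from collections import defaultdict

def vprs_lower_approximation(objects, attrs, decision, target, beta=0.8):
    """Objects whose indiscernibility class is included in the target class
    with relative degree at least beta (majority-inclusion test)."""
    classes = defaultdict(list)
    for obj in objects:
        key = tuple(obj[a] for a in attrs)   # indiscernibility class key
        classes[key].append(obj)
    lower = []
    for members in classes.values():
        inside = sum(1 for m in members if m[decision] == target)
        if inside / len(members) >= beta:
            lower.extend(members)
    return lower

if __name__ == "__main__":
    table = [
        {"colour": "red",  "size": "big",   "class": "good"},
        {"colour": "red",  "size": "big",   "class": "good"},
        {"colour": "red",  "size": "big",   "class": "bad"},   # minority noise
        {"colour": "blue", "size": "small", "class": "bad"},
    ]
    # With beta = 0.6 the noisy red/big class still enters the approximation.
    print(vprs_lower_approximation(table, ["colour", "size"], "class", "good", beta=0.6))
```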
Electronic Notes in Theoretical Computer Science | 2003
Krzysztof Dembczyński; Roman Pindur; Robert Susmaga
Rough set theory has proved to be a useful mathematical tool for the analysis of a vague description of objects. One of the extensions of the classic theory is the Dominance-based Rough Set Approach (DRSA), which allows analysing preference-ordered data. The analysis ends with a set of decision rules induced from rough approximations of decision classes. The role of the decision rules is to explain the analysed phenomena, but they may also be applied in classifying new, unseen objects. There are several strategies of decision rule induction. One of them consists in generating the exhaustive set of minimal rules. In this paper we present an algorithm based on Boolean reasoning techniques that follows this strategy within DRSA.
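A minimal sketch of the DRSA building blocks such rule induction starts from, assuming gain-type criteria and a toy data set: the lower approximation of an upward union of classes collects the objects whose dominating sets stay within the union.

```python
# A minimal DRSA sketch; all criteria are assumed to be of gain type and the
# toy decision table is hypothetical.

def dominates(x, y, criteria):
    """x dominates y iff x is at least as good as y on every criterion."""
    return all(x[c] >= y[c] for c in criteria)

def lower_approx_upward(objects, criteria, decision, t):
    """Lower approximation of the upward union Cl_t^>=: objects whose
    dominating set is entirely contained in the union."""
    union = [o for o in objects if o[decision] >= t]
    lower = []
    for x in union:
        dominating = [y for y in objects if dominates(y, x, criteria)]
        if all(y[decision] >= t for y in dominating):
            lower.append(x)
    return lower

if __name__ == "__main__":
    table = [
        {"math": 5, "physics": 5, "grade": 2},
        {"math": 3, "physics": 4, "grade": 2},
        {"math": 4, "physics": 4, "grade": 1},  # dominates the object above
                                                # yet has a worse grade
        {"math": 2, "physics": 2, "grade": 1},
    ]
    # Only the first object survives: the second is excluded by the
    # dominance inconsistency introduced by the third.
    print(lower_approx_upward(table, ["math", "physics"], "grade", 2))
```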
Intelligent Data Analysis | 1997
Robert Susmaga
This article addresses the problem of analyzing existing discretizations of continuous attributes with regard to their redundancy and minimality properties. The research was inspired by the increasing number of heuristic algorithms created for generating the discretizations using various methodologies, and the apparent lack of any direct techniques for examining the obtained solutions as far as their basic properties, e.g. redundancy, are concerned. The proposed method of analysis fills this gap by providing a test for redundancy and enabling a controlled reduction of the discretization size within specified limits. Rough set theory techniques are used as the basic tools in this method. Exemplary results of discretization analyses for some known real-life data sets are presented for illustration.
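A minimal sketch of the redundancy test idea, simplified to a single attribute and assuming the fully discretized data are consistent: a cut is redundant if removing it never merges values of different classes into one interval. The data and cut values are illustrative; the paper's method operates on whole attribute sets via rough set techniques.

```python
# A minimal single-attribute sketch of a cut redundancy test; the values,
# labels and cuts are hypothetical.
import bisect

def discretize(value, cuts):
    """Map a continuous value to the index of its interval."""
    return bisect.bisect_right(cuts, value)

def cut_is_redundant(values, labels, cuts, cut_index):
    """A cut is redundant if dropping it never puts two objects of
    different classes into the same interval."""
    reduced = cuts[:cut_index] + cuts[cut_index + 1:]
    seen = {}  # interval code -> class label observed there
    for value, label in zip(values, labels):
        code = discretize(value, reduced)
        if code in seen and seen[code] != label:
            return False  # objects of different classes collide
        seen[code] = label
    return True

if __name__ == "__main__":
    values = [1.0, 2.0, 3.0, 4.0]
    labels = ["a", "a", "b", "b"]
    cuts   = [1.5, 2.5, 3.5]
    for i, cut in enumerate(cuts):
        # Only the middle cut separates the two classes, so the outer
        # two are reported as redundant.
        print(cut, cut_is_redundant(values, labels, cuts, i))
```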
Electronic Notes in Theoretical Computer Science | 2003
Krzysztof Dembczyński; Roman Pindur; Robert Susmaga
Rough set theory is often applied to the task of classification and prediction, in which objects are assigned to some pre-defined decision classes. When the classes are preference-ordered, the process of classification is referred to as sorting. To deal with the specificity of sorting problems, an extension of the Classic Rough Sets Approach, called the Dominance-based Rough Sets Approach, was introduced. The final result of the analysis is a set of decision rules induced from what is called rough approximations of decision classes. The main role of the induced decision rules is to discover regularities in the analyzed data set, but the same rules, when combined with a particular classification method, may also be used to classify/sort new objects (i.e. to assign the objects to appropriate classes). There exist many different rule induction strategies, including induction of an exhaustive set of rules. This strategy produces the most comprehensive knowledge base on the analyzed data set, but it requires a considerable amount of computing time, as the complexity of the process is exponential. In this paper we present a shortcut that allows classifying new objects without generating the rules. The presented approach bears some resemblance to the idea of lazy learning.
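A minimal sketch of the lazy-learning shortcut, under the same gain-type assumptions as in the toy examples above: the plausible class interval for a new object is bounded directly by dominance comparisons with the training objects, with no rules generated. The exact assignment policy of the paper may differ; this only conveys the idea.

```python
# A minimal sketch of rule-free (lazy) sorting via dominance comparisons;
# the toy data and the interval-based policy are illustrative assumptions.

def dominates(x, y, criteria):
    return all(x[c] >= y[c] for c in criteria)

def classify_lazily(new_obj, training, criteria, decision):
    """Bound the class of new_obj from below by the objects it dominates
    and from above by the objects dominating it."""
    dominated  = [o[decision] for o in training if dominates(new_obj, o, criteria)]
    dominating = [o[decision] for o in training if dominates(o, new_obj, criteria)]
    low  = max(dominated,  default=None)  # best class among dominated objects
    high = min(dominating, default=None)  # worst class among dominating objects
    return low, high                      # interval of plausible classes

if __name__ == "__main__":
    training = [
        {"price": 1, "quality": 1, "class": 1},
        {"price": 2, "quality": 2, "class": 2},
        {"price": 3, "quality": 3, "class": 3},
    ]
    new_obj = {"price": 2, "quality": 3}
    print(classify_lazily(new_obj, training, ["price", "quality"], "class"))  # (2, 3)
```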
Information Sciences | 2014
Robert Susmaga
The idea of the reduct, as defined in the Classic Rough Sets Approach (CRSA), has proven to be inspiring enough to get into closely related theories, including the Dominance-based Rough Sets Approach (DRSA). The procedure of reduction is generally similar to that of Feature Selection, but narrower, as it is the descriptive, rather than the predictive, aspect of data exploration that constitutes its principal goal. CRSA reducts are thus defined as minimal subsets of attributes that retain a sufficiently high quality of object description. Developed within CRSA, the reducts have given rise to the generalized notion of CRSA constructs, which have turned out to be superior to reducts in numerous practical experiments with real-life data sets. The generalization process is continued in this paper, in which a definition of constructs in the context of DRSA is introduced. The definition, fully analogous to that of CRSA constructs, differs only in that it is context-based in DRSA, while context-free in CRSA. Consequently, the presented DRSA constructs are expected to have properties analogous to those of CRSA constructs, including superiority to DRSA reducts in experiments with real-life data sets.
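A minimal sketch of the classic (CRSA) construct condition that the paper generalizes to DRSA: a construct must discern every pair of objects from different classes and retain some similarity within every pair from the same class. The toy table is hypothetical; the DRSA variant replaces these equality tests with dominance-oriented ones.

```python
# A minimal sketch of the CRSA construct condition; the toy decision table
# is a hypothetical illustration.

def is_construct(subset, objects, decision):
    """Check discernibility across classes and similarity within classes
    for every pair of objects."""
    for i, x in enumerate(objects):
        for y in objects[i + 1:]:
            if x[decision] != y[decision]:
                # discernibility: some attribute in the subset must differ
                if not any(x[a] != y[a] for a in subset):
                    return False
            else:
                # similarity: some attribute in the subset must agree
                if not any(x[a] == y[a] for a in subset):
                    return False
    return True

if __name__ == "__main__":
    table = [
        {"a": 1, "b": 0, "class": "x"},
        {"a": 1, "b": 1, "class": "x"},
        {"a": 0, "b": 1, "class": "y"},
    ]
    print(is_construct({"b"}, table, "class"))  # False: the two x-objects differ on b
    print(is_construct({"a"}, table, "class"))  # True: discerns across, agrees within
```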
INFOR | 2000
Matti Flinkman; Wojtek Michalowski; S. Nilsson; Roman Słowiński; Robert Susmaga; Szymon Wilk
This paper attempts to identify attributes that are considered essential for the development of sustainable forest management practices in the Siberian forests. This goal is accomplished through an analysis of net primary production of phytomass (NPP), which is used to classify the Siberian ecoregions into compact and cohesive NPP performance classes. Rough Sets (RS) analysis is used as a data mining methodology for the evaluation of the Siberian forest database. In order to interpret relationships between various forest characteristics, so-called interesting rules are generated on the basis of a reduced problem description.
Lecture Notes in Computer Science | 1998
Robert Susmaga
The paper addresses the problem of reduct generation, one of the key issues in rough set theory. A considerable speed-up of computations may be achieved by decomposing the original task into subtasks and executing these as parallel processes. This paper presents an effective method of such a decomposition. The presented algorithm is an adaptation of the reduct generation algorithm based on the notion of the discernibility matrix. The practical behaviour of the parallel algorithm is illustrated with a computational experiment conducted on a real-life data set.
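A minimal sketch of the discernibility matrix the algorithm is built on: each cell lists the attributes that discern a pair of objects from different classes, and reducts are exactly the minimal attribute subsets intersecting every non-empty cell. The toy table is hypothetical.

```python
# A minimal sketch of discernibility-matrix-based reduct checking; the toy
# decision table is a hypothetical illustration.

def discernibility_cells(objects, attrs, decision):
    """For each pair of objects with different decisions, record the set
    of attributes on which the two objects differ."""
    cells = []
    for i, x in enumerate(objects):
        for y in objects[i + 1:]:
            if x[decision] != y[decision]:
                cell = {a for a in attrs if x[a] != y[a]}
                if cell:
                    cells.append(cell)
    return cells

def preserves_discernibility(candidate, cells):
    """A candidate subset works iff it intersects every cell; minimal
    such subsets are the reducts."""
    return all(cell & candidate for cell in cells)

if __name__ == "__main__":
    table = [
        {"a": 1, "b": 0, "c": 0, "d": "x"},
        {"a": 0, "b": 1, "c": 0, "d": "y"},
        {"a": 0, "b": 0, "c": 1, "d": "y"},
    ]
    cells = discernibility_cells(table, ["a", "b", "c"], "d")
    print(cells)                                   # [{'a','b'}, {'a','c'}]
    print(preserves_discernibility({"a"}, cells))  # True, and {'a'} is minimal
```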
International Symposium on Methodologies for Intelligent Systems | 1999
Robert Susmaga
The paper addresses the problem of computing reducts in decision tables where attributes are assigned costs. Computing reducts has been an interesting issue, as the reducts may be successfully applied in further analyses of the decision table. In systems where attributes are assigned costs, the problem of reduct generation may be reformulated as that of finding reducts satisfying some additional constraints, in particular reducts of minimal attribute cost. The constraints allow external preferences to be incorporated into the system and, additionally, simplify the problem of interpreting the obtained results, since the number of reducts of minimal cost, as opposed to the number of all existing reducts, is usually very small. This paper introduces a new algorithm for generating all reducts of minimal cost, called minimal cost reducts or cheapest reducts. The practical behaviour of this algorithm has been tested in numerous experiments with real-life data sets, the results of which are reported here.
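A minimal brute-force sketch of cheapest-reduct search over discernibility cells, assuming strictly positive costs (so any minimum-cost hitting set is inclusion-minimal, i.e. a reduct). The paper's algorithm is substantially more refined; the toy cells and costs below are made up.

```python
# A minimal brute-force sketch of cheapest-reduct search; exponential in the
# number of attributes, unlike the paper's algorithm. Costs are assumed > 0.
from itertools import combinations

def cheapest_reducts(cells, attrs, cost):
    """Return all minimum-cost attribute subsets that intersect every
    discernibility cell."""
    best_cost, best = None, []
    for r in range(1, len(attrs) + 1):
        for combo in combinations(attrs, r):
            cand = set(combo)
            if all(cell & cand for cell in cells):  # hits every cell
                c = sum(cost[a] for a in combo)
                if best_cost is None or c < best_cost:
                    best_cost, best = c, [cand]
                elif c == best_cost:
                    best.append(cand)
    return best

if __name__ == "__main__":
    cells = [{"a", "b"}, {"b", "c"}]     # toy discernibility cells
    cost  = {"a": 5, "b": 4, "c": 1}
    print(cheapest_reducts(cells, ["a", "b", "c"], cost))  # [{'b'}], cost 4
```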
Lecture Notes in Computer Science | 2004
Robert Susmaga
The paper addresses the problem of parallel computing in reduct/construct generation. The reducts are subsets of attributes that may be successfully applied in information/decision table analysis. Constructs, defined in a similar way, represent a notion that is a kind of generalization of the reduct. They ensure both discernibility between pairs of objects belonging to different classes (in which they follow the reducts) and similarity between pairs of objects belonging to the same class (which is not the case with reducts). Unfortunately, exhaustive sets of minimal constructs, similarly to sets of minimal reducts, are NP-hard to generate. To speed up the computations, the original task is decomposed into multiple subtasks that are executed in parallel. The paper presents a so-called constrained tree-like model of parallelizing this task and illustrates the practical behaviour of the algorithm in a computational experiment.
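A minimal sketch of the decomposition idea: the search over candidate subsets is split into disjoint branches (here, each subset is handled by the branch of its first attribute) that can be evaluated as parallel processes. The paper's constrained tree-like model prunes far more aggressively and keeps only minimal subsets; the toy cells below are assumptions.

```python
# A minimal sketch of splitting a subset search into parallel branches;
# the toy discernibility cells are hypothetical, and this naive split keeps
# all hitting sets, not only the minimal ones.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

CELLS = [{"a", "b"}, {"b", "c"}, {"a", "c"}]   # toy discernibility cells
ATTRS = ["a", "b", "c"]

def branch(first):
    """One independent branch: subsets whose alphabetically first member
    is `first`, checked against all cells. Branches are disjoint."""
    rest = [a for a in ATTRS if a > first]
    found = []
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            cand = {first, *combo}
            if all(cell & cand for cell in CELLS):
                found.append(sorted(cand))
    return found

if __name__ == "__main__":
    # Each branch runs in its own process; results are merged afterwards.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(branch, ATTRS):
            print(result)
```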