Igor T. Podolak
Jagiellonian University
Publications
Featured research published by Igor T. Podolak.
Expert Systems With Applications | 2008
Igor T. Podolak
In this paper a novel complex classifier architecture is proposed. The architecture has a hierarchical tree-like structure with simple artificial neural networks (ANNs) at each node. The actual structure for a given problem is not preset but is built up during training. The training algorithm's ability to build the tree-like structure is based on the assumption that when a weak classifier (i.e., one that classifies only slightly better than a random classifier) is trained and examples from any two output classes are frequently mismatched, then those classes must carry similar information and constitute a sub-problem. After each ANN has been trained, its incorrect classifications are analyzed and new sub-problems are formed. Consequently, new ANNs are built for each of these sub-problems and form another layer of the hierarchical classifier. An important feature of the hierarchical classifier proposed in this work is that the problem partition forms overlapping sub-problems. Thus, classification follows not just a single path from the root, but may fork, enhancing the power of the classification. It is shown how to combine the results of these individual classifiers.
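The confusion-driven sub-problem construction described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, the thresholding rule, and the toy confusion matrix are all assumptions.

```python
def confusion_subproblems(conf, threshold=0.1):
    """Group classes that the root classifier frequently confuses into
    (possibly overlapping) sub-problems.  conf[i][j] is the fraction of
    class-i examples labelled as class j; the threshold is an assumption."""
    K = len(conf)
    subproblems = []
    for i in range(K):
        group = sorted({i} | {j for j in range(K)
                              if j != i and conf[i][j] >= threshold})
        if len(group) > 1 and group not in subproblems:
            subproblems.append(group)
    return subproblems

# toy confusion matrix: classes 0/1 and 2/3 are often mistaken for each other
conf = [
    [0.60, 0.30, 0.05, 0.05],
    [0.25, 0.65, 0.05, 0.05],
    [0.05, 0.05, 0.55, 0.35],
    [0.05, 0.05, 0.30, 0.60],
]
print(confusion_subproblems(conf))  # [[0, 1], [2, 3]]
```

In general the groups may share classes, which is what makes the resulting sub-problems overlap.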
Parallel Processing and Applied Mathematics | 2005
Igor T. Podolak; Sławomir Biel; Marcin Bobrowski
Artificial Intelligence (AI) methods are used to build classifiers that give different levels of accuracy and solution explication. The intent of this paper is to provide a way of building a hierarchical classifier composed of several artificial neural networks (ANNs) organised in a tree-like fashion. This method of construction allows the original problem to be partitioned into several sub-problems which can be solved with simpler ANNs and built more quickly than a single ANN. Because the extracted sub-problems become largely independent of one another, the solutions for the individual sub-problems can be realised in parallel. It is observed that incorrect classifications are not random and can therefore be used to find clusters defining sub-problems.
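Because the extracted sub-problems are largely independent, their classifiers can be trained concurrently. A minimal sketch of that idea, with a dummy stand-in for ANN training (all names and the example sub-problems are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def train_subclassifier(subproblem):
    """Dummy stand-in for training one small ANN on one sub-problem;
    it just reports which output classes the model would separate."""
    return "model for classes %s" % sorted(subproblem)

# hypothetical overlapping sub-problems extracted from the original task
subproblems = [{0, 1}, {1, 2}, {3, 4}]

# the sub-problems are (nearly) independent, so they can be handled in parallel;
# Executor.map preserves the input order of its results
with ThreadPoolExecutor() as pool:
    models = list(pool.map(train_subclassifier, subproblems))
print(models)
```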
Expert Systems With Applications | 2011
Stanisław Brodowski; Igor T. Podolak
In this paper, a new machine learning solution for function approximation is presented. It combines many simple and relatively inaccurate estimators to achieve high accuracy. It incrementally creates a hierarchical, tree-like structure, adapting it to the specific problem being solved. As most variants use the errors of already constructed parts to direct further construction, it may be viewed as an example of boosting in the general sense. The influence of a particular constituent estimator on the whole solution's output is not constant, but depends on the feature vector being evaluated. Provided in this paper are: the general form of the meta-algorithm, a few specific, detailed solutions, a theoretical basis, and experimental results on one-week power load prediction for a country-wide power distribution grid and on simple test datasets.
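The input-dependent influence of constituent estimators can be illustrated with a toy example. The gating function below is hand-picked, whereas the paper derives such influence from construction errors; every name and constant here is a simplified assumption.

```python
import math

# two deliberately crude constituent estimators of f(x) = sin(x)
def est_low(x):
    return x          # a linear guess, good only near x = 0
def est_high(x):
    return 1.0        # a constant guess, good only near x = pi / 2

def gate(x):
    """Input-dependent weight of est_low; 1 - gate(x) goes to est_high.
    A hand-picked smooth gate centred at x = 0.8 (an assumption)."""
    return 1.0 / (1.0 + math.exp(4.0 * (x - 0.8)))

def combined(x):
    """Blend whose weights depend on the feature vector being evaluated."""
    w = gate(x)
    return w * est_low(x) + (1.0 - w) * est_high(x)
```

On a grid over [0, pi] the blend is more accurate than either constituent alone, since each estimator dominates only where it performs well.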
Computational Intelligence | 2013
Igor T. Podolak; Adam Roman
This paper proposes a classification framework based on simple classifiers organized in a tree-like structure. It is observed that simple classifiers, even though they have a high error rate, find similarities among classes in the problem domain. The authors propose to exploit this property by recognizing classes that are mistaken for one another and constructing overlapping subproblems. The subproblems are then solved by other classifiers, which can be very simple, giving as a result a hierarchical classifier (HC). It is shown that the HC, together with the proposed training algorithm and evaluation methods, performs well as a classification framework. It is also proven that such a construct gives better accuracy than the root classifier it is built upon.
Pattern Analysis and Applications | 2011
Igor T. Podolak; Adam Roman
This paper describes in full detail a model of a hierarchical classifier (HC). The original classification problem is broken down into several subproblems and a weak classifier is built for each of them. Subproblems consist of examples from a subset of the whole set of output classes. It is essential for this classification framework that the generated subproblems overlap, i.e. some individual classes may belong to more than one subproblem. This approach reduces the overall risk. Individual classifiers built for the subproblems are weak, i.e. their accuracy is only a little better than that of a random classifier. The notion of weakness for a multiclass model is extended in this paper, in a way more intuitive than approaches proposed so far. In the HC model described, after a single node is trained, its problem is split into several subproblems using a clustering algorithm, which is responsible for selecting classes that are classified similarly. The main focus of this paper is finding the most appropriate clustering method. Several algorithms are defined and compared. Finally, we compare a whole HC with other machine learning approaches.
International Conference on Adaptive and Natural Computing Algorithms | 2007
Igor T. Podolak
A system for rule extraction from a complex hierarchical classifier is proposed in this paper. There are several methods for rule extraction from trained artificial neural networks (ANNs), but these methods do not scale well, i.e. results are satisfactory only for small problems. For complicated problems hundreds of rules are produced, which are hard to manage. In this paper a hierarchical classifier with a tree-like structure and simple ANNs at its nodes is presented, which splits the original problem into several overlapping sub-problems. Node classifiers are all weak (i.e. with accuracy only slightly better than random), and errors are corrected at lower levels. Single sub-problems consist of examples that were hard to separate. Such an architecture is able to classify better than single-network models. At the same time, if-then rules are extracted which only answer which sub-problem a given example belongs to. Such rules, by introducing hierarchy, are simpler and easier to modify by hand, also giving better insight into the original classifier's behaviour.
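The idea of rules that answer only sub-problem membership can be sketched with a one-feature threshold ("stump") rule search. This is an illustrative stand-in, not the paper's extraction method; the function name and toy data are assumptions.

```python
def extract_stump_rule(examples, member):
    """Search one-feature threshold rules and return the if-then rule
    that best predicts sub-problem membership (member: list of bools)."""
    best = None
    for f in range(len(examples[0])):
        for t in sorted({e[f] for e in examples}):
            for geq in (True, False):
                pred = [(e[f] >= t) == geq for e in examples]
                acc = sum(p == m for p, m in zip(pred, member)) / len(member)
                if best is None or acc > best[0]:
                    best = (acc, f, t, geq)
    acc, f, t, geq = best
    op = ">=" if geq else "<"
    return "IF x[%d] %s %s THEN route to this sub-problem" % (f, op, t), acc

# toy data: a single feature cleanly separates membership in the sub-problem
examples = [[1.0], [2.0], [8.0], [9.0]]
member = [False, False, True, True]
rule, acc = extract_stump_rule(examples, member)
print(rule, acc)  # IF x[0] >= 8.0 THEN route to this sub-problem 1.0
```

Rules of this form decide only the routing to a sub-problem; the class decision is left to the classifiers below.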
International Conference on Adaptive and Natural Computing Algorithms | 2009
Igor T. Podolak; Kamil Bartocha
A novel architecture for a hierarchical classifier (HC) is defined. The objective is to combine several weak classifiers to form a strong one, but a different approach from the known ones, e.g. AdaBoost, is taken: the training set is split on the basis of the previous classifier's misclassifications between output classes. The problem is split into overlapping subproblems, each classifying into a different set of output classes. This allows for a reduction in task size, as each sub-problem is smaller in the sense of having fewer output classes, and for higher accuracy. The HC thus takes a different approach from boosting. The groups of output classes overlap, so examples from a single class may end up in several subproblems. It is shown that this approach ensures that such a hierarchical classifier achieves better accuracy. A notion of generalized accuracy is introduced. Sub-problem generation is simple, as it is performed with a clustering algorithm operating on classifier outputs. We propose to use the Growing Neural Gas [1] algorithm because of its good adaptiveness.
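Sub-problem generation by clustering classifier outputs can be sketched as follows. The paper uses Growing Neural Gas; as a simpler stand-in, this sketch clusters each class's mean output vector with a tiny k-means, and obtains overlap by assigning a class to every sufficiently close centroid. The tolerance factor alpha, the init scheme, and the toy vectors are assumptions.

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Tiny k-means with a deterministic init (the first k points)."""
    cents = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: dist2(p, cents[j]))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:
                cents[j] = [sum(c) / len(g) for c in zip(*g)]
    return cents

def overlapping_groups(points, cents, alpha=2.0):
    """Assign each class to every centroid within alpha times the
    distance to its nearest centroid -> overlapping sub-problems."""
    out = []
    for p in points:
        ds = [dist2(p, c) ** 0.5 for c in cents]
        dmin = min(ds)
        out.append([j for j, d in enumerate(ds) if d <= alpha * dmin])
    return out

# each point is one class's mean response vector from the root classifier;
# the last class sits between the two clusters and lands in both groups
class_vectors = [(0.0, 0.0), (0.2, 0.0), (2.0, 2.0), (1.0, 1.0)]
cents = kmeans(class_vectors, k=2)
print(overlapping_groups(class_vectors, cents))  # [[0], [0], [1], [0, 1]]
```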
Archive | 2009
Igor T. Podolak; Adam Roman
The notion of a weak classifier, as one which is “a little better” than a random one, was first introduced for 2-class problems [1]. Extensions to K-class problems are known. All are based on relative activations for correct and incorrect classes and do not take into account the final choice of the answer. A new understanding and definition are proposed here, taking into account only the final choice of classification that must be made. It is shown that for a K-class classifier to be called “weak”, it needs to achieve a risk value lower than 1/K. This approach considers only the probability of the final answer choice, not the actual activations.
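The distinction the abstract draws, judging a classifier only by its final choice, can be illustrated with a small helper (a hypothetical name, not from the paper) that computes 0-1 risk from the arg-max decision and ignores activation strengths:

```python
def final_choice_risk(activations, labels):
    """Empirical 0-1 risk based only on the classifier's final choice
    (the arg-max class), ignoring how strong the activations were."""
    errors = sum(max(range(len(a)), key=a.__getitem__) != y
                 for a, y in zip(activations, labels))
    return errors / len(labels)

labels = [0, 1, 2]
# very confident and barely-decided activations...
confident = [[0.90, 0.05, 0.05], [0.10, 0.80, 0.10], [0.20, 0.10, 0.70]]
hesitant  = [[0.40, 0.30, 0.30], [0.30, 0.40, 0.30], [0.30, 0.30, 0.40]]
# ...yield the same risk, because only the final choice is inspected
print(final_choice_risk(confident, labels),
      final_choice_risk(hesitant, labels))  # 0.0 0.0
```

Activation-based weakness measures would score these two classifiers very differently; the final-choice definition does not.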
Computer Recognition Systems | 2013
Igor T. Podolak; Stanisław Jastrzębski
We present a method for osteoporosis detection using graph representations obtained by running the Growing Neural Gas (GNG) machine learning algorithm on X-ray bone images. The GNG-induced graph, which depends on bone density, represents well the features that may be in part responsible for the illness. The graph connects dense bone regions well, making it possible to subdivide the whole image into regions. It is interesting to note that these regions in bones, whose extraction might make it easier to detect the illness, correspond to some graph-theoretic notions. In the paper, some invariants based on these graph-theoretic notions are proposed which, if used with a machine classification method, e.g. a neural network, make it possible to help recognize images of bones of ill persons. This graph-theoretic approach is novel in this area. It helps to separate the solution from the actual physical properties. The paper gives definitions of the proposed indices and shows a classification based on them as input attributes.
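Graph invariants of the kind mentioned, computed from a GNG-style graph, might look like the sketch below. The particular invariants chosen here (component count, average degree, edge density) are illustrative assumptions, not the paper's indices.

```python
def graph_invariants(n, edges):
    """Compute a few simple invariants of an undirected graph given as an
    edge list over vertices 0..n-1; such numbers could serve as input
    attributes for a classifier."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # count connected components with an iterative DFS
    seen, components = set(), 0
    for start in range(n):
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u] - seen)
    avg_degree = sum(len(nb) for nb in adj.values()) / n
    density = len(edges) / (n * (n - 1) / 2)
    return {"components": components, "avg_degree": avg_degree,
            "density": density}

# toy "bone graph": two dense regions (triangles) that are not connected
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
print(graph_invariants(6, edges))
```

A fixed-length vector of such invariants is what a downstream classifier, e.g. a neural network, would consume.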
Schedae Informaticae | 2011
Adam Roman; Igor T. Podolak; Agnieszka Deszyńska
This paper shows a new combinatorial problem which emerged from studies on an artificial intelligence classification model, a hierarchical classifier. We introduce the notion of a proper clustering and show how to count the number of such clusterings in the special case when 3 clusters are allowed. An algorithm t