Norbert Jankowski
Nicolaus Copernicus University in Toruń
Publications
Featured research published by Norbert Jankowski.
international conference on artificial intelligence and soft computing | 2004
Norbert Jankowski; Marek Grochowski
Several methods have been proposed to reduce the number of instances (vectors) in a learning set. Some of them remove only bad (noisy) vectors, while others try to remove as many instances as possible without significantly degrading the learning value of the reduced dataset. Several strategies for shrinking training sets are compared here using different neural and machine learning classification algorithms. In part II (the accompanying paper) results on benchmark databases are presented.
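As a concrete illustration, below is a minimal sketch of one classical algorithm from this family, Hart's Condensed Nearest Neighbor, which keeps only the instances needed for a 1-NN rule to classify the remaining training vectors correctly. It illustrates the general idea of instance selection; it is not claimed to be one of the exact algorithms compared in the paper.

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's Condensed Nearest Neighbor: keep a subset of the training
    set such that every remaining vector is still classified correctly
    by a 1-NN rule using only the kept subset."""
    keep = [0]  # start from an arbitrary instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            # classify instance i with 1-NN over the kept subset
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            nearest = keep[int(np.argmin(d))]
            if y[nearest] != y[i]:  # misclassified -> must be kept
                keep.append(i)
                changed = True
    return np.array(keep)
```

The returned indices define the reduced training set; the ratio `len(keep) / len(X)` is the compression such methods are judged by, alongside the accuracy of classifiers trained on the reduced set.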
international conference on artificial intelligence and soft computing | 2004
Marek Grochowski; Norbert Jankowski
This paper is a continuation of the accompanying paper with the same main title. The first paper reviewed instance selection algorithms; here the results of an empirical comparison are presented, together with comments. Several tests were performed, mostly on benchmark data sets from the UCI Machine Learning Repository. The instance selection algorithms were tested with neural networks and machine learning algorithms.
international conference on artificial neural networks | 1997
Norbert Jankowski; Visakan Kadirkamanathan
Incremental Net Pro (IncNet Pro), with local learning and statistically controlled growing and pruning of the network, is introduced. The architecture of the net is based on RBF networks. The Extended Kalman Filter algorithm and a new fast version of it are proposed and used as the learning algorithm. IncNet Pro is similar to the Resource Allocating Network described by Platt in its main idea of expanding the network. A novel statistical criterion is used to determine the growing point. Bi-radial functions are used instead of radial basis functions to obtain a more flexible network.
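A minimal sketch of a bi-radial function built as a product of sigmoid pairs, one pair per dimension. Parameter conventions vary between papers, so the exact parameterization below is an assumption; the point is that centre, width and slope are independently adjustable in each dimension, unlike in a Gaussian RBF.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def biradial(x, t, b, s):
    """Bi-radial function: a product over dimensions of sigmoid pairs.
    Each pair forms a soft 'window' centred at t[i] with width ~2*b[i]
    and slope s[i]; x, t, b, s are arrays of equal length."""
    left = sigmoid(s * (x - t + b))         # rising edge of the window
    right = 1.0 - sigmoid(s * (x - t - b))  # falling edge of the window
    return np.prod(left * right)
```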
Archive | 2013
Norbert Jankowski; Włodzisław Duch; Krzysztof Grąbczewski
The Computational Intelligence (CI) community has developed hundreds of algorithms for intelligent data analysis, but many hard problems in computer vision, signal processing, or text and multimedia understanding, problems that require deep learning techniques, remain open. Modern data mining packages contain numerous modules for data acquisition, pre-processing, feature selection and construction, instance selection, classification, association and approximation methods, optimization techniques, pattern discovery, clustering, visualization and post-processing. A large data mining package allows for billions of ways in which these modules can be combined. No human expert can claim to explore and understand all possibilities in the knowledge discovery process. This is where algorithms that learn how to learn come to the rescue. Operating in the space of all available data transformations and optimization techniques, these algorithms use meta-knowledge about learning processes, automatically extracted from the experience of solving diverse problems. Inferences about transformations useful in different contexts help to construct learning algorithms that can uncover various aspects of knowledge hidden in the data. Meta-learning shifts the focus of the whole CI field from individual learning algorithms to the higher level of learning how to learn. This book defines and reveals new theoretical and practical trends in meta-learning, inspiring readers to further research in this exciting field.
international conference hybrid intelligent systems | 2005
Krzysztof Grąbczewski; Norbert Jankowski
Classification techniques applicable to real-life data are more and more often complex hybrid systems comprising feature selection. To augment their efficiency, we propose two feature selection algorithms that take advantage of a decision tree criterion. Large computational experiments have been performed to test the capabilities of the two methods and to compare their results with other techniques with similar goals.
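A generic sketch of feature selection driven by a decision tree criterion, using impurity-based importances from scikit-learn as a stand-in; the specific tree criterion and selection algorithms proposed in the paper are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_feature_ranking(X, y, k):
    """Rank features by the impurity reduction they achieve inside a
    single decision tree, then keep the top k -- a generic stand-in
    for selection guided by a decision tree split criterion."""
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    order = np.argsort(tree.feature_importances_)[::-1]
    return order[:k]  # indices of the k highest-ranked features
```

In a hybrid system of the kind described, the selected feature subset would then feed a downstream classifier such as an SVM or a neural network.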
international symposium on neural networks | 2012
Włodzisław Duch; Norbert Jankowski; Tomasz Maszczyk
Classification methods with linear computational complexity O(nd) in the number of samples n and their dimensionality d often give results that are better, or at least statistically not significantly worse, than those of slower algorithms. This is demonstrated here for many benchmark datasets downloaded from the UCI Machine Learning Repository. The results provided in this paper should be used as a reference for estimating the usefulness of new learning algorithms: higher-complexity methods should provide significantly better results to justify their use.
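For illustration, the nearest-centroid classifier is one example of an O(nd) method in the spirit the paper describes: training is a single pass over the data. It is a sketch of the complexity class only, not claimed to be among the exact methods benchmarked.

```python
import numpy as np

class NearestCentroid:
    """O(n*d) reference classifier: training computes one mean vector
    per class in a single pass; prediction compares each sample to
    every class centroid."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # squared Euclidean distance from each sample to each centroid
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```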
computational intelligence and data mining | 2007
Krzysztof Grąbczewski; Norbert Jankowski
There are many data mining systems derived from machine learning, neural networks, statistics and other fields. Most of them are dedicated to particular algorithms or applications. Unfortunately, their architectures are still too naive to provide a satisfactory background for advanced meta-learning problems. In order to efficiently perform sophisticated meta-level analysis, we need a very versatile, easily expandable system (in many independent aspects) that uniformly deals with different kinds of models, including models with very complex structures (not only committees but also much more hierarchical models). Meta-level techniques must provide mechanisms facilitating optimization of computation time and memory consumption. This article presents the requirements, and their motivations, for an advanced data mining system that is efficient not only in model construction for given data, but also in meta-learning. Some particular solutions to significant problems are presented. The newly proposed advanced meta-learning architecture has been implemented in our new data analysis system.
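A toy sketch of the "uniform treatment of hierarchical models" requirement using the composite pattern: a compound model exposes the same interface as an elementary one, so committees and deeper hierarchies compose freely. All names here are hypothetical illustrations, not the authors' actual system.

```python
from dataclasses import dataclass, field
from typing import List

class Machine:
    """Uniform interface: transformers, classifiers and compound models
    all look the same to the meta-level."""
    def run(self, data):
        raise NotImplementedError

@dataclass
class Scheme(Machine):
    """A machine whose children are themselves machines (pipelines,
    committees, nested hierarchies) -- the composite pattern."""
    children: List[Machine] = field(default_factory=list)

    def run(self, data):
        for child in self.children:
            data = child.run(data)  # each stage consumes the previous output
        return data
```

Because a `Scheme` is itself a `Machine`, a meta-learner can manipulate arbitrarily deep model structures through one interface, which is the architectural point the abstract argues for.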
international conference on artificial neural networks | 2003
Krzysztof Grąbczewski; Norbert Jankowski
Most Computational Intelligence models (e.g. neural networks or distance-based methods) are designed to operate on continuous data and provide no tools to adapt their parameters to data described by symbolic values. Two new conversion methods that replace symbolic attributes with continuous ones are presented and compared to two commonly known ones. The advantages of this continuousification are illustrated with the results obtained with neural network, SVM and kNN systems on the converted data.
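One commonly known continuousification scheme replaces each symbolic value with an estimated class-conditional probability. A minimal sketch follows; it is offered as an example of the conversion idea, not necessarily one of the two new methods proposed in the paper.

```python
from collections import Counter, defaultdict

def class_probability_encoding(values, labels, target_class):
    """Map each symbolic value v to P(target_class | v), estimated by
    counting over the training data, so distance-based and gradient
    methods can operate on the attribute."""
    totals = Counter(values)
    hits = defaultdict(int)
    for v, c in zip(values, labels):
        if c == target_class:
            hits[v] += 1
    return {v: hits[v] / totals[v] for v in totals}

# Usage: build the mapping on training data, then apply it.
mapping = class_probability_encoding(
    ["red", "blue", "red", "green"], [1, 0, 1, 0], target_class=1)
encoded = [mapping[v] for v in ["red", "green"]]  # [1.0, 0.0]
```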
international symposium on neural networks | 2000
Włodzisław Duch; Norbert Jankowski
The choice of transfer functions may strongly influence the complexity and performance of neural networks used in classification and approximation tasks. A taxonomy of activation and output functions is proposed, from which many transfer functions can be generated. Several less-known types of transfer functions and new combinations of activation/output functions are described: functions parameterized to change from a localized to a delocalized type, functions with activation based on non-Euclidean distance measures, and bicentral functions formed from pairs of sigmoids.
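A minimal illustration of the activation/output decomposition: a transfer function is an output function composed with an activation function, and new units arise from new combinations. The taxonomy in the paper covers many more cases than the two classic ones shown here.

```python
import numpy as np

# Activation functions: how the input is aggregated.
inner = lambda x, w: np.dot(w, x)           # inner-product activation
dist = lambda x, t: np.linalg.norm(x - t)   # distance-based activation

# Output functions: how the activation value is squashed.
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))  # delocalized output
gaussian = lambda a: np.exp(-a ** 2)          # localized output

# A transfer function = output o activation; mixing the two axes
# yields the familiar units and many less common ones.
mlp_neuron = lambda x, w: sigmoid(inner(x, w))  # classic MLP unit
rbf_neuron = lambda x, t: gaussian(dist(x, t))  # classic RBF unit
```

Swapping the pairings (e.g. a sigmoid applied to a distance-based activation) is exactly the kind of less common combination the taxonomy makes systematic.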
Meta-Learning in Computational Intelligence | 2011
Norbert Jankowski; Krzysztof Grąbczewski
There are hundreds of algorithms within data mining. Some of them are used to transform data, some to build classifiers, others for prediction, etc. Nobody knows all these algorithms well, and nobody can know all the arcana of their behavior in all possible applications. How can we find the best combination of transformations and final machine that solves a given problem?
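A brute-force baseline makes the question concrete: enumerate transformation plus final-machine combinations and score each by cross-validation. The sketch below uses scikit-learn as a stand-in; meta-learning, as the chapter discusses, aims to search this space far more intelligently than exhaustive enumeration allows.

```python
from itertools import product
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
transforms = [StandardScaler(), MinMaxScaler()]
machines = [KNeighborsClassifier(), DecisionTreeClassifier()]

# Exhaustive search over transformation + final machine combinations,
# scored by 5-fold cross-validation.
best = max(
    (make_pipeline(t, m) for t, m in product(transforms, machines)),
    key=lambda p: cross_val_score(p, X, y, cv=5).mean(),
)
print(best)
```

With realistic numbers of modules and hyperparameters, the number of such combinations explodes, which is why meta-knowledge about which combinations tend to work is needed at all.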