Publications


Featured research published by Igor Kononenko.


European Conference on Machine Learning | 1994

Estimating attributes: Analysis and extensions of RELIEF

Igor Kononenko

In the context of machine learning from examples, this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. The original RELIEF can deal with discrete and continuous attributes but is limited to two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial data sets and one well-known real-world problem.
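The core RELIEF update analysed in the paper can be sketched as follows; this is a minimal two-class formulation of the Kira and Rendell algorithm, not the paper's own code, and it assumes numeric attributes scaled to [0, 1]:

```python
import random

def relief(X, y, n_samples=None, seed=0):
    """Two-class RELIEF attribute estimation: a minimal sketch of the
    Kira & Rendell algorithm that the paper analyses, not the paper's
    own code. X holds numeric feature vectors scaled to [0, 1]."""
    rng = random.Random(seed)
    m = n_samples or len(X)
    n_attrs = len(X[0])
    w = [0.0] * n_attrs

    def dist(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    for _ in range(m):
        i = rng.randrange(len(X))
        xi, yi = X[i], y[i]
        # nearest hit = closest example of the same class,
        # nearest miss = closest example of the other class
        hit = min((X[j] for j in range(len(X)) if j != i and y[j] == yi),
                  key=lambda z: dist(xi, z))
        miss = min((X[j] for j in range(len(X)) if y[j] != yi),
                   key=lambda z: dist(xi, z))
        for a in range(n_attrs):
            # reward attributes that differ across classes, penalize
            # those that differ between same-class neighbours
            w[a] += (abs(xi[a] - miss[a]) - abs(xi[a] - hit[a])) / m
    return w
```

On a toy set where attribute 0 separates the classes and attribute 1 is constant, the estimator assigns a large weight to attribute 0 and zero to attribute 1.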


Machine Learning | 2003

Theoretical and Empirical Analysis of ReliefF and RReliefF

Marko Robnik-Šikonja; Igor Kononenko

Relief algorithms are general and successful attribute estimators. They are able to detect conditional dependencies between attributes and provide a unified view on attribute estimation in regression and classification. In addition, their quality estimates have a natural interpretation. While they have commonly been viewed as feature subset selection methods applied in a preprocessing step before a model is learned, they have actually been used successfully in a variety of settings, e.g., to select splits or to guide constructive induction in the building phase of decision or regression tree learning, as an attribute weighting method, and in inductive logic programming. A broad spectrum of successful uses calls for an especially careful investigation of the various features Relief algorithms have. In this paper we theoretically and empirically investigate and discuss how and why they work, their theoretical and practical properties, their parameters, what kind of dependencies they detect, how they scale up to large numbers of examples and features, how to sample data for them, how robust they are to noise, how irrelevant and redundant attributes influence their output, and how different metrics influence them.


Artificial Intelligence in Medicine | 2001

Machine learning for medical diagnosis: history, state of the art and perspective

Igor Kononenko

The paper provides an overview of the development of intelligent data analysis in medicine from a machine learning perspective: a historical view, a state-of-the-art view, and a view on some future trends in this subfield of applied artificial intelligence. The paper is not intended to provide a comprehensive overview but rather describes some subareas and directions which from my personal point of view seem to be important for applying machine learning in medical diagnosis. In the historical overview, I emphasize the naive Bayesian classifier, neural networks and decision trees. I present a comparison of some state-of-the-art systems, representatives from each branch of machine learning, when applied to several medical diagnostic tasks. The future trends are illustrated by two case studies. The first describes a recently developed method for dealing with reliability of decisions of classifiers, which seems to be promising for intelligent data analysis in medicine. The second describes an approach to using machine learning in order to verify some unexplained phenomena from complementary medicine, which is not (yet) approved by the orthodox medical community but could in the future play an important role in overall medical diagnosis and treatment.


Applied Intelligence | 1997

Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF

Igor Kononenko; Edvard Šimec; Marko Robnik-Šikonja

Current inductive machine learning algorithms typically use greedy search with limited lookahead. This prevents them from detecting significant conditional dependencies between the attributes that describe training objects. Instead of myopic impurity functions and lookahead, we propose to use RELIEFF, an extension of RELIEF developed by Kira and Rendell [10, 11], for heuristic guidance of inductive learning algorithms. We have reimplemented Assistant, a system for top-down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real-world problems, and the results are compared with some other well-known machine learning algorithms. Excellent results on artificial data sets and two real-world problems show the advantage of the presented approach to inductive learning.


Applied Artificial Intelligence | 1993

INDUCTIVE AND BAYESIAN LEARNING IN MEDICAL DIAGNOSIS

Igor Kononenko

Although successful in medical diagnostic problems, inductive learning systems have not been widely accepted in medical practice. In this paper two different approaches to machine learning in medical applications are compared: Assistant, a system for inductive learning of decision trees, and the naive Bayesian classifier. Both methodologies were tested on four medical diagnostic problems: localization of primary tumor, prognosis of breast cancer recurrence, diagnosis of thyroid diseases, and rheumatology. The accuracy of diagnostic knowledge automatically acquired from stored data records is compared, and the interpretation of the knowledge and the explanation ability of the classification process of each system are discussed. Surprisingly, the naive Bayesian classifier is superior to Assistant in classification accuracy and explanation ability, while the interpretation of the acquired knowledge seems to be equally valuable. In addition, two extensions to the naive Bayesian classifier are briefly described.
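The naive Bayesian classifier compared above can be sketched for discrete attributes as follows; this is a generic textbook formulation with Laplace smoothing, not the implementation used in the study:

```python
import math
from collections import Counter

def train_nb(X, y):
    """Count-based training for a naive Bayesian classifier over
    discrete attributes (a generic sketch, not the paper's system)."""
    classes = Counter(y)
    n_attrs = len(X[0])
    # counts[c][a][v] = number of class-c examples with value v at attribute a
    counts = {c: [Counter() for _ in range(n_attrs)] for c in classes}
    values = [set() for _ in range(n_attrs)]
    for xi, yi in zip(X, y):
        for a, v in enumerate(xi):
            counts[yi][a][v] += 1
            values[a].add(v)
    return classes, counts, values, len(y)

def predict_nb(model, x):
    """Pick the class maximising log P(c) + sum_a log P(x_a | c),
    with Laplace-smoothed conditional probabilities."""
    classes, counts, values, n = model
    best, best_score = None, float("-inf")
    for c, nc in classes.items():
        score = math.log(nc / n)
        for a, v in enumerate(x):
            score += math.log((counts[c][a][v] + 1) / (nc + len(values[a])))
        if score > best_score:
            best, best_score = c, score
    return best
```

The independence assumption makes each attribute's multiplicative factor explicit, which is what gives the classifier the explanation ability the abstract highlights.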


Artificial Intelligence in Medicine | 1999

Analysing and improving the diagnosis of ischaemic heart disease with machine learning.

Matjaž Kukar; Igor Kononenko; Ciril Grošelj; Katarina Kralj; Jure Fettich

Ischaemic heart disease is one of the world's most important causes of mortality, so improvements and rationalization of diagnostic procedures would be very useful. The four diagnostic levels consist of evaluation of signs and symptoms of the disease and an ECG (electrocardiogram) at rest, sequential ECG testing during controlled exercise, myocardial scintigraphy, and finally coronary angiography (which is considered to be the reference method). Machine learning methods may enable objective interpretation of all available results for the same patient and in this way may increase the diagnostic accuracy of each step. We conducted many experiments with various learning algorithms and achieved a performance level comparable to that of clinicians. We also extended the algorithms to deal with non-uniform misclassification costs in order to perform ROC analysis and control the trade-off between sensitivity and specificity. The ROC analysis shows significant improvements in sensitivity and specificity compared to the performance of the clinicians. We further compare the predictive power of standard tests with that of machine learning techniques and show that it can be significantly improved in this way.
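The sensitivity/specificity trade-off that the ROC analysis controls can be illustrated with a simple threshold sweep over classifier scores; this is a generic sketch, not the study's code:

```python
def roc_points(scores, labels):
    """For each decision threshold, compute (threshold, sensitivity,
    specificity) from predicted disease probabilities. labels: 1 = diseased.
    A generic illustration of the ROC trade-off, not the study's code."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        # predict "diseased" whenever the score reaches the threshold
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 1)
        tn = sum(1 for s, l in zip(scores, labels) if s < t and l == 0)
        points.append((t, tp / pos, tn / neg))
    return points
```

Lowering the threshold raises sensitivity at the cost of specificity, which is exactly the trade-off that non-uniform misclassification costs let the learner control.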


Future Generation Computer Systems | 1997

Attribute selection for modelling

Igor Kononenko; Se June Hong

Modelling a target attribute by the other attributes in the data is perhaps the most traditional data mining task. When there are many attributes in the data, one needs to know which attributes are relevant for modelling the target, either as a group or as the single feature that is most appropriate to select within the model construction process in progress. There are many approaches to selecting attributes in machine learning. We examine various important concepts and approaches that are used for this purpose and contrast their strengths. Discretization of numeric attributes is also discussed, since its use is prevalent in many modelling techniques.


IEEE Transactions on Knowledge and Data Engineering | 2008

Explaining Classifications for Individual Instances

Marko Robnik-Šikonja; Igor Kononenko

We present a method for explaining predictions for individual instances. The presented approach is general and can be used with all classification models that output probabilities. It is based on the decomposition of a model's predictions into individual contributions of each attribute. Our method works for so-called black-box models such as support vector machines, neural networks, and nearest neighbor algorithms, as well as for ensemble methods such as boosting and random forests. We demonstrate that the generated explanations closely follow the learned models and present a visualization technique that shows the utility of our approach and enables the comparison of different prediction methods.
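One way to realize such a decomposition is one-at-a-time marginalisation: compare the model's output on the full instance with its average output when one attribute is replaced by each of its possible values. The sketch below follows that spirit in simplified form (the paper's exact formulation may differ); `predict_proba` and `attr_values` are caller-supplied assumptions, not the paper's API:

```python
def attribute_contributions(predict_proba, x, attr_values):
    """Decompose one prediction into per-attribute contributions:
    contribution of attribute a = p(class | x) minus the average of
    p(class | x with attribute a replaced by each possible value).
    x is a tuple; attr_values[a] lists attribute a's possible values."""
    p_full = predict_proba(x)
    contributions = []
    for a, vals in enumerate(attr_values):
        # marginalise attribute a by averaging over its possible values
        p_without = sum(predict_proba(x[:a] + (v,) + x[a + 1:])
                        for v in vals) / len(vals)
        contributions.append(p_full - p_without)
    return contributions
```

Because only `predict_proba` is queried, the decomposition applies equally to black-box models, which matches the generality claimed in the abstract.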


Data & Knowledge Engineering | 2008

Comparison of approaches for estimating reliability of individual regression predictions

Zoran Bosnić; Igor Kononenko

The paper compares different approaches for estimating the reliability of individual predictions in regression. We compare the sensitivity-based reliability estimates developed in our previous work with four approaches found in the literature: variance of bagged models, local cross-validation, density estimation, and local modeling. By combining pairs of individual estimates, we compose a combined estimate that performs better than the individual estimates. We tested the estimates by running data from 28 domains through eight regression models: regression trees, linear regression, neural networks, bagging, support vector machines, locally weighted regression, random forests, and generalized additive models. The results demonstrate the potential of the sensitivity-based estimate, as well as of local modeling of the prediction error with regression trees. Among the tested approaches, the best average performance was achieved by the bagging variance approach, which performed best with neural networks, bagging, and locally weighted regression.
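The "variance of bagged models" estimate compared above can be sketched generically: train several models on bootstrap resamples and take the variance of their predictions for the example in question. The `train`/`predict` callables here are placeholders of my own, not the paper's API:

```python
import random

def bagging_variance(train, predict, X, y, x_new, n_models=20, seed=0):
    """Reliability estimate for one regression prediction: the variance
    of predictions made by models trained on bootstrap resamples of the
    learning set. A generic sketch of the bagging variance approach."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        # bootstrap resample: draw len(X) indices with replacement
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        model = train([X[i] for i in idx], [y[i] for i in idx])
        preds.append(predict(model, x_new))
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)
```

A high variance means the prediction is unstable under resampling of the learning set, which is the signal this family of reliability estimates exploits.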


Applied Intelligence | 2008

Estimation of individual prediction reliability using the local sensitivity analysis

Zoran Bosnić; Igor Kononenko

For a given prediction model, some predictions may be reliable while others may be unreliable. The average accuracy of the system cannot provide a reliability estimate for a single particular prediction. The measure of individual prediction reliability can be important information in risk-sensitive applications of machine learning (e.g. medicine, engineering, business). We define empirical measures for estimating prediction accuracy in regression. The presented measures are based on sensitivity analysis of regression models. They estimate the reliability of each individual regression prediction, in contrast to the average prediction reliability of the given regression model. We study the empirical sensitivity properties of five regression models (linear regression, locally weighted regression, regression trees, neural networks, and support vector machines) and the relation between the reliability measures and the distribution of learning examples with prediction errors for all five regression models. We show that the suggested methodology is appropriate only for three of the studied models: regression trees, neural networks, and support vector machines, and we test the proposed estimates with these three models. The results of our experiments on 48 data sets indicate significant correlations of the proposed measures with the prediction error.
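The sensitivity-based idea can be sketched generically: perturb the model's own prediction for the new example, relearn with the perturbed example added to the learning set, and use the spread of the resulting predictions as the reliability score. This is a simplified sketch under that reading of the abstract; `train`, `predict`, and `eps` are my assumptions, not the paper's API:

```python
def sensitivity_reliability(train, predict, X, y, x_new, eps=0.05):
    """Local sensitivity-based reliability score for one regression
    prediction: relearn the model after adding x_new labeled with the
    initial prediction perturbed by +/- eps of the target range, and
    report how far apart the two relearned predictions are."""
    y0 = predict(train(X, y), x_new)
    spread = (max(y) - min(y)) or 1.0    # guard against a constant target
    preds = []
    for sign in (+1, -1):
        # add x_new with a slightly perturbed label and relearn
        model = train(X + [x_new], y + [y0 + sign * eps * spread])
        preds.append(predict(model, x_new))
    return abs(preds[0] - preds[1])
```

A model whose prediction for x_new barely moves under this perturbation is locally stable there, so a small score indicates a more reliable prediction.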

Collaboration


Dive into Igor Kononenko's collaboration.

Top Co-Authors

Ivan Bratko (University of Ljubljana)
Matjaz Kukar (University of Ljubljana)
Luka Šajn (University of Ljubljana)
Darko Pevec (University of Ljubljana)