Matjaž Kukar
University of Ljubljana
Publications
Featured research published by Matjaž Kukar.
Artificial Intelligence in Medicine | 1999
Matjaž Kukar; Igor Kononenko; Ciril Grošelj; Katarina Kralj; Jure Fettich
Ischaemic heart disease is one of the world's most important causes of mortality, so improvements and rationalization of diagnostic procedures would be very useful. The four diagnostic levels consist of evaluation of signs and symptoms of the disease and ECG (electrocardiogram) at rest, sequential ECG testing during controlled exercise, myocardial scintigraphy, and finally coronary angiography (which is considered to be the reference method). Machine learning methods may enable objective interpretation of all available results for the same patient and in this way may increase the diagnostic accuracy of each step. We conducted many experiments with various learning algorithms and achieved performance comparable to that of clinicians. We also extended the algorithms to deal with non-uniform misclassification costs in order to perform ROC analysis and control the trade-off between sensitivity and specificity. The ROC analysis shows significant improvements in sensitivity and specificity compared to the performance of the clinicians. We further compare the predictive power of standard tests with that of machine learning techniques and show that it can be significantly improved in this way.
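The trade-off described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `roc_points` simply sweeps a decision threshold over predicted probabilities, and `cost_threshold` assumes the standard Bayes-optimal threshold for asymmetric misclassification costs.

```python
def sensitivity_specificity(y_true, y_prob, threshold):
    """Confusion-matrix rates at a given decision threshold (1 = disease)."""
    tp = sum(1 for t, p in zip(y_true, y_prob) if t == 1 and p >= threshold)
    fn = sum(1 for t, p in zip(y_true, y_prob) if t == 1 and p < threshold)
    tn = sum(1 for t, p in zip(y_true, y_prob) if t == 0 and p < threshold)
    fp = sum(1 for t, p in zip(y_true, y_prob) if t == 0 and p >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def roc_points(y_true, y_prob):
    """Sweep thresholds to trace (1 - specificity, sensitivity) ROC points."""
    return [(1 - spec, sens)
            for thr in sorted(set(y_prob))
            for sens, spec in [sensitivity_specificity(y_true, y_prob, thr)]]

def cost_threshold(cost_fp, cost_fn):
    """Bayes-optimal probability threshold under asymmetric costs:
    predict 'disease' when P(disease) >= cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)
```

For example, when a missed diagnosis (false negative) costs four times a false alarm, the threshold drops from 0.5 to 0.2, shifting the operating point toward higher sensitivity.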
Artificial Intelligence in Medicine | 2003
Matjaž Kukar
In the past decades, machine learning (ML) tools have been successfully used in several medical diagnostic problems. While they often significantly outperform expert physicians (in terms of diagnostic accuracy, sensitivity, and specificity), they are mostly not used in practice. One reason for this is that it is difficult to obtain an unbiased estimate of the reliability of a diagnosis. We discuss how the reliability of diagnoses is assessed in medical decision-making and propose a general framework for reliability estimation in machine learning, based on transductive inference. We compare our approach with the usual (machine learning) probabilistic approach, as well as with the classical stepwise diagnostic process, where the reliability of a diagnosis is represented by its post-test probability. The proposed transductive approach is evaluated on several medical datasets from the University of California, Irvine (UCI) repository, as well as on a practical problem of clinical diagnosis of coronary artery disease (CAD). In all cases, significant improvements over existing techniques are achieved.
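The transductive idea can be sketched as follows, assuming a toy one-dimensional k-NN probability estimator (the paper's framework is more general and model-independent): classify a new case, re-train with the case labelled by its own prediction, and treat the stability of the predicted probability as a reliability score.

```python
def knn_proba(train, labels, x, k=3):
    """Class-1 probability from the k nearest neighbours (toy 1-D features)."""
    nearest = sorted(range(len(train)), key=lambda i: abs(train[i] - x))[:k]
    return sum(labels[i] for i in nearest) / k

def transductive_reliability(train, labels, x, k=3):
    """Compare inductive and transductive class probabilities for case x.

    The case is added to the training set labelled with its own predicted
    class; a small change in the estimated probability suggests a stable,
    hence more reliable, prediction. Returns a score in [0, 1]."""
    p_ind = knn_proba(train, labels, x, k)
    y_hat = 1 if p_ind >= 0.5 else 0
    p_trans = knn_proba(train + [x], labels + [y_hat], x, k)
    return 1 - abs(p_ind - p_trans)  # 1.0 = fully stable
```

The function and feature representation here are illustrative assumptions; in the clinical setting each case would be a patient's attribute vector and the base model an arbitrary classifier.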
Artificial Intelligence in Medicine | 1996
Matjaž Kukar; Igor Kononenko; Toma Silvester
We compare the performance of several machine learning algorithms on the problem of prognostics of femoral neck fracture recovery: the k-nearest neighbours algorithm, the semi-naive Bayesian classifier, backpropagation with weight elimination for learning multilayered neural networks, the LFC (lookahead feature construction) algorithm, and the Assistant-I and Assistant-R algorithms for top-down induction of decision trees, using information gain and ReliefF as search heuristics, respectively. We compare the prognostic accuracy and the explanation ability of the different classifiers. Among them, the semi-naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving prediction problems and show that the combined classifier improves both performance and explanation ability.
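Combining the decisions of several classifiers can be sketched as weighted averaging of their per-class probabilities; this is one common combination scheme, shown for illustration, and not necessarily the exact rule used in the paper.

```python
def combine_probabilities(prob_lists, weights=None):
    """Weighted average of per-class probability vectors from several
    classifiers (uniform weights by default)."""
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    n_classes = len(prob_lists[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_lists))
            for c in range(n_classes)]

def combined_decision(prob_lists):
    """Predict the class with the highest averaged probability."""
    avg = combine_probabilities(prob_lists)
    return max(range(len(avg)), key=avg.__getitem__)
```

With three classifiers voting [0.7, 0.3], [0.4, 0.6] and [0.8, 0.2] for two classes, the averaged vector favours class 0 even though one classifier disagrees.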
conference on computer as a tool | 2003
Zoran Bosnić; Igor Kononenko; Marko Robnik-Šikonja; Matjaž Kukar
In the machine learning community there are many efforts to improve the overall reliability of predictors, measured as the error on a test set. In contrast, very little research has been done concerning the prediction reliability of a single answer. This article describes an algorithm that can be used to evaluate prediction reliability in regression. The basic idea of the algorithm is the construction of transductive predictors. Using them, the algorithm infers the error on a single new case from the differences between the initial and transductive predictions. An implementation of the algorithm with regression trees managed to significantly reduce the relative mean squared error on the majority of the tested domains.
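The regression variant can be sketched with a toy one-dimensional k-NN regressor in place of the regression trees used in the article; the perturbation size `eps` and the aggregation of the two differences are illustrative assumptions.

```python
def knn_regress(xs, ys, x, k=3):
    """Mean target value of the k nearest neighbours (toy 1-D features)."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return sum(ys[i] for i in nearest) / k

def sensitivity_estimate(xs, ys, x, k=3, eps=0.5):
    """Reliability estimate for the prediction on x: add x to the training
    set labelled with a perturbed version of its own prediction, re-fit,
    and measure how much the prediction moves. Larger values suggest a
    less reliable (less stable) prediction."""
    y0 = knn_regress(xs, ys, x, k)
    y_plus = knn_regress(xs + [x], ys + [y0 + eps], x, k)
    y_minus = knn_regress(xs + [x], ys + [y0 - eps], x, k)
    return abs(y_plus - y0) + abs(y0 - y_minus)
```

In practice the estimate would be computed per test case and correlated with the actual squared error, which is how such estimators are typically validated.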
artificial intelligence in medicine in europe | 2003
Matjaž Kukar
Most statistical, machine learning and data mining algorithms assume that the data they use is a random sample drawn from a stationary distribution. Unfortunately, many of the databases available for mining today violate this assumption. They were gathered over months or years, and the underlying processes generating them may have changed during this time, sometimes radically (this is also known as concept drift). In clinical institutions, where patients' data are regularly stored in central computer databases, similar situations may occur. Expert physicians may easily, even unconsciously, adapt to the changed environment, whereas machine learning and data mining tools may fail due to their underlying assumptions. It is therefore important to detect and adapt to the changed situation. In the paper we review several techniques for dealing with concept drift in machine learning and data mining frameworks and evaluate their use in clinical studies with a case study of coronary artery disease diagnostics.
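A simple family of drift-detection techniques compares a model's recent error rate against its long-run reference rate. The sketch below is one such sliding-window heuristic, with hypothetical window size and threshold; it is an illustration of the idea, not a method from the paper.

```python
from collections import deque

def drift_monitor(errors, window=50, threshold=1.5):
    """Flag concept drift when the error rate over the most recent `window`
    predictions exceeds the overall reference error rate by a factor of
    `threshold`. `errors` is a stream of 0/1 misclassification indicators;
    returns the indices at which the alarm condition held."""
    recent = deque(maxlen=window)
    reference_errs, reference_n = 0, 0
    alarms = []
    for i, err in enumerate(errors):
        recent.append(err)
        reference_errs += err
        reference_n += 1
        if reference_n >= 2 * window:  # wait until the reference is stable
            ref_rate = reference_errs / reference_n
            rec_rate = sum(recent) / len(recent)
            if ref_rate > 0 and rec_rate > threshold * ref_rate:
                alarms.append(i)
    return alarms
```

On a stream whose error rate jumps from 20% to 100%, the monitor raises alarms shortly after the jump; on a stationary stream it stays silent.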
Computer Methods and Programs in Biomedicine | 2005
Luka Šajn; Matjaž Kukar; Igor Kononenko; Metka Milčinski
Bone scintigraphy, or whole-body bone scan, has been one of the most common diagnostic procedures in nuclear medicine for the last 25 years. Pathological conditions, technically poor image resolution and artefacts necessitate that algorithms use sufficient background knowledge of anatomy and spatial relations of bones in order to work satisfactorily. A robust knowledge-based methodology for detecting reference points of the main skeletal regions, applied simultaneously to anterior and posterior whole-body bone scintigrams, is presented. Expert knowledge is represented as a set of parameterized rules which are used to support standard image-processing algorithms. Our study includes 467 consecutive, non-selected scintigrams, which is, to our knowledge, the largest number of images ever used in such studies. Automatic analysis of whole-body bone scans using our segmentation algorithm gives more accurate and reliable results than previous studies. The obtained reference points are used for automatic segmentation of the skeleton, which is applied to automatic (machine learning) or manual (expert physicians) diagnostics. Preliminary experiments show that an expert system based on machine learning closely mimics the results of expert physicians.
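The shape a "parameterized rule" might take can be illustrated with a toy anatomical constraint: a candidate reference point for a skeletal region must lie inside a band whose bounds are rule parameters. The region name and fractions below are entirely hypothetical, chosen only to show the pattern of expert knowledge supporting image processing.

```python
def make_region_rule(top_frac, bottom_frac):
    """Build a parameterized rule: a candidate reference point (pixel row y)
    for a skeletal region must lie in a vertical band expressed as
    fractions of the image height. The fractions are the rule's
    expert-tuned parameters (hypothetical values here)."""
    def rule(y, image_height):
        return top_frac * image_height <= y <= bottom_frac * image_height
    return rule

# Illustrative only: suppose the head region is expected in the top 15%
# of the scintigram, so candidates outside that band are rejected.
head_rule = make_region_rule(0.0, 0.15)
```

A full system would combine many such rules (relative positions of spine, pelvis, limbs) to filter the candidates proposed by standard image-processing operators.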
Computer Methods and Programs in Biomedicine | 2011
Luka Šajn; Matjaž Kukar
The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since the evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level, which represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment, and may assist in decisions on the cost-effectiveness of tests.
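The pre-test/post-test probability update mentioned above is standard Bayesian reasoning with likelihood ratios, and can be written down directly (the numbers in the example are arbitrary, not results from the study):

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Update the disease probability after a test result via Bayes' rule.

    Converts the pre-test probability to odds, multiplies by the test's
    likelihood ratio (LR+ for a positive result, LR- for a negative one),
    and converts back to a probability."""
    if positive:
        lr = sensitivity / (1 - specificity)        # LR+
    else:
        lr = (1 - sensitivity) / specificity        # LR-
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)
```

For instance, a test with 90% sensitivity and 80% specificity moves a 50% pre-test probability to about 82% after a positive result, and to about 11% after a negative one. Chaining such updates across the sequential diagnostic levels is what the stepwise diagnostic process formalizes.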
information technology interfaces | 2006
Rok Rupnik; Matjaž Kukar; Marko Bajec; Marjan Krisper
Modern organizations use several types of decision support systems to facilitate decision-making. In many cases, OLAP-based tools are used in the business area; they enable multiple views on data and thereby a deductive approach to data analysis. Data mining extends the possibilities for decision support by discovering patterns and relationships hidden in data, thus enabling an inductive approach to data analysis. The paper introduces a data mining based decision support system designed for business users, enabling them to use data mining models for decision support with only a basic level of knowledge of data mining.
computer based medical systems | 2005
Matjaž Kukar; Ciril Grošelj
In the past decades, machine learning tools have been successfully used in several medical diagnostic problems. While they often significantly outperform expert physicians (in terms of diagnostic accuracy, sensitivity, and specificity), they are mostly not used in practice. One reason for this is that it is difficult to obtain an unbiased estimate of the reliability of a diagnosis. We discuss how the reliability of diagnoses is assessed in medical decision making and propose a general framework for reliability estimation in machine learning, based on transductive inference. We compare our approach with the usual (machine learning) probabilistic approach, as well as with the classical stepwise diagnostic process, where the reliability of a diagnosis is represented by its post-test probability. The proposed transductive approach is evaluated on several medical data sets from the UCI (University of California, Irvine) repository, as well as on a practical problem of clinical diagnosis of coronary artery disease. In all cases, significant improvements over existing techniques are achieved.
artificial intelligence in medicine in europe | 2005
Luka Šajn; Matjaž Kukar; Igor Kononenko; Metka Milčinski
Bone scintigraphy, or whole-body bone scan, has been one of the most common diagnostic procedures in nuclear medicine for the last 25 years. Pathological conditions, technically poor quality images and artifacts necessitate that algorithms use sufficient background knowledge of anatomy and spatial relations of bones in order to work satisfactorily. We present a robust knowledge-based methodology for detecting reference points of the main skeletal regions that simultaneously processes anterior and posterior whole-body bone scintigrams. Expert knowledge is represented as a set of parameterized rules which are used to support standard image processing algorithms. Our study includes 467 consecutive, non-selected scintigrams, which is, to our knowledge, the largest number of images ever used in such studies. Automatic analysis of whole-body bone scans using our knowledge-based segmentation algorithm gives more accurate and reliable results than previous studies. The obtained reference points are used for automatic segmentation of the skeleton, which is used for automatic (machine learning) or manual (expert physicians) diagnostics. Preliminary experiments show that an expert system based on machine learning closely mimics the results of expert physicians.