Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zoran Bosnić is active.

Publication


Featured research published by Zoran Bosnić.


Computers & Education | 2013

Exploring the relation between learning style models and preferred multimedia types

Uroš Ocepek; Zoran Bosnić; Irena Nančovska Šerbec; Jože Rugelj

There are many adaptive learning systems that adapt learning materials to student properties, preferences, and activities. This study is focused on designing such a learning system by relating combinations of different learning styles to preferred types of multimedia materials. We explore a decision model aimed at proposing learning material of an appropriate multimedia type. The study includes 272 student participants. The resulting decision model shows that students prefer well-structured learning texts with color discrimination, and that the hemispheric learning style model is the most important criterion in deciding student preferences for different multimedia learning materials. To provide a more accurate and reliable model for recommending different multimedia types, more learning style models must be combined. Kolb's classification and the VAK classification allow us to determine whether students prefer an active role in the learning process and which multimedia type they prefer.
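
As an illustration only, the following sketch shows the general shape of such a decision model; it is not the authors' model. It assumes hypothetical integer-coded learning-style attributes (hemispheric dominance, Kolb type, VAK preference) and synthetic preference labels, and fits a small decision tree with scikit-learn.

```python
# Illustrative sketch only (not the authors' model): a decision tree that maps
# hypothetical learning-style attributes to a preferred multimedia type.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 272  # number of student participants reported in the study

# Hypothetical integer encoding of learning-style attributes.
X = np.column_stack([
    rng.integers(0, 2, n),  # hemispheric dominance (left/right)
    rng.integers(0, 4, n),  # Kolb learning type
    rng.integers(0, 3, n),  # VAK preference
])
# Hypothetical target: preferred multimedia type
# (0 = text, 1 = picture, 2 = video, 3 = animation).
y = rng.integers(0, 4, n)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(model, feature_names=["hemispheric", "kolb", "vak"]))
```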


Data & Knowledge Engineering | 2008

Comparison of approaches for estimating reliability of individual regression predictions

Zoran Bosnić; Igor Kononenko

The paper compares different approaches to estimating the reliability of individual predictions in regression. We compare the sensitivity-based reliability estimates developed in our previous work with four approaches found in the literature: variance of bagged models, local cross-validation, density estimation, and local modeling. By combining pairs of individual estimates, we compose a combined estimate that performs better than the individual estimates. We tested the estimates on data from 28 domains using eight regression models: regression trees, linear regression, neural networks, bagging, support vector machines, locally weighted regression, random forests, and generalized additive models. The results demonstrate the potential of the sensitivity-based estimates, as well as of local modeling of the prediction error with regression trees. Among the tested approaches, the best average performance was achieved by the bagging variance approach, which performed best with neural networks, bagging, and locally weighted regression.
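
As a concrete illustration of one of the compared approaches, the sketch below computes a bagging-variance style reliability score, the variance of predictions across bootstrap models, for a few individual predictions. It uses toy data and regression trees as the base model, both our own assumptions; it is not the paper's implementation.

```python
# Sketch of a bagging-variance reliability estimate (our own toy setup,
# not the paper's code): the variance of predictions across bootstrap
# models serves as a per-example reliability score.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
X_new = X[:5]  # examples whose individual predictions we want to assess

preds = []
for b in range(30):  # 30 bootstrap replicates
    Xb, yb = resample(X, y, random_state=b)
    preds.append(DecisionTreeRegressor(random_state=b).fit(Xb, yb).predict(X_new))
preds = np.array(preds)

prediction = preds.mean(axis=0)   # bagged prediction
reliability = preds.var(axis=0)   # higher variance suggests a less reliable prediction
for p, r in zip(prediction, reliability):
    print(f"prediction = {p:9.2f}   bagging variance = {r:9.2f}")
```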


Applied Intelligence | 2008

Estimation of individual prediction reliability using the local sensitivity analysis

Zoran Bosnić; Igor Kononenko

For a given prediction model, some predictions may be reliable while others may be unreliable. The average accuracy of the system cannot provide a reliability estimate for a single particular prediction. A measure of individual prediction reliability can be important information in risk-sensitive applications of machine learning (e.g. medicine, engineering, business). We define empirical measures for the estimation of prediction accuracy in regression. The presented measures are based on the sensitivity analysis of regression models: they estimate the reliability of each individual regression prediction, in contrast to the average prediction reliability of the given regression model. We study the empirical sensitivity properties of five regression models (linear regression, locally weighted regression, regression trees, neural networks, and support vector machines) and the relation between the reliability measures and the distribution of learning examples with prediction errors for all five regression models. We show that the suggested methodology is appropriate for only three of the studied models, regression trees, neural networks, and support vector machines, and test the proposed estimates with these three models. The results of our experiments on 48 data sets indicate significant correlations of the proposed measures with the prediction error.
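
The following sketch illustrates the core sensitivity idea under simplifying assumptions; it is not the paper's SAvar/SAbias estimators. The new example is added to the learning set with its predicted label shifted by a small amount in each direction, the model is retrained, and the spread of the resulting predictions is read as a reliability signal.

```python
# Sketch of the local sensitivity idea (simplified; not the paper's exact
# SAvar/SAbias measures): retrain with the new example labelled y_hat +/- eps
# and read the spread of the perturbed predictions as a reliability signal.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=6, noise=5.0, random_state=1)
X_train, y_train, x_new = X[1:], y[1:], X[:1]   # hold out one case as the "new" example

base = DecisionTreeRegressor(min_samples_leaf=10, random_state=0).fit(X_train, y_train)
y_hat = base.predict(x_new)[0]

eps = 0.05 * (y_train.max() - y_train.min())    # perturbation relative to the label range
perturbed_preds = []
for delta in (+eps, -eps):
    X_aug = np.vstack([X_train, x_new])
    y_aug = np.append(y_train, y_hat + delta)   # new example with a perturbed label
    model = DecisionTreeRegressor(min_samples_leaf=10, random_state=0).fit(X_aug, y_aug)
    perturbed_preds.append(model.predict(x_new)[0])

spread = max(perturbed_preds) - min(perturbed_preds)  # wide spread -> less reliable
print(f"initial prediction: {y_hat:.2f}   sensitivity spread: {spread:.2f}")
```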


Intelligent Data Analysis | 2009

An overview of advances in reliability estimation of individual predictions in machine learning

Zoran Bosnić; Igor Kononenko

In machine learning, the estimation of a model's predictive accuracy is most commonly approached by analyzing its average accuracy. In general, predictive models do not provide accuracy estimates for their individual predictions; reliability estimates for individual predictions require the analysis of various model and instance properties. In this paper we give an overview of approaches to estimating individual prediction reliability. We start by summarizing three research fields that provided ideas and motivation for our work: (a) approaches to perturbing learning data, (b) the usage of unlabeled data in supervised learning, and (c) sensitivity analysis. The main part of the paper presents two classes of reliability estimation approaches and summarizes the relevant terminology, which is often used in this and related research fields.


Knowledge and Information Systems | 2010

Explanation and reliability of prediction models: the case of breast cancer recurrence

Erik Štrumbelj; Zoran Bosnić; Igor Kononenko; B. Zakotnik; Cvetka Grasic Kuhar

In this paper, we describe the first practical application of two methods, which bridge the gap between the non-expert user and machine learning models. The first is a method for explaining classifiers’ predictions, which provides the user with additional information about the decision-making process of a classifier. The second is a reliability estimation methodology for regression predictions, which helps the users to decide to what extent to trust a particular prediction. Both methods are successfully applied to a novel breast cancer recurrence prediction data set and the results are evaluated by expert oncologists.
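
The sketch below conveys only the general flavour of instance-level explanation and is not Štrumbelj and Kononenko's actual method: a feature's contribution is approximated by how much the predicted probability changes when that feature is replaced by values sampled from the data. The scikit-learn breast cancer dataset is used purely as a stand-in for the recurrence data set.

```python
# Sketch of instance-level explanation by perturbation (a simplification,
# not Strumbelj & Kononenko's method): a feature's contribution is estimated
# as the change in predicted probability when the feature is marginalized
# by sampling replacement values from the data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()          # stand-in data set, not the recurrence data
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0].copy()
p_ref = model.predict_proba([x])[0, 1]   # probability for the instance being explained

rng = np.random.default_rng(0)
for j in range(3):                       # contributions of the first three features only
    perturbed = []
    for _ in range(50):
        x_p = x.copy()
        x_p[j] = X[rng.integers(len(X)), j]          # replace feature j with a sampled value
        perturbed.append(model.predict_proba([x_p])[0, 1])
    contribution = p_ref - float(np.mean(perturbed))
    print(f"{data.feature_names[j]:>25}: contribution {contribution:+.3f}")
```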


Intelligent Data Analysis | 2013

ROC analysis of classifiers in machine learning: A survey

Matjaž Majnik; Zoran Bosnić

The use of ROC (Receiver Operating Characteristic) analysis as a tool for evaluating the performance of classification models in machine learning has been increasing in the last decade. Among the most notable advances in this area are the extension of two-class ROC analysis to the multi-class case, as well as the employment of ROC analysis in cost-sensitive learning; methods now exist that take instance-varying costs into account. The purpose of our paper is to present a survey of this field with the aim of gathering important achievements in one place. In the paper, we present application areas of ROC analysis in machine learning, describe its problems and challenges, and provide a summarized list of alternative approaches to ROC analysis. In addition to the presented theory, we also provide a couple of examples intended to illustrate the described approaches.
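
For reference, the sketch below shows the basic two-class case that the survey builds on: computing an ROC curve and its AUC from classifier scores with scikit-learn. It is an independent illustration, not code from the survey.

```python
# Sketch: a two-class ROC curve and its AUC from classifier scores
# (an independent illustration, not code from the survey).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)   # one (FPR, TPR) point per threshold
print(f"ROC points: {len(fpr)}   AUC = {roc_auc_score(y_te, scores):.3f}")
```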


Conference on Computer as a Tool | 2003

Evaluation of prediction reliability in regression using the transduction principle

Zoran Bosnić; Igor Kononenko; Marko Robnik-Šikonja; Matjaž Kukar

In the machine learning community there are many efforts to improve the overall reliability of predictors, measured as an error on the testing set. In contrast, very little research has been done concerning the prediction reliability of a single answer. This article describes an algorithm that can be used for evaluating prediction reliability in regression. The basic idea of the algorithm is the construction of transductive predictors; using them, the algorithm makes inferences from the differences between initial and transductive predictions to the error on a single new case. The implementation of the algorithm with regression trees managed to significantly reduce the relative mean squared error on the majority of the tested domains.
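
A simplified reading of the transductive idea is sketched below; it is an illustration under our own assumptions, not the article's algorithm. Each test case is temporarily added to the learning set with a perturbed copy of its own prediction, the model is rebuilt, and the shift between the initial and transductive predictions is treated as a reliability signal, here checked against the actual errors.

```python
# Sketch of the transductive reliability signal (simplified assumptions, not
# the article's algorithm): each test case is added to the learning set with a
# perturbed copy of its own prediction, the model is rebuilt, and the shift of
# the prediction is compared against the actual error.
import numpy as np
from scipy.stats import pearsonr
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=250, n_features=5, noise=15.0, random_state=2)
X_train, y_train, X_test, y_test = X[:200], y[:200], X[200:], y[200:]

base = DecisionTreeRegressor(min_samples_leaf=10, random_state=0).fit(X_train, y_train)
initial = base.predict(X_test)
eps = 0.05 * (y_train.max() - y_train.min())

signals = []
for x_new, y_hat in zip(X_test, initial):
    X_aug = np.vstack([X_train, x_new])
    y_aug = np.append(y_train, y_hat + eps)      # transductive copy with a perturbed label
    model = DecisionTreeRegressor(min_samples_leaf=10, random_state=0).fit(X_aug, y_aug)
    signals.append(abs(model.predict([x_new])[0] - y_hat))

errors = np.abs(initial - y_test)
print(f"correlation between the signal and |error|: {pearsonr(signals, errors)[0]:.3f}")
```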


Intelligent Data Analysis | 2010

Empirical evaluation of feature selection methods in classification

Luka Cehovin; Zoran Bosnić

In the paper, we present an empirical evaluation of five feature selection methods: ReliefF, the random forest feature selector, sequential forward selection, sequential backward selection, and the Gini index. Among the evaluated methods, the random forest feature selector has not yet been widely compared to the others. In our evaluation, we test how feature selection can affect (i.e. improve) the accuracy of six different classifiers. The results show that ReliefF and the random forest feature selector enabled the classifiers to achieve the highest increase in classification accuracy on average while reducing the number of unnecessary attributes. These conclusions can advise machine learning users which classifier and feature selection method to use to optimize classification accuracy, which may be important especially in risk-sensitive applications of machine learning (e.g. medicine, business decisions, control applications), as well as for reducing the costs of collecting, processing, and storing unnecessary data.
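
The sketch below reproduces the spirit of such an evaluation under assumptions of our own: synthetic data, a k-nearest-neighbours classifier, and only two of the five selection methods (random forest feature importance and sequential forward selection); ReliefF, backward selection, and the Gini index are omitted.

```python
# Sketch: how feature selection can change a classifier's cross-validated
# accuracy (our own toy setup; only two of the paper's five methods shown).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=0)
clf = KNeighborsClassifier()

baseline = cross_val_score(clf, X, y, cv=5).mean()

# Random forest feature selector: keep the k features with the highest importance.
importances = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_
top_k = np.argsort(importances)[-5:]
rf_score = cross_val_score(clf, X[:, top_k], y, cv=5).mean()

# Sequential forward selection wrapped around the same classifier.
sfs = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward").fit(X, y)
sfs_score = cross_val_score(clf, sfs.transform(X), y, cv=5).mean()

print(f"all features: {baseline:.3f}   RF-selected: {rf_score:.3f}   SFS-selected: {sfs_score:.3f}")
```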


IEEE International Conference on Information Technology and Applications in Biomedicine | 2010

Mining data from hemodynamic simulations for generating prediction and explanation models

Zoran Bosnić; Petar Vračar; Milos Radovic; Goran Devedzic; Nenad Filipovic; Igor Kononenko

One of the most common causes of human death is stroke, which can be caused by carotid bifurcation stenosis. In our work, we aim at proposing a prototype of a medical expert system that could significantly aid medical experts in detecting hemodynamic abnormalities (increased artery wall shear stress). Based on the acquired simulated data, we apply several methodologies for: 1) predicting magnitudes and locations of maximum wall shear stress in the artery, 2) estimating the reliability of the computed predictions, and 3) providing user-friendly explanations of the model's decisions. The obtained results indicate that the evaluated methodologies can provide a useful tool for the given problem domain.


Expert Systems with Applications | 2015

Improving matrix factorization recommendations for examples in cold start

Uroš Ocepek; Jože Rugelj; Zoran Bosnić

Highlights: a novel framework for the imputation of missing values into the ratings matrix; imputation of missing values significantly reduces the matrix factorization prediction error; increased matrix factorization performance in the cold start state.

Recommender systems suggest items of interest to users based on their preferences (i.e. previous ratings). If there are no ratings for a certain user or item, this is known as the cold start problem, which leads to unreliable recommendations. We propose a novel approach for alleviating the cold start problem by imputing missing values into the input matrix. Our approach combines local learning, attribute selection, and value aggregation into a single approach; it was evaluated on three datasets and with four matrix factorization algorithms. The results showed that the imputation of missing values significantly reduces the recommendation error. Two tested methods, denoted 25-FR-ME-? and 10-FR-ME-?, significantly improved the performance of all tested matrix factorization algorithms, without requiring a different recommendation algorithm for users in the cold start state.
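
The sketch below is an assumption-laden illustration of the general idea, not the proposed framework: the cold-start user's missing ratings are imputed from attribute-similar users before an ordinary matrix factorization (a plain truncated SVD here) is fitted to the filled matrix.

```python
# Sketch (illustration only, not the proposed framework): impute a cold-start
# user's missing ratings from attribute-similar users, then factorize the
# filled matrix with a plain truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(20, 10)).astype(float)   # toy user-item ratings (1-5)
R[-1, :] = np.nan                                      # cold-start user: no ratings yet
user_attrs = rng.integers(0, 2, size=(20, 4))          # side attributes (e.g. demographics)

# Imputation: average the ratings of the k attribute-nearest users.
target = user_attrs[-1]
dists = np.abs(user_attrs[:-1] - target).sum(axis=1)
neighbors = np.argsort(dists)[:5]
R_filled = R.copy()
R_filled[-1, :] = R[neighbors, :].mean(axis=0)

# Plain matrix factorization of the imputed matrix via truncated SVD.
U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
k = 3
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print("predicted ratings for the cold-start user:", np.round(R_hat[-1], 2))
```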

Collaboration


Dive into Zoran Bosnić's collaborations.

Top Co-Authors

Kaja Zupanc, University of Ljubljana
Domen Košir, University of Ljubljana
Darko Pevec, University of Ljubljana
Jože Rugelj, University of Ljubljana
Uroš Ocepek, University of Ljubljana