Kevin Jasberg
University of Düsseldorf
Publications
Featured research published by Kevin Jasberg.
Conference on Recommender Systems | 2017
Kevin Jasberg; Sergej Sizov
Recommender systems nowadays have many applications and are of great economic benefit. Hence, it is imperative for success-oriented companies to compare such systems and select the better one for their purposes. To this end, various metrics of predictive accuracy are commonly used, such as the Root Mean Square Error (RMSE), or precision and recall. All these metrics more or less measure how well a recommender system can predict human behaviour. Unfortunately, human behaviour always carries some degree of uncertainty, which makes evaluation difficult, since it is not clear whether a deviation is system-induced or simply originates from the natural variability of human decision making. At this point, some authors have speculated that we may be approaching a Magic Barrier where this variability prevents us from getting much more accurate [12, 13, 24]. In this article, we extend the existing theory of the Magic Barrier [24] into a new probabilistic yet pragmatic model. In particular, we use methods from metrology and physics to develop easy-to-compute quantities that describe the Magic Barrier for different accuracy metrics, and we provide suggestions for common applications. This discussion is substantiated by comprehensive experiments with real users and by large-scale simulations on a high-performance cluster.
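As an illustration of the Magic Barrier argument: if each observed rating equals the user's latent expectation plus zero-mean noise of standard deviation sigma, then even an oracle that predicts the latent expectation exactly cannot push the RMSE below sigma. A minimal simulation of this floor (all parameters are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

n_ratings = 100_000
sigma = 0.8  # assumed standard deviation of human rating noise (illustrative)

# Each user's "true" expected rating on a 1-5 scale.
true_means = rng.uniform(1.0, 5.0, size=n_ratings)

# Observed ratings fluctuate around the true expectation (human uncertainty).
observed = true_means + rng.normal(0.0, sigma, size=n_ratings)

# An oracle recommender that predicts the true expectation exactly
# still cannot push RMSE below the noise level sigma.
oracle_rmse = np.sqrt(np.mean((observed - true_means) ** 2))
print(f"oracle RMSE ~ {oracle_rmse:.3f} (magic barrier ~ sigma = {sigma})")
```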
Web Information Systems Engineering | 2017
Kevin Jasberg; Sergej Sizov
Many data mining approaches aim at modelling and predicting human behaviour. An important quantity of interest is the quality of model-based predictions, e.g. for comparative analysis and for finding a competition winner with the best prediction performance. In real life, human beings make their decisions with considerable uncertainty. Its assessment, and the resulting implications for the statistically sound evaluation of predictive models, are the main focus of this contribution. We identify relevant sources of uncertainty as well as the limited ability to measure it accurately, propose an uncertainty-aware methodology for more sound evaluations of data mining approaches, and discuss its implications for existing quality assessment strategies. Specifically, our approach switches from the common point paradigm to a more appropriate distribution paradigm. The proposed methodology is exemplified in the context of recommender systems and their established metrics of prediction quality. The discussion is substantiated by comprehensive experiments with real users and by large-scale simulations.
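The switch from the point paradigm to the distribution paradigm can be sketched as follows: instead of scoring a model against a single observed rating per item, each response is treated as a random variable, so the evaluation metric itself becomes a distribution. A hypothetical Monte Carlo sketch (the response model and all parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: per-item user responses modelled as N(mu_i, tau_i^2),
# estimated e.g. from repeated ratings; predictions come from some model.
mu = rng.uniform(1.0, 5.0, size=500)        # latent cognitive expectations
tau = rng.uniform(0.2, 1.0, size=500)       # per-response uncertainty
pred = mu + rng.normal(0.0, 0.3, size=500)  # model predictions (illustrative)

# Point paradigm: one observed rating per item -> a single RMSE number.
one_shot = rng.normal(mu, tau)
print("point-paradigm RMSE:", np.sqrt(np.mean((one_shot - pred) ** 2)))

# Distribution paradigm: the rating is a random variable, so the RMSE is too.
samples = rng.normal(mu, tau, size=(2000, 500))  # 2000 possible response vectors
rmse_dist = np.sqrt(np.mean((samples - pred) ** 2, axis=1))
print(f"distribution-paradigm RMSE: mean={rmse_dist.mean():.3f}, "
      f"std={rmse_dist.std():.3f}")
```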
International Conference on User Modeling, Adaptation and Personalization | 2017
Kevin Jasberg; Sergej Sizov
In many areas of data mining, data is collected from human beings. In this contribution, we ask how people actually respond to ordinal scales. The main problem observed is that users tend to be volatile in their choices, i.e. complex cognitions do not always lead to the same decisions but to distributions of possible decision outputs. This human uncertainty can have quite an impact on common data mining approaches, so the question of how to effectively model this so-called human uncertainty arises naturally. Our contribution introduces two different approaches for modelling the human uncertainty of user responses. In doing so, we develop techniques to measure this uncertainty at the level of user inputs as well as at the level of user cognition. Supported by comprehensive user experiments and large-scale simulations, we systematically compare both methodologies along with their implications for personalisation approaches. Our findings demonstrate that a significant fraction of users submit something completely different (action) from what they really have in mind (cognition). Moreover, we demonstrate that statistically sound evidence for algorithm assessment becomes quite hard to obtain, especially when explicit rankings are to be built.
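The input-level measurement idea can be illustrated with a hypothetical repeated-rating experiment: re-asking users the same question yields a response distribution per user, from which both the uncertainty and action/cognition mismatches become visible. A sketch with assumed noise parameters (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical repeated-rating experiment: each user re-rates the same item
# five times on a 1-5 star scale; the underlying cognition is continuous.
n_users, n_trials = 1000, 5
cognition = rng.uniform(1.0, 5.0, size=n_users)          # latent expectation
noise = rng.normal(0.0, 0.7, size=(n_trials, n_users))   # response volatility
actions = np.clip(np.rint(cognition + noise), 1, 5)      # submitted star ratings

# Input-level model: a distribution per user, estimated from repeated responses.
input_mean = actions.mean(axis=0)
input_std = actions.std(axis=0, ddof=1)

# Flag a user as "acting differently than they think" when the typical
# submitted rating deviates from the latent cognition by over half a star.
mismatch = np.abs(input_mean - cognition) > 0.5
print(f"users with action/cognition mismatch: {mismatch.mean():.1%}")
print(f"median response uncertainty: {np.median(input_std):.2f} stars")
```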
ACM Symposium on Applied Computing | 2018
Kevin Jasberg; Sergej Sizov
One of the most crucial issues in data mining is modelling human behaviour in order to provide personalisation, adaptation and recommendation. This usually involves implicit or explicit knowledge, gathered either by observing user interactions or by asking users directly. But these sources of information are always subject to the volatility of human decisions, making the data gathered in this way uncertain to some extent. In this contribution, we elaborate on the impact of this human uncertainty on comparative assessments of different recommender systems. In particular, we reveal two problems: (1) biasing effects on various metrics of model-based prediction and (2) the propagation of uncertainty and the resulting error probabilities for algorithm rankings. For this purpose, we introduce a probabilistic view, prove the existence of these problems mathematically, and provide possible solution strategies.
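Problem (2) can be illustrated by propagating the response uncertainty through a two-system comparison: resampling plausible ground truths shows how often the observed ranking of two algorithms would flip. A hypothetical sketch (synthetic predictions and an assumed noise level, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical comparison of two recommenders on 300 uncertain ratings.
mu = rng.uniform(1.0, 5.0, size=300)       # users' latent expectations
tau = 0.6                                  # assumed response noise (illustrative)
pred_a = mu + rng.normal(0.0, 0.25, 300)   # predictions of algorithm A
pred_b = mu + rng.normal(0.0, 0.30, 300)   # predictions of algorithm B (worse)

# Propagate human uncertainty: resample plausible ground truths and count
# how often the ranking "A beats B" would flip.
draws = rng.normal(mu, tau, size=(5000, 300))

def rmse(pred):
    # RMSE of one prediction vector against each resampled ground truth
    return np.sqrt(np.mean((draws - pred) ** 2, axis=1))

flip_prob = np.mean(rmse(pred_a) >= rmse(pred_b))
print(f"probability of a misleading ranking: {flip_prob:.1%}")
```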
Archive | 2017
Kevin Jasberg; Sergej Sizov
arXiv: Human-Computer Interaction | 2018
Kevin Jasberg; Sergej Sizov
arXiv: Human-Computer Interaction | 2018
Kevin Jasberg; Sergej Sizov
Inf. Wiss. & Praxis | 2018
Kevin Jasberg; Sergej Sizov
arXiv: Human-Computer Interaction | 2017
Kevin Jasberg; Sergej Sizov
Archive | 2017
Kevin Jasberg; Sergej Sizov