David Hrabal
University of Ulm
Publications
Featured research published by David Hrabal.
International Conference on Human-Computer Interaction | 2011
Steffen Walter; Stefan Scherer; Martin Schels; Michael Glodek; David Hrabal; Miriam Schmidt; Ronald Böck; Kerstin Limbrecht; Harald C. Traue; Friedhelm Schwenker
The design of intelligent personalized interactive systems with knowledge about the user's state, desires, needs, and wishes currently poses a great challenge to computer scientists. In this study we propose an information fusion approach that combines acoustic and biophysiological data from multiple sensors to classify emotional states. For this purpose a multimodal corpus has been created in which subjects undergo a controlled emotion-eliciting experiment, passing through several octants of the valence-arousal-dominance space. The temporal and decision-level fusion of the multiple modalities outperforms the single-modality classifiers and shows promising results.
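The decision-level fusion described above can be illustrated with a short sketch: train one classifier per modality, then average their class-probability outputs. This is a minimal illustration on simulated stand-in data; the variable names (X_audio, X_bio) and model choices are assumptions, not the study's actual pipeline.

```python
# Minimal decision-level fusion sketch with simulated placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(200, 20))   # placeholder acoustic features
X_bio = rng.normal(size=(200, 8))      # placeholder biophysiological features
y = rng.integers(0, 2, size=200)       # placeholder binary emotion labels

idx_train, idx_test = train_test_split(np.arange(200), random_state=0)

# Train one classifier ("expert") per modality.
clf_audio = RandomForestClassifier(random_state=0).fit(X_audio[idx_train], y[idx_train])
clf_bio = LogisticRegression(max_iter=1000).fit(X_bio[idx_train], y[idx_train])

# Decision-level fusion: average the per-class probabilities of both experts.
p_fused = 0.5 * (clf_audio.predict_proba(X_audio[idx_test])
                 + clf_bio.predict_proba(X_bio[idx_test]))
y_pred = p_fused.argmax(axis=1)
print("fused accuracy:", (y_pred == y[idx_test]).mean())
```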
Journal on Multimodal User Interfaces | 2014
Martin Schels; Markus Kächele; Michael Glodek; David Hrabal; Steffen Walter; Friedhelm Schwenker
The individual nature of physiological measurements of human affective states makes it very difficult to transfer statistical classifiers from one subject to another. In this work, we propose an approach that incorporates unlabeled data into supervised classifier training in order to conduct emotion classification. The key idea of the method is to perform a density estimation on all available data (labeled and unlabeled) to create a new encoding of the problem; a supervised classifier is then constructed on this encoding. Furthermore, numerical evaluations on the EmoRec II corpus are given, examining to what extent additional data can improve classification and which parameters of the density estimation are optimal.
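A minimal sketch of the density-encoding idea follows, assuming a Gaussian mixture model as the density estimator; the component count and data shapes are illustrative assumptions, not the paper's exact setup.

```python
# Semi-supervised density encoding: fit a density model on ALL data,
# then re-encode labeled samples for a standard supervised classifier.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_labeled = rng.normal(size=(50, 10))      # few labeled physiological samples
y_labeled = rng.integers(0, 2, size=50)
X_unlabeled = rng.normal(size=(500, 10))   # plentiful unlabeled samples

# 1) Fit the density model on all available data, labeled and unlabeled.
gmm = GaussianMixture(n_components=8, random_state=1)
gmm.fit(np.vstack([X_labeled, X_unlabeled]))

# 2) Re-encode each labeled sample as its vector of posterior component
#    responsibilities under the fitted mixture.
Z_labeled = gmm.predict_proba(X_labeled)

# 3) Train a standard supervised classifier on the new encoding.
clf = SVC().fit(Z_labeled, y_labeled)
```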
Systems, Man, and Cybernetics | 2013
Steffen Walter; Jonghwa Kim; David Hrabal; Stephen Crawcour; Henrik Kessler; Harald C. Traue
The goal of automatic biopsychological emotion recognition for companion technologies is to ensure reliable and valid classification rates. In this paper, emotional states were induced via a Wizard-of-Oz mental trainer scenario based on the valence-arousal-dominance model. In most experiments, classification algorithms are tested via leave-one-out cross-validation within a single situation. Such studies often show very high classification rates, comparable with those in our experiment (92.6%). However, in order to guarantee robust emotion recognition based on biopsychological data, measurements have to be taken across several situations with the goal of selecting features that are stable for individual emotional states. For this purpose, our mental trainer experiment was conducted twice for each subject with a 10-min break between the two rounds. It is shown that there are robust psychobiological features that can be used for classification (70.1%) in both rounds. However, these are not the same features that were found via feature selection performed only on the first round (classification: 53.0%).
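A hedged sketch of cross-situation feature selection in the spirit of the paper: rank features by discriminability in each round separately and keep only those that score highly in both. The data, feature count, and the F-score criterion are illustrative assumptions.

```python
# Keep only features that rank as discriminative in BOTH experimental rounds.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(2)
X_r1, y_r1 = rng.normal(size=(80, 30)), rng.integers(0, 2, 80)  # round 1 (simulated)
X_r2, y_r2 = rng.normal(size=(80, 30)), rng.integers(0, 2, 80)  # round 2 (simulated)

F1, _ = f_classif(X_r1, y_r1)   # per-feature discriminability, round 1
F2, _ = f_classif(X_r2, y_r2)   # per-feature discriminability, round 2

k = 10
top1 = set(np.argsort(F1)[-k:])   # top-k features in round 1
top2 = set(np.argsort(F2)[-k:])   # top-k features in round 2
stable = sorted(top1 & top2)      # features discriminative in both rounds
print("stable feature indices:", stable)
```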
Ambient Intelligence | 2012
Jun-Wen Tan; Steffen Walter; Andreas Scheck; David Hrabal; Holger Hoffmann; Henrik Kessler; Harald C. Traue
Recent affective computing findings indicate that effectively identifying users' emotional responses is an important issue in improving the quality of ambient intelligence. In the current study, two bipolar facial electromyography (EMG) channels over the corrugator supercilii and zygomaticus major were employed to differentiate emotional states along the two dimensions of valence (negative, neutral, and positive) and arousal (high and low) while participants viewed affective visual stimuli. The results demonstrated that corrugator EMG and zygomaticus EMG efficiently differentiated negative and positive emotions, respectively, from the other states. Moreover, corrugator EMG discriminated emotions on valence clearly, whereas zygomaticus EMG was ambiguous between neutral and negative emotional states. However, there was no significant statistical evidence for a discrimination of facial EMG responses along the dimension of arousal. Furthermore, correlation analysis revealed significant correlations between facial EMG activity and valence ratings given by the participants and by other samples, strongly supporting the consistency of facial EMG reactions with subjective emotional experience. In addition, the repeatability of facial EMG was assessed via the intraclass correlation coefficient (ICC): corrugator EMG showed an excellent level of repeatability, whereas zygomaticus EMG showed only a poor level. Considering these results, facial EMG is reliable and effective for identifying negative and positive emotional experiences elicited by affective visual stimuli, which may offer an alternative method for building a basis for the automated classification of users' affective states in various situations.
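The correlation step can be sketched briefly: per-trial EMG amplitudes are correlated with valence ratings. The arrays below are simulated stand-ins with the expected sign of effect, not the study's recordings.

```python
# Correlate per-trial facial EMG amplitude with valence ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
valence = rng.uniform(1, 9, size=60)                      # per-trial valence ratings
corrugator = 5 - 0.4 * valence + rng.normal(0, 0.5, 60)   # frowning activity (simulated)
zygomaticus = 0.4 * valence + rng.normal(0, 0.5, 60)      # smiling activity (simulated)

r_corr, p_corr = pearsonr(valence, corrugator)
r_zygo, p_zygo = pearsonr(valence, zygomaticus)
print(f"corrugator vs. valence:  r={r_corr:+.2f} (p={p_corr:.3g})")
print(f"zygomaticus vs. valence: r={r_zygo:+.2f} (p={p_zygo:.3g})")
```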
Psychophysiology | 2014
Christin Kohrs; David Hrabal; Nicole Angenstein; André Brechmann
System response time is an important issue in research on human-computer interaction. Experience with technical devices and general rules of human-human interaction determine the user's expectations, and any delay in system response time may lead to immediate physiological, emotional, and behavioral consequences. We investigated such effects on a trial-by-trial basis during a human-computer interaction by measuring changes in skin conductance (SC), heart rate (HR), and the dynamics of button press responses. We found an increase in SC and a deceleration of HR for all three delayed system response times (0.5, 1, 2 s). Moreover, the data on button press dynamics were highly informative, since subjects repeated a button press with more force in response to delayed system response times. Furthermore, the button press dynamics could distinguish between correct and incorrect decisions and may thus even be used to infer the uncertainty of a user's decision.
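A minimal sketch of the button-press analysis, comparing the peak force of a first press with its repeat after a delayed system response; the force values and units here are simulated assumptions.

```python
# Paired comparison: first press vs. repeated press after a delayed response.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n_trials = 40
peak_first = rng.normal(1.0, 0.1, n_trials)                  # first press (a.u., simulated)
peak_repeat = peak_first + rng.normal(0.15, 0.1, n_trials)   # harder repeat (simulated)

t, p = ttest_rel(peak_repeat, peak_first)
print(f"repeat presses harder by {np.mean(peak_repeat - peak_first):.2f} a.u. "
      f"(paired t={t:.2f}, p={p:.3g})")
```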
PSL'11: Proceedings of the First IAPR TC3 Conference on Partially Supervised Learning | 2011
Martin Schels; Markus Kächele; David Hrabal; Steffen Walter; Harald C. Traue; Friedhelm Schwenker
In this paper, a partially supervised machine learning approach is proposed for the recognition of emotional user states in HCI from biophysiological data. To this end, an unsupervised learning preprocessing step is integrated into the training of a classifier. This makes it feasible to utilize unlabeled data or, as in this study, data that is labeled in categories other than those under consideration. The data is thus transformed into a new representation, and a standard classifier is subsequently applied. Experimental evidence that such an approach is beneficial in this particular setting is provided through classification experiments. Finally, the results are discussed and arguments are given for when such a partially supervised approach promises robust and improved classification performance.
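As a sketch of the same recipe with a different unsupervised step (k-means is an illustrative substitute here, not necessarily the paper's choice): all available data shapes the representation, while only the in-task labels train the final classifier.

```python
# Partially supervised recipe: unsupervised preprocessing on all data,
# supervised training only on the samples labeled in the target categories.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X_task = rng.normal(size=(60, 12))        # samples with in-task labels (simulated)
y_task = rng.integers(0, 2, size=60)
X_other = rng.normal(size=(400, 12))      # labeled only in other category systems

km = KMeans(n_clusters=10, n_init=10, random_state=5)
km.fit(np.vstack([X_task, X_other]))      # representation uses everything

Z_task = km.transform(X_task)             # distances to the 10 cluster centroids
clf = LogisticRegression(max_iter=1000).fit(Z_task, y_task)
```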
PLOS ONE | 2016
Jun-Wen Tan; Adriano O. Andrade; Hang Li; Steffen Walter; David Hrabal; Stefanie Rukavina; Kerstin Limbrecht-Ecklundt; Holger Hoffman; Harald C. Traue
Background: Research suggests that interaction between humans and digital environments characterizes a form of companionship in addition to technical convenience. To this effect, humans have attempted to design computer systems able to demonstrably empathize with the human affective experience. Facial electromyography (EMG) is one technique enabling machines to access human affective states. Numerous studies have investigated the effects of valence emotions on facial EMG activity captured over the corrugator supercilii (frowning muscle) and zygomaticus major (smiling muscle). The arousal dimension, however, has not received much research attention. In the present study, we sought to identify intensive valence and arousal affective states via facial EMG activity.
Methods: Ten blocks of affective pictures were separated into five categories: neutral valence/low arousal (0VLA), positive valence/high arousal (PVHA), negative valence/high arousal (NVHA), positive valence/low arousal (PVLA), and negative valence/low arousal (NVLA), and the ability of each to elicit the corresponding valence and arousal affective states was investigated at length. One hundred and thirteen participants were exposed to these stimuli while facial EMG was recorded. A set of 16 features based on the amplitude, frequency, predictability, and variability of the signals was defined and classified using a support vector machine (SVM).
Results: We observed highly accurate classification rates based on the combined corrugator and zygomaticus EMG, ranging from 75.69% to 100.00% for the baseline and five affective states (0VLA, PVHA, PVLA, NVHA, and NVLA) in all individuals. There were significant differences in classification accuracy between senior and young adults, but no significant difference between female and male participants.
Conclusion: Our research provides robust evidence for the recognition of intensive valence and arousal affective states in young and senior adults. These findings contribute to the future application of facial EMG for identifying user affective states in human-machine interaction (HMI) or companion robotic systems (CRS).
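A hedged sketch of the feature-and-SVM pipeline: the four toy features below only illustrate the amplitude/frequency/predictability/variability families and do not reproduce the paper's 16 features.

```python
# Toy EMG feature extraction plus SVM classification over six states.
import numpy as np
from sklearn.svm import SVC

def emg_features(signal):
    """Tiny illustrative feature vector for one EMG epoch."""
    amp = np.mean(np.abs(signal))                      # amplitude family
    zcr = np.mean(np.diff(np.sign(signal)) != 0)       # frequency proxy (zero crossings)
    ac1 = np.corrcoef(signal[:-1], signal[1:])[0, 1]   # predictability (lag-1 autocorr.)
    var = np.var(signal)                               # variability
    return np.array([amp, zcr, ac1, var])

rng = np.random.default_rng(6)
epochs = rng.normal(size=(120, 512))     # simulated EMG epochs
labels = rng.integers(0, 6, size=120)    # baseline + 0VLA/PVHA/PVLA/NVHA/NVLA

X = np.array([emg_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
```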
Archive | 2014
Kerstin Limbrecht-Ecklundt; Holger Hoffmann; Steffen Walter; Sascha Gruss; David Hrabal; Harald C. Traue
Emotion recognition and emotion expression/regulation are important aspects of emotional intelligence (EI). Although the construct of EI is widely used and its components are part of many investigations, there is still no sufficient picture set available for systematic research on facial emotion recognition and for practical applications in individual assessment. In this work we present a new picture set, validated via the Facial Action Coding System, covering six emotions (anger, disgust, fear, happiness, sadness, and surprise). Basic principles of stimulus development and the evaluation process are described. The resulting picture set, the PFA-U, can be used in future studies in organizations for the assessment of emotion recognition, emotion stimulation, and emotion management.
Pervasive Technologies Related to Assistive Environments | 2010
Steffen Walter; David Hrabal; Andreas Scheck; Henrik Kessler; Gregor Bertrand; Florian Nothdurft; Wolfgang Minker; Harald C. Traue
One of the most important and difficult fields in research on assistive environment technology is the recognition of emotional and motivational user states. Past emotion studies show that there are only a few universal, interindividually valid psychobiological profiles that are stably associated with a user's emotional state. In this approach we therefore look for intraindividually valid psychobiological patterns of emotions and motivations. In order to predict such states separately for different subjects, we introduced a calibration procedure for each of the 20 subjects. We expect higher emotion recognition rates than preceding studies that focused on universal patterns in sample data. First results will be presented at the conference.
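The per-subject calibration idea can be sketched as one classifier per subject, trained only on that subject's calibration trials; the data layout and model choice below are assumptions.

```python
# One individually calibrated classifier per subject (simulated data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
subject_models = {}
for subject_id in range(20):               # 20 subjects, as in the study
    X_cal = rng.normal(size=(40, 6))       # that subject's calibration features
    y_cal = rng.integers(0, 2, size=40)    # elicited-state labels
    subject_models[subject_id] = SVC().fit(X_cal, y_cal)

# At runtime, each subject is decoded with their own calibrated model,
# rather than with a universal classifier shared across subjects.
```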
International Conference on Human-Computer Interaction | 2013
Stefanie Rukavina; Sascha Gruss; Jun-Wen Tan; David Hrabal; Steffen Walter; Harald C. Traue; Lucia Jerg-Bretzke
It is a challenge to make cognitive technical systems more empathetic to user emotions and dispositions. Among channels like facial behavior and nonverbal cues, psychobiological patterns of emotional or dispositional behavior contain rich information, which is continuously available and hardly controlled willingly. However, within this area of research, gender differences or even hormonal cycle effects as potential factors influencing the classification of psychophysiological patterns of emotions have rarely been analyzed so far. In our study, emotions were induced with a blocked presentation of pictures from the International Affective Picture System (IAPS) and the Ulm pictures. For the automated emotion classification, five features were first calculated from the heart rate signal and then combined with two features of the facial EMG. The study focused mainly on gender differences in automated emotion classification and to a lesser degree on classification accuracy with a Support Vector Machine (SVM) per se. We obtained diminished classification results for a gender-mixed population, and likewise diminished results when mixing young females across their hormonal cycle phases. Thus, we could show an improvement of accuracy rates when subdividing the population according to gender, which we discuss as a way of improving automated classification results.
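A minimal sketch of the subdivision analysis: cross-validated SVM accuracy on the mixed population versus within each gender subgroup. The 5 heart-rate and 2 EMG features, labels, and group assignments are simulated placeholders.

```python
# Compare classification accuracy: mixed population vs. gender subgroups.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 7))            # 5 heart-rate + 2 facial-EMG features
y = rng.integers(0, 2, size=100)         # induced-emotion labels
gender = rng.integers(0, 2, size=100)    # 0 = female, 1 = male (illustrative)

print("mixed:", cross_val_score(SVC(), X, y, cv=5).mean())
for g, name in [(0, "female"), (1, "male")]:
    m = gender == g
    print(name + ":", cross_val_score(SVC(), X[m], y[m], cv=5).mean())
```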