Sascha Gruss
University of Ulm
Publications
Featured research published by Sascha Gruss.
British Machine Vision Conference | 2013
Philipp Werner; Ayoub Al-Hamadi; Robert Niese; Steffen Walter; Sascha Gruss; Harald C. Traue
Pain is what the patient says it is. But what about those who cannot speak? Automatic pain monitoring opens up prospects for better treatment, but accurate assessment of pain is challenging due to its subjective nature. To facilitate advances, we contribute a new dataset, the BioVid Heat Pain Database, which contains videos and physiological data of 90 persons subjected to well-defined pain stimuli of 4 intensities. We propose a fully automatic recognition system utilizing facial expression, head pose information, and their dynamics. The approach is evaluated on the task of pain detection on the new dataset, also outlining open challenges for pain monitoring in general. Additionally, we analyze the relevance of head pose information for pain recognition and compare person-specific and general classification models.
2013 IEEE International Conference on Cybernetics (CYBCO) | 2013
Steffen Walter; Sascha Gruss; Hagen Ehleiter; Jun-Wen Tan; Harald C. Traue; Philipp Werner; Ayoub Al-Hamadi; Stephen Crawcour; Adriano O. Andrade; Gustavo Moreira da Silva
The objective measurement of subjective, multi-dimensionally experienced pain is a problem that has yet to be adequately solved. Though verbal methods (i.e., pain scales, questionnaires) and visual analogue scales are commonly used for measuring clinical pain, they tend to lack reliability or validity when applied to mentally impaired individuals. Expressions of pain and/or its biopotential parameters could represent a solution. While such coding systems already exist, they are either very costly and time-consuming or have been insufficiently evaluated with regard to the theory of mental tests. Building on the experience gained to date, we collected a database of visual and biopotential signals to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. For this purpose, participants were subjected to painful heat stimuli under controlled conditions.
International Conference on Pattern Recognition | 2014
Philipp Werner; Ayoub Al-Hamadi; Robert Niese; Steffen Walter; Sascha Gruss; Harald C. Traue
How much does it hurt? Accurate assessment of pain is very important for selecting the right treatment; however, current methods are not sufficiently valid and reliable in many cases. Automatic pain monitoring may help by providing an objective and continuous assessment. In this paper we propose an automatic pain recognition system combining information from video and biomedical signals, namely facial expression, head movement, galvanic skin response, electromyography, and electrocardiogram. Using the BioVid Heat Pain Database, the system is evaluated on the task of pain detection, showing significant improvement over the current state of the art. Further, we discuss the relevance of the modalities and compare person-specific and generic classification models.
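The combination of video and biosignal information described in this abstract can be sketched as score-level late fusion, one common multimodal strategy: train a classifier per modality and average their pain probabilities. This is an illustrative sketch on synthetic data, not the paper's actual pipeline; the modality names, model choice, and split are assumptions.

```python
# Illustrative score-level late fusion of two modalities (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n)  # 0 = no pain, 1 = pain (synthetic labels)

# Two synthetic "modalities": each carries a weak, noisy pain signal.
video_feats = y[:, None] * 0.8 + rng.standard_normal((n, 5))
bio_feats   = y[:, None] * 0.8 + rng.standard_normal((n, 5))

train = np.arange(n) < 150  # simple holdout split
clf_video = LogisticRegression().fit(video_feats[train], y[train])
clf_bio   = LogisticRegression().fit(bio_feats[train], y[train])

# Fusion step: average the per-modality pain probabilities.
p_fused = 0.5 * (clf_video.predict_proba(video_feats[~train])[:, 1]
                 + clf_bio.predict_proba(bio_feats[~train])[:, 1])
fused_acc = np.mean((p_fused > 0.5) == y[~train])
```

Averaging probabilities is only one fusion option; decision-level voting or feature-level concatenation are common alternatives when modalities differ in reliability.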
Ergonomics | 2014
Steffen Walter; Cornelia Wendt; Jan R. Böhnke; Stephen Crawcour; Jun-Wen Tan; Andre Chan; Kerstin Limbrecht; Sascha Gruss; Harald C. Traue
Cognitive-technical intelligence is envisioned to be constantly available and capable of adapting to the user's emotions. However, the question is: which specific emotions should be reliably recognised by intelligent systems? Hence, in this study, we attempted to identify similarities and differences in emotions between human–human interactions (HHI) and human–machine interactions (HMI). We focused on which emotions in experienced HMI scenarios are retroactively reflected compared with HHI. The sample consisted of N = 145 participants, who were divided into two groups. Positive and negative scenario descriptions of HMI and HHI were given by the first and second groups, respectively. Subsequently, the participants evaluated their respective scenarios with the help of 94 emotion-related adjectives. The correlations between the occurrences of emotions in HMI versus HHI were very high. The results do not support the claim that only a few emotions are relevant in HMI. Practitioner Summary: This study sought to identify the emotions relevant to companion systems across different technical domains. Overall, the 20 essential emotions found to be highly relevant for HMI were as follows: (i) positive, i.e. satisfied, pleased, happy, relieved, pleasant, well, serene, optimistic, confident and self-confident; and (ii) negative, i.e. annoyed, aggravated, impatient, angry, unsatisfied, displeased, irritable, frustrated, enraged and tense.
PLOS ONE | 2015
Sascha Gruss; Roi Treister; Philipp Werner; Harald C. Traue; Stephen Crawcour; Adriano O. Andrade; Steffen Walter
Background: The clinically used methods of pain diagnosis do not allow for objective and robust measurement, so physicians must rely on the patient's report of the pain sensation. Verbal scales, visual analog scales (VAS) and numeric rating scales (NRS) count among the most common tools, which are restricted to patients with normal mental abilities. There are also instruments for pain assessment in people with verbal and/or cognitive impairments, and instruments for pain assessment in people who are sedated and mechanically ventilated. However, all these diagnostic methods either have limited reliability and validity or are very time-consuming. In contrast, biopotentials can be automatically analyzed with machine learning algorithms to provide a surrogate measure of pain intensity.
Methods: In this context, we created a database of biopotentials to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. Eighty-five participants were subjected to painful heat stimuli (baseline, pain threshold, two intermediate thresholds, and pain tolerance threshold) under controlled conditions, and the signals of electromyography, skin conductance level, and electrocardiography were collected. A total of 159 features were extracted from the mathematical groupings of amplitude, frequency, stationarity, entropy, linearity, variability, and similarity.
Results: We achieved classification rates of 90.94% for baseline vs. pain tolerance threshold and 79.29% for baseline vs. pain threshold. The most frequently selected pain features stemmed from the amplitude and similarity groups and were derived from facial electromyography.
Conclusion: The machine learning measurement of pain in patients could provide valuable information for a clinical team and thus support treatment assessment.
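The pipeline this abstract describes — extracting feature groups from biopotential windows and classifying pain levels — can be sketched minimally for the amplitude group. This is a synthetic illustration, not the paper's 159-feature system; the feature definitions, signal model, and SVM settings are assumptions.

```python
# Minimal sketch: amplitude-group features from synthetic EMG-like windows,
# classified as baseline vs. pain-tolerance level.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def amplitude_features(signal):
    """Typical amplitude-group features: peak, range, mean absolute value, RMS."""
    return np.array([
        np.max(np.abs(signal)),
        np.ptp(signal),                 # peak-to-peak range
        np.mean(np.abs(signal)),        # mean absolute value
        np.sqrt(np.mean(signal ** 2)),  # root mean square
    ])

def make_trials(n, gain):
    """Synthetic signal windows; higher gain mimics stronger muscle activity."""
    return [gain * rng.standard_normal(512) for _ in range(n)]

baseline = make_trials(40, 1.0)   # low-amplitude baseline windows
pain     = make_trials(40, 3.0)   # higher-amplitude "pain" windows

X = np.vstack([amplitude_features(s) for s in baseline + pain])
y = np.array([0] * 40 + [1] * 40)

# Cross-validated binary classification, analogous to baseline vs. tolerance.
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
```

A real system would add the other feature groups (frequency, entropy, similarity, ...) and normalize per subject before classification.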
PLOS ONE | 2016
Stefanie Rukavina; Sascha Gruss; Holger Hoffmann; Jun-Wen Tan; Steffen Walter; Harald C. Traue
Affective computing aims at the detection of users' mental states, in particular emotions and dispositions, during human-computer interactions. Detection can be achieved by measuring multimodal signals, namely speech, facial expressions and/or psychobiology. Over the past years, one major approach has been to identify the best features for each signal using different classification methods. Although this is of high priority, other subject-specific variables should not be neglected. In our study, we analyzed the effect of gender, age, personality and gender roles on the extracted psychobiological features (derived from skin conductance level, facial electromyography and heart rate variability) as well as their influence on the classification results. In an experimental human-computer interaction, five different affective states were induced with picture material from the International Affective Picture System and ULM pictures. A total of 127 subjects participated in the study. Among all potentially influential variables (gender has previously been reported to be influential), age was the only one that correlated significantly with psychobiological responses. In summary, the conducted classification processes showed differences of 20% in classification accuracy according to age and gender, especially when comparing the neutral condition with the four other affective states. We suggest taking age and gender specifically into account in future studies in affective computing, as this may lead to an improvement of emotion recognition accuracy.
The Physician and Sportsmedicine | 2011
Steffen Walter; Henrik Kessler; Sascha Gruss; Lucia Jerg-Bretzke; Andreas Scheck; Jochen Ströbel; Holger Hoffmann; Harald C. Traue
Objective: The present study investigated the influence of neuroticism (NEO Five-Factor Inventory, NEO-FFI) and psychological symptoms (Brief Symptom Inventory, BSI) on pleasure, arousal, and dominance (PAD) ratings of the International Affective Picture System (IAPS). Methods: The subjects (N = 131) were presented with images from the IAPS (30 images) and new images (30 images). The influence of neuroticism and BSI scores (median split: high vs. low) on the assessment of pleasure, arousal, and dominance of the images was examined. Correlations of pleasure, arousal, and dominance were presented in a 3-D video animation. Results: Subjects with high scores (compared to subjects with low scores by median split) on neuroticism and the psychological symptoms of the BSI rated the presented emotional images more negatively on the valence dimension (pleasure), higher in arousal, and less dominant. Conclusion: Neuroticism and psychological symptoms influence the subjective evaluation of emotional images. Therefore, the location in the three-dimensional emotion space depends on individual differences. Such differences must be kept in mind when correlations between emotion ratings and other variables, such as psychobiological measures, are analyzed.
IEEE Transactions on Affective Computing | 2017
Philipp Werner; Ayoub Al-Hamadi; Kerstin Limbrecht-Ecklundt; Steffen Walter; Sascha Gruss; Harald C. Traue
Pain is a primary symptom in medicine, and accurate assessment is needed for proper treatment. However, today's pain assessment methods are not sufficiently valid and reliable in many cases. Automatic recognition systems may contribute to overcoming this problem by facilitating objective and continuous assessment. In this article we propose a novel feature set for describing facial actions and their dynamics, which we call facial activity descriptors. We apply them to detect pain and estimate pain intensity. The proposed method outperforms previous state-of-the-art approaches in sequence-level pain classification on both the BioVid Heat Pain and the UNBC-McMaster Shoulder Pain Expression databases. We further discuss major challenges of pain recognition research, the benefits of temporal integration, and shortcomings of the widely used frame-based pain intensity ground truth.
International Conference on Human-Computer Interaction | 2015
Dilana Hazer; Xueyao Ma; Stefanie Rukavina; Sascha Gruss; Steffen Walter; Harald C. Traue
In affective computing, an accurate emotion recognition process requires a reliable emotion elicitation method. One question that arises when inducing emotions for computer-based emotional applications is that of age group differences. In the present study, we investigate the effect of emotion elicitation across various age groups. Emotion elicitation was conducted using standardized movie clips representing five basic emotions: amusement, sadness, anger, disgust and fear. Each emotion was elicited by three different clips. Each clip was rated individually, and the subjective choice of the most relevant clip was analyzed. The results show the influence of age on film-clip choice, the correlation between age and valence/arousal ratings for the chosen clips, and the differences in valence and arousal ratings among the different age groups.
Pervasive Technologies Related to Assistive Environments | 2010
Timo Schuster; Sascha Gruss; Henrik Kessler; Andreas Scheck; Holger Hoffmann; Harald C. Traue
In this work we describe the processing and classification of EEG data acquired under emotional conditions. In the context of assistive environment technology, one of the most important challenges is obtaining information about a person's emotional state. To obtain this information, psychophysiological data were recorded while stimulating subjects with emotional pictures. Afterwards, a classifier was trained to differentiate between the physiological patterns of negative, positive and neutral conditions. The classification results show an accuracy of about 72%.
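The three-way classification described here (negative vs. positive vs. neutral physiological patterns) can be sketched with a simple discriminant classifier on band-power-style features. This is a synthetic illustration of the general approach, not the paper's pipeline; the feature dimensionality, class separation, and classifier choice are assumptions.

```python
# Illustrative three-class affective-state classification on synthetic
# band-power-style EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
classes = {"negative": 0, "neutral": 1, "positive": 2}

# One synthetic 8-dimensional feature vector per trial, 50 trials per
# condition, with the feature means shifted per condition.
X = np.vstack([c + rng.standard_normal((50, 8)) for c in classes.values()])
y = np.repeat(list(classes.values()), 50)

# Cross-validated accuracy over the three conditions.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

With real EEG, the features would typically be spectral band powers per channel, and chance level for three balanced classes is about 33%, so accuracies such as the reported ~72% are well above chance.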