Publications


Featured research published by Sascha Meudt.


International Conference on Pattern Recognition Applications and Methods | 2014

Fusion of Audio-visual Features using Hierarchical Classifier Systems for the Recognition of Affective States and the State of Depression

Markus Kächele; Michael Glodek; Dimitrij Zharkov; Sascha Meudt; Friedhelm Schwenker

Reliable prediction of affective states in real world scenarios is very challenging, and a significant amount of ongoing research is targeted towards improving existing systems. Major problems include the unreliability of labels, different realizations of the same affective state amongst different persons and in different modalities, as well as the presence of sensor noise in the signals. This work presents a framework for adaptive fusion of input modalities with variable degrees of certainty on different levels. Using a strategy that starts with ensembles of weak learners, the discriminative power of the system is improved gradually, level by level, by adaptively weighting favorable decisions while concurrently dismissing unfavorable ones. For the final decision fusion, the proposed system leverages a trained Kalman filter. Besides its ability to deal with missing and uncertain values, the Kalman filter is by nature a time series predictor and thus a suitable choice for matching input signals to a reference time series in the form of ground truth labels.
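
The Kalman-filter stage can be pictured with a small sketch. The following is a minimal, hypothetical illustration (not the trained filter from the paper) of how a one-dimensional Kalman filter can fuse per-modality score streams over time while skipping missing observations; the `kalman_fuse` helper and all parameter values are assumptions.

```python
# Minimal sketch of Kalman-filter-based decision fusion, assuming each
# modality emits a scalar affect score per time step (hypothetical setup;
# the paper's trained filter parameters are not reproduced here).
import numpy as np

def kalman_fuse(scores, process_var=1e-3, obs_var=0.1):
    """Fuse per-modality score streams (T x M, NaN = missing) into one trajectory."""
    T, M = scores.shape
    x, p = 0.0, 1.0            # state estimate and its variance
    fused = np.empty(T)
    for t in range(T):
        p += process_var       # predict: random-walk state model
        for z in scores[t]:
            if np.isnan(z):    # missing modality: skip its update
                continue
            k = p / (p + obs_var)    # Kalman gain
            x += k * (z - x)         # correct with this observation
            p *= (1.0 - k)
        fused[t] = x
    return fused

# Example: audio and video score streams with a dropout in the video channel.
audio = np.linspace(0, 1, 50) + np.random.normal(0, 0.2, 50)
video = np.linspace(0, 1, 50) + np.random.normal(0, 0.3, 50)
video[20:30] = np.nan
print(kalman_fuse(np.column_stack([audio, video]))[-5:])
```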


International Conference on Multimodal Interfaces | 2013

Multi classifier systems and forward backward feature selection algorithms to classify emotionally coloured speech

Sascha Meudt; Dimitri Zharkov; Markus Kächele; Friedhelm Schwenker

Systems for the recognition of psychological characteristics such as the emotional state in real world scenarios have to deal with several difficulties. Amongst those are unconstrained environments and uncertainties in one or several input channels. However, a more crucial aspect is the content of the data itself. Psychological states are highly person-dependent, and often even humans are not able to determine the correct state a person is in. A successful recognition system thus has to deal with data that is not very discriminative and often simply misleading. In order to succeed, a critical view on features and decisions is essential to select only the most valuable ones. This work presents a comparison of a common multi classifier system approach based on state-of-the-art features and a modified forward backward feature selection algorithm with a long-term stopping criterion. The second approach also takes features of the voice quality family into account. Both approaches are based on the audio modality only. The dataset used in the challenge lies in between real world datasets, which are still very hard to handle, and overacted datasets, which were popular in the past and are well understood today.
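
To make the long-term stopping criterion concrete, here is a minimal sketch of a forward-backward selection loop that terminates only after several consecutive non-improving steps, rather than at the first dip in performance. The `patience` parameter, the 5-fold cross-validation and the SVM wrapped inside are illustrative assumptions, not the paper's configuration.

```python
# Sketch of forward-backward feature selection with a long-term stopping
# criterion: the search ends only after `patience` consecutive steps
# without improvement. All details here are assumptions for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def score(X, y, feats):
    return cross_val_score(SVC(), X[:, sorted(feats)], y, cv=5).mean()

def fb_select(X, y, patience=5):
    selected, best, stale = set(), -np.inf, 0
    candidates = set(range(X.shape[1]))
    while stale < patience and candidates:
        # forward step: add the single best remaining feature
        f = max(candidates, key=lambda c: score(X, y, selected | {c}))
        selected.add(f); candidates.discard(f)
        # backward step: drop any feature whose removal helps
        for g in list(selected):
            if len(selected) > 1 and score(X, y, selected - {g}) > score(X, y, selected):
                selected.discard(g); candidates.add(g)
        s = score(X, y, selected)
        stale = 0 if s > best else stale + 1   # long-term criterion
        best = max(best, s)
    return sorted(selected)
```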


International Conference on Pattern Recognition | 2014

Prosodic, Spectral and Voice Quality Feature Selection Using a Long-Term Stopping Criterion for Audio-Based Emotion Recognition

Markus Kächele; Dimitrij Zharkov; Sascha Meudt; Friedhelm Schwenker

Emotion recognition from speech is an important field of research in human-machine interfaces and has begun to influence everyday life through deployment in areas such as call centers or wearable companions in the form of smartphones. In the proposed classification architecture, different spectral, prosodic and the relatively novel voice quality features are extracted from the speech signals. These features are then used to represent long-term information of the speech, leading to utterance-wise suprasegmental features. The most promising of these features are selected using a forward-selection/backward-elimination algorithm with a novel long-term termination criterion for the selection. The overall system has been evaluated using recordings from the public Berlin emotion database. Utilizing the resulting features, a recognition rate of 88.97% has been achieved, which surpasses the performance of humans on this database and is comparable to the state-of-the-art performance on this dataset.
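
The following sketch shows how frame-level descriptors can be turned into utterance-wise suprasegmental features by long-term statistics, as described above. It uses librosa's MFCC and YIN pitch extractors as stand-ins; the paper's actual prosodic, spectral and voice quality feature set is not reproduced.

```python
# Sketch: utterance-wise suprasegmental features via long-term statistics
# (mean/std) over frame-level descriptors. Feature choices are assumptions.
import numpy as np
import librosa

def suprasegmental_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral frames
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # prosodic (pitch) frames
    n = min(mfcc.shape[1], f0.shape[0])                  # align frame counts
    frames = np.vstack([mfcc[:, :n], f0[np.newaxis, :n]])
    # long-term statistics over the whole utterance
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])
```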


Journal on Multimodal User Interfaces | 2016

Revisiting the EmotiW challenge: how wild is it really?

Markus Kächele; Martin Schels; Sascha Meudt; Günther Palm; Friedhelm Schwenker

The focus of this work is emotion recognition in the wild based on a multitude of different audio, visual and meta features. For this, a method is proposed to optimize multi-modal fusion architectures based on evolutionary computing. Extensive uni- and multi-modal experiments show the discriminative power of each computed feature set and fusion architecture. Furthermore, we summarize the EmotiW 2013/2014 challenges, review the conclusions that have been drawn, and compare our results with the state of the art on this dataset.
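
As a rough illustration of evolutionary optimization of a fusion architecture, the sketch below evolves weights for a late (score-level) fusion of modality-wise class scores. The paper's search space covers full fusion architectures, so this simplification, including population size and mutation scale, is an assumption.

```python
# Sketch of an evolutionary search over score-level fusion weights:
# truncation selection plus Gaussian mutation. Illustrative only.
import numpy as np

def evolve_fusion_weights(scores, y, pop=30, gens=50, rng=np.random.default_rng(0)):
    """scores: (M, N, C) per-modality class scores; y: (N,) integer labels."""
    M = scores.shape[0]
    def fitness(w):
        fused = np.tensordot(w, scores, axes=1)        # weighted score sum
        return (fused.argmax(axis=1) == y).mean()
    population = rng.random((pop, M))
    for _ in range(gens):
        fit = np.array([fitness(w) for w in population])
        parents = population[np.argsort(fit)[-pop // 2:]]       # keep top half
        children = parents + rng.normal(0, 0.1, parents.shape)  # Gaussian mutation
        population = np.vstack([parents, np.clip(children, 0, None)])
    return max(population, key=fitness)
```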


Artificial Neural Networks in Pattern Recognition | 2012

On instance selection in audio based emotion recognition

Sascha Meudt; Friedhelm Schwenker

Affective computing aims to provide simpler and more natural interfaces for human-computer interaction applications. Automatically recognizing the emotional status of the user, based on facial expressions or speech for example, is important to model the user as completely as possible and to develop human-computer interfaces that are able to respond to the user's actions or behavior in an appropriate manner. In this paper we focus on audio-based emotion recognition. The data sets employed for the statistical evaluation have been collected through Wizard-of-Oz experiments. The emotional labels are defined through the experimental setup and are therefore given on a relatively coarse temporal scale (a few minutes). This global labeling concept might lead to mislabeled data at smaller time scales, for instance for the window sizes used in audio analysis (less than a second). Manual labeling at these time scales is very difficult, if not impossible, and therefore our approach is to use the globally defined labels in combination with instance/sample selection methods. In such an instance selection approach, the task is to select the most relevant and discriminative data of the training set by using a pre-trained classifier. Mel-Frequency Cepstral Coefficient (MFCC) features are used as relevant features, and probabilistic support vector machines (SVM) are applied as base classifiers in our numerical evaluation. Confidence values are assigned to the samples of the training set through the outputs of the probabilistic SVM.
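
The instance selection step can be sketched as follows: a probabilistic SVM scores each training instance out-of-fold, and only instances whose confidence for the globally assigned label clears a threshold are retained. The threshold and cross-validation setup are illustrative assumptions, and integer labels 0..C-1 are assumed.

```python
# Sketch of instance selection with a probabilistic SVM: keep only the
# training instances whose confidence in the globally assigned label is
# high. Threshold and CV setup are assumptions, not the paper's values.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def select_instances(X, y, threshold=0.6):
    """X: (N, D) MFCC-based features; y: (N,) integer labels 0..C-1."""
    # out-of-fold probabilities avoid scoring an instance with a model
    # that was trained on that same instance
    proba = cross_val_predict(SVC(probability=True), X, y,
                              cv=5, method="predict_proba")
    conf = proba[np.arange(len(y)), y]    # confidence in the given label
    keep = conf >= threshold
    return X[keep], y[keep]
```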


Proceedings of the 2014 Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems | 2014

Detection of Emotional Events utilizing Support Vector Methods in an Active Learning HCI Scenario

Patrick Thiam; Sascha Meudt; Markus Kächele; Günther Palm; Friedhelm Schwenker

In recent years the fields of affective computing and emotion recognition have experienced a steady increase in attention, and especially the creation and analysis of multi-modal corpora has been the focus of intense research. Plausible annotation of this data, however, is an enormous problem. In particular, emotion annotation is very time-consuming, cumbersome and sensitive with respect to the annotator. Furthermore, emotional reactions are often very sparse in HCI scenarios, resulting in a large annotation overhead to gather the interesting moments of a recording, which in turn are highly relevant for powerful features, classifiers and fusion architectures. Active learning techniques provide methods to improve the annotation process, since the annotator is asked to label only the relevant instances of a given dataset. In this work an unsupervised one-class Support Vector Machine is used to build a background model of non-emotional sequences on a novel HCI dataset. The human annotator is iteratively asked to label instances that are not well explained by the background model, which in turn renders them candidates for being interesting events such as emotional reactions that diverge from the norm. The outcome of the active learning procedure is a reduced dataset of only 14% of the size of the original dataset that contains most of the significant information, in this case more than 75% of the emotional events.
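
A minimal sketch of such an active learning loop with a one-class SVM background model follows: instances least explained by the model are queried first. The `annotate` oracle, batch size and SVM parameters are hypothetical placeholders, not the paper's setup.

```python
# Sketch of the active-learning loop: a one-class SVM models non-emotional
# background, and the instances it explains worst are sent to the annotator.
import numpy as np
from sklearn.svm import OneClassSVM

def active_query_loop(X, annotate, rounds=10, batch=20):
    labeled = {}                                   # index -> label from annotator
    for _ in range(rounds):
        background = [i for i, l in labeled.items() if l == "non-emotional"]
        model = OneClassSVM(nu=0.1, gamma="scale")
        model.fit(X[background] if background else X)
        scores = model.decision_function(X)        # low = poorly explained
        pool = [i for i in np.argsort(scores) if i not in labeled]
        for i in pool[:batch]:                     # query the most anomalous
            labeled[i] = annotate(i)
    return labeled
```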


International Conference on Multimodal Interfaces | 2014

Enhanced Autocorrelation in Real World Emotion Recognition

Sascha Meudt; Friedhelm Schwenker

Multimodal emotion recognition in real world environments is still a challenging task of affective computing research. Recognizing the affective or physiological state of an individual is difficult for humans as well as for computer systems, and thus finding suitable discriminative features is the most promising approach in multimodal emotion recognition. In the literature, numerous features have been developed or adapted from related signal processing tasks. But still, classifying emotional states in real world scenarios is difficult, and the performance of automatic classifiers is rather limited. This is mainly due to the fact that emotional states cannot be distinguished by a well defined set of discriminating features. In this work we present an enhanced autocorrelation feature as a multi-pitch detection feature and compare its performance to well-known, state-of-the-art features in signal and speech processing. The results of the evaluation show that the enhanced autocorrelation outperforms the other state-of-the-art features on the challenge data set. The complexity of this benchmark data set lies in between real world data sets showing naturalistic emotional utterances, and the widely applied and well-understood acted emotional data sets.
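
For orientation, here is a compact sketch of an enhanced autocorrelation computation in the classic Tolonen-Karjalainen style: clip the autocorrelation, then subtract its time-stretched copy to suppress subharmonic peaks. Whether the paper's variant matches this exact recipe is an assumption.

```python
# Sketch of enhanced autocorrelation (EAC) for multi-pitch analysis:
# half-wave rectify the autocorrelation, subtract a copy time-scaled by 2.
import numpy as np

def enhanced_autocorrelation(frame):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    ac = np.fft.irfft(spec)[: len(frame) // 2]     # autocorrelation via FFT
    ac = np.clip(ac, 0, None)                      # half-wave rectification
    stretched = np.interp(np.arange(len(ac)) / 2.0,
                          np.arange(len(ac)), ac)  # time-scale by factor 2
    return np.clip(ac - stretched, 0, None)        # cancel subharmonic peaks
```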


Conference of the International Speech Communication Association | 2014

On Annotation and Evaluation of Multi-modal Corpora in Affective Human-Computer Interaction

Markus Kächele; Martin Schels; Sascha Meudt; Viktor Kessler; Michael Glodek; Patrick Thiam; Stephan Tschechne; Günther Palm; Friedhelm Schwenker

In this paper, we discuss the topic of affective human-computer interaction from a data-driven viewpoint. This comprises the collection of respective databases with emotional contents, feasible annotation procedures and software tools that are able to conduct a suitable labeling process. A further issue discussed in this paper is the evaluation of the results that are computed using statistical classifiers. Based on this, we propose to use fuzzy memberships in order to model the affective user state, and we endorse respective fuzzy performance measures.
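
One plausible formalization of such a fuzzy performance measure, assuming both annotation and prediction are membership vectors over the affect classes, scores per-sample agreement as the overlap (element-wise minimum) of the two vectors. This is an illustrative sketch, not the measure defined in the paper.

```python
# Sketch of a fuzzy accuracy measure: overlap of membership vectors
# replaces the crisp hit/miss count. One possible formalization only.
import numpy as np

def fuzzy_accuracy(y_true, y_pred):
    """y_true, y_pred: (N, C) membership matrices with rows summing to <= 1."""
    overlap = np.minimum(y_true, y_pred).sum(axis=1)   # per-sample agreement
    return overlap.mean()

y_true = np.array([[0.7, 0.3, 0.0], [0.1, 0.8, 0.1]])  # fuzzy annotations
y_pred = np.array([[0.6, 0.4, 0.0], [0.0, 0.9, 0.1]])
print(fuzzy_accuracy(y_true, y_pred))                  # 0.9
```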


Artificial Neural Networks in Pattern Recognition | 2014

A New Multi-class Fuzzy Support Vector Machine Algorithm

Friedhelm Schwenker; Markus Frey; Michael Glodek; Markus Kächele; Sascha Meudt; Martin Schels; Miriam Schmidt

In this paper a novel approach to fuzzy support vector machines (SVM) for multi-class classification problems is presented. The proposed algorithm can benefit from fuzzy labeled data in the training phase and can determine fuzzy memberships for input data. It can be considered an extension of the traditional multi-class SVM for crisp labeled data, and it also extends the fuzzy SVM approach for fuzzy labeled training data in the two-class classification setting. Its behavior is demonstrated on three benchmark data sets. The achieved results motivate the inclusion of fuzzy labeled data into the training set for various tasks in pattern recognition and machine learning, such as the design of aggregation rules in multiple classifier systems, or in partially supervised learning.
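
The idea of exploiting fuzzy labels can be approximated with off-the-shelf tools, for example by training one-vs-rest SVMs with memberships as sample weights, as sketched below. Note that the paper's algorithm extends the multi-class SVM formulation itself rather than reusing this weighting trick; the class and its methods here are hypothetical.

```python
# Sketch: fuzzy labels approximated via per-sample weights in one-vs-rest
# SVMs. Assumes each class has examples on both sides of the 0.5 threshold.
import numpy as np
from sklearn.svm import SVC

class WeightedOvRFuzzySVM:
    """One-vs-rest SVMs trained with fuzzy memberships as sample weights."""

    def fit(self, X, memberships):             # memberships: (N, C), values in [0, 1]
        self.models = []
        for c in range(memberships.shape[1]):
            m = memberships[:, c]
            y = (m >= 0.5).astype(int)         # crisp target for this machine
            w = np.where(y == 1, m, 1.0 - m)   # membership acts as confidence
            self.models.append(
                SVC(probability=True).fit(X, y, sample_weight=w))
        return self

    def predict_membership(self, X):
        scores = np.column_stack(
            [m.predict_proba(X)[:, 1] for m in self.models])
        return scores / scores.sum(axis=1, keepdims=True)  # normalized memberships
```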


Toward Robotic Socially Believable Behaving Systems (I) | 2016

Going Further in Affective Computing: How Emotion Recognition Can Improve Adaptive User Interaction

Sascha Meudt; Miriam Schmidt-Wack; Frank Honold; Felix Schüssel; Michael Weber; Friedhelm Schwenker; Günther Palm

This article joins the fields of emotion recognition and human-computer interaction. While much work has been done on recognizing emotions, they are hardly used to improve a user's interaction with a system. Although the fields of affective computing and especially serious games already make use of detected emotions, they tend to provide application- and user-specific adaptations only on the task level. We present an approach of utilizing recognized emotions to improve the interaction itself, independent of the underlying application at hand. Examining the state of the art in emotion recognition research, and based on the architecture of a Companion-System, a generic approach for determining the main cause of an emotion within the history of interactions is presented, allowing a specific reaction and adaptation. Using such an approach could lead to systems that use emotions to improve not only the outcome of a task but the interaction itself, in order to be truly individual and empathic.

Collaboration


Dive into Sascha Meudt's collaboration.

Top Co-Authors

Andreas Wendemuth

Otto-von-Guericke University Magdeburg
