
Publications

Featured research published by Thurid Vogt.


International Conference on Multimedia and Expo | 2005

Comparing Feature Sets for Acted and Spontaneous Speech in View of Automatic Emotion Recognition

Thurid Vogt; Elisabeth André

We present a data-mining experiment on feature selection for automatic emotion recognition. Starting from more than 1000 features derived from pitch, energy and MFCC time series, the features most relevant with respect to the data are selected from this set by removing correlated features. The features selected for acted and realistic emotions are analyzed and show significant differences. All features are computed automatically, and we also contrast automatically with manually segmented units of analysis. A higher degree of automation did not prove to be a disadvantage in terms of recognition accuracy.
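The correlation-based filtering step described in the abstract can be sketched as follows. This is an illustrative reimplementation, not the authors' actual pipeline: the greedy keep-first strategy and the 0.95 threshold are assumed values.

```python
import numpy as np

def remove_correlated_features(X, threshold=0.95):
    """Greedily drop features whose absolute Pearson correlation with an
    already-kept feature exceeds `threshold`.
    X has shape (samples, features); returns indices of retained features."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

# Toy example: feature 1 is a scaled copy of feature 0 and gets dropped.
rng = np.random.default_rng(0)
f0 = rng.normal(size=100)
X = np.column_stack([f0, f0 * 2.0, rng.normal(size=100)])
print(remove_correlated_features(X))  # -> [0, 2]
```

Real feature-selection experiments would also score the surviving features against the class labels; this sketch only shows the redundancy-removal step.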


Perception and Interactive Technologies | 2008

EmoVoice -- A Framework for Online Recognition of Emotions from Voice

Thurid Vogt; Elisabeth André; Nikolaus Bee

We present EmoVoice, a framework for creating emotional speech corpora and classifiers, and for offline as well as real-time online speech emotion recognition. The framework is intended to be used by non-experts and therefore comes with an interface for creating one's own personal or application-specific emotion recogniser. Furthermore, we describe some applications and prototypes that already use our framework to track online emotional user states from voice information.


Affect and Emotion in Human-Computer Interaction | 2008

Automatic Recognition of Emotions from Speech: A Review of the Literature and Recommendations for Practical Realisation

Thurid Vogt; Elisabeth André; Johannes Wagner

In this article we give guidelines on how to address the major technical challenges of automatic emotion recognition from speech in human-computer interfaces, which include audio segmentation to find appropriate units for emotions, extraction of emotion-relevant features, classification of emotions, and training databases with emotional speech. Research so far has mostly dealt with offline evaluation of vocal emotions, and online processing has hardly been addressed. Online processing is, however, a necessary prerequisite for the realization of human-computer interfaces that analyze and respond to the user's emotions while he or she is interacting with an application. By means of a sample application, we demonstrate how the challenges arising from online processing may be solved. The overall objective of the paper is to help readers assess the feasibility of human-computer interfaces that are sensitive to the user's emotional voice and to provide them with guidelines on how to technically realize such interfaces.
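The processing stages the abstract names (segmentation into units, feature extraction, classification) can be outlined as a pipeline skeleton. All function names, the fixed-length windowing, and the threshold classifier below are illustrative assumptions, not the article's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    samples: list = field(default_factory=list)  # raw audio of one unit

def segment(audio, rate, window_s=3.0):
    """Chop the stream into fixed-length units; real systems may instead
    use voice-activity detection or word boundaries."""
    step = int(rate * window_s)
    return [Segment(audio[i:i + step]) for i in range(0, len(audio), step)]

def extract_features(seg):
    """Stand-in for pitch/energy/MFCC statistics over one segment."""
    n = len(seg.samples)
    mean = sum(seg.samples) / n
    energy = sum(x * x for x in seg.samples) / n
    return [mean, energy]

def classify(features):
    """Stand-in classifier: a trained model would go here."""
    return "neutral" if features[1] < 0.5 else "aroused"

audio = [0.1] * 48000  # three seconds of fake low-energy audio at 16 kHz
for seg in segment(audio, rate=16000):
    print(classify(extract_features(seg)))  # -> neutral
```

For online use, the same three stages would run incrementally over a live audio buffer instead of a prerecorded list.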


Affective Computing and Intelligent Interaction | 2007

A Systematic Comparison of Different HMM Designs for Emotion Recognition from Acted and Spontaneous Speech

Johannes Wagner; Thurid Vogt; Elisabeth André

In this work we elaborate on the use of hidden Markov models (HMMs) for speech emotion recognition as a dynamic alternative to static modelling approaches. Since previous work in this field does not yet establish which HMM design should be preferred for this task, we run a systematic analysis of different HMM configurations. Furthermore, experiments are carried out on an acted and a spontaneous emotion corpus, since little is known about the suitability of HMMs for spontaneous speech. Additionally, we consider two different segmentation levels, namely words and utterances. Results are compared with the outcome of a support vector machine classifier trained on global statistics features. While similar performance was observed on the utterance level for both databases, the HMM-based approach outperformed static classification on the word level. However, setting up general guidelines as to which kind of model is best suited proved rather difficult.
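As background for the HMM-based approach, a minimal sketch of maximum-likelihood classification with one discrete HMM per emotion class (scored via the forward algorithm) might look like this. The toy models and all parameter values are invented for illustration and are unrelated to the paper's configurations.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM with
    initial distribution pi, transition matrix A and emission matrix B
    (states x symbols), using the forward algorithm with scaling."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two toy 2-state emotion models sharing pi and A; classify a sequence by
# which model assigns it the higher likelihood.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_calm = np.array([[0.8, 0.2], [0.6, 0.4]])      # favours symbol 0
B_excited = np.array([[0.2, 0.8], [0.4, 0.6]])   # favours symbol 1
obs = [1, 1, 0, 1, 1]
scores = {"calm": forward_loglik(obs, pi, A, B_calm),
          "excited": forward_loglik(obs, pi, A, B_excited)}
print(max(scores, key=scores.get))  # -> excited
```

The design questions the paper studies (number of states, topology, word- vs. utterance-level units) correspond here to the shape of A and B and to how `obs` is segmented.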


Archive | 2011

The Automatic Recognition of Emotions in Speech

Anton Batliner; Björn W. Schuller; Dino Seppi; Stefan Steidl; Laurence Devillers; Laurence Vidrascu; Thurid Vogt; Vered Aharonson; Noam Amir

In this chapter, we focus on the automatic recognition of emotional states using acoustic and linguistic parameters as features and classifiers as tools to predict the ‘correct’ emotional states. We first sketch history and state of the art in this field; then we describe the process of ‘corpus engineering’, i.e. the design and the recording of databases, the annotation of emotional states, and further processing such as manual or automatic segmentation. Next, we present an overview of acoustic and linguistic features that are extracted automatically or manually. In the section on classifiers, we deal with topics such as the curse of dimensionality and the sparse data problem, classifiers, and evaluation. At the end of each section, we point out important aspects that should be taken into account for the planning or the assessment of studies. The subject area of this chapter is not emotions in some narrow sense but in a wider sense encompassing emotion-related states such as moods, attitudes, or interpersonal stances as well. We do not aim at an in-depth treatise of some specific aspects or algorithms but at an overview of approaches and strategies that have been used or should be used.


Affective Computing and Intelligent Interaction | 2007

I Know What I Did Last Summer: Autobiographic Memory in Synthetic Characters

João Dias; Wang Ching Ho; Thurid Vogt; Nathalie Beeckman; Ana Paiva; Elisabeth André

According to traditional animators, the art of building believable characters resides in the ability to successfully portray a character's behaviour as the result of its internal emotions, intentions and thoughts. Following this direction, we want our agents to be able to explicitly talk about their internal thoughts and report their personal past experiences. In order to achieve this, we look at a specific type of episodic long-term memory. This paper describes the integration of Autobiographic Memory into FAtiMA, an emotional agent architecture that generates emotions from a subjective appraisal of events.


Affective Computing and Intelligent Interaction | 2009

PAD-based multimodal affective fusion

Stephen W. Gilroy; Marc Cavazza; Marcus Niiranen; Elisabeth André; Thurid Vogt; Jérôme Urbain; M. Benayoun; Hartmut Seichter; Mark Billinghurst

The study of multimodality is comparatively less developed for affective interfaces than for their traditional counterparts. However, one condition for the successful development of affective interface technologies is the development of frameworks for real-time multimodal fusion. In this paper, we describe an approach to multimodal affective fusion which relies on a dimensional model, Pleasure-Arousal-Dominance (PAD), to support the fusion of affective modalities, with each input modality represented as a PAD vector. We describe how this model supports both affective content fusion and temporal fusion within a unified approach. We report results from early user studies which confirm the existence of a correlation between measured affective input and user temperament scores.
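One simple way to combine per-modality PAD vectors is a confidence-weighted average. This is a hypothetical sketch of the general idea, not the paper's actual fusion scheme; the weights and vectors are made up.

```python
import numpy as np

def fuse_pad(estimates):
    """Fuse per-modality affect estimates into one PAD vector.
    estimates: list of (pad_vector, confidence) pairs, where pad_vector
    holds (pleasure, arousal, dominance) values."""
    vecs = np.array([v for v, _ in estimates], dtype=float)
    w = np.array([c for _, c in estimates], dtype=float)
    return (w[:, None] * vecs).sum(axis=0) / w.sum()

speech = (np.array([0.4, 0.8, 0.1]), 0.9)  # aroused voice, high confidence
face = (np.array([0.6, 0.2, 0.3]), 0.3)    # calmer face, lower confidence
fused = fuse_pad([speech, face])
print(fused.round(2))  # -> [0.45 0.65 0.15]
```

Temporal fusion could be layered on top by decaying each modality's confidence as its last estimate ages, so stale readings gradually lose influence.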


International Conference on Computer and Electrical Engineering | 2009

Evaluation and Discussion of Multi-modal Emotion Recognition

Ahmad Rabie; Britta Wrede; Thurid Vogt; Marc Hanheide

Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine and human-robot interaction. It provides a means to incorporate non-verbal feedback into the interactional course. Humans express their emotional state rather unconsciously, exploiting their different natural communication modalities. In this paper, we present a first study on multimodal recognition of emotions from auditory and visual cues for interaction interfaces. We recognize seven classes of basic emotions by means of visual analysis of talking faces. In parallel, the audio signal is analyzed on the basis of the intonation of the verbal articulation. We compare the performance of state-of-the-art recognition systems on the DaFEx database for both complementary modalities and discuss these results with regard to the theoretical background and possible fusion schemes in real-world multimodal interfaces.


Advances in Computer Entertainment Technology | 2008

An affective model of user experience for interactive art

Stephen W. Gilroy; Marc Cavazza; Rémi Chaignon; Satu-Marja Mäkelä; Markus Niranen; Elisabeth André; Thurid Vogt; Jérôme Urbain; Hartmut Seichter; Mark Billinghurst; M. Benayoun

The development of Affective Interface technologies makes it possible to envision a new generation of Digital Arts and Entertainment applications, in which interaction will be based directly on the analysis of user experience. In this paper, we describe an approach to the development of Multimodal Affective Interfaces that supports real-time analysis of user experience as part of an Augmented Reality Art installation. The system relies on a PAD dimensional model of emotion to support the fusion of affective modalities, each input modality being represented as a PAD vector. A further advantage of the PAD model is that it can support a representation of affective responses that relate to aesthetic impressions.


Virtual Reality Software and Technology | 2010

Exploring the usability of immersive interactive storytelling

Jean-Luc Lugrin; Marc Cavazza; David Pizzi; Thurid Vogt; Elisabeth André

The entertainment potential of Virtual Reality is yet to be fully realised. In recent years, this potential has been described through the Holodeck™ metaphor, without, however, addressing the issue of content creation and gameplay. Recent progress in Interactive Narrative technology makes it possible to envision immersive systems. Yet, little is known about the usability of such systems or which paradigms should be adopted for gameplay and interaction. We report user experiments carried out with a fully immersive Interactive Narrative system based on a CAVE-like system, which explore two interactivity paradigms for user involvement (Actor and Ghost). Our results confirm the potential of immersive Interactive Narratives in terms of performance as well as user acceptance.

Collaboration

Dive into Thurid Vogt's collaborations.

Top Co-Authors


Jonghwa Kim (University of Augsburg)
Stefan Steidl (University of Erlangen-Nuremberg)
Dino Seppi (Katholieke Universiteit Leuven)