Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Elisabeth André is active.

Publication


Featured research published by Elisabeth André.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Emotion recognition based on physiological changes in music listening

Jonghwa Kim; Elisabeth André

Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
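The hierarchical scheme below is a minimal sketch of the dichotomous-classification idea from this abstract: split the four-class problem along the two axes of the 2D emotion model, deciding arousal (high/low) first and valence (positive/negative) within each arousal branch. It assumes precomputed physiological feature vectors and substitutes scikit-learn's standard LDA for the paper's extended pLDA; the class structure and label names are illustrative only, not the authors' code.

```python
# Minimal sketch (not the authors' code) of emotion-specific multilevel dichotomous
# classification (EMDC): exploit the dichotomic structure of the 2D emotion model by
# deciding arousal first and then valence within each arousal branch.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class EMDCClassifier:
    """Three binary classifiers arranged hierarchically over the arousal/valence plane."""

    def __init__(self):
        self.arousal_clf = LinearDiscriminantAnalysis()
        self.valence_clf = {a: LinearDiscriminantAnalysis() for a in ("high", "low")}

    def fit(self, X, arousal, valence):
        # X: (n_samples, n_features) physiological feature vectors;
        # arousal in {"high", "low"}, valence in {"positive", "negative"} per sample.
        X, arousal, valence = np.asarray(X), np.asarray(arousal), np.asarray(valence)
        self.arousal_clf.fit(X, arousal)
        for a in ("high", "low"):
            mask = arousal == a
            self.valence_clf[a].fit(X[mask], valence[mask])
        return self

    def predict(self, X):
        X = np.asarray(X)
        labels = []
        for x, a in zip(X, self.arousal_clf.predict(X)):
            v = self.valence_clf[a].predict(x.reshape(1, -1))[0]
            labels.append(f"{v}/{a} arousal")   # e.g. "positive/high arousal"
        return labels
```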


IEEE Intelligent Systems | 2002

Creating interactive virtual humans: some assembly required

Jonathan Gratch; Jeff Rickel; Elisabeth André; Justine Cassell; Eric Petajan; Norman I. Badler

Discusses some of the key issues that must be addressed in creating virtual humans, or androids. As a first step, we overview the issues and available tools in three key areas of virtual human research: face-to-face conversation, emotions and personality, and human figure animation. Assembling a virtual human is still a daunting task, but the building blocks are getting bigger and better every day.


International Conference on Multimedia and Expo | 2005

From Physiological Signals to Emotions: Implementing and Comparing Selected Methods for Feature Extraction and Classification

Johannes Wagner; Jonghwa Kim; Elisabeth André

Little attention has been paid so far to physiological signals for emotion recognition compared to audio-visual emotion channels, such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity and respiration changes. After calculating a sufficient number of features from the raw signals, several feature selection/reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, the linear discriminant function, k-nearest neighbour and multilayer perceptron, are then used to perform supervised classification.
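As a rough illustration of the pipeline described above, the sketch below wires a feature-reduction step in front of the three classifier families named in the abstract, using scikit-learn stand-ins (an ANOVA-based SelectKBest in place of the paper's selection/reduction methods). The feature matrix X and emotion labels y are assumed to be given; nothing here is the authors' implementation.

```python
# Hedged sketch: feature reduction followed by the three classifiers mentioned in the
# abstract (linear discriminant function, k-nearest neighbour, multilayer perceptron).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X, y, n_features=30):
    """Score each classifier on the reduced feature set with cross-validation."""
    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000),
    }
    results = {}
    for name, clf in classifiers.items():
        pipe = make_pipeline(
            StandardScaler(),                       # normalise the raw features
            SelectKBest(f_classif, k=n_features),   # keep the most significant features
            clf,
        )
        results[name] = cross_val_score(pipe, X, y, cv=5).mean()
    return results
```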


HCI '98 Proceedings of HCI on People and Computers XIII | 1998

The Persona Effect: How Substantial Is It?

Susanne van Mulken; Elisabeth André; Jochen Müller

Personification of interface agents has been speculated to have several advantages, such as a positive effect on agent credibility and on the perception of the learning experience. However, important questions less often addressed so far are what effect personification has on more objective measures, such as comprehension and recall, and furthermore, under what circumstances this effect (if any) occurs. We performed an empirical study with adult participants to examine the effect of the PPP Persona not only on subjective but also on objective measures. In addition, we tested it with both technical and non-technical domain information. The results of the study indicate that the data from the subjective measures support the so-called persona effect for the technical information but not for the non-technical information. With regard to the objective measures, however, neither a positive nor a negative effect could be found. Implications for software development are discussed.


International Conference on Multimedia and Expo | 2005

Comparing Feature Sets for Acted and Spontaneous Speech in View of Automatic Emotion Recognition

Thurid Vogt; Elisabeth André

We present a data-mining experiment on feature selection for automatic emotion recognition. Starting from more than 1000 features derived from pitch, energy and MFCC time series, the most relevant features with respect to the data are selected from this set by removing correlated features. The features selected for acted and realistic emotions are analyzed and show significant differences. All features are computed automatically, and we also contrast automatically derived units of analysis with manually defined ones. A higher degree of automation did not prove to be a disadvantage in terms of recognition accuracy.
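One common way to realize the correlation-based pruning mentioned above is sketched below. It is an assumption about the general technique, not the paper's exact procedure, and expects the 1000+ pitch/energy/MFCC features as columns of a pandas DataFrame.

```python
# Illustrative sketch: drop one feature of every pair whose absolute correlation exceeds
# a threshold, so that only weakly correlated features remain for classification.
import numpy as np
import pandas as pd

def drop_correlated_features(features: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Return a copy of `features` with strongly correlated columns removed."""
    corr = features.corr().abs()
    # Look only at the upper triangle so each pair is considered exactly once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return features.drop(columns=to_drop)
```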


Applied Artificial Intelligence | 1999

Employing ai methods to control the behavior of animated interface agents

Elisabeth André; Thomas Rist; Jochen Müller

Life-like characters are increasingly gaining the attention of researchers and commercial developers of user interfaces. A strong argument in favor of using such characters in the interface is the rich repertoire of options they offer, enabling the emulation of communication styles common in human-human dialog. This contribution presents a framework for the development of presentation agents, which can be used for a broad range of applications including personalized information delivery from the WWW.


Perception and Interactive Technologies | 2008

EmoVoice -- A Framework for Online Recognition of Emotions from Voice

Thurid Vogt; Elisabeth André; Nikolaus Bee

We present EmoVoice, a framework for emotional speech corpus and classifier creation and for offline as well as real-time online speech emotion recognition. The framework is intended to be used by non-experts and therefore comes with an interface for creating one's own personal or application-specific emotion recogniser. Furthermore, we describe some applications and prototypes that already use our framework to track online emotional user states from voice information.
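The loop below sketches the kind of online recognition cycle such a framework supports: segment the incoming audio stream, extract acoustic features per segment, and classify each segment with a pretrained recogniser. The helpers next_audio_segment, extract_features and recogniser are hypothetical placeholders, not EmoVoice's actual API.

```python
# Hypothetical online emotion-recognition loop; all helpers are placeholders.
def run_online_recognition(next_audio_segment, extract_features, recogniser, on_emotion):
    """Continuously map voice segments to emotion labels and report them via callback."""
    for segment in next_audio_segment():           # e.g. voice-activity-detected chunks
        features = extract_features(segment)        # pitch, energy, MFCC statistics, ...
        emotion = recogniser.predict([features])[0] # pretrained classifier
        on_emotion(emotion)                         # push the label to the application
```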


Archive | 2004

Affective Dialogue Systems

Elisabeth André; Laila Dybkjær; Wolfgang Minker; Paul Heisterkamp

The monitoring of emotional user states can help to assess the progress of human-machine communication. If we look at specific databases, however, we are faced with several problems: users behave differently, even within one and the same setting, and some phenomena are sparse; thus it is not possible to model and classify them reliably. We exemplify these difficulties on the basis of SympaFly, a database with dialogues between users and a fully automatic speech dialogue telephone system for flight reservation and booking, and discuss possible remedies.


Intelligent User Interfaces | 1997

Adding animated presentation agents to the interface

Thomas Rist; Elisabeth André; Jochen Müller

A growing number of research projects in both academia and industry have started to investigate the use of animated agents in the interface. Such agents, either based on real video, cartoon-style drawings or even model-based 3D graphics, are likely to become integral parts of future user interfaces. To be useful, however, interface agents have to be intelligent in the sense that they exhibit a reasonable behavior. In this paper, we present a system that uses a lifelike character, the so-called PPP Persona, to present multimedia material to the user. This material has been either automatically generated or fetched from the web and modified if necessary. The underlying approach is based on our previous work on multimedia presentation planning. This core approach is complemented by additional concepts, namely the temporal coordination of presentation acts and the consideration of the human-factors dimension of the added visual metaphor.


Adaptive Agents and Multi-Agent Systems | 1998

Integrating reactive and scripted behaviors in a life-like presentation agent

Elisabeth André; Thomas Rist; Jochen Müller

Animated agents based either on real video, cartoon-style drawings or even model-based 3D graphics offer great promise for computer-based presentations as they make presentations more lively and appealing and allow for the emulation of conversation styles known from human-human communication. In this paper, we describe a life-like interface agent which presents multimedia material to the user following the directives of a script. The overall behavior of the presentation agent is partly determined by such a script, and partly by the agent's self-behavior. In our approach, the agent's behavior is defined in a declarative specification language. Behavior specifications are used to automatically generate a control module for an agent display system. The first part of the paper describes the generation process, which involves AI planning and a two-step compilation. Since the manual creation of presentation scripts is tedious and error-prone, we also address the automated generation of presentation scripts which may be forwarded to the interface agent. The second part of the paper presents an approach for multimedia presentation design which combines hierarchical planning with temporal reasoning.
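To make the idea of a declarative behavior specification concrete, the sketch below shows one way self-behaviors could be written as data and turned into a simple runtime controller that mixes scripted directives with idle-time self-behaviors. The rule format, class names and trigger events are hypothetical illustrations, not the paper's specification language or compilation process.

```python
# Hypothetical sketch: declarative behaviour rules "compiled" into a dispatch table for a
# presentation-agent controller that prefers scripted directives over self-behaviours.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BehaviourRule:
    trigger: str              # event that activates the behaviour, e.g. "idle", "speak"
    actions: List[str]        # primitive presentation acts to play, in order
    interruptible: bool = True

SELF_BEHAVIOURS = [
    BehaviourRule("idle", ["blink", "shift_weight"]),
    BehaviourRule("speak", ["lip_sync", "beat_gesture"], interruptible=False),
]

@dataclass
class PersonaController:
    rules: dict = field(default_factory=dict)   # trigger -> rule, the "compiled" table
    script: list = field(default_factory=list)  # queue of scripted presentation acts

    @classmethod
    def compile(cls, behaviour_rules):
        return cls(rules={r.trigger: r for r in behaviour_rules})

    def enqueue_script(self, acts):
        self.script.extend(acts)

    def next_actions(self, event="idle"):
        if self.script:                          # scripted directives take priority
            return [self.script.pop(0)]
        rule = self.rules.get(event)
        return rule.actions if rule else []
```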

Collaboration


Dive into Elisabeth André's collaborations.

Top Co-Authors

Thurid Vogt

University of Augsburg

Jonghwa Kim

University of Augsburg
