
Publications


Featured research published by Sascha Fagel.


Speech Communication | 2004

An Articulation Model for Audiovisual Speech Synthesis - Determination, Adjustment, Evaluation

Sascha Fagel; Caroline Clemens

The authors present a visual articulation model for speech synthesis and a method to obtain it from measured data. This visual articulation model is integrated into MASSY, the Modular Audiovisual Speech SYnthesizer, and used to control visible articulator movements described by six motion parameters: one for the up-down movement of the lower jaw, three for the lips and two for the tongue. The visual articulation model implements the dominance principle as suggested by Löfqvist [Löfqvist, A., 1990. Speech as audible gestures. In: Hardcastle, W.J., Marchal, A. (Eds.), Speech Production and Speech Modeling. Kluwer Academic Publishers, Dordrecht, pp. 289–322]. The parameter values for the model derive from measured articulator positions. To obtain these data, the articulation movements of a female speaker were measured with the 2D-articulograph AG100 and simultaneously filmed. The visual articulation model is adjusted and evaluated by testing word recognition in noise.
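The dominance principle mentioned in the abstract can be illustrated with a minimal sketch: per-segment targets for one motion parameter (for example, jaw opening) are blended over time, each weighted by a dominance function centered on its segment. The exponential dominance shape and all numbers below are illustrative assumptions, not values from the paper or from MASSY.

# Illustrative sketch of dominance-based blending of articulatory targets,
# in the spirit of Löfqvist's dominance principle. All shapes and values
# are assumptions for demonstration only.
import numpy as np

def dominance(t, center, alpha=1.0, theta=0.01):
    # Dominance of a segment peaks at its temporal center and decays with distance (ms).
    return alpha * np.exp(-theta * np.abs(t - center))

def blend_parameter(t, centers, targets):
    # Weighted average of per-segment targets for one motion parameter.
    d = np.array([dominance(t, c) for c in centers])          # (n_segments, n_samples)
    return (d * np.array(targets)[:, None]).sum(axis=0) / d.sum(axis=0)

t = np.linspace(0, 600, 601)          # time axis in ms
centers = [100, 300, 500]             # hypothetical segment midpoints (ms)
targets = [0.2, 0.9, 0.4]             # hypothetical jaw-opening targets per segment
jaw_trajectory = blend_parameter(t, centers, targets)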


International Conference on Multimodal Interfaces | 2008

Evaluating talking heads for smart home systems

Christine Kühnel; Benjamin Weiss; Ina Wechsung; Sascha Fagel; Sebastian Möller

In this paper we report the results of a user study evaluating talking heads in the smart home domain. Three noncommercial talking head components are linked to two freely available speech synthesis systems, resulting in six different combinations. The influence of head and voice components on overall quality is analyzed as well as the correlation between them. Three different ways to assess overall quality are presented. It is shown that these three are consistent in their results. Another important result is that in this design speech and visual quality are independent of each other. Furthermore, a linear combination of both quality aspects models overall quality of talking heads to a good degree.
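The reported finding that a linear combination of speech and visual quality models overall quality well can be illustrated with a small least-squares fit. The rating values below are invented for demonstration; only the modelling idea follows the abstract.

# Minimal sketch: fit overall quality as a linear combination of speech and
# visual quality. The rating values are hypothetical, not data from the study.
import numpy as np

speech_q  = np.array([3.1, 4.0, 2.5, 3.8, 4.4, 2.9])   # hypothetical per-condition ratings
visual_q  = np.array([2.8, 3.5, 3.9, 2.2, 4.1, 3.0])
overall_q = np.array([3.0, 3.9, 3.1, 3.0, 4.3, 2.9])

# Least squares: overall ~ w_speech*speech + w_visual*visual + bias
X = np.column_stack([speech_q, visual_q, np.ones_like(speech_q)])
coef, *_ = np.linalg.lstsq(X, overall_q, rcond=None)
w_speech, w_visual, bias = coef
print(f"overall ~ {w_speech:.2f}*speech + {w_visual:.2f}*visual + {bias:+.2f}")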


EURASIP Journal on Audio, Speech, and Music Processing | 2009

Animating virtual speakers or singers from audio: lip-synching facial animation

Sascha Fagel; Gérard Bailly; Barry-John Theobald

The aim of this special issue is to provide a detailed description of state-of-the-art systems for animating faces during speech, and to identify new techniques that have recently emerged from both the audiovisual speech and computer graphics research communities. This special issue is a follow-up to the first LIPS Visual Speech Synthesis Challenge held as a special session at INTERSPEECH 2008 in Brisbane, Australia. As a motivation for the present special issue, we report on the LIPS Challenge with respect to the synthesis techniques and, more importantly, the methods and results of the subjective evaluation.


Proceedings of the 3rd International Workshop on Affective Interaction in Natural Environments | 2010

Facilitative effects of communicative gaze and speech in human-robot cooperation

Jean-David Boucher; Jocelyne Ventre-Dominey; Peter Ford Dominey; Sascha Fagel; Gérard Bailly

Human interaction in natural environments relies on a variety of perceptual cues to guide and stabilize the interaction. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should be able to manipulate and exploit these communicative cues in cooperation with their human partners. In the current research we identify a set of principal communicative speech and gaze cues in human-human interaction, and then formalize and implement these cues in a humanoid robot. The objective of the work is to render the humanoid robot more human-like in its ability to communicate with humans. The first phase of this research, described here, is to provide the robot with a generative capability, that is, to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. We demonstrate the pertinence of these cues through statistical measures of human action times in a cooperative task: gaze significantly facilitates cooperation, as measured by human response times.


Proceedings of the Third COST 2102 International Training School Conference on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues | 2010

Speech, gaze and head motion in a face-to-face collaborative task

Sascha Fagel; Gérard Bailly

In the present work we observe two subjects interacting in a collaborative task on a shared environment. One goal of the experiment is to measure the change in gaze behavior when one interactant is wearing dark glasses and hence his/her gaze is not visible to the other one. The results show that if one subject wears dark glasses while telling the other subject the position of a certain object, the other subject needs significantly more time to locate and move this object. Hence, the eye gaze of one subject looking at a certain object, when visible, speeds up the localization of that object by the other subject. The second goal of the currently ongoing work is to collect data on the multimodal behavior of one of the subjects by means of audio recording, eye gaze and head motion tracking, in order to build a model that can be used to control a robot in a comparable scenario in future experiments.


Conference of the International Speech Communication Association | 2008

LIPS2008: Visual Speech Synthesis Challenge

Barry-John Theobald; Sascha Fagel; Gérard Bailly; Frédéric Elisei


Archive | 2006

Emotional McGurk Effect

Sascha Fagel


Conference of the International Speech Communication Association | 2007

Visual information and redundancy conveyed by internal articulator dynamics in synthetic audiovisual speech

Katja Grauwinkel; Britta Dewitt; Sascha Fagel


Archive | 2006

Joint Audio-Visual Unit Selection - the JAVUS Speech Synthesizer

Sascha Fagel


International Conference on Auditory-Visual Speech Processing, AVSP 2007 | 2007

Intelligibility of natural and 3D-cloned German speech

Sascha Fagel; Gérard Bailly; Frédéric Elisei

Collaboration


Dive into Sascha Fagel's collaborations.

Top Co-Authors

Gérard Bailly | Centre national de la recherche scientifique
Benjamin Weiss | Technical University of Berlin
Christine Kühnel | Technical University of Berlin
Katja Grauwinkel | Technical University of Berlin
Caroline Clemens | Technical University of Berlin
Katja Madany | Technical University of Berlin
Walter F. Sendlmeier | Technical University of Berlin