Stephan Raidt
University of Grenoble
Publication
Featured research published by Stephan Raidt.
Speech Communication | 2010
Gérard Bailly; Stephan Raidt; Frédéric Elisei
In this paper, we describe two series of experiments that examine audiovisual face-to-face interaction between naive human viewers and either a human interlocutor or a virtual conversational agent. The main objective is to analyze the interplay between speech activity and mutual gaze patterns during mediated face-to-face interactions. We first quantify the impact of the deictic gaze patterns of our agent. We further aim to refine our experimental knowledge of mutual gaze patterns during human face-to-face interaction by using new technological devices such as non-invasive eye trackers and pinhole cameras, and to quantify the impact of a selection of cognitive states and communicative functions on the recorded gaze patterns.
intelligent virtual agents | 2007
Antoine Picot; Gérard Bailly; Frédéric Elisei; Stephan Raidt
We present a system for controlling the eye gaze of a virtual embodied conversational agent that is able to perceive the physical environment in which it interacts. The system is inspired by known components of the human visual attention system and reproduces its limitations in terms of visual acuity, sensitivity to movement, short-term memory, and object pursuit. The aim of this coupling between animation and visual scene analysis is to provide a sense of presence and mutual attention to human interlocutors. After a brief introduction to this research project and a focused state of the art, we detail the components of our system and compare simulation results with eye gaze data collected from viewers observing the same natural scenes.
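The attention model described in this abstract can be pictured as a scoring loop over perceived objects, where salience is attenuated by eccentricity and by a bounded, decaying short-term memory. The following is a minimal illustrative sketch, not the authors' implementation; the class names, parameters, and weighting constants (GazeController, the acuity falloff, a memory capacity of four items) are hypothetical assumptions.

```python
# Hypothetical sketch of a saliency-driven gaze controller with human-like
# limitations (eccentricity-dependent acuity, motion sensitivity, bounded
# short-term memory). Names and constants are illustrative, not from the paper.
from dataclasses import dataclass, field
import math

@dataclass
class SceneObject:
    name: str
    x: float        # horizontal position in the visual field (degrees)
    y: float        # vertical position (degrees)
    motion: float   # instantaneous speed (degrees/s)

@dataclass
class GazeController:
    gaze_x: float = 0.0
    gaze_y: float = 0.0
    memory: dict = field(default_factory=dict)  # object name -> activation
    memory_capacity: int = 4                    # bounded short-term memory
    decay: float = 0.9                          # per-step activation decay

    def acuity(self, obj: SceneObject) -> float:
        # Acuity falls off with eccentricity from the current gaze direction.
        ecc = math.hypot(obj.x - self.gaze_x, obj.y - self.gaze_y)
        return 1.0 / (1.0 + 0.3 * ecc)

    def salience(self, obj: SceneObject) -> float:
        # Moving objects attract attention even in the periphery; remembered
        # objects are inhibited so gaze keeps exploring the scene.
        novelty = 1.0 - self.memory.get(obj.name, 0.0)
        return (self.acuity(obj) + 0.5 * obj.motion) * novelty

    def step(self, scene: list) -> SceneObject:
        # Decay memory traces, pick the most salient object, and pursue it.
        self.memory = {k: v * self.decay for k, v in self.memory.items()}
        target = max(scene, key=self.salience)
        self.gaze_x, self.gaze_y = target.x, target.y
        self.memory[target.name] = 1.0
        # Enforce the capacity limit by dropping the weakest traces.
        if len(self.memory) > self.memory_capacity:
            excess = sorted(self.memory.items(), key=lambda kv: kv[1])[:-self.memory_capacity]
            for k, _ in excess:
                del self.memory[k]
        return target

controller = GazeController()
frame = [SceneObject("card", 5.0, 2.0, 0.0), SceneObject("hand", -8.0, 1.0, 3.0)]
print(controller.step(frame).name)  # the moving hand wins the attention race
```

Inhibiting remembered objects is one simple way to reproduce the short-memory limitation the abstract mentions; a fuller model would also separate saccades from smooth pursuit.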
web intelligence | 2007
Stephan Raidt; Gérard Bailly; Frédéric Elisei
We present the analysis of multimodal data gathered during realistic face-to-face interactions between a target speaker and a number of interlocutors. The videos and gaze of both interlocutors were monitored with an experimental setup using coupled cameras and screens equipped with eye trackers. With the aim of understanding the functions of gaze in social interaction and of developing a gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behaviour.
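As an illustration of the kind of analysis this setup enables, the sketch below tallies where gaze lands for each annotated cognitive state; the sample data, region labels, and state names are invented for the example and are not taken from the study.

```python
# Illustrative sketch (not the authors' code): given eye-tracker samples
# labelled with a gazed region and the speaker's cognitive state, compute
# the share of fixations per region within each state.
from collections import Counter, defaultdict

# Hypothetical samples: (cognitive_state, gazed_region)
samples = [
    ("speaking", "eyes"), ("speaking", "mouth"), ("speaking", "eyes"),
    ("listening", "mouth"), ("listening", "mouth"), ("thinking", "away"),
]

by_state = defaultdict(Counter)
for state, region in samples:
    by_state[state][region] += 1

for state, counts in by_state.items():
    total = sum(counts.values())
    shares = {region: round(n / total, 2) for region, n in counts.items()}
    print(state, shares)  # e.g. speaking {'eyes': 0.67, 'mouth': 0.33}
```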
advances in multimedia | 2006
Gérard Bailly; Frédéric Elisei; Stephan Raidt; Alix Casari; Antoine Picot
We describe here our efforts to model the multimodal signals exchanged by interlocutors when interacting face-to-face. These data are then used to control embodied conversational agents able to engage in realistic face-to-face interaction with human partners. This paper focuses on the generation and rendering of realistic gaze patterns. The problems encountered and the solutions proposed call for a stronger coupling between research fields such as audiovisual signal processing, linguistics, and psychosocial sciences for the sake of efficient and realistic human-computer interaction.
intelligent virtual agents | 2007
Stephan Raidt; Gérard Bailly; Frédéric Elisei
We present the analysis of multimodal data gathered during realistic face-to-face interactions between a target speaker and a number of interlocutors. Videos and gaze were monitored with an experimental setup using coupled cameras and screens with integrated eye trackers. With the aim of understanding the functions of gaze in social interaction and of developing a coherent gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behavior.
ambient intelligence | 2005
Stephan Raidt; Gérard Bailly; Frédéric Elisei
We present a series of experiments that involve face-to-face interaction between an embodied conversational agent (ECA) and a human interlocutor. The main challenge is to provide the interlocutor with implicit and explicit signs of mutual interest and attention, and of awareness of the environmental conditions in which the interaction takes place. A video-realistic talking head with independent head and eye movements was used as a talking agent interacting with a user during a simple card game offering different levels of help and guidance. We analyzed user performance and how the quality of assistance given by the embodied conversational agent was perceived. The experiment showed that users can benefit from its presence and its facial deictic cues.
AVSP | 2007
Stephan Raidt; Gérard Bailly; Frédéric Elisei
Revue Française de Linguistique Appliquée | 2008
Gérard Bailly; Frédéric Elisei; Stephan Raidt
Archive | 2006
Gérard Bailly; Frédéric Elisei; Stephan Raidt
AVSP | 2007
Frédéric Elisei; Gérard Bailly; Alix Casari; Stephan Raidt