Frédéric Elisei
University of Grenoble
Publications
Featured research published by Frédéric Elisei.
Speech Communication | 2010
Pierre Badin; Yuliya Tarabalka; Frédéric Elisei; Gérard Bailly
Lip reading relies on visible articulators to ease speech understanding. However, the lips and face alone provide very incomplete phonetic information: the tongue, which is generally not entirely visible, carries an important part of the articulatory information that lip reading cannot access. The question is thus whether direct, full vision of the tongue allows tongue reading. We therefore generated a set of audiovisual VCV stimuli with an audiovisual talking head that can display all speech articulators, including the tongue, in an augmented speech mode. The talking head is a virtual clone of a human speaker, and the articulatory movements were captured on this speaker using ElectroMagnetic Articulography (EMA). These stimuli were played to subjects in audiovisual perception tests under various presentation conditions (audio signal alone, audiovisual signal with a profile cutaway display with or without the tongue, complete face) and at various signal-to-noise ratios. The results indicate: (1) the possibility of implicit learning of tongue reading, (2) better consonant identification with the cutaway presentation with the tongue than without it, (3) no significant difference between the cutaway presentation with the tongue and the more ecological rendering of the complete face, (4) a predominance of lip reading over tongue reading, but (5) a certain natural human capability for tongue reading when the audio signal is strongly degraded or absent. We conclude that these tongue reading capabilities could be used in speech therapy for children with delayed speech, in perception and production rehabilitation for hearing-impaired children, and in pronunciation training for second language learners.
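A minimal sketch of how audio stimuli can be degraded to a target signal-to-noise ratio, assuming the standard power-ratio definition of SNR; this is an illustration of the kind of stimulus preparation described above, not the authors' code, and the function name is invented:

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Mix `noise` into `speech` at the requested SNR in dB.

        speech, noise: 1-D NumPy float arrays; noise must be at least
        as long as speech.
        """
        noise = noise[: len(speech)]  # align lengths
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        # Choose a gain so that 10*log10(p_speech / (gain**2 * p_noise)) == snr_db.
        gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
        return speech + gain * noise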
Speech Communication | 2010
Gérard Bailly; Stephan Raidt; Frédéric Elisei
In this paper, we describe two series of experiments that examine audiovisual face-to-face interaction between naive human viewers and either a human interlocutor or a virtual conversational agent. The main objective is to analyze the interplay between speech activity and mutual gaze patterns during mediated face-to-face interactions. We first quantify the impact of the deictic gaze patterns of our agent. We further aim to refine our experimental knowledge of mutual gaze patterns during human face-to-face interaction, using new technological devices such as non-invasive eye trackers and pinhole cameras, and to quantify the impact of a selection of cognitive states and communicative functions on the recorded gaze patterns.
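As an illustration of the kind of gaze analysis such experiments call for, here is a minimal sketch that extracts mutual-gaze episodes from two per-frame boolean streams, one per partner, True when that partner looks at the other's face. The representation and function name are assumptions for illustration, not the authors' pipeline:

    import numpy as np

    def mutual_gaze_episodes(gaze_a, gaze_b, fps):
        """Return (start_s, end_s) intervals where both partners gaze at each other.

        gaze_a, gaze_b: boolean NumPy arrays, one sample per video frame.
        """
        mutual = gaze_a & gaze_b
        # Rising and falling edges of the mutual-gaze signal.
        edges = np.diff(mutual.astype(int))
        starts = np.flatnonzero(edges == 1) + 1
        ends = np.flatnonzero(edges == -1) + 1
        if mutual[0]:
            starts = np.r_[0, starts]
        if mutual[-1]:
            ends = np.r_[ends, len(mutual)]
        return [(s / fps, e / fps) for s, e in zip(starts, ends)]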
EURASIP Journal on Audio, Speech, and Music Processing | 2009
Gérard Bailly; Oxana Govokhina; Frédéric Elisei; Gaspard Breton
We describe the control, shape, and appearance models that are built using an original photogrammetric method to capture the characteristics of speaker-specific facial articulation, anatomy, and texture. Two original contributions are put forward: a trainable trajectory formation model that predicts the articulatory trajectories of a talking face from phonetic input, and a texture model that computes a texture for each 3D facial shape according to articulation. Using motion capture data from different speakers and module-specific evaluation procedures, we show that this cloning system restores detailed idiosyncrasies and the global coherence of visible articulation. Results of a subjective evaluation of the global system with competing trajectory formation models are also presented and discussed.
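As a toy illustration only of what a trajectory formation module does (the paper's trainable model is considerably more sophisticated; the targets, parameter, and smoothing below are invented), one can hold a per-phoneme articulatory target over each phone's duration and low-pass the result to mimic coarticulatory smoothing:

    import numpy as np

    # Hypothetical targets for a single articulatory parameter (e.g. lip aperture).
    TARGETS = {"a": 0.9, "p": 0.0, "u": 0.4}

    def toy_trajectory(phones, durations_s, frame_rate=100.0, smooth_frames=7):
        """Piecewise-constant targets per phone, then moving-average smoothing."""
        frames = []
        for ph, dur in zip(phones, durations_s):
            frames += [TARGETS[ph]] * int(round(dur * frame_rate))
        traj = np.array(frames)
        kernel = np.ones(smooth_frames) / smooth_frames
        return np.convolve(traj, kernel, mode="same")

    traj = toy_trajectory(["a", "p", "u"], [0.12, 0.08, 0.15])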
Web Intelligence | 2007
Stephan Raidt; Gérard Bailly; Frédéric Elisei
We present an analysis of multimodal data gathered during realistic face-to-face interaction between a target speaker and a number of interlocutors. The video and gaze of both interlocutors were monitored with an experimental setup using coupled cameras and screens equipped with eye trackers. With the aim of understanding the functions of gaze in social interaction and of developing a gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behaviour.
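Eye-tracker streams are usually segmented into fixations before such behavioural analyses; below is a minimal dispersion-threshold (I-DT) sketch of that preprocessing step, given as an illustration rather than the authors' method (thresholds and names are assumptions):

    import numpy as np

    def idt_fixations(x, y, max_disp=1.0, min_samples=10):
        """Dispersion-threshold (I-DT) fixation detection.

        x, y: 1-D NumPy arrays of gaze coordinates (e.g. degrees of visual angle).
        Returns a list of (start_index, end_index) fixation windows.
        """
        def dispersion(i, j):
            return (x[i:j].max() - x[i:j].min()) + (y[i:j].max() - y[i:j].min())

        fixations, i = [], 0
        while i + min_samples <= len(x):
            j = i + min_samples
            if dispersion(i, j) <= max_disp:
                # Grow the window while gaze dispersion stays under the threshold.
                while j < len(x) and dispersion(i, j + 1) <= max_disp:
                    j += 1
                fixations.append((i, j))
                i = j
            else:
                i += 1
        return fixations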
IEEE-RAS International Conference on Humanoid Robots | 2015
François Foerster; Gérard Bailly; Frédéric Elisei
Primates, and in particular humans, are very sensitive to the eye direction of congeners. Estimating the gaze of others is one of the basic skills for inferring the goals, intentions, and desires of social agents, whether they are humans or avatars. When building robots, one should not only equip them with gaze trackers but also check that their own gaze is readable by human partners. We conducted experiments that demonstrate the strong impact of the iris size and eyelid position of an iCub humanoid robot on gaze reading performance by human observers. We comment on the importance of assessing a robot's ability to display its intentions through clearly legible gestures.
Conference of the International Speech Communication Association | 2008
Barry-John Theobald; Sascha Fagel; Gérard Bailly; Frédéric Elisei
Articulated Motion and Deformable Objects | 2008
Pierre Badin; Frédéric Elisei; Gérard Bailly; Yuliya Tarabalka
International Workshop on Multimodal Corpora | 2005
Gérard Bailly; Frédéric Elisei; Pierre Badin; Christophe Savariaux
Pattern Recognition Letters | 2016
Alaeddine Mihoub; Gérard Bailly; Christian Wolf; Frédéric Elisei
Archive | 2003
Maxime Bérar; Gérard Bailly; M. Chabanas; Frédéric Elisei; Matthias Odisio; Y. Payan