Publication


Featured research published by Matthias Odisio.


International Journal of Speech Technology | 2003

Audiovisual Speech Synthesis

Gérard Bailly; Maxime Berar; Frédéric Elisei; Matthias Odisio

This paper presents the main approaches used to synthesize talking faces, and provides greater detail on a handful of these approaches. An attempt is made to distinguish between facial synthesis itself (i.e. the manner in which facial movements are rendered on a computer screen) and the way these movements may be controlled and predicted using phonetic input. The two main synthesis techniques (model-based vs. image-based) are contrasted and illustrated by brief descriptions of the most representative existing systems. The challenging issues—evaluation, data acquisition and modeling—that may drive future models are also discussed and illustrated by our current work at ICP.


Proceedings of 2002 IEEE Workshop on Speech Synthesis, 2002. | 2002

Evaluation of movement generation systems using the point-light technique

Gérard Bailly; Guillaume Gibert; Matthias Odisio

We describe a comparative evaluation of different movement generation systems capable of computing articulatory trajectories from phonetic input. The articulatory trajectories here pilot the facial deformation of a 3D clone of a human female speaker. We test the adequacy of the predicted trajectories in accompanying the production of natural utterances. The performance of these predictions is compared to that of natural articulatory trajectories produced by the speaker and estimated by an original video-based motion capture technique. The test uses the point-light technique (Rosenblum, L.D. and Saldana, H.M., 1996; 1998).
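Before a perceptual test like the point-light study above, predicted trajectories are commonly compared to motion-captured ones with simple objective statistics such as RMSE and correlation. A hypothetical sketch (the trajectories and noise level are invented for illustration; they are not the paper's data, and the paper's actual test is perceptual):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical articulatory trajectories sampled at video rate:
# one reference track from motion capture, one from a generation system.
t = np.linspace(0.0, 1.0, 100)
natural = np.sin(2 * np.pi * 3 * t)                   # motion-captured trajectory
predicted = natural + 0.05 * rng.normal(size=t.size)  # system output

# Two common summary statistics for trajectory adequacy.
rmse = float(np.sqrt(np.mean((predicted - natural) ** 2)))
corr = float(np.corrcoef(predicted, natural)[0, 1])

print(rmse, corr)
```

Such objective scores complement, but do not replace, the intelligibility measured with point-light stimuli.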


Speech Communication | 2004

Tracking talking faces with shape and appearance models

Matthias Odisio; Gérard Bailly; Frédéric Elisei

This paper presents a system that can recover and track the 3D speech movements of a speaker’s face for each image of a monocular sequence. To handle both the individual specificities of the speaker’s articulation and the complexity of the facial deformations during speech, speaker-specific articulated models of the face geometry and appearance are first built from real data. These face models are used for tracking: articulatory parameters are extracted for each image by an analysis-by-synthesis loop. The geometric model is linearly controlled by only seven articulatory parameters. Appearance is seen either as a classical texture map or through local appearance of a relevant subset of 3D points. We compare several appearance models: they are either constant or depend linearly on the articulatory parameters. We compare tracking results using these different appearance models with ground truth data not only in terms of recovery errors of the 3D geometry but also in terms of intelligibility enhancement provided by the movements.
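For a purely linear geometric model, the core of such an analysis-by-synthesis loop reduces to a least-squares fit of the articulatory parameters. A minimal NumPy sketch of that idea (the vertex count, basis `B`, and noise level are invented assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

n_points = 50   # vertices of a hypothetical face mesh
n_params = 7    # the paper's seven articulatory parameters

# Linear geometric model: vertex positions are an affine function of the
# articulatory parameters p, i.e. V(p) = V0 + B @ p.
V0 = rng.normal(size=(3 * n_points,))          # neutral face geometry
B = rng.normal(size=(3 * n_points, n_params))  # articulatory basis

def synthesize(p):
    """Generate stacked 3D vertex positions from articulatory parameters."""
    return V0 + B @ p

# "Observed" geometry for one video frame, with measurement noise.
p_true = rng.normal(size=n_params)
observed = synthesize(p_true) + 0.01 * rng.normal(size=3 * n_points)

# Analysis-by-synthesis: find the parameters whose synthesized geometry
# best matches the observation; for a linear model this is least squares.
p_est, *_ = np.linalg.lstsq(B, observed - V0, rcond=None)

print(np.max(np.abs(p_est - p_true)))
```

In the actual system the camera projection and the appearance comparison make the problem nonlinear, so the fit is iterated per frame; the sketch only shows the geometric core.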


International SOI Conference | 2003

Shape and appearance models of talking faces for model-based tracking

Matthias Odisio; Gérard Bailly

We present a system that can recover and track the 3D speech movements of a speaker's face for each image of a monocular sequence. A speaker-specific face model is used for tracking: model parameters are extracted from each image by an analysis-by-synthesis loop. To handle both the individual specificities of the speaker's articulation and the complexity of the facial deformations during speech, speaker-specific models of the face 3D geometry and appearance are built from real data. The geometric model is linearly controlled by only six articulatory parameters. Appearance is seen either as a classical texture map or through local appearance of a relevant subset of 3D points. We compare several appearance models: they are either constant or depend linearly on the articulatory parameters. We evaluate these different appearance models with ground truth data.


AVSP | 2001

Creating and controlling video-realistic talking heads.

Frédéric Elisei; Matthias Odisio; Gérard Bailly; Pierre Badin


Speech Communication | 2004

A pilot study of temporal organization in Cued Speech production of French syllables: rules for a Cued Speech synthesizer

Virginie Attina; Denis Beautemps; Marie-Agnès Cathiard; Matthias Odisio


Conference of the International Speech Communication Association | 2008

Two-stage prosody prediction for emotional text-to-speech synthesis

Hao Tang; Xi Zhou; Matthias Odisio; Mark Hasegawa-Johnson; Thomas S. Huang


Archive | 2003

Towards a generic talking head

Maxime Berar; Gérard Bailly; M. Chabanas; Frédéric Elisei; Matthias Odisio; Y. Payan


Conference of the International Speech Communication Association | 2004

Audiovisual perceptual evaluation of resynthesised speech movements

Matthias Odisio; Gérard Bailly


AVSP | 2003

Toward an audiovisual synthesizer for Cued Speech: Rules for CV French syllables.

Virginie Attina; Denis Beautemps; Marie-Agnès Cathiard; Matthias Odisio

Collaboration


Dive into Matthias Odisio's collaborations.

Top Co-Authors

Gérard Bailly
Centre national de la recherche scientifique

Maxime Berar
Centre national de la recherche scientifique

Denis Beautemps
Centre national de la recherche scientifique

Virginie Attina
University of Western Sydney

Pierre Badin
Centre national de la recherche scientifique

Matthieu Chabanas
Grenoble Institute of Technology

Michel Desvignes
Grenoble Institute of Technology

Xi Zhou
Chinese Academy of Sciences