
Publications


Featured research published by Wesley Mattheyses.


Speech Communication | 2015

Audiovisual speech synthesis

Wesley Mattheyses

Highlights: Comprehensive overview of the various techniques for audiovisual speech synthesis. Innovative categorization of the techniques based on multiple aspects. Important future directions for the field of audiovisual speech synthesis. Bundles information that was previously scattered across the scientific literature.

We live in a world where there are countless interactions with computer systems in everyday situations. Ideally, this interaction feels as familiar and natural as the communication we experience with other humans. To this end, an ideal means of communication between a user and a computer system consists of audiovisual speech signals. Audiovisual text-to-speech technology allows the computer system to utter any spoken message towards its users. Over the last decades, a wide range of techniques for performing audiovisual speech synthesis has been developed. This paper gives a comprehensive overview of these approaches using a categorization of the systems based on multiple important aspects that determine the properties of the synthesized speech signals. The paper makes a clear distinction between the techniques that are used to model the virtual speaker and the techniques that are used to generate the appropriate speech gestures. In addition, the paper discusses the evaluation of audiovisual speech synthesizers, elaborates on the hardware requirements for performing visual speech synthesis, and describes some important future directions that should stimulate the use of audiovisual speech synthesis technology in real-life applications.


EURASIP Journal on Audio, Speech, and Music Processing | 2009

On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

Wesley Mattheyses; Lukas Latacz; Werner Verhelst

Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, mismatches between these two information streams can be perceived and could degrade the quality of the output, which calls for experimental exploration. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived quality of the visual signal.


International Conference on Machine Learning | 2008

Multimodal Unit Selection for 2D Audiovisual Text-to-Speech Synthesis

Wesley Mattheyses; Lukas Latacz; Werner Verhelst; Hichem Sahli

Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Lately, much interest has gone to data-driven 2D photorealistic synthesis, where the system uses a database of pre-recorded auditory and visual speech data to construct the target output signal. In this paper we propose a synthesis technique that creates both the target auditory and the target visual speech using the same audiovisual database. To achieve this, the well-known unit selection synthesis technique is extended to work with multimodal segments containing original combinations of audio and video. This strategy results in a multimodal output signal that displays a high level of audiovisual correlation, which is crucial to achieve a natural perception of the synthetic speech signal.
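The joint selection idea described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the database, the scalar stand-in features, and the cost definitions are all illustrative assumptions. The key point it demonstrates is that each candidate unit carries its audio and video features from the same recorded segment, so the selected sequence keeps both modalities intrinsically paired.

```python
# Toy sketch of multimodal unit selection (hypothetical data and costs).
# Each candidate unit is an original audio+video recording of one phone,
# so selecting a unit always keeps its audio and video coherently paired.

# database: phone label -> candidate units; each unit carries joint
# audio/video features taken from the same recorded segment
DATABASE = {
    "h": [{"audio": 0.2, "video": 0.3}, {"audio": 0.8, "video": 0.7}],
    "e": [{"audio": 0.5, "video": 0.4}, {"audio": 0.1, "video": 0.2}],
}

def target_cost(unit, target):
    # distance between candidate features and the target specification
    return abs(unit["audio"] - target["audio"]) + abs(unit["video"] - target["video"])

def join_cost(prev, unit):
    # smoothness of the concatenation, measured in BOTH modalities
    return abs(prev["audio"] - unit["audio"]) + abs(prev["video"] - unit["video"])

def select_units(phones, targets):
    """Viterbi-style search minimizing summed target and join costs."""
    # each lattice entry: (accumulated cost, chosen unit path)
    best = [(target_cost(u, targets[0]), [u]) for u in DATABASE[phones[0]]]
    for phone, tgt in zip(phones[1:], targets[1:]):
        new_best = []
        for u in DATABASE[phone]:
            cost, path = min(
                ((c + join_cost(p[-1], u), p) for c, p in best),
                key=lambda e: e[0],
            )
            new_best.append((cost + target_cost(u, tgt), path + [u]))
        best = new_best
    return min(best, key=lambda e: e[0])[1]
```

Because audio and video are never recombined across different recordings, audiovisual coherence within each segment comes for free; the search only has to manage coherence at the segment joins.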


Speech Communication | 2013

Comprehensive many-to-many phoneme-to-viseme mapping and its application for concatenative visual speech synthesis

Wesley Mattheyses; Lukas Latacz; Werner Verhelst

The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme instead. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both tree-based and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-to-one viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database and the synthesis targets.
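The many-to-many mapping idea can be sketched with a small clustering example. Everything here is an illustrative assumption: the 1-D "mouth opening" features, the phoneme instances, and the hand-rolled k-means are toy stand-ins for the paper's real visual features and its combined tree-based/k-means approach. The point shown is that a phoneme whose realizations fall into different visual clusters ends up mapped to several visemes.

```python
# Toy sketch of a many-to-many phoneme-to-viseme mapping via clustering.
# Features and data are hypothetical 1-D "mouth opening" values.

# each entry: (phoneme label, visual feature of one recorded realization)
INSTANCES = [
    ("p", 0.05), ("p", 0.10), ("b", 0.08),  # closed-lip realizations
    ("a", 0.90), ("a", 0.85),               # wide-open realizations
    ("t", 0.40), ("t", 0.88),               # context-dependent shapes
]

def kmeans_1d(values, centroids, iters=10):
    """Plain k-means on scalars; returns the final centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def viseme_mapping(instances, centroids):
    """Map each phoneme to the set of viseme clusters its instances hit."""
    mapping = {}
    for phoneme, feat in instances:
        viseme = min(range(len(centroids)),
                     key=lambda j: abs(feat - centroids[j]))
        mapping.setdefault(phoneme, set()).add(viseme)
    return mapping
```

In this toy run, "p" and "a" each map to a single viseme class, while "t", whose realizations vary with context, maps to two: a many-to-many scheme rather than the classical many-to-one labeling.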


Advances in Multimedia | 2006

Expressive speech recognition and synthesis as enabling technologies for affective robot-child communication

Selma Yilmazyildiz; Wesley Mattheyses; Yorgos Patsis; Werner Verhelst

This paper presents our recent and current work on expressive speech synthesis and recognition as enabling technologies for affective robot-child interaction. We show that current expression recognition systems could be used to discriminate between several archetypical emotions, but also that the old adage "there's no data like more data" is more than ever valid in this field. A new speech synthesizer was developed that is capable of high-quality concatenative synthesis. This system will be used in the robot to synthesize expressive nonsense speech by using prosody transplantation and a recorded database with expressive speech examples. With these enabling components in place, we are getting ready to start experiments towards effective child-machine communication of affect and emotion.


Text, Speech and Dialogue | 2010

Expressive gibberish speech synthesis for affective human-computer interaction

Selma Yilmazyildiz; Lukas Latacz; Wesley Mattheyses; Werner Verhelst

In this paper we present our study on expressive gibberish speech synthesis as a means for affective communication between computing devices, such as a robot or an avatar, and their users. Gibberish speech consists of vocalizations of meaningless strings of speech sounds and is sometimes used by performing artists to express intended (and often exaggerated) emotions and affect, such as anger and surprise, without actually pronouncing any understandable word. The advantage of gibberish in affective computing lies in the fact that no understandable text has to be pronounced and that only affect is conveyed. This can be used to test the effectiveness of affective prosodic strategies, for example, but it can also be applied in actual systems.


Text, Speech and Dialogue | 2013

Speaker-Specific Pronunciation for Speech Synthesis

Lukas Latacz; Wesley Mattheyses; Werner Verhelst

A pronunciation lexicon is a key component of a modern speech synthesizer, containing the orthography and phonemic transcriptions of a large number of words. A lexicon may contain words with multiple pronunciations, such as reduced and full versions of (function) words, homographs, or other types of words with multiple acceptable pronunciations such as foreign words or names. Pronunciation variants should therefore be taken into account during voice building (e.g. segmentation and labeling of a speech database), as well as during synthesis.
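The lexicon structure described above can be sketched as a simple mapping from orthography to a list of pronunciation variants. The transcriptions and words below are illustrative placeholders, not the lexicon or phone set used in the paper.

```python
# Toy sketch of a pronunciation lexicon with multiple variants per word.
# Transcriptions use a hypothetical SAMPA-like notation for illustration.
LEXICON = {
    # function word with reduced and full forms
    "the": ["D @", "D i:"],
    # homograph: noun vs. verb pronunciation
    "record": ["r E k @ d", "r I k O: d"],
    "speech": ["s p i: tS"],
}

def pronunciations(word):
    """Return all acceptable pronunciations, or None if out of lexicon."""
    return LEXICON.get(word.lower())

def default_pronunciation(word):
    """First listed variant serves as the canonical form for synthesis."""
    variants = pronunciations(word)
    return variants[0] if variants else None
```

During voice building, a forced aligner would pick whichever variant best matches each recorded utterance; at synthesis time, the system must likewise choose among the stored variants rather than assume a single pronunciation per word.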


Lecture Notes in Computer Science | 2006

Expressive Speech Recognition and Synthesis as Enabling Technologies for Affective Robot-Child Communication

Selma Yilmazyildiz; Wesley Mattheyses; Yorgos Patsis; Werner Verhelst


Archive | 2006

Robust Pitch Marking for Prosodic Modification of Speech Using TD-PSOLA

Wesley Mattheyses; Werner Verhelst; Piet Verhoeve


Archive | 2008

An Overview of the VUB Entry for the 2008 Blizzard Challenge

Lukas Latacz; Wesley Mattheyses

Collaboration


Dive into Wesley Mattheyses's collaborations.

Top Co-Authors

Lukas Latacz (Vrije Universiteit Brussel)

Hichem Sahli (Vrije Universiteit Brussel)

Yorgos Patsis (Vrije Universiteit Brussel)

Kenny Cauwerts (Vrije Universiteit Brussel)

Pieter Soens (Vrije Universiteit Brussel)

Ilse Ravyse (VU University Amsterdam)