
Publication


Featured research published by Lukas Latacz.


Speech Communication | 2009

Developing a reading tutor: Design and evaluation of dedicated speech recognition and synthesis modules

Jacques Duchateau; Yuk On Kong; Leen Cleuren; Lukas Latacz; Jan Roelens; Abdurrahman Samir; Kris Demuynck; Pol Ghesquière; Werner Verhelst; Hugo Van hamme

When a child learns to read, the learning process can be enhanced by significant reading practice with individual support from a tutor. In reality, however, the availability of teachers or clinicians is limited, so the additional use of a fully automated reading tutor would be beneficial for the child. This paper discusses our efforts to develop an automated reading tutor for Dutch. First, the dedicated speech recognition and synthesis modules in the reading tutor are described. Then, three diagnostic and remedial reading tutor tools are evaluated in practice and improved based on these evaluations: (1) automatic assessment of a child's reading level, (2) oral feedback to a child at the phoneme, syllable or word level, and (3) tracking where a child is reading, for automated screen advancement or for direct feedback to the child. In general, the presented tools work in a satisfactory way, including for children with known reading disabilities.
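
As an illustration of the third tool, the sketch below shows one simple way a reading tracker could align a recognizer's word hypotheses with the target sentence. The function, data, and alignment method are hypothetical and are not taken from the paper's implementation.

```python
# Hypothetical sketch of reading-position tracking: align ASR word hypotheses
# with the target sentence and report how far the child appears to have read.
from difflib import SequenceMatcher

def reading_position(target_words, recognized_words):
    """Index in target_words up to which the child appears to have read."""
    blocks = SequenceMatcher(a=target_words, b=recognized_words).get_matching_blocks()
    return max((b.a + b.size for b in blocks if b.size > 0), default=0)

target = "de kat zit op de mat".split()
heard = "de kat zit op".split()          # partial, possibly noisy ASR output
print(reading_position(target, heard))   # -> 4: four target words covered
```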


EURASIP Journal on Audio, Speech, and Music Processing | 2009

On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

Wesley Mattheyses; Lukas Latacz; Werner Verhelst

Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, mismatches between these two information streams can degrade the quality of the output, which calls for experimental exploration. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode influences the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived quality of the visual signal.


International Conference on Machine Learning | 2008

Multimodal Unit Selection for 2D Audiovisual Text-to-Speech Synthesis

Wesley Mattheyses; Lukas Latacz; Werner Verhelst; Hichem Sahli

Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Recently, much interest has gone into data-driven 2D photorealistic synthesis, where the system uses a database of pre-recorded auditory and visual speech data to construct the target output signal. In this paper we propose a synthesis technique that creates both the target auditory and the target visual speech from the same audiovisual database. To achieve this, the well-known unit selection synthesis technique is extended to work with multimodal segments containing original combinations of audio and video. This strategy results in a multimodal output signal that displays a high level of audiovisual correlation, which is crucial to achieve a natural perception of the synthetic speech signal.
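
A minimal sketch of the selection step is given below, assuming precomputed target and join costs over candidate audiovisual units. The data structures and cost functions are placeholders for illustration, not the authors' system.

```python
# Minimal sketch of unit selection over audiovisual segments: a Viterbi-style
# search picks, for every target position, the candidate unit (an original
# audio+video pair) minimising accumulated target and join costs.
import numpy as np

def select_units(candidates, target_cost, join_cost):
    """candidates[t] is the list of audiovisual units for target position t."""
    best = np.array([target_cost(0, u) for u in candidates[0]])
    back = []
    for t in range(1, len(candidates)):
        costs, ptrs = [], []
        for u in candidates[t]:
            trans = best + np.array([join_cost(p, u) for p in candidates[t - 1]])
            j = int(np.argmin(trans))
            costs.append(trans[j] + target_cost(t, u))
            ptrs.append(j)
        best, back = np.array(costs), back + [ptrs]
    path = [int(np.argmin(best))]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))  # index of the chosen unit at each position

# toy usage with placeholder units and costs
units = [["a0", "a1"], ["b0", "b1"]]
tc = lambda t, u: 0.0 if u.endswith("0") else 1.0
jc = lambda p, u: 0.5
print(select_units(units, tc, jc))  # -> [0, 0]
```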


Speech Communication | 2013

Comprehensive many-to-many phoneme-to-viseme mapping and its application for concatenative visual speech synthesis

Wesley Mattheyses; Lukas Latacz; Werner Verhelst

The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme instead. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both tree-based and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-to-one viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database and the synthesis targets.
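
The sketch below illustrates the general idea of deriving many-to-many viseme labels by clustering visual features of phoneme instances. The features, cluster count, and use of plain k-means via scikit-learn are assumptions made for illustration, not the paper's exact procedure.

```python
# Illustrative sketch: cluster visual feature vectors of phoneme instances;
# a phoneme whose instances fall into several clusters receives several
# viseme labels, yielding a many-to-many phoneme-to-viseme mapping.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# hypothetical visual features (e.g. mouth-shape parameters) per phoneme instance
instances = {"p": rng.normal(0.0, 1.0, (50, 4)),
             "b": rng.normal(0.0, 1.0, (50, 4)),
             "a": rng.normal(3.0, 1.0, (50, 4))}

features = np.vstack(list(instances.values()))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

mapping, start = {}, 0
for phoneme, feats in instances.items():
    mapping[phoneme] = sorted({int(c) for c in labels[start:start + len(feats)]})
    start += len(feats)
print(mapping)  # each phoneme may map to more than one viseme cluster
```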


Text, Speech and Dialogue | 2010

Expressive gibberish speech synthesis for affective human-computer interaction

Selma Yilmazyildiz; Lukas Latacz; Wesley Mattheyses; Werner Verhelst

In this paper we present our study on expressive gibberish speech synthesis as a means for affective communication between computing devices, such as a robot or an avatar, and their users. Gibberish speech consists of vocalizations of meaningless strings of speech sounds and is sometimes used by performing artists to express intended (and often exaggerated) emotions and affect, such as anger and surprise, without actually pronouncing any understandable word. The advantage of gibberish in affective computing lies in the fact that no understandable text has to be pronounced and that only affect is conveyed. This can be used, for example, to test the effectiveness of affective prosodic strategies, but it can also be applied in actual systems.
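
Below is a toy sketch of how meaningless but pronounceable input could be generated for such a system. The simple consonant-vowel scheme is an assumption used purely for illustration; it is not the strategy described in the paper.

```python
# Toy gibberish generator: random consonant-vowel syllables form pronounceable
# but meaningless words that a TTS front-end could render with expressive prosody.
import random

CONSONANTS = list("ptkbdgmnlsr")
VOWELS = ["a", "e", "i", "o", "u"]

def gibberish(n_words=4, syllables_per_word=(1, 3), seed=None):
    """Return a meaningless but pronounceable string of CV syllables."""
    rnd = random.Random(seed)
    words = []
    for _ in range(n_words):
        n_syl = rnd.randint(*syllables_per_word)
        words.append("".join(rnd.choice(CONSONANTS) + rnd.choice(VOWELS)
                             for _ in range(n_syl)))
    return " ".join(words)

print(gibberish(seed=1))
```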


Journal of Speech, Language, and Hearing Research | 2015

Automated Speech Rate Measurement in Dysarthria

Heidi Martens; Tomas Dekens; Gwen Van Nuffelen; Lukas Latacz; Werner Verhelst; Marc De Bodt

Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists.

Method: The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition.

Results: High correlations between automated and human SR determination were found in the various testing conditions.

Conclusions: The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.
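
For illustration, the sketch below shows the evaluation measure at work on made-up numbers: speech rate expressed as syllables per second and a Pearson correlation between automated and manual determinations. The figures are hypothetical and are not the study's data.

```python
# Illustrative comparison of automated vs. manual speech-rate determination.
import numpy as np

def speech_rate(n_syllables, duration_s):
    """Speech rate in syllables per second."""
    return n_syllables / duration_s

# hypothetical per-sample measurements (syllable counts, durations in seconds)
automated = np.array([speech_rate(n, d) for n, d in
                      [(42, 14.0), (55, 20.1), (38, 16.3), (60, 18.9)]])
manual = np.array([3.1, 2.8, 2.3, 3.2])   # hypothetical clinician values

r = np.corrcoef(automated, manual)[0, 1]
print(f"Pearson r between automated and manual SR: {r:.2f}")
```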


Text, Speech and Dialogue | 2013

Speaker-Specific Pronunciation for Speech Synthesis

Lukas Latacz; Wesley Mattheyses; Werner Verhelst

A pronunciation lexicon is a key component of a modern speech synthesizer, containing the orthography and phonemic transcriptions of a large number of words. A lexicon may contain words with multiple pronunciations, such as reduced and full versions of (function) words, homographs, or other types of words with multiple acceptable pronunciations such as foreign words or names. Pronunciation variants should therefore be taken into account during voice building (e.g. segmentation and labeling of a speech database) as well as during synthesis.
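
A minimal sketch of such a lexicon is shown below. The class, the SAMPA-like notation, and the example entries are illustrative assumptions, not the authors' lexicon format.

```python
# Minimal sketch of a pronunciation lexicon that stores several variants per word.
from collections import defaultdict

class Lexicon:
    """Orthography -> list of acceptable phonemic transcriptions."""

    def __init__(self):
        self._entries = defaultdict(list)

    def add(self, orthography, phonemes):
        if phonemes not in self._entries[orthography]:
            self._entries[orthography].append(phonemes)

    def variants(self, orthography):
        """All pronunciation variants, e.g. full and reduced forms."""
        return self._entries.get(orthography, [])

lex = Lexicon()
lex.add("het", ["h", "E", "t"])   # full form of the Dutch function word
lex.add("het", ["@", "t"])        # reduced form
print(lex.variants("het"))
```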


Archive | 2008

An Overview of the VUB Entry for the 2008 Blizzard Challenge

Lukas Latacz; Wesley Mattheyses


Conference of the International Speech Communication Association | 2010

Active appearance models for photorealistic visual speech synthesis

Wesley Mattheyses; Lukas Latacz; Werner Verhelst


AVSP | 2010

Optimized photorealistic audiovisual speech synthesis using active appearance modeling

Wesley Mattheyses; Lukas Latacz; Werner Verhelst

Collaboration


Dive into Lukas Latacz's collaborations.

Top Co-Authors

Wesley Mattheyses (Vrije Universiteit Brussel)
Werner Verhelst (Vrije Universiteit Brussel)
Yuk On Kong (Vrije Universiteit Brussel)
Heidi Martens (Eindhoven University of Technology)
Abdurrahman Samir (Katholieke Universiteit Leuven)
Hichem Sahli (Vrije Universiteit Brussel)
Hugo Van hamme (Katholieke Universiteit Leuven)
Jacques Duchateau (Katholieke Universiteit Leuven)