Maria Astrinaki
University of Mons
Publications
Featured research published by Maria Astrinaki.
Spoken Language Technology Workshop | 2012
Maria Astrinaki; Nicolas D'Alessandro; Benjamin Picart; Thomas Drugman; Thierry Dutoit
In this paper, we present a modified version of HTS, called performative HTS or pHTS. The objective of pHTS is to enhance the controllability and reactivity of HTS. pHTS reduces the phonetic context used for training the models and generates the speech parameters within a 2-label window. Speech waveforms are generated on the fly, and the models can be reactively modified, affecting the synthesized speech with a delay of only one phoneme. It is shown that HTS and pHTS have comparable output quality. We use this new system to achieve reactive model interpolation and conduct a new test in which the degree of articulation is modified within the sentence.
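The 2-label window idea in the abstract above can be sketched as follows. This is a hypothetical toy, not the pHTS implementation: `generate_params` and `render` stand in for the statistical parameter generation and vocoding stages, and the point is only to show why an edit to an upcoming label reaches the output after a one-phoneme delay.

```python
from collections import deque

def reactive_synthesis(label_stream, generate_params, render):
    """Toy sketch of two-label-window generation: parameters for a
    phoneme are generated as soon as its right-context label arrives,
    so the output lags the label stream by exactly one phoneme."""
    window = deque(maxlen=2)  # the 2-label context window
    output = []
    for label in label_stream:
        window.append(label)
        if len(window) == 2:
            # generate parameters for the older label, using the
            # newer label as right context, and render immediately
            params = generate_params(window[0], window[1])
            output.append(render(params))
    return output
```

Because rendering waits only for one label of right context rather than the full sentence, any modification applied to a label before it enters the window is reflected in the very next rendered phoneme.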
9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013
Nicolas d’Alessandro; Joëlle Tilmanne; Maria Astrinaki; Thomas Hueber; Rasmus Dall; Thierry Ravet; Alexis Moinet; Hüseyin Çakmak; Onur Babacan; Adela Barbulescu; Valentin Parfait; Victor Huguenin; Emine Sümeyye Kalaycı; Qiong Hu
This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our target for this workshop was to bring some technologies currently used in speech recognition and synthesis to a new level, i.e. making them the core of a new HMM-based mapping system. The idea of statistical mapping has been investigated, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for realtime and reactive generation of new trajectories from input labels and for realtime regression in a continuous-to-continuous use case. As a result, we have developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in realtime, reactive audiovisual laughter synthesis, and a prototype demonstrating the realtime reconstruction of lower-body gait motion strictly from upper-body motion, with conservation of the stylistic properties. This project has been the opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library and explore the development of a realtime gesture recognition tool.
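The GMM-based continuous-to-continuous regression mentioned above can be illustrated with a minimal 1-D sketch (hypothetical toy code, not the workshop software). Each component stores a joint Gaussian over input and output; the prediction is the responsibility-weighted sum of the per-component conditional means E[y | x], which is the standard form of GMM mapping.

```python
import math

def gmm_regress(x, components):
    """Toy GMM-based regression for 1-D input and 1-D output.
    Each component is (weight, mu_x, mu_y, var_x, cov_xy)."""
    # responsibility of each component: weight times Gaussian density of x
    resp = []
    for w, mu_x, mu_y, var_x, cov_xy in components:
        p = w * math.exp(-0.5 * (x - mu_x) ** 2 / var_x) / math.sqrt(var_x)
        resp.append(p)
    total = sum(resp)
    # weighted sum of conditional means E[y | x] per component
    y_hat = 0.0
    for r, (w, mu_x, mu_y, var_x, cov_xy) in zip(resp, components):
        y_hat += (r / total) * (mu_y + cov_xy / var_x * (x - mu_x))
    return y_hat
```

Because the mapping is a closed-form function of the current input frame, it can run frame by frame in realtime, which is what makes it suitable for reactive trajectory generation.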
Intelligent Technologies for Interactive Entertainment | 2013
Nicolas d’Alessandro; Maria Astrinaki; Thierry Dutoit
In this paper, we illustrate the use of the MAGE performative speech synthesizer through its application to the conversion of facial features, measured in realtime with FaceOSC, into speech synthesis features such as vocal tract shape or intonation. MAGE is a new software library for using HMM-based speech synthesis in reactive programming environments. MAGE uses a rewritten version of the HTS engine that enables the computation of speech audio samples over a two-label window instead of the whole sentence. This feature is precisely what enables the realtime mapping of facial attributes to synthesis parameters.
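The kind of facial-feature-to-synthesis mapping described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual mapping: a normalised facial feature (e.g. FaceOSC mouth openness) is clamped and mapped linearly onto a fundamental-frequency range that a reactive synthesiser could consume frame by frame; the range bounds are assumptions.

```python
def face_to_pitch(mouth_open, f0_min=80.0, f0_max=400.0):
    """Map a normalised facial feature in [0, 1] linearly onto a
    fundamental frequency range [f0_min, f0_max] in Hz."""
    t = min(max(mouth_open, 0.0), 1.0)  # clamp the feature to [0, 1]
    return f0_min + t * (f0_max - f0_min)
```

Any continuous control signal arriving at frame rate could be mapped this way, which is why the two-label window matters: the synthesiser must be able to act on the new parameter value within the next phoneme rather than at the next sentence.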
Archive | 2011
Thierry Dutoit; Maria Astrinaki; Onur Babacan; Benjamin Picart
New Interfaces for Musical Expression | 2012
Maria Astrinaki; Nicolas D'Alessandro; Thierry Dutoit
New Interfaces for Musical Expression | 2013
Maria Astrinaki; Nicolas D'Alessandro; Loïc Reboursière; Alexis Moinet; Thierry Dutoit
Conference of the International Speech Communication Association (Interspeech) | 2013
Maria Astrinaki; Junichi Yamagishi; Simon King; Nicolas D'Alessandro; Thierry Dutoit
SSW | 2013
Maria Astrinaki; Alexis Moinet; Junichi Yamagishi; Korin Richmond; Zhen-Hua Ling; Simon King; Thierry Dutoit
Archive | 2011
Maria Astrinaki; Onur Babacan; Nicolas D'Alessandro; Thierry Dutoit
International Conference on Computer Vision Theory and Applications | 2014
Joëlle Tilmanne; Nicolas D'Alessandro; Maria Astrinaki; Thierry Ravet