Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maria Astrinaki is active.

Publication


Featured research published by Maria Astrinaki.


Spoken Language Technology Workshop | 2012

Reactive and continuous control of HMM-based speech synthesis

Maria Astrinaki; Nicolas D'Alessandro; Benjamin Picart; Thomas Drugman; Thierry Dutoit

In this paper, we present a modified version of HTS, called performative HTS or pHTS. The objective of pHTS is to enhance the controllability and reactivity of HTS. pHTS reduces the phonetic context used for training the models and generates the speech parameters within a 2-label window. Speech waveforms are generated on the fly and the models can be reactively modified, affecting the synthesized speech with a delay of only one phoneme. It is shown that HTS and pHTS have comparable output quality. We use this new system to achieve reactive model interpolation and conduct a new test where the degree of articulation is modified within the sentence.
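The 2-label-window idea can be sketched as a small generation loop: parameters are produced for only two labels of context at a time, so a model setting changed between iterations takes effect one phoneme later. The sketch below is hypothetical and uses toy stand-in "models" and invented names (model_a, model_b, generate_window); it is not the pHTS/MAGE API.

```python
# Minimal sketch (not the actual pHTS/MAGE API) of reactive, short-window
# parameter generation: labels are consumed through a 2-label window and a
# model setting (here an interpolation weight) can change between labels.
import numpy as np

def model_a(label: str) -> np.ndarray:
    # Toy stand-in for an HMM voice model: one parameter frame per label.
    return np.full(3, fill_value=len(label) * 1.0)

def model_b(label: str) -> np.ndarray:
    return np.full(3, fill_value=len(label) * 2.0)

def generate_window(window: list[str], alpha: float) -> np.ndarray:
    # Interpolate the two models' outputs for the current 2-label window;
    # in pHTS the window bounds the context used at generation time.
    frames = [(1.0 - alpha) * model_a(lab) + alpha * model_b(lab) for lab in window]
    return np.vstack(frames)

labels = ["sil", "h", "e", "l", "ou", "sil"]
alpha = 0.0
for i in range(len(labels) - 1):
    window = labels[i:i + 2]          # only two labels of lookahead
    params = generate_window(window, alpha)
    print(window, params[0])
    alpha = min(1.0, alpha + 0.25)    # reactive change, effective on the next phoneme
```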


9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013

Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

Nicolas d’Alessandro; Joëlle Tilmanne; Maria Astrinaki; Thomas Hueber; Rasmus Dall; Thierry Ravet; Alexis Moinet; Hüseyin Çakmak; Onur Babacan; Adela Barbulescu; Valentin Parfait; Victor Huguenin; Emine Sümeyye Kalaycı; Qiong Hu

This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our goal for this workshop was to bring technologies currently used in speech recognition and synthesis to a new level by making them the core of a new HMM-based mapping system. We investigated the idea of statistical mapping, and more precisely how to use Gaussian Mixture Models and Hidden Markov Models for realtime, reactive generation of new trajectories from input labels and for realtime regression in a continuous-to-continuous use case. As a result, we developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in realtime, a reactive audiovisual laughter synthesiser, and a prototype demonstrating the realtime reconstruction of lower-body gait motion solely from upper-body motion while preserving its stylistic properties. This project was also an opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library, and explore the development of a realtime gesture recognition tool.
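The continuous-to-continuous mapping mentioned above can be illustrated with a generic GMM-regression sketch: fit a joint Gaussian mixture over input/output pairs and predict the output as the conditional expectation E[y | x]. This is a textbook formulation written with scikit-learn and SciPy, not the workshop's code; the toy data and the predict helper are illustrative assumptions.

```python
# Generic GMM-based regression sketch (statistical mapping):
# fit a joint GMM over [x, y], then predict y as E[y | x].
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))            # e.g. an upper-body feature
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)   # e.g. a lower-body feature

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))                       # joint model over [x, y]

def predict(x_new: np.ndarray, dx: int = 1) -> np.ndarray:
    """Conditional mean E[y | x] under the joint GMM (dx = input dimension)."""
    out = np.zeros((len(x_new), gmm.means_.shape[1] - dx))
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :dx], gmm.means_[k, dx:]
        S = gmm.covariances_[k]
        S_xx, S_yx = S[:dx, :dx], S[dx:, :dx]
        # unnormalised responsibility of component k for each input point
        w = gmm.weights_[k] * multivariate_normal.pdf(x_new, mean=mu_x, cov=S_xx)
        # conditional mean of y given x for component k
        cond = mu_y + (S_yx @ np.linalg.solve(S_xx, (x_new - mu_x).T)).T
        out += w[:, None] * cond
    total = sum(
        gmm.weights_[k] * multivariate_normal.pdf(
            x_new, mean=gmm.means_[k, :dx], cov=gmm.covariances_[k][:dx, :dx])
        for k in range(gmm.n_components)
    )
    return out / total[:, None]

print(predict(np.array([[0.5], [1.5]])))         # roughly sin(0.5), sin(1.5)
```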


Intelligent Technologies for Interactive Entertainment | 2013

MAGEFACE: Performative Conversion of Facial Characteristics into Speech Synthesis Parameters

Nicolas d’Alessandro; Maria Astrinaki; Thierry Dutoit

In this paper, we illustrate the use of the MAGE performative speech synthesizer by converting facial features measured in realtime with FaceOSC into speech synthesis features such as vocal tract shape or intonation. MAGE is a new software library for using HMM-based speech synthesis in reactive programming environments. MAGE uses a rewritten version of the HTS engine that computes speech audio samples over a two-label window instead of the whole sentence. It is this feature that makes the realtime mapping of facial attributes to synthesis parameters possible.
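As a rough illustration of this kind of conversion, the sketch below rescales hypothetical FaceOSC-style facial measurements into synthesis controls; the feature names, parameter names, and ranges are invented for illustration and do not reflect the actual MAGE parameter set.

```python
# Hypothetical mapping from normalised facial features (0..1) to synthesis
# controls; values, ranges, and names are illustrative assumptions only.
def face_to_synthesis(mouth_height: float, eyebrow_raise: float) -> dict:
    def scale(v: float, lo: float, hi: float) -> float:
        # clamp to [0, 1], then rescale into the target control range
        return lo + max(0.0, min(1.0, v)) * (hi - lo)

    return {
        "f0_hz": scale(eyebrow_raise, 90.0, 220.0),          # raised brows -> higher pitch
        "vocal_tract_scale": scale(mouth_height, 0.9, 1.2),  # wider mouth -> longer tract
    }

# In a reactive setup these controls would be pushed to the synthesiser
# every frame; here we simply print one mapped frame.
print(face_to_synthesis(mouth_height=0.6, eyebrow_raise=0.3))
```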


Archive | 2011

pHTS for Max/MSP: A Streaming Architecture for Statistical Parametric Speech Synthesis

Thierry Dutoit; Maria Astrinaki; Onur Babacan; Benjamin Picart


New Interfaces for Musical Expression | 2012

MAGE - A Platform for Tangible Speech Synthesis

Maria Astrinaki; Nicolas D'Alessandro; Thierry Dutoit


New Interfaces for Musical Expression | 2013

MAGE 2.0: New Features and Its Application in the Development of a Talking Guitar

Maria Astrinaki; Nicolas D'Alessandro; Loïc Reboursière; Alexis Moinet; Thierry Dutoit


Conference of the International Speech Communication Association | 2013

Reactive accent interpolation through an interactive map application

Maria Astrinaki; Junichi Yamagishi; Simon King; Nicolas D'Alessandro; Thierry Dutoit


SSW | 2013

Mage - Reactive articulatory feature control of HMM-based parametric speech synthesis

Maria Astrinaki; Alexis Moinet; Junichi Yamagishi; Korin Richmond; Zhen-Hua Ling; Simon King; Thierry Dutoit


Archive | 2011

sHTS: A Streaming Architecture for Statistical Parametric Speech Synthesis

Maria Astrinaki; Onur Babacan; Nicolas D'Alessandro; Thierry Dutoit


International Conference on Computer Vision Theory and Applications | 2014

Exploration of a stylistic motion space through realtime synthesis

Joëlle Tilmanne; Nicolas D'Alessandro; Maria Astrinaki; Thierry Ravet

Collaboration


Dive into Maria Astrinaki's collaborations.

Top Co-Authors

Junichi Yamagishi

National Institute of Informatics

Simon King

University of Edinburgh
