Fabio Tesser
National Research Council
Publications
Featured research published by Fabio Tesser.
International Journal of Social Robotics | 2013
Aryel Beck; Lola Cañamero; Antoine Hiolle; Luisa Damiano; Piero Cosi; Fabio Tesser; Giacomo Sommavilla
The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.
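The reported effect of head position on perceived arousal and valence can be captured as a simple rule. The sketch below is illustrative only, not the authors' implementation; the function name, the degree-based input, and the qualitative output encoding are assumptions.

```python
def head_position_effect(head_pitch_deg):
    """Qualitative change in perceived arousal and valence for a given
    head pitch (positive = head up, negative = head down, in degrees).

    Encodes the paper's finding: head down lowers both arousal and
    valence, head up raises both.
    """
    if head_pitch_deg > 0:
        direction = 1    # head up: increased arousal and valence
    elif head_pitch_deg < 0:
        direction = -1   # head down: decreased arousal and valence
    else:
        direction = 0    # neutral head position
    return {"arousal": direction, "valence": direction}
```

A key pose generator could apply this rule on top of a base body posture to shift its expressiveness without redesigning the pose itself.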
human robot interaction | 2016
Alexandre Coninx; Paul Baxter; Elettra Oleari; Sara Bellini; Bert P.B. Bierman; Olivier A. Blanson Henkemans; Lola Cañamero; Piero Cosi; Valentin Enescu; Raquel Ros Espinoza; Antoine Hiolle; Rémi Humbert; Bernd Kiefer; Ivana Kruijff-Korbayová; Rosemarijn Looije; Marco Mosconi; Mark A. Neerincx; Giulio Paci; Georgios Patsis; Clara Pozzi; Francesca Sacchitelli; Hichem Sahli; Alberto Sanna; Giacomo Sommavilla; Fabio Tesser; Yiannis Demiris; Tony Belpaeme
Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.
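The core idea of switching between multiple activities within a single interaction can be sketched in a few lines. This is a hypothetical illustration, not the system described in the paper; the activity names, the scalar engagement estimate, and the switching threshold are all invented for the example.

```python
import random

class ActivitySwitcher:
    """Toy model of an adaptive robot that changes activity when the
    child's estimated engagement drops below a threshold."""

    def __init__(self, activities, engagement_threshold=0.5):
        self.activities = list(activities)
        self.threshold = engagement_threshold
        self.current = self.activities[0]

    def step(self, engagement):
        """Keep the current activity while engagement is high; otherwise
        switch to a different one to re-engage the child."""
        if engagement < self.threshold:
            others = [a for a in self.activities if a != self.current]
            self.current = random.choice(others)
        return self.current

switcher = ActivitySwitcher(["quiz", "sorting game", "dance"])
```

In a real system the engagement estimate would come from multimodal perception rather than a hand-set scalar, but the control loop has this shape.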
Archive | 2011
Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Aryel Beck; Piero Cosi; Heriberto Cuayáhuitl; Tomas Dekens; Valentin Enescu; Antoine Hiolle; Bernd Kiefer; Hichem Sahli; Marc Schröder; Giacomo Sommavilla; Fabio Tesser; Werner Verhelst
Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, to interpret and produce synchronized behaviours. The development of such systems involves the integration of potentially many components and ensuring a complex interaction and synchronization between them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). The latter are more promising for building adaptive systems, which is one of the goals of contemporary research systems.
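The contrast between a pipeline and a parallel, loosely coupled architecture can be illustrated with a minimal producer/consumer sketch: independent components publish verbal and non-verbal observations concurrently, and a fusion loop consumes them as they arrive. This is not the ALIZ-E architecture; the component and channel names are invented.

```python
import queue
import threading

def fusion_loop(obs_queue, n_expected):
    """Collect observations produced in parallel by independent
    components (speech recognition, gesture perception, ...)."""
    fused = []
    for _ in range(n_expected):
        channel, payload = obs_queue.get()  # blocks until an item arrives
        fused.append((channel, payload))
    return fused

obs_queue = queue.Queue()
# Each "component" runs in its own thread and posts to the shared queue,
# rather than waiting its turn in a fixed pipeline order.
producers = [
    threading.Thread(target=obs_queue.put, args=(("speech", "hello"),)),
    threading.Thread(target=obs_queue.put, args=(("gesture", "wave"),)),
]
for t in producers:
    t.start()
for t in producers:
    t.join()

result = fusion_loop(obs_queue, n_expected=2)
```

Because the components run concurrently, the arrival order is not fixed; a real system would add timestamps and a synchronization policy on top of this skeleton.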
international conference on social robotics | 2011
Aryel Beck; Lola Cañamero; Luisa Damiano; Giacomo Sommavilla; Fabio Tesser; Piero Cosi
Previous results show that adults are able to interpret different key poses displayed by the robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative) and stance (approaching or avoiding), whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position during an interaction should send intuitive signals. The ALIZ-E target group are children between the ages of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3]. Based on these results, an experiment was conducted to test whether the results of [1] can be applied to children. If so, body postures and head position could be used to convey emotions during an interaction.
Proceedings of 2002 IEEE Workshop on Speech Synthesis, 2002. | 2002
Piero Cosi; Cinzia Avesani; Fabio Tesser; Roberto Gretter; Fabio Pianesi
In this work, a slightly modified version of the original PaIntE model, based on an F0 parametrization with a specially designed approximation function, is considered. The model's parameters have been automatically optimized using a small set of Italian ToBI-labeled sentences. This method drives our ongoing data-based approach to intonation modeling for Italian TTS. The quality of the model has been assessed by numerical measures, and preliminary tests show quite promising results.
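For context, the PaIntE approximation function models an F0 peak as the difference between a rising and a falling sigmoid. The form below is reconstructed from the PaIntE literature, not from this paper, so the exact parametrization the authors modified may differ; treat parameter names and the fixed shape constant `g` as assumptions.

```python
import math

def painte_f0(x, a1, a2, b, c1, c2, d, g=1.0):
    """Approximate F0 at normalized syllable position x.

    b : peak position      d : peak height
    c1, c2 : rise/fall amplitudes
    a1, a2 : rise/fall steepness
    g : fixed shape constant
    """
    rise = c1 / (1.0 + math.exp(-a1 * (b - x) + g))
    fall = c2 / (1.0 + math.exp(-a2 * (x - b) + g))
    return d - rise - fall
```

Fitting the model to data then means optimizing the six parameters per accent so that `painte_f0` best matches the observed F0 contour, which is the kind of automatic optimization the abstract describes.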
intelligent virtual agents | 2005
Piero Cosi; Carlo Drioli; Fabio Tesser; Graziano Tisato
INTERFACE is an integrated software tool implemented in MATLAB and created to speed up the procedure for building an emotive/expressive talking head. Various processing tools, working on dynamic articulatory data physically extracted by an optotracking 3D movement analyzer called ELITE, were implemented to build the animation engine and also to create the correct WAV and FAP files needed for the animation. By the use of INTERFACE, LUCIA, our animated MPEG-4 talking face, can copy a real human by reproducing the movements of passive markers positioned on his face and recorded by an optoelectronic device, or can be directly driven by an emotional XML-tagged input text, thus realizing a true audio/visual emotive/expressive synthesis. LUCIA's voice is based on an Italian version of the FESTIVAL-MBROLA packages, modified for expressive/emotive synthesis by means of an appropriate APML/VSML tagged language.
human robot interaction | 2013
Tony Belpaeme; Paul Baxter; Robin Read; Rachel Wood; Heriberto Cuayáhuitl; Bernd Kiefer; Stefania Racioppa; Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Valentin Enescu; Rosemarijn Looije; Mark A. Neerincx; Yiannis Demiris; Raquel Ros-Espinoza; Aryel Beck; Lola Cañamero; Antoine Hiolle; Matthew Lewis; Ilaria Baroni; Marco Nalin; Piero Cosi; Giulio Paci; Fabio Tesser; Giacomo Sommavilla; Rémi Humbert
conference of the international speech communication association | 2001
Piero Cosi; Fabio Tesser; Roberto Gretter; Cinzia Avesani; Mike Macon
Proc. of Voice Quality: Functions, Analysis and Synthesis (VOQUAL) Workshop | 2003
Carlo Drioli; Graziano Tisato; Piero Cosi; Fabio Tesser
Proceedings of the 18th International Congress of Phonetic Sciences | 2015
Petra Wagner; Antonio Origlia; Cinzia Avesani; Georges Christodoulides; Francesco Cutugno; Mariapaola D'Imperio; David Escudero Mancebo; Barbara Gili Fivela; Anne Lacheret; Bogdan Ludusan; Helena Moniz; Ailbhe Ní Chasaide; Oliver Niebuhr; Lucie Rousier-Vercruyssen; Anne-Catherine Simon; Juraj Simko; Fabio Tesser; Martti Vainio