
Publication


Featured research published by Giacomo Sommavilla.


International Conference on Computer Vision | 2013

A Multi-scale Approach to Gesture Detection and Recognition

Natalia Neverova; Christian Wolf; Giulio Paci; Giacomo Sommavilla; Graham W. Taylor; Florian Nebout

We propose a generalized approach to human gesture recognition based on multiple data modalities such as depth video, articulated pose and speech. In our system, each gesture is decomposed into large-scale body motion and local subtle movements such as hand articulation. The idea of learning at multiple scales is also applied to the temporal dimension, such that a gesture is considered as a set of characteristic motion impulses, or dynamic poses. Each modality is first processed separately in short spatio-temporal blocks, where discriminative data-specific features are either manually extracted or learned. Finally, we employ a Recurrent Neural Network for modeling large-scale temporal dependencies, data fusion and ultimately gesture classification. Our experiments on the 2013 Challenge on Multimodal Gesture Recognition dataset have demonstrated that using multiple modalities at several spatial and temporal scales leads to a significant increase in performance, allowing the model to compensate for errors of the individual classifiers as well as for noise in the separate channels.
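As a rough illustration of the architecture this abstract describes, here is a minimal sketch of per-modality encoders fused by a recurrent layer. PyTorch, the GRU choice, and all class names, dimensions and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the fusion idea above, not the authors' code.
# Three toy modality encoders feed a recurrent layer over "dynamic poses";
# sizes, names, and the GRU choice are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one modality's short spatio-temporal block to a feature vector."""
    def __init__(self, in_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):            # x: (batch, time, in_dim)
        return self.net(x)           # (batch, time, feat_dim)

class MultiModalGestureNet(nn.Module):
    def __init__(self, dims, feat_dim=64, hidden=128, n_classes=20):
        super().__init__()
        # One encoder per modality (e.g. depth video, articulated pose, audio).
        self.encoders = nn.ModuleList(ModalityEncoder(d, feat_dim) for d in dims)
        # Recurrent layer models large-scale temporal dependencies and fuses modalities.
        self.rnn = nn.GRU(feat_dim * len(dims), hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, streams):      # list of (batch, time, dim) tensors
        feats = [enc(x) for enc, x in zip(self.encoders, streams)]
        fused = torch.cat(feats, dim=-1)       # early fusion per time step
        out, _ = self.rnn(fused)
        return self.classifier(out[:, -1])     # class scores per gesture

# Toy forward pass: 3 modalities, 8 "dynamic poses" per gesture.
model = MultiModalGestureNet(dims=[30, 20, 13])
streams = [torch.randn(4, 8, d) for d in (30, 20, 13)]
print(model(streams).shape)          # torch.Size([4, 20])
```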


International Journal of Social Robotics | 2013

Interpretation of Emotional Body Language Displayed by a Humanoid Robot: A Case Study with Children

Aryel Beck; Lola Cañamero; Antoine Hiolle; Luisa Damiano; Piero Cosi; Fabio Tesser; Giacomo Sommavilla

The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.
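As a purely illustrative rendering of the reported effect (head up raises perceived arousal and valence, head down lowers them), the following toy function maps head pitch to the direction of the perceived shift. The function name, units and sign-based rule are assumptions, not the study's model.

```python
# Illustrative-only sketch of the reported head-position effect; the linear,
# sign-based mapping and the degree scale are assumptions for illustration.
def perceived_affect_shift(head_pitch_deg: float) -> dict:
    """head_pitch_deg > 0 means head up, < 0 means head down."""
    direction = 1.0 if head_pitch_deg > 0 else -1.0 if head_pitch_deg < 0 else 0.0
    # Head position shifts arousal (energy) and valence (positivity) together.
    return {"arousal_shift": direction, "valence_shift": direction}

print(perceived_affect_shift(15.0))   # head up: both shifts positive
print(perceived_affect_shift(-15.0))  # head down: both shifts negative
```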


Human-Robot Interaction | 2016

Towards long-term social child-robot interaction: using multi-activity switching to engage young users

Alexandre Coninx; Paul Baxter; Elettra Oleari; Sara Bellini; Bert P.B. Bierman; Olivier A. Blanson Henkemans; Lola Cañamero; Piero Cosi; Valentin Enescu; Raquel Ros Espinoza; Antoine Hiolle; Rémi Humbert; Bernd Kiefer; Ivana Kruijff-Korbayová; Rosemarijn Looije; Marco Mosconi; Mark A. Neerincx; Giulio Paci; Georgios Patsis; Clara Pozzi; Francesca Sacchitelli; Hichem Sahli; Alberto Sanna; Giacomo Sommavilla; Fabio Tesser; Yiannis Demiris; Tony Belpaeme

Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.
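A minimal sketch of the multi-activity switching idea follows, assuming a scalar engagement estimate and a simple switch-on-boredom rule; the activity names, threshold and engagement dynamics are illustrative assumptions, not the described system.

```python
# Minimal sketch of multi-activity switching within one interaction session,
# not the authors' system. Engagement is modeled as a noisy scalar.
import random

class Activity:
    def __init__(self, name):
        self.name = name

    def step(self, child_state):
        # One interaction turn; returns an updated engagement estimate.
        return max(0.0, min(1.0, child_state["engagement"] + random.uniform(-0.2, 0.1)))

class ActivityManager:
    """Switches between activities when the child's engagement drops."""
    def __init__(self, activities, boredom_threshold=0.4):
        self.activities = activities
        self.threshold = boredom_threshold
        self.current = activities[0]

    def run_turn(self, child_state):
        child_state["engagement"] = self.current.step(child_state)
        if child_state["engagement"] < self.threshold:
            # Engagement dropped: switch to a different activity.
            others = [a for a in self.activities if a is not self.current]
            self.current = random.choice(others)
            child_state["engagement"] = 0.6   # novelty bump (assumption)
        return self.current.name

manager = ActivityManager([Activity("quiz"), Activity("dance"), Activity("sorting game")])
state = {"engagement": 0.8}
for turn in range(6):
    print(turn, manager.run_turn(state), round(state["engagement"], 2))
```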


Archive | 2011

An Event-Based Conversational System for the Nao Robot

Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Aryel Beck; Piero Cosi; Heriberto Cuayáhuitl; Tomas Dekens; Valentin Enescu; Antoine Hiolle; Bernd Kiefer; Hichem Sahli; Marc Schröder; Giacomo Sommavilla; Fabio Tesser; Werner Verhelst

Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, to interpret and produce synchronized behaviours. The development of such systems involves integrating potentially many components and ensuring complex interaction and synchronization between them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). The latter are more promising for building adaptive systems, which is one of the goals of contemporary research systems.
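To make the contrast with pipeline architectures concrete, here is a minimal sketch of an event-based design in which loosely coupled components subscribe to event types and react independently; the component and event names are illustrative assumptions, not the Nao system's actual interfaces.

```python
# Minimal sketch of an event-based (loosely coupled) architecture: components
# subscribe to event types and react independently, rather than being chained
# in a fixed pipeline. Names are illustrative assumptions.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Dispatch the event to every interested component.
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()

# A dialogue manager reacts to verbal input while a gesture planner reacts to
# nonverbal events; both observe the same bus (here: same-thread dispatch).
bus.subscribe("speech.recognized", lambda p: print("DialogueManager: interpret", p))
bus.subscribe("user.gaze", lambda p: print("GesturePlanner: align gaze", p))
bus.subscribe("speech.recognized", lambda p: print("Logger:", p))

bus.publish("speech.recognized", {"text": "hello robot"})
bus.publish("user.gaze", {"direction": "left"})
```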


International Conference on Social Robotics | 2011

Children interpretation of emotional body language displayed by a robot

Aryel Beck; Lola Cañamero; Luisa Damiano; Giacomo Sommavilla; Fabio Tesser; Piero Cosi

Previous results show that adults are able to interpret different key poses displayed by the robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative) and stance (approaching or avoiding), whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position during an interaction should send intuitive signals. The ALIZ-E target group is children between the ages of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3]. Based on these results, an experiment was conducted to test whether the findings of [1] also apply to children. If so, body postures and head position could be used to convey emotions during an interaction.


International Workshop on Evaluation of Natural Language and Speech Tools for Italian | 2013

SAD-Based Italian Forced Alignment Strategies

Giulio Paci; Giacomo Sommavilla; Piero Cosi

The Evalita 2011 contest proposed two forced alignment tasks, word and phone segmentation, and two modalities, “open” and “closed”. A system for each combination of task and modality was proposed and submitted for evaluation. The direct use of Silence/Activity Detection (SAD) in forced alignment was tested and showed positive effects in the acoustic model training step, especially when dealing with long pauses. The exploitation of multiple forced alignment systems through a voting procedure was also tested.
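As a sketch of what a voting procedure over multiple aligners could look like, the following toy function takes per-system phone segmentations and votes with the per-boundary median; real systems may vote differently, and this simplification is an assumption for illustration.

```python
# Toy voting over several forced aligners' outputs, not the paper's procedure.
# Assumes all systems produce the same phone sequence; the per-boundary
# median is a simplifying assumption.
from statistics import median

def vote_alignment(alignments):
    """alignments: list of [(phone, start_s, end_s), ...], one per system.
    Returns the median-voted segmentation."""
    voted = []
    for segments in zip(*alignments):
        phones = {p for p, _, _ in segments}
        assert len(phones) == 1, "systems must agree on the phone sequence"
        phone = segments[0][0]
        start = median(s for _, s, _ in segments)
        end = median(e for _, _, e in segments)
        voted.append((phone, start, end))
    return voted

sys_a = [("s", 0.00, 0.12), ("i", 0.12, 0.25)]
sys_b = [("s", 0.01, 0.10), ("i", 0.10, 0.26)]
sys_c = [("s", 0.00, 0.11), ("i", 0.11, 0.24)]
print(vote_alignment([sys_a, sys_b, sys_c]))
```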


Proceedings of the 5th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA 2007) | 2007

SMS-Festival: A New TTS Framework

Giacomo Sommavilla; Piero Cosi; Carlo Drioli; Giulio Paci

A new sinusoidal-model-based engine for the FESTIVAL TTS system is described. It performs the DSP (Digital Signal Processing) operations of a diphone-based concatenative TTS system (i.e., converting a phonetic input into an audio signal), taking as input the NLP (Natural Language Processing) data computed by FESTIVAL: a sequence of phonemes with length and intonation values elaborated from the text script. The engine aims to be an alternative to MBROLA and makes use of the SMS (“Spectral Modeling Synthesis”) representation, implemented with the CLAM (C++ Library for Audio and Music) framework. The program will be released under an open-source license (GPL) and will compile wherever gcc and CLAM do (i.e., on Windows, Linux and Mac OS X operating systems).
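As a toy illustration of the DSP stage described here (phoneme, duration and pitch in; audio out), the following sketch renders each voiced phoneme as a single phase-continuous sinusoid. Real SMS decomposes speech into sinusoidal partials plus a stochastic residual and uses diphone spectral data; this single-sine rendering and all names are simplifying assumptions.

```python
# Toy rendering of FESTIVAL-style (phoneme, duration, pitch) input to audio.
# A single phase-continuous sinusoid per phoneme; real SMS synthesis uses
# many partials plus a stochastic residual. All names are illustrative.
import math
import struct
import wave

SAMPLE_RATE = 16000

def synthesize(segments, path="out.wav"):
    """segments: list of (phoneme, duration_s, f0_hz); f0 None = silence."""
    samples, phase = [], 0.0
    for phoneme, dur, f0 in segments:
        for _ in range(int(dur * SAMPLE_RATE)):
            if f0 is None:
                samples.append(0.0)
            else:
                phase += 2 * math.pi * f0 / SAMPLE_RATE   # phase-continuous
                samples.append(0.3 * math.sin(phase))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# Short pause, then /a/ at 120 Hz and /i/ at 140 Hz (constant f0 per phone).
synthesize([("pau", 0.1, None), ("a", 0.25, 120.0), ("i", 0.25, 140.0)])
```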


Human-Robot Interaction | 2013

Multimodal child-robot interaction: building social bonds

Tony Belpaeme; Paul Baxter; Robin Read; Rachel Wood; Heriberto Cuayáhuitl; Bernd Kiefer; Stefania Racioppa; Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Valentin Enescu; Rosemarijn Looije; Mark A. Neerincx; Yiannis Demiris; Raquel Ros-Espinoza; Aryel Beck; Lola Cañamero; Antoine Hiolle; Matthew Lewis; Ilaria Baroni; Marco Nalin; Piero Cosi; Giulio Paci; Fabio Tesser; Giacomo Sommavilla; Rémi Humbert


WOCCI | 2012

Spoken language processing in a conversational system for child-robot interaction.

Ivana Kruijff-Korbayová; Heriberto Cuayáhuitl; Bernd Kiefer; Marc Schröder; Piero Cosi; Giulio Paci; Giacomo Sommavilla; Fabio Tesser; Hichem Sahli; Georgios Athanasopoulos; Weiyi Wang; Valentin Enescu; Werner Verhelst


WOCCI | 2014

Comparing open source ASR toolkits on Italian children speech.

Piero Cosi; Mauro Nicolao; Giulio Paci; Giacomo Sommavilla; Fabio Tesser

Collaboration


Dive into Giacomo Sommavilla's collaborations.

Top Co-Authors

Piero Cosi, National Research Council
Fabio Tesser, National Research Council
Giulio Paci, National Research Council
Valentin Enescu, Vrije Universiteit Brussel
Aryel Beck, University of Hertfordshire
Hichem Sahli, Vrije Universiteit Brussel
Antoine Hiolle, University of Hertfordshire
Lola Cañamero, University of Hertfordshire
Paul Baxter, Plymouth University