

Publication


Featured research published by Piero Cosi.


Journal of New Music Research | 1994

Auditory Modelling and Self-Organizing Neural Networks for Timbre Classification

Piero Cosi; Giovanni De Poli; Giampaolo Lauzzana

A timbre classification system based on auditory processing and Kohonen self-organizing neural networks is described. Preliminary results are given on a simple classification experiment involving 12 instruments in both clean and degraded conditions.
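The core of the approach, a Kohonen self-organizing map trained on feature vectors and queried via its best-matching unit, can be sketched as follows. This is an illustrative sketch only: the function names, grid size, and training schedule are assumptions, and the paper's auditory-model front-end is not reproduced.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small Kohonen SOM on row vectors in `data`; returns the weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows, cols, data.shape[1]))
    # Grid coordinates, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.1   # shrinking neighbourhood
        for x in data:
            # Best-matching unit: node whose weights are closest to x.
            dists = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
    return w

def classify(w, x):
    """Return the grid position of the best-matching unit for x."""
    return np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)),
                            w.shape[:2])
```

After training, instruments whose feature vectors are similar end up mapped to nearby grid units, which is what makes the map usable as a classifier.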


International Conference on Multimodal Interfaces | 2002

Labial coarticulation modeling for realistic facial animation

Piero Cosi; Emanuela Magno Caldognetto; Giulio Perin; Claudio Zmarich

A modified version of the coarticulation model proposed by Cohen and Massaro (1993) is described. A semi-automatic minimization technique, working on real kinematic data acquired by the ELITE opto-electronic system, was used to train the dynamic characteristics of the model. Finally, the model was successfully applied to GRETA, an Italian talking head, and examples are illustrated to show the naturalness of the resulting animation technique.
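Cohen and Massaro's model blends per-segment articulatory targets through overlapping dominance functions that decay away from each segment's temporal centre. The sketch below shows that core idea only; the function names and parameter values are assumptions, not the trained values from the paper, and the paper's modifications to the model are not reproduced.

```python
import numpy as np

def dominance(t, center, alpha=1.0, theta=1.0):
    """Negative-exponential dominance, peaking at the segment centre."""
    return alpha * np.exp(-theta * np.abs(t - center))

def lip_trajectory(t, segments):
    """Dominance-weighted average of segment targets.

    `segments` is a list of (target, center, alpha, theta) tuples,
    e.g. hypothetical lip-opening targets for successive phones."""
    num = np.zeros_like(t, dtype=float)
    den = np.zeros_like(t, dtype=float)
    for target, center, alpha, theta in segments:
        d = dominance(t, center, alpha, theta)
        num += d * target
        den += d
    return num / den
```

A semi-automatic fit, as in the paper, would then tune the per-segment parameters against measured articulatory trajectories rather than using fixed values.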


International Journal of Social Robotics | 2013

Interpretation of Emotional Body Language Displayed by a Humanoid Robot: A Case Study with Children

Aryel Beck; Lola Cañamero; Antoine Hiolle; Luisa Damiano; Piero Cosi; Fabio Tesser; Giacomo Sommavilla

The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.


Language and Speech | 1988

English compound versus non-compound noun phrases in discourse: an acoustic and perceptual study.

Edda Farnetani; Carol Taylor Torsello; Piero Cosi

The aim of the present paper is to describe, in acoustic and perceptual terms, the prosodic pattern distinguishing English compound and non-compound noun phrases, and to determine how information structure and position affect the production and perception of the two forms. The study is based on the performance of ten English-speaking subjects (five speakers and five listeners). The test utterances were three minimal-pair noun phrases of two constituents, excised from conversational readings. These were analyzed acoustically, and submitted to the listeners for semantic identification. The results indicate that the distinction, when effective, lies primarily in the different prominence pattern: a sequence of an accented constituent followed by an unaccented one in compounds, and of two accented constituents (the second heard as stronger than the first) in non-compounds. It is also based on a different degree of internal cohesion, stronger in compounds and weaker in non-compounds. F0, associated or trading with intensity, has proved to be the main perceptual cue to this distinction, even more than duration, the major differentiating parameter in production. When an item is excised from the context, the perception of the intended category depends heavily on the communicative importance it had in the discourse. This means that information structure, through its effects on accentuation, becomes the determining factor in the perception of the distinction. The distinctive accentual pattern weakens or is completely neutralized when the test items convey old information. The degree of deaccentuation also seems to be affected by an immediately following focus, and, to a certain extent, by position. The data are viewed in the framework of speaker-listener interaction, and it is argued that deaccentuation, as well as accentuation, can have a communicative function.


Speech Communication | 2004

Modifications of phonetic labial targets in emotive speech: effects of the co-production of speech and emotions

Emanuela Magno Caldognetto; Piero Cosi; Carlo Drioli; Graziano Tisato; Federica Cavicchio

This paper describes how the visual and acoustic characteristics of some Italian phones (/'a/, /b/, /v/) are modified in emotive speech by the expression of joy, surprise, sadness, disgust, anger, and fear. In this research we specifically analyze the interaction between labial configurations, peculiar to each emotion, and the articulatory lip movements of the Italian vowel /'a/ and consonants /b/ and /v/, defined by phonetic-phonological rules. This interaction was quantified examining the variations of the following parameters: lip opening, upper and lower lip vertical displacements, lip rounding, anterior/posterior movements (protrusion) of upper lip and lower lip, left and right lip corner horizontal displacements, left and right corner vertical displacements, and asymmetry parameters calculated as the difference between right and left corner position along the horizontal and the vertical axes. Moreover, we present the correlations between articulatory data and the spectral features of the co-produced acoustic signal.


Proceedings 1998 IEEE 4th Workshop Interactive Voice Technology for Telecommunications Applications. IVTTA '98 (Cat. No.98TH8376) | 1998

Connected digit recognition experiments with the OGI Toolkit's neural network and HMM-based recognizers

Piero Cosi; John-Paul Hosom; Johan Shalkwyk; Stephen Sutton; Ronald A. Cole

This paper describes a series of experiments that compare different approaches to training a speaker-independent continuous-speech digit recognizer using the CSLU Toolkit. Comparisons are made between the hidden Markov model (HMM) and neural network (NN) approaches. In addition, a description of the CSLU Toolkit research environment is given. The CSLU Toolkit is a research and development software environment that provides a powerful and flexible tool for creating and using spoken language systems for telephone and PC applications. In particular, the CSLU-HMM, the CSLU-NN, and the CSLU-FBNN development environments, with which our experiments were implemented, are described in detail and recognition results are compared. Our speech corpus is OGI 30K-Numbers, which is a collection of spontaneous ordinal and cardinal numbers, continuous digit strings and isolated digit strings. The utterances were recorded by having a large number of people recite their ZIP code, street address, or other numeric information over the telephone. This corpus represents a very noisy and difficult recognition task. Our best results (98% word recognition, 92% sentence recognition), obtained with the FBNN architecture, suggest the effectiveness of the CSLU Toolkit in building real-life speech recognition systems.


Human-Robot Interaction | 2016

Towards long-term social child-robot interaction: using multi-activity switching to engage young users

Alexandre Coninx; Paul Baxter; Elettra Oleari; Sara Bellini; Bert P.B. Bierman; Olivier A. Blanson Henkemans; Lola Cañamero; Piero Cosi; Valentin Enescu; Raquel Ros Espinoza; Antoine Hiolle; Rémi Humbert; Bernd Kiefer; Ivana Kruijff-Korbayová; Rosemarijn Looije; Marco Mosconi; Mark A. Neerincx; Giulio Paci; Georgios Patsis; Clara Pozzi; Francesca Sacchitelli; Hichem Sahli; Alberto Sanna; Giacomo Sommavilla; Fabio Tesser; Yiannis Demiris; Tony Belpaeme

Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.


Archive | 2011

An Event-Based Conversational System for the Nao Robot

Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Aryel Beck; Piero Cosi; Heriberto Cuayáhuitl; Tomas Dekens; Valentin Enescu; Antoine Hiolle; Bernd Kiefer; Hichem Sahli; Marc Schröder; Giacomo Sommavilla; Fabio Tesser; Werner Verhelst

Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, to interpret and produce synchronized behaviours. The development of such systems involves the integration of potentially many components and ensuring a complex interaction and synchronization between them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). The latter are more promising for building adaptive systems, which is one of the goals of contemporary research systems.


Agent-Directed Simulation | 2004

Evaluation of Synthetic Faces: Human Recognition of Emotional Facial Displays

Erica Costantini; Fabio Pianesi; Piero Cosi

Despite the growing attention towards the communicative adequacy of embodied conversational agents (ECAs), standards for their assessment are still missing. This paper reports on a methodology for evaluating the adequacy of facial displays in the expression of some basic emotional states, based on a recognition task. We consider recognition rates and error distribution, both in absolute terms and with respect to a human model. As to data analysis, we propose to resort to standard loglinear techniques and to information-theoretic ones. Results from an experiment are presented and the potentials of the methodology are discussed.


Archive | 1996

Lips and Jaw Movements for Vowels and Consonants: Spatio-Temporal Characteristics and Bimodal Recognition Applications

Piero Cosi; Emanuela Magno Caldognetto

This research focuses on the spatio-temporal characteristics of lips and jaw movements and on their relevance for lip-reading, bimodal communication theory and bimodal recognition applications. 3D visible articulatory targets for vowels and consonants are proposed. Relevant modifications of the spatio-temporal consonant targets due to coarticulatory phenomena are exemplified. When visual parameters are added to acoustic ones as inputs to a Recurrent Neural Network system, high recognition results in plosive classification experiments are obtained.

Collaboration


Dive into Piero Cosi's collaborations.

Top Co-Authors

Fabio Tesser | National Research Council
Graziano Tisato | National Research Council
Giulio Paci | National Research Council
Claudio Zmarich | National Research Council
Valentin Enescu | Vrije Universiteit Brussel