Emanuela Magno Caldognetto
University of Padua
Publications
Featured research published by Emanuela Magno Caldognetto.
international conference on multimodal interfaces | 2002
Piero Cosi; Emanuela Magno Caldognetto; Giulio Perin; Claudio Zmarich
A modified version of the coarticulation model proposed by Cohen and Massaro (1993) is described. A semi-automatic minimization technique, working on real kinematic data acquired by the ELITE opto-electronic system, was used to train the dynamic characteristics of the model. Finally, the model was successfully applied to GRETA, an Italian talking head, and examples illustrate the naturalness of the resulting animation technique.
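In the Cohen–Massaro model that this work extends, each segment has an articulatory target and a negative-exponential dominance function, and the realized trajectory is the dominance-weighted average of the targets of neighbouring segments. A minimal sketch, assuming illustrative targets and parameter values (the actual values were fitted to the ELITE data):

```python
import math

def dominance(t, center, alpha, theta, c):
    """Dominance of a segment at time t, peaking at the segment's center."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def trajectory(t, segments):
    """Dominance-weighted average of segment targets at time t.

    Each segment is (target, center, alpha, theta, c); overlapping
    dominance functions make neighbouring segments coarticulate smoothly.
    """
    num = sum(dominance(t, ce, a, th, c) * tgt for tgt, ce, a, th, c in segments)
    den = sum(dominance(t, ce, a, th, c) for _, ce, a, th, c in segments)
    return num / den

# Two hypothetical segments for a lip-opening parameter (mm):
segs = [(0.0, 0.10, 1.0, 40.0, 1.0),   # /b/: closed lips, target 0 mm
        (18.0, 0.25, 1.0, 40.0, 1.0)]  # /a/: open lips, target 18 mm
v_b = trajectory(0.10, segs)  # near the /b/ target
v_a = trajectory(0.25, segs)  # near the /a/ target
```

Training the "dynamic characteristics" of the model then amounts to minimizing the distance between such synthesized trajectories and the measured kinematic ones over the per-segment parameters.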
Acta Neurologica Scandinavica | 1984
Gianfranco Denes; Emanuela Magno Caldognetto; Carlo Semenza; Kyriaki Vagges; Marina Zettin
Discrimination and identification of emotions in the human voice were studied in normal controls and in four groups of brain-damaged subjects, subdivided along the right/left and anterior/posterior dimensions. Results showed a failure of right-brain-damaged patients, the right posterior group being significantly worse than all the other groups. Qualitative differences emerged as well: both a conceptual and an acoustic deficit seem to contribute to the right posterior patients' performance.
Speech Communication | 2004
Emanuela Magno Caldognetto; Piero Cosi; Carlo Drioli; Graziano Tisato; Federica Cavicchio
This paper describes how the visual and acoustic characteristics of some Italian phones (/’a/, /b/, /v/) are modified in emotive speech by the expression of joy, surprise, sadness, disgust, anger, and fear. In this research we specifically analyze the interaction between labial configurations, peculiar to each emotion, and the articulatory lip movements of the Italian vowel /’a/ and consonants /b/ and /v/, defined by phonetic-phonological rules. This interaction was quantified by examining the variations of the following parameters: lip opening, upper and lower lip vertical displacements, lip rounding, anterior/posterior movements (protrusion) of upper lip and lower lip, left and right lip corner horizontal displacements, left and right corner vertical displacements, and asymmetry parameters calculated as the difference between right and left corner position along the horizontal and the vertical axes. Moreover, we present the correlations between articulatory data and the spectral features of the co-produced acoustic signal.
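A few of the parameters listed above can be derived directly from marker coordinates. A minimal sketch, assuming four hypothetical 2-D markers expressed as displacements from a rest frame (the marker names and coordinate frame are illustrative, not the actual ELITE configuration):

```python
def lip_parameters(ul, ll, rc, lc):
    """Derive three articulatory parameters from four 2-D marker
    displacements, each an (x, y) pair in mm: upper lip (ul), lower
    lip (ll), and the right (rc) and left (lc) lip corners."""
    return {
        "lip_opening": ul[1] - ll[1],  # vertical distance between the lip markers
        "asym_x": rc[0] - lc[0],       # right-minus-left corner, horizontal axis
        "asym_y": rc[1] - lc[1],       # right-minus-left corner, vertical axis
    }

# Hypothetical frame: lips apart, slightly asymmetric corner raising.
p = lip_parameters(ul=(0.0, 4.0), ll=(0.0, -9.0),
                   rc=(2.0, 1.0), lc=(1.5, 1.2))
```

With per-frame values like these, the emotion-specific configurations can be compared against the neutral productions of the same phones.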
language resources and evaluation | 2007
Isabella Poggi; Federica Cavicchio; Emanuela Magno Caldognetto
Irony has been studied by famous scholars across the centuries, as well as more recently in cognitive and pragmatic research; its prosodic and visual signals have also been studied. Irony is a communicative act in which the Sender's literal goal is to communicate a meaning x, but through this meaning the Sender has the goal of communicating another meaning, y, which contrasts with, and is sometimes even opposite to, meaning x; in this case we have antiphrastic irony. An ironic act is thus an indirect speech act, in that its true meaning, the one really intended by the Sender, is not the one conveyed by the literal meaning of the communicative act: it must be understood through inferences by the Addressee. The ironic statement may concern an event, an object, or a person; in the latter case, that person may be the Addressee, a third person, or even the Sender (self-irony). In this paper we define irony in terms of a goal-and-belief view of communication, present an annotation scheme, the ANVIL-Score, and illustrate aspects of its expressive power by applying it to a particular case: ironic communication in a judicial debate.
agent-directed simulation | 2004
Emanuela Magno Caldognetto; Piero Cosi; Federica Cavicchio
The aim of this research is the phonetic-articulatory description of emotive speech, achievable by studying labial movements, which are the product of compliance with both phonetic-phonological constraints and the lip configuration required for the visual encoding of emotions. We analyse the interaction between the labial configurations peculiar to six emotions (anger, disgust, joy, fear, surprise and sadness) and the articulatory lip movements defined by phonetic-phonological rules, specific to the vowel /’a/ and the consonants /b/ and /v/.
adaptive agents and multi-agents systems | 2003
Isabella Poggi; Catherine Pelachaud; Emanuela Magno Caldognetto
We aim at creating expressive Embodied Conversational Agents (ECAs) able to communicate multimodally with a user or with other ECAs. In this paper we focus on the Gestural Mind Markers, that is, those gestures that convey information on the Speaker's Mind; we present the ANVIL-SCORE, a tool for analyzing and classifying multimodal data that is a semantically augmented version of Kipp's ANVIL [2].
Archive | 2011
Jean-Claude Martin; Laurence Devillers; Amaryllis Raouzaiou; George Caridakis; Zsófia Ruttkay; Catherine Pelachaud; Maurizio Mancini; Radek Niewiadomski; Hannes Pirker; Brigitte Krenn; Isabella Poggi; Emanuela Magno Caldognetto; Federica Cavicchio; Giorgio Merola; Alejandra García Rojas; Frédéric Vexo; Daniel Thalmann; Arjan Egges; Nadia Magnenat-Thalmann
In order to be believable, embodied conversational agents (ECAs) must show expression of emotions in a consistent and natural-looking way across modalities. The ECA has to be able to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires one to study and represent emotions and the coordination of modalities during non-basic, realistic human behaviour; to define languages for representing such behaviours to be displayed by the ECA; and to have access to mono-modal representations such as gesture repositories. This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent. Designers of an affective agent need to know how it should coordinate its facial expression, speech, gestures and other modalities in order to show emotion. This synchronisation of modalities is a main feature of emotions.
Neuropsychologia | 1990
Marta Panzeri; Carlo Semenza; Emanuela Magno Caldognetto; Kyriaki Vagges
Hesitation analysis of spontaneous production from three neologistic jargonaphasics is described. The results appear to differ from patient to patient as far as the relative proportion in the number and length of pauses before correct words and mistakes is concerned. Generalization of the conclusion beyond single cases may not therefore be legitimate.
international conference on spoken language processing | 1996
Piero Cosi; Emanuela Magno Caldognetto; Franco Ferrero; M. Dugatto; Kyriaki Vagges
A speaker-independent bimodal phonetic classification experiment on Italian plosive consonants is described. The phonetic classification scheme is based on a feedforward recurrent back-propagation neural network working on audio and visual information. The speech signal is processed by an auditory model producing spectral-like parameters, while the visual signal is processed by specialized hardware, called ELITE, computing lip and jaw kinematics parameters.
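The bimodal scheme fuses the two streams into one input vector before classification. A minimal sketch of such early fusion, assuming illustrative feature dimensions and a plain feedforward topology (the paper's network was recurrent, and the weights here are untrained random values):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(audio_feats, visual_feats, params):
    """One forward pass over fused audio-visual input: concatenate the
    spectral-like audio parameters and the lip/jaw kinematic parameters,
    then map them through a hidden layer to class probabilities."""
    x = np.concatenate([audio_feats, visual_feats])   # early fusion
    h = np.tanh(params["W1"] @ x + params["b1"])      # hidden layer
    z = params["W2"] @ h + params["b2"]               # class scores
    e = np.exp(z - z.max())                           # numerically stable softmax
    return e / e.sum()

# Hypothetical sizes: 24 auditory-model parameters, 8 kinematic parameters,
# 6 plosive classes (/p b t d k g/).
n_audio, n_visual, n_hidden, n_classes = 24, 8, 16, 6
params = {
    "W1": rng.normal(0.0, 0.1, (n_hidden, n_audio + n_visual)),
    "b1": np.zeros(n_hidden),
    "W2": rng.normal(0.0, 0.1, (n_classes, n_hidden)),
    "b2": np.zeros(n_classes),
}
probs = forward(rng.normal(size=n_audio), rng.normal(size=n_visual), params)
```

Concatenating the modalities lets a single network learn audio-visual interactions, at the cost of requiring both streams to be time-aligned frame by frame.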
Archive | 1996
Isabella Poggi; Emanuela Magno Caldognetto