Publication


Featured research published by Costanza Navarretta.


Language Resources and Evaluation | 2007

The MUMIN coding scheme for the annotation of feedback, turn management and sequencing phenomena

Jens Allwood; Loredana Cerrato; Kristiina Jokinen; Costanza Navarretta; Patrizia Paggio

This paper deals with a multimodal annotation scheme dedicated to the study of gestures in interpersonal communication, with particular regard to the role played by multimodal expressions for feedback, turn management and sequencing. The scheme has been developed under the framework of the MUMIN network and tested on the analysis of multimodal behaviour in short video clips in Swedish, Finnish and Danish. The preliminary results obtained in these studies show that the reliability of the categories defined in the scheme is acceptable, and that the scheme as a whole constitutes a versatile analysis tool for the study of multimodal communication behaviour.


International Conference on Machine Learning | 2008

Distinguishing the Communicative Functions of Gestures

Kristiina Jokinen; Costanza Navarretta; Patrizia Paggio

This paper deals with the results of a machine learning experiment conducted on annotated gesture data from two case studies (Danish and Estonian). The data concern mainly facial displays, which are annotated with attributes relating to shape and dynamics as well as communicative function. The results of the experiments show that the granularity of the attributes used seems appropriate for the task of distinguishing the desired communicative functions. This is a promising result in view of a future automation of the annotation task.


International Conference on Universal Access in Human-Computer Interaction | 2011

Head movements, facial expressions and feedback in Danish first encounters interactions: a culture-specific analysis

Patrizia Paggio; Costanza Navarretta

This study deals with non-verbal behaviour in a video-recorded and manually annotated corpus of first encounters in Danish. It presents an analysis of head movements and facial expressions in the data, in particular their use to express feedback, and it discusses the results in the light of aspects of Danish culture that seem to privilege rather unconventional and non-emotional behaviour. The data provided can form the basis of multi-cultural studies where parallels are drawn to similar interactions in other languages.


International Conference on Computational Linguistics | 2004

Resolving individual and abstract anaphora in texts and dialogues

Costanza Navarretta

This paper describes an extension of the DAR-algorithm (Navarretta, 2004) for resolving intersentential pronominal anaphors referring to individual and abstract entities in texts and dialogues. In DAR, individual entities are resolved by combining models which identify a high degree of salience with a high degree of givenness (topicality) of entities in the hearer's cognitive model, e.g. (Grosz et al., 1995), with Hajicova et al.'s (1990) salience account, which assigns the highest degree of salience to entities in the focal part of an utterance in Information Structure terms, which often introduce new information in discourse. Anaphors referring to abstract entities are resolved with an extension of the algorithm presented by Eckert and Strube (2000). The extended DAR-algorithm accounts for differences in the resolution mechanisms of different types of Danish pronouns. Manual tests of the algorithm show that DAR performs better than other resolution algorithms on the same data.
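The general idea of salience-based resolution can be illustrated with a minimal sketch. This is not the DAR-algorithm itself (DAR additionally combines topicality with Information Structure focus and handles abstract anaphora); the function, feature names and salience scores below are invented for illustration only.

```python
# Hypothetical sketch of salience-based pronoun resolution: among the
# candidate antecedents that agree with the pronoun in gender and number,
# pick the one with the highest salience score.

def resolve_pronoun(pronoun_features, candidates):
    """candidates: list of dicts with invented keys
    'referent', 'gender', 'number', 'salience' (higher = more salient)."""
    compatible = [
        c for c in candidates
        if c["gender"] == pronoun_features["gender"]
        and c["number"] == pronoun_features["number"]
    ]
    if not compatible:
        return None
    return max(compatible, key=lambda c: c["salience"])["referent"]

candidates = [
    {"referent": "Anna", "gender": "fem", "number": "sg", "salience": 3},
    {"referent": "Peter", "gender": "masc", "number": "sg", "salience": 5},
    {"referent": "the children", "gender": "neut", "number": "pl", "salience": 1},
]
print(resolve_pronoun({"gender": "fem", "number": "sg"}, candidates))  # -> Anna
```

The real algorithm ranks entities dynamically as the discourse unfolds rather than using fixed scores; the sketch only shows the agreement-filter-plus-ranking pattern.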


COST'10 Proceedings of the 2010 international conference on Analysis of Verbal and Nonverbal Communication and Enactment | 2010

Annotating non-verbal behaviours in informal interactions

Costanza Navarretta

This paper deals with the annotations of non-verbal behaviours in a Danish multimodal corpus of naturally occurring interactions between people who are well-acquainted. The main goal of this work is to provide formally annotated data for describing and modelling various communicative phenomena in this corpus type. In the paper we describe the annotation model and present a first analysis of the annotated data focusing on feedback-related non-verbal behaviours. The data confirm that head movements are the most common feedback-related non-verbal behaviours, but they indicate also that there are differences in the way feedback is expressed in two-party and in three-party interactions.


Journal on Multimodal User Interfaces | 2013

Head movements, facial expressions and feedback in conversations: empirical evidence from Danish multimodal data

Patrizia Paggio; Costanza Navarretta

This article deals with multimodal feedback in two Danish multimodal corpora, i.e., a collection of map-task dialogues and a corpus of free conversations in first encounters between pairs of subjects. Machine learning techniques are applied to both sets of data to investigate various relations between the non-verbal behaviour—more specifically head movements and facial expressions—and speech with regard to the expression of feedback. In the map-task data, we study the extent to which the dialogue act type of linguistic feedback expressions can be classified automatically based on the non-verbal features. In the conversational data, on the other hand, non-verbal and speech features are used together to distinguish feedback from other multimodal behaviours. The results of the two sets of experiments indicate in general that head movements, and to a lesser extent facial expressions, are important indicators of feedback, and that gestures and speech disambiguate each other in the machine learning process.


Knowledge Based Systems | 2014

Predicting emotions in facial expressions from the annotations in naturally occurring first encounters

Costanza Navarretta

This paper deals with the automatic identification of emotions from the manual annotations of the shape and functions of facial expressions in a Danish corpus of video-recorded naturally occurring first encounters. More specifically, a support vector classifier is trained on the corpus annotations to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. We also investigate the relation between emotions and the communicative functions of facial expressions. Both emotion labels and their values in a three-dimensional space are identified. The three dimensions are Pleasure, Arousal and Dominance.

The results of our experiments indicate that the classifiers perform well in identifying emotions from the coarse-grained descriptions of facial expressions and co-occurring speech. The communicative functions of facial expressions also contribute to emotion identification. The results are promising because the emotion label list comprises fine-grained emotions and affective states in naturally occurring conversations, while the shape features of facial expressions are very coarse-grained. The classification results also confirm that the annotation scheme, which combines a discrete and a dimensional description, and the manual annotations produced according to it are reliable and can be used to model and test emotional behaviours in cognitive infocommunication systems.
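The dimensional side of the scheme can be illustrated with a toy sketch: each emotion label is placed at a point in the three Pleasure-Arousal-Dominance (PAD) dimensions mentioned above, and a new observation is labelled by its nearest centroid. The PAD coordinates below are invented for illustration; the paper's actual classifier is a support vector machine trained on manual shape and speech annotations, not on PAD coordinates.

```python
import math

# Hypothetical PAD centroids (pleasure, arousal, dominance); values invented.
PAD_CENTROIDS = {
    "joy":     (0.8, 0.5, 0.4),
    "anger":   (-0.6, 0.7, 0.3),
    "sadness": (-0.6, -0.4, -0.3),
    "neutral": (0.0, 0.0, 0.0),
}

def nearest_emotion(pad):
    """Return the emotion label whose PAD centroid is closest (Euclidean)."""
    return min(PAD_CENTROIDS, key=lambda lab: math.dist(pad, PAD_CENTROIDS[lab]))

print(nearest_emotion((0.7, 0.4, 0.5)))  # -> joy
```

Combining such a dimensional description with discrete labels, as the annotation scheme does, lets one measure how far apart two emotion judgements are rather than only whether they match.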


International Conference on Multimodal Interfaces | 2013

Predicting speech overlaps from speech tokens and co-occurring body behaviours in dyadic conversations

Costanza Navarretta

This paper deals with speech overlaps in dyadic video-recorded spontaneous conversations. Speech overlaps are quite common in everyday conversations, and it is therefore important to study their occurrences in different communicative situations and settings and to model them in applied communicative systems.

In the present work, we investigated the frequency and use of speech overlaps in a multimodally annotated corpus of first encounters. Speech overlaps were automatically tagged, and a Bayesian Network learner was trained on the multimodal annotations in order to determine to what extent overlaps can be predicted, so that they can be dealt with in conversational devices, and to investigate the relation between overlaps, speech tokens and co-occurring body behaviours. The annotations comprise the shape and functions of head movements, facial expressions and body postures.

23% of the speech tokens and 90% of the spoken contributions in the first encounters are overlapping. The best classification results were obtained by training the classifier on the multimodal behaviours (speech and co-occurring head movements, facial expressions and body postures) which surrounded the overlaps. Training the classifier on all speech tokens also gave good results, while adding the shape of co-occurring body behaviours to them did not affect the results. Thus, the behaviours of the conversation participants do not change when there is a speech overlap. This could indicate that most of the overlaps in the first encounters are non-competitive.
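The prediction setup can be sketched with a toy probabilistic classifier over categorical annotation features. The sketch below uses a hand-rolled Naive Bayes rather than the Bayesian Network learner the paper used, and the feature names and data rows are invented; it only illustrates the pattern of predicting an overlap/no-overlap label from co-occurring multimodal annotations.

```python
from collections import Counter, defaultdict

def train(rows):
    """rows: list of (features_dict, label). Returns label and feature counts."""
    labels = Counter(lab for _, lab in rows)
    feats = defaultdict(Counter)                # (feature, label) -> value counts
    for f, lab in rows:
        for k, v in f.items():
            feats[(k, lab)][v] += 1
    return labels, feats

def predict(model, f):
    """Naive Bayes with add-one smoothing over categorical features."""
    labels, feats = model
    total = sum(labels.values())
    def score(lab):
        p = labels[lab] / total
        for k, v in f.items():
            cnt = feats[(k, lab)]
            p *= (cnt[v] + 1) / (sum(cnt.values()) + 1)
        return p
    return max(labels, key=score)

# Invented toy annotations: head movement and gaze around a speech token.
data = [
    ({"head": "nod", "gaze": "away"}, "overlap"),
    ({"head": "nod", "gaze": "interlocutor"}, "overlap"),
    ({"head": "none", "gaze": "interlocutor"}, "no_overlap"),
    ({"head": "none", "gaze": "away"}, "no_overlap"),
]
model = train(data)
print(predict(model, {"head": "nod", "gaze": "interlocutor"}))  # -> overlap
```

A Bayesian Network generalises this by learning dependencies between the features instead of assuming they are conditionally independent.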


Cognitive Computation | 2014

The Automatic Identification of the Producers of Co-occurring Communicative Behaviours

Costanza Navarretta

Multimodal communicative behaviours depend on numerous factors, such as the communicative situation, the task, the culture and respective relationship of the people involved, and their role, age and background. This paper addresses the identification of the producers of co-occurring communicative non-verbal behaviours in a manually annotated multimodal corpus of spontaneous conversations. The work builds upon a preceding study in which a support vector machine was trained to identify the producers of communicative body behaviours using the annotations of individual behaviour types. In the present work, we investigate to what extent classification results can be improved by adding to the training data the shape description of co-occurring body behaviours and temporal information. The inclusion of co-occurring behaviours reflects the fact that people often use several body behaviours at the same time when they communicate. The results of the classification experiments show that the identification of the producers of communicative behaviours improves significantly if co-occurring behaviours are added to the training data. Classification performance improves further when temporal information is also used. Even though the results vary from body type to body type, they all show that the individual variation of communicative behaviours is large even in a very homogeneous group of people, and that this variation is better modelled using information on co-occurring behaviours than on individual behaviours. Being able to identify, and then react correctly to, the individual behaviours of people is extremely important in the field of social robotics, which involves the use of robots in private homes where they must interact in a natural way with different types of persons with varying needs.


Revised Selected Papers of the International Workshop on Multimodal Communication in Political Speech. Shaping Minds and Social Action - Volume 7688 | 2010

Multimodal Behaviour and Interlocutor Identification in Political Debates

Costanza Navarretta; Patrizia Paggio

The paper deals with the identification of interlocutors via speech and gestures in annotated televised political debates. The analysis of an American and a British debate shows that two of the politicians succeeded better than their political adversaries in identifying their various interlocutors. Since the same two politicians were also judged to be the winners of the debates in several opinion polls, our data can be said to confirm earlier claims that the correct identification of the interlocutor is important for succeeding in communication, particularly in televised political debates, during which politicians address several interlocutors in the physical room where the debates take place as well as outside of it.

Collaboration

Top Co-Authors

Jens Allwood, University of Gothenburg
Magdalena Lis, University of Copenhagen
Loredana Cerrato, Royal Institute of Technology
Bart Jongejan, University of Copenhagen
Bente Maegaard, University of Copenhagen
Claus Povlsen, University of Copenhagen