Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nadia Mana is active.

Publication


Featured research published by Nadia Mana.


International Conference on Multimodal Interfaces | 2008

Multimodal recognition of personality traits in social interactions

Fabio Pianesi; Nadia Mana; Alessandro Cappelletti; Bruno Lepri; Massimo Zancanaro

This paper targets the automatic detection of personality traits in a meeting environment by means of audio and visual features; information about the relational context is captured through acoustic features designed for that purpose. Two personality traits are considered: Extraversion (from the Big Five) and Locus of Control. The classification task is applied to thin slices of behaviour in the form of 1-minute sequences. SVMs were used to test the performance of several training and testing instance setups, including a restricted set of audio features obtained through feature selection. The outcomes improve considerably on existing results, provide evidence for the feasibility of multimodal personality analysis and the role of social context, and pave the way for further studies addressing different feature setups and/or targeting different personality traits.
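
As a rough illustration of the setup described in the abstract, the sketch below trains an SVM on placeholder thin-slice feature vectors and applies univariate feature selection to obtain a restricted feature set. The data shapes, parameters, and pipeline choices are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of thin-slice personality classification as described
# above: 1-minute audio-visual feature vectors, binary trait labels
# (e.g. high/low Extraversion), an SVM classifier, and univariate
# feature selection to obtain a restricted feature subset.
# All shapes and parameter values here are illustrative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))    # 200 one-minute slices, 40 acoustic/visual features
y = rng.integers(0, 2, size=200)  # high/low split on one trait (placeholder labels)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),  # restricted feature set via selection
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```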


Text, Speech and Language Technology | 2003

Building the Italian Syntactic-Semantic Treebank

Simonetta Montemagni; Francesco Barsotti; Marco Battista; Nicoletta Calzolari; Ornella Corazzari; Alessandro Lenci; Antonio Zampolli; Francesca Fanciulli; Maria Massetani; Remo Raffaelli; Roberto Basili; Maria Teresa Pazienza; Dario Saracino; Fabio Massimo Zanzotto; Nadia Mana; Fabio Pianesi; Rodolfo Delmonte

The paper reports on the design and construction of a multi-layered corpus of Italian, annotated at the syntactic and lexico-semantic levels, whose development is supported by dedicated software augmented with an intelligent interface. The issue of evaluating this type of resource is also addressed.


International Conference on Multimodal Interfaces | 2011

Please, tell me about yourself: automatic personality assessment using short self-presentations

Ligia Maria Batrinca; Nadia Mana; Bruno Lepri; Fabio Pianesi; Nicu Sebe

Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, as they try to shape others' first impressions and increase their effectiveness. This paper addresses the automatic detection of the Big Five personality traits from short (30-120 second) self-presentations by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability/Neuroticism are the most recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained by the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits.


Ubiquitous Computing | 2010

What is happening now? Detection of activities of daily living from simple visual features

Bruno Lepri; Nadia Mana; Alessandro Cappelletti; Fabio Pianesi; Massimo Zancanaro

We propose and investigate a paradigm for activity recognition that distinguishes the "on-going activity" recognition task (OGA) from the task addressing "complete activities" (CA). The former starts from a time interval and aims to discover which activities are going on inside it. The latter focuses on terminated activities and amounts to taking an external perspective on activities. We argue that this distinction is quite natural and that the OGA task has a number of interesting properties: e.g., the possibility of reconstructing complete activities in terms of on-going ones, the avoidance of the thorny issue of activity segmentation, and a straightforward accommodation of complex activities. Moreover, some plausible properties of the OGA task are discussed and then investigated in a classification study, addressing the dependence of classification performance on the duration of time windows, its relationship with actional types (homogeneous vs. non-homogeneous activities), and the assortments of features used. Three types of visual features are exploited, obtained from a data set that tries to balance the pros and cons of laboratory-based and naturalistic ones. The results provide partial confirmation of the hypotheses and point to relevant open issues for future work.
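
The on-going/complete distinction can be made concrete with a small sketch: slide a fixed-length window over a recording and collect every annotated activity that overlaps it. The annotation format and window length below are assumptions for illustration, not the paper's actual data model.

```python
# Illustrative sketch of the "on-going activity" (OGA) labelling idea
# described above: for each fixed-length time window, the label set is
# every annotated activity overlapping it. This is multi-label by
# construction and requires no activity segmentation.
from typing import List, Set, Tuple

Annotation = Tuple[float, float, str]  # (start_sec, end_sec, activity label)

def ongoing_activities(annotations: List[Annotation],
                       t0: float, t1: float) -> Set[str]:
    """Return the activities that overlap the window [t0, t1)."""
    return {label for start, end, label in annotations
            if start < t1 and end > t0}

# Placeholder annotations for one recording.
annotations = [(0.0, 45.0, "preparing coffee"),
               (30.0, 90.0, "reading newspaper")]
window = 20.0  # window duration in seconds (a free parameter of the task)
for t0 in range(0, 100, int(window)):
    print(t0, sorted(ongoing_activities(annotations, t0, t0 + window)))
```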


Computers in Education | 2013

Interactive stories and exercises with dynamic feedback for improving reading comprehension skills in deaf children

Ornella Mich; Emanuele Pianta; Nadia Mana

Deaf children have significant difficulties in comprehending written text. This is mainly due to the hearing loss that prevents them from being exposed to oral language in infancy. However, it is also due to the type of educational intervention they typically receive, which accustoms them to decoding single words and isolated sentences rather than entire texts. This paper presents an evolved version of a literacy web tool for deaf children based on stories and comprehension exercises. Two substantial improvements were made with respect to the first version of our application. First, the text of the stories is now presented to children in the context of animated web pages. Second, intelligent dynamic feedback is given to users as they solve the exercises. A preliminary evaluation study with deaf children as the treatment group and hearing children as the control group assessed the usability and effectiveness of the new system and its graphical interface.


Meeting of the Association for Computational Linguistics | 2002

Balancing Expressiveness and Simplicity in an Interlingua for Task Based Dialogue

Lori S. Levin; Donna Gates; Dorcas Pianta; Roldano Cattoni; Nadia Mana; Kay Peterson; Alon Lavie; Fabio Pianesi

In this paper we compare two interlingua representations for speech translation. The basis of the comparison is a distributional analysis of the C-STAR II and NESPOLE databases tagged with interlingua representations. The C-STAR II database has been partially re-tagged with the NESPOLE interlingua, which enables us to make comparisons on the same data with two types of interlingua, and on two types of data (C-STAR II and NESPOLE) with the same interlingua. The distributional information presented in this paper shows that the NESPOLE interlingua maintains the language independence and simplicity of the C-STAR II speech-act-based approach, while increasing semantic expressiveness and scalability.


The Medical Roundtable | 2007

Multimodal corpus of multi-party meetings for automatic social behavior analysis and personality traits detection

Nadia Mana; Bruno Lepri; Paul Chippendale; Alessandro Cappelletti; Fabio Pianesi; Piergiorgio Svaizer; Massimo Zancanaro

This paper describes an automatically annotated multimodal corpus of multi-party meetings. For each subject involved in the experimental sessions, the corpus provides information on his/her social behavior and personality traits, as well as audiovisual cues (speech rate, pitch and energy, head orientation, and head, hand and body fidgeting). The corpus is based on the audio and video recordings of thirteen sessions, which took place in a lab setting equipped with cameras and microphones. Our main concern in collecting this corpus was to investigate the possibility of creating a system capable of automatically analyzing social behaviors and predicting personality traits using audio-visual cues.
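
To illustrate the kind of acoustic cues listed above, here is a rough sketch of extracting pitch and energy from one audio track with librosa. The library choice, parameters, and file path are assumptions, not the corpus's actual feature-extraction pipeline.

```python
# Rough sketch of computing two of the acoustic cues mentioned above
# (pitch and energy) and summarizing them into per-recording features.
# Tooling, parameters, and the file path are illustrative assumptions.
import librosa
import numpy as np

y, sr = librosa.load("subject01_session03.wav", sr=16000)  # placeholder path

# Frame-level fundamental frequency (pitch); unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# Frame-level energy as RMS amplitude.
rms = librosa.feature.rms(y=y)[0]

# Simple per-recording summary statistics of the kind usable as features.
features = {
    "pitch_mean": float(np.nanmean(f0)),
    "pitch_std": float(np.nanstd(f0)),
    "energy_mean": float(rms.mean()),
    "energy_std": float(rms.std()),
}
print(features)
```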


International Conference on Multimodal Interfaces | 2012

Multimodal recognition of personality traits in human-computer collaborative tasks

Ligia Maria Batrinca; Bruno Lepri; Nadia Mana; Fabio Pianesi

The user's personality plays an important role in the overall success of Human-Computer Interaction (HCI). The present study focuses on automatically recognizing the Big Five personality traits from 2-5 minute long videos in which the computer interacts using different levels of collaboration, in order to elicit the manifestation of these personality traits. Emotional Stability and Extraversion are the easiest traits to detect automatically under the different collaborative settings: all settings for Emotional Stability, and the intermediate and fully non-collaborative settings for Extraversion. Interestingly, Agreeableness and Conscientiousness can be detected only under a moderately non-collaborative setting. Finally, our task does not seem to activate the full range of dispositions for Creativity.


Distributed Computing and Artificial Intelligence | 2009

Multimodal Classification of Activities of Daily Living Inside Smart Homes

Vit Libal; Bhuvana Ramabhadran; Nadia Mana; Fabio Pianesi; Paul Chippendale; Oswald Lanz; Gerasimos Potamianos

Smart homes for the aging population have recently started attracting the attention of the research community. One problem of interest is that of monitoring the activities of daily living (ADLs) of the elderly, aiming at their protection and well-being. In this work, we present our initial efforts to automatically recognize ADLs using multimodal input from audio-visual sensors. For this purpose, and as part of the Integrated Project Netcarity, far-field microphones and cameras have been installed inside an apartment and used to collect a corpus of ADLs acted by multiple subjects. The resulting data streams are processed to generate perception-based acoustic features, as well as human location coordinates that are employed as visual features. The extracted features are then presented to Gaussian mixture models for classification into a set of predefined ADLs. Our experimental results show that both acoustic and visual features are useful in ADL classification, but the performance of the latter deteriorates when subject tracking becomes inaccurate. Furthermore, joint audio-visual classification by simple concatenative feature fusion significantly outperforms both unimodal classifiers.
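
A minimal sketch of the per-class Gaussian-mixture scheme with concatenative feature fusion, using scikit-learn and synthetic placeholder features; the class names, feature dimensions, and mixture sizes are assumptions for illustration.

```python
# Minimal sketch of the classification scheme described above: one
# Gaussian mixture model per ADL class, with audio-visual fusion done
# by concatenating the acoustic and visual feature vectors per frame.
# Data shapes, class names, and mixture sizes are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
classes = ["cooking", "eating", "watching TV"]

def fuse(audio_feats, visual_feats):
    """Concatenative feature fusion: stack the two modalities per frame."""
    return np.hstack([audio_feats, visual_feats])

# Placeholder training data: per class, 300 frames of 13 acoustic
# features fused with 2 visual features (x/y location coordinates).
train = {c: fuse(rng.normal(loc=i, size=(300, 13)),
                 rng.normal(loc=i, size=(300, 2)))
         for i, c in enumerate(classes)}

models = {c: GaussianMixture(n_components=4, random_state=0).fit(X)
          for c, X in train.items()}

def classify(segment):
    """Pick the class whose GMM assigns the segment the highest likelihood."""
    return max(models, key=lambda c: models[c].score(segment))

test_segment = fuse(rng.normal(loc=1, size=(50, 13)),
                    rng.normal(loc=1, size=(50, 2)))
print(classify(test_segment))  # likely "eating" for this synthetic data
```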


IEEE Transactions on Multimedia | 2016

Multimodal Personality Recognition in Collaborative Goal-Oriented Tasks

Ligia Maria Batrinca; Nadia Mana; Bruno Lepri; Nicu Sebe; Fabio Pianesi

Incorporating personality recognition into computers, from both a cognitive and an engineering perspective, would facilitate interactions between humans and machines. Previous attempts at personality recognition have focused on a variety of corpora (ranging from text to audiovisual data), scenarios (interviews, meetings), channels of communication (audio, video, text), and different subsets of personality traits (out of the five in the Big Five model). Our study uses simple acoustic and visual nonverbal features extracted from multimodal data recorded in previously uninvestigated scenarios, and considers all five personality traits rather than just a subset. First, we look at a human-machine interaction scenario, where we introduce the display of different "collaboration levels." Second, we look at the contribution of a human-human interaction (HHI) scenario to the emergence of personality traits. Investigating the HHI scenario creates a stronger basis for future human-agent interactions. Our goal is to study, from a computational approach, the degree of emergence of the five personality traits in these two scenarios. The results demonstrate the relevance of each of the two scenarios with respect to the degree of emergence of certain traits, and the feasibility of automatically recognizing personality under different conditions.

Collaboration


Dive into Nadia Mana's collaborations.

Top Co-Authors

Fabio Pianesi (Fondazione Bruno Kessler)
Ornella Mich (Fondazione Bruno Kessler)
Bruno Lepri (Fondazione Bruno Kessler)
Michela Ferron (Fondazione Bruno Kessler)
Fabio Massimo Zanzotto (University of Rome Tor Vergata)