
Publications


Featured research published by Alessandro Cappelletti.


International Conference on Multimodal Interfaces | 2008

Multimodal recognition of personality traits in social interactions

Fabio Pianesi; Nadia Mana; Alessandro Cappelletti; Bruno Lepri; Massimo Zancanaro

This paper targets the automatic detection of personality traits in a meeting environment by means of audio and visual features; information about the relational context is captured by means of acoustic features designed for that purpose. Two personality traits are considered: Extraversion (from the Big Five) and the Locus of Control. The classification task is applied to thin slices of behaviour, in the form of 1-minute sequences. SVMs were used to test the performance of several training and testing instance setups, including a restricted set of audio features obtained through feature selection. The outcomes improve considerably over existing results, provide evidence for the feasibility of multimodal personality analysis and for the role of social context, and pave the way to further studies addressing different feature setups and/or targeting different personality traits.
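
For illustration, the sketch below shows a thin-slice classification setup of the kind described in the abstract: one pre-extracted feature vector per 1-minute slice, feature selection to obtain a restricted feature set, and an SVM classifier. The data, feature dimensionality and pipeline are assumptions for the example, not the authors' code.

```python
# Illustrative sketch only: thin-slice trait classification with an SVM,
# assuming one pre-extracted audio/visual feature vector per 1-minute slice.
# The feature extraction and exact setup of the paper are not reproduced here.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))      # 600 one-minute slices, 40 hypothetical features
y = rng.integers(0, 2, size=600)    # binary label, e.g. high vs. low Extraversion

clf = Pipeline([
    ("scale", StandardScaler()),                # normalize feature ranges
    ("select", SelectKBest(f_classif, k=10)),   # restricted feature set via selection
    ("svm", SVC(kernel="rbf", C=1.0)),          # SVM on the selected features
])

scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy:", scores.mean())
```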


International Conference on Multimodal Interfaces | 2007

Using the influence model to recognize functional roles in meetings

Wen Dong; Bruno Lepri; Alessandro Cappelletti; Alex Pentland; Fabio Pianesi; Massimo Zancanaro

In this paper, an influence model is used to recognize functional roles played during meetings. Previous work on the same corpus demonstrated high recognition accuracy using SVMs with RBF kernels. Here we discuss the problems of that approach, mainly over-fitting, the curse of dimensionality, and the inability to generalize to different group configurations. We present results obtained with an influence modeling method that avoids these problems and ensures both greater robustness and generalization capability.
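
As a rough illustration of the influence-model idea, the toy sketch below computes each participant's next-state distribution as a convex combination of pairwise Markov transitions weighted by influence coefficients. All matrices and weights are made up; this is not the authors' implementation and it omits parameter learning entirely.

```python
# Toy sketch of influence-model transition dynamics (not the authors' code):
# a participant's next-state distribution is a mixture of pairwise transitions,
# weighted by how much each other participant "influences" them.
import numpy as np

n_people, n_states = 3, 2                       # e.g. 3 participants, 2 behavioural states
rng = np.random.default_rng(1)

# influence[i, j]: how much participant j influences participant i (rows sum to 1)
influence = rng.random((n_people, n_people))
influence /= influence.sum(axis=1, keepdims=True)

# T[i, j]: transition matrix from j's current state to i's next state (rows sum to 1)
T = rng.random((n_people, n_people, n_states, n_states))
T /= T.sum(axis=-1, keepdims=True)

def next_state_distributions(states):
    """Distribution over each participant's next state given everyone's current state."""
    dists = np.zeros((n_people, n_states))
    for i in range(n_people):
        for j in range(n_people):
            dists[i] += influence[i, j] * T[i, j, states[j]]
    return dists

print(next_state_distributions(np.array([0, 1, 1])))
```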


Language Resources and Evaluation | 2007

A multimodal annotated corpus of consensus decision making meetings

Fabio Pianesi; Massimo Zancanaro; Bruno Lepri; Alessandro Cappelletti

In this paper we present an annotated audio-video corpus of multi-party meetings. For each subject involved in the experimental sessions, the multimodal corpus provides six annotation dimensions referring to group dynamics, speech activity and body activity. The corpus is based on 11 audio- and video-recorded sessions which took place in a lab setting appropriately equipped with cameras and microphones. Our main concern in collecting this multimodal corpus was to explore the possibility of providing feedback services to facilitate group processes and to enhance self-awareness among small groups engaged in meetings. We therefore introduce a coding scheme for annotating the relevant functional roles that appear in small-group interaction. We also discuss the reliability of the coding scheme and present first results for automatic classification.


International Conference on Advanced Learning Technologies | 2004

Enforcing cooperative storytelling: first studies

Alessandro Cappelletti; Giulia Gelmini; Fabio Pianesi; Franca Rossi; Massimo Zancanaro

In this paper, we describe the first prototype of a system called StoryTable, aimed at supporting a group of children in the activity of storytelling. The system is based on a special multi-user touchable device (the MERL DiamondTouch) and it was designed with the purpose of enforcing collaboration between children. The paper discusses how the main design choices were influenced by the paradigm of cooperative learning and presents two observational studies to assess the effects of the different design choices on the storytelling activity.


Ubiquitous Computing | 2010

What is happening now? Detection of activities of daily living from simple visual features

Bruno Lepri; Nadia Mana; Alessandro Cappelletti; Fabio Pianesi; Massimo Zancanaro

We propose and investigate a paradigm for activity recognition that distinguishes the “on-going activity” recognition task (OGA) from the task addressing “complete activities” (CA). The former starts from a time interval and aims to discover which activities are going on inside it; the latter focuses on terminated activities and amounts to taking an external perspective on activities. We argue that this distinction is quite natural and that the OGA task has a number of interesting properties: the possibility of reconstructing complete activities in terms of on-going ones, the avoidance of the thorny issue of activity segmentation, and a straightforward accommodation of complex activities. Moreover, some plausible properties of the OGA task are discussed and then investigated in a classification study addressing how classification performance depends on the duration of the time windows, on its relationship with actional types (homogeneous vs. non-homogeneous activities), and on the assortment of features used. Three types of visual features are exploited, obtained from a data set that tries to balance the pros and cons of laboratory-based and naturalistic ones. The results provide partial confirmation of the hypotheses and point to relevant open issues for future work.
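
The sketch below illustrates the on-going-activity (OGA) setup in its simplest form: the feature stream is cut into fixed-length windows and a multi-label classifier predicts which activities are going on in each window. Features, window length and activity classes are hypothetical stand-ins, not the paper's data or feature set.

```python
# Illustrative sketch of the OGA setup: aggregate per-frame features into
# fixed-length windows and predict, per window, which activities are on-going.
# Features and labels are synthetic; this is not the paper's code.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
frames = rng.normal(size=(3000, 12))        # per-frame visual features (hypothetical)
window = 150                                # frames per window

# Window-level feature vectors: mean and standard deviation of the frame features.
n_win = frames.shape[0] // window
X = np.stack([
    np.concatenate([w.mean(axis=0), w.std(axis=0)])
    for w in np.split(frames[: n_win * window], n_win)
])

# Multi-label targets: several activities may be going on in the same window.
Y = rng.integers(0, 2, size=(n_win, 4))     # 4 hypothetical activity classes

clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
print(clf.predict(X[:3]))                   # activity indicators for the first windows
```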


Intelligent Technologies for Interactive Entertainment | 2005

Enhancing social communication through story-telling among high-functioning children with autism

Eynat Gal; Dina Goren-Bar; E. Gazit; Nirit Bauminger; Alessandro Cappelletti; Fabio Pianesi; Oliviero Stock; Massimo Zancanaro; Patrice L. Weiss

We describe a first prototype of a system for storytelling for high-functioning children with autism. The system, based on the StoryTable developed by IRST-itc, is aimed at supporting a group of children in the activity of storytelling. It is built on a multi-user touchable device (the MERL DiamondTouch) designed with the purpose of enforcing collaboration between users. The instructions were simplified in order to allow children with communication disabilities to learn and operate the StoryTable. First pilot results are very encouraging: the children were enthusiastic about communicating through the StoryTable and appeared able to learn to operate it with little difficulty.


The Medical Roundtable | 2007

Multimodal corpus of multi-party meetings for automatic social behavior analysis and personality traits detection

Nadia Mana; Bruno Lepri; Paul Chippendale; Alessandro Cappelletti; Fabio Pianesi; Piergiorgio Svaizer; Massimo Zancanaro

This paper describes an automatically annotated multimodal corpus of multi-party meetings. The corpus provides for each subject involved in the experimental sessions information on her/his social behavior and personality traits, as well as audiovisual cues (speech rate, pitch and energy, head orientation, head, hand and body fidgeting). The corpus is based on the audio and video recordings of thirteen sessions, which took place in a lab setting equipped with cameras and microphones. Our main concern in collecting this corpus was to investigate the possibility of creating a system capable of automatically analyzing social behaviors and predicting personality traits using audio-visual cues.


International Journal of Human-Computer Studies | 2014

Audio-augmented paper for therapy and educational intervention for children with autistic spectrum disorder

Andrea Alessandrini; Alessandro Cappelletti; Massimo Zancanaro

Autism affects children's learning and social development. Commonly used rehabilitative treatments are aimed at stimulating the social skills of children with autism. In this article, we present a prototype and a pilot study of an audio-augmented paper to support the therapy of children with autism spectrum disorder (ASD). The prototype supports audio recording on standard sheets of paper by using tangible tools that can be shared between the therapist and the child, and serves as a tool for the therapist to engage the child in a storytelling activity. We use a progressive design method based on a dynamic process that merges concept generation, technology benchmarking and activity design into continuously enriching actions. The paper highlights the qualities and benefits of using tangible audio-augmented artefacts for therapy and educational intervention for children with ASD. The work describes three main qualities of our prototype, from building cooperation to attention control, flow control, and the use of the children's own voices to foster attention.


International Conference on Pervasive Computing | 2010

RFID: Recognizing failures in dressing activity

Kyriaki Kalimeri; Aleksandar Matic; Alessandro Cappelletti

In elderly care, the manual assessment of Activities of Daily Living (ADLs) is a significant problem. Focusing on dressing, we aim not only to recognize the activity performed but also to evaluate its quality, using solely RFID technology. Our approach leverages sparse and noisy RFID readings by applying two probabilistic methods, Bayesian Networks and Layered HMMs. The goal is to follow the garments that patients are manipulating and to detect possible failures such as forgetting to put on a garment or putting clothes on in the wrong order. The experimental results showed that both methods can achieve high accuracy.
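
The sketch below illustrates only the failure types mentioned in the abstract (forgotten garments, wrong order) with a simplified rule-based check over an already-decoded tag sequence; the probabilistic models actually used in the paper (Bayesian Networks and Layered HMMs over raw, noisy RFID readings) are not reproduced here, and the garment protocol is hypothetical.

```python
# Simplified, rule-based sketch of the dressing failures described above
# (forgotten garments, wrong order), operating on an already-decoded sequence
# of garment-tag readings. Not the paper's probabilistic approach.

EXPECTED_ORDER = ["underwear", "shirt", "trousers", "socks", "shoes"]  # hypothetical protocol

def detect_failures(readings):
    """Return missing garments and adjacent pairs put on in the wrong order."""
    # Collapse repeated consecutive readings of the same tag.
    seq = [tag for i, tag in enumerate(readings) if i == 0 or tag != readings[i - 1]]
    # Keep only the first appearance of each known garment.
    seen, order = set(), []
    for tag in seq:
        if tag in EXPECTED_ORDER and tag not in seen:
            seen.add(tag)
            order.append(tag)
    missing = [g for g in EXPECTED_ORDER if g not in seen]
    rank = {g: i for i, g in enumerate(EXPECTED_ORDER)}
    wrong_order = [(a, b) for a, b in zip(order, order[1:]) if rank[a] > rank[b]]
    return missing, wrong_order

print(detect_failures(["shirt", "shirt", "underwear", "trousers", "shoes"]))
# -> (['socks'], [('shirt', 'underwear')])
```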


Human Factors in Computing Systems | 2013

Audio-augmented paper for the therapy of low-functioning autism children

Andrea Alessandrini; Alessandro Cappelletti; Massimo Zancanaro

In this paper, we present a prototype and an initial pilot study of audio-augmented paper to support the therapy of low-functioning children with autism. The prototype supports the recording of audio on standard sheets of paper by using tangible tools that can be shared between the therapist and the child. The prototype is designed as a tool for the therapist to engage a child in a storytelling activity.

Collaboration


Dive into Alessandro Cappelletti's collaborations.

Top Co-Authors

Bruno Lepri (Fondazione Bruno Kessler)

Nadia Mana (Fondazione Bruno Kessler)

Oliviero Stock (Fondazione Bruno Kessler)