Publications


Featured research published by Giovanna Varni.


IEEE Transactions on Multimedia | 2010

A System for Real-Time Multimodal Analysis of Nonverbal Affective Social Interaction in User-Centric Media

Giovanna Varni; Gualtiero Volpe; Antonio Camurri

This paper presents a multimodal system for real-time analysis of nonverbal affective social interaction in small groups of users. The focus is on two major aspects of affective social interaction: the synchronization of affective behavior within a small group and the emergence of functional roles, such as leadership. A small group of users is modeled as a complex system consisting of single interacting components that can self-organize and show global properties. Techniques are developed for computing quantitative measures of both synchronization and leadership. Music is selected as an experimental test-bed since it is a clear example of an interactive and social activity in which affective nonverbal communication plays a fundamental role. The system has been implemented as software modules for the EyesWeb XMI platform (http://www.eyesweb.org). It has been used in experimental frameworks (a violin duo and a string quartet) and in real-world, user-centric applications for active music listening. Further application scenarios include entertainment, edutainment, therapy and rehabilitation, cultural heritage, and museum applications. Research has been carried out in the framework of the EU-ICT FP7 Project SAME (http://www.sameproject.eu).
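
The abstract mentions quantitative measures of synchronization but does not spell out the computation. As a hedged illustration of the general idea (not the EyesWeb XMI modules used in the paper), the sketch below computes a standard phase-synchronisation index, the phase locking value, for two motion signals via the Hilbert transform; the signal names and sampling setup are hypothetical.

```python
# Illustrative sketch only: a common phase-synchronisation index (phase locking
# value) for two 1-D motion signals, NOT the actual EyesWeb XMI implementation.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    """Return a phase-synchronisation index in [0, 1] for two equally sampled signals."""
    phase_x = np.angle(hilbert(x - x.mean()))       # instantaneous phase of signal x
    phase_y = np.angle(hilbert(y - y.mean()))       # instantaneous phase of signal y
    # Mean resultant length of the phase difference: 1 means perfect locking.
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

if __name__ == "__main__":
    t = np.linspace(0, 10, 2000)
    head_a = np.sin(2 * np.pi * 1.0 * t)                       # performer A (synthetic)
    head_b = np.sin(2 * np.pi * 1.0 * t + 0.4)                 # performer B, phase-shifted
    noise = np.random.default_rng(0).normal(0.0, 1.0, t.size)  # unrelated signal
    print("A vs B    :", phase_locking_value(head_a, head_b))  # close to 1
    print("A vs noise:", phase_locking_value(head_a, noise))   # noticeably lower
```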


New Interfaces for Musical Expression | 2007

Developing multimodal interactive systems with EyesWeb XMI

Antonio Camurri; Paolo Coletta; Giovanna Varni; Simone Ghisio

EyesWeb XMI (for eXtended Multimodal Interaction) is the new version of the well-known EyesWeb platform. Its main focus is on multimodality, and the main design target of this new release has been to improve the ability to process and correlate several streams of data. It has been used extensively to build a set of interactive systems for performing-arts applications at Festival della Scienza 2006, Genoa, Italy. The purpose of this paper is to describe the developed installations as well as the new EyesWeb features that helped in their development.


Human-Computer Interaction | 2016

Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain

Aneesha Singh; Stefano Piana; Davide Pollarolo; Gualtiero Volpe; Giovanna Varni; Ana Tajadura-Jiménez; Amanda C. de C. Williams; Antonio Camurri; Nadia Bianchi-Berthouze

Chronic (persistent) pain (CP) affects 1 in 10 adults; clinical resources are insufficient, and anxiety about activity restricts lives. Technological aids monitor activity but lack necessary psychological support. This article proposes a new sonification framework, Go-with-the-Flow, informed by physiotherapists and people with CP. The framework proposes articulation of user-defined sonified exercise spaces (SESs) tailored to psychological needs and physical capabilities that enhance body and movement awareness to rebuild confidence in physical activity. A smartphone-based wearable device and a Kinect-based device were designed based on the framework to track movement and breathing and sonify them during physical activity. In control studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement, and relaxation with sound feedback. Home studies, a focus group, and a survey of CP patients conducted at the end of a hospital pain management session provided an in-depth understanding of how different aspects of the SESs and their calibration can facilitate self-directed rehabilitation and how the wearable version of the device can facilitate transfer of gains from exercise to feared or demanding activities in real life. We conclude by discussing the implications of our findings on the design of technology for physical rehabilitation.
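
As a hedged sketch of how movement can be sonified within a calibrated exercise space (the actual Go-with-the-Flow mappings and calibration procedure are not given here), the example below maps a tracked stretch angle onto a pitch range; the angle bounds and frequencies are illustrative assumptions.

```python
# Minimal sketch of a movement-to-sound mapping in the spirit of a sonified
# exercise space; the angle range and pitch mapping are illustrative, not the
# calibration used in the Go-with-the-Flow study.
import numpy as np

def angle_to_frequency(angle_deg: float,
                       start_deg: float = 10.0,
                       target_deg: float = 80.0,
                       f_low: float = 220.0,
                       f_high: float = 660.0) -> float:
    """Map a forward-bend angle to a pitch between f_low and f_high (Hz)."""
    # Normalise the angle into the user's calibrated exercise range [0, 1].
    progress = np.clip((angle_deg - start_deg) / (target_deg - start_deg), 0.0, 1.0)
    # Exponential interpolation between the two pitches (perceptually smoother).
    return float(f_low * (f_high / f_low) ** progress)

# Example: pitch rises as the user moves through the exercise space.
for angle in (10, 30, 55, 80):
    print(angle, "deg ->", round(angle_to_frequency(angle), 1), "Hz")
```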


HBU'12: Proceedings of the Third International Conference on Human Behavior Understanding | 2012

Computing and evaluating the body laughter index

Maurizio Mancini; Giovanna Varni; Donald Glowinski; Gualtiero Volpe

The EU-ICT FET Project ILHAIRE is aimed at endowing machines with automated detection, analysis, and synthesis of laughter. This paper describes the Body Laughter Index (BLI) for automated detection of laughter starting from the analysis of body movement captured by a video source. The BLI algorithm is described, and the index is computed on a corpus of videos. An assessment of the algorithm by means of subjects' ratings is also presented. Results show that BLI can successfully distinguish between different videos of laughter, even if improvements are needed with respect to the subjects' perception, multimodal fusion, cultural aspects, and generalization to a broad range of social contexts.
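
The abstract does not detail the BLI features, so the following is only a hedged stand-in: a generic quantity-of-motion index computed by frame differencing over grayscale video, illustrating how body movement can be quantified from a video source. The threshold and synthetic frames are assumptions, not the published algorithm.

```python
# Hedged sketch: a generic quantity-of-motion index from grayscale video frames.
# The published Body Laughter Index uses specific body-movement features; this
# frame-differencing estimate only illustrates the general idea.
import numpy as np

def quantity_of_motion(frames: np.ndarray, threshold: float = 15.0) -> np.ndarray:
    """frames: (T, H, W) grayscale video; returns a per-frame motion index in [0, 1]."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))   # frame-to-frame change
    moving = diffs > threshold                                   # pixels that moved
    return moving.mean(axis=(1, 2))                              # fraction of moving pixels

def smooth(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Simple moving average to reduce flicker in the index."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    still = np.full((30, 48, 64), 100.0) + rng.normal(0, 2, (30, 48, 64))
    shaking = still + 40.0 * rng.random((30, 48, 64))            # crude stand-in for laughter
    print("still  :", smooth(quantity_of_motion(still)).mean())
    print("shaking:", smooth(quantity_of_motion(shaking)).mean())
```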


Proceedings of the 4th International Workshop on Human Behavior Understanding - Volume 8212 | 2013

MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

Radoslaw Niewiadomski; Maurizio Mancini; Tobias Baur; Giovanna Varni; Harry J. Griffin; Min S. H. Aung

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter with a focus on full-body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total, we collected 500 laugh episodes from 16 participants. The data consist of 3D body position information, facial tracking, and multiple audio and video channels, as well as physiological data. In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and for synchronization between different independent sources of data. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of the analysis of nonverbal behavior patterns in laughter.
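
One of the technical issues mentioned is synchronization between independent data sources. A minimal sketch of one common approach, resampling two independently timestamped streams onto a shared clock by linear interpolation, is shown below; the sampling rates and stream contents are assumptions, not the MMLI tooling.

```python
# Minimal sketch of aligning two independently timestamped streams onto a common
# clock; an assumption about how such synchronisation can be done, not the
# actual MMLI pipeline.
import numpy as np

def align_streams(t_a, x_a, t_b, x_b, rate_hz: float = 100.0):
    """Resample both streams onto a shared uniform time base (linear interpolation)."""
    t0 = max(t_a[0], t_b[0])          # keep only the overlap: start at the later stream
    t1 = min(t_a[-1], t_b[-1])        # ...and stop at the earlier one
    t_common = np.arange(t0, t1, 1.0 / rate_hz)
    return t_common, np.interp(t_common, t_a, x_a), np.interp(t_common, t_b, x_b)

# Example: 120 Hz motion capture vs. 30 Hz facial tracking with offset start times.
t_mocap = np.arange(0.00, 10.0, 1 / 120)
t_face = np.arange(0.25, 10.0, 1 / 30)
t, mocap, face = align_streams(t_mocap, np.sin(t_mocap), t_face, np.cos(t_face))
print(t.shape, mocap.shape, face.shape)
```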


User Centric Media | 2009

Sync’n’Move: Social Interaction Based on Music and Gesture

Giovanna Varni; Maurizio Mancini; Gualtiero Volpe; Antonio Camurri

In future User Centric Media the importance of the social dimension will likely increase. As social networks and Internet games show, the social dimension has a key role in the active participation of users in the overall media chain. In this paper, a first sample application for social active listening to music is presented. Sync’n’Move enables two users to explore a multi-channel pre-recorded music piece as the result of their social interaction. The application has been developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and was presented for the first time at the Agora Festival (IRCAM, Paris, June 2009). On that occasion, Sync’n’Move was also evaluated by both expert and non-expert users.
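
The companion work below explains that Sync’n’Move makes single instruments emerge and hide as users synchronise. As a hedged illustration of that kind of mapping (not the SAME project code), the sketch below turns a synchronisation index in [0, 1] into per-track gains; the thresholds and track count are arbitrary choices.

```python
# Illustrative sketch: mapping a phase-synchronisation index in [0, 1] to the
# gains of a multi-track recording so that instruments "emerge" as two users
# synchronise. Thresholds and track ordering are assumptions, not the SAME code.
import numpy as np

def track_gains(sync_index: float, n_tracks: int = 4) -> np.ndarray:
    """Return one gain in [0, 1] per track; higher sync unlocks more tracks."""
    # Track i becomes audible once the sync index passes i / n_tracks,
    # fading in over the following 1 / n_tracks of the index range.
    thresholds = np.arange(n_tracks) / n_tracks
    return np.clip((sync_index - thresholds) * n_tracks, 0.0, 1.0)

for s in (0.1, 0.4, 0.7, 0.95):
    print(f"sync={s:.2f} -> gains={np.round(track_gains(s), 2)}")
```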


Journal on Multimodal User Interfaces | 2012

Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices

Giovanna Varni; Gaël Dubus; Sami Oksanen; Gualtiero Volpe; Marco Fabiani; Roberto Bresin; Jari Kleimola; Vesa Välimäki; Antonio Camurri

This paper evaluates three different interactive sonifications of dyadic coordinated human rhythmic activity. An index of phase synchronisation of gestures was chosen as coordination metric. The sonifications are implemented as three prototype applications exploiting mobile devices: Sync’n’Moog, Sync’n’Move, and Sync’n’Mood. Sync’n’Moog sonifies the phase synchronisation index by acting directly on the audio signal and applying a nonlinear time-varying filtering technique. Sync’n’Move intervenes on the multi-track music content by making the single instruments emerge and hide. Sync’n’Mood manipulates the affective features of the music performance. The three sonifications were also tested against a condition without sonification.
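
As a hedged sketch of the Sync’n’Moog idea, the example below drives the cutoff of a time-varying low-pass filter with a synchronisation index; a plain one-pole filter stands in for the nonlinear (Moog-style) filter of the prototype, and the cutoff range is an assumption.

```python
# Hedged sketch of the general idea behind Sync'n'Moog: drive the cutoff of a
# time-varying low-pass filter with a synchronisation index. A one-pole filter
# stands in for the actual nonlinear filter used in the prototype.
import numpy as np

def sync_driven_lowpass(audio: np.ndarray, sync: np.ndarray, sr: int = 44100,
                        f_min: float = 300.0, f_max: float = 8000.0) -> np.ndarray:
    """Filter `audio` sample by sample; `sync` in [0, 1] is resampled to the audio length."""
    sync_per_sample = np.interp(np.linspace(0, 1, audio.size),
                                np.linspace(0, 1, sync.size), sync)
    # Higher synchronisation opens the filter (brighter sound).
    cutoff = f_min * (f_max / f_min) ** sync_per_sample
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)   # per-sample smoothing factor
    out = np.empty_like(audio)
    y = 0.0
    for n in range(audio.size):
        y += alpha[n] * (audio[n] - y)                 # one-pole low-pass update
        out[n] = y
    return out

if __name__ == "__main__":
    sr = 44100
    audio = np.random.default_rng(2).normal(0, 0.1, sr)   # 1 s of noise as test input
    sync = np.linspace(0.0, 1.0, 100)                     # users slowly synchronise
    print(sync_driven_lowpass(audio, sync, sr).shape)
```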


IEEE International Conference on Automatic Face & Gesture Recognition | 2008

Emotional entrainment in music performance

Giovanna Varni; Antonio Camurri; Paolo Coletta; Gualtiero Volpe

This work aims at defining a computational model of human emotional entrainment. Music, as a non-verbal language for expressing emotions, is chosen as an ideal test bed for these aims. We start from multimodal gesture and motion signals, recorded in a real-world collaborative condition in an ecological setting. Four violin players were asked to play a music fragment, alone or in duo, in two different perceptual feedback modalities and in four different emotional states. We focused our attention on the phase synchronisation of the head motions of the players. From observations by subjects (musicians and observers), evidence of entrainment between players emerges. The preliminary results, based on a reduced data set, however, do not fully capture this phenomenon. A more extended analysis is currently under investigation.
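
The analysis centres on phase synchronisation of head motions over a performance. The sketch below, an illustration rather than the authors' pipeline, tracks a windowed phase locking value over time so that periods of entrainment and drift become visible; window length, hop size, and the simulated signals are assumptions.

```python
# Illustration only: a sliding-window phase-synchronisation profile for two head
# motion signals, showing how entrainment can be tracked over a performance.
# Window length and hop are arbitrary choices, not taken from the paper.
import numpy as np
from scipy.signal import hilbert

def windowed_sync(x: np.ndarray, y: np.ndarray, win: int = 256, hop: int = 64):
    """Return (window start indices, per-window phase locking values)."""
    dphi = np.angle(hilbert(x - x.mean())) - np.angle(hilbert(y - y.mean()))
    starts = np.arange(0, dphi.size - win + 1, hop)
    plv = np.array([np.abs(np.mean(np.exp(1j * dphi[s:s + win]))) for s in starts])
    return starts, plv

if __name__ == "__main__":
    t = np.linspace(0, 20, 4000)
    a = np.sin(2 * np.pi * 0.8 * t)
    # Partner drifts out of phase halfway through the fragment.
    b = np.sin(2 * np.pi * 0.8 * t + np.where(t < 10, 0.2, 3.0 * np.sin(0.5 * t)))
    starts, plv = windowed_sync(a, b)
    half = len(plv) // 2
    print("first half:", plv[:half].mean(), " second half:", plv[half:].mean())
```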


Computational Science and Engineering | 2009

Toward a Real-Time Automated Measure of Empathy and Dominance

Giovanna Varni; Antonio Camurri; Paolo Coletta; Gualtiero Volpe

This paper presents a number of algorithms and related software modules for the automated analysis in real-time of non-verbal cues related to expressive gesture in social interaction. Analysis is based on Phase Synchronisation (PS) and Recurrence Quantification Analysis. We start from the hypothesis that PS is one of the low-level social signals explaining empathy and dominance in a small group of users. A real-time implementation of the algorithms is available in the EyesWeb XMI Social Signal Processing Library. Applications of the approach are in user-centric media and in particular in active music listening (EU ICT FP7 Project SAME).
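
As a hedged illustration of the second ingredient, Recurrence Quantification Analysis, the sketch below computes two basic RQA measures, recurrence rate and determinism, from a thresholded self-distance matrix of a 1-D signal; it omits time-delay embedding and is not the EyesWeb XMI Social Signal Processing Library implementation.

```python
# Hedged sketch of two basic Recurrence Quantification Analysis measures
# (recurrence rate and determinism) from a thresholded self-distance matrix,
# without time-delay embedding; not the EyesWeb XMI implementation.
import numpy as np

def rqa_measures(x: np.ndarray, radius: float = 0.2, l_min: int = 2):
    """Return (recurrence rate, determinism) for a 1-D signal."""
    dist = np.abs(x[:, None] - x[None, :])          # pairwise distances
    rp = dist <= radius                             # recurrence plot (boolean matrix)
    np.fill_diagonal(rp, False)                     # drop the trivial line of identity
    recurrence_rate = rp.mean()
    # Determinism: fraction of recurrent points lying on diagonal lines >= l_min.
    n = len(x)
    on_lines = 0
    for k in range(1, n):                           # scan the upper diagonals...
        run = 0
        for v in list(np.diagonal(rp, offset=k)) + [False]:   # sentinel flushes last run
            if v:
                run += 1
            else:
                if run >= l_min:
                    on_lines += run
                run = 0
    on_lines *= 2                                   # ...the plot is symmetric
    determinism = on_lines / rp.sum() if rp.sum() else 0.0
    return float(recurrence_rate), float(determinism)

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 400)
    print("periodic:", rqa_measures(np.sin(t)))
    print("noise   :", rqa_measures(np.random.default_rng(3).standard_normal(400)))
```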


Archive | 2013

Automated Analysis of Non-Verbal Expressive Gesture

Stefano Piana; Maurizio Mancini; Antonio Camurri; Giovanna Varni; Gualtiero Volpe

A framework and software system for real-time tracking and analysis of non-verbal expressive and emotional behavior is proposed. The objective is to design and create multimodal interactive systems for the automated analysis of emotions. The system will give audiovisual feedback to support the therapy of children affected by Autism Spectrum Conditions (ASC). The system is based on the EyesWeb XMI open software platform and on Kinect depth sensors.
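
As a hedged example of one expressive-gesture feature that can be computed from Kinect skeleton data, the sketch below derives a simple contraction index from 3-D joint positions; the joint layout and the definition (mean distance of the joints from a torso joint) are illustrative assumptions, not the system's actual feature set.

```python
# Hedged sketch of one expressive-gesture feature, a contraction index computed
# from 3-D joint positions (how "closed" the posture is around the torso). The
# joint layout and normalisation are assumptions, not the system's feature set.
import numpy as np

def contraction_index(joints: np.ndarray, torso_idx: int = 0) -> float:
    """joints: (J, 3) array of joint positions in metres. Returns the mean distance
    of the joints from the torso joint; smaller values mean a more contracted posture."""
    torso = joints[torso_idx]
    return float(np.linalg.norm(joints - torso, axis=1).mean())

# Example: an "open" pose (arms spread) vs. a "closed" pose (arms near the torso).
open_pose = np.array([[0, 0, 0], [-0.8, 0.2, 0], [0.8, 0.2, 0], [0, 0.6, 0]])
closed_pose = np.array([[0, 0, 0], [-0.2, 0.1, 0], [0.2, 0.1, 0], [0, 0.6, 0]])
print("open  :", round(contraction_index(open_pose), 3))
print("closed:", round(contraction_index(closed_pose), 3))
```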

Collaboration


Dive into Giovanna Varni's collaborations.

Top Co-Authors

Christopher E. Peters (Royal Institute of Technology)
