Publication


Featured research published by Gualtiero Volpe.


Archive | 2004

Gesture-Based Communication in Human-Computer Interaction

Antonio Camurri; Gualtiero Volpe

This paper presents gesture analysis under the scope of motor control theory. Following the motor program view, some studies have revealed a number of invariant features that characterize movement trajectories in human hand-arm gestures. These features express general spatio-temporal laws underlying coordination and motor control processes. Some typical invariants are described and illustrated for planar pointing and tracing gestures. We finally discuss how these invariant laws can be used for motion editing and generation.
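One classic spatio-temporal invariant of this kind for planar tracing gestures is the two-thirds power law, which relates tangential velocity to trajectory curvature. The following sketch (an illustration under our own assumptions, not code from the paper) estimates the power-law exponent from a sampled 2D trajectory; an ellipse traced at constant angular speed satisfies the law exactly, so the estimate comes out close to 1/3.

import numpy as np

def power_law_exponent(x, y, dt):
    """Estimate beta in v = k * curvature**(-beta); the two-thirds power law
    predicts beta close to 1/3 for planar tracing movements."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    v = np.hypot(vx, vy)                             # tangential velocity
    c = np.abs(vx * ay - vy * ax) / (v ** 3 + 1e-9)  # curvature
    mask = (v > 1e-3) & (c > 1e-6)                   # drop near-singular samples
    slope, _ = np.polyfit(np.log(c[mask]), np.log(v[mask]), 1)
    return -slope  # log v = log k - beta * log c

# Example: elliptical tracing at constant angular speed follows the law.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
print(power_law_exponent(3.0 * np.cos(t), 1.0 * np.sin(t), t[1] - t[0]))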


Computer Music Journal | 2000

EyesWeb: Toward Gesture and Affect Recognition in Interactive Dance and Music Systems

Antonio Camurri; Shuji Hashimoto; Matteo Ricchetti; Andrea Ricci; Kenji Suzuki; Riccardo Trocca; Gualtiero Volpe

The goal of the EyesWeb project is to develop a modular system for the real-time analysis of body movement and gesture. Such information can be used to control and generate sound, music, and visual media, and to control actuators (e.g., robots). Another goal of the project is to explore and develop models of interaction by extending music language toward gesture and visual languages, with a particular focus on the understanding of affect and expressive content in gesture. For example, we attempt to distinguish the expressive content of two instances of the same movement.


International Gesture Workshop | 2003

Multimodal Analysis of Expressive Gesture in Music and Dance Performances

Antonio Camurri; Barbara Mazzarino; Matteo Ricchetti; Renee Timmers; Gualtiero Volpe

This paper presents ongoing research on the modelling of expressive gesture in multimodal interaction and on the development of multimodal interactive systems explicitly taking into account the role of non-verbal expressive gesture in the communication process. In this perspective, a particular focus is on dance and music as first-class conveyors of expressive and emotional content. Research outputs include (i) computational models of expressive gesture, (ii) validation by means of continuous ratings on spectators exposed to real artistic stimuli, and (iii) novel hardware and software components for the EyesWeb open platform (www.eyesweb.org), such as the recently developed Expressive Gesture Processing Library. The paper starts with a definition of expressive gesture. A unifying framework for the analysis of expressive gesture is then proposed. Finally, two experiments on expressive gesture in dance and music are discussed. This research work has been supported by the EU IST project MEGA (Multisensory Expressive Gesture Applications, www.megaproject.org) and the EU MOSART TMR Network.


IEEE MultiMedia | 2005

Communicating expressiveness and affect in multimodal interactive systems

Antonio Camurri; Gualtiero Volpe; G. De Poli; Marc Leman

Multisensory integrated expressive environments (MIEEs) form a framework for mixed reality applications in the performing arts such as interactive dance, music, or video installations. MIEEs address the expressive aspects of nonverbal human communication. We present the multilayer conceptual framework of MIEEs, algorithms for expressive content analysis and processing, and MIEE-based art applications.


IEEE Transactions on Affective Computing | 2011

Toward a Minimal Representation of Affective Gestures

Donald Glowinski; Nele Dael; Antonio Camurri; Gualtiero Volpe; Marcello Mortillaro; Klaus R. Scherer

This paper presents a framework for analysis of affective behavior starting with a reduced amount of visual information related to human upper-body movements. The main goal is to individuate a minimal representation of emotional displays based on nonverbal gesture features. The GEMEP (Geneva multimodal emotion portrayals) corpus was used to validate this framework. Twelve emotions expressed by 10 actors form the selected data set of emotion portrayals. Visual tracking of trajectories of the head and hands was performed from a frontal and a lateral view. Postural/shape and dynamic expressive gesture features were identified and analyzed. A feature reduction procedure was carried out, resulting in a 4D model of emotion expression that effectively classified and grouped emotions according to their valence (positive, negative) and arousal (high, low). These results show that emotionally relevant information can be detected and measured from the dynamic qualities of gesture. The framework was implemented as software modules (plug-ins) extending the EyesWeb XMI Expressive Gesture Processing Library and is going to be used in user-centric, networked media applications, including future mobile devices characterized by low computational resources and limited sensor systems.
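As an illustration of this kind of pipeline (the specific features and the use of principal components below are assumptions made for the sketch, not the paper's exact feature set or reduction procedure), a few postural and dynamic features can be computed from tracked head and hand trajectories and then projected onto a small number of dimensions:

import numpy as np

def gesture_features(head, lhand, rhand, dt):
    """Illustrative postural/dynamic features from (T, 2) trajectories of the
    head and the two hands; stand-ins, not the paper's feature set."""
    spread = np.linalg.norm(lhand - rhand, axis=1)             # hand-to-hand distance
    vel = np.linalg.norm(np.gradient(lhand, dt, axis=0), axis=1) + \
          np.linalg.norm(np.gradient(rhand, dt, axis=0), axis=1)
    lift = head[:, 1] - 0.5 * (lhand[:, 1] + rhand[:, 1])      # hands relative to head
    return np.array([spread.mean(), spread.std(),
                     vel.mean(), vel.max(),
                     lift.mean(), lift.std()])

def reduce_to_4d(feature_matrix):
    """Project one feature vector per portrayal (rows) onto the first four
    principal components, a simple stand-in for the reduction step."""
    centered = feature_matrix - feature_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:4].T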


Lecture Notes in Computer Science | 2003

Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library

Antonio Camurri; Barbara Mazzarino; Gualtiero Volpe

This paper presents some results of a research work concerning algorithms and computational models for real-time analysis of expressive gesture in full-body human movement. As a main concrete result of our research work, we present a collection of algorithms and related software modules for the EyesWeb open architecture (freely available from www.eyesweb.org). These software modules, collected in the EyesWeb Expressive Gesture Processing Library, have been used in real scenarios and applications, mainly in the fields of performing arts, therapy and rehabilitation, museum interactive installations, and other immersive augmented reality and cooperative virtual environment applications. The work has been carried out at DIST – InfoMus Lab in the framework of the EU IST Project MEGA (Multisensory Expressive Gesture Applications, www.megaproject.org).
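Two cues commonly associated with this library are the quantity of motion (the overall amount of detected movement) and the contraction index (how contracted or expanded the body silhouette is). The sketch below shows one plausible way to compute them from binary silhouette images; it is an illustrative reimplementation under our own assumptions, not the library's code.

import numpy as np

def quantity_of_motion(silhouettes):
    """Approximate quantity of motion as the area of the silhouette motion
    image over a short window, normalized by the current silhouette area.
    `silhouettes` is a (T, H, W) boolean array of body masks."""
    motion = np.zeros(silhouettes.shape[1:], dtype=bool)
    for prev, curr in zip(silhouettes[:-1], silhouettes[1:]):
        motion |= prev ^ curr                        # pixels that changed between frames
    return motion.sum() / max(int(silhouettes[-1].sum()), 1)

def contraction_index(silhouette):
    """Approximate contraction index as silhouette area divided by the area of
    its bounding box (values near 1 indicate a contracted posture)."""
    ys, xs = np.nonzero(silhouette)
    if ys.size == 0:
        return 0.0
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return silhouette.sum() / box_area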


IEEE Transactions on Multimedia | 2010

A System for Real-Time Multimodal Analysis of Nonverbal Affective Social Interaction in User-Centric Media

Giovanna Varni; Gualtiero Volpe; Antonio Camurri

This paper presents a multimodal system for real-time analysis of nonverbal affective social interaction in small groups of users. The focus is on two major aspects of affective social interaction: the synchronization of the affective behavior within a small group and the emergence of functional roles, such as leadership. A small group of users is modeled as a complex system consisting of single interacting components that can auto-organize and show global properties. Techniques are developed for computing quantitative measures of both synchronization and leadership. Music is selected as experimental test-bed since it is a clear example of interactive and social activity, where affective nonverbal communication plays a fundamental role. The system has been implemented as software modules for the EyesWeb XMI platform (http://www.eyesweb.org). It has been used in experimental frameworks (a violin duo and a string quartet) and in real-world applications (in user-centric applications for active music listening). Further application scenarios include entertainment, edutainment, therapy and rehabilitation, cultural heritage, and museum applications. Research has been carried out in the framework of the EU-ICT FP7 Project SAME (http://www.sameproject.eu).
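One simple way to picture the synchronization side of this analysis (the windowed-correlation measure below is an illustrative assumption, not the technique the paper adopts from the theory of complex systems) is to score how strongly two motion-feature streams, e.g., the head velocities of two musicians, co-vary over sliding windows:

import numpy as np

def windowed_sync(a, b, win=64, hop=16):
    """Illustrative pairwise synchronization score between two equally sampled
    1D feature streams: mean absolute Pearson correlation over sliding windows."""
    scores = []
    for start in range(0, len(a) - win + 1, hop):
        wa, wb = a[start:start + win], b[start:start + win]
        if wa.std() > 1e-9 and wb.std() > 1e-9:      # skip windows with no variation
            scores.append(abs(np.corrcoef(wa, wb)[0, 1]))
    return float(np.mean(scores)) if scores else 0.0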


Computer Vision and Pattern Recognition | 2008

Technique for automatic emotion recognition by body gesture analysis

Donald Glowinski; Antonio Camurri; Gualtiero Volpe; Nele Dael; Klaus R. Scherer

This paper illustrates our recent work on the analysis of expressive gesture related to the motion of the upper body (the head and the hands) in the context of emotional portrayals performed by professional actors. An experiment is presented which is the result of multidisciplinary joint work. The experiment aims at (i) developing models and algorithms for the analysis of such expressive content, and (ii) individuating which motion cues are involved in conveying the actor's expressive intentions to portray four emotions (anger, joy, relief, sadness) via a scenario approach. The paper discusses the experiment in detail with reference to related conceptual issues, developed techniques, and the obtained results.


Cognition, Technology and Work archive | 2004

Expressive interfaces

Antonio Camurri; Barbara Mazzarino; Gualtiero Volpe

Analysis of expressiveness in human gesture can lead to new paradigms for the design of improved human-machine interfaces, thus enhancing users’ participation and experience in mixed reality applications and context-aware mediated environments. The development of expressive interfaces decoding the highly affective information gestures convey opens novel perspectives in the design of interactive multimedia systems in several application domains: performing arts, museum exhibits, edutainment, entertainment, therapy, and rehabilitation. This paper describes some recent developments in our research on expressive interfaces by presenting computational models and algorithms for the real-time analysis of expressive gestures in human full-body movement. Such analysis is discussed both as an example and as a basic component for the development of effective expressive interfaces. As a concrete result of our research, a software platform named EyesWeb was developed (http://www.eyesweb.org). Besides supporting research, EyesWeb has also been employed as a concrete tool and open platform for developing real-time interactive applications.


Human-Computer Interaction | 2016

Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain

Aneesha Singh; Stefano Piana; Davide Pollarolo; Gualtiero Volpe; Giovanna Varni; Ana Tajadura-Jiménez; Amanda C. de C. Williams; Antonio Camurri; Nadia Bianchi-Berthouze

Chronic (persistent) pain (CP) affects 1 in 10 adults; clinical resources are insufficient, and anxiety about activity restricts lives. Technological aids monitor activity but lack necessary psychological support. This article proposes a new sonification framework, Go-with-the-Flow, informed by physiotherapists and people with CP. The framework proposes articulation of user-defined sonified exercise spaces (SESs) tailored to psychological needs and physical capabilities that enhance body and movement awareness to rebuild confidence in physical activity. A smartphone-based wearable device and a Kinect-based device were designed based on the framework to track movement and breathing and sonify them during physical activity. In control studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement, and relaxation with sound feedback. Home studies, a focus group, and a survey of CP patients conducted at the end of a hospital pain management session provided an in-depth understanding of how different aspects of the SESs and their calibration can facilitate self-directed rehabilitation and how the wearable version of the device can facilitate transfer of gains from exercise to feared or demanding activities in real life. We conclude by discussing the implications of our findings on the design of technology for physical rehabilitation.
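As a rough illustration of how a sonified exercise space can drive feedback (the forward-bend angle, calibrated range, and exponential pitch mapping below are assumptions made for the sketch, not the framework's actual sonification strategies), movement progress through a person's calibrated range can be mapped to a rising pitch:

def stretch_to_pitch(angle_deg, calibrated_min=10.0, calibrated_max=70.0,
                     base_hz=220.0, octaves=1.0):
    """Map a forward-bend angle (e.g., from a smartphone accelerometer) onto a
    pitch that rises as the wearer moves through their calibrated range; the
    ranges and the exponential mapping are illustrative assumptions."""
    span = max(calibrated_max - calibrated_min, 1e-6)
    progress = min(max((angle_deg - calibrated_min) / span, 0.0), 1.0)
    return base_hz * 2.0 ** (octaves * progress)

# Example: halfway through the calibrated range gives a pitch half an octave up.
print(stretch_to_pitch(40.0))   # about 311 Hz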
