Publication


Featured research published by Antonio Camurri.


Cognition, Technology and Work | 2004

Expressive interfaces

Antonio Camurri; Barbara Mazzarino; Gualtiero Volpe

Analysis of expressiveness in human gesture can lead to new paradigms for the design of improved human-machine interfaces, thus enhancing users’ participation and experience in mixed reality applications and context-aware mediated environments. The development of expressive interfaces that decode the affective information gestures convey opens novel perspectives for the design of interactive multimedia systems in several application domains: performing arts, museum exhibits, edutainment, entertainment, therapy, and rehabilitation. This paper describes recent developments in our research on expressive interfaces by presenting computational models and algorithms for the real-time analysis of expressive gestures in human full-body movement. Such analysis is discussed both as an example and as a basic component for the development of effective expressive interfaces. As a concrete result of our research, a software platform named EyesWeb was developed (http://www.eyesweb.org). Besides supporting research, EyesWeb has also been employed as a concrete tool and open platform for developing real-time interactive applications.
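
A core low-level feature in this line of work is the overall "energy" of movement. The minimal Python sketch below estimates a Quantity of Motion from binary silhouettes via a Silhouette Motion Image; it illustrates the idea only and is not EyesWeb code (the platform itself is a visual patching environment). The frame size, window length, and toy input are assumptions.

```python
import numpy as np

def quantity_of_motion(silhouettes, window=4):
    """Quantity of Motion (QoM): area of the Silhouette Motion Image
    (pixels covered by recent silhouettes but not the current one),
    normalised by the current silhouette's area. A rough proxy for
    overall movement energy."""
    qom = []
    for t in range(window, len(silhouettes)):
        # Silhouette Motion Image: union of the last `window` silhouettes,
        # minus the pixels still occupied in the current frame.
        smi = np.zeros_like(silhouettes[t], dtype=bool)
        for k in range(1, window + 1):
            smi |= silhouettes[t - k]
        smi &= ~silhouettes[t]
        area = silhouettes[t].sum()
        qom.append(smi.sum() / max(area, 1))
    return np.array(qom)

# Toy example: a square "body" drifting one pixel per frame.
frames = []
for t in range(20):
    f = np.zeros((64, 64), dtype=bool)
    f[20:40, 10 + t:30 + t] = True
    frames.append(f)
print(quantity_of_motion(frames)[:5])
```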


Archive | 2011

Multimodal Analysis of Expressive Gesture in Music Performance

Antonio Camurri; Gualtiero Volpe

This chapter focuses on systems and interfaces for the multimodal analysis of expressive gesture as a key element of music performance. Research on expressive gesture has become particularly relevant in recent years. Psychological studies have been a fundamental source for the automatic analysis of expressive gesture, since they help identify the most significant features to be analysed. A further relevant source has been research in the humanistic tradition, in particular choreography. As a major example, in his Theory of Effort, the choreographer Rudolf Laban describes the most significant qualities of movement. Starting from these sources, several models, systems, and techniques for the analysis of expressive gesture were developed. This chapter presents an overview of methods for the analysis, modelling, and understanding of expressive gesture in music performance. It introduces techniques resulting from the research the authors have developed over the years: from early experiments in human-robot interaction in the context of music performance up to recent set-ups of innovative interfaces and systems for the active experience of sound and music content. The chapter ends with an overview of possible future research challenges.
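
Laban's Effort qualities are usually operationalised as kinematic descriptors. The sketch below computes three illustrative proxies (Space as directness, Time as suddenness, Weight as a kinetic-energy peak) from a 3D marker trajectory. These formulas are common simplifications, not the authors' exact formulations; the sampling rate and toy trajectory are assumptions.

```python
import numpy as np

def laban_effort_features(traj, dt=1 / 100):
    """Rough Laban-inspired descriptors from a 3D trajectory (T x 3 array),
    e.g. a hand marker from motion capture sampled at 1/dt Hz.
    Illustrative proxies only."""
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)

    # Space (directness): straight-line distance over travelled path length.
    path_len = speed.sum() * dt
    directness = np.linalg.norm(traj[-1] - traj[0]) / max(path_len, 1e-9)

    # Time (suddenness): mean magnitude of acceleration.
    suddenness = np.linalg.norm(acc, axis=1).mean()

    # Weight (strength): peak kinetic energy per unit mass.
    strength = 0.5 * (speed ** 2).max()

    return {"directness": directness, "suddenness": suddenness, "strength": strength}

# Toy example: a smooth one-second arc.
t = np.linspace(0, 1, 100)
smooth = np.stack([t, np.sin(np.pi * t), np.zeros_like(t)], axis=1)
print(laban_effort_features(smooth))
```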


Robot and Human Interactive Communication | 2006

Multimodal and cross-modal analysis of expressive gesture in tangible acoustic interfaces

Antonio Camurri; Gualtiero Volpe

This paper focuses on multimodal and cross-modal analysis of expressive gesture, with particular attention to collaborative interactive systems exploiting tangible acoustic interfaces (TAIs). We developed TAIs aimed at processing expressive information from users and at supporting creativity in concrete music theatre and museum projects. The paper presents (i) techniques for the extraction and analysis of high-level features from the expressive gesture of TAI users in collaborative frameworks, (ii) concrete examples of multimodal and cross-modal processing of expressive gesture, and (iii) examples of how such results have been exploited in public events and artistic productions. On such occasions the developed techniques were applied and evaluated in experiments involving both experts and the general audience. Research is carried out in the framework of the EU-IST STREP Project TAI-CHI (Tangible Acoustic Interfaces for Computer-Human Interaction). High-level expressive gesture analysis and multimodal and cross-modal processing are achieved in the new EyesWeb 4 open platform (available at www.eyesweb.org).
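
Tangible acoustic interfaces typically locate a touch from the vibrations it injects into the surface. As background, here is a minimal 1D sketch of time-difference-of-arrival localisation between two contact sensors using cross-correlation; 2D surfaces, as in TAI-CHI, generalise this with more sensors. The sampling rate, bar length, and wave speed below are toy assumptions, not project parameters.

```python
import numpy as np

def tdoa_1d(sig_a, sig_b, fs, length_m, wave_speed):
    """Estimate tap position along a bar of `length_m` metres with contact
    sensors at both ends, from the time difference of arrival between the
    two signals."""
    # Cross-correlate: the peak lag is t_A - t_B (negative when A hears
    # the tap first, i.e. the tap is closer to sensor A).
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = (np.argmax(corr) - (len(sig_b) - 1)) / fs
    # Convert the arrival-time difference into an offset from the midpoint.
    return length_m / 2 + lag * wave_speed / 2

# Toy example: a click reaching sensor A 0.5 ms before sensor B.
fs, n = 48_000, 2048
click = np.zeros(n)
click[100:110] = 1.0
delayed = np.roll(click, 24)  # 24 samples = 0.5 ms later at sensor B
print(tdoa_1d(click, delayed, fs, length_m=1.0, wave_speed=500.0))  # ~0.375 m
```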


Robot and Human Interactive Communication | 2002

Improving the man-machine interface through the analysis of expressiveness in human movement

Antonio Camurri; Paolo Coletta; Barbara Mazzarino; R. Trocca; Gualtiero Volpe

In this paper, our recent developments in the research on computational models and algorithms for the real-time analysis of full-body human movement are presented. Our aim is to find methods and techniques to extract, in real time, cues relevant to KANSEI and emotional content in human expressive gesture. Analysis of expressiveness in human gestures can contribute to new paradigms for the design of improved human-robot interfaces. As a main concrete result of our research work, a software platform named EyesWeb has been developed and is distributed for free (www.eyesweb.org). EyesWeb supports research in multimodal interaction and provides a concrete tool for developing real-time interactive applications. Human movement analysis is provided by means of a library of algorithms for sensor and video processing, feature extraction, gesture segmentation, and more. A visual environment is provided to compose such basic algorithms into more sophisticated analysis techniques.
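
Gesture segmentation of the kind mentioned above can, in its simplest form, be done by thresholding a motion-energy signal. The sketch below splits an energy time series into motion phases; it is a bare-bones stand-in under assumed threshold and minimum-length parameters, not the actual EyesWeb segmentation module.

```python
import numpy as np

def segment_gestures(energy, threshold, min_len=5):
    """Split a motion-energy time series into (start, end) index pairs of
    motion phases: runs where energy stays above `threshold` for at least
    `min_len` frames."""
    active = energy > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                       # motion phase begins
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # motion phase ends
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

# Toy example: two bursts of movement separated by stillness.
rng = np.random.default_rng(0)
energy = np.concatenate([rng.uniform(0.6, 1.0, 30), np.zeros(20),
                         rng.uniform(0.6, 1.0, 25)])
print(segment_gestures(energy, threshold=0.3))  # -> [(0, 30), (50, 75)]
```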


Archive | 2017

Report On Data-Driven And Model-Driven Analysis Methodologies

Antonio Camurri; Stefano Piana; Paolo Alborno; Ksenia Kolykhalova; Nikolas De Giorgis; Michele Buccoli; Massimiliano Zanoni

This deliverable describes the development of techniques for the multimodal analysis of dance at both the individual and the group level, covering data-driven and model-driven analysis. Section 1 introduces the report and lists its objectives, whereas Section 2 covers the methodology employed in the data-driven approach. Section 3 provides an overview of the model-driven approaches developed to extract movement dimensions related to the dance-learning scenario: from low-level model-based movement dimensions to more complex intra- and inter-network methodologies, including a technique to automatically segment dance sequences into meaningful chunks.
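
On the group-level, data-driven side, one of the simplest possible indices is how correlated the dancers' movement features are over time. Below is a minimal sketch computing the mean pairwise correlation across dancers, with made-up data; it only illustrates the idea and is far simpler than the network methodologies the deliverable describes.

```python
import numpy as np

def group_synchrony(features):
    """Mean pairwise Pearson correlation across dancers' feature series
    (N dancers x T frames): a crude data-driven index of group coordination."""
    n = features.shape[0]
    corr = np.corrcoef(features)
    # Average the upper triangle, i.e. each unordered pair of dancers once.
    iu = np.triu_indices(n, k=1)
    return corr[iu].mean()

# Toy example: three dancers; two move together, one does their own thing.
t = np.linspace(0, 10, 500)
dancers = np.stack([
    np.sin(t),
    np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size),
    np.cos(3 * t),
])
print(group_synchrony(dancers))  # well below 1: the third dancer breaks synchrony
```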


Virtual Rehabilitation | 2007

Audio Patterns as a Source of Information for the Perception and Control of Orientation

Giovanna Varni; Thomas A. Stoffregen; Antonio Camurri; Barbara Mazzarino; Gualtiero Volpe

We discuss the use of dynamic auditory patterns to detect and control human action. We argue that such patterns can affect the ability of standing persons to control the medio-lateral orientation of their body. Subjects (N = 10) stood on a mechanical platform that could rotate about the subjects’ medio-lateral axis. Real-time data about platform roll were used to generate acoustic stimuli, presented via headphones. Subjects were asked to control their standing sway so as to achieve specific patterns in the acoustic stimuli. The results suggest that acoustic stimuli can be used as a source of information for the perception and control of orientation.
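
The abstract does not spell out the stimulus design, but a typical way to turn a roll angle into an acoustic pattern is to modulate the pitch of a tone. The sketch below shows one plausible such mapping; the centre frequency, block size, and cents-per-degree scaling are all assumptions, not the paper's parameters.

```python
import numpy as np

def sonify_roll(roll_deg, fs=44_100, f_center=440.0, cents_per_deg=50.0):
    """Map a roll-angle trace (degrees, one value per audio block) to a sine
    tone whose pitch deviates from `f_center` in proportion to the tilt."""
    block = 512
    phase, out = 0.0, []
    for angle in roll_deg:
        f = f_center * 2 ** (angle * cents_per_deg / 1200)  # cents -> ratio
        t = np.arange(block) / fs
        out.append(np.sin(phase + 2 * np.pi * f * t))
        # Advance the phase so consecutive blocks join without clicks.
        phase = (phase + 2 * np.pi * f * block / fs) % (2 * np.pi)
    return np.concatenate(out)

# Toy example: a platform swaying +/- 5 degrees, ~4 s of audio.
angles = 5 * np.sin(np.linspace(0, 2 * np.pi, 345))
audio = sonify_roll(angles)
print(audio.shape)
```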


Proceedings of the 3rd International Symposium on Movement and Computing | 2016

The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement

Antonio Camurri; Gualtiero Volpe; Stefano Piana; Maurizio Mancini; Radoslaw Niewiadomski; Nicola Ferrari; Corrado Canepa


SERVE@AVI | 2016

Designing Multimodal Interactive Systems using EyesWeb XMI

Gualtiero Volpe; Paolo Alborno; Antonio Camurri; Paolo Coletta; Simone Ghisio; Maurizio Mancini; Radoslaw Niewiadomski; Stefano Piana


Archive | 2004

Expressive gesture and multimodal interactive systems

Antonio Camurri; Gualtiero Volpe; S. Menocci; Emmanuel Rocca; I. Vallone


Archive | 2005

Multimodal And Cross-Modal Processing In Interactive Systems Based On Tangible Acoustic Interfaces

Antonio Camurri; Corrado Canepa; Carlo Drioli; Alberto Massari; Barbara Mazzarino; Gualtiero Volpe
