Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Barbara Mazzarino is active.

Publication


Featured research published by Barbara Mazzarino.


International Gesture Workshop | 2003

Multimodal Analysis of Expressive Gesture in Music and Dance Performances

Antonio Camurri; Barbara Mazzarino; Matteo Ricchetti; Renee Timmers; Gualtiero Volpe

This paper presents ongoing research on the modelling of expressive gesture in multimodal interaction and on the development of multimodal interactive systems explicitly taking into account the role of non-verbal expressive gesture in the communication process. In this perspective, a particular focus is on dance and music as first-class conveyors of expressive and emotional content. Research outputs include (i) computational models of expressive gesture, (ii) validation by means of continuous ratings on spectators exposed to real artistic stimuli, and (iii) novel hardware and software components for the EyesWeb open platform (www.eyesweb.org), such as the recently developed Expressive Gesture Processing Library. The paper starts with a definition of expressive gesture. A unifying framework for the analysis of expressive gesture is then proposed. Finally, two experiments on expressive gesture in dance and music are discussed. This research work has been supported by the EU IST project MEGA (Multisensory Expressive Gesture Applications, www.megaproject.org) and the EU MOSART TMR Network.


Lecture Notes in Computer Science | 2003

Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library

Antonio Camurri; Barbara Mazzarino; Gualtiero Volpe

This paper presents some results of a research work concerning algorithms and computational models for real-time analysis of expressive gesture in full-body human movement. As a main concrete result of our research work, we present a collection of algorithms and related software modules for the EyesWeb open architecture (freely available from www.eyesweb.org). These software modules, collected in the EyesWeb Expressive Gesture Processing Library, have been used in real scenarios and applications, mainly in the fields of performing arts, therapy and rehabilitation, museum interactive installations, and other immersive augmented reality and cooperative virtual environment applications. The work has been carried out at DIST – InfoMus Lab in the framework of the EU IST Project MEGA (Multisensory Expressive Gesture Applications, www.megaproject.org).


Cognition, Technology and Work | 2004

Expressive interfaces

Antonio Camurri; Barbara Mazzarino; Gualtiero Volpe

Analysis of expressiveness in human gesture can lead to new paradigms for the design of improved human-machine interfaces, thus enhancing users’ participation and experience in mixed reality applications and context-aware mediated environments. The development of expressive interfaces decoding the highly affective information gestures convey opens novel perspectives in the design of interactive multimedia systems in several application domains: performing arts, museum exhibits, edutainment, entertainment, therapy, and rehabilitation. This paper describes some recent developments in our research on expressive interfaces by presenting computational models and algorithms for the real-time analysis of expressive gestures in human full-body movement. Such analysis is discussed both as an example and as a basic component for the development of effective expressive interfaces. As a concrete result of our research, a software platform named EyesWeb was developed (http://www.eyesweb.org). Besides supporting research, EyesWeb has also been employed as a concrete tool and open platform for developing real-time interactive applications.


International Gesture Workshop | 2003

Ghost in the Cave – An Interactive Collaborative Game Using Non-verbal Communication

Marie-Louise Rinman; Anders Friberg; Bendik Bendiksen; Demian Cirotteau; Sofia Dahl; Ivar Kjellmo; Barbara Mazzarino; Antonio Camurri

The interactive game environment, Ghost in the Cave, presented in this short paper, is a work still in progress. The game involves participants in an activity using non-verbal emotional expressions. Two teams use expressive gestures in either voice or body movements to compete. Each team has an avatar controlled either by singing into a microphone or by moving in front of a video camera. Participants/players control their avatars by using acoustical or motion cues. The avatar is navigated in a 3D distributed virtual environment using the Octagon server and player system. The voice input is processed using a musical cue analysis module yielding performance variables such as tempo, sound level and articulation, as well as an emotional prediction. Similarly, movements captured from a video camera are analyzed in terms of different movement cues. The target group is young teenagers, and the main purpose is to encourage creative expression through new forms of collaboration.
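The musical cue analysis described above can be illustrated with a minimal sketch: sound level as the RMS of an audio frame in dBFS, and tempo as the mean inter-onset interval converted to beats per minute. The function names and the exact formulas are assumptions for illustration, not the module used in the game:

```python
import math

def sound_level_db(frame):
    """RMS level of one audio frame in dBFS (0 dB = full-scale sine-free peak)."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def tempo_bpm(onset_times):
    """Tempo estimate from note-onset times (seconds): the mean
    inter-onset interval converted to beats per minute."""
    intervals = [t2 - t1 for t1, t2 in zip(onset_times, onset_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# A full-scale square wave sits at 0 dBFS:
level = sound_level_db([1.0, -1.0, 1.0, -1.0])   # 0.0
# Onsets every half second correspond to 120 BPM:
tempo = tempo_bpm([0.0, 0.5, 1.0, 1.5])          # 120.0
```

Articulation (e.g. legato vs. staccato) could be estimated analogously from the ratio of sounding time to inter-onset interval, but that is beyond this sketch.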


Interaction Design and Children | 2012

BeSound: embodied reflexion for music education in childhood

Gualtiero Volpe; Giovanna Varni; Anna Rita Addessi; Barbara Mazzarino

Embodiment and reflexive interaction have proved to be effective approaches to music education in childhood. A research challenge consists of merging them. This paper presents BeSound, an application intended to support children in learning the basic elements of composition. Children explore rhythm, melody, and harmony by playing at mimicking objects or characters; the qualities of their whole-body movements are analysed in real-time according to Rudolf Laban's Theory of Effort and used to control sound. The paper focuses on the design of BeSound and describes the analysis performed to distinguish between direct and flexible movements (Laban's Space component) and between light and heavy movements (Laban's Weight component).
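A common way to quantify Laban's Space component from a tracked trajectory is a directness index: the ratio of the straight-line distance between the trajectory's endpoints to the length of the path actually travelled. The sketch below is a hypothetical illustration of that idea, not the BeSound implementation:

```python
import math

def directness_index(trajectory):
    """Directness Index of a 2D trajectory (list of (x, y) points):
    straight-line distance between first and last point, divided by
    total path length. Values near 1 suggest a direct movement;
    lower values suggest a flexible (meandering) one."""
    (x0, y0), (xn, yn) = trajectory[0], trajectory[-1]
    straight = math.hypot(xn - x0, yn - y0)
    path = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])
    )
    return straight / path if path > 0 else 1.0

# A straight path is maximally direct:
direct = directness_index([(0, 0), (1, 0), (2, 0)])    # 1.0
# A detour lowers the index:
flexible = directness_index([(0, 0), (1, 1), (2, 0)])  # ≈ 0.707
```

A classifier for "direct vs. flexible" would then threshold this index; Laban's Weight component would instead need dynamic cues such as acceleration or vertical impulse, which a purely positional measure like this cannot capture.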


Digital Interactive Media in Entertainment and Arts | 2008

Social active listening and making of expressive music: the interactive piece the bow is bent and drawn

Antonio Camurri; Corrado Canepa; Paolo Coletta; Nicola Ferrari; Barbara Mazzarino; Gualtiero Volpe

This paper discusses the concepts, the interaction paradigms, and the system design and implementation developed for the interactive dance and music performance The Bow is bent and drawn (composer Nicola Ferrari), presented at Casa Paganini, Genova, Italy, on the occasion of the opening concert of the 8th Intl. Conference on New Interfaces for Musical Expression (NIME08), June 4, 2008. The Bow is bent and drawn is grounded in current research at Casa Paganini -- InfoMus Lab on social active listening of sound and music content and on analysis and processing of expressiveness in human full-body movement and gesture. In particular, The Bow is bent and drawn exploits our recent system Mappe per Affetti Erranti (literally, Maps for Wandering Affects), which enables a novel paradigm for social active experience and dynamic molding of the expressive content of a music piece. In Mappe per Affetti Erranti, multiple users can physically navigate a polyphonic music piece and can intervene in real-time, through their full-body movement and gesture, on the expressive content the music performance conveys. The research topics addressed in this paper are currently investigated in the EU ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu).


Journal on Multimodal User Interfaces | 2010

Browsing a dance video collection: dance analysis and interface design

Damien Tardieu; Xavier Siebert; Barbara Mazzarino; Ricardo Chessini; Julien Dubois; Stéphane Dupont; Giovanna Varni; Alexandra Visentin

In this article we present a system for content-based browsing of a dance video database. A set of features describing dance is proposed, to quantify local gestures of the dancer as well as global stage usage. These features are used to compute similarities between recorded dance improvisations, which in turn serve to guide the visual exploration in the browsing methods presented here. The software integrating all these components is part of an interactive touch-screen installation, and is also accessible online in association with an artistic project. The different components of this browsing system are presented in this paper.
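As a minimal illustration of computing similarities between feature descriptions of recorded improvisations, the sketch below uses cosine similarity between feature vectors. The actual system's features and distance measure are not detailed here, so both the representation and the measure are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (e.g. descriptors
    of a dancer's local gestures and global stage usage); 1.0 means
    the vectors point in the same direction in feature space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical descriptors are maximally similar:
same = cosine_similarity([0.8, 0.1, 0.3], [0.8, 0.1, 0.3])  # 1.0
# Orthogonal descriptors share nothing:
diff = cosine_similarity([1.0, 0.0], [0.0, 1.0])            # 0.0
```

In a browsing interface such pairwise similarities can be turned into a layout, for instance by placing the most similar recordings nearest to the one currently selected.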


Journal of Vision | 2004

Perceiving Animacy and Arousal in Transformed Displays of Human Interaction

Phil McAleer; Barbara Mazzarino; Gualtiero Volpe; Antonio Camurri; Helena Patterson; Frank E. Pollick

When viewing a moving abstract stimulus, people tend to attribute social meaning and purpose to the movement. The classic work of Heider and Simmel [1] investigated how observers would describe the movement of simple geometric shapes (a circle, triangles, and a square) around a screen. A high proportion of participants reported seeing some form of purposeful interaction between the three abstract objects and described this interaction as a social encounter. Various papers have subsequently found similar results [2,3] and gone on to show that, as Heider and Simmel suggested, the phenomenon was due more to the relationship of the objects in space and time than to any particular object characteristic. The research of Tremoulet and Feldman [4] has shown that the percept of animacy may be elicited with a solitary moving object. They asked observers to rate the movement of a single dot or rectangle for whether it was under the influence of an external force or in control of its own motion. At mid-trajectory the shape would change speed, direction, or both. They found that shapes that either changed direction by more than 25 degrees from the original trajectory or changed speed were judged to be “more alive” than others. Further discussion and evidence of animacy with one or two small dots can be found in Gelman, Durgin and Kaufman [5]. Our aim was to further study this phenomenon by using a different method of stimulus production. Previous methods for producing displays of animate objects have relied either on handcrafted stimuli or on parametric variations of simple motion patterns. It is our aim to work towards a new automatic approach by taking actual human movements, transforming them into basic shapes, and exploring what motion properties need to be preserved to obtain animacy.
Although the phenomenon of animacy has been demonstrated for many years, using various different displays, few specific criteria have been established for the essential characteristics of the displays. Part of this research is to establish what movements result in percepts of animacy and, in turn, to further the understanding of the essential characteristics of human movement and social interaction. In this paper we discuss two experiments in which we examine how different transformations of an original video of a dance influence the perception of animacy. We also examine reports of arousal (Experiment 1) and emotional engagement (Experiment 2).
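The Tremoulet and Feldman criterion cited above (a mid-trajectory change of heading greater than 25 degrees, or a change of speed) can be sketched as a simple heuristic over three consecutive trajectory points. The function names and the speed tolerance are hypothetical, introduced only to make the criterion concrete:

```python
import math

def direction_change_deg(p0, p1, p2):
    """Angle in degrees between the incoming segment p0->p1 and the
    outgoing segment p1->p2; 0 means the trajectory continues straight."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    d = abs(a2 - a1) % (2 * math.pi)
    if d > math.pi:              # take the smaller of the two angles
        d = 2 * math.pi - d
    return math.degrees(d)

def looks_animate(p0, p1, p2, speed_before, speed_after,
                  angle_thresh=25.0, speed_tol=1e-6):
    """Heuristic after Tremoulet and Feldman: flag a mid-trajectory
    heading change above the threshold, or any change of speed."""
    return (direction_change_deg(p0, p1, p2) > angle_thresh
            or abs(speed_after - speed_before) > speed_tol)

# A 45-degree turn at constant speed exceeds the 25-degree criterion:
turning = looks_animate((0, 0), (1, 0), (2, 1), 1.0, 1.0)   # True
# Straight motion at constant speed does not:
inert = looks_animate((0, 0), (1, 0), (2, 0), 1.0, 1.0)     # False
```

This is of course only the stimulus-side criterion; the studies above measured the resulting percept with human observers.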


Privacy, Security, Risk and Trust | 2011

Towards a Social Retrieval of Music Content

Giovanna Varni; Gualtiero Volpe; Barbara Mazzarino

Endowing search engines with multimodal content indexing, sharing, and retrieval is a research challenge for the ICT community. This paper introduces a use case exploiting embodied cooperation as a paradigm for formulating social queries. It focuses on assessing users' experience of this use case and on the design and exploitation of algorithms suitable for providing search engines with social intelligence.


GW'05 Proceedings of the 6th International Conference on Gesture in Human-Computer Interaction and Simulation | 2005

Finger tracking methods using EyesWeb

Anne-Marie Burns; Barbara Mazzarino

This paper compares different algorithms for tracking the position of fingers in a two-dimensional environment. Four algorithms have been implemented in EyesWeb, the open platform developed by the DIST-InfoMus laboratory. The first three algorithms use projection signatures, the circular Hough transform, and geometric properties, respectively, and rely only on hand characteristics to locate the finger. The fourth algorithm uses color markers and is employed as a reference system for the other three. All the algorithms have been evaluated using two-dimensional video images of a hand performing different finger movements on a flat surface. Results about the accuracy, precision, latency, and computer resource usage of the different algorithms are provided. Applications of this research include human-computer interaction systems based on hand gesture, sign language recognition, hand posture recognition, and gestural control of music.
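Of the methods compared, the projection-signature approach is the simplest to sketch: project a binary hand mask onto its rows and columns by counting foreground pixels, then take the topmost non-empty row as the fingertip candidate (for an upward-pointing finger). This is a hypothetical illustration of the general technique, not the paper's implementation:

```python
def projection_signatures(mask):
    """Row and column projection signatures of a binary mask
    (list of rows of 0/1): foreground-pixel counts per row and column."""
    rows = [sum(row) for row in mask]
    cols = [sum(col) for col in zip(*mask)]
    return rows, cols

def fingertip_from_signatures(mask):
    """Estimate the fingertip as the topmost foreground row, at the
    mean column of that row's foreground pixels."""
    rows, _ = projection_signatures(mask)
    tip_row = next(r for r, count in enumerate(rows) if count > 0)
    xs = [c for c, v in enumerate(mask[tip_row]) if v]
    return tip_row, sum(xs) // len(xs)

# A tiny upward-pointing "finger" on a hand-like blob:
mask = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
tip = fingertip_from_signatures(mask)  # (0, 2)
```

The circular Hough transform variant would instead vote for circle centres matching the fingertip's curvature, which is more robust to hand orientation but considerably more expensive, consistent with the resource-usage comparison the paper reports.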

Collaboration


Dive into Barbara Mazzarino's collaborations.
