
Publications


Featured research published by Marco Fabiani.


Journal on Multimodal User Interfaces | 2012

Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices

Giovanna Varni; Gaël Dubus; Sami Oksanen; Gualtiero Volpe; Marco Fabiani; Roberto Bresin; Jari Kleimola; Vesa Välimäki; Antonio Camurri

This paper evaluates three different interactive sonifications of dyadic coordinated human rhythmic activity. An index of phase synchronisation of gestures was chosen as coordination metric. The sonifications are implemented as three prototype applications exploiting mobile devices: Sync’n’Moog, Sync’n’Move, and Sync’n’Mood. Sync’n’Moog sonifies the phase synchronisation index by acting directly on the audio signal and applying a nonlinear time-varying filtering technique. Sync’n’Move intervenes on the multi-track music content by making the single instruments emerge and hide. Sync’n’Mood manipulates the affective features of the music performance. The three sonifications were also tested against a condition without sonification.
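The coordination metric mentioned above, a phase synchronisation index of gestures, can be illustrated with a generic phase-locking value (PLV) computed from instantaneous phases obtained via the Hilbert transform. This is a standard sketch of such an index, not necessarily the exact metric used in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two 1-D signals: 0 means no phase
    coupling, 1 means a constant phase difference throughout."""
    phi_x = np.angle(hilbert(x))  # instantaneous phase of x
    phi_y = np.angle(hilbert(y))  # instantaneous phase of y
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

# Two 5 Hz sinusoids with a fixed phase offset are perfectly locked;
# a sinusoid against white noise is not.
t = np.linspace(0, 1, 1000, endpoint=False)
locked = phase_locking_value(np.sin(2 * np.pi * 5 * t),
                             np.sin(2 * np.pi * 5 * t + 0.3))
unlocked = phase_locking_value(np.sin(2 * np.pi * 5 * t),
                               np.random.default_rng(0).standard_normal(1000))
```

A time-resolved variant of this index (e.g. over a sliding window) would give the kind of continuous control signal a sonification can map to audio parameters.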


Journal of the Acoustical Society of America | 2011

Influence of pitch, loudness, and timbre on the perception of instrument dynamics

Marco Fabiani; Anders Friberg

The effect of variations in pitch, loudness, and timbre on the perception of the dynamics of isolated instrumental tones is investigated. A full factorial design was used in a listening experiment. The subjects were asked to indicate the perceived dynamics of each stimulus on a scale from pianissimo to fortissimo. Statistical analysis showed that for the instruments included (i.e., clarinet, flute, piano, trumpet, and violin) timbre and loudness had equally large effects, while pitch was relevant mostly for the first three. The results confirmed our hypothesis that loudness alone is not a reliable estimate of the dynamics of musical tones.


User Centric Media | 2009

User-centric context-aware mobile applications for embodied music listening

Antonio Camurri; Gualtiero Volpe; Hugues Vinet; Roberto Bresin; Marco Fabiani; Gaël Dubus; Esteban Maestre; Jordi Llop; Jari Kleimola; Sami Oksanen; Vesa Välimäki; Jarno Seppänen

This paper surveys a collection of sample applications for networked user-centric context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and were presented at the Agora Festival (IRCAM, Paris, France) in June 2009. All of them address in different ways the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real time on the music content by means of their movements and gestures as captured by mobile devices. On the occasion of the Agora Festival, the applications were also evaluated by both expert and non-expert users.


Acta Acustica united with Acustica | 2011

Analysis of the acoustics and playing strategies of turntable scratching

Kjetil Falkenberg Hansen; Marco Fabiani; Roberto Bresin

Scratching performed by a DJ (disk jockey) is a skillful style of playing the turntable with complex musical output. This study focuses on the description of some of the acoustical parameters and pl ...


Computer Music Modeling and Retrieval | 2008

Rule-Based Expressive Modifications of Tempo in Polyphonic Audio Recordings

Marco Fabiani; Anders Friberg

This paper describes a few aspects of a system for expressive, rule-based modifications of audio recordings regarding tempo, dynamics, and articulation. The input audio signal is first aligned with a score containing extra information on how to modify a performance. The signal is then transformed into the time-frequency domain. Each played tone is identified using partial tracking and the score information. Articulation and dynamics are changed by modifying the length and content of the partial tracks. The focus here is on the tempo modification, which is done using a combination of time-frequency techniques and phase reconstruction. Preliminary results indicate that the accuracy of the tempo modification is on average 8.2 ms when comparing inter-onset intervals in the resulting signal with the desired ones. Possible applications of such a system are in music pedagogy, basic perception research, as well as interactive music systems.


Journal on Multimodal User Interfaces | 2012

Interactive sonification of expressive hand gestures on a handheld device

Marco Fabiani; Roberto Bresin; Gaël Dubus

We present here a mobile phone application called MoodifierLive which aims at using expressive music performances for the sonification of expressive gestures through the mapping of the phone’s accelerometer data to the performance parameters (i.e. tempo, sound level, and articulation). The application, and in particular the sonification principle, is described in detail. An experiment was carried out to evaluate the perceived matching between the gesture and the music performance that it produced, using two distinct mappings between gestures and performance. The results show that the application produces consistent performances, and that the mapping based on data collected from real gestures works better than one defined a priori by the authors.
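The accelerometer-to-performance mapping described above can be sketched generically: more energetic gestures drive faster tempo, higher sound level, and more staccato articulation. The function name, scaling constants, and energy normalisation below are illustrative assumptions, not MoodifierLive's actual mapping.

```python
import math

def map_gesture_to_performance(accel, base_tempo=100.0, base_level=0.5):
    """Illustrative (hypothetical) mapping from a list of (ax, ay, az)
    accelerometer samples to performance parameters."""
    # Gesture energy: RMS of the acceleration vectors, clipped to [0, 1]
    # using an assumed 20 m/s^2 ceiling for a vigorous gesture.
    rms = math.sqrt(sum(ax*ax + ay*ay + az*az for ax, ay, az in accel) / len(accel))
    energy = min(rms / 20.0, 1.0)
    tempo = base_tempo * (0.8 + 0.6 * energy)   # 80%..140% of base tempo
    level = base_level * (0.5 + energy)         # quieter .. louder
    articulation = 1.0 - 0.5 * energy           # 1.0 legato -> 0.5 staccato
    return tempo, level, articulation
```

The paper's finding that a mapping fitted to recorded gestures outperforms one defined a priori suggests the constants in such a sketch would in practice be learned from gesture data rather than hand-picked.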


Archive | 2013

Systems for Interactive Control of Computer Generated Music Performance

Marco Fabiani; Anders Friberg; Roberto Bresin

This chapter is a literature survey of systems for real-time interactive control of automatic expressive music performance. A classification is proposed based on two initial design choices: the music material to interact with (i.e., MIDI or audio recordings) and the type of control (i.e., direct control of the low-level parameters such as tempo, intensity, and instrument balance or mapping from high-level parameters, such as emotions, to low-level parameters). Their pros and cons are briefly discussed. Then, a generic approach to interactive control is presented, comprising four steps: control data collection and analysis, mapping from control data to performance parameters, modification of the music material, and audiovisual feedback synthesis. Several systems are then described, focusing on different technical and expressive aspects. For many of the surveyed systems, a formal evaluation is missing. Possible methods for the evaluation of such systems are finally discussed.


Journal of the Acoustical Society of America | 2014

Using listener-based perceptual features as intermediate representations in music information retrieval

Anders Friberg; Erwin Schoonderwaldt; Anton Hedblad; Marco Fabiani; Anders Elowsson


New Interfaces for Musical Expression | 2011

MoodifierLive: Interactive and Collaborative Expressive Music Performance on Mobile Devices.

Marco Fabiani; Gaël Dubus; Roberto Bresin


ISon 2010, 3rd Interactive Sonification Workshop, Stockholm, Sweden, April 7, 2010 | 2010

Interactive sonification of emotionally expressive gestures by means of music performance

Marco Fabiani; Gaël Dubus; Roberto Bresin

Collaboration


Dive into Marco Fabiani's collaborations.

Top Co-Authors

Roberto Bresin | Royal Institute of Technology
Anders Friberg | Royal Institute of Technology
Gaël Dubus | Royal Institute of Technology
Anders Elowsson | Royal Institute of Technology
Anton Hedblad | Royal Institute of Technology
Erwin Schoonderwaldt | Royal Institute of Technology