Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gaël Dubus is active.

Publication


Featured research published by Gaël Dubus.


PLOS ONE | 2013

A systematic review of mapping strategies for the sonification of physical quantities.

Gaël Dubus; Roberto Bresin

The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
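
As a concrete illustration of the review's central finding, the sketch below maps a series of data values onto pitch, by far the most used auditory dimension according to the study. This is not code from the paper; the data, frequency range, and tone duration are illustrative assumptions.

```python
# Minimal sketch of a pitch-based sonification mapping: a physical quantity
# (here, hypothetical speed samples) is scaled linearly onto a frequency
# range and rendered as a sequence of sine tones. All names and parameter
# values are illustrative, not taken from the paper.
import numpy as np

def sonify_to_pitch(values, f_min=220.0, f_max=880.0, tone_dur=0.2, sr=44100):
    """Map each data value to a pitch between f_min and f_max (Hz)."""
    values = np.asarray(values, dtype=float)
    # Normalize data to [0, 1]; guard against a constant signal.
    span = values.max() - values.min()
    norm = (values - values.min()) / span if span > 0 else np.zeros_like(values)
    freqs = f_min + norm * (f_max - f_min)
    t = np.arange(int(tone_dur * sr)) / sr
    # One short sine tone per data sample, concatenated into a single signal.
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# Example: rising speed values produce a rising pitch contour.
audio = sonify_to_pitch([0.0, 1.2, 2.5, 3.1, 4.8])
```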


Journal on Multimodal User Interfaces | 2012

Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices

Giovanna Varni; Gaël Dubus; Sami Oksanen; Gualtiero Volpe; Marco Fabiani; Roberto Bresin; Jari Kleimola; Vesa Välimäki; Antonio Camurri

This paper evaluates three different interactive sonifications of dyadic coordinated human rhythmic activity. An index of phase synchronisation of gestures was chosen as coordination metric. The sonifications are implemented as three prototype applications exploiting mobile devices: Sync’n’Moog, Sync’n’Move, and Sync’n’Mood. Sync’n’Moog sonifies the phase synchronisation index by acting directly on the audio signal and applying a nonlinear time-varying filtering technique. Sync’n’Move intervenes on the multi-track music content by making the single instruments emerge and hide. Sync’n’Mood manipulates the affective features of the music performance. The three sonifications were also tested against a condition without sonification.
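
The paper does not spell out its estimator here; a common choice for an index of phase synchronisation between two rhythmic signals is the mean phase coherence computed from Hilbert-transform phases, sketched below under that assumption.

```python
# Hedged sketch of a phase synchronisation index for two gesture signals,
# using mean phase coherence: 0 means no synchronisation, 1 means perfect
# phase locking. The paper's exact estimator may differ.
import numpy as np
from scipy.signal import hilbert

def phase_sync_index(x, y):
    """Mean phase coherence between two 1-D rhythmic signals."""
    phase_x = np.angle(hilbert(x))  # instantaneous phase of x
    phase_y = np.angle(hilbert(y))  # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Example: two noisy oscillations with a constant phase offset.
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 1.0 * t + 0.5) + 0.1 * np.random.randn(t.size)
print(phase_sync_index(x, y))  # close to 1 for phase-locked gestures
```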


Journal on Multimodal User Interfaces | 2012

Evaluation of four models for the sonification of elite rowing

Gaël Dubus

Many aspects of sonification represent potential benefits for the practice of sports. Taking advantage of the characteristics of auditory perception, interactive sonification offers promising opportunities for enhancing the training of athletes. The efficient learning and memorizing abilities pertaining to the sense of hearing, together with the strong coupling between auditory and sensorimotor systems, make the use of sound a natural field of investigation in quest of efficiency optimization in individual sports at a high level. This study presents an application of sonification to elite rowing, introducing and evaluating four sonification models. The rapid development of mobile technology capable of efficiently handling numerical information offers new possibilities for interactive auditory display. Thus, these models have been developed under the specific constraints of a mobile platform, from data acquisition to the generation of a meaningful sound feedback. In order to evaluate the models, two listening experiments have then been carried out with elite rowers. Results show a good ability of the participants to efficiently extract basic characteristics of the sonified data, even in a non-interactive context. Qualitative assessment of the models highlights the need for a balance between function and aesthetics in interactive sonification design. Consequently, particular attention on usability is required for future displays to become widespread.
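
As a hedged sketch of the kind of pipeline described, from data acquisition to sound feedback on a mobile platform, the example below estimates stroke rate from an acceleration signal and exposes it as a parameter that could drive auditory feedback. The peak-picking stroke detector and all parameter values are assumptions, not one of the four evaluated models.

```python
# Illustrative mobile-style pipeline: from raw boat-acceleration samples to
# a parameter driving sound feedback. Stroke detection via peak picking and
# the rate-to-tempo mapping are assumptions for this sketch.
import numpy as np
from scipy.signal import find_peaks

def stroke_rate(accel, sr=50.0):
    """Estimate strokes per minute from forward acceleration (sr in Hz)."""
    # Each rowing stroke shows up as a prominent acceleration peak.
    peaks, _ = find_peaks(accel, distance=int(sr))  # at least 1 s apart
    if len(peaks) < 2:
        return 0.0
    mean_period = np.mean(np.diff(peaks)) / sr  # seconds per stroke
    return 60.0 / mean_period                   # strokes per minute

# Example: a 0.5 Hz oscillation corresponds to roughly 30 strokes/minute,
# a value that could set the tempo of the auditory feedback.
accel = np.sin(2 * np.pi * 0.5 * np.arange(0, 20, 1 / 50.0))
print(stroke_rate(accel))
```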


User Centric Media | 2009

User-centric context-aware mobile applications for embodied music listening

Antonio Camurri; Gualtiero Volpe; Hugues Vinet; Roberto Bresin; Marco Fabiani; Gaël Dubus; Esteban Maestre; Jordi Llop; Jari Kleimola; Sami Oksanen; Vesa Välimäki; Jarno Seppänen

This paper surveys a collection of sample applications for networked user-centric context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and have been presented at Agora Festival (IRCAM, Paris, France) in June 2009. All of them address in different ways the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real-time on the music content by means of their movements and gestures as captured by mobile devices. On the occasion of the Agora Festival, the applications were also evaluated by both expert and non-expert users.


Journal on Multimodal User Interfaces | 2012

Interactive sonification of expressive hand gestures on a handheld device

Marco Fabiani; Roberto Bresin; Gaël Dubus

We present here a mobile phone application called MoodifierLive which aims at using expressive music performances for the sonification of expressive gestures through the mapping of the phone’s accelerometer data to the performance parameters (i.e. tempo, sound level, and articulation). The application, and in particular the sonification principle, is described in detail. An experiment was carried out to evaluate the perceived matching between the gesture and the music performance that it produced, using two distinct mappings between gestures and performance. The results show that the application produces consistent performances, and that the mapping based on data collected from real gestures works better than one defined a priori by the authors.
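
A minimal sketch of the stated sonification principle follows: accelerometer data is reduced to simple gesture features and mapped to tempo, sound level, and articulation. The paper's better-performing mapping was derived from recorded gestures; the linear rules below are purely illustrative stand-ins.

```python
# Hedged sketch of MoodifierLive's sonification principle: features of the
# phone's accelerometer stream are mapped to the three performance
# parameters named in the paper (tempo, sound level, articulation). The
# feature definitions, thresholds, and ranges below are assumptions.
import numpy as np

def gesture_to_performance(ax, ay, az):
    """Map a window of 3-axis accelerometer samples to performance params."""
    mag = np.sqrt(np.asarray(ax)**2 + np.asarray(ay)**2 + np.asarray(az)**2)
    energy = float(np.mean(mag))                      # overall gesture vigor
    jerkiness = float(np.mean(np.abs(np.diff(mag))))  # abruptness of motion
    return {
        "tempo_bpm": 60 + 60 * min(energy / 20.0, 1.0),        # faster when vigorous
        "sound_level_db": -30 + 24 * min(energy / 20.0, 1.0),  # louder when vigorous
        "articulation": "staccato" if jerkiness > 1.0 else "legato",
    }

# Example: a calm gesture window yields a slow, soft, legato performance.
print(gesture_to_performance([0.1, 0.2, 0.1], [0.0, 0.1, 0.0], [9.8, 9.8, 9.8]))
```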


Automotive User Interfaces and Interactive Vehicular Applications | 2018

DriverSense: a hyper-realistic testbed for the design and evaluation of novel user interfaces in self-driving vehicles

Pietro Lungaro; Konrad Tollmar; Firdose Saeik; Conrado Mateu Gisbert; Gaël Dubus

This paper presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects associated with the driving experience. However, with the increasing deployment of self-driving and remotely controlled vehicles, it is expected that the digital components of the driving experience will become more and more relevant, because users will be less engaged in the actual driving task and more involved with oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future onboard interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected case studies, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio solutions, is presented.


Automotive User Interfaces and Interactive Vehicular Applications | 2018

Demonstration of a low-cost hyper-realistic testbed for designing future onboard experiences

Pietro Lungaro; Konrad Tollmar; Firdose Saeik; Conrado Mateu Gisbert; Gaël Dubus

This demo presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing vehicular testbeds and simulators are designed to reproduce with high fidelity the ergonomic aspects associated with the driving experience. However, with the increasing deployment of self-driving and remotely controlled or monitored vehicles, it is expected that the digital components of the driving experience will become more relevant. That is because users will be less engaged in the actual driving task and more involved with oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of self-driving interfaces has been implemented, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio.


International Conference on Auditory Display | 2011

Sonification of Physical Quantities Throughout History: A Meta-Study of Previous Mapping Strategies

Gaël Dubus; Roberto Bresin


New Interfaces for Musical Expression | 2011

MoodifierLive: Interactive and Collaborative Expressive Music Performance on Mobile Devices.

Marco Fabiani; Gaël Dubus; Roberto Bresin


ISon 2010, 3rd Interactive Sonification Workshop, Stockholm, Sweden | 2010

Interactive sonification of emotionally expressive gestures by means of music performance

Marco Fabiani; Gaël Dubus; Roberto Bresin

Collaboration


Dive into Gaël Dubus's collaborations.

Top Co-Authors

Roberto Bresin (Royal Institute of Technology)
Marco Fabiani (Royal Institute of Technology)
Conrado Mateu Gisbert (Royal Institute of Technology)
Firdose Saeik (Royal Institute of Technology)
Konrad Tollmar (Royal Institute of Technology)
Pietro Lungaro (Royal Institute of Technology)