Jules Françoise
IRCAM
Publications
Featured research published by Jules Françoise.
Computer Music Journal | 2014
Baptiste Caramiaux; Jules Françoise; Norbert Schnell; Frédéric Bevilacqua
Gesture-to-sound mapping is generally defined as the association between gestural and sound parameters. This article describes an approach that brings forward the perception–action loop as a fundamental design principle for gesture–sound mapping in digital musical instruments. Our approach considers the process of listening as the foundation—and the first step—in the design of action–sound relationships. In this design process, the relationship between action and sound is derived from actions that can be perceived in the sound. Building on previous work on listening modes and gestural descriptions, we propose to distinguish between three mapping strategies: instantaneous, temporal, and metaphorical. Our approach makes use of machine-learning techniques for building prototypes, from digital musical instruments to interactive installations. Four different examples of scenarios and prototypes are described and discussed.
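The simplest of the three strategies, instantaneous mapping, associates each incoming frame of gesture features directly with sound parameters, with no temporal modeling. A minimal sketch of the idea, assuming hypothetical feature names and ranges (hand speed and normalized vertical position) that are illustrative only and not taken from the article:

```python
# Hypothetical sketch of an "instantaneous" gesture-to-sound mapping:
# every gesture frame is mapped directly to sound parameters, frame by
# frame, with no memory of past frames. Names and ranges are assumptions.

def instantaneous_mapping(speed, position_y, speed_max=2.0):
    """Map one gesture frame to (amplitude, pitch_hz).

    speed      -- hand speed in m/s; scales loudness, clamped at speed_max
    position_y -- normalized vertical position in [0, 1]; scales pitch
    """
    amplitude = min(speed / speed_max, 1.0)          # clamp to [0, 1]
    pitch_hz = 220.0 * (2.0 ** (position_y * 2.0))   # two octaves above A3
    return amplitude, pitch_hz

# A slow gesture low in space yields a quiet, low tone; a fast gesture
# high in space yields a loud tone two octaves up.
print(instantaneous_mapping(1.0, 0.0))  # (0.5, 220.0)
print(instantaneous_mapping(4.0, 1.0))  # (1.0, 880.0)
```

Temporal and metaphorical mappings, by contrast, depend on the evolution of the gesture over time, which is where the machine-learning models mentioned above come into play.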
Journal of the Acoustical Society of America | 2015
Hugo Scurto; Guillaume Lemaitre; Jules Françoise; Frédéric Voisin; Frédéric Bevilacqua; Patrick Susini
Communicating about sounds is a difficult task without a technical language, and naive speakers often rely on different kinds of non-linguistic vocalizations and body gestures (Lemaitre et al., 2014). Previous work has independently studied how effectively people describe sounds with gestures or vocalizations (Caramiaux, 2014; Lemaitre and Rocchesso, 2014). However, speech communication studies suggest a more intimate link between the two processes (Kendon, 2004). Our study thus focused on the combination of manual gestures and non-speech vocalizations in the communication of sounds. We first collected a large database of vocal and gestural imitations of a variety of sounds (audio, video, and motion sensor data). Qualitative analysis of gestural strategies resulted in three hypotheses: (1) voice is more effective than gesture for communicating rhythmic information, (2) textural aspects are communicated with shaky gestures, and (3) concurrent streams of sound events can be split between gestures and voice. ...
PLOS ONE | 2017
Guillaume Lemaitre; Hugo Scurto; Jules Françoise; Frédéric Bevilacqua; Olivier Houix; Patrick Susini
Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and embodied by the participants' gestures.
New Interfaces for Musical Expression | 2014
Jules Françoise; Norbert Schnell; Riccardo Borghesi; Frédéric Bevilacqua
xCoAx 2015. Computation, Communication, Aesthetics & X | 2015
Olivier Houix; Frédéric Bevilacqua; Nicolas Misdariis; Patrick Susini; Emmanuel Fléty; Jules Françoise; Julien Groboz
Archive | 2017
Frédéric Bevilacqua; Norbert Schnell; Jules Françoise; Eric O. Boyer; Diemo Schwarz; Baptiste Caramiaux
Human Factors in Computing Systems | 2016
Jules Françoise; Olivier Chapuis; Sylvain Hanneton; Frédéric Bevilacqua
New Interfaces for Musical Expression | 2017
Hugo Scurto; Frédéric Bevilacqua; Jules Françoise
Archive | 2017
Baptiste Caramiaux; Jules Françoise; Frédéric Bevilacqua
MOCO '14 International Workshop on Movement and Computing | 2015
Sarah Fdili Alaoui; Philippe Pasquier; Thecla Schiphorst; Jules Françoise; Frédéric Bevilacqua