Sylvain Le Beux
Centre national de la recherche scientifique
Publication
Featured research published by Sylvain Le Beux.
Journal on Multimodal User Interfaces | 2007
Nicolas d’Alessandro; Pascale Woodruff; Yohann Fabre; Thierry Dutoit; Sylvain Le Beux; Boris Doval; Christophe d’Alessandro
In this paper, we describe a full computer-based musical instrument allowing real-time synthesis of expressive singing voice. The expression results from the continuous action of an interpreter through a gestural control interface. In this context, expressive features of the voice are discussed. New real-time implementations of a spectral model of the glottal flow (CALM) are described. These interactive modules are then used to identify and quantify voice quality dimensions. Experiments are conducted in order to develop a first framework for voice quality control. The representation of the vocal tract and the control of several vocal tract movements are explained, and a solution is proposed and integrated. Finally, some typical controllers are connected to the system and expressivity is evaluated.
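The abstract above mentions a spectral model of the glottal flow (CALM). As a rough, illustrative sketch only (not the paper's implementation), the general idea can be shown as an anti-causal second-order resonance for the glottal formant cascaded with a causal first-order low-pass for spectral tilt, excited by an impulse train; all parameter names and values below are assumptions for illustration:

```python
import math

def glottal_source(f0=110.0, fg=200.0, bw=100.0, tilt_pole=0.9,
                   sr=16000, n=1600):
    """Toy CALM-style glottal source sketch (illustrative parameters)."""
    # impulse train at the fundamental frequency
    x = [0.0] * n
    period = int(sr / f0)
    for i in range(0, n, period):
        x[i] = 1.0
    # second-order resonator coefficients (applied anti-causally below)
    r = math.exp(-math.pi * bw / sr)
    a1 = 2.0 * r * math.cos(2.0 * math.pi * fg / sr)
    a2 = -r * r
    y = [0.0] * n
    for i in range(n - 1, -1, -1):       # anti-causal: run backwards in time
        y[i] = x[i]
        if i + 1 < n:
            y[i] += a1 * y[i + 1]
        if i + 2 < n:
            y[i] += a2 * y[i + 2]
    # causal one-pole low-pass modeling spectral tilt
    out = [0.0] * n
    for i in range(n):
        prev = out[i - 1] if i else 0.0
        out[i] = (1.0 - tilt_pole) * y[i] + tilt_pole * prev
    return out
```

Because both filter stages are recursive in pure Python, a real-time version would use vectorized or compiled filtering; the sketch only conveys the causal/anti-causal decomposition.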
Journal of the Acoustical Society of America | 2014
Christophe d'Alessandro; Lionel Feugère; Sylvain Le Beux; Olivier Perrotin; Albert Rilliard
Cantor Digitalis, a real-time formant synthesizer controlled by a graphic tablet and a stylus, is used for assessment of melodic precision and accuracy in singing synthesis. Melodic accuracy and precision are measured in three experiments for groups of 20 and 28 subjects. The task of the subjects is to sing musical intervals and short melodies, at various tempi, using chironomy (hand-controlled singing), mute chironomy (without audio feedback), and their own voices. The results show the high accuracy and precision obtained by all the subjects for chironomic control of singing synthesis. Some subjects performed significantly better in chironomic singing compared to natural singing, although other subjects showed comparable proficiency. For the chironomic condition, mean note accuracy is less than 12 cents and mean interval accuracy is less than 25 cents for all the subjects. Comparing chironomy and mute chironomy shows that the skills used for writing and drawing are used for chironomic singing, but that audio feedback helps in interval accuracy. Analysis of blind chironomy (without visual reference) indicates that visual feedback helps greatly in both note and interval accuracy and precision. This study demonstrates the capabilities of chironomy as a precise and accurate means of controlling singing synthesis.
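The accuracies above are reported in cents, the standard logarithmic pitch unit (1200 cents per octave). A minimal sketch of the standard conversion, independent of the paper's analysis pipeline:

```python
import math

def cents(f_measured, f_target):
    """Deviation of f_measured from f_target in cents (1200 cents/octave)."""
    return 1200.0 * math.log2(f_measured / f_target)

# A note sung at 442 Hz against a 440 Hz target is about 7.85 cents sharp
print(round(cents(442.0, 440.0), 2))  # → 7.85
```

On this scale, the reported mean note accuracy under 12 cents is roughly a tenth of a semitone (100 cents).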
Smart Graphics | 2009
Christian Jacquemin; Rami Ajaj; Sylvain Le Beux; Christophe d'Alessandro; Markus Noisternig; Brian F. G. Katz; Bertrand Planes
The Organ and Augmented Reality (ORA) project has been presented to public audiences at two immersive concerts, with both visual and audio augmentations of an historic church organ. On the visual side, the organ pipes displayed a spectral analysis of the music using visuals inspired by LED-bar VU-meters. On the audio side, the audience was immersed in a periphonic sound field, acoustically placing listeners inside the instrument. The architecture of the graphical side of the installation is made of acoustic analysis and calibration, mapping from sound levels to animation, visual calibration, real-time multi-layer graphical composition and animation. It opens new perspectives to musical instrument augmentation where the purpose is to make the instrument more legible while offering the audience enhanced artistic content.
International Journal of Creative Interfaces and Computer Graphics | 2010
Christian Jacquemin; Rami Ajaj; Sylvain Le Beux; Christophe d'Alessandro; Markus Noisternig; Brian F. G. Katz; Bertrand Planes
This paper discusses the Organ Augmented Reality ORA project, which considers an audio and visual augmentation of an historical church organ to enhance the understanding and perception of the instrument through intuitive and familiar mappings and outputs. ORA has been presented to public audiences at two immersive concerts. The visual part of the installation was based on a spectral analysis of the music. The visuals were projections of LED-bar VU-meters on the organ pipes. The audio part was an immersive periphonic sound field, created from the live capture of the organ sounds, so that the listeners had the impression of being inside the augmented instrument. The graphical architecture of the installation is based on acoustic analysis, mapping from sound levels to synchronous graphics through visual calibration, real-time multi-layer graphical composition and animation. The ORA project is a new approach to musical instrument augmentation that combines enhanced instrument legibility and enhanced artistic content.
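The two ORA abstracts above describe a mapping from sound levels to LED-bar VU-meter visuals projected on the organ pipes. As a hedged sketch of that kind of mapping only (the function, parameter names, and dB range below are assumptions, not the project's actual calibration), an amplitude can be converted to a number of lit segments like this:

```python
import math

def level_to_segments(rms, n_segments=10, db_floor=-60.0):
    """Map an RMS amplitude in (0, 1] to a count of lit VU-meter segments.

    Levels at or below db_floor light nothing; full scale (0 dBFS)
    lights all segments.
    """
    if rms <= 0.0:
        return 0
    db = 20.0 * math.log10(rms)          # amplitude -> dBFS
    frac = (db - db_floor) / -db_floor   # normalize [db_floor, 0] to [0, 1]
    frac = min(max(frac, 0.0), 1.0)
    return round(frac * n_segments)

print(level_to_segments(1.0))    # full scale → 10
print(level_to_segments(0.001))  # -60 dBFS → 0
```

A per-pipe version would run this on each analysis band, which matches the spectral-analysis-driven animation the abstracts describe.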
Journal of the Acoustical Society of America | 2011
Christophe d’Alessandro; Albert Rilliard; Sylvain Le Beux
International Computer Music Conference | 2009
Christophe d’Alessandro; Markus Noisternig; Sylvain Le Beux; Lorenzo Picinali; Brian F. G. Katz; Christian Jacquemin; Rami Ajaj; Bertrand Planes; Nicolas Sturmel; Nathalie Delprat
Conference of the International Speech Communication Association | 2011
Sylvain Le Beux; Lionel Feugère; Christophe d'Alessandro
Journal of The Audio Engineering Society | 2010
Sylvain Le Beux; Boris Doval; Christophe d'Alessandro
ISCA Speech Synthesis Workshop (SSW 2007) | 2007
Sylvain Le Beux; Albert Rilliard; Christophe d'Alessandro
Archive | 2011
Lionel Feugère; Sylvain Le Beux; Christophe d'Alessandro
Cantor Digitalis