Publications


Featured research published by Thierry Voinier.


Journal of the Acoustical Society of America | 2005

Real-time synthesis of clarinet-like instruments using digital impedance models

Philippe Guillemain; Jean Kergomard; Thierry Voinier

A real-time synthesis model of wind instrument sounds, based upon a classical physical model, is presented. The physical model describes the nonlinear coupling between the resonator and the exciter through the Bernoulli equation. While most synthesis methods use wave variables, and their sampled equivalents, to describe the resonator of the instrument, the synthesis model presented here uses sampled versions of the physical variables throughout the synthesis process, and hence constitutes a straightforward digital transposition of each part of the physical model. Moreover, the resolution scheme of the problem (i.e., the synthesis algorithm) is explicit, and all the parameters of the algorithm are expressed analytically as functions of the physical and control parameters.
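A heavily simplified sketch of such an explicit scheme, with a Bernoulli-type nonlinear reed characteristic coupled to a single resonance, might look as follows. This is not the paper's digital impedance model: the one-mode resonator, the reed characteristic, and every coefficient are illustrative assumptions.

```python
import math

def reed_flow(delta_p, opening=0.4):
    """Bernoulli-type nonlinear characteristic: flow through the reed
    channel as a function of the pressure difference delta_p.
    The opening value is an illustrative assumption."""
    if opening <= 0.0:  # reed completely closed
        return 0.0
    return opening * math.copysign(math.sqrt(abs(delta_p)), delta_p)

def synthesize(n_samples, sr=44100, f_res=150.0, mouth_pressure=0.35):
    """Explicit sample-by-sample loop: the resonator is modeled as a single
    damped oscillator driven by the nonlinear flow (a toy stand-in for the
    full digital impedance model)."""
    w = 2.0 * math.pi * f_res / sr
    damping = 0.995  # pole radius of the toy resonator
    p, p_prev = 0.0, 0.0
    out = []
    for _ in range(n_samples):
        u = reed_flow(mouth_pressure - p)  # nonlinear coupling
        # one-mode resonator updated with the flow as its source term
        p_next = 2.0 * damping * math.cos(w) * p - damping ** 2 * p_prev + u * w
        p_prev, p = p, p_next
        out.append(p)
    return out

samples = synthesize(2000)
```

The point of the sketch is the structure: each sample is computed explicitly from physical-style variables (pressure, flow) rather than from wave variables.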


International Conference on Auditory Display | 2009

Imagine the sounds: an intuitive control of an impact sound synthesizer

Mitsuko Aramaki; Charles Gondre; Richard Kronland-Martinet; Thierry Voinier; Sølvi Ystad

In this paper we present a synthesizer, developed for musical and Virtual Reality purposes, that offers intuitive control of impact sounds. A three-layer control strategy is proposed for this purpose: the top layer gives access to control of the sound source through verbal descriptions, the middle layer to control of perceptually relevant sound descriptors, while the bottom layer is directly linked to the parameters of the additive synthesis model. The mapping strategies between the parameters of the different layers are described. The synthesizer has been implemented in Max/MSP, offering the possibility of manipulating intrinsic characteristics of sounds in real time through the control of a few parameters.
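As a rough illustration of the three-layer idea, one could organize the mapping as nested lookups and derivations. The verbal labels, descriptor names, and numeric values below are invented for the sketch and are not the synthesizer's actual tables.

```python
# Top layer: verbal description of the sound source -> perceptual descriptors.
# All labels and values are illustrative assumptions.
VERBAL_TO_PERCEPTUAL = {
    "wood":  {"damping": 0.8, "brightness": 0.3, "pitch_hz": 220.0},
    "metal": {"damping": 0.1, "brightness": 0.9, "pitch_hz": 440.0},
}

def perceptual_to_synthesis(desc, n_partials=8):
    """Middle layer -> bottom layer: derive additive-synthesis partials
    (frequency, amplitude, decay) from perceptual descriptors."""
    partials = []
    for k in range(1, n_partials + 1):
        freq = desc["pitch_hz"] * k
        amp = desc["brightness"] ** (k - 1)   # brighter sound = slower rolloff
        decay = desc["damping"] * k           # high partials die out faster
        partials.append({"freq": freq, "amp": amp, "decay": decay})
    return partials

metal = perceptual_to_synthesis(VERBAL_TO_PERCEPTUAL["metal"])
```

The design payoff is that a user manipulates only the top-layer label (or a few middle-layer descriptors), while the bottom layer expands this into many synthesis parameters.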


Computer Music Journal | 2001

A Virtually Real Flute

Sølvi Ystad; Thierry Voinier

Since the first keyboard-controlled digital synthesizers became available, several new synthesis interfaces have been developed (e.g., Mathews 1991a, 1991b; Cook 1992; De Laubier 1998). As most of these digital instruments differ considerably from traditional instruments, musicians must learn new techniques to play them (Kronland-Martinet, Voinier, and Guillemain 1997). Here, we propose overcoming this difficulty by designing a digital flute using a traditional instrument form factor to control a synthesis model. The digital flute was intended to extend the technical scope of the traditional flute, but we also wanted to be able to use the instrument in the traditional way. To connect the instrument to a computer, we added sensors to its key pads and placed a microphone inside the mouthpiece. The synthesis model to be controlled by this interface had to take the physical characteristics of the instrument into account. A physical model was therefore developed to simulate the propagation of waves inside the flute. The system of excitation involved in flute-playing is highly complex from a physical point of view. To construct a real-time model with parameters that can be measured while the instrument is being played, we used a signal model to simulate the source excitation. By injecting this model into the physical one, we constructed a hybrid model which accounts for both the physical and perceptual aspects of the sound produced.


Computer Music Journal | 2006

A Percussive Sound Synthesizer Based on Physical and Perceptual Attributes

Mitsuko Aramaki; Richard Kronland-Martinet; Thierry Voinier; Sølvi Ystad

Synthesis of impact sounds is far from a trivial task owing to the high density of modes generally contained in such signals. Several authors have addressed this problem and proposed different approaches to model such sounds. The majority of these models are based on the physics of vibrating structures, as with for instance modal synthesis (Adrien 1991; Pai et al. 2001; van den Doel, Kry, and Pai 2001; Cook 2002; Rocchesso, Bresin, and Fernström 2003). Nevertheless, modal synthesis is not always suitable for complex sounds, such as those with a high density of mixed modes. Other approaches have also been proposed using algorithmic techniques based on digital signal processing. Cook (2002), for example, proposed a granular-synthesis approach based on a wavelet decomposition of sounds.

The sound-synthesis model proposed in this article takes into account both physical and perceptual aspects related to sounds. Many subjective tests have shown the existence of perceptual clues allowing the source of the impact sound (its material, size, etc.) to be identified merely by listening (Klatzky, Pai, and Krotkov 2000; Tucker and Brown 2002). Moreover, these tests have brought to the fore some correlations between physical attributes (the nature of the material and dimensions of the structure) and perceptual attributes (perceived material and perceived dimensions). Hence, it has been shown that the perception of the material mainly correlates with the damping coefficient of the spectral components contained in the sound. This damping is frequency-dependent, and high-frequency modes are generally more heavily damped than low-frequency modes. Actually, the dissipation of vibrating energy owing to the coupling between the structure and the air increases with frequency (see, for example, Caracciolo and Valette 1995). To take into account this fundamental sound behavior from a synthesis point of view, a time-varying filtering technique has been chosen.

It is well known that the size and shape of an object's attributes are mainly perceived by the pitch of the generated sound and its spectral richness. The perception of the pitch primarily correlates with the vibrating modes (Carello, Anderson, and Kunkler-Peck 1998). For complex structures, the modal density generally increases with the frequency, so that high-frequency modes overlap and become indiscernible. This phenomenon is well known and is described for example in previous works on room acoustics (Kuttruff 1991). Under such a condition, the human ear determines the pitch of the sound from emergent spectral components with consistent frequency ratios. When a complex percussive sound contains several harmonic or inharmonic series (i.e., spectral components that are not exact multiples of the fundamental frequency), different pitches can generally be heard. The dominant pitch then mainly depends on the frequencies and the amplitudes of the spectral components belonging to a so-called dominant frequency region (Terhardt, Stoll, and Seewann 1982) in which the ear is pitch sensitive. (We will discuss this further in the Tuning section of this article.)

With all these aspects in mind, and wishing to propose an easy and intuitive control of the model, we have divided it into three parts represented by an excitation element, a material element, and an object element. The large number of parameters available through such a model necessitates a control strategy. This strategy (generally called a mapping) is of great importance for the expressive capabilities of the instrument, and it inevitably influences the way it can be used in a musical context (Gobin et al. 2004).
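The frequency-dependent damping described above can be illustrated with a minimal additive rendering of damped sinusoids, s(t) = sum_k a_k sin(2*pi*f_k*t) exp(-alpha(f_k)*t). The mode frequencies and the damping law alpha(f) below are illustrative assumptions, not values from the article.

```python
import math

def alpha(freq, a0=2.0, a1=0.01):
    """Illustrative damping law: decay rate grows with frequency, so
    high-frequency modes die out faster (as described above)."""
    return a0 + a1 * freq

def impact(modes, duration=0.5, sr=8000):
    """Sum of exponentially damped sinusoids; modes is a list of
    (frequency_hz, amplitude) pairs."""
    n = int(duration * sr)
    out = []
    for i in range(n):
        t = i / sr
        out.append(sum(a * math.sin(2.0 * math.pi * f * t) * math.exp(-alpha(f) * t)
                       for f, a in modes))
    return out

# Three assumed modes of an impact-like sound
sound = impact([(200.0, 1.0), (523.0, 0.7), (1480.0, 0.5)])
```

Listening-wise, increasing a1 makes the result darker and more "wood-like", since the upper modes vanish almost immediately; decreasing it leaves a longer, brighter, more "metallic" ring.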


Eurasip Journal on Audio, Speech, and Music Processing | 2008

Real-Time Perceptual Simulation of Moving Sources: Application to the Leslie Cabinet and 3D Sound Immersion

Richard Kronland-Martinet; Thierry Voinier

Perception of moving sound sources involves different brain processes from those mediating the localization of static sound events. In view of these specificities, a preprocessing model was designed, based on the main perceptual cues involved in the auditory perception of moving sound sources, such as intensity, timbre, reverberation, and frequency-shift processes. This model is the first step toward a more general moving-sound-source system, including a system of spatialization. Two applications of this model are presented: the simulation of a system involving rotating sources, the Leslie Cabinet; and a 3D sound immersion installation based on the sonification of cosmic particles, the Cosmophone.
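The frequency-shift cue for a rotating source can be illustrated with a toy Doppler computation for a source moving on a circle, roughly in the spirit of a Leslie-style rotating horn. The radius, rotation rate, and listener geometry are illustrative assumptions, not parameters from the paper.

```python
import math

C = 343.0  # speed of sound in air, m/s

def doppler_freq(f0, t, radius=0.2, rot_hz=6.0, listener_x=2.0):
    """Instantaneous perceived frequency for a listener at (listener_x, 0),
    source rotating on a circle of the given radius in the xy plane."""
    w = 2.0 * math.pi * rot_hz
    # source position and velocity on the circle
    sx, sy = radius * math.cos(w * t), radius * math.sin(w * t)
    vx, vy = -radius * w * math.sin(w * t), radius * w * math.cos(w * t)
    # radial velocity: component of the source velocity toward the listener
    dx, dy = listener_x - sx, -sy
    dist = math.hypot(dx, dy)
    v_radial = (vx * dx + vy * dy) / dist
    # approaching source (v_radial > 0) raises the perceived pitch
    return f0 * C / (C - v_radial)

# perceived frequency of a 1 kHz tone sampled over one second
f_shifted = [doppler_freq(1000.0, t / 100.0) for t in range(100)]
```

The periodic rise and fall of the perceived frequency as the source swings toward and away from the listener is exactly the vibrato-like cue a Leslie cabinet produces.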


Computer Music Modeling and Retrieval | 2003

Designing Musical Interfaces with Composition in Mind

Pascal Gobin; Richard Kronland-Martinet; Guy-André Lagesse; Thierry Voinier; Sølvi Ystad

This paper addresses three different strategies for mapping controller events to real-time sound synthesis. The design of the corresponding interfaces takes into account both the artistic goals and the expressive capabilities of these new instruments. Common to all these cases, as with traditional instruments, is the fact that their specific characteristics influence the music written for them. This means that composition already starts with the construction of the interface. In the first approach, synthesis models are piloted by completely new interfaces, leading to “sound sculpting machines”. An example of sound transformations using the Radio Baton illustrates this concept. The second approach consists in making interfaces adapted to gestures already acquired by performers. Two examples are treated in this case: the extension of a traditional instrument, and the design of interfaces for disabled performers. The third approach uses external events, such as natural phenomena, to influence a synthesis model. The Cosmophone, which associates sound events with the flux of cosmic rays, illustrates this concept.


AI & Society | 2012

Cosmic ray sonification: the COSMOPHONE

Richard Kronland-Martinet; Thierry Voinier; David Calvet; Claude Michel Vallee

The Cosmophone is an attempt to show the close connections existing between the infinitely small and the infinitely large in sensory terms by detecting and imaging the continuous flow of elementary particles (cosmic rays) originating from our entire galaxy.


Computer Music Modeling and Retrieval | 2005

Timbre variations as an attribute of naturalness in clarinet play

Snorre Farner; Richard Kronland-Martinet; Thierry Voinier; Sølvi Ystad

A digital clarinet played by a human and timed by a metronome was used to record two playing control parameters, the breath control and the reed displacement, over 20 repeated performances. The regular behaviour of the parameters was extracted by averaging, and the fluctuation was quantified by the standard deviation. It was concluded that the movement of the parameters seems to follow rules. When the fluctuations of the parameters were removed by averaging over the repetitions, the result sounded less expressive, although it still seemed to be played by a human. The variation in timbre during play, in particular within a note's duration, was observed and then fixed while the natural temporal envelope was kept. The result sounded unnatural, indicating that the variation of timbre is important for naturalness.
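The averaging and fluctuation analysis described above can be sketched on synthetic data: the mean curve captures the regular behaviour of a control parameter across repetitions, and the per-frame standard deviation quantifies the fluctuation. The breath-control "recordings" below are generated, not measured.

```python
import math
import random

random.seed(0)
frames = 50
# 20 repetitions of a breath-control curve: a shared upward ramp
# (the "rule-following" regular behaviour) plus random fluctuation
curves = [[0.5 + 0.4 * (i / frames) + random.gauss(0.0, 0.02) for i in range(frames)]
          for _ in range(20)]

# regular behaviour: frame-by-frame mean over the 20 repetitions
mean = [sum(c[i] for c in curves) / len(curves) for i in range(frames)]
# fluctuation: frame-by-frame standard deviation over the repetitions
std = [math.sqrt(sum((c[i] - mean[i]) ** 2 for c in curves) / len(curves))
       for i in range(frames)]
```

Resynthesizing from `mean` alone corresponds to the paper's "averaged" condition: the shared ramp survives, but the small per-performance deviations that carry expressiveness are gone.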


Journal of the Acoustical Society of America | 2004

Stiff piano string modeling: Computational comparison between finite differences and digital waveguide

Julien Bensa; Stefan Bilbao; Richard Kronland-Martinet; Thierry Voinier; Julius O. Smith

As is well known, digital waveguides offer a computationally efficient, and physically motivated, means of simulating wave propagation in strings. The method is based on sampling the traveling wave solution to the ideal wave equation and linearly filtering this solution to simulate dispersive effects due to stiffness and frequency‐dependent loss; such digital filters may terminate the waveguide or be embedded along its length. For strings of high stiffness, however, dispersion filters can be difficult to design and expensive to implement. It is shown how high‐quality time‐domain terminating filters may be derived from given frequency‐domain specifications which depend on the model parameters. Particular attention is paid to the problem of phase approximation, which, in the case of high stiffness, is strongly nonlinear. Finally, in the interest of determining the limits of applicability of digital waveguide techniques, we make a comparison with more conventional finite difference schemes, in terms of compu...
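A minimal digital-waveguide string in the Karplus-Strong family illustrates the general approach: a delay line holding the sampled traveling wave, terminated by a simple loss filter. The dispersion (stiffness) filters that are the paper's focus are omitted here; the two-point averaging filter and the loss factor are illustrative assumptions.

```python
import random

def waveguide_pluck(delay_len=100, n_samples=2000, seed=1):
    """Toy lossy waveguide: the delay line is initialized with a noise
    burst (the pluck) and each sample is filtered by a two-point average
    (a crude frequency-dependent loss) scaled by a flat loss factor."""
    random.seed(seed)
    line = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for i in range(n_samples):
        x = line[i % delay_len]
        nxt = line[(i + 1) % delay_len]
        # averaging attenuates high frequencies on every pass through the loop
        line[i % delay_len] = 0.5 * (x + nxt) * 0.996
        out.append(x)
    return out

tone = waveguide_pluck()
```

A stiff-string version would insert an allpass dispersion filter in this loop; as the abstract notes, designing that filter accurately for high stiffness is the hard part, which is where finite-difference schemes become a competitive alternative.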


Computer Music Modeling and Retrieval | 2003

Characterization of Musical Performance Using Physical Sound Synthesis Models

Philippe Guillemain; Thierry Voinier

Sound synthesis can be considered a tool for characterizing a musical performance together with the performer himself. Indeed, the association of real-time synthesis algorithms, based on an accurate description of the physical behavior of the instrument and its control parameters, with gesture-capture devices whose output parameters can be recorded simultaneously with the synthesized sound, can be considered a “spying chain” allowing the study of a musical performance. In this paper, we present a clarinet synthesis model whose parameters and controls are fully and explicitly related to the physics, and we use this model as a starting point to link the playing and the sound.

Collaboration


An overview of Thierry Voinier's collaborations.

Top Co-Authors

Sølvi Ystad
Centre national de la recherche scientifique

Philippe Guillemain
Centre national de la recherche scientifique

Jean Kergomard
Centre national de la recherche scientifique

Julien Bensa
Centre national de la recherche scientifique