Publication


Featured research published by Charles Verron.


IEEE Transactions on Audio, Speech, and Language Processing | 2010

A 3-D Immersive Synthesizer for Environmental Sounds

Charles Verron; Mitsuko Aramaki; Richard Kronland-Martinet; Grégory Pallone

Nowadays, interactive 3-D environments tend to include both synthesis and spatialization processes to increase the realism of virtual scenes. In typical systems, audio is generated in two stages: first, a monophonic sound is synthesized (generating its intrinsic timbre properties), and then it is spatialized (positioned in its environment). In this paper, we present the design of a 3-D immersive synthesizer dedicated to environmental sounds and intended for interactive virtual reality applications. The system is based on a physical categorization of environmental sounds (vibrating solids, liquids, aerodynamics). The synthesis engine has a novel architecture that combines an additive synthesis model and 3-D audio modules at the prime level of sound generation. An original approach exploiting the synthesis capabilities to simulate the spatial extension of sound sources is also presented. The subjective results, evaluated with a formal listening test, are discussed. Finally, new control strategies based on a global manipulation of the timbre and spatial attributes of sound sources are introduced.
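The one-stage architecture described in the abstract can be illustrated with a minimal sketch: spatial gains are applied per partial inside the additive-synthesis loop, rather than panning an already-mixed monophonic signal. This is not the authors' implementation; the function name, constant-power panning law, and stereo output are illustrative assumptions.

```python
import numpy as np

def spatialized_additive(freqs, amps, pans, dur=1.0, fs=44100):
    """Sketch of one-stage additive synthesis with per-partial panning.

    freqs, amps, pans: per-partial frequency (Hz), amplitude, and pan
    position in [0, 1] (0 = left, 1 = right).  Spatial gains are applied
    at the partial level, so timbre and position are generated together.
    """
    t = np.arange(int(dur * fs)) / fs
    out = np.zeros((len(t), 2))
    for f, a, p in zip(freqs, amps, pans):
        partial = a * np.sin(2 * np.pi * f * t)
        # Constant-power panning gains for this single partial.
        gl, gr = np.cos(p * np.pi / 2), np.sin(p * np.pi / 2)
        out[:, 0] += gl * partial
        out[:, 1] += gr * partial
    return out

# Three harmonics, each placed at a different stereo position.
sig = spatialized_additive([440, 880, 1320], [1.0, 0.5, 0.25], [0.2, 0.5, 0.8])
```

Extending the same idea to a multichannel 3-D panner only changes the per-partial gain vector, which is what lets the spatial extension of a source be controlled partial by partial.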


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Time–Frequency Synthesis of Noisy Sounds With Narrow Spectral Components

Damián Marelli; Mitsuko Aramaki; Richard Kronland-Martinet; Charles Verron

The inverse fast Fourier transform (FFT) method was proposed to alleviate the computational complexity of the additive sound synthesis method in real-time applications, and consists of synthesizing overlapping blocks of samples in the frequency domain. However, its application is limited by its inherent tradeoff between time and frequency resolution. In this paper, we propose an alternative to the inverse FFT method for synthesizing colored noise. The proposed approach uses subband signal processing to generate time-frequency noise with an autocorrelation function such that the noise obtained after converting it to the time domain has the desired power spectral density. We show that the inverse FFT method can be interpreted as a particular case of the proposed method, and therefore the latter offers some extra design flexibility. Exploiting this property, we present experimental results showing that the proposed method can offer a better tradeoff between time and frequency resolution, at the expense of some extra computations.
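For context, the baseline inverse-FFT noise method that this paper generalizes can be sketched as follows: each block of colored noise is built in the frequency domain by assigning random phases to a target magnitude spectrum, then overlap-added in the time domain. The function and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def ifft_noise(psd, n_blocks=64, nfft=512, hop=256):
    """Sketch of inverse-FFT colored-noise synthesis.

    psd: desired one-sided power spectral density, sampled at
    nfft//2 + 1 bins.  Each block gets independent random phases,
    is windowed, and is overlap-added at hop-size intervals.
    """
    rng = np.random.default_rng(0)
    win = np.hanning(nfft)
    out = np.zeros(hop * n_blocks + nfft)
    mag = np.sqrt(psd)  # magnitude spectrum from the target PSD
    for b in range(n_blocks):
        phases = np.exp(1j * rng.uniform(0, 2 * np.pi, len(psd)))
        block = np.fft.irfft(mag * phases, n=nfft) * win
        out[b * hop : b * hop + nfft] += block
    return out

# Low-pass noise: energy only in the lowest 64 of 257 bins.
psd = np.zeros(257)
psd[:64] = 1.0
noise = ifft_noise(psd)
```

The time/frequency tradeoff the abstract mentions is visible here: frequency resolution is fixed by nfft, while temporal detail is limited to the hop size, which is exactly the coupling the proposed subband approach relaxes.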


workshop on applications of signal processing to audio and acoustics | 2009

Controlling a spatialized environmental sound synthesizer

Charles Verron; Grégory Pallone; Mitsuko Aramaki; Richard Kronland-Martinet

This paper presents the design and control of a spatialized additive synthesizer aimed at simulating environmental sounds. First, the synthesis engine, based on a combination of an additive signal model and spatialization processes, is presented. Then, the control of the synthesizer, based on a hierarchical organization of sounds, is discussed. Complex environmental sounds (such as a water flow or a fire) can then be designed through an appropriate combination of a limited number of basic sounds consisting of elementary signals (impacts, chirps, noises). The mapping between the parameters describing these basic sounds and high-level descriptors of an environmental auditory scene is finally presented for the case of a rainy ambiance.
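The hierarchical control idea can be sketched as a mapping from one high-level descriptor down to low-level parameters of the basic sounds. All names, ranges, and the particular mapping below are hypothetical, chosen only to illustrate the one-to-many structure; the paper's actual mapping for the rainy ambiance is not reproduced here.

```python
def rain_controls(intensity):
    """Hypothetical mapping from a high-level descriptor (rain
    intensity in [0, 1]) to parameters of basic sounds: droplet
    impacts plus a background water-noise layer.
    """
    return {
        "drops_per_second": 5 + 195 * intensity,     # impact density
        "drop_gain": 0.2 + 0.5 * intensity,          # per-impact level
        "noise_gain": 0.1 + 0.8 * intensity ** 2,    # background noise level
        "noise_cutoff_hz": 2000 + 6000 * intensity,  # brighter when heavier
    }

light = rain_controls(0.1)
heavy = rain_controls(0.9)
```

A single slider thus drives several synthesis parameters at once, which is the practical benefit of organizing the control hierarchically.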


IEEE Transactions on Audio, Speech, and Language Processing | 2013

Spectral and Spatial Multichannel Analysis/Synthesis of Interior Aircraft Sounds

Charles Verron; Philippe-Aubert Gauthier; Jennifer Langlois; Catherine Guastavino

A method for spectral and spatial multichannel analysis/synthesis of interior aircraft sounds is presented. We propose two extensions of the classical sinusoids+noise model, adapted to multichannel stationary sounds. First, a spectral estimator is described, using average information across channels for spectral peak detection. Second, the residual modeling is extended to integrate two interchannel spatial cues (i.e., coherence and phase difference). This approach allows real-time synthesis and control of the spectral and spatial characteristics of sounds. It finds applications in multichannel aircraft sound reproduction and, more generally, in musical and environmental sound synthesis. The ability of the model to reproduce multichannel aircraft sounds is assessed by a numerical simulation.
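The second extension, imposing interchannel coherence and phase difference on the noise residual, can be sketched with a standard mixing construction: the second channel blends a shared and an independent noise component to reach a target coherence, then applies a per-bin phase shift. This is a generic illustration of the cue-imposition idea, not the paper's exact residual model.

```python
import numpy as np

def coherent_noise_pair(coh, ipd, nfft=1024, seed=0):
    """Sketch: synthesize two noise channels with target per-bin
    interchannel coherence `coh` in [0, 1] and phase difference
    `ipd` (radians), both arrays of length nfft//2 + 1.
    """
    rng = np.random.default_rng(seed)
    nbins = nfft // 2 + 1
    n1 = rng.normal(size=nbins) + 1j * rng.normal(size=nbins)  # shared noise
    n2 = rng.normal(size=nbins) + 1j * rng.normal(size=nbins)  # independent noise
    ch1 = n1
    # Mixing ratio sets coherence; complex exponential sets phase difference.
    ch2 = (coh * n1 + np.sqrt(1 - coh**2) * n2) * np.exp(-1j * ipd)
    return np.fft.irfft(ch1, nfft), np.fft.irfft(ch2, nfft)

coh = np.full(513, 0.9)            # highly coherent residual
ipd = np.linspace(0, np.pi, 513)   # frequency-dependent phase difference
c1, c2 = coherent_noise_pair(coh, ipd)
```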


workshop on applications of signal processing to audio and acoustics | 2011

Perceptual evaluation of interior aircraft sound models

Jennifer Langlois; Charles Verron; Philippe-Aubert Gauthier; Catherine Guastavino

We report a listening test conducted to investigate the validity of sinusoids+noise synthesis models for interior aircraft sounds. Two models were evaluated, one for monaural signals and the other for binaural signals. A parameter common to both models is the size of the analysis/synthesis window, which determines the computation cost and the time/frequency resolution of the synthesis. To evaluate the perceptual impact of reducing the window size, we systematically varied the size Ns of the analysis/synthesis window. We used three reference sounds corresponding to three different rows. Twenty-two participants completed an ABX discrimination task comparing original recorded sounds to various resynthesized versions. The results highlight better discrimination between resynthesized and original recorded sounds for the monaural model than for the binaural model, and for a window size of 128 samples than for larger window sizes. We also observed a significant effect of row on discrimination. An analysis/synthesis window size Ns of 1024 samples seems to be sufficient to synthesize binaural sounds that are indistinguishable from the original sounds; for monaural sounds, a window size of 2048 samples is needed to resynthesize the original sounds with no perceptible difference.
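In an ABX task, chance performance is 50% correct, so per-listener discrimination is commonly checked with a one-sided binomial test. A minimal sketch, with hypothetical trial counts (the paper's per-condition numbers are not reproduced here):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial test: probability of getting at least
    `correct` answers right out of `trials` ABX trials by guessing
    (p = 0.5).  A small p-value indicates genuine discrimination.
    """
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Hypothetical listener: 16 correct out of 20 trials.
p = abx_p_value(16, 20)  # p ≈ 0.0059, well below the usual 0.05 threshold
```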


workshop on applications of signal processing to audio and acoustics | 2011

Binaural analysis/synthesis of interior aircraft sounds

Charles Verron; Philippe-Aubert Gauthier; Jennifer Langlois; Catherine Guastavino

A binaural sinusoids+noise synthesis model is proposed for reproducing interior aircraft sounds. First, a method for spectral and spatial characterization of binaural interior aircraft sounds is presented. This characterization relies on a stationarity hypothesis and involves four estimators: left and right power spectra, interaural coherence and interaural phase difference. Then we present two extensions of the classical sinusoids+noise model for the analysis and synthesis of stationary binaural sounds. First, we propose a binaural estimator using relevant information in both left and right channels for peak detection. Second, the residual modeling is extended to integrate two interaural spatial cues, namely coherence and phase difference. The resulting binaural sinusoids+noise model is evaluated on a recorded aircraft sound.
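The four stationary estimators named in the abstract (left/right power spectra, interaural coherence, interaural phase difference) can all be obtained from Welch-style averaged cross-spectra. A minimal sketch, assuming overlapping Hann-windowed frames; the frame sizes and the averaging scheme are generic choices, not necessarily the paper's.

```python
import numpy as np

def interaural_cues(left, right, nfft=512, hop=256):
    """Estimate per-bin power spectra, interaural coherence, and
    interaural phase difference from a binaural pair, by averaging
    auto- and cross-spectra over overlapping windowed frames.
    """
    win = np.hanning(nfft)
    n_frames = (len(left) - nfft) // hop + 1
    sll = srr = 0.0
    slr = 0.0 + 0.0j
    for i in range(n_frames):
        seg = slice(i * hop, i * hop + nfft)
        L = np.fft.rfft(left[seg] * win)
        R = np.fft.rfft(right[seg] * win)
        sll = sll + np.abs(L) ** 2          # left auto-spectrum
        srr = srr + np.abs(R) ** 2          # right auto-spectrum
        slr = slr + L * np.conj(R)          # cross-spectrum
    coherence = np.abs(slr) / np.sqrt(sll * srr + 1e-12)
    ipd = np.angle(slr)                     # interaural phase difference
    return sll / n_frames, srr / n_frames, coherence, ipd

rng = np.random.default_rng(1)
x = rng.normal(size=8192)
pl, pr, coh, ipd = interaural_cues(x, x)  # identical channels: coherence ≈ 1
```

Note that averaging over several frames is what makes the coherence estimate meaningful: with a single frame, the estimator returns 1 for any pair of signals.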


international conference on haptic and audio interaction design | 2012

Supporting sounds: design and evaluation of an audio-haptic interface

Emma Murphy; Camille Moussette; Charles Verron; Catherine Guastavino

The design and evaluation of a multimodal interface are presented in order to investigate how spatial audio and haptic feedback can be used to convey the navigational structure of a virtual environment. The non-visual 3D virtual environment is composed of a number of parallel planes with either horizontal or vertical orientations. The interface was evaluated using a target-finding task to explore how auditory feedback can be used in isolation or combined with haptic feedback for navigation. Twenty-three users were asked to locate targets using auditory feedback in the virtual structure across both horizontal and vertical orientations of the planes, with and without haptic feedback. Findings from the evaluation experiment reveal that users performed the task faster in the bimodal conditions (with combined auditory and haptic feedback) with a horizontal orientation of the virtual planes.


IEEE Transactions on Audio, Speech, and Language Processing | 2012

An Efficient Time–Frequency Method for Synthesizing Noisy Sounds With Short Transients and Narrow Spectral Components

Damián Marelli; Mitsuko Aramaki; Richard Kronland-Martinet; Charles Verron

The inverse fast Fourier transform (IFFT) method is a time-frequency technique which was proposed to alleviate the complexity of the additive sound synthesis method in real-time applications. However, its application is limited by its inherent tradeoff between time and frequency resolutions, which are determined by the number of frequencies used for time-frequency processing. In a previous work, the authors proposed a frequency-refining technique for overcoming this frequency limitation, making it possible to achieve any time and frequency resolution using a small number of frequencies. In this correspondence we extend this work by proposing a time-refining technique that overcomes the time-resolution limitation for a given number of frequencies. Additionally, we propose an alternative to the frequency-refining technique of our previous work that requires about half the computations. The combination of these two results permits achieving any time and frequency resolution for any given number of frequencies. Using this property, we find the number of frequencies that minimizes the overall complexity, considering two different application scenarios (i.e., offline sound design and online real-time synthesis). This results in a major complexity reduction compared with the design proposed in our previous work.


international conference on auditory display | 2009

Spatialized synthesis of noisy environmental sounds

Charles Verron; Mitsuko Aramaki; Richard Kronland-Martinet; Grégory Pallone

In this paper, an overview of stochastic modeling for the analysis/synthesis of noisy sounds is presented. In particular, we focus on time-frequency domain synthesis based on the inverse fast Fourier transform (IFFT) algorithm, from which we derive the design of a spatialized synthesizer. The originality of this synthesizer lies in its one-stage architecture, which efficiently combines synthesis with 3D audio techniques at the same level of sound generation. The architecture also includes control of source-width rendering to reproduce naturally diffuse environments. The proposed approach leads to perceptually realistic 3D immersive auditory scenes. Applications of this synthesizer are presented for noisy environmental sounds such as air swishing, sea waves, or wind. We finally discuss the limitations, but also the possibilities, offered by the synthesizer for achieving sound transformations based on the analysis of recorded sounds.


Journal of Sound and Vibration | 2016

Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

Philippe-Aubert Gauthier; C. Camier; F.-A. Lebel; Y. Pasco; Alain Berry; Jennifer Langlois; Charles Verron; Catherine Guastavino

Collaboration


Dive into Charles Verron's collaborations.

Top Co-Authors

Mitsuko Aramaki
Centre national de la recherche scientifique

Alain Berry
Université de Sherbrooke

C. Camier
Université de Sherbrooke