Sylvain Favrot
Technical University of Denmark
Publications
Featured research published by Sylvain Favrot.
Journal of the Acoustical Society of America | 2013
Gerald Kidd; Sylvain Favrot; Joseph G. Desloge; Timothy Streeter; Christine R. Mason
An approach to hearing aid design is described, and preliminary acoustical and perceptual measurements are reported, in which an acoustic beamforming microphone array is coupled to an eyeglasses-mounted eye tracker. This visually guided hearing aid (VGHA), currently a laboratory-based prototype, senses the direction of gaze using the eye tracker, and an interface converts those values into control signals that steer the acoustic beam accordingly. Preliminary speech intelligibility measurements with noise and speech maskers revealed near-normal or better-than-normal spatial release from masking with the VGHA. Although it is not yet a wearable prosthesis, the principle underlying the device is supported by these findings.
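As a rough illustration of the steering principle described above, the hedged sketch below delays and sums the channels of a microphone array toward a gaze-derived azimuth. The array geometry, sample rate, signals, and function names are illustrative assumptions and do not reflect the actual VGHA hardware or processing.

```python
# Hypothetical sketch: steering a delay-and-sum beamformer toward the gaze azimuth.
# Array geometry, sample rate, and signals are illustrative assumptions, not the VGHA design.
import numpy as np

FS = 16000          # sample rate (Hz)
C = 343.0           # speed of sound (m/s)

# Example: 8-microphone uniform linear array with 4 cm spacing along x
mic_x = np.arange(8) * 0.04            # microphone positions (m)

def steer_delays(azimuth_deg):
    """Per-microphone delays (s) aligning a far-field plane wave from azimuth_deg."""
    az = np.deg2rad(azimuth_deg)
    # Projection of each mic position onto the assumed arrival direction
    return mic_x * np.cos(az) / C

def delay_and_sum(mic_signals, azimuth_deg):
    """Steer the beam by applying a fractional delay per channel in the frequency domain, then summing."""
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    delays = steer_delays(azimuth_deg)
    spectra = np.fft.rfft(mic_signals, axis=1)
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((spectra * phase).mean(axis=0), n=n)

# Usage: the gaze angle from the eye tracker sets the acoustic look direction for each block
gaze_azimuth = 30.0                                  # degrees, from eye tracker (assumed)
mics = np.random.randn(8, FS)                        # 1 s of dummy multichannel input
output = delay_and_sum(mics, gaze_azimuth)
```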
Acta Acustica United With Acustica | 2012
Sylvain Favrot; Jörg M. Buchholz
In order to reproduce nearby sound sources with distant loudspeakers to a single listener, the near-field-compensated (NFC) method for higher-order Ambisonics (HOA) has previously been proposed. In practical realizations, this method requires the use of regularization functions. This study analyzes the impact of two existing regularization functions and a newly proposed one on the reproduced sound fields and on the main auditory cue for nearby sound sources outside the median plane, i.e., low-frequency interaural level differences (ILDs). The proposed regularization function led to a better reproduction of point-source sound fields than the existing regularization functions for NFC-HOA. Measurements in realistic playback environments showed that, for very close sources, significant ILDs for frequencies above about 250 Hz can be reproduced with NFC-HOA and the proposed regularization function, whereas the existing regularization functions failed to provide ILDs below 500 Hz. A listening test showed that these lower-frequency ILDs provided by the proposed regularization function lead to significantly improved distance perception performance. This test also showed that the distances of virtual sources are perceived less accurately than those of corresponding physical sources when amplitude cues are not available.
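To make the role of the radial filters and their regularization concrete, here is a hedged Python sketch of near-field-compensated HOA filters with a simple maximum-gain limiter. The geometry, the 20 dB gain limit, and the clipping approach are illustrative assumptions; they are not the regularization functions compared in the paper.

```python
# Hedged sketch of NFC-HOA radial filters with a simple maximum-gain regularization.
# The gain limit and geometry are illustrative assumptions, not the paper's proposed function.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

C = 343.0  # speed of sound (m/s)

def sph_hankel2(order, x):
    """Spherical Hankel function of the second kind."""
    return spherical_jn(order, x) - 1j * spherical_yn(order, x)

def nfc_filter(order, freqs, r_source, r_array, max_gain_db=20.0):
    """NFC radial filter H_m(f) = h_m(k r_s) / h_m(k R), magnitude-limited for stability."""
    k = 2 * np.pi * np.asarray(freqs, dtype=float) / C
    k = np.where(k == 0, 1e-9, k)                     # avoid division issues at k = 0
    h = sph_hankel2(order, k * r_source) / sph_hankel2(order, k * r_array)
    limit = 10 ** (max_gain_db / 20)
    mag = np.abs(h)
    return np.where(mag > limit, h * limit / mag, h)  # clip magnitude, keep phase

freqs = np.linspace(20, 8000, 512)
H2 = nfc_filter(order=2, freqs=freqs, r_source=0.5, r_array=1.8)  # 0.5 m virtual source, 1.8 m array (assumed)
```

Without the magnitude limit, the low-frequency boost of the higher-order filters grows rapidly as the virtual source moves closer than the loudspeaker radius, which is why some form of regularization is needed in practice.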
The Hearing journal | 2010
Pauli Minnaar; Sylvain Favrot; Jörg M. Buchholz
Simulating the real world in the lab

Complex acoustic environments are encountered frequently in everyday life, including, for example, in train stations, supermarkets, and busy restaurants. People with normal hearing can usually communicate without effort in these environments, but people with a hearing impairment often have difficulties, even when wearing hearing aids. Digital hearing aids are gradually becoming more powerful, though, and more advanced signal processing can be provided to help those who wear them. However, to develop these new processing methods it is important to perform listening tests in these difficult listening situations.

Traditional hearing aid testing in the laboratory normally focuses on how well speech is understood in noise. However, there is much more to the experience of sound in everyday life. Specifically, the spatial aspects of sound, such as from which direction a sound comes or how far away a sound object is, are also important. This spatial awareness is used, for example, when switching attention from one person to another during a meeting or a dinner conversation. It also plays a major role when trying to understand what someone is saying in a very reverberant room. These aspects of hearing have to be taken into account to ensure that hearing aids maintain the full richness of the acoustical information available to listeners and to help them extract meaning from it. Therefore, there is a need to develop and test future hearing aids in a variety of real-life sound environments.

Recent developments in room acoustics modeling and sound reproduction have made it possible to create complex listening situations in the laboratory by creating a so-called virtual sound environment (VSE).1,2 In a VSE, sound scenes are constructed in a computer by modeling sound sources in a simulated (or virtual) room. The sound of these virtual sources is then played through a large array of loudspeakers. A VSE can include many virtual sound sources around the listening position (in different directions and at different distances), and the dimensions and wall properties of the room can be changed. When placed in the middle of the loudspeaker array, the listener perceives all attributes of the sound as in a real physical environment. Thus, the listener can be “transported” to different listening environments, bridging the gap between the laboratory and real life.

Oticon recently constructed a VSE system at its head office in Denmark. The physical setup consists of 29 loudspeakers placed on a sphere around the listening position in a sound studio (see Figure 1). Many listening tests can be performed in this VSE system to gain insights into the perception of speech in complex acoustic environments. It also opens up the possibility of studying many other aspects of sound, such as sound localization, reverberation, and masking phenomena. The system allows for testing new advancements in hearing aid technology directly on users early in the development process. Thus, the needs of users can be clarified and the benefit of the hearing aids maximized.
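A minimal sketch of the auralization step described above, assuming the sound of each virtual source is obtained by convolving a dry (anechoic) recording with one simulated room impulse response per loudspeaker and summing across sources. The function names, channel count, and dummy data are assumptions for illustration, not the actual VSE implementation.

```python
# Hedged sketch: building loudspeaker feeds for a virtual sound environment by
# convolving dry source signals with per-loudspeaker room impulse responses.
import numpy as np
from scipy.signal import fftconvolve

N_LOUDSPEAKERS = 29          # as in the setup described above

def auralize(dry_sources, mrirs):
    """dry_sources: list of 1-D dry signals; mrirs: list of (N_LOUDSPEAKERS, rir_len) arrays."""
    n_out = max(len(s) + m.shape[1] - 1 for s, m in zip(dry_sources, mrirs))
    feeds = np.zeros((N_LOUDSPEAKERS, n_out))
    for src, mrir in zip(dry_sources, mrirs):
        for ch in range(N_LOUDSPEAKERS):
            y = fftconvolve(src, mrir[ch])
            feeds[ch, :len(y)] += y          # accumulate the contribution of each virtual source
    return feeds

# Usage with dummy data: two virtual sources, 0.5 s impulse responses at 44.1 kHz (assumed)
fs = 44100
sources = [np.random.randn(fs), np.random.randn(fs)]
mrirs = [np.random.randn(N_LOUDSPEAKERS, fs // 2) * 0.01 for _ in sources]
loudspeaker_feeds = auralize(sources, mrirs)
```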
Journal of the Acoustical Society of America | 2013
Sylvain Favrot; Christine R. Mason; Timothy Streeter; Joseph G. Desloge; Gerald Kidd
A visually guided hearing aid (VGHA) has recently been developed, which uses an eye tracker to steer the “acoustic look direction” (ALD) of a beamforming microphone array. The current study evaluates the performance of this highly directional microphone in providing spatial release from masking (SRM) under acoustically dry and reverberant conditions. Four normal-hearing subjects participated in a speech intelligibility test with collocated and spatially separated speech maskers when listening either through the microphone array or through KEMAR to simulate “natural” binaural conditions. The results indicated that near-normal SRM was achieved by listening through the VGHA in both environments. In the acoustically dry condition, SRM was similar to the measured signal-to-noise ratio (SNR) gain from the microphone array. However, in the reverberant condition, subjects showed significantly greater SRM than predicted from the measured SNR gain from the array. This is consistent with the measured improvement in SN...
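For readers unfamiliar with the metric, the snippet below shows the arithmetic behind spatial release from masking: the difference between the speech reception thresholds measured with co-located and with spatially separated maskers. The threshold values are made-up placeholders, not results from the study.

```python
# Illustrative arithmetic only: spatial release from masking (SRM) is the improvement
# in speech reception threshold (SRT) when maskers move from co-located to separated positions.
srt_colocated_db = 2.0      # SRT with target and maskers co-located (dB SNR, assumed value)
srt_separated_db = -8.0     # SRT with maskers spatially separated (dB SNR, assumed value)
srm_db = srt_colocated_db - srt_separated_db
print(f"SRM = {srm_db:.1f} dB")   # 10.0 dB of release from masking in this made-up example
```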
Journal of the Acoustical Society of America | 2008
Sylvain Favrot; Jörg M. Buchholz
In the present study, a novel multi-channel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different-order Ambisonic techniques to calculate multi-channel room impulse responses (mRIRs). Auralization is then obtained by the convolution of the mRIRs with an acoustic signal. The derivation of the mRIRs takes into account that (i) auditory localization is most sensitive to the location of the direct sound and (ii) auditory localization performance is rather poor for early reflections and even worse for late reverberation. Throughout the VAE development, special care was taken to achieve a realistic auditory percept and to avoid “artefacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.
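The mixed-order idea, a higher Ambisonic order for the direct sound and a lower order for reflections, can be sketched as below. This is a hedged, horizontal-only (2-D) illustration with an assumed 16-loudspeaker ring and a basic sampling decoder; the actual VAE uses a 3-D 29-loudspeaker setup and ODEON-derived reflections.

```python
# Hedged 2-D sketch of mixed-order Ambisonic rendering: direct sound encoded at a
# higher order than a reflection, then decoded to a loudspeaker ring. Geometry,
# orders, and gains are illustrative assumptions.
import numpy as np

def encode_2d(azimuth_rad, order):
    """Circular-harmonic encoding coefficients [1, cos, sin, cos2, sin2, ...] up to `order`."""
    coeffs = [1.0]
    for m in range(1, order + 1):
        coeffs += [np.cos(m * azimuth_rad), np.sin(m * azimuth_rad)]
    return np.array(coeffs)

def decode_matrix_2d(speaker_azimuths_rad, order):
    """Basic (sampling) decoder: one encoding vector per loudspeaker, scaled by 1/N."""
    return np.array([encode_2d(az, order) for az in speaker_azimuths_rad]) / len(speaker_azimuths_rad)

speakers = np.deg2rad(np.arange(0, 360, 360 / 16))   # 16-loudspeaker horizontal ring (assumed)
direct = encode_2d(np.deg2rad(30), order=4)          # direct sound: high order
refl = encode_2d(np.deg2rad(140), order=1)           # an early reflection: low order
refl_padded = np.pad(refl, (0, len(direct) - len(refl)))  # zero-fill the missing high orders
gains = decode_matrix_2d(speakers, order=4) @ (direct + 0.5 * refl_padded)  # per-loudspeaker gains
```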
Journal of The Audio Engineering Society | 2012
Marton Marschall; Sylvain Favrot; Jörg M. Buchholz
Forum Acusticum 2011 | 2011
Johannes Käsbach; Sylvain Favrot; Jörg M. Buchholz
Journal of The Audio Engineering Society | 2011
Sylvain Favrot; Marton Marschall; Johannes Käsbach; Jörg M. Buchholz; Tobias Weller
Audio Engineering Society Conference: UK 25th Conference: Spatial Audio in Today’s 3D World | 2012
Sylvain Favrot; Marton Marschall
Journal of The Audio Engineering Society | 2009
Sylvain Favrot; Jörg M. Buchholz