Publication


Featured research published by Léo Varnet.


Frontiers in Human Neuroscience | 2013

Using auditory classification images for the identification of fine acoustic cues used in speech perception

Léo Varnet; Kenneth Knoblauch; Fanny Meunier; Michel Hoen

An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulus modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it remains a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another; the auditory primitives relevant for the categorical perception of speech are still unknown. Here we propose to adapt a method relying on a Generalized Linear Model (GLM) with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing smoother solutions. By applying this technique to a two-alternative forced-choice experiment between the stimuli “aba” and “ada” in noise with an adaptive SNR, we confirm that the second formant transition is key for classifying phonemes as /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands, made in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.
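
For intuition, the estimation step can be sketched as a penalized logistic regression of the listener's trial-by-trial responses on the trial-by-trial noise fields. The following toy simulation is a minimal sketch in Python, using scikit-learn and a plain L2 (ridge) penalty as a stand-in for the smoothness prior described in the paper; the simulated observer, dimensions, and parameter values are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 2000, 25, 40        # hypothetical time-frequency grid

# A fake internal template with one localized "cue" region
true_template = np.zeros((n_freq, n_time))
true_template[10:13, 15:20] = 1.0

# One spectrogram of Gaussian noise per trial
noise = rng.normal(size=(n_trials, n_freq * n_time))

# Simulated observer: the response depends on how well each noise field
# matches the internal template, plus some internal noise
drive = noise @ true_template.ravel()
responses = (drive + rng.normal(scale=2.0, size=n_trials) > 0).astype(int)

# Penalized logistic regression; the L2 penalty (strength 1/C) stands in
# for the smoothness prior and keeps the estimate from overfitting
glm = LogisticRegression(penalty="l2", C=0.01, max_iter=1000)
glm.fit(noise, responses)

aci = glm.coef_.reshape(n_freq, n_time)        # estimated classification image
print(np.corrcoef(aci.ravel(), true_template.ravel())[0, 1])
```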


Scientific Reports | 2015

How musical expertise shapes speech perception: evidence from auditory classification images.

Léo Varnet; Tianyun Wang; Chloe Peter; Fanny Meunier; Michel Hoen

It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique for investigating the specific listening strategies involved in speech comprehension has made it difficult to determine how musicians’ higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique with 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the onset of the first formant and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
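
One way response consistency can be quantified is a double-pass measure: the same noisy stimuli are presented twice and the percentage of agreement between the listener's two responses is computed. The sketch below is purely illustrative and assumes a double-pass design; it is not a description of the paper's exact analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def double_pass_consistency(responses_pass1, responses_pass2):
    """Percentage of identical responses to the same stimuli presented
    in two separate passes (illustrative consistency metric)."""
    r1, r2 = np.asarray(responses_pass1), np.asarray(responses_pass2)
    return 100.0 * np.mean(r1 == r2)

# Toy listeners: each responds to the same 500 stimuli twice; the first
# has little internal noise, the second a lot
stimuli = rng.normal(size=500)
respond = lambda internal_noise: stimuli + rng.normal(scale=internal_noise,
                                                      size=500) > 0

print(double_pass_consistency(respond(0.3), respond(0.3)))  # high agreement
print(double_pass_consistency(respond(2.0), respond(2.0)))  # lower agreement
```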


PLOS ONE | 2015

A psychophysical imaging method evidencing auditory cue extraction during speech perception: A group analysis of auditory classification images

Léo Varnet; Kenneth Knoblauch; Willy Serniclaes; Fanny Meunier; Michel Hoen

Although there is broad consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique, which allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses about the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization in two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
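
For readers unfamiliar with cluster-based nonparametric tests, the sketch below illustrates the general recipe on simulated ACIs: threshold a pixelwise t-map, form clusters of supra-threshold pixels, and compare the largest observed cluster mass against a null distribution obtained by random sign flips. This is a generic one-sample variant, not the paper's exact implementation; the threshold and all dimensions are assumptions.

```python
import numpy as np
from scipy import ndimage

def cluster_permutation_test(acis, thresh=2.0, n_perm=1000, seed=0):
    """One-sample cluster-based permutation test on a stack of ACIs
    (participants x freq x time). Returns the pixelwise t-map, the
    cluster labels, and a p-value for the largest cluster."""
    rng = np.random.default_rng(seed)
    n = acis.shape[0]

    def max_cluster_mass(data):
        # Pixelwise one-sample t-statistic, then connected clusters
        t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))
        labels, n_clusters = ndimage.label(np.abs(t) > thresh)
        if n_clusters == 0:
            return 0.0, t, labels
        masses = ndimage.sum(np.abs(t), labels, range(1, n_clusters + 1))
        return float(np.max(masses)), t, labels

    observed, t_map, labels = max_cluster_mass(acis)
    # Null distribution: randomly flip the sign of each participant's ACI
    null = np.array([max_cluster_mass(acis * rng.choice([-1.0, 1.0],
                     size=n)[:, None, None])[0] for _ in range(n_perm)])
    return t_map, labels, float(np.mean(null >= observed))

# Toy data: 16 simulated participants sharing one weak cue region
rng = np.random.default_rng(2)
acis = rng.normal(size=(16, 25, 40))
acis[:, 10:13, 15:20] += 0.8
t_map, clusters, p_value = cluster_permutation_test(acis)
print(p_value)   # should be small: the shared region forms a large cluster
```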


Journal of the Acoustical Society of America | 2017

A cross-linguistic study of speech modulation spectra

Léo Varnet; Maria Clemencia Ortiz-Barajas; Ramón Guevara Erra; Judit Gervain; Christian Lorenzi

Languages show systematic variation in their sound patterns and grammars. Accordingly, they have been classified into typological categories such as stress-timed vs syllable-timed, or Head-Complement (HC) vs Complement-Head (CH). To date, it has remained incompletely understood how these linguistic properties are reflected in the acoustic characteristics of speech in different languages. In the present study, the amplitude-modulation (AM) and frequency-modulation (FM) spectra of 1797 utterances in ten languages were analyzed. Overall, the spectra were found to be similar in shape across languages. However, significant effects of linguistic factors were observed on the AM spectra. These differences were magnified with a perceptually plausible representation based on the modulation index (a measure of the signal-to-noise ratio at the output of a logarithmic modulation filterbank): the maximum value distinguished between HC and CH languages, with the exception of Turkish, while the exact frequency of this maximum differed between stress-timed and syllable-timed languages. An additional study conducted on a semi-spontaneous speech corpus showed that these differences persist for a larger number of speakers but disappear for less constrained semi-spontaneous speech. These findings reveal that broad linguistic categories are reflected in the temporal modulation features of different languages, although this may depend on speaking style.
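
As a rough illustration of the analysis pipeline, the sketch below computes an AM spectrum by passing the Hilbert envelope of a signal through a bank of log-spaced modulation filters and measuring the power at each output. The filter design, band edges, and envelope sampling rate are assumptions for this sketch; the paper's filterbank and modulation-index computation differ in detail.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfilt, resample_poly

def am_spectrum(x, fs, fs_env=1000, fmin=0.5, fmax=64.0, n_bands=10):
    """Power of the temporal envelope at the output of a bank of
    log-spaced modulation filters (illustrative AM spectrum)."""
    envelope = np.abs(hilbert(x))                   # temporal envelope
    envelope = resample_poly(envelope, 1, fs // fs_env)  # downsample
    envelope = envelope - envelope.mean()           # discard the DC component
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    power = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs_env, output="sos")
        power.append(np.mean(sosfilt(sos, envelope) ** 2))
    centers = np.sqrt(edges[:-1] * edges[1:])       # geometric band centers
    return centers, np.asarray(power)

# Toy "utterance": noise amplitude-modulated at a 4-Hz syllabic rate
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 4.0 * t)) * rng.normal(size=t.size)
freqs, power = am_spectrum(x, fs)
print(freqs[np.argmax(power)])   # the peak should fall near 4 Hz
```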


PLOS ONE | 2016

Direct Viewing of Dyslexics' Compensatory Strategies in Speech in Noise Using Auditory Classification Images.

Léo Varnet; Fanny Meunier; Gwendoline Trollé; Michel Hoen

A vast majority of dyslexic children exhibit a phonological deficit, particularly noticeable in phonemic identification or discrimination tasks. The gap in performance between dyslexic and normotypical listeners appears to decrease into adulthood, suggesting that some individuals with dyslexia develop compensatory strategies. Some dyslexic adults, however, remain impaired in more challenging listening situations, such as in the presence of background noise. This paper addresses the question of which compensatory strategies are employed, using the recently developed Auditory Classification Image (ACI) methodology. The results of 18 dyslexics taking part in a phoneme categorization task in noise were compared with those of 18 normotypical age-matched controls. By fitting a penalized Generalized Linear Model to the data of each participant, we obtained their ACI, a map of the time-frequency regions they relied on to perform the task. Even though dyslexics performed significantly worse than controls, we were unable to detect a robust difference between the mean ACIs of the two groups. This is partly due to the considerable heterogeneity in listening strategies among a subgroup of 7 low-performing dyslexics, as confirmed by a complementary analysis. When excluding these participants to restrict our comparison to the 11 dyslexics performing as well as their average-reading peers, we found a significant difference at the onset of F3 in the first syllable, and a trend toward a difference at the onset of F4, suggesting that these listeners can compensate for their deficit by relying on additional allophonic cues.
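
The heterogeneity of listening strategies within a group can be illustrated with a simple summary statistic: the mean pairwise correlation between participants' ACIs. The function below is an illustrative metric on simulated data, not the complementary analysis reported in the paper.

```python
import numpy as np

def strategy_heterogeneity(acis):
    """Mean pairwise correlation between participants' ACIs: values near
    1 suggest a shared listening strategy, values near 0 a heterogeneous
    group (illustrative metric only)."""
    flat = np.asarray([np.ravel(a) for a in acis])
    corr = np.corrcoef(flat)
    upper = np.triu_indices_from(corr, k=1)   # each pair counted once
    return float(corr[upper].mean())

rng = np.random.default_rng(5)
shared_cue = rng.normal(size=(25, 40))
homogeneous = [shared_cue + 0.5 * rng.normal(size=(25, 40)) for _ in range(11)]
heterogeneous = [rng.normal(size=(25, 40)) for _ in range(7)]
print(strategy_heterogeneity(homogeneous))    # high (around 0.8 here)
print(strategy_heterogeneity(heterogeneous))  # near zero
```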


Journal of the Acoustical Society of America | 2018

Sensorineural hearing loss impairs sensitivity but spares temporal integration for detection of frequency modulation

Nicolas Wallaert; Léo Varnet; Brian C. J. Moore; Christian Lorenzi

The effect of the number of modulation cycles (N) on frequency-modulation (FM) detection thresholds (FMDTs) was measured with and without interfering amplitude modulation (AM) for hearing-impaired (HI) listeners, using a 500-Hz sinusoidal carrier and FM rates of 2 and 20 Hz. The data were compared with FMDTs for normal-hearing (NH) listeners and AM detection thresholds (AMDTs) for NH and HI listeners [Wallaert, Moore, and Lorenzi (2016). J. Acoust. Soc. Am. 139, 3088-3096; Wallaert, Moore, Ewert, and Lorenzi (2017). J. Acoust. Soc. Am. 141, 971-980]. FMDTs were higher for HI than for NH listeners, but the effect of increasing N was similar across groups. In contrast, AMDTs were lower and the effect of increasing N was greater for HI listeners than for NH listeners. A model of temporal-envelope processing based on a modulation filterbank and a template-matching decision strategy accounted better for the FMDTs at 20 Hz than at 2 Hz for young NH listeners, and predicted greater temporal integration of FM than observed for all groups. These results suggest that different mechanisms underlie AM and FM detection at low rates, and that hearing loss impairs FM-detection mechanisms but preserves the memory and decision processes responsible for temporal integration of FM.
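
The template-matching decision strategy, and why it predicts temporal integration (performance improving with N), can be illustrated with a toy two-interval simulation: the modeled listener picks the interval whose noisy internal representation correlates best with a stored noise-free template, and longer stimuli average out more internal noise. Everything below (sampling rate, noise level, choice of internal representation) is an assumption for illustration, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
FM_RATE, FS = 2.0, 1000   # assumed FM rate (Hz) and internal sampling rate

def internal_rep(fm_depth, n_cycles, internal_noise=0.05):
    """Toy internal representation of the instantaneous-frequency
    trajectory of the carrier, corrupted by internal noise."""
    t = np.arange(0, n_cycles / FM_RATE, 1 / FS)
    return (fm_depth * np.sin(2 * np.pi * FM_RATE * t)
            + rng.normal(scale=internal_noise, size=t.size))

def trial_correct(fm_depth, n_cycles):
    """2-interval trial: the model picks the interval whose representation
    best matches a stored noise-free template of the FM target."""
    t = np.arange(0, n_cycles / FM_RATE, 1 / FS)
    template = np.sin(2 * np.pi * FM_RATE * t)
    target = internal_rep(fm_depth, n_cycles)   # FM interval
    standard = internal_rep(0.0, n_cycles)      # unmodulated interval
    return np.dot(target, template) > np.dot(standard, template)

# Temporal integration: percent correct grows with the number of cycles N
for n_cycles in (2, 4, 8):
    pc = np.mean([trial_correct(0.002, n_cycles) for _ in range(2000)])
    print(n_cycles, pc)
```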


Conference of the International Speech Communication Association | 2016

Speech Reductions Cause a De-Weighting of Secondary Acoustic Cues.

Léo Varnet; Fanny Meunier; Michel Hoen

The ability of the auditory system to change the perceptual weighting of acoustic cues when faced with degraded speech has long been documented. However, the exact changes that occur remain mostly unknown. Here, we used the Auditory Classification Image (ACI) methodology to reveal the acoustic cues involved in the comprehension of natural speech and of reduced (i.e., noise-vocoded or re-synthesized) speech. The results show that in the latter case the auditory system updates its listening strategy by de-weighting secondary acoustic cues, as these are often weaker and thus more easily erased in adverse listening conditions. Furthermore, our data suggest that this de-weighting does not directly depend on the actual reliability of the cues, but rather on their expected change in informativeness.
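
For concreteness, the sketch below shows what a minimal noise vocoder does: split the signal into a few log-spaced frequency bands, extract each band's temporal envelope, and use it to modulate band-limited noise. The channel count and filter settings are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, fmin=100.0, fmax=7000.0, seed=0):
    """Minimal noise vocoder: per-band Hilbert envelopes modulate
    band-limited noise carriers, and the channels are summed."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    vocoded = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfilt(sos, x)))       # band envelope
        carrier = sosfilt(sos, rng.normal(size=len(x)))   # band-limited noise
        vocoded += envelope * carrier
    return vocoded

# Usage on a toy signal (a real experiment would use recorded utterances)
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speechlike, fs)
```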


Neuropsychologia | 2014

Neural correlates of non-verbal social interactions: a dual-EEG study.

Mathilde Ménoret; Léo Varnet; Raphaël Fargier; Anne Cheylus; Aurore Curie; Vincent des Portes; Tatjana A. Nazir; Yves Paulignan


5th International Brain-Computer Interface Conference (BCI 2011) | 2011

Brain Invaders: a prototype of an open-source P300-based video game working with the OpenViBE platform

Marco Congedo; Matthieu Goyat; Nicolas Tarrin; Gelu Ionescu; Léo Varnet; Bertrand Rivet; Ronald Phlypo; Nisrine Jrad; Michaël A. S. Acquadro; Christian Jutten


Conference of the International Speech Communication Association | 2012

Phoneme resistance during speech-in-speech comprehension

Léo Varnet; Julien Meyer; Michel Hoen; Fanny Meunier

Collaboration


Dive into Léo Varnet's collaborations.

Top Co-Authors

Fanny Meunier
Centre national de la recherche scientifique

Michel Hoen
French Institute of Health and Medical Research

Elsa Spinelli
Centre national de la recherche scientifique

Christian Lorenzi
École Normale Supérieure

Anne Cheylus
Centre national de la recherche scientifique

Aurore Curie
Centre national de la recherche scientifique

Bertrand Rivet
Centre national de la recherche scientifique