Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Fanny Meunier is active.

Publication


Featured research published by Fanny Meunier.


Attention, Perception, & Psychophysics | 2010

An intonational cue to word segmentation in phonemically identical sequences.

Elsa Spinelli; Nicolas Grimault; Fanny Meunier; Pauline Welby

We investigated the use of language-specific intonational cues to word segmentation in French. Participants listened to phonemically identical sequences such as /selafij/, C’est la fiche/l’affiche “It’s the sheet/poster.” We modified the f0 of the first vowel /a/ of the natural consonant-initial production la fiche, so that it was equal to that of the natural vowel-initial production l’affiche (resynth-consonant-equal condition), higher (resynth-consonant-higher condition), or lower (resynth-consonant-lower condition). In a two-alternative forced choice task (Experiment 1), increasing the f0 in the /a/ of la fiche increased the percentage of vowel-initial (affiche) responses. In Experiment 2, participants made visual lexical decisions to vowel-initial targets (affiche) following both the natural consonant-initial production (la fiche) and the resynth-consonant-equal version. Facilitation was found only for the resynth-consonant-equal condition, suggesting that raising the f0 allowed online activation of vowel-initial targets. The recognition system seems to exploit intonational information to guide segmentation toward the beginning of content words.


Brain and Language | 2002

Cross-modal morphological priming in French.

Fanny Meunier; Juan Segui

We investigated the lexical representation of morphologically complex words in French using a cross-modal priming experiment. We asked if the lexical representation for derivationally suffixed and prefixed words is morphologically structured and how this relates to the phonological transparency of the surface relationship between stem and affix. Overall we observed a clear effect of the morphological structure for derived words, an effect that is not explicable by a formal effect. Prefixed words prime their stems, even when they have a phonologically opaque relationship, and a prefixed word primes another prefixed word derived from the same stem. However, suffixed words prime their stems only if their relationship is phonologically transparent. Two suffixed words derived from the same stem prime each other. These two latter results differ from those observed in English by Marslen-Wilson, Tyler, Waksler, and Older (1994). We argue that it is the specific properties of the language, such as rhythm, that could explain the differences between the results observed for the two languages and we propose a model where prefixed and suffixed words are decomposed at different stages during their identification process.


PLOS ONE | 2013

Speech Recognition in Natural Background Noise

Julien Meyer; Laure Madeleine Dentel; Fanny Meunier

In the real world, human speech recognition nearly always involves listening in background noise. The impact of such noise on speech signals and on intelligibility performance increases with the separation of the listener from the speaker. The present behavioral experiment provides an overview of the effects of such acoustic disturbances on speech perception in conditions approaching ecologically valid contexts. We analysed the intelligibility loss in spoken word lists with increasing listener-to-speaker distance in a typical low-level natural background noise. The noise was combined with the simple spherical amplitude attenuation due to distance, basically changing the signal-to-noise ratio (SNR). Therefore, our study draws attention to some of the most basic environmental constraints that have pervaded spoken communication throughout human history. We evaluated the ability of native French participants to recognize French monosyllabic words (spoken at 65.3 dB(A), reference at 1 meter) at distances between 11 and 33 meters, which corresponded to the SNRs most revealing of the progressive effect of the selected natural noise (−8.8 dB to −18.4 dB). Our results showed that in such conditions vowel identity is mostly preserved, with the striking peculiarity that no vowel confusions occurred. The results also confirmed the functional role of consonants during lexical identification. The extensive analysis of recognition scores, confusion patterns and associated acoustic cues revealed that sonorant, sibilant and burst properties were the most important parameters influencing phoneme recognition. Altogether these analyses allowed us to extract a resistance scale from consonant recognition scores. We also identified specific perceptual consonant confusion groups depending on their position in the words (onset vs. coda). Finally, our data suggested that listeners may access some acoustic cues of the CV transition, opening interesting perspectives for future studies.
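The “simple spherical amplitude attenuation due to distance” mentioned above follows the inverse-square law, under which the speech level drops by 20·log10(d) dB relative to the 1-meter reference while the background noise level stays roughly constant. A minimal sketch of that relationship, with an assumed (not reported) noise level chosen purely for illustration:

```python
import numpy as np

# Hedged sketch of the distance-to-SNR relationship described in the abstract.
# Spherical spreading attenuates the speech level by 20*log10(d) dB relative to
# the 1 m reference; the background noise is assumed constant with distance.
# L_NOISE is an illustrative value, not a figure reported by the authors.

L_SPEECH_1M = 65.3   # dB(A), reference speech level at 1 m (from the abstract)
L_NOISE = 53.3       # dB(A), assumed constant natural background noise level

def snr_at_distance(d_m):
    """SNR (dB) at a listener d_m meters from the speaker, spherical spreading only."""
    speech_level = L_SPEECH_1M - 20.0 * np.log10(d_m)
    return speech_level - L_NOISE

for d in (11, 22, 33):
    print(f"{d:2d} m -> SNR ≈ {snr_at_distance(d):.1f} dB")
# With this assumed noise level, 11 m and 33 m give roughly -8.8 dB and -18.4 dB,
# matching the SNR range quoted in the abstract.
```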


Neuropsychologia | 2014

Functional correlates of the speech-in-noise perception impairment in dyslexia: An MRI study

Marjorie Dole; Fanny Meunier; Michel Hoen

Dyslexia is a language-based neurodevelopmental disorder characterized by a persistent deficit in reading and spelling. These difficulties have been shown to result from an underlying impairment of the phonological component of language, possibly also affecting speech perception. Although there is little evidence for such a deficit under optimal, quiet listening conditions, speech perception difficulties in adults with dyslexia are often reported under more challenging conditions, such as when speech is masked by noise. Previous studies have shown that these difficulties are more pronounced when the background noise is speech and when little spatial information is available to facilitate differentiation between target and background sound sources. In this study, we investigated the neuroimaging correlates of speech-in-speech perception in typical readers and participants with dyslexia, focusing on the effects of different listening configurations. Fourteen adults with dyslexia and 14 matched typical readers performed a subjective intelligibility rating test with single words presented against concurrent speech during functional magnetic resonance imaging (fMRI) scanning. Target words were always presented with a four-talker background in one of three listening configurations: Dichotic, Binaural or Monaural. The results showed that in the Monaural configuration, in which no spatial information was available and energetic masking was maximal, intelligibility was severely decreased in all participants, and this effect was particularly strong in participants with dyslexia. Functional imaging revealed that in this configuration, participants partially compensated for their poorer listening abilities by recruiting several areas in the cerebral networks engaged in speech perception. In the Binaural configuration, participants with dyslexia achieved the same performance level as typical readers, suggesting that they were able to use spatial information when available. This result was, however, associated with increased activation in the right superior temporal gyrus, suggesting the need to reallocate neural resources to overcome speech-in-speech difficulties. Taken together, these results provide further understanding of the speech-in-speech perception deficit observed in dyslexia.


Frontiers in Human Neuroscience | 2013

Using auditory classification images for the identification of fine acoustic cues used in speech perception

Léo Varnet; Kenneth Knoblauch; Fanny Meunier; Michel Hoen

An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulus modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it is still a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another, as the auditory primitives relevant for the categorical perception of speech are still unknown. Here we propose to adapt a method relying on a Generalized Linear Model (GLM) with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing smoother solutions. By applying this technique to a specific two-alternative forced choice experiment between the stimuli “aba” and “ada” in noise with an adaptive SNR, we confirm that the second formant transition is key for classifying phonemes as /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.
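As a concrete, deliberately simplified illustration of the classification-image idea, the sketch below fits a logistic GLM with a first-difference smoothness penalty to simulated one-dimensional data. It is not the authors' implementation: their analysis operates on time-frequency noise representations and real listener responses, and the penalty weight, data sizes, and template shape here are all invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch (not the authors' code): estimate a 1-D "classification image" w
# from trial-by-trial noise profiles X and binary responses y, using a logistic
# GLM penalized by the squared first differences of w (a simple smoothness prior).

rng = np.random.default_rng(0)
n_trials, n_feat = 2000, 64
X = rng.normal(size=(n_trials, n_feat))                        # noise presented on each trial
true_w = np.exp(-0.5 * ((np.arange(n_feat) - 40) / 3.0) ** 2)  # invented "template"
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_w))))       # simulated listener responses

D = np.diff(np.eye(n_feat), axis=0)   # first-difference operator
lam = 5.0                             # smoothness weight (illustrative value)

def neg_log_posterior(w):
    z = X @ w
    nll = np.sum(np.logaddexp(0.0, z) - y * z)   # logistic negative log-likelihood
    return nll + lam * np.sum((D @ w) ** 2)      # + smoothness penalty ||Dw||^2

w_hat = minimize(neg_log_posterior, np.zeros(n_feat), method="L-BFGS-B").x
# w_hat plays the role of the classification image: bins with large weights are
# those whose noise content drives the simulated "ada"-vs-"aba" decision.
```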


PLOS ONE | 2013

Gray and White Matter Distribution in Dyslexia: A VBM Study of Superior Temporal Gyrus Asymmetry

Marjorie Dole; Fanny Meunier; Michel Hoen

In the present study, we investigated brain morphological signatures of dyslexia by using a voxel-based asymmetry analysis. Dyslexia is a developmental disorder that affects the acquisition of reading and spelling abilities and is associated with a phonological deficit. Speech perception disabilities have been associated with this deficit, particularly when listening conditions are challenging, such as in noisy environments. These deficits are associated with known neurophysiological correlates, such as a reduction in the functional activation or a modification of functional asymmetry in the cortical regions involved in speech processing, such as the bilateral superior temporal areas. These functional deficits have been associated with macroscopic morphological abnormalities, which potentially include a reduction in gray and white matter volumes, combined with modifications of the leftward asymmetry along the perisylvian areas. The purpose of this study was to investigate gray/white matter distribution asymmetries in dyslexic adults using automated image processing derived from the voxel-based morphometry technique. Correlations with speech-in-noise perception abilities were also investigated. The results confirmed the presence of gray matter distribution abnormalities in the superior temporal gyrus (STG) and the superior temporal sulcus (STS) in individuals with dyslexia. Specifically, the gray matter of adults with dyslexia was symmetrically distributed over one particular region of the STS, the temporal voice area, whereas normal readers showed a clear rightward gray matter asymmetry in this area. We also identified a region in the left posterior STG in which the white matter distribution asymmetry was correlated with speech-in-noise comprehension abilities in dyslexic adults. These results provide further information concerning the morphological alterations observed in dyslexia, revealing the presence of both gray and white matter distribution anomalies and the potential involvement of these defects in speech-in-noise deficits.


Scientific Reports | 2015

How musical expertise shapes speech perception: evidence from auditory classification images.

Léo Varnet; Tianyun Wang; Chloe Peter; Fanny Meunier; Michel Hoen

It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians’ higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the first formant onset and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.


PLOS ONE | 2013

Let's All Speak Together! Exploring the Masking Effects of Various Languages on Spoken Word Identification in Multi-Linguistic Babble

Aurore Gautreau; Michel Hoen; Fanny Meunier

This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a −5 dB signal-to-noise ratio, SNR) and two lexical decision tasks (at −5 dB and 0 dB SNR) that were performed with spoken French target words. In these three experiments, we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained spectro-temporal information similar to babble but lacked linguistic information. At −5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian), with Italian and French hindering French target-word identification to a similar extent, whereas Irish led to significantly better performance on these tasks. By comparing the performances obtained with speech and fluctuating-noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
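For readers unfamiliar with how such stimuli are typically assembled, the following generic sketch mixes a target recording with a babble or fluctuating-noise masker at a prescribed SNR by scaling the masker’s RMS level. This reflects standard practice rather than the authors’ own stimulus-generation code, and the function name is illustrative.

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Return target + scaled masker so that the target/masker RMS ratio equals
    snr_db. Generic sketch; assumes both signals share length and sample rate."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    gain = rms(target) / (rms(masker) * 10.0 ** (snr_db / 20.0))
    # At snr_db = -5, the masker ends up ~1.78x the target RMS.
    return target + gain * masker
```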


PLOS ONE | 2015

A psychophysical imaging method evidencing auditory cue extraction during speech perception: A group analysis of auditory classification images

Léo Varnet; Kenneth Knoblauch; Willy Serniclaes; Fanny Meunier; Michel Hoen

Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation of continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
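The group-level inference mentioned at the end of the abstract can be illustrated with a simplified, one-dimensional cluster-based sign-permutation test in the spirit of Maris and Oostenveld (2007). The data, cluster-forming threshold, and permutation count below are invented, and only positive clusters are tested for brevity; the authors’ analysis operates on two-dimensional time-frequency classification images.

```python
import numpy as np
from scipy import stats

# Illustrative 1-D sketch of a cluster-based sign-permutation test applied to
# per-participant classification-image weights; all data here are simulated.
rng = np.random.default_rng(1)
n_subj, n_bins = 16, 100
data = rng.normal(0.0, 1.0, size=(n_subj, n_bins))
data[:, 40:55] += 0.8   # simulated "critical region" shared across the group

def cluster_masses(t_vals, thresh):
    """Sum t-values within contiguous runs exceeding the threshold."""
    masses, current = [], 0.0
    for t in t_vals:
        if t > thresh:
            current += t
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses or [0.0]

t_obs = stats.ttest_1samp(data, 0.0, axis=0).statistic
thresh = stats.t.ppf(0.975, df=n_subj - 1)   # cluster-forming threshold (assumed)
obs_masses = cluster_masses(t_obs, thresh)

n_perm = 1000
null_max = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))   # random sign flips per subject
    t_perm = stats.ttest_1samp(signs * data, 0.0, axis=0).statistic
    null_max[i] = max(cluster_masses(t_perm, thresh))

# Cluster-level p-values: how often a permutation produces a larger cluster mass.
p_values = [(null_max >= m).mean() for m in obs_masses]
```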


Frontiers in Human Neuroscience | 2015

Decomposability and mental representation of French verbs

Gustavo L. Estivalet; Fanny Meunier

In French, regardless of stem regularity, inflectional verbal suffixes are extremely regular and paradigmatic. Considering the complexity of the French verbal system, we argue that all French verbs are polymorphemic forms that are decomposed during visual recognition independently of their stem regularity. We conducted a behavioral experiment in which we manipulated the surface and cumulative frequencies of inflected verbal forms and asked participants to perform a visual lexical decision task. We tested four types of verbs with respect to their stem variants: a. fully regular (parler “to speak,” [parl-]); b. phonological change e/E verbs with orthographic markers (répéter “to repeat,” [répét-] and [répèt-]); c. phonological change o/O verbs without orthographic markers (adorer “to adore,” [ador-] and [adOr-]); and d. idiosyncratic (boire “to drink,” [boi-] and [buv-]). For each type of verb, we contrasted four conditions: forms with high and low surface frequencies and forms with high and low cumulative frequencies. Our results showed a significant cumulative frequency effect for the fully regular and idiosyncratic verbs, indicating that different stems within idiosyncratic verbs (such as [boi-] and [buv-]) have distinct representations in the mental lexicon, just as different fully regular verbs do. For the phonological change verbs, we found a significant cumulative frequency effect only when considering the two forms of the stem together ([répét-] and [répèt-]), suggesting that they share a single abstract and underspecified phonological representation. Our results also revealed a significant surface frequency effect for all types of verbs, which may reflect the recombination of the stem’s lexical representation with the functional information of the suffixes. Overall, these results indicate that all inflected verbal forms in French are decomposed during visual recognition and that this process could be due to the regularities of French inflectional verbal suffixes.
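The distinction between the surface frequency of an inflected form and the cumulative frequency of its lemma is central to the design, so here is a small sketch of the two measures computed from a hypothetical per-form frequency table (the counts are invented; real studies would take them from a lexical database).

```python
# Hedged sketch of the two frequency measures contrasted in the experiment.
# The counts below are invented for illustration; only the definitions matter.
form_freq = {
    ("parler", "parle"): 120, ("parler", "parlons"): 15, ("parler", "parlé"): 310,
    ("boire", "bois"): 85, ("boire", "buvons"): 9, ("boire", "bu"): 140,
}

def surface_frequency(lemma, form):
    # Frequency of one specific inflected form.
    return form_freq[(lemma, form)]

def cumulative_frequency(lemma):
    # Summed frequency of all inflected forms sharing the same lemma.
    return sum(freq for (lem, _), freq in form_freq.items() if lem == lemma)

print(surface_frequency("boire", "buvons"))   # 9
print(cumulative_frequency("boire"))          # 85 + 9 + 140 = 234
```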

Collaboration


Dive into Fanny Meunier's collaborations.

Top Co-Authors

Julien Meyer
Centre national de la recherche scientifique

Léo Varnet
Centre national de la recherche scientifique

Laure Dentel
Instituto Politécnico Nacional

Elsa Spinelli
Centre national de la recherche scientifique

Marjorie Dole
Centre national de la recherche scientifique

Juan Segui
Centre national de la recherche scientifique