
Publication


Featured research published by Michel Hoen.


Neuroreport | 2000

ERP analysis of cognitive sequencing: a left-anterior negativity related to structural transformation processing

Michel Hoen; Peter Ford Dominey

A major objective of cognitive neuroscience is to identify those neurocomputational processes that may be shared by multiple cognitive functions versus those that are highly specific. This problem of identifying general versus specialized functions is of particular interest in the domain of language processing. Within this domain, event-related brain potential (ERP) studies have demonstrated a left anterior negativity (LAN) in the 300–700 ms range, associated with syntactic processing and often linked to grammatical function words. These words have little or no semantic content, but rather play a role in encoding the syntactic structure required for parsing. In the current study we test the hypothesis that the LAN reflects the operation of a more general sequence processing capability in which special symbols encode structural information that, when combined with past elements in the sequence, allows the prediction of successor elements. We recorded ERPs during a non-linguistic sequencing task that required subjects (n = 10) to process special symbols possessing the functional property defined above. When compared to ERPs in a control condition, function symbol processing elicited a left anterior negative shift with temporal and spatial characteristics quite similar to the LAN described during function word processing in language, supporting our hypothesis. These results are discussed in the context of related studies of syntactic and cognitive sequence processing.
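
A minimal sketch (not the authors' analysis pipeline) of how such an ERP comparison can be quantified: average the epochs per condition, form the difference over a left-anterior channel group, and take the mean amplitude in the 300–700 ms window. Array shapes, channel indices and the sampling rate below are hypothetical.

```python
import numpy as np

# Hypothetical epoched data: (n_trials, n_channels, n_samples), sampled at 250 Hz,
# epochs running from -200 ms to 1000 ms relative to stimulus onset.
sfreq = 250.0
epoch_start = -0.2                    # seconds
left_anterior_chans = [0, 1, 4]       # hypothetical indices of left-anterior electrodes

def mean_window_amplitude(epochs, t_min=0.3, t_max=0.7):
    """Mean amplitude over left-anterior channels in the 300-700 ms window."""
    times = epoch_start + np.arange(epochs.shape[-1]) / sfreq
    mask = (times >= t_min) & (times <= t_max)
    erp = epochs.mean(axis=0)                        # average over trials -> (channels, samples)
    return erp[left_anterior_chans][:, mask].mean()  # scalar amplitude (e.g. in microvolts)

def lan_effect(function_epochs, control_epochs):
    """Negative values indicate a left-anterior negativity for the function-symbol condition."""
    return mean_window_amplitude(function_epochs) - mean_window_amplitude(control_epochs)
```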


Frontiers in Neurorobotics | 2010

Linking language with embodied and teleological representations of action for humanoid cognition.

Stéphane Lallée; Carol J. Madden; Michel Hoen; Peter Ford Dominey

The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action.
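
A rough sketch of what a state-action-state (SAS) inventory could look like as a data structure. The class names, state encoding and learning interface are invented for illustration and are not the authors' implementation.

```python
from dataclasses import dataclass, field

# A state is modelled here as a frozenset of symbolic predicates, e.g. {"covered(toy)"}.
State = frozenset

@dataclass
class SASInventory:
    """Inventory of state-action-state triples learned from observation or from language."""
    triples: list = field(default_factory=list)   # (initial_state, action, final_state)

    def observe(self, initial: State, action: str, final: State):
        """Add a triple learned from perception or from a causal utterance ('if ... then ...')."""
        self.triples.append((initial, action, final))

    def actions_achieving(self, goal_predicate: str):
        """Goal-based reasoning: which known actions produce a state containing the goal?"""
        return [(action, initial) for initial, action, final in self.triples
                if goal_predicate in final]

# Example: learn that covering the toy with the box results in the toy being covered.
inventory = SASInventory()
inventory.observe(State({"visible(toy)"}), "cover(toy, box)", State({"covered(toy)"}))
print(inventory.actions_achieving("covered(toy)"))
```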


Neuropsychologia | 2012

Speech-in-noise perception deficit in adults with dyslexia: effects of background type and listening configuration.

Marjorie Dole; Michel Hoen; Fanny Meunier

Developmental dyslexia is associated with impaired speech-in-noise perception. The goal of the present research was to further characterize this deficit in dyslexic adults. In order to specify the mechanisms and processing strategies used by adults with dyslexia during speech-in-noise perception, we explored the influence of background type, presenting single target words against backgrounds made of cocktail-party sounds, modulated speech-derived noise or stationary noise. We also evaluated the effect of three listening configurations differing in terms of the amount of spatial processing required. In a monaural condition, signal and noise were presented to the same ear; in a dichotic situation, target and concurrent sound were presented to two different ears; finally, in a spatialised configuration, target and competing signals were presented as if they originated from slightly different positions in the auditory scene. Our results confirm the presence of a speech-in-noise perception deficit in dyslexic adults, in particular when the competing signal is also speech and when both signals are presented to the same ear, an observation potentially relating to phonological accounts of dyslexia. However, adults with dyslexia demonstrated greater spatial release from masking than normal-reading controls when the background was speech, suggesting that they are well able to rely on denoising strategies based on spatial auditory scene analysis.
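
As a worked example of the kind of measure involved, spatial release from masking is commonly computed as the difference between speech reception thresholds (in dB SNR) in the co-located (monaural) and spatially separated configurations. The sketch below, with made-up threshold values, illustrates only that arithmetic, not the authors' exact protocol.

```python
# Hypothetical speech reception thresholds (dB SNR at 50% correct), one value per listener.
monaural_thresholds    = [-2.0, -1.5, -3.0]   # target and masker in the same ear
spatialised_thresholds = [-8.0, -6.5, -7.0]   # target and masker virtually separated in space

# Spatial release from masking: improvement (in dB) gained from spatial separation.
srm = [m - s for m, s in zip(monaural_thresholds, spatialised_thresholds)]
print(srm)  # [6.0, 5.0, 4.0] dB of release
```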


Neuropsychologia | 2014

Functional correlates of the speech-in-noise perception impairment in dyslexia: An MRI study

Marjorie Dole; Fanny Meunier; Michel Hoen

Dyslexia is a language-based neurodevelopmental disorder characterized by a persistent deficit in reading and spelling. These difficulties have been shown to result from an underlying impairment of the phonological component of language, possibly also affecting speech perception. Although there is little evidence for such a deficit under optimal, quiet listening conditions, speech perception difficulties in adults with dyslexia are often reported under more challenging conditions, such as when speech is masked by noise. Previous studies have shown that these difficulties are more pronounced when the background noise is speech and when little spatial information is available to facilitate differentiation between target and background sound sources. In this study, we investigated the neuroimaging correlates of speech-in-speech perception in typical readers and participants with dyslexia, focusing on the effects of different listening configurations. Fourteen adults with dyslexia and 14 matched typical readers performed a subjective intelligibility rating test with single words presented against concurrent speech during functional magnetic resonance imaging (fMRI) scanning. Target words were always presented with a four-talker background in one of three listening configurations: Dichotic, Binaural or Monaural. The results showed that in the Monaural configuration, in which no spatial information was available and energetic masking was maximal, intelligibility was severely decreased in all participants, and this effect was particularly strong in participants with dyslexia. Functional imaging revealed that in this configuration, participants partially compensated for their poorer listening abilities by recruiting several areas in the cerebral networks engaged in speech perception. In the Binaural configuration, participants with dyslexia achieved the same performance level as typical readers, suggesting that they were able to use spatial information when available. This result was, however, associated with increased activation in the right superior temporal gyrus, suggesting the need to reallocate neural resources to overcome speech-in-speech difficulties. Taken together, these results provide further understanding of the speech-in-speech perception deficit observed in dyslexia.


Frontiers in Human Neuroscience | 2013

Using auditory classification images for the identification of fine acoustic cues used in speech perception

Léo Varnet; Kenneth Knoblauch; Fanny Meunier; Michel Hoen

An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulation modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it is still a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another, as the auditory primitives relevant for the categorical perception of speech are still unknown. Here we propose to adapt a method relying on a Generalized Linear Model with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing smoother solutions. By applying this technique to a specific two-alternative forced-choice experiment between the stimuli “aba” and “ada” in noise with an adaptive SNR, we confirm that the second formant transition is key for classifying phonemes as /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally, we discuss some assumptions of the model in the specific case of speech perception.
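
A heavily simplified sketch of the classification-image idea: regress the listener's trial-by-trial responses on the noise fields added to the stimuli. Here a plain L2-penalized logistic regression from scikit-learn stands in for the Generalized Linear Model with smoothness priors described in the abstract, and the data shapes and responses are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical experiment: n_trials noisy stimuli, each noise field is a (freq x time)
# spectrogram flattened to a vector; responses code the listener's choice (0 = "aba", 1 = "ada").
rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 2000, 32, 40
noise_fields = rng.normal(size=(n_trials, n_freq * n_time))
responses = rng.integers(0, 2, size=n_trials)      # placeholder responses

# L2 penalty used here as a crude stand-in for the smoothness prior of the original method.
model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
model.fit(noise_fields, responses)

# The fitted weights, reshaped to (freq, time), form a rough "classification image":
# time-frequency regions whose noise content pushed responses toward one phoneme or the other.
classification_image = model.coef_.reshape(n_freq, n_time)
```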


PLOS ONE | 2013

Gray and White Matter Distribution in Dyslexia: A VBM Study of Superior Temporal Gyrus Asymmetry

Marjorie Dole; Fanny Meunier; Michel Hoen

In the present study, we investigated brain morphological signatures of dyslexia by using a voxel-based asymmetry analysis. Dyslexia is a developmental disorder that affects the acquisition of reading and spelling abilities and is associated with a phonological deficit. Speech perception disabilities have been associated with this deficit, particularly when listening conditions are challenging, such as in noisy environments. These deficits are associated with known neurophysiological correlates, such as a reduction in functional activation or a modification of functional asymmetry in the cortical regions involved in speech processing, such as the bilateral superior temporal areas. These functional deficits have been associated with macroscopic morphological abnormalities, which potentially include a reduction in gray and white matter volumes, combined with modifications of the leftward asymmetry along the perisylvian areas. The purpose of this study was to investigate gray/white matter distribution asymmetries in dyslexic adults using automated image processing derived from the voxel-based morphometry technique. Correlations with speech-in-noise perception abilities were also investigated. The results confirmed the presence of gray matter distribution abnormalities in the superior temporal gyrus (STG) and the superior temporal sulcus (STS) in individuals with dyslexia. Specifically, the gray matter of adults with dyslexia was symmetrically distributed over one particular region of the STS, the temporal voice area, whereas normal readers showed a clear rightward gray matter asymmetry in this area. We also identified a region in the left posterior STG in which the white matter distribution asymmetry was correlated with speech-in-noise comprehension abilities in dyslexic adults. These results provide further information concerning the morphological alterations observed in dyslexia, revealing the presence of both gray and white matter distribution anomalies and the potential involvement of these defects in speech-in-noise deficits.
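
For the asymmetry measure itself, a common formulation is a voxel-wise asymmetry index comparing a tissue map with its left-right mirrored copy, AI = 2(L - R)/(L + R). The sketch below is a generic numpy illustration (assuming the first array axis is the left-right axis, which depends on image orientation and prior registration to a symmetric template), not the authors' VBM pipeline.

```python
import numpy as np

def asymmetry_index(gm_map: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Voxel-wise asymmetry index AI = 2*(orig - flipped) / (orig + flipped).

    Assumes axis 0 is the left-right axis; in practice the image must first be
    registered to a symmetric template so that homologous voxels line up.
    """
    flipped = np.flip(gm_map, axis=0)                # mirror across the midsagittal plane
    return 2.0 * (gm_map - flipped) / (gm_map + flipped + eps)

# Positive values indicate more gray matter on one side than on its mirrored counterpart.
```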


Neuroreport | 2005

ERP correlates of lexical analysis: N280 reflects processing complexity rather than category or frequency effects.

Angèle Brunellière; Michel Hoen; Peter Ford Dominey

In the context of language processing, the N280 is an anterior negative event-related potential profile associated with the lexical categorization of grammatical function words versus content words. Subsequent studies suggested that this effect was related to word statistics including length and frequency in the lexicon. The current research tests the hypothesis that the N280 effect is related to an index of grammatical complexity. We recorded event-related potentials during a sentence reading task. Comparing content versus function words revealed the classic N280. Within function words, we compared the relative pronouns ‘qui’ and ‘que’ (which are identical in length and frequency) that in French indicate a subsequent simple (subject–subject) and complex (subject–object) relative clause, respectively. A left anterior N280 effect was observed only for ‘que’, supporting our hypothesis that the N280 reflects grammatical complexity, which can be confounded with lexical category and statistical properties.


Scientific Reports | 2015

How musical expertise shapes speech perception: evidence from auditory classification images.

Léo Varnet; Tianyun Wang; Chloe Peter; Fanny Meunier; Michel Hoen

It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians’ higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the first formant onset and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.


PLOS ONE | 2013

Let's All Speak Together! Exploring the Masking Effects of Various Languages on Spoken Word Identification in Multi-Linguistic Babble

Aurore Gautreau; Michel Hoen; Fanny Meunier

This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures, which included an intelligibility task (at a −5 dB signal-to-noise ratio) and two lexical decision tasks (at −5 dB and 0 dB SNR) that were performed with spoken French target words. In these three experiments, we always compared the masking effects of speech backgrounds (i.e., 4-talker babble) that were produced in the same language as the target language (i.e., French) or in unknown foreign languages (i.e., Irish and Italian) to the masking effects of corresponding non-speech backgrounds (i.e., speech-derived fluctuating noise). The fluctuating noise contained similar spectro-temporal information as babble but lacked linguistic information. At −5 dB SNR, both tasks revealed significantly divergent results between the unknown languages (i.e., Irish and Italian), with Italian and French hindering French target word identification to a similar extent, whereas Irish led to significantly better performance on these tasks. By comparing the performance obtained with speech and fluctuating noise backgrounds, we were able to evaluate the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting acoustic and linguistic effects for each language. However, the lexical decision task, which reduces the effect of post-lexical interference, appeared to be more accurate, as it only revealed a linguistic effect for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference was different. This finding suggests that the differences observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.
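
As an illustration of how a target word is mixed with a background at a prescribed SNR (for example the −5 dB used here), the masker can be scaled so that the RMS ratio matches the desired level. The sketch below uses placeholder signals and shows only that computation, not the authors' stimulus preparation.

```python
import numpy as np

def mix_at_snr(target: np.ndarray, masker: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the masker so that 20*log10(rms(target)/rms(masker)) equals snr_db, then sum."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20.0))
    return target + gain * masker

# Placeholder signals standing in for a spoken word and 4-talker babble.
fs = 16000
t = np.arange(fs) / fs
word = np.sin(2 * np.pi * 220 * t)
babble = np.random.default_rng(0).normal(size=fs)
mixture = mix_at_snr(word, babble, snr_db=-5.0)
```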


PLOS ONE | 2015

A psychophysical imaging method evidencing auditory cue extraction during speech perception: A group analysis of auditory classification images

Léo Varnet; Kenneth Knoblauch; Willy Serniclaes; Fanny Meunier; Michel Hoen

Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation of continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses about the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
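
A minimal sketch of the cluster-based nonparametric (permutation) logic mentioned in the abstract, in one dimension and with numpy/scipy only. The threshold, data shapes and sign-flip scheme are generic illustration, not the authors' exact statistics.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_permutation_test(diff, threshold=2.0, n_perm=1000, seed=0):
    """One-sample cluster permutation test on per-subject difference maps (subjects x points)."""
    rng = np.random.default_rng(seed)

    def cluster_masses(data):
        t = stats.ttest_1samp(data, 0.0, axis=0).statistic
        labels, n = ndimage.label(np.abs(t) > threshold)   # contiguous supra-threshold clusters
        return t, labels, [np.abs(t[labels == i]).sum() for i in range(1, n + 1)]

    t_obs, labels, obs_masses = cluster_masses(diff)

    # Null distribution of the maximum cluster mass under random sign flips of subjects.
    null = np.zeros(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        _, _, masses = cluster_masses(diff * signs)
        null[p] = max(masses) if masses else 0.0

    p_values = [(null >= m).mean() for m in obs_masses]
    return t_obs, labels, p_values
```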

Collaboration


Dive into Michel Hoen's collaborations.

Top Co-Authors

Fanny Meunier
Centre national de la recherche scientifique

Léo Varnet
Centre national de la recherche scientifique

Dan Gnansia
École Normale Supérieure

Marjorie Dole
Centre national de la recherche scientifique

Nicolas Guevara
University of Nice Sophia Antipolis