Publication


Featured research published by Hélène Lœvenbruck.


Behavioural Brain Research | 2014

What is that little voice inside my head? Inner speech phenomenology, its role in cognitive performance, and its relation to self-monitoring

Lucile Rapin; Jean-Philippe Lachaux; Monica Baciu; Hélène Lœvenbruck

The little voice inside our head, or inner speech, is a common everyday experience. It plays a central role in human consciousness at the interplay of language and thought. A substantial body of research has been carried out on inner speech over the last fifty years. Here we first describe the phenomenology of inner speech by examining five issues: common behavioural and cerebral correlates with overt speech, different types of inner speech (wilful verbal thought generation and verbal mind wandering), the presence of inner speech in reading and in writing, inner signing, and voice hallucinations in deaf people. Second, we review the role of inner speech in cognitive performance (i.e., enhancement vs. perturbation). Finally, we consider agency in inner speech and how our inner voice is known to be self-generated and not produced by someone else.


Schizophrenia Bulletin | 2015

Left-Dominant Temporal-Frontal Hypercoupling in Schizophrenia Patients With Hallucinations During Speech Perception

Katie M. Lavigne; Lucile Rapin; Paul D. Metzak; Jennifer C. Whitman; Kwanghee Jung; Marion Dohen; Hélène Lœvenbruck; Todd S. Woodward

BACKGROUND Task-based functional neuroimaging studies of schizophrenia have not yet replicated the increased coordinated hyperactivity in speech-related brain regions that is reported with symptom-capture and resting-state studies of hallucinations. This may be due to suboptimal selection of cognitive tasks. METHODS In the current study, we used a task that allowed experimental manipulation of control over verbal material and compared brain activity between 23 schizophrenia patients (10 hallucinators, 13 nonhallucinators), 22 psychiatric (bipolar) controls, and 27 healthy controls. Two conditions were presented, one involving inner verbal thought (in which control over verbal material was required) and another involving speech perception (SP; in which control over verbal material was not required). RESULTS A functional connectivity analysis resulted in a left-dominant temporal-frontal network that included speech-related auditory and motor regions and showed hypercoupling in past-week hallucinating schizophrenia patients (relative to nonhallucinating patients) during SP only. CONCLUSIONS These findings replicate our previous work showing generalized speech-related functional network hypercoupling in schizophrenia during inner verbal thought and SP, but extend it by suggesting that hypercoupling is related to past-week hallucination severity scores during SP only, when control over verbal material is not required. This result opens the possibility that practicing control over inner verbal thought processes may decrease the likelihood or severity of hallucinations.


PLOS ONE | 2014

Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

Claudia Kubicek; Anne Hillairet de Boisferon; Eve Dupierrix; Olivier Pascalis; Hélène Lœvenbruck; Judit Gervain; Gudrun Schwarzer

The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results are discussed with regard to multisensory perceptual narrowing during the first year of life.


Psychiatry Research: Neuroimaging | 2012

Hyperintensity of functional networks involving voice-selective cortical regions during silent thought in schizophrenia

Lucile Rapin; Marion Dohen; Hélène Lœvenbruck; Jennifer C. Whitman; Paul D. Metzak; Todd S. Woodward

An important aspect of schizophrenia symptomatology is inner-outer confusion, or blurring of ego boundaries, which is linked to symptoms such as hallucinations and Schneiderian delusions. Dysfunction in the cognitive processes involved in the generation of private thoughts may contribute to blurring of the ego boundaries through increased activation in functional networks including speech- and voice-selective cortical regions. In the present study, the neural underpinnings of silent verbal thought generation and speech perception were investigated using functional magnetic resonance imaging (fMRI). Functional connectivity analysis was performed using constrained principal component analysis for fMRI (fMRI-CPCA). Group differences were observable on two functional networks: one reflecting hyperactivity in speech- and voice-selective cortical regions (e.g., the bilateral superior temporal gyri, STG) during both speech perception and silent verbal thought generation, and another involving hyperactivity in a multiple-demands (i.e., task-positive) network that included Wernicke's area, during silent verbal thought generation. This set of preliminary results suggests that hyperintensity of functional networks involving voice-selective cortical regions may contribute to the blurring of ego boundaries characteristic of schizophrenia.
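The connectivity method named above, constrained principal component analysis for fMRI (fMRI-CPCA), essentially restricts a principal component analysis to the portion of the BOLD signal that is predictable from the task timing. Below is a minimal sketch of that two-step logic; the array shapes, random data and variable names are illustrative assumptions, not material from the study.

```python
# Illustrative sketch of the core of constrained PCA for fMRI (fMRI-CPCA):
# the BOLD data are first regressed on the task design matrix, and a PCA
# (via SVD) is then applied to the task-predictable part of the signal.
# Array shapes and variable names are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels, n_predictors = 200, 500, 6

Z = rng.standard_normal((n_scans, n_voxels))      # BOLD time series (scans x voxels)
G = rng.standard_normal((n_scans, n_predictors))  # task design matrix (scans x predictors)

# Step 1: multivariate least-squares regression of Z on G
C, *_ = np.linalg.lstsq(G, Z, rcond=None)         # regression weights (predictors x voxels)
GC = G @ C                                        # task-predictable portion of the BOLD signal

# Step 2: PCA (SVD) of the predicted data; rows of Vt are spatial components
# ("functional networks") and U * s gives their time courses
U, s, Vt = np.linalg.svd(GC, full_matrices=False)
variance_explained = s**2 / np.sum(s**2)
print("Variance explained by first 3 components:", variance_explained[:3])
```

Group comparisons such as the hypercoupling reported above would then be carried out on the component time courses and loadings, which is beyond this sketch.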


International Journal of Behavioral Development | 2013

Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech

Claudia Kubicek; Anne Hillairet de Boisferon; Eve Dupierrix; Hélène Lœvenbruck; Judit Gervain; Gudrun Schwarzer

The present eye-tracking study aimed to investigate the impact of auditory speech information on 12-month-olds’ gaze behavior to silently-talking faces. We examined German infants’ face-scanning behavior during side-by-side presentation of a bilingual speaker’s face silently speaking German utterances on one side and French on the other, before and after auditory familiarization with one of the two languages. The results showed that 12-month-old infants had no general visual preference for either of the visual speeches, either before or after auditory input. However, infants who heard native speech decreased their looking time to the mouth area and focused longer on the eyes compared to their scanning behavior without auditory language input, whereas infants who heard non-native speech increased their visual attention to the mouth region and focused less on the eyes. Thus, it can be assumed that 12-month-olds quickly identified their native language based on auditory speech and guided their visual attention more to the eye region than infants who had listened to non-native speech.


Human Brain Mapping | 2013

Neural correlates of the perception of contrastive prosodic focus in French: a functional magnetic resonance imaging study.

Marion Dohen; Hélène Lœvenbruck; Marc Sato; Cédric Pichat; Monica Baciu

This functional magnetic resonance imaging (fMRI) study aimed at examining the cerebral regions involved in the auditory perception of prosodic focus using a natural focus detection task. Two conditions testing the processing of simple utterances in French were explored, narrow-focused versus broad-focused. Participants performed a correction detection task. The utterances in both conditions had exactly the same segmental, lexical, and syntactic contents, and only differed in their prosodic realization. The comparison between the two conditions therefore allowed us to examine processes strictly associated with prosodic focus processing. To assess the specific effect of pitch on hemispheric specialization, a parametric analysis was conducted using a parameter reflecting pitch variations specifically related to focus. The comparison between the two conditions reveals that brain regions recruited during the detection of contrastive prosodic focus can be described as a right-hemisphere dominant dual network consisting of (a) ventral regions which include the right posterosuperior temporal and bilateral middle temporal gyri and (b) dorsal regions including the bilateral inferior frontal, inferior parietal and left superior parietal gyri. Our results argue for a dual stream model of focus perception compatible with the asymmetric sampling in time hypothesis. They suggest that the detection of prosodic focus involves an interplay between the right and left hemispheres, in which the computation of slowly changing prosodic cues in the right hemisphere dynamically feeds an internal model concurrently used by the left hemisphere, which carries out computations over shorter temporal windows.


British Journal of Psychology | 2017

More evidence of the linkage between face processing and language processing

Olivier Pascalis; Marjorie Dole; Hélène Lœvenbruck

This review of the literature on the emergence of language describes two opposing views of phonological development, the sound-based versus the whole-word-based accounts. An integrative model is proposed which claims that learning sublexical speech sounds and producing wordlike vocalizations are in fact parallel processes that feed each other during language development. We argue that this model might find unexpected support from the face processing literature.


Clinical Linguistics & Phonetics | 2018

Speech recovery and language plasticity can be facilitated by Sensori-Motor Fusion training in chronic non-fluent aphasia. A case report study

Célise Haldin; Audrey Acher; Louise Kauffmann; Thomas Hueber; Emilie Cousin; Pierre Badin; Pascal Perrier; Diandra Fabre; D. Pérennou; Olivier Detante; Assia Jaillard; Hélène Lœvenbruck; Monica Baciu

The rehabilitation of speech disorders benefits from providing visual information which may improve speech motor plans in patients. We tested the proof of concept of a rehabilitation method (Sensori-Motor Fusion, SMF; Ultraspeech player) in one post-stroke patient presenting chronic non-fluent aphasia. SMF allows visualisation by the patient of target tongue and lip movements using high-speed ultrasound and video imaging. This can improve the patient’s awareness of his/her own lingual and labial movements, which can, in turn, improve the representation of articulatory movements and increase the ability to coordinate and combine articulatory gestures. The auditory and oro-sensory feedback that the patient receives from his/her own pronunciation can be integrated with the displayed target articulatory movements. Thus, this method is founded on sensorimotor integration during speech. The SMF effect on this patient was assessed through qualitative comparison of language scores and quantitative analysis of acoustic parameters measured in a speech production task, before and after rehabilitation. We also investigated cerebral patterns of language reorganisation for rhyme detection and syllable repetition, to evaluate the influence of SMF on phonological-phonetic processes. Our results showed that SMF had a beneficial effect on this patient, who qualitatively improved in naming, reading, word repetition and rhyme judgment tasks. Quantitative measurements of acoustic parameters indicate that the patient’s production of vowels and syllables also improved. Compared with pre-SMF, the fMRI data in the post-SMF session revealed the activation of cerebral regions related to articulatory, auditory and somatosensory processes, which were expected to be recruited by SMF. We discuss the neurocognitive and linguistic mechanisms which may explain speech improvement after SMF, as well as the advantages of using this speech rehabilitation method.


Biological Psychology | 2017

Orofacial electromyographic correlates of induced verbal rumination

Ladislas Nalborczyk; Céline Baeyens; Romain Grandchamp; Mircea Polosan; Elsa Spinelli; Ernst H. W. Koster; Hélène Lœvenbruck

Rumination is predominantly experienced in the form of repetitive verbal thoughts. Verbal rumination is a particular case of inner speech. According to the Motor Simulation view, inner speech is a kind of motor action, recruiting the speech motor system. In this framework, we predicted an increase in speech muscle activity during rumination as compared to rest. We also predicted increased forehead activity, associated with anxiety during rumination. We measured electromyographic activity over the orbicularis oris superior and inferior, frontalis and flexor carpi radialis muscles. Results showed increased lip and forehead activity after rumination induction compared to an initial relaxed state, together with increased self-reported levels of rumination. Moreover, our data suggest that orofacial relaxation is more effective in reducing rumination than non-orofacial relaxation. Altogether, these results support the hypothesis that verbal rumination involves the speech motor system, and provide a promising psychophysiological index to assess the presence of verbal rumination.
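As a rough illustration of the measurement described above, the sketch below compares surface EMG amplitude (root-mean-square of the band-pass-filtered signal) between a relaxed baseline and a post-induction period for a lip sensor. The sampling rate, filter settings and synthetic data are assumptions for illustration only and do not reflect the study's actual acquisition or analysis pipeline.

```python
# Minimal sketch: compare surface EMG amplitude (RMS) over a lip sensor
# (orbicularis oris) between a relaxed baseline and a post-induction period.
# Sampling rate, band-pass limits and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000  # assumed sampling rate in Hz

def emg_rms(trace, fs, low=20.0, high=450.0):
    """Band-pass filter an EMG trace and return its RMS amplitude."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trace)
    return np.sqrt(np.mean(filtered ** 2))

# Hypothetical one-minute recordings for one participant (replace with real data)
rng = np.random.default_rng(1)
baseline_lip = rng.standard_normal(60 * fs) * 5.0    # microvolts, relaxed state
induction_lip = rng.standard_normal(60 * fs) * 8.0   # microvolts, after rumination induction

ratio = emg_rms(induction_lip, fs) / emg_rms(baseline_lip, fs)
print(f"Lip EMG amplitude, induction / baseline: {ratio:.2f}")
```

A ratio above 1 for the lip and forehead sensors, but not for the control (forearm) sensor, would correspond to the pattern of results reported above.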


Journal of the Acoustical Society of America | 1996

Motor control information recovering using a target‐based model of articulatory trajectory generation

Hélène Lœvenbruck; Pascal Perrier

A quantitative target-based model of articulatory trajectory formation in speech is proposed here, in which, for a given vowel sequence, equilibrium targets are assumed to remain identical, independent of speaking rate and stress; only their timing and the muscle cocontraction level are adjusted to follow prosodic requirements. This model is used to examine how relevant motor control information could be extracted from the acoustic signal to help identify vowels by providing clues on the stress or rate conditions. Sequences [iai] and [iei] under three prosodic conditions (slow stressed, the ideal condition; slow unstressed and fast stressed, the reduced conditions) are analyzed. Equilibrium targets are set to the actual positions reached by the tongue body under the ideal condition. The cocontraction level and the timing of the commands are inferred using a two-step inversion procedure: from the acoustic signal to tongue body trajectories, then to motor commands. It is shown that at a given speaking rate, ...
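To make the target-based idea above concrete, the sketch below drives a single point articulator toward fixed equilibrium targets with a critically damped second-order system, varying only stiffness (a crude stand-in for cocontraction level) and target timing between an ideal and a reduced condition. This is a simplified illustration under assumed dynamics and parameter values, not the articulatory model used in the paper.

```python
# Minimal sketch of target-based trajectory generation: a point articulator is driven
# toward piecewise-constant equilibrium targets by a critically damped second-order
# system. Only the stiffness (standing in for cocontraction level) and target timing
# change between conditions; the targets themselves stay the same.
import numpy as np

def simulate(targets, durations, stiffness, dt=0.001):
    """Integrate x'' = k (target - x) - 2 sqrt(k) x' over successive targets."""
    x, v = targets[0], 0.0
    trajectory = []
    for target, dur in zip(targets, durations):
        for _ in range(int(dur / dt)):
            a = stiffness * (target - x) - 2.0 * np.sqrt(stiffness) * v
            v += a * dt
            x += v * dt
            trajectory.append(x)
    return np.array(trajectory)

targets = [0.0, 1.0, 0.0]                                          # same equilibrium targets
ideal = simulate(targets, [0.25, 0.25, 0.25], stiffness=400.0)     # slow, stressed
reduced = simulate(targets, [0.12, 0.12, 0.12], stiffness=150.0)   # fast, lower cocontraction

print("Peak displacement, ideal vs reduced:", ideal.max(), reduced.max())
```

With identical targets, the shorter durations and lower stiffness of the reduced condition produce undershoot of the peak displacement, which is the kind of cue the inversion procedure described above could exploit.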

Collaboration


Dive into Hélène Lœvenbruck's collaborations.

Top Co-Authors

Marion Dohen (Grenoble Institute of Technology)
Olivier Pascalis (Centre national de la recherche scientifique)
Anne Hillairet de Boisferon (Centre national de la recherche scientifique)
Judit Gervain (Paris Descartes University)
Lucile Rapin (Université du Québec à Montréal)
Eve Dupierrix (Centre national de la recherche scientifique)
Monica Baciu (Centre national de la recherche scientifique)
Pascal Perrier (Centre national de la recherche scientifique)