Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Hélène Loevenbruck is active.

Publication


Featured research published by Hélène Loevenbruck.


Speech Communication | 2004

Visual perception of contrastive focus in reiterant French speech

Marion Dohen; Hélène Loevenbruck; Marie-Agnès Cathiard; Jean-Luc Schwartz

The aim of this paper is to study how contrastive focus is conveyed by prosody, both articulatorily and acoustically, and how viewers extract focus structure from visual prosodic realizations. Is the visual modality useful for the perception of prosody? An audiovisual corpus was recorded from a male native speaker of French. The sentences had a subject-verb-object (SVO) structure. Four contrastive focus conditions were studied: focus on each phrase (S, V or O) and broad focus. Normal and reiterant modes were recorded; only the latter was studied. An acoustic validation (fundamental frequency, duration and intensity) showed that the speaker had pronounced the utterances with a typical focused intonation on the focused phrase. Then, lip height and jaw opening were extracted from the video data. An articulatory analysis suggested a set of possible visual cues to focus for reiterant /ma/ speech: (a) prefocal lengthening; (b) large jaw opening and high opening velocities on all the focused syllables; (c) long lip closure for the first focused syllable; and (d) hypo-articulation (reduced jaw opening and duration) of the following phrases. A visual perception test was developed. It showed that (a) contrastive focus was well perceived visually for reiterant speech; (b) no training was necessary; and (c) subject focus was slightly easier to identify than the other focus conditions. We also found that when the visual cues identified in our articulatory analysis were present and marked, perception was enhanced. This suggests that the visual cues extracted from the corpus are probably the ones that are indeed perceptually salient.


Language and Speech | 2009

Interaction of audition and vision for the perception of prosodic contrastive focus.

Marion Dohen; Hélène Loevenbruck

Prosodic contrastive focus is used to attract the listener's attention to a specific part of the utterance. Mostly conceived of as auditory/acoustic, it also has visible correlates which have been shown to be perceived. This study aimed at analyzing auditory-visual perception of prosodic focus by elaborating a paradigm enabling the measurement of an auditory-visual advantage (avoiding the ceiling effect) and by examining the interaction between audition and vision. A first experiment proved the efficiency of a whispered speech paradigm to measure an auditory-visual advantage for the perception of prosodic features. A second experiment used this paradigm to examine and characterize the auditory-visual perceptual processes. It combined performance assessment (focus detection score) with reaction time measurements, and confirmed and extended the results of the first experiment. This study showed that adding vision to audition for the perception of prosodic focus can not only improve focus detection but also reduce reaction times. A further analysis suggested that audition and vision are actually integrated for the perception of prosodic focus. Visual-only perception appeared to be facilitated for whispered speech, suggesting an enhancement of visual cues in whispering. Moreover, the potential influence of the presence of facial markers on perception is discussed.


Child Development Perspectives | 2014

On the Links Among Face Processing, Language Processing, and Narrowing During Development.

Olivier Pascalis; Hélène Loevenbruck; Paul C. Quinn; Sonia Kandel; James W. Tanaka; Kang Lee

From the beginning of life, face and language processing are crucial for establishing social communication. Studies on the development of systems for processing faces and language have yielded such similarities as perceptual narrowing across both domains. In this article, we review several functions of human communication, and then describe how the tools used to accomplish those functions are modified by perceptual narrowing. We conclude that narrowing is common to all forms of social communication. We argue that during evolution, social communication engaged different perceptual and cognitive systems—face, facial expression, gesture, vocalization, sound, and oral language—that emerged at different times. These systems are interactive and linked to some extent. In this framework, narrowing can be viewed as a way infants adapt to their native social group.


Attention Perception & Psychophysics | 2006

Multistable syllables as enacted percepts: A source of an asymmetric bias in the verbal transformation effect

Marc Sato; Jean-Luc Schwartz; Christian Abry; Marie-Agnès Cathiard; Hélène Loevenbruck

Perceptual changes are experienced during rapid and continuous repetition of a speech form, leading to an auditory illusion known as the verbal transformation effect. Although verbal transformations are considered to reflect mainly the perceptual organization and interpretation of speech, the present study was designed to test whether or not speech production constraints may participate in the emergence of verbal representations. With this goal in mind, we examined whether variations in the articulatory cohesion of repeated nonsense words—specifically, temporal relationships between articulatory events—could lead to perceptual asymmetries in verbal transformations. The first experiment revealed variations in timing relations between two consonantal gestures embedded in various nonsense syllables in a repetitive speech production task. In the second experiment, French participants repeatedly uttered these syllables while searching for verbal transformations. Syllable transformation frequencies followed the temporal clustering between consonantal gestures: The more synchronized the gestures, the more stable and attractive the syllable. In the third experiment, which involved a covert repetition mode, the pattern was maintained without external speech movements. However, when a purely perceptual condition was used in a fourth experiment, the previously observed perceptual asymmetries of verbal transformations disappeared. These experiments demonstrate the existence of an asymmetric bias in the verbal transformation effect linked to articulatory control constraints. The persistence of this effect from an overt to a covert repetition procedure provides evidence that articulatory stability constraints originating from the action system may be involved in auditory imagery. The absence of the asymmetric bias during a purely auditory procedure rules out perceptual mechanisms as a possible explanation of the observed asymmetries.


IEICE Electronics Express | 2009

Exploiting visual information for NAM recognition

Panikos Heracleous; Denis Beautemps; Viet-Anh Tran; Hélène Loevenbruck; Gérard Bailly

Non-audible murmur (NAM) is unvoiced speech received through body tissue using special acoustic sensors (i.e., NAM microphones) attached behind the talker's ear. Although NAM has different frequency characteristics compared to normal speech, it is possible to perform automatic speech recognition (ASR) using conventional methods. In using a NAM microphone, body transmission and the loss of lip radiation act as a low-pass filter; as a result, higher frequency components are attenuated in the NAM signal. A decrease in NAM recognition performance is attributed to this spectral reduction. To address the problem of loss of lip radiation, visual information extracted from the talker's facial movements is fused with NAM speech. Experimental results revealed a relative improvement of 39% when fused NAM speech and facial information were used as compared to using only NAM speech. Results also showed that improvements in the recognition rate depend on the place of articulation.


PLOS ONE | 2017

Audio-Visual Perception of Gender by Infants Emerges Earlier for Adult-Directed Speech

Anne-Raphaëlle Richoz; Paul C. Quinn; Anne Hillairet de Boisferon; Carole Berger; Hélène Loevenbruck; David J. Lewkowicz; Kang Lee; Marjorie Dole; Roberto Caldara; Olivier Pascalis

Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed advantages in matching female relative to male faces and voices. Moreover, the new finding that emerged in the current study was that extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ for IDS versus ADS. The results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are directly talking to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender.


Journal of the Acoustical Society of America | 1999

Articulatory effects of contrastive emphasis on the Accentual Phrase in French

Hélène Loevenbruck

Recent work (Beckman, 1996) shows that prosody is itself a complex linguistic structure, and it is imperative to better describe its phonological and phonetic (acoustic and articulatory) characteristics. Articulatory studies of French prosody provide variable conclusions. The irregularities could come from the fact that prosodic structure is rarely considered and that different phenomena ("accents primaires," "secondaires") are examined together. Articulatory correlates of a prosodic entity, the Accentual Phrase (AP), are studied here, using a model of French prosody (Fougeron and Jun, 1998). The AP features an initial high tone Hi, also called "accent secondaire," a final high tone H* ("primaire"), and two low L tones preceding them. Sentences containing four-syllable words (APs) were recorded for two French speakers (one male, one female), using EMA. The position of the AP in the sentence varied, and several speaking conditions were elicited. Displacement, peak velocity, and movement duration are a...


Clinical Linguistics & Phonetics | 2017

Realisation of voicing by French-speaking CI children after long-term implant use: An acoustic study

Bénédicte Grandon; Anne Vilain; Hélène Loevenbruck; Sébastien Schmerber; Eric Truy

Studies of speech production in French-speaking cochlear-implanted (CI) children are very scarce. Yet, difficulties in speech production have been shown to impact the intelligibility of these children. The goal of this study is to understand the effect of long-term use of a cochlear implant on speech production, and more precisely on the coordination of laryngeal-oral gestures in stop production. The participants were all monolingual French children: 13 CI children aged 6;6 to 10;7 and 20 age-matched normally hearing (NH) children. We compared /p/, /t/, /k/, /b/, /d/ and /g/ in word-initial consonant-vowel sequences, produced in isolation in two different tasks, and we studied the effects of CI use, vowel context, task and age factors (i.e. chronological age, age at implantation and duration of implant use). Statistical analyses show a difference in voicing production between groups for voiceless consonants (shorter Voice Onset Times for CI children), with significance reached only for /k/, but no difference for voiced consonants. Our study indicates that in the long run, use of a CI seems to have limited effects on the acquisition of the oro-laryngeal coordination needed to produce voicing, except for specific difficulties located on velars. In a follow-up study, further acoustic analyses of vowel and fricative production by the same children reveal more difficulties, which suggests that cochlear implantation impacts frequency-based features (second formant of vowels and spectral moments of fricatives) more than durational cues (voicing).


Embodied and Situated Language Processing: Stepping out of the Frame (ESLP 2015) | 2015

Orofacial electromyographic correlates of induced verbal rumination

Ladislas Nalborczyk; Céline Baeyens; Romain Grandchamp; Hélène Loevenbruck; Mircea Polosan

Rumination is predominantly experienced in the form of repetitive verbal thoughts. Verbal rumination is a particular case of inner speech. According to the Motor Simulation view, inner speech is a kind of motor action, recruiting the speech motor system. In this framework, we predicted an increase in speech muscle activity during rumination as compared to rest. We also predicted increased forehead activity, associated with anxiety during rumination. We measured electromyographic activity over the orbicularis oris superior and inferior, frontalis and flexor carpi radialis muscles. Results showed increased lip and forehead activity after rumination induction compared to an initial relaxed state, together with increased self-reported levels of rumination. Moreover, our data suggest that orofacial relaxation is more effective in reducing rumination than non-orofacial relaxation. Altogether, these results support the hypothesis that verbal rumination involves the speech motor system, and provide a promising psychophysiological index to assess the presence of verbal rumination.


Schizophrenia Research | 2010

Verbal thought generation in schizophrenia patients is associated with aberrant activation in a neural network involving task-positive and task-negative aspects

Lucile Rapin; Paul D. Metzak; Jennifer C. Whitman; Marion Dohen; Hélène Loevenbruck; Marc Sato; Todd S. Woodward

Schizophrenia, particularly auditory verbal hallucinations (AVH), has been associated with impairments in source monitoring, where patients tend to misattribute the source of an internal speech event to an external agent. Previous research has proposed that abnormalities in generating thoughts induce more vivid auditory sensations in schizophrenia patients through a failure of corollary discharge between the frontal and the temporal cortices (Frith, 1996). The patients are able to generate willed actions but cannot control the intentions behind them and therefore experience them as originating from an external source. This could account for source attribution errors and, at a higher threshold, could lead to AVH. In the present study, we investigated the neural underpinnings of a verbal thought generation (VTG) task using fMRI in 5 schizophrenia patients (DSM-IV; mean age = 33.8; sd = 7.53) and in 12 healthy controls (mean age = 25.9; sd = 7.08). The study sought to investigate the patterns of cerebral activation associated with generating thoughts in schizophrenia patients. Methods: Two conditions were examined. In the first condition, participants were required to mentally generate a definition of a common word presented on the screen. In the second condition, they had to listen to the definition of a common word presented on the screen. An event-related fMRI protocol was used during two 9.25-minute scanning sessions in a 3T scanner. Results: Statistical analyses were performed using constrained principal component analysis (CPCA) with a finite impulse response (FIR) model. During the mental generation task, activations (task-positive network) were observed for both groups in the anterior cingulate (BA 32) and the left prefrontal (BA 47) gyri, while deactivations (task-negative network) included the posterior cingulate cortex (BA 31), the medial frontal gyrus bilaterally (BA 10), and the bilateral angular gyrus (BA 39, 40). Importantly, this network showed less activation of the task-positive and more deactivation of the task-negative networks in schizophrenia patients relative to healthy controls. By contrast, no group differences were detected in the listening-only condition, with activations found within auditory superior temporal and dorsolateral frontal regions in both groups. Discussion: These results suggest abnormalities in task-positive and task-negative networks associated with the generation of thoughts in schizophrenia, but these abnormalities were not found during listening. Given the hypothesized role of the above-mentioned regions in internal speech and self-attributed mental processes (Buckner et al., 2008), these abnormalities might result in self-referential misattribution, which may play a part in the genesis of auditory verbal hallucinations.

Collaboration


Dive into Hélène Loevenbruck's collaborations.

Top Co-Authors

Marion Dohen (Grenoble Institute of Technology)

Cédric Pichat (Centre national de la recherche scientifique)

Monica Baciu (Centre national de la recherche scientifique)

Jean-Luc Schwartz (Centre national de la recherche scientifique)

Anne Vilain (Institut Universitaire de France)

Marc Sato (University of Grenoble)

Olivier Pascalis (Centre national de la recherche scientifique)

Lucile Rapin (Grenoble Institute of Technology)

Marcela Perrone (Centre national de la recherche scientifique)