Brigitte Charlier
Université libre de Bruxelles
Publications
Featured research published by Brigitte Charlier.
Quarterly Journal of Experimental Psychology | 2000
Brigitte Charlier; Jacqueline Leybaert
Two experiments investigated whether profoundly deaf children's rhyming ability was determined by the linguistic input that they were exposed to in early childhood. Children educated with Cued Speech (CS) were compared to other deaf children, educated orally or with sign language. In CS, speechreading is combined with manual cues that disambiguate it. The central hypothesis is that CS allows deaf children to develop accurate phonological representations, which, in turn, assist in the emergence of accurate rhyming abilities. Experiment 1 showed that the deaf children educated early with CS performed better at rhyme judgement than did other deaf children. The performance of early CS-users was not influenced by word spelling. Experiment 2 confirmed this result in a rhyme generation task. Taken together, the results support the hypothesis that rhyming ability depends on early exposure to a linguistic input specifying all phonological contrasts, independently of the modality (visual or auditory) in which this input is perceived.
Scandinavian Journal of Psychology | 2012
Mario Aparicio; Philippe Peigneux; Brigitte Charlier; Charlotte Neyrat; Jacqueline Leybaert
It is known that deaf individuals usually outperform normal-hearing subjects in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normal-hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance, with the highest performance in the CS group, followed by the NCS group, and finally the NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS exposure has been shown to promote the development of accurate phonological processing, we propose that the higher speechreading performance of Cued Speech users is linked to a better capacity for phonological decoding of the visual articulators.
Frontiers in Psychology | 2017
Mario Aparicio; Philippe Peigneux; Brigitte Charlier; Danielle Balériaux; Martin Kavec; Jacqueline Leybaert
We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adults who were early CS users and hearing native users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone words, and lipread-alone words. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl's gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of the superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading in CS processing. The present study contributes to a better understanding of the role of manual cues in supporting visual speech perception within the framework of the multimodal nature of human communication.
Journal of Deaf Studies and Deaf Education | 1996
Jacqueline Leybaert; Brigitte Charlier
Archive | 1998
Jacqueline Leybaert; Jesus Alegria Iscoa; Catherine Hage; Brigitte Charlier; Ruth Campbell; Barbara Dodd; Denis Burnham
Archive | 2006
Catherine Hage; Brigitte Charlier; Jacqueline Leybaert
Archive | 1994
Brigitte Charlier; Jesus Alegria Iscoa
Cahiers de l'audition | 2008
Cécile Colin; Jacqueline Leybaert; Brigitte Charlier; Anne-Laure Mansbach; Chantal Ligny; M. Ventura; Paul Deltenre
Archive | 2006
Jacqueline Leybaert; Catherine Hage; Brigitte Charlier
Archive | 1992
Jesus Alegria Iscoa; Jacqueline Leybaert; Brigitte Charlier; Catherine Hage; Jose Morais