Publications


Featured research published by Coriandre Vilain.


Human Brain Mapping | 2012

Functional MRI assessment of orofacial articulators: Neural correlates of lip, jaw, larynx, and tongue movements

Krystyna Grabski; Laurent Lamalle; Coriandre Vilain; Jean-Luc Schwartz; Nathalie Vallée; Irène Troprès; Monica Baciu; Jean François Le Bas; Marc Sato

Few neuroimaging studies have attempted to determine the shared and distinct neural substrates of supralaryngeal and laryngeal articulatory movements performed independently rather than as part of complex coordinated orofacial actions. To determine cortical and subcortical regions associated with supralaryngeal motor control, participants produced lip, tongue and jaw movements while undergoing functional magnetic resonance imaging (fMRI). For laryngeal motor activity, participants produced the steady-state /i/ vowel. A sparse temporal sampling acquisition method was used to minimize movement-related artifacts. Three main findings were observed. First, the four tasks activated a set of largely overlapping, common brain areas: the sensorimotor and premotor cortices, the right inferior frontal gyrus, the supplementary motor area, the left parietal operculum and the adjacent inferior parietal lobule, the basal ganglia and the cerebellum. Second, differences between tasks were restricted to the bilateral auditory cortices and to the left ventrolateral sensorimotor cortex, with greater signal intensity for vowel vocalization. Finally, a dorso-ventral somatotopic organization of lip, jaw, vocalic/laryngeal, and tongue movements was observed within the primary motor and somatosensory cortices using individual region-of-interest (ROI) analyses. These results provide evidence for a core neural network involved in laryngeal and supralaryngeal motor control and further refine the sensorimotor somatotopic organization of orofacial articulators. Hum Brain Mapp 33:2306–2321, 2012.
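
As a rough illustration of the individual ROI analysis mentioned above (not the study's actual pipeline), the following Python sketch averages a task-versus-rest contrast map within a sensorimotor ROI for each articulator task; the file names, the ROI, and the contrast maps are hypothetical.

# Hypothetical sketch of a per-task ROI analysis; file names and ROI are illustrative.
import numpy as np
import nibabel as nib

tasks = ["lip", "jaw", "tongue", "vowel"]                          # the four movement tasks
roi = nib.load("left_central_sulcus_roi.nii.gz").get_fdata() > 0   # binary ROI mask (assumed file)

for task in tasks:
    # Task-versus-rest contrast map for one participant (assumed file name).
    contrast = nib.load(f"contrast_{task}.nii.gz").get_fdata()
    print(task, float(contrast[roi].mean()))                       # mean effect size inside the ROI

Comparing where such ROI responses peak along the central sulcus for each task is one way a dorso-ventral somatotopy can be described.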


Journal of the Acoustical Society of America | 2003

Influence of collision on the flow through in-vitro rigid models of the vocal folds

Mickael Deverge; Xavier Pelorson; Coriandre Vilain; Pierre-Yves Lagrée; F Chentouf; Jan Willems; Avraham Hirschberg

Measurements of pressure in oscillating rigid replicas of vocal folds are presented. The pressure upstream of the replica is used as input to various theoretical approximations to predict the pressure within the glottis. As the vocal folds collide, the classical quasi-steady boundary layer theory fails. It appears, however, that for physiologically reasonable shapes of the replicas, viscous effects are more important than the influence of the flow unsteadiness due to the wall movement. A simple model based on a quasi-steady Bernoulli equation corrected for viscous effects, combined with a simple boundary layer separation model, globally predicts the observed pressure behavior.
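
As a sketch of the kind of model described, assuming a quasi-one-dimensional glottal channel of width $w$ and local height $h(x)$, a quasi-steady Bernoulli equation with a Poiseuille-type viscous correction and an ad hoc separation criterion can be written as follows; the notation and the separation constant are illustrative, not taken from the paper:

\[
p(x) \;=\; p_0 \;-\; \frac{\rho}{2}\,\frac{Q^{2}}{w^{2}h(x)^{2}} \;-\; 12\,\mu\,Q \int_{x_0}^{x} \frac{\mathrm{d}x'}{w\,h(x')^{3}},
\qquad
h_{\mathrm{sep}} \;\approx\; c\,h_{\min},\quad c \approx 1.2,
\]

where $p_0$ is the upstream pressure, $Q$ the volume flow rate, $\rho$ the air density and $\mu$ its dynamic viscosity. Downstream of the point in the diverging part of the glottis where $h(x)$ exceeds $h_{\mathrm{sep}}$, the flow is assumed to separate and the pressure is set to the downstream value.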


Comptes Rendus Biologies | 2002

Biomechanical models to simulate consequences of maxillofacial surgery

Yohan Payan; Matthieu Chabanas; Xavier Pelorson; Coriandre Vilain; Patrick Levy; Vincent Luboz; Pascal Perrier

This paper presents the biomechanical finite element models that have been developed in the framework of computer-assisted maxillofacial surgery. After a brief overview of the continuous elastic modelling method, two models are introduced and their use for computer-assisted applications is discussed. The first model deals with orthognathic surgery and aims at predicting the facial consequences of maxillary and mandibular osteotomies. For this, a generic three-dimensional model of the face is automatically adapted to the morphology of the patient by means of elastic registration. Qualitative simulations of the consequences of an osteotomy of the mandible can thus be provided. The second model addresses the Sleep Apnoea Syndrome. Its aim is to develop a complete model of the interaction between airflow and upper airway walls during breathing. Dynamical simulations of the interaction during a respiratory cycle are computed and compared with observed phenomena.
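
As a generic illustration of the continuous elastic modelling approach (not necessarily the authors' exact constitutive choice), a small-strain linear elastic finite element model of the facial soft tissue solves:

\[
\boldsymbol{\sigma} \;=\; \lambda\,\operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} \;+\; 2\mu\,\boldsymbol{\varepsilon},
\qquad
\boldsymbol{\varepsilon} \;=\; \tfrac{1}{2}\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\bigr),
\qquad
\mathbf{K}\,\mathbf{u} \;=\; \mathbf{f},
\]

where $\lambda$ and $\mu$ are Lamé parameters describing tissue stiffness, $\mathbf{u}$ is the nodal displacement field, $\mathbf{K}$ the assembled stiffness matrix, and $\mathbf{f}$ the loads imposed on the soft tissue by the planned bone repositioning; the facial consequences of an osteotomy are then read off from the computed $\mathbf{u}$.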


Journal of Neurolinguistics | 2013

Shared and distinct neural correlates of vowel perception and production

Krystyna Grabski; Jean-Luc Schwartz; Laurent Lamalle; Coriandre Vilain; Nathalie Vallée; Monica Baciu; Jean-François Le Bas; Marc Sato

Recent neurobiological models postulate that sensorimotor interactions play a key role in speech perception and speech motor control, especially under adverse listening conditions or in the case of complex articulatory speech sequences. The present fMRI study aimed to investigate whether isolated vowel perception and production might also induce sensorimotor activity, independently of syllable sequencing and coarticulation mechanisms, using a sparse acquisition technique to limit the influence of scanner noise. To this aim, participants first passively listened to French vowels previously recorded from their own voice. In a subsequent production task, performed within the same imaging session and using the same acquisition parameters, participants were asked to overtly produce the same vowels. Our results demonstrate that a left postero-dorsal stream, linking auditory speech percepts with articulatory representations and including the posterior inferior frontal gyrus, the adjacent ventral premotor cortex and the temporoparietal junction, is an influential part of both vowel perception and production. Specific analyses of phonetic features further confirmed the involvement of the left postero-dorsal stream in vowel processing and motor control. Altogether, these results suggest that vowel representations are largely distributed over sensorimotor brain areas and provide further evidence for a functional coupling between speech perception and production systems.


Frontiers in Psychology | 2014

A possible neurophysiological correlate of audiovisual binding and unbinding in speech perception

Attigodu Chandrashekara Ganesh; Frédéric Berthommier; Coriandre Vilain; Marc Sato; Jean-Luc Schwartz

In audiovisual (AV) speech perception, the integration of auditory and visual streams generally results in fusion into a single percept. One classical example is the McGurk effect, in which incongruent auditory and visual speech signals may lead to a fused percept different from either the visual or the auditory input. In a previous set of experiments, we showed that if a McGurk stimulus is preceded by an incongruent AV context (composed of incongruent auditory and visual speech materials), the amount of McGurk fusion is largely decreased. We interpreted this result in the framework of a two-stage “binding and fusion” model of AV speech perception, with an early AV binding stage controlling the fusion/decision process and likely to produce “unbinding”, with less fusion, if the context is incoherent. In order to provide further electrophysiological evidence for this binding/unbinding stage, early auditory evoked N1/P2 responses were here compared during auditory, congruent and incongruent AV speech perception, following either a coherent or an incoherent AV context. Following the coherent context, in line with previous electroencephalographic/magnetoencephalographic studies, visual information in the congruent AV condition was found to modify auditory evoked potentials, with a latency decrease of P2 responses compared to the auditory condition. Importantly, both P2 amplitude and latency in the congruent AV condition increased from the coherent to the incoherent context. Although potential contamination by visual responses from the visual cortex cannot be ruled out, our results may provide a neurophysiological correlate of an early binding/unbinding process applied to AV interactions.
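
A minimal sketch of how N1/P2 peak latency and amplitude can be read off an averaged auditory evoked potential; the waveform, channel choice, and time windows below are illustrative assumptions, not the study's analysis parameters.

# Hypothetical example: measure N1 and P2 peaks from a synthetic averaged ERP.
import numpy as np

times = np.arange(0.0, 0.400, 0.001)                         # 0-400 ms at 1 kHz
evoked = (-2e-6 * np.exp(-((times - 0.100) / 0.020) ** 2)    # synthetic N1 around 100 ms
          + 3e-6 * np.exp(-((times - 0.200) / 0.030) ** 2))  # synthetic P2 around 200 ms

def peak_in_window(signal, times, tmin, tmax, polarity):
    """Latency (s) and amplitude (V) of the most extreme sample in [tmin, tmax]."""
    win = (times >= tmin) & (times <= tmax)
    idx = np.argmin(signal[win]) if polarity < 0 else np.argmax(signal[win])
    return times[win][idx], signal[win][idx]

# Illustrative search windows: N1 negative peak 70-150 ms, P2 positive peak 150-250 ms.
n1_lat, n1_amp = peak_in_window(evoked, times, 0.070, 0.150, polarity=-1)
p2_lat, p2_amp = peak_in_window(evoked, times, 0.150, 0.250, polarity=+1)
print(f"N1: {n1_lat*1000:.0f} ms, {n1_amp*1e6:.1f} uV; P2: {p2_lat*1000:.0f} ms, {p2_amp*1e6:.1f} uV")

Per-condition measures of this kind can then be compared across the auditory, congruent AV, and incongruent AV conditions, and across coherent and incoherent contexts.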


Neuropsychologia | 2014

Haptic and visual information speed up the neural processing of auditory speech in live dyadic interactions.

Avril Treille; Camille Cordeboeuf; Coriandre Vilain; Marc Sato

Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception. Crucially, shortened latencies of early auditory evoked potentials were also observed during audio-haptic speech perception. Altogether, these results suggest early bimodal interactions during live face-to-face and hand-to-face speech perception in dyadic interactions.


Frontiers in Psychology | 2014

The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception

Avril Treille; Coriandre Vilain; Marc Sato

Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals over the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be drawn with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.


Speech Communication | 2013

An experimental study of speech/gesture interactions and distance encoding

Chloe Gonseth; Anne Vilain; Coriandre Vilain

This paper explores the possible encoding of distance information in vocal and manual pointing and its relationship with the linguistic structure of deictic words, as well as speech/gesture cooperation within the process of deixis. Two experiments required participants to point at and/or name a close or distant target, with speech only, with gesture only, or with speech+gesture. Acoustic, articulatory, and manual data were recorded. We investigated the interaction between vocal and manual pointing with respect to the distance to the target. There are two major findings. First, distance significantly affects both articulatory and manual pointing, since participants perform larger vocal and manual gestures to designate a more distant target. Second, modality influences both deictic speech and gesture, since pointing is more emphatic in unimodal use of either modality than in bimodal use of both, to compensate for the loss of the other mode. These findings suggest that distance is encoded in both vocal and manual pointing. We also demonstrate that the correlates of distance encoding in the vocal modality can be related to the typology of deictic words. Finally, our data suggest a two-way interaction between speech and gesture, and support the hypothesis that these two modalities cooperate within a single communication system.
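
A minimal sketch of how the distance effect on manual pointing amplitude could be tested, assuming one mean pointing amplitude per participant and condition has already been extracted; the values below are invented for illustration, not data from the study.

# Hypothetical per-participant mean pointing amplitudes (cm) for close vs. distant targets.
import numpy as np
from scipy import stats

close_amp   = np.array([31.2, 29.8, 33.5, 30.1, 32.4, 28.9])
distant_amp = np.array([38.7, 41.2, 39.5, 42.1, 40.3, 37.8])

t, p = stats.ttest_rel(distant_amp, close_amp)   # paired test across participants
print(f"distant vs. close: t = {t:.2f}, p = {p:.4f}")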


Journal of Cognitive Neuroscience | 2014

Adaptive coding of orofacial and speech actions in motor and somatosensory spaces with and without overt motor behavior

Marc Sato; Coriandre Vilain; Laurent Lamalle; Krystyna Grabski

Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory–motor coding shared with motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon that repeated stimuli or motor acts lead to decreased activity in specific neural populations and is associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions during both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the SMA. Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces, with and without motor behavior and sensory feedback.
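
A minimal sketch of how a repetition suppression effect could be quantified within an ROI, assuming per-repetition response estimates are already available; the subjects, values, and number of repetitions are hypothetical.

# Hypothetical ROI responses (e.g., percent signal change), one row per subject,
# one column per repetition of the same overt or covert action.
import numpy as np

responses = np.array([
    [1.10, 0.85, 0.72, 0.70],   # subject 1, repetitions 1-4
    [0.95, 0.80, 0.78, 0.69],   # subject 2
    [1.20, 0.90, 0.81, 0.75],   # subject 3
])

rs_effect = responses[:, 0] - responses[:, 1:].mean(axis=1)   # first minus repeated
print("per-subject RS effect:", rs_effect)
print("group mean RS effect:", rs_effect.mean())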


Journal of Cognitive Neuroscience | 2017

Inside speech: Multisensory and modality-specific processing of tongue and lip speech actions

Avril Treille; Coriandre Vilain; Thomas Hueber; Laurent Lamalle; Marc Sato

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because the tongue movements of our interlocutor are accessible via their impact on speech acoustics but not visible, since the tongue lies inside the vocal tract, whereas lip movements are both “audible” and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs showing either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times (RTs) for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.

Collaboration


Dive into Coriandre Vilain's collaborations.

Top Co-Authors

Marc Sato
University of Grenoble

Jean-Luc Schwartz
Centre national de la recherche scientifique

Monica Baciu
Centre national de la recherche scientifique

Anne Vilain
University of Grenoble

Xavier Pelorson
Centre national de la recherche scientifique

Avraham Hirschberg
Eindhoven University of Technology

Audrey Acher
Grenoble Institute of Technology