Publications


Featured research published by Avril Treille.


Neuropsychologia | 2014

Haptic and visual information speed up the neural processing of auditory speech in live dyadic interactions.

Avril Treille; Camille Cordeboeuf; Coriandre Vilain; Marc Sato

Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception. Crucially, shortened latencies of early auditory evoked potentials were also observed during audio-haptic speech perception. Altogether, these results suggest early bimodal interactions during live face-to-face and hand-to-face speech perception in dyadic interactions.


Frontiers in Psychology | 2014

The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception

Avril Treille; Coriandre Vilain; Marc Sato

Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were compared here during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.


Journal of Cognitive Neuroscience | 2017

Inside speech: Multisensory and modality-specific processing of tongue and lip speech actions

Avril Treille; Coriandre Vilain; Thomas Hueber; Laurent Lamalle; Marc Sato

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because the tongue movements of our interlocutor are accessible via their impact on speech acoustics but are not visible, given the tongue's position inside the vocal tract, whereas lip movements are both “audible” and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times (RTs) for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.


Neuropsychologia | 2018

Electrophysiological evidence for Audio-visuo-lingual speech integration

Avril Treille; Coriandre Vilain; Jean-Luc Schwartz; Thomas Hueber; Marc Sato

Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge.

Highlights: The EEG study examined whether visual tongue movements integrate with speech sounds. A modulation of P2 auditory evoked potentials was observed for AV compared to A+V conditions. Dynamic and phonetic informational cues appear sharable across sensory modalities.
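As a rough illustration of the additive-model comparison mentioned in this abstract (the bimodal AV response tested against the sum A+V of the unimodal responses), the sketch below quantifies P2 peak amplitude and latency from trial-averaged ERPs. All data, the sampling rate, and the P2 search window are hypothetical placeholders, not values or code from the study.

# Minimal sketch, assuming synthetic placeholder data: the additive-model
# comparison in which the bimodal (AV) evoked response is tested against the
# sum of the unimodal responses (A + V). The P2 is taken as the positive peak
# in an assumed 150-250 ms post-onset window.
import numpy as np

SFREQ = 500.0              # assumed sampling rate, Hz
P2_WINDOW = (0.15, 0.25)   # assumed P2 search window, seconds

def p2_peak(erp, sfreq=SFREQ, window=P2_WINDOW):
    """Return (latency in s, amplitude) of the P2 peak inside the window."""
    start, stop = (int(t * sfreq) for t in window)
    idx = int(np.argmax(erp[start:stop]))   # P2 is a positivity: take the max
    return (start + idx) / sfreq, erp[start + idx]

# Hypothetical trial-averaged ERPs at one electrode (one value per sample).
t = np.arange(0.0, 0.4, 1.0 / SFREQ)
erp_a = 1.0 * np.exp(-((t - 0.20) / 0.05) ** 2)   # auditory-only response
erp_v = 0.2 * np.exp(-((t - 0.18) / 0.05) ** 2)   # visual-only response
erp_av = 0.8 * np.exp(-((t - 0.19) / 0.05) ** 2)  # bimodal: smaller, earlier

lat_av, amp_av = p2_peak(erp_av)
lat_sum, amp_sum = p2_peak(erp_a + erp_v)         # additive prediction A + V

print(f"P2 latency:   AV = {lat_av * 1000:.0f} ms vs A+V = {lat_sum * 1000:.0f} ms")
print(f"P2 amplitude: AV = {amp_av:.2f} vs A+V = {amp_sum:.2f}")
# An AV amplitude below, and an AV latency earlier than, the additive A+V
# prediction is the kind of integration signature described in the abstract.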


Experimental Brain Research | 2017

Electrophysiological evidence for a self-processing advantage during audiovisual speech integration

Avril Treille; Coriandre Vilain; Sonia Kandel; Marc Sato


Neurobiology of Language Conference | 2016

The role of the premotor cortex in multisensory speech perception throughout adulthood: an rTMS study

Avril Treille; Marc Sato; Jean-Luc Schwartz; Coriandre Vilain; Pascale Tremblay


INSHEA - Colloque international « Toucher pour apprendre, toucher pour communiquer » | 2016

Tes mots me touchent : étude des apports de la modalité tactile dans la perception de la parole [Your words touch me: a study of the contribution of the tactile modality to speech perception]

Avril Treille; Coriandre Vilain; Marc Sato


Seventh Annual Meeting of the Society for the Neurobiology of Language | 2015

Speech in the mirror? Neurobiological correlates of self speech perception

Avril Treille; Coriandre Vilain; Sonia Kandel; Jean-Luc Schwartz; Marc Sato


IMRF 2015 - 16th International Multisensory Research Forum | 2015

Electrophysiological evidence for audio-visuo-lingual speech integration

Coriandre Vilain; Avril Treille; Marc Sato


IMRF 2015 - 16th International Multisensory Research Forum | 2015

Seeing our own voice: an electrophysiological study of audiovisual speech integration during self perception

Avril Treille; Coriandre Vilain; Sonia Kandel; Marc Sato

Collaboration


Dive into Avril Treille's collaborations.

Top Co-Authors

Marc Sato
University of Grenoble

Jean-Luc Schwartz
Centre national de la recherche scientifique

Sonia Kandel
Centre national de la recherche scientifique

Thomas Hueber
Centre national de la recherche scientifique