Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Elisabeth Fonteneau is active.

Publications


Featured research published by Elisabeth Fonteneau.


Biological Psychology | 1998

Semantic, repetition and rime priming between spoken words: behavioral and electrophysiological evidence

Monique Radeau; Mireille Besson; Elisabeth Fonteneau; São Luís Castro

Semantic, phonological and repetition priming for auditorily presented words were examined using both behavioral reaction time (RT) and electrophysiological event-related potential (ERP) measures. On critical trials, a word prime was followed by a word target that was semantically or phonologically related (rime) or not related (control) to the prime. Pairs of word-pseudoword items served as fillers. Participants were asked to respond to word targets in the RT experiment and to pseudowords in the ERP experiment. In each experiment, stimuli were presented once and then repeated in exactly the same way. RTs were fastest for semantic, intermediate for rime and slowest for control targets; large repetition effects occurred for all targets. ERP results showed that both semantic and phonological priming influenced the same component, namely the N400, whose amplitude was smallest for semantic, intermediate for rime and largest for control targets; repetition effects were found only for semantic trials.
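
The condition-wise comparisons described above reduce to averaging a dependent measure (RT, or mean ERP amplitude in an N400 window) per priming condition and contrasting the means. A minimal sketch with invented reaction-time data, purely for illustration of the analysis logic rather than the study's actual pipeline:

```python
import numpy as np

# Hypothetical single-subject reaction times (ms) per priming condition.
# All values are invented; the ordering mimics the pattern reported above.
rng = np.random.default_rng(0)
rts = {
    "semantic": rng.normal(620, 60, 40),   # fastest
    "rime":     rng.normal(660, 60, 40),   # intermediate
    "control":  rng.normal(700, 60, 40),   # slowest
}

for condition, values in rts.items():
    print(f"{condition:>8s}: mean RT = {values.mean():6.1f} ms "
          f"(SD = {values.std(ddof=1):.1f})")

# The same logic applies to the ERP measure: average the voltage in an
# N400 time window (e.g. 300-500 ms) per condition and compare the means.
```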


PLOS ONE | 2008

Electrical Brain Responses in Language-Impaired Children Reveal Grammar-Specific Deficits

Elisabeth Fonteneau; Heather K. J. van der Lely

Background: Scientific and public fascination with human language has included intensive scrutiny of language disorders as a new window onto the biological foundations of language and its evolutionary origins. Specific language impairment (SLI), which affects over 7% of children, is one such disorder. SLI has received robust scientific attention, in part because of its recent linkage to a specific gene and to loci on chromosomes, and in part because of the prevailing question regarding the scope of its language impairment: does the disorder impact the general ability to segment and process language, or a specific ability to compute grammar? Here we provide novel electrophysiological data showing a domain-specific deficit within the grammar of language that has been hitherto undetectable through behavioural data alone.

Methods and Findings: We presented participants with Grammatical (G)-SLI, age-matched controls, and younger child and adult controls with questions containing syntactic violations and sentences containing semantic violations. Electrophysiological brain responses revealed a selective impairment affecting only the neural circuitry that is specific to grammatical processing in G-SLI. Furthermore, the participants with G-SLI appeared to be partially compensating for their syntactic deficit by using neural circuitry associated with semantic processing, while all non-grammar-specific and low-level auditory neural responses were normal.

Conclusions: The findings indicate that the grammatical neural circuitry underlying language is a developmentally unique system in the functional architecture of the brain, and that this complex higher cognitive system can be selectively impaired. The findings advance fundamental understanding of how cognitive systems develop and how human language is represented and processed in the brain.


International Journal of Psychophysiology | 1999

Spatio-temporal analysis of electric brain activity during semantic and phonological word processing.

Asaid Khateb; Jean-Marie Annoni; Theodor Landis; Alan J. Pegna; Marie-Carmen Custodi; Elisabeth Fonteneau; Stephanie M. Morand; Christoph M. Michel

There is an ongoing debate in cognitive neuroscience about the time course and the functional independence of the different processes involved in encoding written language material. New data indicate very fast and highly parallel language analysis networks in the brain. Here we demonstrate a methodological approach to study the temporal dynamics of this network by searching for time periods where different task demands emphasize different aspects of the network. Multi-channel event-related potentials (ERPs) were recorded during a semantic and a phonological reading task from 14 healthy subjects. Signals were analyzed exclusively on the basis of the spatial configuration of the electric potential distributions (ERP maps), since differences in these spatial patterns directly reflect changes in the configuration of the active sources in the brain. This analysis did not reveal any differences in the evoked brain electric fields between the two tasks up to 280 ms post-stimulus. The ERP maps then differed for a brief period between 280 and 380 ms, before they were similar again. The analysis of the maps using a global linear localization procedure revealed a network of areas, active in both tasks, that mainly involved the left postero-temporal and left antero-temporal regions. The left posterior activation was found as early as about 100 ms post-stimulus, indicating that language-specific functions appear early in time. We therefore conclude that phonological and semantic processing are essentially performed in both tasks and that only late decision-related processes influence the relative strength of activity of the different modules in the complex language network.
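
The map-based analysis described above compares the spatial configuration of scalp potentials between tasks independently of overall signal strength. A minimal sketch of one common way to quantify this, global map dissimilarity between amplitude-normalized topographies, using simulated data; the array shapes and the specific metric are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def global_map_dissimilarity(map_a, map_b):
    """Dissimilarity between two scalp topographies (one value per electrode),
    after average-referencing and scaling each map to unit global field power,
    so that only the spatial pattern matters."""
    def normalize(m):
        m = m - m.mean()                 # average reference
        gfp = np.sqrt(np.mean(m ** 2))   # global field power
        return m / gfp
    a, b = normalize(map_a), normalize(map_b)
    return np.sqrt(np.mean((a - b) ** 2))

# Simulated task-averaged ERPs: (electrodes, time points); random placeholders.
rng = np.random.default_rng(1)
n_elec, n_times = 64, 500                # e.g. 500 samples at 500 Hz ~ 1 s
erp_semantic = rng.normal(size=(n_elec, n_times))
erp_phonological = rng.normal(size=(n_elec, n_times))

# Dissimilarity time course: peaks would mark periods (such as the reported
# 280-380 ms window) where the active source configuration differs by task.
gmd = np.array([
    global_map_dissimilarity(erp_semantic[:, t], erp_phonological[:, t])
    for t in range(n_times)
])
print(gmd.shape, gmd[:5])
```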


Journal of Cognition and Culture | 2008

Cultural Differences in Perception: Observations from a Remote Culture

Jules Davidoff; Elisabeth Fonteneau; Julie Goldstein

Perceptual similarity was examined in a remote culture (Himba) and compared to that of Western observers. Similarity was assessed in a relative size judgement task and in an odd-one-out detection task. Thus, we examined the effects of culture on what might be considered low-level visual abilities. For both tasks, we found that performance was affected by stimuli that were culturally relevant to the tasks. In Experiment 1, we showed that the use of cow stimuli instead of the standard circles increased illusory strength for the Himba. In Experiment 2, only the Himba showed more accurate detection based on category differences in the displays. It is argued that Categorical Perception in Experiment 2, based on its presumed Whorfian origins, was the more reliable procedure for examining the effects of culture on perception.


international workshop on pattern recognition in neuroimaging | 2012

Spatiotemporal Searchlight Representational Similarity Analysis in EMEG Source Space

Li Su; Elisabeth Fonteneau; William D. Marslen-Wilson; Nikolaus Kriegeskorte

Time-resolved imaging techniques, such as MEG and EEG, are unique in their ability to reveal the rich dynamic spatiotemporal patterning of neural activities. Here we propose a technique based on spatiotemporal searchlight Representational Similarity Analysis (RSA) of combined MEG and EEG (EMEG) data to directly analyze the multivariate pattern of information flow across the brain. This novel technique can resolve fine-grained dynamic neural computations in both space and time. A prime example of such neural computations is our ability to understand spoken words in real time. A computational approach to these processes is suggested by the Cohort Model of spoken-word recognition. Here we show how spatiotemporal searchlight RSA applied to source estimations of EMEG data can provide insights into the neural correlates of the cohort model within bilateral frontotemporal brain regions.
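
As a rough illustration of the searchlight RSA idea sketched above, the snippet below builds a data RDM from source-space response patterns within a small spatial neighbourhood and sliding time window, then correlates it with a model RDM. Everything here (array shapes, neighbourhood definition, Spearman statistic, simulated data) is an assumed simplification, not the authors' implementation:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Simulated source estimates: (conditions, vertices, time points).
rng = np.random.default_rng(2)
n_cond, n_vert, n_times = 20, 1000, 300
source_data = rng.normal(size=(n_cond, n_vert, n_times))

# Hypothetical model RDM: pairwise dissimilarity between the stimuli
# as predicted by some cognitive/computational model.
model_rdm = pdist(rng.normal(size=(n_cond, 10)), metric="correlation")

def searchlight_rsa(data, model_rdm, neighbours, t_start, t_len):
    """Correlate the data RDM within one spatiotemporal searchlight
    (a set of vertices x a time window) with the model RDM."""
    patch = data[:, neighbours, t_start:t_start + t_len]
    patterns = patch.reshape(len(patch), -1)          # one vector per condition
    data_rdm = pdist(patterns, metric="correlation")  # 1 - Pearson r
    rho, _ = spearmanr(data_rdm, model_rdm)
    return rho

# Example: one searchlight using 20 arbitrary neighbouring vertices and a
# 25-sample window starting at sample 100. A full analysis repeats this for
# every vertex and time window to produce a spatiotemporal map of fits.
neighbours = np.arange(490, 510)
print(searchlight_rsa(source_data, model_rdm, neighbours, 100, 25))
```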


Human Brain Mapping | 2015

Grammatical analysis as a distributed neurobiological function

Mirjana Bozic; Elisabeth Fonteneau; Li Su; William D. Marslen-Wilson

Language processing engages large‐scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left‐lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in the left middle temporal gyrus. These data confirm the role of the left‐lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. Hum Brain Mapp 36:1190–1201, 2015.


Cerebral Cortex | 2015

Brain Network Connectivity During Language Comprehension: Interacting Linguistic and Perceptual Subsystems

Elisabeth Fonteneau; Mirjana Bozic; William D. Marslen-Wilson

The dynamic neural processes underlying spoken language comprehension require the real-time integration of general perceptual and specialized linguistic information. We recorded combined electro- and magnetoencephalographic measurements of participants listening to spoken words varying in perceptual and linguistic complexity. Combinatorial linguistic complexity processing was consistently localized to left perisylvian cortices, whereas competition-based perceptual complexity triggered distributed activity over both hemispheres. Functional connectivity showed that linguistically complex words engaged a distributed network of oscillations in the gamma band (20–60 Hz), which only partially overlapped with the network supporting perceptual analysis. Both processes enhanced cross-talk between left temporal regions and bilateral pars orbitalis (BA47). The left-lateralized synchrony between temporal regions and pars opercularis (BA44) was specific to the linguistically complex words, suggesting a specific role of left frontotemporal cross-cortical interactions in morphosyntactic computations. Synchronizations in oscillatory dynamics reveal the transient coupling of functional networks that support specific computational processes in language comprehension.
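
A compact way to illustrate the oscillatory-synchrony measure discussed above is the phase-locking value between two band-limited regional time courses. The sketch below applies scipy filtering and the Hilbert transform to simulated signals; the band edges and region names follow the abstract, but the specific connectivity metric and all data are assumptions, not necessarily what the paper used:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                        # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)

# Simulated single-trial time courses for two regions of interest,
# e.g. a left temporal source and pars opercularis (BA44).
n_samples = 2000
shared = rng.normal(size=n_samples)
left_temporal = shared + 0.5 * rng.normal(size=n_samples)
pars_opercularis = shared + 0.5 * rng.normal(size=n_samples)

# Band-pass filter in the gamma range reported above (20-60 Hz).
b, a = butter(4, [20 / (fs / 2), 60 / (fs / 2)], btype="band")
x = filtfilt(b, a, left_temporal)
y = filtfilt(b, a, pars_opercularis)

# Phase-locking value: magnitude of the mean phase-difference vector.
phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * phase_diff)))
print(f"gamma-band PLV: {plv:.3f}")
```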


Frontiers in Computational Neuroscience | 2015

Tracking cortical entrainment in neural activity: auditory processes in human temporal cortex.

Andrew Thwaites; Ian Nimmo-Smith; Elisabeth Fonteneau; Roy D. Patterson; Paula Buttery; William D. Marslen-Wilson

A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begin in auditory cortex at ~80 ms and subsequently reappear, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down STS toward the temporal pole.
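
The lag-search procedure described above amounts to correlating a model-predicted contour (here, a stand-in loudness envelope) with a neural time series over a range of latencies. A minimal, illustrative sketch with synthetic signals only; the 80 ms peak below is built into the fake data and is not a result:

```python
import numpy as np

fs = 1000                          # samples per second (assumed)
rng = np.random.default_rng(4)

# Stand-in "instantaneous loudness" contour for one word (arbitrary units)
# and a synthetic neural response that follows it at an 80 ms lag.
loudness = np.convolve(rng.normal(size=1500), np.ones(50) / 50, mode="same")
true_lag = 80                      # ms, built into the simulation
neural = np.roll(loudness, true_lag) + 0.5 * rng.normal(size=loudness.size)

def lagged_correlation(model, signal, lag_ms):
    """Pearson correlation between the model contour and the neural
    signal shifted back by lag_ms milliseconds."""
    lag = int(lag_ms * fs / 1000)
    if lag >= len(model):
        return 0.0
    return np.corrcoef(model[: len(model) - lag], signal[lag:])[0, 1]

lags = range(0, 350, 10)           # 0-340 ms, roughly the range explored above
corrs = [lagged_correlation(loudness, neural, l) for l in lags]
best_r, best_lag = max(zip(corrs, lags))
print(f"best lag ~{best_lag} ms (r = {best_r:.2f})")
```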


Frontiers in Neuroscience | 2014

Mapping tonotopic organization in human temporal cortex: representational similarity analysis in EMEG source space

Li Su; Isma Zulfiqar; Fawad Jamshed; Elisabeth Fonteneau; William D. Marslen-Wilson

A wide variety of evidence, from neurophysiology, neuroanatomy, and imaging studies in humans and animals, suggests that human auditory cortex is in part tonotopically organized. Here we present a new means of resolving this spatial organization using a combination of non-invasive observables (EEG, MEG, and MRI), model-based estimates of spectrotemporal patterns of neural activation, and multivariate pattern analysis. The method exploits both the fine-grained temporal patterning of auditory cortical responses and the millisecond-scale temporal resolution of EEG and MEG. Participants listened to 400 English words while MEG and scalp EEG were measured simultaneously. We estimated the location of cortical sources using the MRI anatomically constrained minimum norm estimate (MNE) procedure. We then combined a form of multivariate pattern analysis (representational similarity analysis) with a spatiotemporal searchlight approach to successfully decode information about patterns of neuronal frequency preference and selectivity in bilateral superior temporal cortex. Observed frequency preferences in and around Heschl's gyrus matched current proposals for the organization of tonotopic gradients in primary acoustic cortex, while the distribution of narrow frequency selectivity similarly matched results from the fMRI literature. The spatial maps generated by this novel combination of techniques seem comparable to those that have emerged from fMRI or ECoG studies, and a considerable advance over earlier MEG results.
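
Building on the same RSA machinery, frequency preference can be read off by asking which of several frequency-band model RDMs best explains the local data RDM. A toy version with random stand-ins for both the word spectra and the source patterns; the band count, shapes, and statistics are assumptions for illustration only:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_words, n_bands, n_vert_patch, n_times = 60, 8, 30, 50

# Stand-in spectral energy of each word in each frequency band (in the
# study this would come from an auditory model applied to the words).
band_energy = rng.gamma(2.0, size=(n_words, n_bands))

# One model RDM per band: words differ to the extent that their energy
# in that band differs.
band_rdms = [pdist(band_energy[:, [b]], metric="euclidean")
             for b in range(n_bands)]

# Stand-in source patterns for one cortical patch (words x features).
patch = rng.normal(size=(n_words, n_vert_patch * n_times))
data_rdm = pdist(patch, metric="correlation")

# Preferred band = the one whose model RDM best matches the data RDM;
# repeating this per searchlight yields a tonotopy-like map.
fits = [spearmanr(data_rdm, m)[0] for m in band_rdms]
print("preferred frequency band:", int(np.argmax(fits)))
```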


PLOS Computational Biology | 2017

Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem

Cai Arran Wingfield; Li Su; Xunying Liu; Chao Zhang; Philip C. Woodland; Andrew Thwaites; Elisabeth Fonteneau; William D. Marslen-Wilson

There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental ‘machine states’, generated as the ASR analysis progresses over time, to the incremental ‘brain states’, measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain.
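
The human-machine comparison described above can be caricatured as correlating an RDM built from ASR-internal state vectors with an RDM built from neural response patterns for the same stimuli. A schematic sketch with random placeholders standing in for both (no ASR or EMEG data involved; the permutation test is one conventional choice, not necessarily the paper's statistic):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_stimuli = 40

# Placeholder 'machine states': e.g. hidden-layer activations of an ASR
# system at one point in the incremental analysis of each stimulus.
machine_states = rng.normal(size=(n_stimuli, 128))

# Placeholder 'brain states': source-space response patterns for the same
# stimuli in one cortical patch and time window.
brain_states = rng.normal(size=(n_stimuli, 300))

machine_rdm = pdist(machine_states, metric="correlation")
brain_rdm = pdist(brain_states, metric="correlation")

# Second-order correspondence between machine and brain representations.
rho, _ = spearmanr(machine_rdm, brain_rdm)

# Simple permutation test: shuffle the stimulus labels of the machine
# patterns to build a null distribution for the correspondence.
null = []
for _ in range(200):
    perm = rng.permutation(n_stimuli)
    null.append(spearmanr(pdist(machine_states[perm], metric="correlation"),
                          brain_rdm)[0])
p = (np.sum(np.array(null) >= rho) + 1) / (len(null) + 1)
print(f"rho = {rho:.3f}, permutation p = {p:.3f}")
```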

Collaboration


Dive into Elisabeth Fonteneau's collaborations.

Top Co-Authors

Li Su, University of Cambridge
Chao Zhang, University of Cambridge
Xunying Liu, University of Cambridge