Susan Prejawa
Wellcome Trust Centre for Neuroimaging
Publications
Featured research published by Susan Prejawa.
Cerebral Cortex | 2012
O Parker Jones; David W. Green; Alice Grogan; Christos Pliatsikas; K Filippopolitis; N Ali; Hwee Ling Lee; S. Ramsden; K Gazarian; Susan Prejawa; Mohamed L. Seghier; Cathy J. Price
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in five left-hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
NeuroImage | 2016
Mohamed L. Seghier; Elnas Patel; Susan Prejawa; Sue Ramsden; Andre Selmer; Louise Lim; Rachel Browne; Johanna Rae; Zula Haigh; Deborah Ezekiel; Thomas M. H. Hope; Alexander P. Leff; Cathy J. Price
The PLORAS Database is a relational repository of anatomical and functional imaging data that has primarily been acquired from stroke survivors, along with standardized scores on a wide range of sensory, motor and cognitive abilities, demographic details and medical history. As of January 2015, we have data from 750 patients, with an expected accrual rate of 200 patients per year. Expansion will accelerate as we extend our collaborations. The main aim of the database is to Predict Language Outcome and Recovery After Stroke (PLORAS) on the basis of a single structural (anatomical) brain scan that indexes the stereotactic location and extent of brain damage. Predictions are made for individual patients by indicating how other patients with the most similar brain damage, cognitive abilities and demographic details recovered their language skills over time. Predictions are validated by longitudinal follow-ups of patients who initially presented with speech and language difficulties. The PLORAS Database can also be used to predict recovery of other cognitive abilities on the basis of anatomical brain scans. The functional imaging data can be used to understand the neural mechanisms that support recovery from brain damage, and all the data can be used to understand the main sources of inter-subject variability in structure–function mappings in the human brain. Data will be made available for sharing, subject to funding, ethical approval and patient consent.
Frontiers in Human Neuroscience | 2014
Ōiwi Parker Jones; Susan Prejawa; Thomas M. H. Hope; Marion Oberhuber; Mohamed L. Seghier; Alexander P. Leff; David W. Green; Cathy J. Price
The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First, we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as the Sylvian parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or in auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom-up or top-down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus, and only one of the eight had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing.
Frontiers in Human Neuroscience | 2014
Thomas M. H. Hope; Susan Prejawa; Ōiwi Parker Jones; Marion Oberhuber; Mohamed L. Seghier; David W. Green; Cathy J. Price
This fMRI study used a single, multi-factorial, within-subjects design to dissociate multiple linguistic and non-linguistic processing areas that are all involved in repeating back heard words. The study compared: (1) auditory to visual inputs; (2) phonological to non-phonological inputs; (3) semantic to non-semantic inputs; and (4) speech production to finger-press responses. The stimuli included words (semantic and phonological inputs), pseudowords (phonological input), pictures and sounds of animals or objects (semantic input), and colored patterns and hums (non-semantic and non-phonological). The speech production tasks involved auditory repetition, reading, and naming, while the finger-press tasks involved one-back matching. The results from the main effects and interactions were compared to predictions from a previously reported functional anatomical model of language based on a meta-analysis of many different neuroimaging experiments. Although many findings from the current experiment replicated those predicted, our within-subjects design also revealed novel results by providing sufficient anatomical precision to dissect several different regions within the anterior insula, pars orbitalis, anterior cingulate, SMA, and cerebellum. For example, we found one part of the pars orbitalis was involved in phonological processing and another in semantic processing. We also dissociated four different types of phonological effects in the left superior temporal sulcus (STS), left putamen, left ventral premotor cortex, and left pars orbitalis. Our findings challenge some of the commonly held opinions on the functional anatomy of language and resolve some previously conflicting findings about specific brain regions, and our experimental design reveals details of the word repetition process that are not well captured by current models.
Frontiers in Human Neuroscience | 2013
Marion Oberhuber; Ōiwi Parker Jones; Thomas M. H. Hope; Susan Prejawa; Mohamed L. Seghier; David W. Green; Cathy J. Price
Previous studies have investigated orthographic-to-phonological mapping during reading by comparing brain activation for (1) reading words to object naming, or (2) reading pseudowords (e.g., "phume") to words (e.g., "plume"). Here we combined both approaches to provide new insights into the underlying neural mechanisms. In fMRI data from 25 healthy adult readers, we first identified activation that was greater for reading words and pseudowords relative to picture and color naming. The most significant effect was observed in the left putamen, extending to both anterior and posterior borders. Second, consistent with previous studies, we show that both the anterior and posterior putamen are involved in articulating speech, with greater activation during our overt speech production tasks (reading, repetition, object naming, and color naming) than during silent one-back matching on the same stimuli. Third, we compared putamen activation for words versus pseudowords during overt reading and auditory repetition. This revealed that the anterior putamen was most activated by reading pseudowords, whereas the posterior putamen was most activated by words, irrespective of whether the task was reading words or auditory word repetition. The pseudoword effect in the anterior putamen is consistent with prior studies that associated this region with the initiation of novel sequences of movements. In contrast, the heightened word response in the posterior putamen is consistent with other studies that associated this region with "memory guided movement." Our results illustrate how the functional dissociation between the anterior and posterior putamen supports sublexical and lexical processing during reading.
NeuroImage | 2016
Thomas M. H. Hope; Mohamed L. Seghier; Susan Prejawa; Alexander P. Leff; Cathy J. Price
Brain imaging studies of functional outcomes after white matter damage have quantified the severity of white matter damage in different ways. Here we compared how the outcome of such studies depends on two different types of measurement: the proportion of the target tract that has been destroyed ('lesion load') and tract disconnection. We demonstrate that conclusions from analyses based on two examples of these measures diverge, and that conclusions based solely on lesion load may be misleading. First, we reproduce a recent lesion-load-only analysis which suggests that damage to the arcuate fasciculus, and not to the uncinate fasciculus, is significantly associated with deficits in fluency and naming skills. Next, we repeat the analysis after replacing the measures of lesion load with measures of tract disconnection for both tracts, and observe significant associations between both tracts and both language skills: i.e., the change increases the apparent relevance of the uncinate fasciculus to fluency and naming skills. Finally, we show that, in this dataset, disconnection data explain significant variance in both language skills that is not accounted for by lesion load or lesion volume, whereas lesion load data explain no unique variance in those skills once disconnection and lesion volume are taken into account.
Brain | 2017
Thomas M. H. Hope; Alexander P. Leff; Susan Prejawa; Rachel Bruce; Zula Haigh; Louise Lim; Sue Ramsden; Marion Oberhuber; Philipp Ludersdorfer; Jenny Crinion; Mohamed L. Seghier; Cathy J. Price
Language difficulties after stroke are commonly thought to stabilise within a year. Hope et al. report surprising evidence to the contrary, showing that the language skills of patients with post-stroke aphasia continue to change even years after stroke. The changes are associated with structural adaptation in the intact right hemisphere.
Neuropsychologia | 2015
Ana Sanjuán; Thomas M. H. Hope; Ōiwi Parker Jones; Susan Prejawa; Marion Oberhuber; Julie Guerin; Mohamed L. Seghier; David W. Green; Cathy J. Price
We used fMRI in 35 healthy participants to investigate how two neighbouring subregions in the lateral anterior temporal lobe (LATL) contribute to semantic matching and object naming. Four different levels of processing were considered: (A) recognition of the object concepts; (B) search for semantic associations related to object stimuli; (C) retrieval of semantic concepts of interest; and (D) retrieval of stimulus specific concepts as required for naming. During semantic association matching on picture stimuli or heard object names, we found that activation in both subregions was higher when the objects were semantically related (mug–kettle) than unrelated (car–teapot). This is consistent with both LATL subregions playing a role in (C), the successful retrieval of amodal semantic concepts. In addition, one subregion was more activated for object naming than matching semantically related objects, consistent with (D), the retrieval of a specific concept for naming. We discuss the implications of these novel findings for cognitive models of semantic processing and left anterior temporal lobe function.
Brain | 2017
Diego L. Lorca-Puls; Andrea Gajardo-Vidal; Mohamed L. Seghier; Alexander P. Leff; Varun Sethi; Susan Prejawa; Thomas M. H. Hope; Joseph T. Devlin; Cathy J. Price
Predicting how brain damage will affect cognition is notoriously challenging. By using findings from non-invasive brain stimulation studies of healthy subjects to guide lesion-deficit mapping, Lorca-Puls et al. identify two lesion sites, in frontal and parietal areas, that consistently cause persistent language difficulties after stroke.
The Journal of Neuroscience | 2015
Mohamed L. Seghier; Thomas M. H. Hope; Susan Prejawa; Ōiwi Parker Jones; M. Vitkovitch; Cathy J. Price
The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3," and not at all during reading. These results cannot be explained by task difficulty, but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with the task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across-subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to vary how much they rely on somatosensory versus auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.