Thomas M. Schofield
Wellcome Trust Centre for Neuroimaging
Publications
Featured research published by Thomas M. Schofield.
PLOS Computational Biology | 2010
William D. Penny; Klaas E. Stephan; Jean Daunizeau; Maria Joao Rosa; K. J. Friston; Thomas M. Schofield; Alexander P. Leff
Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data.
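Schematically, the two steps reduce to summing model posteriors within each family and then averaging parameters within the winning family. A minimal sketch, assuming the log model evidences have already been obtained (e.g., from fitted DCMs); the family assignment and all numeric values below are hypothetical:

```python
import numpy as np

# Hypothetical log evidences for six models (e.g., from DCM fits), with a
# flat prior over models. Models 0-2 form a "serial" family, 3-5 a "parallel" one.
log_ev = np.array([-120.3, -118.9, -119.5, -115.2, -114.8, -116.0])
families = {"serial": [0, 1, 2], "parallel": [3, 4, 5]}

# Posterior over models: numerically stabilised softmax of the log evidences.
p_m = np.exp(log_ev - log_ev.max())
p_m /= p_m.sum()

# Family-level inference: sum model posteriors within each family.
p_family = {f: p_m[idx].sum() for f, idx in families.items()}

# Bayesian model averaging within the winning family: weight each model's
# parameter estimates by its within-family posterior probability.
theta = np.random.randn(6, 4)                 # stand-in per-model parameter means
best = max(p_family, key=p_family.get)
w = p_m[families[best]] / p_m[families[best]].sum()
theta_bma = w @ theta[families[best]]
print(p_family, best, theta_bma)
```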
Brain | 2009
Alexander P. Leff; Thomas M. Schofield; Jennifer T. Crinion; Mohamed L. Seghier; Alice Grogan; David W. Green; Cathy J. Price
Competing theories of short-term memory function make specific predictions about the functional anatomy of auditory short-term memory and its role in language comprehension. We analysed high-resolution structural magnetic resonance images from 210 stroke patients and employed a novel voxel based analysis to test the relationship between auditory short-term memory and speech comprehension. Using digit span as an index of auditory short-term memory capacity we found that the structural integrity of a posterior region of the superior temporal gyrus and sulcus predicted auditory short-term memory capacity, even when performance on a range of other measures was factored out. We show that the integrity of this region also predicts the ability to comprehend spoken sentences. Our results therefore support cognitive models that posit a shared substrate between auditory short-term memory capacity and speech comprehension ability. The method applied here will be particularly useful for modelling structure–function relationships within other complex cognitive domains.
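The voxel-based logic can be sketched as a mass-univariate GLM in which digit span is the regressor of interest and the other behavioural measures enter as nuisance covariates. The data below are synthetic, and the sketch omits smoothing, masking, and multiple-comparison correction:

```python
import numpy as np

# Synthetic stand-ins: structural image intensity at V voxels for N patients,
# a digit-span score per patient, and nuisance covariates to factor out.
N, V = 210, 20000
voxels = np.random.randn(N, V)
digit_span = np.random.randn(N)
nuisance = np.random.randn(N, 2)              # e.g., lesion volume, age

# Design matrix: intercept + nuisance regressors + digit span (of interest).
X = np.column_stack([np.ones(N), nuisance, digit_span])
beta, *_ = np.linalg.lstsq(X, voxels, rcond=None)

# t-statistic for the digit-span regressor at every voxel.
resid = voxels - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (N - X.shape[1])
c = np.zeros(X.shape[1]); c[-1] = 1.0
t_map = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
```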
The Journal of Neuroscience | 2008
Alexander P. Leff; Thomas M. Schofield; Klaas E. Stephan; Jennifer T. Crinion; K. J. Friston; Cathy J. Price
An important and unresolved question is how the human brain processes speech for meaning after initial analyses in early auditory cortical regions. A variety of left-hemispheric areas have been identified that clearly support semantic processing, although a systematic analysis of directed interactions among these areas is lacking. We applied dynamic causal modeling of functional magnetic resonance imaging responses and Bayesian model selection to investigate, for the first time, experimentally induced changes in coupling among three key multimodal regions that were activated by intelligible speech: the posterior and anterior superior temporal sulcus (pSTS and aSTS, respectively) and pars orbitalis (POrb) of the inferior frontal gyrus. We tested 216 different dynamic causal models and found that the best model was a “forward” system that was driven by auditory inputs into the pSTS, with forward connections from the pSTS to both the aSTS and the POrb that increased considerably in strength (by 76 and 150%, respectively) when subjects listened to intelligible speech. Task-related, directional effects can now be incorporated into models of speech comprehension.
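The underlying bilinear state equation, dx/dt = (A + uB)x + Cu, makes the reported effect concrete: the modulatory matrix B scales the forward connections when the input u (intelligible speech) is on. A toy sketch with values chosen only to mimic the reported magnitudes; these are not the authors' fitted estimates:

```python
import numpy as np

# Three regions: 0 = pSTS, 1 = aSTS, 2 = POrb. A holds fixed connections,
# B the modulation by intelligible speech, C the driving auditory input.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.4, -1.0,  0.0],      # pSTS -> aSTS forward connection
              [ 0.3,  0.0, -1.0]])     # pSTS -> POrb forward connection
B = np.zeros((3, 3))
B[1, 0], B[2, 0] = 0.3, 0.45           # modulation of the two forward paths
C = np.array([1.0, 0.0, 0.0])          # auditory input drives pSTS

def simulate(u, dt=0.01, steps=2000):
    """Euler integration of dx/dt = (A + u*B) @ x + C*u."""
    x = np.zeros(3)
    for _ in range(steps):
        x = x + dt * ((A + u * B) @ x + C * u)
    return x

x_speech = simulate(1.0)               # steady-state activity under speech
# Percentage increase in effective forward coupling when u is on:
print(100 * B[1, 0] / A[1, 0], 100 * B[2, 0] / A[2, 0])   # -75.0, -150.0 (signs from self-inhibition convention)
```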
PLOS Computational Biology | 2011
Kay Henning Brodersen; Thomas M. Schofield; Alexander P. Leff; Cheng Soon Ong; Ekaterina I. Lomakina; Joachim M. Buhmann; Klaas E. Stephan
Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in ‘hidden’ physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into physiologically more well-defined subgroups.
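In outline: fit a DCM per subject, take the posterior parameter estimates as that subject's feature vector, and classify with cross-validation. A minimal sketch with synthetic features standing in for the DCM parameters (the DCM fitting itself is assumed to have happened upstream); group sizes are illustrative:

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic generative embedding: one row of DCM parameter estimates per subject.
rng = np.random.default_rng(0)
X = rng.normal(size=(41, 12))           # 41 subjects x 12 DCM parameters
y = np.array([0] * 21 + [1] * 20)       # 0 = patient, 1 = control (illustrative)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
pred = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    clf.fit(X[train], y[train])         # scaler refit per fold: no test leakage
    pred[test] = clf.predict(X[test])

print(balanced_accuracy_score(y, pred))
```

Keeping every data-dependent step (here just the scaling) inside the cross-validation loop is the kind of unbiased-application point the paper's good-practice guidelines address.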
NeuroImage | 2008
Mohamed L. Seghier; Hwee Ling Lee; Thomas M. Schofield; Caroline Ellis; Cathy J. Price
Cognitive models of reading predict that high frequency regular words can be read in more than one way. We investigated this hypothesis using functional MRI and covariance analysis in 43 healthy skilled readers. Our results dissociated two sets of regions that were differentially engaged across subjects who were reading the same familiar words. Some subjects showed more activation in left inferior frontal and anterior occipito-temporal regions while other subjects showed more activation in right inferior parietal and left posterior occipito-temporal regions. To explore the behavioural correlates of these systems, we measured the difference between reading speed for irregularly spelled words relative to pseudowords outside the scanner in fifteen of our subjects and correlated this measure with fMRI activation for reading familiar words. The faster the lexical reading, the greater the activation in left posterior occipito-temporal and right inferior parietal regions. Conversely, the slower the lexical reading, the greater the activation in left anterior occipito-temporal and left ventral inferior frontal regions. Thus, the double dissociation in irregular and pseudoword reading behaviour predicted the double dissociation in neuronal activation for reading familiar words. We discuss the implications of these results, which may be important for understanding how reading is learnt in childhood or re-learnt following brain damage in adulthood.
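The covariance logic can be sketched as a voxel-wise Pearson correlation between the behavioural index (irregular-word minus pseudoword reading speed) and the activation images for familiar words; the data here are synthetic, and thresholding and correction are omitted:

```python
import numpy as np
from scipy import stats

n_subj, n_vox = 15, 20000
activation = np.random.randn(n_subj, n_vox)     # stand-in contrast images
lexical_advantage = np.random.randn(n_subj)     # stand-in behavioural index

# Pearson r at every voxel in one pass via z-scored variables.
a = (activation - activation.mean(0)) / activation.std(0)
b = (lexical_advantage - lexical_advantage.mean()) / lexical_advantage.std()
r = (a * b[:, None]).mean(0)

# Two-tailed p-values from the t transform of r.
t = r * np.sqrt((n_subj - 2) / (1 - r ** 2))
p = 2 * stats.t.sf(np.abs(t), df=n_subj - 2)
```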
Proceedings of the National Academy of Sciences of the United States of America | 2009
Thomas M. Schofield; Paul Iverson; Stefan J. Kiebel; Klaas E. Stephan; James M. Kilner; K. J. Friston; Jennifer T. Crinion; Cathy J. Price; Alexander P. Leff
Processing of speech and nonspeech sounds occurs bilaterally within primary auditory cortex and surrounding regions of the superior temporal gyrus; however, the manner in which these regions interact during speech and nonspeech processing is not well understood. Here, we investigate the underlying neuronal architecture of the auditory system with magnetoencephalography and a mismatch paradigm. We used a spoken word as a repeating “standard” and periodically introduced 3 “oddball” stimuli that differed in the frequency spectrum of the word's vowel. The closest deviant was perceived as the same vowel as the standard, whereas the other 2 deviants were perceived as belonging to different vowel categories. The neuronal responses to these vowel stimuli were compared with responses elicited by perceptually matched tone stimuli under the same paradigm. For both speech and tones, deviant stimuli induced coupling changes within the same bilateral temporal lobe system. However, vowel oddball effects increased coupling within the left posterior superior temporal gyrus, whereas perceptually equivalent nonspeech oddball effects increased coupling within the right primary auditory cortex. Thus, we show a dissociation in neuronal interactions, occurring at both different hierarchical levels of the auditory system (superior temporal versus primary auditory cortex) and in different hemispheres (left versus right). This hierarchical specificity depends on whether auditory stimuli are embedded in a perceptual context (i.e., a word). Furthermore, our lateralization results suggest left hemisphere specificity for the processing of phonological stimuli, regardless of their elemental (i.e., spectrotemporal) characteristics.
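Upstream of the coupling analysis, the paradigm yields one mismatch response per deviant: average the epochs for each condition and subtract the standard's evoked response. A minimal sketch with synthetic epoched data (shapes and condition names are invented):

```python
import numpy as np

# Synthetic epochs: (trials x channels x time samples) per condition.
epochs = {
    "standard":       np.random.randn(400, 274, 300),
    "within_vowel":   np.random.randn(50, 274, 300),   # closest deviant
    "across_vowel_1": np.random.randn(50, 274, 300),
    "across_vowel_2": np.random.randn(50, 274, 300),
}
evoked = {cond: data.mean(axis=0) for cond, data in epochs.items()}
mismatch = {cond: evoked[cond] - evoked["standard"]
            for cond in epochs if cond != "standard"}
```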
The Journal of Neuroscience | 2012
Thomas M. Schofield; William D. Penny; Klaas E. Stephan; Jennifer T. Crinion; Alan J. Thompson; Cathy J. Price; Alexander P. Leff
We compared brain structure and function in two subgroups of 21 stroke patients with either moderate or severe chronic speech comprehension impairment. Both groups had damage to the supratemporal plane; however, the severe group suffered greater damage to two unimodal auditory areas: primary auditory cortex and the planum temporale. The effects of this damage were investigated using fMRI while patients listened to speech and speech-like sounds. Pronounced changes in connectivity were found in both groups in undamaged parts of the auditory hierarchy. Compared to controls, moderate patients had significantly stronger feedback connections from planum temporale to primary auditory cortex bilaterally, while in severe patients this connection was significantly weaker in the undamaged right hemisphere. This suggests that predictive feedback mechanisms compensate in moderately affected patients but not in severely affected patients. The key pathomechanism in humans with persistent speech comprehension impairments may be impaired feedback connectivity to unimodal auditory areas.
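The group effect could be summarised by comparing the fitted feedback-connection parameter (planum temporale to primary auditory cortex) across groups. A sketch with synthetic per-subject estimates; the group means, spreads, and control-group size are invented:

```python
import numpy as np
from scipy import stats

controls = np.random.normal(0.20, 0.05, size=15)
moderate = np.random.normal(0.35, 0.05, size=10)    # stronger feedback
severe   = np.random.normal(0.05, 0.05, size=11)    # weaker feedback

for name, group in [("moderate", moderate), ("severe", severe)]:
    t, p = stats.ttest_ind(group, controls, equal_var=False)   # Welch's t-test
    print(f"{name} vs controls: t = {t:.2f}, p = {p:.3g}")
```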
Journal of Cognitive Neuroscience | 2011
Fiona M. Richardson; Sue Ramsden; Caroline Ellis; Stephanie Burnett; Odette Megnin; Caroline Catmur; Thomas M. Schofield; Alexander P. Leff; Cathy J. Price
A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a “spoonerisms” task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.
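"Factoring out" the other measures amounts to a partial correlation: residualise both gray matter density and digit span on the covariates, then correlate the residuals. A sketch with synthetic scores:

```python
import numpy as np
from scipy import stats

n = 74
covariates = np.random.randn(n, 2)       # e.g., reading, nonword repetition
gm_density = np.random.randn(n)          # density in a left posterior STS ROI
digit_span = np.random.randn(n)

X = np.column_stack([np.ones(n), covariates])

def residualise(y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r, p = stats.pearsonr(residualise(gm_density), residualise(digit_span))
print(r, p)
```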
Neural Networks | 2012
Melissa Zavaglia; Ryan T. Canolty; Thomas M. Schofield; Alexander P. Leff; Mauro Ursino; Robert T. Knight; William D. Penny
This paper describes a dynamical process which serves both as a model of temporal pattern recognition in the brain and as a forward model of neuroimaging data. This process is considered at two separate levels of analysis: the algorithmic and implementation levels. At an algorithmic level, recognition is based on the use of Occurrence Time features. Using a speech digit database we show that for noisy recognition environments, these features rival standard cepstral coefficient features. At an implementation level, the model is defined using a Weakly Coupled Oscillator (WCO) framework and uses a transient synchronization mechanism to signal a recognition event. In a second set of experiments, we use the strength of the synchronization event to predict the high gamma (75–150 Hz) activity produced by the brain in response to word versus non-word stimuli. Quantitative model fits allow us to make inferences about parameters governing pattern recognition dynamics in the brain.
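The implementation-level mechanism can be caricatured with a Kuramoto-style network of weakly coupled oscillators: phases drift at their natural (here high-gamma) frequencies, and briefly switching coupling on produces the transient rise in the order parameter that signals a recognition event. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 32, 1e-4, 20000
omega = 2 * np.pi * rng.normal(100.0, 5.0, n)    # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)
K = np.zeros(steps)
K[8000:12000] = 150.0                            # coupling on for 0.4 s

R = np.empty(steps)                              # order parameter in [0, 1]
for t in range(steps):
    mean_field = np.exp(1j * theta).mean()
    R[t] = np.abs(mean_field)
    theta += dt * (omega + K[t] * R[t] * np.sin(np.angle(mean_field) - theta))

print(R[:8000].mean(), R[8000:12000].mean())     # low, then high during coupling
```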
Clinical Linguistics & Phonetics | 2011
David W. Green; Louise Ruffle; Alice Grogan; Nilufa Ali; Sue Ramsden; Thomas M. Schofield; Alexander P. Leff; Jenny Crinion; Cathy J. Price
We illustrate the value of the Bilingual Aphasia Test in the diagnostic assessment of a trilingual speaker post-stroke living in England for whom English was a non-native language. The Comprehensive Aphasia Test is routinely used to assess patients in English, but only in combination with the Bilingual Aphasia Test is it possible and practical to provide a full picture of the language impairment. We describe our test selection and the assessment it allows us to make.