Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Sophie K. Scott is active.

Publication

Featured research published by Sophie K. Scott.


Nature Neuroscience | 2009

Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing

Josef P. Rauschecker; Sophie K. Scott

Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production.


The Lancet | 1999

Brain regions involved in articulation

Rjs Wise; J. Greene; C. Büchel; Sophie K. Scott

BACKGROUND: The left inferior frontal gyrus (Broca's area) is generally believed to be critical for the motor act of speech. A lesion-based analysis has, however, shown that the left anterior insula is necessary for accurate articulation. We used functional imaging in normal people to show the neural systems involved in speech during different speech tasks.

METHODS: 12 normal people underwent positron emission tomography with oxygen-15-labelled water as tracer. We measured cerebral activity while participants performed three different tasks: repetition of heard nouns at different rates; listening to single nouns at different rates; and anticipation of listening or repetition. We analysed the data with imaging software.

FINDINGS: Repetition of single words did not activate Broca's area, but activity was seen in three left-lateralised regions: the anterior insula, a localised region in the lateral premotor cortex, and the posterior pallidum. The left anterior insula and lateral premotor cortex showed a conjunction of activity for hearing and articulation. In addition, articulation modulated the response to hearing words in the left dorsolateral temporal cortex, the physiological expression of the speaker's auditory attention being directed towards the stimuli and not his or her articulated responses.

INTERPRETATION: The formulation of an articulatory plan is a function of the left anterior insula and lateral premotor cortex, and not of Broca's area. The left basal ganglia seem to be dominant for speech, although the axial muscles involved receive their motor output from both cerebral hemispheres.


Neuropsychologia | 2003

The role of the rostral frontal cortex (area 10) in prospective memory: a lateral versus medial dissociation

Paul W. Burgess; Sophie K. Scott; Chris Frith

Using the H₂¹⁵O PET method, we investigated whether previous findings of regional cerebral blood flow (rCBF) changes in the polar and superior rostral aspects of the frontal lobes (principally Brodmann's area (BA) 10) during prospective memory (PM) paradigms (i.e. those involving carrying out an intended action after a delay) can be attributed merely to the greater difficulty of such tasks over the baseline conditions typically employed. Three different tasks were administered under four conditions: baseline simple RT; attention-demanding ongoing task only; ongoing task plus a delayed intention (unpracticed); and ongoing task plus a delayed intention (practiced). Under prospective memory conditions, we found significant rCBF decreases in the superior medial aspects of the rostral prefrontal cortex (BA 10) relative to the baseline or ongoing-task-only conditions. However, more lateral aspects of area 10 (plus the medio-dorsal thalamus) showed the opposite pattern, with rCBF increases in the prospective memory conditions relative to the other conditions. These patterns were broadly replicated across all three tasks. Since both the medial and lateral rostral regions showed (a) instances where rCBF was lower during a more effortful condition (as estimated by increased RTs and error rates) than in a less effortful one, and (b) no correlation between rCBF and RT durations or number of errors, a simple task-difficulty explanation of the rCBF changes in the rostral aspects of the frontal lobes during prospective memory tasks is rejected. Instead, the favoured explanation concentrates upon the particular processing demands made by these situations irrespective of the precise stimuli used or the exact nature of the intention. Moreover, the results suggest different roles for the medial and lateral rostral prefrontal cortex, with the former involved in suppressing internally generated thought and the latter in maintaining it.


Neuropsychologia | 2003

Facial expression recognition across the adult life span

Andrew J. Calder; Jill Keane; Tom Manly; Reiner Sprengelmeyer; Sophie K. Scott; Ian Nimmo-Smith; Andrew W. Young

We report three experiments investigating the recognition of emotion from facial expressions across the adult life span. Increasing age produced a progressive reduction in the recognition of fear and, to a lesser extent, anger. In contrast, older participants showed no reduction in the recognition of disgust; rather, there was some evidence of an improvement. The results are discussed in terms of studies from the neuropsychological and functional imaging literature indicating that separate brain regions may underlie the emotions of fear and disgust. We suggest that the dissociable effects found for fear and disgust are consistent with the differential effects of ageing on the brain regions involved in these emotions.


Proceedings of the National Academy of Sciences of the United States of America | 2010

Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations

Disa Sauter; Frank Eisner; Paul Ekman; Sophie K. Scott

Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared with individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.


The Journal of Neuroscience | 2007

Functional integration across brain regions improves speech perception under adverse listening conditions

Jonas Obleser; Richard Wise; M. Alex Dresner; Sophie K. Scott

Speech perception is supported by both acoustic signal decomposition and semantic context. This study, using event-related functional magnetic resonance imaging, investigated the neural basis of this interaction with two speech manipulations, one acoustic (spectral degradation) and the other cognitive (semantic predictability). High compared with low predictability resulted in the greatest improvement in comprehension at an intermediate level of degradation, and this was associated with increased activity in the left angular gyrus, the medial and left lateral prefrontal cortices, and the posterior cingulate gyrus. Functional connectivity between these regions was also increased, particularly with respect to the left angular gyrus. In contrast, activity in both superior temporal sulci and the left inferior frontal gyrus correlated with the amount of spectral detail in the speech signal, regardless of predictability. These results demonstrate that increasing functional connectivity between high-order cortical areas, remote from the auditory cortex, facilitates speech comprehension when the clarity of speech is reduced.
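The abstract does not specify how the spectral degradation was produced, but in this literature it is typically implemented as noise-vocoding: the speech signal is split into a small number of frequency bands, and each band's amplitude envelope is used to modulate bandpass-filtered noise, so fewer bands mean less spectral detail. A minimal sketch in Python, assuming numpy and scipy; the channel count and band edges here are illustrative, not the parameters used in the paper:

```python
# Hypothetical noise-vocoder sketch: degrades speech spectrally by replacing
# fine spectral structure with envelope-modulated noise carriers.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=4000.0):
    """Return a noise-vocoded copy of `signal`; fewer channels = more degradation."""
    # Log-spaced band edges spanning the speech range (illustrative values).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                 # analysis band
        env = np.abs(hilbert(band))                     # amplitude envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(signal)))  # same-band noise
        out += env * noise                              # envelope-modulated carrier
    return out / (np.max(np.abs(out)) + 1e-12)          # normalise to avoid clipping
```

With a high channel count the output approaches intelligible speech; reducing `n_channels` removes spectral detail while preserving the temporal envelope, which is the manipulation the study varies against semantic predictability.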


Cognition | 2004

The functional neuroanatomy of prelexical processing in speech perception

Sophie K. Scott; Richard Wise

In this paper we attempt to relate the prelexical processing of speech, with particular emphasis on functional neuroimaging studies, to the study of auditory perceptual systems by disciplines in the speech and hearing sciences. The elaboration of the sound-to-meaning pathways in the human brain enables their integration into models of the human language system and the definition of potential auditory processing differences between the two cerebral hemispheres. Further, it facilitates comparison with recent developments in the study of the anatomy of non-human primate auditory cortex, which has very precisely revealed architectonically distinct regions, connectivity, and functional specialization.


Neuropsychologia | 1999

Saying it with feeling: neural responses to emotional vocalizations

J. S. Morris; Sophie K. Scott; R. J. Dolan

To determine how vocally expressed emotion is processed in the brain, we measured neural activity in healthy volunteers listening to fearful, sad, happy, and neutral non-verbal vocalizations. Enhanced responses to emotional vocalizations were seen in the caudate nucleus, as well as in anterior insular, temporal, and prefrontal cortices. The right amygdala exhibited decreased responses to fearful vocalizations, as well as fear-specific inhibitory interactions with the left anterior insula. A region of the pons, implicated in acoustic startle responses, also showed fear-specific interactions with the amygdala. The data demonstrate, first, that processing of vocal emotion involves a bilaterally distributed network of brain regions and, second, that processing of fear-related auditory stimuli involves context-specific interactions between the amygdala and other cortical and brainstem regions implicated in fear processing.


The Journal of Neuroscience | 2006

Converging Language Streams in the Human Temporal Lobe

Galina Spitsyna; Jane E. Warren; Sophie K. Scott; Federico Turkheimer; Richard Wise

There is general agreement that, after initial processing in unimodal sensory cortex, the processing pathways for spoken and written language converge to access verbal meaning. However, the existing literature provides conflicting accounts of the cortical location of this convergence. Most aphasic stroke studies localize verbal comprehension to posterior temporal and inferior parietal cortex (Wernicke’s area), whereas evidence from focal cortical neurodegenerative syndromes instead implicates anterior temporal cortex. Previous functional imaging studies in normal subjects have failed to reconcile these opposing positions. Using a functional imaging paradigm in normal subjects that used spoken and written narratives and multiple baselines, we demonstrated common activation during implicit comprehension of spoken and written language in inferior and lateral regions of the left anterior temporal cortex and at the junction of temporal, occipital, and parietal cortex. These results indicate that verbal comprehension uses unimodal processing streams that converge in both anterior and posterior heteromodal cortical regions in the left temporal lobe.


Language and Cognitive Processes | 2012

Speech recognition in adverse conditions: A review

Sven L. Mattys; Matthew H. Davis; Ann R. Bradlow; Sophie K. Scott

This article presents a review of the effects of adverse conditions (ACs) on the perceptual, linguistic, cognitive, and neurophysiological mechanisms underlying speech recognition. The review starts with a classification of ACs based on their origin: degradation at the source (production of a noncanonical signal), degradation during signal transmission (an interfering signal or medium-induced impoverishment of the target signal), and receiver limitations (peripheral, linguistic, cognitive). This is followed by a parallel yet orthogonal classification of ACs based on the locus of their effect: perceptual processes, mental representations, attention, and memory functions. We then review the added value that ACs provide for theories of speech recognition, with a focus on fundamental themes in psycholinguistics: the content and format of lexical representations, the time-course of lexical access, word segmentation, feedback in speech perception and recognition, lexical-semantic integration, the interface between the speech system and general cognition, and the neuroanatomical organisation of speech processing. We conclude by advocating an approach to speech recognition that includes rather than neutralises complex listening environments and individual differences.
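The review's two classifications are orthogonal: any adverse condition has both an origin and a locus of effect. A minimal sketch of that structure in Python; the category names follow the abstract, but the example conditions and their pairings are hypothetical, not taken from the review:

```python
# Illustrative encoding of the review's two orthogonal classifications of
# adverse conditions (ACs): by origin and by locus of effect.
from dataclasses import dataclass
from enum import Enum, auto

class Origin(Enum):
    SOURCE = auto()        # production of a noncanonical signal
    TRANSMISSION = auto()  # interfering signal or impoverished medium
    RECEIVER = auto()      # peripheral, linguistic, or cognitive limitations

class Locus(Enum):
    PERCEPTUAL = auto()        # perceptual processes
    REPRESENTATIONAL = auto()  # mental representations
    ATTENTIONAL = auto()       # attention
    MEMORY = auto()            # memory functions

@dataclass
class AdverseCondition:
    name: str
    origin: Origin
    locus: Locus

# Hypothetical example entries showing that the two axes vary independently:
conditions = [
    AdverseCondition("foreign-accented speech", Origin.SOURCE, Locus.REPRESENTATIONAL),
    AdverseCondition("background babble", Origin.TRANSMISSION, Locus.PERCEPTUAL),
    AdverseCondition("divided attention", Origin.RECEIVER, Locus.ATTENTIONAL),
]
```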

Collaboration

Sophie K. Scott's top co-authors:

Richard Wise (Imperial College London)
Stuart Rosen (University College London)
Disa Sauter (University of Amsterdam)
Rjs Wise (Imperial College London)
César F. Lima (University College London)
Samuel Evans (University College London)