Publication


Featured research published by Disa Sauter.


Proceedings of the National Academy of Sciences of the United States of America | 2010

Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations.

Disa Sauter; Frank Eisner; Paul Ekman; Sophie K. Scott

Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.


Quarterly Journal of Experimental Psychology | 2010

Perceptual cues in nonverbal vocal expressions of emotion.

Disa Sauter; Frank Eisner; Andrew J. Calder; Sophie K. Scott

Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the “basic” emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants’ subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
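The two-step analysis described above (a principal components analysis of listener ratings, followed by discriminant classification of acoustic measures) can be outlined in a few lines of Python. This is only an illustrative sketch, not the authors' analysis code; the synthetic data, feature counts, and category labels below are placeholder assumptions.

    # Minimal sketch, assuming synthetic placeholder data: PCA on listener ratings,
    # then discriminant classification of acoustic measures by emotion category.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_stimuli = 100

    # Hypothetical ratings of each vocalization on ten emotion scales.
    ratings = rng.normal(size=(n_stimuli, 10))
    pca = PCA(n_components=2)
    dimensions = pca.fit_transform(ratings)  # two components, interpretable as valence and arousal
    print("Variance explained:", pca.explained_variance_ratio_)

    # Hypothetical acoustic measures (amplitude, pitch, spectral profile) and an
    # emotion-category label for each stimulus.
    acoustics = rng.normal(size=(n_stimuli, 6))
    labels = rng.integers(0, 5, size=n_stimuli)
    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, acoustics, labels, cv=5)
    print("Mean classification accuracy:", accuracy.mean())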


The Journal of Neuroscience | 2006

Positive Emotions Preferentially Engage an Auditory–Motor “Mirror” System

Jane E. Warren; Disa Sauter; Frank Eisner; Jade Wiland; M. Alexander Dresner; Richard Wise; Stuart Rosen; Sophie K. Scott

Social interaction relies on the ability to react to communication signals. Although cortical sensory–motor “mirror” networks are thought to play a key role in visual aspects of primate communication, evidence for a similar generic role for auditory–motor interaction in primate nonverbal communication is lacking. We demonstrate that a network of human premotor cortical regions activated during facial movement is also involved in auditory processing of affective nonverbal vocalizations. Within this auditory–motor mirror network, distinct functional subsystems respond preferentially to emotional valence and arousal properties of heard vocalizations. Positive emotional valence enhanced activation in a left posterior inferior frontal region involved in representation of prototypic actions, whereas increasing arousal enhanced activation in presupplementary motor area cortex involved in higher-order motor control. Our findings demonstrate that listening to nonverbal vocalizations can automatically engage preparation of responsive orofacial gestures, an effect that is greatest for positive-valence and high-arousal emotions. The automatic engagement of responsive orofacial gestures by emotional vocalizations suggests that auditory–motor interactions provide a fundamental mechanism for mirroring the emotional states of others during primate social behavior. Motor facilitation by positive vocal emotions suggests a basic neural mechanism for establishing cohesive bonds within primate social groups.


Journal of Child Psychology and Psychiatry | 2011

A multimodal approach to emotion recognition ability in autism spectrum disorders

Catherine R. G. Jones; Andrew Pickles; Milena Falcaro; Anita J.S. Marsden; Francesca Happé; Sophie K. Scott; Disa Sauter; Jenifer Tregay; Rebecca J. Phillips; Gillian Baird; Emily Simonoff; Tony Charman

BACKGROUND: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ ranges, and an over-focus on the visual modality. METHODS: We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust was tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (≥ 80 vs. < 80). RESULTS: We found no evidence of a fundamental emotion recognition deficit in the ASD group, and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. CONCLUSIONS: The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD.


NeuroImage | 2011

The structural neuroanatomy of music emotion recognition: Evidence from frontotemporal lobar degeneration

Rohani Omar; Susie M.D. Henley; Jonathan W. Bartlett; Julia C. Hailstone; Elizabeth Gordon; Disa Sauter; Chris Frost; Sophie K. Scott; Jason D. Warren

Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions.


Neuropsychologia | 2009

Developmental phonagnosia: A selective deficit of vocal identity recognition

Lúcia Garrido; Frank Eisner; Carolyn McGettigan; Lauren Stewart; Disa Sauter; J. Richard Hanley; Stefan R. Schweinberger; Jason D. Warren; Brad Duchaine

Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker's vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.


Journal of Experimental Psychology: Human Perception and Performance | 2009

The Roles of Feature-Specific Task Set and Bottom-Up Salience in Attentional Capture: An ERP Study.

Martin Eimer; Monika Kiss; Clare Press; Disa Sauter

We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. Spatially nonpredictive cues preceded search arrays that included a color-defined target. For target-color singleton cues, behavioral spatial cueing effects were accompanied by cue-induced N2pc components, indicative of attentional capture. These effects were only minimally attenuated for nonsingleton target-color cues, underlining the dominance of top-down task set over salience in attentional capture. Nontarget-color singleton cues triggered no N2pc, but instead an anterior N2 component indicative of top-down inhibition. In Experiment 2, inverted behavioral cueing effects of these cues were accompanied by a delayed N2pc to targets at cued locations, suggesting that perceptually salient but task-irrelevant visual events trigger location-specific inhibition mechanisms that can delay subsequent target selection.


Journal of Cognitive Neuroscience | 2010

Rapid detection of emotion from human vocalizations

Disa Sauter; Martin Eimer

The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
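The core ERP measure here is a difference wave contrasting responses to emotional vocalizations with responses to their spectrally rotated, acoustically matched controls. A minimal sketch of that contrast using MNE-Python is given below; the channel names, sampling rate, and waveform data are purely illustrative assumptions, not the study's recordings.

    # Minimal sketch, assuming synthetic placeholder waveforms: build two evoked
    # responses and subtract the acoustically matched control condition from the
    # emotional condition to obtain a difference wave.
    import numpy as np
    import mne

    sfreq = 500.0  # assumed sampling rate in Hz
    info = mne.create_info(["Fz", "Cz", "Pz"], sfreq, ch_types="eeg")
    times = np.arange(-0.1, 0.5, 1.0 / sfreq)

    # Hypothetical condition averages (channels x samples), in volts.
    evoked_fear = mne.EvokedArray(np.random.randn(3, times.size) * 1e-6, info, tmin=-0.1)
    evoked_rotated = mne.EvokedArray(np.random.randn(3, times.size) * 1e-6, info, tmin=-0.1)

    # Fearful minus rotated: an early fronto-central positivity would appear as a
    # positive deflection at frontal/central channels from roughly 150 ms onward.
    difference = mne.combine_evoked([evoked_fear, evoked_rotated], weights=[1, -1])
    print(difference)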


British Journal of Developmental Psychology | 2013

Children's recognition of emotions from vocal cues.

Disa Sauter; Charlotte Panattoni; Francesca Happé

Emotional cues contain important information about the intentions and feelings of others. Despite a wealth of research into children's understanding of facial signals of emotions, little research has investigated the developmental trajectory of interpreting affective cues in the voice. In this study, 48 children aged between 5 and 10 years were tested using forced-choice tasks with non-verbal vocalizations and emotionally inflected speech expressing different positive, neutral and negative states. Children as young as 5 years were proficient in interpreting a range of emotional cues from vocal signals. Consistent with previous work, performance was found to improve with age. Furthermore, the two tasks, examining recognition of non-verbal vocalizations and emotionally inflected speech, respectively, were sensitive to individual differences, with high correspondence of performance across the tasks. From this demonstration of children's ability to recognize emotions from vocal stimuli, we also conclude that this auditory emotion recognition task is suitable for a wide age range of children, providing a novel, empirical way to investigate children's affect recognition skills.


Cortex | 2012

Receptive prosody in nonfluent primary progressive aphasias.

Jonathan D. Rohrer; Disa Sauter; Sophie K. Scott; Jason D. Warren

Introduction: Prosody has been little studied in the primary progressive aphasias (PPAs), a group of neurodegenerative disorders presenting with progressive language impairment. Methods: Here we conducted a systematic investigation of different dimensions of prosody processing (acoustic, linguistic and emotional) in a cohort of 19 patients with nonfluent PPA syndromes (11 with progressive nonfluent aphasia, PNFA; five with progressive logopenic/phonological aphasia, LPA; three with progranulin-associated aphasia, GRN-PPA) compared with a group of healthy older controls. Voxel-based morphometry (VBM) was used to identify neuroanatomical associations of prosodic functions. Results: Broadly comparable receptive prosodic deficits were exhibited by the PNFA, LPA and GRN-PPA subgroups, for acoustic, linguistic and affective dimensions of prosodic analysis. Discrimination of prosodic contours was significantly more impaired than discrimination of simple acoustic cues, and discrimination of intonation was significantly more impaired than discrimination of stress at phrasal level. Recognition of vocal emotions was more impaired than recognition of facial expressions for the PPA cohort, and recognition of certain emotions (in particular, disgust and fear) was relatively more impaired than others (sadness, surprise). VBM revealed atrophy associated with acoustic and linguistic prosody impairments in a distributed cortical network including areas likely to be involved in perceptual analysis of vocalisations (posterior temporal and inferior parietal cortices) and working memory (fronto-parietal circuitry). Grey matter associations of emotional prosody processing were identified for negative emotions (disgust, fear, sadness) in a broadly overlapping network of frontal, temporal, limbic and parietal areas. Conclusions: Taken together, the findings show that receptive prosody is impaired in nonfluent PPA syndromes, and suggest a generic early perceptual deficit of prosodic signal analysis with additional relatively specific deficits (recognition of particular vocal emotions).

Collaboration


Dive into Disa Sauter's collaborations.

Top Co-Authors

Sophie K. Scott (University College London)
Paul Ekman (University of California)
Jason D. Warren (UCL Institute of Neurology)
Dacher Keltner (University of California)
Andrew J. Calder (Cognition and Brain Sciences Unit)