Publication


Featured research published by Thomas Ethofer.


NeuroImage | 2005

Identification of emotional intonation evaluated by fMRI.

Dirk Wildgruber; Axel Riecker; Ingo Hertrich; Michael Erb; Wolfgang Grodd; Thomas Ethofer; Hermann Ackermann

During acoustic communication among human beings, emotional information can be expressed both by the propositional content of verbal utterances and by the modulation of speech melody (affective prosody). It is well established that linguistic processing is bound predominantly to the left hemisphere of the brain. By contrast, the encoding of emotional intonation has been assumed to depend specifically upon right-sided cerebral structures. However, prior clinical and functional imaging studies yielded discrepant data with respect to interhemispheric lateralization and intrahemispheric localization of brain regions contributing to processing of affective prosody. In order to delineate the cerebral network engaged in the perception of emotional tone, functional magnetic resonance imaging (fMRI) was performed during recognition of prosodic expressions of five different basic emotions (happy, sad, angry, fearful, and disgusted) and during phonetic monitoring of the same stimuli. As compared to baseline at rest, both tasks yielded widespread bilateral hemodynamic responses within frontal, temporal, and parietal areas, the thalamus, and the cerebellum. A comparison of the respective activation maps, however, revealed comprehension of affective prosody to be bound to a distinct right-hemisphere pattern of activation, encompassing posterior superior temporal sulcus (Brodmann Area [BA] 22), dorsolateral (BA 44/45), and orbitobasal (BA 47) frontal areas. Activation within left-sided speech areas, in contrast, was observed during the phonetic task. These findings indicate that partially distinct cerebral networks subserve processing of phonetic and intonational information during speech perception.


Progress in Brain Research | 2006

Cerebral processing of linguistic and emotional prosody: fMRI studies.

Dirk Wildgruber; Hermann Ackermann; Benjamin Kreifelts; Thomas Ethofer

During acoustic communication in humans, information about a speaker's emotional state is predominantly conveyed by modulation of the tone of voice (emotional or affective prosody). Based on lesion data, a right hemisphere superiority for cerebral processing of emotional prosody has been assumed. However, the available clinical studies do not yet provide a coherent picture with respect to interhemispheric lateralization effects of prosody recognition and intrahemispheric localization of the respective brain regions. To further delineate the cerebral network engaged in the perception of emotional tone, a series of experiments was carried out based upon functional magnetic resonance imaging (fMRI). The findings obtained from these investigations allow for the separation of three successive processing stages during recognition of emotional prosody: (1) extraction of suprasegmental acoustic information, predominantly subserved by right-sided primary and higher order acoustic regions; (2) representation of meaningful suprasegmental acoustic sequences within posterior aspects of the right superior temporal sulcus; (3) explicit evaluation of emotional prosody at the level of the bilateral inferior frontal cortex. Moreover, implicit processing of affective intonation seems to be bound to subcortical regions mediating automatic induction of specific emotional reactions, such as activation of the amygdala in response to fearful stimuli. As concerns lower-level processing of the underlying suprasegmental acoustic cues, linguistic and emotional prosody seem to share the same right hemisphere neural resources. Explicit judgment of linguistic aspects of speech prosody, however, appears to be linked to left-sided language areas, whereas bilateral orbitofrontal cortex has been found to be involved in explicit evaluation of emotional prosody. These differences in hemispheric lateralization might explain why specific impairments in nonverbal emotional communication subsequent to focal brain lesions are relatively rare clinical observations compared with the more frequent aphasic disorders.


NeuroImage | 2007

Audiovisual integration of emotional signals in voice and face: An event-related fMRI study

Benjamin Kreifelts; Thomas Ethofer; Wolfgang Grodd; Michael Erb; Dirk Wildgruber

In a natural environment, non-verbal emotional communication is multimodal (i.e., speech melody, facial expression) and multifaceted with respect to the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about the multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli compared with visual and auditory stimulation alone. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus when contrasting the audiovisual with the auditory and visual conditions. Further substantiating the role of these regions in the emotional integration process, a linear relationship was observed between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.
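
To make the reported brain-behaviour link concrete: below is a minimal sketch of testing such a linear relationship across subjects, with entirely synthetic numbers standing in for the per-subject classification gain and pSTG response estimates (this is an illustration, not the authors' data or analysis code).

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Synthetic per-subject values: behavioural gain (bimodal accuracy minus best unimodal
# accuracy) and a BOLD parameter estimate from pSTG during the bimodal condition.
n_subjects = 24
behavioural_gain = rng.normal(0.08, 0.03, n_subjects)
pstg_response = 0.6 * behavioural_gain + rng.normal(0.0, 0.02, n_subjects)

# A positive across-subject correlation would indicate that stronger pSTG responses
# accompany a larger audiovisual classification benefit.
r, p = pearsonr(behavioural_gain, pstg_response)
print(f"r = {r:.2f}, p = {p:.3f}")
```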


NeuroImage | 2006

Cerebral pathways in processing of affective prosody: A dynamic causal modeling study

Thomas Ethofer; Silke Anders; Michael Erb; Cornelia Herbert; Sarah Wiethoff; Johanna Kissler; Wolfgang Grodd; Dirk Wildgruber

This study was conducted to investigate the connectivity architecture of neural structures involved in processing of emotional speech melody (prosody). Twenty-four subjects underwent event-related functional magnetic resonance imaging (fMRI) while rating the emotional valence of either the prosody or the semantics of binaurally presented adjectives. Conventional analysis of the fMRI data revealed activation within the right posterior middle temporal gyrus and bilateral inferior frontal cortex during evaluation of affective prosody, and within the left temporal pole, orbitofrontal, and medial superior frontal cortex during judgment of affective semantics. Dynamic causal modeling (DCM) in combination with Bayes factors was used to compare competing neurophysiological models with different intrinsic connectivity structures and input regions within the network of brain regions underlying comprehension of affective prosody. Comparison at the group level revealed the superiority of a model in which the right temporal cortex serves as input region over models in which one of the frontal areas is assumed to receive the external inputs. Moreover, models with parallel information conductance from the right temporal cortex were superior to models in which the two frontal lobes accomplish serial processing steps. In conclusion, the connectivity analysis supports the view that evaluation of affective prosody requires prior analysis of acoustic features within the temporal cortex and that transfer of information from the temporal cortex to the frontal lobes occurs via parallel pathways.
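
The decisive methodological step here is the comparison of competing DCMs via Bayes factors. As a hedged illustration only (made-up log-evidences, not the authors' SPM/DCM pipeline), a fixed-effects group Bayes factor between a "temporal input" and a "frontal input" model could be computed like this:

```python
import numpy as np

# Hypothetical per-subject log model evidences for two competing DCMs.
log_evidence_temporal_input = np.array([-310.2, -295.7, -301.4, -288.9])
log_evidence_frontal_input  = np.array([-318.5, -301.0, -309.8, -297.3])

# Fixed-effects group comparison: the log group Bayes factor is the sum of per-subject
# differences in log evidence; a log Bayes factor above ~3 (BF > ~20) is conventionally
# read as strong evidence in favour of the first model.
log_group_bf = np.sum(log_evidence_temporal_input - log_evidence_frontal_input)
print(f"log group Bayes factor (temporal vs. frontal input model): {log_group_bf:.1f}")
```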


Current Biology | 2009

Decoding of Emotional Information in Voice-Sensitive Cortices

Thomas Ethofer; Dimitri Van De Ville; Klaus R. Scherer; Patrik Vuilleumier

The ability to correctly interpret emotional signals from others is crucial for successful social interaction. Previous neuroimaging studies showed that voice-sensitive auditory areas activate to a broad spectrum of vocally expressed emotions more than to neutral speech melody (prosody). However, this enhanced response occurs irrespective of the specific emotion category, making it impossible to distinguish different vocal emotions with conventional analyses. Here, we presented pseudowords spoken in five prosodic categories (anger, sadness, neutral, relief, joy) during event-related functional magnetic resonance imaging (fMRI), then employed multivariate pattern analysis to discriminate between these categories on the basis of the spatial response pattern within the auditory cortex. Our results demonstrate successful decoding of vocal emotions from fMRI responses in bilateral voice-sensitive areas, which could not be obtained by using averaged response amplitudes only. Pairwise comparisons showed that each category could be classified against all other alternatives, indicating for each emotion a specific spatial signature that generalized across speakers. These results demonstrate for the first time that emotional information is represented by distinct spatial patterns that can be decoded from brain activity in modality-specific cortical areas.
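
As a rough illustration of the multivariate pattern analysis referred to above, the following sketch classifies trial-wise voxel patterns with a linear classifier under cross-validation; the data are synthetic placeholders and the pipeline is simplified relative to the published analysis:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-wise response patterns in voice-sensitive cortex:
# 100 trials x 200 voxels, each trial labelled with one of five prosodic categories.
X = rng.standard_normal((100, 200))
y = rng.integers(0, 5, size=100)  # anger, sadness, neutral, relief, joy

# Linear classifier on spatial activation patterns; cross-validated accuracy above
# chance (20% for five classes) would indicate decodable emotion information.
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.20)")
```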


Magnetic Resonance in Medicine | 2003

Comparison of longitudinal metabolite relaxation times in different regions of the human brain at 1.5 and 3 Tesla

Thomas Ethofer; Irina Mader; Uwe Seeger; Gunther Helms; Michael Erb; Wolfgang Grodd; Albert C. Ludolph; Uwe Klose

In vivo longitudinal relaxation times of N‐acetyl compounds (NA), choline‐containing substances (Cho), creatine (Cr), myo‐inositol (mI), and tissue water were measured at 1.5 and 3 T using a point‐resolved spectroscopy (PRESS) sequence with short echo time (TE). T1 values were determined in six different brain regions: the occipital gray matter (GM), occipital white matter (WM), motor cortex, frontoparietal WM, thalamus, and cerebellum. The T1 relaxation times of water protons were 26–38% longer at 3 T than at 1.5 T. Significantly longer metabolite T1 values at 3 T (11–36%) were found for NA, Cho, and Cr in the motor cortex, frontoparietal WM, and thalamus. The amounts of GM, WM, and cerebrospinal fluid (CSF) within the voxel were determined by segmentation of a 3D image data set. No influence of tissue composition on metabolite T1 values was found, while the longitudinal relaxation times of water protons were strongly correlated with the relative GM content.
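
For orientation, T1 in such experiments is typically estimated by fitting a longitudinal relaxation model to signals acquired at several repetition times. The sketch below fits the generic saturation-recovery curve S(TR) = S0 * (1 - exp(-TR/T1)) to made-up data points; the actual acquisition and fitting scheme of the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation_recovery(tr, s0, t1):
    """Generic longitudinal relaxation model: S(TR) = S0 * (1 - exp(-TR / T1))."""
    return s0 * (1.0 - np.exp(-tr / t1))

# Hypothetical peak intensities of one metabolite acquired at several repetition times (seconds).
tr = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.30, 0.51, 0.76, 0.94, 1.00])

(s0_fit, t1_fit), _ = curve_fit(saturation_recovery, tr, signal, p0=(1.0, 1.5))
print(f"estimated T1: {t1_fit:.2f} s")  # longer T1 values are expected at 3 T than at 1.5 T
```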


NeuroImage | 2008

Cerebral processing of emotional prosody – influence of acoustic parameters and arousal.

Sarah Wiethoff; Dirk Wildgruber; Benjamin Kreifelts; Hubertus G. T. Becker; Cornelia Herbert; Wolfgang Grodd; Thomas Ethofer

The human brain has a preference for processing emotionally salient stimuli. In the auditory modality, emotional prosody can induce such involuntary biasing of processing resources. To investigate the neural correlates underlying automatic processing of emotional information in the voice, words spoken in neutral, happy, erotic, angry, and fearful prosody were presented in a passive-listening functional magnetic resonance imaging (fMRI) experiment. Hemodynamic responses in the right mid superior temporal gyrus (STG) were significantly stronger for all emotional than for neutral intonations. To disentangle the contribution of basic acoustic features and emotional arousal to this activation, the relation between event-related responses and these parameters was evaluated by means of regression analyses. A significant linear dependency between hemodynamic responses of the right mid STG and mean intensity, mean fundamental frequency, variability of fundamental frequency, duration, and arousal of the stimuli was observed. While none of the acoustic parameters alone explained the stronger responses of the right mid STG to emotional relative to neutral prosody, this stronger responsiveness was abolished both by correcting for arousal and by correcting for the conjoint effect of the acoustic parameters. In conclusion, our results demonstrate that the right mid STG is sensitive to various emotions conveyed by prosody, an effect driven by a combination of acoustic features that express the emotional arousal in the speaker's voice.
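
Below is a minimal sketch of the kind of regression analysis described above, relating stimulus-wise response amplitudes to acoustic parameters and arousal. All numbers are synthetic and the variable names and model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stimulus descriptors: mean intensity, mean F0, F0 variability, duration, rated
# arousal, plus a made-up BOLD response amplitude per stimulus from right mid STG.
n_stimuli = 60
predictors = rng.standard_normal((n_stimuli, 5))
bold_amplitude = predictors @ np.array([0.20, 0.30, 0.25, 0.10, 0.50]) + rng.normal(0.0, 0.5, n_stimuli)

# Comparing a model with all predictors against one restricted to the acoustic features
# indicates how much variance arousal explains beyond the acoustic parameters.
full_model = LinearRegression().fit(predictors, bold_amplitude)
acoustics_only = LinearRegression().fit(predictors[:, :4], bold_amplitude)
print(f"R^2, acoustics + arousal: {full_model.score(predictors, bold_amplitude):.2f}")
print(f"R^2, acoustics only:      {acoustics_only.score(predictors[:, :4], bold_amplitude):.2f}")
```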


Cerebral Cortex | 2012

Mapping Aesthetic Musical Emotions in the Brain

Wiebke Trost; Thomas Ethofer; Marcel Zentner; Patrik Vuilleumier

Music evokes complex emotions beyond the pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of the left striatum and insula when high-arousing (Wonder, Joy), but with the right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged the ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but the positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.


Social Cognitive and Affective Neuroscience | 2009

Amygdala activation during reading of emotional adjectives—an advantage for pleasant content

Cornelia Herbert; Thomas Ethofer; Silke Anders; Markus Junghöfer; Dirk Wildgruber; Wolfgang Grodd; Johanna Kissler

This event-related functional magnetic resonance imaging (fMRI) study investigated brain activity elicited by emotional adjectives during silent reading without specific processing instructions. Fifteen healthy volunteers were asked to read a set of randomly presented high-arousing emotional (pleasant and unpleasant) and low-arousing neutral adjectives. Silent reading of emotional in contrast to neutral adjectives evoked enhanced activations in visual, limbic and prefrontal brain regions. In particular, reading pleasant adjectives produced a more robust activation pattern in the left amygdala and the left extrastriate visual cortex than did reading unpleasant or neutral adjectives. Moreover, extrastriate visual cortex and amygdala activity were significantly correlated during reading of pleasant adjectives. Furthermore, pleasant adjectives were better remembered than unpleasant and neutral adjectives in a surprise free recall test conducted after scanning. Thus, visual processing was biased towards pleasant words and involved the amygdala, underscoring recent theoretical views of a general role of the human amygdala in relevance detection for both pleasant and unpleasant stimuli. Results indicate preferential processing of pleasant information in healthy young adults and can be accounted for within the framework of appraisal theory.


NeuroImage | 2011

Flow of affective information between communicating brains.

Silke Anders; Jakob Heinzle; Nikolaus Weiskopf; Thomas Ethofer; John-Dylan Haynes

When people interact, affective information is transmitted between their brains. Modern imaging techniques make it possible to investigate the dynamics of this brain-to-brain transfer of information. Here, we used information-based functional magnetic resonance imaging (fMRI) to investigate the flow of affective information between the brains of senders and perceivers engaged in ongoing facial communication of affect. We found that the level of neural activity within a distributed network of the perceiver's brain can be successfully predicted from the neural activity in the same network in the sender's brain, depending on the affect that is currently being communicated. Furthermore, there was a temporal succession in the flow of affective information from the sender's brain to the perceiver's brain, with information in the perceiver's brain being significantly delayed relative to information in the sender's brain. This delay decreased over time, possibly reflecting a 'tuning in' of the perceiver with the sender. Our data support current theories of intersubjectivity by providing direct evidence that, during ongoing facial communication, a 'shared space' of affect is successively built up between senders and perceivers of affective facial signals.
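
The reported sender-to-perceiver delay can be pictured as the lag at which one brain's time course best predicts the other's. The toy sketch below estimates such a lag from synthetic time courses; it is purely illustrative and is not the information-based fMRI method used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regional time courses (in scans): the perceiver signal is a noisy,
# delayed copy of the sender signal, solely to illustrate lag estimation.
sender = rng.standard_normal(200)
true_lag = 3
perceiver = np.roll(sender, true_lag) + 0.5 * rng.standard_normal(200)

# Estimate the delay as the lag maximising the correlation between the two signals.
max_lag = 10
lags = np.arange(0, max_lag + 1)
corrs = [np.corrcoef(sender[:len(sender) - lag], perceiver[lag:])[0, 1] for lag in lags]
print(f"estimated sender-to-perceiver delay: {lags[int(np.argmax(corrs))]} scans")
```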
