Publication


Featured research published by Benjamin Kreifelts.


Progress in Brain Research | 2006

Cerebral processing of linguistic and emotional prosody: fMRI studies.

Dirk Wildgruber; Hermann Ackermann; Benjamin Kreifelts; Thomas Ethofer

During acoustic communication in humans, information about a speaker's emotional state is predominantly conveyed by modulation of the tone of voice (emotional or affective prosody). Based on lesion data, a right hemisphere superiority for cerebral processing of emotional prosody has been assumed. However, the available clinical studies do not yet provide a coherent picture with respect to interhemispheric lateralization effects of prosody recognition and intrahemispheric localization of the respective brain regions. To further delineate the cerebral network engaged in the perception of emotional tone, a series of experiments was carried out based upon functional magnetic resonance imaging (fMRI). The findings obtained from these investigations allow for the separation of three successive processing stages during recognition of emotional prosody: (1) extraction of suprasegmental acoustic information predominantly subserved by right-sided primary and higher order acoustic regions; (2) representation of meaningful suprasegmental acoustic sequences within posterior aspects of the right superior temporal sulcus; (3) explicit evaluation of emotional prosody at the level of the bilateral inferior frontal cortex. Moreover, implicit processing of affective intonation seems to be bound to subcortical regions mediating automatic induction of specific emotional reactions such as activation of the amygdala in response to fearful stimuli. Regarding lower-level processing of the underlying suprasegmental acoustic cues, linguistic and emotional prosody seem to share the same right hemisphere neural resources. Explicit judgment of linguistic aspects of speech prosody, however, appears to be linked to left-sided language areas, whereas the bilateral orbitofrontal cortex has been found to be involved in explicit evaluation of emotional prosody. These differences in hemispheric lateralization effects might explain why specific impairments in nonverbal emotional communication subsequent to focal brain lesions are relatively rare clinical observations compared with the more frequent aphasic disorders.


NeuroImage | 2007

Audiovisual integration of emotional signals in voice and face: An event-related fMRI study

Benjamin Kreifelts; Thomas Ethofer; Wolfgang Grodd; Michael Erb; Dirk Wildgruber

In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.
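
To illustrate the type of relationship described above, the sketch below simulates per-subject classification accuracies and bimodal beta estimates and tests for a linear association between the audiovisual classification gain and the BOLD response. All values, sample sizes, and variable names are hypothetical; this is not the study's analysis code.

```python
# Illustrative simulation (not the authors' code): relate the audiovisual
# classification gain to the BOLD response of a putative integration region.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 24  # hypothetical sample size

# Simulated per-subject classification accuracies (proportion correct).
acc_auditory = rng.uniform(0.55, 0.75, n_subjects)
acc_visual = rng.uniform(0.60, 0.80, n_subjects)
acc_audiovisual = np.maximum(acc_auditory, acc_visual) + rng.uniform(0.0, 0.15, n_subjects)

# Behavioural gain: bimodal accuracy relative to the better unimodal condition.
gain = acc_audiovisual - np.maximum(acc_auditory, acc_visual)

# Simulated beta estimates for the bimodal condition (e.g. in pSTG),
# constructed here to depend on the gain.
beta_bimodal = 0.5 + 4.0 * gain + rng.normal(0.0, 0.2, n_subjects)

# Test for a linear relationship between behavioural gain and BOLD response.
r, p = stats.pearsonr(gain, beta_bimodal)
print(f"r = {r:.2f}, p = {p:.4f}")
```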


NeuroImage | 2008

Cerebral processing of emotional prosody--influence of acoustic parameters and arousal.

Sarah Wiethoff; Dirk Wildgruber; Benjamin Kreifelts; Hubertus G. T. Becker; Cornelia Herbert; Wolfgang Grodd; Thomas Ethofer

The human brain has a preference for processing of emotionally salient stimuli. In the auditory modality, emotional prosody can induce such involuntary biasing of processing resources. To investigate the neural correlates underlying automatic processing of emotional information in the voice, words spoken in neutral, happy, erotic, angry, and fearful prosody were presented in a passive-listening functional magnetic resonance imaging (fMRI) experiment. Hemodynamic responses in right mid superior temporal gyrus (STG) were significantly stronger for all emotional than for neutral intonations. To disentangle the contribution of basic acoustic features and emotional arousal to this activation, the relation between event-related responses and these parameters was evaluated by means of regression analyses. A significant linear dependency between hemodynamic responses of right mid STG and mean intensity, mean fundamental frequency, variability of fundamental frequency, duration, and arousal of the stimuli was observed. While none of the acoustic parameters alone explained the stronger responses of right mid STG to emotional relative to neutral prosody, this stronger responsiveness was abolished both by correcting for arousal and by correcting for the conjoint effect of the acoustic parameters. In conclusion, our results demonstrate that right mid STG is sensitive to various emotions conveyed by prosody, an effect which is driven by a combination of acoustic features that express the emotional arousal in the speaker's voice.
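
The regression logic sketched in this abstract, testing whether an emotional > neutral response difference survives after partialling out arousal or the conjoint acoustic parameters, could look roughly like the following. The data are simulated and all parameter values are assumptions; this is not the published pipeline.

```python
# Illustrative simulation (not the published pipeline): does an emotional >
# neutral difference survive after regressing out arousal or acoustics?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 200  # hypothetical number of stimuli

# Simulated stimulus descriptors.
arousal = rng.uniform(1, 9, n_trials)
intensity = rng.normal(70, 5, n_trials)          # mean intensity (dB)
mean_f0 = rng.normal(200, 30, n_trials)          # mean fundamental frequency (Hz)
f0_variability = rng.uniform(10, 60, n_trials)   # F0 variability (Hz)
duration = rng.uniform(0.4, 1.2, n_trials)       # stimulus duration (s)
is_emotional = arousal > 5                       # toy labelling by arousal

# Simulated response amplitudes, driven here purely by arousal.
response = 0.3 * arousal + rng.normal(0, 1, n_trials)

def residualise(y, covariates):
    """Remove the least-squares fit of the covariates (plus intercept) from y."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Raw emotional vs neutral contrast.
_, p_raw = stats.ttest_ind(response[is_emotional], response[~is_emotional])

# Same contrast after correcting for arousal, and after correcting for the
# conjoint effect of the acoustic parameters.
resid_arousal = residualise(response, [arousal])
resid_acoustic = residualise(response, [intensity, mean_f0, f0_variability, duration])
_, p_arousal = stats.ttest_ind(resid_arousal[is_emotional], resid_arousal[~is_emotional])
_, p_acoustic = stats.ttest_ind(resid_acoustic[is_emotional], resid_acoustic[~is_emotional])

print(f"raw p = {p_raw:.4f}, arousal-corrected p = {p_arousal:.4f}, "
      f"acoustics-corrected p = {p_acoustic:.4f}")
```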


Journal of Cognitive Neuroscience | 2009

Differential influences of emotion, task, and novelty on brain regions underlying the processing of speech melody

Thomas Ethofer; Benjamin Kreifelts; Sarah Wiethoff; Jonathan Wolf; Wolfgang Grodd; Patrik Vuilleumier; Dirk Wildgruber

We investigated the functional characteristics of brain regions implicated in processing of speech melody by presenting words spoken in either neutral or angry prosody during a functional magnetic resonance imaging experiment using a factorial habituation design. Subjects judged either affective prosody or word class for these vocal stimuli, which were heard for the first, second, or third time. Voice-sensitive temporal cortices, as well as the amygdala, insula, and mediodorsal thalami, responded more strongly to angry than to neutral prosody. These stimulus-driven effects were not influenced by the task, suggesting that these brain structures are automatically engaged during processing of emotional information in the voice and operate relatively independently of cognitive demands. By contrast, the right middle temporal gyrus and the bilateral orbito-frontal cortices (OFC) responded more strongly during emotion classification than during word classification, but were also sensitive to anger expressed by the voices, suggesting that some perceptual aspects of prosody are also encoded within these regions subserving explicit processing of vocal emotion. The bilateral OFC showed a selective modulation by emotion and repetition, with particularly pronounced responses to angry prosody during the first presentation only, indicating a critical role of the OFC in detection of vocal information that is both novel and behaviorally relevant. These results converge with previous findings obtained for angry faces and suggest a general involvement of the OFC for recognition of anger irrespective of the sensory modality. Taken together, our study reveals that different aspects of voice stimuli and perceptual demands modulate distinct areas involved in the processing of emotional prosody.


Neuropsychologia | 2009

Cerebral representation of non-verbal emotional perception: fMRI reveals audiovisual integration area between voice- and face-sensitive regions in the superior temporal sulcus

Benjamin Kreifelts; Thomas Ethofer; Thomas Shiozawa; Wolfgang Grodd; Dirk Wildgruber

Successful social interaction relies on multimodal integration of non-verbal emotional signals. The neural correlates of this function, along with those underlying the processing of human faces and voices, have been linked to the superior temporal sulcus (STS) in previous neuroimaging studies. Yet, recently it has been demonstrated that this structure consists of several anatomically defined sections, including a trunk section as well as two separate terminal branches, and exhibits a pronounced spatial variability across subjects. Using functional magnetic resonance imaging (fMRI), we demonstrated that the neural representations of the audiovisual integration of non-verbal emotional signals, voice sensitivity and face sensitivity are located in different parts of the STS, with maximum voice sensitivity in the trunk section and maximum face sensitivity in the posterior terminal ascending branch. The audiovisual integration area for emotional signals is located at the bifurcation of the STS at an overlap of voice- and face-sensitive regions. In summary, our findings provide evidence for a functional subdivision of the STS into modules subserving the processing of different aspects of social communication, here exemplified by human voices and faces and the audiovisual integration of emotional signals from these sources, and suggest a possible interaction of the underlying voice- and face-sensitive neuronal populations during the formation of the audiovisual emotional percept.
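
A minimal sketch of the underlying map logic, assuming toy simulated t-maps rather than real fMRI data: threshold voice-sensitivity, face-sensitivity, and audiovisual-integration maps and ask whether the integration effect falls within the voice/face overlap. Grid size, cluster positions, and threshold are invented for illustration.

```python
# Toy illustration (simulated maps, not real fMRI data): threshold voice-,
# face-, and integration-sensitivity maps and inspect their spatial overlap.
import numpy as np

rng = np.random.default_rng(4)
shape = (20, 20, 20)  # small voxel grid standing in for a temporal-lobe region

# Noise maps plus hypothetical clusters: a voice cluster, a partly overlapping
# face cluster, and an integration cluster placed between the two.
voice_t = rng.normal(0, 1, shape)
face_t = rng.normal(0, 1, shape)
av_t = rng.normal(0, 1, shape)
voice_t[4:11, 8:12, 8:12] += 4.0
face_t[9:16, 8:12, 8:12] += 4.0
av_t[8:12, 8:12, 8:12] += 4.0

threshold = 3.1  # hypothetical voxel-level cutoff
voice_mask = voice_t > threshold
face_mask = face_t > threshold
av_mask = av_t > threshold

overlap = voice_mask & face_mask
print("voice/face overlap voxels:", np.count_nonzero(overlap))
print("integration voxels falling inside the overlap:",
      np.count_nonzero(av_mask & overlap))
```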


International Journal of Speech-Language Pathology | 2009

A cerebral network model of speech prosody comprehension

Dirk Wildgruber; Thomas Ethofer; Didier Maurice Grandjean; Benjamin Kreifelts

Comprehension of information conveyed by the tone of voice is highly important for successful social interactions (Grandjean et al., 2006). Based on lesion data, a superiority of the right hemisphere for cerebral processing of speech prosody has been assumed. According to an early neuroanatomical model, prosodic information is encoded within distinct right-sided perisylvian regions which are organized in complete analogy to the left-sided language areas (Ross, 1981). While the majority of lesion studies are in line with the assumption that the right temporal cortex is highly important for the comprehension of speech melody (Adolphs et al., 2001; Borod et al., 2002; Heilman et al., 1984), some studies indicate that a widespread, partially bilateral network of cerebral regions contributes to prosody processing, including the frontal cortex (Adolphs et al., 2002; Hornak et al., 2003; Rolls, 1999) and the basal ganglia (Cancelliere & Kertesz, 1990; Pell & Leonard, 2003). More recently, functional imaging experiments have helped to differentiate specific functions of distinct brain areas contributing to recognition of speech prosody (Ackermann et al., 2004; Schirmer & Kotz, 2006; Wildgruber et al., 2006). Observations in healthy subjects indicate a strong association of cerebral responses and acoustic voice properties in some regions (stimulus-driven effects), whereas other areas show modulation of activation linked to the focusing of attention to specific task components (task-dependent effects). Here we present a refined model of prosody processing and cross-modal integration of emotional signals from face and voice which differentiates successive steps of cerebral processing involving auditory analysis and multimodal integration of communicative signals within the temporal cortex and evaluative judgements within the frontal lobes.


Emotion | 2012

Age-related decrease in recognition of emotional facial and prosodic expressions.

Lena Lambrecht; Benjamin Kreifelts; Dirk Wildgruber

The recognition of nonverbal emotional signals and the integration of multimodal emotional information are essential for successful social communication among humans of any age. Whereas prior studies of age dependency in the recognition of emotion often focused on either the prosodic or the facial aspect of nonverbal signals, our purpose was to create a more naturalistic setting by presenting dynamic stimuli under three experimental conditions: auditory, visual, and audiovisual. Eighty-four healthy participants (women = 44, men = 40; age range 20-70 years) were tested for their abilities to recognize emotions either mono- or bimodally on the basis of emotional (happy, alluring, angry, disgusted) and neutral nonverbal stimuli from voice and face. Additionally, we assessed visual and auditory acuity, working memory, verbal intelligence, and emotional intelligence to explore potential explanatory effects of these population parameters on the relationship between age and emotion recognition. Applying unbiased hit rates as the performance measure, we analyzed the data with linear regression analyses, t tests, and mediation analyses. We found a linear, age-related decrease in emotion recognition independent of stimulus modality and emotional category. In contrast, the improvement in recognition rates associated with audiovisual integration of bimodal stimuli seems to be maintained over the life span. The reduction in emotion recognition ability at an older age could not be sufficiently explained by age-related decreases in hearing, vision, working memory, and verbal intelligence. These findings suggest alterations in social perception at a level of complexity beyond basic perceptional and cognitive abilities.
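
As a worked illustration of the performance measure and the age regression mentioned above, the sketch below computes Wagner's (1993) unbiased hit rates from a toy confusion matrix and regresses simulated recognition scores on age; all numbers are invented for illustration and do not reproduce the study's data.

```python
# Illustrative sketch (invented numbers): Wagner's (1993) unbiased hit rates
# from a confusion matrix, plus a linear regression of performance on age.
import numpy as np
from scipy import stats

def unbiased_hit_rates(confusion):
    """Per category: (correct responses)^2 / (stimulus total * response total)."""
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    stimulus_totals = confusion.sum(axis=1)  # rows: presented categories
    response_totals = confusion.sum(axis=0)  # columns: chosen categories
    return hits ** 2 / (stimulus_totals * response_totals)

# Toy confusion matrix for one participant (rows: presented happy, alluring,
# angry, disgusted, neutral; columns: the corresponding responses).
confusion = [[8, 1, 0, 0, 1],
             [2, 6, 0, 1, 1],
             [0, 0, 9, 1, 0],
             [0, 1, 2, 6, 1],
             [1, 1, 0, 0, 8]]
print(unbiased_hit_rates(confusion))

# Simulated relationship between age and mean recognition performance.
rng = np.random.default_rng(2)
age = rng.uniform(20, 70, 84)
performance = 0.8 - 0.004 * age + rng.normal(0, 0.05, 84)
slope, intercept, r, p, stderr = stats.linregress(age, performance)
print(f"slope per year = {slope:.4f}, p = {p:.4g}")
```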


Human Brain Mapping | 2009

Association of trait emotional intelligence and individual fMRI-activation patterns during the perception of social signals from voice and face

Benjamin Kreifelts; Thomas Ethofer; Elisabeth Huberle; Wolfgang Grodd; Dirk Wildgruber

Multimodal integration of nonverbal social signals is essential for successful social interaction. Previous studies have implicated the posterior superior temporal sulcus (pSTS) in the perception of social signals such as nonverbal emotional signals as well as in social cognitive functions like mentalizing/theory of mind. In the present study, we evaluated the relationships between trait emotional intelligence (EI) and fMRI activation patterns in individual subjects during the multimodal perception of nonverbal emotional signals from voice and face. Trait EI was linked to hemodynamic responses in the right pSTS, an area which also exhibits a distinct sensitivity to human voices and faces. Within all other regions known to subserve the perceptual audiovisual integration of human social signals (i.e., amygdala, fusiform gyrus, thalamus), no such linked responses were observed. This functional difference in the network for the audiovisual perception of human social signals indicates a specific contribution of the pSTS as a possible interface between the perception of social information and social cognition.


Social Cognitive and Affective Neuroscience | 2007

The voices of seduction: cross-gender effects in processing of erotic prosody

Thomas Ethofer; Sarah Wiethoff; Silke Anders; Benjamin Kreifelts; Wolfgang Grodd; Dirk Wildgruber

Gender-specific differences in cognitive functions have been widely discussed. For social cognition, such as the perception of emotions conveyed by non-verbal cues, a female advantage is generally assumed. In the present study, however, we revealed a cross-gender interaction, with increased responses to voices of the opposite sex in both male and female subjects. This effect was confined to an erotic tone of speech, both in the behavioural data and in haemodynamic responses within voice-sensitive brain areas (right middle superior temporal gyrus). The observed response pattern thus indicates a particular sensitivity to emotional voices that have a high behavioural relevance for the listener.
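
One simple way to express the reported cross-gender effect is a paired comparison of each listener's responses to opposite-sex versus same-sex voices for erotic prosody; the sketch below uses simulated values and hypothetical sample sizes and is not the study's analysis.

```python
# Illustrative sketch (simulated values): paired comparison of responses to
# opposite-sex vs same-sex voices for erotic prosody.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_listeners = 22  # hypothetical sample size

# Per-listener response estimates (e.g. ratings or beta values), erotic prosody only.
opposite_sex = rng.normal(1.2, 0.5, n_listeners)
same_sex = rng.normal(0.9, 0.5, n_listeners)

# A positive paired difference indicates stronger responses to opposite-sex voices.
t, p = stats.ttest_rel(opposite_sex, same_sex)
print(f"t({n_listeners - 1}) = {t:.2f}, p = {p:.4f}")
```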


NeuroImage | 2010

It is not always tickling: Distinct cerebral responses during perception of different laughter types

Diana P. Szameitat; Benjamin Kreifelts; Kai Alter; André J. Szameitat; Annette Sterr; Wolfgang Grodd; Dirk Wildgruber

Laughter is highly relevant for social interaction in human beings and non-human primates. In humans as well as in non-human primates laughter can be induced by tickling. Human laughter, however, has further diversified and encompasses emotional laughter types with various communicative functions, e.g. joyful and taunting laughter. Here, it was evaluated if this evolutionary diversification of ecological functions is associated with distinct cerebral responses underlying laughter perception. Functional MRI revealed a double-dissociation of cerebral responses during perception of tickling laughter and emotional laughter (joy and taunt) with higher activations in the anterior rostral medial frontal cortex (arMFC) when emotional laughter was perceived, and stronger responses in the right superior temporal gyrus (STG) during appreciation of tickling laughter. Enhanced activation of the arMFC for emotional laughter presumably reflects increasing demands on social cognition processes arising from the greater social salience of these laughter types. Activation increase in the STG for tickling laughter may be linked to the higher acoustic complexity of this laughter type. The observed dissociation of cerebral responses for emotional laughter and tickling laughter was independent of task-directed focusing of attention. These findings support the postulated diversification of human laughter in the course of evolution from an unequivocal play signal to laughter with distinct emotional contents subserving complex social functions.

Collaboration


Dive into Benjamin Kreifelts's collaborations.

Top Co-Authors

Heike Jacob, University of Tübingen
Jan Ritter, University of Tübingen