
Publication


Featured research published by Sascha Frühholz.


Biological Psychology | 2011

Time course of implicit processing and explicit processing of emotional faces and emotional words.

Sascha Frühholz; Anne Jellinghaus; Manfred Herrmann

Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information about emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, comparable to that shown for faces, whereas others report rather delayed decoding of emotional information from words. Here, we introduced an implicit (color naming) and an explicit (emotion judgment) task with facial expressions and words, both containing emotional information, to directly compare the time course of emotion processing using event-related potentials (ERPs). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, EPN, and LPP components, and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN, and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source in the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing than emotional words, although the emotional value of words may have been detected at early stages of processing in the visual cortex, as indicated by the extrastriate source activity.
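
The time-course comparison described here follows a standard ERP workflow: epoch the EEG around stimulus onset, average per condition, and extract mean amplitudes in component windows. Below is a minimal sketch of that workflow in MNE-Python; the file name, event codes, and channel names are hypothetical, and this illustrates the general approach rather than the authors' pipeline.

```python
import mne

# Hypothetical raw file and event codes, for illustration only.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"face/neutral": 1, "face/negative": 2}

# Epoch from -200 ms to 800 ms around stimulus onset, baseline-corrected.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Mean amplitude in an N170 window (~150-200 ms) over two assumed
# occipito-temporal channels, computed per condition.
for cond in event_id:
    evoked = epochs[cond].average().pick(["P7", "P8"])
    n170 = evoked.copy().crop(0.15, 0.20).data.mean()  # in volts
    print(f"{cond}: mean N170 amplitude = {n170 * 1e6:.2f} microvolts")
```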


Cerebral Cortex | 2012

Specific Brain Networks during Explicit and Implicit Decoding of Emotional Prosody

Sascha Frühholz; Leonardo Ceravolo; Didier Maurice Grandjean

To better define the underlying brain network for the decoding of emotional prosody, we recorded high-resolution brain scans during an implicit and an explicit decoding task of angry and neutral prosody. Several subregions in the right superior temporal gyrus (STG) and bilaterally in the inferior frontal gyrus (IFG) were sensitive to emotional prosody. Implicit processing of emotional prosody engaged regions in the posterior superior temporal gyrus (pSTG) and bilateral IFG subregions, whereas explicit processing relied more on the mid STG, left IFG, amygdala, and subgenual anterior cingulate cortex. Furthermore, whereas some bilateral pSTG regions and the amygdala showed general sensitivity to prosody-specific acoustical features during implicit processing, activity in inferior frontal brain regions was insensitive to these features. Together, the data suggest a differentiated STG, IFG, and subcortical network of brain regions, which varies with the level of processing and shows a higher specificity during explicit decoding of emotional prosody.
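
To make the contrast underlying such results concrete, here is a minimal sketch of a first-level GLM testing angry against neutral prosody with nilearn. The file name, onsets, and parameters are hypothetical assumptions; the authors' high-resolution analysis may have differed in its specifics.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical event table for one run (onsets and durations in seconds).
events = pd.DataFrame({
    "onset": [10.0, 25.0, 40.0, 55.0],
    "duration": [2.0, 2.0, 2.0, 2.0],
    "trial_type": ["angry", "neutral", "angry", "neutral"],
})

# Fit a voxel-wise GLM to a (hypothetical) preprocessed BOLD run.
model = FirstLevelModel(t_r=2.0, smoothing_fwhm=6)
model = model.fit("sub-01_task-prosody_bold.nii.gz", events=events)

# z-map for the angry > neutral contrast.
zmap = model.compute_contrast("angry - neutral", output_type="z_score")
zmap.to_filename("angry_gt_neutral_zmap.nii.gz")
```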


Neuroscience & Biobehavioral Reviews | 2013

Subthalamic nucleus: A key structure for emotional component synchronization in humans

Julie Anne Peron; Sascha Frühholz; Marc Vérin; Didier Maurice Grandjean

Affective neuroscience is concerned with identifying the neural bases of emotion. For historical and methodological reasons, models describing the brain architecture that supports emotional processes in humans have tended to neglect the basal ganglia, focusing instead on cortical and amygdalar mechanisms. Now, however, deep brain stimulation (DBS) of the subthalamic nucleus (STN), a neurosurgical treatment for Parkinson's disease and obsessive-compulsive disorder, is helping researchers explore the possible functional role of this particular basal ganglion in emotional processes. After reviewing studies that have used DBS in this way, we propose a model in which the STN plays a crucial role in producing temporally organized neural co-activation patterns at the cortical and subcortical levels that are essential for generating emotions and related feelings.


Progress in Neurobiology | 2014

The role of the medial temporal limbic system in processing emotions in voice and music.

Sascha Frühholz; Wiebke Trost; Didier Maurice Grandjean

Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.


Neuroscience & Biobehavioral Reviews | 2013

Processing of emotional vocalizations in bilateral inferior frontal cortex

Sascha Frühholz; Didier Maurice Grandjean

A current view proposes that the right inferior frontal cortex (IFC) is particularly responsible for attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Although some studies seem to support this view, an exhaustive review of all recent imaging studies points to an important functional role of both the right and the left IFC in processing vocal emotions. Besides the supposed predominant role of the IFC in attentive processing and evaluation of emotional voices, these recent studies also point to a possible role of the IFC in preattentive and implicit processing of vocal emotions. The studies specifically provide evidence that both the right and the left IFC show a similar anterior-to-posterior gradient of functional activity in response to emotional vocalizations. This bilateral IFC gradient depends both on the nature or medium of emotional vocalizations (emotional prosody versus nonverbal expressions) and on the level of attentive processing (explicit versus implicit processing), closely resembling the distribution of terminal regions of distinct auditory pathways, which provide either global or dynamic acoustic information. Here we suggest a functional distribution in which several IFC subregions process different acoustic information conveyed by emotional vocalizations: whereas the rostro-ventral IFC might categorize emotional vocalizations, the caudo-dorsal IFC might be specifically sensitive to their temporal features.


NeuroImage | 2011

Spatio-temporal brain dynamics in a combined stimulus–stimulus and stimulus–response conflict task

Sascha Frühholz; Ben Godde; Mareike Finke; Manfred Herrmann

It is not yet well known whether different types of conflicts share common brain mechanisms of conflict processing or rely on distinct ones. We used a combined Flanker (stimulus-stimulus; S-S) and Simon (stimulus-response; S-R) conflict paradigm in both an fMRI and an EEG study. S-S conflicts induced stronger behavioral interference effects than S-R conflicts, and the latter decayed with increasing response latencies. Besides some similar medial frontal activity across all conflict trials, which was, however, not statistically consistent across trials, we found distinct activations depending on the type of conflict. S-S conflicts activated the anterior cingulate cortex and modulated the N2 and early P3 component, with underlying source activity in inferior frontal cortex. S-R conflicts produced distinct activations in the posterior cingulate cortex and modulated the late P3b component, with underlying source activity in superior parietal cortex. Double conflict trials containing both S-S and S-R conflicts revealed, first, distinct anterior frontal activity representing a meta-processing unit and, second, a sequential modulation of the N2 and the P3b components. The N2 modulation during double conflict trials was accompanied by increased source activity in the medial frontal gyrus (MeFG). In summary, S-S and S-R conflict processing mostly rely on distinct mechanisms, and these conflicts differentially modulate the temporal stages of stimulus processing.
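
A combined paradigm of this kind factorially crosses the two conflict types within single trials. The sketch below uses a hypothetical coding scheme to show how each trial can be classified as congruent, S-S, S-R, or double conflict.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    target: str         # response assigned to the target ("left"/"right")
    flankers: str       # response implied by the flanking stimuli
    stimulus_side: str  # screen side on which the stimulus appears

def classify(trial: Trial) -> str:
    ss = trial.flankers != trial.target       # stimulus-stimulus conflict
    sr = trial.stimulus_side != trial.target  # stimulus-response conflict
    if ss and sr:
        return "double conflict"
    if ss:
        return "S-S conflict"
    if sr:
        return "S-R conflict"
    return "congruent"

print(classify(Trial(target="left", flankers="right", stimulus_side="right")))
# -> double conflict
```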


Cortex | 2013

Amygdala subregions differentially respond and rapidly adapt to threatening voices.

Sascha Frühholz; Didier Maurice Grandjean

Emotional states can influence the human voice during speech utterances. Here, we tested the sensitivity and signal adaptation of functional activity in amygdala subregions to threatening voices during high-resolution functional magnetic resonance imaging. The bilateral superficial (SF) complex and the right laterobasal (LB) complex of the amygdala were generally sensitive to emotional cues from speech prosody. Activity was stronger, however, when listeners directly focused on the emotional prosody of the voice instead of attending to a nonemotional feature. Explicit attention to prosody especially elicited activity in the right LB complex. Furthermore, the right SF specifically showed an effect of sensitization, indicated by a significant signal increase in response to emotional voices that were preceded by neutral events. The bilateral SF showed signal habituation to repeated emotional voices, indicated by a significant signal decrease for an emotional event preceded by another emotional event. Finally, the right SF and LB showed an effect of desensitization after the processing of emotional voices, indicated by a signal decrease for neutral events that followed emotional events. Thus, different amygdala subregions are sensitive to threatening emotional voices, and their activity depends on the attentional focus as well as on the proximal temporal context of other neutral and emotional events.
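
The adaptation effects reported here rest on categorizing each voice both by its own emotion and by the emotion of the immediately preceding event. A minimal sketch of that trial-history coding (hypothetical labels, not the authors' analysis scripts):

```python
def history_cells(sequence):
    """Pair each trial with its predecessor: 'prev->current' labels."""
    return [f"{prev}->{curr}" for prev, curr in zip(sequence, sequence[1:])]

seq = ["neutral", "emotional", "emotional", "neutral"]
print(history_cells(seq))
# -> ['neutral->emotional',    sensitization cell: increase (right SF)
#     'emotional->emotional',  habituation cell: decrease (bilateral SF)
#     'emotional->neutral']    desensitization cell: decrease (right SF/LB)
```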


Neuroscience & Biobehavioral Reviews | 2013

Multiple subregions in superior temporal cortex are differentially sensitive to vocal expressions: A quantitative meta-analysis

Sascha Frühholz; Didier Maurice Grandjean

Vocal expressions of emotions consistently activate regions in the superior temporal cortex (STC), including regions in the primary and secondary auditory cortex (AC). Studies usually report broadly extended functional activations in response to vocal expressions, with considerable variation in peak locations across several auditory subregions. This might suggest different and distributed functional roles across these subregions instead of a uniform role for the decoding of vocal emotions. We reviewed recent studies and conducted an activation likelihood estimation meta-analysis summarizing recent fMRI and PET studies dealing with the processing of vocal expressions in the STC and AC. We included two stimulus-specific factors (paraverbal/nonverbal expression, stimulus valence) and one task-specific factor (attentional focus) in the analysis. These factors considerably influenced whether functional activity was located in the AC or STC (influence of valence and attentional focus), the laterality of activations (influence of paraverbal/nonverbal expressions), and the anterior-posterior location of STC activity (influence of valence). These data suggest distributed functional roles and a differentiated network of auditory subregions in response to vocal expressions.
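
Activation likelihood estimation of this kind can be sketched with the NiMARE package: peak coordinates from the included studies are smoothed into modeled activation maps, combined into an ALE map, and thresholded against a Monte Carlo null distribution. The input file below is hypothetical, and this shows a generic ALE workflow rather than the exact analysis reported here.

```python
from nimare.io import convert_sleuth_to_dataset
from nimare.meta.cbma import ALE
from nimare.correct import FWECorrector

# Hypothetical Sleuth-format text file of peak coordinates per study.
dset = convert_sleuth_to_dataset("vocal_expression_foci.txt")

ale = ALE()               # Gaussian kernel around each reported focus
results = ale.fit(dset)   # combine modeled activation maps into an ALE map

# Cluster-level FWE correction via a Monte Carlo null distribution.
corrector = FWECorrector(method="montecarlo", n_iters=10000)
corrected = corrector.transform(results)
corrected.save_maps(output_dir="ale_results")
```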


NeuroImage | 2012

Towards a fronto-temporal neural network for the decoding of angry vocal expressions

Sascha Frühholz; Didier Maurice Grandjean

Vocal expressions commonly elicit activity in superior temporal and inferior frontal cortices, indicating a distributed network for decoding vocally expressed emotions. We examined the involvement of this fronto-temporal network in the decoding of angry voices during attention towards (explicit attention) or away from (implicit attention) emotional cues in voices, based on a reanalysis of previous data (Frühholz, S., Ceravolo, L., Grandjean, D., 2012. Cerebral Cortex 22, 1107-1117). The general network revealed high interconnectivity of the bilateral inferior frontal gyrus (IFG) with different bilateral voice-sensitive regions in the mid and posterior superior temporal gyri. Right superior temporal gyrus (STG) regions showed connectivity to the left primary and secondary auditory cortex (AC) as well as to high-level auditory regions. This general network revealed differences in connectivity depending on the attentional focus. Explicit attention to angry voices revealed a specific right-left STG network connecting higher-level AC. During attention to a nonemotional vocal feature, we also found a left-right STG network implicitly elicited by angry voices that also included low-level left AC. Furthermore, only during this implicit processing was there widespread interconnectivity between bilateral IFG and bilateral STG. This indicates that while implicit attention to angry voices recruits extended bilateral STG and IFG networks for the sensory and evaluative decoding of voices, explicit attention to angry voices solely involves a network of bilateral STG regions, probably for the integrative recognition of emotional cues from voices.
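
To illustrate the connectivity logic in its simplest form, here is a sketch of seed-based correlations among hypothetical IFG and STG coordinates using nilearn. The study's actual analysis modeled attention-dependent connectivity, which this plain correlation sketch does not capture.

```python
import numpy as np
from nilearn.maskers import NiftiSpheresMasker

# Hypothetical MNI peaks: left/right IFG, then left/right STG.
seeds = [(-45, 25, 5), (45, 25, 5), (-55, -20, 5), (55, -20, 5)]

masker = NiftiSpheresMasker(seeds=seeds, radius=6, standardize=True)
time_series = masker.fit_transform("sub-01_task-voices_bold.nii.gz")  # hypothetical file

# Pairwise correlations between the four regional time series
# (rows/columns ordered as in `seeds`).
connectivity = np.corrcoef(time_series.T)
print(np.round(connectivity, 2))
```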


Neuroscience & Biobehavioral Reviews | 2016

The sound of emotions – Towards a unifying neural network perspective of affective sound processing

Sascha Frühholz; Wiebke Trost; Sonja A. Kotz

Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience of these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.

Collaboration


Dive into Sascha Frühholz's collaborations.

Top Co-Authors

Jun Deng (University of Passau)
