Susanne Dietrich
University of Tübingen
Publications
Featured research published by Susanne Dietrich.
Mechanisms of Development | 1993
Susanne Dietrich; Frank R. Schubert; Peter Gruss
We have characterised the patterning capacity of the notochord on the somite using the murine Pax-1 gene as a ventral, and Pax-3 as a dorsal molecular marker. As model systems we chose the four mouse notochord mutants Brachyury curtailed (Tc), Danforth's short tail (Sd), Pintail (Pt) and truncate (tc). Their notochord either is initially absent or progressively degenerates. The use of these mutants enabled us to compare the effect of graded notochord deficiencies. All four mutants show premature termination of the vertebral column. This phenotype can be traced back to an impaired dorsoventral specification of the somites. In tc/tc and Tc/+ embryos the notochord in the affected regions is missing from the beginning. Consequently, Pax-1 is never activated, and Pax-3 continues to be expressed in the entire somite. In contrast, in Sd and Pt embryos the notochord degenerates secondarily. At the end of the prevertebral column Pax-1 expression is lost, while the Pax-3 signal occupies the former Pax-1-expressing zone. The altered Pax gene expression in the notochord mutants suggests that the notochord is required for two processes in the dorsoventral patterning of the somite: first, the induction of ventral structures, and second, the maintenance of the ventral fate.
Emotion | 2009
Diana P. Szameitat; Kai Alter; André J. Szameitat; C. J. Darwin; Dirk Wildgruber; Susanne Dietrich; Annette Sterr
Although laughter is important in human social interaction, its role as a communicative signal is poorly understood. Because laughter is expressed in various emotional contexts, the question arises as to whether different emotions are communicated. In the present study, participants had to appraise 4 types of laughter sounds (joy, tickling, taunting, schadenfreude) either by classifying them according to the underlying emotion or by rating them according to different emotional dimensions. The authors found that emotions in laughter (a) can be classified into different emotional categories, and (b) can have distinctive profiles on W. Wundt's (1905) emotional dimensions. This shows that laughter is a multifaceted social behavior that can adopt various emotional connotations. The findings support the postulated function of laughter in establishing group structure, whereby laughter is used either to include individuals in, or exclude them from, group coherence.
Neurocase | 2009
Ingo Hertrich; Susanne Dietrich; Anja Moos; Jürgen Trouvain; Hermann Ackermann
Blind individuals may learn to understand ultra-fast synthetic speech at a rate of up to about 25 syllables per second (syl/s), an accomplishment by far exceeding the maximum performance level of normal-sighted listeners (8–10 syl/s). The present study indicates that this exceptional skill engages distinct regions of the central-visual system. Hemodynamic brain activation during listening to moderately fast (8 syl/s) and ultra-fast speech (16 syl/s) was measured in a blind individual and six normal-sighted controls. Moderately fast speech activated posterior and anterior ‘language zones’ in all subjects. Regarding ultra-fast tokens, the controls showed exclusive activation of supratemporal regions whereas the blind participant exhibited enhanced left inferior frontal and temporoparietal responses as well as significant hemodynamic activation of the left fusiform gyrus (FG) and the right primary visual cortex. Since the left FG is known to be involved in phonological processing, this structure presumably provides the functional link between the central-auditory and central-visual systems.
Neuroscience & Biobehavioral Reviews | 2016
Ingo Hertrich; Susanne Dietrich; Hermann Ackermann
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review is to summarize more recent work suggesting that the SMA exerts various superordinate control functions during speech communication and language reception, which become particularly relevant under increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper) and an anterior part (pre-SMA) involved in higher-order cognitive control mechanisms. In analogy to the motor-triggering functions of the SMA proper, the pre-SMA appears to manage procedural aspects of cognitive processing. These latter functions comprise, among others, attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. With regard to language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context tracking.
Journal of Cognitive Neuroscience | 2011
Ingo Hertrich; Susanne Dietrich; Hermann Ackermann
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream—prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259–274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual–phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables—disambiguated to /pa/ or /ta/ by the visual channel (speaking face)—served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point at a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal “what” path may give rise to direct activation of “auditory objects.” On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
Neuroreport | 2008
Susanne Dietrich; Ingo Hertrich; Kai Alter; Anja Ischebeck; Hermann Ackermann
Using functional magnetic resonance imaging, the distribution of hemodynamic brain responses bound to the perceptual processing of interjections, that is, ‘exclamations inserted into an utterance without grammatical connection to it’, was determined (vs. a silent baseline condition). These utterances convey information about a speaker's affective/emotional state by their ‘tone’ (emotional prosody) and/or their lexical content. Both communicative aspects of interjections elicited significant bilateral blood-oxygen-level-dependent signal changes within the superior temporal cortex. In addition, affective-prosodic cues yielded hemodynamic activation of the posterior insula as well as cortical/subcortical structures engaged in the control of innate emotional behavior. These observations corroborate the suggestion that interjections might trace back to proto-speech vocalizations of an early stage of language evolution.
BMC Neuroscience | 2013
Susanne Dietrich; Ingo Hertrich; Hermann Ackermann
Background: Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate.
Results: Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv).
Conclusions: Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory.
Brain and Language | 2013
Ingo Hertrich; Susanne Dietrich; Hermann Ackermann
Blind people can learn to understand speech at ultra-high syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. To further elucidate the neural mechanisms underlying this skill, magnetoencephalographic (MEG) measurements during listening to sentence utterances were cross-correlated with time courses derived from the speech signal (envelope, syllable onsets and pitch periodicity) to capture phase-locked MEG components (14 blind, 12 sighted subjects; speech rate=8 or 16 syllables/s, pre-defined source regions: auditory and visual cortex, inferior frontal gyrus). Blind individuals showed stronger phase locking in auditory cortex than sighted controls, and right-hemisphere visual cortex activity correlated with syllable onsets in case of ultra-fast speech. Furthermore, inferior-frontal MEG components time-locked to pitch periodicity displayed opposite lateralization effects in sighted (towards right hemisphere) and blind subjects (left). Thus, ultra-fast speech comprehension in blind individuals appears associated with changes in early signal-related processing mechanisms both within and outside the central-auditory terrain.
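The cross-correlation approach described in this abstract can be illustrated with a minimal NumPy/SciPy sketch (hypothetical variable names and sampling rates; a schematic of the general technique under those assumptions, not the authors' actual analysis pipeline): a speech-envelope regressor is derived via the Hilbert transform, resampled to the MEG sampling rate, and cross-correlated with a single MEG time series at a range of lags to quantify phase locking.

    import numpy as np
    from scipy.signal import hilbert

    def speech_envelope(audio, fs_audio, fs_meg):
        """Amplitude envelope of the speech signal, resampled to the MEG sampling rate."""
        env = np.abs(hilbert(audio))                    # analytic amplitude
        t_audio = np.arange(len(audio)) / fs_audio
        t_meg = np.arange(0.0, t_audio[-1], 1.0 / fs_meg)
        return np.interp(t_meg, t_audio, env)           # crude resampling, for illustration only

    def lagged_crosscorr(meg, regressor, fs_meg, max_lag_s=0.3):
        """Normalized cross-correlation for lags 0..max_lag_s (MEG lagging the speech regressor)."""
        n = min(len(meg), len(regressor))
        x = (meg[:n] - meg[:n].mean()) / meg[:n].std()
        y = (regressor[:n] - regressor[:n].mean()) / regressor[:n].std()
        lags = np.arange(int(max_lag_s * fs_meg) + 1)
        cc = np.array([np.dot(x[k:], y[:n - k]) / (n - k) for k in lags])
        return lags / fs_meg, cc

A syllable-onset regressor could be handled the same way, e.g. as a pulse train placed at acoustically detected onset times.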
Psychophysiology | 2012
Ingo Hertrich; Susanne Dietrich; Jürgen Trouvain; Anja Moos; Hermann Ackermann
During speech perception, acoustic correlates of syllable structure and pitch periodicity are directly reflected in electrophysiological brain activity. Magnetoencephalography (MEG) recordings were made while 10 participants listened to natural or formant-synthesized speech at a moderately fast or ultrafast rate. Cross-correlation analysis was applied to show brain activity time-locked to the speech envelope, to an acoustic marker of syllable onsets, and to pitch periodicity. The envelope yielded a right-lateralized M100-like response, syllable onsets gave rise to M50/M100-like fields with an additional anterior M50 component, and pitch (ca. 100 Hz) elicited a neural resonance bound to a central auditory source at a latency of 30 ms. The strength of these MEG components showed differential effects of syllable rate and of natural versus synthetic speech. Presumably, such phase-locking mechanisms serve as neuronal triggers for the extraction of information-bearing elements.
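As a rough, hedged illustration of the pitch-periodicity measure (assuming a fundamental near 100 Hz and standard SciPy filtering, not the study's exact procedure), the acoustic signal could be band-passed around the fundamental to obtain a periodicity regressor, and a response latency such as the ~30 ms value above could then be read off the lag of the strongest cross-correlation peak, reusing the lagged_crosscorr() sketch from the previous example:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def pitch_periodicity_regressor(audio, fs, f0=100.0, bandwidth=40.0):
        """Band-pass the speech signal around the (assumed) fundamental frequency
        to obtain a pitch-periodicity time course."""
        b, a = butter(4, [f0 - bandwidth / 2.0, f0 + bandwidth / 2.0],
                      btype="bandpass", fs=fs)
        return filtfilt(b, a, audio)

    def peak_latency_ms(lags_s, cc):
        """Lag (in ms) at which the absolute cross-correlation is largest."""
        return 1000.0 * lags_s[int(np.argmax(np.abs(cc)))]

    # usage with hypothetical data:
    # regressor = pitch_periodicity_regressor(audio, fs)
    # lags, cc = lagged_crosscorr(meg, regressor, fs)
    # print(f"peak response at {peak_latency_ms(lags, cc):.0f} ms")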
Progress in Brain Research | 2006
Susanne Dietrich; Hermann Ackermann; Diana P. Szameitat; Kai Alter
Both intonation (affective prosody) and the lexical meaning of verbal utterances participate in the vocal expression of a speaker's emotional state, an important aspect of human communication. However, it is still a matter of debate how the information of these two channels is integrated during speech perception. In order to further analyze the impact of affective prosody on lexical access, so-called interjections, i.e., short verbal emotional utterances, were investigated. The results of a series of psychoacoustic studies indicate that the processing of emotional interjections is mediated by a divided cognitive mechanism encompassing both lexical access and the encoding of prosodic data. Emotional interjections could be separated into elements with high or low lexical content. For the former items, both prosodic and propositional cues have a significant influence upon recognition rates, whereas the processing of the low-lexical cognates depends solely upon prosodic information. Incongruencies between lexical and prosodic data structures compromise stimulus identification. Thus, the analysis of utterances characterized by a dissociation of the prosodic and lexical dimensions revealed prosody to exert a stronger impact upon listeners' judgments than lexicality. Taken together, these findings indicate that propositional and prosodic speech components closely interact during speech perception.