Publication


Featured research published by Ingo Hertrich.


NeuroImage | 2005

Identification of emotional intonation evaluated by fMRI.

Dirk Wildgruber; Axel Riecker; Ingo Hertrich; Michael Erb; Wolfgang Grodd; Thomas Ethofer; Hermann Ackermann

During acoustic communication among human beings, emotional information can be expressed both by the propositional content of verbal utterances and by the modulation of speech melody (affective prosody). It is well established that linguistic processing is bound predominantly to the left hemisphere of the brain. By contrast, the encoding of emotional intonation has been assumed to depend specifically upon right-sided cerebral structures. However, prior clinical and functional imaging studies yielded discrepant data with respect to interhemispheric lateralization and intrahemispheric localization of brain regions contributing to processing of affective prosody. In order to delineate the cerebral network engaged in the perception of emotional tone, functional magnetic resonance imaging (fMRI) was performed during recognition of prosodic expressions of five different basic emotions (happy, sad, angry, fearful, and disgusted) and during phonetic monitoring of the same stimuli. As compared to baseline at rest, both tasks yielded widespread bilateral hemodynamic responses within frontal, temporal, and parietal areas, the thalamus, and the cerebellum. A comparison of the respective activation maps, however, revealed comprehension of affective prosody to be bound to a distinct right-hemisphere pattern of activation, encompassing posterior superior temporal sulcus (Brodmann Area [BA] 22), dorsolateral (BA 44/45), and orbitobasal (BA 47) frontal areas. Activation within left-sided speech areas, in contrast, was observed during the phonetic task. These findings indicate that partially distinct cerebral networks subserve processing of phonetic and intonational information during speech perception.
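The comparison reported here rests on a task-versus-task subtraction of activation maps. Purely as an illustration of that general idea, and not of the study's actual fMRI analysis pipeline, the sketch below runs a voxel-wise paired contrast on made-up per-subject activation estimates; the subject count, voxel count, and threshold are assumptions.

```python
# Toy voxel-wise subtraction contrast (prosody task vs. phonetic task).
# All numbers are placeholders; this is not the study's fMRI analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 12, 5000

# Hypothetical per-subject activation estimates (e.g., betas) for each task.
beta_prosody = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
beta_phonetic = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))

# Paired t-test per voxel, testing prosody > phonetic.
t_vals, p_vals = stats.ttest_rel(beta_prosody, beta_phonetic, axis=0)
significant = (t_vals > 0) & (p_vals / 2 < 0.001)   # one-sided, uncorrected

print(f"{significant.sum()} of {n_voxels} voxels exceed the toy threshold")
```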


Neurology | 2005

fMRI reveals two distinct cerebral networks subserving speech motor control

Axel Riecker; Krystyna A. Mathiak; Dirk Wildgruber; Michael Erb; Ingo Hertrich; Wolfgang Grodd; Hermann Ackermann

Background: There are few data on the cerebral organization of motor aspects of speech production and the pathomechanisms of dysarthric deficits subsequent to brain lesions and diseases. The authors used fMRI to further examine the neural basis of speech motor control. Methods and Results: In eight healthy volunteers, fMRI was performed during syllable repetitions synchronized to click trains (2 to 6 Hz; vs a passive listening task). Bilateral hemodynamic responses emerged at the level of the mesiofrontal and sensorimotor cortex, putamen/pallidum, thalamus, and cerebellum (two distinct activation spots at either side). In contrast, dorsolateral premotor cortex and anterior insula showed left-sided activation. Calculation of rate/response functions revealed a negative linear relationship between repetition frequency and blood oxygen level–dependent (BOLD) signal change within the striatum, whereas both cerebellar hemispheres exhibited a step-wise increase of activation at ∼3 Hz. Analysis of the temporal dynamics of the BOLD effect found the various cortical and subcortical brain regions engaged in speech motor control to be organized into two separate networks (medial and dorsolateral premotor cortex, anterior insula, and superior cerebellum vs sensorimotor cortex, basal ganglia, and inferior cerebellum). Conclusion: These data provide evidence for two levels of speech motor control bound, most presumably, to motor preparation and execution processes. They also help to explain clinical observations such as an unimpaired or even accelerated speaking rate in Parkinson disease and slowed speech tempo, which does not fall below a rate of 3 Hz, in cerebellar disorders.
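To make the two rate/response profiles concrete, here is a minimal sketch with invented numbers: a least-squares line capturing a negative linear relationship between syllable rate and BOLD signal change (the "striatal" pattern), and a below/above comparison around 3 Hz capturing a step-wise increase (the "cerebellar" pattern). The rates and signal values are not data from the study.

```python
# Sketch of the two rate/response profiles described above, using invented numbers.
import numpy as np

rates = np.array([2.0, 2.5, 3.0, 4.0, 5.0, 6.0])          # syllable rates in Hz
bold_striatum = np.array([1.8, 1.6, 1.4, 1.1, 0.9, 0.6])  # hypothetical % signal change
bold_cerebellum = np.array([0.5, 0.6, 1.4, 1.5, 1.5, 1.6])

# Negative linear relationship: slope of a least-squares fit.
slope, intercept = np.polyfit(rates, bold_striatum, 1)
print(f"striatal slope: {slope:.2f} %/Hz (negative = decrease with rate)")

# Step-wise increase: compare mean response below vs. at/above 3 Hz.
low, high = bold_cerebellum[rates < 3.0], bold_cerebellum[rates >= 3.0]
print(f"cerebellar step around 3 Hz: {high.mean() - low.mean():.2f} % signal change")
```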


The Cerebellum | 2013

Consensus Paper: Language and the Cerebellum: an Ongoing Enigma

Peter Mariën; Hermann Ackermann; Michael Adamaszek; Caroline H. S. Barwood; Alan A. Beaton; John E. Desmond; Elke De Witte; Angela J. Fawcett; Ingo Hertrich; Michael Küper; Maria Leggio; Cherie L. Marvel; Marco Molinari; Bruce E. Murdoch; Roderick I. Nicolson; Jeremy D. Schmahmann; Catherine J. Stoodley; Markus Thürling; Dagmar Timmann; Ellen Wouters; Wolfram Ziegler

In less than three decades, the concept “cerebellar neurocognition” has evolved from a mere afterthought to an entirely new and multifaceted area of neuroscientific research. A close interplay between three main strands of contemporary neuroscience induced a substantial modification of the traditional view of the cerebellum as a mere coordinator of autonomic and somatic motor functions. Indeed, the wealth of current evidence derived from detailed neuroanatomical investigations, functional neuroimaging studies with healthy subjects and patients and in-depth neuropsychological assessment of patients with cerebellar disorders shows that the cerebellum has a cardinal role to play in affective regulation, cognitive processing, and linguistic function. Although considerable progress has been made in models of cerebellar function, controversy remains regarding the exact role of the “linguistic cerebellum” in a broad variety of nonmotor language processes. This consensus paper brings together a range of different viewpoints and opinions regarding the contribution of the cerebellum to language function. Recent developments and insights in the nonmotor modulatory role of the cerebellum in language and some related disorders will be discussed. The role of the cerebellum in speech and language perception, in motor speech planning including apraxia of speech, in verbal working memory, in phonological and semantic verbal fluency, in syntax processing, in the dynamics of language production, in reading and in writing will be addressed. In addition, the functional topography of the linguistic cerebellum and the contribution of the deep nuclei to linguistic function will be briefly discussed. As such, a framework for debate and discussion will be offered in this consensus paper.


Folia Phoniatrica et Logopaedica | 1995

Oral diadochokinesis in neurological dysarthrias

Hermann Ackermann; Ingo Hertrich; Thomas Hehr

Rapid syllable repetitions require alternating articulatory movements and, thus, provide a test for oral diadochokinesis. The present study performed an acoustic analysis of rapid syllable repetitions in patients suffering from idiopathic Parkinson's disease (n = 17), Huntington's chorea (n = 14), Friedreich's ataxia (n = 9), or from a purely cerebellar syndrome (n = 13). Four parameters were considered: the mean number of syllables per train, the median syllable duration with its variation coefficient, and articulatory imprecision in terms of the percentage of incomplete closures. Apart from a few subjects with only minor motor deficits, at least one of the four measures of diadochokinesis exceeded the normal range in all patients. Accordingly, discriminant analysis revealed a highly significant difference between controls and patients with respect to the considered parameters. Thus, oral diadochokinesis tasks represent a sensitive measure of orofacial motor impairment. Moreover, multivariate analysis showed that Parkinson's disease and Friedreich's ataxia are characterized by a highly specific profile of diadochokinesis performance.
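For illustration only, the sketch below computes the four diadochokinesis measures named in the abstract (syllables per train, median syllable duration, its variation coefficient, and the percentage of incomplete closures) from hypothetical syllable-duration and closure data; the paper's acoustic measurement procedure itself is not reproduced here.

```python
# Hedged sketch of the four diadochokinesis measures, on invented data.
import numpy as np

# One entry per repetition train: (syllable durations in ms, incomplete-closure flags)
trains = [
    (np.array([180, 175, 190, 185, 200, 178]), np.array([0, 0, 1, 0, 0, 0])),
    (np.array([170, 182, 176, 188, 181]),      np.array([0, 1, 0, 0, 0])),
]

durations = np.concatenate([d for d, _ in trains])
closures = np.concatenate([c for _, c in trains])

syllables_per_train = np.mean([len(d) for d, _ in trains])
median_duration = np.median(durations)
# Conventional coefficient of variation (std/mean); the paper's exact definition
# relative to the median may differ.
variation_coeff = np.std(durations) / np.mean(durations)
pct_incomplete = 100.0 * closures.mean()

print(f"mean syllables per train: {syllables_per_train:.1f}")
print(f"median syllable duration: {median_duration:.0f} ms")
print(f"variation coefficient:    {variation_coeff:.3f}")
print(f"incomplete closures:      {pct_incomplete:.1f} %")
```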


Journal of Cognitive Neuroscience | 2002

Cerebellum and Speech Perception: A Functional Magnetic Resonance Imaging Study

Klaus Mathiak; Ingo Hertrich; Wolfgang Grodd; Hermann Ackermann

A variety of data indicate that the cerebellum participates in perceptual tasks requiring the precise representation of temporal information. Access to the word form of a lexical item requires, among other functions, the processing of durational parameters of verbal utterances. Therefore, cerebellar dysfunctions must be expected to impair word recognition. In order to specify the topography of the assumed cerebellar speech perception mechanism, a functional magnetic resonance imaging study was performed using the German lexical items Boden ([bodn], Engl. floor) and Boten ([botn], messengers) as test materials. The contrast in sound structure of these two lexical items can be signaled either by the length of the word-medial pause (closure time, CLT; an exclusively temporal measure) or by the aspiration noise of word-medial d or t (voice onset time, VOT; an intrasegmental cue). A previous study found bilateral cerebellar disorders to compromise word recognition based on CLT, whereas the encoding of VOT remained unimpaired. In the present study, two series of Boden-Boten utterances were resynthesized, systematically varying either in CLT or VOT. Subjects had to identify both words Boden and Boten by analysis of either the durational parameter CLT or the VOT aspiration segment. In a subtraction design, CLT categorization as compared to VOT identification (CLT - VOT) yielded a significant hemodynamic response of the right cerebellar hemisphere (neocerebellum, Crus I) and the frontal lobe (anterior to Broca's area). The reversed contrast (VOT - CLT) resulted in a single activation cluster located at the level of the supratemporal plane of the dominant hemisphere. These findings provide the first evidence for a distinct contribution of the right cerebellar hemisphere to speech perception in terms of encoding of durational parameters of verbal utterances. Verbal working memory tasks, lexical response selection, and auditory imagery of word strings have been reported to elicit activation clusters of a similar location. Conceivably, representation of the temporal structure of speech sound sequences represents the common denominator of cerebellar participation in cognitive tasks acting on a phonetic code.
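As a hedged illustration of how identification along one of these resynthesized continua might be summarized (not the study's procedure or data), the sketch below estimates a Boden/Boten category boundary from invented closure-time (CLT) steps and response proportions.

```python
# Illustrative identification function along a hypothetical CLT continuum.
# Values are invented; longer closure times cue the voiceless percept "Boten".
import numpy as np

clt_ms = np.array([20, 40, 60, 80, 100, 120])           # hypothetical closure times
p_boten = np.array([0.05, 0.10, 0.35, 0.70, 0.90, 0.95])  # proportion "Boten" responses

# Simple estimate of the category boundary: the CLT where responses cross 50 %.
boundary = np.interp(0.5, p_boten, clt_ms)
print(f"estimated Boden/Boten boundary at ~{boundary:.0f} ms closure time")
```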


Journal of Neurolinguistics | 2000

The contribution of the cerebellum to speech processing

Hermann Ackermann; Ingo Hertrich

Lesions to or diseases of the cerebellum may disrupt the motor components of speech production at both the segmental and suprasegmental level, giving rise to “ataxic dysarthria”, a syndrome characterized, among other features, by slowed speaking rate, distorted consonant and vowel productions, and impaired prosodic modulation of sentence utterances. The concept of ataxic dysarthria has been further refined by measurements of the acoustic speech signal, tracking of articulatory movements, and preliminary functional imaging studies. In summary, the available parametric data indicate that the cerebellum supports several aspects of speech processing: acceleration of orofacial gestures; timing (coordination) of complex articulatory sequences, presumably in cooperation with the anterior perisylvian language zone; and control of brainstem reflexes monitoring respiratory and laryngeal muscle activity. Lesion studies and functional imaging indicate that superior aspects of the cerebellum mediate the speech motor functions referred to. Laterality and extent of the respective representation area presumably depend upon task demands. Besides speech motor deficits, recent investigations found disrupted speech perception in cerebellar subjects in terms of impaired categorization of specific durational minimal pairs.


Human Brain Mapping | 2002

Mismatch responses to randomized gradient switching noise as reflected by fMRI and whole-head magnetoencephalography

Klaus Mathiak; Alexander Rapp; Tilo Kircher; Wolfgang Grodd; Ingo Hertrich; Nikolaus Weiskopf; Werner Lutzenberger; Hermann Ackermann

The central auditory system of the human brain uses a variety of mechanisms to analyze auditory scenes, among others, preattentive detection of sudden changes in the sound environment. Electroencephalography (EEG) and magnetoencephalography (MEG) provide a measure to monitor neuronal cortical currents. The mismatch negativity (MMN) or field (MMNm) reflect preattentive activation in response to deviants within a sequence of homogenous auditory stimuli. Functional magnetic resonance imaging (fMRI) allows for a higher spatial resolution as compared to the extracranial electrophysiological techniques. The image encoding gradients of echo planar imaging (EPI) sequences, however, elicit an interfering background noise. To circumvent this shortcoming, the present study applied multi‐echo EPI mimicking an auditory oddball design. The gradient trains (SOA = 800 msec, 94.5 dB SPL, stimulus duration = 152 msec) comprised amplitude (−9 dB) and duration (76 msec) deviants in a randomized sequence. Moreover, the scanner noise was recorded and applied in a whole‐head MEG device to validate the properties of this specific material. Robust fMRI activation patterns emerged in response to the deviant gradient switching. Changes in amplitude activated the entire auditory cortex, whereas the duration deviants elicited right‐lateralized signal increase in secondary areas. The recorded scanner noise evoked reliably right‐lateralized mismatch MEG responses. Source localization was in accordance with activation of secondary auditory cortex. The presented paradigm provides a robust and feasible tool to study the functional anatomy of early cognitive auditory processing in clinical populations such as schizophrenia. Hum. Brain Mapping 16:190–195, 2002.
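The sketch below assembles a randomized oddball sequence using the stimulus parameters quoted in the abstract (800-ms SOA, 152-ms standards, -9 dB amplitude deviants, 76-ms duration deviants). The deviant probabilities are assumed, and this is only a schematic of the design, not the scanner-noise paradigm itself.

```python
# Schematic oddball sequence with amplitude and duration deviants.
# Deviant probabilities (p_amp, p_dur) are assumptions, not study values.
import random

random.seed(1)
SOA_MS = 800
STANDARD = {"duration_ms": 152, "level_db": 0.0}           # level relative to standards
AMPLITUDE_DEVIANT = {"duration_ms": 152, "level_db": -9.0}
DURATION_DEVIANT = {"duration_ms": 76, "level_db": 0.0}

def make_sequence(n_trials, p_amp=0.1, p_dur=0.1):
    """Return a list of (onset_ms, stimulus) tuples with randomized deviants."""
    seq = []
    for i in range(n_trials):
        r = random.random()
        if r < p_amp:
            stim = AMPLITUDE_DEVIANT
        elif r < p_amp + p_dur:
            stim = DURATION_DEVIANT
        else:
            stim = STANDARD
        seq.append((i * SOA_MS, stim))
    return seq

sequence = make_sequence(200)
n_deviants = sum(stim is not STANDARD for _, stim in sequence)
print(f"{n_deviants} deviants out of {len(sequence)} trials")
```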


Annals of Otology, Rhinology, and Laryngology | 1995

Gender-specific vocal dysfunctions in Parkinson's disease: electroglottographic and acoustic analyses

Ingo Hertrich; Hermann Ackermann

Electroglottographic (EGG) and acoustic recordings were obtained during sustained vowel production in men and women suffering from Parkinson's disease (PD). The computed EGG spectrograms allowed us to differentiate various kinds of phonatory disturbances: intervals with subharmonic energy (“low-frequency segments”), “noise-like regions,” and abrupt shifts of fundamental frequency (F0). Female PD subjects presented with a significantly increased portion of subharmonic segments and with significantly more abrupt F0 shifts as compared to both controls and male PD subjects. Presumably, these alterations in spectral energy distribution reflect different oscillatory modes of the glottal source. Thus, PD seems to have a differential impact on phonation in men and women. Conceivably, these gender-specific vocal dysfunctions are determined by the well-known sexual dimorphism of laryngeal size.
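Purely as a schematic of one of the measures mentioned (abrupt F0 shifts), and not of the EGG spectrogram analysis used in the study, the following sketch flags large jumps between successive frames of a hypothetical F0 track; the 20% threshold is an assumption.

```python
# Flag abrupt F0 shifts in a frame-wise F0 track (illustrative only).
import numpy as np

f0_hz = np.array([210, 211, 209, 212, 150, 151, 149, 208, 210, 211], float)
SHIFT_THRESHOLD = 0.2   # flag >20 % change between adjacent frames (assumed)

rel_change = np.abs(np.diff(f0_hz)) / f0_hz[:-1]
shift_frames = np.flatnonzero(rel_change > SHIFT_THRESHOLD) + 1
print(f"abrupt F0 shifts at frames: {shift_frames.tolist()}")
```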


NeuroImage | 2006

Gamma-band activity over early sensory areas predicts detection of changes in audiovisual speech stimuli.

Jochen Kaiser; Ingo Hertrich; Hermann Ackermann; Werner Lutzenberger

Oscillatory activity in the gamma-band range in human magneto- and electroencephalogram is thought to reflect the oscillatory synchronization of cortical networks. Findings of enhanced gamma-band activity (GBA) during cognitive processes like gestalt perception, attention and memory have led to the notion that GBA may reflect the activation of internal object representations. However, there is little direct evidence suggesting that GBA is related to subjective perceptual experience. In the present study, magnetoencephalogram was recorded during an audiovisual oddball paradigm with infrequent visual (auditory /ta/ + visual /pa/) or acoustic deviants (auditory /pa/ + visual /ta/) interspersed in a sequence of frequent audiovisual standard stimuli (auditory /ta/ + visual /ta/). Sixteen human subjects had to respond to perceived acoustic changes which could be produced either by real acoustic or illusory (visual) deviants. Statistical probability mapping served to identify correlations between oscillatory activity in response to visual and acoustic deviants, respectively, and the detection rates for either type of deviant. The perception of illusory acoustic changes induced by visual deviants was closely associated with gamma-band amplitude at approximately 80 Hz between 250 and 350 ms over midline occipital cortex. In contrast, the detection of real acoustic deviants correlated positively with induced GBA at approximately 42 Hz between 200 and 300 ms over left superior temporal cortex and negatively with evoked gamma responses at approximately 41 Hz between 220 and 240 ms over occipital areas. These findings support the relevance of high-frequency oscillatory activity over early sensory areas for perceptual experience.
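As a rough sketch of the kind of measure involved (band-limited amplitude around 80 Hz in a 250-350 ms window, related to per-subject detection rates), the code below uses placeholder data and a simple Hilbert-envelope estimate; it does not reproduce the study's statistical probability mapping, and the sampling rate is assumed.

```python
# Gamma-band amplitude in a time window, correlated with detection rates.
# All data are random placeholders; only the general procedure is sketched.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import spearmanr

fs = 600.0                                   # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.6, 1.0 / fs)            # 600 ms post-stimulus epoch
rng = np.random.default_rng(2)
n_subjects = 16

# One trial-averaged sensor trace per subject (placeholder noise).
signals = rng.normal(0.0, 1.0, size=(n_subjects, t.size))
detection_rate = rng.uniform(0.4, 0.9, size=n_subjects)

# Band-pass around 80 Hz and take the Hilbert envelope as amplitude.
b, a = butter(4, [70.0, 90.0], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, signals, axis=1), axis=1))

# Mean gamma amplitude in the 250-350 ms window after stimulus onset.
window = (t >= 0.25) & (t <= 0.35)
gamma_amp = envelope[:, window].mean(axis=1)

rho, p = spearmanr(gamma_amp, detection_rate)
print(f"rho = {rho:.2f}, p = {p:.2f}")
```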


Journal of the Acoustical Society of America | 2000

Lip-jaw and tongue-jaw coordination during rate-controlled syllable repetitions.

Ingo Hertrich; Hermann Ackermann

The present study investigated the relationship between functionally relevant compound gestures and single-articulator component movements of the jaw and the constrictors lower lip and tongue tip during rate-controlled syllable repetitions. In nine healthy speakers, the effects of speaking rate (3 vs 5 Hz), place of articulation, and vowel type during stop consonant-vowel repetitions (/pa/, /pi/, /ta/, /ti/) on the amplitude and peak velocity of differential jaw and constrictor opening-closing movements were measured by means of electromagnetic articulography. Rather than homogeneously scaled compound gestures, the results suggest distinct control mechanisms for the jaw and the constrictors. In particular, jaw amplitude was closely linked to vowel height during bilabial articulation, whereas the lower lip component amplitude turned out to be predominantly rate sensitive. However, the observed variability across subjects and conditions does not support the assumption that single-articulator gestures directly correspond to basic phonological units. The nonhomogeneous effects of speech rate on articulatory subsystem parameters indicate that single structures are differentially rate sensitive. On average, an increase in speech rate resulted in a more or less proportional increase of the steepness of peak velocity/amplitude scaling for jaw movements, whereas the constrictors were less rate sensitive in this respect. Negative covariation across repetitions between jaw and constrictor amplitudes has been considered an indicator of motor equivalence. Although significant in some cases, such a relationship was not consistently observed across subjects. Considering systematic sources of variability such as vowel height, speech rate, and subjects, jaw-constrictor amplitude correlations showed a nonhomogeneous pattern strongly depending on place of articulation.
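To make the kinematic measures concrete, here is a minimal sketch (with a synthetic movement trace, not articulographic data from the study) that computes movement amplitude, peak velocity, and their ratio as a simple scaling index for one opening-closing cycle; the sampling rate and trace shape are assumptions.

```python
# Amplitude, peak velocity, and velocity/amplitude scaling for one synthetic
# opening-closing cycle. Not the study's articulographic processing.
import numpy as np

fs = 500.0                                        # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.2, 1.0 / fs)                 # one 200-ms cycle
position_mm = 6.0 * np.sin(2 * np.pi * 5.0 * t)   # synthetic 5-Hz jaw trace

amplitude_mm = position_mm.max() - position_mm.min()
velocity = np.gradient(position_mm, 1.0 / fs)     # mm/s
peak_velocity = np.abs(velocity).max()

print(f"amplitude:     {amplitude_mm:.1f} mm")
print(f"peak velocity: {peak_velocity:.0f} mm/s")
print(f"scaling (v/A): {peak_velocity / amplitude_mm:.1f} 1/s")
```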

Collaboration


Top co-authors include Irene Daum (Ruhr University Bochum).