Olaf Hauk
Cognition and Brain Sciences Unit
Publication
Featured research published by Olaf Hauk.
Psychophysiology | 2002
Andreas Keil; Margaret M. Bradley; Olaf Hauk; Brigitte Rockstroh; Thomas Elbert; Peter J. Lang
Hemodynamic and electrophysiological studies indicate differential brain response to emotionally arousing, compared to neutral, pictures. Here, the time course and source distribution of electrocortical potentials in response to emotional stimuli were examined using a high-density (129-sensor) electrode array. Event-related potentials (ERPs) were recorded while participants viewed pleasant, neutral, and unpleasant pictures. ERP voltages were examined in six time intervals, roughly corresponding to P1, N1, early P3, late P3 and a slow wave window. Differential activity was found for emotional, compared to neutral, pictures at both of the P3 intervals, as well as enhancement of later posterior positivity. Source space projection was performed using a minimum norm procedure that estimates the source currents generating the extracranially measured electrical gradient. Sources of slow wave modulation were located in occipital and posterior parietal cortex, with a right-hemispheric dominance.
European Journal of Neuroscience | 2005
Friedemann Pulvermüller; Olaf Hauk; Vadim V. Nikulin; Risto J. Ilmoniemi
Transcranial magnetic stimulation (TMS) was applied to motor areas in the left language‐dominant hemisphere while right‐handed human subjects made lexical decisions on words related to actions. Response times to words referring to leg actions (e.g. kick) were compared with those to words referring to movements involving the arms and hands (e.g. pick). TMS of hand and leg areas influenced the processing of arm and leg words differentially, as documented by a significant interaction of the factors Stimulation site and Word category. Arm area TMS led to faster arm than leg word responses and the reverse effect, faster lexical decisions on leg than arm words, was present when TMS was applied to leg areas. TMS‐related differences between word categories were not seen in control conditions, when TMS was applied to hand and leg areas in the right hemisphere and during sham stimulation. Our results show that the left hemispheric cortical systems for language and action are linked to each other in a category‐specific manner and that activation in motor and premotor areas can influence the processing of specific kinds of words semantically related to arm or leg actions. By demonstrating specific functional links between action and language systems during lexical processing, these results call into question modular theories of language and motor functions and provide evidence that the two systems interact in the processing of meaningful information about language and action.
Proceedings of the National Academy of Sciences of the United States of America | 2006
Friedemann Pulvermüller; Martina Huss; Ferath Kherif; Fermín Moscoso del Prado Martín; Olaf Hauk; Yury Shtyrov
The processing of spoken language has been attributed to areas in the superior temporal lobe, where speech stimuli elicit the greatest activation. However, neurobiological and psycholinguistic models have long postulated that knowledge about the articulatory features of individual phonemes has an important role in their perception and in speech comprehension. To probe the possible involvement of specific motor circuits in the speech-perception process, we used event-related functional MRI and presented experimental subjects with spoken syllables, including [p] and [t] sounds, which are produced by movements of the lips or tongue, respectively. Physically similar nonlinguistic signal-correlated noise patterns were used as control stimuli. In localizer experiments, subjects had to silently articulate the same syllables and, in a second task, move their lips or tongue. Speech perception most strongly activated superior temporal cortex. Crucially, however, distinct motor regions in the precentral gyrus sparked by articulatory movements of the lips and tongue were also differentially activated in a somatotopic manner when subjects listened to the lip- or tongue-related phonemes. This sound-related somatotopic activation in precentral gyrus shows that, during speech perception, specific motor circuits are recruited that reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroimaging support for specific links between the phonological mechanisms for speech perception and production.
NeuroImage | 2006
Olaf Hauk; Matthew H. Davis; Michael Ford; Friedemann Pulvermüller; William D. Marslen-Wilson
EEG correlates of a range of psycholinguistic word properties were used to investigate the time course of access to psycholinguistic information during visual word recognition. Neurophysiological responses recorded in a visual lexical decision task were submitted to linear regression analysis. First, 10 psycholinguistic features of each of 300 stimulus words were submitted to a principal component analysis, which yielded four orthogonal variables likely to reflect separable processes in visual word recognition: Word length, Letter n-gram frequency, Lexical frequency and Semantic coherence of a word's morphological family. Since the lexical decision task required subjects to distinguish between words and pseudowords, the binary variable Lexicality was also investigated using a factorial design. Word-pseudoword differences in the event-related potential first appeared at 160 ms after word onset. However, regression analysis of EEG data documented a much earlier effect of both Word length and Letter n-gram frequency around 90 ms. Lexical frequency showed its earliest effect slightly later, at 110 ms, and Semantic coherence significantly correlated with neurophysiological measures around 160 ms, simultaneously with the lexicality effect. Source estimates indicated parieto-temporo-occipital generators for the factors Length, Letter n-gram frequency and Word frequency, but widespread activation with foci in left anterior temporal lobe and inferior frontal cortex related to Semantic coherence. At later stages (>200 ms), all variables exhibited simultaneous EEG correlates. These results indicate that information about surface form and meaning of a lexical item is first accessed at different times in different brain systems and then processed simultaneously, thus supporting cascaded interactive processing models.
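The analysis logic described in this abstract, orthogonalizing correlated psycholinguistic predictors via principal component analysis and then regressing single-time-point EEG amplitudes on the resulting components, can be sketched as follows. This is a minimal illustration with synthetic data, not the study's actual pipeline; all variable names and values are placeholders.

```python
import numpy as np

# Synthetic stand-in for the 10 psycholinguistic features of 300 words
# (e.g. length, n-gram frequency, lexical frequency, semantic coherence).
rng = np.random.default_rng(1)
n_words, n_features = 300, 10
X = rng.standard_normal((n_words, n_features))

# PCA via SVD of the z-scored feature matrix; keep four components,
# which are mutually orthogonal by construction.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xz, full_matrices=False)
components = Xz @ Vt[:4].T  # (n_words, 4) orthogonal predictors

# Regress the EEG amplitude at one electrode/time sample on the
# components (synthetic amplitudes here), yielding one intercept and
# four component weights per electrode/time point.
eeg = rng.standard_normal(n_words)
A = np.column_stack([np.ones(n_words), components])
beta, *_ = np.linalg.lstsq(A, eeg, rcond=None)
```

In the full analysis this regression would be repeated across electrodes and time samples, giving a time course of effect strength for each orthogonal predictor.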
Cerebral Cortex | 2009
Véronique Boulenger; Olaf Hauk; Friedemann Pulvermüller
Single words and sentences referring to bodily actions activate the motor cortex. However, this semantic grounding of concrete language does not address the critical question whether the sensory–motor system contributes to the processing of abstract meaning and thought. We examined functional magnetic resonance imaging activation to idioms and literal sentences including arm- and leg-related action words. A common left fronto-temporal network was engaged in sentence reading, with idioms yielding relatively stronger activity in (pre)frontal and middle temporal cortex. Crucially, somatotopic activation along the motor strip, in central and precentral cortex, was elicited by idiomatic and literal sentences, reflecting the body part reference of the words embedded in the sentences. Semantic somatotopy was most pronounced after sentence ending, thus reflecting sentence-level processing rather than that of single words. These results indicate that semantic representations grounded in the sensory–motor system play a role in the composition of sentence-level meaning, even in the case of idioms.
Clinical Neurophysiology | 2004
Olaf Hauk; Friedemann Pulvermüller
OBJECTIVE: We investigated the influence of the length and frequency of printed words on the amplitude and peak latencies of event-related potentials (ERPs). This served two goals, namely (I) to clarify their possible effects as confounds in ERP experiments employing word-stimuli, and (II) to determine the point in time of lexical access in visual word recognition.
METHODS: EEG was recorded from 64 scalp sites while subjects (n=12) performed a lexical decision task. Word length and frequency were orthogonally varied between stimulus groups, whereas variables including regularity of spelling and orthographic tri-gram frequency were kept constant.
RESULTS: Long words produced the strongest brain response early on (approximately 100 ms after stimulus onset), whereas those to short words became strongest later (150-360 ms). Lower ERP amplitudes were elicited by words with high frequency compared with low frequency words in the latency ranges 150-190 ms and 320-360 ms. However, we did not find evidence for a robust alteration of peak latencies with word frequency.
CONCLUSIONS: Length and frequency of word stimuli have independent and additive effects on the amplitude of the ERP. Studies on the precise time course of cognitive processes should consider their potentially confounding character. Our data support the view that lexical access takes place as early as 150 ms after onset of written word stimuli.
Human Brain Mapping | 2004
Olaf Hauk; Friedemann Pulvermüller
It has been suggested that the processing of action words referring to leg, arm, and face movements (e.g., to kick, to pick, to lick) leads to distinct patterns of neurophysiological activity. We addressed this issue using multi‐channel EEG and beam‐former estimates of distributed current sources within the head. The categories of leg‐, arm‐, and face‐related words were carefully matched for important psycholinguistic factors, including word frequency, imageability, valence, and arousal, and evaluated in a behavioral study for their semantic associations. EEG was recorded from 64 scalp electrodes while stimuli were presented visually in a reading task. We applied a linear beam‐former technique to obtain optimal estimates of the sources underlying the word‐evoked potentials. These suggested differential activation in frontal areas of the cortex, including primary motor, pre‐motor, and pre‐frontal sites. Leg words activated dorsal fronto‐parietal areas more strongly than face‐ or arm‐related words, whereas face‐words produced more activity at left inferior‐frontal sites. In the right hemisphere, arm‐words activated lateral‐frontal areas. We interpret the findings in the framework of a neurobiological model of language and discuss the possible role of mirror neurons in the premotor cortex in language processing. Hum. Brain Mapp. 21:191–201, 2004.
NeuroImage | 2004
Olaf Hauk
The present study aims at finding the optimal inverse solution for the bioelectromagnetic inverse problem in the absence of reliable a priori information about the generating sources. Three approaches to tackle this problem are compared theoretically: the maximum-likelihood approach, the minimum norm approach, and the resolution optimization approach. It is shown that in all three of these frameworks, it is possible to make use of the same kind of a priori information if available, and the same solutions are obtained if the same a priori information is implemented. In particular, they all yield the minimum norm pseudoinverse (MNP) in the complete absence of such information. This indicates that the properties of the MNP, and in particular, its limitations like the inability to localize sources in depth, are not specific to this method but are fundamental limitations of the recording modalities. The minimum norm solution provides the amount of information that is actually present in the data themselves, and is therefore optimally suited to investigate the general resolution and accuracy limits of EEG and MEG measurement configurations. Furthermore, this strongly suggests that the classical minimum norm solution is a valuable method whenever no reliable a priori information about source generators is available, that is, when complex cognitive tasks are employed or when very noisy data (e.g., single-trial data) are analyzed. For that purpose, an efficient and practical implementation of this method will be suggested and illustrated with simulations using a realistic head geometry.
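The minimum norm pseudoinverse (MNP) at the center of this paper has a simple closed form: given a lead field matrix L (sensors × sources) and measured data d, the MNP returns the source vector of smallest L2 norm that exactly reproduces the data. A minimal sketch, assuming a random lead field purely for illustration (a real application would use a lead field computed from head geometry):

```python
import numpy as np

# Hypothetical lead field: 64 sensors, 200 candidate sources.
rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 200
L = rng.standard_normal((n_sensors, n_sources))

# Simulate data from a known single active source.
j_true = np.zeros(n_sources)
j_true[10] = 1.0
d = L @ j_true

# Minimum norm pseudoinverse: j_hat = L^T (L L^T)^{-1} d,
# equivalent to np.linalg.pinv(L) @ d for full-row-rank L.
j_hat = L.T @ np.linalg.solve(L @ L.T, d)

# j_hat reproduces the data exactly, and among all exact solutions
# it has the smallest L2 norm -- hence its smearing in depth, which the
# paper argues reflects a fundamental limit of the measurement, not a
# defect of the method.
```

Because the estimate minimizes the source norm, deep focal sources are reconstructed as shallower, spatially spread current patterns, which is the depth-localization limitation discussed in the abstract.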
Brain and Language | 2009
Friedemann Pulvermüller; Yury Shtyrov; Olaf Hauk
How long does it take the human mind to grasp the idea when hearing or reading a sentence? Neurophysiological methods looking directly at the time course of brain activity indexes of comprehension are critical for finding the answer to this question. As the dominant cognitive approaches, models of serial/cascaded and parallel processing, make conflicting predictions on the time course of psycholinguistic information access, they can be tested using neurophysiological brain activation recorded in MEG and EEG experiments. Seriality and cascading of lexical, semantic and syntactic processes receives support from late (latency ∼1/2 s) sequential neurophysiological responses, especially N400 and P600. However, parallelism is substantiated by early near-simultaneous brain indexes of a range of psycholinguistic processes, up to the level of semantic access and context integration, emerging already 100–250 ms after critical stimulus information is present. Crucially, however, there are reliable latency differences of 20–50 ms between early cortical area activations reflecting lexical, semantic and syntactic processes, which are left unexplained by current serial and parallel brain models of language. We here offer a mechanistic model grounded in cortical nerve cell circuits that builds upon neuroanatomical and neurophysiological knowledge and explains both near-simultaneous activations and fine-grained delays. A key concept is that of discrete distributed cortical circuits with specific inter-area topographies. The full activation, or ignition, of specifically distributed binding circuits explains the near-simultaneity of early neurophysiological indexes of lexical, syntactic and semantic processing. Activity spreading within circuits determined by between-area conduction delays accounts for comprehension-related regional activation differences in the millisecond range.
European Journal of Neuroscience | 2004
Yury Shtyrov; Olaf Hauk; Friedemann Pulvermüller
Mismatch negativity (MMN), an index of experience‐dependent memory traces, was used to investigate the processing of action‐related words in the human brain. Responses to auditorily presented movement‐related English words were recorded in a non‐attend odd‐ball protocol using a high‐density electroencephalographic (EEG) set‐up. MMN was calculated using responses to the same words presented as standard and deviant stimuli in different sessions to avoid contamination from phonetic–acoustic differences. The topography of the mismatch negativity to action words revealed an unusual centro‐posterior distribution of the responses, suggesting that activity was at least in part generated posterior to the usually observed frontal MMNs. Moreover, responses to the hand‐related word stimulus (pick) had a more widespread lateral distribution, whereas the leg‐related stimulus (kick) elicited a more focal dorsal negativity. These differences, remarkably reminiscent of sensorimotor cortex topography, were further assessed using distributed source analysis of the EEG signal (L2 minimum‐norm current estimates). The source analysis also confirmed differentially distributed activation for the two stimuli. We suggest that these results indicate activation of distributed neuronal assemblies that function as category‐specific memory traces for words and may involve sensorimotor cortical structures for encoding action words.
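The identity-MMN design described above reduces to a difference wave: the ERP to a word in the deviant role minus the ERP to the very same word in the standard role (from another session), so acoustic differences cancel. A minimal sketch with synthetic trial data; array shapes and values are illustrative only.

```python
import numpy as np

# Synthetic single trials: (trials x time samples) per condition,
# for one and the same physical word stimulus.
rng = np.random.default_rng(2)
n_trials, n_samples = 100, 300
trials_standard = rng.standard_normal((n_trials, n_samples))
trials_deviant = rng.standard_normal((n_trials, n_samples)) - 0.5

# Average across trials to obtain the ERP per condition.
erp_standard = trials_standard.mean(axis=0)
erp_deviant = trials_deviant.mean(axis=0)

# MMN difference wave: deviant minus standard for the same stimulus,
# leaving only the context-dependent (memory-trace) response.
mmn = erp_deviant - erp_standard
```

In the study, this difference wave was computed per electrode, and its scalp topography and L2 minimum-norm source estimates were then compared between the hand- and leg-related words.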