Markus Ostarek
Max Planck Society
Publications
Featured research published by Markus Ostarek.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Markus Ostarek; Falk Huettig
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent (“bottle” → picture of a bottle) versus incongruent (“bottle” → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200–400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2017
Markus Ostarek; Falk Huettig
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.
Cognition | 2019
Markus Ostarek; Dennis Joosen; Adil Ishag; Monique de Nijs; Falk Huettig
Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect but, crucially, visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3), however, the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) had only a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
bioRxiv | 2018
Markus Ostarek; Jeroen van Paridon; Falk Huettig
Processing words with referents that are typically observed up or down in space (up/down words) influences the subsequent identification of visual targets in congruent locations. Eye-tracking studies have shown that up/down word comprehension shortens launch times of subsequent saccades to congruent locations and modulates concurrent saccade trajectories. This can be explained by a task-dependent interaction of semantic processing and oculomotor programs or by a direct recruitment of direction-specific processes in oculomotor and spatial systems as part of semantic processing. To test the latter possibility, we conducted a functional magnetic resonance imaging experiment and used multi-voxel pattern analysis to assess 1) whether the typical location of word referents can be decoded from the fronto-parietal spatial network and 2) whether activity patterns are shared between up/down words and up/down saccadic eye movements. In line with these hypotheses, significant decoding of up vs. down words and cross-decoding between up/down saccades and up/down words were observed in the frontal eye field region in the superior frontal sulcus and the inferior parietal lobule. Beyond these spatial attention areas, the typical location of word referents could be decoded from a set of occipital, temporal, and frontal areas, indicating that interactions between high-level regions typically implicated in lexical-semantic processing and spatial/oculomotor regions constitute the neural basis for access to spatial aspects of word meanings.
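The cross-decoding logic described in this abstract (train a classifier on patterns from one condition, test it on patterns from the other) can be sketched with simulated data. This is an illustrative toy model, not the authors' analysis pipeline: the "voxel" patterns, the shared direction axis, and the nearest-centroid classifier are all assumptions made for the sketch.

```python
import random

random.seed(0)
DIM = 20  # hypothetical number of voxels in a region of interest

# If up/down words and up/down saccades recruit overlapping processes,
# their activity patterns share a component along a common direction axis.
direction_axis = [random.gauss(0, 1) for _ in range(DIM)]

def pattern(direction, strength=1.0, noise=0.8):
    """Simulated voxel pattern: direction signal (+1 up / -1 down) plus noise."""
    return [direction * strength * a + random.gauss(0, noise) for a in direction_axis]

# Cross-decoding: train on 'saccade' trials, test on 'word' trials.
train = [(pattern(d), d) for d in [1, -1] * 40]
test = [(pattern(d), d) for d in [1, -1] * 40]

def centroid(samples):
    return [sum(xs) / len(xs) for xs in zip(*samples)]

up_c = centroid([x for x, d in train if d == 1])
down_c = centroid([x for x, d in train if d == -1])

def classify(x):
    # Nearest-centroid readout: assign the class with the closer mean pattern.
    dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 1 if dist(up_c) < dist(down_c) else -1

accuracy = sum(classify(x) == d for x, d in test) / len(test)
print(accuracy)  # well above the 0.5 chance level when a shared signal exists
```

When the shared direction component is removed (`strength=0.0`), cross-decoding falls to chance, which is what makes above-chance transfer between saccades and words informative.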
Journal of Experimental Psychology: Learning, Memory and Cognition | 2018
Markus Ostarek; Adil Ishag; Dennis Joosen; Falk Huettig
Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible versus incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent versus incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time (RT) results for related targets replicated the location-specific priming effect and showed a trend toward interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match versus mismatch with immediately following targets in terms of semantics and vertical location influence concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth, with up words biasing changes in y-coordinates over time upward relative to down words (and vice versa). As with the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, around halfway into the saccade, the effect kept increasing in the semantically related condition and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
Cognitive Science Conference (CogSci) | 2017
Vencislav Popov; Markus Ostarek; Caitlin Tenison
A key challenge for cognitive neuroscience is to decipher the representational schemes of the brain. A recent class of decoding algorithms for fMRI data, stimulus-feature-based encoding models, is becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid, because decoding can occur even if the neural representational space and the stimulus-feature space use different representational schemes. This can happen when there is a systematic mapping between them, as shown by two simulations. In one simulation, we successfully decoded the binary representation of numbers from their decimal features. Since binary and decimal number systems use different representations, we cannot conclude that the binary representation encodes decimal features. In the second simulation, we successfully decoded the HSV color representation from the RGB representation of colors, even though these color spaces have different geometries and their dimensions have different interpretations. Detailed analysis of the predicted colors showed systematic deviations from the ground truth despite the high decoding accuracy, indicating that decoding accuracy on its own is not sufficient for making representational inferences. The same argument applies to the decoding of neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations.
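The first simulation described above can be reproduced in miniature. The sketch below (an illustration of the argument, not the authors' code) decodes the 4-bit binary code of a digit from its decimal one-hot features with perfect accuracy, even though the two number systems use different representational schemes; the decoding succeeds only because a systematic mapping between them exists.

```python
def decimal_one_hot(n):
    """Decimal 'feature space': one-hot vector over the digits 0-9."""
    return [1.0 if i == n else 0.0 for i in range(10)]

def binary_code(n):
    """Target 'representational space': 4-bit binary code of n."""
    return [float(b) for b in format(n, "04b")]

# For one-hot inputs, the least-squares linear decoder is trivial:
# the weight row for digit n is exactly its binary target, so the
# trained decoder amounts to a stored digit -> bits mapping.
weights = [binary_code(n) for n in range(10)]

def decode(one_hot):
    # Linear readout: weighted sum of the weight rows.
    return [sum(w[j] * x for w, x in zip(weights, one_hot)) for j in range(4)]

accuracy = sum(decode(decimal_one_hot(n)) == binary_code(n) for n in range(10)) / 10
print(accuracy)  # 1.0: perfect decoding across different representational schemes
```

Perfect decoding here clearly does not license the conclusion that the binary code "encodes decimal features", which is the paper's caution about drawing representational inferences from decoding accuracy alone.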
NeuroImage | 2018
Vencislav Popov; Markus Ostarek; Caitlin Tenison
Behavioral and Brain Sciences | 2018
Arnold R. Kochari; Markus Ostarek
the 3rd Attentive Listener in the Visual World (AttLis) workshop | 2016
Markus Ostarek; Falk Huettig