Anouk M. van Loon
University of Amsterdam
Publications
Featured research published by Anouk M. van Loon.
Current Biology | 2013
Anouk M. van Loon; Tomas Knapen; H. Steven Scholte; Elexa St. John-Saaltink; Tobias H. Donner; Victor A. F. Lamme
Sometimes, perception fluctuates spontaneously between two distinct interpretations of a constant sensory input. These bistable perceptual phenomena provide a unique window into the neural mechanisms that create the contents of conscious perception. Models of bistable perception posit that mutual inhibition between stimulus-selective neural populations in visual cortex plays a key role in these spontaneous perceptual fluctuations. However, a direct link between neural inhibition and bistable perception has not yet been established experimentally. Here, we link perceptual dynamics in three distinct bistable visual illusions (binocular rivalry, motion-induced blindness, and structure from motion) to measurements of gamma-aminobutyric acid (GABA) concentrations in human visual cortex (as measured with magnetic resonance spectroscopy) and to pharmacological stimulation of the GABAA receptor by means of lorazepam. As predicted by a model of neural interactions underlying bistability, both higher GABA concentrations in visual cortex and lorazepam administration induced slower perceptual dynamics, as reflected in a reduced number of perceptual switches and a lengthening of percept durations. Thus, we show that GABA, the main inhibitory neurotransmitter, shapes the dynamics of bistable perception. These results pave the way for future studies into the competitive neural interactions across the visual cortical hierarchy that elicit conscious perception.
Cognitive, Affective, & Behavioral Neuroscience | 2010
Anouk M. van Loon; Wery P. M. van den Wildenberg; Anda H. van Stegeren; K. Richard Ridderinkhof; Greg Hajcak
Emotional stimuli may prime the motor system and facilitate action readiness. Direct evidence for this effect has been shown by recent studies using transcranial magnetic stimulation (TMS). When administered over the primary motor cortex involved in responding, TMS pulses elicit motor-evoked potentials (MEPs) in the represented muscles. The amplitudes of these MEPs reflect the state of corticospinal excitability. Here, we investigated the dynamic effects of induced emotions on action readiness, as reflected by corticospinal excitability. Subjects performed a choice task while viewing task-irrelevant emotional and neutral pictures. The pattern of MEP amplitudes showed a typical increase as the TMS pulse was presented closer in time to the imminent response. This dynamic pattern was amplified by both pleasant and unpleasant emotional stimuli, but more so when unpleasant pictures were viewed. These patterns present novel evidence in support of the notion that emotional stimuli modulate action readiness.
Philosophical Transactions of the Royal Society B | 2014
Simon van Gaal; Lionel Naccache; Julia D. I. Meuwese; Anouk M. van Loon; Alexandra H. Leighton; Laurent Cohen; Stanislas Dehaene
What are the limits of unconscious language processing? Can language circuits process simple grammatical constructions unconsciously and integrate the meaning of several unseen words? Using behavioural priming and electroencephalography (EEG), we studied a specific rule-based linguistic operation traditionally thought to require conscious cognitive control: the negation of valence. In a masked priming paradigm, two masked words were successively (Experiment 1) or simultaneously presented (Experiment 2), a modifier (‘not’/‘very’) and an adjective (e.g. ‘good’/‘bad’), followed by a visible target noun (e.g. ‘peace’/‘murder’). Subjects indicated whether the target noun had a positive or negative valence. The combination of these three words could either be contextually consistent (e.g. ‘very bad - murder’) or inconsistent (e.g. ‘not bad - murder’). EEG recordings revealed that grammatical negations could unfold partly unconsciously, as reflected in similar occipito-parietal N400 effects for conscious and unconscious three-word sequences forming inconsistent combinations. However, only conscious word sequences elicited P600 effects, later in time. Overall, these results suggest that multiple unconscious words can be rapidly integrated and that an unconscious negation can automatically ‘flip the sign’ of an unconscious adjective. These findings not only extend the limits of subliminal combinatorial language processes, but also highlight how consciousness modulates the grammatical integration of multiple words.
Journal of Cognitive Neuroscience | 2012
Anouk M. van Loon; H. Steven Scholte; Simon van Gaal; Björn van der Hoort; Victor A. F. Lamme
Consciousness can be manipulated in many ways. Here, we seek to understand whether two such ways, visual masking and pharmacological intervention, share a common pathway in manipulating visual consciousness. We recorded EEG from human participants who performed a backward-masking task in which they had to detect a masked figure from its background (masking strength was varied across trials). In a within-subject design, participants received dextromethorphan (an N-methyl-D-aspartate receptor antagonist), lorazepam (LZP; a GABAA receptor agonist), scopolamine (a muscarinic receptor antagonist), or placebo. The behavioral results show that detection rate decreased with increasing masking strength and that of all the drugs, only LZP induced a further decrease in detection rate. Figure-related ERP signals showed three neural events of interest: (1) an early posterior occipital and temporal generator (94–121 msec) that was not influenced by any pharmacological manipulation nor by masking, (2) a later bilateral perioccipital generator (156–211 msec) that was reduced by masking as well as LZP (but not by any other drugs), and (3) a late bilateral occipital temporal generator (293–387 msec) that was mainly affected by masking. Crucially, only the intermediate neural event correlated with detection performance. In combination with previous findings, these results suggest that LZP and masking both reduce visual awareness by modulating late activity in the visual cortex while leaving early activation intact. These findings provide the first evidence for a common mechanism for these two distinct ways of manipulating consciousness.
Social Cognitive and Affective Neuroscience | 2014
Anna M. V. Gerlicher; Anouk M. van Loon; H. Steven Scholte; Victor A. F. Lamme; Andries R. van der Leij
In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems sustained with emotional facial expressions. Therefore, we tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record the evoked steady-state visual potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to same-identity compared with different-identity faces. We found adaptation to the identity of a neutral face. However, for emotional faces, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing.
PLOS ONE | 2013
Julia D. I. Meuwese; Anouk M. van Loon; H. Steven Scholte; Philipp Lirk; Nienke Vulink; Markus W. Hollmann; Victor A. F. Lamme
Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.
Philosophical Transactions of the Royal Society B | 2017
Hirohito M. Kondo; Anouk M. van Loon; Jun-ichiro Kawahara; Brian C. J. Moore
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
Journal of Vision | 2017
Anouk M. van Loon; Katya Olmos-Solis; Christian N. L. Olivers
Visual search is thought to be guided by an active visual working memory (VWM) representation of the task-relevant features, referred to as the search template. In three experiments using a probe technique, we investigated which eye movement metrics reveal which search template is activated prior to the search, and distinguish it from future relevant or no longer relevant VWM content. Participants memorized a target color for a subsequent search task, while being instructed to keep central fixation. Before the search display appeared, we briefly presented two task-irrelevant colored probe stimuli to the left and right of fixation, one of which could match the current target template. In all three experiments, participants made both more and larger eye movements towards the probe matching the target color. The bias was predominantly expressed in microsaccades, 100–250 ms after probe onset. Experiment 2 used a retro-cue technique to show that these metrics distinguish between relevant and dropped representations. Finally, Experiment 3 used a sequential task paradigm, and showed that the same metrics also distinguish between current and prospective search templates. Taken together, we show how subtle eye movements track task-relevant representations for selective attention prior to visual search.
Attention Perception & Psychophysics | 2014
Julia D. I. Meuwese; Anouk M. van Loon; Victor A. F. Lamme; Johannes J. Fahrenfort
Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).
Journal of Cognition | 2018
Katya Olmos-Solis; Anouk M. van Loon; Christian N. L. Olivers
When observers search for a specific target, it is assumed that they activate a representation of the task-relevant object in visual working memory (VWM). This representation – often referred to as the template – guides attention towards matching visual input. In two experiments we tested whether the pupil response can be used to differentiate stimuli that match the task-relevant template from irrelevant input. Observers memorized a target color to be searched for in a multi-color visual search display, presented after a delay period. In Experiment 1, one color appeared at the start of the trial, which was then automatically the search template. In Experiment 2, two colors were presented, and a retro-cue indicated which of these was relevant for the upcoming search task. Crucially, before the search display appeared, we briefly presented one colored probe stimulus. The probe could match either the relevant template color, the non-cued color (irrelevant), or be a new color not presented in the trial. We measured the pupil response to the probe as a signature of task relevance. Experiment 1 showed significantly smaller pupil size in response to probes matching the search template than to irrelevant colors. Experiment 2 replicated the template-matching effect and allowed us to rule out that it was solely due to repetition priming. Taken together, we show that the pupil responds selectively to participants' target template prior to search.