Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Surya Gayet is active.

Publication


Featured research published by Surya Gayet.


Frontiers in Psychology | 2014

Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield

Surya Gayet; Stefan Van der Stigchel; Chris L. E. Paffen

Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method, the time it takes for interocularly suppressed stimuli to breach the threshold of visibility is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim to dissociate high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis.


Psychological Science | 2013

Information Matching the Content of Visual Working Memory Is Prioritized for Conscious Access

Surya Gayet; Chris L. E. Paffen; Stefan Van der Stigchel

Visual working memory (VWM) is used to retain relevant information for imminent goal-directed behavior. In the experiments reported here, we found that VWM helps to prioritize relevant information that is not yet available for conscious experience. In five experiments, we demonstrated that information matching VWM content reaches visual awareness faster than does information not matching VWM content. Our findings suggest a functional link between VWM and visual awareness: The content of VWM is recruited to funnel down the vast amount of sensory input to that which is relevant for subsequent behavior and therefore requires conscious access.


Cognition | 2016

Visual input signaling threat gains preferential access to awareness in a breaking continuous flash suppression paradigm.

Surya Gayet; Chris L. E. Paffen; Artem V. Belopolsky; Jan Theeuwes; Stefan Van der Stigchel

Visual input that signals threat is inherently relevant for survival. Accordingly, it has been demonstrated that threatening visual input elicits faster behavioral responses than non-threatening visual input. Considering that awareness is a prerequisite for performing demanding tasks and guiding novel behavior, we hypothesized that threatening visual input would gain faster access to awareness than non-threatening visual input. In the present study, we associated one of two basic visual stimuli, which were devoid of intrinsic relevance (colored annuli), with aversive stimulation (i.e., electric shocks) following a classical fear conditioning procedure. In the subsequent test phase, no electric shocks were delivered, and a breaking continuous flash suppression task was used to measure how fast these stimuli would access awareness. The results reveal that stimuli that were previously paired with an electric shock break through suppression faster than comparable stimuli that were not paired with an electric shock.


Journal of Vision | 2016

Visual input that matches the content of visual working memory requires less (not faster) evidence sampling to reach conscious access

Surya Gayet; L. van Maanen; M. Heilbron; C.L.E. Paffen; S. Van der Stigchel

The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate the nature of the interaction between visual representations elicited by VWM and visual representations elicited by retinal input, we modeled the perceptual processes leading up to this difference in suppression durations. We replicated the VWM modulation of suppression durations, and fitted sequential sampling models (linear ballistic accumulators) to the response time data. Model comparisons revealed that the data was best explained by a decrease in threshold for visual input that matches the content of VWM. Converging evidence was obtained by fitting similar sequential sampling models (shifted Wald model) to published datasets. Finally, to confirm that the previously observed threshold difference reflected processes occurring before rather than after the stimuli were released from suppression, we applied the same procedure to the data of an experiment in which stimuli were not interocularly suppressed. Here, we found no decrease in threshold for stimuli that match the content of VWM. We discuss our findings in light of a preactivation hypothesis, proposing that matching visual input taps into the same neural substrate that is already activated by a representation concurrently maintained in VWM, thereby reducing its threshold for reaching visual awareness.
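The threshold account described in this abstract can be illustrated with a toy sequential-sampling simulation. The sketch below simulates a single linear ballistic accumulator; all parameter values and names are illustrative assumptions, not the fitted values from the study. It shows the core idea: lowering the decision threshold for VWM-matching input shortens simulated breakthrough times even when the drift rate (evidence sampling speed) is unchanged.

```python
import random

def simulate_lba_rt(threshold, n_trials=10_000, start_max=0.5,
                    drift_mean=1.0, drift_sd=0.2, t0=0.3, seed=0):
    """Mean response time of a single linear ballistic accumulator.

    On each trial, evidence rises linearly from a uniform start point
    at a trial-specific drift rate; a response is made when the
    evidence crosses `threshold`. `t0` is non-decision time.
    """
    rng = random.Random(seed)
    rts = []
    for _ in range(n_trials):
        start = rng.uniform(0.0, start_max)
        drift = rng.gauss(drift_mean, drift_sd)
        if drift <= 0:  # this accumulator would never reach threshold
            continue
        rts.append(t0 + (threshold - start) / drift)
    return sum(rts) / len(rts)

# Illustrative parameter values: a reduced threshold for VWM-matching
# input, identical drift rates for both conditions.
mean_rt_match = simulate_lba_rt(threshold=1.2)     # matching: lower threshold
mean_rt_mismatch = simulate_lba_rt(threshold=1.5)  # mismatching: baseline
assert mean_rt_match < mean_rt_mismatch  # matching input breaks through sooner
```

With identical drift parameters, the threshold difference alone reproduces the direction of the suppression-duration effect, which is the distinction (threshold versus drift) that the model comparison in the paper addresses.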


The Journal of Neuroscience | 2017

Visual Working Memory Enhances the Neural Response to Matching Visual Input

Surya Gayet; Matthias Guggenmos; Thomas B. Christophel; John-Dylan Haynes; Chris L. E. Paffen; Stefan Van der Stigchel; Philipp Sterzer

Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.


Journal of Vision | 2015

Cogito ergo video: Task-relevant information is involuntarily boosted into awareness

Surya Gayet; Jan Brascamp; Stefan Van der Stigchel; Chris L. E. Paffen

Only part of the visual information that impinges on our retinae reaches visual awareness. In a series of three experiments, we investigated how the task relevance of incoming visual information affects its access to visual awareness. On each trial, participants were instructed to memorize one of two presented hues, drawn from different color categories (e.g., red and green), for later recall. During the retention interval, participants were presented with a differently colored grating in each eye such as to elicit binocular rivalry. A grating matched either the task-relevant (memorized) color category or the task-irrelevant (nonmemorized) color category. We found that the rivalrous stimulus that matched the task-relevant color category tended to dominate awareness over the rivalrous stimulus that matched the task-irrelevant color category. This effect of task relevance persisted when participants reported the orientation of the rivalrous stimuli, even though in this case color information was completely irrelevant for the task of reporting perceptual dominance during rivalry. When participants memorized the shape of a colored stimulus, however, its color category did not affect predominance of rivalrous stimuli during retention. Taken together, these results indicate that the selection of task-relevant information is under volitional control but that visual input that matches this information is boosted into awareness irrespective of whether this is useful for the observer.


Cortex | 2017

Competitive interactions in visual working memory drive access to awareness

Dirk van Moorselaar; Surya Gayet; Chris L. E. Paffen; Jan Theeuwes; Stefan Van der Stigchel; Christian N. L. Olivers

Models of biased competition assume that pre-activating a visual representation in visual working memory (VWM) biases perception towards memory-matching objects. Consistent with this, it has been shown that targets suppressed by interocular competition gain prioritized access to awareness when they match VWM content. Thus far, these VWM biases during interocular suppression have been investigated with minimal levels of competition, as there was always only one target stimulus and observers only held a single item in VWM. In the current study we investigated how VWM-based modulation of access to awareness is influenced by a) multiple-item competition within the stimulus display and b) multiple-item competition within VWM. Using the method of breaking continuous flash suppression (b-CFS), we replicated the finding that information matching the content of VWM is released from interocular suppression faster than non-matching information. This VWM-based facilitation was significantly reduced, though still present, when VWM load increased from one to two items, demonstrating a clear competitive constraint on the top-down modulation by VWM. Furthermore, we manipulated inter-stimulus competition by varying the presence of distractors. When distractors were present, VWM-based facilitation was no longer specific to interocular suppression, but also occurred for monocular displays. The results demonstrate that VWM-based visual biases occur in response to competition, whether between or within the eyes, and reconcile findings from different paradigms.


Journal of Vision | 2018

Attention-based perceptual learning does not affect access to awareness

Chris L. E. Paffen; Surya Gayet; M. Heilbron; S. Van der Stigchel

Visual information that is relevant for an observer gains prioritized access to awareness (Gayet, Van der Stigchel, & Paffen, 2014). Here we investigate whether information that was relevant for an extended duration is prioritized for access to awareness when it is no longer relevant. We applied a perceptual-learning paradigm, in which observers were trained for 3 days on a speed-discrimination task. This task used a stimulus consisting of two motion directions, of which one was relevant to the task and one irrelevant. Before and after training, we applied a motion-coherence task to validate whether perceptual learning took place, and a breaking continuous flash-suppression (b-CFS) paradigm to assess how training affected access to awareness. The results reveal that motion-coherence thresholds for the task-relevant motion direction selectively decreased after compared to before training, revealing that task-relevant perceptual learning took place. The results of the b-CFS task, however, reveal that access to awareness was not affected by task-relevant learning: Instead, detection times for motion undergoing CFS decreased, irrespective of its direction, after compared to before training. A follow-up experiment showed that the time to detect visual motion also decreased after 3 days without training, revealing that perceptual learning did not cause the general decrease in detection times. The latter is in line with results by Mastropasqua, Tse, and Turatto (2015) and has important consequences for studies applying b-CFS to assess access to awareness: Studies that intend to apply measurements involving b-CFS on different testing days should consider that breakthrough times will dramatically decrease from pre- to postmeasurement.


Cortex | 2018

Hide and seek: Directing top-down attention is not sufficient for accelerating conscious access

Surya Gayet; Iris Douw; Vera van der Burg; Stefan Van der Stigchel; Chris L. E. Paffen

At any moment in time, we have a single conscious visual experience representing a minute part of our visual world. As such, the visual input stimulating our retinae is in continuous competition for reaching conscious access. Many complex cognitive operations can only be applied to consciously accessible visual information, thereby raising the question whether humans have the ability to select which parts of their visual input reach consciousness. Top-down attention allows humans to flexibly assign more processing resources to certain parts of our visual input, making it a likely mechanism to volitionally bias conscious access. Here, we investigated whether directing top-down attention to a particular location or feature accelerates conscious access of an initially suppressed visual stimulus at the attended location, or of the attended feature. We instructed participants to attend a spatial location (Experiment 1) or color (Experiment 2) for a speeded discrimination task, using a highly predictive cue. The predictive cues were highly effective in prompting sustained attention towards the cued location or color, as evidenced by faster discrimination of cued relative to uncued targets. We simultaneously measured detection times to interocularly suppressed probes that were either of the cued (i.e., attended) color/location or not, and were visually distinct from the targets used for the discrimination task. Despite our successful manipulation of top-down attention, suppressed probes were not released from suppression faster when they were presented at the attended location, or in the attended color. In contrast, when observers were cued to attend a color for locating targets of an ill-defined shape (inciting a broader attentional template), we did observe faster conscious access of probes in the attended color (Experiment 3).
We discuss our findings in light of the specificity of attentional templates, and the inherent limitations that this poses for top-down attentional biases on conscious access.


Scientific Reports | 2017

Remapping high-capacity, pre-attentive, fragile sensory memory

Paul Zerr; Surya Gayet; Kees Mulder; Yair Pinto; Ilja G. Sligte; Stefan Van der Stigchel

Humans typically make several saccades per second. This provides a challenge for the visual system as locations are largely coded in retinotopic (eye-centered) coordinates. Spatial remapping, the updating of retinotopic location coordinates of items in visuospatial memory, is typically assumed to be limited to robust, capacity-limited and attention-demanding working memory (WM). Are pre-attentive, maskable, sensory memory representations (e.g. fragile memory, FM) also remapped? We directly compared trans-saccadic WM (tWM) and trans-saccadic FM (tFM) in a retro-cue change-detection paradigm. Participants memorized oriented rectangles, made a saccade and reported whether they saw a change in a subsequent display. On some trials a retro-cue indicated the to-be-tested item prior to probe onset. This allowed sensory memory items to be included in the memory capacity estimate. The observed retro-cue benefit demonstrates a tFM capacity considerably above tWM. This provides evidence that some, if not all sensory memory was remapped to spatiotopic (world-centered, task-relevant) coordinates. In a second experiment, we show backward masks to be effective in retinotopic as well as spatiotopic coordinates, demonstrating that FM was indeed remapped to world-centered coordinates. Together this provides conclusive evidence that trans-saccadic spatial remapping is not limited to higher-level WM processes but also occurs for sensory memory representations.

Collaboration


Dive into Surya Gayet's collaborations.

Top Co-Authors

Jan Theeuwes

VU University Amsterdam
