Sébastien M. Crouzet
Charité
Publications
Featured research published by Sébastien M. Crouzet.
Journal of Vision | 2010
Sébastien M. Crouzet; Holle Kirchner; Simon J. Thorpe
Previous work has demonstrated that the human visual system can detect animals in complex natural scenes very efficiently and rapidly. In particular, using a saccadic choice task, H. Kirchner and S. J. Thorpe (2006) found that when two images are simultaneously flashed in the left and right visual fields, saccades toward the side with an animal can be initiated in as little as 120-130 ms. Here we show that saccades toward human faces are even faster, with the earliest reliable saccades occurring in just 100-110 ms, and mean reaction times of roughly 140 ms. Intriguingly, it appears that these very fast saccades are not completely under instructional control, because when faces were paired with photographs of vehicles, fast saccades were still biased toward faces even when the subject was targeting vehicles. Finally, we tested whether these very fast saccades might only occur in the simple case where the images are presented left and right of fixation by showing they also occur when the images are presented above and below fixation. Such results impose very serious constraints on the sorts of processing model that can be invoked and demonstrate that face-selective behavioral responses can be generated extremely rapidly.
Frontiers in Psychology | 2011
Sébastien M. Crouzet; Simon J. Thorpe
Recent experimental work has demonstrated the existence of extremely rapid saccades toward faces in natural scenes that can be initiated only 100 ms after image onset (Crouzet et al., 2010). These ultra-rapid saccades constitute a major challenge to current models of processing in the visual system because they do not seem to leave enough time for even a single feed-forward pass through the ventral stream. Here we explore the possibility that the information required to trigger these very fast saccades could be extracted very early on in visual processing using relatively low-level amplitude spectrum (AS) information in the Fourier domain. Experiment 1 showed that AS normalization can significantly alter face-detection performance. However, a decrease of performance following AS normalization does not alone prove that AS-based information is used (Gaspar and Rousselet, 2009). In Experiment 2, following the Gaspar and Rousselet paper, we used a swapping procedure to clarify the role of AS information in fast object detection. Our experiment is composed of three conditions: (i) original images, (ii) category swapped, in which the face image has the AS of a vehicle, and the vehicle has the AS of a face, and (iii) identity swapped, where the face has the AS of another face image, and the vehicle has the AS of another vehicle image. The results showed very similar levels of performance in the original and identity swapped conditions, and a clear drop in the category swapped condition. This result demonstrates that, in the early temporal window offered by the saccadic choice task, the visual saccadic system does indeed rely on low-level AS information in order to rapidly detect faces. This sort of crude diagnostic information could potentially be derived very early on in the visual system, possibly as early as V1 and V2.
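The amplitude-spectrum swapping procedure described above can be sketched in a few lines: each image keeps its own phase spectrum but receives the amplitude spectrum of another image. A minimal illustration in NumPy (the function name and the grayscale-array assumption are ours, not from the paper):

```python
import numpy as np

def swap_amplitude_spectrum(img_a, img_b):
    """Recombine each image's phase with the other image's amplitude spectrum.

    Illustrative sketch of an amplitude-spectrum swap in the Fourier domain;
    img_a and img_b are assumed to be equally sized 2-D grayscale arrays.
    """
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    # Keep each image's phase, substitute the *other* image's amplitude.
    swapped_a = np.real(np.fft.ifft2(np.abs(fb) * np.exp(1j * np.angle(fa))))
    swapped_b = np.real(np.fft.ifft2(np.abs(fa) * np.exp(1j * np.angle(fb))))
    return swapped_a, swapped_b
```

Because both inputs are real images, the recombined spectra remain Hermitian-symmetric, so the inverse transform is real up to numerical precision.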
Frontiers in Psychology | 2011
Sébastien M. Crouzet; Thomas Serre
Research progress in machine vision has been very significant in recent years. Robust face detection and identification algorithms are already readily available to consumers, and modern computer vision algorithms for generic object recognition are now coping with the richness and complexity of natural visual scenes. Unlike early vision models of object recognition that emphasized the role of figure-ground segmentation and spatial information between parts, recent successful approaches are based on the computation of loose collections of image features without prior segmentation or any explicit encoding of spatial relations. While these models remain simplistic models of visual processing, they suggest that, in principle, bottom-up activation of a loose collection of image features could support the rapid recognition of natural object categories and provide an initial coarse visual representation before more complex visual routines and attentional mechanisms take place. Focusing on biologically plausible computational models of (bottom-up) pre-attentive visual recognition, we review some of the key visual features that have been described in the literature. We discuss the consistency of these feature-based representations with classical theories from visual psychology and test their ability to account for human performance on a rapid object categorization task.
Journal of Cognitive Neuroscience | 2015
Chien-Te Wu; Sébastien M. Crouzet; Simon J. Thorpe; Michèle Fabre-Thorpe
Earlier studies suggested that the visual system processes information at the basic level (e.g., dog) faster than at the subordinate (e.g., Dalmatian) or superordinate (e.g., animals) levels. However, the advantage of the basic category over the superordinate category in object recognition has been challenged recently, and the hierarchical nature of visual categorization is now a matter of debate. To address this issue, we used a forced-choice saccadic task in which a target and a distractor image were displayed simultaneously on each trial and participants had to saccade as fast as possible toward the image containing animal targets based on different categorization levels. This protocol enables us to investigate the first 100–120 msec, a previously unexplored temporal window, of visual object categorization. The first result is a surprising stability of the saccade latency (median RT ∼155 msec) regardless of the animal target category and the dissimilarity of target and distractor image sets. Accuracy was high (around 80% correct) for categorization tasks that can be solved at the superordinate level but dropped to almost chance levels for basic level categorization. At the basic level, the highest accuracy (62%) was obtained when distractors were restricted to another dissimilar basic category. Computational simulations based on the saliency map model showed that the results could not be predicted by pure bottom–up saliency differences between images. Our results support a model of visual recognition in which the visual system can rapidly access relatively coarse visual representations that provide information at the superordinate level of an object, but where additional visual analysis is required to allow more detailed categorization at the basic level.
PLOS ONE | 2012
Sébastien M. Crouzet; Olivier Joubert; Simon J. Thorpe; Michèle Fabre-Thorpe
The processes underlying object recognition are fundamental for the understanding of visual perception. Humans can recognize many objects rapidly even in complex scenes, a task that still presents major challenges for computer vision systems. A common experimental demonstration of this ability is the rapid animal detection protocol, in which human participants' earliest responses to report the presence/absence of animals in natural scenes are observed at 250–270 ms latencies. One of the hypotheses to account for such speed is that people would not actually recognize an animal per se, but rather base their decision on global scene statistics. These global statistics (also referred to as spatial envelope or gist) have been shown to be computationally easy to process and could thus be used as a proxy for coarse object recognition. Here, using a saccadic choice task, which allows us to investigate a previously inaccessible temporal window of visual processing, we showed that animal – but not vehicle – detection clearly precedes scene categorization. This asynchrony is further validated by a late contextual modulation of animal detection, starting simultaneously with the availability of scene category. Interestingly, the advantage for animal over scene categorization is in opposition to the results of simulations using standard computational models. Taken together, these results challenge the idea that rapid animal detection might be based on early access to global scene statistics, and instead suggest a process based on the extraction of specific local complex features that might be hardwired in the visual system.
Current Biology | 2015
Sébastien M. Crouzet; Niko A. Busch; Kathrin Ohla
In most species, the sense of taste is key to distinguishing potentially nutritious from harmful food constituents and thereby to the acceptance (or rejection) of food. Taste quality is encoded by specialized receptors on the tongue, which detect chemicals corresponding to each of the basic tastes (sweet, salty, sour, bitter, and savory [1]), before taste quality information is transmitted via segregated neuronal fibers [2], distributed coding across neuronal fibers [3], or dynamic firing patterns [4] to the gustatory cortex in the insula. In rodents, hardwired coding by labeled lines [2], flexible, learning-dependent representations [5], and broadly tuned neurons [6] seem to coexist. It is currently unknown how, when, and where taste quality representations are established in the cortex and whether these representations are used for perceptual decisions. Here, using time-resolved multivariate pattern analyses of large-scale electrophysiological brain responses, we show that neuronal response patterns can be used to decode which of four tastants (salty, sweet, sour, and bitter) a participant tasted in a given trial. The onset of this prediction coincided with the earliest taste-evoked responses originating from the insula and opercular cortices, indicating that quality is among the first attributes of a taste represented in the central gustatory system. These response patterns correlated with perceptual decisions of taste quality: tastes that participants discriminated less accurately also evoked less discriminable brain response patterns. The results therefore provide the first evidence for a link between taste-related decision-making and the predictive value of these brain response patterns.
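Time-resolved multivariate pattern analysis of the kind described above trains a separate classifier at every time point and tracks when decoding rises above chance. A minimal sketch, assuming trials-by-channels-by-time data and one label per trial (the classifier choice and all names are illustrative, not the authors' actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def timeresolved_decoding(X, y, cv=5):
    """Decode the stimulus label independently at each time point.

    X has shape (n_trials, n_channels, n_times); y holds one label per
    trial. Returns cross-validated decoding accuracy per time point.
    """
    n_times = X.shape[2]
    scores = np.empty(n_times)
    for t in range(n_times):
        clf = LogisticRegression(max_iter=1000)
        # Accuracy at time t, averaged over cross-validation folds.
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores
```

The time point at which `scores` first exceeds chance gives the earliest moment the measured activity carries category information.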
The Journal of Neuroscience | 2017
Luca Iemi; Maximilien Chaumon; Sébastien M. Crouzet; Niko A. Busch
The brain exhibits organized fluctuations of neural activity, even in the absence of tasks or sensory input. A prominent type of such spontaneous activity is the alpha rhythm, which influences perception and interacts with other ongoing neural activity. It is currently hypothesized that states of decreased prestimulus α oscillations indicate enhanced neural excitability, resulting in improved perceptual acuity. Nevertheless, it remains debated how changes in excitability manifest at the behavioral level in perceptual tasks. We addressed this issue by comparing two alternative models describing the effect of spontaneous α power on signal detection. The first model assumes that decreased α power increases baseline excitability, amplifying the response to both signal and noise, predicting a liberal detection criterion with no effect on sensitivity. The second model predicts that decreased α power increases the trial-by-trial precision of the sensory response, resulting in improved sensitivity. We tested these models in two EEG experiments in humans where we analyzed the effects of prestimulus α power on visual detection and discrimination using a signal detection framework. Both experiments provide strong evidence that decreased α power reflects a more liberal detection criterion, rather than improved sensitivity, consistent with the baseline model. In other words, when the task requires detecting stimulus presence versus absence, reduced α oscillations make observers more likely to report the stimulus regardless of actual stimulus presence. Contrary to previous interpretations, these results suggest that states of decreased α oscillations increase the global baseline excitability of sensory systems without affecting perceptual acuity. SIGNIFICANCE STATEMENT Spontaneous fluctuations of brain activity explain why a faint sensory stimulus is sometimes perceived and sometimes not. 
The prevailing view is that heightened neural excitability, indexed by decreased α oscillations, promotes better perceptual performance. Here, we provide evidence that heightened neural excitability instead reflects a state of biased perception, during which a person is more likely to see a stimulus, whether or not it is actually present. Therefore, we propose that changes in neural excitability leave the precision of sensory processing unaffected. These results establish the link between spontaneous brain activity and the variability in human perception.
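The two models contrasted above map onto the standard signal-detection-theory quantities: sensitivity (d′) and criterion (c), both computed from hit and false-alarm rates. A short sketch of these textbook formulas (a more negative c means a more liberal criterion, i.e., more "stimulus present" reports regardless of actual presence):

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Compute sensitivity d' and criterion c from hit and false-alarm rates.

    Standard equal-variance signal-detection-theory formulas:
    d' = z(H) - z(FA),  c = -(z(H) + z(FA)) / 2,
    where z is the inverse of the standard normal CDF.
    """
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa
    criterion = -0.5 * (z_h + z_fa)
    return d_prime, criterion
```

Under the baseline model favored by the study, lower prestimulus α power would shift c toward negative values while leaving d′ unchanged.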
Frontiers in Human Neuroscience | 2013
Maxime Cauchoix; Sébastien M. Crouzet
Primates recognize objects in natural visual scenes with great rapidity. The ventral visual cortex is usually assumed to play a major role in this ability (“high-road”). However, the “low-road” alternative frequently proposed is that the visual cortex is bypassed by a rapid subcortical route to the amygdala, especially in the case of biologically relevant and emotional stimuli. This paper highlights the lack of evidence from psychophysics and computational models to support this “low-road” alternative. Most importantly, the timing of neural responses invites a serious reconsideration of the low-road role in rapid processing of visual objects.
NeuroImage | 2016
Maxime Cauchoix; Sébastien M. Crouzet; Denis Fize; Thomas Serre
Primates can recognize objects embedded in complex natural scenes in a glimpse. Rapid categorization paradigms have been extensively used to study our core perceptual abilities when the visual system is forced to operate under strong time constraints. However, the neural underpinnings of rapid categorization remain to be understood, and the incredible speed of sight has yet to be reconciled with modern ventral stream cortical theories of object recognition. Here we recorded multichannel subdural electrocorticogram (ECoG) signals from intermediate areas (V4/PIT) of the ventral stream of the visual cortex while monkeys were actively engaged in a rapid animal/non-animal categorization task. A traditional event-related potential (ERP) analysis revealed short visual latencies (<50-70 ms) followed by rapidly developing visual selectivity (within ~20-30 ms) for most electrodes. A multivariate pattern analysis (MVPA) technique further confirmed that reliable animal/non-animal category information could be decoded from this initial ventral stream neural activity (within ~90-100 ms). Furthermore, this early category-selective neural activity was (a) unaffected by the presentation of a backward (pattern) mask, (b) generalized to novel (unfamiliar) stimuli, and (c) co-varied with behavioral responses (both accuracy and reaction times). Despite the strong prevalence of task-related information in the neural signal, task-irrelevant visual information could still be decoded independently of monkey behavior. Monkey behavioral responses were also found to correlate significantly with human behavioral responses for the same set of stimuli. Together, the present study establishes that rapid ventral stream neural activity carries a visually selective signal that is subsequently used to drive rapid visual categorization, and that this visual strategy may be shared between human and non-human primates.
PLOS Computational Biology | 2015
Imri Sofer; Sébastien M. Crouzet; Thomas Serre
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability.
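The distance-to-boundary measure described above can be illustrated with a linear classifier: the signed distance of each image's feature vector to the learned category boundary serves as a per-image discriminability score. A minimal sketch under assumed names; the actual feature space and classifier used in the study may differ:

```python
import numpy as np
from sklearn.svm import LinearSVC

def boundary_distances(features, labels):
    """Signed distance of each image to a linear categorization boundary.

    A classifier is fit on image features, and each image's distance to
    the decision hyperplane is taken as its discriminability: images far
    from the boundary are easy exemplars, images near it are ambiguous.
    """
    clf = LinearSVC(C=1.0).fit(features, labels)
    # decision_function is proportional to the signed distance to the
    # hyperplane; dividing by the weight norm yields a true distance.
    return clf.decision_function(features) / np.linalg.norm(clf.coef_)
```

Larger absolute distances would then predict faster, more accurate behavioral responses for that image in the corresponding categorization task.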