Publications


Featured research published by Laurent Caplette.


Frontiers in Psychology | 2014

Affective and contextual values modulate spatial frequency use in object recognition

Laurent Caplette; Gregory West; Marie Gomot; Frédéric Gosselin; Bruno Wicker

Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSFs). The present study aimed to investigate the spatial frequency (SF) content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, the SFs of each image were randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system.
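The random SF sampling described in this abstract can be sketched in a few lines. The following Python sketch is purely illustrative and is not the authors' code: the image, the number of SF bins, the trial count, and the function name sf_filter_image are all made-up assumptions. It filters an object image with random per-frequency weights on each trial, then regresses accuracy on those weights to estimate which SFs support recognition.

# Minimal sketch (not the authors' code) of trial-by-trial random SF sampling
# and the accuracy regression described in the abstract. All names and values
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sf_bins, img_size = 200, 30, 128

def sf_filter_image(img, sf_weights):
    """Weight an image's Fourier amplitude with per-SF-bin random weights."""
    f = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.indices(img.shape) - img.shape[0] // 2
    radius = np.hypot(yy, xx)                               # radial SF, cycles per image
    bins = np.clip((radius / radius.max() * n_sf_bins).astype(int), 0, n_sf_bins - 1)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * sf_weights[bins])))

image = rng.standard_normal((img_size, img_size))            # stand-in for an object photo
sampling = rng.random((n_trials, n_sf_bins))                  # random SF weights, one row per trial
stimuli = [sf_filter_image(image, w) for w in sampling]       # what observers would see
accuracy = rng.integers(0, 2, n_trials).astype(float)         # placeholder observer responses

# Regress accuracy on the SF weights: positive coefficients mark SF bins
# associated with correct identification (a "classification image" over SFs).
X = np.column_stack([np.ones(n_trials), sampling])
coefs, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
print(coefs[1:])                                              # one coefficient per SF bin

In the study itself, it is coefficient profiles of this kind, expressed in cycles per object, that identify the 14 to 24 cycles-per-object band reported above.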


Scientific Reports | 2016

Atypical Time Course of Object Recognition in Autism Spectrum Disorder

Laurent Caplette; Bruno Wicker; Frédéric Gosselin

In neurotypical observers, it is widely believed that the visual system samples the world in a coarse-to-fine fashion. Past studies on Autism Spectrum Disorder (ASD) have identified atypical responses to fine visual information but did not investigate the time course of the sampling of information at different levels of granularity (i.e., spatial frequencies, SFs). Here, we examined this question during an object recognition task in ASD and neurotypical observers using a novel experimental paradigm. Our results confirm and characterize with unprecedented precision a coarse-to-fine sampling of SF information in neurotypical observers. In ASD observers, we discovered a different pattern of SF sampling across time: in the first 80 ms, high SFs lead ASD observers to higher accuracy than neurotypical observers, and these SFs are sampled differently across time in the two subject groups. Our results might be related to the absence of a mandatory precedence of global information, and to top-down processing abnormalities in ASD.


Journal of Experimental Psychology: Human Perception and Performance | 2014

Real-World Interattribute Distances Lead to Inefficient Face Gender Categorization

Nicolas Dupuis-Roy; Daniel Fiset; Kim Dufresne; Laurent Caplette; Frédéric Gosselin

The processing of interattribute distances is believed to be critical for upright face categorization. A recent study by Taschereau-Dumouchel, Rossion, Schyns, and Gosselin (2010) challenged this idea by showing that participants were nearly at chance when asked to identify faces on the sole basis of real-world interattribute distances, while they were nearly perfect when all other facial cues were shown. However, it remains possible that humans are highly tuned to interattribute distances but that the information conveyed by these cues is scarce. We tested this hypothesis by contrasting the efficiencies (a measure of performance that factors out task difficulty) of 60 observers in 6 face gender categorization tasks. Our main finding is that efficiencies for faces that varied only in terms of their interattribute distances were an order of magnitude lower than efficiencies for faces that varied in all respects except their interattribute distances, or in all respects. These results deal a definitive blow to the idea that real-world interattribute distances are critical for upright face processing.
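For readers unfamiliar with the measure, efficiency is conventionally defined in ideal-observer analysis as the squared ratio of human to ideal-observer sensitivity; this is the standard textbook definition, not a formula quoted from this paper:

\eta = \left( \frac{d'_{\mathrm{human}}}{d'_{\mathrm{ideal}}} \right)^{2}

Under this definition, an efficiency an order of magnitude lower for the distance-only faces means that observers made far less effective use of the gender information available in interattribute distances than of the information available in the other facial cues.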


Journal of Experimental Psychology | 2017

Hand Position Alters Vision by Modulating the Time Course of Spatial Frequency Use

Laurent Caplette; Bruno Wicker; Frédéric Gosselin; Greg L. West

The nervous system gives preferential treatment to objects near the hands that are candidates for action. It is not yet understood how this process is achieved. Here we show evidence for the mechanism that underlies this process, using an experimental technique that maps the use of spatial frequencies (SFs) across time during object recognition. We used this technique to replicate and characterize with greater precision the coarse-to-fine SF sampling observed in previous studies. We then show that the visual processing of real-world objects near an observer's hands is biased toward the use of low-SF information around 288 ms. Conversely, high-SF information presented around 113 ms impaired object recognition when objects were presented near the hands. Notably, both of these effects happened relatively late during object recognition and suggest that the modulation of SF use by hand position is at least partly attentional in nature.


bioRxiv | 2018

Real-world expectations and their affective value modulate object processing

Laurent Caplette; Frédéric Gosselin; Martial Mermillod; Bruno Wicker

It is well known that expectations influence how we perceive the world. Yet the neural mechanisms underlying this process remain unclear. So far, studies have focused on artificial contingencies between simple neutral cues and events. Real-world expectations, however, are often generated from complex associations between potentially affective contexts and objects, learned over a lifetime. In this study, we used fMRI to investigate how object processing is influenced by neutral and affective context-based expectations. First, we show that the precuneus, the inferotemporal cortex and the frontal cortex are more active during object recognition when expectations have been elicited a priori, irrespective of their validity or their affective intensity. This result supports previous hypotheses according to which these brain areas integrate contextual expectations with object sensory information. Notably, these brain areas are different from those responsible for simultaneous context-object interactions, dissociating the two processes. Then, we show that early visual areas, on the contrary, are more active during object recognition when no prior expectation has been elicited by a context. Lastly, BOLD activity was enhanced in early visual areas when objects were less expected, but only when contexts were neutral; the reverse effect was observed when contexts were affective. This result supports recent proposals that affect modulates predictions in the brain. Together, our results help elucidate the neural mechanisms of real-world expectations.

Significance statement: It is well known that expectations shape how we perceive the world. However, the precise mechanisms remain unclear, and studies have often used stimuli that lack ecological validity. In the present fMRI study, we assessed the effect of real-world expectations initiated by neutral and affective contexts on the neural mechanisms of object recognition. We first show evidence for previous claims that the precuneus and the inferotemporal cortex integrate contextual expectations with sensory information. Our results also suggest that scene-based predictions and instantaneous scene-object interactions are different processes. Finally, we show that the enhanced response usually observed with unexpected objects is reversed when contexts are affective. This result supports a recent proposal concerning the role of affect in the initiation of predictions.


Handbook of Categorization in Cognitive Science (Second Edition) | 2017

The Time Course of Object, Scene, and Face Categorization

Laurent Caplette; Éric McCabe; Caroline Blais; Frédéric Gosselin

We first describe Strategy Length & Internal Practicability (SLIP), a formal model for thinking about categorization, in particular about the time course of categorization. We then discuss an early application of this model to basic-levelness. We then turn to aspects of the time course of categorization that have been neglected in the categorization literature: our limited processing capacities; the necessity of having a flexible categorization apparatus; and the paradox that this inexorably brings about. We propose a twofold resolution of this paradox, attempting, in the process, to bridge work done on categorization in vision, neuropsychology, and physiology.


Journal of Vision | 2015

Autistic and neurotypical subjects extract spatial frequencies differently

Laurent Caplette; Philippe Desroches; Bruno Wicker; Frédéric Gosselin

When recognizing an object, low spatial frequencies (LSFs) are processed before higher spatial frequencies (HSFs), presumably through the faster magnocellular pathway. People suffering from autism spectrum disorders (ASD), however, may not benefit from such a precedence of LSF information, with several studies indicating a deficit in processing related to the magnocellular pathway (e.g., Sutherland & Crewther, 2010) and a preference toward HSFs rather than LSFs (e.g., Deruelle, Rondan, Gepner, & Tardif, 2004). Our study compared the time course of spatial frequency (SF) use in object recognition in neurotypical and ASD subjects. Forty-five neurotypical and 18 ASD subjects participated in the study. On each trial, a short video (333 ms) was presented to subjects. This video was created by selecting one of 86 object images, all equalized in SF content, and by randomly sampling its SFs across time. An object name immediately followed, and subjects had to indicate if it matched the object (it did on half the trials) as quickly as possible without making too many errors. We then performed multiple linear regressions of accuracy on the SF × time sampling planes. Most SFs between 0.08 and 4.42 cycles per degree (cpd) during almost all of the stimulus presentation (0 to 325 ms) led to more accurate responses for both groups (p < 0.05; Zmax = 7.10). Interestingly, SFs between 3.75 and 4.33 cpd in the 58–100 ms time window (p < 0.05; Zmax = 4.03) and SFs between 5.00 and 5.58 cpd in the 67–100 ms time window (p < 0.05; Zmax = 3.99) led to more accurate responses for autistic subjects than for neurotypical subjects. These results indicate that while both groups use LSFs and intermediate SFs throughout object recognition, autistic subjects use more HSFs at a shorter latency. This suggests a different time course of SF extraction for autistic subjects. Meeting abstract presented at VSS 2015.
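To make the SF × time regression concrete, here is a minimal, hypothetical sketch. It is not the study's pipeline: the number of trials, SF bins, time bins, and permutations are assumptions. Accuracy is regressed on the vectorized SF × time sampling planes, and the coefficient map is z-scored against a permutation null, one common way to express such maps before cluster-level tests.

# Illustrative sketch of regressing accuracy on SF x time sampling planes.
# Dimensions, permutation count, and statistics are assumptions, not the study's values.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sf, n_time = 1000, 20, 10                  # hypothetical resolution

planes = rng.random((n_trials, n_sf, n_time))          # SF x time sampling plane per trial
accuracy = rng.integers(0, 2, n_trials).astype(float)  # placeholder correct/incorrect

# Multiple linear regression: one predictor per (SF, time) cell.
X = np.column_stack([np.ones(n_trials), planes.reshape(n_trials, -1)])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
coef_map = beta[1:].reshape(n_sf, n_time)              # SF x time coefficient map

# Z-score the map against a permutation null obtained by shuffling accuracy.
n_perm = 200
null = np.empty((n_perm, n_sf, n_time))
for i in range(n_perm):
    b, *_ = np.linalg.lstsq(X, rng.permutation(accuracy), rcond=None)
    null[i] = b[1:].reshape(n_sf, n_time)
z_map = (coef_map - null.mean(axis=0)) / null.std(axis=0)
print(z_map.shape)  # rows: SF bins, columns: time bins within the 333 ms stimulus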


Journal of Vision | 2014

Action Video Game Exposure Modulates Spatial Frequency Tuning for Emotional Objects

Laurent Caplette; Greg L. West; Bruno Wicker; Marie Gomot; Frédéric Gosselin


Journal of Vision | 2018

Information sampling and processing during visual recognition

Laurent Caplette; Karim Jerbi; Frédéric Gosselin


Journal of Vision | 2017

Teasing apart the extraction and the processing of visual information in the brain

Laurent Caplette; Karim Jerbi; Frédéric Gosselin

Collaboration


Dive into Laurent Caplette's collaborations.

Top Co-Authors

Bruno Wicker
Aix-Marseille University

Daniel Fiset
Université du Québec en Outaouais

Karim Jerbi
Université de Montréal

Kim Dufresne
Université de Montréal

Maxime Fortin
Université de Montréal