
Publication


Featured research published by Quentin Lenoble.


Journal of Vision | 2016

Finding faces, animals, and vehicles in far peripheral vision

Muriel Boucart; Quentin Lenoble; Justine Quettelart; Sébastien Szaffarczyk; Pascal Despretz; Simon J. Thorpe

Neuroimaging studies have shown that faces exhibit a central visual field bias, as compared to buildings and scenes. With a saccadic choice task, Crouzet, Kirchner, and Thorpe (2010) demonstrated a speed advantage for the detection of faces with stimuli located 8° from fixation. We used the same paradigm to examine whether the face advantage, relative to other categories (animals and vehicles), extends across the whole visual field (from 10° to 80° eccentricity) or whether it is limited to the central visual field. Pairs of photographs of natural scenes (a target and a distractor) were displayed simultaneously left and right of central fixation for 1 s on a panoramic screen. Participants were asked to saccade to a target stimulus (faces, animals, or vehicles). The distractors were images corresponding to the two other categories. Eye movements were recorded with a head-mounted eye tracker. Only the first saccade was measured. Experiment 1 showed that (a) in terms of speed of categorization, faces maintain their advantage over animals and vehicles across the whole visual field, up to 80°, and (b) even in crowded conditions (an object embedded in a scene), performance was above chance for the three categories of stimuli at 80° eccentricity. Experiment 2 showed that, when compared to another category with a high degree of within-category structural similarity (cars), faces keep their advantage at all eccentricities. These results suggest that the bias for faces is not limited to the central visual field, at least in a categorization task.


Frontiers in Integrative Neuroscience | 2014

Differential processing of natural scenes in posterior cortical atrophy and in Alzheimer’s disease, as measured with a saccade choice task

Muriel Boucart; Gauthier Calais; Quentin Lenoble; Christine Moroni; Florence Pasquier

Atrophy of the medial temporal lobe structures that support scene perception and the binding of an object to its context (i.e., the hippocampus and the parahippocampal cortex) appears early in the course of Alzheimer’s disease (AD). However, few studies have investigated scene perception in people with AD. Here, we assessed the ability to find a target object within a natural scene in people with AD and in people with posterior cortical atrophy (PCA, a variant of AD). Pairs of color photographs were displayed on the left and right of a fixation cross for 1 s. In separate blocks of trials, participants were asked to categorize the target (an animal) by either moving their eyes toward the photograph containing the target (the saccadic choice task) or pressing a key corresponding to the target’s location (the manual choice task). Isolated objects and objects within scenes were studied in both tasks. Participants with PCA were more impaired in detection of a target within a scene than participants with AD. The latter’s performance pattern was more similar to that of age-matched controls in terms of accuracy, saccade latencies and the benefit gained from contextual information. Participants with PCA benefited less from contextual information in both the saccade and the manual choice tasks—suggesting that people with posterior brain lesions have impairments in figure/ground segregation and are more sensitive to object crowding.


Journal of Glaucoma | 2016

Impact of Peripheral Field Loss on the Execution of Natural Actions: A Study With Glaucomatous Patients and Normally Sighted People.

Stéphanie Dive; Jean François Rouland; Quentin Lenoble; Sébastien Szaffarczyk; Allison M. McKendrick; Muriel Boucart

Purpose: We investigated the visuomotor behavior of people with reduced peripheral field due to glaucoma while they accomplished natural actions. Methods: Twelve participants with glaucoma and 13 normally sighted controls were included. Participants were asked to accomplish a familiar sandwich-making task and a less familiar model-building task with a children’s construction set while their eye movements were recorded. Both scene layouts contained task-relevant and task-irrelevant objects. There was no time constraint. Results: Participants with glaucoma were slower to perform the task than were the normal observers, but the slower performance was confined to the unfamiliar model-building task. Patients and controls were equally efficient in the more familiar sandwich-making task. On initial exposure, before the first reaching movement was initiated, patients scanned the objects longer than did controls, particularly in the unfamiliar model-building task, and controls fixated irrelevant objects less than did patients. During the working phase, fixations were on average longer for patients than for controls, and patients made more saccades than did controls. Patients did not grasp more irrelevant objects compared with controls. Conclusions: The results provide evidence that, although slower than controls, patients with glaucoma were able to accomplish natural actions efficiently even when the task required discrimination of small, structurally similar objects (nuts and screws in the model-building task). Their difficulties were reflected in longer fixation times and more head and eye movements compared with controls, presumably to compensate for lower visibility when objects fell in the part of their visual field where sensitivity was reduced.


Acta Psychologica | 2017

Eye movement during retrieval of emotional autobiographical memories

Mohamad El Haj; Jean-Louis Nandrino; Pascal Antoine; Muriel Boucart; Quentin Lenoble

This study assessed whether specific eye movement patterns are observed during emotional autobiographical retrieval. Participants were asked to retrieve positive, negative and neutral memories while their scan path was recorded by an eye-tracker. Results showed that positive and negative emotional memories triggered more fixations and saccades but shorter fixation durations than neutral memories. No significant differences were observed between emotional and neutral memories for the duration and amplitude of saccades. Positive and negative retrieval triggered similar eye movements (i.e., similar number of fixations and saccades, fixation duration, duration of saccades, and amplitude of saccades). Interestingly, the participants reported higher visual imagery for emotional memories than for neutral memories. The findings demonstrate similarities and differences in eye movements during retrieval of neutral and emotional memories. Eye movements during autobiographical retrieval appear to be triggered by the creation of visual mental images, which are indexed by autobiographical reconstruction.


British Journal of Ophthalmology | 2016

Visual object categorisation in people with glaucoma.

Quentin Lenoble; Jia Jia Lek; Allison M. McKendrick

Purpose: There is evidence that people with glaucoma exhibit difficulties with some complex visual tasks such as face recognition, motion perception and scene exploration. The purpose of this study was to determine whether glaucoma affects the ability to categorise briefly presented visual objects in central vision. Methods: Visual categorisation performance of 14 people with glaucoma (primary open angle glaucoma and preperimetric) and 15 age-matched controls was measured, assessing both accuracy and response times. Grey level photographs of objects (size) were presented foveally for 28 ms. Perimetric thresholds were normal for all participants within the central 3°. Two contrast levels were included: a medium level at 50% and a high level at 100%. Results: On average, accuracy was significantly lower, by 7% (p=0.046), for the medium contrast stimuli in patients with glaucoma (87% correct responses, SD: 5%) compared with controls (94% correct responses, SD: 4.7%). Group average response times were significantly slower for the patients than for the control group (712 ms, SD: 53 ms vs 643 ms, SD: 34 ms; p<0.01). Performance was equivalent in the two groups when the picture contrast was 100%. Conclusions: The impairment observed in the categorisation task supports previous work demonstrating that people with glaucoma can have greater difficulties with complex visual tasks than is predicted by their visual field loss. Performance was equivalent to that of age-matched controls when contrast was maximised.


Gériatrie et Psychologie Neuropsychiatrie du Vieillissement | 2012

Vieillissement de la catégorisation visuelle d’objet : interaction entre un déficit « fréquence spatiale-spécifique » et un déficit « catégorie-spécifique » [Ageing of visual object categorization: interaction between a “spatial frequency-specific” deficit and a “category-specific” deficit]

Pierre Bordaberry; Quentin Lenoble; Sandrine Delord

The study investigated the aging of object categorization by manipulating the spatial frequency (SF) content of object photographs and the object category. Thirty young adults (mean age 22 years) and 24 mature adults (mean age 57 years) categorized 120 items (animals/tools), each presented for 200 ms in one of three versions: a normal version (no filter), a band-pass filtered version (medium to high SF) and a low-pass filtered version (low SF). Results showed that the categorization task relied mainly on the medium-to-high SF band and that the mature group was markedly impaired in that band. In this group, the impairment produced a category-specific deficit for tools, whose weak intra-category similarity requires that SF range to be processed. An age-related impairment was also found for the low SF band, specifically for animals. This interaction between the age-related SF-specific deficit and the object category is discussed in relation to the relevance of each SF band for the task and to the characteristics of the two categories.


Memory | 2018

Don’t stare, unless you don’t want to remember: Maintaining fixation compromises autobiographical memory retrieval

Quentin Lenoble; Steve M. J. Janssen; Mohamad El Haj

This study developed an original approach to the relationship between eye movements and autobiographical memory by investigating how maintained fixation influences the characteristics of retrieved memories. We invited participants to retrieve autobiographical memories in two conditions: while fixating a cross at the centre of a screen and while freely exploring the screen. Memories retrieved during the maintained fixation condition were less detailed and contained less visual imagery than those retrieved during the free-gaze condition. Memories in the maintained fixation condition were also retrieved more slowly and took less time to describe than those in the free-gaze condition. As for the characteristics of eye movements, analysis showed fewer and longer fixations, as well as fewer saccades, in the maintained fixation condition than in the free-gaze condition. Maintaining fixation is likely to tax cognitive resources that are necessary for the reconstruction of autobiographical memory. Our findings demonstrate how maintained fixation may result in a more effortful construction of autobiographical memory and in memories with lower spatiotemporal specificity and poorer mental images.


Optometry and Vision Science | 2015

Categorization Task over a Touch Screen in Age-Related Macular Degeneration.

Quentin Lenoble; Thi Ha Chau Tran; Sébastien Szaffarczyk; Muriel Boucart

Purpose: In our modern society, many touch screen applications require hand-eye coordination to associate an icon with its specific contextual unit on phones, on computers, or in public transport. We assessed the ability of patients with age-related macular degeneration (AMD) to explore scenes and to associate a target (animal or object) with a unique congruent scene (e.g., to match a fish with the sea) presented among three distractors on a touch screen computer. Methods: Twenty-four patients with AMD (64 to 90 years) with best-corrected visual acuity between 20/40 and 20/400, as well as 17 age-matched (60 to 94 years) and 15 young (22 to 34 years) participants with normal visual acuity, had to match a target with a congruent scene by moving their index finger on a 22-inch touch screen. Results: Patients were as accurate (98.7% correct responses) as the age-matched controls (98.9% correct responses) and young participants (99.3% correct responses) at performing the task. The duration of exploration was significantly longer for the AMD patients (mean, 4.13 seconds) compared with the age-matched group (mean, 2.96 seconds). The young participants were also significantly faster than the old group (mean, 0.93 seconds). The movement parameters of the older participants (patients and old control subjects) were affected relative to the young group: peak speed decreased (−8 cm/s) and movement duration increased (+0.9 seconds) with age. Conclusions: People with AMD are able to perform a contextual association task on a touch screen with high accuracy. The AMD patients were specifically affected in the “exploration” phase; their accuracy and movement parameters did not differ from those of the old control group. Our study suggests that the decline associated with AMD is more focused on the duration of exploration than on movement parameters in touch screen use.


Dementia and geriatric cognitive disorders extra | 2015

Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

Quentin Lenoble; Giovanna Bubbico; Sébastien Szaffarczyk; Florence Pasquier; Muriel Boucart


Cortex | 2017

Eying the future: Eye movement in past and future thinking

Mohamad El Haj; Quentin Lenoble

Collaboration


Dive into Quentin Lenoble's collaborations.

Top Co-Authors


Muriel Boucart

French Institute of Health and Medical Research
