Antoine Coutrot
University College London
Publications
Featured research published by Antoine Coutrot.
Vision Research | 2016
Olivier Le Meur; Antoine Coutrot
Previous research showed the existence of systematic tendencies in viewing behavior during scene exploration. For instance, saccades are known to follow a positively skewed, long-tailed distribution, and to be more frequently initiated in the horizontal or vertical directions. In this study, we hypothesize that these viewing biases are not universal, but are modulated by the semantic visual category of the stimulus. We show that the joint distribution of saccade amplitudes and orientations varies significantly from one visual category to another. In addition, these joint distributions are spatially variant within the scene frame. We demonstrate that a saliency model based on this better understanding of viewing biases, yet blind to any visual information, outperforms well-established saliency models. We also propose a saccadic model that takes into account classical low-level features as well as spatially variant and context-dependent viewing biases. This model outperforms state-of-the-art saliency models and provides scanpaths in close agreement with human behavior. This better description of viewing biases will not only improve current models of visual attention but could also benefit many other applications, such as the design of human-computer interfaces, patient diagnosis, and image/video processing.
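The core generative idea above can be sketched in a few lines: draw each saccade from a joint amplitude/orientation distribution and accumulate fixations into a scanpath. This is a minimal, spatially invariant illustration with invented toy numbers, not the authors' actual model; the long-tailed amplitude profile and horizontal bias are hand-crafted assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scanpath(joint_pmf, amplitudes, orientations, n_fixations=10,
                    start=(0.5, 0.5)):
    """Generate a scanpath by drawing saccades from a joint
    amplitude/orientation distribution (simplified viewing-bias sketch)."""
    flat = joint_pmf.ravel() / joint_pmf.sum()
    x, y = start
    path = [(x, y)]
    for _ in range(n_fixations - 1):
        idx = rng.choice(flat.size, p=flat)
        i, j = np.unravel_index(idx, joint_pmf.shape)
        amp, theta = amplitudes[i], orientations[j]
        # Displace the gaze and keep it inside the (normalized) image frame.
        x = float(np.clip(x + amp * np.cos(theta), 0.0, 1.0))
        y = float(np.clip(y + amp * np.sin(theta), 0.0, 1.0))
        path.append((x, y))
    return np.array(path)

# Toy category-dependent bias: long-tailed amplitudes, horizontal dominance.
amps = np.linspace(0.05, 0.4, 8)                   # amplitudes in image widths
oris = np.linspace(0, 2 * np.pi, 16, endpoint=False)
pmf = np.outer(np.exp(-5 * amps), 1 + 0.8 * np.cos(2 * oris))
scanpath = sample_scanpath(pmf, amps, oris)
```

A category-specific model would simply swap in a `pmf` estimated from eye-tracking data for that visual category, and a spatially variant version would condition `pmf` on the current fixation location.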
Royal Society Open Science | 2016
Nicola Binetti; Charlotte Harrison; Antoine Coutrot; Alan Johnston; Isabelle Mareschal
Most animals look at each other to signal threat or interest. In humans, this social interaction is usually punctuated with brief periods of mutual eye contact. Deviations from this pattern of gazing behaviour generally make us feel uncomfortable and are a defining characteristic of clinical conditions such as autism or schizophrenia, yet it is unclear what constitutes normal eye contact. Here, we measured, across a wide range of ages, cultures and personality types, the period of direct gaze that feels comfortable and examined whether autonomic factors linked to arousal were indicative of people's preferred amount of eye contact. Surprisingly, we find that preferred period of gaze duration is not dependent on fundamental characteristics such as gender, personality traits or attractiveness. However, we do find that subtle pupillary changes, indicative of physiological arousal, correlate with the amount of eye contact people find comfortable. Specifically, people preferring longer durations of eye contact display faster increases in pupil size when viewing another person than those preferring shorter durations. These results reveal that a person's preferred duration of eye contact is signalled by physiological indices (pupil dilation) beyond volitional control that may play a modulatory role in gaze behaviour.
Behavior Research Methods | 2018
Antoine Coutrot; Janet Hui-wen Hsiao; Antoni B. Chan
How people look at visual information reveals fundamental information about them: their interests and their state of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural scene images, and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
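The classification principle can be illustrated with a tiny discrete HMM: score a sequence of fixated regions of interest under one HMM per class and pick the class with the highest likelihood. Everything below (the two transition matrices, the emission matrix, the ROI sequence, the class names "task1"/"task2") is an invented toy example, far simpler than the variational HMMs of the SMAC toolbox; it only shows the forward-algorithm scoring step.

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling for numerical stability)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

# Two toy 2-state HMMs over 3 regions of interest (all numbers invented).
pi = np.array([0.6, 0.4])
A_task1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # "sticky" gaze: long dwells
A_task2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # frequent state switching
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])

scanpath = np.array([0, 0, 0, 1, 0, 0, 2, 2])  # ROI indices of successive fixations
scores = {name: hmm_loglik(scanpath, pi, A, B)
          for name, A in [("task1", A_task1), ("task2", A_task2)]}
predicted = max(scores, key=scores.get)        # classify by maximum likelihood
```

In the paper's pipeline the HMM parameters are learned from data and discriminant analysis operates on the fitted models; here the fixed matrices merely stand in for those learned class models.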
bioRxiv | 2018
Gillian Coughlan; Antoine Coutrot; Mizanur Khondoker; Anne Marie Minihane; Hugo J. Spiers; Michael Hornberger
INTRODUCTION Spatial navigation is emerging as a critical factor in identifying pre-symptomatic Alzheimer pathophysiology, with the impact of sex and APOE status on spatial navigation yet to be established. METHODS We estimate the effects of sex on navigation performance in 27,308 individuals (50-70 years [benchmark population]) by employing a novel game-based approach to cognitive assessment using Sea Hero Quest. The effects of APOE genotype and sex on game performance were further examined in a smaller lab-based cohort (n = 44). RESULTS Benchmark data showed an effect of sex on wayfinding distance, duration and path integration. Importantly, in the lab cohort, performance on allocentric wayfinding levels was reduced in ε4 carriers compared to ε3 carriers, and the effect of sex became negligible when APOE status was controlled for. To demonstrate the robustness of this effect and to ensure the quality of data obtained through unmonitored at-home use of the Sea Hero Quest game, post-hoc analysis was carried out to compare performance by the benchmark population to the monitored lab cohort. DISCUSSION APOE ε4 midlife carriers exhibit changes in navigation pattern before any symptom onset. This supports the move towards spatial navigation as an early cognitive marker and demonstrates for the first time how the utility of large-scale digital cognitive assessment may hold future promise for the early detection of Alzheimer's disease. Finally, benchmark findings suggest that gender differences may need to be considered when determining the classification criteria for spatial navigational deficits in midlife adults.
Journal of Experimental Child Psychology | 2018
Andrew T. Rider; Antoine Coutrot; Elizabeth Pellicano; Steven C. Dakin; Isabelle Mareschal
Highlights
• Content outweighs saliency when watching films.
• Variance between eye movements decreases with age.
• Gaze patterns show similar (qualitative) strategies across development.
Current Biology | 2018
Antoine Coutrot; Ricardo Silva; Ed Manley; Will de Cothi; Saber Sami; Véronique D. Bohbot; Jan M. Wiener; Christoph Hölscher; Ruth Dalton; Michael Hornberger; Hugo J. Spiers
Human spatial ability is modulated by a number of factors, including age [1-3] and gender [4, 5]. Although a few studies showed that culture influences cognitive strategies [6-13], the interaction between these factors has never been globally assessed as this requires testing millions of people of all ages across many different countries in the world. Since countries vary in their geographical and cultural properties, we predicted that these variations give rise to an organized spatial distribution of cognition at a planetary-wide scale. To test this hypothesis, we developed a mobile-app-based cognitive task, measuring non-verbal spatial navigation ability in more than 2.5 million people and sampling populations in every nation state. We focused on spatial navigation due to its universal requirement across cultures. Using a clustering approach, we find that navigation ability is clustered into five distinct, yet geographically related, groups of countries. Specifically, the economic wealth of a nation was predictive of the average navigation ability of its inhabitants, and gender inequality was predictive of the size of performance difference between males and females. Thus, cognitive abilities, at least for spatial navigation, are clustered according to economic wealth and gender inequalities globally, which has significant implications for cross-cultural studies and multi-center clinical trials using cognitive testing.
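The clustering step described above can be sketched with a minimal one-dimensional k-means over per-country mean scores. This is a schematic with invented scores and k = 3 for brevity (the paper finds five clusters over richer, multi-metric data); it only illustrates grouping countries by average navigation performance.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_1d(x, k, iters=50):
    """Minimal k-means on 1-D per-country mean navigation scores."""
    centers = np.sort(rng.choice(x, size=k, replace=False))
    for _ in range(iters):
        # Assign each country to its nearest cluster center.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned countries.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

# Hypothetical mean wayfinding scores for ten countries (values invented).
scores = np.array([0.42, 0.45, 0.61, 0.64, 0.66, 0.80, 0.83, 0.85, 0.30, 0.90])
labels, centers = kmeans_1d(scores, k=3)
```

A follow-up step in the spirit of the paper would regress `centers` (or the raw scores) against national indicators such as GDP per capita or a gender-inequality index.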
IEEE Transactions on Image Processing | 2017
Olivier Le Meur; Antoine Coutrot; Zhi Liu; Pia Rämä; Adrien Le Roch; Andrea Helo
How people look at visual information reveals fundamental information about themselves, their interests and their state of mind. While previous visual attention models output static 2D saliency maps, saccadic models aim to predict not only where observers look but also how they move their eyes to explore the scene. In this paper, we demonstrate that saccadic models are a flexible framework that can be tailored to emulate observers' viewing tendencies. More specifically, we use fixation data from 101 observers split into five age groups (adults, 8–10 y.o., 6–8 y.o., 4–6 y.o., and 2 y.o.) to train our saccadic model for different stages of the development of the human visual system. We show that the joint distribution of saccade amplitude and orientation is a visual signature specific to each age group, and can be used to generate age-dependent scanpaths. Our age-dependent saccadic model not only outputs human-like, age-specific visual scanpaths, but also significantly outperforms other state-of-the-art saliency models. We demonstrate that the computational modeling of visual attention, through the use of saccadic models, can be efficiently adapted to emulate the gaze behavior of a specific group of observers.
Electronic Imaging | 2016
Olivier Le Meur; Antoine Coutrot
In this paper, we present saccadic models, which are an alternative way to predict where observers look. Compared to saliency models, saccadic models generate plausible visual scanpaths, from which saliency maps can be computed. In addition, these models have the advantage of being adaptable to different viewing conditions, viewing tasks and types of visual scene. We demonstrate that saccadic models perform better than existing saliency models for predicting where an observer looks in both the free-viewing condition and the quality-task condition (i.e., when observers have to score the quality of an image). For that purpose, the joint distributions of saccade amplitudes and orientations in both conditions (i.e., free viewing and quality task) were estimated from eye-tracking data. Thanks to saccadic models, we hope to improve upon the performance of saliency-based quality metrics, and more generally the capacity to predict where we look within visual scenes when performing visual tasks.
Journal of Vision | 2015
Antoine Coutrot; Nathalie Guyader
The great ability of the human visual system to classify natural scenes has long been known. For instance, some objects (faces, animals) can be spotted in less time than the duration of a single fixation. Many studies also show that exploration strategies depend on many different high- and low-level features. However, the link between natural scene classification and eye movement parameters has rarely been explored. In this study, we built a database of 45 videos split into three visual categories: Landscapes (forest, meadow, seashore, etc.), Moving Objects (cars, planes, chain reactions, etc.), and Faces (one to four persons having a conversation). These categories were chosen because of the wide variety of regions of interest they contain. We tracked the eyes of 72 participants watching these videos using an EyeLink 1000 (SR Research). First, univariate analyses show that visual categories substantially influence visual exploration, modify eye movement parameters (fixation duration, saccade amplitude, saccade direction), and affect fixation locations (mean distance to centre, mean dispersion between the eye positions). Second, multivariate analyses were performed via Linear Discriminant Analysis (LDA) on the latter variables. The resulting vectors were used as a linear classifier. On the eye movements recorded with our stimuli, the accuracy of such a classifier reaches 86.7%. The contributions of this study are twofold. First, we quantified the deep influence that the visual category of a natural scene has on eye movement parameters. As a consequence of these influences, some eye-tracking results obtained using stimuli belonging to a given visual category may not be generalised to other categories. Second, we showed that simple eye movement parameters are good predictors of the explored visual category. This has numerous applications in computer vision, including saliency-based compression algorithms adapted to the content of the scene.
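The LDA-based classification described above can be sketched as a two-class Fisher discriminant on per-trial gaze descriptors. The feature names, means, and spreads below are invented synthetic stand-ins (the study uses more descriptors and three categories); the sketch only shows how a linear discriminant separates categories from simple eye movement parameters.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant on gaze descriptors
    (e.g. fixation duration, saccade amplitude)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class scatter: pooled (unnormalized) covariance of both classes.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    thresh = w @ (m0 + m1) / 2        # midpoint decision threshold
    return w, thresh

rng = np.random.default_rng(0)
# Synthetic descriptors per trial: [fixation duration (s), saccade amplitude (deg)].
faces = rng.normal([0.35, 3.0], 0.05, size=(50, 2))       # longer fixations, short saccades
landscapes = rng.normal([0.25, 6.0], 0.05, size=(50, 2))  # shorter fixations, long saccades
w, thresh = fisher_lda(faces, landscapes)
accuracy = (landscapes @ w > thresh).mean()  # fraction of landscape trials classified correctly
```

With well-separated synthetic classes the discriminant recovers the category almost perfectly; real gaze data is noisier, which is consistent with the reported 86.7% rather than ceiling accuracy.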
Meeting abstract presented at VSS 2015.
Schizophrenia Bulletin | 2018
Lilla Porffy; Rebekah Wigton; Antoine Coutrot; Daniel Joyce; Isabelle Mareschal; Sukhi Shergill
Abstract Background Deficits in social cognition often develop during the prodromal stages of psychosis, remain stable over the course of the illness, and have a dramatic impact on daily functioning (Fett et al., 2011). Social cue processing, particularly face perception, plays a critical role in social cognitive functioning. Patients with schizophrenia struggle to extract information from faces and interpret facial expressions (Kohler et al., 2010). These deficits may be explained by restricted visual attention. Indeed, eye-tracking studies have demonstrated that people with schizophrenia show reduced exploratory behaviour (i.e. reduced number of fixations and longer fixation durations) in response to facial stimuli compared to healthy controls (e.g. Manor et al., 1999). Oxytocin has been demonstrated to exert pro-social effects on behaviour and modulate eye gaze during perception of faces. In the present study, we tested whether the neuropeptide, oxytocin, has a compensatory effect on visual processing of human faces. Methods Twenty right-handed male subjects with schizophrenia (n = 16) or schizoaffective disorder (n = 4) were administered intranasal oxytocin 40 IU or placebo in a double-blind, placebo-controlled, cross-over fashion during two visits separated by 7 days. Participants engaged in a free-viewing eye-tracking task, during which they viewed 6 facial images of two Caucasian men displaying angry, happy, and neutral facial expressions, and 6 control images in a random order. Eye-tracking measures including 1) total number of fixations, 2) dispersion, 3) saccade amplitude, and 4) mean duration of fixations were captured using the EyeLink 1000 system (SR Research Ltd, Ottawa, Ontario, Canada). Four separate 2 x 4 repeated-measures analyses of variance (ANOVAs) were carried out to evaluate the within-subject effects of treatment, stimuli, and the interactions between stimuli and treatment (p < .05, two-tailed).
Results We found a main effect of treatment (F1,17 = 16.139, p = .001), but not a main effect of stimuli (F3,51 = 1.479, p > .231) on total number of fixations. There was a main effect of treatment on duration of fixation (F1,13 = 5.455, p = .036) but not a main effect of stimuli (F3,39 = 1.267, p = .299). For dispersion, there was a significant main effect of stimuli (F3,51 = 3.424, p = .024) but no main effect of treatment (F1,17 = 3.170, p = .093). Analysis of saccade amplitudes revealed no main effect of treatment (F1,17 = 2.666, p = .121) or stimuli (F3,51 = 0.289, p = .833). None of the interactions reached significance. Discussion To our knowledge, this is the first study to explore the effects of oxytocin on eye movements in individuals with schizophrenia. We found that oxytocin increased exploratory viewing behaviour in response to affective facial stimuli by significantly increasing the total number and duration of fixations compared to placebo. While previous findings regarding oxytocin have been inconsistent, our findings are in line with research showing that the intranasal administration of 40 IU oxytocin may improve social cognitive deficits in schizophrenia (e.g. Davis et al., 2013). Future experiments may wish to explore the correlation between eye movement changes induced by oxytocin and facial affect recognition in larger samples.