
Publication


Featured research published by Nathalie Guyader.


International Journal of Computer Vision | 2009

Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos

Sophie Marat; Tien Ho-Phuoc; Lionel Granjon; Nathalie Guyader; Denis Pellerin; Anne Guérin-Dugué

This paper presents a spatio-temporal saliency model that predicts eye movements during the free viewing of videos. The model is inspired by the biology of the first steps of the human visual system. It extracts two signals from the video stream, corresponding to the two main outputs of the retina: parvocellular and magnocellular. Both signals are then split into elementary feature maps by cortical-like filters. These feature maps are used to form two saliency maps, a static one and a dynamic one, which are then fused into a spatio-temporal saliency map. The model is evaluated by comparing the salient areas of each frame, as predicted by the spatio-temporal saliency map, to the eye positions of different subjects during a free-viewing experiment on a large video database (17,000 frames). In parallel, the static and dynamic pathways are analyzed to understand what is more or less salient and for which types of videos the model is a good or a poor predictor of eye movements.
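
As a concrete illustration of the pipeline described above, here is a minimal Python sketch of the static/dynamic split and fusion. It assumes grayscale frames as NumPy float arrays; the band-pass filter stands in for the retinal and cortical-like filter bank, frame differencing stands in for the motion pathway, and the fusion rule is a simplified stand-in for the paper's statistics-based fusion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def static_map(frame):
    """Band-pass energy: a stand-in for the parvocellular/cortical pathway."""
    bandpass = gaussian_filter(frame, 1.0) - gaussian_filter(frame, 4.0)
    return np.abs(bandpass)

def dynamic_map(prev_frame, frame):
    """Motion amplitude approximated by smoothed frame differencing
    (a stand-in for the magnocellular pathway)."""
    return gaussian_filter(np.abs(frame - prev_frame), 2.0)

def fuse(static, dynamic, eps=1e-8):
    """Simplified fusion: normalize each map, then combine additively
    plus a reinforcement term where both pathways agree."""
    s = static / (static.max() + eps)
    d = dynamic / (dynamic.max() + eps)
    return s + d + s * d
```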


Journal of Vision | 2014

How saliency, faces, and sound influence gaze in dynamic social scenes

Antoine Coutrot; Nathalie Guyader

Conversation scenes are a typical example in which classical models of visual attention dramatically fail to predict eye positions. Indeed, these models rarely consider faces as particular gaze attractors and never take into account the important auditory information that always accompanies dynamic social scenes. We recorded the eye movements of participants viewing dynamic conversations taking place in various contexts. Conversations were seen either with their original soundtracks or with unrelated soundtracks (unrelated speech and abrupt or continuous natural sounds). First, we analyze how auditory conditions influence the eye movement parameters of participants. Then, we model the probability distribution of eye positions across each video frame with a statistical method (Expectation-Maximization), allowing the relative contribution of different visual features such as static low-level visual saliency (based on luminance contrast), dynamic low-level visual saliency (based on motion amplitude), faces, and center bias to be quantified. Through experimental and modeling results, we show that regardless of the auditory condition, participants look more at faces, and especially at talking faces. Hearing the original soundtrack makes participants follow the speech turn-taking more closely. However, we do not find any difference between the different types of unrelated soundtracks. These eye-tracking results are confirmed by our model, which shows that faces, and particularly talking faces, are the features that best explain the recorded gazes, especially in the original soundtrack condition. Low-level saliency is not a relevant feature to explain eye positions on social scenes, even dynamic ones. Finally, we propose groundwork for an audiovisual saliency model.
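
The Expectation-Maximization step can be illustrated compactly: treat each candidate feature map (static saliency, dynamic saliency, faces, center bias) as a fixed mixture component over the frame and learn only the mixing weights from the recorded eye positions. This is a minimal sketch under that assumption, not the authors' code; the names are illustrative.

```python
import numpy as np

def em_feature_weights(feature_maps, fixations, n_iter=100):
    """feature_maps: (K, H, W) array, each map summing to 1 (a probability
    distribution over the frame). fixations: (N, 2) integer (row, col)
    eye positions recorded on that frame."""
    K = feature_maps.shape[0]
    # Likelihood of each fixation under each feature map: shape (K, N).
    probs = np.stack([fm[fixations[:, 0], fixations[:, 1]]
                      for fm in feature_maps])
    weights = np.full(K, 1.0 / K)  # start from uniform mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each feature for each fixation.
        resp = weights[:, None] * probs
        resp /= resp.sum(axis=0, keepdims=True) + 1e-12
        # M-step: the new weight of a feature is its mean responsibility.
        weights = resp.mean(axis=1)
    return weights  # relative contribution of each feature
```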


Cognitive Computation | 2013

Improving Visual Saliency by Adding ‘Face Feature Map’ and ‘Center Bias’

Sophie Marat; Anis Rahman; Denis Pellerin; Nathalie Guyader; Dominique Houzet

Faces play an important role in guiding visual attention, and thus the inclusion of face detection in a classical visual attention model can improve eye movement predictions. In this study, we proposed a visual saliency model to predict eye movements during the free viewing of videos. The model is inspired by the biology of the visual system and breaks down each frame of a video database into three saliency maps, each earmarked for a particular visual feature. (a) A ‘static’ saliency map emphasizes regions that differ from their context in terms of luminance, orientation, and spatial frequency. (b) A ‘dynamic’ saliency map emphasizes moving regions, with values proportional to motion amplitude. (c) A ‘face’ saliency map emphasizes areas where a face is detected, with values proportional to the confidence of the detection. In parallel, a behavioral experiment was carried out to record the eye movements of participants viewing the videos. These eye movements were compared with the model’s saliency maps to quantify their efficiency. We also examined the influence of center bias on the saliency maps and incorporated it into the model in a suitable way. Finally, we proposed an efficient method for fusing all these saliency maps. The resulting fused master saliency map is a good predictor of participants’ eye positions.
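
A center-bias map is commonly modeled as a 2D Gaussian centered on the frame. The sketch below shows one plausible way to build such a map and fold it into the fusion; the paper's actual fusion weights and the point at which the bias is introduced differ, and all names here are illustrative.

```python
import numpy as np

def center_bias(h, w, sigma_frac=0.25):
    """2D Gaussian centered on the frame, a common center-bias model."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    return np.exp(-0.5 * (((ys - cy) / sy) ** 2 + ((xs - cx) / sx) ** 2))

def fuse_with_faces(static, dynamic, faces):
    """Illustrative fusion of the three feature maps, modulated by the
    center bias (the paper studies where best to introduce the bias)."""
    h, w = static.shape
    master = static + dynamic + faces
    return master * center_bias(h, w)
```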


Cognitive Computation | 2010

A Functional and Statistical Bottom-Up Saliency Model to Reveal the Relative Contributions of Low-Level Visual Guiding Factors

Tien Ho-Phuoc; Nathalie Guyader; Anne Guérin-Dugué

When looking at a scene, we frequently move our eyes to place consecutive interesting regions on the fovea, the centre of the retina. At each fixation, only this foveal region is analysed in detail by the visual system. Visual attention mechanisms control eye movements and depend on two types of factors: bottom-up and top-down. Bottom-up factors include visual features such as colour, luminance, edges, and orientations. In this paper, we quantitatively evaluate the relative contribution of basic low-level features as candidate guiding factors to visual attention, and hence to eye movements. We also study how these visual features can be combined in a bottom-up saliency model. Our work consists of three interacting parts: a functional saliency model, a statistical model, and eye movement data recorded during the free viewing of natural scenes. The functional saliency model, inspired by the primate visual system, decomposes a visual scene into different feature maps. The statistical model indicates which features best explain the recorded eye movements. We show an essential role of high-frequency luminance and an important contribution of the central fixation bias. The relative contributions of the features, calculated by the statistical model, are then used to combine the different feature maps into a saliency map. Finally, the comparison between the saliency model and the experimental data confirms the influence of these contributions.
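
One standard way to estimate such relative contributions, shown below as a hedged sketch rather than the paper's exact statistical model, is to regress fixated versus random control locations on the feature-map values; larger coefficients indicate stronger guiding factors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def feature_contributions(feature_maps, fixations, seed=0):
    """feature_maps: (K, H, W); fixations: (N, 2) (row, col) positions."""
    K, h, w = feature_maps.shape
    n = len(fixations)
    rng = np.random.default_rng(seed)
    # Random control locations play the role of "non-fixated" points.
    controls = np.column_stack([rng.integers(0, h, n), rng.integers(0, w, n)])
    pts = np.vstack([fixations, controls])
    X = np.stack([fm[pts[:, 0], pts[:, 1]] for fm in feature_maps], axis=1)
    y = np.r_[np.ones(n), np.zeros(n)]  # 1 = fixated, 0 = control
    model = LogisticRegression().fit(X, y)
    return model.coef_[0]  # larger coefficient = stronger guiding factor
```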


Brain and Cognition | 2005

The coarse-to-fine hypothesis revisited: Evidence from neuro-computational modeling

Martial Mermillod; Nathalie Guyader; Alan Chauvin

The human perceptual system seems to be driven by a coarse-to-fine integration of visual information. Various results have shown a faster integration of low-spatial-frequency (LSF) information compared with high-spatial-frequency (HSF) information, starting at early retinal processes. This difference in spatial scale decomposition persists throughout the lateral geniculate nucleus (Hubel & Wiesel, 1977) and V1 (Tootell, Silverman, & De Valois, 1981). During the last decade, a debate has emerged concerning the origin of coarse-to-fine integration. Is it a constant, perceptually driven integration (Parker et al., 1992; Parker et al., 1996)? Alternatively, the flexible-use hypothesis suggests that different spatial frequency channels could be enhanced depending on the requirements of the task for high-level cognitive processes such as categorization (Oliva & Schyns, 1997; Schyns & Oliva, 1999). In two connectionist simulations, we show that global categorization could actually be performed better with HSF information when the amount of information is normalized across the different spatial frequency channels. These results suggest that high-level requirements alone cannot explain the coarse-to-fine bias toward LSF information. We propose the hypothesis that the amount of data provided by the different spatial frequency channels might underlie the perceptual bias toward LSF information.
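
The normalization the simulations rest on can be sketched as follows: decompose an image into a coarse and a fine channel, then rescale both to the same RMS energy, so that neither channel carries more signal simply by carrying more contrast. The difference-of-Gaussians split and the parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def equalized_channels(image, sigma=4.0, target_rms=0.2):
    """Split an image into coarse (LSF) and fine (HSF) channels and
    rescale both to the same RMS energy."""
    lsf = gaussian_filter(image, sigma)   # coarse channel
    hsf = image - lsf                     # fine channel (residual)
    def to_rms(channel):
        c = channel - channel.mean()      # energy = contrast, not mean level
        return c * target_rms / (c.std() + 1e-8)
    return to_rms(lsf), to_rms(hsf)
```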


NeuroImage | 2015

Spatial frequency processing in scene-selective cortical regions

Louise Kauffmann; Stephen Ramanoël; Nathalie Guyader; Alan Chauvin; Carole Peyrin

Visual analysis begins with the parallel extraction of different attributes at different spatial frequencies. Low spatial frequencies (LSF) convey coarse information and are characterized by high luminance contrast, while high spatial frequencies (HSF) convey fine details and are characterized by low luminance contrast. In the present fMRI study, we examined how scene-selective regions (the parahippocampal place area, PPA; the retrosplenial cortex, RSC; and the occipital place area, OPA) responded to spatial frequencies when contrast was either equalized or not equalized across spatial frequencies. Participants performed a categorization task on LSF, HSF, and non-filtered (NF) scenes belonging to two different categories (indoors and outdoors). We either left the contrast of the scenes untouched or equalized it using root-mean-square contrast normalization. We found that when contrast remained unmodified, LSF and NF scenes elicited greater activation than HSF scenes in the PPA. However, when contrast was equalized across spatial frequencies, the PPA was selective to HSF. This suggests that PPA activity relies on an interaction between spatial frequency and contrast in scenes. In the RSC, LSF and NF scenes elicited greater responses than HSF scenes when contrast was not modified, while no effect of spatial frequency appeared when contrast was equalized across filtered scenes, suggesting that the RSC is sensitive to high-contrast information. Finally, we observed selective activation of the OPA in response to HSF, irrespective of contrast manipulation. These results provide new insights into how scene-selective areas operate during scene processing.
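
Root-mean-square contrast equalization of the kind used here is straightforward to express. A minimal sketch, assuming grayscale images with luminance in [0, 1] and an illustrative target contrast:

```python
import numpy as np

def equalize_rms_contrast(image, target_rms=0.2):
    """Rescale luminance deviations so every image has the same RMS
    contrast (standard deviation of luminance), preserving the mean."""
    mean = image.mean()
    out = (image - mean) * (target_rms / (image.std() + 1e-8)) + mean
    return np.clip(out, 0.0, 1.0)
```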


Brain Research | 2006

Neural correlates of spatial frequency processing: A neuropsychological approach.

Carole Peyrin; Sylvie Chokron; Nathalie Guyader; Olivier Gout; Jacques Moret; Christian Marendaz

We examined the neural correlates of spatial frequency (SF) processing through gender-based and neuropsychological approaches, using a recognition task on natural scene images filtered in either low spatial frequencies (LSF) or high spatial frequencies (HSF). Experiment 1 provides evidence for hemispheric specialization in SF processing in men (the right hemisphere is predominantly involved in LSF analysis and the left in HSF analysis) but not in women. Experiment 2 investigates the role of the right occipito-temporal cortex in LSF processing in a female neurological patient who had a focal lesion of this region due to the embolization of an arteriovenous malformation. This study was conducted 1 week before and 6 months after the surgical intervention. As expected, after the embolization, LSF scene recognition was more impaired than HSF scene recognition. These data support the hypothesis that the right occipito-temporal cortex might be preferentially specialized for LSF information processing and, more generally, suggest a hemispheric specialization in SF processing in females, although this is difficult to demonstrate in healthy women.


Journal of Real-time Image Processing | 2011

Parallel implementation of a spatio-temporal visual saliency model

Anis Rahman; Dominique Houzet; Denis Pellerin; Sophie Marat; Nathalie Guyader

Human vision has been studied extensively in recent years, and several models have been proposed to simulate it on computers. Some of these models concern visual saliency, which is potentially useful in many applications such as robotics, image analysis, compression, and video indexing. Unfortunately, such models are compute-intensive, while their applications often have tight real-time requirements. Among the existing models, we chose a spatio-temporal one that combines static and dynamic information. In this paper, we propose a highly efficient multi-GPU implementation of this model that reaches real-time performance. We present the algorithms of the model as well as several parallel optimizations on the GPU, with results on precision and execution time. The real-time execution of this multi-pathway model on multiple GPUs makes it a powerful tool for many vision-related applications.
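
The parallelization benefits come from the per-pixel, filter-heavy structure of the saliency maps. As a hedged sketch of the idea, using CuPy as a stand-in for the paper's hand-written multi-GPU CUDA kernels (which it is not):

```python
import cupy as cp
from cupyx.scipy.ndimage import gaussian_filter

def static_map_gpu(frame_cpu):
    """Band-pass static saliency computed on the GPU; the filtering is
    embarrassingly parallel, which is what the multi-GPU version exploits."""
    frame = cp.asarray(frame_cpu)                    # host -> device copy
    bandpass = gaussian_filter(frame, 1.0) - gaussian_filter(frame, 4.0)
    sal = cp.abs(bandpass)
    return cp.asnumpy(sal / (sal.max() + 1e-8))      # device -> host copy
```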


Journal of Cognitive Neuroscience | 2014

Coarse-to-fine categorization of visual scenes in scene-selective cortex

Benoit Musel; Louise Kauffmann; Stephen Ramanoël; Coralie Giavarini; Nathalie Guyader; Alan Chauvin; Carole Peyrin

Neurophysiological, behavioral, and computational data indicate that visual analysis may start with the parallel extraction of different elementary attributes at different spatial frequencies and follows a predominantly coarse-to-fine (CtF) processing sequence (low spatial frequencies [LSF] are extracted first, followed by high spatial frequencies [HSF]). Evidence for CtF processing within scene-selective cortical regions is, however, still lacking. In the present fMRI study, we tested whether such processing occurs in three scene-selective cortical regions: the parahippocampal place area (PPA), the retrosplenial cortex, and the occipital place area. Fourteen participants underwent functional scans during which they performed an indoor versus outdoor categorization task on dynamic scene stimuli. Dynamic scenes were composed of six filtered images of the same scene, ordered from LSF to HSF or from HSF to LSF, allowing us to mimic a CtF or the reverse fine-to-coarse (FtC) sequence. Results showed that only the PPA was more activated for CtF than for FtC sequences. Equivalent activations were observed for both sequences in the retrosplenial cortex and the occipital place area. This study suggests for the first time that CtF processing constitutes the predominant strategy for scene categorization in the PPA.
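
The dynamic stimuli can be approximated with a shrinking Gaussian low-pass cutoff; the sketch below builds a six-frame coarse-to-fine sequence and its fine-to-coarse reverse. The specific sigma values are illustrative, not the paper's filter parameters.

```python
from scipy.ndimage import gaussian_filter

def ctf_sequence(scene, sigmas=(16, 8, 4, 2, 1, 0)):
    """Six versions of one scene, from heavily blurred (LSF only) to
    intact (HSF included); reverse the list for fine-to-coarse."""
    return [gaussian_filter(scene, s) if s > 0 else scene for s in sigmas]

# ftc = ctf_sequence(scene)[::-1]  # the reverse, fine-to-coarse sequence
```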


Vision Research | 2015

Rapid scene categorization: Role of spatial frequency order, accumulation mode and luminance contrast

Louise Kauffmann; Alan Chauvin; Nathalie Guyader; Carole Peyrin

Visual analysis follows a default, predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) are processed more rapidly than high spatial frequencies (HSF), allowing an initial coarse parsing of visual input prior to the analysis of finer information. Our study investigated the influence on rapid scene categorization of spatial frequency processing order, accumulation mode (i.e., how spatial frequency information is delivered as input to the visual system over the course of processing), and differences in luminance contrast between spatial frequencies. In Experiment 1, we used sequences composed of six filtered scenes, assembled from LSF to HSF (coarse-to-fine) or from HSF to LSF (fine-to-coarse), to test the effects of spatial frequency order. Spatial frequencies were either successive or additive within sequences, to test the effects of spatial frequency accumulation mode. Results showed that participants categorized coarse-to-fine sequences more rapidly than fine-to-coarse sequences, irrespective of the accumulation mode. In Experiment 2, we investigated the extent to which differences in luminance contrast, rather than in spatial frequency, account for the advantage of coarse-to-fine over fine-to-coarse processing. Results showed that both spatial frequency and luminance contrast contribute to the predominantly coarse-to-fine processing, but that the coarse-to-fine advantage stems mainly from differences in spatial frequency. Our study cautions against the use of contrast normalization in studies investigating spatial frequency processing: this type of experimental manipulation can impair the intrinsic properties of a visual stimulus, and since the visual system relies on these properties to enable recognition, it may bias strategies of visual analysis.
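
The two accumulation modes can be made concrete with a small sketch: in the successive mode, each frame presents one spatial frequency band alone; in the additive mode, bands accumulate across frames until the full scene is reconstructed. The difference-of-Gaussians band extraction is an assumption, not the paper's filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sf_bands(scene, sigmas=(16, 8, 4, 2, 1)):
    """Coarse-to-fine decomposition: the coarsest low-pass image, then the
    successive detail bands; reverse the list for fine-to-coarse."""
    low = [gaussian_filter(scene, s) for s in sigmas] + [scene]
    return [low[0]] + [low[i + 1] - low[i] for i in range(len(sigmas))]

def successive(bands):
    return bands  # each frame shows only the newly added band

def additive(bands):
    # Bands pile up from frame to frame, ending with the full scene.
    return list(np.cumsum(np.stack(bands), axis=0))
```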

Collaboration


Dive into Nathalie Guyader's collaborations.

Top Co-Authors

Alan Chauvin (Centre national de la recherche scientifique)
Carole Peyrin (Centre national de la recherche scientifique)
Anne Guérin-Dugué (Centre national de la recherche scientifique)
Christian Marendaz (Centre national de la recherche scientifique)
Louise Kauffmann (Centre national de la recherche scientifique)
Denis Pellerin (Centre national de la recherche scientifique)
Martial Mermillod (Centre national de la recherche scientifique)
Antoine Coutrot (University College London)
Benoit Musel (Centre national de la recherche scientifique)
Dominique Houzet (Centre national de la recherche scientifique)