Filipe Cristino
Bangor University
Publications
Featured research published by Filipe Cristino.
Proceedings of the National Academy of Sciences of the United States of America | 2011
Elizabeth Pellicano; Alastair D. Smith; Filipe Cristino; Bruce M. Hood; Josie Briscoe; Iain D. Gilchrist
It is well established that children with autism often show outstanding visual search skills. To date, however, no study has tested whether these skills, usually assessed on a table-top or computer, translate to more true-to-life settings. One prominent account of autism, Baron-Cohen's “systemizing” theory, gives us good reason to suspect that they should. In this study, we tested whether autistic children's exceptional skills at small-scale search extend to a large-scale environment and, in so doing, tested key claims of the systemizing account. Twenty school-age children with autism and 20 age- and ability-matched typical children took part in a large-scale search task in the “foraging room”: a purpose-built laboratory, with numerous possible search locations embedded into the floor. Children were instructed to search an array of 16 (green) locations to find the hidden (red) target as quickly as possible. The distribution of target locations was manipulated so that they appeared on one side of the midline for 80% of trials. Contrary to predictions of the systemizing account, autistic children's search behavior was much less efficient than that of typical children: they showed reduced sensitivity to the statistical properties of the search array, and furthermore, their search patterns were strikingly less optimal and less systematic. The nature of large-scale search behavior in autism cannot therefore be explained by a facility for systemizing. Rather, children with autism showed difficulties exploring and exploiting the large-scale space, which might instead be attributed to constraints (rather than benefits) in their cognitive repertoire.
Cognition | 2013
Yan Jing Wu; Filipe Cristino; Charles Leek; Guillaume Thierry
Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing.
Journal of Vision | 2012
E. Charles Leek; Filipe Cristino; Lina I. Conlan; Candy Patterson; Elly Rodriguez; Stephen J. Johnston
This study used eye movement patterns to examine how high-level shape information is used during 3D object recognition. Eye movements were recorded while observers either actively memorized or passively viewed sets of novel objects, and then during a subsequent recognition memory task. Fixation data were contrasted against different algorithmically generated models of shape analysis based on: (1) regions of internal concave or (2) convex surface curvature discontinuity or (3) external bounding contour. The results showed a preference for fixation at regions of internal local features during both active memorization and passive viewing but also for regions of concave surface curvature during the recognition task. These findings provide new evidence supporting the special functional status of local concave discontinuities in recognition and show how studies of eye movement patterns can elucidate shape information processing in human vision.
Journal of Experimental Psychology: Human Perception and Performance | 2014
Lina Davitt; Filipe Cristino; Alan C.-N. Wong; E. Charles Leek
This study examines the kinds of shape features that mediate basic- and subordinate-level object recognition. Observers were trained to categorize sets of novel objects at either a basic (between-families) or subordinate (within-family) level of classification. We analyzed the spatial distributions of fixations and compared them to model distributions of different curvature polarity (regions of convex or concave bounding contour), as well as internal part boundaries. The results showed a robust preference for fixation at part boundaries and for concave over convex regions of bounding contour, during both basic- and subordinate-level classification. In contrast, mean saccade amplitudes were shorter during basic- than subordinate-level classification. These findings challenge models of recognition that do not posit any special functional status to part boundaries or curvature polarity. We argue that both basic- and subordinate-level classification are mediated by object representations that make explicit internal part boundaries and distinguish concave and convex regions of bounding contour. The classification task constrains how shape information in these representations is used, consistent with the hypothesis that both parts-based and image-based operations support object recognition in human vision.
Neuropsychologia | 2012
E. Charles Leek; Candy Patterson; Matthew A. Paul; Robert D. Rafal; Filipe Cristino
This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape.
Quarterly Journal of Experimental Psychology | 2015
Filipe Cristino; Lina Davitt; William G. Hayward; E. Charles Leek
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Zoe J. Oliver; Filipe Cristino; Mark Roberts; Alan J. Pegna; E. Charles Leek
The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part, and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape.
Journal of Vision | 2015
Charles Leek; Stephen J. Johnston; Filipe Cristino
The recognition of 3D object shape is a fundamental issue in vision science. Although our knowledge has advanced considerably, most prior studies have been restricted to 2D stimulus presentation that ignores stereo disparity. In previous work we have shown how analyses of eye movement patterns can be used to elucidate the kinds of shape information that support the recognition of multi-part 3D objects (e.g., Davitt et al., JEP: HPP, 2014, 40, 451–456). Here we extend that work using a novel technique for the 3D mapping and analysis of eye movement patterns under conditions of stereo viewing. Eye movements were recorded while observers learned sets of surface-rendered multi-part novel objects, and during a subsequent recognition memory task in which they discriminated trained from untrained objects at different depth rotations. The tasks were performed binocularly with or without stereo disparity. Eye movements were mapped onto the underlying 3D object mesh using a ray tracing technique and a common reference frame between the eye tracker and 3D modelling environment. This allowed us to extrapolate the recorded screen coordinates for fixations from the eye tracker onto the 3D structure of the stereo-viewed objects. For the analysis we computed models of the spatial distributions of 3D surface curvature convexity, concavity and low-level image saliency. We then compared (fixation) data - model correspondences using ROC curves. Observers were faster and more accurate when viewing objects with stereo disparity. The spatial distributions of fixations were best accounted for by the 3D surface concavity model. The results support the hypothesis that stereo disparity facilitates recognition, and that surface curvature minima play a key role in the recognition of 3D shape. More broadly, the novel techniques outlined for mapping eye movement patterns in 3D space should be of interest to vision researchers in a variety of domains. Meeting abstract presented at VSS 2015.
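The mapping step described in this abstract — casting a ray from the eye through a fixation's screen coordinate and intersecting it with the object's triangle mesh — can be sketched in a few lines. The sketch below is an illustrative assumption (a toy pinhole camera with the screen plane at z = screen_z, a Möller–Trumbore intersection test, and a plain list-of-triangles mesh), not the authors' actual pipeline.

```python
# Illustrative sketch: map a 2D screen fixation onto a 3D triangle mesh
# by ray casting. Camera model and mesh layout are assumptions made for
# this example, not the published implementation.

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns distance t or None."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                     # ray parallel to the triangle
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:               # outside barycentric range
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None        # hit must be in front of eye

def map_fixation_to_mesh(fix_xy, mesh, eye=(0.0, 0.0, 0.0), screen_z=1.0):
    """Cast a ray from the eye through the fixation point on the screen
    plane (z = screen_z) and return the nearest 3D hit point, or None."""
    direction = (fix_xy[0], fix_xy[1], screen_z)
    best_t, best_pt = None, None
    for v0, v1, v2 in mesh:              # mesh: list of vertex triples
        t = ray_triangle_intersect(eye, direction, v0, v1, v2)
        if t is not None and (best_t is None or t < best_t):
            best_t = t
            best_pt = tuple(eye[i] + t * direction[i] for i in range(3))
    return best_pt
```

Fixations whose rays miss the mesh come back as None and can be discarded or snapped to the nearest surface point; taking the nearest hit among all intersected triangles handles self-occluding multi-part objects.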
I-perception | 2012
Filipe Cristino; Candy Patterson; Charles Leek
Eye movements have been widely studied, using images and videos in laboratories or portable eye trackers in the real world. Although a good understanding of the saccadic system and extensive models of gaze have been developed over the years, only a few studies have focused on the consistency of eye movements across viewpoints. We have developed a new technique to compute and map the depth of collected eye movements on stimuli rendered from 3D mesh objects using a traditional corneal reflection eye tracker (SR Eyelink 1000). Having eye movements mapped into 3D space (and not on an image space) allowed us to compare fixations across viewpoints. Fixation sequences (scanpaths) were also studied across viewpoints using the ScanMatch method (Cristino et al., 2010, Behavior Research Methods, 42, 692–700), extended to work with 3D eye movements. In a set of experiments where participants were asked to perform a recognition task on either a set of objects or faces, we recorded their gaze while performing the ta...
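The ScanMatch idea cited above can be illustrated compactly: quantize each scanpath into grid cells, then align the two cell sequences with the Needleman–Wunsch algorithm, scoring substitutions by spatial proximity. The sketch below is a simplified stand-in (no fixation-duration weighting, a toy distance-based substitution score), not the published ScanMatch toolbox.

```python
# Simplified ScanMatch-style scanpath comparison: grid quantization plus
# Needleman-Wunsch alignment. Scoring constants are illustrative choices.
import math

def quantize(scanpath, n_bins=8, size=1.0):
    """Map (x, y) fixations in [0, size) onto grid-cell indices."""
    return [(int(x / size * n_bins), int(y / size * n_bins))
            for x, y in scanpath]

def sub_score(c1, c2, n_bins=8):
    """Closer grid cells score higher: 1.0 for identical cells,
    approaching -1.0 for maximally distant cells."""
    d = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    d_max = math.hypot(n_bins - 1, n_bins - 1)
    return 1.0 - 2.0 * d / d_max

def needleman_wunsch(seq_a, seq_b, gap=-1.0, n_bins=8):
    """Global alignment score of two cell sequences."""
    m, n = len(seq_a), len(seq_b)
    score = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            score[i][j] = max(
                score[i-1][j-1] + sub_score(seq_a[i-1], seq_b[j-1], n_bins),
                score[i-1][j] + gap,      # gap in seq_b
                score[i][j-1] + gap)      # gap in seq_a
    return score[m][n]

def scanpath_similarity(path_a, path_b, n_bins=8):
    """Normalized similarity: 1.0 for identical scanpaths."""
    a, b = quantize(path_a, n_bins), quantize(path_b, n_bins)
    return needleman_wunsch(a, b, n_bins=n_bins) / max(len(a), len(b))
```

For example, comparing a scanpath with itself yields 1.0, while spatially unrelated scanpaths score near or below zero; the same machinery extends to 3D by quantizing fixations over mesh surface regions instead of image cells.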
Proceedings of the National Academy of Sciences of the United States of America | 2011
Elizabeth Pellicano; Alastair D. Smith; Filipe Cristino; Bruce M. Hood; Josie Briscoe; Iain D. Gilchrist
Nemeth and Janacsek's (1) letter highlights two findings from our study (2) that seem to be at odds with those from existing studies.