Charles Leek
Bangor University
Publication
Featured research published by Charles Leek.
Cognition | 2013
Yan Jing Wu; Filipe Cristino; Charles Leek; Guillaume Thierry
Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing.
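The region-of-interest comparison described in this abstract (more frequent and longer fixations on critical than on control words) can be illustrated with a short, hedged sketch. This is not the authors' analysis pipeline; the column names and values below are invented for illustration only.

```python
# A minimal sketch of a fixation-report analysis comparing critical vs. control
# word locations. The data frame layout and column names are assumptions.
import pandas as pd

# Hypothetical fixation report: one row per fixation, labelled by the word
# region it landed in ("critical" or "control").
fixations = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "trial":       [1, 1, 2, 1, 1, 2],
    "roi":         ["critical", "control", "critical", "control", "critical", "critical"],
    "duration_ms": [220, 180, 250, 190, 240, 210],
})

# Per participant and region type: number of fixations and total dwell time.
summary = (fixations
           .groupby(["participant", "roi"])["duration_ms"]
           .agg(n_fixations="count", total_dwell_ms="sum")
           .reset_index())
print(summary)

# A paired comparison of critical vs. control values per participant would then
# test whether critical words attract more, and longer, fixations.
```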
Journal of Vision | 2015
Charles Leek; Stephen J. Johnston; Filipe Cristino
The recognition of 3D object shape is a fundamental issue in vision science. Although our knowledge has advanced considerably, most prior studies have been restricted to 2D stimulus presentation that ignores stereo disparity. In previous work we have shown how analyses of eye movement patterns can be used to elucidate the kinds of shape information that support the recognition of multi-part 3D objects (e.g., Davitt et al., JEP: HPP, 2014, 40, 451-456). Here we extend that work using a novel technique for the 3D mapping and analysis of eye movement patterns under conditions of stereo viewing. Eye movements were recorded while observers learned sets of surface-rendered multi-part novel objects, and during a subsequent recognition memory task in which they discriminated trained from untrained objects at different depth rotations. The tasks were performed binocularly with or without stereo disparity. Eye movements were mapped onto the underlying 3D object mesh using a ray tracing technique and a common reference frame between the eye tracker and the 3D modelling environment. This allowed us to extrapolate the recorded screen coordinates for fixations from the eye tracker onto the 3D structure of the stereo-viewed objects. For the analysis we computed models of the spatial distributions of 3D surface curvature convexity, concavity and low-level image saliency. We then compared fixation data-model correspondences using ROC curves. Observers were faster and more accurate when viewing objects with stereo disparity. The spatial distributions of fixations were best accounted for by the 3D surface concavity model. The results support the hypothesis that stereo disparity facilitates recognition, and that surface curvature minima play a key role in the recognition of 3D shape. More broadly, the novel techniques outlined for mapping eye movement patterns in 3D space should be of interest to vision researchers in a variety of domains. Meeting abstract presented at VSS 2015.
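The core of the mapping step described here, tracing a screen-space fixation through the camera onto the 3D object mesh, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pinhole camera setup, the fixation coordinates, and the use of the trimesh library are assumptions made for the example.

```python
# A minimal sketch: project screen-space fixations onto a 3D mesh by ray casting.
import numpy as np
import trimesh

# Stand-in object; the real stimuli were surface-rendered multi-part meshes.
mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)

# Hypothetical camera at z = 5, looking down -z toward the object at the origin.
cam_pos = np.array([0.0, 0.0, 5.0])

def fixation_to_ray(x_px, y_px, width=1920, height=1080, fov_deg=40.0):
    """Convert a fixation in screen pixels to a viewing ray direction,
    assuming a simple pinhole camera aligned with -z (illustrative setup)."""
    ndx = (x_px / width) * 2.0 - 1.0          # normalised device x in [-1, 1]
    ndy = 1.0 - (y_px / height) * 2.0         # normalised device y in [-1, 1]
    aspect = width / height
    half_h = np.tan(np.radians(fov_deg) / 2.0)
    direction = np.array([ndx * half_h * aspect, ndy * half_h, -1.0])
    return direction / np.linalg.norm(direction)

# Hypothetical fixation locations reported by the eye tracker (pixels).
fixations_px = [(960, 540), (1100, 500)]
directions = np.array([fixation_to_ray(x, y) for x, y in fixations_px])
origins = np.tile(cam_pos, (len(directions), 1))

# Ray-mesh intersection gives the 3D surface point each fixation lands on.
locations, ray_idx, tri_idx = mesh.ray.intersects_location(origins, directions)
print(locations)
```

From the intersected surface points, fixation densities could then be compared against curvature or saliency models (e.g., with ROC analysis), as the abstract describes.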
i-Perception | 2012
Filipe Cristino; Candy Patterson; Charles Leek
Eye movements have been widely studied using images and videos in laboratories or portable eye trackers in the real world. Although a good understanding of the saccadic system and extensive models of gaze have been developed over the years, only a few studies have focused on the consistency of eye movements across viewpoints. We have developed a new technique to compute and map the depth of collected eye movements on stimuli rendered from 3D mesh objects using a traditional corneal reflection eye tracker (SR Eyelink 1000). Having eye movements mapped into 3D space (and not in an image space) allowed us to compare fixations across viewpoints. Fixation sequences (scanpaths) were also studied across viewpoints using the ScanMatch method (Cristino et al., 2010, Behavior Research Methods, 42, 692–700), extended to work with 3D eye movements. In a set of experiments where participants were asked to perform a recognition task on either a set of objects or faces, we recorded their gaze while performing the ta...
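The scanpath comparison referenced above builds on ScanMatch, which encodes fixation sequences as region labels and aligns them with a Needleman-Wunsch algorithm. The sketch below illustrates that idea in a simplified form: the published toolbox is MATLAB and uses a distance-weighted substitution matrix rather than a flat match/mismatch score, and the grid size, scoring values, and example scanpaths here are assumptions.

```python
# A simplified, ScanMatch-style comparison of two fixation sequences.
import numpy as np

def to_regions(fixations, grid=(4, 4), screen=(1920, 1080)):
    """Map (x, y) fixations to labels on a coarse spatial grid."""
    labels = []
    for x, y in fixations:
        col = min(int(x / screen[0] * grid[0]), grid[0] - 1)
        row = min(int(y / screen[1] * grid[1]), grid[1] - 1)
        labels.append(row * grid[0] + col)
    return labels

def needleman_wunsch(a, b, match=1.0, mismatch=-1.0, gap=-0.5):
    """Global alignment score between two region-label sequences."""
    n, m = len(a), len(b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = gap * np.arange(n + 1)
    score[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i, j] = max(score[i - 1, j - 1] + sub,
                              score[i - 1, j] + gap,
                              score[i, j - 1] + gap)
    return score[n, m]

# Hypothetical scanpaths recorded from two viewpoints of the same object.
scanpath_view1 = to_regions([(200, 300), (900, 500), (1500, 800)])
scanpath_view2 = to_regions([(220, 310), (950, 520), (1400, 820)])
print(needleman_wunsch(scanpath_view1, scanpath_view2))
```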
Journal of Eye Movement Research | 2009
Stephen Johnston; Charles Leek
Journal of Vision | 2010
Charles Leek; Stephen Johnston
Journal of Vision | 2014
Alan J. Pegna; Mark Roberts; Charles Leek
Journal of Vision | 2012
Candy Patterson; Filipe Cristino; William G. Hayward; Charles Leek
Journal of Vision | 2010
Charles Leek; Mark Roberts; Irene Reppa; Alan J. Pegna
Journal of Vision | 2010
Stephen Johnston; Charles Leek
Gamble Aware | 2017
Robert D. Rogers; Joe Butler; S Millard; Filipe Cristino; Lina Davitt; Charles Leek