Publication


Featured research published by Benjamin W. Tatler.


Vision Research | 2005

Visual correlates of fixation selection: effects of scale and time

Benjamin W. Tatler; Roland Baddeley; Iain D. Gilchrist

What distinguishes the locations that we fixate from those that we do not? To answer this question we recorded eye movements while observers viewed natural scenes, and recorded image characteristics centred at the locations that observers fixated. To investigate potential differences in the visual characteristics of fixated versus non-fixated locations, these images were transformed to make intensity, contrast, colour, and edge content explicit. Signal detection and information theoretic techniques were then used to compare fixated regions to those that were not. The presence of contrast and edge information was more strongly discriminatory than luminance or chromaticity. Fixated locations tended to be more distinctive in the high spatial frequencies. Extremes of low frequency luminance information were avoided. With prolonged viewing, consistency in fixation locations between observers decreased. In contrast to [Parkhurst, D. J., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42 (1), 107-123] we found no change in the involvement of image features over time. We attribute this difference in our results to a systematic bias in their metric. We propose that saccade target selection involves an unchanging intermediate level representation of the scene but that the high-level interpretation of this representation changes over time.


Current Biology | 2001

Steering with the head: The visual strategy of a racing driver

Michael F. Land; Benjamin W. Tatler

We studied the eye movements of a racing driver during high-speed practice to see whether he took in visual information in a different way from a normal driver on a winding road [1, 2]. We found that, when cornering, he spent most of the time looking close to, but not exactly at, the tangent points on the inside edges of the bends. Each bend was treated slightly differently, and there was a highly repeatable pattern to the way the track edge was viewed throughout each bend. We also found a very close relationship between the driver's head direction and the rate of rotation of the car 1 s later. We interpret these observations as indicating that the driver's gaze is not driven directly by tangent point location, as it is in ordinary driving. Instead, we propose that his head direction is driven by the same information that he uses to control steering and speed, namely his knowledge of the track and his racing line round it. If he directs his head at an angle proportional to his estimate of car rotation speed, this will automatically bring his head roughly into line with the tangent points of the bends. From this standardized position, he can use the expected movements of the tangent points in his field of view to verify, and if necessary modify, his racing line during the following second.


Quarterly Journal of Experimental Psychology | 2005

Visual memory for objects in natural scenes: From fixations to object files

Benjamin W. Tatler; Iain D. Gilchrist; Michael F. Land

Object descriptions are extracted and retained across saccades when observers view natural scenes. We investigated whether particular object properties are encoded and the stability of the resulting memories. We tested immediate recall of multiple types of information from real-world scenes and from computer-presented images of the same scenes. The relationship between fixations and properties of object memory was investigated. Position information was encoded and accumulated from multiple fixations. In contrast, identity and colour were encoded but did not require direct fixation and did not accumulate. In the current experiments, participants were unable to recall any information about shape or relative distances between objects. In addition, where information was encoded we found differential patterns of stability. Data from viewing real scenes and images were highly consistent, with stronger effects in the real-world conditions. Our findings imply that object files are not dependent upon the encoding of any particular object property and so are robust to dynamic visual environments.


Perception | 2003

The Time Course of Abstract Visual Representation

Benjamin W. Tatler; Iain D. Gilchrist; Jenny Rusted

Studies in change blindness reinforce the suggestion that veridical, pictorial representations that survive multiple relocations of gaze are unlikely to be generated in the visual system. However, more abstract information may well be extracted and represented by the visual system. In this paper we study the types of information that are retained and the time courses over which these representations are constructed when participants view complex natural scenes. We find that such information is retained and that the resultant abstract representations encode a range of information. Different types of information are extracted and represented over different time courses. After several seconds of viewing natural scenes, our visual system is able to construct a complex, information-rich representation.


Perception | 2001

Characterising the Visual Buffer: Real-World Evidence for Overwriting Early in Each Fixation

Benjamin W. Tatler

What happens to the pictorial content of fixations when we move our eyes? Previous studies demonstrate that observers are very poor at detecting changes in natural scenes that occur across saccades, blinks, and artificial interruptions (‘change blindness’). They suggest that the visual ‘snapshots’ of what is on the retina during a fixation are not retained and fused over successive fixations. I find similar results when volunteers are performing the complex real-life task of making a cup of tea. Volunteers can access the snapshot of the current fixation but not those of previous fixations. I suggest that volunteers are reporting the content of a low-level visual store that holds a veridical snapshot of the current fixation, rather than the retina itself. The snapshots are not ‘wiped’ by the saccade and remain in the buffer until they are overwritten by a new snapshot. The overwrite occurs in an all-or-none manner and can be at any time within the first 400 ms of each new fixation, with 50% of overwrites being within the first 100 ms.


Perception | 2003

On Nystagmus, Saccades, and Fixations

Benjamin W. Tatler; Nicholas J. Wade

Investigations of the ways in which the eyes move came to prominence in the 19th century, but techniques for measuring them more precisely emerged in the 20th century. When scanning a scene or text the eyes engage in periods of relative stability (fixations) interspersed with ballistic rotations (saccades). The saccade-and-fixate strategy, associated with voluntary eye movements, was first uncovered in the context of involuntary eye movements following body rotation. This pattern of eye movements is now referred to as nystagmus, and involves periods of slow eye movements, during which objects are visible, and rapid returns, when they are not; it is based on a vestibular reflex which attempts to achieve image stabilisation. Post-rotational nystagmus was reported in the late 18th century (by Wells), with afterimages used as a means of retinal stabilisation to distinguish between movement of the eyes and of the environment. Nystagmus was linked to vestibular stimulation in the 19th century, and Mach, Breuer, and Crum Brown all described its fast and slow phases. Wells and Breuer proposed that there was no visual awareness during the ballistic phase (saccadic suppression). The saccade-and-fixate strategy highlighted by studies of nystagmus was shown to apply to tasks like reading by Dodge, who used more sophisticated photographic techniques to examine oculomotor kinematics. The relationship between eye movements and perception, following earlier intuitions by Wells and Breuer, was explored by Dodge, and has been of fundamental importance in the direction of vision research over the last century.


Perception | 2003

Dodge-Ing the Issue: Dodge, Javal, Hering, and the Measurement of Saccades in Eye-Movement Research

Nicholas J. Wade; Benjamin W. Tatler; Dieter Heller

Dodge, in 1916, suggested that the French term ‘saccade’ should be used for describing the rapid movements of the eyes that occur while reading. Previously he had referred to these as type I movements. Javal had used the term ‘saccade’ in 1879, when describing experiments conducted in his laboratory by Lamare. Accordingly, Javal has been rightly credited with assigning the term to rapid eye movements. In English these rapid rotations had been called jerks, and they had been observed and measured before Lamare's studies of reading. Rapid sweeps of the eyes occur as one phase of nystagmus; they were observed by Wells in 1792, who used an afterimage technique, and they were illustrated by Crum Brown in 1878. Afterimages were used in nineteenth-century research on eye movements and eye position; they were also employed by Hering in 1879, to ascertain how the eyes moved during reading. In the previous year, Javal had employed afterimages in his investigations of reading, but this was to demonstrate that the eyes moved horizontally rather than vertically. Hering's and Lamare's auditory method established the discontinuous nature of eye movements during reading, and the photographic methods introduced by Dodge and others in the early twentieth century enabled their characteristics to be determined with greater accuracy.


Progress in Brain Research | 2002

What information survives saccades in the real world?

Benjamin W. Tatler

Recent change detection research demonstrates impressively that we do not retain and fuse the pictorial content of successive fixations. This raises two distinct issues: (1) when and how is point-by-point information lost or replaced? and (2) might more abstract information be extracted and retained from fixations? In this chapter I explore both of these issues. I consider the evidence for a very short-term, richly detailed visual representation of the current target of fixation that survives the saccade but is overwritten by the content of each new fixation shortly after it begins. The possibility of abstract representations of the visual surroundings is then discussed. Under conditions of competitive, parallel processing, as are present in real-life situations, multiple types of information are extracted and retained from complex natural scenes. Time courses for extraction vary between different types of information. The inclusion of eye movement data reveals a crucial role for fixations in information extraction. The data suggest that information assimilation into the representations tested here is dominated by the fovea and is integrated and accumulated from multiple foveations. From these studies we are able to construct a rudimentary framework in which representational faithfulness and richness depend upon the fixation history of that part of the visual scene, thus producing an efficient and potentially task-specific representation.


Archive | 2009

How our eyes question the world

Michael F. Land; Benjamin W. Tatler


Archive | 2015

Optokinetic Eye Movements Elicited by Radial Optic Flow in the Macaque Monkey

Klaus-Peter Hoffmann; Mingxia Zhu; Richard W. Hertle; Dongsheng Yang; Benjamin W. Tatler; Mary Hayhoe; Michael F. Land; Dana H. Ballard; Paul Anthony Warren; Simon K. Rushton; Andrew J. Foulkes; Andre Kaminiarz; Anja Schlack; Markus Lappe; Frank Bremmer
