Roland Baddeley
University of Bristol
Publications
Featured research published by Roland Baddeley.
Vision Research | 2005
Benjamin W. Tatler; Roland Baddeley; Iain D. Gilchrist
What distinguishes the locations that we fixate from those that we do not? To answer this question we recorded eye movements while observers viewed natural scenes, and recorded image characteristics centred at the locations that observers fixated. To investigate potential differences in the visual characteristics of fixated versus non-fixated locations, these images were transformed to make intensity, contrast, colour, and edge content explicit. Signal detection and information theoretic techniques were then used to compare fixated regions to those that were not. The presence of contrast and edge information was more strongly discriminatory than luminance or chromaticity. Fixated locations tended to be more distinctive in the high spatial frequencies. Extremes of low frequency luminance information were avoided. With prolonged viewing, consistency in fixation locations between observers decreased. In contrast to [Parkhurst, D. J., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42 (1), 107-123] we found no change in the involvement of image features over time. We attribute this difference in our results to a systematic bias in their metric. We propose that saccade target selection involves an unchanging intermediate level representation of the scene but that the high-level interpretation of this representation changes over time.
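The signal-detection comparison described above can be illustrated with a short sketch: given a feature value (here, local contrast) sampled at fixated and at control locations, the area under the ROC curve measures how well that feature discriminates the two sets. This is a minimal illustration with synthetic numbers, not the paper's data or pipeline.

```python
import numpy as np

def roc_auc(fixated, control):
    """Area under the ROC curve for discriminating fixated from
    control feature values (0.5 = chance, 1.0 = perfect)."""
    # Equivalent to the Mann-Whitney statistic: the probability that a
    # random fixated sample exceeds a random control sample (ties count half).
    f = np.asarray(fixated)[:, None]
    c = np.asarray(control)[None, :]
    return np.mean(f > c) + 0.5 * np.mean(f == c)

rng = np.random.default_rng(0)
# Hypothetical local-contrast values: fixated patches drawn with a
# slightly higher mean, mimicking a contrast advantage at fixation.
contrast_fixated = rng.normal(1.2, 1.0, 2000)
contrast_control = rng.normal(1.0, 1.0, 2000)
print(f"contrast AUC: {roc_auc(contrast_fixated, contrast_control):.3f}")
```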
Vision Research | 2006
Roland Baddeley; Benjamin W. Tatler
A Bayesian system identification technique was used to determine which image characteristics predict where people fixate when viewing natural images. More specifically, an estimate was derived for the mapping between image characteristics at a given location and the probability that this location was fixated. Using a large database of eye fixations to natural images, we determined the most probable (a posteriori) model of this mapping. From a set of candidate feature maps consisting of edge, contrast and luminance maps (at two different spatial scales), fixation probability was dominated by high spatial frequency edge information. The best model applied a compressive non-linearity (approximately a square root) to the high-frequency edge-detecting filters. Both low spatial frequency edges and contrast had weaker, but inhibitory, effects. The contributions of the other maps were so small as to be behaviourally irrelevant. This Bayesian method identifies not only the relevant weighting of the different maps, but also how this weighting varies as a function of distance from the point of fixation. Rather than centre-surround inhibition, we found that the weightings simply averaged over an area of about 2 degrees.
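A minimal sketch of the kind of mapping described, assuming a logistic link and made-up weights (the abstract specifies only the square-root compression on the high-frequency edge term and the inhibitory sign of the low-frequency edge and contrast terms):

```python
import numpy as np

def fixation_probability(hf_edges, lf_edges, contrast, w, bias):
    """Map local feature values to a fixation probability.
    The high-frequency edge term gets a compressive (square-root)
    nonlinearity, as in the best-fitting model; low-frequency edges
    and contrast enter with weaker, inhibitory (negative) weights."""
    drive = (w["hf"] * np.sqrt(hf_edges)
             + w["lf"] * lf_edges
             + w["contrast"] * contrast
             + bias)
    return 1.0 / (1.0 + np.exp(-drive))  # logistic link (an assumption)

# Illustrative weights: dominant excitatory high-frequency edge term,
# weak inhibitory low-frequency edge and contrast terms.
weights = {"hf": 2.0, "lf": -0.3, "contrast": -0.2}
p = fixation_probability(hf_edges=0.8, lf_edges=0.5,
                         contrast=0.4, w=weights, bias=-1.5)
print(f"P(fixation) = {p:.3f}")
```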
Attention Perception & Psychophysics | 2003
Charles Spence; Roland Baddeley; Massimiliano Zampini; Robert James; David I. Shore
In Experiment 1, participants were presented with pairs of stimuli (one visual and the other tactile) from the left and/or right of fixation at varying stimulus onset asynchronies and were required to make unspeeded temporal order judgments (TOJs) regarding which modality was presented first. When the participants adopted an uncrossed-hands posture, just noticeable differences (JNDs) were lower (i.e., multisensory TOJs were more precise) when stimuli were presented from different positions, rather than from the same position. This spatial redundancy benefit was reduced when the participants adopted a crossed-hands posture, suggesting a failure to remap visuotactile space appropriately. In Experiment 2, JNDs were also lower when pairs of auditory and visual stimuli were presented from different positions, rather than from the same position. Taken together, these results demonstrate that people can use redundant spatial cues to facilitate their performance on multisensory TOJ tasks and suggest that previous studies may have systematically overestimated the precision with which people can make such judgments. These results highlight the intimate link between spatial and temporal factors in determining our perception of the multimodal objects and events in the world around us.
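JNDs in a TOJ task are conventionally read off a psychometric function fitted to the proportion of "visual first" responses across SOAs. A sketch of that fit, assuming a cumulative-Gaussian psychometric function and using invented response counts:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Invented data: SOAs in ms (negative = tactile first) and the number
# of "visual first" responses out of 40 trials at each SOA.
soas = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
n_visual_first = np.array([2, 8, 14, 21, 28, 35, 39])
n_trials = 40

def neg_log_lik(params):
    pss, sigma = params[0], abs(params[1])  # PSS and psychometric spread
    p = norm.cdf(soas, loc=pss, scale=sigma)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_visual_first * np.log(p)
                   + (n_trials - n_visual_first) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 80.0], method="Nelder-Mead")
pss, sigma = fit.x[0], abs(fit.x[1])
jnd = sigma * norm.ppf(0.75)  # 75%-correct convention: JND ≈ 0.674 * sigma
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```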
Vision Research | 2006
Benjamin W. Tatler; Roland Baddeley; Benjamin T. Vincent
We recorded over 90,000 saccades while observers viewed a diverse collection of natural images and measured low-level visual features at fixation. The features that discriminated between where observers fixated and where they did not varied considerably with task and with the length of the preceding saccade. Short saccades (<8 degrees) are image-feature dependent; long saccades are less so. In free viewing, short saccades target high-frequency information, whereas long saccades are scale-invariant. When searching for luminance targets, saccades of all lengths are scale-invariant. We argue that models of saccade behaviour must account not only for task but also for saccade length, and that long and short saccades are targeted differently.
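The amplitude-conditioned analysis amounts to computing feature discriminability separately for short and long saccades. A toy sketch with synthetic data, constructed so that only short saccades carry a feature dependence (the 8-degree split is from the abstract; everything else is invented):

```python
import numpy as np

def auc(pos, neg):
    """Mann-Whitney AUC: discriminability of saccade-target (pos)
    versus control (neg) feature values."""
    p, n = np.asarray(pos)[:, None], np.asarray(neg)[None, :]
    return np.mean(p > n) + 0.5 * np.mean(p == n)

rng = np.random.default_rng(1)
n_sacc = 5000
amplitudes = rng.exponential(6.0, n_sacc)  # saccade sizes in degrees
short = amplitudes < 8.0                   # the abstract's 8-degree split
# Synthetic high-frequency feature values at saccade targets: elevated
# for short saccades only, so only they should discriminate from control.
feature_at_target = np.where(short, rng.normal(1.0, 1.0, n_sacc),
                             rng.normal(0.0, 1.0, n_sacc))
feature_control = rng.normal(0.0, 1.0, n_sacc)
print(f"short-saccade AUC: {auc(feature_at_target[short], feature_control):.3f}")
print(f"long-saccade AUC:  {auc(feature_at_target[~short], feature_control):.3f}")
```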
The Journal of Neuroscience | 2005
Casimir J. H. Ludwig; Iain D. Gilchrist; Eugene McSorley; Roland Baddeley
Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first ∼100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons.
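The temporal impulse response can be recovered by reverse correlation: compare the stochastic contrast time courses, locked to display onset, on trials where the observer chose one item versus the other. The simulation below builds in a filter confined to the first ~100 ms and shows that the reverse-correlation kernel recovers it; it is a schematic stand-in, not the paper's analysis in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_frames, dt = 20000, 30, 10    # 30 display frames of 10 ms
t = np.arange(n_frames) * dt

# Built-in "true" filter: integration confined to the first ~100 ms.
true_filter = np.where(t < 100, 1.0, 0.0)

# Stochastic contrast signals for target and distractor on each trial.
target = 0.2 + rng.normal(0, 1, (n_trials, n_frames))
distract = rng.normal(0, 1, (n_trials, n_frames))

# Simulated decision: saccade to whichever item has more filtered evidence.
chose_target = (target @ true_filter) > (distract @ true_filter)

# Reverse correlation: the difference between the mean target signal on
# trials where it was chosen versus not recovers the filter's shape.
kernel = target[chose_target].mean(0) - target[~chose_target].mean(0)
print(np.round(kernel / kernel.max(), 2))  # ~1 before 100 ms, ~0 after
```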
PLOS ONE | 2011
Nicholas E. Scott-Samuel; Roland Baddeley; Chloe E. Palmer; Innes C. Cuthill
Movement is the enemy of camouflage: most attempts at concealment are disrupted by motion of the target. Faced with this problem, navies in both World Wars of the twentieth century painted their warships with high-contrast geometric patterns: so-called “dazzle camouflage”. Rather than attempting to hide individual units, it was claimed that this patterning would disrupt the perception of their range, heading, size, shape and speed, and hence reduce losses from, in particular, torpedo attacks by submarines. Similar arguments had been advanced earlier for biological camouflage. Whilst there are good reasons to believe that most of these perceptual distortions may have occurred, there is no evidence for the last claim: changing perceived speed. Here we show that dazzle patterns can distort speed perception, and that this effect is greatest at high speeds. The effect should hold for predators launching ballistic attacks against rapidly moving prey, or on modern, low-tech battlefields where handheld weapons are fired from short range against moving vehicles. In the latter case, we demonstrate that in a typical situation involving an RPG7 attack on a Land Rover, the reduction in perceived speed is sufficient to make the grenade miss its aim point by about a metre, which could make the difference between survival and death for the occupants of the vehicle.
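The miss-distance claim follows from simple lead arithmetic: a gunner aims ahead of the vehicle by perceived speed × projectile flight time, so underestimating speed shifts the impact point by the speed error times the flight time. The figures below (vehicle speed, range, projectile speed, size of the perceptual bias) are illustrative assumptions, not values taken from the paper:

```python
# Lead-aiming arithmetic behind the "miss by about a metre" claim.
# All numbers are illustrative assumptions, not figures from the paper.
vehicle_speed = 25.0    # m/s (90 km/h): actual target speed
speed_bias = 0.07       # fractional underestimate of speed due to dazzle
grenade_speed = 200.0   # m/s: assumed average projectile speed
target_range = 100.0    # m: assumed firing distance

flight_time = target_range / grenade_speed
lead_needed = vehicle_speed * flight_time
lead_aimed = vehicle_speed * (1 - speed_bias) * flight_time
miss = lead_needed - lead_aimed  # = speed_bias * vehicle_speed * flight_time

print(f"flight time:   {flight_time:.2f} s")
print(f"miss distance: {miss:.2f} m")  # ~0.9 m with these assumptions
```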
Visual Cognition | 2009
Benjamin T. Vincent; Roland Baddeley; Alessia Correani; Tom Troscianko; Ute Leonards
The allocation of overt visual attention while viewing photographs of natural scenes is commonly thought to involve both bottom-up feature cues, such as luminance contrast, and top-down factors such as behavioural relevance and scene understanding. Profiting from the fact that light sources are highly visible but uninformative in visual scenes, we develop a mixture-model approach that estimates the relative contributions of various low- and high-level factors to patterns of eye movements whilst viewing natural scenes containing light sources. Low-level salience accounts predicted fixations on regions of high luminance contrast and on the lights themselves, whereas these factors played only a minor role in the observed human fixations. Conversely, the human data were mostly explicable in terms of a central bias and a foreground preference. Moreover, observers were more likely to look near lights than directly at them, an effect that cannot be explained by low-level stimulus factors such as luminance or contrast. These and other results support the idea that the visual system neglects highly visible cues in favour of less visible object information. Mixture modelling might be a good way forward in understanding visual scene exploration, since it makes it possible to measure the extent to which low-level or high-level cues act as drivers of eye movements.
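The mixture-model idea can be sketched as follows: each fixation is treated as a draw from a weighted mixture of candidate probability maps (salience, central bias, foreground preference, and so on), and the weights are fitted by maximum likelihood; the fitted weights then quantify each factor's contribution. The maps and fixations below are random placeholders, not the paper's components:

```python
import numpy as np

def fit_mixture_weights(maps, fixations, n_iter=200):
    """EM for mixture weights: maps is (K, H, W), each a probability
    map summing to 1; fixations is an (N, 2) array of (row, col)."""
    K = maps.shape[0]
    w = np.full(K, 1.0 / K)
    # Likelihood of each fixation under each component map: (K, N).
    lik = np.stack([m[tuple(np.asarray(fixations).T)] for m in maps])
    for _ in range(n_iter):
        resp = w[:, None] * lik              # E-step: responsibilities
        resp /= resp.sum(0, keepdims=True)
        w = resp.mean(1)                     # M-step: updated weights
    return w

rng = np.random.default_rng(3)
H, W, K = 32, 32, 3
maps = rng.random((K, H, W))
maps /= maps.sum(axis=(1, 2), keepdims=True)  # normalise each map

# Hypothetical fixations sampled from map 0 only, so the fitted
# weights should concentrate on that component.
idx = rng.choice(H * W, size=500, p=maps[0].ravel())
fixations = np.stack(np.unravel_index(idx, (H, W)), axis=1)

print(np.round(fit_mixture_weights(maps, fixations), 2))
```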
Visual Cognition | 2009
Filipe Cristino; Roland Baddeley
In this paper we set out to answer two questions. The first aims to discover whether saccades are driven by low-level image features (as suggested by a number of recent and influential models), a position we call image salience, or whether they are driven by the meaning and reward associated with the world, a position we call world salience. The second question concerns the reference frame in which eye movements are planned. To answer these questions, we recorded six videos with a head-mounted camera while the wearer walked down a popular shopping street in Bristol. As well as showing these videos to our participants, we also showed spatially and temporally filtered versions of them. We found that, at a coarse spatial scale, subjects viewed similar locations in the image irrespective of the filtering, and that fixation distributions found when viewing videos with similar filtering were no more alike than when the filtering varied widely. Using a novel mixture-modelling technique, we also showed that the most important reference frame was world-centred rather than head- or body-based. This was confirmed by a second experiment in which the fixation distributions to identical videos were systematically changed by using a swivelling tent that altered only subjects’ perception of the gravitational vertical. We conclude that eye movements should not be understood in terms of image salience, or even information maximization, but in terms of the more flexible concept of reward maximization.
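One way to pose the reference-frame question is to express the same gaze samples in candidate coordinate frames and ask in which frame fixations are most concentrated. The toy sketch below is not the paper's mixture analysis; it scores each frame by the entropy of the resulting fixation histogram, with synthetic gaze constructed to cluster on world locations:

```python
import numpy as np

def histogram_entropy(x, edges):
    """Shannon entropy (bits) of a sample histogrammed on fixed bins:
    lower entropy = gaze more concentrated in that reference frame."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(4)
n = 5000
head_direction = rng.normal(0, 10, n)  # head wanders as the subject walks
world_target = rng.normal(0, 2, n)     # gaze clusters on world locations

# The eye tracker measures gaze relative to the head; re-adding the
# head direction expresses the same samples in world coordinates.
gaze_in_head = world_target - head_direction
gaze_in_world = gaze_in_head + head_direction

edges = np.linspace(-40, 40, 41)       # shared bins for a fair comparison
print(f"entropy, head frame:  {histogram_entropy(gaze_in_head, edges):.2f} bits")
print(f"entropy, world frame: {histogram_entropy(gaze_in_world, edges):.2f} bits")
```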
Proceedings of the Royal Society of London B: Biological Sciences | 2013
Joanna Hall; Innes C. Cuthill; Roland Baddeley; Adam Shohet; Nicholas E. Scott-Samuel
Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage.
Journal of Vision | 2009
Benjamin T. Vincent; Roland Baddeley; Tom Troscianko; Iain D. Gilchrist
Despite embodying fundamentally different assumptions about attentional allocation, a wide range of popular models of attention include a max-of-outputs mechanism for selection. Within these models, attention is directed to the item with the most extreme value along a perceptual dimension via, for example, a winner-take-all mechanism. From a detection-theoretic standpoint, this MAX observer can be optimal in specific situations; however, under distracter-heterogeneity manipulations or in natural visual scenes this is not always the case. We derive a Bayesian maximum a posteriori (MAP) observer, which is optimal in both of these situations. While it retains a form of the max-of-outputs mechanism, it operates on a maximum a posteriori probability dimension instead of a perceptual dimension. To test this model we investigated human visual search performance using a yes/no procedure while adding external orientation uncertainty to distracter elements. The results are much better fitted by the predictions of a MAP observer than of a MAX observer. We conclude that a max-like mechanism may well underlie the allocation of visual attention, but that it operates on a probability dimension, not a perceptual dimension.
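The MAX/MAP contrast can be simulated directly. With external orientation noise added to distracters, the distracter responses are more variable than the target's, so thresholding the largest raw response (MAX) is no longer optimal, whereas thresholding the posterior probability of target presence (MAP) is. A schematic yes/no search simulation under assumed Gaussian internal and external noise:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n_trials, n_items = 20000, 6
sig_int, sig_ext, target_mu = 1.0, 2.0, 2.0
sig_dist = np.hypot(sig_int, sig_ext)  # distracter measurement spread

present = rng.random(n_trials) < 0.5
x = rng.normal(0, sig_dist, (n_trials, n_items))  # heterogeneous distracters
# On target-present trials, item 0 is the target (position is irrelevant:
# both decision rules pool over items symmetrically).
x[present, 0] = rng.normal(target_mu, sig_int, present.sum())

# MAX observer: "present" if the largest raw response exceeds a criterion
# (criterion swept, best accuracy reported, to give MAX its best chance).
max_resp = x.max(1)
accs = [np.mean((max_resp > c) == present) for c in np.linspace(0, 8, 81)]
print(f"MAX observer accuracy: {max(accs):.3f}")

# MAP observer: per-item likelihood ratio, averaged over items to give
# the posterior odds of "present" (equal priors, one target if present).
lr = norm.pdf(x, target_mu, sig_int) / norm.pdf(x, 0, sig_dist)
map_present = lr.mean(1) > 1.0
print(f"MAP observer accuracy: {np.mean(map_present == present):.3f}")
```

Note the key design point: because the distracter distribution is broader than the target's, the likelihood ratio is non-monotone in the raw response, so very extreme responses actually count as evidence for a distracter, which is exactly what the MAX rule cannot capture.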