Publications


Featured research published by Paul A. Warren.


Current Biology | 2009

Optic flow processing for the assessment of object movement during ego movement.

Paul A. Warren; Simon K. Rushton

The vast majority of research on optic flow (retinal motion arising because of observer movement) has focused on its use in heading recovery and guidance of locomotion. Here we demonstrate that optic flow processing has an important role in the detection and estimation of scene-relative object movement during self movement. To do this, the brain identifies and globally discounts (i.e., subtracts) optic flow patterns across the visual scene, a process called flow parsing. Remaining motion can then be attributed to other objects in the scene. In two experiments, stationary observers viewed radial expansion flow fields and a moving probe at various onscreen locations. Consistent with global discounting, perceived probe motion had a significant component toward the center of the display and the magnitude of this component increased with probe eccentricity. The contribution of local motion processing to this effect was small compared to that of global processing (experiment 1). Furthermore, global discounting was clearly implicated because these effects persisted even when all the flow in the hemifield containing the probe was removed (experiment 2). Global processing of optic flow information is shown to play a fundamental role in the recovery of object movement during ego movement.
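
The global-discounting step described here can be sketched in a few lines of code (a minimal illustration under assumed values; the pure radial-flow model, variable names, and numbers are hypothetical, not the authors' implementation):

```python
import numpy as np

# Illustrative sketch of flow parsing as global subtraction (a hypothetical
# model, not the authors' code). For a stationary scene, forward self-movement
# produces radial flow expanding from the focus of expansion.

def radial_flow(pos, expansion_rate):
    """Self-movement flow at screen position(s) `pos` for pure expansion
    centered on the origin: vectors point away from the center and grow
    with eccentricity."""
    return expansion_rate * np.asarray(pos)

# Retinal motion of a probe = flow due to self-movement at its location
# plus any scene-relative object movement.
probe_pos = np.array([2.0, 0.0])
object_motion = np.array([0.0, 0.5])            # true scene-relative movement
retinal_motion = radial_flow(probe_pos, 0.3) + object_motion

# Flow parsing: estimate the globally consistent self-movement field and
# subtract it; the residual is attributed to object movement in the scene.
recovered = retinal_motion - radial_flow(probe_pos, 0.3)
print(recovered)                                # -> [0.  0.5]
```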


Current Biology | 2010

A Bayesian Model of Perceived Head-Centered Velocity during Smooth Pursuit Eye Movement

Thomas Charles Augustus Freeman; Rebecca A. Champion; Paul A. Warren

Summary During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleishl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6–11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12–15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16–20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.
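
The zero-centered-prior idea can be made concrete with a standard Gaussian worked example (a textbook Bayesian formulation assumed here; these equations are not taken from the paper). If a stimulus's speed is measured as $m$ with measurement variance $\sigma^2$, and the prior over speed is Gaussian with mean zero and variance $\sigma_p^2$, the posterior mean estimate is

$$\hat{v} = \frac{\sigma_p^2}{\sigma_p^2 + \sigma^2}\, m .$$

Because pursued stimuli are harder to discriminate, their $\sigma^2$ is larger, so the multiplier on $m$ is smaller and perceived speed is pulled more strongly toward the zero-centered prior, in the direction of the Aubert-Fleischl underestimation.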


Vision Research | 2009

Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues

Paul A. Warren; Simon K. Rushton

We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.


Vision Research | 2008

Evidence for flow-parsing in radial flow displays

Paul A. Warren; Simon K. Rushton

Retinal motion of objects is not in itself enough to signal whether or how objects are moving in the world; the same pattern of retinal motion can result from movement of the object, the observer or both. Estimation of scene-relative movement of an object is vital for successful completion of many simple everyday tasks. Recent research has provided evidence for a neural flow-parsing mechanism which uses the brain's sensitivity to optic flow to separate retinal motion signals into those components due to observer movement and those due to the movement of objects in the scene. In this study we provide further evidence that flow-parsing is implicated in the assessment of object trajectory during observer movement. Furthermore, it is shown that flow-parsing involves a global analysis of retinal motion, as might be expected if optic flow processing underpinned this mechanism.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Perceptuo-motor, cognitive, and description-based decision-making seem equally good

Andreas Jarvstad; Ulrike Hahn; Simon K. Rushton; Paul A. Warren

Significance: Human decision-making seems fundamentally domain dependent. Sensory-motor decisions (e.g., where to put your feet on a rocky ridge) seem near-optimal, whereas decisions based on numerical information (e.g., choosing between financial options) seem suboptimal. Additionally, when people rely on information gained through experience, they make choices that are often the opposite of those they make when relying on described information. However, comparing results across domains on the basis of past results is difficult, because decision-making is studied very differently in different domains. We compared decision-making performance across domains under precisely matched conditions, finding evidence against the idea that fundamental dissociations exist. In fact, people's ability to make decisions seems rather good, although not perfect, in both sensory-motor and cognitive domains.

Classical studies suggest that high-level cognitive decisions (e.g., choosing between financial options) are suboptimal. In contrast, low-level decisions (e.g., choosing where to put your feet on a rocky ridge) appear near-optimal: the perception–cognition gap. Moreover, in classical tasks, people appear to put too much weight on unlikely events. In contrast, when people can learn through experience, they appear to put too little weight on unlikely events: the description–experience gap. We eliminated confounding factors and, contrary to what is commonly believed, found results suggesting that (i) the perception–cognition gap is illusory and due to differences in the way performance is assessed; (ii) the description–experience gap arises from the assumption that objective probabilities match subjective ones; (iii) people's ability to make decisions is better than the classical literature suggests; and (iv) differences between decision-makers are more important for predicting people's choices than differences between choice tasks.
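
The description–experience contrast can be illustrated with a toy simulation (made-up payoffs and probabilities, purely illustrative; this is not the authors' task or analysis):

```python
import random

# Hypothetical illustration of the description-experience gap: a gamble pays
# 32 with probability 0.1, otherwise 0; the alternative is a sure 3. Described
# probabilities give expected values 3.2 vs 3.0, but a small experienced sample
# often contains no rare win at all, making the gamble look worse than it is
# (the classic underweighting of rare events in decisions from experience).

def experienced_value(p_win=0.1, payoff=32.0, n_samples=10):
    """Mean payoff over a small sample of draws from the gamble."""
    draws = [payoff if random.random() < p_win else 0.0
             for _ in range(n_samples)]
    return sum(draws) / n_samples

described_ev = 0.1 * 32.0    # 3.2: description gives the rare event full weight
sampled_ev = experienced_value()  # varies; ~35% of 10-draw samples contain no win
print(described_ev, sampled_ev)
```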


Vision Research | 2002

Interpolating sampled contours in 3-D: analyses of variability and bias.

Paul A. Warren; Laurence T. Maloney; Michael S. Landy

In two experiments, we examined how observers interpolated the missing parts of sampled, planar contours in 3-D space. We varied (1) contour type (linear or parabolic), (2) orientation of the plane containing the contour and (3) the number of points on a sampled contour. Interpolation performance was very accurate, comparable to results from Vernier tasks. Setting variability was highest along the line of sight and for the parabolic contour. Setting variability did not decrease with increasing number of points on either contour, suggesting that observers do not use all available, relevant information in this task.


Vision Research | 2004

Interpolating sampled contours in 3D: perturbation analyses.

Paul A. Warren; Laurence T. Maloney; Michael S. Landy

In four experiments, observers interpolated parabolic sampled contours confined to planes in three-dimensional space. Each sampled contour consisted of eight visible points, placed irregularly along the otherwise invisible parabolic contour. Observers adjusted an additional point until it fell on the contour. We sought to determine how each visible point influenced interpolation by measuring the effect of slightly perturbing its location. Influence fell rapidly to zero as distance from the interpolated point increased, indicating that human visual interpolation of parabolic contours is local. We compare the measured influence for human observers to that predicted by three standard interpolation algorithms. The results were inconsistent with a fit of a quadratic to the points, but were reasonably consistent with a cubic spline and most consistent with an algorithm that minimizes the variance of angles between neighboring line segments defined by the sampled points.
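
The angle-variance criterion that best matched observers can be sketched as follows (an illustrative reconstruction with made-up sample points, not the authors' code):

```python
import numpy as np

# Sketch of an angle-variance interpolation criterion: insert a candidate
# point into the ordered sample, form the turning angles between consecutive
# segments, and score the candidate by the variance of those angles. The
# preferred setting is the one that keeps curvature most uniform.

def angle_variance(points):
    """Variance of turning angles along an ordered polyline (N x 2 array).
    Assumes gentle turns, so no angle wrap-around handling is needed."""
    segs = np.diff(points, axis=0)
    angles = np.arctan2(segs[:, 1], segs[:, 0])
    return np.var(np.diff(angles))

# Ordered samples from an invisible contour, with a gap between the 2nd and
# 3rd points; candidates are possible settings of the adjustable point.
samples = np.array([[0.0, 0.0], [1.0, 0.8], [3.0, 1.5], [4.0, 1.4]])
candidates = [np.array([2.0, y]) for y in np.linspace(0.5, 2.0, 31)]

def score(candidate):
    polyline = np.vstack([samples[:2], candidate, samples[2:]])
    return angle_variance(polyline)

best = min(candidates, key=score)   # setting minimizing angle variance
print(best)
```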


Journal of Vision | 2012

Does optic flow parsing depend on prior estimation of heading?

Paul A. Warren; Simon K. Rushton; Andrew J. Foulkes

We have recently suggested that neural flow parsing mechanisms act to subtract global optic flow consistent with observer movement to aid in detecting and assessing scene-relative object movement. Here, we examine whether flow parsing can occur independently from heading estimation. To address this question we used stimuli comprising two superimposed optic flow fields of limited-lifetime dots (one planar and one radial). This stimulus gives rise to the so-called optic flow illusion (OFI), in which perceived heading is biased in the direction of the planar flow field. Observers were asked to report the perceived direction of motion of a probe object placed in the OFI stimulus. If flow parsing depends upon a prior estimate of heading then the perceived trajectory should reflect global subtraction of a field consistent with the heading experienced under the OFI. In Experiment 1 we tested this prediction directly, finding instead that the perceived trajectory was biased markedly in the direction opposite to that predicted under the OFI. In Experiment 2 we demonstrate that the results of Experiment 1 are consistent with a positively weighted vector sum of the effects seen when viewing the probe together with individual radial and planar flow fields. These results suggest that flow parsing is not necessarily dependent on prior estimation of heading direction. We discuss the implications of this finding for our understanding of the mechanisms of flow parsing.
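
The vector-sum account in Experiment 2 amounts to a simple linear combination (a sketch with hypothetical weights and bias vectors, purely to illustrate the form of the model, not fitted values from the paper):

```python
import numpy as np

# Hypothetical vector-sum model: the bias in perceived probe motion under the
# combined OFI stimulus is modelled as a positively weighted sum of the biases
# measured with the radial and planar flow fields presented alone.

bias_radial = np.array([-0.6, 0.0])   # bias with the radial field alone
bias_planar = np.array([0.0, -0.3])   # bias with the planar field alone
w_radial, w_planar = 0.8, 0.7         # positive weights (illustrative)

predicted_ofi_bias = w_radial * bias_radial + w_planar * bias_planar
print(predicted_ofi_bias)             # predicted bias under the combined stimulus
```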


Journal of Autism and Developmental Disorders | 2015

Investigating Visual–Tactile Interactions over Time and Space in Adults with Autism

Daniel Poole; Emma Gowen; Paul A. Warren; Ellen Poliakoff

It has been suggested that the sensory symptoms which affect many people with autism spectrum conditions (ASC) may be related to alterations in multisensory processing. Typically, the likelihood of interactions between the senses increases when information is temporally and spatially coincident. We explored visual–tactile interactions in adults with ASC for the first time in two experiments using low-level stimuli. Both participants with ASC and matched neurotypical controls only produced crossmodal interactions to near simultaneous stimuli, suggesting that temporal modulation is unaffected in the adult population. We also provide preliminary evidence that visual–tactile interactions may occur over greater spatial distances in participants with ASC, which merits further exploration.


Multisensory Research | 2015

Adapting the Crossmodal Congruency Task for Measuring the Limits of Visual-Tactile Interactions Within and Between Groups

Daniel Poole; Samuel Couth; Emma Gowen; Paul A. Warren; Ellen Poliakoff

The crossmodal congruency task (CCT) is a commonly used paradigm for measuring visual-tactile interactions and how these may be influenced by discrepancies in space and time between the tactile target and visual distractors. The majority of studies which have used this paradigm have neither measured, nor attempted to control, individual variability in unisensory (tactile) performance. We have developed a version of the CCT in which unisensory baseline performance is constrained to enable comparisons within and between participant groups. Participants were instructed to discriminate between single and double tactile pulses presented to their dominant hand, at their own approximate threshold level. In Experiment 1, visual distractors were presented at -30 ms, 100 ms, 200 ms and 400 ms stimulus onset asynchronies. In Experiment 2, ipsilateral visual distractors were presented 0 cm, 21 cm, and 42 cm vertically from the target hand, and 42 cm in a symmetrical, contralateral position. Distractors presented -30 ms and 0 cm from the target produced a significantly larger congruency effect than at other time points and spatial locations. Thus, the typical limits of visual-tactile interactions were replicated using a version of the task in which baseline performance can be constrained. The usefulness of this approach is supported by the observation that tactile thresholds correlated with self-reported autistic traits in this non-clinical sample. We discuss the suitability of this adapted version of the CCT for measuring visual-tactile interactions in populations where unisensory tactile ability may differ within and between groups.
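
For reference, a crossmodal congruency effect is conventionally scored as the performance cost on incongruent relative to congruent trials (a generic illustration with made-up response times; not data or code from the paper):

```python
import statistics

# Generic scoring of a crossmodal congruency effect (CCE): the response-time
# cost when the visual distractor is incongruent with the tactile target,
# relative to congruent trials. Values below are illustrative only.

congruent_rts = [412, 398, 431, 405, 420]     # ms, congruent trials
incongruent_rts = [455, 462, 448, 470, 441]   # ms, incongruent trials

cce = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
print(f"crossmodal congruency effect: {cce:.1f} ms")
```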

Collaboration


Top co-authors of Paul A. Warren:

Emma Gowen, University of Manchester
Wael El-Deredy, University of Manchester
Andrew Howes, University of Birmingham
Daniel Poole, University of Manchester
Erich W. Graf, University of Southampton