
Publication


Featured research published by Peter Scarfe.


Journal of Vision | 2011

Statistically optimal integration of biased sensory estimates.

Peter Scarfe; Paul B. Hibbard

Experimental investigations of cue combination typically assume that individual cues provide noisy but unbiased sensory information about world properties. However, in numerous instances, including real-world settings, observers systematically misestimate properties of the world from sensory information. Two such instances are the estimation of shape from stereo and motion cues. Bias in single-cue estimates therefore poses a problem for cue combination if the visual system is to maintain accuracy with respect to the world, particularly because the magnitude of bias in individual cues is typically unknown. Here, we show that observers fail to take account of the magnitude of bias in each cue during combination and instead combine cues in proportion to their reliability so as to increase the precision of the combined-cue estimate. This suggests that observers were unaware of the bias in their sensory estimates. Our analysis of cue combination shows that there is a definable range of circumstances in which combining information from biased cues, rather than vetoing one or other cue, can still be beneficial by reducing error in the final estimate.
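
The combination rule described here is the standard reliability-weighted (inverse-variance) scheme. Below is a minimal sketch of that computation; the cue values and noise levels are hypothetical, not data from the paper.

    import numpy as np

    def combine_cues(estimates, sigmas):
        """Reliability-weighted cue combination: each cue is weighted by its
        inverse variance, which maximises the precision of the combined estimate."""
        estimates = np.asarray(estimates, dtype=float)
        reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        weights = reliabilities / reliabilities.sum()
        combined = weights @ estimates
        combined_sigma = np.sqrt(1.0 / reliabilities.sum())
        return combined, combined_sigma

    # Hypothetical stereo and motion estimates of the same depth (cm). The
    # weights depend only on reliability, not accuracy, so a biased but
    # reliable cue pulls the combined estimate away from the true value.
    depth, sigma = combine_cues(estimates=[10.0, 14.0], sigmas=[1.0, 2.0])
    print(depth, sigma)  # 10.8, ~0.89: more precise than either cue alone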


Vision Research | 2006

Disparity-defined objects moving in depth do not elicit three-dimensional shape constancy

Peter Scarfe; Paul B. Hibbard

Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351-1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, when an object moves, additional information is provided from which 3D Euclidean shape can be recovered, be this through the addition of structure-from-motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343-349], or the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31-47]. Here, we investigated shape constancy for objects moving in depth. We found that, to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.
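
The distance-dependent distortion described above falls out of the geometry of disparity scaling. A minimal numerical sketch, assuming the standard small-angle approximation (depth is roughly relative disparity times distance squared over interocular separation) and a hypothetical compressive estimate of viewing distance; the parameter values are illustrative only, not taken from the paper.

    # If the visual system scales disparity using a misestimated distance
    # D_hat instead of the true distance D, recovered depth is distorted by
    # the factor (D_hat / D)**2.
    I = 0.065  # interocular separation in metres

    def perceived_depth(true_depth, D, D_hat):
        disparity = true_depth * I / D ** 2   # disparity cast by the true depth
        return disparity * D_hat ** 2 / I     # depth recovered using D_hat

    # Hypothetical compressive distance estimate, regressing towards ~0.8 m:
    # depth is overestimated at near distances, underestimated at far ones.
    for D in (0.4, 0.8, 2.0):
        D_hat = 0.5 * D + 0.4
        print(D, perceived_depth(0.1, D, D_hat))  # 0.225, 0.1, 0.049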


Journal of Vision | 2015

Using high-fidelity virtual reality to study perception in freely moving observers

Peter Scarfe; Andrew Glennerster

Technological innovations have had a profound influence on how we study sensory perception in humans and other animals. One example was the introduction of affordable computers, which radically changed the nature of visual experiments. It is clear that vision research is now at the cusp of a similar shift, this time driven by the use of commercially available, low-cost, high-fidelity virtual reality (VR). In this review we will focus on: (a) the research questions VR allows experimenters to address and why these research questions are important, (b) the things that need to be considered when using VR to study human perception, ...


Journal of Vision | 2010

Motion drag induced by global motion Gabor arrays

Peter Scarfe; Alan Johnston

The perceived position of stationary objects can appear shifted in space due to the presence of motion in another part of the visual field (motion drag). We investigated this phenomenon with global motion Gabor arrays. These arrays consist of randomly oriented Gabors (Gaussian-windowed sinusoidal luminance modulations) whose speed is set such that the normal component of each Gabor's motion is consistent with a single 2D global velocity. Global motion arrays were shown to alter the perceived position of nearby stationary objects. The size of this shift was the same as that induced by arrays of Gabors uniformly oriented in the direction of global motion and drifting at the global motion speed. Both types of array were found to be robust to large changes in array density and exhibited the same time course of effect. The motion drag induced by the global motion arrays was consistent with the estimated 2D global velocity, rather than with the component of the local velocities in the global motion direction. This suggests that the motion signal that induces motion drag originates at or after a stage at which local motion signals have been integrated to produce a global motion estimate.
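
The speed assignment that makes such an array consistent with a single global velocity is the cosine rule described in the harmonic vector average paper below: a Gabor whose drift (normal) direction makes angle theta with the global motion direction is assigned speed |V| cos(theta). A minimal sketch, with illustrative parameter values:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical global motion: 4 deg/s in direction 0 rad (rightwards).
    global_speed, global_dir = 4.0, 0.0

    # Randomly oriented Gabors; each signals motion only along its normal,
    # so normals are kept within +/-90 deg of the global direction.
    normal_dirs = global_dir + rng.uniform(-np.pi / 2, np.pi / 2, size=50)

    # Normal component of the global velocity for each element: this is what
    # makes the whole array consistent with one 2D global velocity.
    drift_speeds = global_speed * np.cos(normal_dirs - global_dir)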


Vision Research | 2013

Reverse correlation reveals how observers sample visual information when estimating three-dimensional shape

Peter Scarfe; Paul B. Hibbard

Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape and whether accurate estimation of 3D shape is at all possible. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random dot stereograms of elliptical cylinders), we show that, contrary to these findings, the simple adoption of a more naturalistic viewing angle completely eliminates this bias. Using behavioural psychophysics, coupled with a novel surface-based reverse-correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.
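
For readers unfamiliar with reverse correlation, the sketch below shows the generic classification-image logic on simulated data: average the stimulus noise separately for each response class and take the difference. The paper's surface-based variant operates on surface shape rather than raw noise samples, so treat this only as an illustration of the general technique.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_samples = 2000, 64

    # Simulated observer: responds "deeper" when the correlation of the trial's
    # noise with a hypothetical internal template exceeds its internal noise.
    template = np.sin(np.linspace(0.0, np.pi, n_samples))
    noise = rng.normal(size=(n_trials, n_samples))
    responses = noise @ template + rng.normal(size=n_trials) > 0

    # Classification image: mean noise on "deeper" trials minus mean noise on
    # the rest. It peaks where the observer actually sampled information.
    classification_image = noise[responses].mean(0) - noise[~responses].mean(0)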


The Journal of Neuroscience | 2014

Humans Use Predictive Kinematic Models to Calibrate Visual Cues to Three-Dimensional Surface Slant

Peter Scarfe; Andrew Glennerster

When the sensory consequences of an action are systematically altered, our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball's bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants, and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
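
A minimal sketch of the kinematics the task relies on, assuming an ideal spin-free bounce: the surface reflects the ball's velocity about its surface normal, so adjusting slant changes where the ball goes. The slant manipulation can then be thought of as bouncing the ball off a surface whose physical slant differs from the visually signalled one. All values are illustrative, not taken from the paper.

    import numpy as np

    def bounce(velocity, slant_deg):
        """Reflect a 2D velocity off a surface slanted slant_deg from
        horizontal: v' = v - 2 (v . n) n, with n the surface unit normal."""
        s = np.radians(slant_deg)
        n = np.array([-np.sin(s), np.cos(s)])  # unit normal of the surface
        v = np.asarray(velocity, dtype=float)
        return v - 2.0 * np.dot(v, n) * n

    incoming = np.array([1.0, -2.0])  # ball falling down and rightwards
    print(bounce(incoming, 0.0))      # flat surface: [1. 2.]
    print(bounce(incoming, 10.0))     # visually signalled slant
    print(bounce(incoming, 15.0))     # hypothetical altered physical slant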


Frontiers in Computational Neuroscience | 2013

The role of the harmonic vector average in motion integration.

Alan Johnston; Peter Scarfe

The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as its magnitude grows with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC solution for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
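
The abstract does not spell out the computation, so the following is a sketch under one common reading of the HVA: invert each local velocity vector (same direction, reciprocal speed), take the vector average, and invert back. For normal velocities sampled symmetrically about the global direction this recovers the IOC velocity exactly, consistent with the claim above; treat the formulation as an editor's reading rather than the paper's exact algorithm.

    import numpy as np

    def invert(v):
        """Geometric inversion: same direction, reciprocal length."""
        return v / np.dot(v, v)

    def harmonic_vector_average(velocities):
        return invert(np.mean([invert(v) for v in velocities], axis=0))

    # Global velocity V = (2, 0). Local normal velocities are projections of V
    # onto unit normals at +/-45 deg: an unbiased (symmetric) sample.
    V = np.array([2.0, 0.0])
    normals = [np.array([np.cos(a), np.sin(a)]) for a in (np.pi / 4, -np.pi / 4)]
    local_velocities = [np.dot(V, n) * n for n in normals]

    print(harmonic_vector_average(local_velocities))  # [2. 0.] recovers V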


Journal of Vision | 2018

Perception of Object Movement in Virtual Reality

Rowan Hughes; Peter Scarfe; Paul B. Hibbard; Loes van Dam

Virtual Reality (VR) represents a paradigm shift in terms of user experience and immersion within a virtual environment. In the work of van Dam et al. (2012, 2016), the authors investigated how proprioceptive and visual information were integrated over time in a distal display. We extend this approach to investigate how visual information is integrated over time in VR, and whether the increased latent information (parallax, depth, etc.) plays a role in participant performance.


Journal of Vision | 2018

Multisensory Detection: Using Vision and Haptics to detect hidden objects.

Julie Skevik; Peter Scarfe

Consistent with research in the visual domain [3], we examined whether Probability Summation (PS) or Additive Summation (AS) provided a better fit to our data. Despite having a large dataset, we found no evidence that one model fit better than the other (19/40 better fits for PS and 21/40 better fits for AS).

∆AIC per participant at each noise level:

Noise      P1     P2      P3     P4     P5     P6     P7     P8      P9    P10
5%       1.05  -0.56  -10.82  -0.91  -3.78  -1.94  -0.14  -2.71  -11.83  -0.37
10%     -3.38   1.25   -7.96   0.91   5.98  -0.23   4.13   2.93    7.16  -5.07
15%     14.34   1.63   -5.20   7.85   3.43   5.58  -6.17  -5.66   -2.73   8.53
20%      1.49  10.62   -5.79  13.85  -7.63   6.90   1.67  17.23   -2.44  -0.12

Negative values indicate that Additive Summation is the better fit; positive values indicate that Probability Summation is the better fit.
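
For reference, the comparison behind the table uses the Akaike Information Criterion, AIC = 2k - 2 ln L for a model with k parameters and maximised likelihood L. Assuming the sign convention implied by the caption, ∆AIC = AIC(AS) - AIC(PS), so that negative values favour Additive Summation. The log-likelihoods and parameter counts below are hypothetical.

    def aic(log_likelihood, n_params):
        """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
        return 2 * n_params - 2 * log_likelihood

    # Hypothetical fits for one participant at one noise level.
    ll_ps, k_ps = -412.3, 3  # Probability Summation
    ll_as, k_as = -410.9, 3  # Additive Summation

    delta_aic = aic(ll_as, k_as) - aic(ll_ps, k_ps)
    print(delta_aic)  # -2.8: negative, so Additive Summation fits better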


PLOS ONE | 2016

Binocular depth judgments on smoothly curved surfaces

Rebecca L. Hornsey; Paul B. Hibbard; Peter Scarfe

Binocular disparity is an important cue to depth, allowing us to make very fine discriminations of the relative depth of objects. In complex scenes, this sensitivity depends on the particular shape and layout of the objects viewed. For example, judgments of the relative depths of points on a smoothly curved surface are less accurate than those for points in empty space. It has been argued that this occurs because depth relationships are represented accurately only within a local spatial area. A consequence of this is that, when judging the relative depths of points separated by depth maxima and minima, information must be integrated across separate local representations. This integration, by adding more stages of processing, might be expected to reduce the accuracy of depth judgments. We tested this idea directly by measuring how accurately human participants could report the relative depths of two dots, presented with different binocular disparities. In the first (Two Dot) condition, the two dots were presented in front of a square grid. In the second (Three Dot) condition, an additional dot was presented midway between the target dots, at a range of depths both nearer and further than the target dots. In the final (Surface) condition, the target dots were placed on a smooth surface defined by binocular disparity cues; in some trials, this contained a depth maximum or minimum between the target dots. In the Three Dot condition, performance was impaired when the central dot was presented with a large disparity, in line with predictions. In the Surface condition, performance was worst when the midpoint of the surface was at a similar distance to the targets, and relatively unaffected when there was a large depth maximum or minimum present. These results are not consistent with the idea that depth order is represented only within a local spatial area.

Collaboration


Dive into Peter Scarfe's collaborations.

Top Co-Authors

Alan Johnston

University of Nottingham
