

Publication


Featured research published by Andrew J. Schofield.


Vision Research | 1999

Sensitivity to modulations of luminance and contrast in visual white noise: separate mechanisms with similar behaviour

Andrew J. Schofield; Mark A. Georgeson

Human vision can detect spatiotemporal information conveyed by first-order modulations of luminance and by second-order, non-Fourier modulations of image contrast. Models for second-order motion have suggested two filtering stages separated by a rectifying nonlinearity. We explore here the encoding of stationary first-order and second-order gratings, and their interaction. Stimuli consisted of 2-D binary, broad-band, static, visual noise sinusoidally modulated in luminance (LM, first-order) or contrast (CM, second-order). Modulation thresholds were measured in a two-interval forced-choice staircase procedure. Sensitivity curves for LM and CM had similar shape as a function of spatial frequency, and as a function of the size of a circular Gaussian blob of modulation. Weak background gratings present in both intervals produced order-specific facilitation: LM background facilitated LM detection (the dipper function) and CM facilitated CM detection. LM did not facilitate CM, nor vice-versa, neither in-phase nor out-of-phase, and this is strong evidence that LM and CM are detected via separate mechanisms. This conclusion was further supported by an experiment on the detection of LM/CM mixtures. From a general mathematical model and a specific computer simulation we conclude that a single mechanism sensitive to both LM and CM cannot predict the pattern of results for mixtures, while a model containing separate pathways for LM and CM, followed by energy summation, does so successfully and is quantitatively consistent with the finding of order-specific facilitation.
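The LM and CM stimuli described in this abstract can be sketched numerically: a binary noise carrier whose mean luminance (first-order) or amplitude (second-order) is modulated by a sinusoid. This is a minimal illustration; all names and parameter values are assumptions, not those used in the experiments.

```python
import numpy as np

def make_stimuli(size=256, cycles=4, m=0.5, mean_lum=0.5, noise_contrast=0.2, seed=0):
    """Binary noise carrying a sinusoidal luminance (LM) or contrast (CM) modulation.

    Illustrative parameters only. LM varies the local mean luminance;
    CM varies the noise amplitude while leaving mean luminance flat.
    """
    rng = np.random.default_rng(seed)
    noise = noise_contrast * mean_lum * rng.choice([-1.0, 1.0], size=(size, size))
    mod = np.sin(2 * np.pi * cycles * np.arange(size) / size)[None, :]  # vertical grating
    lm = mean_lum * (1 + m * mod) + noise   # first-order: mean luminance modulated
    cm = mean_lum + noise * (1 + m * mod)   # second-order: noise contrast modulated
    return lm, cm

lm, cm = make_stimuli()
```

Note that the CM stimulus has the same expected luminance everywhere; only the noise amplitude follows the sinusoid, which is why a purely linear (first-order) mechanism is blind to it.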


Vision Research | 2003

Sensitivity to contrast modulation: the spatial frequency dependence of second-order vision.

Andrew J. Schofield; Mark A. Georgeson

We consider the overall shape of the second-order modulation sensitivity function (MSF). Because second-order modulations of local contrast or orientation require a carrier signal, it is necessary to evaluate modulation sensitivity against a variety of carriers before reaching a general conclusion about second-order sensitivity. Here we present second-order sensitivity functions for new carrier types (low pass (1/f) noise, and high pass noise) and demonstrate that, when first-order artefacts have been accounted for, the shapes of the resulting MSFs are similar to one another and to those for white and broad band noise. They are all low pass with a likely upper frequency limit in the range 10-20 c/deg, suggesting that detection of second-order stimuli is relatively insensitive to the structure of the carrier signal. This result contrasts strongly with that found for (first-order) luminance modulations of the same noise types. Here the noise acts as a mask, and each noise type most strongly masks those frequencies that are dominant in its spectrum. Thus the shapes of second-order MSFs are largely independent of the spectrum of their noise carrier, but first-order CSFs depend on the spectrum of an additive noise mask. This provides further evidence for the separation of first- and second-order vision and characterises second-order vision as a low pass mechanism.


Perception | 2000

What Does Second-Order Vision See in an Image?

Andrew J. Schofield

The human visual system is sensitive to both first-order variations in luminance and second-order variations in local contrast and texture. Although there is some debate about the nature of second-order vision and its relationship to first-order processing, there is now a body of results showing that they are processed separately. However, the amount, and nature, of second-order structure present in the natural environment is unclear. This is an important question because, if natural scenes contain little second-order structure in addition to first-order signals, the notion of a separate second-order system would lack ecological validity. Two models of second-order vision were applied to a number of well-calibrated natural images. Both models consisted of a first stage of oriented spatial filters followed by a rectifying nonlinearity and then a second set of filters. The models differed in terms of the connectivity between first-stage and second-stage filters. Output images taken from the models indicate that natural images do contain useful second-order structure. Specifically, the models reveal variations in texture and features defined by such variations. Areas of high contrast (but not necessarily high luminance) are also highlighted by the models. Second-order structure—as revealed by the models—did not correlate with the first-order profile of the images, suggesting that the two types of image ‘content’ may be statistically independent.
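The filter-rectify-filter architecture common to both models in this abstract can be sketched as follows. This is a minimal, isotropic version using difference-of-Gaussians filters; the models in the paper use banks of oriented filters, so this sketch illustrates only the three-stage structure, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frf_response(image, sigma1=1.0, sigma2=8.0):
    """Minimal filter-rectify-filter (FRF) sketch.

    Stage 1: fine band-pass filter (difference of Gaussians) tuned to the carrier.
    Rectify: full-wave rectification exposes the local contrast envelope.
    Stage 2: coarser band-pass filter tuned to the second-order modulation.
    Isotropic filters stand in for the oriented filter banks of the real models.
    """
    stage1 = gaussian_filter(image, sigma1) - gaussian_filter(image, 2 * sigma1)
    rectified = np.abs(stage1)                  # nonlinearity between the two stages
    stage2 = gaussian_filter(rectified, sigma2) - gaussian_filter(rectified, 2 * sigma2)
    return stage2
```

Applied to a contrast-modulated noise field with a flat luminance profile, the second-stage output follows the contrast envelope, which is structure a single linear filtering stage cannot recover.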


Journal of Vision | 2007

Asymmetric transfer of the dynamic motion aftereffect between first- and second-order cues and among different second-order cues

Andrew J. Schofield; Timothy Ledgeway

Recent work on motion processing has suggested a distinction between first-order cues (such as luminance modulation [LM]) and second-order cues (such as local contrast modulation [CM]). We studied interactions between moving LM, CM, and orientation modulation (OM) first comparing their spatial- and temporal-frequency sensitivity. We then tested for the transfer of the dynamic motion aftereffect (dMAE) between the three cues, matched for visibility. Observers adapted to moving, 0.5-c/deg horizontal modulations for 2 min (with 10 s top-ups). Relatively strong dMAEs were found when the adaptation and test patterns were defined by the same cue (i.e., both LM, both CM, or both OM); these effects were tuned for spatial frequency in the case of LM and CM. There was a partial transfer of the dMAE from LM to CM and OM; this transferred effect seemed to lose its tuning. The aftereffect transferred well from CM to OM and retained its tuning. There was little or no transfer from CM to LM or from OM to CM or LM. This asymmetric transfer of the dMAE between first- and second-order cues and between the second-order cues suggests some degree of separation between the mechanisms that process them.


Vision Research | 2000

The temporal properties of first- and second-order vision.

Andrew J. Schofield; Mark A. Georgeson

Vision is sensitive to first-order modulations of luminance and second-order modulations of image contrast. There is now a body of evidence that the two types of modulation are detected by separate mechanisms. Some previous experiments on motion detection have suggested that the second-order system is quite sluggish compared to the first-order system. Here we derive temporal properties of first- and second-order vision at threshold from studies of temporal integration and two-pulse summation. Three types of modulation were tested: luminance gratings alone, luminance modulations added to dynamic visual noise, and contrast modulations of dynamic noise. Data from the two-pulse summation experiment were used to derive impulse response functions for the three types of stimulus. These were then used to predict performance in the temporal integration experiment. Temporal frequency response functions were obtained as the Fourier transform of impulse responses derived from data averaged across two observers. The response to noise-free luminance gratings of 2 c/deg was bi-phasic and transient in the time domain, and bandpass in the frequency domain. The addition of dynamic noise caused the response to become mono-phasic, sustained and low-pass. The response to contrast modulated noise (second-order) was also mono-phasic, sustained and low-pass, with only a slightly longer integration time than in the first-order case. The ultimate roll-off at high frequencies was about the same as for the first-order case. We conclude that second-order vision may not be as sluggish as previously thought.
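The step from impulse response to temporal frequency response described above is a Fourier transform, which can be sketched as below. The bi-phasic and mono-phasic shapes here are generic stand-ins (a Gabor-like pulse and a Gaussian), not the functions fitted in the paper.

```python
import numpy as np

def frequency_response(impulse, dt):
    """Amplitude spectrum of a sampled impulse response via the FFT."""
    freqs = np.fft.rfftfreq(len(impulse), d=dt)
    amplitude = np.abs(np.fft.rfft(impulse)) * dt
    return freqs, amplitude

dt = 0.001                       # 1 ms sampling
t = np.arange(0, 0.4, dt)        # 400 ms window

# Generic stand-ins for the derived responses (not the fitted functions):
# a bi-phasic (transient) and a mono-phasic (sustained) impulse response.
biphasic = np.exp(-((t - 0.15) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * (t - 0.15))
monophasic = np.exp(-((t - 0.15) / 0.05) ** 2)

f_bi, a_bi = frequency_response(biphasic, dt)
f_mono, a_mono = frequency_response(monophasic, dt)

# Bi-phasic responses are band-pass (spectral peak away from 0 Hz);
# mono-phasic responses are low-pass (spectral peak at 0 Hz).
```

This is the qualitative correspondence the abstract relies on: adding dynamic noise, or moving to second-order modulation, shifts the impulse response from the bi-phasic to the mono-phasic regime and hence the frequency response from bandpass to low-pass.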


European Conference on Computer Vision | 2010

Correlation-based intrinsic image extraction from a single image

Xiaoyue Jiang; Andrew J. Schofield; Jeremy L. Wyatt

Intrinsic images represent the underlying properties of a scene such as illumination (shading) and surface reflectance. Extracting intrinsic images is a challenging, ill-posed problem. Human performance on tasks such as shadow detection and shape-from-shading is improved by adding colour and texture to surfaces. In particular, when a surface is painted with a textured pattern, correlations between local mean luminance and local luminance amplitude promote the interpretation of luminance variations as illumination changes. Based on this finding, we propose a novel feature, local luminance amplitude, to separate illumination and reflectance, and a framework to integrate this cue with hue and texture to extract intrinsic images. The algorithm uses steerable filters to separate images into frequency and orientation components and constructs shading and reflectance images from weighted combinations of these components. Weights are determined by correlations between corresponding variations in local luminance, local amplitude, colour and texture. The intrinsic images are further refined by ensuring the consistency of local texture elements. We test this method on surfaces photographed under different lighting conditions. The effectiveness of the algorithm is demonstrated by the correlation between our intrinsic images and ground truth shading and reflectance data. Luminance amplitude was found to be a useful cue. Results are also presented for natural images.


Vision Research | 2006

Local luminance amplitude modulates the interpretation of shape-from-shading in textured surfaces

Andrew J. Schofield; Gillian S. Hesse; Paul B. Rock; Mark A. Georgeson

The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape-from-shading). But the recovery of shape would be invalid if the luminance changes actually arose from changes in reflectance. So how does vision distinguish variation in illumination from variation in reflectance to avoid illusory depth? When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in local luminance amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used depth mapping and paired comparison methods to show that modulations of local luminance amplitude play a role in the interpretation of shape-from-shading. The shape-from-shading percept was enhanced when LM and AM co-varied (in-phase) and was disrupted when they were out of phase or (to a lesser degree) when AM was absent. The perceptual differences between cue types (in-phase vs out-of-phase) were enhanced when the two cues were present at different orientations within a single image. Our results suggest that when LM and AM co-vary (in-phase) this indicates that the source of variation is illumination (caused by undulations of the surface), rather than surface reflectance. Hence, the congruence of LM and AM is a cue that supports a shape-from-shading interpretation.
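The LM/AM covariation cue at the heart of this abstract can be illustrated with a toy 1-D computation: multiplicative shading scales local mean luminance and local amplitude together, whereas an additive reflectance change shifts the mean while leaving the amplitude roughly constant. The window-based LM/AM estimator below is an assumption for illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(512)
texture = 1.0 + 0.3 * rng.choice([-1.0, 1.0], size=512)  # binary albedo texture
modulation = 0.5 * np.sin(2 * np.pi * x / 128)

shaded = texture * (1.0 + modulation)   # illumination: multiplicative, scales mean and amplitude
painted = texture + modulation          # reflectance change: additive, shifts the mean only

def local_mean_amp(signal, win=32):
    """Local mean (LM) and local amplitude (AM) over non-overlapping windows."""
    s = signal.reshape(-1, win)
    return s.mean(axis=1), s.std(axis=1)

lm_s, am_s = local_mean_amp(shaded)
lm_p, am_p = local_mean_amp(painted)

r_shaded = np.corrcoef(lm_s, am_s)[0, 1]    # strong positive: LM and AM co-vary in phase
r_painted = np.corrcoef(lm_p, am_p)[0, 1]   # no systematic LM/AM relation expected
```

The contrast between the two correlations is the statistical signature that makes in-phase LM and AM a valid cue for shading rather than paint.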


Cognitive Neuropsychology | 2008

A tale of two agnosias: Distinctions between form and integrative agnosia

M J Riddoch; Glyn W. Humphreys; Nabeela Akhtar; Harriet A. Allen; Robert Bracewell; Andrew J. Schofield

The performance of two patients with visual agnosia was compared across a number of tests examining visual processing. The patients were distinguished by having dorsal and medial ventral extrastriate lesions. While inanimate objects were disadvantaged for the patient with a dorsal extrastriate lesion, animate items were disadvantaged for the patient with the medial ventral extrastriate lesion. The patients also showed contrasting patterns of performance on the Navon Test: The patient with a dorsal extrastriate lesion demonstrated a local bias while the patient with a medial ventral extrastriate lesion had a global bias. We propose that the dorsal and medial ventral visual pathways may be characterized at an extrastriate level by differences in local relative to more global visual processing and that this can link to visually based category-specific deficits in processing.


Journal of Vision | 2010

What is second-order vision for? Discriminating illumination versus material changes

Andrew J. Schofield; Paul B. Rock; Peng Sun; Xiaoyue Jiang; Mark A. Georgeson

The human visual system is sensitive to second-order modulations of the local contrast (CM) or amplitude (AM) of a carrier signal. Second-order cues are detected independently of first-order luminance signals; however, it is not clear why vision should benefit from second-order sensitivity. Analysis of the first- and second-order contents of natural images suggests that these cues tend to occur together, but their phase relationship varies. We have shown that in-phase combinations of LM and AM are perceived as a shaded corrugated surface whereas the anti-phase combination can be seen as corrugated when presented alone or as a flat material change when presented in a plaid containing the in-phase cue. We now extend these findings using new stimulus types and a novel haptic matching task. We also introduce a computational model based on initially separate first- and second-order channels that are combined within orientation and subsequently across orientation to produce a shading signal. Contrast gain control allows the LM + AM cue to suppress responses to the LM - AM when presented in a plaid. Thus, the model sees LM - AM as flat in these circumstances. We conclude that second-order vision plays a key role in disambiguating the origin of luminance changes within an image.


Vision Research | 2011

Sun and sky: does human vision assume a mixture of point and diffuse illumination when interpreting shape-from-shading?

Andrew J. Schofield; Paul B. Rock; Mark A. Georgeson

People readily perceive smooth luminance variations as being due to the shading produced by undulations of a 3-D surface (shape-from-shading). In doing so, the visual system must simultaneously estimate the shape of the surface and the nature of the illumination. Remarkably, shape-from-shading operates even when both these properties are unknown and neither can be estimated directly from the image. In such circumstances humans are thought to adopt a default illumination model. A widely held view is that the default illuminant is a point source located above the observer's head. However, some have argued instead that the default illuminant is a diffuse source. We now present evidence that humans may adopt a flexible illumination model that includes both diffuse and point source elements. Our model estimates a direction for the point source and then weights the contribution of this source according to a bias function. For most people the preferred illuminant direction is overhead with a strong diffuse component.
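A mixture of point and diffuse illumination like the one proposed above can be sketched as a weighted sum of a Lambertian point-source term and a uniform diffuse term. The weighting scalar here is a crude stand-in for the paper's bias function, and the whole formulation is an illustrative assumption rather than the authors' model.

```python
import numpy as np

def shading(normals, light_dir, w_point=0.6):
    """Luminance from a mixture of point and diffuse illumination.

    normals: (N, 3) unit surface normals; light_dir: unit vector toward the source.
    w_point weights the Lambertian point-source term against a uniform diffuse
    ("sky") term; it stands in for the paper's bias function.
    """
    point = np.clip(normals @ light_dir, 0.0, None)   # Lambertian point-source term
    diffuse = 1.0                                     # uniform diffuse term
    return (1.0 - w_point) * diffuse + w_point * point

# A sinusoidally corrugated surface (height = sin x) lit from directly overhead:
x = np.linspace(0, 2 * np.pi, 100)
slope = np.cos(x)                                     # surface gradient d(height)/dx
normals = np.stack([-slope, np.zeros_like(x), np.ones_like(x)], axis=1)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
lum = shading(normals, np.array([0.0, 0.0, 1.0]))
```

With a purely diffuse illuminant (w_point = 0) the corrugation produces no luminance variation at all, which is why some point-source component is needed for shape-from-shading to work in this toy setting.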

Collaboration


Dive into Andrew J. Schofield's collaborations.

Top Co-Authors

Paul B. Rock
University of Birmingham

Peng Sun
University of California

Xiaoyue Jiang
Northwestern Polytechnical University