Publication


Featured research published by Wendy J. Adams.


Psychological Science | 2010

High-Level Face Adaptation Without Awareness

Wendy J. Adams; Katie Gray; Matthew Garner; Erich W. Graf

When a visual stimulus is suppressed from awareness, processing of the suppressed image is necessarily reduced. Although adaptation to simple image properties such as orientation still occurs, adaptation to high-level properties such as face identity is eliminated. Here we show that emotional facial expression continues to be processed even under complete suppression, as indexed by substantial facial expression aftereffects.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2003

Bayesian modeling of cue interaction: bistability in stereoscopic slant perception

Raymond van Ee; Wendy J. Adams; Pascal Mamassian

Our two eyes receive different views of a visual scene, and the resulting binocular disparities enable us to reconstruct its three-dimensional layout. However, the visual environment is also rich in monocular depth cues. We examined the resulting percept when observers view a scene in which there are large conflicts between the surface slant signaled by binocular disparities and the slant signaled by monocular perspective. For a range of disparity-perspective cue conflicts, many observers experience bistability: They are able to perceive two distinct slants and to flip between the two percepts in a controlled way. We present a Bayesian model that describes the quantitative aspects of perceived slant on the basis of the likelihoods of both perspective and disparity slant information combined with prior assumptions about the shape and orientation of objects in the scene. Our Bayesian approach can be regarded as an overarching framework that allows researchers to study all cue-integration aspects, including perceptual decisions, in a unified manner.
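
The abstract describes a Bayesian cue-combination scheme without giving its equations. As a rough illustration of the standard Gaussian form of that idea (not the authors' actual model), the sketch below fuses a disparity-defined slant, a perspective-defined slant and a prior by precision-weighted averaging; all parameter values and the name combine_cues are illustrative assumptions.

```python
import numpy as np

def combine_cues(mu_disparity, sd_disparity, mu_perspective, sd_perspective,
                 mu_prior, sd_prior):
    """Precision-weighted (Gaussian) cue combination.

    Each cue and the prior are modelled as a Gaussian over surface slant;
    the posterior mean is the reliability-weighted average of their means
    and the posterior variance is the inverse of the summed precisions.
    """
    precisions = np.array([1.0 / sd_disparity**2,
                           1.0 / sd_perspective**2,
                           1.0 / sd_prior**2])
    means = np.array([mu_disparity, mu_perspective, mu_prior])
    posterior_var = 1.0 / precisions.sum()
    posterior_mean = (precisions * means).sum() * posterior_var
    return posterior_mean, np.sqrt(posterior_var)

# Example: disparity signals 30 deg of slant, perspective signals 10 deg,
# with a weak prior for fronto-parallel (0 deg) surfaces.
mean, sd = combine_cues(30.0, 4.0, 10.0, 6.0, 0.0, 20.0)
print(f"posterior slant estimate: {mean:.1f} deg (sd {sd:.1f} deg)")
```

With a single Gaussian likelihood per cue the posterior is unimodal, so this sketch captures only the weighted-averaging regime; the bistability reported in the paper requires likelihoods or priors with enough structure to leave two competing slant interpretations.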


Nature Neuroscience | 2001

Adaptation to three-dimensional distortions in human vision

Wendy J. Adams; Martin S. Banks; Raymond van Ee

When people get new glasses, they often experience distortions in the apparent three-dimensional layout of the environment; the distortions fade away in a week or so. Here we asked observers to wear a horizontal magnifier in front of one eye for several days, causing them to initially perceive large three-dimensional distortions. We found that adaptation to the magnifier was not caused by changes in the weights given to disparity and texture, or by monocular adaptation, but rather by a change in the mapping between retinal disparity and perceived slant.


Journal of Vision | 2009

The spatial scale of perceptual memory in ambiguous figure perception.

Tomas Knapen; Jan Brascamp; Wendy J. Adams; Erich W. Graf

Ambiguous visual stimuli highlight the constructive nature of vision: perception alternates between two plausible interpretations of unchanging input. However, when a previously viewed ambiguous stimulus reappears, its earlier perception almost entirely determines the new interpretation; memory disambiguates the input. Here, we investigate the spatial properties of this perceptual memory, taking into account strong anisotropies in percept preference across the visual field. Countering previous findings, we show that perceptual memory is not confined to the location in which it was instilled. Rather, it spreads to noncontiguous regions of the visual field, falling off at larger distances. Furthermore, this spread of perceptual memory takes place in a frame of reference that is tied to the surface of the retina. These results place the neural locus of perceptual memory in retinotopically organized sensory cortical areas, with implications for the wider function of perceptual memory in facilitating stable vision in natural, dynamic environments.


Proceedings of the Royal Society of London B: Biological Sciences | 2004

The effects of task and saliency on latencies for colour and motion processing

Wendy J. Adams; Pascal Mamassian

In human visual perception, there is evidence that different visual attributes, such as colour, form and motion, have different neural processing latencies. Specifically, recent studies have suggested that colour changes are processed faster than motion changes. We propose that the processing latencies should not be considered as fixed quantities for different attributes, but instead depend upon attribute salience and the observer's task. We asked observers to respond to high- and low-salience colour and motion changes in three different tasks. The tasks varied from having a strong motor component to having a strong perceptual component. Increasing salience led to shorter processing times in all three tasks. We also found an interaction between task and attribute: motion was processed more quickly in reaction-time tasks, whereas colour was processed more quickly in more perceptual tasks. Our results caution against making direct comparisons between latencies for processing different visual attributes without equating salience or considering task effects. More salient attributes are processed faster than less salient ones, and attributes that are critical for the task are also processed more quickly.


Nature | 1999

Robust and optimal use of information in stereo vision

John Porrill; John P. Frisby; Wendy J. Adams; David Buckley

Differences between the left and right eyes' views of the world carry information about three-dimensional scene structure and about the position of the eyes in the head. The contemporary Bayesian approach to perception implies that human performance in using this source of eye-position information can be analysed most usefully by comparison with the performance of a statistically optimal observer. Here we argue that the comparison observer should also be statistically robust, and we find that this requirement leads to qualitatively new behaviours. For example, when presented with a class of stereoscopic stimuli containing inconsistent information about eccentricity of gaze, estimates of this gaze parameter recorded from one robust ideal observer bifurcate at a critical value of stimulus inconsistency. We report an experiment in which human observers also show this phenomenon, and we use the experimentally determined critical value to estimate the vertical acuity of the visual system. The Bayesian analysis also provides a highly reliable and biologically plausible algorithm that can recover eye positions even before the classic stereo-correspondence problem is solved, that is, before deciding which features in the left and right images are to be matched.
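
The robust ideal observer itself is not specified in this abstract, but the bifurcation it mentions is a generic property of robust estimation. The sketch below is a loose illustration rather than the paper's model: it fits a single gaze-like parameter to two inconsistent cue values with a redescending (Geman-McClure) loss, so that for small conflicts the estimate sits between the cues while beyond a critical conflict the single minimum splits into two. The loss function, cue values and scale constant are assumptions made for the example.

```python
import numpy as np

def geman_mcclure(residual, scale=1.0):
    """Redescending robust loss: quadratic near zero, saturating for outliers."""
    return residual**2 / (residual**2 + scale**2)

def robust_minima(cue_a, cue_b, scale=1.0):
    """Locate local minima of the summed robust loss over a parameter grid."""
    grid = np.linspace(min(cue_a, cue_b) - 3, max(cue_a, cue_b) + 3, 20001)
    loss = geman_mcclure(grid - cue_a, scale) + geman_mcclure(grid - cue_b, scale)
    # Interior grid points whose loss is below both neighbours are local minima.
    interior = (loss[1:-1] < loss[:-2]) & (loss[1:-1] < loss[2:])
    return grid[1:-1][interior]

# Small conflict: a single minimum midway between the two cues.
print(robust_minima(0.0, 0.5))
# Large conflict: the single minimum splits into two, one near each cue.
print(robust_minima(0.0, 4.0))
```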


Cognition | 2009

The influence of anxiety on the initial selection of emotional faces presented in binocular rivalry.

Katie Gray; Wendy J. Adams; Matthew Garner

Neurocognitive theories of anxiety predict that threat-related information can be evaluated before attentional selection, and can influence behaviour differentially in high-anxious compared to low-anxious individuals. We investigate this further by presenting emotional and neutral faces in an adapted binocular rivalry paradigm. We show that the initial selection of emotional faces presented in binocular rivalry is strongly influenced by self-reported state and trait anxiety levels. Heightened anxiety was correlated with increased perception of angry and fearful faces, and decreased perception of happy expressions. These results are consistent with recent evidence of involuntary selection of threat in anxiety.


Perception | 1996

Pooling of vertical disparities by the human visual system

Wendy J. Adams; John P. Frisby; David Buckley; Jonas Gårding; Stephen D. Hippisley-Cox; John Porrill

Two experiments are described in which the effects of scaling vertical disparities on the perceived amplitudes of dome-shaped surfaces depicted with horizontal disparities were examined. The Mayhew and Longuet-Higgins theory and the regional-disparity-correction theory of Gårding et al. predict that scaling should generate a change in perceived depth appropriate to the viewing distance simulated by the scaled vertical disparities. Significant depth changes were observed by means of a nulling task in which the vertical-disparity-scaling effect was cancelled by the observer choosing a pattern of horizontal disparities that made the dome-shaped surface appear flat. The sizes of the scaling effects were less than those predicted by either theory, suggesting that other cues to fixation distance, such as oculomotor information, played an appreciable role. In conditions in which 50% of the texture elements were given one value of vertical-disparity scaling and the remaining 50% were left unscaled, the size of the scaling effect on perceived depth could be accounted for by equally weighted pooling of the vertical-disparity information, unless the two scalings were very dissimilar, in which case the lower scaling factor tended to dominate. These findings are discussed in terms of a Hough parameter-estimation model of the vertical-disparity-pooling process.
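
The Hough parameter-estimation model is only named in the abstract, not specified. As a hedged sketch of the general idea, assuming each texture element casts noisy votes for the vertical-disparity scaling it is consistent with and the pooled estimate is the best-supported value, one might write something like the following; the vote counts, noise level, bin width and function name are illustrative assumptions.

```python
import numpy as np

def hough_pooled_scaling(element_scalings, noise_sd=0.08, bin_width=0.02,
                         n_votes_per_element=200, seed=0):
    """Hough-style pooling: each texture element casts noisy votes for the
    vertical-disparity scaling factor it is consistent with; the pooled
    estimate is the centre of the best-supported histogram bin."""
    rng = np.random.default_rng(seed)
    votes = np.concatenate([
        rng.normal(s, noise_sd, n_votes_per_element) for s in element_scalings
    ])
    bins = np.arange(votes.min(), votes.max() + bin_width, bin_width)
    counts, edges = np.histogram(votes, bins=bins)
    best = np.argmax(counts)
    return 0.5 * (edges[best] + edges[best + 1])

# Half of the elements scaled for a nearer distance, half left unscaled.
similar = [1.0] * 50 + [1.1] * 50       # similar scalings: votes merge into
print(hough_pooled_scaling(similar))    # one broad peak near the mean
dissimilar = [1.0] * 50 + [1.6] * 50    # dissimilar scalings: two separate
print(hough_pooled_scaling(dissimilar)) # peaks form and one of them wins
```

With similar scalings the vote histograms merge and the pooled estimate lands between them, which is the equal-weight pooling regime; with very dissimilar scalings two peaks form and one wins outright, loosely mirroring the breakdown of averaging reported above, although this toy version does not predict which scaling dominates.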


Journal of Vision | 2009

The fate of task-irrelevant visual motion: Perceptual load versus feature-based attention

Shuichiro Taya; Wendy J. Adams; Erich W. Graf; Nilli Lavie

We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.


Vision Research | 2001

3D after-effects are due to shape and not disparity adaptation

Fulvio Domini; Wendy J. Adams; Martin S. Banks

There are a variety of stereoscopic after-effects in which exposure to a stimulus with a particular slant or curvature affects the perceived slant or curvature of a subsequently presented stimulus. These after-effects have been explained as a consequence of fatigue (a decrease in responsiveness) among neural mechanisms that are tuned to particular disparities or patterns of disparity. In fact, a given disparity pattern is consistent with numerous slants or curvatures; to determine slant or curvature, the visual system must take the viewing distance into account. We took advantage of this property to examine whether the mechanisms underlying the stereoscopic curvature after-effect are tuned to particular disparity patterns or to some other property such as surface curvature. The results clearly support the second hypothesis. Thus, 3D after-effects appear to be caused by adaptation among mechanisms specifying surface shape rather than among mechanisms signaling the disparity pattern.

Collaboration


Dive into Wendy J. Adams's collaborations.

Top Co-Authors

Erich W. Graf, University of Southampton
Matthew Garner, University of Southampton
Pascal Mamassian, École Normale Supérieure
Nicholas Hedger, University of Southampton