

Publications


Featured research published by Andrew E. Welchman.


The Journal of Neuroscience | 2008

Multivoxel Pattern Selectivity for Perceptually Relevant Binocular Disparities in the Human Brain

Tim Preston; Shengqiao Li; Zoe Kourtzi; Andrew E. Welchman

Processing of binocular disparity is thought to be widespread throughout cortex, highlighting its importance for perception and action. Yet the computations and functional roles underlying this activity across areas remain largely unknown. Here, we trace the neural representations mediating depth perception across human brain areas using multivariate analysis methods and high-resolution imaging. Presenting disparity-defined planes, we determine functional magnetic resonance imaging (fMRI) selectivity to near versus far depth positions. First, we test the perceptual relevance of this selectivity, comparing the pattern-based decoding of fMRI responses evoked by random dot stereograms that support depth perception (correlated RDS) with the decoding of stimuli containing disparities to which the perceptual system is blind (anticorrelated RDS). Preferential disparity selectivity for correlated stimuli in dorsal (visual and parietal) areas and higher ventral area LO (lateral occipital area) suggests encoding of perceptually relevant information, in contrast to early (V1, V2) and intermediate ventral (V3v, V4) visual cortical areas that show similar selectivity for both correlated and anticorrelated stimuli. Second, manipulating disparity parametrically, we show that dorsal areas encode the metric disparity structure of the viewed stimuli (i.e., disparity magnitude), whereas ventral area LO appears to represent depth position in a categorical manner (i.e., disparity sign). Our findings suggest that activity in both visual streams is commensurate with the use of disparity for depth perception but the neural computations may differ. Intriguingly, perceptually relevant responses in the dorsal stream are tuned to disparity content and emerge at a comparatively earlier stage than categorical representations for depth position in the ventral stream.


Nature Neuroscience | 2012

The integration of motion and disparity cues to depth in dorsal visual cortex

Hiroshi Ban; Tim Preston; Alan Meeson; Andrew E. Welchman

Humans exploit a range of visual depth cues to estimate three-dimensional structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on discriminating the outputs from cue-specific mechanisms or on fusing signals into a common representation. Although fusion is computationally attractive, it poses a substantial challenge, requiring the integration of quantitatively different signals. We used functional magnetic resonance imaging (fMRI) to provide evidence that dorsal visual area V3B/KO meets this challenge. Specifically, we found that fMRI responses are more discriminable when two cues (binocular disparity and relative motion) concurrently signal depth, and that information provided by one cue is diagnostic of depth indicated by the other. This suggests a cortical node important when perceiving depth, and highlights computations based on fusion in the dorsal stream.


Experimental Brain Research | 2009

Being discrete helps keep to the beat

Mark T. Elliott; Andrew E. Welchman; Alan M. Wing

Synchronising our actions with external events is a task we perform without apparent effort. Its foundation relies on accurate temporal control that is widely accepted to take one of two different modes of implementation: explicit timing for discrete actions and implicit timing for smooth continuous movements. Here we assess synchronisation performance for different types of action and test the degree to which each action supports corrective updating following changes in the environment. Participants performed three different finger actions in time with an auditory pacing stimulus allowing us to assess synchronisation performance. Presenting a single perturbation to the otherwise regular metronome allowed us to examine corrections supported by movements varying in their mode of timing implementation. We find that discrete actions are less variable and support faster error correction. As such, discrete actions may be preferred when engaging in time-critical adaptive behaviour with people and objects in a dynamic environment.


European Journal of Neuroscience | 2010

Multisensory cues improve sensorimotor synchronisation

Mark T. Elliott; Alan Wing; Andrew E. Welchman

Synchronising movements with events in the surrounding environment is a ubiquitous aspect of everyday behaviour. Often, information about a stream of events is available across sensory modalities. While it is clear that we synchronise more accurately to auditory cues than to other modalities, little is known about how the brain combines multisensory signals to produce accurately timed actions. Here, we investigate multisensory integration for sensorimotor synchronisation. We extend the prevailing linear phase correction model for movement synchronisation, describing asynchrony variance in terms of sensory, motor and timekeeper components. Then we assess multisensory cue integration, deriving predictions based on the optimal combination of event time, defined across different sensory modalities. Participants tapped in time with metronomes presented via auditory, visual and tactile modalities, under either unimodal or bimodal presentation conditions. Temporal regularity was manipulated between modalities by applying jitter to one of the metronomes. Results matched the model predictions closely for all except high jitter level conditions in audio–visual and audio–tactile combinations, where a bias for auditory signals was observed. We suggest that, in the production of repetitive timed actions, cues are optimally integrated in terms of both sensory and temporal reliability of events. However, when temporal discrepancy between cues is high they are treated independently, with movements timed to the cue with the highest sensory reliability.
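The linear phase correction model mentioned in this abstract can be illustrated with a simple simulation: each tap's asynchrony carries over a fraction of the previous asynchrony (the uncorrected part), plus central timekeeper noise and the change in peripheral motor delay. This is a minimal sketch, not the authors' implementation; the function name, parameter values and Gaussian noise forms are assumptions made here for illustration.

```python
import random

def simulate_phase_correction(alpha, n_taps=5000, timekeeper_sd=10.0,
                              motor_sd=5.0, seed=1):
    """Simulate tap-metronome asynchronies (in ms) under a linear phase
    correction model: a fraction (1 - alpha) of each asynchrony carries
    over to the next tap, plus timekeeper noise and differenced motor
    delay noise."""
    rng = random.Random(seed)
    asynchronies = []
    a = 0.0
    prev_motor = rng.gauss(0.0, motor_sd)
    for _ in range(n_taps):
        motor = rng.gauss(0.0, motor_sd)
        # carry-over of the uncorrected part of the previous asynchrony,
        # plus timekeeper noise and the change in motor delay
        a = (1.0 - alpha) * a + rng.gauss(0.0, timekeeper_sd) + motor - prev_motor
        prev_motor = motor
        asynchronies.append(a)
    return asynchronies
```

Weak correction (small `alpha`) lets asynchronies accumulate across taps, inflating their variance; stronger correction keeps taps locked to the metronome.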


Experimental Brain Research | 2010

Combining multisensory temporal information for movement synchronisation

Alan M. Wing; Michail Doumas; Andrew E. Welchman

The ability to synchronise actions with environmental events is a fundamental skill supporting a variety of group activities. In such situations, multiple sensory cues are usually available for synchronisation, yet previous studies have suggested that auditory cues dominate those from other modalities. We examine the control of rhythmic action on the basis of auditory and haptic cues and show that performance is sensitive to both sources of information for synchronisation. Participants were required to tap the dominant hand index finger in synchrony with a metronome defined by periodic auditory tones, imposed movements of the non-dominant index finger, or both cues together. Synchronisation was least variable with the bimodal metronome as predicted by a maximum likelihood estimation (MLE) model. However, increases in timing variability of the auditory cue resulted in some departures from the MLE model. Our findings indicate the need for further investigation of the MLE account of the integration of multisensory signals in the temporal control of action.
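The MLE prediction tested in this study has a standard closed form: with independent Gaussian cue noise, the optimal combined estimate weights each cue by its reliability (inverse variance), and the combined variance is the reciprocal of the summed reliabilities. A minimal sketch of these relations (function names are illustrative, not from the paper):

```python
def mle_bimodal_variance(var_a, var_b):
    """Predicted variance when two cues with independent Gaussian noise
    are combined optimally: 1 / (1/var_a + 1/var_b). This never exceeds
    the smaller unimodal variance."""
    return (var_a * var_b) / (var_a + var_b)

def mle_weights(var_a, var_b):
    """Optimal weights are proportional to each cue's reliability
    (inverse variance); they sum to 1."""
    rel_a, rel_b = 1.0 / var_a, 1.0 / var_b
    w_a = rel_a / (rel_a + rel_b)
    return w_a, 1.0 - w_a
```

For example, with hypothetical auditory and haptic asynchrony variances of 400 and 900 ms², the predicted bimodal variance is about 277 ms², with the more reliable auditory cue weighted about 0.69.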


The Journal of Neuroscience | 2009

Adaptive Estimation of Three-Dimensional Structure in the Human Brain

Tim Preston; Zoe Kourtzi; Andrew E. Welchman

Perceiving the three-dimensional (3D) properties of the environment relies on the brain bringing together ambiguous cues (e.g., binocular disparity, shading, texture) with information gained from short- and long-term experience. Perceptual aftereffects, in which the perception of an ambiguous 3D stimulus is biased away from the shape of a previously viewed stimulus, provide a sensitive means of probing this process, yet little is known about their neural basis. Here, we investigate 3D aftereffects using psychophysical and functional MRI (fMRI) adaptation paradigms to gain insight into the cortical circuits that mediate the perceptual interpretation of ambiguous depth signals. Using two classic bistable stimuli (Mach card, kinetic depth effect), we test aftereffects produced by 3D shapes defined by binocular (disparity) or monocular (texture, shading) depth cues. We show that the processing of ambiguous 3D stimuli in dorsal visual cortical areas (V3B/KO, V7) and posterior parietal regions is modulated by adaptation in line with perceptual aftereffects. Similar behavioral and fMRI adaptation effects for the two types of bistable stimuli suggest common neural substrates for depth aftereffects independent of the inducing depth cues (disparity, texture, shading). In line with current thinking about the role of adaptation in sensory optimization, our findings provide evidence that estimation of 3D shape in dorsal cortical areas takes account of the adaptive context to resolve depth ambiguity and interpret 3D structure.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Bayesian motion estimation accounts for a surprising bias in 3D vision

Andrew E. Welchman; Judith M. Lam; Heinrich H. Bülthoff

Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here we provide evidence that biases observed when participants estimate movement in depth result from the brain's use of a “prior” favoring slow velocity. We formulate a Bayesian model for computing 3D motion using independently estimated parameters for the shape of the visual system's slow-velocity prior. We demonstrate the success of this model in accounting for human behavior in separate experiments that assess both sensitivity and bias in 3D motion estimation. Our results show that a surprising perceptual error in 3D motion perception reflects the importance of prior probabilities when estimating environmental properties.
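The effect of a slow-velocity prior can be sketched with Gaussians: a zero-centred prior shrinks each measured velocity component toward zero in proportion to its measurement noise. Because motion-in-depth is sensed less reliably than lateral motion, the depth component is shrunk more, biasing the estimated trajectory away from the head. The Gaussian forms and parameter values below are illustrative assumptions; the paper estimates the actual shape of the prior from data.

```python
import math

def map_velocity_estimate(measured, sigma_meas, sigma_prior):
    """MAP estimate with a Gaussian likelihood centred on the measurement
    and a Gaussian slow-velocity prior centred on zero: the measurement
    is shrunk toward zero as measurement noise grows."""
    k = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_meas ** 2)
    return k * measured

def estimated_trajectory_angle(vx, vz, sigma_x, sigma_z, sigma_prior):
    """Angle (degrees) of estimated motion relative to straight-ahead,
    after applying the prior to lateral (vx) and in-depth (vz) speed.
    A noisier vz is shrunk more, inflating the perceived angle."""
    ex = map_velocity_estimate(vx, sigma_x, sigma_prior)
    ez = map_velocity_estimate(vz, sigma_z, sigma_prior)
    return math.degrees(math.atan2(ex, ez))
```

With these assumptions, an object approaching nearly head-on (vx = 1, vz = 10, true angle ≈ 5.7°) is estimated at a much larger angle when depth noise exceeds lateral noise, i.e. judged likely to miss the head.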


Proceedings of the Royal Society of London B: Biological Sciences | 2010

The quick and the dead: when reaction beats intention

Andrew E. Welchman; James Stanley; Malte R. Schomers; R. Chris Miall; Heinrich H. Bülthoff

Everyday behaviour involves a trade-off between planned actions and reaction to environmental events. Evidence from neurophysiology, neurology and functional brain imaging suggests different neural bases for the control of different movement types. Here we develop a behavioural paradigm to test movement dynamics for intentional versus reaction movements and provide evidence for a ‘reactive advantage’ in movement execution, whereby the same action is executed faster in reaction to an opponent. We placed pairs of participants in competition with each other to make a series of button presses. Within-subject analysis of movement times revealed a 10 per cent benefit for reactive actions. This was maintained when opponents performed dissimilar actions, and when participants competed against a computer, suggesting that the effect is not related to facilitation produced by action observation. Rather, faster ballistic movements may be a general property of reactive motor control, potentially providing a useful means of promoting survival.


The Journal of Neuroscience | 2015

7 tesla fMRI reveals systematic functional organization for binocular disparity in dorsal visual cortex

Nuno Goncalves; Hiroshi Ban; Rosa Sanchez-Panchuelo; Denis Schluppeck; Andrew E. Welchman

The binocular disparity between the views of the world registered by the left and right eyes provides a powerful signal about the depth structure of the environment. Despite increasing knowledge of the cortical areas that process disparity from animal models, comparatively little is known about the local architecture of stereoscopic processing in the human brain. Here, we take advantage of the high spatial specificity and image contrast offered by 7 tesla fMRI to test for systematic organization of disparity representations in the human brain. Participants viewed random dot stereogram stimuli depicting different depth positions while we recorded fMRI responses from dorsomedial visual cortex. We repeated measurements across three separate imaging sessions. Using a series of computational modeling approaches, we report three main advances in understanding disparity organization in the human brain. First, we show that disparity preferences are clustered and that this organization persists across imaging sessions, particularly in area V3A. Second, we observe differences between the local distribution of voxel responses in early and dorsomedial visual areas, suggesting different cortical organization. Third, using modeling of voxel responses, we show that higher dorsal areas (V3A, V3B/KO) have properties that are characteristic of human depth judgments: a simple model that uses tuning parameters estimated from fMRI data captures known variations in human psychophysical performance. Together, these findings indicate that human dorsal visual cortex contains selective cortical structures for disparity that may support the neural computations that underlie depth perception.


Journal of Neurophysiology | 2013

Integration of texture and disparity cues to surface slant in dorsal visual cortex

Aidan P. Murphy; Hiroshi Ban; Andrew E. Welchman

Reliable estimation of three-dimensional (3D) surface orientation is critical for recognizing and interacting with complex 3D objects in our environment. Human observers maximize the reliability of their estimates of surface slant by integrating multiple depth cues. Texture and binocular disparity are two such cues, but they are qualitatively very different. Existing evidence suggests that representations of surface tilt from each of these cues coincide at the single-neuron level in higher cortical areas. However, the cortical circuits responsible for 1) integration of such qualitatively distinct cues and 2) encoding the slant component of surface orientation have not been assessed. We tested for cortical responses related to slanted plane stimuli that were defined independently by texture, disparity, and combinations of these two cues. We analyzed the discriminability of functional MRI responses to two slant angles using multivariate pattern classification. Responses in visual area V3B/KO to stimuli containing congruent cues were more discriminable than those elicited by single cues, in line with predictions based on the fusion of slant estimates from component cues. This improvement was specific to congruent combinations of cues: incongruent cues yielded lower decoding accuracies, which suggests the robust use of individual cues in cases of large cue conflicts. These data suggest that area V3B/KO is intricately involved in the integration of qualitatively dissimilar depth cues.

Collaboration


Dive into Andrew E. Welchman's collaboration.

Top Co-Authors

Zoe Kourtzi

University of Cambridge

Hiroshi Ban

National Institute of Information and Communications Technology

Alexander Muryy

University of Southampton

Alan M. Wing

University of Birmingham