Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adam P. Morris is active.

Publication


Featured research published by Adam P. Morris.


Journal of Cognitive Neuroscience | 2006

Executive Brake Failure following Deactivation of Human Frontal Lobe

Christopher D. Chambers; Mark A. Bellgrove; Mark G. Stokes; Tracy R. Henderson; Hugh Garavan; Ian H. Robertson; Adam P. Morris; Jason B. Mattingley

In the course of daily living, humans frequently encounter situations in which a motor activity, once initiated, becomes unnecessary or inappropriate. Under such circumstances, the ability to inhibit motor responses can be of vital importance. Although the nature of response inhibition has been studied in psychology for several decades, its neural basis remains unclear. Using transcranial magnetic stimulation, we found that temporary deactivation of the pars opercularis in the right inferior frontal gyrus selectively impairs the ability to stop an initiated action. Critically, deactivation of the same region did not affect the ability to execute responses, nor did it influence physiological arousal. These findings confirm and extend recent reports that the inferior frontal gyrus is vital for mediating response inhibition.


The Journal of Neuroscience | 2004

Amygdala Responses to Fearful and Happy Facial Expressions under Conditions of Binocular Suppression

Mark A. Williams; Adam P. Morris; Francis McGlone; David F. Abbott; Jason B. Mattingley

The human amygdala plays a crucial role in processing affective information conveyed by sensory stimuli. Facial expressions of fear and anger, which both signal potential threat to an observer, result in significant increases in amygdala activity, even when the faces are unattended or presented briefly and masked. It has been suggested that afferent signals from the retina travel to the amygdala via separate cortical and subcortical pathways, with the subcortical pathway underlying unconscious processing. Here we exploited the phenomenon of binocular rivalry to induce complete suppression of affective face stimuli presented to one eye. Twelve participants viewed brief, rivalrous visual displays in which a fearful, happy, or neutral face was presented to one eye while a house was presented simultaneously to the other. We used functional magnetic resonance imaging to study activation in the amygdala and extrastriate visual areas for consciously perceived versus suppressed face and house stimuli. Activation within the fusiform and parahippocampal gyri increased significantly for perceived versus suppressed faces and houses, respectively. Amygdala activation increased bilaterally in response to fearful versus neutral faces, regardless of whether the face was perceived consciously or suppressed because of binocular rivalry. Amygdala activity also increased significantly for happy versus neutral faces, but only when the face was suppressed. This activation pattern suggests that the amygdala has a limited capacity to differentiate between specific facial expressions when it must rely on information received via a subcortical route. We suggest that this limited capacity reflects a tradeoff between specificity and speed of processing.


Proceedings of the National Academy of Sciences of the United States of America | 2007

Parietal stimulation destabilizes spatial updating across saccadic eye movements

Adam P. Morris; Christopher D. Chambers; Jason B. Mattingley

Saccadic eye movements cause sudden and global shifts in the retinal image. Rather than causing confusion, however, eye movements expand our sense of space and detail. In macaques, a stable representation of space is embodied by neural populations in intraparietal cortex that redistribute activity with each saccade to compensate for eye displacement, but little is known about equivalent updating mechanisms in humans. We combined noninvasive cortical stimulation with a double-step saccade task to examine the contribution of two human intraparietal areas to transsaccadic spatial updating. Right hemisphere stimulation over the posterior termination of the intraparietal sulcus (IPSp) broadened and shifted the distribution of second-saccade endpoints, but only when the first saccade was directed into the contralateral hemifield. By interleaving trials with and without cortical stimulation, we show that the shift in endpoints was caused by an enduring effect of stimulation on neural functioning (e.g., modulation of neuronal gain). By varying the onset time of stimulation, we show that the representation of space in IPSp is updated immediately after the first saccade. In contrast, stimulation of an adjacent IPS site had no such effects on second saccades. These experiments suggest that stimulation of IPSp distorts an eye position or displacement signal that updates the representation of space at the completion of a saccade. Such sensory-motor integration in IPSp is crucial for the ongoing control of action, and may contribute to visual stability across saccades.


Current Biology | 2012

Dynamics of Eye-Position Signals in the Dorsal Visual System

Adam P. Morris; Michael Kubischik; Klaus-Peter Hoffmann; Bart Krekelberg; Frank Bremmer

BACKGROUND Many visual areas of the primate brain contain signals related to the current position of the eyes in the orbit. These cortical eye-position signals are thought to underlie the transformation of retinal input-which changes with every eye movement-into a stable representation of visual space. For this coding scheme to work, such signals would need to be updated fast enough to keep up with the eye during normal exploratory behavior. We examined the dynamics of cortical eye-position signals in four dorsal visual areas of the macaque brain: the lateral and ventral intraparietal areas (LIP; VIP), the middle temporal area (MT), and the medial-superior temporal area (MST). We recorded extracellular activity of single neurons while the animal performed sequences of fixations and saccades in darkness. RESULTS The data show that eye-position signals are updated predictively, such that the representation shifts in the direction of a saccade prior to (<100 ms) the actual eye movement. Despite this early start, eye-position signals remain inaccurate until shortly after (10-150 ms) the eye movement. By using simulated behavioral experiments, we show that this brief misrepresentation of eye position provides a neural explanation for the psychophysical phenomenon of perisaccadic mislocalization, in which observers misperceive the positions of visual targets flashed around the time of saccadic eye movements. CONCLUSIONS Together, these results suggest that eye-position signals in the dorsal visual system are updated rapidly across eye movements and play a direct role in perceptual localization, even when they are erroneous.
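
As a rough illustration of the logic linking a brief eye-position misrepresentation to perisaccadic mislocalization (a toy sketch, not the authors' model), the following assumes that perceived location is computed as retinal location plus an internal eye-position estimate, and that this estimate is predictive but sluggish around the saccade; all parameter values are invented for illustration.

```python
import numpy as np

# Toy sketch (not the authors' model): if perceived location is computed as
#   perceived = retinal position + internal eye-position estimate,
# then any transient error in that estimate around a saccade predicts
# mislocalization of briefly flashed targets. All parameters are illustrative.

saccade_amplitude = 10.0   # deg; rightward saccade at t = 0 ms
t = np.arange(-200, 201)   # flash times relative to saccade onset (ms)

# True eye position: treated as an instantaneous step at saccade onset.
true_eye = np.where(t < 0, 0.0, saccade_amplitude)

# Hypothetical internal estimate: starts shifting before the saccade but settles
# only well after it (a predictive yet sluggish update).
estimated_eye = saccade_amplitude / (1 + np.exp(-(t - 25) / 40.0))

# A flash at a fixed retinal position is perceived at retinal + estimated eye
# position; the difference between estimated and true eye position is therefore
# the predicted localization error.
mislocalization = estimated_eye - true_eye

for ti in (-150, -50, 0, 50, 150):
    err = mislocalization[np.searchsorted(t, ti)]
    print(f"flash at {ti:+4d} ms -> predicted localization error {err:+5.1f} deg")
```

This reproduces only the qualitative pattern (forward errors before the saccade, backward errors just after it), not the quantitative fits reported in the paper.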


The Journal of Neuroscience | 2010

Summation of Visual Motion across Eye Movements Reflects a Nonspatial Decision Mechanism

Adam P. Morris; Charles C. Liu; Simon J. Cropper; Jason D. Forte; Bart Krekelberg; Jason B. Mattingley

Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., “spatiotopic” receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
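
To make the "statistical advantage" concrete (a minimal sketch, not the paper's analysis), the simulation below shows that two independent noisy observations of a motion signal improve performance at the decision stage even without any sensory integration; the sensitivity value and trial count are arbitrary.

```python
import numpy as np

# Two independent observations combined at the decision stage outperform one,
# with no sensory integration required. Values are illustrative.
rng = np.random.default_rng(0)
d_prime = 1.0                 # assumed single-signal sensitivity
n_trials = 200_000

# Direction-discrimination variable: positive mean for one direction, unit noise.
x1 = rng.normal(d_prime, 1.0, n_trials)
x2 = rng.normal(d_prime, 1.0, n_trials)

pc_one = np.mean(x1 > 0)              # decision based on a single signal
pc_two = np.mean((x1 + x2) / 2 > 0)   # optimal pooling of two signals

# Pooling two independent samples raises effective sensitivity by a factor of
# sqrt(2), an improvement that can masquerade as sensory summation.
print(f"one signal:  {pc_one:.3f} proportion correct")
print(f"two signals: {pc_two:.3f} proportion correct")
```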


The Journal of Neuroscience | 2013

Eye-Position Signals in the Dorsal Visual System Are Accurate and Precise on Short Timescales

Adam P. Morris; Frank Bremmer; Bart Krekelberg

Eye-position signals (EPS) are found throughout the primate visual system and are thought to provide a mechanism for representing spatial locations in a manner that is robust to changes in eye position. It remains unknown, however, whether cortical EPS (also known as “gain fields”) have the necessary spatial and temporal characteristics to fulfill their purported computational roles. To quantify these EPS, we combined single-unit recordings in four dorsal visual areas of behaving rhesus macaques (lateral intraparietal area, ventral intraparietal area, middle temporal area, and the medial superior temporal area) with likelihood-based population-decoding techniques. The decoders used knowledge of spiking statistics to estimate eye position during fixation from a set of observed spike counts across neurons. Importantly, these samples were short in duration (100 ms) and from individual trials to mimic the real-time estimation problem faced by the brain. The results suggest that cortical EPS provide an accurate and precise representation of eye position, albeit with unequal signal fidelity across brain areas and a modest underestimation of eye eccentricity. The underestimation of eye eccentricity predicted a pattern of mislocalization that matches the errors made by human observers. In addition, we found that eccentric eye positions were associated with enhanced precision relative to the primary eye position. This predicts that positions in visual space should be represented more reliably during eccentric gaze than while looking straight ahead. Together, these results suggest that cortical eye-position signals provide a useable head-centered representation of visual space on timescales that are compatible with the duration of a typical ocular fixation.
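
The decoding step can be illustrated with a minimal sketch that assumes Poisson spiking and linear gain-field tuning; the neuron count and tuning parameters below are invented, and only the 100 ms single-trial counting window follows the description above.

```python
import numpy as np

# Maximum-likelihood decoding of eye position from a single 100 ms sample of
# spike counts, assuming Poisson spiking and linear "gain field" tuning.
rng = np.random.default_rng(1)
eye_grid = np.linspace(-20, 20, 81)     # candidate horizontal eye positions (deg)
n_neurons = 50
window = 0.1                            # s, single-trial counting window

# Hypothetical gain fields: firing rate varies linearly with eye position.
baseline = rng.uniform(20, 60, n_neurons)    # spikes/s at straight ahead
slope = rng.uniform(-1.5, 1.5, n_neurons)    # spikes/s per degree

def rates(eye_pos):
    return np.clip(baseline + slope * eye_pos, 0.1, None)

true_eye = 8.0
counts = rng.poisson(rates(true_eye) * window)         # one observed sample

# Poisson log-likelihood of each candidate eye position (factorial term omitted).
lam = rates(eye_grid[:, None]) * window                # (positions, neurons)
log_lik = np.sum(counts * np.log(lam) - lam, axis=1)

print("true eye position:", true_eye)
print("ML estimate:      ", eye_grid[np.argmax(log_lik)])
```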


Frontiers in Psychology | 2015

The (un)suitability of modern liquid crystal displays (LCDs) for vision research

Masoud Ghodrati; Adam P. Morris; Nicholas Seow Chiang Price

Psychophysical and physiological studies of vision have traditionally used cathode ray tube (CRT) monitors to present stimuli. These monitors are no longer easily available, and liquid crystal display (LCD) technology is continually improving; therefore, we characterized a number of LCD monitors to determine if newer models are suitable replacements for CRTs in the laboratory. We compared the spatial and temporal characteristics of a CRT with five LCDs, including monitors designed with vision science in mind (ViewPixx and Display++), “prosumer” gaming monitors, and a consumer-grade LCD. All monitors had sufficient contrast, luminance range and reliability to support basic vision experiments with static images. However, the luminance of all LCDs depended strongly on viewing angle, which in combination with the poor spatial uniformity of all monitors except the VPixx, caused up to 80% drops in effective luminance in the periphery during central fixation. Further, all monitors showed significant spatial dependence, as the luminance of one area was modulated by the luminance of other areas. These spatial imperfections are most pronounced for experiments that use large or peripheral visual stimuli. In the temporal domain, the gaming LCDs were unable to generate reliable luminance patterns; one was unable to reach the requested luminance within a single frame whereas in the other the luminance of one frame affected the luminance of the next frame. The VPixx and Display++ were less affected by these problems, and had good temporal properties provided stimuli were presented for 2 or more frames. Of the consumer-grade and gaming displays tested, and if problems with spatial uniformity are taken into account, the Eizo FG2421 is the most suitable alternative to CRTs. The specialized ViewPixx performed best among all the tested LCDs, followed closely by the Display++; both are good replacements for a CRT, provided their spatial imperfections are considered.
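
For readers new to display characterization, the sketch below shows one routine step, estimating display gamma from photometer readings; it is not an analysis from this paper, and the luminance values are made up rather than measurements from the monitors tested.

```python
import numpy as np

# Estimate display gamma assuming L(v) = L_black + a * (v / 255) ** gamma.
# The readings below are hypothetical, not data from the monitors in the paper.
gray_levels = np.array([32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
luminance = np.array([2.1, 7.5, 16.0, 28.5, 45.0, 66.0, 91.0, 120.0])   # cd/m^2
black_level = 0.4                                                       # cd/m^2 at v = 0

# In log-log coordinates the power law is a straight line whose slope is gamma.
x = np.log(gray_levels / 255.0)
y = np.log(luminance - black_level)
gamma, log_a = np.polyfit(x, y, 1)

print(f"estimated gamma: {gamma:.2f}")
# Inverting the fitted curve gives the lookup table used to linearize the display.
```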


Frontiers in Systems Neuroscience | 2016

The Dorsal Visual System Predicts Future and Remembers Past Eye Position.

Adam P. Morris; Frank Bremmer; Bart Krekelberg

Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye-position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually guided behavior.
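
The pooling idea can be illustrated with a toy sketch (not the recorded data): if individual units update their eye-position signals at different times around a saccade, a downstream readout can obtain a predictive or a lagging estimate simply by changing its weights; the update times and weight profiles below are invented for illustration.

```python
import numpy as np

# A weighted sum over units with staggered update times yields an eye-position
# estimate whose own update time depends only on the pooling weights.
t = np.arange(-300, 301)                     # time relative to saccade onset (ms)
amplitude = 10.0                             # deg, saccade size

# Hypothetical population: each unit's signal steps to the new eye position at a
# different time, spanning 100 ms before to 200 ms after the saccade.
update_times = np.linspace(-100, 200, 31)
signals = np.array([np.where(t < u, 0.0, amplitude) for u in update_times])

def readout(weights):
    """Normalized weighted sum of the population signals."""
    w = weights / weights.sum()
    return w @ signals

def half_update_time(estimate):
    """Time at which the pooled estimate reaches half the saccade amplitude."""
    return int(t[np.argmax(estimate >= amplitude / 2)])

early = np.exp(-(update_times + 100) / 50.0)   # favor early-updating units
late = np.exp((update_times - 200) / 50.0)     # favor late-updating units
uniform = np.ones_like(update_times)

for name, w in [("early-weighted", early), ("uniform", uniform), ("late-weighted", late)]:
    print(f"{name:15s} readout half-updated at {half_update_time(readout(w)):+d} ms")
```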


Journal of Vision | 2013

Intrasaccadic suppression is dominated by reduced detector gain.

Jon Guez; Adam P. Morris; Bart Krekelberg

Human vision requires fast eye movements (saccades). Each saccade causes a self-induced motion signal, but we are not aware of this potentially jarring visual input. Among the theorized causes of this phenomenon is a decrease in visual sensitivity before (presaccadic suppression) and during (intrasaccadic suppression) saccades. We investigated intrasaccadic suppression using a perceptual template model (PTM) relating visual detection to different signal-processing stages. One stage changes the gain on the detector's input; another increases uncertainty about the stimulus, allowing more noise into the detector; and other stages inject noise into the detector in a stimulus-dependent or -independent manner. By quantifying intrasaccadic suppression of flashed horizontal gratings at varying external noise levels, we obtained threshold-versus-noise (TVN) data, allowing us to fit the PTM. We tested if any of the PTM parameters changed significantly between the fixation and saccade models and could therefore account for intrasaccadic suppression. We found that the dominant contribution to intrasaccadic suppression was a reduction in the gain of the visual detector. We discuss how our study differs from previous ones that have pointed to uncertainty as an underlying cause of intrasaccadic suppression and how the equivalent noise approach provides a framework for comparing the disparate neural correlates of saccadic suppression.
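
For readers unfamiliar with the PTM, the sketch below uses one commonly published formulation of the model (after Lu and Dosher); the exact equation and parameter values used in this study may differ. It illustrates how a pure reduction in template gain rescales the predicted contrast threshold at every external-noise level of a TVN curve.

```python
import numpy as np

# One common PTM formulation (an assumption, not necessarily the paper's exact form):
#   d' = (beta*c)**g / sqrt(Next**(2g) + Nmul**2 * ((beta*c)**(2g) + Next**(2g)) + Nadd**2)
# Solving for the contrast c that yields a criterion d' gives a TVN curve.

def ptm_threshold(n_ext, beta, gamma, n_mul, n_add, d_target=1.5):
    num = d_target**2 * ((1 + n_mul**2) * n_ext**(2 * gamma) + n_add**2)
    den = 1 - d_target**2 * n_mul**2
    return (num / den) ** (1 / (2 * gamma)) / beta

external_noise = np.logspace(-2, 0, 6)     # external noise contrast levels
base = dict(beta=2.0, gamma=2.0, n_mul=0.2, n_add=0.01)

thr_fixation = ptm_threshold(external_noise, **base)
thr_saccade = ptm_threshold(external_noise, **{**base, "beta": base["beta"] * 0.5})  # halved gain

for ne, tf, ts in zip(external_noise, thr_fixation, thr_saccade):
    print(f"N_ext={ne:6.3f}  threshold (full gain)={tf:.3f}  threshold (halved gain)={ts:.3f}")
```

In this formulation the threshold scales with 1/beta, so a gain reduction elevates thresholds by the same factor at all external-noise levels, whereas a purely additive-noise change would affect mainly the low-noise end of the curve.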


Cell Reports | 2016

Adaptation without Plasticity

Maria del Mar Quiroga; Adam P. Morris; Bart Krekelberg

Sensory adaptation is a phenomenon in which neurons are affected not only by their immediate input but also by the sequence of preceding inputs. In visual cortex, for example, neurons shift their preferred orientation after exposure to an oriented stimulus. This adaptation is traditionally attributed to plasticity. We show that a recurrent network generates tuning curve shifts observed in cat and macaque visual cortex, even when all synaptic weights and intrinsic properties in the model are fixed. This demonstrates that, in a recurrent network, adaptation on timescales of hundreds of milliseconds does not require plasticity. Given the ubiquity of recurrent connections, this phenomenon likely contributes to responses observed across cortex and shows that plasticity cannot be inferred solely from changes in tuning on these timescales. More broadly, our findings show that recurrent connections can endow a network with a powerful mechanism to store and integrate recent contextual information.
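
As a minimal sketch of the core claim (not the model in the paper), the fixed-weight rate network below responds differently to the same test stimulus depending on the stimulus that preceded it, even though no synaptic weight ever changes; all parameters are invented, and the sketch does not reproduce the specific tuning-curve shifts reported here.

```python
import numpy as np

# Fixed-weight recurrent network: history dependence without plasticity.
n = 64
prefs = np.linspace(0.0, np.pi, n, endpoint=False)       # preferred orientations

def feedforward(theta):
    """Bell-shaped feedforward drive centered on the stimulus orientation."""
    return np.exp((np.cos(2 * (prefs - theta)) - 1.0) / 0.3)

# Fixed recurrent connectivity: local excitation minus broad inhibition.
diff = prefs[:, None] - prefs[None, :]
W = (1.2 * np.exp((np.cos(2 * diff) - 1.0) / 0.4) - 0.5) / n

def simulate(stimuli, durations, dt=1.0, tau=20.0):
    """Run the network through a sequence of (stimulus, duration-in-ms) epochs."""
    r = np.zeros(n)
    for theta, dur in zip(stimuli, durations):
        drive = feedforward(theta) if theta is not None else np.zeros(n)
        for _ in range(int(dur / dt)):
            r += (dt / tau) * (-r + np.maximum(W @ r + drive, 0.0))
    return r

def decoded_orientation(r):
    """Population-vector estimate of the represented orientation (deg)."""
    return np.degrees(np.angle(np.sum(r * np.exp(2j * prefs))) / 2) % 180

test = np.pi / 2            # 90 deg test stimulus
adapter = np.pi / 3         # 60 deg adapting stimulus

r_baseline = simulate([None, test], [500, 50])      # blank, then brief test
r_adapted = simulate([adapter, test], [500, 50])    # adapter, then the same test

print("decoded test orientation, no adapter:    %.1f deg" % decoded_orientation(r_baseline))
print("decoded test orientation, after adapter: %.1f deg" % decoded_orientation(r_adapted))
```

Both runs present the identical test, yet the decoded orientation differs because the network state, shaped entirely by fixed recurrent dynamics, still carries the adapter.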

Collaboration


Dive into Adam P. Morris's collaborations.

Top Co-Authors: Pete Thomas (Loughborough University)