Publication


Featured research published by John Douglas Crawford.


Cerebral Cortex | 2015

Visual–Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey

Amirsaman Sajad; Morteza Sadeh; Gerald P. Keith; Xiaogang Yan; Hongying Wang; John Douglas Crawford

A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with the delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activity) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and that further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in subcortical areas.
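The model-comparison analysis referenced here (the nonparametric fitting method of Keith et al., 2009, reused throughout the studies below) can be illustrated with a small sketch: express each trial's position in every candidate spatial frame, fit the neuron's firing rates nonparametrically in each frame, and prefer the model whose fit generalizes best under leave-one-out predicted residuals (PRESS). The Gaussian kernel, bandwidth, synthetic data, and function names below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of response-field model comparison in the spirit of
# Keith et al. (2009): fit firing rates nonparametrically against each
# candidate spatial model and prefer the model with the lowest
# leave-one-out predicted residual sum of squares (PRESS).
# All data, kernel, and bandwidth choices are illustrative assumptions.
import numpy as np

def gaussian_kernel_fit(train_xy, train_rate, test_xy, bandwidth=5.0):
    """Nadaraya-Watson estimate of the firing rate at one test position."""
    d2 = np.sum((train_xy - test_xy) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return np.sum(w * train_rate) / (np.sum(w) + 1e-12)

def press_residuals(positions, rates, bandwidth=5.0):
    """Leave-one-out PRESS: predict each trial from the remaining trials."""
    n = len(rates)
    resid = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        pred = gaussian_kernel_fit(positions[mask], rates[mask],
                                   positions[i], bandwidth)
        resid[i] = rates[i] - pred
    return float(np.sum(resid ** 2))

# Hypothetical single-neuron data: trial-by-trial target and final-gaze
# positions in eye-centered coordinates (degrees); real analyses also
# test head- and space-fixed frames dissociated by 3D recordings.
rng = np.random.default_rng(0)
n_trials = 60
target_in_eye = rng.uniform(-30.0, 30.0, size=(n_trials, 2))
gaze_in_eye = target_in_eye + rng.normal(0.0, 3.0, size=(n_trials, 2))  # gaze errors
rates = np.exp(-np.sum((gaze_in_eye - 10.0) ** 2, axis=1) / 200.0) \
        + rng.normal(0.0, 0.05, n_trials)  # field tied to gaze, plus noise

candidates = {"target-in-eye (T)": target_in_eye, "gaze-in-eye (G)": gaze_in_eye}
for name, pos in candidates.items():
    print(name, press_residuals(pos, rates))  # lower PRESS = preferred code
```

In this synthetic example the response field is tied to final gaze position, so the gaze-in-eye model yields the lower PRESS, mirroring the movement-response result reported above.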


European Journal of Neuroscience | 2015

Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts

Morteza Sadeh; Amirsaman Sajad; Hongying Wang; Xiaogang Yan; John Douglas Crawford

We previously reported that visuomotor activity in the superior colliculus (SC) – a key midbrain structure for the generation of rapid eye movements – preferentially encodes target position relative to the eye (Te) during low‐latency head‐unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head‐unrestrained gaze shifts after a variable post‐stimulus delay (400–700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial‐to‐trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor‐only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze‐centred, and show a target‐to‐gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.


eNeuro | 2016

Transition from Target to Gaze Coding in Primate Frontal Eye Field During Memory Delay and Memory-Motor Transformation

Amirsaman Sajad; Morteza Sadeh; Xiaogang Yan; Hongying Wang; John Douglas Crawford

The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450–1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
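The T–G continuum analysis used here can be sketched by extending the model comparison above: each intermediate model places the trial coordinate at T + α(G − T), so α = 0 is a pure target code and α = 1 a pure gaze code, and the α that minimizes PRESS locates the response along the continuum. The α grid below is an illustrative assumption; press_residuals and the synthetic data are reused from the previous sketch.

```python
# Locate a response's spatial code on the target-to-gaze (T-G) continuum:
# intermediate models place each trial's coordinate at T + alpha*(G - T),
# and the alpha with the lowest PRESS is the best-fit code. Reuses
# press_residuals, target_in_eye, gaze_in_eye, and rates from the sketch
# above; the alpha grid (including steps beyond T and G) is an assumption.
alphas = np.linspace(-0.5, 1.5, 21)  # 0 = pure T code, 1 = pure G code
press = [press_residuals(target_in_eye + a * (gaze_in_eye - target_in_eye), rates)
         for a in alphas]
best = alphas[int(np.argmin(press))]
print(f"best-fit position on the T-G continuum: alpha = {best:.2f}")
```

Repeating this fit within successive time windows of the trial yields the kind of progressive T-to-G transition through delay activity that the study describes.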


Frontiers in Human Neuroscience | 2016

Different Cortical Mechanisms for Spatial vs. Feature-Based Attentional Selection in Visual Working Memory

Anna Heuer; Anna Schubö; John Douglas Crawford

The limited capacity of visual working memory (VWM) necessitates attentional mechanisms that selectively update and maintain only the most task-relevant content. Psychophysical experiments have shown that the retroactive selection of memory content can be based on visual properties such as location or shape, but the neural basis for such differential selection is unknown. For example, it is not known whether there are different cortical modules specialized for spatial vs. feature-based mnemonic attention, as has been demonstrated for attention to perceptual input. Here, we used transcranial magnetic stimulation (TMS) to identify areas in human parietal and occipital cortex involved in the selection of objects from memory based on cues to their location (spatial information) or their shape (featural information). We found that TMS over the supramarginal gyrus (SMG) selectively facilitated spatial selection, whereas TMS over the lateral occipital cortex (LO) selectively enhanced feature-based selection for remembered objects in the contralateral visual field. Thus, different cortical regions are responsible for spatial vs. feature-based selection of working memory representations. Since the same regions are involved in attention to external events, these findings indicate overlapping mechanisms for attentional control over perceptual input and mnemonic representations.


bioRxiv | 2018

Timing Determines Tuning: a Rapid Spatiotemporal Transformation in Superior Colliculus Neurons During Reactive Gaze Shifts.

Morteza Sadeh; Amirsaman Sajad; Hongying Wang; Xiaogang Yan; John Douglas Crawford

Gaze saccades (rapid shifts of the eyes and head toward a goal) have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in ‘reactive’ saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the spatial continuum between T and G coding based on variable gaze errors. We demonstrate that a rapid T-G transformation occurs between SC visual and motor responses during reactive saccades, even within visuomotor cells, with a continuous spatiotemporal shift in coding occurring across all cell types (visual, visuomotor, and motor). We further show that the primary determinant of this spatial code was not the intrinsic visual-motor index of different cells or populations, but rather the timing of the response in all cells. These results suggest that the SC provides a rapid spatiotemporal transformation for normal gaze saccades, that its motor responses contribute to variable gaze errors, and that those errors arise from a noisy spatiotemporal transformation involving all SC neurons.

Significance Statement: Oculomotor studies have demonstrated visuomotor transformations in structures like the superior colliculus using trained behavioral manipulations, such as the memory-delay and antisaccade tasks, but it is not known how this happens during normal saccades. Here, using a spatiotemporal model-fitting method based on endogenous gaze errors in ‘reactive’ gaze saccades, we show that the superior colliculus provides a rapid spatiotemporal transformation from target to gaze coding that involves visual, visuomotor, and motor neurons. This technique demonstrates that SC spatial codes are not fixed, and may provide a quantitative biomarker for assessing the health of sensorimotor transformations.
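The central claim here, that timing rather than cell class determines the spatial code, reduces to a regression of each response's best-fit α (from a continuum fit like the one sketched earlier) against its timing relative to gaze onset, pooled across visual, visuomotor, and motor cells. A minimal sketch on hypothetical placeholder values:

```python
# Illustrative "timing determines tuning" check: regress each response's
# best-fit alpha against its timing relative to gaze onset, pooled across
# cell classes. The (time, alpha) values below are hypothetical
# placeholders for exposition, not data from the paper.
import numpy as np
from scipy.stats import linregress

times_ms = np.array([-90.0, -70.0, -50.0, -30.0, -10.0, 10.0])  # response timing
alpha_fits = np.array([0.15, 0.30, 0.45, 0.55, 0.75, 0.90])     # best-fit alphas
fit = linregress(times_ms, alpha_fits)
print(f"slope = {fit.slope:.4f} alpha/ms, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
```

A positive slope across all cell types, with cell class adding little beyond timing, would correspond to the result reported above.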


Frontiers in Neural Circuits | 2018

The Influence of a Memory Delay on Spatial Coding in the Superior Colliculus: Is Visual Always Visual and Motor Always Motor?

Morteza Sadeh; Amirsaman Sajad; Hongying Wang; Xiaogang Yan; John Douglas Crawford

The memory-delay saccade task is often used to separate visual and motor responses in oculomotor structures such as the superior colliculus (SC), with the assumption that these same responses would sum with a short delay during immediate “reactive” saccades to visual stimuli. However, it is also possible that additional signals (suppression, delay) alter the visual and/or motor responses in the memory-delay task. Here, we compared the spatiotemporal properties of visual and motor responses of the same SC neurons recorded during both the reactive and memory-delay tasks in two head-unrestrained monkeys. Comparing tasks, visual responses (aligned on target onset) and motor responses (aligned on saccade onset) were highly correlated across neurons, but the peak responses of visual neurons and the peak motor responses (of both visuomotor and motor neurons) were significantly higher in the reactive task. Receptive field organization was generally similar in both tasks. Spatial coding (along a Target–Gaze (TG) continuum) was also similar, with the exception that pure motor cells showed a stronger tendency to code future gaze location in the memory-delay task, suggesting a more complete transformation. These results suggest that the introduction of a trained memory delay alters both the vigor and the spatial coding of SC visual and motor responses, likely due to a combination of saccade suppression signals and greater accumulation of signal noise during the delay.


Current Biology | 2015

Continuous Updating of Visuospatial Memory in Superior Colliculus during Slow Eye Movements

Suryadeep Dash; Xiaogang Yan; Hongying Wang; John Douglas Crawford


Journal of Vision | 2004

Trans-saccadic integration of the orientation and location features of linear objects

Steven L. Prime; Matthias Niemeier; John Douglas Crawford


Journal of Vision | 2018

Visual-motor transformation in the multiunit activity of the frontal eye fields (FEF) during head-unrestrained gaze shifts in rhesus monkeys

Vishal Bharmauria; Amirsaman Sajad; Xiaogang Yan; Hongying Wang; John Douglas Crawford


Journal of Vision | 2018

Convolutional Network Approach to Modelling Allocentric Landmark Impact on Target Localization

Sohrab Salimian; Richard P. Wildes; John Douglas Crawford
