Publications


Featured research published by J. Douglas Crawford.


Nature | 2003

Optimal transsaccadic integration explains distorted spatial perception

Matthias Niemeier; J. Douglas Crawford; Douglas Tweed

We scan our surroundings with quick eye movements called saccades, and from the resulting sequence of images we build a unified percept by a process known as transsaccadic integration. This integration is often said to be flawed, because around the time of saccades, our perception is distorted and we show saccadic suppression of displacement (SSD): we fail to notice if objects change location during the eye movement. Here we show that transsaccadic integration works by optimal inference. We simulated a visuomotor system with realistic saccades, retinal acuity, motion detectors and eye-position sense, and programmed it to make optimal use of these imperfect data when interpreting scenes. This optimized model showed human-like SSD and distortions of spatial perception. It made new predictions, including tight correlations between perception and motor action (for example, more SSD in people with less-precise eye control) and a graded contraction of perceived jumps; we verified these predictions experimentally. Our results suggest that the brain constructs its evolving picture of the world by optimally integrating each new piece of sensory or motor information.
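At its core, the optimal-inference idea above is reliability-weighted cue combination. The Python sketch below is a minimal stand-in, not the authors' simulation (which included realistic saccades, retinal acuity, motion detectors, and eye-position sense); function names and parameter values are illustrative. It shows how inverse-variance weighting fuses pre- and post-saccadic location estimates, and how a zero-mean "objects stay put" prior yields the graded contraction of perceived jumps, with more contraction (more SSD) as the displacement signal gets noisier.

```python
import numpy as np

def integrate_transsaccadic(pre_loc, post_loc, sigma_pre, sigma_post):
    """Inverse-variance (reliability-weighted) fusion of pre- and
    post-saccadic location estimates -- a minimal stand-in for
    optimal transsaccadic integration."""
    w_pre, w_post = 1.0 / sigma_pre**2, 1.0 / sigma_post**2
    fused = (w_pre * pre_loc + w_post * post_loc) / (w_pre + w_post)
    return fused, np.sqrt(1.0 / (w_pre + w_post))

def perceived_jump(measured_jump, sigma_meas, sigma_prior):
    """Posterior mean of a target jump under a zero-mean 'objects stay
    put' prior: small jumps are contracted toward zero (SSD), and the
    contraction grows as the displacement measurement gets noisier,
    e.g. with less-precise eye-position sense."""
    gain = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return gain * measured_jump

# Example: a 1-degree jump is mostly 'explained away' as sensor noise.
print(perceived_jump(1.0, sigma_meas=2.0, sigma_prior=0.5))  # ~0.06 deg
```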


Annual Review of Neuroscience | 2011

Three-Dimensional Transformations for Goal-Directed Action

J. Douglas Crawford; Denise Y. P. Henriques; W. Pieter Medendorp

Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
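As a worked example of the rotational geometry the authors emphasize, here is a minimal Python sketch of transforming a gaze-centered goal vector into body coordinates by composing eye-in-head and head-on-body orientation signals. The frame conventions and single-axis rotation are simplifications for illustration, not the paper's notation.

```python
import numpy as np

def rot_z(deg):
    """Rotation about a single axis, for illustration; the full 3-D
    case composes rotations about all three axes (or uses quaternions)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def gaze_to_body(target_gaze, R_eye_in_head, R_head_on_body):
    """Rotate a gaze-centered goal vector into body coordinates by
    composing eye and head orientation."""
    return R_head_on_body @ (R_eye_in_head @ target_gaze)

# A target straight ahead in gaze coordinates, with the eye rotated
# 20 degrees in the head and the head 10 degrees on the body:
goal = gaze_to_body(np.array([1.0, 0.0, 0.0]), rot_z(20.0), rot_z(10.0))
```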


The Journal of Neuroscience | 2010

Specificity of Human Parietal Saccade and Reach Regions during Transcranial Magnetic Stimulation

Michael Vesia; Steven L. Prime; Xiaogang Yan; Lauren E. Sergio; J. Douglas Crawford

Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or a reach with the left or right hand. Behavioral measures were then compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans.


Vision Research | 2001

Ocular dominance reverses as a function of horizontal gaze angle.

Aarlenne Z. Khan; J. Douglas Crawford

Ocular dominance is the tendency to prefer visual input from one eye to the other [e.g. Porac, C. & Coren, S. (1976). The dominant eye. Psychological Bulletin 83(5), 880-897]. In standard sighting tests, most people consistently fall into either the left- or right-eye-dominant category [Miles, W. R. (1930). Ocular dominance in human adults. Journal of General Psychology 3, 412-420]. Here we show that this static concept is flawed, because it is based on the limited results of sighting with gaze pointed straight ahead. In a reach-grasp task for targets within the binocular visual field, subjects switched between left and right eye dominance depending on horizontal gaze angle. On average, ocular dominance switched at gaze angles of only 15.5 degrees off center.
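A toy predictor capturing the reported average switch point is sketched below in Python. The direction of the switch and the behavior near the transition are assumptions for illustration; the abstract reports only the average 15.5-degree switch angle, and individual subjects vary.

```python
def dominant_eye(horizontal_gaze_deg, switch_deg=15.5):
    """Toy model of gaze-dependent sighting dominance. Positive angles
    denote rightward gaze. Assumption for illustration: beyond the
    average switch point, dominance shifts to the eye on the side of
    gaze; within it, the subject's habitual dominant eye prevails."""
    if horizontal_gaze_deg > switch_deg:
        return "right"
    if horizontal_gaze_deg < -switch_deg:
        return "left"
    return "habitual (subject-specific)"
```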


Experimental Brain Research | 2012

Specialization of reach function in human posterior parietal cortex

Michael Vesia; J. Douglas Crawford

Posterior parietal cortex (PPC) plays an important role in the planning and control of goal-directed action. Single-unit studies in monkeys have identified reach-specific areas in the PPC, but the degree of effector and computational specificity for reach in the corresponding human regions is still under debate. Here, we review converging evidence spanning functional neuroimaging, parietal patient and transcranial magnetic stimulation studies in humans that suggests a functional topography for reach within human PPC. We contrast reach to saccade and grasp regions to distinguish functional specificity and also to understand how these different goal-directed actions might be coordinated at the cortical level. First, we present the current evidence for reach specificity in distinct modules in PPC, namely superior parietal occipital cortex, midposterior intraparietal cortex and angular gyrus, compared to saccade and grasp. Second, we review the evidence for hemispheric lateralization (both for hand and visual hemifield) in these reach representations. Third, we review evidence for computational reach specificity in these regions and finally propose a functional framework for these human PPC reach modules that includes (1) a distinction between the encoding of reach goals in posterior–medial PPC as opposed to reach movement vectors in more anterior–lateral PPC regions, and (2) their integration within a broader cortical framework for reach, grasp and eye–hand coordination. These findings represent both a confirmation and extension of findings that were previously reported for the monkey.


Nature | 2001

The motor side of depth vision

Kai M. Schreiber; J. Douglas Crawford; Michael Fetter; Douglas Tweed

To achieve stereoscopic vision, the brain must search for corresponding image features on the two retinas. As long as the eyes stay still, corresponding features are confined to narrow bands called epipolar lines. But when the eyes change position, the epipolar lines migrate on the retinas. To find the matching features, the brain must either search different retinal bands depending on current eye position, or search retina-fixed zones that are large enough to cover all usual locations of the epipolar lines. Here we show, using a new type of stereogram in which the depth image vanishes at certain gaze elevations, that the search zones are retina-fixed. This being the case, motor control acquires a crucial function in depth vision: we show that the eyes twist about their lines of sight in a way that reduces the motion of the epipolar lines, allowing stereopsis to get by with smaller search zones and thereby lightening its computational load.
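The epipolar constraint described above has a compact standard form in two-view geometry. A minimal Python sketch, using conventional computer-vision notation rather than the paper's: given the right eye's rotation R and translation t relative to the left eye, matches for a left-retina point must lie on the line E @ x, with essential matrix E = [t]x R. Changing R (eye orientation, including torsion about the line of sight) moves that line, which is why counter-rotating the eyes can keep retina-fixed search zones small.

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0,  -t[2],  t[1]],
                     [t[2],  0.0,  -t[0]],
                     [-t[1], t[0],  0.0]])

def epipolar_line(x_left, R, t):
    """Line in the right retina on which matches for the left-retina
    point x_left (homogeneous coordinates) must lie. R and t give the
    right eye's pose relative to the left; E = [t]x R is the essential
    matrix. Rotating the eyes (changing R) migrates this line."""
    E = skew(t) @ R
    return E @ x_left
```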


Neuron | 2004

Frames of reference for eye-head gaze commands in primate supplementary eye fields.

Julio C. Martinez-Trujillo; W. Pieter Medendorp; Hongying Wang; J. Douglas Crawford

The supplementary eye field (SEF) is a region within medial frontal cortex that integrates complex visuospatial information and controls eye-head gaze shifts. Here, we test if the SEF encodes desired gaze directions in a simple retinal (eye-centered) frame, such as the superior colliculus, or in some other, more complex frame. We electrically stimulated 55 SEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. Each stimulation site specified a specific spatial goal when plotted in its intrinsic frame. These intrinsic frames varied site by site, in a continuum from eye-, to head-, to space/body-centered coding schemes. This variety of coding schemes provides the SEF with a unique potential for implementing arbitrary reference frame transformations.
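The key analysis step, rotating evoked gaze trajectories into candidate reference frames, can be sketched as follows in Python. The convergence metric here is an assumption for illustration; the paper's actual analysis is more involved.

```python
import numpy as np

def to_frame(traj_space_fixed, R_frame):
    """Express space-fixed 3-D trajectory samples (an N x 3 array) in a
    candidate frame (eye-, head-, or body-fixed): rows @ R applies R^T
    to each row, i.e. the inverse rotation for an orthonormal R."""
    return traj_space_fixed @ R_frame

def endpoint_scatter(endpoints):
    """Illustrative convergence score: if gaze endpoints evoked from one
    stimulation site converge on a single spatial goal in some frame,
    their variance is lowest in that frame (simplified assumption)."""
    return float(np.mean(np.var(endpoints, axis=0)))
```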


Cerebral Cortex | 2009

Decoding the Cortical Transformations for Visually Guided Reaching in 3D Space

Gunnar Blohm; Gerald P. Keith; J. Douglas Crawford

To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
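A minimal stand-in for the network architecture is sketched below in Python/NumPy. The layer sizes, the input encoding, and the omission of training are all simplifications; the study trained the weights to minimize reach-vector error and then probed the intermediate units.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(sizes):
    """Random weights for a small feed-forward net. The study trained
    the weights; training is omitted here to keep the sketch short."""
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Forward pass: tanh hidden units, linear output (a 3-D reach
    vector). The reference-frame analyses probe the hidden units of a
    trained network of roughly this shape."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

# Assumed input encoding (illustrative): retinal hand position, retinal
# target position, eye orientation, head orientation -> reach vector.
net = make_net([12, 40, 40, 3])   # input, two intermediate layers, output
reach = forward(net, rng.standard_normal(12))
```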


Journal of Vision | 2007

Computations for geometrically accurate visually guided reaching in 3-D space.

Gunnar Blohm; J. Douglas Crawford

A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute the visual (gaze-centered) desired movement vector. It then transformed these eye-centered plans into shoulder-centered motor plans using extraretinal eye and head position signals accounting for the complete 3-D eye-in-head and head-on-shoulder geometry (i.e., translation and rotation). We compared actual memory-guided reaching performance to the predictions of the model. By removing extraretinal signals (i.e., eye-head rotations and the offset between the centers of rotation of the eye and head) from the model, we developed a compensation index describing how accurately the brain performs the 3-D visuomotor transformation for different head-restrained and head-unrestrained gaze positions as well as for eye and head roll. Overall, subjects did not show the errors predicted when extraretinal signals were ignored. Their reaching performance was accurate, and the compensation index revealed that subjects accounted for the 3-D visuomotor transformation geometry. This was also the case for the initial portion of the movement (before proprioceptive feedback), indicating that the desired reach plan is computed in a feed-forward fashion. These findings show that the visuomotor transformation for reaching implements an internal model of the complete eye-to-shoulder linkage geometry and does not rely only on feedback control mechanisms. We discuss the relevance of this model in predicting reaching behavior in several patient groups.
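The geometric core of the model, mapping eye-centered hand and target positions into a shoulder-centered reach vector via eye and head orientations plus the eye-head translation offset, can be sketched in Python as follows. Variable names and frame conventions are illustrative, not the paper's notation.

```python
import numpy as np

def point_to_shoulder(p_eye, R_eye_in_head, c_eye, R_head_on_shoulder, c_head):
    """Express a point given in eye (gaze-centered) coordinates in
    shoulder coordinates: rotate by eye-in-head orientation, translate
    by the offset between the eye's and head's centers of rotation,
    then rotate by head-on-shoulder orientation and translate to the
    shoulder."""
    p_head = R_eye_in_head @ p_eye + c_eye
    return R_head_on_shoulder @ p_head + c_head

def reach_plan(target_eye, hand_eye, *frame):
    """Shoulder-centered desired movement vector: transform both the
    target and the current hand position, then take the difference."""
    return point_to_shoulder(target_eye, *frame) - point_to_shoulder(hand_eye, *frame)
```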


Experimental Brain Research | 1993

Modularity and parallel processing in the oculomotor integrator

J. Douglas Crawford; Tutis Vilis

The neural signals that hold eye position originate in a brainstem structure called the neural integrator, so-called because it is thought to compute these position signals using a process equivalent to mathematical integration. Most previous experiments have assumed that the neural integrator reacts to damage like a single mathematical integrator: the eye is expected to drift towards a unique resting point at a simple exponential rate dependent on current eye position. Physiologically, this would require a neural network with uniformly distributed internal connections. However, Cannon et al. (1983) proposed a more robust modular internal configuration, with dense local connections and sparse remote connections, computationally equivalent to a parallel array of independent sub-integrators. Damage to some sub-integrators would not affect function in the others, so that part of the position signal would remain intact, and a more complex pattern of drift would result. We evaluated this parallel integrator hypothesis by recording three-dimensional eye positions in the light and dark from five alert monkeys with partial neural integrator failure. Our previous study showed that injection of the inhibitory γ-aminobutyric acid agonist muscimol into the mesencephalic interstitial nucleus of Cajal (INC) causes almost complete failure of the integrators for vertical and torsional eye position after ∼30 min. This study examines the more modest initial effects. Several aspects of the initial vertical drift could not be accounted for by the single integrator scheme. First, the eye did not initially drift towards a single resting position; rapid but brief drift was observed towards multiple resting positions. With time after the muscimol injection, this range of stable eye positions progressively narrowed until it eventually approximated a single point. Second, the drift had multiple time constants. Third, multiple regression analysis revealed a significant correlation between drift rate and magnitude of the previous saccade, in addition to a correlation between drift rate and position. This saccade dependence enabled animals to stabilize gaze by making a series of saccades to the same target, each with less post-saccadic drift than its predecessor. These observations were predicted and explained by a model in which each of several parallel integrators generated a fraction of the eye-position command. Drift was simulated by setting the internal gain of some integrators at one (perfect integration), others at slightly less than one (imperfect integration), and the remainder at zero (no integration), as expected during partial damage to an anatomically modular network. These results support the previous suggestion that internal connections within the neural integrator network are restricted to local modules. The advantages of this modular configuration are a relative immunity to random local computational errors and partial conservation of function after damage. Similar computational advantages may be an important consequence of the modular patterns of connectivity observed throughout the brain.
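The parallel-integrator account lends itself to a compact simulation. The Python sketch below shows how a bank of sub-integrators with mixed internal gains (one, slightly less than one, zero) yields drift with multiple time constants toward a nonzero plateau; the gains, time constant, and module count are illustrative, not fitted to the monkey data.

```python
import numpy as np

def simulate_drift(gains, pulse, t, tau=20.0):
    """Eye position over time from a parallel array of sub-integrators,
    each holding an equal share of an initial position command 'pulse'.
    Internal gain g = 1 holds perfectly, 0 < g < 1 leaks exponentially
    with time constant tau / (1 - g), and g = 0 has failed outright."""
    pos = np.zeros_like(t)
    share = pulse / len(gains)
    for g in gains:
        if g >= 1.0:
            pos += share                                  # perfect: no drift
        elif g > 0.0:
            pos += share * np.exp(-(1.0 - g) * t / tau)   # leaky: decays
        # g == 0: this module contributes nothing
    return pos

t = np.linspace(0.0, 60.0, 600)                # seconds
healthy = simulate_drift([1, 1, 1, 1], pulse=20.0, t=t)
partial = simulate_drift([1, 0.9, 0.5, 0], pulse=20.0, t=t)
# 'partial' drifts with multiple time constants toward a nonzero
# plateau (the intact modules' share), unlike a single integrator.
```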

Collaboration


Top co-authors of J. Douglas Crawford:

Xiaogang Yan (Canadian Institutes of Health Research)
Jacobus Dessing (Radboud University Nijmegen)
Eliana M. Klier (Washington University in St. Louis)