Publication


Featured research published by Douglas Tweed.


Vision Research | 1990

Geometric relations of eye position and velocity vectors during saccades.

Douglas Tweed; Tutis Vilis

Measurements of angular position and velocity vectors of the eye in three human and three monkey subjects showed that: (1) position vectors lie roughly in a single plane, in accordance with Listing's law, between and during saccades; (2) the primary position of the eye is often far from the centre of the oculomotor range; (3) saccades have nearly fixed rotation axes, which tilt out of Listing's plane in a systematic way depending on current eye position. Findings 1 and 3 show that saccadic control signals accurately reflect the properties of three-dimensional rotations, as predicted by a new quaternion model of the saccadic system; models that approximate rotational kinematics using vectorial addition and integration do not predict these findings.
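Finding 3 can be illustrated with a small quaternion sketch (the 30° positions below are invented for illustration, not the paper's data): both endpoint positions lie in Listing's plane, with zero torsion, yet the rotation connecting them has an axis that tilts out of that plane.

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

# Two eye positions in Listing's plane (zero torsional x component):
# 30 deg leftward (rotation about z) and 30 deg upward (rotation about y)
c, s = np.cos(np.deg2rad(15)), np.sin(np.deg2rad(15))
q_left = np.array([c, 0.0, 0.0, s])
q_up   = np.array([c, 0.0, s, 0.0])

# The single rotation carrying the eye from q_left to q_up
dq = qmul(q_up, qconj(q_left))
axis = dq[1:] / np.linalg.norm(dq[1:])
print(axis)  # nonzero torsional (first) component: the axis tilts out of Listing's plane
```

A purely vectorial model would subtract the two position vectors and predict a torsion-free axis; the quaternion product shows why that prediction fails.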


Vision Research | 1990

Computing three-dimensional eye position quaternions and eye velocity from search coil signals

Douglas Tweed; W. Cadera; Tutis Vilis

The four-component rotational operators called quaternions, which represent eye rotations in terms of their axes and angles, have several advantages over other representations of eye position (such as Fick coordinates): they provide easy computations, symmetry, a simple form for Listing's law, and useful three-dimensional plots of eye movements. In this paper we present algorithms for computing eye position quaternions and eye angular velocity (not the derivative of position in three dimensions) from two search coils (not necessarily orthogonal) on one eye in two or three magnetic fields, and for locating primary position using quaternions. We show how differentiation of eye position signals yields poor estimates of all three components of eye velocity.
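The distinction between angular velocity and the derivative of eye position can be sketched with the standard quaternion identity ω = 2 q̇ ⊗ q* (this is a generic kinematic relation, not the paper's coil algorithm, and the motion below is made up for illustration):

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotation_quat(axis, angle):
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle/2)], np.sin(angle/2) * axis])

# Eye rotating at a constant 100 deg/s about the vertical (y) axis
speed = np.deg2rad(100.0)
t, dt = 0.05, 1e-5
q  = rotation_quat([0, 1, 0], speed * t)
q2 = rotation_quat([0, 1, 0], speed * (t + dt))

qdot = (q2 - q) / dt                      # derivative of eye position
omega = 2.0 * qmul(qdot, qconj(q))[1:]    # angular velocity, rad/s
print(np.rad2deg(omega))
```

The quaternion derivative q̇ by itself is a four-component quantity with no direct physical units; only after the product with the conjugate position does it become the three-component angular velocity.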


Nature | 2003

Optimal transsaccadic integration explains distorted spatial perception

Matthias Niemeier; J. Douglas Crawford; Douglas Tweed

We scan our surroundings with quick eye movements called saccades, and from the resulting sequence of images we build a unified percept by a process known as transsaccadic integration. This integration is often said to be flawed, because around the time of saccades, our perception is distorted and we show saccadic suppression of displacement (SSD): we fail to notice if objects change location during the eye movement. Here we show that transsaccadic integration works by optimal inference. We simulated a visuomotor system with realistic saccades, retinal acuity, motion detectors and eye-position sense, and programmed it to make optimal use of these imperfect data when interpreting scenes. This optimized model showed human-like SSD and distortions of spatial perception. It made new predictions, including tight correlations between perception and motor action (for example, more SSD in people with less-precise eye control) and a graded contraction of perceived jumps; we verified these predictions experimentally. Our results suggest that the brain constructs its evolving picture of the world by optimally integrating each new piece of sensory or motor information.
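The optimal-inference idea can be sketched in miniature: when two independent noisy cues (say, a retinal estimate and an eye-position estimate) report the same quantity, the maximum-likelihood fusion weights each by its precision. The noise levels below are invented for illustration, not the model's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = 5.0                            # true target location (deg)
sigma_retina, sigma_efcopy = 1.0, 2.0     # assumed cue noise levels

n = 100_000
retina = true_pos + rng.normal(0, sigma_retina, n)
efcopy = true_pos + rng.normal(0, sigma_efcopy, n)

# Optimal fusion: weight each cue by its inverse variance (precision)
w = (1/sigma_retina**2) / (1/sigma_retina**2 + 1/sigma_efcopy**2)
fused = w * retina + (1 - w) * efcopy

print(retina.std(), efcopy.std(), fused.std())  # fused beats either cue alone
```

A side effect of this weighting is exactly the kind of bias the paper reports: when one cue says an object jumped and the more precise cue says it did not, the fused percept is pulled towards "no jump", producing graded contraction of perceived displacements.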


Vision Research | 1997

Visual-motor optimization in binocular control.

Douglas Tweed

When we view objects at various depths, the 3-D rotations of our two eyes are neurally yoked in accordance with a recently discovered geometric rule, here called the binocular extension of Listing's law, or L2. This paper examines the visual and motor consequences of this rule. Although L2 is a generalization of Listing's original, monocular law, it does not follow from current theories of the latter's function, which involve minimizing muscle work or optimizing certain aspects of retinal image flow. This study shows that a new optimization strategy that combines stereo vision with motor efficiency does explain L2, and describes the predictions of this new theory. Contrary to recent suggestions in the literature, L2 does not ensure vision of lines orthogonal to the visual plane, but rather reduces cyclodisparity of the visual plane itself; and L2 does not arise because a single, conjugate angular velocity command is sent to both eyes, but actually requires that the two eyes rotate with different speeds and axes when scanning an isovergence surface. This study shows that L2 is compatible with a 1-D control system for vergence alone (because horizontal and torsional vergence are yoked) and a 3-D system for combined, head-fixed saccades and vergence.


Vision Research | 1992

Three-dimensional properties of human pursuit eye movements

Douglas Tweed; Michael Fetter; S. Andreadaki; E. Koenig; Johannes Dichgans

For any given location and velocity of a point target, there are infinitely many different eye velocities that the pursuit system could use to track the target perfectly. Three-dimensional recordings of eye position and velocity in 8 normal human subjects showed that the system chooses the unique tracking velocity that keeps eye position vectors (a particular mathematical representation of three-dimensional eye orientation) confined to a single plane, i.e. pursuit obeys Listing's law. One advantage of this strategy over other possible ones, such as choosing the smallest eye velocity compatible with perfect tracking, is that it permits continuous pursuit without accumulation of ocular torsion. For nonpoint targets, there is at most one eye velocity compatible with perfect retinal image stabilisation, and the optimal velocity may not fit Listing's law; we observed small but consistent deviations from the law during pursuit of rotating line targets.
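Whether recorded eye positions obey Listing's law can be checked by fitting a plane to the rotation vectors and measuring the residual "thickness". A minimal sketch with synthetic data (the ranges and noise level are invented, not the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic eye positions as rotation vectors r = tan(theta/2) * axis,
# ordered (torsion, vertical, horizontal), with torsion held near zero
horizontal = rng.uniform(-0.2, 0.2, 200)   # roughly +/- 22 deg
vertical   = rng.uniform(-0.2, 0.2, 200)
torsion    = rng.normal(0, 0.005, 200)     # small out-of-plane noise
r = np.column_stack([torsion, vertical, horizontal])

# Best-fit plane: its normal is the singular vector with the smallest
# singular value; plane thickness = spread of points along that normal
centered = r - r.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
normal = vt[-1]
thickness = (centered @ normal).std()
print(normal, np.rad2deg(2 * thickness))   # normal near the torsion axis; sub-degree thickness
```

For data that truly follow Listing's law the fitted normal points along the torsional axis and the thickness stays well under a degree; systematic deviations, like those seen with rotating line targets, show up as extra thickness or a tilted normal.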


Nature | 2001

The motor side of depth vision

Kai M. Schreiber; J. Douglas Crawford; Michael Fetter; Douglas Tweed

To achieve stereoscopic vision, the brain must search for corresponding image features on the two retinas. As long as the eyes stay still, corresponding features are confined to narrow bands called epipolar lines. But when the eyes change position, the epipolar lines migrate on the retinas. To find the matching features, the brain must either search different retinal bands depending on current eye position, or search retina-fixed zones that are large enough to cover all usual locations of the epipolar lines. Here we show, using a new type of stereogram in which the depth image vanishes at certain gaze elevations, that the search zones are retina-fixed. This being the case, motor control acquires a crucial function in depth vision: we show that the eyes twist about their lines of sight in a way that reduces the motion of the epipolar lines, allowing stereopsis to get by with smaller search zones and thereby lightening its computational load.


Neural Networks | 1990

The superior colliculus and spatiotemporal translation in the saccadic system

Douglas Tweed; Tutis Vilis

The superior colliculus (SC) plays an important part in generating saccadic eye movements, sending signals coding desired eye rotation to the brainstem. These signals must be translated from the topographic (spatial) representation used in the SC to the firing frequency (temporal) code used downstream. We show that a model of the saccadic system using the quaternion representation of eye rotations yields a spatiotemporal translation with all the experimentally observed properties: activation of a particular site in the SC generates a saccade of a particular amplitude and direction; activation of multiple sites evokes a vector average (weighted by activity levels) of the saccades coded by the individual sites; the intensity and temporal profile of activation determine saccade speed but not metrics. The feature of the model that is essential to these results is a particular sort of redundancy in the quaternion representation, coupled with multiplicative downstream handling of SC outputs.
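The multi-site behaviour described above reduces to an activity-weighted vector average. A minimal sketch (site vectors and activity levels are hypothetical numbers, purely for illustration):

```python
import numpy as np

# Saccade vectors coded by three simultaneously active collicular sites,
# as (horizontal, vertical) amplitudes in degrees
site_vectors = np.array([[10.0, 0.0],
                         [0.0, 10.0],
                         [5.0, 5.0]])
activity = np.array([2.0, 1.0, 1.0])   # activation level at each site

# Multi-site activation evokes the activity-weighted vector average
saccade = (activity[:, None] * site_vectors).sum(axis=0) / activity.sum()
print(saccade)  # [6.25, 3.75]
```

Note that the average is normalized by total activity, which is why overall activation intensity changes saccade speed but not its amplitude or direction.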


Nature Communications | 2016

Random synaptic feedback weights support error backpropagation for deep learning

Timothy P. Lillicrap; Daniel Cownden; Douglas Tweed; Colin J. Akerman

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
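A minimal sketch of the mechanism (feedback alignment): in the backward pass, a fixed random matrix B stands in where backpropagation would use the transposed forward weights. The toy regression task, network sizes, and learning rate below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: learn a random linear map through a one-hidden-layer net
n_in, n_hid, n_out, n = 10, 30, 5, 256
T = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)
X = rng.normal(size=(n, n_in))
Y = X @ T.T

W1 = rng.normal(size=(n_hid, n_in)) * 0.1
W2 = rng.normal(size=(n_out, n_hid)) * 0.1
B  = rng.normal(size=(n_hid, n_out)) * 0.1   # fixed random feedback weights

lr, losses = 0.05, []
for step in range(2000):
    H = np.tanh(X @ W1.T)                  # hidden layer
    e = H @ W2.T - Y                       # output error
    losses.append(np.mean(e**2))
    # Backpropagation would use e @ W2 here; feedback alignment uses random B
    dH = (e @ B.T) * (1.0 - H**2)
    W2 -= lr * e.T @ H / n
    W1 -= lr * dH.T @ X / n

print(losses[0], losses[-1])   # loss falls despite the random backward weights
```

The reason this works, roughly, is that the forward weights gradually adjust until the useful error information carried by B aligns with the true gradient direction.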


Nature | 1999

Non-commutativity in the brain

Douglas Tweed; Thomas Haslwanter; Vera Happe; Michael Fetter

In non-commutative algebra, order makes a difference to multiplication, so that a × b ≠ b × a (refs 1, 2). This feature is necessary for computing rotary motion, because order makes a difference to the combined effect of two rotations. It has therefore been proposed that there are non-commutative operators in the brain circuits that deal with rotations, including motor circuits that steer the eyes, head and limbs, and sensory circuits that handle spatial information. This idea is controversial: studies of eye and head control have revealed behaviours that are consistent with non-commutativity in the brain, but none that clearly rules out all commutative models. Here we demonstrate non-commutative computation in the vestibulo-ocular reflex. We show that subjects rotated in darkness can hold their gaze points stable in space, correctly computing different final eye-position commands when put through the same two rotations in different orders, in a way that is unattainable by any commutative system.
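That order matters for rotations is easy to verify directly with quaternions (the 90° rotations below are arbitrary illustrative choices):

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotation_quat(axis, deg):
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    half = np.deg2rad(deg) / 2
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

pitch = rotation_quat([0, 1, 0], 90)   # 90 deg about the y axis
roll  = rotation_quat([1, 0, 0], 90)   # 90 deg about the x axis

print(qmul(pitch, roll))
print(qmul(roll, pitch))   # different quaternion: the final orientations differ
```

A head put through pitch-then-roll ends up oriented differently from one put through roll-then-pitch, which is why a gaze-stabilizing system that merely added or integrated rotation vectors would point the eyes at the wrong place after one of the two sequences.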


Experimental Brain Research | 1992

The influence of head position and head reorientation on the axis of eye rotation and the vestibular time constant during postrotatory nystagmus

Michael Fetter; Douglas Tweed; W. Hermann; B. Wohland-Braun; E. Koenig

Reorienting the head with respect to gravity during the postrotatory period alters the time course of postrotatory nystagmus (PRN), hastening its decline and thereby reducing the calculated vestibular time constant. One explanation for this phenomenon is that the head reorientation results in a corresponding reorientation of the axis of eye rotation with respect to head coordinates. This possibility was investigated in 10 human subjects whose eye movements were monitored with a three-dimensional magnetic-field search-coil technique, using a variety of head-reorientation paradigms in randomized order during PRN following the termination of a 90°/s rotation about earth vertical. Average eye velocities were calculated over two time intervals: from 1 s to 2 s and from 7 s to 8 s after cessation of head rotation. The time constant was estimated as one third of the duration of PRN. For most conditions, a reorientation of the head with respect to gravity 2 s after the rotation had stopped did not significantly alter the direction of the eye velocity vector of PRN with respect to head coordinates. This strongly indicates that, in humans, PRN is mainly stabilized in head coordinates and not in space coordinates, even when the otolith input changes. This finding invalidates the notion that the shortening of PRN with head reorientation could be due to a change of the eye velocity vector towards a direction (torsion) that is not detectable with the eye recording methods (electrooculography) used in earlier studies. The results regarding the vestibular time constant basically confirm earlier findings, showing a strong dependence on static head position, with the time constant being lowest when mainly the vertical canals are stimulated (60° nose up and 90° left ear down). In addition, the time constant was drastically shortened for tilts away from upright.
The reduction in the vestibular time constant with head reorientation cannot be explained solely by the dependence of the time constant on static head position. A clear example is provided by head reorientations back towards the upright position, which result in a decrease in the time constant rather than the increase that would be expected on the basis of static head position.
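The one-third rule for estimating the time constant can be sketched as follows. If PRN slow-phase velocity decays exponentially and remains detectable down to about e⁻³ (roughly 5%) of its initial value, the observed duration is about three time constants; the decay interpretation and all numbers here are illustrative assumptions, not the paper's data.

```python
import numpy as np

v0, T = 60.0, 15.0                 # initial slow-phase velocity (deg/s), true time constant (s)
threshold = v0 * np.exp(-3)        # assumed detection threshold (~3 deg/s)

t = np.arange(0, 120, 0.1)
v = v0 * np.exp(-t / T)            # exponentially decaying nystagmus velocity
duration = t[v > threshold][-1]    # last moment the nystagmus is still detectable

print(duration / 3)                # recovers roughly the true time constant of 15 s
```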

Collaboration


Douglas Tweed's top co-authors:

Tutis Vilis (University of Western Ontario)
James A. Sharpe (University Health Network)
H. Misslisch (University of Tübingen)