
Publication


Featured research published by Andrea M. Green.


Nature | 2004

Neurons compute internal models of the physical laws of motion.

Dora E. Angelaki; Aasef G. Shaikh; Andrea M. Green; J. David Dickman

A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.
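
A compact way to write the computation this abstract describes is the textbook tilt-translation formulation below; this is standard physics notation (signs vary across treatments), not notation taken from the paper itself:

    % Otoliths transduce gravito-inertial ("specific") force:
    f = a - g
    % Canal-sensed angular velocity \omega rotates an internal gravity
    % estimate carried in head coordinates:
    \dot{\hat{g}} = -\,\omega \times \hat{g}
    % so translational acceleration can be recovered as:
    \hat{a} = f + \hat{g}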


Neuron | 2007

Purkinje Cells in Posterior Cerebellar Vermis Encode Motion in an Inertial Reference Frame

Tatyana A. Yakusheva; Aasef G. Shaikh; Andrea M. Green; Pablo M. Blazquez; J. David Dickman; Dora E. Angelaki

The ability to orient and navigate through the terrestrial environment represents a computational challenge common to all vertebrates. It arises because motion sensors in the inner ear, the otolith organs, and the semicircular canals transduce self-motion in an egocentric reference frame. As a result, vestibular afferent information reaching the brain is inappropriate for coding our own motion and orientation relative to the outside world. Here we show that cerebellar cortical neuron activity in vermal lobules 9 and 10 reflects the critical computations of transforming head-centered vestibular afferent information into earth-referenced self-motion and spatial orientation signals. Unlike vestibular and deep cerebellar nuclei neurons, where a mixture of responses was observed, Purkinje cells represent a homogeneous population that encodes inertial motion. They carry the earth-horizontal component of a spatially transformed and temporally integrated rotation signal from the semicircular canals, which is critical for computing head attitude, thus isolating inertial linear accelerations during navigation.


Trends in Neurosciences | 2011

Learning to move machines with the mind

Andrea M. Green; John F. Kalaska

Brain-computer interfaces (BCIs) extract signals from neural activity to control remote devices ranging from computer cursors to limb-like robots. They show great potential to help patients with severe motor deficits perform everyday tasks without the constant assistance of caregivers. Understanding the neural mechanisms by which subjects use BCI systems could lead to improved designs and provide unique insights into normal motor control and skill acquisition. However, reports vary considerably about how much training is required to use a BCI system, the degree to which performance improves with practice and the underlying neural mechanisms. This review examines these diverse findings, their potential relationship with motor learning during overt arm movements, and other outstanding questions concerning the volitional control of BCI systems.


The Journal of Neuroscience | 2003

Resolution of sensory ambiguities for gaze stabilization requires a second neural integrator.

Andrea M. Green; Dora E. Angelaki

The ability to simultaneously move in the world and maintain stable visual perception depends critically on the contribution of vestibulo-ocular reflexes (VORs) to gaze stabilization. It is traditionally believed that semicircular canal signals drive compensatory responses to rotational head disturbances (rotational VOR), whereas otolith signals compensate for translational movements [translational VOR (TVOR)]. However, a sensory ambiguity exists because otolith afferents are activated similarly during head translations and reorientations relative to gravity (i.e., tilts). Extra-otolith cues are, therefore, necessary to ensure that dynamic head tilts do not elicit a TVOR. To investigate how extra-otolith signals contribute, we characterized the temporal and viewing distance-dependent properties of a TVOR elicited in the absence of a lateral acceleration stimulus to the otoliths during combined translational/rotational motion. We show that, in addition to otolith signals, angular head position signals derived by integrating sensory canal information drive the TVOR. A physiological basis for these results is proposed in a model with two distinct integration steps. Upstream of the well known oculomotor velocity-to-position neural integrator, the model incorporates a separate integration element that could represent the “velocity storage integrator,” whose functional role in the oculomotor system has so far remained controversial. We propose that a key functional purpose of the velocity storage network is to temporally integrate semicircular canal signals, so that they may be used to extract translation information from ambiguous otolith afferent signals in the natural and functionally relevant bandwidth of head movements.
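
A toy numerical sketch may make the two-integrator proposal concrete. Every signal, gain, and constant below is invented for illustration; only the block structure (velocity storage upstream, velocity-to-position integration downstream) follows the abstract:

    import numpy as np

    # Toy sketch of the two-integration-stage hypothesis; all values invented.
    dt = 1e-3
    t = np.arange(0.0, 2.0, dt)               # 2 s of simulated motion
    g = 9.81                                   # gravity, m/s^2

    omega = 0.5 * np.sin(2 * np.pi * t)        # canal signal: roll velocity, rad/s
    a_true = 0.2 * np.sin(2 * np.pi * t)       # true lateral acceleration, m/s^2

    # Integration 1 ("velocity storage"): canal velocity -> angular position,
    # an estimate of how far the head has tilted relative to gravity.
    tilt_hat = np.cumsum(omega) * dt

    # Otoliths sense tilt and translation identically (the ambiguity)...
    f = a_true - g * np.sin(tilt_hat)
    # ...so adding back the canal-derived tilt component isolates translation
    # (exact here by construction, since the sketch is noiseless).
    a_hat = f + g * np.sin(tilt_hat)

    # Simplified TVOR stage: eye velocity scales with the translation estimate
    # and with inverse viewing distance (0.5 m, arbitrary).
    eye_vel = -a_hat / 0.5

    # Integration 2: the downstream oculomotor velocity-to-position neural
    # integrator converts the eye-velocity command into eye position.
    eye_pos = np.cumsum(eye_vel) * dt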


Current Biology | 2005

Sensory Convergence Solves a Motion Ambiguity Problem

Aasef G. Shaikh; Andrea M. Green; Fatema Ghasia; Shawn D. Newlands; J. David Dickman; Dora E. Angelaki

Our inner ear is equipped with a set of linear accelerometers, the otolith organs, that sense the inertial accelerations experienced during self-motion. However, as Einstein pointed out nearly a century ago, this signal would by itself be insufficient to detect our real movement, because gravity (another form of linear acceleration) and self-motion are sensed identically by otolith afferents. To deal with this ambiguity, it was proposed that neural populations in the pons and midline cerebellum compute an independent, internal estimate of gravity using signals arising from the vestibular rotation sensors, the semicircular canals. This hypothesis, regarding a causal relationship between firing rates and postulated sensory contributions to inertial motion estimation, has been directly tested here by recording neural activities before and after inactivation of the semicircular canals. We show that, unlike cells in normal animals, the gravity component of neural responses was nearly absent in canal-inactivated animals. We conclude that, through integration of temporally matched, multimodal information, neurons derive the mathematical signals predicted by the equations describing the physics of the outside world.


Current Opinion in Neurobiology | 2010

Multisensory integration: resolving sensory ambiguities to build novel representations.

Andrea M. Green; Dora E. Angelaki

Multisensory integration plays several important roles in the nervous system. One is to combine information from multiple complementary cues to improve stimulus detection and discrimination. Another is to resolve peripheral sensory ambiguities and create novel internal representations that do not exist at the level of individual sensors. Here we focus on how ambiguities inherent in vestibular, proprioceptive and visual signals are resolved to create behaviorally useful internal estimates of our self-motion. We review recent studies that have shed new light on the nature of these estimates and how multiple, but individually ambiguous, sensory signals are processed and combined to compute them. We emphasize the need to combine experiments with theoretical insights to understand the transformations that are being performed.
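
The first role mentioned, combining complementary cues to improve detection and discrimination, is usually formalized as reliability-weighted averaging. The equations below are the standard ideal-observer result, not ones taken from this review; for two unbiased cues s_1 and s_2 with noise variances \sigma_1^2 and \sigma_2^2:

    \hat{s} = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\, s_1
            + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\, s_2,
    \qquad
    \operatorname{Var}(\hat{s})
      = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}
      \le \min(\sigma_1^2, \sigma_2^2)

The combined estimate is never less reliable than the better single cue, which is the sense in which integration improves detection and discrimination.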


The Journal of Neuroscience | 2007

A Reevaluation of the Inverse Dynamic Model for Eye Movements

Andrea M. Green; Hui Meng; Dora E. Angelaki

To construct an appropriate motor command from signals that provide a representation of desired action, the nervous system must take into account the dynamic characteristics of the motor plant to be controlled. In the oculomotor system, signals specifying desired eye velocity are thought to be transformed into motor commands by an inverse dynamic model of the eye plant that is shared for all types of eye movements and implemented by a weighted combination of eye velocity and position signals. Neurons in the prepositus hypoglossi and adjacent medial vestibular nuclei (PH-BT neurons) were traditionally thought to encode the “eye position” component of this inverse model. However, not only are PH-BT responses inconsistent with this theoretical role, but compensatory eye movement responses to translation do not show evidence for processing by a common inverse dynamic model. Prompted by these discrepancies between theoretical notions and experimental observations, we reevaluated these concepts using multiple-frequency rotational and translational head movements. Compatible with the notion of a common inverse model, we show that PH-BT responses are unique among all premotor cell types in bearing a consistent relationship to the motor output during eye movements driven by different sensory stimuli. However, because their responses are dynamically identical to those of motoneurons, PH-BT neurons do not simply represent an internal component of the inverse model, but rather its output. They encode and distribute an estimate of the motor command, a signal critical for accurate motor execution and learning.
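
The "weighted combination of eye velocity and position signals" has a simple computational reading, sketched below assuming a Robinson-style first-order eye plant; the constants k and r are placeholders, not values from the paper:

    import numpy as np

    # Inverse-model sketch: motor command = weighted sum of eye velocity
    # and (integrated) eye position. Plant constants are placeholders.
    dt = 1e-3
    t = np.arange(0.0, 1.0, dt)
    k, r = 4.0, 0.95                            # position and velocity weights

    v_des = 10.0 * np.sin(2 * np.pi * 2.0 * t)  # desired eye velocity, deg/s

    # The neural integrator supplies the eye-position component...
    E = np.cumsum(v_des) * dt
    # ...and the command sent to motoneurons is the weighted combination.
    motor_command = k * E + r * v_des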


The Cerebellum | 2010

Computation of egomotion in the macaque cerebellar vermis.

Dora E. Angelaki; Tatyana A. Yakusheva; Andrea M. Green; J. David Dickman; Pablo M. Blazquez

The nodulus and uvula (lobules X and IX of the vermis) receive mossy fibers from both vestibular afferents and vestibular nuclei neurons and are thought to play a role in spatial orientation. Their properties relate to a sensory ambiguity of the vestibular periphery: otolith afferents respond identically to translational (inertial) accelerations and changes in orientation relative to gravity. Based on theoretical and behavioral evidence, this sensory ambiguity is resolved using rotational cues from the semicircular canals. Recordings from the cerebellar cortex have identified a neural correlate of the brain's ability to resolve this ambiguity in the simple spike activities of nodulus/uvula Purkinje cells. This computation, which likely involves the cerebellar circuitry and its reciprocal connections with the vestibular nuclei, results from a remarkable convergence of spatially and temporally aligned otolith-driven and semicircular canal-driven signals. Such convergence requires a spatio-temporal transformation of head-centered canal-driven signals into an estimate of head reorientation relative to gravity. This signal must then be subtracted from the otolith-driven estimate of net acceleration to compute inertial motion. At present, Purkinje cells in the nodulus/uvula appear to encode the output of this computation. However, how the required spatio-temporal matching takes place within the cerebellar circuitry and what role complex spikes play in spatial orientation and disorientation remain unknown. In addition, the role of visual cues in driving and/or modifying simple and complex spike activity, a process potentially critical for long-term adaptation, constitutes another important direction for future studies.


Progress in Brain Research | 2007

Coordinate transformations and sensory integration in the detection of spatial orientation and self-motion: from models to experiments.

Andrea M. Green; Dora E. Angelaki

An accurate internal representation of our current motion and orientation in space is critical to navigate in the world and execute appropriate action. The force of gravity provides an allocentric frame of reference that defines one's motion relative to inertial (i.e., world-centered) space. However, movement in this environment also introduces particular motion detection problems as our internal linear accelerometers, the otolith organs, respond identically to either translational motion or changes in head orientation relative to gravity. According to physical principles, there exists an ideal solution to the problem of distinguishing between the two as long as the brain also has access to accurate internal estimates of angular velocity. Here, we illustrate how a nonlinear integrative neural network that receives sensory signals from the vestibular organs could be used to implement the required computations for inertial motion detection. The model predicts several distinct cell populations that are comparable with experimentally identified cell types and accounts for a number of previously unexplained characteristics of their responses. A key model prediction is the existence of cell populations that transform head-referenced rotational signals from the semicircular canals into spatially referenced estimates of head reorientation relative to gravity. This chapter provides an overview of how addressing the problem of inertial motion estimation from a computational standpoint has contributed to identifying the actual neuronal populations responsible for solving the tilt-translation ambiguity and has facilitated the interpretation of neural response properties.
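
The key prediction, cells that turn head-referenced canal signals into an estimate of head reorientation relative to gravity, boils down to integrating a cross product. A minimal discrete-time sketch (not the chapter's network implementation) follows:

    import numpy as np

    # Maintain an internal gravity estimate in head coordinates using
    # canal angular velocity alone; a sketch, not the chapter's model.
    def update_gravity(g_hat, omega, dt):
        """Advance the gravity estimate by one time step.
        g_hat: unit 3-vector, gravity direction in head coordinates.
        omega: canal-sensed angular velocity, rad/s, head coordinates."""
        g_hat = g_hat - dt * np.cross(omega, g_hat)   # dg/dt = -omega x g
        return g_hat / np.linalg.norm(g_hat)          # keep unit length

    g_hat = np.array([0.0, 0.0, -1.0])   # start upright: gravity along -z
    omega = np.array([0.5, 0.0, 0.0])    # constant 0.5 rad/s roll

    for _ in range(1000):                # 1 s at dt = 1 ms
        g_hat = update_gravity(g_hat, omega, 1e-3)

    # Combining g_hat with the otolith signal (sign depending on convention)
    # then isolates the translational component of acceleration.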


Otolaryngology-Head and Neck Surgery | 1998

Vestibular adaptation: How models can affect data interpretations

Henrietta L. Galiana; Andrea M. Green

Vestibular adaptation can be induced optically or by chemical or physical injury to the vestibular apparatus or the brain stem. In searching for the sites or mechanisms of vestibular adaptation, neurophysiologists often rely on comparing central resting (background) activities and central modulations (sensitivity) during vestibular stimulation, before and after motor learning or vestibular compensation. It is assumed that adapted central sites must exhibit modulation changes that parallel vestibulo-ocular reflex changes. Using model simulations and analysis, we will show that such presumptions may be misleading. First, using a simple schematic of interconnected cells or nuclei, one can show that modulation depth and background “tone” can be modified (or fixed) independently, using weightings on direct or indirect afferent projections. That is, if synaptic weights along all stimulus pathways are altered, one may fix or strongly modify central premotor characteristics in a manner apparently unrelated to global reflex changes. In the vestibulo-ocular reflex, the dominant premotor pathways contain position-vestibular-pause cells and eye-head-velocity cells (which are behaviorally similar to floccular-target neurons). Several experiments have reported negligible changes in the velocity sensitivity of position-vestibular-pause cells, despite large gain changes in the vestibulo-ocular reflex induced by training with visual-vestibular conflict. On the other hand, the modulation changes on floccular-target neurons (eye-head-velocity) can be much larger than the changes in reflex gain. Using a bilateral vestibulo-ocular reflex model, we show that overall increases or decreases in reflex gain can be expressed (even overexpressed) in one particular subgroup of premotor neurons. Nevertheless, such observations are theoretically compatible with synaptic changes on all primary projections in a widely interconnected central network. Hence, stable neural responses during reflex adaptation are not sufficient to exclude a potential site of sensory-motor adaptation. Similarly, modified neural responses (as in cerebellum) need not necessarily imply a direct role in supporting the adapted state. Model predictions should help to design additional experimental protocols, to test hypotheses, and to refine diagnostic measures of recovery after vestibular lesions.
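
The central caution, that unchanged central modulation does not rule out a site of adaptation, can be reproduced in a deliberately trivial two-pathway network. The numbers below are invented, and the paper's actual analysis uses a bilateral VOR model:

    # Toy two-pathway reflex: sensor s drives an interneuron n and, in
    # parallel, the motor output m. All numbers are invented.
    def reflex(s, a=1.0, b=0.6, c=0.4):
        n = a * s              # central neuron's modulation
        m = b * n + c * s      # motor output; reflex gain = a*b + c
        return n, m

    n0, m0 = reflex(1.0)           # baseline: gain 1.0
    n1, m1 = reflex(1.0, c=0.9)    # "adapt" only the direct pathway

    print(n0 == n1)   # True: the central neuron's modulation is unchanged...
    print(m0, m1)     # ...yet reflex gain rose from 1.0 to 1.5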

Collaboration


Dive into Andrea M. Green's collaborations.

Top Co-Authors


J. David Dickman

Washington University in St. Louis

Aasef G. Shaikh

Washington University in St. Louis

Pablo M. Blazquez

Washington University in St. Louis
