Publications


Featured research published by Jessica X. Brooks.


The Journal of Neuroscience | 2009

Multimodal Integration in Rostral Fastigial Nucleus Provides an Estimate of Body Movement

Jessica X. Brooks; Kathleen E. Cullen

The ability to accurately control posture and perceive self-motion and spatial orientation requires knowledge of the motion of both the head and body. However, whereas the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the convergence of vestibular and neck proprioceptive inputs during self-motion is generally believed to underlie the ability to compute body motion. Here, we provide evidence that the brain explicitly computes an internal estimate of body motion at the level of single cerebellar neurons. Neuronal responses were recorded from the rostral fastigial nucleus, the most medial of the deep cerebellar nuclei, during whole-body, body-under-head, and head-on-body rotations. We found that approximately half of the neurons encoded the motion of the body in space, whereas the other half encoded the motion of the head in space in a manner similar to neurons in the vestibular nuclei. Notably, neurons encoding body motion responded to both vestibular and proprioceptive stimulation (accordingly termed bimodal neurons). In contrast, neurons encoding head motion were sensitive only to vestibular inputs (accordingly termed unimodal neurons). Comparison of the proprioceptive and vestibular responses of bimodal neurons further revealed similar tuning in response to changes in head-on-body position. We propose that the similarity in nonlinear processing of vestibular and proprioceptive signals underlies the accurate computation of body motion. Furthermore, the same neurons that encode body motion (i.e., bimodal neurons) most likely encode vestibular signals in a body-referenced coordinate frame, since the integration of proprioceptive and vestibular information is required for both computations.
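The body-motion computation described above reduces to a subtraction of two velocity signals: body-in-space = head-in-space − head-on-body. A minimal sketch of this idea in Python (the function and gains below are illustrative, not the recorded neurons' actual transfer functions):

```python
# A minimal sketch (not the authors' model): if a hypothetical bimodal neuron
# combines vestibular input (head-in-space velocity) with an oppositely
# signed, similarly tuned neck-proprioceptive input (head-on-body velocity),
# its output tracks body-in-space velocity, since
#   body_in_space = head_in_space - head_on_body.

def bimodal_neuron(head_in_space_vel, head_on_body_vel,
                   vestibular_gain=1.0, proprioceptive_gain=1.0):
    """Firing-rate modulation of a hypothetical bimodal neuron (illustrative units)."""
    return (vestibular_gain * head_in_space_vel
            - proprioceptive_gain * head_on_body_vel)

# Whole-body rotation: head and body move together, no neck motion.
print(bimodal_neuron(head_in_space_vel=40.0, head_on_body_vel=0.0))   # 40.0
# Head-on-body rotation: head moves, body stays -> body motion is zero.
print(bimodal_neuron(head_in_space_vel=40.0, head_on_body_vel=40.0))  # 0.0
```

Matched vestibular and proprioceptive gains, as reported for the bimodal neurons, are exactly what makes the second case cancel to zero.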


Current Biology | 2013

The Primate Cerebellum Selectively Encodes Unexpected Self-Motion

Jessica X. Brooks; Kathleen E. Cullen

BACKGROUND: The ability to distinguish sensory signals that register unexpected events (exafference) from those generated by voluntary actions (reafference) during self-motion is essential for accurate perception and behavior. The cerebellum is most commonly considered in relation to its contributions to the fine tuning of motor commands and sensorimotor calibration required for motor learning. During unexpected motion, however, the sensory prediction errors that drive motor learning potentially provide a neural basis for the computation underlying the distinction between reafference and exafference.

RESULTS: Recording from monkeys during voluntary and applied self-motion, we demonstrate that individual cerebellar output neurons encode an explicit and selective representation of unexpected self-motion by means of an elegant computation that cancels the reafferent sensory effects of self-generated movements. During voluntary self-motion, the sensory responses of neurons that robustly encode unexpected movement are canceled. Neurons with vestibular and proprioceptive responses to applied head and body movements are unresponsive when the same motion is self-generated. When sensory reafference and exafference are experienced simultaneously, individual neurons provide a precise estimate of the detailed time course of exafference.

CONCLUSIONS: These results provide an explicit solution to the longstanding problem of understanding mechanisms by which the brain anticipates the sensory consequences of our voluntary actions. Specifically, by revealing a striking computation of a sensory prediction error signal that effectively distinguishes between the sensory consequences of self-generated and externally produced actions, our findings overturn the conventional thinking that the sensory errors coded by the cerebellum principally contribute to the fine tuning of motor activity required for motor learning.
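The cancellation reported here can be summarized as subtracting a motor-based prediction of reafference from the total vestibular signal, leaving an estimate of exafference. A minimal sketch with illustrative signals (not data from the study):

```python
import numpy as np

# A minimal sketch, not the recorded neurons' actual transfer function:
# subtracting an internal prediction of reafference (derived from the motor
# command) from the total vestibular signal leaves an estimate of exafference.

def exafference_estimate(total_vestibular, predicted_reafference):
    """Sensory prediction error: what remains after cancelling self-generated input."""
    return total_vestibular - predicted_reafference

t = np.linspace(0, 1, 100)
active = 30 * np.sin(2 * np.pi * 2 * t)   # self-generated head velocity (deg/s)
passive = 10 * np.sin(2 * np.pi * 5 * t)  # externally applied motion (deg/s)
total = active + passive                  # what the vestibular sensors encode

residual = exafference_estimate(total, predicted_reafference=active)
print(np.allclose(residual, passive))     # True: only exafference remains
```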


Experimental Brain Research | 2011

Internal models of self-motion: computations that suppress vestibular reafference in early vestibular processing

Kathleen E. Cullen; Jessica X. Brooks; Mohsen Jamali; Jerome Carriot; Corentin Massot

In everyday life, vestibular sensors are activated by both self-generated and externally applied head movements. The ability to distinguish inputs that are a consequence of our own actions (i.e., active motion) from those that result from changes in the external world (i.e., passive or unexpected motion) is essential for perceptual stability and accurate motor control. Recent work has made progress toward understanding how the brain distinguishes between these two kinds of sensory inputs. We have performed a series of experiments in which single-unit recordings were made from vestibular afferents and central neurons in alert macaque monkeys during rotation and translation. Vestibular afferents showed no differences in firing variability or sensitivity during active movements when compared to passive movements. In contrast, the analyses of neuronal firing rates revealed that neurons at the first central stage of vestibular processing (i.e., in the vestibular nuclei) were effectively less sensitive to active motion. Notably, however, this ability to distinguish between active and passive motion was not a general feature of early central processing, but rather was a characteristic of a distinct group of neurons known to contribute to postural control and spatial orientation. Our most recent studies have addressed how vestibular and proprioceptive inputs are integrated in the vestibular cerebellum, a region likely to be involved in generating an internal model of self-motion. We propose that this multimodal integration within the vestibular cerebellum is required for eliminating self-generated vestibular information from the subsequent computation of orientation and posture control at the first central stage of processing.


Nature Neuroscience | 2015

Learning to expect the unexpected: rapid updating in primate cerebellum during voluntary self-motion

Jessica X. Brooks; Jerome Carriot; Kathleen E. Cullen

There is considerable evidence that the cerebellum has a vital role in motor learning by constructing an estimate of the sensory consequences of movement. Theory suggests that this estimate is compared with the actual feedback to compute the sensory prediction error. However, direct proof for the existence of this comparison is lacking. We carried out a trial-by-trial analysis of cerebellar neurons during the execution and adaptation of voluntary head movements and found that neuronal sensitivities dynamically tracked the comparison of predictive and feedback signals. When the relationship between the motor command and resultant movement was altered, neurons robustly responded to sensory input as if the movement was externally generated. Neuronal sensitivities then declined with the same time course as the concurrent behavioral learning. These findings demonstrate the output of an elegant computation in which rapid updating of an internal model enables the motor system to learn to expect unexpected sensory inputs.
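The trial-by-trial decline in neuronal sensitivity is consistent with a simple gain update of an internal forward model. The delta rule below is an assumption chosen for illustration; the paper reports the time course of adaptation, not this specific update equation:

```python
# A minimal sketch of trial-by-trial internal-model updating (assumed delta
# rule). After the motor-to-movement relationship changes, the predicted
# reafference is wrong, the neuron responds to the resulting prediction
# error, and the response declines as the forward-model gain g adapts.

learning_rate = 0.2
g = 1.0            # internal forward-model gain (command -> movement)
true_gain = 2.0    # altered relationship between command and movement
command = 30.0     # motor command, deg/s equivalent

for trial in range(15):
    actual = true_gain * command          # sensory feedback
    predicted = g * command               # internal-model prediction
    error = actual - predicted            # sensory prediction error
    g += learning_rate * error / command  # update the gain estimate
    print(f"trial {trial:2d}: prediction error = {error:6.2f}")
```

The error shrinks by a constant factor each trial, giving the exponential-like decay that parallels behavioral learning.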


The Journal of Neuroscience | 2013

Multimodal integration of self-motion cues in the vestibular system: Active versus passive translations

Jerome Carriot; Jessica X. Brooks; Kathleen E. Cullen

The ability to keep track of where we are going as we navigate through our environment requires knowledge of our ongoing location and orientation. In response to passively applied motion, the otolith organs of the vestibular system encode changes in the velocity and direction of linear self-motion (i.e., heading). When self-motion is voluntarily generated, proprioceptive and motor efference copy information is also available to contribute to the brain's internal representation of current heading direction and speed. However, to date, how the brain integrates these extra-vestibular cues with otolith signals during active linear self-motion remains unknown. Here, to address this question, we compared the responses of macaque vestibular neurons during active and passive translations. Single-unit recordings were made from a subgroup of neurons at the first central stage of sensory processing in the vestibular pathways involved in postural control and the computation of self-motion perception. Neurons responded far less robustly to otolith stimulation during self-generated than passive head translations. Yet, the mechanism underlying the marked cancellation of otolith signals did not affect other characteristics of neuronal responses (i.e., baseline firing rate, tuning ratio, orientation of maximal sensitivity vector). Transiently applied perturbations during active motion further established that an otolith cancellation signal was only gated in conditions where proprioceptive sensory feedback matched the motor-based expectation. Together, our results have important implications for understanding the brain's ability to ensure accurate postural and motor control, as well as perceptual stability, during active self-motion.


The Journal of Neuroscience | 2015

Integration of canal and otolith inputs by central vestibular neurons is subadditive for both active and passive self-motion: implication for perception

Jerome Carriot; Mohsen Jamali; Jessica X. Brooks; Kathleen E. Cullen

Traditionally, the neural encoding of vestibular information is studied by applying either passive rotations or translations in isolation. However, natural vestibular stimuli are typically more complex. During everyday life, our self-motion is generally not restricted to one dimension, but rather comprises both rotational and translational motion that will simultaneously stimulate receptors in the semicircular canals and otoliths. In addition, natural self-motion is the result of self-generated and externally generated movements. However, to date, it remains unknown how information about rotational and translational components of self-motion is integrated by vestibular pathways during active and/or passive motion. Accordingly, here, we compared the responses of neurons at the first central stage of vestibular processing to rotation, translation, and combined motion. Recordings were made in alert macaques from neurons in the vestibular nuclei involved in postural control and self-motion perception. In response to passive stimulation, neurons did not combine canal and otolith afferent information linearly. Instead, inputs were subadditively integrated with a weighting that was frequency dependent. Although canal inputs were more heavily weighted at low frequencies, the weighting of otolith input increased with frequency. In response to active stimulation, neuronal modulation was significantly attenuated (∼70%) relative to passive stimulation for rotations and translations and even more profoundly attenuated for combined motion due to subadditive input integration. Together, these findings provide insights into neural computations underlying the integration of semicircular canal and otolith inputs required for accurate posture and motor control, as well as perceptual stability, during everyday life.
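Subadditive, frequency-dependent weighting can be sketched as a weighted sum whose output falls short of the linear sum of the two inputs. The weight functions below are assumptions chosen only to reproduce the qualitative trend reported (canal-dominant at low frequencies, otolith-dominant at high frequencies), not the fitted values:

```python
# A minimal sketch of subadditive canal-otolith integration. The weights are
# illustrative forms, not the study's fitted parameters: canal weighting
# declines with frequency while otolith weighting grows, and the combined
# response is always less than the linear sum of the two inputs.

def combined_response(canal, otolith, freq_hz):
    w_canal = 1.0 / (1.0 + freq_hz)        # declines with frequency (assumed form)
    w_otolith = freq_hz / (1.0 + freq_hz)  # grows with frequency (assumed form)
    return w_canal * canal + w_otolith * otolith  # < canal + otolith: subadditive

for f in (0.5, 1.0, 2.0, 4.0):
    r = combined_response(canal=10.0, otolith=20.0, freq_hz=f)
    print(f"{f:4.1f} Hz: combined = {r:5.2f} (linear sum would be 30.00)")
```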


The Cerebellum | 2015

Neural Correlates of Sensory Prediction Errors in Monkeys: Evidence for Internal Models of Voluntary Self-Motion in the Cerebellum

Kathleen E. Cullen; Jessica X. Brooks

During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain’s motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating in the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased—consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new relationships would have important implications for understanding how responses to passive stimulation endure despite the cerebellum’s ability to learn new relationships between motor commands and sensory feedback.


Annals of the New York Academy of Sciences | 2009

How Actions Alter Sensory Processing: Reafference in the Vestibular System

Kathleen E. Cullen; Jessica X. Brooks; Soroush G. Sadeghi

Our vestibular organs are simultaneously activated by our own actions as well as by stimulation from the external world. The ability to distinguish sensory inputs that are a consequence of our own actions (vestibular reafference) from those that result from changes in the external world (vestibular exafference) is essential for perceptual stability and accurate motor control. Recent work in our laboratory has focused on understanding how the brain distinguishes between vestibular reafference and exafference. Single‐unit recordings were made in alert rhesus monkeys during passive and voluntary (i.e., active) head movements. We found that neurons in the first central stage of vestibular processing (vestibular nuclei), but not the primary vestibular afferents, can distinguish between active and passive movements. In order to better understand how neurons differentiate active from passive head motion, we systematically tested neuronal responses to different combinations of passive and active motion resulting from rotation of the head‐on‐body and/or head‐and‐body in space. We found that during active movements, a cancellation signal was generated when the activation of proprioceptors matched the motor‐generated expectation.


Journal of Neurophysiology | 2014

Early vestibular processing does not discriminate active from passive self-motion if there is a discrepancy between predicted and actual proprioceptive feedback

Jessica X. Brooks; Kathleen E. Cullen

Most of our sensory experiences are gained by active exploration of the world. While the ability to distinguish sensory inputs resulting from our own actions (termed reafference) from those produced externally (termed exafference) is well established, the neural mechanisms underlying this distinction are not fully understood. We have previously proposed that vestibular signals arising from self-generated movements are inhibited by a mechanism that compares the internal prediction of the proprioceptive consequences of self-motion to the actual feedback. Here we directly tested this proposal by recording from single neurons in monkeys during vestibular stimulation that was externally produced and/or self-generated. We show for the first time that vestibular reafference is equivalently canceled for self-generated sensory stimulation produced by activation of the neck musculature (head-on-body motion), or axial musculature (combined head and body motion), when there is no discrepancy between the predicted and actual proprioceptive consequences of self-motion. However, if a discrepancy does exist, central vestibular neurons no longer preferentially encode vestibular exafference. Specifically, when simultaneous active and passive motion resulted in activation of the same muscle proprioceptors, neurons robustly encoded the total vestibular input (i.e., responses to vestibular reafference and exafference were equally strong), rather than exafference alone. Taken together, our results show that the cancellation of vestibular reafference in early vestibular processing requires an explicit match between expected and actual proprioceptive feedback. We propose that this vital neuronal computation, necessary for both accurate sensory perception and motor control, has important implications for a variety of sensory systems that suppress self-generated signals.
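The gating rule these recordings support can be sketched as a conditional cancellation: reafference is subtracted only when the predicted and actual proprioceptive feedback match. The function, threshold, and numbers below are illustrative assumptions:

```python
# A minimal sketch of the gating rule the recordings suggest (the structure
# and tolerance are assumptions): reafference is cancelled only when the
# predicted proprioceptive consequence of the motor command matches the
# actual proprioceptive feedback.

def central_vestibular_response(total_vestibular, predicted_reafference,
                                predicted_proprio, actual_proprio, tol=1e-6):
    if abs(predicted_proprio - actual_proprio) < tol:
        # Match: cancellation signal is gated in; encode exafference only.
        return total_vestibular - predicted_reafference
    # Mismatch: no cancellation; encode the total vestibular input.
    return total_vestibular

# Pure active motion, feedback as predicted -> reafference cancelled.
print(central_vestibular_response(40.0, 40.0, 40.0, 40.0))  # 0.0
# Simultaneous passive motion drives the same proprioceptors -> mismatch,
# so the neuron encodes reafference and exafference together.
print(central_vestibular_response(60.0, 40.0, 40.0, 60.0))  # 60.0
```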


Brain | 2014

Consulting the vestibular system is simply a must if you want to optimize gaze shifts

Kathleen E. Cullen; Jessica X. Brooks

Even simple activities like reaching for our morning cup of coffee require precisely coordinated movements of multiple parts of the body. Successive attempts at these movements are characterized by ‘repetition without repetition’ (Bernstein, 1967). For this reason, it is thought that the brain does not enforce the details of a specific movement trajectory, but rather uses on-line feedback to optimize acquisition of the movement goal. However, a study in this issue of Brain demonstrates that when we make coordinated movements of the eyes and head to redirect our gaze, we use an optimal strategy that depends on vestibular sensory input: a strategy unavailable to patients with total vestibular loss. These results provide the first evidence that the vestibular system is critical for optimizing voluntary movements (Saglam et al., 2014). When we make coordinated eye and head movements to redirect our axis of gaze relative to space (gaze = eye-in-head + head-in-space), movement accuracy is preserved even when the head’s trajectory is experimentally altered (Cullen, 2004). This happens because within milliseconds vestibular feedback rapidly alters the motor commands to the eye and head musculature to ensure gaze accuracy (Sylvestre and Cullen, 2006). For example, when a load is transiently applied to the head during a gaze shift, both the response duration and dynamics of neurons commanding the eye movement are updated—midflight—to preserve global movement accuracy. Thus, variability across movement trajectories is not problematic because the end goal of the movement is achieved as a result of on-line vestibular feedback. However, a remaining challenge has been to develop theoretical approaches to explicitly assess whether the gaze (as well as limb; Scott, …
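The gaze relationship quoted above (gaze = eye-in-head + head-in-space) implies that an online controller can recompute the remaining eye command from vestibular feedback about the head's actual motion. A minimal sketch with illustrative numbers:

```python
# A minimal sketch of the gaze decomposition cited in the text and of on-line
# vestibular feedback: if a load perturbs the head's trajectory midflight,
# the vestibular signal reports the altered head motion and the eye command
# is adjusted so the gaze shift still reaches the goal. All numbers are
# illustrative, not data from the study.

def eye_command(gaze_goal, head_in_space_so_far, eye_in_head_so_far):
    """Remaining eye-in-head displacement needed to complete the gaze shift (deg)."""
    gaze_so_far = eye_in_head_so_far + head_in_space_so_far
    return gaze_goal - gaze_so_far

# Unperturbed: head has contributed 25 deg of a 40 deg gaze shift.
print(eye_command(gaze_goal=40.0, head_in_space_so_far=25.0,
                  eye_in_head_so_far=10.0))   # 5.0 deg of eye motion left
# A load slows the head (only 15 deg): vestibular feedback reports the
# shortfall and the eye command grows to preserve gaze accuracy.
print(eye_command(gaze_goal=40.0, head_in_space_so_far=15.0,
                  eye_in_head_so_far=10.0))   # 15.0 deg
```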

Collaboration


Dive into Jessica X. Brooks's collaborations.

Top Co-Authors

Shawn D. Newlands

University of Rochester Medical Center
