Philip N. Sabes
University of California, San Francisco
Publications
Featured research published by Philip N. Sabes.
Nature Neuroscience | 2005
Samuel J. Sober; Philip N. Sabes
When planning target-directed reaching movements, human subjects combine visual and proprioceptive feedback to form two estimates of the arm's position: one to plan the reach direction, and another to convert that direction into a motor command. These position estimates are based on the same sensory signals but rely on different combinations of visual and proprioceptive input, suggesting that the brain weights sensory inputs differently depending on the computation being performed. Here we show that the relative weighting of vision and proprioception depends both on the sensory modality of the target and on the information content of the visual feedback, and that these factors affect the two stages of planning independently. The observed diversity of weightings demonstrates the flexibility of sensory integration and suggests a unifying principle by which the brain chooses sensory inputs so as to minimize errors arising from the transformation of sensory signals between coordinate frames.
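To make that weighting principle concrete, here is a minimal Python sketch (not from the paper; all variances are made up) of how inverse-variance weighting shifts toward the modality that avoids a noisy coordinate transformation, depending on the frame in which a given computation is carried out.

```python
import numpy as np

# Hypothetical variances (illustrative values, not estimates from the paper)
var_vis, var_prop = 1.0, 1.0        # intrinsic sensory variances
var_transform = 0.5                  # extra variance added when a signal must be
                                     # transformed into the other coordinate frame

def optimal_weight(var_a, var_b):
    """Inverse-variance weight on signal a when combining signals a and b."""
    return (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)

# Computation carried out in visual coordinates:
# proprioception must be transformed, so its effective variance grows.
w_vis_in_visual_frame = optimal_weight(var_vis, var_prop + var_transform)

# Computation carried out in joint/proprioceptive coordinates:
# vision must be transformed instead.
w_vis_in_joint_frame = optimal_weight(var_vis + var_transform, var_prop)

print(f"weight on vision (visual-frame computation): {w_vis_in_visual_frame:.2f}")
print(f"weight on vision (joint-frame computation):  {w_vis_in_joint_frame:.2f}")
```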
The Journal of Neuroscience | 2003
Samuel J. Sober; Philip N. Sabes
When planning goal-directed reaches, subjects must estimate the position of the arm by integrating visual and proprioceptive signals from the sensory periphery. These integrated position estimates are required at two stages of motor planning: first to determine the desired movement vector, and second to transform the movement vector into a joint-based motor command. We quantified the contributions of each sensory modality to the position estimate formed at each planning stage. Subjects made reaches in a virtual reality environment in which vision and proprioception were dissociated by shifting the location of visual feedback. The relative weighting of vision and proprioception at each stage was then determined using computational models of feedforward motor control. We found that the position estimate used for movement vector planning relies mostly on visual input, whereas the estimate used to compute the joint-based motor command relies more on proprioceptive signals. This suggests that when estimating the position of the arm, the brain selects different combinations of sensory input based on the computation in which the resulting estimate will be used.
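A minimal Python sketch of how the weight at the movement-vector stage could be recovered from reaching behavior when visual feedback is shifted. The weight, shift sizes, and noise level below are hypothetical, and the paper's second (vector-to-joint-command) stage is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D sketch of the movement-vector planning stage only
w1_true = 0.8                       # weight on vision when planning the vector
n_trials = 300
hand = 0.0                          # true start position of the hand
targets = rng.uniform(-10, 10, n_trials)
shifts = rng.choice([-3.0, 0.0, 3.0], size=n_trials)   # imposed visual shift

# Estimated start position mixes shifted visual feedback with proprioception
x_hat = w1_true * (hand + shifts) + (1 - w1_true) * hand

# The planned vector (target - estimate) is executed from the true position,
# so the endpoint misses the target by about -w1 * shift, plus motor noise
endpoints = hand + (targets - x_hat) + rng.normal(0, 0.5, n_trials)
errors = endpoints - targets

# Recover the weight by regressing endpoint error on the visual shift
w1_fit = -np.polyfit(shifts, errors, 1)[0]
print(f"recovered weight on vision: {w1_fit:.2f}")
```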
Current Opinion in Neurobiology | 2000
Philip N. Sabes
The notion of internal models has become central to the study of visually guided reaching. Armed with this theoretical framework, researchers are gleaning insights into long-standing problems in the field, such as the ability to respond rapidly to changes in the location of a reach target and the fine control of the multi-joint dynamics of the arm. A key factor in these advances is our increased understanding of how the brain integrates feedforward control signals, sensory feedback, and predictions based on internal models of the arm.
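As a rough illustration of the framework described in the review, the following Python sketch combines a forward-model prediction of arm position with noisy sensory feedback in a simple observer loop; the dynamics, noise levels, and correction gain are all illustrative and not taken from any specific model discussed there.

```python
import numpy as np

rng = np.random.default_rng(7)

# Observer-style sketch: an internal forward model predicts the arm state from
# the outgoing motor command, and noisy sensory feedback corrects the prediction.
dt, n_steps = 0.01, 100
gain = 0.3                               # how strongly feedback corrects the prediction
pos_true, pos_hat = 0.0, 0.0
commands = np.full(n_steps, 1.0)         # constant velocity command

for t in range(n_steps):
    pos_true += commands[t] * dt + rng.normal(0, 0.005)    # actual arm
    prediction = pos_hat + commands[t] * dt                # forward-model prediction
    feedback = pos_true + rng.normal(0, 0.02)              # noisy sensory feedback
    pos_hat = prediction + gain * (feedback - prediction)  # corrected estimate

print(f"true position {pos_true:.3f}, estimated position {pos_hat:.3f}")
```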
The Journal of Neuroscience | 2011
Timothy D. Verstynen; Philip N. Sabes
Most voluntary actions rely on neural circuits that map sensory cues onto appropriate motor responses. One might expect that for everyday movements, like reaching, this mapping would remain stable over time, at least in the absence of error feedback. Here we describe a simple and novel psychophysical phenomenon in which recent experience shapes the statistical properties of reaching, independent of any movement errors. Specifically, when recent movements are made to targets near a particular location, subsequent movements to that location become less variable, but at the cost of increased bias for reaches to other targets. This process exhibits the variance-bias tradeoff that is a hallmark of Bayesian estimation. We provide evidence that this process reflects a fast, trial-by-trial learning of the prior distribution of targets. We also show that these results may reflect an emergent property of associative learning in neural circuits. We demonstrate that adding Hebbian (associative) learning to a model network for reach planning leads to a continuous modification of network connections that biases network dynamics toward activity patterns associated with recent inputs. This learning process quantitatively captures the key results of our experimental data in human subjects, including the effect that recent experience has on the variance-bias tradeoff. This network also provides a good approximation of a normative Bayesian estimator. These observations illustrate how associative learning can incorporate recent experience into ongoing computations in a statistically principled way.
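A minimal Python sketch of the Bayesian estimation idea: when target estimates are pulled toward a learned prior, variance drops for targets near the prior mean at the cost of bias for targets far from it. All parameter values are illustrative, not fit to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters
sigma_obs = 1.0          # noise in the sensed target location
sigma_prior = 1.5        # width of the learned prior over target locations
mu_prior = 0.0           # prior mean, e.g. after many reaches near this location

def bayes_estimate(obs):
    """Posterior mean for a Gaussian prior and Gaussian likelihood."""
    k = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2)   # weight on the observation
    return mu_prior + k * (obs - mu_prior)

for true_target in (0.0, 4.0):       # one target at the prior mean, one far away
    obs = true_target + rng.normal(0, sigma_obs, size=10000)
    est = bayes_estimate(obs)
    print(f"target={true_target:4.1f}  bias={est.mean() - true_target:+.2f}  "
          f"sd={est.std():.2f}  (sensory sd={sigma_obs:.2f})")
```

The estimate's standard deviation falls below the raw sensory noise for both targets, while the target far from the prior mean picks up a systematic bias toward it.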
Neural Computation | 2006
Sen Cheng; Philip N. Sabes
Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation—the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.
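A minimal forward simulation, in Python, of a scalar LDS of this kind: the state is the current correction applied by the sensorimotor map, updated each trial by decaying retention plus error-driven learning. The retention, learning-rate, and noise values below are illustrative; the paper fits such parameters to data with an EM algorithm rather than simulating them.

```python
import numpy as np

rng = np.random.default_rng(2)

A = 0.98             # retention: how much of the learned state survives a trial
B = 0.15             # learning rate on the error feedback
sigma_state = 0.05   # learning ("state") noise
sigma_out = 0.3      # performance ("output") noise

shift = 5.0          # imposed visual shift to adapt to
n_trials = 300
x = 0.0              # state: current correction applied by the sensorimotor map
reach_error = np.empty(n_trials)

for t in range(n_trials):
    y = x + rng.normal(0, sigma_out)             # motor output on this trial
    reach_error[t] = y - shift                   # visual error feedback
    x = A * x - B * reach_error[t] + rng.normal(0, sigma_state)   # state update

# Retention below 1 leaves adaptation incomplete at steady state
print(f"asymptotic correction ≈ {reach_error[-50:].mean() + shift:.2f} of {shift}")
```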
Nature Neuroscience | 2015
Maria C. Dadarlat; Joseph E. O'Doherty; Philip N. Sabes
Proprioception—the sense of the body's position in space—is important to natural movement planning and execution and will likewise be necessary for successful motor prostheses and brain–machine interfaces (BMIs). Here we demonstrate that monkeys were able to learn to use an initially unfamiliar multichannel intracortical microstimulation signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum-variance estimate of relative hand position. These results demonstrate that a learning-based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs, as well as a powerful new tool for studying the adaptive mechanisms of sensory integration.
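A minimal Python sketch of the minimum-variance combination tested here: two noisy cues to relative hand position (vision and a hypothetical ICMS-derived signal) are combined with inverse-variance weights, yielding an estimate less variable than either cue alone. The single-cue variances below are made up, not the behaviorally measured values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_vis, sigma_icms = 0.8, 1.6      # illustrative single-cue noise levels
true_offset = 2.0                     # true hand position relative to the target
n = 20000

vis = true_offset + rng.normal(0, sigma_vis, n)
icms = true_offset + rng.normal(0, sigma_icms, n)

# Minimum-variance (inverse-variance-weighted) combination of the two cues
w = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_icms**2)
combined = w * vis + (1 - w) * icms

pred_sd = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_icms**2))
print(f"vision sd={vis.std():.2f}, ICMS sd={icms.std():.2f}, "
      f"combined sd={combined.std():.2f} (predicted {pred_sd:.2f})")
```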
The Journal of Neuroscience | 2011
Leah M. M. McGuire; Philip N. Sabes
The planning and control of sensory-guided movements requires the integration of multiple sensory streams. Although the information conveyed by different sensory modalities is often overlapping, the shared information is represented differently across modalities during the early stages of cortical processing. We ask how these diverse sensory signals are represented in multimodal sensorimotor areas of cortex in macaque monkeys. Although a common modality-independent representation might facilitate downstream readout, previous studies have found that modality-specific representations in multimodal cortex reflect upstream spatial representations. For example, visual signals have a more eye-centered representation. We recorded neural activity from two parietal areas involved in reach planning, area 5 and the medial intraparietal area (MIP), as animals reached to visual, combined visual and proprioceptive, and proprioceptive targets while fixing their gaze on another location. In contrast to other multimodal cortical areas, the same spatial representations are used to represent visual and proprioceptive signals in both area 5 and MIP. However, these representations are heterogeneous. Although we observed a posterior-to-anterior gradient in population responses in parietal cortex, from more eye-centered to more hand- or body-centered representations, we do not observe the simple and discrete reference frame representations suggested by studies that focused on identifying the “best-match” reference frame for a given cortical area. In summary, we find modality-independent representations of spatial information in parietal cortex, although these representations are complex and heterogeneous.
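A toy Python sketch of the kind of reference-frame analysis at issue: a simulated cell whose firing reflects a mixture of eye-centered and hand-centered target position is fit separately in each pure frame, illustrating why a single "best-match" frame can mischaracterize a mixed representation. The tuning and mixing weight are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 400
target = rng.uniform(-20, 20, n)          # target position (workspace coordinates)
eye = rng.uniform(-10, 10, n)             # gaze position
hand = rng.uniform(-10, 10, n)            # initial hand position

t_eye = target - eye                      # eye-centered target position
t_hand = target - hand                    # hand-centered target position
mix = 0.7                                 # how eye-centered this simulated cell is
rate = 10 + 0.5 * (mix * t_eye + (1 - mix) * t_hand) + rng.normal(0, 1, n)

def r2(x, y):
    """Variance explained by a linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

print(f"R^2 eye-centered fit:  {r2(t_eye, rate):.2f}")
print(f"R^2 hand-centered fit: {r2(t_hand, rate):.2f}")
```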
The Journal of Neuroscience | 2014
Kris S. Chaisanguanthum; Helen H. Shen; Philip N. Sabes
Even well practiced movements cannot be repeated without variability. This variability is thought to reflect “noise” in movement preparation or execution. However, we show that, for both professional baseball pitchers and macaque monkeys making reaching movements, motor variability can be decomposed into two statistical components, a slowly drifting mean and fast trial-by-trial fluctuations about the mean. The preparatory activity of dorsal premotor cortex/primary motor cortex neurons in monkeys exhibits similar statistics. Although the neural and behavioral drifts appear to be correlated, neural activity does not account for trial-by-trial fluctuations in movement, which must arise elsewhere, likely downstream. The statistics of this drift are well modeled by a double-exponential autocorrelation function, with time constants similar across the neural and behavioral drifts in two monkeys, as well as the drifts observed in baseball pitching. These time constants can be explained by an error-corrective learning process and agree with learning rates measured directly in previous experiments. Together, these results suggest that the central contributions to movement variability are not simply trial-by-trial fluctuations but are rather the result of longer-timescale processes that may arise from motor learning.
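A minimal Python sketch of the decomposition described above: a slow drift built from two AR(1) processes (whose autocovariance is a sum of two exponentials, i.e. double-exponential) plus fast trial-by-trial noise. The time constants and noise levels are arbitrary, not the values estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

n_trials = 5000
tau_fast, tau_slow = 20.0, 500.0                   # drift time constants (in trials)
a1, a2 = np.exp(-1 / tau_fast), np.exp(-1 / tau_slow)

def ar1(a, sigma, n):
    """Simulate a first-order autoregressive process."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.normal(0, sigma)
    return x

drift = ar1(a1, 0.1, n_trials) + ar1(a2, 0.02, n_trials)
movement = drift + rng.normal(0, 0.5, n_trials)    # fast fluctuations about the drift

def autocov(x, lag):
    """Empirical autocovariance at a given lag."""
    x = x - x.mean()
    return np.mean(x[:-lag] * x[lag:]) if lag else np.mean(x * x)

for lag in (0, 1, 10, 100, 1000):
    print(f"lag {lag:4d}: autocovariance {autocov(movement, lag):.3f}")
```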
PLOS Computational Biology | 2013
Joseph G. Makin; Matthew R. Fellows; Philip N. Sabes
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
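A generic Python sketch of a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), the learning rule named above. This toy version uses binary units and random data rather than the population-coded unisensory activities of the actual model, so it only illustrates the update rule itself.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal binary RBM with CD-1; sizes, learning rate, and data are toy choices
n_vis, n_hid = 20, 10
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)
lr = 0.05

data = (rng.random((500, n_vis)) < 0.3).astype(float)    # toy "sensory" activity

for epoch in range(20):
    for v0 in data:
        # Positive phase: sample hidden units given the data
        ph0 = sigmoid(v0 @ W + b_hid)
        h0 = (rng.random(n_hid) < ph0).astype(float)
        # Negative phase: one reconstruction step
        pv1 = sigmoid(h0 @ W.T + b_vis)
        v1 = (rng.random(n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_hid)
        # Contrastive-divergence parameter updates
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_vis += lr * (v0 - v1)
        b_hid += lr * (ph0 - ph1)

recon = sigmoid(sigmoid(data @ W + b_hid) @ W.T + b_vis)
print(f"mean reconstruction error: {np.mean((data - recon) ** 2):.3f}")
```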
Neuron | 2016
Azadeh Yazdan-Shahmorad; Camilo Diaz-Botia; Timothy L. Hanson; Viktor Kharazia; Peter Ledochowitsch; Michel M. Maharbiz; Philip N. Sabes
While optogenetics offers great potential for linking brain function and behavior in nonhuman primates, taking full advantage of that potential will require stable access for optical stimulation and concurrent monitoring of neural activity. Here we present a practical, stable interface for stimulation and recording of large-scale cortical circuits. To obtain optogenetic expression across a broad region, here spanning primary somatosensory (S1) and motor (M1) cortices, we used convection-enhanced delivery of the viral vector, with online guidance from MRI. To record neural activity across this region, we used a custom micro-electrocorticographic (μECoG) array designed to minimally attenuate optical stimuli. Lastly, we demonstrated the use of this interface to measure spatiotemporal responses to optical stimulation across M1 and S1. This interface offers a powerful tool for studying circuit dynamics and connectivity across cortical areas, for long-term studies of neuromodulation and targeted cortical plasticity, and for linking these to behavior.