
Publication


Featured research published by Byron M. Yu.


Nature Neuroscience | 2010

Stimulus onset quenches neural variability: a widespread cortical phenomenon

Mark M. Churchland; Byron M. Yu; John P. Cunningham; Leo P. Sugrue; Marlene R. Cohen; Greg Corrado; William T. Newsome; Andy Clark; Paymon Hosseini; Benjamin B. Scott; David C. Bradley; Matthew A. Smith; Adam Kohn; J. Anthony Movshon; Katherine M. Armstrong; Tirin Moore; Steve W. C. Chang; Lawrence H. Snyder; Stephen G. Lisberger; Nicholas J. Priebe; Ian M. Finn; David Ferster; Stephen I. Ryu; Gopal Santhanam; Maneesh Sahani; Krishna V. Shenoy

Neural responses are typically characterized by computing the mean firing rate, but response variability can exist across trials. Many studies have examined the effect of a stimulus on the mean response, but few have examined the effect on response variability. We measured neural variability in 13 extracellularly recorded datasets and one intracellularly recorded dataset from seven areas spanning the four cortical lobes in monkeys and cats. In every case, stimulus onset caused a decline in neural variability. This occurred even when the stimulus produced little change in mean firing rate. The variability decline was observed in membrane potential recordings, in the spiking of individual neurons and in correlated spiking variability measured with implanted 96-electrode arrays. The variability decline was observed for all stimuli tested, regardless of whether the animal was awake, behaving or anaesthetized. This widespread variability decline suggests a rather general property of cortex, that its state is stabilized by an input.
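
Across-trial variability of the kind measured here is often summarized with the Fano factor, the variance of the spike count across trials divided by its mean. The Python sketch below illustrates the basic quantity on synthetic data; the paper's mean-matched Fano factor analysis is more involved and is not reproduced here.

import numpy as np

def fano_factor(spike_counts):
    # Across-trial Fano factor: variance / mean of the spike count in one
    # time window for one neuron. A value of 1 is the Poisson baseline.
    m = spike_counts.mean()
    return spike_counts.var(ddof=1) / m if m > 0 else np.nan

# Synthetic illustration: counts in a fixed window before and after stimulus onset.
rng = np.random.default_rng(0)
drifting_rates = rng.uniform(2.0, 8.0, size=200)   # firing rate wanders across trials pre-stimulus
pre = rng.poisson(lam=drifting_rates)              # mean rate ~5, extra across-trial variability
post = rng.poisson(lam=5.0, size=200)              # same mean rate, rate fluctuations "quenched"

print(f"Fano factor pre-stimulus:  {fano_factor(pre):.2f}")
print(f"Fano factor post-stimulus: {fano_factor(post):.2f}")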


Nature | 2006

A high-performance brain–computer interface

Gopal Santhanam; Stephen I. Ryu; Byron M. Yu; Afsheen Afshar; Krishna V. Shenoy

Recent studies have demonstrated that monkeys and humans can use signals from the brain to guide computer cursors. Brain–computer interfaces (BCIs) may one day assist patients suffering from neurological injury or disease, but relatively low system performance remains a major obstacle. In fact, the speed and accuracy with which keys can be selected using BCIs is still far lower than for systems relying on eye movements. This is true whether BCIs use recordings from populations of individual neurons using invasive electrode techniques or electroencephalogram recordings using less- or non-invasive techniques. Here we present the design and demonstration, using electrode arrays implanted in monkey dorsal premotor cortex, of a manyfold higher performance BCI than previously reported. These results indicate that a fast and accurate key selection system, capable of operating with a range of keyboard sizes, is possible (up to 6.5 bits per second, or ∼15 words per minute, with 96 electrodes). The highest information throughput is achieved with unprecedentedly brief neural recordings, even as recording quality degrades over time. These performance results and their implications for system design should substantially increase the clinical viability of BCIs in humans.
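
The bits-per-second figure is an information transfer rate for discrete key selection. As a hedged illustration (using the standard Wolpaw formula, not necessarily the exact metric of the paper), information per selection depends on the number of keys and the selection accuracy; the numbers below are made up for the example.

import math

def bits_per_selection(n_targets, accuracy):
    # Information per selection (bits), assuming errors are spread uniformly
    # over the remaining targets (the standard Wolpaw formula).
    if accuracy >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

# Hypothetical illustration: a 16-key grid, 90% selection accuracy, one selection every 500 ms.
bits = bits_per_selection(16, 0.90)
print(f"{bits:.2f} bits/selection -> {bits / 0.5:.2f} bits/s")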


Nature Neuroscience | 2012

A high-performance neural prosthesis enabled by control algorithm design

Vikash Gilja; Paul Nuyujukian; Cynthia A. Chestek; John P. Cunningham; Byron M. Yu; Joline M Fan; Mark M. Churchland; Matthew T. Kaufman; Jonathan C. Kao; Stephen I. Ryu; Krishna V. Shenoy

Neural prostheses translate neural activity from the brain into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, and thus offer individuals with disabilities greater interaction with the world. However, relatively low performance remains a critical barrier to successful clinical translation; current neural prostheses are considerably slower, with less accurate control, than the native arm. Here we present a new control algorithm, the recalibrated feedback intention–trained Kalman filter (ReFIT-KF), which incorporates assumptions about the nature of closed-loop neural prosthetic control. When tested in rhesus monkeys implanted with motor cortical electrode arrays, the ReFIT-KF algorithm outperformed existing neural prosthetic algorithms in all measured domains and halved target acquisition time. This control algorithm permits sustained, uninterrupted use for hours and generalizes to more challenging tasks without retraining. Using this algorithm, we demonstrate repeatable high performance for years after implantation in two monkeys, thereby increasing the clinical viability of neural prostheses.
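
For readers unfamiliar with Kalman-filter decoders, the sketch below shows the generic predict/update cycle that maps binned spike counts to a cursor state estimate. It is a minimal illustration only: the ReFIT-KF's defining contributions (intention-based recalibration and its specific state assumptions) are not shown, and all matrices are assumed to have been fit elsewhere.

import numpy as np

def kalman_decode_step(x, P, y, A, W, C, Q):
    # One predict/update cycle of a linear Kalman-filter decoder.
    # x, P: current state estimate (e.g. cursor position/velocity) and covariance
    # y: vector of binned spike counts for this time step
    # A, W: state dynamics matrix and process-noise covariance
    # C, Q: observation (tuning) matrix and observation-noise covariance
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    S = C @ P_pred @ C.T + Q
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)    # correct prediction with the neural observation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

In closed-loop use, a step like this runs once per time bin and the updated state drives the cursor the animal sees.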


The Journal of Neuroscience | 2006

Neural variability in premotor cortex provides a signature of motor preparation

Mark M. Churchland; Byron M. Yu; Stephen I. Ryu; Gopal Santhanam; Krishna V. Shenoy

We present experiments and analyses designed to test the idea that firing rates in premotor cortex become optimized during motor preparation, approaching their ideal values over time. We measured the across-trial variability of neural responses in dorsal premotor cortex of three monkeys performing a delayed-reach task. Such variability was initially high, but declined after target onset, and was maintained at a rough plateau during the delay. An additional decline was observed after the go cue. Between target onset and movement onset, variability declined by an average of 34%. This decline in variability was observed even when mean firing rate changed little. We hypothesize that this effect is related to the progress of motor preparation. In this interpretation, firing rates are initially variable across trials but are brought, over time, to their “appropriate” values, becoming consistent in the process. Consistent with this hypothesis, reaction times were longer if the go cue was presented shortly after target onset, when variability was still high, and were shorter if the go cue was presented well after target onset, when variability had fallen to its plateau. A similar effect was observed for the natural variability in reaction time: longer (shorter) reaction times tended to occur on trials in which firing rates were more (less) variable. These results reveal a remarkable degree of temporal structure in the variability of cortical neurons. The relationship with reaction time argues that the changes in variability approximately track the progress of motor preparation.
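
A hedged sketch of the trial-by-trial analysis described at the end of the abstract: measure how far each trial's delay-period firing rate sits from the across-trial mean, then ask whether larger deviations accompany longer reaction times. The data below are synthetic and constructed to show the effect.

import numpy as np

rng = np.random.default_rng(1)
n_trials = 400
# |firing rate - mean firing rate| for each trial during the delay period (synthetic)
rate_deviation = np.abs(rng.normal(0.0, 5.0, n_trials))
# reaction times built so that more deviant trials are slower (illustration only)
reaction_time_ms = 250 + 4.0 * rate_deviation + rng.normal(0.0, 20.0, n_trials)

r = np.corrcoef(rate_deviation, reaction_time_ms)[0, 1]
print(f"trial-by-trial correlation between rate deviation and reaction time: r = {r:.2f}")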


Neural Information Processing Systems | 2008

Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity

Byron M. Yu; John P. Cunningham; Gopal Santhanam; Stephen I. Ryu; Krishna V. Shenoy; Maneesh Sahani

We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from many neurons on individual experimental trials. Beyond the benefit of visualizing the high-dimensional, noisy spiking activity in a compact form, such trajectories can offer insight into the dynamics of the neural circuitry underlying the recorded activity. Current methods for extracting neural trajectories involve a two-stage process: the spike trains are first smoothed over time, then a static dimensionality-reduction technique is applied. We first describe extensions of the two-stage methods that allow the degree of smoothing to be chosen in a principled way and that account for spiking variability, which may vary both across neurons and across time. We then present a novel method for extracting neural trajectories, Gaussian-process factor analysis (GPFA), which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution. By adopting a goodness-of-fit metric that measures how well the activity of each neuron can be predicted by all other recorded neurons, we found that the proposed extensions improved the predictive ability of the two-stage methods. The predictive ability was further improved by going to GPFA. From the extracted trajectories, we directly observed a convergence in neural state during motor planning, an effect that was shown indirectly by previous studies. We then show how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis. Finally, to assess how well the proposed methods characterize neural population activity when the underlying time course is known, we performed simulations that revealed that GPFA performed tens of percent better than the best two-stage method.
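
For orientation, here is a minimal sketch of the two-stage baseline the paper improves on: smooth each neuron's spike counts over time, then apply a static dimensionality-reduction step (PCA here) to obtain a low-dimensional neural trajectory. GPFA itself, which replaces both stages with a single probabilistic model, is not reimplemented in this sketch.

import numpy as np

def two_stage_trajectory(counts, sigma_bins=2.0, n_dims=3):
    # counts: (n_neurons, n_bins) spike counts from a single trial.
    # Stage 1: Gaussian kernel smoothing of each neuron's counts over time.
    half = int(3 * sigma_bins)
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()
    smoothed = np.array([np.convolve(c, kernel, mode="same") for c in counts])
    # Stage 2: static dimensionality reduction (PCA via SVD of the centered matrix).
    centered = smoothed - smoothed.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, :n_dims].T @ centered   # (n_dims, n_bins) low-dimensional trajectory

trajectory = two_stage_trajectory(np.random.default_rng(2).poisson(3.0, size=(61, 100)))
print(trajectory.shape)   # (3, 100)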


Nature Neuroscience | 2014

Dimensionality reduction for large-scale neural recordings

John P. Cunningham; Byron M. Yu

Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.
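
As a small, hedged illustration of the kind of methods the review surveys, the snippet below applies two common dimensionality-reduction techniques (PCA and factor analysis) to synthetic trials-by-neurons activity using scikit-learn. Real analyses would use binned spike counts and cross-validated choices of dimensionality.

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(3)
latents = rng.normal(size=(500, 3))                    # 3 shared latent dimensions
loadings = rng.normal(size=(3, 50))                    # 50 "neurons"
activity = latents @ loadings + rng.normal(scale=0.5, size=(500, 50))   # trials x neurons

pca_scores = PCA(n_components=3).fit_transform(activity)
fa_scores = FactorAnalysis(n_components=3).fit_transform(activity)
print(pca_scores.shape, fa_scores.shape)               # (500, 3) (500, 3)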


Nature | 2014

Neural constraints on learning

Patrick T. Sadtler; Kristin M. Quick; Matthew D. Golub; Steven M. Chase; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Byron M. Yu; Aaron P. Batista

Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain–computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain–computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.
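
A conceptual sketch of the mapping described in the abstract: recorded activity is projected onto a low-dimensional intrinsic manifold (estimated offline, for example with factor analysis), and the low-dimensional state is then mapped to 2D cursor velocity. The matrices below are random placeholders rather than fitted models, and the perturbation shown is only schematic.

import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_latents = 90, 10
L = rng.normal(size=(n_latents, n_neurons))    # projection onto the intrinsic manifold
M = rng.normal(size=(2, n_latents))            # manifold state -> 2D cursor velocity

def decode_velocity(spike_counts, projection, mapping):
    latent_state = projection @ spike_counts   # low-dimensional neural state
    return mapping @ latent_state              # cursor velocity

velocity = decode_velocity(rng.poisson(3.0, n_neurons), L, M)
print(velocity)

# Schematic "within-manifold" perturbation: re-assign which manifold dimensions
# drive which velocity components (here by permuting columns of M). An
# "outside-manifold" perturbation would instead re-map which neurons feed which
# manifold dimension.
M_within_perturbed = M[:, rng.permutation(n_latents)]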


Current Opinion in Neurobiology | 2007

Techniques for extracting single-trial activity patterns from large-scale neural recordings

Mark M. Churchland; Byron M. Yu; Maneesh Sahani; Krishna V. Shenoy

Large, chronically implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex and can provide extracellular recordings from many (on the order of 100) neurons. While the desire for cortically based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond those driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies (some employing simultaneous recording, some not) indicating that such variability is indeed present both during movement generation and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data, and allow visualization of single-trial neural behavior.


Journal of Neurophysiology | 2008

Detecting Neural-State Transitions Using Hidden Markov Models for Motor Cortical Prostheses

Caleb Kemere; Gopal Santhanam; Byron M. Yu; Afsheen Afshar; Stephen I. Ryu; Teresa H. Meng; Krishna V. Shenoy

Neural prosthetic interfaces use neural activity related to the planning and perimovement epochs of arm reaching to afford brain-directed control of external devices. Previous research has primarily centered on accurately decoding movement intention from either plan or perimovement activity, but has assumed that temporal boundaries between these epochs are known to the decoding system. In this work, we develop a technique to automatically differentiate between baseline, plan, and perimovement epochs of neural activity. Specifically, we use a generative model of neural activity to capture how neural activity varies between these three epochs. Our approach is based on a hidden Markov model (HMM), in which the latent variable (state) corresponds to the epoch of neural activity, coupled with a state-dependent Poisson firing model. Using an HMM, we demonstrate that the time of transition from baseline to plan epochs, a transition in neural activity that is not accompanied by any external behavior changes, can be detected using a threshold on the a posteriori HMM state probabilities. Following detection of the plan epoch, we show that the intended target of a center-out movement can be detected about as accurately as that by a maximum-likelihood estimator using a window of known plan activity. In addition, we demonstrate that our HMM can detect transitions in neural activity corresponding to targets not found in training data. Thus the HMM technique for automatically detecting transitions between epochs of neural activity enables prosthetic interfaces that can operate autonomously.
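
The sketch below illustrates the core machinery: a three-state HMM (baseline, plan, perimovement) with state-dependent Poisson firing, a forward pass that yields filtered state probabilities, and a threshold on the "plan" probability to detect the transition. The rates, transition matrix, and threshold are illustrative values, not those of the paper.

import numpy as np
from scipy.stats import poisson

# Illustrative parameters: rows are states {baseline, plan, perimovement}.
rates = np.array([[5.0, 8.0],                  # expected counts per bin for 2 example neurons
                  [15.0, 12.0],
                  [30.0, 25.0]])
A = np.array([[0.98, 0.02, 0.00],              # left-to-right state transitions
              [0.00, 0.97, 0.03],
              [0.00, 0.00, 1.00]])
pi = np.array([1.0, 0.0, 0.0])

def forward_posteriors(counts):
    # counts: (n_bins, n_neurons). Returns filtered state probabilities per bin.
    alpha, out = pi.copy(), []
    for y in counts:
        likelihood = poisson.pmf(y, rates).prod(axis=1)   # per-state emission likelihood
        alpha = (alpha @ A) * likelihood                  # predict (transition), then weight by observation
        alpha /= alpha.sum()
        out.append(alpha.copy())
    return np.array(out)

rng = np.random.default_rng(5)
counts = np.vstack([rng.poisson(rates[0], size=(40, 2)),   # 40 baseline bins...
                    rng.poisson(rates[1], size=(40, 2))])  # ...followed by 40 plan bins
posteriors = forward_posteriors(counts)
plan_onset_bin = int(np.argmax(posteriors[:, 1] > 0.9))    # first bin exceeding the threshold
print(plan_onset_bin)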


The Journal of Neuroscience | 2007

Single-neuron stability during repeated reaching in macaque premotor cortex.

Cynthia A. Chestek; Aaron P. Batista; Gopal Santhanam; Byron M. Yu; Afsheen Afshar; John P. Cunningham; Vikash Gilja; Stephen I. Ryu; Mark M. Churchland; Krishna V. Shenoy

Some movements that animals and humans make are highly stereotyped, repeated with little variation. The patterns of neural activity associated with repeats of a movement may be highly similar, or the same movement may arise from different patterns of neural activity, if the brain exploits redundancies in the neural projections to muscles. We examined the stability of the relationship between neural activity and behavior. We asked whether the variability in neural activity that we observed during repeated reaching was consistent with a noisy but stable relationship, or with a changing relationship, between neural activity and behavior. Monkeys performed highly similar reaches under tight behavioral control, while many neurons in the dorsal aspect of premotor cortex and the primary motor cortex were simultaneously monitored for several hours. Neural activity was predominantly stable over time in all measured properties: firing rate, directional tuning, and contribution to a decoding model that predicted kinematics from neural activity. The small changes in neural activity that we did observe could be accounted for primarily by subtle changes in behavior. We conclude that the relationship between neural activity and practiced behavior is reasonably stable, at least on timescales of minutes up to 48 h. This finding has significant implications for the design of neural prosthetic systems because it suggests that device recalibration need not be overly frequent. It also has implications for studies of neural plasticity because a stable baseline permits identification of nonstationary shifts.
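
One of the stability measures mentioned, directional tuning, is commonly assessed by fitting a cosine tuning curve and comparing preferred directions across blocks of trials. The sketch below does this on synthetic data; the paper's analyses are more extensive.

import numpy as np

def preferred_direction(reach_angles, firing_rates):
    # Fit f(theta) = b0 + bx*cos(theta) + by*sin(theta) by least squares and
    # return the preferred direction atan2(by, bx), in radians.
    X = np.column_stack([np.ones_like(reach_angles),
                         np.cos(reach_angles), np.sin(reach_angles)])
    b0, bx, by = np.linalg.lstsq(X, firing_rates, rcond=None)[0]
    return np.arctan2(by, bx)

rng = np.random.default_rng(6)
angles = rng.uniform(0.0, 2 * np.pi, 300)
rates_early = 10 + 8 * np.cos(angles - 0.50) + rng.normal(0, 2, 300)   # synthetic early block
rates_late = 10 + 8 * np.cos(angles - 0.55) + rng.normal(0, 2, 300)    # synthetic late block

drift = preferred_direction(angles, rates_late) - preferred_direction(angles, rates_early)
print(f"preferred-direction drift: {np.degrees(drift):.1f} deg")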

Collaboration


Dive into Byron M. Yu's collaborations.

Top Co-Authors

Stephen I. Ryu (Palo Alto Medical Foundation)

Maneesh Sahani (University College London)

Mark M. Churchland (Columbia University Medical Center)

Steven M. Chase (Carnegie Mellon University)

Vikash Gilja (University of California)