
Publications


Featured research published by Steven M. Chase.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Functional network reorganization during learning in a brain-computer interface paradigm

Beata Jarosiewicz; Steven M. Chase; George W. Fraser; Meel Velliste; Robert E. Kass; Andrew B. Schwartz

Efforts to study the neural correlates of learning are hampered by the size of the network in which learning occurs. To understand the importance of learning-related changes in a network of neurons, it is necessary to understand how the network acts as a whole to generate behavior. Here we introduce a paradigm in which the output of a cortical network can be perturbed directly and the neural basis of the compensatory changes studied in detail. Using a brain-computer interface, dozens of simultaneously recorded neurons in the motor cortex of awake, behaving monkeys are used to control the movement of a cursor in a three-dimensional virtual-reality environment. This device creates a precise, well-defined mapping between the firing of the recorded neurons and an expressed behavior (cursor movement). In a series of experiments, we force the animal to relearn the association between neural firing and cursor movement in a subset of neurons and assess how the network changes to compensate. We find that changes in neural activity reflect not only an alteration of behavioral strategy but also the relative contributions of individual neurons to the population error signal.


Nature | 2014

Neural constraints on learning

Patrick T. Sadtler; Kristin M. Quick; Matthew D. Golub; Steven M. Chase; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Byron M. Yu; Aaron P. Batista

Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain–computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain–computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. 
These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.
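The abstract describes identifying a low-dimensional subspace (the intrinsic manifold) within the high-dimensional neural space. The idea can be sketched with plain PCA on synthetic data; the paper's actual method, the dimensionalities, and the noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population activity: 200 time bins x 30 neurons, generated
# from a 3-dimensional latent state standing in for the intrinsic manifold.
latents = rng.normal(size=(200, 3))
loading = rng.normal(size=(3, 30))
activity = latents @ loading + 0.1 * rng.normal(size=(200, 30))

# Find the dominant subspace with PCA (SVD of the mean-centered data).
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)

# Three dimensions account for nearly all the shared variance,
# recovering the dimensionality of the latent state.
n_dims = int(np.searchsorted(var_explained, 0.95)) + 1
```

Activity patterns inside this subspace correspond to "within-manifold" patterns; patterns with large components in the discarded dimensions would be "outside-manifold" in the paper's terminology.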


Proceedings of the National Academy of Sciences of the United States of America | 2007

First-spike latency information in single neurons increases when referenced to population onset

Steven M. Chase; Eric D. Young

It is well known that many stimulus parameters, such as sound location in the auditory system or contrast in the visual system, can modulate the timing of the first spike in sensory neurons. Could first-spike latency be a candidate neural code? Most studies measuring first-spike latency information assume that the brain has an independent reference for stimulus onset from which to extract latency. This assumption creates an obvious confound that casts doubt on the feasibility of first-spike latency codes. If latency is measured relative to an internal reference of stimulus onset calculated from the responses of the neural population, the information conveyed by the latency of single neurons might decrease because of correlated changes in latency across the population. Here we assess the effects of a realistic model of stimulus onset detection on the first-spike latency information conveyed by single neurons in the auditory system. Contrary to expectation, we find that on average, the information contained in single neurons does not decrease; in fact, the majority of neurons show a slight increase in the information conveyed by latency referenced to a population onset. Our results show that first-spike latency codes are a feasible mechanism for information transfer even when biologically plausible estimates of stimulus onset are taken into account.
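The intuition that a population-onset reference removes correlated latency jitter can be illustrated with a toy simulation. Everything below is an assumption for illustration: the latency model, the parameter values, and the use of d-prime as a simple stand-in for the paper's information measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: each neuron's first-spike latency shifts with the stimulus by
# a neuron-specific amount, and all neurons share trial-to-trial jitter.
n_trials, n_neurons = 500, 20
stim = rng.integers(0, 2, size=n_trials)              # two stimuli
tuning = rng.uniform(0.0, 10.0, size=n_neurons)       # ms shift per neuron
shared = rng.normal(0.0, 3.0, size=n_trials)          # correlated jitter, ms
private = rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
latency = 10.0 + np.outer(stim, tuning) + shared[:, None] + private

# Population onset: mean first-spike time of the other neurons on a trial.
pop_onset = (latency.sum(axis=1, keepdims=True) - latency) / (n_neurons - 1)
relative = latency - pop_onset   # latency referenced to population onset

def d_prime(x, labels):
    """Stimulus discriminability from a single neuron's latency."""
    a, b = x[labels == 0], x[labels == 1]
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

raw = np.mean([d_prime(latency[:, i], stim) for i in range(n_neurons)])
ref = np.mean([d_prime(relative[:, i], stim) for i in range(n_neurons)])
# Referencing removes the shared jitter, so average discriminability rises.
```

In this toy version the shared jitter cancels while the neuron-specific stimulus dependence survives, mirroring the paper's finding that a population-onset reference need not reduce single-neuron latency information.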


Journal of Neural Engineering | 2009

Control of a brain-computer interface without spike sorting

George W. Fraser; Steven M. Chase; Andrew S. Whitford; Andrew B. Schwartz

Two rhesus monkeys were trained to move a cursor using neural activity recorded with silicon arrays of 96 microelectrodes implanted in the primary motor cortex. We have developed a method to extract movement information from the recorded single and multi-unit activity in the absence of spike sorting. By setting a single threshold across all channels and fitting the resultant events with a spline tuning function, a control signal was extracted from this population using a Bayesian particle-filter extraction algorithm. The animals achieved high-quality control comparable to the performance of decoding schemes based on sorted spikes. Our results suggest that even the simplest signal processing is sufficient for high-quality neuroprosthetic control.
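The unsorted thresholding step can be sketched as follows. The voltage trace, spike shape, threshold level, and refractory window are all invented for illustration, and the paper's spline tuning functions and particle-filter decoder are not shown:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic voltage trace for one channel: unit-variance noise plus a few
# spike-like negative deflections.
fs = 30000                                   # samples/s
trace = rng.normal(0.0, 1.0, size=fs)        # 1 s of noise
spike_times = np.array([3000, 9000, 15000, 21000, 27000])
for t in spike_times:
    trace[t:t + 30] -= 8.0 * np.exp(-np.arange(30) / 8.0)

# Unsorted extraction: a single threshold (here -4.5 x RMS), with every
# crossing counted as an event regardless of which unit fired.
threshold = -4.5 * np.sqrt(np.mean(trace**2))
below = trace < threshold
onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1

# Enforce a refractory gap so one deflection yields one event.
events = [onsets[0]]
for t in onsets[1:]:
    if t - events[-1] > 30:
        events.append(t)
# len(events) is close to the 5 embedded spikes
```

Binned counts of such threshold crossings, rather than sorted single-unit spikes, would then feed the decoder.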


Journal of Computational Neuroscience | 2010

Comparison of brain–computer interface decoding algorithms in open-loop and closed-loop control

Shinsuke Koyama; Steven M. Chase; Andrew S. Whitford; Meel Velliste; Andrew B. Schwartz; Robert E. Kass

Neuroprosthetic devices such as a computer cursor can be controlled by the activity of cortical neurons when an appropriate algorithm is used to decode motor intention. Algorithms which have been proposed for this purpose range from the simple population vector algorithm (PVA) and optimal linear estimator (OLE) to various versions of Bayesian decoders. Although Bayesian decoders typically provide the most accurate off-line reconstructions, it is not known which model assumptions in these algorithms are critical for improving decoding performance. Furthermore, it is not necessarily true that improvements (or deficits) in off-line reconstruction will translate into improvements (or deficits) in on-line control, as the subject might compensate for the specifics of the decoder in use at the time. Here we show that by comparing the performance of nine decoders, assumptions about uniformly distributed preferred directions and the way the cursor trajectories are smoothed have the most impact on decoder performance in off-line reconstruction, while assumptions about tuning curve linearity and spike count variance play relatively minor roles. In on-line control, subjects compensate for directional biases caused by non-uniformly distributed preferred directions, leaving cursor smoothing differences as the largest single algorithmic difference driving decoder performance.


Neural Networks | 2009

2009 Special Issue: Bias, optimal linear estimation, and the differences between open-loop simulation and closed-loop performance of spiking-based brain-computer interface algorithms

Steven M. Chase; Andrew B. Schwartz; Robert E. Kass

The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.
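The open-loop contrast between the two decoders can be sketched on synthetic cosine-tuned data. This is a simplified illustration with invented parameters, not the paper's model; in particular, the preferred directions are deliberately clustered so the PVA's bias is visible:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cosine-tuned population with NON-uniform (clustered)
# preferred directions, where the PVA is biased and the OLE corrects it.
n = 40
pref = rng.normal(0.3, 0.8, size=n)                   # clustered angles, rad
C = np.column_stack([np.cos(pref), np.sin(pref)])     # n x 2 tuning matrix
movement = np.array([np.cos(1.2), np.sin(1.2)])       # true direction
rates = C @ movement + 0.1 * rng.normal(size=n)       # modulated firing

# Population vector algorithm: preferred directions weighted by rate.
pva = C.T @ rates
pva /= np.linalg.norm(pva)

# Optimal linear estimator: least-squares inverse of the tuning matrix.
ole, *_ = np.linalg.lstsq(C, rates, rcond=None)
ole /= np.linalg.norm(ole)

def err(d):
    """Angular error (degrees) of a decoded unit direction."""
    return np.degrees(np.arccos(np.clip(d @ movement, -1.0, 1.0)))
# err(ole) is much smaller than err(pva): the PVA estimate is pulled
# toward the cluster of preferred directions.
```

The paper's closed-loop point is precisely that this off-line gap can shrink on-line, because subjects learn to compensate for the PVA's directional bias.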


The Journal of Neuroscience | 2010

A Reward-Modulated Hebbian Learning Rule Can Explain Experimentally Observed Network Reorganization in a Brain Control Task

Robert A. Legenstein; Steven M. Chase; Andrew B. Schwartz; Wolfgang Maass

It has recently been shown in a brain–computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain–computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.
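The core of such a rule can be sketched in a node-perturbation-style toy: exploratory noise perturbs the output, and a global scalar reward gates a Hebbian weight update. This is not the paper's network model, and all constants below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

n_in = 10
w = np.zeros(n_in)                           # learned readout weights
w_target = rng.normal(0.0, 0.3, size=n_in)   # defines the desired mapping
eta, r_bar = 0.005, 0.0

for _ in range(4000):
    x = rng.normal(size=n_in)                # presynaptic activity
    noise = rng.normal(0.0, 0.5)             # exploratory output noise
    y = w @ x + noise
    reward = -(y - w_target @ x) ** 2        # global scalar reward signal
    w += eta * (reward - r_bar) * noise * x  # reward-gated Hebbian update
    r_bar += 0.05 * (reward - r_bar)         # running reward baseline

# The weights drift toward the target mapping using only the global
# reward and the neuron's own noise; no explicit error vector is needed.
final_err = np.linalg.norm(w - w_target)
```

The key property the abstract highlights is visible here: the update never separates signal from noise explicitly; correlating the noise with reward fluctuations is enough to follow the reward gradient on average.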


Journal of Neurophysiology | 2012

Behavioral and neural correlates of visuomotor adaptation observed through a brain-computer interface in primary motor cortex

Steven M. Chase; Robert E. Kass; Andrew B. Schwartz

Brain-computer interfaces (BCIs) provide a defined link between neural activity and devices, allowing a detailed study of the neural adaptive responses generating behavioral output. We trained monkeys to perform two-dimensional center-out movements of a computer cursor using a BCI. We then applied a perturbation by randomly selecting a subset of the recorded units and rotating their directional contributions to cursor movement by a consistent angle. Globally, this perturbation mimics a visuomotor transformation, and in the first part of this article we characterize the psychophysical indications of motor adaptation and compare them with known results from adaptation of natural reaching movements. Locally, however, only a subset of the neurons in the population actually contributes to error, allowing us to probe for signatures of neural adaptation that might be specific to the subset of neurons we perturbed. One compensation strategy would be to selectively adapt the subset of cells responsible for the error. An alternate strategy would be to globally adapt the entire population to correct the error. Using a recently developed mathematical technique that allows us to differentiate these two mechanisms, we found evidence of both strategies in the neural responses. The dominant strategy we observed was global, accounting for ∼86% of the total error reduction. The remaining 14% came from local changes in the tuning functions of the perturbed units. Interestingly, these local changes were specific to the details of the applied rotation: in particular, changes in the depth of tuning were only observed when the percentage of perturbed cells was small. These results imply that there may be constraints on the network's adaptive capabilities, at least for perturbations lasting only a few hundred trials.
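The perturbation itself can be sketched as a rotation applied to a subset of decoding directions. The units, the 60-degree angle, and the 50% perturbed fraction below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each unit contributes to 2-D cursor velocity along a decoding direction;
# a random subset has its direction rotated by a consistent angle.
n_units, angle = 30, np.radians(60.0)
pref = rng.uniform(0.0, 2.0 * np.pi, size=n_units)
decode = np.column_stack([np.cos(pref), np.sin(pref)])  # decoding directions

R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
perturbed = rng.choice(n_units, size=n_units // 2, replace=False)
decode_pert = decode.copy()
decode_pert[perturbed] = decode[perturbed] @ R.T        # rotate the subset

# Intended movement along +x, with cosine-tuned modulation per unit.
intent = np.array([1.0, 0.0])
rates = decode @ intent                                 # firing modulation
cursor = decode_pert.T @ rates                          # decoded velocity
cursor_angle = np.degrees(np.arctan2(cursor[1], cursor[0]))
# Rotating half the units by 60 degrees deflects the cursor by roughly
# half that angle, so globally the error resembles a visuomotor rotation.
```

This is the sense in which the perturbation is global to the behavior but local to the population: before learning, the cursor error looks like a single rotation, yet only the perturbed units are its source.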


The Journal of Neuroscience | 2005

Limited Segregation of Different Types of Sound Localization Information among Classes of Units in the Inferior Colliculus

Steven M. Chase; Eric D. Young

The auditory system uses three cues to decode sound location: interaural time differences (ITDs), interaural level differences (ILDs), and spectral notches (SNs). Initial processing of these cues is done in separate brainstem nuclei, with ITDs in the medial superior olive, ILDs in the lateral superior olive, and SNs in the dorsal cochlear nucleus. This work addresses the nature of the convergence of localization information in the central nucleus of the inferior colliculus (ICC). Ramachandran et al. (1999) argued that ICC neurons of types V, I, and O, respectively, receive their predominant inputs from ITD-, ILD-, and SN-sensitive brainstem nuclei, suggesting that these ICC response types should be differentially sensitive to localization cues. Here, single-unit responses to simultaneous manipulation of pairs of localization cues were recorded, and the mutual information between discharge rate and individual cues was quantified. Although rate responses to cue variation were generally consistent with those expected from the hypothesized anatomical connections, the differences in information were not as large as expected. Type I units provide the most information, especially about SNs in the physiologically useful range. Type I and O units provide information about ILDs, even at low frequencies at which actual ILDs are very small. ITD information is provided by a subset of all low-frequency neurons. Type V neurons provide information mainly about ITDs and the average binaural intensity. These results are the first to quantify the relative representation of cues in terms of information and suggest a variety of degrees of cue integration in the ICC.
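The quantity at the center of this analysis, mutual information between discharge rate and a localization cue, can be estimated from a joint histogram. A plug-in sketch on synthetic Poisson responses follows; the cue values and tuning numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic unit whose spike count grows with an ILD-like cue.
cues = np.repeat(np.arange(4), 250)        # 4 cue values, 250 trials each
mean_rate = 2.0 + 2.0 * cues               # Poisson mean per cue value
counts = rng.poisson(mean_rate)

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from the empirical joint."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over rates
    py = joint.sum(axis=0, keepdims=True)   # marginal over cues
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

info = mutual_information(counts, cues)
# A tuned unit like this conveys a sizable fraction of the log2(4) = 2 bit
# ceiling; an untuned unit would convey close to zero.
```

Note that plug-in estimates of this kind are biased upward for small trial counts, which is why information studies typically apply a bias correction before comparing units.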


The Journal of Neuroscience | 2006

Spike-Timing Codes Enhance the Representation of Multiple Simultaneous Sound-Localization Cues in the Inferior Colliculus

Steven M. Chase; Eric D. Young

To preserve multiple streams of independent information that converge onto a neuron, the information must be re-represented more efficiently in the neural response. Here we analyze the increase in the representational capacity of spike timing over rate codes using sound localization cues as an example. The inferior colliculus receives convergent input from multiple auditory brainstem nuclei, including sound localization information such as interaural level differences (ILDs), interaural timing differences (ITDs), and spectral cues. Virtual space techniques were used to create stimulus sets varying in two sound-localization parameters each. Information about the cues was quantified using a spike distance metric that allows one to separate contributions to the information arising from spike rate and spike timing. Spike timing enhances the representation of spectral and ILD cues at timescales averaging 12 ms. ITD information, however, is carried by a rate code. Comparing responses to frozen and random noise shows that the temporal information is mainly attributable to phase locking to temporal stimulus features, with an additional first-spike latency component. With rate-based codes, there is significant confounding of information about two cues presented simultaneously, meaning that the cues cannot be decoded independently. Spike-timing-based codes reduce this confounded information. Furthermore, the relative representation of the cues often changes as a function of the time resolution of the code, implying that information about multiple cues can be multiplexed onto individual spike trains.
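One standard spike distance metric of this kind is the Victor–Purpura metric, whose cost parameter q interpolates between a pure rate code (q = 0, where only spike counts matter) and progressively finer timing codes. A minimal sketch follows; the paper's exact metric and parameters may differ:

```python
import numpy as np

def vp_distance(s1, s2, q):
    """Victor-Purpura distance: minimum cost of transforming one spike
    train into the other, with unit cost to insert or delete a spike and
    cost q*|dt| to shift a spike by dt (q sets the timescale ~2/q)."""
    n1, n2 = len(s1), len(s2)
    d = np.zeros((n1 + 1, n2 + 1))
    d[:, 0] = np.arange(n1 + 1)      # delete every spike of s1
    d[0, :] = np.arange(n2 + 1)      # insert every spike of s2
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            d[i, j] = min(d[i - 1, j] + 1,                  # delete
                          d[i, j - 1] + 1,                  # insert
                          d[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))
    return d[n1, n2]

a = [10.0, 25.0, 40.0]   # spike times in ms
b = [12.0, 25.0, 41.0]
# At q = 0 the trains are identical (equal counts); at q = 0.1/ms the
# distance is about 0.3, the cost of shifting spikes by 2, 0, and 1 ms.
```

Sweeping q and asking at which value stimulus discrimination peaks is what lets such an analysis assign a timescale (here, ~12 ms on average) to the timing code.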

Collaboration


An overview of Steven M. Chase's most frequent co-authors and their affiliations.

Top Co-Authors

Byron M. Yu (Carnegie Mellon University)
Matthew D. Golub (Carnegie Mellon University)
Robert E. Kass (Carnegie Mellon University)
Stephen I. Ryu (Palo Alto Medical Foundation)
Yin Zhang (Carnegie Mellon University)
Eric D. Young (Johns Hopkins University)
Emily R. Oby (University of Pittsburgh)