Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Patrick T. Sadtler is active.

Publication


Featured research published by Patrick T. Sadtler.


Nature | 2014

Neural constraints on learning

Patrick T. Sadtler; Kristin M. Quick; Matthew D. Golub; Steven M. Chase; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Byron M. Yu; Aaron P. Batista

Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain–computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain–computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. 
These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.
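The intrinsic-manifold idea in this abstract can be sketched numerically. The toy simulation below (all dimensions, noise levels, and mappings are illustrative, not the study's actual analysis) shows how population activity driven by a few shared latent factors concentrates in a low-dimensional subspace, and how a BCI mapping that reads cursor velocity from within-manifold directions captures far more activity variance than one reading from outside-manifold directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 40 neurons whose activity is driven by 8 shared latent
# factors -- a stand-in for the low-dimensional "intrinsic manifold"
# described in the abstract (all numbers illustrative).
n_neurons, n_latents, n_samples = 40, 8, 2000
loading = rng.normal(size=(n_neurons, n_latents))    # latent -> neuron weights
latents = rng.normal(size=(n_samples, n_latents))
activity = latents @ loading.T + 0.3 * rng.normal(size=(n_samples, n_neurons))

# Estimate the manifold with PCA: the leading principal components
# capture most of the variance, because activity stays near a subspace.
activity_centered = activity - activity.mean(axis=0)
_, s, vt = np.linalg.svd(activity_centered, full_matrices=False)
var_explained = (s**2) / (s**2).sum()
print(f"variance in top {n_latents} PCs: {var_explained[:n_latents].sum():.2f}")

# A "within-manifold" BCI mapping reads 2D cursor velocity from the top
# components; an "outside-manifold" mapping reads from trailing ones.
within_map = vt[:2]
outside_map = vt[-2:]
velocity_within = activity_centered @ within_map.T
velocity_outside = activity_centered @ outside_map.T
print("velocity variance, within :", velocity_within.var(axis=0).sum().round(2))
print("velocity variance, outside:", velocity_outside.var(axis=0).sum().round(2))
```

The large gap between the two variance figures mirrors why outside-manifold mappings are harder to control: the network rarely produces activity along those directions.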


Journal of Neural Engineering | 2014

To sort or not to sort: the impact of spike-sorting on neural decoding performance

Sonia Todorova; Patrick T. Sadtler; Aaron P. Batista; Steven M. Chase; Valérie Ventura

OBJECTIVE Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. APPROACH We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert sorting, discarding the noise; expert sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. MAIN RESULTS Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. 
SIGNIFICANCE Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
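The core trade-off the study measures, sorted units versus pooled threshold crossings, can be illustrated with a toy simulation. Here two tuned neurons with different preferred directions share one electrode; a least-squares decode in the spirit of an optimal linear estimator sees either each neuron's counts ("sorted") or only their sum ("unsorted"). The tuning model and all parameters are made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two tuned neurons on one electrode: "sorted" decoding sees each
# neuron's spike counts separately; "unsorted" decoding sees only their
# sum, as threshold crossings would. All parameters are illustrative.
n_trials = 5000
velocity = rng.normal(size=(n_trials, 2))            # 2D hand velocity
pref_dirs = np.array([[1.0, 0.0], [-0.2, 1.0]])      # preferred directions
rates = np.exp(1.0 + velocity @ pref_dirs.T)         # log-linear tuning
counts = rng.poisson(rates)                          # per-neuron spike counts

def ole_mse(features, target):
    """Linear least-squares decode (OLE-style); return mean squared error."""
    X = np.column_stack([np.ones(len(features)), features])
    weights, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((X @ weights - target) ** 2)

mse_sorted = ole_mse(counts, velocity)                          # sorted units
mse_unsorted = ole_mse(counts.sum(axis=1, keepdims=True), velocity)
print(f"MSE sorted:   {mse_sorted:.3f}")
print(f"MSE unsorted: {mse_unsorted:.3f}")
```

Summing the two neurons discards the directional information that distinguishes them, so the sorted decode has lower error, consistent with the paper's finding that spike-sorting is useful.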


Neural Computation | 2015

Extracting low-dimensional latent structure from time series in the presence of delays

Karthik Lakshmanan; Patrick T. Sadtler; Elizabeth C. Tyler-Kabara; Aaron P. Batista; Byron M. Yu

Noisy, high-dimensional time series observations can often be described by a set of low-dimensional latent variables. Commonly used methods to extract these latent variables typically assume instantaneous relationships between the latent and observed variables. In many physical systems, changes in the latent variables manifest as changes in the observed variables after time delays. Techniques that do not account for these delays can recover a larger number of latent variables than are present in the system, thereby making the latent representation more difficult to interpret. In this work, we introduce a novel probabilistic technique, time-delay Gaussian-process factor analysis (TD-GPFA), that performs dimensionality reduction in the presence of a different time delay between each pair of latent and observed variables. We demonstrate how using a Gaussian process to model the evolution of each latent variable allows us to tractably learn these delays over a continuous domain. Additionally, we show how TD-GPFA combines temporal smoothing and dimensionality reduction into a common probabilistic framework. We present an expectation/conditional maximization either (ECME) algorithm to learn the model parameters. Our simulations demonstrate that when time delays are present, TD-GPFA is able to correctly identify these delays and recover the latent space. We then applied TD-GPFA to the activity of tens of neurons recorded simultaneously in the macaque motor cortex during a reaching task. TD-GPFA is able to better describe the neural activity using a more parsimonious latent space than GPFA, a method that has been used to interpret motor cortex data but does not account for time delays. More broadly, TD-GPFA can help to unravel the mechanisms underlying high-dimensional time series data by taking into account physical delays in the system.
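The problem TD-GPFA addresses, delays inflating the apparent latent dimensionality, is easy to demonstrate. The sketch below (an illustration of the problem, not the TD-GPFA algorithm itself; all signal parameters are made up) drives two observed channels from one smooth latent, with the second channel delayed. An instantaneous method like PCA then needs a second component, whereas correcting for the delay leaves a single component:

```python
import numpy as np

rng = np.random.default_rng(2)

# One smooth latent drives two observed channels, but the second channel
# sees it with a time delay (signal parameters are illustrative).
n_time, delay = 500, 20
latent = np.cumsum(rng.normal(size=n_time + delay))   # smooth 1D latent
latent = np.convolve(latent, np.ones(10) / 10, mode="same")

obs = np.column_stack([
    latent[delay:],          # channel 1: no delay
    latent[:-delay],         # channel 2: delayed copy of the same latent
])
obs -= obs.mean(axis=0)

# An instantaneous method (plain PCA / factor analysis) spreads variance
# across TWO components, even though one latent generated the data.
_, s, _ = np.linalg.svd(obs, full_matrices=False)
var = s**2 / (s**2).sum()
print("variance per PC:", var.round(3))

# After undoing the known delay, a single component explains everything.
aligned = np.column_stack([latent[delay:], latent[delay:]])
aligned -= aligned.mean(axis=0)
_, s2, _ = np.linalg.svd(aligned, full_matrices=False)
var2 = s2**2 / (s2**2).sum()
print("variance per PC (delay-corrected):", var2.round(3))
```

TD-GPFA's contribution is to learn such delays from data over a continuous domain, rather than assuming they are known as this sketch does.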


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Direction and speed tuning of motor-cortex multi-unit activity and local field potentials during reaching movements

Sagi Perel; Patrick T. Sadtler; Jason M. Godlove; Stephen I. Ryu; Wei Wang; Aaron P. Batista; Steven M. Chase

Primary motor-cortex multi-unit activity (MUA) and local field potentials (LFPs) have both been suggested as potential control signals for brain-computer interfaces (BCIs) aimed at movement restoration. Some studies report that LFP-based decoding is comparable to spiking-based decoding, while others offer contradicting evidence. Differences in experimental paradigms, tuning models and decoding techniques make it hard to directly compare these results. Here, we use regression and mutual information analyses to study how MUA and LFP encode various kinematic parameters during reaching movements. We find that in addition to previously reported directional tuning, MUA also contains prominent speed tuning. LFP activity in low-frequency bands (15-40 Hz, LFPL) is primarily speed tuned, and contains more speed information than both high-frequency LFP (100-300 Hz, LFPH) and MUA. LFPH contains more directional information compared to LFPL, but less information when compared with MUA. Our results suggest that a velocity-and-speed encoding model is most appropriate for both MUA and LFPH, whereas a speed-only encoding model is adequate for LFPL.
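The regression side of this analysis can be sketched with the kind of velocity-and-speed encoding model the abstract names: rate = b0 + bx*vx + by*vy + bs*speed. The data and coefficients below are simulated for illustration, not measured values from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an activity signal with both velocity and speed tuning,
# then recover the tuning by linear regression (all values illustrative).
n_samples = 3000
vel = rng.normal(size=(n_samples, 2))                # 2D hand velocity
speed = np.linalg.norm(vel, axis=1)

true_b = np.array([5.0, 2.0, -1.0, 1.5])             # b0, bx, by, bs
design = np.column_stack([np.ones(n_samples), vel, speed])
rate = design @ true_b + rng.normal(scale=0.5, size=n_samples)

# Least-squares regression recovers the tuning coefficients.
b_hat, *_ = np.linalg.lstsq(design, rate, rcond=None)
print("estimated coefficients:", b_hat.round(2))

# Comparing a speed-only model against the full model shows how much
# explanatory power the directional terms add.
def r_squared(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ w).var() / y.var()

speed_only = np.column_stack([np.ones(n_samples), speed])
print(f"R^2 speed-only: {r_squared(speed_only, rate):.2f}")
print(f"R^2 full model: {r_squared(design, rate):.2f}")
```

Fitting nested models like these to MUA and to each LFP band is one way to decide, as the study does, which signals need the directional terms and which are adequately described by speed alone.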


Nature Neuroscience | 2018

Learning by neural reassociation

Matthew D. Golub; Patrick T. Sadtler; Emily R. Oby; Kristin M. Quick; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Aaron P. Batista; Steven M. Chase; Byron M. Yu

Behavior is driven by coordinated activity across a population of neurons. Learning requires the brain to change the neural population activity produced to achieve a given behavioral goal. How does population activity reorganize during learning? We studied intracortical population activity in the primary motor cortex of rhesus macaques during short-term learning in a brain–computer interface (BCI) task. In a BCI, the mapping between neural activity and behavior is exactly known, enabling us to rigorously define hypotheses about neural reorganization during learning. We found that changes in population activity followed a suboptimal neural strategy of reassociation: animals relied on a fixed repertoire of activity patterns and associated those patterns with different movements after learning. These results indicate that the activity patterns that a neural population can generate are even more constrained than previously thought and might explain why it is often difficult to quickly learn to a high level of proficiency.

Learning is ubiquitous in everyday life, yet it is unclear how neurons change their activity together during learning. Golub and colleagues show that short-term learning relies on a fixed neural repertoire, which limits behavioral improvement.


Journal of Neural Engineering | 2016

Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters

Emily R. Oby; Sagi Perel; Patrick T. Sadtler; Douglas A. Ruff; Jessica L Mischel; David F Montez; Marlene R. Cohen; Aaron P. Batista; Steven M. Chase

OBJECTIVE A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). APPROACH We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. MAIN RESULTS The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. SIGNIFICANCE How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. 
Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
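The threshold-sweep idea can be illustrated with a toy version of the recording setup. In the sketch below (all amplitudes, rates, and the sampling configuration are invented for illustration, not taken from the paper), one electrode carries a large-amplitude unit whose rate encodes the variable of interest and a small-amplitude unit whose rate does not; sweeping the detection threshold changes which crossings, and hence which information, the count captures:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic voltage trace: Gaussian noise plus delta-like spikes from a
# tuned unit at -80 uV and an untuned unit at -40 uV (all illustrative).
fs, dur = 30000, 2.0                                 # 30 kHz for 2 s
n = int(fs * dur)

def make_trace(rate_big):
    """Background noise plus spikes from the two units, in microvolts."""
    v = rng.normal(scale=5.0, size=n)
    for amp, rate in ((-80.0, rate_big), (-40.0, 50.0)):
        idx = rng.choice(n, size=rng.poisson(rate * dur), replace=False)
        v[idx] += amp
    return v

def count_crossings(v, threshold):
    """Count downward crossings of a negative voltage threshold."""
    below = v < threshold
    return int(np.sum(below[1:] & ~below[:-1]))

# Compare crossing counts between a high-rate and a low-rate condition
# of the tuned unit, at a strict threshold (tuned unit only) and a
# permissive one (both units contribute crossings).
results = {}
for threshold in (-60.0, -25.0):
    hi = count_crossings(make_trace(rate_big=60.0), threshold)
    lo = count_crossings(make_trace(rate_big=10.0), threshold)
    results[threshold] = (hi, lo)
    print(f"threshold {threshold:+.0f} uV: hi-rate {hi} vs lo-rate {lo} crossings")
```

In this toy setup the permissive threshold adds untuned crossings that dilute the condition difference, which is the intuition behind tuning the threshold to the parameter of interest rather than to single-neuron isolation.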


Journal of Neural Engineering | 2015

Brain–computer interface control along instructed paths

Patrick T. Sadtler; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Byron M. Yu; Aaron P. Batista

OBJECTIVE Brain-computer interfaces (BCIs) are being developed to assist paralyzed people and amputees by translating neural activity into movements of a computer cursor or prosthetic limb. Here we introduce a novel BCI task paradigm, intended to help accelerate improvements to BCI systems. Through this task, we can push the performance limits of BCI systems, we can quantify more accurately how well a BCI system captures the user's intent, and we can increase the richness of the BCI movement repertoire. APPROACH We have implemented an instructed path task, wherein the user must drive a cursor along a visible path. The instructed path task provides a versatile framework to increase the difficulty of the task and thereby push the limits of performance. Relative to traditional point-to-point tasks, the instructed path task allows more thorough analysis of decoding performance and greater richness of movement kinematics. MAIN RESULTS We demonstrate that monkeys are able to perform the instructed path task in a closed-loop BCI setting. We further investigate how the performance under BCI control compares to native arm control, whether users can decrease their movement variability in the face of a more demanding task, and how the kinematic richness is enhanced in this task. SIGNIFICANCE The use of the instructed path task has the potential to accelerate the development of BCI systems and their clinical translation.


International IEEE/EMBS Conference on Neural Engineering | 2011

High-performance neural prosthetic control along instructed paths

Patrick T. Sadtler; Stephen I. Ryu; Byron M. Yu; Aaron P. Batista

Neural prostheses are becoming increasingly feasible as assistive technologies for paralyzed patients. A major goal is to provide control of a prosthesis rivaling the natural arm in speed, accuracy, and flexibility. Here, we demonstrate high-performance cursor control by training a monkey to move a cursor in a 2D virtual reality environment using neural activity recorded in primary motor cortex. On a standard center-out task with 8 possible targets, the subject maintained a success rate greater than 95% over many hundreds of trials, on par with previous reports. We introduced the more challenging task of moving the cursor along instructed paths with zero, one, and two inflections. Over several weeks, the subject's performance with double-inflection paths reached a stable level of greater than 55% success with movement times approaching those of the natural arm. Our instructed trajectory task provides a new standard for quantification of prosthesis performance: since the subject's intended movement is known (i.e. the instructed path), we can compute the root-mean-square error (RMSE) between the decoded and intended cursor position throughout the reach. We found that, while success rate tended to increase with training, the RMSE among successful trials remained largely unchanged, consistent with the all-or-none reward scheme. In sum, this work demonstrates the utility of instructed paths for i) pushing the limits of the subject's control and ii) rigorously quantifying the accuracy of cursor movements, both of which are critical for increasing the clinical viability of neural prosthetic systems.
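The RMSE metric described above is straightforward because the instructed path makes the intended position known at every time step. A minimal sketch, with made-up trajectories standing in for decoded and instructed cursor positions:

```python
import numpy as np

# With an instructed path, the intended position is known at each time
# step, so error can be computed sample by sample rather than only at
# the endpoint. The trajectories below are invented for illustration.
def path_rmse(decoded, instructed):
    """Root-mean-square error between decoded and instructed positions."""
    err = decoded - instructed
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

t = np.linspace(0, 1, 100)[:, None]
instructed = np.hstack([t, np.sin(2 * np.pi * t)])   # a one-inflection path
decoded = instructed + np.random.default_rng(5).normal(
    scale=0.05, size=instructed.shape)               # decoded = path + jitter
print(f"RMSE: {path_rmse(decoded, instructed):.3f} (path units)")
```

Because the error is accumulated along the whole reach, this metric can stay flat even while the all-or-none success rate improves, as the study observed.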


Journal of Neurophysiology | 2015

Single-unit activity, threshold crossings, and local field potentials in motor cortex differentially encode reach kinematics

Sagi Perel; Patrick T. Sadtler; Emily R. Oby; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Aaron P. Batista; Steve Chase


eLife | 2018

Constraints on neural redundancy

Jay A Hennig; Matthew D. Golub; Peter J Lund; Patrick T. Sadtler; Emily R. Oby; Kristin M. Quick; Stephen I. Ryu; Elizabeth C Tyler-Kabara; Aaron P. Batista; Byron M. Yu; Steven M. Chase

Collaboration


Dive into Patrick T. Sadtler's collaborations.

Top Co-Authors

Stephen I. Ryu
Palo Alto Medical Foundation

Byron M. Yu
Carnegie Mellon University

Steven M. Chase
Carnegie Mellon University

Emily R. Oby
University of Pittsburgh

Matthew D. Golub
Carnegie Mellon University

Sagi Perel
Carnegie Mellon University

David F Montez
University of Pittsburgh