Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Sussillo is active.

Publication


Featured research published by David Sussillo.


Nature | 2013

Context-dependent computation by recurrent dynamics in prefrontal cortex

Valerio Mante; David Sussillo; Krishna V. Shenoy; William T. Newsome

Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.
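
The task in this study lends itself to a compact simulation. Below is a minimal sketch, in Python with NumPy, of how trials for a context-dependent integration task of this kind might be generated; all names and parameter values are illustrative assumptions, not the paper's actual task code.

import numpy as np

def make_trial(T=100, coh_motion=0.1, coh_color=-0.2,
               context="motion", noise=1.0, rng=None):
    """Generate one trial of a context-dependent integration task.
    Two noisy sensory streams (motion and color) are presented with a
    binary context cue; the correct choice is the sign of the mean of
    the cued stream only. Values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    motion = coh_motion + noise * rng.standard_normal(T)
    color = coh_color + noise * rng.standard_normal(T)
    ctx = np.ones(T) if context == "motion" else -np.ones(T)
    inputs = np.stack([motion, color, ctx], axis=1)   # (T, 3) network input
    relevant = coh_motion if context == "motion" else coh_color
    choice = np.sign(relevant)                        # +1/-1 target at trial end
    return inputs, choice

inputs, choice = make_trial(context="color")          # same stimuli, other context
print(inputs.shape, choice)                           # (100, 3) -1.0

An RNN trained on such trials must select the cued stream and integrate it toward a choice, which is the selection-and-integration computation the paper reverse-engineers.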


Neuron | 2009

Generating Coherent Patterns of Activity from Chaotic Neural Networks

David Sussillo; L. F. Abbott

Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.
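
FORCE learning is most often implemented as a recursive least-squares (RLS) update of the readout weights, applied while the network runs with its feedback loop intact. The following is a minimal sketch in Python/NumPy of that standard formulation; the network size, gain, time constants, and target function are illustrative assumptions rather than the paper's exact settings.

import numpy as np

rng = np.random.default_rng(0)
N, dt, tau, g = 500, 0.1, 1.0, 1.5     # units, time step, time constant, gain
alpha = 1.0                            # RLS regularization

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # g > 1: spontaneously chaotic
w = np.zeros(N)                                   # readout weights (trained)
w_fb = 2 * rng.random(N) - 1                      # fixed feedback weights
P = np.eye(N) / alpha                             # running inverse correlation matrix

T = 10000
t = np.arange(T) * dt
f_target = np.sin(2 * np.pi * t / 50)             # desired output (illustrative)

x = 0.5 * rng.standard_normal(N)                  # network state
for i in range(T):
    r = np.tanh(x)                                # firing rates
    z = w @ r                                     # network output
    x += dt / tau * (-x + J @ r + w_fb * z)       # feedback left intact and unclamped
    if i % 2 == 0:                                # RLS update of the readout
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - f_target[i]) * k                # keeps output error small throughout

The key property is that the error is suppressed from the first update onward, so the fed-back output never drives the chaotic network far from the target trajectory.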


The Journal of Neuroscience | 2006

Modular Propagation of Epileptiform Activity: Evidence for an Inhibitory Veto in Neocortex

Andrew J. Trevelyan; David Sussillo; Brendon O. Watson; Rafael Yuste

What regulates the spread of activity through cortical circuits? We present here data indicating a pivotal role for a vetoing inhibition restraining modules of pyramidal neurons. We combined fast calcium imaging of network activity with whole-cell recordings to examine epileptiform propagation in mouse neocortical slices. Epileptiform activity was induced by washing Mg2+ ions out of the slice. Pyramidal cells receive barrages of inhibitory inputs in advance of the epileptiform wave. The inhibitory barrages are effectively nullified at low doses of picrotoxin (2.5–5 μM). When present, however, these inhibitory barrages occlude an intense excitatory synaptic drive that would normally exceed action potential threshold by approximately a factor of 10. Despite this level of excitation, the inhibitory barrages suppress firing, thereby limiting further neuronal recruitment to the ictal event. Pyramidal neurons are recruited to the epileptiform event once the inhibitory restraint fails and are recruited in spatially clustered populations (150–250 μm diameter). The recruitment of the cells within a given module is virtually simultaneous, and thus epileptiform events progress in intermittent (0.5–1 Hz) steps across the cortical network. We propose that the interneurons that supply the vetoing inhibition define these modular circuit territories.


The Journal of Neuroscience | 2007

Feedforward inhibition contributes to the control of epileptiform propagation speed.

Andrew J. Trevelyan; David Sussillo; Rafael Yuste

It is still poorly understood how epileptiform events can recruit cortical circuits. Moreover, the speed of propagation of epileptiform discharges in vivo and in vitro can vary over several orders of magnitude (0.1–100 mm/s), a range difficult to explain by a single mechanism. We previously showed how epileptiform spread in neocortical slices is opposed by a powerful feedforward inhibition ahead of the ictal wave. When this feedforward inhibition is intact, epileptiform activity spreads very slowly (∼100 μm/s). We now investigate whether changes in this inhibitory restraint can also explain much faster propagation velocities. We made use of a very characteristic pattern of evolution of ictal activity in the zero magnesium (0 Mg2+) model of epilepsy. With each successive ictal event, the number of preictal inhibitory barrages dropped, and in parallel with this change, the propagation velocity increased. There was a highly significant correlation (p < 0.001) between the two measures over a 1000-fold range of velocities, indicating that feedforward inhibition was the prime determinant of the speed of epileptiform propagation. We propose that the speed of propagation is set by the extent of the recruitment steps, which in turn is set by how successfully the feedforward inhibitory restraint contains the excitatory drive. Thus, a single mechanism could account for the wide range of propagation velocities of epileptiform events observed in vitro and in vivo.


Nature Neuroscience | 2015

A neural network that finds a naturalistic solution for the production of muscle activity

David Sussillo; Mark M. Churchland; Matthew T. Kaufman; Krishna V. Shenoy

It remains an open question how neural responses in motor cortex relate to movement. We explored the hypothesis that motor cortex reflects dynamics appropriate for generating temporally patterned outgoing commands. To formalize this hypothesis, we trained recurrent neural networks to reproduce the muscle activity of reaching monkeys. Models had to infer dynamics that could transform simple inputs into temporally and spatially complex patterns of muscle activity. Analysis of trained models revealed that the natural dynamical solution was a low-dimensional oscillator that generated the necessary multiphasic commands. This solution closely resembled, at both the single-neuron and population levels, what was observed in neural recordings from the same monkeys. Notably, data and simulations agreed only when models were optimized to find simple solutions. An appealing interpretation is that the empirically observed dynamics of motor cortex may reflect a simple solution to the problem of generating temporally patterned descending commands.
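
The models in question are continuous-time "rate" RNNs. A minimal sketch of the forward dynamics of such a model, in Python/NumPy, is below; the weights are random placeholders, not the paper's trained networks, and the training step (e.g., backpropagation through time with regularization toward simple solutions) is omitted.

import numpy as np

def run_rnn(u, J, B, W_out, tau=0.01, dt=0.001):
    """Euler-integrate tau * dx/dt = -x + J @ tanh(x) + B @ u(t),
    reading out y(t) = W_out @ tanh(x). Illustrative only."""
    N = J.shape[0]
    x = np.zeros(N)
    y = np.zeros((u.shape[0], W_out.shape[0]))
    for t in range(u.shape[0]):
        r = np.tanh(x)                      # unit firing rates
        x += dt / tau * (-x + J @ r + B @ u[t])
        y[t] = W_out @ r                    # stand-in for muscle (EMG) output
    return y

rng = np.random.default_rng(1)
N, n_in, n_out, T = 200, 3, 2, 500
J = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)
B = rng.standard_normal((N, n_in)) / np.sqrt(n_in)
W_out = rng.standard_normal((n_out, N)) / np.sqrt(N)
emg = run_rnn(rng.standard_normal((T, n_in)), J, B, W_out)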


Neural Computation | 2013

Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks

David Sussillo; Omri Barak

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.
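
The optimization at the heart of the technique is simple to state: for dynamics dx/dt = F(x), minimize the scalar q(x) = ½‖F(x)‖² from many initial states sampled along network trajectories. Minima with q ≈ 0 are fixed points; small but nonzero minima are slow points. A minimal sketch with NumPy and SciPy follows; the recurrent weights here are random placeholders rather than a trained RNN.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N = 50
J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)

def F(x):                        # autonomous rate-RNN velocity field
    return -x + J @ np.tanh(x)

def q(x):                        # q(x) = 0.5 * ||F(x)||^2, zero exactly at fixed points
    return 0.5 * np.sum(F(x) ** 2)

candidates = []
for _ in range(20):              # restarts from many initial conditions
    x0 = 0.5 * rng.standard_normal(N)
    res = minimize(q, x0, method="L-BFGS-B")
    candidates.append((res.x, res.fun))

fixed = [x for x, v in candidates if v < 1e-8]          # true fixed points
slow  = [x for x, v in candidates if 1e-8 <= v < 1e-2]  # slow points

Linearizing F around each recovered point (via its Jacobian) then exposes the local dynamics used to reverse-engineer the computation.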


Journal of Neural Engineering | 2012

A recurrent neural network for closed-loop intracortical brain–machine interface decoders

David Sussillo; Paul Nuyujukian; Joline M Fan; Jonathan C. Kao; Sergey D. Stavisky; Stephen I. Ryu; Krishna V. Shenoy

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships in time series data with complex temporal dependencies. In this paper, we explore the ability of a simplified type of RNN, one with limited modifications to the internal weights called an echo state network (ESN), to effectively and continuously decode monkey reaches during a standard center-out reach task using a cortical brain–machine interface (BMI) in a closed loop. We demonstrate that the RNN, an ESN implementation termed a FORCE decoder (from first-order reduced and controlled error learning), learns the task quickly and significantly outperforms the current state-of-the-art method, the velocity Kalman filter (VKF), as measured by target acquisition time. We also demonstrate that the FORCE decoder generalizes to a more difficult task by successfully operating the BMI in a randomized point-to-point task. The FORCE decoder is also robust, as measured by the success rate over extended sessions. Finally, we show that decoded cursor dynamics are more like naturalistic hand movements than those of the VKF. Taken together, these results suggest that RNNs in general, and the FORCE decoder in particular, are powerful tools for BMI decoder applications.
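
Because an echo state network's recurrent weights stay fixed, decoder training reduces to fitting a linear readout from reservoir states to kinematics. The sketch below, in Python/NumPy, fits that readout offline with ridge regression; this is a simpler stand-in for the online FORCE/RLS procedure the paper uses, and the binned spike counts and cursor velocities are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(3)
N, n_in, T = 300, 96, 2000     # reservoir units, neural channels, time bins
a = 0.3                        # leak rate (illustrative)

J = 1.3 * rng.standard_normal((N, N)) / np.sqrt(N)     # fixed random recurrence
W_in = rng.standard_normal((N, n_in)) / np.sqrt(n_in)  # fixed input weights

spikes = rng.poisson(2.0, size=(T, n_in)).astype(float)  # placeholder spike counts
vel = rng.standard_normal((T, 2))                        # placeholder 2D cursor velocity

X = np.zeros((T, N))           # reservoir states driven by the neural data
x = np.zeros(N)
for t in range(T):
    x = (1 - a) * x + a * np.tanh(J @ x + W_in @ spikes[t])
    X[t] = x

lam = 1.0                      # ridge penalty
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ vel)  # (N, 2) readout
vel_hat = X @ W_out            # decoded velocities; in closed loop these drive the cursor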


EURASIP Journal on Advances in Signal Processing | 2004

Spectrogram analysis of genomes

David Sussillo; Anshul Kundaje; Dimitris Anastassiou

We performed frequency-domain analysis of the genomes of various organisms using tricolor spectrograms, identifying several types of distinct visual patterns characterizing specific DNA regions. We relate patterns and their frequency characteristics to the sequence characteristics of the DNA. At times, the spectrogram patterns could be related to the structure of the corresponding protein region by using various public databases such as GenBank. Some patterns are explained by the biological nature of the corresponding regions, which relate to chromosome structure and protein coding, and some patterns have as yet unknown biological significance. We found biologically meaningful patterns at scales ranging from millions of base pairs down to a few hundred base pairs. Chromosome-wide patterns include periodicities ranging from 2 to 300 base pairs. The color of the spectrogram depends on the nucleotide content at specific frequencies, and therefore can be used as a local indicator of CG content and other measures of relative base content. Several smaller-scale patterns were found to represent different types of domains made up of various tandem repeats.
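
The tricolor spectrogram construction can be reproduced in a few lines: map each base to a 0/1 indicator sequence, take short-time Fourier transforms of three of the indicators, and assign the magnitude maps to red, green, and blue channels. A minimal sketch in Python/NumPy follows; the random sequence and the base-to-channel assignment are illustrative assumptions.

import numpy as np

def dna_spectrogram(seq, win=256, step=64):
    """Tricolor spectrogram of a DNA string: STFT magnitudes of the
    A, C, and G indicator sequences become the R, G, and B channels
    (the T indicator is redundant, since the four indicators sum to 1)."""
    channels = []
    for base in "ACG":
        x = np.array([1.0 if b == base else 0.0 for b in seq])
        frames = np.array([x[i:i + win]
                           for i in range(0, len(x) - win + 1, step)])
        mag = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
        channels.append(mag)
    rgb = np.stack(channels, axis=-1)     # (n_frames, n_freqs, 3)
    return rgb / rgb.max()                # normalize for display

seq = "".join(np.random.default_rng(4).choice(list("ACGT"), size=4096))
img = dna_spectrogram(seq)                # display with, e.g., matplotlib's imshow
print(img.shape)

A coding region with strong period-3 structure, for example, appears as a bright band at frequency 1/3, colored by the relative base content at that frequency.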


Current Opinion in Neurobiology | 2014

Neural circuits as computational dynamical systems.

David Sussillo

Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to addressing this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.


Progress in Neurobiology | 2013

From fixed points to chaos: Three models of delayed discrimination

Omri Barak; David Sussillo; Ranulfo Romo; Misha Tsodyks; L. F. Abbott

Working memory is a crucial component of most cognitive tasks. Its neuronal mechanisms are still unclear despite intensive experimental and theoretical explorations. Most theoretical models of working memory assume both time-invariant neural representations and precise connectivity schemes based on the tuning properties of network neurons. A different, more recent class of models assumes randomly connected neurons that have no tuning to any particular task, and bases task performance purely on adjustment of network readout. Intermediate between these schemes are networks that start out random but are trained by a learning scheme. Experimental studies of a delayed vibrotactile discrimination task indicate that some of the neurons in prefrontal cortex are persistently tuned to the frequency of a remembered stimulus, but the majority exhibit more complex relationships to the stimulus that vary considerably across time. We compare three models, ranging from a highly organized line attractor model to a randomly connected network with chaotic activity, with data recorded during this task. The random network does a surprisingly good job of both performing the task and matching certain aspects of the data. The intermediate model, in which an initially random network is partially trained to perform the working memory task by tuning its recurrent and readout connections, provides a better description, although none of the models matches all features of the data. Our results suggest that prefrontal networks may begin in a random state relative to the task and initially rely on modified readout for task performance. With further training, however, more tuned neurons with less time-varying responses should emerge as the networks become more structured.

Collaboration


Dive into David Sussillo's collaborations.

Top Co-Authors

Stephen I. Ryu | Palo Alto Medical Foundation
Mark M. Churchland | Columbia University Medical Center
Matthew T. Kaufman | Cold Spring Harbor Laboratory