Publications


Featured research published by Joel Zylberberg.


PLOS Computational Biology | 2011

A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields

Joel Zylberberg; Jason Timothy Murphy; Michael R. DeWeese

Sparse coding algorithms trained on natural images can accurately predict the features that excite visual cortical neurons, but it is not known whether such codes can be learned using biologically realistic plasticity rules. We have developed a biophysically motivated spiking network, relying solely on synaptically local information, that can predict the full diversity of V1 simple cell receptive field shapes when trained on natural images. This represents the first demonstration that sparse coding principles, operating within the constraints imposed by cortical architecture, can successfully reproduce these receptive fields. We further prove, mathematically, that sparseness and decorrelation are the key ingredients that allow for synaptically local plasticity rules to optimize a cooperative, linear generative image model formed by the neural representation. Finally, we discuss several interesting emergent properties of our network, with the intent of bridging the gap between theoretical and experimental studies of visual cortex.
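
As a loose illustration of what "synaptically local" means here (each synapse updating from only its own pre- and postsynaptic activity), the sketch below pairs a Hebbian feedforward rule with a per-neuron homeostatic threshold. It is a minimal, non-spiking caricature run on random data, not the authors' model; the rule details, learning rates, and target rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 64, 32    # e.g. 8x8 image patches feeding 32 cells
W = 0.1 * rng.standard_normal((n_neurons, n_inputs))   # feedforward weights
theta = np.ones(n_neurons)      # adaptive firing thresholds
p_target = 0.05                 # target mean activity (assumed value)
lr_w, lr_theta = 1e-2, 1e-2

for step in range(5000):
    x = rng.standard_normal(n_inputs)        # stand-in for a whitened image patch
    y = (W @ x - theta > 0).astype(float)    # binary "spike" response

    # Hebbian term plus a local decay: each synapse uses only its own
    # presynaptic activity, postsynaptic activity, and current weight.
    W += lr_w * (np.outer(y, x) - y[:, None] * W)

    # Homeostasis: each neuron nudges its own threshold toward the target rate.
    theta += lr_theta * (y - p_target)
```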


Physical Review D | 2009

Searching for modified growth patterns with tomographic surveys

Gong-Bo Zhao; Levon Pogosian; Alessandra Silvestri; Joel Zylberberg

In alternative theories of gravity, designed to produce cosmic acceleration at the current epoch, the growth of large scale structure can be modified. We study the potential of upcoming and future tomographic surveys such as Dark Energy Survey (DES) and Large Synoptic Survey Telescope (LSST), with the aid of cosmic microwave background (CMB) and supernovae data, to detect departures from the growth of cosmic structure expected within general relativity. We employ parametric forms to quantify the potential time- and scale-dependent variation of the effective gravitational constant and the differences between the two Newtonian potentials. We then apply the Fisher matrix technique to forecast the errors on the modified growth parameters from galaxy clustering, weak lensing, CMB, and their cross correlations across multiple photometric redshift bins. We find that even with conservative assumptions about the data, DES will produce nontrivial constraints on modified growth and that LSST will do significantly better.
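
For Gaussian data, the Fisher matrix technique reduces to F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j), with marginalized 1-sigma forecasts read off the diagonal of F^{-1}. A generic toy version follows; the derivative matrix and covariance below are placeholders, not the paper's survey specifications.

```python
import numpy as np

def fisher_matrix(dmu, cov):
    """F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j) for Gaussian data."""
    return dmu @ np.linalg.solve(cov, dmu.T)

# Toy setup: 100 observables, 3 parameters (placeholder derivatives/covariance).
rng = np.random.default_rng(1)
dmu = rng.standard_normal((3, 100))     # d(observable)/d(parameter)
cov = np.diag(0.1 + rng.random(100))    # diagonal data covariance

F = fisher_matrix(dmu, cov)
print(np.sqrt(np.diag(np.linalg.inv(F))))   # marginalized 1-sigma forecasts
```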


The Journal of Neuroscience | 2013

Inhibitory Interneurons Decorrelate Excitatory Cells to Drive Sparse Code Formation in a Spiking Model of V1

Paul D. King; Joel Zylberberg; Michael R. DeWeese

Sparse coding models of natural scenes can account for several physiological properties of primary visual cortex (V1), including the shapes of simple cell receptive fields (RFs) and the highly kurtotic firing rates of V1 neurons. Current spiking network models of pattern learning and sparse coding require direct inhibitory connections between the excitatory simple cells, in conflict with the physiological distinction between excitatory (glutamatergic) and inhibitory (GABAergic) neurons (Dale's Law). At the same time, the computational role of inhibitory neurons in cortical microcircuit function has yet to be fully explained. Here we show that adding a separate population of inhibitory neurons to a spiking model of V1 provides conformance to Dale's Law, proposes a computational role for at least one class of interneurons, and accounts for certain observed physiological properties in V1. When trained on natural images, this excitatory–inhibitory spiking circuit learns a sparse code with Gabor-like RFs as found in V1 using only local synaptic plasticity rules. The inhibitory neurons enable sparse code formation by suppressing predictable spikes, which actively decorrelates the excitatory population. The model predicts that only a small number of inhibitory cells is required relative to excitatory cells and that excitatory and inhibitory input should be correlated, in agreement with experimental findings in visual cortex. We also introduce a novel local learning rule that measures stimulus-dependent correlations between neurons to support “explaining away” mechanisms in neural coding.
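
The decorrelating role ascribed to the inhibitory cells is closely related to Földiák-style anti-Hebbian plasticity, in which inhibition strengthens between cells that fire together more often than chance. A hypothetical sketch of such a rule (the learning rate and target rate are assumptions, not the paper's exact rule):

```python
import numpy as np

def antihebbian_update(W_inh, y, lr=1e-2, p=0.05):
    """Strengthen inhibition between cells that are co-active above chance.
    y is a binary activity vector; p is the target firing probability, so
    p**2 is the chance level for a pair. Both values are assumed, not fitted."""
    W_inh = W_inh + lr * (np.outer(y, y) - p ** 2)
    W_inh = np.clip(W_inh, 0.0, None)   # inhibitory weights stay non-negative
    np.fill_diagonal(W_inh, 0.0)        # no self-inhibition
    return W_inh
```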


PLOS Computational Biology | 2014

The sign rule and beyond: boundary effects, flexibility, and noise correlations in neural population codes

Yu Hu; Joel Zylberberg; Eric Shea-Brown

Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with a rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have the opposite sign to their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity; the same generality applies to our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.
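
For intuition, the linear Fisher information of a population with tuning-curve slopes f' and noise covariance Sigma is I = f'^T Sigma^{-1} f'. A quick numerical check of the sign rule for a similarly tuned pair, with arbitrary slope and correlation values:

```python
import numpy as np

def linear_fisher_info(fprime, sigma):
    """I = f'^T Sigma^{-1} f' for a population with tuning slopes f'."""
    return fprime @ np.linalg.solve(sigma, fprime)

fprime = np.array([1.0, 0.8])    # similarly tuned pair: positive signal correlation

for rho in (-0.3, 0.0, 0.3):     # noise correlation opposite / zero / same sign
    sigma = np.array([[1.0, rho], [rho, 1.0]])
    print(rho, linear_fisher_info(fprime, sigma))
# The negative noise correlation (sign opposite the signal correlation) gives
# the most information, as the sign rule predicts.
```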


Physical Review E | 2015

Input nonlinearities can shape beyond-pairwise correlations and improve information transmission by neural populations

Joel Zylberberg; Eric Shea-Brown

While recent recordings from neural populations show beyond-pairwise, or higher-order, correlations (HOC), we have little understanding of how HOC arise from network interactions and of how they impact encoded information. Here, we show that input nonlinearities imply HOC in spin-glass-type statistical models. We then discuss one such model with parametrized pairwise- and higher-order interactions, revealing conditions under which beyond-pairwise interactions increase the mutual information between a given stimulus type and the population responses. For jointly Gaussian stimuli, coding performance is improved by shaping output HOC only when neural firing rates are constrained to be low. For stimuli with skewed probability distributions (like natural image luminances), performance improves for all firing rates. Our work suggests surprising connections between nonlinear integration of neural inputs, stimulus statistics, and normative theories of population coding. Moreover, it suggests that the inclusion of beyond-pairwise interactions could improve the performance of Boltzmann machines for machine learning and signal processing applications.
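
In a spin-glass-type maximum-entropy model, beyond-pairwise structure enters as explicit triplet terms: P(x) is proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j + sum_{i<j<k} K_ijk x_i x_j x_k). For a handful of binary neurons the distribution can be enumerated exactly; this small-N illustration uses arbitrary, uniform couplings rather than the paper's fitted parameters.

```python
import numpy as np
from itertools import product, combinations

n = 5
h = -1.0 * np.ones(n)          # per-neuron biases (arbitrary values)
J = 0.2                        # uniform pairwise coupling (arbitrary)

states = np.array(list(product([0, 1], repeat=n)), dtype=float)

def log_weight(x, K):
    """Log of the unnormalized probability for binary pattern x."""
    pair = J * sum(x[i] * x[j] for i, j in combinations(range(n), 2))
    trip = K * sum(x[i] * x[j] * x[k] for i, j, k in combinations(range(n), 3))
    return x @ h + pair + trip

for K in (0.0, 0.1):           # pairwise-only model vs. added triplet coupling
    logw = np.array([log_weight(x, K) for x in states])
    p = np.exp(logw)
    p /= p.sum()
    print(f"K={K}: P(all silent)={p[0]:.3f}, "
          f"mean spike count={p @ states.sum(axis=1):.3f}")
```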


Frontiers in Computational Neuroscience | 2015

Triplet correlations among similarly tuned cells impact population coding

Natasha A. Cayco-Gajic; Joel Zylberberg; Eric Shea-Brown

Which statistical features of spiking activity matter for how stimuli are encoded in neural populations? A vast body of work has explored how firing rates in individual cells and correlations in the spikes of cell pairs impact coding. Recent experiments have shown evidence for the existence of higher-order spiking correlations, which describe simultaneous firing in triplets and larger ensembles of cells; however, little is known about their impact on encoded stimulus information. Here, we take a first step toward closing this gap. We vary triplet correlations in small (approximately 10 cell) neural populations while keeping single cell and pairwise statistics fixed at typically reported values. This connection with empirically observed lower-order statistics is important, as it places strong constraints on the level of triplet correlations that can occur. For each value of triplet correlations, we estimate the performance of the neural population on a two-stimulus discrimination task. We find that the allowed changes in the level of triplet correlations can significantly enhance coding, in particular if triplet correlations differ for the two stimuli. In this scenario, triplet correlations must be included in order to accurately quantify the functionality of neural populations. When both stimuli elicit similar triplet correlations, however, pairwise models provide relatively accurate descriptions of coding accuracy. We explain our findings geometrically via the skew that triplet correlations induce in population-wide distributions of neural responses. Finally, we calculate how many samples are necessary to accurately measure spiking correlations of this type, providing an estimate of the necessary recording times in future experiments.
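
The closing sample-size question can be approached by brute force: simulate many experiments and measure the spread of the empirical triplet moment across them. A back-of-the-envelope version under the simplifying (assumed) model of independent Bernoulli neurons:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_trials = 0.1, 10_000      # firing probability; trials per simulated experiment

# Spread of the empirical triplet moment <x1 x2 x3> over many simulated
# experiments with independent cells (true value: p**3 = 0.001).
estimates = []
for _ in range(1000):
    x = rng.random((n_trials, 3)) < p
    estimates.append(np.mean(x[:, 0] & x[:, 1] & x[:, 2]))

print(np.std(estimates))       # compare to p**3 to judge relative precision
```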


PLOS Computational Biology | 2013

Sparse coding models can exhibit decreasing sparseness while learning sparse codes for natural images

Joel Zylberberg; Michael R. DeWeese

The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
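
Developmental trends like these are typically tracked with a scalar sparseness index; one common choice (not necessarily the measure used in this paper) is the Treves-Rolls index, which equals 1/N for a one-hot response vector and 1 for a uniform one, so lower values mean sparser activity:

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls index: (mean r)^2 / mean(r^2). Equals 1/N for a one-hot
    response and 1 for a uniform one; lower values indicate sparser activity."""
    r = np.asarray(rates, dtype=float)
    return r.mean() ** 2 / np.mean(r ** 2)

print(treves_rolls_sparseness([0.0, 0.0, 0.0, 5.0]))   # 0.25 = 1/N (sparse)
print(treves_rolls_sparseness([1.0, 1.0, 1.0, 1.0]))   # 1.0 (dense)
```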


PLOS Computational Biology | 2017

Robust information propagation through noisy neural circuits

Joel Zylberberg; Alexandre Pouget; P.E. Latham; Eric Shea-Brown

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise.
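
The propagation question can be posed concretely: how much linear discriminability survives a rectifying nonlinearity plus downstream noise? A Monte Carlo toy under assumed tuning, covariance, and noise parameters (none of which come from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def lda_info(a, b):
    """Squared linear discriminability d'^2 = dmu^T C^{-1} dmu between two
    clouds of population responses (a proxy for linear Fisher information)."""
    dmu = a.mean(axis=0) - b.mean(axis=0)
    c = 0.5 * (np.cov(a.T) + np.cov(b.T))
    return dmu @ np.linalg.solve(c, dmu)

n, trials = 10, 20_000
f0, f1 = np.zeros(n), 0.2 * np.ones(n)        # mean responses to stimuli s0, s1
cov = 0.5 * np.eye(n) + 0.1                   # upstream noise covariance
L = np.linalg.cholesky(cov)

r0 = f0 + rng.standard_normal((trials, n)) @ L.T
r1 = f1 + rng.standard_normal((trials, n)) @ L.T

# Downstream stage: rectification plus additional independent noise.
g = lambda r: np.clip(r, 0, None) + 0.3 * rng.standard_normal(r.shape)

print("upstream:  ", lda_info(r0, r1))
print("downstream:", lda_info(g(r0), g(r1)))  # information lost in transmission
```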


Frontiers in Computational Neuroscience | 2011

How should prey animals respond to uncertain threats?

Joel Zylberberg; Michael R. DeWeese

A prey animal surveying its environment must decide whether or not a dangerous predator is present. If there is, it may flee. Flight has an associated cost, so the animal should not flee if there is no danger. However, the prey animal cannot know the state of its environment with certainty, and is thus bound to make some errors. We formulate a probabilistic automaton model of a prey animal's life and use it to compute the optimal escape decision strategy, subject to the animal's uncertainty. The uncertainty is a major factor in determining the decision strategy: only in the presence of uncertainty do economic factors (like mating opportunities lost due to flight) influence the decision. We performed computer simulations and found that in silico populations of animals subject to predation evolve to display the strategies predicted by our model, confirming our choice of objective function for our analytic calculations. To the best of our knowledge, this is the first theoretical study of escape decisions to incorporate the effects of uncertainty, and to demonstrate the correctness of the objective function used in the model.
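
With known costs, the optimal policy reduces to an expected-cost comparison: flee whenever the posterior probability of a predator, times the cost of predation, exceeds the cost of flight. A minimal sketch with purely illustrative numbers:

```python
def should_flee(p_predator, cost_flight, cost_predation):
    """Flee iff the expected cost of staying exceeds the sure cost of fleeing.
    p_predator is the animal's posterior belief that a predator is present."""
    return p_predator * cost_predation > cost_flight

# With flight 20x cheaper than being caught, the threshold sits at a 5% posterior:
print(should_flee(0.04, cost_flight=1.0, cost_predation=20.0))  # False: stay
print(should_flee(0.06, cost_flight=1.0, cost_predation=20.0))  # True: flee
```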


BMC Neuroscience | 2014

When does recurrent connectivity improve neural population coding?

Joel Zylberberg; Eric Shea-Brown

Neural systems contain many cells, and an important problem is to understand if and how those neurons work together to form a functioning system. In sensory neuroscience -- which is our focus -- this function is to encode information about a stimulus so that it can be transmitted to other brain areas. An experimentally accessible, and hence popular, way to assess collective behavior is to measure the trial-to-trial covariability in the responses of multiple neurons over repeats of the same stimulus, called noise correlations. Repeat presentations of the same stimulus yield different responses on each trial, and that variability is typically correlated across different neurons [1,2] (although, see [3] for one counterexample). In the past two decades, the impact of these noise correlations on population coding has generated great interest. While nothing can add more information about the stimulus than was contained in the inputs (the data processing inequality), noise correlations determine the extent to which noise in the neural system degrades the amount of information that a neural population conveys about a stimulus (for example, see [4-6]). There are two main ways in which these noise correlations can be generated: the cells may receive common (noisy) input from (some of) the same upstream source(s), or the cells may be (recurrently) coupled to one another. At the same time, most work on noise correlations and population coding (with a few notable exceptions, including [7,8]) ignores their mechanistic origins. Interestingly, the few studies that have considered the mechanistic origins of noise correlations [7,8] have concluded that recurrent connectivity tends to hinder population coding, or has little overall effect: even though recurrent coupling can “sharpen” neural tuning curves, this advantage is more than offset by the fact that it generates harmful noise correlations. This raises the question of when -- if ever -- recurrent coupling can improve population coding, and of whether noise correlations with different mechanistic origins have different impacts on coding performance. To address these issues, we are investigating models in which groups of cells with, and without, recurrent coupling are driven by noisy inputs. The cells themselves are then noisy spike generators, and we vary the fraction of shared inputs to the cells (which modifies the noise correlations due to common input) and the inter-neuronal connectivity, and compute the coding capacity of the resultant networks. Preliminary results indicate that, in some cases -- similar to those discussed in [9] -- recurrent coupling can combat the cell-intrinsic variability enough that the population's coding performance is improved, even though the noise correlations generated by that coupling do themselves hinder coding performance. We are in the process of expanding and generalizing this work. Support for this work came from NSF grants DMS-1122106 and CRCNS DMS-1208027, and a Burroughs-Wellcome Fund Career Award at the Scientific Interface to ESB.
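
The first of the two mechanisms, shared noisy input, is straightforward to simulate: mixing a common noise source into each cell's drive produces noise correlations whose strength tracks the shared fraction. A toy generator under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def correlated_drives(n_cells, n_trials, shared_frac):
    """Each cell's input = sqrt(f) * common noise + sqrt(1 - f) * private noise,
    which yields a pairwise noise correlation of approximately f."""
    common = rng.standard_normal((n_trials, 1))
    private = rng.standard_normal((n_trials, n_cells))
    return np.sqrt(shared_frac) * common + np.sqrt(1 - shared_frac) * private

for f in (0.0, 0.2, 0.5):
    x = correlated_drives(2, 50_000, f)
    print(f, np.corrcoef(x.T)[0, 1])   # empirical correlation tracks f
```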

Collaboration


Dive into Joel Zylberberg's collaborations.

Top Co-Authors

Fred Rieke, University of Washington
Jon Cafaro, University of Washington
Alessandra Silvestri, Massachusetts Institute of Technology
Sarah Marzen, University of California
Yu Hu, University of Washington