Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Niru Maheswaranathan is active.

Publication


Featured research published by Niru Maheswaranathan.


Journal of Open Source Software | 2017

Pyret: A Python package for analysis of neurophysiology data

Benjamin Naecker; Niru Maheswaranathan; Surya Ganguli; Stephen A. Baccus

The pyret package contains tools for analyzing neural electrophysiology data. It focuses on applications in sensory neuroscience, broadly construed as any experiment in which one would like to characterize neural responses to a sensory stimulus. Pyret contains methods for manipulating spike trains (e.g. binning and smoothing), pre-processing experimental stimuli (e.g. resampling), computing spike-triggered averages and ensembles (Schwartz et al. 2006), estimating linear-nonlinear cascade models to predict neural responses to different stimuli (Chichilnisky 2001), part of which follows the scikit-learn API (Pedregosa et al. 2011), as well as a suite of visualization tools for all the above. We designed pyret to be simple, robust, and efficient with broad applicability across a range of sensory neuroscience analyses.
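To make the spike-triggered average (Schwartz et al. 2006) concrete, here is a minimal numpy sketch of that computation; the function name and array layout are illustrative assumptions, not pyret's actual API.

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, filter_length):
    """Illustrative STA: average the stimulus history preceding each spike.

    stimulus:      array of shape (T, ...), the stimulus at each time bin
    spikes:        array of shape (T,), spike counts per time bin
    filter_length: number of time bins of stimulus history to average over
    (All names and shapes are assumptions for this sketch.)
    """
    sta = np.zeros((filter_length,) + stimulus.shape[1:])
    total_spikes = 0
    # Accumulate the stimulus snippet preceding each spike, weighted by count.
    for t in range(filter_length, stimulus.shape[0]):
        if spikes[t] > 0:
            sta += spikes[t] * stimulus[t - filter_length:t]
            total_spikes += spikes[t]
    return sta / max(total_spikes, 1)
```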


bioRxiv | 2018

Deep learning models reveal internal structure and diverse computations in the retina under natural scenes

Niru Maheswaranathan; Lane T. McIntosh; David B. Kastner; Josh Melander; Luke Brezovec; Aran Nayebi; Julia Wang; Surya Ganguli; Stephen A. Baccus

Understanding how the visual system encodes natural scenes is a fundamental goal of sensory neuroscience. We show here that a three-layer network model predicts the retinal response to natural scenes with an accuracy nearing the fundamental limits of predictability. The model’s internal structure is interpretable, in that model units are highly correlated with interneurons recorded separately and not used to fit the model. We further show the ethological relevance to natural visual processing of a diverse set of phenomena, including complex motion encoding, adaptation, and predictive coding. Our analysis uncovers a fast timescale of visual processing that is inaccessible directly from experimental data, showing unexpectedly that ganglion cells signal in distinct modes by rapidly (< 0.1 s) switching their selectivity for direction of motion, orientation, location, and the sign of intensity. A new approach that decomposes ganglion cell responses into the contributions of interneurons reveals how the latent effects of parallel retinal circuits generate the response to any possible stimulus. These results reveal extremely flexible and rapid dynamics of the retinal code for natural visual stimuli, explaining the need for a large set of interneuron pathways to generate the dynamic neural code for natural scenes.

The normal function of the retina is to convey information about natural visual images. It is this visual environment that has driven evolution, and that is clinically relevant. Yet nearly all of our understanding of the neural computations, biological function, and circuit mechanisms of the retina comes in the context of artificially structured stimuli such as flashing spots, moving bars, and white noise. It is fundamentally unclear how these artificial stimuli are related to the circuit processes engaged under natural stimuli. A key barrier is the lack of methods for analyzing retinal responses to natural images. We addressed both of these issues by applying convolutional neural network (CNN) models to capture retinal responses to natural scenes. We find that CNN models predict natural scene responses with high accuracy, achieving performance close to the fundamental limits of predictability set by intrinsic cellular variability. Furthermore, individual internal units of the model are highly correlated with actual retinal interneuron responses that were recorded separately and never presented to the model during training. Finally, we find that models fit only to natural scenes, but not to white noise, reproduce a range of phenomena previously described using distinct artificial stimuli, including frequency doubling, latency encoding, motion anticipation, fast contrast adaptation, synchronized responses to motion reversal, and object motion sensitivity. Further examination of the model revealed extremely rapid context dependence of retinal feature sensitivity under natural scenes, using an analysis not feasible from direct examination of retinal responses. Overall, these results show that the nonlinear retinal processes engaged by artificial stimuli are also engaged in, and relevant to, natural visual processing, and that CNN models form a powerful and unifying tool for studying how sensory circuitry produces computations in a natural context.
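As a rough illustration of the kind of three-layer CNN described above, here is a minimal PyTorch sketch mapping a stack of recent movie frames to ganglion cell firing rates; the filter counts, kernel sizes, 40-frame history, and 50x50 stimulus size are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Sketch of a three-layer CNN from spatiotemporal stimulus to firing
    rates. Layer sizes here are illustrative assumptions."""

    def __init__(self, n_cells=5, history=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(history, 8, kernel_size=15),  # layer 1: spatiotemporal subunits
            nn.Softplus(),
            nn.Conv2d(8, 16, kernel_size=9),        # layer 2: model "interneurons"
            nn.Softplus(),
        )
        self.readout = nn.LazyLinear(n_cells)       # layer 3: dense readout per cell
        self.rate = nn.Softplus()                   # firing rates are non-negative

    def forward(self, movie):                       # movie: (batch, history, H, W)
        z = self.features(movie).flatten(1)
        return self.rate(self.readout(z))

# Example: predict rates for a batch of two 50x50-pixel stimulus clips.
model = RetinaCNN()
rates = model(torch.randn(2, 40, 50, 50))           # -> shape (2, 5)
```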


PLOS Computational Biology | 2018

Inferring hidden structure in multilayered neural circuits

Niru Maheswaranathan; David B. Kastner; Stephen A. Baccus; Surya Ganguli

A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we attempt to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit, using cascaded linear-nonlinear (LN-LN) models. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. We apply this framework to retinal ganglion cell processing, learning LN-LN models of retinal circuitry consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits have consistently high thresholds, suppressing all but a small fraction of inputs, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model’s parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a Boolean OR function applied to bipolar cell feature detectors. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical nonlinear models as well as efficiently compute widely used descriptive statistics such as the spike-triggered average (STA) and covariance (STC) for high-dimensional stimuli. This general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data.
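To make the LN-LN cascade concrete, here is a minimal numpy sketch of its forward pass; the shared subunit threshold and the softplus output nonlinearity are simplifying assumptions for illustration, not the fitted model.

```python
import numpy as np

def lnln_response(stimulus, subunit_filters, subunit_weights, threshold=1.0):
    """Minimal LN-LN cascade sketch (all parameter shapes are assumptions):

    stimulus:        (T, D)  stimulus history vectors, one per time bin
    subunit_filters: (K, D)  linear filters of K hidden bipolar-like subunits
    subunit_weights: (K,)    weights pooling subunits onto the ganglion cell
    """
    # First LN stage: filter the stimulus, then apply a high-threshold
    # rectification (the paper finds subunit thresholds are consistently
    # high, so most inputs are suppressed and subunit activity is sparse).
    drive = stimulus @ subunit_filters.T             # (T, K)
    subunit_output = np.maximum(drive - threshold, 0.0)

    # Second LN stage: pool subunits and pass through an output nonlinearity
    # (a softplus here, standing in for the fitted spiking nonlinearity).
    pooled = subunit_output @ subunit_weights        # (T,)
    return np.log1p(np.exp(pooled))                  # predicted rate per bin
```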


International Conference on Machine Learning | 2015

Deep Unsupervised Learning using Nonequilibrium Thermodynamics

Jascha Sohl-Dickstein; Eric A. Weiss; Niru Maheswaranathan; Surya Ganguli


Neuron | 2017

A Multiplexed, Heterogeneous, and Adaptive Code for Navigation in Medial Entorhinal Cortex

Kiah Hardcastle; Niru Maheswaranathan; Surya Ganguli; Lisa M. Giocomo


Neural Information Processing Systems | 2016

Deep Learning Models of the Retinal Response to Natural Scenes

Lane T. McIntosh; Niru Maheswaranathan; Aran Nayebi; Surya Ganguli; Stephen A. Baccus


Neuron | 2017

Social Control of Hypothalamus-Mediated Male Aggression

Taehong Yang; Cindy F. Yang; M. Delara Chizari; Niru Maheswaranathan; Kenneth J. Burke; Maxim Borius; Sayaka Inoue; Michael C. Chiang; Kevin J. Bender; Surya Ganguli; Nirao M. Shah


International Conference on Machine Learning | 2017

Learned Optimizers that Scale and Generalize

Olga Wichrowska; Niru Maheswaranathan; Matthew W. Hoffman; Sergio Gomez Colmenarejo; Misha Denil; Nando de Freitas; Jascha Sohl-Dickstein


arXiv: Learning | 2018

Learning Unsupervised Learning Rules

Luke Metz; Niru Maheswaranathan; Brian Cheung; Jascha Sohl-Dickstein


arXiv: Neural and Evolutionary Computing | 2018

Guided evolutionary strategies: escaping the curse of dimensionality in random search

Niru Maheswaranathan; Luke Metz; George Tucker; Jascha Sohl-Dickstein

Collaboration


Dive into Niru Maheswaranathan's collaborations.

Top Co-Authors

Brian Cheung

University of California


Cindy F. Yang

University of California
