Publication


Featured research published by Yiyin Zhou.


Vision Research | 2010

Encoding Natural Scenes with Neural Circuits with Random Thresholds

Aurel A. Lazar; Eftychios A. Pnevmatikakis; Yiyin Zhou

We present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The natural scenes are modeled as space-time functions that belong to a space of trigonometric polynomials. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We demonstrate that neural spiking is akin to taking noisy measurements of the stimulus, both for time-varying and space-time-varying stimuli. We formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes and demonstrate that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.
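A minimal one-dimensional sketch of the core idea, not the paper's space-time formulation: spikes from an integrate-and-fire neuron with a random threshold act as noisy linear measurements of a stimulus living in a trigonometric-polynomial space, and the stimulus is recovered by minimizing a regularized least-squares cost. All parameter values, the basis size, and the ridge regularizer are illustrative assumptions.

    import numpy as np

    # Stimulus: a real trigonometric polynomial of order L on [0, T] (assumed values).
    T, L = 1.0, 5
    rng = np.random.default_rng(0)
    c_true = rng.standard_normal(2 * L + 1)

    def basis(t):
        """Real trigonometric basis [1, cos, sin, ...] evaluated at times t."""
        t = np.atleast_1d(t)
        cols = [np.ones_like(t)]
        for l in range(1, L + 1):
            cols += [np.cos(2 * np.pi * l * t / T), np.sin(2 * np.pi * l * t / T)]
        return np.stack(cols, axis=-1)

    def u(t):
        return basis(t) @ c_true

    # Encoding: integrate-and-fire neuron with a Gaussian-jittered threshold.
    bias, kappa, delta, sigma = 8.0, 1.0, 0.02, 0.002
    dt = 1e-4
    v, spikes = 0.0, [0.0]
    thr = delta + sigma * rng.standard_normal()
    for t in np.arange(0.0, T, dt):
        v += dt * (u(t)[0] + bias) / kappa          # integrate the biased stimulus
        if v >= thr:                                 # threshold crossing -> spike, reset
            spikes.append(t)
            v, thr = 0.0, delta + sigma * rng.standard_normal()
    spikes = np.array(spikes)

    # Decoding: the t-transform turns each inter-spike interval into a noisy
    # measurement of the stimulus: the integral of u over [t_k, t_{k+1}] equals
    # kappa * threshold - bias * (t_{k+1} - t_k), with threshold noise.
    tk, tk1 = spikes[:-1], spikes[1:]
    q = kappa * delta - bias * (tk1 - tk)

    def integrate_basis(a, b, n=200):
        """Midpoint-rule integral of each basis function over [a, b]."""
        ts = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
        return basis(ts).sum(axis=0) * (b - a) / n

    Phi = np.array([integrate_basis(a, b) for a, b in zip(tk, tk1)])

    # Ridge-regularized least squares: a finite-dimensional stand-in for the
    # cost-functional minimization described in the abstract.
    lam = 1e-6
    c_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2 * L + 1), Phi.T @ q)
    print("relative error:", np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))

As the threshold jitter sigma grows, the measurement noise grows with it and the recovered coefficients degrade gradually rather than catastrophically, mirroring the graceful degradation reported above.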


Neural Networks | 2012

2012 Special Issue: Massively parallel neural encoding and decoding of visual stimuli

Aurel A. Lazar; Yiyin Zhou

The massively parallel nature of video Time Encoding Machines (TEMs) calls for scalable, massively parallel decoders that are implemented with neural components. The current generation of decoding algorithms is based on computing the pseudo-inverse of a matrix and does not satisfy these requirements. Here we consider video TEMs with an architecture built using Gabor receptive fields and a population of Integrate-and-Fire neurons. We show how to build a scalable architecture for video Time Decoding Machines using recurrent neural networks. Furthermore, we extend our architecture to handle the reconstruction of visual stimuli encoded with massively parallel video TEMs having neurons with random thresholds. Finally, we discuss in detail our algorithms and demonstrate their scalability and performance on a large scale GPU cluster.
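The pseudo-inverse referred to above can be replaced by a recurrent linear network whose fixed point is the same solution. The sketch below is not the paper's architecture; it only illustrates the principle on a random, rank-deficient positive semidefinite stand-in for the decoder's Gram matrix, with all sizes and iteration counts chosen arbitrarily.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in for the (typically poorly conditioned) Gram matrix G and the
    # measurement vector q arising in video time decoding.
    n, r = 200, 150
    A = rng.standard_normal((n, r))
    G = A @ A.T                        # symmetric positive semidefinite, rank r < n
    q = G @ rng.standard_normal(n)     # consistent measurements (q lies in range(G))

    # Reference: explicit pseudo-inversion, the step the recurrent network avoids.
    c_pinv = np.linalg.pinv(G) @ q

    # Recurrent-network view: forward-Euler simulation of the linear dynamics
    # dc/dt = -G c + q. Started from c = 0, the state stays in range(G) and
    # settles at the minimum-norm solution of G c = q, i.e. the pinv solution.
    c = np.zeros(n)
    step = 1.0 / np.linalg.norm(G, 2)  # Euler step below 2 / lambda_max(G) for stability
    for _ in range(10_000):
        c += step * (q - G @ c)

    print("relative difference from pinv solution:",
          np.linalg.norm(c - c_pinv) / np.linalg.norm(c_pinv))

Because each update uses only a matrix-vector product, it parallelizes naturally across the rows of G, which is what makes implementations of such decoders on a GPU cluster attractive.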


Proceedings of the IEEE | 2014

Reconstructing Natural Visual Scenes From Spike Times

Aurel A. Lazar; Yiyin Zhou

In this paper, we investigate neural circuit architectures encoding natural visual scenes with neuron models consisting of dendritic stimulus processors (DSPs) in cascade with biophysical spike generators (BSGs). DSPs serve as functional models of stimulus processing up to and including the neuron's active dendritic tree. BSGs model spike generation at the axon hillock level, where neurons respond to aggregated synaptic currents. The highly nonlinear behavior of BSGs calls for novel methods of input/output (I/O) analysis of neural encoding circuits and novel decoding algorithms for signal recovery. On the encoding side we characterize the BSG I/O with a phase response curve (PRC) manifold and interpret neural encoding as generalized sampling. We provide a decoding algorithm that recovers visual stimuli encoded by a neural circuit with intrinsic noise sources. In the absence of noise, we give conditions for perfect reconstruction of natural visual scenes. We extend the architecture to encompass neuron models with on-off BSGs with self- and cross-feedback. With the help of the PRC manifold, decoding is shown to be tractable even for a wide signal dynamic range. Consequently, bias currents that were essential in the encoding process can largely be reduced or eliminated. Finally, we present examples of massively parallel encoding and decoding of natural visual scenes on a cluster of graphical processing units (GPUs). We evaluate the signal reconstruction under different noise conditions and investigate the performance of signal recovery in the Nyquist region and for different temporal bandwidths.
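A compressed version of the generalized-sampling interpretation sketched above, in our own notation and sign convention rather than the paper's: to first order, the phase response curve converts the dendritic output current into inter-spike-interval perturbations,

\[
t_{k+1} - t_k \;\approx\; T \;+\; \int_{t_k}^{t_{k+1}} \varphi(s - t_k)\, v(s)\, ds ,
\]

where \(T\) is the BSG's unperturbed inter-spike interval, \(\varphi\) its phase response curve and \(v\) the aggregate DSP output. Rearranging puts each spike pair into inner-product form,

\[
q_k = \langle v, \phi_k \rangle , \qquad q_k = t_{k+1} - t_k - T , \qquad \phi_k(s) = \varphi(s - t_k)\,\mathbf{1}_{[t_k,\, t_{k+1}]}(s),
\]

so the spike train provides generalized samples of \(v\), and stimulus recovery reduces to the same linear-measurement structure exploited by the decoders above.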


Neural Networks | 2015

Massively parallel neural circuits for stereoscopic color vision

Aurel A. Lazar; Yevgeniy B. Slutskiy; Yiyin Zhou

Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus.
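A schematic of the mixing step discussed above, in illustrative notation that is not the paper's: each neuron \(j\) filters a weighted spatiotemporal combination of all color channels before spiking,

\[
v_j(t) \;=\; \sum_{c \,\in\, \{R,G,B\}} \iiint D_j^{c}(x, y, t - s)\, u^{c}(x, y, s)\, dx\, dy\, ds ,
\]

where \(u^{c}\) are the color components of the stimulus and \(D_j^{c}\) is the color-selective spatiotemporal receptive field of neuron \(j\); the spikes of neuron \(j\) are then generalized samples of \(v_j\). Because every measurement couples all three components, the Color Video TDM must reconstruct \((u^{R}, u^{G}, u^{B})\) jointly across the population rather than channel by channel, which is precisely the demixing problem posed above.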


Frontiers in Computational Neuroscience | 2014

Volterra dendritic stimulus processors and biophysical spike generators with intrinsic noise sources

Aurel A. Lazar; Yiyin Zhou

We consider a class of neural circuit models with internal noise sources arising in sensory systems. The basic neuron model in these circuits consists of a dendritic stimulus processor (DSP) cascaded with a biophysical spike generator (BSG). The dendritic stimulus processor is modeled as a set of nonlinear operators that are assumed to have a Volterra series representation. Biophysical point neuron models, such as the Hodgkin-Huxley neuron, are used to model the spike generator. We address the question of how intrinsic noise sources affect the precision in encoding and decoding of sensory stimuli and in the functional identification of sensory circuits. We investigate two intrinsic noise sources arising (i) in the active dendritic trees underlying the DSPs, and (ii) in the ion channels of the BSGs. Noise in dendritic stimulus processing arises from a combined effect of variability in synaptic transmission and dendritic interactions. Channel noise arises in the BSGs due to fluctuations in the number of active ion channels. Using a stochastic differential equations formalism, we show that encoding with a neuron model consisting of a nonlinear DSP cascaded with a BSG with intrinsic noise sources can be treated as generalized sampling with noisy measurements. For single-input multi-output neural circuit models with feedforward, feedback and cross-feedback DSPs cascaded with BSGs, we theoretically analyze the effect of noise sources on stimulus decoding. Building on a key duality property, the effect of noise parameters on the precision of the functional identification of the complete neural circuit with DSP/BSG neuron models is given. We demonstrate through extensive simulations the effects of noise on encoding stimuli with circuits that include neuron models that are akin to those commonly seen in sensory systems, e.g., complex cells in V1.
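For concreteness, here is a second-order truncation of the Volterra series representation assumed for the DSP (our notation; higher orders are handled analogously):

\[
v(t) \;=\; \int h_1(s)\, u(t - s)\, ds \;+\; \iint h_2(s_1, s_2)\, u(t - s_1)\, u(t - s_2)\, ds_1\, ds_2 ,
\]

where \(u\) is the sensory stimulus, \(h_1\) and \(h_2\) are the first- and second-order Volterra kernels of the dendritic tree, and \(v\) is the aggregate current driving the BSG. In this sketch, dendritic noise can be thought of as a perturbation of \(v\), while channel noise enters as stochastic terms in the BSG's conductance dynamics; both ultimately appear as noise on the generalized samples taken by the spike generator.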


bioRxiv | 2016

The Fruit Fly Brain Observatory: from structure to function

Nikul H. Ukani; Chung-Heng Yeh; Adam Tomkins; Yiyin Zhou; Dorian Florescu; Carlos Luna Ortiz; Yu-Chi Huang; Cheng-Te Wang; Paul Richmond; Chung-Chuan Lo; Daniel Coca; Ann-Shyn Chiang; Aurel A. Lazar

The Fruit Fly Brain Observatory (FFBO) is a collaborative effort between experimentalists, theorists and computational neuroscientists at Columbia University, National Tsing Hua University and Sheffield University with the goal to (i) create an open platform for the emulation and biological validation of fruit fly brain models in health and disease, (ii) standardize tools and methods for graphical rendering, representation and manipulation of brain circuits, (iii) standardize tools for representation of fruit fly brain data and its abstractions and support for natural language queries, and (iv) create a focus for the neuroscience community with interests in the fruit fly brain and encourage the sharing of fruit fly brain structural data and executable code worldwide. NeuroNLP and NeuroGFX, two key FFBO applications, aim to address two major challenges, respectively: (i) seamlessly integrate structural and genetic data from multiple sources that can be intuitively queried, effectively visualized and extensively manipulated, and (ii) devise executable brain circuit models anchored in structural data for understanding and developing novel hypotheses about brain function. NeuroNLP enables researchers to use plain English (or other languages) to probe biological data that are integrated into a novel database system, called NeuroArch, that we developed for integrating biological and abstract data models of the fruit fly brain. With powerful 3D graphical visualization, NeuroNLP presents a highly accessible portal for fruit fly brain data. NeuroGFX provides users with highly intuitive tools to execute neural circuit models with Neurokernel, an open-source platform for emulating the fruit fly brain, with full data support from the NeuroArch database and visualization support from an interactive graphical interface. Brain circuits can be configured with high flexibility and investigated on multiple levels, e.g., whole brain, neuropil, and local circuit levels. The FFBO is publicly available and accessible at http://fruitflybrain.org from any modern web browser, including those running on smartphones.


international symposium on neural networks | 2011

Realizing Video Time Decoding Machines with recurrent neural networks

Aurel A. Lazar; Yiyin Zhou

Video Time Decoding Machines faithfully reconstruct bandlimited stimuli encoded with Video Time Encoding Machines. The key step in recovery calls for the pseudo-inversion of a typically poorly conditioned, large-scale matrix. We investigate the realization of time decoders employing only neural components. We show that Video Time Decoding Machines can be realized with recurrent neural networks, describe their architecture and evaluate their performance. We provide the first demonstration of recovery of natural and synthetic video scenes encoded in the spike domain with decoders realized with only neural components. The recovery performance of these decoders is indistinguishable from that of the matrix pseudo-inversion method.


bioRxiv | 2016

NeuroGFX: a graphical functional explorer for fruit fly brain circuits

Chung-Heng Yeh; Yiyin Zhou; Nikul H. Ukani; Aurel A. Lazar

Recently, multiple focused efforts have resulted in a substantial increase in the availability of connectome data in the fruit fly brain. Elucidating neural circuit function from such structural data calls for a scalable computational modeling methodology. We propose such a methodology that includes (i) a brain emulation engine, with an architecture that can tackle the complexity of whole brain modeling, (ii) a database that supports tight integration of biological and modeling data along with support for domain-specific queries and circuit transformations, and (iii) a graphical interface that allows for total flexibility in configuring neural circuits and visualizing run-time results, both anchored on model abstractions closely reflecting biological structure. Towards the realization of such a methodology, we have developed NeuroGFX and integrated it into the architecture of the Fruit Fly Brain Observatory (http://fruitflybrain.org). The computational infrastructure in NeuroGFX is provided by Neurokernel, an open source platform for the emulation of the fruit fly brain, and NeuroArch, a database for querying and executing fruit fly brain circuits. The integration of the two enables the algorithmic construction, manipulation, and revision of executable circuits at multiple levels of abstraction of the same model organism. The power of this computational infrastructure can be leveraged through an intuitive graphical interface that allows visualizing execution results in the context of biological structure. This provides an environment where computational researchers can present configurable, executable neural circuits, and experimental scientists can easily explore circuit structure and function, ultimately leading to biological validation. With these capabilities, NeuroGFX enables the exploration of function from circuit structure at the whole brain, neuropil, and local circuit levels of abstraction. By allowing independently developed models to be integrated at the architectural level, NeuroGFX provides an open, plug-and-play, collaborative environment for whole brain computational modeling of the fruit fly.


bioRxiv | 2016

NeuroNLP: a natural language portal for aggregated fruit fly brain data

Nikul H. Ukani; Adam Tomkins; Chung-Heng Yeh; Wesley Bruning; Allison L Fenichel; Yiyin Zhou; Yu-Chi Huang; Dorian Florescu; Carlos Luna Ortiz; Paul Richmond; Chung-Chuan Lo; Daniel Coca; Ann-Shyn Chiang; Aurel A. Lazar

NeuroNLP is a key application on the Fruit Fly Brain Observatory platform (FFBO, http://fruitflybrain.org) that provides a modern web-based portal for navigating fruit fly brain circuit data. Increases in the availability and scale of fruit fly connectome data demand new, scalable and accessible methods to facilitate investigation into the functions of the complex circuits being uncovered. NeuroNLP enables in-depth exploration and investigation of the structure of brain circuits, using intuitive natural language queries that are capable of revealing latent structure and information obscured by expansive yet independent data sources. NeuroNLP is built on top of a database system called NeuroArch that codifies knowledge about fruit fly brain circuits spanning multiple sources. Users can probe biological circuits in the NeuroArch database with plain English queries, such as “show glutamatergic local neurons in the left antennal lobe” and “show neurons with dendrites in the left mushroom body and axons in the fan-shaped body”. This simple yet powerful interface replaces the usual, cumbersome checkboxes and dropdown menus prevalent in today’s neurobiological databases. Equipped with powerful 3D visualization, NeuroNLP standardizes tools and methods for graphical rendering, representation, and manipulation of brain circuits, while integrating with existing databases such as FlyCircuit. The user-friendly graphical user interface complements the natural language queries with additional controls for exploring the connectivity of neurons and neural circuits. Designed with an open-source, modular structure, NeuroNLP is highly scalable and extensible to additional databases, can switch between databases, and supports the creation of additional parsers for other languages. By supporting access through a web browser from any modern laptop or smartphone, NeuroNLP significantly increases the accessibility of fruit fly brain data and improves the impact of the data in both scientific and educational exploration.


IEEE Transactions on Molecular, Biological, and Multi-Scale Communications | 2016

Identifying Multisensory Dendritic Stimulus Processors

Aurel A. Lazar; Yiyin Zhou

Functional identification is a key methodology for uncovering the logic of neural information processing in brain circuits. For neural circuits modeling sensory systems as dynamical systems, the complexity of the identification algorithm largely depends on the number of stimuli used. Neurons in these circuits consist of dendritic stimulus processors modeling the signal processing in the dendritic tree and biophysical spike generators modeling the spiking mechanism at the axon hillock. Here, we review the identification of multi-sensory spatio-temporal dendritic stimulus processors that arise in the encoding of auditory scenes, color visual fields, and the mixing of auditory scenes and natural visual fields. We demonstrate the fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. The duality enables us to reconstruct the originally encoded stimuli from all the generated spikes by using the identified neural circuit. The reconstruction leads to a simple and intuitive evaluation of the identified dendritic stimulus processors in the space of stimuli.
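The duality can be stated schematically in one line (our notation, glossing over the spike-generation step): for a linear dendritic stimulus processor with kernel \(h\), each spike-derived measurement is bilinear in the stimulus and the kernel,

\[
q_k \;=\; \int h(s)\, u(t_k - s)\, ds \;=\; \langle u,\, h(t_k - \cdot) \rangle \;=\; \langle h,\, u(t_k - \cdot) \rangle ,
\]

so the same quantities \(q_k\) can be read either as samples of an unknown stimulus \(u\) filtered by a known kernel (decoding) or as samples of an unknown kernel \(h\) probed by known, experimenter-controlled stimuli (identification). Reconstructing stimuli from the spikes of the identified circuit, as described above, is therefore a natural consistency check carried out directly in the space of stimuli.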

Collaboration


Dive into Yiyin Zhou's collaboration.

Top Co-Authors

Adam Tomkins (University of Sheffield)
Daniel Coca (University of Sheffield)