Nikul H. Ukani
Columbia University
Publication
Featured research published by Nikul H. Ukani.
bioRxiv | 2016
Nikul H. Ukani; Chung-Heng Yeh; Adam Tomkins; Yiyin Zhou; Dorian Florescu; Carlos Luna Ortiz; Yu-Chi Huang; Cheng-Te Wang; Paul Richmond; Chung-Chuan Lo; Daniel Coca; Ann-Shyn Chiang; Aurel A. Lazar
The Fruit Fly Brain Observatory (FFBO) is a collaborative effort between experimentalists, theorists and computational neuroscientists at Columbia University, National Tsing Hua University and Sheffield University with the goals of (i) creating an open platform for the emulation and biological validation of fruit fly brain models in health and disease, (ii) standardizing tools and methods for graphical rendering, representation and manipulation of brain circuits, (iii) standardizing tools for the representation of fruit fly brain data and its abstractions, with support for natural language queries, and (iv) creating a focus for the neuroscience community with interests in the fruit fly brain and encouraging the sharing of fruit fly brain structural data and executable code worldwide. NeuroNLP and NeuroGFX, two key FFBO applications, aim to address two major challenges, respectively: (i) seamlessly integrating structural and genetic data from multiple sources so that they can be intuitively queried, effectively visualized and extensively manipulated, and (ii) devising executable brain circuit models anchored in structural data for understanding and developing novel hypotheses about brain function. NeuroNLP enables researchers to use plain English (or other languages) to probe biological data that are integrated into a novel database system, called NeuroArch, that we developed for integrating biological and abstract data models of the fruit fly brain. With powerful 3D graphical visualization, NeuroNLP presents a highly accessible portal to fruit fly brain data. NeuroGFX provides users with highly intuitive tools to execute neural circuit models with Neurokernel, an open-source platform for emulating the fruit fly brain, with full data support from the NeuroArch database and visualization support from an interactive graphical interface. Brain circuits can be configured with high flexibility and investigated on multiple levels, e.g., whole brain, neuropil, and local circuit levels.
The FFBO is publicly available at http://fruitflybrain.org and accessible from any modern web browser, including those running on smartphones.
bioRxiv | 2016
Chung-Heng Yeh; Yiyin Zhou; Nikul H. Ukani; Aurel A. Lazar
Recently, multiple focused efforts have resulted in a substantial increase in the availability of connectome data for the fruit fly brain. Elucidating neural circuit function from such structural data calls for a scalable computational modeling methodology. We propose such a methodology that includes (i) a brain emulation engine, with an architecture that can tackle the complexity of whole brain modeling, (ii) a database that supports tight integration of biological and modeling data, along with support for domain-specific queries and circuit transformations, and (iii) a graphical interface that allows for total flexibility in configuring neural circuits and visualizing run-time results, all anchored on model abstractions closely reflecting biological structure. Towards the realization of such a methodology, we have developed NeuroGFX and integrated it into the architecture of the Fruit Fly Brain Observatory (http://fruitflybrain.org). The computational infrastructure in NeuroGFX is provided by Neurokernel, an open source platform for the emulation of the fruit fly brain, and NeuroArch, a database for querying and executing fruit fly brain circuits. The integration of the two enables the algorithmic construction, manipulation and revision of executable circuits on multiple levels of abstraction of the same model organism. The power of this computational infrastructure can be leveraged through an intuitive graphical interface that allows visualizing execution results in the context of biological structure. This provides an environment where computational researchers can present configurable, executable neural circuits, and experimental scientists can easily explore circuit structure and function, ultimately leading to biological validation. With these capabilities, NeuroGFX enables the exploration of function from circuit structure at the whole brain, neuropil, and local circuit levels of abstraction.
By allowing independently developed models to be integrated at the architectural level, NeuroGFX provides an open, plug-and-play, collaborative environment for whole brain computational modeling of the fruit fly.
bioRxiv | 2016
Nikul H. Ukani; Adam Tomkins; Chung-Heng Yeh; Wesley Bruning; Allison L Fenichel; Yiyin Zhou; Yu-Chi Huang; Dorian Florescu; Carlos Luna Ortiz; Paul Richmond; Chung-Chuan Lo; Daniel Coca; Ann-Shyn Chiang; Aurel A. Lazar
NeuroNLP is a key application on the Fruit Fly Brain Observatory platform (FFBO, http://fruitflybrain.org) that provides a modern web-based portal for navigating fruit fly brain circuit data. Increases in the availability and scale of fruit fly connectome data demand new, scalable and accessible methods to facilitate investigation into the functions of the increasingly complex circuits being uncovered. NeuroNLP enables in-depth exploration and investigation of the structure of brain circuits, using intuitive natural language queries that are capable of revealing latent structure and information otherwise obscured by expansive yet independent data sources. NeuroNLP is built on top of a database system called NeuroArch that codifies knowledge about fruit fly brain circuits spanning multiple sources. Users can probe biological circuits in the NeuroArch database with plain English queries, such as “show glutamatergic local neurons in the left antennal lobe” and “show neurons with dendrites in the left mushroom body and axons in the fan-shaped body”. This simple yet powerful interface replaces the cumbersome checkboxes and dropdown menus prevalent in today’s neurobiological databases. Equipped with powerful 3D visualization, NeuroNLP standardizes tools and methods for graphical rendering, representation, and manipulation of brain circuits, while integrating with existing databases such as FlyCircuit. The user-friendly graphical user interface complements the natural language queries with additional controls for exploring the connectivity of neurons and neural circuits. Designed with an open-source, modular structure, NeuroNLP is highly scalable, flexible, and extensible: it can accommodate additional databases or switch between them, and it supports the creation of additional parsers for other languages.
By supporting access through a web browser from any modern laptop or smartphone, NeuroNLP significantly increases the accessibility of fruit fly brain data and improves the impact of the data in both scientific and educational exploration.
Journal of Mathematical Neuroscience | 2018
Aurel A. Lazar; Nikul H. Ukani; Yiyin Zhou
We investigate the sparse functional identification of complex cells and the decoding of spatio-temporal visual stimuli encoded by an ensemble of complex cells. The reconstruction algorithm is formulated as a rank minimization problem that significantly reduces the number of sampling measurements (spikes) required for decoding. We also establish the duality between sparse decoding and functional identification and provide algorithms for identification of low-rank dendritic stimulus processors. The duality enables us to efficiently evaluate our functional identification algorithms by reconstructing novel stimuli in the input space. Finally, we demonstrate that our identification algorithms substantially outperform the generalized quadratic model, the nonlinear input model, and the widely used spike-triggered covariance algorithm.
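The rank-minimization formulation above can be illustrated with a generic singular value thresholding (SVT) sketch for recovering a low-rank object from far fewer measurements than unknowns. This is an illustrative stand-in under stated assumptions, not the paper's decoding algorithm; the function name and parameter choices are hypothetical:

```python
import numpy as np

def svt_complete(observed, mask, tau=100.0, step=1.5, iters=500):
    """Recover a low-rank matrix from a subset of its entries by
    singular value thresholding (an SVT-style iteration).

    observed : matrix with unobserved entries set to zero
    mask     : boolean array, True where entries are observed
    """
    Y = np.zeros_like(observed)
    X = np.zeros_like(observed)
    for _ in range(iters):
        # Shrink the singular values of the dual variable Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Gradient step enforcing agreement on the observed entries.
        Y += step * mask * (observed - X)
    return X

# Rank-1 ground truth, 60% of entries observed.
rng = np.random.default_rng(0)
truth = np.outer(rng.standard_normal(20), rng.standard_normal(20))
mask = rng.random(truth.shape) < 0.6
recovered = svt_complete(truth * mask, mask)
rel_err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
```

The point mirrored from the abstract is that when the unknown is low-rank, the number of measurements needed scales with the dimension of the object rather than with the total number of unknowns.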
Computational Intelligence and Neuroscience | 2016
Aurel A. Lazar; Nikul H. Ukani; Yiyin Zhou
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the change of the local phase. The second building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm to several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm.
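A minimal numerical sketch of the first building block, measuring the temporal change of local phase with block FFTs, is given below. This is an illustrative simplification under stated assumptions: "local phase" here is the phase of a single low-frequency FFT coefficient per non-overlapping block, and a simple maximum-threshold test stands in for the Radon-transform stage of the full detector.

```python
import numpy as np

def local_phase(frame, block=8):
    """Local phase per non-overlapping block: the phase of the first
    horizontal (non-DC) FFT coefficient of each block."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    spectra = np.fft.fft2(tiles.transpose(0, 2, 1, 3))
    return np.angle(spectra[..., 0, 1])

def phase_change(prev_frame, next_frame, block=8):
    """Temporal change of local phase, wrapped to (-pi, pi]."""
    d = local_phase(next_frame, block) - local_phase(prev_frame, block)
    return np.angle(np.exp(1j * d))

def motion_detected(prev_frame, next_frame, threshold=0.5, block=8):
    """Flag motion when the largest absolute local phase change exceeds
    the threshold (a crude stand-in for the Radon-transform test)."""
    change = phase_change(prev_frame, next_frame, block)
    return bool(np.max(np.abs(change)) > threshold)

# A grating shifted by 2 pixels produces a pi/2 local phase change in
# every block; an unchanged frame pair produces none.
x = np.arange(64, dtype=float)
frame0 = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
frame1 = np.tile(np.sin(2 * np.pi * (x - 2) / 8), (64, 1))
```

The phase-wrapping via `np.angle(np.exp(1j * d))` keeps the temporal difference well defined when consecutive phases straddle the branch cut.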
BMC Neuroscience | 2015
Aurel A. Lazar; Konstantinos Psychas; Nikul H. Ukani; Yiyin Zhou
The compound eye of the fruit fly consists of the retina and 4 neuropils. The retina comprises about 700 to 800 ommatidia, each of which houses 8 photoreceptors, where phototransduction takes place. Although seemingly simple, each photoreceptor can perform complex computations on its own [1]. Nevertheless, it is not yet clear how individual photoreceptors contribute to the overall spatiotemporal processing of visual scenes. Towards this goal, we constructed a full-scale simulation of the retina on a cluster of GPUs. The geometry of the eye was taken into account by building a hexagonal array of 721 ommatidia on a hemisphere and assigning a proper optic axis and an acceptance angle to each of the photoreceptors. The visual stimulus was presented on either a hemisphere or a cylinder surrounding the eye, as in many experimental settings (see Figure 1).
[Figure 1. A. Stimulus presented on a hemisphere screen. B. Inputs (number of photons) to R1 photoreceptors in all ommatidia. C. Geometric relation between eye and screen. D. Gamma corrected screen intensity. E. Gamma corrected inputs to R1 photoreceptors. F. Voltage ...]
The phototransduction process is based on a biochemical model of 30,000 microvilli followed by a conductance-based biophysical model of the cell membrane [1]. Here we only consider the monochromatic photoreceptors R1-R6 in each ommatidium. Each photoreceptor is described by ~450,000 equations; the total number of equations simulated for the entire retina amounted to about 1.95 billion. All the simulations were performed on 4 Tesla K20X GPUs; it took approximately 7 minutes to simulate 1 second of activity. After constructing the detailed simulation model, we tested the processing of visual stimuli by the retina circuit. Furthermore, to test the effect of feedback originating in the lamina on the photoreceptors, we linked the retina model with a lamina model [2] in the Neurokernel environment [3].
An example of the response of the circuit to natural stimuli is shown in Figure 1. The simulations reveal that composition rules in the lamina circuit affect the spatiotemporal processing carried out by the photoreceptors under the model parameters considered here. Finally, we scaled up the retina model of the fruit fly to the size of the blow fly to investigate the effect of visual acuity and noise.
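The hexagonal eye geometry described above can be sketched numerically: a centered hexagonal grid with 15 rings contains exactly 721 elements, matching the ommatidia count of the model, and can be mapped onto a hemisphere to give one position per ommatidium. This is a geometric illustration only; the coordinate convention and mapping are assumptions, and the model's actual optic-axis and acceptance-angle assignments are not reproduced here.

```python
import numpy as np

def hex_grid(rings):
    """Axial coordinates (q, r) of a centered hexagonal grid with the
    given number of rings; element count is 3*rings*(rings+1) + 1."""
    return [(q, r) for q in range(-rings, rings + 1)
                   for r in range(-rings, rings + 1)
                   if abs(q + r) <= rings]

def on_hemisphere(coords, rings, fov=np.pi / 2):
    """Map hex-grid coordinates onto a unit hemisphere: grid distance
    from the center sets the polar angle (up to fov), direction in the
    grid plane sets the azimuth."""
    pts = []
    for q, r in coords:
        x, y = q + r / 2.0, r * np.sqrt(3) / 2.0  # axial -> planar
        theta = fov * np.hypot(x, y) / rings      # polar angle
        phi = np.arctan2(y, x)                    # azimuth
        pts.append((np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)))
    return np.array(pts)

ommatidia = hex_grid(15)              # 15 rings -> 721 elements
positions = on_hemisphere(ommatidia, 15)
```

Each row of `positions` is a unit vector that could serve as the optic axis of one ommatidium in such a geometry.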
BMC Neuroscience | 2015
Aurel A. Lazar; Nikul H. Ukani; Yiyin Zhou
Neural circuits built with complex cells play a key role in modeling the primary visual cortex. The encoding capability of an ensemble of complex cells has not been systematically studied, however. Can visual scenes be reconstructed using the spike times generated by an ensemble of complex cells? Can the processing taking place in complex cells be identified with high accuracy? Processing by complex cells has the complexity of Volterra models [1]. General Volterra-based models call, among other things, for efficient functional identification and decoding algorithms. We demonstrate that complex cells exhibit Volterra dendritic stimulus processors (Volterra DSPs) that are analytically and computationally tractable. Decoding and identification problems arising in neural circuits built with complex cells can be efficiently solved as rank minimization problems [2]. We (i) provide an algorithm that reconstructs visual stimuli from the spike times generated by circuits with widely employed complex cell models (Complex Cell Time Decoding Machines), and (ii) propose a mechanistic algorithm for functionally identifying the processing in complex cells using the spike times they generate (Complex Cell Channel Identification Machines). These algorithms are based on the key observation that the functional identification of processing in a single complex cell is dual to the problem of decoding stimuli encoded by an ensemble of complex cells. In addition, we show that the number of spikes needed for perfect reconstruction of a band-limited stimulus is proportional to the dimension of the stimulus space rather than the square of its dimension, thereby reducing the required number of neurons/measurements to a physiologically plausible range. This result demonstrates that visual stimuli can be efficiently reconstructed from the amplitude information carried in the complex cells alone.
Similar results obtained for identification establish the computational tractability of higher order Volterra DSPs. We provide examples of perfect reconstruction of space-time stimuli (Figure 1A) and examples of identification of complex cell DSPs (Figure 1B). We demonstrate that our identification algorithms substantially outperform algorithms based on spike-triggered covariance (STC) (Figure 1C). Finally, we evaluate our identification algorithms by reconstructing novel stimuli in the input space using identified Volterra DSPs (Figure 1D) [3].
[Figure 1. A. SNR of reconstruction when varying the number of measurements/spikes (dimension of the input space: 117). B. Mean SNR of identified complex cell DSP when varying the number of input trials used in identification. C. Comparison of identification by ...]
Archive | 2014
Aurel A. Lazar; Yiyin Zhou; Nikul H. Ukani
Archive | 2015
Aurel A. Lazar; Yiyin Zhou; Nikul H. Ukani; Konstantinos Psychas
Archive | 2016
Aurel A. Lazar; Nikul H. Ukani; Yiyin Zhou