Micah Richert
University of California, Irvine
Publications
Featured research published by Micah Richert.
Frontiers in Neuroinformatics | 2011
Micah Richert; Jayram Moorkanikara Nageswaran; Nikil D. Dutt; Jeffrey L. Krichmar
We have developed a spiking neural network simulator that is both easy to use and computationally efficient for generating large-scale computational neuroscience models. The simulator implements current- or conductance-based networks of Izhikevich neurons with spike-timing-dependent plasticity and short-term plasticity, uses a standard network construction interface, and can execute on either GPUs or CPUs. Written in C/C++, it allows both fine-grained and coarse-grained control over a host of parameters. We demonstrate the ease of use and computational efficiency of the simulator by implementing a large-scale model of cortical areas V1, V4, and MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real time on an off-the-shelf GPU. The source code for the simulator, as well as for the cortical model examples, is publicly available.
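For readers unfamiliar with the neuron model named above, here is a minimal sketch of the standard Izhikevich dynamics the simulator implements. The regular-spiking parameters are Izhikevich's published values; the constant input current and Euler step size are illustrative assumptions, not anything taken from the paper or the simulator's actual API.

```python
# Minimal sketch of the Izhikevich neuron model (Izhikevich, 2003).
# Parameters a, b, c, d are the standard regular-spiking values; the
# constant input current and time step are illustrative choices.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical cell
v, u = -65.0, b * -65.0              # membrane potential (mV) and recovery variable
I = 10.0                             # constant input drive (assumed)
dt = 0.5                             # Euler time step (ms)

spike_times = []
for step in range(2000):             # simulate 1 s of model time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```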
International Symposium on Neural Networks | 2013
Kristofor D. Carlson; Micah Richert; Nikil D. Dutt; Jeffrey L. Krichmar
Spiking neural network (SNN) simulations with spike-timing dependent plasticity (STDP) often experience runaway synaptic dynamics and require some sort of regulatory mechanism to stay within a stable operating regime. Previous homeostatic models have used L1 or L2 normalization to scale the synaptic weights but the biophysical mechanisms underlying these processes remain undiscovered. We propose a model for homeostatic synaptic scaling that modifies synaptic weights in a multiplicative manner based on the average postsynaptic firing rate as observed in experiments. The homeostatic mechanism was implemented with STDP in conductance-based SNNs with Izhikevich-type neurons. In the first set of simulations, homeostatic synaptic scaling stabilized weight changes in STDP and prevented runaway dynamics in simple SNNs. During the second set of simulations, homeostatic synaptic scaling was found to be necessary for the unsupervised learning of V1 simple cell receptive fields in response to patterned inputs. STDP, in combination with homeostatic synaptic scaling, was shown to be mathematically equivalent to non-negative matrix factorization (NNMF) and the stability of the homeostatic update rule was proven. The homeostatic model presented here is novel, biologically plausible, and capable of unsupervised learning of patterned inputs, which has been a significant challenge for SNNs with STDP.
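The abstract does not reproduce the scaling rule itself. The sketch below shows one plausible multiplicative form driven by the deviation of the average postsynaptic firing rate from a target; the gain, target rate, and exact functional form are assumptions for illustration, not the paper's rule.

```python
# Illustrative sketch of multiplicative homeostatic synaptic scaling.
# NOTE: the gain, target rate, and functional form are assumed; the
# paper defines its own update rule, which is not reproduced here.
import numpy as np

target_rate = 10.0                    # desired average postsynaptic rate (Hz, assumed)
beta = 0.1                            # homeostatic gain (assumed)
weights = np.random.rand(100) * 0.5   # incoming synaptic weights
avg_rate = 25.0                       # measured average postsynaptic rate (Hz)

# Multiplicative scaling: all weights shrink or grow by the same factor,
# preserving their relative strengths (unlike an additive rule).
factor = 1.0 + beta * (target_rate - avg_rate) / target_rate
weights *= factor
```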
System on Chip Conference | 2010
Jayram Moorkanikara Nageswaran; Micah Richert; Nikil D. Dutt; Jeffrey L. Krichmar
Biological neural systems are well known for their robust and power-efficient operation in highly noisy environments. Biological circuits are made up of low-precision, unreliable, and massively parallel neural elements with highly reconfigurable and plastic connections. Two of the most interesting properties of neural systems are their self-organizing capabilities and their template architecture. Recent research in spiking neural networks has demonstrated interesting principles about learning and neural computation. Understanding and applying these principles to practical problems is possible only if large-scale spiking neural simulators can be constructed. Recent advances in low-cost multiprocessor architectures make it possible to build such large-scale spiking network simulators. In this paper we review modeling abstractions for neural circuits and frameworks for modeling, simulating, and analyzing spiking neural networks.
Neuroinformatics | 2014
Michael Beyeler; Micah Richert; Nikil D. Dutt; Jeffrey L. Krichmar
Simulating large-scale models of biological motion perception is challenging because of the memory required to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern-direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task agrees with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real time on a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and the analysis scripts is publicly available.
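A rough sketch of the CDS-to-PDS pooling stage described above: the cosine pooling profile, the rectification, and the synthetic CDS tuning below are assumptions standing in for the paper's Simoncelli-and-Heeger-based scheme, which is not reproduced here.

```python
# Sketch of pattern-direction-selective (PDS) pooling over component-
# direction-selective (CDS) responses. Cosine pooling weights and the
# synthetic CDS tuning bump are illustrative assumptions.
import numpy as np

n_dirs = 24
dirs = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)  # CDS preferred directions

def pds_response(cds_rates, preferred):
    """Pool CDS rates with a cosine weight centered on the PDS cell's
    preferred direction; negative lobes give broad suppression."""
    weights = np.cos(dirs - preferred)
    return max(0.0, float(weights @ cds_rates))  # rectify the pooled drive

# Example: CDS population responding to rightward motion (0 rad)
cds_rates = np.exp(2.0 * np.cos(dirs))           # von-Mises-like tuning bump
print(pds_response(cds_rates, preferred=0.0))
```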
International Conference on Computer-Aided Design | 2011
Jeffrey L. Krichmar; Nikil D. Dutt; Jayram Moorkanikara Nageswaran; Micah Richert
Biological neural systems are well known for their robust and power-efficient operation in highly noisy environments. We outline key modeling abstractions for the brain and focus on spiking neural network models. We discuss aspects of neuronal processing and computational issues related to modeling these processes. Although many of these algorithms can be efficiently realized in specialized hardware, we present a case study of simulating the visual cortex in a GPU-based simulation environment that is readily usable by neuroscientists and computer scientists and efficient enough to construct very large networks comparable in scale to brain networks.
BMC Neuroscience | 2013
Philip Meier; Micah Richert; Jayram Moorkanikara Nageswaran; Eugene M. Izhikevich
We find that a neural network with temporally evolving surround suppression improves the linear decodability of the network’s population response. We present a novel model of motion processing that is fully implemented in a spiking neural network, and we examine the role of lateral inhibition in V1 and MT. We model the responses of the retina, V1, and MT, where each neuron is a single-compartment conductance-based dynamical system. We apply a linear decoder to estimate the speed and direction of optic flow from the population response of a small spatial region in MT. Before training the decoder on population vector responses from MT with labeled speeds, we allow the spiking neural network to adapt the weights of the recurrent inhibitory neurons with spike-timing-dependent plasticity (STDP). This allows individual cells to adapt their dynamic range to the statistics reflected in the activity of the excitatory feed-forward network. We also impose a random onset latency of 1–10 ms for each feed-forward neuron. The combination of the onset latency and the inhibitory STDP results in a surround suppression whose magnitude modulates throughout the course of the response, balancing the incoming excitatory drive. The temporally evolving surround suppression affects the activity of excitatory and inhibitory units in V1 and MT. The result is a population response of MT excitatory units that is more informative for decoding. The early response is less direction selective but drives the inhibition that sculpts the later responses. One source of improvement is that inhibition removes the non-selective response while preserving a robust selective response. The inhibition also acts as gain control, which limits how much saturation corrupts the linear code.
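A minimal sketch of the kind of linear decode described above, assuming synthetic MT tuning curves and a least-squares readout of direction; the tuning curves, noise model, and parameter values are invented for illustration and are not the paper's network.

```python
# Sketch of a linear decoder trained on labeled MT population responses.
# The synthetic tuning curves and noise model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_trials = 50, 400
prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

stim = rng.uniform(0, 2 * np.pi, n_trials)                    # true motion directions
rates = np.exp(2.0 * np.cos(stim[:, None] - prefs[None, :]))  # tuned population responses
rates += rng.normal(0, 0.5, rates.shape)                      # trial-to-trial noise

# Decode (cos θ, sin θ) with least squares, then recover the angle.
targets = np.column_stack([np.cos(stim), np.sin(stim)])
W, *_ = np.linalg.lstsq(rates, targets, rcond=None)
est = rates @ W
decoded = np.arctan2(est[:, 1], est[:, 0])
err = np.angle(np.exp(1j * (decoded - stim)))                 # wrapped angular error
print(f"mean absolute decoding error: {np.degrees(np.abs(err).mean()):.1f} deg")
```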
BMC Neuroscience | 2013
Csaba Petre; Micah Richert; Botond Szatmary; Eugene M. Izhikevich
Lateral inhibition is typically used to repel neural receptive fields. Here we introduce an additional learning mechanism that modifies the plasticity of feedforward synapses based on lateral interactions. We show a model, based on evidence from biological recordings [1], in which a heterosynaptic learning rule in conjunction with lateral inhibition introduces competition among neurons for input features. We demonstrate an STDP learning rule for feedforward connections to a spiking neuron in which plasticity is modulated by the activity of neighboring neurons. We apply the learning rule to a spiking model of primary visual cortex. In our model, input to V1 cortical neurons comes from the magnocellular pathway of a simulated spiking retina responding to saccades over a natural image. A group of neurons in V1 responds to a particular input feature and conveys the information that they have spiked to neighboring neurons by way of specialized lateral connections. These connections convey a direct signal of recent spiking activity; alternatively, lateral inhibitory synapses and the level of inhibition to a neuron can be used as this signal. If this recent neighbor-activity signal is high, feedforward plasticity to the neuron becomes a simple flat depression instead of regular exponential STDP, as shown in Figure 1A. Thus, late-spiking neurons are prevented from developing receptive fields similar to those of earlier-spiking neurons for a given input feature. This rule, in conjunction with lateral inhibition and slow upward weight drift, ensures competition between neurons for features over a long timescale. Our V1 model is built on a spatially distributed grid of single-compartment spiking neurons with parameters configured to model regular-spiking cortical pyramidal cells, as detailed in [2]. The model has plastic excitatory feedforward connections and local lateral inhibition between spatially proximal neurons. With the heterosynaptic rule we observe better overall coverage of orientation selectivity among V1 layer 4 neurons than without it: a larger range of orientations and less redundancy of receptive field features between neighboring neurons, as shown in Figure 1B. Such a rule could be used in a generic spiking cortical architecture to enforce independence of neural receptive fields.
Figure 1. A: Detail of the heterosynaptic rule. Neurons N1 and N2 fire early for the input, and their feedforward synapses undergo normal STDP. They send an activity signal to N3 and N4, whose feedforward synaptic weights are then depressed to prevent learning ...
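The gating described above, where a high neighbor-activity signal collapses the update to a flat depression while exponential STDP applies otherwise, can be sketched as follows; the amplitudes and time constants are assumed values, not the paper's.

```python
# Sketch of the gated STDP rule described in the abstract: high recent
# neighbor activity yields a flat depression; otherwise standard
# exponential STDP applies. All constants are assumptions.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # STDP amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0   # STDP time constants in ms (assumed)
flat_depression = -0.005           # flat update under high neighbor activity

def dw(delta_t, neighbor_active):
    """Weight change for a spike-pair interval delta_t = t_post - t_pre (ms)."""
    if neighbor_active:
        return flat_depression                        # neighbors already captured the feature
    if delta_t > 0:                                   # pre leads post: potentiate
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)     # post leads pre: depress
```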
BMC Neuroscience | 2013
Filip Piekniewski; Micah Richert; Dimitry Fisher; Botond Szatmary; Csaba Petre; Sach Sokol; Eugene M. Izhikevich
Experimental studies have shown that neuronal excitation is balanced with inhibition and that spikes are triggered only when that fine balance is perturbed. It is also known that inhibition is critical for receptive field tuning, yet it is not clear what roles are played by different types of inhibitory interneurons or how the corresponding balanced circuitry could emerge via spike-timing-dependent plasticity (STDP). To study these questions we constructed a large-scale, detailed spiking model of V1 involving a variety of simulated neurons: fast-spiking (FS) interneurons, low-threshold-spiking (LTS) interneurons, and regular-spiking (RS) neurons. We modeled layers 4 and 2/3 of the primary visual cortex and a number of projections between cell types in agreement with anatomical data. Synaptic dynamics are governed by a set of STDP and activity-dependent plasticity mechanisms for both inhibitory and excitatory synapses. The plasticity rules were chosen to be in quantitative agreement with experiment where data are available; for many of the connections, however, the data are either unavailable or noisy, and in these cases plasticity rules were chosen by educated guess, constrained by the requirement of structural stability of the system and by the expected response properties of cells to probing stimuli. Together, the plasticity rules lead to stable neuronal responses and the formation of orientation-selective receptive fields. The network learns simple and complex cells spanning a broad range of orientations and spatial frequencies. The model converges to balanced neurodynamics and biologically reasonable firing rates. Our study shows that in the presence of strong thalamic drive, plastic inhibition is necessary for feature selectivity: the FS cells remove the DC component of the input, while firing of the LTS cells imposes sparse responses and balances out feedback excitation.
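The abstract does not give the inhibitory plasticity rule itself. As a hedged stand-in, the sketch below uses a rate-level inhibitory rule in the spirit of Vogels et al. (2011) that pushes postsynaptic firing toward a target rate, and hence toward excitation-inhibition balance; it is an illustration of the general idea, not the rule used in the paper.

```python
# Illustrative inhibitory plasticity rule driving excitation/inhibition
# balance (rate-level form after Vogels et al., 2011). A stand-in
# sketch; not the paper's rule. Learning rate and target are assumed.
eta = 1e-3    # learning rate (assumed)
rho0 = 5.0    # target postsynaptic rate in Hz (assumed)

def update_inhibitory_weight(w, r_pre, r_post):
    """Inhibitory weight grows when the postsynaptic cell fires above its
    target rate and shrinks below it, pulling the cell toward balance."""
    return max(0.0, w + eta * r_pre * (r_post - rho0))
```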
Archive | 2012
Botond Szatmary; Micah Richert
Archive | 2012
Filip Piekniewski; Micah Richert; Dimitry Fisher; Eugene Izhikevich