Gopalakrishnan Srinivasan
Purdue University
Publication
Featured research published by Gopalakrishnan Srinivasan.
Scientific Reports | 2016
Gopalakrishnan Srinivasan; Abhronil Sengupta; Kaushik Roy
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
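A minimal sketch of the stochastic binary-synapse idea described above, assuming an exponential dependence of the switching probability on the pre/post spike-time difference; the constants, the `stochastic_stdp_update` name, and the potentiate/depress convention are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_stdp_update(weights, t_pre, t_post, p_max=0.1, tau=20.0):
    """Probabilistically switch binary MTJ-like synapses of one post-neuron.

    weights : (n_pre,) array of binary conductance states {0, 1}
    t_pre   : (n_pre,) most recent pre-synaptic spike times (ms)
    t_post  : time of the current post-synaptic spike (ms)
    """
    dt = t_post - t_pre                           # positive => pre fired before post
    p_switch = p_max * np.exp(-np.abs(dt) / tau)  # switching probability decays with |dt|
    flip = rng.random(weights.shape) < p_switch
    # Potentiate (set to 1) when pre precedes post, depress (set to 0) otherwise.
    weights = np.where(flip & (dt >= 0), 1, weights)
    weights = np.where(flip & (dt < 0), 0, weights)
    return weights

# Toy usage: 5 binary synapses, post-neuron spikes at t = 50 ms.
w = np.zeros(5, dtype=int)
t_pre = np.array([48.0, 45.0, 30.0, 52.0, 10.0])
print(stochastic_stdp_update(w, t_pre, t_post=50.0))
```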
Design Automation Conference | 2016
Priyadarshini Panda; Abhronil Sengupta; Syed Shakib Sarwar; Gopalakrishnan Srinivasan; Swagath Venkataramani; Anand Raghunathan; Kaushik Roy
Neuromorphic algorithms are being increasingly deployed across the entire computing spectrum, from data centers to mobile and wearable devices, to solve problems involving recognition, analytics, search and inference. For example, large-scale artificial neural networks (popularly called deep learning) now represent the state-of-the-art in a wide and ever-increasing range of video/image/audio/text recognition problems. However, the growth in data sets and network complexity has led to deep learning becoming one of the most challenging workloads across the computing spectrum. We posit that approximate computing can play a key role in the quest for energy-efficient neuromorphic systems. We show how the principles of approximate computing can be applied to the design of neuromorphic systems at various layers of the computing stack. At the algorithm level, we present techniques to significantly scale down the computational requirements of a neural network with minimal impact on its accuracy. At the circuit level, we show how approximate logic and memory can be used to implement neurons and synapses in an energy-efficient manner, while still meeting accuracy requirements. A fundamental limitation to the efficiency of neuromorphic computing in traditional implementations (software and custom hardware alike) is the mismatch between neuromorphic algorithms and the underlying computing models such as the von Neumann architecture and Boolean logic. To overcome this limitation, we describe how emerging spintronic devices can offer highly efficient, approximate realizations of the building blocks of neuromorphic computing systems.
Design, Automation and Test in Europe | 2016
Gopalakrishnan Srinivasan; Parami Wijesinghe; Syed Shakib Sarwar; Akhilesh Jaiswal; Kaushik Roy
Multilayered artificial neural networks have found widespread utility in classification and recognition applications. The scale and complexity of such networks, together with the inadequacies of general-purpose computing platforms, have led to a significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights, motivated primarily by the observation that the number of synapses is orders of magnitude larger than the number of neurons. Typical digital CMOS implementations of such large-scale networks are power-hungry. In order to minimize the power consumption, the digital neurons could be operated reliably at scaled voltages by reducing the clock frequency. In contrast, on-chip synaptic storage designed using a conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic error resiliency of neural networks to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200 mV from the nominal operating voltage (950 mV) for practically no loss (less than 0.5%) in accuracy (22 nm predictive technology). Scaling beyond that causes substantial performance degradation owing to the increased probability of failures in the MSBs of the synaptic weights. We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. In an effort to further minimize the area penalty, we present a synaptic-sensitivity-driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit-to-system-level simulation framework shows that the proposed synaptic-sensitivity-driven architecture provides a 30.91% reduction in the memory access power with a 10.41% area overhead, for less than 1% loss in the classification accuracy.
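The bit-significance argument can be illustrated with a toy error-injection model: weights stored as fixed-point words, with the MSBs assumed to sit in robust 8T cells and the remaining bits in 6T cells that fail with some per-bit probability at scaled voltage. The bit widths, failure rate, and function name below are assumptions made for illustration, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_scaled_voltage_errors(weights, n_bits=8, n_msb_8t=2, p_fail=1e-3):
    """Flip bits of fixed-point weights to mimic 6T-SRAM failures at scaled voltage.

    The top `n_msb_8t` bits are assumed to sit in robust 8T cells and are never
    flipped; the remaining bits emulate 6T cells that fail independently with
    probability `p_fail` per bit.
    """
    w = weights.astype(np.int64) & ((1 << n_bits) - 1)
    for bit in range(n_bits - n_msb_8t):          # only the 6T-backed LSBs
        flips = rng.random(w.shape) < p_fail
        w ^= flips.astype(np.int64) << bit
    return w

# Toy usage: 8-bit weights, 2 MSBs protected in 8T cells.
w = rng.integers(0, 256, size=1000)
w_noisy = inject_scaled_voltage_errors(w)
print(np.count_nonzero(w != w_noisy), "weights perturbed")
```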
Design, Automation and Test in Europe | 2017
Gopalakrishnan Srinivasan; Abhronil Sengupta; Kaushik Roy
Biologically-inspired spiking neural networks (SNNs) have attracted significant research interest due to their inherent computational efficiency in performing classification and recognition tasks. The conventional CMOS-based implementations of large-scale SNNs are power intensive. This is a consequence of the fundamental mismatch between the technology used to realize the neurons and synapses, and the neuroscience mechanisms governing their operation, leading to area-expensive circuit designs. In this work, we present a three-terminal spintronic device, namely, the magnetic tunnel junction (MTJ)-heavy metal (HM) heterostructure that is inherently capable of emulating the neuronal and synaptic dynamics. We exploit the stochastic switching behavior of the MTJ in the presence of thermal noise to mimic the probabilistic spiking of cortical neurons, and the conditional change in the state of a binary synapse based on the pre- and post-synaptic spiking activity required for plasticity. We demonstrate the efficacy of a crossbar organization of our MTJ-HM based stochastic SNN in digit recognition using a comprehensive device-circuit-system simulation framework. The energy efficiency of the proposed system stems from the ultra-low switching energy of the MTJ-HM device, and the in-memory computation rendered possible by the localized arrangement of the computational units (neurons) and non-volatile synaptic memory in such crossbar architectures.
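A rough sketch of the probabilistic-neuron abstraction used in such MTJ-HM designs: the firing probability is taken as a sigmoid-like function of the accumulated crossbar column current, standing in for the thermally assisted switching characteristic. The sigmoid form, constants, and normalization below are illustrative assumptions, not the paper's device model.

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_mtj_neuron(currents, i_half=1.0, slope=4.0):
    """Return spikes (0/1) for each column current.

    Firing probability is modelled as a sigmoid of the write current, a common
    abstraction for thermally assisted MTJ switching; the exact probability
    curve would come from device-level simulation.
    """
    p_spike = 1.0 / (1.0 + np.exp(-slope * (currents - i_half)))
    return (rng.random(currents.shape) < p_spike).astype(int)

# Toy usage: column currents from a binary crossbar driven by input spikes.
weights = rng.integers(0, 2, size=(100, 10))   # 100 inputs x 10 neurons
in_spikes = rng.integers(0, 2, size=100)
column_currents = in_spikes @ weights / 50.0   # arbitrary normalization
print(stochastic_mtj_neuron(column_currents))
```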
IEEE Transactions on Electron Devices | 2017
Akhilesh Jaiswal; Sourjya Roy; Gopalakrishnan Srinivasan; Kaushik Roy
The efficiency of the human brain in performing classification tasks has attracted considerable research interest in brain-inspired neuromorphic computing. Spiking neuromorphic architectures attempt to mimic the computations performed in the brain through a dense interconnection of neurons and synaptic weights. A leaky-integrate-fire (LIF) spiking model is widely used to emulate the dynamics of biological neurons. In this paper, we propose a spin-based LIF spiking neuron using the magnetoelectric (ME) switching of ferromagnets. The voltage across the ME oxide exhibits a typical leaky-integrate behavior, which in turn switches an underlying ferromagnet. Due to the effect of thermal noise, the ferromagnet exhibits probabilistic switching dynamics, which is reminiscent of the stochasticity exhibited by biological neurons. The energy efficiency of the ME switching mechanism, coupled with the intrinsic nonvolatility of ferromagnets, results in lower energy consumption when compared with a CMOS LIF neuron. A device-to-system-level simulation framework has been developed to investigate the feasibility of the proposed LIF neuron for a handwritten digit recognition application.
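The leaky-integrate-and-fire dynamic with a stochastic firing event can be sketched in a few lines; the thermal stochasticity of the ferromagnet is approximated here by jittering the firing threshold, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def lif_stochastic(i_in, dt=1.0, tau=20.0, v_th=1.0, sigma=0.05):
    """Leaky-integrate-and-fire trace with a noisy (probabilistic) threshold.

    i_in  : 1-D array of input current per time step
    sigma : std-dev of threshold jitter standing in for thermal noise
    Returns the membrane-voltage trace and the spike train.
    """
    v, v_trace, spikes = 0.0, [], []
    for i_t in i_in:
        v += (-v + i_t) * dt / tau               # leaky integration
        fired = v >= v_th + sigma * rng.standard_normal()
        spikes.append(int(fired))
        if fired:
            v = 0.0                              # reset after the magnet switches
        v_trace.append(v)
    return np.array(v_trace), np.array(spikes)

# Toy usage: constant supra-threshold drive for 200 steps.
v_tr, s = lif_stochastic(np.full(200, 1.2))
print("spike count:", s.sum())
```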
Frontiers in Neuroscience | 2018
Chankyu Lee; Priyadarshini Panda; Gopalakrishnan Srinivasan; Kaushik Roy
Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. Recent efforts in SNNs have focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more complex functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN comprises alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases: first, the convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning, and then the synaptic weights are fine-tuned with spike-based supervised gradient-descent backpropagation. Our experiments on digit recognition demonstrate that STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5×) training, and better generalization compared with purely gradient-based training without pre-training.
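The unsupervised phase builds on pair-based STDP with exponentially decaying spike traces; a generic sketch of one such update step is shown below. The layer-wise convolutional pre-training and the spike-based backpropagation fine-tuning described in the paper are not reproduced here, and the learning-rate constants are illustrative.

```python
import numpy as np

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.005, a_minus=0.004, tau=20.0, dt=1.0):
    """One time step of pair-based STDP with exponential traces.

    w           : (n_pre, n_post) weight matrix
    pre_spikes  : (n_pre,) 0/1 vector for this time step
    post_spikes : (n_post,) 0/1 vector for this time step
    Traces decay exponentially and are bumped on spikes; LTP is applied on
    post-spikes, LTD on pre-spikes.
    """
    pre_trace *= np.exp(-dt / tau)
    post_trace *= np.exp(-dt / tau)
    pre_trace += pre_spikes
    post_trace += post_spikes
    w += a_plus * np.outer(pre_trace, post_spikes)    # potentiation
    w -= a_minus * np.outer(pre_spikes, post_trace)   # depression
    np.clip(w, 0.0, 1.0, out=w)
    return w, pre_trace, post_trace

# Toy usage: random Poisson-like activity over 50 steps.
rng = np.random.default_rng(4)
w = rng.random((784, 100)) * 0.1
pre_tr, post_tr = np.zeros(784), np.zeros(100)
for _ in range(50):
    pre = (rng.random(784) < 0.05).astype(float)
    post = (rng.random(100) < 0.05).astype(float)
    w, pre_tr, post_tr = stdp_step(w, pre, post, pre_tr, post_tr)
print(w.mean())
```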
Frontiers in Neuroscience | 2018
Gopalakrishnan Srinivasan; Priyadarshini Panda; Kaushik Roy
In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly from the liquid neuronal spiking activity. This is in contrast to standard Liquid State Machines (LSMs), which have fixed synaptic connections between the input and liquid, followed by a readout layer (trained in a supervised manner) to extract the liquid states and infer the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by using fixed input-to-liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid to improve the accuracy introduces scalability challenges and training inefficiencies. To address this, we propose SpiLinC, which is composed of an ensemble of multiple liquids operating in parallel. We use a “divide and learn” strategy for SpiLinC, where each liquid is trained on a unique segment of the input patterns, causing its neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. As a result, SpiLinC offers competitive classification accuracy compared to the Liquid-SNN with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and handwritten digits from the MNIST dataset.
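A heavily simplified sketch of the “divide and learn” idea: the flattened input is split into contiguous segments, each segment drives its own small liquid, and the per-liquid spike counts are concatenated for the final class decision. The toy liquid dynamics, connectivity densities, and function names below are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

def split_into_segments(x, n_liquids):
    """Split a flattened input pattern into contiguous segments, one per liquid."""
    return np.array_split(x, n_liquids)

def liquid_response(segment, w_in, w_rec, steps=50, v_th=1.0, leak=0.9):
    """Toy liquid: leaky neurons driven by the input segment and a sparse recurrent pool."""
    n = w_rec.shape[0]
    v, counts, last_spikes = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(steps):
        v = leak * v + w_in @ segment + w_rec @ last_spikes
        last_spikes = (v >= v_th).astype(float)
        counts += last_spikes
        v[v >= v_th] = 0.0                       # reset neurons that fired
    return counts

# Toy usage: a 64-dimensional pattern divided across 4 liquids of 32 neurons each.
x = rng.random(64)
liquid_counts = []
for seg in split_into_segments(x, 4):
    w_in = 0.1 * rng.random((32, seg.size))
    w_rec = 0.05 * (rng.random((32, 32)) < 0.1)
    liquid_counts.append(liquid_response(seg, w_in, w_rec))
combined = np.concatenate(liquid_counts)          # spike counts fed to the class decision
print(combined.shape)
```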
International Symposium on Neural Networks | 2017
Priyadarshini Panda; Gopalakrishnan Srinivasan; Kaushik Roy
We present an ensemble approach for implementing Spiking Neural Networks (SNNs) with on-line unsupervised learning, well-suited for the robust and energy-efficient design of neuromorphic computing systems for pattern recognition tasks. Inspired by the collective neuronal activity observed in the visual cortex, the proposed EnsembleSNN architecture involves multiple simple SNNs, or ensembles, acting in parallel on different aspects of the input. This in turn reduces the training complexity due to the decreased connectivity obtained from decomposing the input across different ensembles. During inference, a collective decision from all ensembles of the EnsembleSNN is considered to obtain the final prediction. We add predictive connections across different ensembles that enable individual ensembles to learn some statistics about the remaining portions of the input image, which further enhances the collective decision-making of the proposed architecture. We evaluate our approach on the MNIST dataset for different configurations of EnsembleSNN. Our experiments demonstrate up to 2.8× improvement in efficiency while yielding better (∼2.5%) accuracy than the optimized baseline network, and even higher improvements of up to 3.7× for minimal accuracy degradation (∼3.2%).
International Symposium on Neural Networks | 2017
Gopalakrishnan Srinivasan; Sourjya Roy; Vijay Raghunathan; Kaushik Roy
Spike Timing Dependent Plasticity (STDP), wherein synaptic weights are modified based on the temporal correlation between a pair of pre- and post-synaptic (post-neuronal) spikes, is widely used to implement unsupervised learning in Spiking Neural Networks (SNNs). In general, STDP-based learning models disregard the information embedded in the post-neuronal spiking frequency. We observe that updating the synaptic weights at the instant of every post-neuronal spike while ignoring the spiking frequency could potentially cause them to learn overlapping representations of multiple input patterns sharing common features. We present STDP-based enhanced plasticity mechanisms that account for the spiking frequency to achieve efficient synaptic learning. First, we utilize the low-pass filtered neuronal membrane potential to obtain an estimate of the spiking frequency. We perform STDP-driven weight updates in the event of a post-spike only if the filtered potential exceeds a definite threshold. This ensures that plasticity is effected on the dominantly firing neuron, which indicates a strong bias toward learning the input pattern. Synaptic updates are restrained in the case of sporadic neuronal spiking activity, which implies a weak correlation with the input pattern. This enhances the quality of the features encoded by the synapses, resulting in an improvement of 5.8% in the classification accuracy of an SNN of 100 neurons trained for digit recognition. Our simulations further show that the enhanced scheme provides a 2× reduction in the number of weight updates, which leads to improved energy efficiency in event-driven SNN implementations. Second, we explore a neuronal spike-count based enhanced plasticity mechanism. The synapses are modified at the instant of a post-spike only if the neuron has fired a certain number of spikes since the preceding update instant. This scheme performs delayed updates at suitable neuronal spiking instants to learn improved synaptic representations. Using this technique, the classification accuracy increased by 4% with a 5.2× reduction in the number of weight updates.
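The first mechanism can be sketched as a gate on the usual STDP update: a low-pass filtered copy of the membrane potential is maintained, and the weight update triggered by a post-spike is applied only when that filtered potential exceeds a threshold (the spike-count variant would gate on a counter instead). The filter constant, gate level, and the particular update rule below are illustrative assumptions.

```python
import numpy as np

def gated_stdp(w, pre_trace, post_spiked, v_mem, v_filt,
               alpha=0.9, v_gate=0.5, lr=0.01):
    """Apply an STDP-style update only when the low-pass filtered membrane
    potential indicates dominant (sustained) firing.

    w         : (n_pre,) weights onto one post-neuron
    pre_trace : (n_pre,) exponentially decaying pre-synaptic activity trace
    v_mem     : current membrane potential of the post-neuron
    v_filt    : filtered membrane potential carried over from the previous step
    """
    v_filt = alpha * v_filt + (1.0 - alpha) * v_mem     # low-pass filter
    if post_spiked and v_filt > v_gate:                 # gate the plasticity
        # Potentiate strongly active synapses, mildly depress the rest.
        w = np.clip(w + lr * (pre_trace - 0.2), 0.0, 1.0)
    return w, v_filt

# Toy usage: a post-spike with a strongly active subset of inputs.
w = np.full(10, 0.5)
pre_trace = np.array([1.0] * 3 + [0.05] * 7)
w, v_filt = gated_stdp(w, pre_trace, post_spiked=True, v_mem=0.9, v_filt=0.6)
print(w.round(3), round(v_filt, 3))
```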
arXiv: Emerging Technologies | 2018
Amogh Agrawal; Akhilesh Jaiswal; Bing Han; Gopalakrishnan Srinivasan; Kaushik Roy