Abhronil Sengupta
Purdue University
Publications
Featured research published by Abhronil Sengupta.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2016
Xuanyao Fong; Yusung Kim; Karthik Yogendra; Deliang Fan; Abhronil Sengupta; Anand Raghunathan; Kaushik Roy
As CMOS technology begins to face significant scaling challenges, considerable research efforts are being directed to investigate alternative device technologies that can serve as a replacement for CMOS. Spintronic devices, which utilize the spin of electrons as the state variable for computation, have recently emerged as one of the leading candidates for post-CMOS technology. Recent experiments have shown that a nano-magnet can be switched by a spin-polarized current and this has led to a number of novel device proposals over the past few years. In this paper, we provide a review of different mechanisms that manipulate the state of a nano-magnet using current-induced spin-transfer torque and demonstrate how such mechanisms have been engineered to develop device structures for energy-efficient on-chip memory and logic.
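As background for the switching mechanisms surveyed here, the current-induced dynamics of a free-layer nano-magnet are commonly modeled with a macrospin Landau-Lifshitz-Gilbert equation augmented by a Slonczewski spin-transfer torque term. The expression below is that standard textbook form, not the specific model of the paper; sign conventions and prefactors vary across references.

```latex
\frac{d\hat{\mathbf{m}}}{dt} =
  -\gamma\,\hat{\mathbf{m}} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\,\hat{\mathbf{m}} \times \frac{d\hat{\mathbf{m}}}{dt}
  + \frac{\hbar\,\gamma\,P\,J}{2\,e\,\mu_0\,M_s\,t_F}\,
    \hat{\mathbf{m}} \times \left(\hat{\mathbf{m}} \times \hat{\mathbf{m}}_p\right)
```

Here m̂ is the free-layer magnetization, H_eff the effective field, α the Gilbert damping, P the spin polarization, J the current density, M_s the saturation magnetization, t_F the free-layer thickness, and m̂_p the polarization direction of the injected spins.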
Applied Physics Letters | 2015
Abhronil Sengupta; Zubair Al Azim; Xuanyao Fong; Kaushik Roy
Nanoelectronic devices that mimic the functionality of synapses are a crucial requirement for performing cortical simulations of the brain. In this work, we propose a ferromagnet-heavy metal heterostructure that employs spin-orbit torque to implement spike-timing-dependent plasticity. The proposed device offers the advantage of decoupled spike transmission and programming current paths, thereby leading to reliable operation during online learning. A possible arrangement of such devices in a crosspoint architecture can pave the way for ultra-dense neural networks. Simulation studies indicate that the device can potentially achieve picojoule-level energy consumption (at most 2 pJ per synaptic event), which is comparable to the energy consumption for synaptic events in biological synapses.
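As a rough illustration of the decoupled-path idea (not the authors' simulation framework), the sketch below treats the synapse as a conductance updated by an exponential spike-timing rule applied through the separate programming path; the device constants and the timing-to-update mapping are hypothetical placeholders.

```python
import numpy as np

# Hypothetical device constants (illustrative only, not from the paper)
G_MIN, G_MAX = 1e-6, 1e-4      # conductance bounds of the synapse (S)
A_PLUS, A_MINUS = 0.05, 0.04   # STDP learning rates
TAU = 20e-3                    # STDP time constant (s)

def stdp_conductance_update(g, t_pre, t_post):
    """Update the synaptic conductance from one pre/post spike pair.

    The programming current flows through the heavy-metal layer
    (separate from the spike-transmission path), so the update can be
    applied without disturbing spike transmission during learning.
    """
    dt = t_post - t_pre
    if dt >= 0:      # pre before post -> potentiation
        dg = A_PLUS * np.exp(-dt / TAU) * (G_MAX - g)
    else:            # post before pre -> depression
        dg = -A_MINUS * np.exp(dt / TAU) * (g - G_MIN)
    return np.clip(g + dg, G_MIN, G_MAX)

# Example: potentiate a synapse when the pre-spike leads the post-spike by 5 ms
g = 5e-5
g = stdp_conductance_update(g, t_pre=0.000, t_post=0.005)
print(f"updated conductance: {g:.3e} S")
```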
Applied Physics Letters | 2015
Abhronil Sengupta; Sri Harsha Choday; Yusung Kim; Kaushik Roy
A device based on current-induced spin-orbit torque (SOT) that functions as an electronic neuron is proposed in this work. The SOT device implements an artificial neuron's thresholding (transfer) function. In the first step of a two-step switching scheme, a charge current places the magnetization of a nano-magnet along the hard axis, i.e., an unstable point for the magnet. In the second step, the SOT device (neuron) receives a current (from the synapses) which moves the magnetization from the unstable point to one of the two stable states. The polarity of the synaptic current encodes the excitatory or inhibitory nature of the neuron input and determines the final orientation of the magnetization. A resistive crossbar array, functioning as synapses, generates a bipolar current that is a weighted sum of the inputs. The simulation of a two-layer feed-forward artificial neural network based on the SOT electronic neuron shows that it consumes ∼3× lower power than a 45 nm digital CMOS implementation, while reaching ∼80% accuracy in the classification of 100 images of handwritten digits from the MNIST dataset.
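A minimal behavioral sketch of the crossbar weighted sum feeding the thresholding neuron is given below; the layer sizes, read voltage, unit conductance, and random weights are assumptions for illustration, not the paper's calibrated device model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sot_neuron(i_syn):
    """Step 2 of the two-step scheme: after a charge-current pulse has placed
    the magnet along its hard axis, the sign of the net synaptic current
    decides which of the two stable easy-axis states it settles into."""
    return np.where(i_syn >= 0.0, 1.0, -1.0)

def crossbar_current(v_in, weights, v_read=0.1, g_unit=1e-5):
    """Bipolar weighted sum: signed weights realized as differential
    conductance pairs, producing one net current per crossbar column.
    v_read and g_unit are assumed values."""
    g_pos = np.clip(weights, 0, None) * g_unit
    g_neg = np.clip(-weights, 0, None) * g_unit
    return v_read * (v_in @ g_pos - v_in @ g_neg)

# Toy two-layer feed-forward pass on a random 28x28 "image"
w1 = rng.normal(size=(784, 64))
w2 = rng.normal(size=(64, 10))
x = rng.random(784)

h = sot_neuron(crossbar_current(x, w1))
y = crossbar_current(h, w2)
print("predicted class:", int(np.argmax(y)))
```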
Scientific Reports | 2016
Gopalakrishnan Srinivasan; Abhronil Sengupta; Kaushik Roy
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm for carrying out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented in standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term/short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
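A toy rendering of the stochastic-binary-synapse idea follows; the switching probabilities, time constant, and the long-term/short-term transfer rule are illustrative assumptions rather than the device-calibrated values of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
TAU = 20.0                 # correlation time constant (ms), assumed
P_POT, P_DEP = 0.1, 0.05   # peak switching probabilities, assumed

def stochastic_synapse_update(state, t_pre, t_post):
    """Binary MTJ synapse: parallel (1) / anti-parallel (0) conductance state.
    The programming pulse is sized so that the switching probability decays
    with the pre/post spike-timing difference."""
    dt = t_post - t_pre
    if dt >= 0:
        if rng.random() < P_POT * np.exp(-dt / TAU):
            state = 1      # potentiate: switch to the low-resistance state
    else:
        if rng.random() < P_DEP * np.exp(dt / TAU):
            state = 0      # depress: switch to the high-resistance state
    return state

# Long-term/short-term pair: the short-term synapse switches readily, and only
# consistently reinforced bits are copied into the long-term synapse
# (the transfer rule below is an assumed illustration).
short_term, long_term = 0, 0
for trial in range(50):
    short_term = stochastic_synapse_update(short_term, t_pre=0.0, t_post=4.0)
    if short_term == 1 and rng.random() < 0.2:
        long_term = 1
print("short-term:", short_term, "long-term:", long_term)
```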
Scientific Reports | 2016
Abhronil Sengupta; Priyadarshini Panda; Parami Wijesinghe; Yusung Kim; Kaushik Roy
Brain-inspired computing architectures attempt to mimic the computations performed in the neurons and the synapses of the human brain in order to achieve its efficiency in learning and cognitive tasks. In this work, we demonstrate the mapping of the probabilistic spiking nature of pyramidal neurons in the cortex to the stochastic switching behavior of a Magnetic Tunnel Junction in the presence of thermal noise. We present results to illustrate the efficiency of neuromorphic systems based on such probabilistic neurons for pattern recognition tasks in the presence of lateral inhibition and homeostasis. Such stochastic MTJ neurons can also potentially provide a direct mapping to the probabilistic computing elements in Belief Networks for performing regenerative tasks.
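A behavioral sketch of such a probabilistic neuron is shown below: firing probability follows a sigmoid of the input current, the shape typically obtained for thermally assisted MTJ switching. The current scale, steepness, and the homeostasis rule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
I0 = 50e-6       # current at 50% switching probability (A), assumed
BETA = 2e5       # steepness of the switching curve (1/A), assumed

def mtj_neuron_spike(i_in, threshold_shift=0.0):
    """Thermal noise makes the MTJ switch probabilistically; the switching
    probability vs. input current is approximated by a sigmoid."""
    p = 1.0 / (1.0 + np.exp(-BETA * (i_in - I0 - threshold_shift)))
    return rng.random() < p

# Homeostasis: neurons that fire too often raise their effective threshold;
# lateral inhibition lets only the most strongly driven neuron spike.
shifts = np.zeros(4)
rates = np.zeros(4)
for t in range(1000):
    currents = rng.normal(I0, 20e-6, size=4)
    spikes = np.array([mtj_neuron_spike(i, s) for i, s in zip(currents, shifts)])
    if spikes.any():
        winner = int(np.argmax(currents * spikes))
        spikes = np.zeros(4, dtype=bool)
        spikes[winner] = True
    rates += spikes
    shifts += 1e-7 * (spikes - 0.25)   # nudge thresholds toward a 25% target rate
print("firing counts:", rates)
```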
IEEE Transactions on Biomedical Circuits and Systems | 2016
Abhronil Sengupta; Yong Shim; Kaushik Roy
Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused on mimicking either the neuron or the synapse functionality alone. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to the synapse and neuron functionalities in the brain, while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of the low-resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit- and system-level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ~400× in comparison to a corresponding digital/analog CMOS neuron implementation.
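A simple current-mode sketch of the all-spin idea: each column of synapses sums excitatory and inhibitory read currents into a low-resistance magneto-metallic neuron that switches once a critical current is crossed. The read voltage, conductance ranges, and critical current below are placeholders, not the calibrated device parameters.

```python
import numpy as np

V_READ = 10e-3      # read voltage across the synapse array (V), assumed
I_CRIT = 20e-6      # critical switching current of the neuron magnet (A), assumed

def all_spin_layer(inputs, conductances_pos, conductances_neg):
    """Each column sums excitatory and inhibitory currents from spintronic
    synapses into a magneto-metallic neuron; the neuron output is the
    resulting magnetization state (+1/-1)."""
    i_net = V_READ * (inputs @ conductances_pos - inputs @ conductances_neg)
    return np.where(i_net > I_CRIT, 1.0, -1.0), i_net

rng = np.random.default_rng(3)
x = rng.integers(0, 2, size=16).astype(float)     # binary input pattern
g_pos = rng.uniform(0, 1e-4, size=(16, 4))        # assumed conductance ranges
g_neg = rng.uniform(0, 1e-4, size=(16, 4))
out, i_net = all_spin_layer(x, g_pos, g_neg)
print("net neuron currents (uA):", np.round(i_net * 1e6, 1))
print("neuron states:", out)
```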
IEEE Transactions on Neural Networks | 2016
Deliang Fan; Mrigank Sharad; Abhronil Sengupta; Kaushik Roy
Hierarchical temporal memory (HTM) tries to mimic the computing in the cerebral neocortex. It identifies spatial and temporal patterns in the input for making inferences. This may require a large number of computationally expensive tasks, such as dot product evaluations. Nanodevices that can provide direct mappings for such primitives are of great interest. In this paper, we propose that the computing blocks for HTM can be mapped using low-voltage, magneto-metallic spin-neurons combined with an emerging resistive crossbar network, which involves a comprehensive design at the algorithm, architecture, circuit, and device levels. Simulation results show the possibility of more than 200× lower energy as compared with a 45-nm CMOS ASIC design.
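The core primitive being offloaded to the crossbar is the dot product between an input vector and many stored patterns, such as the overlap computation in HTM's spatial pooler. A minimal sketch follows; the pattern sizes, read voltage, and conductance scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stored binary patterns, one per crossbar column, encoded as conductances
patterns = rng.integers(0, 2, size=(64, 8)).astype(float)   # 64 inputs x 8 columns
g_unit = 1e-5                                                # conductance per "1" bit (S), assumed
conductances = patterns * g_unit

def crossbar_overlap(x, v_read=0.1):
    """One read cycle computes all column dot products in parallel:
    I_j = V_read * sum_i x_i * G_ij."""
    return v_read * (x @ conductances)

x = rng.integers(0, 2, size=64).astype(float)
overlap_currents = crossbar_overlap(x)
best_column = int(np.argmax(overlap_currents))
print("winning column:", best_column)
```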
IEEE Transactions on Electron Devices | 2016
Abhronil Sengupta; Maryam Parsa; Bing Han; Kaushik Roy
Deep spiking neural networks are becoming increasingly powerful tools for cognitive computing platforms. However, most of the existing studies on such computing models have been developed with limited insight into the underlying hardware implementation, resulting in area- and power-expensive designs. Although several neuromimetic devices emulating neural operations have been proposed recently, their functionality has been limited to very simple neural models that may prove to be inefficient at complex recognition tasks. In this paper, we venture into the relatively unexplored area of utilizing the inherent device stochasticity of such neuromimetic devices to model complex neural functionalities in a probabilistic framework in the time domain. We consider the implementation of a deep spiking neural network capable of performing high-accuracy and low-latency classification tasks, where the neural computing unit is enabled by the stochastic switching behavior of a magnetic tunnel junction. The simulation studies indicate an energy improvement of 20× over a baseline CMOS design in 45-nm technology.
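To make the time-domain probabilistic framework concrete, the sketch below runs one randomly weighted spiking layer for a number of timesteps, with each neuron firing according to an MTJ-like switching probability, and classifies by total spike count. The weights, probabilities, and sizes are arbitrary stand-ins, not the trained network of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 100                                  # number of timesteps, assumed
W = rng.normal(0, 0.1, size=(256, 10))   # random stand-in weights

def switching_probability(i_in, i0=1.0, beta=4.0):
    """Stochastic MTJ neuron: the probability of switching (spiking) in one
    timestep rises sigmoidally with the instantaneous input; i0 and beta are
    illustrative values."""
    return 1.0 / (1.0 + np.exp(-beta * (i_in - i0) / i0))

x = rng.random(256)                      # analog input, presented as spike trains
spike_counts = np.zeros(10)
for t in range(T):
    in_spikes = (rng.random(256) < x).astype(float)   # rate-coded input spikes
    i_syn = in_spikes @ W
    out_spikes = rng.random(10) < switching_probability(i_syn)
    spike_counts += out_spikes
print("predicted class:", int(np.argmax(spike_counts)))
```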
Physical Review Applied | 2016
Abhronil Sengupta; Aparajita Banerjee; Kaushik Roy
Over the past decade, Spiking Neural Networks (SNNs) have emerged as one of the popular architectures to emulate the brain. In SNNs, information is temporally encoded and communication between neurons is accomplished by means of spikes. In such networks, spike-timing-dependent plasticity mechanisms require the online programming of synapses based on the temporal information of spikes transmitted by spiking neurons. In this work, we propose a spintronic synapse with decoupled spike transmission and programming current paths. The spintronic synapse consists of a ferromagnet-heavy metal heterostructure where programming current through the heavy metal generates spin-orbit torque to modulate the device conductance. Low programming energy and fast programming times demonstrate the efficacy of the proposed device as a nanoelectronic synapse. We perform a simulation study based on an experimentally benchmarked device-simulation framework to demonstrate the interfacing of such spintronic synapses with CMOS neurons and learning circuits operating in the transistor sub-threshold region to form a network of spiking neurons that can be utilized for pattern recognition problems.
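A behavioral sketch of the programmable-conductance view of such a synapse is shown below: the programming current through the heavy metal shifts the conductance between its bounds while the spike-transmission (read) path is untouched. The conductance range and the charge-to-conductance coefficient are hypothetical.

```python
G_MIN, G_MAX = 2e-6, 2e-4     # conductance range of the synapse (S), assumed
K_PROG = 2e8                  # conductance change per unit programming charge (S/C), assumed

class SpintronicSynapse:
    """Analog synapse: spike transmission reads the conductance, while a
    separate programming current through the heavy metal shifts it."""
    def __init__(self, g=1e-5):
        self.g = g

    def read_current(self, v_spike):
        return v_spike * self.g              # spike-transmission path

    def program(self, i_prog, pulse_width):
        # Spin-orbit torque from the programming pulse nudges the conductance.
        self.g += K_PROG * i_prog * pulse_width
        self.g = min(max(self.g, G_MIN), G_MAX)

syn = SpintronicSynapse()
syn.program(i_prog=50e-6, pulse_width=1e-9)   # potentiate with a 1 ns pulse
print(f"conductance after programming: {syn.g:.3e} S")
```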
international symposium on neural networks | 2015
Abhronil Sengupta; Kaushik Roy
Neuromorphic computing attempts to emulate the remarkable efficiency of the human brain in vision, perception, and cognition-related tasks. Nanoscale devices that offer a direct mapping to the underlying neural computations have emerged as a promising candidate for such neuromorphic architectures. In this paper, a Magnetic Tunnel Junction (MTJ) is proposed to perform the thresholding operation of a biological neuron. A crossbar array consisting of programmable resistive synapses generates an excitatory/inhibitory charge current input to the neuron. The magnetization of the free layer of the MTJ is manipulated by the Spin-Transfer Torque generated by the net synaptic current. An algorithm, device, and circuit co-simulation framework suggests the possibility of ~1.63-1.79× power savings in comparison to a 45 nm digital CMOS implementation.
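A minimal sketch of the thresholding behavior described above: the MTJ free layer switches only when the net synaptic current from the crossbar exceeds a critical current, with the current's sign (excitatory vs. inhibitory) setting the switching direction, and the final state is read out as a resistance. All numbers are illustrative assumptions.

```python
I_C = 30e-6                  # critical spin-transfer-torque switching current (A), assumed
R_P, R_AP = 2e3, 4e3         # parallel / anti-parallel MTJ resistance (ohm), assumed

def mtj_threshold(i_net, state="AP"):
    """The free layer switches only if the net synaptic current exceeds the
    critical current; its sign determines the switching direction."""
    if i_net > I_C:
        state = "P"          # net excitatory current: switch to parallel
    elif i_net < -I_C:
        state = "AP"         # net inhibitory current: switch to anti-parallel
    return state

def read_resistance(state):
    return R_P if state == "P" else R_AP

state = mtj_threshold(i_net=45e-6)       # net excitatory current from the crossbar
print("MTJ state:", state, "| read resistance:", read_resistance(state), "ohm")
```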