
Publication


Featured research published by Mrigank Sharad.


IEEE Transactions on Nanotechnology | 2012

Spin-Based Neuron Model With Domain-Wall Magnets as Synapse

Mrigank Sharad; Charles Augustine; Georgios Panagopoulos; Kaushik Roy

We present an artificial neural network design using spin devices that achieves ultralow-voltage operation, low power consumption, high speed, and high integration density. We employ spin-torque-switched nanomagnets to model neurons and domain-wall magnets as compact, programmable synapses. The spin-based neuron-synapse units operate locally at an ultralow supply voltage of 30 mV, resulting in low computation power. CMOS-based interneuron communication is employed to realize network-level functionality. We corroborate circuit operation with physics-based models developed for the spin devices. Simulation results for character recognition as a benchmark application show 95% lower power consumption as compared to a 45-nm CMOS design.
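At the behavioral level, the neuron-synapse unit described above reduces to an analog current summation (DWM synapses scaling input currents) followed by a hard threshold (the nanomagnet switching). A minimal sketch of that primitive, with illustrative values not taken from the paper:

```python
import numpy as np

def spin_neuron_fire(inputs, weights, threshold):
    """Behavioral model of a spin-torque neuron: domain-wall-magnet
    synapses scale the input currents, and the nanomagnet switches
    (fires) when the net current exceeds its switching threshold."""
    net_current = np.dot(weights, inputs)  # analog current-mode summation
    return 1 if net_current >= threshold else 0

# Hypothetical normalized currents and synapse weights, for illustration only
x = np.array([1.0, 0.0, 1.0, 1.0])
w = np.array([0.5, -0.3, 0.8, 0.2])
print(spin_neuron_fire(x, w, threshold=1.0))  # -> 1 (net current 1.5)
```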


Design, Automation and Test in Europe | 2013

DWM-TAPESTRI - an energy efficient all-spin cache using domain wall shift based writes

Rangharajan Venkatesan; Mrigank Sharad; Kaushik Roy; Anand Raghunathan

Spin-based memories are promising candidates for future on-chip memories due to their high density, non-volatility, and very low leakage. However, the high energy and latency of write operations in these memories is a major challenge. In this work, we explore a new approach - shift based write - that offers a fast and energy-efficient alternative to performing writes in spin-based memories. We propose DWM-TAPESTRI, a new all-spin cache design that utilizes Domain Wall Memory (DWM) with shift based writes at all levels of the cache hierarchy. The proposed write scheme enables DWM to be used, for the first time, in L1 caches and in tag arrays, where the inefficiency of writes in spin memories has traditionally precluded their use. At the circuit level, we propose bit-cell designs utilizing shift-based writes, which are tailored to the differing requirements of different levels in the cache hierarchy. We also propose pre-shifting as an architectural technique to hide the latency of shift operations that is inherent to DWM. We performed a systematic device-circuit-architecture evaluation of the proposed design. Over a wide range of SPEC 2006 benchmarks, DWM-TAPESTRI achieves 8.2X improvement in energy and 4X improvement in area, with virtually identical performance, compared to an iso-capacity SRAM cache. Compared to an iso-capacity STT-MRAM cache, the proposed design achieves around 1.6X improvement in both area and energy under iso-performance conditions.
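Behaviorally, a DWM tape is a magnetic shift register: a shift-based write injects a new domain at one end of the tape and pushes the existing domains along, and accessing an interior bit requires shifting it under a fixed access port first, which is the latency that pre-shifting hides. A toy model, with all structural details (tape length, port position) assumed for illustration:

```python
from collections import deque

class DWMTape:
    """Toy model of a domain-wall-memory tape with a single
    fixed access port and shift-based writes."""
    def __init__(self, length, port=0):
        self.domains = deque([0] * length, maxlen=length)
        self.port = port  # fixed read/write head position

    def shift(self, steps=1):
        """Shift the tape so a different domain sits under the port."""
        self.domains.rotate(steps)

    def shift_write(self, bit):
        """Shift-based write: inject a new domain at the head end;
        existing domains shift along and the tail domain falls off."""
        self.domains.appendleft(bit)

    def read(self):
        return self.domains[self.port]

tape = DWMTape(8)
tape.shift_write(1)
print(tape.read())  # -> 1
```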


International Symposium on Low Power Electronics and Design | 2014

SPINDLE: SPINtronic deep learning engine for large-scale neuromorphic computing

Shankar Ganesh Ramasubramanian; Rangharajan Venkatesan; Mrigank Sharad; Kaushik Roy; Anand Raghunathan

Deep Learning Networks (DLNs) are bio-inspired large-scale neural networks that are widely used in emerging vision, analytics, and search applications. The high computation and storage requirements of DLNs have led to the exploration of various avenues for their efficient realization. Concurrently, the ability of emerging post-CMOS devices to efficiently mimic neurons and synapses has led to great interest in their use for neuromorphic computing. We describe SPINDLE, a programmable processor for deep learning based on spintronic devices. SPINDLE exploits the unique ability of spintronic devices to realize highly dense and energy-efficient neurons and memory, which form the fundamental building blocks of DLNs. SPINDLE consists of a three-tier hierarchy of processing elements to capture the nested parallelism present in DLNs, and a two-level memory hierarchy to facilitate data reuse. It can be programmed to execute DLNs with widely varying topologies for different applications. SPINDLE employs techniques to limit the overheads of spin-to-charge conversion, and utilizes output and weight quantization to enhance the efficiency of spin-neurons. We evaluate SPINDLE using a device-to-architecture modeling framework and a set of widely used DLN applications (handwriting recognition, face detection, and object recognition). Our results indicate that SPINDLE achieves 14.4X reduction in energy consumption and 20.4X reduction in EDP over the CMOS baseline under iso-area conditions.


Journal of Applied Physics | 2013

Spin-neurons: A possible path to energy-efficient neuromorphic computers

Mrigank Sharad; Deliang Fan; Kaushik Roy

Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale for applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nanoscale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin ba...


International Electron Devices Meeting | 2012

Boolean and non-Boolean computation with spin devices

Mrigank Sharad; Charles Augustine; Kaushik Roy

Recently, several device and circuit design techniques have been explored for applying nanomagnets and spin-torque devices, such as spin valves and domain-wall magnets, in computational hardware. However, most of these efforts have focused on digital logic, and their benefits over robust, high-performance CMOS remain debatable. Ultra-low-voltage, current-mode operation of magneto-metallic spin-torque devices can potentially be more suitable for non-Boolean logic such as neuromorphic computation, which involves analog processing. Device-circuit co-design for different classes of neuromorphic architectures, using spin-torque-based neuron models along with DWM or other memristive synapses, shows that spin-based neuromorphic designs can achieve 15X-100X lower computation energy for applications such as image processing, data conversion, cognitive computing, pattern matching, and programmable logic, as compared to state-of-the-art CMOS designs.


IEEE Transactions on Nanotechnology | 2014

Energy-Efficient Non-Boolean Computing With Spin Neurons and Resistive Memory

Mrigank Sharad; Deliang Fan; Kyle Aitken; Kaushik Roy

Emerging nonvolatile resistive memory technologies can be potentially suitable for computationally expensive analog pattern-matching tasks. However, the use of CMOS analog circuits with resistive crossbar memory (RCM) would result in large power consumption and poor scalability, thereby negating the benefits of RCM-based computation. We explore the potential of emerging spin-torque devices for RCM-based approximate computing circuits. Emerging spin-torque switching techniques may lead to nanoscale, current-mode spintronic switches that can be used for energy-efficient analog-mode data processing. We propose the use of such low-voltage, fast-switching, magneto-metallic “spin neurons” for ultralow power non-Boolean computing with RCM. We present the design of an analog associative memory for face recognition using RCM, where substituting conventional analog circuits with spin neurons can achieve ~100× lower power consumption.
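The core analog operation an RCM performs is a matrix-vector multiplication: each column current is the sum of the input voltages weighted by the programmed conductances, I_j = Σ_i G_ij·V_i, and the spin neurons then threshold or compare the column currents. A NumPy sketch of that primitive, with hypothetical conductance values (the paper's actual device parameters are not reproduced here):

```python
import numpy as np

def crossbar_column_currents(G, v):
    """Column currents of an ideal resistive crossbar:
    I_j = sum_i G[i, j] * v[i] (Kirchhoff's current law,
    assuming ideal wires and voltage sources)."""
    return G.T @ v

# Hypothetical 3x2 crossbar: two stored patterns as conductance columns
G = np.array([[1.0, 0.2],
              [0.8, 0.1],
              [0.9, 0.3]])
v = np.array([0.5, 0.5, 0.5])          # input voltage vector
I = crossbar_column_currents(G, v)     # analog dot products per column
best_match = int(np.argmax(I))         # column best matching the input
print(best_match)  # -> 0
```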


IEEE Transactions on Nanotechnology | 2014

Design and Synthesis of Ultralow Energy Spin-Memristor Threshold Logic

Deliang Fan; Mrigank Sharad; Kaushik Roy

A threshold logic gate performs a weighted sum of multiple inputs and compares the sum with a threshold. We propose spin-memristor threshold logic (SMTL) gates, which employ a memristive crossbar array to perform current-mode summation of binary inputs, whereas low-voltage, fast-switching spintronic threshold devices carry out the threshold operation in an energy-efficient manner. Field-programmable SMTL gate arrays can operate at a small terminal voltage of ~50 mV, resulting in ultralow power consumption in gates as well as programmable interconnect networks. We evaluate the performance of SMTL using threshold logic synthesis. Results for common benchmarks show that SMTL-based programmable logic hardware can be more than 100× more energy efficient than state-of-the-art CMOS field-programmable gate arrays.
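A threshold logic gate with binary inputs x_i, weights w_i, and threshold T outputs 1 iff Σ w_i·x_i ≥ T; for example, a 3-input majority function is a single threshold gate with unit weights and T = 2. A behavioral sketch (the weighted sum maps to the memristive crossbar summation, the comparison to the spintronic threshold device):

```python
def threshold_gate(x, w, T):
    """Threshold logic gate: fires iff the weighted sum of
    binary inputs meets or exceeds the threshold T."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= T)

# 3-input majority as a single threshold gate: w = (1, 1, 1), T = 2
for x in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(x, threshold_gate(x, (1, 1, 1), 2))
# (0, 0, 1) -> 0, (0, 1, 1) -> 1, (1, 1, 1) -> 1
```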


IEEE Transactions on Nanotechnology | 2015

Injection-Locked Spin Hall-Induced Coupled-Oscillators for Energy Efficient Associative Computing

Deliang Fan; Supriyo Maji; Karthik Yogendra; Mrigank Sharad; Kaushik Roy

In this paper, we show that the dynamics of injection-locked Spin Hall Effect Spin-Torque Oscillator (SHE-STO) cluster can be exploited as a robust primitive computational operator for non-Boolean associative computing. A cluster of SHE-STOs can be locked to a common frequency and phase with an injected ac current signal. DC input to each STO from external stimuli can conditionally unlock some of them. Based on the input dc signal, the degree of synchronization of SHE-STO cluster is detected by CMOS interface circuitry. The degree of synchronization can be used for associative computing/matching. We present a numerical simulation model of SHE-STO devices based on Landau-Lifshitz-Gilbert equation with spin-transfer torque term and Spin Hall Effect. The model is then used to analyze the frequency and phase locking properties of injection-locked SHE-STO cluster. Results show that associative computing based on the injection locked SHE-STO cluster can be energy efficient and relatively immune to device parameter variations and thermal noise.
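For reference, the standard macrospin form of the Landau-Lifshitz-Gilbert equation with a Slonczewski spin-transfer-torque term is shown below; the paper's simulation model additionally includes the Spin Hall Effect contribution and may be parameterized differently:

```latex
% LLG equation with spin-transfer torque (macrospin form).
% \hat{m}: free-layer magnetization unit vector; H_eff: effective field;
% \gamma: gyromagnetic ratio; \alpha: Gilbert damping; J_s: spin current
% density; M_s: saturation magnetization; t_F: free-layer thickness;
% \hat{m}_p: spin-polarization direction.
\frac{d\hat{m}}{dt} =
  -\gamma\,\hat{m}\times\vec{H}_{\mathrm{eff}}
  + \alpha\,\hat{m}\times\frac{d\hat{m}}{dt}
  + \frac{\gamma\hbar J_s}{2 e M_s t_F}\,\hat{m}\times\left(\hat{m}\times\hat{m}_p\right)
```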


International Symposium on Low Power Electronics and Design | 2013

Multi-level magnetic RAM using domain wall shift for energy-efficient, high-density caches

Mrigank Sharad; Rangharajan Venkatesan; Anand Raghunathan; Kaushik Roy

Spin-based devices promise to revolutionize computing platforms by enabling high-density, low-leakage memories. However, stringent tradeoffs between critical design metrics such as read and write stability, reliability, density, performance, and energy efficiency limit the efficiency of conventional spin-transfer-torque devices and bit-cells. We propose a new multi-level cell design with domain wall magnets (DWM-MLC) that significantly improves upon the read/write performance, density, and write energy consumption of conventional spin memories. The fundamental design tradeoff between read and write operations is addressed in DWM-MLC by decoupling the read and write paths, thereby allowing separate optimization for reads and writes. A thicker tunneling oxide is used for higher readability, while a domain-wall-shift (DWS) based write mechanism is used to improve write speed and energy. The storage of multiple bits per cell and the ability to use smaller transistors lead to a net improvement in density compared to conventional spin memories. We perform a systematic evaluation of DWM-MLC at different levels of design abstraction. At the circuit level, DWM-MLC achieves 2X improvement in density, read energy, and read latency over its 1-bit counterpart. We evaluate an “all-spin” cache hierarchy that uses DWM-MLC for both L1 and L2, resulting in 4.4X (1.7X) area improvement and 10X (2X) energy reduction at iso-performance over SRAM (STT-MRAM).


Device Research Conference | 2012

Spin neuron for ultra low power computational hardware

Mrigank Sharad; Georgios Panagopoulos; Kaushik Roy

We propose a device model for a neuron based on a lateral spin valve (LSV) that consists of multiple input magnets connected to an output magnet using metal channels. The low-resistance, magneto-metallic neuron can operate at a small terminal voltage of ~20 mV while performing computation upon current-mode inputs. The spin-based neurons can be integrated with CMOS to realize ultra-low-power data processing hardware, based on neural networks (NN), for different classes of applications such as cognitive computing, programmable Boolean/non-Boolean logic, and analog and digital signal processing [1, 2]. In this work we present analog image acquisition and processing as an example. Results based on a device-circuit co-simulation framework show that a spin-CMOS hybrid design employing the proposed neuron can achieve ~100x lower energy consumption per computation frame, as compared to state-of-the-art CMOS designs employing conventional analog circuits [13].

Collaboration


Mrigank Sharad's principal co-authors and their affiliations.

Top Co-Authors
Deliang Fan

University of Central Florida


Anand Kumar Mukhopadhyay

Indian Institute of Technology Kharagpur


Pradip Mandal

Indian Institute of Technology Kharagpur
