Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Aayush Ankit is active.

Publication


Featured research published by Aayush Ankit.


Design Automation Conference | 2017

RESPARC: A Reconfigurable and Energy-Efficient Architecture with Memristive Crossbars for Deep Spiking Neural Networks

Aayush Ankit; Abhronil Sengupta; Priyadarshini Panda; Kaushik Roy

Neuromorphic computing using post-CMOS technologies is gaining immense popularity due to its promising ability to address the memory and power bottlenecks of von Neumann computing systems. In this paper, we propose RESPARC, a reconfigurable and energy-efficient architecture built on Memristive Crossbar Arrays (MCAs) for deep Spiking Neural Networks (SNNs). Prior works focused primarily on device- and circuit-level implementations of SNNs on crossbars; RESPARC advances this by proposing a complete system for SNN acceleration and its subsequent analysis. RESPARC utilizes the energy efficiency of MCAs for inner-product computation and realizes a hierarchical reconfigurable design that incorporates the data-flow patterns of an SNN in a scalable fashion. We evaluate the proposed architecture on SNNs ranging in complexity from 2k–230k neurons and 1.2M–5.5M synapses. Simulation results on these networks show that, compared to a baseline digital CMOS architecture, RESPARC achieves 500× (15×) gains in energy efficiency at 300× (60×) higher throughput for multi-layer perceptrons (deep convolutional networks). Furthermore, RESPARC is a technology-aware architecture that maps a given SNN topology to the most optimized MCA size for the given crossbar technology.
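As a hedged illustration of the inner-product operation described above, here is a minimal NumPy sketch of an idealized memristive crossbar: row voltages encode inputs, programmed conductances encode weights, and each column current is a dot product. The function name and values are ours for illustration, not from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): the analog inner product a
# memristive crossbar array (MCA) computes. Inputs are applied as voltages
# on the rows; each cross-point conductance G[i, j] encodes a synaptic
# weight; by Ohm's and Kirchhoff's laws the current collected on column j
# is the dot product sum_i V[i] * G[i, j].

def crossbar_inner_product(voltages: np.ndarray, conductances: np.ndarray) -> np.ndarray:
    """One idealized MCA evaluation: column currents = V . G."""
    return voltages @ conductances

# Example: 4 input lines driving 3 output neurons.
rng = np.random.default_rng(0)
v = rng.choice([0.0, 1.0], size=4)       # binary spike inputs as row voltages
g = rng.uniform(0.0, 1.0, size=(4, 3))   # programmed memristor conductances
print(crossbar_inner_product(v, g))      # per-column currents (inner products)
```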


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2017

FALCON: Feature Driven Selective Classification for Energy-Efficient Image Recognition

Priyadarshini Panda; Aayush Ankit; Parami Wijesinghe; Kaushik Roy

Machine-learning algorithms have shown outstanding image recognition/classification performance for computer vision applications. However, the compute and energy requirements for implementing such classifier models for large-scale problems are quite high. In this paper, we propose feature-driven selective classification (FALCON), inspired by the biological visual attention mechanism in the brain, to optimize the energy efficiency of machine-learning classifiers. We use the consensus in the characteristic features (color/texture) across images in a dataset to decompose the original classification problem and construct a tree of classifiers (nodes) with a generic-to-specific transition in the classification hierarchy. The initial nodes of the tree separate the instances based on feature information and selectively enable the latter nodes to perform object-specific classification. The proposed methodology allows selective activation of only those branches and nodes of the classification tree that are relevant to the input while keeping the remaining nodes idle. Additionally, we propose a programmable and scalable neuromorphic engine (NeuE) that utilizes arrays of specialized neural computational elements to execute the FALCON-based classifier models for diverse datasets. The structure of FALCON facilitates the reuse of nodes while scaling up from small classification problems to larger ones, allowing us to construct classifier implementations that are significantly more efficient. We evaluate our approach for a 12-object classification task on the Caltech101 dataset and a ten-object task on the CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45-nm technology. Our results demonstrate up to 3.66× improvement in energy efficiency for no loss in output quality, and even higher improvements of up to 5.91× with 3.9% accuracy loss, compared to an optimized baseline network. In addition, FALCON shows an improvement in training time of up to 1.96× as compared to the traditional classification approach.
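The tree-of-classifiers idea lends itself to a short sketch. Below is a minimal, hypothetical two-level selective classifier: a cheap generic node routes each input to one specialist branch so the other branches stay idle. The routing test and class groups are invented placeholders, not FALCON's actual nodes.

```python
# Illustrative sketch of feature-driven selective classification: a root
# node routes each input to exactly one specialist branch, so only the
# classifiers on that path are evaluated. The feature test and the class
# groupings below are invented placeholders, not FALCON's actual nodes.

from typing import Callable, Dict

class SelectiveClassifierTree:
    def __init__(self, feature_router: Callable[[dict], str],
                 specialists: Dict[str, Callable[[dict], str]]):
        self.feature_router = feature_router  # generic node: cheap feature test
        self.specialists = specialists        # specific nodes: per-group classifiers

    def predict(self, x: dict) -> str:
        branch = self.feature_router(x)       # enable only the relevant branch
        return self.specialists[branch](x)    # remaining nodes stay idle

# Toy usage: route by dominant color, then classify within that group.
tree = SelectiveClassifierTree(
    feature_router=lambda img: "warm" if img["red"] > img["blue"] else "cool",
    specialists={
        "warm": lambda img: "sunflower" if img["red"] > 0.8 else "brick",
        "cool": lambda img: "ocean" if img["blue"] > 0.8 else "jeans",
    },
)
print(tree.predict({"red": 0.9, "blue": 0.2}))   # -> "sunflower"
```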


ACM Journal on Emerging Technologies in Computing Systems | 2018

Energy-Efficient Neural Computing with Approximate Multipliers

Syed Shakib Sarwar; Swagath Venkataramani; Aayush Ankit; Anand Raghunathan; Kaushik Roy

Neural networks, with their remarkable ability to derive meaning from a large volume of complicated or imprecise data, can be used to extract patterns and detect trends that are too complex for the von Neumann computing paradigm. Their considerable computational requirements stretch the capabilities of even modern computing platforms. We propose an approximate multiplier that exploits the inherent application resilience to error and utilizes the notion of computation sharing to achieve improved energy consumption for neural networks. We also propose a Multiplier-less Artificial Neuron (MAN), which is even more compact and energy efficient, along with a network retraining methodology to recover some of the accuracy loss due to the use of these approximate multipliers. We evaluated the proposed algorithm/design on several recognition applications. The results show that we achieve ∼33%, ∼32%, and ∼25% reduction in power consumption and ∼33%, ∼34%, and ∼27% reduction in area, respectively, for 12-, 8-, and 4-bit MAN, with a maximum ∼2.4% loss in accuracy compared to a conventional neuron implementation of equivalent bit precision. These comparisons were performed under iso-speed conditions.
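As a rough illustration of why neural networks tolerate approximate arithmetic, here is a generic truncation-based approximate multiplier. This is a stand-in scheme for illustration only, not the computation-sharing design the paper proposes.

```python
# Hedged illustration: approximate fixed-point multiplication by dropping
# low-order bits of one operand. This is a generic approximation scheme,
# not the paper's computation-sharing multiplier; it only shows why
# error-resilient workloads like neural networks tolerate such hardware.

def approx_multiply(a: int, b: int, dropped_bits: int = 4) -> int:
    """Multiply after zeroing the low bits of one operand."""
    b_trunc = (b >> dropped_bits) << dropped_bits   # discard low-order bits
    return a * b_trunc

exact = 187 * 203
approx = approx_multiply(187, 203)
print(exact, approx, f"{100 * abs(exact - approx) / exact:.2f}% error")
```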


International Symposium on Neural Networks | 2017

Performance analysis and benchmarking of all-spin spiking neural networks (Special session paper)

Abhronil Sengupta; Aayush Ankit; Kaushik Roy

Spiking Neural Network based brain-inspired computing paradigms are becoming increasingly popular tools for various cognitive tasks. The sparse, event-driven processing capability enabled by such networks is potentially appealing for the implementation of low-power neural computing platforms. However, the parallel and memory-intensive computations involved in such algorithms are in complete contrast to the sequential fetch, decode, execute cycles of conventional von Neumann processors. Recent proposals have investigated the design of spintronic “in-memory” crossbar based computing architectures driving “spin neurons” that can potentially alleviate the memory-access bottleneck of CMOS-based systems and simultaneously offer the prospect of low-power inner-product computations. In this article, we perform a rigorous system-level simulation study of such All-Spin Spiking Neural Networks on a benchmark suite of six recognition problems ranging in network complexity from 10k–7.4M synapses and 195–9.2k neurons. System-level simulations indicate that the proposed spintronic architecture can potentially achieve ∼1292× energy efficiency and ∼235× speedup on average over the benchmark suite in comparison to an optimized CMOS implementation at the 45-nm technology node.
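For readers unfamiliar with the event-driven model being benchmarked, a minimal leaky integrate-and-fire (LIF) layer step is sketched below. The LIF dynamics and constants are textbook defaults, not the paper's device-level spin-neuron model.

```python
import numpy as np

# Hedged sketch of the event-driven processing the abstract refers to: one
# time step of a leaky integrate-and-fire (LIF) layer whose inputs arrive
# through a crossbar-style inner product. All constants are standard
# textbook choices, not the paper's spin-device parameters.

def lif_step(v_mem, spikes_in, weights, leak=0.9, v_thresh=1.0):
    """Integrate weighted input spikes, leak, fire, and reset."""
    v_mem = leak * v_mem + spikes_in @ weights    # crossbar-style accumulation
    spikes_out = (v_mem >= v_thresh).astype(float)
    v_mem = np.where(spikes_out > 0, 0.0, v_mem)  # reset neurons that fired
    return v_mem, spikes_out

rng = np.random.default_rng(1)
v = np.zeros(3)
w = rng.uniform(0.0, 0.5, size=(4, 3))
for t in range(5):                                # drive with random input spikes
    v, out = lif_step(v, rng.choice([0.0, 1.0], size=4), w)
    print(f"t={t} spikes={out}")
```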


Archive | 2017

Efficient Neuromorphic Systems and Emerging Technologies: Prospects and Perspectives

Abhronil Sengupta; Aayush Ankit; Kaushik Roy



IEEE Transactions on Multi-Scale Computing Systems | 2017

Cross-Layer Design Exploration for Energy-Quality Tradeoffs in Spiking and Non-Spiking Deep Artificial Neural Networks

Bing Han; Aayush Ankit; Abhronil Sengupta; Kaushik Roy



arXiv: Computer Vision and Pattern Recognition | 2017

Incremental Learning in Deep Convolutional Neural Networks Using Partial Network Sharing

Syed Shakib Sarwar; Aayush Ankit; Kaushik Roy



arXiv: Emerging Technologies | 2018

An All-Memristor Deep Spiking Neural Computing System: A Step Toward Realizing the Low-Power Stochastic Brain

Parami Wijesinghe; Aayush Ankit; Abhronil Sengupta; Kaushik Roy



IEEE Transactions on Computers | 2018

SPARE: Spiking Neural Network Acceleration Using ROM-Embedded RAMs as In-Memory-Computation Primitives

Amogh Agrawal; Aayush Ankit; Kaushik Roy



International Conference on Computer-Aided Design | 2017

TraNNsformer: Neural Network Transformation for memristive crossbar based neuromorphic system design

Aayush Ankit; Abhronil Sengupta; Kaushik Roy


Collaboration


Dive into Aayush Ankit's collaborations.
