Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sasikanth Avancha is active.

Publication


Featured research published by Sasikanth Avancha.


Proceedings of the first ACM workshop on Security and privacy in medical and home-care systems | 2009

A privacy framework for mobile health and home-care systems

David Kotz; Sasikanth Avancha; Amit S. Baxi

In this paper, we consider the challenge of preserving patient privacy in the context of mobile healthcare and home-care systems, that is, the use of mobile computing and communications technologies in the delivery of healthcare or the provision of at-home medical care and assisted living. This paper makes three primary contributions. First, we compare existing privacy frameworks, identifying key differences and shortcomings. Second, we identify a privacy framework for mobile healthcare and home-care systems. Third, we extract a set of privacy properties intended for use by those who design systems and applications for mobile healthcare and home-care systems, linking them back to the privacy principles. Finally, we list several important research questions that the community should address. We hope that the privacy framework in this paper can help to guide researchers and developers in this community, and that the privacy properties provide a concrete foundation for privacy-sensitive systems and applications for mobile healthcare and home-care systems.


international symposium on computer architecture | 2017

ScaleDeep: A Scalable Compute Architecture for Learning and Evaluating Deep Networks

Swagath Venkataramani; Ashish Ranjan; Subarno Banerjee; Dipankar Das; Sasikanth Avancha; Ashok Jagannathan; Ajaya V. Durg; Dheemanth Nagaraj; Bharat Kaul; Pradeep Dubey; Anand Raghunathan

Deep Neural Networks (DNNs) have demonstrated state-of-the-art performance on a broad range of tasks involving natural language, speech, image, and video processing, and are deployed in many real-world applications. However, DNNs impose significant computational challenges owing to the complexity of the networks and the amount of data they process, both of which are projected to grow in the future. To improve the efficiency of DNNs, we propose SCALEDEEP, a dense, scalable server architecture, whose processing, memory and interconnect subsystems are specialized to leverage the compute and communication characteristics of DNNs. While several DNN accelerator designs have been proposed in recent years, the key difference is that SCALEDEEP primarily targets DNN training, as opposed to only inference or evaluation. The key architectural features from which SCALEDEEP derives its efficiency are: (i) heterogeneous processing tiles and chips to match the wide diversity in computational characteristics (FLOPs and Bytes/FLOP ratio) that manifest at different levels of granularity in DNNs, (ii) a memory hierarchy and 3-tiered interconnect topology that is suited to the memory access and communication patterns in DNNs, (iii) a low-overhead synchronization mechanism based on hardware data-flow trackers, and (iv) methods to map DNNs to the proposed architecture that minimize data movement and improve core utilization through nested pipelining. We have developed a compiler to allow any DNN topology to be programmed onto SCALEDEEP, and a detailed architectural simulator to estimate performance and energy. The simulator incorporates timing and power models of SCALEDEEP's components based on synthesis to Intel's 14 nm technology. We evaluate an embodiment of SCALEDEEP with 7,032 processing tiles that operates at 600 MHz and has a peak performance of 680 TFLOPs (single precision) and 1.35 PFLOPs (half precision) at 1.4 kW. Across 11 state-of-the-art DNNs containing 0.65M-14.9M neurons and 6.8M-145.9M weights, including winners from 5 years of the ImageNet competition, SCALEDEEP demonstrates 6×-28× speedup at iso-power over the state-of-the-art performance on GPUs.
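The headline figures quoted in the abstract (680 TFLOPs single precision, 1.35 PFLOPs half precision, at 1.4 kW) imply a compute efficiency that can be checked with simple arithmetic. The sketch below is a back-of-envelope calculation using only those quoted numbers, not a figure reported by the paper itself:

```python
# Back-of-envelope efficiency check using the peak numbers quoted above.
peak_sp_tflops = 680.0   # single-precision peak, in TFLOPs
peak_hp_tflops = 1350.0  # half-precision peak (1.35 PFLOPs), in TFLOPs
power_w = 1400.0         # reported power draw, 1.4 kW in watts

# 1 TFLOP = 1000 GFLOPs, so TFLOPs * 1000 / watts gives GFLOPs per watt.
sp_gflops_per_watt = peak_sp_tflops * 1000.0 / power_w
hp_gflops_per_watt = peak_hp_tflops * 1000.0 / power_w

print(f"single precision: {sp_gflops_per_watt:.0f} GFLOPs/W")  # ~486 GFLOPs/W
print(f"half precision:   {hp_gflops_per_watt:.0f} GFLOPs/W")  # ~964 GFLOPs/W
```

These are peak (not sustained) efficiencies; achieved throughput on the 11 benchmark DNNs would depend on utilization.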


ACM Computing Surveys | 2012

Privacy in mobile technology for personal healthcare

Sasikanth Avancha; Amit S. Baxi; David Kotz


arXiv: Distributed, Parallel, and Cluster Computing | 2016

Distributed Deep Learning Using Synchronous Stochastic Gradient Descent

Dipankar Das; Sasikanth Avancha; Dheevatsa Mudigere; Karthikeyan Vaidyanathan; Srinivas Sridharan; Dhiraj D. Kalamkar; Bharat Kaul; Pradeep Dubey


international conference on learning representations | 2018

Mixed Precision Training of Convolutional Neural Networks using Integer Operations

Dipankar Das; Naveen Mellempudi; Dheevatsa Mudigere; Dhiraj D. Kalamkar; Sasikanth Avancha; Kunal Banerjee; Srinivas Sridharan; Karthik Vaidyanathan; Bharat Kaul; Evangelos Georganas; Alexander Heinecke; Pradeep Dubey; Jesus Corbal; Nikita Shustrov; Roman Dubtsov; Evarist Fomenko; Vadim O. Pirogov


adaptive agents and multi-agent systems | 2018

RAIL: Risk-Averse Imitation Learning

Anirban Santara; Abhishek Naik; Balaraman Ravindran; Dipankar Das; Dheevatsa Mudigere; Sasikanth Avancha; Bharat Kaul


arXiv: Distributed, Parallel, and Cluster Computing | 2018

On Scale-out Deep Learning Training for Cloud and HPC

Srinivas Sridharan; Karthikeyan Vaidyanathan; Dhiraj D. Kalamkar; Dipankar Das; Mikhail Smorkalov; Mikhail Shiryaev; Dheevatsa Mudigere; Naveen Mellempudi; Sasikanth Avancha; Bharat Kaul; Pradeep Dubey


arXiv: Learning | 2018

Hierarchical Block Sparse Neural Networks

Dharma Teja Vooturi; Dheevatsa Mudigere; Sasikanth Avancha


arXiv: Distributed, Parallel, and Cluster Computing | 2018

Anatomy of High-Performance Deep Learning Convolutions on SIMD Architectures

Evangelos Georganas; Sasikanth Avancha; Kunal Banerjee; Dhiraj D. Kalamkar; Greg Henry; Hans Pabst; Alexander Heinecke


Archive | 2018

APPARATUSES, METHODS, AND SYSTEMS FOR ACCESS SYNCHRONIZATION IN A SHARED MEMORY

Swagath Venkataramani; Dipankar Das; Sasikanth Avancha; Ashish Ranjan; Subarno Banerjee; Bharat Kaul; Anand Raghunathan

Collaboration


Dive into Sasikanth Avancha's collaborations.
