
Publication


Featured research published by Ajaya V. Durg.


international symposium on computer architecture | 2017

ScaleDeep: A Scalable Compute Architecture for Learning and Evaluating Deep Networks

Swagath Venkataramani; Ashish Ranjan; Subarno Banerjee; Dipankar Das; Sasikanth Avancha; Ashok Jagannathan; Ajaya V. Durg; Dheemanth Nagaraj; Bharat Kaul; Pradeep Dubey; Anand Raghunathan

Deep Neural Networks (DNNs) have demonstrated state-of-the-art performance on a broad range of tasks involving natural language, speech, image, and video processing, and are deployed in many real-world applications. However, DNNs impose significant computational challenges owing to the complexity of the networks and the amount of data they process, both of which are projected to grow in the future. To improve the efficiency of DNNs, we propose SCALEDEEP, a dense, scalable server architecture whose processing, memory, and interconnect subsystems are specialized to leverage the compute and communication characteristics of DNNs. While several DNN accelerator designs have been proposed in recent years, the key difference is that SCALEDEEP primarily targets DNN training, as opposed to only inference or evaluation.

The key architectural features from which SCALEDEEP derives its efficiency are: (i) heterogeneous processing tiles and chips to match the wide diversity in computational characteristics (FLOPs and Bytes/FLOP ratio) that manifest at different levels of granularity in DNNs, (ii) a memory hierarchy and 3-tiered interconnect topology suited to the memory access and communication patterns in DNNs, (iii) a low-overhead synchronization mechanism based on hardware data-flow trackers, and (iv) methods to map DNNs onto the proposed architecture that minimize data movement and improve core utilization through nested pipelining.

We have developed a compiler that allows any DNN topology to be programmed onto SCALEDEEP, and a detailed architectural simulator to estimate performance and energy. The simulator incorporates timing and power models of SCALEDEEP's components based on synthesis to Intel's 14nm technology. We evaluate an embodiment of SCALEDEEP with 7,032 processing tiles that operates at 600 MHz and has a peak performance of 680 TFLOPs (single precision) and 1.35 PFLOPs (half precision) at 1.4kW.
Across 11 state-of-the-art DNNs containing 0.65M-14.9M neurons and 6.8M-145.9M weights, including winners from five years of the ImageNet competition, SCALEDEEP demonstrates a 6×-28× speedup at iso-power over state-of-the-art performance on GPUs.
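As a back-of-the-envelope check on the figures quoted in the abstract, the stated single-precision peak of 680 TFLOPs across 7,032 tiles at 600 MHz implies roughly 160 FLOPs per tile per cycle. The sketch below derives that figure and the peak energy efficiency from the abstract's numbers only; the per-tile datapath width is an inference, not a value stated in the paper.

```python
# Back-of-the-envelope check of SCALEDEEP's stated peak throughput.
# All inputs are taken from the abstract; the per-tile FLOPs/cycle and
# GFLOPs/W figures are derived here, not quoted from the paper.
tiles = 7032                 # processing tiles in the evaluated embodiment
clock_hz = 600e6             # 600 MHz operating frequency
peak_sp_flops = 680e12       # 680 TFLOPs, single precision
power_w = 1.4e3              # 1.4 kW

# Peak FLOPs divided by aggregate cycles per second across all tiles.
flops_per_tile_per_cycle = peak_sp_flops / (tiles * clock_hz)

# Peak single-precision energy efficiency.
sp_efficiency_gflops_per_w = peak_sp_flops / power_w / 1e9

print(f"~{flops_per_tile_per_cycle:.0f} SP FLOPs per tile per cycle")
print(f"~{sp_efficiency_gflops_per_w:.0f} GFLOPs/W at peak (single precision)")
```

Consistency of these derived numbers with the quoted totals is a sanity check on the abstract's figures, not an independent measurement.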


Archive | 2009

Power conservation for mobile device displays

Bran Ferren; Prashant Gandhi; Ajaya V. Durg; Qingfeng Li; Lakshman Krishnamurthy


Archive | 1999

Efficient methodology for scaling and transferring images

Oleg Rashkovskiy; Ajaya V. Durg; William W. Macy


Archive | 1998

Global white point detection and white balance for color images

Ajaya V. Durg; Oleg Rashkovskiy


Archive | 1998

Reducing noise in an imaging system

Oleg Rashkovskiy; William W. Macy; Ajaya V. Durg


Archive | 1999

Method for reducing row noise from images

William W. Macy; Ajaya V. Durg


Archive | 1998

Method and apparatus for imaging processing

Ajaya V. Durg; Oleg Rashkovskiy


Archive | 1998

Selecting a routine based on processing power

Oleg Rashkovskiy; Ajaya V. Durg


Archive | 2016

Technologies for low-power standby display refresh

Vasudev Bibikar; Rajesh Poornachandran; Ajaya V. Durg; Arpit Shah; Anil K. Sabbavarapu; Nabil Kerkiz; Quang T. Le; Ryan Ritesh M. Pinto; Moorthy Rajesh; James A. Bish; Ranjani Sridharan


Archive | 2014

Selecting A Low Power State Based On Cache Flush Latency Determination

Sundar Ramani; Arvind Raman; Arvind Mandhani; Ashish V. Choubal; Kalyan Muthukumar; Ajaya V. Durg; Samudyatha Chakki
