Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Avinash Lingamneni is active.

Publication


Featured research published by Avinash Lingamneni.


Design, Automation and Test in Europe | 2011

Energy parsimonious circuit design through probabilistic pruning

Avinash Lingamneni; Christian Enz; Jean-Luc Nagel; Krishna V. Palem; Christian Piguet

Inexact circuits, or circuits in which the accuracy of the output can be traded for energy or delay savings, have been receiving increasing attention of late, due both to inevitable inaccuracies in designs as Moore's law approaches the low-nanometer range and to a concomitant growing desire for ultra-low-energy systems. In this paper, we present a novel design-level technique called probabilistic pruning to realize inexact circuits. Unlike previous techniques in the literature, which relied mostly on some form of scaling of operational parameters such as the supply voltage (Vdd) to achieve energy and accuracy tradeoffs, our technique prunes portions of the circuit that have a lower probability of being active, using this as the basis for architectural modifications that yield significant savings in energy, delay, and area. Our approach yields more savings than any of the conventional voltage scaling schemes for similar error values. Extensive simulations using this pruning technique in a novel logic-synthesis-based CAD framework on various architectures of 64-bit adders demonstrate that normalized gains as large as 2×–7.5× in the energy-delay-area product can be obtained, with relative error percentages ranging from as low as 10⁻⁶% up to 10%, when compared to corresponding conventionally correct designs.
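
To make the idea concrete, here is a small, hypothetical Python sketch of probabilistic pruning at the netlist level: gates carry assumed activity probabilities and error weights (which, in the paper, would come from the logic-synthesis-based CAD framework), and the least-active gates are removed while an expected-error budget holds. It is a toy model, not the authors' tool.

```python
# Toy illustration of probabilistic pruning (not the authors' CAD flow):
# each "gate" carries an activation probability, an energy cost, and the
# output error magnitude incurred if it is removed. Gates that are rarely
# active are pruned first, trading a small expected error for savings.

from dataclasses import dataclass

@dataclass
class Gate:
    name: str
    p_active: float   # probability the gate influences the output
    energy: float     # relative energy cost of keeping the gate
    err_weight: float # output error magnitude contributed if pruned

def probabilistic_prune(gates, error_budget):
    """Greedily prune the least-active gates while the accumulated
    expected error stays within the budget."""
    kept, expected_error = [], 0.0
    for g in sorted(gates, key=lambda g: g.p_active):
        added_err = g.p_active * g.err_weight
        if expected_error + added_err <= error_budget:
            expected_error += added_err   # prune g
        else:
            kept.append(g)
    return kept, expected_error

if __name__ == "__main__":
    # Hypothetical gate profile, e.g. from simulating typical adder inputs.
    gates = [Gate(f"g{i}", p, 1.0, 2.0 ** -i)
             for i, p in enumerate([0.50, 0.30, 0.10, 0.05, 0.01])]
    kept, err = probabilistic_prune(gates, error_budget=0.01)
    saved = 1.0 - sum(g.energy for g in kept) / sum(g.energy for g in gates)
    print(f"pruned to {len(kept)}/{len(gates)} gates, "
          f"expected error {err:.4f}, energy saved {saved:.0%}")
```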


ACM Transactions on Embedded Computing Systems | 2013

Ten Years of Building Broken Chips: The Physics and Engineering of Inexact Computing

Krishna V. Palem; Avinash Lingamneni

Well over a decade ago, many believed that an engine of growth driving the semiconductor and computing industries---captured nicely by Gordon Moore's remarkable prophecy (Moore's law)---was speeding towards a dangerous cliff-edge. Ranging from expressions of concern to doomsday scenarios, predictions of the exact time when serious hurdles would beset us varied quite a bit. Needless to say, a lot of people have spent time and effort, with great success, finding ways to substantially postpone the time when we would encounter the dreaded cliff-edge, if not avoid it altogether. Faced with this issue, we started approaching it in a decidedly different manner---one which suggested falling off the metaphorical cliff as a design choice, but in a controlled way. This resulted in devices that could switch and produce bits that are correct, namely having the intended value, only with a probabilistic guarantee; consequently, the computed values could in fact be incorrect. Such devices and the associated circuits and computing structures are now broadly referred to as inexact designs, circuits, and architectures. In this article, we crystallize the essence of inexactness, dating back to 2002, through two key principles that we developed: (i) admitting error in a design in return for resource savings, and subsequently (ii) making resource investments in the elements of a hardware platform proportional to the value of the information they compute. We also give a broad overview of the range of inexact designs and hardware concepts that our group and other groups around the world have been developing since, based on these two principles. Despite not being deterministically precise, inexact designs can be significantly more efficient in the energy they consume, their speed of execution, and their area needs, which makes them attractive in application contexts that are resilient to error. Significantly, our development of inexactness is contrasted against the rich backdrop of traditional approaches aimed at realizing reliable computing from unreliable elements, starting with von Neumann's influential lectures and further developed by Shannon-Weaver and others.
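
Principle (ii) can be illustrated with a toy calculation: given a fixed energy budget for an 8-bit word, investing energy in proportion to each bit's significance yields a much lower expected squared error than spreading the budget uniformly. The exponential flip-probability model below is purely an assumption for illustration, not a device model from the article.

```python
# Toy numerical illustration of principle (ii): with a fixed energy budget,
# investing more energy in the high-order (high-value) bits of a word lowers
# the expected error compared to a uniform spread. The model
# p_flip = exp(-energy) is an invented stand-in for a real device model.

import math

BITS = 8
BUDGET = 8.0  # total energy units available for the word

def expected_sq_error(energy_per_bit):
    """Expected squared output error when bit i flips with prob exp(-e_i)."""
    return sum(math.exp(-e) * (2 ** i) ** 2
               for i, e in enumerate(energy_per_bit))

uniform = [BUDGET / BITS] * BITS
# Weight each bit's energy by its significance (its "value of information").
weights = [2 ** i for i in range(BITS)]
proportional = [BUDGET * w / sum(weights) for w in weights]

print("uniform allocation      :", expected_sq_error(uniform))
print("value-proportional alloc:", expected_sq_error(proportional))
```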


Computing Frontiers | 2012

Algorithmic methodologies for ultra-efficient inexact architectures for sustaining technology scaling

Avinash Lingamneni; Kirthi Krishna Muntimadugu; Christian Enz; Richard M. Karp; Krishna V. Palem; Christian Piguet

Owing to a growing desire to reduce energy consumption and widely anticipated hurdles to the continued technology scaling promised by Moore's law, techniques and technologies such as inexact circuits and probabilistic CMOS (PCMOS) have gained prominence. These radical approaches trade accuracy at the hardware level for significant gains in energy consumption, area, and speed. While holding great promise, their ability to influence the broader milieu of computing is limited by two shortcomings. First, they were mostly based on ad hoc hand designs and did not consider algorithmically well-characterized automated design methodologies; moreover, existing design approaches were limited to particular layers of abstraction such as the physical, architectural, and algorithmic or, more broadly, software layers, even though it is well known that significant gains can be achieved by optimizing across the layers. To respond to this need, in this paper we present an algorithmically well-founded cross-layer co-design framework (CCF) for automatically designing inexact hardware in the form of datapath elements, specifically adders and multipliers, and show that significant associated gains can be achieved in terms of energy, area, and delay or speed. Our algorithms can achieve these gains without adding any hardware overhead. The proposed CCF framework embodies a symbiotic relationship between architecture- and logic-layer design through the technique of probabilistic pruning, combined with the novel confined voltage scaling technique introduced in this paper and applied at the physical layer. A second drawback of the state of the art in inexact design is the lack of physical evidence, established through measurements of fabricated ICs, that the achievable gains and other benefits are valid. Again, in this paper we address this shortcoming by using CCF to fabricate a prototype chip implementing inexact datapath elements: a range of 64-bit integer adders whose outputs can be erroneous. Through physical measurements of our prototype chip, wherein the inexact adders admit expected relative error magnitudes of 10% or less, we found that cumulative gains over comparable fully accurate chips, quantified through the area-delay-energy product, can be a multiplicative factor of 15 or more. As evidence of the utility of these results, we demonstrate that, despite admitting error while achieving gains, images processed using the FFT algorithm implemented with our inexact adders remain visually discernible.
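
The "expected relative error magnitude" used to characterize the fabricated adders can be emulated in software. The sketch below breaks the carry chain below bit k of a 64-bit adder, a generic approximation pattern standing in for the paper's CAD-produced pruned designs, and measures the metric over random inputs.

```python
# Software emulation sketch of an inexact 64-bit adder. The real pruned
# adders came from a CAD flow; here we simply cut the carry chain below
# bit k (a generic approximation pattern) to show how expected relative
# error magnitude can be measured.

import random

def inexact_add(a, b, k):
    """The low k bits are OR-ed (cheap, carry-free); high bits add exactly."""
    low = (a | b) & ((1 << k) - 1)
    high = ((a >> k) + (b >> k)) << k
    return high | low

def expected_relative_error(k, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a, b = rng.getrandbits(64), rng.getrandbits(64)
        exact = a + b
        total += abs(inexact_add(a, b, k) - exact) / exact
    return total / trials

for k in (8, 16, 32):
    print(f"carry chain cut below bit {k:2d}: "
          f"expected relative error {expected_relative_error(k):.2e}")
```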


Asia and South Pacific Design Automation Conference | 2014

Leveraging the error resilience of machine-learning applications for designing highly energy efficient accelerators

Zidong Du; Avinash Lingamneni; Yunji Chen; Krishna V. Palem; Olivier Temam; Chueh-Hung Wu

In recent years, inexact computing has been increasingly regarded as one of the most promising approaches for reducing energy consumption in many applications that can tolerate a degree of inaccuracy. Driven by the principle of trading tolerable amounts of application accuracy in return for significant resource savings - the energy consumed, the (critical path) delay, and the (silicon) area being the resources - this approach has so far been limited to certain application domains. In this paper, we propose to expand the application scope, error tolerance, and energy savings of inexact computing systems through neural network architectures. Such neural networks are fast emerging as popular candidate accelerators for future heterogeneous multi-core platforms and have flexible error tolerance limits owing to their ability to be trained. Our results, based on simulated 65nm technology designs, demonstrate that the proposed inexact neural network accelerator could achieve 43.91%-62.49% savings in energy consumption (with corresponding delay and area savings being 18.79% and 31.44%, respectively) when compared to the existing baseline neural network implementation, at the cost of an accuracy loss quantified as the Mean Square Error (MSE), which increases from 0.14 to 0.20 on average.
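
A minimal emulation of the underlying idea, assuming a stand-in "inexact multiplier" that truncates products to a few fractional bits: run a small fixed network with exact and inexact multiply-accumulate and compare output MSE. The network and the truncation model below are invented for illustration, not the accelerator design from the paper.

```python
# Toy emulation: replace exact multiply-accumulate with an "inexact"
# version (products truncated to 4 fractional bits, a stand-in for
# inexact multiplier hardware) and measure how much the outputs of a
# small fixed network deviate, quantified as MSE.

import numpy as np

rng = np.random.default_rng(0)

def inexact_dot(x, W):
    """Dot product whose elementwise products are truncated."""
    scale = 16.0  # 2**4 fractional bits kept
    prods = np.floor(x[:, None] * W * scale) / scale
    return prods.sum(axis=0)

def exact_dot(x, W):
    return x @ W

# A fixed 1-16-1 network standing in for a trained model.
W1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)

def forward(x, dot):
    h = np.tanh(dot(np.array([x]), W1) + b1)
    return (dot(h, W2) + b2)[0]

xs = np.linspace(-2.0, 2.0, 200)
exact = np.array([forward(x, exact_dot) for x in xs])
inex = np.array([forward(x, inexact_dot) for x in xs])
print("MSE between exact and inexact outputs:", np.mean((exact - inex) ** 2))
```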


Design Automation Conference | 2012

What to do about the end of Moore's law, probably!

Krishna V. Palem; Avinash Lingamneni

Computers process bits of information. A bit can take a value of 0 or 1, and computers process these bits through some physical mechanism. In the early days of electronic computers, this was done by electromechanical relays [28], which were soon replaced by vacuum tubes [6]. From the very beginning, these devices and the computers they were used to build were affected by concerns of reliability. For example, in a relatively recent interview [1] with Presper Eckert, who co-designed ENIAC (widely believed to be the first electronic computer built), he notes: “we had a tube fail about every two days, and we could locate the problem within 15 minutes.”


Philosophical Transactions of the Royal Society A | 2014

On the use of inexact, pruned hardware in atmospheric modelling

Peter D. Düben; Jaume Joven; Avinash Lingamneni; Hugh McNamara; Giovanni De Micheli; Krishna V. Palem; T. N. Palmer

Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models.
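
The paper's methodology, software emulation of pruned floating-point hardware inside a toy atmospheric model, can be sketched as follows. Rounding every tendency computation to a reduced number of mantissa bits is a crude stand-in for the pruned units; the Lorenz '96 parameters are the standard N = 40, F = 8, and the integrator and bit-width choices are illustrative.

```python
# Software-emulation sketch in the spirit of the paper: integrate the
# Lorenz '96 model in full double precision and again with every result
# rounded to a reduced number of mantissa bits (emulating pruned
# floating-point units), then compare a long-time diagnostic.

import numpy as np

N, F = 40, 8.0  # standard Lorenz '96 configuration

def trunc(x, bits):
    """Round x to `bits` mantissa bits: emulated 'pruned' arithmetic."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2.0 ** bits) / 2.0 ** bits, e)

def tendency(x, bits):
    dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return dx if bits is None else trunc(dx, bits)

def rk4_step(x, dt, bits):
    f = lambda y: tendency(y, bits)
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x if bits is None else trunc(x, bits)

def run(bits=None, dt=0.05, steps=2000, seed=1):
    x = F + 0.01 * np.random.default_rng(seed).standard_normal(N)
    for _ in range(steps):
        x = rk4_step(x, dt, bits)
    return x

ref, pruned = run(), run(bits=10)
print("mean state, double precision:", ref.mean())
print("mean state, 10 mantissa bits:", pruned.mean())
```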


ACM Transactions on Embedded Computing Systems | 2013

Synthesizing Parsimonious Inexact Circuits through Probabilistic Design Techniques

Avinash Lingamneni; Christian Enz; Krishna V. Palem; Christian Piguet

The domain of inexact circuit design, in which the accuracy of a circuit can be exchanged for substantial cost (energy, delay, and/or area) savings, has been gathering increasing prominence of late, owing to a growing desire to reduce the energy consumption of systems, particularly in the domain of embedded and (portable) multimedia applications. Most previous approaches to realizing inexact circuits relied on scaling of circuit parameters (such as supply voltage), taking advantage of an application's error tolerance to achieve cost and accuracy trade-offs; they thus suffered from the acute drawback of considerable implementation overheads that significantly reduced the gains. In this article, two novel design approaches called Probabilistic Pruning and Probabilistic Logic Minimization are proposed to realize inexact circuits with zero hardware overhead. Extensive simulations on various architectures of critical datapath elements demonstrate that each of the techniques can independently achieve normalized gains as large as 2×–9.5× in the energy-delay-area product for relative error magnitudes ranging from 10⁻⁴% to 8%, compared to corresponding conventional correct circuits.
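
Probabilistic Logic Minimization can be illustrated with a toy truth-table example: flipping the output of a rarely occurring input pattern can make the function depend on fewer variables (a crude proxy for a cheaper circuit), at the cost of a small expected error. The function and the input-pattern probabilities below are invented for the demo.

```python
# Toy illustration of probabilistic logic minimization: flip the output
# of the least-probable input pattern and check whether the function now
# depends on fewer variables (a crude cost proxy for a smaller circuit).

from itertools import product

def support(table, n):
    """Indices of the inputs the function actually depends on."""
    dep = set()
    for i in range(n):
        for bits in product((0, 1), repeat=n):
            other = list(bits); other[i] ^= 1
            if table[bits] != table[tuple(other)]:
                dep.add(i)
                break
    return sorted(dep)

n = 3
# f(a,b,c) = (a AND b) OR (a AND NOT b AND c): three 1-minterms.
ones = {(1, 1, 0), (1, 1, 1), (1, 0, 1)}
table = {bits: int(bits in ones) for bits in product((0, 1), repeat=n)}
# Assumed profiled input probabilities: pattern (1,0,1) is very rare.
p = {bits: 0.001 if bits == (1, 0, 1) else (0.999 / 7) for bits in table}

print("support before:", support(table, n))   # depends on a, b and c
rarest = min(table, key=lambda b: p[b])
table[rarest] ^= 1                             # flip its output bit
print("support after :", support(table, n),   # now f == a AND b
      "expected error:", p[rarest])
```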


Journal of Low Power Electronics | 2013

Designing Energy-Efficient Arithmetic Operators Using Inexact Computing

Avinash Lingamneni; Christian Enz; Krishna V. Palem; Christian Piguet

It is widely acknowledged that the exponentially improving benefits of sustained technology scaling prophesied by Moore's Law would end within the next decade or so, attributed primarily to an understanding that switching devices can no longer function deterministically as feature sizes are scaled down to molecular levels. The benefits of Moore's Law could, however, continue, provided systems with probabilistic or “error-prone” elements could still process information usefully. We believe that this is indeed possible in contexts where the “quality” of the results of the computation is perceptually determined by our senses, audio and video information being significant examples. To demonstrate this principle, we show how such “inexact” computing based devices, circuits, and computing architectures can be used effectively to realize many ubiquitous energy-constrained error-resilient applications. Further, we show that significant energy, performance, and area gains can be achieved while trading a perceptually tolerable level of error (one that will ultimately be determined based on neurobiological models) in the context of video and audio data in digital signal processing. This design philosophy of inexact computing is of particular interest in the domain of embedded and (portable) multimedia applications and in application domains of budding interest such as recognition and data mining, all of which can tolerate inaccuracies to varying extents or can synthesize accurate (or sufficient) information even from inaccurate computations.


Design Automation Conference | 2013

Improving energy gains of inexact DSP hardware through reciprocative error compensation

Avinash Lingamneni; Arindam Basu; Christian Enz; Krishna V. Palem; Christian Piguet

We present a zero-hardware-overhead design approach called reciprocative error compensation (REC) that significantly enhances the energy-accuracy trade-off gains in inexact signal processing datapaths through a two-pronged approach: (a) deliberately redesigning the basic arithmetic blocks to effectively compensate for each other's (expected) errors through inexact logic minimization, and (b) “reshaping” the response waveforms of the systems being designed to further reduce any residual error. We apply REC to several DSP primitives such as FFT and FIR filter blocks, and show that this approach delivers 2-3 orders of magnitude lower (expected) error and more than an order of magnitude smaller Signal-to-Noise Ratio (SNR) loss (in dB) than previously proposed inexact design techniques, while yielding similar energy gains. Post-layout comparisons in a 65nm process technology show that our REC approach achieves up to 73% energy savings (with corresponding delay and area savings of up to 16% and 62%, respectively) when compared to an existing exact DSP implementation, while trading a relatively small loss in SNR of less than 1.5 dB.
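
A toy numeric sketch of prong (a): a truncating adder errs low on average, so two of them compound their errors; redesigning one of the pair to err high by the matching expected amount makes the errors largely cancel at the datapath output. The bit-widths and constants below are illustrative, not the paper's designs.

```python
# Toy sketch of reciprocative error compensation: two cheap truncating
# adders both err low, so their errors add up; making one of the pair
# err high by the same expected amount largely cancels the error of the
# composed datapath out = (a+b) + (c+d).

import random

K = 8                       # low-order bits dropped by the inexact adders
BIAS = ((1 << K) - 1) // 2  # expected shortfall of one truncating adder

def add_low(a, b):          # inexact adder biased low (plain truncation)
    return ((a + b) >> K) << K

def add_high(a, b):         # mate redesigned to be biased high instead
    return (((a + b) >> K) << K) + 2 * BIAS + 1

rng = random.Random(0)
naive_err = rec_err = 0
TRIALS = 100_000
for _ in range(TRIALS):
    a, b, c, d = (rng.getrandbits(32) for _ in range(4))
    exact = (a + b) + (c + d)
    naive_err += (add_low(a, b) + add_low(c, d)) - exact
    rec_err += (add_low(a, b) + add_high(c, d)) - exact

print("mean error, two low-biased adders:", naive_err / TRIALS)
print("mean error, compensating pair    :", rec_err / TRIALS)
```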


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2015

Leveraging the Error Resilience of Neural Networks for Designing Highly Energy Efficient Accelerators

Zidong Du; Avinash Lingamneni; Yunji Chen; Krishna V. Palem; Olivier Temam; Chueh-Hung Wu

In recent years, inexact computing has been increasingly regarded as one of the most promising approaches for slashing energy consumption in many applications that can tolerate a certain degree of inaccuracy. Driven by the principle of trading tolerable amounts of application accuracy in return for significant resource savings - the energy consumed, the (critical path) delay, and the (silicon) area - this approach has so far been limited to application-specific integrated circuits (ASICs). These ASIC realizations have a narrow application scope and, as currently designed, are often rigid in their tolerance to inaccuracy, the latter often determining the extent of resource savings we can achieve. In this paper, we propose to improve the application scope, error resilience, and energy savings of inexact computing by combining it with hardware neural networks. These neural networks are fast emerging as popular candidate accelerators for future heterogeneous multicore platforms and have flexible error resilience limits owing to their ability to be trained. Our results in 65-nm technology demonstrate that the proposed inexact neural network accelerator could achieve 1.78-2.67× savings in energy consumption (with corresponding delay and area savings being 1.23× and 1.46×, respectively) when compared to the existing baseline neural network implementation, at the cost of a small accuracy loss (mean squared error increases from 0.14 to 0.20 on average).
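
For reference, these multiplicative factors are consistent with the percentage savings reported in the conference version above: a fractional saving $s$ corresponds to a factor $1/(1-s)$.

```latex
\[
\text{factor} = \frac{1}{1-s}:\qquad
\frac{1}{1-0.4391} \approx 1.78,\quad
\frac{1}{1-0.6249} \approx 2.67,\quad
\frac{1}{1-0.1879} \approx 1.23,\quad
\frac{1}{1-0.3144} \approx 1.46.
\]
```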

Collaboration


Dive into Avinash Lingamneni's collaborations.

Top Co-Authors

Lakshmi N. Chakrapani (Georgia Institute of Technology)
Arindam Basu (Nanyang Technological University)
Yunji Chen (Chinese Academy of Sciences)
Zidong Du (Chinese Academy of Sciences)
Chueh-Hung Wu (National Taiwan University)