Debayan Bhaduri
Virginia Tech
Publications
Featured research published by Debayan Bhaduri.
Great Lakes Symposium on VLSI | 2004
Debayan Bhaduri; Sandeep K. Shukla
It is expected that nanoscale devices and interconnects will introduce unprecedented levels of defects, noise, and interference into the substrates. This consideration motivates the search for new architectural paradigms based on redundancy-based defect-tolerant designs. However, redundancy is not always a solution to the reliability problem: too much or too little redundancy can both degrade reliability. The key challenge lies in determining the granularity at which defect tolerance is designed and the level of redundancy needed to achieve optimal reliability. Various forms of redundancy, such as NAND multiplexing, Triple Modular Redundancy (TMR), and Cascaded Triple Modular Redundancy (CTMR), have been considered in the fault-tolerance literature. Redundancy has also been applied at different levels of granularity, such as the gate level, logic block level, logic function level, and unit level. The questions we try to answer in this paper are what level of granularity and what redundancy levels result in optimal reliability for specific architectures. We extend previous work on evaluating reliability-redundancy trade-offs for NAND multiplexing to granularity vs. redundancy vs. reliability trade-offs for other redundancy mechanisms, and present our automation mechanism using the probabilistic model checking tool PRISM. We illustrate the power of this automation by pointing out certain counterintuitive anomalies in these trade-offs that designers can only obtain through automation, thereby providing better insight into defect-tolerant design decisions.
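To make the redundancy-reliability trade-off tangible, the closed-form reliability of TMR and cascaded TMR can be sketched in a few lines of Python. This is an illustrative simplification (i.i.d. replica failures, a perfect majority voter) rather than the PRISM models the paper builds:

```python
def r_tmr(r):
    """Probability a TMR triple votes correctly, given each of the
    three replicas is independently correct with probability r
    (perfect voter assumed -- a simplification)."""
    return 3 * r**2 * (1 - r) + r**3

def r_ctmr(r, stages):
    """Cascaded TMR: apply the TMR construction `stages` times."""
    for _ in range(stages):
        r = r_tmr(r)
    return r

# More redundancy is not always better: TMR only improves reliability
# when r > 0.5, one of the counterintuitive trade-offs noted above.
print(r_tmr(0.9))        # improves on 0.9
print(r_tmr(0.4))        # worse than 0.4
print(r_ctmr(0.9, 2))
```

With an imperfect voter the crossover point shifts, which is exactly the kind of granularity-dependent effect an automated evaluation is built to expose.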
IEEE Transactions on Nanotechnology | 2005
Debayan Bhaduri; Sandeep K. Shukla
As manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties arise from the minuscule dimensions of the devices, quantum physical effects, reduced noise margins, system energy levels approaching the thermal limits of computing, manufacturing defects, aging, and many other factors. Defect-tolerant architectures and their reliability measures will gain importance for logic and micro-architecture designs based on nanoscale substrates. Recently, the Markov random field has been proposed as a model of computation for nanoscale logic gates. This opens up new possibilities for designing logic and architectures in which conventional Boolean logic is replaced by a notion of an energy distribution function based on the Gibbs distribution. In this computational scheme, probabilities of energy levels at the various gate inputs and interconnects are considered, and belief propagation is used to propagate these probability distributions from the primary inputs to the primary outputs of a Boolean network. In this paper, we take this approach further by automating this computational scheme and the belief propagation algorithm. We have developed MATLAB-based libraries for fundamental logic gates that can compute probability distributions and entropies at the outputs for specified discrete input distributions and in the presence of noise at the inputs and interconnects. Our tool automates the evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated by automatically deriving various reliability results for defect-tolerant architectures, such as triple modular redundancy (TMR), cascaded TMR, and multistage iterations of these. Signal noise is also modeled as uniform and Gaussian distributions at the inputs and interconnects, so as to evaluate reliability-redundancy trade-offs of these architectural configurations under such noise models.
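As a rough illustration of the kind of computation such libraries automate, the following sketch propagates a signal probability through a single noisy NAND gate and reports the output entropy. The independence assumption and the symmetric flip-noise model are simplifications chosen for brevity, and all names are illustrative:

```python
import math

def noisy_nand(p_a, p_b, eps):
    """P(out = 1) for a NAND gate whose inputs are independent with
    P(a=1)=p_a, P(b=1)=p_b, and whose output flips with probability eps."""
    p_ideal = 1.0 - p_a * p_b            # ideal NAND output probability
    return (1 - eps) * p_ideal + eps * (1 - p_ideal)

def entropy(p):
    """Binary entropy (bits) of a two-valued output distribution."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p_out = noisy_nand(0.5, 0.5, 0.05)
print(p_out, entropy(p_out))
```

Composing such per-gate transformations from primary inputs to primary outputs is, in essence, the propagation pass described above; the paper's libraries additionally handle interconnect noise and full distributions over energy levels.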
IEEE Transactions on Circuits and Systems | 2007
Debayan Bhaduri; Sandeep K. Shukla; Paul S. Graham; Maya Gokhale
The rapid development of CMOS and non-CMOS nanotechnologies has opened up new possibilities and introduced new challenges for circuit design. One of the main challenges is designing reliable circuits from defective nanoscale devices; hence, there is a need for methodologies that accurately evaluate circuit reliability. In recent years, a number of reliability evaluation methodologies based on probabilistic model checking, probabilistic transfer matrices, probabilistic gate models, etc., have been proposed. Scalability has been a concern in the applicability of these methodologies to the reliability analysis of large circuits. In this paper, we develop a general, scalable technique for these reliability evaluation methodologies. Specifically, an algorithm is developed for the model checking-based methodology and implemented in a tool called Scalable, Extensible Tool for Reliability Analysis (SETRA). SETRA integrates the scalable model checking-based algorithm into the conventional computer-aided circuit design flow. The paper also discusses ways to modify the scalable algorithm for the other reliability estimation methodologies and plug them into SETRA's extensible framework. Our preliminary experiments show how SETRA can be used effectively to evaluate and compare the robustness of different circuit designs.
IEEE Transactions on Nanotechnology | 2007
Debayan Bhaduri; Sandeep K. Shukla; Paul S. Graham; Maya Gokhale
Nanoelectronic systems are anticipated to be highly susceptible to computation and communication noise. Interestingly, von Neumann addressed the issue of computation in the presence of noisy gates in 1952 and developed a technique called multiplexing. He proposed multiplexing architectures based on two universal logic functions, nand and maj. Generalized combinatorial models to analyze such multiplexing architectures were proposed by von Neumann and extended later by others. In this work, we describe an automated method for computing the effects of noise in both the computational and interconnect hardware of multiplexing-based nanosystems, a method employing a probabilistic model checking tool and extending previous modeling efforts, which considered only gate noise. This method is compared with a recently proposed automation methodology based on probabilistic transfer matrices, and is used to compute and compare the reliability of individual nand and maj multiplexing systems in the presence of both gate and interconnect noise. Such a comparative study of nand and maj multiplexing is needed to provide quantitative guidelines for choosing between the two multiplexing schemes. The maximum device failure probabilities that can be accommodated by multiplexing-based fault-tolerant nanosystems are also computed by this method and compared with theoretical results from the literature. This paper provides a framework that can capture probabilistically quantified fault models and provide quick reliability evaluation of multiplexing architectures.
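The multiplexing construction itself is easy to mimic with a toy Monte Carlo simulation: each signal is carried on a bundle of N wires, each stage NANDs randomly paired wires, and gate and wire faults flip bits independently. This is only a sketch of the idea (the paper's analysis uses probabilistic model checking and transfer matrices, not simulation), and all names and fault parameters here are illustrative:

```python
import random

def nand_stage(bundle_a, bundle_b, p_gate, p_wire):
    """One multiplexed NAND stage with random wire pairing."""
    random.shuffle(bundle_b)                 # von Neumann's random permutation
    out = []
    for a, b in zip(bundle_a, bundle_b):
        v = 1 - (a & b)                      # ideal NAND
        if random.random() < p_gate:         # gate fault flips the output
            v ^= 1
        if random.random() < p_wire:         # interconnect fault
            v ^= 1
        out.append(v)
    return out

def multiplexed_nand(n, stages, p_gate, p_wire, a=1, b=1):
    """Fraction of stimulated output wires after an executive stage
    followed by (stages - 1) restorative stages (toy model)."""
    bundle_a, bundle_b = [a] * n, [b] * n
    out = bundle_a
    for _ in range(stages):
        out = nand_stage(bundle_a, bundle_b, p_gate, p_wire)
        bundle_a, bundle_b = out, list(out)
    return sum(out) / n

random.seed(0)
# NAND(1, 1) = 0, so only a small fraction of wires should end up stimulated.
print(multiplexed_nand(n=1000, stages=3, p_gate=0.01, p_wire=0.005))
```

Sweeping the bundle size and fault probabilities in such a model shows the same qualitative behavior the formal analysis quantifies exactly: below a device failure threshold the bundle converges, above it the computation degrades.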
IEEE Computer Society Annual Symposium on VLSI | 2004
Debayan Bhaduri; Sandeep K. Shukla
As silicon manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties arise from the minuscule dimensions of the devices, quantum physical effects, reduced noise margins, system energy levels approaching the thermal limits of computing, manufacturing defects, aging, and many other factors. Defect-tolerant architectures and their reliability measures gain importance for logic and micro-architecture designs based on nanoscale substrates. Recently, the Markov random field (MRF) has been proposed as a model of computation for nanoscale logic gates. In this paper, we take this approach further by automating this computational scheme and a belief propagation algorithm. We have developed MATLAB-based libraries and a toolset for fundamental logic gates that can compute output probability distributions and entropies for specified input distributions. Our tool eases the evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated by automatically deriving various reliability results for defect-tolerant architectures, such as triple modular redundancy (TMR), cascaded triple modular redundancy (CTMR), and multi-stage iterations of these. These results are used to analyze trade-offs between reliability and redundancy for these architectural configurations.
International Conference on VLSI Design | 2007
Debayan Bhaduri; Sandeep K. Shukla; Paul S. Graham; Maya Gokhale
With the rapid advancement of CMOS and non-CMOS nanotechnologies, circuit reliability is becoming an important design parameter. In recent years, a number of reliability evaluation methodologies based on probabilistic model checking, probabilistic transition matrices, etc., have been proposed. Scalability has been a concern in the wide applicability of these methodologies to the reliability analysis of large circuits. In this paper, we discuss the similarities between these reliability evaluation methodologies, focusing mainly on the scalability issue. In particular, we develop a scalable technique for the model checking-based methodology and show how it can be applied to the other methodologies. We also develop a tool called SETRA that integrates the scalable forms of these methodologies into the conventional circuit design flow.
IEEE Transactions on Nanotechnology | 2008
Ayodeji Coker; Valerie E. Taylor; Debayan Bhaduri; Sandeep K. Shukla; Arijit Raychowdhury; Kaushik Roy
Nanoscale elements are fabricated using bottom-up processes, and as such are prone to high levels of defects. Therefore, fault-tolerance is crucial for the realization of practical nanoscale devices. In this paper, we investigate a fault-tolerance scheme that utilizes redundancies in the rows and columns of a nanoscale crossbar molecular switch memory array. In particular, we explore the performance tradeoffs of time delay, power, and reliability for different amounts of redundancies. The results indicate an increase in fault-tolerance with small increases in delay and area utility.
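To give a feel for how row and column spares repair a defective crossbar, here is a small greedy sketch: a defect at (i, j) is covered by retiring row i to a spare row or column j to a spare column. The greedy heuristic and all names are illustrative only (optimal spare allocation is NP-hard in general, and the paper's trade-off analysis concerns delay, power, and reliability rather than this allocation step):

```python
def repairable(defects, spare_rows, spare_cols):
    """Greedy check: can the defect set be covered with the given spares?"""
    defects = set(defects)
    rows_used = cols_used = 0
    while defects:
        row_cnt, col_cnt = {}, {}
        for i, j in defects:
            row_cnt[i] = row_cnt.get(i, 0) + 1
            col_cnt[j] = col_cnt.get(j, 0) + 1
        ri, rn = max(row_cnt.items(), key=lambda kv: kv[1])
        cj, cn = max(col_cnt.items(), key=lambda kv: kv[1])
        if rn >= cn and rows_used < spare_rows:
            rows_used += 1                      # retire the worst row
            defects = {d for d in defects if d[0] != ri}
        elif cols_used < spare_cols:
            cols_used += 1                      # retire the worst column
            defects = {d for d in defects if d[1] != cj}
        elif rows_used < spare_rows:
            rows_used += 1
            defects = {d for d in defects if d[0] != ri}
        else:
            return False                        # spares exhausted
    return True

print(repairable({(0, 0), (0, 3), (2, 1)}, spare_rows=1, spare_cols=1))  # True
print(repairable({(0, 0), (1, 1), (2, 2)}, spare_rows=1, spare_cols=1))  # False
```

The second case fails because three defects on distinct rows and columns need three covers but only two spares are available, which is the kind of limit the redundancy-versus-reliability trade-off above quantifies.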
International Conference on Nanotechnology | 2004
Debayan Bhaduri; Sandeep K. Shukla
Device manufacturing at the nanometer scale in the semiconductor industry is characterized by two features: (i) a high defect rate at the substrate, and (ii) the availability of a large number of devices on chip. The high defect rate is due to manufacturing defects, aging, transient faults, and quantum physical effects, which need to be circumvented by designing reliable architectures for nanoscale devices. The availability of a higher device count, however, allows designers to implement redundancy-based defect tolerance. However, the redundant parts of a design are also affected by defects, and therefore redundancy levels need to be chosen suitably to obtain reliable architectures. Our past work has concentrated on various structural redundancy techniques, including von Neumann's NAND multiplexing, and we have shown how to use a probabilistic model checking tool to evaluate redundancy/reliability trade-off points for such designs. In this paper, we concentrate on a specific circuit, namely the majority circuit, to evaluate its redundancy-reliability trade-off. This special attention to majority circuits is motivated by recent advances in non-silicon technologies for nanoscale computing, such as quantum-dot cellular automata, which use three-input majority gates as their basic logic devices. Our results show that majority circuits multiplexed using von Neumann's technique admit lower reliability than NAND gates at the same level of redundancy for small gate failure probabilities, and higher reliability of computation when large gate failure probabilities are considered. This is significant due to the growing importance of majority gates in implementing various logic functions in emerging nanotechnologies.
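As background for why the majority gate deserves this attention: MAJ(a, b, c) = ab + bc + ca is the primitive logic element in quantum-dot cellular automata, and fixing one of its inputs turns it into AND or OR. A minimal sketch (function names are illustrative):

```python
def maj(a, b, c):
    """Three-input majority: 1 iff at least two inputs are 1."""
    return (a & b) | (b & c) | (a & c)

def maj_and(a, b):
    return maj(a, b, 0)      # forcing one input to 0 yields AND

def maj_or(a, b):
    return maj(a, b, 1)      # forcing one input to 1 yields OR

bits = (0, 1)
print(all(maj_and(a, b) == (a & b) for a in bits for b in bits))  # True
print(all(maj_or(a, b) == (a | b) for a in bits for b in bits))   # True
```

Together with an inverter the majority gate is functionally complete, which is why its behavior under von Neumann multiplexing matters for QCA-based designs.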
International Conference on Nanotechnology | 2006
Ayodeji Coker; Valerie E. Taylor; Debayan Bhaduri; Sandeep K. Shukla; Arijit Raychowdhury; Kaushik Roy
Nanoscale elements are fabricated using bottom-up processes, and as such are prone to high levels of defects. Therefore, fault-tolerance is crucial for the realization of practical nanoscale devices. In this paper, we investigate a fault tolerance scheme that utilizes redundancies in the rows and columns of a nanoscale crossbar molecular switch memory array. In particular, we explore the performance tradeoffs of time delay, power, and reliability for different amounts of redundancies. The results indicate an increase in fault-tolerance with small increases in delay and area utility.
Nano, Quantum and Molecular Computing | 2004
Debayan Bhaduri; Sandeep K. Shukla
Nano-computing in the form of quantum, molecular, and other computing models is proliferating as we scale down to nanometer fabrication technologies. According to many experts, nanoscale devices and interconnects are expected to introduce unprecedented levels of defects into the substrates, and architectural designs need to accommodate the uncertainty inherent at such scales. This consideration motivates the search for new architectural paradigms based on redundancy-based defect-tolerant designs. However, redundancy is not always a solution to the reliability problem: too much or too little redundancy can both degrade reliability. The key challenge lies in determining the granularity at which defect tolerance is designed and the level of redundancy needed to achieve a specific level of reliability. Various forms of redundancy, such as NAND multiplexing, Triple Modular Redundancy (TMR), and Cascaded Triple Modular Redundancy (CTMR), have been considered in the fault-tolerance literature. Redundancy has also been applied at different levels of granularity, such as the gate level, logic block level, logic function level, and unit level. Analytical probabilistic models to evaluate such reliability-redundancy trade-offs are error prone and cumbersome. In this chapter, we discuss different analytical and automation methodologies that can evaluate the reliability measures of combinational logic blocks and can be used to analyze trade-offs between reliability and redundancy for different architectural configurations. We also illustrate the effectiveness of our reliability analysis tools by pointing out certain counterintuitive anomalies that designers can obtain easily through automation, thereby providing better insight into defect-tolerant design decisions. We foresee that these tools will help further research and pedagogical interests in this area, expedite the reliability analysis process, and enhance the accuracy of establishing reliability-redundancy trade-off points.