Farzan Fallah
Fujitsu
Publication
Featured research published by Farzan Fallah.
IEICE Transactions on Electronics | 2005
Farzan Fallah; Massoud Pedram
In many new high-performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or more of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including the use of multiple-threshold cells, long-channel devices, input vector design, transistor stacking, and sizing with simultaneous threshold and supply voltage assignment.
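One of the standby techniques surveyed above, input vector control, is simple enough to illustrate in software. The sketch below is a minimal, self-contained illustration and not the article's method: a toy three-NAND circuit, a made-up per-gate leakage table that only captures the stacking effect, and an exhaustive search for the standby input vector that minimizes total leakage.

```python
from itertools import product

# Illustrative relative leakage of a 2-input NAND gate, keyed by its input
# vector. The values are made-up placeholders that only capture the stacking
# effect: two series OFF transistors leak far less than one.
NAND_LEAKAGE = {
    (0, 0): 0.05,   # both pull-down transistors off: strongest stacking effect
    (0, 1): 0.30,   # one pull-down transistor off
    (1, 0): 0.30,
    (1, 1): 1.00,   # pull-down network on; leakage set by the pull-up stack
}

def nand(a, b):
    return 1 - (a & b)

def circuit_leakage(bits):
    """Total leakage of a toy circuit of three NAND gates driven by inputs a-d."""
    a, b, c, d = bits
    g1, g2 = nand(a, b), nand(c, d)
    return NAND_LEAKAGE[(a, b)] + NAND_LEAKAGE[(c, d)] + NAND_LEAKAGE[(g1, g2)]

# Exhaustively search for the minimum-leakage standby input vector.
best = min(product((0, 1), repeat=4), key=circuit_leakage)
print("minimum-leakage standby vector:", best,
      "relative leakage:", round(circuit_leakage(best), 2))
```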
Design Automation Conference | 1999
Farzan Fallah; Pranav Ashar; Srinivas Devadas
Validation of RTL circuits remains the primary bottleneck in improving design turnaround time, and simulation remains the primary methodology for validation. Simulation-based validation has suffered from a disconnect between the metrics used to measure the error coverage of a set of simulation vectors, and the vector generation process. This disconnect has resulted in the simulation of virtually endless streams of vectors which achieve enhanced error coverage only infrequently. Another drawback has been that most error coverage metrics proposed have either been too simplistic or too inefficient to compute. Recently, an effective observability-based statement coverage metric was proposed along with a fast companion procedure for evaluating it. The contribution of our work is the development of a vector generation procedure targeting the observability-based statement coverage metric. Our method uses repeated coverage computation to minimize the number of vectors generated. For vector generation, we propose a novel technique to set up constraints based on the chosen coverage metric. Once the system of interacting arithmetic and Boolean constraints has been set up, it can be solved using hybrid linear programming and Boolean satisfiability methods. We present heuristics to control the size of the constraint system that needs to be solved. We present experimental results which show the viability of automatically generating vectors using our approach for industrial RTL circuits. We envision our system being used during the design process, as well as during post-design debugging.
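As a rough illustration of the coverage-directed loop described above, the Python sketch below simulates a handful of random vectors, identifies uncovered "statements", and generates a targeted vector for each one. The statements, their guard predicates, and the brute-force search are invented stand-ins; the paper's flow instead sets up and solves hybrid linear-programming and Boolean-satisfiability constraints derived from the coverage metric.

```python
import random

# Toy "RTL" statements: each is covered when its guard predicate holds
# on an applied input vector (x, y). These are stand-ins for real HDL code.
STATEMENTS = {
    "s1": lambda x, y: x > y,
    "s2": lambda x, y: x + y > 200,
    "s3": lambda x, y: x == 0 and y % 7 == 3,
}

def coverage(vectors):
    return {s for s, guard in STATEMENTS.items()
            for (x, y) in vectors if guard(x, y)}

def generate_vector_for(stmt):
    """Stand-in for the constraint-solving step: brute-force search of a
    small input space instead of hybrid LP + SAT solving."""
    guard = STATEMENTS[stmt]
    for x in range(256):
        for y in range(256):
            if guard(x, y):
                return (x, y)
    return None

vectors = [(random.randrange(256), random.randrange(256)) for _ in range(5)]
covered = coverage(vectors)
for stmt in STATEMENTS:
    if stmt not in covered:                 # target an uncovered statement
        v = generate_vector_for(stmt)
        if v is not None:
            vectors.append(v)
            covered = coverage(vectors)     # recompute coverage after each new vector
print("covered statements:", sorted(covered), "with", len(vectors), "vectors")
```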
Design Automation Conference | 1998
Farzan Fallah; Srinivas Devadas; Kurt Keutzer
Functional simulation is still the primary workhorse for verifying the functional correctness of hardware designs. Functional verification is necessarily incomplete because it is not computationally feasible to exhaustively simulate designs. It is important therefore to quantitatively measure the degree of verification coverage of the design. Coverage metrics proposed for measuring the extent of design verification provided by a set of functional simulation vectors should compute statement execution counts (controllability information), and check to see whether effects of possible errors activated by program stimuli can be observed at the circuit outputs (observability information). Unfortunately, the metrics proposed thus far either do not compute both types of information or are inefficient, i.e., the overhead of computing the metric is very large. In this paper, we provide the details of an efficient method to compute an Observability-based Code COverage Metric (OCCOM) that can be used while simulating complex HDL designs. This method offers a more accurate assessment of design verification coverage than line coverage, and is significantly more computationally efficient than prior efforts to assess observability information because it breaks up the computation into two phases: Functional simulation of a modified HDL model, followed by analysis of a flowgraph extracted from the HDL model. Commercial HDL simulators can be directly used for the time-consuming first phase, and the second phase can be performed efficiently using concurrent evaluation techniques.
Design Automation Conference | 1998
Farzan Fallah; Srinivas Devadas; Kurt Keutzer
Our strategy for automatic generation of functional vectors is based on exercising selected paths in the given hardware description language (HDL) model. The HDL model describes interconnections of arithmetic, logic and memory modules. Given a path in the HDL model, the search for input stimuli that exercise the path can be converted into a standard satisfiability checking problem by expanding the arithmetic modules into logic gates. However, this approach is not very efficient. We present a new HDL-satisfiability checking algorithm that works directly on the HDL model. The primary feature of our algorithm is a seamless integration of linear-programming techniques for feasibility checking of arithmetic equations that govern the behavior of datapath modules, and 3-SAT checking for logic equations that govern the behavior of control modules. This feature is critically important to efficiency, since it avoids module expansion and allows us to work with logic and arithmetic equations whose cardinality tracks the size of the HDL model. We describe the details of the HDL-satisfiability checking algorithm in this paper. Experimental results which show significant speedups over state-of-the-art gate-level satisfiability checking methods are included.
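The division of labor between a Boolean search over control logic and a linear-programming feasibility check over datapath constraints can be sketched on a toy example. The fragment below (assuming SciPy is available) enumerates control assignments in place of a 3-SAT solver and runs an LP feasibility check for the selected branch; the constraints are invented for illustration and the datapath variables are treated as reals, unlike the paper's algorithm.

```python
from itertools import product
from scipy.optimize import linprog   # assumes SciPy is installed

# Toy mixed system for an if/else datapath:
#   control:  sel = a AND (NOT b)                      (Boolean)
#   datapath: if sel: x + y <= 10, else: x - y >= 5    (linear)
#   with 0 <= x, y <= 20
def control_value(a, b):
    return a and not b

def datapath_feasible(sel):
    # Feasibility check only: minimize the zero objective subject to the
    # constraint of the branch selected by the control logic.
    if sel:
        A_ub, b_ub = [[1, 1]], [10]      #  x + y <= 10
    else:
        A_ub, b_ub = [[-1, 1]], [-5]     # -x + y <= -5, i.e. x - y >= 5
    res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, 20), (0, 20)])
    return res.success

# Enumerate Boolean assignments (stand-in for 3-SAT search) and, for each,
# ask the LP whether the corresponding arithmetic constraints are feasible.
for a, b in product((False, True), repeat=2):
    sel = control_value(a, b)
    if datapath_feasible(sel):
        print(f"a={a}, b={b}, sel={sel}: arithmetic constraints feasible")
```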
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2001
Farzan Fallah; Srinivas Devadas; Kurt Keutzer
Functional simulation is still the primary workhorse for verifying the functional correctness of hardware designs. Functional verification is necessarily incomplete because it is not computationally feasible to exhaustively simulate designs. It is important, therefore, to quantitatively measure the degree of verification coverage of the design. Coverage metrics proposed for measuring the extent of design verification provided by a set of functional simulation vectors should compute statement execution counts (controllability information) and check to see whether effects of possible errors activated by program stimuli can be observed at the circuit outputs (observability information). Unfortunately, the metrics proposed thus far either do not compute both types of information or are inefficient, i.e., the overhead of computing the metric is very large. In this paper, we provide the details of an efficient method to compute an observability-based code coverage metric that can be used while simulating complex hardware description language (HDL) designs. This method offers a more accurate assessment of design verification coverage than line coverage and is significantly more computationally efficient than prior efforts to assess observability information because it breaks up the computation into two phases: functional simulation of a modified HDL model followed by analysis of a flowgraph extracted from the HDL model. Commercial HDL simulators can be directly used for the time-consuming first phase and the second phase can be performed efficiently using concurrent evaluation techniques.
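The two-phase idea, tag injection during simulation followed by flowgraph analysis, can be illustrated on a toy straight-line dataflow. The sketch below is a simplified stand-in and not OCCOM itself: each "statement" injects a tag into the variable it assigns, tags propagate along dataflow edges, and only statements whose tags reach an output are reported as observable. Note how plain line coverage would count every statement as covered, while the observability check flags the one whose effect never reaches an output.

```python
# Phase 1 stand-in: each executed statement injects a tag on the variable it
# assigns. Phase 2 stand-in: propagate tags through the dataflow graph and
# report which tagged statements can reach an observed output.

# statement id -> (assigned variable, source operands), in execution order
DATAFLOW = {
    "s1": ("t1", ["a", "b"]),
    "s2": ("t2", ["c", "c"]),
    "s3": ("out", ["t1", "d"]),   # note: t2 never feeds 'out'
}
OUTPUTS = {"out"}

def observable_statements():
    reach = {}                        # variable -> set of statement tags on it
    for stmt, (target, sources) in DATAFLOW.items():
        tags = {stmt}                 # tag injected by executing this statement
        for src in sources:
            tags |= reach.get(src, set())
        reach[target] = tags
    observed = set()
    for out in OUTPUTS:
        observed |= reach.get(out, set())
    return observed

obs = observable_statements()
for stmt in DATAFLOW:
    status = "observable at an output" if stmt in obs else "executed but NOT observable"
    print(stmt, "->", status)
```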
IEEE Transactions on Very Large Scale Integration Systems | 2007
Afshin Abdollahi; Farzan Fallah; Massoud Pedram
The large magnitude of supply/ground bounces, which arise from power mode transitions in power gating structures, may cause spurious transitions in a circuit. This can result in wrong values being latched in the circuit registers. We propose a design methodology for limiting the maximum value of the supply/ground currents to a user-specified threshold level while minimizing the wake-up (sleep-to-active mode transition) time. In addition to controlling the sudden discharge of the accumulated charge in the intermediate nodes of the circuit through the sleep transistors during the wake-up transition, we can eliminate short-circuit current and spurious switching activity during this time. This is, in turn, achieved by reducing the amount of charge that must be removed from the intermediate nodes of the circuit and by turning on different parts of the circuit in a way that causes a uniform distribution of current over the wake-up time. Simulation results show that, compared to existing wake-up scheduling methods, the proposed techniques result in a 1-2 orders of magnitude improvement in the product of the maximum ground current and the wake-up time.
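A simplified view of the underlying scheduling problem is sketched below: given a made-up per-block wake-up current profile and a user-specified peak-current threshold, a greedy routine staggers block turn-on times so that the summed current never exceeds the threshold while keeping the overall wake-up time short. The block names, current values, and greedy ordering are illustrative assumptions, not the paper's algorithm.

```python
# Each block's wake-up draws a short current pulse; BLOCKS[name][i] is the
# (made-up) current drawn i cycles after that block's sleep transistor turns on.
BLOCKS = {
    "alu":   [8, 4, 2],
    "regs":  [6, 3, 1],
    "ctrl":  [5, 2, 1],
    "cache": [9, 5, 2],
}
I_MAX = 12   # user-specified peak-current threshold (arbitrary units)

def peak_with(schedule, extra_block, start):
    """Peak total current if extra_block is turned on at cycle 'start'."""
    horizon = max((s + len(BLOCKS[b]) for b, s in schedule.items()), default=0)
    horizon = max(horizon, start + len(BLOCKS[extra_block]))
    total = [0] * horizon
    for b, s in list(schedule.items()) + [(extra_block, start)]:
        for i, c in enumerate(BLOCKS[b]):
            total[s + i] += c
    return max(total)

# Greedy: wake blocks largest-first, each at the earliest cycle that keeps
# the summed wake-up current under I_MAX.
schedule = {}
for block in sorted(BLOCKS, key=lambda b: -BLOCKS[b][0]):
    start = 0
    while peak_with(schedule, block, start) > I_MAX:
        start += 1
    schedule[block] = start

print("wake-up schedule (block -> start cycle):", schedule)
```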
IEEE Transactions on Very Large Scale Integration Systems | 2008
Behnam Amelifard; Farzan Fallah; Massoud Pedram
Aggressive CMOS scaling results in low threshold voltage and thin oxide thickness for transistors manufactured in the deep submicrometer regime. As a result, reducing the subthreshold and tunneling gate leakage currents has become one of the most important criteria in the design of VLSI circuits. This paper presents a method based on dual-Vt and dual-Tox assignment to reduce the total leakage power dissipation of static random access memories (SRAMs) while maintaining their performance. The proposed method is based on the observation that read and write delays of a memory cell in an SRAM block depend on the physical distance of the cell from the sense amplifier and the decoder. Thus, the idea is to deploy different configurations of six-transistor SRAM cells corresponding to different threshold voltage and oxide thickness assignments for the transistors. Unlike other techniques for low-leakage SRAM design, the proposed technique incurs neither area nor delay overhead. In addition, it results in a minor change in the SRAM design flow. The leakage saving achieved by using this technique is a function of the values of the high threshold voltage and the oxide thickness, as well as the number of rows and columns in the cell array. Simulation results with a 65-nm process demonstrate that this technique can reduce the total leakage power dissipation of a 64 × 512 SRAM array by 33% and that of a 32 × 512 SRAM array by 40%.
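The core observation, that cells closer to the decoder and sense amplifiers have timing slack which can absorb a slower low-leakage cell variant, can be sketched with a toy linear delay model. The coefficients, delay penalty, and threshold below are placeholders chosen for illustration, not the paper's characterization data.

```python
# Toy delay model: a cell's access delay grows with its bitline (row) and
# wordline (column) distance from the sense amplifiers and decoder.
ROWS, COLS = 64, 512
BASE_DELAY, ROW_COST, COL_COST = 1.00, 0.004, 0.001   # low-Vt / thin-Tox cell
HVT_PENALTY = 0.35   # extra delay of the high-Vt / thick-Tox (low-leakage) cell

def access_delay(row, col, low_leakage):
    d = BASE_DELAY + ROW_COST * row + COL_COST * col
    return d + (HVT_PENALTY if low_leakage else 0.0)

# The array's cycle time is set by the farthest (slowest) fast cell.
t_worst = access_delay(ROWS - 1, COLS - 1, low_leakage=False)

# Swap every cell whose low-leakage variant still meets that worst-case delay.
low_leakage_cells = sum(
    1
    for r in range(ROWS)
    for c in range(COLS)
    if access_delay(r, c, low_leakage=True) <= t_worst
)
total = ROWS * COLS
print(f"{low_leakage_cells}/{total} cells "
      f"({100.0 * low_leakage_cells / total:.1f}%) can use the low-leakage variant "
      "with no cycle-time penalty")
```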
International Conference on Computer Design | 2001
Serdar Tasiran; Farzan Fallah; David Chinnery; Scott J. Weber; Kurt Keutzer
We present a simulation-based semi-formal verification method for sequential circuits described at the register-transfer level. The method consists of an iterative loop where coverage analysis guides input pattern generation. An observability-based coverage metric is used to identify portions of the circuit not exercised by simulation. A heuristic algorithm then selects probability distributions for biased random input pattern generation that targets non-covered portions. This algorithm is based on an approximate analysis of the circuit modeled as a Markov chain at steady state. Node controllability and observability are estimated using a limited depth reconvergence analysis and an implicit algorithm for manipulating probability distributions and determining steady-state behavior. An optimization algorithm iteratively perturbs the probability distributions of the primary inputs in order to improve estimated coverage. The coverage enhancement achieved by our approach is demonstrated on benchmarks from the ISCAS89 and VIS suites.
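A drastically simplified version of the probability-perturbation step is sketched below: instead of the paper's steady-state Markov-chain analysis, the toggle rate of a single hard-to-exercise node is estimated by Monte Carlo simulation, and each primary-input bias is greedily nudged whenever the estimate improves. The node definition, step size, and bounds are invented for illustration.

```python
import random

# Toy coverage target: a 4-input node that toggles only when at least three
# inputs are 1. With uniform inputs this is rare; biasing the per-input
# probabilities makes it much more likely.
def hard_node(bits):
    return sum(bits) >= 3

def estimated_coverage(probs, trials=2000):
    rng = random.Random(0)   # fixed seed so estimates are repeatable
    hits = sum(hard_node([rng.random() < p for p in probs]) for _ in range(trials))
    return hits / trials

# Greedy perturbation of the per-input '1' probabilities (stand-in for the
# paper's implicit steady-state analysis): nudge each probability up or down
# and keep the change if the estimated coverage improves.
probs = [0.5, 0.5, 0.5, 0.5]
step = 0.1
for _ in range(5):
    for i in range(len(probs)):
        best_p, best_cov = probs[i], estimated_coverage(probs)
        for cand in (probs[i] - step, probs[i] + step):
            if 0.05 <= cand <= 0.95:
                trial = probs[:i] + [cand] + probs[i + 1:]
                cov = estimated_coverage(trial)
                if cov > best_cov:
                    best_p, best_cov = cand, cov
        probs[i] = best_p

print("biased input probabilities:", [round(p, 2) for p in probs])
print("estimated toggle rate of hard node:", estimated_coverage(probs))
```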
International Symposium on Quality Electronic Design | 2005
Behnam Amelifard; Farzan Fallah; Massoud Pedram
Based on the idea of sharing two adders used in the carry select adder (CSA), a new design of a low-power high-performance adder is presented. The new adder is faster than a ripple carry adder (RCA), but slower than a CSA. On the other hand, its area and power dissipation are smaller than those of a CSA.
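For reference, a conventional carry-select adder computes each block's sum twice, once per speculative carry-in, which is exactly the duplication the shared-adder idea targets. The bit-level Python sketch below models the conventional CSA baseline only; it does not implement the proposed shared design.

```python
def ripple_carry_add(a_bits, b_bits, carry_in):
    """Bit-level ripple-carry addition; bits are lists, LSB first."""
    out, carry = [], carry_in
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out, carry

def carry_select_add(a_bits, b_bits, block=4):
    """Conventional carry-select adder: each block computes its sum twice,
    once assuming carry-in 0 and once assuming carry-in 1, then a multiplexer
    picks the correct result once the real carry arrives."""
    result, carry = [], 0
    for i in range(0, len(a_bits), block):
        a_blk, b_blk = a_bits[i:i + block], b_bits[i:i + block]
        sum0, c0 = ripple_carry_add(a_blk, b_blk, 0)   # speculative, cin = 0
        sum1, c1 = ripple_carry_add(a_blk, b_blk, 1)   # speculative, cin = 1
        result += sum1 if carry else sum0              # multiplexer select
        carry = c1 if carry else c0
    return result, carry

def to_bits(x, width):              # LSB-first bit list
    return [(x >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

s, c = carry_select_add(to_bits(1234, 16), to_bits(56789, 16))
print(from_bits(s) + (c << 16))     # 58023
```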
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2006
Anup Hosangadi; Farzan Fallah; Ryan Kastner
Polynomial expressions are frequently encountered in many application domains, particularly in signal processing and computer graphics. Conventional compiler techniques for redundancy elimination such as common subexpression elimination (CSE) are not suited for manipulating polynomial expressions, and designers often resort to hand optimizing these expressions. This paper leverages the algebraic techniques originally developed for multilevel logic synthesis to optimize polynomial expressions by factoring and eliminating common subexpressions. The proposed algorithm was tested on a set of benchmark polynomial expressions where savings of 26.7% in latency and 26.4% in energy consumption were observed for computing these expressions on the StrongARM SA1100 processor core. When these expressions were synthesized in custom hardware, average energy savings of 63.4% for minimum hardware constraints and 24.6% for medium hardware constraints over CSE were observed.
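The flavor of this optimization, algebraic restructuring followed by common-subexpression elimination, can be reproduced on small polynomials with SymPy (assumed available). The expressions and operation counts below are illustrative only, and the transformations are SymPy's generic Horner and CSE rewrites, not the paper's algebraic kernel-extraction algorithm.

```python
import sympy as sp   # assumes SymPy is installed

x, y = sp.symbols("x y")

def op_count(expr):
    """Rough count of operator nodes in an expression tree."""
    return sp.count_ops(expr, visual=False)

# Algebraic restructuring of a univariate polynomial via Horner's rule.
p = 4*x**4 + 3*x**3 + 2*x**2 + 5*x + 7
print("expanded:", p, "| ops:", op_count(p))
print("Horner  :", sp.horner(p), "| ops:", op_count(sp.horner(p)))

# Common-subexpression elimination across two related (made-up) expressions.
e1 = (x + y)**2 * (x - y) + (x + y)
e2 = (x + y)**2 - 3*(x - y)
replacements, reduced = sp.cse([e1, e2])
for sym, sub in replacements:
    print(f"{sym} = {sub}")
print("reduced:", reduced)
```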