
Publication


Featured research published by Rajeev K. Ranjan.


computer aided verification | 1996

VIS: A System for Verification and Synthesis

Robert K. Brayton; Gary D. Hachtel; Alberto L. Sangiovanni-Vincentelli; Fabio Somenzi; Adnan Aziz; Szu-Tsung Cheng; Stephen A. Edwards; Sunil P. Khatri; Yuji Kukimoto; Abelardo Pardo; Shaz Qadeer; Rajeev K. Ranjan; Shaker Sarwary; Thomas R. Shiple; Gitanjali Swamy; Tiziano Villa

Abstraction: Manual abstraction can be performed by giving a file containing the names of variables to abstract. For each variable appearing in the file, a new primary input node is created to drive all the nodes that were previously driven by the variable. Abstracting a net effectively allows it to take any value in its range at every clock cycle.

Fair CTL model checking and language emptiness check: VIS performs fair CTL model checking under Büchi fairness constraints. In addition, VIS can perform language emptiness checking by model checking the formula EG true. The language of a design is given by sequences over the set of reachable states that do not violate the fairness constraint. The language emptiness check can be used to perform language containment by expressing the set of bad behaviors as another component of the system. If model checking or language emptiness checking fails, VIS reports the failure with a counterexample, i.e., behavior seen in the system that does not satisfy the property (for model checking), or valid behavior seen in the system (for language emptiness). This is called the “debug” trace. Debug traces list a set of states that are on a path to a fair cycle and fail the CTL formula.

Equivalence checking: VIS provides the capability to check the combinational equivalence of two designs. An important use of combinational equivalence is to provide a sanity check when re-synthesizing portions of a network. VIS also provides the capability to test the sequential equivalence of two designs. Sequential verification is done by building the product finite state machine and checking whether a state where the values of two corresponding outputs differ can be reached from the set of initial states of the product machine. If such a state is reached, a debug trace is provided. Both combinational and sequential verification are implemented using BDD-based routines.

Simulation: VIS also provides traditional design verification in the form of a cycle-based simulator that uses BDD techniques. Since VIS performs both formal verification and simulation using the same data structures, consistency between them is ensured. VIS can generate random input patterns or accept user-specified input patterns. Any subtree of the specified hierarchy may be simulated.
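The product-machine search described above can be illustrated with an explicit-state sketch. VIS performs the same check symbolically with BDDs; the toy machines and function names below are invented for illustration only.

```python
from collections import deque

# Two hypothetical toy machines over one input bit. Machine A outputs the
# previous input; machine B computes the same function with a redundant
# two-states-per-value encoding.
def next_a(s, x): return x
def out_a(s, x): return s

def next_b(s, x): return 2 * x + (s % 2)   # high bit remembers the input
def out_b(s, x): return 1 if s >= 2 else 0

def sequential_equiv(init_a, init_b, na, oa, nb, ob, inputs=(0, 1)):
    """Breadth-first search of the product machine for a reachable state
    where corresponding outputs differ; returns the input sequence leading
    there (a debug trace), or None if the machines are equivalent."""
    seen = {(init_a, init_b)}
    queue = deque([((init_a, init_b), [])])
    while queue:
        (sa, sb), trace = queue.popleft()
        for x in inputs:
            if oa(sa, x) != ob(sb, x):
                return trace + [x]              # counterexample found
            nxt = (na(sa, x), nb(sb, x))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [x]))
    return None                                 # machines are equivalent
```

The symbolic version replaces the explicit `seen` set and queue with a BDD-encoded reached-state set and image computation, but the fixpoint structure is the same.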


design automation conference | 1996

High performance BDD package by exploiting memory hierarchy

Jagesh V. Sanghavi; Rajeev K. Ranjan; Robert K. Brayton; Alberto L. Sangiovanni-Vincentelli

The success of binary decision diagram (BDD) based algorithms for verification depends on the availability of a high performance package to manipulate very large BDDs. State-of-the-art BDD packages, based on the conventional depth-first technique, limit the size of the BDDs due to disorderly memory access patterns that result in unacceptably high elapsed time when the BDD size exceeds the main memory capacity. We present a high performance BDD package that enables manipulation of very large BDDs by using an iterative breadth-first technique directed towards localizing the memory accesses to exploit the memory system hierarchy. The new memory-oriented performance features of this package are: 1) an architecture-independent customized memory management scheme, 2) the ability to issue multiple independent BDD operations (superscalarity), and 3) the ability to perform multiple BDD operations even when the operands of some BDD operations are the results of other operations yet to be completed (pipelining). A comprehensive set of BDD manipulation algorithms is implemented using the above techniques. Unlike the breadth-first algorithms presented in the literature, the new package is faster than the state-of-the-art BDD package by a factor of up to 15, even for BDD sizes that fit within the main memory. For BDD sizes that do not fit within the main memory, a performance improvement of up to a factor of 100 can be achieved.
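The breadth-first idea can be sketched minimally as a two-phase AND: a top-down phase that expands operation requests one variable level at a time (so all node accesses at a level happen together, which is the source of the locality), and a bottom-up phase that resolves them. This is an illustrative simplification; the paper's package adds the customized memory management, superscalarity, and pipelining, none of which appear here.

```python
from collections import defaultdict

class BDD:
    """Toy BDD with a breadth-first AND. Terminals are the ints 0 and 1;
    internal nodes are interned (var, low, high) triples."""
    def __init__(self, nvars):
        self.nvars = nvars
        self.unique = {}          # (var, low, high) -> node id
        self.nodes = {}           # node id -> (var, low, high)
        self.next_id = 2          # ids 0 and 1 are the terminals

    def level(self, u):
        return self.nodes[u][0] if u > 1 else self.nvars  # terminals last

    def mk(self, var, low, high):
        if low == high:
            return low            # redundant-test reduction
        key = (var, low, high)
        if key not in self.unique:
            self.unique[key] = self.next_id
            self.nodes[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def var(self, i):
        return self.mk(i, 0, 1)

    def cofactors(self, u, lvl):
        if self.level(u) == lvl:
            _, lo, hi = self.nodes[u]
            return lo, hi
        return u, u               # u does not test this variable

    def bf_and(self, f, g):
        # Phase 1: expand requests top-down, one level at a time.
        levels, result = defaultdict(list), {}
        def issue(u, v):
            if (u, v) not in result:
                result[(u, v)] = None
                lvl = min(self.level(u), self.level(v))
                if lvl < self.nvars:
                    levels[lvl].append((u, v))
        issue(f, g)
        for lvl in range(self.nvars):
            for u, v in levels[lvl]:
                u0, u1 = self.cofactors(u, lvl)
                v0, v1 = self.cofactors(v, lvl)
                issue(u0, v0); issue(u1, v1)
        # Phase 2: resolve requests bottom-up; children are always ready,
        # because every issued child request sits on a deeper level.
        def resolve(u, v):
            if u == 0 or v == 0: return 0
            if u == 1: return v
            if v == 1: return u
            return result[(u, v)]
        for lvl in range(self.nvars - 1, -1, -1):
            for u, v in levels[lvl]:
                u0, u1 = self.cofactors(u, lvl)
                v0, v1 = self.cofactors(v, lvl)
                result[(u, v)] = self.mk(lvl, resolve(u0, v0), resolve(u1, v1))
        return resolve(f, g)

    def evaluate(self, u, assignment):
        while u > 1:
            var, lo, hi = self.nodes[u]
            u = hi if assignment[var] else lo
        return u
```

Because each pass touches only the node arrays of a single level, a real implementation can keep that level's nodes contiguous in memory and stream them through the cache, rather than chasing pointers across the whole diagram as a depth-first APPLY does.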


design automation conference | 1994

HSIS: A BDD-Based Environment for Formal Verification

Adnan Aziz; Felice Balarin; Szu-Tsung Cheng; Ramin Hojati; Timothy Kam; Sriram C. Krishnan; Rajeev K. Ranjan; Thomas R. Shiple; Vigyan Singhal; Serdar Tasiran; Huey-Yih Wang; Robert K. Brayton; Alberto L. Sangiovanni-Vincentelli

Functional and timing verification are currently the bottlenecks in many design efforts. Simulation and emulation are extensively used for verification. Formal verification is now gaining acceptance in advanced design groups. This has been facilitated by the use of binary decision diagrams (BDDs). This paper describes the essential features of HSIS, a BDD-based environment for formal verification: 1. Open language design, made possible by using a compact and expressive intermediate format known as BLIF-MV. Currently, a synthesis subset of Verilog is supported. 2. Support for both model checking and language containment in a single unified environment, using expressive fairness constraints. 3. Efficient BDD-based algorithms. 4. Debugging environment for both language containment and model checking. 5. Automatic algorithms for the early quantification problem. 6. Support for state minimization using bisimulation and similar techniques. HSIS allows us to experiment with formal verification techniques on a variety of design problems. It also provides an environment for further research in formal verification.


computer aided verification | 1998

A Comparison of Presburger Engines for EFSM Reachability

Thomas R. Shiple; James H. Kukula; Rajeev K. Ranjan

Implicit state enumeration for extended finite state machines relies on a decision procedure for Presburger arithmetic. We compare the performance of two Presburger packages, the automata-based Shasta package and the polyhedra-based Omega package. While the raw speed of each of these two packages can be superior to the other by a factor of 50 or more, we found the asymptotic performance of Shasta to be equal or superior to that of Omega for the experiments we performed.
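Implicit state enumeration is the least-fixpoint computation sketched below. In the paper the state sets are Presburger-definable, manipulated by Shasta's automata or Omega's polyhedra; the explicit Python sets and the one-variable transition here are simplifications for illustration.

```python
def reach(init_states, image):
    """Least-fixpoint reachability: repeatedly add the image of the
    frontier until no new states appear."""
    reached = set(init_states)
    frontier = set(init_states)
    while frontier:
        frontier = image(frontier) - reached
        reached |= frontier
    return reached

# Hypothetical one-variable EFSM with transition x := (x + 3) mod 10.
step = lambda states: {(x + 3) % 10 for x in states}
```

With Presburger-representable sets, the same loop runs over formulas instead of elements, and termination is checked by testing whether the new frontier formula is unsatisfiable.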


international conference on computer design | 1996

Binary decision diagrams on network of workstations

Rajeev K. Ranjan; Jagesh V. Sanghavi; Robert K. Brayton; Alberto L. Sangiovanni-Vincentelli

The success of all binary decision diagram (BDD) based synthesis and verification algorithms depends on the ability to efficiently manipulate very large BDDs. We present algorithms for manipulating very large BDDs on a network of workstations (NOW). A NOW provides a collection of main memories and disks which can be used effectively to create and manipulate very large BDDs. To make efficient use of the memory resources of a NOW while completing execution in a reasonable amount of wall clock time, an extension of the breadth-first technique is used to manipulate BDDs. BDDs are partitioned such that nodes for a set of consecutive variables are assigned to the same workstation. We present experimental results to demonstrate the capability of this approach and point toward its potential impact for manipulating very large BDDs.
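The partitioning scheme can be sketched as a simple level-to-workstation map. The function name and the even split are assumptions for illustration; the paper's requirement is only that each workstation owns a contiguous block of variable levels.

```python
def partition_levels(nvars, nworkers):
    """Assign each BDD variable level to a workstation so that every
    workstation owns a contiguous block of consecutive levels; all nodes
    labeled with those variables then live in that machine's memory."""
    base, extra = divmod(nvars, nworkers)
    owner = []
    for w in range(nworkers):
        owner.extend([w] * (base + (1 if w < extra else 0)))
    return owner
```

Contiguity matters because a breadth-first sweep visits levels in order: each workstation processes its block and then hands the frontier of requests to the owner of the next block, rather than exchanging messages on every node access.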


international conference on computer aided design | 1998

On the optimization power of retiming and resynthesis transformations

Rajeev K. Ranjan; Vigyan Singhal; Fabio Somenzi; Robert K. Brayton

Retiming and resynthesis transformations can be used for optimizing the area, power, and delay of sequential circuits. Even though this technique has been known for more than a decade, its exact optimization capability has not been formally established. We show that retiming and resynthesis can exactly implement 1-step equivalent state transition graph transformations. This result is the strongest to date. We also show how the notions of retiming and resynthesis can be moderately extended to achieve more powerful state transition graph transformations. Our work provides a theoretical foundation for practical retiming-and-resynthesis-based optimization and verification.


design, automation, and test in europe | 1999

Using combinational verification for sequential circuits

Rajeev K. Ranjan; Vigyan Singhal; Fabio Somenzi; Robert K. Brayton

Retiming combined with combinational optimization is a powerful sequential synthesis method. However, this methodology has not found wide application because formal sequential verification is not practical, and current simulation methodology requires a correspondence of latches, disallowing any movement of latches. We present a practical verification technique which permits such sequential synthesis for a class of circuits. In particular, we require certain constraints to be met on the feedback paths of the latches involved in the retiming process. For a general circuit, we can satisfy these constraints by fixing the location of some latches, e.g., by making them observable. We show that equivalence checking after performing repeated retiming and synthesis on this class of circuits reduces to a combinational verification problem. We also demonstrate that our methodology covers a large class of circuits by applying it to a set of benchmarks and industrial designs.


design automation conference | 2009

Beyond verification: leveraging formal for debugging

Rajeev K. Ranjan; Claudionor Jose Nunes Coelho; Sebastian Skalberg

The latest advancements in the commercial formal model checkers have enabled the integration of formal property verification with the conventional testbench based methods in the overall verification plan. This has led to significant verification productivity across the entire design flow (from architectural verification to post-silicon debugging). As verification productivity is improved, debugging efficiency has become more important than before. In this paper, we discuss how formal technology can be leveraged to bring efficiency in the debugging process. In particular, we discuss how “behavioral indexing” enables a top-down view of the counter-example and facilitates debugging by overlaying a higher abstraction view on the bit-level counter-example. We also discuss how formal technology can be leveraged to do “what-if” analysis to localize the root cause of the bug. We also discuss how formal technology supports the even more challenging task of traceless debugging (the process of debugging the “absence of witness/counter-example”).


design automation conference | 2007

Verification Coverage: When is Enough, Enough?

Francine Bacchini; Alan J. Hu; Tom Fitzpatrick; Rajeev K. Ranjan; David Lacey; Mercedes Tan; Andrew Piziali; Avi Ziv

For EDA users worldwide, the functional verification of complex chips poses a daunting challenge that consumes not just increasingly precious amounts of time, but also limited resources and available budget. The introduction of new tools has driven powerful new methodologies and spurred further debate on the issue of coverage interoperability: how heterogeneous verification tools should handle and exchange coverage data. New methodologies hold promise for better decision-making, as does a baseline standard for coverage interoperability. The various new tools and technologies that have arrived on the scene support flows that can lead to better functional verification decisions and higher quality products.


international conference on computer design | 1997

Dynamic reordering in a breadth-first manipulation based BDD package: challenges and solutions

Rajeev K. Ranjan; Wilsin Gosti; Robert K. Brayton; Alberto L. Sangiovanni-Vincentelli

The breadth-first manipulation technique has proven effective in dealing with very large sized BDDs. However, until now the lack of dynamic variable reordering has remained an obstacle in its acceptance. The goal of the work is to provide efficient techniques to address this issue. After identifying the problems with implementing variable swapping (the core operation in dynamic reordering) in breadth-first based packages, the authors propose techniques to handle the computational and memory overheads. They feel that combining dynamic reordering with the powerful manipulation algorithms of a breadth-first based scheme can significantly enhance the performance of BDD based algorithms. The efficiency of the proposed techniques is demonstrated on a range of examples.
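Why dynamic reordering is worth the engineering trouble can be seen from a small node-count experiment: the same function can have very different BDD sizes under different variable orders. The truth-table-based counter below is an illustrative stand-in; a real package computes size changes incrementally via adjacent-variable swaps instead of rebuilding anything.

```python
def bdd_size(tt):
    """Count the internal nodes of the ROBDD for a boolean function given
    as a truth table (a tuple of 0/1 of length 2**n, first variable as the
    most significant index bit)."""
    nodes = set()
    def walk(t):
        if all(b == t[0] for b in t):
            return                    # constant subfunction: no node
        half = len(t) // 2
        lo, hi = t[:half], t[half:]
        if lo == hi:
            walk(lo)                  # function does not test this variable
        else:
            nodes.add(t)              # distinct subfunction = one node
            walk(lo); walk(hi)
    walk(tuple(tt))
    return len(nodes)

def table(f, order):
    """Truth table of keyword-argument function f under a variable order."""
    n = len(order)
    bits = lambda i: [(i >> (n - 1 - k)) & 1 for k in range(n)]
    return tuple(f(**dict(zip(order, bits(i)))) for i in range(2 ** n))

f = lambda x1, x2, x3, x4: (x1 & x2) | (x3 & x4)
```

For this function, the order x1, x2, x3, x4 yields 4 nodes while the interleaved order x1, x3, x2, x4 yields 6; for wider AND-OR functions the gap grows exponentially, which is what makes reordering worth its computational and memory overheads.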

Collaboration


Dive into Rajeev K. Ranjan's collaborations.

Top Co-Authors

Adnan Aziz

University of California

Fabio Somenzi

University of Colorado Boulder

Vigyan Singhal

Lawrence Berkeley National Laboratory

Claudionor José Nunes Coelho

Universidade Federal de Minas Gerais
