Publications


Featured research published by Carl Pixley.


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1992

A theory and implementation of sequential hardware equivalence

Carl Pixley

A theory of sequential hardware equivalence is presented. This theory includes the notions of gate-level model (GLM), hardware finite state machine (HFSM), quotient machine, state equivalence (∼), alignability, resetability, essential resetability, isomorphism, and sequential hardware equivalence. The theory is motivated by (1) the observation that it is impossible to control the initial state of a machine when it is powered on and (2) the desire to decide equivalence of two designs based solely on their netlists and logic device models, without knowledge of intended initial states or intended environments. Algorithms based upon a binary decision diagram (BDD) implementation of predicate calculus over Boolean domains are presented. This calculus is employed to calculate properties of hardware designs. Experimental results based upon these algorithms as implemented in the MCC sequential equivalence tool (SET) are presented.
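
The alignability notion lends itself to a small illustration. The following is a minimal, explicit-state sketch (the 3-state machine and all names are invented) of two ingredients of the theory: computing state equivalence (∼) by partition refinement, and searching for an aligning sequence that drives every possible power-up state into a single equivalence class. The paper's implementation is symbolic, via BDDs over netlists; nothing below reflects SET's actual code.

# Toy, explicit-state illustration of state equivalence and alignability.
STATES = ['s0', 's1', 's2']
INPUTS = [0, 1]
delta = {('s0', 0): 's1', ('s0', 1): 's0',   # next-state function
         ('s1', 0): 's1', ('s1', 1): 's2',
         ('s2', 0): 's1', ('s2', 1): 's2'}
lam   = {('s0', 0): 0, ('s0', 1): 0,         # output function
         ('s1', 0): 0, ('s1', 1): 1,
         ('s2', 0): 0, ('s2', 1): 1}

def refine_equivalence():
    """Partition states into equivalence classes by iterative refinement."""
    part = {s: 0 for s in STATES}            # start: everything equivalent
    while True:
        sig = {s: (tuple(lam[(s, i)] for i in INPUTS),
                   tuple(part[delta[(s, i)]] for i in INPUTS)) for s in STATES}
        ids = {v: n for n, v in enumerate(sorted(set(sig.values())))}
        new = {s: ids[sig[s]] for s in STATES}
        if new == part:
            return part
        part = new

def aligning_sequence(part, max_len=8):
    """BFS for an input sequence driving *every* power-up state into one
    equivalence class -- the machine is then alignable."""
    start = tuple(sorted(set(STATES)))
    frontier, seen = [(start, [])], {start}
    while frontier:
        states, seq = frontier.pop(0)
        if len({part[s] for s in states}) == 1:
            return seq                       # all remaining states equivalent
        if len(seq) >= max_len:
            continue
        for i in INPUTS:
            nxt = tuple(sorted({delta[(s, i)] for s in states}))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, seq + [i]))
    return None

part = refine_equivalence()
print('equivalence classes:', part)
print('aligning sequence:', aligning_sequence(part))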


Journal of Electronic Testing | 1993

Synchronizing sequences and symbolic traversal techniques in test generation

Hyunwoo Cho; Seh-Woong Jeong; Fabio Somenzi; Carl Pixley

A synchronizing sequence drives a circuit from an arbitrary power-up state into a unique state. Test generation for a circuit without a reset state can be much simplified if the circuit has a synchronizing sequence. In this article, a framework and algorithms for test generation based on the multiple observation time strategy are developed by taking advantage of synchronizing sequences. Although the multiple observation time strategy has been shown to provide higher fault coverage than the conventional single observation time strategy, until now it has required a much more complex tester operation model (referred to as the Multiple Observation time-Multiple Reference (MOMR) strategy in the sequel) than the conventional one. The overhead of MOMR, exponential in the worst case, has prevented widespread use of the method. However, when a circuit is synchronizable, test generation can employ the multiple observation time strategy and achieve better fault coverage without resorting to MOMR. This testing strategy is referred to as the Multiple Observation time-Single Reference (MOSR) strategy. We prove in this article that the fault coverage achievable under MOMR can also be obtained under MOSR if the circuit under test generation is synchronizable. We investigate how a synchronizing sequence simplifies test generation and allows the use of MOSR under the multiple observation time strategy. Experimental results show that the proposed framework and algorithms achieve higher fault coverage and large savings in CPU time over both existing single observation time strategy methods and other multiple observation time strategy methods.
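
To make the MOSR idea concrete, here is a toy sketch (the machine and both sequences are invented for illustration): because a synchronizing prefix drives the circuit into one known state from any power-up state, a single reference response suffices for the subsequent test, with no MOMR-style branching at the tester.

# Why a synchronizing prefix yields a single-reference test (MOSR).
delta = {('s0', 0): 's1', ('s1', 0): 's1', ('s2', 0): 's1',  # input 0 synchronizes to s1
         ('s0', 1): 's0', ('s1', 1): 's2', ('s2', 1): 's2'}
lam   = {('s0', 0): 0, ('s0', 1): 0, ('s1', 0): 0,
         ('s1', 1): 1, ('s2', 0): 0, ('s2', 1): 1}

def run(state, inputs):
    outs = []
    for i in inputs:
        outs.append(lam[(state, i)])
        state = delta[(state, i)]
    return state, outs

sync = [0]         # synchronizing sequence: drives any state to s1
test = [1, 0, 1]   # functional test applied after synchronization

responses = set()
for s0 in ['s0', 's1', 's2']:       # every possible power-up state
    s, _ = run(s0, sync)            # outputs during sync are ignored
    _, outs = run(s, test)
    responses.add(tuple(outs))

# One reference response, regardless of power-up state:
assert len(responses) == 1
print('single reference response:', responses.pop())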


Design Automation Conference | 1992

Exact calculation of synchronization sequences based on binary decision diagrams

Carl Pixley; Seh-Woong Jeong; Gary D. Hachtel

A synchronization sequence for a synchronous design D is a sequence of primary input vectors which, when applied to any initial state of D, will drive D to a single state, called a reset state. The authors present efficient methods based upon the universal alignment theorem and binary decision diagrams to compute a synchronization sequence, to compute a tight lower bound for the length of such a sequence, and to check that an initial state given in the specification is a reset state. Experiments show that the proposed method can handle fairly large circuits and that the length of the actual synchronization sequence computed is quite close to the lower bound.
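
A minimal explicit-state sketch of the computation the paper performs symbolically: breadth-first search over sets of possible current states finds a shortest synchronizing sequence, and the singleton it reaches is a reset state. The example machine is hypothetical; on real designs the explicit set representation blows up, which is why the paper uses BDDs.

# Explicit-state synchronizing-sequence computation by BFS over state sets.
from collections import deque

STATES = ['a', 'b', 'c', 'd']
INPUTS = [0, 1]
delta = {('a', 0): 'b', ('b', 0): 'b', ('c', 0): 'd', ('d', 0): 'b',
         ('a', 1): 'a', ('b', 1): 'c', ('c', 1): 'c', ('d', 1): 'c'}

def synchronizing_sequence():
    start = frozenset(STATES)            # unknown power-up: could be any state
    queue, seen = deque([(start, [])]), {start}
    while queue:
        cur, seq = queue.popleft()
        if len(cur) == 1:                # one state left: `seq` synchronizes,
            return seq, next(iter(cur))  # and that state is a reset state
        for i in INPUTS:
            nxt = frozenset(delta[(s, i)] for s in cur)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [i]))
    return None, None                    # machine is not synchronizable

seq, reset_state = synchronizing_sequence()
print('synchronizing sequence:', seq, '-> reset state:', reset_state)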


Computer Aided Verification | 1998

Design Constraints in Symbolic Model Checking

Matt Kaufmann; Andrew K. Martin; Carl Pixley

A time-consuming and error-prone activity in symbolic model checking is the construction of environments. We present a technique for modeling environmental constraints that avoids the need for explicit construction of environments. Moreover, our approach supports an assume/guarantee style of reasoning and accommodates simulation monitors. We give examples of the use of constraints in PowerPC™ verification.
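
A toy sketch of the central idea (the counter design, the constraint, and all names are invented; the paper works on BDD-represented relations inside a symbolic model checker): instead of building an environment machine, a predicate over state and inputs is conjoined into image computation, and properties are then checked under that assumption.

# Constrained reachability instead of an explicit environment model.
def next_state(state, inc):
    return (state + inc) % 4             # tiny design: a 2-bit counter

def constraint(state, inc):
    # Hypothetical environmental assumption: the environment never raises
    # `inc` while the design is in state 1.
    return not (state == 1 and inc == 1)

def reachable(init):
    """Reachability where images are taken only over constrained behaviors."""
    frontier, seen = {init}, {init}
    while frontier:
        new = set()
        for s in frontier:
            for inc in (0, 1):
                if constraint(s, inc):    # prune disallowed input behaviors
                    t = next_state(s, inc)
                    if t not in seen:
                        new.add(t)
        seen |= new
        frontier = new
    return seen

r = reachable(0)
assert 3 not in r                        # the "guarantee" holds under the assumption
print('reachable under constraint:', sorted(r))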


Design Automation Conference | 1995

The Validity of Retiming Sequential Circuits

Vigyan Singhal; Carl Pixley; Richard L. Rudell; Robert K. Brayton

Retiming has been proposed as an optimization step for sequential circuits represented at the netlist level. Retiming moves the latches across the logic gates and, in doing so, changes the number of latches and the longest path delay between the latches. In this paper we show by example that retiming a design may lead to differing simulation results when the retimed design replaces the original design. We also show, by example, that retiming may not preserve the testability of a sequential test sequence for a given stuck-at fault as measured by a simulator. We identify the cause of the problem as forward retiming moves across multiple-fanout points in the circuit. The primary contribution of this paper is to show that, while an accurate logic simulation may distinguish the retimed circuit from the original circuit, a conservative three-valued simulator cannot do so. Hence, retiming is a safe operation when used in a design methodology based on conservative three-valued simulation that starts each latch with the unknown value.
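
The fanout problem and the three-valued safety argument fit in a few lines. The sketch below (circuit shapes invented for illustration) contrasts an accurate two-valued simulation, which can distinguish a design from its forward-retimed version, with a conservative three-valued simulation starting all latches at X, which cannot.

# Two-valued vs. conservative three-valued simulation of a retimed fanout.
X = 'X'   # the unknown value

def xor3(a, b):
    # Conservative three-valued XOR: unknown if either input is unknown.
    return X if X in (a, b) else a ^ b

# Original: a single latch q feeds both inputs of an XOR, so an accurate
# two-valued simulation yields 0 whatever the power-up value of q.
def original(q):
    return xor3(q, q)

# Retimed: the latch was moved forward across the fanout point and thereby
# duplicated into q1 and q2, which may power up with different values.
def retimed(q1, q2):
    return xor3(q1, q2)

# An accurate two-valued simulator CAN distinguish the designs:
assert {original(q) for q in (0, 1)} == {0}
assert {retimed(q1, q2) for q1 in (0, 1) for q2 in (0, 1)} == {0, 1}

# A conservative three-valued simulator, starting every latch at X, cannot:
assert original(X) == X and retimed(X, X) == X
print('three-valued simulation cannot tell the two designs apart')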


Design, Automation, and Test in Europe | 2009

Solver technology for system-level to RTL equivalence checking

Alfred Koelbl; Reily Jacoby; Himanshu Jain; Carl Pixley

Checking the equivalence of a system-level model against an RTL design is a major challenge, because the system-level model is usually written by a system architect whereas the RTL implementation is created by a hardware designer; the result is two models that differ significantly. Checking the equivalence of real-life designs requires strong solver technology. The challenges can only be overcome by combining bit-level and word-level reasoning techniques with the right orchestration. In this paper, we discuss solver technology that has been shown to be effective on many real-life equivalence checking problems.
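
As a very rough sketch of the problem setting (not of the paper's solver technology): an architect's word-level model is checked against a designer's bit-level implementation. Here a toy shift-and-add multiplier is compared against a word-level multiply by brute-force enumeration at a tiny bit-width; real designs require the orchestrated SAT/SMT machinery the paper describes.

# Toy system-level vs. RTL-style equivalence check by exhaustive enumeration.
WIDTH = 4
MASK = (1 << WIDTH) - 1

def system_level(a, b):
    # Architect's model: plain word-level multiply, truncated to WIDTH bits.
    return (a * b) & MASK

def rtl_level(a, b):
    # Designer's model: shift-and-add multiplier, one partial product per bit.
    acc = 0
    for i in range(WIDTH):
        if (b >> i) & 1:
            acc = (acc + (a << i)) & MASK
    return acc

counterexamples = [(a, b)
                   for a in range(1 << WIDTH)
                   for b in range(1 << WIDTH)
                   if system_level(a, b) != rtl_level(a, b)]
print('equivalent' if not counterexamples else f'differ at {counterexamples[0]}')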


computer aided verification | 1990

Introduction to a Computational Theory and Implementation of Sequential Hardware Equivalence

Carl Pixley

A theory of sequential hardware equivalence [1] is presented, including the notions of gate-level model (GLM), hardware finite state machine (HFSM), state equivalence (∼), alignability, resetability, and sequential hardware equivalence (≈). This theory is motivated by (1) the observation that it is impossible to control the initial state of a machine when it is powered on, and (2) the desire to decide equivalence of two designs based solely on their netlists and logic device models, without knowledge of intended initial states or intended environments.


High Level Design Validation and Test | 2001

Experience with term level modeling and verification of the M*CORE™ microprocessor core

Shuvendu K. Lahiri; Carl Pixley; Ken Albin

The paper describes term-level modeling and verification of an industrial microprocessor, the M*CORE™, a limited dual-issue, superscalar processor with an instruction-prefetching mechanism, a deep pipeline, multicycle functional units, speculation, and interlocks. Term-level modeling uses terms, uninterpreted functions, and predicates to abstract away the datapath complexity of the microprocessor. The verification of the control path is carried out almost mechanically with the aid of CMU-EVC, an extremely efficient decision procedure based on the Logic of Positive Equality with Uninterpreted Functions (PEUF). The verification effort resulted in the detection of a couple of non-trivial bugs in the microarchitecture during the design-exploration phase. The paper demonstrates the effectiveness of CMU-EVC for automated verification of real-life microprocessor designs and also points out some of the challenges and future work that need to be addressed in term-level modeling and verification of microprocessors using CMU-EVC.
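
The essence of term-level abstraction fits in a short sketch (the pipeline fragment and all names are hypothetical; the paper's verification uses CMU-EVC and the PEUF logic): the ALU is an uninterpreted function symbol, so an ISA-level model and a forwarding pipeline are shown equivalent by comparing the terms they construct, with no datapath semantics involved.

# Term-level abstraction: the ALU is an uninterpreted function symbol.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    fn: str                 # function symbol, left uninterpreted
    args: tuple = ()        # argument terms

def var(name):
    return Term(name)

def alu(op, a, b):
    return Term('ALU', (op, a, b))

# ISA-level model: execute two dependent instructions directly.
def isa_model(r1, r2):
    t = alu(var('add'), r1, r2)
    return alu(var('sub'), t, r2)

# "Pipelined" model: the same computation, staged with forwarding.
def pipeline_model(r1, r2):
    ex_stage = alu(var('add'), r1, r2)   # EX-stage result, latched
    forwarded = ex_stage                 # bypass to the next instruction
    return alu(var('sub'), forwarded, r2)

a, b = var('r1'), var('r2')
# Term equality is purely syntactic -- no datapath semantics is consulted:
assert isa_model(a, b) == pipeline_model(a, b)
print('both models build the term:', isa_model(a, b))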


International Journal of Parallel Programming | 2005

Constructing efficient formal models from high-level descriptions using symbolic simulation

Alfred Koelbl; Carl Pixley

Automating hardware design at higher levels of abstraction requires first and foremost a formal model of the high-level specification. This formal model is the basis of many EDA applications such as synthesis, analysis or verification. It should have a compact form, but still be close to the original description. In this paper, we propose using a Data-Flow-Graph (DFG) as a formal model. We present a new approach for generating a DFG from a high-level C++ specification based on symbolic simulation. The main advantage of using symbolic simulation for this task is that conceptually all C++ constructs can be handled. No restriction to a subset of constructs is required. Furthermore, our approach focuses on the quality of the resulting DFG. It attempts to minimize the number of nodes while still producing DFGs that adhere to the original specification.
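
A minimal sketch of the symbolic-simulation idea, transposed from the paper's C++ front end to Python operator overloading for brevity: running the specification on symbolic values records one DFG node per operation, and ordinary control flow unrolls for free. The paper's real concern, minimizing node count while staying faithful to the source, is not attempted here.

# DFG construction by symbolic simulation via operator overloading.
class Node:
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __add__(self, other):
        return Node('+', self, other)
    def __mul__(self, other):
        return Node('*', self, other)
    def __repr__(self):
        if not self.args:
            return self.op                       # leaf: an input
        return '(' + self.op + ' ' + ' '.join(map(repr, self.args)) + ')'

def inp(name):
    return Node(name)

def spec(a, b):
    # A small "specification"; the Python loop simply unrolls during the
    # symbolic run, so ordinary control flow needs no special handling.
    acc = a
    for _ in range(2):
        acc = acc * b + a
    return acc

dfg_root = spec(inp('a'), inp('b'))
print(dfg_root)   # (+ (* (+ (* a b) a) b) a)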


International Conference on Computer-Aided Design | 2003

A Framework for Constrained Functional Verification

Jun Yuan; Carl Pixley; Adnan Aziz; Ken Albin

We describe a framework for constrained simulation-vector generation in an industry setting. The framework consists of two key components: the constraint compiler and the vector generator. The constraint compiler employs various techniques, including prioritization, partitioning, extraction, and decomposition, to minimize the internal representation of the constraints, and thus the complexity of constraint solving. The vector generator then uses the compiled data together with input biasing to generate random simulation vectors. Constraints and input biases are treated in a unified manner in the vector generator. Although there are many alternative ways of generating vectors from constraints, the framework uniquely suits a practical constrained verification environment because of its ability to handle complicated constraints and its seamless treatment of constraints and biases. We illustrate the effectiveness of the framework with real examples from commercial designs.
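
A toy sketch of the generator's interface (the signal names and the constraint are invented): input biases give each bit a probability of being 1, and only vectors satisfying the constraints are emitted. The paper's generator samples directly from a compiled, minimized representation of the constraints; plain rejection sampling, as below, conveys the behavior but not the efficiency.

# Biased, constrained vector generation by rejection sampling.
import random

random.seed(0)

BIASES = {'mode': 0.8, 'valid': 0.5, 'addr_hi': 0.3}   # P(bit == 1)

def constraint(v):
    # Example legality rule: high addresses only in mode 1, and only if valid.
    return (not v['addr_hi']) or (v['mode'] and v['valid'])

def draw_vector():
    while True:                                   # rejection sampling
        v = {bit: random.random() < p for bit, p in BIASES.items()}
        if constraint(v):
            return v

for _ in range(5):
    print(draw_vector())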

Collaboration


Dive into Carl Pixley's collaborations.

Top Co-Authors

Adnan Aziz | University of California
Vigyan Singhal | Lawrence Berkeley National Laboratory
Himanshu Jain | Carnegie Mellon University
Matt Kaufmann | University of Texas at Austin
Fabio Somenzi | University of Colorado Boulder