
Publication


Featured research published by Barry K. Rosen.


IEEE Design & Test of Computers | 1987

Transition Fault Simulation

John A. Waicukauski; Eric Lindbloom; Barry K. Rosen; Vijay S. Iyengar

Delay fault testing is becoming more important as VLSI chips become more complex. Components that are fragments of functions, such as those in gate-array designs, need a general model of a delay fault and a feasible method of generating test patterns and simulating the fault. The authors present such a model, called a transition fault, which when used with parallel-pattern, single-fault propagation, is an efficient way to simulate delay faults. The authors describe results from 10 benchmark designs and discuss add-ons to a stuck fault simulator to enable transition fault simulation. Their experiments show that delay fault simulation can be done for random patterns in less than 10% more time than needed for a stuck fault simulation.
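The core idea can be illustrated in a minimal sketch (not the authors' simulator; the circuit and function names are hypothetical): a transition fault at a net is exercised by a pattern pair, where the first pattern establishes the initial value at the net and the second pattern detects the corresponding stuck-at fault there.

```python
# Illustrative sketch, assuming a tiny circuit: c = a AND b; out = c OR d.
# A slow-to-rise transition fault at c is treated as detected by a pattern
# pair (P1, P2) when P1 sets c to 0 and P2 detects stuck-at-0 at c.

def sim(a, b, d, force_c=None):
    """Simulate the circuit; force_c injects a faulty value at net c."""
    c = a & b if force_c is None else force_c
    return c | d, c

def detects_slow_to_rise_at_c(p1, p2):
    # P1 must establish the initial value 0 at c (launch the transition).
    _, c1 = sim(*p1)
    if c1 != 0:
        return False
    # P2 must detect stuck-at-0 at c: good and faulty outputs differ.
    good, c2 = sim(*p2)
    faulty, _ = sim(*p2, force_c=0)
    return c2 == 1 and good != faulty

print(detects_slow_to_rise_at_c((0, 1, 0), (1, 1, 0)))  # True
print(detects_slow_to_rise_at_c((1, 1, 0), (1, 1, 0)))  # False: no 0->1 launched
```

This reduction to stuck-at detection on the second pattern is what lets a transition fault simulator reuse most of the machinery of a stuck fault simulator.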


symposium on principles of programming languages | 1989

An efficient method of computing static single assignment form

Ron Cytron; Jeanne Ferrante; Barry K. Rosen; Mark N. Wegman; F. K. Zadeck

In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point where advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present a new algorithm that efficiently computes these data structures for arbitrary control flow graphs. We also give analytical and experimental evidence that they are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
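A central step of SSA construction is deciding where phi-functions go. A minimal sketch of that step, assuming dominance frontiers have already been computed (the CFG and names here are hypothetical): phi-functions are placed at the iterated dominance frontier of the blocks that assign a variable.

```python
# Hedged sketch of phi-placement: given precomputed dominance frontiers `df`
# and the set of blocks that assign a variable, compute the blocks that need
# phi-functions as the iterated dominance frontier.

def phi_placement(df, def_blocks):
    work = list(def_blocks)
    has_phi = set()
    while work:
        b = work.pop()
        for y in df.get(b, ()):
            if y not in has_phi:
                has_phi.add(y)
                if y not in def_blocks:  # a phi is itself a new definition
                    work.append(y)
    return has_phi

# Diamond CFG: entry -> {left, right} -> join; DF(left) = DF(right) = {join}.
df = {"left": {"join"}, "right": {"join"}}
print(phi_placement(df, {"left", "right"}))  # {'join'}
```

The worklist keeps iterating because each placed phi-function is itself a definition, whose dominance frontier may demand further phi-functions.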


symposium on principles of programming languages | 1988

Global value numbers and redundant computations

Barry K. Rosen; Mark N. Wegman; F. K. Zadeck

Most previous redundancy elimination algorithms have been of two kinds. The lexical algorithms deal with the entire program, but they can only detect redundancy among computations of lexically identical expressions, where expressions are lexically identical if they apply exactly the same operator to exactly the same operands. The value numbering algorithms, on the other hand, can recognize redundancy among expressions that are lexically different but that are certain to compute the same value. This is accomplished by assigning special symbolic names called value numbers to expressions. If the value numbers of the operands of two expressions are identical, and if the operators applied by the expressions are identical, then the expressions receive the same value number and are certain to have the same values. Sameness of value numbers permits more extensive optimization than lexical identity, but value numbering algorithms have usually been restricted in the past to basic blocks (sequences of computations with no branching) or extended basic blocks (sequences of computations with no joins). We propose a redundancy elimination algorithm that is global (in that it deals with the entire program), yet able to recognize redundancy among expressions that are lexically different. The algorithm also takes advantage of second-order effects: transformations based on the discovery that two computations compute the same value may create opportunities to discover that other computations are equivalent. The algorithm applies to programs expressed as reducible [1] [9] control flow graphs. As the examples in section 7 illustrate, our algorithm optimizes reducible programs much more extensively than previous algorithms. In the special case of a program without loops, the code generated by our algorithm is provably “optimal” in the technical sense explained in section 8.
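The classical local form of value numbering that the abstract contrasts against can be sketched in a few lines (an illustrative sketch for a single basic block with pure binary operators, not the paper's global algorithm): two expressions get the same value number exactly when their operator and operand value numbers match.

```python
# Minimal local value-numbering sketch for one straight-line block.
# Instructions are (dest, op, operand_a, operand_b); operators are assumed pure.

def value_number_block(instrs):
    vn, table, redundant = {}, {}, []
    counter = 0
    def number(name):
        nonlocal counter
        if name not in vn:
            vn[name] = counter = counter + 1
        return vn[name]
    for dest, op, a, b in instrs:
        key = (op, number(a), number(b))
        if key in table:            # same operator, same operand value numbers:
            vn[dest] = table[key]   # certain to compute the same value
            redundant.append(dest)
        else:
            counter += 1
            vn[dest] = table[key] = counter
    return redundant

block = [("t1", "+", "x", "y"),
         ("t2", "+", "x", "y"),   # receives the same value number as t1
         ("t3", "+", "t1", "z")]
print(value_number_block(block))  # ['t2']
```

The paper's contribution is extending this sameness-of-value reasoning past basic-block boundaries to the whole (reducible) program, which the local table above cannot do.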


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1987

HSS--A High-Speed Simulator

Zeev Barzilai; J.L. Carter; Barry K. Rosen; J.D. Rutledge

The High-Speed Simulator (HSS) is a fast and flexible system for gate-level fault simulation. Originally limited to combinational logic, it is being extended to handle sequential logic. It may also prove useful as a functional simulator. The speed of HSS is obtained by converting the cycle-free portions of a circuit into optimized machine code for a general-purpose computer. This compiled code simulates the circuit's response for 16 or 32 test patterns in parallel. Faults are injected into the circuit by changing the machine instruction corresponding to the fault location. From the range of speeds seen in recent measurements, we take 240 million gates per second as a fair general estimate of the speed of 2-valued simulation running on a 3081/K computer. For 3-valued simulation, divide by 2.9. The paper discusses the merits and drawbacks of the HSS strategy. It also sketches the extensions of HSS to model sequential logic and the various applications of HSS. These include functional verification, design for testability, good machine signatures, and accurate simulation of transistor-level defects in certain CMOS technologies. Finally, there is some discussion of how the simulation requirements of future designs can be met, and of the lessons to be drawn from long-term experimentation with HSS.
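The pattern-parallel trick behind HSS is easy to demonstrate in a hedged sketch (HSS compiles to machine code; the Python below only illustrates the bit-packing idea for an assumed 32-bit word): pack one test pattern per bit of a word, and a single bitwise operation then evaluates a gate for all packed patterns at once.

```python
# Sketch of parallel-pattern simulation: one pattern per bit of a 32-bit word.

MASK = 0xFFFFFFFF

def pack(bits):
    """Put pattern i into bit i of the word."""
    word = 0
    for i, v in enumerate(bits):
        word |= (v & 1) << i
    return word

def simulate(a, b):
    """Hypothetical circuit out = NOT(a AND b), evaluated for all
    packed patterns with one AND and one NOT."""
    return ~(a & b) & MASK

a = pack([0, 0, 1, 1])
b = pack([0, 1, 0, 1])
out = simulate(a, b)
print([(out >> i) & 1 for i in range(4)])  # [1, 1, 1, 0]
```

Fault injection in HSS goes one step further: rather than re-packing inputs, the single machine instruction at the fault site is patched, so the same compiled code simulates the faulty circuit.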


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1990

On computing the sizes of detected delay faults

Vijay S. Iyengar; Barry K. Rosen; John A. Waicukauski

Defects in integrated circuits can cause delay faults of various sizes. Testing for delay faults has the goal of detecting a large fraction of these faults for a wide range of fault sizes. Hence, an evaluation scheme for a delay fault test must not only compute whether or not a delay fault was detected, but also calculate the sizes of detected delay faults. Delay faults have the counterintuitive property that a test for a fault of one size need not be a test for a similar fault of a larger size. This makes it difficult to answer questions about the sizes of delay faults detected by a set of tests. A model for delay faults that answers such questions correctly, but with calculations simple enough to be done for large circuits, is presented.


IEEE Transactions on Very Large Scale Integration Systems | 1995

AVPGEN-A test generator for architecture verification

Ashok K. Chandra; Vijay S. Iyengar; D. Jameson; R. V. Jawalekar; Indira Nair; Barry K. Rosen; Michael P. Mullen; J. Yoon; R. Armoni; Daniel Geist; Yaron Wolfsthal

This paper describes a system (AVPGEN) for generating tests (called architecture verification programs or AVPs) to check the conformance of processor designs to the specified architecture. To generate effective tests, AVPGEN uses novel concepts like symbolic execution and constraint solving, along with various biasing techniques. Unlike many earlier systems that make biased random choices, AVPGEN often chooses intermediate or final values and then solves for initial values that can lead to the desired values. A language called SIGL (symbolic instruction graph language) is provided in AVPGEN for the user to specify templates with symbolic constraints. The combination of user-specified constraints and the biasing functions is used to focus the tests on conditions that are interesting in that they are likely to activate various kinds of bugs. The system has been used successfully to debug many S/390 processors and is an integral part of the design process for these processors.
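The solve-backwards idea can be illustrated with a deliberately tiny, hypothetical example (this is not AVPGEN or SIGL; the instruction and function are invented for illustration): instead of choosing random inputs and hoping for an interesting result, pick the interesting final value first and solve for an initial operand that produces it.

```python
# Hedged illustration: bias a hypothetical 32-bit ADD test toward a chosen
# result value, then solve for an initial operand that yields it.

import random

def gen_add_test(target_sum, bits=32):
    """Pick operand a at random; solve b so (a + b) mod 2**bits == target_sum."""
    mask = (1 << bits) - 1
    a = random.randrange(1 << bits)
    b = (target_sum - a) & mask      # constraint solving, trivially invertible here
    return a, b

random.seed(0)
a, b = gen_add_test(0)               # bias toward a result of zero
print((a + b) & 0xFFFFFFFF)          # 0
```

Real instruction semantics are rarely this invertible, which is why AVPGEN combines symbolic execution with a constraint solver rather than closed-form inversion.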


international test conference | 1988

Delay test generation. I. Concepts and coverage metrics

Vijay S. Iyengar; Barry K. Rosen; Ilan Y. Spillinger

An approach to test for delay faults is presented. A variable-size delay fault model is used to represent these failures. The nominal gate delays with the manufacturing tolerances are an integral part of the model and are used in the propagation of simplified waveforms through the logic network. The faulty waveforms are functions of the variable-size delay fault. For each fault and test pattern, a threshold ε is computed such that this fault is detected if its size exceeds ε. This threshold is used (along with the minimum slack at the fault site) to determine a metric called quality. The quality of detection for a fault measures how close the test came to exposing the ideally smallest-size fault at that point. This metric (together with the traditional fault coverage) gives a complete measure of the goodness of the test.


Journal of the ACM | 1979

Data Flow Analysis for Procedural Languages

Barry K. Rosen

Global analysis and optimization techniques presuppose local data flow information about the effects of program statements on the values associated with names. For procedure calls this information is not immediately available but can presumably be obtained through flow analysis of procedure bodies. Accurate information proves to be surprisingly difficult to obtain. This paper includes a language-independent formulation of the problem, an interprocedural data flow algorithm, and a proof that the algorithm is correct. Symbolic data flow analysis is introduced in the course of optimizing the algorithm. We move much of the work outside of a loop by manipulating partially evaluated symbolic expressions for the data within the loop. Foundational difficulties are revealed when the theory of data flow analysis is extended to support extensive optimization of procedural language programs. Several widespread assumptions become false or ambiguous. A few of the problems are resolved here. Inductive arguments are facilitated by a simple path tree representation of control flow that allows for both recursion and side effects.


Communications of The ACM | 1977

High-level data flow analysis

Barry K. Rosen

In contrast to the predominant use of low-level intermediate text, high-level data flow analysis deals with programs essentially at source level and exploits the control flow information implicit in the parse tree. The need for high-level flow analysis arises from several aspects of recent work on advanced methods of program certification and optimization. This paper proposes a simple general method of high-level data flow analysis that allows free use of escape and jump statements, avoids large graphs when compiling large programs, facilitates updating of data flow information to reflect program changes, and derives new global information helpful in solving many familiar global flow analysis problems. An illustrative application to live variable analysis is presented. Many of the graphs involved are constructed and analyzed before any programs are compiled, thus avoiding certain costs that low-level methods incur repeatedly at compile time.
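The live variable analysis used as the paper's illustration is a classic backward data flow problem; a minimal sketch of its low-level fixpoint formulation (a hypothetical two-block program, not the paper's parse-tree method, which precomputes much of this work before compile time):

```python
# Sketch of live-variable analysis as a backward fixpoint over a CFG.

def liveness(blocks, succ):
    """blocks: name -> (use set, def set); succ: name -> successor names."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (use, defs) in blocks.items():
            # live-out is the union of successors' live-in sets
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            new_in = use | (out - defs)
            if new_in != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b], changed = new_in, out, True
    return live_in

blocks = {"B1": ({"x"}, {"y"}),      # y = x + 1
          "B2": ({"y"}, set())}      # return y
succ = {"B1": ["B2"], "B2": []}
print(liveness(blocks, succ))  # {'B1': {'x'}, 'B2': {'y'}}
```

The contrast drawn in the abstract is that a low-level method re-runs this kind of iteration over intermediate text for every program, whereas the high-level method analyzes many of the relevant graphs once, ahead of compilation.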


international test conference | 1988

Delay test generation. II. Algebra and algorithms

Vijay S. Iyengar; Barry K. Rosen; Ilan Y. Spillinger

For pt.I see ibid., p.857-66 (1988). A novel algebra is introduced for delay test generation. The algebra combines the nine natural logic values (00, 01, 0X, 10, 11, 1X, X0, X1, XX) with special attributes that record both heuristic choices and whatever information about waveforms is deducible algebraically (i.e. without numerical computations using actual gate delays). A test generator uses this algebra in an efficiently organized backtrack search. The test generator is linked to a delay fault simulator. Previous event-driven simulators have considered different types of events; one type of event is a change in faultless values from one test to another test, and the other type of event is a difference between faulty and faultless values. The presented simulator is driven by both types of events. Each generated test is simulated to determine the quality of detection.
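A minimal sketch in the spirit of this pair-valued algebra (the gate semantics below are an assumption for illustration, not taken from the paper): each signal carries an (initial, final) value pair over {0, 1, X}, and gates evaluate the two components independently with X as "unknown".

```python
# Sketch: nine-valued signals as (initial, final) pairs over {'0', '1', 'X'},
# evaluated componentwise through an AND gate.

def and3(a, b):
    """Three-valued AND over {'0', '1', 'X'}."""
    if a == "0" or b == "0":
        return "0"
    if a == "1" and b == "1":
        return "1"
    return "X"

def and_pair(u, v):
    """AND of two (initial, final) signal values, e.g. '01' AND '11' -> '01'."""
    return and3(u[0], v[0]) + and3(u[1], v[1])

print(and_pair("01", "11"))  # 01  (a rising input propagates through the AND)
print(and_pair("01", "0X"))  # 0X
```

Encoding both time frames in one symbol is what lets a delay test generator reason about the launching and propagating patterns of a transition simultaneously during its backtrack search.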

