
Publication


Featured research published by Michael Wolfe.


Sigplan Notices | 1994

J+ = J

Michael Wolfe

Michael Wolfe, Oregon Graduate Institute, P.O. Box 91000, Portland, OR 97291, [email protected]

Is J+(S) = J(S)? Given a control flow graph (CFG) G, the join of two nodes a, b, written J(a, b), is the set of nodes c such that there is a path from a and a path from b to predecessors of c, with no common nodes on the paths. Formally, c ∈ J(a, b) if: ∃ p_ac : a →* c_a with c_a → c and either c ∉ p_ac or c = a; and ∃ p_bc : b →* c_b with c_b → c and either c ∉ p_bc or c = b; and p_ac ∩ p_bc = ∅ (see the note on notation below). The two paths p_ac and p_bc may be trivial paths. The join of a set of nodes, J(S), is defined to be the union of the pair-wise joins: J(S) = ⋃_{a,b ∈ S} J(a, b), where J(a, a) = ∅. This definition is slightly different from that used in other references, but it is equivalent and more useful for our proofs. The join set is used to prove the correctness of certain advanced analysis algorithms, such as Static Single Assignment [CFR+91]. The iterated join, J+(S), is defined as the limit of the increasing sequence of node sets: J¹(S) = J(S), J²(S) = J(S ∪ J¹(S)), J^{i+1}(S) = J(S ∪ J^i(S)). Weiss proved that J+(S) = J(S) if Entry ∈ S, where Entry is the unique entry node of the CFG [Wei92]. He conjectured that J+(S) = J(S) for all sets S, but did not prove it.

Notation. We use the usual directed graph notation, where x → y means an edge from node x to y, and x →* y means a (possibly trivial) path from x to y. The name c_b means a predecessor of c on a non-trivial path from b. The notation p_xy means a (possibly trivial) path from node x to a predecessor of y. We use path intersection, written p_xy ∩ p_ab, to mean the set of nodes that appear on both paths (or the empty set if the two paths are disjoint). The notation p_xy ⊆ p_ab means that p_xy is a sub-path of p_ab, and a ∈ p_xy means that a appears somewhere along the path.
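The definitions above can be checked by brute force on a small graph. A minimal Python sketch, assuming a toy CFG encoded as an adjacency dict (the example graph and helper names are illustrative, not from the paper; this enumerates simple paths, so it is only practical for tiny graphs):

```python
from itertools import product

# A hypothetical diamond-shaped CFG: Entry branches to a and b, which rejoin at c.
cfg = {
    "Entry": {"a", "b"},
    "a": {"c"},
    "b": {"c"},
    "c": {"d"},
    "d": set(),
}

def simple_paths(g, src, dst):
    """Yield all simple paths from src to dst as node lists (trivial path included)."""
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            yield path
        for succ in g.get(node, ()):
            if succ not in path:
                stack.append((succ, path + [succ]))

def preds(g, c):
    return {x for x, succs in g.items() if c in succs}

def join(g, a, b):
    """J(a, b): direct check of the definition -- disjoint paths from a and b
    to predecessors of c, with c allowed on a path only as its own endpoint."""
    result = set()
    for c in g:
        for ca, cb in product(preds(g, c), repeat=2):
            for pac in simple_paths(g, a, ca):
                if c in pac and c != a:
                    continue
                for pbc in simple_paths(g, b, cb):
                    if c in pbc and c != b:
                        continue
                    if not set(pac) & set(pbc):   # the paths must be disjoint
                        result.add(c)
    return result

def join_set(g, S):
    """J(S): union of the pair-wise joins, with J(a, a) = empty."""
    return {c for a in S for b in S if a != b for c in join(g, a, b)}

def iterated_join(g, S):
    """J+(S): fixpoint of J^{i+1}(S) = J(S ∪ J^i(S))."""
    J = join_set(g, S)
    while True:
        nxt = join_set(g, S | J)
        if nxt == J:
            return J
        J = nxt
```

On this diamond, J({a, b}) = {c}, and iterating adds nothing, consistent with the conjecture that J+(S) = J(S).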


Communications of The ACM | 1986

Advanced compiler optimizations for supercomputers

David A. Padua; Michael Wolfe

Compilers for vector or multiprocessor computers must have certain optimization features to successfully generate parallel code.


symposium on principles of programming languages | 1981

Dependence graphs and compiler optimizations

David J. Kuck; Robert H. Kuhn; David A. Padua; Bruce Leasure; Michael Wolfe

Dependence graphs can be used as a vehicle for formulating and implementing compiler optimizations. This paper defines such graphs and discusses two kinds of transformations. The first are simple rewriting transformations that remove dependence arcs. The second are abstraction transformations that deal more globally with a dependence graph. These transformations have been implemented and applied to several different types of high-speed architectures.


ACM Transactions on Programming Languages and Systems | 1995

Beyond induction variables: detecting and classifying sequences using a demand-driven SSA form

Michael P. Gerlek; Eric Stoltz; Michael Wolfe

Linear induction variable detection is usually associated with the strength reduction optimization. For restructuring compilers, effective data dependence analysis requires that the compiler detect and accurately describe linear and nonlinear induction variables as well as more general sequences. In this article we present a practical technique for detecting a broader class of linear induction variables than is usually recognized, as well as several other sequence forms, including periodic, polynomial, geometric, monotonic, and wrap-around variables. Our method is based on Factored Use-Def (FUD) chains, a demand-driven representation of the popular Static Single Assignment (SSA) form. In this form, strongly connected components of the associated SSA graph correspond to sequences in the source program: we describe a simple yet efficient algorithm for detecting and classifying these sequences. We have implemented this algorithm in Nascent, our restructuring Fortran 90+ compiler, and we present some results showing the effectiveness of our approach.
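The SSA-graph SCC detection the abstract describes can be illustrated on a toy example. A minimal sketch, assuming a simplified encoding of SSA definitions (the `ssa` dict and the classification rule are illustrative assumptions, not the Nascent implementation):

```python
# Hypothetical SSA form for:  i = 0; while ...: i = i + 2
# Each SSA name maps to (opcode, operands...); string operands name other defs.
ssa = {
    "i0": ("const", 0),
    "i1": ("phi", "i0", "i2"),   # loop-header phi: initial value vs back-edge value
    "i2": ("add", "i1", 2),
}

def operands(op):
    return [x for x in op[1:] if isinstance(x, str)]

def sccs(defs):
    """Tarjan's algorithm over the SSA graph (edges from a def to its operand defs)."""
    index, low, stack, on, out, counter = {}, {}, [], set(), [], [0]
    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on.add(v)
        for w in operands(defs[v]):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = []
            while True:
                w = stack.pop(); on.discard(w); comp.append(w)
                if w == v:
                    break
            out.append(comp)
    for v in defs:
        if v not in index:
            strongconnect(v)
    return out

def classify(comp, defs):
    """Toy rule: a cycle of exactly one phi plus constant additions is a linear IV."""
    if len(comp) == 1:
        return "invariant/other"
    phis = [v for v in comp if defs[v][0] == "phi"]
    adds = [v for v in comp if defs[v][0] == "add"]
    if len(phis) == 1 and len(phis) + len(adds) == len(comp):
        return "linear induction"
    return "unclassified sequence"
```

Here the strongly connected component {i1, i2} corresponds to the sequence 0, 2, 4, ... in the source loop, and the toy rule classifies it as a linear induction variable.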


International Journal of Parallel Programming | 1987

Data dependence and its application to parallel processing

Michael Wolfe; Utpal Banerjee

Data dependence testing is required to detect parallelism in programs. In this paper, data dependence concepts and data dependence direction vectors are reviewed. Data dependence computation in parallel and vector constructs, as well as in serial do loops, is covered. Several transformations that require data dependence are given as examples, such as vectorization (translating serial code into vector code), concurrentization (translating serial code into concurrent code for multiprocessors), scalarization (translating vector or concurrent code into serial code for a scalar uniprocessor), loop interchanging, and loop fusion. The details of data dependence testing, including several data dependence decision algorithms, are given.
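One of the classical decision algorithms in this literature is the GCD test. A minimal sketch of it, hedged as a necessary condition only: it ignores loop bounds and direction vectors, so a positive answer means only that dependence is possible (coefficients assumed nonzero):

```python
from math import gcd

def gcd_test(c1, k1, c2, k2):
    """Can a[c1*i + k1] and a[c2*j + k2] refer to the same element for some
    integers i, j?  The Diophantine equation c1*i - c2*j = k2 - k1 has an
    integer solution iff gcd(c1, c2) divides k2 - k1.  Assumes c1, c2 != 0."""
    return (k2 - k1) % gcd(c1, c2) == 0

# a[2*i] vs a[2*i + 1]: even vs odd subscripts, gcd(2, 2) = 2 does not
# divide 1, so the accesses are provably independent.
```

More precise tests (such as Banerjee's inequalities, also covered by this literature) additionally account for loop bounds and dependence direction.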


International Journal of Parallel Programming | 1986

Loop skewing: the wavefront method revisited

Michael Wolfe

Loop skewing is a new procedure to derive the wavefront method of execution of nested loops. The wavefront method is used to execute nested loops on parallel and vector computers when none of the loops can be done in vector mode. Loop skewing is a simple transformation of loop bounds and is combined with loop interchanging to generate the wavefront. This derivation is particularly suitable for implementation in compilers that already perform automatic detection of parallelism and generation of vector and parallel code, such as are available today. Loop normalization, a loop transformation used by several vectorizing translators, is related to loop skewing, and we show how loop normalization, applied blindly, can adversely affect the parallelism detected by these translators.
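The skewing transformation can be demonstrated on the iteration space itself. A small sketch, assuming a stencil-style loop where iteration (i, j) depends on (i-1, j) and (i, j-1) (an example dependence pattern chosen for illustration):

```python
# Original doubly nested loop (neither level is parallel as written):
#   for i in 0..N-1: for j in 0..M-1: a[i][j] = f(a[i-1][j], a[i][j-1])
# Skewing substitutes k = i + j; after interchanging, all iterations with the
# same k lie on one wavefront and depend only on wavefronts with smaller k,
# so for a fixed k the loop over i could run in parallel or in vector mode.

N, M = 4, 5

def original_order():
    return [(i, j) for i in range(N) for j in range(M)]

def skewed_order():
    # k ranges over 0 .. N+M-2; the inner bounds keep j = k - i inside 0..M-1.
    return [(i, k - i)
            for k in range(N + M - 1)
            for i in range(max(0, k - M + 1), min(N, k + 1))]
```

Both orders enumerate exactly the same iteration space, and every dependence (i-1, j) → (i, j) or (i, j-1) → (i, j) crosses from wavefront k-1 to wavefront k, which is what makes the inner skewed loop dependence-free.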


programming language design and implementation | 1992

Beyond induction variables

Michael Wolfe

Induction variable detection is usually closely tied to the strength reduction optimization. This paper studies induction variable analysis from a different perspective, that of finding induction variables for data dependence analysis. While classical induction variable analysis techniques have been used successfully up to now, we have found a simple algorithm based on the Static Single Assignment form of a program that finds all linear induction variables in a loop. Moreover, this algorithm is easily extended to find induction variables in multiple nested loops, to find nonlinear induction variables, and to classify other integer scalar assignments in loops, such as monotonic, periodic and wrap-around variables. Some of these other variables are now classified using ad hoc pattern recognition, while others are not analyzed by current compilers. Giving a unified approach improves the speed of compilers and allows a more general classification scheme. We also show how to use these variables in data dependence testing.
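The classification vocabulary used here (linear, periodic, wrap-around) can be illustrated dynamically. The toy classifier below samples a scalar's values over a few iterations; it is only an illustration of the categories, not the compile-time SSA-based algorithm the paper presents:

```python
def classify_sequence(seq):
    """Classify a sampled integer sequence from a loop:
    linear (constant stride), periodic (repeating cycle),
    wrap-around (linear after the first iteration), or other."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                  # constant stride (includes constants)
        return "linear"
    for p in range(1, len(seq) // 2 + 1):     # smallest repeating period
        if all(seq[n] == seq[n % p] for n in range(len(seq))):
            return "periodic"
    if len(set(diffs[1:])) == 1:              # linear once past the first value
        return "wrap-around"
    return "other"
```

For example, `i = i + 2` yields a linear sequence, `t = -t` a periodic one, and `j = i; i = i + 1` (with j read before i's first increment) a wrap-around one.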


programming language design and implementation | 1995

Elimination of redundant array subscript range checks

Priyadarshan Kolte; Michael Wolfe

This paper presents a compiler optimization algorithm to reduce the run time overhead of array subscript range checks in programs without compromising safety. The algorithm is based on partial redundancy elimination and it incorporates previously developed algorithms for range check optimization. We implemented the algorithm in our research compiler, Nascent, and conducted experiments on a suite of 10 benchmark programs to obtain four results: (1) the execution overhead of naive range checking is high enough to merit optimization, (2) there are substantial differences between various optimizations, (3) loop-based optimizations that hoist checks out of loops are effective in eliminating about 98% of the range checks, and (4) more sophisticated analysis and optimization algorithms produce very marginal benefits.
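The loop-based hoisting that eliminated most checks in the experiments can be sketched as a source-level transformation. A minimal illustration (these functions are hypothetical, not taken from Nascent):

```python
# Naive: every subscript is checked on every iteration.
def sum_naive(a, n):
    s = 0
    for i in range(n):
        if not (0 <= i < len(a)):      # per-iteration range check
            raise IndexError(i)
        s += a[i]
    return s

# Hoisted: one check before the loop proves every subscript i = 0..n-1
# is in bounds, making the per-iteration checks redundant.
def sum_hoisted(a, n):
    if n < 0 or n > len(a):            # single check covers the whole loop
        raise IndexError(n)
    s = 0
    for i in range(n):
        s += a[i]                      # check provably redundant here
    return s
```

The hoisted version preserves safety: any out-of-bounds access is still reported, just before the loop rather than inside it.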


symposium on principles of programming languages | 1993

Static single assignment for explicitly parallel programs

Harini Srinivasan; James Hook; Michael Wolfe

We describe and prove algorithms to convert programs which use the Parallel Computing Forum Parallel Sections construct into Static Single Assignment (SSA) form. This process allows compilers to apply classical scalar optimization algorithms to explicitly parallel programs. To do so, we must define what the concepts of dominator and dominance frontier mean in parallel programs. We also describe how we extend SSA form to handle parallel updates and still preserve the SSA properties.


international conference on parallel architectures and languages europe | 1994

A New Approach to Array Redistribution: Strip Mining Redistribution

Akiyoshi Wakatani; Michael Wolfe

Languages such as High Performance Fortran are used to implement parallel algorithms by distributing large data structures across a multicomputer system. To reduce the communication time for the redistribution of arrays, we propose a new scheme, strip mining redistribution.
