Publications

Featured research published by Preston Briggs.


ACM Transactions on Programming Languages and Systems | 1994

Improvements to graph coloring register allocation

Preston Briggs; Keith D. Cooper; Linda Torczon

We describe two improvements to Chaitin-style graph coloring register allocators. The first, optimistic coloring, uses a stronger heuristic to find a k-coloring for the interference graph. The second extends Chaitin's treatment of rematerialization to handle a larger class of values. These techniques are complementary. Optimistic coloring decreases the number of procedures that require spill code and reduces the amount of spill code when spilling is unavoidable. Rematerialization lowers the cost of spilling some values. This paper describes both of the techniques and our experience building and using register allocators that incorporate them. It provides a detailed description of optimistic coloring and rematerialization. It presents experimental data to show the performance of several versions of the register allocator on a suite of FORTRAN programs. It discusses several insights that we discovered only after repeated implementation of these allocators.
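To make the difference concrete, here is a minimal sketch (in Python, not the authors' C/FORTRAN setting) of the simplify/select loop with the optimistic twist: when no node of degree less than k remains, a spill candidate is pushed on the stack anyway, and the spill decision is deferred to the select phase. The graph representation and candidate choice here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of optimistic graph-coloring register allocation.
# graph: dict mapping each node to the set of nodes it interferes with.

def optimistic_color(graph, k):
    degrees = {n: len(neigh) for n, neigh in graph.items()}
    removed = set()
    stack = []

    # Simplify: repeatedly remove a node of degree < k; when none exists,
    # optimistically push a high-degree candidate instead of spilling now.
    while len(removed) < len(graph):
        candidates = [n for n in graph if n not in removed]
        low = [n for n in candidates if degrees[n] < k]
        n = low[0] if low else max(candidates, key=lambda m: degrees[m])
        stack.append(n)
        removed.add(n)
        for m in graph[n]:
            if m not in removed:
                degrees[m] -= 1

    # Select: pop nodes and assign the lowest color unused by colored
    # neighbors; only a node with no free color becomes an actual spill.
    colors, spilled = {}, set()
    while stack:
        n = stack.pop()
        taken = {colors[m] for m in graph[n] if m in colors}
        free = [c for c in range(k) if c not in taken]
        if free:
            colors[n] = free[0]
        else:
            spilled.add(n)
    return colors, spilled
```

A 4-cycle with k = 2 shows the benefit: every node has degree 2, so Chaitin's simplify phase finds no low-degree node and would spill, yet the optimistic allocator 2-colors the graph with no spills.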


Programming Language Design and Implementation | 1989

Coloring heuristics for register allocation

Preston Briggs; Keith D. Cooper; Ken Kennedy; Linda Torczon

We describe an improvement to a heuristic introduced by Chaitin for use in graph coloring register allocation. Our modified heuristic produces better colorings, with less spill code. It has similar compile-time and implementation requirements. We present experimental data to compare the two methods.


Programming Language Design and Implementation | 1994

Effective partial redundancy elimination

Preston Briggs; Keith D. Cooper

Partial redundancy elimination is a code optimization with a long history of literature and implementation. In practice, its effectiveness depends on issues of naming and code shape. This paper shows that a combination of global reassociation and global value numbering can increase the effectiveness of partial redundancy elimination. By imposing a discipline on the choice of names and the shape of expressions, we are able to expose more redundancies. As part of the work, we introduce a new algorithm for global reassociation of expressions. It uses global information to reorder expressions, creating opportunities for other optimizations. The new algorithm generalizes earlier work that ordered FORTRAN array address expressions to improve optimization [25].
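The naming discipline can be illustrated with a much simpler, block-local sketch of hash-based value numbering, where sorting the operands of commutative operators stands in for a tiny slice of reassociation. This is an assumption-laden toy (single-assignment names, two-operand tuples), not the paper's global algorithm.

```python
# Local value numbering sketch: redundant expressions are detected by
# hashing (op, value-number, value-number) keys; commutative operands
# are put in canonical order.  Assumes each dest is assigned only once.

def value_number(block):
    """block: list of (dest, op, arg1, arg2) three-address tuples.
    Returns the list with redundant computations turned into copies."""
    numbers = {}        # name -> value number
    table = {}          # (op, vn, vn) -> name holding that value
    counter = [0]

    def vn_of(name):
        if name not in numbers:
            numbers[name] = counter[0]
            counter[0] += 1
        return numbers[name]

    out = []
    for dest, op, a, b in block:
        va, vb = vn_of(a), vn_of(b)
        if op in ('+', '*'):            # canonical order for commutative ops
            va, vb = min(va, vb), max(va, vb)
        key = (op, va, vb)
        if key in table:                # redundant: reuse the earlier value
            prev = table[key]
            numbers[dest] = numbers[prev]
            out.append((dest, 'copy', prev, None))
        else:
            table[key] = dest
            numbers[dest] = counter[0]
            counter[0] += 1
            out.append((dest, op, a, b))
    return out
```

With this discipline, `b + a` is recognized as the same value as an earlier `a + b`, which is exactly the kind of redundancy a naming-sensitive PRE would otherwise miss.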


Software: Practice and Experience | 1998

Practical improvements to the construction and destruction of static single assignment form

Preston Briggs; Keith D. Cooper; Timothy J. Harvey; L. Taylor Simpson

Static Single Assignment (SSA) form is a program representation that is becoming increasingly popular for compiler-based code optimization. In this paper, we address three problems that have arisen in our use of SSA form. Two are variations to the SSA construction algorithms presented by Cytron et al. [1] The first variation is a version of SSA form that we call 'semi-pruned' SSA. It offers an attractive trade-off between the cost of global data-flow analysis required to build 'pruned' SSA and the large number of unused ϕ-functions found in minimal SSA. The second variation speeds up the program renaming process by efficiently manipulating the stacks of names used during renaming. Our improvement reduces the number of pushes performed, in addition to more efficiently locating the stacks that should be popped. To convert code in SSA form back into an executable form, the compiler must use an algorithm that replaces ϕ-functions with appropriately-placed copy instructions. The algorithm given by Cytron et al. for inserting copies produces incorrect results in some situations, particularly in cases like instruction scheduling, where the compiler may not be able to split 'critical edges', and in the aftermath of optimizations that aggressively rewrite the name space, like some forms of global value numbering [2]. We present a new algorithm for inserting copy instructions to replace ϕ-functions. It fixes the problems that we have encountered with the original copy insertion algorithm. We present experimental results that demonstrate the effectiveness of the first two improvements not only during the construction of SSA form, but also in the time saved by subsequent optimization passes that use a smaller representation of the program.
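The basic copy-insertion idea can be sketched as follows: each ϕ-function's i-th argument becomes a copy at the end of the i-th predecessor block. This naive version deliberately ignores the critical-edge and lost-copy problems that motivate the paper's corrected algorithm; the block/instruction encoding is an assumption made for the sketch.

```python
# Naive phi-to-copy sketch: 'x = phi(a1, ..., an)' in a block with
# predecessors p1..pn becomes a copy 'x = ai' appended to each pi.
# It ignores critical edges and the lost-copy problem, which the
# paper's copy-insertion algorithm is designed to handle correctly.

def replace_phis(blocks, preds):
    """blocks: name -> list of instructions, where a phi is the tuple
    ('phi', dest, [arg per predecessor]) and other instructions are
    opaque tuples.  preds: name -> ordered list of predecessor names."""
    for name in blocks:
        phis = [i for i in blocks[name] if i[0] == 'phi']
        blocks[name] = [i for i in blocks[name] if i[0] != 'phi']
        for _, dest, args in phis:
            for pred, arg in zip(preds[name], args):
                blocks[pred].append(('copy', dest, arg))
    return blocks
```

The argument order must match the predecessor order, which is why `preds` is an ordered list rather than a set.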


Conference on High Performance Computing (Supercomputing) | 1997

Tera Hardware-Software Cooperation

Gail A. Alverson; Preston Briggs; Susan L. Coatney; Simon Kahan; Richard Korry

The development of Tera's MTA system was unusual. It respected the need for fast hardware and large shared memory, facilitating execution of the most demanding parallel application programs. But at the same time, it met the need for a clean machine model enabling calculated compiler optimizations and easy programming; and the need for novel architectural features necessary to support fast parallel system software. From its inception, system and application needs have molded the MTA architecture. The result is a system that offers high performance and ease of programming by virtue not only of fast physical hardware and flat shared memory, but also of the streamlined software systems that well utilize the features of the architecture intended to support them.


ACM Letters on Programming Languages and Systems | 1992

Coloring register pairs

Preston Briggs; Keith D. Cooper; Linda Torczon

Many architectures require that a program use pairs of adjacent registers to hold double-precision floating-point values. Register allocators based on Chaitin's graph-coloring technique have trouble with programs that contain both single-register values and values that require adjacent pairs of registers. In particular, Chaitin's algorithm often produces excessive spilling on such programs. This results in underuse of the register set; the extra loads and stores inserted into the program for spilling also slow execution. An allocator based on an optimistic coloring scheme naturally avoids this problem. Such allocators delay the decision to spill a value until late in the allocation process. This eliminates the over-spilling provoked by adjacent register pairs in Chaitin's scheme. This paper discusses the representation of register pairs in a graph coloring allocator. It explains the problems that arise with Chaitin's allocator and shows how the optimistic allocator avoids them. It provides a rationale for determining how to add larger aggregates to the interference graph.


ACM Letters on Programming Languages and Systems | 1993

An efficient representation for sparse sets

Preston Briggs; Linda Torczon

Sets are a fundamental abstraction widely used in programming. Many representations are possible, each offering different advantages. We describe a representation that supports constant-time implementations of clear-set, add-member, and delete-member. Additionally, it supports an efficient forall iterator, allowing enumeration of all the members of a set in time proportional to the cardinality of the set. We present detailed comparisons of the costs of operations on our representation and on a bit vector representation. Additionally, we give experimental results showing the effectiveness of our representation in a practical application: construction of an interference graph for use during graph-coloring register allocation. While this representation was developed to solve a specific problem arising in register allocation, we have found it useful throughout our work, especially when implementing efficient analysis techniques for large programs. However, the new representation is not a panacea. The operations required for a particular set should be carefully considered before this representation, or any other representation, is chosen.
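The representation can be sketched with two arrays over a fixed universe: `dense` packs the members contiguously, and `sparse[v]` records v's position in `dense`, so membership is a single cross-check. This Python version is illustrative; the published C version can additionally leave both arrays uninitialized, which a Python list cannot express.

```python
# Sparse-set sketch: constant-time clear, add, delete, and member over
# a universe 0..u-1, with iteration proportional to the cardinality.

class SparseSet:
    def __init__(self, universe):
        self.dense = [0] * universe     # members packed in dense[0:n]
        self.sparse = [0] * universe    # sparse[v] = index of v in dense
        self.n = 0                      # current cardinality

    def member(self, v):
        i = self.sparse[v]
        return i < self.n and self.dense[i] == v

    def add(self, v):
        if not self.member(v):
            self.dense[self.n] = v
            self.sparse[v] = self.n
            self.n += 1

    def delete(self, v):
        if self.member(v):              # swap v with the last member
            last = self.dense[self.n - 1]
            i = self.sparse[v]
            self.dense[i] = last
            self.sparse[last] = i
            self.n -= 1

    def clear(self):
        self.n = 0                      # O(1): no array needs zeroing

    def __iter__(self):                 # forall: touches only the members
        return iter(self.dense[:self.n])
```

The O(1) `clear` is what makes this attractive for interference-graph construction, where a scratch set is emptied once per basic block; a bit vector would pay O(u) for every clear.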


Sigplan Notices | 1996

Automatic parallelization

Preston Briggs

When people talk about parallel computers, they often make assertions about the ability (or inability) of compilers to automatically parallelize programs, especially existing applications or dusty decks. Some people are quite optimistic; others are very pessimistic. The collected claims of the two groups have given rise to the myth of automatic parallelization. There are two common versions of the myth:


Sigplan Notices | 1997

Predicting the performance of parallel machines

Preston Briggs

Many supercomputers, despite their apparent complexity, actually provide quite predictable performance. Cray's vector machines and the Tera MTA are two good examples. One reason for their predictable performance is the lack of a memory hierarchy: there are no cache effects to be accounted for in the performance model. The performance of a particular loop on such a machine is often characterized by a pair of parameters:


Sigplan Notices | 1996

Sparse matrix manipulation

Preston Briggs

Matrix multiplication is often used to illustrate the effectiveness of various optimizations; it's almost the canonical example. However, multiplication of dense matrices, or dense linear algebra in general, is a little dull; it's just too easy to get significant improvements. On the other hand, manipulation of sparse matrices is much more interesting. Indeed, it's almost a canonical counterexample, used to illustrate the inadequacy of various languages, compilers, and hardware organizations. In this column, I give a parallel implementation of matrix-vector multiplication, where the matrix is sparse, and discuss its performance on the Tera, a large parallel computer. Such a routine would be used, for instance, when solving sparse systems of linear equations. The parallelism in the routine will be implicit; that is, the code is ordinary C and uses no language extensions for expressing explicit parallelism.
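The column's routine is ordinary C, parallelized implicitly by the Tera compiler; the access pattern can be shown with a compressed-sparse-row (CSR) sketch in Python. The storage layout here is the standard CSR convention, an assumption about the column's code, not a quotation from it.

```python
# Sparse matrix-vector multiply, y = A @ x, with A in CSR form:
# row i's nonzeros are val[row_start[i]:row_start[i+1]], sitting in
# the columns listed at the same positions of col.

def spmv(row_start, col, val, x):
    n = len(row_start) - 1
    y = [0.0] * n
    for i in range(n):          # rows are independent: this outer loop
        s = 0.0                 # is the natural one to run in parallel
        for k in range(row_start[i], row_start[i + 1]):
            s += val[k] * x[col[k]]
        y[i] = s
    return y
```

The irregular, indirect access `x[col[k]]` is what makes sparse codes hard for vectorizers and caches alike, and is exactly where a flat shared memory like the Tera's helps.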
