Oliver Sharp
University of California, Berkeley
Publications
Featured research published by Oliver Sharp.
ACM Computing Surveys | 1994
David Bacon; Susan L. Graham; Oliver Sharp
In the last three decades a large number of compiler transformations for optimizing programs have been implemented. Most optimizations for uniprocessors reduce the number of instructions executed by the program using transformations based on the analysis of scalar quantities and data-flow techniques. In contrast, optimizations for high-performance superscalar, vector, and parallel processors maximize parallelism and memory locality with transformations that rely on tracking the properties of arrays using loop dependence analysis. This survey is a comprehensive overview of the important high-level program restructuring techniques for imperative languages, such as C and Fortran. Transformations for both sequential and various types of parallel architectures are covered in depth. We describe the purpose of each transformation, explain how to determine if it is legal, and give an example of its application. Programmers wishing to enhance the performance of their code can use this survey to improve their understanding of the optimizations that compilers can perform, or as a reference for techniques to be applied manually. Students can obtain an overview of optimizing compiler technology. Compiler writers can use this survey as a reference for most of the important optimizations developed to date, and as a bibliographic reference for the details of each optimization. Readers are expected to be familiar with modern computer architecture and basic program compilation techniques.
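To make the flavor of these restructurings concrete, here is a minimal C sketch of one transformation the survey catalogs, loop interchange; the array names, sizes, and functions are illustrative rather than taken from the paper.

#include <stdio.h>

#define N 1024

static double a[N][N], b[N][N];

/* Before: the inner loop varies i, so consecutive accesses to b[i][j]
   are N*sizeof(double) bytes apart in a row-major C array, and each
   access can miss in cache. */
static void copy_before(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = b[i][j];
}

/* After loop interchange: the inner loop walks contiguous memory.
   The transformation is legal here because no iteration reads a value
   written by another iteration. */
static void copy_after(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = b[i][j];
}

int main(void) {
    b[3][5] = 42.0;
    copy_before();
    copy_after();
    printf("%f\n", a[3][5]);  /* 42.000000 either way; only speed differs */
    return 0;
}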
Symposium on Principles of Programming Languages | 1991
Steven Lucco; Oliver Sharp
Parallel programs display two fundamentally different kinds of execution behavior: synchronous and asynchronous. Some methodologies, such as distributed data structures, are best suited to the construction of asynchronous programs. In this paper, we propose a methodology for synchronous parallel programming based on the notion of a coordination structure, a direct representation of the multidimensional dataflow patterns common to synchronous programs. We introduce Delirium, a language in which one can concisely express many useful coordination structures.
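Delirium's own syntax is not reproduced in this listing, so the following C sketch is only a loose analogue of the idea: sequential operators wired together by a single coordination structure that states the synchronous dataflow pattern in one place. The relax operator and buffer layout are invented for illustration.

#include <stdio.h>

#define N 8
#define STEPS 4

/* A sequential "operator": a pure function over array slices. In the
   Delirium model such operators are written with conventional tools
   and embedded into the coordination program. (Loose C analogue, not
   Delirium itself.) */
static void relax(const double *in, double *out) {
    out[0] = in[0];
    out[N - 1] = in[N - 1];
    for (int i = 1; i < N - 1; i++)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
}

/* The "coordination structure": a fixed, synchronous dataflow pattern.
   Each step consumes the previous step's buffer and produces the next,
   so the flow of values through time is explicit in one place. */
int main(void) {
    double buf[2][N] = { { 0, 0, 0, 9, 9, 0, 0, 0 } };
    for (int t = 0; t < STEPS; t++)
        relax(buf[t % 2], buf[(t + 1) % 2]);
    for (int i = 0; i < N; i++)
        printf("%.3f ", buf[STEPS % 2][i]);
    printf("\n");
    return 0;
}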
Conference on High Performance Computing (Supercomputing) | 1990
Steven Lucco; Oliver Sharp
The authors outline a strategy for expressing coordination of sequential subcomputations, realized in the embedding language Delirium. In contrast to existing embedded languages, the notation clearly expresses the coordination framework of the application. All the coordination required to execute the program is expressed in a unified Delirium program. The program contains the computational code in the form of embedded operators, written using conventional tools. The proposed environment, which executes on a variety of shared-memory multiprocessors, provides tools that make it possible to develop parallel applications quickly. It supports a coordination model that can guarantee deterministic execution. Programmers who remain within the restrictions of the model can develop a program on a sequential machine and be certain that it will execute deterministically on a variety of parallel architectures.
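As a rough analogue of that determinism guarantee (written in C with pthreads, since no Delirium code appears in this listing): when the coordination layer is a plain fork/join over disjoint output slices, every schedule produces the sequential answer. All names and sizes below are hypothetical.

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define THREADS 4

static double in[N], out[N];

struct slice { int lo, hi; };

/* Each worker runs the same sequential operator on its own slice.
   No two workers write the same location, and nobody reads another
   worker's output before the join, so the result is independent of
   scheduling. */
static void *operator_square(void *arg) {
    struct slice *s = arg;
    for (int i = s->lo; i < s->hi; i++)
        out[i] = in[i] * in[i];
    return NULL;
}

int main(void) {
    pthread_t tid[THREADS];
    struct slice sl[THREADS];
    for (int i = 0; i < N; i++) in[i] = i * 0.001;

    for (int t = 0; t < THREADS; t++) {
        sl[t].lo = t * (N / THREADS);
        sl[t].hi = (t + 1) * (N / THREADS);
        pthread_create(&tid[t], NULL, operator_square, &sl[t]);
    }
    for (int t = 0; t < THREADS; t++)  /* join: the only synchronization */
        pthread_join(tid[t], NULL);

    printf("%f\n", out[N - 1]);        /* same answer on every run */
    return 0;
}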
Programming Language Design and Implementation | 1993
Susan L. Graham; Steven Lucco; Oliver Sharp
Many parallel programs contain multiple sub-computations, each with distinct communication and load balancing requirements. The traditional approach to compiling such programs is to impose a processor synchronization barrier between sub-computations, optimizing each as a separate entity. This paper develops a methodology for managing the interactions among sub-computations, avoiding strict synchronization where concurrent or pipelined relationships are possible. Our approach to compiling parallel programs has two components: symbolic data access analysis and adaptive runtime support. We summarize the data access behavior of sub-computations (such as loop nests) and split them to expose concurrency and pipelining opportunities. The split transformation has been incorporated into an extended FORTRAN compiler, which outputs a FORTRAN 77 program augmented with calls to library routines written in C and a coarse-grained dataflow graph summarizing the exposed parallelism. The compiler encodes symbolic information, including loop bounds and communication requirements, for an adaptive runtime system, which uses runtime information to improve the scheduling efficiency of irregular sub-computations. The runtime system incorporates algorithms that allocate processing resources to concurrently executing sub-computations and choose communication granularity. We have demonstrated that these dynamic techniques substantially improve performance on a range of production applications including climate modeling and X-ray tomography, especially when large numbers of processors are available.
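The split transformation can be sketched by hand on a producer/consumer pair of loops; the C below is a hypothetical illustration, not the compiler's actual FORTRAN output.

#include <stdio.h>
#include <math.h>

#define N 4096
#define CHUNK 256

static double a[N], b[N];

static double produce(int i)    { return sin(i * 0.01); }
static double consume(double x) { return x * x; }

/* Barrier style: a full synchronization point between sub-computations;
   the second loop cannot start until the first finishes everywhere. */
static void barrier_style(void) {
    for (int i = 0; i < N; i++) a[i] = produce(i);
    for (int i = 0; i < N; i++) b[i] = consume(a[i]);
}

/* After a split into chunks: chunk c of the consumer needs only chunk c
   of the producer, so on a parallel machine the two sub-computations can
   overlap in pipeline fashion instead of meeting at a barrier. */
static void pipelined_style(void) {
    for (int c = 0; c < N; c += CHUNK) {
        for (int i = c; i < c + CHUNK; i++) a[i] = produce(i);
        for (int i = c; i < c + CHUNK; i++) b[i] = consume(a[i]);
    }
}

int main(void) {
    barrier_style();
    double check = b[N - 1];
    pipelined_style();
    printf("%d\n", b[N - 1] == check);  /* 1: both orders agree */
    return 0;
}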
International World Wide Web Conferences | 1996
Steven Lucco; Oliver Sharp; Robert Wahbe
IEEE International Conference on High Performance Computing, Data and Analytics | 1993
David Bacon; Susan L. Graham; Oliver Sharp
Archive | 2014
Oliver Sharp; David Wortendyke; Scot Gellock; Robert Wahbe; Paul Viola
Archive | 2013
Oliver Sharp; David Wortendyke; Scot Gellock; Robert Wahbe; Paul Viola
Archive | 2016
Raphael Hoffman; Nate Dire; Erik B. Christensen; Oliver Sharp; David Wortendyke; Scot Gellock; Robert Wahbe