Williams Ludwell Harrison
University of Illinois at Urbana–Champaign
Publication
Featured research published by Williams Ludwell Harrison.
Higher-Order and Symbolic Computation / Lisp and Symbolic Computation | 1990
Williams Ludwell Harrison
Lisp and its descendants are among the most important and widely used of programming languages. At the same time, parallelism in the architecture of computer systems is becoming commonplace. There is a pressing need to extend the technology of automatic parallelization that has become available to Fortran programmers of parallel machines to the realm of Lisp programs and symbolic computing. In this paper we present a comprehensive approach to the compilation of Scheme programs for shared-memory multiprocessors. Our strategy has two principal components: interprocedural analysis and program restructuring. We introduce procedure strings and stack configurations as a framework in which to reason about interprocedural side-effects and object lifetimes, and develop a system of interprocedural analysis, using abstract interpretation, that is used in the dependence analysis and memory management of Scheme programs. We introduce the transformations of exit-loop translation and recursion splitting to treat the control structures of iteration and recursion that arise commonly in Scheme programs. We propose an alternative representation for s-expressions that facilitates the parallel creation and access of lists. We have implemented these ideas in a parallelizing Scheme compiler and run-time system, and we complement the theory of our work with “snapshots” of programs during the restructuring process, and some preliminary performance results of the execution of object codes produced by the compiler.
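The procedure-string idea in this abstract can be illustrated with a small sketch. This is an illustrative reconstruction, not the paper's formulation: it records call and return events as a sequence, then reduces that sequence to a "stack configuration" of still-live activations, the kind of information an interprocedural analysis can use to reason about object lifetimes. All names here are hypothetical.

```python
# Hypothetical sketch of procedure strings: a trace of call/return events
# accumulated during (abstract) interpretation of a program.

def record_call(trace, proc):
    """Append a call event for `proc` to the procedure string."""
    return trace + [("call", proc)]

def record_return(trace, proc):
    """Append a return event for `proc` to the procedure string."""
    return trace + [("ret", proc)]

def live_activations(trace):
    """Reduce a procedure string to its stack configuration: the stack of
    activations that are still live, with returns cancelling matching calls."""
    stack = []
    for kind, proc in trace:
        if kind == "call":
            stack.append(proc)
        elif stack and stack[-1] == proc:
            stack.pop()
    return stack

# Example: main calls f, f calls g, g returns -> main and f remain live,
# so an object created inside g but reachable from f must outlive g's frame.
t = []
for kind, proc in [("call", "main"), ("call", "f"), ("call", "g"), ("ret", "g")]:
    t = record_call(t, proc) if kind == "call" else record_return(t, proc)
assert live_activations(t) == ["main", "f"]
```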
Programming Language Design and Implementation | 1990
Zahira Ammarguellat; Williams Ludwell Harrison
The recognition of recurrence relations is important in several ways to the compilation of programs. Induction variables, the simplest form of recurrence, are pivotal in loop optimizations and dependence testing. Many recurrence relations, although expressed sequentially by the programmer, lend themselves to efficient vector or parallel computation. Despite the importance of recurrences, vectorizing and parallelizing compilers to date have recognized them only in an ad-hoc fashion. In this paper we put forth a systematic method for recognizing recurrence relations automatically. Our method has two parts. First, abstract interpretation [CC77, CC79] is used to construct a map that associates each variable assigned in a loop with a symbolic form (expression) of its value. Second, the elements of this map are matched with patterns that describe recurrence relations. The scheme is easily extensible by the addition of templates, and is able to recognize nested recurrences by the propagation of the closed forms of recurrences from inner loops. We present some applications of this method and a proof of its correctness.
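The two-phase method described in this abstract (a symbolic map built by abstract interpretation, then matching against recurrence templates) can be sketched roughly as follows. The representation and the template set below are simplifying assumptions of mine, not the paper's; they only show the shape of the map-then-match pipeline.

```python
# Illustrative sketch: phase 1 maps each loop variable to a symbolic form of
# its per-iteration update; phase 2 matches those forms against templates.
# Real symbolic forms would be full expressions; here each update is just
# an (operator, constant) pair, e.g. ('+', 1) for i = i + 1.

def symbolic_step(assignments):
    """Phase 1 stand-in: the map from variables to symbolic update forms
    (in the paper this map is computed by abstract interpretation)."""
    return assignments

def match_templates(sym_map):
    """Phase 2: classify each variable by matching its symbolic form
    against simple recurrence templates; extensible by adding cases."""
    classes = {}
    for var, (op, operand) in sym_map.items():
        if op == "+" and isinstance(operand, int):
            classes[var] = f"induction variable (stride {operand})"
        elif op == "*" and isinstance(operand, int):
            classes[var] = f"geometric recurrence (ratio {operand})"
        else:
            classes[var] = "unrecognized"
    return classes

m = symbolic_step({"i": ("+", 1), "s": ("+", 4), "p": ("*", 2)})
classes = match_templates(m)
assert classes["i"] == "induction variable (stride 1)"
assert classes["p"] == "geometric recurrence (ratio 2)"
```

Once a variable is classified this way, a compiler can replace the sequential recurrence with a closed form or a parallel scan, which is what makes the recognition useful for vectorization.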
Symposium on Principles of Programming Languages | 1993
Kwangkeun Yi; Williams Ludwell Harrison
We have designed and implemented an interprocedural program analyzer generator called System Z. Our goal is to automate the generation and management of semantics-based interprocedural program analyses for a wide range of target languages. System Z is based on the abstract interpretation framework. The input to System Z is a high-level specification of an abstract interpreter; the output is C code for the specified interprocedural program analyzer. The system provides a high-level command set (called projection expressions) with which the user can tune the analysis in accuracy and cost. The user writes projection expressions for selected domains; System Z takes care of the rest, so that the generated analyzer conducts an analysis over the projected domains, which will vary in cost and accuracy according to the projections. We demonstrate the system's capabilities through experiments with a set of generated analyzers that can analyze C, Fortran, and Scheme programs.
Symposium on Principles of Programming Languages | 1992
Jyh-Herng Chow; Williams Ludwell Harrison
Traditional optimization techniques for sequential programs are not directly applicable to parallel programs where concurrent activities may interfere with each other through shared variables. New compiler techniques must be developed to accommodate features found in parallel languages. In this paper, we use abstract interpretation to obtain useful properties of programs, e.g., side effects, data dependences, object lifetime and concurrent expressions, for a language that supports first-class functions, pointers, dynamic allocations and explicit parallelism through cobegin. These analyses may facilitate many applications, such as program optimization, parallelization, restructuring, memory management, and detecting access anomalies. Our semantics is based on a labeled transition system and is instrumented with procedure strings to record the procedural/concurrency movement along the program interpretation. We develop analyses in both concrete domains and abstract domains, and prove the correctness and termination of the abstract interpretation.
Languages and Compilers for Parallel Computing | 1992
Williams Ludwell Harrison; Zahira Ammarguellat
Miprac is a parallelizing C, Lisp and Fortran compiler. We present its workings by following a C program as it progresses through the modules of the compiler. Miprac makes use of a simple, operational intermediate form, called MIL. Dependence analysis and memory management are performed by a whole-program abstract interpretation of MIL. We present the intermediate form, and illustrate the analysis and transformation of the example program as it becomes a parallel object code for Cedar.
International Conference on Supercomputing | 1994
Li-Ling Chen; Williams Ludwell Harrison
A chief source of inefficiency in program analysis using abstract interpretation comes from the fact that a large context (i.e., problem state) is propagated from node to node during the course of an analysis. This problem can be addressed and largely alleviated by a technique we call context projection, which projects an input context for a node to the portion that is actually relevant and determines whether the node should be reevaluated based on the projected context. This technique reduces the cost of an evaluation and eliminates unnecessary evaluations. Therefore, the efficiency of computing fixpoints over general lattices is greatly improved. A specific method, reachability, is presented as an example to accomplish context projection. Experimental results using reachability show very convincing speedups (more than eight for larger programs) that demonstrate the practical significance of context projection.
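The context-projection idea in this abstract can be sketched with a small worklist fixpoint. Everything below is an assumption of mine for illustration (a two-node flow graph, a bounded integer lattice joined with `max`): a node is re-evaluated only when the projection of its input context onto the variables it actually reads has changed, which is how unnecessary evaluations are skipped.

```python
# Illustrative sketch of context projection inside a worklist fixpoint.
# `reads[n]` is the set of variables node n actually depends on; before
# re-evaluating n we project its context to reads[n] and skip the
# evaluation if that projection is unchanged since the last evaluation.

def fixpoint(nodes, edges, reads, transfer, init):
    ctx = {n: dict(init) for n in nodes}       # per-node input contexts
    last_seen = {n: None for n in nodes}       # last projected input per node
    work = list(nodes)
    evals = 0
    while work:
        n = work.pop(0)
        projected = {v: ctx[n][v] for v in reads[n]}   # context projection
        if projected == last_seen[n]:
            continue                           # nothing relevant changed: skip
        last_seen[n] = projected
        evals += 1
        out = transfer(n, ctx[n])
        for s in edges.get(n, []):
            # join (here: pointwise max) the output into each successor
            merged = {v: max(ctx[s][v], out.get(v, ctx[s][v])) for v in ctx[s]}
            if merged != ctx[s]:
                ctx[s] = merged
                work.append(s)
    return ctx, evals

# Tiny two-node cycle; the transfer function is monotone and bounded, so
# the fixpoint exists and the worklist terminates.
nodes = ["a", "b"]
edges = {"a": ["b"], "b": ["a"]}
reads = {"a": frozenset({"x"}), "b": frozenset({"x"})}
step = lambda n, c: {"x": min(c["x"] + 1, 3)}
result, evals = fixpoint(nodes, edges, reads, step, {"x": 0})
assert result == {"a": {"x": 3}, "b": {"x": 3}}
```

The skip on an unchanged projection is the point: with rich contexts and sparse dependencies, most worklist entries can be discarded without running the (expensive) transfer function, which is the source of the speedups the abstract reports.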
Proceedings of the US/Japan Workshop on Parallel Lisp: Languages and Systems | 1989
Williams Ludwell Harrison; Zahira Ammarguellat
Parcel was arguably the first complete system for the automatic parallelization of Lisp programs. It was quite successful in several respects: it introduced a sharp interprocedural semantic analysis that computes the interprocedural visibility of side-effects, and allows the placement of objects in memory according to their lifetimes; it introduced several restructuring techniques tailored to the iterative and recursive control structures that arise in Lisp programs; and it made use of multiple procedure versions with a flexible microtasking mechanism for efficient parallelism at run-time. Parcel had several shortcomings however: the intrinsic procedures of Scheme, and those added to Parcel for support of parallelism, were embedded in its interprocedural analysis, transformations, code generation and run-time system, making the system difficult to adapt for other source languages; its interprocedural analysis handled compound, mutable data only indirectly (by analogy to closures), making it less accurate and more expensive than necessary; and its representation of programs as general control-flow graphs made the implementation of complex transformations difficult.
Conference on High Performance Computing (Supercomputing) | 1990
Jyh-Herng Chow; Williams Ludwell Harrison
The authors discuss run-time microtasking support for executing nested parallel loops on a shared-memory multiprocessor system, and present a scheme called switch-stacks for implementing such support. They first discuss current approaches to flat microtasking and investigate how to extend them to full microtasking. They point out the problem of dummy waiting in the processor that initiates a parallel loop. To solve this problem, two schemes, dequeue-tasks and dequeue-descendant-tasks, are considered, and their disadvantages are discussed. The proposed switch-stacks scheme solves the problem. These schemes have been implemented in the Parcel run-time system. The results show that the new scheme nearly always achieves the best performance in execution time and stability.
International Parallel and Distributed Processing Symposium | 1991
Williams Ludwell Harrison; Jyh-Herng Chow
The effects of controlling granularity and the growth of parallelism at run time in executing automatically parallelized programs are addressed. The authors' version-switch method allows the run-time system to dynamically choose the appropriate version of code for execution in order to achieve better performance. The decision of when to switch versions is shown to be critical to the effectiveness of the method. A framework is built to study how to estimate the current workload for deciding when to switch versions. Four different control schemes based on local or global estimation of workload have been implemented in the run-time system. Their results are compared and discussed in detail.
Archive | 1995
Williams Ludwell Harrison; Sharad Mehrotra