
Publications


Featured research published by Robert H. Halstead.


ACM Transactions on Programming Languages and Systems | 1985

MULTILISP: a language for concurrent symbolic computation

Robert H. Halstead

Multilisp is a version of the Lisp dialect Scheme extended with constructs for parallel execution. Like Scheme, Multilisp is oriented toward symbolic computation. Unlike some parallel programming languages, Multilisp incorporates constructs for causing side effects and for explicitly introducing parallelism. The potential complexity of dealing with side effects in a parallel context is mitigated by the nature of the parallelism constructs and by support for abstract data types: a recommended Multilisp programming style is presented which, if followed, should lead to highly parallel, easily understandable programs. Multilisp is being implemented on the 32-processor Concert multiprocessor; however, it is ultimately intended for use on larger multiprocessors. The current implementation, called Concert Multilisp, is complete enough to run the Multilisp compiler itself and has been run on Concert prototypes including up to eight processors. Concert Multilisp uses novel techniques for task scheduling and garbage collection. The task scheduler helps control excessive resource utilization by means of an unfair scheduling policy; the garbage collector uses a multiprocessor algorithm based on the incremental garbage collector of Baker.
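
Multilisp's central parallelism construct is future: evaluating (future X) immediately returns a placeholder for the value of X while X is computed concurrently, and uses of the value wait only as needed. As a runnable stand-in (Concert Multilisp itself is not generally available), here is a minimal sketch using Racket's racket/future library, a descendant of this idea; unlike Multilisp, Racket's future takes a thunk and the result must be touched explicitly.

    #lang racket
    ;; Minimal sketch of future-style parallelism, using Racket's racket/future
    ;; library as a runnable stand-in for Multilisp's (future X). Note that
    ;; Racket's future takes a thunk and must be touched explicitly, whereas
    ;; Multilisp's future takes an expression and is touched implicitly.
    (require racket/future)

    (define (fib n)
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    (define (parallel-fib n)
      (if (< n 20)
          (fib n)                                          ; small: stay sequential
          (let ([left (future (lambda () (parallel-fib (- n 1))))])
            (+ (parallel-fib (- n 2)) (touch left)))))

    (parallel-fib 30)                                      ; => 832040

The sequential cutoff is an arbitrary choice made to limit task creation, echoing the abstract's concern with controlling excessive resource utilization.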


Proceedings of the US/Japan Workshop on Parallel Lisp: Languages and Systems | 1989

Mul-T: A High-Performance Parallel Lisp

David A. Kranz; Robert H. Halstead; Eric Mohr

The development of Mul-T has been valuable in several ways. First, Mul-T is a complete, working parallel Lisp system, publicly available to interested users. Second, its single-processor performance is competitive with that of “production quality” sequential Lisp implementations, and therefore a parallel program running under Mul-T can show absolute speedups over the best sequential implementation of the same algorithm. This is attractive to application users whose primary interest is raw speed rather than the abstract gratification of having demonstrated speedup via a time-consuming simulation. Finally, implementing Mul-T has allowed us to experiment with and evaluate implementation strategies such as inlining. The Mul-T experience has also allowed us to probe the limits of implementing futures on stock multiprocessors, and has suggested (for example) that hardware assistance for tag management may be a more significant benefit in a machine for parallel Lisp (where it can eliminate the 65% overhead of implicit touches) than it has ever proven to be in machines for sequential Lisps.
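
The remark about tag management refers to the implicit touches a future-based Lisp must perform: every strict primitive has to check whether an operand is still an unresolved placeholder. The following deliberately sequential sketch (essentially delay/force in Racket, not Mul-T's implementation) only makes that per-operand check visible.

    #lang racket
    ;; Sequential sketch of the implicit-touch (tag) check that a future-based
    ;; Lisp wraps around strict primitives. This is not Mul-T's code; the
    ;; placeholder here is simply resolved lazily on first use.
    (struct placeholder (thunk [value #:mutable] [done? #:mutable]))

    (define (make-future thunk) (placeholder thunk #f #f))

    (define (implicit-touch v)
      (cond [(placeholder? v)
             (unless (placeholder-done? v)
               (set-placeholder-value! v ((placeholder-thunk v)))
               (set-placeholder-done?! v #t))
             (placeholder-value v)]
            [else v]))

    ;; A "strict" primitive with implicit touches on its operands; every call
    ;; pays the placeholder? test, which is the kind of overhead the abstract
    ;; suggests hardware tag support could remove.
    (define (checked-+ a b)
      (+ (implicit-touch a) (implicit-touch b)))

    (checked-+ 1 (make-future (lambda () (* 2 3))))        ; => 7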


International Journal of Parallel Programming | 1986

An assessment of multilisp: lessons from experience

Robert H. Halstead

Multilisp is a parallel programming language derived from the Scheme dialect of Lisp by addition of the future construct. It has been implemented on Concert, a 32-processor shared-memory multiprocessor. A statistics-gathering feature of Concert Multilisp produces parallelism profiles showing the number of processors busy with computing or overhead, as a function of time. Experience gained using parallelism profiles and other measurement tools on several application programs has revealed three basic ways in which future generates concurrency. These ways are illustrated on two example programs: the Lisp mapping function mapcar and the partitioning routine from Quicksort. Experience with Multilisp programming exposes issues relating to side effects, error and exception handling, low-level operations for explicit manipulation of futures and tasks, and speculative computing, which are also discussed. The basic outlines of Multilisp are now fairly clear and have stood the test of being used for several applications, but further language design work is especially needed in the areas of speculative computing and exception handling.
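
One of the abstract's examples of how future generates concurrency is the mapping function mapcar: each element's computation can be started eagerly and the results collected afterwards. A hedged Racket sketch of that pattern, again using racket/future in place of Multilisp's implicitly touched future:

    #lang racket
    ;; Future-per-element mapping in the spirit of the mapcar example.
    (require racket/future)

    (define (future-map f lst)
      ;; Start every application of f in its own future ...
      (define futs (map (lambda (x) (future (lambda () (f x)))) lst))
      ;; ... then touch them in order to build the result list.
      (map touch futs))

    (future-map (lambda (n) (* n n)) '(2 3 5 7 11))        ; => '(4 9 25 49 121)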


international symposium on computer architecture | 1980

The MuNet: A scalable decentralized architecture for parallel computation

Robert H. Halstead; Stephen A. Ward

The MuNet is a multiprocessor architecture which can be program-transparently scaled over a very wide cost-performance spectrum. Each processor in a MuNet communicates directly only with a limited number of neighbors. There is no shared memory, central broadcast medium, or other hardware resource shared equally by all processors in the system. This strictly local communication and interconnection strategy means that only a constant amount of additional hardware need be added for each new processor incorporated into the system. MuNet architectures are significant because of their potential for scalability and large capacity, their way of forging a collection of processors into a coherent programming system, and their ability to support a wide range of object management functions on a distributed system without recourse to any central controlling mechanism. The paper gives an overview of the main structural features of the MuNet, along with a status report on the MuNet project.
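
To make "strictly local communication" concrete, here is a toy Racket sketch (my illustration, not the MuNet design) in which nodes form a ring, each node holds a channel only to its successor, and a message reaches a distant node by being forwarded hop by hop.

    #lang racket
    ;; Toy ring of nodes with strictly local communication: each node reads
    ;; its own inbox and forwards only to its immediate successor.
    (define N 8)
    (define inboxes (for/vector ([i N]) (make-channel)))

    (define (start-node i)
      (thread
       (lambda ()
         (let loop ()
           (match-define (list dest payload) (channel-get (vector-ref inboxes i)))
           (if (= dest i)
               (printf "node ~a received ~a\n" i payload)
               ;; Not addressed to us: hand it to our one known neighbor.
               (channel-put (vector-ref inboxes (modulo (+ i 1) N))
                            (list dest payload)))
           (loop)))))

    (for ([i N]) (start-node i))
    (channel-put (vector-ref inboxes 1) (list 5 'hello))   ; inject at node 1
    (sleep 0.1)                                            ; let it propagate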


Journal of the ACM | 1980

A Syntactic Theory of Message Passing

Stephen A. Ward; Robert H. Halstead

Recent developments by Hewitt and others have stimulated interest in message-passing constructs as an alternative to the more conventional applicative semantics on which most current languages are based. The present work illuminates the distinction between applicative and message-passing semantics by means of the μ-calculus, a syntactic model of message-passing systems similar in mechanism to the λ-calculus. Algorithms for the translation of expressions from the λ- to the μ-calculus are presented, and differences between the two approaches are discussed. Message-passing semantics seem particularly applicable to the study of multiprocessing. The μ-calculus, through the mechanism of conduits, provides a simple model for a limited but interesting class of parallel computations. Multiprocessing capabilities of the μ-calculus are illustrated, and multiple-processor implementations are discussed briefly.
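
The μ-calculus itself is beyond the scope of this summary, but the applicative versus message-passing distinction it formalizes can be shown in miniature: an applicative procedure returns a value to its caller, while in message-passing style the request carries an explicit target to which the result is sent. A hedged Racket illustration (not the paper's calculus):

    #lang racket
    ;; Applicative style: the expression returns a value to its caller.
    (define (square x) (* x x))
    (square 4)                                             ; => 16

    ;; Message-passing style: the "message" names where the result goes,
    ;; here modeled as a procedure standing in for a target/conduit.
    (define (square-msg x reply-to)
      (reply-to (* x x)))
    (square-msg 4 (lambda (r) (printf "got ~a\n" r)))      ; prints "got 16"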


Proceedings of the US/Japan Workshop on Parallel Symbolic Computing: Languages, Systems, and Applications | 1992

MulTVision: A Tool for Visualizing Parallel Program Executions

Robert H. Halstead; David A. Kranz; Patrick G. Sobalvarro

MulTVision is a visualization tool that supports both performance measurement and debugging by helping a programmer see what happens during a specific, traced execution of a program. MulTVision has two components: a debug monitor and a replay engine. A traced execution yields a log as a by-product; both the debug monitor and the replay engine use this log as input. The debug monitor produces a graphical display showing the relationships between tasks in the traced execution. Using this display, a programmer can see bottlenecks or other causes of poor performance. The replay engine can be used to reproduce internal program states that existed during the traced execution. The replay engine uses a novel log protocol, the side-effect touch protocol, oriented toward programs that are mostly functional (have few side effects). Measurements show that the tracing overhead added to mostly functional programs is generally less than the overhead already incurred for task management and touch operations. While currently limited to program executions that create at most tens of thousands of tasks, MulTVision is already useful for an interesting class of programs.
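
The mechanism MulTVision relies on is that a traced run leaves a log behind as a by-product. The sketch below shows that general idea in Racket by wrapping future creation and touch so each run records an event list; it is illustrative only and is not the paper's side-effect touch protocol.

    #lang racket
    ;; Hedged sketch: instrument future creation and touch so a run leaves an
    ;; event log as a by-product. (All logging here happens on the main thread;
    ;; a real tracer would need synchronized, low-overhead logging.)
    (require racket/future)

    (define trace-log '())
    (define (log-event! tag id)
      (set! trace-log (cons (list tag id) trace-log)))

    (define next-id 0)
    (define (traced-future thunk)                  ; returns (id . future)
      (set! next-id (+ next-id 1))
      (log-event! 'spawn next-id)
      (cons next-id (future thunk)))

    (define (traced-touch tf)
      (log-event! 'touch (car tf))
      (touch (cdr tf)))

    (define f (traced-future (lambda () (* 6 7))))
    (traced-touch f)                               ; => 42
    (reverse trace-log)                            ; => '((spawn 1) (touch 1))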


ACM Sigarch Computer Architecture News | 1987

Overview of concert multilisp: a multiprocessor symbolic computing system

Robert H. Halstead

Multilisp is a parallel programming language derived from the Scheme dialect of Lisp by addition of the future construct. Multilisp has been implemented on Concert, a shared-memory multiprocessor that uses a novel RingBus interconnection. Concert currently has 28 MC68000 processors, with a design goal of 32 processors. Several application programs have been developed and measured using Concert Multilisp. Experience with these programs has contributed to tuning the Multilisp language design and will ultimately contribute to the design of a parallel architecture streamlined for high performance on Multilisp programs.


International Journal of Parallel Programming | 1987

Simulating logic circuits: a multiprocessor application

Elizabeth Bradley; Robert H. Halstead

Circuits, especially logic circuits, are highly concurrent structures: signals flow along many parallel paths at once. This “native” concurrency, a function of both circuit size and topology, can be exploited in simulating these circuits on parallel machines. Simulation efficiency is affected by machine, language, and simulator implementation parameters like cycle speed, parallelism overhead, and partitioning of the circuit within the simulator, as well as by the amount of native concurrency. The experimental logic simulator consim, written in Multilisp and implemented on a 34-element shared-memory multiprocessor, was used to investigate these issues.
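
The "native" concurrency the abstract describes comes from gates whose inputs are ready at the same time. A small Racket sketch (my illustration, not the consim simulator) evaluates the independent NAND gates of a half adder in parallel futures:

    #lang racket
    ;; Toy gate-level parallelism: once g1 is known, the remaining gates of a
    ;; NAND-only half adder are mutually independent and can be evaluated
    ;; concurrently. Illustrative only; this is not consim.
    (require racket/future)

    (define (nand a b) (if (and (= a 1) (= b 1)) 0 1))

    (define (half-adder a b)
      (define g1 (nand a b))
      ;; g2, g3, and carry depend only on the inputs and g1.
      (define g2 (future (lambda () (nand a g1))))
      (define g3 (future (lambda () (nand b g1))))
      (define carry (nand g1 g1))
      (define sum (nand (touch g2) (touch g3)))
      (values sum carry))

    (half-adder 1 1)                               ; sum 0, carry 1
    (half-adder 1 0)                               ; sum 1, carry 0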


ACM Sigplan Lisp Pointers | 1989

A study of LISP on a multiprocessor preliminary version

Peter R. Nuth; Robert H. Halstead

Parallel symbolic computation has attracted considerable interest in recent years. Research groups building multiprocessors for such applications have been frustrated by the lack of data on how symbolic programs run on a parallel machine. This report describes the behavior of Multilisp programs running on a shared memory multiprocessor. Data was collected for a set of application programs on the frequency of different instructions, the type of objects accessed, and where the objects were located in the memory of the multiprocessor. The locality of data references for different multiprocessor organizations was measured. Finally, the effect of different task scheduling strategies on the locality of accesses was studied. This data is summarized here, and compared to other studies of LISP performance on uniprocessors.


Archive | 1989

Computation Structures

Stephen A. Ward; Robert H. Halstead

Collaboration


Dive into Robert H. Halstead's collaborations.

Top Co-Authors

David A. Kranz (Massachusetts Institute of Technology)
Stephen A. Ward (Massachusetts Institute of Technology)
Christopher J. Terman (Massachusetts Institute of Technology)
Elizabeth Bradley (University of Colorado Boulder)
Patrick G. Sobalvarro (Massachusetts Institute of Technology)
Peter R. Nuth (Massachusetts Institute of Technology)