Publications

Featured research published by Peter Hibbard.


Journal of Systems and Software | 1983

Generalized path expressions: A high-level debugging mechanism

Bernd Bruegge; Peter Hibbard

This paper introduces a modified version of path expressions called Path Rules which can be used as a debugging mechanism to monitor the dynamic behavior of a computation. Path rules have been implemented in a remote symbolic debugger running on the Three Rivers Computer Corporation PERQ computer under the Accent operating system.
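The paper's Path Rules notation is not reproduced here, but the underlying idea of a path expression can be sketched: a pattern that constrains the order in which events on a resource may occur, checked against the program's event trace. The sketch below is an invented illustration of that general idea, not the paper's mechanism.

```python
import re

# Illustrative sketch (invented here): a path expression constrains event order.
# The allowed "path" for a file-like resource is: open (read | write)* close
PATH = re.compile(r"open(read|write)*close")

def check_trace(events):
    """Return True if the event sequence obeys the path expression."""
    return PATH.fullmatch("".join(events)) is not None

# A legal history matches the path; an illegal one (write after close) does not.
assert check_trace(["open", "read", "write", "close"])
assert not check_trace(["open", "close", "write"])
```

A debugger built on this idea would report a violation at the first event that falls outside the permitted path, rather than merely returning a boolean.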


Sigplan Notices | 1983

Generalized path expressions: a high level debugging mechanism

Bernd Bruegge; Peter Hibbard

This paper introduces a modified version of path expressions called Path Rules which can be used as a debugging mechanism to monitor the dynamic behaviour of a computation. Path rules have been implemented in a remote symbolic debugger running on the Three Rivers Computer Corporation PERQ computer under the Accent operating system.


ACM Transactions on Information Systems | 1985

A Butler process for resource sharing on Spice machines

Roger B. Dannenberg; Peter Hibbard

A network of personal computers may contain a large amount of distributed computing resources. For a number of reasons it is desirable to share these resources, but sharing is complicated by issues of security and autonomy. A process known as the Butler addresses these problems and provides support for resource sharing. The Butler relies upon a capability-based accounting system called the Banker to monitor the use of local resources.
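The Butler and Banker are defined in the paper itself; the toy class below only sketches the flavor of capability-based resource accounting that the abstract describes, with every name and detail invented here for illustration.

```python
class Banker:
    """Toy sketch (invented here) of capability-based resource accounting."""
    def __init__(self, budget):
        self.budget = budget      # total resource units this client may consume
        self.used = 0

    def grant(self, amount):
        """Issue a capability for `amount` units, or refuse if over budget."""
        if self.used + amount > self.budget:
            return None           # refuse: would exceed the negotiated budget
        self.used += amount
        return ("capability", amount)

banker = Banker(budget=100)
assert banker.grant(60) is not None   # within budget: granted
assert banker.grant(60) is None       # would exceed budget: refused
assert banker.grant(40) is not None   # exactly exhausts the budget
```

The point of the capability style is that a client can only consume resources it has been explicitly granted, which gives the host machine both security and autonomy over its local resources.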


Proceedings of the ACM SIGSOFT/SIGPLAN software engineering symposium on High-level debugging | 1983

Generalized path expressions: A high level debugging mechanism (Preliminary Draft)

Bernd Bruegge; Peter Hibbard

This paper introduces a modified version of path expressions called Path Rules which can be used as a debugging mechanism to monitor the dynamic behavior of a computation. Path rules have been implemented in a remote symbolic debugger running on the Three Rivers Computer Corporation PERQ computer under the Accent operating system.


ACM Sigada Ada Letters | 1986

Studies in Ada style

Peter Hibbard; Andy Hisgen; Jonathan Rosenberg; Mary Shaw; Mark Sherman

The book's table of contents:

Preface to the Second Edition. Preface to the First Edition.

Part I: The Impact of Abstraction Concerns on Modern Programming Languages
1. The Impact of Abstraction Concerns on Modern Programming Languages: 1.1 Issues of Modern Software; 1.2 Historical Review of Abstraction Techniques (1.2.1 Early Abstraction Techniques; 1.2.2 Extensible Languages; 1.2.3 Structured Programming; 1.2.4 Program Verification; 1.2.5 Abstract Data Types; 1.2.6 Interactions Between Abstraction and Specification Techniques); 1.3 Abstraction Facilities in Modern Programming Languages (1.3.1 The New Ideas; 1.3.2 Language Support for Abstract Data Types; 1.3.3 Generic Definitions); 1.4 Practical Realizations (1.4.1 A Small Example Program; 1.4.2 Pascal; 1.4.3 Ada); 1.5 Status and Potential (1.5.1 How New Ideas Affect Programming; 1.5.2 Limitations of Current Abstraction Techniques; 1.5.3 Further Reading)

Part II: Programming in Ada: Examples
1. Introduction to Example Programs
2. An Implementation of Queues: 2.1 Description; 2.2 Implementation; 2.3 Program Text; 2.4 Discussion (Use of Limited Private Types; Initialization and Finalization; Passing Discriminants to Tasks; Remove as a Procedure)
3. A Simple Graph Package Providing an Iterator: 3.1 Description; 3.2 Specifications; 3.3 Program Text; 3.4 Discussion (The Algorithm; Information Hiding; In/In Out Parameters; Using the Iterator; Iterators Versus Generic Procedures; Separate Compilation)
4. A Console Driver for a PDP-11: 4.1 Description; 4.2 Implementation; 4.3 Program Text; 4.4 Discussion (Use of a Package to Surround the Task; Distinction Between Task Types and Tasks; Resetting and Terminating the Terminal Driver; Interfacing to Devices)
5. Table Creation and Table Searching: 5.1 Description; 5.2 Implementation; 5.3 Program Text; 5.4 Discussion (Use of the Package; Use of the Search Function; Use of Packages; The Type of the Entries in the Table; Use of a Private Type for the Pointers to the Table; Nesting a Generic Package Within a Generic Package; String Comparisons; Use of Integers in Find)
6. Solution of Laplace's Equation with Several Ada Tasks: 6.1 Description; 6.2 Implementation; 6.3 Program Text (A Protected Counter Task Type; Parallel Relaxation Procedure); 6.4 Discussion (Use of Shared Variables; Updates of Shared Variables From Registers; Generics and Generic Instantiation; Scheduling of Ada Tasks Onto Processors)

References


Software - Practice and Experience | 1982

An adaptive system for dynamic storage allocation

Bruce W. Leverett; Peter Hibbard

We describe a technique (the adaptive creation of free lists) for dynamic storage allocation that is particularly suited to situations in which the distribution of sizes of blocks requested has one or more sharp peaks. We describe a particular dynamic storage allocation system and the environment in which it runs, and give the results of some experiments to determine the usefulness of the technique in this system. Our experiments also tested the efficacy of a technique suggested by Knuth for improving the performance of similar systems.
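The adaptive idea described above can be modeled in a few lines: once a block size has been requested often enough, it earns its own free list, so later requests for that frequent size are served by reusing freed blocks. The sketch below is an invented toy model under that assumption, not the paper's allocator.

```python
from collections import defaultdict

THRESHOLD = 3  # requests of one size before it earns a dedicated free list

class AdaptiveAllocator:
    """Toy model (invented here): popular sizes get their own free list."""
    def __init__(self):
        self.request_counts = defaultdict(int)
        self.free_lists = {}          # size -> list of recycled blocks

    def allocate(self, size):
        self.request_counts[size] += 1
        if self.request_counts[size] >= THRESHOLD and size not in self.free_lists:
            self.free_lists[size] = []          # adaptively create the list
        lst = self.free_lists.get(size)
        if lst:
            return lst.pop()                    # fast path: reuse a freed block
        return bytearray(size)                  # slow path: general allocator

    def free(self, block):
        lst = self.free_lists.get(len(block))
        if lst is not None:
            lst.append(block)                   # recycle onto the size's list

alloc = AdaptiveAllocator()
blocks = [alloc.allocate(64) for _ in range(3)]  # 64 now has its own free list
alloc.free(blocks[0])
assert alloc.allocate(64) is blocks[0]           # served from the free list
```

This captures why the technique suits workloads with sharp peaks in the size distribution: the peak sizes bypass the general allocator entirely.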


International Symposium on Computer Architecture | 1978

A language implementation design for a multiprocessor computer system

Peter Hibbard; Andy Hisgen; Thomas L. Rodeheffer

Theoretical and experimental results have indicated that automatic decompositions can discover modest amounts of parallelism. These investigations have tended to ignore the practical problems of language run-time organization, such as synchronization, communication, memory organization, resource management, and input/output. This paper describes a language implementation effort which combines the investigation of implicit and explicit parallel decomposition facilities with the practical considerations of system organization on a multiprocessor computer, Cm*.


Parallel Computations | 1982

A Case Study in the Application of a Tightly Coupled Multiprocessor to Scientific Computations

Neil S. Ostlund; Peter Hibbard; Robert A. Whiteside

Computational physicists, chemists, and biologists need hardware and software facilities capable of solving numerical problems. Processors that can execute several operations concurrently are a more cost-effective way of supplying these needs than serial computers. This chapter presents experiments designed to understand the potential of a general-purpose, tightly coupled multiprocessor, and describes the hardware and software characteristics of multiprocessors. From a hardware engineering point of view, multiprocessors may offer more attractive solutions to the problem of increasing computer performance than specialized parallel machines, because they can take advantage of simple, regular designs that employ replicated standard components. When suitable communication structures are used, they also allow extensibility and reliability. To a certain extent, the regularity of the design is achieved at the expense of removing specialized and centralized control, and one characteristic of software that executes on multiple-instruction, multiple-data (MIMD) machines is the need to provide that control through a variety of programming techniques. This involves software overheads, both in design complexity and in execution costs.


Proceedings of the 5th Colloquium on International Symposium on Programming | 1982

Optimizing for a multiprocessor: Balancing synchronization costs against parallelism in straight-line code

Peter Hibbard; Thomas L. Rodeheffer

This paper reports on the status of a research project to develop compiler techniques to optimize programs for execution on an asynchronous multiprocessor. We adopt a simplified model of a multiprocessor, consisting of several identical processors, all sharing access to a common memory. Synchronization must be done explicitly, using two special operations that take a period of time comparable to the cost of data operations. Our treatment differs from other attempts to generate code for such machines because we treat the necessary synchronization overhead as an integral part of the cost of a parallel code sequence. We are particularly interested in heuristics that can be used to generate good code sequences, and local optimizations that can then be applied to improve them. Our current efforts are concentrated on generating straight-line code for high-level, algebraic languages.
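The tradeoff the abstract describes reduces to simple arithmetic under its stated model: synchronization operations cost roughly as much as data operations, so a parallel code sequence only wins when the work saved exceeds the synchronization paid for. The sketch below is an invented cost model illustrating that comparison, not the paper's heuristics.

```python
SYNC_COST = 2   # cost of one synchronization op, comparable to a data op
OP_COST = 1     # cost of one data operation

def serial_cost(n_ops):
    """Cost of running all operations on a single processor."""
    return n_ops * OP_COST

def parallel_cost(n_ops, n_syncs):
    """Two processors split the ops evenly but pay for each synchronization."""
    return (n_ops // 2) * OP_COST + n_syncs * SYNC_COST

# 20 ops with 2 syncs: parallel wins (10 + 4 = 14 < 20).
assert parallel_cost(20, 2) < serial_cost(20)
# 6 ops with 2 syncs: sync overhead erases the gain (3 + 4 = 7 > 6).
assert parallel_cost(6, 2) > serial_cost(6)
```

A compiler making this decision per straight-line code sequence would choose the parallel schedule only when the modeled parallel cost is lower, which is the balancing act named in the title.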


Proceedings of the ACM 1980 annual conference | 1980

Multiprocessor software design

Peter Hibbard

Machines intended for parallel computations exhibit a wide variety of architectural designs, including pipeline, vector, and array organizations, less traditional associative, data-flow, and systolic organizations, and shared-memory MIMD organizations. It is not surprising, therefore, that the software support for these machines exhibits a wide variety of features reflecting the differing designs. Even within a single class of parallel machine, the system software used on different machines within that class may appear radically different.

In part this variety arises because the design space for multiprocessor software is richer than for uniprocessor software; for example, there are tradeoffs to be selected between performance and reliability, extensibility, fault-tolerance, etc., and the particular choice of design parameters can have a profound effect on the structure of the operating system. Another factor which causes variety between different operating systems is that the costs of various design choices are known much less accurately than they are with uniprocessors, and thus individual multiprocessor operating systems may exhibit a great deal of experimental variability.

Fortunately, the design principles are relatively well understood, and may be described in broad terms. In the case of special-purpose SIMD and associative machines, built with some particular set of applications in mind, a general-purpose host uniprocessor usually takes over most of the resource allocation and scheduling for both itself and for the special-purpose attached processor, which it treats as a peripheral. Consequently the support software on the special-purpose machine is relatively primitive. Since only a small number of programming techniques are appropriate for such machines, they are most easily provided to the programmer as machine-oriented extensions to conventional languages, though several systems have language processors which provide optimizers appropriate for the architecture.

Collaboration


Dive into Peter Hibbard's collaborations.

Top Co-Authors:

Andy Hisgen (Carnegie Mellon University)
Mark Sherman (Carnegie Mellon University)
Mary Shaw (Carnegie Mellon University)
Bruce W. Leverett (Carnegie Mellon University)
Paul Knueven (Carnegie Mellon University)