
Publication


Featured research published by Brian D. Marsh.


Symposium on Operating Systems Principles | 1991

First-class user-level threads

Brian D. Marsh; Michael L. Scott; Thomas J. LeBlanc; Evangelos P. Markatos

It is often desirable, for reasons of clarity, portability, and efficiency, to write parallel programs in which the number of processes is independent of the number of available processors. Several modern operating systems support more than one process in an address space, but the overhead of creating and synchronizing kernel processes can be high. Many runtime environments implement lightweight processes (threads) in user space, but this approach usually results in second-class status for threads, making it difficult or impossible to perform scheduling operations at appropriate times (e.g. when the current thread blocks in the kernel). In addition, a lack of common assumptions may also make it difficult for parallel programs or library routines that use dissimilar thread packages to communicate with each other, or to synchronize access to shared data.

We describe a set of kernel mechanisms and conventions designed to accord first-class status to user-level threads, allowing them to be used in any reasonable way that traditional kernel-provided processes can be used, while leaving the details of their implementation to user-level code. The key features of our approach are (1) shared memory for asynchronous communication between the kernel and the user, (2) software interrupts for events that might require action on the part of a user-level scheduler, and (3) a scheduler interface convention that facilitates interactions in user space between dissimilar kinds of threads. We have incorporated these mechanisms in the Psyche parallel operating system, and have used them to implement several different kinds of user-level threads. We argue for our approach in terms of both flexibility and performance.
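The abstract's feature (2), software interrupts that return control to a user-level scheduler when the running thread blocks, can be illustrated with a small simulation. This is a hypothetical Python sketch, not Psyche's kernel interface: generators stand in for user-level threads, and a yield stands in for the software interrupt the kernel would deliver on a blocking event.

```python
# Illustrative simulation of a user-level thread scheduler.
# In the scheme described above, the kernel raises a software interrupt
# when the running thread blocks; here a generator yielding "blocked"
# plays that role, and the scheduler (user-level code) regains control
# and picks the next ready thread.
from collections import deque

class UserLevelScheduler:
    def __init__(self):
        self.ready = deque()        # runnable user-level threads
        self.trace = []             # scheduling decisions, for inspection

    def spawn(self, name, thread_fn):
        self.ready.append((name, thread_fn()))

    def run(self):
        while self.ready:
            name, thread = self.ready.popleft()
            try:
                event = next(thread)      # run until it yields (blocks)
            except StopIteration:
                self.trace.append((name, "done"))
                continue
            # "Software interrupt": the thread blocked in the kernel,
            # so the user-level scheduler requeues it and moves on.
            self.trace.append((name, event))
            self.ready.append((name, thread))

def worker(steps):
    def thread():
        for _ in range(steps):
            yield "blocked"   # e.g. would block in the kernel here
    return thread

sched = UserLevelScheduler()
sched.spawn("A", worker(2))
sched.spawn("B", worker(1))
sched.run()
print(sched.trace)
```

The point of the convention is visible in `run`: scheduling decisions stay in user space, while the kernel only needs to notify the scheduler that the current thread can no longer make progress.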


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 1990

Multi-model parallel programming in Psyche

Michael L. Scott; Thomas J. LeBlanc; Brian D. Marsh

Many different parallel programming models, including lightweight processes that communicate with shared memory and heavyweight processes that communicate with messages, have been used to implement parallel applications. Unfortunately, operating systems and languages designed for parallel programming typically support only one model. Multi-model parallel programming is the simultaneous use of several different models, both across programs and within a single program. This paper describes multi-model parallel programming in the Psyche multiprocessor operating system. We explain why multi-model programming is desirable and present an operating system interface designed to support it. Through a series of three examples, we illustrate how the Psyche operating system supports different models of parallelism and how the different models are able to interact.
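The two models named in the abstract, lightweight processes sharing memory and heavyweight processes exchanging messages, can coexist and interact in one program. The sketch below is a hypothetical Python analogy, not the Psyche interface: one worker uses the shared-memory model (a lock-protected counter), the other uses the message-passing model (a queue), and they cooperate through a single cross-model message.

```python
# Two parallel-programming models side by side: a shared-memory model
# (threads synchronizing on a lock) and a message-passing model
# (communication only through a mailbox). Names here are illustrative.
import threading
import queue

shared = {"count": 0}
lock = threading.Lock()
mailbox = queue.Queue()

def shared_memory_worker(n):
    # Shared-memory model: mutate common state under a lock.
    for _ in range(n):
        with lock:
            shared["count"] += 1
    # Cross-model interaction: hand the result to the other model.
    mailbox.put(("done", shared["count"]))

def message_passing_worker(results):
    # Message-passing model: no shared state, only the mailbox.
    tag, value = mailbox.get()
    results.append((tag, value))

results = []
t1 = threading.Thread(target=shared_memory_worker, args=(5,))
t2 = threading.Thread(target=message_passing_worker, args=(results,))
t2.start(); t1.start()
t1.join(); t2.join()
print(results)
```

The design question the paper addresses is exactly the seam between the two workers: how processes written to different models can communicate without one model subsuming the other.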


Concurrency and Computation: Practice and Experience | 1993

Kernel-Kernel communication in a shared-memory multiprocessor

Eliseu M. Chaves; Prakash Das; Thomas J. LeBlanc; Brian D. Marsh; Michael L. Scott

In the standard kernel organization on a bus-based multiprocessor, all processors share the code and data of the operating system; explicit synchronization is used to control access to kernel data structures. Distributed-memory multicomputers use an alternative approach, in which each instance of the kernel performs local operations directly and uses remote invocation to perform remote operations. Either approach to interkernel communication can be used in a large-scale shared-memory multiprocessor. In the paper we discuss the issues and architectural features that must be considered when choosing between remote memory access and remote invocation. We focus in particular on experience with the Psyche multiprocessor operating system on the BBN Butterfly Plus. We find that the Butterfly architecture is biased towards the use of remote invocation for kernel operations that perform a significant number of memory references, and that current architectural trends are likely to increase this bias in future machines. This conclusion suggests that straightforward parallelization of existing kernels (e.g. by using semaphores to protect shared data) is unlikely in the future to yield acceptable performance. We note, however, that remote memory access is useful for small, frequently-executed operations, and is likely to remain so.
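The trade-off the paper analyzes can be captured in a back-of-the-envelope cost model: an operation making n remote memory references either pays a per-reference remote-latency penalty, or pays a fixed remote-invocation overhead and then runs with local references. The latency numbers below are illustrative assumptions, not Butterfly Plus measurements.

```python
# Cost model for remote memory access vs remote invocation.
# All latencies are hypothetical, in nanoseconds.

def remote_access_cost(n_refs, remote_ns):
    # Execute in place; every reference crosses the interconnect.
    return n_refs * remote_ns

def remote_invocation_cost(n_refs, local_ns, invoke_ns):
    # Pay a fixed invocation overhead; references are then local.
    return invoke_ns + n_refs * local_ns

def prefer_invocation(n_refs, local_ns=100, remote_ns=600, invoke_ns=5000):
    return (remote_invocation_cost(n_refs, local_ns, invoke_ns)
            < remote_access_cost(n_refs, remote_ns))

# Small operations favor direct remote access; reference-heavy
# operations amortize the fixed invocation overhead.
print(prefer_invocation(5))    # → False (few references)
print(prefer_invocation(50))   # → True  (many references)
```

With these numbers the crossover sits at invoke_ns / (remote_ns - local_ns) = 10 references, which mirrors the paper's conclusion: remote memory access wins for small, frequent operations, remote invocation for reference-heavy ones, and a widening remote/local latency gap moves the crossover lower.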


IEEE Computer | 1992

The Rochester Checkers Player: multimodel parallel programming for animate vision

Brian D. Marsh; Christopher M. Brown; Thomas J. LeBlanc; Michael L. Scott; Timothy G. Becker; Cesar Quiroz; Prakash Das; Jonas Karlsson

It is maintained that to exploit fully the parallelism inherent in animate vision systems, an integrated vision architecture must support multiple models of parallelism. To support this claim, the hardware base of a typical animate vision laboratory and the software requirements of applications are described. A brief overview is then given of the Psyche operating system, which was designed to support multimodel programming. A complex animate vision application, checkers, constructed as a multimodel program under Psyche, is also described. Checkers demonstrates the advantages of decomposing animate vision systems by function and independently selecting an appropriate parallel-programming model for each function.


Workshop on Hot Topics in Operating Systems | 1989

A multi-user, multi-language open operating system

Michael L. Scott; Thomas J. LeBlanc; Brian D. Marsh

An open operating system, which provides a high degree of programming flexibility and efficiency, generally requires that all programs be written in a single language and provides no protection other than that which is available from the compiler. It is noted that these limitations become unacceptable on a workstation that must run untrusted software written in many different languages. Psyche, an open operating system designed to make the most effective possible use of shared-memory multiprocessors and uniprocessor machines, is presented. It combines the flexibility of an open operating system with the ability to write in multiple languages and to establish solid protection boundaries. It also provides the efficiency of an open operating system for programs that do not require protection.


Journal of Parallel and Distributed Computing | 1992

Operating system support for animate vision

Brian D. Marsh; Christopher M. Brown; Thomas J. LeBlanc; Michael L. Scott; Timothy G. Becker; Prakash Das; Jonas Karlsson; Cesar Quiroz

Animate vision systems couple computer vision and robotics to achieve robust and accurate vision, as well as other complex behavior. These systems combine low-level sensory processing and effector output with high-level cognitive planning, all computationally intensive tasks that can benefit from parallel processing. A typical animate vision application will likely consist of many tasks, each of which may require a different parallel programming model, and all of which must cooperate to achieve the desired behavior. These multi-model programs require an underlying software system that not only supports several different models of parallel computation simultaneously, but which also allows tasks implemented in different models to interact. This paper describes the Psyche multiprocessor operating system, which was designed to support multi-model programming, and the Rochester Checkers Player, a multi-model robotics program that plays checkers against a human opponent. Psyche supports a variety of parallel programming models within a single operating system by according first-class status to processes implemented in user space. It also supports interactions between programming models using model-independent communication, wherein different types of processes communicate and synchronize without relying on the semantics or implementation of a particular programming model. The implementation of the Checkers Player, in which different parallel programming models are used for vision, robot motion planning, and strategy, illustrates the use of the Psyche mechanisms in an application program, and demonstrates many of the advantages of multi-model programming for animate vision systems. © 1992 Academic Press, Inc.


International Conference on Parallel Processing | 1988

Design Rationale for Psyche, a General-Purpose Multiprocessor Operating System

Michael L. Scott; Thomas J. LeBlanc; Brian D. Marsh


IEEE Transactions on Reliability | 1989

Evolution of an Operating System for Large-Scale Shared-Memory Multiprocessors

Michael L. Scott; Thomas J. LeBlanc; Brian D. Marsh


Computing Systems | 1989

Implementation Issues for the Psyche Multiprocessor Operating System

Michael L. Scott; Thomas J. LeBlanc; Brian D. Marsh; Timothy G. Becker; Cezary Dubnicki; Evangelos P. Markatos; Neil G. Smithline


Multi-Model Parallel Programming | 1992

Multi-model parallel programming

Brian D. Marsh

Collaboration


Dive into Brian D. Marsh's collaboration.

Top Co-Authors

Prakash Das

University of Rochester


Cesar Quiroz

University of Rochester
