David Mosberger
University of Arizona
Publication
Featured research published by David Mosberger.
operating systems design and implementation | 1996
David Mosberger; Larry L. Peterson
This paper makes a case for paths as an explicit abstraction in operating system design. Paths provide a unifying infrastructure for several OS mechanisms that have been introduced in the last several years, including fbufs, integrated layer processing, packet classifiers, code specialization, and migrating threads. This paper articulates the potential advantages of a path-based OS structure, describes the specific path architecture implemented in the Scout OS, and demonstrates the advantages in a particular application domain---receiving, decoding, and displaying MPEG-compressed video.
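The path idea described above can be sketched as a fixed pipeline of per-layer handlers that data traverses end to end. This is a hypothetical illustration of the abstraction, not Scout's actual API; all names are invented.

```python
# Hypothetical sketch of the "path" abstraction: an ordered sequence of
# processing stages (e.g., network receive -> decode -> display) bound
# together so the OS can reason about the whole data flow at once.
# All names here are illustrative, not Scout's real interfaces.

class Path:
    def __init__(self, stages):
        self.stages = stages  # ordered list of per-layer handlers

    def process(self, data):
        # Data traverses every stage of the path in turn, which is what
        # lets the OS schedule and allocate resources for the flow as a unit.
        for stage in self.stages:
            data = stage(data)
        return data

# Illustrative stages for the MPEG example from the abstract.
receive = lambda pkt: pkt["payload"]
decode  = lambda payload: payload.upper()   # stand-in for MPEG decode
display = lambda frame: f"<frame:{frame}>"

video_path = Path([receive, decode, display])
print(video_path.process({"payload": "iframe"}))  # -> <frame:IFRAME>
```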
Operating Systems Review | 1993
David Mosberger
This paper discusses memory consistency models and their influence on software in the context of parallel machines. In the first part we review previous work on memory consistency models. The second part discusses the issues that arise due to weakening memory consistency. We are especially interested in the influence that weakened consistency models have on language, compiler, and runtime system design. We conclude that tighter interaction between those parts and the memory system might improve performance considerably.
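One way to make the weakening concrete is the classic store-buffering litmus test: under sequential consistency the outcome r1 == r2 == 0 is impossible, while weaker models (e.g., TSO with store buffers) permit it. The sketch below, which is an illustration and not taken from the paper, enumerates all program-order-preserving interleavings to show the sequentially consistent outcomes.

```python
# Store-buffering litmus test under sequential consistency.
# Thread 1: x = 1; r1 = y      Thread 2: y = 1; r2 = x
# Under SC no interleaving yields (r1, r2) == (0, 0); that outcome is
# exactly what weaker consistency models permit.
from itertools import permutations

def run(schedule):
    mem = {"x": 0, "y": 0}
    regs = {}
    ops = {
        "Wx": lambda: mem.__setitem__("x", 1),
        "Ry": lambda: regs.__setitem__("r1", mem["y"]),
        "Wy": lambda: mem.__setitem__("y", 1),
        "Rx": lambda: regs.__setitem__("r2", mem["x"]),
    }
    for op in schedule:
        ops[op]()
    return regs["r1"], regs["r2"]

def sc_outcomes():
    # Keep only interleavings that preserve each thread's program order.
    outcomes = set()
    for sched in permutations(["Wx", "Ry", "Wy", "Rx"]):
        if sched.index("Wx") < sched.index("Ry") and \
           sched.index("Wy") < sched.index("Rx"):
            outcomes.add(run(sched))
    return outcomes

print(sc_outcomes())  # (0, 0) never appears under SC
```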
workshop on hot topics in operating systems | 1995
A. B. Montz; David Mosberger; Sean W. O'Malley; Larry L. Peterson; Todd A. Proebsting
Scout is a new communication-centric operating system. The principal Scout abstraction (the path) is an attempt to capture all of the operating system infrastructure necessary to ensure that a given network connection can achieve high and predictable performance in the face of other connections and other system loads.
acm special interest group on data communication | 1996
David Mosberger; Larry L. Peterson; Patrick G. Bridges; Sean W. O'Malley
This paper describes several techniques designed to improve protocol latency, and reports on their effectiveness when measured on a modern RISC machine employing the DEC Alpha processor. We found that the memory system---which has long been known to dominate network throughput---is also a key factor in protocol latency. As a result, improving instruction cache effectiveness can greatly reduce protocol processing overheads. An important metric in this context is the memory cycles per instruction (mCPI), which is the average number of cycles that an instruction stalls waiting for a memory access to complete. The techniques presented in this paper reduce the mCPI by a factor of 1.35 to 5.8. In analyzing the effectiveness of the techniques, we also present a detailed study of the protocol processing behavior of two protocol stacks---TCP/IP and RPC---on a modern RISC processor.
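The mCPI metric defined above can be turned into a back-of-the-envelope latency estimate: total cycles per instruction is the base (compute) CPI plus mCPI, so a reduction in mCPI translates directly into lower protocol latency. The numbers below are purely illustrative, not measurements from the paper.

```python
# mCPI = memory stall cycles / instructions executed.
# Total CPI = base (compute) CPI + mCPI, so cutting mCPI cuts latency.
def mcpi(stall_cycles, instructions):
    return stall_cycles / instructions

def protocol_latency_cycles(instructions, base_cpi, m_cpi):
    return instructions * (base_cpi + m_cpi)

# Illustrative numbers only: 10,000-instruction protocol path, base CPI 1.0,
# mCPI reduced by the paper's best-case factor of 5.8.
before = protocol_latency_cycles(10_000, base_cpi=1.0, m_cpi=2.9)
after  = protocol_latency_cycles(10_000, base_cpi=1.0, m_cpi=2.9 / 5.8)
print(before / after)  # a 5.8x mCPI reduction gives ~2.6x lower latency here
```

Note that the end-to-end speedup is smaller than the mCPI reduction factor because the compute portion of each instruction is unaffected.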
Software - Practice and Experience | 1996
David Mosberger; Peter Druschel; Larry L. Peterson
This article presents a software-only solution to the synchronization problem for uniprocessors. The idea is to execute atomic sequences without any hardware protection, and in the rare case of preemption, to roll the sequence forward to the end, thereby preserving atomicity. One of the proposed implementations protects atomic sequences without any memory accesses. This is significant as it enables execution at CPU speed, rather than memory speed. The benefit of this method increases with the frequency at which atomic sequences are executed. It therefore encourages the building of systems with fine-grained synchronization. This has the additional advantage of reducing average latency. Experiments demonstrate that this technique has the potential to outperform even the best hardware mechanisms. The main contribution of this article is to discuss operating-system related issues of roll-forward and to demonstrate its practicality, both in terms of flexibility and performance.
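The roll-forward mechanism can be sketched as follows: an atomic sequence runs with no locks and no hardware protection, and if the scheduler preempts it mid-sequence, the kernel first executes the remaining steps on the interrupted thread's behalf, so the sequence always appears atomic to everyone else. This is an illustrative simulation, not the article's implementation.

```python
# Illustrative roll-forward sketch (not the article's code): a preemption
# that lands inside an atomic sequence triggers completion of the sequence
# before control is taken away, preserving atomicity without locks.
class AtomicSequence:
    def __init__(self, steps):
        self.steps = steps   # plain functions, no hardware protection
        self.pc = 0          # progress within the sequence

    def run(self, preempt_at=None):
        while self.pc < len(self.steps):
            if self.pc == preempt_at:
                self.roll_forward()  # what the kernel would do on preemption
                return "preempted-but-completed"
            self.steps[self.pc]()
            self.pc += 1
        return "completed"

    def roll_forward(self):
        # Finish the remaining steps on behalf of the interrupted thread.
        while self.pc < len(self.steps):
            self.steps[self.pc]()
            self.pc += 1

# Example: a two-step enqueue treated as one atomic sequence.
queue, node = [], {"v": 42}
seq = AtomicSequence([lambda: queue.append(node),
                      lambda: node.__setitem__("linked", True)])
seq.run(preempt_at=1)
print(queue[0]["linked"])  # True: the sequence finished despite preemption
```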
ieee international conference on high performance computing data and analytics | 1994
Charles J. Turner; David Mosberger; Larry L. Peterson
Data parallel languages are gaining interest as it becomes clear that they support a wider range of computation than previously believed. With improved network technology, it is now feasible to build data parallel supercomputers using traditional RISC-based workstations connected by a high-speed network. The paper presents an in-depth look at the communication behavior of nine C* programs (J.L. Frankel, 1991). It also compares the performance of these programs on both a cluster of 8 HP 720 workstations and a 32 node (128 Vector Unit) CM-5. The result is that under some conditions, the cluster is faster on an absolute scale, and that on a relative, per-node scale, the cluster delivers superior performance in all cases.
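The absolute-versus-per-node distinction made in the abstract is simple arithmetic: per-node performance normalizes throughput by machine size, so a small cluster can win per node even when the larger machine finishes sooner. The numbers below are purely illustrative, not the paper's measurements.

```python
# Per-node performance = work completed / elapsed time / node count.
# Illustrative only: invented runtimes, not data from the paper.
def per_node_rate(work, seconds, nodes):
    return work / seconds / nodes

cluster = per_node_rate(work=1.0, seconds=20.0, nodes=8)   # 8 workstations
cm5     = per_node_rate(work=1.0, seconds=10.0, nodes=32)  # 32-node CM-5
print(cluster > cm5)  # True: the cluster wins per node despite being
                      # slower in absolute time in this example
```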
workshop on hot topics in operating systems | 1993
David Mosberger; Larry L. Peterson
The motivation for the work presented in this paper stems from the observation that optical token-ring networks have bit-error rates that are low enough to be negligible for all but the most demanding applications. We define the notion of careful protocols that attempt to benefit from the reliability of such networks. Although it might seem trivial to implement protocols in the presence of a reliable network, a closer look reveals that this is not at all true. In essence, while protocols based on unreliable networks have to worry about recovering from lost packets, careful protocols have to worry about flow control. This is work in progress, and as such, incomplete. However, first results appear to show that it might be worthwhile to use careful protocols over networks with high reliability.
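The shift from loss recovery to flow control can be sketched with a credit-based sender: over a near-error-free network it keeps no timers or retransmission state, but it must still stop when the receiver's buffers are full. This is an illustrative assumption about what a careful protocol might look like, not the paper's design.

```python
# Sketch of a "careful" sender (assumed design, not the paper's protocol):
# no retransmission machinery, but credit-based flow control so the sender
# never overruns the receiver's buffers.
class CarefulSender:
    def __init__(self, credits):
        self.credits = credits      # receiver buffers we may still fill

    def send(self, pkt, link):
        if self.credits == 0:
            return False            # blocked by flow control, not by loss
        link.append(pkt)            # no timers, no retransmit state kept
        self.credits -= 1
        return True

    def grant(self, n):
        # Receiver returns credits as it drains its buffers.
        self.credits += n

link = []
s = CarefulSender(credits=2)
sent = [s.send(i, link) for i in range(3)]
print(sent)             # [True, True, False]: third packet must wait
s.grant(1)
print(s.send(3, link))  # True: a returned credit unblocks the sender
```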
Archive | 1998
Andy C. Bavier; Larry L. Peterson; David Mosberger
Archive | 1995
David Mosberger; Larry L. Peterson
operating systems design and implementation | 1994
Allen Brady Montz; David Mosberger; Sean W. O'Malley; Larry L. Peterson; Todd A. Proebsting; John H. Hartman