Publication


Featured research published by Edward D. Lazowska.


IEEE Transactions on Software Engineering | 1986

Adaptive load sharing in homogeneous distributed systems

Derek L. Eager; Edward D. Lazowska; John Zahorjan

Rather than proposing a specific load sharing policy for implementation, the authors address the more fundamental question of the appropriate level of complexity for load sharing policies. It is shown that extremely simple adaptive load sharing policies, which collect very small amounts of system state information and which use this information in very simple ways, yield dramatic performance improvements. These policies in fact yield performance close to that expected from more complex policies whose viability is questionable. It is concluded that simple policies offer the greatest promise in practice, because of their combination of nearly optimal performance and inherent stability.
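The paper argues for very simple adaptive policies rather than giving one canonical algorithm. As a purely illustrative sketch (the function, threshold, and probe limit below are assumptions, not the paper's model), a sender-initiated "threshold" style policy can be written in a few lines: a node holding a new task probes a few randomly chosen nodes and transfers the task to the first one whose queue is below a threshold.

```python
import random

# Illustrative sketch (not from the paper): a simple threshold policy in
# the spirit the abstract describes. The only state collected is the
# queue length of a handful of randomly probed nodes.

def place_task(queues, origin, threshold=2, probe_limit=3, rng=random):
    """Return the node index on which a task arriving at `origin` should run."""
    if queues[origin] < threshold:          # local node is lightly loaded
        return origin
    candidates = [i for i in range(len(queues)) if i != origin]
    for node in rng.sample(candidates, min(probe_limit, len(candidates))):
        if queues[node] < threshold:        # first acceptable node wins
            return node
    return origin                           # all probes failed: keep it local

queues = [5, 0, 4, 1]                       # current queue length per node
target = place_task(queues, origin=0, rng=random.Random(42))
print(target)                               # prints 1 or 3: a node under the threshold
```

The point of the paper survives even in this toy: the policy consults almost no state (a few queue lengths) and uses it in the simplest possible way, yet it avoids sending work to nodes that are already congested.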


IEEE Transactions on Computers | 1989

Speedup versus efficiency in parallel systems

Derek L. Eager; John Zahorjan; Edward D. Lazowska

The tradeoff between speedup and efficiency that is inherent to a software system is investigated. The extent to which this tradeoff is determined by the average parallelism of the software system, as contrasted with other, more detailed, characterizations, is shown. The extent to which both speedup and efficiency can simultaneously be poor is bounded: it is shown that for any software system and any number of processors, the sum of the average processor utilization (i.e., efficiency) and the attained fraction of the maximum possible speedup must exceed one. Bounds are given on speedup and efficiency, and on the incremental benefit and cost of allocating additional processors. An explicit formulation, as well as bounds, is given for the location of the knee of the execution time-efficiency profile, where the benefit per unit cost is maximized.
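The bound in the abstract can be checked numerically. Using the paper's lower bounds in terms of the average parallelism A (where the maximum possible speedup is A itself), S(n) >= nA/(n + A - 1) and E(n) = S(n)/n >= A/(n + A - 1), so E + S/A >= (n + A)/(n + A - 1), which always exceeds one. A short sketch:

```python
# Numerical check of the abstract's claim: efficiency plus the attained
# fraction of the maximum possible speedup exceeds one, for any number of
# processors n. A is the average parallelism; max speedup equals A.

def speedup_lb(n, A):
    """Lower bound on speedup with n processors: n*A / (n + A - 1)."""
    return n * A / (n + A - 1)

def efficiency_lb(n, A):
    """Lower bound on efficiency: A / (n + A - 1)."""
    return A / (n + A - 1)

A = 8.0                              # average parallelism of the program
for n in (1, 2, 4, 8, 16, 64):
    s, e = speedup_lb(n, A), efficiency_lb(n, A)
    total = e + s / A                # efficiency + fraction of max speedup
    print(f"n={n:3d}  S>={s:5.2f}  E>={e:4.2f}  E + S/A >= {total:5.3f}")
    assert total > 1.0               # the bound stated in the abstract
```

Note the tradeoff the paper describes: as n grows, the speedup bound approaches A while the efficiency bound falls toward zero, but their combination never drops to one.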


Performance Evaluation | 1986

A comparison of receiver-initiated and sender-initiated adaptive load sharing

Derek L. Eager; Edward D. Lazowska; John Zahorjan

Load sharing in a locally distributed system is the process of transparently distributing work submitted to the system by its users. By directing work away from nodes that are heavily loaded to nodes that are lightly loaded, system performance can be improved substantially. Adaptive load sharing policies make transfer decisions using information about the current system state. Control over the maintenance of this information and the initiation of load sharing actions may be centralized in a ‘server’ node or distributed among the system nodes participating in load sharing. The goal of this paper is to compare two strategies for adaptive load sharing with distributed control. In sender-initiated strategies, congested nodes search for lightly loaded nodes to which work may be transferred. In receiver-initiated strategies, the situation is reversed: lightly loaded nodes search for congested nodes from which work may be transferred. We show that sender-initiated strategies outperform receiver-initiated strategies at light to moderate system loads, and that receiver-initiated strategies are preferable at high system loads only if the costs of task transfer under the two strategies are comparable. (There are reasons to believe that the costs will be greater under receiver-initiated strategies, making sender-initiated strategies uniformly preferable.)
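The two probe directions the abstract contrasts can be sketched side by side. This is a toy, not the paper's queueing model; the threshold, probe count, and queue vectors below are illustrative assumptions.

```python
import random

# Toy contrast of the two strategies in the abstract.
# Sender-initiated: a congested node searches for a lightly loaded one.
# Receiver-initiated: an idle node searches for a congested one.

def sender_probe(queues, sender, threshold, probes, rng):
    """Congested `sender` looks for a node under `threshold` to send work to."""
    others = [i for i in range(len(queues)) if i != sender]
    for node in rng.sample(others, min(probes, len(others))):
        if queues[node] < threshold:
            return node              # transfer one waiting task here
    return None                      # every probed node was also busy

def receiver_probe(queues, receiver, threshold, probes, rng):
    """Idle `receiver` looks for a node over `threshold` to take work from."""
    others = [i for i in range(len(queues)) if i != receiver]
    for node in rng.sample(others, min(probes, len(others))):
        if queues[node] > threshold:
            return node              # take one task from this node
    return None

rng = random.Random(1)
light = [0, 1, 0, 1, 6]              # light load: the congested sender finds help
heavy = [6, 5, 7, 5, 0]              # heavy load: the idle receiver finds work
print(sender_probe(light, 4, threshold=2, probes=3, rng=rng))
print(receiver_probe(heavy, 4, threshold=2, probes=3, rng=rng))
```

The asymmetry the paper quantifies shows up even here: at heavy load, sender probes almost always fail (everyone is busy), while a single idle node's receiver probe almost always succeeds, and vice versa at light load.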


Symposium on Operating Systems Principles | 1989

The Amber system: parallel programming on a network of multiprocessors

Jeffrey S. Chase; Franz G. Amador; Edward D. Lazowska; Henry M. Levy; Richard J. Littlefield

This paper describes a programming system called Amber that permits a single application program to use a homogeneous network of computers in a uniform way, making the network appear to the application as an integrated multiprocessor. Amber is specifically designed for high performance in the case where each node in the network is a shared-memory multiprocessor. Amber shows that support for loosely-coupled multiprocessing can be efficiently realized using an object-based programming model. Amber programs execute in a uniform network-wide object space, with memory coherence maintained at the object level. Careful data placement and consistency control are essential for reducing communication overhead in a loosely-coupled system. Amber programmers use object migration primitives to control the location of data and processing.
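Amber itself was an object-based system for Firefly multiprocessors; the class and method names below are hypothetical, a minimal sketch of the idea the abstract describes: objects live in a network-wide space, coherence is maintained per object, and explicit migration primitives co-locate data with the processing that uses it.

```python
# Hypothetical sketch of a network-wide object space with migration
# primitives, in the spirit of the abstract. All names are invented.

class ObjectSpace:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.location = {}                   # object name -> node holding it

    def create(self, name, node):
        self.location[name] = node

    def migrate(self, name, node):
        """Move an object; the object is the unit of coherence and placement."""
        assert node in self.nodes
        self.location[name] = node

    def invoke(self, name, node):
        """Run a method at `node`, migrating the object there first."""
        if self.location[name] != node:
            self.migrate(name, node)         # placement controls communication
        return f"ran method on {name} at node {node}"

space = ObjectSpace(nodes=["A", "B"])
space.create("matrix", "A")
print(space.invoke("matrix", "B"))           # prints: ran method on matrix at node B
print(space.location["matrix"])              # prints: B
```

The design point this illustrates is the abstract's last sentence: the programmer, not the runtime, decides where data lives, which is what keeps communication overhead down in a loosely-coupled system.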


ACM Transactions on Computer Systems | 1990

Lightweight remote procedure call

Brian N. Bershad; Thomas E. Anderson; Edward D. Lazowska; Henry M. Levy

Lightweight Remote Procedure Call (LRPC) is a communication facility designed and optimized for communication between protection domains on the same machine. In contemporary small-kernel operating systems, existing RPC systems incur an unnecessarily high cost when used for the type of communication that predominates—between protection domains on the same machine. This cost leads system designers to coalesce weakly related subsystems into the same protection domain, trading safety for performance. By reducing the overhead of same-machine communication, LRPC encourages both safety and performance. LRPC combines the control transfer and communication model of capability systems with the programming semantics and large-grained protection model of RPC. LRPC achieves a factor-of-three performance improvement over more traditional approaches based on independent threads exchanging messages, reducing the cost of same-machine communication to nearly the lower bound imposed by conventional hardware. LRPC has been integrated into the Taos operating system of the DEC SRC Firefly multiprocessor workstation.


Software: Practice and Experience | 1988

PRESTO: a system for object-oriented parallel programming

Brian N. Bershad; Edward D. Lazowska; Henry M. Levy

PRESTO is a programming system for writing object-oriented parallel programs in a multiprocessor environment. PRESTO provides the programmer with a set of pre-defined object types that simplify the construction of parallel programs. Examples of PRESTO objects are threads, which provide fine-grained control over a program's execution, and synchronization objects, which allow simultaneously executing threads to coordinate their activities.
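PRESTO was a C++ system; the Python sketch below only illustrates the two object kinds the abstract names, thread objects for controlling execution and synchronization objects for coordination, using Python's standard threading module as a stand-in.

```python
import threading

# Stand-in for the abstract's two object kinds: Thread objects control
# execution; a Lock is the synchronization object that lets concurrently
# executing threads coordinate their updates to shared state.

counter = 0
lock = threading.Lock()                  # synchronization object

def worker(n):
    global counter
    for _ in range(n):
        with lock:                       # threads coordinate their activities
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()                            # explicit control over execution
for t in threads:
    t.join()
print(counter)                           # prints 4000: the lock kept updates safe
```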


IEEE Transactions on Parallel and Distributed Systems | 1993

Using processor-cache affinity information in shared-memory multiprocessor scheduling

Mark S. Squillante; Edward D. Lazowska

In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already reside in a particular processor's cache. The effects of this type of processor affinity are examined. It is observed that tasks continuously alternate between executing at a processor and releasing this processor due to I/O, synchronization, quantum expiration, or preemption. Queuing network models of different abstract scheduling policies are formulated, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via mean value analysis, where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to include the effects of an initial burst of cache misses experienced by tasks when they return to a processor for execution. A mean-value technique is also developed and used in the scheduling models to include the effects of increased bus traffic due to these bursts of cache misses. Only a small amount of affinity information needs to be maintained for each task. The importance of having a policy that adapts its behavior to changes in system load is demonstrated.
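The "small amount of affinity information" can be as little as the identity of the processor a task last ran on. The dispatcher below is an illustrative sketch (not one of the paper's modeled policies): it honors affinity unless the task's home processor is noticeably more loaded than the others, then falls back to the shortest run queue.

```python
# Illustrative affinity-aware dispatch: prefer the processor whose cache
# may still hold the task's data, unless that processor is overloaded.
# The tolerance of one extra queued task is an arbitrary assumption.

def pick_processor(task, last_ran_on, run_queues):
    """Return the index of the processor on which `task` should run next."""
    shortest = min(len(q) for q in run_queues)
    home = last_ran_on.get(task)             # the only affinity state kept
    if home is not None and len(run_queues[home]) <= shortest + 1:
        return home                          # affinity wins: warm cache likely
    # ignore affinity: balance load onto the shortest run queue
    return min(range(len(run_queues)), key=lambda p: len(run_queues[p]))

run_queues = [["a"], [], ["b"]]
last_ran_on = {"t1": 2}
print(pick_processor("t1", last_ran_on, run_queues))   # prints 2: mild imbalance tolerated
print(pick_processor("t9", last_ran_on, run_queues))   # prints 1: no history, shortest queue
```

The tolerance parameter is exactly where the paper's load-adaptivity result bites: how much imbalance to accept for a warm cache should depend on system load.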


Measurement and Modeling of Computer Systems | 1988

The limited performance benefits of migrating active processes for load sharing

Derek L. Eager; Edward D. Lazowska; John Zahorjan

Load sharing in a distributed system is the process of transparently sharing workload among the nodes in the system to achieve improved performance. In non-migratory load sharing, jobs may not be transferred once they have commenced execution. In load sharing with migration, on the other hand, jobs in execution may be interrupted, moved to other nodes, and then resumed. In this paper we examine the performance benefits offered by migratory load sharing beyond those offered by non-migratory load sharing. We show that while migratory load sharing can offer modest performance benefits under some fairly extreme conditions, there are no conditions under which migration yields major performance benefits.


IEEE Transactions on Computers | 1989

The performance implications of thread management alternatives for shared-memory multiprocessors

Thomas E. Anderson; Edward D. Lazowska; Henry M. Levy

An examination is made of the performance implications of several data structure and algorithm alternatives for thread management in shared-memory multiprocessors. Both experimental measurements and analytical model projections are presented. For applications with fine-grained parallelism, small differences in thread management are shown to have significant performance impact, often posing a tradeoff between throughput and latency. Per-processor data structures can be used to improve throughput, and in some circumstances to avoid locking, improving latency as well. The method used by processors to queue for locks is also shown to affect performance significantly. Normal methods of critical resource waiting can substantially degrade performance with moderate numbers of waiting processors. The authors present an Ethernet-style backoff algorithm that largely eliminates this effect.
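The Ethernet-style backoff idea is simple to sketch: a waiter that fails to acquire a busy lock sleeps for a randomized, geometrically growing interval before retrying, so waiting processors stop hammering the shared lock. The function below is an illustrative sketch only (Python's GIL means this is not a real spinlock, and the delay constants are assumptions).

```python
import random
import time

# Sketch of Ethernet-style backoff for lock acquisition: on each failed
# attempt, double the maximum wait (up to a cap) and sleep a random
# fraction of it, decorrelating the retries of competing waiters.

def acquire_with_backoff(try_lock, base=1e-6, cap=1e-3, rng=random):
    """Spin on `try_lock` with randomized exponential backoff.
    Returns the number of attempts it took to acquire the lock."""
    delay = base
    attempts = 1
    while not try_lock():                    # acquisition attempt failed
        time.sleep(rng.uniform(0, delay))    # randomized backoff interval
        delay = min(delay * 2, cap)          # double, up to a ceiling
        attempts += 1
    return attempts

lock = [False]
def try_lock():                              # toy test-and-set
    if lock[0]:
        return False
    lock[0] = True
    return True

print(acquire_with_backoff(try_lock))        # prints 1: the lock was free
```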


ACM Special Interest Group on Data Communication | 1993

Implementing network protocols at user level

Chandramohan A. Thekkath; Thu D. Nguyen; Evelyn Moy; Edward D. Lazowska

Traditionally, network software has been structured in a monolithic fashion, with all protocol stacks executing either within the kernel or in a single trusted user-level server. This organization is motivated by performance and security concerns. However, considerations of code maintenance, ease of debugging, customization, and the simultaneous existence of multiple protocols argue for separating the implementations into more manageable user-level libraries of protocols. This paper describes the design and implementation of transport protocols as user-level libraries. We begin by motivating the need for protocol implementations as user-level libraries and placing our approach in the context of previous work. We then describe our alternative to monolithic protocol organization, which has been implemented on Mach workstations connected not only to traditional Ethernet, but also to a more modern network, the DEC SRC ANI. Based on our experience, we discuss the implications for host-network interface design and for overall system structure to support efficient user-level implementations of network protocols.

Collaboration


Edward D. Lazowska's top co-authors and their affiliations:

John Zahorjan (University of Washington)
Henry M. Levy (University of Washington)
Derek L. Eager (Wisconsin Alumni Research Foundation)
Yi-Bing Lin (National Chiao Tung University)
Jan Sanislo (University of Washington)
Michael Rabinovich (Case Western Reserve University)