Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Derek L. Eager is active.

Publication


Featured research published by Derek L. Eager.


IEEE Transactions on Software Engineering | 1986

Adaptive load sharing in homogeneous distributed systems

Derek L. Eager; Edward D. Lazowska; John Zahorjan

Rather than proposing a specific load sharing policy for implementation, the authors address the more fundamental question of the appropriate level of complexity for load sharing policies. It is shown that extremely simple adaptive load sharing policies, which collect very small amounts of system state information and which use this information in very simple ways, yield dramatic performance improvements. These policies in fact yield performance close to that expected from more complex policies whose viability is questionable. It is concluded that simple policies offer the greatest promise in practice, because of their combination of nearly optimal performance and inherent stability.
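The "very small amounts of system state" point can be made concrete with a minimal sketch of a threshold-style, sender-initiated policy of the kind studied in this line of work; the function names, probe limit, threshold value, and transfer primitive below are illustrative assumptions, not details taken from the paper.

    import random

    # Sketch of a simple sender-initiated threshold policy (illustrative;
    # parameter values and helper names are assumptions, not the paper's).
    THRESHOLD = 2      # transfer only if the probed node's queue is shorter than this
    PROBE_LIMIT = 3    # give up after a few probes to keep overhead small

    def place_new_task(task, local_queue, nodes, queue_length):
        # Accept locally if this node is not congested.
        if len(local_queue) < THRESHOLD:
            local_queue.append(task)
            return
        # Probe a few randomly chosen nodes; transfer to the first lightly loaded one.
        for node in random.sample(nodes, min(PROBE_LIMIT, len(nodes))):
            if queue_length(node) < THRESHOLD:
                node.transfer(task)   # hypothetical transfer primitive
                return
        # No lightly loaded node found: process locally rather than keep probing.
        local_queue.append(task)

The only state exchanged is a queue length per probe, which is the sense in which such policies "collect very small amounts of system state information."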


IEEE Transactions on Computers | 1989

Speedup versus efficiency in parallel systems

Derek L. Eager; John Zahorjan; Edward D. Lazowska

The tradeoff between speedup and efficiency that is inherent to a software system is investigated. The extent to which this tradeoff is determined by the average parallelism of the software system, as contrasted with other, more detailed, characterizations, is shown. The extent to which both speedup and efficiency can simultaneously be poor is bounded: it is shown that for any software system and any number of processors, the sum of the average processor utilization (i.e. efficiency) and the attained fraction of the maximum possible speedup must exceed one. Bounds are given on speedup and efficiency, and on the incremental benefit and cost of allocating additional processors. An explicit formulation, as well as bounds, is given for the location of the knee of the execution time-efficiency profile, where the benefit per unit cost is maximized.
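The sum bound stated in the abstract can be written compactly; the notation below (n processors, speedup S(n), efficiency E(n), average parallelism A, which equals the maximum possible speedup) is ours, and the explicit speedup and efficiency bounds are the ones commonly attributed to this paper.

    E(n) + \frac{S(n)}{A} \ge 1

    S(n) \ge \frac{nA}{n + A - 1}, \qquad E(n) \ge \frac{A}{n + A - 1}

Adding the two lower bounds gives (n + A)/(n + A - 1) > 1, which recovers the statement that efficiency plus the attained fraction of maximum speedup must exceed one.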


Performance Evaluation | 1986

A comparison of receiver-initiated and sender-initiated adaptive load sharing

Derek L. Eager; Edward D. Lazowska; John Zahorjan

Load sharing in a locally distributed system is the process of transparently distributing work submitted to the system by its users. By directing work away from nodes that are heavily loaded to nodes that are lightly loaded, system performance can be improved substantially. Adaptive load sharing policies make transfer decisions using information about the current system state. Control over the maintenance of this information and the initiation of load sharing actions may be centralized in a ‘server’ node or distributed among the system nodes participating in load sharing. The goal of this paper is to compare two strategies for adaptive load sharing with distributed control. In sender-initiated strategies, congested nodes search for lightly loaded nodes to which work may be transferred. In receiver-initiated strategies, the situation is reversed: lightly loaded nodes search for congested nodes from which work may be transferred. We show that sender-initiated strategies outperform receiver-initiated strategies at light to moderate system loads, and that receiver-initiated strategies are preferable at high system loads only if the costs of task transfer under the two strategies are comparable. (There are reasons to believe that the costs will be greater under receiver-initiated strategies, making sender-initiated strategies uniformly preferable.)
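For contrast with the sender-initiated sketch shown earlier, a receiver-initiated policy reverses the direction of the search: an idle or lightly loaded node polls others for surplus work. Again, the names, thresholds, and transfer primitive are illustrative assumptions rather than the paper's specification.

    import random

    # Sketch of a receiver-initiated policy (illustrative; names and
    # thresholds are assumptions, not the paper's).
    LOW_WATERMARK = 1   # this node considers itself lightly loaded below this queue length
    PROBE_LIMIT = 3

    def poll_for_work(local_queue, nodes, queue_length):
        if len(local_queue) >= LOW_WATERMARK:
            return None                       # busy enough; no need to poll
        for node in random.sample(nodes, min(PROBE_LIMIT, len(nodes))):
            if queue_length(node) > LOW_WATERMARK:
                return node.request_task()    # hypothetical transfer primitive
        return None                           # nobody congested; remain idle

As the abstract notes, a task pulled by a receiver may already have begun executing at the sender, which is one reason transfer costs can be higher under receiver-initiated strategies.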


IEEE Transactions on Knowledge and Data Engineering | 2001

Minimizing bandwidth requirements for on-demand data delivery

Derek L. Eager; Mary K. Vernon; John Zahorjan

Two recent techniques for multicast or broadcast delivery of streaming media can provide immediate service to each client request, yet achieve considerable client stream sharing which leads to significant server and network bandwidth savings. The paper considers: 1) how well these recently proposed techniques perform relative to each other and 2) whether there are new practical delivery techniques that can achieve better bandwidth savings than the previous techniques over a wide range of client request rates. The principal results are as follows: First, the recent partitioned dynamic skyscraper technique is adapted to provide immediate service to each client request more simply and directly than the original dynamic skyscraper method. Second, at moderate to high client request rates, the dynamic skyscraper method has a required server bandwidth that is significantly lower than that of the recent optimized stream tapping/patching/controlled multicast technique. Third, the minimum required server bandwidth for any delivery technique that provides immediate real-time delivery to clients increases logarithmically (with constant factor equal to one) as a function of the client request arrival rate. Furthermore, it is (theoretically) possible to achieve very close to the minimum required server bandwidth if client receive bandwidth is equal to two times the data streaming rate and client storage capacity is sufficient for buffering data from shared streams. Finally, we propose a new practical delivery technique, called hierarchical multicast stream merging (HMSM), which has a required server bandwidth that is lower than the partitioned dynamic skyscraper and is reasonably close to the minimum achievable required server bandwidth over a wide range of client request rates.
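The logarithmic growth with constant factor one can be written out explicitly; the symbols below (Poisson client arrival rate λ, media duration T, bandwidth measured in units of the media play rate) are our notation for restating the bound described above, not a new result.

    B_{\min} = \int_0^T \frac{\lambda}{1 + \lambda x}\,dx = \ln(\lambda T + 1) = \ln(N + 1)

Here N = λT is the average number of requests arriving during one playback of the media, so the minimum required server bandwidth grows only logarithmically in the request rate.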


ACM Multimedia | 1999

Optimal and efficient merging schedules for video-on-demand servers

Derek L. Eager; Mary K. Vernon; John Zahorjan

The simplest video-on-demand (VOD) delivery policy is to allocate a new media delivery stream to each client request when it arrives. This policy has the desirable properties of “immediate service” (there is minimal latency between the client request and the start of playback, assuming that sufficient server bandwidth is available to start the new stream), of placing minimal demands on client capabilities (the client receive bandwidth required is the media playback rate, and no client local storage is required), and of being simple to implement. However, the policy is untenable because it requires server bandwidth that scales linearly with the number of clients that must be supported simultaneously, which is too expensive for many applications.
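A back-of-the-envelope comparison makes the linear scaling concrete; the numbers below are made up purely for illustration, and the logarithmic expression is the immediate-service lower bound discussed in the preceding entry.

    import math

    # Hypothetical numbers, chosen only to illustrate the scaling argument.
    play_rate_mbps = 2.0        # assumed media play rate
    arrival_rate = 1.0          # client requests per second (assumed)
    duration_s = 3600.0         # media length in seconds (assumed)

    n = arrival_rate * duration_s          # mean concurrent clients under the naive policy
    unicast_bw = n * play_rate_mbps        # one stream per client: linear in n
    lower_bound = math.log(n + 1) * play_rate_mbps   # immediate-service lower bound: logarithmic

    print(f"naive unicast: {unicast_bw:.0f} Mb/s, lower bound: {lower_bound:.1f} Mb/s")
    # => naive unicast: 7200 Mb/s, lower bound: 16.4 Mb/s

The gap between the two figures is what stream-merging policies such as those studied in this paper aim to close.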


Network and Operating System Support for Digital Audio and Video | 2001

Analysis of educational media server workloads

Jussara M. Almeida; Jeffrey Krueger; Derek L. Eager; Mary K. Vernon

This paper presents an extensive analysis of the client workloads for educational media servers at two major U.S. universities. The goals of the analysis include providing data for generating synthetic workloads, gaining insight into the design of streaming content distribution networks, and quantifying how much server bandwidth can be saved in interactive educational environments by using recently developed multicast streaming methods for stored content.


Measurement and Modeling of Computer Systems | 1988

The limited performance benefits of migrating active processes for load sharing

Derek L. Eager; Edward D. Lazowska; John Zahorjan

Load sharing in a distributed system is the process of transparently sharing workload among the nodes in the system to achieve improved performance. In non-migratory load sharing, jobs may not be transferred once they have commenced execution. In load sharing with migration, on the other hand, jobs in execution may be interrupted, moved to other nodes, and then resumed. In this paper we examine the performance benefits offered by migratory load sharing beyond those offered by non-migratory load sharing. We show that while migratory load sharing can offer modest performance benefits under some fairly extreme conditions, there are no conditions under which migration yields major performance benefits.


Measurement and Modeling of Computer Systems | 1988

Scheduling in multiprogrammed parallel systems

Shikharesh Majumdar; Derek L. Eager; Richard B. Bunt

Processor scheduling on multiprocessor systems that simultaneously run concurrent applications is currently not well-understood. This paper reports a preliminary investigation of a number of fundamental issues which are important in the context of scheduling concurrent jobs on multiprogrammed parallel systems. The major motivation for this research is to gain insight into system behaviour and understand the basic principles underlying the performance of scheduling strategies in such parallel systems. Based on abstract models of systems and scheduling disciplines, several high level issues that are important in this context have been analysed.


IEEE Network | 2000

Traffic analysis of a Web proxy caching hierarchy

Anirban Mahanti; Carey L. Williamson; Derek L. Eager

Understanding Web traffic characteristics is key to improving the performance and scalability of the Web. In this article Web proxy workloads from different levels of a caching hierarchy are used to understand how the workload characteristics change across different levels of a caching hierarchy. The main observations of this study are that HTML and image documents account for 95 percent of the documents seen in the workload; the distribution of transfer sizes of documents is heavy-tailed, with the tails becoming heavier as one moves up the caching hierarchy; the popularity profile of documents does not precisely follow the Zipf distribution; one-timers account for approximately 70 percent of the documents referenced; concentration of references is less at proxy caches than at servers, and concentration of references diminishes as one moves up the caching hierarchy; and the modification rate is higher at higher-level proxies.
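The observation that document popularity does not precisely follow the Zipf distribution can be checked with a standard rank-frequency fit; the sketch below is a generic illustration of that check, not the paper's methodology, and the function name and synthetic data are assumptions.

    import math

    # Generic Zipf-likeness check (illustrative; not the paper's code).
    # Fit a line to log(frequency) vs log(rank); a slope near -1
    # indicates a classic Zipf popularity profile.
    def zipf_slope(reference_counts):
        counts = sorted(reference_counts, reverse=True)
        xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
        ys = [math.log(c) for c in counts]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        return cov / var

    # Synthetic counts proportional to 1/rank give a slope close to -1.
    print(zipf_slope([1000 // r for r in range(1, 101)]))

Deviations of the fitted slope from -1, or systematic curvature in the log-log plot, are the usual signs that a workload is only approximately Zipf-like.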


IEEE/ACM Transactions on Networking | 2003

Scalable on-demand media streaming with packet loss recovery

Anirban Mahanti; Derek L. Eager; Mary K. Vernon; David Sundaram-Stukel

Previous scalable on-demand streaming protocols do not allow clients to recover from packet loss. This paper develops new protocols that: (1) have a tunably short latency for the client to begin playing the media; (2) allow heterogeneous clients to recover lost packets without jitter as long as each client's cumulative loss rate is within a tunable threshold; and (3) assume a tunable upper bound on the transmission rate to each client that can be as small as a fraction (e.g., 25%) greater than the media play rate. Models are developed to compute the minimum required server bandwidth for a given loss rate and playback latency. The results of the models are used to develop the new protocols and assess their performance. The new protocols, Reliable Periodic Broadcast and Reliable Bandwidth Skimming, are simple to implement and achieve nearly the best possible scalability and efficiency for a given set of client characteristics and desirable/feasible media quality. Furthermore, the results show that the new reliable protocols that transmit to each client at only twice the media play rate have similar performance to previous protocols that require clients to receive at many times the play rate.

Collaboration


Dive into Derek L. Eager's collaborations.

Top Co-Authors

Mary K. Vernon, University of Wisconsin-Madison
John Zahorjan, University of Washington
Tim Brecht, University of Waterloo
Jim Summers, University of Waterloo