Alan J. Demers
PARC
Publications
Featured research published by Alan J. Demers.
acm special interest group on data communication | 1989
Alan J. Demers; Srinivasan Keshav; Scott Shenker
We discuss gateway queueing algorithms and their role in controlling congestion in datagram networks. A fair queueing algorithm, based on an earlier suggestion by Nagle, is proposed. Analysis and simulations are used to compare this algorithm to other congestion control schemes. We find that fair queueing provides several important advantages over the usual first-come-first-serve queueing algorithm: fair allocation of bandwidth, lower delay for sources using less than their full share of bandwidth, and protection from ill-behaved sources.
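To make the queueing discipline concrete, here is a minimal Python sketch of per-flow fair queueing using virtual finish times. It simplifies the paper's round-by-round bid computation, and the Packet and FairQueue names are illustrative, not taken from the original gateway code.

```python
# A minimal sketch of per-flow fair queueing based on virtual finish times.
# This simplifies the paper's round-robin bid computation; the names and the
# virtual-time approximation are illustrative assumptions.
import heapq
from dataclasses import dataclass
from itertools import count

@dataclass
class Packet:
    flow: str
    size: int          # in arbitrary service units (e.g., bytes)

class FairQueue:
    def __init__(self):
        self.virtual_time = 0.0        # crude stand-in for the per-round counter
        self.last_finish = {}          # flow -> finish tag of its latest packet
        self.heap = []                 # (finish_tag, seq, packet)
        self.seq = count()

    def enqueue(self, pkt: Packet):
        # A packet continues its flow's previous packet if that one is still
        # "in progress", otherwise it starts at the current virtual time.
        start = max(self.virtual_time, self.last_finish.get(pkt.flow, 0.0))
        finish = start + pkt.size
        self.last_finish[pkt.flow] = finish
        heapq.heappush(self.heap, (finish, next(self.seq), pkt))

    def dequeue(self):
        # Serve the packet with the smallest finish tag and advance virtual time.
        if not self.heap:
            return None
        finish, _, pkt = heapq.heappop(self.heap)
        self.virtual_time = finish
        return pkt

q = FairQueue()
for p in [Packet("A", 1000), Packet("A", 1000), Packet("B", 200), Packet("B", 200)]:
    q.enqueue(p)
order = []
while (p := q.dequeue()):
    order.append(p.flow)
print(order)   # flow B's small packets are not stuck behind flow A's large ones
```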
acm special interest group on data communication | 1994
Vaduvur Bharghavan; Alan J. Demers; Scott Shenker; Lixia Zhang
In recent years, a wide variety of mobile computing devices has emerged, including portables, palmtops, and personal digital assistants. Providing adequate network connectivity for these devices will require a new generation of wireless LAN technology. In this paper we study media access protocols for a single channel wireless LAN being developed at Xerox Corporation's Palo Alto Research Center. We start with the MACA media access protocol first proposed by Karn [9] and later refined by Biba [3], which uses an RTS-CTS-DATA packet exchange and binary exponential back-off. Using packet-level simulations, we examine various performance and design issues in such protocols. Our analysis leads to a new protocol, MACAW, which uses an RTS-CTS-DS-DATA-ACK message exchange and includes a significantly different backoff algorithm.
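Below is a toy sketch of the RTS-CTS-DS-DATA-ACK exchange together with a gentler-than-exponential backoff in the spirit of MACAW's multiplicative-increase, linear-decrease rule. The Channel abstraction, loss model, and window constants are assumptions made for illustration, not details from the paper.

```python
# A toy sketch of a MACAW-style sender: RTS-CTS-DS-DATA-ACK handshake plus a
# gentler backoff than binary exponential (multiplicative increase on failure,
# linear decrease on success). The Channel class is hypothetical; real MACAW
# runs over a shared radio, not function calls.
import random

BACKOFF_MIN, BACKOFF_MAX = 2, 64

class Channel:
    """Stand-in for the shared medium: randomly loses exchanges."""
    def __init__(self, loss=0.3):
        self.loss = loss
    def exchange(self, frame):
        return random.random() > self.loss   # True = the peer's reply arrived

def send(channel, payload, backoff=BACKOFF_MIN):
    while True:
        # Wait a random number of slots drawn from the current backoff window.
        slots = random.randint(1, backoff)
        if channel.exchange("RTS") and channel.exchange("CTS"):
            channel.exchange("DS")                 # tell overhearers DATA follows
            channel.exchange("DATA:" + payload)
            if channel.exchange("ACK"):
                # Success: shrink the window linearly, then return.
                return max(BACKOFF_MIN, backoff - 1), slots
        # RTS/CTS or ACK failed: grow the window multiplicatively.
        backoff = min(BACKOFF_MAX, int(backoff * 1.5) + 1)

random.seed(1)
backoff, waited = send(Channel(loss=0.3), "hello")
print("final backoff window:", backoff, "slots waited on last attempt:", waited)
```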
principles of distributed computing | 1987
Alan J. Demers; Daniel H. Greene; Carl H. Hauser; Wes Irish; John Larson; Scott Shenker; Howard E. Sturgis; Daniel C. Swinehart; Douglas B. Terry
When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency.
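The simplest of the randomized algorithms described here is anti-entropy: sites periodically pick random partners and reconcile their databases. A minimal push-pull sketch, assuming a per-key timestamp and a dict-based database (both illustrative), might look like this:

```python
# A minimal sketch of push-pull anti-entropy: each site periodically picks a
# random partner and the two resolve their databases to the per-key entry with
# the newest timestamp. Site names and the (value, timestamp) encoding are
# illustrative assumptions.
import random

def anti_entropy_round(sites):
    """One round: every site reconciles with one randomly chosen partner."""
    names = list(sites)
    for a in names:
        b = random.choice([n for n in names if n != a])
        merged = dict(sites[a])
        for key, (val, ts) in sites[b].items():
            if key not in merged or ts > merged[key][1]:
                merged[key] = (val, ts)
        sites[a] = merged                 # push-pull: both ends converge
        sites[b] = dict(merged)

random.seed(0)
# Three replicas, one of which has seen a newer update to "x".
sites = {
    "s1": {"x": ("old", 1)},
    "s2": {"x": ("old", 1)},
    "s3": {"x": ("new", 2)},
}
rounds = 0
while any(db["x"][1] < 2 for db in sites.values()):
    anti_entropy_round(sites)
    rounds += 1
print(f"all replicas saw the update after {rounds} round(s)")
```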
foundations of computer science | 1995
F. Frances Yao; Alan J. Demers; Scott Shenker
The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p, where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.
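The on-line Average Rate heuristic mentioned in the abstract can be stated compactly: each job contributes a constant density (its work divided by the length of its arrival-to-deadline window) while it is active, and the processor runs at the sum of the active densities. A sketch follows, with an illustrative numerical energy integration; the Job layout, helper names, and example jobs are assumptions.

```python
# A sketch of the on-line Average Rate heuristic (not the optimal off-line
# algorithm): each job (arrival a, deadline b, work R) contributes density
# R / (b - a) while active, and the speed is the sum of active densities.
from collections import namedtuple

Job = namedtuple("Job", "arrival deadline work")

def avr_speed(jobs, t):
    """Processor speed chosen by the Average Rate heuristic at time t."""
    return sum(j.work / (j.deadline - j.arrival)
               for j in jobs if j.arrival <= t < j.deadline)

def energy(jobs, p=2, dt=0.01, horizon=10.0):
    """Approximate energy with power P(s) = s**p, integrated numerically."""
    steps = int(horizon / dt)
    return sum(avr_speed(jobs, i * dt) ** p * dt for i in range(steps))

jobs = [Job(0.0, 4.0, 4.0),    # density 1.0 on [0, 4)
        Job(2.0, 6.0, 2.0)]    # density 0.5 on [2, 6)
print("speed at t=1:", avr_speed(jobs, 1.0))
print("speed at t=3:", avr_speed(jobs, 3.0))
print("approx energy (p=2):", round(energy(jobs), 2))
```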
operating systems design and implementation | 1994
Mark Weiser; Brent B. Welch; Alan J. Demers; Scott Shenker
The energy usage of computer systems is becoming more important, especially for battery-operated systems. Displays, disks, and CPUs, in that order, use the most energy. Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the CPU. We introduce a new metric for CPU energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994]. What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance.
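As a rough illustration of the class of schedulers examined here, the sketch below implements a generic interval-based speed-setting heuristic: after each interval it raises or lowers the clock according to how busy the CPU was. The thresholds, step size, and toy trace are arbitrary assumptions, not the paper's measured algorithms.

```python
# A sketch of an interval-based clock-speed heuristic: after each interval,
# look at the busy fraction and nudge the speed so the next interval is
# neither saturated nor mostly idle. All constants are illustrative.
MIN_SPEED, MAX_SPEED = 0.2, 1.0   # fraction of full clock rate

def next_speed(speed, utilization, raise_at=0.9, lower_at=0.5, step=0.2):
    """utilization is the busy fraction of the last interval at `speed`."""
    if utilization > raise_at:
        speed = min(MAX_SPEED, speed + step)   # falling behind: speed up
    elif utilization < lower_at:
        speed = max(MIN_SPEED, speed - step)   # mostly idle: slow down, save energy
    return speed

# Replay a toy trace of per-interval work (in units of full-speed intervals).
trace = [0.9, 0.9, 0.3, 0.1, 0.1, 0.8, 0.8]
speed = 1.0
for work in trace:
    utilization = min(1.0, work / speed)
    print(f"work={work:.1f} speed={speed:.1f} util={utilization:.2f}")
    speed = next_speed(speed, utilization)
```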
symposium on operating systems principles | 1997
Karin Petersen; Mike Spreitzer; Douglas B. Terry; Marvin M. Theimer; Alan J. Demers
Bayou's anti-entropy protocol for update propagation between weakly consistent storage replicas is based on pair-wise communication, the propagation of write operations, and a set of ordering and closure constraints on the propagation of the writes. The simplicity of the design makes the protocol very flexible, thereby providing support for diverse networking environments and usage scenarios. It accommodates a variety of policies for when and where to propagate updates. It operates over diverse network topologies, including low-bandwidth links. It is incremental. It enables replica convergence, and updates can be propagated using floppy disks and similar transportable media. Moreover, the protocol handles replica creation and retirement in a light-weight manner. Each of these features is enabled by only one or two of the protocol's design choices, and can be independently incorporated in other systems. This paper presents the anti-entropy protocol in detail, describing the design decisions and resulting features.
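A minimal sketch of the pair-wise, write-log-based propagation described here, assuming per-server accept clocks and a version vector of the highest clocks seen; the Replica and Write classes are illustrative, and canonical reordering and conflict handling are omitted.

```python
# A minimal sketch of pair-wise anti-entropy over a write log: the sender walks
# its log in order and sends every write whose (server, clock) stamp the
# receiver's version vector has not yet covered. Classes are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Write:
    server: str
    clock: int
    op: str

@dataclass
class Replica:
    name: str
    log: list = field(default_factory=list)         # writes in accept order
    version: dict = field(default_factory=dict)     # server -> highest clock seen

    def accept(self, op):
        clock = self.version.get(self.name, 0) + 1
        self.apply(Write(self.name, clock, op))

    def apply(self, w):
        self.log.append(w)
        self.version[w.server] = max(self.version.get(w.server, 0), w.clock)

def anti_entropy(sender: Replica, receiver: Replica):
    """Propagate, in log order, exactly the writes the receiver is missing."""
    for w in sender.log:
        if w.clock > receiver.version.get(w.server, 0):
            receiver.apply(w)

a, b = Replica("A"), Replica("B")
a.accept("insert row 1"); a.accept("insert row 2")
b.accept("insert row 3")
anti_entropy(a, b)          # B learns A's two writes, in order
anti_entropy(b, a)          # A learns B's write
print([w.op for w in a.log], [w.op for w in b.log])
```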
international conference on parallel and distributed information systems | 1994
Douglas B. Terry; Alan J. Demers; Karin Petersen; Mike Spreitzer; Marvin M. Theimer; Brent B. Welch
Four per-session guarantees are proposed to aid users and applications of weakly consistent replicated data: read your writes, monotonic reads, writes follow reads, and monotonic writes. The intent is to present individual applications with a view of the database that is consistent with their own actions, even if they read and write from various, potentially inconsistent servers. The guarantees can be layered on existing systems that employ a read-any/write-any replication scheme while retaining the principal benefits of such a scheme, namely high availability, simplicity, scalability, and support for disconnected operation. These session guarantees were developed in the context of the Bayou project at Xerox PARC in which we are designing and building a replicated storage system to support the needs of mobile computing users who may be only intermittently connected.
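One of the four guarantees, read your writes, can be sketched by having the session track the write-ids it has issued and refuse to read from a server that has not yet seen them. The Server and Session classes below are illustrative assumptions, not the Bayou implementation.

```python
# A minimal sketch of the read-your-writes session guarantee: the session keeps
# the set of write-ids it has issued and only reads from a server whose state
# already covers that set. Classes and the id scheme are illustrative.
import itertools

_ids = itertools.count(1)

class Server:
    def __init__(self):
        self.seen = set()      # write-ids this replica has applied
        self.data = {}
    def write(self, key, value):
        wid = next(_ids)
        self.seen.add(wid)
        self.data[key] = value
        return wid
    def read(self, key):
        return self.data.get(key)

class Session:
    """Tracks the writes this client has made (its write-set)."""
    def __init__(self):
        self.write_set = set()
    def write(self, server, key, value):
        self.write_set.add(server.write(key, value))
    def read_your_writes(self, server, key):
        # Guarantee check: refuse servers that have not yet seen our writes.
        if not self.write_set <= server.seen:
            raise RuntimeError("server too stale for this session's writes")
        return server.read(key)

primary, stale = Server(), Server()
session = Session()
session.write(primary, "x", 42)
print(session.read_your_writes(primary, "x"))    # 42: primary has our write
try:
    session.read_your_writes(stale, "x")         # stale replica never saw it
except RuntimeError as e:
    print("rejected:", e)
```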
workshop on mobile computing systems and applications | 1994
Alan J. Demers; Karin Petersen; Mike Spreitzer; Douglas B. Terry; Marvin M. Theimer; Brent B. Welch
The Bayou System is a platform of replicated, highly-available, variable-consistency, mobile databases on which to build collaborative applications. This paper presents the preliminary system architecture along with the design goals that influenced it. We take a fresh, bottom-up and critical look at the requirements of mobile computing applications and carefully pull together both new and existing techniques into an overall architecture that meets these requirements. Our emphasis is on supporting application-specific conflict detection and resolution and on providing application-controlled inconsistency.
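As a rough illustration of application-specific conflict handling, the sketch below pairs each write with an application-supplied dependency check and merge procedure; the calendar example and function names are hypothetical.

```python
# A toy sketch of application-supplied conflict handling: each write carries a
# dependency check (does the database still look the way the application
# expected?) and a merge procedure to run if it does not. Names are illustrative.
def write_with_merge(db, update, dep_check, merge_proc):
    """Apply `update` if `dep_check` still holds, else let `merge_proc` decide."""
    if dep_check(db):
        update(db)
        return "applied"
    merge_proc(db)
    return "merged"

# Example: book a meeting room, falling back to an alternate slot on conflict.
calendar = {"room-A 10:00": "taken"}

result = write_with_merge(
    calendar,
    update=lambda db: db.update({"room-A 10:00": "demers"}),
    dep_check=lambda db: db.get("room-A 10:00") is None,          # slot still free?
    merge_proc=lambda db: db.update({"room-A 11:00": "demers"}),  # app-specific fallback
)
print(result, calendar)
```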
programming language design and implementation | 1991
Hans-Juergen Boehm; Alan J. Demers; Scott Shenker
We present a method for adapting garbage collectors designed to run sequentially with the client, so that they may run concurrently with it. We rely on virtual memory hardware to provide information about pages that have been updated or “dirtied” during a given period of time. This method has been used to construct a mostly parallel trace-and-sweep collector that exhibits very short pause times. Performance measurements are given.
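A compact sketch of the mostly-parallel idea: trace while the client runs, record what the client dirties, then briefly stop the world to re-trace from the dirty objects. Real collectors use virtual-memory dirty bits at page granularity; the explicit dirty set and Heap class here are simplifications for illustration.

```python
# A sketch of mostly-parallel collection: a concurrent trace, a record of
# client mutations, and a short stop-the-world re-scan of dirty objects.
class Heap:
    def __init__(self):
        self.objects = {}      # object id -> list of ids it references
        self.roots = set()
        self.dirty = set()     # ids mutated since the concurrent trace began

    def write(self, obj, refs):
        # Client-side write barrier: record the mutation and mark the object dirty.
        self.objects[obj] = list(refs)
        self.dirty.add(obj)

    def trace(self, start, marked):
        stack = [o for o in start if o not in marked]
        while stack:
            o = stack.pop()
            if o in marked:
                continue
            marked.add(o)
            stack.extend(self.objects.get(o, []))

def mostly_parallel_collect(heap, client_action=lambda: None):
    marked = set()
    heap.dirty.clear()
    heap.trace(heap.roots, marked)            # phase 1: trace while the client runs
    client_action()                           # stand-in for client work during the trace
    # phase 2 (short stop-the-world): re-scan reachable objects the client dirtied
    rescan = {child for o in heap.dirty if o in marked
              for child in heap.objects.get(o, [])}
    heap.trace(heap.roots | rescan, marked)
    return {o for o in heap.objects if o not in marked}   # unreachable: sweep these

heap = Heap()
heap.objects = {"a": ["b"], "b": [], "c": []}
heap.roots = {"a"}
garbage = mostly_parallel_collect(
    heap, client_action=lambda: heap.write("a", ["b", "c"]))  # client links c mid-trace
print("garbage:", garbage)   # empty: the dirty re-scan keeps 'c' alive
```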
workshop on hot topics in operating systems | 1993
Marvin M. Theimer; Alan J. Demers; Brent B. Welch