Publication


Featured research published by Daniel Dominic Sleator.


Communications of the ACM | 1985

Amortized efficiency of list update and paging rules

Daniel Dominic Sleator; Robert Endre Tarjan

In this article we study the amortized efficiency of the “move-to-front” and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes θ(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the “least recently used” (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance.
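
For a concrete picture of the rule being analyzed, here is a minimal move-to-front sketch in Python. It is illustrative only; the class name and interface are invented for this note, not taken from the paper.

```python
# Move-to-front list update: accessing the i-th element costs about i
# (a linear scan from the front), and the accessed element then moves
# to the front, so recently accessed items become cheap.

class MoveToFrontList:
    def __init__(self, items):
        self.items = list(items)

    def access(self, x):
        """Return the 1-based position paid to reach x, then move x to the front."""
        i = self.items.index(x)                 # linear scan: cost proportional to i+1
        self.items.insert(0, self.items.pop(i))
        return i + 1

lst = MoveToFrontList("abcde")
print(lst.access("d"), lst.items)   # 4 ['d', 'a', 'b', 'c', 'e']
print(lst.access("d"), lst.items)   # 1 ['d', 'a', 'b', 'c', 'e']  (repeat access is cheap)
```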


Journal of the ACM | 1985

Self-adjusting binary search trees

Daniel Dominic Sleator; Robert Endre Tarjan

The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n-node splay tree, all the standard search tree operations have an amortized time bound of O(log n) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link/cut trees.
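
The sketch below shows the splaying heuristic itself: rotations combined into zig, zig-zig, and zig-zag steps that bring an accessed key to the root. This recursive form is a common textbook rendering, not the authors' code, and only search-with-splaying is shown.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Restructure so the node holding key (or the last node on the search path)
    becomes the root, using zig, zig-zig and zig-zag steps."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                              # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                            # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)    # final zig
    else:
        if root.right is None:
            return root
        if key > root.right.key:                             # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                           # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)    # final zig

# accessing a deep key splays it to the root
root = Node(5)
root.left, root.right = Node(3), Node(8)
root.left.left = Node(1)
root = splay(root, 1)
print(root.key)   # 1
```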


Symposium on the Theory of Computing | 1986

Making data structures persistent

James R. Driscoll; Neil Sarnak; Daniel Dominic Sleator; Robert Endre Tarjan

This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version available for use. In contrast, a persistent structure allows access to any version, old or new, at any time. We develop simple, systematic, and efficient techniques for making linked data structures persistent. We use our techniques to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
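
To illustrate what persistence means here, the sketch below uses path copying on an unbalanced binary search tree: an insertion copies only the nodes on the search path, so every earlier version remains intact and queryable. Path copying costs space proportional to the path length per update; the O(1) space bound quoted in the abstract comes from the paper's more space-efficient techniques, which are not shown.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(root: Optional[Node], key: int) -> Node:
    """Return the root of a new version; the old version is untouched."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root                       # key already present: reuse the old version

def contains(root: Optional[Node], key: int) -> bool:
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

v0 = None
v1 = insert(v0, 2)
v2 = insert(v1, 7)
print(contains(v1, 7), contains(v2, 7))   # False True -- both versions stay accessible
```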


Communications of the ACM | 1986

A locally adaptive data compression scheme

Jon Louis Bentley; Daniel Dominic Sleator; Robert Endre Tarjan; Victor K. Wei

A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. The scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. We prove that it never performs much worse than Huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of Huffman coding. Our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static Huffman coding takes two passes).
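
The self-organizing-search half of the scheme is the move-to-front transform. The sketch below shows it mapping recently used symbols to small integers, which a variable-length integer code (omitted here) can then encode in few bits. The function names are illustrative, not from the paper.

```python
def mtf_encode(text, alphabet):
    """Replace each symbol by its current position in a self-organizing list."""
    table = list(alphabet)
    out = []
    for ch in text:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))   # move the symbol to the front
    return out

def mtf_decode(codes, alphabet):
    """Invert mtf_encode by replaying the same list updates."""
    table = list(alphabet)
    out = []
    for i in codes:
        ch = table[i]
        out.append(ch)
        table.insert(0, table.pop(i))
    return "".join(out)

codes = mtf_encode("aaabbbaaac", "abc")
print(codes)                      # [0, 0, 0, 1, 0, 0, 1, 0, 0, 2]  -- mostly small integers
print(mtf_decode(codes, "abc"))   # 'aaabbbaaac'
```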


Journal of Algorithms | 1991

Competitive paging algorithms

Amos Fiat; Richard M. Karp; Michael Luby; Lyle A. McGeoch; Daniel Dominic Sleator; Neal E. Young

The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the marking algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of 2H_k of optimum, where H_k is the kth harmonic number, which is roughly ln k. The best such factor that can be achieved is H_k. This is in contrast to deterministic algorithms, which cannot be guaranteed to be within a factor smaller than k of optimum. An alternative to comparing an on-line algorithm with the optimum off-line algorithm is the idea of comparing it to several other on-line algorithms. We have obtained results along these lines for the paging problem.
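
The sketch below follows a common textbook statement of the marking algorithm: requested pages are marked, a fault evicts a uniformly random unmarked page, and when every cached page is marked a new phase begins by clearing all marks. Treat it as an illustration rather than the authors' exact formulation.

```python
import random

def marking_paging(requests, k, seed=0):
    """Count page faults of the randomized marking algorithm on a request sequence."""
    rng = random.Random(seed)
    cache, marked, faults = set(), set(), 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:                 # must evict
                if not cache - marked:          # every cached page is marked: new phase
                    marked.clear()
                victim = rng.choice(sorted(cache - marked))
                cache.remove(victim)
            cache.add(page)
        marked.add(page)                        # the requested page is always marked
    return faults

print(marking_paging([1, 2, 3, 1, 4, 1, 2, 5, 1, 2], k=3))   # number of faults
```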


Journal of Algorithms | 1990

Competitive algorithms for server problems

Mark S. Manasse; Lyle A. McGeoch; Daniel Dominic Sleator

The k-server problem is that of planning the motion of k mobile servers on the vertices of a graph under a sequence of requests for service. Each request consists of the name of a vertex, and is satisfied by placing a server at the requested vertex. The requests must be satisfied in their order of occurrence. The cost of satisfying a sequence of requests is the distance moved by the servers. In this paper we study on-line algorithms for this problem from the competitive point of view. That is, we seek to develop on-line algorithms whose performance on any sequence of requests is as close as possible to the performance of the optimum off-line algorithm. We obtain optimally competitive algorithms for several important cases. Because of the flexibility in choosing the distances in the graph and the number of servers, the k-server problem can be used to model a number of important paging and caching problems. It can also be used as a building block for solving more general problems. We show how server algorithms can be used to solve a seemingly more general class of problems known as task systems.
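
The sketch below only illustrates the k-server cost model, here with servers on the real line and a naive nearest-server rule. It is not one of the paper's competitive algorithms; on the contrary, the example shows how such a rule can be forced to pay ever-growing cost on repeated requests.

```python
def greedy_k_server(start_positions, requests):
    """Serve each request with the nearest server (points on the real line),
    paying the distance it moves; return the total on-line cost."""
    servers = list(start_positions)
    total = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)
        servers[i] = r
    return total

# Servers start at 0 and 100; requests alternate between 1 and 2, so the greedy
# rule drags one server back and forth forever, while stationing a server at each
# of 1 and 2 would bound the total cost once and for all.
print(greedy_k_server([0, 100], [1, 2] * 1000))   # 2000.0, growing with the sequence
```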


Symposium on the Theory of Computing | 1987

Two algorithms for maintaining order in a list

Paul F. Dietz; Daniel Dominic Sleator

The order maintenance problem is that of maintaining a list under a sequence of Insert and Delete operations, while answering Order queries (determine which of two elements comes first in the list). We give two new algorithms for this problem. The first algorithm matches the O(1) amortized time per operation of the best previously known algorithm, and is much simpler. The second algorithm permits all operations to be performed in O(1) worst-case time.
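
To make the interface concrete, the sketch below keeps an integer label per element, answers Order queries by comparing labels, and relabels everything when a gap between labels closes. This naive relabeling is far from the paper's O(1) bounds, and Delete is omitted; the class and its method names are invented for this illustration.

```python
GAP = 1 << 20

class OrderList:
    def __init__(self):
        self.labels = {}        # element -> integer label
        self.sequence = []      # elements in list order (used only for relabeling)

    def insert_after(self, prev, x):
        """Insert x immediately after prev (prev=None means the front of the list)."""
        i = 0 if prev is None else self.sequence.index(prev) + 1
        self.sequence.insert(i, x)
        lo = self.labels[prev] if prev is not None else 0
        hi = (self.labels[self.sequence[i + 1]]
              if i + 1 < len(self.sequence) else lo + 2 * GAP)
        if hi - lo < 2:         # no room left: relabel everything with fresh gaps
            for j, e in enumerate(self.sequence):
                self.labels[e] = (j + 1) * GAP
        else:
            self.labels[x] = (lo + hi) // 2

    def order(self, a, b):
        """True if a comes before b in the list (a single comparison)."""
        return self.labels[a] < self.labels[b]

L = OrderList()
L.insert_after(None, "x")
L.insert_after("x", "z")
L.insert_after("x", "y")
print(L.order("x", "y"), L.order("y", "z"), L.order("z", "x"))   # True True False
```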


Symposium on the Theory of Computing | 1988

Competitive algorithms for on-line problems

Mark S. Manasse; Lyle A. McGeoch; Daniel Dominic Sleator

An on-line problem is one in which an algorithm must handle a sequence of requests, satisfying each request without knowledge of the future requests. Examples of on-line problems include scheduling the motion of elevators, finding routes in networks, allocating cache memory, and maintaining dynamic data structures. A competitive algorithm for an on-line problem has the property that its performance on any sequence of requests is within a constant factor of the performance of any other algorithm on the same sequence. This paper presents several general results concerning competitive algorithms, as well as results on specific on-line problems.
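
The "within a constant factor" property is conventionally formalized as c-competitiveness; the statement below is the standard textbook definition, not a quotation from the paper.

```latex
% An on-line algorithm A is c-competitive if there is a constant a such that
\[
  \mathrm{cost}_A(\sigma) \;\le\; c \cdot \mathrm{cost}_{\mathrm{OPT}}(\sigma) + a
  \qquad \text{for every request sequence } \sigma ,
\]
% where OPT denotes an optimal off-line algorithm that sees all of sigma in advance.
```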


Algorithmica | 1991

A strongly competitive randomized paging algorithm

Lyle A. McGeoch; Daniel Dominic Sleator

The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the partitioning algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of H_k of optimum. (H_k is the kth harmonic number, which is about ln(k).) No on-line algorithm can perform better by this measure. Our result improves by a factor of two the best previous algorithm.
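
For reference, the harmonic number in the bound is the standard quantity below.

```latex
\[
  H_k \;=\; \sum_{i=1}^{k} \frac{1}{i} \;=\; \ln k + O(1),
\]
% so the partitioning algorithm is within roughly a ln k factor of the
% off-line optimum, and no on-line algorithm can do better by this measure.
```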


Algorithmica | 1986

The pairing heap: a new form of self-adjusting heap

Michael L. Fredman; Robert Sedgewick; Daniel Dominic Sleator; Robert Endre Tarjan

Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue) called the Fibonacci heap. Although theoretically efficient, Fibonacci heaps are complicated to implement and not as fast in practice as other kinds of heaps. In this paper we describe a new form of heap, called the pairing heap, intended to be competitive with the Fibonacci heap in theory and easy to implement and fast in practice. We provide a partial complexity analysis of pairing heaps. Complete analysis remains an open problem.
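
A minimal pairing-heap sketch follows (an illustration, not the authors' code): a heap is a tree stored as a (key, children) pair, insert and meld are single comparisons, and delete-min links the root's children in pairs and then melds the pairs back together, the two-pass pairing the paper analyzes.

```python
def meld(a, b):
    """Link two heaps; the smaller root adopts the other heap as a child."""
    if a is None: return b
    if b is None: return a
    if a[0] <= b[0]:
        a[1].append(b)
        return a
    b[1].append(a)
    return b

def insert(heap, key):
    return meld(heap, [key, []])

def delete_min(heap):
    """Return (min_key, new_heap) using two-pass pairing of the root's children."""
    key, children = heap
    paired = [meld(children[i], children[i + 1] if i + 1 < len(children) else None)
              for i in range(0, len(children), 2)]
    new_heap = None
    for h in reversed(paired):          # second pass: meld the pairs together
        new_heap = meld(h, new_heap)
    return key, new_heap

h = None
for x in [5, 3, 8, 1, 9, 2]:
    h = insert(h, x)
for _ in range(3):
    m, h = delete_min(h)
    print(m, end=" ")                   # 1 2 3
```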

Collaboration


Dive into Daniel Dominic Sleator's collaborations.

Top Co-Authors

Victor K. Wei

The Chinese University of Hong Kong


David L. Black

Carnegie Mellon University
