Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Orli Waarts is active.

Publication


Featured research published by Orli Waarts.


symposium on the theory of computing | 1993

On-line load balancing with applications to machine scheduling and virtual circuit routing

James Aspnes; Yossi Azar; Amos Fiat; Serge A. Plotkin; Orli Waarts

In this paper we study an idealized problem of on-line allocation of routes to virtual circuits where the goal is to minimize the required bandwidth. For the case where virtual circuits continue to exist forever, we describe an algorithm that achieves an O(log n) competitive ratio, where n is the number of nodes in the network. Informally, our results show that instead of knowing all of the future requests, it is sufficient to increase the bandwidth of the communication links by an O(log n) factor. We also show that this result is tight, i.e., for any on-line algorithm there exists a scenario in which an Ω(log n) increase in bandwidth is necessary. We view virtual circuit routing as a generalization of an on-line scheduling problem, and hence a major part of the paper focuses on the development of algorithms for non-preemptive on-line scheduling on related and unrelated machines. Specialization of routing to scheduling leads us to concentrate on scheduling in the case where jobs must be assigned immediately upon arrival; assigning a job to a machine increases this machine's load by an amount that depends both on the job and on the machine. The goal is to minimize the maximum load. For the related machines case, we describe the first algorithm that achieves a constant competitive ratio. For the unrelated case (with n machines), we describe a new method that yields an O(log n)-competitive algorithm. This stands in contrast to the natural greedy approach, which we show has only a Θ(n) competitive ratio. The virtual circuit routing result follows as a generalization of the unrelated machines case.
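
To make the scheduling setting concrete, here is a minimal Python sketch (with invented job and load numbers) of the natural greedy rule mentioned above for unrelated machines: each arriving job is placed on the machine whose load after the assignment would be smallest. It illustrates the model only; it is not the O(log n)-competitive method of the paper.

    # Hypothetical illustration of non-preemptive on-line assignment to
    # unrelated machines: job j adds load_increases[j][i] to machine i, and
    # greedy puts each job on the machine whose post-assignment load is smallest.
    # This is the natural greedy rule the abstract shows is only Theta(n)-competitive.

    def greedy_assign(load_increases, num_machines):
        """load_increases: one list per job, giving its load increase on each machine."""
        loads = [0.0] * num_machines
        assignment = []
        for job in load_increases:
            # pick the machine whose load after this assignment is minimal
            best = min(range(num_machines), key=lambda i: loads[i] + job[i])
            loads[best] += job[best]
            assignment.append(best)
        return assignment, max(loads)

    # Made-up example: each row is one job's load increase on machines 0..2.
    jobs = [[2.0, 5.0, 9.0], [4.0, 1.0, 7.0], [3.0, 3.0, 1.0]]
    print(greedy_assign(jobs, 3))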


Journal of Algorithms | 1997

On-Line Load Balancing of Temporary Tasks

Yossi Azar; Bala Kalyanasundaram; Serge A. Plotkin; Kirk Pruhs; Orli Waarts

This paper considers the nonpreemptive on-line load balancing problem where tasks have limited duration in time. Upon arrival, each task has to be immediately assigned to one of the machines, increasing the load on this machine for the duration of the task by an amount that depends on both the machine and the task. The goal is to minimize the maximum load. Azar, Broder, and Karlin studied the unknown duration case, where the duration of a task is not known upon its arrival (On-line load balancing, in "Proc. 33rd IEEE Annual Symposium on Foundations of Computer Science, 1992," pp. 218-225). They focused on the special case in which for each task there is a subset of machines capable of executing it, and the increase in load due to assigning the task to one of these machines depends only on the task and not on the machine. For this case, they showed an O(n^{2/3})-competitive algorithm and an Ω(√n) lower bound on the competitive ratio, where n is the number of machines. This paper closes the gap by giving an O(√n)-competitive algorithm. In addition, trying to overcome the Ω(√n) lower bound for the case of unknown task duration, this paper initiates a study of the load balancing problem for tasks with known duration (i.e., the duration of a task becomes known upon its arrival). For this case we show an O(log nT)-competitive algorithm, where T is the ratio of the maximum possible duration of a task to the minimum possible duration of a task. The paper explores an alternative way to overcome the Ω(√n) bound; it considers the related machines case with unknown task duration. In the related machines case, a task can be executed by any machine and the increase in load depends on the speed of the machine and the weight of the task. For this case the paper gives a 20-competitive algorithm and shows a lower bound of 3 − o(1) on the competitive ratio.
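
A similarly hedged sketch of the temporary-tasks setting on related machines, with made-up arrival times, durations, weights, and speeds: a task of weight w raises the load of a speed-s machine by w/s only for its duration, and the objective is the largest load seen at any moment. The greedy choice below is for illustration only and is not the paper's 20-competitive algorithm.

    import heapq

    # Hypothetical sketch of on-line load balancing of temporary tasks on
    # related machines: assigning a task of weight w to a machine of speed s
    # raises that machine's load by w / s until the task departs.

    def simulate(tasks, speeds):
        """tasks: list of (arrival, duration, weight), sorted by arrival time."""
        loads = [0.0] * len(speeds)
        departures = []          # heap of (end_time, machine, load_increase)
        max_load = 0.0
        for arrival, duration, weight in tasks:
            # retire tasks that have already finished
            while departures and departures[0][0] <= arrival:
                _, m, inc = heapq.heappop(departures)
                loads[m] -= inc
            # greedily assign to the machine minimizing the resulting load
            m = min(range(len(speeds)), key=lambda i: loads[i] + weight / speeds[i])
            inc = weight / speeds[m]
            loads[m] += inc
            heapq.heappush(departures, (arrival + duration, m, inc))
            max_load = max(max_load, max(loads))
        return max_load

    # Invented instance: two machines of speeds 1 and 2.
    print(simulate([(0, 5, 4.0), (1, 2, 6.0), (2, 4, 3.0)], speeds=[1.0, 2.0]))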


SIAM Journal on Computing | 1998

Performing Work Efficiently in the Presence of Faults

Cynthia Dwork; Joseph Y. Halpern; Orli Waarts

We consider a system of t synchronous processes that communicate only by sending messages to one another, and together the processes must perform n independent units of work. Processes may fail by crashing; we want to guarantee that in every execution of the protocol in which at least one process survives, all n units of work will be performed. We consider three parameters: the number of messages sent, the total number of units of work performed (including multiplicities), and time. We present three protocols for solving the problem. All three are work optimal, doing O(n+t) work. The first has moderate costs in the remaining two parameters, sends O(t√t) messages, and takes O(n+t) time. This protocol can be easily modified to run in any completely asynchronous system equipped with a failure detection mechanism. The second sends only O(t log t) messages, but its running time is large (O(t^2 (n+t) 2^{n+t})). The third is essentially time optimal in the (usual) case in which there are no failures, and its time complexity degrades gracefully as the number of failures increases.
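
As a point of comparison for the bounds above, the following toy sketch (invented interface, not the paper's protocols) shows the uncoordinated baseline in which every process simply attempts all n units of work on its own: the work is completed whenever at least one process survives, but total work can reach n·t, which is exactly the waste the paper's O(n+t)-work protocols avoid.

    # Hypothetical baseline: with no coordination, every process attempts all
    # n units of work, so completion is guaranteed if any process survives,
    # but total work can be as large as n * t.

    def uncoordinated_total_work(n, t, crash_step=None):
        """crash_step[p] = unit index at which process p crashes (absent = survives)."""
        crash_step = crash_step or {}
        total = 0
        completed = set()
        for p in range(t):
            for unit in range(n):
                if p in crash_step and unit >= crash_step[p]:
                    break                    # process p crashes and stops working
                total += 1
                completed.add(unit)
        return total, len(completed) == n    # (work performed, all units done?)

    # Invented example: three of four processes crash early; the survivor finishes.
    print(uncoordinated_total_work(n=10, t=4, crash_step={0: 2, 1: 0, 2: 5}))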


symposium on the theory of computing | 1993

Contention in shared memory algorithms

Cynthia Dwork; Maurice Herlihy; Orli Waarts

Most complexity measures for concurrent algorithms for asynchronous shared-memory architectures focus on process steps and memory consumption. In practice, however, the performance of multiprocessor algorithms is heavily influenced by contention, the extent to which processes access the same location at the same time. Nevertheless, even though contention is one of the principal considerations affecting the performance of real algorithms on real multiprocessors, there are no formal tools for analyzing the contention of asynchronous shared-memory algorithms. This paper introduces the first formal complexity model for contention in shared-memory multiprocessors. We focus on the standard multiprocessor architecture in which n asynchronous processes communicate by applying read, write, and read-modify-write operations to a shared memory. To illustrate the utility of our model, we use it to derive two kinds of results: (1) lower bounds on contention for well-known basic problems such as agreement and mutual exclusion, and (2) trade-offs between the length of the critical path (the maximal number of accesses to shared variables performed by a single process in executing the algorithm) and contention for these algorithms. Furthermore, we give the first formal contention analysis of a variety of counting networks, a class of concurrent data structures implementing shared counters. Experiments indicate that certain counting networks outperform conventional single-variable counters at high levels of contention. Our analysis provides the first formal model explaining this phenomenon.
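
A rough toy model, not the paper's formal definition, of what contention measures: if several processes access the same memory location in the same step, each access is delayed by the others. The sketch below charges c(c−1) stalls to a location accessed by c processes simultaneously and contrasts a single hot counter with accesses spread over distinct locations.

    from collections import Counter

    # Toy model (not the paper's definition): each simultaneous access to a
    # location is charged one stall per other access to that location.

    def total_stalls(schedule):
        """schedule: list of (step, process, location) accesses."""
        per_slot = Counter((step, loc) for step, _, loc in schedule)
        return sum(c * (c - 1) for c in per_slot.values())

    n = 8
    # n processes all hitting one shared counter in the same step:
    hot = [(0, p, "counter") for p in range(n)]
    # the same processes spread over n distinct locations:
    cool = [(0, p, f"slot_{p}") for p in range(n)]
    print(total_stalls(hot), total_stalls(cool))   # high vs. zero stalls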


foundations of computer science | 1996

Efficient information gathering on the Internet

Oren Etzioni; Steve Hanks; Tao Jiang; Richard M. Karp; Omid Madani; Orli Waarts

The Internet offers unprecedented access to information. At present most of this information is free, but information providers are likely to start charging for their services in the near future. With that in mind, this paper introduces the following information access problem: given a collection of n information sources, each of which has a known time delay, dollar cost, and probability of providing the needed information, find an optimal schedule for querying the information sources. We study several variants of the problem which differ in the definition of an optimal schedule. We first consider a cost model in which the problem is to minimize the expected total cost (monetary and time) of the schedule, subject to the requirement that the schedule may terminate only when the query has been answered or all sources have been queried unsuccessfully. We develop an approximation algorithm for this problem and for an extension of the problem in which more than a single item of information is being sought. We then develop approximation algorithms for a reward model in which a constant reward is earned if the information is successfully provided, and we seek the schedule with the maximum expected difference between the reward and a measure of cost. The monetary and time costs may either appear in the cost measure or be constrained not to exceed a fixed upper bound; these options give rise to four different variants of the reward model.
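
The cost model admits a short worked example. Assuming, as a simplification of the abstract's model, that sources are queried one at a time until one succeeds or all fail, and that monetary and time costs are folded into a single number per source, the expected cost of a fixed ordering is the sum of each source's cost weighted by the probability that all earlier queries failed; the numbers below are invented.

    # Hypothetical worked example of the sequential-query cost model: source i
    # is queried only if all earlier sources failed, so its cost is weighted by
    # the probability that every previous query failed.

    def expected_cost(schedule):
        """schedule: list of (cost, success_probability) in query order."""
        expected = 0.0
        prob_all_failed = 1.0            # probability we even reach this source
        for cost, p in schedule:
            expected += prob_all_failed * cost
            prob_all_failed *= (1.0 - p)
        return expected

    cheap_first = [(1.0, 0.2), (3.0, 0.5), (10.0, 0.9)]
    likely_first = [(10.0, 0.9), (3.0, 0.5), (1.0, 0.2)]
    # The ordering matters: 7.4 versus 10.35 for the same three sources.
    print(expected_cost(cheap_first), expected_cost(likely_first))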


Distributed Computing | 1996

Linearizable counting networks

Maurice Herlihy; Nir Shavit; Orli Waarts

The counting problem requires n asynchronous processes to assign themselves successive values. A solution is linearizable if the order of the values assigned reflects the real-time order in which they were requested. Linearizable counting lies at the heart of concurrent time-stamp generation, as well as concurrent implementations of shared counters, FIFO buffers, and similar data structures. We consider solutions to the linearizable counting problem in a multiprocessor architecture in which processes communicate by applying read-modify-write operations to a shared memory. Linearizable counting algorithms can be judged by three criteria: the memory contention produced, whether processes are required to wait for one another, and how long it takes a process to choose a value (the latency). A solution is ideal if it has low contention, low latency, and it eschews waiting. The conventional software solution, where processes synchronize at a single variable, avoids waiting and has low latency, but has high contention. In this paper we give two new constructions based on counting networks, one with low latency and low contention, but that requires processes to wait for one another, and one with low contention and no waiting, but that has high latency. Finally, we prove that these trade-offs are inescapable: an ideal linearizable counting algorithm is impossible. Since ideal non-linearizable counting algorithms exist, these results establish a substantial complexity gap between linearizable and non-linearizable counting.
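
To pin down the linearizability condition stated above, here is a small hypothetical checker with invented timestamps: whenever one counting operation completes before another begins, the earlier operation must have received the smaller value. It only checks the property; it is not one of the paper's counting-network constructions.

    # Hypothetical checker: a counting execution is linearizable (in the sense
    # used above) only if, whenever one operation finishes before another
    # starts, the earlier operation received the smaller value.

    def is_linearizable_counting(ops):
        """ops: list of (request_time, response_time, value) triples."""
        for req_a, resp_a, val_a in ops:
            for req_b, _, val_b in ops:
                if resp_a < req_b and val_a > val_b:
                    return False            # real-time order violated
        return True

    ok  = [(0, 2, 0), (1, 3, 1), (4, 5, 2)]   # respects real-time order
    bad = [(0, 1, 5), (2, 3, 4)]              # later request got a smaller value
    print(is_linearizable_counting(ok), is_linearizable_counting(bad))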


SIAM Journal on Computing | 1996

Randomized Consensus in Expected O(n log^2 n) Operations Per Processor

James Aspnes; Orli Waarts

This paper presents a new randomized algorithm for achieving consensus among asynchronous processors that communicate by reading and writing shared registers. The fastest previously known algorithm requires a processor to perform an expected O(n^2 log n) operations.


symposium on the theory of computing | 1992

Simple and efficient bounded concurrent timestamping or bounded concurrent timestamp systems are comprehensible

Cynthia Dwork; Orli Waarts



principles of distributed computing | 1990

A characterization of eventual Byzantine agreement

Joseph Y. Halpern; Yoram Moses; Orli Waarts



symposium on discrete algorithms | 1995

Fairness in scheduling

Miklós Ajtai; James Aspnes; Moni Naor; Yuval Rabani; Leonard J. Schulman; Orli Waarts


Collaboration


Dive into Orli Waarts's collaborations.

Top Co-Authors

Nir Shavit

Massachusetts Institute of Technology
