Hagit Attiya
Technion – Israel Institute of Technology
Publications
Featured research published by Hagit Attiya.
Journal of the ACM | 1990
Hagit Attiya; Amotz Bar-Noy; Danny Dolev; David Peleg; Rüdiger Reischuk
This paper is concerned with the solvability of the problem of processor renaming in unreliable, completely asynchronous distributed systems. Fischer et al. prove in [8] that "nontrivial consensus" cannot be attained in such systems, even when only a single, benign processor failure is possible. In contrast, this paper shows that problems of processor renaming can be solved even in the presence of up to t < n/2 faulty processors, contradicting the widely held belief that no nontrivial problem can be solved in such a system. The problems deal with renaming processors so as to reduce the size of the initial name space. When only uniqueness of the new names is required, we present a lower bound of n + 1 on the size of the new name space, and a renaming algorithm that establishes an upper bound of n + t. If the new names are also required to preserve the original order, a tight bound of 2^t(n - t + 1) - 1 is obtained.
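To make these bounds concrete, here is a small worked example (with illustrative values of n and t, not taken from the paper) evaluating the two name-space sizes:

```java
// Worked example of the two renaming bounds from the abstract.
// The values of n and t below are illustrative only, subject to t < n/2.
public class RenamingBounds {
    public static void main(String[] args) {
        int n = 10, t = 4;                     // n processors, up to t faulty
        int uniqueOnly = n + t;                // upper bound when only uniqueness is required
        long orderPreserving = (1L << t) * (n - t + 1) - 1;  // 2^t (n - t + 1) - 1
        System.out.printf("n=%d, t=%d: unique names fit in %d; order-preserving tight bound is %d%n",
                n, t, uniqueOnly, orderPreserving);   // prints 14 and 111
    }
}
```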
ACM Transactions on Computer Systems | 1994
Hagit Attiya; Jennifer L. Welch
The power of two well-known consistency conditions for shared-memory multiprocessors, sequential consistency and linearizability, is compared. The cost measure studied is the worst-case response time in distributed implementations of virtual shared memory supporting one of the two conditions. Three types of shared-memory objects are considered: read/write objects, FIFO queues, and stacks. For all three object types, the worst-case response time turns out to be very sensitive to the timing information available to the system. If clocks are only approximately synchronized (or do not exist), linearizability is more expensive than sequential consistency for all three object types. Under the strong assumption that processes have perfectly synchronized clocks, sequential consistency and linearizability are equally costly: we present upper bounds for linearizability and matching lower bounds for sequential consistency. The upper bounds are shown by presenting algorithms that use atomic broadcast in a modular fashion. The lower-bound proofs for the approximate case use the technique of "shifting," first introduced for studying the clock synchronization problem.
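As a rough illustration of the modular use of atomic broadcast, here is a schematic sketch (not the paper's construction) of a sequentially consistent read/write register: reads are local and immediate, while writes wait for their own broadcast to be delivered. The AtomicBroadcast interface is assumed, not a real library; achieving linearizability with only approximately synchronized clocks requires additional waiting, which is the cost gap the paper quantifies.

```java
import java.util.concurrent.CountDownLatch;

// Schematic sketch: a replicated read/write register built on an assumed
// atomic-broadcast primitive that applies updates at all replicas in the
// same total order. Local reads plus writes that wait for their own
// delivery yield sequential consistency.
public class BroadcastRegister {
    interface AtomicBroadcast {                  // assumed/hypothetical primitive
        void broadcast(Runnable applyAtEveryReplicaInTotalOrder);
    }

    private final AtomicBroadcast abcast;
    private volatile int value;                  // local replica of the register

    BroadcastRegister(AtomicBroadcast abcast) { this.abcast = abcast; }

    public int read() {
        return value;                            // local, returns immediately
    }

    // Write: broadcast the update and wait until it is applied locally,
    // so later reads by this process see the process's own writes.
    public void write(int v) throws InterruptedException {
        CountDownLatch delivered = new CountDownLatch(1);
        abcast.broadcast(() -> { value = v; delivered.countDown(); });
        delivered.await();                       // the waiting is the "cost"
    }
}
```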
SIAM Journal on Computing | 1998
Hagit Attiya; Ophir Rachman
The atomic snapshot object is an important primitive used for the design and verification of wait-free algorithms in shared-memory distributed systems. A snapshot object is a shared data structure partitioned into segments. Processors can either update an individual segment or instantaneously scan all segments of the object. This paper presents an implementation of an atomic snapshot object in which each high-level operation (scan or update) requires O(n log n) low-level operations on atomic read/write registers.
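For intuition, here is a minimal sketch of the scan/update interface using the classic "double collect" approach, which retries a scan until two successive collects agree. This simple version is lock-free but not wait-free, and it does not achieve the paper's O(n log n) step complexity.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// A minimal snapshot sketch via "double collect": a scan succeeds when two
// successive reads of all segments observe identical sequence numbers,
// meaning no update intervened, so the returned view is atomic.
public class DoubleCollectSnapshot {
    private record Segment(int value, long seq) {}
    private final AtomicReferenceArray<Segment> segments;

    public DoubleCollectSnapshot(int n) {
        segments = new AtomicReferenceArray<>(n);
        for (int i = 0; i < n; i++) segments.set(i, new Segment(0, 0));
    }

    // Each process i updates only its own segment (single writer),
    // so a plain read-then-set of the sequence number is race-free.
    public void update(int i, int value) {
        Segment old = segments.get(i);
        segments.set(i, new Segment(value, old.seq() + 1));
    }

    public int[] scan() {
        while (true) {
            Segment[] first = collect();
            Segment[] second = collect();
            if (identical(first, second)) {      // no update intervened
                int[] view = new int[first.length];
                for (int i = 0; i < view.length; i++) view[i] = first[i].value();
                return view;
            }                                    // otherwise retry: lock-free, not wait-free
        }
    }

    private Segment[] collect() {
        Segment[] copy = new Segment[segments.length()];
        for (int i = 0; i < copy.length; i++) copy[i] = segments.get(i);
        return copy;
    }

    private static boolean identical(Segment[] a, Segment[] b) {
        for (int i = 0; i < a.length; i++)
            if (a[i].seq() != b[i].seq()) return false;
        return true;
    }
}
```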
Journal of the ACM | 1994
Hagit Attiya; Nancy A. Lynch; Nir Shavit
The time complexity of wait-free algorithms in "normal" executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.
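As a toy illustration of the averaging idea behind approximate agreement (a sequential simulation, not the paper's wait-free algorithm): if every value moves halfway toward the midpoint of the current range, the spread halves each round, so all values fall within ε of each other after logarithmically many rounds.

```java
import java.util.Arrays;

// Toy convergence demo: moving each value halfway toward the midpoint of
// the current range halves the spread per round, giving agreement within
// epsilon after log2(spread/epsilon) rounds. Illustrative values only.
public class ApproximateAgreementToy {
    public static void main(String[] args) {
        double[] values = {0.0, 3.0, 7.5, 10.0};
        double epsilon = 0.01;
        int rounds = 0;
        while (spread(values) > epsilon) {
            double lo = Arrays.stream(values).min().getAsDouble();
            double hi = Arrays.stream(values).max().getAsDouble();
            double mid = (lo + hi) / 2;
            for (int i = 0; i < values.length; i++)
                values[i] = (values[i] + mid) / 2;   // halve distance to midpoint
            rounds++;
        }
        System.out.println("within epsilon after " + rounds + " rounds");  // 10 here
    }

    private static double spread(double[] v) {
        return Arrays.stream(v).max().getAsDouble()
             - Arrays.stream(v).min().getAsDouble();
    }
}
```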
Symposium on Principles of Programming Languages | 2011
Hagit Attiya; Rachid Guerraoui; Danny Hendler; Maged M. Michael; Martin T. Vechev
Building correct and efficient concurrent algorithms is known to be a difficult problem of fundamental importance. To achieve efficiency, designers try to remove unnecessary and costly synchronization. However, not only is this manual trial-and-error process ad hoc, time consuming, and error-prone, but it often leaves designers pondering: is it inherently impossible to eliminate certain synchronization, or did this particular attempt simply fail?

In this paper we answer this question. We prove that it is impossible to build concurrent implementations of classic and ubiquitous specifications, such as sets, queues, stacks, mutual exclusion, and read-modify-write operations, that completely eliminate the use of expensive synchronization. We prove that one cannot avoid the use of either: i) read-after-write (RAW), where a write to a shared variable A is followed by a read of a different shared variable B without a write to B in between, or ii) atomic write-after-read (AWAR), where an atomic operation reads and then writes to shared locations.

Unfortunately, enforcing RAW or AWAR is expensive on all current mainstream processors. To enforce RAW, memory-ordering instructions (also called fences or barriers) must be used; to enforce AWAR, atomic instructions such as compare-and-swap are required. These instructions are typically substantially slower than regular instructions. Although algorithm designers frequently struggle to avoid RAW and AWAR, their attempts are often futile. Our result characterizes the cases where avoiding RAW and AWAR is impossible; on the flip side, it can be used to guide designers toward new algorithms where RAW and AWAR can be eliminated.
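For concreteness, here is a minimal Java sketch of the two patterns (the variable names flagA, flagB, and lock are illustrative, not from the paper): RAW realized with an explicit fence, and AWAR realized with compare-and-swap.

```java
import java.lang.invoke.VarHandle;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketches of the two synchronization patterns the paper proves
// unavoidable for the listed specifications.
public class RawAndAwar {
    static volatile int flagA = 0;
    static volatile int flagB = 0;
    static final AtomicInteger lock = new AtomicInteger(0);

    // RAW: a write to shared variable A followed by a read of a different
    // shared variable B. The explicit full fence marks where a store-load
    // barrier is needed on weakly ordered hardware; in Java, volatile
    // accesses already imply this barrier, which is precisely their cost.
    static boolean rawPattern() {
        flagA = 1;              // write to shared variable A
        VarHandle.fullFence();  // fence (memory-ordering instruction)
        return flagB == 0;      // read of a *different* shared variable B
    }

    // AWAR: an atomic operation that reads and then writes a shared
    // location, here realized with a compare-and-swap instruction.
    static boolean awarPattern() {
        return lock.compareAndSet(0, 1);  // atomically read 0, write 1
    }
}
```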
Distributed Computing | 1992
Nancy A. Lynch; Hagit Attiya
A new technique for proving timing properties for timing-based algorithms is described; it is an extension of the mapping techniques previously used in proofs of safety properties for asynchronous concurrent systems. The key to the method is a way of representing a system with timing constraints as an automaton whose state includes predictive timing information. Timing assumptions and timing requirements for the system are both represented in this way. A multi-valued mapping from the "assumptions automaton" to the "requirements automaton" is then used to show that the given system satisfies the requirements. One type of mapping is based on a collection of "progress functions" providing measures of progress toward timing goals. The technique is illustrated with two examples, a simple resource manager and a two-process race system.
Principles of Distributed Computing | 1999
Yehuda Afek; Hagit Attiya; Arie Fouren; Gideon Stupp; Dan Touitou
Two implementations of an adaptive, wait-free, and long-lived renaming task in the read/write shared-memory model are presented. Implementations of long-lived and adaptive objects were previously known only in the much stronger model of load-linked and store-conditional (i.e., read-modify-write) shared memory; in read/write shared memory, only one-shot adaptive objects were known. Presented here are two algorithms that assign a new unique id in the range 1, ..., O(k²) to any process whose initial unique name is taken from a set of size N, for an arbitrary N, where k is the number of processors that actually take steps or hold a name while the new name is being acquired. The step complexity of acquiring a new name is O(k²) and O(k² log k), respectively, while the step complexity of releasing a name is O(1). The main differences between the two algorithms are in the precise definition of adaptiveness and in their space complexity: the first algorithm adapts to the interval contention of an operation while requiring a bounded amount of space, whereas the second adapts to the point contention but requires an unbounded amount of space. The two algorithms use completely different techniques to achieve their goals.
ACM Symposium on Parallel Algorithms and Architectures | 2009
Hagit Attiya; Eshcar Hillel; Alessia Milani
Transactional memory (TM) is a promising approach for designing concurrent data structures, and it is essential to develop better understanding of the formal properties that can be achieved by TM implementations. Two fundamental properties of TM implementations are disjoint-access parallelism, which is critical for their scalability, and the invisibility of read operations, which reduces memory contention. This paper proves an inherent tradeoff for implementations of transactional memories: they cannot be both disjoint-access parallel and have read-only transactions that are invisible and always terminate successfully. In fact, a lower bound of Ω(t) is proved on the number of writes needed in order to implement a read-only transaction of t items, which successfully terminates in a disjoint-access parallel TM implementation. The results assume strict serializability and thus hold under the assumption of opacity. It is shown how to extend the results to hold also for weaker consistency conditions, serializability and snapshot isolation.
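For intuition, here is a minimal sketch (the names TVar and visibleRead are illustrative, not from any particular TM) of what a visible read looks like: the read itself writes to shared metadata, which is exactly the kind of write the lower bound shows a disjoint-access parallel TM cannot avoid in always-terminating read-only transactions.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a *visible* read: the reader announces itself by writing to
// shared per-item metadata, creating the memory contention that invisible
// reads are designed to avoid.
public class VisibleReadSketch {
    static final class TVar {
        volatile int value;
        final AtomicInteger readers = new AtomicInteger(0);  // shared metadata
    }

    static int visibleRead(TVar x) {
        x.readers.incrementAndGet();   // announce the reader: a shared write
        int v = x.value;               // the actual read of the item
        x.readers.decrementAndGet();   // retract the announcement
        return v;
    }
}
```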
Journal of the ACM | 2003
Hagit Attiya; Arie Fouren
This article introduces the sieve, a novel building block that makes it possible to adapt to the number of simultaneously active processes (the point contention) during the execution of an operation. We present an implementation of the sieve in which each sieve operation requires O(k log k) steps, where k is the point contention during the operation. The sieve is the cornerstone of the first wait-free algorithms that adapt to point contention using only read and write operations. Specifically, we present efficient algorithms for long-lived renaming, timestamping, and collecting information.
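For context, here is a minimal sketch of the standard non-adaptive "collect" primitive that such algorithms improve on: each process writes its information to its own register, and a collect reads all n registers, costing O(n) steps regardless of contention, whereas the sieve-based algorithms cost O(k log k) in the point contention k.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;

// The non-adaptive baseline: store writes one single-writer register,
// collect reads all n registers. Step complexity is O(n) even when only
// a few processes are active, which is what adaptive algorithms avoid.
public class SimpleCollect<T> {
    private final AtomicReferenceArray<T> regs;

    public SimpleCollect(int n) { regs = new AtomicReferenceArray<>(n); }

    public void store(int processId, T info) {
        regs.set(processId, info);             // single writer per slot
    }

    public List<T> collect() {                 // O(n) reads, not adaptive
        List<T> view = new ArrayList<>();
        for (int i = 0; i < regs.length(); i++) {
            T v = regs.get(i);
            if (v != null) view.add(v);
        }
        return view;
    }
}
```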
Principles of Distributed Computing | 1996
Hagit Attiya; Eyal Dagan