Amos Israeli
Technion – Israel Institute of Technology
Publications
Featured research published by Amos Israeli.
principles of distributed computing | 1990
Amos Israeli; Marc Jalfon
A self-stabilizing system is a system which reaches a legal configuration by itself, without any kind of an outside intervention; when started from any arbitrary configuration. Hence a self-stabilizing system accommodates any possible initial configuration and tolerates transient bugs. This fact contributes most of the extra difficulty of devising selfstabilizing systems. On the other hand, the same fact makes self-stabilizing systems so appealing as no initialization of the system is required. In this paper a novel modular method for constructing uniform self stabilizing mutual exclusion (or in short USSA4E) protocols is presented. The viability of the method is demonstrated by constructing for the first time a randomized USSME protocol for any arbitrary dynamic graph and another one for dyna.mic rings. The correctness of both protocols is proved and their complexity is analyzed. The analysis of the new protocols in*Partially supported by VPR Funds Japan TS Research Fund and B. Sr. G. Greenberg Research Fund (Ottawa). tpartially supported by a Gutwirth fellowship. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and /or specific permission.
principles of distributed computing | 1994
Amos Israeli; Lihu Rappoport
In this paper, we present efficient implementations of strong shared memory primitives. We use the asynchronous shared memory model, in which processes communicate by applying primitive operations (e.g. Read, Write) to a shared memory. We define disjoint-access-parallel implementations. Intuitively, an implementation of shared memory primitives is disjoint-access-parallel if processes which execute shared memory operations that access disjoint sets of words progress concurrently, without interfering with each other (under an assumption described in the paper). Two commonly used primitives, both in theory and in practice, are Compare&Swap (C&S) and the pair Load Linked (LL) and Store Conditional (SC). We present an efficient, non-blocking, disjoint-access-parallel implementation of LL and SCn, using Read and C&S. SCn is a generalization of SC which accesses n memory words. This implementation is constructed in three stages. We first present an implementation of LL, SC and an additional primitive, called Validate (VL), using Read and C&S. We then present an implementation of Read and C&Sn, using LL, SC and VL (C&Sn is a generalization of C&S which accesses n memory words). Finally, we present an implementation of SCn, using Read and C&Sn. The work and space complexities of the implementations presented in this paper improve upon those of previous works.
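As a rough illustration of the flavor of such constructions (and not the paper's disjoint-access-parallel implementation), the sketch below emulates single-word LL/VL/SC on top of an atomic update of a (value, version) pair; the version tag is what lets SC detect intervening writes. The class and method names are illustrative, and a lock merely stands in for the atomicity that C&S would provide in hardware.

import threading

class LLSCCell:
    # Minimal single-word sketch of Load-Linked / Validate / Store-Conditional
    # semantics built on an atomic update of a (value, version) pair. This is
    # NOT the paper's construction; it only shows how a version tag lets SC
    # detect that another write happened since the matching LL.

    def __init__(self, value=None):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()   # stands in for hardware atomicity

    def load_linked(self):
        with self._lock:
            return self._value, self._version   # value plus a tag to validate later

    def validate(self, version):
        with self._lock:
            return self._version == version     # VL: has anyone written since LL?

    def store_conditional(self, version, new_value):
        with self._lock:                        # atomically compare the tag, swap value
            if self._version != version:
                return False                    # some write intervened: SC fails
            self._value = new_value
            self._version += 1
            return True

# Usage: a retry loop that builds fetch-and-increment from LL/SC.
def increment(cell: LLSCCell):
    while True:
        value, tag = cell.load_linked()
        if cell.store_conditional(tag, value + 1):
            return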
IEEE Transactions on Parallel and Distributed Systems | 1997
Shlomi Dolev; Amos Israeli; Shlomo Moran
Distributed Computing | 1993
Shlomi Dolev; Amos Israeli; Shlomo Moran
A distributed system is self-stabilizing if it can be started in any possible global state. Once started, the system regains its consistency by itself, without any kind of outside intervention. The self-stabilization property makes the system tolerant to faults in which processors exhibit faulty behavior for a while and then recover spontaneously in an arbitrary state. When the period in between one recovery and the next faulty period is long enough, the system stabilizes. A distributed system is uniform if all processors with the same number of neighbors are identical. A distributed system is dynamic if it can tolerate addition or deletion of processors and links without reinitialization. In this work, we study uniform dynamic self-stabilizing protocols for leader election under read/write atomicity. Our protocols use randomization to break symmetry. The leader election protocol stabilizes in O(\Delta D \log n) time when the number of processors is unknown, and in O(\Delta D) time otherwise. Here \Delta denotes the maximal degree of a node, D denotes the diameter of the graph, and n denotes the number of processors in the graph. We introduce self-stabilizing protocols for synchronization that are used as building blocks by the leader-election algorithm. We conclude this work by presenting a simple, uniform, self-stabilizing ranking protocol.
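A minimal synchronous simulation of the symmetry-breaking idea behind randomized leader election is sketched below, assuming anonymous identical nodes and a known bound on the network size: it floods randomly drawn labels and retries on ties. It only illustrates why randomization helps in uniform systems; it is not the paper's read/write-atomicity protocol, and all identifiers in it are illustrative.

import random

def randomized_leader_election(adjacency, seed=0):
    # Toy synchronous simulation: every (anonymous, identical) node draws a
    # random label, the maximum label is flooded for enough rounds to cover the
    # graph, and a tie for the maximum triggers a fresh phase. Illustration
    # only; not the self-stabilizing protocol of the paper.
    rng = random.Random(seed)
    n = len(adjacency)
    while True:
        labels = [rng.getrandbits(32) for _ in range(n)]
        best = labels[:]
        for _ in range(n):                    # crude diameter bound for the sketch
            best = [max([best[v]] + [best[u] for u in adjacency[v]])
                    for v in range(n)]        # flood the maximum label
        winners = [v for v in range(n) if labels[v] == best[v] == max(best)]
        if len(winners) == 1:                 # unique maximum: a leader emerged
            return winners[0]
        # otherwise two nodes drew the same maximal label; start another phase

if __name__ == "__main__":
    ring4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print("elected node:", randomized_leader_election(ring4))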
principles of distributed computing | 1987
Benny Chor; Amos Israeli; Ming Li
Three self-stabilizing protocols for distributed systems in the shared memory model are presented. The first protocol is a mutual-exclusion protocol for tree-structured systems. The second protocol is a spanning-tree protocol for systems with any connected communication graph. The third protocol is obtained by use of fair protocol combination, a simple technique which enables the combination of two self-stabilizing dynamic protocols. The resulting protocol is a self-stabilizing mutual-exclusion protocol for dynamic systems with a general (connected) communication graph. The presented protocols improve upon previous protocols in two ways: first, it is assumed that the only atomic operations are either read or write to the shared memory; second, our protocols work for any connected network and even for dynamic networks, in which the topology of the network may change during the execution.
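The spanning-tree idea can be conveyed with a short round-based sketch, given below under the assumption of a coarse-grained scheduler: the root pins its distance to 0 and every other node repeatedly adopts one plus the minimum distance among its neighbors, so the parent pointers converge to a BFS tree from any initial state. This is a sketch of the general distance-based rule, not the paper's read/write-atomicity protocol, and the function and variable names are illustrative.

def stabilize_spanning_tree(adjacency, root, initial_dist):
    # Toy round-based illustration of a self-stabilizing spanning-tree rule:
    # starting from ARBITRARY distance values, the root holds distance 0 and
    # every other node repeatedly adopts 1 + (minimum neighbour distance),
    # taking that neighbour as its parent, until no rule is enabled.
    dist = dict(initial_dist)          # arbitrary, possibly corrupted, start state
    parent = {v: None for v in adjacency}
    changed = True
    while changed:                     # run until a fixed point is reached
        changed = False
        for v in adjacency:
            if v == root:
                new_d, new_p = 0, None
            else:
                best = min(adjacency[v], key=lambda u: dist[u])
                new_d, new_p = dist[best] + 1, best
            if (new_d, new_p) != (dist[v], parent[v]):
                dist[v], parent[v] = new_d, new_p
                changed = True
    return parent, dist

if __name__ == "__main__":
    graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
    corrupted = {0: 7, 1: -3, 2: 99, 3: 0}   # any initial configuration
    print(stabilize_spanning_tree(graph, root=0, initial_dist=corrupted))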
SIAM Journal on Computing | 1993
Reuven Bar-Yehuda; Amos Israeli; Alon Itai
We investigate an asynchronous model of concurrent computation, where processors communicate by shared registers that allow atomic read and write operations (but do not support atomic test-and-set). For this model, we define a general notion of processor coordination, and study the possibility and complexity of achieving coordination. Our definition includes, as special cases, mutual exclusion and asynchronous agreement. It is shown that the coordination problem cannot be solved by means of a deterministic protocol even if the system consists of only two processors. This impossibility result holds for the most powerful type of shared atomic registers and does not assume symmetric protocols. The impossibility result is contrasted by a variety of efficient randomized protocols that achieve fast coordination for systems with an arbitrary number of processors n. These protocols are all fairly simple and constructive, and their expected run-time is polynomial in n, even in the presence of an adaptive adversary.
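The sketch below is a lock-step toy, not one of the paper's protocols: two identical processes each write a random bit to a single-writer register and read the other's, deciding a winner when the bits differ. It only illustrates why coin flips break the symmetry that defeats every deterministic protocol; handling an adversarial asynchronous scheduler, as the paper's protocols must, is deliberately left out, and the names used are illustrative.

import random

def two_process_coordination(seed=None):
    # Toy lock-step simulation of symmetry breaking with shared registers: each
    # of two identical processes writes a random bit to its own register and
    # reads the other's; different bits decide a winner, equal bits force a new
    # round (expected two rounds). Illustration only; asynchronous scheduling
    # and the paper's actual protocols are not modelled here.
    rng = random.Random(seed)
    registers = [None, None]          # one single-writer register per process
    rounds = 0
    while True:
        rounds += 1
        for p in (0, 1):              # write phase
            registers[p] = rng.randint(0, 1)
        bits = registers[:]           # read phase of the lock-step round
        if bits[0] != bits[1]:
            winner = 0 if bits[0] == 1 else 1
            return winner, rounds

if __name__ == "__main__":
    print(two_process_coordination(seed=1))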
principles of distributed computing | 1989
Reuven Bar-Yehuda; Amos Israeli
Two tasks of communication in a multihop synchronous radio network are considered: point-to-point communication and broadcast (sending a message to all nodes of a network). Efficient protocols for both problems are presented. Even though the protocols are probabilistic, it is shown how to acknowledge messages deterministically. Let n, D, and \Delta be the number of nodes, the diameter, and the maximum degree of our network, respectively. Both protocols require a setup phase in which a BFS tree is constructed. This phase takes O((n + D\log n)\log \Delta).
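A toy collision-model simulation of randomized broadcast, given below, conveys why random transmission schedules overcome collisions: each informed node transmits with probability 1/2 each round, and an uninformed node receives the message only when exactly one neighbor transmits. The reception rule, the constant 1/2, and all identifiers are assumptions made for illustration; this is not the paper's protocol or its analysis.

import random

def randomized_broadcast(adjacency, source, max_rounds=10_000, seed=0):
    # Toy simulation of broadcast in a synchronous radio network with
    # collisions: each informed node transmits with probability 1/2 per round,
    # and an uninformed node receives the message only if EXACTLY one of its
    # neighbours transmits (two or more simultaneous transmissions collide).
    rng = random.Random(seed)
    informed = {source}
    for rounds in range(1, max_rounds + 1):
        transmitting = {v for v in informed if rng.random() < 0.5}
        newly = set()
        for v in adjacency:
            if v in informed:
                continue
            senders = [u for u in adjacency[v] if u in transmitting]
            if len(senders) == 1:        # a unique sender: no collision at v
                newly.add(v)
        informed |= newly
        if len(informed) == len(adjacency):
            return rounds
    return None                          # round budget exhausted

if __name__ == "__main__":
    line = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
    print("rounds to inform everyone:", randomized_broadcast(line, source=0))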
Information Processing Letters | 1986
Amos Israeli; Yossi Shiloach
Distributed Computing | 1993
Amos Israeli; Ming Li
symposium on the theory of computing | 1984
Baruch Awerbuch; Amos Israeli; Yossi Shiloach