Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gil Neiger is active.

Publications


Featured research published by Gil Neiger.


IEEE Computer | 2005

Intel virtualization technology

Rich Uhlig; Gil Neiger; Dion Rodgers; Amy L. Santoni; Fernando C. M. Martins; Andrew V. Anderson; Steven M. Bennett; Alain Kagi; Felix Leung; Larry Smith

A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigate traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.
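
To make the hardware-support point concrete: on a Linux host, the processor advertises these VMX extensions through a CPU feature flag, which software can check before attempting to load a VMM. A minimal sketch (not from the paper; it assumes Linux's /proc/cpuinfo interface):

```python
# Minimal sketch: check whether the CPU advertises Intel VT-x (the
# "vmx" feature flag) on a Linux host. This only tests processor
# support; firmware may still have the feature disabled.
def has_vmx(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "vmx" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("Intel VT-x supported:", has_vmx())
```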


Distributed Computing | 1995

Causal memory: definitions, implementation, and programming

Mustaque Ahamad; Gil Neiger; James E. Burns; Prince Kohli; Phillip W. Hutto

The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.
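
As an illustration of the message-passing implementation style the abstract describes, here is a minimal sketch using vector clocks (a standard construction; the paper's actual protocol may differ in its details). Writes are broadcast with a vector timestamp and applied remotely only after every causally preceding write has been applied; reads are purely local, which is where the extra concurrency comes from.

```python
# Sketch of a causal memory over message passing. Each process keeps a
# full replica plus a vector clock counting writes applied per process.
# A remote write is deliverable only when all writes that causally
# precede it have already been applied locally.
class CausalMemory:
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.vc = [0] * n          # writes applied, per process
        self.store = {}            # local replica: location -> value
        self.pending = []          # remote writes not yet deliverable

    def write(self, loc, val):
        self.vc[self.pid] += 1
        self.store[loc] = val
        return (self.pid, list(self.vc), loc, val)   # broadcast this message

    def read(self, loc):
        return self.store.get(loc)                   # reads are purely local

    def receive(self, msg):
        self.pending.append(msg)
        self._deliver_ready()

    def _ready(self, sender, vc):
        # Next write from this sender, and everything it had seen is applied.
        return vc[sender] == self.vc[sender] + 1 and all(
            vc[k] <= self.vc[k] for k in range(self.n) if k != sender)

    def _deliver_ready(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.pending):
                sender, vc, loc, val = msg
                if self._ready(sender, vc):
                    self.store[loc] = val
                    self.vc[sender] = vc[sender]
                    self.pending.remove(msg)
                    progress = True
```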


ACM Symposium on Parallel Algorithms and Architectures | 1993

The power of processor consistency

Mustaque Ahamad; Rida A. Bazzi; Ranjit John; Prince Kohli; Gil Neiger

Shared memories that provide weaker consistency guarantees than the traditional sequentially consistent or atomic memories have been claimed to provide the key to building scalable systems. One influential memory model, processor consistency, has been cited widely in the literature but, due to the lack of a precise and formal definition, contradictory claims have been made regarding its power. We use a formal model to give two distinct definitions of processor consistency: one corresponding to Goodman's original proposal and the other corresponding to that given by the implementors of the DASH system. These definitions are non-operational and can be easily related to other types of memories. To illustrate the power of processor consistency, we exhibit a non-cooperative solution to the mutual exclusion problem that is correct with processor consistency. In contrast, we show that Lamport's Bakery algorithm is not correct with processor consistency.
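
For reference, here is the Bakery algorithm in question, in its standard form (not taken from the paper), which is correct under sequential consistency. The paper's negative result says that executions allowed by processor consistency, where different processes may observe the writes to choosing and number in different orders, can violate mutual exclusion for this code.

```python
# Lamport's Bakery algorithm for N processes (standard presentation).
# Correct under sequential consistency; NOT correct if the shared
# variables are only processor consistent, which is the paper's point.
N = 2
choosing = [False] * N   # process i is picking a ticket
number = [0] * N         # 0 means "not competing"

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)              # take the next ticket
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:                   # wait until j has its ticket
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                             # lower (ticket, id) goes first

def unlock(i):
    number[i] = 0
```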


Journal of Algorithms | 1990

Automatically increasing the fault-tolerance of distributed algorithms

Gil Neiger; Sam Toueg

The design of fault-tolerant distributed systems is a costly and difficult task. Its cost and difficulty increase dramatically with the severity of failures that a system must tolerate. This task is simplified through methods that automatically translate protocols tolerant of "benign" failures into ones tolerant of more "severe" failures. This paper describes two new translation mechanisms for synchronous systems: one translates protocols tolerant of crash failures into protocols tolerant of general omission failures, and the other translates protocols tolerant of general omission failures into ones tolerant of arbitrary failures. Together these can be used to translate any protocol tolerant of the most benign failures into a protocol tolerant of the most severe. In addition, the paper shows lower bounds on the fault-tolerance of translations between certain systems. These lower bounds are matched by some of the translations given, which are thus optimal with respect to fault-tolerance.
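
A toy sketch of the translation idea for one direction (crash to general omission): a process that detects one of its own omissions halts itself, so the wrapped crash-tolerant protocol sees only crash-like behavior. This is the general flavor of such mechanisms, not the paper's exact construction, and the `expected` and `sends_ok` inputs stand in for detection machinery (e.g., message echoing) that a real translation must implement.

```python
# Toy sketch: wrap one synchronous round of a crash-tolerant protocol
# so that a process which detects one of its own omission failures
# halts. To the wrapped protocol, an omission-faulty process then
# looks exactly like a crashed one.
class WrappedProcess:
    def __init__(self, pid, round_fn):
        self.pid = pid
        self.round_fn = round_fn   # one round of the crash-tolerant protocol
        self.halted = False

    def step(self, inbox, expected, sends_ok):
        if self.halted:
            return None                          # crashed processes stay silent
        if (expected - set(inbox)) or not sends_ok:
            self.halted = True                   # turn an omission into a crash
            return None
        return self.round_fn(self.pid, inbox)    # run the original round

# Example: a process that misses an expected message halts itself.
p = WrappedProcess(1, round_fn=lambda pid, inbox: f"msg from {pid}")
print(p.step({0: "a", 2: "b"}, expected={0, 2}, sends_ok=True))  # normal round
print(p.step({2: "c"}, expected={0, 2}, sends_ok=True))          # halts -> None
```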


International Workshop on Distributed Algorithms | 1991

Detection of Global State Predicates

Keith Marzullo; Gil Neiger

This paper examines algorithms for detecting when a property Φ holds during the execution of a distributed system. The properties we consider are expressed over the state of the system and are not assumed to have characteristics, such as stability, that facilitate detection.
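
The modalities studied here are commonly written Possibly(Φ) and Definitely(Φ). Below is a brute-force sketch of the Possibly check (illustrative only; the paper develops far more refined algorithms): collect vector-timestamped local states and test Φ over every consistent combination, where a combination is consistent if no process has seen beyond another's recorded state.

```python
# Brute-force sketch of Possibly(Phi): does Phi hold in SOME consistent
# global state? local_states[i] is a list of (vector_clock, state)
# pairs recorded by process i. A combination is consistent iff, for
# every pair (i, j), process i's own clock entry is at least as large
# as what j has seen of i (no one has observed i's "future").
from itertools import product

def consistent(vcs):
    n = len(vcs)
    return all(vcs[i][i] >= vcs[j][i]
               for i in range(n) for j in range(n))

def possibly(phi, local_states):
    for combo in product(*local_states):
        vcs = [vc for vc, _ in combo]
        if consistent(vcs) and phi([s for _, s in combo]):
            return True
    return False

# Hypothetical data: two processes, predicate "x + y == 3".
states = [
    [([1, 0], {"x": 1}), ([2, 1], {"x": 2})],   # process 0
    [([0, 1], {"y": 1}), ([0, 2], {"y": 2})],   # process 1
]
print(possibly(lambda gs: gs[0]["x"] + gs[1]["y"] == 3, states))  # True
```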


Principles of Distributed Computing | 1998

Structured derivations of consensus algorithms for failure detectors

Jiong Yang; Gil Neiger; Eli Gafni

In a seminal paper, Chandra and Toueg showed how unreliable failure detectors could allow processors to achieve consensus in asynchronous message-passing systems. Since then, other researchers have developed consensus algorithms for other systems or based on different failure detectors. Each algorithm was developed and proven independently. This paper shows how a consensus algorithm for any of the standard models can be automatically converted to run in any other. These results show more clearly how the different system models and failure detectors can be related. In addition, they may permit the development of new results for new models through transformations.
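
The failure detectors involved are modules that each process queries for a set of currently suspected processes. As a rough illustration of the abstraction (a textbook timeout-based approximation of an eventually perfect detector, not a construction from this paper):

```python
import time

class EventuallyPerfectDetector:
    """Timeout-based sketch: suspect a peer after silence longer than
    the current timeout; the timeout grows whenever a suspicion turns
    out to be wrong, so in a partially synchronous system no correct
    process stays suspected forever."""
    def __init__(self, peers, timeout=1.0):
        self.timeout = timeout
        self.last_seen = {p: time.monotonic() for p in peers}
        self.suspected = set()

    def heartbeat(self, peer):
        # Called whenever any message arrives from `peer`.
        if peer in self.suspected:       # false suspicion: back off
            self.suspected.discard(peer)
            self.timeout *= 2
        self.last_seen[peer] = time.monotonic()

    def suspects(self):
        now = time.monotonic()
        for p, t in self.last_seen.items():
            if now - t > self.timeout:
                self.suspected.add(p)
        return set(self.suspected)
```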


Journal of the ACM | 2001

Simplifying fault-tolerance: providing the abstraction of crash failures

Rida A. Bazzi; Gil Neiger

The difficulty of designing fault-tolerant distributed algorithms increases with the severity of failures that an algorithm must tolerate, especially for systems with synchronous message passing. This paper considers methods that automatically translate algorithms tolerant of simple crash failures into ones tolerant of more severe failures. These translations simplify the design task by allowing algorithm designers to assume that processors fail only by stopping. Such translations can be quantified by two measures: fault-tolerance, which is a measure of how many processors must remain correct for the translation to be correct, and round-complexity, which is a measure of how the translation increases the running time of an algorithm. Understanding these translations and their limitations with respect to these measures can provide insight into the relative impact of different models of faulty behavior on the ability to provide fault-tolerant applications for systems with synchronous message passing. This paper considers translations from crash failures to each of the following types of more severe failures: omission to send messages; omission to send and receive messages; and totally arbitrary behavior. It shows that previously developed translations to send-omission failures are optimal with respect to both fault-tolerance and round-complexity. It exhibits a hierarchy of translations to general (send/receive) omission failures that improves upon the fault-tolerance of previously developed translations. These translations are optimal in that they cannot be improved with respect to one measure without negatively affecting the other; that is, the hierarchy of translations is matched by a corresponding hierarchy of impossibility results. The paper also gives a hierarchy of translations to arbitrary failures that improves upon the round-complexity of previously developed translations. These translations are near-optimal.


Principles of Distributed Computing | 1996

A new look at membership services (extended abstract)

Gil Neiger



Principles of Distributed Computing | 1995

Failure detectors and the wait-free hierarchy (extended abstract)

Gil Neiger

Fault-tolerant consensus cannot be achieved in asynchronous systems with shared read/write memory. Researchers have addressed thid imitation by considering shared objects more powerful than read/write memory and by augmenting such systems with failure detectors. The former approach has led to the development of the wait-free hiemr-chg, which characterizes concurrent data types by their ability to achieve consensus. The latter approach has led to the identification of the weakest failure detector that can be used to solve consensus with read/write memory. This paper combines these research paths by considering failure detectors that augment the synchronization power of different types in the wait-free hierarchy. It provides a hierarchy of failure detectors, one corresponding to each level in the wait-free hierarchy, and demonstrates that each is the weakest to increase the consensus power of its corresponding set of types.
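
To make the "consensus power" idea concrete: compare-and-swap sits at the top of the wait-free hierarchy because a single CAS object solves consensus for any number of processes. A minimal sketch follows (a standard illustration of Herlihy's hierarchy, which this paper augments with failure detectors; the lock below merely models the atomicity of a hardware CAS):

```python
import threading

class CASRegister:
    """A compare-and-swap object; the lock stands in for hardware atomicity."""
    def __init__(self):
        self._val = None
        self._lock = threading.Lock()

    def compare_and_swap(self, expect, new):
        with self._lock:
            if self._val == expect:
                self._val = new
                return True
            return False

    def read(self):
        return self._val

decision = CASRegister()

def propose(value):
    decision.compare_and_swap(None, value)   # only the first CAS succeeds
    return decision.read()                   # every process decides that value
```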


Distributed Computing | 1993

Common knowledge and consistent simultaneous coordination

Gil Neiger; Mark R. Tuttle

There is a very close relationship between common knowledge and simultaneity in synchronous distributed systems. The analysis of several well-known problems in terms of common knowledge has led to round-optimal protocols for these problems, including Reliable Broadcast, Distributed Consensus, and the Distributed Firing Squad problem. These problems require that the correct processors coordinate their actions in some way but place no restrictions on the behaviour of the faulty processors. In systems with benign processor failures, however, it is reasonable to require that the actions of a faulty processor be consistent with those of the correct processors, assuming it performs any action at all. We consider problems requiring consistent, simultaneous coordination. We then analyze these problems in terms of common knowledge in several failure models. The analysis of these stronger problems requires a stronger definition of common knowledge, and we study the relationship between these two definitions. In many cases, the two definitions are actually equivalent, and simple modifications of previous solutions yield round-optimal solutions to these problems. When the definitions differ, however, we show that such problems cannot be solved, even in failure-free executions.
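
For reference, the standard knowledge-theoretic definitions behind this analysis (the usual formulation; the paper works with a strengthened variant): with K_i φ meaning "processor i knows φ",

```latex
% Everyone-knows operator and common knowledge (standard definitions):
E\varphi = \bigwedge_{i} K_i\varphi
\qquad
C\varphi = \bigwedge_{k \ge 1} E^{k}\varphi
\qquad
C\varphi \Leftrightarrow E(\varphi \wedge C\varphi)
```

That is, φ is common knowledge when everyone knows it, everyone knows that everyone knows it, and so on; equivalently, Cφ is a fixpoint of "everyone knows φ and Cφ".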

Collaboration


Dive into Gil Neiger's collaborations.

Top Co-Authors

Rida A. Bazzi
Arizona State University

Mustaque Ahamad
Georgia Institute of Technology

Prince Kohli
Georgia Institute of Technology

James E. Burns
Georgia Institute of Technology

Phillip W. Hutto
Georgia Institute of Technology

Ranjit John
Georgia Institute of Technology

Sam Toueg
University of Toronto