
Publication


Featured research published by James E. Burns.


Distributed Computing | 1995

Causal memory: definitions, implementation, and programming

Mustaque Ahamad; Gil Neiger; James E. Burns; Prince Kohli; Phillip W. Hutto

The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.
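The paper's message-passing implementation is not reproduced here; the sketch below shows one standard way to realize causal memory, tagging each write with a vector timestamp and applying a remote write only after every write that causally precedes it has been applied locally. The class and method names (CausalMemory, write, on_receive, broadcast) are illustrative assumptions, not the paper's exact protocol.

    # Illustrative sketch, not the paper's exact protocol: causal memory
    # built on vector timestamps. A remote write is buffered until all
    # writes it causally depends on have been applied locally.

    class CausalMemory:
        def __init__(self, pid, n):
            self.pid = pid            # this process's id, in 0..n-1
            self.n = n
            self.vt = [0] * n         # vector clock of applied writes
            self.store = {}           # location -> value
            self.pending = []         # buffered remote writes

        def write(self, loc, val, broadcast):
            self.vt[self.pid] += 1
            self.store[loc] = val
            broadcast((self.pid, list(self.vt), loc, val))   # send to all peers

        def read(self, loc):
            return self.store.get(loc)

        def on_receive(self, msg):
            self.pending.append(msg)
            self._apply_ready()

        def _deliverable(self, sender, ts):
            # Next write from that sender, and everything the sender had
            # already seen from others is already applied here.
            return (ts[sender] == self.vt[sender] + 1 and
                    all(ts[k] <= self.vt[k] for k in range(self.n) if k != sender))

        def _apply_ready(self):
            changed = True
            while changed:
                changed = False
                for msg in list(self.pending):
                    sender, ts, loc, val = msg
                    if self._deliverable(sender, ts):
                        self.store[loc] = val
                        self.vt[sender] = ts[sender]
                        self.pending.remove(msg)
                        changed = True

Because only causally related writes are ordered, concurrent writes may be applied in different orders at different processes, which is exactly the extra freedom the abstraction permits.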


ACM Transactions on Programming Languages and Systems | 1989

Uniform self-stabilizing rings

James E. Burns; Jan K. Pachl

A self-stabilizing system has the property that, no matter how it is perturbed, it eventually returns to a legitimate configuration. Dijkstra originally introduced the self-stabilization problem and gave several solutions for a ring of processors in his 1974 Communications of the ACM paper. His solutions use a distinguished processor in the ring, which effectively acts as a controlling element to drive the system toward stability. Dijkstra has observed that a distinguished processor is essential if the number of processors in the ring is composite. We show, by presenting a protocol and proving its correctness, that there is a self-stabilizing system with no distinguished processor if the size of the ring is prime. The basic protocol uses Θ(n^2) states in each processor, where n is the size of the ring. We modify the basic protocol to obtain one that uses Θ(n^2 / ln n) states.
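For contrast with the uniform protocol described above, here is a compact sketch of Dijkstra's original K-state ring with a distinguished processor, the scheme the abstract refers to; the simulation harness, variable names, and parameters are illustrative assumptions, not taken from the paper.

    # Sketch of Dijkstra's K-state self-stabilizing token ring with a
    # distinguished processor 0 (K >= n). This is the non-uniform scheme the
    # paper removes the distinguished processor from, shown only for reference.
    import random

    n, K = 7, 7
    x = [random.randrange(K) for _ in range(n)]      # arbitrary perturbed start

    def privileged(i):
        # Processor 0 is distinguished; the others follow a uniform rule.
        return x[0] == x[n - 1] if i == 0 else x[i] != x[i - 1]

    def move(i):
        if i == 0:
            x[0] = (x[n - 1] + 1) % K
        else:
            x[i] = x[i - 1]

    moves = 0
    while sum(privileged(i) for i in range(n)) != 1:
        # At least one processor is always privileged; a central scheduler
        # lets one of them move.
        move(random.choice([i for i in range(n) if privileged(i)]))
        moves += 1
    print(f"converged to a single circulating privilege after {moves} moves")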


Information & Computation | 1993

Bounds on Shared Memory for Mutual Exclusion

James E. Burns; Nancy A. Lynch

The shared memory requirements of Dijkstra's mutual exclusion problem are examined. It is shown that n binary shared variables are necessary and sufficient to solve the problem of mutual exclusion with guaranteed global progress for n processes using only atomic reads and writes of shared variables for communication.
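For the sufficiency direction, the sketch below is in the spirit of the one-bit-per-process protocol associated with this line of work; the thread harness, iteration counts, and names are illustrative assumptions, and the protocol guarantees mutual exclusion with global progress but not fairness.

    # Illustrative sketch: n processes, one shared boolean flag[i] each,
    # communication only by reads and writes of those bits (the Python
    # thread harness is just for demonstration).
    import threading

    n = 3
    flag = [False] * n      # flag[i] is written only by process i
    in_cs = 0               # instrumentation only, not part of the protocol

    def enter(i):
        while True:
            flag[i] = False
            if any(flag[j] for j in range(i)):        # defer to lower-indexed contenders
                continue
            flag[i] = True
            if any(flag[j] for j in range(i)):        # re-check after raising the flag
                continue
            while any(flag[j] for j in range(i + 1, n)):
                pass                                  # wait out higher-indexed contenders
            return

    def exit_cs(i):
        flag[i] = False

    def worker(i):
        global in_cs
        for _ in range(50):
            enter(i)
            in_cs += 1
            assert in_cs == 1                         # at most one process in its CS
            in_cs -= 1
            exit_cs(i)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("no mutual exclusion violation observed")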


ACM Transactions on Programming Languages and Systems | 1989

Distributed FIFO allocation of identical resources using small shared space

Michael J. Fischer; Nancy A. Lynch; James E. Burns

We present a simple and efficient algorithm for the FIFO allocation of k identical resources among asynchronous processes that communicate via shared memory. The algorithm simulates a shared queue but uses exponentially fewer shared memory values, resulting in practical savings of time and space as well as program complexity. The algorithm is robust against process failure through unannounced stopping, making it attractive also for use in an environment of processes of widely differing speeds. In addition to its practical advantages, we show that for fixed k, the shared space complexity of the algorithm as a function of the number N of processes is optimal to within a constant factor.
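The compressed protocol itself is too involved to reproduce here; as a point of reference only, the sketch below shows the obvious ticket-based shared-queue allocator that the paper simulates with exponentially fewer shared-memory values. The Lock/Condition shortcut and all names are illustrative assumptions; the paper's model permits only reads and writes of shared variables.

    # Baseline sketch (NOT the paper's algorithm): FIFO allocation of k
    # identical resources via an unbounded ticket counter, i.e. the shared
    # queue that the paper's protocol simulates in much less shared space.
    import threading

    class FifoAllocator:
        def __init__(self, k):
            self.k = k
            self.next_ticket = 0                  # grows without bound
            self.released = 0                     # completed releases so far
            self.cond = threading.Condition()

        def acquire(self):
            with self.cond:
                my_ticket = self.next_ticket
                self.next_ticket += 1
                # FIFO: ticket t proceeds once fewer than k earlier requests
                # are still holding a resource.
                while my_ticket - self.released >= self.k:
                    self.cond.wait()

        def release(self):
            with self.cond:
                self.released += 1
                self.cond.notify_all()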


Foundations of Computer Science | 1979

Resource allocation with immunity to limited process failure

Michael J. Fischer; Nancy A. Lynch; James E. Burns

Upper and lower bounds are proved for the shared space requirements for solution of several problems involving resource allocation among asynchronous processes. Controlling the degradation of performance when a limited number of processes fail is of particular interest.


Principles of Distributed Computing | 1987

Constructing multi-reader atomic values from non-atomic values

James E. Burns; Gary L. Peterson

We present a simple, efficient protocol for constructing single-writer, multi-reader atomic shared values from single-writer, multi-reader non-atomic ("regular") shared values. This solves the last open problem in the Concurrent Reading While Writing hierarchy: it is now known how to construct a multi-writer, multi-reader atomic shared value given only single-reader, single-writer non-atomic ("safe") shared bits. The protocol given here is remarkably simple and efficient. The total shared space needed to communicate a shared value with value range V to n readers is just O(n + log |V|) multi-reader regular bits, which is provably optimal (with very small constant factors). Similarly, the amount of communication (reading and writing of copies of the simulated shared values) required by readers and writers is also optimal. The simplicity of the protocol results in a short and easily understood proof of correctness. Great care has been taken to describe the protocol completely and avoid ambiguities. We also describe several variations of the protocol that optimize other goals.


Principles of Distributed Computing | 1989

The ambiguity of choosing

James E. Burns; Gary L. Peterson

In the difficult area of distributed algorithms, it is useful to find tools that have wide application. Here we consider a model in which communication is through asynchronous, atomic reads and writes of shared memory. One of our main results is a lemma showing that a single fail-stop failure can lead to ambiguity whenever a choice must be made. This lemma can be used to prove a variety of impossibility results and lower bounds. We use the lemma to give a simple proof of the optimality of our solution to the ℓ-assignment problem. The ℓ-assignment problem requires that a group of processors compete for a pool of distinct resources, with the restriction that a limited number of processors can halt unexpectedly. (The problem differs from the normal ℓ-exclusion problem in that an explicit assignment of resources must be made.) Use of the lemma is also demonstrated by giving a simple proof of the lower bound for the pure-buffers version of the concurrent reading while writing problem. Some other problems to which the lemma applies are also mentioned.


Foundations of Computer Science | 1987

Concurrent reading while writing II: The multi-writer case

Gary L. Peterson; James E. Burns

An algorithm is given for the multi-writer version of the Concurrent Reading While Writing (CRWW) problem. The algorithm solves the problem of allowing simultaneous access to arbitrarily sized shared data without requiring waiting, and hence avoids mutual exclusion. This demonstrates that a quite complicated concurrency-control problem can be solved without sacrificing the efficiency of parallelism. One very important aspect of the algorithm is the set of tools developed to prove its correctness; without these tools, proving the correctness of a solution to a problem of this complexity would be very difficult.


Distributed Computing | 1993

Stabilization and pseudo-stabilization

James E. Burns; Mohamed G. Gouda; Raymond E. Miller

A stabilizing system is one which, if started at any state, is guaranteed to reach a state after which the system cannot deviate from its intended specification. In this paper, we propose a new variation of this notion, called pseudo-stabilization. A pseudo-stabilizing system is one which, if started at any state, is guaranteed to reach a state after which the system does not deviate from its intended specification. Thus, the difference between the two notions comes down to the difference between "cannot" and "does not", a difference that hardly matters in many practical situations. As it happens, a number of well-known systems, for example the alternating-bit protocol, are pseudo-stabilizing but not stabilizing. We conclude that one should not try to make any such system stabilizing, especially if stabilization comes at a high price.
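Since the alternating-bit protocol is the abstract's flagship example of a pseudo-stabilizing system, a minimal simulation of it is sketched below; the lossy-channel model, loss probability, and function names are illustrative assumptions.

    # Minimal sketch of the alternating-bit protocol over a lossy channel.
    # Each frame and acknowledgement carries a single alternating bit.
    import random

    def abp_transfer(data, loss_prob=0.3, seed=0):
        rng = random.Random(seed)
        send_bit, recv_expect = 0, 0
        delivered = []
        i = 0
        while i < len(data):
            # Sender retransmits (data[i], send_bit) until acknowledged.
            if rng.random() < loss_prob:
                continue                          # frame lost in transit
            item, bit = data[i], send_bit
            if bit == recv_expect:                # receiver accepts a new frame
                delivered.append(item)
                recv_expect ^= 1
            ack = bit                             # receiver acks the bit it saw
            if rng.random() < loss_prob:
                continue                          # acknowledgement lost
            if ack == send_bit:                   # sender advances to the next item
                send_bit ^= 1
                i += 1
        return delivered

    assert abp_transfer([10, 20, 30, 40]) == [10, 20, 30, 40]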


Foundations of Computer Science | 1981

Symmetry in systems of asynchronous processes

James E. Burns

A new solution to the problem of deadlock-free mutual exclusion of N processes is given that uses less shared space than earlier solutions: one variable that may take on N values and N binary variables. The solution uses only indivisible reads and writes of shared variables for communication and is symmetric among the processes. Two definitions of symmetry are developed. The strong definition of symmetry requires that all processes be identically programmed and be started in identical states; however, this definition does not allow any solution to the problem of deadlock-free mutual exclusion using only reads and writes. The weaker definition admits the solution given. It is also shown that under weak symmetry, N shared variables, at least one of which must be able to take on N values, are necessary.

Collaboration


Dive into James E. Burns's collaborations.

Top Co-Authors

Nancy A. Lynch
Massachusetts Institute of Technology

Chungki Lee
Georgia Institute of Technology

Mostafa H. Ammar
Georgia Institute of Technology

Michael J. Fischer
Massachusetts Institute of Technology

Paul R. Jackson
Georgia Institute of Technology

Mohamed G. Gouda
University of Texas at Austin

Mustaque Ahamad
Georgia Institute of Technology