Publication


Featured research published by Kenneth J. Perry.


Distributed Computing | 1993

Self-stabilizing extensions for message-passing systems

Shmuel Katz; Kenneth J. Perry

Summary: A self-stabilizing program eventually resumes normal behavior even if execution begins in an abnormal initial state. In this paper, we explore the possibility of extending an arbitrary program into a self-stabilizing one. Our contributions are: (1) a formal definition of the concept of one program being a self-stabilizing extension of another; (2) a characterization of what properties may hold in such extensions; (3) a demonstration of the possibility of mechanically creating such extensions. The computational model used is that of an asynchronous distributed message-passing system whose communication topology is an arbitrary graph. We contrast the difficulties of self-stabilization in this model with those of the more common shared-memory models.
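
As a hedged illustration of the self-stabilization property itself (not of the paper's extension construction, and in a shared-memory rather than a message-passing setting), the sketch below runs Dijkstra's classic K-state token ring from an arbitrary initial state:

```python
# Dijkstra's K-state token ring: from ANY initial assignment of counters the
# system converges to having exactly one circulating "token" (privilege).
# Illustrative only; the paper studies message-passing extensions instead.
import random

def holds_token(x, i):
    # Process 0 holds the token iff its counter equals the last process's;
    # process i > 0 holds it iff its counter differs from its left neighbour's.
    return x[i] == x[-1] if i == 0 else x[i] != x[i - 1]

def move(x, i, K):
    if i == 0:
        x[0] = (x[0] + 1) % K          # process 0 advances its counter mod K
    else:
        x[i] = x[i - 1]                # other processes copy their neighbour

def run(n=5, K=7, steps=200):
    x = [random.randrange(K) for _ in range(n)]    # arbitrary, possibly corrupt, start
    for _ in range(steps):
        privileged = [i for i in range(n) if holds_token(x, i)]
        move(x, random.choice(privileged), K)      # scheduler picks any privileged process
    return sum(holds_token(x, i) for i in range(n))

print(run())   # prints 1 once the ring has stabilized (K > n is required)
```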


IEEE Transactions on Software Engineering | 1986

Distributed agreement in the presence of processor and communication faults

Kenneth J. Perry; Sam Toueg

A model of distributed computation is proposed in which processes may fail by not sending or receiving the messages specified by a protocol. The solution to the Byzantine generals problem for this model is presented. The algorithm exhibits early stopping under conditions of less than maximum failure and is as efficient as the algorithm developed for the more restrictive crash-fault model in terms of time, message, and bit complexity. The authors show that extant models underestimate resiliency when faults in the communication medium are considered; the model outlined here is more accurate in this regard.
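
A hedged toy sketch of the failure model only (the agreement protocol itself is not reproduced): the classes and fault labels below are illustrative names of mine, showing how send- and receive-omission faults differ from a crash.

```python
# Toy message layer: faulty processes may silently fail to send or to receive
# messages prescribed by the protocol, in addition to the usual crash faults.
import random

class Process:
    def __init__(self, pid, fault=None):    # fault in {None, "crash", "send-omit", "recv-omit"}
        self.pid, self.fault, self.crashed = pid, fault, fault == "crash"
        self.inbox = []

    def send(self, net, dst, msg):
        if self.crashed or (self.fault == "send-omit" and random.random() < 0.5):
            return                           # message silently never sent
        net.deliver(self.pid, dst, msg)

    def receive(self, src, msg):
        if self.crashed or (self.fault == "recv-omit" and random.random() < 0.5):
            return                           # message silently dropped on arrival
        self.inbox.append((src, msg))

class Network:
    def __init__(self, procs):
        self.procs = {p.pid: p for p in procs}
    def deliver(self, src, dst, msg):
        self.procs[dst].receive(src, msg)

procs = [Process(0), Process(1, "send-omit"), Process(2, "recv-omit")]
net = Network(procs)
for p in procs:
    for q in procs:
        p.send(net, q.pid, ("round-1 value", p.pid))
print([len(p.inbox) for p in procs])         # omission-faulty processes lose messages
```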


foundations of computer science | 1989

Towards optimal distributed consensus

P. Berman; Juan A. Garay; Kenneth J. Perry

In a distributed consensus protocol all processors (of which t may be faulty) are given (binary) initial values; after exchanging messages all correct processors must agree on one of them. The quality of a protocol is measured here using as parameters the total number of processors n, the number of rounds of message exchange r, and the maximal message length m, with optima, respectively, of 3t+1, t+1, and 1. Although no known protocol is optimal in all three of these respects simultaneously, protocols that take further steps in this direction are presented. The first protocol has n>4t, r=t+1, and polynomial message size. The second protocol has n>3t, r=3t+3, and m=2, and it is asymptotically optimal in all three quality parameters while using the optimal number of processors. Using these protocols as building blocks, families of protocols with intermediate quality parameters, offering better tradeoffs than previous results, are obtained. All the protocols work in polynomial time and have succinct descriptions.
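
A rough sketch of a phase-king-style round structure, included only to convey the shape of such protocols; the simplifications below (n > 4t, binary values, and no faulty behavior actually simulated) are assumptions of mine and do not reproduce either protocol of the paper.

```python
def phase_king_style(values, t):
    """Simplified consensus loop: t+1 phases, each a universal exchange plus a
    broadcast by that phase's designated "king". All processes are correct here,
    so every process computes from the same snapshot."""
    n = len(values)
    assert n > 4 * t, "this simplified variant assumes n > 4t"
    v = list(values)                       # v[i] is process i's current preference
    for phase in range(t + 1):
        snapshot = list(v)                 # round 1: everyone broadcasts its value
        ones = sum(snapshot)
        majority = 1 if ones > n - ones else 0
        strength = max(ones, n - ones)
        king_value = snapshot[phase]       # round 2: the phase king broadcasts
        for i in range(n):
            # Keep the majority value only if a strong majority held it;
            # otherwise defer to this phase's king. With real faults each
            # process would compute its own counts from received messages.
            v[i] = majority if strength > n // 2 + t else king_value
    return v

print(phase_king_style([0, 1, 1, 1, 0, 1, 1, 1, 1], t=2))   # all nine agree on 1
```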


international workshop on distributed algorithms | 1992

A Continuum of Failure Models for Distributed Computing

Juan A. Garay; Kenneth J. Perry

A range of models of distributed computing is presented in which processors may fail either by crashing or by exhibiting arbitrary (Byzantine) behavior. In these models, the total number of faulty processors is bounded from above by a constant t subject to the proviso that no more than b <= t of these processors are Byzantine. At the two extremes of the range (i.e., b=0 or b=t) we get models that are equivalent to the traditional models of either pure crash failures or pure Byzantine failures. For 0<b<t, the models that we introduce accommodate “real-world” experience that shows that the overwhelming majority of failures are crashes but occasionally some number of less-restrictive failures occur. We examine the Reliable Broadcast and Consensus problems within this new family of models and prove lower bounds on the relationship required between the number of processors, t, and b. We also present protocols to solve these problems, which match the lower bounds. In presenting the protocols, we emphasize new algorithmic techniques that are fruitful to use in the new models but which have limited value in either of the pure models.
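
A minimal sketch of the parameterised failure budget described above; the class and method names are illustrative assumptions of mine, not from the paper.

```python
# b = 0 recovers the pure crash-failure model, b = t the pure Byzantine model,
# and 0 < b < t gives the intermediate "mostly crashes" models of the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureBudget:
    t: int   # bound on the total number of faulty processors
    b: int   # bound on how many of them may be Byzantine (b <= t)

    def __post_init__(self):
        if not 0 <= self.b <= self.t:
            raise ValueError("need 0 <= b <= t")

    def allows(self, crashes: int, byzantine: int) -> bool:
        """Does a given failure pattern stay within the budget?"""
        return byzantine <= self.b and crashes + byzantine <= self.t

hybrid = FailureBudget(t=5, b=1)                   # mostly crashes, one Byzantine fault
print(hybrid.allows(crashes=4, byzantine=1))       # True
print(hybrid.allows(crashes=3, byzantine=2))       # False: exceeds the Byzantine bound
```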


Computer Science | 1992

Bit optimal distributed consensus

Piotr Berman; Juan A. Garay; Kenneth J. Perry

The Distributed Consensus problem involves n processors each of which holds an initial binary value. At most t processors may be faulty and ignore any protocol (even behaving maliciously), yet it is required that non-faulty processors eventually agree on a common value that was initially held by one of them. The quality of a consensus protocol is measured using the following parameters: the number of processors n, the number of rounds of message exchange r and the total number of bits transmitted B. The known lower bounds are respectively 3t + 1, t + 1 and Ω(nt).
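
As a concrete instantiation of these bounds (my arithmetic, not the paper's): tolerating t = 3 faults requires at least 3t + 1 = 10 processors and t + 1 = 4 rounds in the worst case, and the total number of transmitted bits must grow at least in proportion to n·t.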


international workshop on distributed algorithms | 1992

Optimal early stopping in distributed consensus

Piotr Berman; Juan A. Garay; Kenneth J. Perry

The Distributed Consensus problem involves n processors each of which holds an initial binary value. At most t processors may be faulty and ignore any protocol (even behaving maliciously), yet it is required that the non-faulty processors eventually agree on a value that was initially held by one of them. This paper presents consensus protocols that tolerate arbitrary faults, are early-stopping (i.e., run for a number of rounds proportional to the number of faults f that actually occur during their execution), and are optimal in various measures.
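
A hedged sketch of the early-stopping idea for the simpler crash-fault case (the paper's protocols tolerate arbitrary faults; the rule and names below are illustrative assumptions of mine):

```python
def can_stop(senders_by_round, r):
    """A process may halt after a "clean" round: one in which it heard from
    exactly the same set of processes as in the previous round, so no new
    failure became visible. The number of rounds used thus tracks the actual
    number of failures f rather than the worst-case bound t."""
    return r >= 1 and senders_by_round[r] == senders_by_round[r - 1]

# Example: heard from {1,2,3,4}, then {1,2,4} (process 3 crashed), then {1,2,4} again.
rounds = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2, 4}]
print([can_stop(rounds, r) for r in range(len(rounds))])   # [False, False, True]
```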


IEEE Transactions on Software Engineering | 1985

Randomized Byzantine Agreement

Kenneth J. Perry

A randomized model of distributed computation was recently presented by Rabin [8]. This model admits a solution to the Byzantine Agreement Problem for systems of n asynchronous processes where no more than t are faulty. The algorithm described by Rabin produces agreement in an expected number of rounds which is a small constant independent of n and t. Using the same model, we present an algorithm of similar complexity which is able to tolerate a greater portion of malicious processes. The algorithm is also applicable, with minor changes, to systems of synchronous processes.
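
A hedged sketch of the general flavor of coin-based randomized agreement; the thresholds and the stand-in shared coin below are assumptions of mine and do not reproduce the paper's algorithm:

```python
import random

def randomized_round(values, t, coin):
    """One synchronous round: adopt a value only if it has overwhelming support,
    otherwise fall back to this round's shared random coin."""
    n = len(values)
    ones = sum(values)                          # votes for 1 (all processes correct here)
    new_values = []
    for _ in range(n):
        if ones >= n - t:
            new_values.append(1)
        elif n - ones >= n - t:
            new_values.append(0)
        else:
            new_values.append(coin)             # no clear winner: use the common coin
    return new_values

values = [0, 1, 0, 1, 1, 0, 1]
for r in range(10):
    coin = random.randint(0, 1)                 # stand-in for a proper common coin
    values = randomized_round(values, t=2, coin=coin)
    if len(set(values)) == 1:                   # all processes now hold the same bit
        print(f"agreement on {values[0]} after {r + 1} round(s)")
        break
```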


logic in computer science | 1988

Efficient parallel algorithms for anti-unification and relative complement

Gabriel M. Kuper; Ken McAloon; Krishna V. Palem; Kenneth J. Perry

Parallel algorithms and computational complexity results are given for two problems: computing the relative complement of terms and anti-unification. The concepts of anti-unification and relative complement are useful for theorem proving, logic programming, and machine learning. The relative complement problem is shown to be NP-complete.
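
A hedged, sequential sketch of first-order anti-unification (Plotkin's least general generalization); the paper's contribution is a parallel algorithm, which is not reproduced here, and the term encoding below is an assumption of mine:

```python
def anti_unify(s, t, subst=None, counter=None):
    """Return the most specific term of which both s and t are instances.
    Terms are atoms (strings) or tuples ("f", arg1, ..., argk); generated
    variables are fresh strings "X0", "X1", ..."""
    if subst is None:
        subst, counter = {}, [0]
    # Same function symbol and arity: descend into the arguments.
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(anti_unify(a, b, subst, counter) for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    # Disagreement: map the pair (s, t) to a variable, reusing it for repeats,
    # so f(a, g(a)) and f(b, g(b)) generalize to f(X0, g(X0)).
    if (s, t) not in subst:
        subst[(s, t)] = f"X{counter[0]}"
        counter[0] += 1
    return subst[(s, t)]

print(anti_unify(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))   # ('f', 'X0', ('g', 'X0'))
```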


Proceedings of the 1983 ACM SIGSMALL symposium on Personal and small computers | 1983

RM: A resource-sharing system for personal computers

Rita C. Summers; Christopher Wood; John Marberg; Mostafa Ebrahimi; Kenneth J. Perry; Uri Zernik

With the recent advances in personal computer technology, time-sharing of a processor is no longer a necessity; each user can have his own machine. It is valuable, however, to share resources among the individual machines. This paper discusses a system structure for interactive computing in which personal computers are connected by a local-area network for the purpose of resource sharing, and describes an experimental prototype that is being implemented using the IBM Personal Computer.


Journal of Automated Reasoning | 1992

A note on the parallel complexity of anti-unification

Gabriel M. Kuper; Ken McAloon; Krishna V. Palem; Kenneth J. Perry

The anti-unifier is the dual notion to the unifier, i.e., it is the most specific term that has the input terms as instances. We show that the problem of anti-unification is in NC, in contrast to unification, which is known to be P-complete.
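
An illustrative example of the duality (not taken from the paper): unifying f(X, b) and f(a, Y) yields the most general common instance f(a, b), whereas anti-unifying f(a, g(a)) and f(b, g(b)) yields the most specific common generalization f(Z, g(Z)), in which the repeated disagreement pair (a, b) is mapped to the same variable Z.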

