Mark R. Tuttle
Intel
Publication
Featured research published by Mark R. Tuttle.
Principles of Distributed Computing | 1987
Nancy A. Lynch; Mark R. Tuttle
This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
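The split between environment-controlled input actions and automaton-controlled output actions can be illustrated with a small sketch. The Python toy below (the name ChannelAutomaton and the FIFO-channel example are illustrative assumptions, not the thesis's formal definitions) models a reliable channel: its input action is always enabled, while its output action fires only when its precondition holds.

```python
# A minimal sketch of an I/O-automaton-like object: input actions are under the
# environment's control and always enabled; output actions are under the
# automaton's control and fire only when their precondition is true.

class ChannelAutomaton:
    """A reliable FIFO channel modeled in the I/O automaton style (illustrative)."""

    def __init__(self):
        self.queue = []            # internal state: messages in transit

    # input action send(m): always enabled, the automaton records its effect
    def send(self, m):
        self.queue.append(m)

    # output action receive(m): enabled only when a message is in transit
    def receive_enabled(self):
        return len(self.queue) > 0

    def receive(self):
        assert self.receive_enabled()
        return self.queue.pop(0)


# A fair execution keeps giving the output action a chance whenever it stays
# enabled; here we simply drain the channel.
ch = ChannelAutomaton()
ch.send("hello")
ch.send("world")
while ch.receive_enabled():
    print(ch.receive())
```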
Algorithmica | 1988
Yoram Moses; Mark R. Tuttle
This work applies the theory of knowledge in distributed systems to the design of efficient fault-tolerant protocols. We define a large class of problems requiring coordinated, simultaneous action in synchronous systems, and give a method of transforming specifications of such problems into protocols that are optimal in all runs: these protocols are guaranteed to perform the simultaneous actions as soon as any other protocol could possibly perform them, given the input to the system and faulty processor behavior. This transformation is performed in two steps. In the first step we extract, directly from the problem specification, a high-level protocol programmed using explicit tests for common knowledge. In the second step we carefully analyze when facts become common knowledge, thereby providing a method of efficiently implementing these protocols in many variants of the omissions failure model. In the generalized omissions model, however, our analysis shows that testing for common knowledge is NP-hard. Given the close correspondence between common knowledge and simultaneous actions, we are able to show that no optimal protocol for any such problem can be computationally efficient in this model. The analysis in this paper exposes many subtle differences between the failure models, including the precise point at which this gap in complexity occurs.
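The two-step structure can be sketched very roughly. In the toy below (the names and the simplistic knowledge test are illustrative assumptions, not the paper's construction), step 1 is a high-level rule "act when the enabling fact is common knowledge," and step 2 implements that test for a concrete model. The test used here is the simplest sound one for crash failures, namely that f+1 rounds of full-information exchange always suffice; the paper's optimal protocols instead fire as soon as the observed failure pattern allows.

```python
# Hedged sketch only: a conservative common-knowledge test (f+1 rounds), not
# the paper's optimal, failure-pattern-sensitive test.

def common_knowledge_of_inputs(round_number, f):
    # after f+1 rounds of full-information exchange, the initial inputs
    # are common knowledge in the crash-failure model
    return round_number >= f + 1

def simultaneous_attack(inputs, f):
    """All correct processes perform the coordinated action in the same round."""
    round_number = 0
    while not common_knowledge_of_inputs(round_number, f):
        round_number += 1      # one more round of full-information exchange
    # every process reaches this point in the same round and acts simultaneously
    return round_number, ("attack" if any(inputs.values()) else "abstain")

print(simultaneous_attack({"p1": True, "p2": False, "p3": True}, f=2))
```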
ACM Symposium on Parallel Algorithms and Architectures | 2008
Noga Alon; Chen Avin; Michal Koucky; Gady Kozma; Zvi Lotker; Mark R. Tuttle
We pose a new and intriguing question motivated by distributed computing regarding random walks on graphs: How long does it take for several independent random walks, starting from the same vertex, to cover an entire graph? We study the cover time - the expected time required to visit every node in a graph at least once - and we show that for a large collection of interesting graphs, running many random walks in parallel yields a speed-up in the cover time that is linear in the number of parallel walks. We demonstrate that an exponential speed-up is sometimes possible, but that some natural graphs allow only a logarithmic speed-up. A problem related to ours (in which the walks start from some probabilistic distribution on vertices) was previously studied in the context of space efficient algorithms for undirected s-t-connectivity and our results yield, in certain cases, an improvement upon some of the earlier bounds.
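A quick way to get a feel for the cover-time speed-up is a Monte Carlo simulation. The sketch below (the graph choice and parameters are illustrative, not from the paper) runs k independent walks from a common start vertex on a cycle and reports the average number of steps until every vertex has been visited.

```python
# Monte Carlo estimate of the cover time for k parallel random walks.
import random

def cover_time(adj, start, k):
    """Steps until k parallel walks from `start` have jointly visited every node."""
    walkers = [start] * k
    visited = {start}
    steps = 0
    while len(visited) < len(adj):
        steps += 1
        walkers = [random.choice(adj[v]) for v in walkers]
        visited.update(walkers)
    return steps

def cycle(n):
    """Adjacency lists of an n-vertex cycle."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

g = cycle(64)
for k in (1, 2, 4, 8):
    trials = [cover_time(g, 0, k) for _ in range(200)]
    print(k, sum(trials) / len(trials))
```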
Principles of Distributed Computing | 1998
Maurice Herlihy; Sergio Rajsbaum; Mark R. Tuttle
We take a significant step toward unifying the synchronous, semi-synchronous, and asynchronous message-passing models of distributed computation. The key idea is the concept of a pseudosphere, a new combinatorial structure in which each process from a set of processes is independently assigned a value from a set of values. Pseudospheres have a number of nice combinatorial properties, but their principal interest lies in the observation that the behavior of protocols in the three models can be characterized as simple unions of pseudospheres, where the exact structure of these unions is determined by the timing properties of the model. We use this pseudosphere construction to derive new and remarkably succinct proofs of bounds on consensus and k-set agreement in the asynchronous and synchronous models, as well as the first lower bound on wait-free k-set agreement in the semi-synchronous model.
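The basic combinatorial object is easy to illustrate: in a pseudosphere each process is independently assigned a value from its own value set, and every such joint assignment is a maximal simplex. The toy enumeration below (the representation is an illustrative assumption, not the paper's topological machinery) makes that concrete.

```python
# Enumerate the maximal simplices of a pseudosphere: one (process, value) pair
# per process, values chosen independently per process.
from itertools import product

def pseudosphere_facets(value_sets):
    """value_sets maps each process to the set of values it may be assigned."""
    procs = sorted(value_sets)
    for combo in product(*(value_sets[p] for p in procs)):
        yield frozenset(zip(procs, combo))

facets = list(pseudosphere_facets({"p": {0, 1}, "q": {0, 1}, "r": {0, 1}}))
print(len(facets))         # 2 * 2 * 2 = 8 maximal simplices
print(next(iter(facets)))  # one facet: a (process, value) pair for each process
```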
Formal Methods | 2003
Rajeev Joshi; Leslie Lamport; John Matthews; Serdar Tasiran; Mark R. Tuttle; Yuan Yu
We have a great deal of experience using the specification language TLA+ and its model checker TLC to analyze protocols designed at Digital and Compaq (both now part of HP). The tools and techniques we have developed apply equally well to software and hardware designs. In this paper, we describe our experience using TLA+ and TLC to verify cache-coherence protocols.
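For readers unfamiliar with explicit-state model checking, the hand-rolled toy below (it is not TLC, and the two-cache protocol and its transition rules are invented for illustration) shows the basic loop such a tool performs: breadth-first exploration of all reachable states while checking a coherence invariant in every state.

```python
# Toy explicit-state invariant checker over an invented two-cache protocol
# with states I (invalid), S (shared), E (exclusive).
from collections import deque

def next_states(state):
    """Tiny invented transition relation for two caches."""
    succ = set()
    for i in range(2):
        other = state[1 - i]
        moves = {"I"}                  # a cache may always invalidate itself
        if other != "E":
            moves.add("S")             # a shared copy is allowed unless the other is Exclusive
        if other == "I":
            moves.add("E")             # exclusive access requires holding the sole copy
        for m in moves:
            succ.add(tuple(m if j == i else state[j] for j in range(2)))
    return succ

def check(initial, invariant):
    """Breadth-first exploration of reachable states, checking the invariant."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                   # counterexample state
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

bad = check(("I", "I"), lambda s: s != ("E", "E"))
print("coherence invariant holds" if bad is None else f"violation: {bad}")
```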
Formal Methods in Computer-Aided Design | 2008
Murali Talupur; Mark R. Tuttle
A message flow is a sequence of messages sent among processors during the execution of a protocol, usually illustrated with something like a message sequence chart. Protocol designers use message flows to describe and reason about their protocols. We show how to derive high-quality invariants from message flows and use these invariants to accelerate a state-of-the-art method for parameterized protocol verification called the CMP method. The CMP method works by iteratively strengthening and abstracting a protocol. The labor-intensive portion of the method is finding the protocol invariants needed for each iteration. We provide a new analysis of the CMP method proving it works with any sound abstraction procedure. This facilitates the use of a new abstraction procedure tailored to our protocol invariants in the CMP method. Our experience is that message-flow derived invariants get to the heart of protocol correctness in the sense that only a couple of additional invariants are needed for the CMP method to converge.
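The kind of invariant meant here can be sketched very simply. Assuming a hypothetical request/grant/acknowledge flow (the names and encoding below are not the paper's formalism), each later message of a flow implies that the earlier messages of that flow have already occurred, and such precedence facts can be checked against a trace.

```python
# Derive simple precedence invariants from a message flow and check a trace.

REQ_FLOW = ["ReqS", "GntS", "Ack"]     # hypothetical request/grant/ack flow

def flow_invariants(flow):
    """Yield (later, earlier) pairs meaning: seen(later) implies seen(earlier)."""
    for i in range(1, len(flow)):
        for j in range(i):
            yield flow[i], flow[j]

def check_trace(trace, invariants):
    seen = set()
    for msg in trace:
        seen.add(msg)
        for later, earlier in invariants:
            if later in seen and earlier not in seen:
                return f"violated: {later} observed before {earlier}"
    return "trace consistent with flow"

inv = list(flow_invariants(REQ_FLOW))
print(check_trace(["ReqS", "GntS", "Ack"], inv))
print(check_trace(["GntS", "Ack"], inv))       # missing the initial request
```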
Formal Methods in Computer-Aided Design | 2007
Ariel Cohen; John W. O'Leary; Amir Pnueli; Mark R. Tuttle; Lenore D. Zuck
We show how to verify the correctness of transactional memory implementations with a model checker. We show how to specify transactional memory in terms of the admissible interchange of transaction operations, and give proof rules for showing that an implementation satisfies this specification. This notion of an admissible interchange is a key to our ability to use a model checker, and lets us capture the various notions of transaction conflict as characterized by Scott. We demonstrate our work using the TLC model checker to verify several well-known implementations described abstractly in the TLA+ specification language.
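The interchange idea is closely related to the classical conflict-serializability check: two adjacent operations of different transactions may be swapped when they do not conflict, so a history can be rearranged into a serial one exactly when its conflict graph is acyclic. The sketch below (the operation encoding is an illustrative assumption, not the paper's specification) checks that condition.

```python
# Conflict-serializability check: build the conflict graph over transactions
# and look for a cycle.

def conflicts(op1, op2):
    """Two ops conflict if they touch the same location and at least one writes."""
    (_, kind1, loc1), (_, kind2, loc2) = op1, op2
    return loc1 == loc2 and "w" in (kind1, kind2)

def serializable(history):
    # edge T1 -> T2 whenever an op of T1 precedes a conflicting op of T2
    edges = set()
    for i, a in enumerate(history):
        for b in history[i + 1:]:
            if a[0] != b[0] and conflicts(a, b):
                edges.add((a[0], b[0]))

    def reaches(src, dst, seen=frozenset()):
        return any(u == dst or (u not in seen and reaches(u, dst, seen | {u}))
                   for v, u in edges if v == src)

    txns = {t for t, _, _ in history}
    return not any(reaches(t, t) for t in txns)

# T1 and T2 both read then write x: this interleaving is not serializable
h = [("T1", "r", "x"), ("T2", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")]
print(serializable(h))    # False
```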
Symposium on the Theory of Computing | 1988
Joseph Y. Halpern; Yoram Moses; Mark R. Tuttle
While the intuition underlying a zero knowledge proof system [GMR85] is that no “knowledge” is leaked by the prover to the verifier, researchers are just beginning to analyze such proof systems in terms of formal notions of knowledge. In this paper, we show how interactive proof systems motivate a new notion of practical knowledge, and we capture the definition of an interactive proof system in terms of practical knowledge. Using this notion of knowledge, we formally capture and prove the intuition that the prover does not leak any knowledge of any fact (other than the fact being proven) during a zero knowledge proof. We extend this result to show that the prover does not leak any knowledge of how to compute any information (such as the factorization of a number) during a zero knowledge proof. Finally, we define the notion of a weak interactive proof in which the prover is limited to probabilistic, polynomial-time computations, and we prove analogous security results for such proof systems. We show that, in a precise sense, any nontrivial weak interactive proof must be a proof about the prover's knowledge, and show that, under natural conditions, the notions of interactive proofs of knowledge defined in [TW87] and [FFS87] are instances of weak interactive proofs.
Foundations of Computer Science | 1993
Soma Chaudhuri; Maurice Herlihy; Nancy A. Lynch; Mark R. Tuttle
We prove tight bounds on the time needed to solve k-set agreement, a natural generalization of consensus. We analyze this problem in a synchronous, message-passing model where processors fail by crashing. We prove a lower bound of ⌊f/k⌋+1 rounds of communication for solutions to k-set agreement that tolerate f failures. This bound is tight, and shows that there is an inherent tradeoff between the running time, the degree of coordination required, and the number of faults tolerated, even in idealized models like the synchronous model. The proof of this result is interesting because it is a geometric combination of other well-known proof techniques.
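The matching upper bound is achieved by a FloodMin-style protocol: processes repeatedly broadcast the smallest value they have seen for ⌊f/k⌋+1 synchronous rounds and then decide that minimum, which yields at most k distinct decisions despite up to f crash failures. The failure-free simulation below is an illustrative sketch only.

```python
# FloodMin-style k-set agreement, simulated without failures for illustration.

def k_set_agreement(inputs, f, k):
    rounds = f // k + 1                      # floor(f/k) + 1 synchronous rounds
    values = dict(inputs)                    # process -> smallest value seen so far
    for _ in range(rounds):
        # synchronous round: everyone broadcasts, everyone keeps the minimum received
        broadcast = list(values.values())
        values = {p: min(broadcast) for p in values}
    return values                            # each process decides its current value

print(k_set_agreement({"p1": 4, "p2": 2, "p3": 9, "p4": 7}, f=2, k=2))
```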
Formal Methods in Computer-Aided Design | 2009
John W. O'Leary; Murali Talupur; Mark R. Tuttle
We prove the parameterized correctness of one of the largest cache coherence protocols being used in modern multi-core processors today. Our approach is a generalization of a method we described last year that uses data type reduction and compositional reasoning to iteratively abstract and refine the protocol and uses invariants derived from protocol “flows” to make the abstraction-refinement loop converge. Our prior work demonstrated the value of sequencing information that appeared within the linear flows describing a protocol in design documents. This paper extends the notion of flows to capture intricate scenarios seen in real industrial protocols and demonstrates that there is also valuable information in the interaction among flows. We further show that judicious use of flows is required to make the method converge and identify which flows are most suitable.