
Publication


Featured research published by Keith Marzullo.


Workshop on Parallel & Distributed Debugging | 1991

Consistent detection of global predicates

Robert Cooper; Keith Marzullo

A fundamental problem in debugging and monitoring is detecting whether the state of a system satisfies some predicate. If the system is distributed, then the resulting uncertainty in the state of the system makes such detection, in general, ill-defined. This paper presents three algorithms for detecting global predicates in a well-defined way. These algorithms do so by interpreting predicates with respect to the communication that has occurred in the system.
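The "interpreting predicates with respect to communication" idea can be sketched with vector clocks: a set of local states forms a consistent global state only if no state reflects a message the other has not yet sent. The two-process brute-force detector below is an illustrative sketch of the "possibly Φ" style of detection, not the paper's exact algorithms; all names are hypothetical.

```python
from itertools import product

def consistent(vc_a, vc_b, a, b):
    # Local states at processes a and b (with vector clocks vc_a, vc_b)
    # form a consistent cut iff each process's own clock entry is at
    # least as large as the other state's view of it.
    return vc_a[a] >= vc_b[a] and vc_b[b] >= vc_a[b]

def possibly(histories, pred):
    # histories[i]: list of (vector_clock, local_state) for process i.
    # Returns True iff some consistent global state satisfies pred.
    h0, h1 = histories
    for (vc0, s0), (vc1, s1) in product(h0, h1):
        if consistent(vc0, vc1, 0, 1) and pred(s0, s1):
            return True
    return False
```

For example, if process 1's second state causally follows process 0's second event, pairing process 0's *first* state with process 1's *second* is rejected as inconsistent, so a predicate is never reported on a cut no real execution could pass through.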


ACM Transactions on Computer Systems | 1990

Tolerating failures of continuous-valued sensors

Keith Marzullo

One aspect of fault-tolerance in process control programs is the ability to tolerate sensor failure. This paper presents a methodology for transforming a process control program that cannot tolerate sensor failures into one that can. Issues addressed include modifying specifications in order to accommodate uncertainty in sensor values and averaging sensor values in a fault-tolerant manner. In addition, a hierarchy of sensor failure models is identified, and both the attainable accuracy and the run-time complexity of sensor averaging with respect to this hierarchy is discussed.
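The fault-tolerant averaging idea from this paper is often summarized as follows: each sensor reports an interval guaranteed (if correct) to contain the true value, and the fused estimate is the smallest interval containing every point covered by at least n − f of the n reported intervals. The sweep below is a minimal sketch of that interval-intersection idea.

```python
def fuse_intervals(intervals, f):
    # intervals: list of (lo, hi) sensor readings; f: assumed maximum
    # number of faulty sensors. Returns the smallest interval that
    # contains every point lying in at least n - f input intervals.
    n = len(intervals)
    need = n - f
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # interval opens
        events.append((hi, -1))   # interval closes
    # Sort by coordinate; opens before closes so touching endpoints overlap.
    events.sort(key=lambda e: (e[0], -e[1]))
    best_lo = best_hi = None
    count = 0
    for x, delta in events:
        count += delta
        if delta == +1 and count >= need and best_lo is None:
            best_lo = x           # first point covered by `need` intervals
        if delta == -1 and count == need - 1:
            best_hi = x           # last point that was covered by `need`
    return (best_lo, best_hi)
```

With four sensors reporting (8, 12), (11, 13), (10, 12), (14, 15) and f = 1, the fused interval is (11, 12): the faulty fourth sensor is outvoted by the three that overlap there.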


IEEE Computer | 1991

Tools for distributed application management

Keith Marzullo; Robert Cooper; Mark D. Wood; Kenneth P. Birman

The issues of managing distributed applications are discussed, and a set of tools, the Meta system, that solves some longstanding problems is presented. The Meta model of a distributed application is described. To make the discussion concrete, it is shown how NuMon, a seismological analysis system for monitoring compliance with nuclear test-ban treaties, is managed within the Meta framework. The three steps entailed in using Meta are described. First, the programmer instruments the application and its environment with sensors and actuators. The programmer then describes the application structure using the object-oriented data modeling facilities of the authors' high-level control language, Lomita. Finally, the programmer writes a control program referencing the data model. Meta's performance and real-time behavior are examined.


Workshop on Management of Replicated Data | 1990

Deceit: a flexible distributed file system

Alexander Siegel; Kenneth P. Birman; Keith Marzullo

Deceit, a distributed file system that provides flexibility in the fault-tolerance and availability of files, is described. Deceit provides many capabilities to the user: file replication with concurrent reads and writes, a range of update propagation strategies, automatic disk load balancing, and the ability to have multiple versions of a file. Deceit provides Sun Network File System (NFS) protocol compatibility; no change in NFS client software is necessary in order to use Deceit. The purpose of Deceit is to replace large collections of NFS servers. NFS suffers from several problems in an environment where most clients mount most servers. First, if any one server crashes, clients will block or fail when they try to access that server, and, as the number of servers increases, this problem becomes more likely. Second, servers have a (roughly) fixed capacity, yet it is difficult to move files from one NFS server to another without disrupting clients. Third, replicating a file to increase its availability must be managed by the user. Deceit addresses these three problems.


International Workshop on Distributed Algorithms | 1991

Detection of Global State Predicates

Keith Marzullo; Gil Neiger

This paper examines algorithms for detecting when a property Φ holds during the execution of a distributed system. The properties we consider are expressed over the state of the system and are not assumed to have properties that facilitate detection, such as stability.


Archive | 1992

Primary-backup protocols: Lower bounds and optimal implementations

Navin Budhiraja; Keith Marzullo; Fred B. Schneider; Sam Toueg

We present a precise specification of the primary-backup approach. Then, for a variety of different failure models we prove lower bounds on the degree of replication, failover time, and worst-case blocking time for client requests. Finally, we outline primary-backup protocols and indicate which of our lower bounds are tight.
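The core invariant behind the primary-backup approach can be sketched minimally: the primary propagates each update to the backup before acknowledging the client, so an acknowledged write survives a primary failure. This toy sketch is illustrative only; it is not the paper's protocol, and the class and method names are hypothetical.

```python
class Backup:
    """Holds a replica of the service state."""
    def __init__(self):
        self.state = {}

    def apply(self, key, value):
        self.state[key] = value


class Primary:
    """Minimal primary-backup sketch: replicate, then acknowledge."""
    def __init__(self, backup):
        self.state = {}
        self.backup = backup

    def write(self, key, value):
        self.backup.apply(key, value)  # blocking replication step
        self.state[key] = value
        return "ack"                   # client sees ack only after the
                                       # backup holds the update

    def read(self, key):
        return self.state[key]
```

The blocking replication step is exactly the quantity the paper's lower bounds constrain: how long a client request may be delayed, and how quickly a backup can take over, depend on when (and whether) this step can be deferred.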


International Conference on Distributed Computing Systems | 1988

Supplying high availability with a standard network file system

Keith Marzullo; Frank B. Schmuck

The design of a network file service that is tolerant to fail-stop failures and that can be run on top of a standard network file service is described. The fault-tolerance is completely transparent, so the resulting file system supports the same set of heterogeneous workstations and applications as the chosen standard. To demonstrate that the design can provide the benefit of highly available files at a reasonable cost to the user, a prototype has been implemented using the Sun NFS protocol. The approach is not limited to being used with NFS and should apply to any network file service built along the client-server model.


Real-Time: Theory in Practice (REX Workshop) | 1991

Putting Time into Proof Outlines

Fred B. Schneider; Bard Bloom; Keith Marzullo

A logic for reasoning about timing properties of concurrent programs is presented. The logic is based on proof outlines and can handle maximal parallelism as well as resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action.
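The flavor of a timed proof outline can be suggested schematically: annotate a Hoare triple with a clock variable and bounds on the statement's execution time. The notation below is illustrative only, not the paper's actual logic; $T$ is an assumed global clock and $\delta_{\min}, \delta_{\max}$ assumed bounds on the running time of $S$.

```latex
\{\, P \;\wedge\; T = t_0 \,\}\; S \;\{\, Q \;\wedge\;
    t_0 + \delta_{\min} \;\le\; T \;\le\; t_0 + \delta_{\max} \,\}
```

Reasoning of this shape lets a proof exploit timing, e.g. concluding that a competing process must already have passed a program point because not enough time has elapsed for it to be anywhere else.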


Workshop on Management of Replicated Data | 1992

Highly-available services using the primary-backup approach

Navin Budhiraja; Keith Marzullo

The authors derive lower bounds and the corresponding optimal protocols for three parameters for synchronous primary-backup systems. They compare their results with similar results for active replication in order to determine whether the common folklore on the virtues of the two approaches can be shown formally. They also extend some of their results to asynchronous primary-backup systems. They implement an important subclass of primary-backup protocols that they call 0-blocking. These protocols are interesting because they introduce no additional protocol-related delay into a failure-free service request. Through implementing these protocols the authors hope to determine the appropriateness of their theoretical system model and uncover other practical advantages or limitations of the primary-backup approach.


Symposium on Reliable Distributed Systems | 1991

Masking failures of multidimensional sensors

Paul Chew; Keith Marzullo

A methodology for transforming a process control program that cannot tolerate sensor failure into one that can is presented. In this methodology, a reliable abstract sensor is created by combining information from several real sensors that measure the same physical value. To be useful, an abstract sensor must deliver reasonably accurate information at reasonable computational cost. The authors consider sensors that deliver multidimensional values (e.g. location or velocity in three dimensions). Geometric techniques are used to derive upper bounds on abstract sensor accuracy and to develop efficient algorithms for implementing abstract sensors.
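One simplified way to picture a multidimensional abstract sensor: if each real sensor reports an axis-aligned box guaranteed to contain the true point, fuse each coordinate independently with the interval-intersection rule (keep every point covered by at least n − f boxes). This per-axis sketch is a simplification; the paper develops general geometric techniques, not just coordinate-wise fusion.

```python
def fuse_boxes(boxes, f):
    # boxes: list of (lo_corner, hi_corner) tuples, each corner a tuple
    # of coordinates; f: assumed maximum number of faulty sensors.
    # Returns one fused (lo, hi) interval per dimension.
    n = len(boxes)
    dims = len(boxes[0][0])
    need = n - f
    fused = []
    for d in range(dims):
        events = []
        for lo, hi in boxes:
            events.append((lo[d], +1))
            events.append((hi[d], -1))
        events.sort(key=lambda e: (e[0], -e[1]))  # opens before closes
        best_lo = best_hi = None
        count = 0
        for x, delta in events:
            count += delta
            if delta == +1 and count >= need and best_lo is None:
                best_lo = x
            if delta == -1 and count == need - 1:
                best_hi = x
        fused.append((best_lo, best_hi))
    return fused
```

Three overlapping 2-D boxes with f = 0 fuse to their common rectangle; a faulty sensor whose box lies far from the others is simply outvoted in each coordinate.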

Collaboration


Dive into Keith Marzullo's collaborations.

Top Co-Authors

Sam Toueg

University of Toronto
