Arthur J. Bernstein
Stony Brook University
Publications
Featured research published by Arthur J. Bernstein.
symposium on operating systems principles | 1981
Arthur J. Bernstein; Paul K. Harter
Wirth [Wi77] categorized programs into three classes. The most difficult type of program to understand and write is a real-time program. Much work has been done in the formal verification of sequential programs, but much remains to be done for concurrent and real-time programs. The critical nature of typical real-time applications makes the validity problem for real-time programs particularly important. Owicki and Lamport [OL80] present a relatively new method for verifying concurrent programs using temporal logic. This paper presents an extension of their work to the area of real-time programs. A model and proof system are presented and their use demonstrated using examples from the literature.
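As a rough illustration (in generic temporal-logic notation, not the paper's own formalism), the bounded-response properties that distinguish real-time programs from merely concurrent ones take the form:

```latex
% Illustrative bounded-response property (generic notation, not the
% paper's): every request is followed by a response within T time units.
\[
  \Box \left( \mathit{request} \;\rightarrow\; \Diamond_{\le T}\, \mathit{response} \right)
\]
```

Here \(\Box\) reads "always" and \(\Diamond_{\le T}\) "eventually, within \(T\) time units"; a proof system for real-time programs must allow such deadlines to be discharged, not merely eventual progress.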
IEEE Software | 1985
Ariel J. Frank; Larry D. Wittie; Arthur J. Bernstein
Channel-oriented packet casting is a predominant feature of Micros, an operating system designed to explore control and communication techniques for network computers containing thousands of hosts.
ACM Transactions on Programming Languages and Systems | 1980
Arthur J. Bernstein
In a recent paper C.A.R. Hoare outlined a language for concurrent programming. Guarded commands and nondeterminism are two features of the language. This paper points out two problems that arise in connection with these features and addresses one of them.
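For readers unfamiliar with the construct, the following is a minimal sketch in Python, with illustrative names, of the nondeterministic guarded alternative at issue (Hoare's language also allows input commands as guards, which this toy model omits):

```python
import random

def guarded_select(clauses):
    """Nondeterministic alternative in the style of guarded commands:
    evaluate all guards, then run the body of one arbitrarily chosen
    clause whose guard is true. Illustrative sketch only."""
    ready = [body for guard, body in clauses if guard()]
    if not ready:
        raise RuntimeError("all guards false: the alternative aborts")
    random.choice(ready)()  # nondeterministic choice among open guards

# Usage: exactly one of the bodies whose guard holds is executed.
x = 5
guarded_select([
    (lambda: x >= 0, lambda: print("non-negative branch")),
    (lambda: x <= 0, lambda: print("non-positive branch")),
])
```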
ACM Transactions on Database Systems | 1994
Narayanan Krishnakumar; Arthur J. Bernstein
Databases are replicated to improve performance and availability. The notion of correctness that has commonly been adopted for concurrent access by transactions to shared, possibly replicated, data is serializability. However, serializability may be impractical in high-performance applications since it imposes too stringent a restriction on concurrency. When serializability is relaxed, the integrity constraints describing the data may be violated. By allowing bounded violations of the integrity constraints, however, we are able to increase the concurrency of transactions that execute in a replicated environment. In this article, we introduce the notion of an N-ignorant transaction, which is a transaction that may be ignorant of the results of at most N prior transactions. A system in which all transactions are N-ignorant can have an N + 1-fold increase in concurrency over serializable systems, at the expense of bounded violations of its integrity constraints. We present algorithms for implementing replicated databases in N-ignorant systems. We then provide constructive methods for calculating the reachable states in such systems, given the value of N, so that one may assess the maximum liability that is incurred in allowing constraint violation. Finally, we generalize the notion of N-ignorance to a matrix of ignorance for the purpose of higher concurrency.
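A minimal sketch of the bounded-ignorance idea, assuming a single-site toy model with an invented NIgnorantReplica class (the paper's actual algorithms are distributed and considerably richer):

```python
class NIgnorantReplica:
    """Toy model of bounded ignorance: a transaction snapshots the
    commit count when it starts; at commit time it may be ignorant
    of at most N transactions committed since that snapshot."""
    def __init__(self, n):
        self.n = n
        self.committed = 0  # total committed transactions

    def begin(self):
        return self.committed  # snapshot seen by the transaction

    def try_commit(self, snapshot):
        missed = self.committed - snapshot
        if missed > self.n:
            return False  # too ignorant: must refresh and revalidate
        self.committed += 1
        return True

r = NIgnorantReplica(n=2)
t = r.begin()
for _ in range(3):           # three other transactions commit meanwhile
    r.try_commit(r.begin())
print(r.try_commit(t))       # False: t would be ignorant of 3 > N = 2
```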
principles of distributed computing | 1982
David Gelernter; Arthur J. Bernstein
Design and implementation of an inter-address-space communication mechanism for the SBN network computer are described. SBN's basic communication primitives appear in the context of a new distributed systems programming language strongly supported by the network communication kernel. A model in which all communication takes place via a distributed global buffer results in simplicity, generality, and power in the communication primitives. Implementation issues raised by the requirements of the global buffer model are discussed in the context of the SBN implementation effort.
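A minimal sketch of the global-buffer style of communication, using invented send/receive names (the paper defines SBN's actual primitives, which are not reproduced here):

```python
import queue, threading

# Toy model of communication through a single shared global buffer:
# senders deposit messages, receivers withdraw them, and no process
# addresses another directly.
global_buffer = queue.Queue()

def send(msg):
    global_buffer.put(msg)        # deposit into the shared buffer

def receive():
    return global_buffer.get()    # withdraw, blocking until available

threading.Thread(target=lambda: send("hello")).start()
print(receive())                  # any process may withdraw the message
```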
Distributed Computing | 1987
Divyakant Agrawal; Arthur J. Bernstein; Pankaj Gupta; Soumitra Sengupta
Concurrency control algorithms have traditionally been based on locking and timestamp ordering mechanisms. Recently, optimistic schemes have been proposed. In this paper a distributed, multi-version, optimistic concurrency control scheme is described which is particularly advantageous in a query-dominant environment. The drawbacks of the original optimistic concurrency control scheme, namely that inconsistent views may be seen by transactions (potentially causing unpredictable behavior) and that read-only transactions must be validated and may be rolled back, have been eliminated in the proposed algorithm. Read-only transactions execute in a completely asynchronous fashion and are therefore processed with very little overhead. Furthermore, the probability that read-write transactions are rolled back has been reduced by generalizing the validation algorithm. The effects of global transactions on local transaction processing are minimized. The algorithm is also free from deadlock and cascading rollback problems.
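A minimal sketch of why multi-versioning lets read-only transactions run without validation, using an invented MultiVersionStore class rather than the paper's distributed algorithm:

```python
class MultiVersionStore:
    """Toy multi-version store: writers install new versions tagged
    with a commit timestamp; a read-only transaction reads as of the
    timestamp at which it started, so it sees a consistent snapshot
    and never needs to be validated or rolled back."""
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.clock = 0

    def commit_write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot_read(self, key, start_ts):
        # latest version committed at or before the reader's start time
        older = [(ts, v) for ts, v in self.versions.get(key, []) if ts <= start_ts]
        return max(older)[1] if older else None

s = MultiVersionStore()
s.commit_write("x", 1)
reader_ts = s.clock          # read-only transaction starts here
s.commit_write("x", 2)       # concurrent writer commits a new version
print(s.snapshot_read("x", reader_ts))  # 1: the reader's snapshot is stable
```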
symposium on principles of database systems | 1991
Narayanan Krishnakumar; Arthur J. Bernstein
Databases are replicated to improve performance and the availability of data. The notion of correctness that has commonly been adopted for concurrent access by transactions to shared, possibly replicated, data is serializability. However, serializability may be impractical in high-performance applications since it imposes too stringent a restriction on concurrency. When serializability is relaxed, the integrity constraints describing the data may be violated. By allowing bounded violations of the integrity constraints, however, we are able to increase the concurrency of transactions that execute in a replicated environment. In this paper, we introduce the notion of an N-ignorant transaction, which is a transaction that may be ignorant of the results of at most N prior transactions. A system in which all transactions are N-ignorant can have an N + 1-fold increase in concurrency over serializable systems, at the expense of bounded violations of its integrity constraints. We present algorithms for implementing N-ignorant replicated databases. We then provide constructive methods for calculating the reachable states in such systems, given the value of N, so that one may assess the maximum liability that is incurred in allowing constraint violation.
IEEE Transactions on Software Engineering | 1977
Abraham Silberschatz; Richard B. Kieburtz; Arthur J. Bernstein
In Concurrent Pascal, the syntactic and semantic definition of the language prevents the inadvertent definition of a program that might violate the integrity of a shared data object. However, the language also does not allow the dynamic allocation of reusable resources among processes, and this restriction seems unnecessarily stringent. This paper proposes the addition to Concurrent Pascal of a new type of program component, to be called a resource manager. By this means, dynamic resource allocation can be accomplished both safely and efficiently. The notion that a process holds access rights to a resource is generalized to the notion that it holds capability rights, but the capability to actually make use of a resource is granted dynamically. The anonymity of dynamically allocatable resources is also guaranteed.
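A minimal sketch of the resource-manager idea in Python, with invented names (Concurrent Pascal's construct is a language-level component with compile-time checks that this toy cannot capture):

```python
class ResourceManager:
    """Toy resource manager: processes hold the *capability right* to
    request a resource class; the capability to use a specific,
    anonymous instance is granted dynamically and later returned."""
    def __init__(self, resources):
        self.free = list(resources)

    def acquire(self):
        if not self.free:
            return None          # caller must wait and retry
        return self.free.pop()   # anonymous instance chosen by manager

    def release(self, r):
        self.free.append(r)

mgr = ResourceManager(["frame0", "frame1"])
r = mgr.acquire()   # dynamic grant of an anonymous resource
# ... use r ...
mgr.release(r)
```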
Theoretical Computer Science | 2006
Shiyong Lu; Arthur J. Bernstein; Philip M. Lewis
Correctness is an important aspect of workflow management systems. However, most of the workflow literature focuses only on modeling and assumes that a workflow is correct if, during execution, it respects the control and data dependencies specified by the workflow designer. To address the correctness question properly, we propose a new workflow model based on Hoare semantics that makes it possible to: (1) automatically check whether the desired outcome of a workflow can be produced by its actual implementation, and (2) automatically synthesize a workflow implementation from the workflow specification and a given task library. In particular, we: (1) formalize the semantics of workflows and tasks with pre- and postconditions, and (2) provide, for each control construct, a set of sound inference rules formalizing its semantics. While most of our workflow constructs are standard, two of them are new: the universal and the existential constructs. We then describe algorithms for automatically checking the correctness of workflows and for automatic workflow generation.
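A minimal sketch of the pre/postcondition check for sequential composition, assuming conditions are monotonic sets of atomic facts (the paper's inference rules handle full Hoare-style assertions and the new universal/existential constructs, which this toy omits):

```python
# Toy correctness check for a sequential workflow: each task is a
# (preconditions, postconditions) pair of fact sets; the workflow is
# correct if every precondition holds when its task runs and the goal
# holds at the end.
def sequence_ok(tasks, initial, goal):
    state = set(initial)
    for pre, post in tasks:
        if not pre <= state:
            return False          # precondition not established
        state |= post             # task establishes its postconditions
    return goal <= state

book_flight = ({"itinerary"}, {"flight_booked"})
charge_card = ({"flight_booked"}, {"paid"})
print(sequence_ok([book_flight, charge_card], {"itinerary"}, {"paid"}))  # True
```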
international conference on data engineering | 2000
Arthur J. Bernstein; Philip M. Lewis; Shiyong Lu
Many transaction processing applications execute at isolation levels lower than serializable in order to increase throughput and reduce response time. The problem is that non-serializable schedules are not guaranteed to be correct for all applications. The semantics of a particular application determines whether that application will run correctly at a lower isolation level, and in practice it appears that many applications do. Unfortunately, we know of no analysis technique that has been developed to test an application for its correctness at a particular level. Apparently, decisions of this nature are made on an informal basis. In this paper we describe such a technique in a formal way. We use a new definition of correctness, semantic correctness, which is weaker than serializability, to investigate the correctness of such executions. For each isolation level, we prove a condition under which transactions that execute at that level will be semantically correct. In addition to the ANSI/ISO isolation levels of read uncommitted, read committed, and repeatable read, we also prove conditions for correct execution at read committed with first-committer-wins (a variation of read committed) and at the snapshot isolation level. We assume that different transactions can execute at different isolation levels, but that each transaction executes at least at the read uncommitted level.
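A minimal sketch of running transactions at mixed isolation levels, assuming a generic DB-API connection and illustrative transaction names (deciding which level is safe for which transaction is exactly what the paper's semantic conditions address):

```python
# Hedged sketch: run each transaction at the weakest isolation level
# its semantics tolerate. The SET TRANSACTION statement is standard
# SQL; connection handling follows the Python DB-API.
def run_at_isolation(conn, level, work):
    cur = conn.cursor()
    cur.execute("SET TRANSACTION ISOLATION LEVEL " + level)
    try:
        work(cur)
        conn.commit()
    except Exception:
        conn.rollback()
        raise

# e.g. run_at_isolation(conn, "READ COMMITTED", post_comment)
# e.g. run_at_isolation(conn, "SERIALIZABLE", transfer_funds)
```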