

Publication


Featured research published by Karl N. Levitt.


Proceedings of the IEEE | 1978

SIFT: Design and analysis of a fault-tolerant computer for aircraft control

John H. Wensley; Leslie Lamport; Jack Goldberg; Milton W. Green; Karl N. Levitt; P. M. Melliar-Smith; Robert E. Shostak; Charles B. Weinstock

SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described. The SIFT software is highly structured and is formally specified using the SRI-developed SPECIAL language. The correctness of SIFT is to be proved using a hierarchy of formal models. A Markov model is used both to analyze the reliability of the system and to serve as the formal requirement for the SIFT design. Axioms are given to characterize the high-level behavior of the system, from which a correctness statement has been proved. An engineering test version of SIFT is currently being built.
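The core voting step of the scheme described above can be sketched in a few lines (a toy model, not SIFT's actual software; the function names are illustrative): replicated results of one task iteration are majority-voted, and any processor disagreeing with the vote becomes a candidate for reconfiguration.

```python
from collections import Counter

def vote(results):
    """Majority-vote the replicated results of one task iteration.

    Returns the value agreed on by a strict majority of processing
    units, or None if no majority exists (unrecoverable disagreement).
    """
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

def detect_faulty(results, voted):
    """Indices of units whose result disagrees with the vote; these
    are candidates for removal from the working set (reconfiguration)."""
    return [i for i, r in enumerate(results) if r != voted]
```

With triplicated tasks, `vote([42, 42, 7])` returns 42 and `detect_faulty([42, 42, 7], 42)` flags unit 2, matching the abstract's claim that any single unit failure is tolerated.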


SIGPLAN Notices | 1975

SELECT—a formal system for testing and debugging programs by symbolic execution

Robert S. Boyer; Bernard Elspas; Karl N. Levitt

SELECT is an experimental system for assisting in the formal systematic debugging of programs. It is intended to be a compromise between an automated program proving system and the current ad hoc debugging practice, and is similar to a system being developed by King et al. of IBM. SELECT systematically handles the paths of programs written in a LISP subset that includes arrays. For each execution path SELECT returns simplified conditions on input variables that cause the path to be executed, and simplified symbolic values for program variables at the path output. For conditions which form a system of linear equalities and inequalities SELECT will return input variable values that can serve as sample test data. The user can insert constraint conditions, at any point in the program including the output, in the form of symbolically executable assertions. These conditions can induce the system to select test data in user-specified regions. SELECT can also determine if the path is correct with respect to an output assertion. We present four examples demonstrating the various modes of system operation and their effectiveness in finding bugs. In some examples, SELECT was successful in automatically finding useful test data. In others, user interaction was required in the form of output assertions. SELECT appears to be a useful tool for rapidly revealing program errors, but for the future there is a need to expand its expressive and deductive power.
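The per-path behavior described above can be mimicked in miniature. The sketch below is a toy stand-in for SELECT, not the system itself: it enumerates the two paths of the program `if x > y: m = x else: m = y`, and a brute-force search over small integers stands in for SELECT's linear-inequality solver when generating sample test data.

```python
# Each path of the toy program is represented as a pair:
#   (list of conditions on the inputs, symbolic output description)
PATHS = [
    ([lambda x, y: x > y],  "m = x"),   # then-branch
    ([lambda x, y: x <= y], "m = y"),   # else-branch
]

def sample_inputs(conds, lo=-3, hi=3):
    """Brute-force stand-in for a linear-inequality solver: return
    any (x, y) in a small range satisfying every path condition."""
    for x in range(lo, hi + 1):
        for y in range(lo, hi + 1):
            if all(c(x, y) for c in conds):
                return x, y
    return None
```

Each returned pair is sample test data forcing execution down the corresponding path, which is the essence of how SELECT selects test data per path.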


IEEE Transactions on Computers | 1968

Cellular Interconnection Arrays

William H. Kautz; Karl N. Levitt; Abraham Waksman

A class of networks is described that has the capability of permuting in an arbitrary manner a set of n digital input lines onto a set of n digital output lines. The circuitry of the networks is arranged in cellular form, i.e., in a two-dimensional iterative pattern with mainly local intercell connections, where the basic cell behaves as a reversing switch with a single memory flip-flop. Various network forms are described, differing in the number of cells needed, in the shape of the array, and in the length and regularity of intercell connections. Also discussed are some ways of setting up the array to achieve a desired permutation.
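The basic cell idea can be simulated. The sketch below is illustrative only: it uses an odd-even transposition pattern rather than any of the paper's specific array shapes, but it models each cell in the same way, as a reversing switch controlled by a single stored bit, and shows one way of setting up the array to realize a given permutation.

```python
def configure(perm):
    """Set the swap cells of an n-layer odd-even transposition array
    so that input line j is routed to output position perm[j].
    Returns settings[layer][cell]: True = reverse (swap), False = pass."""
    n = len(perm)
    dest = list(perm)            # dest[pos] = target output of the line at pos
    settings = []
    for layer in range(n):
        start = layer % 2        # alternate between odd and even adjacent pairs
        row = []
        for i in range(start, n - 1, 2):
            swap = dest[i] > dest[i + 1]
            row.append(swap)
            if swap:
                dest[i], dest[i + 1] = dest[i + 1], dest[i]
        settings.append(row)
    return settings

def route(settings, inputs):
    """Pass data through the configured cells; each cell either
    reverses or passes its two lines according to its stored bit."""
    n = len(inputs)
    lines = list(inputs)
    for layer, row in enumerate(settings):
        start = layer % 2
        for cell, i in enumerate(range(start, n - 1, 2)):
            if row[cell]:
                lines[i], lines[i + 1] = lines[i + 1], lines[i]
    return lines
```

Because an n-layer odd-even transposition network sorts any sequence of n destinations, configuring the cells by "sorting" the destination tags realizes every permutation of the n lines, analogous to the arbitrary-permutation capability described in the abstract.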


Communications of the ACM | 1977

Proof techniques for hierarchically structured programs

Lawrence Robinson; Karl N. Levitt

A method for describing and structuring programs that simplifies proofs of their correctness is presented. The method formally represents a program in terms of levels of abstraction, each level of which can be described by a self-contained nonprocedural specification. The proofs, like the programs, are structured by levels. Although only manual proofs are described in the paper, the method is also applicable to semi-automatic and automatic proofs. Preliminary results are encouraging, indicating that the method can be applied to large programs, such as operating systems.
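A miniature of the method can be sketched, with the per-operation proof obligation checked dynamically rather than proved (the names and structure are illustrative, not the paper's notation): an abstract level specifies a set of integers nonprocedurally, a lower level implements it as a sorted list, and a mapping function relates the concrete state to the abstract one.

```python
import bisect

class ConcreteSet:
    """Lower level: the abstract 'set of integers' implemented
    as a sorted list, the representation hidden behind operations."""
    def __init__(self):
        self._items = []
    def insert(self, x):
        if x not in self._items:
            bisect.insort(self._items, x)
    def member(self, x):
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x

def abstraction(c):
    """Mapping function from the concrete state up to the abstract state."""
    return set(c._items)

def check_insert(c, x):
    """Proof obligation for one operation, here merely tested:
    the abstract effect of the concrete insert is union with {x}."""
    before = abstraction(c)
    c.insert(x)
    return abstraction(c) == before | {x}
```

In the method itself this obligation would be discharged once, by proof, for every state; the point of the sketch is only the shape of the argument: one self-contained specification per level, related by a mapping function.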


Artificial Intelligence | 1974

Reasoning about programs

Richard J. Waldinger; Karl N. Levitt

This paper describes a theorem prover that embodies knowledge about programming constructs, such as numbers, arrays, lists, and expressions. The program can reason about these concepts and is used as part of a program verification system that uses the Floyd-Naur explication of program semantics. It is implemented in the QA4 language; the QA4 system allows many pieces of strategic knowledge, each expressed as a small program, to be coordinated so that a program stands forward when it is relevant to the problem at hand. The language allows clear, concise representation of this sort of knowledge. The QA4 system also has special facilities for dealing with commutative functions, ordering relations, and equivalence relations; these features are heavily used in this deductive system. The program interrogates the user and asks his advice in the course of a proof. Verifications have been found for Hoare's FIND program, a real-number division algorithm, and some sort programs, as well as for many simpler algorithms. Additional theorems have been proved about a pattern matcher and a version of Robinson's unification algorithm. The appendix contains a complete, annotated listing of the deductive system and annotated traces of several of the deductions performed by the system.


Communications of the ACM | 1972

Cellular arrays for the solution of graph problems

Karl N. Levitt; William H. Kautz

A cellular array is a two-dimensional, checkerboard-type interconnection of identical modules (or cells), where each cell contains a few bits of memory and a small amount of combinational logic, and communicates mainly with its immediate neighbors in the array. The chief computational advantage offered by cellular arrays is the improvement in speed achieved by virtue of the possibilities for parallel processing. In this paper it is shown that cellular arrays are inherently well suited for the solution of many graph problems. For example, the adjacency matrix of a graph is easily mapped onto an array; each matrix element is stored in one cell of the array, and typical row and column operations are readily implemented by simple cell logic. A major challenge in the effective use of cellular arrays for the solution of graph problems is the determination of algorithms that exploit the possibilities for parallelism, especially for problems whose solutions appear to be inherently serial. In particular, several parallelized algorithms are presented for the solution of certain spanning tree, distance, and path problems, with direct applications to wire routing, PERT chart analysis, and the analysis of many types of networks. These algorithms exhibit a computation time that in many cases grows at a rate not exceeding log₂ n, where n is the number of nodes in the graph. Straightforward cellular implementations of the well-known serial algorithms for these problems require about n steps, and noncellular implementations require from n² to n³ steps.
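The distance computation alluded to can be sketched as repeated min-plus squaring, the path-doubling idea behind a logarithmic step count: each squaring doubles the path length accounted for, so about log₂ n squarings suffice. The code below is a serial simulation of what the array's cells would compute in parallel (function names are illustrative, not the paper's):

```python
import math

INF = math.inf

def minplus_square(d):
    """One 'squaring' step: every cell (i, j) takes min over k of
    d[i][k] + d[k][j].  On a cellular array all cells would update
    simultaneously; here the update is simulated serially."""
    n = len(d)
    return [[min(d[i][k] + d[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_distances(w):
    """Shortest-path distances by repeated squaring of the weight
    matrix w (INF = no edge).  After ceil(log2 n) steps, paths of
    every length up to n are covered."""
    n = len(w)
    d = [[0 if i == j else w[i][j] for j in range(n)] for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(n)))):
        d = minplus_square(d)
    return d
```

For a 4-node path graph this takes two squarings instead of the three relaxation sweeps a straightforward serial algorithm would need; the gap widens with n, in line with the log₂ n versus n comparison in the abstract.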


SIGPLAN Notices | 1975

On attaining reliable software for a secure operating system

Lawrence Robinson; Karl N. Levitt; Peter G. Neumann; Ashok R. Saxena

This paper presents a general methodology for the design, implementation, and proof of large software systems, each described as a hierarchy of abstract machines. The design and implementation occur in five stages as described in this paper. Formal proof may take place at each stage. We expect the methodology to simplify the proof effort in such a way as to make proof a feasible tool in the development of reliable software. In addition to the anticipated advantages in proof, we feel that the methodology improves a designer's ability to formulate and organize the issues involved in the design of large systems, with additional benefits in system reliability. These advantages remain even if proof is not attempted. We are currently applying this methodology to the design and proof of a secure operating system. Each level in the system acts as a manager of all objects of a particular type (e.g., directories, segments, linkage sections), and enforces all of the protection rules involved in the manipulation of these objects. In this paper we illustrate the methodology by examining three of the system levels, including specifications, for a simplified version of these levels. We also demonstrate some proofs of security-related properties and of correctness of implementation.


Communications of the ACM | 1978

An example of hierarchical design and proof

Jay M. Spitzen; Karl N. Levitt; Lawrence Robinson

Hierarchical programming is being increasingly recognized as helpful in the construction of large programs. Users of hierarchical techniques claim or predict substantial increases in productivity and in the reliability of the programs produced. In this paper we describe a formal method for hierarchical program specification, implementation, and proof. We apply this method to a significant list processing problem and also discuss a number of extensions to current programming languages that ease hierarchical program design and proof.


Fall Joint Computer Conference | 1968

A study of the data commutation problems in a self-repairable multiprocessor

Karl N. Levitt; Milton W. Green; Jack Goldberg

In recent years significant effort has been devoted to the development of techniques for realizing computer systems for which a significant problem of design arises from the unreliability of components and assembly as well as from special constraints on construction and operation. Examples of such systems are computers for aerospace missions and on-line process control.


IEEE Transactions on Computers | 1974

An Organization for a Highly Survivable Memory

Jack Goldberg; Karl N. Levitt; John H. Wensley

A memory organization is considered for which a large number of faults can be tolerated at a low cost in redundancy. The primitive element in the memory is a large-scale integrated (LSI) chip that realizes a section of memory, b bits wide by y words long, together with an address decoder for the y words. The chips (including spares) are connected via a switching network so that the memory can be reconfigured effectively in the presence of chip failures. The main results of the paper relating to the switching network are as follows.
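The reconfiguration idea can be sketched as a remapping of logical chip indices onto the surviving physical chips, spares included. This is an illustrative toy, not the paper's switching-network design, and it deliberately omits how data already stored on a failed chip would be recovered (which in practice requires coding or replication):

```python
class SurvivableMemory:
    """Toy model: each physical chip holds `words_per_chip` words;
    failed chips are bypassed by remapping logical chip indices
    onto the remaining good chips (the switching-network analogue)."""
    def __init__(self, chips, spares, words_per_chip):
        self.words_per_chip = words_per_chip
        self.physical = [[0] * words_per_chip for _ in range(chips + spares)]
        self.failed = set()
        self.logical_chips = chips
        self._remap()
    def _remap(self):
        # Logical chip i -> the i-th surviving physical chip.
        good = [p for p in range(len(self.physical)) if p not in self.failed]
        assert len(good) >= self.logical_chips, "too many chip failures"
        self.map = good[:self.logical_chips]
    def fail_chip(self, p):
        self.failed.add(p)
        self._remap()            # reconfigure around the fault
    def write(self, addr, value):
        chip, word = divmod(addr, self.words_per_chip)
        self.physical[self.map[chip]][word] = value
    def read(self, addr):
        chip, word = divmod(addr, self.words_per_chip)
        return self.physical[self.map[chip]][word]
```

After a failure, the logical address space stays intact while the failed chip is excluded from the map, which is the low-redundancy fault-tolerance property the abstract highlights: only a few spare chips, not wholesale duplication.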
