
Publication


Featured research published by J. Lawrence Carter.


Journal of Computer and System Sciences | 1979

Universal classes of hash functions

J. Lawrence Carter; Mark N. Wegman

This paper gives an input independent average linear time algorithm for storage and retrieval on keys. The algorithm makes a random choice of hash function from a suitable class of hash functions. Given any sequence of inputs the expected time (averaging over all functions in the class) to store and retrieve elements is linear in the length of the sequence. The number of references to the data base required by the algorithm for any input is extremely close to the theoretical minimum for any possible hash function with randomly distributed inputs. We present three suitable classes of hash functions which also can be evaluated rapidly. The ability to analyze the cost of storage and retrieval without worrying about the distribution of the input allows as corollaries improvements on the bounds of several algorithms.
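
As a concrete illustration of the scheme described in this abstract, the sketch below builds a chained hash table that draws its hash function at random from the multiply-mod-prime family h_{a,b}(x) = ((a*x + b) mod p) mod m, a standard Carter-Wegman-style universal class; the class name, the prime, and the bucket count are illustrative choices rather than details taken from the paper.

import random

# A minimal sketch of chained hashing with a hash function chosen at random
# from the multiply-mod-prime family h_{a,b}(x) = ((a*x + b) mod p) mod m.
# The family, prime, and table size are illustrative, not the exact classes
# analyzed in the paper.

_PRIME = (1 << 61) - 1  # a Mersenne prime larger than any key hashed below


class UniversalHashTable:
    def __init__(self, num_buckets):
        self.m = num_buckets
        # The random choice of hash function is what makes the expected cost
        # independent of the input distribution.
        self.a = random.randrange(1, _PRIME)
        self.b = random.randrange(0, _PRIME)
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, key):
        return ((self.a * key + self.b) % _PRIME) % self.m

    def store(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def retrieve(self, key):
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None


if __name__ == "__main__":
    table = UniversalHashTable(num_buckets=1024)
    for k in range(500):
        table.store(k * k, str(k))
    print(table.retrieve(49 * 49))  # prints "49"

The expectation in the abstract is taken over the random draw of (a, b), not over the keys, which is why no assumption about the input distribution is needed.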


Journal of Computer and System Sciences | 1981

New hash functions and their use in authentication and set equality

Mark N. Wegman; J. Lawrence Carter

In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class contains a small number of functions, yet is almost universal₂. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log₂log₂(n)) · log₂(n)) bits as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m + log₂ n − log₂ m bits. An application of this class is a provably secure authentication technique for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal₂. We present the application of testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An “enemy”—even one with infinite computer resources—cannot forge or modify a message without detection. The set equality technique allows operations including “add member to set,” “delete member from set” and “test two sets for equality” to be performed in expected constant time and with less than a specified probability of error.
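
The authentication idea can be sketched as follows: sender and receiver share a secret hash key together with a sequence of one-time pads, and the tag of the i-th message is the hashed message offset by the i-th pad. The polynomial-evaluation family used below is a standard almost-universal family in this style; it, the modulus, and all function names are illustrative stand-ins rather than the exact classes defined in the paper.

import secrets

# Sketch of Wegman-Carter style authentication: tag_i = (h(message) + pad_i) mod P,
# where h is drawn from an almost-universal family and each pad is used once.
# The polynomial-evaluation family here is a stand-in, not the paper's exact class.

P = (1 << 61) - 1  # public prime modulus


def poly_hash(message: bytes, key: int) -> int:
    # Horner evaluation of the 7-byte message blocks as a polynomial at `key`, mod P.
    # The message length is folded in so that prefixes do not trivially collide.
    acc = len(message) % P
    for i in range(0, len(message), 7):
        block = int.from_bytes(message[i:i + 7], "big")
        acc = (acc * key + block) % P
    return acc


def make_shared_secret(num_messages: int):
    # State agreed on by sender and receiver ahead of time.
    return {
        "hash_key": secrets.randbelow(P - 1) + 1,
        "pads": [secrets.randbelow(P) for _ in range(num_messages)],
    }


def tag(shared, index: int, message: bytes) -> int:
    return (poly_hash(message, shared["hash_key"]) + shared["pads"][index]) % P


def verify(shared, index: int, message: bytes, received_tag: int) -> bool:
    return tag(shared, index, message) == received_tag


if __name__ == "__main__":
    shared = make_shared_secret(num_messages=16)
    t = tag(shared, 0, b"transfer 100 to alice")
    print(verify(shared, 0, b"transfer 100 to alice", t))  # True
    print(verify(shared, 0, b"transfer 900 to alice", t))  # False, except with tiny probability

Because each pad is used for only one message, observed tags reveal essentially nothing about the hash key, which is the property that lets the security argument hold even against an adversary with unlimited computing power.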


Symposium on the Theory of Computing | 1977

Universal classes of hash functions (Extended Abstract)

J. Lawrence Carter; Mark N. Wegman

This paper gives an input independent average linear time algorithm for storage and retrieval on keys. The algorithm makes a random choice of hash function from a suitable class of hash functions. Given any sequence of inputs the expected time (averaging over all functions in the class) to store and retrieve elements is linear in the length of the sequence. The number of references to the data base required by the algorithm for any input is extremely close to the theoretical minimum for any possible hash function with randomly distributed inputs. We present three suitable classes of hash functions which also may be evaluated rapidly. The ability to analyze the cost of storage and retrieval without worrying about the distribution of the input allows as corollaries improvements on the bounds of several algorithms.


Symposium on the Theory of Computing | 1982

The theory of signature testing for VLSI

J. Lawrence Carter

Several methods for testing VLSI chips can be classified as signature methods. Both conventional and signature testing methods apply a number of test patterns to the inputs of the circuit. The difference is that a conventional method examines each output, while a signature method first accumulates the outputs in some data compression device, then examines the signature - the final contents of the accumulator - to see if it agrees with the signature produced by a good chip. Signature testing methods have several advantages, but they run the risk that masking may occur. Masking is said to occur if a faulty chip and a good chip behave differently on the test patterns, but the signatures are identical. When masking occurs, the signature testing method will incorrectly conclude that the chip is good, whereas a conventional method would discover that the chip is defective. This paper gives theoretical justification to the use of several signature testing techniques. We show that for these methods, the probability that masking will occur is small. An important difference between this and other work is that our results require very few assumptions about the behavior of faulty chips. They hold even in the presence of so-called correlated errors or even if the circuit were subject to sabotage. When we speak of the probability of masking, we use the probabilistic approach of Gill, Rabin and others. That is, we introduce randomness into the testing method in a way which can be controlled by the designer. Thus, one theorem assumes that the order of the input patterns - or the patterns themselves - is random; another assumes that the connections between the chip and the signature accumulator are made randomly, and a third assumes that the signature accumulator itself incorporates a random choice. Most of the results of this paper use a particularly simple and practical signature accumulator based on a linear feedback shift register.
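
A minimal sketch of the kind of signature accumulator the paper is concerned with: a linear feedback shift register that folds each circuit output bit into a running signature, which is compared against the signature of a known-good chip after all test patterns have been applied. The 16-bit register width and the feedback taps below are illustrative values, not ones taken from the paper.

# LFSR-based signature accumulator (single-input signature register) sketch.
# Register width and feedback taps are illustrative choices.

TAPS = 0b1011010000000001  # tapped bit positions of the feedback polynomial
WIDTH = 16
MASK = (1 << WIDTH) - 1


def accumulate(signature: int, output_bit: int) -> int:
    # Shift one circuit-output bit into the register, mixing in the parity
    # of the tapped bits as linear feedback.
    feedback = bin(signature & TAPS).count("1") & 1
    return ((signature << 1) | (feedback ^ output_bit)) & MASK


def signature_of(output_bits) -> int:
    sig = 0
    for bit in output_bits:
        sig = accumulate(sig, bit)
    return sig


if __name__ == "__main__":
    good_chip = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
    faulty_chip = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # one flipped output bit
    # Masking would mean these two signatures agree even though the output
    # streams differ; the paper bounds the probability of that event.
    print(hex(signature_of(good_chip)), hex(signature_of(faulty_chip)))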


Communications of the ACM | 1977

A case study of a new code generation technique for compilers

J. Lawrence Carter

Recent developments in optimizing techniques have allowed a new design for compilers to emerge. Such a compiler translates the parsed source code into lower level code by a sequence of steps. Each step expands higher level statements into blocks of lower level code and then performs optimizations on the result. Each statement has only one possible expansion; the task of tailoring this code to take advantage of any special cases is done by the optimizations. This paper provides evidence that this strategy can indeed result in good object code. The traditionally difficult PL/I concatenate statement was investigated as a detailed example. A set of fairly simple optimizations was identified which allow the compiler to produce good code. More elaborate optimizations can further improve the object code. For most contexts of the concatenate statement, the code produced by a compiler using the expansion-optimization strategy described above compares favorably with the code produced by a conventional PL/I optimizing compiler.
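
A toy rendering of that expand-then-optimize strategy on an invented three-address IR (none of this is the PL/I compiler's actual intermediate code): the concatenate statement has exactly one generic expansion, and a simple constant-folding pass specializes the result when one operand is a known empty-string literal.

# Toy expand-then-optimize pipeline on an invented IR.  Every high-level
# statement has a single generic expansion; optimizations handle special cases.

def expand_concat(dest, a, b):
    # Single, context-free expansion of `dest = a || b`.
    return [
        ("LEN", "la", a),
        ("LEN", "lb", b),
        ("ADD", "lt", "la", "lb"),
        ("ALLOC", dest, "lt"),
        ("COPY", dest, 0, a, "la"),
        ("COPY", dest, "la", b, "lb"),
    ]


def fold_constants(code):
    env, out = {}, []
    for op in code:
        if op[0] == "LEN" and isinstance(op[2], str) and op[2].startswith('"'):
            # Length of a string literal is a compile-time constant.
            env[op[1]] = len(op[2]) - 2
            out.append(("CONST", op[1], env[op[1]]))
        elif op[0] == "ADD" and env.get(op[3]) == 0:
            out.append(("MOVE", op[1], op[2]))  # x + 0 becomes a move
        else:
            out.append(op)
    # Copies whose length is known to be zero can be dropped entirely.
    return [op for op in out if not (op[0] == "COPY" and env.get(op[4]) == 0)]


if __name__ == "__main__":
    # Concatenating a variable with the empty-string literal "".
    for instr in fold_constants(expand_concat("t1", "s", '""')):
        print(instr)

The generic expansion never looks at its context; the optimization pass is what recovers the code a hand-tailored code generator would have produced for this special case, which is the division of labor the paper evaluates.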


Mathematical Foundations of Computer Science | 1978

Analysis of a universal class of hash functions

George Markowsky; J. Lawrence Carter; Mark N. Wegman

In this paper we use linear algebraic methods to analyze the performance of several classes of hash functions, including the class H₂ presented by Carter and Wegman [2]. Suppose H is a suitable class, the hash functions in H map A to B, S is any subset of A whose size is equal to that of B, and x is any element of A. We show that the probability of choosing a function from H which maps x to the same value as more than t other elements of S is no greater than min(1/t², 11/t⁴).
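
A quick empirical look at the shape of that tail bound, using the multiply-mod-prime family as a stand-in for H₂ (the exact class and constants analyzed in the paper differ, so this is only a sanity check of the statement, not a reproduction of the analysis).

import random

# Estimate Pr over the choice of hash function that x collides with more than
# t other elements of S, and compare with the quoted bound min(1/t^2, 11/t^4).
# The stand-in family is ((a*y + b) mod P) mod M; P, M, and trial counts are
# illustrative choices.

P = 10007  # prime; the key universe A is {0, ..., P-1}
M = 64     # |B| = |S| = M


def estimate_tail(t, trials=20000):
    x = 0
    S = random.sample(range(1, P), M)  # subset of A not containing x
    hits = 0
    for _ in range(trials):
        a = random.randrange(1, P)
        b = random.randrange(0, P)
        h = lambda y: ((a * y + b) % P) % M
        if sum(1 for s in S if h(s) == h(x)) > t:
            hits += 1
    return hits / trials


if __name__ == "__main__":
    for t in (2, 4, 8):
        bound = min(1 / t**2, 11 / t**4)
        print(f"t={t}: observed {estimate_tail(t):.4f}, quoted bound {bound:.4f}")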


Algorithmica | 1991

Optimal tradeoffs for addition on systolic arrays

Alok Aggarwal; J. Lawrence Carter; S. Rao Kosaraju

The complexity of adding two n-bit numbers on a two-dimensional systolic array is investigated. We consider different constraints on the systolic array, including whether or not the input and output ports lie on the periphery of the array, and constraints placed on the arrival and departure times of inputs and outputs. For all combinations of the above constraints, we obtain optimal tradeoffs among the resources of area, pipeline delay, and worst-case time. It turns out that there is a subtle interplay among the constraints and some of our results seem counterintuitive. For instance, we show that allowing more-significant bits to arrive earlier than less-significant bits can speed up addition by a factor of log n. We also show that multiplexing can often result in a smaller array. On the other hand, we show that some known results, such as Chazelle and Monier's bounds for arrays that have input/output ports on the perimeter, also hold in less constrained models.


Theoretical Computer Science | 1981

A note on the existence of continuous functionals

J. Lawrence Carter; Ronald Fagin

Let P = {pᵢ | i ∈ I} and Q = {qᵢ | i ∈ I} be sets of partial functions with the same index set I. We say that Φ is an interpolating functional (from P to Q) if Φ(pᵢ) = qᵢ for each i. We give simple necessary and sufficient conditions for the existence of a monotone interpolating functional. We show that these same conditions are necessary and sufficient for the existence of a continuous interpolating functional if the index set I is finite, but that they are not sufficient if the index set is infinite.


Foundations of Computer Science | 1979

New classes and applications of hash functions

Mark N. Wegman; J. Lawrence Carter


International Test Conference | 1986

Efficient Fault Simulation of CMOS Circuits with Accurate Models

Zeev Barzilai; J. Lawrence Carter; Vijay S. Iyengar; Indira Nair; Barry K. Rosen; Joe D. Rutledge; Gabriel M. Silberman
