Publication


Featured research published by David Gelernter.


ACM Transactions on Programming Languages and Systems | 1985

Generative communication in Linda

David Gelernter

Generative communication is the basis of a new distributed programming language that is intended for systems programming in distributed settings generally and on integrated network computers in particular. It differs from previous interprocess communication models in specifying that messages be added in tuple-structured form to the computation environment, where they exist as named, independent entities until some process chooses to receive them. Generative communication results in a number of distinguishing properties in the new language, Linda, that is built around it. Linda is fully distributed in space and distributed in time; it allows distributed sharing, continuation passing, and structured naming. We discuss these properties and their implications, then give a series of examples. Linda presents novel implementation problems that we discuss in Part II. We are particularly concerned with implementation of the dynamic global name space that the generative communication model requires.
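The abstract's core idea — tuples deposited into a shared space where they live as independent entities until some process withdraws them — can be illustrated with a minimal single-process sketch. This is not Linda itself: the class and method names (`out`, `inp`, `rd`) follow Linda's terminology loosely, and a thread-safe list stands in for the distributed tuple space.

```python
import threading

class TupleSpace:
    """Toy sketch of a Linda-style tuple space: tuples added with out()
    exist independently in the space until a process withdraws them."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        # Deposit a tuple into the space (non-blocking).
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None acts as a formal (wildcard) field; other fields must be equal.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == f for p, f in zip(pattern, tup)
            ):
                return tup
        return None

    def inp(self, *pattern):
        # Withdraw a matching tuple, blocking until one appears.
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

    def rd(self, *pattern):
        # Read a matching tuple without removing it from the space.
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            return tup

ts = TupleSpace()
ts.out("count", 10)
print(ts.rd("count", None))   # ("count", 10) — still in the space
print(ts.inp("count", None))  # ("count", 10) — now withdrawn
```

The key point the sketch makes is decoupling: the producer that calls `out` never names a receiver, and the tuple persists in the space until some process matches it — communication distributed in both space and time.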


Communications of the ACM | 1989

Linda in context

Nicholas Carriero; David Gelernter

How can a system that differs sharply from all currently fashionable approaches score any kind of success? Here's how.


Symposium on Principles of Programming Languages | 1986

Distributed data structures in Linda

Nicholas Carriero; David Gelernter; Jerrold S. Leichter

A distributed data structure is a data structure that can be manipulated by many parallel processes simultaneously. Distributed data structures are the natural complement to parallel program structures, where a parallel program (for our purposes) is one that is made up of many simultaneously active, communicating processes. Distributed data structures are impossible in most parallel programming languages, but they are supported in the parallel language Linda and they are central to Linda programming style. We outline Linda, then discuss some distributed data structures that have arisen in Linda programming experiments to date. Our intent is neither to discuss the design of the Linda system nor the performance of Linda programs, though we do comment on both topics; we are concerned instead with a few of the simpler and more basic techniques made possible by a language model that, we argue, is subtly but fundamentally different in its implications from most others. This material is based upon work supported by the National Science Foundation under Grant No. MCS-8303905. Jerry Leichter is supported by a Digital Equipment Corporation Graduate Engineering Education Program fellowship.
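A classic example of the distributed data structures this paper describes is a shared counter held as a tuple: each update withdraws the tuple, increments it, and reinserts it, so concurrent updates serialize through the space itself with no explicit lock in the application code. The sketch below is an assumption-laden stand-in (a small `Space` class with `out`/`in_` over threads), not the Linda runtime.

```python
import threading

class Space:
    """Minimal blocking tuple-space stand-in for the sketch below."""
    def __init__(self):
        self.tuples, self.cond = [], threading.Condition()
    def out(self, *t):
        with self.cond:
            self.tuples.append(t)
            self.cond.notify_all()
    def in_(self, *pat):
        # Block until a tuple matches the pattern (None = wildcard), then withdraw it.
        with self.cond:
            while True:
                for t in self.tuples:
                    if len(t) == len(pat) and all(
                        p is None or p == f for p, f in zip(pat, t)
                    ):
                        self.tuples.remove(t)
                        return t
                self.cond.wait()

space = Space()
space.out("counter", 0)  # the counter lives in the space as a tuple

def worker(n):
    for _ in range(n):
        _, v = space.in_("counter", None)  # withdraw: other workers now block
        space.out("counter", v + 1)        # reinsert the updated tuple

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(space.in_("counter", None))  # ("counter", 4000) — no lost updates
```

Because only one process can hold the withdrawn tuple at a time, the data structure itself provides the mutual exclusion — the property that makes such structures "distributed" in the paper's sense.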


International Conference on Parallel Architectures and Languages Europe | 1989

Multiple Tuple Spaces in Linda

David Gelernter

Multiple tuple spaces have been envisioned for Linda since the system's first comprehensive description; they are intended for two purposes. First, by allowing tuples to be organized into a hierarchy of separate spaces, they should make it possible to construct large Linda programs out of modules, to realize Linda's long-standing potential to be a model for persistent storage, to enforce separation between the system and users in a Linda-based operating system, and to support abstraction. Second, if we allow tuple spaces to be included among the fields of ordinary tuples, the Linda tuple-manipulation operators will allow us to operate not only on single data objects but on whole computations.
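The abstract's second point — tuple spaces appearing as fields of ordinary tuples — can be sketched as follows. The API here (`out`, a non-blocking `inp` that returns `None` on no match) is an assumption for illustration, not the paper's actual operators.

```python
class TupleSpace:
    """Toy non-blocking tuple space; inp() returns None when nothing matches."""
    def __init__(self, name):
        self.name, self.tuples = name, []
    def out(self, *t):
        self.tuples.append(t)
    def inp(self, *pat):
        for t in self.tuples:
            if len(t) == len(pat) and all(
                p is None or p == f for p, f in zip(pat, t)
            ):
                self.tuples.remove(t)
                return t
        return None

# A hierarchy: the root space holds per-module child spaces as ordinary
# tuple fields, giving each module a separate name space.
root = TupleSpace("root")
fs, net = TupleSpace("fs"), TupleSpace("net")
root.out("module", "fs", fs)
root.out("module", "net", net)

fs.out("open-files", 3)

# Because spaces are first-class tuple fields, withdrawing one tuple can
# capture a whole sub-computation: here we pull out the "fs" module's space.
_, _, space = root.inp("module", "fs", None)
print(space.inp("open-files", None))  # ("open-files", 3)
```

Operating on the `("module", "fs", …)` tuple operates on everything inside the child space at once — the sense in which the operators act "not only on single data objects but on whole computations."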


ACM Transactions on Computer Systems | 1986

The S/Net's Linda kernel

Nicholas Carriero; David Gelernter

Linda is a parallel programming language that differs from other parallel languages in its simplicity and in its support for distributed data structures. The S/Net is a multicomputer, designed and built at AT&T Bell Laboratories, that is based on a fast, word-parallel bus interconnect. We describe the Linda-supporting communication kernel we have implemented on the S/Net. The implementation suggests that Linda's unusual shared-memory-like communication primitives can be made to run well in the absence of physically shared memory; the simplicity of the language and of our implementation's logical structure suggest that similar Linda implementations might readily be constructed on related architectures. We outline the language, and programming methodologies based on distributed data structures; we then describe the implementation, and the performance both of the Linda primitives themselves and of a simple S/Net-Linda matrix-multiplication program designed to exercise them.


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 1988

Applications experience with Linda

Nicholas Carriero; David Gelernter

We describe three experiments using C-Linda to write parallel codes. The first involves assessing the similarity of DNA sequences. The results demonstrate Linda's flexibility—Linda solutions are presented that work well at two quite different levels of granularity. The second uses a prime finder to illustrate a class of algorithms that do not (easily) submit to automatic parallelizers, but can be parallelized in straightforward fashion using C-Linda. The final experiment describes the process lattice model, an "inherently" parallel application that is naturally conceived as multiple interacting processes. Taken together, the experience described here bolsters our claim that Linda can bridge the gap between the growing collection of parallel hardware and users eager to exploit parallelism. This work is supported by the NSF under grants DCR-8601920 and DCR-8657615 and by the ONR under grant N00014-86-K-0310. We are grateful to Argonne National Labs for providing access to a Sequent Symmetry.
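The prime-finder experiment uses the bag-of-tasks idiom that Linda makes natural: a master drops work tuples into a shared bag, and workers repeatedly withdraw one, process it, and deposit a result. The sketch below illustrates the pattern with `queue.Queue` standing in for the tuple space's `in`/`out`; the range sizes and tuple shapes are illustrative choices, not the paper's.

```python
import queue
import threading

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

tasks, results = queue.Queue(), queue.Queue()

# Master: fill the bag with ("task", lo, hi) work tuples covering [2, 1000).
for lo in range(2, 1000, 100):          # 10 task tuples issued
    tasks.put(("task", lo, min(lo + 100, 1000)))

def worker():
    # Workers repeatedly withdraw a task, count primes, deposit a result.
    while True:
        tag, lo, hi = tasks.get()
        if tag == "stop":
            return
        results.put(("result", sum(1 for n in range(lo, hi) if is_prime(n))))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
total = sum(results.get()[1] for _ in range(10))  # collect all 10 results
for _ in threads:
    tasks.put(("stop", 0, 0))                     # poison tuples to stop workers
for t in threads:
    t.join()
print(total)  # 168 primes below 1000
```

Granularity is tuned by the task size alone (here, ranges of 100 numbers) — the load-balancing that defeats automatic parallelizers falls out of workers pulling tasks at their own pace.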


Parallel Computing | 1994

The Linda alternative to message-passing systems

Nicholas Carriero; David Gelernter; Timothy G. Mattson; Andrew H. Sherman

The use of distributed data structures in a logically-shared memory is a natural, readily-understood approach to parallel programming. The principal argument against such an approach for portable software has always been that efficient implementations could not scale to massively-parallel, distributed memory machines. Now, however, there is growing evidence that it is possible to develop efficient and portable implementations of virtual shared memory models on scalable architectures. In this paper we discuss one particular example: Linda. After presenting an introduction to the Linda model, we focus on the expressiveness of the model, on techniques required to build efficient implementations, and on observed performance both on workstation networks and distributed-memory parallel machines. Finally, we conclude by briefly discussing the range of applications developed with Linda and Linda's suitability for the sorts of heterogeneous, dynamically-changing computational environments that are of growing significance.


Principles of Distributed Computing | 1982

Distributed communication via global buffer

David Gelernter; Arthur J. Bernstein

Design and implementation of an inter-address-space communication mechanism for the SBN network computer are described. SBN's basic communication primitives appear in the context of a new distributed systems programming language strongly supported by the network communication kernel. A model in which all communication takes place via a distributed global buffer results in simplicity, generality and power in the communication primitives. Implementation issues raised by the requirements of the global buffer model are discussed in the context of the SBN implementation effort.


Symposium on Principles of Programming Languages | 1987

Environments as first class objects

David Gelernter; Suresh Jagannathan; Thomas London

We describe a programming language called Symmetric Lisp that treats environments as first-class objects. Symmetric Lisp allows programmers to write expressions that evaluate to environments, and to create and denote variables and constants of type environment as well. One consequence is that the roles filled in other languages by a variety of limited, special purpose environment forms like records, structures, closures, modules, classes and abstract data types are filled instead by a single versatile and powerful structure. In addition to being the language's fundamental structuring tool, environments serve as its basic functional object. Because the elements of an environment are evaluated in parallel, Symmetric Lisp is a parallel programming language; because they may be assembled dynamically as well as statically, Symmetric Lisp accommodates an unusually flexible and simple (parallel) interpreter as well as other history-sensitive applications requiring dynamic environments. We show that first-class environments bring about fundamental changes in a language's structure: conventional distinctions between declarations and expressions, data structures and program structures, passive modules and active processes disappear.
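The claim that one first-class environment structure can play the roles of record, module, and namespace can be illustrated by rough analogy in an ordinary language. This is not Symmetric Lisp (in particular, nothing here is evaluated in parallel); it only shows environments as plain values that can be built, stored, passed, and extended.

```python
from types import SimpleNamespace

def make_env(**bindings):
    # An "environment" here is just a namespace object holding bindings.
    return SimpleNamespace(**bindings)

# Record role: an environment used as a point record.
point = make_env(x=3, y=4)

# Module role: functions bundled with the data they operate on.
geometry = make_env(
    origin=make_env(x=0, y=0),
    dist=lambda p, q: ((p.x - q.x) ** 2 + (p.y - q.y) ** 2) ** 0.5,
)

# Environments are ordinary values: extend one dynamically, nest one
# inside another, pass one as an argument.
geometry.unit = make_env(x=1, y=0)
print(geometry.dist(point, geometry.origin))  # 5.0
```

The distinctions the abstract says disappear — record vs. module vs. namespace — are visible here as three uses of the same construct; Symmetric Lisp takes the idea much further by making environment evaluation itself the parallel computation.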


International Symposium on Computer Architecture | 1988

The architecture of a Linda coprocessor

Venkatesh Krishnaswamy; Sudhir R. Ahuja; Nicholas Carriero; David Gelernter

We describe the architecture of a coprocessor that supports the communication primitives of the Linda parallel programming environment in hardware. The coprocessor is a critical element in the architecture of the Linda Machine, an MIMD parallel processing system that is designed top down from the specifications of Linda. Communication in Linda programs takes place through a logically shared associative memory mechanism called tuple space. The Linda Machine, however, has no physically shared memory. The microprogrammable coprocessor implements distributed protocols for executing tuple space operations over the Linda Machine communication network. The coprocessor has been designed and is in the process of fabrication. We discuss the projected performance of the coprocessor and compare it with software Linda implementations. This work is supported in part by National Science Foundation grant CCR-8657615 and ONR grant N00014-86-K-0310.

Collaboration


David Gelernter's collaborations.

Top Co-Authors

Lenore D. Zuck

University of Illinois at Chicago
