

Publication


Featured research published by Andrew A. Chien.


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 1990

Concurrent aggregates (CA)

Andrew A. Chien; William J. Dally

To program massively concurrent MIMD machines, programmers need tools for managing complexity. One important tool that has been used in the sequential programming world is hierarchies of abstractions. Unfortunately, most concurrent object-oriented languages construct hierarchical abstractions from objects that serialize, which serializes the abstractions as well. In machines with tens of thousands of processors, unnecessary serialization of this sort can cause a significant loss of concurrency. Concurrent Aggregates (CA) is an object-oriented language that allows programmers to build unserialized hierarchies of abstractions by using aggregates. An aggregate in CA is a homogeneous collection of objects (called representatives) that are grouped together and may be referenced by a single aggregate name. Aggregates are integrated into the object model, allowing them to be used wherever an object could be used. Concurrent Aggregates also incorporates several innovative language features that facilitate programming with aggregates. Intra-aggregate addressing aids cooperation between parts of an aggregate. Delegation allows programmers to compose a concurrent aggregate's behavior from a number of objects or aggregates. Messages in CA are first-class objects that can be used to create message-handling abstractions (which handle messages as data). Such abstractions facilitate concurrent operations on aggregates. Continuations are also first-class objects; programmers can construct continuations and use them just like system continuations. User-constructed continuations can implement synchronization structures such as barrier synchronization.
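The aggregate idea can be illustrated with a minimal Python sketch. This is not CA syntax; the `Aggregate` and `Counter` classes and round-robin routing below are illustrative assumptions, standing in for CA's one-to-one-of-many delivery and intra-aggregate addressing.

```python
import threading

class Aggregate:
    """Sketch of a CA-style aggregate: a homogeneous group of
    representatives reachable through one name. A message sent to the
    aggregate is delivered to a single representative (the
    "one-to-one-of-many" interface), so the abstraction need not
    serialize all requests through one object."""

    def __init__(self, rep_factory, n_reps):
        # Each representative holds part of the aggregate's state.
        self.reps = [rep_factory(i, self) for i in range(n_reps)]
        self._next = 0
        self._lock = threading.Lock()

    def send(self, message, *args):
        # Route the message to one representative (round-robin here;
        # a real implementation would pick by locality or hashing).
        with self._lock:
            rep = self.reps[self._next % len(self.reps)]
            self._next += 1
        return getattr(rep, message)(*args)

class Counter:
    """A representative: holds a local count. Intra-aggregate
    addressing (self.group.reps) lets representatives cooperate,
    e.g. to total up the distributed counts."""
    def __init__(self, index, group):
        self.index = index
        self.group = group
        self.count = 0
    def inc(self):
        self.count += 1
    def total(self):
        return sum(r.count for r in self.group.reps)

counter = Aggregate(Counter, 4)   # one name for four representatives
for _ in range(8):
    counter.send("inc")           # increments spread across reps
print(counter.send("total"))      # → 8
```

Because each `inc` lands on a different representative, on a real concurrent machine the eight increments could proceed without funneling through a single serializing object.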


Conference on High Performance Computing (Supercomputing) | 1993

Concert: efficient runtime support for concurrent object-oriented programming languages on stock hardware

Vijay Karamcheti; Andrew A. Chien

Inefficient implementations of global namespaces, message passing, and thread scheduling on stock multicomputers have prevented concurrent object-oriented programming (COOP) languages from gaining widespread acceptance. Recognizing that the architectures of stock multicomputers impose a hierarchy of costs for these operations, the authors describe a runtime system which provides different versions of each primitive, exposing performance distinctions for optimization. They confirm the advantages of a cost-hierarchy based runtime system organization by showing a variation of two orders of magnitude in version costs for a CM5 implementation. Frequency measurements based on COOP application programs demonstrate that a 39% invocation cost reduction is feasible by simply selecting cheaper versions of runtime operations.
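The cost-hierarchy idea can be sketched as follows. The function names and the dict standing in for a wire message are assumptions for illustration, not the Concert API: the point is that the runtime exposes several versions of one primitive, ordered from cheapest to most expensive, and the caller picks the cheapest version whose preconditions hold.

```python
def invoke_local(obj, method, *args):
    # Cheapest version: target known to be local -> direct call.
    return getattr(obj, method)(*args)

def invoke_remote(oid, method, *args):
    # Most expensive version: marshal a message for the network
    # (stubbed here as a dict standing in for a wire message).
    return {"to": oid, "method": method, "args": args}

def invoke_maybe_remote(namespace, oid, method, *args):
    # Mid-cost version: check placement, then take the cheap path
    # if possible, falling back to messaging otherwise.
    obj = namespace.get(oid)
    if obj is not None:
        return invoke_local(obj, method, *args)
    return invoke_remote(oid, method, *args)

class Point:
    def __init__(self, x): self.x = x
    def get_x(self): return self.x

namespace = {1: Point(42)}   # global namespace: oid -> local object
print(invoke_maybe_remote(namespace, 1, "get_x"))  # → 42 (cheap path)
print(invoke_maybe_remote(namespace, 2, "get_x"))  # marshalled message
```

An optimizer that knows an object is local can call `invoke_local` directly, which is exactly the kind of version selection the measured 39% invocation-cost reduction comes from.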


Programming Language Design and Implementation | 1989

Experience with CST: programming and implementation

Waldemar Horwat; Andrew A. Chien; William J. Dally

CST is a programming language based on Smalltalk-80 that supports concurrency using locks, asynchronous messages, and distributed objects. In this paper, we describe CST: the language and its implementation. Example programs and initial programming experience with CST are described. Our implementation of CST generates native code for the J-machine, a fine-grained concurrent computer. Some compiler optimizations developed in conjunction with that implementation are also described.


Languages and Compilers for Parallel Computing | 1993

Analysis of Dynamic Structures for Efficient Parallel Execution

John Plevyak; Andrew A. Chien; Vijay Karamcheti

This paper presents a new structure analysis technique for references and dynamic structures that enables precise analysis of infinite recursive data structures. The analysis rests on an enhancement of Chase et al.'s Storage Shape Graph (SSG) called the Abstract Storage Graph (ASG), which extends SSGs with choice nodes, identity paths, and specialized storage nodes and references. These extensions allow ASGs to precisely describe singly- and multiply-linked lists as well as a number of other pointer structures, such as octrees, and to analyze programs which manipulate them.
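To give a feel for what a storage shape graph summarizes, here is a small sketch, not the paper's formalism: each allocation site maps to one abstract node, and field edges record which nodes a field may point to. The `AbstractNode` class and the allocation-site label are illustrative assumptions.

```python
class AbstractNode:
    """One abstract node per allocation site; fields map a field
    name to the set of abstract nodes it may point to."""
    def __init__(self, site):
        self.site = site
        self.fields = {}      # field name -> set of abstract nodes

def build_list_graph():
    # All cells of a singly linked list built at one site collapse
    # into a single summary node with a self edge on "next".
    n = AbstractNode("alloc@list_cons")   # hypothetical site label
    n.fields["next"] = {n}
    return n

g = build_list_graph()
print(g in g.fields["next"])   # → True: the summarized self edge
```

A plain SSG cannot tell whether that self edge describes an acyclic list or a cycle; the ASG's choice nodes and identity paths are the extra machinery that lets the analysis keep those cases apart.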


Journal of Parallel and Distributed Computing | 1995

Concurrent Aggregates (CA)

Andrew A. Chien

To program massively concurrent MIMD machines, programmers need tools for managing complexity that do not restrict concurrency. One important tool that has been used in sequential programs is hierarchies of abstractions. Unfortunately, most concurrent object-oriented languages construct hierarchical abstractions from objects that serialize the processing of requests, limiting the concurrency of abstractions. Concurrent Aggregates (CA) is an object-oriented language that provides tools for building abstractions with unrestricted concurrency: aggregates. Aggregates are collections of objects that have a single group name, allowing aggregates and objects to be used interchangeably, increasing program concurrency while preserving the program's modularity structure. This paper describes and evaluates the use of aggregates in a programming language. Based on our programming experience, we evaluate basic support for aggregates (a one-to-one-of-many interface and intra-aggregate addressing). In many applications, the basic support for aggregates can be used to build multiaccess data abstractions that maintain program modularity and increase concurrency. The one-to-one-of-many interface and intra-aggregate naming are sufficient to build a wide variety of replication structures and distributed interfaces. We also evaluate language support in CA for composing multiaccess data abstractions (delegation, first-class messages, and first-class and user-defined continuations). Delegation was not useful because, in most cases, some coordination code was needed to glue abstractions together, and representation incompatibilities between aggregates make a traditional shared-state approach infeasible. First-class messages were extremely useful for implementing data-parallel operations and a variety of customized synchronization and scheduling structures. First-class continuations found widespread use in simple synchronization structures. User-defined continuations are useful for group, source-blind synchronization structures such as barriers, but the lack of identifying information in reply messages limits user-defined continuations' utility for more general fine-grained synchronization structures.
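The barrier use of user-defined continuations can be sketched in Python, with the caveat that `BarrierContinuation` and its `reply` method are illustrative assumptions, not CA code. A continuation here is simply "the object that receives the reply"; a barrier continuation counts replies and forwards to the real continuation only after all of them arrive.

```python
import threading

class BarrierContinuation:
    """User-defined continuation acting as a barrier: absorbs n
    replies, then invokes the real continuation exactly once."""
    def __init__(self, n, continuation):
        self.n = n
        self.continuation = continuation
        self._lock = threading.Lock()

    def reply(self, _value=None):
        # Source-blind: replies carry no sender identity, which is
        # why such continuations suit barriers but not finer-grained
        # synchronization (the limitation the paper notes).
        with self._lock:
            self.n -= 1
            done = self.n == 0
        if done:
            self.continuation()

results = []
barrier = BarrierContinuation(3, lambda: results.append("all done"))
threads = [threading.Thread(target=barrier.reply) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(results)    # → ['all done']
```

Because `reply` ignores who sent it, the same structure cannot, for example, match each reply to the request that caused it, which is the "lack of identifying information" limitation described above.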


Symposium on Principles of Programming Languages | 1995

Obtaining sequential efficiency for concurrent object-oriented languages

John Plevyak; Xingbin Zhang; Andrew A. Chien

Concurrent object-oriented programming (COOP) languages focus the abstraction and encapsulation power of abstract data types on the problem of concurrency control. In particular, pure fine-grained concurrent object-oriented languages (as opposed to hybrid or data-parallel ones) provide the programmer with a simple, uniform, and flexible model while exposing maximum concurrency. While such languages promise to greatly reduce the complexity of large-scale concurrent programming, their popularity has been hampered by efficiency that is often orders of magnitude lower than that of comparable sequential code. We present a sufficient set of techniques that enables the efficiency of fine-grained concurrent object-oriented languages to equal that of traditional sequential languages (like C) when the required data is available. These techniques are empirically validated by application to a COOP implementation of the Livermore Loops.


Conference on High Performance Computing (Supercomputing) | 1995

A Hybrid Execution Model for Fine-Grained Languages on Distributed Memory Multicomputers

John Plevyak; Vijay Karamcheti; Xingbin Zhang; Andrew A. Chien

While fine-grained concurrent languages can naturally capture concurrency in many irregular and dynamic problems, their flexibility has generally resulted in poor execution efficiency. In such languages the computation consists of many small threads which are created dynamically and synchronized implicitly. In order to minimize the overhead of these operations, we propose a hybrid execution model which dynamically adapts to runtime data layout, providing both sequential efficiency and low overhead parallel execution. This model uses separately optimized sequential and parallel versions of code. Sequential efficiency is obtained by dynamically coalescing threads via stack-based execution and parallel efficiency through latency hiding and cheap synchronization using heap-allocated activation frames. Novel aspects of the stack mechanism include handling return values for futures and executing forwarded messages (the responsibility to reply is passed along, like call/cc in Scheme) on the stack. In addition, the hybrid execution model is expressed entirely in C, and therefore is easily portable to many systems. Experiments with function-call intensive programs show that this model achieves sequential efficiency comparable to C programs. Experiments with regular and irregular application kernels on the CM5 and T3D demonstrate that it can yield 1.5 to 3 times better performance than code optimized for parallel execution alone.
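The fast-path/slow-path split at the heart of the hybrid model can be sketched in Python (the real system is expressed in C; the `task` function and the use of a generator as a heap-allocated frame are illustrative assumptions). A task runs as an ordinary stack call while its data is available, and is demoted to a resumable heap frame only when it would block.

```python
def task(data, key):
    """Run as a plain call if the value is local (stack execution);
    otherwise package the remaining work as a resumable heap frame,
    so synchronization cost is paid only on the slow path."""
    # Fast path: value already local -> finish entirely on the stack.
    if key in data:
        return ("done", data[key] * 2)
    # Slow path: a generator stands in for a heap-allocated
    # activation frame that can be resumed when the value arrives.
    def resume():
        yield        # suspended until the value is delivered
        yield ("done", data[key] * 2)
    frame = resume()
    next(frame)      # advance to the suspension point
    return ("blocked", frame)

data = {"x": 21}
print(task(data, "x"))        # → ('done', 42): pure stack execution

tag, frame = task(data, "y")  # value missing: heap frame created
data["y"] = 5                 # value arrives later
print(next(frame))            # → ('done', 10): resumed from the heap
```

The common case never allocates a frame at all, which is where the reported C-like sequential efficiency comes from; only blocked tasks pay for heap allocation and later resumption.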


Hypercube Concurrent Computers and Applications | 1988

Object-oriented concurrent programming in CST

William J. Dally; Andrew A. Chien

CST is a programming language based on Smalltalk-80 that supports concurrency using locks, asynchronous messages, and distributed objects. Distributed objects have their state distributed across many nodes of a machine, but are referred to by a single name. Distributed objects are capable of processing many messages simultaneously and can be used to efficiently connect together large collections of objects. They can be used to construct a number of useful abstractions for concurrency. This paper describes the CST language, gives examples of its use, and discusses an initial implementation.
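A distributed object in this sense can be sketched as follows; the `DistributedDict` class and its hash-based routing are illustrative assumptions, not CST code. State is split across several "nodes" (plain dicts here), but clients see one name, and each message is routed to the constituent that holds the relevant piece of state.

```python
class DistributedDict:
    """Sketch of a CST-style distributed object: state partitioned
    across nodes, referred to by a single name. On a real machine,
    messages to different constituents proceed simultaneously."""
    def __init__(self, n_nodes):
        self.nodes = [{} for _ in range(n_nodes)]   # per-node state

    def _home(self, key):
        # Route each key to the node that owns its partition.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._home(key)[key] = value

    def get(self, key):
        return self._home(key)[key]

d = DistributedDict(4)        # one name, state on four nodes
d.put("a", 1)
d.put("b", 2)
print(d.get("a"), d.get("b")) # → 1 2
```

Because different keys live on different constituents, operations on different parts of the object need not serialize through any single point, which is what makes distributed objects useful for connecting large collections of objects.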


International Conference on Computer Design | 1992

The Message Driven Processor: an integrated multicomputer processing element

William J. Dally; Andrew A. Chien; J.A.S. Fiske; G. Fyler; Waldemar Horwat; John S. Keen; Richard Lethin; Michael D. Noakes; Peter R. Nuth; D.S. Wills

A description is given of the Message-Driven Processor (MDP), an integrated multicomputer node. It incorporates a 36-bit integer processor, a memory management unit, a router for a 3D mesh network, a network interface, a 4K x 36-bit word static RAM (SRAM), and an ECC dynamic RAM (DRAM) controller on a single 1.1M-transistor VLSI chip. The MDP is not specialized for a single model of computation; instead, it incorporates efficient primitive mechanisms for communication, synchronization, and naming. These mechanisms support most proposed parallel programming models. Each processing node of the MIT J-Machine consists of an MDP with 1 Mbit of DRAM.


Distributed Memory Computing Conference | 1990

Experience with Concurrent Aggregates (CA): Implementation and Programming

Andrew A. Chien; William J. Dally

Programming languages for massively parallel concurrent computers need multi-access data abstraction tools. Most concurrent object-oriented languages serialize hierarchical abstractions. Thus multiple levels of abstraction can result in greatly diminished concurrency, even if each level only causes a tiny amount of serialization. This leaves programmers with the choice of reduced concurrency or working without useful levels of abstraction. Going without these levels of abstraction makes programs more difficult to write, understand, and debug.

Collaboration

Top co-authors of Andrew A. Chien (all at the Massachusetts Institute of Technology): Waldemar Horwat, D.S. Wills, J.A.S. Fiske, John S. Keen, Michael D. Noakes, Peter R. Nuth, and Richard Lethin.