David Chaiken
Massachusetts Institute of Technology
Publications
Featured research published by David Chaiken.
international symposium on computer architecture | 1994
David Chaiken; Anant Agarwal
This paper evaluates the tradeoffs involved in the design of the software-extended memory system of Alewife, a multiprocessor architecture that implements coherent shared memory through a combination of hardware and software mechanisms. For each block of memory, Alewife implements between zero and five coherence directory pointers in hardware and allows software to handle requests when the pointers are exhausted. The software includes a flexible coherence interface that facilitates protocol software implementation. This interface is indispensable for conducting experiments and has proven important for implementing enhancements to the basic system. Simulations of a number of applications running on a complete system (with up to 256 processors) demonstrate that the hybrid architecture with five pointers achieves between 71% and 100% of full-map directory performance at a constant cost per processing element. Our experience in designing the software protocol interfaces and experiments with a variety of system configurations lead to a detailed understanding of the interaction of the hardware and software components of the system. The results show that a small amount of shared memory hardware provides adequate performance: one-pointer systems reach between 42% and 100% of full-map performance on our parallel benchmarks. A software-only directory architecture with no hardware pointers has lower performance but minimal cost.
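The software-extended directory scheme the abstract describes can be sketched as follows. This is a minimal, hypothetical model (the class and method names are illustrative, not Alewife's implementation): each memory block's directory holds a small fixed number of hardware sharer pointers, and when those are exhausted a software handler tracks the overflow sharers so that a later write can still invalidate every copy.

```python
# Minimal sketch of a limited-pointer directory with software overflow.
# Hypothetical simplification for illustration; not the Alewife design.

HW_POINTERS = 5  # hardware pointers per memory block, as in the paper

class DirectoryEntry:
    """Tracks which caches share one memory block."""
    def __init__(self):
        self.hw = []      # up to HW_POINTERS sharer IDs held in hardware
        self.sw = set()   # overflow sharers recorded by the software handler

    def add_sharer(self, node):
        """Record a read by `node`; report which layer handled it."""
        if node in self.hw or node in self.sw:
            return "hit"
        if len(self.hw) < HW_POINTERS:
            self.hw.append(node)       # common case: pure hardware
            return "hardware"
        self.sw.add(node)              # pointers exhausted: trap to software
        return "software"

    def invalidate_all(self):
        """On a write, every sharer (hardware- or software-tracked)
        is invalidated before exclusive ownership is granted."""
        sharers = list(self.hw) + sorted(self.sw)
        self.hw.clear()
        self.sw.clear()
        return sharers
```

The cost argument in the abstract falls out of this structure: the hardware cost per block is fixed at `HW_POINTERS`, and only widely shared blocks pay the software-handling penalty.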
Proceedings of the IEEE | 1999
Anant Agarwal; Ricardo Bianchini; David Chaiken; Frederic T. Chong; Kirk L. Johnson; David M. Kranz; John Kubiatowicz; Beng-Hong Lim; Kenneth Mackenzie; Donald Yeung
A variety of models for parallel architectures, such as shared memory, message passing, and data flow, have converged in the recent past to a hybrid architecture form called distributed shared memory (DSM). Alewife, an early prototype of such DSM architectures, uses hybrid software and hardware mechanisms to support coherent shared memory, efficient user-level messaging, fine-grain synchronization, and latency tolerance. Alewife supports up to 512 processing nodes connected over a scalable and cost-effective mesh network at a constant cost per node. Four mechanisms combine to achieve Alewife's goals of scalability and programmability: software-extended coherent shared memory provides a global, linear address space; integrated message passing allows compiler and operating system designers to provide efficient communication and synchronization; support for fine-grain computation allows many processors to cooperate on small problem sizes; and latency tolerance mechanisms, including block multithreading and prefetching, mask unavoidable delays due to communication. Extensive results from microbenchmarks, together with over a dozen complete applications running on a 32-node prototype, demonstrate that integrating message passing with shared memory enables a cost-efficient solution to the cache coherence problem and provides a rich set of programming primitives. Our results further show that messaging and shared memory operations are both important because each helps the programmer to achieve the best performance for various machine configurations.
architectural support for programming languages and operating systems | 1992
John Kubiatowicz; David Chaiken; Anant Agarwal
Multiprocessor architects have begun to explore several mechanisms, such as prefetching, context switching, and software-assisted dynamic cache coherence, which transform single-phase memory transactions in conventional memory systems into multiphase operations. Multiphase operations introduce a window of vulnerability in which data can be invalidated before it is used. Losing data to such invalidations creates damaging livelock situations. This paper discusses the origins of the window of vulnerability and proposes an architectural framework that closes it. The framework is implemented in Alewife, a large-scale multiprocessor being built at MIT.
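The livelock hazard above can be illustrated with a toy model. In a multiphase operation the request for data and the use of that data are separate phases, so a remote invalidation can arrive in between and force an endless retry loop. One way to close the window, loosely in the spirit of a transaction buffer (heavily simplified here; the class and its behavior are assumptions for illustration, not the paper's mechanism), is to defer an incoming invalidation until the pending access has completed, guaranteeing each delivered datum at least one use.

```python
# Toy model of closing the window of vulnerability.
# Hypothetical simplification; not the framework described in the paper.

class TransactionBuffer:
    """Holds data delivered by a multiphase memory operation until the
    processor's pending access has consumed it."""
    def __init__(self):
        self.data = None
        self.used = False

    def deliver(self, value):
        """Phase 1 completes: the requested data arrives."""
        self.data, self.used = value, False

    def use(self):
        """Phase 2: the processor finally touches the data."""
        assert self.data is not None, "data was lost before use"
        self.used = True
        return self.data

    def invalidate(self):
        """An invalidation arriving between phases is deferred (returns
        False) until the guaranteed single use has happened, which
        ensures forward progress and rules out retry livelock."""
        if not self.used:
            return False
        self.data = None
        return True
```

Without the deferral in `invalidate`, an invalidation landing between `deliver` and `use` would discard the data, force a re-request, and potentially repeat forever under contention.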
Proceedings of the US/Japan Workshop on Parallel Symbolic Computing: Languages, Systems, and Applications | 1992
Anant Agarwal; Jonathan Babb; David Chaiken; Godfrey D'Souza; Kirk L. Johnson; David A. Kranz; John Kubiatowicz; Beng-Hong Lim; Gino K. Maa; Kenneth Mackenzie; Daniel Nussbaum; Mike Parkin; Donald Yeung
The Sparcle chip will clock at no more than 50 MHz. It has no more than 200K transistors. It does not use the latest technologies and dissipates a paltry 2 watts. It has no on-chip cache, no fancy pads, and only 207 pins. It does not even support multiple-instruction issue. Why, then, do we think this chip is interesting? Sparcle is a processor chip designed for large-scale multiprocessing. Processors suitable for multiprocessing environments must meet several requirements:
international symposium on computer architecture | 1995
Anant Agarwal; Ricardo Bianchini; David Chaiken; Kirk L. Johnson; David A. Kranz; John Kubiatowicz; Beng-Hong Lim; Kenneth Mackenzie; Donald Yeung
Operating Systems Review | 1991
David Chaiken; John Kubiatowicz; Anant Agarwal
Archive | 1991
Anant Agarwal; David Chaiken; Kirk L. Johnson; David A. Kranz; John Kubiatowicz; K. Kurihara; Beng-Hong Lim; Gino K. Maa; Daniel Nussbaum; Mike Parkin; Donald Yeung
Archive | 1991
Kiyoshi Kurihara; David Chaiken
Archive | 1994
John Kubiatowicz; David Chaiken
Archive | 1990
David Chaiken