Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kenneth Mackenzie is active.

Publications


Featured research published by Kenneth Mackenzie.


Symposium on Operating Systems Principles | 1997

Application performance and flexibility on exokernel systems

M. Frans Kaashoek; Dawson R. Engler; Gregory R. Ganger; Héctor M. Briceño; Russell Hunt; David Mazières; Thomas Pinckney; Robert Grimm; John Jannotti; Kenneth Mackenzie

The exokernel operating system architecture safely gives untrusted software efficient control over hardware and software resources by separating management from protection. This paper describes an exokernel system that allows specialized applications to achieve high performance without sacrificing the performance of unmodified UNIX programs. It evaluates the exokernel architecture by measuring end-to-end application performance on Xok, an exokernel for Intel x86-based computers, and by comparing Xok's performance to the performance of two widely-used 4.4BSD UNIX systems (FreeBSD and OpenBSD). The results show that common unmodified UNIX applications can enjoy the benefits of exokernels: applications either perform comparably on Xok/ExOS and the BSD UNIXes, or perform significantly better. In addition, the results show that customized applications can benefit substantially from control over their resources (e.g., a factor of eight for a Web server). This paper also describes insights about the exokernel approach gained through building three different exokernel systems, and presents novel approaches to resource multiplexing.
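As a rough illustration of the management/protection split the abstract describes, here is a minimal C sketch that separates a toy "kernel", which only tracks page ownership, from an application-level library that decides how its pages are used. The call sys_alloc_page() and the libos_heap_t structure are invented for illustration; they are not the real Xok/ExOS interface.

/* Toy sketch of the exokernel split: the kernel protects and multiplexes
 * raw resources; management policy lives in an application-level library OS.
 * All names here are hypothetical, not the actual Xok/ExOS API. */
#include <stdio.h>

#define NPAGES 16
static int page_owner[NPAGES];           /* "kernel" state: -1 means free */

/* Kernel side: securely hand out a raw physical page, nothing more. */
static int sys_alloc_page(int pid) {
    for (int i = 0; i < NPAGES; i++)
        if (page_owner[i] == -1) { page_owner[i] = pid; return i; }
    return -1;
}

/* Library-OS side: the application manages its own pages, e.g. as a
 * custom heap tuned to its own workload. */
typedef struct { int pages[NPAGES]; int n; } libos_heap_t;

static int libos_grow(libos_heap_t *h, int pid) {
    int p = sys_alloc_page(pid);
    if (p >= 0) h->pages[h->n++] = p;    /* management happens in user space */
    return p;
}

int main(void) {
    for (int i = 0; i < NPAGES; i++) page_owner[i] = -1;
    libos_heap_t heap = { .pages = {0}, .n = 0 };
    printf("app-managed page: %d\n", libos_grow(&heap, 42));
    return 0;
}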


Job Scheduling Strategies for Parallel Processing | 1997

Implications of I/O for Gang Scheduled Workloads

Walter Lee; Matthew I. Frank; Victor Lee; Kenneth Mackenzie; Larry Rudolph

The job workloads of general-purpose multiprocessors usually include both compute-bound parallel jobs, which often require gang scheduling, and I/O-bound jobs, which require high CPU priority for the individual gang members of the job in order to achieve interactive response times. Our results indicate that an effective interactive multiprocessor scheduler must be flexible and tailor the priority, time quantum, and extent of gang scheduling to the individual needs of each job.
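A minimal sketch of the kind of per-job tailoring the abstract calls for, assuming an invented job_t descriptor and illustrative thresholds: a job that used little of its recent quantum is treated as I/O-bound and given high priority with a short quantum, while a compute-bound parallel job is gang scheduled with a long quantum.

/* Hedged sketch of per-job scheduling policy; thresholds and fields are
 * illustrative assumptions, not taken from the paper. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    double cpu_fraction;   /* fraction of recent quantum actually used */
    int    nthreads;       /* gang size */
} job_t;

typedef struct {
    int  priority;         /* higher = runs sooner */
    int  quantum_ms;
    bool gang_schedule;    /* co-schedule all gang members at once? */
} policy_t;

static policy_t choose_policy(const job_t *j) {
    policy_t p;
    if (j->cpu_fraction < 0.2) {
        /* I/O-bound: boost the priority of individual gang members for
         * interactive response; no need to co-schedule the whole gang. */
        p.priority = 10; p.quantum_ms = 10; p.gang_schedule = false;
    } else {
        /* Compute-bound parallel job: gang schedule with a long quantum. */
        p.priority = 1;  p.quantum_ms = 100;
        p.gang_schedule = (j->nthreads > 1);
    }
    return p;
}

int main(void) {
    job_t interactive = { .cpu_fraction = 0.05, .nthreads = 8 };
    job_t compute     = { .cpu_fraction = 0.95, .nthreads = 8 };
    printf("interactive: gang=%d\n", choose_policy(&interactive).gang_schedule);
    printf("compute:     gang=%d\n", choose_policy(&compute).gang_schedule);
    return 0;
}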


Proceedings of the IEEE | 1999

The MIT Alewife Machine

Anant Agarwal; Ricardo Bianchini; David Chaiken; Frederic T. Chong; Kirk L. Johnson; David M. Kranz; John Kubiatowicz; Beng-Hong Lim; Kenneth Mackenzie; Donald Yeung

A variety of models for parallel architectures, such as shared memory, message passing, and data flow, have converged in the recent past to a hybrid architecture form called distributed shared memory (DSM). Alewife, an early prototype of such DSM architectures, uses hybrid software and hardware mechanisms to support coherent shared memory, efficient user-level messaging, fine-grain synchronization, and latency tolerance. Alewife supports up to 512 processing nodes connected over a scalable and cost-effective mesh network at a constant cost per node. Four mechanisms combine to achieve Alewife's goals of scalability and programmability: software-extended coherent shared memory provides a global, linear address space; integrated message passing allows compiler and operating system designers to provide efficient communication and synchronization; support for fine-grain computation allows many processors to cooperate on small problem sizes; and latency tolerance mechanisms, including block multithreading and prefetching, mask unavoidable delays due to communication. Extensive results from microbenchmarks, together with over a dozen complete applications running on a 32-node prototype, demonstrate that integrating message passing with shared memory enables a cost-efficient solution to the cache coherence problem and provides a rich set of programming primitives. Our results further show that messaging and shared memory operations are both important because each helps the programmer to achieve the best performance for various machine configurations.
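The sketch below illustrates just one of the mechanisms named above, block multithreading: on a long-latency remote miss the processor switches to another loaded context instead of stalling. The four-context setup and the data structures are assumptions made for illustration, not Alewife's actual implementation.

/* Hedged sketch of block multithreading as a latency-tolerance mechanism:
 * a miss to remote memory triggers a switch to another resident context. */
#include <stdio.h>
#include <stdbool.h>

#define NCONTEXTS 4          /* small set of loaded hardware contexts (assumed) */

typedef struct { int id; bool waiting_on_remote; } context_t;

/* Pick the next runnable context after a remote-memory miss. */
static int switch_on_miss(context_t ctx[NCONTEXTS], int current) {
    ctx[current].waiting_on_remote = true;
    for (int i = 1; i <= NCONTEXTS; i++) {
        int next = (current + i) % NCONTEXTS;
        if (!ctx[next].waiting_on_remote)
            return next;      /* overlap the miss with useful work */
    }
    return current;           /* every context is waiting: stall after all */
}

int main(void) {
    context_t ctx[NCONTEXTS] = { {0, false}, {1, false}, {2, false}, {3, false} };
    int running = 0;
    running = switch_on_miss(ctx, running);   /* context 0 misses remotely */
    printf("now running context %d\n", running);
    return 0;
}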


High-Performance Computer Architecture | 1998

Exploiting two-case delivery for fast protected messaging

Kenneth Mackenzie; John Kubiatowicz; Matthew I. Frank; Walter Lee; Victor Lee; Anant Agarwal; M.F. Kaashoek

We propose and evaluate two complementary techniques to protect and virtualize a tightly-coupled network interface in a multicomputer. The techniques allow efficient, direct application access to network hardware in a multiprogrammed environment while gaining most of the benefits of a memory-based network interface. First, two-case delivery allows an application to receive a message directly from the network hardware in ordinary circumstances, but provides buffering transparently when required for protection. Second, virtual buffering stores messages in virtual memory on demand, providing the convenience of effectively unlimited buffer capacity while keeping actual physical memory consumption low. The evaluation is based on workloads of real and synthetic applications running on a simulator and partly on emulated hardware. The results show that the direct path is also the common path, justifying the use of software buffering. Further results show that physical buffering requirements remain low in our applications despite the use of unacknowledged messages and despite adverse scheduling conditions.
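A hedged sketch of the two-case delivery idea: in the common case an arriving message is handed directly to the application's handler, and in the rare case it is transparently buffered and replayed later. The message format, buffer sizes, and function names below are invented for illustration and do not reflect the system's real interface.

/* Toy two-case delivery: direct handoff when the application can receive,
 * transparent software buffering otherwise. Names are hypothetical. */
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define MSG_BYTES 64
#define BUF_SLOTS 8

typedef struct { char payload[MSG_BYTES]; } msg_t;

static msg_t backing_buf[BUF_SLOTS];      /* stand-in for virtual buffering */
static int   buffered = 0;

/* Case 1 (common): deliver directly into the application's handler. */
static void app_handler(const msg_t *m) { printf("delivered: %s\n", m->payload); }

/* Case 2 (rare): the receiver cannot take the message now, so buffer it. */
static bool deliver(const msg_t *m, bool app_ready) {
    if (app_ready) { app_handler(m); return true; }
    if (buffered < BUF_SLOTS) { backing_buf[buffered++] = *m; return true; }
    return false;                          /* a real system would grow the buffer */
}

static void drain_buffered(void) {         /* replay once the app runs again */
    for (int i = 0; i < buffered; i++) app_handler(&backing_buf[i]);
    buffered = 0;
}

int main(void) {
    msg_t m1, m2;
    strcpy(m1.payload, "fast path");
    strcpy(m2.payload, "buffered path");
    deliver(&m1, true);                     /* application is running */
    deliver(&m2, false);                    /* application descheduled */
    drain_buffered();
    return 0;
}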


International Conference on Supercomputing | 1993

The NuMesh: a modular, scalable communications substrate

Steve Ward; Karim Abdalla; Rajeev Dujari; Michael Fetterman; Frank Honoré; Ricardo Jenez; Philippe Laffont; Kenneth Mackenzie; Chris Metcalf; Milan Singh Minsky; John Nguyen; John Pezaris; Gill A. Pratt; Russell Tessier

Many standardized hardware communication interfaces offer runtime flexibility and configurability at the cost of efficiency. An alternate approach is the use of a highly-efficient, minimal communication element, with as much communication decision-making as possible done at compile time. NuMesh is a packaging and interconnect technology supporting high-bandwidth systolic communications on a 3D nearest-neighbor lattice; our goal is to combine Lego-like modularity with supercomputer performance. To date, the primary focus of the project has been the class of applications whose static communication patterns can be precompiled into independent and carefully choreographed finite state machines running on each node. Several extensions of the NuMesh to more general communication paradigms have been implemented, and the issues involved are under active exploration. This paper presents an overview of our approach, as well as an introduction to our current-generation prototype. We also discuss our software environment and simulation technology, and enumerate some of the applications and programming models we have developed to make full use of the capabilities of the NuMesh.
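To illustrate the idea of compiling static communication patterns into per-node state machines, the sketch below hard-codes a small schedule table that moves words between ports on fixed clock slots. The port names, slot count, and table contents are invented examples rather than actual NuMesh schedules.

/* Toy per-node schedule produced "offline" by a routing compiler: at run
 * time the node just follows the table, never inspecting message headers. */
#include <stdio.h>

enum port { PX_NEG, PX_POS, PY_NEG, PY_POS, PZ_NEG, PZ_POS, LOCAL, NPORTS };

typedef struct { enum port in, out; } slot_move_t;

/* A 4-slot schedule for one node of the 3D nearest-neighbor lattice. */
static const slot_move_t schedule[4] = {
    { PX_NEG, PX_POS },   /* slot 0: pass traffic straight through in +x      */
    { PY_NEG, LOCAL  },   /* slot 1: deliver a word arriving from -y locally   */
    { LOCAL,  PZ_POS },   /* slot 2: inject a locally produced word toward +z  */
    { PX_POS, PY_POS },   /* slot 3: turn a word from +x toward +y             */
};

int main(void) {
    for (int t = 0; t < 8; t++) {                     /* two schedule periods */
        const slot_move_t *m = &schedule[t % 4];
        printf("slot %d: port %d -> port %d\n", t, m->in, m->out);
    }
    return 0;
}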


Proceedings of the US/Japan Workshop on Parallel Symbolic Computing: Languages, Systems, and Applications | 1992

Sparcle: A Multithreaded VLSI Processor for Parallel Computing

Anant Agarwal; Jonathan Babb; David Chaiken; Godfrey D'Souza; Kirk L. Johnson; David A. Kranz; John Kubiatowicz; Beng-Hong Lim; Gino K. Maa; Kenneth Mackenzie; Daniel Nussbaum; Mike Parkin; Donald Yeung

The Sparcle chip will clock at no more than 50 MHz. It has no more than 200K transistors. It does not use the latest technologies and dissipates a paltry 2 watts. It has no on-chip cache, no fancy pads, and only 207 pins. It does not even support multiple-instruction issue. Why, then, do we think this chip is interesting? Sparcle is a processor chip designed for large-scale multiprocessing. Processors suitable for multiprocessing environments must meet several requirements:


International Symposium on Computer Architecture | 1995

The MIT Alewife machine: architecture and performance

Anant Agarwal; Ricardo Bianchini; David Chaiken; Kirk L. Johnson; David A. Kranz; John Kubiatowicz; Beng-Hong Lim; Kenneth Mackenzie; Donald Yeung


Archive | 1994

FUGU: Implementing Translation and Protection in a Multiuser, Multimodel Multiprocessor

Kenneth Mackenzie; John Kubiatowicz; Anant Agarwal; M.F. Kaashoek


Lecture Notes in Computer Science | 2006

Mobile Resource Guarantees and Policies

David Aspinall; Kenneth Mackenzie


Archive | 2007

Mobile Resource Guarantees

Donald Sannella; Martin Hofmann; David Aspinall; Stephen Gilmore; Ian Stark; Lennart Beringer; Hans-Wolfgang Loidl; Kenneth Mackenzie; Alberto Momigliano; Olha Shkaravska

Collaboration


Dive into Kenneth Mackenzie's collaboration.

Top Co-Authors

Anant Agarwal, Massachusetts Institute of Technology
Beng-Hong Lim, Massachusetts Institute of Technology
David Chaiken, Massachusetts Institute of Technology
Kirk L. Johnson, Massachusetts Institute of Technology
Ian Stark, University of Edinburgh