Publications

Featured research published by Rebecca Isaacs.


Symposium on Operating Systems Principles | 2009

The multikernel: a new OS architecture for scalable multicore systems

Andrew Baumann; Paul Barham; Pierre-Evariste Dagand; Tim Harris; Rebecca Isaacs; Simon Peter; Timothy Roscoe; Adrian Schüpbach; Akhilesh Singhania

Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and I/O configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants, poses serious challenges for operating system structures. We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing. We have implemented a multikernel OS to show that the approach is promising, and we describe how traditional scalability problems for operating systems (such as memory management) can be effectively recast using messages and can exploit insights from distributed systems and networking. An evaluation of our prototype on multicore systems shows that, even on present-day machines, the performance of a multikernel is comparable with a conventional OS, and can scale better to support future hardware.
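The core idea of the abstract — cores share nothing and keep OS state consistent only by exchanging messages — can be illustrated with a toy sketch. This is a hypothetical Python analogy (threads standing in for cores, queues for inter-core channels), not Barrelfish code; the class and message names are invented.

```python
import queue
import threading

class Core:
    """A stand-in for one core running its own kernel replica."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.inbox = queue.Queue()   # inter-core message channel
        self.mappings = {}           # replicated OS state (e.g. page mappings)

    def send(self, other, msg):
        other.inbox.put(msg)

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:          # shutdown sentinel
                break
            op, key, value = msg
            if op == "map":          # apply a replicated state update
                self.mappings[key] = value

cores = [Core(i) for i in range(2)]
threads = [threading.Thread(target=c.run) for c in cores]
for t in threads:
    t.start()

# Core 0 updates a mapping locally and propagates it by message,
# never by writing to memory that core 1 reads directly.
cores[0].mappings["0x1000"] = "frame42"
cores[0].send(cores[1], ("map", "0x1000", "frame42"))

for c in cores:
    c.inbox.put(None)
for t in threads:
    t.join()

print(cores[1].mappings["0x1000"])
```

The point of the structure is that each replica applies updates in the order its channel delivers them, so consistency becomes a message-ordering problem rather than a lock-contention problem.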


Symposium on Operating Systems Principles | 2013

Naiad: a timely dataflow system

Derek Gordon Murray; Frank McSherry; Rebecca Isaacs; Michael Isard; Paul Barham; Martín Abadi

Naiad is a distributed system for executing data-parallel, cyclic dataflow programs. It offers the high throughput of batch processors, the low latency of stream processors, and the ability to perform iterative and incremental computations. Although existing systems offer some of these features, applications that require all three have relied on multiple platforms, at the expense of efficiency, maintainability, and simplicity. Naiad resolves the complexities of combining these features in one framework. A new computational model, timely dataflow, underlies Naiad and captures opportunities for parallelism across a wide class of algorithms. This model enriches dataflow computation with timestamps that represent logical points in the computation and provide the basis for an efficient, lightweight coordination mechanism. We show that many powerful high-level programming models can be built on Naiad's low-level primitives, enabling such diverse tasks as streaming data analysis, iterative machine learning, and interactive graph mining. Naiad outperforms specialized systems in their target application domains, and its unique features enable the development of new high-performance applications.
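The timestamp-and-notification idea in the abstract can be sketched in a few lines: records carry logical timestamps, an operator buffers them, and the coordination mechanism notifies the operator once no earlier records can arrive, at which point results for that time are final. This is a hypothetical illustration of the concept, not Naiad's actual API; the operator below is invented.

```python
from collections import defaultdict

class CountByTime:
    """Buffers timestamped records; finalizes a count per timestamp
    only when notified that the timestamp is complete."""
    def __init__(self):
        self.pending = defaultdict(int)
        self.results = {}

    def on_recv(self, timestamp, record):
        self.pending[timestamp] += 1     # buffer until the time is complete

    def on_notify(self, timestamp):
        # The frontier has passed `timestamp`: safe to emit a final count.
        self.results[timestamp] = self.pending.pop(timestamp, 0)

op = CountByTime()
for t, rec in [(1, "a"), (2, "b"), (1, "c"), (2, "d"), (2, "e")]:
    op.on_recv(t, rec)

# Notifications are delivered in timestamp order by the coordination layer.
op.on_notify(1)
op.on_notify(2)
print(op.results)  # {1: 2, 2: 3}
```

The separation between receiving data (asynchronous, high-throughput) and being notified of completeness (fine-grained synchronization) is what lets one system serve both streaming and batch workloads.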


European Conference on Computer Systems | 2008

30 seconds is not enough!: a study of operating system timer usage

Simon Peter; Andrew Baumann; Timothy Roscoe; Paul Barham; Rebecca Isaacs

The basic system timer facilities used by applications and OS kernels for scheduling timeouts and periodic activities have remained largely unchanged for decades, while hardware architectures and application loads have changed radically. This raises concerns about CPU overhead, power management, and application responsiveness. In this paper we study how kernel timers are used in the Linux and Vista kernels, and the instrumentation challenges and tradeoffs inherent in conducting such a study. We show how the same timer facilities serve at least five distinct purposes, and examine their performance characteristics under a selection of application workloads. We show that many timer parameters supplied by application and kernel programmers are somewhat arbitrary, and examine the potential benefit of adaptive timeouts. We also discuss the further implications of our results, both for enhancements to the system timer functionality in existing kernels, and for the clean-slate design of a system timer subsystem for new OS kernels, including the extent to which applications might require such an interface at all.
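To make the idea of an adaptive timeout concrete, here is a toy estimator in the spirit of TCP's retransmission-timeout calculation (RFC 6298): track a smoothed mean and deviation of observed completion times and derive the timeout from them, instead of hard-coding a constant. This is an illustration of adaptivity in general, not a mechanism from the paper.

```python
class AdaptiveTimeout:
    """Timeout = smoothed estimate + 4 * smoothed deviation,
    updated from each observed completion time."""
    def __init__(self, initial=1.0, alpha=0.125, beta=0.25):
        self.srtt = initial          # smoothed completion-time estimate
        self.rttvar = initial / 2    # smoothed deviation
        self.alpha, self.beta = alpha, beta

    def observe(self, sample):
        self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - sample)
        self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample

    def timeout(self):
        return self.srtt + 4 * self.rttvar

est = AdaptiveTimeout()
# Feed a stream of observed completion times around 110 ms.
for s in [0.10, 0.12, 0.11] * 20:
    est.observe(s)

# The timeout has converged toward the observed latencies rather than
# staying at an arbitrary programmer-supplied constant.
print(round(est.timeout(), 3))
```

A fixed "30 seconds" would be off by two orders of magnitude for this workload; the estimator tracks what the workload actually does.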


ACM SIGOPS European Workshop | 2004

Request extraction in Magpie: events, schemas and temporal joins

Rebecca Isaacs; Paul Barham; James R. Bulpin; Richard Mortier; Dushyanth Narayanan

This paper addresses the problem of extracting individual request activity from interleaved event traces. We present a new technique for event correlation which applies a form of temporal join over timestamped, parameterized event streams in order to identify the events pertaining to an individual request. Event schemas ensure that the request extraction mechanism applies to any server application or service without modification, and is robust against future changes in application behavior. This work is part of the Magpie project [2], which is developing infrastructure to track requests end-to-end in a distributed system.
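The extraction step described above can be sketched as stitching together events that share attribute values: an event joins a request when one of its parameters (a connection id, a thread id) was previously bound to that request, and may in turn bind new parameters. The event names, attributes, and schema below are invented for illustration; Magpie's actual schemas, event sources, and time-windowed joins differ.

```python
# Interleaved, timestamped, parameterized events from two concurrent requests.
events = [
    {"ts": 1, "name": "HttpRecv", "conn": 7},
    {"ts": 2, "name": "Dispatch", "conn": 7, "tid": 12},
    {"ts": 3, "name": "DiskRead", "tid": 12},
    {"ts": 4, "name": "HttpRecv", "conn": 8},
    {"ts": 5, "name": "Dispatch", "conn": 8, "tid": 13},
    {"ts": 6, "name": "HttpSend", "tid": 12},
]

binding = {}    # (attribute, value) -> request id, e.g. ("conn", 7) -> 0
requests = {}   # request id -> ordered event names
next_req = 0

for ev in sorted(events, key=lambda e: e["ts"]):
    if ev["name"] == "HttpRecv":          # schema: a request starts here
        rid = next_req
        next_req += 1
    else:
        # Join: find a request via any attribute already bound to one.
        rid = next((binding[(k, ev[k])] for k in ("conn", "tid")
                    if k in ev and (k, ev[k]) in binding), None)
    if rid is None:
        continue
    # An event may bind further attributes, extending the join chain
    # (e.g. Dispatch links a connection to a worker thread).
    for k in ("conn", "tid"):
        if k in ev:
            binding[(k, ev[k])] = rid
    requests.setdefault(rid, []).append(ev["name"])

print(requests)
```

The "temporal" part of the real join matters because attribute values such as thread ids are reused over time, so bindings must be scoped to time intervals; this sketch omits that and assumes each value belongs to one request.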


ACM Special Interest Group on Data Communication | 2005

Anemone: using end-systems as a rich network management platform

Richard Mortier; Rebecca Isaacs; Paul Barham

Enterprise networks contain hundreds, if not thousands, of cooperative end-systems. We advocate devoting a small fraction of their idle cycles, free disk space and network bandwidth to create Anemone, a platform for network management. In contrast to current approaches which rely on traffic statistics provided by network devices, Anemone combines end-system instrumentation with routing protocol collection to provide a semantically rich view of the network.


Communications of the ACM | 2016

Incremental, iterative data processing with timely dataflow

Derek Gordon Murray; Frank McSherry; Michael Isard; Rebecca Isaacs; Paul Barham; Martín Abadi

We describe the timely dataflow model for distributed computation and its implementation in the Naiad system. The model supports stateful iterative and incremental computations. It enables both low-latency stream processing and high-throughput batch processing, using a new approach to coordination that combines asynchronous and fine-grained synchronous execution. We describe two of the programming frameworks built on Naiad: GraphLINQ for parallel graph processing, and differential dataflow for nested iterative and incremental computations. We show that a general-purpose system can achieve performance that matches, and sometimes exceeds, that of specialized systems.


Eurographics Workshop on Parallel Graphics and Visualization | 2016

Interacting with large distributed datasets using sketch

Mihai Budiu; Rebecca Isaacs; Derek Gordon Murray; Gordon D. Plotkin; Paul Barham; Samer Al-Kiswany; Yazan Boshmaf; Qingzhou Luo; Alexandr Andoni

We present Sketch, a library and distributed runtime for building interactive tools that explore large datasets distributed across multiple machines. We have built several applications using Sketch; here we describe a billion-row spreadsheet and a distributed-systems performance analyzer. Sketch applications allow interactive, responsive exploration of complex distributed datasets, scaling effectively to use large computational resources.


Technical report / ETH Zurich, Department of Computer Science, Systems Group | 2012

Efficient Data-parallel Computing on Small Heterogeneous Clusters

Simon Peter; Rebecca Isaacs; Paul Barham; Richard Black; Timothy Roscoe

Cluster-based data-parallel frameworks such as MapReduce, Hadoop, and Dryad are increasingly popular for a large class of compute-intensive tasks. Such systems are designed for large-scale clusters, and employ several techniques to decrease the run time of jobs in the presence of failures, slow machines, and other effects. In this paper, we apply Dryad to smaller-scale, “ad-hoc” clusters such as those formed by aggregating the servers and workstations in a small office. We first show that, while Dryad’s greedy scheduling algorithm performs well at scale, it performs significantly worse in a small (5–10 machine) cluster environment where nodes have widely differing performance characteristics. We further show that in such cases, performance models of dataflow operators can be constructed which predict runtimes of vertex processes with sufficient accuracy to allow a more intelligent planner to achieve significant performance gains for a variety of jobs, and we show how to efficiently construct such models. Our system enhances the DryadLINQ data-parallel language compiler with a planner/optimizer implemented using constraint programming, and can exploit our operator models to significantly enhance the performance of parallel jobs on ad-hoc clusters.
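The model-then-plan idea can be sketched very simply: fit a per-machine cost model for an operator (runtime as a linear function of input size) from past runs, then place each vertex using the model's predictions. The paper's planner uses constraint programming over richer models; this toy stand-in uses least squares and a greedy choice, and all names and numbers are invented.

```python
def fit_linear(samples):
    """Ordinary least squares for runtime = a + b * input_size."""
    n = len(samples)
    sx = sum(s for s, _ in samples)
    sy = sum(t for _, t in samples)
    sxx = sum(s * s for s, _ in samples)
    sxy = sum(s * t for s, t in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Observed (input_bytes, seconds) per machine: one fast node, one slow node,
# as found in heterogeneous ad-hoc clusters.
history = {
    "fast": [(1e6, 1.1), (2e6, 2.0), (4e6, 4.1)],
    "slow": [(1e6, 3.2), (2e6, 6.1), (4e6, 12.0)],
}
models = {m: fit_linear(h) for m, h in history.items()}

def predict(machine, size):
    a, b = models[machine]
    return a + b * size

# Place a 3 MB vertex where the model predicts the shortest runtime.
job_size = 3e6
placement = min(models, key=lambda m: predict(m, job_size))
print(placement)
```

A scheduler that ignores heterogeneity would treat both nodes as equivalent; even this crude model captures the 3x speed difference that dominates small-cluster schedules.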


ACM Special Interest Group on Data Communication | 2002

A perspective on how ATM lost control

Simon Crosby; Sean Rooney; Rebecca Isaacs; Herbert Bos

Contrary to the initial high expectations, ATM failed to become the universal network technology covering all services and running from the desktop to the backbone. This paper tries to identify the technological problems that contributed to this failure.


Operating Systems Design and Implementation | 2004

Using Magpie for request extraction and workload modelling

Paul Barham; Austin Donnelly; Rebecca Isaacs; Richard Mortier

Collaboration

Top co-author: Simon Peter (University of Washington).