
Publication


Featured research published by Eddie Kohler.


ACM Transactions on Computer Systems | 2000

The Click modular router

Eddie Kohler; Robert Tappan Morris; Benjie Chen; John Jannotti; M. Frans Kaashoek

Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queuing, scheduling, and interfacing with network devices. A router configuration is a directed graph with elements at the vertices; packets flow along the edges of the graph. Several features make individual elements more powerful and complex configurations easier to write, including pull connections, which model packet flow driven by transmitting hardware devices, and flow-based router context, which helps an element locate other interesting elements. Click configurations are modular and easy to extend. A standards-compliant Click IP router has 16 elements on its forwarding path; some of its elements are also useful in Ethernet switches and IP tunnelling configurations. Extending the IP router to support dropping policies, fairness among flows, or Differentiated Services simply requires adding a couple of elements at the right place. On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.
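The element-graph idea behind Click can be sketched in a few lines of C++. This is not Click's actual API (real elements are C++ classes written against Click's packet and port interfaces); it is only a toy illustration, with invented names, of elements as graph vertices that push packets along edges to their downstream neighbors.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Toy stand-in for a packet.
struct Packet { std::string payload; };

// An "element" consumes a packet and pushes it to its downstream neighbors.
struct Element {
    std::vector<Element*> out;                 // edges of the configuration graph
    virtual void push(Packet& p) { forward(p); }
    void forward(Packet& p) { for (Element* e : out) e->push(p); }
    virtual ~Element() = default;
};

// Two trivial elements: a classifier-like filter and a counter.
struct DropEmpty : Element {
    void push(Packet& p) override { if (!p.payload.empty()) forward(p); }
};
struct Counter : Element {
    int count = 0;
    void push(Packet& p) override { ++count; forward(p); }
};

int main() {
    DropEmpty filter;
    Counter counter;
    filter.out.push_back(&counter);            // filter -> counter edge

    Packet a{"hello"}, b{""};
    filter.push(a);
    filter.push(b);                            // dropped by the filter
    std::cout << "packets counted: " << counter.count << "\n";  // prints 1
}
```

Click's pull connections invert this flow: a downstream element, such as one feeding a transmitting device, asks its upstream neighbors for the next packet, which this push-only sketch does not model.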


Symposium on Operating Systems Principles | 1999

The Click modular router

Robert Tappan Morris; Eddie Kohler; John Jannotti; M. Frans Kaashoek

Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devices. Complete configurations are built by connecting elements into a graph; packets flow along the graph's edges. Several features make individual elements more powerful and complex configurations easier to write, including pull processing, which models packet flow driven by transmitting interfaces, and flow-based router context, which helps an element locate other interesting elements. We demonstrate several working configurations, including an IP router and an Ethernet bridge. These configurations are modular---the IP router has 16 elements on the forwarding path---and easy to extend by adding additional elements, which we demonstrate with augmented configurations. On commodity PC hardware running Linux, the Click IP router can forward 64-byte packets at 73,000 packets per second, just 10% slower than Linux alone.


International Conference on Mobile Systems, Applications, and Services | 2005

A dynamic operating system for sensor nodes

Chih-Chieh Han; Ram Kumar; Roy Shea; Eddie Kohler; Mani B. Srivastava

Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. They must use little energy and be robust to environmental conditions, while also providing common services that make it easy to write applications. In TinyOS, the current state of the art in sensor node operating systems, reusable components implement common services, but each node runs a single statically-linked system image, making it hard to run multiple applications or incrementally update applications. We present SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum. SOS consists of dynamically-loaded modules and a common kernel, which implements messaging, dynamic memory, and module loading and unloading, among other services. Modules are not processes: they are scheduled cooperatively and there is no memory protection. Nevertheless, the system protects against common module bugs using techniques such as typed entry points, watchdog timers, and primitive resource garbage collection. Individual modules can be added and removed with minimal system interruption. We describe SOS's design and implementation, discuss tradeoffs, and compare it with TinyOS and with the Maté virtual machine. Our evaluation shows that despite the dynamic nature of SOS and its higher-level kernel interface, its long-term total usage is nearly identical to that of systems such as Maté and TinyOS.
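A hedged sketch of the dynamic-module idea, not the real SOS interfaces: modules register a message handler with a small kernel, the kernel dispatches messages to whichever modules are currently loaded, and modules can be added or removed at run time without replacing a system image. All names below are invented for illustration.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>

// Invented types for illustration; not the SOS kernel interface.
struct Message { uint8_t type; std::string data; };
using Handler = std::function<void(const Message&)>;

// A minimal "kernel" that loads and unloads modules dynamically and
// dispatches messages to whichever modules are present.
class Kernel {
    std::map<std::string, Handler> modules_;
public:
    void load(const std::string& name, Handler h)   { modules_[name] = std::move(h); }
    void unload(const std::string& name)            { modules_.erase(name); }
    void post(const Message& m) {
        for (auto& [name, handler] : modules_) handler(m);   // cooperative, no preemption
    }
};

int main() {
    Kernel k;
    k.load("sampler", [](const Message& m) {
        if (m.type == 1) std::cout << "sampler got: " << m.data << "\n";
    });
    k.post({1, "temp=21.5"});
    k.unload("sampler");          // module removed without rebuilding a system image
    k.post({1, "temp=22.0"});     // no handler left; message is dropped
}
```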


International Conference on Embedded Networked Sensor Systems | 2005

Sympathy for the sensor network debugger

Nithya Ramanathan; Kevin Chang; Rahul Kapur; Lewis Girod; Eddie Kohler; Deborah Estrin

Being embedded in the physical world, sensor networks present a wide range of bugs and misbehavior qualitatively different from those in most distributed systems. Unfortunately, due to resource constraints, programmers must investigate these bugs with only limited visibility into the application. This paper presents the design and evaluation of Sympathy, a tool for detecting and debugging failures in sensor networks. Sympathy selects metrics that enable efficient failure detection, and includes an algorithm that root-causes failures and localizes their sources in order to reduce overall failure notifications and point the user to a small number of probable causes. We describe Sympathy and evaluate its performance through fault injection and by debugging an active application, ESS, in simulation and deployment. We show that for a broad class of data-gathering applications, it is possible to detect and diagnose failures by collecting and analyzing a minimal set of metrics at a centralized sink. We have found that there is a tradeoff between notification latency and detection accuracy; that additional metrics traffic does not always improve notification latency; and that Sympathy's process of failure localization reduces overall failure notifications.
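A minimal sketch of sink-side failure detection from collected metrics, with invented metric names and thresholds; Sympathy's actual metric set, detection rules, and root-cause algorithm are more involved.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Invented per-node metrics that a sink might collect periodically.
struct NodeMetrics {
    int    id;
    double packets_received_per_min;  // data seen at the sink from this node
    int    neighbors_heard;           // connectivity reported by the node
    bool   route_to_sink;             // whether the node reports a route
};

// Toy root-cause ordering: report the deepest plausible cause so the user
// sees one probable cause instead of a cascade of notifications.
std::string diagnose(const NodeMetrics& m) {
    if (m.neighbors_heard == 0)            return "node isolated (no neighbors)";
    if (!m.route_to_sink)                  return "no route to sink";
    if (m.packets_received_per_min < 0.1)  return "node or application not sending data";
    return "ok";
}

int main() {
    std::vector<NodeMetrics> metrics = {
        {1, 12.0, 4, true}, {2, 0.0, 0, false}, {3, 0.0, 3, true}};
    for (const auto& m : metrics)
        std::cout << "node " << m.id << ": " << diagnose(m) << "\n";
}
```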


Symposium on Operating Systems Principles | 2007

Information flow control for standard OS abstractions

Maxwell N. Krohn; Alexander Yip; Micah Z. Brodsky; Natan Cliffer; M. Frans Kaashoek; Eddie Kohler; Robert Tappan Morris

Decentralized Information Flow Control (DIFC) is an approach to security that allows application writers to control how data flows between the pieces of an application and the outside world. As applied to privacy, DIFC allows untrusted software to compute with private data while trusted security code controls the release of that data. As applied to integrity, DIFC allows trusted code to protect untrusted software from unexpected malicious inputs. In either case, only bugs in the trusted code, which tends to be small and isolated, can lead to security violations. We present Flume, a new DIFC model that applies at the granularity of operating system processes and standard OS abstractions (e.g., pipes and file descriptors). Flume was designed for simplicity of mechanism, to ease DIFC's use in existing applications, and to allow safe interaction between conventional and DIFC-aware processes. Flume runs as a user-level reference monitor on Linux. A process confined by Flume cannot perform most system calls directly; instead, an interposition layer replaces system calls with IPC to the reference monitor, which enforces data flow policies and performs safe operations on the process's behalf. We ported a complex web application (MoinMoin Wiki) to Flume, changing only 2% of the original code. Performance measurements show a 43% slowdown on read workloads and a 34% slowdown on write workloads, which are mostly due to Flume's user-level implementation.
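The core flow check in a DIFC system can be sketched as a set operation on labels: data may flow from a sender to a receiver when every secrecy tag held by the sender is also held by the receiver. The sketch below illustrates only that rule, under invented tag names; Flume's actual model also includes integrity tags and capabilities that let authorized processes add or drop tags (and thereby declassify data).

```cpp
#include <algorithm>
#include <iostream>
#include <set>
#include <string>

using Label = std::set<std::string>;   // a set of tags, e.g. {"alice_private"}

// Secrecy rule (sketch): p may send to q only if every secrecy tag on p
// is also on q, so tainted data can never reach a less-tainted process.
bool can_flow(const Label& sender_secrecy, const Label& receiver_secrecy) {
    return std::includes(receiver_secrecy.begin(), receiver_secrecy.end(),
                         sender_secrecy.begin(), sender_secrecy.end());
}

int main() {
    Label worker       = {"alice_private"};  // untrusted code touching Alice's data
    Label exporter     = {};                 // process allowed to write to the network
    Label declassifier = {"alice_private"};  // trusted code holding Alice's tag

    std::cout << can_flow(worker, exporter)     << "\n";  // 0: blocked, would leak
    std::cout << can_flow(worker, declassifier) << "\n";  // 1: allowed
}
```

In Flume, checks of this kind are applied by the user-level reference monitor at IPC time; only processes holding the appropriate capabilities may drop a tag and release the data.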


Symposium on Operating Systems Principles | 2005

Labels and event processes in the Asbestos operating system

Petros Efstathopoulos; Maxwell N. Krohn; Steve Vandebogart; Cliff Frey; David A. Ziegler; Eddie Kohler; David Mazières; M. Frans Kaashoek; Robert Tappan Morris

Asbestos, a new prototype operating system, provides novel labeling and isolation mechanisms that help contain the effects of exploitable software flaws. Applications can express a wide range of policies with Asbestos's kernel-enforced label mechanism, including controls on inter-process communication and system-wide information flow. A new event process abstraction provides lightweight, isolated contexts within a single process, allowing the same process to act on behalf of multiple users while preventing it from leaking any single user's data to any other user. A Web server that uses Asbestos labels to isolate user data requires about 1.5 memory pages per user, demonstrating that additional security can come at an acceptable cost.
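A rough sketch of the event-process idea, with invented names: one OS-level process keeps a separate context per user and handles each request only inside the matching context, so one user's state is never visible to another. Asbestos's real mechanism is kernel-enforced through labels and is far richer than this.

```cpp
#include <iostream>
#include <map>
#include <string>

// Invented illustration of per-user isolated contexts inside one process.
struct EventProcess {
    std::string user;                          // the principal this context acts for
    std::map<std::string, std::string> state;  // private to this user
};

class Server {
    std::map<std::string, EventProcess> contexts_;  // one lightweight context per user
public:
    std::string handle(const std::string& user, const std::string& key,
                       const std::string& value) {
        EventProcess& ep = contexts_[user];    // a request only ever touches its own context
        ep.user = user;
        if (!value.empty()) ep.state[key] = value;
        auto it = ep.state.find(key);
        return it == ep.state.end() ? "(none)" : it->second;
    }
};

int main() {
    Server s;
    s.handle("alice", "draft", "secret plan");
    std::cout << s.handle("bob", "draft", "") << "\n";   // "(none)": bob cannot see alice's data
}
```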


ACM Special Interest Group on Data Communication | 2006

Designing DCCP: congestion control without reliability

Eddie Kohler; Mark Handley; Sally Floyd

Fast-growing Internet applications like streaming media and telephony prefer timeliness to reliability, making TCP a poor fit. Unfortunately, UDP, the natural alternative, lacks congestion control. High-bandwidth UDP applications must implement congestion control themselves, a difficult task, or risk rendering congested networks unusable. We set out to ease the safe deployment of these applications by designing a congestion-controlled unreliable transport protocol. The outcome, the Datagram Congestion Control Protocol or DCCP, adds to a UDP-like foundation the minimum mechanisms necessary to support congestion control. We thought those mechanisms would resemble TCP's, but without reliability and, especially, cumulative acknowledgements, we had to reconsider almost every aspect of TCP's design. The resulting protocol sheds light on how congestion control interacts with unreliable transport, how modern network constraints impact protocol design, and how TCP's reliable bytestream semantics intertwine with its other mechanisms, including congestion control.
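One of DCCP's standardized congestion control mechanisms, CCID 3 (TFRC), paces sending at roughly the rate a TCP connection would achieve under the same loss rate and round-trip time. The sketch below computes that TCP-friendly rate from the standard throughput equation; treat the constants and simplifications (for example, t_RTO = 4*RTT) as an approximation of the TFRC specification, not a faithful implementation.

```cpp
#include <cmath>
#include <iostream>

// TCP-friendly rate equation used by TFRC-style congestion control (sketch).
//   s     packet size in bytes
//   rtt   round-trip time in seconds
//   p     loss event rate (0 < p <= 1)
// Returns an allowed sending rate in bytes per second.
double tfrc_rate(double s, double rtt, double p) {
    const double t_rto = 4.0 * rtt;                      // common simplification
    const double term1 = rtt * std::sqrt(2.0 * p / 3.0);
    const double term2 = t_rto * (3.0 * std::sqrt(3.0 * p / 8.0)) * p * (1.0 + 32.0 * p * p);
    return s / (term1 + term2);
}

int main() {
    // 1460-byte packets, 100 ms RTT, 1% loss: on the order of a megabit per second.
    double bps = tfrc_rate(1460.0, 0.100, 0.01) * 8.0;
    std::cout << "allowed rate ~ " << bps / 1e6 << " Mbit/s\n";
}
```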


ACM Special Interest Group on Data Communication | 2003

Internet research needs better models

Sally Floyd; Eddie Kohler

Networking researchers work from mental models of the Internet’s important properties. The scenarios used in simulations and experiments reveal aspects of these mental models (including our own), often including one or more of the following implicit assumptions: Flows live for a long time and transfer a lot of data. Simple topologies, like a “dumbbell” topology with one congested link, are sufficient to study many traffic properties. Flows on the congested link share a small range of round-trip times. Most data traffic across the link is one-way; reverse-path traffic is rarely congested. All of these modeling assumptions affect simulation and experimental results, and therefore our evaluations of research. But none of them are confirmed by measurement studies, and some are actively wrong. Some divergences from reality are unimportant, in that they don’t affect the validity of simulation results, and simple models help us understand the underlying dynamics of our systems. However, as a community we do not yet understand which aspects of models affect fundamental system behavior and which aspects can safely be ignored. It is our belief that lack of good measurements, lack of tools for evaluating measurement results and applying their results to models, and lack of diverse and well-understood simulation scenarios based on these models are holding back the field. We need a much richer understanding of the range of realistic models, and of the likely relevance of different model parameters to network performance.


ACM Transactions on Sensor Networks | 2009

Sensor network data fault types

Kevin Ni; Nithya Ramanathan; Mohamed Nabil Hajj Chehade; Laura Balzano; Sheela Nair; Sadaf Zahedi; Eddie Kohler; Gregory J. Pottie; Mark Hansen; Mani B. Srivastava

This tutorial presents a detailed study of sensor faults that occur in deployed sensor networks and a systematic approach to model these faults. We begin by reviewing the fault detection literature for sensor networks. We draw from current literature, our own experience, and data collected from scientific deployments to develop a set of commonly used features useful in detecting and diagnosing sensor faults. We use this feature set to systematically define commonly observed faults, and provide examples of each of these faults from sensor data collected at recent deployments.
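A toy sketch of rule-based checks for two fault patterns commonly discussed for deployed sensor data, a single-sample spike and a stuck-at (constant) run of readings; the feature names and thresholds are invented here, and the paper's feature set and fault definitions are more systematic.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Flag a "short" fault: a single sample that jumps far from its neighbors.
std::vector<std::size_t> short_faults(const std::vector<double>& x, double jump) {
    std::vector<std::size_t> out;
    for (std::size_t i = 1; i + 1 < x.size(); ++i)
        if (std::fabs(x[i] - x[i - 1]) > jump && std::fabs(x[i] - x[i + 1]) > jump)
            out.push_back(i);
    return out;
}

// Flag a "constant" (stuck-at) fault: a long run of identical readings.
bool constant_fault(const std::vector<double>& x, std::size_t min_run) {
    std::size_t run = 1;
    for (std::size_t i = 1; i < x.size(); ++i) {
        run = (x[i] == x[i - 1]) ? run + 1 : 1;
        if (run >= min_run) return true;
    }
    return false;
}

int main() {
    std::vector<double> temp = {21.1, 21.2, 55.0, 21.3, 21.3, 21.3, 21.3, 21.3};
    for (auto i : short_faults(temp, 10.0)) std::cout << "short fault at sample " << i << "\n";
    std::cout << "stuck-at run present: " << constant_fault(temp, 5) << "\n";
}
```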


Symposium on Operating Systems Principles | 2013

Speedy transactions in multicore in-memory databases

Stephen Tu; Wenting Zheng; Eddie Kohler; Barbara Liskov; Samuel Madden

Silo is a new in-memory database that achieves excellent performance and scalability on modern multicore machines. Silo was designed from the ground up to use system memory and caches efficiently. For instance, it avoids all centralized contention points, including that of centralized transaction ID assignment. Silo's key contribution is a commit protocol based on optimistic concurrency control that provides serializability while avoiding all shared-memory writes for records that were only read. Though this might seem to complicate the enforcement of a serial order, correct logging and recovery is provided by linking periodically-updated epochs with the commit protocol. Silo provides the same guarantees as any serializable database without unnecessary scalability bottlenecks or much additional latency. Silo achieves almost 700,000 transactions per second on a standard TPC-C workload mix on a 32-core machine, as well as near-linear scalability. Considered per core, this is several times higher than previously reported results.
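The flavor of such a commit protocol can be sketched as textbook optimistic concurrency control: lock the write set, re-check that every record in the read set still carries the version observed during execution and is not locked by another transaction, then install the writes under a new version. The code below is a generic OCC sketch with invented types; it omits Silo's epoch-based transaction IDs, logging and recovery, and many other details.

```cpp
#include <atomic>
#include <cstdint>
#include <iostream>
#include <vector>

// Generic OCC commit sketch; not Silo's actual data structures.
struct Record {
    std::atomic<uint64_t> version{0};   // even = unlocked, odd = locked by a writer
    int value = 0;
};

struct ReadEntry  { Record* rec; uint64_t seen_version; };   // version observed at read time
struct WriteEntry { Record* rec; int new_value; };

bool commit(std::vector<ReadEntry>& reads, std::vector<WriteEntry>& writes) {
    // Phase 1: lock the write set (a real system orders the locks to avoid deadlock).
    for (auto& w : writes) {
        uint64_t v;
        do {
            v = w.rec->version.load() & ~1ull;                     // expect an unlocked version
        } while (!w.rec->version.compare_exchange_weak(v, v | 1));
    }
    // Phase 2: validate the read set; pure reads take no locks and write no shared memory.
    for (auto& r : reads) {
        uint64_t v = r.rec->version.load();
        bool locked_by_us = false;
        for (auto& w : writes) locked_by_us |= (w.rec == r.rec);
        if ((v & ~1ull) != r.seen_version || ((v & 1) && !locked_by_us)) {
            for (auto& w : writes) w.rec->version.fetch_sub(1);    // unlock and abort
            return false;
        }
    }
    // Phase 3: install new values and publish a fresh even version, releasing the locks.
    for (auto& w : writes) {
        w.rec->value = w.new_value;
        w.rec->version.fetch_add(1);                               // odd -> next even version
    }
    return true;
}

int main() {
    Record r;
    std::vector<ReadEntry>  reads  = {{&r, r.version.load()}};
    std::vector<WriteEntry> writes = {{&r, 42}};
    std::cout << "committed: " << commit(reads, writes) << ", value: " << r.value << "\n";
}
```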

Collaboration


Dive into Eddie Kohler's collaborations.

Top Co-Authors

M. Frans Kaashoek (Massachusetts Institute of Technology)
Robert Tappan Morris (Massachusetts Institute of Technology)
Sally Floyd (International Computer Science Institute)
Massimiliano Antonio Poletto (Massachusetts Institute of Technology)
Ramesh Govindan (University of Southern California)
Lewis Girod (University of California)