
Publications


Featured research published by Ben Pfaff.


ACM Special Interest Group on Data Communication | 2008

NOX: towards an operating system for networks

Natasha Gude; Teemu Koponen; Justin Pettit; Ben Pfaff; Martin Casado; Nick McKeown; Scott Shenker

As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. 
A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over high-level names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale?
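The abstract's example of the centralized programming model, running Dijkstra over a global view of the topology rather than a distributed Bellman-Ford, can be sketched as follows. This is an illustration only, not NOX code; the four-switch topology and link costs are hypothetical.

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra over a centralized view of the topology.

    graph: {node: {neighbor: link_cost}} -- the kind of global network
    state a network operating system would maintain for applications.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical four-switch topology.
topo = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1},
    "s3": {"s1": 4, "s2": 1, "s4": 2},
    "s4": {"s3": 2},
}
print(shortest_paths(topo, "s1"))  # {'s1': 0, 's2': 1, 's3': 2, 's4': 4}
```

In a distributed setting each switch would instead converge on these distances by exchanging vectors with neighbors; the centralized model collapses that protocol into one ordinary function call.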


Computer and Communications Security | 2004

On the effectiveness of address-space randomization

Hovav Shacham; Matthew Page; Ben Pfaff; Eu-Jin Goh; Nagendra Modadugu; Dan Boneh

Address-space randomization is a technique used to fortify systems against buffer overflow attacks. The idea is to introduce artificial diversity by randomizing the memory location of certain system components. This mechanism is available for both Linux (via PaX ASLR) and OpenBSD. We study the effectiveness of address-space randomization and find that its utility on 32-bit architectures is limited by the number of bits available for address randomization. In particular, we demonstrate a <i>derandomization attack</i> that will convert any standard buffer-overflow exploit into an exploit that works against systems protected by address-space randomization. The resulting exploit is as effective as the original exploit, although it takes a little longer to compromise a target machine: on average 216 seconds to compromise Apache running on a Linux PaX ASLR system. The attack does not require running code on the stack. We also explore various ways of strengthening address-space randomization and point out weaknesses in each. Surprisingly, increasing the frequency of re-randomizations adds at most 1 bit of security. Furthermore, compile-time randomization appears to be more effective than runtime randomization. We conclude that, on 32-bit architectures, the only benefit of PaX-like address-space randomization is a small slowdown in worm propagation speed. The cost of randomization is extra complexity in system support.
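The entropy argument behind the derandomization attack can be made concrete with a back-of-the-envelope calculation (an illustration, not the paper's code). Assuming 16 bits of randomness for the region being guessed, an attacker who scans candidate offsets sequentially needs about 2^15 probes on average, and independent re-randomization between probes only doubles that, which is the "at most 1 bit" observation.

```python
def expected_probes(bits, sequential=True):
    """Expected number of guesses to hit a secret n-bit random offset.

    sequential=True models scanning each candidate offset once (as in a
    derandomization attack); False models independent random guesses,
    e.g. when the target re-randomizes after every failed probe.
    """
    n = 2 ** bits
    return (n + 1) / 2 if sequential else float(n)

# Assuming 16 bits of randomness on a 32-bit system:
print(expected_probes(16))  # 32768.5 probes on average

# Re-randomizing between probes only doubles the attacker's work,
# i.e. it adds at most one bit of security:
print(expected_probes(16, sequential=False) / expected_probes(16))
```

The ratio printed last is just under 2.0, matching the abstract's claim that increasing re-randomization frequency adds at most one bit.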


ACM SIGOPS European Workshop | 2004

Data lifetime is a systems problem

Tal Garfinkel; Ben Pfaff; Jim Chow; Mendel Rosenblum

As sensitive data lifetime (i.e. propagation and duration in memory) increases, so does the risk of exposure. Unfortunately, this issue has been largely overlooked in the design of most of today's operating systems, libraries, languages, etc. As a result, applications are likely to leave the sensitive data they handle (passwords, financial and military information, etc.) scattered widely over memory, leaked to disk, etc. and left there for an indeterminate period of time. This greatly increases the impact of a system compromise. Dealing with data lifetime issues is currently left to application developers, who largely overlook them. Security-aware developers who attempt to address them (e.g. cryptographic library writers) are stymied by the limitations of the operating systems, languages, etc. they rely on. We argue that data lifetime is a systems issue which must be recognized and addressed at all layers of the software stack.
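The kind of application-layer mitigation the abstract says developers are limited to can be sketched as follows (a hypothetical helper, not from the paper): hold the secret in a mutable buffer and overwrite it as soon as the caller is done. The comment notes exactly the limitation the paper argues about: copies made by lower layers are out of reach.

```python
def with_secret(fetch, use):
    """Minimize a secret's lifetime: hold it in a mutable buffer and
    zero it as soon as the caller is done, even on error.

    fetch() must return a bytearray; use(buf) consumes it. This is an
    application-layer mitigation only: copies made by lower layers
    (allocator, kernel buffers, swap) are outside its reach -- which is
    the paper's point that data lifetime is a whole-stack problem.
    """
    buf = fetch()
    try:
        return use(buf)
    finally:
        for i in range(len(buf)):
            buf[i] = 0  # overwrite in place; an immutable str can't do this

secret = bytearray(b"hunter2")
length = with_secret(lambda: secret, len)
print(length, secret)  # 7 bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

Note that a Python `str` secret could not be scrubbed this way at all, since strings are immutable and may be copied or interned by the runtime.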


ACM Special Interest Group on Data Communication | 2016

PISCES: A Programmable, Protocol-Independent Software Switch

Muhammad Shahbaz; Sean Choi; Ben Pfaff; Changhoon Kim; Nick Feamster; Nick McKeown; Jennifer Rexford

Hypervisors use software switches to steer packets to and from virtual machines (VMs). These switches frequently need upgrading and customization—to support new protocol headers or encapsulations for tunneling and overlays, to improve measurement and debugging features, and even to add middlebox-like functions. Software switches are typically based on a large body of code, including kernel code, and changing the switch is a formidable undertaking requiring domain mastery of network protocol design and developing, testing, and maintaining a large, complex codebase. Changing how a software switch forwards packets should not require intimate knowledge of its implementation. Instead, it should be possible to specify how packets are processed and forwarded in a high-level domain-specific language (DSL) such as P4, and compiled to run on a software switch. We present PISCES, a software switch derived from Open vSwitch (OVS), a hard-wired hypervisor switch, whose behavior is customized using P4. PISCES is not hard-wired to specific protocols; this independence makes it easy to add new features. We also show how the compiler can analyze the high-level specification to optimize forwarding performance. Our evaluation shows that PISCES performs comparably to OVS and that PISCES programs are about 40 times shorter than equivalent changes to OVS source code.
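The idea of generating packet processing from a high-level specification, rather than hand-editing switch internals, can be conveyed with a toy sketch (this is not P4 or PISCES code; the header layout below is a hypothetical Ethernet-like example). A declarative field spec is "compiled" into a parser, so supporting a new protocol means changing the spec, not the switch.

```python
def make_parser(spec):
    """'Compile' a declarative header spec into a parse function.

    spec: list of (field_name, width_in_bytes). In PISCES the spec is a
    P4 program and the target is OVS's match-action pipeline; this toy
    only conveys that forwarding behavior is generated from a
    high-level description instead of hand-written in the switch.
    """
    def parse(pkt: bytes):
        fields, off = {}, 0
        for name, width in spec:
            fields[name] = int.from_bytes(pkt[off:off + width], "big")
            off += width
        return fields, pkt[off:]  # parsed fields plus remaining payload
    return parse

# Hypothetical Ethernet-like header: adding a new encapsulation means
# editing this spec, not the switch's source code.
eth = make_parser([("dst", 6), ("src", 6), ("ethertype", 2)])
pkt = bytes.fromhex("ffffffffffff" "0a0b0c0d0e0f" "0800") + b"payload"
hdr, payload = eth(pkt)
print(hex(hdr["ethertype"]))  # 0x800
```

A real compiler also has to map the spec onto the target's pipeline efficiently, which is where the paper's optimization analysis comes in.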


Measurement and Modeling of Computer Systems | 2004

Performance analysis of BSTs in system software

Ben Pfaff

Binary search tree (BST) based data structures, such as AVL trees, red-black trees, and splay trees, are often used in system software, such as operating system kernels. Choosing the right kind of tree can impact performance significantly, but the literature offers few empirical studies for guidance. We compare 20 BST variants using three experiments in real-world scenarios with real and artificial workloads. The results indicate that when input is expected to be randomly ordered with occasional runs of sorted order, red-black trees are preferred; when insertions often occur in sorted order, AVL trees excel for later random access, whereas splay trees perform best for later sequential or clustered access. For node representations, use of parent pointers is shown to be the fastest choice, with threaded nodes a close second choice that saves memory; nodes without parent pointers or threads suffer when traversal and modification are combined; maintaining an in-order doubly linked list is advantageous when traversal is very common; and right-threaded nodes perform poorly.
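Why sorted insertion order matters so much, as the abstract reports, can be shown with a small experiment (an illustration, not the paper's benchmark code): a plain unbalanced BST degenerates into a path under sorted input, while balanced variants like AVL or red-black trees stay at O(log n) height.

```python
import random

def bst_height(keys):
    """Height of a plain (unbalanced) BST after inserting keys in order.

    Nodes are [key, left, right]; insertion and depth computation are
    iterative, since the degenerate tree would overflow the recursion
    limit -- which is itself a symptom of the problem.
    """
    root = None
    for k in keys:
        if root is None:
            root = [k, None, None]
            continue
        node = root
        while True:
            i = 1 if k < node[0] else 2
            if node[i] is None:
                node[i] = [k, None, None]
                break
            node = node[i]
    best, stack = -1, [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if node is None:
            continue
        best = max(best, depth)
        stack.append((node[1], depth + 1))
        stack.append((node[2], depth + 1))
    return best

print(bst_height(range(256)))  # 255: sorted input yields a pure path
shuffled = list(range(256))
random.shuffle(shuffled)
print(bst_height(shuffled))  # typically around 2 * log2(256), i.e. ~16
```

Self-balancing trees pay rotation overhead on every insert to avoid the first outcome, which is the trade-off the paper measures across its 20 variants.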


ACM Special Interest Group on Data Communication | 2017

A Database Approach to SDN Control Plane Design

Bruce Davie; Teemu Koponen; Justin Pettit; Ben Pfaff; Martin Casado; Natasha Gude; Amar Padmanabhan; Tim Petty; Kenneth James Duda; Anupam Chanda

Software-defined networking (SDN) is a well-known example of a research idea that has been reduced to practice in numerous settings. Network virtualization has been successfully developed commercially using SDN techniques. This paper describes our experience in developing production-ready, multi-vendor implementations of a complex network virtualization system. Having struggled with a traditional network protocol approach (based on OpenFlow) to achieving interoperability among vendors, we adopted a new approach. We focused first on defining the control information content and then used a generic database protocol to synchronize state between the elements. Within less than nine months of starting the design, we had achieved basic interoperability between our network virtualization controller and the hardware switches of six vendors. This was a qualitative improvement on our decidedly mixed experience using OpenFlow. We found a number of benefits to the database approach, such as speed of implementation, greater hardware diversity, the ability to abstract away implementation details of the hardware, clarified state consistency model, and extensibility of the overall system.
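The shift the paper describes, from exchanging protocol messages to synchronizing declared state, can be sketched with a toy table diff (names like `vif1` and the `sync_ops` helper are hypothetical; real systems such as OVSDB replicate whole tables and stream deltas over a database protocol).

```python
def sync_ops(desired, actual):
    """Operations that bring a switch's table in line with the
    controller's desired state.

    desired/actual: {row_id: row_dict}. This toy diff only conveys the
    idea of 'define the control information content, then synchronize
    state' rather than designing a bespoke network protocol.
    """
    ops = []
    for rid, row in desired.items():
        if rid not in actual:
            ops.append(("insert", rid, row))
        elif actual[rid] != row:
            ops.append(("update", rid, row))
    for rid in actual:
        if rid not in desired:
            ops.append(("delete", rid, None))
    return ops

controller_view = {"vif1": {"vlan": 10}, "vif2": {"vlan": 20}}
switch_view     = {"vif1": {"vlan": 99}, "vif3": {"vlan": 30}}
for op in sync_ops(controller_view, switch_view):
    print(op)
# ('update', 'vif1', {'vlan': 10})
# ('insert', 'vif2', {'vlan': 20})
# ('delete', 'vif3', None)
```

Because each vendor only has to map table rows onto its own hardware, the schema abstracts away implementation details, which is one of the benefits the paper reports.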


Symposium on Operating Systems Principles | 2003

Terra: a virtual machine-based platform for trusted computing

Tal Garfinkel; Ben Pfaff; Jim Chow; Mendel Rosenblum; Dan Boneh


HotNets | 2009

Extending Networking into the Virtualization Layer.

Ben Pfaff; Justin Pettit; Keith E. Amidon; Martin Casado; Teemu Koponen; Scott Shenker


Operating Systems Design and Implementation | 2002

Optimizing the migration of virtual computers

Constantine P. Sapuntzakis; Ramesh Chandra; Ben Pfaff; Jim Chow; Monica S. Lam; Mendel Rosenblum


USENIX Security Symposium | 2004

Understanding data lifetime via whole system simulation

Jim Chow; Ben Pfaff; Tal Garfinkel; Kevin Christopher; Mendel Rosenblum

Collaboration


Top Co-Authors

Scott Shenker, University of California