Publication


Featured research published by Alex C. Snoeren.


ACM Special Interest Group on Data Communication | 2001

Hash-based IP traceback

Alex C. Snoeren; Craig Partridge; Luis A. Sanchez; Christine E. Jones; Fabrice Tchakountio; Stephen T. Kent; W. Timothy Strayer

The design of the IP protocol makes it difficult to reliably identify the originator of an IP packet. Even in the absence of any deliberate attempt to disguise a packet's origin, widespread packet forwarding techniques such as NAT and encapsulation may obscure the packet's true source. Techniques have been developed to determine the source of large packet flows, but, to date, no system has been presented to track individual packets in an efficient, scalable fashion. We present a hash-based technique for IP traceback that generates audit trails for traffic within the network, and can trace the origin of a single IP packet delivered by the network in the recent past. We demonstrate that the system is effective, space-efficient (requiring approximately 0.5% of the link capacity per unit time in storage), and implementable in current or next-generation routing hardware. We present both analytic and simulation results showing the system's effectiveness.
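The audit trails described above are built from compact digests of forwarded packets; a Bloom filter makes the per-link state constant-size while guaranteeing no false negatives. The following is a minimal sketch of that idea; the hash construction, filter size, and digest input here are illustrative, not the paper's actual parameters.

```python
import hashlib

class PacketDigestFilter:
    """Bloom filter over packet digests: constant space per link,
    membership queries with no false negatives (illustrative sizes)."""

    def __init__(self, num_bits=1 << 20, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, packet: bytes):
        # Derive k bit positions from independent digests of the
        # packet's invariant contents.
        for i in range(self.num_hashes):
            h = hashlib.sha256(bytes([i]) + packet).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def record(self, packet: bytes):
        # Called for every packet forwarded on this link.
        for pos in self._positions(packet):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def seen(self, packet: bytes) -> bool:
        # True means "possibly forwarded here"; False is definitive.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(packet))

f = PacketDigestFilter()
f.record(b"attack-packet-contents")
assert f.seen(b"attack-packet-contents")
```

Traceback then amounts to querying neighboring routers' filters for the attack packet's digest and walking hop by hop toward the source; occasional false positives only add spurious branches to that walk.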


ACM/IEEE International Conference on Mobile Computing and Networking | 2000

An end-to-end approach to host mobility

Alex C. Snoeren; Hari Balakrishnan

We present the design and implementation of an end-to-end architecture for Internet host mobility using dynamic updates to the Domain Name System (DNS) to track host location. Existing TCP connections are retained using secure and efficient connection migration, enabling established connections to seamlessly negotiate a change in endpoint IP addresses without the need for a third party. Our architecture is secure (name updates are effected via the secure DNS update protocol, while TCP connection migration uses a novel set of Migrate options) and provides a pure end-system alternative to routing-based approaches such as Mobile IP. Mobile IP was designed under the principle that fixed Internet hosts and applications were to remain unmodified and only the underlying IP substrate should change. Our architecture requires no changes to the unicast IP substrate, instead modifying transport protocols and applications at the end hosts. We argue that this is not a hindrance to deployment; rather, in a significant number of cases, it allows for an easier deployment path than Mobile IP, while simultaneously giving better performance. We compare and contrast the strengths of end-to-end and network-layer mobility schemes, and argue that end-to-end schemes are better suited to many common mobile applications. Our performance experiments show that hand-off times are governed by TCP migrate latencies, and are on the order of a round-trip time between the communicating peers.
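The connection-migration idea can be sketched as a token negotiated at connection setup that later lets one endpoint rebind the established connection from a new address, with no third party involved. This sketch is only illustrative: it omits the security negotiation and the actual TCP Migrate option format, and the `MigrateServer` class is hypothetical.

```python
import secrets

class MigrateServer:
    """Illustrative sketch of migration state at the fixed peer:
    a token handed out at setup identifies the connection later,
    regardless of which address the mobile host moves to."""

    def __init__(self):
        self.by_token = {}

    def accept(self, addr):
        # Negotiated during the initial handshake.
        token = secrets.token_hex(8)
        self.by_token[token] = {"addr": addr}
        return token

    def migrate(self, token, new_addr):
        # A mobile host at a new address presents its token to
        # re-attach the existing connection.
        conn = self.by_token.get(token)
        if conn is None:
            return False          # unknown token: refuse migration
        conn["addr"] = new_addr   # rebind the endpoint in place
        return True

srv = MigrateServer()
token = srv.accept(("10.0.0.5", 4321))
assert srv.migrate(token, ("192.168.1.7", 9999))
```

The DNS half of the architecture is orthogonal: it only locates a mobile host for *new* connections, while migration keeps *existing* ones alive.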


IEEE/ACM Transactions on Networking | 2002

Single-packet IP traceback

Alex C. Snoeren; Craig Partridge; Luis A. Sanchez; Christine E. Jones; Fabrice Tchakountio; Beverly Schwartz; Stephen T. Kent; W. Timothy Strayer

The design of the IP protocol makes it difficult to reliably identify the originator of an IP packet. Even in the absence of any deliberate attempt to disguise a packet's origin, widespread packet forwarding techniques such as NAT and encapsulation may obscure the packet's true source. Techniques have been developed to determine the source of large packet flows, but, to date, no system has been presented to track individual packets in an efficient, scalable fashion. We present a hash-based technique for IP traceback that generates audit trails for traffic within the network, and can trace the origin of a single IP packet delivered by the network in the recent past. We demonstrate that the system is effective, space-efficient (requiring approximately 0.5% of the link capacity per unit time in storage), and implementable in current or next-generation routing hardware. We present both analytic and simulation results showing the system's effectiveness.


Communications of the ACM | 2010

Difference engine: harnessing memory redundancy in virtual machines

Diwaker Gupta; Sangmin Lee; Michael Vrable; Stefan Savage; Alex C. Snoeren; George Varghese; Geoffrey M. Voelker; Amin Vahdat

Virtual machine monitors (VMMs) are a popular platform for Internet hosting centers and cloud-based compute services. By multiplexing hardware resources among virtual machines (VMs) running commodity operating systems, VMMs decrease both the capital outlay and management overhead of hosting centers. Appropriate placement and migration policies can take advantage of statistical multiplexing to effectively utilize available processors. However, main memory is not amenable to such multiplexing and is often the primary bottleneck in achieving higher degrees of consolidation. Previous efforts have shown that content-based page sharing provides modest decreases in the memory footprint of VMs running similar operating systems and applications. Our studies show that significant additional gains can be had by leveraging both subpage-level sharing (through page patching) and in-core memory compression. We build Difference Engine, an extension to the Xen VMM, to support each of these (in addition to standard copy-on-write full-page sharing) and demonstrate substantial savings across VMs running disparate workloads (up to 65%). In head-to-head memory-savings comparisons, Difference Engine outperforms VMware ESX server by a factor of 1.6 to 2.5 for heterogeneous workloads. In all cases, the performance overhead of Difference Engine is less than 7%.
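The content-based full-page sharing baseline mentioned above can be sketched in a few lines: hash every page, keep one canonical copy per distinct hash, and map every VM page to its canonical copy. The `share_identical_pages` helper and its hash choice are illustrative only; the actual system additionally does sub-page patching and compression inside the VMM.

```python
import hashlib

PAGE_SIZE = 4096

def share_identical_pages(vm_pages):
    """Full-page sharing sketch: collapse byte-identical pages across
    VMs to one canonical copy, keyed by a content hash."""
    canonical = {}   # page hash -> one canonical page
    mapping = {}     # (vm, page index) -> page hash
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            h = hashlib.sha1(page).hexdigest()
            canonical.setdefault(h, page)   # first copy becomes canonical
            mapping[(vm, i)] = h
    total = sum(len(p) for ps in vm_pages.values() for p in ps)
    saved = total - len(canonical) * PAGE_SIZE
    return canonical, mapping, saved

zero = bytes(PAGE_SIZE)          # the all-zero page is a common win
data = b"x" * PAGE_SIZE
vms = {"vm1": [zero, data], "vm2": [zero, zero]}
_, _, saved = share_identical_pages(vms)
print(saved)  # four pages collapse to two canonical copies: prints 8192
```

Sub-page patching extends the same idea to *similar* pages by storing one reference page plus a small binary delta, which is where most of the extra savings for disparate workloads come from.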


ACM Special Interest Group on Data Communication | 2006

Jigsaw: solving the puzzle of enterprise 802.11 analysis

Yu-Chung Cheng; John Bellardo; Péter Benkö; Alex C. Snoeren; Geoffrey M. Voelker; Stefan Savage

The combination of unlicensed spectrum, cheap wireless interfaces and the inherent convenience of untethered computing have made 802.11-based networks ubiquitous in the enterprise. Modern universities, corporate campuses and government offices routinely deploy scores of access points to blanket their sites with wireless Internet access. However, while the fine-grained behavior of the 802.11 protocol itself has been well studied, our understanding of how large 802.11 networks behave in their full empirical complexity is surprisingly limited. In this paper, we present a system called Jigsaw that uses multiple monitors to provide a single unified view of all physical, link, network and transport-layer activity on an 802.11 network. To drive this analysis, we have deployed an infrastructure of over 150 radio monitors that simultaneously capture all 802.11b and 802.11g activity in a large university building (1M+ cubic feet). We describe the challenges posed by both the scale and ambiguity inherent in such an architecture, and explain the algorithms and inference techniques we developed to address them. Finally, using a 24-hour distributed trace containing more than 1.5 billion events, we use Jigsaw's global cross-layer viewpoint to isolate performance artifacts, both explicit, such as management inefficiencies, and implicit, such as co-channel interference. We believe this is the first analysis combining this scale and level of detail for a production 802.11 network.


Symposium on Operating Systems Principles | 2005

Scalability, fidelity, and containment in the potemkin virtual honeyfarm

Michael Vrable; Justin Ma; Jay Chen; David Moore; Erik Vandekieft; Alex C. Snoeren; Geoffrey M. Voelker; Stefan Savage

The rapid evolution of large-scale worms, viruses and botnets has made Internet malware a pressing concern. Such infections are at the root of modern scourges including DDoS extortion, on-line identity theft, spam, phishing, and piracy. However, the most widely used tools for gathering intelligence on new malware -- network honeypots -- have forced investigators to choose between monitoring activity at a large scale or capturing behavior with high fidelity. In this paper, we describe an approach to minimize this tension and improve honeypot scalability by up to six orders of magnitude while still closely emulating the execution behavior of individual Internet hosts. We have built a prototype honeyfarm system, called Potemkin, that exploits virtual machines, aggressive memory sharing, and late binding of resources to achieve this goal. While still an immature implementation, Potemkin has emulated over 64,000 Internet honeypots in live test runs, using only a handful of physical servers.


ACM Special Interest Group on Data Communication | 2007

Cloud control with distributed rate limiting

Barath Raghavan; Kashi Venkatesh Vishwanath; Sriram Ramabhadran; Ken Yocum; Alex C. Snoeren

Today's cloud-based services integrate globally distributed resources into seamless computing platforms. Provisioning and accounting for the resource usage of these Internet-scale applications presents a challenging technical problem. This paper presents the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service's network traffic. Our abstraction not only enforces a global limit, but also ensures that congestion-responsive transport-layer flows behave as if they traversed a single, shared limiter. We present two designs, one general-purpose and one optimized for TCP, that allow service operators to explicitly trade off between communication costs and system accuracy, efficiency, and scalability. Both designs are capable of rate limiting thousands of flows with negligible overhead (less than 3% in the tested configuration). We demonstrate that our TCP-centric design is scalable to hundreds of nodes while robust to both loss and communication delay, making it practical for deployment in nationwide service providers.
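One way to read the coordination at the heart of this design: each limiter periodically learns the demand observed at its peers and takes a proportional share of the global limit, so the aggregate never exceeds the global cap. The `reallocate` helper below is a hypothetical sketch of that allocation step only, not the paper's algorithm; the demand-estimation and gossip machinery are omitted.

```python
def reallocate(global_limit, demands):
    """Split a global rate limit across sites in proportion to each
    site's recently observed demand (sketch of the allocation step)."""
    total = sum(demands.values())
    if total == 0:
        # No observed demand anywhere: fall back to an equal split.
        n = len(demands)
        return {site: global_limit / n for site in demands}
    return {site: global_limit * d / total for site, d in demands.items()}

# 100 Mbps global cap split over three sites by recent demand (Mbps):
limits = reallocate(100.0, {"us": 30.0, "eu": 10.0, "asia": 0.0})
print(limits)  # prints {'us': 75.0, 'eu': 25.0, 'asia': 0.0}
```

Each site then enforces its local share with an ordinary token bucket; the interesting engineering in the paper is keeping these shares accurate under loss and delay without constant communication.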


Internet Measurement Conference | 2003

Best-path vs. multi-path overlay routing

David G. Andersen; Alex C. Snoeren; Hari Balakrishnan

Time-varying congestion on Internet paths and failures due to software, hardware, and configuration errors often disrupt packet delivery on the Internet. Many approaches to avoiding these problems use multiple paths between two network locations. These approaches rely on a path-independence assumption in order to work well; i.e., they work best when the problems on different paths between two locations are uncorrelated in time. This paper examines the extent to which this assumption holds on the Internet by analyzing 14 days of data collected from 30 nodes in the RON testbed. We examine two problems that manifest themselves---congestion-triggered loss and path failures---and find that the chance of losing two packets between the same hosts is nearly as high when those packets are sent through an intermediate node (60%) as when they are sent back-to-back on the same path (70%). In so doing, we also compare two different ways of taking advantage of path redundancy proposed in the literature: mesh routing based on packet replication, and reactive routing based on adaptive path selection.


ACM Special Interest Group on Data Communication | 2015

Inside the Social Network's (Datacenter) Network

Arjun Roy; Hongyi Zeng; Jasmeet Bagga; George Porter; Alex C. Snoeren

Large cloud service providers have invested in increasingly larger datacenters to house the computing infrastructure required to support their services. Accordingly, researchers and industry practitioners alike have focused a great deal of effort on designing network fabrics to efficiently interconnect and manage the traffic within these datacenters. Unfortunately, datacenter operators are generally reticent to share the actual requirements of their applications, making it challenging to evaluate the practicality of any particular design. Moreover, the limited large-scale workload information available in the literature has, for better or worse, heretofore largely been provided by a single datacenter operator whose use cases may not be widespread. In this work, we report upon the network traffic observed in some of Facebook's datacenters. While Facebook operates a number of traditional datacenter services like Hadoop, its core Web service and supporting cache infrastructure exhibit a number of behaviors that contrast with those reported in the literature. We report on the contrasting locality, stability, and predictability of network traffic in Facebook's datacenters, and comment on their implications for network architecture, traffic engineering, and switch design.


Global Communications Conference | 1999

Adaptive inverse multiplexing for wide-area wireless networks

Alex C. Snoeren

The limited bandwidth of current wide-area wireless access networks (WWANs) is often insufficient for demanding applications, such as streaming audio or video, data mining, or high-resolution imaging. Inverse multiplexing is a standard application-transparent method used to provide higher end-to-end bandwidth by splitting traffic across multiple physical links, creating a single logical channel. While commonly used in ISDN and analog dialup installations, current implementations are designed for private links with stable channel characteristics. Unfortunately, most WWAN technologies use shared channels with highly variable link characteristics, including bandwidth, latency, and loss rates. This paper presents an adaptive inverse multiplexing scheme for WWAN environments, termed link quality balancing, which uses relative performance metrics to adjust traffic scheduling across bundled links. By exchanging loss rate information, we compute relative short-term available bandwidths for each link. We discuss the challenges of adaptation in a WWAN network, CDPD in particular, and present performance measurements of our current implementation of wide-area multi-link PPP (WAMP) for CDPD modems under both constant bit rate (CBR) and TCP loads.
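Link quality balancing amounts to striping traffic across the bundle in proportion to each link's estimated short-term available bandwidth. A deficit-style scheduler captures that proportionality; this sketch is illustrative (link names and the exact scheduling discipline are assumptions, not the WAMP implementation).

```python
def schedule(packets, link_bandwidths):
    """Stripe packets across bundled links in proportion to each
    link's estimated short-term bandwidth (deficit-style sketch)."""
    credit = {link: 0.0 for link in link_bandwidths}
    total_bw = sum(link_bandwidths.values())
    assignment = []
    for pkt in packets:
        # Each link accrues credit proportional to its bandwidth;
        # the packet goes out the link with the most credit.
        for link, bw in link_bandwidths.items():
            credit[link] += bw
        best = max(credit, key=credit.get)
        credit[best] -= total_bw
        assignment.append((pkt, best))
    return assignment

# Two bundled links where one is estimated 3x faster than the other:
out = schedule(range(4), {"cdpd0": 3.0, "cdpd1": 1.0})
print([link for _, link in out])  # prints ['cdpd0', 'cdpd0', 'cdpd1', 'cdpd0']
```

The adaptive part of the scheme feeds back measured loss rates to re-estimate each link's bandwidth, so the weights above shift as shared-channel conditions change.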

Collaboration


Dive into Alex C. Snoeren's collaborations.

Top Co-Authors

Amin Vahdat, University of Washington
Stefan Savage, University of California
George Porter, University of California
Ken Yocum, University of California
Feng Lu, University of California
Brent N. Chun, University of California