Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timothy Wood is active.

Publication


Featured research published by Timothy Wood.


Mobile Ad Hoc Networking and Computing | 2005

The feasibility of launching and detecting jamming attacks in wireless networks

Wenyuan Xu; Wade Trappe; Yanyong Zhang; Timothy Wood

Wireless networks are built upon a shared medium that makes it easy for adversaries to launch jamming-style attacks. These attacks can be easily accomplished by an adversary emitting radio frequency signals that do not follow an underlying MAC protocol. Jamming attacks can severely interfere with the normal operation of wireless networks and, consequently, mechanisms are needed that can cope with jamming attacks. In this paper, we examine radio interference attacks from both sides of the issue: first, we study the problem of conducting radio interference attacks on wireless networks, and second we examine the critical issue of diagnosing the presence of jamming attacks. Specifically, we propose four different jamming attack models that can be used by an adversary to disable the operation of a wireless network, and evaluate their effectiveness in terms of how each method affects the ability of a wireless node to send and receive packets. We then discuss different measurements that serve as the basis for detecting a jamming attack, and explore scenarios where each measurement by itself is not enough to reliably classify the presence of a jamming attack. In particular, we observe that signal strength and carrier sensing time are unable to conclusively detect the presence of a jammer. Further, we observe that although by using packet delivery ratio we may differentiate between congested and jammed scenarios, we are nonetheless unable to conclude whether poor link utility is due to jamming or the mobility of nodes. The fact that no single measurement is sufficient for reliably classifying the presence of a jammer is an important observation, and necessitates the development of enhanced detection schemes that can remove ambiguity when detecting a jammer. To address this need, we propose two enhanced detection protocols that employ consistency checking. 
The first scheme employs signal strength measurements as a reactive consistency check for poor packet delivery ratios, while the second scheme employs location information to serve as the consistency check. Throughout our discussions, we examine the feasibility and effectiveness of jamming attacks and detection schemes using the MICA2 Mote platform.
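The consistency-check idea from the first detection scheme can be illustrated with a short sketch. This is a hypothetical simplification, not the paper's protocol: the thresholds and the single-reading inputs are illustrative assumptions, whereas the actual scheme operates on measurement windows from the MICA2 platform.

```python
def classify_link(pdr, signal_strength, pdr_threshold=0.65, ss_threshold=-75.0):
    """Consistency check sketch: a low packet delivery ratio (PDR) alone is
    ambiguous (congestion, mobility, or jamming), but a low PDR paired with
    HIGH ambient signal strength is inconsistent with a merely weak link,
    so we flag it as jammed. Thresholds are illustrative, not measured."""
    if pdr >= pdr_threshold:
        return "healthy"
    # Poor delivery: check whether the signal strength is consistent with it.
    if signal_strength >= ss_threshold:
        return "jammed"      # strong signal yet packets fail -> likely jammer
    return "weak link"       # weak signal plausibly explains the poor PDR
```

A strong received signal with a collapsed delivery ratio is the inconsistency the scheme exploits; a weak signal with the same delivery ratio is attributed to the channel instead.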


ACM Transactions on Autonomous and Adaptive Systems | 2008

Agile dynamic provisioning of multi-tier Internet applications

Bhuvan Urgaonkar; Prashant J. Shenoy; Abhishek Chandra; Pawan Goyal; Timothy Wood

Dynamic capacity provisioning is a useful technique for handling the multi-time-scale variations seen in Internet workloads. In this article, we propose a novel dynamic provisioning technique for multi-tier Internet applications that employs (1) a flexible queuing model to determine how much of the resources to allocate to each tier of the application, and (2) a combination of predictive and reactive methods that determine when to provision these resources, both at large and small time scales. We propose a novel data center architecture based on virtual machine monitors to reduce provisioning overheads. Our experiments on a forty-machine Xen/Linux-based hosting platform demonstrate the responsiveness of our technique in handling dynamic workloads. In one scenario where a flash crowd caused the workload of a three-tier application to double, our technique was able to double the application capacity within five minutes, thus maintaining response-time targets. Our technique also reduced the overhead of switching servers across applications from several minutes to less than a second, while meeting the performance targets of residual sessions.
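The per-tier sizing step can be sketched with a much cruder rule than the paper's queuing model. The utilization-cap formula below is a hypothetical stand-in for the flexible queuing analysis, and the tier names and rates are invented for illustration.

```python
import math

def servers_needed(arrival_rate, service_rate, target_util=0.7):
    """Size a tier so per-server utilization stays below target_util.
    This cap rule is a hypothetical simplification of the paper's
    queuing-model-based allocation."""
    return max(1, math.ceil(arrival_rate / (service_rate * target_util)))

def provision(tiers, arrival_rate):
    """tiers maps a tier name to its per-server service rate (req/s).
    Returns a server count for each tier under one shared request rate."""
    return {name: servers_needed(arrival_rate, mu) for name, mu in tiers.items()}
```

A predictive controller would feed a forecast arrival rate into `provision` at coarse time scales, while a reactive one would re-run it on observed spikes; the paper combines both.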


Workshop on Wireless Security | 2004

Channel surfing and spatial retreats: defenses against wireless denial of service

Wenyuan Xu; Timothy Wood; Wade Trappe; Yanyong Zhang

Wireless networks are built upon a shared medium that makes it easy for adversaries to launch denial of service (DoS) attacks. One form of denial of service is targeted at preventing sources from communicating. These attacks can be easily accomplished by an adversary by either bypassing MAC-layer protocols, or emitting a radio signal targeted at jamming a particular channel. In this paper we present two strategies that may be employed by wireless devices to evade a MAC/PHY-layer jamming-style wireless denial of service attack. The first strategy, channel surfing, is a form of spectral evasion that involves legitimate wireless devices changing the channel that they are operating on. The second strategy, spatial retreats, is a form of spatial evasion whereby legitimate mobile devices move away from the locality of the DoS emitter. We study both of these strategies for three broad wireless communication scenarios: two-party radio communication, an infrastructured wireless network, and an ad hoc wireless network. We evaluate several of our proposed strategies and protocols through ns-2 simulations and experiments on the Berkeley mote platform.
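The channel-surfing strategy reduces, at its core, to hopping to the first usable channel. The sketch below assumes a hypothetical `is_jammed` probe and a fixed channel list; the paper's protocols additionally coordinate the hop among communicating parties, which is omitted here.

```python
def channel_surf(current, channels, is_jammed):
    """Spectral evasion sketch: scan the channel list in order starting
    from the current channel and hop to the first channel the (assumed)
    is_jammed probe reports as clean; stay put if every channel is jammed."""
    start = channels.index(current)
    for offset in range(1, len(channels)):
        candidate = channels[(start + offset) % len(channels)]
        if not is_jammed(candidate):
            return candidate
    return current  # no clean channel found; spatial retreat is the fallback
```

When no clean channel exists, the complementary strategy in the paper is spatial: the node physically moves out of the jammer's range.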


Computer Networks | 2009

Sandpiper: Black-box and gray-box resource management for virtual machines

Timothy Wood; Prashant J. Shenoy; Arun Venkataramani; Mazin S. Yousif

Virtualization can provide significant benefits in data centers by enabling dynamic virtual machine resizing and migration to eliminate hotspots. We present Sandpiper, a system that automates the task of monitoring and detecting hotspots, determining a new mapping of physical to virtual resources, resizing virtual machines to their new allocations, and initiating any necessary migrations. Sandpiper implements a black-box approach that is fully OS- and application-agnostic and a gray-box approach that exploits OS- and application-level statistics. We implement our techniques in Xen and conduct a detailed evaluation using a mix of CPU, network and memory-intensive applications. Our results show that Sandpiper is able to resolve single server hotspots within 20s and scales well to larger data center environments. We also show that the gray-box approach can help Sandpiper make more informed decisions, particularly in response to memory pressure.
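The black-box control loop can be sketched in a few lines. The sustained-threshold trigger and the greedy migration pick below are illustrative assumptions; Sandpiper's actual algorithm uses a volume metric combining CPU, network, and memory, and more careful placement logic.

```python
def is_hotspot(samples, threshold=0.85, k=3):
    """Black-box trigger sketch: flag a hotspot only when the k most recent
    utilization samples all exceed the threshold, filtering transient
    spikes. Threshold and window are illustrative, not Sandpiper's."""
    return len(samples) >= k and all(u > threshold for u in samples[-k:])

def pick_migration(vms, hosts):
    """Greedy sketch: move the least-loaded VM off the overloaded server
    to the least-loaded host, minimizing data moved per migration."""
    vm = min(vms, key=lambda v: v["load"])
    dest = min(hosts, key=lambda h: h["load"])
    return vm["name"], dest["name"]
```

The gray-box variant would replace the raw utilization samples with OS- and application-level statistics, enabling decisions such as distinguishing memory pressure from CPU contention.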


International Middleware Conference | 2008

Profiling and Modeling Resource Usage of Virtualized Applications

Timothy Wood; Ludmila Cherkasova; Kivanc M. Ozonat; Prashant J. Shenoy

Next Generation Data Centers are transforming labor-intensive, hard-coded systems into shared, virtualized, automated, and fully managed adaptive infrastructures. Virtualization technologies promise great opportunities for reducing energy and hardware costs through server consolidation. However, to safely transition an application running natively on real hardware to a virtualized environment, one needs to estimate the additional resource requirements incurred by virtualization overheads. In this work, we design a general approach for estimating the resource requirements of applications when they are transferred to a virtual environment. Our approach has two key components: a set of microbenchmarks to profile the different types of virtualization overhead on a given platform, and a regression-based model that maps the native system usage profile into a virtualized one. This derived model can be used for estimating resource requirements of any application to be virtualized on a given platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. We illustrate the effectiveness of our methodology using the Xen virtual machine monitor. Our evaluation shows that our automated model generation procedure effectively characterizes the different virtualization overheads of two diverse hardware platforms and that the models have median prediction error of less than 5% for both the RUBiS and TPC-W benchmarks.
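The regression step can be illustrated with a one-dimensional least-squares fit. This is a pure-Python stand-in under simplifying assumptions: the paper's profiles are multi-dimensional (spanning CPU, network, and disk metrics collected by the microbenchmarks), and the data points below are invented.

```python
def fit_overhead_model(native, virtual):
    """Fit virtual ~= a * native + b by ordinary least squares. A 1-D
    sketch of the regression-based mapping from a native usage profile
    to a virtualized one; the real model is multi-dimensional."""
    n = len(native)
    mx = sum(native) / n
    my = sum(virtual) / n
    sxx = sum((x - mx) ** 2 for x in native)
    sxy = sum((x - mx) * (y - my) for x, y in zip(native, virtual))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def predict(model, native_usage):
    """Estimate virtualized resource usage from a native measurement."""
    a, b = model
    return a * native_usage + b
```

Once fitted on microbenchmark profiles for a platform, the same model predicts the virtualized footprint of any application profiled natively, which is what makes the approach application-agnostic.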


IEEE Transactions on Network and Service Management | 2015

NetVM: High Performance and Flexible Networking Using Virtualization on Commodity Platforms

Jinho Hwang; K. K. Ramakrishnan; Timothy Wood

NetVM brings virtualization to the Network by enabling high bandwidth network functions to operate at near line speed, while taking advantage of the flexibility and customization of low cost commodity servers. NetVM allows customizable data plane processing capabilities such as firewalls, proxies, and routers to be embedded within virtual machines, complementing the control plane capabilities of Software Defined Networking. NetVM makes it easy to dynamically scale, deploy, and reprogram network functions. This provides far greater flexibility than existing purpose-built, sometimes proprietary hardware, while still allowing complex policies and full packet inspection to determine subsequent processing. It does so with dramatically higher throughput than existing software router platforms. NetVM is built on top of the KVM platform and Intel DPDK library. We detail many of the challenges we have solved such as adding support for high-speed inter-VM communication through shared huge pages and enhancing the CPU scheduler to prevent overheads caused by inter-core communication and context switching. NetVM allows true zero-copy delivery of data to VMs both for packet processing and messaging among VMs within a trust boundary. Our evaluation shows how NetVM can compose complex network functionality from multiple pipelined VMs and still obtain throughputs up to 10 Gbps, an improvement of more than 250% compared to existing techniques that use SR-IOV for virtualized networking.
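The zero-copy idea can be sketched abstractly: packets live once in shared memory, and the pipeline passes small descriptors rather than payloads. The classes below are a toy model, not NetVM's DPDK-based implementation; the shared list stands in for shared huge pages, and the stage functions are hypothetical.

```python
class SharedPacketPool:
    """Toy stand-in for a shared huge-page packet pool: payloads are
    stored once; VMs exchange integer descriptors instead of copies."""
    def __init__(self):
        self.buffers = []

    def alloc(self, payload):
        self.buffers.append(payload)
        return len(self.buffers) - 1   # descriptor, not a copy

    def read(self, desc):
        return self.buffers[desc]

def run_chain(pool, desc, stages):
    """Pass one descriptor through a pipeline of network functions that
    mutate the shared buffer in place -- no payload copy between stages."""
    for stage in stages:
        stage(pool, desc)
    return pool.read(desc)
```

Because every stage dereferences the same buffer, chaining more functions adds per-stage work but no copy cost, which is the property that lets NetVM sustain near line-rate throughput across pipelined VMs.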


IEEE Network | 2015

Toward a software-based network: integrating software defined networking and network function virtualization

Timothy Wood; K. K. Ramakrishnan; Jinho Hwang; Wei Zhang

Communication networks are changing. They are becoming more and more “software-based.” Two trends reflect this: the use of software defined networking and the use of virtualization to exploit common off-the-shelf hardware to provide a wide array of network-resident functions. To truly achieve the vision shared by many service providers of a high-performance software-based network that is flexible, lower-cost, and agile, a fast and carefully designed network function virtualization platform along with a comprehensive SDN control plane is needed. The shift toward software-based network services broadens the type of networking capabilities offered in provider networks and cloud platforms by allowing network services to be dynamically deployed across shared hosts. Combining this with an SDN control plane that recognizes the power of a dynamically changing network infrastructure allows network functions to be placed when they are needed and where they are most appropriate in the network. Our system, SDNFV, harmoniously combines the two fast moving technological directions of SDN and virtualization to further the goal of achieving a true software-based network.


International Conference on Distributed Computing Systems | 2013

HybridMR: A Hierarchical MapReduce Scheduler for Hybrid Data Centers

Bikash Sharma; Timothy Wood; Chita R. Das

Virtualized environments are attractive because they simplify cluster management, while facilitating cost-effective workload consolidation. As a result, virtual machines in public clouds or private data centers have become the norm for running transactional applications like web services and virtual desktops. On the other hand, batch workloads like MapReduce are typically deployed in a native cluster to avoid the performance overheads of virtualization. While both these virtual and native environments have their own strengths and weaknesses, we demonstrate in this work that it is feasible to provide the best of these two computing paradigms in a hybrid platform. In this paper, we make a case for a hybrid data center consisting of native and virtual environments, and propose a 2-phase hierarchical scheduler, called HybridMR, for the effective resource management of interactive and batch workloads. In the first phase, HybridMR classifies incoming MapReduce jobs based on the expected virtualization overheads, and uses this information to automatically guide placement between physical and virtual machines. In the second phase, HybridMR manages the run-time performance of MapReduce jobs collocated with interactive applications in order to provide best effort delivery to batch jobs, while complying with the Service Level Agreements (SLAs) of interactive applications. By consolidating batch jobs with over-provisioned foreground applications, the available unused resources are better utilized, resulting in improved application performance and energy efficiency. Evaluations on a hybrid cluster consisting of 24 physical servers and 48 virtual machines, with diverse workload mix of interactive and batch MapReduce applications, demonstrate that HybridMR can achieve up to 40% improvement in the completion times of MapReduce jobs, over the virtual-only case, while complying with the SLAs of interactive applications.
Compared to the native-only cluster, at the cost of minimal performance penalty, HybridMR boosts resource utilization by 45%, and achieves up to 43% energy savings. These results indicate that a hybrid data center with an efficient scheduling mechanism can provide a cost-effective solution for hosting both batch and interactive workloads.
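The first scheduling phase, placement by expected virtualization overhead, can be sketched as a simple classifier. The overhead heuristic and threshold below are invented for illustration; HybridMR's actual classification is based on measured job characteristics, not this formula.

```python
def classify_job(job, overhead_threshold=0.15):
    """Phase-1 sketch: estimate a job's virtualization overhead and route
    it to the native or virtual partition of the hybrid cluster. The
    I/O-based overhead estimate is a hypothetical heuristic standing in
    for HybridMR's classifier."""
    # Assumption: I/O-heavy MapReduce jobs suffer more under virtualization.
    overhead = 0.05 + 0.3 * job["io_fraction"]
    return "native" if overhead > overhead_threshold else "virtual"
```

Phase 2 would then throttle the jobs placed on virtual machines whenever collocated interactive applications approach their SLA limits.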


European Conference on Computer Systems | 2011

ZZ and the art of practical BFT execution

Timothy Wood; Rahul Singh; Arun Venkataramani; Prashant J. Shenoy; Emmanuel Cecchet

The high replication cost of Byzantine fault-tolerance (BFT) methods has been a major barrier to their widespread adoption in commercial distributed applications. We present ZZ, a new approach that reduces the replication cost of BFT services from 2f+1 to practically f+1. The key insight in ZZ is to use f+1 execution replicas in the normal case and to activate additional replicas only upon failures. In data centers where multiple applications share a physical server, ZZ reduces the aggregate number of execution replicas running in the data center, improving throughput and response times. ZZ relies on virtualization---a technology already employed in modern data centers---for fast replica activation upon failures, and enables newly activated replicas to immediately begin processing requests by fetching state on-demand. A prototype implementation of ZZ using the BASE library and Xen shows that, when compared to a system with 2f+1 replicas, our approach yields lower response times and up to 33% higher throughput in a prototype data center with four BFT web applications. We also show that ZZ can handle simultaneous failures and achieve sub-second recovery.
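ZZ's replica-count saving can be stated in one formula, sketched below: f+1 execution replicas in the fault-free case, growing toward the classic 2f+1 only as failures are detected. This is just the sizing arithmetic, not the recovery protocol.

```python
def execution_replicas(f, failures_detected=0):
    """ZZ's sizing idea: run f + 1 execution replicas normally and
    activate up to f more only after failures, never exceeding the
    classic 2f + 1 required by conventional BFT execution."""
    return min(f + 1 + failures_detected, 2 * f + 1)
```

For f = 2, a data center runs 3 execution replicas per service instead of 5 in the common fault-free case, which is where the throughput and response-time gains in the evaluation come from.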


Workshop on Hot Topics in Middleboxes and Network Function Virtualization | 2016

OpenNetVM: A Platform for High Performance Network Service Chains

Wei Zhang; Guyue Liu; Wenhui Zhang; Neel Shah; Phillip Lopreiato; Gregoire Todeschi; K. K. Ramakrishnan; Timothy Wood

Network middleboxes are growing in number and diversity. Middleboxes have been deployed widely to complement the basic end-to-end functionality provided by the Internet Protocol suite that depends only on the minimal functionality of a best-effort network layer. The move from purpose-built hardware middleboxes to software appliances running in virtual machines provides much needed deployment flexibility, but significant challenges remain. Just as Software Defined Networking (SDN) research and product development was greatly accelerated with the release of several open source SDN platforms, we believe that Network Function Virtualization (NFV) research can see similar growth with the development of a flexible platform that enables high performance NFV prototypes. Towards this end we have been building OpenNetVM, an efficient packet processing framework that greatly simplifies the development of network functions, as well as research into their management and optimization. OpenNetVM runs network functions in lightweight Docker containers, enabling fast startup and reducing memory overheads. The OpenNetVM platform manager provides load balancing, flexible flow management, and service name abstractions. OpenNetVM efficiently routes packets through dynamically created service chains, achieving throughputs of 10 Gbps even when traversing a chain of 6 NFs. In this paper we describe our architecture and evaluate its performance compared to existing NFV platforms.
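The service-name abstraction and chain routing can be sketched as a table walk. The chain, the network functions, and the packet fields below are all invented toys; OpenNetVM's real manager routes DPDK packet descriptors between containers, not Python dicts.

```python
# Hypothetical service-name abstraction: chain id -> ordered NF names.
CHAINS = {"web": ["firewall", "nat", "load_balancer"]}

# Toy network functions over a packet dict; returning None drops the packet.
NFS = {
    "firewall": lambda p: None if p.get("blocked") else p,
    "nat": lambda p: {**p, "dst": "10.0.0.5"},
    "load_balancer": lambda p: {**p, "backend": hash(p["src"]) % 2},
}

def route(packet, chain_id):
    """Walk the packet through its service chain in order, stopping
    early if any network function drops it."""
    for name in CHAINS[chain_id]:
        packet = NFS[name](packet)
        if packet is None:
            return None
    return packet
```

Flow rules in the platform manager map traffic to a chain id once; after that, each packet traverses the chain without per-hop policy lookups, which is what keeps dynamically created chains cheap.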

Collaboration


Dive into Timothy Wood's collaborations.

Top Co-Authors

Prashant J. Shenoy

University of Massachusetts Amherst

Wei Zhang

George Washington University

Guyue Liu

George Washington University


H. Howie Huang

George Washington University


Arun Venkataramani

University of Massachusetts Amherst


Emmanuel Cecchet

University of Massachusetts Amherst
