Publication


Featured research published by John M. Tracey.


International World Wide Web Conference | 2004

A method for transparent admission control and request scheduling in e-commerce web sites

Sameh Elnikety; Erich M. Nahum; John M. Tracey; Willy Zwaenepoel

This paper presents a method for admission control and request scheduling for multi-tier e-commerce Web sites, achieving both stable behavior during overload and improved response times. Our method externally observes execution costs of requests online, distinguishing different request types, and performs overload protection and preferential scheduling using relatively simple measurements and a straightforward control mechanism. Unlike previous proposals, which require extensive changes to the server or operating system, our method requires no modifications to the host OS, Web server, application server, or database. Since our method is external, it can be implemented in a proxy. We present such an implementation, called Gatekeeper, using it with standard software components on the Linux operating system. We evaluate the proxy using the industry-standard TPC-W workload generator in a typical three-tier e-commerce environment. We show consistent performance during overload and throughput increases of up to 10 percent. Response time improves by up to a factor of 14, with only a 15 percent penalty to large jobs.
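The core idea of externally observed admission control can be sketched in a few lines: track an exponentially weighted cost estimate per request type, admit a request only while total in-flight cost stays within estimated server capacity, and update the estimate when the request completes. This is a hypothetical illustration of the technique, not the actual Gatekeeper code; all names and the capacity model are assumptions.

```python
class AdmissionGate:
    """Sketch of external admission control: per-type cost estimates,
    observed from outside the server, gate admission against capacity."""

    def __init__(self, capacity):
        self.capacity = capacity   # estimated server capacity, in cost units
        self.in_flight = 0.0       # total cost of admitted, unfinished requests
        self.avg_cost = {}         # request type -> observed mean cost

    def try_admit(self, req_type):
        """Admit the request if estimated cost fits; return the admitted
        cost, or None to shed the request (overload protection)."""
        cost = self.avg_cost.get(req_type, 1.0)
        if self.in_flight + cost > self.capacity:
            return None
        self.in_flight += cost
        return cost

    def complete(self, req_type, admitted_cost, observed_cost, alpha=0.2):
        """Release the admitted cost and fold the observed execution cost
        into the per-type estimate (exponentially weighted moving average)."""
        self.in_flight -= admitted_cost
        est = self.avg_cost.get(req_type, observed_cost)
        self.avg_cost[req_type] = (1 - alpha) * est + alpha * observed_cost
```

Because all measurements happen at admission and completion, the gate needs no cooperation from the Web server, application server, or database, which is what lets it live in a proxy.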


IEEE Communications Magazine | 2013

Meridian: an SDN platform for cloud network services

Mohammad Banikazemi; David P. Olshefski; Anees Shaikh; John M. Tracey; Guohui Wang

As the number and variety of applications and workloads moving to the cloud grows, networking capabilities have become increasingly important. Over a brief period, networking support offered by both cloud service providers and cloud controller platforms has developed rapidly. In most of these cloud networking service models, however, users must configure a variety of network-layer constructs such as switches, subnets, and ACLs, which can then be used by their cloud applications. In this article, we argue for a service-level network model that provides higher-level connectivity and policy abstractions that are integral parts of cloud applications. Moreover, the emergence of the software-defined networking (SDN) paradigm provides a new opportunity to closely integrate application provisioning in the cloud with the network through programmable interfaces and automation. We describe the architecture and implementation of Meridian, an SDN controller platform that supports a service-level model for application networking in clouds. We discuss some of the key challenges in the design and implementation, including how to efficiently handle dynamic updates to virtual networks, orchestration of network tasks on a large set of devices, and how Meridian can be integrated with multiple cloud controllers.
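A service-level model replaces switch/subnet/ACL configuration with application-centric connectivity policy. The sketch below illustrates the flavor of such an abstraction; the field names and the default-deny rule are illustrative assumptions, not Meridian's actual API.

```python
# Hypothetical service-level network policy: tiers and allowed flows,
# declared alongside the application rather than as switch/ACL config.
app_network = {
    "application": "web-store",
    "tiers": ["frontend", "app", "db"],
    "connectivity": [
        {"from": "frontend", "to": "app", "policy": "allow"},
        {"from": "app",      "to": "db",  "policy": "allow"},
        {"from": "frontend", "to": "db",  "policy": "deny"},
    ],
}

def is_allowed(spec, src, dst):
    """Resolve a tier-to-tier connectivity decision; unlisted pairs are denied."""
    for rule in spec["connectivity"]:
        if rule["from"] == src and rule["to"] == dst:
            return rule["policy"] == "allow"
    return False
```

An SDN controller would compile such a declaration into concrete forwarding rules and ACLs on the underlying virtual and physical devices.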


International Conference on Distributed Computing Systems | 2007

Understanding Instant Messaging Traffic Characteristics

Zhen Xiao; Lei Guo; John M. Tracey

Instant messaging (IM) has become increasingly popular due to its quick response time, its ease of use, and possibility of multitasking. It is estimated that there are several million instant messaging users who use IM for various purposes: simple requests and responses, scheduling face-to-face meetings, or just checking the availability of colleagues and friends. Despite its popularity and user base, little has been done to characterize IM traffic. One reason might be its relatively small traffic volume, although this is changing as more users start using video or voice chats and file attachments. Moreover, all major instant messaging systems route text messages through central servers. While this facilitates firewall traversal and gives instant messaging companies more control, it creates a potential bottleneck at the instant messaging servers. This is especially so for large instant messaging operators with tens of millions of users and during flash crowd events. Another reason for the lack of previous studies is the difficulty of getting access to instant messaging traces due to privacy concerns. In this paper, we analyze the traffic of two popular instant messaging systems, AOL Instant Messenger (AIM) and MSN/Windows Live Messenger, from thousands of employees in a large enterprise. We found that most instant messaging traffic is due to presence, hints, or other extraneous traffic. Chat messages constitute only a small percentage of the total IM traffic. This means that, during overload, IM servers can protect the instantaneous nature of the communication by dropping extraneous traffic. We also found that the social network of IM users does not follow a power-law distribution. It can be characterized by a Weibull distribution. Our analysis sheds light on instant messaging system design and optimization and provides a scientific basis for instant messaging workload generation.
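The Weibull-versus-power-law distinction matters because the two tails decay very differently: a Weibull complementary CDF falls off stretched-exponentially, while a power law falls off only polynomially, so a power-law fit would greatly overestimate the number of very-high-degree users. A small sketch of the two tail functions (parameter values here are arbitrary for illustration, not the paper's fitted values):

```python
import math

def weibull_ccdf(x, k, lam):
    """P(X > x) for a Weibull distribution with shape k and scale lam."""
    return math.exp(-((x / lam) ** k))

def power_law_ccdf(x, alpha, xmin):
    """P(X > x) for a Pareto/power-law tail with exponent alpha, minimum xmin."""
    return (x / xmin) ** (-(alpha - 1))
```

For large x the Weibull tail drops far below the power-law tail, which is why a degree distribution that looks heavy-tailed at first glance can still be better described by a Weibull fit.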


Measurement and Modeling of Computer Systems | 2007

Evaluating SIP server performance

Erich M. Nahum; John M. Tracey; Charles P. Wright

SIP is a protocol of growing importance, with uses for VoIP, instant messaging, presence, and more. However, its performance is not well studied or understood. In this extended abstract we overview our experimental evaluation of common SIP server scenarios using open-source SIP software such as OpenSER and SIPp running on Linux. We show performance varies greatly depending on the server scenario and how the protocol is used. Depending on the configuration, throughput can vary from hundreds to thousands of operations per second. For example, we observe that the choice of stateless vs. stateful proxying, using TCP rather than UDP, or including MD5-based authentication can each affect performance by a factor of 2-4. We also provide kernel and application profiles using OProfile that help explain and illustrate processing costs. Finally, we provide a simple fix for transaction-stateful proxying that improves performance by a factor of 10. Full details can be found in our accompanying technical report.
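The MD5-based authentication whose cost is measured here is SIP's digest authentication, which follows the HTTP digest scheme of RFC 2617. A minimal sketch of the response computation (the no-qop variant; the example credentials are made up):

```python
import hashlib

def md5_hex(s):
    """MD5 digest of a string, as lowercase hex."""
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    """RFC 2617 digest response without qop:
    MD5(MD5(user:realm:password) : nonce : MD5(method:uri))."""
    ha1 = md5_hex(f"{user}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

Each authenticated request costs the server several MD5 computations plus a credential lookup, which is one reason enabling authentication shows up as a measurable throughput hit.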


IBM Journal of Research and Development | 2014

Software defined networking to support the software defined environment

Colin Dixon; David P. Olshefski; Vinit Jain; Casimer M. DeCusatis; Wes Felter; John B. Carter; Mohammad Banikazemi; V. Mann; John M. Tracey; Renato J. Recio

Software defined networking (SDN) represents a new approach in which the decision-making process of the network is moved from distributed network devices to a logically centralized controller, implemented as software running on commodity servers. This enables more automation and optimization of the network and, when combined with software defined compute and software defined storage, forms one of the three pillars of IBM's software defined environment (SDE). This paper provides an overview of SDN, focusing on several technologies gaining attention and the benefits they provide for cloud-computing providers and end-users. These technologies include (i) logically centralized SDN controllers to manage virtual and physical networks, (ii) new abstractions for virtual networks and network virtualization, and (iii) new routing algorithms that eliminate limitations of traditional Ethernet routing and allow newer network topologies. Additionally, we present IBM's vision for SDN, describing how these technologies work together to virtualize the underlying physical network infrastructure and automate resource provisioning. The vision includes automated provisioning of multi-tier applications, application performance monitoring, and the enabling of dynamic adaptation of network resources to application workloads. Finally, we explore the implications of SDN on network topologies, quality of service, and middleboxes (e.g., network appliances).


ACM SIGOPS European Workshop | 2004

Position: short object lifetimes require a delete-optimized storage system

Fred Douglis; John Davis Palmer; Elizabeth Suzanne Richards; David Tao; William H. Tetzlaff; John M. Tracey; Jian Yin

Early file systems were designed with the expectation that data would typically be read from disk many times before being deleted; on-disk structures were therefore optimized for reading. As main memory sizes increased, more read requests could be satisfied from data cached in memory, motivating file system designs that optimize write performance. Here, we describe how one might build a storage system that optimizes not only reading and writing, but creation and deletion as well. Efficiency is achieved, in part, by automating deletion based on relative retention values rather than requiring data be deleted explicitly by an application. This approach is well suited to an emerging class of applications that process data at consistently high rates of ingest. This paper explores trade-offs in clustering data by retention value and age and examines the effects of allowing the retention values to change under application control.
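Automating deletion by relative retention value amounts to keeping the store ordered by value and evicting the lowest-valued item when capacity is reached. The following is a hypothetical illustration of that policy (a min-heap keyed on retention value), not the paper's storage system; it omits retention-value updates and on-disk clustering.

```python
import heapq

class RetentionStore:
    """Sketch: fixed-capacity store that automatically deletes the item
    with the lowest retention value when room is needed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []   # min-heap of (retention_value, key)
        self.data = {}   # key -> (value, retention_value)

    def put(self, key, value, retention):
        # Automatic deletion: evict lowest-retention items to make room,
        # instead of requiring the application to delete explicitly.
        while len(self.data) >= self.capacity:
            _, victim = heapq.heappop(self.heap)
            self.data.pop(victim, None)
        heapq.heappush(self.heap, (retention, key))
        self.data[key] = (value, retention)

    def get(self, key):
        entry = self.data.get(key)
        return entry[0] if entry else None
```

For a high-ingest workload, clustering items of similar retention value together (as the paper discusses) would let whole regions be reclaimed at once rather than item by item.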


Measurement and Modeling of Computer Systems | 2005

Evaluating the impact of simultaneous multithreading on network servers using real hardware

Yaoping Ruan; Vivek S. Pai; Erich M. Nahum; John M. Tracey

This paper examines the performance of simultaneous multithreading (SMT) for network servers using actual hardware, multiple network server applications, and several workloads. Using three versions of the Intel Xeon processor with Hyper-Threading, we perform macroscopic analysis as well as microarchitectural measurements to understand the origins of the performance bottlenecks for SMT processors in these environments. The results of our evaluation suggest that the current SMT support in the Xeon is application and workload sensitive, and may not yield significant benefits for network servers. In general, we find that enabling SMT on real hardware usually produces only slight performance gains, and can sometimes lead to performance loss. In the uniprocessor case, previous studies appear to have neglected the OS overhead in switching from a uniprocessor kernel to an SMT-enabled kernel. The performance loss associated with such support is comparable to the gains provided by SMT. In the 2-way multiprocessor case, the higher number of memory references from SMT often causes the memory system to become the bottleneck, offsetting any processor utilization gains. This effect is compounded by the growing gap between processor speeds and memory latency. In trying to understand the large gains shown by simulation studies, we find that while the general trends for microarchitectural behavior agree with real hardware, differences in sizing assumptions and performance models yield much more optimistic benefits for SMT than we observe.


IBM Journal of Research and Development | 2001

Adaptive fast path architecture

Elbert C. Hu; Philippe Joubert; Robert B. King; Jason D. LaVoie; John M. Tracey

Adaptive Fast Path Architecture (AFPA) is a software architecture that dramatically improves the efficiency, and therefore the capacity, of Web and other network servers. The architecture includes a RAM-based cache that serves static content and a reverse proxy that can distribute requests for dynamic content to multiple servers. These two mechanisms are combined using a flexible layer-7 (content-based) routing facility. The architecture defines interfaces that allow these generic mechanisms to be exploited to accelerate a variety of application protocols, including HTTP. Efficiency is derived from maximizing the number of requests that are handled entirely within the kernel, using a deferred-interrupt context instead of threads wherever possible. AFPA has been implemented on several server platforms including Microsoft Windows NT® and Windows® 2000, OS/390®, AIX®, and most recently Linux. By conservative estimates, AFPA more than doubles capacity for serving static content compared to conventional server architectures, and has allowed IBM to establish a leadership position in Web server performance. A prototype implementation of AFPA on Linux delivers more than 10000 SPECweb96 operations per second on a single processor.
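The combination AFPA describes, a RAM cache for static content fronting a layer-7 router that forwards dynamic requests, can be sketched as follows. This is a hypothetical user-space illustration of the routing logic only; AFPA itself runs in the kernel and dispatches by protocol-specific content inspection, and the suffix-based classification and function names here are assumptions.

```python
class Layer7Router:
    """Sketch of content-based (layer-7) routing: serve static content
    from a RAM cache, forward dynamic requests to application servers."""

    STATIC_SUFFIXES = (".html", ".css", ".png", ".gif")

    def __init__(self, fetch_static, proxy_dynamic):
        self.cache = {}                     # RAM-based cache of static responses
        self.fetch_static = fetch_static    # hypothetical loader for static files
        self.proxy_dynamic = proxy_dynamic  # hypothetical reverse-proxy forwarder

    def handle(self, path):
        if path in self.cache:              # fast path: answer from cache
            return self.cache[path]
        if path.endswith(self.STATIC_SUFFIXES):
            body = self.fetch_static(path)
            self.cache[path] = body         # populate cache for future hits
            return body
        return self.proxy_dynamic(path)     # route dynamic request to a backend
```

The efficiency argument in the paper comes from handling the cache-hit path with as little work as possible; in AFPA that means staying inside the kernel in deferred-interrupt context rather than waking a thread.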


IBM Journal of Research and Development | 2010

SIP server performance on multicore systems

Charles P. Wright; Erich M. Nahum; D. Wood; John M. Tracey; Elbert C. Hu

This paper evaluates the performance of a popular open-source Session Initiation Protocol (SIP) server on three different multicore architectures. We examine the baseline performance and introduce three analysis-driven optimizations that involve increasing the number of slots in hash tables, an in-memory database for user authentication information, and incremental garbage collection for user location information. Wider hash tables reduce the search time and improve multicore scalability by reducing lock contention. The in-memory database reduces interprocess communication and locking. Incremental garbage collection smooths out peaks of both central processing unit and shared memory utilization, eliminating bursts of failed SIP interactions and reducing lock contention on the shared memory segment. Each optimization affects single-core performance and multicore scalability in different ways. The overall result is an improvement in absolute performance on eight cores by a factor of 16 and a doubling of multicore scalability. Results vary somewhat across architectures but follow similar trends, indicating the generality of these optimizations.
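Why do more hash slots reduce lock contention? With per-slot (striped) locking, two cores collide only when their keys land in the same slot, so widening the table shrinks the collision probability as well as the chain length. A minimal sketch of the idea (illustrative only; the SIP server in the paper uses shared-memory hash tables in C, not this structure):

```python
import threading

class StripedHashTable:
    """Sketch of a hash table with one lock per slot: more slots means
    shorter chains and a lower chance two threads contend on one lock."""

    def __init__(self, num_slots=1024):
        self.num_slots = num_slots
        self.buckets = [dict() for _ in range(num_slots)]
        self.locks = [threading.Lock() for _ in range(num_slots)]

    def _slot(self, key):
        return hash(key) % self.num_slots

    def put(self, key, value):
        i = self._slot(key)
        with self.locks[i]:          # only writers to the same slot block
            self.buckets[i][key] = value

    def get(self, key, default=None):
        i = self._slot(key)
        with self.locks[i]:
            return self.buckets[i].get(key, default)
```

Under uniform hashing, the chance that two concurrent operations hit the same slot is roughly 1/num_slots, which is why widening the tables improved scalability and not just lookup time.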


Network Operations and Management Symposium | 2016

Experiences evaluating OpenStack network data plane performance and scalability

Bengi Karacali; John M. Tracey

Growth in cloud computing motivates cloud networks that provide excellent performance and scalability. Improvements rely on the ability to measure these characteristics. Measurement is complicated by a combinatorial explosion of implementations, configurations, metrics, workloads, and scenarios. We present tools and a framework that facilitate performance and scalability evaluation. We present the results of applying the framework to six OpenStack network implementations/configurations. The results validate the framework's ability to highlight the performance and scalability impact of changes to the underlying cloud implementation and configuration. Our experience also yields important lessons regarding cloud network evaluation.
