Anees Shaikh
IBM
Network
Publications
Featured research published by Anees Shaikh.
international conference on computer communications | 2001
Anees Shaikh; Renu Tewari; Mukesh Agrawal
The rapid growth of the Internet in users and content has fueled extensive efforts to improve users' overall Internet experience. A growing number of providers deliver content from multiple servers or proxies to reduce response time by moving content closer to end users. An increasingly popular mechanism to direct clients to the closest point of service is DNS-based redirection, due to its transparency and generality. This paper draws attention to two of the main issues in using DNS: (1) the negative effects of reducing or eliminating the cache lifetimes of DNS information, and (2) the implicit assumption that client nameservers are indicative of actual client location and performance. We quantify the impact of reducing DNS TTL values on Web access latency and show that it can increase name resolution latency by two orders of magnitude. Using HTTP and DNS server logs, as well as a large number of dial-up ISP clients, we measure client-nameserver proximity and show that a significant fraction are distant, more than 8 hops apart. Finally, we suggest protocol modifications to improve the accuracy of DNS-based redirection schemes.
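As a rough sketch of the measurement idea in this abstract (not the paper's actual methodology), the snippet below times repeated, uncached DNS lookups to illustrate the resolution latency that reduced TTLs expose. It assumes the third-party dnspython package (2.x for resolve()); the hostnames are placeholders.

```python
# A minimal sketch (not the paper's methodology) of measuring how much
# repeated, uncached DNS resolution adds to perceived latency, as happens
# when TTLs are reduced or eliminated.  Requires the third-party dnspython
# package (2.x); the hostnames below are placeholders.
import time

import dns.resolver  # pip install dnspython

def timed_lookup(name, samples=5):
    """Return the mean wall-clock time (ms) of `samples` A-record lookups."""
    resolver = dns.resolver.Resolver()
    resolver.cache = None            # no client-side cache; the recursive
                                     # resolver may still cache upstream
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        resolver.resolve(name, "A")  # queries the configured nameserver each time
        total += time.perf_counter() - start
    return 1000.0 * total / samples

if __name__ == "__main__":
    for host in ["www.example.com", "www.example.org"]:  # placeholder names
        print(f"{host}: {timed_lookup(host):.1f} ms per uncached lookup")
```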
internet measurement conference | 2003
Aditya Akella; Srinivasan Seshan; Anees Shaikh
Performance limitations in the current Internet are thought to lie at the edges of the network -- i.e., last-mile connectivity to users, or access links of stub ASes. As these links are upgraded, however, it is important to consider where new bottlenecks and hot-spots are likely to arise. Through an extensive measurement study, we discover, classify, and characterize non-access bottleneck links in terms of their location, latency, and available capacity. We find that nearly half of the paths explored have a non-access bottleneck with available capacity less than 50 Mbps. The bottlenecks identified are roughly equally split between intra-ISP links and links between ISPs. These results have implications for issues such as the choice of access providers and route optimization.
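The summary statistics quoted in this abstract (intra-ISP vs. inter-ISP bottlenecks, available capacity below 50 Mbps) could be tabulated from measured link records along the lines of the hedged sketch below; the link records and AS numbers are invented for illustration.

```python
# Illustrative only: classify measured bottleneck links the way the study
# summarizes them (intra-ISP vs. inter-ISP, available capacity below 50 Mbps).
# The link records and AS numbers are invented for the example.
from dataclasses import dataclass

@dataclass
class BottleneckLink:
    src_asn: int                    # AS number on each side of the link
    dst_asn: int
    avail_capacity_mbps: float

def summarize(links):
    intra = sum(1 for l in links if l.src_asn == l.dst_asn)
    inter = len(links) - intra
    slow = sum(1 for l in links if l.avail_capacity_mbps < 50.0)
    return {"intra_isp": intra, "inter_isp": inter, "below_50mbps": slow}

links = [
    BottleneckLink(7018, 7018, 32.0),   # intra-ISP, constrained
    BottleneckLink(7018, 3356, 75.0),   # peering link, ample capacity
    BottleneckLink(3356, 1239, 18.5),   # peering link, constrained
]
print(summarize(links))   # {'intra_isp': 1, 'inter_isp': 2, 'below_50mbps': 2}
```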
acm special interest group on data communication | 1999
Anees Shaikh; Jennifer Rexford; Kang G. Shin
Internet service providers face a daunting challenge in provisioning network resources, due to the rapid growth of the Internet and wide fluctuations in the underlying traffic patterns. The ability of dynamic routing to circumvent congested links and improve application performance makes it a valuable traffic engineering tool. However, deployment of load-sensitive routing is hampered by the overheads imposed by link-state update propagation, path selection, and signaling. Under reasonable protocol and computational overheads, traditional approaches to load-sensitive routing of IP traffic are ineffective, and can introduce significant route flapping, since paths are selected based on out-of-date link-state information. Although stability is improved by performing load-sensitive routing at the flow level, flapping still occurs, because most IP flows have a short duration relative to the desired frequency of link-state updates. To address the efficiency and stability challenges of load-sensitive routing, we introduce a new hybrid approach that performs dynamic routing of long-lived flows, while forwarding short-lived flows on static preprovisioned paths. By relating the detection of long-lived flows to the timescale of link-state update messages in the routing protocol, route stability is considerably improved. Through simulation experiments using a one-week ISP packet trace, we show that our hybrid approach significantly outperforms traditional static and dynamic routing schemes, by reacting to fluctuations in network load without introducing route flapping.
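The hybrid scheme described above hinges on promoting a flow to "long-lived" only after it outlasts a threshold tied to the link-state update timescale. The toy sketch below illustrates that decision rule; the threshold, paths, and stand-in dynamic selector are assumptions, not the paper's parameters.

```python
# A toy sketch of the hybrid idea described above: flows are promoted to
# "long-lived" only after they outlast a detection threshold tied to the
# link-state update period, and only those flows are routed dynamically.
# The values below are illustrative, not the paper's parameters.
LINK_STATE_UPDATE_PERIOD_S = 60.0
DETECTION_THRESHOLD_S = 2 * LINK_STATE_UPDATE_PERIOD_S   # assumed multiple

def choose_path(flow_age_s, static_path, dynamic_path_fn):
    """Short-lived flows stay on the preprovisioned static path; flows that
    outlive the threshold are handed to the load-sensitive path selector."""
    if flow_age_s < DETECTION_THRESHOLD_S:
        return static_path
    return dynamic_path_fn()

# Example usage with a stand-in dynamic selector.
least_loaded = lambda: ["A", "C", "D"]          # pretend this consults link state
print(choose_path(15.0,  ["A", "B", "D"], least_loaded))   # static path
print(choose_path(300.0, ["A", "B", "D"], least_loaded))   # dynamically chosen path
```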
symposium on cloud computing | 2011
Theophilus Benson; Aditya Akella; Anees Shaikh; Sambit Sahu
Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as the ability to ensure security, performance guarantees, or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.
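A hypothetical illustration (not CloudNaaS's actual policy language) of the kind of high-level network specification a tenant might attach to an application, and of the compilation step that would lower it onto programmable switches:

```python
# Hypothetical illustration of the abstractions described above; this is NOT
# CloudNaaS's actual policy language, just a sketch of the idea.
app_network_spec = {
    "virtual_network": {"isolated": True, "address_block": "10.10.0.0/24"},
    "segments": [
        {"name": "frontend", "members": ["web1", "web2"]},
        {"name": "backend",  "members": ["db1"]},
    ],
    "policies": [
        # Flexible middlebox interposition: traffic between segments is
        # steered through a firewall and a load balancer, in order.
        {"from": "frontend", "to": "backend",
         "middleboxes": ["firewall", "load_balancer"],
         "qos": {"min_bandwidth_mbps": 100}},
    ],
}

def to_flow_rules(spec):
    """Stand-in for the compilation step that turns the high-level spec into
    entries for programmable (e.g., OpenFlow) switches."""
    rules = []
    for policy in spec["policies"]:
        rules.append({"match": (policy["from"], policy["to"]),
                      "actions": ["via:" + mb for mb in policy["middleboxes"]]})
    return rules

print(to_flow_rules(app_network_spec))
```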
international conference on distributed computing systems | 2011
Upendra Sharma; Prashant J. Shenoy; Sambit Sahu; Anees Shaikh
In this paper we present Kingfisher, a cost-aware system that provides efficient support for elasticity in the cloud by (i) leveraging multiple mechanisms to reduce the time to transition to new configurations, and (ii) optimizing the selection of a virtual server configuration that minimizes cost. We have implemented a prototype of Kingfisher and have evaluated its efficacy on a laboratory cloud platform. Our experiments with varying application workloads demonstrate that Kingfisher is able to (i) decrease the cost of virtual server resources by as much as 24% compared to the current cost-unaware approach, (ii) reduce by an order of magnitude the time to transition to a new configuration through multiple elasticity mechanisms in the cloud, and (iii) illustrate the opportunity for design alternatives that trade off the cost of server resources against the time required to scale the application.
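As a simplified sketch of the cost-aware selection step described above, the snippet below searches for the cheapest mix of virtual server types whose combined capacity meets a workload target. The server types, prices, and capacities are invented; Kingfisher itself optimizes over measured per-type capacities and cloud price schedules.

```python
# A simplified sketch of cost-aware provisioning in the spirit described
# above: pick the cheapest combination of virtual server types whose summed
# capacity meets the workload.  Types, prices, and capacities are invented.
from itertools import product

SERVER_TYPES = {
    "small":  (10, 100),   # (cost in cents/hour, capacity in req/s)
    "medium": (20, 250),
    "large":  (40, 600),
}

def cheapest_config(target_req_s, max_per_type=5):
    """Exhaustively search small instance counts for the lowest-cost mix."""
    best = None
    names = list(SERVER_TYPES)
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        capacity = sum(n * SERVER_TYPES[t][1] for n, t in zip(counts, names))
        if capacity < target_req_s:
            continue
        cost = sum(n * SERVER_TYPES[t][0] for n, t in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)))
    return best

print(cheapest_config(800))   # (60, {'small': 0, 'medium': 1, 'large': 1})
```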
IEEE/ACM Transactions on Networking | 2001
Anees Shaikh; Jennifer Rexford; Kang G. Shin
Quality-of-service (QoS) routing satisfies application performance requirements and optimizes network resource usage by selecting paths based on connection traffic parameters and link load information. However, distributing link state imposes significant bandwidth and processing overhead on the network. This paper investigates the performance tradeoff between protocol overhead and the quality of the routing decisions in the context of the source-directed link state routing protocols proposed for IP and ATM networks. We construct a detailed model of QoS routing that parameterizes the path-selection algorithm, link-cost function, and link state update policy. Through extensive simulation experiments with several network topologies and traffic patterns, we uncover the effects of stale link state information and random fluctuations in traffic load on the routing and setup overheads. We then investigate how inaccuracy of link state information interacts with the size and connectivity of the underlying topology. Finally, we show that tuning the coarseness of the link-cost metric to the inaccuracy of underlying link state information reduces the computational complexity of the path-selection algorithm without significantly degrading performance. This work confirms and extends earlier studies, and offers new insights for designing efficient quality-of-service routing policies in large networks.
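One idea from this abstract, tuning the coarseness of the link-cost metric, can be illustrated with a small sketch: quantize (possibly stale) link utilizations into a few discrete cost levels before running shortest-path selection. The topology and load values below are made up.

```python
# Sketch of the "coarse link cost" idea discussed above: quantize (possibly
# stale) link utilization into a small number of cost levels before running
# shortest-path selection.  Topology and load values are made up.
import heapq

def quantize(utilization, levels=4):
    """Map utilization in [0,1] to one of `levels` discrete costs (1..levels)."""
    return min(levels, int(utilization * levels) + 1)

def cheapest_path(graph, src, dst):
    """Dijkstra over quantized link costs. graph: {u: {v: utilization}}."""
    frontier = [(0, src, [src])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, util in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + quantize(util), nxt, path + [nxt]))
    return None

topology = {                      # link utilizations as last advertised
    "A": {"B": 0.20, "C": 0.70},
    "B": {"D": 0.30},
    "C": {"D": 0.10},
    "D": {},
}
print(cheapest_path(topology, "A", "D"))   # (3, ['A', 'B', 'D'])
```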
acm special interest group on data communication | 2004
Aditya Akella; Jeffrey Pang; Bruce M. Maggs; Srinivasan Seshan; Anees Shaikh
The limitations of BGP routing in the Internet are often blamed for poor end-to-end performance and prolonged connectivity interruptions. Recent work advocates using overlays to effectively bypass BGP's path selection in order to improve performance and fault tolerance. In this paper, we explore the possibility that intelligent control of BGP routes, coupled with ISP multihoming, can provide competitive end-to-end performance and reliability. Using extensive measurements of paths between nodes in a large content distribution network, we compare the relative benefits of overlay routing and multihoming route control in terms of round-trip latency, TCP connection throughput, and path availability. We observe that the performance achieved by route control together with multihoming to three ISPs (3-multihoming) is within 5-15% of overlay routing employed in conjunction with 3-multihoming, in terms of both end-to-end RTT and throughput. We also show that while multihoming cannot offer the nearly perfect resilience of overlays, it can eliminate almost all failures experienced by a singly-homed end-network. Our results demonstrate that, by leveraging the capability of multihoming route control, it is not necessary to circumvent BGP routing to extract good wide-area performance and availability from the existing routing system.
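Multihoming route control, as used in this abstract, amounts to steering traffic through whichever upstream ISP currently measures best and failing over when a provider becomes unavailable. The toy sketch below shows that selection logic with fabricated provider names and RTT samples.

```python
# Toy illustration of multihoming route control in the sense used above:
# among the ISPs a site is multihomed to, steer traffic via the provider
# with the best recent measurements, falling back when one is unavailable.
# Provider names and RTT samples are fabricated.
def pick_provider(measurements, unavailable=frozenset()):
    """measurements: {isp_name: recent RTT in ms, or None if the probe failed}."""
    candidates = {isp: rtt for isp, rtt in measurements.items()
                  if rtt is not None and isp not in unavailable}
    if not candidates:
        raise RuntimeError("no usable upstream provider")
    return min(candidates, key=candidates.get)

probes = {"ISP-A": 48.2, "ISP-B": 31.7, "ISP-C": None}   # ISP-C probe timed out
print(pick_provider(probes))                   # ISP-B
print(pick_provider(probes, {"ISP-B"}))        # ISP-A (failover)
```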
IEEE Communications Magazine | 2013
Mohammad Banikazemi; David P. Olshefski; Anees Shaikh; John M. Tracey; Guohui Wang
As the number and variety of applications and workloads moving to the cloud grow, networking capabilities have become increasingly important. Over a brief period, networking support offered by both cloud service providers and cloud controller platforms has developed rapidly. In most of these cloud networking service models, however, users must configure a variety of network-layer constructs such as switches, subnets, and ACLs, which can then be used by their cloud applications. In this article, we argue for a service-level network model that provides higher-level connectivity and policy abstractions that are integral parts of cloud applications. Moreover, the emergence of the software-defined networking (SDN) paradigm provides a new opportunity to closely integrate application provisioning in the cloud with the network through programmable interfaces and automation. We describe the architecture and implementation of Meridian, an SDN controller platform that supports a service-level model for application networking in clouds. We discuss some of the key challenges in the design and implementation, including how to efficiently handle dynamic updates to virtual networks, how to orchestrate network tasks on a large set of devices, and how Meridian can be integrated with multiple cloud controllers.
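One of the challenges this article highlights, efficiently handling dynamic updates to virtual networks, can be pictured as diffing desired against currently installed state and issuing only the necessary per-device operations. The sketch below is a minimal illustration under that assumption, not Meridian's implementation; the state shapes are invented.

```python
# A minimal sketch, not Meridian's implementation, of applying a dynamic
# update to a virtual network by diffing desired vs. currently installed
# state and issuing only the necessary per-device operations.
def plan_update(current, desired):
    """current/desired: {device_id: set of logical endpoints attached}."""
    tasks = []
    for dev in sorted(set(current) | set(desired)):
        to_add = desired.get(dev, set()) - current.get(dev, set())
        to_del = current.get(dev, set()) - desired.get(dev, set())
        tasks += [("attach", dev, ep) for ep in sorted(to_add)]
        tasks += [("detach", dev, ep) for ep in sorted(to_del)]
    return tasks

current = {"vswitch-1": {"web1", "web2"}, "vswitch-2": {"db1"}}
desired = {"vswitch-1": {"web1", "web2", "web3"}, "vswitch-2": set()}
print(plan_update(current, desired))
# [('attach', 'vswitch-1', 'web3'), ('detach', 'vswitch-2', 'db1')]
```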
acm special interest group on data communication | 2012
Guohui Wang; T. S. Eugene Ng; Anees Shaikh
IEEE Journal on Selected Areas in Communications | 1997
Anees Shaikh; Kang G. Shin