Supratik Bhattacharyya
Sprint Corporation
Publication
Featured research published by Supratik Bhattacharyya.
acm special interest group on data communication | 2002
Alberto Medina; Nina Taft; Kavé Salamatian; Supratik Bhattacharyya; Christophe Diot
Very few techniques have been proposed for estimating traffic matrices in the context of Internet traffic. Our work on POP-to-POP traffic matrices (TM) makes two contributions. The primary contribution is the outcome of a detailed comparative evaluation of the three existing techniques. We evaluate these methods with respect to the estimation errors yielded, sensitivity to prior information required and sensitivity to the statistical assumptions they make. We study the impact of characteristics such as path length and the amount of link sharing on the estimation errors. Using actual data from a Tier-1 backbone, we assess the validity of the typical assumptions needed by the TM estimation techniques. The secondary contribution of our work is the proposal of a new direction for TM estimation based on using choice models to model POP fanouts. These models allow us to overcome some of the problems of existing methods because they can incorporate additional data and information about POPs and they enable us to make a fundamentally different kind of modeling assumption. We validate this approach by illustrating that our modeling assumption matches actual Internet data well. Using two initial simple models we provide a proof of concept showing that the incorporation of knowledge of POP features (such as total incoming bytes, number of customers, etc.) can reduce estimation errors. Our proposed approach can be used in conjunction with existing or future methods in that it can be used to generate good priors that serve as inputs to statistical inference techniques.
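The fanout direction can be illustrated with a minimal sketch in Python: each ingress POP's volume is spread across egress POPs according to a fanout vector predicted from simple POP features. The feature weights, POP count, and gravity-style model below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a fanout-based traffic-matrix prior.
# Names and the feature model are illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n_pops = 5

# Assumed per-POP features: total incoming bytes and number of customers.
incoming_bytes = rng.uniform(1e9, 5e9, size=n_pops)
num_customers = rng.integers(10, 200, size=n_pops)

def fanout_from_features(incoming_bytes, num_customers):
    """Predict, for each ingress POP, the fraction of its traffic sent to
    each egress POP from simple egress-side attractiveness scores
    (a gravity-like choice model; the weights here are made up)."""
    attractiveness = 0.7 * incoming_bytes / incoming_bytes.sum() \
                   + 0.3 * num_customers / num_customers.sum()
    fanout = np.tile(attractiveness, (len(incoming_bytes), 1))
    np.fill_diagonal(fanout, 0.0)              # ignore intra-POP traffic
    return fanout / fanout.sum(axis=1, keepdims=True)

fanout = fanout_from_features(incoming_bytes, num_customers)

# Prior traffic matrix: each POP's ingress volume spread by its fanout.
tm_prior = incoming_bytes[:, None] * fanout
print(np.round(tm_prior / 1e9, 2))             # POP-to-POP demand in GB
```

Such a prior could then be handed to a statistical inference technique as its starting point, which is the role the paper proposes for fanout models.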
international conference on computer communications | 2004
Athina Markopoulou; Gianluca Iannaccone; Supratik Bhattacharyya; Chen-Nee Chuah; Christophe Diot
We analyze IS-IS routing updates from Sprint's IP network to characterize failures that affect IP connectivity. Failures are first classified based on probable causes such as maintenance activities, router-related and optical layer problems. Key temporal and spatial characteristics of each class are analyzed and, when appropriate, parameterized using well-known distributions. Our results indicate that 20% of all failures are due to planned maintenance activities. Of the unplanned failures, almost 30% are shared by multiple links and can be attributed to router-related and optical equipment-related problems, while 70% affect a single link at a time. Our classification of failures according to different causes reveals the nature and extent of failures in today's IP backbones. Furthermore, our characterization of the different classes can be used to develop a probabilistic failure model, which is important for various traffic engineering problems.
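As a rough illustration of the classification-and-parameterization step, the sketch below separates failure events into maintenance, shared, and single-link classes and fits a duration distribution to one class. The field names, maintenance window, and choice of a Weibull fit are assumptions for the example, not the paper's exact procedure.

```python
# Illustrative sketch only: classify IS-IS link-failure events and fit a
# duration distribution per class. Field names, the maintenance window,
# and the Weibull choice are assumptions, not the paper's exact method.
from dataclasses import dataclass
from scipy import stats

@dataclass
class Failure:
    link: str
    start: float       # seconds since midnight
    duration: float    # seconds

MAINTENANCE_WINDOW = (1 * 3600, 5 * 3600)   # assumed nightly window

def classify(f: Failure, concurrent_links: int) -> str:
    if MAINTENANCE_WINDOW[0] <= f.start <= MAINTENANCE_WINDOW[1]:
        return "maintenance"
    return "shared" if concurrent_links > 1 else "single-link"

failures = [Failure("A-B", 2 * 3600, 300), Failure("C-D", 10 * 3600, 45),
            Failure("E-F", 11 * 3600, 120), Failure("G-H", 14 * 3600, 20)]
unplanned = [f.duration for f in failures
             if classify(f, concurrent_links=1) != "maintenance"]

# Fit a heavy-tailed duration model to the unplanned single-link class.
shape, loc, scale = stats.weibull_min.fit(unplanned, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.1f}s")
```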
IEEE ACM Transactions on Networking | 2008
Athina Markopoulou; Gianluca Iannaccone; Supratik Bhattacharyya; Chen-Nee Chuah; Yashar Ganjali; Christophe Diot
As the Internet evolves into a ubiquitous communication infrastructure and supports increasingly important services, its dependability in the presence of various failures becomes critical. In this paper, we analyze IS-IS routing updates from the Sprint IP backbone network to characterize failures that affect IP connectivity. Failures are first classified based on patterns observed at the IP-layer; in some cases, it is possible to further infer their probable causes, such as maintenance activities, router-related and optical layer problems. Key temporal and spatial characteristics of each class are analyzed and, when appropriate, parameterized using well-known distributions. Our results indicate that 20% of all failures happen during a period of scheduled maintenance activities. Of the unplanned failures, almost 30% are shared by multiple links and are most likely due to router-related and optical equipment-related problems, respectively, while 70% affect a single link at a time. Our classification of failures reveals the nature and extent of failures in the Sprint IP backbone. Furthermore, our characterization of the different classes provides a probabilistic failure model, which can be used to generate realistic failure scenarios, as input to various network design and traffic engineering problems.
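A failure model of this kind can be used to synthesize failure scenarios. The sketch below is a hypothetical generator that draws Poisson failure arrivals and Weibull durations; the parameter values are placeholders, not numbers reported in the paper.

```python
# Hypothetical generator of failure scenarios from a fitted probabilistic
# model; the exponential/Weibull parameters below are placeholders, not
# values from the paper.
import numpy as np

rng = np.random.default_rng(1)

def generate_failures(horizon_s, mean_interarrival_s, weibull_shape, weibull_scale):
    """Yield (start, duration) pairs: Poisson arrivals, Weibull durations."""
    t = 0.0
    while True:
        t += rng.exponential(mean_interarrival_s)
        if t > horizon_s:
            return
        yield t, weibull_scale * rng.weibull(weibull_shape)

scenario = list(generate_failures(horizon_s=86_400,
                                  mean_interarrival_s=3_600,
                                  weibull_shape=0.6, weibull_scale=120))
print(f"{len(scenario)} failures in one simulated day")
```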
IEEE Network | 2004
Gianluca Iannaccone; Chen-Nee Chuah; Supratik Bhattacharyya; Christophe Diot
Large IP networks usually combine protection and restoration mechanisms at various layers of the protocol stack to minimize service disruption in the event of failures. Sprint has chosen an IP-based restoration approach for building a highly available tier 1 IP backbone. This article describes the design principles of Sprint's network that make IP-based restoration an effective and cost-efficient approach. The effectiveness of IP-based restoration is evaluated by analyzing network failure characteristics, and measuring disruptions in service availability during controlled failure experiments in the backbone. Current trends for improving the performance of IP-based restoration are also discussed.
international teletraffic congress | 2003
Antonio Nucci; Bianca Schroeder; Supratik Bhattacharyya; Nina Taft; Christophe Diot
Intra-domain routing in IP backbone networks relies on link-state protocols such as IS-IS or OSPF. These protocols associate a weight (or cost) with each network link, and compute traffic routes based on these weights. However, proposed methods for selecting link weights largely ignore the issue of failures which arise as part of everyday network operations (maintenance, accidental, etc.). Changing link weights during a short-lived failure is impractical. However, such failures are frequent enough to impact network performance. We propose a Tabu-search heuristic for choosing link weights which allow a network to function almost optimally during short link failures. The heuristic takes into account possible link failure scenarios when choosing weights, thereby mitigating the effect of such failures. We find that the weights chosen by the heuristic can reduce link overload during transient link failures by as much as 40% at the cost of a small performance degradation in the absence of failures (10%).
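The following is a minimal sketch of the failure-aware weight search, assuming a toy topology and demand set: each move perturbs one link weight, the objective is the worst link utilization over the no-failure case and all single-link failures, and a short tabu list blocks recently changed links. The paper's neighborhood structure and objective are considerably richer.

```python
# Minimal tabu-search sketch for failure-aware link weights (illustrative
# topology and demands; not the paper's implementation).
import random
import networkx as nx

random.seed(0)
LINKS = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "D")]
DEMANDS = {("A", "C"): 10, ("A", "D"): 5, ("B", "D"): 8}   # traffic volumes
CAPACITY = 20

def max_utilization(weights, failed=None):
    """Route every demand on its weight-shortest path (single path) and
    return the utilization of the most loaded surviving link."""
    g = nx.Graph()
    for (u, v), w in weights.items():
        if (u, v) != failed:
            g.add_edge(u, v, weight=w)
    load = {l: 0.0 for l in weights}
    for (s, t), vol in DEMANDS.items():
        path = nx.shortest_path(g, s, t, weight="weight")
        for u, v in zip(path, path[1:]):
            load[(u, v) if (u, v) in load else (v, u)] += vol
    return max(v / CAPACITY for v in load.values())

def objective(weights):
    # Worst-case utilization over no failure plus every single-link failure.
    return max(max_utilization(weights, failed=f) for f in [None, *weights])

current = {l: 10 for l in LINKS}
best, best_cost, tabu = dict(current), objective(current), []
for _ in range(100):                              # tabu-search main loop
    moves = [(l, random.randint(1, 20)) for l in LINKS if l not in tabu]
    link, w = min(moves, key=lambda m: objective({**current, m[0]: m[1]}))
    current[link] = w
    tabu = (tabu + [link])[-2:]                   # short-term tabu memory
    cost = objective(current)
    if cost < best_cost:
        best, best_cost = dict(current), cost
print("worst-case utilization over failure scenarios:", round(best_cost, 2))
```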
acm special interest group on data communication | 2001
Supratik Bhattacharyya; Christophe Diot; Jorjeta G. Jetcheva
In this paper, we study traffic demands in an IP backbone, identify the routes used by these demands, and evaluate traffic granularity levels that are attractive for improving the poor load balancing that our study reveals. The data used in this study was collected at a major POP in a commercial Tier-1 IP backbone. In the first part of this paper we ask two questions. What is the traffic demand between a pair of POPs in the backbone? How stable is this demand? We develop a methodology that combines packet-level traces from access links in the POP and BGP routing information to build components of POP-to-POP traffic matrices. Our analysis shows that the geographic spread of traffic across egress POPs is far from uniform. In addition, we find that the time of day behaviors for different POPs and different access links also exhibit a high degree of heterogeneity. In the second part of this work, we examine commercial routing practices to assess how these demands are routed through the backbone. We find that traffic between a pair of POPs is engineered to be restricted to a few paths and that this contributes to widely varying link utilization levels. The natural question that follows from these findings is whether or not there is a better way to spread the traffic across backbone paths. We identify traffic aggregates based on destination address prefixes and find that this set of criteria isolates a few aggregates that account for an overwhelmingly large portion of inter-POP traffic. We demonstrate that these aggregates exhibit stability throughout the day on per-hour time scales, and thus form a natural basis for splitting traffic over multiple paths to improve load balancing.
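A simplified sketch of the trace-plus-BGP methodology: destination addresses from packet records are matched, longest prefix first, against a prefix-to-egress-POP table, yielding both per-POP byte counts (one row of the POP-to-POP matrix) and per-prefix aggregates that can be ranked by volume. The prefixes, POP names, and packets below are invented for illustration.

```python
# Illustrative sketch: join packet records with a BGP-derived prefix table
# (longest-prefix match) to attribute bytes to egress POPs and to rank
# destination-prefix aggregates. All prefixes, POPs, and packets are made up.
from collections import Counter
from ipaddress import ip_address, ip_network

# Assumed prefix-to-egress-POP table derived from BGP and IGP data.
PREFIX_TO_POP = {
    ip_network("10.0.0.0/8"): "NYC",
    ip_network("10.1.0.0/16"): "SJC",      # more specific than 10.0.0.0/8
    ip_network("192.168.0.0/16"): "CHI",
}

def matched_prefix(dst):
    """Longest-prefix match of the destination address in the table."""
    addr = ip_address(dst)
    matches = [p for p in PREFIX_TO_POP if addr in p]
    return max(matches, key=lambda p: p.prefixlen) if matches else None

# Packet records from an access link: (destination address, bytes).
packets = [("10.1.2.3", 1500), ("10.9.9.9", 40),
           ("192.168.7.7", 576), ("10.1.8.8", 1500)]

pop_bytes, prefix_bytes = Counter(), Counter()
for dst, size in packets:
    prefix = matched_prefix(dst)
    if prefix is None:
        continue                            # no matching route: skip
    pop_bytes[PREFIX_TO_POP[prefix]] += size
    prefix_bytes[prefix] += size            # candidate traffic aggregate

print("bytes per egress POP :", dict(pop_bytes))
print("heaviest aggregates  :", prefix_bytes.most_common(2))
```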
international conference on computer communications | 1999
Supratik Bhattacharyya; Donald F. Towsley; James F. Kurose
An important concern for source-based multicast congestion control algorithms is the loss path multiplicity (LPM) problem that arises because a transmitted packet can be lost on one or more of the many end-to-end paths in a multicast tree. Consequently, if a multicast source's transmission rate is regulated according to loss indications from receivers, the rate may be completely throttled as the number of loss paths increases. In this paper, we analyze a family of additive increase multiplicative decrease congestion control algorithms and show that, unless careful attention is paid to the LPM problem, the average session bandwidth of a multicast session may be reduced drastically as the size of the multicast group increases. This makes it impossible to share bandwidth in a max-min fair manner among unicast and multicast sessions. We show, however, that max-min fairness can be achieved if every multicast session regulates its rate according to the most congested end-to-end path in its multicast tree. We present an idealized protocol for tracking the most congested path under changing network conditions, and use simulations to illustrate that tracking the most congested path is indeed a promising approach.
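The loss path multiplicity effect can be reproduced with a toy AIMD simulation (not the paper's protocol): one source reacts either to a loss on any path or only to losses on the most congested path. The loss probabilities and AIMD parameters below are invented.

```python
# Toy simulation of the LPM effect: AIMD throttled by (a) a loss on ANY
# receiver path vs. (b) a loss on the most congested path only.
import random

random.seed(0)
ALPHA, BETA, ROUNDS = 0.1, 0.5, 10_000     # AIMD parameters, round count

def aimd_throughput(loss_probs, most_congested_only):
    """Average rate of an AIMD source driven by the chosen loss signal."""
    rate, total = 1.0, 0.0
    for _ in range(ROUNDS):
        if most_congested_only:
            loss = random.random() < max(loss_probs)
        else:
            loss = any(random.random() < p for p in loss_probs)
        rate = rate * BETA if loss else rate + ALPHA
        total += rate
    return total / ROUNDS

# 33 receivers: 32 lightly congested paths and one heavily congested path.
paths = [0.01] * 32 + [0.05]
print("react to any loss           :", round(aimd_throughput(paths, False), 2))
print("react to most congested only:", round(aimd_throughput(paths, True), 2))
```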
acm special interest group on data communication | 2002
Konstantina Papagiannaki; Nina Taft; Supratik Bhattacharyya; Patrick Thiran; Kavé Salamatian; Christophe Diot
Studies of Internet traffic at the level of network prefixes, fixed-length prefixes, TCP flows, ASes, and WWW traffic have all shown that a very small percentage of the flows carries the largest part of the information. This behavior is commonly referred to as "the elephants and mice phenomenon". Traffic engineering applications, such as re-routing or load balancing, could exploit this property by treating elephant flows differently. In this context, though, elephants should not only contribute significantly to the overall load, but also exhibit sufficient persistence in time. The challenge is to be able to examine a flow's bandwidth and classify it as an elephant based on the data collected across all the flows on a link. In this paper, we present a classification scheme that is based on the definition of a separation threshold that elephants have to exceed. We introduce two single-feature classification schemes, and show that the resulting elephants are highly volatile. We then propose a two-feature classification scheme that incorporates temporal characteristics and show that this approach is more successful in isolating elephants that exhibit consistency, thus making them more attractive for traffic engineering applications.
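A hedged sketch of the two approaches, assuming a fixed separation threshold and using exponential smoothing as the temporal feature (the paper's temporal metric is more refined): the single-feature classifier flip-flops on a flow that hovers around the threshold, while the smoothed variant classifies it consistently.

```python
# Illustrative elephant/mice classification; threshold, smoothing factor,
# and bandwidth samples are assumptions, not values from the paper.
THRESHOLD_BPS = 1_000_000    # assumed separation threshold (1 Mb/s)
GAMMA = 0.8                  # weight on past bandwidth (temporal feature)

def single_feature(samples_bps):
    """Elephant in each interval where instantaneous bandwidth exceeds
    the separation threshold."""
    return [b > THRESHOLD_BPS for b in samples_bps]

def two_feature(samples_bps):
    """Elephant while exponentially smoothed bandwidth stays above the
    threshold, damping flip-flops around the threshold."""
    smoothed, labels = samples_bps[0], []
    for b in samples_bps:
        smoothed = GAMMA * smoothed + (1 - GAMMA) * b
        labels.append(smoothed > THRESHOLD_BPS)
    return labels

def reclassifications(labels):
    return sum(a != b for a, b in zip(labels, labels[1:]))

# A persistent flow whose bandwidth hovers around the threshold.
flow = [1_200_000, 900_000] * 5
print("single-feature reclassifications:", reclassifications(single_feature(flow)))
print("two-feature reclassifications   :", reclassifications(two_feature(flow)))
```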
IEEE ACM Transactions on Networking | 2008
Kuai Xu; Zhi Li Zhang; Supratik Bhattacharyya
Recent spates of cyber-attacks and frequent emergence of applications affecting Internet traffic dynamics have made it imperative to develop effective techniques that can extract, and make sense of, significant communication patterns from Internet traffic data for use in network operations and security management. In this paper, we present a general methodology for building comprehensive behavior profiles of Internet backbone traffic in terms of communication patterns of end-hosts and services. Relying on data mining and entropy-based techniques, the methodology consists of significant cluster extraction, automatic behavior classification and structural modeling for in-depth interpretive analyses. We validate the methodology using data sets from the core of the Internet.
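A simplified sketch of the entropy-based idea: per-source clusters above a volume cutoff are characterized by the relative uncertainty (normalized entropy) of their free dimensions and mapped to coarse behavior labels. The cutoff, labels, and flow records are illustrative; the paper's cluster-extraction rule is more principled than a fixed share cutoff.

```python
# Simplified sketch of entropy-based behavior profiling; labels, cutoff,
# and flow records are invented for illustration.
from collections import Counter, defaultdict
from math import log2

def relative_uncertainty(values):
    """Observed entropy normalized by its maximum, in [0, 1]."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum(c / total * log2(c / total) for c in counts.values())
    return h / log2(len(counts)) if len(counts) > 1 else 0.0

# Flow records: (srcIP, dstIP, srcPort, dstPort).
flows = [("1.1.1.1", f"10.0.{i}.1", 5000 + i, 80) for i in range(50)] + \
        [("2.2.2.2", "10.0.0.1", 4321, p) for p in range(50)]

by_src = defaultdict(list)
for f in flows:
    by_src[f[0]].append(f)

for src, fs in by_src.items():
    if len(fs) / len(flows) < 0.1:           # keep only significant clusters
        continue
    ru_dst = relative_uncertainty([f[1] for f in fs])
    ru_dport = relative_uncertainty([f[3] for f in fs])
    if ru_dst > 0.9 and ru_dport < 0.1:
        label = "host-scan-like (many dstIPs, one port)"
    elif ru_dst < 0.1 and ru_dport > 0.9:
        label = "port-scan-like (one dstIP, many ports)"
    else:
        label = "mixed"
    print(src, f"RU(dstIP)={ru_dst:.2f} RU(dstPort)={ru_dport:.2f} -> {label}")
```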
international performance computing and communications conference | 2006
Avinash Sridharan; Tao Ye; Supratik Bhattacharyya
Considerable research has been done on detecting and blocking portscan activities that are typically conducted by infected hosts to discover other vulnerable hosts. However, the focus has been on enterprise gateway-level intrusion detection systems where the traffic volume is low and network configuration information is readily available. This paper investigates the effectiveness of existing portscan detection algorithms in the context of a large transit backbone network and proposes a new algorithm that meets the demands of aggregated high speed backbone traffic. Specifically, we evaluate two existing approaches - the portscan detection algorithm in SNORT, and a modified version of the TRW algorithm that is a part of the intrusion detection tool Bro. We then propose a new approach, TAPS, which uses sequential hypothesis testing to detect hosts that exhibit abnormal access patterns in terms of destination hosts and destination ports. We perform a comparative analysis of these three approaches using real backbone packet traces, and find that TAPS exhibits the best performance in terms of catching the maximum number of true scanners and yielding the fewest false positives. We have a working implementation of TAPS on our monitoring platform. Further implementation optimizations using Bloom filters are identified and discussed.
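A hedged sketch of a TAPS-style detector: per source and per time bin, a "hit" is flagged when the ratio of distinct destination IPs to distinct destination ports (or its inverse) is large, and a sequential probability ratio test over the hit stream decides scanner versus benign. The probabilities, cutoff, and thresholds below are arbitrary placeholders, not the paper's tuned values.

```python
# Hedged TAPS-style sketch: spread-ratio hits per bin feed a sequential
# probability ratio test. Parameter values are placeholders.
from math import log

K = 5.0                                # spread-ratio cutoff for a "hit"
P_HIT_SCANNER, P_HIT_BENIGN = 0.8, 0.2
UPPER, LOWER = log(99), log(1 / 99)    # ~1% error SPRT thresholds

def spread_hit(dst_ips, dst_ports):
    n_ip, n_port = len(set(dst_ips)), len(set(dst_ports))
    ratio = max(n_ip / max(n_port, 1), n_port / max(n_ip, 1))
    return ratio > K

def sprt(hits):
    """Return 'scanner', 'benign', or 'undecided' after consuming the
    per-bin hit/no-hit observations."""
    llr = 0.0
    for h in hits:
        p1, p0 = (P_HIT_SCANNER, P_HIT_BENIGN) if h else \
                 (1 - P_HIT_SCANNER, 1 - P_HIT_BENIGN)
        llr += log(p1 / p0)
        if llr >= UPPER:
            return "scanner"
        if llr <= LOWER:
            return "benign"
    return "undecided"

# Five bins of a horizontally scanning source vs. a normal web client.
scanner_bins = [spread_hit([f"10.0.{i}.1" for i in range(60)], [80])] * 5
client_bins = [spread_hit(["10.0.0.1"] * 20, [80, 443])] * 5
print("scanning source:", sprt(scanner_bins))
print("web client     :", sprt(client_bins))
```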
Collaboration
French Institute for Research in Computer Science and Automation