Aleksandar Kuzmanovic
Northwestern University
Publications
Featured research published by Aleksandar Kuzmanovic.
ACM Special Interest Group on Data Communication (SIGCOMM) | 2003
Aleksandar Kuzmanovic; Edward W. Knightly
Denial of Service attacks present an increasing threat to the global inter-networking infrastructure. While TCP's congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission timeout mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized timeout mechanisms to thwart such low-rate DoS attacks.
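To make the attack mechanics concrete, the following minimal Python sketch evaluates a simplified form of the timeout-synchronization model the abstract alludes to: a burst every `period` seconds forces a retransmission timeout, and the flow transmits only between the end of a timeout and the next burst that catches it. The formula and the 1 s minRTO are a reconstruction for illustration, not the paper's exact analysis.

```python
# Minimal sketch of the simplified timeout-synchronization model: an
# attack burst every `period` seconds forces a retransmission timeout,
# and the flow only transmits between the end of a timeout and the next
# burst that catches it. Parameters are illustrative, not the paper's.

import math

MIN_RTO = 1.0  # seconds; TCP's conservative minimum RTO

def normalized_throughput(period: float, rto: float = MIN_RTO) -> float:
    """Fraction of its ideal rate a TCP flow retains under a periodic
    burst attack, in the simplified timeout-synchronization model."""
    if period <= 0:
        raise ValueError("period must be positive")
    # The flow recovers `rto` seconds after the burst that caught it and
    # is next caught at the following burst boundary.
    cycle = math.ceil(rto / period) * period
    return (cycle - rto) / cycle

if __name__ == "__main__":
    for t in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0, 5.0):
        print(f"attack period {t:4.1f}s -> throughput ratio "
              f"{normalized_throughput(t):.2f}")
```

The output shows the characteristic "null" at an attack period equal to minRTO: throughput collapses even though the attacker's average rate stays low.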
IEEE/ACM Transactions on Networking | 2009
Ao-Jan Su; David R. Choffnes; Aleksandar Kuzmanovic; Fabián E. Bustamante
To enhance Web browsing experiences, content distribution networks (CDNs) move Web content "closer" to clients by caching copies of Web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring. Our contributions are threefold. First, we conduct a broad measurement study of Akamai's CDN. We probe Akamai's network from 140 PlanetLab (PL) vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes "recommended" by Akamai than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial. Because these Akamai nodes are part of a closed system, we provide a method for mapping Akamai-recommended paths to those in a generic overlay and demonstrate that these one-hop paths indeed outperform direct ones.
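A minimal sketch of the measurement primitive the study builds on: polling DNS for a CDN-accelerated name and logging which edge servers the CDN currently recommends from this vantage point. The hostname and polling interval below are placeholders, not the paper's setup.

```python
# Minimal sketch: poll DNS for a CDN-accelerated hostname and log which
# edge servers the CDN currently "recommends" for this vantage point.
# The hostname below is a hypothetical placeholder; any name served
# through Akamai would do. Redirection dynamics show up as changes in
# the returned IP set over time.

import socket
import time

CDN_HOSTNAME = "example-akamai-customer.com"  # hypothetical placeholder
POLL_INTERVAL_S = 20                          # illustrative polling period

def resolve_all(hostname: str) -> set[str]:
    """Return the set of IPv4 addresses currently returned for hostname."""
    infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
    return {info[4][0] for info in infos}

def poll_redirections(hostname: str, rounds: int = 5) -> None:
    previous: set[str] = set()
    for _ in range(rounds):
        current = resolve_all(hostname)
        if current != previous:
            print(f"{time.strftime('%H:%M:%S')} redirected to: {sorted(current)}")
        previous = current
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    poll_redirections(CDN_HOSTNAME)
```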
Internet Measurement Conference | 2009
Ionut Trestian; Supranamaya Ranjan; Aleksandar Kuzmanovic; Antonio Nucci
Characterizing the relationship between people's application interests and mobility properties is a core question for location-based services, in particular those that facilitate serendipitous discovery of people, businesses, and objects. In this paper, we apply rule mining and spectral clustering to study this relationship for a population of over 280,000 users of a 3G mobile network in a large metropolitan area. Our analysis reveals that (i) people's movement patterns are correlated with the applications they access, e.g., stationary users and those who move more often and visit more locations tend to access different applications; (ii) location affects the applications accessed by users, i.e., at certain locations, users are more likely to evince interest in a particular class of applications than others, irrespective of the time of day; and (iii) the number of serendipitous meetings between users of similar cyber interest is larger in regions with a higher density of hotspots. Our analysis demonstrates how cellular network providers and location-based services can benefit from knowledge of the interplay between users and their locations and interests.
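As a sketch of the clustering step named in the abstract, the snippet below applies spectral clustering to synthetic per-user application-usage profiles; the feature construction, application classes, and parameters are illustrative, not the paper's.

```python
# Minimal sketch, on synthetic data: cluster users by the similarity of
# their application-usage vectors with spectral clustering, mirroring the
# methodology named in the abstract. Real inputs would be per-user
# application request counts derived from network traces.

import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Synthetic usage matrix: 12 users x 4 application classes
# (e.g., mail, web, music, social). Two latent behavior groups.
group_a = rng.poisson(lam=[20, 5, 1, 2], size=(6, 4))
group_b = rng.poisson(lam=[2, 3, 15, 18], size=(6, 4))
usage = np.vstack([group_a, group_b]).astype(float)

# Row-normalize so clustering reflects usage *profiles*, not volume.
profiles = usage / usage.sum(axis=1, keepdims=True)

labels = SpectralClustering(
    n_clusters=2, affinity="rbf", random_state=0
).fit_predict(profiles)

print("cluster labels:", labels)
```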
IEEE/ACM Transactions on Networking | 2006
Aleksandar Kuzmanovic; Edward W. Knightly
Denial of Service attacks present an increasing threat to the global inter-networking infrastructure. While TCP's congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission timeout mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized timeout mechanisms to thwart such low-rate DoS attacks.
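A minimal sketch of the randomized-timeout question this journal version studies: if each flow draws its RTO from an interval rather than using one fixed minRTO, no single attack period can null every flow at once. The uniform RTO range and the simplified throughput model are assumptions for illustration, not the paper's mechanism.

```python
# Minimal sketch of the randomized-timeout defense question: with RTOs
# drawn from an interval instead of one fixed minRTO, the attacker's
# best single period can no longer drive every flow's throughput to
# zero. The RTO range and search grid below are illustrative.

import math
import random

def throughput(period: float, rto: float) -> float:
    """Normalized throughput under the simplified periodic-timeout model."""
    cycle = math.ceil(rto / period) * period
    return (cycle - rto) / cycle

def mean_throughput(period: float, rtos: list[float]) -> float:
    return sum(throughput(period, r) for r in rtos) / len(rtos)

random.seed(0)
fixed_rtos = [1.0] * 500
random_rtos = [random.uniform(1.0, 1.5) for _ in range(500)]

# The attacker's best case: the period minimizing aggregate throughput.
candidates = [p / 100 for p in range(50, 301)]  # 0.5 s .. 3.0 s
for name, rtos in (("fixed", fixed_rtos), ("randomized", random_rtos)):
    best = min(candidates, key=lambda p: mean_throughput(p, rtos))
    print(f"{name:>10}: worst-case period {best:.2f}s -> "
          f"mean throughput {mean_throughput(best, rtos):.2f}")
```

Under this toy model, the fixed-RTO population can be throttled to zero at one period, while the randomized population retains nonzero aggregate throughput at every period, which is the intuition behind the limits the paper analyzes.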
IEEE/ACM Transactions on Networking | 2010
Ionut Trestian; Supranamaya Ranjan; Aleksandar Kuzmanovic; Antonio Nucci
Understanding Internet access trends at a global scale, i.e., how people use the Internet, is a challenging problem that is typically addressed by analyzing network traces. However, obtaining such traces presents its own set of challenges, owing either to privacy concerns or to other operational difficulties. The key hypothesis of our work here is that most of the information needed to profile Internet endpoints is already available around us: on the Web. In this paper, we introduce a novel approach for profiling and classifying endpoints. We implement and deploy a Google-based profiling tool that accurately characterizes endpoint behavior by collecting and strategically combining information freely available on the Web. Our Web-based "unconstrained endpoint profiling" (UEP) approach shows advances in the following scenarios: (1) even when no packet traces are available, it can accurately infer application and protocol usage trends at arbitrary networks; (2) when network traces are available, it outperforms state-of-the-art classification tools such as BLINC; (3) when sampled flow-level traces are available, it retains high classification capabilities. We explore other complementary UEP approaches, such as p2p- and reverse-DNS-lookup-based schemes, and show that they can further improve the results of the Web-based UEP. Using this approach, we perform unconstrained endpoint profiling at a global scale: for clients in four different world regions (Asia, South and North America, and Europe). We provide a first-of-its-kind endpoint analysis that reveals fascinating similarities and differences among these regions.
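A sketch of the core web-based UEP step, assuming search-result snippets mentioning an IP address have already been collected (automated querying of Google is restricted, so fetching is out of scope here): tag the endpoint by the keyword classes its snippets hit. The keyword lists and snippets are illustrative.

```python
# Minimal sketch of web-based endpoint profiling: given search-result
# snippets that mention an IP address, tag the endpoint by counting
# hits from per-class keyword lists. The classes and keywords below
# are illustrative, not the paper's full rule set.

KEYWORD_CLASSES = {
    "mail":   ["smtp", "mail server", "spam", "blacklist"],
    "p2p":    ["torrent", "emule", "gnutella", "peer"],
    "gaming": ["counter-strike", "game server", "clan"],
    "web":    ["apache", "nginx", "domain", "website"],
}

def profile_endpoint(snippets: list[str]) -> dict[str, int]:
    """Count keyword-class hits across the snippets for one IP address."""
    tally = {cls: 0 for cls in KEYWORD_CLASSES}
    for text in snippets:
        lowered = text.lower()
        for cls, words in KEYWORD_CLASSES.items():
            tally[cls] += sum(word in lowered for word in words)
    return tally

# Hypothetical snippets for one endpoint:
snippets = [
    "IP 203.0.113.7 listed on spam blacklist, SMTP open relay",
    "203.0.113.7 seen seeding torrent of linux ISO",
]
print(profile_endpoint(snippets))
```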
ACM Special Interest Group on Data Communication (SIGCOMM) | 2010
Constantine Dovrolis; Krishna P. Gummadi; Aleksandar Kuzmanovic; Sascha D. Meinrath
Measurement Lab (M-Lab) is an open, distributed server platform for researchers to deploy active Internet measurement tools. The goal of M-Lab is to advance network research and empower the public with useful information about their broadband connections. By enhancing Internet transparency, M-Lab helps sustain a healthy, innovative Internet. This article describes M-Lab's objectives, administrative organization, and software and hardware infrastructure. It also provides an overview of the currently available measurement tools and datasets, and invites the broader networking research community to participate in the project.
International Conference on Computer Communications | 2011
Ionut Trestian; Supranamaya Ranjan; Aleksandar Kuzmanovic; Antonio Nucci
Smartphones have changed the way people communicate. Most prominently, using commonplace mobile device features (e.g., high-resolution cameras), users have begun producing and uploading large amounts of content that grows at an exponential pace. In the absence of viable technical solutions, some cellular network providers are considering charging special usage fees to address the problem.
Web Intelligence | 2010
Ao-Jan Su; Y. Charlie Hu; Aleksandar Kuzmanovic; Cheng Kok Koh
Search engines have greatly influenced the way people access information on the Internet, as such engines provide the preferred entry point to billions of pages on the Web. Therefore, highly ranked web pages generally have higher visibility, and pushing the ranking higher has become the top priority for webmasters. As a matter of fact, search engine optimization (SEO) has become a sizeable business that attempts to improve its clients' ranking. Still, the natural reluctance of search engine companies to reveal their internal mechanisms and the lack of ways to validate SEOs' methods have created numerous myths and fallacies associated with ranking algorithms, Google's in particular. In this paper, we focus on the Google ranking algorithm and design, implement, and evaluate a ranking system to systematically validate assumptions others have made about this popular ranking algorithm. We demonstrate that linear learning models, coupled with a recursive partitioning ranking scheme, are capable of reverse engineering Google's ranking algorithm with high accuracy. As an example, we manage to correctly predict 7 out of the top 10 pages for 78% of evaluated keywords. Moreover, for content-only ranking, our system can correctly predict 9 or more pages out of the top 10 for 77% of search terms. We show how our ranking system can be used to reveal the relative importance of ranking features in Google's ranking function, provide guidelines for SEOs and webmasters to optimize their web pages, validate or disprove new ranking features, and evaluate search engine ranking results for possible ranking bias.
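A minimal sketch, on synthetic data, of the linear-model idea: learn one weight per ranking feature from pairwise preferences ("page A ranked above page B") by fitting a logistic model on feature differences. The feature names and the hidden weight vector are invented for illustration and are not Google's.

```python
# Minimal sketch: recover a hidden linear ranking function from pairwise
# preferences by logistic regression on feature differences. All data,
# feature names, and weights are synthetic and illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
FEATURES = ["keyword_in_title", "keyword_density", "pagerank", "url_length"]
true_w = np.array([2.0, 1.0, 3.0, -0.5])  # hidden "ranking function"

# Synthetic pages and pairwise preferences derived from the hidden scores.
pages = rng.normal(size=(200, 4))
scores = pages @ true_w
i, j = rng.integers(0, 200, size=(2, 2000))
keep = i != j
diffs = pages[i[keep]] - pages[j[keep]]
labels = (scores[i[keep]] > scores[j[keep]]).astype(int)

model = LogisticRegression(fit_intercept=False).fit(diffs, labels)
learned = model.coef_[0]
learned /= np.linalg.norm(learned)  # recoverable only up to scale
for name, w in zip(FEATURES, learned):
    print(f"{name:>18}: {w:+.2f}")
```

Up to an overall scale, the learned weights align with the hidden vector, which is the sense in which a linear model can "reverse engineer" a ranking function from observed orderings.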
ACM Special Interest Group on Data Communication (SIGCOMM) | 2008
Amit Mondal; Aleksandar Kuzmanovic
The well-accepted wisdom is that TCP's exponential backoff mechanism, introduced by Jacobson 20 years ago, is essential for preserving the stability of the Internet. In this paper, we show that removing exponential backoff from TCP altogether can be done without inducing any stability side-effects. We introduce the implicit packet conservation principle and show that as long as the endpoints uphold this principle, they can only improve their end-to-end performance relative to the exponential backoff case. By conducting large-scale simulations, modeling, and network experiments in Emulab and the Internet, using a kernel-level FreeBSD TCP implementation, realistic traffic distributions, and complex network topologies, we demonstrate that TCP's binary exponential backoff mechanism can be safely removed. Moreover, we show that the unsuitability of TCP's exponential backoff is fundamental, i.e., independent of the currently dominant Internet traffic properties or bottleneck capacities. Surprisingly, our results indicate that a path to incrementally deploying the change does exist.
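The following sketch contrasts the two retransmission policies at issue: binary exponential backoff doubles the wait after every loss, while the packet-conservation alternative keeps a single unacknowledged copy in flight and retries once per RTO. The loss model and parameters are illustrative, not the paper's experimental setup.

```python
# Minimal sketch: time to deliver one packet under i.i.d. loss, comparing
# binary exponential backoff (wait doubles after each loss) against a
# fixed per-RTO retry that never holds more than one copy in flight.
# Loss rates and the 1 s RTO are illustrative parameters.

import random

RTO = 1.0  # seconds

def delivery_time(loss_rate: float, exponential: bool, max_tries: int = 50) -> float:
    """Simulated time until one packet is finally delivered."""
    t, wait = 0.0, RTO
    for _ in range(max_tries):
        if random.random() > loss_rate:
            return t
        t += wait
        if exponential:
            wait *= 2  # binary exponential backoff
        # else: wait stays RTO -- one copy in flight per timeout interval
    return t

random.seed(7)
for p in (0.1, 0.3, 0.5):
    runs = 10_000
    exp_avg = sum(delivery_time(p, True) for _ in range(runs)) / runs
    fix_avg = sum(delivery_time(p, False) for _ in range(runs)) / runs
    print(f"loss {p:.0%}: backoff avg {exp_avg:8.2f}s, "
          f"conservation avg {fix_avg:8.2f}s")
```

At higher loss rates the backoff policy's delay grows sharply, while the conservation policy's delay stays near the geometric expectation, which is the per-flow performance gap the paper quantifies.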
International Conference on Network Protocols | 2006
Yan Gao; Leiwen Deng; Aleksandar Kuzmanovic; Yan Chen
Proxy caching servers are widely deployed in today's Internet. While cooperation among proxy caches can significantly improve a network's resilience to denial-of-service (DoS) attacks, lack of cooperation can transform such servers into viable DoS targets. In this paper, we investigate a class of pollution attacks that aim to degrade a proxy's caching capabilities, either by ruining the cache file locality or by inducing false file locality. Using simulations, we propose and evaluate the effects of pollution attacks in both Web and peer-to-peer (p2p) scenarios, and reveal dramatic variability in resilience to pollution among several cache replacement policies. We develop efficient methods to detect both false-locality and locality-disruption attacks, as well as a combination of the two. To achieve high scalability for a large number of clients/requests without sacrificing detection accuracy, we leverage streaming computation techniques, i.e., Bloom filters. Evaluation results from large-scale simulations show that these mechanisms are effective and efficient in detecting and mitigating such attacks. Furthermore, a Squid-based implementation demonstrates that our protection mechanism forces the attacker to launch extremely large distributed attacks in order to succeed.
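A minimal Bloom-filter sketch in the spirit of the streaming detection mechanism the abstract mentions: remember which (client, file) pairs have been seen, in constant memory, so repeated requests from one client for a file it already fetched (false locality) can be flagged. The filter sizes and the flagging threshold are illustrative.

```python
# Minimal Bloom-filter sketch for false-locality detection: track seen
# (client, file) pairs in constant memory and tally per-client repeat
# requests. Sizes and the flagging rule below are illustrative.

import hashlib

class BloomFilter:
    def __init__(self, bits: int = 1 << 16, hashes: int = 4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item: str):
        # Derive `hashes` independent bit positions from salted SHA-256.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.array[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.array[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter()
repeat_counts: dict[str, int] = {}  # per-client repeated-request tally

def observe(client: str, url: str) -> None:
    key = f"{client}|{url}"
    if key in seen:  # same client re-requesting a file it already fetched
        repeat_counts[client] = repeat_counts.get(client, 0) + 1
    else:
        seen.add(key)

for _ in range(100):                 # suspicious client hammers one file
    observe("10.0.0.66", "/unpopular/file.bin")
observe("10.0.0.1", "/index.html")   # normal client

flagged = [c for c, n in repeat_counts.items() if n > 50]
print("flagged clients:", flagged)
```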