John S. Otto
Northwestern University
Publications
Featured research published by John S. Otto.
Internet Measurement Conference | 2012
John S. Otto; Mario A. Sánchez; John P. Rula; Fabián E. Bustamante
Content Delivery Networks (CDNs) rely on the Domain Name System (DNS) for replica server selection. DNS-based server selection builds on the assumption that, in the absence of information about the client's actual network location, the location of the client's DNS resolver provides a good approximation. The recent growth of remote DNS services breaks this assumption and can negatively impact clients' web performance. In this paper, we assess the end-to-end impact of using remote DNS services on CDN performance and present the first evaluation of an industry-proposed solution to the problem. We find that remote DNS usage can indeed significantly impact clients' web performance and that the proposed solution, if available, can effectively address the problem for most clients. Considering the performance cost of remote DNS usage and the limited adoption base of the industry-proposed solution, we present and evaluate an alternative approach, Direct Resolution, to readily obtain comparable performance improvements without requiring CDN or DNS participation.
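As a rough illustration of the direct-resolution idea, the sketch below, written against the dnspython package, finds the authoritative nameservers for a name's zone and then queries one directly from the client's own network, so server selection sees the client's address rather than a remote resolver's. The hostname is hypothetical, and this is an approximation of the approach, not the paper's implementation.

```python
# A minimal sketch of the direct-resolution idea using the dnspython
# package. The hostname is hypothetical; real CDN names usually sit
# behind CNAME chains that a full implementation would follow first.
import dns.message
import dns.query
import dns.resolver

hostname = "cdn.example.com"  # hypothetical CDN-hosted name

# 1. Use the normal resolver path only to learn the zone's
#    authoritative nameservers for this name.
zone = dns.resolver.zone_for_name(hostname)
ns_target = next(iter(dns.resolver.resolve(zone, "NS"))).target
ns_addr = next(iter(dns.resolver.resolve(ns_target, "A"))).address

# 2. Query an authoritative server directly from the client's own
#    network, so replica selection sees the client's address rather
#    than a (possibly remote) recursive resolver's.
query = dns.message.make_query(hostname, "A")
response = dns.query.udp(query, ns_addr, timeout=2.0)
for rrset in response.answer:
    print(rrset)
```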
ACM Special Interest Group on Data Communication | 2011
Zachary S. Bischof; John S. Otto; Mario A. Sánchez; John P. Rula; David R. Choffnes; Fabián E. Bustamante
Evaluating and characterizing Internet Service Providers (ISPs) is critical to subscribers shopping for alternative ISPs, companies providing reliable Internet services, and governments surveying the coverage of broadband services to their citizens. Ideally, ISP characterization should be done at scale, continuously, and from end users. While there has been significant progress toward this end, current approaches exhibit apparently unavoidable tradeoffs between coverage, continuous monitoring, and capturing user-perceived performance. In this paper, we argue that network-intensive applications running on end systems avoid these tradeoffs, thereby offering an ideal platform for ISP characterization. Based on data collected from 500,000 peer-to-peer BitTorrent users across 3,150 networks, together with the reported results from the U.K. Ofcom/SamKnows studies, we show the feasibility of this approach to characterize the service that subscribers can expect from a particular ISP. We discuss remaining research challenges and design requirements for a solution that enables efficient and accurate ISP characterization at Internet scale.
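To make the idea concrete, here is a minimal sketch of the aggregation step such an approach implies: pooling passively collected per-transfer throughput samples by ISP and comparing distribution percentiles. The input file and field names are hypothetical placeholders, not the paper's dataset.

```python
# A minimal sketch of per-ISP aggregation over passive throughput
# samples; "bt_transfers.csv" and its columns are hypothetical.
import csv
from collections import defaultdict
from statistics import quantiles

samples = defaultdict(list)            # ISP name -> throughput samples (kbps)
with open("bt_transfers.csv") as f:    # hypothetical passive-measurement log
    for row in csv.DictReader(f):
        samples[row["isp"]].append(float(row["throughput_kbps"]))

for isp, xs in sorted(samples.items()):
    if len(xs) < 100:                  # require enough samples per ISP
        continue
    d = quantiles(xs, n=10)            # deciles: d[0]=p10 ... d[8]=p90
    print(f"{isp}: median {d[4]:.0f} kbps (p10 {d[0]:.0f}, p90 {d[8]:.0f})")
```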
ACM Special Interest Group on Data Communication | 2012
Zachary S. Bischof; John S. Otto; Fabián E. Bustamante
Broadband characterization has recently attracted much attention from the research community and the general public. Given this interest and the important business and policy implications of residential Internet service characterization, recent years have brought a variety of approaches to profiling Internet services, ranging from Web-based platforms to dedicated infrastructure inside home networks. We have previously argued that network-intensive applications provide an almost ideal vantage point for broadband service characterization at sufficient scale, nearly continuously and from end users. While we have shown that the approach is indeed effective at characterization and can enable performance comparisons between service providers and geographic regions, a key unanswered question is how well the performance characteristics captured by these network-intensive applications can predict the overall user experience with other applications. In this paper, using BitTorrent as an example network-intensive application, we present initial results that demonstrate how to obtain estimates of the bandwidth and latency of a network connection by leveraging passive monitoring and limited active measurements from network-intensive applications. We then analyze user-experienced web performance under a variety of network conditions and show how estimated metrics from this network-intensive application can serve as good web performance predictors.
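The sketch below illustrates the flavor of these estimates: capacity approximated as a high percentile of passively observed throughput (assuming a network-intensive application saturates the link at least occasionally) and base latency from a handful of active pings. The percentile choice and ping target are illustrative, not the paper's method.

```python
# A minimal sketch of the estimation idea, assuming per-second
# throughput samples (kbps) collected passively from a running client,
# plus a few active pings. Thresholds and the ping target are
# illustrative; the ping invocation assumes a Unix-like system.
import re
import subprocess
from statistics import quantiles

def estimate_capacity_kbps(throughput_samples):
    """Approximate link capacity as a high percentile of achieved
    throughput, assuming a network-intensive app saturates the link
    at least occasionally."""
    return quantiles(throughput_samples, n=100)[94]  # 95th percentile

def estimate_base_latency_ms(host="8.8.8.8", count=5):
    """Limited active measurement: a handful of pings to one target."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
    return min(rtts) if rtts else None  # min RTT approximates base latency
```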
Passive and Active Network Measurement | 2013
Mario A. Sánchez; John S. Otto; Zachary S. Bischof; Fabián E. Bustamante
In recent years the quantity and diversity of Internet-enabled consumer devices in the home have increased significantly. These trends complicate device usability and home resource management and have implications for crowdsourced approaches to broadband characterization. The UPnP protocol has emerged as an open standard for device and service discovery to simplify device usability and resource management in home networks. In this work, we leverage UPnP to understand the dynamics of home device usage, both at a macro and micro level, and to sketch an effective approach to broadband characterization that runs behind the last meter. Using UPnP measurements collected from over 13K end users, we show that while home networks can be quite complex, the number of devices that actively and regularly connect to the Internet is limited. Furthermore, we find a high correlation between the number of UPnP-enabled devices in home networks and the presence of UPnP-enabled gateways, and show how this can be leveraged for effective broadband characterization.
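UPnP devices announce themselves via SSDP, so a measurement client can enumerate them with a single multicast probe. The sketch below sends a standard M-SEARCH request and prints responding devices; it shows the discovery mechanism the study builds on, not the authors' actual collection code.

```python
# A minimal sketch of UPnP device discovery over SSDP: send a standard
# M-SEARCH multicast probe and list whichever LAN devices respond.
import socket

MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: upnp:rootdevice",
    "", ""
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))
try:
    while True:
        data, addr = sock.recvfrom(2048)   # unicast replies from devices
        print(addr[0], data.decode(errors="replace").splitlines()[0])
except socket.timeout:
    pass                                   # discovery window closed
```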
Proceedings of the Special Workshop on Internet and Disasters | 2011
Zachary S. Bischof; John S. Otto; Fabián E. Bustamante
Peer-to-peer (P2P) systems represent some of the largest distributed systems in today's Internet. Among P2P systems, BitTorrent is the most popular, potentially accounting for 20--50% of P2P file-sharing traffic. In this paper, we argue that this popularity can be leveraged to monitor the impact of natural disasters and political unrest on the Internet. We focus our analysis on the 2011 Tohoku earthquake and tsunami and use a view from BitTorrent to show that it is possible to identify specific regions and network links where Internet usage and connectivity were most affected.
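A minimal sketch of the detection idea: flag regions whose active peer counts fall sharply against a recent baseline. The input series, window, and 50% drop threshold below are illustrative placeholders, not the paper's parameters.

```python
# A minimal sketch of baseline-drop detection over regional peer counts.
from statistics import median

def affected_regions(peer_counts, window=24, drop=0.5):
    """peer_counts: dict mapping region -> hourly list of active peers."""
    flagged = []
    for region, series in peer_counts.items():
        if len(series) <= window:
            continue
        baseline = median(series[-window - 1:-1])  # previous `window` hours
        if baseline > 0 and series[-1] < drop * baseline:
            flagged.append((region, series[-1] / baseline))
    return flagged

# Hypothetical event: peers in one region fall off abruptly.
counts = {"JP-Tohoku": [900] * 24 + [120], "JP-Kanto": [5000] * 24 + [4800]}
print(affected_regions(counts))        # -> [('JP-Tohoku', 0.133...)]
```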
IEEE ACM Transactions on Networking | 2015
Mario A. Sánchez; John S. Otto; Zachary S. Bischof; David R. Choffnes; Fabián E. Bustamante; Balachander Krishnamurthy; Walter Willinger
Poor visibility into the network hampers progress in a number of important research areas, from network troubleshooting to Internet topology and performance mapping. This persistent, well-known problem has served as motivation for numerous proposals to build or extend existing Internet measurement platforms by recruiting larger, more diverse sets of vantage points. Capturing the edge of the network, however, remains an elusive goal. We argue that at its root the problem is one of incentives. Today's measurement platforms build on the assumption that the goals of experimenters and those hosting the platform are the same. As much of the Internet's growth occurs in residential broadband networks, this assumption no longer holds. We present a measurement experimentation platform that reaches the network edge by explicitly aligning the objectives of the experimenters with those of the users hosting the platform. Dasu, our current prototype, is designed to support both network measurement experimentation and broadband characterization. Dasu has been publicly available since July 2010 and has been installed by over 100,000 users with a heterogeneous set of connections spreading across 2,431 autonomous systems (ASes) and 166 countries. We discuss some of the challenges we faced building and using a platform for the Internet's edge, describe its design and implementation, and illustrate the unique perspective its current deployment brings to Internet measurement.
International Conference on Computer Communications | 2012
John S. Otto; Rade Stanojevic; Nikolaos Laoutaris
In the current usage-based pricing scheme offered by most cloud computing providers, customers are charged based on the capacity and the lease time of the resources they capture (bandwidth, number of virtual machines, IOPS rate, etc.). Taking advantage of this pricing scheme, customers can implement auto-scaling purchase policies by leasing (e.g., hourly) the necessary amounts of resources to satisfy a desired QoS threshold under their current demand. Auto-scaling yields strict QoS and variable charges. Some customers, however, would be willing to settle for a more relaxed, statistical QoS in exchange for a predictable flat charge. In this work we propose Temporal Rate Limiting (TRL), a purchase policy that permits a customer to optimally allocate a specified purchase budget over a predefined period of time. TRL offers the same expected QoS as auto-scaling but at a lower, flat charge. It also outperforms, in terms of QoS, a naive flat-charge policy that splits the available budget uniformly in time. We quantify the benefits of TRL analytically, deploy TRL on Amazon EC2, and perform a live validation in the context of a “blacklisting” application for Twitter.
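To contrast the two flat-charge policies, the sketch below compares a uniform budget split against a demand-shaped split. The simple proportional rule is an illustrative stand-in for TRL's actual optimization, and the forecast numbers are made up.

```python
# A minimal sketch contrasting a uniform budget split with a
# demand-shaped split in the spirit of TRL; the proportional rule is
# an illustrative stand-in for the paper's optimization.

def uniform_split(budget, slots):
    return [budget / slots] * slots

def demand_shaped_split(budget, forecast):
    """Allocate resource-hours proportionally to forecast demand."""
    total = sum(forecast)
    return [budget * d / total for d in forecast]

# Hypothetical hourly demand forecast for one day (requests/hour).
forecast = [20, 15, 10, 10, 15, 30, 60, 90, 120, 130, 125, 120,
            115, 120, 130, 140, 150, 160, 150, 120, 90, 60, 40, 30]
budget = 240                            # machine-hours to spend today

for hour, (u, s) in enumerate(zip(uniform_split(budget, 24),
                                  demand_shaped_split(budget, forecast))):
    print(f"{hour:02d}:00  uniform {u:5.1f}  shaped {s:5.1f}")
```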
ACM Special Interest Group on Data Communication | 2011
Mario A. Sánchez; John S. Otto; Zachary S. Bischof; Fabián E. Bustamante
Evaluating and characterizing access ISPs is critical to consumers shopping for alternative services and governments surveying the availability of broadband services to their citizens. We present Dasu, a service for crowdsourcing ISP characterization to the edge of the network. Dasu is implemented as an extension to a popular BitTorrent client and has been available since July 2010. While the prototype uses BitTorrent as its host application, its design is agnostic to the particular host application. The demo showcases our current implementation using both a prerecorded execution trace and a live run.
Proceedings of the National Academy of Sciences of the United States of America | 2014
Arnau Gavaldà-Miralles; David R. Choffnes; John S. Otto; Mario A. Sánchez; Fabián E. Bustamante; Luís A. Nunes Amaral; Jordi Duch; Roger Guimerà
Significance: The emergence of the Internet as the primary medium for information exchange has led to the development of many decentralized sharing systems. The most popular among them, BitTorrent, is used by tens of millions of people monthly and is responsible for more than one-third of the total Internet traffic. Despite its growing social, economic, and technological importance, there is little understanding of how users behave in this ecosystem. Because of the decentralized structure of peer-to-peer services, it is very difficult to gather data on users' behaviors, and it is in this sense that peer-to-peer file-sharing has been called the “dark matter” of the Internet. Here, we investigate users' activity patterns and uncover socioeconomic factors that could explain their behavior.

Tens of millions of individuals around the world use decentralized content distribution systems, a fact of growing social, economic, and technological importance. These sharing systems are poorly understood because, unlike in other technosocial systems, it is difficult to gather large-scale data about user behavior. Here, we investigate user activity patterns and the socioeconomic factors that could explain the behavior. Our analysis reveals that (i) the ecosystem is heterogeneous at several levels: content types are heterogeneous, users specialize in a few content types, and countries are heterogeneous in user profiles; and (ii) there is a strong correlation between socioeconomic indicators of a country and users' behavior. Our findings open a research area on the dynamics of decentralized sharing ecosystems and the socioeconomic factors affecting them, and may have implications for the design of algorithms and for policymaking.
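The correlation step can be illustrated in a few lines: rank-correlate a per-country socioeconomic indicator with a per-country activity aggregate. The sketch below uses SciPy's Spearman correlation with placeholder numbers, not the paper's data.

```python
# A minimal sketch of rank-correlating a socioeconomic indicator with
# per-country user activity; all values below are placeholders.
from scipy.stats import spearmanr

gdp_per_capita = [55000, 42000, 30000, 12000, 6000]  # hypothetical indicator
downloads      = [3.1, 3.8, 5.2, 7.9, 9.4]           # per-user monthly mean

rho, pval = spearmanr(gdp_per_capita, downloads)
print(f"Spearman rho={rho:.2f} (p={pval:.3f})")
```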
Conference on Emerging Networking Experiments and Technologies | 2014
Arnau Gavaldà-Miralles; John S. Otto; Fabián E. Bustamante; Luís A. Nunes Amaral; Jordi Duch; Roger Guimerà
Though the impact of file-sharing of copyrighted content has been discussed for over a decade, only in the past few years have countries begun to adopt legislation to criminalize this behavior. These laws impose penalties ranging from warnings and monetary fines to disconnecting Internet service. While their supporters are quick to point out trends showing the efficacy of these laws at reducing use of file-sharing sites, their analyses rely on brief snapshots of activity that cannot reveal long- and short-term trends. In this paper, we introduce an approach to model user behavior based on a hidden Markov model and apply it to analyze a two-year-long user-level trace of download activity of over 38k users from around the world. This approach allows us to quantify the true impact of file-sharing laws on user behavior, identifying behavioral trends otherwise difficult to identify. For instance, despite an initial reduction in activity in New Zealand when a three-strikes law took effect, after two months activity had returned to the level observed prior to the law being enacted. Given that punishment seems to, at best, result in short-term compliance, we suggest that incentives-based approaches may be more effective at changing user behavior.
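As a sketch of the modeling machinery, the code below defines a two-state ("active"/"lapsed") hidden Markov model over discretized weekly activity and computes a trace's log-likelihood with the scaled forward algorithm. The states, observation alphabet, and parameter values are illustrative, not the paper's fitted model.

```python
# A minimal sketch of a two-state HMM over weekly activity levels;
# parameters are illustrative, not the paper's fitted values.
import numpy as np

pi = np.array([0.8, 0.2])              # initial state distribution
A  = np.array([[0.9, 0.1],             # state transition matrix
               [0.3, 0.7]])
B  = np.array([[0.1, 0.3, 0.6],        # P(obs | state): obs 0/1/2 =
               [0.7, 0.2, 0.1]])       # none/low/high weekly activity

def forward_loglik(obs):
    """Log-likelihood of an observation sequence under (pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()                # rescale to avoid underflow
        loglik += np.log(s)
        alpha /= s
    return loglik + np.log(alpha.sum())

# A user who goes quiet after a law takes effect, then relapses.
trace = [2, 2, 1, 0, 0, 0, 1, 2, 2]
print(forward_loglik(trace))
```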