Bernhard Ager
ETH Zurich
Publications
Featured research published by Bernhard Ager.
internet measurement conference | 2010
Bernhard Ager; Wolfgang Mühlbauer; Georgios Smaragdakis; Steve Uhlig
The Domain Name System (DNS) is a fundamental building block of the Internet. Today, the performance of more and more applications depends not only on the responsiveness of DNS, but also on the exact answer returned by the queried DNS resolver, e.g., for Content Distribution Networks (CDNs). In this paper, we compare local DNS resolvers against Google DNS and OpenDNS for a large set of vantage points. Our end-host measurements inside 50 commercial ISPs reveal that two aspects have a significant impact on responsiveness: (1) the latency to the DNS resolver, and (2) the content of the DNS cache when the query is issued. We also observe significant diversity, even at the AS level, among the answers provided by the studied DNS resolvers. We attribute this diversity to the location-awareness of CDNs as well as to the location of DNS resolvers, which breaks the assumption made by CDNs about the vicinity of the end-user and its DNS resolver. Our findings pinpoint limitations within the DNS deployment of some ISPs, as well as the way third-party DNS resolvers bias DNS replies.
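A minimal sketch of the kind of resolver latency measurement described above, using only the standard library; the function names, transaction ID, and 2-second timeout are illustrative assumptions, not details from the paper:

```python
import socket
import struct
import time

def build_query(name, qtype=1, txid=0x1234):
    """Build a minimal DNS query packet (type A, class IN by default)."""
    # Header: id, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)
    return header + question

def time_resolver(resolver_ip, name, timeout=2.0):
    """Send one UDP query to a resolver and measure the response latency.

    Returns (latency_seconds, raw_response). A real measurement would
    repeat this for cached and uncached names, per the paper's findings
    on cache content."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    start = time.monotonic()
    sock.sendto(build_query(name), (resolver_ip, 53))
    data, _ = sock.recvfrom(512)
    return time.monotonic() - start, data
```

Comparing a local resolver against, e.g., 8.8.8.8 with `time_resolver` for the same name illustrates the latency component of responsiveness, though it ignores the answer-content differences the paper also studies.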
internet measurement conference | 2011
Bernhard Ager; Wolfgang Mühlbauer; Georgios Smaragdakis; Steve Uhlig
Recent studies show that a significant part of Internet traffic is delivered through Web-based applications. To cope with the increasing demand for Web content, large-scale content hosting and delivery infrastructures, such as data centers and content distribution networks, are continuously being deployed. Being able to identify and classify such hosting infrastructures is helpful not only to content producers, content providers, and ISPs, but also to the research community at large, for example to quantify the degree of hosting infrastructure deployment in the Internet or the replication of Web content. In this paper, we introduce Web Content Cartography, i.e., the identification and classification of content hosting and delivery infrastructures. We propose a lightweight and fully automated approach to discover hosting infrastructures based only on DNS measurements and BGP routing table snapshots. Our experimental results show that our approach is feasible even with a limited number of well-distributed vantage points. We find that some popular content is served exclusively from specific regions and ASes. Furthermore, our classification enables us to derive content-centric AS rankings that complement existing AS rankings and shed light on recent observations about shifts in inter-domain traffic and the AS topology.
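One building block of such an approach is grouping hostnames by the network footprint their DNS answers resolve to. A toy sketch under the simplifying assumption that a shared set of /24 prefixes indicates shared infrastructure (the paper additionally uses BGP data and AS mappings):

```python
from collections import defaultdict
import ipaddress

def slash24(ip):
    """Collapse an IPv4 address to its covering /24 prefix."""
    return str(ipaddress.ip_network(ip + "/24", strict=False))

def cluster_by_infrastructure(answers):
    """Group hostnames whose A records fall into the same set of /24s.

    answers: dict mapping hostname -> iterable of IPv4 address strings
    collected from multiple vantage points. Returns a dict mapping a
    frozenset of /24 prefixes to the hostnames served from them."""
    clusters = defaultdict(set)
    for host, ips in answers.items():
        key = frozenset(slash24(ip) for ip in ips)
        clusters[key].add(host)
    return dict(clusters)
```

Hostnames landing in the same cluster are candidates for being hosted on the same infrastructure; a real pipeline would map prefixes to ASes via the BGP snapshots the abstract mentions.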
passive and active network measurement | 2012
Nadi Sarrar; Gregor Maier; Bernhard Ager; Robin Sommer; Steve Uhlig
While the IETF standardized IPv6 more than fifteen years ago, IPv4 is still the prevalent Internet protocol today. On June 8th, 2011, several large content and service providers coordinated a large-scale IPv6 test run by enabling support for IPv6 simultaneously: the World IPv6 Day. In this paper, we compare IPv6 activity before, during, and after the event. We examine traffic traces recorded at a large European Internet Exchange Point (IXP) and on the campus of a major US university, analyzing volume, application mix, and the use of tunneling protocols for transporting IPv6 packets. For the exchange point we find that native IPv6 traffic almost doubled during the World IPv6 Day, while changes in tunneled traffic were limited. At the university, IPv6 traffic increased from 3–6 GB/day to over 130 GB/day during the World IPv6 Day, accompanied by a significant shift in the application and HTTP destination mix. Our results also show that a significant number of participants in the World IPv6 Day kept their IPv6 support online even after the test period ended, suggesting that they did not encounter any significant problems.
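Separating native from tunneled IPv6 in a trace comes down to inspecting outer headers. A simplified classifier covering two common tunneling mechanisms (the study also has to handle further encapsulations, which are omitted here):

```python
def classify_ipv6_transport(ethertype, ip_proto=None, udp_port=None):
    """Classify how an IPv6 packet is carried, from outer headers only.

    ethertype: Ethernet type of the frame (0x86DD = native IPv6)
    ip_proto:  protocol number of an outer IPv4 header, if present
               (41 = IPv6-in-IPv4, as used by 6in4 and 6to4)
    udp_port:  outer UDP port, if present (3544 = Teredo)
    """
    if ethertype == 0x86DD:
        return "native"
    if ethertype == 0x0800:          # outer IPv4 header
        if ip_proto == 41:
            return "6in4/6to4"
        if ip_proto == 17 and udp_port == 3544:
            return "teredo"
    return "other"
```

Tallying these labels over a trace before, during, and after June 8th would reproduce the native-versus-tunneled comparison the abstract reports in miniature.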
passive and active network measurement | 2012
Fabian Schneider; Bernhard Ager; Gregor Maier; Anja Feldmann; Steve Uhlig
Being responsible for more than half of the total traffic volume in the Internet, HTTP is a popular subject for traffic analysis. From our experiences with HTTP traffic analysis we identified a number of pitfalls which can render a carefully executed study flawed. Often these pitfalls can be avoided easily. Based on passive traffic measurements of 20,000 European residential broadband customers, we quantify the potential error of three issues: non-consideration of persistent or pipelined HTTP requests, mismatches between the Content-Type header field and the actual content, and mismatches between the Content-Length header and the actual transmitted volume. We find that 60% (30%) of all HTTP requests (bytes) are persistent (i.e., not the first in a TCP connection) and 4% are pipelined. Moreover, we observe a Content-Type mismatch for 35% of the total HTTP volume. In terms of Content-Length accuracy, our data shows a factor of at least 3.2 more bytes reported in the HTTP header than actually transferred.
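Two of the three pitfalls can be illustrated with short helpers: counting persistent requests (any request that is not the first on its TCP connection) and sniffing a body's real type from magic bytes to compare against the declared Content-Type. Both are toy sketches, far smaller than the trace analysis in the paper:

```python
def persistent_share(requests):
    """Fraction of HTTP requests that are persistent, i.e. not the
    first request observed on their TCP connection.

    requests: iterable of (connection_id, request) in trace order."""
    seen = set()
    persistent = total = 0
    for conn_id, _ in requests:
        total += 1
        if conn_id in seen:
            persistent += 1
        seen.add(conn_id)
    return persistent / total if total else 0.0

# Tiny subset of file-type magic bytes, for illustration only.
MAGIC = {b"\xff\xd8\xff": "image/jpeg",
         b"\x89PNG":      "image/png",
         b"GIF8":         "image/gif",
         b"%PDF":         "application/pdf"}

def sniff_type(body):
    """Guess a body's media type from leading magic bytes; None if unknown.
    A mismatch with the Content-Type header flags the second pitfall."""
    for magic, ctype in MAGIC.items():
        if body.startswith(magic):
            return ctype
    return None
```

The third pitfall, Content-Length mismatches, would be checked analogously by comparing the header value against the bytes actually observed on the wire.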
Computing | 2014
Eduard Glatz; Stelios Mavromatidis; Bernhard Ager; Xenofontas A. Dimitropoulos
Visualizing communication logs, like NetFlow records, is extremely useful for numerous tasks that need to analyze network traffic traces, like network planning, performance monitoring, and troubleshooting. Communication logs, however, can be massive, which necessitates designing effective visualization techniques for large data sets. Previous research has explored using parallel-coordinate plots for visualizing network traffic flows; however, such plots do not scale to data sets with thousands or even millions of flows. To address this problem, we introduce a novel network traffic visualization scheme based on the key ideas of (1) exploiting frequent itemset mining (FIM) to visualize a succinct set of interesting traffic patterns extracted from large traces of communication logs; and (2) visualizing extracted patterns as hypergraphs that clearly display multi-attribute associations. We demonstrate case studies that support the utility of our visualization scheme and show that it enables the visualization of substantially larger data sets than existing network traffic visualization schemes based on parallel-coordinate plots or graphs. For example, we show that our scheme can easily visualize the patterns of more than 41 million NetFlow records.
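The FIM idea is to treat each flow record as a set of (attribute, value) items and keep only combinations that recur across many flows. A deliberately tiny Apriori-style pass over itemsets of size one and two (real FIM implementations handle arbitrary sizes and prune candidates):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(flows, min_support):
    """Return attribute combinations occurring in >= min_support flows.

    flows: list of dicts such as {"dport": 80, "proto": "tcp"}.
    Each flow is treated as a set of (attribute, value) items; all
    subsets of size 1 and 2 are counted, and only those meeting the
    support threshold are kept."""
    counts = Counter()
    for flow in flows:
        items = sorted(flow.items())   # canonical order for stable keys
        for size in (1, 2):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}
```

The surviving itemsets are exactly the "succinct set of interesting traffic patterns" that the scheme then renders as hypergraph edges connecting the attribute values involved.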
conference on information sciences and systems | 2006
Bernhard Ager; Holger Dreger; Anja Feldmann
Even though the key ideas behind DNSSEC were introduced quite some time ago, DNSSEC has not yet seen large-scale deployment. This is in large part due to the anticipated overhead of DNSSEC. While the overheads have been reduced by the introduction of the delegation signer model, it is still not clear if they are bearable. In this paper we therefore examine the actual overheads of DNSSEC. We first examine how the packet sizes of a DNS trace would increase if DNSSEC were used. Then we explore the CPU and memory overheads imposed by DNSSEC by replaying a DNS client trace in a testbed initialized with roughly 100,000 zones.
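The packet-size question can be framed as simple arithmetic: each signed RRset in a response gains one RRSIG record. A back-of-the-envelope estimator; the default sizes (128 bytes for a 1024-bit RSA signature, ~40 bytes for the RRSIG record's fixed fields and signer name) are illustrative assumptions, not figures from the paper:

```python
def dnssec_size_estimate(plain_size, signed_rrsets,
                         sig_bytes=128, rrsig_overhead=40):
    """Rough estimate of a signed DNS response's size in bytes.

    Assumes one RRSIG per signed RRset. sig_bytes is the raw signature
    length (128 for 1024-bit RSA); rrsig_overhead covers the RRSIG
    record's fixed fields plus the signer's name. Both defaults are
    illustrative assumptions."""
    return plain_size + signed_rrsets * (sig_bytes + rrsig_overhead)
```

Applying such an estimator across every response in a DNS trace approximates the trace-level size-inflation analysis the abstract describes, before any CPU or memory measurements.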
symposium on sdn research | 2016
Vasileios Kotronis; Rowan Klöti; Matthias Rost; Panagiotis Georgopoulos; Bernhard Ager; Stefan Schmid; Xenofontas A. Dimitropoulos
Modern Internet applications, from HD video-conferencing to health monitoring and remote control of power plants, pose stringent demands on network latency, bandwidth, and availability. An approach to support such applications and provide inter-domain guarantees, enabling new avenues for innovation, is using centralized inter-domain routing brokers. These entities centralize routing control for mission-critical traffic across domains, working in parallel to BGP. In this work, we propose using IXPs as natural points for stitching inter-domain paths under the control of inter-domain routing brokers. To evaluate the potential of this approach, we first map the global substrate of inter-IXP pathlets that IXP members could offer, based on measurements for 229 IXPs worldwide. We show that using IXPs as stitching points has two useful properties. First, up to 91% of the total IPv4 address space can be served by such inter-domain routing brokers when working in concert with just a handful of large IXPs and their associated ISP members. Second, path diversity on the inter-IXP graph increases by up to 29 times, as compared to current BGP valley-free routing. To exploit the rich path diversity, we introduce algorithms that inter-domain routing brokers can use to embed paths, subject to bandwidth and latency constraints. We show that our algorithms scale to the sizes of the measured graphs and can serve diverse simulated path request mixes. Our work highlights a novel direction for SDN innovation across domains, based on logically centralized control and programmable IXP fabrics.
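Embedding a path subject to bandwidth and latency constraints can be sketched as a constrained shortest-path search: filter out pathlets below the requested bandwidth, run Dijkstra on latency, and reject the result if it exceeds the latency budget. This is a plain textbook sketch, not the embedding algorithms of the paper:

```python
import heapq

def embed_path(graph, src, dst, min_bw, max_latency):
    """Lowest-latency path from src to dst over pathlets with at least
    min_bw capacity; None if no path meets the max_latency budget.

    graph: dict node -> list of (neighbor, latency_ms, bandwidth)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, lat, bw in graph.get(u, []):
            if bw < min_bw:
                continue                  # pathlet cannot carry the request
            nd = d + lat
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist or dist[dst] > max_latency:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

A broker would run many such requests against the inter-IXP pathlet graph; the paper's algorithms additionally have to account for bandwidth consumed by already-embedded paths, which this sketch ignores.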
acm special interest group on data communication | 2016
Rowan Klöti; Bernhard Ager; Vasileios Kotronis; George Nomikos; Xenofontas A. Dimitropoulos
Internet eXchange Points (IXPs) are core components of the Internet infrastructure where Internet Service Providers (ISPs) meet and exchange traffic. During the last few years, the number and size of IXPs have increased rapidly, driving the flattening and shortening of Internet paths. However, understanding the present status of the IXP ecosystem and its potential role in shaping the future Internet requires rigorous data about IXPs: their presence, status, participants, etc. In this work, we perform the first cross-comparison of three well-known publicly available IXP databases, namely PeeringDB, Euro-IX, and PCH. A key challenge we address is linking IXP identifiers across databases maintained by different organizations. We find different AS-centric versus IXP-centric views provided by the databases as a result of their data collection approaches. In addition, we highlight differences and similarities w.r.t. IXP participants, geographical coverage, and co-location facilities. As a side product of our linkage heuristics, we make publicly available the union of the three databases, which includes 40.2% more IXPs and 66.3% more IXP participants than the commonly used PeeringDB. We also publish our analysis code to foster reproducibility of our experiments and shed preliminary insights into the accuracy of the union dataset.
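The identifier-linkage challenge is a record-linkage problem: the same IXP appears under slightly different names in each database. A deliberately simple heuristic matching on normalized name plus city; the record layout and the example names are hypothetical, and the paper's heuristics are considerably stronger:

```python
import re

# Generic tokens that carry no identity, dropped before matching.
STOPWORDS = {"internet", "exchange", "point", "ixp", "the"}

def normalize(name):
    """Canonicalize an IXP name: lowercase, strip punctuation and
    generic tokens, collapse whitespace."""
    name = re.sub(r"[^a-z0-9 ]", " ", name.lower())
    return " ".join(t for t in name.split() if t not in STOPWORDS)

def link_records(db_a, db_b):
    """Match records from two IXP databases on (normalized name, city).

    db_a, db_b: lists of dicts with "name" and "city" keys.
    Returns a list of (record_a, record_b) candidate pairs."""
    index = {(normalize(r["name"]), r["city"].lower()): r for r in db_b}
    links = []
    for rec in db_a:
        key = (normalize(rec["name"]), rec["city"].lower())
        if key in index:
            links.append((rec, index[key]))
    return links
```

Records matched this way can then be merged into the union dataset the abstract describes, with unmatched records kept as database-specific entries.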
privacy enhancing technologies | 2015
David Gugelmann; Markus Happe; Bernhard Ager; Vincent Lenders
Privacy in the Web has become a major concern, resulting in the popular use of various tools for blocking tracking services. Most of these tools rely on manually maintained blacklists, which need to be kept up-to-date to protect Web users' privacy effectively. It is challenging to keep pace with today's quickly evolving advertisement and analytics landscape. In order to support blacklist maintainers with this task, we identify a set of Web traffic features for identifying privacy-intrusive services. Based on these features, we develop an automatic approach that learns the properties of advertisement and analytics services listed by existing blacklists and proposes new services for inclusion on blacklists. We evaluate our technique on real traffic traces of a campus network and find on the order of 200 new privacy-intrusive Web services that are not listed by the most popular Firefox plug-in Adblock Plus. The proposed Web traffic features are easy to derive, allowing a distributed implementation of our approach.
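Two intuitive traffic features for this task are how many distinct sites embed a service (referer diversity) and how many clients contact it. A toy feature extractor with a fixed-threshold score standing in for the learned classifier; the field names, thresholds, and the idea of thresholding rather than learning are all illustrative simplifications:

```python
def tracker_features(requests):
    """Aggregate per-service traffic features from Web request logs.

    requests: iterable of dicts with "host" (the contacted service),
    "referer_host" (the embedding site), and "client" keys.
    Returns host -> {"referers": set, "clients": set}."""
    feats = {}
    for r in requests:
        f = feats.setdefault(r["host"], {"referers": set(), "clients": set()})
        f["referers"].add(r["referer_host"])
        f["clients"].add(r["client"])
    return feats

def looks_privacy_intrusive(f, min_referers=3, min_clients=2):
    """Threshold heuristic: a service embedded by many distinct sites
    and contacted by many clients behaves like a tracker."""
    return (len(f["referers"]) >= min_referers
            and len(f["clients"]) >= min_clients)
```

In the paper's setting, features like these would be fed to a model trained on existing blacklists, and high-scoring unlisted hosts proposed to blacklist maintainers.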
conference on computer communications workshops | 2010
Juhoon Kim; Fabian Schneider; Bernhard Ager; Anja Feldmann
The finding by Maier et al. [1] that Network News Transport Protocol (NNTP) traffic is responsible for up to 5% of residential network traffic inspires us to revisit today's Usenet usage. For this purpose we have developed an NNTP analyzer for the Bro network intrusion detection system. We find that NNTP is intensively used by a small fraction of the residential broadband lines that we study and that almost all traffic is to NNTP servers that require subscription for a monthly fee. The accessed content resembles what one might expect from file-sharing systems: archives and multimedia files. Accordingly, it appears that NNTP is used by some as a high-performance alternative to traditional P2P file-sharing options such as eDonkey or BitTorrent.
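At its core, an NNTP analyzer parses the client's command stream out of reassembled TCP payloads. A toy tally over a few common NNTP commands, a stand-in for the Bro analyzer the paper describes (which also tracks server responses and article content):

```python
# A few common NNTP commands (ARTICLE/BODY/HEAD/STAT fetch content;
# XOVER is a widespread overview extension). Not an exhaustive list.
NNTP_COMMANDS = {"ARTICLE", "BODY", "HEAD", "STAT",
                 "GROUP", "LIST", "POST", "XOVER"}

def count_nntp_commands(lines):
    """Tally client-side NNTP commands in a decoded command stream.

    lines: iterable of client command lines, e.g. from a TCP stream
    reassembled by a passive monitor. Commands are case-insensitive;
    anything unrecognized is counted under "other"."""
    counts = {}
    for line in lines:
        cmd = line.strip().split(" ", 1)[0].upper()
        key = cmd if cmd in NNTP_COMMANDS else "other"
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Dominance of ARTICLE/BODY fetches over POSTs in such a tally would be consistent with the download-heavy, file-sharing-like usage pattern the abstract reports.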