Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Georgios Smaragdakis is active.

Publication


Featured research published by Georgios Smaragdakis.


Measurement and Modeling of Computer Systems (SIGMETRICS) | 2009

Delay tolerant bulk data transfers on the Internet

Nikolaos Laoutaris; Georgios Smaragdakis; Pablo Rodriguez; Ravi Sundaram

Many emerging scientific and industrial applications require transferring multiple terabytes of data on a daily basis. Examples include pushing scientific data from particle accelerators/colliders to laboratories around the world, synchronizing datacenters across continents, and replicating collections of high-definition videos from events taking place in different time-zones. A key property of all the above applications is their ability to tolerate delivery delays ranging from a few hours to a few days. Such delay-tolerant bulk (DTB) data are currently being serviced mostly by the postal system using hard drives and DVDs, or by expensive dedicated networks. In this paper, we propose transmitting such data through commercial ISPs by taking advantage of already-paid-for off-peak bandwidth resulting from diurnal traffic patterns and percentile pricing. We show that between sender-receiver pairs with a small time-zone difference, simple source scheduling policies are able to take advantage of most of the existing off-peak capacity. When the time-zone difference increases, taking advantage of the full capacity requires performing store-and-forward through intermediate storage nodes. We present an extensive evaluation of the two options based on traffic data from 200+ links of a large transit provider with points of presence (PoPs) on three continents. Our results indicate that there is huge potential for performing multi-terabyte transfers on a daily basis at little or no additional cost.
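
The scheduling idea above can be illustrated with a few lines of code. The following is a minimal sketch, not the authors' system: it assumes a hypothetical link billed at the 95th percentile, invents an hourly diurnal load, treats everything below the already-paid-for percentile as free headroom, and greedily schedules a 50 TB delay-tolerant transfer into it.

```python
# Minimal sketch of DTB scheduling under 95th-percentile billing (numbers invented).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diurnal load on a link, sampled hourly over 30 days (Gb/s).
hours = np.arange(24 * 30)
load = 4 + 3 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 0.3, hours.size)

paid_for = np.percentile(load, 95)        # capacity the ISP already pays for
free = np.clip(paid_for - load, 0, None)  # per-hour off-peak headroom (Gb/s)

# Greedy source scheduling: push a 50 TB transfer only into the free headroom.
remaining_tb = 50.0
for h, headroom in enumerate(free):
    sent_tb = headroom * 3600 / 8 / 1000   # Gb/s over one hour -> TB
    remaining_tb -= min(sent_tb, remaining_tb)
    if remaining_tb == 0:
        print(f"50 TB delivered after {h + 1} hours at no extra transit cost")
        break
```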


Internet Measurement Conference (IMC) | 2010

Comparing DNS resolvers in the wild

Bernhard Ager; Wolfgang Mühlbauer; Georgios Smaragdakis; Steve Uhlig

The Domain Name System (DNS) is a fundamental building block of the Internet. Today, the performance of more and more applications depends not only on the responsiveness of DNS, but also on the exact answer returned by the queried DNS resolver, e.g., for Content Distribution Networks (CDNs). In this paper, we compare local DNS resolvers against GoogleDNS and OpenDNS for a large set of vantage points. Our end-host measurements inside 50 commercial ISPs reveal that two aspects have a significant impact on responsiveness: (1) the latency to the DNS resolver, and (2) the content of the DNS cache when the query is issued. We also observe significant diversity, even at the AS level, among the answers provided by the studied DNS resolvers. We attribute this diversity to the location-awareness of CDNs as well as to the location of DNS resolvers, which breaks the assumption made by CDNs that end-users are in the vicinity of their DNS resolvers. Our findings pinpoint limitations within the DNS deployment of some ISPs, as well as ways in which third-party DNS resolvers bias DNS replies.
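
A measurement in this spirit can be sketched with dnspython (a third-party library, assumed installed). The resolver IPs are the public GoogleDNS and OpenDNS addresses; www.example.com stands in for a CDN-hosted name. Issuing each query twice hints at the cache effect the paper identifies: the second (warm) query should be noticeably faster when the resolver has the answer cached.

```python
# Compare the local resolver against public resolvers: latency and returned answer.
import time
import dns.resolver  # third-party: dnspython

def timed_query(resolver: dns.resolver.Resolver, name: str):
    t0 = time.perf_counter()
    answer = resolver.resolve(name, "A")
    ms = (time.perf_counter() - t0) * 1000
    return ms, sorted(rr.address for rr in answer)

def third_party(ip: str) -> dns.resolver.Resolver:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [ip]
    return r

resolvers = {"local": dns.resolver.Resolver(),      # system-configured (ISP) resolver
             "GoogleDNS": third_party("8.8.8.8"),
             "OpenDNS": third_party("208.67.222.222")}

NAME = "www.example.com"  # stand-in for a CDN-hosted name
for label, res in resolvers.items():
    for attempt in ("cold", "warm"):                # second query should hit the cache
        ms, addrs = timed_query(res, NAME)
        print(f"{label:9s} {attempt:4s} {ms:7.1f} ms -> {addrs}")
```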


ACM SIGCOMM | 2013

Pushing CDN-ISP collaboration to the limit

Benjamin Frank; Ingmar Poese; Yin Lin; Georgios Smaragdakis; Anja Feldmann; Bruce M. Maggs; Jannis Rake; Steve Uhlig; Rick Weber

Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today, driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals, leading to a win-win situation for both ISP and CDN.
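
The first enabler, informed end-user to server assignment, can be sketched as a simple two-party exchange. The code below is a hypothetical illustration, not NetPaaS itself: the path costs, PoP names, and the pick-the-top-recommendation policy are all invented.

```python
# Toy sketch of informed user-to-server assignment: the CDN proposes candidates,
# the ISP re-ranks them using its internal view, and the CDN picks one.

# Hypothetical ISP-internal path costs from user subnets to CDN server locations.
ISP_PATH_COST = {
    ("10.1.0.0/16", "pop-berlin"): 2,
    ("10.1.0.0/16", "pop-frankfurt"): 5,
    ("10.1.0.0/16", "pop-amsterdam"): 9,
}

def isp_rank(user_prefix: str, candidates: list[str]) -> list[str]:
    """ISP side: order candidate servers by internal path cost (lower is better)."""
    return sorted(candidates,
                  key=lambda s: ISP_PATH_COST.get((user_prefix, s), float("inf")))

def cdn_assign(user_prefix: str, candidates: list[str]) -> str:
    """CDN side: ask the ISP for a ranking, then apply its own policy (load, etc.)."""
    ranked = isp_rank(user_prefix, candidates)
    return ranked[0]  # simplest policy: take the ISP's top recommendation

print(cdn_assign("10.1.0.0/16", ["pop-amsterdam", "pop-berlin", "pop-frankfurt"]))
# -> pop-berlin
```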


Internet Measurement Conference (IMC) | 2011

Web content cartography

Bernhard Ager; Wolfgang Mühlbauer; Georgios Smaragdakis; Steve Uhlig

Recent studies show that a significant part of Internet traffic is delivered through Web-based applications. To cope with the increasing demand for Web content, large-scale content hosting and delivery infrastructures, such as data centers and content distribution networks, are continuously being deployed. Being able to identify and classify such hosting infrastructures is helpful not only to content producers, content providers, and ISPs, but also to the research community at large, for example to quantify the degree of hosting infrastructure deployment in the Internet or the replication of Web content. In this paper, we introduce Web Content Cartography, i.e., the identification and classification of content hosting and delivery infrastructures. We propose a lightweight and fully automated approach to discover hosting infrastructures based only on DNS measurements and BGP routing table snapshots. Our experimental results show that our approach is feasible even with a limited number of well-distributed vantage points. We find that some popular content is served exclusively from specific regions and ASes. Furthermore, our classification enables us to derive content-centric AS rankings that complement existing AS rankings and shed light on recent observations about shifts in inter-domain traffic and the AS topology.
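
A toy version of the classification step might look as follows. The hostnames, and the ASes observed for them across vantage points, are invented; the idea is simply that hostnames with the same AS footprint are candidates for being served by the same hosting infrastructure.

```python
# Group hostnames by the set of origin ASes their A records map to (via BGP
# snapshots) across vantage points; shared footprints suggest shared infrastructure.
from collections import defaultdict

# Hypothetical observations: hostname -> ASes seen across vantage points.
OBSERVED = {
    "img.shop-a.example": {15169, 36040},
    "video.site-b.example": {15169, 36040},
    "www.news-c.example": {20940},
    "static.news-c.example": {20940},
}

footprints = defaultdict(list)
for host, ases in OBSERVED.items():
    footprints[frozenset(ases)].append(host)

for ases, hosts in footprints.items():
    print(f"infrastructure spanning ASes {sorted(ases)}: {hosts}")
```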


IEEE/ACM Transactions on Networking | 2009

Spatio-temporal network anomaly detection by assessing deviations of empirical measures

Ioannis Ch. Paschalidis; Georgios Smaragdakis

We introduce an Internet traffic anomaly detection mechanism based on large deviations results for empirical measures. Using past traffic traces we characterize network traffic during various time-of-day intervals, assuming that it is anomaly-free. We present two different approaches to characterize traffic: (i) a model-free approach based on the method of types and Sanov's theorem, and (ii) a model-based approach modeling traffic using a Markov modulated process. Using these characterizations as a reference, we continuously monitor traffic and employ large deviations and decision theory results to "compare" the empirical measure of the monitored traffic with the corresponding reference characterization, thus identifying traffic anomalies in real time. Our experimental results show that, applying our methodology, even short-lived anomalies are identified within a small number of observations. Throughout, we compare the two approaches, presenting their advantages and disadvantages for identifying and classifying temporal network anomalies. We also demonstrate how our framework can be used to monitor traffic from multiple network elements in order to identify both spatial and temporal anomalies. We validate our techniques by analyzing real traffic traces with time-stamped anomalies.
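
The model-free test can be made concrete. By Sanov's theorem, the probability that n samples drawn from the reference law mu produce an empirical measure mu_hat decays roughly as exp(-n * D(mu_hat || mu)), so thresholding n * D(mu_hat || mu) gives a large-deviations anomaly detector. The sketch below uses an invented three-bin traffic-type distribution and threshold.

```python
# Large-deviations anomaly test: flag a window when n * D(mu_hat || mu) is large.
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    """D(p || q) over a finite alphabet, with the convention 0 * log 0 = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

reference = np.array([0.70, 0.20, 0.10])   # anomaly-free traffic-type frequencies
window = np.array([55, 20, 25])            # counts observed in the current window
n = window.sum()
mu_hat = window / n

statistic = n * kl(mu_hat, reference)
THRESHOLD = 8.0                            # would be tuned to a false-alarm rate
print(f"test statistic {statistic:.2f} ->",
      "ANOMALY" if statistic > THRESHOLD else "normal")
```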


ACM SIGCOMM | 2012

Enabling content-aware traffic engineering

Ingmar Poese; Benjamin Frank; Georgios Smaragdakis; Steve Uhlig; Anja Feldmann; Bruce M. Maggs

Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs. The challenges that CDNs and ISPs face separately can be turned into an opportunity for collaboration. We argue that it is sufficient for CDNs and ISPs to coordinate only in server selection, not routing, in order to perform traffic engineering. To this end, we propose Content-aware Traffic Engineering (CaTE), which dynamically adapts server selection for content hosted by CDNs using ISP recommendations on small time scales. CaTE relies on the observation that by selecting an appropriate server among those available to deliver the content, the path of the traffic in the network can be influenced in a desired way. We present the design and implementation of a prototype to realize CaTE, and show how CDNs and ISPs can jointly take advantage of the already deployed distributed hosting infrastructures and path diversity, as well as the ISP's detailed view of the network status, without revealing sensitive operational information. By relying on tier-1 ISP traces, we show that CaTE allows CDNs to enhance the end-user experience while enabling an ISP to achieve several traffic engineering goals.
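
The key observation, that server selection doubles as path selection, fits in a few lines. The following is a toy sketch, not the CaTE prototype: the links, server-to-link mapping, and utilization figures are invented, and the ISP recommendation simply favors the server whose path crosses the least-utilized link.

```python
# Toy sketch: picking among servers that hold the same content also picks the
# network path, so an ISP recommendation can steer traffic off hot links.

# Hypothetical ingress link used when serving the user from each candidate server,
# plus each link's current utilization (fraction of capacity).
LINK_UTIL = {"link-east": 0.85, "link-west": 0.40}
SERVER_VIA_LINK = {"server-1": "link-east", "server-2": "link-west"}

def cate_recommend(candidates: list[str]) -> str:
    """ISP side: rank candidate servers by the utilization of the path they imply."""
    return min(candidates, key=lambda s: LINK_UTIL[SERVER_VIA_LINK[s]])

# The CDN might otherwise pick server-1 (say, by geography); here it is steered
# to server-2, relieving the 85%-utilized eastern link.
print(cate_recommend(["server-1", "server-2"]))  # -> server-2
```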


9th Conference of Telecommunication, Media and Internet | 2010

Energy trade-offs among content delivery architectures

Anja Feldmann; Andreas Gladisch; Mario Kind; Christoph Lange; Georgios Smaragdakis; Fritz-Joachim Westphal

It is in vogue to consider how to incorporate various home devices such as set-top boxes into content delivery architectures using the Peer-to-Peer (P2P) paradigm. The hope is to enhance the efficiency of content delivery, e.g., in terms of reliability, availability, or throughput, or to reduce the cost of the content delivery platform, or to improve the end-user experience. While it is easy to point out the benefits of such proposals, they usually do not consider the implications with regard to energy costs. In this paper we explore the energy trade-offs of such P2P architectures, data center architectures, and content distribution networks (CDNs) by building upon an energy consumption model of the transport network and datacenters developed in the context of Internet TV (IPTV). Our results show that a CDN within an ISP is able to minimize the overall power consumption. While a P2P architecture may reduce the power consumption of the service provider, it increases the overall energy consumption.
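
A back-of-the-envelope comparison conveys the shape of the trade-off. All wattages and ratios below are invented for illustration and are not the paper's model: the point is only that P2P shifts load from a few efficient provider servers to many always-on home devices, which can cut the provider's bill while raising total consumption.

```python
# Back-of-the-envelope sketch; every constant here is invented for illustration.

VIEWERS = 100_000

def cdn_energy_kw(viewers: int) -> tuple[float, float]:
    """CDN inside the ISP: a few efficient edge servers, no extra load at homes."""
    servers = viewers / 2_000          # assume one 300 W edge server per 2,000 viewers
    provider_kw = servers * 0.3
    return provider_kw, provider_kw    # (provider, total): homes add nothing extra

def p2p_energy_kw(viewers: int) -> tuple[float, float]:
    """P2P: origin/seed servers only, but every set-top box burns extra power."""
    provider_kw = (viewers / 10_000) * 0.3   # assume 5x fewer provider servers
    homes_kw = viewers * 0.005               # assume +5 W per participating home device
    return provider_kw, provider_kw + homes_kw

for name, model in (("CDN", cdn_energy_kw), ("P2P", p2p_energy_kw)):
    provider, total = model(VIEWERS)
    print(f"{name}: provider {provider:6.1f} kW, total {total:6.1f} kW")
```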


Global Communications Conference (GLOBECOM) | 2004

The effect of router buffer size on HighSpeed TCP performance

Dhiman Barman; Georgios Smaragdakis; Ibrahim Matta

We study the effect of the IP router buffer size on the throughput of HighSpeed TCP (HSTCP). We are motivated by the fact that, in high-speed routers, the buffer size is important, as a large buffer size might be a constraint. We first derive an analytical model for HighSpeed TCP and we show that for a small buffer size, equal to 10% of the bandwidth-delay product, HighSpeed TCP can achieve more than 90% of the bottleneck capacity. We also show that setting the buffer size equal to 20% can increase the utilization of HighSpeed TCP up to 98%. On the contrary, setting the buffer size to less than 10% of the bandwidth-delay product can decrease HighSpeed TCP's throughput significantly. We also study the performance effects under both DropTail and RED AQM (active queue management). Analytical results obtained using a fixed-point approach are compared to those obtained by simulation.
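
The buffer-sizing rule is easy to make concrete with a worked example (the link speed and RTT below are illustrative; the utilization figures are the ones reported in the abstract).

```python
# Worked example of the sizing rule: buffer as a fraction of the bandwidth-delay
# product (BDP). Link speed and RTT are illustrative, not from the paper.

bandwidth_bps = 1e9   # 1 Gb/s bottleneck
rtt_s = 0.100         # 100 ms round-trip time

bdp_bytes = bandwidth_bps * rtt_s / 8   # 12.5 MB
for frac, utilization in ((0.10, "more than 90%"), (0.20, "up to 98%")):
    print(f"buffer = {frac:.0%} of BDP = {frac * bdp_bytes / 1e6:.2f} MB "
          f"-> reported HSTCP utilization: {utilization}")
```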


IEEE/ACM Transactions on Networking | 2013

Delay-tolerant bulk data transfers on the Internet

Nikolaos Laoutaris; Georgios Smaragdakis; Rade Stanojevic; Pablo Rodriguez; Ravi Sundaram

Many emerging scientific and industrial applications require transferring multiple terabytes of data on a daily basis. Examples include pushing scientific data from particle accelerators/colliders to laboratories around the world, synchronizing datacenters across continents, and replicating collections of high-definition videos from events taking place in different time-zones. A key property of all the above applications is their ability to tolerate delivery delays ranging from a few hours to a few days. Such delay-tolerant bulk (DTB) data are currently being serviced mostly by the postal system using hard drives and DVDs, or by expensive dedicated networks. In this paper, we propose transmitting such data through commercial ISPs by taking advantage of already-paid-for off-peak bandwidth resulting from diurnal traffic patterns and percentile pricing. We show that between sender-receiver pairs with a small time-zone difference, simple source scheduling policies are able to take advantage of most of the existing off-peak capacity. When the time-zone difference increases, taking advantage of the full capacity requires performing store-and-forward through intermediate storage nodes. We present an extensive evaluation of the two options based on traffic data from 200+ links of a large transit provider with points of presence (PoPs) on three continents. Our results indicate that there is huge potential for performing multi-terabyte transfers on a daily basis at little or no additional cost.


Internet Measurement Conference (IMC) | 2013

On the benefits of using a large IXP as an Internet vantage point

Nikolaos Chatzis; Georgios Smaragdakis; Jan Böttger; Thomas Krenc; Anja Feldmann

In the context of measuring the Internet, a long-standing question has been whether there exist well-localized physical entities in today's network where traffic from a representative cross-section of the constituents of the Internet can be observed at a fine-enough granularity to paint an accurate and informative picture of how these constituents shape and impact much of the structure and evolution of today's Internet and the actual traffic it carries. In this paper, we first answer this question in the affirmative by mining 17 weeks of continuous sFlow data from one of the largest European IXPs. Examining these weekly snapshots, we discover a vantage point with excellent visibility into the Internet, seeing week-in and week-out traffic from all 42K+ routed ASes, almost all 450K+ routed prefixes, from close to 1.5M servers, and around a quarter billion IPs from all around the globe. Second, to show the potential of such vantage points, we analyze the server-related portion of the traffic at this IXP, identify the server IPs, and cluster them according to the organizations responsible for delivering the content. In the process, we observe a clear trend among many of the critical Internet players towards network heterogenization; that is, either hosting servers of third-party networks in their own infrastructures or pursuing massive deployments of their own servers in strategically chosen third-party networks. While the latter is a well-known business strategy of companies such as Akamai, Google, and Netflix, we show in this paper the extent of network heterogenization in today's Internet and illustrate how it enriches the traditional, largely traffic-agnostic AS-level view of the Internet.
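
The server-identification step can be caricatured in a few lines. Everything below is invented: the sampled records, the port-based server heuristic, the peer-count cutoff, and the prefix-to-organization mapping merely illustrate the kind of clustering described above.

```python
# Toy sketch: keep endpoints that look like servers (many distinct peers on a
# well-known port) and group them by the organization announcing the prefix.
from collections import defaultdict

# Hypothetical (ip, port, peer) samples distilled from sFlow records.
SAMPLES = (
    [("203.0.113.10", 443, peer) for peer in range(1200)]
    + [("198.51.100.7", 80, peer) for peer in range(900)]
    + [("192.0.2.55", 51200, peer) for peer in range(3)]
)

PREFIX_ORG = {"203.0.113.10": "CDN-A", "198.51.100.7": "Hoster-B",
              "192.0.2.55": "Eyeball-C"}

peers = defaultdict(set)
for ip, port, peer in SAMPLES:
    if port in (80, 443):                 # crude server heuristic: well-known ports
        peers[ip].add(peer)

by_org = defaultdict(list)
for ip, ps in peers.items():
    if len(ps) >= 100:                    # many distinct peers -> likely a server
        by_org[PREFIX_ORG[ip]].append(ip)

for org, ips in by_org.items():
    print(f"{org}: {len(ips)} server IP(s), e.g. {ips[0]}")
```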

Collaboration


Dive into Georgios Smaragdakis's collaborations.

Top Co-Authors

Anja Feldmann
Technical University of Berlin

Steve Uhlig
Queen Mary University of London

Nikolaos Chatzis
Technical University of Berlin

Mema Roussopoulos
National and Kapodistrian University of Athens