Arnaud Legout
French Institute for Research in Computer Science and Automation
Publication
Featured research published by Arnaud Legout.
Internet Measurement Conference | 2006
Arnaud Legout; Guillaume Urvoy-Keller; Pietro Michiardi
The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly accepted that BitTorrent performs well, recent studies have proposed replacing the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to argue that replacing the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, in our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit-level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvement for efficient peer-to-peer file replication protocols.
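The rarest first algorithm is easy to state: among the pieces a peer still needs, request one that is least replicated in the local peer set, breaking ties at random. A minimal sketch in Python (the `needed` set and `peer_bitfields` map are illustrative data structures, not BitTorrent wire-format objects):

```python
import random
from collections import Counter

def rarest_first(needed, peer_bitfields):
    """Pick the next piece to request: the least-replicated piece
    among those we still need, breaking ties at random (as real
    clients do, to avoid synchronized choices across peers)."""
    # Count how many peers in our peer set advertise each needed piece.
    availability = Counter()
    for bitfield in peer_bitfields.values():
        for piece in bitfield:
            if piece in needed:
                availability[piece] += 1
    if not availability:
        return None  # no peer has any piece we need
    rarest_count = min(availability.values())
    candidates = [p for p, c in availability.items() if c == rarest_count]
    return random.choice(candidates)

# Example: piece 2 is held by a single peer, so it is requested first.
peers = {"A": {0, 1}, "B": {0, 2}, "C": {0, 1}}
print(rarest_first(needed={0, 1, 2}, peer_bitfields=peers))  # -> 2
```

The random tie-break matters in practice: it prevents many peers from converging on the same piece choice at the same time.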
Conference on Emerging Networking Experiments and Technologies (CoNEXT) | 2011
Ashwin Rao; Arnaud Legout; Yeon-sup Lim; Donald F. Towsley; Chadi Barakat; Walid Dabbous
Video streaming represents a large fraction of Internet traffic. Surprisingly, little is known about the network characteristics of this traffic. In this paper, we study the network characteristics of the two most popular video streaming services, Netflix and YouTube. We show that the streaming strategies vary with the type of application (Web browser or native mobile application) and the type of container (Silverlight, Flash, or HTML5) used for video streaming. In particular, we identify three different streaming strategies that produce traffic patterns ranging from non-ACK-clocked ON-OFF cycles to bulk TCP transfer. We then present an analytical model to study the potential impact of these streaming strategies on the aggregate traffic and make recommendations accordingly.
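The ON-OFF strategy referred to here can be pictured as a pacing loop: download a block of video, then idle while playback drains the buffer. A toy sketch, with made-up block sizes and buffer targets rather than Netflix or YouTube values, and an assumed `fetch_block` callback:

```python
import time

def on_off_download(fetch_block, n_blocks, block_seconds=4.0,
                    buffer_target=30.0):
    """Toy ON-OFF pacing loop: fetch video in blocks while the playback
    buffer is below target (ON period), then stay idle while playback
    drains the buffer (OFF period). `fetch_block(seconds)` is an assumed
    callback that downloads that much video and returns the wall-clock
    transfer time; all parameters are illustrative."""
    buffered = 0.0
    for _ in range(n_blocks):
        transfer_time = fetch_block(block_seconds)
        # Playback consumes video in real time while we download.
        buffered = max(0.0, buffered - transfer_time) + block_seconds
        if buffered >= buffer_target:           # buffer full: go OFF
            idle = buffered - buffer_target / 2
            time.sleep(idle)                    # idle while video plays
            buffered -= idle
```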
Computer Networks | 2011
Stevens Le Blond; Arnaud Legout; Walid Dabbous
Peer-to-peer (P2P) locality has recently raised a lot of interest in the community. Indeed, whereas P2P content distribution enables financial savings for content providers, it dramatically increases the traffic on inter-ISP links. To solve this issue, the idea of keeping a fraction of the P2P traffic local to each ISP was introduced a few years ago. Since then, P2P solutions exploiting locality have been introduced. However, several fundamental issues concerning locality still need to be explored. In particular, how far can we push locality, and what is, at the scale of the Internet, the reduction in traffic that can be achieved with locality? In this paper, we perform extensive experiments in a controlled environment with up to 10,000 BitTorrent clients to evaluate the impact of high locality on inter-ISP link traffic and on peers' download completion time. We introduce two simple mechanisms that make high locality possible in challenging scenarios, and we show that we reduce inter-ISP traffic by up to several orders of magnitude compared to traditional locality without adversely impacting peers' download completion time. In addition, we crawled 214,443 torrents representing 6,113,224 unique peers spread among 9,605 ASes. We show that whereas the torrents we crawled generated 11.6 petabytes of inter-ISP traffic, our locality policy, implemented for all torrents, could have reduced the global inter-ISP traffic by up to 40%.
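The locality policies evaluated here amount to biasing BitTorrent's peer selection toward peers inside the same ISP while keeping a few inter-ISP connections for safety. A minimal sketch of such a bias (the `asn_of` lookup and the cap on remote connections are illustrative placeholders, not the paper's exact mechanisms):

```python
import random

def select_peers(candidates, my_asn, asn_of, want=50, max_remote=5):
    """Locality-biased peer selection: prefer peers inside our own AS
    and keep at most `max_remote` connections that cross ISP boundaries.
    `asn_of(peer)` is an assumed lookup (e.g., from an IP-to-AS map)."""
    local = [p for p in candidates if asn_of(p) == my_asn]
    remote = [p for p in candidates if asn_of(p) != my_asn]
    random.shuffle(local)
    random.shuffle(remote)
    # Fill with local peers first; a handful of remote peers keeps the
    # torrent connected when local piece diversity is insufficient.
    chosen = local[:want]
    if len(chosen) < want:
        chosen += remote[:max_remote]
    return chosen
```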
IEEE International Conference on Computer Communications (INFOCOM) | 2001
Arnaud Legout; Jörg Nonnenmacher; Ernst W. Biersack
Using multicast delivery to multiple receivers reduces the aggregate bandwidth required from the network compared to using unicast delivery to each receiver. However, multicast is not yet widely deployed in the Internet. One reason is the lack of incentive to use multicast delivery. To encourage the use of multicast delivery, we define a new bandwidth-allocation policy, called LogRD, that takes into account the number of downstream receivers. This policy gives more bandwidth to a multicast flow than to a unicast flow sharing the same bottleneck, without, however, starving the unicast flows. The LogRD policy also answers the question of how to treat a multicast flow compared to a unicast flow sharing the same bottleneck. We investigate three bandwidth-allocation policies for multicast flows and evaluate their impact on both receiver satisfaction and fairness using a simple analytical study and a comprehensive set of simulations. The policy that allocates the available bandwidth as a logarithmic function of the number of receivers downstream of the bottleneck achieves the best tradeoff between receiver satisfaction and fairness.
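The logarithmic allocation is simple to write down. A sketch under the assumption that each flow's weight is 1 + ln(n) for n downstream receivers, so a unicast flow keeps weight 1 and is never starved (the paper's exact weighting may differ):

```python
import math

def logrd_allocation(capacity, receivers_per_flow):
    """Split bottleneck capacity across flows with weight 1 + ln(n_i),
    where n_i is the number of receivers downstream of the bottleneck
    for flow i. A unicast flow (n=1) keeps weight 1, so it is never
    starved; a multicast flow gains bandwidth only logarithmically."""
    weights = [1.0 + math.log(n) for n in receivers_per_flow]
    total = sum(weights)
    return [capacity * w / total for w in weights]

# One unicast flow vs. a multicast flow with 100 receivers on a 10 Mb/s link:
print(logrd_allocation(10.0, [1, 100]))  # -> ~[1.5, 8.5] Mb/s
```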
arXiv: Networking and Internet Architecture | 2011
Eitan Altman; Arnaud Legout; Yuedong Xu
This paper studies economic utilities and quality of service (QoS) in a two-sided non-neutral market where Internet service providers (ISPs) charge content providers (CPs) for content delivery. We propose new models that involve a CP, an ISP, end users, and advertisers. The CP may have either a subscription revenue model (charging end users) or an advertisement revenue model (charging advertisers). We formulate the interactions between the ISP and the CP as a noncooperative game for the former and an optimization problem for the latter. Our analysis shows that the revenue model of the CP plays a significant role in a non-neutral Internet. With the subscription model, the ISP and the CP simultaneously obtain better (or worse) utilities and QoS in the presence of the side payment. With the advertisement model, the side payment impedes the CP from investing in its content.
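To see the flavor of this kind of ISP-CP interaction, here is a deliberately toy Stackelberg game, not the paper's model: the ISP leads with a side payment q, the CP responds with a subscription price p, and demand is assumed linear, d(p) = 1 - p:

```python
def cp_best_response(q):
    """CP picks price p to maximize (p - q) * d(p) with toy demand
    d(p) = max(0, 1 - p). Closed form: p = (1 + q) / 2."""
    return (1.0 + q) / 2.0

def isp_revenue(q):
    """ISP revenue from the side payment, anticipating the CP's response."""
    p = cp_best_response(q)
    return q * max(0.0, 1.0 - p)

# The ISP (Stackelberg leader) searches its side payment over a grid.
best_q = max((q / 100.0 for q in range(101)), key=isp_revenue)
p = cp_best_response(best_q)
print(best_q, p, max(0.0, 1.0 - p))  # -> 0.5 0.75 0.25
```

Under these toy assumptions the ISP's optimum is q = 0.5, at which the CP raises its price and demand shrinks, illustrating how a side payment can propagate to end users.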
Internet Measurement Conference | 2011
Stevens Le Blond; Chao Zhang; Arnaud Legout; Keith W. Ross; Walid Dabbous
In this paper, we show how to exploit real-time communication applications to determine the IP address of a targeted user. We focus our study on Skype, although other real-time communication applications may have similar privacy issues. We first design a scheme that inconspicuously calls an identified targeted user to find his IP address, which can be done even if he is behind a NAT. By calling the user periodically, we can then observe his mobility. We show how to scale the scheme to observe the mobility patterns of tens of thousands of users. We also consider the linkability threat, in which the identified user is linked to his Internet usage. We illustrate this threat by combining Skype and BitTorrent to show that it is possible to determine the file-sharing usage of identified users. We devise a scheme based on the identification field of the IP datagrams to verify with high accuracy whether the identified user is participating in specific torrents. We conclude that any Internet user can leverage Skype, and potentially other real-time communication systems, to observe the mobility and file-sharing usage of tens of millions of identified users.
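The verification scheme builds on a well-known property: many hosts fill the 16-bit IP identification field from a single per-host counter, so probe traffic and BitTorrent traffic from the same machine interleave into one increasing IPID sequence. A toy version of that consistency check (thresholds and input shapes are illustrative, not the paper's exact procedure):

```python
def same_counter(skype_ids, bt_ids, max_gap=2000):
    """Toy check of the IP-ID linkability idea: if a host uses one
    global, incrementing IP identification counter, then IPID samples
    taken via Skype probes and via BitTorrent connections from the
    same IP should merge into a single, mostly increasing sequence.
    Inputs are (timestamp, ipid) pairs; the gap threshold is made up."""
    merged = sorted(skype_ids + bt_ids)          # order samples by time
    consistent = 0
    for (_, a), (_, b) in zip(merged, merged[1:]):
        delta = (b - a) % 65536                  # IPID wraps at 2**16
        if 0 < delta < max_gap:
            consistent += 1
    return consistent / max(1, len(merged) - 1)  # fraction consistent

# e.g. same_counter([(0.0, 100), (2.0, 180)], [(1.0, 140), (3.0, 230)]) -> 1.0
```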
Proceedings of the 2012 ACM CoNEXT Student Workshop | 2012
Maksym Gabielkov; Arnaud Legout
In this work, we collected the entire Twitter social graph, consisting of 537 million Twitter accounts connected by 23.95 billion links, and performed a preliminary analysis of the collected data. To collect the social graph, we implemented a distributed crawler on the PlanetLab infrastructure that gathered all the information in 4 months. Our preliminary analysis already revealed some interesting properties. Whereas there are 537 million Twitter accounts, only 268 million have sent at least one tweet and no more than 54 million have been recently active. In addition, 40% of the accounts are not followed by anybody and 25% do not follow anybody. Finally, we found that Twitter policies, as well as social conventions (such as the follow-back convention), have a huge impact on the structure of the Twitter social graph.
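At its core such a crawl is a breadth-first traversal of the follow graph, sharded across machines to parallelize rate limits. A single-machine sketch of the traversal logic, where `fetch_followers` stands in for rate-limited Twitter API calls (an assumed callback, not a real endpoint wrapper):

```python
from collections import deque

def crawl(seeds, fetch_followers, limit=1_000_000):
    """BFS over the follower graph. `fetch_followers(uid)` is an
    assumed callback returning the accounts following `uid`. In the
    real crawl this loop is distributed over PlanetLab nodes, each
    with its own API credentials, to work around per-node rate limits."""
    seen = set(seeds)
    frontier = deque(seeds)
    edges = []
    while frontier and len(seen) < limit:
        uid = frontier.popleft()
        for follower in fetch_followers(uid):
            edges.append((follower, uid))        # directed follow link
            if follower not in seen:
                seen.add(follower)
                frontier.append(follower)
    return seen, edges
```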
Measurement and Modeling of Computer Systems (SIGMETRICS) | 2014
Maksym Gabielkov; Ashwin Rao; Arnaud Legout
Twitter is one of the largest social networks using exclusively directed links among accounts. This makes the Twitter social graph much closer to the social graph supporting real-life communications than, for instance, Facebook's. Therefore, understanding the structure of the Twitter social graph is interesting not only for computer scientists but also for researchers in other fields, such as sociologists. However, little is known about how information propagation in Twitter is constrained by its inner structure. In this paper, we present an in-depth study of the macroscopic structure of the Twitter social graph, unveiling the highways on which tweets propagate, the specific user activity associated with each component of this macroscopic structure, and the evolution of this macroscopic structure over the past 6 years. For this study, we crawled Twitter to retrieve all accounts and all social relationships (follow links) among accounts; the crawl completed in July 2012 with 505 million accounts interconnected by 23 billion links. We then present a methodology to unveil the macroscopic structure of the Twitter social graph. This macroscopic structure consists of 8 components defined by their connectivity characteristics. Each component groups users with a specific usage of Twitter. For instance, we identified components gathering spammers or celebrities. Finally, we present a method to approximate the macroscopic structure of the Twitter social graph in the past, validate this method using old datasets, and discuss the evolution of the macroscopic structure during the past 6 years.
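The classic starting point for this kind of macroscopic analysis is the bow-tie decomposition around the largest strongly connected component; the paper refines this into 8 components. A sketch of the bow-tie step using networkx:

```python
import networkx as nx

def bowtie_components(g: nx.DiGraph):
    """Bow-tie-style decomposition of a directed graph. LSCC: largest
    strongly connected component; IN: nodes that can reach the LSCC;
    OUT: nodes reachable from the LSCC; OTHER: tendrils, tubes, and
    disconnected pieces. A rough approximation of the paper's finer
    8-component structure."""
    lscc = max(nx.strongly_connected_components(g), key=len)
    anchor = next(iter(lscc))                 # any LSCC node works
    out_side = nx.descendants(g, anchor) | {anchor}
    in_side = nx.ancestors(g, anchor) | {anchor}
    return {
        "LSCC": lscc,
        "IN": in_side - lscc,
        "OUT": out_side - lscc,
        "OTHER": set(g) - (in_side | out_side),
    }
```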
IEEE Network | 2002
Arnaud Legout; Ernst W. Biersack
Today, the dominant paradigm for congestion control in the Internet is based on the notion of TCP friendliness. To be TCP-friendly, a source must behave in such a way as to achieve a bandwidth that is similar to the bandwidth obtained by a TCP flow that would observe the same round-trip time (RTT) and the same loss rate. However, with the success of the Internet comes the deployment of an increasing number of applications that do not use TCP as a transport protocol. These applications can often improve their own performance by not being TCP-friendly, which severely penalizes TCP flows. Designing new applications to be TCP-friendly is often a difficult task. The idea of the fair queuing (FQ) paradigm as a means to improve congestion control was first introduced by Keshav (1991). While Keshav made a fundamental step toward a new paradigm for the design of congestion control protocols, he did not formalize his results so that his findings could be extended to the design of new congestion control protocols. We make this step and formally define the FQ paradigm as a paradigm for the design of new end-to-end congestion control protocols. This paradigm relies on per-flow FQ scheduling with longest queue drop buffer management in each router. We assume only selfish and noncollaborative end users. Our main contribution is the formal statement of the congestion control problem as a whole, which enables us to demonstrate the validity of the FQ paradigm. We also demonstrate that the FQ paradigm does not adversely impact the throughput of TCP flows and explain how to apply the FQ paradigm to the design of new congestion control protocols. As a pragmatic validation of the FQ paradigm, we discuss a new multicast congestion control protocol called packet pair receiver-driven layered multicast (PLM).
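The buffer-management half of the paradigm, longest queue drop, is easy to state: when the shared buffer is full, drop a packet from the flow currently occupying the most buffer space. A minimal sketch with per-flow queues, counting in packets for simplicity:

```python
from collections import defaultdict, deque

class LQDBuffer:
    """Shared router buffer with per-flow queues and Longest Queue
    Drop: on overflow, the flow holding the most buffered packets
    loses one, so greedy (non-TCP-friendly) flows mostly hurt
    themselves rather than competing flows."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queues = defaultdict(deque)      # one FIFO queue per flow
        self.occupancy = 0

    def enqueue(self, flow_id, packet):
        if self.occupancy >= self.capacity:
            longest = max(self.queues, key=lambda f: len(self.queues[f]))
            self.queues[longest].popleft()    # drop from the longest queue
            self.occupancy -= 1
        self.queues[flow_id].append(packet)
        self.occupancy += 1
```

Serving these per-flow queues round-robin gives the per-flow FQ scheduling the paradigm assumes; the sketch shows only the drop policy.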
Proceedings of the 2012 ACM CoNEXT Student Workshop | 2012
Ashwin Rao; Justine Sherry; Arnaud Legout; Arvind Krishnamurthy; Walid Dabbous; David R. Choffnes
Mobile networks are among the most popular, fastest growing, and least understood systems in today's Internet ecosystem. Despite a large collection of privacy, policy, and performance issues in mobile networks, users and researchers are faced with few options to characterize and address them. In this poster we present Meddle, a framework aimed at enhancing transparency in mobile networks and providing a platform that enables users (and researchers) to control mobile traffic.
Collaboration
Dive into Arnaud Legout's collaborations.
Commonwealth Scientific and Industrial Research Organisation
French Institute for Research in Computer Science and Automation