Publications


Featured research published by Guillaume Urvoy-Keller.


Passive and Active Network Measurement | 2004

Dissecting BitTorrent: Five Months in a Torrent’s Lifetime

Mikel Izal; Guillaume Urvoy-Keller; Ernst W. Biersack; Pascal Felber; Anwar Al Hamra; Luis Garcés-Erice

Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, large server farms or mirroring are used, both of which are expensive. An inexpensive alternative is peer-to-peer based replication, where users who retrieve the file act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large content to a large set of hosts. Our analysis of BitTorrent is based on measurements collected over a five-month period involving thousands of peers. We assess the performance of the algorithms used in BitTorrent through several metrics. Our conclusions indicate that BitTorrent is a realistic and inexpensive alternative to classical server-based content distribution.
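
To see why peer-assisted replication scales, it helps to compare the minimum time to distribute a file of size F to N peers under a client-server model versus a peer-to-peer model. The sketch below uses the standard textbook lower-bound formulas, not anything from the paper itself; u_s, the per-peer upload rates, and d_min are assumed names for the server upload rate, peer upload rates, and slowest peer download rate.

```python
# Hedged sketch: lower bounds on file distribution time
# (standard textbook formulas, not from the paper).

def client_server_time(F, u_s, d_min, N):
    """Server must push N full copies; the slowest client must receive one."""
    return max(N * F / u_s, F / d_min)

def p2p_time(F, u_s, d_min, uploads):
    """Peers re-upload what they download, so total upload capacity
    grows with the swarm: u_s + sum(u_i)."""
    N = len(uploads)
    return max(F / u_s, F / d_min, N * F / (u_s + sum(uploads)))

# Example: 1 GB file, 1000 peers, server at 100 Mbit/s,
# each peer uploading at 1 Mbit/s and downloading at 10 Mbit/s.
F, u_s, d = 8e9, 100e6, 10e6          # bits, bits/s
peers = [1e6] * 1000
print(client_server_time(F, u_s, d, len(peers)) / 3600, "h")  # ~22 h
print(p2p_time(F, u_s, d, peers) / 3600, "h")                 # ~2 h
```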


Internet Measurement Conference | 2006

Rarest first and choke algorithms are enough

Arnaud Legout; Guillaume Urvoy-Keller; Pietro Michiardi

The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly admitted that BitTorrent performs well, recent studies have proposed replacing the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, in our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit-level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvement for efficient peer-to-peer file replication protocols.
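
As an illustration of the piece-selection strategy the paper evaluates, here is a minimal rarest-first sketch: the leecher counts how many of its neighbors advertise each piece and requests, among the pieces it is still missing, one with the lowest count. All names are illustrative, not code from a real BitTorrent client.

```python
import random
from collections import Counter

def rarest_first(missing, neighbor_bitfields):
    """Pick the rarest piece among `missing`, given each neighbor's
    set of available piece indices. Ties are broken at random to
    avoid synchronized choices across peers."""
    counts = Counter()
    for pieces in neighbor_bitfields:
        counts.update(pieces & missing)   # only count pieces we still need
    available = [p for p in missing if counts[p] > 0]
    if not available:
        return None                       # no neighbor has anything we need
    rarest = min(counts[p] for p in available)
    return random.choice([p for p in available if counts[p] == rarest])

# Example: we miss pieces {0, 1, 2}; piece 2 is held by one neighbor only.
print(rarest_first({0, 1, 2}, [{0, 1}, {0, 2}, {1}]))  # -> 2
```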


European Conference on Parallel Processing | 2003

Hierarchical peer-to-peer systems

Luis Garcés-Erice; Ernst W. Biersack; Pascal Felber; Keith W. Ross; Guillaume Urvoy-Keller

Structured peer-to-peer (P2P) lookup services organize peers into a flat overlay network and offer distributed hash table (DHT) functionality. Data is associated with keys and each peer is responsible for a subset of the keys. In hierarchical DHTs, peers are organized into groups, and each group has its autonomous intra-group overlay network and lookup service. Groups are organized in a top-level overlay network. To find a peer that is responsible for a key, the top-level overlay first determines the group responsible for the key; the responsible group then uses its intra-group overlay to determine the specific peer that is responsible for the key. We provide a general framework and scalable hierarchical overlay management. We study a two-tier hierarchy using Chord for the top level. Our analysis shows that by using the most reliable peers in the top level, the hierarchical design significantly reduces the expected number of hops.
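
The two-step lookup the abstract describes can be sketched as follows; `top_level` and the group objects are hypothetical stand-ins for the Chord top-level overlay and the intra-group overlays, not the paper's implementation.

```python
# Hedged sketch of a two-tier hierarchical DHT lookup
# (illustrative data structures only).

class Group:
    def __init__(self, name, intra_overlay):
        self.name = name
        self.intra_overlay = intra_overlay   # maps key -> responsible peer

    def lookup(self, key):
        return self.intra_overlay[key]

def hierarchical_lookup(top_level, key):
    """Step 1: the top-level overlay (e.g. Chord over group superpeers)
    resolves the key to a group. Step 2: that group's own overlay
    resolves the key to the responsible peer."""
    group = top_level(key)        # O(log #groups) hops in Chord
    return group.lookup(key)      # intra-group hops, often cheaper

# Example with two groups and a trivial top-level hash rule.
g0 = Group("g0", {42: "peer-a"})
g1 = Group("g1", {7: "peer-b"})
top = lambda key: g0 if key % 2 == 0 else g1
print(hierarchical_lookup(top, 42))   # -> peer-a
```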


Measurement and Modeling of Computer Systems | 2003

Analysis of LAS scheduling for job size distributions with high variance

Idris A. Rai; Guillaume Urvoy-Keller; Ernst W. Biersack

Recent studies of Internet traffic have shown that flow size distributions often exhibit a high variability property in the sense that most of the flows are short and more than half of the total load is constituted by a small percentage of the largest flows. In the light of this observation, it is interesting to revisit scheduling policies that are known to favor small jobs in order to quantify the benefit for small and the penalty for large jobs. Among all scheduling policies that do not require knowledge of job size, the least attained service (LAS) scheduling policy is known to favor small jobs the most. We investigate the M/G/1/LAS queue for both load ρ < 1 and ρ = 1. Our analysis shows that for job size distributions with a high variability property, LAS favors short jobs with a negligible penalty to the few largest jobs, and that LAS achieves a mean response time over all jobs that is close to the mean response time achieved by SRPT. Finally, we implement LAS in the ns-2 network simulator to study its performance benefits for TCP flows. When LAS is used to schedule packets over the bottleneck link, more than 99% of the shortest flows experience smaller mean response times under LAS than under FIFO and only the largest jobs observe a negligible increase in response time. The benefit of using LAS as compared to FIFO is most pronounced at high load.
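
A minimal discrete sketch of LAS (also known as foreground-background) scheduling: at each service quantum, serve the job that has received the least service so far. The function names and the quantum granularity are illustrative, not the paper's analytical model.

```python
import heapq

def las_schedule(job_sizes, quantum=1.0):
    """Simulate LAS on a batch of jobs present at time 0 and return
    each job's completion time. The heap orders jobs by attained
    service, so the least-served job is always served next."""
    heap = [(0.0, i) for i in range(len(job_sizes))]   # (attained, job id)
    heapq.heapify(heap)
    remaining = list(job_sizes)
    t, done = 0.0, {}
    while heap:
        attained, i = heapq.heappop(heap)
        served = min(quantum, remaining[i])
        remaining[i] -= served
        t += served
        if remaining[i] <= 0:
            done[i] = t                                # job i finishes
        else:
            heapq.heappush(heap, (attained + served, i))
    return done

# One small and one large job: the small job finishes almost immediately,
# while the large job pays only its own service time plus the small job.
print(las_schedule([2, 100]))   # -> {0: 3.0, 1: 102.0}
```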


IEEE Network | 2005

Size-based scheduling to improve the performance of short TCP flows

Idris A. Rai; Ernst W. Biersack; Guillaume Urvoy-Keller

The Internet today carries different types of traffic that have different service requirements. A large fraction of the traffic is either Web traffic requiring low response time or peer-to-peer traffic requiring high throughput. Meeting both performance requirements in a network where routers use droptail or RED for buffer management and FIFO as the service policy is an elusive goal. It is therefore worthwhile to investigate alternative scheduling and buffer management policies for bottleneck links. We propose to use the least attained service (LAS) policy to improve the response time of Web traffic. Under LAS, the next packet to be served is the one belonging to the flow that has received the least amount of service. When the buffer is full, the packet dropped belongs to the flow that has received the most service. We show that under LAS, as compared to FIFO with droptail, the transmission time and loss rate for short TCP flows are significantly reduced, with only a negligible increase in transmission time for the largest flows. The improvement seen by short TCP flows under LAS is mainly due to the way LAS interacts with the TCP protocol in the slow start phase, which results in shorter round-trip times and zero loss rates for short flows.
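
The drop policy described above, dropping from the flow that has received the most service when the buffer fills, can be sketched like this; the flow bookkeeping and field names are illustrative, not a real router queue implementation.

```python
# Hedged sketch of the LAS drop policy for a full router buffer.

def enqueue(buffer, served_bytes, pkt, capacity):
    """On overflow, drop a packet of the flow that has already
    received the most service, which shields short (young) flows."""
    if len(buffer) >= capacity:
        victim_flow = max(served_bytes, key=served_bytes.get)
        for i, p in enumerate(buffer):
            if p["flow"] == victim_flow:
                del buffer[i]                 # drop one packet of that flow
                break
    buffer.append(pkt)

buf, served = [], {"short": 1_000, "long": 900_000}
for flow in ["long", "long", "short"]:
    enqueue(buf, served, {"flow": flow}, capacity=2)
print([p["flow"] for p in buf])   # -> ['long', 'short']
```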


International Conference on Distributed Computing Systems | 2004

Data indexing in peer-to-peer DHT networks

Luis Garcés-Erice; Pascal Felber; Ernst W. Biersack; Guillaume Urvoy-Keller; Keith W. Ross

Peer-to-peer distributed hash table (DHT) systems make it simple to discover specific data when their complete identifiers - or keys - are known in advance. In practice, however, users looking up resources stored in peer-to-peer systems often have only partial information for identifying these resources. We describe techniques for indexing data stored in peer-to-peer DHT networks, and discovering the resources that match a given user query. Our system creates multiple indexes, organized hierarchically, which permit users to locate data even using scarce information, although at the price of a higher lookup cost. The data itself is stored on only one (or few) of the nodes. Experimental evaluation demonstrates the effectiveness of our indexing techniques on a distributed peer-to-peer bibliographic database with realistic user query workloads.
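
One common way to realize such indexes on top of a DHT is to publish, for each attribute of an object, an index entry under the hash of that attribute; a query on partial information then costs one extra lookup per index level. The sketch below illustrates that idea with a plain dictionary standing in for the DHT; it is an assumption-laden illustration, not the paper's system.

```python
import hashlib

dht = {}   # stand-in for the DHT: key -> set of values

def dht_key(s):
    return hashlib.sha1(s.encode()).hexdigest()

def publish(attrs, full_key):
    """Store an index entry under each partial attribute, pointing to
    the full key; the data itself lives only at the full key's node."""
    for a in attrs:
        dht.setdefault(dht_key(a), set()).add(full_key)

def query(attr):
    """Resolve a partial attribute to candidate full keys (one extra
    DHT lookup), then the data can be fetched from those keys."""
    return dht.get(dht_key(attr), set())

publish(["author:urvoy-keller", "year:2004"], "paper:dissecting-bittorrent")
print(query("author:urvoy-keller"))   # -> {'paper:dissecting-bittorrent'}
```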


Parallel Processing Letters | 2003

Hierarchical peer-to-peer systems

Luis Garcés-Erice; Ernst W. Biersack; Keith W. Ross; Pascal Felber; Guillaume Urvoy-Keller

Structured peer-to-peer (P2P) lookup services organize peers into a flat overlay network and offer distributed hash table (DHT) functionality. Data is associated with keys and each peer is responsible for a subset of the keys. In hierarchical DHTs, peers are organized into groups, and each group has its autonomous intra-group overlay network and lookup service. Groups are organized in a top-level overlay network. To find a peer that is responsible for a key, the top-level overlay first determines the group responsible for the key; the responsible group then uses its intra-group overlay to determine the specific peer that is responsible for the key. We provide a general framework for hierarchical DHTs with scalable overlay management. We specifically study a two-tier hierarchy that uses Chord for the top level. Our analysis shows that by using the most reliable peers in the top level, the hierarchical design significantly reduces the expected number of hops. We also present a method to construct hierarchical DHTs that map well to the Internet topology and achieve short intra-group communication delay. The results demonstrate the feasibility of locality-based peer groups, which allow P2P systems to take full advantage of the hierarchical design.


Internet Measurement Conference | 2009

Challenging statistical classification for operational usage: the ADSL case

Marcin Pietrzyk; Jean-Laurent Costeux; Guillaume Urvoy-Keller; Taoufik En-Najjary

Accurate identification of network traffic according to application type is a key issue for most companies, including ISPs. For example, some companies might want to ban p2p traffic from their network while some ISPs might want to offer additional services based on the application. To classify applications on the fly, most companies rely on deep packet inspection (DPI) solutions. While DPI tools can be accurate, they require constant updates of their signature databases. Recently, several statistical traffic classification methods have been proposed. In this paper, we investigate the use of these methods for an ADSL provider managing many Points of Presence (PoPs). We demonstrate that statistical methods can offer performance similar to that of DPI tools when the classifier is trained for a specific site. They can also complement existing DPI techniques to mine traffic that the DPI solution failed to identify. However, we also demonstrate that, even if a statistical classifier is very accurate on one site, the resulting model cannot be applied directly to other locations. We show that this problem stems from the statistical classifier learning site-specific information.
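
A minimal sketch of the kind of statistical classifier the paper evaluates: train a supervised model on per-flow features (for example, the sizes of the first few packets) labeled by a DPI ground truth, then predict application classes on new flows. The feature choice, labels, and model are assumptions for illustration, not the paper's exact setup.

```python
# Hedged sketch of statistical flow classification (scikit-learn),
# with toy data in place of DPI-labeled ADSL traces.
from sklearn.ensemble import RandomForestClassifier

# Per-flow features: sizes of the first 3 data packets (bytes).
X_train = [[64, 1500, 1500],    # bulk transfer -> "p2p"
           [300, 400, 120],     # short request/response -> "web"
           [80, 1480, 1500],
           [250, 380, 100]]
y_train = ["p2p", "web", "p2p", "web"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# A new flow, possibly from another PoP: the paper's point is that
# such a model may not transfer across sites without retraining.
print(clf.predict([[70, 1500, 1490]]))   # -> ['p2p']
```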


Lecture Notes in Computer Science | 2003

Topology-Centric Look-Up Service

Luis Garcés-Erice; Keith W. Ross; Ernst W. Biersack; Pascal Felber; Guillaume Urvoy-Keller

Topological considerations are of paramount importance in the design of a P2P lookup service. We present TOPLUS, a lookup service for structured peer-to-peer networks that is based on the hierarchical grouping of peers according to network IP prefixes. TOPLUS is fully distributed and symmetric, in the sense that all nodes have the same role. Packets are routed to their destination along a path that mimics the router-level shortest-path, thereby providing a small “stretch”. Experimental evaluation confirms that a lookup in TOPLUS takes time comparable to that of IP routing.
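
The prefix-based grouping can be illustrated with a longest-prefix-match step: a peer forwards a lookup to the known peer whose IP prefix shares the longest match with the target. The sketch below, built on Python's standard ipaddress module, illustrates that single routing step under assumed names; it is not TOPLUS itself.

```python
import ipaddress

def best_next_hop(target_ip, known_prefixes):
    """Return the known prefix with the longest match against the
    target: a rough stand-in for one TOPLUS-style routing step."""
    target = ipaddress.ip_address(target_ip)
    matches = [p for p in known_prefixes
               if target in ipaddress.ip_network(p)]
    return max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen,
               default=None)

# The lookup moves toward the group whose prefix best covers the
# target, mimicking router-level shortest paths (small stretch).
print(best_next_hop("192.0.2.77",
                    ["192.0.0.0/8", "192.0.2.0/24", "10.0.0.0/8"]))
# -> 192.0.2.0/24
```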


Passive and Active Network Measurement | 2007

Performance limitations of ADSL users: a case study

Matti Siekkinen; Denis Collange; Guillaume Urvoy-Keller; Ernst W. Biersack

We report results from the analysis of a 24-hour packet trace containing TCP traffic of approximately 1300 residential ADSL clients. Some of our observations confirm earlier studies: the major fraction of the total traffic originates from P2P applications and small fractions of connections and clients are responsible for the vast majority of the traffic. However, our main contribution is a throughput performance analysis of the clients. We observe surprisingly low utilization of upload and download capacity for most of the clients. Furthermore, by using our TCP root cause analysis tool, we obtain a striking result: in over 90% of the cases, the low utilization is mostly due to the (P2P) applications clients use, which limit the transmission rate, and not due to network congestion, for instance. P2P applications typically impose upload rate limits to avoid uplink saturation, which hurts download performance. Our analysis shows that these rate limits are very low and, as a consequence, the aggregate download rates for these applications are low.
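
The utilization analysis boils down to comparing a client's achieved throughput against its access-link capacity over the trace. The minimal sketch below uses hypothetical field names and a hypothetical threshold, not the paper's tool.

```python
# Hedged sketch: flag clients whose achieved rate is far below their
# access capacity, suggesting an application-imposed limit rather
# than congestion. All field names and the 20% threshold are assumed.

def utilization(bytes_transferred, duration_s, capacity_bps):
    return (8 * bytes_transferred / duration_s) / capacity_bps

clients = [
    {"id": "a", "bytes": 5e8, "secs": 86400, "up_capacity": 512e3},
    {"id": "b", "bytes": 4e9, "secs": 86400, "up_capacity": 512e3},
]
for c in clients:
    u = utilization(c["bytes"], c["secs"], c["up_capacity"])
    tag = "likely application-limited" if u < 0.2 else "well utilized"
    print(c["id"], f"{u:.0%}", tag)
# a 9% likely application-limited
# b 72% well utilized
```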

Collaboration


Dive into Guillaume Urvoy-Keller's collaborations.

Top Co-Authors

Myriana Rifai
Centre national de la recherche scientifique

Pascal Felber
University of Neuchâtel

Dino Lopez Pacheco
Centre national de la recherche scientifique

Lucile Sassatelli
Centre national de la recherche scientifique