Pietro Michiardi
Institut Eurécom
Publications
Featured research published by Pietro Michiardi.
communications and multimedia security | 2002
Pietro Michiardi; Refik Molva
Countermeasures for node misbehavior and selfishness are mandatory requirements in MANET. Selfishness that causes lack of node activity cannot be solved by classical security means that aim at verifying the correctness and integrity of an operation. We suggest a generic mechanism based on reputation to enforce cooperation among the nodes of a MANET to prevent selfish behavior. Each network entity keeps track of other entities’ collaboration using a technique called reputation. The reputation is calculated based on various types of information on each entity’s rate of collaboration. Since there is no incentive for a node to maliciously spread negative information about other nodes, simple denial of service attacks using the collaboration technique itself are prevented. The generic mechanism can be smoothly extended to basic network functions with little impact on existing protocols.
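The reputation mechanism described in the abstract can be sketched as a toy model. This is an illustrative simplification, not the paper's exact formulation: the class name, the exponential-moving-average weights, and the selfishness threshold are all invented for the example. The key property it mirrors is that only positive second-hand information is accepted, which removes the incentive to slander other nodes.

```python
# Illustrative sketch of a reputation table for a MANET node.
# All parameters (weights, thresholds) are invented for this example.
from collections import defaultdict

class ReputationTable:
    """Tracks observed collaboration per neighbor; only positive
    second-hand reports are accepted, so nodes cannot slander others."""

    def __init__(self, direct_weight=0.7):
        self.direct_weight = direct_weight        # weight of own observations
        self.direct = defaultdict(lambda: 0.5)    # own observations, in [0, 1]
        self.indirect = defaultdict(lambda: 0.5)  # reported observations

    def observe(self, node, cooperated):
        # Exponential moving average of directly observed behavior.
        self.direct[node] = 0.9 * self.direct[node] + 0.1 * (1.0 if cooperated else 0.0)

    def report(self, node, value):
        # Accept only improving (positive) information, preventing
        # denial-of-service attacks based on spreading bad ratings.
        if value > self.indirect[node]:
            self.indirect[node] = 0.9 * self.indirect[node] + 0.1 * value

    def reputation(self, node):
        w = self.direct_weight
        return w * self.direct[node] + (1 - w) * self.indirect[node]

    def is_selfish(self, node, threshold=0.3):
        return self.reputation(node) < threshold
```

A node would refuse to forward packets for neighbors flagged by `is_selfish`, which is how reputation turns into a cooperation incentive.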
internet measurement conference | 2006
Arnaud Legout; Guillaume Urvoy-Keller; Pietro Michiardi
The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly admitted that BitTorrent performs well, recent studies have proposed the replacement of the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, in our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvements for efficient peer-to-peer file replication protocols.
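The rarest-first policy the abstract defends is simple to state in code. The sketch below is an assumed simplification of the BitTorrent policy (real clients add tie-breaking and end-game rules): among the pieces a peer still needs, pick one of those with the fewest copies in its neighborhood.

```python
# Minimal sketch of BitTorrent's rarest-first piece selection.
import random
from collections import Counter

def rarest_first(needed, neighbor_bitfields, rng=random):
    """needed: set of piece indices this peer is missing.
    neighbor_bitfields: one set of held pieces per neighbor.
    Returns a rarest needed piece available in the neighborhood, or None."""
    counts = Counter()
    for bitfield in neighbor_bitfields:
        for piece in bitfield & needed:
            counts[piece] += 1
    if not counts:
        return None  # no neighbor holds any piece we need
    rarest = min(counts.values())
    candidates = [p for p, c in counts.items() if c == rarest]
    return rng.choice(candidates)  # break ties randomly
```

Requesting locally rare pieces first keeps piece replication even across the swarm, which is exactly the diversity property the paper measures.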
ad hoc networks | 2005
Pietro Michiardi; Refik Molva
This paper focuses on the formal assessment of the properties of cooperation enforcement mechanisms used to detect and prevent selfish behavior of nodes forming a mobile ad hoc network. In the first part, we demonstrate the requirement for a cooperation enforcement mechanism using cooperative game theory, which allows us to determine a lower bound on the size of coalitions of cooperating nodes. In the second part, using non-cooperative game theory, we compare our cooperation enforcement mechanism CORE to other popular mechanisms. Under the hypothesis of perfect monitoring of node behavior, CORE appears to be equivalent to a wide range of history-based strategies like tit-for-tat. Further, under a more realistic assumption that accounts for imperfect monitoring due to communication errors, the non-cooperative model demonstrates the superiority of CORE over other history-based schemes.
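The effect of imperfect monitoring can be illustrated with a toy repeated forwarding game. This is an invented illustration, not the paper's model: with observation errors, strict tit-for-tat locks two cooperative nodes into retaliation, while a strategy that occasionally forgives, closer in spirit to a reputation-threshold scheme, sustains cooperation.

```python
# Toy repeated game with noisy observation of the opponent's move.
import random

def play(make_a, make_b, rounds=2000, err=0.05, seed=7):
    """Returns the fraction of rounds in which both sides cooperated.
    Each observed move is misread with probability err."""
    rng = random.Random(seed)
    strat_a, strat_b = make_a(rng), make_b(rng)
    obs_a = obs_b = True  # each side's (possibly wrong) view of the other
    coop = 0
    for _ in range(rounds):
        a, b = strat_a(obs_b), strat_b(obs_a)
        coop += a and b
        obs_a = a if rng.random() > err else not a  # b misreads a
        obs_b = b if rng.random() > err else not b  # a misreads b
    return coop / rounds

def tit_for_tat(rng):
    return lambda observed: observed  # mirror the observed move

def generous_tft(rng):
    return lambda observed: observed or rng.random() < 0.2  # forgive 20%
```

Without errors both strategies cooperate forever; with a 5% observation error rate, the forgiving strategy maintains a visibly higher cooperation level, which is the qualitative point the paper formalizes.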
workshop on online social networks | 2008
Chi-Anh La; Pietro Michiardi
In this work we present a measurement study of user mobility in Second Life. We first discuss different techniques to collect user traces and then focus on results obtained using a crawler that we built. Tempted by the question whether our methodology could provide similar results to those obtained in real-world experiments, we study the statistical distribution of user contacts and show that user mobility in Second Life presents similar traits to those of real humans. We further push our analysis to radio networks that emerge from user interaction and show that they are highly clustered. Lastly, we focus on the spatial properties of user movements and observe that users in Second Life revolve around several points of interest, in general traveling short distances. Using maximum likelihood estimation, we show that our empirical data are best fit by power-law distributions with cutoff, indicating that contact time distributions in a virtual environment have very similar characteristics to those observed in real-world experiments.
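Maximum likelihood fitting of a heavy-tailed distribution can be sketched as follows. The closed-form estimator below is for a pure continuous power law; fitting the power law *with cutoff* used in the paper requires numerical optimization and is omitted from this sketch.

```python
# MLE for the exponent of a continuous power law p(x) ~ x^(-alpha), x >= xmin.
import math
import random

def powerlaw_alpha_mle(samples, xmin):
    """Closed-form MLE: alpha = 1 + n / sum(ln(x_i / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

def sample_powerlaw(alpha, xmin, n, seed=0):
    """Draw n samples via inverse-CDF sampling: x = xmin * u^(-1/(alpha-1))."""
    rng = random.Random(seed)
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]
```

Applied to synthetic data with a known exponent, the estimator recovers it closely, which is the sanity check one would run before fitting empirical contact-time traces.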
conference on emerging network experiment and technology | 2008
Nikolaos Laoutaris; Damiano Carra; Pietro Michiardi
Motivated by emerging cooperative P2P applications, we study new uplink allocation algorithms for substituting the rate-based choke/unchoke algorithm of BitTorrent, which was developed for non-cooperative environments. Our goal is to shorten the download times by improving the uplink utilization of nodes. We develop a new family of uplink allocation algorithms, which we call BitMax, to stress the fact that they allocate to each unchoked node the maximum rate it can sustain, instead of a 1/(k + 1) equal share as done in the existing BitTorrent. BitMax computes in each interval the number of nodes to be unchoked, and the corresponding allocations, and thus does not require any empirically preset parameters like k. We demonstrate experimentally that BitMax can significantly reduce download times in a typical reference scenario involving mostly ADSL nodes. We also consider scenarios involving network bottlenecks caused by filtering of P2P traffic at ISP peering points and show that BitMax retains its gains in these cases as well.
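The core allocation idea can be sketched in a few lines. This is a simplified reading of BitMax, not its exact algorithm: instead of unchoking a fixed k peers and splitting the uplink equally, serve peers at the maximum rate each can sustain, fastest receivers first, until the uplink is exhausted.

```python
def bitmax_allocate(uplink_capacity, demands):
    """Simplified BitMax-style allocation.
    demands: {peer: max download rate that peer can sustain}.
    Returns {peer: allocated rate}; the number of unchoked peers falls
    out of the capacity, so no preset parameter k is needed.
    In this simplification the last unchoked peer may receive only the
    residual capacity rather than its full sustainable rate."""
    alloc = {}
    remaining = uplink_capacity
    # Serve the fastest receivers first to maximize uplink utilization.
    for peer, rate in sorted(demands.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        give = min(rate, remaining)
        alloc[peer] = give
        remaining -= give
    return alloc
```

Compare with the classic scheme, which would give each of k + 1 unchoked peers an equal share regardless of what they can actually absorb.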
international conference on peer-to-peer computing | 2011
Rajesh Sharma; Anwitaman Datta; Matteo Dell'Amico; Pietro Michiardi
Friend-to-friend networks, i.e. peer-to-peer networks where data are exchanged and stored solely through nodes owned by trusted users, can guarantee dependability, privacy and uncensorability by exploiting social trust. However, the limitation of storing data only on friends can come to the detriment of data availability: if no friends are online, then data stored in the system will not be accessible. In this work, we explore the tradeoffs between redundancy (i.e., how many copies of data are stored on friends), data placement (the choice of which friend nodes to store data on) and data availability (the probability of finding data online). We show that the problem of obtaining maximal availability while minimizing redundancy is NP-complete; in addition, we perform an exploratory study on data placement strategies, and we investigate their performance in terms of redundancy needed and availability obtained. By performing a trace-based evaluation, we show that nodes with as few as 10 friends can already obtain good availability levels.
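Since the paper shows that maximizing availability at minimal redundancy is NP-complete, practical placement must be heuristic. The sketch below is an assumed greedy max-coverage strategy, not the paper's exact policy: repeatedly store a replica on the friend whose online periods cover the most still-uncovered time slots.

```python
def greedy_place(online_traces, num_slots, replicas):
    """Greedy max-coverage placement heuristic (an illustration; the
    paper evaluates several strategies).
    online_traces: {friend: set of time slots in which that friend is online}.
    Returns (chosen friends, fraction of slots with at least one replica online)."""
    chosen, covered = [], set()
    for _ in range(replicas):
        best = max((f for f in online_traces if f not in chosen),
                   key=lambda f: len(online_traces[f] - covered),
                   default=None)
        if best is None or not (online_traces[best] - covered):
            break  # no friend adds new coverage
        chosen.append(best)
        covered |= online_traces[best]
    return chosen, len(covered) / num_slots
```

With availability traces for each friend, this is the kind of procedure one would run to decide where the replicas go before measuring the redundancy/availability trade-off.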
international conference on peer-to-peer computing | 2012
Thomas Mager; Ernst W. Biersack; Pietro Michiardi
Wuala is a popular online backup and file sharing system that has been successfully operated for several years. Very little is known about the design and implementation of Wuala. We capture the network traffic exchanged between the machines participating in Wuala to reverse engineer the design and operation of Wuala. When Wuala was launched, it used a clever combination of centralized storage in data centers for long-term backup with peer-assisted file caching of frequently downloaded files. Large files are broken up into transmission blocks and additional transmission blocks are generated using a classical redundancy coding scheme. Multiple transmission blocks are sent in parallel to different machines and reliability is assured via a simple Automatic Repeat Request protocol on top of UDP. Recently, however, Wuala has adopted a pure client/server based architecture. Our findings and the underlying reasons are substantiated by an interview with a co-founder of Wuala. The main reasons are lower resource usage on the client side, which is important in the case of mobile terminals, a much simpler software architecture, and a drastic reduction in the cost of data transfers originating at the data center.
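The block-plus-redundancy pipeline described above can be illustrated with a toy code. This is a deliberate simplification: Wuala uses a classical erasure code, whereas the single XOR parity block below tolerates the loss of only one transmission block, which is enough to show why redundant blocks allow parallel transfers to unreliable machines.

```python
# Toy stand-in for redundancy coding of transmission blocks.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal-size blocks (zero-padded) plus one XOR
    parity block. A real erasure code would generate several parity
    blocks and tolerate several losses."""
    size = -(-len(data) // k)  # ceiling division
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = reduce(xor_bytes, blocks)
    return blocks, parity

def recover(blocks_with_gap, parity):
    """Rebuild the single missing block (marked None) from the survivors:
    the missing block is the XOR of the parity and all present blocks."""
    present = [b for b in blocks_with_gap if b is not None]
    missing = reduce(xor_bytes, present, parity)
    return [missing if b is None else b for b in blocks_with_gap]
```

In Wuala's design, the blocks would be sent in parallel to different machines, with a simple ARQ protocol over UDP re-requesting whatever cannot be reconstructed.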
international conference on peer-to-peer computing | 2010
Laszlo Toka; Matteo Dell'Amico; Pietro Michiardi
In this work we study the benefits of a peer-assisted approach to online backup applications, in which spare bandwidth and storage space of end-hosts complement those of an online storage service. Via simulations, we analyze the interplay between two key aspects of such applications: data placement and bandwidth allocation. Our analysis focuses on metrics such as the time required to complete a backup and a restore operation, as well as the storage costs. We show that, by using adequate bandwidth allocation policies in which storage space at a cloud provider can be used temporarily, hybrid systems can achieve performance comparable to traditional client-server architectures at a fraction of the costs. Moreover, we explore the impact of mechanisms to impose fairness and conclude that a peer-assisted approach does not discriminate peers in terms of performance, but associates a storage cost to peers contributing few resources.
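The benefit of temporary cloud storage can be seen with a back-of-the-envelope model. This is an invented illustration, not the paper's simulator: a pure P2P backup only makes progress while remote peers are online, whereas a hybrid scheme uploads at full uplink speed to temporary cloud storage, which later drains to peers off the critical path.

```python
def completion_hours(data_gb, redundancy, uplink_mbps, peer_online_fraction):
    """Rough backup completion times (hours) under two policies.
    redundancy: storage overhead factor (e.g. 2.0 for 2x redundancy).
    peer_online_fraction: fraction of time remote peers are reachable."""
    payload_gbit = data_gb * redundancy * 8           # data actually uploaded
    hybrid = payload_gbit * 1000 / uplink_mbps / 3600  # uplink-limited only
    pure_p2p = hybrid / peer_online_fraction           # stalls when peers are offline
    return pure_p2p, hybrid
```

Even this crude model shows why buffering in the cloud shortens the backup window: the client's uplink, not peer availability, becomes the only bottleneck.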
ieee international conference computer and communications | 2006
Guillaume Urvoy-Keller; Pietro Michiardi
In this paper we adopt a simulation approach to study the performance of the BitTorrent protocol in terms of the entropy that qualifies a torrent and the structure of the overlay used to distribute the content. We find that the entropy of a torrent, defined as the diversity that characterizes the distribution of pieces of the content, plays an important role for the system to achieve optimal performance. We then relate the performance of BitTorrent with the characteristics of the distribution overlay built by the peers taking part in the torrent. Our results show that the number of connections a given peer maintains with other peers and the fraction of those connections initiated by the peer itself are key factors to sustain a high entropy, and hence optimal system performance. Those results were obtained for a realistic choice of torrent sizes and system parameters, under the assumption of a flash-crowd peer arrival pattern.
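Piece diversity can be quantified with a Shannon-entropy measure over the piece distribution. The normalization below is an assumed proxy for the paper's diversity notion, not its exact definition: 1.0 means pieces are perfectly evenly replicated across peers, lower values mean skewed replication.

```python
# Normalized Shannon entropy of the piece distribution across a swarm.
import math
from collections import Counter

def piece_entropy(bitfields, num_pieces):
    """bitfields: one set of held piece indices per peer.
    Returns entropy normalized to [0, 1]; 1.0 = perfectly even replication."""
    counts = Counter()
    for bitfield in bitfields:
        counts.update(bitfield)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    h = 0.0
    for piece in range(num_pieces):
        f = counts.get(piece, 0) / total
        if f > 0:
            h -= f * math.log2(f)
    return h / math.log2(num_pieces)
```

A simulator would track this quantity over time and correlate it with download completion times, which is the kind of relation the paper establishes.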
Security and Communication Networks | 2009
Roberto Di Pietro; Pietro Michiardi; Refik Molva
Hop-by-hop data aggregation is a very important technique used to reduce the communication overhead and energy expenditure of sensor nodes during the process of data collection in a wireless sensor network (WSN). However, the unattended nature of WSNs calls for data aggregation techniques to be secure. Indeed, sensor nodes can be compromised to mislead the base station (BS) by injecting bogus data into the network during both forwarding and aggregation of data. Moreover, data aggregation might increase the risk of confidentiality violations: if sensors close to the BS are corrupted, an adversary could easily access the results of the 'in-network' computation performed by the WSN. Further, nodes can also fail due to random and non-malicious causes (e.g., battery exhaustion), hence availability should be considered as well. In this paper we tackle the above issues that affect data aggregation techniques by proposing a mechanism that: (i) provides both confidentiality and integrity of the aggregated data, so that for any compromised sensor in the WSN the information acquired could only reveal the readings performed by a small, constant number of neighboring sensors of the compromised one; (ii) detects bogus data injection attempts; (iii) provides high resilience to sensor failures. Our protocol is based on the concept of delayed aggregation and peer monitoring and requires local interactions only. Hence, it is highly scalable and introduces low overhead; detailed analysis supports our findings.
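Delayed aggregation can be illustrated on a simple chain of sensors. This is an invented simplification of the protocol, without its cryptographic machinery: each node sends its parent its own reading in the clear together with the aggregate of readings strictly more than one hop below, so the parent can recompute its child's aggregation step and detect tampering.

```python
def chain_messages(readings):
    """readings[i] is node i's sensed value; node 0 is farthest from the BS.
    Node i sends (own_reading, delayed_sum), where delayed_sum aggregates
    everything strictly below its child."""
    msgs = []
    for i, r in enumerate(readings):
        if i == 0:
            msgs.append((r, 0))
        else:
            child_reading, child_sum = msgs[i - 1]
            msgs.append((r, child_sum + child_reading))
    return msgs

def base_station_total(msgs):
    reading, delayed = msgs[-1]
    return reading + delayed

def parent_checks(msgs, i):
    """Node i's parent verifies node i's delayed sum against the message
    node i received from its own child: this is the peer-monitoring step."""
    if i == 0:
        return msgs[0][1] == 0
    child_reading, child_sum = msgs[i - 1]
    return msgs[i][1] == child_sum + child_reading
```

Because each check involves only a node, its child, and its grandchild, verification stays local, which matches the paper's claim that the protocol requires local interactions only.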