Publication


Featured research published by Damiano Carra.


Conference on Emerging Networking Experiments and Technologies | 2008

Uplink allocation beyond choke/unchoke: or how to divide and conquer best

Nikolaos Laoutaris; Damiano Carra; Pietro Michiardi

Motivated by emerging cooperative P2P applications, we study new uplink allocation algorithms for substituting the rate-based choke/unchoke algorithm of BitTorrent, which was developed for non-cooperative environments. Our goal is to shorten download times by improving the uplink utilization of nodes. We develop a new family of uplink allocation algorithms, which we call BitMax, to stress the fact that they allocate to each unchoked node the maximum rate it can sustain, instead of a 1/(k + 1) equal share as done in the existing BitTorrent. BitMax computes in each interval the number of nodes to be unchoked, and the corresponding allocations, and thus does not require any empirically preset parameters like k. We demonstrate experimentally that BitMax can significantly reduce download times in a typical reference scenario involving mostly ADSL nodes. We also consider scenarios involving network bottlenecks caused by filtering of P2P traffic at ISP peering points and show that BitMax retains its gains in these cases as well.
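
A minimal sketch of the allocation idea described in the abstract: instead of splitting the uplink into 1/(k + 1) equal shares, allocate to each unchoked peer the maximum rate it can sustain until the uplink is exhausted, so the number of unchoked peers follows from capacity rather than from a preset k. The sustainable-rate estimates and the greedy ordering below are illustrative assumptions, not the authors' exact algorithm.

# BitMax-style greedy allocation sketch (illustrative, not the paper's exact algorithm).
def bitmax_allocate(uplink_capacity, sustainable_rates):
    """sustainable_rates: dict peer -> maximum rate (kbit/s) that peer can absorb.
    Returns dict peer -> allocated upload rate; peers left out are choked."""
    allocations = {}
    remaining = uplink_capacity
    # Serve the peers that can absorb the most first (an illustrative choice).
    for peer, rate in sorted(sustainable_rates.items(), key=lambda x: -x[1]):
        if remaining <= 0:
            break
        alloc = min(rate, remaining)
        allocations[peer] = alloc
        remaining -= alloc
    return allocations

if __name__ == "__main__":
    # 1 Mbit/s uplink, ADSL-like neighbour capacities in kbit/s (hypothetical values).
    print(bitmax_allocate(1000, {"a": 600, "b": 400, "c": 300, "d": 200}))
    # The number of unchoked peers is a by-product of the capacity, not a preset k.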


International Conference on Peer-to-Peer Computing | 2008

Faster Content Access in KAD

Moritz Steiner; Damiano Carra; Ernst W. Biersack

Many different distributed hash tables (DHTs) have been designed, but only a few have been successfully deployed. The implementation of a DHT needs to deal with practical aspects (e.g., those related to churn or delay) that are often only marginally considered in the design. In this paper, we analyze in detail the content retrieval process in KAD, the implementation of the Kademlia DHT that is part of several popular peer-to-peer clients. In particular, we present a simple model to evaluate the impact of different design parameters on the overall lookup latency. We then perform extensive measurements of the lookup performance using an instrumented client. Based on the analysis of the results, we propose an improved scheme that significantly decreases the overall lookup latency without increasing the overhead.
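
As a back-of-the-envelope companion to the "simple model" mentioned in the abstract, the toy Monte Carlo below treats a lookup as a few overlay hops; at each hop the client contacts alpha peers in parallel, proceeds as soon as the first live contact replies, and pays a timeout when all contacts are stale. The parameter names and values (alpha, hop count, staleness probability, timeout) are assumptions for illustration, not KAD's actual settings.

import random

def lookup_latency(hops=3, alpha=3, p_stale=0.3, rtt_ms=(50, 300), timeout_ms=1000):
    """Latency of one lookup: per hop, the fastest live reply or a timeout."""
    total = 0.0
    for _ in range(hops):
        replies = [random.uniform(*rtt_ms)
                   for _ in range(alpha) if random.random() > p_stale]
        total += min(replies) if replies else timeout_ms
    return total

if __name__ == "__main__":
    for alpha in (1, 3, 5):
        samples = [lookup_latency(alpha=alpha) for _ in range(10000)]
        print(f"alpha={alpha}: mean lookup latency ~ {sum(samples) / len(samples):.0f} ms")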


International Conference on Big Data | 2013

HFSP: Size-based scheduling for Hadoop

Mario Pastorelli; Antonio Barbuzzi; Damiano Carra; Matteo Dell'Amico; Pietro Michiardi

Size-based scheduling with aging has long been recognized as an effective approach to guarantee fairness and near-optimal system response times. We present HFSP, a scheduler that brings this technique to a real, complex, multi-server, and widely used system: Hadoop. Size-based scheduling requires a priori job size information, which is not available in Hadoop; HFSP builds such knowledge by estimating it on-line during job execution. Our experiments, which are based on realistic workloads generated via a standard benchmarking suite, point to a significant decrease in system response times with respect to the widely used Hadoop Fair scheduler, and show that HFSP is largely tolerant to job size estimation errors.
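
The sketch below only illustrates the two ingredients named in the abstract, size-based scheduling with aging on top of on-line size estimates; it is not HFSP's actual policy (which builds on the Fair Sojourn Protocol), and the aging rate and job sizes are assumptions.

class Job:
    def __init__(self, job_id, estimated_size, arrival_time):
        self.job_id = job_id
        self.remaining = estimated_size   # refined on-line while the job runs
        self.arrival_time = arrival_time

def pick_next(jobs, now, aging_rate=0.1):
    """Priority = estimated remaining work minus an aging credit that grows with
    waiting time, so long-waiting large jobs are eventually served (no starvation)."""
    def priority(job):
        return job.remaining - aging_rate * (now - job.arrival_time)
    return min(jobs, key=priority) if jobs else None

if __name__ == "__main__":
    jobs = [Job("small", 10, arrival_time=0),
            Job("big", 350, arrival_time=0),
            Job("old-big", 400, arrival_time=-3000)]
    print(pick_next(jobs, now=0).job_id)       # "small": shortest estimate wins
    print(pick_next(jobs[1:], now=0).job_id)   # "old-big": aging outweighs its larger size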


IEEE Journal on Selected Areas in Communications | 2007

Graph Based Analysis of Mesh Overlay Streaming Systems

Damiano Carra; R. Lo Cigno; Ernst W. Biersack

This paper studies fundamental properties of stream-based content distribution services. We assume the presence of an overlay network (such as those built by P2P systems) with a limited degree of connectivity, and we develop a mathematical model that captures the essential features of overlay-based streaming protocols and systems. The methodology is based on stochastic graph theory and models the streaming system as a stochastic process whose characteristics are related to the streaming protocol. The model captures the elementary properties of the streaming system, such as the number of active connections, the different play-out delays of nodes, and the probability of not receiving the stream due to node failures/misbehavior. Besides the static properties, the model is able to capture the transient behavior of the distribution graphs, i.e., the evolution of the structure over time, for instance in the initial phase of the distribution process. Contributions of this paper include a detailed definition of the methodology, its comparison with other analytical approaches and with simulative results, and a discussion of the additional insights enabled by this methodology. Results show that mesh-based architectures are able to provide bounds on the receiving delay and to keep rate fluctuations due to system dynamics very low. Additionally, given the tight relationship between the stochastic process and the properties of the distribution protocol, this methodology gives basic guidelines for the design of such protocols and systems.
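
One of the quantities the model captures, the probability of not receiving the stream due to node failures, can also be approximated with a small Monte Carlo experiment: build a random overlay with bounded degree, fail a fraction of the nodes, and count how many surviving nodes are disconnected from the source. The graph size, degree, and failure probability below are illustrative assumptions, and the simulation is a crude stand-in for the paper's analytical model.

import random
from collections import deque

def random_overlay(n, degree):
    """Random undirected overlay where every node connects to at least `degree` neighbours."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        while len(adj[v]) < degree:
            u = random.randrange(n)
            if u != v:
                adj[v].add(u)
                adj[u].add(v)
    return adj

def miss_probability(n=1000, degree=4, p_fail=0.2, trials=50):
    missed = total = 0
    for _ in range(trials):
        adj = random_overlay(n, degree)
        alive = {v for v in range(1, n) if random.random() > p_fail} | {0}
        reached, queue = {0}, deque([0])          # node 0 is the source
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u in alive and u not in reached:
                    reached.add(u)
                    queue.append(u)
        missed += len(alive - reached)
        total += len(alive)
    return missed / total

if __name__ == "__main__":
    print(f"P(an alive node does not receive the stream) ~ {miss_probability():.4f}")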


International Conference on Peer-to-Peer Computing | 2008

On the Impact of Greedy Strategies in BitTorrent Networks: The Case of BitTyrant

Damiano Carra; Giovanni Neglia; Pietro Michiardi

The success of BitTorrent has fostered the development of variants to its basic components. Some of the variants adopt greedy approaches aiming at exploiting the intrinsic altruism of the original version of BitTorrent in order to maximize the benefit of participating in a torrent. In this work we study BitTyrant, a recently proposed strategic client. BitTyrant tries to determine the exact amount of contribution necessary to maximize its download rate by dynamically adapting and shaping the upload rate allocated to its neighbors. We evaluate in detail the various mechanisms used by BitTyrant to identify their contribution to the performance of the client. Our findings indicate that the performance gain is due to the increased number of connections established by a BitTyrant client, rather than to its subtle uplink allocation algorithm; surprisingly, BitTyrant turns out to be altruistic and particularly efficient in disseminating the content, especially during the initial phase of the distribution process. The apparent gain of a single BitTyrant client, however, disappears in the case of widespread adoption: our results indicate a severe loss of efficiency, which we analyze in detail. In contrast, a widespread adoption of the latest version of the mainline BitTorrent client would provide increased benefit for all peers.
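
For context, the strategic allocation loop commonly attributed to BitTyrant can be sketched as follows: rank neighbours by expected return on investment (estimated download d_p per unit of offered upload u_p), unchoke greedily within the uplink budget, then adapt each u_p according to observed reciprocation. The constants and the reciprocation test are illustrative assumptions, not the client's actual code.

def bittyrant_round(peers, uplink_budget, raise_factor=1.2, cut_factor=0.9):
    """peers: dict name -> {"d": estimated download rate, "u": offered upload rate (> 0),
    "reciprocated": whether the peer unchoked us in recent rounds}."""
    # 1) Unchoke the peers with the best d/u ratio until the budget runs out.
    ranked = sorted(peers, key=lambda p: peers[p]["d"] / peers[p]["u"], reverse=True)
    unchoked, spent = [], 0.0
    for p in ranked:
        if spent + peers[p]["u"] > uplink_budget:
            break
        unchoked.append(p)
        spent += peers[p]["u"]
    # 2) Adapt the offered rates for the next round.
    for p in unchoked:
        if peers[p]["reciprocated"]:
            peers[p]["u"] *= cut_factor    # pay less while the peer keeps reciprocating
        else:
            peers[p]["u"] *= raise_factor  # pay more to win reciprocation
    return unchoked

if __name__ == "__main__":
    peers = {"p1": {"d": 80, "u": 40, "reciprocated": True},
             "p2": {"d": 50, "u": 50, "reciprocated": False},
             "p3": {"d": 90, "u": 30, "reciprocated": True}}
    print(bittyrant_round(peers, uplink_budget=80))   # -> ['p3', 'p1']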


International Conference on Communications | 2008

On some fundamental properties of P2P push/pull protocols

R. Lo Cigno; Alessandro Russo; Damiano Carra

The peer-to-peer communication paradigm is changing the way the Internet works and the way network and service providers look at the telecommunication business. P2P applications and networks are gaining ground by the day, and new systems are proposed continuously with ever more novel features and better performance. In spite of this attention and success, however, there is still a lack of fundamental analysis and understanding of the elementary properties of these systems. In this paper we consider a class of P2P protocols suitable both for content delivery (file-based communications) and for high-bandwidth media streaming such as video and TV, and explore its fundamental properties. The class considered is known as mesh-based swarming push/pull systems or interleave protocols. These protocols split the content into pieces and continuously alternate two phases: one where the peer pushes a piece to another peer to percolate information through the system, and one where it pulls a piece to retrieve missing information. We compare synchronous and asynchronous models and explore the impact of protocol parameters, such as the size of the active neighborhood, trying to identify the efficiency of these very simple protocols in different scenarios and gaining insight for the design of the next generation of protocols with performance and efficiency in mind.
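
The push/pull alternation described above can be made concrete with a tiny synchronous simulation: in every round each peer first pushes one piece it owns to a random neighbour, then pulls one missing piece from a random neighbour. The full-mesh neighbourhood, lossless exchanges, and single source are simplifying assumptions rather than the protocols analyzed in the paper.

import random

def simulate(n_peers=50, n_pieces=40, seed=1):
    random.seed(seed)
    have = [set(range(n_pieces))] + [set() for _ in range(n_peers)]  # peer 0 is the source
    rounds = 0
    while any(len(h) < n_pieces for h in have):
        rounds += 1
        for p in range(n_peers + 1):
            others = [x for x in range(n_peers + 1) if x != p]
            # Push phase: offer a random owned piece to a random other peer.
            if have[p]:
                have[random.choice(others)].add(random.choice(tuple(have[p])))
            # Pull phase: ask a random other peer for a random missing piece.
            missing = set(range(n_pieces)) - have[p]
            if missing:
                wanted = missing & have[random.choice(others)]
                if wanted:
                    have[p].add(random.choice(tuple(wanted)))
    return rounds

if __name__ == "__main__":
    print("rounds until every peer holds the full content:", simulate())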


Conference on Emerging Networking Experiments and Technologies | 2007

Building a reliable P2P system out of unreliable P2P clients: the case of KAD

Damiano Carra; Ernst W. Biersack

Distributed hash tables (DHTs) provide a framework for managing information in a large distributed network of nodes. One of the main challenges DHT systems must face is node churn, i.e., nodes can arrive and depart at any time. Ensuring that information published in a DHT remains available despite node churn is equivalent to building a reliable system out of unreliable components. In this paper we analyze KAD, a widely deployed DHT system, and focus on how to ensure that information published in KAD remains available despite churn. We apply reliability theory to understand how the probability of finding an object published in KAD evolves over time. We also evaluate the cost, in terms of message exchanges, associated with the publishing scheme. Once we have identified the main weaknesses of the existing system, we propose an improved publishing scheme that achieves the same reliability at a dramatically reduced cost. By exploiting knowledge about the lifetime of the nodes used to store the information, it is possible to reduce the publishing cost by one order of magnitude.
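
The reliability-theory view sketched in the abstract can be summarized with a small calculation: if an object is published on k nodes whose residual lifetimes are independent with survival function S(t), the object is still retrievable at time t as long as at least one replica is alive, i.e., with probability 1 - (1 - S(t))^k. Exponential lifetimes and the numeric values of k, t, and the mean lifetime are illustrative assumptions.

import math

def availability(t_hours, k, mean_lifetime_hours):
    s = math.exp(-t_hours / mean_lifetime_hours)   # P(a single replica is still alive)
    return 1.0 - (1.0 - s) ** k                    # P(at least one replica is alive)

if __name__ == "__main__":
    for k in (5, 10, 20):
        print(f"k={k:2d}: availability after 24h = "
              f"{availability(24, k, mean_lifetime_hours=8):.3f}")
    # Choosing replicas with longer expected lifetimes reaches the same target
    # availability with fewer replicas, and hence fewer (re)publish messages.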


Computer Networks | 2012

Peer-assisted content distribution on a budget

Pietro Michiardi; Damiano Carra; Francesco Albanese; Azer Bestavros

In this paper, we propose a general framework and present a prototype implementation of a peer-assisted content delivery application. Our system, called Cyclops, dynamically adjusts the bandwidth consumed by content servers (which represents the bulk of content delivery costs) to feed a set of swarming clients, based on a feedback signal that gauges the real-time health of the swarm. Our extensive evaluation of Cyclops in a variety of settings, including controlled PlanetLab and live Internet experiments involving thousands of users, shows a significant reduction in content distribution costs when compared to existing swarming solutions, with a minor impact on content delivery times.
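
An illustrative control loop in the spirit of the description above: the content server periodically measures a swarm-health signal and contributes only the bandwidth the swarm cannot supply on its own. The specific health metric (aggregate peer download rate versus a target rate) and the proportional update are assumptions for illustration, not Cyclops' actual controller.

def adjust_server_rate(current_rate, swarm_download_rate, target_rate,
                       gain=0.5, min_rate=0.0, max_rate=100.0):
    """Increase the server upload when the swarm lags the target rate, back off
    when peers already sustain the target among themselves (rates in Mbit/s)."""
    deficit = target_rate - swarm_download_rate
    new_rate = current_rate + gain * deficit
    return max(min_rate, min(max_rate, new_rate))

if __name__ == "__main__":
    rate = 20.0
    for swarm_rate in (40.0, 70.0, 95.0, 110.0):   # swarm health improving over time
        rate = adjust_server_rate(rate, swarm_rate, target_rate=100.0)
        print(f"swarm delivers {swarm_rate:5.1f}  ->  server feeds {rate:5.1f} Mbit/s")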


IEEE Transactions on Parallel and Distributed Systems | 2008

Stochastic Graph Processes for Performance Evaluation of Content Delivery Applications in Overlay Networks

Damiano Carra; R. Lo Cigno; Ernst W. Biersack

This paper proposes a new methodology to model the distribution of finite-size content to a group of users connected through an overlay network. Our methodology describes the distribution process as a constrained stochastic graph process (CSGP), where the constraints dictated by the content distribution protocol and the characteristics of the overlay network define the interaction among nodes. A CSGP is a semi-Markov process whose state is described by the graph itself. CSGPs offer a powerful description technique that can be exploited by Monte Carlo integration methods to compute in a very efficient way not only the mean but also the full distribution of metrics such as the file download times or the number of hops from the source to the receiving nodes. We model several distribution architectures based on trees and meshes as CSGPs and solve them numerically. We are able to study scenarios with a very large number of nodes, and we can precisely quantify the performance differences between the tree-based and mesh-based distribution architectures.
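
A toy version of the Monte Carlo idea above: simulate many runs of a constrained random distribution process (here, a random tree in which every node may serve at most max_degree children) and collect the full distribution of the number of hops from the source, not just its mean. The tree-growth rule is an illustrative stand-in for the constraints a real distribution protocol would impose.

import random
from collections import Counter

def one_run(n_nodes=1000, max_degree=4):
    hops = [0]                      # hop count of the source
    children = [0]                  # number of children per attached node
    for _ in range(n_nodes - 1):
        # Attach the new node to a random parent that still has spare degree.
        candidates = [i for i, c in enumerate(children) if c < max_degree]
        parent = random.choice(candidates)
        children[parent] += 1
        children.append(0)
        hops.append(hops[parent] + 1)
    return hops

if __name__ == "__main__":
    dist = Counter()
    for _ in range(20):             # 20 Monte Carlo runs
        dist.update(one_run())
    total = sum(dist.values())
    for h in sorted(dist):
        print(f"{h:2d} hops: {dist[h] / total:.3f}")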


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2014

Revisiting Size-Based Scheduling with Estimated Job Sizes

Matteo Dell'Amico; Damiano Carra; Mario Pastorelli; Pietro Michiardi

We study size-based schedulers and focus on the impact of inaccurate job size information on response time and fairness. Our intent is to revisit previous results, which allude to performance degradation for even small errors in job size estimates, thus limiting the applicability of size-based schedulers. We show that scheduling performance is tightly connected to workload characteristics: in the absence of large skew in the job size distribution, even extremely imprecise estimates suffice to outperform size-oblivious disciplines. When job sizes are heavily skewed, instead, known size-based disciplines suffer. In this context, we show, for the first time, the dichotomy between over-estimation and under-estimation. The former is, in general, less problematic than the latter, as its effects are localized to individual jobs; under-estimation, instead, leads to severe problems that may affect a large number of jobs. We present an approach to mitigate these problems: our technique requires no complex modifications to the original scheduling policies and performs very well. To support our claim, we carry out a simulation-based evaluation that covers an unprecedentedly large parameter space and takes into account a variety of synthetic and real workloads. We show that size-based scheduling is practical and outperforms alternatives in a wide array of use cases, even in the presence of inaccurate size information.
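
The over- versus under-estimation dichotomy can be illustrated with a deliberately simplified experiment: serve a batch of simultaneously arriving jobs non-preemptively in order of estimated size and compare mean response times. This is not the schedulers or the mitigation technique studied in the paper; job sizes and error factors are assumptions.

def mean_response_time(jobs):
    """jobs: list of (true_size, estimated_size). Serve in order of increasing
    estimate and return the mean completion time."""
    clock, completions = 0.0, []
    for true_size, _ in sorted(jobs, key=lambda j: j[1]):
        clock += true_size
        completions.append(clock)
    return sum(completions) / len(completions)

if __name__ == "__main__":
    small_jobs = [(1.0, 1.0)] * 9
    print("exact estimates       :", mean_response_time(small_jobs + [(100.0, 100.0)]))
    print("big job underestimated:", mean_response_time(small_jobs + [(100.0, 0.5)]))
    print("big job overestimated :", mean_response_time(small_jobs + [(100.0, 400.0)]))
    # Under-estimating the big job pushes it ahead of every small job and inflates
    # everyone's response time; over-estimating it delays only the big job itself.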
