Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matteo Dell'Amico is active.

Publication


Featured research published by Matteo Dell'Amico.


International Conference on Peer-to-Peer Computing | 2010

Online Data Backup: A Peer-Assisted Approach

Laszlo Toka; Matteo Dell'Amico; Pietro Michiardi

In this work we study the benefits of a peer-assisted approach to online backup applications, in which spare bandwidth and storage space of end-hosts complement those of an online storage service. Via simulations, we analyze the interplay between two key aspects of such applications: data placement and bandwidth allocation. Our analysis focuses on metrics such as the time required to complete a backup and a restore operation, as well as storage costs. We show that, by using adequate bandwidth allocation policies in which storage space at a cloud provider can be used temporarily, hybrid systems can achieve performance comparable to traditional client-server architectures at a fraction of the cost. Moreover, we explore the impact of mechanisms to impose fairness and conclude that a peer-assisted approach does not discriminate against peers in terms of performance, but associates a storage cost with peers that contribute few resources.
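As a rough illustration of the trade-off studied here, consider a toy bandwidth model (hypothetical numbers and function names, not the paper's simulator): a backup completes no faster than the client's uplink allows and, without cloud assistance, no faster than the recipients' aggregate downlink allows; temporarily staging data at a cloud provider removes the recipient-side bottleneck at a storage cost.

```python
# Toy model (not the paper's simulator): hours to complete a backup when
# spare peer bandwidth is complemented by temporary cloud storage.
def backup_hours(data_gb, uplink_mbps, peer_downlink_mbps, cloud_assist=False):
    # Without cloud staging, the transfer is limited by whichever is
    # slower: the client's uplink or the peers' aggregate downlink.
    bottleneck_mbps = uplink_mbps if cloud_assist \
        else min(uplink_mbps, peer_downlink_mbps)
    return data_gb * 8 * 1024 / (bottleneck_mbps * 3600)

# 50 GB over a 10 Mb/s uplink, with peers able to absorb only 4 Mb/s:
print(f"{backup_hours(50, 10, 4):.1f} h")                     # ~28.4 h
print(f"{backup_hours(50, 10, 4, cloud_assist=True):.1f} h")  # ~11.4 h
```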


International Conference on Big Data | 2013

HFSP: Size-based scheduling for Hadoop

Mario Pastorelli; Antonio Barbuzzi; Damiano Carra; Matteo Dell'Amico; Pietro Michiardi

Size-based scheduling with aging has long been recognized as an effective approach to guarantee fairness and near-optimal system response times. We present HFSP, a scheduler that brings this technique to a real, complex, multi-server, and widely used system: Hadoop. Size-based scheduling requires a priori job size information, which is not available in Hadoop; HFSP builds such knowledge by estimating it on-line during job execution. Our experiments, based on realistic workloads generated via a standard benchmarking suite, point to a significant decrease in system response times with respect to the widely used Hadoop Fair Scheduler, and show that HFSP is largely tolerant to job size estimation errors.
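A minimal sketch of the underlying idea, with made-up names and a made-up aging rule rather than HFSP's actual policy: jobs are ordered by their estimated size, the estimate is refined on-line, and waiting time "ages" a job so that a mis-estimated large job cannot be starved forever.

```python
# Minimal sketch of size-based scheduling with on-line estimates and
# aging. Hypothetical code: HFSP's real estimator and aging differ.
class Job:
    def __init__(self, name, estimated_size):
        self.name = name
        self.estimated_size = estimated_size  # refined as tasks complete
        self.waited = 0                       # time spent in the queue

    def priority(self, aging_rate=0.5):
        # Lower is better: estimated size, discounted by time waited,
        # so a long-waiting job eventually runs even if it looks big.
        return self.estimated_size - aging_rate * self.waited

def pick_next(queue):
    return min(queue, key=Job.priority)

queue = [Job("small", 10), Job("big", 100)]
queue[1].waited = 200         # "big" has been waiting for a while
print(pick_next(queue).name)  # -> big: aging now outweighs its size
```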


International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment | 2010

Take a deep breath: a stealthy, resilient and cost-effective botnet using Skype

Antonio Nappa; Aristide Fattori; Marco Balduzzi; Matteo Dell'Amico; Lorenzo Cavallaro

Skype is one of the most widely used P2P applications on the Internet: VoIP calls, instant messaging, SMS, and other features are provided at low cost to millions of users. Although Skype is a closed-source application, an API allows developers to build custom plugins that interact over the Skype network, taking advantage of its reliability and its ability to easily bypass firewalls and NAT devices. Since the protocol is completely undocumented, Skype traffic is particularly hard to analyze and reverse engineer. We propose a novel botnet model that exploits an overlay network such as Skype to build a parasitic overlay, making it extremely difficult to track the botmaster and disrupt the botnet without damaging legitimate Skype users. While Skype is particularly well suited to this purpose due to its abundance of features and its widespread installed base, our model is generically applicable to distributed applications that employ overlay networks to send direct messages between nodes (e.g., peer-to-peer software with messaging capabilities). We are convinced that similar botnet models are likely to appear in the wild in the near future and that the threats they pose should not be underestimated. Our contribution strives to provide the tools to correctly evaluate and understand the possible evolution and deployment of this phenomenon.


International Conference on Peer-to-Peer Computing | 2011

Data transfer scheduling for P2P storage

Laszlo Toka; Matteo Dell'Amico; Pietro Michiardi

In peer-to-peer storage and backup applications, large amounts of data have to be transferred between nodes. In general, the recipients of data transfers are not chosen randomly from the whole set of nodes in the peer-to-peer network; rather, they are chosen according to peer selection rules that impose criteria such as resource contributions, position in DHTs, or trust between nodes. Imposing overly stringent restrictions on the choice of nodes eligible to receive data can have a negative impact on the time needed to complete a transfer, and scheduling choices influence this result as well. We formalize the problem of data transfer scheduling and devise means of calculating optimal scheduling choices, assuming a posteriori knowledge of node availability patterns; we then propose and evaluate realistic scheduling policies, measuring their transfer-time overheads with respect to the optimum. We show that allowing even a small flexibility in choosing nodes after the peer selection step results in large improvements in time to complete transfers, and that even simple informed scheduling policies can significantly reduce transfer time overhead.
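A minimal sketch of one "informed" policy in this spirit (hypothetical names and logic, not the paper's exact policies): among the recipients that peer selection deems eligible, route each transfer to the online node with the highest estimated availability, reducing the chance that the transfer stalls when its recipient goes offline.

```python
# Hypothetical sketch of an informed transfer-scheduling policy: among
# eligible recipients, prefer the online node most likely to stay
# available, so fewer transfers stall mid-way. Not the paper's code.
def schedule(chunk, eligible, online, availability):
    """Pick a recipient for `chunk` among `eligible` nodes.

    availability[n] is the estimated fraction of time node n is online.
    """
    candidates = [n for n in eligible if n in online]
    if not candidates:
        return None  # defer the transfer until an eligible node appears
    return max(candidates, key=lambda n: availability[n])

availability = {"a": 0.3, "b": 0.9, "c": 0.6}
print(schedule("chunk-42", eligible={"a", "b", "c"}, online={"a", "b"},
               availability=availability))  # -> "b"
```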


Computer and Communications Security | 2015

Monte Carlo Strength Evaluation: Fast and Reliable Password Checking

Matteo Dell'Amico; Maurizio Filippone

Modern password guessing attacks adopt sophisticated probabilistic techniques that allow them to succeed with orders of magnitude fewer guesses than brute force. Unfortunately, best practices and password strength evaluators have failed to keep up: they are generally based on heuristic rules designed to defend against obsolete brute-force attacks. Many passwords can only be guessed with significant effort, and motivated attackers may be willing to invest resources to obtain valuable passwords. However, it is eminently impractical for the defender to simulate expensive attacks against each user to accurately characterize their password strength. This paper proposes a novel method to estimate the number of guesses needed to find a password using modern attacks. The proposed method requires few resources, applies to a wide set of probabilistic models, and is characterized by highly desirable convergence properties. The experiments demonstrate the scalability and generality of the proposal. In particular, the experimental analysis reports evaluations of a wide range of password strengths, and of state-of-the-art attacks on very large datasets, including attacks that would have been prohibitively expensive to handle with existing simulation-based approaches.
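A compact sketch of a Monte Carlo estimator in this spirit (simplified illustration; it assumes the attack model can sample passwords together with their probabilities, and the toy model below is invented): draw samples from the model, and count each sample whose probability exceeds the target's as standing in for 1/p passwords.

```python
import random

# Sketch of a Monte Carlo guess-number estimator in the spirit of this
# paper (simplified illustration, not the authors' code).
def guess_number(target_prob, sample_prob, n=100_000):
    """Estimate how many passwords the model ranks above a password of
    probability `target_prob`. `sample_prob()` draws the probability of
    a password sampled from the attack model itself."""
    total = 0.0
    for _ in range(n):
        p = sample_prob()
        if p > target_prob:
            total += 1.0 / p  # one sample stands in for 1/p passwords
    return total / n

def sample_prob():
    # Toy attack model: 10 common passwords at probability 0.05 each,
    # plus a tail of one million rare ones sharing the remaining mass.
    return 0.05 if random.random() < 0.5 else 0.5 / 1_000_000

print(f"{guess_number(0.05, sample_prob):.0f}")  # ~0: tried right away
print(f"{guess_number(1e-7, sample_prob):.0f}")  # ~1e6 guesses needed
```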


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2014

Revisiting Size-Based Scheduling with Estimated Job Sizes

Matteo Dell'Amico; Damiano Carra; Mario Pastorelli; Pietro Michiardi

We study size-based schedulers, focusing on the impact of inaccurate job size information on response time and fairness. Our intent is to revisit previous results, which point to performance degradation for even small errors in job size estimates, thus limiting the applicability of size-based schedulers. We show that scheduling performance is tightly connected to workload characteristics: in the absence of large skew in the job size distribution, even extremely imprecise estimates suffice to outperform size-oblivious disciplines; when job sizes are heavily skewed, however, known size-based disciplines suffer. In this context, we show, for the first time, the dichotomy of over-estimation versus under-estimation. The former is, in general, less problematic than the latter, as its effects are localized to individual jobs; under-estimation, instead, leads to severe problems that may affect a large number of jobs. We present an approach to mitigate these problems: our technique requires no complex modifications to the original scheduling policies and performs very well. To support our claims, we carry out a simulation-based evaluation that covers an unprecedentedly large parameter space and takes into account a variety of synthetic and real workloads. We show that size-based scheduling is practical and outperforms alternatives in a wide array of use cases, even in the presence of inaccurate size information.
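The dichotomy is easy to see in a toy example (invented numbers, not the paper's simulations) under a non-preemptive shortest-estimated-size-first discipline: an over-estimate only postpones the mis-estimated job itself, while an under-estimate of a large job delays every job queued behind it.

```python
# Toy illustration of the over- vs under-estimation dichotomy under a
# non-preemptive shortest-estimated-size-first discipline.
def response_times(jobs):
    """jobs: list of (name, true_size, estimated_size), all arriving at
    time 0. Serve in increasing order of estimate; return finish times."""
    t, done = 0.0, {}
    for name, true_size, _est in sorted(jobs, key=lambda j: j[2]):
        t += true_size
        done[name] = t
    return done

# Under-estimating the big job puts it first and delays every small job:
print(response_times([("big", 100, 2), ("s1", 10, 10), ("s2", 10, 10)]))
# -> {'big': 100.0, 's1': 110.0, 's2': 120.0}

# Over-estimating it only affects the big job itself, and only mildly:
print(response_times([("big", 100, 500), ("s1", 10, 10), ("s2", 10, 10)]))
# -> {'s1': 10.0, 's2': 20.0, 'big': 120.0}
```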


Information Security Technical Report | 2013

HiPoLDS: A Hierarchical Security Policy Language for Distributed Systems

Matteo Dell'Amico; Gabriel Serme; Muhammad Sabir Idrees; Anderson Santana de Oliveira; Yves Roudier

Expressing security policies to govern distributed systems is a complex and error-prone task. Policies are hard to understand and often expressed in unfriendly syntax, making it difficult for security administrators and business analysts to create intelligible specifications. We introduce the Hierarchical Policy Language for Distributed Systems (HiPoLDS), which has been designed to enable the specification of security policies in distributed systems in a concise, readable, and extensible way. The design of HiPoLDS focuses on decentralized execution environments under the control of multiple stakeholders. It represents policy enforcement through distributed reference monitors, which control the flow of information between services. HiPoLDS allows the definition of both abstract and concrete policies, expressing, respectively, required high-level properties and concrete implementation details to be ultimately introduced into the service implementation.


Very Large Data Bases | 2016

NG-DBSCAN: scalable density-based clustering for arbitrary data

Alessandro Lulli; Matteo Dell'Amico; Pietro Michiardi; Laura Ricci

We present NG-DBSCAN, an approximate density-based clustering algorithm that operates on arbitrary data and any symmetric distance measure. The distributed design of our algorithm makes it scalable to very large datasets; its approximate nature makes it fast, yet capable of producing high-quality clustering results. We provide a detailed overview of the steps of NG-DBSCAN, together with their analysis. Our results, obtained through an extensive experimental campaign with real and synthetic data, substantiate our claims about NG-DBSCAN's performance and scalability.
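For reference, the exact algorithm that NG-DBSCAN approximates is classic DBSCAN, sketched below (a textbook version, not the paper's distributed algorithm); the property NG-DBSCAN preserves is that only a symmetric distance function is needed, not vector coordinates.

```python
# Textbook DBSCAN sketch: the exact algorithm NG-DBSCAN approximates
# (NOT the paper's distributed version). Any symmetric distance
# function works, which is the property NG-DBSCAN preserves.
def dbscan(points, dist, eps, min_pts):
    labels = {p: None for p in points}   # None = unvisited, -1 = noise
    neighbors = lambda p: [q for q in points if dist(p, q) <= eps]
    cluster = 0
    for p in points:
        if labels[p] is not None:
            continue
        seeds = neighbors(p)
        if len(seeds) < min_pts:
            labels[p] = -1               # noise (may become a border point)
            continue
        cluster += 1
        labels[p] = cluster
        queue = list(seeds)
        while queue:                     # expand the cluster from p
            q = queue.pop()
            if labels[q] == -1:
                labels[q] = cluster      # border point: do not expand
            elif labels[q] is None:
                labels[q] = cluster
                qn = neighbors(q)
                if len(qn) >= min_pts:   # q is a core point: keep growing
                    queue.extend(qn)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(dbscan(pts, manhattan, eps=2, min_pts=2))  # two clusters + noise
```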


IEEE International Conference on Cloud Computing Technology and Science | 2017

HFSP: Bringing Size-Based Scheduling To Hadoop

Mario Pastorelli; Damiano Carra; Matteo Dell'Amico; Pietro Michiardi

Size-based scheduling with aging has been recognized as an effective approach to guarantee fairness and near-optimal system response times. We present HFSP, a scheduler that brings this technique to a real, complex, multi-server, and widely used system: Hadoop. Size-based scheduling requires a priori job size information, which is not available in Hadoop; HFSP builds such knowledge by estimating it on-line during job execution. Our experiments, based on realistic workloads generated via a standard benchmarking suite, point to a significant decrease in system response times with respect to the widely used Hadoop Fair Scheduler, without impacting the fairness of the scheduler, and show that HFSP is largely tolerant to job size estimation errors.


IEEE Transactions on Computers | 2016

PSBS: Practical Size-Based Scheduling

Matteo Dell'Amico; Damiano Carra; Pietro Michiardi

Size-based schedulers have very desirable performance properties: optimal or near-optimal response time can be coupled with strong fairness. Despite this, such systems are rarely implemented in practical settings, because they require knowing a priori the amount of work needed to complete jobs, an assumption that is difficult to satisfy in concrete systems. It is far more realistic to inform the system with an estimate of job sizes, but existing studies point to somewhat pessimistic results when size-based policies use imprecise job size estimates. Our goal is to design scheduling policies that explicitly deal with inexact job sizes. First, we prove that, in the absence of errors, it is always possible to improve any scheduling policy by designing a size-based one that dominates it: in the new policy, no job will complete later than in the original one. Unfortunately, size-based schedulers can perform badly with inexact job size information when job sizes are heavily skewed; we show that this issue, and the pessimistic results in the literature, are due to problematic behavior when large jobs are underestimated. Once the problem is identified, size-based schedulers can be amended to solve it. We generalize FSP, a fair and efficient size-based scheduling policy, to solve the problem highlighted above; in addition, our solution deals with different job weights (which can be assigned to a job independently of its size). We provide an efficient implementation of the resulting protocol, which we call the Practical Size-Based Scheduler (PSBS). Through simulations on synthetic and real workloads, we show that PSBS has near-optimal performance in a large variety of cases with inaccurate size information, that it performs fairly, and that it handles job weights correctly. We believe this work shows that PSBS is indeed practical, and we maintain that it could inspire the design of schedulers in a wide array of real-world use cases.
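For orientation, the FSP idea that PSBS generalizes can be sketched in a few lines (a simplified, hypothetical rendering, not the PSBS implementation): run a virtual processor-sharing simulation of the queue, and dedicate the real server to whichever job finishes first in that virtual schedule. Weights, as in PSBS, simply change each job's virtual share.

```python
# Simplified sketch of the FSP idea that PSBS generalizes (hypothetical
# code, not the PSBS implementation): jobs are simulated under weighted
# processor sharing in a *virtual* queue, and the real server always
# serves the job that would finish first in the virtual schedule.
def virtual_ps_finish_times(jobs):
    """jobs: {name: (size, weight)}, all arriving at time 0."""
    remaining = {n: s for n, (s, _) in jobs.items()}
    weight = {n: w for n, (_, w) in jobs.items()}
    t, finish = 0.0, {}
    while remaining:
        total_w = sum(weight[n] for n in remaining)
        # Time until the first job drains at the current shares:
        dt = min(remaining[n] * total_w / weight[n] for n in remaining)
        for n in list(remaining):
            remaining[n] -= dt * weight[n] / total_w
            if remaining[n] <= 1e-12:
                del remaining[n]
                finish[n] = t + dt
        t += dt
    return finish

# "a" and "b" have the same size, but "b" has triple weight, so it
# finishes first in the virtual schedule and is served first for real.
times = virtual_ps_finish_times({"a": (4, 1), "b": (4, 3), "c": (2, 1)})
print(sorted(times, key=times.get))  # -> ['b', 'c', 'a']
```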

Collaboration


Dive into Matteo Dell'Amico's collaborations.
