Publication


Featured research published by Maurizio Matteo Munafo.


Internet Measurement Conference | 2012

Inside dropbox: understanding personal cloud storage services

Idilio Drago; Marco Mellia; Maurizio Matteo Munafo; Anna Sperotto; Ramin Sadre; Aiko Pras

Personal cloud storage services are gaining popularity. With a rush of providers to enter the market and an increasing offer of cheap storage space, it is to be expected that cloud storage will soon generate a high amount of Internet traffic. Very little is known about the architecture and the performance of such systems, and the workload they have to face. This understanding is essential for designing efficient cloud storage systems and predicting their impact on the network. This paper presents a characterization of Dropbox, the leading solution in personal cloud storage in our datasets. By means of passive measurements, we analyze data from four vantage points in Europe, collected during 42 consecutive days. Our contributions are threefold: Firstly, we are the first to study Dropbox, which we show to be the most widely-used cloud storage system, already accounting for a volume equivalent to around one third of the YouTube traffic at campus networks on some days. Secondly, we characterize the workload users in different environments generate to the system, highlighting how this reflects on network traffic. Lastly, our results show possible performance bottlenecks caused by both the current system architecture and the storage protocol. This is exacerbated for users connected far from storage data-centers. All measurements used in our analyses are publicly available in anonymized form at the SimpleWeb trace repository: http://traces.simpleweb.org/dropbox/


International Conference on Distributed Computing Systems | 2011

Dissecting Video Server Selection Strategies in the YouTube CDN

Ruben Torres; Alessandro Finamore; Jin Ryong Kim; Marco Mellia; Maurizio Matteo Munafo; Sanjay G. Rao

In this paper, we conduct a detailed study of the YouTube CDN with a view to understanding the mechanisms and policies used to determine which data centers users download video from. Our analysis is conducted using week-long datasets simultaneously collected from the edge of five networks - two university campuses and three ISP networks - located in three different countries. We employ state-of-the-art delay-based geolocation techniques to find the geographical location of YouTube servers. A unique aspect of our work is that we perform our analysis on groups of related YouTube flows. This enables us to infer key aspects of the system design that would be difficult to glean by considering individual flows in isolation. Our results reveal that while the RTT between users and data centers plays a role in the video server selection process, a variety of other factors may influence this selection including load-balancing, diurnal effects, variations across DNS servers within a network, limited availability of rarely accessed video, and the need to alleviate hot-spots that may arise due to popular video content.


IEEE Network | 2011

Experiences of Internet traffic monitoring with tstat

Alessandro Finamore; Marco Mellia; Michela Meo; Maurizio Matteo Munafo; Dario Rossi

Since the early days of the Internet, network traffic monitoring has always played a strategic role in understanding and characterizing users' activities. In this article, we present our experience in engineering and deploying Tstat, an open source passive monitoring tool that has been developed in the past 10 years. Started as a scalable tool to continuously monitor packets that flow on a link, Tstat has evolved into a complex application that gives network researchers and operators the possibility to derive extended and complex measurements thanks to advanced traffic classifiers. After discussing Tstat capabilities and internal design, we present some examples of measurements collected deploying Tstat at the edge of several ISP networks in past years. While other works report a continuous decline of P2P traffic with streaming and file hosting services rapidly increasing in popularity, the results presented in this article picture a different scenario. First, P2P decline has stopped, and in the last months of 2010 there was a counter tendency to increase P2P traffic over UDP, so the common belief that UDP traffic is negligible is not true anymore. Furthermore, streaming and file hosting applications have either stabilized or are experiencing decreasing traffic shares. We then discuss the scalability issues software-based tools have to cope with when deployed in real networks, showing the importance of properly identifying bottlenecks.


Conference on Emerging Networking Experiments and Technologies | 2014

The Cost of the "S" in HTTPS

David Naylor; Alessandro Finamore; Ilias Leontiadis; Yan Grunenberger; Marco Mellia; Maurizio Matteo Munafo; Konstantina Papagiannaki; Peter Steenkiste

Increased user concern over security and privacy on the Internet has led to widespread adoption of HTTPS, the secure version of HTTP. HTTPS authenticates the communicating end points and provides confidentiality for the ensuing communication. However, as with any security solution, it does not come for free. HTTPS may introduce overhead in terms of infrastructure costs, communication latency, data usage, and energy consumption. Moreover, given the opaqueness of the encrypted communication, any in-network value-added services requiring visibility into application layer content, such as caches and virus scanners, become ineffective. This paper attempts to shed some light on these costs. First, taking advantage of datasets collected from large ISPs, we examine the accelerating adoption of HTTPS over the last three years. Second, we quantify the direct and indirect costs of this evolution. Our results show that, indeed, security does not come for free. This work thus aims to stimulate discussion on technologies that can mitigate the costs of HTTPS while still protecting the users' privacy.


IEEE Personal Communications | 1997

Local and global handovers for mobility management in wireless ATM networks

Marco Ajmone Marsan; Carla Fabiana Chiasserini; R. Lo Cigno; Maurizio Matteo Munafo; Andrea Fumagalli

This article deals with the problem of virtual circuit (VC) management in wireless ATM (W-ATM) networks with mobile user terminals. In W-ATM networks, a VC terminating at a mobile user may require dynamic reestablishment during the short time span necessary for terminal handover due to its movement from one (macro)cell to another. The VC reestablishment procedure has to ensure in-sequence and loss-free delivery of the ATM cells containing user data. After a classification of the solutions proposed so far in the literature, a novel technique for the dynamic reestablishment of VCs in W-ATM networks is described, and its performance is evaluated through simulation. The proposed technique allows for a progressive upgrade of the fixed part of the ATM network and for the incremental introduction of user terminal mobility.


Traffic Monitoring and Analysis | 2012

Uncovering the big players of the web

Vinicius Gehlen; Alessandro Finamore; Marco Mellia; Maurizio Matteo Munafo

In this paper we observe how large Internet organizations deliver web content to end users today. Using one-week long data sets collected at three vantage points aggregating more than 30,000 Internet customers, we characterize the offered services, precisely quantifying and comparing the performance of different players. Results show that today 65% of the web traffic is handled by the top 10 organizations. We observe that, while all of them serve the same type of content, different server architectures have been adopted considering load-balancing schemes and the number and location of servers: some organizations handle thousands of servers, with the closest being a few milliseconds away from the end user, while others manage a few data centers. Despite this, the bulk transfer rates offered to end users are typically good, but impairments can arise when content is not readily available at the server and has to be retrieved from the CDN back-end.


IEEE International Conference on Computer Communications | 2007

DoWitcher: Effective Worm Detection and Containment in the Internet Core

Supranamaya Ranjan; S. Shah; Antonio Nucci; Maurizio Matteo Munafo; R. Cruz; S. Muthukrishnan

Enterprise networks are increasingly offloading the responsibility for worm detection and containment to the carrier networks. However, current approaches to the zero-day worm detection problem such as those based on content similarity of packet payloads are not scalable to the carrier link speeds (OC-48 and up-wards). In this paper, we introduce a new system, namely DoWitcher, which in contrast to previous approaches is scalable as well as able to detect the stealthiest worms that employ low-propagation rates or polymorphisms to evade detection. DoWitcher uses an incremental approach toward worm detection: First, it examines the layer-4 traffic features to discern the presence of a worm anomaly; Next, it determines a flow-filter mask that can be applied to isolate the suspect worm flows and; Finally, it enables full-packet capture of only those flows that match the mask, which are then processed by a longest common subsequence algorithm to extract the worm content signature. Via a proof-of-concept implementation on a commercially available network analyzer processing raw packets from an OC-48 link, we demonstrate the capability of DoWitcher to detect low-rate worms and extract signatures for even the polymorphic worms.
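The final step described above, distilling a content signature from the captured suspect flows with a longest common subsequence algorithm, can be sketched as follows. This is an illustrative Python sketch of the generic LCS step only, not DoWitcher's implementation (which runs on a commercial network analyzer at OC-48 rates); the function names and the idea of folding LCS pairwise over payloads are assumptions for illustration.

```python
def lcs(a: bytes, b: bytes) -> bytes:
    # Classic O(len(a) * len(b)) dynamic program for the
    # longest common subsequence of two byte strings.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover one subsequence achieving dp[m][n].
    out, i, j = bytearray(), m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return bytes(reversed(out))

def extract_signature(payloads: list[bytes]) -> bytes:
    # Fold LCS over all suspect payloads; the common residue is a
    # candidate worm content signature (robust to polymorphic padding,
    # since a subsequence need not be contiguous).
    sig = payloads[0]
    for p in payloads[1:]:
        sig = lcs(sig, p)
    return sig
```

Because a subsequence may skip bytes, the folded result survives the byte reordering and filler that polymorphic worms insert between invariant protocol fragments.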


International Conference on Computer Communications | 2002

A new class of QoS routing strategies based on network graph reduction

Claudio Ettore Casetti; R. Lo Cigno; Marco Mellia; Maurizio Matteo Munafo

This paper discusses a new approach to QoS routing, introducing the notion of algorithm resilience (i.e., its capability to adapt to network and load modifications) as the performance index of the algorithm itself, for a given network topology, load and traffic pattern. The new approach can be summarized as network graph reduction, i.e., a modification of the graph describing the network before the routing path is computed, in order to exclude from the path selection over-congested portions of the network. This solution leads to a class of two-step routing algorithms, where both steps are simple, hence allowing efficient implementation. Simulation experiments, run on randomly-generated topologies and traffic patterns, show that these routing algorithms outperform both the standard minimum hop algorithm and those QoS-based algorithms based on the same metrics but not using the notion of network graph reduction.
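The two-step structure described in the abstract can be sketched as follows: first prune the network graph, then run plain minimum-hop routing on what remains. This is a minimal sketch, not the paper's code; the residual-bandwidth threshold, the function names, and the toy topology are all assumptions for illustration.

```python
from collections import deque

def reduce_graph(links, min_residual):
    # Step 1: network graph reduction. Drop links whose residual
    # bandwidth falls below a threshold, excluding over-congested
    # portions of the network from path selection.
    return {(u, v) for (u, v), residual in links.items()
            if residual >= min_residual}

def min_hop_path(edges, src, dst):
    # Step 2: plain minimum-hop routing (BFS) on the reduced graph.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    prev, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for w in adj.get(u, []):
            if w not in prev:
                prev[w] = u
                frontier.append(w)
    return None  # dst unreachable in the reduced graph

# Hypothetical topology: residual bandwidth per link. Pruning the
# congested B-C link forces the min-hop path to detour through D.
edges = reduce_graph({("A", "B"): 10, ("B", "C"): 1,
                      ("A", "D"): 8, ("D", "C"): 9}, min_residual=5)
```

Both steps are simple on their own, which is the point of the paper's two-step class: the routing algorithm itself stays cheap while the reduction adapts it to current load.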


Performance Evaluation | 1995

ATM simulation with CLASS

M. Ajmone Marsan; Andrea Bianco; Tien Van Do; L. Jereb; R. Lo Cigno; Maurizio Matteo Munafo

The paper describes an efficient, versatile and extensible software tool for the analysis of the quality of connectionless services in ATM networks. The tool is named CLASS for ConnectionLess ATM Services Simulator. CLASS is a time-driven, slotted, synchronous simulator, entirely written in standard C language. CLASS allows the performance analysis of ATM networks adopting the viewpoint of both the end-user, and the network manager; the measured performance parameters include the cell and message loss probabilities and the cell and message delay jitters. The investigation of the impact of shaping and policing techniques, and of the use of connectionless servers on the network performance is also supported. With CLASS, the network synthetic workload can be modeled choosing from a variety of traffic generators ranging from simple Poisson traffic sources to sources modelling the traffic produced when higher level protocols, like TCP, access the ATM services.
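The "time-driven, slotted, synchronous" simulation style can be illustrated with a toy single-multiplexer model. CLASS itself is written in C and far richer; the Python sketch below, with hypothetical parameters and Bernoulli per-slot arrivals standing in for Poisson sources, only shows the core loop: in each slot every source may emit one cell, the output link serves one cell, and cells arriving at a full buffer are counted as lost.

```python
import random

def slotted_sim(n_slots, p_arrival, n_sources, buffer_size, seed=0):
    # Time-driven, slotted, synchronous simulation of one ATM multiplexer.
    rng = random.Random(seed)
    queue = 0              # cells waiting in the output buffer
    arrivals = losses = 0
    for _ in range(n_slots):
        # Each source independently emits one cell per slot with
        # probability p_arrival (Bernoulli stand-in for Poisson traffic).
        for _ in range(n_sources):
            if rng.random() < p_arrival:
                arrivals += 1
                if queue < buffer_size:
                    queue += 1
                else:
                    losses += 1  # buffer full: cell lost
        if queue:
            queue -= 1           # the link transmits one cell per slot
    return arrivals, losses

# Offered load = 4 * 0.3 = 1.2 cells/slot against a 1 cell/slot link,
# so in the long run roughly a sixth of the cells are lost.
arrivals, losses = slotted_sim(100_000, 0.3, 4, buffer_size=16)
```

The cell loss probability is then simply `losses / arrivals`; message-level metrics, shapers, and policers would be layered on the same per-slot loop.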


International Conference on Computer Communications | 2013

Exploring the cloud from passive measurements: The Amazon AWS case

Ignacio Bermudez; Stefano Traverso; Marco Mellia; Maurizio Matteo Munafo

This paper presents a characterization of Amazon's Web Services (AWS), the most prominent cloud provider that offers computing, storage, and content delivery platforms. Leveraging passive measurements, we explore the EC2, S3 and CloudFront AWS services to unveil their infrastructure, the pervasiveness of content they host, and their traffic allocation policies. Measurements reveal that most of the content residing on EC2 and S3 is served by one Amazon datacenter, located in Virginia, which appears to be the worst performing one for Italian users. This causes traffic to take long and expensive paths in the network. Since no automatic migration and load-balancing policies are offered by AWS among different locations, content is exposed to the risks of outages. The CloudFront CDN, on the contrary, shows much better performance thanks to the effective cache selection policy that serves 98% of the traffic from the nearest available cache. CloudFront also exhibits dynamic load-balancing policies, in contrast to the static allocation of instances on EC2 and S3. Information presented in this paper will be useful for developers aiming at entrusting AWS to deploy their contents, and for researchers willing to improve cloud design.

Collaboration


Dive into Maurizio Matteo Munafo's collaborations.

Top Co-Authors

Andrea Fumagalli

University of Texas at Dallas


Z. Zsoka

Budapest University of Technology and Economics
