Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Antonio Augusto de Aragão Rocha is active.

Publication


Featured research published by Antonio Augusto de Aragão Rocha.


Performance Evaluation | 2010

Estimating self-sustainability in peer-to-peer swarming systems

Daniel Sadoc Menasché; Antonio Augusto de Aragão Rocha; Edmundo de Souza e Silva; Rosa Maria Meri Leão; Donald F. Towsley; Arun Venkataramani

Peer-to-peer swarming is one of the de facto solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on, and costs to, publishers. However, there is a limit to how much cost savings can be gained from swarming; for example, for unpopular content, peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate this dependence. For this purpose, we propose a new metric, namely swarm self-sustainability. A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer the posed question. We then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulation. Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers.
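
The metric lends itself to a quick numerical illustration. The sketch below is a toy Monte Carlo estimate of self-sustainability (the fraction of time during which all blocks are collectively held by peers) under assumed Poisson arrivals, exponential residence times, and a hypothetical "each peer holds k random blocks" rule; it is not the model or solution method proposed in the paper.

```python
import random

# Toy Monte Carlo estimate of swarm self-sustainability: the fraction of
# time in which every block of the file is held by at least one peer.
# This is NOT the paper's model; the arrival rate, residence time, and the
# "each peer holds k random blocks" assumption are illustrative only.

def estimate_self_sustainability(lam=2.0, mu=1.0, blocks=100, k=60,
                                 horizon=10_000.0, seed=42):
    rng = random.Random(seed)
    t = 0.0
    peers = []           # each peer is (departure_time, frozenset_of_blocks)
    covered_time = 0.0
    while t < horizon:
        dt = rng.expovariate(lam)    # time until the next peer arrival
        # credit the interval [t, t+dt) as self-sustaining if the peers
        # currently online collectively hold every block (peers that depart
        # inside the interval are ignored for simplicity)
        peers = [p for p in peers if p[0] > t]
        union = set().union(*(p[1] for p in peers)) if peers else set()
        if len(union) == blocks:
            covered_time += dt
        t += dt
        # new peer: exponential residence time, holds k blocks chosen at random
        held = frozenset(rng.sample(range(blocks), k))
        peers.append((t + rng.expovariate(mu), held))
    return covered_time / horizon

if __name__ == "__main__":
    print("estimated self-sustainability:", estimate_self_sustainability())
```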


performance evaluation methodologies and tools | 2006

Modeling, analysis, measurement and experimentation with the Tangram-II integrated environment

Edmundo de Souza e Silva; Ana Paula Couto da Silva; Antonio Augusto de Aragão Rocha; Rosa Maria Meri Leão; Flávio P. Duarte; Fernando J. S. Filho; G.D.G. Jaime; Richard R. Muntz

A large number of performance evaluation tools have been developed over the years to support the analyst in the difficult task of model building. As systems increase in complexity, there is a critical need for tools that can help the user throughout the whole modeling cycle, from model building to model solution and experimentation. In this work we describe the main features of the TANGRAM-II modeling environment. The tool has a powerful and flexible model interface and unique algorithms for the numerical solution of models; it includes event-driven and fluid simulators that provide a variety of facilities useful for obtaining the measures of interest, and it has a traffic engineering environment integrated with the other tool modules.


measurement and modeling of computer systems | 2011

Implications of peer selection strategies by publishers on the performance of P2P swarming systems

Daniel Sadoc Menasché; Antonio Augusto de Aragão Rocha; Edmundo de Souza e Silva; Donald F. Towsley; Rosa Maria Meri Leão

In peer-to-peer swarming systems, as peers join a swarm to download content, they bring resources such as bandwidth and memory to the system. That way, the capacity of the system increases with the arrival rate of peers. Furthermore, if publishers are intermittent, increasing the arrival rate of peers can increase content availability [7]. In the presence of stable publishers that have enough service capacity for peers to smoothly complete their download [6], increasing the arrival rate of peers decreases the probability that a piece will be unavailable among peers. However, if the capacity of the stable publisher, U pieces/second, is not large enough, it has been shown that the system might be unstable [3, 5, 14]. Hajek and Zhou [3, 14], following up work by Mathieu and Reynier [5], have shown that if the arrival rate of peers, λ, is greater than U, the number of peers increases unboundedly with time. It has also been shown that simple strategies can alleviate, and in some cases resolve, the instability problem. For instance, if peers reside in the system after completing their downloads for, on average, the same time that they take to download a piece, then the system is always stable [14]. Nevertheless, as peers have no incentive to stay in the system after completing their downloads, it is important to investigate whether other simple strategies that do not depend on providing incentives for peers to remain online after download completion can improve system performance and stability.

In a peer-to-peer system, each peer has to make two decisions before transmitting each piece: 1) which piece to transmit and 2) to whom to transmit it. Although the former question has received some attention in previous works (for instance, it has been shown that rarest-first piece selection and random useful piece selection yield the same stability region [3]), to the best of our knowledge the implications of the peer selection strategy have not yet been discussed (previous works assumed random peer selection [3, 9, 14], a notable exception being [5]; see the related work section).

Let the throughput be the rate at which peers leave the system. The goal of this paper is to evaluate the impact of different peer selection strategies on the throughput (hence, stability) of the system. We pose the following questions: a) how can the throughput of the system be increased by letting peers strategically select their neighbors? b) how does throughput scale with the number of peers in a closed peer-to-peer swarming system?

We provide the following answers to the above questions. First, we derive an upper bound on the throughput when the stable publisher adopts most-deprived peer selection [1] and rarest-first piece selection, while peers adopt random peer selection and random useful piece selection. The bound is significantly larger than the maximum attainable throughput when both peers and publishers adopt random peer and random useful piece selection. Then, we consider a closed system and use a simple Markov chain model to study how the throughput of the system scales with the number of peers.
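
The instability condition quoted above (peer arrival rate λ exceeding publisher capacity U) can be caricatured with a toy single-bottleneck queue. In the sketch below, each peer needs one bottleneck piece served only by the publisher, so the peer population behaves like an M/M/1 queue; the rates and the bottleneck assumption are illustrative, and this is not the model analyzed in the paper.

```python
import random

# Toy caricature of the cited instability result: if each arriving peer must
# receive one bottleneck piece from a publisher of capacity U pieces/second,
# the peer population evolves like an M/M/1 queue and grows without bound
# when the arrival rate lam exceeds U.

def peers_in_system(lam, U, horizon=50_000.0, seed=1):
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < horizon:
        arrival_rate = lam
        service_rate = U if n > 0 else 0.0
        total = arrival_rate + service_rate
        t += rng.expovariate(total)
        if rng.random() < arrival_rate / total:
            n += 1      # a new peer joins the swarm
        else:
            n -= 1      # a peer obtained the bottleneck piece and leaves
    return n

if __name__ == "__main__":
    print("lam=0.8, U=1.0 -> peers at end:", peers_in_system(0.8, 1.0))  # stays small
    print("lam=1.2, U=1.0 -> peers at end:", peers_in_system(1.2, 1.0))  # grows roughly as (lam-U)*horizon
```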


quantitative evaluation of systems | 2013

On the interplay between content popularity and performance in p2p systems

Edmundo de Souza e Silva; Rosa Maria Meri Leão; Daniel Sadoc Menasché; Antonio Augusto de Aragão Rocha

Peer-to-peer swarming, as used by BitTorrent, is one of the de facto solutions for content dissemination in today's Internet. By leveraging resources provided by users, peer-to-peer swarming is a simple and efficient mechanism for content distribution. In this paper we survey recent work on peer-to-peer swarming, relating content popularity to three performance metrics of such systems: fairness, content availability/self-sustainability, and scalability.


IEEE/ACM Transactions on Networking | 2013

Content availability and bundling in swarming systems

Daniel Sadoc Menasché; Antonio Augusto de Aragão Rocha; Bin Li; Donald F. Towsley; Arun Venkataramani

BitTorrent, the immensely popular file swarming system, suffers from a fundamental problem: content unavailability. Although swarming scales well to tolerate flash crowds for popular content, it is less useful for unpopular content, as peers arriving after the initial rush find it unavailable. In this paper, we present a model to quantify content availability in swarming systems. We use the model to analyze the availability and the performance implications of bundling, a strategy commonly adopted by many BitTorrent publishers today. We find that even a limited amount of bundling exponentially reduces content unavailability. For swarms with highly unavailable publishers, the availability gain of bundling can result in a net decrease in average download time. We empirically confirm the model's conclusions through experiments on PlanetLab using the Mainline BitTorrent client.
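
The exponential reduction in unavailability can be sketched with a standard busy-period argument. The derivation below abstracts a swarm as an M/G/∞ system; it is a back-of-the-envelope illustration in the spirit of the result, with λ, s, and K used as illustrative symbols rather than the paper's exact notation.

```latex
% Illustrative busy-period sketch (not reproduced from the paper).
% Abstract a swarm as an M/G/$\infty$ system: peers arrive at rate $\lambda$
% and each stays roughly one download time $s$, so the content is available
% among peers exactly during busy periods.
\[
  \mathbb{E}[B] = \frac{e^{\lambda s} - 1}{\lambda},
  \qquad
  \Pr[\text{no peer online}] = \frac{1/\lambda}{1/\lambda + \mathbb{E}[B]} = e^{-\lambda s}.
\]
% Bundling $K$ files scales the arrival rate to $K\lambda$ and the per-peer
% residence time to $Ks$, so the unavailable fraction becomes
\[
  \Pr[\text{no peer online} \mid \text{bundle of } K] = e^{-K^{2}\lambda s},
\]
% i.e., unavailability decays exponentially in the bundle size, consistent
% with the abstract's claim.
```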


communication systems and networks | 2014

Pros & cons of model-based bandwidth control for client-assisted content delivery

Abhigyan Sharma; Arun Venkataramani; Antonio Augusto de Aragão Rocha

A key challenge in client-assisted content delivery is determining how to allocate limited server bandwidth across a large number of files being concurrently served so as to optimize global performance and cost objectives. In this paper, we present a comprehensive experimental evaluation of strategies to control server bandwidth allocation. As part of this effort, we introduce a new model-based control approach that relies on an accurate yet concise “cheat sheet” based on a priori offline measurement to predict swarm performance as a function of the server bandwidth and other swarm parameters. Our evaluation using a prototype system, SwarmServer, instantiating static, dynamic, and model-based controllers shows that static and dynamic controllers can both be suboptimal due to different reasons. In comparison, a model-based approach consistently outperforms both static and dynamic approaches provided it has access to detailed measurements in the regime of interest. Nevertheless, the broad applicability of a model-based approach may be limited in practice because of the overhead of developing and maintaining a comprehensive measurement-based model of swarm performance in each regime of interest.
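
The "cheat sheet" idea can be illustrated with a small sketch: an offline-measured table maps server bandwidth to predicted swarm performance, and a controller splits a bandwidth budget greedily across swarms. All names (cheat_sheet, predicted_time, allocate), the toy table values, and the greedy rule are hypothetical; this is not SwarmServer's actual interface or control law.

```python
from bisect import bisect_right

# Hypothetical sketch of a model-based bandwidth controller: an offline
# "cheat sheet" predicts mean download time as a function of the server
# bandwidth given to a swarm, and a greedy allocator spends a bandwidth
# budget where the predicted improvement is largest.

# cheat_sheet[swarm_id] = sorted list of (server_kbps, predicted_mean_download_s)
cheat_sheet = {
    "swarm-A": [(0, 900.0), (100, 400.0), (200, 260.0), (300, 210.0)],
    "swarm-B": [(0, 300.0), (100, 220.0), (200, 190.0), (300, 180.0)],
}

def predicted_time(swarm, kbps):
    """Predicted mean download time at the largest measured bandwidth not
    exceeding kbps (a crude stand-in for interpolation)."""
    points = cheat_sheet[swarm]
    idx = bisect_right([b for b, _ in points], kbps) - 1
    return points[max(idx, 0)][1]

def allocate(total_kbps, step=100):
    """Greedily hand out bandwidth in `step` increments to whichever swarm's
    predicted download time improves the most from the extra bandwidth."""
    alloc = {s: 0 for s in cheat_sheet}
    for _ in range(total_kbps // step):
        def gain(s):
            return predicted_time(s, alloc[s]) - predicted_time(s, alloc[s] + step)
        best = max(alloc, key=gain)
        alloc[best] += step
    return alloc

if __name__ == "__main__":
    print(allocate(300))   # {'swarm-A': 200, 'swarm-B': 100} for this toy table
```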


Computer Networks | 2012

Heterogeneous download times in a homogeneous BitTorrent swarm

Fabricio Murai; Antonio Augusto de Aragão Rocha; Daniel R. Figueiredo; Edmundo de Souza e Silva

Modeling and understanding BitTorrent (BT) dynamics is a recurrent research topic, mainly due to its high complexity and tremendous practical efficiency. Over the years, different models have uncovered various phenomena exhibited by the system, many of which have a direct impact on its performance. In this paper we identify and characterize a phenomenon that has not been previously observed: homogeneous peers (with respect to their upload capacities) experience heterogeneous download times. This behavior has a direct impact on peer and system performance, such as high variability of download times, unfairness with respect to peer arrival order, bursty departures, and content synchronization. Detailed packet-level simulations and prototype-based experiments on the Internet were performed to characterize this phenomenon. We also develop a mathematical model that accurately predicts the heterogeneous download rates of the homogeneous peers as a function of their content. In addition, we apply the model to calculate lower and upper bounds on the number of departures that occur in a burst. The heterogeneous download rates are more prevalent in unpopular swarms (those with very few peers). Although few works have addressed this kind of swarm, such swarms by far represent the most common type in BT.


measurement and modeling of computer systems | 2009

Modeling chunk availability in P2P swarming systems

Daniel Sadoc Menasché; Antonio Augusto de Aragão Rocha; Edmundo de Souza e Silva; Rosa Maria Meri Leão; Donald F. Towsley; Arun Venkataramani

Peer-to-peer swarming systems à la BitTorrent are usually deployed for the dissemination of popular content. Popular content naturally gets highly replicated in the network, and capacity scales with demand, ensuring high performance for peers requesting popular content. Nevertheless, the behavior of swarming systems in the face of unpopular content and small populations of users also deserves attention. First, it is important to understand the popularity threshold above which the use of swarming systems is most beneficial to a publisher. The second reason is economic. With the monetization of BitTorrent clients such as Vuze (previously known as Azureus), and surveys showing a huge demand for legal P2P content [2], publishers need to identify how to best allocate their resources across multiple swarms. For that purpose, it is imperative to identify whether or not a swarm is self-sustaining. This is particularly evident in a market where enterprises that can make “everything available, with small costs” thrive [1]. Third, models focusing on small user populations may provide insight into when and whether coding, bundling [6], or other techniques can help make unpopular swarms last longer without the support of a publisher.

For large populations, Massoulie and Vojnovic [5] used a coupon collector model to show that rarest-first guarantees an almost uniform distribution of chunks across the population, leading to a robust system. Fan et al. [4] considered the large population regime and used fluid approximations and stochastic differential equations to model the dynamics of a population of users. Qiu and Srikant [8] also considered large populations and concluded that the efficiency of the system is always high. For small populations, there are Markov Chain (MC) models [10] that provide insights into the performance of the system but that do not consider the problem of chunk availability. In comparison, the goal of this paper is to analyze, especially for a small population of users, how chunk availability varies as a function of different system parameters such as the arrival rate of peers and the download capacity.
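
The coupon-collector abstraction cited above has a simple closed form: collecting all n distinct chunks by uniform random draws takes n·H_n draws in expectation, where H_n is the n-th harmonic number. The sketch below checks that formula by simulation; it illustrates the large-population argument referenced in the abstract, not this paper's Markov chain model.

```python
import random

# Coupon-collector expectation: obtaining all n distinct chunks by uniform
# random draws takes n * H_n draws on average. This illustrates the
# large-population argument cited in the abstract, not the paper's own
# small-population model.

def expected_draws(n):
    return n * sum(1.0 / i for i in range(1, n + 1))

def simulated_draws(n, trials=2000, seed=7):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(rng.randrange(n))
            draws += 1
        total += draws
    return total / trials

if __name__ == "__main__":
    n = 50   # number of distinct chunks (illustrative)
    print("analytical:", round(expected_draws(n), 1),
          "simulated:", round(simulated_draws(n), 1))
```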


local computer networks | 2014

On the possibility of mitigating content pollution in Content-Centric Networking

Igor G. Ribeiro; Antonio Augusto de Aragão Rocha; Célio Vinicius N. de Albuquerque; Flavio Guimaraes

Content-Centric Networking is an architecture proposal for the future Internet that brings fundamental changes to the way the network operates. Contents are identified and requested based on their names, and for security reasons they must be digitally signed by their publishers. Even though this new architecture was designed to be secure, one potential security threat is that malicious publishers may create polluted versions of legitimate contents, reducing their availability and degrading network resources. Because of the non-negligible overhead of checking a large number of signatures, it is not feasible to make this a mandatory task for every router, especially in the network core. In this paper, we propose CCNCheck: a mechanism in which CCN routers probabilistically check content signatures. We evaluate the mechanism through simulations and find evidence that using CCNCheck increases the fraction of recovered contents and decreases the waste of network resources.
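
The probabilistic-checking idea admits a short sketch: if each of h routers on the delivery path verifies a signature independently with probability p, a polluted object is caught with probability 1 - (1 - p)^h. The simulation below is purely illustrative (the router model, the verification stub, and the parameters are hypothetical) and is not the CCNCheck implementation evaluated in the paper.

```python
import random

# Illustrative model of probabilistic signature checking: each router on the
# path verifies the signature with probability p, so a polluted content
# object crossing h routers escapes detection with probability (1 - p) ** h.

def forwarded_undetected(is_polluted, hops, p, rng):
    """Return True if the object traverses `hops` routers without being
    dropped by a router that verified an invalid signature."""
    for _ in range(hops):
        if rng.random() < p:       # this router decides to verify
            if is_polluted:        # invalid signature -> drop the object
                return False
    return True

def detection_rate(hops=5, p=0.2, trials=100_000, seed=3):
    rng = random.Random(seed)
    caught = sum(not forwarded_undetected(True, hops, p, rng) for _ in range(trials))
    return caught / trials

if __name__ == "__main__":
    print("simulated detection rate:", round(detection_rate(), 3))
    print("analytical 1-(1-p)^h    :", round(1 - (1 - 0.2) ** 5, 3))
```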


next generation internet | 2007

A non-cooperative active measurement technique for estimating the average and variance of the one-way delay

Antonio Augusto de Aragão Rocha; Rosa Maria Meri Leão; Edmundo de Souza e Silva

Active measurements are a useful tool for obtaining a variety of Internet metrics. One-way metrics, in general, require the execution of processes at the remote machine and/or machines with synchronized clocks. This work proposes a new algorithm to estimate the first two moments of the one-way delay random variable without the need to access the target machine or to have the machine clocks synchronized. The technique uses the IPID field information and can easily be implemented using ICMP Echo request and reply messages.
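
A minimal sketch of the measurement primitive the technique builds on is shown below: probing a host with ICMP Echo requests via scapy and recording the IPID of each reply together with local timestamps. The one-way delay estimator itself, which operates on such IPID samples, follows the paper's algorithm and is not reproduced here; the target address and probing interval are placeholders.

```python
import time
from scapy.all import IP, ICMP, sr1  # sending raw packets requires root privileges

# Collect IPID samples from ICMP Echo replies; the paper's estimator for the
# mean and variance of the one-way delay would consume samples like these.
# The target address and probing interval are placeholders.

def collect_ipid_samples(target="192.0.2.1", probes=20, interval=0.2):
    samples = []
    for _ in range(probes):
        sent_at = time.time()
        reply = sr1(IP(dst=target) / ICMP(), timeout=1, verbose=False)
        if reply is not None:
            samples.append({
                "sent_at": sent_at,
                "received_at": time.time(),
                "ipid": reply[IP].id,           # IPID counter value at the remote host
                "rtt": time.time() - sent_at,
            })
        time.sleep(interval)
    return samples

if __name__ == "__main__":
    for sample in collect_ipid_samples():
        print(sample)
```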

Collaboration


Dive into Antonio Augusto de Aragão Rocha's collaborations.

Top Co-Authors

Daniel Sadoc Menasché, Federal University of Rio de Janeiro
Edmundo de Souza e Silva, Federal University of Rio de Janeiro
Donald F. Towsley, University of Massachusetts Amherst
Rosa Maria Meri Leão, Federal University of Rio de Janeiro
Arun Venkataramani, University of Massachusetts Amherst
Igor G. Ribeiro, Federal Fluminense University
Pedro B. Velloso, Pierre-and-Marie-Curie University