
Publication


Featured research published by Rodrigo S. Couto.


Annales des Télécommunications | 2011

Virtual networks: isolation, performance, and trends

Natalia Castro Fernandes; Marcelo D. D. Moreira; Igor M. Moraes; Lyno Henrique G. Ferraz; Rodrigo S. Couto; Hugo E. T. Carvalho; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa; Otto Carlos Muniz Bandeira Duarte

Currently, the research community is making a strong effort to rethink the Internet architecture in order to cope with its current limitations and support new requirements. Many researchers conclude that there is no one-size-fits-all solution for all user and network provider needs and thus advocate a pluralist network architecture, which allows different protocol stacks to run at the same time over the same physical substrate. In this paper, we investigate the advantages and limitations of virtualization technologies for creating a pluralist environment for the Future Internet. We analyze two types of virtualization techniques: running multiple operating systems on the same hardware, represented by Xen, and multiplexing multiple network flows on the same switch, represented by OpenFlow. First, we define the functionalities needed by a Future Internet virtual network architecture and how Xen and OpenFlow provide them. We then analyze Xen and OpenFlow in terms of network programmability, processing, forwarding, control, and scalability. Finally, we carry out experiments with Xen and OpenFlow network prototypes, identifying the overhead incurred by each virtualization tool by comparing it with native Linux. Our experiments show that the OpenFlow switch forwards packets as well as native Linux, achieving similarly high forwarding rates. On the other hand, we observe that the high complexity involved in Xen virtual machine packet forwarding limits the achievable packet rates. There is a clear trade-off between flexibility and performance, but we conclude that both Xen and OpenFlow are suitable platforms for network virtualization.
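
As a rough illustration of the flow-based slicing that OpenFlow provides (a minimal sketch of the abstraction only, not the prototypes or measurements described above; the header fields, port numbers, and actions are made up), a switch can be viewed as a table that maps header fields to forwarding actions, so that packets of different virtual networks never share an entry:

```python
# Minimal sketch of the flow-table abstraction behind OpenFlow-style
# slicing: each virtual network is a set of flow entries that map
# header fields to a forwarding action. Field names and actions are
# simplified assumptions, not actual OpenFlow messages.

flow_table = [
    ({"vlan": 10, "ip_dst": "10.0.1.7"}, "output:2"),  # slice A
    ({"vlan": 20, "ip_dst": "10.0.2.7"}, "output:3"),  # slice B
]

def forward(packet):
    """Return the action of the first entry whose fields all match the packet."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"  # table miss: no slice owns this flow

print(forward({"vlan": 10, "ip_dst": "10.0.1.7", "ip_src": "10.0.0.1"}))  # output:2
print(forward({"vlan": 30, "ip_dst": "10.0.3.7"}))                        # drop
```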


IEEE Communications Magazine | 2014

Network design requirements for disaster resilience in IaaS clouds

Rodrigo S. Couto; Stefano Secci; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

Many corporations rely on disaster recovery schemes to keep their computing and network services running after unexpected situations, such as natural disasters and attacks. As corporations migrate their infrastructure to the cloud using the infrastructure as a service model, cloud providers need to offer disaster-resilient services. This article provides guidelines to design a data center network infrastructure to support a disaster-resilient infrastructure as a service cloud. These guidelines describe design requirements, such as the time to recover from disasters, and allow the identification of important domains that deserve further research efforts, such as the choice of data center site locations and disaster-resilient virtual machine placement.


IEEE Global Communications Conference (GLOBECOM) | 2012

A reliability analysis of datacenter topologies

Rodrigo S. Couto; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

The network infrastructure plays an important role in datacenter applications. Therefore, datacenter network architectures are designed with three main goals: bandwidth, latency, and reliability. This work focuses on the last goal and provides a comparative analysis of the topologies of prevalent datacenter architectures. These architectures perform packet forwarding either with a network based only on switches or with a hybrid scheme of servers and switches. We analyze failures of the main networking elements (link, server, and switch) to evaluate the tradeoffs of the different datacenter topologies. Considering only the network topology, our analysis provides a baseline study for the choice or design of a datacenter network with regard to reliability. Our results show that, as the number of failures increases, the considered hybrid topologies can substantially increase the path length, whereas servers on the switch-only topology tend to disconnect more quickly from the main network.
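
As a minimal sketch of this kind of failure analysis (not the authors' methodology; the toy two-level topology, the uniform random switch failures, and the use of networkx are assumptions made here for illustration), one can repeatedly remove network elements from a graph model and track how many servers remain mutually connected:

```python
# Sketch of a topology reliability experiment in the spirit of the
# analysis above (not the authors' code): remove random switches from
# a small two-level switch topology and track how many servers stay
# connected to each other.
import random
import networkx as nx

def toy_topology(num_edge=4, servers_per_edge=2):
    """Tiny topology: one core switch, several edge switches, servers below."""
    g = nx.Graph()
    g.add_node("core")
    for e in range(num_edge):
        edge = f"edge{e}"
        g.add_edge("core", edge)
        for s in range(servers_per_edge):
            g.add_edge(edge, f"srv{e}_{s}")
    return g

def connected_servers(g):
    """Largest number of servers that can still reach each other."""
    return max(sum(1 for n in comp if n.startswith("srv"))
               for comp in nx.connected_components(g))

random.seed(0)
g = toy_topology()
switches = [n for n in g if not n.startswith("srv")]
for failed in range(1, len(switches) + 1):
    trials = []
    for _ in range(200):
        h = g.copy()
        h.remove_nodes_from(random.sample(switches, failed))
        trials.append(connected_servers(h))
    print(f"{failed} switch failure(s): avg connected servers = {sum(trials)/len(trials):.1f}")
```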


International Conference on the Network of the Future | 2011

VNEXT: Virtual network management for Xen-based Testbeds

Pedro Silveira Pisa; Rodrigo S. Couto; Hugo E. T. Carvalho; Daniel J. S. Neto; Natalia Castro Fernandes; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa; Otto Carlos Muniz Bandeira Duarte; Guy Pujolle

Network testbeds strongly rely on virtualization, which allows the simultaneous execution of multiple protocol stacks but also increases the management and control burden. This paper presents a system to control and manage virtual networks based on the Xen platform. The goal of the proposed system is to assist network administrators in decision making in this challenging virtualized environment. The system's management and control tasks consist of defining virtual networks; turning virtual routers on and off and migrating them; and monitoring the virtual networks, all within a few mouse clicks thanks to a user-friendly graphical interface. The administrator can also make high-level decisions, such as redefining the virtual network topology by using the plane-separation and loss-free live migration functionality, or saving energy by shutting down physical routers. Our performance tests show that the system has a low response time; for instance, it takes less than 3 minutes to create a 4-node virtual network.
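
VNEXT itself is a graphical management system; as a rough sketch of the Xen toolstack operations that such management tasks ultimately map to (assuming the standard xl command-line toolstack; the configuration path, domain name, and host name below are placeholders, not part of VNEXT):

```python
# Rough sketch of the Xen toolstack operations that virtual-network
# management systems such as VNEXT orchestrate. This is not VNEXT's
# code; it assumes the standard `xl` toolstack on a Xen host, and the
# config path, domain name, and host name are placeholders.
import subprocess

def xl(*args):
    """Run an xl subcommand and fail loudly if it returns an error."""
    subprocess.run(["xl", *args], check=True)

def create_virtual_router(config_path):
    xl("create", config_path)            # boot a virtual router domain

def shutdown_virtual_router(domain):
    xl("shutdown", domain)               # graceful shutdown of the domain

def migrate_virtual_router(domain, target_host):
    xl("migrate", domain, target_host)   # live-migrate to another physical node

if __name__ == "__main__":              # requires a Xen host with xl installed
    create_virtual_router("/etc/xen/vrouter1.cfg")
    migrate_virtual_router("vrouter1", "physical-node-2")
    shutdown_virtual_router("vrouter1")
```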


Journal of Network and Systems Management | 2016

Reliability and Survivability Analysis of Data Center Network Topologies

Rodrigo S. Couto; Stefano Secci; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

Several data center architectures have been proposed as alternatives to the conventional three-layer one. Most of them employ commodity equipment for cost reduction. Thus, robustness to failures becomes even more important, because commodity equipment is more failure-prone. Each architecture has a different network topology design with a specific level of redundancy. In this work, we aim at analyzing the benefits of different data center topologies taking the reliability and survivability requirements into account. We consider the topologies of three alternative data center architectures: Fat-tree, BCube, and DCell. Also, we compare these topologies with a conventional three-layer data center topology. Our analysis is independent of specific equipment, traffic patterns, or network protocols, for the sake of generality. We derive closed-form formulas for the Mean Time To Failure of each topology. The results allow us to indicate the best topology for each failure scenario. In particular, we conclude that BCube is more robust to link failures than the other topologies, whereas DCell has the most robust topology when considering switch failures. Additionally, we show that all considered alternative topologies outperform a three-layer topology for both types of failures. We also determine to which extent the robustness of BCube and DCell is influenced by the number of network interfaces per server.
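
The topology-specific closed-form expressions are derived in the paper; purely as a generic illustration of the reliability building blocks such derivations typically rest on (assuming independent, exponentially distributed element lifetimes with rate lam, an assumption of this sketch and not a claim about the paper's model):

```python
# Generic MTTF building blocks for independent elements with
# exponentially distributed lifetimes of rate lam (failures per hour).
# These are textbook identities used as an illustration; they are not
# the topology-specific formulas derived in the paper.

def mttf_series(k, lam):
    """System fails when ANY of its k elements fails (no redundancy)."""
    return 1.0 / (k * lam)

def mttf_parallel(k, lam):
    """System fails only when ALL k redundant elements have failed."""
    return sum(1.0 / i for i in range(1, k + 1)) / lam

lam = 1e-5  # assumed failure rate per element, in failures per hour
print(f"4 links in series:  MTTF = {mttf_series(4, lam):,.0f} h")
print(f"4 redundant links:  MTTF = {mttf_parallel(4, lam):,.0f} h")
```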


Computer Networks | 2015

Server placement with shared backups for disaster-resilient clouds

Rodrigo S. Couto; Stefano Secci; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

A key strategy to build disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines to different servers is a service provided by several hypervisors. This strategy guarantees that the virtual machines will lose no disk or memory content if a disaster occurs, at the cost of strict bandwidth and latency requirements. Considering this kind of service, in this work, we propose an optimization problem to place servers in a wide area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the number of backup servers required. The optimal results, achieved in real topologies, reduce the number of backup servers by at least 40%. Moreover, this work highlights several characteristics of the backup service according to the employed network, such as the fulfillment of latency requirements.
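
As a minimal sketch of how backup sharing can reduce the server count (a deliberately simplified model with made-up data, not the paper's formulation: it assumes at most one site fails at a time and that each backup server hosts a single recovered machine), a tiny integer program with PuLP could look as follows:

```python
# Simplified shared-backup placement model (an assumption of this
# sketch, not the paper's formulation): at most one site fails at a
# time, each backup server can host one recovered primary, and backups
# placed at other sites can be shared across primary sites.
import pulp

primaries = {"A": 4, "B": 3, "C": 2}          # primary servers per site (toy data)
sites = list(primaries)

y = {s: pulp.LpVariable(f"backups_{s}", lowBound=0, cat="Integer") for s in sites}
prob = pulp.LpProblem("shared_backup_placement", pulp.LpMinimize)
prob += pulp.lpSum(y[s] for s in sites)        # minimize total backup servers

# If site t fails, capacity at the remaining sites must cover its primaries.
for t in sites:
    prob += pulp.lpSum(y[s] for s in sites if s != t) >= primaries[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sites:
    print(s, int(y[s].value()))
print("total backups:", int(pulp.value(prob.objective)))
```

In this toy instance, dedicated per-site backups would require 9 servers, while the shared solution needs only 5.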


IEEE Global Communications Conference (GLOBECOM) | 2011

XTC: A Throughput Control Mechanism for Xen-Based Virtualized Software Routers

Rodrigo S. Couto; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

Xen is a hardware virtualization tool often used to build virtual routers. Xen, however, does not ensure the fundamental requirement of network isolation among these routers. This work proposes XTC (Xen Throughput Control) to fill this gap and, therefore, to guarantee that multiple networks can coexist without interference. XTC sets the amount of CPU allocated to each virtual router according to the maximum throughput allowed. Xen behavior is modeled using experimental data, and based on these data, XTC is designed using feedback control. Results obtained in a testbed demonstrate XTC's ability to isolate virtual network capacities and to adapt to system changes.
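
As a minimal sketch of the feedback-control idea (the linear throughput model, the controller gain, and the numbers below are made-up stand-ins; XTC's actual model and controller are derived from the experimental data mentioned above):

```python
# Sketch of the feedback-control idea behind XTC: periodically adjust
# the CPU cap of a virtual router so its measured throughput tracks a
# target. The linear "plant" below is a made-up stand-in for the Xen
# behavior that XTC models from experimental data.

def measured_throughput(cpu_cap_percent):
    # Assumed plant: ~10 Mb/s of forwarding capacity per 1% of CPU cap.
    return 10.0 * cpu_cap_percent

def control_step(cap, target_mbps, ki=0.02):
    """Integral controller: nudge the cap in proportion to the throughput error."""
    error = target_mbps - measured_throughput(cap)
    cap += ki * error
    return max(1.0, min(100.0, cap))   # CPU caps are bounded percentages

cap = 50.0                 # initial CPU cap (%)
target = 300.0             # throughput limit for this virtual router (Mb/s)
for step in range(10):
    cap = control_step(cap, target)
    print(f"step {step}: cap={cap:5.1f}%  throughput={measured_throughput(cap):6.1f} Mb/s")
```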


IEEE Global Communications Conference (GLOBECOM) | 2014

Latency versus survivability in geo-distributed data center design

Rodrigo S. Couto; Stefano Secci; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

A hot topic in data center design is to envision geo-distributed architectures spanning a few sites across wide area networks, allowing more proximity to the end users and higher survivability, defined as the capacity of a system to operate after failures. As a shortcoming, this approach is subject to an increase in latency between servers, caused by their geographic distances. In this paper, we address the trade-off between latency and survivability in geo-distributed data centers through the formulation of an optimization problem. Simulations considering realistic scenarios show that the latency increase is significant only in the case of very strong survivability requirements, whereas it is negligible for moderate survivability requirements. For instance, the worst-case latency is less than 4 ms when guaranteeing that 80% of the servers are available after a failure, in a network where the latency could be up to 33 ms.
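
As a toy illustration of this trade-off (the sites, latencies, server count, and even-spreading rule are assumptions of this sketch, not the paper's optimization model or scenarios), one can enumerate site subsets and report the smallest worst-case inter-site latency that still meets a survivability requirement:

```python
# Toy illustration of the latency/survivability trade-off: spread
# servers over a subset of sites, require that at least `alpha` of them
# survive the worst single-site failure, and report the smallest
# worst-case inter-site latency that meets the requirement. The sites,
# latencies, and server count are made up, not the paper's scenarios.
from itertools import combinations
from math import ceil

latency_ms = {("P", "Q"): 2, ("P", "R"): 8, ("P", "S"): 15, ("P", "T"): 25,
              ("Q", "R"): 10, ("Q", "S"): 18, ("Q", "T"): 28,
              ("R", "S"): 20, ("R", "T"): 30, ("S", "T"): 12}
sites = ["P", "Q", "R", "S", "T"]
total_servers = 20

def worst_latency(chosen):
    if len(chosen) < 2:
        return 0
    return max(latency_ms[tuple(sorted(pair))] for pair in combinations(chosen, 2))

def survivability(chosen):
    biggest_site = ceil(total_servers / len(chosen))   # servers spread evenly
    return (total_servers - biggest_site) / total_servers

for alpha in (0.5, 0.75, 0.8):
    feasible = [c for k in range(1, len(sites) + 1)
                for c in combinations(sites, k) if survivability(c) >= alpha]
    best = min(feasible, key=worst_latency)
    print(f"alpha={alpha}: sites={best}, worst-case latency={worst_latency(best)} ms")
```

With these made-up numbers, relaxing the survivability requirement from 0.8 to 0.5 drops the worst-case latency from 30 ms to 2 ms, the same qualitative behavior as the results summarized above.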


Computer Networks | 2014

Network resource control for Xen-based virtualized software routers

Rodrigo S. Couto; Miguel Elias M. Campista; Luís Henrique Maciel Kosmalski Costa

The pluralist architecture is considered an alternative for the future Internet to support multiple services with contrasting requirements. In this approach, machine virtualization techniques play a fundamental role. Nevertheless, when applied to networking, they impose critical bottlenecks since they do not provide suitable mechanisms to orchestrate the utilization of the underlying resources. In this work, we propose XTC (Xen Throughput Control) to fill this gap and control the utilization of network resources in Xen-based virtual routers. The main idea is to provide aggregate control, regardless of the traffic on specific network interfaces. To achieve this goal, XTC indirectly adjusts the maximum throughput of a virtual router by controlling the amount of CPU given to it. Our experimental results show that XTC provides differentiation and fairness between virtual routers and also adapts to system disturbances.


Computers & Electrical Engineering | 2017

Assessing the impacts of IPsec cryptographic algorithms on a virtual network embedding problem

Bernardo C.V. Camilo; Rodrigo S. Couto; Luís Henrique Maciel Kosmalski Costa

Network virtualization has emerged as an alternative to traditional networking, allowing several different virtual networks to operate on the same physical infrastructure. Despite its wide adoption, virtualization still has some open issues. One of the challenges is related to the resource allocation of virtual networks on the physical substrate. In the literature, this problem is known as virtual network embedding. Different papers propose virtual network embedding approaches considering different aspects, but only a few address security, which is a key requirement for many applications. This work quantifies the overhead of cryptographic algorithms in order to use them in virtual network embedding solutions. Both theoretical and experimental evaluations of IPsec algorithms are conducted. The obtained results are applied to a known virtual network embedding problem, using realistic characteristics. These results demonstrate the importance of considering such overheads when allocating secure virtual networks.
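
As a minimal sketch of the kind of micro-benchmark used to quantify cryptographic overhead (the cipher, payload size, and the Python cryptography package are choices made for this illustration, not the paper's IPsec testbed methodology):

```python
# Minimal sketch of a cryptographic-overhead micro-benchmark: measure
# how many MB/s of payload AES-256-GCM (a cipher commonly used with
# IPsec ESP) can encrypt on this machine. Payload size and library
# choice are assumptions of this sketch, not the paper's methodology.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
payload = os.urandom(1400)          # roughly one Ethernet-sized packet
rounds = 50_000

start = time.perf_counter()
for _ in range(rounds):
    nonce = os.urandom(12)          # GCM requires a unique nonce per packet
    aesgcm.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start

mbytes = rounds * len(payload) / 1e6
print(f"AES-256-GCM: {mbytes / elapsed:.1f} MB/s over {rounds} packets")
```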

Collaboration


Rodrigo S. Couto's top co-authors and their affiliations.

Top Co-Authors

Miguel Elias M. Campista (Rio de Janeiro State University)
Marcelo G. Rubinstein (Rio de Janeiro State University)
Tatiana Sciammarella (Federal University of Rio de Janeiro)
Felipe Silva (Federal University of Rio de Janeiro)
Hugo E. T. Carvalho (Federal University of Rio de Janeiro)
Hugo de Freitas Siqueira Sadok (Federal University of Rio de Janeiro)
Natalia Castro Fernandes (Federal University of Rio de Janeiro)