Edmundo Roberto Mauro Madeira
State University of Campinas
Publications
Featured research published by Edmundo Roberto Mauro Madeira.
Journal of Internet Services and Applications | 2011
Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira
Workflows have been used to represent a variety of applications involving high processing and storage demands. To meet these demands, the cloud computing paradigm has emerged as an on-demand resource provider. While public clouds charge users on a per-use basis, private clouds are owned by users and can be used at no charge. When a public cloud and a private cloud are combined, we have what we call a hybrid cloud. In a hybrid cloud, the user has the elasticity provided by public cloud resources, which can be aggregated to the private resource pool as needed. One question faced by users of such systems is: which resources are best to request from the public cloud, given the current demand and the resource costs? In this paper we address this problem, presenting HCOC, the Hybrid Cloud Optimized Cost scheduling algorithm. HCOC decides which resources should be leased from the public cloud and aggregated to the private cloud to provide sufficient processing power to execute a workflow within a given execution time. We present extensive experimental and simulation results showing that HCOC can reduce costs while meeting the desired execution time.
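The core decision behind HCOC can be pictured with the simplified sketch below: if the private pool alone cannot meet the deadline, greedily lease public instances until the estimated makespan fits. This is not the published algorithm, which ranks individual tasks and accounts for communication and multicore instances; the instance catalogue, speeds, and prices are hypothetical.

```python
# Simplified illustration of the hybrid-cloud leasing decision behind HCOC.
# Not the published algorithm: task precedence and data transfers are ignored,
# and the instance catalogue below is hypothetical.

from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    speed: float           # work units processed per hour
    price_per_hour: float  # 0.0 for private resources

def makespan(total_work: float, pool: list[Instance]) -> float:
    """Rough makespan estimate: work split proportionally to speed."""
    return total_work / sum(r.speed for r in pool)

def lease_for_deadline(total_work, private, catalogue, deadline):
    """Greedily add the cheapest public instances until the deadline fits."""
    pool, cost = list(private), 0.0
    for inst in sorted(catalogue, key=lambda i: i.price_per_hour):
        if makespan(total_work, pool) <= deadline:
            break
        pool.append(inst)
        cost += inst.price_per_hour * deadline  # assume payment for the whole window
    return pool, cost

private = [Instance("own-node", speed=10, price_per_hour=0.0)]
catalogue = [Instance("small", 8, 0.10), Instance("large", 20, 0.40)]
pool, cost = lease_for_deadline(total_work=300, private=private,
                                catalogue=catalogue, deadline=12)
print([r.name for r in pool], round(cost, 2))  # ['own-node', 'small', 'large'] 6.0
```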
IEEE Communications Magazine | 2012
Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira; N.L.S. da Fonseca
Schedulers for cloud computing determine on which processing resource the jobs of a workflow should be allocated. In hybrid clouds, jobs can be allocated either on a private cloud or on a public cloud on a pay-per-use basis. The capacity of the communication channels connecting these two types of resources affects both the makespan and the cost of workflow execution. This article introduces the scheduling problem in hybrid clouds, presenting the main characteristics to be considered when scheduling workflows, as well as a brief survey of some of the scheduling algorithms used in these systems. To assess the influence of communication channels on job allocation, we compare and evaluate the impact of the available bandwidth on the performance of some of these scheduling algorithms.
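The bandwidth effect discussed here enters a scheduler's cost model as a transfer delay whenever dependent jobs sit on different clouds. A minimal sketch of that estimate, with made-up sizes and link rates:

```python
# Finish-time estimate for a job whose input comes from a job in the other cloud.
# Illustrative only; the figures and the linear transfer model are assumptions.

def finish_time(ready_time, exec_time, data_mb, bandwidth_mbps, same_cloud):
    transfer = 0.0 if same_cloud else (data_mb * 8) / bandwidth_mbps  # seconds
    return ready_time + transfer + exec_time

# Same placement: no transfer. Cross-cloud over a 50 Mb/s link: +160 s.
print(finish_time(0, 120, 1000, 50, same_cloud=True))   # 120.0
print(finish_time(0, 120, 1000, 50, same_cloud=False))  # 280.0
```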
Journal of Internet Services and Applications | 2015
Leonardo Richter Bays; Rodrigo Ruas Oliveira; Marinho P. Barcellos; Luciano Paschoal Gaspary; Edmundo Roberto Mauro Madeira
Network virtualization has become increasingly prominent in recent years. It enables the creation of network infrastructures that are specifically tailored to the needs of distinct network applications and supports the instantiation of favorable environments for the development and evaluation of new architectures and protocols. Despite the wide applicability of network virtualization, the shared use of routing devices and communication channels raises a series of security concerns. Protecting virtual network infrastructures is necessary to enable their use in real, large-scale environments. In this paper, we present an overview of the state of the art in virtual network security. We discuss the main challenges of this kind of environment and some of the major threats, as well as solutions proposed in the literature that address different security aspects.
Network Operations and Management Symposium | 2012
Thiago A. L. Genez; Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira
Cloud computing is used to avoid maintenance costs and upfront investment while providing elastic computational power on a pay-per-use basis. Customers can use the cloud through a software (SaaS), platform (PaaS), or infrastructure (IaaS) provider. When a customer uses an environment provided by a SaaS cloud, she is unaware of the details of the computational infrastructure on which her requests are processed. Such infrastructure can therefore be composed of computational resources from a datacenter owned by the SaaS provider, or of resources leased from a cloud infrastructure provider. In this paper we present an integer linear program (ILP) formulation for the problem of scheduling SaaS customers' workflows onto multiple IaaS providers, where SLAs exist at two levels. In addition, we present heuristics to solve a relaxed version of the presented ILP. Simulation results show that the proposed ILP is able to find low-cost solutions for short deadlines, while the proposed heuristics are effective when deadlines are longer.
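A heavily stripped-down version of such an ILP can be expressed with an off-the-shelf solver such as PuLP. The sketch below assigns each task to one hypothetical VM type and minimizes monetary cost under a deadline; the paper's actual formulation also models task precedence, data transfers, and the two SLA levels, none of which appear here.

```python
# Minimal deadline-constrained assignment ILP, loosely inspired by the paper's
# formulation. Precedence constraints and two-level SLAs are omitted; the VM
# types and prices are hypothetical. Requires the PuLP package (bundled CBC solver).

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

tasks = {"t1": 100.0, "t2": 250.0, "t3": 60.0}          # work units per task
vms   = {"cheap": (10.0, 0.05), "fast": (40.0, 0.30)}   # speed, $ per hour
deadline = 12.0                                          # hours

prob = LpProblem("workflow_scheduling", LpMinimize)
x = {(t, v): LpVariable(f"x_{t}_{v}", cat=LpBinary) for t in tasks for v in vms}

def hours(t, v):
    return tasks[t] / vms[v][0]

# Objective: total monetary cost of the chosen assignments.
prob += lpSum(x[t, v] * hours(t, v) * vms[v][1] for t in tasks for v in vms)

# Each task runs on exactly one VM type.
for t in tasks:
    prob += lpSum(x[t, v] for v in vms) == 1

# Crude deadline constraint: accumulated time on each VM type fits the deadline
# (i.e. one sequential instance per type, a deliberate simplification).
for v in vms:
    prob += lpSum(x[t, v] * hours(t, v) for t in tasks) <= deadline

prob.solve()
for (t, v), var in x.items():
    if var.value() == 1:
        print(t, "->", v)
```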
Journal of Grid Computing | 2010
Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira
The workflow paradigm has become the standard way to represent processes and their execution flows. With the evolution of e-Science, workflows are becoming larger and more computationally demanding. These e-Science needs match what computational Grids have to offer. Grids are shared distributed platforms that will eventually receive multiple requests to execute workflows. This creates a demand for schedulers that handle multiple workflows on the same set of resources, making the development of multiple-workflow scheduling algorithms necessary. In this paper we describe four initial strategies for scheduling multiple workflows on Grids and evaluate them in terms of schedule length and fairness. We present results for the initial schedule and for the makespan after execution with external load. From the results we conclude that interleaving the workflows on the Grid leads to a good average makespan and provides fairness when multiple workflows share the same set of resources.
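The interleaving strategy can be pictured as merging the tasks of all submitted workflows in round-robin order before handing them to the underlying list scheduler. The sketch below shows only that merging step, with made-up workflow contents; the paper's strategies also weigh task ranks, which are omitted here.

```python
# Round-robin interleaving of tasks from several workflows into a single ordered
# queue for a list scheduler. Illustrative only; task ranking is not modelled.

from collections import deque
from itertools import cycle

def interleave(workflows: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Take one task from each workflow in turn until all queues are drained."""
    queues = {name: deque(tasks) for name, tasks in workflows.items()}
    order = []
    for name in cycle(list(queues)):
        if not queues:
            break
        if name not in queues:
            continue                     # this workflow is already drained
        order.append((name, queues[name].popleft()))
        if not queues[name]:
            del queues[name]
    return order

print(interleave({"wfA": ["a1", "a2", "a3"], "wfB": ["b1"], "wfC": ["c1", "c2"]}))
# [('wfA','a1'), ('wfB','b1'), ('wfC','c1'), ('wfA','a2'), ('wfC','c2'), ('wfA','a3')]
```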
Parallel, Distributed and Network-Based Processing | 2010
Luiz F. Bittencourt; Rizos Sakellariou; Edmundo Roberto Mauro Madeira
Among the numerous DAG scheduling heuristics suitable for heterogeneous systems, the Heterogeneous Earliest Finish Time (HEFT) heuristic is known to give good results in a short time. In this paper, we propose an improvement to HEFT in which the locally optimal decisions made by the heuristic do not rely on estimates for a single task only, but also look ahead in the schedule and take into account the impact of each decision on the children of the task being allocated. Preliminary simulation results indicate that the lookahead variation of HEFT can effectively reduce the makespan of the schedule in most cases without making the algorithm's execution time prohibitively high.
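The lookahead idea can be sketched as follows: instead of choosing the processor that minimizes the earliest finish time (EFT) of the current task alone, the heuristic tentatively evaluates each choice and also estimates the EFT of the task's children, keeping the choice with the smallest worst-case child EFT. The sketch below is a drastically simplified, communication-free illustration with made-up costs, not the published algorithm.

```python
# Simplified "lookahead" processor selection in the spirit of the HEFT extension:
# pick the processor minimizing the worst estimated finish time of the children,
# rather than the task's own finish time alone. Communication costs and deeper
# recursion are ignored; all figures are illustrative.

def eft(ready, avail, cost):
    """Earliest finish time on one processor given task ready time."""
    return max(ready, avail) + cost

def lookahead_choice(task_cost, child_costs, avail, ready=0.0):
    """task_cost and each entry of child_costs: dict {processor: execution time}."""
    best_proc, best_metric = None, float("inf")
    for p in avail:
        finish = eft(ready, avail[p], task_cost[p])
        tentative = dict(avail, **{p: finish})        # processor p busy until 'finish'
        # Worst child EFT if every child independently picks its best processor.
        worst_child = max(
            min(eft(finish, tentative[q], c[q]) for q in tentative)
            for c in child_costs
        ) if child_costs else finish
        if worst_child < best_metric:
            best_proc, best_metric = p, worst_child
    return best_proc, best_metric

avail = {"p1": 0.0, "p2": 4.0}
task_cost = {"p1": 10.0, "p2": 3.0}
children = [{"p1": 6.0, "p2": 2.0}, {"p1": 5.0, "p2": 9.0}]
print(lookahead_choice(task_cost, children, avail))   # ('p2', 12.0)
```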
Proceedings of the 9th International Workshop on Middleware for Grids, Clouds and e-Science | 2011
Daniel Guimaraes do Lago; Edmundo Roberto Mauro Madeira; Luiz F. Bittencourt
This paper presents a virtual machine scheduling algorithm for cloud computing based on green computing concepts. The goal of the algorithm is to minimize the energy consumed by task executions in a cloud computing environment. It uses techniques such as shutting down underutilized hosts, migrating load away from hosts operating below a certain threshold, and DVFS, and it applies active cooling control to further reduce power consumption. The choice of which host receives load is based on the hosts' energy efficiency, given by the ratio of each host's MIPS to its energy consumption. Simulation results comparing this algorithm with other surveyed algorithms show that it can reduce power consumption in clouds composed of heterogeneous datacenters while matching the best algorithms in homogeneous datacenters.
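The host-selection rule described above, placing load on the host with the best MIPS-per-watt ratio and switching off hosts left idle, can be sketched as follows. The host figures are hypothetical, and DVFS and cooling control are not modelled.

```python
# Energy-efficiency-driven VM placement sketch: prefer the powered-on host with
# the highest MIPS-per-watt ratio that still has capacity, and power off hosts
# left idle. Host figures are hypothetical; DVFS and cooling are not modelled.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    mips: float
    power_watts: float
    used_mips: float = 0.0
    powered_on: bool = True

    @property
    def efficiency(self) -> float:
        return self.mips / self.power_watts

def place_vm(hosts: list[Host], vm_mips: float):
    candidates = [h for h in hosts if h.powered_on and h.mips - h.used_mips >= vm_mips]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: h.efficiency)
    best.used_mips += vm_mips
    return best

def power_off_idle(hosts: list[Host]) -> None:
    for h in hosts:
        if h.powered_on and h.used_mips == 0.0:
            h.powered_on = False

hosts = [Host("old", 4000, 400), Host("new", 6000, 300)]
print(place_vm(hosts, 2500).name)   # "new": 20 MIPS/W beats 10 MIPS/W
print(place_vm(hosts, 4000).name)   # "old": "new" lacks remaining capacity
power_off_idle(hosts)               # would shut down any host left without load
```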
International Conference on Web Engineering | 2006
Matthias Fluegge; Ivo José Garcia dos Santos; Neil Paiva Tizzo; Edmundo Roberto Mauro Madeira
One of the great challenges to be faced in order to enable the success of future Web-based applications is finding effective ways to handle interoperability demands. In this context, Service-Oriented Architectures and Web Services technology are considered the most affordable solution to promote interoperability, by applying strategies such as service composition. Nevertheless, most composition approaches applied in real-world contexts today lack dynamism. In fact, there is not yet a consensus on what a dynamic composition really is. In this paper we propose criteria to identify the levels of dynamism and automation in service compositions. Furthermore, taking a model-driven approach, we propose a strategy in which different techniques can be used to make compositions more dynamic and automatic. This strategy is then exemplified and discussed in an e-Government composition scenario.
Latin-American Symposium on Dependable Computing | 2011
Alan Massaru Nakai; Edmundo Roberto Mauro Madeira; Luiz Eduardo Buzato
The Internet has become the universal support for computer applications, which increases the need for solutions that provide dependability and QoS for web applications. Replicating web servers on geographically distributed datacenters allows the service provider to tolerate disastrous failures and to improve the response times perceived by clients. A key issue for good performance of worldwide distributed web services is the efficiency of the load balancing mechanism used to distribute client requests among the replicated servers. Load balancing can reduce the need for over-provisioning of resources and helps tolerate abrupt load peaks and/or partial failures through load conditioning. In this paper, we propose a new load balancing solution that reduces service response times by redirecting requests to the closest remote servers without overloading them. We also describe a middleware that implements this protocol and present the results of a set of simulations that show its usefulness.
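The redirection rule at the heart of the proposal, sending requests from an overloaded server to the closest remote replica that is not itself overloaded, can be sketched as a simple selection over latency and load estimates. The threshold and figures below are illustrative, not taken from the paper.

```python
# Redirect a request to the closest remote replica below a load threshold, only
# when the local server itself is overloaded. Names, latencies, loads and the
# threshold are illustrative assumptions, not values from the paper.

def choose_server(local_name, local_load, remotes, load_threshold=0.8):
    """remotes: list of (name, rtt_ms, load) tuples, load in [0, 1]."""
    if local_load < load_threshold:
        return local_name                      # local server can still take it
    for name, _rtt, load in sorted(remotes, key=lambda s: s[1]):
        if load < load_threshold:
            return name                        # closest replica with spare capacity
    return local_name                          # every replica is overloaded too

remotes = [("us-east", 120, 0.95), ("eu-west", 40, 0.60), ("sa-east", 15, 0.85)]
print(choose_server("local-dc", 0.92, remotes))  # eu-west: closest non-overloaded
```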
Conference on Network and Service Management | 2010
Luiz F. Bittencourt; Carlos R. Senna; Edmundo Roberto Mauro Madeira
Cloud computing has recently emerged as a convergence of concepts such as cluster computing, grid computing, utility computing, and virtualization. In hybrid clouds, the user has a private cloud available for use, but can also request new resources from public clouds on a pay-per-use basis when demand increases. In this scenario, it is important to decide when and how to request these new resources in order to satisfy deadlines and/or obtain a reasonable execution time, while minimizing the monetary costs involved. In this paper we propose a strategy for scheduling service workflows in a hybrid cloud. The strategy aims at determining which services should use paid resources and what kind of resource should be requested from the cloud in order to minimize costs and meet deadlines. Experiments suggest that the strategy can decrease execution costs while maintaining reasonable execution times.
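The "what kind of resource to request" decision can be illustrated as picking, for a service that would miss its deadline on the private cloud, the cheapest public instance type that is fast enough. The instance catalogue and per-started-hour billing model below are assumptions for illustration; the actual strategy also accounts for data transfers between services.

```python
# Choose the cheapest public instance type that still meets a service's deadline;
# fall back to the private cloud when it is already fast enough. The catalogue
# and the per-started-hour cost model are assumptions for illustration.

import math

CATALOGUE = [                 # (name, speed in work units/hour, $ per hour)
    ("micro", 5.0, 0.02),
    ("medium", 20.0, 0.10),
    ("xlarge", 80.0, 0.60),
]

def pick_instance(work, deadline_h, private_speed):
    if work / private_speed <= deadline_h:
        return "private", 0.0                          # no need to pay
    feasible = [(n, s, p) for n, s, p in CATALOGUE if work / s <= deadline_h]
    if not feasible:
        return None, math.inf                          # no single instance can make it
    name, speed, price = min(feasible, key=lambda c: math.ceil(work / c[1]) * c[2])
    return name, math.ceil(work / speed) * price

print(pick_instance(work=50, deadline_h=12, private_speed=10))   # ('private', 0.0)
print(pick_instance(work=300, deadline_h=6, private_speed=10))   # ('xlarge', 2.4)
```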