
Publication


Featured research published by Luiz F. Bittencourt.


Journal of Internet Services and Applications | 2011

HCOC: a cost optimization algorithm for workflow scheduling in hybrid clouds

Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira

Workflows have been used to represent a variety of applications with high processing and storage demands. To meet these demands, the cloud computing paradigm has emerged as an on-demand resource provider. While public clouds charge users on a per-use basis, private clouds are owned by users and can be utilized at no charge. When a public cloud and a private cloud are merged, we have what we call a hybrid cloud. In a hybrid cloud, the user has the elasticity provided by public cloud resources, which can be aggregated to the private resource pool as necessary. One question faced by users of such systems is: which resources should be requested from the public cloud, given the current demand and the resource costs? In this paper we deal with this problem, presenting HCOC: the Hybrid Cloud Optimized Cost scheduling algorithm. HCOC decides which resources should be leased from the public cloud and aggregated to the private cloud to provide sufficient processing power to execute a workflow within a given execution time. We present extensive experimental and simulation results showing that HCOC can reduce costs while achieving the established desired execution time.
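The abstract does not give the algorithm's internals, but the trade-off it describes can be pictured with a toy greedy sketch: keep leasing the most cost-effective public VM type until the estimated makespan fits the deadline. All names, speeds, and prices below are invented for illustration; this is not the published HCOC algorithm.

```python
# Toy sketch of the lease decision a hybrid-cloud scheduler faces:
# add public VMs until the estimated makespan meets the deadline.
from dataclasses import dataclass

@dataclass
class VmType:
    name: str
    speed: float        # abstract work units per hour (hypothetical)
    price: float        # monetary cost per hour (hypothetical)

def lease_plan(total_work, private_speed, deadline_h, catalog):
    """Lease copies of the most cost-effective VM type until the deadline fits."""
    best = min(catalog, key=lambda v: v.price / v.speed)   # cheapest per unit of speed
    leased, capacity = [], private_speed
    while total_work / capacity > deadline_h:
        leased.append(best)
        capacity += best.speed
    hours = total_work / capacity
    return leased, round(len(leased) * best.price * hours, 2)

if __name__ == "__main__":
    catalog = [VmType("small", 1.0, 0.10), VmType("large", 4.0, 0.36)]
    plan, cost = lease_plan(total_work=100.0, private_speed=5.0,
                            deadline_h=8.0, catalog=catalog)
    print([vm.name for vm in plan], cost)   # two "large" VMs close the gap
```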


IEEE Communications Magazine | 2012

Scheduling in hybrid clouds

Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira; N.L.S. da Fonseca

Schedulers for cloud computing determine on which processing resources the jobs of a workflow should be allocated. In hybrid clouds, jobs can be allocated either on a private cloud or on a public cloud on a pay-per-use basis. The capacity of the communication channels connecting these two types of resources impacts both the makespan and the cost of workflow execution. This article introduces the scheduling problem in hybrid clouds, presenting the main characteristics to be considered when scheduling workflows, as well as a brief survey of some of the scheduling algorithms used in these systems. To assess the influence of communication channels on job allocation, we compare and evaluate the impact of the available bandwidth on the performance of some of these scheduling algorithms.
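As a rough illustration of why the available bandwidth matters, the toy calculation below (my own numbers, not the article's) compares running a job on the private cloud against offloading it to a faster public resource once transfer time over the channel is accounted for.

```python
# Offloading only pays off when the channel is fast enough that the
# data-transfer time does not cancel the gain in computation speed.
def offload_makespan(work, input_mb, output_mb, public_speed, bandwidth_mbps):
    transfer_s = (input_mb + output_mb) * 8 / bandwidth_mbps   # MB -> Mb over Mb/s
    compute_s = work / public_speed
    return transfer_s + compute_s

local = 1000 / 2.0                                  # run on the (slower) private cloud
for bw in (10, 100, 1000):
    remote = offload_makespan(1000, 500, 50, 8.0, bw)
    print(f"{bw:>5} Mb/s: offload {remote:7.1f}s vs local {local:.1f}s")
# At 10 Mb/s the offloaded job is slower than running locally.
```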


Network Operations and Management Symposium | 2012

Workflow scheduling for SaaS / PaaS cloud providers considering two SLA levels

Thiago A. L. Genez; Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira

Cloud computing is being used to avoid maintenance costs and upfront investment while providing elasticity in the available computational power on a pay-per-use basis. Customers can make use of the cloud through a software (SaaS), platform (PaaS), or infrastructure (IaaS) provider. When a customer utilizes an environment provided by a SaaS cloud, she is unaware of any details about the computational infrastructure where her requests are processed. Such infrastructure can therefore be composed of computational resources from a datacenter owned by the SaaS provider, or its resources can be leased from a cloud infrastructure provider. In this paper we present an integer linear program (ILP) formulation for the problem of scheduling SaaS customers' workflows onto multiple IaaS providers where SLAs exist at two levels. In addition, we present heuristics to solve the relaxed version of the presented ILP. Simulation results show that the proposed ILP is able to find low-cost solutions for short deadlines, while the proposed heuristics are effective when deadlines are longer.
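The paper's actual ILP is not reproduced in the abstract; the sketch below is a deliberately simplified LP relaxation of a task-to-provider assignment under a single aggregate deadline, using invented costs and runtimes, meant only to show the general shape of such a formulation and of relaxing its binary variables to [0, 1].

```python
# Simplified LP relaxation: fractionally assign tasks to two hypothetical
# IaaS providers, minimising cost subject to an aggregate deadline.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[1.0, 3.0],    # cost of task i on provider j (invented)
                 [2.0, 2.5],
                 [1.5, 4.0]])
time = np.array([[9.0, 3.0],    # runtime of task i on provider j (invented)
                 [6.0, 4.0],
                 [8.0, 2.0]])
deadline = 14.0
n_tasks, n_prov = cost.shape

c = cost.ravel()                               # objective: total monetary cost
# Each task must be (fractionally) assigned exactly once.
A_eq = np.zeros((n_tasks, n_tasks * n_prov))
for i in range(n_tasks):
    A_eq[i, i * n_prov:(i + 1) * n_prov] = 1.0
b_eq = np.ones(n_tasks)
# Aggregate (serialised) runtime must respect the deadline.
A_ub = time.ravel()[None, :]
b_ub = [deadline]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n_tasks, n_prov).round(2), res.fun)
```

Rounding the fractional solution back to a feasible integer assignment is exactly the kind of step the paper's heuristics must handle; this sketch stops at the relaxation.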


Journal of Grid Computing | 2010

Towards the Scheduling of Multiple Workflows on Computational Grids

Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira

The workflow paradigm has become the standard way to represent processes and their execution flows. With the evolution of e-Science, workflows are becoming larger and more computationally demanding. Such e-Science needs match what computational grids have to offer. Grids are shared distributed platforms which will eventually receive multiple requests to execute workflows. Consequently, there is a demand for schedulers that deal with multiple workflows on the same set of resources, and the development of multiple-workflow scheduling algorithms becomes necessary. In this paper we describe four initial strategies for scheduling multiple workflows on grids and evaluate them in terms of schedule length and fairness. We present results for the initial schedule and for the makespan after execution with external load. From the results we conclude that interleaving the workflows on the grid leads to good average makespan and provides fairness when multiple workflows share the same set of resources.
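The interleaving idea mentioned in the conclusion can be pictured with a minimal sketch: merge the ready tasks of all submitted workflows round-robin instead of dispatching one workflow at a time. Task names are illustrative and no resource model is included.

```python
# Interleaving vs. sequential dispatch of tasks from multiple workflows.
from collections import deque
from itertools import chain

def sequential(workflows):
    """Dispatch all tasks of one workflow before the next (unfair to late arrivals)."""
    return list(chain.from_iterable(workflows))

def interleaved(workflows):
    """Round-robin over the workflows' ready queues (fairer sharing)."""
    queues = [deque(w) for w in workflows]
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order

wf_a = ["A1", "A2", "A3", "A4"]
wf_b = ["B1", "B2"]
print(sequential([wf_a, wf_b]))   # A1 A2 A3 A4 B1 B2
print(interleaved([wf_a, wf_b]))  # A1 B1 A2 B2 A3 A4
```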


Parallel, Distributed and Network-Based Processing | 2010

DAG Scheduling Using a Lookahead Variant of the Heterogeneous Earliest Finish Time Algorithm

Luiz F. Bittencourt; Rizos Sakellariou; Edmundo Roberto Mauro Madeira

Among the numerous DAG scheduling heuristics suitable for heterogeneous systems, the Heterogeneous Earliest Finish Time (HEFT) heuristic is known to give good results in a short time. In this paper, we propose an improvement of HEFT in which the locally optimal decisions made by the heuristic do not rely on estimates for a single task only, but also look ahead in the schedule and take into account the impact of each decision on the children of the task being allocated. Preliminary simulation results indicate that the lookahead variant of HEFT can effectively reduce the makespan of the schedule in most cases without making the algorithm's execution time prohibitively high.
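A compact, simplified sketch of the lookahead idea follows: plain HEFT places a task on the resource giving it the earliest finish time, while the lookahead variant also estimates how that placement delays the task's children. The runtimes, communication cost, and two-resource setting are invented, and the score used here is a reduction of the full algorithm, which works on complete schedules.

```python
# Simplified contrast between HEFT and a one-level lookahead placement rule.
runtime = {               # runtime[task][resource], hypothetical estimates
    "t":  [4.0, 5.0],
    "c1": [8.0, 2.0],
    "c2": [7.0, 2.0],
}
children = {"t": ["c1", "c2"]}
comm = 3.0                # transfer cost if parent and child run on different resources

def eft(task, res, start):
    return start + runtime[task][res]

def heft_choice(task, ready_time):
    # Plain HEFT: minimise the finish time of this task alone.
    return min(range(2), key=lambda r: eft(task, r, ready_time))

def lookahead_choice(task, ready_time):
    # Lookahead: minimise the latest earliest-finish-time among the children.
    def score(r):
        parent_finish = eft(task, r, ready_time)
        return max(
            min(eft(ch, cr, parent_finish + (0 if cr == r else comm))
                for cr in range(2))
            for ch in children[task]
        )
    return min(range(2), key=score)

print("HEFT picks resource", heft_choice("t", 0.0))        # resource 0
print("Lookahead picks resource", lookahead_choice("t", 0.0))  # resource 1, children finish earlier
```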


Proceedings of the 9th International Workshop on Middleware for Grids, Clouds and e-Science | 2011

Power-aware virtual machine scheduling on clouds using active cooling control and DVFS

Daniel Guimaraes do Lago; Edmundo Roberto Mauro Madeira; Luiz F. Bittencourt

This paper presents a virtual machine scheduling algorithm for cloud computing based on green computing concepts. The goal of the algorithm is to minimize the energy consumed by task executions in a cloud computing environment. The algorithm employs techniques such as shutting down underutilized hosts, migrating load away from hosts operating below a certain threshold, and DVFS. It also applies the concept of active cooling control in order to minimize power consumption. The choice of which host will receive load is based on the hosts' energy efficiency, given by the ratio of delivered MIPS to the energy consumed by each host. Simulation results comparing this algorithm with other surveyed algorithms show that it can reduce power consumption in clouds composed of heterogeneous datacenters while remaining equivalent to the best algorithms in homogeneous datacenters.
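The host-selection rule described above, picking the host with the highest MIPS-per-watt ratio, can be shown in a few lines. The hosts and figures are made up for the example.

```python
# Pick the most energy-efficient host: delivered MIPS per watt.
hosts = [
    {"name": "h1", "mips": 2000, "watts": 250},   # 8.0 MIPS/W
    {"name": "h2", "mips": 3000, "watts": 300},   # 10.0 MIPS/W
    {"name": "h3", "mips": 1500, "watts": 120},   # 12.5 MIPS/W
]

def most_efficient(hosts):
    return max(hosts, key=lambda h: h["mips"] / h["watts"])

print(most_efficient(hosts)["name"])   # h3
```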


Conference on Network and Service Management | 2010

Scheduling service workflows for cost optimization in hybrid clouds

Luiz F. Bittencourt; Carlos R. Senna; Edmundo Roberto Mauro Madeira

Cloud computing has recently emerged as a convergence of concepts such as cluster computing, grid computing, utility computing, and virtualization. In hybrid clouds, the user has her private cloud available for use, but can also request new resources from public clouds on a pay-per-use basis when demand increases. In this scenario it is important to decide when and how to request these new resources to satisfy deadlines and/or obtain a reasonable execution time, while minimizing the monetary costs involved. In this paper we propose a strategy to schedule service workflows in a hybrid cloud. The strategy aims at determining which services should use paid resources and what kind of resource should be requested from the cloud in order to minimize costs and meet deadlines. Experiments suggest that the strategy can decrease execution costs while maintaining reasonable execution times.
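One hedged reading of the kind of decision such a strategy makes: for a service that cannot meet its deadline on private resources, request the cheapest public resource type that still finishes in time. The resource types, speeds, and prices below are invented, not taken from the paper.

```python
# Choose the public resource type with the lowest total cost among those
# that can finish the service's work before its deadline.
def pick_public_resource(work, deadline_h, types):
    feasible = [t for t in types if work / t["speed"] <= deadline_h]
    if not feasible:
        return None                      # no single type meets the deadline
    return min(feasible, key=lambda t: t["price"] * (work / t["speed"]))

types = [
    {"name": "small",  "speed": 2.0, "price": 0.10},   # price per hour
    {"name": "medium", "speed": 4.0, "price": 0.22},
    {"name": "xlarge", "speed": 8.0, "price": 0.50},
]
print(pick_public_resource(work=24.0, deadline_h=8.0, types=types))  # "medium" wins on cost
```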


IEEE Cloud Computing | 2017

Mobility-Aware Application Scheduling in Fog Computing

Luiz F. Bittencourt; Javier Diaz-Montes; Rajkumar Buyya; Omer Farooq Rana; Manish Parashar

Fog computing provides a distributed infrastructure at the edges of the network, resulting in low-latency access and faster responses to application requests than centralized clouds. With this new level of computing capacity introduced between users and data center-based clouds, new forms of resource allocation and management can be developed to take advantage of the fog infrastructure. A wide range of applications with different requirements run on end-user devices, and with the popularity of cloud computing many of them rely on remote processing or storage. As clouds are primarily delivered through centralized data centers, such remote processing/storage usually takes place at a single location that hosts user applications and data. The distributed capacity provided by fog computing allows execution and storage to be performed at different locations. The combination of distributed capacity, the range and types of user applications, and the mobility of smart devices requires resource management and scheduling strategies that take all of these factors into account. We analyze the scheduling problem in fog computing, focusing on how user mobility can influence application performance and how three scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
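As a rough sketch, the three policies named above can be read as different ways of splitting arriving applications between a capacity-limited fog node and the cloud. The behaviour below is my paraphrase of the policy names, not the article's simulation model, and the applications and capacities are invented.

```python
# Three toy placement policies for applications arriving at a fog node.
def concurrent(apps, capacity):
    # Admit everyone to the fog node regardless of load (capacity ignored).
    return {"fog": list(apps), "cloud": []}

def fcfs(apps, capacity):
    # Admit in arrival order; overflow goes to the cloud.
    return {"fog": apps[:capacity], "cloud": apps[capacity:]}

def delay_priority(apps, capacity):
    # Keep the most delay-sensitive applications in the fog first.
    ordered = sorted(apps, key=lambda a: a["max_delay_ms"])
    return {"fog": ordered[:capacity], "cloud": ordered[capacity:]}

apps = [{"id": "video",  "max_delay_ms": 300},
        {"id": "ar",     "max_delay_ms": 20},
        {"id": "backup", "max_delay_ms": 5000}]
print(delay_priority(apps, capacity=1))   # keeps the AR app at the edge
```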


Journal of Communications and Networks | 2013

A seamless flow mobility management architecture for vehicular communication networks

Rodolfo Ipolito Meneguette; Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira

Vehicular ad hoc networks (VANETs) are self-organizing, self-healing networks which provide wireless communication among vehicular and roadside devices. Applications in such networks can take advantage of simultaneous connections, thereby maximizing throughput and lowering latency. In order to take advantage of all radio interfaces of the vehicle and to provide good quality of service for vehicular applications, we developed a seamless flow mobility management architecture based on vehicular network application classes with network-based mobility management. Our goal is to minimize the time of flow connection exchange in order to comply with the minimum requirements of the vehicular application classes, as well as to maximize their throughput. Network Simulator 3 (NS-3) simulations were performed to analyse the behaviour of our architecture by comparing it with three other scenarios. As a result of this work, we observed that the proposed architecture presented a low handover time, with lower packet loss and lower delay.
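A small illustrative sketch (my own simplification, not the proposed architecture) of class-based flow assignment over multiple vehicle interfaces: each application class is bound to an available interface that meets its latency and throughput requirements, so a flow can be moved when conditions change. The interfaces, classes, and thresholds are invented.

```python
# Bind each application class to the best available interface that meets its needs.
interfaces = {"802.11p": {"up": True,  "latency_ms": 10, "mbps": 6},
              "LTE":     {"up": True,  "latency_ms": 60, "mbps": 30},
              "Wi-Fi":   {"up": False, "latency_ms": 20, "mbps": 50}}

classes = {"safety":       {"max_latency_ms": 20,  "min_mbps": 1},
           "infotainment": {"max_latency_ms": 200, "min_mbps": 10}}

def assign(flow_class):
    req = classes[flow_class]
    candidates = [(name, i) for name, i in interfaces.items()
                  if i["up"] and i["latency_ms"] <= req["max_latency_ms"]
                  and i["mbps"] >= req["min_mbps"]]
    # Prefer the lowest-latency interface among those that qualify.
    return min(candidates, key=lambda c: c[1]["latency_ms"])[0] if candidates else None

print(assign("safety"), assign("infotainment"))   # 802.11p  LTE
```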


Network Operations and Management Symposium | 2010

Enabling execution of service workflows in grid/cloud hybrid systems

Luiz F. Bittencourt; Carlos R. Senna; Edmundo Roberto Mauro Madeira

Cloud computing systems provide on-demand access to computational resources for dedicated use. Grid computing allows users to share heterogeneous resources from multiple administrative domains applied to common tasks. In this paper we discuss the characteristics and requirements of a hybrid system composed of both grid and cloud technologies. We propose an infrastructure which is able to manage the execution of service workflows in such a system. We show how the infrastructure can be expanded by acquiring computational resources on demand from the cloud during workflow execution, and how it manages these resources and the workflow execution without user interference.
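A minimal conceptual sketch, assuming a simple elasticity rule rather than the paper's middleware: lease one cloud VM for each ready task the grid cannot absorb, and release leased VMs as the backlog drains. The figures are illustrative.

```python
# Elasticity rule: cloud VMs cover the ready tasks the grid cannot run right now.
def vms_needed(ready_tasks, idle_grid_nodes):
    return max(0, ready_tasks - idle_grid_nodes)

for ready, idle in [(5, 2), (8, 3), (1, 4)]:
    print(f"ready={ready}, idle grid nodes={idle} -> lease {vms_needed(ready, idle)} cloud VMs")
```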

Collaboration


Dive into Luiz F. Bittencourt's collaboration.

Top Co-Authors

Rafael L. Gomes
State University of Campinas

Carlos R. Senna
State University of Campinas

Eduardo Cerqueira
Federal University of Pará

Thiago A. L. Genez
State University of Campinas

Mario Gerla
University of California

Marcio R. M. Assis
State University of Campinas