
Publication


Featured research published by Raquel Vigolvino Lopes.


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2010

Business-driven capacity planning of a cloud-based it infrastructure for the execution of Web applications

Raquel Vigolvino Lopes; Francisco Vilar Brasileiro; Paulo Ditarso Maciel

With the emergence of the cloud computing paradigm and the continuous search to reduce the cost of running Information Technology (IT) infrastructures, we are currently experiencing an important change in the way these infrastructures are assembled, configured and managed. In this research we consider the problem of managing a computing infrastructure whose processing elements are acquired from infrastructure-as-a-service (IaaS) providers and used to support the execution of long-lived online transaction processing applications, such as Web applications, whose workloads experience huge fluctuations over time. Resources can be acquired from IaaS providers in different ways, each with its own pricing scheme and associated quality of service. One acquisition model allows resources to be reserved for long periods of usage at a reduced usage price, while others allow dedicated resources to be instantiated on demand at any time, subject to the availability of resources, at a usage price that is usually higher than that paid for reserved resources. In this context, the problem we address in this paper is how the provider of a Web application should plan its long-term reservation contracts with an IaaS provider so as to increase its profitability. We propose a model that can be used to guide this capacity planning activity, and we use the model to evaluate the gains that can be achieved with the judicious planning of the infrastructure capacity in a number of scenarios. We show that the gains can be substantial, especially when the load variation between normal operational periods and surge periods is large.
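The trade-off the abstract describes can be made concrete with a small sketch. This is an illustrative toy, not the paper's model: the prices and the demand trace below are hypothetical, and the planner simply searches every reservation level against a demand history.

```python
# Hypothetical per-instance-hour prices; reserved capacity is cheaper
# but is paid for even when it sits idle.
RESERVED_PRICE = 0.04
ON_DEMAND_PRICE = 0.10

def hourly_cost(demand, reserved):
    """Cost of one hour: demand above the reserved level is served on demand."""
    on_demand = max(0, demand - reserved)
    return reserved * RESERVED_PRICE + on_demand * ON_DEMAND_PRICE

def best_reservation(demand_trace):
    """Exhaustively pick the reservation level that minimizes total cost
    over a trace of hourly demands (one integer per hour)."""
    candidates = range(max(demand_trace) + 1)
    return min(candidates,
               key=lambda r: sum(hourly_cost(d, r) for d in demand_trace))

# A workload with a low baseline and a short surge: reserving for the
# baseline (10 instances) beats reserving for the peak.
trace = [10] * 20 + [100] * 4
baseline = best_reservation(trace)  # -> 10
```

The example reflects the paper's qualitative finding: when surges are short relative to normal operation, it pays to reserve only the baseline and absorb peaks with on-demand instances.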


IEEE International Conference on Cloud Computing Technology and Science | 2010

Investigating Business-Driven Cloudburst Schedulers for E-Science Bag-of-Tasks Applications

David Candeia; Ricardo Araujo; Raquel Vigolvino Lopes; Francisco Vilar Brasileiro

The new ways of doing science, rooted in the unprecedented processing, communication and storage infrastructures that have become available to scientists, are collectively called e-Science. Many research labs now need non-trivial computational power to run e-Science applications. Grid and volunteer computing are well-established solutions that cater to this need, but they are not accessible to all labs and institutions. Moreover, there is uncertainty about the amount of resources that will be available in such infrastructures in the future, which prevents researchers from planning their activities in a way that guarantees that deadlines will be met. With the emergence of the cloud computing paradigm come new opportunities. One possibility is to run e-Science activities on resources acquired on demand from cloud providers. However, although it is very low, there is a cost associated with the usage of cloud resources. Besides that, the amount of resources that can be simultaneously acquired is, in practice, limited. Another possibility is the older idea of composing hybrid infrastructures in which the huge amount of computational resources shared by grid infrastructures is used whenever possible and extra capacity is acquired from cloud computing providers. We investigate how to schedule e-Science activities in such hybrid infrastructures so that deadlines are met and costs are reduced.


Journal of Parallel and Distributed Computing | 2012

Business-driven short-term management of a hybrid IT infrastructure

Paulo Ditarso Maciel; Francisco Vilar Brasileiro; Ricardo Araújo Santos; David Candeia; Raquel Vigolvino Lopes; Marcus Carvalho; Renato Miceli; Nazareno Andrade; Miranda Mowbray

We consider the problem of managing a hybrid computing infrastructure whose processing elements comprise in-house dedicated machines, virtual machines acquired on demand from a cloud computing provider through short-term reservation contracts, and virtual machines made available by the remote peers of a best-effort peer-to-peer (P2P) grid. Each of these resources has a different cost basis and different associated quality-of-service guarantees. The applications that run in this hybrid infrastructure are characterized by a utility function: the utility gained with the completion of an application depends on the time taken to execute it. We take a business-driven approach to managing this infrastructure, aiming at maximizing the profit yielded, that is, the utility produced as a result of the applications that are run minus the cost of the computing resources used to run them. We propose a heuristic to be used by a contract planner agent that establishes contracts with the cloud computing provider so as to balance the cost of running an application against the utility obtained from its execution, with the goal of producing a high overall profit. Our analytical results show that the simple heuristic proposed achieves very high relative efficiency in the use of the hybrid infrastructure. We also demonstrate that the ability to estimate the grid's behaviour is an important condition for making contracts that allow such relative efficiency values to be achieved. On the other hand, our simulation results with realistic prediction errors show only a modest improvement in the profit achieved by the simple heuristic proposed, when compared both to a heuristic that does not consider the grid when planning contracts but still uses it, and to another that is completely oblivious to the existence of the grid. This calls for the development of more accurate predictors for the availability of P2P grids, and more elaborate heuristics that can better deal with the several sources of non-determinism present in this hybrid infrastructure.
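The business-driven objective described above, profit as utility minus cost, can be sketched in a few lines. This is not the paper's actual utility function; the linear decay past a soft deadline and all numbers are hypothetical stand-ins.

```python
def utility(execution_time, max_utility=100.0, deadline=10.0, penalty_rate=5.0):
    """Utility of completing one application: full value up to a soft
    deadline, then linear decay down to zero (hypothetical shape)."""
    if execution_time <= deadline:
        return max_utility
    return max(0.0, max_utility - penalty_rate * (execution_time - deadline))

def profit(apps, cost_per_hour):
    """apps: list of (execution_time_hours, machine_hours_used) pairs.
    Profit = total utility yielded minus total resource cost."""
    return sum(utility(t) - cost_per_hour * h for t, h in apps)
```

A contract planner in this setting trades cheaper (reserved or grid) resources, which may stretch execution times and erode utility, against more expensive on-demand resources that finish jobs sooner.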


International Green Computing Conference and Workshops | 2011

Assessing data deduplication trade-offs from an energy and performance perspective

Lauro Beltrão Costa; Samer Al-Kiswany; Raquel Vigolvino Lopes; Matei Ripeanu

The energy costs of running computer systems are a growing concern: for large data centers, recent estimates put these costs higher than the cost of the hardware itself. As a consequence, energy efficiency has become a pervasive theme in designing, deploying, and operating computer systems. This paper evaluates the energy trade-offs brought by data deduplication in distributed storage systems. Depending on the workload, deduplication can enable a lower storage footprint, reduce the I/O pressure on the storage system, and reduce network traffic, at the cost of increased computational overhead. From an energy perspective, data deduplication enables a trade-off between the energy consumed for additional computation and the energy saved by lower storage and network load. The main point our experiments and model bring home is the following: while for non-energy-proportional machines performance- and energy-centric optimizations have break-even points that are relatively close, for the newer generation of energy-proportional machines the break-even points are significantly different. An important consequence of this difference is that, with newer systems, there are higher energy inefficiencies when the system is optimized for performance.
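The break-even the abstract mentions can be illustrated with a back-of-the-envelope model. This is not the paper's model: it assumes a single power draw for both compute and transfer, and all throughput and power figures are hypothetical.

```python
def transfer_energy(nbytes, bandwidth_bytes_per_s, power_w):
    """Energy (joules) to move nbytes over the network at a given power draw."""
    return power_w * (nbytes / bandwidth_bytes_per_s)

def dedup_energy(nbytes, hash_rate_bytes_per_s, power_w):
    """Energy (joules) spent fingerprinting the data to find duplicates."""
    return power_w * (nbytes / hash_rate_bytes_per_s)

def saves_energy(nbytes, dup_ratio, bandwidth_bytes_per_s,
                 hash_rate_bytes_per_s, power_w):
    """True if the energy saved by not transferring duplicate data exceeds
    the energy spent computing fingerprints over all of it."""
    saved = transfer_energy(nbytes * dup_ratio, bandwidth_bytes_per_s, power_w)
    spent = dedup_energy(nbytes, hash_rate_bytes_per_s, power_w)
    return saved > spent
```

With a 50% duplicate ratio and hashing four times faster than the network, deduplication wins; at a 10% duplicate ratio it does not. The paper's point is that on energy-proportional machines the `power_w` terms on the two sides differ, which shifts this break-even away from the performance break-even.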


IEEE International Conference on Cloud Computing Technology and Science | 2015

Business-Driven Long-Term Capacity Planning for SaaS Applications

David Candeia; Ricardo Araújo Santos; Raquel Vigolvino Lopes

Capacity planning is one of the activities that Information Technology departments have carried out over the years; it aims at estimating the amount of resources needed to offer a computing service. This activity contributes to achieving high Quality of Service levels and to pursuing better economic results for companies. In the cloud computing context, one plausible scenario is to have Software-as-a-Service (SaaS) providers that build their IT infrastructure by acquiring resources from Infrastructure-as-a-Service (IaaS) providers. SaaS providers can reduce operational costs and complexity by buying instances from a reservation market, but then need to predict the number of instances needed in the long term. This work investigates how important capacity planning is in this context and how simple business-driven heuristics for long-term capacity planning impact the profit achieved by SaaS providers. Simulation experiments were performed using synthetic e-commerce workloads. Our analysis shows that the proposed heuristics increase SaaS provider profit by, on average, 9.6501 percent per year. Analysing these results, we demonstrate that capacity planning is still an important activity, contributing to the increase of SaaS providers' profit. Besides, good capacity planning may also avoid the bad reputation caused by unacceptable performance, a gain that is very hard to measure.


IEEE Transactions on Parallel and Distributed Systems | 2016

A Taxonomy of Job Scheduling on Distributed Computing Systems

Raquel Vigolvino Lopes; Daniel A. Menascé

Hundreds of papers on job scheduling for distributed systems are published every year, and it becomes increasingly difficult to classify them. Our analysis revealed that half of these papers are barely cited. This paper presents a general taxonomy for scheduling problems and solutions in distributed systems. This taxonomy was used to classify, and make publicly available the classification of, 109 scheduling problems and their solutions. These 109 problems were further clustered into ten groups based on the features of the taxonomy. The proposed taxonomy will help researchers build on prior art, increase the visibility of new research, and minimize redundant effort.


Grid Computing | 2010

Predicting the Quality of Service of a Peer-to-Peer Desktop Grid

Marcus Carvalho; Renato Miceli; Paulo Ditarso Maciel; Francisco Vilar Brasileiro; Raquel Vigolvino Lopes

Peer-to-peer (P2P) desktop grids have been proposed as an economical way to increase the processing capabilities of information technology (IT) infrastructures. In a P2P grid, a peer donates its idle resources to the other peers in the system and, in exchange, can use the idle resources of other peers when its processing demand surpasses its local computing capacity. Despite their cost-effectiveness, scheduling processing demands on IT infrastructures that encompass P2P desktop grids is difficult. At the root of this difficulty is the fact that the quality of the service provided by P2P desktop grids varies significantly over time. The research we report in this paper tackles the problem of estimating the quality of service of P2P desktop grids. We base our study on the OurGrid system, which implements an autonomous incentive mechanism based on reciprocity, called the Network of Favours (NoF). In this paper we propose a model for predicting the quality of service of a P2P desktop grid that uses the NoF incentive mechanism. The proposed model is able to estimate the amount of resources that will be available to a peer in the system at future instants of time. We also evaluate the accuracy of the model by running simulation experiments fed with field data. Our results show that, in the worst scenario, the proposed model is able to predict how much of a given demand for resources a peer will obtain from the grid with a mean prediction error of only 7.2%.


IEEE International Conference on Cloud Computing Technology and Science | 2012

Towards practical auto scaling of user facing applications

Lilia Rodrigues Sampaio; Raquel Vigolvino Lopes

Cloud computing has the purpose of providing computing services at different levels, from remote data storage and computing resources (Infrastructure as a Service, IaaS) to applications accessed through the Internet (Software as a Service, SaaS). From the perspective of an application provider, it is important to manage the capacity of the applications being offered. Typically, such applications are long-lived Web-based applications with highly variable workloads that are difficult to predict accurately. In order to manage the capacity of such applications efficiently, application providers have two options: run the applications on a statically over-provisioned infrastructure that is able to handle the expected peak load, or acquire resources on an on-demand basis from IaaS providers. In this paper we pursue the latter option. We aim at investigating the completeness and usability of a new service offered by IaaS providers, which has become known as "auto-scaling". This service allows the configuration of capacity management policies that are applied to dynamically decide on acquiring or releasing resource instances for a given application. Such policies have been studied over the last decade by researchers in academia. We here try to shed some light on the feasibility of using the new auto-scaling service to implement policies defined by researchers. To this end, we evaluate an implementation of important dynamic provisioning policies on top of the auto-scaling service, along with the cost of such a service, trying to establish a link between the current cloud market and the studies on dynamic provisioning of resources carried out in academia.
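The kind of capacity management policy an auto-scaling service lets providers configure is typically a reactive, threshold-based rule. The sketch below is a minimal illustration of that idea, not any specific provider's service; the thresholds, step size, and floor are hypothetical.

```python
def autoscale_step(current_instances, avg_utilization,
                   upper=0.70, lower=0.30, step=1, min_instances=1):
    """One evaluation period of a reactive auto-scaling policy:
    scale out above the upper utilization threshold, scale in below
    the lower one, otherwise hold steady. Returns the new fleet size."""
    if avg_utilization > upper:
        return current_instances + step                       # scale out
    if avg_utilization < lower:
        return max(min_instances, current_instances - step)   # scale in
    return current_instances                                  # stay put
```

Real auto-scaling services add refinements such as cooldown periods between actions and multi-step scaling, which is precisely the gap between academic policies and deployed services that the paper probes.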


Proceedings of the 9th Latin-American Conference on Pattern Languages of Programming | 2012

Distributed test agents: a pattern for the development of automatic system tests for distributed applications

Giovanni Farias; Ayla Dantas; Raquel Vigolvino Lopes; Dalton Dario Serey Guerrero

This paper presents a test pattern for developing automated system tests for distributed applications. System tests are those intended to test the whole, completely integrated application. Developing such tests is hard because it demands probing and analysing data from distributed objects that sometimes expose asynchronous operations. The Distributed Test Agents pattern is intended to guide testers in the development of automated system tests for distributed applications. During the development of the tests, testers can abstract away several details regarding the configuration of the distributed components and can use and access these components in a simple and synchronous way.


Integrated Network Management | 2011

Evaluating the impact of planning long-term contracts on the management of a hybrid IT infrastructure

Paulo Ditarso Maciel; Francisco Vilar Brasileiro; Raquel Vigolvino Lopes; Marcus Carvalho; Miranda Mowbray

The cloud computing market has emerged as an alternative for the provisioning of resources on a pay-as-you-go basis. This flexibility potentially allows clients of cloud computing solutions to reduce the total cost of ownership of their Information Technology infrastructures. On the other hand, this market-based model is not the only way to reduce costs. Among other proposed solutions, peer-to-peer (P2P) grid computing has been suggested as a way to enable a simpler economy for the trading of idle resources. In this paper, we consider an IT infrastructure which benefits from both of these strategies. In such a hybrid infrastructure, computing power can be obtained from in-house dedicated resources, from resources acquired from cloud computing providers, and from resources received as donations from a P2P grid. We take a business-driven approach to the problem and try to maximise the profit that can be achieved by running applications in this hybrid infrastructure. The execution of applications yields utility, while costs may be incurred when resources are used to run the applications, or even when they sit idle. We assume that resources made available by cloud computing providers can be either reserved in advance or bought on demand. We study the impact that long-term contracts established with the cloud computing providers have on the profit achieved. Anticipating the optimal contracts is not possible due to the many uncertainties in the system, which stem from the prediction error on the workload demand, the lack of guarantees on the quality of service of the P2P grid, and fluctuations in the future prices of on-demand resources. However, we show that the judicious planning of long-term contracts can lead to profits close to those given by an optimal contract set. In particular, we model the planning problem as an optimisation problem and show that the planning performed by solving it is robust to the inherent uncertainties of the system, producing profits that in some scenarios can be more than double those achieved by following some common rule-of-thumb approaches to choosing reservation contracts.

Collaboration

Top co-authors of Raquel Vigolvino Lopes:

- Francisco Vilar Brasileiro (Federal University of Campina Grande)
- Marcus Carvalho (Federal University of Campina Grande)
- Paulo Ditarso Maciel (Federal University of Campina Grande)
- Giovanni Farias (Federal University of Campina Grande)
- David Candeia (Federal University of Campina Grande)
- Renato Miceli (Federal University of Campina Grande)
- Ricardo Araújo Santos (Federal University of Campina Grande)
- Alessandro Fook (Federal University of Campina Grande)