Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gustavo Rostirolla is active.

Publication


Featured research published by Gustavo Rostirolla.


Future Generation Computer Systems | 2018

A lightweight plug-and-play elasticity service for self-organizing resource provisioning on parallel applications

Rodrigo da Rosa Righi; Vinicius Facco Rodrigues; Gustavo Rostirolla; Cristiano André da Costa; Eduardo Roloff; Philippe Olivier Alexandre Navaux

Today, cloud elasticity can benefit parallel applications in addition to its traditional targets, such as Web and business-critical demands. Elasticity consists of adapting the number of resources and processes at runtime, so users do not need to determine the best configuration beforehand. To accomplish this, the most common approaches use threshold-based reactive elasticity or time-consuming proactive elasticity. However, both present at least one problem: the need for prior user experience, poor handling of load peaks, manual parameter completion, or a design tied to a specific infrastructure and workload setting. In this context, we developed a hybrid elasticity service for master-slave parallel applications named Helpar. The proposal presents a closed-control-loop elasticity architecture that adapts the values of the lower and upper thresholds at runtime. The main scientific contribution is the Live Thresholding (LT) technique for controlling elasticity. LT is based on the TCP congestion-control algorithm and automatically manages the elasticity bounds to improve reactiveness in resource provisioning. The idea is to provide a lightweight plug-and-play service at the PaaS (Platform-as-a-Service) level of a cloud, in which users are completely unaware of the elasticity feature and only need to compile their applications with the Helpar prototype. For evaluation, we used a numerical integration application and OpenNebula to compare the Helpar execution against two scenarios: a set of static thresholds and a non-elastic application. The results demonstrate Helpar's lightweight nature and highlight its competitiveness in terms of application time (performance) and cost (performance × energy) metrics.
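A minimal sketch of how a TCP-inspired adaptive threshold could behave. The AIMD (additive-increase/multiplicative-decrease) policy, function name, and constants below are illustrative assumptions, not Helpar's actual Live Thresholding formula, which is defined in the paper.

    # Hypothetical AIMD-style threshold adaptation, inspired by TCP congestion
    # control: back off sharply when the load crosses the threshold, creep up
    # slowly otherwise.
    def adapt_upper_threshold(threshold, load, floor=0.5, ceil=0.95,
                              increase=0.02, decrease=0.5):
        if load >= threshold:
            threshold *= decrease  # multiplicative decrease on a load peak
        else:
            threshold += increase  # additive increase while the system is calm
        return min(max(threshold, floor), ceil)

    upper = 0.9
    for load in [0.4, 0.6, 0.95, 0.97, 0.5, 0.45]:
        upper = adapt_upper_threshold(upper, load)
        print(f"load={load:.2f} -> upper threshold={upper:.2f}")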


Concurrency and Computation: Practice and Experience | 2016

Joint-analysis of performance and energy consumption when enabling cloud elasticity for synchronous HPC applications

Rodrigo da Rosa Righi; Cristiano André da Costa; Vinicius Facco Rodrigues; Gustavo Rostirolla

A key characteristic of cloud computing is elasticity: automatically adjusting system resources to an application's workload. Reactive and horizontal approaches represent the traditional means of offering this capability, in which rule-condition-action statements with upper and lower thresholds trigger the instantiation or consolidation of compute nodes and virtual machines. Although elasticity can benefit many HPC (high-performance computing) scenarios, it also imposes significant challenges on application development. In addition to the question of how to incorporate this feature into such applications, there is a problem associated with the performance-resource pair and, consequently, with energy consumption. Exploring this last difficulty further, we must be able to analyze elasticity effectiveness as a function of the employed thresholds, with clear metrics to properly compare elastic and non-elastic executions. In this context, this article explores elasticity metrics in two ways: (i) the use of a cost function that combines application time with different energy models; (ii) the extension of the speedup and efficiency metrics, commonly used to evaluate parallel systems, to cover cloud elasticity. To accomplish (i) and (ii), we developed an elasticity model known as AutoElastic, which reorganizes resources automatically across synchronous parallel applications. The results, obtained with the AutoElastic prototype using the OpenNebula middleware, are encouraging. For a CPU-bound application, an upper threshold close to 70% was the best option for obtaining good performance at a non-prohibitive elasticity cost, while 90% was the best option for an efficiency-driven execution.
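A minimal sketch of the kind of metrics the article combines: an energy-delay-style cost and speedup/efficiency extended to elastic executions, where the resource count varies over time. The exact AutoElastic formulas are given in the article; the function forms and numbers below are illustrative assumptions.

    # Illustrative elasticity metrics (assumed forms, not AutoElastic's exact ones).
    def cost(time_s, energy_j):
        # Combines performance (application time) with energy; lower is better.
        return time_s * energy_j

    def elastic_speedup(time_nonelastic_s, time_elastic_s):
        # Speedup of the elastic run over a fixed-resource baseline.
        return time_nonelastic_s / time_elastic_s

    def elastic_efficiency(speedup, avg_resources_elastic, resources_baseline):
        # Normalize by the *average* resources the elastic run used,
        # since the allocation changes over time.
        return speedup / (avg_resources_elastic / resources_baseline)

    s = elastic_speedup(1000.0, 700.0)
    e = elastic_efficiency(s, avg_resources_elastic=6.0, resources_baseline=4.0)
    print(f"speedup={s:.2f} efficiency={e:.2f} cost={cost(700.0, 5.2e5):.3g}")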


CLEI Electronic Journal | 2016

Impact of Thresholds and Load Patterns when Executing HPC Applications with Cloud Elasticity

Vinicius Facco Rodrigues; Gustavo Rostirolla; Rodrigo da Rosa Righi; Cristiano André da Costa; Jorge Luis Victória Barbosa

Elasticity is one of the best-known capabilities of cloud computing, largely deployed reactively using thresholds. In this approach, maximum and minimum limits drive resource allocation and deallocation actions, leading to the following questions: How can cloud users set threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article addresses these questions for iterative high-performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. We present an analysis of best practices and possible optimizations for the elasticity-HPC pair. The results show that the maximum threshold influences application time more than the minimum one. We conclude that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration when earlier activation could have reduced the application runtime.
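The rule-condition-action pattern the article studies can be summarized in a few lines. A minimal sketch with assumed threshold values and step sizes; it also illustrates why an upper threshold near 100% reacts late, since scale-out only fires after the load has already crossed it.

    # Reactive threshold-driven elasticity: one decision per monitoring cycle.
    def elasticity_action(cpu_load, n_nodes, lower=0.3, upper=0.7,
                          min_nodes=1, max_nodes=16):
        if cpu_load > upper and n_nodes < max_nodes:
            return n_nodes + 1   # scale out: system overloaded
        if cpu_load < lower and n_nodes > min_nodes:
            return n_nodes - 1   # scale in: resources idle
        return n_nodes           # load within the comfort zone

    nodes = 2
    for load in [0.8, 0.9, 0.6, 0.2, 0.15]:
        nodes = elasticity_action(load, nodes)
        print(f"load={load:.2f} -> {nodes} node(s)")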


ACM Symposium on Applied Computing | 2015

Rescheduling and checkpointing as strategies to run synchronous parallel programs on P2P desktop grids

Rodrigo da Rosa Righi; Alexandre Veith; Vinicius Facco Rodrigues; Gustavo Rostirolla; Cristiano André da Costa; Kleinner Farias; Antonio Marcos Alberti

Today, BSP (Bulk-Synchronous Parallel) is one of the most widely used models for writing tightly-coupled parallel programs. Clusters, and occasionally computational grids, are the resource substrates commonly used to run BSP applications. Here we investigate the use of collaborative computing and idle resources to execute this kind of demand, proposing a model named BSPonP2P to answer the following question: How can we develop an efficient and viable model to run BSP applications on P2P Desktop Grids? We answer it by providing both process rescheduling and checkpointing to deal with dynamism at the application and infrastructure levels, as well as with resource heterogeneity. A prototype ran over a subset of Grid5000, showing encouraging results for using collaboration and volatile resources for HPC.
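A minimal sketch of the checkpointing half of the strategy: save application state at superstep boundaries so a departing peer does not lose the computation. The toy state, file path, and interval are assumptions; the rescheduling half (migrating processes when peers' observed speeds diverge) is only indicated in a comment.

    import pickle

    def compute_superstep(state):
        # Toy local computation; a real BSP superstep also exchanges messages
        # and ends at a global synchronization barrier.
        return {"work_done": state["work_done"] + 1}

    def run_bsp(supersteps, ckpt_path="bsp_ckpt.pkl", ckpt_every=5):
        state = {"work_done": 0}
        for step in range(supersteps):
            state = compute_superstep(state)
            # Rescheduling would happen here: compare peers' speeds at the
            # barrier and migrate processes away from slow or leaving nodes.
            if step % ckpt_every == 0:
                with open(ckpt_path, "wb") as f:
                    pickle.dump((step, state), f)  # survives peer departure
        return state

    print(run_bsp(12))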


Sustainable Internet and ICT for Sustainability (SustainIT) | 2015

GreenHPC: a novel framework to measure energy consumption on HPC applications

Gustavo Rostirolla; Rodrigo da Rosa Righi; Vinicius Facco Rodrigues; Pedro Velho; Edson Luiz Padoin

Energy consumption in systems with a continuous power source is tightly related to both an application's computing time and its required CPU load. Considering HPC applications, which commonly require time precision at the nanosecond or millisecond scale, we observe a lack of systems that combine an appropriate sampling rate, low intrusiveness, and low cost. In this context, this article presents a model called GreenHPC that uses a Hall-effect sensor to precisely capture current at an arbitrary timeslice in HPC applications. Its scientific contribution lies in analyzing energy consumption at cluster scale, without application intrusiveness, showing the impact of keeping idle nodes on or turning them off to save energy. Furthermore, applying GreenHPC to the execution of a seismic wave application, we also identify the number of processors that yields the best energy-consumption index. Finally, we used the obtained results to infer a model for estimating the energy consumption of HPC applications. The work places special emphasis on reproducibility, so all data and hardware schematics are available for download.
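The measurement idea reduces to integrating instantaneous power over the sampling interval: energy = Σ V·I·Δt, with the current I read from the Hall-effect sensor at the chosen timeslice. A minimal sketch with synthetic samples and an assumed supply voltage.

    # Energy from sampled current: E = sum(V * I * dt) over all samples.
    def energy_joules(current_samples_a, voltage_v=220.0, dt_s=0.001):
        return sum(voltage_v * i * dt_s for i in current_samples_a)

    samples = [1.2, 1.3, 1.5, 1.4, 1.2]  # amperes, one per millisecond
    print(f"{energy_joules(samples):.3f} J over {len(samples)} ms")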


IEEE Transactions on Sustainable Computing | 2018

ElCity: An Elastic Multilevel Energy Saving Model for Smart Cities

Gustavo Rostirolla; Rodrigo da Rosa Righi; Jorge Luis Victória Barbosa; Cristiano André da Costa

As a result of rural and suburban migration to the cities, urban life has become a significant challenge for citizens and, particularly, for city administrators, who must manage the sustainable use of resources such as energy, water, and transportation. Smart cities are the broadest vision for addressing these challenges efficiently through real-time monitoring, enabling intelligent planning and sustainable urban development. However, accomplishing this requires tight integration among citizens, city devices, city administrators, and the data center platform where all data are stored, combined, and processed. In this context, we propose ElCity, a model that combines citizen and city-device data to enable elastic, multilevel management of a city's energy consumption. As a design decision, this management must occur automatically, without affecting the quality of services already offered. The main contribution of the ElCity model is its exploration of the cloud elasticity concept at multiple target levels (citizens' smartphones, city devices involved in public lighting, and data center nodes), turning resources at each level on or off according to demand. This article presents the ElCity architecture, detailing its modules distributed across the three data sources, along with an experiment that uses city-device and citizen data from Rome to explore energy savings. The results are promising: the Energy Monitor module estimates the energy consumption of elastic applications from CPU and memory traces with an average and median precision of 97.15 and 97.72 percent, respectively. Moreover, we propose a reduction of more than 90 percent in the energy spent on public lighting in Rome, obtained through an analysis of geolocation data from its citizens.
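A minimal sketch of estimating energy from CPU and memory utilization traces, as the Energy Monitor module does. The linear power model and its coefficients are assumptions for illustration; ElCity calibrates its own model against measured consumption.

    # Assumed linear power model: P = P_idle + a*cpu + b*mem, integrated over time.
    def estimate_energy_j(trace, p_idle_w=50.0, p_cpu_w=70.0, p_mem_w=15.0, dt_s=1.0):
        # trace: list of (cpu_util, mem_util) pairs in [0, 1], one per dt_s.
        return sum((p_idle_w + p_cpu_w * cpu + p_mem_w * mem) * dt_s
                   for cpu, mem in trace)

    trace = [(0.2, 0.3), (0.9, 0.5), (0.7, 0.4)]
    print(f"estimated energy: {estimate_energy_j(trace):.1f} J")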


Journal of Grid Computing | 2017

Towards Enabling Live Thresholding as Utility to Manage Elastic Master-Slave Applications in the Cloud

Vinicius Facco Rodrigues; Rodrigo da Rosa Righi; Gustavo Rostirolla; Jorge Luis Victória Barbosa; Cristiano André da Costa; Antonio Marcos Alberti; Victor Chang

The elasticity feature of cloud computing has proven pertinent for parallel applications, since users do not need to determine the best number of processes/resources beforehand. To accomplish this, the most common approaches use threshold-based reactive elasticity or time-consuming proactive elasticity. However, both present at least one problem: the need for prior user experience, poor handling of load peaks, manual parameter completion, or a design tied to a specific infrastructure and workload setting. In this regard, we developed a hybrid elasticity service for Master-Slave parallel applications named Helpar (Hybrid Elasticity Model for Parallel Applications). As a parameterless model, Helpar presents a closed-control-loop elasticity architecture that adapts the values of the lower and upper thresholds at runtime. We thus intend to provide a practical and effortless realization of the cloud elasticity and parallel computing duet, delivering this capability as a plug-and-play utility to end users. Besides presenting Helpar, our purpose is to compare it with our previous work on reactive elasticity, called AutoElastic. We explore different metrics, including application time, energy consumption, and cost, as well as distinct types of workloads, when executing a scientific HPC application. The results demonstrate Helpar's lightweight nature and highlight its competitiveness in terms of application time and cost (performance × energy) metrics. In other words, hand-tuning thresholds in AutoElastic is often responsible for the best results, but the procedure can be time-consuming and remains optimized for a particular application and infrastructure setting.


International Conference on RFID | 2016

Exploring cloud elasticity on developing an EPCGlobal-compliant middleware

Rodrigo da Rosa Righi; Eduardo Souza dos Reis; Gustavo Rostirolla; Cristiano André da Costa; Antonio Marcos Alberti

The digital universe has been growing at significant rates in recent years. One of the main drivers of this growth is the Internet of Things (IoT), which requires middleware capable of handling the increased data volume in real time. Solutions modeled at the software, hardware, and/or architecture levels have limitations in handling such load, facing a scalability problem in the IoT scope. In this context, this article presents a model named Eliot (Elasticity-driven Internet of Things), which combines cloud and high-performance computing to address the IoT scalability problem in EPCglobal-compliant architectures. Based on the Eliot model, we developed a prototype that can run as a plug-in solution alongside any current EPCglobal-compliant middleware. The results are encouraging, showing significant performance gains when comparing elastic and non-elastic executions.


International Conference of the Chilean Computer Science Society | 2016

Proposal of a network congestion-aware RFID model for online management of assets

Leandro Andrioli; Rodrigo da Rosa Righi; Gustavo Rostirolla; Cristiano André da Costa

This article presents ACMA (Automatic Control and Management of Assets using RFID), a model that applies context awareness to manage and monitor corporate assets in companies with multiple units. ACMA offers a centralized point of access in the cloud, where administrators can obtain online data about each asset from all companies previously registered in the system. Given the potentially huge amount of sensor data at each company, our distinguishing approach considers network congestion to control the data update interval on the path from the companies to the cloud. The idea is to pursue reliability and integrity in network operations, without losing or corrupting data when uploading it to the cloud. This article thus describes the ACMA model, its architecture, and its algorithms. We developed a prototype that clearly demonstrates the benefits of adaptivity when transferring RFID data to the cloud.
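A minimal sketch of the congestion-aware adaptation ACMA describes: lengthen the upload interval when the path to the cloud looks congested, shorten it when the path is healthy. Using round-trip time as the congestion signal, and all constants below, are assumptions for illustration.

    # Adapt the RFID-data upload interval to a congestion signal (assumed: RTT).
    def next_interval_s(interval_s, rtt_ms, rtt_baseline_ms=50.0,
                        min_s=5.0, max_s=300.0):
        if rtt_ms > 2 * rtt_baseline_ms:
            interval_s *= 2                            # back off under congestion
        else:
            interval_s = max(min_s, interval_s * 0.8)  # recover gradually
        return min(interval_s, max_s)

    interval = 10.0
    for rtt in [40, 45, 180, 210, 60, 50]:
        interval = next_interval_s(interval, rtt)
        print(f"rtt={rtt} ms -> upload every {interval:.1f} s")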


Green Computing and Communications | 2016

Elastic Management of Physical Spaces and Objects in Multi-Hospital Environments

Rodrigo da Rosa Righi; Gustavo Rostirolla; Cristiano André da Costa; Murillo Goulart; Erico Rocha

The fast growth and aging of the population are among today's major challenges for medical treatment in public and private hospitals. In this context, particularly regarding the health equipment belonging to each hospital, governments and administrators still face difficulties in overseeing the tracking, allocation, and management of such equipment in real time. The lack of a global overview of what each hospital has invariably leads to wasted resources in some places and shortages in others, resulting in deficient service at the regional or national scale. Here we propose the EMH (Elastic Multi-hospital Management) model, which uses cloud elasticity to provide a distributed framework for managing physical spaces and objects in real time across multiple healthcare institutions. Our solution consists of a centralized management control center that uses cloud elasticity to enlarge or reduce the number of virtual machines supporting the variable incoming demands from the hospitals. The idea is to support real-time decision making by managers, who can, for instance, plan expansions, new health policies, or equipment exchanges between hospitals. EMH is based on RFID (radio-frequency identification) sensor tracking, combined with the client-server Web Service concept, to detect the motion and usage of objects. Finally, we developed a prototype of the EMH model that highlights the benefits of providing a single administration point and of handling the incoming workload with cloud elasticity to serve user requests within an acceptable response time (defined here as under 200 ms).
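A minimal sketch of the response-time-driven scaling EMH applies: grow the virtual machine pool when requests approach the 200 ms target and consolidate when there is ample headroom. The step sizes and the scale-in margin are illustrative assumptions.

    # Keep response time under a target (200 ms in the paper) by resizing the VM pool.
    def scale_vms(n_vms, avg_response_ms, target_ms=200.0, min_vms=1, max_vms=32):
        if avg_response_ms > target_ms and n_vms < max_vms:
            return n_vms + 1   # add a VM: response-time target at risk
        if avg_response_ms < 0.5 * target_ms and n_vms > min_vms:
            return n_vms - 1   # consolidate: plenty of headroom
        return n_vms

    vms = 2
    for rt_ms in [150, 230, 260, 180, 80, 70]:
        vms = scale_vms(vms, rt_ms)
        print(f"response={rt_ms} ms -> {vms} VM(s)")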

Collaboration


Dive into Gustavo Rostirolla's collaborations.

Top Co-Authors

Rodrigo da Rosa Righi (Universidade do Vale do Rio dos Sinos)
Cristiano André da Costa (Universidade do Vale do Rio dos Sinos)
Vinicius Facco Rodrigues (Universidade do Vale do Rio dos Sinos)
Jorge Luis Victória Barbosa (Universidade do Vale do Rio dos Sinos)
Eduardo Souza dos Reis (Universidade do Vale do Rio dos Sinos)
Gabriel Souto Fischer (Universidade do Vale do Rio dos Sinos)
Victor Chang (Xi'an Jiaotong-Liverpool University)
Alexandre Veith (Universidade do Vale do Rio dos Sinos)
Edson Luiz Padoin (Universidade Federal do Rio Grande do Sul)