Publications

Featured research published by Andrea Schwertner Charão.


Journal of Computer Science | 2014

MapReduce Challenges on Pervasive Grids

Luiz Angelo Steffenel; Olivier Flauzac; Andrea Schwertner Charão; Patricia Pitthan Barcelos; Benhur de Oliveira Stein; Guilherme W. Cassales; Sergio Nesmachnow; Javier Rey; Matias Cogorno; Manuele Kirsch-Pinheiro; Carine Souveyet

This study presents advances in designing and implementing scalable techniques to support the development and execution of MapReduce applications on pervasive distributed computing infrastructures, in the context of the PER-MARE project. A pervasive framework for MapReduce applications is very useful in practice, especially for scientific, enterprise and educational centers that have many unused or underused computing resources which could be fully exploited to solve relevant problems demanding large computing power, such as scientific computing applications and big data processing. In this study, we propose multiple techniques to support volatility and heterogeneity in MapReduce, following two complementary approaches: improving the Apache Hadoop middleware by adding context-awareness and fault-tolerance features, and providing an alternative pervasive grid implementation fully adapted to dynamic environments. The main design and implementation decisions for both alternatives are described and validated through experiments, demonstrating that our approaches provide high reliability when executing on pervasive environments. The analysis of the experiments also yields several insights into the requirements and constraints of dynamic and volatile systems, reinforcing the importance of context-aware information and advanced fault-tolerance features for providing efficient and reliable MapReduce services on pervasive grids.
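
For readers less familiar with the programming model that PER-MARE builds on, the sketch below shows the classic word-count job written against the standard Hadoop MapReduce API. It is a generic illustration of the map and reduce phases, not code from the project itself.

```java
// Minimal Hadoop MapReduce word count, shown only to illustrate the
// programming model that PER-MARE targets; not code from the project.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer it = new StringTokenizer(value.toString());
      while (it.hasMoreTokens()) {
        word.set(it.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts collected for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```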


2013 Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing | 2013

PER-MARE: Adaptive Deployment of MapReduce over Pervasive Grids

Luiz Angelo Steffenel; Olivier Flauzac; Andrea Schwertner Charão; Patricia Pitthan Barcelos; Benhur de Oliveira Stein; Sergio Nesmachnow; Manuele Kirsch Pinheiro; Daniel Diaz

MapReduce is a parallel programming paradigm successfully used to perform computations on massive amounts of data, being widely deployed on cluster, grid and cloud infrastructures. Interestingly, while the emergence of cloud infrastructures has opened new perspectives, several enterprises hesitate to put sensitive data on the cloud and prefer to rely on internal resources. In this paper we introduce the PER-MARE initiative, which aims at proposing scalable techniques to support existing MapReduce data-intensive applications in the context of loosely coupled networks such as pervasive and desktop grids. By relying on the MapReduce programming model, PER-MARE proposes to explore the potential advantages of using free, unused resources available at enterprises as pervasive grids, alone or in a hybrid environment. This paper presents the main lines that guide the PER-MARE approach and some preliminary results.


Procedia Computer Science | 2014

Performance Improvement of Data Mining in Weka through GPU Acceleration

Tiago Augusto Engel; Andrea Schwertner Charão; Manuele Kirsch-Pinheiro; Luiz Angelo Steffenel

Data mining tools may be computationally demanding, so there is increasing interest in parallel computing strategies to improve their performance. The popularization of Graphics Processing Units (GPUs) has increased the computing power of current desktop computers, but desktop-based data mining tools do not usually take full advantage of these architectures. This paper explores an approach to improve the performance of Weka, a popular data mining tool, through parallelization on GPU-accelerated machines. From the profiling of Weka's object-oriented code, we chose to parallelize a matrix multiplication method using state-of-the-art tools. The implementation was merged into Weka so that we could analyze the impact of parallel execution on its performance. The results show a significant speedup on the target parallel architectures compared to the original, sequential Weka code.
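
As an illustration of the kind of hotspot targeted by this work, the sketch below shows a dense matrix multiplication parallelized with plain Java parallel streams. The class name and the use of CPU threads (rather than the GPU offloading actually evaluated in the paper) are assumptions made for the example.

```java
// Illustrative sketch only: a dense matrix multiplication hotspot of the kind
// identified by profiling Weka, parallelized here over rows with Java parallel
// streams. This is not the authors' GPU implementation.
import java.util.stream.IntStream;

public final class MatMul {

  // C = A * B, with A of size n x k and B of size k x m (row-major double[][]).
  static double[][] multiply(double[][] a, double[][] b) {
    int n = a.length, k = b.length, m = b[0].length;
    double[][] c = new double[n][m];
    // Each result row is independent, so rows can be computed in parallel.
    IntStream.range(0, n).parallel().forEach(i -> {
      for (int p = 0; p < k; p++) {
        double aip = a[i][p];
        for (int j = 0; j < m; j++) {
          c[i][j] += aip * b[p][j];
        }
      }
    });
    return c;
  }
}
```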


Ambient Intelligence | 2016

Improving the performance of Apache Hadoop on pervasive environments through context-aware scheduling

Guilherme W. Cassales; Andrea Schwertner Charão; Manuele Kirsch-Pinheiro; Carine Souveyet; Luiz Angelo Steffenel

This article proposes to improve Apache Hadoop scheduling through a context-aware approach. Apache Hadoop is the most popular implementation of the MapReduce paradigm for distributed computing, but its design does not adapt automatically to the computing nodes' context and capabilities. By introducing context-awareness into Hadoop, we intend to dynamically adapt its scheduling to the execution environment. This is a necessary feature in the context of pervasive grids, which are heterogeneous, dynamic and shared environments. The solution has been incorporated into Hadoop and assessed through controlled experiments. The experiments demonstrate that context-awareness provides comparative performance gains, especially when some of the resources disappear during execution.
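
To make the idea of context-awareness more concrete, here is a minimal, hypothetical sketch of the kind of node context such a scheduler could collect and turn into a capacity hint. The class, the chosen metrics and the slot heuristic are illustrative assumptions, not the scheduler described in the article.

```java
// Hypothetical sketch of the node context a context-aware scheduler could
// collect; class and method names are illustrative, not Hadoop APIs.
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public final class NodeContext {

  /** Number of processors currently visible to the JVM on this node. */
  static int availableCores() {
    return Runtime.getRuntime().availableProcessors();
  }

  /** Recent system load average, or -1.0 if the platform does not report it. */
  static double loadAverage() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    return os.getSystemLoadAverage();
  }

  /** Heap memory still available to this JVM, in megabytes. */
  static long freeHeapMb() {
    Runtime rt = Runtime.getRuntime();
    return (rt.maxMemory() - rt.totalMemory() + rt.freeMemory()) / (1024 * 1024);
  }

  /**
   * A simple capacity hint: fewer concurrent tasks on loaded or small nodes.
   * A real scheduler would republish this value as the node context changes.
   */
  static int suggestedTaskSlots() {
    double load = Math.max(loadAverage(), 0.0);
    int idleCores = (int) Math.max(1, availableCores() - Math.round(load));
    return freeHeapMb() < 512 ? 1 : idleCores;
  }

  public static void main(String[] args) {
    System.out.printf("cores=%d load=%.2f freeMb=%d slots=%d%n",
        availableCores(), loadAverage(), freeHeapMb(), suggestedTaskSlots());
  }
}
```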


Ambient Intelligence | 2015

Performance improvement of data mining in Weka through multi-core and GPU acceleration: opportunities and pitfalls

Tiago Augusto Engel; Andrea Schwertner Charão; Manuele Kirsch-Pinheiro; Luiz Angelo Steffenel

Data mining tools may be computationally demanding, which leads to increasing interest in parallel computing strategies to improve their performance. While multi-core processors and Graphics Processing Unit (GPU) accelerators have increased the computing power of current desktop computers, we observe that desktop-based data mining tools do not yet take full advantage of these architectures. This paper investigates strategies to improve the performance of Weka, a popular data mining tool, through multi-core and GPU acceleration. Using performance profiling of Weka, we identify operations that could improve data mining performance when parallelized. We selected two of these operations and analyzed the impact of their parallel execution on Weka's performance. These experiments demonstrate that while significant speedups can be achieved, not all operations are amenable to parallelization, which reinforces the need for a careful and well-studied selection of candidates.
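
The "pitfalls" part of the title can be illustrated with a small, machine-dependent micro-benchmark: for a cheap per-element operation, parallel execution may bring no benefit at all. The code below is an assumption-laden sketch for illustration, not a measurement from the paper.

```java
// Illustrative micro-benchmark of the pitfall discussed above: for a cheap
// per-element operation, the cost of splitting and joining a parallel stream
// can cancel out (or exceed) the gain from extra cores. Timings depend on the
// machine and JVM warm-up; this is not a benchmark taken from the paper.
import java.util.stream.IntStream;

public final class ParallelPitfall {

  public static void main(String[] args) {
    int n = 1_000_000;

    long t0 = System.nanoTime();
    long seqSum = IntStream.range(0, n).asLongStream().map(i -> i * 2).sum();
    long seqMs = (System.nanoTime() - t0) / 1_000_000;

    long t1 = System.nanoTime();
    long parSum = IntStream.range(0, n).asLongStream().parallel().map(i -> i * 2).sum();
    long parMs = (System.nanoTime() - t1) / 1_000_000;

    // For such a lightweight operation the parallel version is often no faster
    // than the sequential one; profiling before parallelizing avoids this trap.
    System.out.printf("sequential: sum=%d in %d ms; parallel: sum=%d in %d ms%n",
        seqSum, seqMs, parSum, parMs);
  }
}
```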


Procedia Computer Science | 2015

Context-aware Scheduling for Apache Hadoop over Pervasive Environments

Guilherme W. Cassales; Andrea Schwertner Charão; Manuele Kirsch Pinheiro; Carine Souveyet; Luiz Angelo Steffenel

This article proposes to improve Apache Hadoop scheduling through the use of context-awareness. Apache Hadoop is the most popular implementation of the MapReduce paradigm for distributed computing, but its design does not adapt automatically to the computing nodes' context and capabilities. By introducing context-awareness into Hadoop, we intend to dynamically adapt its scheduling to the execution environment. This is a necessary feature in the context of pervasive grids, which are heterogeneous, dynamic and shared environments. The solution has been incorporated into Hadoop and evaluated through controlled experiments. The experiments demonstrate that context-awareness provides comparative performance gains, especially when some of the resources disappear during execution.


Latin American Network Operations and Management Symposium | 2009

FlexVAPs: a system for managing virtual appliances in heterogeneous virtualized environments

Diego Kreutz; Andrea Schwertner Charão

Virtual appliances have emerged as an important concept in systems virtualization. They are conceived as data packages that can be electronically delivered and easily shared and distributed. A virtual appliance usually contains at least an operating system, libraries and tools targeted at providing a specific service on top of a virtual machine monitor. As virtualization technologies are rapidly evolving, there are many virtual machine monitors on the market, and it is increasingly common to have multiple, different monitors coexisting in the same software infrastructure. In this paper, we propose a software system for managing virtual appliances in this kind of heterogeneous virtualized infrastructure. This system, named FlexVAPs, provides users with a flexible and adaptable virtual appliance management system, allowing them to instantiate virtual machines over different types of virtual machine monitors. To validate our ideas, we designed and prototyped a system architecture based on weakly coupled, specialized components. This allows us to extend and use the system in different computing environments, easily meeting users' particular requirements. At the end of this paper, we compare FlexVAPs with two major solutions targeted at virtual appliance management, so as to highlight their respective benefits and drawbacks.
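
One possible reading of the "weakly coupled and specialized components" design is sketched below: a common driver interface hides monitor-specific details, and a manager dispatches appliances to the registered driver. All names and the record-based appliance description are hypothetical, not the FlexVAPs code.

```java
// Hypothetical sketch of the loosely coupled design described above: a common
// interface hides differences between virtual machine monitors, and a driver
// is selected per appliance at deployment time. Names are illustrative.
import java.util.Map;

/** Minimal description of a virtual appliance package. */
record Appliance(String name, String imagePath, int vcpus, int memoryMb) {}

/** One driver per supported virtual machine monitor (KVM, Xen, VirtualBox, ...). */
interface MonitorDriver {
  String monitorName();
  void deploy(Appliance appliance);      // create and start a VM from the appliance
  void undeploy(String applianceName);   // stop and remove the VM
}

final class ApplianceManager {
  private final Map<String, MonitorDriver> drivers;

  ApplianceManager(Map<String, MonitorDriver> drivers) {
    this.drivers = drivers;
  }

  /** Dispatch the appliance to the driver of the requested monitor. */
  void deploy(Appliance appliance, String monitor) {
    MonitorDriver driver = drivers.get(monitor);
    if (driver == null) {
      throw new IllegalArgumentException("No driver registered for " + monitor);
    }
    driver.deploy(appliance);
  }
}
```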


International Conference on Computational Science and Its Applications | 2017

Test case generation from BPMN models for automated testing of Web-based BPM applications

Jessica Lasch de Moura; Andrea Schwertner Charão; João Carlos D. Lima; Benhur de Oliveira Stein

This article proposes an approach to generate test cases from BPMN models for automated testing of Web applications implemented with the support of BPM suites. The work is primarily focused on functional testing and has the following objectives: (i) identify execution paths from flow analysis of the BPMN model and (ii) generate the initial code of test scripts to be run on a given Web application testing tool. Throughout the article, we describe the design and implementation of a solution to achieve these goals, targeting automated tests using Selenium and Cucumber. The approach was applied to processes from a public repository and was able to generate test scenarios from different BPMN models.
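
As a rough sketch of the second objective (generating initial test code from an execution path), the example below turns a hypothetical list of BPMN task names into a Cucumber/Gherkin scenario skeleton. The path, the step wording and the class name are invented for illustration and do not come from the paper.

```java
// Illustrative sketch: turning one execution path extracted from a BPMN model
// into a Cucumber (Gherkin) scenario skeleton. Example data is hypothetical.
import java.util.List;

public final class BpmnPathToGherkin {

  /** Build a Gherkin scenario where each BPMN task on the path becomes a step. */
  static String toScenario(String processName, List<String> taskPath) {
    StringBuilder sb = new StringBuilder();
    sb.append("Feature: ").append(processName).append('\n');
    sb.append("  Scenario: Execution path ").append(String.join(" -> ", taskPath)).append('\n');
    sb.append("    Given the process \"").append(processName).append("\" is started\n");
    for (String task : taskPath) {
      sb.append("    When the user completes the task \"").append(task).append("\"\n");
    }
    sb.append("    Then the process instance reaches its end event\n");
    return sb.toString();
  }

  public static void main(String[] args) {
    // Hypothetical path obtained from the flow analysis of a BPMN model.
    System.out.println(toScenario("Order handling",
        List.of("Register order", "Check stock", "Confirm payment")));
  }
}
```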


Ciência e Natura | 2013

Monitoring of the 2012/2013 irrigated rice harvest in Rio Grande do Sul through numerical modeling

Nereu Augusto Streck; Michel Rocha da Silva; Hamilton Telles Rosa; Lidiane Cristine Walter; Rômulo Pulcinelli Benedetti; Cristiano de Carli; Andrea Schwertner Charão; Elio Marcolin; Simone Erotildes Teleginski Ferraz; Enio Marchesan

The objective of this work was to monitor the irrigated rice harvest during the 2012/2013 growing season for the state of Rio Grande do Sul. The SimulArroz model was used for the harvest monitoring, considering the early-maturing group of rice cultivars and running simulations at high, medium and low technology levels. The average observed yield was 7481.7 kg/ha, while the average yield simulated with SimulArroz at the high, medium and low technology levels was 8732.0 kg/ha, 7041.7 kg/ha and 5597.9 kg/ha, respectively. The model showed good sensitivity to the meteorological variations among the regions of Rio Grande do Sul, indicating higher rice yield potential for the Fronteira Oeste and Campanha regions. Therefore, the SimulArroz model satisfactorily simulated irrigated rice yield for the municipalities in which it was tested and can be used for harvest monitoring.


International Conference on Computational Science and Its Applications | 2012

Impact of pay-as-you-go cloud platforms on software pricing and development: a review and case study

Fernando Pires Barbosa; Andrea Schwertner Charão

One of the major highlights of cloud computing is the pay-as-you-go pricing model, in which one pays according to the amount of resources consumed. Some cloud platforms already offer the pay-as-you-go model, and this creates a new scenario in which rational consumption of computing resources gains importance. In this paper, we address the impact of this new approach on software pricing and software development. Our hypothesis is that hardware consumption may directly affect the software vendor's profit, so it may be necessary to adapt some software development practices. In this direction, we discuss the need to revise well-established models such as COCOMO II, as well as some aspects related to requirements engineering and benchmarking tools. We also present a case study showing that disregarding the rational consumption of resources can generate waste that may reduce the software vendor's profit.
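
A back-of-the-envelope example may help fix the argument. The numbers below are entirely hypothetical; they only show how per-user resource consumption feeds directly into the vendor's margin under pay-as-you-go pricing.

```java
// Back-of-the-envelope illustration of the argument above, with entirely
// hypothetical numbers: under pay-as-you-go pricing, an inefficient release
// that consumes more compute-hours per user eats directly into the margin.
public final class PayAsYouGoMargin {

  static double monthlyMargin(double pricePerUser, double computeHoursPerUser,
                              double costPerComputeHour, int users) {
    double revenue = pricePerUser * users;
    double infraCost = computeHoursPerUser * costPerComputeHour * users;
    return revenue - infraCost;
  }

  public static void main(String[] args) {
    // Same subscription price, same user base; only resource consumption differs.
    double efficient = monthlyMargin(10.0, 2.0, 0.50, 1000);   // 2 compute-hours/user
    double wasteful  = monthlyMargin(10.0, 6.0, 0.50, 1000);   // 6 compute-hours/user
    System.out.printf("efficient release margin: $%.2f%n", efficient); // $9000.00
    System.out.printf("wasteful release margin:  $%.2f%n", wasteful);  // $7000.00
  }
}
```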

Collaboration

Top co-authors of Andrea Schwertner Charão and their affiliations.

Luiz Angelo Steffenel, University of Reims Champagne-Ardenne
Benhur de Oliveira Stein, Universidade Federal de Santa Maria
Haroldo Fraga de Campos Velho, National Institute for Space Research
João Carlos D. Lima, Universidade Federal de Santa Maria
Guilherme W. Cassales, Universidade Federal de Santa Maria
Eduardo Kessler Piveta, Universidade Federal de Santa Maria
Gustavo Rissetti, Universidade Federal de Santa Maria
Roberto P. Souto, National Institute for Space Research