Eduardo Roloff
Universidade Federal do Rio Grande do Sul
Publication
Featured research published by Eduardo Roloff.
IEEE International Conference on Cloud Computing Technology and Science | 2012
Eduardo Roloff; Matthias Diener; Alexandre Carissimi; Philippe Olivier Alexandre Navaux
High-Performance Computing (HPC) in the cloud has reached the mainstream and is currently a hot topic in the research community and the industry. The attractiveness of the cloud for HPC is the capability to run large applications on powerful, scalable hardware without needing to own or maintain this hardware. In this paper, we conduct a detailed comparison of HPC applications running on three cloud providers: Amazon EC2, Microsoft Azure, and Rackspace. We analyze three important characteristics of HPC, namely deployment facilities, performance, and cost efficiency, and compare them to a cluster of machines. For the experiments, we used the well-known NAS Parallel Benchmarks as an example of general scientific HPC applications to examine computational and communication performance. Our results show that HPC applications can run efficiently in the cloud. However, care must be taken when choosing the provider, as the differences between them are large. The best cloud provider depends on the type and behavior of the application, as well as the intended usage scenario. Furthermore, our results show that HPC in the cloud can have a higher performance and cost efficiency than a traditional cluster, by up to 27% and 41%, respectively.
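The cost-efficiency comparison described above can be illustrated with a small sketch. The metric here (runs per dollar, i.e. the inverse of the cost of one run) and all runtimes and hourly prices are hypothetical, chosen only to show the shape of the calculation; they are not the paper's measured values.

```python
# Sketch: comparing HPC cost efficiency across cloud providers.
# All runtimes and hourly prices below are illustrative, not measured.

def cost_efficiency(runtime_s: float, price_per_hour: float) -> float:
    """Higher is better: runs completed per dollar spent."""
    cost_of_one_run = runtime_s / 3600.0 * price_per_hour
    return 1.0 / cost_of_one_run

providers = {
    "Amazon EC2":      {"runtime_s": 820.0,  "price_per_hour": 1.60},
    "Microsoft Azure": {"runtime_s": 900.0,  "price_per_hour": 1.20},
    "Rackspace":       {"runtime_s": 1100.0, "price_per_hour": 1.00},
}

for name, p in providers.items():
    print(f"{name:16s} efficiency = {cost_efficiency(**p):.2f} runs/$")
```

A faster but pricier instance can still win on this metric, which is why the paper's "best provider" depends on the application's behavior.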
Future Generation Computer Systems | 2018
Rodrigo da Rosa Righi; Vinicius Facco Rodrigues; Gustavo Rostirolla; Cristiano André da Costa; Eduardo Roloff; Philippe Olivier Alexandre Navaux
Today, cloud elasticity can bring benefits to parallel applications, beyond the traditional targets of Web and business-critical demands. Elasticity consists of adapting the number of resources and processes at runtime, so users do not need to worry about the best choice beforehand. To accomplish this, the most common approaches use threshold-based reactive elasticity or time-consuming proactive elasticity. However, both present at least one problem: the need for previous user experience, poor handling of load peaks, manual tuning of parameters, or a design tied to a specific infrastructure and workload setting. In this context, we developed a hybrid elasticity service for master–slave parallel applications named Helpar. The proposal presents a closed-control-loop elasticity architecture that adapts the values of the lower and upper thresholds at runtime. The main scientific contribution is the Live Thresholding (LT) technique for controlling elasticity. LT is based on the TCP congestion algorithm and automatically manages the values of the elasticity bounds to improve reactiveness in resource provisioning. The idea is to provide a lightweight plug-and-play service at the PaaS (Platform-as-a-Service) level of a cloud, in which users are completely unaware of the elasticity feature and only need to compile their applications with the Helpar prototype. For the evaluation, we used a numerical integration application and OpenNebula to compare the Helpar execution against two scenarios: a set of static thresholds and a non-elastic application. The results demonstrate the lightweight nature of Helpar, besides highlighting its competitiveness in terms of application time (performance) and cost (performance × energy) metrics.
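TCP congestion control adapts its window by probing upward gently and backing off sharply on congestion. A minimal sketch of threshold adaptation in that spirit follows; the specific rules and constants (additive increase of 0.05, multiplicative decrease by half, the 0.5 and 0.95 bounds) are assumptions for illustration, not Helpar's actual Live Thresholding rules.

```python
# Sketch of a TCP-congestion-inspired adaptive upper threshold for scaling
# decisions. The constants and update rules are illustrative assumptions.

def adapt_threshold(upper: float, load: float) -> float:
    """Additive increase / multiplicative decrease of the scale-out threshold."""
    if load > upper:
        # Load crossed the bound: halve the threshold so the next
        # load peak triggers scale-out earlier (aggressive back-off).
        return max(0.5, upper * 0.5)
    # Otherwise probe upward slowly, so stable workloads are not
    # penalized by an overly sensitive threshold.
    return min(0.95, upper + 0.05)

upper = 0.8
for load in [0.3, 0.99, 0.6]:
    upper = adapt_threshold(upper, load)
    print(f"load={load:.2f} -> upper threshold={upper:.2f}")
```

The point of adapting the bound at runtime, as the abstract describes, is that the user never has to pick a good static threshold up front.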
Parallel, Distributed and Network-Based Processing | 2017
Eduardo Roloff; Matthias Diener; Luciano Paschoal Gaspary; Philippe Olivier Alexandre Navaux
Unlike traditional cluster systems, the Cloud Computing paradigm provides access to an execution environment without upfront investments in hardware and facilities. Due to the elasticity and the pay-per-use billing model, it is possible to configure experimental environments with minimal idle costs. In this paper, we perform an extensive evaluation of the major commercial public clouds. Our results show that performance degradation due to virtualization and other cloud overheads is insignificant. However, the network interconnection in the cloud still remains a large bottleneck for HPC application performance.
International Conference on Conceptual Structures | 2015
Emmanuell Diaz Carreño; Eduardo Roloff; Philippe Olivier Alexandre Navaux
Cloud Computing has emerged as a solution to perform large-scale scientific computing. The elasticity of the cloud and its pay-as-you-go model present an interesting opportunity for applications commonly executed on clusters or supercomputers. This paper presents the challenges of migrating a numerical weather prediction (NWP) application to a cloud computing infrastructure and executing it there. We compared the execution of this High-Performance Computing (HPC) application on a local cluster and in the cloud using different instance sizes. The experiments demonstrate that, although processing and networking create a limiting factor, storing input and output datasets in the cloud presents an attractive option to share results and to ease the deployment of a test-bed for a weather research platform. The results show that a cloud infrastructure can be used as a viable HPC alternative for numerical weather prediction software.
IEEE International Conference on High Performance Computing Data and Analytics | 2015
Emmanuell Diaz Carreño; Eduardo Roloff; Philippe Olivier Alexandre Navaux
Cloud Computing has emerged as a viable environment to perform scientific computation. The charging model and the elastic capability to allocate machines as needed are attractive for applications that traditionally execute on clusters or supercomputers. This paper presents our experiences of porting a weather prediction application to an IaaS cloud and executing it there. We compared the execution of this application on our local cluster against the execution at the IaaS provider. Our results show that processing and networking in the cloud create a limiting factor compared to a physical cluster. On the other hand, storing input and output data in the cloud presents a promising option to share results and to build a test-bed for a weather research platform in the cloud. Performance results show that a cloud infrastructure can be used as a viable alternative for HPC applications.
International Conference on Cloud Computing and Services Science | 2018
Eduardo Roloff; Matthias Diener; Luciano Paschoal Gaspary; Philippe Olivier Alexandre Navaux
Cloud computing providers offer a variety of instance sizes, types, and configurations that have different prices but can interoperate. As many parallel applications have heterogeneous computational demands, these different instance types can be exploited to reduce the cost of executing a parallel application while maintaining an acceptable performance. In this paper, we perform an analysis of load imbalance patterns with an intentionally-imbalanced artificial benchmark to discover which patterns can benefit from a heterogeneous cloud system. Experiments with this artificial benchmark as well as applications from the NAS Parallel Benchmark suite show that the price of executing an imbalanced application can be reduced substantially on a heterogeneous cloud for a variety of imbalance patterns, while maintaining acceptable performance. By using a heterogeneous cloud, cost efficiency was improved by up to 63%, while performance was reduced by less
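The core idea above, matching an imbalanced load pattern to a mix of instance types, can be sketched briefly. The instance names, speeds, prices, and the threshold-based assignment rule below are all hypothetical, chosen only to show how a heterogeneous assignment can cost less than a homogeneous one for an imbalanced pattern.

```python
# Sketch: mapping imbalanced task loads onto heterogeneous instance types.
# Instance specs and the assignment rule are illustrative assumptions.

instances = {  # name: (relative speed, $/hour), hypothetical values
    "large": (2.0, 0.40),
    "small": (1.0, 0.10),
}

task_loads = [8.0, 8.0, 1.0, 1.0, 1.0, 1.0]  # an imbalanced pattern

def plan(loads, threshold=4.0):
    """Heavy tasks go to the fast instance, light tasks to the cheap one."""
    return ["large" if load >= threshold else "small" for load in loads]

def hourly_cost(assignment):
    """Total hourly price of one instance per task."""
    return sum(instances[name][1] for name in assignment)

hetero = plan(task_loads)
homog = ["large"] * len(task_loads)
print(f"heterogeneous ${hourly_cost(hetero):.2f}/h "
      f"vs homogeneous ${hourly_cost(homog):.2f}/h")
```

In this toy pattern only the two heavy tasks pay for fast instances, which is the same effect the paper measures with its artificial benchmark and the NAS suite.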
Utility and Cloud Computing | 2017
Otávio Carvalho; Eduardo Roloff; Philippe Olivier Alexandre Navaux
Sensor networks have become ubiquitous -- present everywhere from personal smartphones to smart city deployments -- and are producing large volumes of data at increasing rates. Distributed event stream processing systems, in turn, are a specific kind of system that helps us parallelize event processing. They provide the capability to produce quick insights and decisions, in near real-time, on top of multiple data streams. However, current systems for large-scale processing do not focus on Internet of Things and sensor network workloads, which makes their performance decrease quickly as the workload size increases. To process large-scale events with acceptable latency percentiles and high throughput, specialized systems such as distributed event stream processing systems are needed. In this work, we propose an architecture for Internet of Things data workloads that combines sensor network data sources with distributed event stream processing systems, focused on smart grid data profiles. In our evaluations, the system was able to process up to 45K messages per second using 8 processing nodes, while providing stable latencies for micro-batches above 30 seconds.
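The micro-batch processing model mentioned in the evaluation can be sketched in a few lines: events are drained from a queue in fixed-size batches, and each batch is processed as a unit. The function and batch size below are illustrative; a real stream processor would aggregate sensor readings inside each batch rather than just count them.

```python
# Sketch of micro-batch event consumption, in the spirit of the
# stream-processing pipeline described above; sizes are made up.
from collections import deque

def process_in_batches(events, batch_size):
    """Consume events in fixed-size micro-batches; return per-batch counts."""
    queue = deque(events)
    batches = []
    while queue:
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        # A real system would aggregate smart-grid readings here.
        batches.append(len(batch))
    return batches

counts = process_in_batches(range(10), batch_size=4)
print(counts)  # [4, 4, 2]
```

Larger micro-batches amortize per-batch overhead (raising throughput) at the price of latency, which matches the abstract's observation that latencies stabilize for batches above 30 seconds.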
IEEE International Conference on High Performance Computing Data and Analytics | 2017
Otávio Carvalho; Manuel Garcia; Eduardo Roloff; Emmanuell Diaz Carreño; Philippe Olivier Alexandre Navaux
The advent of the Internet of Things is now part of our reality. Increasing amounts of data are continuously generated and monitored through widespread sensing technologies such as personal smartphones, large-scale smart city sensor deployments, and smart electrical grids.
European Conference on Parallel Processing | 2017
Eduardo Roloff; Matthias Diener; Emmanuell Diaz Carreño; Luciano Paschoal Gaspary; Philippe Olivier Alexandre Navaux
Public cloud providers offer a wide range of instance types, with different processing and interconnection speeds, as well as varying prices. Furthermore, the tasks of many parallel applications show different computational demands due to load imbalance. These differences can be exploited for improving the cost efficiency of parallel applications in many cloud environments by matching application requirements to instance types. In this paper, we introduce the concept of heterogeneous cloud systems consisting of different instance types to leverage the different computational demands of large parallel applications for improved cost efficiency. We present a mechanism that automatically suggests a suitable combination of instances based on a characterization of the application and the instance types. With such a heterogeneous cloud, we are able to improve cost efficiency significantly for a variety of MPI-based applications, while maintaining a similar performance.
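The suggestion mechanism described above can be sketched as a greedy match: for each task's characterized demand, pick the cheapest instance type fast enough to finish it within a deadline. The instance names, speeds, prices, and the matching rule are all assumptions for illustration, not the paper's actual mechanism.

```python
# Sketch of the instance-suggestion idea: cheapest type that covers each
# task's demand within a deadline. All specs are hypothetical.

instance_types = [  # (name, relative speed, $/hour), sorted cheapest first
    ("A1", 1.0, 0.09),
    ("D2", 2.0, 0.21),
    ("G5", 4.0, 0.48),
]

def suggest(demands, deadline_h):
    """For each demand (speed-units x hours), pick the cheapest fitting type."""
    plan = []
    for demand in demands:
        needed_speed = demand / deadline_h
        for name, speed, _price in instance_types:  # cheapest first
            if speed >= needed_speed:
                plan.append(name)
                break
        else:
            plan.append(instance_types[-1][0])  # nothing fits: take the fastest
    return plan

print(suggest([0.8, 1.5, 3.9], deadline_h=1.0))  # ['A1', 'D2', 'G5']
```

Keeping the deadline fixed while letting the instance type vary per task is what preserves "similar performance" while the mix of cheap and fast instances lowers the total price.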
IEEE International Conference on High Performance Computing Data and Analytics | 2016
Eduardo Roloff; Emmanuell Diaz Carreño; Jimmy K. M. Valverde-Sánchez; Matthias Diener; Matheus da Silva Serpa; Guillaume Houzeaux; Lucas Mello Schnorr; Nicolas Maillard; Luciano Paschoal Gaspary; Philippe Olivier Alexandre Navaux
This paper evaluates the behavior of the Microsoft Azure G5 cloud instance type across multiple data centers. The purpose is to identify whether there are major differences between them and to help users choose the best option for their needs. Our results show that there are differences at the network level for the same instance type in different locations, and inside the same location at different times. The network performance causes interference at the application level, as we verified in our results.
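One way to quantify the location-to-location and run-to-run differences the paper reports is the coefficient of variation of measured network latencies. The sketch below uses that standard statistic; the region labels and latency samples are invented for illustration.

```python
# Sketch: quantifying network variability per data center, as in the
# Azure G5 evaluation above; the sample latencies below are invented.
import statistics

def variability(latencies_us):
    """Coefficient of variation: stdev relative to the mean (0 = stable)."""
    mean = statistics.mean(latencies_us)
    return statistics.pstdev(latencies_us) / mean

samples = {  # round-trip latencies in microseconds, hypothetical
    "Region A": [95, 102, 98, 250, 97],     # one large outlier
    "Region B": [110, 112, 109, 111, 115],  # steady
}

for region, latencies in samples.items():
    print(f"{region}: cv={variability(latencies):.0%}")
```

A high coefficient of variation for the same instance type at the same location is exactly the kind of time-dependent difference the abstract describes.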
Collaboration
Philippe Olivier Alexandre Navaux
Universidade Federal do Rio Grande do Sul