Miguel G. Xavier
Pontifícia Universidade Católica do Rio Grande do Sul
Publications
Featured research published by Miguel G. Xavier.
Parallel, Distributed and Network-Based Processing | 2013
Miguel G. Xavier; Marcelo Veiga Neves; Fabio Diniz Rossi; Tiago C. Ferreto; Timoteo Lange; C.A.F. De Rose
The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.
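The paper's central metric is the overhead of each virtualization layer relative to native execution. The sketch below illustrates that comparison with made-up timings; the actual results come from HPC benchmarks run on each platform, so treat the numbers as purely illustrative.

```python
def relative_overhead(native_time, virtualized_time):
    """Fractional slowdown of a virtualized run relative to native execution."""
    return (virtualized_time - native_time) / native_time

# Hypothetical wall-clock times (seconds) for the same HPC workload;
# container-based systems typically land close to native, hypervisors further away.
timings = {"native": 100.0, "lxc": 101.5, "openvz": 102.0, "xen": 115.0}
overheads = {name: relative_overhead(timings["native"], t)
             for name, t in timings.items() if name != "native"}
# e.g. a 15% slowdown for the hypervisor vs. 1.5% for a container
```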
Parallel, Distributed and Network-Based Processing | 2014
Miguel G. Xavier; Marcelo Veiga Neves; César A. F. De Rose
Virtualization as a platform for resource-intensive applications, such as MapReduce (MR), has been the subject of many studies in recent years, as it brings benefits such as better manageability, overall resource utilization, security and scalability. Nevertheless, because of its performance overheads, virtualization has traditionally been avoided in computing environments where performance is a critical factor. In this context, container-based virtualization can be considered a lightweight alternative to traditional hypervisor-based virtualization systems. In fact, there is a trend towards using containers in MR clusters in order to provide resource sharing and performance isolation (e.g., Mesos and YARN). However, there are still no studies evaluating the performance overhead of current container-based systems and their ability to provide performance isolation when running MR applications. In this work, we conducted experiments to effectively compare and contrast the current container-based systems (Linux VServer, OpenVZ and Linux Containers (LXC)) in terms of performance and manageability when running on MR clusters. Our results showed that although all container-based systems reach near-native performance for MapReduce workloads, LXC offers the best relationship between performance and management capabilities, especially regarding performance isolation.
Journal of Network and Computer Applications | 2017
Fabio Diniz Rossi; Miguel G. Xavier; César A. F. De Rose; Rodrigo N. Calheiros; Rajkumar Buyya
The high energy consumption of data centers has been a recurring issue in recent research. In cloud environments, several techniques are used to pursue energy efficiency, ranging from scaling the processor frequency to the use of sleep states during idle periods and the consolidation of virtual machines. Although these techniques enable a reduction in power consumption, they usually impact application performance. In this paper, we present an orchestration of different energy-saving techniques in order to improve the trade-off between energy consumption and application performance. To this end, we implemented the Energy-Efficient Cloud Orchestrator - e-eco - a management system that acts alongside the cloud load balancer, deciding which technique to apply during execution. To evaluate e-eco, tests were carried out in a real environment using scale-out applications on a dynamic cloud infrastructure, taking transactions per second as the performance metric. In addition to the empirical experiments, we also analyzed the scalability of our approach with an enhanced version of the CloudSim simulator. Results of our evaluations demonstrate that e-eco is able to reduce energy consumption by up to 25% compared to power-agnostic approaches, at a cost of only 6% additional SLA violations. When compared with existing power-aware approaches, e-eco achieved the best trade-off between performance and energy savings, striking a better balance between data-center energy efficiency and application performance than other works in the literature.
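At its core, an orchestrator of this kind selects one energy-saving technique at a time from runtime conditions. The rule below is a hypothetical sketch of such a decision loop, not e-eco's actual policy; the thresholds and action names are invented for illustration.

```python
def choose_energy_action(cpu_load, idle_seconds, sla_headroom):
    """Hypothetical decision rule in the spirit of an energy orchestrator:
    pick one technique based on current load and SLA slack."""
    if cpu_load < 0.2 and idle_seconds > 300:
        return "sleep-state"      # deep idle: put the host to sleep
    if cpu_load < 0.5 and sla_headroom > 0.1:
        return "dvfs-scale-down"  # moderate load with slack: lower CPU frequency
    if cpu_load < 0.7:
        return "consolidate-vms"  # pack VMs to free up hosts
    return "no-action"            # busy host: favor performance
```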
Journal of Bioinformatics and Computational Biology | 2014
Raquel Dias; Miguel G. Xavier; Fabio Diniz Rossi; Marcelo Veiga Neves; Timoteo Lange; Adriana Giongo; C. A. F. De Rose; Eric W. Triplett
Metagenomic sequencing technologies are advancing rapidly, and the size of the output data from high-throughput genetic sequencing has increased substantially over the years. This brings us to a scenario where advanced computational optimizations are required to perform a metagenomic analysis. In this paper, we describe a new parallel implementation of nucleotide BLAST (MPI-blastn) and a new tool for taxonomic attachment of Basic Local Alignment Search Tool (BLAST) results that supports the NCBI taxonomy (NCBI-TaxCollector). MPI-blastn achieved high performance when compared to mpiBLAST and ScalaBLAST. In our best case, MPI-blastn was able to run 408 times faster on 384 cores. Our evaluations demonstrate that NCBI-TaxCollector is able to perform taxonomic attachments 125 times faster and needs 120 times less RAM than the previous TaxCollector. Through our optimizations, a multiple sequence search that previously took 37 hours can be performed in less than 6 min, and post-processing with NCBI taxonomic data attachment, which took 48 hours, now runs in 23 min.
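The reported figures can be sanity-checked with the standard speedup and efficiency definitions; the values below are taken directly from the abstract.

```python
def speedup(t_serial, t_parallel):
    """Classic speedup: serial time divided by parallel time."""
    return t_serial / t_parallel

def parallel_efficiency(s, n_cores):
    """Speedup per core; values above 1.0 indicate superlinear scaling."""
    return s / n_cores

# 408x on 384 cores is slightly superlinear (efficiency ~1.06),
# often attributable to better cache locality on partitioned data.
eff = parallel_efficiency(408, 384)

# 37 h reduced to 6 min corresponds to a speedup of roughly 370x.
search_speedup = speedup(37 * 3600, 6 * 60)
```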
Parallel, Distributed and Network-Based Processing | 2015
Fabio Diniz Rossi; Miguel G. Xavier; Yuri J. Monti; César A. F. De Rose
Energy-aware management strategies are a recent trend towards achieving energy-efficient computing in HPC clusters. One of the approaches behind those strategies is to apply energy-saving states on idle nodes, alternating them among different sleep states that correspond to different power consumption levels. This paper investigates how such energy-efficient strategies affect the job turnaround time - the elapsed time between when a job is submitted and when it is completed, including the wait time as well as the job's actual execution time - in these clusters. Based on the results, we propose a Best-Fit Energy-Aware Strategy that switches nodes to a sleep state depending on the throughput of the resource manager's job queue. We simulated the proposed strategy using the SimGrid simulator. Our preliminary results show a reduction of up to 19% in overall energy consumption and give us a better understanding of the trade-offs involved in using energy-efficient strategies.
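The "best-fit" idea can be sketched as picking, for an expected idle window, the sleep state that minimizes energy among those whose wakeup latency still fits the window. The state table below (power in watts, wakeup latency in seconds) is hypothetical, not the paper's measured values.

```python
# Hypothetical sleep states: name -> (power_watts, wakeup_latency_s).
# Deeper states draw less power but take longer to wake up.
SLEEP_STATES = {
    "S0-idle":      (60.0, 0.0),
    "S3-suspend":   (5.0, 10.0),
    "S4-hibernate": (2.0, 60.0),
}

def best_fit_state(expected_idle_s):
    """Pick the state minimizing energy over the expected idle window,
    skipping states whose wakeup latency exceeds the window."""
    best, best_energy = "S0-idle", float("inf")
    for name, (power, latency) in SLEEP_STATES.items():
        if latency > expected_idle_s:
            continue
        energy = power * expected_idle_s
        if energy < best_energy:
            best, best_energy = name, energy
    return best
```

Short idle windows stay in shallow states (waking from a deep state would outlast the window), while long windows justify hibernation.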
Concurrency and Computation: Practice and Experience | 2017
Miguel G. Xavier; Fabio Diniz Rossi; César A. F. De Rose; Rodrigo N. Calheiros; Danielo G. Gomes
The more large-scale data center infrastructure costs increase, the more simulation-based evaluations are needed to better understand the trade-off between energy and performance and to support the development of new energy-aware resource allocation policies. Specifically, in the cloud computing field, various simulators are able to predict and measure the behavior of applications on different architectures using different resource allocation policies. Yet, only a few of them are able to simulate energy-saving strategies, and none of them supports the complete Advanced Configuration and Power Interface (ACPI) specification. ACPI defines a terminology for all possible power states of a machine and their associated power rates. The hardware industry has relied on ACPI to provide up-to-date standard interfaces for hardware discovery, configuration, power management, and monitoring, enabling a better understanding of the energy consumption level of different hardware states, referred to as ACPI G-states, S-states, and P-states. In this paper, we improve the modeling and simulation of the ACPI G/S-states and show not only that these states offer different energy-saving levels but also that state transitions themselves consume energy. In addition, we model the latency to transition between two states and the effects on turnaround time when transitions are not performed conservatively. Furthermore, the equations provide essential information to quantify the trade-off between energy consumption and performance and to assist in deciding which strategy best fits the environment and how it could be refined. Our expanded energy model was implemented in CloudSim and validated with simulation-based experiments, achieving a very high level of accuracy with a standard deviation of at most 6%.
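A simplified version of the kind of model described here can make the two key claims concrete: transitions have an energy cost of their own, and a sleep state only pays off when the idle window amortizes that cost. The functional form and parameter names below are assumptions for illustration, not the paper's exact equations.

```python
def energy_with_transition(p_active, t_active, p_sleep, t_sleep,
                           e_transition, n_transitions):
    """Total energy (joules): active phase + sleep phase + a fixed
    energy cost per state transition (transitions are not free)."""
    return p_active * t_active + p_sleep * t_sleep + e_transition * n_transitions

def sleep_pays_off(p_idle, p_sleep, idle_s, e_transition, t_transition):
    """Sleeping saves energy only when the idle window is long enough to
    amortize the transition energy and the time lost transitioning."""
    energy_idle = p_idle * idle_s
    energy_sleep = p_sleep * max(idle_s - t_transition, 0) + e_transition
    return energy_sleep < energy_idle
```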
ACM Symposium on Applied Computing | 2014
Miguel G. Xavier; Israel C. De Oliveira; Robson D. Dos Passos; César A. F. De Rose
Cloud computing (CC) has become a very popular computing model in the last decade, as it has proved to be a cost-effective alternative to large-scale computers hosted in conventional Information Technology (IT) departments. The ability to provide on-demand IT-related resources either over the internet (public clouds) or over private networks (private clouds) according to customers' needs, and the ability to scale resources horizontally, can be considered the main reasons for this popularity and have led CC platforms to host other types of systems, such as those that manage large volumes of data, like database management systems (DBMS).
International Symposium on Computers and Communications | 2013
Timoteo Lange; Paolo Cemim; Miguel G. Xavier; César A. F. De Rose
Recent studies have demonstrated advantages in running Database Management Systems (DBMS) in virtual environments, such as the consolidation of several DBMS isolated by virtual machines on a single physical machine to reduce maintenance costs and energy consumption. Furthermore, live migration can improve database availability, allowing transparent maintenance operations on host machines. However, there are issues that still need to be addressed, such as the overall performance degradation of a DBMS running in virtual environments and connection instability during a live migration. In this context, new virtualization techniques are emerging, such as the virtual database, which is considered a less intrusive alternative to traditional database virtualization over virtual machines. This paper analyzes aspects of this new virtualization approach, such as performance and connection stability during a database migration process, and its isolation capabilities. Our evaluation shows very promising results compared to the traditional approach over virtual machines, including a more efficient and stable live migration, while maintaining the isolation characteristics required for a virtualized DBMS.
13th Symposium on Computer Systems | 2012
Timoteo Lange; Paolo Cemim; Fabio Diniz Rossi; Miguel G. Xavier; Rafael Lorenzo Belle; Tiago C. Ferreto; César A. F. De Rose
Modern database systems are applications with a variable workload, demanding a large share of the available resources during peak periods. To avoid underutilization of resources during periods of low load, virtualization technology allows this type of application to be provisioned dynamically. This technique has improved over the last decades, and new approaches are emerging to improve performance, such as operating-system-level database virtualization. This paper analyzes the characteristics of this approach compared to traditional virtualization techniques.
Parallel, Distributed and Network-Based Processing | 2017
Miguel G. Xavier; Kassiano J. Matteussi; Gabriel R. Franca; Wagner P. Pereira; César A. F. De Rose
With mobility increasing each year, mobile device and operating system (OS) fragmentation is increasing at an even faster pace. A multitude of screen sizes, network connection types, and OS versions have emerged in the market, leading mobile developers to rethink testing practices to ensure quality and a good experience for increasingly demanding users who crave highly reliable and stable applications. Such best practices come at a high cost in testing infrastructure and maintenance, leading most development teams to bypass test cycles and deliver applications before they are thoroughly validated. And when teams pursue automated test cycles on clouds, the expense of high-cost services is not always worth the investment during application development phases. As a result, test cycles that are essential to validate application reliability and stability, such as regression, functional, and leakage tests, are left out, primarily affecting user experience. This paper presents the challenges intrinsic to mobile application testing together with the opportunities provided by clouds. We explored the state of the art to synthesize current cloud-based mobile application testing architectures and convey the need for a new concept and platform to minimize maintenance and make testing infrastructures more cost-effective. Hence, we propose an alternative architecture using emulated devices for test automation, which aims for massive test cycles at a lower cost.