Valentin Plugaru
University of Luxembourg
Publication
Featured research published by Valentin Plugaru.
Microbiome | 2015
Cédric C. Laczny; Tomasz Sternal; Valentin Plugaru; Piotr Gawron; Arash Atashpendar; Houry Hera Margossian; Sergio Coronado; Laurens van der Maaten; Nikos Vlassis; Paul Wilmes
Background: Metagenomics is limited in its ability to link distinct microbial populations to genetic potential due to a current lack of representative isolate genome sequences. Reference-independent approaches, which exploit for example inherent genomic signatures for the clustering of metagenomic fragments (binning), offer the prospect to resolve and reconstruct population-level genomic complements without the need for prior knowledge.
Results: We present VizBin, a Java™-based application which offers efficient and intuitive reference-independent visualization of metagenomic datasets from single samples for subsequent human-in-the-loop inspection and binning. The method is based on nonlinear dimension reduction of genomic signatures and exploits the superior pattern recognition capabilities of the human eye-brain system for cluster identification and delineation. We demonstrate the general applicability of VizBin for the analysis of metagenomic sequence data by presenting results from two cellulolytic microbial communities and one human-borne microbial consortium. The superior performance of our application compared to other analogous metagenomic visualization and binning methods is also presented.
Conclusions: VizBin can be applied de novo for the visualization and subsequent binning of metagenomic datasets from single samples, and it can be used for the post hoc inspection and refinement of automatically generated bins. Due to its computational efficiency, it can be run on common desktop machines and enables the analysis of complex metagenomic datasets in a matter of minutes. The software implementation is available at https://claczny.github.io/VizBin under the BSD License (four-clause) and runs under Microsoft Windows™, Apple Mac OS X™ (10.7 to 10.10), and Linux.
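The abstract describes the approach only at a high level. As a rough, hypothetical illustration of reference-independent binning by nonlinear dimension reduction of genomic signatures, the Python sketch below computes 5-mer frequency signatures per contig and projects them to 2D with t-SNE from scikit-learn; VizBin itself is a Java application whose exact pipeline may differ, and the file name contigs.fasta, the k-mer length, and the t-SNE parameters are assumptions.

    # Minimal sketch of reference-independent binning visualization:
    # compute k-mer frequency signatures per contig, then project them
    # to 2D with a nonlinear dimension reduction (t-SNE here).
    # Assumes scikit-learn and Biopython are installed; the real VizBin
    # tool is a Java application and its exact pipeline may differ.
    from itertools import product

    import numpy as np
    from Bio import SeqIO
    from sklearn.manifold import TSNE

    K = 5  # pentamer signatures, a common choice for binning

    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
    KMER_INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

    def kmer_signature(seq: str) -> np.ndarray:
        """Return the normalized k-mer frequency vector of a sequence."""
        counts = np.zeros(len(KMERS))
        seq = seq.upper()
        for i in range(len(seq) - K + 1):
            idx = KMER_INDEX.get(seq[i:i + K])
            if idx is not None:  # skip k-mers containing N or other ambiguity codes
                counts[idx] += 1
        total = counts.sum()
        return counts / total if total > 0 else counts

    # Build the signature matrix from an assembly (hypothetical file name).
    records = list(SeqIO.parse("contigs.fasta", "fasta"))
    signatures = np.array([kmer_signature(str(r.seq)) for r in records])

    # Nonlinear dimension reduction to 2D for visual, human-in-the-loop binning.
    embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(signatures)

The resulting 2D embedding can then be plotted and clusters delineated by hand, which corresponds to the human-in-the-loop inspection and binning step the paper refers to.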
EE-LSDS 2013: Revised Selected Papers of the COST IC0804 European Conference on Energy Efficiency in Large Scale Distributed Systems, Volume 8046 | 2013
Mateusz Guzek; Sébastien Varrette; Valentin Plugaru; Johnatan E. Pecero; Pascal Bouvry
With growing concern over the considerable energy consumed by HPC platforms and data centers, research efforts are targeting green approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple Virtual Machine (VM) instances. However, little understanding has been obtained about the potential overhead in energy consumption and the throughput reduction for virtualized servers and/or computing resources, nor about whether virtualization suits an environment as demanding as a High Performance Computing (HPC) platform. In this paper, a novel holistic model for the power of an HPC node and its possible virtualization layer is proposed. More importantly, we create and validate an instance of the proposed model using concrete measurements taken on the Grid'5000 platform. In particular, we use three widespread virtualization frameworks, namely Xen, KVM, and VMware ESXi, and compare them with a baseline environment running in native mode. The experiments were performed on top of benchmarking tools that reflect HPC usage, i.e. HPCC, IOzone, and Bonnie++. To abstract from the specifics of a single architecture, the benchmarks were run on two different hardware configurations, based on Intel and AMD processors. The benchmark scores are presented for all configurations to highlight their varying performance. The measured data is used to create a statistical holistic model of a machine's power that takes into account the impact of its component utilization metrics, as well as the application, virtualization layer, and hardware used. The purpose of the model is to enable the estimation of the energy consumption of HPC platforms in areas such as simulation, scheduling, or accounting.
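The functional form of the power model is not given in the abstract. As a hedged sketch of how such a statistical, holistic model could be built from measured data, the example below fits a linear regression over component utilization metrics with one-hot terms for the hardware, virtualization layer, and application; the column names and the file power_samples.csv are hypothetical and not taken from the paper.

    # Illustrative statistical power model: predicted power as a linear
    # function of component utilization, with one-hot terms for the
    # hardware platform, the virtualization layer, and the application.
    # The metric names and 'power_samples.csv' are hypothetical.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    samples = pd.read_csv("power_samples.csv")  # one row per measurement interval

    numeric = ["cpu_util", "mem_util", "disk_io", "net_io"]      # utilization metrics
    categorical = ["hardware", "virtualization", "application"]  # e.g. "intel"/"amd", "xen"/"kvm"/...

    model = make_pipeline(
        ColumnTransformer(
            [("onehot", OneHotEncoder(handle_unknown="ignore"), categorical)],
            remainder="passthrough",  # keep the numeric utilization columns as-is
        ),
        LinearRegression(),
    )
    model.fit(samples[numeric + categorical], samples["power_watts"])

    # Estimated power for a hypothetical workload on a KVM-virtualized Intel node.
    estimate = model.predict(pd.DataFrame([{
        "cpu_util": 0.9, "mem_util": 0.5, "disk_io": 0.1, "net_io": 0.2,
        "hardware": "intel", "virtualization": "kvm", "application": "hpcc",
    }]))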
IEEE International Conference on Cloud Computing Technology and Science | 2014
Valentin Plugaru; Sébastien Varrette; Pascal Bouvry
Energy efficiency remains a prevalent concern in the development of future HPC systems. Thus the next generations of supercomputers are foreseen to be developed as hybrid systems featuring traditional processors, accelerators (such as GPGPUs) and/or low-power processor architectures (ARM, Intel Atom, etc.) primarily designed for the mobile and embedded devices market. A confluence with the Cloud Computing (CC) paradigm is also anticipated, driven by economic sustainability factors. However, the performance impact of running Cloud middleware on such hybrid platforms remains to be explored, especially on low-power processors. In this context, this paper brings two main contributions: (1) the design and implementation of BACH, a framework able to execute automated performance evaluations of Cloud and HPC cluster environments; (2) the concrete validation of the framework on the modern OpenStack Infrastructure-as-a-Service (IaaS) middleware, deployed on a cutting-edge cluster based on ultra-low-power, energy-efficient ARM processors. The efficiency itself is measured with synthetic HPC benchmarks, HPCC (incorporating the well-known HPL) and HPCG, and with real-world applications from the bioinformatics domain, GROMACS and ABySS. The experimental evaluation revealed an average 24% performance drop for compute-intensive tasks and a 65.6% drop in communication capacity compared to the native environment without the IaaS solution, showing a non-negligible impact on the tested platform. To our knowledge, this is one of the first studies of this type, since deployment attempts of the OpenStack infrastructure on ARM platforms are in their early stages and are generally performed only for demonstration purposes.
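The BACH framework itself is not detailed in the abstract. The minimal sketch below only illustrates how a relative performance drop, such as the reported 24%, can be computed from a native baseline score and the corresponding score measured under the IaaS layer; the benchmark names and scores used here are hypothetical placeholders, not values from the paper.

    # Illustrative computation of the relative performance drop of a
    # virtualized (IaaS) run against a native baseline, per benchmark.
    # The example scores are hypothetical; higher means better here.
    native = {"hpl_gflops": 12.4, "hpcg_gflops": 0.61, "bandwidth_mbps": 890.0}
    iaas   = {"hpl_gflops":  9.4, "hpcg_gflops": 0.47, "bandwidth_mbps": 306.0}

    def relative_drop(baseline: float, measured: float) -> float:
        """Percentage lost relative to the baseline (positive = slowdown)."""
        return 100.0 * (baseline - measured) / baseline

    for metric in native:
        print(f"{metric}: {relative_drop(native[metric], iaas[metric]):.1f}% drop vs native")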
IEEE Transactions on Cloud Computing | 2016
Joseph Emeras; Sébastien Varrette; Valentin Plugaru; Pascal Bouvry
While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, we observe a wish for convergence between Cloud Computing (CC) and HPC platforms, with the commercial hope that CC infrastructures will eventually replace in-house facilities. Setting aside the performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when running an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that the instances offered by Cloud providers are most probably competitive from a cost point of view. In this article, we wanted to confirm (or refute) this intuition by analyzing what composes the Total Cost of Ownership (TCO) of an in-house HPC facility operated internally since 2007. This TCO model is then used for comparison with the cost that would have been incurred to run the same platform (and the same workload) on a competitive Cloud IaaS offering. Our approach to this price comparison is three-fold. First, we propose a theoretical price-performance model based on the study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on the HPC facility TCO analysis, we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. Finally, based on experimental benchmarking of the local cluster and of the Cloud instances, we update the former theoretical price model to reflect the real system performance. The results obtained advocate in general for the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing platforms, even when they are provided by the leading Cloud provider worldwide.
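The article's actual TCO and price-performance figures are not reproduced here. The sketch below only illustrates the general shape of such a comparison: deriving an in-house cost per delivered node-hour from capital and operating expenditure, then setting it against an EC2 on-demand price normalized by measured relative performance. All numbers are hypothetical placeholders, not results from the paper.

    # Illustrative hourly price comparison between an in-house HPC node and
    # an EC2 on-demand instance, optionally normalized by measured relative
    # performance. All figures below are hypothetical placeholders.
    HOURS_PER_YEAR = 24 * 365

    def in_house_hourly_cost(capex: float, lifetime_years: float,
                             opex_per_year: float, nodes: int,
                             utilization: float) -> float:
        """TCO-derived cost of one node-hour actually delivered to users."""
        yearly_cost = capex / lifetime_years + opex_per_year
        delivered_node_hours = nodes * HOURS_PER_YEAR * utilization
        return yearly_cost / delivered_node_hours

    local = in_house_hourly_cost(capex=2_000_000, lifetime_years=5,
                                 opex_per_year=300_000, nodes=150,
                                 utilization=0.75)

    ec2_on_demand = 1.00          # $/hour for a comparable instance (placeholder)
    relative_performance = 0.85   # cloud instance speed vs local node (placeholder)

    # Cost of one hour of *equivalent* work on the cloud instance.
    ec2_equivalent = ec2_on_demand / relative_performance

    print(f"in-house node-hour: ${local:.2f}, EC2 equivalent hour: ${ec2_equivalent:.2f}")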
Symposium on Computer Architecture and High Performance Computing | 2013
Sébastien Varrette; Mateusz Guzek; Valentin Plugaru; Xavier Besseron; Pascal Bouvry
Concurrency and Computation: Practice and Experience | 2014
Mateusz Guzek; Sébastien Varrette; Valentin Plugaru; Johnatan E. Pecero; Pascal Bouvry
International Conference on Parallel Processing | 2014
Sébastien Varrette; Valentin Plugaru; Mateusz Guzek; Xavier Besseron; Pascal Bouvry
Archive | 2015
Xavier Besseron; Valentin Plugaru; Amir Houshang Mahmoudi; Sébastien Varrette; Bernhard Peters; Pascal Bouvry
Archive | 2018
Pascal Bouvry; Sébastien Varrette; Valentin Plugaru; Sarah Peter; Hyacinthe Cartiaux; Clément Parisot
Archive | 2018
Aurélien Ginolhac; Joseph Emeras; Sébastien Varrette; Valentin Plugaru; Sarah Diehl; Clément Parisot; Hyacinthe Cartiaux; Pascal Bouvry