Sébastien Varrette
University of Luxembourg
Publications
Featured research published by Sébastien Varrette.
high performance computing systems and applications | 2014
Sébastien Varrette; Pascal Bouvry; Hyacinthe Cartiaux; Fotis Georgatos
The intensive growth of processing power, data storage and transmission capabilities has revolutionized many aspects of science. These resources are essential to achieve high-quality results in many application areas. In this context, the University of Luxembourg (UL) has operated since 2007 a High Performance Computing (HPC) facility and the related storage, run by a very small team. Bridging computing and storage is a requirement of the UL service, for reasons that are both legal (certain data may not move) and performance-related. Nowadays, people from the three faculties and the two interdisciplinary centers of the UL are users of this facility. More specifically, key research priorities such as Systems Biomedicine (by LCSB) and Security, Reliability & Trust (by SnT) require access to such HPC facilities in order to function in an adequate environment. The management of HPC solutions is a complex enterprise and a constant area for discussion and improvement. The UL HPC facility and the services deployed on it form a computing system that is complex to manage by virtue of its scale: at the time of writing, it consists of 150 servers, 368 nodes (3880 computing cores) and 1996 TB of shared storage, all configured, monitored and operated by only three people using advanced IT automation solutions based on Puppet [1], FAI [2] and Capistrano [3]. This paper covers all aspects of the management of such a complex infrastructure, whether technical or administrative. Most design choices and implemented approaches have been motivated by several years of experience in addressing research needs, mainly in the HPC area but also in complementary services (typically Web-based). In this context, we have tried to address many technological issues in a flexible and convenient way.
This experience report may be of interest to other research centers and universities, in either the public or the private sector, looking for good (if not best) practices in cluster architecture and management.
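The "advanced IT automation" the paper relies on builds on tools like Puppet, whose core idea is to declare a desired state and converge the system to it idempotently. The sketch below illustrates that principle only; it is Python for illustration, not a real Puppet manifest (Puppet uses its own DSL), and the managed path and content are made up.

```python
import os
import tempfile

# Managed files are declared, not scripted: each entry maps a path to its
# desired content, and converge() makes reality match the declaration.
root = tempfile.mkdtemp()
desired_state = {os.path.join(root, "motd"): "Welcome to the UL HPC platform\n"}

def converge(state):
    """Bring each managed file to its declared content, touching nothing that
    already matches -- so repeated runs are safe (idempotence)."""
    changed = []
    for path, content in state.items():
        current = open(path).read() if os.path.exists(path) else None
        if current != content:
            with open(path, "w") as f:
                f.write(content)
            changed.append(path)
    return changed

assert len(converge(desired_state)) == 1   # first run creates the file
assert converge(desired_state) == []       # second run changes nothing
```

Idempotence is what lets a three-person team re-run the same configuration across hundreds of servers without fear of side effects.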
international conference on cloud computing | 2011
Benoît Bertholon; Sébastien Varrette; Pascal Bouvry
The security issues raised by the Cloud paradigm are not always tackled from the user's point of view. For instance, on an Infrastructure-as-a-Service (IaaS) Cloud, it is currently impossible for users to certify in a reliable and secure way that the environment they deployed (typically a Virtual Machine (VM)) has not been corrupted, whether by malicious acts or not. Yet offering this functionality would enhance confidence in the IaaS provider and therefore attract new customers. This paper fills this need by proposing CERTICLOUD, a novel approach for the protection of IaaS platforms that relies on the concepts developed by the Trusted Computing Group (TCG) together with hardware elements, i.e., the Trusted Platform Module (TPM), to offer a secure and reassuring environment. These aspects are guaranteed by two protocols: TCRR and VerifyMyVM. While the former asserts the integrity of a remote resource and permits the exchange of a private symmetric key, the latter allows users to detect, trustfully and on demand, any tampering attempt on their running VM. As these protocols are key components of the proposed framework, we take their analysis against known cryptanalytic attacks very seriously; this is testified by their successful validation with AVISPA and Scyther, two reference tools for the automatic verification of security protocols. The CERTICLOUD proposal is then detailed: relying on the above protocols, the platform provides secure storage of user environments and their safe deployment onto a virtualization framework. While the physical resources are checked by TCRR, users can execute on demand the VerifyMyVM protocol to certify the integrity of their deployed environment. Experimental results on a first prototype of CERTICLOUD demonstrate the feasibility and low overhead of the approach, together with its easy implementation on recent commodity machines.
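The on-demand integrity check that VerifyMyVM provides can be illustrated, in highly simplified form, as comparing a digest of the running environment against a reference recorded at deployment time. The sketch below is a toy under strong assumptions (a bare SHA-256 over an image blob, with no TPM, attestation keys, or protocol messages), meant only to convey the idea:

```python
import hashlib

def measure(environment: bytes) -> str:
    """Digest of the environment image (a stand-in for a TPM measurement)."""
    return hashlib.sha256(environment).hexdigest()

def verify_vm(reference_digest: str, running_image: bytes) -> bool:
    """User-side check: does the running VM still match the digest
    recorded when the environment was deployed?"""
    return measure(running_image) == reference_digest

# At deployment time, the user records a digest of the trusted VM image.
trusted_image = b"kernel+rootfs contents of the deployed VM"
reference = measure(trusted_image)

# Later, an on-demand check detects any tampering of the image.
assert verify_vm(reference, trusted_image)             # untouched VM passes
assert not verify_vm(reference, trusted_image + b"!")  # corruption is detected
```

The real protocols additionally have to authenticate who performed the measurement and when, which is exactly where the TPM and the cryptographic exchanges of TCRR come in.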
EE-LSDS 2013 Revised Selected Papers of the COST IC0804 European Conference on Energy Efficiency in Large Scale Distributed Systems - Volume 8046 | 2013
Mateusz Jarus; Sébastien Varrette; Ariel Oleksiak; Pascal Bouvry
Due to the growing energy consumption of HPC servers and data centers, many research efforts aim at addressing the problem of energy efficiency. Hence, the use of low-power processors such as the Intel Atom and ARM Cortex has recently gained interest. In this article, we compare the performance and energy efficiency of cutting-edge high-density HPC platform enclosures featuring either very high-performing processors such as the Intel Core i7 or E7, yet with low power-efficiency, or the reverse, i.e. energy-efficient processors such as the Intel Atom, AMD Fusion or ARM Cortex A9, yet with limited computing capacity. Our objective was to quantify, in a very pragmatic way and using a set of reference benchmarks and applications run in an HPC environment, the trade-off between computing and power efficiency offered by these general-purpose CPUs.
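The trade-off under study ultimately reduces to comparing performance per watt across platforms. A minimal sketch, with made-up scores and power figures rather than the paper's measurements:

```python
# Hypothetical benchmark scores (GFlops) and average power draws (W);
# the figures below are illustrative, not the paper's measurements.
platforms = {
    "Intel Core i7": {"gflops": 45.0, "watts": 130.0},
    "Intel Atom":    {"gflops": 3.5,  "watts": 8.0},
    "ARM Cortex A9": {"gflops": 2.0,  "watts": 4.0},
}

def efficiency(p):
    """Energy efficiency as performance per watt (GFlops/W)."""
    return p["gflops"] / p["watts"]

# Rank platforms by energy efficiency, best first.
ranking = sorted(platforms, key=lambda name: efficiency(platforms[name]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {efficiency(platforms[name]):.3f} GFlops/W")
```

With these invented numbers the low-power chips win on GFlops/W while losing badly on raw GFlops, which is precisely the tension the paper quantifies with real measurements.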
grid computing | 2005
Axel W. Krings; Jean-Louis Roch; Samir Jafar; Sébastien Varrette
This paper presents a new approach for certifying the correctness of program executions in hostile environments, where tasks or their results may have been corrupted by benign or malicious acts. Extending previous results in the restricted context of independent tasks, we introduce a probabilistic certification that establishes whether the results of computations are correct. This probabilistic approach does not make any assumptions about the attack, and certification errors are only due to unlucky random choices. Bounds associated with certification are provided for general graphs and for tasks with out-tree dependencies found in the medical image analysis application that motivated this research.
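The core mechanism — re-executing a random sample of tasks on a trusted resource, with the sample size chosen from the desired error bound — can be sketched as follows. The task results and corruption pattern are invented for illustration, and the (1-q)^n bound shown applies to the simplified independent-tasks setting, not the dependency-graph bounds derived in the paper:

```python
import math
import random

def certify(results, safe_reexecute, n_checks, rng=random):
    """Re-execute n_checks randomly chosen tasks on a trusted resource;
    declare the execution correct only if every check matches."""
    for i in rng.sample(range(len(results)), n_checks):
        if safe_reexecute(i) != results[i]:
            return False  # falsification detected
    return True

def checks_needed(q, epsilon):
    """Checks needed so that a 'massive' attack corrupting a fraction q of
    the tasks escapes detection with probability at most epsilon,
    i.e. the smallest n with (1-q)**n <= epsilon."""
    return math.ceil(math.log(epsilon) / math.log(1.0 - q))

# Example: 100 tasks computing i*i; an attacker corrupts 20% of the results.
true_result = lambda i: i * i
results = [true_result(i) for i in range(100)]
for i in range(0, 100, 5):           # 20 corrupted tasks
    results[i] += 1

n = checks_needed(q=0.20, epsilon=0.01)
print(n)   # 21 checks bound the miss probability by 1%
```

Note that certification never recomputes the whole execution: 21 re-executions out of 100 tasks suffice here, and the error is purely one of unlucky sampling, exactly as the abstract states.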
parallel symbolic computation | 2007
Jean-Louis Roch; Sébastien Varrette
In [6], a new approach for certifying the correctness of program executions in hostile environments was proposed. The authors presented probabilistic certification by massive attack detection through two algorithms, MCT and EMCT. The execution to certify is represented by a macro data-flow graph, from which some tasks are randomly extracted and re-executed on safe resources in order to determine whether the execution is correct or faulty. Bounds associated with certification were provided for general graphs and for tasks with out-tree dependencies. In this paper, we extend those results with a cost analysis of parallel certification based on on-line scheduling by work-stealing. In particular, we focus on Divide & Conquer algorithms, which are commonly used in symbolic computations, and demonstrate the efficiency of EMCT for the certification of the resulting Fork-Join graph. Finally, we show how to combine EMCT with BCH codes to make a matrix-vector product tolerant to both falsifications and massive attacks.
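For the matrix-vector case, a result-checking scheme similar in spirit — though deliberately simpler than, and not, the BCH-code construction of the paper — is the classical Freivalds probabilistic check:

```python
import random

def freivalds_check(A, x, y, rounds=20, rng=None):
    """Probabilistically verify that y == A @ x without redoing the product:
    for a random 0/1 vector r, compare r.y against (r^T A).x. A forged y
    passes one round with probability at most 1/2, so the error after
    `rounds` independent trials is bounded by 2**-rounds."""
    rng = rng or random
    n, m = len(A), len(x)
    for _ in range(rounds):
        r = [rng.randint(0, 1) for _ in range(n)]
        left = sum(r[i] * y[i] for i in range(n))                    # r . y
        rA = [sum(r[i] * A[i][j] for i in range(n)) for j in range(m)]
        right = sum(rA[j] * x[j] for j in range(m))                  # (r^T A) . x
        if left != right:
            return False   # certain falsification
    return True            # correct with probability >= 1 - 2**-rounds

A = [[1, 2], [3, 4]]
x = [5, 6]
rng = random.Random(2024)  # fixed seed for a reproducible demonstration
assert freivalds_check(A, x, [17, 39], rng=rng)       # genuine A @ x passes
assert not freivalds_check(A, x, [17, 40], rng=rng)   # forged entry rejected
```

Each round costs only two inner products rather than a full n x n multiplication, which is what makes on-the-fly checking of remotely computed products affordable.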
EE-LSDS 2013 Revised Selected Papers of the COST IC0804 European Conference on Energy Efficiency in Large Scale Distributed Systems - Volume 8046 | 2013
Mateusz Guzek; Sébastien Varrette; Valentin Plugaru; Johnatan E. Pecero; Pascal Bouvry
With growing concern about the considerable energy consumed by HPC platforms and data centers, research efforts are targeting greener approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple Virtual Machine (VM) instances. However, little is understood about the potential overhead in energy consumption and the throughput reduction for virtualized servers and/or computing resources, or about whether virtualization suits an environment as demanding as a High Performance Computing (HPC) platform at all. In this paper, a novel holistic model for the power consumption of an HPC node and its eventual virtualization layer is proposed. More importantly, we create and validate an instance of the proposed model using concrete measurements taken on the Grid'5000 platform. In particular, we use three widespread virtualization frameworks, namely Xen, KVM, and VMware ESXi, and compare them with a baseline environment running in native mode. The experiments were performed on top of benchmarking tools that reflect HPC usage, i.e. HPCC, IOzone and Bonnie++. To abstract from the specifics of a single architecture, the benchmarks were run on two different hardware configurations, based on Intel and AMD processors. The benchmark scores are presented for all configurations to highlight their varying performance. The measured data is used to create a statistical holistic model of the power consumption of a machine that takes into account the impact of its components' utilization metrics, as well as the application, virtualization layer, and hardware used. The purpose of the model is to enable the estimation of the energy consumption of HPC platforms in areas such as simulation, scheduling or accounting.
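In its simplest form, a utilization-based power model of this kind is a regression P(u) = P_idle + alpha * u fitted to calibration measurements. A one-variable sketch with synthetic, perfectly linear data (not the paper's measurements, which involve many more covariates):

```python
# Synthetic calibration data: (cpu_utilization, measured_power_in_watts)
# pairs for one node -- illustrative figures only.
samples = [(0.0, 100.0), (0.25, 130.0), (0.5, 160.0),
           (0.75, 190.0), (1.0, 220.0)]

def fit_linear_power_model(samples):
    """Least-squares fit of P(u) = P_idle + alpha * u, the simplest
    instance of a utilization-based power model."""
    n = len(samples)
    su = sum(u for u, _ in samples)
    sp = sum(p for _, p in samples)
    suu = sum(u * u for u, _ in samples)
    sup = sum(u * p for u, p in samples)
    alpha = (n * sup - su * sp) / (n * suu - su * su)
    p_idle = (sp - alpha * su) / n
    return p_idle, alpha

p_idle, alpha = fit_linear_power_model(samples)
print(f"P(u) = {p_idle:.1f} + {alpha:.1f} * u")   # P(u) = 100.0 + 120.0 * u

def estimate_energy(u, seconds):
    """Energy (J) of a job holding utilization u for a given duration."""
    return (p_idle + alpha * u) * seconds
```

A model instantiated this way is what lets a scheduler or accounting system estimate a job's energy footprint from utilization traces alone, without per-job power metering.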
database and expert systems applications | 2004
Samir Jafar; Sébastien Varrette; Jean-Louis Roch
To achieve correct execution of peer-to-peer applications on non-reliable resources, we present a portable and distributed algorithm that provides fault tolerance and result checking. Two kinds of faults are considered: node failure or disconnection, and result forgery. The algorithm is based on knowledge of the macro data-flow dependencies between the application tasks. It provides correct execution with respect to a probabilistic certificate. We have implemented it on top of the Athapascan programming interface, and experimental results are presented.
symposium on computer architecture and high performance computing | 2004
Sébastien Varrette; Jean-Louis Roch; Franck Leprévost
Large-scale clusters, peer-to-peer computing systems and grids gather thousands of nodes for computing parallel applications. At this scale, the problem arises of checking the results of the parallel execution of a program on an unsecured grid. This domain is the object of numerous works, at either the hardware or the software level. We propose here an original software method based on the dynamic computation of the data-flow associated with a partial execution of the program on a secure machine. This data-flow is a summary of the execution: any complete execution of the program on an unsecured remote machine with the same inputs supplies a flow whose summary has to correspond to the one obtained by partial execution.
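The "summary of the execution" idea can be sketched as folding task outputs into a digest and comparing the summary obtained on the secure machine with the one reported by the remote execution. The tasks and the falsification below are invented for illustration; the paper's summaries are built from the full data-flow, not a flat list of outputs:

```python
import hashlib

def flow_summary(task_outputs):
    """Fold selected task outputs, in data-flow order, into one digest --
    a compact 'summary' of the execution."""
    h = hashlib.sha256()
    for out in task_outputs:
        h.update(repr(out).encode())
    return h.hexdigest()

def task(i):
    return i * i          # toy task standing in for real program tasks

# Trusted partial execution: the secure machine replays a sample of tasks.
checked = [0, 3, 7]
reference = flow_summary(task(i) for i in checked)

# The remote complete execution reports its outputs for the same tasks.
remote_ok  = [task(i) for i in checked]
remote_bad = [0, 9, 50]                  # task 7 was falsified (49 -> 50)

assert flow_summary(remote_ok) == reference
assert flow_summary(remote_bad) != reference
```

Comparing fixed-size digests rather than full outputs keeps the verification traffic between the secure machine and the grid negligible.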
network and system security | 2013
Benoît Bertholon; Sébastien Varrette; Pascal Bouvry
With the advent of the Cloud Computing (CC) paradigm and the explosion of new Web services offered over the Internet (such as Google Office Apps, Dropbox or Doodle, to cite just a few), the protection of the programs at the heart of these services becomes increasingly crucial, especially for the companies making business on top of them. In parallel, the overwhelming majority of modern websites use the JavaScript programming language, and all modern web browsers - whether on desktops, game consoles, tablets or smartphones - include JavaScript interpreters, making it the most ubiquitous programming language in history. JavaScript is thus the core technology of most Web services. In this context, this article focuses on novel obfuscation techniques to protect the contents of JavaScript programs.
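As a taste of what source-level obfuscation means, here is a toy transform (written in Python for illustration, operating on JavaScript text) that hides string literals behind String.fromCharCode calls. Real obfuscators, including the techniques the article discusses, go much further (identifier renaming, control-flow flattening, and so on):

```python
import re

def obfuscate_js_strings(source: str) -> str:
    """Toy string-literal obfuscation: replace each simple double-quoted
    JavaScript string with an equivalent String.fromCharCode(...) call,
    hiding plain-text constants from a casual reader. Illustrative only;
    it ignores escapes, single quotes and strings inside comments."""
    def encode(match):
        codes = ",".join(str(ord(c)) for c in match.group(1))
        return f"String.fromCharCode({codes})"
    return re.sub(r'"([^"\\]*)"', encode, source)

original = 'alert("secret token");'
print(obfuscate_js_strings(original))
# alert(String.fromCharCode(115,101,99,114,101,116,32,116,111,107,101,110));
```

The transformed program behaves identically, which is the defining constraint of obfuscation: preserve semantics while destroying readability.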
computer and information technology | 2011
Carlos J. Barrios Hernández; Daniel A. Sierra; Sébastien Varrette; Dino Martin Lopez Pacheco
Nowadays, power consumption in computer systems is an active and important subject of discussion in both the research and political communities. Indeed, increasing the performance of such systems frequently requires increasing the number of resources, thus leading to higher power consumption and a negative impact on the environment. Some strategies to reduce energy consumption are based on hardware modifications, the usage of energy-aware software, and new rules for the utilization of computing resources. This paper analyzes the energy consumption of HPC platforms and Grid computing infrastructures from two different perspectives: (i) the cost in energy due to idle/active status and (ii) the cost in energy due to data transfers.
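The two perspectives can be illustrated with back-of-the-envelope energy accounting: power draw times time spent in each state, plus NIC power times transfer duration. All figures below are assumptions for illustration, not measurements from the paper:

```python
# Illustrative figures (assumed, not the paper's measurements).
P_IDLE = 150.0     # W drawn by an idle node
P_ACTIVE = 300.0   # W drawn by a fully loaded node
NIC_POWER = 20.0   # extra W while the network interface is transferring
LINK_RATE = 1e9    # link capacity in bit/s

def status_energy(idle_s, active_s):
    """Energy (J) attributable to a node's idle vs. active time."""
    return P_IDLE * idle_s + P_ACTIVE * active_s

def transfer_energy(bits):
    """Energy (J) of moving data: NIC power times transfer duration."""
    return NIC_POWER * (bits / LINK_RATE)

# A node idle 18 h and busy 6 h per day, moving 100 GB over the network:
day_joules = status_energy(18 * 3600, 6 * 3600) + transfer_energy(100e9 * 8)
print(f"{day_joules / 3.6e6:.2f} kWh/day")   # 4.50 kWh/day
```

Even this crude model already exposes the paper's first point: with a 150 W idle floor, the 18 idle hours cost more energy here than the 6 busy ones, so powering down or consolidating idle nodes dominates any savings from cheaper data transfers.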