
Publication


Featured research published by Adrien Lebre.


International Conference on Cloud Computing and Services Science | 2012

Adding Virtualization Capabilities to the Grid’5000 Testbed

Daniel Balouek; Alexandra Carpen-Amarie; Ghislain Charrier; Frédéric Desprez; Emmanuel Jeannot; Emmanuel Jeanvoine; Adrien Lebre; David Margery; Nicolas Niclausse; Lucas Nussbaum; Olivier Richard; Christian Pérez; Flavien Quesnel; Cyril Rohr; Luc Sarzyniec

Almost ten years after its inception, the Grid’5000 testbed has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of High Performance Computing, the infrastructure has evolved to address wider concerns related to Desktop Computing, the Internet of Services and, more recently, the Cloud Computing paradigm. This paper presents recent improvements to the Grid’5000 software and services stack to support large-scale experiments using virtualization technologies as building blocks. These contributions include the deployment of customized software environments, the reservation of dedicated network domains and the possibility to isolate them from one another, and the automation of experiments with a REST API. We illustrate the interest of these contributions by describing three use cases of large-scale experiments on the Grid’5000 testbed. The first leverages virtual machines to conduct larger experiments spread over 4000 peers. The second describes the deployment of 10000 KVM instances over four Grid’5000 sites. Finally, the last use case introduces a one-click deployment tool to easily deploy major IaaS solutions. The conclusion highlights important challenges for Grid’5000 related to the use of OpenFlow and to the management of applications dealing with tremendous amounts of data.


Concurrency and Computation: Practice and Experience | 2013

Cooperative and reactive scheduling in large-scale virtualized platforms with DVMS

Flavien Quesnel; Adrien Lebre; Mario Südholt

One of the principal goals of cloud computing is the outsourcing of the hosting of data and applications, thus enabling a per-usage model of computation. Data and applications may be packaged in virtual machines (VMs), which are themselves hosted by nodes, that is, physical machines. Several frameworks have been designed to manage VMs on pools of physical machines; most of them, however, do not efficiently address a major objective of cloud providers: maximizing system utilization while ensuring the QoS. Several approaches promote virtualization capabilities to improve this trade-off. However, the dynamic scheduling of a large number of VMs as part of a large distributed infrastructure is subject to important and hard scalability problems that become even worse when VM image transfers have to be managed. Consequently, most current frameworks schedule VMs statically using a centralized control strategy. In this article, we present the distributed VM scheduler (DVMS), a framework that enables VMs to be scheduled cooperatively and dynamically in large-scale distributed systems. We describe, in particular, how several VM reconfigurations can be dynamically calculated in parallel and applied simultaneously. Reconfigurations are enabled by partitioning the system (i.e., nodes and VMs) on the fly. Partitions are created with the minimum of resources necessary to find a solution to the reconfiguration problem. Moreover, we propose an algorithm to handle deadlocks that may appear because of the partitioning policy. We have evaluated our prototype through simulations and compared our approach with a centralized one. The results show that our scheduler permits VMs to be reconfigured more efficiently: the time needed to manage thousands of VMs on hundreds of machines is typically reduced to a tenth or less.
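The on-the-fly partitioning idea can be pictured with a minimal sketch. All names, the single-capacity resource model, and the first-fit-decreasing heuristic below are illustrative assumptions, not the actual DVMS implementation: starting from an overloaded node, neighbours are added to a partition one by one until the partition's VMs can be rebalanced under each node's capacity.

```python
# Hypothetical sketch of DVMS-style partition growing (not the real DVMS
# code): enlarge a partition around an overloaded node until a valid
# VM placement exists within it.

def try_rebalance(partition, capacity):
    """First-fit-decreasing placement of all VMs of the partition."""
    vms = sorted((vm for n in partition for vm in n["vms"]), reverse=True)
    loads = [0] * len(partition)
    placement = [[] for _ in partition]
    for vm in vms:
        for i, load in enumerate(loads):
            if load + vm <= capacity:
                loads[i] += vm
                placement[i].append(vm)
                break
        else:
            return None  # no valid placement with this partition size
    return placement

def grow_partition(nodes, start, capacity):
    """Add neighbours one by one until a reconfiguration exists."""
    partition = [nodes[start]]
    others = [n for i, n in enumerate(nodes) if i != start]
    placement = try_rebalance(partition, capacity)
    while placement is None and others:
        partition.append(others.pop(0))  # enlarge with the next node
        placement = try_rebalance(partition, capacity)
    return partition, placement

# Node 0 (load 110) exceeds the capacity of 100; adding one neighbour
# already suffices to rebalance, so the partition stays small.
nodes = [{"vms": [60, 50]}, {"vms": [30]}, {"vms": []}]
part, plan = grow_partition(nodes, start=0, capacity=100)
print(len(part), plan)
```

Keeping partitions minimal is what lets many such reconfigurations be computed in parallel on disjoint sets of nodes.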


International Conference on Cluster Computing | 2006

I/O Scheduling Service for Multi-Application Clusters

Adrien Lebre; Guillaume Huard; Yves Denneulin; Przemyslaw Sowa

Distributed applications, especially I/O-intensive ones, often access the storage subsystem in a non-sequential way (strided requests). Since such behaviour lowers overall system performance, many applications use parallel I/O libraries such as ROMIO to gather and reorder requests. In the meantime, as cluster usage grows, several applications are often executed concurrently, competing for access to storage subsystems and thus potentially cancelling the optimisations brought by parallel I/O libraries. The aIOLi project aims at optimising I/O accesses within the cluster while providing a simple POSIX API. This article presents an extension of aIOLi that addresses the issue of disjoint accesses generated by different concurrent applications in a cluster. In such a context, a good trade-off has to be found between performance, fairness and response time. To achieve this, an I/O scheduling algorithm, together with a request aggregator that considers both application access patterns and global system load, has been designed and merged into aIOLi. This improvement led to the implementation of a new generic framework pluggable into any I/O file system layer. A test composed of two concurrent IOR benchmarks showed improvements on read accesses by a factor ranging from 3.5 to 35 with POSIX calls and from 3.3 to 5 with ROMIO; both reference benchmarks were executed on a traditional NFS server without any additional optimisations.
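The request-aggregation idea can be sketched in a few lines. The data structures below are illustrative assumptions, not the actual aIOLi code: pending reads from concurrent applications, interleaved and strided when seen individually, are sorted by file offset and merged when contiguous, so the storage subsystem sees large sequential accesses instead.

```python
# Hypothetical sketch of aIOLi-style request aggregation (not the real
# aIOLi code): merge (offset, size) read requests that are contiguous
# or overlapping once sorted by offset.

def aggregate(requests):
    """Coalesce byte-range requests into maximal contiguous runs."""
    merged = []
    for off, size in sorted(requests):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            last_off, last_size = merged[-1]
            end = max(last_off + last_size, off + size)
            merged[-1] = (last_off, end - last_off)  # extend the run
        else:
            merged.append((off, size))               # start a new run
    return merged

# Two applications issuing interleaved strided reads of 4 KiB blocks:
app_a = [(0, 4096), (8192, 4096), (16384, 4096)]
app_b = [(4096, 4096), (12288, 4096)]
print(aggregate(app_a + app_b))  # one sequential 20 KiB request
```

The real scheduler additionally weighs fairness and global load before dispatching the coalesced requests; this sketch shows only the coalescing step.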


International Conference on Parallel Processing | 2011

Cooperative dynamic scheduling of virtual machines in distributed systems

Flavien Quesnel; Adrien Lebre

Cloud Computing aims at outsourcing data and application hosting and at charging clients on a per-usage basis. These data and applications may be packaged in virtual machines (VMs), which are themselves hosted by nodes, i.e. physical machines. Consequently, several frameworks have been designed to manage VMs on pools of nodes. Unfortunately, most of them do not efficiently address a common objective of cloud providers: maximizing system utilization while ensuring the quality of service (QoS). The main reason is that these frameworks schedule VMs in a static way and/or have a centralized design. In this article, we introduce a framework that enables VMs to be scheduled cooperatively and dynamically in distributed systems. We evaluated our prototype through simulations to compare our approach with a centralized one. Preliminary results showed that our scheduler was more reactive. As future work, we plan to investigate further the scalability of our framework and to improve its reactivity and fault-tolerance aspects.


Symposium on Computer Architecture and High Performance Computing | 2004

Performance evaluation of a prototype distributed NFS server

Rafael Bohrer Ávila; Philippe Olivier Alexandre Navaux; Pierre Lombard; Adrien Lebre; Yves Denneulin

A high-performance file system is normally a key point for large cluster installations, where hundreds or even thousands of nodes frequently need to manage large volumes of data. While most solutions usually make use of dedicated hardware and/or specific distribution and replication protocols, the NFSP (NFS Parallel) project aims at improving performance within a standard NFS client/server system. In this paper we investigate the possibilities of a replication model for the NFS server based on Lazy Release Consistency (LRC). A prototype has been built upon the user-level NFSv2 server and a performance evaluation is carried out.


Parallel, Distributed and Network-Based Processing | 2011

Operating Systems and Virtualization Frameworks: From Local to Distributed Similarities

Flavien Quesnel; Adrien Lebre

Virtualization technologies have radically changed the way in which distributed architectures are exploited. With the contribution of VM capabilities and with the emergence of IaaS platforms, more and more frameworks tend to manage VMs across distributed architectures the way operating systems handle processes on a single node. Taking into account that most of these frameworks follow a centralized model, where roughly one node is in charge of the management of VMs, and considering the growing size of infrastructures in terms of nodes and VMs, new proposals relying on more autonomic and decentralized approaches are needed. Designing and implementing such models is a tedious and complex task. However, just as research on OSes and hypervisors is complementary at the node level, we advocate that virtualization frameworks can benefit from the lessons learnt from distributed operating system proposals. In this article, we motivate this position by analyzing similarities between OSes and virtualization frameworks. More precisely, we focus on the management of processes and VMs, first at the node level and then at cluster scale. From our point of view, such investigations can guide the community in designing and implementing new proposals in a more autonomic and distributed way.


International Conference on Parallel Processing | 2003

Improving the Performances of a Distributed NFS Implementation

Pierre Lombard; Yves Denneulin; Olivier Valentin; Adrien Lebre

Our NFS implementation, NFSP (NFS Parallèle), aims at providing transparent ways to aggregate unused disk space by dividing a usually centralized NFS server into smaller entities: a meta-data server and I/O servers. This paper addresses the issues related to improving the performance of such an implementation. Two different approaches have been taken: distributing the load across several servers, and implementing the server in a more efficient but intrusive way (in kernel mode). The results obtained with both versions are given and compared with those of the first user-mode implementation.


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2015

Adding Storage Simulation Capacities to the SimGrid Toolkit: Concepts, Models, and API

Adrien Lebre; Arnaud Legrand; Frédéric Suter; Pierre Veyre

For each kind of distributed computing infrastructure, i.e., clusters, grids, clouds, data centers, or supercomputers, storage is an essential component to cope with the tremendous increase in scientific data production and the ever-growing need for data analysis and preservation. Understanding the performance of a storage subsystem or dimensioning it properly is an important concern for which simulation can help, by allowing for fast, fully repeatable, and configurable experiments on arbitrary hypothetical scenarios. However, most simulation frameworks tailored for the study of distributed systems offer few or no abstractions or models of storage resources. In this paper, we detail the extension of SimGrid, a versatile toolkit for the simulation of large-scale distributed computing systems, with storage simulation capacities. We first define the required abstractions and propose a new API to handle storage components and their contents in SimGrid-based simulators. Then we characterize the performance of disks, the fundamental storage components, and derive models of these resources. Finally, we list several concrete use cases of storage simulations in clusters, grids, clouds, and data centers for which the proposed extension would be beneficial.
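The kind of disk model such an extension relies on can be sketched very simply. The function and parameter names below are assumptions for illustration, not the SimGrid API: a read is charged a fixed latency plus its size divided by the disk bandwidth, which makes the simulated time of a workload fully repeatable.

```python
# Hypothetical sketch of a simple simulated-disk model (not the SimGrid
# storage API): latency + size / bandwidth per operation.

def read_time(size_bytes, latency_s=0.004, bandwidth_bps=100e6):
    """Simulated completion time of one read on a simple disk model."""
    return latency_s + size_bytes / bandwidth_bps

def replay(trace):
    """Accumulate simulated time over a sequential trace of read sizes."""
    return sum(read_time(size) for size in trace)

# Ten sequential 1 MB reads: 10 * (4 ms + 10 ms) = 0.14 s simulated.
print(replay([1_000_000] * 10))
```

Real disk models are calibrated from benchmarks and account for contention, but even this two-parameter form illustrates why simulation is fast and deterministic compared with in vivo experiments.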


Trust, Security and Privacy in Computing and Communications | 2013

Advanced Validation of the DVMS Approach to Fully Distributed VM Scheduling

Flavien Quesnel; Adrien Lebre; Jonathan Pastor; Mario Südholt; Daniel Balouek

The holy grail for Infrastructure-as-a-Service (IaaS) providers is to maximize the utilization of their infrastructure while ensuring the quality of service (QoS) of the virtual machines they host. Although the frameworks in charge of managing virtual machines (VMs) on pools of physical machines (PMs) have been significantly improved, making it possible to manage large-scale infrastructures composed of hundreds of PMs, most of them do not efficiently meet the aforementioned objective. The main reason is that advanced scheduling policies are subject to important and hard scalability problems, which become even worse when VM image transfers have to be considered. In this article, we provide a new validation of the Distributed VM Scheduler (DVMS) approach in a twofold manner. First, we provide a formal proof of the algorithm based on temporal logic. Second, we discuss large-scale evaluations involving up to 4.7K VMs distributed over 467 nodes of the Grid’5000 testbed. As far as we know, these experiments constitute the largest in vivo validation performed so far with decentralized VM schedulers. The results show that a cooperative approach such as ours makes it possible to resolve overload problems in a reactive and scalable way.


International Conference on Parallel Processing | 2003

Distributed file system for clusters and Grids

Olivier Valentin; Pierre Lombard; Adrien Lebre; Christian Guinet; Yves Denneulin

NFSG aims at providing a solution for file accesses within a cluster of clusters. Criteria of ease of installation, administration and usage, but also efficiency as well as minimal hardware and software intrusiveness, have guided our development. By building on several facilities, such as a distributed file system (NFSP) and a high-performance data transfer utility (GXfer), we hope to offer a software architecture fully compatible with the ubiquitous NFS protocol. Thanks to distributed storage (in particular, the multiple I/O servers provided by NFSP), several parallel streams may be used when copying a file from one cluster to another within the same grid. This technique improves data transfers by connecting the distributed file systems at both ends; the GXfer component implements this functionality. Thus, performance otherwise reachable only with dedicated and expensive hardware can be achieved.
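The parallel-stream idea can be sketched as follows. The names and the in-memory "stream" stand-in are assumptions for illustration, not the real GXfer code: the file is split into contiguous byte ranges, one per stream, each range is transferred independently, and the ranges are reassembled in order at the destination.

```python
# Hypothetical sketch of GXfer-style parallel striped transfer (not the
# real GXfer code): one stream per byte range, reassembled in order.

from concurrent.futures import ThreadPoolExecutor

def stripe_ranges(file_size, n_streams):
    """Split [0, file_size) into up to n_streams contiguous byte ranges."""
    chunk = -(-file_size // n_streams)  # ceiling division
    return [(i * chunk, min((i + 1) * chunk, file_size))
            for i in range(n_streams) if i * chunk < file_size]

def transfer(src, ranges):
    """Copy each byte range in its own stream; join preserves order."""
    def copy(r):
        start, end = r
        return src[start:end]  # stands in for one network stream
    with ThreadPoolExecutor(max_workers=len(ranges)) as pool:
        return b"".join(pool.map(copy, ranges))

data = bytes(range(256)) * 40           # a 10240-byte "file"
ranges = stripe_ranges(len(data), 4)    # four parallel stripes
assert transfer(data, ranges) == data   # reassembled copy matches source
```

With NFSP, each stripe can additionally be served by a different I/O server, which is what lets the aggregate bandwidth exceed that of any single server.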

Collaboration


Dive into Adrien Lebre's collaborations.

Top Co-Authors

Flavien Quesnel, École des mines de Nantes
Mario Südholt, École des mines de Nantes
Christian Pérez, École normale supérieure de Lyon
Frédéric Suter, École normale supérieure de Lyon
Jonathan Pastor, École des mines de Nantes