Flavien Quesnel
École des mines de Nantes
Publications
Featured research published by Flavien Quesnel.
International Conference on Cloud Computing and Services Science | 2012
Daniel Balouek; Alexandra Carpen Amarie; Ghislain Charrier; Frédéric Desprez; Emmanuel Jeannot; Emmanuel Jeanvoine; Adrien Lebre; David Margery; Nicolas Niclausse; Lucas Nussbaum; Olivier Richard; Christian Pérez; Flavien Quesnel; Cyril Rohr; Luc Sarzyniec
Almost ten years after its inception, the Grid’5000 testbed has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of High Performance Computing, the infrastructure has evolved to address wider concerns related to Desktop Computing, the Internet of Services and, more recently, the Cloud Computing paradigm. This paper presents recent improvements to the Grid’5000 software and services stack to support large-scale experiments that use virtualization technologies as building blocks. These contributions include the deployment of customized software environments, the reservation of dedicated network domains and the ability to isolate them from one another, and the automation of experiments with a REST API. We illustrate the value of these contributions by describing three different use cases of large-scale experiments on the Grid’5000 testbed. The first leverages virtual machines to conduct larger experiments spread over 4000 peers. The second describes the deployment of 10000 KVM instances over 4 Grid’5000 sites. Finally, the last use case introduces a one-click deployment tool to easily deploy major IaaS solutions. The conclusion highlights some important challenges for Grid’5000 related to the use of OpenFlow and to the management of applications dealing with tremendous amounts of data.
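The experiment automation mentioned above relies on a REST API. The following is a minimal sketch of how such a resource reservation could be scripted; the API root, endpoint path, payload fields and response format are hypothetical placeholders chosen purely for illustration, not a reference to the actual Grid’5000 API specification.

    # Illustrative sketch: submitting a resource reservation through a REST API
    # of the kind described above. Endpoint paths, payload fields and the "uid"
    # response field are hypothetical placeholders, not the documented API.
    import requests

    API_ROOT = "https://api.example-testbed.org/3.0"   # placeholder base URL

    def submit_job(site, nodes, walltime, auth):
        """Reserve `nodes` machines on `site` for `walltime` (HH:MM:SS)."""
        payload = {
            "resources": f"nodes={nodes},walltime={walltime}",
            "command": "sleep infinity",      # keep the reservation alive
            "types": ["deploy"],              # allow custom environment deployment
        }
        resp = requests.post(f"{API_ROOT}/sites/{site}/jobs", json=payload, auth=auth)
        resp.raise_for_status()
        return resp.json()["uid"]             # assumed job identifier field

    if __name__ == "__main__":
        job_id = submit_job("nancy", nodes=50, walltime="02:00:00",
                            auth=("login", "password"))
        print(f"Submitted reservation {job_id}")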
Concurrency and Computation: Practice and Experience | 2013
Flavien Quesnel; Adrien Lebre; Mario Südholt
One of the principal goals of cloud computing is the outsourcing of the hosting of data and applications, thus enabling a per-usage model of computation. Data and applications may be packaged in virtual machines (VMs), which are themselves hosted by nodes, that is, physical machines. Several frameworks have been designed to manage VMs on pools of physical machines; most of them, however, do not efficiently address a major objective of cloud providers: maximizing system utilization while ensuring QoS. Several approaches promote virtualization capabilities to improve this trade-off. However, the dynamic scheduling of a large number of VMs in a large distributed infrastructure raises hard scalability problems that become even worse when VM image transfers have to be managed. Consequently, most current frameworks schedule VMs statically using a centralized control strategy. In this article, we present the Distributed VM Scheduler (DVMS), a framework that enables VMs to be scheduled cooperatively and dynamically in large-scale distributed systems. We describe, in particular, how several VM reconfigurations can be computed in parallel and applied simultaneously. Reconfigurations are enabled by partitioning the system (i.e., nodes and VMs) on the fly. Partitions are created with the minimum of resources necessary to find a solution to the reconfiguration problem. Moreover, we propose an algorithm to handle deadlocks that may appear because of the partitioning policy. We have evaluated our prototype through simulations and compared our approach with a centralized one. The results show that our scheduler allows VMs to be reconfigured more efficiently: the time needed to manage thousands of VMs on hundreds of machines is typically reduced to a tenth or less.
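The on-the-fly partitioning described above can be illustrated with a small sketch: an overloaded node grows a partition one node at a time until enough spare capacity is found to absorb its excess load. All names, data structures and the placement heuristic below are simplifying assumptions made for this example; they are not the authors' implementation.

    # Minimal sketch of on-the-fly partitioning for cooperative VM scheduling.
    # Names and the placement heuristic are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        capacity: int                               # abstract resource units
        vms: dict = field(default_factory=dict)     # vm_name -> demand

        def load(self):
            return sum(self.vms.values())

    def grow_partition_and_solve(overloaded, free_nodes):
        """Grow a partition around an overloaded node until its excess load
        can be migrated; return (migration plan, partition) or (None, partition)."""
        partition = [overloaded]
        plan = []                                   # (vm, source, destination)
        excess = overloaded.load() - overloaded.capacity
        for candidate in free_nodes:                # enlarge one node at a time
            if excess <= 0:
                break
            partition.append(candidate)
            slack = candidate.capacity - candidate.load()
            for vm, demand in sorted(overloaded.vms.items(), key=lambda x: -x[1]):
                if excess <= 0 or slack <= 0:
                    break
                if demand <= slack and all(vm != v for v, _, _ in plan):
                    plan.append((vm, overloaded.name, candidate.name))
                    slack -= demand
                    excess -= demand
        return (plan, partition) if excess <= 0 else (None, partition)

    if __name__ == "__main__":
        hot = Node("n1", capacity=8, vms={"vm1": 4, "vm2": 4, "vm3": 3})  # load 11 > 8
        idle = [Node("n2", capacity=8, vms={"vm4": 2}), Node("n3", capacity=8)]
        plan, partition = grow_partition_and_solve(hot, idle)
        print(plan)   # e.g. [('vm1', 'n1', 'n2')]

A real scheduler would additionally apply the resulting migration plan, release the partition so its nodes can join other partitions, and fall back to the deadlock-handling algorithm mentioned above when no partition can grow any further.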
International Conference on Parallel Processing | 2011
Flavien Quesnel; Adrien Lebre
Cloud computing aims at outsourcing the hosting of data and applications and at charging clients on a per-usage basis. These data and applications may be packaged in virtual machines (VMs), which are themselves hosted by nodes, i.e. physical machines. Consequently, several frameworks have been designed to manage VMs on pools of nodes. Unfortunately, most of them do not efficiently address a common objective of cloud providers: maximizing system utilization while ensuring the quality of service (QoS). The main reason is that these frameworks schedule VMs in a static way and/or have a centralized design. In this article, we introduce a framework that enables VMs to be scheduled cooperatively and dynamically in distributed systems. We evaluated our prototype through simulations to compare our approach with a centralized one. Preliminary results showed that our scheduler was more reactive. As future work, we plan to further investigate the scalability of our framework and to improve its reactivity and fault tolerance.
IEEE International Conference on Green Computing and Communications | 2013
Flavien Quesnel; Hemant Kumar Mehta; Jean-Marc Menaud
Power management has become one of the main challenges for data center infrastructures. Currently, the cost of powering a server is approaching the cost of the server hardware itself; in the near future, the former will continue to increase while the latter goes down. In this context, virtualization is used to decrease the number of servers and increase the efficiency of the remaining ones. While virtualization can positively impact data center energy consumption, this new abstraction layer disconnects user services (hosted in virtual machines) from their operating cost. In this paper, we propose an approach and a model to estimate the total power consumption of a virtual machine by taking into account both its static (e.g., memory) and dynamic (e.g., CPU) resource consumption. This model makes it possible to reconnect each VM to its operating cost, and it provides more information to virtual infrastructure providers and users so that they can optimize their infrastructures and applications. Experimental results show that the proposed method outperforms methods from the literature that only consider the dynamic consumption of resources.
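As a rough illustration of combining static and dynamic consumption, the sketch below shares a host's idle power among VMs according to their memory footprint and its CPU-induced power according to their CPU activity. The linear model, the sharing rules and all coefficients are assumptions made for this example, not the calibrated model proposed in the paper.

    # Illustrative per-VM power estimation combining a static and a dynamic share.
    # The linear model and its coefficients are assumptions for this example.
    def vm_power(host_idle_w, host_peak_w, host_cpu_util,
                 vm_mem_fraction, vm_cpu_fraction):
        """Estimate the power (in watts) attributable to one VM.

        host_idle_w      -- host power when idle (static part)
        host_peak_w      -- host power at full CPU load
        host_cpu_util    -- overall host CPU utilization in [0, 1]
        vm_mem_fraction  -- VM share of the memory allocated to all VMs
        vm_cpu_fraction  -- VM share of the CPU time consumed by all VMs
        """
        static_share = host_idle_w * vm_mem_fraction
        dynamic_total = (host_peak_w - host_idle_w) * host_cpu_util
        dynamic_share = dynamic_total * vm_cpu_fraction
        return static_share + dynamic_share

    # Example: a 100 W idle / 250 W peak host at 60% CPU, hosting a VM that
    # holds 25% of the allocated memory and causes 40% of the CPU activity.
    print(vm_power(100, 250, 0.6, 0.25, 0.4))   # -> 25.0 + 36.0 = 61.0 W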
Archive | 2014
Marin Bertier; Frédéric Desprez; Gilles Fedak; Adrien Lèbre; Anne-Cécile Orgerie; Jonathan Pastor; Flavien Quesnel; Jonathan Rouzaud-Cornabas; Cédric Tedeschi
To accommodate the ever-increasing demand for Utility Computing (UC) resources while taking into account both energy and economical issues, the current trend consists in building ever larger data centers in a few strategic locations. Although such an approach makes it possible to cope with the current demand while continuing to operate UC resources through centralized software systems, it is far from delivering sustainable and efficient UC infrastructures. In this scenario, we claim that a disruptive change in UC infrastructures is required: UC resources should be managed differently, with locality as a primary concern. To this end, we propose to leverage any facilities available through the Internet in order to deliver widely distributed UC platforms that can better match the geographical dispersal of users as well as the unending resource demand. Critical to the emergence of such locality-based UC (LUC) platforms is the availability of appropriate operating mechanisms. We advocate the implementation of a unified system driving the use of resources at an unprecedented scale by turning a complex and diverse infrastructure into a collection of abstracted computing facilities that is both easy to operate and reliable. By deploying and using such a LUC Operating System on backbones, our ultimate vision is to make it possible to host and operate a large part of the Internet within its own internal structure: a scalable and nearly infinite set of resources delivered by the computing facilities that form the Internet, from the largest hubs operated by ISPs, governments, and academic institutions down to any idle resources provided by end users.
Parallel, Distributed and Network-Based Processing | 2011
Flavien Quesnel; Adrien Lebre
Virtualization technologies have radically changed the way in which distributed architectures are exploited. Thanks to VM capabilities and to the emergence of IaaS platforms, more and more frameworks manage VMs across distributed architectures in the same way that operating systems handle processes on a single node. Taking into account that most of these frameworks follow a centralized model -- where roughly one node is in charge of the management of VMs -- and considering the growing size of infrastructures in terms of nodes and VMs, new proposals relying on more autonomic and decentralized approaches are needed. Designing and implementing such models is a tedious and complex task. However, just as research on OSes and hypervisors is complementary at the node level, we advocate that virtualization frameworks can benefit from the lessons learned from distributed operating system proposals. In this article, we motivate this position by analyzing similarities between OSes and virtualization frameworks. More precisely, we focus on the management of processes and VMs, first at the node level and then at the cluster scale. From our point of view, such investigations can guide the community in designing and implementing new proposals in a more autonomic and distributed way.
International Conference on Parallel Processing | 2011
Adrien Lebre; Paolo Anedda; Massimo Gaggero; Flavien Quesnel
Although the use of virtual environments provided by cloud computing infrastructures is gaining acceptance in the scientific community, running applications in these environments is still far from reaching the maturity of more usual computing facilities such as clusters or grids. Indeed, current solutions for managing virtual environments are mostly based on centralized approaches that trade large-scale concerns such as scalability, reliability and reactivity for simplicity. However, considering current trends in cloud infrastructures in terms of size (larger and larger) and usage (cross-federation), these large-scale concerns must be addressed as soon as possible to efficiently manage the next generation of cloud computing platforms. In this work, we propose to investigate an alternative approach leveraging DIStributed and COoperative mechanisms to manage Virtual EnviRonments autonomicallY (DISCOVERY). This initiative aims at overcoming the main limitations of traditional server-centric solutions while integrating all mandatory mechanisms into a unified distributed framework. The system we propose to implement relies on a peer-to-peer model in which each agent can efficiently deploy, dynamically schedule and periodically checkpoint the virtual environments it manages. The article introduces the global design of the DISCOVERY proposal and gives a preliminary description of its internals.
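The per-node agent responsibilities listed above (deployment, dynamic scheduling, periodic checkpointing) can be summarized as an interface sketch. The class and method names below are invented for illustration and do not correspond to the DISCOVERY code base.

    # Interface sketch for a peer agent in a distributed, cooperative VM manager
    # such as the one described above. Names and signatures are illustrative.
    import abc

    class PeerAgent(abc.ABC):
        """One agent per node; agents cooperate instead of relying on a master."""

        @abc.abstractmethod
        def deploy(self, virtual_env_image: str) -> str:
            """Instantiate a virtual environment locally and return its id."""

        @abc.abstractmethod
        def schedule(self, neighbors: list) -> None:
            """Cooperatively rebalance VMs with neighboring agents when the
            local node becomes overloaded (no central scheduler involved)."""

        @abc.abstractmethod
        def checkpoint(self, env_id: str, storage_uri: str) -> None:
            """Periodically snapshot a virtual environment for fault tolerance."""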
Archive | 2012
Daniel Balouek; Alexandra Carpen-Amarie; Ghislain Charrier; Frédéric Desprez; Emmanuel Jeannot; Emmanuel Jeanvoine; Adrien Lèbre; David Margery; Nicolas Niclausse; Lucas Nussbaum; Olivier Richard; Christian Pérez; Flavien Quesnel; Cyril Rohr; Luc Sarzyniec
Trust, Security and Privacy in Computing and Communications | 2013
Flavien Quesnel; Adrien Lebre; Jonathan Pastor; Mario Südholt; Daniel Balouek
IEEE International Scalable Computing Challenge (SCALE 2013), held in conjunction with CCGrid'2013 | 2013
Daniel Balouek; Adrien Lèbre; Flavien Quesnel