Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Emmanuel Jeannot is active.

Publication


Featured research published by Emmanuel Jeannot.


ACM Symposium on Parallel Algorithms and Architectures | 2007

Bi-objective scheduling algorithms for optimizing makespan and reliability on heterogeneous systems

Jack J. Dongarra; Emmanuel Jeannot; Erik Saule; Zhiao Shi

We tackle the problem of scheduling task graphs onto a heterogeneous set of machines, where each processor has a probability of failure governed by an exponential law. The goal is to design algorithms that optimize both makespan and reliability. First, we provide an optimal scheduling algorithm for independent unitary tasks where the objective is to maximize reliability subject to makespan minimization. For the bi-criteria case, we provide an algorithm that approximates the Pareto curve. Next, for independent non-unitary tasks, we show that the product {failure rate} × {unitary instruction execution time} is crucial to distinguish processors in this context. Based on these results, we are able to let the user choose a trade-off between reliability maximization and makespan minimization. For general task graphs, we provide a method for converting scheduling heuristics for heterogeneous clusters into heuristics that take reliability into account. Here again, we show how we can help the user select a trade-off between makespan and reliability.
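The product mentioned above can be turned into a tiny ranking rule. The sketch below is purely illustrative (processor names and rates are invented, and this is not the paper's algorithm): it orders processors by {failure rate} × {unitary instruction execution time}, so that work goes first to processors that are both fast and reliable.

```python
# Hypothetical illustration: rank processors by the product
# (failure rate) x (unitary instruction execution time), the
# quantity the abstract identifies as discriminating processors.
# All numbers below are made up for demonstration.

def rank_processors(procs):
    """Sort processors by failure_rate * unit_time (ascending).

    A low product means the processor is both fast and reliable,
    so it should receive work first in a reliability-aware schedule.
    """
    return sorted(procs, key=lambda p: p["failure_rate"] * p["unit_time"])

procs = [
    {"name": "p1", "failure_rate": 1e-4, "unit_time": 2.0},  # product 2.0e-4
    {"name": "p2", "failure_rate": 1e-3, "unit_time": 0.5},  # product 5.0e-4
    {"name": "p3", "failure_rate": 5e-5, "unit_time": 3.0},  # product 1.5e-4
]

order = [p["name"] for p in rank_processors(procs)]
print(order)  # p3 first: smallest failure-rate x unit-time product
```

A schedule can then fill processors in this order, trading speed against the risk of failure.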


European Conference on Parallel Processing | 2010

Near-optimal placement of MPI processes on hierarchical NUMA architectures

Emmanuel Jeannot; Guillaume Mercier

MPI process placement can play a decisive role in application performance. This is especially true on today's architectures (heterogeneous, multicore, with several levels of cache, etc.). In this paper, we describe a novel algorithm called TreeMatch that maps processes to resources in order to reduce the communication cost of the whole application. We have implemented this algorithm and discuss its performance in simulation and on the NAS benchmarks.
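TreeMatch itself matches the communication pattern against the full topology tree; the toy sketch below only conveys the flavor of the idea, under invented data: greedily co-locate the heaviest-communicating process pairs on core pairs assumed to share a cache. This is not the TreeMatch algorithm.

```python
# Toy greedy placement sketch (not the actual TreeMatch algorithm):
# pair the two processes that exchange the most data and place them
# on cores that share a cache, then repeat. Core pairs (0,1) and
# (2,3) are assumed to share a cache in this made-up 4-core topology.

def greedy_placement(comm, core_pairs):
    """comm[i][j] = traffic between processes i and j (symmetric)."""
    n = len(comm)
    placed = set()
    placement = {}
    for (ca, cb) in core_pairs:
        # pick the unplaced process pair with the heaviest traffic
        best = max(
            ((i, j) for i in range(n) for j in range(i + 1, n)
             if i not in placed and j not in placed),
            key=lambda ij: comm[ij[0]][ij[1]],
        )
        placement[best[0]], placement[best[1]] = ca, cb
        placed.update(best)
    return placement

comm = [
    [0, 10, 1, 1],
    [10, 0, 1, 1],
    [1, 1, 0, 8],
    [1, 1, 8, 0],
]
placement = greedy_placement(comm, [(0, 1), (2, 3)])
print(placement)  # processes 0,1 share a cache; so do 2,3
```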


High Performance Distributed Computing | 2002

Adaptive online data compression

Emmanuel Jeannot; Bjorn Knutsson

Quickly transmitting large datasets in the context of distributed computing on wide-area networks can be achieved by compressing data before transmission. However, such an approach is not efficient when dealing with high-speed networks: the time to compress a large file and send it can exceed the time to send the uncompressed file. In this paper, we explore and enhance an algorithm that allows us to overlap communication with compression and to automatically adapt the compression effort to the currently available network and processor resources.
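The overlap-and-adapt idea can be condensed into a small decision rule. The sketch below is a simplified illustration (the throughput numbers are made up, and the actual algorithm adapts continuously per block rather than once): with compression overlapped with transmission, the slower of the two stages is the bottleneck, which decides whether compressing pays off.

```python
# Simplified sketch of the adaptation idea (not the paper's algorithm):
# compress a block only if the estimated compress-and-send time beats
# sending it raw, given current network and CPU throughput estimates.
# All rates below are illustrative.

def should_compress(block, net_bps, compress_bps, ratio_estimate):
    """Return True when compressing the block is expected to pay off.

    net_bps        : current network throughput (bytes/s)
    compress_bps   : current compression throughput (bytes/s)
    ratio_estimate : expected compressed/original size ratio (0..1)
    """
    raw_time = len(block) / net_bps
    # with overlap, the bottleneck is the slower of the two stages
    compressed_time = max(len(block) / compress_bps,
                          len(block) * ratio_estimate / net_bps)
    return compressed_time < raw_time

block = b"x" * 1_000_000
# slow network, fast CPU: compression wins
print(should_compress(block, net_bps=1e6, compress_bps=50e6, ratio_estimate=0.3))
# fast network: sending raw wins
print(should_compress(block, net_bps=1e9, compress_bps=50e6, ratio_estimate=0.3))
```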


IEEE Transactions on Parallel and Distributed Systems | 2006

On the distribution of sequential jobs in random brokering for heterogeneous computational grids

Vandy Berten; Joël Goossens; Emmanuel Jeannot

Scheduling stochastic workloads is a difficult task. In order to design efficient scheduling algorithms for such workloads, a good in-depth knowledge of basic random scheduling strategies is required. This paper analyzes the distribution of sequential jobs and the system behavior in heterogeneous computational grid environments where the brokering is done in such a way that each computing element is chosen with a probability proportional to its number of CPUs and (extending our previous work) its relative speed. We provide the asymptotic behavior for several metrics (queue sizes, slowdowns, etc.) or, in some cases, an approximation of this behavior. We study these metrics for a variety of workload configurations (load, distribution, etc.) and compare our probabilistic analysis to simulations in order to validate our results. These results provide a good understanding of the system behavior for each proposed metric, enabling us to design advanced and efficient algorithms for more complex cases.
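The brokering rule described above is straightforward to sketch. The computing elements below are invented; the point is only that the selection probability is proportional to (number of CPUs) × (relative speed).

```python
import random
from collections import Counter

# Illustrative sketch of the brokering rule described above: each
# computing element is chosen with probability proportional to
# (number of CPUs) x (relative speed). Element data is made up.

elements = [
    {"name": "ce1", "cpus": 4, "speed": 1.0},
    {"name": "ce2", "cpus": 8, "speed": 2.0},
    {"name": "ce3", "cpus": 2, "speed": 1.0},
]

weights = [e["cpus"] * e["speed"] for e in elements]  # 4.0, 16.0, 2.0
total = sum(weights)                                   # 22.0
probs = [w / total for w in weights]

rng = random.Random(42)
picks = Counter(
    rng.choices([e["name"] for e in elements], weights=weights)[0]
    for _ in range(10_000)
)
print(probs)                       # theoretical selection probabilities
print(picks.most_common(1)[0][0])  # ce2 dominates the draws
```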


International Conference on Parallel Processing | 2001

Triplet: A clustering scheduling algorithm for heterogeneous systems

Bertrand Cirou; Emmanuel Jeannot

The goal of the OURAGAN project is to provide Scilab users with access to meta-computing resources. We present an approach that consists, given a Scilab script, in scheduling and executing this script on a heterogeneous cluster of machines. One of the most effective scheduling techniques, called clustering, consists in grouping tasks on virtual processors (clusters) and then mapping those clusters onto real processors. In this paper, we study and apply the clustering technique to heterogeneous systems. We present a clustering algorithm called Triplet, study its performance, and compare it to the HEFT algorithm. We show that Triplet has good characteristics and outperforms HEFT in most cases.


IEEE Transactions on Parallel and Distributed Systems | 2014

Process Placement in Multicore Clusters: Algorithmic Issues and Practical Techniques

Emmanuel Jeannot; Guillaume Mercier; François Tessier

Current generations of NUMA node clusters feature multicore or manycore processors. Programming such architectures efficiently is a challenge because numerous hardware characteristics have to be taken into account, especially the memory hierarchy. One appealing idea to improve the performance of parallel applications is to decrease their communication costs by matching the communication pattern to the underlying hardware architecture. In this paper, we detail the algorithm and techniques proposed to achieve such a result: first, we gather both the communication pattern information and the hardware details. Then we compute a relevant reordering of the various process ranks of the application. Finally, those new ranks are used to reduce the communication costs of the application.
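The quantity the reordering reduces can be written down directly. In the invented example below, the cost of a placement is the sum over process pairs of traffic × core distance; a placement that separates heavily-communicating ranks pays a visibly higher cost.

```python
# Toy illustration of rank reordering: the total communication cost
# of a placement is the sum over process pairs of traffic x distance
# between their assigned cores. Both matrices are invented.

def comm_cost(traffic, distance, placement):
    """placement[rank] = core hosting that rank."""
    n = len(traffic)
    return sum(
        traffic[i][j] * distance[placement[i]][placement[j]]
        for i in range(n) for j in range(i + 1, n)
    )

traffic = [          # ranks 0-1 and 2-3 talk heavily
    [0, 9, 0, 0],
    [9, 0, 0, 0],
    [0, 0, 0, 9],
    [0, 0, 9, 0],
]
distance = [         # cores 0-1 are close, 2-3 are close, rest far
    [0, 1, 4, 4],
    [1, 0, 4, 4],
    [4, 4, 0, 1],
    [4, 4, 1, 0],
]

identity = [0, 1, 2, 3]   # heavy pairs land on close cores
shuffled = [0, 2, 1, 3]   # heavy pairs split across distant cores
print(comm_cost(traffic, distance, identity))  # 18
print(comm_cost(traffic, distance, shuffled))  # 72
```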


International Conference on Cloud Computing and Services Science | 2012

Adding Virtualization Capabilities to the Grid’5000 Testbed

Daniel Balouek; Alexandra Carpen Amarie; Ghislain Charrier; Frédéric Desprez; Emmanuel Jeannot; Emmanuel Jeanvoine; Adrien Lebre; David Margery; Nicolas Niclausse; Lucas Nussbaum; Olivier Richard; Christian Pérez; Flavien Quesnel; Cyril Rohr; Luc Sarzyniec

Almost ten years after its inception, the Grid’5000 testbed has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of High Performance Computing, the infrastructure has evolved to address wider concerns related to Desktop Computing, the Internet of Services, and more recently the Cloud Computing paradigm. This paper presents recent improvements to the Grid’5000 software and services stack to support large-scale experiments using virtualization technologies as building blocks. These contributions include the deployment of customized software environments, the reservation of dedicated network domains and the possibility to isolate them from one another, and the automation of experiments with a REST API. We illustrate the value of these contributions by describing three different use cases of large-scale experiments on the Grid’5000 testbed. The first leverages virtual machines to conduct larger experiments spread over 4000 peers. The second describes the deployment of 10000 KVM instances over 4 Grid’5000 sites. Finally, the last use case introduces a one-click deployment tool to easily deploy major IaaS solutions. The conclusion highlights some important challenges for Grid’5000 related to the use of OpenFlow and to the management of applications dealing with tremendous amounts of data.


International Conference on Cluster Computing | 2006

Robust task scheduling in non-deterministic heterogeneous computing systems

Zhiao Shi; Emmanuel Jeannot; Jack J. Dongarra

The paper addresses the problem of matching and scheduling a DAG-structured application to both minimize the makespan and maximize the robustness in a heterogeneous computing system. Because the two objectives conflict, it is usually impossible to achieve both goals at the same time. We give two definitions of the robustness of a schedule, based on tardiness and miss rate, and show that slack is an effective metric for adjusting robustness. We employ the ε-constraint method to solve the bi-objective optimization problem in which minimizing the makespan and maximizing the slack are the two objectives. The overall performance of a schedule, considering both makespan and robustness, is defined so that users have the flexibility to put emphasis on either objective. Experimental results are presented to validate the performance of the proposed algorithm.
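The ε-constraint method can be illustrated on a handful of invented candidate schedules: fix a bound on one objective (makespan) and optimize the other (slack, the robustness proxy). This is only the selection step, not the paper's full scheduling algorithm.

```python
# Sketch of the epsilon-constraint idea on invented candidate
# schedules: keep one objective (makespan) as a constraint and
# optimize the other (slack).

def eps_constraint(candidates, makespan_bound):
    """Return the schedule with maximal slack among those whose
    makespan does not exceed the bound, or None if none qualifies."""
    feasible = [c for c in candidates if c["makespan"] <= makespan_bound]
    return max(feasible, key=lambda c: c["slack"], default=None)

candidates = [
    {"name": "s1", "makespan": 100, "slack": 5},
    {"name": "s2", "makespan": 110, "slack": 20},
    {"name": "s3", "makespan": 130, "slack": 35},
]

best = eps_constraint(candidates, makespan_bound=120)
print(best["name"])  # s2: most slack within the makespan bound
```

Sweeping the bound over a range of values traces out the trade-off between the two objectives.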


Journal of Parallel and Distributed Computing | 1999

Compact DAG representation and its symbolic scheduling

Michel Cosnard; Emmanuel Jeannot; Tao Yang

Scheduling large task graphs is an important issue in parallel computing. In this paper we tackle the two following problems: (1) how to schedule a task graph when it is too large to fit into memory? (2) How to build a generic program such that parameter values of a task graph can be given at run time? Our answers feature the parameterized task graph (PTG), a symbolic representation of the task graph. We propose a dynamic scheduling algorithm that takes a PTG as input and allows us to generate a generic program. We present a theoretical study showing that our algorithm finds good schedules for coarse-grain task graphs, with a low memory cost and a low computational complexity. When the average number of operations per task is large enough, we prove that the scheduling overhead is negligible with respect to the makespan. We also provide experimental results that demonstrate the feasibility of our approach using several compute-intensive kernels found in numerical scientific applications.
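The run-time instantiation idea behind the PTG can be mimicked with a generator: the graph is described by a symbolic rule over a parameter, and concrete tasks are only materialized once the parameter is known. The rule below (a simple dependency chain) is invented for illustration and is not the paper's PTG formalism.

```python
# Toy sketch of a parameterized task graph: tasks and dependencies
# are described by a rule over a parameter n, and the concrete graph
# is only expanded at run time, so it never has to fit in memory
# all at once. The chain rule here is invented for illustration.

def chain_ptg(n):
    """Symbolic rule: task i depends on task i-1, for 0 <= i < n."""
    for i in range(n):
        deps = [i - 1] if i > 0 else []
        yield {"task": i, "deps": deps}

# the same generic "program" instantiated with a run-time parameter
graph = list(chain_ptg(4))
print(len(graph))        # 4 tasks materialized
print(graph[2]["deps"])  # [1]
```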


European Conference on Parallel Processing | 2008

Bi-objective Approximation Scheme for Makespan and Reliability Optimization on Uniform Parallel Machines

Emmanuel Jeannot; Erik Saule; Denis Trystram

We study the problem of scheduling independent tasks on a set of related processors which have a probability of failure governed by an exponential law. We are interested in the bi-objective analysis, namely the simultaneous optimization of makespan and reliability. We show that this problem cannot be approximated by a single schedule. A similar problem has already been studied leading to a

Collaboration


Dive into Emmanuel Jeannot's collaborations.

Top Co-Authors

Frédéric Desprez
École normale supérieure de Lyon

Jesús Carretero
Instituto de Salud Carlos III

Martin Quinson
École normale supérieure de Lyon

Stephen L. Scott
Oak Ridge National Laboratory