Publication


Featured research published by Jean-Marc Nicod.


Scientific Programming | 2005

Managing data persistence in network enabled servers

Eddy Caron; Bruno Del-Fabbro; Frédéric Desprez; Emmanuel Jeannot; Jean-Marc Nicod

The GridRPC model [17] is an emerging standard promoted by the Global Grid Forum (GGF) that defines how to perform remote client-server computations on a distributed architecture. In this model, data are sent back to the client at the end of every computation. This implies unnecessary communications when computed data are needed by another server in further computations. Since communication time is sometimes the dominant cost of remote computations, this cost has to be lowered. Several tools instantiate the GridRPC model, such as NetSolve, developed at the University of Tennessee, Knoxville, USA, and DIET, developed at the LIP laboratory, ENS Lyon, France. They are usually called Network Enabled Servers (NES). In this paper, we present a discussion of the data management solutions chosen for these two NES environments (NetSolve and DIET) as well as experimental results.
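The persistence idea shared by these environments can be sketched with a toy model: a computation returns an opaque handle, the result stays on the server, and a later call can pass the handle instead of shipping the data back and forth. All class and function names below are illustrative assumptions, not the NetSolve or DIET API.

```python
# Minimal sketch of server-side data persistence in the GridRPC style:
# a computation returns a handle instead of the data itself, so a
# later call on the same server can reuse the result without shipping
# it back to the client. Names are illustrative, not a real NES API.

import uuid

class Server:
    def __init__(self):
        self._store = {}          # handle -> persistent data

    def compute(self, func, *args):
        # Arguments may be raw values or handles to data already
        # resident on this server.
        resolved = [self._store.get(a, a) for a in args]
        handle = str(uuid.uuid4())
        self._store[handle] = func(*resolved)
        return handle             # only the handle travels back

    def fetch(self, handle):
        # Explicit transfer, paid only when the client really needs it.
        return self._store[handle]

server = Server()
h1 = server.compute(lambda x, y: x + y, 2, 3)   # result stays remote
h2 = server.compute(lambda x, y: x * y, h1, 4)  # reuses h1 in place
print(server.fetch(h2))  # -> 20
```

Only the final `fetch` moves data to the client; the intermediate result referenced by `h1` never leaves the server, which is exactly the communication saving the paper targets.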


Concurrency and Computation: Practice and Experience | 2007

DTM: a service for managing data persistency and data replication in network-enabled server environments

Bruno Del-Fabbro; David Laiymani; Jean-Marc Nicod; Laurent Philippe

Network-enabled server (NES) environments are valuable candidates to provide simple computing Grid access. These environments allow transparent access to a set of computational servers via Remote Procedure Call mechanisms. In this context, a challenge is to increase performance by decreasing data traffic. This paper presents DTM (Data Tree Manager), a data management service for NES environments. Based on the notions of data persistency and data replication, DTM proposes a set of efficient policies which minimize computation times by decreasing data transfers between the clients and the platform. From the end-user point of view, DTM is accessible through a simple and transparent API. We describe DTM and its implementation in the DIET (Distributed Interactive Engineering Toolbox) platform. We also present a set of experimental results which show the feasibility and the efficiency of our approach.


Conference on Automation Science and Engineering | 2014

Prognostics-based scheduling in a distributed platform: Model, complexity and resolution

Nathalie Herr; Jean-Marc Nicod; Christophe Varnier

In the field of production scheduling, this paper addresses the problem of maximizing the production horizon of a heterogeneous platform composed of identical parallel machines which has to provide a given production service. Each machine is assumed to be able to provide several throughputs corresponding to different operating conditions. The key point is to select the appropriate profile for each machine over the whole production horizon. The use of Prognostics and Health Management (PHM) results, in the form of Remaining Useful Life (RUL), makes it possible to adapt the schedule to the wear and tear of machines. In the homogeneous case, we propose the Longest Remaining Useful Life first algorithm (LRUL) to find a solution and we prove its optimality. The NP-completeness of the general case is then shown. Several heuristics are finally proposed to cope with the decision problem and are compared through simulation results, which assess their efficiency. The distance to the theoretical maximal value comes close to 5% for the most efficient ones.
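As an illustration of the LRUL rule described above, here is a minimal sketch under simplifying assumptions: every machine runs a single unit-throughput profile, the service requires a fixed number of machines running at each time step, and RULs are counted in whole steps. The function name and model are hypothetical, not the paper's implementation.

```python
# Illustrative sketch of a "Longest Remaining Useful Life first" (LRUL)
# policy for the homogeneous case. Assumptions (not from the paper):
# one unit-throughput profile per machine, a demand of `demand`
# machines running per step, integer RULs.

def lrul_horizon(ruls, demand):
    """Number of time steps the service can be sustained when, at each
    step, the `demand` machines with the longest remaining useful life
    are selected to run."""
    ruls = list(ruls)
    horizon = 0
    while True:
        # Machines that can still run at least one more step.
        alive = [i for i, r in enumerate(ruls) if r > 0]
        if len(alive) < demand:
            return horizon
        # LRUL rule: run the machines with the longest remaining RUL.
        alive.sort(key=lambda i: ruls[i], reverse=True)
        for i in alive[:demand]:
            ruls[i] -= 1
        horizon += 1

print(lrul_horizon([3, 3, 2], demand=2))  # -> 4
```

In this toy instance the total RUL is 8 and two machines must run per step, so 4 steps is the best achievable horizon; always spending the longest RULs first avoids stranding unusable life on a single surviving machine.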


IEEE Conference on Prognostics and Health Management | 2014

Prognostic Decision Making to extend a platform useful life under service constraint

Nathalie Herr; Jean-Marc Nicod; Christophe Varnier

This paper addresses the problem of optimizing the useful life of a heterogeneous distributed platform which has to provide a given production service. The purpose is to provide a production schedule that maximizes the production horizon. The use of Prognostics and Health Management (PHM) results, in the form of Remaining Useful Life (RUL), makes it possible to adapt the schedule to the wear and tear of equipment. This work comes within the scope of Prognostics Decision Making (DM). Each considered machine is assumed to be able to provide several throughputs corresponding to different operating conditions. The key point is to select the appropriate profile for each machine over the whole useful life of the platform. Several heuristics are proposed to cope with this decision problem and are compared through simulation results, which assess their efficiency. The distance to the theoretical maximal value comes close to 10% for the most efficient ones. A repair module that revises the schedules provided by the heuristics is moreover proposed to enhance the results. First results are promising.


IEEE Conference on Prognostics and Health Management | 2015

A post-prognostics decision approach to optimize the commitment of fuel cell systems in stationary applications

Stéphane Chrétien; Nathalie Herr; Jean-Marc Nicod; Christophe Varnier

The use of fuel cells appears to be of growing interest as a potential alternative to conventional power systems. Fuel cell systems suffer, however, from insufficient durability, and their lifetime needs to be improved. Prognostics results, in the form of Remaining Useful Life (RUL), are used in a Prognostics and Health Management (PHM) framework to maximize the global useful life of a multi-stack fuel cell system under a service constraint. Convex optimization is used to define the contribution of each stack to a global required power output. A Mirror-Prox for Saddle Points method is proposed to cope with the assignment problem. The resolution method is detailed and promising simulation results are provided.


International Journal of Parallel, Emergent and Distributed Systems | 2012

Assessing new approaches to schedule a batch of identical intree-shaped workflows on a heterogeneous platform

Sékou Diakité; Jean-Marc Nicod; Laurent Philippe; Lamiel Toch

In this paper, we consider makespan optimisation when scheduling a batch of identical workflows on a heterogeneous platform such as a service-oriented grid or a micro-factory. A job is represented by a directed acyclic graph (DAG) with typed tasks and no fork nodes (in-tree precedence constraints). The processing resources are able to process a set of task types, each with unrelated processing costs. The objective function is to minimise the execution makespan of a batch of identical workflows, whereas most previous works concentrate on the throughput in this case. Three algorithms are studied in this context: a classical list algorithm and two algorithms based on new approaches, a genetic algorithm and a steady-state algorithm. The contribution of this paper lies both in the adaptation of these algorithms to the particular case of batches of identical workflows and in the performance analysis of these algorithms with regard to the makespan. We show the benefits of their adaptation, and we show that algorithm performance depends on the structure of the workflow, on the size of the batch and on the platform characteristics.


Parallel Computing | 2011

Mapping workflow applications with types on heterogeneous specialized platforms

Anne Benoit; Alexandru Dobrila; Jean-Marc Nicod; Laurent Philippe

In this paper, we study the problem of optimizing the throughput of coarse-grain workflow applications, for which each task of the workflow is of a given type, and subject to failures. The goal is to map such an application onto a heterogeneous specialized platform, which consists of a set of processors that can each be specialized to process one type of task. The objective function is to maximize the throughput of the workflow, i.e., the rate at which the data sets can enter the system. If there is exactly one task per processor in the mapping, then we prove that the optimal solution can be computed in polynomial time. However, the problem becomes NP-hard if several tasks can be assigned to the same processor. Several polynomial time heuristics are presented for the most realistic specialized setting, in which tasks of the same type can be mapped onto the same processor, but a processor cannot process two tasks of different types. Also, we give an integer linear program formulation of this problem, which allows us to find the optimal solution (in exponential time) for small problem instances. Experimental results show that the best heuristics obtain a good throughput, much better than the throughput obtained with a random mapping. Moreover, we obtain a throughput close to the optimal solution in the particular cases in which the optimal throughput can be computed (small problem instances or particular mappings).


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2010

Throughput optimization for micro-factories subject to task and machine failures

Anne Benoit; Alexandru Dobrila; Jean-Marc Nicod; Laurent Philippe

In this paper, we study the problem of optimizing the throughput of micro-factories subject to failures. The challenge consists in mapping several tasks of different types onto a set of machines. The originality of our approach is the failure model for such applications, in which not only are the machines subject to failures but the reliability of a task may also depend on its type. The failure rate is unrelated: a probability of failure is associated with each pair (task type, machine). We consider different kinds of mappings: in one-to-one mappings, each machine can process only a single task, while several tasks of the same type can be processed by the same machine in specialized mappings. Finally, general mappings have no constraints. The optimal one-to-one mapping can be found in polynomial time for particular problem instances, but the problem is NP-hard in most cases. For the most realistic case of specialized mappings, which turns out to be NP-hard, we design several polynomial time heuristics, and a linear program allows us to find the optimal solution (in exponential time) for small problem instances. Experimental results show that the best heuristics obtain a good throughput, much better than the throughput achieved with a random mapping. Moreover, we obtain a throughput close to the optimal solution in the particular cases where the optimal throughput can be computed.
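The effect of such a failure model on throughput can be sketched as follows, assuming each failed attempt is independently retried, so a task of type t on machine m has expected cost w[t][m] / (1 - fail[t][m]); the throughput of a mapping is then the inverse of the load of the bottleneck machine. The data, names and retry model are invented for illustration.

```python
# Sketch of a throughput computation under an unrelated failure model:
# a task of type t on machine m fails with probability fail[t][m], and
# (assuming independent retries) its expected processing cost becomes
# w[t][m] / (1 - fail[t][m]). Platform throughput is the inverse of
# the load of the most loaded machine. All data are illustrative.

def throughput(mapping, w, fail):
    """mapping: dict machine -> list of task types assigned to it."""
    loads = []
    for m, tasks in mapping.items():
        load = sum(w[t][m] / (1.0 - fail[t][m]) for t in tasks)
        loads.append(load)
    # The bottleneck machine dictates how fast data sets may enter.
    return 1.0 / max(loads)

w    = {"drill": {0: 2.0, 1: 3.0}, "weld": {0: 4.0, 1: 2.0}}
fail = {"drill": {0: 0.0, 1: 0.5}, "weld": {0: 0.5, 1: 0.0}}

# Specialized mapping: machine 0 only drills, machine 1 only welds.
print(throughput({0: ["drill"], 1: ["weld"]}, w, fail))  # -> 0.5
```

Note how the specialized mapping routes each task type to the machine where it is both fast and reliable; piling both types onto machine 0 would drop the throughput to 0.1 because the weld failures inflate its expected load.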


First International Conference on Distributed Framework and Applications | 2008

Processing identical workflows on SOA grids: Comparison of three approaches

Sékou Diakité; Jean-Marc Nicod; Laurent Philippe

In this paper we consider the scheduling of a batch of workflows on a service-oriented grid. A job is represented by a directed acyclic graph without fork nodes (an in-tree) but with typed tasks. The processors are distributed and each processor has a set of services that carry out equivalent task types. The objective function is to minimize the makespan of the batch execution. Three algorithms are studied in this context: an online algorithm, a genetic algorithm and a steady-state algorithm. The contribution of this paper lies in the experimental analysis of these algorithms and in their adaptation to the context. We show that their performance depends on the size and complexity of the batch and on the characteristics of the execution platform.


International Conference on Parallel Processing | 2011

A genetic algorithm with communication costs to schedule workflows on a SOA-Grid

Jean-Marc Nicod; Laurent Philippe; Lamiel Toch

In this paper we study the problem of scheduling a collection of workflows, identical or not, on a SOA (Service Oriented Architecture) grid. A workflow (job) is represented by a directed acyclic graph (DAG) with typed tasks. All of the grid hosts are able to process a set of typed tasks with unrelated processing costs and are able to transmit files through communication links for which the communication times are not negligible. The goal of our study is to minimize the maximum completion time (makespan) of the workflows. To solve this problem we propose a genetic approach. The contributions of this paper are both the design of a genetic algorithm taking the communication costs into account and its performance analysis.
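A toy version of such a genetic approach, restricted here to choosing a host for each task of a tiny DAG with a fixed cross-host communication delay, might look like the sketch below. It is an illustrative assumption, not the algorithm evaluated in the paper; the DAG, costs and GA parameters are all invented.

```python
# Toy genetic algorithm for DAG scheduling with communication costs:
# a chromosome maps each task to a host; fitness is the schedule
# makespan, charging a fixed delay COMM whenever a task and one of its
# predecessors run on different hosts. Illustrative sketch only.

import random

TASKS = ["a", "b", "c", "d"]                  # DAG: a, b -> c -> d
PREDS = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
COST  = {"a": [2, 3], "b": [3, 2], "c": [4, 1], "d": [1, 4]}  # per host
COMM  = 2                                     # cross-host transfer cost

def makespan(chrom):
    finish, host_free = {}, [0, 0]
    for t in TASKS:                           # TASKS is in topological order
        h = chrom[t]
        ready = max((finish[p] + (COMM if chrom[p] != h else 0)
                     for p in PREDS[t]), default=0)
        start = max(ready, host_free[h])
        finish[t] = host_free[h] = start + COST[t][h]
    return finish[TASKS[-1]]

def evolve(pop_size=20, gens=50, seed=1):
    rng = random.Random(seed)
    pop = [{t: rng.randrange(2) for t in TASKS} for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)                # elitist selection
        elite = pop[: pop_size // 2]
        # Mutation-only variation keeps the sketch short.
        pop = elite + [{t: (rng.randrange(2) if rng.random() < 0.3 else c[t])
                        for t in TASKS} for c in elite]
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))
```

Even this tiny fitness function shows why communication costs matter: placing `c` and `d` on the same host saves the `COMM` delay but may serialize them on a slow machine, and the GA searches that trade-off space.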

Collaboration


Top co-authors of Jean-Marc Nicod:

Laurent Philippe (Centre national de la recherche scientifique)
Bruno Del-Fabbro (University of Franche-Comté)
Christophe Varnier (University of Franche-Comté)
Nathalie Herr (University of Franche-Comté)
Alexandru Dobrila (University of Franche-Comté)
Anne Benoit (École normale supérieure de Lyon)
David Laiymani (University of Franche-Comté)
Frédéric Desprez (École normale supérieure de Lyon)
Frédéric Lombard (University of Franche-Comté)