Gil Utard
University of Picardie Jules Verne
Publication
Featured research published by Gil Utard.
Journal of Automated Reasoning | 2005
Bernard Jurkowiak; Chu Min Li; Gil Utard
Because of the inherent NP-completeness of SAT, many SAT problems currently cannot be solved in a reasonable time. Usually, in order to tackle a new class of SAT problems, new ad hoc algorithms must be designed. Another way to solve a new problem is to use a generic solver and employ parallelism to reduce the solving time. In this paper we propose a parallelization scheme for a class of SAT solvers based on the DPLL procedure. The scheme uses a dynamic load-balancing mechanism based on work-stealing techniques to deal with the irregularity of SAT problems. We parallelize Satz, one of the best generic SAT solvers, with our scheme to obtain a parallel solver called PSatz. The first experimental results on random 3-SAT problems and a set of well-known structured problems show the efficiency of PSatz. PSatz is freely available and runs on any network of workstations under Unix/Linux.
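The search tree that a scheme like PSatz parallelizes comes from DPLL's recursive branching. The following minimal sketch (not the Satz code; clause representation and branching heuristic are simplifying assumptions) shows the two recursive calls whose unexplored halves a work-stealing scheduler can hand to idle workers:

```python
# Minimal DPLL sketch. Formulas are lists of clauses; clauses are lists
# of non-zero ints (DIMACS-style literals, -v meaning "v is false").

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                    # clause falsified: conflict
            if any(l in assignment for l in clause):
                continue                       # clause already satisfied
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment                      # all variables assigned: SAT
    var = min(free)                            # naive branching heuristic
    # The two recursive calls below are the subtrees a parallel solver
    # can split between workers.
    return (dpll(clauses, assignment | {var})
            or dpll(clauses, assignment | {-var}))
```

For example, `dpll([[1, 2], [-1, 2], [-2, 3]])` returns a satisfying set of literals, while `dpll([[1], [-1]])` returns `None`.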
Electronic Notes in Discrete Mathematics | 2001
Bernard Jurkowiak; Chu Min Li; Gil Utard
Abstract: We present the parallelization of Satz using work stealing for workload balancing, based on the master/slave communication model. We define a simple way to evaluate the workload of every busy slave. The master then steals the first remaining subtree of the most loaded slave for an idle slave. Special attention is paid to preventing the ping-pong phenomenon. Our approach easily supports fault-tolerant computing and the accumulation of intermediate results over time. Encouraging experimental results are presented. We thank Dominique Lazure for material help and Laure Devendeville for fruitful discussions. This work is partially supported by a grant of the “Pôle de Modélisation de la Région de Picardie”.
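The master's decision can be sketched as follows. This is an illustrative stand-in, not the paper's actual estimator: the 2**-depth weight is an assumption that a subtree rooted near the top of the search tree is exponentially larger, which also makes the "first remaining" (shallowest) subtree the natural one to steal:

```python
# Sketch of master-side work stealing: estimate each busy slave's
# workload from the depths of its unexplored subtrees, then steal the
# first (shallowest) pending subtree of the most loaded slave.

def workload(pending_depths):
    """Estimated remaining work from depths of unexplored subtrees."""
    return sum(2.0 ** -d for d in pending_depths)

def choose_steal(slaves):
    """slaves: dict name -> list of pending-subtree depths (ascending).
    Returns (victim, depth of stolen subtree), or None if all are idle."""
    busy = {s: d for s, d in slaves.items() if d}
    if not busy:
        return None
    victim = max(busy, key=lambda s: workload(busy[s]))
    return victim, busy[victim][0]     # first remaining subtree
```

Stealing from the most loaded slave, rather than an arbitrary one, is one way to limit the ping-pong effect: a subtree stolen from a lightly loaded slave is likely to be exhausted quickly and trigger another steal.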
cluster computing and the grid | 2004
Gil Utard; Antoine Vernois
In this paper we present a quantitative study of data survival in peer-to-peer storage systems. We first recall the two main redundancy mechanisms, replication and erasure codes, which are used by most peer-to-peer storage systems, such as OceanStore, PAST, and CFS, to guarantee data durability. Second, we characterize peer-to-peer systems according to a volatility factor (a peer is free to leave the system at any time) and an availability factor (a peer is not permanently connected to the system). Third, we model the behavior of a system as a Markov chain and analyse the mean time to failure (MTTF) of data according to the volatility and availability factors. We also present the cost of the repair process based on these redundancy schemes to recover failed peers. The conclusion of this study is that when peer availability is low, a simple replication scheme may be more efficient than sophisticated erasure codes.
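A toy version of such a Markov-chain MTTF computation, for replication only, looks like this. The rates `lam` (replica loss) and `mu` (repair), and the assumption of exponential transitions, are modelling simplifications, not the paper's exact parameters:

```python
# A datum kept as r replicas; each replica is lost at rate lam (peer
# departure) and one replica is repaired at rate mu. State i = number of
# live replicas; state 0 (data lost) is absorbing. mttf() solves the
# standard first-passage-time equations by naive Gaussian elimination.

def mttf(r, lam, mu):
    # Unknowns T[1..r], with T[0] = 0. For each state i:
    #   (i*lam + rep) * T[i] = 1 + i*lam * T[i-1] + rep * T[i+1]
    # where rep = mu if i < r (repair possible), else 0.
    n = r
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for i in range(1, r + 1):
        rep = mu if i < r else 0.0
        A[i - 1][i - 1] = i * lam + rep        # total outgoing rate
        if i >= 2:
            A[i - 1][i - 2] = -i * lam         # failure: to state i-1
        if i < r:
            A[i - 1][i] = -rep                 # repair: to state i+1
    # Gaussian elimination with partial pivoting.
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(A[k][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for k in range(c + 1, n):
            f = A[k][c] / A[c][c]
            b[k] -= f * b[c]
            for j in range(c, n):
                A[k][j] -= f * A[c][j]
    T = [0.0] * n
    for c in range(n - 1, -1, -1):
        T[c] = (b[c] - sum(A[c][j] * T[j] for j in range(c + 1, n))) / A[c][c]
    return T[r - 1]                            # start fully replicated
```

Sanity checks: with a single replica and no repair, `mttf(1, lam, 0)` is `1/lam`; adding repair capacity (`mu > 0`) strictly increases the MTTF, which is the lever the paper's availability factor acts on.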
international conference on supercomputing | 2004
Olivier Cozette; Abdou Guermouche; Gil Utard
In this paper, we present a new way to improve the performance of the factorization of large sparse linear systems which cannot fit in memory. Instead of rewriting a large part of the code to implement an out-of-core algorithm with explicit I/O, we modify the paging mechanisms in such a way that I/O is transparent. This approach will be helpful for studying the key points for getting performance on large problems on machines with undersized memory, compared with an explicit out-of-core scheme. The modification is done thanks to the MMUM&MMUSSEL software tool, which allows the management of paging activity at the application level. We designed a first paging policy that is well adapted to the parallel multifrontal solver MUMPS. We present here this study and give our preliminary results.
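The MMUM&MMUSSEL tool itself is not shown here, but the underlying idea of steering the OS paging policy from user space can be sketched with standard `mmap`/`madvise` calls (a much weaker mechanism than the tool described above, and the hints are platform-dependent):

```python
# Sketch: map a file that may be larger than RAM and advise the kernel
# that access is sequential, so pages behind the cursor can be evicted
# early. The in-core-looking loop below never issues explicit I/O.

import mmap, os

def sum_file_bytes(path):
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ) as m:
            # madvise and MADV_SEQUENTIAL exist on Linux with Python 3.8+.
            if hasattr(m, "madvise") and hasattr(mmap, "MADV_SEQUENTIAL"):
                m.madvise(mmap.MADV_SEQUENTIAL)
            return sum(m[i] for i in range(size))
```

An application-level pager can choose far better eviction victims than such generic hints, because it knows the solver's access pattern; that gap is precisely what motivates a MUMPS-specific paging policy.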
ieee international conference on high performance computing data and analytics | 1999
Eddy Caron; Olivier Cozette; Dominique Lazure; Gil Utard
The PaLaDiN (PArallel LArge Data set In Network of workstations) project is concerned with parallel out-of-core applications running on clusters of workstations or PCs. In such architectures, each node has a virtual memory manager, and a first idea is to use this feature to run a "parallel out-of-core" application as a parallel in-core one. The out-of-core part of the problem, i.e. the scheduling of data fetches and data write-backs, is relegated to the operating system.
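On a single node, this idea amounts to mapping the on-disk data into virtual memory and running the unmodified in-core kernel over it. A minimal sketch (file layout and kernel are illustrative assumptions):

```python
# The "out-of-core" vectors are files of little-endian float64 values,
# mapped into memory; the loop is an ordinary in-core dot product, and
# fetching/evicting the pages is left entirely to the OS VM manager.

import mmap, struct

def dot_from_file(path_a, path_b, n):
    """Dot product of two vectors of n float64 values stored in files."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        with mmap.mmap(fa.fileno(), 8 * n, access=mmap.ACCESS_READ) as ma, \
             mmap.mmap(fb.fileno(), 8 * n, access=mmap.ACCESS_READ) as mb:
            acc = 0.0
            for i in range(n):                 # in-core loop; OS pages data in
                a, = struct.unpack_from("<d", ma, 8 * i)
                b, = struct.unpack_from("<d", mb, 8 * i)
                acc += a * b
            return acc
```

The appeal is that no explicit I/O scheduling code is written; the risk, which motivates the project's study, is that a generic LRU-style pager may schedule fetches and write-backs poorly for the application's actual access pattern.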
ieee international conference on high performance computing data and analytics | 2000
Eddy Caron; Dominique Lazure; Gil Utard
In this paper, we present an analytical performance model of the parallel left-right looking out-of-core LU factorization algorithm. We show the accuracy of the performance prediction for a prototype implementation in the ScaLAPACK library. We show that with a correct distribution of the matrix and with an overlap of I/O with computation, we obtain performance similar to that of the in-core algorithm. To get such performance, the size of the physical main memory only needs to be proportional to the product of the matrix order (not the matrix size) and the ratio of the I/O bandwidth to the computation rate: there is no need for a large main memory to factorize huge matrices!
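The scaling claim can be made concrete with back-of-the-envelope arithmetic. The constant factor is omitted and the numbers below are illustrative assumptions, not figures from the paper:

```python
# Memory bound sketch: to hide I/O behind computation, main memory only
# needs to grow like n * (io_bw / flop_rate), i.e. with the matrix
# order n, not with the n*n matrix size.

def min_memory_bytes(n, io_bw, flop_rate, word=8):
    """Memory proportional to matrix order times the I/O/compute ratio.
    io_bw in bytes/s, flop_rate in flop/s, word = bytes per element."""
    return n * (io_bw / flop_rate) * word

# For n = 100_000 with 100 MB/s of I/O and 1 Gflop/s of compute, the
# bound is tiny compared with the n*n*8 = 80 TB the full matrix needs.
```

Because the bound is linear in `n` while the matrix is quadratic, the gap between "memory needed" and "matrix size" widens as problems grow, which is the point of the exclamation above.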
cluster computing and the grid | 2005
Cyril Randriamaro; Olivier Soyez; Gil Utard; Francis Wlazinski
This article presents distributions for data storage in a P2P system. In a peer-to-peer storage system we have to face a continuous stream of peer failures, so to ensure data durability, data are usually disseminated using a redundant dispersal scheme, and a dynamic reconstruction process is used to rebuild lost data. Maintaining data integrity generates substantial communication traffic, so it is important to reduce the impact of this reconstruction process on peers. To minimize end-user traffic due to the reconstruction process, the distribution must take into account a new measure: the maximum disturbance cost of a peer. To begin with, we define a static distribution scheme, based on prime number theory, which minimizes this reconstruction cost. We compare this distribution with the random distribution, the one most used in data distribution.
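The disturbance measure itself is easy to compute for any placement. The sketch below (the prime-number construction is not reproduced; `random_placement` is only the baseline the paper compares against) bounds disturbance by the number of blocks any two peers share, since when a peer fails, each peer sharing a block with it must send a fragment for reconstruction:

```python
import itertools, random

def max_disturbance(placements):
    """placements: one set of fragment-holding peers per data block.
    Returns the worst-case number of blocks two peers have in common,
    i.e. how many reconstructions one peer joins when another fails."""
    shared = {}
    for holders in placements:
        for p, q in itertools.combinations(sorted(holders), 2):
            shared[(p, q)] = shared.get((p, q), 0) + 1
    return max(shared.values()) if shared else 0

def random_placement(n_blocks, n_peers, s, seed=0):
    """Baseline: each block's s fragments go to s random distinct peers."""
    rng = random.Random(seed)
    return [set(rng.sample(range(n_peers), s)) for _ in range(n_blocks)]
```

A static scheme that spreads block co-occurrence evenly across peer pairs keeps this maximum near its average, whereas random placement can concentrate shared blocks on unlucky pairs of peers.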
european conference on parallel processing | 2001
Jonathan Ilroy; Cyrille Randriamaro; Gil Utard
Since the definition of MPI-IO, a standard interface for parallel I/O, some implementations have become available for clusters of workstations. In this paper we focus on the ROMIO implementation (from Argonne National Laboratory), running on PVFS. PVFS [5] is a Parallel Virtual File System developed at Clemson University. This file system uses the local file systems of the I/O nodes in a cluster to store data on disks. Data is striped among the disks with a stripe parameter. The ROMIO implementation is not aware of this particular data distribution of PVFS. We show how to improve the performance of collective MPI-IO operations on such a parallel and distributed file system: the optimization avoids the data redistribution induced by the PVFS file system. We show performance results on typical file access schemes found in data-parallel applications, and compare them with the performance of the original PVFS port.
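A distribution-aware collective I/O layer needs to know where each file byte physically lives. For a simple round-robin striping of the kind PVFS uses, the mapping can be sketched as (stripe size and node count below are illustrative, not PVFS defaults asserted by this page):

```python
# Map a logical file offset to the I/O node holding it and the offset
# inside that node's local file, assuming round-robin striping.

def locate(offset, stripe_size, n_io_nodes):
    stripe = offset // stripe_size             # global stripe index
    node = stripe % n_io_nodes                 # stripes rotate over nodes
    local = (stripe // n_io_nodes) * stripe_size + offset % stripe_size
    return node, local
```

With 64 KiB stripes over 4 nodes, `locate(65536, 65536, 4)` gives `(1, 0)`: the second stripe sits at the start of node 1's local file. By partitioning a collective request along these boundaries, each process can be sent exactly the data its target node holds, avoiding the redistribution step.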
cluster computing and the grid | 2003
Olivier Cozette; Cyril Randriamaro; Gil Utard
Grand challenge applications have to process large amounts of data and therefore require high-performance I/O systems. Cluster computing is a good alternative to proprietary systems for building cost-effective I/O-intensive platforms: some cluster architectures have won sorting benchmarks (MinuteSort, Datamation)! Recent advances in I/O component technologies (disk, controller, and network) let us expect higher I/O performance for data-intensive applications on clusters. The counterpart of this evolution is that much stress is put on the different buses (memory, I/O) of each node, which cannot be scaled. In this paper we investigate a strategy we call READ² (Remote Efficient Access to Distant Device) to reduce this stress. With READ², any cluster node directly accesses remote disks: the remote processor and the remote memory are removed from the control and data paths, so inputs/outputs don't interfere with host processor and host memory activity. With the READ² strategy, a cluster can be considered a shared-disk architecture instead of a shared-nothing one. This paper describes an implementation of READ² on Myrinet networks. First experimental results show I/O performance improvements.
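The bus-stress argument can be illustrated with a toy bandwidth model. This is a deliberate simplification (no caches, no control traffic, buses counted as single shared links), not a model from the paper:

```python
# Toy model of per-node remote-read bandwidth. On the classic path the
# data crosses the host's buses twice (disk -> RAM, then RAM -> NIC);
# on the direct disk-to-NIC path it crosses the I/O bus once and never
# touches host memory.

def peak_io_rate(disk_bw, nic_bw, mem_bus_bw, io_bus_bw, direct=False):
    if direct:                  # direct path: disk -> NIC over the I/O bus
        return min(disk_bw, nic_bw, io_bus_bw)
    # classic path: each byte consumes bus bandwidth twice
    return min(disk_bw, nic_bw, mem_bus_bw / 2, io_bus_bw / 2)
```

For example, with an 80 MB/s disk, 125 MB/s NIC, 400 MB/s memory bus, and 133 MB/s I/O bus (illustrative figures), the classic path is capped at 66.5 MB/s by the halved I/O bus, while the direct path reaches the full disk rate, and the host memory bus stays free for the application.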
ieee international conference on high performance computing data and analytics | 1998
Alexis Agahi; Robert D. Russell; Gil Utard
For many I/O-intensive problems, an effective file system has to respect hard application-specific constraints. This is why it is reasonable to assume that each program requiring high-performance I/O needs a dedicated file system tailored to its own specifications. In this paper we present the design of a high-performance file system built from modules. This system is aimed at matching the needs of a large class of applications.