Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Frédéric Desprez is active.

Publication


Featured research published by Frédéric Desprez.


IEEE International Conference on High Performance Computing, Data and Analytics | 2006

Diet: A Scalable Toolbox to Build Network Enabled Servers on the Grid

Eddy Caron; Frédéric Desprez

Among existing grid middleware approaches, one simple, powerful, and flexible approach consists of using servers available in different administrative domains through the classical client-server or Remote Procedure Call (RPC) paradigm. Network Enabled Servers implement this model, also called GridRPC. Clients submit computation requests to a scheduler whose goal is to find a server available on the grid. The aim of this paper is to give an overview of a middleware developed by the GRAAL team called DIET (for Distributed Interactive Engineering Tool-box). DIET is a hierarchical set of components used for the development of applications based on computational servers on the grid.
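The GridRPC interaction described above can be sketched in a few lines. This is a hypothetical illustration, not DIET's actual API (which is a C/C++ library): a scheduler locates an available server offering the requested service, here simply the least-loaded one, and forwards the call.

```python
# Minimal sketch of the GridRPC-style client-server model described above.
# All names (Scheduler, Server, submit) are hypothetical illustrations,
# not DIET's real interface.

class Server:
    def __init__(self, name, services, load):
        self.name, self.services, self.load = name, services, load

    def call(self, service, *args):
        # The remote procedure call itself; here just a local dispatch.
        return self.services[service](*args)

class Scheduler:
    """Plays the role of the agent hierarchy: find an available server."""
    def __init__(self, servers):
        self.servers = servers

    def submit(self, service, *args):
        # Pick the least-loaded server that offers the requested service.
        candidates = [s for s in self.servers if service in s.services]
        if not candidates:
            raise LookupError(f"no server offers {service!r}")
        best = min(candidates, key=lambda s: s.load)
        return best.call(service, *args)

def matmul(a, b):
    # Naive matrix multiplication as an example computational service.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# Two servers in different "administrative domains" offering the service.
s1 = Server("dom-a", {"matmul": matmul}, load=0.7)
s2 = Server("dom-b", {"matmul": matmul}, load=0.2)

sched = Scheduler([s1, s2])
result = sched.submit("matmul", [[1, 2]], [[3], [4]])  # routed to dom-b
```

In the real system the scheduling decision is distributed across a hierarchy of agents rather than centralized in a single object, but the client-side contract is the same: submit a request, let the middleware choose the server.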


IEEE International Conference on Cloud Computing Technology and Science | 2010

Forecasting for Grid and Cloud Computing On-Demand Resources Based on Pattern Matching

Eddy Caron; Frédéric Desprez; Adrian Muresan

The Cloud phenomenon brings along the cost-saving benefit of dynamic scaling. As a result, the question of efficient resource scaling arises. Prediction is necessary because the virtual resources that Cloud computing uses have a non-negligible setup time. We propose an approach to the problem of workload prediction based on identifying similar past occurrences of the current short-term workload history. We present in detail the Cloud client resource auto-scaling algorithm that uses this approach to support scaling decisions, as well as experimental results obtained using real-world traces from Cloud and Grid platforms. We also present an overall evaluation of this approach and its potential and usefulness for enabling efficient auto-scaling of Cloud user resources.
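The core idea, matching the current short-term history against similar past occurrences, can be sketched as follows. This is a simplified illustration of the pattern-matching principle, not the paper's exact algorithm: find the past window closest (in squared distance) to the most recent one, then reuse the values that followed it as the forecast.

```python
# Sketch of pattern-matching workload prediction (simplified illustration):
# predict upcoming load by locating the past window most similar to the
# current short-term history and replaying what followed it.

def predict(history, window, horizon):
    """Return `horizon` forecast values, by matching the last `window`
    points of `history` against all earlier windows."""
    recent = history[-window:]
    best_start, best_dist = None, float("inf")
    # Slide over past positions that leave room for a full horizon.
    for start in range(len(history) - window - horizon + 1):
        past = history[start:start + window]
        dist = sum((a - b) ** 2 for a, b in zip(past, recent))
        if dist < best_dist:
            best_start, best_dist = start, dist
    # Reuse what followed the best-matching past occurrence.
    return history[best_start + window: best_start + window + horizon]

# A repetitive but non-periodic load trace: the motif 1,2,3 recurs.
trace = [1, 2, 3, 2, 5, 1, 2, 3, 2, 6, 1, 2, 3]
print(predict(trace, window=3, horizon=1))
```

A production predictor would also normalize the windows and weight recent points more heavily, but the structure, window matching followed by replay, is the same.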


International Conference on Cloud Computing | 2009

Cloud Computing Resource Management through a Grid Middleware: A Case Study with DIET and Eucalyptus

Eddy Caron; Frédéric Desprez; David Loureiro; Adrian Muresan

The cloud phenomenon is quickly growing towards becoming the de facto standard of Internet computing, storage and hosting, both in industry and academia. The large scalability possibilities offered by cloud platforms can be harnessed not only for hosting services and applications but also as a raw on-demand computing resource. This paper proposes the use of a cloud system as a raw computational on-demand resource for a grid middleware. We illustrate a proof of concept by considering the DIET-solve grid middleware and the EUCALYPTUS open-source cloud platform.


European Conference on Parallel Processing | 2004

From Heterogeneous Task Scheduling to Heterogeneous Mixed Parallel Scheduling

Frédéric Suter; Frédéric Desprez; Henri Casanova

Mixed-parallelism, the combination of data- and task-parallelism, is a powerful way of increasing the scalability of entire classes of parallel applications on platforms comprising multiple compute clusters. While multi-cluster platforms are predominantly heterogeneous, previous work on mixed-parallel application scheduling targets only homogeneous platforms. In this paper we develop a method for extending existing scheduling algorithms for task-parallel applications on heterogeneous platforms to the mixed-parallel case.


Journal of Grid Computing | 2006

Simultaneous Scheduling of Replication and Computation for Data-Intensive Applications on the Grid

Frédéric Desprez; Antoine Vernois

Managing large datasets has become one of the major applications of Grids. Life science applications usually manage large databases that should be replicated so that applications can scale. The growing number of users and the easy access to Internet-based applications have put stress on Grid middleware, which must therefore manage data and schedule computation tasks at the same time. These two important operations have to be tightly coupled. This paper presents an algorithm (Scheduling and Replication Algorithm, SRA) that combines data management and scheduling using a steady-state approach. Using a model of the platform, the number and distribution of requests, and the number and size of databases, we define a linear program that satisfies all the constraints at every level of the platform in steady state. The solution of this linear program gives a placement for the databases on the servers as well as, for each kind of job, the server on which it should be executed. Our theoretical results are validated using simulation and logs from a large life science application.
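A steady-state formulation of this kind can be sketched, in generic notation that is not necessarily the paper's exact model, as a linear program coupling job throughput with database placement:

```latex
% Hedged sketch of a steady-state scheduling/replication LP
% (generic notation; not necessarily the paper's exact formulation).
% x_{j,s} : rate of jobs of type j executed on server s
% y_{d,s} : database d replicated on server s (0/1, relaxed to [0,1])
\begin{align*}
\text{maximize}\quad & \sum_{j,s} x_{j,s} \\
\text{s.t.}\quad
  & \sum_{s} x_{j,s} \le \lambda_j
      && \text{(request rate for job type } j\text{)}\\
  & \sum_{j} c_{j}\, x_{j,s} \le C_s
      && \text{(compute capacity of server } s\text{)}\\
  & x_{j,s} \le M\, y_{d(j),s}
      && \text{(a job runs only where its database resides)}\\
  & \sum_{d} \sigma_d\, y_{d,s} \le S_s
      && \text{(storage capacity of server } s\text{)}
\end{align*}
```

The key coupling is the third constraint: it ties the scheduling variables $x$ to the replication variables $y$, which is precisely why the two decisions cannot be optimized independently.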


Grid Computing | 2011

Pattern Matching Based Forecast of Non-periodic Repetitive Behavior for Cloud Clients

Eddy Caron; Frédéric Desprez; Adrian Muresan

The Cloud phenomenon brings along the cost-saving benefit of dynamic scaling. As a result, the question of efficient resource scaling arises. Prediction is necessary because the virtual resources that Cloud computing uses have a non-negligible setup time. We propose an approach to the problem of workload prediction based on identifying similar past occurrences of the current short-term workload history. We present in detail the Cloud client resource auto-scaling algorithm that uses this approach to support scaling decisions, as well as experimental results obtained using real-world Cloud client application traces. We also present an overall evaluation of this approach and its potential and usefulness for enabling efficient auto-scaling of Cloud user resources.


International Symposium on Parallel and Distributed Processing and Applications | 2008

Relaxing Synchronization in a Parallel SystemC Kernel

Philippe Combes; Eddy Caron; Frédéric Desprez; Bastien Chopard; Julien Zory

SystemC has become a very popular standardized language for modeling system-on-chip (SoC) devices. However, due to the ever-increasing complexity of SoC designs, ever-longer simulation times affect SoC exploration potential and time-to-market. To reduce these times, we have developed a parallel SystemC kernel. Because the SystemC semantics require a high level of synchronization, which can dramatically affect the performance gains, we investigate in this paper some ways to reduce the synchronization overheads. We then validate our approaches against an academic design model and a real, industrial application.


Journal of Parallel and Distributed Computing | 2010

On cluster resource allocation for multiple parallel task graphs

Henri Casanova; Frédéric Desprez; Frédéric Suter

Many scientific applications can be structured as parallel task graphs (PTGs), that is, graphs of data-parallel tasks. Adding data parallelism to a task-parallel application provides opportunities for higher performance and scalability, but poses additional scheduling challenges. In this paper, we study the off-line scheduling of multiple PTGs on a single, homogeneous cluster. The objective is to optimize performance without compromising fairness among the PTGs. We consider the range of previously proposed scheduling algorithms applicable to this problem, from both the applied and the theoretical literature, and we propose minor improvements when possible. Our main contribution is an extensive evaluation of these algorithms in simulation, using both synthetic and real-world application configurations, using two different metrics for performance and one metric for fairness. We identify a handful of algorithms that provide good trade-offs when considering all these metrics. The best algorithm overall is one that structures the schedule as a sequence of phases of increasing duration based on a makespan guarantee produced by an approximation algorithm.


Workflows in Support of Large-Scale Science | 2013

Toward fine-grained online task characteristics estimation in scientific workflows

Rafael Ferreira da Silva; Gideon Juve; Ewa Deelman; Tristan Glatard; Frédéric Desprez; Douglas Thain; Benjamín Tovar; Miron Livny

Task characteristic estimates, such as runtime, disk space, and memory consumption, are commonly used by scheduling algorithms and resource provisioning techniques to provide successful and efficient workflow executions. These methods assume that accurate estimates are available, but in production systems it is hard to compute such estimates with good accuracy. In this work, we first profile three real scientific workflows, collecting fine-grained information such as process I/O, runtime, memory usage, and CPU utilization. We then propose a method to automatically characterize workflow task needs based on these profiles. Our method estimates task runtime, disk space, and memory consumption based on the size of the tasks' input data. It looks for correlations between the parameters of a dataset, and if no correlation is found, the dataset is divided into smaller subsets using a clustering technique. Task behavior is estimated from the ratio of the parameter to the input data size if they are correlated, or from the mean value otherwise. However, task dependencies in scientific workflows lead to a chain of estimation errors. To correct such errors, we propose an online estimation process based on the MAPE-K loop, in which task executions are constantly monitored and estimates are updated accordingly. Experimental results show that our online estimation process yields much more accurate predictions than an offline approach in which all task needs are estimated at once.
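The ratio-versus-mean estimation rule described above can be sketched as follows. This is a simplified illustration of the principle, not the paper's implementation: if a task parameter correlates with input size, scale by the mean parameter-to-size ratio; otherwise fall back to the mean observed value.

```python
# Sketch of the estimation rule described above (simplified): use the
# mean ratio parameter/input-size when the two correlate, else the mean.

from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def estimate(input_sizes, values, new_size, threshold=0.8):
    """Estimate a task parameter (e.g. runtime) for input `new_size`
    from past profiling data."""
    if abs(pearson(input_sizes, values)) >= threshold:
        # Correlated: scale by the mean value/input-size ratio.
        ratio = mean(v / s for v, s in zip(values, input_sizes))
        return ratio * new_size
    # Uncorrelated: fall back to the mean observed value.
    return mean(values)

sizes = [10, 20, 40]
runtimes = [1.0, 2.0, 4.0]   # proportional to input size
print(estimate(sizes, runtimes, 30))
```

In the online (MAPE-K) variant, each completed task execution is appended to `sizes`/`runtimes` before the next estimate, so later estimates in the dependency chain are corrected by fresh observations.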


International Parallel and Distributed Processing Symposium | 2003

One-step algorithm for mixed data and task parallel scheduling without data replication

Vincent Boudet; Frédéric Desprez; Frédéric Suter

In this paper we propose an original algorithm for mixed data and task parallel scheduling. The main specificities of this algorithm are that it performs the allocation and scheduling processes simultaneously and that it avoids data replication. The idea is to base the scheduling on an accurate evaluation of each task of the application depending on the processor grid; no assumption is then made about the homogeneity of the execution platform. The complexity of our algorithm is given. The performance achieved by our schedules, in both homogeneous and heterogeneous settings, is compared to data-parallel executions for two applications: complex matrix multiplication and the Strassen decomposition.

Collaboration


Dive into Frédéric Desprez's collaborations.

Top Co-Authors

Frédéric Suter
École normale supérieure de Lyon

Yves Caniou
École normale supérieure de Lyon

Martin Quinson
École normale supérieure de Lyon

Benjamin Depardon
École normale supérieure de Lyon

Raphaël Bolze
École normale supérieure de Lyon