Publications


Featured research published by Yvan Fournier.


2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV) | 2013

In-Situ visualization in fluid mechanics using Catalyst: A case study for Code Saturne

Benjamin Lorendeau; Yvan Fournier; Alejandro Ribes

Numerical simulations on supercomputers produce increasingly large volumes of data to be visualized. In this context, Catalyst is a prototype in-situ visualization library developed by Kitware to help reduce the data post-treatment overhead. Code Saturne, for its part, is a Computational Fluid Dynamics code used at Électricité de France (EDF), one of the largest electricity producers in Europe, for its large-scale numerical simulations. In this article we present a case study in which Catalyst is integrated into Code Saturne. We evaluate the feasibility and performance of this integration by running two test cases on one of our corporate supercomputers.
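
The coupling described above follows the usual Catalyst adaptor structure: initialize a coprocessor with a ParaView-exported Python pipeline, offer the solver's mesh and fields to Catalyst at each time step, and finalize at shutdown. The sketch below illustrates this with the classic ParaView Catalyst C++ API; the adaptor_* function names and the overall wiring are hypothetical stand-ins, not Code Saturne's actual coupling code.

    // Minimal Catalyst adaptor sketch (classic ParaView Catalyst C++ API).
    // Function names and wiring are illustrative placeholders.
    #include <vtkCPDataDescription.h>
    #include <vtkCPInputDataDescription.h>
    #include <vtkCPProcessor.h>
    #include <vtkCPPythonScriptPipeline.h>
    #include <vtkNew.h>
    #include <vtkUnstructuredGrid.h>

    static vtkCPProcessor* g_processor = nullptr;

    // Called once at solver start-up: create the coprocessor and attach a
    // ParaView-generated Python pipeline script.
    void adaptor_initialize(const char* script_path)
    {
      g_processor = vtkCPProcessor::New();
      g_processor->Initialize();
      vtkNew<vtkCPPythonScriptPipeline> pipeline;
      pipeline->Initialize(script_path);
      g_processor->AddPipeline(pipeline.GetPointer());
    }

    // Called at the end of each time step: hand the current mesh and fields
    // to Catalyst, but only if the pipeline requested this step.
    void adaptor_coprocess(double time, int time_step, vtkUnstructuredGrid* grid)
    {
      vtkNew<vtkCPDataDescription> desc;
      desc->AddInput("input");
      desc->SetTimeData(time, time_step);
      if (g_processor->RequestDataDescription(desc.GetPointer()) != 0)
      {
        desc->GetInputDescriptionByName("input")->SetGrid(grid);
        g_processor->CoProcess(desc.GetPointer());
      }
    }

    // Called once at solver shutdown.
    void adaptor_finalize()
    {
      if (g_processor != nullptr)
      {
        g_processor->Finalize();
        g_processor->Delete();
        g_processor = nullptr;
      }
    }

The key performance property is that RequestDataDescription lets the solver skip all visualization work on time steps the pipeline does not ask for.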


Archive | 2010

Towards Petascale Computing with Parallel CFD codes

Andrew G. Sunderland; M. Ashworth; Charles Moulinec; N. Li; Juan Uribe; Yvan Fournier

Many world-leading high-end computing (HEC) facilities now offer over 100 Teraflop/s of performance, and several initiatives have begun to look forward to Petascale computing (10^15 flop/s). Los Alamos National Laboratory and Oak Ridge National Laboratory (ORNL) already have Petascale systems, which lead the current (Nov 2008) TOP500 list [1]. Computing at the Petascale raises a number of significant challenges for parallel computational fluid dynamics codes. Most significantly, further improvements to the performance of individual processors will be limited, and Petascale systems are therefore likely to contain 100,000+ processors. A critical aspect of utilising high Terascale and Petascale resources is thus the scalability of the underlying numerical methods, both the scaling of execution time with the number of processors and the scaling of time with problem size. In this paper we analyse the performance of several CFD codes for a range of datasets on some of the latest high performance computing architectures. This includes Direct Numerical Simulations (DNS) via the SBLI [2] and SENGA2 [3] codes, and Large Eddy Simulations (LES) using both STREAMS LES [4] and the general purpose open-source CFD code Code_Saturne [5].
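
Scalability claims of this kind are conventionally quantified as speedup and parallel efficiency relative to a baseline run. The toy program below shows that bookkeeping; the timings in it are invented placeholders, not measurements from this study.

    // Strong-scaling summary: speedup and efficiency vs. a baseline run.
    // The timings are invented placeholders, not data from the paper.
    #include <cstdio>
    #include <utility>
    #include <vector>

    int main()
    {
      // (processor count, wall-clock seconds) at a fixed problem size
      std::vector<std::pair<int, double>> runs = {
          {1024, 800.0}, {2048, 420.0}, {4096, 230.0}, {8192, 140.0}};

      const int p0 = runs.front().first;
      const double t0 = runs.front().second;
      for (const auto& [p, t] : runs)
      {
        const double speedup = t0 / t;              // relative to baseline
        const double efficiency = speedup * p0 / p; // 1.0 = ideal scaling
        std::printf("%6d procs: speedup %5.2f, efficiency %3.0f%%\n",
                    p, speedup, 100.0 * efficiency);
      }
      return 0;
    }

Weak scaling is assessed analogously, holding the work per processor (rather than the total problem size) constant.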


Symposium on Computer Architecture and High Performance Computing | 2010

Accelerating Computational Fluid Dynamics on the IBM Blue Gene/P Supercomputer

Pascal Vezolle; Jerry Heyman; Bruce D. D'Amora; Gordon W. Braudaway; Karen A. Magerlein; John Harold Magerlein; Yvan Fournier

Computational Fluid Dynamics (CFD) is an increasingly important application domain for computational scientists. In this paper, we propose and analyze optimizations necessary to run CFD simulations consisting of multi-billion-cell mesh models on large processor systems. Our investigation leverages the general industrial Navier-Stokes open-source CFD application, Code_Saturne, developed by Électricité de France (EDF). Our work considers emerging processor features such as many-core, Simultaneous Multi-Threading (SMT), Single Instruction Multiple Data (SIMD), Transactional Memory, and Thread Level Speculation. Initially, we have targeted per-node performance improvements by restructuring the code and data layouts to make optimal use of multiple threads. We present a general loop transformation that enables the compiler to generate OpenMP threads effectively with minimal impact on the overall code structure. A renumbering scheme for mesh faces is proposed to enhance thread-level parallelism and generally improve data locality. Performance results on an IBM Blue Gene/P supercomputer and an Intel Xeon Westmere cluster are included.
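
The face-renumbering scheme mentioned in this abstract addresses a classic hazard: a finite-volume face loop scatters updates into the two adjacent cells, so naively threading it races on the cell arrays. Grouping faces so that no two faces in a group share a cell makes each group's loop safe for OpenMP. The sketch below illustrates the idea with hypothetical data structures; Code_Saturne's actual layout and loop transformation differ.

    // Face loops grouped so that no two faces in a group touch a common
    // cell; each group can then be threaded without write conflicts.
    // The data layout is a hypothetical illustration.
    #include <cstddef>
    #include <vector>

    struct FaceGroups {
      std::vector<int> face_cell0, face_cell1; // two cells adjacent to each face
      std::vector<int> group_start; // group g = faces [group_start[g], group_start[g+1])
    };

    // Accumulate per-face fluxes into adjacent cells (cell-centered update),
    // one independent group at a time.
    void add_face_fluxes(const FaceGroups& fg,
                         const std::vector<double>& face_flux,
                         std::vector<double>& cell_rhs)
    {
      for (std::size_t g = 0; g + 1 < fg.group_start.size(); ++g)
      {
        const int begin = fg.group_start[g];
        const int end = fg.group_start[g + 1];
        // Safe: within a group, faces never share a cell, so the writes
        // to cell_rhs cannot collide across threads.
        #pragma omp parallel for
        for (int f = begin; f < end; ++f)
        {
          cell_rhs[fg.face_cell0[f]] += face_flux[f];
          cell_rhs[fg.face_cell1[f]] -= face_flux[f];
        }
      }
    }

Renumbering faces so that each group is contiguous, as assumed here, also improves data locality, which is the second benefit the abstract cites.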


Proceedings of the 2nd Workshop on In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization | 2016

In situ statistical analysis for parametric studies

Théophile Terraz; Bruno Raffin; Alejandro Ribes; Yvan Fournier

In situ processing proposes to reduce storage needs and I/O traffic by processing the results of parallel simulations as soon as they are available in the memory of the compute processes. We focus here on computing in situ statistics on the results of N simulations from a parametric study. The classical approach consists of running various instances of the same simulation with different values of the input parameters; results are then saved to disk and statistics are computed post mortem, leading to very I/O-intensive applications. Our solution is Melissa, an in situ library running on staging nodes as a parallel server. On starting, simulations connect to Melissa and send the results of each time step as soon as they are available. Melissa implements iterative versions of classical statistical operations, enabling results to be updated as soon as a new time step from a simulation arrives. Once all statistics are updated, the time step can be discarded. We also discuss two different approaches for scheduling simulation runs: the jobs-in-job and the multi-jobs approaches. Experiments run instances of the open-source Computational Fluid Dynamics solver Code_Saturne. They confirm that our approach avoids storing simulation results to disk or in memory.
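
The iterative statistics at the heart of this design can be implemented with one-pass update formulas, so a time step can be folded into the running result and then discarded. Below is a minimal sketch using the standard Welford-style updates for mean and variance; Melissa's actual implementation (parallel, field-wise, and covering more statistics) is more involved.

    // One-pass (iterative) mean and variance: each incoming value updates
    // the running statistics and can then be discarded. Minimal sketch,
    // not Melissa's API.
    #include <cmath>
    #include <cstdio>

    struct RunningStats {
      long long n = 0;
      double mean = 0.0;
      double m2 = 0.0; // sum of squared deviations from the current mean

      void update(double x) // Welford's online update
      {
        ++n;
        const double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
      }
      double variance() const { return n > 1 ? m2 / (n - 1) : 0.0; }
    };

    int main()
    {
      RunningStats s;
      for (double x : {1.0, 4.0, 2.5, 3.5}) // stand-ins for per-run values
        s.update(x);
      std::printf("mean %.3f, std dev %.3f\n", s.mean, std::sqrt(s.variance()));
      return 0;
    }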


Topological and Statistical Methods for Complex Data, Tackling Large-Scale, High-Dimensional, and Multivariate Data Spaces | 2015

In-Situ Visualization in Computational Fluid Dynamics Using Open-Source Tools: Integration of Catalyst into Code_Saturne

Alejandro Ribes; Benjamin Lorendeau; Julien Jomier; Yvan Fournier

The volume of data produced by numerical simulations performed on high performance computers is becoming increasingly large. The visualization of these large post-generated volumes of data is currently a bottleneck for the realization of engineering and physics studies in industrial environments. In this context, Catalyst is a prototype in-situ visualization library developed by Kitware to help reduce the data post-treatment overhead. Additionally, Code_Saturne is a Computational Fluid Dynamics code developed by EDF, one of the largest electricity producers in Europe, for its large scale simulations. Both Catalyst and Code_Saturne are open-source software. In this chapter we present a case study where Catalyst is coupled with Code_Saturne. We evaluate the feasibility and performance of this integration by running several use cases in one of our corporate supercomputers.


Concurrency and Computation: Practice and Experience | 2013

Multiple threads and parallel challenges for large simulations to accelerate a general Navier–Stokes CFD code on massively parallel systems

Yvan Fournier; Jérôme Bonelle; Pascal Vezolle; Jerry Heyman; Bruce D. D'Amora; Karen A. Magerlein; John Harold Magerlein; Gordon W. Braudaway; Charles Moulinec; Andrew G. Sunderland

Computational fluid dynamics is an increasingly important application domain for computational scientists. In this paper, we propose and analyze optimizations necessary to run CFD simulations consisting of multibillion-cell mesh models on large processor systems. Our investigation leverages the general industrial Navier–Stokes CFD application, Code_Saturne, developed by Électricité de France for incompressible and nearly compressible flows. We outline the main bottlenecks and challenges for massively parallel systems and emerging processor features such as many-core, transactional memory, and thread-level speculation. We also present an approach based on an octree search algorithm to facilitate the joining of mesh parts and to build complex larger unstructured meshes of several billion grid cells. We describe two parallel strategies for an algebraic multigrid solver, and we detail how to introduce new levels of parallelism, based on compiler directives with OpenMP, transactional memory, and thread-level speculation, for a finite-volume cell-centered formulation and face-based loops. A renumbering scheme for mesh faces is proposed to enhance thread-level parallelism.
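
The octree search mentioned here is a standard way to match coincident vertices when joining mesh parts: bucket vertex coordinates spatially so that the candidate matches for any point are confined to a small leaf. The toy sketch below illustrates the serial core of such a structure; all names are hypothetical, and the paper's distributed-memory version is considerably more involved.

    // Toy octree for bucketing vertex coordinates, in the spirit of the
    // mesh-joining search mentioned above. Illustrative sketch only.
    #include <array>
    #include <memory>
    #include <vector>

    struct Box { std::array<double, 3> lo, hi; };

    struct OctreeNode {
      Box box{};
      std::vector<int> points;                         // point indices (leaves only)
      std::array<std::unique_ptr<OctreeNode>, 8> kids;
      bool is_leaf() const { return kids[0] == nullptr; }
    };

    // Which octant of box b does point p fall into?
    static int octant(const Box& b, const std::array<double, 3>& p)
    {
      int idx = 0;
      for (int d = 0; d < 3; ++d)
        if (p[d] > 0.5 * (b.lo[d] + b.hi[d])) idx |= 1 << d;
      return idx;
    }

    // Bounding box of octant idx of box b.
    static Box sub_box(const Box& b, int idx)
    {
      Box c = b;
      for (int d = 0; d < 3; ++d) {
        const double mid = 0.5 * (b.lo[d] + b.hi[d]);
        if (idx & (1 << d)) c.lo[d] = mid; else c.hi[d] = mid;
      }
      return c;
    }

    // Insert point index i; split leaves that grow past kMaxPerLeaf so the
    // candidates for any query point stay confined to one small leaf.
    void insert(OctreeNode& node, const std::vector<std::array<double, 3>>& xyz,
                int i, int depth = 0)
    {
      static const int kMaxPerLeaf = 8, kMaxDepth = 16;
      if (!node.is_leaf()) {
        insert(*node.kids[octant(node.box, xyz[i])], xyz, i, depth + 1);
        return;
      }
      node.points.push_back(i);
      if ((int)node.points.size() <= kMaxPerLeaf || depth >= kMaxDepth)
        return;
      for (int k = 0; k < 8; ++k) {                    // split the full leaf
        node.kids[k] = std::make_unique<OctreeNode>();
        node.kids[k]->box = sub_box(node.box, k);
      }
      for (int j : node.points)
        insert(*node.kids[octant(node.box, xyz[j])], xyz, j, depth + 1);
      node.points.clear();
    }

A query for vertices coincident with a given point then descends the same octant path and inspects only the few points stored in (and near) the target leaf.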


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

In-situ Visualization for Computation Workflows

Alejandro Ribes; Ovidiu Mircescu; Anthony Geay; Yvan Fournier

The open-source numerical simulation platform SALOME provides a set of services to create simulation workflows that connect different computation units. These computation units can be different solvers that communicate to create a complex multi-physics simulation. The SALOME platform can execute such a workflow on a distributed network of computers or on a supercomputer. This article presents the integration of in-situ visualization using Catalyst into the computation workflows module of the SALOME platform. This integration allows complex simulations to easily use in-situ visualization and requires no development efforts.


ICNAAM 2010: International Conference of Numerical Analysis and Applied Mathematics 2010 | 2010

Renumbering Methods to Unleash Multi‐Threaded Approaches for a General Navier‐Stokes Implementation

Pascal Vezolle; Yvan Fournier; Nicolas Tallet; Jerrold Heymans; Bruce D. D'Amora

Our investigation leverages the general industrial Navier-Stokes open-source Computational Fluid Dynamics (CFD) application, Code_Saturne, developed by Électricité de France (EDF). We address how to take advantage of emerging processor features such as many-core, Simultaneous Multi-Threading (SMT), and Thread Level Speculation (TLS) through a mixed MPI/multithreading approach. We focus here on per-node performance improvements and present the constraints of a multithreaded implementation for solving the general 3D Navier-Stokes equations using a finite volume discretization into polyhedral cells. We describe a simple and efficient mesh numbering scheme that allows us to introduce OpenMP and Thread Level Speculation implementations with minimal impact to the overall code structure.


ASME 2007 Pressure Vessels and Piping Conference | 2007

Methodology of Fluid Flow Evaluation in the Lower Core and in Fuel Assembly Legs of a PWR With Code_Saturne

Christelle Le Maître-Vurpillot; Yvan Fournier

In order to better understand the stresses to which fuel rods are subjected, we need to improve our knowledge of the fluid flow inside the core and the fuel assembly. We are particularly interested in the first spacer grid region, as fuel rod fretting has sometimes been observed there. Previous experiments on EDF's mock-up (a fuel assembly section subjected to fluid flow, with two spacer grids and two mixing grids) have shown that rotating the mixing grids can lead to different vibration levels with some fuel assembly types. This seems to confirm that the influence of fluid flow is of primary importance for fuel rod vibration (and thus fretting). A series of calculations is thus run with our incompressible Navier-Stokes solver, Code_Saturne, using a classical RANS turbulence model. For practical reasons, we restrict ourselves to a scale of a few assemblies. At this scale, most of the features of the fuel rods, nozzles, and guide tubes are represented, though the geometry of the spacer grids is still much simplified, and details such as debris-trapping grids are ignored. We have analysed the axial and transverse velocities for configurations with different fuel assembly types and calculated an approximation of the forces on individual fuel rods. Local-scale results are mainly qualitative, but they already give us a better understanding of the effect of nozzle shape or heterogeneous fuel assemblies on the fluid flow. The nozzle geometry (and fuel rod cap positions) have a major influence on the levels of transverse velocity attained. In defining these calculations, we have run numerous sensitivity verifications and built a solid methodology. To better validate local-scale calculations, EDF R&D has built a small lower fuel assembly mock-up, "BORA 3×7" (3×3 assemblies with 7×7 rods each), which is being used to obtain velocity data. The calculations described in this paper are not refined enough to directly correlate fluid flow and fuel cladding fretting in a quantitative manner, but they are a step in this direction and may already improve our understanding of the local loads and spatial load variations a fuel assembly is subjected to.
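
As a purely illustrative aside on how such per-rod force approximations can be assembled from a RANS solution: one common approach is to integrate pressure over the wall faces tagged as belonging to each rod, as sketched below with hypothetical data structures (the study's actual method is not detailed in the abstract, and viscous shear is omitted here).

    // Pressure contribution to the force on each rod: sum p * A * n over
    // the boundary faces of that rod. Names are hypothetical, not
    // Code_Saturne's data structures.
    #include <array>
    #include <cstddef>
    #include <vector>

    struct WallFace {
      int rod_id;                   // which fuel rod this face belongs to
      double area;                  // face area
      std::array<double, 3> normal; // unit normal, out of the rod, into the fluid
    };

    std::vector<std::array<double, 3>>
    rod_pressure_forces(const std::vector<WallFace>& faces,
                        const std::vector<double>& face_pressure, int n_rods)
    {
      // value-initialized: every force component starts at zero
      std::vector<std::array<double, 3>> force(n_rods);
      for (std::size_t f = 0; f < faces.size(); ++f)
        for (int d = 0; d < 3; ++d)
          // pressure pushes against the surface, i.e. along -normal
          force[faces[f].rod_id][d] -=
              face_pressure[f] * faces[f].area * faces[f].normal[d];
      return force;
    }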


Computers & Fluids | 2011

Optimizing Code_Saturne computations on Petascale systems

Yvan Fournier; Jérôme Bonelle; Charles Moulinec; Z. Shang; Andrew G. Sunderland; Juan Uribe
