Michel Rasquin
University of Colorado Boulder
Publications
Featured research published by Michel Rasquin.
IEEE Symposium on Large Data Analysis and Visualization | 2011
Nathan D. Fabian; Kenneth Moreland; David C. Thompson; Andrew C. Bauer; Pat Marion; Berk Geveci; Michel Rasquin; Kenneth E. Jansen
As high performance computing approaches exascale, CPU capability far outpaces disk write speed, and in situ visualization becomes an essential part of an analyst's workflow. In this paper, we describe the ParaView Coprocessing Library, a framework for in situ visualization and analysis coprocessing. We describe how coprocessing algorithms (building on many from VTK) can be linked and executed directly from within a scientific simulation or other applications that need visualization and analysis. We also describe how the ParaView Coprocessing Library can write out partially processed, compressed, or extracted data readable by a traditional visualization application for interactive post-processing. Finally, we demonstrate the library's scalability in a number of real-world scenarios.
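The coprocessing pattern described above — hooking analysis into the solver's time loop so that only small extracts, rather than the raw field, ever leave the node — can be sketched generically. This is a toy illustration, not the ParaView Coprocessing Library API: the `coprocess` hook, the stride, and the statistics extract are hypothetical stand-ins for Catalyst-style pipelines.

```python
import random
import statistics

def coprocess(step, field, stride=10):
    """Hypothetical in situ hook: every `stride` steps, reduce the
    full field to a small extract instead of writing it to disk."""
    if step % stride != 0:
        return None
    # Summary statistics stand in for real extracts (slices, isosurfaces, ...).
    return {"step": step, "min": min(field), "max": max(field),
            "mean": statistics.fmean(field)}

def run_simulation(nsteps=30, n=1000, seed=0):
    rng = random.Random(seed)
    field = [rng.gauss(0.0, 1.0) for _ in range(n)]
    extracts = []
    for step in range(nsteps):
        # Stand-in for one solver time step.
        field = [u + 0.01 * rng.gauss(0.0, 1.0) for u in field]
        out = coprocess(step, field)
        if out is not None:
            extracts.append(out)  # keep only the tiny extract
    return extracts

extracts = run_simulation()
```

The point of the pattern is the asymmetry in data volume: the solver state is large and stays in memory, while each extract is a few numbers that are cheap to store for interactive post-processing.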
Computing in Science and Engineering | 2014
Michel Rasquin; Cameron W. Smith; Kedar Chitale; E. Seegyoung Seol; Benjamin A. Matthews; Jeffrey L. Martin; Onkar Sahni; Raymond M. Loy; Mark S. Shephard; Kenneth E. Jansen
Massively parallel computation provides an enormous capacity to perform simulations on a timescale that can change the paradigm of how scientists, engineers, and other practitioners use simulations to address discovery and design. This work considers an active flow control application on a realistic and complex wing design that could be leveraged by a scalable, fully implicit, unstructured flow solver and access to high-performance computing resources. The article describes the active flow control application; then summarizes the main features in the implementation of a massively parallel turbulent flow solver, PHASTA; and finally demonstrates the method's strong scalability at extreme scale. Scaling studies were performed with unstructured meshes of 11 and 92 billion elements on the Argonne Leadership Computing Facility's Blue Gene/Q Mira machine with up to 786,432 cores and 3,145,728 MPI processes.
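Strong-scaling studies like the one above are usually summarized as parallel efficiency: under ideal strong scaling, doubling the process count halves the time to solution, so the product p·t stays constant. A minimal helper, with made-up timings (these are not PHASTA measurements):

```python
def strong_scaling_efficiency(t_ref, p_ref, t_p, p):
    """Parallel efficiency of a run on p processes relative to a
    reference run on p_ref processes: ideal strong scaling keeps
    p * t constant, giving an efficiency of 1.0."""
    return (p_ref * t_ref) / (p * t_p)

# Hypothetical timings: perfect scaling from 1,024 to 2,048 cores...
perfect = strong_scaling_efficiency(100.0, 1024, 50.0, 2048)
# ...and sublinear scaling to 4,096 cores (30 s instead of the ideal 25 s).
sublinear = strong_scaling_efficiency(100.0, 1024, 30.0, 4096)
```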
SIAM Journal on Scientific Computing | 2018
Cameron W. Smith; Michel Rasquin; Dan Ibanez; Kenneth E. Jansen; Mark S. Shephard
The scalability of unstructured mesh-based applications depends on partitioning methods that quickly balance the computational work while reducing communication costs. Zhou et al. [SIAM J. Sci. Com...
IEEE International Conference on High Performance Computing, Data, and Analytics | 2016
Utkarsh Ayachit; Andrew C. Bauer; Earl P. N. Duque; Greg Eisenhauer; Nicola J. Ferrier; Junmin Gu; Kenneth E. Jansen; Burlen Loring; Zarija Lukić; Suresh Menon; Dmitriy Morozov; Patrick O'Leary; Reetesh Ranjan; Michel Rasquin; Christopher P. Stone; Venkatram Vishwanath; Gunther H. Weber; Brad Whitlock; Matthew Wolf; K. John Wu; E. Wes Bethel
A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. This paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
arXiv: Fluid Dynamics | 2014
Kedar C. Chitale; Michel Rasquin; Onkar Sahni; Mark S. Shephard; Kenneth E. Jansen
Multi-element wings are popular in the aerospace community due to their high lift performance. Turbulent flow simulations of these configurations require very fine mesh spacing, especially near the walls, making the use of a boundary layer mesh necessary. However, it is difficult to accurately determine the required mesh resolution prior to the simulations. In this paper we use an anisotropic adaptive meshing approach, including adaptive control of elements in the boundary layers, and study its effectiveness for two multi-element wing configurations. The results are compared with experimental data as well as nested refinements to demonstrate the efficiency of adaptivity driven by error indicators, highlighting the superior resolution obtained in the wakes and near the tip region.
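The core idea of indicator-driven adaptivity — refining only where an error indicator flags under-resolved features, rather than refining uniformly — can be illustrated in one dimension. This is a toy sketch, not the paper's anisotropic boundary-layer procedure: the solution-jump criterion and tolerance are invented for illustration.

```python
import math

def refine(mesh, f, tol):
    """One pass of indicator-driven refinement: split any cell whose
    solution jump |f(b) - f(a)| exceeds tol (an invented indicator)."""
    new = [mesh[0]]
    for a, b in zip(mesh, mesh[1:]):
        if abs(f(b) - f(a)) > tol:
            new.append(0.5 * (a + b))  # insert the cell midpoint
        new.append(b)
    return new

f = lambda x: math.tanh(20.0 * x)           # steep internal layer at x = 0
mesh = [i / 10.0 - 0.5 for i in range(11)]  # uniform mesh on [-0.5, 0.5]
for _ in range(4):                          # a few adaptive passes
    mesh = refine(mesh, f, tol=0.2)
```

After a few passes, nearly all the new points cluster around the steep layer at x = 0, while the smooth regions keep their original coarse spacing — the same economy that adaptivity buys in wakes and tip regions.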
arXiv: Fluid Dynamics | 2014
Kedar C. Chitale; Michel Rasquin; Jeffrey D. Martin; Kenneth E. Jansen
This paper presents flow simulation results for the EUROLIFT DLR-F11 multi-element wing configuration, obtained with the highly scalable finite element solver PHASTA. This work was carried out as part of the 2nd AIAA High Lift Prediction Workshop. In-house meshes of increasing density were constructed for the analysis. A solution-adaptive approach was used as an alternative, and its effectiveness was studied by comparing its results with those obtained on the other meshes. Comparisons between the numerical solution obtained with an unsteady RANS turbulence model and available experimental results are provided for verification and discussion. Based on these observations, future directions for adaptive research and for simulations with higher-fidelity turbulence models are outlined.
International Conference on Big Data | 2014
Hong Yi; Michel Rasquin; Igor A. Bolotnov
Large-scale simulations conducted on supercomputers such as leadership-class computing facilities allow researchers to simulate and study complex problems with high fidelity, and thus have become indispensable in diverse areas of science and engineering. These high-fidelity simulations generate vast amounts of data that are becoming more and more difficult to transform into knowledge using traditional visual analysis approaches. For instance, there are tremendous challenges in analyzing big data produced by high-fidelity simulations in order to gain meaningful insight into complex phenomena such as turbulent two-phase flows. The traditional workflow, which consists of conducting simulations on supercomputers and writing enormous amounts of raw simulation data to disk for further post-processing and visualization, is no longer viable due to the prohibitive cost of disk access and the considerable time spent on data transfer. Visual analytics approaches for big data must be researched and employed to address the problem of knowledge discovery from such large-scale simulations. One approach to this issue is to couple a numerical simulation with in-situ visualization so that post-processing and visualization occur while the simulation is running. This in-situ approach minimizes data storage by extracting and visualizing important features of the data directly within the simulation, without saving the raw data to disk. In addition, in-situ visualization allows users to steer the simulation by adjusting input parameters while the simulation is ongoing. In this paper, we present our approach for in-situ visualization of simulation data generated by the massively parallel finite element computational fluid dynamics solver PHASTA, instrumented and linked with ParaView Catalyst. We demonstrate our in-situ visualization and simulation steering capability with a fully resolved turbulent flow through a complex 2×2 reactor subchannel geometry.
In addition, we present results from our in-situ visualization of turbulent flow simulations conducted on up to 32,768 cores of the Cray XK7 “Titan” at Oak Ridge National Laboratory and the IBM BlueGene/Q “Mira” at Argonne National Laboratory, and we examine the overhead of in-situ visualization and its effect on code performance.
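The simulation steering mentioned above amounts, at its simplest, to polling for user updates inside the time loop and applying them in flight. A schematic sketch, assuming a step-indexed control table rather than Catalyst's actual live connection (the `dt` parameter and control format are invented):

```python
def run(nsteps, controls):
    """Schematic steering loop: each step, poll a step-indexed control
    table (a stand-in for a live steering channel) and apply updates."""
    dt, t = 0.1, 0.0
    for step in range(nsteps):
        if step in controls:          # user adjusted a parameter in flight
            dt = controls[step].get("dt", dt)
        t += dt                       # stand-in for one solver time step
    return t

# Double the time step at step 5 while the run is in progress.
t_final = run(10, {5: {"dt": 0.2}})
```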
32nd AIAA Applied Aerodynamics Conference | 2014
Michel Rasquin; Kedar C. Chitale; Mohammed Ali; Kenneth E. Jansen
Simulations of high lift configurations like multi-element wings have been an active area of research in aerodynamics. Following our participation in the 2nd AIAA CFD High Lift Prediction Workshop (HiLiftPW-2), the EUROLIFT DLR-F11 multi-element wing geometry is considered in this work, as it is representative of a wide-body commercial aircraft that features a continuous full span slat and flap in landing setting. The complexity and size of this geometric model present serious challenges for meshing and high-fidelity modeling of these systems. Moreover, full scale DES simulations of such geometries require very fine mesh resolution in key locations of the computational domain (e.g., boundary layers, wake of the flap, tips) and massively parallel resources to solve the Navier-Stokes equations in a timely manner. To achieve these fine resolutions in an efficient manner, an adaptive approach is highly desirable. In this paper, the computational approach used for these DES simulations is a mature finite element flow solver (PHASTA) employed with anisotropic adaptive meshing and partitioning procedures. The Reynolds number is set to Re = 1.35M and an angle of attack of 7° is considered. The initial mesh design for a 7° angle of attack with DES is discussed, along with the associated adapted mesh. Preliminary results are also provided to identify what is resolved on the initial and adapted meshes and what, in future work, will be required to bring closer agreement with the experiments. RANS simulations on the same meshes are also discussed since they serve as an initial condition for the DES.
Nuclear Science and Engineering | 2018
Joseph J. Cambareri; Michel Rasquin; Andre Gouws; Ramesh Balakrishnan; Kenneth E. Jansen; Igor A. Bolotnov
Abstract Absorbing heat from the fuel rod surface, water as coolant can undergo subcooled boiling within a pressurized water reactor (PWR) fuel rod bundle. Because of the buoyancy effect, the vapor bubbles generated will then rise along and interact with the subchannel geometries. Reliable prediction of bubble behavior is of immense importance to ensure safe and stable reactor operation. However, given a complex engineering system like a nuclear reactor, it is very challenging (if not impossible) to conduct high-resolution measurements to study bubbly flows under reactor operation conditions. The lack of a fundamental two-phase-flow database is hindering the development of accurate two-phase-flow models required in more advanced reactor designs. In response to this challenge, first-principles–based numerical simulations are emerging as an attractive alternative to produce a complementary data source along with experiments. Leveraged by the unprecedented computing power offered by state-of-the-art supercomputers, direct numerical simulation (DNS), coupled with interface tracking methods, is becoming a practical tool to investigate some of the most challenging engineering flow problems. In the presented research, turbulent bubbly flow is simulated via DNS in single PWR subchannel geometries with auxiliary structures (e.g., supporting spacer grid and mixing vanes). The geometric effects these structures exert on the bubbly flow are studied with both a conventional time-averaging approach and a novel dynamic bubble tracking method. The new insights obtained will help inform better two-phase models that can contribute to safer and more efficient nuclear reactor systems.
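Dynamic bubble tracking, as mentioned above, requires at minimum associating each bubble in one snapshot with its counterpart in the next. A greatly simplified stand-in is greedy nearest-centroid matching; the paper's actual method operates on interface-tracking data and is more involved, and `max_dist` here is an invented gating parameter.

```python
import math

def track(prev, curr, max_dist=0.5):
    """Greedy nearest-centroid matching of bubbles between two
    consecutive snapshots; max_dist rejects implausible jumps."""
    matches, used = {}, set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue                 # each bubble matched at most once
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

# Two bubbles drift slightly between snapshots (made-up centroids).
prev = [(0.0, 0.0), (1.0, 1.0)]
curr = [(1.05, 1.10), (0.10, 0.0)]
matches = track(prev, curr)
```

Chaining such matches across snapshots yields per-bubble trajectories, from which quantities like rise velocity and residence time near spacer grids can be extracted.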
ERCOFTAC Series | 2015
Xavier Dechamps; Michel Rasquin; Kenneth E. Jansen; Gérard Degrez
The study of duct flow for electrically conducting liquid metal fluids exposed to an externally applied magnetic field is acknowledged to be a good approach for a better understanding of the fundamental properties of magnetohydrodynamic (MHD) turbulence.