Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephane Ethier is active.

Publication


Featured research published by Stephane Ethier.


Physics of Plasmas | 2006

Gyro-kinetic simulation of global turbulent transport properties in tokamak experiments

W.X. Wang; Zhihong Lin; W. M. Tang; W. W. Lee; Stephane Ethier; Jerome L. V. Lewandowski; G. Rewoldt; T. S. Hahm; J. Manickam

A general geometry gyro-kinetic model for particle simulation of plasma turbulence in tokamak experiments is described. It incorporates the comprehensive influence of noncircular cross section, realistic plasma profiles, plasma rotation, neoclassical (equilibrium) electric fields, and Coulomb collisions. An interesting result of global turbulence development in a shaped tokamak plasma is presented with regard to nonlinear turbulence spreading into the linearly stable region. The mutual interaction between turbulence and zonal flows in collisionless plasmas is studied with a focus on identifying possible nonlinear saturation mechanisms for zonal flows. A bursting temporal behavior with a period longer than the geodesic acoustic oscillation period is observed even in a collisionless system. Our simulation results suggest that the zonal flows can drive turbulence. However, this process is too weak to be an effective zonal flow saturation mechanism.


international conference on parallel processing | 2011

Compressing the incompressible with ISABELA: in-situ reduction of spatio-temporal data

Sriram Lakshminarasimhan; Neil Shah; Stephane Ethier; Scott Klasky; Robert Latham; Robert B. Ross; Nagiza F. Samatova

Modern large-scale scientific simulations running on HPC systems generate data on the order of terabytes during a single run. To lessen the I/O load during a simulation run, scientists are forced to capture data infrequently, thereby making data collection an inherently lossy process. Yet, lossless compression techniques are hardly suitable for scientific data due to its inherently random nature; for the applications used here, they offer less than a 10% compression rate. They also impose significant overhead during decompression, making them unsuitable for data analysis and visualization that require repeated data access. To address this problem, we propose an effective method for In-situ Sort-And-B-spline Error-bounded Lossy Abatement (ISABELA) of scientific data that is widely regarded as effectively incompressible. With ISABELA, we apply a preconditioner to seemingly random and noisy data along spatial resolution to achieve an accurate fitting model that guarantees a ≥ 0.99 correlation with the original data. We further take advantage of temporal patterns in scientific data to compress data by ≈ 85%, while introducing only a negligible runtime overhead on simulations. ISABELA significantly outperforms existing lossy compression methods, such as wavelet compression. Moreover, besides being a communication-free and scalable compression technique, ISABELA is an inherently local decompression method: it does not need to decode the entire dataset, which makes it attractive for random access.
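
To make the sort-then-fit idea concrete, here is a minimal Python sketch of one encode/decode window in the spirit of ISABELA. The window size, knot count, and use of SciPy's least-squares B-spline are illustrative assumptions, not the authors' implementation (which additionally enforces per-point error bounds).

```python
# Minimal sketch of an ISABELA-style window encode/decode (illustrative only;
# window size, knot count, and the SciPy spline API are assumptions here).
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def encode_window(values, num_knots=30):
    """Sort the window so it becomes monotonic, then fit a cubic B-spline
    to the sorted curve; keep the spline and the sort permutation."""
    order = np.argsort(values)                          # index map to undo the sort
    x = np.linspace(0.0, 1.0, values.size)
    knots = np.linspace(0.0, 1.0, num_knots + 2)[1:-1]  # interior knots only
    spline = LSQUnivariateSpline(x, values[order], knots, k=3)
    return spline, order

def decode_window(spline, order):
    """Evaluate the spline and scatter values back to their original positions."""
    x = np.linspace(0.0, 1.0, order.size)
    out = np.empty(order.size)
    out[order] = spline(x)                               # undo the sort
    return out

# A window of noisy values reduces to ~num_knots spline coefficients plus the index map.
data = np.random.default_rng(0).normal(size=1024)
spline, order = encode_window(data)
print("correlation:", np.corrcoef(data, decode_window(spline, order))[0, 1])
```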


conference on high performance computing (supercomputing) | 2004

Scientific Computations on Modern Parallel Vector Systems

Leonid Oliker; Andrew Canning; Jonathan Carter; John Shalf; Stephane Ethier

Computational scientists have seen a frustrating trend of stagnating application performance despite dramatic increases in the claimed peak capability of high performance computing systems. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. Recently, two innovative parallel-vector architectures have become operational: the Japanese Earth Simulator (ES) and the Cray X1. In order to quantify what these modern vector capabilities entail for the scientists who rely on modeling and simulation, it is critical to evaluate this architectural paradigm in the context of demanding computational algorithms. Our evaluation study examines four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare the performance of the vector-based ES and X1 with that of leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Results demonstrate that the vector systems achieve excellent performance on our application suite - the highest of any architecture tested to date. However, vectorization of a particle-in-cell code highlights the potential difficulty of expressing irregularly structured algorithms as data-parallel programs.
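
The particle-in-cell difficulty mentioned above comes largely from the scatter step of charge deposition: several particles can land in the same grid cell, so the update cannot be written as a straightforward data-parallel assignment. The toy NumPy sketch below (not taken from the paper's codes; the particle layout is made up) shows the collision problem and one collision-safe alternative.

```python
# Illustrative sketch of the scatter step in particle-in-cell charge deposition.
# Several particles may deposit into the same grid cell, so a naive "vectorized"
# assignment silently drops contributions.
import numpy as np

ncells = 8
cells = np.array([2, 2, 5, 2, 7])           # cell index of each particle (made-up layout)
charge = np.array([1.0, 0.5, 2.0, 1.5, 1.0])

rho_wrong = np.zeros(ncells)
rho_wrong[cells] += charge                   # buffered update: collisions on cell 2 are lost

rho_right = np.zeros(ncells)
np.add.at(rho_right, cells, charge)          # unbuffered scatter-add handles collisions

print(rho_wrong[2], rho_right[2])            # 1.5 vs 3.0
```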


conference on high performance computing (supercomputing) | 2005

Leading Computational Methods on Scalar and Vector HEC Platforms

Leonid Oliker; Jonathan Carter; Michael F. Wehner; Andrew Canning; Stephane Ethier; Arthur A. Mirin; David Parks; Patrick H. Worley; Shigemune Kitawaki; Yoshinori Tsuda

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: atmospheric modeling (CAM), magnetic fusion (GTC), plasma physics (LBMHD3D), and material science (PARATEC). We compare the performance of three leading commodity-based superscalar platforms, utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors, with modern parallel vector systems: the Cray X1, the Earth Simulator (ES), and the newly released NEC SX-8. Additionally, we examine the performance of CAM on the recently released Cray X1E. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Our work builds on our previous efforts [16, 17] and makes several significant contributions: the first reported vector performance results for CAM simulations utilizing a finite-volume dynamical core on a high-resolution atmospheric grid; a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence, which achieves over 26 Tflop/s on 4800 ES processors; and the largest PARATEC cell-size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.


Journal of Physics: Conference Series | 2005

Gyrokinetic particle-in-cell simulations of plasma microturbulence on advanced computing platforms

Stephane Ethier; William Tang; Zhihong Lin

Since its introduction in the early 1980s, the gyrokinetic particle-in-cell (PIC) method has been very successfully applied to the exploration of many important kinetic stability issues in magnetically confined plasmas. Its self-consistent treatment of charged particles and the associated electromagnetic fluctuations makes this method appropriate for studying enhanced transport driven by plasma turbulence. Advances in algorithms and computer hardware have led to the development of a parallel, global, gyrokinetic code in full toroidal geometry, the gyrokinetic toroidal code (GTC), developed at the Princeton Plasma Physics Laboratory. It has proven to be an invaluable tool for studying key effects of low-frequency microturbulence in fusion plasmas. As a high-performance computing application code, its flexible mixed-mode parallel algorithm has allowed GTC to scale to over a thousand processors, a level routinely used for simulations, and improvements are continuously being made. As the US ramps up its support for the International Thermonuclear Experimental Reactor (ITER), understanding the impact of turbulent transport in burning plasma fusion devices is of utmost importance. Accordingly, the GTC code is at the forefront of the set of numerical tools being used to assess and predict the performance of ITER on critical issues such as the efficiency of energy confinement in reactors.
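
For orientation, the sketch below shows the generic electrostatic PIC cycle (scatter, field solve, gather, push) on a one-dimensional periodic grid in normalized units. It is a schematic only; GTC's gyrokinetic formulation adds gyro-averaging, delta-f particle weights, and toroidal geometry that are omitted here.

```python
# Schematic of one generic electrostatic particle-in-cell (PIC) step,
# not GTC itself: gyro-averaging, delta-f weights, and toroidal geometry omitted.
import numpy as np

def pic_step(x, v, q_over_m, grid_n, length, dt):
    """Advance particle positions x and velocities v by one time step."""
    dx = length / grid_n
    cell = (x / dx).astype(int) % grid_n

    # 1. Scatter: deposit particle charge onto the grid (nearest grid point).
    rho = np.zeros(grid_n)
    np.add.at(rho, cell, 1.0)
    rho -= rho.mean()                                # neutralizing background

    # 2. Field solve: Poisson equation in Fourier space, then E = -dphi/dx.
    k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2
    E = np.fft.ifft(-1j * k * phi_k).real

    # 3. Gather: interpolate the field back to the particle positions.
    E_particle = E[cell]

    # 4. Push: update velocities and positions, with periodic wrap-around.
    v = v + q_over_m * E_particle * dt
    x = (x + v * dt) % length
    return x, v
```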


conference on high performance computing (supercomputing) | 2003

Grid-Based Parallel Data Streaming Implemented for the Gyrokinetic Toroidal Code

Scott Klasky; Stephane Ethier; Zhihong Lin; K. Martins; Douglas McCune; Ravi Samtaney

We have developed a threaded parallel data streaming approach using Globus to transfer multi-terabyte simulation data from a remote supercomputer to the scientist's home analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent approach compares favorably with writing to local disk and then transferring the data for post-processing. The present approach is conducive to using the grid to pipeline the simulation with post-processing and visualization. We have applied this method to the Gyrokinetic Toroidal Code (GTC), a three-dimensional particle-in-cell code used to study microturbulence in magnetic confinement fusion from first-principles plasma theory.
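
A minimal sketch of the overlap idea follows, assuming a single background sender thread and a placeholder transfer() function standing in for the Globus/GridFTP transport; it is not the authors' implementation, only an illustration of how streaming can hide transfer time behind computation.

```python
# Sketch: the simulation keeps computing while a background thread drains
# completed output buffers over the network. transfer() is a placeholder.
import queue
import threading

def sender(outbox: queue.Queue, transfer):
    """Drain buffers as they appear; transfer() stands in for the grid transport."""
    while True:
        step, buf = outbox.get()
        if buf is None:                  # sentinel: simulation finished
            break
        transfer(step, buf)              # network I/O overlaps with compute
        outbox.task_done()

def run_simulation(n_steps, compute_step, transfer, max_backlog=4):
    outbox = queue.Queue(maxsize=max_backlog)    # bounded: throttles if the network lags
    t = threading.Thread(target=sender, args=(outbox, transfer), daemon=True)
    t.start()
    for step in range(n_steps):
        data = compute_step(step)        # main thread: physics time step
        outbox.put((step, data))         # hand off; returns immediately unless backlog is full
    outbox.put((None, None))
    t.join()
```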


ieee international conference on high performance computing data and analytics | 2011

ISABELA-QA: query-driven analytics with ISABELA-compressed extreme-scale scientific data

Sriram Lakshminarasimhan; John Jenkins; Zhenhuan Gong; Hemanth Kolla; S. Ku; Stephane Ethier; J.H. Chen; Choong-Seock Chang; Scott Klasky; Robert Latham; Robert B. Ross; Nagiza F. Samatova

Efficient analytics of scientific data from extreme-scale simulations is quickly becoming a top priority. The increasing size of simulation output demands a paradigm shift in how analytics is conducted. In this paper, we argue that query-driven analytics over compressed - rather than original, full-size - data is a promising strategy for meeting the challenges of storage- and I/O-bound applications. As a proof of principle, we propose a parallel query processing engine, called ISABELA-QA, that is designed and optimized for knowledge-prior-driven analytical processing of spatio-temporal, multivariate scientific data that is initially compressed, in situ, by our ISABELA technology. With ISABELA-QA, the total data storage requirement is less than 23%-30% of the original data, which is up to eight-fold less than what existing state-of-the-art data management technologies, which must store both the original data and the index, could offer. Since ISABELA-QA operates on the metadata generated by our compression technology, its underlying indexing technology for efficient query processing is lightweight; it requires less than 3% of the original data, unlike existing database indexing approaches that require 30%-300% of the original data. Moreover, ISABELA-QA is specifically optimized to retrieve the actual values, rather than spatial regions, for the variables that satisfy user-specified range queries - a functionality that is critical for high-accuracy data analytics. To the best of our knowledge, this is the first technology that enables query-driven analytics over compressed spatio-temporal floating-point double- or single-precision data, while offering a lightweight memory and disk storage footprint with parallel, scalable, multi-node, multi-core, GPU-based query processing.
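
To illustrate what query-driven analytics over compressed data means operationally, here is a small, hypothetical sketch: per-window [min, max] metadata prunes windows that cannot contain matches, and only the surviving windows are decoded. The window layout and the decompress() hook are assumptions for illustration, not ISABELA-QA's actual index format.

```python
# Hypothetical range-query sketch over window-compressed data (illustration only).
from typing import Callable, Iterable, List, Tuple

Window = Tuple[float, float, bytes]   # (window min, window max, compressed payload)

def range_query(windows: Iterable[Window],
                lo: float, hi: float,
                decompress: Callable[[bytes], List[float]]) -> List[float]:
    """Return the values in [lo, hi], decoding only windows that can match."""
    hits: List[float] = []
    for wmin, wmax, payload in windows:
        if wmax < lo or wmin > hi:        # metadata prunes this window outright
            continue
        for v in decompress(payload):     # decode only candidate windows
            if lo <= v <= hi:
                hits.append(v)            # return actual values, not just regions
    return hits
```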


international conference on data engineering | 2012

ISOBAR Preconditioner for Effective and High-throughput Lossless Data Compression

Eric R. Schendel; Ye Jin; Neil Shah; J.H. Chen; Choong-Seock Chang; S. Ku; Stephane Ethier; Scott Klasky; Robert Latham; Robert B. Ross; Nagiza F. Samatova

Efficient handling of large volumes of data is a necessity for exascale scientific applications and database systems. To address the growing imbalance between the amount of available storage and the amount of data being produced by high-speed (FLOPS) processors on the system, data must be compressed to reduce the total amount of data placed on the file systems. General-purpose lossless compression frameworks, such as zlib and bzip2, are commonly used on datasets requiring lossless compression. Quite often, however, many scientific datasets compress poorly, referred to as hard-to-compress datasets, due to the negative impact of highly entropic content represented within the data. An important problem in better lossless data compression is to identify the hard-to-compress information and subsequently optimize the compression techniques at the byte level. To address this challenge, we introduce the In-Situ Orthogonal Byte Aggregate Reduction Compression (ISOBAR-compress) methodology as a preconditioner for lossless compression that identifies hard-to-compress datasets and optimizes their compression efficiency and throughput.
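
The byte-level idea can be pictured with a small sketch: split each 8-byte double into its byte columns, test each column's compressibility, and route only the compressible columns through a standard lossless coder. This is one reading of the preconditioning concept under an assumed threshold, not the paper's ISOBAR implementation.

```python
# Rough sketch of byte-column preconditioning for lossless compression
# (illustrative threshold and layout; not the ISOBAR implementation).
import zlib
import numpy as np

def precondition_and_compress(values: np.ndarray, threshold: float = 0.9):
    """Return one entry per byte column: ('z', compressed) or ('raw', bytes)."""
    planes = values.astype('<f8').view(np.uint8).reshape(-1, 8).T  # 8 byte columns
    out = []
    for plane in planes:
        raw = plane.tobytes()
        packed = zlib.compress(raw, 6)
        if len(packed) < threshold * len(raw):   # column is compressible
            out.append(('z', packed))
        else:                                    # high-entropy column: store as-is
            out.append(('raw', raw))
    return out

# Example: for smooth data, sign/exponent byte columns compress well,
# while low-order mantissa columns typically do not.
data = np.cumsum(np.random.default_rng(1).normal(size=100_000))
chunks = precondition_and_compress(data)
print([tag for tag, _ in chunks])
```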


Physics of Plasmas | 2010

Nonlinear flow generation by electrostatic turbulence in tokamaks

W.X. Wang; P. H. Diamond; T. S. Hahm; Stephane Ethier; G. Rewoldt; W. M. Tang

Global gyrokinetic simulations have revealed an important nonlinear flow generation process due to the residual stress produced by electrostatic turbulence of ion temperature gradient (ITG) modes and trapped electron modes (TEMs). In collisionless TEM (CTEM) turbulence, nonlinear residual stress generation by both the fluctuation intensity and the intensity gradient, in the presence of broken symmetry in the parallel wavenumber spectrum, is identified for the first time. Concerning the origin of the symmetry breaking, turbulence self-generated low-frequency zonal flow shear has been identified as a key, universal mechanism in various turbulence regimes. Simulations reported here also indicate the existence of other mechanisms beyond E×B shear. The ITG-turbulence-driven “intrinsic” torque associated with residual stress is shown to increase close to linearly with the ion temperature gradient, in qualitative agreement with experimental observations in various devices. In CTEM dominated regimes, a net toroi...

Collaboration


Dive into Stephane Ethier's collaboration.

Top Co-Authors

Leonid Oliker, Lawrence Berkeley National Laboratory

Scott Klasky, Oak Ridge National Laboratory

Zhihong Lin, University of California

John Shalf, Lawrence Berkeley National Laboratory

Jonathan Carter, Lawrence Berkeley National Laboratory

Andrew Canning, Lawrence Berkeley National Laboratory

T. S. Hahm, Princeton Plasma Physics Laboratory

Choong-Seock Chang, Princeton Plasma Physics Laboratory