Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Dries Kimpe is active.

Publication


Featured research published by Dries Kimpe.


International Conference on Big Data | 2014

FusionFS: Toward supporting data-intensive scientific applications on extreme-scale high-performance computing systems

Dongfang Zhao; Zhao Zhang; Xiaobing Zhou; Tonglin Li; Ke Wang; Dries Kimpe; Philip H. Carns; Robert B. Ross; Ioan Raicu

State-of-the-art, yet decades-old, architecture of high-performance computing systems has its compute and storage resources separated. It is thus limited for modern data-intensive scientific applications, because every I/O needs to be transferred via the network between the compute and storage resources. In this paper we propose an architecture that has a distributed storage layer local to the compute nodes. This layer is responsible for most of the I/O operations and avoids a large amount of data movement between compute and storage resources. We have designed and implemented a system prototype of this architecture - which we call the FusionFS distributed file system - to support metadata-intensive and write-intensive operations, both of which are critical to the I/O performance of scientific applications. FusionFS has been deployed and evaluated on up to 16K compute nodes of an IBM Blue Gene/P supercomputer, showing more than an order of magnitude performance improvement over other popular file systems such as GPFS, PVFS, and HDFS.
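A minimal sketch of the idea behind such a distributed, compute-node-local layer: metadata for each file can be located by hashing its path, so any node can find the responsible node without a central server. The function name and hash choice here are illustrative assumptions, not FusionFS's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Illustrative only: pick the compute node responsible for a file's
// metadata by hashing its path, so lookups need no central metadata
// server. FusionFS distributes metadata across nodes in this spirit;
// the name and hash here are placeholders, not its real interface.
std::size_t metadata_home(const std::string& path, std::size_t num_nodes) {
    return std::hash<std::string>{}(path) % num_nodes;
}
```

Because the mapping is deterministic, every node computes the same home for the same path, which is what removes the central-server bottleneck.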


Astronomy and Astrophysics | 2005

Resonantly damped fast MHD kink modes in longitudinally stratified tubes with thick non-uniform transitional layers

I. Arregui; T. Van Doorsselaere; Jesse Andries; M. Goossens; Dries Kimpe

Resonantly damped fast kink quasi-modes are computed in fully resistive magnetohydrodynamics (MHD) for two-dimensional equilibrium models. The equilibrium model is a straight cylindrically symmetric flux tube with a plasma density that is non-uniform both across and along the loop. The non-uniform layer across the loop is not restricted to be thin; its thickness can reach values up to the loop diameter. Our results indicate that the period and damping of coronal loop oscillations mainly depend on the density contrast and the inhomogeneity length-scale and are largely independent of the details of longitudinal stratification, depending instead on the mean density weighted by the wave energy. For fully non-uniform loops, quasi-modes can interact with resistive Alfvén eigenmodes, leading to avoided crossings and gaps in the complex frequency plane. The present study extends previous studies on coronal loop oscillations in one-dimensional equilibrium models with thick boundary layers and in equilibria with longitudinally stratified loops under the thin boundary approximation, and allows for a better comparison between observations and theory, raising the prospect of coronal seismology using the time damping of coronal loop oscillations.
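For context, the classical thin-tube, thin-boundary limit that this resistive computation generalizes gives, for a sinusoidal density profile in the boundary layer, the well-known damping-time-to-period ratio and kink speed (a standard analytic reference result, not taken from this paper):

```latex
\frac{\tau_d}{P} \;=\; \frac{2}{\pi}\,\frac{R}{l}\,
\frac{\rho_i + \rho_e}{\rho_i - \rho_e},
\qquad
c_k^2 \;=\; \frac{\rho_i v_{Ai}^2 + \rho_e v_{Ae}^2}{\rho_i + \rho_e},
```

where $R$ is the loop radius, $l$ the thickness of the non-uniform layer, $\rho_i$, $\rho_e$ the internal and external densities, and $v_{Ai}$, $v_{Ae}$ the corresponding Alfvén speeds. The paper's contribution is precisely to relax the thin-layer assumption behind this formula.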


International Conference on Computational Science | 2005

The COOLFluiD framework: design solutions for high performance object oriented scientific computing software

Andrea Lani; Tiago Quintino; Dries Kimpe; Herman Deconinck; Stefan Vandewalle; Stefaan Poedts

The numerical simulation of complex physical phenomena is a challenging endeavor. Software packages developed for this purpose should combine high performance and extreme flexibility, in order to allow an easy integration of new algorithms, models and functionalities, without penalizing run-time efficiency. COOLFluiD is an object-oriented framework for multi-physics simulations using multiple numerical methods on unstructured grids, aiming to satisfy these needs. To this end, specific design patterns and advanced techniques, combining static and dynamic polymorphism, have been employed to attain modularity and efficiency. Some of the main design and implementation solutions adopted in COOLFluiD are presented in this paper, in particular the Perspective and the Method-Command patterns, used to implement the physical models and the numerical modules, respectively.
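A minimal sketch of the Method-Command split described above, under the assumption that a numerical method owns a run-time-configurable list of command objects (dynamic polymorphism), while each command is free to use templates internally for speed. All class names here are hypothetical, not COOLFluiD's actual classes.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical Method-Command sketch: each Command is one step of a
// numerical algorithm; the NumericalMethod assembles steps at run time.
struct Command {
    virtual ~Command() = default;
    virtual std::string execute() = 0;  // perform one algorithmic step
};

struct ComputeResidual : Command {
    std::string execute() override { return "residual"; }
};

struct UpdateSolution : Command {
    std::string execute() override { return "update"; }
};

class NumericalMethod {
public:
    void add(std::unique_ptr<Command> c) { cmds_.push_back(std::move(c)); }
    // Run every configured command in order; return a log of what ran.
    std::vector<std::string> run() {
        std::vector<std::string> log;
        for (auto& c : cmds_) log.push_back(c->execute());
        return log;
    }
private:
    std::vector<std::unique_ptr<Command>> cmds_;
};
```

The design point is that new algorithmic steps can be plugged in without touching the method's driver loop, while the hot inner kernels inside each command can still be statically dispatched.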


Astronomy and Astrophysics | 2005

On the effect of the initial magnetic polarity and of the background wind on the evolution of CME shocks

Emmanuel Chané; Carla Jacobs; B. van der Holst; Stefaan Poedts; Dries Kimpe

The shocks and magnetic clouds caused by Coronal Mass Ejections (CMEs) in the solar corona and interplanetary (IP) space play an important role in the study of space weather. In the present paper, numerical simulations of some simple CME models were performed by means of a finite volume, explicit solver to advance the equations of ideal magnetohydrodynamics. The aim here is to quantify both the effect of the background wind model and that of the initial polarity on the evolution of the IP CMEs and the corresponding shocks.
To simulate the CMEs, a high density-pressure plasma blob is superposed on different steady state solar wind models. The evolution of an initially non-magnetized plasma blob is compared with that of two magnetized ones (with both normal and inverse polarity), and the differences are analysed and quantified. Depending on the launch angle of the CME and the polarity of the initial flux rope, the velocity of the shock front and magnetic cloud is decreased or increased. The spread angle and the evolution path of the CME in the background solar wind also differ substantially between the different CME models and the different wind models. A quantitative comparison of these simulations shows that these effects can be quite substantial and can clearly affect the geo-effectiveness and the arrival time of the events.
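For reference, the ideal MHD system that such a finite-volume, explicit solver advances can be written in conservative form (standard textbook formulation, with gravity as a source term; SI units):

```latex
\begin{aligned}
&\partial_t \rho + \nabla\cdot(\rho\mathbf{v}) = 0,\\
&\partial_t(\rho\mathbf{v}) + \nabla\cdot\!\left[\rho\,\mathbf{v}\mathbf{v}
  + \Big(p + \tfrac{B^2}{2\mu_0}\Big)\mathbf{I}
  - \tfrac{\mathbf{B}\mathbf{B}}{\mu_0}\right] = \rho\,\mathbf{g},\\
&\partial_t \mathbf{B} = \nabla\times(\mathbf{v}\times\mathbf{B}),\\
&\partial_t e + \nabla\cdot\!\left[\Big(e + p + \tfrac{B^2}{2\mu_0}\Big)\mathbf{v}
  - \tfrac{(\mathbf{v}\cdot\mathbf{B})\,\mathbf{B}}{\mu_0}\right]
  = \rho\,\mathbf{g}\cdot\mathbf{v},
\end{aligned}
```

where $e$ is the total energy density and $\mathbf{I}$ the identity tensor. The dense, high-pressure plasma blob enters the simulation simply as a perturbation of $\rho$, $p$ (and, in the magnetized cases, $\mathbf{B}$) on top of the steady wind solution.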


IEEE International Conference on High Performance Computing, Data, and Analytics | 2010

Accelerating I/O Forwarding in IBM Blue Gene/P Systems

Venkatram Vishwanath; Mark Hereld; Kamil Iskra; Dries Kimpe; Vitali A. Morozov; Michael E. Papka; Robert B. Ross; Kazutomo Yoshii

Current leadership-class machines suffer from a significant imbalance between their computational power and their I/O bandwidth. I/O forwarding is a paradigm that attempts to bridge the increasing performance and scalability gap between the compute and I/O components of leadership-class machines, to meet the requirements of data-intensive applications, by shipping I/O calls from compute nodes to dedicated I/O nodes. I/O forwarding is a critical component of the I/O subsystem of the IBM Blue Gene/P supercomputer currently deployed at several leadership computing facilities. In this paper, we evaluate the performance of the existing I/O forwarding mechanisms for BG/P and identify the performance bottlenecks in the current design. We augment the I/O forwarding with two approaches: I/O scheduling using a work-queue model and asynchronous data staging. We evaluate the efficacy of our approaches using microbenchmarks and application-level benchmarks on leadership-class systems.
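A toy sketch of the work-queue idea: the I/O node enqueues forwarded requests and drains them in arrival order, decoupling the compute nodes' calls from the actual storage accesses. Structure and names are illustrative assumptions, not the BG/P implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical work-queue sketch for an I/O forwarding node: compute
// nodes submit requests; a service loop later drains the queue and
// issues them to storage. Not the actual BG/P forwarding code.
struct IORequest {
    std::string op;      // e.g. "write"
    std::size_t bytes;   // payload size
};

class ForwardingQueue {
public:
    void submit(IORequest r) { pending_.push_back(std::move(r)); }

    // Drain all pending requests in FIFO order; return bytes serviced.
    std::size_t drain() {
        std::size_t total = 0;
        while (!pending_.empty()) {
            total += pending_.front().bytes;
            pending_.pop_front();
        }
        return total;
    }
private:
    std::deque<IORequest> pending_;
};
```

In the real system the drain step would overlap with new submissions (asynchronous staging); the queue is what makes that overlap possible.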


Astronomy and Astrophysics | 2006

Inverse and normal coronal mass ejections : evolution up to 1 AU

Emmanuel Chané; B. van der Holst; Carla Jacobs; Stefaan Poedts; Dries Kimpe

Simulations of Coronal Mass Ejections (CMEs) evolving in the interplanetary (IP) space from the Sun up to 1 AU are performed in the framework of ideal magnetohydrodynamics (MHD) by the means of a finite volume, explicit solver. The aim is to quantify the effect of the initiation parameters, such as the initial magnetic polarity, on the evolution and on the geo-effectiveness of CMEs. The CMEs are simulated by means of a very simple model: a high density and high pressure magnetized plasma blob is superposed on a background steady state solar wind model with an initial velocity and launch direction. The simulations show that the initial magnetic polarity substantially affects the IP evolution of the CMEs influencing the propagation velocity, the shape, the trajectory and even the geo-effectiveness. We also tried to reproduce the physical values (density, velocity, and magnetic field) observed by the ACE spacecraft after the halo CME event that occurred on April 4, 2000.


International Conference on Cluster Computing | 2010

Optimization Techniques at the I/O Forwarding Layer

Kazuki Ohta; Dries Kimpe; Jason Cope; Kamil Iskra; Robert B. Ross; Yutaka Ishikawa

I/O is the critical bottleneck for data-intensive scientific applications on HPC systems and leadership-class machines. Applications running on these systems may encounter bottlenecks because the I/O systems cannot handle the overwhelming intensity and volume of I/O requests. Applications and systems use I/O forwarding to aggregate and delegate I/O requests to storage systems. In this paper, we present two optimization techniques at the I/O forwarding layer to further reduce I/O bottlenecks on leadership-class computing systems. The first optimization pipelines data transfers so that I/O requests overlap at the network and file system layers. The second optimization merges I/O requests and schedules I/O request delegation to the back-end parallel file systems. We implemented these optimizations in the I/O Forwarding Scalability Layer and evaluated them on the T2K Open Supercomputer at the University of Tokyo and the Surveyor Blue Gene/P system at the Argonne Leadership Computing Facility. On both systems, the optimizations improved application I/O throughput, but highlighted additional areas of I/O contention at the I/O forwarding layer that we plan to address.
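The request-merging optimization can be sketched as a simple coalescing pass: sort forwarded requests by file offset and fuse contiguous runs, so the back-end parallel file system sees fewer, larger operations. This is an assumed illustration of the technique, not the actual IOFSL code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative merging pass (not the real IOFSL implementation):
// coalesce byte-range requests that are contiguous after sorting.
struct Extent {
    std::size_t offset;
    std::size_t length;
};

std::vector<Extent> merge_requests(std::vector<Extent> reqs) {
    std::sort(reqs.begin(), reqs.end(),
              [](const Extent& a, const Extent& b) { return a.offset < b.offset; });
    std::vector<Extent> merged;
    for (const auto& r : reqs) {
        if (!merged.empty() &&
            merged.back().offset + merged.back().length == r.offset) {
            merged.back().length += r.length;   // contiguous: extend in place
        } else {
            merged.push_back(r);                // gap: start a new extent
        }
    }
    return merged;
}
```

Three 4-byte requests at offsets 0, 8 and 4, for example, collapse into a single 12-byte request, which is exactly the kind of reduction that helps a parallel file system.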


High Performance Distributed Computing | 2012

Enabling event tracing at leadership-class scale through I/O forwarding middleware

Thomas Ilsche; Joseph Schuchart; Jason Cope; Dries Kimpe; Terry Jones; Andreas Knüpfer; Kamil Iskra; Robert B. Ross; Wolfgang E. Nagel; Stephen W. Poole

Event tracing is an important tool for understanding the performance of parallel applications. As concurrency increases in leadership-class computing systems, the quantity of performance log data can overload the parallel file system, perturbing the application being observed. In this work we present a solution for event tracing at leadership-class scales. We enhance the I/O forwarding system software to aggregate and reorganize log data prior to writing to the storage system, significantly reducing the burden on the underlying file system for this type of traffic. Furthermore, we augment the I/O forwarding system with a write buffering capability to limit the impact of artificial perturbations from log data accesses on traced applications. To validate the approach, we modify the Vampir tracing toolset to take advantage of this new capability and show that the approach increases the maximum traced application size by a factor of five, to more than 200,000 processes.
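The write-buffering idea above can be sketched as trace records accumulating in memory and being handed off only in full batches, so log I/O rarely interrupts the traced application. This mirrors the concept only; it is not the Vampir or IOFSL code, and all names are hypothetical.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hypothetical write-buffering sketch: events accumulate in memory
// and are handed off in bulk when the buffer fills, so a traced
// application pays for log I/O only once per batch.
class TraceBuffer {
public:
    explicit TraceBuffer(std::size_t capacity) : capacity_(capacity) {}

    // Returns a full batch ready for forwarding, or an empty vector
    // if the buffer has not yet filled.
    std::vector<std::string> record(std::string event) {
        buf_.push_back(std::move(event));
        if (buf_.size() < capacity_) return {};
        std::vector<std::string> batch;
        batch.swap(buf_);          // hand the full buffer off at once
        return batch;
    }
private:
    std::size_t capacity_;
    std::vector<std::string> buf_;
};
```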


Scientific Programming | 2006

Reusable object-oriented solutions for numerical simulation of PDEs in a high performance environment

Andrea Lani; Tiago Quintino; Dries Kimpe; Herman Deconinck; Stefan Vandewalle; Stefaan Poedts

Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.


IEEE Transactions on Parallel and Distributed Systems | 2016

Towards Exploring Data-Intensive Scientific Applications at Extreme Scales through Systems and Simulations

Dongfang Zhao; Ning Liu; Dries Kimpe; Robert B. Ross; Xian-He Sun; Ioan Raicu

The state-of-the-art storage architecture of high-performance computing systems was designed decades ago, and at today's scale and level of concurrency it shows significant limitations. Our recent work proposed a new architecture to address the I/O bottleneck of the conventional wisdom, and the system prototype (FusionFS) demonstrated its effectiveness on up to 16K nodes, a scale on par with today's largest supercomputers. The main objective of this paper is to investigate FusionFS's scalability towards exascale. Exascale computers are predicted to emerge by 2018, comprising millions of cores and billions of threads. We built an event-driven simulator (FusionSim) according to the FusionFS architecture and validated it against FusionFS traces; FusionSim introduced less than 4 percent error between its simulation results and the traces. With FusionSim we simulated workloads on up to two million nodes and found almost linear scalability of I/O performance; these results justify FusionFS's viability for exascale systems. In addition to the simulation work, this paper extends the FusionFS system prototype in the following respects: (1) fault tolerance of file metadata is supported, (2) the limitations of the current system design are discussed, and (3) a more thorough performance evaluation is conducted, covering N-to-1 metadata writes, system efficiency, and additional platforms such as the Amazon cloud.
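At its core, an event-driven simulator of the kind described keeps a priority queue of timestamped events and repeatedly processes the earliest one, letting the simulated clock jump from event to event. The sketch below shows only this generic skeleton under that assumption; it is not FusionSim's actual code.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Generic event-driven simulation loop (illustrative, not FusionSim):
// events carry a timestamp; a min-heap always yields the earliest one.
using Event = std::pair<double, int>;  // (simulated time, event id)

std::vector<Event> run_simulation(std::vector<Event> initial) {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> pq(
        initial.begin(), initial.end());
    std::vector<Event> processed;
    while (!pq.empty()) {
        processed.push_back(pq.top());  // earliest pending event first
        pq.pop();                       // a real handler could push new events here
    }
    return processed;
}
```

Because no wall-clock time passes between events, such a simulator can model millions of nodes far beyond the hardware actually available, which is what enables the two-million-node experiments.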

Collaboration


Dive into Dries Kimpe's collaborations.

Top Co-Authors

Robert B. Ross (Argonne National Laboratory)
Stefaan Poedts (Katholieke Universiteit Leuven)
Stefan Vandewalle (Katholieke Universiteit Leuven)
Dong Dai (Texas Tech University)
Jason Cope (Argonne National Laboratory)
Kamil Iskra (Argonne National Laboratory)
Philip H. Carns (Argonne National Laboratory)
Yong Chen (Texas Tech University)
Carla Jacobs (Katholieke Universiteit Leuven)
Emmanuel Chané (Katholieke Universiteit Leuven)