Publication


Featured research published by Wolfgang Eckhardt.


Journal of Chemical Theory and Computation | 2014

ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems

Christoph Niethammer; Stefan Becker; Martin Bernreuther; Martin Buchholz; Wolfgang Eckhardt; Alexander Heinecke; Stephan Werth; Hans-Joachim Bungartz; Colin W. Glass; Hans Hasse; Jadran Vrabec; Martin Horsch

The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
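
For orientation, the basic pair interaction behind the Lennard-Jones sites mentioned above is the 12-6 Lennard-Jones potential between two sites. The sketch below is a generic, self-contained illustration of that formula only; parameter names and the simple cutoff handling are illustrative and are not taken from ls1 mardyn.

```cpp
// Minimal sketch of a 12-6 Lennard-Jones site-site interaction, the basic
// pair potential mentioned in the abstract. Parameter names (epsilon, sigma)
// and the truncation radius are illustrative, not taken from ls1 mardyn.
#include <array>
#include <cstdio>

struct LJResult {
    double potential;            // U(r)
    std::array<double, 3> force; // force acting on site i (minus that on j)
};

LJResult lennardJones(const std::array<double, 3>& ri,
                      const std::array<double, 3>& rj,
                      double epsilon, double sigma, double rcut) {
    std::array<double, 3> d{ri[0] - rj[0], ri[1] - rj[1], ri[2] - rj[2]};
    double r2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
    if (r2 > rcut * rcut) return {0.0, {0.0, 0.0, 0.0}};

    double s2 = sigma * sigma / r2;  // (sigma/r)^2
    double s6 = s2 * s2 * s2;        // (sigma/r)^6
    double u = 4.0 * epsilon * (s6 * s6 - s6);
    // -dU/dr expressed as a scalar factor on the distance vector: F_i = f * d
    double f = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r2;
    return {u, {f * d[0], f * d[1], f * d[2]}};
}

int main() {
    auto res = lennardJones({0.0, 0.0, 0.0}, {1.2, 0.0, 0.0}, 1.0, 1.0, 2.5);
    std::printf("U = %f, Fx = %f\n", res.potential, res.force[0]);
}
```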


International Supercomputing Conference | 2013

591 TFLOPS Multi-trillion Particles Simulation on SuperMUC

Wolfgang Eckhardt; Alexander Heinecke; Reinhold Bader; Matthias Brehm; Nicolay Hammer; Herbert Huber; Hans-Georg Kleinhenz; Jadran Vrabec; Hans Hasse; Martin Horsch; Martin Bernreuther; Colin W. Glass; Christoph Niethammer; Arndt Bode; Hans-Joachim Bungartz

Anticipating large-scale molecular dynamics (MD) simulations in nano-fluidics, we conduct performance and scalability studies of an optimized version of the code ls1 mardyn. We present our implementation requiring only 32 bytes per molecule, which allows us to run the, to our knowledge, largest MD simulation to date. Our optimizations tailored to the Intel Sandy Bridge processor are explained, including vectorization as well as shared-memory parallelization to make use of Hyperthreading. Finally, we present results for weak and strong scaling experiments on up to 146,016 cores of SuperMUC at the Leibniz Supercomputing Centre, achieving a speed-up factor of 133,000, which corresponds to an absolute performance of 591.2 TFLOPS.
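
The abstract does not spell out the 32-byte molecule representation; the struct below is only a hypothetical layout that happens to fit that budget (single-precision positions and velocities plus two 32-bit indices), to illustrate how compact such a format has to be.

```cpp
// Hypothetical 32-byte particle record, illustrating the memory budget
// mentioned in the abstract; the layout actually used in ls1 mardyn may differ.
#include <cstdint>
#include <cstdio>

struct Molecule32 {
    float x, y, z;      // position, single precision: 12 bytes
    float vx, vy, vz;   // velocity, single precision: 12 bytes
    std::uint32_t id;   // global molecule id:          4 bytes
    std::uint32_t type; // component/species index:     4 bytes
};                      // total:                      32 bytes

static_assert(sizeof(Molecule32) == 32, "layout must stay within 32 bytes");

int main() {
    std::printf("bytes per molecule: %zu\n", sizeof(Molecule32));
}
```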


Future Generation Computer Systems | 2010

A precompiler to reduce the memory footprint of multiscale PDE solvers in C++

Hans-Joachim Bungartz; Wolfgang Eckhardt; Tobias Weinzierl; Christoph Zenger

A PDE solver's value is increasingly co-determined by its memory footprint, as the growth of computational multicore power outpaces memory access speed, and as memory restricts the maximum experiment size. Tailoring a code to require less memory is technically challenging, error-prone, and hardware-dependent. Object-oriented code typically consumes much memory, yet developers favour such high-level languages for their meaningful models and good maintainability. We augment the language C++ with new keywords branding records as memory-critical. Our precompiler DaStGen then transforms this augmented specification into plain C++ optimised for low memory requirements. In doing so, it encodes multiple attributes with fixed range within one variable, and it reduces the number of bits per floating point value. The tool also generates one user-defined MPI data type per class and thus facilitates the construction of parallel codes with small messages.
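
As a rough illustration of the transformation described above, the fragment below sketches the underlying idea of packing several small-range attributes into a single primitive type, here via C++ bit-fields; the record and its fields are hypothetical and do not reproduce the code DaStGen actually emits.

```cpp
// Sketch of the packing idea behind DaStGen-generated records: several
// attributes with known, small ranges share one primitive type via
// bit-fields. Field names and ranges are hypothetical.
#include <cstdint>
#include <cstdio>

struct PackedCell {
    std::uint32_t level      : 5;  // grid level, 0..31
    std::uint32_t refineFlag : 1;  // boolean refinement marker
    std::uint32_t boundary   : 2;  // inside / coarse boundary / domain boundary / outside
    std::uint32_t vertexType : 4;  // up to 16 enumerated states
    // remaining 20 bits stay available for further attributes
};

// A naive object-oriented counterpart would spend a full int or bool per
// attribute (16 bytes or more); the packed record needs 4 bytes.
int main() {
    PackedCell c{};
    c.level = 7;
    c.refineFlag = 1;
    std::printf("packed record size: %zu bytes\n", sizeof(PackedCell));
}
```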


Parallel Processing and Applied Mathematics | 2009

A blocking strategy on multicore architectures for dynamically adaptive PDE solvers

Wolfgang Eckhardt; Tobias Weinzierl

This paper analyses a PDE solver working on adaptive Cartesian grids. While a rigorous element-wise formulation of this solver offers great flexibility concerning dynamic adaptivity, and while it comes along with very low memory requirements, the realisation's speed cannot cope with codes working on patches of regular grids--in particular, if the latter deploy patches to several cores. Instead of composing a grid of regular patches, we suggest identifying regular patches throughout the recursive, element-wise grid traversal. Our code then unrolls the recursion for these regular grid blocks automatically, and it deploys their computations to several cores. It hence benefits from multicore parallelism on regular subdomains, but preserves its simple, element-wise character and its ability to handle arbitrary dynamic refinement and domain topology changes.
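
A minimal sketch of this blocking idea, with a placeholder cell type and a stand-in regularity test rather than the paper's actual spacetree structures, is given below: the traversal stays element-wise and recursive, but recognised regular blocks are processed as a plain loop that can be handed to several cores.

```cpp
// Minimal sketch of the blocking idea: traverse an adaptive tree recursively,
// but when a subtree is identified as a regular block, unroll the recursion
// into a loop over its cells and hand that loop to several cores (OpenMP).
// The Cell type and the regularity test are placeholders only.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Cell {
    double value = 0.0;
    std::vector<Cell> children;
};

// Placeholder: treat a cell whose children are all leaves as a regular block.
bool isRegularBlock(const Cell& c) {
    if (c.children.empty()) return false;
    for (const Cell& child : c.children)
        if (!child.children.empty()) return false;
    return true;
}

void elementWiseKernel(Cell& c) { c.value += 1.0; }  // stand-in PDE operator

void traverse(Cell& cell) {
    if (isRegularBlock(cell)) {
        // Recursion unrolled for the regular block: one parallelisable loop.
        #pragma omp parallel for
        for (std::size_t i = 0; i < cell.children.size(); ++i)
            elementWiseKernel(cell.children[i]);
        elementWiseKernel(cell);
        return;
    }
    elementWiseKernel(cell);
    for (Cell& child : cell.children)  // element-wise recursion elsewhere
        traverse(child);
}

int main() {
    Cell root;
    root.children.resize(4);  // one regular block of four leaf cells
    traverse(root);
    std::printf("root value after traversal: %f\n", root.value);
}
```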


Archive | 2015

Supercomputing for Molecular Dynamics Simulations

Alexander Heinecke; Wolfgang Eckhardt; Martin Horsch; Hans-Joachim Bungartz

This chapter outlines the work “Supercomputing for Molecular Dynamics Simulations: Handling Multi-Trillion Particles in Nanofluidics” and defines the overall scope of this book. Several flavors of molecular dynamics (MD) simulation are introduced, and we point out the different requirements on MD depending on the field in which MD is applied. Since we focus on the application of MD in the relatively new domain of process engineering, we discuss which ideas from molecular biology and its mature simulation codes can be re-used and which need to be re-thought. This is necessary since both the molecular models and the particle numbers used in computational molecular engineering noticeably differ from those of molecular biology. Furthermore, we outline the methodology and structure of this book.


Computers & Mathematics with Applications | 2014

Hybrid molecular-continuum methods: From prototypes to coupling software

Philipp Neumann; Wolfgang Eckhardt; Hans-Joachim Bungartz

In this contribution, we review software requirements in hybrid molecular-continuum simulations. For this purpose, we analyze a prototype implementation which combines two frameworks--the Molecular Dynamics framework MarDyn and the framework Peano for spatially adaptive mesh-based simulations--and point out particular challenges of a general coupling software. Based on this analysis, we discuss the software design of our recently published coupling tool. We explain details on its overall structure and show how the challenges that arise in respective couplings are resolved by the software.


International Conference on Computational Science | 2008

DaStGen--A Data Structure Generator for Parallel C++ HPC Software

Hans-Joachim Bungartz; Wolfgang Eckhardt; Miriam Mehl; Tobias Weinzierl

Simulation codes often suffer from high memory requirements. This holds in particular if they are memory-bound, and, with multicore systems coming up, the problem will become even worse as more and more cores have to share the memory connections. Optimising data structures with respect to memory manually is error-prone and cumbersome. This paper presents the tool DaStGen, which translates classes declared in C++ syntax and augmented by new keywords into plain C++ code. The tool automates the record optimisation, as it analyses the potential range of each attribute, and as the user can restrict this range further. From this, the generated code stores multiple attributes within one single primitive type. Furthermore, the tool derives user-defined MPI data types for each class. Using the tool reduces any algorithm's memory footprint, it speeds up memory-bound applications such as CFD codes, and it hides technical details of MPI applications from the programmer.
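
The second feature named above, deriving one user-defined MPI data type per class, could conceptually look like the sketch below; the record, its fields, and the function name are hypothetical and only illustrate the kind of code a generator might produce.

```cpp
// Sketch of deriving one MPI datatype per record, so objects can be sent
// without manual packing. The Vertex record and its fields are hypothetical.
#include <mpi.h>

struct Vertex {
    double position[3];
    int    refinementFlag;
};

MPI_Datatype buildVertexDatatype() {
    Vertex sample;
    int          blockLengths[2] = {3, 1};
    MPI_Datatype fieldTypes[2]   = {MPI_DOUBLE, MPI_INT};
    MPI_Aint     base, displacements[2];
    MPI_Get_address(&sample, &base);
    MPI_Get_address(&sample.position, &displacements[0]);
    MPI_Get_address(&sample.refinementFlag, &displacements[1]);
    displacements[0] -= base;
    displacements[1] -= base;

    MPI_Datatype vertexType;
    MPI_Type_create_struct(2, blockLengths, displacements, fieldTypes, &vertexType);
    MPI_Type_commit(&vertexType);
    return vertexType;  // usable in MPI_Send/MPI_Recv for Vertex records
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    MPI_Datatype t = buildVertexDatatype();
    MPI_Type_free(&t);
    MPI_Finalize();
}
```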


European Conference on Parallel Processing | 2015

Optimized Force Calculation in Molecular Dynamics Simulations for the Intel Xeon Phi

Nikola Tchipev; Amer Wafai; Colin W. Glass; Wolfgang Eckhardt; Alexander Heinecke; Hans-Joachim Bungartz; Philipp Neumann

We provide details on the shared-memory parallelization for manycore architectures of the molecular dynamics framework ls1-mardyn, including an optimization of the SIMD vectorization for multi-centered molecules. The novel shared-memory parallelization scheme makes it possible to retain the Newton's third law optimization and exhibits very good scaling on many-core devices such as a full Xeon Phi card running 240 threads. The Xeon Phi can thus be exploited and delivers performance comparable to Ivy Bridge nodes in our experiments.
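
For context, the Newton's third law optimization mentioned above means that each pair is evaluated once and the resulting force is added to one molecule and subtracted from the other, halving the force evaluations. The serial sketch below shows only this basic idea; the paper's contribution, retaining it under shared-memory parallelism without write conflicts, is not reproduced here.

```cpp
// Serial sketch of the Newton's-third-law optimisation: each pair (i, j) is
// evaluated once, and the force is applied to both molecules with opposite
// sign. The pair force itself is a placeholder.
#include <array>
#include <vector>

struct Molecule {
    std::array<double, 3> r{};  // position
    std::array<double, 3> f{};  // accumulated force
};

// Placeholder pair force; a real code would evaluate Lennard-Jones sites etc.
std::array<double, 3> pairForce(const Molecule& a, const Molecule& b) {
    std::array<double, 3> f{};
    for (int d = 0; d < 3; ++d) f[d] = a.r[d] - b.r[d];
    return f;
}

void computeForces(std::vector<Molecule>& molecules) {
    for (std::size_t i = 0; i < molecules.size(); ++i) {
        for (std::size_t j = i + 1; j < molecules.size(); ++j) {
            std::array<double, 3> fij = pairForce(molecules[i], molecules[j]);
            for (int d = 0; d < 3; ++d) {
                molecules[i].f[d] += fij[d];  // action ...
                molecules[j].f[d] -= fij[d];  // ... and reaction
            }
        }
    }
}

int main() {
    std::vector<Molecule> m(4);
    m[1].r = {1.0, 0.0, 0.0};
    computeForces(m);
}
```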


International Symposium on Parallel and Distributed Computing | 2012

Memory-Efficient Implementation of a Rigid-Body Molecular Dynamics Simulation

Wolfgang Eckhardt; Tobias Neckel

Molecular dynamics simulations are usually optimized with regard to runtime rather than memory consumption. In this paper, we investigate two distinct implementation aspects of the frequently used Linked-Cell algorithm for rigid-body molecular dynamics simulations: the representation of particle data for the force calculation, and the layout of data structures in memory. We propose a low-memory-footprint implementation which comes at no cost in terms of runtime. To validate the approach, it was implemented in the program Mardyn and evaluated on a standard cluster as well as on a Blue Gene/P for representative scenarios.
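
One way to read the two aspects named above is sketched below: molecules kept as an array of structures, with a transient structure-of-arrays buffer assembled per linked cell for the force calculation. Whether this matches the layout actually chosen in the paper cannot be inferred from the abstract; the types are illustrative only.

```cpp
// Hypothetical illustration of the two implementation aspects named in the
// abstract: an array-of-structures (AoS) storage representation and a
// structure-of-arrays (SoA) buffer filled per linked cell for the force kernel.
#include <cstdio>
#include <vector>

// Storage representation: one record per molecule (AoS).
struct Molecule {
    double r[3];
    double v[3];
    double F[3];
};

// Force-calculation representation: contiguous coordinate arrays (SoA),
// which stream well through a vectorised force kernel.
struct CellSoABuffer {
    std::vector<double> x, y, z;

    void fill(const std::vector<Molecule>& cellMolecules) {
        x.clear(); y.clear(); z.clear();
        for (const Molecule& m : cellMolecules) {
            x.push_back(m.r[0]);
            y.push_back(m.r[1]);
            z.push_back(m.r[2]);
        }
    }
};

int main() {
    std::vector<Molecule> cell(8, Molecule{});
    CellSoABuffer buffer;
    buffer.fill(cell);  // transient buffer: built per cell, used, then reused
    std::printf("buffered %zu molecule positions\n", buffer.x.size());
}
```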


Parallel Computing | 2015

On-the-fly memory compression for multibody algorithms

Wolfgang Eckhardt; Robert Glas; Denys Korzh; Stefan Wallner; Tobias Weinzierl

Memory and bandwidth demands challenge developers of particle-based codes that have to scale on new architectures, as the growth of concurrency outpaces improvements in memory access facilities, as the memory per core tends to stagnate, and as communication networks cannot increase bandwidth arbitrarily. We propose to analyse each particle of such a code to find out whether a hierarchical data representation storing data with reduced precision caps the memory demands without exceeding given error bounds. For admissible candidates, we perform this compression and thus reduce the pressure on the memory subsystem, lower the total memory footprint and reduce the data to be exchanged via MPI. Notably, our analysis and transformation change the data compression dynamically, i.e., the choice of data format follows the solution characteristics, and it does not require us to alter the core simulation code.
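
A minimal sketch of the admissibility test behind such precision reduction is given below: a value is only stored in a smaller format if the representation error stays within a prescribed bound. The hierarchical representation and the MPI integration described in the abstract are beyond this sketch, and the function name is illustrative.

```cpp
// Minimal sketch of the precision-reduction idea: store a value in a smaller
// format only if the representation error stays below a given bound;
// otherwise keep the full-precision original.
#include <cmath>
#include <cstdio>

// Returns true if 'value' may be stored as float without exceeding 'errorBound'.
bool admissibleAsFloat(double value, double errorBound) {
    float compressed = static_cast<float>(value);
    return std::fabs(static_cast<double>(compressed) - value) <= errorBound;
}

int main() {
    double position = 1.0 / 3.0;
    double errorBound = 1.0e-6;
    if (admissibleAsFloat(position, errorBound)) {
        float stored = static_cast<float>(position);  // compressed representation
        std::printf("stored as float: %.9f\n", static_cast<double>(stored));
    } else {
        std::printf("kept in double precision\n");
    }
}
```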

Collaboration


Dive into Wolfgang Eckhardt's collaboration.

Top Co-Authors

Martin Horsch (Kaiserslautern University of Technology)

Hans Hasse (Kaiserslautern University of Technology)

Stephan Werth (Kaiserslautern University of Technology)

Amer Wafai (University of Stuttgart)

Herbert Huber (Bavarian Academy of Sciences and Humanities)