
Publication


Featured research published by Klaus Weide.


Parallel Computing | 2009

Extensible component-based architecture for FLASH, a massively parallel, multiphysics simulation code

Anshu Dubey; Katie Antypas; Murali K. Ganapathy; Lynn B. Reid; Katherine Riley; Daniel J. Sheeler; Andrew R. Siegel; Klaus Weide

FLASH is a publicly available high-performance application code which has evolved into a modular, extensible software system from a collection of unconnected legacy codes. FLASH has been successful because its capabilities have been driven by the needs of scientific applications, without compromising maintainability, performance, and usability. In its newest incarnation, FLASH3 consists of interoperable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to co-exist and interchange with each other, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customization of code functionality without the need to modify the core source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allows the code to operate simultaneously in the dual mode of production and development. In this paper we describe the FLASH3 architecture, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes. We also include results from user surveys conducted in 2005 and 2007, which highlight the success of the code.
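The component architecture described above — named units with interchangeable alternative implementations selected when an application is assembled — can be sketched in a few lines. This is an illustrative, author-invented sketch, not FLASH's actual setup tool; the unit and implementation names below are hypothetical.

```python
# Hypothetical sketch of a unit registry with interchangeable
# implementations, in the spirit of FLASH3's component architecture.
# Unit/implementation names here are invented for illustration.

class UnitRegistry:
    """Map unit names (e.g. 'Hydro') to alternative implementations."""
    def __init__(self):
        self._impls = {}

    def register(self, unit, name, factory):
        # Several implementations of the same unit can co-exist.
        self._impls.setdefault(unit, {})[name] = factory

    def configure(self, **choices):
        # Select exactly one implementation per unit, the way a
        # configuration step assembles a specific application.
        return {unit: self._impls[unit][impl]() for unit, impl in choices.items()}

registry = UnitRegistry()
registry.register("Hydro", "split", lambda: "split PPM solver")
registry.register("Hydro", "unsplit", lambda: "unsplit staggered-mesh solver")
registry.register("Gravity", "multigrid", lambda: "multigrid Poisson solver")

# Two different applications can be generated from the same components.
app = registry.configure(Hydro="unsplit", Gravity="multigrid")
print(app["Hydro"])
```

The point of the sketch is the separation of interface (the unit name) from implementation (the registered factories), which lets alternatives be swapped without touching core code.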


Journal of Parallel and Distributed Computing | 2014

A survey of high level frameworks in block-structured adaptive mesh refinement packages

Anshu Dubey; Ann S. Almgren; John B. Bell; Martin Berzins; Steven R. Brandt; Greg L. Bryan; Phillip Colella; Daniel T. Graves; Michael J. Lijewski; Frank Löffler; Brian W. O'Shea; Brian Van Straalen; Klaus Weide

Over the last decade block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas, others have pursued a more general functionality, providing the building blocks for a larger variety of applications. In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, their design trade-offs and their approach to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah. Highlights: a survey of mature, openly available, state-of-the-art structured AMR libraries and codes; a discussion of their frameworks, challenges, and design trade-offs; and the directions being pursued by the codes to prepare for future many-core and heterogeneous platforms.


Astrophysical Journal Supplement Series | 2012

Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in Flash

Anshu Dubey; C. Daley; John A. ZuHone; Paul M. Ricker; Klaus Weide; C. Graziani

In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
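The core operation behind a Lagrangian framework on an Eulerian mesh is sampling grid-stored field data at arbitrary particle positions and advancing the particles with it. The following is a minimal 1D sketch under simplifying assumptions (uniform grid spacing, linear interpolation, forward-Euler update); it is not FLASH code, and all names are invented.

```python
# Illustrative sketch: passive tracer particles advected by a velocity
# field stored on a uniform 1D Eulerian grid. The field is sampled at
# each particle position by linear interpolation, then the particle is
# advanced with a forward-Euler step. Not FLASH's implementation.

def advect_tracers(positions, grid_x, grid_v, dt):
    dx = grid_x[1] - grid_x[0]          # assumes uniform spacing
    new_positions = []
    for x in positions:
        # Locate the cell containing x, clamped to the grid interior.
        i = min(max(int(x // dx), 0), len(grid_x) - 2)
        # Linear interpolation weight within cell i.
        w = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
        v = (1.0 - w) * grid_v[i] + w * grid_v[i + 1]
        new_positions.append(x + v * dt)
    return new_positions

grid_x = [0.0, 1.0, 2.0, 3.0]   # grid point coordinates
grid_v = [1.0, 1.0, 2.0, 2.0]   # velocity at those points
print(advect_tracers([0.5, 2.5], grid_x, grid_v, dt=0.1))
```

Massive or charged particles would additionally feed forces back to the mesh, but the mesh-to-particle interpolation step sketched here is common to all the particle types the abstract lists.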


Physics of Plasmas | 2017

Numerical modeling of laser-driven experiments aiming to demonstrate magnetic field amplification via turbulent dynamo

P. Tzeferacos; A. Rigby; A. F. A. Bott; A. R. Bell; R. Bingham; A. Casner; Fausto Cattaneo; E. Churazov; J. Emig; Norbert Flocke; F. Fiuza; Cary Forest; J. Foster; Carlo Alberto Graziani; J. Katz; M. Koenig; C. K. Li; J. Meinecke; R. D. Petrasso; H.-S. Park; B. A. Remington; J. S. Ross; Dongsu Ryu; D. D. Ryutov; Klaus Weide; T. G. White; Brian Reville; Francesco Miniati; A. A. Schekochihin; D. H. Froula

The universe is permeated by magnetic fields, with strengths ranging from a femtogauss in the voids between the filaments of galaxy clusters to several teragauss in black holes and neutron stars. The standard model behind cosmological magnetic fields is the nonlinear amplification of seed fields via turbulent dynamo to the values observed. We have conceived experiments that aim to demonstrate and study the turbulent dynamo mechanism in the laboratory. Here, we describe the design of these experiments through simulation campaigns using FLASH, a highly capable radiation magnetohydrodynamics code that we have developed, and large-scale three-dimensional simulations on the Mira supercomputer at the Argonne National Laboratory. The simulation results indicate that the experimental platform may be capable of reaching a turbulent plasma state and determining the dynamo amplification. We validate and compare our numerical results with a small subset of experimental data using synthetic diagnostics.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2014

Evolution of FLASH, a multi-physics scientific simulation code for high-performance computing

Anshu Dubey; Katie Antypas; Alan Clark Calder; Christopher S. Daley; Bruce Fryxell; Brad Gallagher; Donald Q. Lamb; Dongwook Lee; Kevin Olson; Lynn B. Reid; Paul Rich; Paul M. Ricker; Katherine Riley; R. Rosner; Andrew R. Siegel; Noel T. Taylor; Klaus Weide; Francis Xavier Timmes; Natasha Vladimirova; John A. ZuHone

The FLASH code has evolved into a modular and extensible scientific simulation software system over the decade of its existence. During this time it has been cumulatively used by over a thousand researchers to investigate problems in astrophysics, cosmology, and in some areas of basic physics, such as turbulence. Recently, many new capabilities have been added to the code to enable it to simulate problems in high-energy density physics. Enhancements to these capabilities continue, along with enhancements enabling simulations of problems in fluid-structure interactions. The code started its life as an amalgamation of already existing software packages and sections of codes developed independently by various participating members of the team for other purposes. The code has evolved through a mixture of incremental and deep infrastructural changes. In the process, it has undergone four major revisions, three of which involved a significant architectural advancement. Along the way, a software process evolved that addresses the issues of code verification, maintainability, and support for the expanding user base. The software process also resolves the conflicts arising out of being in development and production simultaneously with multiple research projects, and between performance and portability. This paper describes the process of code evolution with emphasis on the design decisions and software management policies that have been instrumental in the success of the code. The paper also makes the case for a symbiotic relationship between scientific research and good software engineering of the simulation software.


Concurrency and Computation: Practice and Experience | 2012

Optimization of multigrid based elliptic solver for large scale simulations in the FLASH code

Christopher S. Daley; Marcos Vanella; Anshu Dubey; Klaus Weide; Elias Balaras

FLASH is a multiphysics multiscale adaptive mesh refinement (AMR) code originally designed for simulation of reactive flows often found in astrophysics. With its wide user base and flexible applications configuration capability, FLASH has a dual task of maintaining scalability and portability in all its solvers. The scalability of fully explicit solvers in the code is tied very closely to that of the underlying mesh. Others, such as the Poisson solver based on a multigrid method, have more complex scaling behavior. Multigrid methods suffer from processor starvation and dominating communication costs on coarser grids as the number of processors increases. In this paper, we propose a combination of a uniform grid mesh with the AMR mesh, and the merger of two different sets of solvers, to overcome the scalability limitation of the Poisson solver in FLASH. The principal challenge in the proposed merger is the efficiency of the communication algorithm to map the mesh back and forth between uniform grid and AMR. We present two different parallel mapping algorithms and also discuss results from performance studies of the two implementations.
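Moving a solution between an AMR level and a coarser uniform grid reduces, per cell, to two operations: restriction (averaging fine cells down) and prolongation (injecting coarse values up). A toy 1D sketch with refinement factor 2, entirely author-invented and ignoring the parallel mapping that is the paper's actual contribution:

```python
# Toy 1D sketch of restriction/prolongation between a fine level and a
# coarser uniform grid, refinement factor 2. Not FLASH's parallel
# mapping algorithm; serial and single-block for clarity.

def restrict(fine):
    """Average pairs of fine cells down to coarse cells (conservative)."""
    return [(fine[2 * i] + fine[2 * i + 1]) / 2.0 for i in range(len(fine) // 2)]

def prolong(coarse):
    """Inject each coarse value into its two fine children (piecewise constant)."""
    out = []
    for c in coarse:
        out.extend([c, c])
    return out

fine = [1.0, 3.0, 5.0, 7.0]
coarse = restrict(fine)                      # -> [2.0, 6.0]
# Restriction after prolongation recovers the coarse data exactly,
# which is why the round trip is safe for the coarse-grid solve.
assert restrict(prolong(coarse)) == coarse
```

In the distributed setting the hard part — which this sketch omits — is deciding which processor owns which cells on each side of the mapping and batching the resulting communication, which is what the two algorithms in the paper address.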


Software - Practice and Experience | 2015

Ongoing verification of a multiphysics community code: FLASH

Anshu Dubey; Klaus Weide; Dongwook Lee; John Bachan; Christopher S. Daley; Samuel Olofin; Noel T. Taylor; Paul Rich; Lynn B. Reid

When developing a complex, multi-authored code, daily testing on multiple platforms and under a variety of conditions is essential. It is therefore necessary to have a regression test suite that is easily administered and configured, as well as a way to easily view and interpret the test suite results. We describe the methodology for verification of FLASH, a highly capable multiphysics scientific application code with a wide user base. The methodology uses a combination of unit and regression tests and an in-house testing software that is optimized for operation under limited resources. Although our practical implementations do not always comply with theoretical regression-testing research, our methodology provides a comprehensive verification of a large scientific code under resource constraints.
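The essence of such a regression suite is comparing each test's numerical output against stored baselines within a tolerance and collecting the failures for a report. A minimal sketch with invented test names and data — not FLASH's in-house testing software:

```python
# Minimal regression-test harness sketch (invented names and data, not
# FLASH's tool): each test returns numbers that are compared against
# stored baselines within a tolerance; failing tests are collected.

def run_regression(tests, baselines, tol=1e-12):
    failures = []
    for name, run in tests.items():
        result = run()
        expected = baselines[name]
        # A test fails if any value drifts beyond the tolerance.
        if any(abs(r - e) > tol for r, e in zip(result, expected)):
            failures.append(name)
    return failures

tests = {
    "sod_shock": lambda: [0.125, 0.1, 1.0],
    "sedov":     lambda: [4.0, 2.5],
}
baselines = {
    "sod_shock": [0.125, 0.1, 1.0],
    "sedov":     [4.0, 2.4],    # deliberately stale baseline
}
print(run_regression(tests, baselines))   # -> ['sedov']
```

A real suite would also run the tests on multiple platforms and archive results for the easy-to-interpret reporting the abstract mentions; the comparison step above is the invariant core.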


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

Pragmatic optimizations for better scientific utilization of large supercomputers

Anshu Dubey; Alan Clark Calder; Christopher S. Daley; Robert Fisher; C. Graziani; George C. Jordan; Donald Q. Lamb; Lynn B. Reid; Dean M. Townsley; Klaus Weide

Advances in modeling and algorithms, combined with growth in computing resources, have enabled simulations of multiphysics–multiscale phenomena that can greatly enhance our scientific understanding. However, on currently available high-performance computing (HPC) resources, maximizing the scientific outcome of simulations requires many trade-offs. In this paper we describe our experiences in running simulations of the explosion phase of Type Ia supernovae on the largest available platforms. The simulations use FLASH, a modular, adaptive mesh, parallel simulation code with a wide user base. The simulations use multiple physics components: hydrodynamics, gravity, a sub-grid flame model, a three-stage burning model, and a degenerate equation of state. They also use Lagrangian tracer particles, which are then post-processed to determine the nucleosynthetic yields. We describe the simulation planning process, and the algorithmic optimizations and trade-offs that were found to be necessary. Several of the optimizations and trade-offs were made during the course of the simulations as our understanding of the challenges evolved, or when simulations went into previously unexplored physical regimes. We also briefly outline the anticipated challenges of, and our preparations for, the next-generation computing platforms.


Computational Science and Engineering | 2013

The software development process of FLASH, a multiphysics simulation code

Anshu Dubey; Katie Antypas; Alan Clark Calder; Bruce Fryxell; D. Q. Lamb; Paul M. Ricker; Lynn B. Reid; Katherine Riley; R. Rosner; Andrew R. Siegel; F. X. Timmes; Natalia Vladimirova; Klaus Weide

The FLASH code has evolved into a modular and extensible scientific simulation software system over the decade of its existence. During this time it has been cumulatively used by over a thousand researchers in several scientific communities (e.g., astrophysics, cosmology, high-energy density physics, turbulence, and fluid-structure interactions). The code started its life as an amalgamation of two already existing software packages and sections of other codes developed independently by various participating members of the team for other purposes. In the evolution process it has undergone four major revisions, three of which involved a significant architectural advancement. A corresponding evolution of the software process and policies for maintenance occurred simultaneously. The code is currently in its 4.x release with a substantial user community. Recently there has been an upsurge in the contributions by external users; some provide significant new capability. This paper outlines the software development and evolution processes that have contributed to the success of the FLASH code.


Proceedings of the 3rd Workshop on Fault-tolerance for HPC at extreme scale | 2013

Fault tolerance using lower fidelity data in adaptive mesh applications

Anshu Dubey; Prateeti Mohapatra; Klaus Weide

Many high performance scientific simulation codes use checkpointing for multiple reasons. In addition to providing the flexibility to complete the simulation in multiple job submissions, it has also provided an adequate recovery mechanism up to the current generation of platforms. With the advent of million-way parallelism, application codes are looking for additional options for recovery that may or may not be transparent to the applications. In many instances the applications can make the best judgment about the acceptability of the recovered solution. In this paper, we explore one option for recovering from multiple faults in codes using block-structured adaptive mesh refinement (AMR). AMR codes have easy access to a low-fidelity solution in the same physical space where they are also computing a higher-fidelity solution. When a fault occurs, this low-fidelity solution can be used to reconstruct the higher-fidelity solution in-flight. We report our findings from one implementation of such a strategy in FLASH, a block-structured adaptive mesh refinement community code for simulation of reactive compressible flows. In all our experiments the mechanism proved to be within the error bounds of the considered applications.
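The recovery idea — a lost fine block is rebuilt in-flight from the coarser solution covering the same physical region — can be sketched in 1D with piecewise-constant injection and refinement factor 2. This is a hedged illustration of the concept, not FLASH's implementation; all names and data are invented.

```python
# Hedged sketch of fault recovery from lower-fidelity AMR data, not
# FLASH code: a fine block lost to a fault is reconstructed by
# prolonging the surviving coarse data for the same region (1D,
# refinement factor 2, piecewise-constant injection).

def recover_block(coarse_parent):
    """Rebuild a lost fine block from its coarse parent's data."""
    fine = []
    for value in coarse_parent:
        fine.extend([value, value])   # each coarse cell covers two fine cells
    return fine

fine_blocks = {0: [1.0, 1.2, 1.4, 1.6], 1: None}   # block 1 lost to a fault
coarse = {1: [2.0, 2.5]}                            # surviving coarse data
fine_blocks[1] = recover_block(coarse[1])
print(fine_blocks[1])   # -> [2.0, 2.0, 2.5, 2.5]
```

The recovered block carries coarse-grid accuracy, which is why the paper's question is whether the resulting solution stays within each application's error bounds rather than whether it is bitwise identical.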
