Publication


Featured research published by Julian Cummings.


Engineering With Computers | 2006

A virtual test facility for the efficient simulation of solid material response under strong shock and detonation wave loading

Ralf Deiterding; Raul Radovitzky; Sean Mauch; Ludovic Noels; Julian Cummings; D. I. Meiron

A virtual test facility (VTF) for studying the three-dimensional dynamic response of solid materials subject to strong shock and detonation waves has been constructed as part of the research program of the Center for Simulating the Dynamic Response of Materials at the California Institute of Technology. The compressible fluid flow is simulated with a Cartesian finite volume method that treats the solid as an embedded moving body, while a Lagrangian finite element scheme is employed to describe the structural response to the hydrodynamic pressure loading. A temporal splitting method is applied to update the position and velocity of the boundary between time steps. The boundary is represented implicitly in the fluid solver with a level set function that is constructed on the fly from the unstructured solid surface mesh. Block-structured mesh adaptation with time step refinement in the fluid allows for the efficient consideration of disparate fluid and solid time scales. We detail the design of the employed object-oriented mesh refinement framework AMROC and outline its effective extension for fluid-structure interaction problems. Further, we describe the parallelization of the most important algorithmic components for distributed memory machines and discuss the applied partitioning strategies. As computational examples for typical VTF applications, we present the dynamic deformation of a tantalum cylinder due to the detonation of an interior solid explosive and the impact of an explosion-induced shock wave on a multi-material soft tissue body.
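The implicit boundary representation is easy to illustrate in miniature. In the sketch below, an analytic sphere stands in for the signed distance function that the VTF constructs on the fly from the unstructured solid surface mesh; the grid size and names are illustrative only, not the AMROC API.

```python
import numpy as np

def signed_distance_sphere(x, y, z, center, radius):
    """Signed distance to a sphere: negative inside the solid, positive in the fluid."""
    return np.sqrt((x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2) - radius

# Cartesian fluid grid (stand-in for one mesh patch)
n = 32
c = np.linspace(0.0, 1.0, n)
x, y, z = np.meshgrid(c, c, c, indexing="ij")

# Level set for a spherical embedded body
phi = signed_distance_sphere(x, y, z, center=(0.5, 0.5, 0.5), radius=0.25)

solid = phi < 0.0              # cells covered by the embedded solid
band = np.abs(phi) < 1.0 / n   # narrow band where fluid-solid coupling acts
print(f"{solid.sum()} solid cells, {band.sum()} near-interface cells")
```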


The Journal of Supercomputing | 2002

A Virtual Test Facility for the Simulation of Dynamic Response in Materials

Julian Cummings; Michael Aivazis; Ravi Samtaney; Raul Radovitzky; Sean Mauch; D. I. Meiron

The Center for Simulating Dynamic Response of Materials at the California Institute of Technology is constructing a virtual shock physics facility for studying the response of various target materials to very strong shocks. The Virtual Test Facility (VTF) is an end-to-end, fully three-dimensional simulation of the detonation of high explosives (HE), shock wave propagation, solid material response to pressure loading, and compressible turbulence. The VTF largely consists of a parallel fluid solver and a parallel solid mechanics package that are coupled together by the exchange of boundary data. The Eulerian fluid code and Lagrangian solid mechanics model interact via a novel approach based on level sets. The two main computational packages are integrated through the use of Pyre, a problem solving environment written in the Python scripting language. Pyre allows application developers to interchange various computational models and solver packages without recompiling code, and it provides standardized access to several data visualization engines and data input mechanisms. In this paper, we outline the main components of the VTF, discuss their integration via Pyre, and describe some recent accomplishments in large-scale simulation using the VTF.
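Pyre's key convenience, interchanging computational models and solver packages without recompiling, can be sketched in a few lines of plain Python. This is a hypothetical swap-by-name registry, not Pyre's actual component API; the component names and the config dictionary are invented for illustration.

```python
# Hypothetical component registry; Pyre's real component model is
# richer (traits, inventories, configuration files).
SOLVERS = {}

def register(name):
    def wrap(cls):
        SOLVERS[name] = cls
        return cls
    return wrap

@register("fluid.euler")
class EulerianFluidSolver:
    def advance(self, dt):
        print(f"Eulerian fluid step, dt = {dt}")

@register("solid.fem")
class LagrangianSolidSolver:
    def advance(self, dt):
        print(f"Lagrangian solid step, dt = {dt}")

# The choice of solver comes from configuration data, not from code,
# so swapping packages requires no recompilation of the application.
config = {"solver": "fluid.euler", "dt": 1e-4}
SOLVERS[config["solver"]]().advance(config["dt"])
```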


International Parallel and Distributed Processing Symposium | 2011

Moving the Code to the Data - Dynamic Code Deployment Using ActiveSpaces

Ciprian Docan; Manish Parashar; Julian Cummings; Scott Klasky

Managing the large volumes of data produced by emerging scientific and engineering simulations running on leadership-class resources has become a critical challenge. The data has to be extracted off the computing nodes and transported to consumer nodes so that it can be processed, analyzed, visualized, archived, etc. Several recent research efforts have addressed data-related challenges at different levels. One attractive approach is to offload expensive I/O operations to a smaller set of dedicated computing nodes known as a staging area. However, even using this approach, the data still has to be moved from the staging area to consumer nodes for processing, which continues to be a bottleneck. In this paper, we investigate an alternate approach, namely moving the data-processing code to the staging area rather than moving the data. Specifically, we present the Active Spaces framework, which provides (1) programming support for defining the data-processing routines to be downloaded to the staging area, and (2) run-time mechanisms for transporting binary codes associated with these routines to the staging area, executing the routines on the nodes of the staging area, and returning the results. We also present an experimental performance evaluation of Active Spaces using applications running on the Cray XT5 at Oak Ridge National Laboratory. Finally, we use a coupled fusion application workflow to explore the trade-offs between transporting data and transporting the code required for data processing during coupling, and we characterize the sweet spots for each option.
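The core idea, transporting the processing routine instead of the data, can be mimicked in miniature with Python's code serialization. The real framework ships compiled binary codes over RDMA to dedicated staging nodes; everything below (the process names, the toy reduction) is an illustrative stand-in, not the ActiveSpaces API.

```python
import marshal
import types
from multiprocessing import Pipe, Process

def staging_node(conn, data):
    """Stand-in for a staging node holding a large simulation output in memory."""
    fn = types.FunctionType(marshal.loads(conn.recv_bytes()), globals())
    conn.send(fn(data))   # run the shipped routine next to the data;
    conn.close()          # only the small result travels back

def mean_energy(values):  # the "data-processing routine" to deploy
    return sum(values) / len(values)

if __name__ == "__main__":
    large_output = list(range(1_000_000))   # stays on the staging process
    parent, child = Pipe()
    worker = Process(target=staging_node, args=(child, large_output))
    worker.start()
    parent.send_bytes(marshal.dumps(mean_energy.__code__))  # move the code...
    print("result:", parent.recv())                         # ...not the data
    worker.join()
```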


Journal of Physics: Conference Series | 2009

Scaling to 150K cores: recent algorithm and performance engineering developments enabling XGC1 to run at scale

Mark Adams; Seung-Hoe Ku; Patrick H. Worley; Eduardo F. D'Azevedo; Julian Cummings; Cindy Chang

For many decades, particle-in-cell (PIC) methods have proven effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to the development of the first fully toroidal edge PIC code, XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line-following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field-line-following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.
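The cache-locality idea behind the particle partitioning can be shown with the smallest possible PIC kernel. The sketch below is a generic 1D cloud-in-cell charge deposition, not XGC1's semi-field-line-following mesh; the point illustrated is that sorting particles by mesh cell makes memory-adjacent particles touch the same mesh entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_particles, length = 64, 100_000, 1.0
dx = length / n_cells

x = rng.uniform(0.0, length, n_particles)    # particle positions
w = 1.0 / n_particles                        # equal particle weights

# Bin particles by mesh cell and sort: after the sort, particles that
# touch the same mesh entries are adjacent in memory (cache locality).
cell = np.minimum((x / dx).astype(int), n_cells - 1)
order = np.argsort(cell, kind="stable")
x, cell = x[order], cell[order]

# Linear (cloud-in-cell) charge deposition onto the mesh nodes
frac = x / dx - cell
rho = np.zeros(n_cells + 1)
np.add.at(rho, cell, w * (1.0 - frac))
np.add.at(rho, cell + 1, w * frac)
print("total deposited charge:", rho.sum())  # = sum of weights = 1.0
```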


Parallel, Distributed and Network-Based Processing | 2010

EFFIS: An End-to-end Framework for Fusion Integrated Simulation

Julian Cummings; Jay F. Lofstead; Karsten Schwan; Alexander Sim; Arie Shoshani; Ciprian Docan; Manish Parashar; Scott Klasky; Norbert Podhorszki; Roselyne Barreto

The purpose of the Fusion Simulation Project is to develop a predictive capability for integrated modeling of magnetically confined burning plasmas. In support of this mission, the Center for Plasma Edge Simulation has developed an End-to-end Framework for Fusion Integrated Simulation (EFFIS) that combines critical computer science technologies in an effective manner to support leadership-class computing and the coupling of complex plasma physics models. We describe here the main components of EFFIS and how they are being utilized to address our goal of integrated predictive plasma edge simulation.


Grid Computing | 2010

Experiments with Memory-to-Memory Coupling for End-to-End Fusion Simulation Workflows

Ciprian Docan; Fan Zhang; Manish Parashar; Julian Cummings; Norbert Podhorszki; Scott Klasky

Scientific applications strive to accurately simulate the multiple interacting physical processes that comprise the complex phenomena being modeled. Efficient and scalable parallel implementations of these coupled simulations present challenging interaction and coordination requirements, especially when the coupled physical processes are computationally heterogeneous and progress at different speeds. In this paper, we present the design, implementation and evaluation of a memory-to-memory coupling framework for coupled scientific simulations on high-performance parallel computing platforms. The framework is driven by the coupling requirements of the Center for Plasma Edge Simulation, and it provides simple coupling abstractions as well as efficient asynchronous (RDMA-based) memory-to-memory data transport mechanisms that complement existing parallel programming systems and data sharing frameworks. The framework enables flexible coupling behaviors that are asynchronous in time and space, and it supports dynamic coupling between heterogeneous simulation processes without enforcing any synchronization constraints. We evaluate the performance and scalability of the coupling framework using a specific coupling scenario on the Jaguar Cray XT5 system at Oak Ridge National Laboratory.
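The coupling abstraction, an asynchronous put/get space shared between independently progressing codes, can be sketched with threads and a dictionary. The real framework moves data between staging nodes over RDMA; the class, variable, and function names here are invented for illustration.

```python
import threading

class SharedSpace:
    """Toy put/get space keyed by (name, version); a stand-in for an
    RDMA-backed staging area, not the actual coupling framework API."""
    def __init__(self):
        self._data = {}
        self._cv = threading.Condition()

    def put(self, name, version, payload):
        with self._cv:
            self._data[(name, version)] = payload
            self._cv.notify_all()

    def get(self, name, version):  # blocks until that version arrives
        with self._cv:
            self._cv.wait_for(lambda: (name, version) in self._data)
            return self._data[(name, version)]

space = SharedSpace()

def edge_code():                   # producer, runs at its own pace
    for step in range(3):
        space.put("edge_density", step, [0.1 * step] * 4)

producer = threading.Thread(target=edge_code)
producer.start()
for step in range(3):              # consumer, decoupled in time
    print("core received step", step, space.get("edge_density", step))
producer.join()
```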


Journal of Physics: Conference Series | 2009

Whole-volume integrated gyrokinetic simulation of plasma turbulence in realistic diverted-tokamak geometry

C S Chang; S Ku; P Diamond; Mark Adams; Roselyne Barreto; Yang Chen; Julian Cummings; Eduardo F. D'Azevedo; G Dif-Pradalier; Stephane Ethier; Leslie Greengard; T. S. Hahm; F Hinton; David E. Keyes; Scott Klasky; Zhihong Lin; J Lofstead; G Park; Scott E. Parker; Norbert Podhorszki; K Schwan; Arie Shoshani; Deborah Silver; M Wolf; Patrick H. Worley; H Weitzner; E Yoon; Denis Zorin

Performance prediction for ITER is based upon the ubiquitous experimental observation that the plasma energy confinement in the device core is strongly coupled to the edge confinement for reasons that are not yet understood. The coupling time-scale is much shorter than the plasma transport time-scale. In order to understand this critical observation, a multi-scale turbulence-neoclassical simulation of the integrated edge-core plasma in a realistic diverted geometry is a necessity, but has been a formidable task. Thanks to recent developments in high-performance computing, we have succeeded for the first time in an integrated multiscale gyrokinetic simulation of ion-temperature-gradient (ITG) driven turbulence in realistic diverted tokamak geometry. It is found that modification of the self-organized criticality in the core plasma by nonlocal core-edge coupling of ITG turbulence can be responsible for the core-edge confinement coupling.


Journal of Physics: Conference Series | 2007

Coupled simulation of kinetic pedestal growth and MHD ELM crash

G. Y. Park; Julian Cummings; C. S. Chang; Norbert Podhorszki; Scott Klasky; S. Ku; A.Y. Pankin; R Samtaney; Arie Shoshani; P. Snyder; H. Strauss; L. Sugiyama

Edge pedestal height and the accompanying ELM crash are critical elements of ITER physics yet to be understood and predicted through high performance computing. An entirely self-consistent first-principles simulation is being pursued as a long-term research goal, planned for completion in time for ITER operation. However, a proof-of-principle capability has already been established using a computational tool that employs the best first-principles physics available at the present time. A kinetic edge equilibrium code, XGC0, which can simulate the neoclassically dominant pedestal growth from neutral ionization (using a phenomenological residual turbulence diffusion superposed upon the neoclassical particle motion), is coupled to an extended MHD code, M3D, which can perform the nonlinear ELM crash. The stability boundary of the pedestal is checked by an ideal MHD linear peeling-ballooning code, which has been validated against many experimental data sets for the onset boundary of large-scale (type I) ELMs. The coupling workflow and the scientific results it will enable are described.
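The control flow of this coupled workflow is simple to write down. The three functions below are hypothetical stand-ins for the kinetic pedestal code, the linear stability check, and the MHD crash code; the numbers are arbitrary, and only the loop structure reflects the coupling described above.

```python
# Hypothetical stand-ins: only the control flow mirrors the coupling
# in the abstract, not the physics of XGC0 or M3D.
def grow_pedestal(profile):
    return [p + 0.1 for p in profile]         # kinetic pedestal buildup

def peeling_ballooning_unstable(profile):
    return max(profile) > 1.0                 # linear MHD stability proxy

def elm_crash(profile):
    return [0.5 * p for p in profile]         # nonlinear MHD relaxation

profile = [0.2, 0.3, 0.4]
for cycle in range(20):
    profile = grow_pedestal(profile)          # kinetic code advances
    if peeling_ballooning_unstable(profile):  # stability boundary check
        print(f"cycle {cycle}: pedestal unstable, handing off to MHD crash")
        profile = elm_crash(profile)          # MHD code takes over
```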


Journal of Computational Chemistry | 2008

Manager-worker-based model for the parallelization of quantum Monte Carlo on heterogeneous and homogeneous networks

Michael T. Feldmann; Julian Cummings; David R. Kent; Richard P. Muller; William A. Goddard

A manager-worker-based parallelization algorithm for Quantum Monte Carlo (QMC-MW) is presented and compared with the pure iterative parallelization algorithm, which is in common use. The new manager-worker algorithm performs automatic load balancing, allowing it to perform near the theoretical maximal speed even on heterogeneous parallel computers. Furthermore, the new algorithm performs as well as the pure iterative algorithm on homogeneous parallel computers. When combined with the dynamic distributable decorrelation algorithm (DDDA) [Feldmann et al., J Comput Chem 28, 2309 (2007)], the new manager-worker algorithm allows QMC calculations to be terminated at a prespecified level of convergence rather than upon a prespecified number of steps (the common practice). This allows a guaranteed level of precision at the least cost. Additionally, we show (by both analytic derivation and experimental verification) that standard QMC implementations are not "perfectly parallel" as is often claimed.
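The convergence-based termination is the easy part to sketch. Below, batches of toy samples play the role of QMC blocks and a process pool plays the role of workers; the paper's dynamic load balancing across heterogeneous nodes and the DDDA statistics are not reproduced, and all names are illustrative.

```python
import random
import statistics
from concurrent.futures import ProcessPoolExecutor

def sample_batch(seed, batch_size=1000):
    """Stand-in for one worker's block of QMC samples (toy integrand)."""
    rng = random.Random(seed)
    return [rng.gauss(1.0, 0.5) for _ in range(batch_size)]

if __name__ == "__main__":
    target_error, samples, next_seed = 1e-3, [], 0
    with ProcessPoolExecutor(max_workers=4) as pool:
        while True:
            # The manager hands out batches; a faster worker simply gets
            # more batches, which is the load-balancing idea in miniature.
            for batch in pool.map(sample_batch, range(next_seed, next_seed + 4)):
                samples.extend(batch)
            next_seed += 4
            stderr = statistics.stdev(samples) / len(samples) ** 0.5
            if stderr < target_error:   # terminate at requested precision,
                break                   # not at a fixed number of steps
    print(f"estimate: {statistics.mean(samples):.4f} +/- {stderr:.4f}")
```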


Engineering With Computers | 2008

Generic programming techniques for parallelizing and extending procedural finite element programs

Fehmi Cirak; Julian Cummings

We outline an approach for extending procedural finite-element software components using generic programming. A layer of generic software components consisting of C++ containers and algorithms is used for parallelization of the finite-element solver and for solver coupling in multi-physics applications. The advantages of generic programming in connection with finite-element codes are discussed and compared with those of object-oriented programming. The use of the proposed generic programming techniques is demonstrated in a tutorial fashion through basic illustrative examples as well as code excerpts from a large-scale finite-element program for serial and parallel computing platforms.
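The paper's techniques are specific to C++ templates and STL-style containers and algorithms; as a loose analogy in Python (the sketch language used throughout this page), the same separation of algorithm from container looks like the following. The element class and assembly routine are invented for illustration, not taken from the paper's code.

```python
# Loose Python analogy to the generic-programming idea above: the assembly
# algorithm is written against an element "concept" (anything providing
# dofs() and local_matrix()), not against one concrete mesh class.
class Triangle:
    def __init__(self, dofs):
        self._dofs = dofs
    def dofs(self):
        return self._dofs
    def local_matrix(self):
        return [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]

def assemble(elements, n_dofs):
    """Generic global-matrix assembly over any iterable of elements."""
    K = [[0.0] * n_dofs for _ in range(n_dofs)]
    for elem in elements:
        dofs, local = elem.dofs(), elem.local_matrix()
        for a, ga in enumerate(dofs):
            for b, gb in enumerate(dofs):
                K[ga][gb] += local[a][b]
    return K

# Works equally with a list, a generator, or one partition of a
# distributed mesh, which is the reuse the abstract describes.
mesh = [Triangle([0, 1, 2]), Triangle([1, 2, 3])]
K = assemble(mesh, n_dofs=4)
print(K[1][2])  # coupling between degrees of freedom shared by both elements
```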

Collaboration


Dive into Julian Cummings's collaborations.

Top Co-Authors

Scott Klasky (Oak Ridge National Laboratory)

Norbert Podhorszki (Oak Ridge National Laboratory)

S. Ku (Princeton Plasma Physics Laboratory)

C.S. Chang (Courant Institute of Mathematical Sciences)

Scott E. Parker (University of Colorado Boulder)

Arie Shoshani (Lawrence Berkeley National Laboratory)

Choong-Seock Chang (Princeton Plasma Physics Laboratory)