Reuben D. Budiardja
University of Tennessee
Publications
Featured research published by Reuben D. Budiardja.
The Astrophysical Journal | 2012
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Samuel W. Beck; Alborz Bejnood; Ross J. Toedte; Anthony Mezzacappa; John M. Blondin
We extend our investigation of magnetic field evolution in three-dimensional flows driven by the stationary accretion shock instability (SASI) with a suite of higher-resolution idealized models of the post-bounce core-collapse supernova environment. Our magnetohydrodynamic simulations vary in initial magnetic field strength, rotation rate, and grid resolution. Vigorous SASI-driven turbulence inside the shock amplifies magnetic fields exponentially, but while the amplified fields reduce the kinetic energy of small-scale flows, they do not seem to affect the global shock dynamics. The growth rate and final magnitude of the magnetic energy are very sensitive to grid resolution, and both are underestimated by the simulations. Nevertheless, our simulations suggest that neutron star magnetic fields exceeding 10^{14} G can result from dynamics driven by the SASI, even for non-rotating progenitors.
The Astrophysical Journal | 2010
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Anthony Mezzacappa
We begin an exploration of the capacity of the stationary accretion shock instability (SASI) to generate magnetic fields by adding a weak, stationary, and radial (but bipolar) magnetic field, and in some cases rotation, to an initially spherically symmetric fluid configuration that models a stalled shock in the post-bounce supernova environment. In axisymmetric simulations, we find that cycles of latitudinal flows into and radial flows out of the polar regions amplify the field parallel to the symmetry axis, typically increasing the total magnetic energy by about 2 orders of magnitude. Non-axisymmetric calculations result in fundamentally different flows and a larger magnetic energy increase: shearing associated with the SASI spiral mode contributes to a widespread and turbulent field amplification mechanism, boosting the magnetic energy by almost 4 orders of magnitude (a result which remains very sensitive to the spatial resolution of the numerical simulations). While the SASI may contribute to neutron star magnetization, these simulations do not show qualitatively new features in the global evolution of the shock as a result of SASI-induced magnetic field amplification.
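A standard relation connects the energy amplification factors quoted here with the field strengths quoted in the 2012 abstract above; this is textbook MHD bookkeeping, not a result from the papers. Magnetic energy scales as the square of the field strength, so a field growing exponentially at rate \gamma drives energy growth at rate 2\gamma, and k orders of magnitude in magnetic energy correspond to k/2 orders of magnitude in field strength:

    E_B \propto B^2, \qquad B(t) \propto e^{\gamma t} \;\Rightarrow\; E_B(t) \propto e^{2\gamma t}, \qquad \Delta\log_{10} B = \tfrac{1}{2}\,\Delta\log_{10} E_B .

On this accounting, the almost 4 orders of magnitude in magnetic energy reported above amount to roughly a factor of 100 in field strength.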
Computer Physics Communications | 2011
Reuben D. Budiardja; Christian Y. Cardall
We describe an implementation to solve Poisson's equation for an isolated system on a unigrid mesh using FFTs. The method solves the equation globally on mesh blocks distributed across multiple processes on a distributed-memory parallel computer. Test results to demonstrate the convergence and scaling properties of the implementation are presented. The solver is offered to interested users as the library PSPFFT.
Program summary
Program title: PSPFFT
Catalogue identifier: AEJK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 110 243
No. of bytes in distributed program, including test data, etc.: 16 332 181
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Any architecture with a Fortran 95 compiler; distributed-memory clusters
Operating system: Linux, Unix
Has the code been vectorized or parallelized?: Yes, using MPI. An arbitrary number of processors may be used (subject to some constraints). The program has been tested on 1 up to ~13 000 processors.
RAM: Depends on the problem size; approximately 170 MBytes for 48^3 cells per process
Classification: 4.3, 6.5
External routines: MPI (http://www.mcs.anl.gov/mpi/), FFTW (http://www.fftw.org), Silo (https://wci.llnl.gov/codes/silo/) (only necessary for running the test problem)
Nature of problem: Solving Poisson's equation globally on a unigrid mesh distributed across multiple processes on a distributed-memory system
Solution method: Numerical solution using multidimensional discrete Fourier transform in a parallel Fortran 95 code
Unusual features: This code can be compiled as a library to be readily linked and used as a black-box Poisson solver with other codes
Running time: Depends on the size of the problem, but typically less than 1 second per solve
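The solution method is compact enough to sketch. Below is a minimal serial Python/NumPy illustration of an FFT-based Poisson solve with isolated (free-space) boundary conditions via the standard zero-padding (Hockney) technique. This is a sketch of the general method only, not PSPFFT itself, which implements the solve in parallel Fortran 95 with MPI and FFTW; the function name and the softened r = 0 Green's-function value are choices made here for illustration.

    import numpy as np

    # Minimal serial sketch of an FFT-based Poisson solve with isolated
    # (free-space) boundary conditions: solve del^2(phi) = rho by
    # convolving rho with the Green's function G(r) = -1/(4 pi r),
    # zero-padding to twice the mesh size so the circular FFT convolution
    # reproduces the non-periodic result on the physical mesh.
    def poisson_isolated(rho, dx):
        n = rho.shape[0]                  # assumes a cubic n^3 mesh
        m = 2 * n                         # doubled, zero-padded mesh
        # Displacement magnitudes on the padded mesh, in wrap-around
        # order so that index 0 corresponds to zero separation.
        d = np.minimum(np.arange(m), m - np.arange(m)) * dx
        x, y, z = np.meshgrid(d, d, d, indexing="ij")
        r = np.sqrt(x * x + y * y + z * z)
        g = np.empty_like(r)
        g[r > 0] = -1.0 / (4.0 * np.pi * r[r > 0])
        g[0, 0, 0] = -1.0 / (4.0 * np.pi * 0.5 * dx)  # softened r = 0 value
        # Zero-pad the source, convolve via FFTs, keep the physical octant;
        # dx^3 supplies the volume element of the convolution integral.
        rho_pad = np.zeros((m, m, m))
        rho_pad[:n, :n, :n] = rho
        phi = np.fft.irfftn(np.fft.rfftn(rho_pad) * np.fft.rfftn(g), s=(m, m, m))
        return phi[:n, :n, :n] * dx**3

    # Usage: a unit point source at the mesh center; away from the center
    # phi should approach the analytic free-space solution -1/(4 pi r).
    n, dx = 32, 1.0 / 32
    rho = np.zeros((n, n, n))
    rho[n // 2, n // 2, n // 2] = 1.0 / dx**3
    phi = poisson_isolated(rho, dx)

The doubling of the mesh is what realizes the "isolated system" boundary condition: without the padding, FFT convolution would impose periodicity and images of the source would contaminate the potential.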
Astrophysical Journal Supplement Series | 2014
Christian Y. Cardall; Reuben D. Budiardja; Eirik Endeve; Anthony Mezzacappa
GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper, the first in a series, demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
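For orientation on the solver comparison mentioned in the abstract, here is the textbook two-wave HLL flux in a short Python sketch; this is the standard formulation, not code from GenASiS (which is Fortran). HLLC extends HLL by restoring the middle contact wave that the single averaged HLL state smears out, which is why HLLC resolves contact discontinuities and shear more sharply, at some cost in robustness in certain cases.

    # Textbook two-wave HLL flux for a one-dimensional conservation law
    # u_t + f(u)_x = 0; works for scalars or NumPy arrays of conserved
    # variables.  s_l and s_r are estimates of the slowest and fastest
    # signal speeds at the interface.
    def hll_flux(u_l, u_r, f_l, f_r, s_l, s_r):
        if s_l >= 0.0:    # all waves move right: upwind on the left state
            return f_l
        if s_r <= 0.0:    # all waves move left: upwind on the right state
            return f_r
        # Subsonic case: single averaged intermediate ("star") state.
        return (s_r * f_l - s_l * f_r + s_l * s_r * (u_r - u_l)) / (s_r - s_l)

    # Example: linear advection with unit speed, f(u) = u, so both signal
    # speeds are 1 and the flux reduces to pure upwinding on the left state.
    print(hll_flux(1.0, 0.0, 1.0, 0.0, 1.0, 1.0))   # prints 1.0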
international conference on supercomputing | 2014
Mark R. Fahey; Reuben D. Budiardja; Lonnie D. Crosby; Stephen McNally
The University of Tennessee, Knoxville acquired a Cray XC30 supercomputer, called Darter, with a peak performance of 248.9 teraflops. Darter was deployed in late March of 2013 with a very aggressive production timeline: the system was deployed, accepted, and placed into production in only two weeks. The Spring Experiment of the Center for Analysis and Prediction of Storms (CAPS) largely drove the accelerated timeline, as the experiment was scheduled to start in mid-April. The Consortium for Advanced Simulation of Light Water Reactors (CASL) project also needed access and was able to meet its tight deadlines on the newly acquired XC30. Darter's accelerated deployment and operations schedule resulted in substantial scientific impacts within the research community as well as immediate real-world impacts such as early severe-tornado warnings [1].
Proceedings of the Second International Workshop on HPC User Support Tools | 2015
Reuben D. Budiardja; Mark R. Fahey; Robert T. McLay; Prasad Maddumage Don; Bilel Hadri; Doug James
XALT collects accurate, detailed, and continuous job-level and link-time data and stores that data in a database; all of the data collection is transparent to users. The stored data can be mined to generate a picture of the compilers, libraries, and other software that users need to run their jobs successfully, highlighting the products that researchers actually use. We showcase how data collected by XALT can easily be mined into a digestible format by presenting data from four separate HPC centers. XALT is already used by many HPC centers around the world because it usefully complements existing logs and databases; centers with XALT have a much better understanding of library and executable usage patterns. We also present new functionality in XALT, namely the ability to anonymize data, and early work in providing seamless access to provenance data.
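To make the mining concrete, the snippet below runs a usage roll-up against a hypothetical XALT-style SQLite database. The table name "runs" and the columns "executable" and "user" are stand-ins invented for this sketch; consult the XALT documentation for the actual schema, and note that production deployments typically use a full database server rather than SQLite.

    import sqlite3

    # Hypothetical roll-up over an XALT-style database: which executables
    # run most often, and by how many distinct users.  Schema names here
    # are illustrative, not XALT's real tables.
    con = sqlite3.connect("xalt.db")
    query = (
        "SELECT executable, COUNT(*) AS n_runs, "
        "COUNT(DISTINCT user) AS n_users "
        "FROM runs GROUP BY executable ORDER BY n_runs DESC LIMIT 20"
    )
    for executable, n_runs, n_users in con.execute(query):
        print(f"{executable:40s} {n_runs:8d} runs {n_users:6d} users")
    con.close()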
Journal of Physics: Conference Series | 2012
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Anthony Mezzacappa
arXiv: Astrophysics | 2008
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Anthony Mezzacappa
Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale | 2016
Reuben D. Budiardja; Kapil Agrawal; Mark R. Fahey; Robert T. McLay; Doug James
arXiv: Instrumentation and Methods for Astrophysics | 2015
Reuben D. Budiardja; Christian Y. Cardall; Eirik Endeve