Christian Y. Cardall
University of Tennessee
Publications
Featured research published by Christian Y. Cardall.
Astrophysical Journal Supplement Series | 2004
Matthias Liebendörfer; O. E. Bronson Messer; Anthony Mezzacappa; Stephen W. Bruenn; Christian Y. Cardall; F.-K. Thielemann
We present an implicit finite difference representation for general relativistic radiation hydrodynamics in spherical symmetry. Our code, AGILE-BOLTZTRAN, solves the Boltzmann transport equation for the angular and spectral neutrino distribution functions in self-consistent simulations of stellar core collapse and postbounce evolution. It implements a dynamically adaptive grid in comoving coordinates. A comoving frame in the momentum phase space facilitates the evaluation and tabulation of neutrino-matter interaction cross sections but produces a multitude of observer corrections in the transport equation. Most macroscopically interesting physical quantities are defined by expectation values of the distribution function. We optimize the finite differencing of the microscopic transport equation for a consistent evolution of important expectation values. We test our code in simulations launched from progenitor stars of 13 and 40 solar masses. Half a second after core collapse and bounce, the protoneutron star in the latter case reaches its maximum mass and collapses further to form a black hole. When the hydrostatic gravitational contraction sets in, we find a transient increase in electron flavor neutrino luminosities due to a change in the accretion rate. The μ- and τ-neutrino luminosities and rms energies, however, continue to rise because previously shock-heated material with a nondegenerate electron gas starts to replace the cool degenerate material at their production site. We demonstrate this by supplementing the concept of neutrinospheres with a more detailed statistical description of the origin of escaping neutrinos. Adhering to our tradition, we compare the evolution of the 13 M⊙ progenitor star to corresponding simulations with the multigroup flux-limited diffusion (MGFLD) approximation, based on a recently developed flux limiter. We find similar results in the postbounce phase and validate this MGFLD approach for the spherically symmetric case with standard input physics.
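The expectation values mentioned in the abstract can be illustrated schematically. As a sketch for a single neutrino species in spherical symmetry (AGILE-BOLTZTRAN's actual finite-differenced moments differ in detail), the lowest moments of the distribution function f(t, r, μ, ε) are

```latex
\begin{align}
n(t,r)   &= \frac{2\pi}{(hc)^3} \int_0^\infty \int_{-1}^{1}
            f(t,r,\mu,\varepsilon)\, \varepsilon^2 \, d\mu\, d\varepsilon ,
            &&\text{(number density)} \\
F_r(t,r) &= \frac{2\pi c}{(hc)^3} \int_0^\infty \int_{-1}^{1}
            \mu\, f(t,r,\mu,\varepsilon)\, \varepsilon^2 \, d\mu\, d\varepsilon ,
            &&\text{(radial number flux)}
\end{align}
```

with energy moments carrying an extra factor of ε. "Consistent evolution of important expectation values" means the finite differencing is chosen so that the discrete analogues of such moments obey the expected conservation laws.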
The Astrophysical Journal | 2012
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Samuel W. Beck; Alborz Bejnood; Ross J. Toedte; Anthony Mezzacappa; John M. Blondin
We extend our investigation of magnetic field evolution in three-dimensional flows driven by the stationary accretion shock instability (SASI) with a suite of higher-resolution idealized models of the post-bounce core-collapse supernova environment. Our magnetohydrodynamic simulations vary in initial magnetic field strength, rotation rate, and grid resolution. Vigorous SASI-driven turbulence inside the shock amplifies magnetic fields exponentially, but while the amplified fields reduce the kinetic energy of small-scale flows, they do not seem to affect the global shock dynamics. The growth rate and final magnitude of the magnetic energy are very sensitive to grid resolution, and both are underestimated by the simulations. Nevertheless our simulations suggest that neutron star magnetic fields exceeding 10^{14} G can result from dynamics driven by the SASI, even for non-rotating progenitors.
The Astrophysical Journal | 2010
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Anthony Mezzacappa
Physical Review D | 2003
Christian Y. Cardall; Anthony Mezzacappa
Physical Review D | 2005
Christian Y. Cardall; Eric J. Lentz; Anthony Mezzacappa
Computer Physics Communications | 2011
Reuben D. Budiardja; Christian Y. Cardall
Astrophysical Journal Supplement Series | 2014
Christian Y. Cardall; Reuben D. Budiardja; Eirik Endeve; Anthony Mezzacappa
We begin an exploration of the capacity of the stationary accretion shock instability (SASI) to generate magnetic fields by adding a weak, stationary, and radial (but bipolar) magnetic field, and in some cases rotation, to an initially spherically symmetric fluid configuration that models a stalled shock in the post-bounce supernova environment. In axisymmetric simulations, we find that cycles of latitudinal flows into and radial flows out of the polar regions amplify the field parallel to the symmetry axis, typically increasing the total magnetic energy by about 2 orders of magnitude. Non-axisymmetric calculations result in fundamentally different flows and a larger magnetic energy increase: shearing associated with the SASI spiral mode contributes to a widespread and turbulent field amplification mechanism, boosting the magnetic energy by almost 4 orders of magnitude (a result which remains very sensitive to the spatial resolution of the numerical simulations). While the SASI may contribute to neutron star magnetization, these simulations do not show qualitatively new features in the global evolution of the shock as a result of SASI-induced magnetic field amplification.
The Astrophysical Journal | 2001
Jason Pruet; George M. Fuller; Christian Y. Cardall
Experience with core-collapse supernova simulations shows that accurate accounting of total particle number and 4-momentum can be a challenge for computational radiative transfer. This accurate accounting would be facilitated by the use of particle number and 4-momentum transport equations that allow transparent conversion between volume and surface integrals in both configuration and momentum space. Such conservative formulations of general relativistic kinetic theory in multiple spatial dimensions are presented in this paper, and their relevance to core-collapse supernova simulations is described.
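In flat spacetime the conservative structure described above reduces to a familiar schematic (metric determinant factors and momentum-space connection terms, which the general relativistic formulation must treat, are omitted here):

```latex
\begin{align}
N^{\mu} &= \int f\, p^{\mu}\, \frac{d^{3}p}{p^{0}}, &
T^{\mu\nu} &= \int f\, p^{\mu} p^{\nu}\, \frac{d^{3}p}{p^{0}}, \\
\partial_{\mu} N^{\mu} &= \int \mathcal{C}[f]\, \frac{d^{3}p}{p^{0}}, &
\partial_{\mu} T^{\mu\nu} &= \int \mathcal{C}[f]\, p^{\nu}\, \frac{d^{3}p}{p^{0}},
\end{align}
```

where C[f] is the collision integral. Because the left-hand sides are pure divergences, Gauss's theorem converts their volume integrals to surface integrals, which is what makes number and 4-momentum accounting transparent.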
Physica Scripta | 2013
Eirik Endeve; Christian Y. Cardall; R D Budiardja; Anthony Mezzacappa; John M. Blondin
Many astrophysical phenomena exhibit relativistic radiative flows. While velocities in excess of v ≈ 0.1c can occur in these systems, it has been common practice to approximate radiative transfer to O(v/c). In the case of neutrino transport in core-collapse supernovae, this approximation gives rise to an inconsistency between lepton number transfer and lab-frame energy transfer, which have different O(v/c) limits. A solution used in spherically symmetric O(v/c) simulations has been to retain, for energy accounting purposes, the O(v^2/c^2) terms in the lab-frame energy transfer equation that arise from the O(v/c) neutrino number transport equation. Avoiding the proliferation of such extra O(v^2/c^2) terms in the absence of spherical symmetry motivates a special relativistic formalism, which we exhibit in coordinates sufficiently general to encompass Cartesian, spherical, and cylindrical coordinate systems.
arXiv: Astrophysics | 2008
Eirik Endeve; Christian Y. Cardall; Reuben D. Budiardja; Anthony Mezzacappa
Abstract: We describe an implementation to solve Poisson's equation for an isolated system on a unigrid mesh using FFTs. The method solves the equation globally on mesh blocks distributed across multiple processes on a distributed-memory parallel computer. Test results demonstrating the convergence and scaling properties of the implementation are presented. The solver is offered to interested users as the library PSPFFT.
Program summary
Program title: PSPFFT
Catalogue identifier: AEJK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 110 243
No. of bytes in distributed program, including test data, etc.: 16 332 181
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Any architecture with a Fortran 95 compiler; distributed-memory clusters
Operating system: Linux, Unix
Has the code been vectorized or parallelized?: Yes, using MPI. An arbitrary number of processors may be used (subject to some constraints). The program has been tested on 1 up to ~13,000 processors.
RAM: Depends on the problem size; approximately 170 MBytes for 48^3 cells per process.
Classification: 4.3, 6.5
External routines: MPI (http://www.mcs.anl.gov/mpi/), FFTW (http://www.fftw.org), Silo (https://wci.llnl.gov/codes/silo/; only necessary for running the test problem)
Nature of problem: Solving Poisson's equation globally on a unigrid mesh distributed across multiple processes on a distributed-memory system.
Solution method: Numerical solution using the multidimensional discrete Fourier transform in a parallel Fortran 95 code.
Unusual features: The code can be compiled as a library to be readily linked and used as a black-box Poisson solver with other codes.
Running time: Depends on the size of the problem, but typically less than 1 second per solve.
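The FFT approach to an isolated (non-periodic) Poisson problem can be sketched in a few lines. The following is a minimal serial illustration in NumPy using Hockney-style zero padding to impose free-space boundary conditions; it is not PSPFFT's distributed Fortran 95 implementation, and the function name and self-cell regularization are our own choices.

```python
import numpy as np

def poisson_isolated(rho, h):
    """Solve  del^2 phi = rho  with isolated (free-space) boundaries by
    convolving rho with the Green's function G(r) = -1/(4 pi r) via FFTs
    on a doubled domain (Hockney's zero-padding trick)."""
    n = rho.shape[0]          # assume a cubic n^3 mesh with spacing h
    m = 2 * n                 # doubling the domain removes periodic images
    # Tabulate the Green's function on the doubled grid, using the
    # wrapped distance so circular convolution acts like linear convolution.
    idx = np.arange(m)
    d = np.minimum(idx, m - idx) * h
    x, y, z = np.meshgrid(d, d, d, indexing='ij')
    r = np.sqrt(x**2 + y**2 + z**2)
    G = np.zeros_like(r)
    np.divide(-1.0, 4.0 * np.pi * r, out=G, where=(r > 0))
    G[0, 0, 0] = -1.0 / (4.0 * np.pi * h)   # self-cell regularization (a common choice)
    # Zero-pad the source and convolve via real-to-complex FFTs.
    rho_pad = np.zeros((m, m, m))
    rho_pad[:n, :n, :n] = rho
    phi = np.fft.irfftn(np.fft.rfftn(rho_pad) * np.fft.rfftn(G), s=(m, m, m))
    return phi[:n, :n, :n] * h**3           # volume element of the convolution sum
```

For a unit point source at the origin this reproduces phi = -1/(4 pi r) at the mesh points. PSPFFT follows the same mathematical idea but distributes the mesh blocks and the transforms across MPI processes.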