Towards fidelity and scalability in non-vacuum mergers
Steven L. Liebling (Long Island University, Brookville, New York 11548, USA), Carlos Palenzuela (Departament de Física & IAC3, Universitat de les Illes Balears, Palma de Mallorca, Baleares E-07122, Spain), and Luis Lehner (Perimeter Institute, 31 Caroline St, Waterloo, ON N2L 2Y5, Canada)

(Dated: February 19, 2020)

We study the evolution of two fiducial configurations for binary neutron stars using two different general relativistic hydrodynamics (GRHD), distributed adaptive mesh codes. One code, Had, has for many years been used to study mergers of compact object binaries, while a new code, MHDuet, has been recently developed with the experience gained from the older one as well as several novel features for scalability improvements. As such, we examine the performance of each, placing particular focus on future requirements for the extraction of gravitational wave signatures of non-vacuum binaries.
I. INTRODUCTION
The merger of a binary neutron star (BNS) system produces a wide range of potentially observable signals and hence provides an ideal event for studying high energy astrophysics through gravitational and electromagnetic waves as well as (for sufficiently close systems) neutrinos. Recently, the very first merger of such a system, GW170817, was observed both by LIGO and conventional telescopes. Much of our understanding of this system arises from comparing these observations with the results of numerical codes simulating such a system. A second binary neutron star event, labeled GW190425, was detected during O3, but only through gravitational waves due to its greater distance from the Earth. A large fraction of future BNS observations will only be detectable gravitationally, which underscores the need for a thorough understanding of "golden" events, observed through multiple messengers, so that this knowledge can be exploited in the rest of them.

Numerical simulations have been providing an ever more detailed description of non-vacuum binaries and observational opportunities, though the complexity of the physics involved is such that, while qualitative features are under control (prior to merger), improvements at the quantitative level are still required. Moreover, as the signal-to-noise ratio of detectors continues to improve (in current and future third-generation detectors), increased accuracy will be expected from theoretically constructed waveforms and their ability to reveal the behavior of dense matter in strongly gravitating and dynamical regimes.

A number of codes capable of evolving a BNS through merger with fully nonlinear general relativity exist (for reviews see Refs. [1, 2]). Such codes are generally written for distribution among a large number of compute nodes and with the ability to refine particular spatial regions at certain times. These implementations are called distributed adaptive mesh refinement (AMR) codes, and one such code, Had, has been used extensively by the authors to model compact object mergers (involving black holes, neutron stars, and boson stars) within general relativity and extensions to it. A number of efforts are currently underway to improve scalability and take advantage of massively parallel infrastructures, the overarching goal being to improve the study of relevant systems by exploiting advances (and related challenges) in hardware. For instance, work is already underway with discontinuous Galerkin methods [3, 4] and early explorations of spectral methods for discontinuous solutions [5]. Other potential improvements have also been explored through possible GPU usage [6], wavelet adaptivity [7], etc. Clearly, there is fertile ground at several levels for considerable gains, though much effort lies ahead to converge to the ideal set of options.

Recently, we have developed a new code, MHDuet, that builds on the experience gained with Had but which has a number of important new features and improvements. Most importantly, this code leverages the parallel infrastructure Samrai, developed at Lawrence Livermore National Laboratory, to enable the large scale, high resolution simulations necessary for the GW astrophysics of non-vacuum mergers to firmly move into the quantitative regime. In this work, we compare the results of these two codes for two particular situations to assess the relative accuracy of the obtained solutions and discuss the scalability of the new implementation.

The modeling of neutron stars requires the choice of an equation of state (EOS) that describes certain properties of the fluid constituting the star and serves to close the equations of motion. For this comparison, we consider two particular EOSs compatible with current constraints. In particular, we adopt the so-called SLy EOS, which is soft and relatively compressible. The second one, MS1b, is by contrast quite stiff. These EOSs have been adopted by various groups, including for example Ref. [8] in the context of hybridization and Ref. [9] which discussed errors and convergence. We note that while the MS1b EOS is under some pressure from EOS inferences based on gravitational wave observations [10-12], it nevertheless serves as a useful testbed for our purposes.

II. SETUP
We describe briefly here the evolution equations and the numerical schemes employed by the two codes of our comparison. The Einstein equations are written as evolution equations using the 3+1 decomposition of the CCZ4 formalism [13, 14] with constraint damping parameters κ_z and κ_c (the latter set to 1). We also use the standard 1+log and Gamma-freezing gauge conditions [14] with η = 1.5. All these damping parameters are constant in the near zone up to r = 60 km (i.e., just beyond the region occupied by the stars) and then decay as 1/r. The magnetohydrodynamical (MHD) equations describing the fluid are written as a system of conservation laws [15, 16], namely

    ∂_t U + ∂_k F^k(U) = S(U)    (1)

where U is the vector of evolved fields and F^k(U) their corresponding fluxes. These fluxes are non-linear, but they depend only on the fields themselves and not on their derivatives. Although the full MHD system is implemented in both codes, the studies presented here set the initial magnetic field to zero.
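For concreteness, the hydrodynamic sector of Eq. (1) can be sketched in the special-relativistic (flat-space) limit for an ideal fluid; this is only an illustration, as the codes evolve the curved-spacetime generalization, which carries additional metric factors:

    U = (D, S_i, τ),   D = ρ W,   S_i = ρ h W² v_i,   τ = ρ h W² - p - D,

    W = 1 / sqrt(1 - v²),   h = 1 + ε + p/ρ,

where ρ is the rest-mass density, v_i the fluid velocity, ε the specific internal energy, p the pressure, W the Lorentz factor, and h the specific enthalpy.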
A. Numerical schemes

In both codes the discretization of the continuum equations is performed using the Method of Lines, which separates the time and space discretizations, on a regular Cartesian grid. The spatial discretization for the evolution of the spacetime fields is performed using fourth-order accurate spatial difference operators satisfying summation by parts [17], which are based on Taylor expansions and are therefore suitable for smooth solutions. The high-frequency modes, which cannot be accurately resolved by our numerical grids, are damped by applying an artificial sixth-order Kreiss-Oliger (KO) dissipation operator with a small coefficient σ.

The fluid fields, by contrast, can develop discontinuities and are discretized with High-Resolution Shock-Capturing (HRSC) methods in semi-discrete, flux-conservative form,

    ∂_t U_i = - ( F^x_{i+1/2} - F^x_{i-1/2} ) / Δx + S_i

where F^x_{i±1/2} are the set of fluxes along the x-direction evaluated at the interfaces between two neighboring cells, located at x_{i±1/2}. The crucial issue when dealing with shocks is how to approximately solve the Riemann problem, by reconstructing the fluxes at the interfaces such that no spurious oscillations appear in the solutions. The differences between finite-volume and finite-difference approaches arise in the method of computation of these fluxes, and are summarized below in the description of the codes.

An important feature of relativistic fluids is that the evolved or conserved fields are different from the physical or primitive ones appearing in the fluxes and sources. This relation is highly non-linear, and the recovery of primitive fields from the conserved ones involves a numerical root solver, which is one of the most delicate parts of the code. For this comparison to be meaningful, we have implemented the same algorithm for the recovery in both codes, similar to the one presented in [22]. Another delicate issue is how to deal with the regions outside the stars, usually filled with a low density atmosphere. Here too, the two codes are very similar in the way in which floor values and certain energy conditions are enforced on the conserved fields.

The time integration of the resulting semi-discrete equations is performed by using either a 3rd or 4th order Runge-Kutta scheme [23], which ensures the stability and convergence of the solution. We adopt a Courant parameter λ_c, defined as the ratio between the timestep and the grid size, such that Δt_l = λ_c Δx_l on each refinement level l, to guarantee that the Courant-Friedrichs-Lewy (CFL) condition is satisfied.

We next discuss details particular to each of the two codes compared here.
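To make the recovery step concrete, the following is a minimal sketch (not the codes' production routine, which follows [22]) of a fixed-point iteration on the pressure that recovers the primitive variables from the conserved ones, assuming a special-relativistic ideal-gas fluid in one dimension; all names are illustrative.

```python
import numpy as np

def cons2prim(D, S, tau, gamma=2.0, tol=1e-12, max_iter=100):
    """Recover primitives (rho, v, eps, p) from conserved (D, S, tau)
    for a special-relativistic perfect fluid with ideal-gas EOS,
    p = (gamma - 1) * rho * eps.  One-dimensional flow for simplicity;
    an illustrative sketch, not the production algorithm."""
    p = max((gamma - 1.0) * tau, 1e-16)   # initial guess for the pressure
    for _ in range(max_iter):
        # With a pressure guess, invert the algebraic definitions:
        v = S / (tau + D + p)             # velocity from the momentum
        W = 1.0 / np.sqrt(1.0 - v * v)    # Lorentz factor
        rho = D / W                       # rest-mass density
        eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
        p_new = (gamma - 1.0) * rho * eps # EOS closes the system
        if abs(p_new - p) < tol * abs(p_new):
            return rho, v, eps, p_new
        p = p_new
    raise RuntimeError("primitive recovery failed to converge")
```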
Had

The relativistic hydrodynamics equations are discretized in space by using an HRSC method based on finite volumes. In particular, the fields at the interfaces are computed with a piecewise parabolic reconstruction [24], and then the fluxes at the interfaces are obtained from the Harten-Lax-van Leer-Einfeldt (HLLE) flux formula [20, 25]. The time integration is performed with a third order Strong-Stability-Preserving Runge-Kutta scheme [23, 26] with a CFL factor λ_c = 0.25, such that the Total Variation is preserved.

To ensure sufficient resolution, we employ AMR via the Had computational infrastructure, which provides distributed, Berger-Oliger style AMR [27, 28] with full sub-cycling in time, together with the tapered treatment of artificial boundaries [29]. More details of the Had code can be found in recent studies of BNS mergers performed with it [22, 30, 31].
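As a minimal sketch of this finite-volume flux step (names are illustrative; the production implementation handles the full GRMHD state), the standard HLLE formula [25] combines the reconstructed left and right states at an interface with bounds on the characteristic speeds:

```python
def hlle_flux(U_L, U_R, F_L, F_R, lam_minus, lam_plus):
    """HLLE flux at a cell interface.

    U_L, U_R : conserved states reconstructed at the left/right of the interface
    F_L, F_R : physical fluxes evaluated from those states
    lam_minus, lam_plus : lower/upper bounds on the characteristic speeds
    (for relativistic MHD these come from the fast magnetosonic eigenvalues).
    Illustrative sketch of the standard HLLE formula."""
    lm = min(lam_minus, 0.0)   # only left-going speeds matter on the left
    lp = max(lam_plus, 0.0)    # only right-going speeds matter on the right
    if lp == lm:               # degenerate case: no wave fan at all
        return F_L
    return (lp * F_L - lm * F_R + lp * lm * (U_R - U_L)) / (lp - lm)
```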
MHDuet

The code MHDuet has been automatically generated by using Simflowny [32, 33], an open-source platform to easily implement scientific dynamical models by means of a domain specific language, to run under the Samrai infrastructure [34, 35], which provides parallelization and adaptive mesh refinement. We incorporated a novel AMR boundary treatment that uses an internal dense output interpolator with the information from all the RK sub-steps on the coarse grid to accurately compute the interpolated solution on the fine one [36, 37]. This algorithm (i.e., Berger-Oliger without order reduction) is not only accurate, but also fast and efficient because it uses minimal bandwidth when compared with the tapered approach of Had. (The tapered approach rigorously ensures no loss of accuracy/convergence for smooth solutions, respecting that of the combination of time-integrator and spatial derivative operators, though with significant overhead and at a consequent cost in efficiency. The approach in MHDuet ensures 4th order accuracy for smooth solutions.) Moreover, we have extended the algorithm to allow arbitrary resolution ratios between consecutive AMR grids. The combination of this algorithm with the efficient parallelization provided by Samrai allows us to scale at least up to 10K processes in binary black hole simulations, even with high order discrete operators [19]. Since a GRMHD code involves a higher computational load per grid-point, scaling in the non-vacuum case is further enhanced.

We have also implemented high-order finite-difference HRSC methods to solve the fluid fields, similar to those in [9, 38]. The flux formula used consists of a Lax-Friedrichs splitting [21], which combines the fluxes and the fields at each node i as follows:

    F±_i = (1/2) ( F_i ± λ U_i )    (2)

where λ is the maximum propagation speed of the system in the neighboring points. Then, we reconstruct the fluxes at each interface using the values {F±} from the neighboring nodes. MHDuet already incorporates some commonly used reconstructions, like the Weighted-Essentially-Non-Oscillatory (WENO) reconstructions [21, 39] and MP5 [40], as well as other implementations like the FDOC families [41]. For the numerical simulations of binary neutron stars presented below we use MP5, which is our preferred choice for its robustness and ability to preserve sharp profiles.
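A minimal sketch of how Eq. (2) produces interface fluxes in one dimension is shown below; for clarity a first-order upwind choice stands in for the MP5/WENO reconstructions actually used, and all names are illustrative.

```python
def interface_flux_lf_split(F, U, lam):
    """Lax-Friedrichs flux-split interface fluxes, cf. Eq. (2).

    F, U : arrays of nodal fluxes and conserved fields, shape (N,)
    lam  : maximum characteristic speed over the neighboring points

    Returns F_half, with F_half[i] approximating the flux at x_{i+1/2}.
    A first-order upwind 'reconstruction' is used here for brevity;
    MHDuet instead applies high-order MP5/WENO to F_plus and F_minus."""
    F_plus = 0.5 * (F + lam * U)    # right-going part, taken from the left
    F_minus = 0.5 * (F - lam * U)   # left-going part, taken from the right
    # Interface i+1/2 combines F_plus at node i with F_minus at node i+1:
    return F_plus[:-1] + F_minus[1:]
```

The semi-discrete update then follows the conservative form given above, ∂_t U_i = -(F_{i+1/2} - F_{i-1/2})/Δx + S_i.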
The time integration is performed with the classical fourth order Runge-Kutta scheme, which has only been shown to preserve the Total Variation under certain conditions and for sufficiently small values of the Courant factor. By default we use λ_c = 0.4, although we also perform a simulation with λ_c = 0.25 to illustrate the consistency and accuracy of the solutions obtained with this code and also to compare with results obtained using Had.

B. Initial data and EOS
We consider the coalescence of equal mass neutron star binaries in a quasi-circular orbit configuration. The initial data is created with Lorene using two different realistic EOSs at zero temperature, fitted as piecewise polytropes [42]. The parameters of these EOSs, commonly known as SLy and MS1b, are summarized in Table I. During the evolution we employ a hybrid EOS, where the zero temperature effects are implemented with the piecewise polytrope, while thermal effects are modeled by an additional pressure contribution given by the ideal gas EOS with Γ_th = 1.75. The total mass M is the same for the two binaries with different EOSs, as are the initial separation d = 52.42 km and the initial angular frequency Ω = 1428 rad/s.

TABLE I: Characterization of the two different EOSs used in this work. Each EOS is defined as a piecewise polytrope with n = 4 segments, with the lowest density (crust) segment common to both EOSs; the dividing densities ρ_i are expressed in cgs units.

    EOS    Γ_1    Γ_2    Γ_3    log ρ_1       log ρ_2   log ρ_3
    SLy    3.005  2.988  2.851  14.165        14.7      15.0
    MS1b   3.456  3.011  1.425  14.05556938   14.7      15.0
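As a sketch of how the Table I parameters define the cold pressure, the constants K_i follow from demanding continuity at the dividing densities; the crust segment below (Gamma and K1) is an illustrative placeholder, not the values used in the paper.

```python
import numpy as np

# Piecewise polytrope: p = K_i * rho**Gamma_i on each segment, with K_i
# fixed by continuity of p at the dividing densities.  Gamma_1..Gamma_3
# and log10(rho_i) follow Table I (SLy); the crust values are assumed.
GAMMAS = [1.36, 3.005, 2.988, 2.851]        # crust Gamma: placeholder
RHO_DIV = [10**14.165, 10**14.7, 10**15.0]  # dividing densities (cgs)
K1 = 3.6e13                                 # crust constant: placeholder (cgs)

def build_Ks(K1, gammas, rho_div):
    """Continuity at rho_div_i gives K_{i+1} = K_i * rho_div_i**(G_i - G_{i+1})."""
    Ks = [K1]
    for i, rd in enumerate(rho_div):
        Ks.append(Ks[-1] * rd ** (gammas[i] - gammas[i + 1]))
    return Ks

KS = build_Ks(K1, GAMMAS, RHO_DIV)

def cold_pressure(rho):
    """Cold pressure at rest-mass density rho (cgs units)."""
    i = np.searchsorted(RHO_DIV, rho)       # which segment rho falls in
    return KS[i] * rho ** GAMMAS[i]
```

During the evolution, the thermal contribution with Γ_th = 1.75 would be added to this cold pressure.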
C. Analysis
For each of the simulations, we calculate a number of quantities as described here. We compute a retarded time u in terms of the tortoise coordinate r* as

    u = t - r*                           (3)
    r* = R + 2M log[ R/(2M) - 1 ]        (4)
    R = r ( 1 + M/(2r) )²                (5)

where R is the Schwarzschild (areal) radius and r the isotropic radius. In terms of our Cartesian coordinates, the isotropic radius is simply r = sqrt(x² + y² + z²).
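A direct transcription of Eqs. (3)-(5), in geometrized units G = c = 1 and with illustrative names:

```python
import numpy as np

def retarded_time(t, x, y, z, M):
    """Retarded time u = t - r*, Eqs. (3)-(5), geometrized units.

    The isotropic radius r is converted to the Schwarzschild (areal)
    radius R, from which the tortoise coordinate r* follows."""
    r = np.sqrt(x**2 + y**2 + z**2)         # isotropic radius
    R = r * (1.0 + M / (2.0 * r))**2        # Schwarzschild radius
    r_star = R + 2.0 * M * np.log(R / (2.0 * M) - 1.0)
    return t - r_star
```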
The dominant mode of the gravitational wave strain h is presented as

    r h = A e^{-iφ}    (6)

in terms of which the gravitational wave angular frequency is defined as

    ω = dφ/dt = -Im( ḣ / h ).    (7)
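A minimal way to extract the phase and frequency from a sampled complex strain (array names illustrative):

```python
import numpy as np

def phase_and_frequency(t, h):
    """Phase phi and GW angular frequency omega from a complex strain h(t).

    With h = A exp(-i phi), Eq. (6), the continuous phase is minus the
    unwrapped complex argument, and omega = dphi/dt, Eq. (7), is obtained
    here by finite differencing rather than via -Im(hdot/h)."""
    phi = -np.unwrap(np.angle(h))    # continuous accumulated phase
    omega = np.gradient(phi, t)      # omega = dphi/dt
    return phi, omega
```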
III. RESULTS

Let us first overview the numerical setup and the results of the numerical simulations. Each binary was evolved within Had at two different resolutions, as shown in Table II. The number of levels and most AMR parameters are identical for both runs, but the higher resolution run adopted a resolution 50% higher than, and a threshold for refinement half that of, the medium resolution run. The total number of levels of refinement for all these runs was five, with the finest resolution achieved being Δx = 0.25 (in code units) = 0.38 km.

As also shown in Table II, the MHDuet runs differed only in their overall resolution, with a 25% increase between consecutive resolutions. These simulations use five refinement levels, achieving a finest resolution Δx = 160 m.

TABLE II: Details of the various simulations presented here. For each simulation, the name of the code used, the name of the EOS characterizing the fluid, the name we use to refer to the simulation, the number of levels, the number of points in each direction for the coarse level, the finest grid spacing achieved (code units), and finally the CFL ratio of the timestep to grid spacing are displayed.

    Code     EOS    Name         Levels   N     Δx      Δt/Δx
    Had      SLy    low (L)      5        81    0.37    0.25
    Had      SLy    medium (M)   5        121   0.25    0.25
    Had      MS1b   low (L)      5        81    0.37    0.25
    Had      MS1b   medium (M)   5        121   0.25    0.25
    MHDuet   SLy    low (L)      5        154   0.208   0.25
    MHDuet   SLy    low (L)      5        154   0.208   0.4
    MHDuet   SLy    medium (M)   5        192   0.167   0.4
    MHDuet   SLy    high (H)     5        240   0.134   0.4
    MHDuet   SLy    finest (F)   5        300   0.106   0.4
    MHDuet   MS1b   low (L)      5        154   0.208   0.4
    MHDuet   MS1b   medium (M)   5        192   0.167   0.4
    MHDuet   MS1b   high (H)     5        240   0.134   0.4
Results from the Had runs are shown in Fig. 1. Apparent in the figure, the evolutions agree at early times, but the medium resolution merges a bit earlier than the low resolution for both EOSs.

The MHDuet runs were similar to the Had runs in that the SLy case appears to be closer to the convergent regime than the MS1b case, which clearly requires higher resolutions, as shown in Fig. 2.

A direct comparison of the highest resolution simulation available for each binary and for each code can be found in Fig. 3. Clearly, the deviations increase as the binary proceeds along its orbit, being more significant for the MS1b EOS. Note though that the MHDuet runs generally have higher resolution than the Had runs.

Because we have four different resolutions for the SLy case with the MHDuet code, we can study in detail how these errors behave. In Fig. 4 we display in the top panel the waveform for the different resolutions. The middle panel displays the difference in phase between the different resolutions. Clearly, it converges to zero rapidly. The bottom panel shows that, for the lowest resolution triad, it converges roughly to fourth order, while it converges at a faster rate for the highest resolution one. This convergence was measured only with the phase of a radiative signal, but nevertheless such high order convergence is not typically observed in BNS simulations (for reference see, e.g., [43, 44]). The high convergence order observed in the wave zone can be attributed to a couple of factors that go along with the use of high order methods: the use of adaptive re-gridding designed to satisfy an error threshold, and the fact that the fluid discontinuities are not volume filling.

Assuming a constant convergence factor of 4, we can calculate the Richardson extrapolation of the GW phase. By subtracting this phase from each of the phases obtained in our simulations, we get another measure of the phase errors. As we display in Fig. 5, these errors for such long waveforms are of the order of 1 radian over 12 cycles, although they remain below 0.1 radians for most of the coalescence and only increase in the last 2-3 orbits.

We can also analyze the various radiative quantities that are extracted on a spherical surface. We extract on three spherical surfaces at different radii (labeled r = 200, 300, and 400 in Fig. 6), and plot those results for the finest resolution SLy run in Fig. 6. That these quantities largely agree gives some indication that their extraction is correct. We can make a quantitative statement by extrapolating the waveforms to infinity and subtracting each of the signals obtained on each surface. This measure indicates that the errors due to the finite size of the extraction radii are much smaller than the discretization errors.

To examine the errors associated with our use of a large CFL factor λ_c = 0.4, we also evolve the low resolution SLy binary with λ_c = 0.25 with the MHDuet code and compare the results in Fig. 7. The phase difference between the two simulations suggests a small error until merger, as compared to the phase errors between different resolutions displayed in Fig. 4. This comparison further suggests that accurate (and efficient) simulations can be performed with the MHDuet code using such a large CFL factor.
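The convergence order and Richardson extrapolation just described can be computed directly from the phases. A minimal sketch, assuming phase arrays sampled at a common set of retarded times and the fixed 25% resolution step quoted above (names illustrative):

```python
import numpy as np

def convergence_order(phi_L, phi_M, phi_H, ratio=1.25):
    """Estimate the convergence order from phases at three resolutions
    whose grid spacings differ by a constant factor 'ratio'.

    Solves |phi_L - phi_M| / |phi_M - phi_H| = ratio**p for p."""
    num = np.abs(phi_L - phi_M)
    den = np.abs(phi_M - phi_H)
    return np.log(num / den) / np.log(ratio)

def richardson_phase(phi_M, phi_H, ratio=1.25, p=4):
    """Richardson extrapolation of the phase assuming fixed order p.

    With dx_M = ratio * dx_H, the leading error term cancels via
    phi_inf = phi_H + (phi_H - phi_M) / (ratio**p - 1)."""
    return phi_H + (phi_H - phi_M) / (ratio**p - 1.0)
```

Note that, as in the bottom panel of Fig. 4, the estimate becomes noisy wherever the phase differences are too small to be meaningful.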
FIG. 1: Had evolutions for both EOSs and different resolutions. The GW strain is shown in the top frame, the GW frequency is shown in the middle, and the GW phase is shown at bottom.

FIG. 2: MHDuet evolutions for both EOSs and different resolutions. Between any two runs, the resolution increases by 25%. Quantities shown are the same ones shown in Fig. 1.

FIG. 3: Strains for the highest resolution runs of the two different codes and two different EOSs.

FIG. 4: Errors in phase between the different resolutions for the SLy MHDuet runs. Top: The strains for the four different resolutions. Middle: Differences in phase between successive resolutions. These differences decrease toward zero as resolution increases, indicating convergence. Bottom: The convergence order estimated with just the phase differences, indicating high order convergence. The early significant variations are a consequence of the differences being too small.

FIG. 5: Phase errors relative to the Richardson extrapolated phase for the MHDuet SLy case. The differences between the extrapolated phase and the various resolutions are shown. That they decrease with resolution is another indication of convergence.

FIG. 6: Results from the three different extraction surfaces for the finest SLy run. The top panel shows the strain and the bottom panel shows the differences between the phase obtained on each surface and the phase obtained by extrapolating to infinite radius with the outermost two surfaces.
IV. SUMMARY
We have presented binary neutron star simulations using two different EOSs to compare two codes implemented in separate computational infrastructures. These codes solve the same evolution equations for both the Einstein and the MHD fields, and share the numerical schemes for smooth solutions. However, they implement different High-Resolution Shock-Capturing methods: while Had uses an approach strongly inspired by finite-volume methods, where the fields are reconstructed at the interfaces and a flux formula is used for calculating the fluxes at such interfaces, MHDuet relies on finite-difference methods, which involve a high order reconstruction of a combination of fluxes and fields (i.e., Lax-Friedrichs flux splitting). Another important difference is the treatment of the AMR boundaries, which is more efficient in MHDuet due to the use of a minimum bandwidth. Finally, MHDuet is built using the public infrastructure Samrai, which has been developed for a number of years and which can reach the exascale, at least for some choice of problems. For our main purposes, that of compact binary simulations, we have achieved good scaling up to 10,000 processes with our high-order schemes (see Fig. 8 for weak scaling results with up to 1,500 processes with the GRMHD code, and Fig. 15 of [19]).

FIG. 7: Phase error for the MHDuet SLy low resolution runs with two different CFL ratios. That the phase difference (shown in the bottom panel) is small, compared to Fig. 4, indicates that the use of a CFL factor of 0.40 is not causing a large error in the solution. The ability to run with a high CFL factor is itself an efficient aspect of the MHDuet code.

FIG. 8: Weak scaling test of the MHDuet code on stampede2 and comet. This run used fixed meshes with two levels. More recent tests have indicated (see Figure 15 of [19]) that scaling efficiencies of 80% extend to at least 10,000 processes.

The results presented here show several features. First, we have shown that the solutions of both codes are consistent and agree up to a certain level. A more detailed analysis of one of these cases with MHDuet indicates that the solution roughly converges to fourth order. Richardson extrapolation allows us to estimate the errors in the phase, which remain below 0.2 radians for most of the coalescence for the highest resolution case with Δx = 160 m and only increase to order unity in the last few orbits.

Another important technical result is that the speed and efficiency of the MHDuet code are better than those of Had, being able to achieve speeds of order M⊙/hour when running on 500-1000 processors for the highest resolution simulations of binary neutron stars presented here. This speed is roughly 4 times faster than the one reached by Had, which is moreover running at a somewhat coarser resolution. In particular, MHDuet evolves roughly 85 ms of physical time per month running on 192 processors with a minimum resolution of Δx = 0.21, while Had takes approximately 20 ms per month on 256 processors with an even slightly coarser minimum resolution of Δx = 0.25.

Looking ahead, further physics remains to be added to MHDuet: in particular, the ability to handle tabulated equations of state, an approximate neutrino treatment, and resistive magnetohydrodynamics, all of which are already available in Had. These physics ingredients are gradually being incorporated in MHDuet. Certainly, the results presented here are encouraging for this enterprise.
Acknowledgments
We thank Will East and David Neilsen for discussions during this work. This work was supported by the NSF under grants PHY-1827573 and PHY-1912769. CP acknowledges support from the Spanish Ministry of Economy and Competitiveness grant AYA2016-80289-P (AEI/FEDER, UE). LL was supported in part by NSERC through a Discovery Grant, and CIFAR. Computations were performed at XSEDE, MareNostrum, and the Niagara supercomputer at the SciNet HPC Consortium. Computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center were obtained thanks to time granted through the 17th PRACE regular call (project Tier-0 GEEFBNSM, P.I. CP). SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. Research at Perimeter Institute is supported by the Government of Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.

[1] M. Shibata and K. Hotokezaka, "Merger and mass ejection of neutron star binaries," Annual Review of Nuclear and Particle Science no. 1, (2019) 41-64. https://doi.org/10.1146/annurev-nucl-101918-023625.
[2] D. Radice, S. Bernuzzi, and A. Perego, "The Dynamics of Binary Neutron Star Mergers and of GW170817," arXiv:2002.03863 [astro-ph.HE].
[3] J. M. Miller and E. Schnetter, "An operator-based local discontinuous Galerkin method compatible with the BSSN formulation of the Einstein equations," Class. Quant. Grav. no. 1, (2017) 015003, arXiv:1604.00075 [gr-qc].
[4] F. Hébert, L. E. Kidder, and S. A. Teukolsky, "General-relativistic neutron star evolutions with the discontinuous Galerkin method," Phys. Rev. D98 no. 4, (2018) 044041, arXiv:1804.02003 [gr-qc].
[5] J. Piotrowska, J. M. Miller, and E. Schnetter, "Spectral Methods in the Presence of Discontinuities," J. Comput. Phys. (2019) 527-547, arXiv:1712.09952 [cs.NA].
[6] A. G. M. Lewis and H. P. Pfeiffer, "GPU-accelerated simulations of isolated black holes," Class. Quant. Grav. no. 9, (2018) 095017, arXiv:1804.09101 [gr-qc].
[7] M. Fernando, D. Neilsen, H. Lim, E. Hirschmann, and H. Sundar, "Massively Parallel Simulations of Binary Black Hole Intermediate-Mass-Ratio Inspirals," arXiv:1807.06128 [gr-qc].
[8] R. Dudi, F. Pannarale, T. Dietrich, M. Hannam, S. Bernuzzi, F. Ohme, and B. Bruegmann, "Relevance of tidal effects and post-merger dynamics for binary neutron star parameter estimation," Phys. Rev. D98 no. 8, (2018) 084061, arXiv:1808.09749 [gr-qc].
[9] S. Bernuzzi and T. Dietrich, "Gravitational waveforms from binary neutron star mergers with high-order weighted-essentially-nonoscillatory schemes in numerical relativity," Phys. Rev. D94 no. 6, (2016) 064062, arXiv:1604.07999 [gr-qc].
[10] LIGO Scientific, Virgo Collaboration, B. P. Abbott et al., "Properties of the binary neutron star merger GW170817," Phys. Rev. X9 no. 1, (2019) 011001, arXiv:1805.11579 [gr-qc].
[11] D. Radice, A. Perego, F. Zappa, and S. Bernuzzi, "GW170817: Joint Constraint on the Neutron Star Equation of State from Multimessenger Observations," Astrophys. J. no. 2, (2018) L29, arXiv:1711.03647 [astro-ph.HE].
[12] I. Harry and T. Hinderer, "Observing and measuring the neutron-star equation-of-state in spinning binary neutron star systems," Class. Quant. Grav. no. 14, (2018) 145010, arXiv:1801.09972 [gr-qc].
[13] D. Alic, C. Bona-Casas, C. Bona, L. Rezzolla, and C. Palenzuela, "Conformal and covariant formulation of the Z4 system with constraint-violation damping," Phys. Rev. D no. 6, (Mar., 2012) 064040, arXiv:1106.2254 [gr-qc].
[14] M. Bezares, C. Palenzuela, and C. Bona, "Final fate of compact boson star mergers," Phys. Rev. D (Jun, 2017) 124005. https://link.aps.org/doi/10.1103/PhysRevD.95.124005.
[15] M. Anderson et al., "Simulating binary neutron stars: dynamics and gravitational waves," Phys. Rev. D77 (2008) 024006, arXiv:0708.2720 [gr-qc].
[16] D. Neilsen, S. L. Liebling, M. Anderson, L. Lehner, E. O'Connor, and C. Palenzuela, "Magnetized neutron stars with realistic equations of state and neutrino cooling," Physical Review D no. 10, (May, 2014). http://dx.doi.org/10.1103/PhysRevD.89.104029.
[17] G. Calabrese, L. Lehner, O. Reula, O. Sarbach, and M. Tiglio, "Summation by parts and dissipation for domains with excised regions," Classical and Quantum Gravity (Dec., 2004) 5735-5757, gr-qc/0308007.
[18] G. Calabrese, L. Lehner, O. Reula, O. Sarbach, and M. Tiglio, "Summation by parts and dissipation for domains with excised regions," Classical and Quantum Gravity (Dec., 2004) 5735-5757, gr-qc/0308007.
[19] C. Palenzuela, B. Miñano, D. Viganò, A. Arbona, C. Bona-Casas, A. Rigo, M. Bezares, C. Bona, and J. Massó, "A Simflowny-based finite-difference code for high-performance computing in numerical relativity," Class. Quant. Grav. no. 18, (2018) 185007, arXiv:1806.04182 [physics.comp-ph].
[20] E. F. Toro, Riemann solvers and numerical methods for fluid dynamics: a practical introduction, 2nd ed. Springer, Berlin, 1999. http://cds.cern.ch/record/404378.
[21] C.-W. Shu, Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws, pp. 325-432. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998. https://doi.org/10.1007/BFb0096355.
[22] C. Palenzuela, S. L. Liebling, D. Neilsen, L. Lehner, O. L. Caballero, E. O'Connor, and M. Anderson, "Effects of the microphysical equation of state in the mergers of magnetized neutron stars with neutrino cooling," Phys. Rev. D no. 4, (Aug., 2015) 044045, arXiv:1505.01607 [gr-qc].
[23] C.-W. Shu and S. Osher, "Efficient implementation of essentially non-oscillatory shock-capturing schemes," J. Comput. Phys. no. 2, (1988) 439-471.
[24] P. Colella and P. R. Woodward, "The Piecewise Parabolic Method (PPM) for Gas-Dynamical Simulations," Journal of Computational Physics (Sept., 1984) 174-201.
[25] A. Harten, P. D. Lax, and B. van Leer, "On upstream differencing and Godunov-type schemes for hyperbolic conservation laws," SIAM Review no. 1, (1983) 35-61. https://doi.org/10.1137/1025002.
[26] S. Gottlieb and C.-W. Shu, "Total variation diminishing Runge-Kutta schemes," Math. Comput. no. 221, (Jan., 1998) 73-85. https://doi.org/10.1090/S0025-5718-98-00913-2.
[27] "Had." http://had.liu.edu/.
[28] S. L. Liebling, "Singularity threshold of the nonlinear sigma model using 3D adaptive mesh refinement," Phys. Rev. D no. 4, (Aug., 2002) 041703, gr-qc/0202093.
[29] L. Lehner, S. L. Liebling, and O. Reula, "AMR, stability and higher accuracy," Class. Quant. Grav. (2006) S421-S446, gr-qc/0510111.
[30] L. Lehner, S. L. Liebling, C. Palenzuela, and P. M. Motl, "m=1 instability and gravitational wave signal in binary neutron star mergers," Phys. Rev. D94 no. 4, (2016) 043003, arXiv:1605.02369 [gr-qc].
[31] L. Sagunski, J. Zhang, M. C. Johnson, L. Lehner, M. Sakellariadou, S. L. Liebling, C. Palenzuela, and D. Neilsen, "Neutron star mergers as a probe of modifications of general relativity with finite-range scalar forces," arXiv:1709.06634 [gr-qc].
[32] A. Arbona, A. Artigues, C. Bona-Casas, J. Massó, B. Miñano, A. Rigo, M. Trias, and C. Bona, "Simflowny: A general-purpose platform for the management of physical models and simulation problems," Computer Physics Communications (Oct., 2013) 2321-2331.
[33] A. Arbona, B. Miñano, A. Rigo, C. Bona, C. Palenzuela, A. Artigues, C. Bona-Casas, and J. Massó, "Simflowny 2: An upgraded platform for scientific modelling and simulation," Computer Physics Communications (Aug., 2018) 170-181, arXiv:1702.04715 [cs.MS].
[34] R. D. Hornung and S. R. Kohn, "Managing application complexity in the SAMRAI object-oriented framework," Concurrency and Computation: Practice and Experience no. 5, (2002) 347-368. http://dx.doi.org/10.1002/cpe.652.
[35] B. T. Gunney and R. W. Anderson, "Advances in patch-based adaptive mesh refinement scalability," Journal of Parallel and Distributed Computing (2016) 65-84.
[36] P. McCorquodale and P. Colella, "A high-order finite-volume method for conservation laws on locally refined grids," Commun. Appl. Math. Comput. Sci. no. 1, (2011) 1-25. https://doi.org/10.2140/camcos.2011.6.1.
[37] B. Mongwane, "Toward a Consistent Framework for High Order Mesh Refinement Schemes in Numerical Relativity," Gen. Rel. Grav. no. 5, (2015) 60, arXiv:1504.07609 [gr-qc].
[38] D. Radice and L. Rezzolla, "THC: a new high-order finite-difference high-resolution shock-capturing code for special-relativistic hydrodynamics," A&A (Nov., 2012) A26, arXiv:1206.6502 [astro-ph.IM].
[39] G.-S. Jiang and C.-W. Shu, "Efficient implementation of weighted ENO schemes," Journal of Computational Physics no. 1, (1996) 202-228.
[40] A. Suresh and H. Huynh, "Accurate monotonicity-preserving schemes with Runge-Kutta time stepping," Journal of Computational Physics no. 1, (1997) 83-99.
[41] C. Bona, C. Bona-Casas, and J. Terradas, "Linear high-resolution schemes for hyperbolic conservation laws: TVB numerical evidence," Journal of Computational Physics (Apr., 2009) 2266-2281, arXiv:0810.2185 [gr-qc].
[42] J. S. Read, B. D. Lackey, B. J. Owen, and J. L. Friedman, "Constraints on a phenomenologically parametrized neutron-star equation of state," Physical Review D no. 12, (Jun, 2009). http://dx.doi.org/10.1103/PhysRevD.79.124032.
[43] T. Dietrich, S. Bernuzzi, B. Bruegmann, and W. Tichy, "High-resolution numerical relativity simulations of spinning binary neutron star mergers," in Proceedings, 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP 2018): Cambridge, UK, March 21-23, 2018, pp. 682-689, 2018. arXiv:1803.07965 [gr-qc].
[44] L. Baiotti, T. Damour, B. Giacomazzo, A. Nagar, and L. Rezzolla, "Accurate numerical simulations of inspiralling binary neutron stars and their comparison with effective-one-body analytical models," Phys. Rev. D84 (2011) 024017, arXiv:1103.3874 [gr-qc].
[45] D. Radice, L. Rezzolla, and F. Galeazzi, "High-Order Fully General-Relativistic Hydrodynamics: new Approaches and Tests," Class. Quant. Grav. (2014) 075012, arXiv:1312.5004 [gr-qc].