Collisional N-Body Dynamics Coupled to Self-Gravitating Magnetohydrodynamics Reveals Dynamical Binary Formation
Joshua E. Wall, Stephen L. W. McMillan, Mordecai-Mark Mac Low, Ralf S. Klessen, Simon Portegies Zwart
Draft version January 7, 2019
Typeset using LaTeX twocolumn style in AASTeX62
Affiliations: Joshua E. Wall and Stephen L. W. McMillan: Drexel University, Department of Physics and Astronomy, Disque Hall, 32 S 32nd St., Philadelphia, PA 19104, USA. Mordecai-Mark Mac Low: Department of Astrophysics, American Museum of Natural History, 79th St at Central Park West, New York, NY 10024, USA, and Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Ave., New York, NY 10010, USA. Ralf S. Klessen: Heidelberg University, Center for Astronomy, Institute for Theoretical Astrophysics, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany, and Heidelberg University, Interdisciplinary Center for Scientific Computing, INF 205, 69120 Heidelberg, Germany. Simon Portegies Zwart: Leiden Observatory, Leiden University, Niels Bohrweg 2, 2333 Leiden, Netherlands.

(Dated: January 7, 2019; Received December 21, 2018)
Submitted to ApJ

ABSTRACT

We describe a star cluster formation model that includes individual star formation from self-gravitating, magnetized gas, coupled to collisional stellar dynamics. The model uses the Astrophysical Multi-purpose Software Environment (AMUSE) to integrate an adaptive-mesh magnetohydrodynamics code (FLASH) with a fourth-order Hermite N-body code (ph4), a stellar evolution code (SeBa), and a method for resolving binary evolution (multiples). This combination yields unique star formation simulations that allow us to study binaries formed dynamically from interactions with both other stars and dense, magnetized gas subject to stellar feedback during the birth and early evolution of stellar clusters. We find that for massive stars, our simulations are consistent with the observed dynamical binary fractions and mass ratios. However, our binary fraction drops well below observed values for lower mass stars, presumably because binary formation during initial star formation is not included. Further, we observe a buildup of binaries near the hard-soft boundary that may be an important mechanism driving early cluster contraction.
Keywords: star formation — star clusters

Corresponding author: Joshua Wall, [email protected]

1. INTRODUCTION

The study of star cluster formation through simulations is a non-linear physical problem with a wide range of scales. Clusters form from turbulent, magnetized molecular clouds that are parsecs across, yet the individual star formation process happens at scales of a single AU or less (Mac Low & Klessen 2004). Further complicating the issue, star formation contains a complex feedback loop in which stars forming in one epoch affect proximal regions of current and future star formation through radiation, winds, and supernova feedback. The gravitational contraction of molecular clouds, star formation, stellar evolution, dynamical binary formation, and cluster assembly and virialization all take place on timescales of 1-10 Myr. Resolving the relevant physical processes on all size and time scales is computationally challenging. As a result, approximations for star formation are used that include sink particles representing entire clusters (Gatto et al. 2017), simplified stellar feedback (Dale et al. 2014), or softened gravitational dynamics for stars (Federrath et al. 2010), or simulations neglect important dynamical agents such as magnetic fields (Rosen et al. 2016) in order to make the problem tractable.

In this study we describe numerical methods that resolve the dynamics of both the stars and the gas in order to study the formation of star clusters from gas collapse. This includes coupling of the magnetohydrodynamics (MHD) code FLASH (Fryxell et al. 2000), the N-body code ph4 (McMillan et al. 2012), and the stellar evolution code SeBa (Portegies Zwart & Verbunt 1996) using the Astrophysical Multi-purpose Software Environment (AMUSE; Pelupessy et al. 2013), and the implementation of a subgrid model for the formation of individual stars from sink particles. Since we focus on cluster formation and evolution as opposed to individual star formation, we have chosen to use the initial mass function (IMF) as an input rather than a result of our simulations. To accomplish this, we sample a Kroupa IMF (Kroupa 2001) using a Poisson process, but still individually form each star in a way that conserves mass both locally and globally.

The natural environment in which to develop these methods is AMUSE, as the original intention in the development of AMUSE was to allow the coupling of different astrophysical codes for multiphysics simulations (Portegies Zwart et al. 2013). Further, multiple N-body and stellar evolution codes already exist in AMUSE, allowing us to change methods as needed. For example, we could switch between SeBa and MESA (Paxton et al. 2011) for stellar evolution, depending on the level of detail desired and the computational cost acceptable. This allows us to represent the stars in FLASH, ph4, and SeBa as a single data structure that can be modified by any of the above codes, followed by propagation of the updated information to all other running codes. Interfacing FLASH with the AMUSE environment allows us to couple the gravitational potentials computed by FLASH and ph4 using a gravity bridge (see Sect. 2) directly in Python, without major rewrites of either code.

In addition to interfacing FLASH with AMUSE, we have also made several additions to FLASH itself. For the heating and cooling of the gas, we have combined the methods for atomic heating and cooling of Joung & Mac Low (2006) and Ibáñez-Mejía et al. (2016) with the molecular and dust cooling methods of Seifried et al. (2011), which themselves are based on Neufeld et al. (1995). To do this we have added our own model of heating from the photoelectric effect on dust, using either the calculations of Wolfire et al. (2003) or Weingartner & Draine (2001), selectable with a parameter switch. Finally, to solve for both the degree of ionization and the temperature of the gas, as well as the dust temperature, we have implemented our own integrators based on well-known methods.

We reserve a detailed examination of these modifications for a subsequent paper, where we will also detail modifications we have made to include feedback from individual stars. In this work we focus on the coupling of gravity between FLASH and ph4 using a gravity bridge. In Sect. 2 we explain the concept of a gravity bridge and how we implement it, while in Sect. 2.2 we verify our implementation. In Sect. 3 we describe our method for introducing star particles in regions of high gas density, and in Sect. 4 our handling of binary and higher-order systems. We define a demonstration problem in Sect. 5, and describe dynamical binary formation occurring in our models in Sect. 6. Finally, we summarize our results in Sect. 7.

2. GRAVITY BRIDGE

2.1. Implementation
Central to our implementation is the requirement to have fully collisional N-body dynamics calculated for stars evolving in gas-rich regions. To allow for physical interaction between the gas in FLASH and the stars in an N-body code, we implement a gravity bridge (Fujii et al. 2007) between the two codes. The method is a "kick-drift-kick," leapfrog-type integration scheme with roots in the symplectic map method used by Wisdom & Holman (1991) to integrate the motions of planets in the solar system. In Wisdom & Holman (1991), the planets followed an analytic Kepler orbit around the Sun while being perturbed periodically by each other's gravitational acceleration. The scheme was extended by Fujii et al. (2007) to integrate a star cluster subject to tidal effects inside a parent galaxy. While the method has previously been used to couple gas in smoothed particle hydrodynamics (SPH) to stars contained in an N-body code (e.g., Pelupessy & Portegies Zwart 2012), we have for the first time implemented this method with an Eulerian, adaptive mesh refinement (AMR) grid code. Here we briefly describe the AMUSE bridge method, following Fujii et al. (2007).

If we define the equation of evolution for our solution f(q(t), p(t); t) in terms of the Poisson bracket,

    df/dt = {f, H},    (1)

where H is the Hamiltonian of the system, and define an evolution operator D_H,

    D_H = d/dt = {·, H},    (2)

the formal solution for f(t) is

    f(t + Δt) = e^{Δt D_H} f(t).    (3)

Yoshida (1990) noted that if H (and therefore D_H) is separated into kinetic and potential energy terms, H = K(p) + U(q) (with coordinates q and momenta p), and D_K and D_U are defined as in Eq. 2, then the exponential in Eq. 3 can be approximated as

    e^{Δt D_H} = e^{Δt (D_K + D_U)}    (4)
               = Π_{k=1}^{l} e^{a_k Δt D_U} e^{b_k Δt D_K} + O(Δt^n),    (5)

for some integers l, n and constants a_k, b_k. In the simplest case, l = 2, n = 2, a_1 = a_2 = 1/2, b_1 = 1, and b_2 = 0, and the total evolution of f(t) becomes a second-order integration scheme upon Taylor expansion:

    f(t + Δt) = e^{(Δt/2) D_U} e^{Δt D_K} e^{(Δt/2) D_U} f(t)    (6)
              = (1 + (Δt/2) D_U)(1 + Δt D_K)(1 + (Δt/2) D_U) f(t) + O(Δt³).    (7)

We immediately recover the familiar kick-drift-kick formulation of the leapfrog integrator, since

    D_K q_i = {q_i, K} = p_i/m_i = q̇_i = v_i    (8)
    D_U p_i = {p_i, U} = −∇_i U = F_g = m_i a_g,    (9)

and the evolution of the system reduces to

    v′_i = v_i + a_i(x) Δt/2    (10)
    x′_i = x_i + v′_i Δt    (11)
    v″_i = v′_i + a_i(x′) Δt/2.    (12)

Wisdom & Holman (1991) noted that the Hamiltonian of a system comprising two or more coupled subsystems can alternatively be split into a set of secular evolution terms describing the internal evolution of each subsystem, plus perturbation terms, consisting of delta functions, that represent the interactions between the subsystems. Following Wisdom & Holman (1991) and Fujii et al. (2007), we split our Hamiltonian for each subsystem into a sum of terms whose operators, D_K and D_D, represent, respectively, the kick due to the external perturbation and the drift due to internal (unperturbed) evolution. Regardless of the split, the Yoshida (1990) decomposition (Eq. 5) still applies, and the evolution of the system can be described by a scheme of the same form as Eq. 7.

In our simulations the subsystems are the stars (modeled using ph4) and the gas (modeled using FLASH), so the kick D_K is computed for the stars using the gravitational acceleration due to the gas, and for the gas using the gravitational acceleration due to the stars. For the drift operators, instead of deriving the drift from the Hamiltonian as was done in Eq. 8, we use each subsystem's internal integration scheme, as shown in Figure 1. This means we now also introduce any error from the internal schemes (fourth-order for ph4 and second-order for FLASH) into the formally symplectic integration of the bridge, but we gain the ability to couple the codes gravitationally. This error is found to be generally small, even for fairly large bridge timesteps (see Sect. 2.2).

Our integration scheme for the stars is

    kick:  v′_s,0 = v_s,0 + (Δt/2) a_g→s,0    (13)
    drift: x_s,1, v′_s,1 = ph4(x_s,0, v′_s,0, Δt)    (14)
    kick:  v_s,1 = v′_s,1 + (Δt/2) a_g→s,1,    (15)

where a_g→s is the gravitational acceleration on the stars due to the gas. The stars receive an initial velocity kick from the gas, then drift alone, then get a final velocity kick from the gas.
The same considerations lead to a similar procedure for the gas:

    kick:  v′_g,0 = v_g,0 + (Δt/2) a_s→g,0    (16)
    drift: x_g,1, v′_g,1 = FLASH(x_g,0, v′_g,0, Δt)    (17)
    kick:  v_g,1 = v′_g,1 + (Δt/2) a_s→g,1,    (18)

where a_s→g is the gravitational acceleration on the gas due to the stars.

At each bridge step, the gravitational acceleration on the stars due to the gas in each cell of the MHD code, a_g→s, is calculated at the location of each star in ph4, and the gravitational acceleration of each star on the gas, a_s→g, is calculated at each cell site in FLASH. To obtain the gravitational acceleration of the gas in FLASH, we use first-order finite differences of the potential calculated by the FLASH multigrid solver (Ricker 2008). For the acceleration of the particles on the gas, we initially used the acceleration directly from ph4. However, testing showed that combining two different methods for stars on gas and gas on stars led to violations of Newton's third law. We have therefore included a cloud-in-cell mapping of the stellar masses onto the grid itself, followed by the same multigrid potential and acceleration solution method, allowing us to use the same solver in both bridge directions and thereby properly conserve momentum during the interactions.

The method averages the gravitational acceleration of one code on the other over the bridge time step, so the error in the bridge depends on the time step Δt. Testing with different timesteps has shown that a Δt of a modest fraction of t_ff suffices, where t_ff is the minimum free-fall time in the simulation, although in practice we set bridge timesteps much smaller than this, generally a fraction of the hydro time step Δt_h. Runs at this time step still mean that each code is taking numerous steps independently, making the whole simulation more efficient overall. Also, since the two codes drift independently, they can in principle be evolved in parallel for another improvement in speed.
Figure 1. The bridge scheme implemented in AMUSE using FLASH for hydrodynamics, ph4 for N-body, and SeBa for stellar evolution.
2.2. Verification

To test the gravity bridge we perform the test used by Federrath et al. (2010) for sink particles when they were first incorporated into FLASH. This consists of embedding three particles at different radii on circular orbits centered on a cloud of gas. The gas does not evolve and acts as a static potential. This tests the actual interaction between gas and particles, unlike imposing a background static potential without representation on the grid. The density of the gas varies as

    ρ(r) = ρ(r_o)(r_o/r)²,    (19)

where ρ(r_o) = 3.8 × 10⁻¹⁸ g cm⁻³ and r_o = 5 × 10¹⁶ cm, which implies a gas mass of roughly 3 M☉ inside r_o. The three particles are placed at distances of 10¹⁶ cm, 2 × 10¹⁶ cm, and 3 × 10¹⁶ cm from the center of the gas cloud and have masses small enough that the inter-particle gravity is negligible compared to that of the gas. Each particle starts with the translational velocity for a circular orbit,

    v = (GM(r)/r)^{1/2} = 895 m s⁻¹,    (20)

which we lower by 2.3, 1.1, and 0.8%, respectively, to account for the non-singular nature of gravity on the grid at the origin (Federrath et al. 2010).

To check energy conservation we integrated 10 orbits of the innermost particle to compare against Federrath et al. (2010), with the result shown in Fig. 2. Our integration appears to close the orbits as well as the integration in Federrath et al. (2010), which used a second-order leapfrog scheme. However, their integration produced larger errors in the outermost orbit, while ours shows the most error in the innermost orbit. Federrath et al. (2010) attributed the error in the outer orbit to the finite effects of the grid (deviations from spherical symmetry) at the edges of the grid. Our model does not show this effect strongly, as the density drops smoothly to the edge of the computational domain, while in the Federrath test the cloud has a sharp outer edge where the density changes by three orders of magnitude.

Although the orbits in Figure 2 are well closed, they do oscillate slightly about the proper path. This is more clearly seen in a plot of the fractional radial error (Fig. 3). The resulting energy error, shown in Fig. 4, remains small over the full ten-orbit integration (of order 10⁵ yr). The expected stability of symplectic integrators is evident, and the energy error does not grow noticeably with time.

Figure 2. Orbital paths of three test particles after 10 orbits in an isothermal density profile, where the gravitational acceleration of the particles is due to the bridge.

Figure 3. The fractional absolute error in radius of the three test particles.

Figure 4. The fractional energy error of the three test particles.

3. STAR FORMATION

Capturing the range of scales in simulations is one of the core challenges to overcome in conducting studies of star cluster formation and the ISM in general. In order to account for the effects of the surrounding medium, including its large-scale turbulence, magnetic fields, and feedback, simulation boxes need to have sizes of tens to hundreds of parsecs. However, in order to properly capture star formation for stars as small as a solar mass, including binary star formation, simulations need to be able to resolve the Jeans length

    λ_J = (π c_s²/(G ρ))^{1/2},    (21)

which is on the order of or below a single AU. (Recent work does suggest that perhaps only hundreds of AU need be resolved; Sadavoy & Stahler 2017.)

To resolve the Jeans length in pure Eulerian hydrodynamics, λ_J must be resolved by at least four grid cells (Truelove et al. 1997), while in MHD at least six cells per Jeans length are needed to resolve Alfvén waves (Heitsch et al. 2001), and as many as 32 cells per Jeans length would be needed to properly resolve self-consistent formation of magnetic fields through the microturbulent dynamo (Federrath et al. 2011). These requirements generally set the physical scales of the simulation, with the computational expense increasing with dynamical range.

Many authors overcome this difficulty when simulating star formation in large clouds by adding so-called sink particles that represent entire clusters, essentially truncating the small scales. Clusters are created from Jeans-unstable gas that is collected in sink particles (Bate et al. 1995; Krumholz et al. 2004; Federrath et al. 2010) on the grid (Dale et al. 2014; Gatto et al. 2017). This method requires taking random samples from the IMF, which in turn requires that enough gas be collected that the high-mass end of the IMF can be sampled appropriately.
This typically means that around 100 M☉ to 150 M☉ must be collected in a sink particle, which once sampled for a cluster population becomes a single point source for all of the cluster's feedback.

The second difficulty in modeling star formation comes from the effects of feedback during the protostellar and pre-main-sequence phases. During the protostellar disk phase, accretion luminosity reduces fragmentation (Krumholz et al. 2007; Bate 2009; Peters et al. 2010, 2011). This luminosity, along with protostellar jet driving, is expected to reduce the efficiency of envelope accretion (Matzner & Jumper 2015). All of these will have an effect on the final main sequence star that results. Generally these effects are replaced by a local star formation efficiency parameter, usually of order 0.5 or less, which represents the fraction of gas that survives the accretion process from the star's initial outer gas envelope (e.g., Padoan et al. 2014).

Here we use an efficiency parameter to account for proto- and pre-stellar effects, creating our stars as main sequence objects. But to actually create the stars we take a different, and perhaps more historical, approach compared to recent simulations. Instead of sampling an IMF directly after collecting mass, we choose instead to take a Poisson sampling of the number of stars in a given mass bin of the IMF for any given star-forming region, as proposed by Sormani et al. (2017),

    P_i(n_i) = e^{−λ_i} λ_i^{n_i} / n_i!,    (22)

where λ_i = f_i M/⟨m_i⟩, M is the total mass for a specific sample, ⟨m_i⟩ is the average mass in the i-th bin for the total range of the IMF sampled over, f_i is the fraction of the total mass in the i-th bin for the IMF range, and n_i is the number of stars for a specific sampling for which the probability P_i is returned.

The idea of Poisson sampling for mass values has been used before to choose from the IMF (Elmegreen 1997). It has the added mathematical benefit that even when sampling one star at a time, the sum of all the samples will always reproduce the parent sample, since independent Poisson counts n_i and n_j with means λ_i and λ_j sum to a total N that follows the Poisson distribution with mean λ_i + λ_j:

    P(N) = Σ_{n_i=0}^{N} P(n_i; λ_i) P(N − n_i; λ_j)    (23)
         = Σ_{n_i=0}^{N} [λ_i^{n_i}/n_i!] e^{−λ_i} [λ_j^{N−n_i}/(N − n_i)!] e^{−λ_j}    (24)
         = [(λ_i + λ_j)^N / N!] e^{−(λ_i + λ_j)},    (25)

from the binomial theorem.

On the face of it, it would seem that the same considerations of having enough mass to sample each mass bin appropriately would apply to our Poisson sampling as well (see for example Sormani et al. 2017), since even an input of 1 M☉ of gas can result in an unphysical 20 M☉ star, even if we lack enough mass in the simulation to create it. Suggestions to overcome this difficulty in star formation methods have been to violate local mass conservation by sampling from all sink particles at once, or to simply sample all gas over a given density threshold throughout the simulation (Fujii & Portegies Zwart 2015).

Instead we invert this process. We use the sink particle routines of Federrath et al. (2010) in FLASH to identify star-forming regions. Every time a sink particle forms because a region has become Jeans unstable, we create a list of stellar masses for that sink particle by sampling the IMF with our Poisson process, with 10 M☉ of stars created at once, before the sink accretes any gas. The number of stars from the Poisson sampling in each mass bin is returned, and then we randomly sample from the Kroupa (2001) IMF in each bin to give actual masses to every star, with the sampled masses bracketed between our chosen IMF limits of 0.08 M☉ and 150 M☉. We then randomize the entire list of stars created. Once the sink particle obtains enough mass to create the first star in the list, this star is removed from the list and placed into the simulation, after which the sink must then gather enough mass for the second star in the list.

This method allows us to form particles star by star, without any violation of mass conservation. Each particle can take on the local momentum, mass, and velocity of the sink at the time of formation. Also, if a massive star forms, it has the chance to shut down local (or non-local) star formation in the simulation by preventing further accretion, allowing the effects of feedback on star formation to be properly analyzed. Furthermore, since stars are formed individually, gravitational interactions between stars and with the surrounding gas can lead to binary formation and stellar ejections that can have important dynamical effects on the clusters and their surrounding natal gas clouds.

Each gas-gathering sink particle has an accretion radius of 2.5 times the smallest grid cell, to capture the local flow for accretion of the gas (Federrath et al. 2010). Since this is the best-resolved location we have for star formation, star particles are placed randomly within this radius using an isothermal spherical density profile (Binney & Tremaine 2011). This allows some stars to form on the edges of these regions, but with smaller probability. We sample the velocity for the star from a Gaussian profile centered on the sink velocity, using the local gas sound speed as the variance. Fig. 5 shows the resulting mass distribution for a typical small cloud (10³ M☉) run using this method.
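The per-bin Poisson sampling described above can be sketched as follows. The bin edges, mass fractions, and within-bin sampling here are illustrative stand-ins (a real implementation would sample the Kroupa (2001) IMF shape inside each bin), and all helper names are invented:

```python
import math
import random

def _poisson_small(lam, rng):
    """Knuth's product method for a Poisson deviate with mean lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson(lam, rng):
    """Poisson deviate with mean lam; large means are split into chunks,
    using the additivity property of Eqs. 23-25, to avoid float underflow."""
    n = 0
    while lam > 500.0:
        n += _poisson_small(500.0, rng)
        lam -= 500.0
    return n + _poisson_small(lam, rng)

def sample_stars(bins, fractions, mean_masses, total_mass, rng):
    """Poisson-sample stellar masses per IMF mass bin, in the spirit of Eq. 22.

    bins        -- (m_lo, m_hi) bin edges in solar masses (illustrative)
    fractions   -- f_i, fraction of the total IMF mass in each bin
    mean_masses -- <m_i>, mean stellar mass in each bin
    total_mass  -- M, the gas mass assigned to this star-forming event
    Masses are drawn uniformly within each bin as a stand-in for sampling
    the IMF shape; the list is shuffled to randomize the order in which
    the sink must form the stars.
    """
    stars = []
    for (m_lo, m_hi), f_i, m_mean in zip(bins, fractions, mean_masses):
        lam = f_i * total_mass / m_mean      # expected number in this bin
        for _ in range(poisson(lam, rng)):
            stars.append(rng.uniform(m_lo, m_hi))
    rng.shuffle(stars)
    return stars
```

The chunked Poisson generator exploits exactly the additivity of Eqs. 23-25: a deviate with a large mean is the sum of deviates with smaller means.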
Figure 5. The mass function of stars from run M3f presented in the Results section. The Kroupa IMF is shown for comparison, normalized to the same total number of stars as in the simulation, here 1100.

4. MULTIPLE STARS

The formation of close binaries and higher-order systems can lead to the effective time step shrinking to a small fraction of the binary orbital period, preventing further integration of the solution due to the extreme computational expense. Normally, in N-body codes, the solution to this problem is to introduce a specialized treatment of close encounters, through regularization of the equations of motion (Aarseth & Zare 1974) or some other approximate treatment of close encounters (Portegies Zwart et al. 1999). Several of the N-body modules in AMUSE have the ability to incorporate such treatments, but for a general and minimally intrusive solution within the
AMUSE framework, we prefer to handle close encounters using an external module, as we now describe.

The basic simplification in the approach we use is that the N-body code manages only the centers of mass of stable multiple systems. These include binaries, stable hierarchical triples according to the Mardling (2008) criterion, and higher-order multiple systems in which the Mardling criterion applied to the outermost orbits indicates stability. Close encounters are resolved using the multiples module (Portegies Zwart & McMillan 2019), which keeps track of the internal structure of all multiple systems and manages interactions between them. To operate with this module, an N-body code must be modified to detect close encounters and return immediately to the top-level AMUSE script controlling the simulation, where appropriate means are taken to resolve the encounter. Such functionality is straightforward to add, and is applied at the end of every N-body step.

In our case, the ph4 module checks for pairs of particles that satisfy the stopping conditions that (1) they are approaching, (2) they have separations less than twice the sum of their effective dynamical diameters (a tunable parameter set at runtime to be 100 AU for all stars and twice the semi-major axis of a binary), and (3) they are relatively unperturbed by their next nearest neighbor (with the ratio of accelerations γ_p < 0.02 in the terminology discussed below in Sect. 6 and represented by Eq. 26). Once the stopping condition is triggered, any internal structure in the two interacting particles is restored, and the entire system is moved to a separate code designed for small-N encounters, aptly named smallN (Hut et al. 1995; McMillan & Hut 1996). The smallN code models the encounter as a scattering experiment, terminating when the system has resolved itself into a collection of mutually unbound single stars or stable multiples (as just defined). The internal structure of the stable multiples is saved, their centers of mass are placed back in the N-body code, and the integration continues. In this way, arbitrarily complex hierarchical configurations can form and interact, and their dynamical histories can easily be monitored. This treatment of multiples is unusual in the N-body community, but similar implementations are widely used in Monte Carlo models of cluster dynamics (Chatterjee et al. 2010; Hypki & Giersz 2013).

Currently, the internal structure of a multiple is simply frozen until its next close encounter. Secular internal evolution and perturbations due to encounters too wide to trigger a stopping condition are not included. Binaries on wide or strongly perturbed orbits are not merged into their centers of mass; instead, their components are returned directly to the N-body code. We note that, although ph4 evolves only the centers of mass of multiple star systems, for all feedback and bridge calculations the individual component stars are used directly.

5. DEMONSTRATION PROBLEM

As a demonstration of our method, we simulate star formation in turbulent spheres of gas. For gas dynamics in FLASH we use the unsplit MHD solver (Lee 2013) with third-order PPM reconstruction (Colella & Woodward 1984) and the HLLD Riemann solver (Miyoshi & Kusano 2005), while for solving Poisson's equation for gravity we use the multigrid solver of Ricker (2008). We include feedback in these runs from radiation, winds, and supernovae that we will describe in a subsequent paper, since the results we describe here are not strongly affected by the feedback. We initialize the density field with the commonly used, initially spherically symmetric, Gaussian gas distribution of Bate et al. (1995), while the velocity field is generated with a turbulent Kolmogorov velocity spectrum (Wünsch 2015) for the dense gas. All runs have eight levels of AMR refinement with the exception of M3f, which has seven, with refinement triggered by the Jeans criterion described in Federrath et al. (2010). All the runs except M3V2 have all three stellar feedback methods (winds, radiation, and supernovae) switched on, although no run has yet experienced a supernova event.

We use total masses of M = 10³, 10⁴, and 10⁵ M☉ and Gaussian density profiles with variance r_o = 5, 10, and 50 pc, respectively. These length scales are chosen to roughly match the average density scales of clouds of these masses (Stahler & Palla 2008). Note this means the larger clouds have significantly longer free-fall times. Outside of the sphere, the density is chosen to roughly match the ISM density for a containing medium based on the size and density of the sphere itself (i.e., for the 10³ M☉ sphere, with higher density, the containing medium was assumed to be cold neutral medium, while for the 10⁵ M☉ sphere the containing medium is warm and ionized). Then the temperatures are chosen to keep the sphere and containing medium in pressure equilibrium. The physical grid domain sizes D, listed in Tab. 1, are ∼2.4 r_o in each case.

All models reported here were initialized with a virial ratio of kinetic to potential energy of 0.2. We choose this initially low virial parameter to ensure quick cloud collapse even before all of the turbulence decays within a free-fall time (Mac Low et al. 1998). As expected, our spheres rapidly collapse into filamentary structures and begin forming stars (see Fig. 6).

The four models we analyze are the current snapshots of our first runs, listed in Tab. 1 and shown in Fig. 6. Note that although M5f is both more massive and significantly older, it also contains a 97 M☉ star that is shutting down star formation in a large volume. Therefore the number of stars is comparable to the much younger star-forming regions in other runs. The larger runs have larger minimum cell sizes, since all runs have the same number of refinement levels, but they also have more individual filaments and cluster-forming regions. Finally, we note that simulations of this nature are in general highly stochastic and therefore only predictable in a statistical sense.

6. BINARIES

Given the collisional nature of our coupled code, the possibility exists of dynamical binary formation by interactions between stars and with the gas. Indeed, all four runs examined here formed binaries. Note that here when we refer to binaries, we mean any particles that are bound. Not all of these will necessarily be merged into root particles in multiples (Sect. 4). The multiples code is a numerical solution to the problem of prohibitively short timesteps, and will only act on the tightest physical multiple systems.

To identify binaries in our simulations, we first calculate the relative energies of all stars with respect to each other, keeping only those that are bound to each other. We identify those that are mutually bound, which is our initial list of binary candidates.
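One reading of this selection can be sketched in a few lines. This is illustrative analysis code, not the authors' pipeline, and the "mutually most-bound" interpretation is an assumption:

```python
import math

def relative_energy(s1, s2, g=6.674e-8):
    """Two-body orbital energy per unit reduced mass (negative = bound).

    s1 and s2 are (mass, position, velocity) tuples in cgs units.
    """
    (m1, x1, v1), (m2, x2, v2) = s1, s2
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
    vrel2 = sum((a - b) ** 2 for a, b in zip(v1, v2))
    return 0.5 * vrel2 - g * (m1 + m2) / r

def binary_candidates(stars, g=6.674e-8):
    """Mutually bound pairs: i and j pair up when each is the other's
    most-bound companion and the pair energy is negative."""
    best = {}
    for i, si in enumerate(stars):
        e_ij = {j: relative_energy(si, sj, g)
                for j, sj in enumerate(stars) if j != i}
        j = min(e_ij, key=e_ij.get)          # most-bound companion of i
        if e_ij[j] < 0.0:
            best[i] = j
    return sorted({tuple(sorted((i, j)))
                   for i, j in best.items() if best.get(j) == i})
```

For example, two solar-mass stars separated by 1 AU with a small relative velocity are returned as a candidate pair, while a fast-moving distant third star is not.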
We then test each binary, consisting of stars with masses m_1 and m_2 and semi-major axis a, for perturbation by any star with mass m_p at distance d from the binary center of mass,

    | m_p m_1/(d − a)² − m_p m_2/(d + a)² | < γ_p m_1 m_2/a²,    (26)

where γ_p is the chosen limiting ratio of accelerations. As a guide for choosing γ_p, we consider cases in which all three stars have equal mass; for ratios d/a = 2, 5, and 10, inverting Eq. 26 then gives the corresponding threshold values of γ_p.

The resulting binary fraction

    f_bin = B/(S + B)    (27)

for each mass bin is shown in Fig. 8, where B is the total number of binary systems and S is the total number of single stars. For comparison we include observations of f_bin compiled by Duchêne & Kraus (2013).

The lack of low-mass binaries is due to the fact that we do not include any primordial binaries as we form stars, nor do we have high enough resolution to capture the gas dynamics that may lead to primordial binary formation, such as core fragmentation at small scales (Bate 2012). Fig. 9 shows that in absolute numbers, most binaries are close to 1 M☉, with a steep decline for more massive stars following the IMF. We have no binaries containing a primary star near the bottom of our mass range, although our IMF goes down to 0.08 M☉ in all runs.

The value of f_bin at the massive end appears remarkably consistent with the observations, despite our neglect of primordial binaries. Indeed, all of our massive binaries have wide separations. The distribution of the mass ratio q = M_s/M_p, for primary mass M_p and secondary mass M_s, is shown in Figs. 11 and 12. Comparing with observations in our mass range, we find our q distribution consistent with Kouwenhoven et al. (2005), with a large peak at low q. To construct the distributions we use a Gaussian kernel density estimate, choosing the bandwidth σ of the Gaussian by cross-validation: we evenly sample σ over a wide range for each rotation through all the data, comparing the mean integrated square error of each fit with the full data set to find the appropriate bandwidth.
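The perturbation test of Eq. 26 then prunes this candidate list. A minimal sketch, with γ_p = 0.02 taken from the stopping condition quoted in Sect. 4; the function name is invented:

```python
def is_unperturbed(m1, m2, a, m_p, d, gamma_p=0.02):
    """Perturbation test in the spirit of Eq. 26.

    A binary (masses m1, m2, semi-major axis a) counts as unperturbed by a
    third star of mass m_p at distance d from the binary's center of mass
    when the differential (tidal) attraction across the orbit is below
    gamma_p times the binary's internal attraction.  The units cancel, so
    any consistent mass/length system works.
    """
    if d <= a:
        return False          # perturber inside the orbit: always perturbed
    tidal = abs(m_p * m1 / (d - a) ** 2 - m_p * m2 / (d + a) ** 2)
    internal = m1 * m2 / a ** 2
    return tidal < gamma_p * internal
```

For equal masses, a perturber at d = 50a leaves the binary unperturbed, while one at d = 2a does not.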
Parameters for each of the four runs described here, including total mass M, total number of stars N_s at the end of the run t_end when the analysis was performed, time of the first star-forming event t_sf, the cell size Δx at maximum refinement, and the domain size D. Note that M3 and M3f used different initial random turbulent patterns, explaining their different values of t_sf.

Run^a   M (M☉)   N_s   t_sf (Myr)   t_end (Myr)   Δx (pc)   D (pc)
M3      10³      589   7.76         9.14          0.05      12
M3f     10³      …     …            …             …         …
M4f     10⁴      …     …            …             …         …
M5f     10⁵      …     …            …             …         …

^a Runs ending in "f" include radiation, winds, and supernovae.
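Returning to the perturbation criterion of Eqn. (26): as a numerical cross-check, when m_p = m_1 = m_2 the mass factors cancel and the limiting ratio reduces to a function of d/a alone. A short sketch (the function name is ours):

```python
def gamma_p_equal_mass(d_over_a):
    """Limiting acceleration ratio from Eqn. (26) when all three stars
    have equal mass: gamma_p = |(d/a - 1)^-2 - (d/a + 1)^-2|."""
    return abs((d_over_a - 1.0) ** -2 - (d_over_a + 1.0) ** -2)

for ratio in (2, 5, 10):
    # d/a = 2, 5, 10 give gamma_p of roughly 0.9, 3e-2, and 4e-3.
    print(ratio, gamma_p_equal_mass(ratio))
```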
With fully collisional N-body dynamics, we expect to see a separation of our binaries into hard and soft regimes following the Heggie-Hills law (Hills 1975; Heggie 1975). For an average effective cluster thermal energy defined by

    (3/2) N kT = (1/2) N ⟨m⟩ σ_v² ,    (28)

and a binary energy of

    x = G m_1 m_2 / (2a) ,    (29)

with soft binaries having x/kT ≲ 1 and hard binaries x/kT ≳ 1, the separation between the two types grows with time. In our runs, we indeed see distinct hard and soft populations, well separated by a boundary set by σ_v, the stellar velocity dispersion averaged across all four runs. It also appears that the soft binaries accumulate near the hard/soft boundary, which should be an important energy sink for the clusters as these binaries are disrupted. Gas dynamical friction could drive binaries to build up in this way, after which they could be disrupted near the maximal soft binary energy and thereby drive the entire cluster to contract (Leigh et al. 2014). Higher resolution runs will be needed to confirm whether this effect occurs at the smallest binary separations.

SUMMARY

In this work we have coupled the Eulerian MHD code
FLASH with stellar evolution and collisional N-body dynamics using the ph4 and
SeBa codes in the
AMUSE software framework. We then used
AMUSE to couple the two gravity calculations using a gravity bridge to allow for interaction between the gas and stars, allowing us to simulate open cluster formation and early evolution in spherical, turbulent clouds of masses 10³–10⁵ M☉.

We have examined the binary populations produced in our demonstration runs. Despite not injecting any primordial binary population from core or disk fragmentation, we find a large number of wide binaries with properties that suggest they formed by interaction with the gas. We find that the mass ratios of these binaries appear consistent with observations, and that the binary fraction of massive binaries is close to that observed. The lack of low-mass or tight binaries that we find suggests that those populations are predominantly produced by primordial core or disk fragmentation, but that the wide hierarchical multiple systems in which massive stars occur may be formed by this dynamical mechanism acting on primordial binaries. We find well-separated hard and soft binary populations, as predicted by the Heggie-Hills law, with evidence of a buildup of soft binaries near the boundary between the groups. Our results suggest that the hitherto little-considered interaction of stars with gas during the early evolution of stellar clusters, while their natal gas remains present, may explain much of the wide binary and multiple population, particularly for the most massive stars.

With publication of this work we make public our interface for the FLASH code in the
AMUSE framework, in order to allow reproduction of this work. We hope our interface inspires others to use the coupling ideas behind this work in ways we might never consider ourselves, in the spirit of scientific discovery. The interface can be found within the
FLASH directory of the
AMUSE code at https://github.com/amusecode/amuse. Specific implementation details are available from the first author upon request.

We acknowledge M. Davis, C. Federrath, S. Glover, A. Hill, J. Moreno, & E. Pellegrini for useful discussions, and A. van Elteren and I. Pelupessy for assistance with AMUSE, R. Banerjee and D. Seifried for kindly providing the base code for dust and molecular cooling, and R. Wünsch for kindly providing a helper script for the initial conditions. Analysis and plotting for this work was done using yt (Turk et al. 2011). M-MML was additionally supported by the Alexander von Humboldt Stiftung. We acknowledge NASA grant NNX14AP27G, which supported this work, and the Dutch National Supercomputing Center SURFSara grant 15520 that provided computing resources for our simulations. RSK acknowledges support from the Deutsche Forschungsgemeinschaft (DFG) via SFB 881 "The Milky Way System" (sub-projects B1, B2 and B8), and SPP 1573 "Physics of the ISM". Furthermore, RSK thanks the European Research Council for funding under the European Community's Seventh Framework Programme via the ERC Advanced Grant "STARLIGHT" (project number 339177).

Figure 6. Projected number density along the z-axis for runs (a) M3, (b) M3f, (c) M4f, and (d) M5f at the last data file from each run. The areas of the circles representing stars are proportional to their masses, while the locations of sink particles are shown by white star symbols. Feedback is most effective in run (b), where multiple feedback stars have sunk together to the center of the cluster, and in (d), due to the 97 M☉ star in the center of the image.
Figure 7. Projected number density along the y-axis for runs (a) M3, (b) M3f, (c) M4f, and (d) M5f at the last data file from each run. These images are zoomed out relative to Fig. 6. Note the effect of the 97 M☉ star in (d), causing many cluster members to become unbound.

REFERENCES
Aarseth, S. J., & Zare, K. 1974, Celestial Mechanics, 10, 185
Bate, M. R. 2009, MNRAS, 392, 1363
—. 2012, MNRAS, 419, 3115
Bate, M. R., Bonnell, I. A., & Price, N. M. 1995, MNRAS, 277, 362
Binney, J., & Tremaine, S. 2011, Galactic Dynamics (Second Edition), Princeton Series in Astrophysics (Princeton University Press)
Chatterjee, S., Fregeau, J. M., Umbreit, S., & Rasio, F. A. 2010, ApJ, 719, 915
Figure 8.
The fraction of all stars in binaries by stellar mass. Our simulations produce massive binaries at a rate consistent with observations, but very few low-mass binaries. The mass errors shown are the bin widths, while the f_bin error is given by the Poisson statistical error N_s^(−1/2).

Figure 9.
The number of stars in binaries binned by stellar mass.

Colella, P., & Woodward, P. R. 1984, J. Comput. Phys., 54, 174
Dale, J. E., Ngoumou, J., Ercolano, B., & Bonnell, I. A. 2014, MNRAS, 442, 694
Doane, D. P. 1976, The American Statistician, 30, 181
Duchêne, G., & Kraus, A. 2013, ARA&A, 51, 269

Figure 10.
The binary mass and semi-major axis, with lines representing several cluster thermal energies overplotted on their corresponding binary potential masses and radii. Also shown is the mean thermal energy of all the stars averaged across all four simulations. There is a clear separation into hard and soft binaries, as expected from the Heggie-Hills law.
Figure 11.
The mass ratio of the primary to the secondary of the 77 binaries we have found in our runs so far. Also shown are several lines of constant mass ratio q for comparison.

Elmegreen, B. G. 1997, ApJ, 486, 944
Federrath, C., Banerjee, R., Clark, P. C., & Klessen, R. S. 2010, ApJ, 713, 269

[Figure 12 axes: CDF, Frequency; Gaussian KDE bandwidth = 0.03]
Figure 12.