A new mass model for nuclear astrophysics: crossing 200 keV accuracy
M. Shelley∗ and A. Pastore†
Department of Physics, University of York, Heslington, York, YO10 5DD, UK
(Dated: February 16, 2021)

By using machine learning algorithms, we present an improved nuclear mass table with a root mean square deviation of less than 200 keV. The model is equipped with statistical error bars in order to compare with available experimental data. We use the resulting model to predict the composition of the outer crust of a neutron star. By means of simple Monte Carlo methods, we propagate the statistical uncertainties of the mass model to the equation of state of the system.
PACS numbers: 21.30.Fe 21.65.-f 21.65.Mn
I. INTRODUCTION
Neutron stars (NS) are fascinating objects: with a typical mass of M ≈ 1.4 M⊙ and a radius of R ≈ 12 km [1], they represent an ideal laboratory for studying the properties of nuclear matter under extreme conditions. The strong pressure gradient forces the matter within the NS to arrange itself into layers with different properties [2]. Going from the outermost regions of the star to its centre, the matter density ρ spans several orders of magnitude, from a tiny fraction of ρ0 up to several times ρ0, where ρ0 = 0.16 fm⁻³ ≈ 2.7 × 10¹⁴ g cm⁻³ is the typical value of the density at the centre of an atomic nucleus [3].

The external region of a cold non-accreting NS is named the outer crust. It consists of a Coulomb lattice comprising fully-ionized atoms with Z protons and N neutrons. As discussed in Ref. [4], at β-equilibrium the composition of each layer of the crust at a given pressure P is obtained by minimising the Gibbs free energy per nucleon. The latter is the sum of three main contributions: the nuclear, electronic, and lattice terms. Since a large fraction of the nuclei present in the outer crust are extremely neutron rich, their binding energies are not known experimentally, and consequently one has to rely on a nuclear mass model. Several models are available in the scientific literature, with a typical accuracy, i.e., root mean square (RMS) deviation of the residuals, of 500 keV [5]. The most accurate are those of Wang and Liu [6].

∗ Electronic address: [email protected]
† Electronic address: [email protected]

In the present article, we use a machine learning algorithm to improve a nuclear mass model, and we apply it to determine the composition of the outer crust of a NS. As previously done in Ref. [15], we perform a full error analysis of the mass model, and we use a Monte Carlo procedure to propagate these statistical uncertainties through to the final equation of state (EoS).

The article is organised as follows: in Sec. II we briefly introduce the concept of Gaussian processes and their use for regression, and in Sec. III we discuss the nuclear mass model and the improvement provided by the GP. In Sec. IV we illustrate our results concerning the outer crust, and finally we present our conclusions in Sec. V.

II. GAUSSIAN PROCESS REGRESSION
We now introduce Gaussian processes (GPs) and their use as a regression tool. A Jupyter notebook is available as Supplementary Material; it was used to create Fig. 1, and contains additional plots which give a step-by-step introduction. A Gaussian process is an infinite-dimensional Gaussian distribution. Similar to how a one-dimensional (1D) Gaussian distribution has a mean µ and a variance σ², a GP has a mean function µ(x) and a covariance function k(x, x′), also known as the kernel. In principle, x can be a vector of length d representing a point in a d-dimensional input space, but we will just consider the case d = 1 for now, i.e., where x is a single number. Just as we can draw random samples (numbers) from a 1D Gaussian distribution, we can also draw random samples from a GP, which are functions f(x). The kernel k(x, x′) tells us the typical correlation between the values of f at any two inputs x and x′, and it entirely determines the behaviour of the GP (relative to the mean function). For simplicity, we use here a constant mean function of 0.

GPs can be used for regression of data if the underlying process generating the data is smooth and continuous. See Ref. [24] for a thorough introduction to GPs for regression and machine learning. Many software packages are available for GP regression; in the current article we use the Python package GPy [25]. For a set of data Y(x) = {y₁(x₁), y₂(x₂), ..., y_n(x_n)}, instead of assuming a fixed functional form for the interpolating function, we treat the data as originating from a Gaussian process GP:

Y(x) ∼ GP(µ(x), k(x, x′)).
(1)

We make no parametric assumption on the shape of the interpolating function, making GPs a very flexible tool. We adopt the commonly used RBF (radial basis function) kernel, also known as the squared exponential or Gaussian kernel, which yields very smooth samples f(x), and has the form

k_RBF(x, x′) = η² exp[ −(x − x′)² / (2ℓ²) ],   (2)

where η and ℓ are parameters to be optimised for a given Y. Both have easily interpretable meanings: η gives the typical magnitude of the oscillations of f(x), and ℓ the typical correlation length in x. When |x − x′| is small, the correlation is large, and we expect f(x) and f(x′) to have similar values. As |x − x′| grows beyond a few correlation lengths ℓ, the correlation between f(x) and f(x′) drops rapidly to 0.

In Fig. 1 we show a simple demonstration of GP regression, where the underlying true function generating the data (dotted line) is simply y = sin(x). The GP mean (solid line) here represents the average of all possible samples passing through the data Y (crosses), i.e., the mean prediction. The GP mean is smooth, and it interpolates exactly all data points. Outside the input domain it approaches 0, and the width of the confidence interval approaches the prior value set by η. As we would expect, the quality of the GP regression is greatest where more data are available.

Clearly, some sets of kernel parameters lead to better regression than others. For example, if ℓ is smaller than the typical data spacing, the GP mean will approach 0 in between data points, making it useless for interpolation; if η is too large, the size of the confidence intervals will be overestimated. These parameters can be optimised by maximising the likelihood — see Ref. [26] for more details — as has been done in Fig. 1.

III. NUCLEAR MASSES
Nuclear mass models are used to reproduce the nuclear binding energies of all known nuclei, and to predict the unmeasured ones.
FIG. 1: Colors online. Demonstration of Gaussian process regression. The true function is y = sin(x), and the crosses mark the training data. The solid line represents the GP mean, and the shaded areas give the 2-σ confidence intervals. See text for details.

In this work we adopt the Duflo-Zucker mass model [23] in its 10-parameter version (DZ10), which reproduces the measured masses and predicts the extrapolated ones. We assume that the experimental binding energies B_exp(N, Z) are equal to the theoretical ones B_th(N, Z|a) up to a Gaussian error ε(N, Z), as

B_exp(N, Z) = B_th(N, Z|a) + ε(N, Z),   (3)

where B_th(N, Z) is the binding energy calculated using DZ10. In Fig. 2, we illustrate the residuals for DZ10 as a function of the nucleon number A = N + Z. One clearly sees that these residuals show structures, indicating the presence of some missing physics that is not properly described by the model. In Fig. 3, we plot the same residuals as a histogram (labelled 'DZ10'). On the same figure we also draw a Gaussian with mean 0 and width fixed to the RMS of the residuals; only the height of the Gaussian is fitted to the residuals. We observe that the residuals display a Gaussian distribution. A more detailed statistical test can be performed on these residuals to verify whether they genuinely follow a Gaussian distribution — see for example Refs. [15, 33] for more details — but for the current discussion a qualitative analysis is sufficient.

Having identified that there is room to improve the accuracy of the model, the most natural option is to add new terms [29]. For example, a version of the Duflo-Zucker model with 33 parameters is available. Although the RMS reduces to ≈
300 keV, the extra terms appear poorly constrained [29], and therefore the model is unsuitable for extrapolation. We refer the reader to Ref. [34] for a detailed discussion of poorly constrained parameters.

Instead of explicitly creating new terms for a given mass model, we can take advantage of machine learning methods. For example, in Refs. [13, 15], the authors adjusted a neural network (NN) on the residuals of the DZ10 model in order to reduce the discrepancy between theory and experiment. The NN is able to reduce this discrepancy to a typical RMS of ≈
350 keV [15]. NNs are often very complex models, with several hundred free parameters. As discussed in Ref. [14], a Gaussian process represents a valid alternative to a NN; the main advantages are the very small number of adjustable parameters, as discussed in Sec. II, and the superior performance on the database of nuclear masses when compared with a NN [14].
FIG. 2: Residuals as a function of the nucleon number A, for the DZ10 (upper panel) and DZ10-GP (lower panel) models. See text for details.

FIG. 3: Distributions of the residuals for the DZ10 and DZ10-GP models, for measured masses. Gaussian fits to the residuals are also shown, with the mean fixed to 0 and the standard deviation fixed to that of the residuals. See text for details.
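The qualitative Gaussianity check of Fig. 3 can be made quantitative in a few lines of numpy. The sketch below computes the RMS together with the sample skewness and excess kurtosis (both ≈ 0 for Gaussian residuals); the residuals used here are synthetic, drawn from a Gaussian of assumed width 0.5 MeV, and merely stand in for the real DZ10 residuals:

```python
import numpy as np

def residual_summary(res):
    """RMS, sample skewness and excess kurtosis of a set of mass residuals (MeV)."""
    res = np.asarray(res, dtype=float)
    mu, sd = res.mean(), res.std()
    rms = np.sqrt(np.mean(res**2))               # the sigma_RMS quoted for mass models
    skew = np.mean(((res - mu) / sd)**3)         # ~0 for Gaussian residuals
    kurt = np.mean(((res - mu) / sd)**4) - 3.0   # excess kurtosis, ~0 for a Gaussian
    return rms, skew, kurt

if __name__ == "__main__":
    # Synthetic stand-in for the DZ10 residuals: zero-mean noise of width 0.5 MeV.
    rng = np.random.default_rng(0)
    res = rng.normal(0.0, 0.5, size=2000)
    rms, skew, kurt = residual_summary(res)
    print(f"RMS = {rms:.3f} MeV, skewness = {skew:.3f}, excess kurtosis = {kurt:.3f}")
```

A structured residual pattern, such as the spikes visible in the upper panel of Fig. 2, would show up here as skewness or kurtosis values far from zero.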
A. Augmenting the DZ10 model with a GP
Having introduced the GP in Sec. II, we now apply it to the case of nuclear masses. As done in Ref. [14], we consider the same RBF kernel given in Eq. (2), but now in the 2-dimensional case, meaning there are now three adjustable parameters. We also use a fourth parameter σ_n, named the nugget. The use of the nugget carries several advantages, including numerical stability [35] and improved predictions [36]. The kernel we use is then given by

k_RBF(x, x′) = η² exp[ −(N − N′)²/(2ρ_N²) − (Z − Z′)²/(2ρ_Z²) ] + σ_n² δ_{xx′},   (4)

where in the present case x = (N, Z), and η, ρ_Z, ρ_N are the adjustable parameters. Following Ref. [14], ρ_N and ρ_Z are interpreted as correlation lengths in the neutron and proton directions, while η gives the strength of the correlation between neighbouring nuclei. The addition of the nugget term means that the GP mean does not necessarily pass directly through each data point, and that the confidence intervals only shrink to a minimum of σ_n. After a preliminary investigation, we fixed σ_n; the remaining parameters η, ρ_N, ρ_Z are determined by maximising the likelihood for the GP. See Ref. [26] for details.

FIG. 4: Posterior distributions of GP parameters obtained through MCMC sampling. The horizontal and vertical lines indicate the optimal parameter values obtained by maximising the likelihood. See text for details.

In Fig. 4, we illustrate the posterior distribution of the parameters in the form of a corner plot. The distributions were obtained with Markov Chain Monte Carlo (MCMC) sampling [37]. The plot illustrates the shapes of the distributions around the optimal parameter set, and it provides us with error bars for the parameters and information about their correlations. In this case we see that all parameters are very well determined by the residuals data, and a weak correlation is observed between η and ρ_N, and between η and ρ_Z. A very interesting result is that the two correlation lengths ρ_N, ρ_Z are as large as, or greater than, 2. This means that, if we know the residual for a nucleus with mass number A, we can infer properties of the neighbouring nuclei. We then define the total binding energy B_th (appearing in Eq. 3) as B_th = B_DZ10 + GP(N, Z), which we name DZ10-GP. In the lower panel of Fig. 2, we illustrate the residuals obtained from the DZ10-GP model.
We clearly see that the GP has been able to capture the missing physics of the DZ10 model, in particular the spikes observed in the upper panel of Fig. 2. The total RMS of this combined model is σ = 178 keV, which is probably among the lowest values ever obtained with a mass model fitted on all the available masses, with a total of 10 + 4 = 14 adjustable parameters. We observe that the maximum discrepancy between theory and experiment is now always lower than 1 MeV, and the structure observed in Fig. 2 (upper panel) has disappeared (lower panel), with the new residuals approaching true white noise. The presence of white noise in the residuals may represent a lower bound on the accuracy one can achieve with a theoretical model, as discussed in Ref. [8]; we leave such an interesting analysis for a future investigation.

Having created the DZ10-GP model, we now benchmark its extrapolations on the set of ≈
750 nuclear masses obtained via indirect measurements [27]. The results are presented in Fig. 5. The original DZ10 model gives an RMS of 1.426 MeV; the inclusion of the GP correction helps to reduce the RMS to 1.100 MeV. It is worth noting that some outliers are still present. We have checked that the 6 nuclei with a residual larger than 6 MeV all lie in the region of super-heavy nuclei.

FIG. 5: Same as Fig. 3, but for extrapolated masses. See text for details.

FIG. 6: Residuals for the DZ10 and DZ10-GP models, for the Z = 28 and Z = 29 isotopic chains. The vertical dashed lines represent the transition from nuclei used for training to nuclei for which predictions are made.

FIG. 7: GP correction for the Z = 28 and Z = 29 isotopic chains. The vertical dashed lines represent the transition from nuclei used for training to nuclei for which predictions are made. The shaded areas represent the GP error bars. See text for details.

In Fig. 6, we compare the residuals of the two models along the Z = 28 and Z = 29 isotopic chains; there, we observe that the original DZ10 model tends to give large discrepancies at the edges of the measured region. Even the inclusion of the statistical error bars of DZ10 is not enough to explain such a discrepancy. On the contrary, the use of the GP helps to flatten out the discrepancies, and it produces predictions very close to the data in the extrapolated region. By considering the experimental and the theoretical error bars, we observe that our DZ10-GP model reproduces these data reasonably well.

In Fig. 7, we show the evolution, along the same two isotopic chains, of the GP's contribution GP(N, Z) to the binding energy. We see that these contributions drop to zero as the neutron-rich region is approached. On the same figure, we also report the evolution of the 1-σ error bar provided by the GP. As discussed previously, we notice that the error bars grow towards the neutron drip-line, where we have little or no available data to constrain the GP. This behaviour can be understood from the value of the GP's correlation length for neutrons, ρ_N = 2.67: by construction, the GP predictions tend to the mean of the data, in this case zero, a few correlation lengths ρ_N beyond the training data. This means that the GP will be effective in describing extrapolated neutron-rich nuclei only up to a few neutrons beyond the measured region.

IV. OUTER CRUST
To determine the chemical composition of the outer crust, we minimise the Gibbs free energy per particle, which is defined as [39]

g = E_nuc(N, Z) + E_e(A, Z) + E_l(A, Z) + P/ρ_b,   (5)

where ρ_b is the baryonic density. The three terms E_nuc, E_e, E_l are the nuclear, electronic, and lattice energies per nucleon, respectively [40]. The pressure P arises only from the lattice and electron contributions, as P = P_L + P_e. For more details, we refer to Ref. [39], where the entire formalism is discussed in great detail. The novelty of the current approach is in the treatment of the nuclear term, which takes the form

E_nuc(N, Z) = (Z m_p + N m_n)/A − B(N, Z)/A,   (6)

where m_p(n) is the mass of the proton (neutron) and B is the nuclear binding energy given by the mass model. In the current article, we use the DZ10-GP mass model discussed in Sec. III. The composition given by the current mass model is reported in Tab. I. By comparing the DZ10-GP results with those obtained using only the DZ10 model, we observe some discrepancies in the extrapolated region at low P. In particular, we notice that the improved mass model (DZ10-GP) predicts the presence of an additional nucleus (N = 50, Z = 30) that does not appear in the original DZ10 sequence. At higher P, the two mass models tend to give very similar results. This is simple to understand since, as discussed in Sec. III, the GP correction tends to zero for large extrapolations, as seen in Fig. 7.

      DZ10                          DZ10-GP
P_max [MeV fm⁻³]   N    Z      P_max [MeV fm⁻³]   N    Z
3.30 · 10⁻         30   26     3.30 · 10⁻         30   26
4.36 · 10⁻         34   28     4.36 · 10⁻         34   28
3.56 · 10⁻         36   28     3.56 · 10⁻         36   28
4.02 · 10⁻         38   28     4.02 · 10⁻         38   28
1.03 · 10⁻         50   36     1.03 · 10⁻         50   36
5.59 · 10⁻         50   34     5.59 · 10⁻         50   34
1.76 · 10⁻         50   32     5.59 · 10⁻         50   32
                               1.77 · 10⁻         50   30
1.58 · 10⁻         50   28     3.22 · 10⁻         50   28
1.82 · 10⁻         82   42     1.21 · 10⁻         82   42
3.31 · 10⁻         82   40     1.81 · 10⁻         82   40
4.83 · 10⁻         82   38     3.31 · 10⁻         82   38
4.86 · 10⁻         82   36     4.84 · 10⁻         82   36

TABLE I: Composition of the outer crust of a NS using the DZ10 and DZ10-GP mass models. In the first and fourth columns we report the maximum value of the pressure at which the nucleus is found using the minimisation procedure. The horizontal line separates the measured and extrapolated masses reported in AME2016 [27].
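To make Eqs. (5) and (6) concrete, here is a heavily simplified, self-contained sketch of the minimisation that produces sequences like those of Table I. It neglects the lattice term, treats the electrons as an ultra-relativistic Fermi gas, and uses the semi-empirical mass formula as a stand-in for DZ10-GP, so all numerical values are illustrative only:

```python
import numpy as np

HBARC = 197.327              # hbar * c in MeV fm
M_P, M_N = 938.272, 939.565  # proton / neutron masses in MeV

def semf_binding(N, Z):
    """Toy stand-in for DZ10-GP: semi-empirical mass formula binding energy (MeV)."""
    A = N + Z
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    pairing = 0.0
    if N % 2 == 0 and Z % 2 == 0:
        pairing = aP / np.sqrt(A)
    elif N % 2 == 1 and Z % 2 == 1:
        pairing = -aP / np.sqrt(A)
    return (aV * A - aS * A**(2 / 3) - aC * Z * (Z - 1) / A**(1 / 3)
            - aA * (N - Z)**2 / A + pairing)

def crust_composition(P):
    """Minimise the Gibbs free energy per nucleon (cf. Eq. (5)) at pressure P
    (MeV fm^-3), with the lattice term neglected and the electrons treated as
    an ultra-relativistic Fermi gas: mu_e = (12 pi^2 (hbar c)^3 P)^(1/4),
    so that g = M(N, Z)/A + (Z/A) mu_e."""
    mu_e = (12 * np.pi**2 * HBARC**3 * P) ** 0.25
    best, g_best = None, np.inf
    for Z in range(20, 50):
        for N in range(20, 90):
            A = N + Z
            M = Z * M_P + N * M_N - semf_binding(N, Z)  # nuclear mass, cf. Eq. (6)
            g = M / A + (Z / A) * mu_e                  # Gibbs energy per nucleon
            if g < g_best:
                g_best, best = g, (N, Z)
    return best

if __name__ == "__main__":
    for P in (1e-9, 1e-4):
        N, Z = crust_composition(P)
        print(f"P = {P:.0e} MeV fm^-3  ->  N = {N}, Z = {Z}")
```

With a realistic mass table and the full electron and lattice terms of Ref. [39], the same scan over (N, Z) at each pressure yields sequences like those of Table I; the toy model above only reproduces the qualitative trend, namely iron-region nuclei at low P and increasingly neutron-rich nuclei as the pressure grows.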
Since our goal is to obtain the statistical uncertainties of the equation of state, we perform a simple Monte Carlo sampling of the error bars of our DZ10-GP model (under a Gaussian assumption). We generate a large set of new mass tables, and we use them to calculate the composition of the outer crust. Using a frequentist approach [41], we define the existence probability of each nucleus as the number of times a given nucleus appears in the various EoS at a given pressure, divided by the total number of mass tables. See Ref. [15] for more details.

In Fig. 8, we show the evolution of the existence probability of each nucleus in the outer crust as a function of the pressure of the star. We notice that, as confirmed by other authors [38], the favourable configurations are those close to the neutron shell closures at N = 50 and N = 82. However, due to the large error bars, there is a non-negligible probability for several other nuclei to be present within the outer crust. For example, in the transition region from N = 50 to N = 82, we observe that other nuclei may also be present, such as ruthenium, niobium, or zirconium. We finally notice that, given the structure of the error bars of the model illustrated in Fig. 7, the existence probabilities become smaller in the extrapolated region, reflecting the uncertainty of the mass model used. To have a better picture of the outer crust, a further reduction of the error bars, and thus an increase of the model accuracy, is required.

Using the same data set, we also define a statistical uncertainty for the EoS: from the ensemble of EoS built before, we extract the 68%, 95%, and 99% quantiles, i.e., the 1-σ, 2-σ, and 3-σ deviations, under the assumption that the errors follow a Gaussian distribution. The results are presented in Fig. 9. We observe that the largest uncertainties are located close to the transition from N = 50 to N = 82, and approaching the transition to the inner crust.

V. CONCLUSIONS
FIG. 8: Existence probability of a given nucleus within the outer crust as a function of the pressure, obtained via a Monte Carlo sampling using the DZ10-GP mass table. See text for details.

FIG. 9: Equation of state, including statistical uncertainties, of the outer crust of a NS, calculated using the DZ10-GP mass model. See text for details.

By using a Gaussian process fitted to the residuals of the Duflo-Zucker mass model, we have been able to create a mass model with a global RMS of less than 200 keV. The resulting DZ10-GP model has the major advantage of having a very limited number of parameters (10 in the original DZ model plus 4 for the GP), and it is also one of the very few mass models equipped with error bars [29, 42]. The values of the mass table are available in the Supplementary Material.

Apart from its simplicity and reduced computational cost, a GP also has the advantage of automatically providing error bars [43]. This is not the case for other machine learning methods, for example decision trees [33] or neural networks [18]. As discussed in an Editorial in Physical Review A [44], the use of theoretical error bars is fundamental in order to make a reasonable comparison with experimental data, especially when using the mass model to perform extrapolations; this is not common practice among different mass models, apart from a few exceptions [29, 42, 45].

In this article, we have also applied the DZ10-GP mass model to study the composition of the outer crust of a neutron star, paying particular attention to the role of statistical errors and how they propagate to the final EoS. Following the methodology presented in Ref. [15], we have defined an existence probability of a nucleus within the crust. Such a quantity helps us identify possible accuracy problems related to our model, and it may help prioritise future experimental proposals to further improve our knowledge of the crust of a neutron star.
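The error-propagation strategy used throughout Sec. IV can be sketched generically: sample Gaussian-perturbed mass tables, evaluate a derived quantity on each sample, and extract central quantile bands. In the sketch below, the table size, the 0.178 MeV width, and the use of the mean binding energy as the derived quantity are illustrative assumptions:

```python
import numpy as np

def mc_propagate(b_mean, b_sigma, derived, n_samples=1000, seed=1):
    """Monte Carlo propagation of Gaussian mass-table uncertainties: sample
    perturbed tables, evaluate a derived quantity, return central quantile bands."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_samples)
    for i in range(n_samples):
        table = rng.normal(b_mean, b_sigma)  # one perturbed mass table
        out[i] = derived(table)
    # central intervals with 1-, 2- and 3-sigma Gaussian coverages
    return {s: np.quantile(out, [(1 - p) / 2, (1 + p) / 2])
            for s, p in ((1, 0.6827), (2, 0.9545), (3, 0.9973))}

if __name__ == "__main__":
    b = np.full(50, 1000.0)    # toy binding energies (MeV)
    sig = np.full(50, 0.178)   # toy GP error bars (MeV)
    bands = mc_propagate(b, sig, derived=np.mean, n_samples=5000)
    for s, (lo, hi) in bands.items():
        print(f"{s}-sigma band of the mean binding energy: [{lo:.3f}, {hi:.3f}] MeV")
```

In the paper, the derived quantity is the full EoS at each baryonic density, and the same quantile extraction yields the confidence bands of Fig. 9.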
Acknowledgements
This work has been supported by STFC Grant No. ST/P003885/1. We also thank A. Gration for training us on the usage of Gaussian process regression.

[1] S. Greif, G. Raaijmakers, K. Hebeler, A. Schwenk, and A. Watts, Monthly Notices of the Royal Astronomical Society, 5363 (2019).
[2] N. Chamel and P. Haensel, Living Reviews in Relativity, 10 (2008).
[3] P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer-Verlag, 1980).
[4] N. Chamel, R. Pavlov, L. Mihailov, C. J. Velchev, Z. K. Stoyanov, Y. Mutafchieva, M. Ivanovich, J. Pearson, and S. Goriely, Physical Review C, 055804 (2012).
[5] A. Sobiczewski, Y. A. Litvinov, and M. Palczewski, Atomic Data and Nuclear Data Tables, 1 (2018).
[6] N. Wang and M. Liu, Physical Review C, 051303 (2011).
[7] R. Wolf, D. Beck, K. Blaum, C. Böhm, C. Borgmann, M. Breitenfeldt, N. Chamel, S. Goriely, F. Herfurth, M. Kowalska, et al., Physical Review Letters, 041101 (2013).
[8] J. Barea, A. Frank, J. G. Hirsch, and P. Van Isacker, Physical Review Letters, 102501 (2005).
[9] G. T. Garvey and I. Kelson, Physical Review Letters, 197 (1966).
[10] J. W. Clark, in Scientific Applications of Neural Nets (Springer, 1999), pp. 1–96.
[11] S. Athanassopoulos, E. Mavrommatis, K. Gernoth, and J. W. Clark, Nuclear Physics A, 222 (2004).
[12] S. Athanassopoulos, E. Mavrommatis, K. Gernoth, and J. W. Clark, arXiv preprint nucl-th/0511088 (2005).
[13] R. Utama, J. Piekarewicz, and H. Prosper, Physical Review C, 014311 (2016).
[14] L. Neufcourt, Y. Cao, W. Nazarewicz, F. Viens, et al., Physical Review C, 034318 (2018).
[15] A. Pastore, D. Neill, H. Powell, K. Medler, and C. Barton, Physical Review C, 035804 (2020).
[16] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, Neural Networks, 861 (1993), ISSN 0893-6080.
[17] K. Xu, J. Li, M. Zhang, S. S. Du, K.-i. Kawarabayashi, and S. Jegelka, arXiv preprint arXiv:2009.11848 (2020).
[18] A. Pastore and M. Carnini, arXiv preprint arXiv:2012.06605 (2020).
[19] L. S. Bastos and A.
O'Hagan, Technometrics, 425 (2009).
[20] A. Pastore, M. Shelley, S. Baroni, and C. Diget, Journal of Physics G: Nuclear and Particle Physics, 094003 (2017).
[21] M. G. E. Shelley, P. Becker, A. Gration, and A. Pastore, Acta Physica Polonica B (2019).
[22] R. M. Neal, Bayesian Learning for Neural Networks, vol. 118 (Springer Science & Business Media, 2012).
[23] J. Duflo and A. Zuker, Physical Review C, R23 (1995).
[24] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning (MIT Press, Cambridge, Mass., 2006), ISBN 978-0-262-18253-9.
[25] GPy,
GPy: A Gaussian process framework in Python, http://github.com/SheffieldML/GPy (since 2012).
[26] A. Gration and M. I. Wilkinson, Monthly Notices of the Royal Astronomical Society, 4878 (2019).
[27] M. Wang, G. Audi, F. Kondev, W. Huang, S. Naimi, and X. Xu, Chinese Physics C, 030003 (2017).
[28] A. Zuker, in (SISSA Medialab, 2011), vol. 100, p. 083.
[29] C. Qi, Journal of Physics G: Nuclear and Particle Physics, 045104 (2015).
[30] A. Pastore, Journal of Physics G: Nuclear and Particle Physics, 052001 (2019).
[31] S. N. Lahiri, Annals of Statistics, pp. 386–404 (1999).
[32] G. Bertsch and D. Bingham, Physical Review Letters, 252501 (2017).
[33] M. Carnini and A. Pastore, Journal of Physics G: Nuclear and Particle Physics (2020).
[34] T. Nikšić and D. Vretenar, Physical Review C, 024333 (2016).
[35] R. M. Neal, arXiv preprint physics/9701026 (1997).
[36] R. B. Gramacy and H. K. H. Lee, Statistics and Computing, 713 (2012), ISSN 1573-1375.
[37] C. J. Geyer, Statistical Science, pp. 473–483 (1992).
[38] J. Pearson, S. Goriely, and N. Chamel, Physical Review C, 065810 (2011).
[39] G. Baym, C. Pethick, and P. Sutherland, The Astrophysical Journal, 299 (1971).
[40] D. Basilico, D. P. Arteaga, X. Roca-Maza, and G. Colò, Physical Review C, 035802 (2015).
[41] R. J. Barlow, A Guide to the Use of Statistical Methods in the Physical Sciences (John Wiley, 1989).
[42] S. Goriely and R. Capote, Physical Review C, 054318 (2014).
[43] J. Dobaczewski, W. Nazarewicz, and P. Reinhard, Journal of Physics G: Nuclear and Particle Physics, 074001 (2014).
[44] Editorial, Physical Review A, 040001 (2011).
[45] T. Haverinen and M. Kortelainen, Journal of Physics G: Nuclear and Particle Physics 44.