Ergodic sampling of the topological charge using the density of states
Guido Cossu, David Lancaster, Biagio Lucini, Roberto Pellegrini, Antonio Rago
Eur. Phys. J. C manuscript No. (will be inserted by the editor)

Guido Cossu, David Lancaster^{a,3}, Biagio Lucini^{b,4}, Roberto Pellegrini, Antonio Rago^{c,3}

Braid Technologies, Shibuya 2-24-12, Tokyo, Japan
School of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3JZ, UK
Centre for Mathematical Sciences, Plymouth University, Plymouth, PL4 8AA, United Kingdom
Department of Mathematics, Computational Foundry, Swansea University (Bay Campus), Swansea SA2 8EN, UK

February 9, 2021
Abstract
In lattice calculations, the approach to the continuum limit is hindered by the severe freezing of the topological charge, which prevents ergodic sampling in configuration space. In order to significantly reduce the autocorrelation time of the topological charge, we develop a density of states approach with a smooth constraint and use it to study SU(3) pure Yang Mills gauge theory near the continuum limit. Our algorithm relies on simulated tempering across a range of couplings, which guarantees the decorrelation of the topological charge and ergodic sampling of topological sectors. Particular emphasis is placed on testing the accuracy, efficiency and scaling properties of the method. In their most conservative interpretation, our results provide firm evidence of a sizeable reduction of the exponent z related to the growth of the autocorrelation time as a function of the inverse lattice spacing.

PACS: 11.15.Ha, 12.38.Aw, 12.38.Ge

1 Introduction

In four Euclidean spacetime dimensions, gauge field configurations can be classified according to their topological charge Q, defined as

Q = (1/32π²) ∫ d⁴x ε_{μνρσ} Tr(G_{μν} G_{ρσ}) , (1)

where G_{μν}(x) is the field tensor. This integral can be rephrased as a mapping from a three-sphere to the SU(3) group, exposing the fact that Q is related to the homotopy group π₃ : S³ ↦ SU(3). With the normalisation in equation (1), Q is integer-valued.

a e-mail: [email protected]
b e-mail: [email protected]
c e-mail: [email protected]
d Work done prior to joining Amazon.

χ, the susceptibility of Q per unit of volume, is related to the mass of the η′ particle by the Witten-Veneziano formula [1, 2]. Therefore, the topological charge Q and the topological susceptibility χ play a phenomenologically relevant role in the theory of the strong interactions and in any theoretically convenient generalisation of the latter such as the large-N limit of SU(N) gauge theories [3].
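As an illustration of the index contraction in equation (1), the sketch below builds the four-dimensional Levi-Civita symbol and evaluates the charge density at a single site. The field strength G here is just random matrix data standing in for a real gauge configuration, so this is purely illustrative; the lattice charge actually used later in this paper is instead defined through Wilson-flowed, plaquette-like quantities.

```python
import itertools
import numpy as np

# Levi-Civita symbol in four dimensions: sign = (-1)^(number of inversions)
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    inversions = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1) ** inversions

def topological_charge_density(G):
    """q = eps_{mu nu rho sigma} Tr(G_{mu nu} G_{rho sigma}) / (32 pi^2).

    G: array of shape (4, 4, N, N) holding the matrix-valued field
    strength G_{mu nu} at a single site; N is the colour dimension.
    """
    # Tr(G_{mu nu} G_{rho sigma}) for all index combinations
    tr = np.einsum('mnab,rsba->mnrs', G, G)
    return np.einsum('mnrs,mnrs->', eps, tr) / (32 * np.pi ** 2)

# example: a field strength symmetric in (mu, nu) must give zero density,
# by antisymmetry of the Levi-Civita symbol
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4, 3, 3))
G_sym = (G + G.transpose(1, 0, 2, 3)) / 2
q_sym = topological_charge_density(G_sym)
```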
It is hardly surprising then that precise measurements of Q and χ have been one of the most active areas of Lattice Gauge Theory since its earliest days (see e.g. [4, 5] for recent reviews). In addition, more recent investigations have used lattice determinations of topological quantities to estimate the mass of axions from first principles in order to determine a bound for their density in the Universe [6-8].

On a finite lattice with periodic boundary conditions in all directions, gauge configurations split into topological sectors labelled by the value of Q and separated by free energy barriers [9]. Monte Carlo simulations have shown a sharp growth of the height of these barriers as a →
0, with a consequent freezeout of the topological sector in numerical calculations [10]. The phenomenon becomes even more pronounced as the number of colours N increases [11, 12]. This dramatic slowing down of Q and the resulting very large autocorrelation times pose serious problems of ergodicity in simulations and cause a very slow convergence of physical observables such as hadron masses to their thermodynamic limit values [13, 14]. Since the toroidal topology of the domain plays a crucial role in the freezeout, other setups have been proposed, based for example on open boundary conditions in time [15] (used at large N in [16, 17]) or on non-orientable manifolds [18]. While these setups are very promising, their properties and potential drawbacks, such as the loss of translational invariance with open boundary conditions in time, need further study.

In this work, we take an alternative approach to the problem of topological freezing without abandoning the use of periodic boundary conditions in all directions. For the theory formulated on a torus, we build a sequence of couplings that interpolate from a relatively strong value to a weak coupling value and we perform numerical simulations using an algorithm in which the coupling is also a dynamical variable taking values in the prebuilt set. More specifically, we use the density of states approach [19, 20] formulated with a Gaussian constraint [21] and tempering [22, 23]. The idea behind this approach is that the strong coupling values will act as an ergodic reservoir of topological sectors, while the tempering guarantees thermal equilibrium of the latter in the Monte Carlo at weaker coupling. Preliminary results of our investigation have appeared in [24].

The rest of the work is organised as follows. In Sect. 2, we recall some key properties of the density of states approach. Sect. 3 develops the formalism for a Gaussian constraint and discusses a simplifying approximation.
The lattice gauge theory model, the simulations and results are described in Sect. 4. This is a long section as it covers our discussion of the topological charge and demonstrates that the density of states approach leads it to couple more weakly to the long timescale modes of the system. Techniques to analyse the autocorrelation time in this situation are developed. We work on three lattice sizes and present our results concerning scaling in Sect. 5, before summarising our findings in Sect. 6. Two appendices provide more technical details: Appendix A compares a multi-histogram calculation using our reference simulations with the density of states approach; Appendix B contains tables recording details of the simulations performed in this study.

2 The density of states approach

The density of states technique has been shown to be useful in a variety of models with continuous variables, for example: U(1) [20], complex action [25], Bose gas [26]. Other versions of the technique have also been used to study SU(3) gauge theory [27, 28]. In this paper we further develop the technique and use it to make a detailed study of SU(3) lattice gauge theory that extends the results of [24].

The density of states approach developed in [19] reconstructs canonical expectation values from a sufficiently large and closely spaced set of functional integrals, each constrained to have central action S_i. These constrained functional integrals provide an approximate way to define observables on the micro-canonical ensemble at S_i, and while the original development of the subject employed a sharp constraint, in this paper we use a Gaussian constraint of width δ. The choice of an analytic Gaussian constraint allows simulations to use the global HMC algorithm which commonly underlies simulations of lattice gauge theory. The constrained functional integrals are defined using double bracket notation as

⟨⟨B[φ]⟩⟩_i = (1/N_i) ∫ Dφ B[φ] e^{−a_i S[φ]} e^{−(S[φ]−S_i)²/(2δ²)} .
(2)

Here B and φ denote generic operators and fields respectively. The need for the term e^{−a_i S[φ]} is explained below. N_i is a normalisation factor written as a constrained functional integral without the operator B,

N_i = ∫ Dφ e^{−a_i S[φ]} e^{−(S[φ]−S_i)²/(2δ²)} . (3)

Noting the definition of the density of states,

ρ(S) = ∫ Dφ δ(S[φ] − S) , (4)

we see that since the expression for N_i involves only the action, the functional integral defining it can be replaced by an ordinary integral over the density of states,

N_i = ∫ dS ρ(S) e^{−a_i S} e^{−(S−S_i)²/(2δ²)} . (5)

So the N_i are closely related to the density of states evaluated at S_i. This is most apparent in the limit δ → 0, where the constraint becomes sharp and there is no need for the a_i's. However, at finite δ, the constraint is wide enough to require the term e^{−a_i S[φ]} to compensate the growth of the density of states in the measure and to ensure that the mean constrained action ⟨⟨S⟩⟩ really is at S_i. The value of a_i needed to accomplish this is determined by an iterative procedure described below.

The simulations which underlie the method provide estimates of these double bracketed expectations (2) for a set of closely spaced S_i labelled by the index i = 1 . . . N_TEMPER, which we refer to as "tempers" (we reserve the term "replica" to refer to repeats of the whole simulation procedure with identical parameters, but different random number sequences). Having chosen an appropriate set of central actions S_i to characterise the tempers, the method consists of two phases.

Robbins Monro (RM) Phase to determine a_i through a procedure of minimising ⟨⟨S − S_i⟩⟩_i for an iterative sequence of a_i's. This quantity converges to zero when a_i becomes the derivative of the (log) density of states at S_i.
Knowledge of the final a_i's is sufficient to construct a piecewise approximation to the density of states that can immediately be used to estimate canonical expectation values of operators that depend only on the action, such as moments of the plaquette.

Measurement Phase (MP) at fixed a_i as determined by the RM phase. This is necessary to compute canonical expectation values of generic operators through a reweighting procedure. For example, the topological charge requires more information than is contained in the density of states alone. Moreover, the measurement phase provides dynamic information about the efficiency of the method through the autocorrelation.

We are particularly concerned about the accuracy, the relationship between expectation values computed using the RM phase and those based on the measurement phase (MP), the autocorrelation time and the efficiency of the tempering method. To achieve these aims we have repeated the whole simulation procedure, consisting of both phases, N_REPLICA times, as this provides a way of estimating the errors inherent in the method.

While the density of states method is applicable to any model, and indeed has been used to study phase transitions [20], the Yang Mills theory we study here is far from any phase transition and all results are smooth functions of parameters. The conclusions that rely on working in this context are not restricted to lattice gauge theory but are expected to hold for other models in which there is a single coupling multiplying the action, as this most closely mimics simple statistical mechanics systems.
3 Formalism for the Gaussian constraint

According to the density of states approach, log ρ(S) is approximated as piecewise linear with coefficients given by the a_i appearing in (2),

ρ(S) = ρ_i e^{a_i (S − S_i)} , (S_{i−1} + S_i)/2 < S < (S_i + S_{i+1})/2 , (6)

where ρ_i is shorthand for ρ(S_i). By convention we take S_i to grow with i. Piecewise continuity leads to

ρ_{i+1} = ρ_i e^{(a_i + a_{i+1})(S_{i+1} − S_i)/2} . (7)

This relation, along with the (S_i, a_i), is sufficient to compute the piecewise approximation to the density of states in terms of the value ρ_{i_c} at some base index i_c. As ρ varies over many orders of magnitude, the base index is chosen depending on the particular problem. When reweighting to coupling β it is chosen so as to minimise |a_{i_c} − β|.

To avoid unduly long formulae, the expressions given here assume fixed spacing S_{i+1} − S_i = δS and also fixed Gaussian width δ_i = δ. However, motivated by the desire to ensure effective mixing across the range of tempers, variable spacing is used in simulations.

Besides δ and δS, another parameter provides a useful scale to the problem. This parameter is model dependent and we define σ to be the standard deviation of action fluctuations in the unconstrained system. The strength of the Gaussian constraint is determined by the dimensionless ratio δ/σ. Although σ depends on the coupling, for this study it does not change very much over the narrow range of tempers and thus provides a useful scale. For the pure gauge SU(3) theory studied here, a_i turns out to be approximately linear: a_i ≈ a_{i_c} − (i − i_c) δS/σ² for indices close to the base index i_c, where σ can be regarded as constant. Consequently, for small k, ρ_{i_c+k} ≈ ρ_{i_c} e^{k a_{i_c} δS} e^{−k² δS²/(2σ²)}.
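A minimal sketch of how the piecewise approximation (6), (7) is used in practice: given the anchors S_i and the slopes a_i delivered by the RM phase, log ρ_i is accumulated with the continuity relation (7). The Gaussian toy density of states below, for which a(S) = −S/σ² is known exactly so that the trapezoidal accumulation is exact, is an assumption introduced purely to check the reconstruction.

```python
import numpy as np

def log_rho_piecewise(S_i, a_i, i_c=None):
    """Accumulate the slopes a_i = d(log rho)/dS at the anchors S_i into
    values of log rho_i, using the continuity relation (7):
    log rho_{i+1} = log rho_i + (a_i + a_{i+1}) (S_{i+1} - S_i) / 2.
    The overall constant is fixed by log rho_{i_c} = 0."""
    S_i, a_i = np.asarray(S_i), np.asarray(a_i)
    steps = 0.5 * (a_i[:-1] + a_i[1:]) * np.diff(S_i)
    log_rho = np.concatenate(([0.0], np.cumsum(steps)))
    if i_c is not None:
        log_rho -= log_rho[i_c]
    return log_rho

# toy check: Gaussian density of states, log rho(S) = -S^2 / (2 sigma^2),
# so a(S) = -S / sigma^2 and the trapezoidal sum reproduces it exactly
sigma = 200.0
S = np.linspace(-500, 500, 25)   # hypothetical anchor actions
a = -S / sigma**2                # slopes as the RM phase would return them
lr = log_rho_piecewise(S, a, i_c=12)
```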
However, we emphasise that this observation relies on the smoothness of the model and the limited range of tempers, and should not be regarded as any kind of modified version of the density of states method.

3.1 Canonical expectation values of the action and its variance: LLR

The parameters a_i obtained from the RM phase for each S_i provide the piecewise estimate of the log of the density of states, so without any measurement phase we have access to the full probability density of the action for a range of couplings β. Estimates can be made of canonical expectation values of functions of the action, the most common being ⟨S⟩ and ⟨(S − ⟨S⟩)²⟩, which in the lattice gauge theory studied here are directly related to moments of the plaquette. We shall call this approach "LLR" as a shorthand to distinguish it from the "reweighting" approach discussed in the next subsection.

Direct application of the piecewise approximation (6), (7) leads to the following expression for the partition function:

Z_LLR = ∫ dS ρ(S) e^{−βS} (8)
      = Σ_i [ρ_i e^{−βS_i} / (a_i − β)] [e^{(a_i−β)δS/2} − e^{−(a_i−β)δS/2}]
      ≈ Z_LLR′ = δS Σ_i ρ_i e^{−βS_i} , (9)

and similar formulae hold for the expectation values ⟨S⟩ and ⟨(S − ⟨S⟩)²⟩. These results, because of their reliance on the piecewise approximation, should be taken to be accurate to order δS²/σ².

The further approximation on the third line can be seen as an expansion for (a_i − β)δS ≪ 1. This "LLR′" approach is simpler yet remarkably accurate. Because of its advantages in exposition we develop some formalism for this LLR′ approximation and define weights v_i:

v_i = ρ_i e^{−βS_i} , (10)

then

⟨f(S)⟩_LLR′ = Σ_i v_i f(S_i) / Σ_i v_i . (11)

These weights can be normalised to Σ_i v_i =
1, but for numerical reasons in the course of intermediate computations, it is convenient to choose the weight at the central index v_{i_c} to be unity by defining ρ_{i_c} = ρ(S_{i_c}) = e^{βS_{i_c}}. The full LLR formulae are straightforward to derive from (8), but ugly in comparison with (10) and (11).

3.2 Canonical expectation values of arbitrary operators: reweighting

Observables that are not simply functions of the action cannot be computed using the density of states alone, and a measurement phase is needed. The measurements are performed on a set of constrained tempering runs with parameters (S_i, a_i) and reweighting is then used to obtain the canonical expectation value:

⟨B[φ]⟩ = (1/Z) ∫ Dφ B[φ] e^{−βS[φ]} . (12)

To relate this to the constrained functional integrals we consider sums of spaced Gaussians. We intend to use the functional integral equivalent of the approximate identity presented below,

K(x) = [1/(√(2π) δ)] Σ_i e^{−(x − i δx)²/(2δ²)}
     = (1/δx) [1 + 2 Σ_{i>0} e^{−2(iπδ/δx)²} cos(2iπx/δx)]
     = (1/δx) [1 + 2 e^{−2(πδ/δx)²} cos(2πx/δx) + . . .] ≈ 1/δx . (13)

This identity should be more properly understood in its integral form,

∫_Γ dx f(x) K(x) ≈ (1/δx) ∫_Γ dx f(x) , (14)

where f(x) are smooth test functions of width exceeding δ or δx defined over the domain Γ.

This approximation is extremely accurate and can be checked in this one-dimensional context either analytically for Gaussian test functions, or numerically to understand the errors introduced by a finite set of Gaussians and by the choice of the ratio δ/δx. In the limit δ → 0, K(x) becomes a comb of delta functions with spacing δx.

Returning to (12), we introduce the functional integral generalisation of the sum (13) into the expression to obtain:

⟨B[φ]⟩ ≈ (1/Z) [δS/(√(2π) δ)] Σ_i ∫ Dφ B[φ] e^{−βS[φ]} e^{−(S[φ]−S_i)²/(2δ²)} .
(15)

In the light of the discussion above, the accuracy of the approximation is rather better than anticipated from the delta function limit, but nonetheless still contains an order δS²/σ² term when there is only a finite set of Gaussians.

Notice that from the definition of the double bracket quantities it is possible to reweight in the usual sense, that is, to include part of the exponential term as a measured operator rather than use it for importance sampling in the Monte Carlo simulation:

⟨⟨B[φ] e^{(a_i−β)S}⟩⟩_i = (1/N_i) ∫ Dφ B[φ] e^{−βS[φ]} e^{−(S[φ]−S_i)²/(2δ²)} . (16)

This resembles the terms in the sum (15), so the canonical expectation value can be written in terms of the double bracket quantities as:

⟨B[φ]⟩ ≈ (1/Z) [δS/(√(2π) δ)] Σ_i N_i ⟨⟨B[φ] e^{(a_i−β)S}⟩⟩_i , (17)

with

Z = [δS/(√(2π) δ)] Σ_i N_i ⟨⟨e^{(a_i−β)S}⟩⟩_i . (18)

The normalisation factors N_i only involve the action (5), so they can be evaluated using the piecewise approximation. This introduces an additional approximation of order δS²/σ²:

N_i ≈ Σ_j ρ_j ∫_{(S_{j−1}+S_j)/2}^{(S_j+S_{j+1})/2} dS e^{a_j(S−S_j)} e^{−a_i S} e^{−(S−S_i)²/(2δ²)} . (19)

So the N_i can be analytically evaluated in terms of truncated Gaussian integrals or error functions. Using this expression in (17) and (18), we obtain the reweighting formula for fixed separation between Gaussians:

⟨B[φ]⟩ ≈ Σ_i N_i ⟨⟨B[φ] e^{(a_i−β)S}⟩⟩_i / Σ_i N_i ⟨⟨e^{(a_i−β)S}⟩⟩_i . (20)

We define the following weights

w_i = [δS/(√(2π) δ)] N_i e^{(a_i−β)S_i} , (21)

where the prefactors are motivated by the simple relation between w_i and the LLR′ weights v_i (10) in various limits explored in section 3.2.2 below. The reweighting formula then takes its final form, which overall is accurate to order δS²/σ²:
⟨B[φ]⟩ ≈ Σ_i w_i ⟨⟨B[φ] e^{(a_i−β)(S−S_i)}⟩⟩_i / Σ_i w_i ⟨⟨e^{(a_i−β)(S−S_i)}⟩⟩_i . (22)

This expectation value is defined in terms of the ratio of sums and it is important to avoid bias in estimating the ratio. Bootstrap techniques are used as explained later.

3.2.1 The quenched approximation

If the fluctuations of the action are ignored and we approximate (22) by neglecting the exponential factors in the double brackets, we obtain a simpler formula:

⟨B[φ]⟩ ≈ Σ_i w_i ⟨⟨B[φ]⟩⟩_i / Σ_i w_i . (23)

At the risk of confusion in the lattice gauge theory community, we term this approximation "quenched", following the more general usage in the theory of disordered systems.

In an attempt to justify this approximation we first consider the term (a_i − β). Provided that the range of reweighting is sufficiently narrow to allow the approximate linear relation a_i = a_{i_c} − (i − i_c) δS/σ², and noting that (a_{i_c} − β) < δS/σ², we have:

a_i − β ≈ −(i − i_c) δS/σ² . (24)

The double bracket expectation of the second term in the exponent vanishes, ⟨⟨S − S_i⟩⟩_i =
0, according to the tuning of a_i during the RM phase. Of course, because this second term appears in the exponent, fluctuations of the action cause higher moments to contribute. However, the role of the Gaussian constraint is precisely to limit the size of these fluctuations and we expect even moments to scale as:

⟨⟨(S − S_i)^n⟩⟩_i ∼ δ^n . (25)

Therefore, provided the exponential can be expanded, we may anticipate the overall result:

⟨⟨e^{(a_i−β)(S−S_i)}⟩⟩_i ∼ 1 + O(δ² δS²/σ⁴) . (26)

If, by ignoring possible correlations between S and B[φ], we make a similar assumption for the numerator of (22), we may expect that the quenched approximation differs from the full formula by terms of order δ² δS²/σ⁴. Since, in practical work, we set δ/δS to be a value of order 1, the quenched approximation is plausibly accurate to the same order δS²/σ² as was the full reweighting formula (22). However, it will become clear in the numerical work that while the quenched approximation is often good, it can lead to very different predictions in some circumstances.

The quenched approximation has a much simpler structure than the full expression and it can be interpreted as a reweighting at each time step or configuration of the Monte Carlo simulation in the measurement phase. This allows for a straightforward understanding of the quenched autocorrelation as that of the evolution of the reweighted quantity. Indeed, this is the basis for the autocorrelation time reported in [24].

In the full expression, (22), the expectation value is defined in terms of the ratio of sums. The signal used to evaluate the autocorrelation time will be dominated by the larger of the autocorrelators of the numerator and denominator, in practice always the numerator. It is then important to avoid bias in estimating the ratio, and bootstrap techniques are used as explained later.
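The quenched formula (23) can be illustrated on a Gaussian toy model, using the LLR′ weights v_i of (10) in place of w_i (the two are proportional when the a_i are linear, as discussed in section 3.2.2 below). The observable is taken to be B = S, whose constrained mean is S_i by the RM tuning, so the exact canonical answer ⟨S⟩ = −βσ² is known. All numbers here are hypothetical.

```python
import numpy as np

sigma = 50.0                      # width of the Gaussian toy density of states
S_i = np.linspace(-400, 400, 55)  # central actions of the tempers
a_i = -S_i / sigma**2             # slopes from the RM phase (exact in this toy)
B_i = S_i                         # per-temper <<B>>_i; with B = S, the
                                  # constrained mean is S_i by the RM tuning

def quenched(beta):
    """Quenched reweighting, eq (23): <B> ~ sum_i v_i <<B>>_i / sum_i v_i."""
    # log rho_i from the continuity relation (7); shift by the largest
    # log-weight for numerical stability before exponentiating
    log_rho = np.concatenate(
        ([0.0], np.cumsum(0.5 * (a_i[:-1] + a_i[1:]) * np.diff(S_i))))
    log_v = log_rho - beta * S_i
    v = np.exp(log_v - log_v.max())
    return np.sum(v * B_i) / np.sum(v)

# exact canonical answer for the Gaussian model: <S> = -beta sigma^2
beta = 0.05
est = quenched(beta)   # close to -125.0
```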
3.2.2 Limits of the reweighting formula

Further intuition about the reweighting formula comes from considering various limits.

Firstly, consider the limit δ/σ → 0 with δS fixed. The weights simplify, w_i = v_i, and moreover in this limit fluctuations of the action are suppressed, so the reweighting formula becomes the quenched version (23), resembling the density of states result in the LLR′ approximation (11) extended to operators that do not solely depend on the action.

For fixed spacing δS, without taking any limit, the weights can be written as:

w_i = δS v_i Σ_j (ρ_j/ρ_i) e^{a_j(S_i−S_j)} [1/(√(2π) δ)] ∫_{S_j−S_i−δS/2}^{S_j−S_i+δS/2} dS e^{(a_j−a_i)S} e^{−S²/(2δ²)} . (27)

When the range of reweighting is sufficiently narrow to allow the approximate linear behaviour a_i = a_{i_c} − (i − i_c) δS/σ², the term (ρ_j/ρ_i) e^{a_j(S_i−S_j)} =
1, and the integral becomes a function of j − i, so the sum is independent of i and the weights w_i are proportional to the LLR′ weights v_i. Numerically, because the ranges over which we reweight are small, the a_i are linear and the weights obey w_i ∝ v_i rather accurately. When the sum of weights is normalised to unity, w_i and v_i differ by at most 4 × 10⁻⁵.

The limit δ → ∞, corresponding to no constraint, resembles the situation usually treated with the multi-histogram approach (see Appendix A for a general comparison). Relying on the approximate linearity of the a_i, the expression (27) can be simplified by expanding in σ/δ. However, fluctuations are not suppressed in this limit and the reweighting formula (22) does not reduce to the multi-histogram formula.

A more interesting limit is to take δ/σ → ∞ and δS/σ → ∞ with δS/δ fixed. Again we assume a narrow range and rely on linearity of the a_i to obtain

w_i = δS v_i σ/√(δ² + σ²) → (δS/δ) σ v_i . (28)

Since fluctuations are suppressed in this limit, the reweighting formula again becomes the quenched version (23).

Figure 1 Reference simulations using an unconstrained approach illustrating the range of parameters under consideration over the lattice sizes 16⁴, 20⁴, 24⁴. Top: Plaquette expectation. Bottom: Plaquette variance, including a volume factor for scaling.
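Before turning to the simulations, the Gaussian-comb identity (13), which underpins the reweighting construction, is straightforward to verify numerically. With δ/δx = 1.2, the ratio used for the constraints in this work, the leading correction 2 e^{−2(πδ/δx)²} is of order 10⁻¹², so K(x)·δx equals 1 to near machine accuracy for any x:

```python
import numpy as np

def K(x, dx, delta, n=200):
    """Sum of normalised Gaussians of width delta spaced dx apart, eq (13)."""
    i = np.arange(-n, n + 1)
    gaussians = np.exp(-(x - i * dx) ** 2 / (2 * delta ** 2))
    return np.sum(gaussians) / (np.sqrt(2 * np.pi) * delta)

# with delta/dx = 1.2 the leading correction 2*exp(-2*(1.2*pi)^2) ~ 1e-12,
# so the comb is flat: K(x) * dx = 1 for any offset x within the comb
dx, delta = 1.0, 1.2
vals = [K(x, dx, delta) * dx for x in (0.0, 0.25, 0.5)]
```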
4 Lattice gauge theory: simulations and results

Pure Yang Mills lattice gauge theory has been used as a testing ground for the density of states method since [19] studied SU(2) with a sharp "top hat" constraint and either Metropolis or Heat Bath algorithm. The smooth Gaussian constraint allows the HMC method to be used, as described for SU(2) in [21]. The tempering method was introduced for SU(3) in [24].

We work with SU(3) in four dimensions and employ a standard Wilson action, on 16⁴, 20⁴ and 24⁴ lattices. The HMC algorithm was used with the same trajectory length and integrator as [24] and these parameters allow us to validate our simulations in comparison with results in [10].

Preliminary simulations generated for the purposes of comparison used the canonical unconstrained approach rather than the density of states method. These simulations consider several lattice sizes and a range of couplings, among which many trajectories at constant box size can be identified to define a continuum limit. Figure 1 illustrates the range of parameters considered and these are given in table 1.

Table 1
Parameters of canonical reference simulations. For each lattice size, the range of β was spanned by 24 simulations.

Size    β range         Number of sweeps
16⁴     … < β < …       10^…
20⁴     … < β < …       5 × 10^…
24⁴     … < β < …       10^…

At larger values of β the dynamics of the topological charge becomes extremely slow. All simulations, both these unconstrained comparison runs and later constrained runs, use the HiRep code [29, 30] and are initialised by thermalising a random configuration.

As can be seen in figure 1, the β values for each lattice size all overlap in the approximate range 6.… < β < 6.…. The action in this region is of order … × 10^… at 16⁴, 3.… × 10^… at 20⁴ and 7.… × 10^… at 24⁴. The standard deviation of the action in the unconstrained case is about σ =
196 at 16⁴, σ = … at 20⁴ and σ = 391 at 24⁴ in the centre of the reweighting range. This parameter provides the scale for the spacing δS, of order 210 at 16⁴, 310 at 20⁴ and 420 at 24⁴. The width of the Gaussian constraints was set to be about 1.2 times the spacing in each case, δ/δS = 1.2.

4.1 Tempering

The constrained simulations evolve with an effective action H(a_i, φ, S_i) which incorporates the Gaussian constraint (see equation (2)):

H(a_i, φ, S_i) = a_i S[φ] + (S[φ] − S_i)²/(2δ²) . (29)

Because the constraint might allow energy barriers to block the simulation dynamics, there is concern that density of states simulations will be slower than unconstrained simulations. For pure gauge theory even the unconstrained simulations are notorious for the slow dynamics apparent in topological modes. In an attempt to overcome this problem we use tempering [22] (this was called the replica exchange method in [23, 24]). Tempering is a technique that considers an ensemble of simulations over a range of β, or in our

Available at https://github.com/claudiopica/HiRep.

Figure 2
Average swap probabilities during the course of the measurement phase, averaged over all replicas. Errors are a combination of statistical errors and the variation between replicas. The overlapping symbols indicate that the probabilities for 16⁴, 20⁴ and 24⁴ are indistinguishable.

case S_i, and allows configurations to migrate over this range. Configurations suffering from slow dynamics at large β can diffuse to small β, where they evolve more quickly, before returning to large β. We chose a set of N_TEMPER =
24 tempers, each with a fixed central action S_i. The central energies are listed in table 12 in Appendix B and correspond to a range of couplings given in table 2. These ranges are much narrower than the ranges of the unconstrained reference simulations.

Table 2
Parameters of the measurement phase of the density of states simulations. There were 24 tempers, or simulations at specific values of S_i. To aid comparison, the range of central action is given in terms of a coupling range below.

Size    β range         Number of sweeps
16⁴     … < β < …       2 × 10^…
20⁴     … < β < …       2 × 10^…
24⁴     … < β < …       2 × 10^…

At regular intervals (every 15 HMC steps in the measurement phase), the tempering method swaps configurations between pairs of adjacent tempers i and i+1 with probability

min(1, exp{H(a_i, φ_i, S_i) + H(a_{i+1}, φ_{i+1}, S_{i+1}) − H(a_i, φ_{i+1}, S_i) − H(a_{i+1}, φ_i, S_{i+1})}) . (30)

This preserves detailed balance of the entire system of multiple tempers and aids ergodicity.

In order to ensure that swap probabilities remain constant across the range of tempers, the values of the set of central energies S_i and the widths of the Gaussians δ_i were chosen so that the overlap of probability distributions remains fixed. This amounts to a requirement on the ratio of the spacing δS and the standard deviation of the constrained action

Figure 3
Evolution of configurations showing how swaps between adjacent tempers lead to global mixing. Example for 24⁴ over a short part of the run for one replica near the middle of the measurement phase. The bold line shows the history of a certain configuration.

(given by δ_i/√(1 + δ_i²/σ²)). The absolute values of δ_i/σ and δS/σ determine the potential accuracy of the method. The effect of this tuning was monitored through the swap probabilities, as shown in figure 2, where the aim was for a swap probability of 0.5. Effects from the edge of the temper range are apparent, but do not extend far into the main range.

Figure 3 shows the swap history over a short part of the measurement phase of the simulation, illustrating how the pair swaps enable mixing through the whole space. For all lattice sizes, more than half the configurations appear at some point in every temper. In other words, they appear at every possible central energy. Moreover, every configuration always appears in at least half the tempers.

4.2 Observables

Besides the plaquette expectation value, which is an action dependent quantity and hence can be computed using either of the techniques outlined in section 2, it turned out that the variance of the plaquette provides a sensitive test of the accuracy of the method.

Other observables were defined after configurations had been evolved according to the Wilson flow [31]. The simplest such observables were the action density E and its clover version E_sym, which have been studied before in the context of Wilson flow [31].

At 16⁴ the flow time was set to t = 2, at 20⁴ to t = 3.
125 and at 24⁴ to t = 4.5. These values scale with the size of the lattice through the smoothing radius √(2dt):

√(2dt) = L/4 , (31)

where d is the dimension (d =
4) and L is the linear size of the box in lattice units. Fixing the extent of the smoothing to a fixed fraction of the box in this way keeps the flow scale commensurate with the lattice as the size varies. The topological charge was measured at a flow time t that was scaled according to the size of the lattice as described above. The charge was defined in terms of plaquette-like quantities on the smoothed configuration [31, 32] that amount to the continuum expression given in the introduction (1).

4.3 RM Phase

With a single coupling constant and far from any phase transition, the value of a_i for a given central energy S_i will be close to the value of the coupling β at which ⟨S⟩ = S_i. The reference simulations determine the dependence of ⟨S⟩ on β, which provides a value that is used as a seed for the RM phase. Since the seed is known so accurately, the role of the RM phase is simply to improve, rather than to search for, the solution in a complex landscape. The literature [33] indicates that the efficiency of the Robbins Monro technique can depend sensitively on the relaxation parameters. This may be true for the high dimension problems common in AI, but is not the case for this improvement of a one dimensional problem. Indeed, a simple Gaussian model for the RM process allows the effect of different RM schedules to be explored for much longer convergence times than are possible for the real system. The model consists of sampling ⟨⟨S − S_i⟩⟩ from a Gaussian of width δ/√(1 + δ²/σ²) and mean −(a_i − a*_i) δ²/(1 + δ²/σ²), where a*_i is the assumed solution. It was found that there was little advantage in more elaborate tuning than the choice of the parameters n₀ and c in the RM iteration:

a_i(n) = a_i(n−1) − [c/(n₀ + n)] ⟨⟨S − S_i⟩⟩_i . (32)

In the RM phase, ⟨⟨S − S_i⟩⟩_i was computed from 50 measurements on consecutive HMC configurations with fixed a_i. Test runs which made an update according to (32) every measurement had the same behaviour and were barely any faster.
These simulations were performed without any replica swaps during the evaluation of ⟨⟨S − S_i⟩⟩_i because, while such swaps do not usually affect final convergence, they do prevent it at the very edges of the temper range. Moreover, they lead to larger early fluctuations, which would require a different tuning regime. Swaps were allowed between evaluations of ⟨⟨S − S_i⟩⟩_i.

The value of c was chosen to be rather smaller than the supposed optimal one, as larger values tended to drive the system too far from the accurate seed value. This choice does not compromise the convergence properties of the algorithm. The starting iteration n₀ was set to 20 as a compromise between excess change over the first few iterations and speed of convergence later (the value was slightly raised to n₀ = 25 for 24⁴).

In contrast to the details of the RM schedule, starting and following independent RM simulations was useful both to test the convergence and to provide a bootstrap mechanism for computing errors arising from inaccuracy in this phase. The whole simulation procedure is repeated for a set of N_REPLICA replicas. During the RM phase, each replica or RM iterate starts from a different random configuration, separately thermalised with a different sequence of random numbers, but with the same seed value a_i(0). We distinguish the replicas with an upper index a = 1 ... N_REPLICA, and in this work N_REPLICA = 8. The average over replicas at iteration n is denoted with an overbar:

ā_i(n) = (1/N_REPLICA) Σ_{a=1}^{N_REPLICA} a_i^a(n). (33)

According to the theory of the Robbins-Monro technique [33], different iterates will asymptotically converge to a Gaussian distribution of width proportional to 1/√n. The variance in replica space is therefore a convenient convergence parameter:

C_i(n) = (1/N_REPLICA) Σ_{a=1}^{N_REPLICA} (a_i^a(n) − ā_i(n))². (34)

Examples of typical convergence are shown in figure 4. Ideal behaviour is for the convergence parameter to decrease as 1/n. During the RM phase, the behaviour of the convergence parameter is characterised by a noisy initial phase, after which the scaling regime is reached. The shape of these curves varies considerably between replicas and tempers, but all eventually display the asymptotic behaviour.

The primary convergence criterion is to ensure that the convergence parameter has reached the asymptotic regime, but this still leaves some flexibility in exactly when to stop. In the present work we first ensure that the convergence has reached the asymptotic phase and then terminate after a fixed number of iterations n_max: n_max = 442 iterations for 16⁴, and 500 in the case of 20⁴ and 24⁴.

The error in the final value of a_i may be estimated from the standard deviation, √(C_i(n_max)/N_REPLICA), at the end of the RM phase. When averaged over tempers, this was 2.×10⁻…, 1.×10⁻… and 1.×10⁻… for 16⁴, 20⁴ and 24⁴ respectively. Note that these values are similar to the difference between the final mean value ā_i(n_max) and the initial seed value.

Figure 4 Convergence during the RM phase at 20⁴ for the temper with central action 363310. Top: history of a_i(n) showing 8 independent replicas, all starting from the seed 6.28980. The mean is also shown as the thick line. Bottom: history of the convergence parameter C_i(n) defined in the text.

As is clear from the behaviour of the convergence parameter shown in figure 4, it takes considerable effort to improve the accuracy of the a_i.

The LLR procedure of section 3.1 was followed to compute the plaquette and its variance based solely on the S_i and the a_i determined by the RM phase. This was done separately for each of the 8 replicas, and at the final stage errors are evaluated using bootstrap. Note that the scales are different for each size shown in figure 5. For the values of β shown in these plots, the original runs using traditional HMC were supplemented by additional unconstrained runs at 20⁴ and 24⁴ intended to improve statistics.

The range of β considered for these measurements was limited because deviations from a smooth plot appear near the edges of the temper range. The full LLR version of the weights (10) changes abruptly away from an approximately linear dependence on β, and it is straightforward to choose a cutoff that amounts to discarding 3 tempers at each boundary. The resulting ranges are given in table 3.

Although the plaquette can easily be measured using traditional techniques, it is reassuring that we obtain the same results, with high accuracy, using the density of states method.

Table 3
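The Gaussian surrogate for the RM process described above lends itself to a short numerical sketch. All parameter values below (δ, σ, c, n₀, the target a* and the seed) are illustrative rather than those of the paper, and the sign of the surrogate drift is chosen so that the damped iteration (32) contracts towards a*; the overall sign convention depends on how a enters the constrained weight.

```python
import math
import random

random.seed(1)

DELTA, SIGMA = 1.0, 1.0  # constraint width and action-distribution width (illustrative)
K = DELTA**2 / (1 + DELTA**2 / SIGMA**2)            # drift coefficient of the surrogate
WIDTH = DELTA / math.sqrt(1 + DELTA**2 / SIGMA**2)  # noise width of the surrogate
C, N0 = 2.0, 20            # RM schedule parameters, as in eq. (32)
A_STAR, SEED = 6.0, 6.1    # assumed solution and seed value (hypothetical numbers)
N_REPLICA, N_MAX = 8, 500

def rm_run(a_star, a0, n_max):
    """One Robbins-Monro iterate on the Gaussian surrogate model."""
    a = a0
    for n in range(1, n_max + 1):
        # surrogate for <<S - S_i>>: drift proportional to (a - a*), Gaussian noise
        sample = random.gauss((a - a_star) * K, WIDTH)
        a -= C / (N0 + n) * sample  # update (32)
    return a

finals = [rm_run(A_STAR, SEED, N_MAX) for _ in range(N_REPLICA)]
abar = sum(finals) / N_REPLICA                             # eq. (33) at n_max
c_conv = sum((x - abar) ** 2 for x in finals) / N_REPLICA  # eq. (34) at n_max
print(abar, c_conv)
```

With this schedule the replica mean settles close to a* and the replica variance shrinks roughly as 1/n, mirroring the convergence behaviour described in the text.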
Ranges of β for LLR predictions based on the RM phase.

Size   β range
16⁴    … ≤ β ≤ …
20⁴    … ≤ β ≤ …
24⁴    … ≤ β ≤ …

The mean plaquette shown in the first column of figure 5 appears the same whether computed using density of states information from the RM phase or reweighting results from the measurement phase. The plaquette variance shown in the second column of figure 5 provides a more sensitive test. Indeed, an early 20⁴ run with a shorter RM phase and values of a_i only known to an accuracy of around 1×10⁻… (compared to 3×10⁻… for the data shown in figure 5) led to errors in the variance about 5 times bigger than in the data presented here. The third column of figure 5 is discussed in the next section.

4.4 Measurement Phase

The measurement phase has a duration of 2×10⁴ HMC steps, with potential swaps every 15 steps, and was repeated for each of the 8 RM replicas and for all three lattice sizes. There are two distinct contributions to the error: from uncertainty in the a_i arising from the RM phase, and from statistical fluctuations in the measurement phase due to the limited length of the runs. We refer to these as the RM error and the MP error respectively.

The set of β at which to compute canonical expectation values, by the reweighting procedure of section 3.2 as adapted for simulations below, is chosen sufficiently dense to give a good approximation to a continuous curve in the plots. We choose a β spacing of 0.002 irrespective of lattice size.
In order to relate the formalism developed in section 3 to the output of simulations, it is helpful to recognise explicitly that, in the measurement phase of the constrained Monte Carlo, the double-bracket quantities appearing in (22) are identified as averages over HMC configurations or time-steps t for a particular temper i. The usual estimator is

⟨⟨B[φ] e^{(a_i − β)(S − S_i)}⟩⟩_i = (1/T) Σ_{t=1}^{T} e^{(a_i − β)(S_i[t] − S_i)} B_i[t], (35)

where S_i[t] and B_i[t] are the measurements of the respective observables at that configuration or time-step.

For each HMC configuration, and separately for each RM replica labelled by index a, we perform the following temper sums appearing in the numerator and denominator of equation (22). Usually the variance of the operator is also a quantity of interest, so computations involving B_i²[t] are made at the same time:

B^a[t] = Σ_i w_i^a e^{(a_i^a − β)(S_i^a[t] − S_i)} B_i^a[t] (36)
       = Σ_i w_i^a[t] B_i^a[t], (37)

w^a[t] = Σ_i w_i^a e^{(a_i^a − β)(S_i^a[t] − S_i)} = Σ_i w_i^a[t], (38)

where we have defined the time-dependent weights

w_i^a[t] = w_i^a e^{(a_i^a − β)(S_i^a[t] − S_i)}, (39)

which in the quenched approximation reduce to the ordinary static weights w_i^a (21). Using these definitions, the reweighting expression (22) for a particular RM replica a becomes

⟨B[φ]⟩ = Σ_t B^a[t] / Σ_t w^a[t]. (40)

The integrated autocorrelation time is defined as the maximum of the autocorrelation times in numerator and denominator, and is computed according to the prescription in [34].

Figure 5 Predictions for the mean plaquette and its variance. Rows show 16⁴, 20⁴ and 24⁴. The first column shows the mean plaquette with errors too small to be visible; this appears the same whether computed using information from the RM phase (LLR) or reweighting results from the measurement phase. The second and third columns show the plaquette variance computed using LLR and reweighting respectively. Errors are shown with shaded regions. All plots also show data points obtained using traditional unconstrained methods.

Figure 6
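Equations (36)-(40) amount, for each configuration, to a weighted average over tempers followed by a ratio of time sums. A self-contained sketch on synthetic data (all numbers, array shapes and the constant observable are illustrative, not taken from the simulations):

```python
import math
import random

random.seed(2)

N_TEMPER, T = 5, 1000
beta = 6.0
# Illustrative per-temper parameters: central actions S_i, density-of-states
# slopes a_i and static weights w_i (all synthetic).
S_c = [100.0 + 10.0 * i for i in range(N_TEMPER)]
a = [beta + 0.01 * (i - 2) for i in range(N_TEMPER)]
w = [1.0] * N_TEMPER

# Synthetic time series: action S_i[t] fluctuating around S_i, observable B_i[t].
S_ts = [[S_c[i] + random.gauss(0.0, 2.0) for _ in range(T)] for i in range(N_TEMPER)]
B_ts = [[3.5 for _ in range(T)] for i in range(N_TEMPER)]  # constant observable

num = den = 0.0
for t in range(T):
    for i in range(N_TEMPER):
        wt = w[i] * math.exp((a[i] - beta) * (S_ts[i][t] - S_c[i]))  # eq. (39)
        num += wt * B_ts[i][t]  # contributes to sum_t B^a[t], eqs. (36)-(37)
        den += wt               # contributes to sum_t w^a[t], eq. (38)

estimate = num / den  # eq. (40)
print(estimate)
```

For a constant observable the ratio (40) recovers that constant whatever the weights, which makes a convenient check of an implementation.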
The average normalised weight for β near the centre of the temper range. Shown for 16⁴ at β = …; plots for other β are very similar. The time-dependent weight (39) is averaged over all measurements and all RM replicas. Errors are too small to be visible.

In order to estimate the ratio that appears in the reweighting formula (40) without bias, a bootstrap technique is used over the 2×10⁴ measurements divided by the autocorrelation time. The same bootstrap selection is employed for both numerator and denominator. This procedure also yields an estimate of the error of the ratio due to the fluctuations in the measurements. We call this statistical error the "measurement phase (MP) error".

The procedure above is repeated for the N_REPLICA different RM replicas. A separate analysis over these reweighted measurements gives the error associated with imprecision in the values of the a_i that were derived in the RM phase. Depending on the observable, the relative sizes of the errors arising from each source, RM or measurement, differ, and in plots the two contributions to the error are represented by distinct shaded regions.

The static weights, named w_i and defined by equation (21) in the formalism above, are computed in such a way as to avoid any danger of numerical overflow, based on the parameters (S_i, a_i) and the desired β for reweighting, using generalisations of the formulae (21) adapted for variable action spacing. When the reweighting parameter β happens to be similar to a value of a_i in the middle of the temper range, the static weights are largest for tempers near that central part of the range. For time-varying weights (39) there are fluctuations, and the weights of more distant tempers contribute. Although these weights vary for each step of the measurement simulation, their average is very stable. Figure 6 shows how the average normalised time-dependent weight varies across the temper index for β chosen near the centre of the temper range.
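The blocked bootstrap of the ratio (40), with the same resampling applied to numerator and denominator, can be sketched as follows; the synthetic weight and observable series, the block length and the number of resamples are all illustrative:

```python
import random

random.seed(3)

TAU_INT = 10  # illustrative autocorrelation time, in HMC steps
T = 20000
# Synthetic per-step denominator and numerator terms w[t] and B[t] of eq. (40).
w_ts = [random.lognormvariate(0.0, 0.5) for _ in range(T)]
b_ts = [wt * (1.0 + random.gauss(0.0, 0.1)) for wt in w_ts]

n_blocks = T // TAU_INT  # divide the run by the autocorrelation time
blocks = [(sum(b_ts[k * TAU_INT:(k + 1) * TAU_INT]),
           sum(w_ts[k * TAU_INT:(k + 1) * TAU_INT]))
          for k in range(n_blocks)]

def ratio(sample):
    num = sum(b for b, _ in sample)
    den = sum(wv for _, wv in sample)
    return num / den

estimates = []
for _ in range(200):  # resample the SAME blocks in numerator and denominator
    sample = [blocks[random.randrange(n_blocks)] for _ in range(n_blocks)]
    estimates.append(ratio(sample))

centre = ratio(blocks)
err = (sum((e - centre) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(centre, err)
```

Using identical block selections in numerator and denominator is what keeps the ratio estimate unbiased; resampling them independently would not respect the correlation between the two sums.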
Plots for other lattice sizes, or for other values of β near the centre of the tempering range, look very similar, indicating that the range of tempers that contribute appreciable weight is fairly constant at about 7 tempers for our parameters.

As β approaches an edge of the temper range, reweighting becomes less accurate, since tempers not simulated could have significant weights. The reweighting range is determined by requiring that the average normalised weight for edge tempers be less than 1%. A different criterion, based for example on limiting the fraction of weights that are greater than 0.01, leads to the same range. The resulting ranges are apparent from the plots, but are explicitly given in table 4. There appears to be a slight decrease in the useful fraction of the temper range as the lattice size increases.

Table 4
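The paper does not spell out how the static weights are protected against overflow; a standard device, shown here purely as an illustration, is to work with the logarithms of the weights and subtract their maximum before exponentiating:

```python
import math

def normalised_weights(log_w):
    """Normalise weights given their logarithms, avoiding overflow.

    The log-weights may be huge in magnitude (they involve extensive
    action differences), so exp() is only applied after subtracting
    the maximum log-weight.
    """
    m = max(log_w)
    shifted = [math.exp(x - m) for x in log_w]
    s = sum(shifted)
    return [x / s for x in shifted]

# Log-weights of order 1e4, as can happen for extensive actions:
w = normalised_weights([10000.0, 10001.0, 9998.5])
print(w)
```

A naive `math.exp(10001.0)` would overflow, while the shifted version only ever exponentiates non-positive numbers.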
Ranges of β for reweighting predictions based on the measurement phase, according to the criteria in the text.

Size   β range
16⁴    … ≤ β ≤ …
20⁴    … ≤ β ≤ …
24⁴    … ≤ β ≤ …

The expectation of the mean plaquette and its variance can be computed via either of the techniques, LLR based on the RM phase or reweighting of the measurements, and it is interesting to compare the results. A plot of the mean is indistinguishable from the left column of figure 5, though, as is apparent from tables 5-7, the LLR approach leads to greater accuracy. The plaquette variance according to the reweighting method is shown in the right column of figure 5. The RM errors for each approach are similar, but the reweighting approach is also subject to MP errors which, for the run length chosen, are a few times larger than the RM errors.

In order to provide more detail about the relative size of the RM errors, from uncertainty in the a_i, and the MP errors, due to the limited length of the runs, tables 5-7 show data for a representative value of β chosen in the centre of the range for each lattice size. Overall, because it is only subject to RM and not MP errors, the LLR approach that only uses the density of states is more accurate than reweighting. In this case it is possible to see that the difference between measurements using the full formulae based on (9) and those using the approximation denoted LLR′ is always smaller than the error due to uncertainty in the a_i. The tables also show results from unconstrained simulations reweighted to the reference β using multi-histogram [35] (see Appendix B). Note that the error from this approach reflects both the intrinsic statistical error of the unconstrained simulations and also how close the reference β happens to be to one of the widely spaced set of β used in those simulations, so the way it changes with lattice size is not significant.

The tables also allow us to compare the use of quenched and fluctuating weights for reweighted measurements.
Both lead to very similar results for the mean plaquette and its

Table 5
Plaquette observables for 16⁴ at β = …. Results obtained with LLR or LLR′, or from reweighted measurements with quenched (QRewt) or varying weights (Rewt). Multi-histogram results based on unconstrained simulations are given for comparison (see Appendix B).

Obs    method     value        RM-err    MP-err
Mean   MultiHist  0.605906     3.6e-06   -
Mean   LLR′       0.605904     1.8e-06   -
Mean   LLR        0.605904     1.9e-06   -
Mean   QRewt      0.605904     1.7e-06   3.4e-06
Mean   Rewt       0.605905     1.9e-06   4.2e-06
Var    MultiHist  2.4761e-07   12.2e-10  -
Var    LLR′       2.4820e-07   6.3e-10   -
Var    LLR        2.4813e-07   6.9e-10   -
Var    QRewt      3.9087e-07   8.6e-10   31.7e-10
Var    Rewt       2.4446e-07   13.2e-10  29.2e-10
τ_int  QRewt      2.17         0.05      0.13
τ_int  Rewt       0.73         0.05      0.03
Table 6
Plaquette observables for 20⁴ at β = ….

τ_int  QRewt  1.95  0.02  0.11
τ_int  Rewt   0.90  0.06  0.04
Table 7
Plaquette observables for 24⁴ at β = ….

τ_int  QRewt  1.91  0.06  0.11
τ_int  Rewt   0.89  0.02  0.03

errors. However, there is a clear difference in the result for the plaquette variance, and comparison with either density of states or traditional simulations favours the approach derived in (22), that is, the one with fluctuating weights. This result clearly indicates that the quenched approach is incorrect; we will continue to study it to illuminate the autocorrelation of the topological charge, but in all further plots of expectation values we use the fluctuating weights. For both the plaquette mean and variance, the MP error for the chosen run length is a few times larger than the RM error.

The autocorrelation times displayed in the tables are the integrated version computed using the Wolff criterion as described in section 4.6, and are of order 1 HMC step, remaining fairly constant as β varies. The consequences of the use of fluctuating weights are most apparent in the integrated autocorrelation time, which becomes half the value obtained in a quenched analysis. The equivalent value for the traditional simulation dropped to a plateau of about 3 HMC steps for β > 6. For varying weights, the MP and RM errors are of similar size, while for quenched weights the MP error is larger.
Wilson flow corresponds to an operation that cannot be treated without using reweighting. Expectation values for the symmetric cloverleaf energy density, E_sym, after Wilson flow are shown in figure 7, and the equivalent results for the energy density after Wilson flow are very similar. For the values of β shown, the traditional unconstrained simulations have been supplemented with additional runs to improve statistics. As we have come to expect from the plaquette, the mean value of E_sym has a small error and agrees closely with the reference simulations for all sizes. The variance is much better behaved than it was for the original plaquette variance without Wilson flow, and is in closer agreement with the unconstrained simulations. Errors arising from the RM and measurement phases are of similar size, and both are less than 0.1% for the mean and less than 2% for the variance.

A discussion of the autocorrelation time for these observables is postponed until after we discuss the computation of the autocorrelation in the case of the topological charge.

4.5 Topological Charge

We devote a whole subsection to this topic because a strong motivation for studying alternative approaches such as density of states is the freezing of the topological charge in the continuum limit of traditional simulations.

The expectation of the topological charge must of course vanish, and the left column of figure 8 shows that the reweighted measurements always contain zero within the error range, while the rather longer traditional simulations have not yet met this requirement. Vanishing charge is often regarded as a check that simulations are sufficiently long, and the widely varying sizes of the error bars for the variance in the traditional simulations at larger sizes, even with the help of supplemental runs, indicate that these simulations are not yet ergodic.
Even the configurations of the measurement phase simulations rarely change charge at large size, but the swaps lead to much more rapid change of the fixed central energy trajectories. These qualitative considerations are studied more formally with the autocorrelator below.

Figure 7 The symmetric cloverleaf energy density after Wilson flow using reweighted measurements. Rows for 16⁴, 20⁴, 24⁴. Also shown for comparison are results from traditional simulations supplemented by additional runs. Left: the mean value. Right: the variance (with a volume factor). Errors from the measurement and the RM phase are similar but small, and are shown as the shaded region, barely visible in the case of the mean.

The effect of Wilson flow on the topological charge is well known [31]: for measurements on either traditional unconstrained or constrained simulations, the flowed charge bunches close to integer values. Several tempers that can have different (each almost integer) values of the charge contribute to the reweighted charge, which is therefore no longer expected to be close to an integer.

The variance, or topological susceptibility, is shown in the right column of figure 8. Note that the scale varies, and that the size of the errors of the reweighted results actually decreases as the lattice size increases.

Tables 8, 9 and 10 show detailed results for a single value of β from the centre of the range at each lattice size, to illustrate the relative size of the RM and MP errors and to expose the relation between reweighting with quenched or fluctuating weights. The values predicted for the mean or variance of the topological charge by either type of reweighting are close. The unconstrained results reweighted according to multi-histogram always have large errors and become poorer at larger lattice size. Note that, as described in the next subsection, τ_int is computed using a different criterion for fluctuating weights than for quenched weights.
Figure 8 The topological charge based on reweighted measurements. Also shown for comparison are results from unconstrained simulations with additional runs. Left: the charge itself, which must be zero. Right: the variance of the charge. From top to bottom: 16⁴, 20⁴, 24⁴. Errors from the measurement and the RM phase are shown as shaded regions. The Q data point for 24⁴ at β = 6.43 is off the plot.

Both RM and MP errors for each quantity are fairly similar for quenched and unquenched weights. MP errors are in all cases larger than RM errors.

The autocorrelation time, and the dramatic difference between its value for quenched and fluctuating weights, is the subject of the next subsection.

4.6 Autocorrelation

The autocorrelation time determines the size of errors and so is used as a way of evaluating the efficiency of competing algorithms. The particular autocorrelation time relevant to computing errors is the integrated form, τ_int, as distinct from τ_exp, which characterises the long-time decay of the correlation function. This distinction is emphasised by Madras and Sokal [36], but a connection appeared in more recent work by Wolff [34] (also see [10]), as an estimate for τ_exp helps determine an optimal value for the upper limit of the sum defining τ_int. It turns out that the present work illuminates the different definitions, as it allows us to give an estimate for τ_exp in a case where τ_int and τ_exp are very different.

First observe that the autocorrelation functions for the quenched and fluctuating weight approaches have distinct character. Typical autocorrelation functions, Γ(t), are shown in figure 9. Each autocorrelation function is normalised to Γ(0) = 1.

Table 8
Topological charge observables for 16⁴ at β = ….

TC τ_int  QRewt  91  8  27
TC τ_int  Rewt   26  5  9
Table 9
Topological charge observables for 20⁴ at β = ….

TC τ_int  QRewt  170  11  63
TC τ_int  Rewt   50   5   22
Table 10
Topological charge observables for 24⁴ at β = ….

TC τ_int  QRewt  264  83  128
TC τ_int  Rewt   89   30  52

The fluctuating weight autocorrelation function suffers from a rapid decay over a few HMC steps, followed by a component with a long timescale. The techniques used to analyse these cases are different.
The top curve of the upper plot of figure 9 shows that the quenched autocorrelation function for the topological charge has the usual long-timescale decay. We first review the standard technique for computing τ_int by analysing this usual case. The expression for the integrated autocorrelation time is:

τ_int(W) = 1/2 + Σ_{t=1}^{W} Γ(t)/Γ(0). (41)

We follow the prescription of Wolff [34] to determine the window, W, by minimising the combination of truncation and statistical errors, estimated as:

e^{−W/τ_exp} + 2√(W/T). (42)

Figure 9
Quenched and fluctuating autocorrelation functions for the topological charge, for a representative example at β = 6.290 for 20⁴. Both functions are normalised. Top: over short times, where for the fluctuating weight autocorrelator the first few points show the rapid decay component. Bottom: over long times, showing typical fluctuations.

Here T is the duration of the measurement simulations, 20000 steps. For smooth autocorrelation functions without timescales much longer than τ_int, the exponential autocorrelation time τ_exp appearing in this expression is estimated as S τ_int, with the parameter S in the range 1 ∼
2. All quenched reweighted autocorrelation functions behave like the example shown in figure 9, without any indication of timescales longer than τ_int, which is of the order of hundreds of HMC steps. So we take the conventional value S = 2. The autocorrelation function is also fitted directly with an exponential,

A e^{−t/τ_fit}. (43)

We first make a fit in which we set A = 1, thus guaranteeing the normalisation and the short-time behaviour that appears in τ_int. Fits are made over the range 1 to a few thousands of steps, and we have checked that the value of the right-hand cut has negligible effect on the fit parameters. The resulting averaged values for τ_fit are shown in figure 10. The similarity between τ_fit(A = 1) and τ_int is striking.

An alternative fit, in which A is also a fitting parameter, leads to a rather longer τ_fit(A), which resembles τ_exp according to the hypothesis τ_exp = S τ_int and provides reassurance
Figure 10
Autocorrelation times obtained from the autocorrelation function for the topological charge, based on reweighting measurements with fixed weights. Shown are the integrated τ_int and the fitted τ_fit(A = 1), as described in the text, for, from top to bottom, 16⁴, 20⁴ and 24⁴. Also shown for comparison are results from unconstrained simulations. For each τ the largest source of error is shown with a shaded region. The RM errors of each τ are of similar size, but the MP error for τ_int is larger.

that the choice S = 2 is sensible. We have not tuned S using this fitting information, as this would have little effect on the resulting value of W or τ_int(W).

Figure 10 also shows error estimates for τ_int and τ_fit(A = 1). For a single replica, the error of the integrated autocorrelation time is:

Δτ_int = 2√(W/T) τ_int. (44)

We regard this as the MP error and observe that it is far larger than the error for the fitted version, Δτ_fit, which arises
Figure 11
Autocorrelation times obtained from the autocorrelation function for the topological charge, based on reweighting measurements with fluctuating weights. Shown are the integrated τ_int, the fitted τ_fitL and the combination A τ_fitL, as described in the text, for, from top to bottom, 16⁴, 20⁴ and 24⁴. Also shown for comparison are results from traditional simulations. For each τ the RM error and the MP error are shown with shaded regions. The MP error for τ_int is dominant, the RM errors of each τ are of similar size, and the statistical fitting error of τ_fitL is smallest and omitted from the plot.

from uncertainty in the fitting procedure. The RM errors are similar for each. It is unsatisfactory that the measurement phase statistical errors of the two ways of computing the autocorrelation time are so different. The fitting error of τ_fit appears to be underestimated. Indeed, when the measurement phase is repeated for a replica with identical RM parameters but a different sequence of random numbers, there is a significant difference in the results of the fit, well beyond the error estimate Δτ_fit.

The action or plaquette observable has a short autocorrelation time, never more than a few HMC steps, irrespective of exactly which τ is computed. This short timescale appears in the varying weights (39) and pollutes any long timescale that may be present in any observable computed by reweighting with these weights.

In the case of the topological charge, the fluctuations of the weights do not entirely hide the long-timescale component. Figure 9 shows an example of the fluctuating weight autocorrelation function for the topological charge over both short and long times. This autocorrelation function suffers a rapid decay over a few HMC steps, but clearly leaves a well-distinguished long-timescale component. This feature is observed for all RM replicas and all lattice sizes, though with differing amplitude for the long component.
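The windowing procedure of equations (41) and (42) can be sketched as follows. The AR(1) test series and all parameter values are illustrative, and the normalised autocorrelation estimator is a textbook one rather than the paper's implementation:

```python
import math
import random

def gamma_hat(x, max_lag):
    """Estimate the autocorrelation function Gamma(t) up to max_lag."""
    n = len(x)
    mean = sum(x) / n
    return [sum((x[s] - mean) * (x[s + t] - mean) for s in range(n - t)) / (n - t)
            for t in range(max_lag)]

def tau_int_wolff(gamma, T, S=2.0):
    """Integrated autocorrelation time (41), with the window W chosen by
    minimising the error estimate exp(-W/tau_exp) + 2*sqrt(W/T) of (42),
    where tau_exp is estimated as S*tau_int."""
    tau, best = 0.5, None
    for W in range(1, len(gamma)):
        tau += gamma[W] / gamma[0]
        tau_exp = S * max(tau, 0.5)
        err = math.exp(-W / tau_exp) + 2.0 * math.sqrt(W / T)
        if best is None or err < best[0]:
            best = (err, W, tau)
    return best[2], best[1]  # (tau_int at the window, window W)

# AR(1) test series with known tau_int = (1 + rho)/(2*(1 - rho)) = 4.5
random.seed(4)
rho, T = 0.8, 20000
x, v = [0.0], 0.0
for _ in range(T - 1):
    v = rho * v + random.gauss(0.0, 1.0)
    x.append(v)
tau, W = tau_int_wolff(gamma_hat(x, 100), T)
print(tau, W)
```

For the AR(1) process the estimate comes out close to the analytic value 4.5, within the statistical error implied by (44).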
Over the longer timescales shown in the lower plot of figure 9, the varying weight autocorrelation function continues to have short-time fluctuations, which manifest as the greater thickness of the line compared with the quenched version. It is also apparent from this plot that long-timescale fluctuations persist for both autocorrelation functions, but with larger amplitude for the quenched than for the fluctuating weight case.

The Wolff procedure described in the previous subsection for computing the integrated autocorrelation time for fixed weights is not appropriate for the fluctuating weight results with mixed decay times. Indeed, reference [34] suggests that the choice of the parameter S must be reexamined in this situation. We have considered many alternative approaches with the aim of finding a robust prescription. Our final approach is to directly fit the following normalised double exponential decay, according to the two stages described below:

(1 − A) e^{−t/τ_fitS} + A e^{−t/τ_fitL}. (45)

Fitting multiple exponentials has a reputation of being fraught with problems, but in this case the two τ parameters take such different values, τ_fitL/τ_fitS ≫ 1, that the fit is robust for all β and lattice sizes. Nonetheless, the procedure for the fit is in two stages. Firstly, all three parameters of the function are fitted over a short range, up to a cut at about 300 steps. Secondly, with the short-timescale parameters A and τ_fitS frozen, the single parameter τ_fitL is fitted over a much longer time range of more than 1000 steps. We find that the dependence of τ_fitL on the short-range cut is smaller than the statistical errors, and that there is negligible dependence on the long-range cut when it is taken to be sufficiently large.
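The two-stage fit can be sketched with a simple grid search standing in for a full least-squares minimiser; the data are synthetic and noiseless, and the parameter values, cuts and grids are illustrative:

```python
import math

def model(t, A, tS, tL):
    # normalised double exponential decay, eq. (45)
    return (1.0 - A) * math.exp(-t / tS) + A * math.exp(-t / tL)

# Synthetic "autocorrelation function" with known parameters.
A_TRUE, TS_TRUE, TL_TRUE = 0.2, 2.0, 80.0
data = [model(t, A_TRUE, TS_TRUE, TL_TRUE) for t in range(1001)]

def sse(A, tS, tL, t_lo, t_hi):
    return sum((data[t] - model(t, A, tS, tL)) ** 2 for t in range(t_lo, t_hi))

# Stage 1: fit all three parameters over the short range (cut at 300 steps).
best = min(((sse(A / 100, tS / 10, tL, 0, 300), A / 100, tS / 10, tL)
            for A in range(5, 51, 5)        # A in 0.05 .. 0.50
            for tS in range(10, 51, 5)      # tau_fitS in 1.0 .. 5.0
            for tL in range(40, 161, 10)),  # coarse tau_fitL in 40 .. 160
           key=lambda r: r[0])
_, A_fit, tS_fit, tL_coarse = best

# Stage 2: freeze A and tau_fitS, refit tau_fitL alone over the long range.
tL_fit = min((tL_coarse - 10 + 0.1 * k for k in range(201)),
             key=lambda tL: sse(A_fit, tS_fit, tL, 300, 1001))

print(A_fit, tS_fit, tL_fit)  # recovered parameters
print(A_fit * tL_fit)         # tau_int is approximately A * tau_fitL, eq. (46)
```

Because the two timescales are so well separated, the short-range stage pins down A and τ_fitS almost independently of τ_fitL, which is then determined cleanly by the long-range stage.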
The same cuts are used for all RM replicas, but they take different values according to lattice size (16: 1500, 20: 3000, 24: 4000).

The integrated autocorrelation time for a double exponential decay with such widely separated timescales will, for a sufficiently large window, be dominated by the long timescale, and we expect

τ_int ≈ A τ^fit_L .   (46)

A more accurate equation involving logs, along the lines of the one discussed in [10, 34], can be derived, but since our approach relies on widely separated scales, the simpler equation (46) is adequate.

In view of equation (46), we set the value of S used to estimate τ_exp appearing in (42) to S = 1/A and follow the Wolff procedure. Figure 11 shows τ_int computed according to this technique along with A τ^fit_L from the fit. The MP error for τ_int is computed using (44) and, as for the quenched case, it is large, making the agreement all the more remarkable. This value of τ_int is used to estimate errors in the topological charge measurements, and these appear in tables 8, 9, 10 and Figure 8.

The proportionality factor A introduced in (45) can be interpreted as the coupling to the long timescale. Its value (with an uncertainty of about ±0.05) does not appear to be very sensitive to the value of the reweighting coupling β or even to the size of the lattice. We expect it to depend on the parameters of the density of states method, in particular the ratio δ/σ.

While the shape of the autocorrelation function is very different for the quenched and the fluctuating weights, it is interesting to see if the long-time behaviour matches. In other words, is τ_exp the same? Our best estimates of this quantity are based on fits: τ^fit (with A = 1) in the quenched case and τ^fit_L for fluctuating weights. A comparison of these quantities is not shown, but although there are large fluctuations, they are compatible within the errors.

To summarise this discussion of the autocorrelation time of the topological charge: we have used information from the fit of a double exponential to the fluctuating-weight autocorrelation function in order to tune the parameter S and obtain a better value for the Wolff window, leading to a more accurate prediction for τ_int. We find relations between the integrated and fitted autocorrelation times: for the quenched case, τ_int ≈ τ^fit(A = 1), and for varying weights, τ_int ≈ A τ^fit_L. Finally, we observe that our estimates for the long-time (τ_exp) behaviour of either quenched or fluctuating weights are similar.

We conclude this section by clarifying how these insights carry over to the computation of τ_int for other observables. The quenched case is straightforward, and the standard Wolff procedure outlined at the start of section 4.6.1 for the topological charge is used for all observables. For the fluctuating-weight case, even for quantities such as E_sym after Wilson flow that have τ_int ∼
60 for unconstrained simulations, there is no clear long timescale remaining that would allow any improved estimate of τ_exp. Hence S can take its conventional value.

Figure 12  Scaling plot of the symmetric clover link energy based on reweighted measurements, also showing traditional results. Three lines correspond to data from the three lattice sizes. The shaded areas represent the MP errors, which are slightly larger than the RM errors (not shown).

The analysis reported so far has involved figures presenting densities against β and, with the exception of Figure 1, has been for a single lattice size. In this section, scaling properties are discussed and the density of states method is used to illustrate the approach to the continuum at fixed box size.

As was discussed in section 4.2, Wilson flow introduces a new scale into the system; the parameter t specifying the endpoint of the flow is dimensional, and equation (31) shows how it represents the scale over which the configuration is smoothed. Our simulations, both constrained and unconstrained, choose t in such a way that the scale of smoothing is a fixed fraction of the size of the lattice. Then, by comparing results for couplings β corresponding to the same physical box size, the Wilson flow scale is physically identical and quantities on different lattice sizes can be compared.

Figures 12 and 13 plot global quantities for the box versus the box size, and display them for different lattice resolutions. Namely, the figures show the global observables V E_sym, V Var(E_sym) and Q.

Figure 13  Scaling plot of the topological charge susceptibility based on reweighted measurements, also showing traditional results. Three lines correspond to data from the three lattice sizes. The shaded areas represent the MP errors, which are considerably larger than the RM errors (not shown).

Figure 14  Overall plot of autocorrelation times for the topological charge, both for unconstrained simulations and for the density of states. The band of data points shows τ_int for traditional simulations for each lattice size. The shaded regions display τ_int computed using reweighting and indicate the statistical error, which dominates the RM error.

The scaling of the symmetric clover link energy at the top of figure 12 is almost linear, and all data from different size lattices and from both traditional and density of states simulations coincide. The variance of this quantity, displayed in the lower part of figure 12, shows noisier data, and the lines of reweighted data from different size lattices do not quite coincide.

Figure 13 shows the topological charge variance scaling. As was the case in figure 8, the data points from the traditional simulations are noisy and, especially for larger lattices, indicate worse ergodicity than the density of states data.

We have presented a comprehensive and detailed study of the use of the density of states method for SU(3) Yang-Mills theory. To do this we derived the reweighting formalism appropriate for a smooth Gaussian constraint, and emphasised that the expression for a canonical expectation value is a ratio of weighted sums and that the weights fluctuate because they contain a term that depends on the action. We explored the quenched approximation, in which this variation is neglected, and found that when the weights vary there is a dramatic effect on autocorrelation times which is absent for the quenched approximation.
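The ratio-of-weighted-sums structure just described can be sketched as below. The weights here are random stand-ins, not the actual weights of equation (39), whose exact form depends on the RM output; the point is only the contrast between the reweighted estimator and its "quenched" version with the weight fluctuations neglected.

```python
import numpy as np

def reweighted_mean(obs, log_w):
    """Canonical expectation value as a ratio of weighted sums.
    log_w are logarithmic weights; subtracting the max avoids overflow."""
    w = np.exp(log_w - np.max(log_w))
    return np.sum(w * obs) / np.sum(w)

def quenched_mean(obs):
    """Quenched approximation: weight fluctuations neglected (uniform weights)."""
    return np.mean(obs)

rng = np.random.default_rng(7)
obs = rng.normal(1.0, 0.2, size=10_000)    # stand-in measurements
log_w = rng.normal(0.0, 0.5, size=10_000)  # stand-in fluctuating log-weights

rw = reweighted_mean(obs, log_w)
qu = quenched_mean(obs)
# With independent toy weights both estimates land close to 1.0; in the real
# method the weights are correlated with the action, so the two can differ.
```

The toy case is deliberately benign; as the text stresses, the quenched estimator has no theoretical justification, and observables sensitive to the action fluctuations (such as the plaquette variance) expose the difference.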
However, we want to emphasise that there is no theoretical basis for the quenched approximation; even though it appears to be reasonably accurate for the expectation values of most observables, there is a notable exception, the plaquette variance, identifying the approximation as uncontrolled.

We compared predictions for the plaquette and its variance (which are both simple functions of the action) from the LLR approach, based solely on reconstructing the density of states obtained from the RM phase, and from the reweighting approach, based on information from the measurement phase. We found that these were always compatible, though reweighting led to greater uncertainty associated with the finite duration of the measurement phase. We noted that the plaquette variance is a delicate test of the accuracy of the technique, and indeed it is also a good check for multi-histogram reweighting of traditional unconstrained simulations.

For quantities that are defined after Wilson flow, and for which only the reweighting approach is possible, our method allows families of continuum-limit trajectories to be defined. These are characterised by the box size associated with the value of the coupling constant for which we are performing the reweighting. Scaling of the topological charge susceptibility at finite temperature is an interesting area of investigation [6], and we have demonstrated the possibility of going to reasonably small lattice spacing with only modest size lattices and fairly short runs.

The main conclusion of the preliminary work [24] concerned the reduction in autocorrelation time for the topological charge. The conclusions presented here are more nuanced.
Figure 14 shows the growth of the autocorrelation time for traditional simulations along with τ_int for reweighted results in each of the regions where we have computed it. The rate of growth of τ_int for traditional unconstrained simulations is compatible with that reported in the high-statistics study [10], with τ_int ∝ a^(−z). The density of states method yields a much smaller value for τ_int and, more importantly, the rate of growth appears to be much less. The size of the errors evident in the reweighting data shown in figure 14 prohibits anything better than a crude estimate. Using three values corresponding to the same box size gives an estimate of z with an uncertainty of ±0.5. While the uncertainty in this estimate makes it almost meaningless, it is clearly smaller than the exponent for unconstrained simulations. This is an exciting result, as it offers the potential to compute accurate values of the topological charge with less effort. However, the small value of τ_int is mainly the consequence of the reduction in the coupling of the topological charge to the long-timescale modes of the problem. These modes have not gone away; indeed, in studies of the relative performance of different algorithms the long-timescale modes themselves are often the primary subject of interest, with the topological charge frequently used simply as a proxy for studying them. In the text we discussed our estimate of τ_exp, which is a better measure of the long timescales of the problem (though, as always, there may be even longer timescales beyond the duration of the simulations). This does show a decrease from the equivalent parameter of standard HMC simulations. We highlight that the changes to the HMC algorithm necessary for the density of states approach, namely the Gaussian constraint and the tempering swap, do not affect how the timing of our implementation scales with the number of lattice points.

A notable aspect of this study is that we explored the errors of the method by tracking 8 replicas for each lattice size. We noticed that there was considerable variation in the convergence during the RM phase. Our study chose a simple common time duration for this phase, but a more sophisticated temper-dependent criterion for terminating the RM phase, which could put more effort into controlling the convergence of the RM process, might provide better overall accuracy.

We distinguished two sources of error: the RM errors and the MP errors. For all results using reweighting, the relative contributions of the RM and MP errors depend on the particular observable under study.
The RM errors can be reduced by increasing the number of replicas in the RM phase, and the MP errors can be reduced with a longer measurement phase. For our choice of N_REPLICA and duration of the measurement phase, the largest contribution to the error for most observables was from measurement.

This work has kept δ/δS fixed at 1.2 and scaled δ for different size lattices according to a fixed fraction of the standard deviation of the constrained action. These parameters affect the accuracy and efficiency of the method, and a more detailed study would be welcome. For example, it would be interesting to investigate the dependence of the topological charge susceptibility on δ/σ while preserving the ratio δ/δS (along the lines of figure 6 of reference [20], which studies the dependence of C_V in a U(1) theory), and the coupling of the topological charge to the long-timescale modes. The number of tempers used in the tempering method was 24 for all lattice sizes. A wider range of tempers would be expected to allow better mixing of regimes with faster and slower Monte Carlo dynamics and thus improve the autocorrelation time. As the lattice size increases, the spacing δS is expected to grow as √V in order to preserve the same level of accuracy. The corresponding spacing in β will therefore decrease in the same way. If the efficiency of the tempering depends on the span of β, then more tempers will be needed (with N_TEMPER ∼ √V). This scaling prediction, and the effects of the length of the RM and measurement phases and of the number of replicas on the accuracy, require further study.

Note Added
While this manuscript was in preparation, two studies appeared that report on investigations of the topological charge. Ref. [37] performs a study in SU(6) with a novel algorithm in which a tempering procedure is used to move across systems with boundary conditions interpolating between partially open and periodic; this algorithm is shown to be successful at reducing the correlation time of the topological charge. Ref. [38] studies topological properties at finite temperature using a density of states method that allows it to sample rare events. The relative merits of our algorithm and of those new proposals would need to be assessed by dedicated studies.
Acknowledgements
The work of BL is supported in part by the Royal Society Wolfson Research Merit Award WM170010 and by the STFC Consolidated Grant ST/P00055X/1. AR is supported by the STFC Consolidated Grant ST/P000479/1. Numerical simulations have been performed on the Swansea SUNBIRD system, provided by the Supercomputing Wales project, which is part-funded by the European Regional Development Fund (ERDF) via the Welsh Government, and on the HPC facilities at the HPCC centre of the University of Plymouth.
References
1. E. Witten, Nucl. Phys. B, 269 (1979).
2. G. Veneziano, Nucl. Phys. B, 213 (1979).
3. G. 't Hooft, Nucl. Phys. B, 461 (1974).
4. E. Vicari and H. Panagopoulos, Phys. Rept., 93 (2009), 0803.1593.
5. M. Müller-Preussker, PoS LATTICE2014, 003 (2015), 1503.01254.
6. E. Berkowitz, M. I. Buchoff, and E. Rinaldi, Phys. Rev. D, 034507 (2015), 1505.07455.
7. C. Bonati et al., JHEP, 155 (2016), 1512.06746.
8. S. Borsanyi et al., Nature, 69 (2016), 1606.07494.
9. P. van Baal, Commun. Math. Phys., 529 (1982).
10. ALPHA, S. Schaefer, R. Sommer, and F. Virotta, Nucl. Phys. B, 93 (2011), 1009.5228.
11. L. Del Debbio, H. Panagopoulos, and E. Vicari, JHEP, 044 (2002), hep-th/0204125.
12. B. Lucini, M. Teper, and U. Wenger, Nucl. Phys. B, 461 (2005), hep-lat/0401028.
13. R. Brower, S. Chandrasekharan, J. W. Negele, and U. Wiese, Phys. Lett. B, 64 (2003), hep-lat/0302005.
14. S. Aoki, H. Fukaya, S. Hashimoto, and T. Onogi, Phys. Rev. D, 054508 (2007), 0707.0396.
15. M. Luscher and S. Schaefer, JHEP, 036 (2011), 1105.4749.
16. A. Amato, G. Bali, and B. Lucini, PoS LATTICE2015, 292 (2016), 1512.00806.
17. M. Cè, M. García Vera, L. Giusti, and S. Schaefer, Phys. Lett. B, 232 (2016), 1607.05939.
18. S. Mages et al., Phys. Rev. D, 094512 (2017), 1512.06804.
19. K. Langfeld, B. Lucini, and A. Rago, Phys. Rev. Lett., 111601 (2012), 1204.3243.
20. K. Langfeld, B. Lucini, R. Pellegrini, and A. Rago, Eur. Phys. J. C, 306 (2016), 1509.08391.
21. R. Pellegrini, B. Lucini, A. Rago, and D. Vadacchino, PoS LATTICE2016, 276 (2017).
22. E. Marinari and G. Parisi, Europhys. Lett., 451 (1992).
23. B. Lucini, W. Fall, and K. Langfeld, PoS LATTICE2016, 275 (2016), 1611.00019.
24. G. Cossu, B. Lucini, R. Pellegrini, and A. Rago, EPJ Web Conf., 02005 (2018), 1710.06250.
25. K. Langfeld and B. Lucini, Phys. Rev. D, 094502 (2014), 1404.7187.
26. R. Pellegrini, L. Bongiovanni, K. Langfeld, B. Lucini, and A. Rago, PoS LATTICE2015, 192 (2016).
27. M. Giuliani and C. Gattringer, Phys. Lett. B, 166 (2017).
28. C. Gattringer, M. Mandl, and P. Torek, Phys. Rev. D, 114517 (2019).
29. L. Del Debbio, A. Patella, and C. Pica, Phys. Rev. D, 094503 (2010), 0805.2058.
30. L. Del Debbio, B. Lucini, A. Patella, C. Pica, and A. Rago, Phys. Rev. D, 074507 (2009), 0907.3896.
31. M. Lüscher, JHEP, 071 (2010), 1006.4518 [Erratum: JHEP 03, 092 (2014)].
32. M. Luscher, Comput. Phys. Commun., 199 (2005), hep-lat/0409106.
33. J. C. Spall, Introduction to Stochastic Search and Optimization (John Wiley & Sons, Inc., Hoboken, NJ, USA, 2003).
34. ALPHA, U. Wolff, Comput. Phys. Commun., 143 (2004), hep-lat/0306017 [Erratum: Comput. Phys. Commun. 176, 383 (2007)].
35. A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett., 1195 (1989).
36. N. Madras and A. D. Sokal, J. Statist. Phys., 109 (1988).
37. C. Bonanno, C. Bonati, and M. D'Elia, (2020), 2012.14000.
38. S. Borsanyi and D. Sexty, (2021), 2101.03383.
39. C. Gattringer and C. B. Lang, Quantum Chromodynamics on the Lattice (Springer Berlin Heidelberg, 2010).
Figure 15
Free energy difference between Multihistogram and density of states approaches. Only shown for 16. The gap is related to normalisation as discussed in the text. Errors are shown as shaded regions.

Appendix A: Multi-histogram
The density of states method reconstructs canonical expectation values from a set of constrained expectation values at fixed central action. It is natural to contrast this approach with traditional multi-histogram reweighting [35], which reconstructs canonical expectation values at intermediate couplings from a set of unconstrained expectation values at fixed coupling. The multi-histogram approach does not benefit from the tempering update step, and hence a timing comparison would be meaningless.

Rather than make a detailed comparison, requiring additional unconstrained runs at values of the coupling commensurate with the set of central actions discussed in the main paper, this appendix studies multi-histogram reweighting based on the unconstrained reference simulations. To recap, these simulations were at 24 widely spaced β values for each lattice size, covering the ranges listed in table 1. In the following, so as to directly compare with the density of states results, we only consider the narrower ranges that appear in the figures; thus only 7, 6 and 4 (for 16, 20 and 24) of the 24 runs were necessary. Errors are obtained using a bootstrap approach of resampling the original data, taking into account the autocorrelation time for the observable in question. These errors are reported in the column "RM-err" of tables 5-7 and 8-10.

Multihistogram provides access to the partition function, or the free energy of the system, and allows a direct comparison with the density of states approach via equation (9). A plot of F = −ln(Z)/β appears linear over the ranges in question, and no useful graphical comparison is possible. Figure 15 shows the difference between F computed with multihistogram and with LLR. Averages are taken over the bootstrap samples in the case of multihistogram, and over RM replicas for LLR, and the results are normalised by setting F(β) = 0 at a reference coupling. This difference between very different approaches is compatible with zero at two sigmas.

Despite the accuracy suggested by figure 15, the free energy does not encode much more information than the first moments of the plaquette expectation values. The plaquette variance shown in figure 16 should be compared with the LLR predictions in the middle column of figure 5 (though note a small shift of scale at 20). The LLR prediction has smaller error and is smoother. While the quality of the LLR prediction does decrease for larger lattice sizes, the multihistogram results deteriorate much more markedly, and are indeed useless at 24. As already emphasised, the reason for the poor performance of the multi-histogram results is not the method itself, but our choice of data to illustrate it.

Figure 16  Plaquette variance from Multihistogram. Also shown for comparison are results from unconstrained simulations with additional runs. From top to bottom: 16, 20. The plot for 24 is not shown as it is plagued by huge fluctuations. Errors are shown as shaded regions.

Multihistogram can also be used for quantities such as the symmetric cloverleaf or the topological charge, both after Wilson flow. These behave better than the plaquette variance, but again the excessive spacing of the couplings in the reference simulations leads to declining quality for the larger lattice sizes.

Appendix B: Run details
This appendix records details of the main simulations.

Table 11 shows the couplings used for the reference simulations. The last column uses the fitting function from Gattringer and Lang [39] to compute the box size. The duration of these reference simulations was between 6 × and 12 × steps at 16, between 5 × and 11 × at 20, and fixed at 9. × at 24. Extended simulations to improve data points over a restricted range at 20 and 24 were mentioned in the text. These simulations were for four β values and two replicas at 20, and for two β values and four replicas at 24. For all extended simulations the duration was 10 steps, with Wilson flow measurements only every 10 steps.

The list of central actions for the density of states runs is given in Table 12. The spacing δS is variable and can be read off from the differences in the S_i. The width of the Gaussian constraint δ_i is also shown, and the ratio δ_i/δS is kept fixed at 1.2. The equivalent ranges in terms of the coupling β are given in table 2.

Table 13 is the result of the RM phase and lists the mean a_i across replicas. The RM error of this quantity varies slightly between tempers and lattice size, but it is bounded by 4 × − in all cases. Averaged over tempers, the error is 2. × −, 1. × −, 1. × − for 16, 20 and 24 respectively.

Table 11
Parameter ranges for the canonical reference simulations.

16        20        24        Box size (fm)
5.78900   5.90359   6.00745   2.23408
5.80687   5.92367   6.02941   2.15162
5.82474   5.94373   6.05132   2.07358
5.84261   5.96377   6.07317   1.99967
5.86048   5.98380   6.09495   1.92962
5.87835   6.00380   6.11665   1.86318
5.89622   6.02376   6.13827   1.80012
5.91409   6.04370   6.15980   1.74021
5.93196   6.06360   6.18124   1.68326
5.94983   6.08345   6.20257   1.62909
5.96770   6.10327   6.22379   1.57752
5.98557   6.12303   6.24490   1.52839
6.00344   6.14274   6.26590   1.48155
6.02130   6.16240   6.28677   1.43689
6.03917   6.18201   6.30753   1.39422
6.05704   6.20156   6.32816   1.35345
6.07491   6.22104   6.34867   1.31447
6.09278   6.24047   6.36906   1.27717
6.11065   6.25983   6.38932   1.24145
6.12852   6.27913   6.40946   1.20723
6.14639   6.29837   6.42949   1.17441
6.16426   6.31754   6.44939   1.14291
6.18213   6.33665   6.46918   1.11266
6.20000   6.35570   6.48887   1.08359
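The box sizes in the last column are consistent with the standard interpolation formula for the SU(3) Wilson-action lattice spacing (the Necco-Sommer parametrisation, as reproduced in [39]) with r0 = 0.5 fm. The check below is our own reconstruction, not the authors' code; that this is exactly the fitting function used is an assumption, supported by the close match to the quoted values.

```python
import math

def lattice_spacing_fm(beta, r0_fm=0.5):
    """Necco-Sommer interpolation for the SU(3) Wilson action
    (valid for 5.7 <= beta <= 6.92), as reproduced in [39]."""
    x = beta - 6.0
    return r0_fm * math.exp(-1.6804 - 1.7331 * x
                            + 0.7849 * x ** 2 - 0.4428 * x ** 3)

# First row of Table 11: beta = 5.78900 on the 16 lattice.
box = 16 * lattice_spacing_fm(5.78900)
# Closely matches the quoted box size of 2.23408 fm.
```

Applying the same formula to the 20 and 24 columns of each row gives box sizes agreeing at the per-cent level, as expected for trajectories defined at fixed physical volume.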
Table 12
Parameters S_i and δ_i used for the density of states runs.

16            20            24
S_i    δ_i    S_i    δ_i    S_i    δ_i

Table 13
Results of the RM phase: mean value of a_i for each temper.