Efficient numerical methods for the random-field Ising model: Finite-size scaling, reweighting extrapolation, and computation of response functions
Nikolaos G. Fytas
Applied Mathematics Research Centre, Coventry University, Coventry CV1 5FB, United Kingdom
Víctor Martín-Mayor
Departamento de Física Teórica I, Universidad Complutense, E-28040 Madrid, Spain and Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), E-50009 Zaragoza, Spain
It was recently shown [Phys. Rev. Lett., 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model, by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.

PACS numbers: 75.10.Nr, 02.60.Pn, 75.50.Lk
I. INTRODUCTION
The random-field Ising model (RFIM) is one of the archetypal disordered systems [1–11], extensively studied due to its theoretical interest, as well as its close connection to experiments in condensed matter physics [12–17]. In particular, several important systems can be studied through the RFIM: diluted antiferromagnets in a field [15], colloid-polymer mixtures [17, 18], colossal magnetoresistance oxides [19, 20], phase coexistence in the presence of quenched disorder [21–23], non-equilibrium phenomena such as the Barkhausen noise in magnetic hysteresis [24, 25] or the design of switchable magnetic domains [26], etc.

The existence of an ordered ferromagnetic phase for the RFIM, at low temperature and weak disorder, followed from the seminal discussion of Imry and Ma [1], when the space dimension is greater than two (D > 2) [27–31]. This has provided us with a general qualitative agreement on the sketch of the phase boundary, separating the ordered ferromagnetic phase from the high-temperature paramagnetic one. The phase-diagram line separates the two phases of the model and intersects the randomness axis at the critical value of the disorder strength. Such qualitative sketching has been commonly used in most papers for the RFIM [32–37], and closed-form quantitative expressions are also known from the early mean-field calculations [37]. However, it is generally true that the quantitative aspects of phase diagrams produced by mean-field treatments are very poor approximations.

On the theoretical side, a scaling picture is available [27–29]. The paramagnetic-ferromagnetic phase transition is ruled by a fixed point [in the Renormalization-Group (RG) sense] at temperature T = 0 [14]. The spatial dimension D is replaced by D − θ in hyperscaling relations (θ ≈ D/2). Were the suggested dimensional-reduction property exact, in D = 3 the ferromagnetic phase would disappear upon applying the tiniest random field [3]. Even if the statement holds at all orders in perturbation theory [5], the ferromagnetic phase is actually stable in D = 3 [31]. Non-perturbative phenomena are obviously at play [39, 40]. Indeed, it has been suggested that the scaling picture breaks down because of spontaneous supersymmetry breaking, implying that more than two critical exponents are needed to describe the phase transition [41].

On the experimental side, a particularly well researched realization of the RFIM is the diluted antiferromagnet in an applied magnetic field [15]. Yet, there are inconsistencies in the determination of critical exponents. In neutron scattering, different parameterizations of the scattering line-shape yield mutually incompatible estimates of the thermal critical exponent ν, as well as of the anomalous dimension η. Numerical simulations might decide these issues, but Monte Carlo simulations at T > 0 suffer from thermalization times exponential in ξ^θ, where ξ denotes the correlation length.
Although sophisticated improvements have appeared [48–52], these simulations produced low-accuracy data because they were limited to linear sizes of the order of L_max ≤ 32. Larger sizes can be achieved at T = 0, through the well-known mapping of the ground state to the maximum-flow optimization problem [53–66]. Yet, T = 0 simulations lack many of the tools that are standard at T > 0. Indeed, previous T = 0 studies suffered from this limitation, and their finite-size scaling analysis mostly resulted in strong violations of universality [55, 57, 58, 60].

The criteria for determining the order of the low-temperature phase transition and its dependence on the form of the field distribution have been discussed throughout the years [37, 67–73]. In fact, different results have been proposed for different field distributions, like the existence of a tricritical point at the strong-disorder regime of the system, present only in the bimodal distribution [37, 69]. Currently, despite the huge efforts recorded in the literature, a clear picture of the model's critical behavior is still lacking. Although the view that the phase transition of the RFIM is of second order is well established [48–50, 64], the extremely small value of the exponent β continues to cast some doubts. Moreover, a rather strong debate exists with regards to the role of disorder: the available simulations are not able to settle the question of whether the critical exponents depend on the particular choice of the distribution for the random fields, analogously to the mean-field theory predictions [37]. Thus, the whole issue of the model's critical behavior is under intense investigation [41, 48, 49, 51, 52, 74–78].

Recently, progress has been made towards this direction by the present authors [79].
In particular, using a combined approach of state-of-the-art techniques from the pool of statistical physics and graph theory, it was shown that the universality class of the RFIM is independent of the form of the implemented random-field distribution. This result, somewhat unexpected according to the current literature, was reached only after a proper taming of the large scaling corrections, a fact that, although emphasized many years ago [53], was overlooked in numerous subsequent relevant investigations of the model. In the current paper we present the full technical details of our numerical implementation, originally outlined in Ref. [79], and we provide some further numerical results relevant to the scaling behavior of the specific heat and the self-averaging aspects of the model in terms of the magnetic susceptibility and the bond energy. We also discuss the scaling aspects of the implemented maximum-flow algorithm.

The methods that we shall explain in the present paper will be useful well beyond the context of the 3D RFIM. The most obvious generalization is of course the RFIM in higher dimensions (see e.g. [80]). However, similar ideas can be applied to many disordered systems and should be useful when one needs to take derivatives, or to perform reweighting extrapolations, with respect to the disorder-distribution parameters. The ability to obtain these derivatives is most important when the relevant RG fixed point lies at zero temperature (thus, parameters other than temperature should be varied to cross the phase boundaries). For instance, for 2D Ising spin glasses several RG fixed points appear at T = 0, depending on the nature of the couplings distribution [81]. It should be possible then to study the corresponding phase boundaries and RG flows using our formalism. Another difficult problem that can be tackled with the current prescription is the diluted antiferromagnet in a uniform external field [15].
The ground state of this model is degenerate, and it is thus difficult to sample with uniform probability from the set of all ground states [59]. Even if in experiments the external field is uniform, in simulations it is desirable to add a small, local random noise to the magnetic field [52]. The small random magnetic fields make it possible to employ the full formalism that we derive in the following Sections. Furthermore, the fluctuation-dissipation formulae elucidated below are also valid when working at finite (rather than zero) temperature, which is necessary for some algorithms [48, 51].

The outline of the paper is as follows: In Sec. II we define the model and the random-field distributions under study. In Sec. III we outline the T = 0 maximum-flow algorithm, and in Sec. IV we define the set of useful physical observables that will be mainly analyzed. However, a complication arises: the sought observables cannot be straightforwardly computed, as we explain in Sec. V. The problems are overcome in Sec. VI, where we derive explicitly a fluctuation-dissipation formalism that allowed us to compute connected and disconnected correlation functions from the T = 0 data for each field distribution distinctively. The use of a reweighting method with respect to the disorder strength constitutes another asset of our combinatorial scheme. In Sec. VII we give a brief description of our finite-size scaling vehicle, the quotients method [82]. In Sec. VIII, and on the basis of our main physical result of a single universality class [79], we illustrate the size evolution of several effective critical exponents and we present a finite-size scaling analysis of additional numerical data for the bond energy.
For this latter task, we adopt an extension of the quotients method, necessary for monitoring the scaling of the effective exponent α of the specific heat. Furthermore, we discuss the self-averaging aspects of the model, by implementing a proper noise-to-signal ratio for the magnetic susceptibility and the bond energy, and we estimate the critical slowing-down exponent z of the zero-temperature algorithm used to generate the ground states of the model. Our contribution ends with a summary in Sec. IX.

II. MODEL AND RANDOM-FIELD DISTRIBUTIONS
We consider Ising spins S_x = ±1 on a cubic lattice of linear size L with periodic boundary conditions. The Hamiltonian of the RFIM in a general form may be written as

H = -J \sum_{\langle x,y \rangle} S_x S_y - \sum_x h_x S_x ,   (1)

where in the above equation J is the nearest-neighbors' ferromagnetic interaction, which is set to J = 1. With h_x we denote the set of independent quenched random fields. Common field distributions considered in the literature are the Gaussian and bimodal distributions [12, 14, 83], for which marginally distinct results have been proposed [55, 57, 58, 60].

In the current work the quenched random fields h_x are extracted from one of the following double Gaussian (dG) or Poissonian (P) distributions (with parameters h_R, σ):

dG^{(σ)}(h_x; h_R) = \frac{1}{2\sqrt{2\pi}\,\sigma} \left[ e^{-(h_x - h_R)^2/(2\sigma^2)} + e^{-(h_x + h_R)^2/(2\sigma^2)} \right] ,   (2)

P(h_x; σ) = \frac{1}{2|\sigma|} e^{-|h_x|/\sigma} .   (3)

The limiting cases σ = 0 and h_R = 0 of Eq. (2) correspond to the well-known bimodal (b) and Gaussian (G) distributions, respectively. In the Poissonian and Gaussian cases the strength of the random fields is parameterized by σ, while in the double Gaussian case we shall take σ = 1 and 2, and vary h_R.

As we are only interested in a T = 0 study of the model, by estimating ground states via the use of the efficient optimization methods that will be discussed below, a proper choice of the random-field distributions is of major importance in our task. In particular, the main advantage of considering the double Gaussian distribution of Eq. (2) is that, for certain values of σ, one can mimic the double-peak structure of the bimodal distribution, capturing its effects while at the same time escaping the complication of degenerate ground states. As is well known, for discrete distributions like the bimodal one, degeneracy complicates the numerical solution of the system at T = 0, since one has to sweep over all the possible ground states of the system [56, 59].
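The two continuous distributions of Eqs. (2) and (3) are straightforward to sample. A minimal Python sketch (the function names and parameter values are ours, chosen for illustration):

```python
import random

def sample_dG(h_R, sigma, rng):
    """Double Gaussian, Eq. (2): equal-weight mixture of two Gaussians
    of width sigma, centered at +h_R and -h_R."""
    center = h_R if rng.random() < 0.5 else -h_R
    return rng.gauss(center, sigma)

def sample_P(sigma, rng):
    """'Poissonian' distribution, Eq. (3): two-sided exponential decay,
    P(h) = exp(-|h|/sigma) / (2|sigma|)."""
    magnitude = rng.expovariate(1.0 / sigma)   # |h| ~ Exp(mean sigma)
    return magnitude if rng.random() < 0.5 else -magnitude

rng = random.Random(1)
fields_dG = [sample_dG(2.0, 1.0, rng) for _ in range(10)]
fields_P = [sample_P(1.0, rng) for _ in range(10)]
```

For h_R = 0 the double Gaussian reduces to a single Gaussian of width σ, and for σ → 0 it approaches the bimodal distribution at ±h_R, in line with the limiting cases stated below Eq. (2).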
On the other hand, for the cases of the above distributions (2) and (3), the ground state of the system is non-degenerate, so it is sufficient to calculate just one ground state in order to get the necessary information.

III. ZERO-TEMPERATURE ALGORITHM
As already discussed extensively in the literature (see Refs. [84, 85] and references therein), the RFIM captures essential features of models in statistical physics that are controlled by disorder and have frustration. Such systems show complex energy landscapes, due to the presence of large barriers that separate several metastable states. If such models are studied using simulations mimicking the local dynamics of physical processes, it takes an extremely long time to encounter the exact ground state. However, there are cases where efficient methods for finding the ground state can be utilized and, fortunately, the RFIM is one such clear case. These methods escape from the typical direct physical representation of the system, in the sense that extra degrees of freedom are introduced and an expanded problem is finally solved. By expanding the configuration space and choosing proper dynamics, the algorithm practically avoids the need of overcoming the large barriers that exist in the original physical configuration space. An attractor state in the extended space is found in a time polynomial in the size of the system and, when the algorithm terminates, the relevant auxiliary fields can be projected onto a physical configuration, which is the guaranteed ground state.

The random field is a relevant perturbation at the pure fixed point, and the random-field fixed point is at T = 0 [27, 86, 87]. Hence, the critical behavior is the same everywhere along the phase boundary, and we can predict it simply by staying at T = 0 and crossing the phase boundary at the critical field point. This is a convenient approach because we can determine the ground states of the system exactly using efficient optimization algorithms [53–66, 79, 88–93] through an existing mapping of the ground state to the maximum-flow optimization problem [94–96]. A clear advantage of this approach is the ability to simulate large system sizes and disorder ensembles in rather moderate computational times.
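To make the mapping concrete, the following self-contained Python sketch builds the standard source/sink flow network for the Hamiltonian (1): neighbor edges of capacity 2J, a source edge of capacity 2h_x for sites with h_x > 0, and a sink edge of capacity 2|h_x| for sites with h_x < 0, so that the minimum cut equals the ground-state energy up to an additive constant. For brevity it uses a simple shortest-augmenting-path max-flow rather than the push-relabel algorithm described below; this is an illustration of the mapping only, not the production code of the paper.

```python
import itertools
from collections import deque

def rfim_ground_state(L, h, J=1.0):
    """Exact RFIM ground state on an L^3 periodic lattice via min-cut.
    h: dict mapping site (i,j,k) -> random field. Returns site -> spin.
    Sites on the source side of the minimum cut carry S = +1."""
    S, T = 's', 't'
    cap, adj = {}, {}
    def add(u, v, c):                      # directed capacity (+ 0-cap reverse)
        cap[(u, v)] = cap.get((u, v), 0.0) + c
        cap.setdefault((v, u), 0.0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    sites = list(itertools.product(range(L), repeat=3))
    for x in sites:
        if h[x] > 0:
            add(S, x, 2.0 * h[x])          # cut iff S_x = -1: cost 2|h_x|
        elif h[x] < 0:
            add(x, T, -2.0 * h[x])         # cut iff S_x = +1: cost 2|h_x|
        for d in range(3):                 # forward neighbors (periodic)
            y = list(x); y[d] = (y[d] + 1) % L
            add(x, tuple(y), 2.0 * J)      # cut iff S_x != S_y: cost 2J
            add(tuple(y), x, 2.0 * J)
    while True:                            # shortest augmenting paths (BFS)
        parent, q = {S: None}, deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 1e-12:
                    parent[v] = u; q.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= push; cap[(v, u)] += push
    reach, q = {S}, deque([S])             # source side of the minimum cut
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in reach and cap[(u, v)] > 1e-12:
                reach.add(v); q.append(v)
    return {x: (1 if x in reach else -1) for x in sites}
```

For a continuous field distribution the ground state is unique with probability one, so the minimum cut determines the spin configuration unambiguously.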
We should underline here that, even for the most efficient T > 0 Monte Carlo methods, such large systems are out of reach. Additional advantages of the T = 0 approach are the absence of statistical errors and of equilibration problems, which, on the contrary, are the two major drawbacks encountered in T > 0 simulations of the model.

We use a modified version of the push-relabel algorithm of Middleton et al. [63, 64, 98] that removes the source and sink nodes, reducing memory usage and also clarifying the physical connection [64, 98]. For the sake of completeness, we recall here the algorithm we use, which is exactly the algorithm proposed in Refs. [63, 64, 98]. The algorithm starts by assigning an excess x_i to each lattice site i, with x_i = h_i. Residual capacity variables r_ij between neighboring sites are initially set to J. A height variable u_i is then assigned to each node via a global update step. In this global update, the value of u_i at each site in the set T = {j | x_j < 0} of negative-excess sites is set to zero. Sites with x_i ≥ 0 have u_i set to the length of the shortest path, via edges with positive capacity, from i to T. The ground state is found by successively rearranging the excesses x_i, via push operations, and updating the heights, via relabel operations.

Table I. Summary of simulation details.

Distribution   L_min   L_max   N_samples
G                8      192       10
dG (σ = 1)
dG (σ = 2)
P

When no more pushes or relabels are possible, a final global update determines the ground state, so that sites which are path connected to T by bonds with r_ij > 0 have σ_i = −1, while those which are disconnected from T have σ_i = 1. A push operation moves excess from a site i to a lower-height neighbor j, if possible, that is, whenever x_i > 0, r_ij > 0, and u_j = u_i − 1. In a push, the working variables are modified according to x_i → x_i − δ, x_j → x_j + δ, r_ij → r_ij − δ, and r_ji → r_ji + δ, with δ = min(x_i, r_ij). Push operations tend to move the positive excess towards sites in T. When x_i > 0 and no push is possible, u_i is increased to 1 + min_{j | r_ij > 0} u_j (a relabel). This is defined as a single push-relabel step; the number of such steps will be our measure of algorithmic time. In addition, if a set of highest sites U becomes isolated, with u_i > u_j + 1 for all i ∈ U and all j ∉ U, the height u_i for all i ∈ U is increased to its maximum value, L, as these sites will always be isolated from the negative-excess nodes. The order in which sites are considered is given by a queue. In this paper, we have used the first-in-first-out (FIFO) queue [98]. The FIFO structure executes a push-relabel step for the site i at the front of a list. If any neighboring site is made active by the push-relabel step, it is added to the end of the list. If i is still active after the push-relabel step, it is also added to the end of the list. This structure maintains and cycles through the set of active sites. Last but not least, the computational efficiency of the algorithm has been increased via the use of periodic global updates, every L relabels [64, 98].

Using the above version of the push-relabel algorithm, we performed large-scale simulations of the RFIM defined in Eqs. (1) - (3) for a wide range of the simulation parameters. Our tactic included three steps: First, we performed preliminary runs with a reduced number N_samples of samples, where N_samples counts the number of independent disorder realizations, to locate the h_R- or σ-values (depending on the parametrization) of the crossing points of the connected correlation length of the system for pairs of lattice sizes of the form (L, 2L), as dictated by the scaling method used (see below).
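As an illustration of the push and relabel moves just described, here is a compact, generic FIFO push-relabel max-flow routine in Python. This is a textbook variant that keeps explicit source and sink nodes, unlike the memory-optimized version of Refs. [63, 64, 98]; it uses dense adjacency storage for clarity only.

```python
from collections import deque

def max_flow_push_relabel(n, edges, s, t):
    """Generic FIFO push-relabel max-flow on nodes 0..n-1.
    edges: list of (u, v, capacity) directed edges."""
    cap = [[0.0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = [[0.0] * n for _ in range(n)]
    height, excess = [0] * n, [0.0] * n
    height[s] = n                          # source sits at height n
    for v in range(n):                     # saturate all source edges
        if cap[s][v] > 0:
            flow[s][v] = cap[s][v]; flow[v][s] = -cap[s][v]
            excess[v] = cap[s][v]; excess[s] -= cap[s][v]
    active = deque(v for v in range(n) if v not in (s, t) and excess[v] > 0)
    def residual(u, v):
        return cap[u][v] - flow[u][v]
    while active:
        u = active.popleft()
        while excess[u] > 1e-12:           # discharge u completely
            pushed = False
            for v in range(n):
                # push: admissible edge = positive residual, one level down
                if residual(u, v) > 1e-12 and height[u] == height[v] + 1:
                    d = min(excess[u], residual(u, v))
                    flow[u][v] += d; flow[v][u] -= d
                    excess[u] -= d; excess[v] += d
                    if v not in (s, t) and excess[v] > 1e-12 and v not in active:
                        active.append(v)   # neighbor became active: enqueue
                    pushed = True
                    if excess[u] <= 1e-12:
                        break
            if not pushed:
                # relabel: lift u just above its lowest residual neighbor
                height[u] = 1 + min(height[v] for v in range(n)
                                    if residual(u, v) > 1e-12)
    return sum(flow[s][v] for v in range(n))
```

The FIFO queue `active` mirrors the list structure described above: sites made active by a push are appended at the end, and each dequeued site is fully discharged before moving on.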
Subsequently, the main part of the simulations took place at these estimated crossing points, with details, in terms of linear system sizes and disorder-averaged ensembles, summarized in Table I. In Table I, L_min (L_max) denotes the minimum (maximum) linear size considered within the sequence of size points ranging from L = 8 up to L = 192. Finally, we performed an additional set of simulations for triplets of system sizes, as shown in Table II, in order to compute the critical exponent of the specific heat via the scaling of the bond energy. This will be exemplified in Sec. VIII.

IV. OBSERVABLES
An instance of the random fields {h_x} is named a sample. Thermal mean values are denoted as ⟨· · ·⟩, while the subsequent average over samples is indicated by an over-line. The two most basic quantities are the bond energy and the order-parameter density:

E_J = -J \sum_{\langle x,y \rangle} S_x S_y ,   m = \frac{1}{L^D} \sum_x S_x .   (4)

A crucial feature of the RFIM is that we have to deal with two different correlation functions, namely the disconnected and the connected propagators.

The disconnected propagator is straightforward to compute, both in real space, G^{(dis)}_{xy}, and in Fourier space, χ^{(dis)}_k:

G^{(dis)}_{xy} = \overline{\langle S_x \rangle \langle S_y \rangle} ,   χ^{(dis)}_k = L^D \, \overline{|\langle m_k \rangle|^2} ,   (5)

where

m_k = \frac{1}{L^D} \sum_x e^{i k \cdot x} S_x .   (6)

In particular, special notations are standard for vanishing wavevector: m_{k=(0,0,0)} = m (i.e. the order-parameter density), and χ^{(dis)}_{k=(0,0,0)} = χ^{(dis)} (i.e. the disconnected susceptibility).

On the other hand, we have the connected propagator:

G_{xy} = \frac{\partial \langle S_x \rangle}{\partial h_y} .   (7)

At finite temperature, one could compute G_{xy} from the Fluctuation-Dissipation Theorem,

G_{xy} = \frac{1}{T} \left[ \langle S_x S_y \rangle - \langle S_x \rangle \langle S_y \rangle \right] .   (8)

However, we work directly at T = 0, as explained in Sec. III. Therefore, Eq. (8) is clearly unsuitable for us, and the methods of Sec. VI will be needed (see also Ref. [8]). For later use, we note the symmetry

\overline{G_{xy}} = \overline{G_{yx}} = \overline{(G_{xy} + G_{yx})}/2 .   (9)

In fact, our numerical data will never verify this symmetry exactly (because of statistical fluctuations), hence we prefer to use the symmetrized propagator (G_{xy} + G_{yx})/2. Now, the connected propagator in Fourier space is

χ_k = \frac{1}{L^D} \sum_{x,y} e^{i k \cdot (x - y)} \, \overline{\frac{G_{xy} + G_{yx}}{2}} .   (10)

Again, the case of vanishing wavevector deserves a special naming: χ_{k=(0,0,0)} = χ is the connected susceptibility.

From both propagators we compute the connected, ξ, and disconnected, ξ^{(dis)}, second-moment correlation lengths [38, 99]. Let k_min = (2π/L, 0, 0); then

ξ^{(\#)} = \frac{1}{2 \sin(\pi/L)} \sqrt{ \frac{χ^{(\#)}}{χ^{(\#)}_{k_{min}}} - 1 } ,   (11)

where the superscript (#) stands both for the connected and the disconnected case [100]. Of course, we improve our statistics by averaging over the three equivalent wavevectors, χ_{k_min} = [χ_{k=(2π/L,0,0)} + χ_{k=(0,2π/L,0)} + χ_{k=(0,0,2π/L)}]/3.

Other important quantities are the well-known universal Binder ratio,

U_4 = \frac{\overline{\langle m^4 \rangle}}{\overline{\langle m^2 \rangle}^2} ,   (12)

and the ratio of susceptibilities,

U_{22} = \frac{χ^{(dis)}}{χ^2} ,   (13)

which we use as a platform for investigating the validity of the so-called two-exponent scaling scenario, see Sec. VIII.

V. PROBLEMS WITH THE STRAIGHTFORWARD APPROACH
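Equation (11) is easy to exercise on a synthetic propagator. In the sketch below (our own toy check, not data from the paper) we feed Eq. (11) a lattice Ornstein-Zernike form χ_k = χ_0 / (1 + ξ² k̂²) with k̂ = 2 sin(k/2), for which the formula recovers ξ exactly:

```python
import math

def xi_second_moment(chi_0, chi_kmin, L):
    """Second-moment correlation length, Eq. (11):
    xi = sqrt(chi_0 / chi_kmin - 1) / (2 sin(pi/L))."""
    return math.sqrt(chi_0 / chi_kmin - 1.0) / (2.0 * math.sin(math.pi / L))

def chi_OZ(k, chi_0, xi):
    """Toy lattice Ornstein-Zernike propagator (illustration only)."""
    k_hat = 2.0 * math.sin(0.5 * k)
    return chi_0 / (1.0 + (xi * k_hat) ** 2)

L, chi_0, xi_true = 32, 7.3, 5.0
k_min = 2.0 * math.pi / L
xi_est = xi_second_moment(chi_0, chi_OZ(k_min, chi_0, xi_true), L)
```

In a real simulation χ_0 and χ_{k_min} come from the measured propagators, and averaging χ_{k_min} over the three equivalent wavevectors reduces the statistical noise, as noted below Eq. (11).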
Computing response functions is very important. Unfortunately, the traditional approach for disordered systems (see e.g. [101]) is not feasible at zero temperature. The problem is easily understood by considering the example of the Monte Carlo computation of the magnetic susceptibility.

The traditional approach would start by generating N_samples samples of the random fields, according to the appropriate probability density w({h_x}). Then, one would add to each random field a uniform external field,

h_x → h_x + H ,   (14)

and the magnetic susceptibility would be estimated as

χ_{naive} = \frac{1}{N_{samples}} \sum_{s=1}^{N_{samples}} \frac{\partial \langle m_s \rangle_H}{\partial H} \Big|_{H=0} ,   (15)

where ⟨m_s⟩_H is the thermal expectation value of instance s under the displaced magnetic fields in Eq. (14). Yet, as we explain below, the naive Monte Carlo estimator (15) yields χ_naive = 0 with probability one for any smooth random-field probability density w({h_x}) such as ours, recall Eqs. (2) and (3).

The approach outlined in Eq. (15) fails because, at zero temperature, the only spin assignment with a non-vanishing statistical weight is the ground state of the Hamiltonian (1). The crucial point is that the ground state is unique, excepting a zero-measure set in the L^D-dimensional space spanned by the random fields. Indeed, consider two arbitrary but fixed spin assignments, {S^(1)_x} and {S^(2)_x}. The condition of equal energy,

H({S^(1)_x}) = H({S^(2)_x}) ,   (16)

defines a hyper-plane in the random-fields space. There are 2^{L^D} (2^{L^D} − 1)/2 such hyper-planes. For random fields {h_x} not on any of these hyper-planes, each of the 2^{L^D} possible spin assignments has a distinct energy, and thus the ground state is unique. Furthermore, the ordering of the 2^{L^D} energy levels is fixed away from the hyper-planes (which are the locus in random-fields space where level crossings happen).

Now, let us suppose that none of the N_samples instances in Eq. (15) lies exactly on one of the dividing hyper-planes [this happens with probability one for any smooth w({h_x})]. Then, for H small enough, the fields displacement in Eq. (14) will not cross any of the hyper-planes and thus adding the field H will leave the ground state unvaried. In other words, d⟨m_s⟩_H/dH|_{H=0} = 0, with probability one.

However, the connected susceptibility is not zero. The way out of the paradox is simple: the H-derivatives in Eq. (15) are actually a sum of Dirac δ-functions, centered at the precise H values that cause the displaced fields (14) to cross some of the dividing hyper-planes (16). It is the integral over the random fields of these Dirac δ-functions which produces a finite susceptibility χ > 0:

χ = \int \prod_x dh_x \, w(\{h_x\}) \, \frac{\partial \langle m \rangle}{\partial H} \Big|_{H=0} .   (17)

We see the heart of the problem: naive Monte Carlo estimations such as Eq. (15) cannot correctly reproduce integrals such as Eq. (17) when the integrand is as singular an object as a sum of Dirac δ-functions.

Nevertheless, people have tried to overcome the zero-measure problem. For instance, one could keep H finite and compute the Monte Carlo (MC) average,

[\langle m \rangle]^{(MC)}_H = \frac{1}{N_{samples}} \sum_{s=1}^{N_{samples}} \langle m_s \rangle_H ,   (18)

and then try to extrapolate the derivative d[⟨m⟩]^{(MC)}_H/dH to H → 0. Of course, the smaller H is, the larger is the number N_samples needed to observe some H-dependency. Yet, reasonable tradeoffs between the number of instances and the size of the applied field can be empirically found [61].

In Sec. VI we explain a completely different approach that (i) allows us to work directly at H = 0 and (ii) avoids Dirac δ-functions. How this is possible can be easily understood by considering the following one-dimensional toy model.
1. Toy model
Imagine we have a single random field h. In analogy with the general case, let us also assume that the magnetization, regarded as a function of h, is piecewise constant, with a set of R discontinuities:

\langle m \rangle_h = -1 + \sum_{i=1}^{R} [m_{i+1} - m_i] \, θ(h - h_i) .   (19)

In the above expression θ(x) is the Heaviside step function, θ(x > 0) = 1 and θ(x < 0) = 0, and the magnetization plateaux are monotonically increasing, m_{i+1} > m_i, with m_1 = −1 and m_{R+1} = 1.

Now, if we displace the field, h → h + H, and take the H derivative in Eq. (19), a sum of Dirac δ-functions will arise, making the Monte Carlo method unfeasible. However, it is useful to take one step back and recall how the susceptibility is defined. First, we consider the average magnetization as a function of the displaced field,

\langle m \rangle (H) = \int_{-\infty}^{\infty} dh \, w(h) \, \langle m \rangle_{h+H} .   (20)

The derivative with respect to H is taken only after computing the integral (the random-field probability density w(h) must decrease fast enough at infinity to make the integral convergent). Yet, a change of variable h' = h + H yields

\langle m \rangle (H) = \int_{-\infty}^{\infty} dh \, w(h - H) \, \langle m \rangle_h .   (21)

The change of variable is mathematically sound, as it relies only on the translational invariance of the integration measure in Eq. (20). If the probability density w(h) is smooth, one can now interchange derivative and integral, obtaining

χ_{toy\,model} = \int_{-\infty}^{\infty} dh \, w(h) \left[ -\frac{1}{w(h)} \frac{dw}{dh} \right] \langle m \rangle_h .   (22)

The integrand in Eq. (22) is a regular function, allowing for a Monte Carlo estimation of the form

χ^{(MC)}_{toy\,model} = \frac{1}{N_{samples}} \sum_{s=1}^{N_{samples}} \langle m \rangle_{h_s} \left[ -\frac{1}{w(h_s)} \frac{dw}{dh} \Big|_{h=h_s} \right] ,   (23)

where the independent random fields h_s are drawn with weight w(h). Note that the summands in Eq. (23) cannot be interpreted as the magnetic susceptibility of a given instance (there are no Dirac δ-functions). However, χ^{(MC)}_{toy model} does converge to χ_{toy model} in the limit of large N_samples.

VI. FLUCTUATION-DISSIPATION FORMALISM
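The estimator of Eq. (23) can be checked in a few lines. For a Gaussian weight w(h) of width σ one has −(1/w) dw/dh = h/σ², and for a single magnetization jump from −1 to +1 at h_0 (i.e. R = 1), integrating Eq. (22) by parts gives the exact value χ = 2 w(h_0). A self-contained sketch (the parameters are ours, chosen for illustration):

```python
import math, random

def chi_mc_toy(m_of_h, neg_dlogw, sampler, n_samples, seed=0):
    """Monte Carlo estimator of Eq. (23): average over samples h_s ~ w of
    <m>_{h_s} * [-(1/w) dw/dh](h_s)."""
    rng = random.Random(seed)
    return sum(m_of_h(h) * neg_dlogw(h)
               for h in (sampler(rng) for _ in range(n_samples))) / n_samples

sigma, h0 = 1.0, 0.3
neg_dlogw = lambda h: h / sigma**2          # Gaussian w: -(1/w) dw/dh = h/sigma^2
m_of_h = lambda h: 1.0 if h > h0 else -1.0  # single plateau jump at h0
sampler = lambda rng: rng.gauss(0.0, sigma)
chi_est = chi_mc_toy(m_of_h, neg_dlogw, sampler, 200000)
chi_exact = 2.0 * math.exp(-h0**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)
```

Note that each summand is not a per-sample susceptibility; only the average converges to χ, in accordance with the remark below Eq. (23).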
Reweighting methods are a major asset for numerical studies of critical phenomena [102, 103]: from a single simulation at a given temperature one gets a continuous curve for (say) the disconnected susceptibility, χ^{(dis)}(T). However, we will be working at zero temperature. Hence, standard reweighting methods are not useful for us. In fact, we shall explain here our extension of reweighting methods originally devised for percolation studies [101, 104, 105]. From a single simulation, we extrapolate the mean value of observables to nearby parameters of the disorder distribution. We varied σ for the Poissonian and Gaussian distributions, see panel (a) in Fig. 1 below for an illustrative flavor, and h_R for the double Gaussian distribution. These reweighting methods were instrumental for our previous work [79].

As we discuss below, a closely related problem is the computation of the connected correlation functions (recall also Sec. V). Our solution for the case of the Gaussian distribution, in Sec. VI A, will turn out to be identical to the one in Ref. [8]. However, modifications are needed for the Poissonian and double Gaussian distributions, which are explained in Secs. VI B and VI C, respectively.

For all three distributions, we shall compute the connected propagator by adding a source h̃_x to the random fields:

h_x → h_x + ε h̃_x ,   (24)

where ε is a small parameter. At variance with the random fields {h_x}, the sources {h̃_x} will be arbitrary but fixed: the over-line will indicate an average only with respect to the {h_x}. Then, the connected propagator G_{xy} (= G_{yx}) follows from the Taylor expansion

\langle S_y \rangle_{\{h_x + ε h̃_x\}} = \langle S_y \rangle + ε \sum_x G_{yx} h̃_x + O(ε^2) .   (25)

In the above expression, ⟨S_y⟩_{{h_x + ε h̃_x}} is the thermal expectation value obtained when plugging {h_x + ε h̃_x} as the random magnetic fields into the Hamiltonian, Eq. (1).

The formalism will be explained in the same way for all three random-field distributions. We start from the general observation that computing thermal expectation values at T = 0 is trivial: one just needs to evaluate the function of interest, see Sec. IV, on the ground-state spin assignment corresponding to a given sample (recall that a sample is characterized by a set of random fields {h_x}). In this sense, thermal mean values are mere functions of the {h_x}. Next, we observe that a special function of the random fields, when averaged over the {h_x}, is equal to the connected propagator. Finally, we show how to perform reweighting extrapolations for a generic function of the random fields F({h_x}).

Before we start, let us mention that a practical consideration had an important impact on the design of our fluctuation-dissipation formalism. We simulated a large number of samples on large system sizes (up to L = 192), see Table I. Clearly, storing on the hard drive all the corresponding ground-state assignments is out of the question. Therefore, we need to select beforehand a small set of quantities to be computed on the ground-state spin assignment and stored on the hard drive. This small set of observables includes E_J, m and m_{k_min}, recall Sec. IV, but also the quantities needed to compute the connected propagators and the reweighting extrapolations (in all cases, we restricted the wavevectors to a bare minimum: k = (0, 0, 0) and k = k_min).

A. Gaussian distribution
The combined probability density for our N = L^D Gaussian random fields with width parameter σ is

w_G(\{h_x\}, σ) = \frac{1}{(2πσ^2)^{N/2}} \, e^{-\sum_x h_x^2 / (2σ^2)} .   (26)

Our computation starts from Eq. (25):

\overline{\langle S_y \rangle_{\{h_x + ε h̃_x\}}} = \int \prod_x dh_x \, w_G(\{h_x\}, σ) \, \langle S_y \rangle_{\{h_x + ε h̃_x\}}   (27)
= \int \prod_x dh'_x \, w_G(\{h'_x - ε h̃_x\}, σ) \, \langle S_y \rangle_{\{h'_x\}} .   (28)

In the above expressions the N integrals extend from −∞ to +∞. We went from (27) to (28) by changing integration variables as h'_x = h_x + ε h̃_x [we shall drop the prime for the dummy integration variables h'_x in Eq. (28)]. Now, one just needs to Taylor-expand in the small parameter ε in Eq. (28). A direct comparison with Eq. (25) yields

G_{zy} = \int \prod_x dh_x \, w_G(\{h_x\}, σ) \, \frac{h_z \langle S_y \rangle_{\{h_x\}}}{σ^2}   (29)
= \frac{\overline{h_z \langle S_y \rangle}}{σ^2} .   (30)

We now use Eq. (10) to compute the propagator in Fourier space,

χ_k = \frac{L^D}{2σ^2} \, \overline{h^G_{-k} m_k + h^G_k m_{-k}} ,   (31)

where m_k was defined in Eq. (6) and

h^G_k = \frac{1}{L^D} \sum_x e^{i k \cdot x} h_x .   (32)

The reader will note that Eq. (31) was obtained in Ref. [8] (yet, our argument is sound as well when one starts directly at T = 0, which is exactly our case). Our rationale for recalling this fluctuation-dissipation argument here is that the derivation of the new formulae in Secs. VI B and VI C is completely analogous.

At this point it should be obvious that, for all the observables of interest, we are after the computation of multidimensional integrals of the form

\overline{F} \big|_σ = \int \prod_x dh_x \, w_G(\{h_x\}, σ) \, F(\{h_x\}) ,   (33)

where F({h_x}) could be F = ⟨S_z S_y⟩_{{h_x}}, or F = h_z ⟨S_y⟩_{{h_x}}, etc. Now, we need to solve three problems:

1. Compute derivatives with respect to σ, D_σ \overline{F}.
Recall that $\sigma$ is the width of the Gaussian weight in Eq. (33).
2. Extrapolate the expectation values at $\sigma+\delta\sigma$ from integrals at $\sigma$, such as Eq. (33).
3. Estimate how large the extrapolation window $\delta\sigma$ may be in a numerical simulation.
Fortunately, we can solve all three problems with a single trick. The starting point is
$$\overline F\,\big|_{\sigma+\delta\sigma} = \int\prod_x \mathrm dh_x\, w_{\mathrm G}(\{h_x\},\sigma+\delta\sigma)\, F(\{h_x\}) \quad (34)$$
$$= \int\prod_x \mathrm dh_x\, w_{\mathrm G}(\{h_x\},\sigma)\, F(\{h_x\})\, R(\{h_x\},\sigma,\delta\sigma), \quad (35)$$
where the reweighting factor $R$ is just the ratio of probability densities:
$$R(\{h_x\},\sigma,\delta\sigma) = \frac{w_{\mathrm G}(\{h_x\},\sigma+\delta\sigma)}{w_{\mathrm G}(\{h_x\},\sigma)} \quad (36)$$
$$= \left(\frac{\sigma}{\sigma+\delta\sigma}\right)^{N} e^{\frac{1}{2}\left[\sigma^{-2} - (\sigma+\delta\sigma)^{-2}\right]\sum_x h_x^2}. \quad (37)$$
The computation of $\sigma$-derivatives follows straightforwardly by Taylor expanding the reweighting factor in $\delta\sigma$:
$$R(\{h_x\},\sigma,\delta\sigma+\epsilon) = R(\{h_x\},\sigma,\delta\sigma)\left(1 + \epsilon D + O(\epsilon^2)\right), \quad (38)$$
where
$$D(\{h_x\},\sigma,\delta\sigma) = \frac{1}{\sigma+\delta\sigma}\left[\frac{\sum_x h_x^2}{(\sigma+\delta\sigma)^2} - N\right]. \quad (39)$$
Our reweighting formulae can be cast in a more aesthetically appealing form:
$$\overline F\,\big|_{\sigma+\delta\sigma} = \overline{F R_{\sigma,\delta\sigma}}\,\big|_\sigma, \qquad D_\sigma \overline F\,\big|_{\sigma+\delta\sigma} = \overline{F R_{\sigma,\delta\sigma} D_{\sigma,\delta\sigma}}\,\big|_\sigma. \quad (40)$$
Note that Eq. (40) refers to a function $F$ of the random fields only. Explicit dependence on $\sigma$, like in $G_{zy} = \overline{h_z\langle S_y\rangle}/\sigma^2$, is not included, but can be taken care of straightforwardly.
In summary, in order to perform a complete reweighting study for each sample we need to store on the hard drive only $E_J$, $m_{\boldsymbol k}$, and $h^{\mathrm G}_{-\boldsymbol k}m_{\boldsymbol k} + h^{\mathrm G}_{\boldsymbol k}m_{-\boldsymbol k}$ [restricting ourselves to $\boldsymbol k = (0,0,0)$ and $\boldsymbol k = \boldsymbol k_{\mathrm{min}}$], as well as $\sum_x h_x^2$.
The final question we need to address is: how large can $\delta\sigma$ reasonably be in a Monte Carlo simulation? Of course, the question is ill-posed, because the answer depends on how many samples are simulated. In the limit of infinite statistics, one could have an arbitrarily large $\delta\sigma$. However, this ideal situation is never reached in practice. As a rule of thumb one may use many different criteria, but all of them boil down to requiring that a typical set of random fields for $\sigma+\delta\sigma$ could also be typical at $\sigma$ (or, at least, not too unusual). A particularly simple such criterion requires the absolute value of
$$\overline{\sum_x h_x^2}\,\Big|_{\sigma+\delta\sigma} - \overline{\sum_x h_x^2}\,\Big|_{\sigma} = N\left[(\sigma+\delta\sigma)^2 - \sigma^2\right] \quad (41)$$
to be no larger than the dispersion of $\sum_x h_x^2$ at $\sigma$, namely $\sqrt{2N}\,\sigma^2$. The resulting bound is
$$|\delta\sigma| \le \frac{\sigma}{\sqrt{2N}}. \quad (42)$$
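As an illustration of the bookkeeping above, here is a minimal numerical sketch (the values of $N$, $\sigma$, and the sample count are toy parameters, not our production setup) that draws Gaussian field samples, builds the reweighting factor of Eq. (37) and the derivative operator of Eq. (39), and checks Eq. (40) on the simple observable $F = \sum_x h_x^2$, whose exact disorder average $N\sigma^2$ is known:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64          # number of sites, N = L**D (toy value)
sigma = 2.0     # field strength of the simulation
# rule-of-thumb extrapolation window, Eq. (42)
delta = sigma / np.sqrt(2 * N)
n_samples = 100_000

# one row of Gaussian fields per disorder sample, drawn at width sigma
h = rng.normal(0.0, sigma, size=(n_samples, N))
S2 = (h ** 2).sum(axis=1)   # per-sample sum_x h_x^2 (the quantity stored on disk)

# reweighting factor, Eq. (37), assembled in log-space for numerical safety
logR = N * np.log(sigma / (sigma + delta)) \
     + 0.5 * (sigma ** -2 - (sigma + delta) ** -2) * S2
R = np.exp(logR)

# derivative operator, Eq. (39), evaluated at the shifted width
D = ((S2 / (sigma + delta) ** 2) - N) / (sigma + delta)

# Eq. (40): extrapolate the disorder average of F = sum_x h_x^2 ...
F_extrap = np.mean(S2 * R)           # estimator of the average at sigma + delta
F_exact = N * (sigma + delta) ** 2   # exact value, for comparison
# ... and its sigma-derivative at sigma + delta (exact value: 2N(sigma+delta))
dF_extrap = np.mean(S2 * R * D)
print(F_extrap, F_exact, dF_extrap)
```

At the window bound of Eq. (42) the reweighting factor has unit mean but an already sizable dispersion, which is precisely why larger shifts degrade the statistics.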
B. Poissonian distribution
This case is a straightforward translation of the results in Sec. VI A. Since there is no new idea involved, let us just give the main results.
The connected propagator in real space is
$$G_{zy} = \frac{1}{\sigma}\,\overline{\frac{h_z}{|h_z|}\langle S_y\rangle}. \quad (43)$$
Note the small, but crucial, difference with Eq. (30): we correlate $\langle S_y\rangle$ with the sign of $h_z$. In Fourier space, Eq. (31) translates to
$$\chi_{\boldsymbol k} = \frac{L^D}{2\sigma}\,\overline{\langle h^{\mathrm P}_{-\boldsymbol k} m_{\boldsymbol k} + h^{\mathrm P}_{\boldsymbol k} m_{-\boldsymbol k}\rangle}, \quad (44)$$
where $m_{\boldsymbol k}$ was defined in Eq. (6) and
$$h^{\mathrm P}_{\boldsymbol k} = \frac{1}{L^D}\sum_x e^{\mathrm i\boldsymbol k\cdot\boldsymbol x}\,\frac{h_x}{|h_x|}. \quad (45)$$
Note, again, that we Fourier-transform the sign of the Poissonian random fields.
The reweighting factor is again the ratio of probability densities for the Poissonian fields:
$$R(\{h_x\},\sigma,\delta\sigma) = \left(\frac{\sigma}{\sigma+\delta\sigma}\right)^{N} e^{\left[\sigma^{-1} - (\sigma+\delta\sigma)^{-1}\right]\sum_x |h_x|}, \quad (46,47)$$
and the derivative operator follows from a Taylor expansion with respect to $\delta\sigma$:
$$R(\{h_x\},\sigma,\delta\sigma+\epsilon) = R(\{h_x\},\sigma,\delta\sigma)\left(1+\epsilon D + O(\epsilon^2)\right), \quad (48)$$
where
$$D(\{h_x\},\sigma,\delta\sigma) = \frac{1}{\sigma+\delta\sigma}\left[\frac{\sum_x |h_x|}{\sigma+\delta\sigma} - N\right]. \quad (49)$$
The final reweighting formulae can be cast in exactly the same form that we found for the Gaussian random fields:
$$\overline F\,\big|_{\sigma+\delta\sigma} = \overline{F R_{\sigma,\delta\sigma}}\,\big|_\sigma, \qquad D_\sigma\overline F\,\big|_{\sigma+\delta\sigma} = \overline{F R_{\sigma,\delta\sigma} D_{\sigma,\delta\sigma}}\,\big|_\sigma. \quad (50)$$
As for the maximum reasonable reweighting extrapolation, we also use an analogous criterion: the absolute value of the difference
$$\overline{\sum_x |h_x|}\,\Big|_{\sigma+\delta\sigma} - \overline{\sum_x |h_x|}\,\Big|_{\sigma} = N(\sigma+\delta\sigma) - N\sigma \quad (51)$$
should be no larger than the dispersion of $\sum_x |h_x|$ at $\sigma$, namely $\sqrt N\,\sigma$. The resulting bound is
$$|\delta\sigma| \le \frac{\sigma}{\sqrt N}. \quad (52)$$
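A brief sketch of the sign-transform estimator of Eqs. (44) and (45), assuming the two-sided exponential density $w_{\mathrm P} \propto \exp(-\sum_x|h_x|/\sigma)$ implied by Eqs. (46)-(49); the spin configuration below is a random placeholder, where a real computation would insert the ground state obtained from the maximum-flow algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

L, D = 8, 3
sigma = 1.76   # toy field strength
# Poissonian fields: random sign times an exponential of mean sigma
h = rng.choice([-1.0, 1.0], size=(L,) * D) * rng.exponential(sigma, size=(L,) * D)
S = rng.choice([-1.0, 1.0], size=(L,) * D)   # placeholder spins (not a ground state)

N = L ** D
# h^P_k of Eq. (45) and m_k of Eq. (6) via FFT, with the 1/L^D normalization;
# NumPy's FFT sign convention merely swaps k and -k, which the symmetrized
# estimator below does not feel
hP_k = np.fft.fftn(np.sign(h)) / N
m_k = np.fft.fftn(S) / N

# single-sample contribution to Eq. (44) at k_min = (2*pi/L, 0, 0)
kmin = (1, 0, 0)
kneg = (-1 % L, 0, 0)
term = (hP_k[kneg] * m_k[kmin] + hP_k[kmin] * m_k[kneg]).real
chi_kmin_sample = (N / (2.0 * sigma)) * term   # to be averaged over samples
print(chi_kmin_sample)
```

Only the two Fourier amplitudes entering `term` (at $\boldsymbol k = \boldsymbol 0$ and $\boldsymbol k_{\mathrm{min}}$) need to be stored per sample, as discussed at the end of Sec. VI A.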
C. Double Gaussian distribution
Our formalism for this distribution is slightly more complicated. Let us start by explaining how we obtain a random field distributed as prescribed in Eq. (2). For each $h_x$ we extract two independent random variables. One of them is discrete, $\eta_x = \pm 1$, each value with 50% probability. The other variable, $\varphi_x$, is Gaussian distributed with zero mean and unit variance. Then, we set [the width $\sigma$ is given in Eq. (2)]
$$h_x = h_R\eta_x + \sigma\varphi_x. \quad (53)$$
The combined probability density for our $2N$ variables is
$$w_{\mathrm{dG}}(\{\eta_x,\varphi_x\}) = \frac{e^{-\sum_x \varphi_x^2/2}}{2^N (2\pi)^{N/2}}. \quad (54)$$
In order to understand the origin of the additional complications for this distribution, let us add a source (i.e., $h_x \longrightarrow h_x + \epsilon\tilde h_x$), while we simultaneously modify the position of the peaks (i.e., $h_R \longrightarrow h_R + \delta h_R$) [106]. Under these circumstances, Eq. (25) reads
$$\overline{\langle S_y\rangle_{\{h_x+\epsilon\tilde h_x\}}}\,\Big|_{h_R+\delta h_R} = \sum_{\{\eta_x\}} \int \prod_x \mathrm d\varphi_x\, w_{\mathrm{dG}}(\{\eta_x,\varphi_x\})\, \langle S_y\rangle_{\{\hat h_x\}}, \quad (55)$$
$$\hat h_x = h_R\eta_x + \delta h_R\eta_x + \sigma\varphi_x + \epsilon\tilde h_x. \quad (56)$$
The sum in Eq. (55) extends over the $2^N$ possible values of the discrete variables $\eta_x$. The problem is now apparent from Eq. (56). For each site, we have a single integration variable, namely $\varphi_x$. We need to carry out a change of variables that brings Eq. (56) back to the form in Eq. (53):
$$\varphi'_x = \varphi_x + \frac{\delta h_R\eta_x + \epsilon\tilde h_x}{\sigma}. \quad (57)$$
In other words, if $\delta h_R \neq 0$, there is no way of distinguishing $\delta h_R\eta_x$ from the source term $\epsilon\tilde h_x$. With this caveat in mind, and dropping the prime in the dummy integration variables, Eq. (55) now reads
$$\overline{\langle S_y\rangle_{\{h_x+\epsilon\tilde h_x\}}}\,\Big|_{h_R+\delta h_R} = \sum_{\{\eta_x\}}\int\prod_x \mathrm d\varphi_x\, w_{\mathrm{dG}}\Big(\Big\{\eta_x,\varphi_x - \frac{\delta h_R\eta_x + \epsilon\tilde h_x}{\sigma}\Big\}\Big)\, \langle S_y\rangle_{\{\hat h_x\}}, \quad (58)$$
$$\hat h_x = h_R\eta_x + \sigma\varphi_x. \quad (59)$$
Now,
$$w_{\mathrm{dG}}\Big(\Big\{\eta_x,\varphi_x - \frac{\delta h_R\eta_x+\epsilon\tilde h_x}{\sigma}\Big\}\Big) = w_{\mathrm{dG}}(\{\eta_x,\varphi_x\})\, R(\{\eta_x,\varphi_x\},\delta h_R)\, \exp\Big[\frac{\epsilon}{\sigma}\sum_x \tilde h_x\Big(\varphi_x - \frac{\delta h_R\eta_x}{\sigma}\Big)\Big]\, \exp\Big[-\frac{\epsilon^2}{2\sigma^2}\sum_x\tilde h_x^2\Big], \quad (60)$$
where the reweighting factor appropriate for our implementation of the double Gaussian distribution is
$$R(\{\eta_x,\varphi_x\},\delta h_R) = \exp\Big[\frac{\delta h_R}{\sigma}\sum_x \eta_x\varphi_x\Big]\, \exp\Big[-\frac{\delta h_R^2 N}{2\sigma^2}\Big]. \quad (61)$$
Taylor-expanding with respect to $\epsilon$ in Eq. (60) we finally get the connected propagator
$$G_{zy}\big|_{h_R+\delta h_R} = \frac{1}{\sigma}\, \overline{R_{\delta h_R}\Big(\varphi_z - \frac{\delta h_R\eta_z}{\sigma}\Big)\langle S_y\rangle}\,\Big|_{h_R}. \quad (62)$$
In particular, the correction term $-\delta h_R\eta_z$ was absent for the Poissonian and Gaussian distributions. Similarly, one can get the connected propagator in Fourier space,
$$\chi_{\boldsymbol k}\big|_{h_R+\delta h_R} = \frac{L^D}{2\sigma}\, \overline{R_{\delta h_R}\Big\langle \Big(\hat\varphi_{-\boldsymbol k} - \frac{\delta h_R\hat\eta_{-\boldsymbol k}}{\sigma}\Big)m_{\boldsymbol k} + \Big(\hat\varphi_{\boldsymbol k} - \frac{\delta h_R\hat\eta_{\boldsymbol k}}{\sigma}\Big)m_{-\boldsymbol k}\Big\rangle}\,\Bigg|_{h_R}, \quad (63)$$
where $m_{\boldsymbol k}$ was defined in Eq. (6) and
$$\hat\varphi_{\boldsymbol k} = \frac{1}{L^D}\sum_x e^{\mathrm i\boldsymbol k\cdot\boldsymbol x}\varphi_x, \qquad \hat\eta_{\boldsymbol k} = \frac{1}{L^D}\sum_x e^{\mathrm i\boldsymbol k\cdot\boldsymbol x}\eta_x. \quad (64)$$
Instead, for observables that do not require a source term (e.g., $E_J$, disconnected propagators, etc.), the reweighting formulae take the standard form
$$\overline F\,\big|_{h_R+\delta h_R} = \overline{F R_{\delta h_R}}\,\big|_{h_R}, \qquad D_{h_R}\overline F\,\big|_{h_R+\delta h_R} = \overline{F R_{\delta h_R} D_{\delta h_R}}\,\big|_{h_R}. \quad (65)$$
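The sampling recipe of Eq. (53) and the moments of the reweighting factor can be checked numerically. The following sketch (toy parameters) verifies that the reweighting factor of Eq. (61) has unit mean and that, exactly at the bound of Eq. (68), its second moment equals 2 (so that the dispersion equals the mean value):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 64         # toy number of sites
sigma = 1.0    # width of each Gaussian peak, Eq. (2)
hR = 1.5       # peak position
# maximum sensible shift of the peak position, Eq. (68)
dhR = sigma * np.sqrt(np.log(2.0) / N)
n_samples = 100_000

eta = rng.choice([-1.0, 1.0], size=(n_samples, N))   # peak selector of Eq. (53)
phi = rng.normal(0.0, 1.0, size=(n_samples, N))      # unit-variance Gaussian
h = hR * eta + sigma * phi    # double-Gaussian fields (fed to the ground-state solver)

# reweighting factor for h_R -> h_R + dhR, Eq. (61)
R = np.exp((dhR / sigma) * (eta * phi).sum(axis=1)
           - 0.5 * N * dhR ** 2 / sigma ** 2)

# moment identity, Eq. (67): mean(R) = 1 and mean(R^2) = exp(N dhR^2 / sigma^2) = 2
print(np.mean(R), np.mean(R ** 2))
```

Note that only the scalar $\sum_x \eta_x\varphi_x$ has to be stored per sample to reconstruct $R$ at any shift $\delta h_R$.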
Figure 1. (Color online) (a) For several system sizes, we show $\xi/L$ as a function of the strength $\sigma$ of the Poissonian random field. Lines join data obtained from the reweighting extrapolation (discontinuous lines of the same color come from independent simulations). In the large-$L$ limit, $\xi/L$ is $L$-independent at the critical point $\sigma^{(\mathrm c)}$. In the quotients method, we consider the $\xi/L$ curves for a pair of lattices $(L, 2L)$ and seek the $\sigma$ where they cross. This crossing is employed for computing effective, $L$-dependent critical exponents with Eq. (69). (b) Illustration of statistical errors in the universal ratio $\xi/L$ for the pairs of system sizes shown in panel (a).

The derivative operator appearing in Eq. (65) is, in this case,
$$D(\{\eta_x,\varphi_x\},\delta h_R) = \frac{1}{\sigma}\sum_x\Big(\eta_x\varphi_x - \frac{\delta h_R}{\sigma}\Big). \quad (66)$$
Finally, we need to assess the maximum sensible size for $\delta h_R$. The simplest way to proceed is to compute the moments of the reweighting factor,
$$\overline{R^k_{\delta h_R}} = \exp\Big[N\,\frac{k^2-k}{2}\,\frac{\delta h_R^2}{\sigma^2}\Big]. \quad (67)$$
If we now demand the dispersion (i.e., the square root of the variance) to be at most as large as the mean value, we get
$$|\delta h_R| \le \sigma\sqrt{\frac{\log 2}{N}}. \quad (68)$$

Figure 2. (Color online) Universal ratio $\xi/L$ of an $L = 64$ Poissonian RFIM for three different sets of simulations, corresponding to different simulation values $\sigma^{(\mathrm s)}$ and different sets of random realizations. The inset is a mere enlargement of the intermediate regime of $\sigma$ values.

VII. QUOTIENTS METHOD
To extract the values of critical points, critical exponents, and dimensionless universal quantities, we employed the quotients method, also known as phenomenological renormalization [38, 82, 107]. This method allows for a particularly transparent study of corrections to scaling, which up to now have been considered the Achilles' heel in the study of the RFIM. In this method, one compares observables computed on a pair of lattices of sizes $(L, 2L)$. We start by imposing scale invariance, seeking the $L$-dependent critical point: the value of $\sigma$ ($h_R$ for the dG case) such that $\xi_{2L}/\xi_L = 2$ [i.e., the crossing point of the curves $\xi_L/L$, see Fig. 1(a)]. Now, for dimensionful quantities $O$, scaling in the thermodynamic limit as $\xi^{x_O/\nu}$, we consider the quotient $Q_O = O_{2L}/O_L$ at the crossing. For dimensionless magnitudes $g$, we focus on $g_{2L}$. In either case, one has
$$Q_O^{(\mathrm{cross})} = 2^{x_O/\nu} + O(L^{-\omega}), \qquad g_{2L}^{(\mathrm{cross})} = g^* + O(L^{-\omega}), \quad (69)$$
where $x_O/\nu$, $g^*$, and the scaling-corrections exponent $\omega$ are universal.
Examples of dimensionless quantities are the connected and disconnected correlation lengths in units of the system size, i.e., $\xi/L$ and $\xi^{(\mathrm{dis})}/L$, and the Binder ratio $U_4$. Instances of dimensionful quantities are the derivatives of $\xi$ and $\xi^{(\mathrm{dis})}$ with respect to $\sigma$ ($x_{\partial\xi} = 1+\nu$), the connected and disconnected susceptibilities $\chi$ and $\chi^{(\mathrm{dis})}$ [$x_\chi = \nu(2-\eta)$, $x_{\chi^{(\mathrm{dis})}} = \nu(4-\bar\eta)$], and the ratio $U_{22}$ [$x_{U_{22}} = \nu(2\eta-\bar\eta)$] (see also Sec. IV), which, as already noted above, will serve as an alternative platform for investigating the validity of the so-called two-exponent scaling scenario [8, 9].
Let us point out here that an extension of the quotients method using a sequence of three lattice sizes $(L, 2L, 4L)$ will be presented below in Sec. VIII.

Figure 3. (Color online) Infinite-size extrapolation of the effective critical exponent $\nu$.
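The two-lattice step of the quotients method can be illustrated on synthetic data. The sketch below uses a toy scaling function with invented values of $\sigma^{(\mathrm c)}$ and $\nu$ and no scaling corrections (real data come, of course, from the reweighted simulations): it locates the $(L, 2L)$ crossing by bisection and recovers $\nu$ from the quotient of the $\sigma$-derivatives of $\xi$, for which $x_O = 1 + \nu$:

```python
import numpy as np

# toy scaling form: xi_L(sigma) = L * g((sigma - sigma_c) * L**(1/nu)),
# with a smooth decreasing g and no corrections to scaling
nu_true, sigma_c = 1.38, 1.76

def xi(L, sigma):
    u = (sigma - sigma_c) * L ** (1.0 / nu_true)
    return L * (0.5 - 0.2 * u)

def crossing(L, a=1.0, b=2.5, tol=1e-12):
    """Bisection for the sigma where xi_{2L}/(2L) = xi_L/L."""
    f = lambda s: xi(2 * L, s) / (2 * L) - xi(L, s) / L
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

L = 16
s_cross = crossing(L)

# quotient of sigma-derivatives at the crossing, Eq. (69): Q = 2**(1 + 1/nu)
eps = 1e-6
dxi = lambda LL: (xi(LL, s_cross + eps) - xi(LL, s_cross - eps)) / (2 * eps)
Q = dxi(2 * L) / dxi(L)
nu_eff = 1.0 / (np.log2(Q) - 1.0)
print(s_cross, nu_eff)   # recovers sigma_c and nu_true in this corrections-free toy
```

With scaling corrections switched on, `s_cross` and `nu_eff` would become $L$-dependent effective quantities, to be extrapolated as in Eq. (69).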
This generalization is necessary for the scaling study of the bond energy of the RFIM, which is governed by a non-diverging background term.

VIII. RESULTS AND DISCUSSION
Let us start with a few illustrations of the core of the applied scaling method, that is, the crossing of the universal ratio $\xi/L$, together with the error evolution of the presented numerical scheme. As already mentioned above, we varied $\sigma$ for the Poissonian and Gaussian distributions, see panel (a) in Fig. 1, and $h_R$ for the double Gaussian distribution. Panel (b) in Fig. 1 shows the statistical errors of the universal ratio corresponding to the pairs of system sizes shown in panel (a) of the same figure. As expected, the error is minimal exactly at the simulation point, denoted hereafter as $\sigma^{(\mathrm s)}$ or $h_R^{(\mathrm s)}$, respectively, and increases further away from it. Furthermore, a comparative illustration of the errors induced by the reweighting method and by the disorder-averaging process is shown in Fig. 2, again for the universal ratio $\xi/L$ of an $L = 64$ Poissonian RFIM and for three sets of simulations, as outlined in the figure. Clearly, this latter accuracy test speaks in favor of the proposed scheme.
The full application of Eq. (69) to our four random-field distributions has been summarized in Table II of Ref. [79], where all the estimates of critical points, universal ratios, and critical exponents are given, together with the corrections-to-scaling exponent $\omega$. The estimation of $\omega$ has been performed by means of a joint fit, third-order polynomial in $L^{-\omega}$, for the dimensionless quantities $\xi/L$, $\xi^{(\mathrm{dis})}/L$, and $U_4$, using data for $L \ge 24$. We should point out that joint fits share the value of some fitting parameters, such as the $L \to \infty$ extrapolation (which is the same for all random-field distributions) or the corrections-to-scaling exponent $\omega$ (which is common to all magnitudes). Here, we provide some complementary illustrations, showing the infinite-size extrapolation of the effective exponents of the correlation length $\nu$, the anomalous dimension $\eta$, and the two-exponent difference $2\eta - \bar\eta$, the latter serving as an independent test of the two-exponent scaling scenario in the theory of the random-field problem [8].

Figure 4. (Color online) Infinite-size extrapolation of the effective critical exponent $\eta$. Four solid lines are shown, corresponding to the four random-field distributions as in Fig. 3, although they are not easily discerned due to the very close values of their linear coefficients.

Figures 3 and 4 illustrate the infinite-size extrapolation of the effective exponents $\nu$ and $\eta$, respectively, for all four types of distributions studied. The solid lines are joint polynomial fits of first order in $L^{-\omega}$, including data points for $L \ge 32$ and extrapolating to $L^{-\omega} = 0$, as shown by the filled circle in each figure. We remind the reader that for the effective exponent $\nu$ we have two sets of data for each of the four distributions, coming from the connected and disconnected correlation lengths [79]. Let us comment here that the error bars for $\nu$ are larger than those for $\eta$ because we compute derivatives as connected correlations [109] (see also the discussion in Sec. VI); for our final estimates of $\nu$ and $\eta$ we refer the reader to Ref. [79].
Subsequently, in Fig. 5 we perform an extrapolation of the effective exponent difference $2\eta - \bar\eta$, corresponding to the dimensionful quantity $U_{22}$, Eq. (13), in order to discuss the two-exponent scaling scenario. The solid lines in this figure illustrate a joint polynomial fit, first order in $L^{-\omega}$, including data points for $L \ge 16$ and giving an estimate of $2\eta - \bar\eta$ at $L^{-\omega} = 0$ [79].

Figure 5. (Color online) Infinite-size extrapolation of the effective exponent $2\eta - \bar\eta$.

However, we should note here that if one fixes the infinite-size point $(2\eta - \bar\eta)|_{L=\infty}$ to zero, the fit becomes of better quality in terms of the figure of merit $\chi^2/\mathrm{DOF}$ [79]. Unfortunately, in the present $D = 3$ case, we cannot draw a definite conclusion on the validity of the two-exponent scaling scenario. Additional work is under way to tackle this problem at higher dimensions ($D > 3$) [80].
We turn our discussion now to the most controversial issue: the specific heat of the RFIM. The specific heat of the RFIM can be experimentally measured [16] and is, for sure, of great theoretical importance. Yet, it is well known that it is one of the most intricate thermodynamic quantities to deal with in numerical simulations, even when it comes to pure systems. For the RFIM, Monte Carlo methods at $T > 0$ have been used to estimate the critical exponent $\alpha$, but were restricted to rather small system sizes and have also revealed many serious problems, i.e., severe violations of self-averaging [40, 110]. A better picture emerged throughout the years from $T = 0$ computations, proposing estimates of $\alpha \approx 0$. However, even when using the same numerical techniques but different scaling approaches, some inconsistencies have been recorded in the literature. The most prominent was that of Ref. [61], where a strongly negative value of the critical exponent $\alpha$ was estimated. On the other hand, experiments on random-field and diluted antiferromagnetic systems suggest a clear logarithmic divergence of the specific heat [16].
The specific heat can be estimated using ground-state calculations by applying the thermodynamic relations employed by Hartmann and Young [61] and by Middleton and Fisher [64]. The method relies on studying the singularities in the bond-energy density $E_J$ [111]. This bond-energy density is the first derivative, $E_J = \partial E/\partial J$, of the ground-state energy with respect to the coupling; since the ground-state energy depends on $J$ and on the random-field strength, say $\sigma$, only through their ratio, it gives equivalent access to the $\sigma$-derivative of the energy [61, 64]. The derivative of the sample-averaged quantity $E_J$ with respect to $\sigma$ then gives the second derivative of the total energy with respect to $\sigma$, and thus the sample-averaged specific heat $C$. The singularities in $C$ can also be studied by computing the singular part of $E_J$, as $E_J$ is just the integral of $C$ with respect to $\sigma$.

Figure 6. Semi-logarithmic illustration of the statistical errors appearing in the three lattice-size variant of the quotients method. We show results for the Poissonian RFIM and $L = 12$, $L = 24$, and $L = 48$. The simulations were performed at $\sigma^{(\mathrm s)} = 1.783$ for the pair $(12, 24)$ and at $\sigma^{(\mathrm s)} = 1.7652$ for the pair $(24, 48)$.

The general finite-size scaling form assumed is that the singular part of the specific heat behaves as
$$C_{\mathrm s} \sim L^{\alpha/\nu}\, \tilde C\big[(\sigma - \sigma^{(\mathrm c)}) L^{1/\nu}\big]. \quad (70)$$
Thus, one may estimate $\alpha$ by studying the behavior of $E_J$ at $\sigma = \sigma^{(\mathrm c)}$ [64].
The computation from the behavior of $E_J$ is based on integrating the above scaling equation up to $\sigma^{(\mathrm c)}$, which gives a dependence of the form
$$E_J(L, \sigma = \sigma^{(\mathrm c)}) = A + B L^{(\alpha-1)/\nu}, \quad (71)$$
with $A$ and $B$ non-universal constants. Since $\alpha - 1 < 0$, the power-law term is subleading to the constant background $A$, forcing us to modify the standard phenomenological renormalization. We get rid of $A$ by considering three lattice sizes in the sequence $(L_1, L_2, L_3) = (L, 2L, 4L)$. We generalize Eq. (69) by taking now the quotient of the differences, $Q_O = (O_{4L} - O_{2L})/(O_{2L} - O_L)$, at the crossings of the pairs $(L, 2L)$ and $(2L, 4L)$, respectively. Applying this formula to the bond energy we obtain
$$Q^{(\mathrm{cross})}_{E_J} = 2^{(\alpha-1)/\nu} + O(L^{-\omega}). \quad (72)$$
Of course, at variance with the standard two lattice-size phenomenological renormalization, statistical errors are significantly amplified by the reweighting extrapolation, as can be clearly seen in Fig. 6. Hence, we have preferred to carry out an independent set of simulations for parameters corresponding to the crossing points identified in the main analysis of the quotients method.

Table II. Effective critical exponent ratio $(\alpha-1)/\nu$ using the three lattice-size variant $(L_1, L_2, L_3) = (L, 2L, 4L)$, see Eq. (72), of the original quotients method.

Distribution        $(L_1, L_2, L_3)$     $(\alpha-1)/\nu$
G                   (12, 24, 48)          -0.758(11)
                    (16, 32, 64)          -0.793(17)
                    (24, 48, 96)          -0.860(30)
                    (32, 64, 128)
dG ($\sigma = 1$)   (16, 32, 64)           0.954(66)
                    (24, 48, 96)          -0.036(23)
                    (32, 64, 128)
dG ($\sigma = 2$)   (12, 24, 48)          -0.735(16)
                    (16, 32, 64)          -0.766(16)
                    (24, 48, 96)          -0.882(60)
                    (32, 64, 128)
P                   (12, 24, 48)          -1.120(6)
                    (16, 32, 64)          -1.089(10)
                    (24, 48, 96)          -1.071(42)
                    (32, 64, 128)

Our results for the effective exponent ratio $(\alpha-1)/\nu$ are given in Table II, and their extrapolation is shown in Fig. 7. We have excluded from the fitting procedure the data of the double Gaussian distribution with $\sigma = 1$, as their inclusion destabilized the fit. The solid lines in Fig. 7 show a joint polynomial fit, second order in $L^{-\omega}$. The extrapolated value for the exponent ratio $(\alpha-1)/\nu$ is obtained at $L^{-\omega} = 0$; combined with our estimate for $\nu$, it yields our estimate for $\alpha$. Ref. [64], using $\sigma^{(\mathrm c)} = 2.27$, proposed a negative value of $\alpha$; note that a shift of $\sigma^{(\mathrm c)}$ by a small amount $\delta\sigma^{(\mathrm c)}$ results, on average, in a change of the order of $\delta\alpha \approx 0.04$ in the value of $\alpha$ [112].

Figure 7. (Color online) Infinite-size extrapolation of the effective exponent ratio $(\alpha-1)/\nu$.

Following the discussion above, our numerical studies of disordered systems are carried out near their critical points using finite samples; each sample is a particular random realization of the quenched disorder. This makes it crucial to study the dependence of some observables on the disorder, the so-called self-averaging properties of the system. The study of these properties in disordered systems has generated a large number of works over the past years [40, 113-119], still mostly focused on the bond- and site-diluted versions of the Ising model in $D = 2$ and 3.
A typical measure of the self-averageness of a random system via a physical quantity $A$ is given by
$$R_A = \frac{\overline{\langle A\rangle^2} - \overline{\langle A\rangle}^{\,2}}{\overline{\langle A\rangle}^{\,2}}.$$
Aharony and Harris [113] predicted that the size evolution of $R_A(L)$ for a random system depends on whether the system is controlled by the pure or by the random fixed point: $R_A(L) \propto L^{(\alpha/\nu)_{\mathrm{pure}}}$ for the pure fixed point, and $R_A(L) \to \mathrm{const} \neq 0$ for the random fixed point, as $L \to \infty$. In the case of the random fixed point, the system shows no self-averaging; on the other hand, the system exhibits weak self-averaging in the case of the pure fixed point. Clearly enough, the system is self-averaging if $R_A \to 0$ as $L \to \infty$.
The RFIM is a nice candidate for checking the analytical predictions of Aharony and Harris on self-averaging [113] by monitoring the infinite-size extrapolation of $R_A$. In particular, we investigated here the behavior of the ratio for two observables, the connected susceptibility, $R_\chi$, and the bond energy, $R_{E_J}$. In Fig. 8 we plot the effective values of the ratios $R_\chi$ and $R_{E_J}$ in the main panel and inset, respectively, estimated at the crossing points of $\xi/L$ as usual, for all four distributions studied, as indicated by the different colors and symbols. In both cases, the solid lines show a joint, second-order in $L^{-\omega}$, polynomial fit, using as a lower cutoff the lattice size $L_{\mathrm{min}} = 16$. For the case of $R_\chi$, the extrapolated values, shown by black stars, depend on the field distribution and are clearly non-zero for all four cases (Gaussian, double Gaussian with $\sigma = 1$, double Gaussian with $\sigma = 2$, and Poissonian), indicating violation of self-averaging. The above results verify the prediction of Ref. [113], according to which the susceptibility at the critical point is not self-averaging for models where the disorder is relevant, relevant meaning that the critical exponent of the specific heat of the corresponding pure model is positive ($\alpha_{\mathrm{pure}} > 0$) [120]. As for the self-averaging ratio of the bond energy, shown in the inset of Fig. 8, it goes to zero with increasing system size, indicating self-averaging in the thermodynamic limit.

Figure 8. (Color online) Infinite-size extrapolation of the effective ratios $R_\chi$ (main panel) and $R_{E_J}$ (inset).

Finally, we present some computational aspects of the implemented push-relabel algorithm and its performance in the study of the RFIM. Although its generic implementation has a polynomial time bound, its actual performance depends on the order in which operations are performed and on which heuristics are used to maintain the auxiliary fields of the algorithm. Even within this polynomial time bound, there is a power-law critical slowing down of the push-relabel algorithm at the zero-temperature transition [53, 98]. This critical slowing down is certainly reminiscent of the slowing down seen in local algorithms of statistical mechanics at finite temperature, such as the Metropolis algorithm, and even in cluster algorithms. In fact, Ogielski [53] was the first to note that the push-relabel algorithm takes more time to find the ground state near the transition from the ferromagnetic to the paramagnetic phase in three dimensions.
A direct way to measure the dynamics of the algorithm is to examine the dependence of the running time, measured by the number of push-relabel operations, on the system size $L$. Such an analysis has already been performed in Ref. [121] for the Gaussian $D = 3$ RFIM and a FIFO queue implementation, as in the current paper, extracting a dynamic exponent $z$ by using $\sigma^{(\mathrm c)} = 2.27$ and $\nu = 1.37$ in the scaling ansatz. Here, we present a complementary analysis, based on the numerical data for the Gaussian RFIM, using our scaling approach within the quotients method and without assuming prior knowledge of the critical field and of the correlation-length exponent. Our fitting attempt is shown in Fig. 9, where the solid line is a second-order in $L^{-\omega}$ polynomial fit including only sizes above a lower cutoff; our estimate for the dynamic exponent is $z = 0.427(2)$.

Figure 9. Infinite-size extrapolation of the effective exponent $z$ of the push-relabel algorithm; the fit yields $z = 0.427(2)$.

IX. SUMMARY AND OUTLOOK
To summarize, we have presented in the current pa-per a new approach to the study of the random-fieldIsing model, using as a platform the three-dimensional version of the model. We combined several efficient nu-merical methods, from zero-temperature optimization al-gorithms to generalized fluctuation-dissipation formulasand reweighting extrapolations that allowed the compu-tation of response functions, as well as advanced finite-size scaling techniques that offered us the possibility totackle some of the hardest open problems in the random-field literature, like the existence and role of scaling cor-rections and the universality principle of the model. Wehope that this contribution gives a clear overview of allthe technical details of our implementation, paving theway to even more sophisticated studies in the field of dis-ordered systems. Currently, using the prescription out-lined above, we are dealing with the random-field prob-lem at higher dimensions and we expect to provide clear-cut results regarding the validity of the two-exponentscaling scenario, one of the building blocks in the scalingtheory of the random-field Ising model.
ACKNOWLEDGMENTS
We are grateful to D. Yllanes and, especially, to L. A. Fernández for substantial help during several parts of this work. We also thank M. Picco and N. Sourlas for reading the manuscript. We were partly supported by MINECO, Spain, through research contract No. FIS2012-35719-C02-01. Significant allocations of computing time were obtained in the clusters
Terminus and
Memento (BIFI).N.G. Fytas acknowledges financial support from a RoyalSociety Research Grant under No RG140201 and from aResearch Collaboration Fellowship Scheme of CoventryUniversity. [1] Y. Imry and S.-k. Ma, Phys. Rev. Lett. , 1399 (1975).[2] A. Aharony, Y. Imry, and S.-k. Ma, Phys. Rev. Lett. , 1364 (1976).[3] A. P. Young, Journal of Physics C: Solid State Physics , L257 (1977).[4] S. Fishman and A. Aharony, Journal of Physics C: SolidState Physics , L729 (1979).[5] G. Parisi, Phys. Rev. Lett. , 1754 (1979).[6] J. L. Cardy, Phys. Rev. B , 505 (1984).[7] J. Z. Imbrie, Phys. Rev. Lett. , 1747 (1984).[8] M. A. Schwartz and A. Soffer, Phys. Rev. Lett. , 2499(1985).[9] M. Gofman, J. Adler, A. Aharony, A. B. Harris, andM. Schwartz, Phys. Rev. Lett. , 1569 (1993).[10] J. Esser, U. Nowak, and K. D. Usadel, Phys. Rev. B , 5866 (1997).[11] W. Barber and D. Belanger, Journal of Magnetism andMagnetic Materials , 545 (2001), pro-ceedings of the International Conference on Magnetism(ICM 2000).[12] D. Belanger and A. Young, Journal of Magnetism andMagnetic Materials , 272 (1991). [13] H. Rieger, in Annual Reviews of Computational PhysicsII, , edited by D. Stauffer (World Scientific, Singapore,1995) http://arxiv.org/abs/cond-mat/9411017.[14] T. Nattermann, in
Spin glasses and random fields ,edited by A. P. Young (World Scientific, Singapore,1998).[15] D. P. Belanger, in
Spin Glasses and Random Fields ,edited by A. P. Young (World Scientific, Singapore,1998).[16] D. P. Belanger, A. R. King, V. Jaccarino, and J. L.Cardy, Phys. Rev. B , 2522 (1983).[17] R. L. C. Vink, K. Binder, and H. L¨owen, Phys. Rev.Lett. , 230603 (2006).[18] M. A. Annunziata and A. Pelissetto, Phys. Rev. E ,041804 (2012).[19] E. Dagotto, Science , 277202 (2001).[21] J. Cardy and J. L. Jacobsen, Phys. Rev. Lett. , 4063(1997). [22] L. A. Fern´andez, A. Gordillo-Guerrero, V. Mart´ın-Mayor, and J. J. Ruiz-Lorenzo, Phys. Rev. Lett. ,057201 (2008).[23] L. A. Fernandez, A. Gordillo-Guerrero, V. Martin-Mayor, and J. J. Ruiz-Lorenzo, Phys. Rev. B ,184428 (2012), arXiv:1205.0247.[24] J. P. Sethna, K. Dahmen, S. Kartha, J. A. Krumhansl,B. W. Roberts, and J. D. Shore, Phys. Rev. Lett. ,3347 (1993).[25] O. Perkovi´c, K. A. Dahmen, and J. P. Sethna, Phys.Rev. B , 6106 (1999).[26] D. M. Silevitch, G. Aeppli, and T. F.Rosenbaum, Proceedings of the NationalAcademy of Sciences , 1543 (1984).[28] A. J. Bray and M. A. Moore, Journal of Physics C: SolidState Physics , L927 (1985).[29] D. S. Fisher, Phys. Rev. Lett. , 416 (1986).[30] A. N. Berker and S. R. McKay, Phys. Rev. B , 4712(1986).[31] J. Bricmont and A. Kupiainen, Phys. Rev. Lett. ,1829 (1987).[32] M. E. J. Newman, B. W. Roberts, G. T. Barkema, andJ. P. Sethna, Phys. Rev. B , 16533 (1993).[33] J. Machta, M. E. J. Newman, and L. B. Chayes, Phys.Rev. E , 8782 (2000).[34] M. E. J. Newman and G. T. Barkema, Phys. Rev. E ,393 (1996).[35] M. Itakura, Phys. Rev. B , 012415 (2001).[36] N. G. Fytas and A. Malakis, The European PhysicalJournal B , 111 (2008).[37] A. Aharony, Phys. Rev. B , 3318 (1978).[38] D. J. Amit and V. Martin-Mayor, Field Theory, theRenormalization Group and Critical Phenomena , 3rded. (World Scientific, Singapore, 2005).[39] G. Parisi,
Field Theory, Disorder and Simulations (World Scientific, 1994).
[40] G. Parisi and N. Sourlas, Phys. Rev. Lett., 257204 (2002).
[41] M. Tissier and G. Tarjus, Phys. Rev. Lett., 041601 (2011).
[42] Z. Slanič, D. P. Belanger, and J. A. Fernandez-Baca, Phys. Rev. Lett., 426 (1999).
[43] F. Ye, M. Matsuda, S. Katano, H. Yoshizawa, D. Belanger, E. Seppälä, J. Fernandez-Baca, and M. Alava, Journal of Magnetism and Magnetic Materials, 1298 (2004), proceedings of the International Conference on Magnetism (ICM 2003).
[44] Z. Slanič and D. Belanger, Journal of Magnetism and Magnetic Materials, 65 (1998).
[45] V. Martín-Mayor, A. Pelissetto, and E. Vicari, Phys. Rev. E, 026112 (2002).
[46] H. Rieger, J. Phys. A, L615 (1993).
[47] U. Nowak, K. Usadel, and J. Esser, Physica A: Statistical Mechanics and its Applications, 1 (1998).
[48] L. A. Fernandez, V. Martin-Mayor, and D. Yllanes, Phys. Rev. B, 100408(R) (2011), arXiv:1106.1555.
[49] N. G. Fytas, A. Malakis, and K. Eftaxias, Journal of Statistical Mechanics: Theory and Experiment, P03015 (2008).
[50] R. L. C. Vink, T. Fischer, and K. Binder, Phys. Rev. E, 051134 (2010).
[51] B. Ahrens, J. Xiao, A. K. Hartmann, and H. G. Katzgraber, Phys. Rev. B, 174408 (2013).
[52] M. Picco and N. Sourlas, EPL (Europhysics Letters), 37001 (2015).
[53] A. T. Ogielski, Phys. Rev. Lett., 1251 (1986).
[54] J.-C. A. d'Auriac, Dynamique sur les structures fractales et diagramme de phase du modèle d'Ising sous champ aléatoire, Ph.D. thesis, Centre de Recherches sur les Très Basses Températures, Grenoble, France (1986).
[55] N. Sourlas, Computer Physics Communications, 183 (1999), proceedings of the Europhysics Conference on Computational Physics (CCP), 141 (1995).
[57] J.-C. A. d'Auriac and N. Sourlas, EPL (Europhysics Letters), 473 (1997).
[58] M. R. Swift, A. J. Bray, A. Maritan, M. Cieplak, and J. R. Banavar, EPL (Europhysics Letters), 273 (1997).
[59] S. Bastea and P. M. Duxbury, Phys. Rev. E, 4261 (1998).
[60] A. K. Hartmann and U. Nowak, Eur. Phys. J. B, 105 (1999).
[61] A. K. Hartmann and A. P. Young, Phys. Rev. B, 214419 (2001).
[62] P. M. Duxbury and J. H. Meinke, Phys. Rev. E, 036112 (2001).
[63] A. A. Middleton, Phys. Rev. Lett., 017202 (2001).
[64] A. A. Middleton and D. S. Fisher, Phys. Rev. B, 134411 (2002).
[65] I. Dukovski and J. Machta, Phys. Rev. B, 014413 (2003).
[66] Y. Wu and J. Machta, Phys. Rev. Lett., 137208 (2005).
[67] S. Galam and J. L. Birman, Phys. Rev. B, 5322 (1983).
[68] V. K. Saxena, Phys. Rev. B, 4034 (1984).
[69] A. Houghton, A. Khurana, and F. J. Seco, Phys. Rev. Lett., 856 (1985).
[70] D. C. Mattis, Phys. Rev. Lett., 3009 (1985).
[71] M. Kaufman, P. E. Klunzinger, and A. Khurana, Phys. Rev. B, 4766 (1986).
[72] R. M. Sebastianes and V. K. Saxena, Phys. Rev. B, 2058 (1987).
[73] A. S. de Arruda, W. Figueiredo, R. M. Sebastianes, and V. K. Saxena, Phys. Rev. B, 4409 (1989).
[74] L. Hernandez and H. Ceva, Physica A: Statistical Mechanics and its Applications, 2793 (2008).
[75] N. Crokidakis and F. D. Nobre, Journal of Physics: Condensed Matter, 145211 (2008).
[76] I. Hadjiagapiou, Physica A: Statistical Mechanics and its Applications, 2229 (2011).
[77] U. Akıncı, Y. Yüksel, and H. Polat, Phys. Rev. E, 061103 (2011).
[78] M. Picco and N. Sourlas, Journal of Statistical Mechanics: Theory and Experiment, P03019 (2014).
[79] N. G. Fytas and V. Martín-Mayor, Phys. Rev. Lett., 227201 (2013).
[80] N. G. Fytas, V. Martín-Mayor, M. Picco, and N. Sourlas, "Phase transitions in disordered systems: The case of the random-field Ising model in four dimensions," (2015), manuscript in preparation.
[81] C. Amoruso, E. Marinari, O. C. Martin, and A. Pagnani, Phys. Rev. Lett., 087201 (2003).
[82] H. G. Ballesteros, L. A. Fernandez, V. Martin-Mayor, and A. Muñoz Sudupe, Phys. Lett. B, 207 (1996), arXiv:hep-lat/9511003.
[83] H. Rieger, Phys. Rev. B, 5659 (1995).
[84] A. K. Hartmann and H. Rieger, Optimization Algorithms in Physics, 1st ed. (Wiley-VCH, Berlin, 2004).
[85] A. K. Hartmann and M. Weigt, Phase Transitions in Combinatorial Optimization Problems, 1st ed. (Wiley-VCH, Berlin, 2005).
[86] A. J. Bray and M. A. Moore, Phys. Rev. B, 631 (1985).
[87] D. S. Fisher and D. A. Huse, Phys. Rev. Lett., 1601 (1986).
[88] M. J. Alava, P. M. Duxbury, C. F. Moukarzel, and H. Rieger, in Phase Transitions and Critical Phenomena, 1st ed., edited by C. Domb and J. L. Lebowitz (Academic Press, San Diego, 2001).
[89] E. T. Seppälä and M. J. Alava, Phys. Rev. E, 066109 (2001).
[90] M. Zumsande, M. J. Alava, and A. K. Hartmann, Journal of Statistical Mechanics: Theory and Experiment, P02012 (2008).
[91] G. P. Shrivastav, S. Krishnamoorthy, V. Banerjee, and S. Puri, EPL (Europhysics Letters), 36003 (2011).
[92] B. Ahrens and A. K. Hartmann, Phys. Rev. B, 014205 (2011).
[93] J. D. Stevenson and M. Weigel, EPL (Europhysics Letters), 40001 (2011).
[94] J. C. Angles d'Auriac, M. Preissmann, and R. Rammal, J. Physique Lett., 173 (1985).
[95] T. H. Cormen, C. E. Leiserson, and R. L. Rivest, Introduction to Algorithms, 1st ed. (MIT Press, Cambridge, 1990).
[96] C. H. Papadimitriou, Computational Complexity, 1st ed. (Addison-Wesley, Reading, MA, 1994).
[97] A. V. Goldberg and R. E. Tarjan, J. ACM, 921 (1988).
[98] A. A. Middleton, "Scaling, domains, and states in the four-dimensional random field Ising magnet," (2002), arXiv:cond-mat/0208182.
[99] F. Cooper, B. Freedman, and D. Preston, Nucl. Phys. B, 210 (1982).
[100] Due to a programming error, the quantity denoted as ξ in Ref. [79] did not coincide with the standard definition of the second-moment correlation length (see also the Erratum of Ref. [79]). We would like to point out that this error is corrected here and the results shown in Figs. 1 and 2 fully conform to the standard definition.
[101] H. G. Ballesteros, L. A. Fernandez, V. Martin-Mayor, A. Muñoz Sudupe, G. Parisi, and J. J. Ruiz-Lorenzo, Nucl. Phys. B, 681 (1998).
[102] M. Falcioni, E. Marinari, M. L. Paciello, G. Parisi, and B. Taglienti, Phys. Lett., 331 (1982).
[103] A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett., 2635 (1988).
[104] G. Harris, Nuclear Physics B, 278 (1994).
[105] H. G. Ballesteros, L. A. Fernandez, V. Martin-Mayor, A. Muñoz Sudupe, G. Parisi, and J. J. Ruiz-Lorenzo, Phys. Rev. B, 2740 (1998).
[106] The reader may check that this procedure is inconsequential for the Gaussian or the Poissonian distributions. For instance, in the Gaussian case, the analogue of Eq. (30) obtained by modifying simultaneously the width of the distribution, σ → σ + δσ, would be G_{zy}|_{σ+δσ} = h_z ⟨S_y⟩ R^G_{σ,δσ} |_{σ/(σ+δσ)}, where R^G_{σ,δσ} is the reweighting factor appropriate for the Gaussian distribution, see Eq. (37). This is exactly the same result that one obtains by combining Eqs. (30) and (40).
[107] M. Nightingale, Physica A: Statistical Mechanics and its Applications, 561 (1976).
[108] M. Baity-Jesi, R. A. Baños, A. Cruz, L. A. Fernandez, J. M. Gil-Narvion, A. Gordillo-Guerrero, D. Iniguez, A. Maiorano, F. Mantovani, E. Marinari, V. Martin-Mayor, J. Monforte-Garcia, A. Muñoz Sudupe, D. Navarro, G. Parisi, S. Perez-Gaviro, M. Pivanti, F. Ricci-Tersenghi, J. J. Ruiz-Lorenzo, S. F. Schifano, B. Seoane, A. Tarancon, R. Tripiccione, and D. Yllanes (Janus Collaboration), Phys. Rev. B, 224416 (2013), arXiv:1310.2910.
[109] As in Ref. [79], the error given in parentheses is of statistical origin and the one in brackets comes from the uncertainty in the choice of ω.
[110] A. Malakis and N. G. Fytas, Phys. Rev. E, 016109 (2006).
[111] C. Holm and W. Janke, Phys. Rev. Lett., 2265 (1997).
[112] P. E. Theodorakis, I. Georgiou, and N. G. Fytas, Phys. Rev. E, 032119 (2013).
[113] A. Aharony and A. B. Harris, Phys. Rev. Lett., 3700 (1996).
[114] H. Chamati, E. Korutcheva, and N. S. Tonchev, Phys. Rev. E, 026129 (2002).
[115] A. Aharony, A. B. Harris, and S. Wiseman, Phys. Rev. Lett., 252 (1998).
[116] S. Wiseman and E. Domany, Phys. Rev. Lett., 22 (1998).
[117] C. Deroulers and A. P. Young, Phys. Rev. B, 014438 (2002).
[118] G. Parisi, M. Picco, and N. Sourlas, EPL (Europhysics Letters), 465 (2004).
[119] A. Gordillo-Guerrero and J. J. Ruiz-Lorenzo, Journal of Statistical Mechanics: Theory and Experiment, P06014 (2007).
[120] Note, however, that we do not compute the susceptibility for each sample. Rather, we compute quantities χ̂ tailored in such a way that their average over the random fields is the susceptibility χ; recall Eqs. (32), (45), and (64). The self-averageness ratio is computed from such χ̂'s.
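Endnote [106] invokes the reweighting factor R^G_{σ,δσ} of Eq. (37), not reproduced in this list. For a disorder sample whose random fields were drawn with Gaussian width σ, the standard importance-sampling identity gives such a factor as the product over sites of the ratio of Gaussian densities at widths σ + δσ and σ. The sketch below illustrates only that generic construction; the function name and array layout are ours, not the paper's, and the product is accumulated in log space to avoid underflow on large lattices.

```python
import numpy as np

def gaussian_reweighting_factor(fields, sigma, dsigma):
    """Product over sites x of N(h_x; 0, sigma+dsigma) / N(h_x; 0, sigma).

    N(h; 0, s) = exp(-h^2 / (2 s^2)) / (s sqrt(2 pi)), so each site
    contributes log(sigma/s2) + h^2 (1/(2 sigma^2) - 1/(2 s2^2)).
    """
    fields = np.asarray(fields, dtype=float)
    s2 = sigma + dsigma
    log_ratio = np.log(sigma / s2) + fields**2 * (0.5 / sigma**2 - 0.5 / s2**2)
    return float(np.exp(log_ratio.sum()))
```

For δσ = 0 the factor is identically 1, which is a convenient sanity check when wiring such a factor into a disorder-average estimator.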
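Endnote [120] stresses that the self-averageness ratio is built from per-sample estimators χ̂ whose disorder average is the susceptibility χ. In the standard Wiseman-Domany form [116] that ratio is the sample-to-sample variance divided by the squared disorder mean, R_χ = (mean(χ̂²) − mean(χ̂)²) / mean(χ̂)², vanishing for a self-averaging quantity. A minimal sketch of that formula (the function name is illustrative, and we use the population variance, one of several defensible estimator choices):

```python
import numpy as np

def self_averaging_ratio(chi_hat):
    """Wiseman-Domany ratio R = Var(chi_hat) / mean(chi_hat)**2,
    estimated over disorder samples; R -> 0 signals self-averaging."""
    chi_hat = np.asarray(chi_hat, dtype=float)
    mean = chi_hat.mean()
    return float(chi_hat.var() / mean**2)
```

With one χ̂ value per random-field sample, the ratio is computed once per (L, σ) point and its size dependence probed, as in the self-averaging analysis referenced above.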