Ground state energy of noninteracting fermions with a random energy spectrum
Hendrik Schawe, Alexander K. Hartmann, Satya N. Majumdar, Grégory Schehr
Institut für Physik, Universität Oldenburg, 26111 Oldenburg, Germany
Univ. Paris-Sud, CNRS, LPTMS, UMR 8626, Orsay F-91405, France
PACS – Monte Carlo methods
PACS – Spin-glass and other random models
PACS – Classical statistical mechanics
Abstract – We derive analytically the full distribution of the ground-state energy of K non-interacting fermions in a disordered environment, modelled by a Hamiltonian whose spectrum consists of N i.i.d. random energy levels with distribution p(ε) (with ε ≥ 0). We show that, for each fixed K, the distribution P_{K,N}(E) of the ground-state energy E has a universal scaling form in the limit of large N. We compute this universal scaling function and show that it depends only on K and the exponent α characterizing the small-ε behaviour of p(ε) ∼ ε^α. We compare the analytical predictions with results from numerical simulations. For this purpose we employ a sophisticated importance-sampling algorithm that allows us to obtain the distributions over a large range of the support, down to probabilities as small as 10^{−…}. We find asymptotically a very good agreement between analytical predictions and numerical results.

The celebrated "Random Energy Model" (REM) of Derrida [1] has continued to play a central role in understanding different aspects of classical disordered systems, including spin glasses, directed polymers in random media and many other systems. In the REM, one typically has N energy levels which are considered to be independent and identically distributed (i.i.d.) random variables, each drawn from a probability distribution function (PDF) p(ε). Typical observables of interest are the partition function, the free energy, etc. The REM can also be useful as a toy model for quantum disordered systems. For example, let us consider a single quantum particle in a disordered medium with Hamiltonian ĥ. We will assume that the spectrum of the operator ĥ has a finite number of states N (for instance, a quantum particle on a lattice of finite size with a random on-site potential, as in the Anderson model). In general, solving exactly the spectrum of such an operator is hard for a generic random potential.
One possible approximation, in the spirit of the REM in classical disordered systems, is to consider the toy model where one replaces the spectrum of the actual Hamiltonian by N ordered i.i.d. energy levels ε₁ ≤ ε₂ ≤ ··· ≤ ε_N, each drawn from the common PDF p(ε). Without loss of generality, we will also assume that the Hamiltonian ĥ has only positive eigenvalues. This means that, in the corresponding toy model, the PDF p(ε) is supported on [0, +∞). It is well known that, in a strongly disordered quantum system where all single-particle eigenfunctions are localised in space, the energy levels can be approximated by i.i.d. random variables (see e.g. [2]). Therefore the REM that we consider here is relevant in such a strongly localised part of the spectrum of a disordered Hamiltonian.

Now consider a system of K noninteracting fermions with the Hamiltonian Ĥ_K = Σ_{i=1}^K ĥ_i, where ĥ_i is the single-particle Hamiltonian associated with the i-th particle. The ground state of this many-body system corresponds to filling up the single-particle spectrum up to the Fermi level ε_K, with one particle occupying each of the states with energies ε₁, ε₂, ···, ε_K. The ground-state energy E of this many-body system is therefore given by

\[ E = \sum_{i=1}^{K} \varepsilon_i \,. \tag{1} \]

Clearly, E is a random variable, which fluctuates from one realisation of the disorder to another. Given p(ε), we are interested in computing the distribution P_{K,N}(E) of E, for fixed K (i.e. the number of fermions) and N (i.e. the number of levels). We note that, for K = 1, E = ε₁ is just the minimum of a set of N i.i.d. random variables and is described by the well-known extreme value statistics [3].
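The definition in eq. (1) is straightforward to simulate directly for moderate N: draw N i.i.d. levels and sum the K smallest. The following is a minimal Python sketch (our own illustration, not the authors' production code; the function name is ours), using exponentially distributed levels as an example:

```python
import numpy as np

def ground_state_energy(levels, K):
    """Ground-state energy E of K noninteracting fermions, eq. (1):
    the sum of the K smallest single-particle levels."""
    # np.partition places the K smallest entries (in arbitrary order)
    # in the first K slots, which is all we need for the sum
    return np.partition(levels, K)[:K].sum()

rng = np.random.default_rng(0)
N, K = 1000, 20
eps = rng.exponential(size=N)   # i.i.d. levels with p(eps) = e^{-eps}
E = ground_state_energy(eps, K)
```

For K = 1 this reduces to the minimum of the sample, i.e. standard extreme-value statistics.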
Thus, for general K, it would be interesting to know how sensitive the distribution of E is to the choice of p(ε). For instance, is there any universal feature of the distribution of E that is independent of p(ε)? We note that E is a sum of random variables, but these random variables are not independent, due to the ordering ε₁ ≤ ε₂ ≤ ··· ≤ ε_N (even though the original unordered random variables are independent). Had they been independent, the sum E in eq. (1) would, by virtue of the Central Limit Theorem, converge to a shifted and scaled Gaussian random variable. Here this is not the case, as the ordering induces non-trivial correlations between these variables.

In this paper, we compute exactly the PDF P_{K,N}(E) for arbitrary K, N and p(ε) and show that, indeed, a universal feature emerges in the large-N limit. It turns out that the limiting distribution of E, for large N, depends only on the small-ε behaviour of p(ε) ≈ B ε^α, with α > −1, but is otherwise independent of the rest of the features of p(ε). For fixed α and fixed K, as N → ∞, we show that the distribution of the ground-state energy converges to the limiting scaling form

\[ P_{K,N}(E) \approx b\, N^{\frac{1}{\alpha+1}}\, F_K^{(\alpha)}\!\left( b\, N^{\frac{1}{\alpha+1}}\, E \right), \tag{2} \]

where b = (B/(α+1))^{1/(α+1)} is just a scale factor. The scaling function F_K^{(α)}(z) (with z ∈ [0, +∞)) is universal and depends only on α and K. We show that the Laplace transform of F_K^{(α)}(z) is given explicitly by

\[ \int_0^\infty F_K^{(\alpha)}(z)\, e^{-\lambda z}\, \mathrm{d}z = \frac{(\alpha+1)^K}{\Gamma(K)}\, \lambda^{-(\alpha+1)(K-1)} \int_0^\infty x^\alpha\, e^{-\lambda x - x^{\alpha+1}}\, \left[\gamma(\alpha+1, \lambda x)\right]^{K-1}\, \mathrm{d}x \,, \tag{3} \]

where γ(a, x) = ∫₀^x du u^{a−1} e^{−u} is the (lower) incomplete gamma function. While we can formally invert this Laplace transform (3), it does not have a simple expression for generic α. However, we can derive the asymptotic behaviour of F_K^{(α)}(z),

\[ F_K^{(\alpha)}(z) \approx \begin{cases} c_1\, z^{(\alpha+1)K-1}\,, & z \to 0 \,, \\[1ex] c_2\, z^{\alpha}\, \exp\!\left[ -\left( \dfrac{z}{K} \right)^{\alpha+1} \right], & z \to \infty \,, \end{cases} \tag{4} \]

where c₁ = [Γ(α+2)]^K / [Γ(K+1) Γ((α+1)K)] and c₂ = (α+1) K^{K−α−2}/Γ(K) are constants. For the extreme-value case K = 1 our result, F_1^{(α)}(z) = (α+1) z^α e^{−z^{α+1}}, coincides with the well-known Weibull scaling function [3]. Note that here we are interested in the sum of the K lowest of N i.i.d. variables supported on [0, +∞). We remark that in the statistics literature, in a completely different context, the sum of the top K values of a set of i.i.d. random variables with an unbounded support has been studied [4, 5]. However, we have not found our results (2) and (3) in the statistics literature.

We start with a set of N positive i.i.d. random variables {x₁, x₂, ···, x_N}, each drawn from a common distribution p(x), supported on [0, +∞). The joint distribution of these variables is simply P(x₁, ···, x_N) = ∏_{i=1}^N p(x_i).
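As a quick consistency check of the scaling form (2), one can estimate the Laplace transform ⟨e^{−λz}⟩ of the rescaled ground-state energy z = b N^{1/(α+1)} E by brute-force Monte Carlo and compare with the α = 0 product formula in eq. (23) below. A minimal Python sketch (our own illustration, not the importance-sampling code used in the paper) for exponential levels (α = 0, B = 1, hence b = 1) and K = 2:

```python
import numpy as np

def rescaled_energy_samples(rng, N, K, n_samples):
    """Monte Carlo samples of z = N * E for exponential levels
    (alpha = 0, B = 1, so b = 1 in eq. (2))."""
    eps = rng.exponential(size=(n_samples, N))
    # ground-state energy: sum of the K smallest of the N levels
    E = np.partition(eps, K, axis=1)[:, :K].sum(axis=1)
    return N * E

rng = np.random.default_rng(1)
z = rescaled_energy_samples(rng, N=1000, K=2, n_samples=5000)
lam = 1.0
estimate = np.exp(-lam * z).mean()
exact = 1.0 / ((1 + lam) * (1 + 2 * lam))   # eq. (23) for K = 2
```

The Monte Carlo estimate agrees with the product formula up to statistical error and O(1/N) finite-size corrections; this simple sampling, however, only resolves the bulk of the distribution, which is why the tails require the importance-sampling scheme described later.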
At this stage, these variables are unordered. We are interested in the first K ordered variables {ε₁, ε₂, ···, ε_K} with K ≤ N. This ordering makes these K variables correlated. Indeed, the joint distribution of the K lowest ordered variables can be written explicitly as

\[ P(\varepsilon_1, \cdots, \varepsilon_K) = \frac{\Gamma(N+1)}{\Gamma(N-K+1)} \prod_{i=1}^{K} p(\varepsilon_i) \prod_{i=2}^{K} \Theta(\varepsilon_i - \varepsilon_{i-1}) \left[ \int_{\varepsilon_K}^{\infty} p(u)\, \mathrm{d}u \right]^{N-K}. \tag{5} \]

This result can be easily understood as follows. We first choose the K distinct variables from an i.i.d. set of N variables. The number of ways this can be done is simply the combinatorial factor N(N−1)···(N−K+1) = Γ(N+1)/Γ(N−K+1) in eq. (5). The probability that they are ordered is ∏_{i=1}^K p(ε_i) ∏_{i=2}^K Θ(ε_i − ε_{i−1}), where the Heaviside theta functions ensure the ordering. In addition, we have to ensure that the N − K remaining variables are bigger than ε_K, i.e. the largest value among the first K ordered variables. Since these N − K variables are i.i.d., this gives the last factor in eq. (5). The formula in eq. (5) is exact for any p(ε), K and N.

Given the joint PDF (5), we are interested in the distribution P_{K,N}(E) of the ground-state energy E in eq. (1). We therefore have

\[ P_{K,N}(E) = \int P(\varepsilon_1, \cdots, \varepsilon_K)\, \delta\!\left( E - \sum_{i=1}^{K} \varepsilon_i \right) \prod_{i=1}^{K} \mathrm{d}\varepsilon_i \,. \tag{6} \]

The form of this equation naturally suggests to consider the Laplace transform with respect to (w.r.t.) E,

\[ \langle e^{-sE} \rangle = \int_0^\infty P_{K,N}(E)\, e^{-sE}\, \mathrm{d}E \,. \tag{7} \]

Taking the Laplace transform of eq. (6) gives

\[ \langle e^{-sE} \rangle = \frac{\Gamma(N+1)}{\Gamma(N-K+1)} \int_0^\infty \mathrm{d}\varepsilon_K\, p(\varepsilon_K)\, e^{-s\varepsilon_K} \left[ \int_{\varepsilon_K}^{\infty} p(u)\, \mathrm{d}u \right]^{N-K} J_{K-1}(\varepsilon_K) \,, \tag{8} \]

where

\[ J_{K-1}(\varepsilon_K) = \int \prod_{i=1}^{K-1} p(\varepsilon_i)\, e^{-s\varepsilon_i}\, \mathrm{d}\varepsilon_i \prod_{i=2}^{K} \Theta(\varepsilon_i - \varepsilon_{i-1}) \,. \tag{9} \]

This multiple integral (9) has a nested structure, can be evaluated easily by induction, and one gets

\[ J_{K-1}(\varepsilon_K) = \frac{1}{(K-1)!} \left[ \int_0^{\varepsilon_K} \mathrm{d}u\, e^{-su}\, p(u) \right]^{K-1}. \tag{10} \]

Using this result in eq. (8), and also replacing, for later convenience, ∫_y^∞ du p(u) = 1 − ∫₀^y du p(u), we get the exact formula

\[ \langle e^{-sE} \rangle = K \binom{N}{K} \int_0^\infty \mathrm{d}y\, p(y)\, e^{-sy} \left[ 1 - \int_0^y \mathrm{d}u\, p(u) \right]^{N-K} \left[ \int_0^y \mathrm{d}v\, p(v)\, e^{-sv} \right]^{K-1}. \tag{11} \]

This formula has a simple interpretation. Taking the Laplace transform is equivalent to breaking the system into two species of random variables of sizes K and N − K (this can be done in (N choose K) ways): each member of the first species of size K comes with an effective weight p(ε) e^{−sε}, while in the second species of size N − K each member comes with an effective weight p(ε). We first fix the K-th variable to have a value y, whose weight is p(y) e^{−sy}. The members of the second species should each be bigger than y (explaining the factor [∫_y^∞ p(u) du]^{N−K}), while the rest of the (K − 1) members of the first species should each be smaller than y, explaining the factor [∫₀^y dv e^{−sv} p(v)]^{K−1}. Finally, the biggest variable among the members of the first species can be any of the K members, explaining the factor K multiplying the binomial coefficient in eq. (11). With this interpretation, it is clear that eq. (11) can be easily generalized to any linear statistics of the form L_K = Σ_{i=1}^K f(ε_i), where f(ε) is an arbitrary function. The ground-state energy E considered here corresponds to choosing f(ε) = ε. For general f(ε), the effective weight of each member of the first species discussed above is just p(ε) e^{−sf(ε)}. Hence the formula in eq. (11) generalises to

\[ \langle e^{-sL_K} \rangle = K \binom{N}{K} \int_0^\infty \mathrm{d}y\, p(y)\, e^{-sf(y)} \left[ \int_y^\infty p(u)\, \mathrm{d}u \right]^{N-K} \left[ \int_0^y \mathrm{d}v\, p(v)\, e^{-sf(v)} \right]^{K-1}. \tag{12} \]

In this paper, we will focus only on the case f(ε) = ε. Below, we thus start with the exact result in eq. (11) and analyse its behaviour in the large-N limit.

To understand the large-N scaling limit, it is instructive to start with the case K = 1. In this case, E = ε₁ is just the minimum of a set of N i.i.d. random variables, each drawn from p(ε), and eq. (11) reads (upon setting K = 1)

\[ \langle e^{-sE} \rangle = N \int_0^\infty \mathrm{d}y\, p(y)\, e^{-sy} \left[ 1 - \int_0^y \mathrm{d}u\, p(u) \right]^{N-1}, \tag{13} \]

where we replaced ∫_y^∞ du p(u) = 1 − ∫₀^y du p(u), using the normalisation of p(u). In the large-N limit, the dominant contribution to the integral over y comes from the regime of y where the integral ∫₀^y du p(u) is of order O(1/N). For other values of y, the contribution is exponentially small in N, for large N. Hence, we see that, in the large-N limit, only the small-y behaviour of p(y) matters. Let

\[ p(y) \underset{y \to 0}{\approx} B\, y^{\alpha} \,, \tag{14} \]

where α > −1, so that p(y) is normalisable, and clearly B > 0. Substituting this leading-order small-y behaviour (14) of p(y) in eq. (13), we get

\[ \langle e^{-sE} \rangle \approx B N \int_0^\infty \mathrm{d}y\, y^\alpha\, e^{-sy} \exp\!\left( - \frac{B N}{\alpha+1}\, y^{\alpha+1} \right). \tag{15} \]

Performing the change of variable y = ((α+1)/(BN))^{1/(α+1)} x, we get

\[ \langle e^{-sE} \rangle \approx (\alpha+1) \int_0^\infty \mathrm{d}x\, x^\alpha \exp\!\left( - \frac{s}{b\, N^{\frac{1}{\alpha+1}}}\, x - x^{\alpha+1} \right), \tag{16} \]

where b = (B/(α+1))^{1/(α+1)}. Inverting the Laplace transform formally, we obtain the scaling form given in eq. (2) with K = 1, and the scaling function F_1^{(α)}(z) has its Laplace transform as in (3) with K = 1. Inverting this Laplace transform exactly, we recover the Weibull scaling function F_1^{(α)}(z) = (α+1) z^α e^{−z^{α+1}}. The calculation for K = 1 shows that only the small-y behaviour of p(y) matters in the limit of large N. Furthermore, for K = 1, we see that the typical value of E scales as N^{−1/(α+1)} for large N. We then anticipate that, even for K > 1, the typical scale of E will remain the same, E ∼ N^{−1/(α+1)}, for large N. Below, we indeed use this typical scale for E (and verify it a posteriori) and compute the scaling function F_K^{(α)}(z) for general K in eq. (2).

We now derive the main results in eqs. (2) and (3) for all K ≥ 1. Anticipating the scaling E ∼ N^{−1/(α+1)} as mentioned above, we set

\[ E = \frac{1}{b}\, N^{-\frac{1}{\alpha+1}}\, z \,, \tag{17} \]

where b is a constant to be fixed later and the scaled ground-state energy z is of order O(1). Substituting this scaling form (17) in eq. (11), we see that the left-hand side (l.h.s.) reads ⟨e^{−sE}⟩ = ⟨e^{−s N^{−1/(α+1)} z/b}⟩ = ⟨e^{−λz}⟩, where λ = N^{−1/(α+1)} s/b is the rescaled Laplace variable. We will take the N → ∞ limit, keeping λ fixed. This then corresponds to the s → ∞ limit. On the right-hand side (r.h.s.) of eq. (11) we make the change of variables s y = x̃, as well as u = ũ/s and v = ṽ/s. This gives

\[ \langle e^{-\lambda z} \rangle = \frac{K}{s^K} \binom{N}{K} \int_0^\infty \mathrm{d}\tilde{x}\, p\!\left( \frac{\tilde{x}}{s} \right) e^{-\tilde{x}} \left( 1 - \frac{1}{s} \int_0^{\tilde{x}} \mathrm{d}\tilde{u}\, p\!\left( \frac{\tilde{u}}{s} \right) \right)^{N-K} \left( \int_0^{\tilde{x}} \mathrm{d}\tilde{v}\, p\!\left( \frac{\tilde{v}}{s} \right) e^{-\tilde{v}} \right)^{K-1}. \tag{18} \]

In the large-s limit, we use p(y) ≈ B y^α to leading order. Inserting this behaviour in eq. (18), we get

\[ \langle e^{-\lambda z} \rangle \approx \frac{K B^K}{s^{(\alpha+1)K}} \binom{N}{K} \int_0^\infty \mathrm{d}\tilde{x}\, \tilde{x}^\alpha\, e^{-\tilde{x}} \exp\!\left( - \frac{B (N-K)}{(\alpha+1)\, s^{\alpha+1}}\, \tilde{x}^{\alpha+1} \right) \left[\gamma(\alpha+1, \tilde{x})\right]^{K-1}, \tag{19} \]

where we recall that γ(a, x) = ∫₀^x du u^{a−1} e^{−u} is the incomplete gamma function. We now use s = λ b N^{1/(α+1)} and choose

\[ b = \left( \frac{B}{\alpha+1} \right)^{\frac{1}{\alpha+1}}. \tag{20} \]

Furthermore, in the large-N limit, K (N choose K) ∼ N^K/Γ(K). Using these results, and rescaling x̃ = λx, we arrive at

\[ \langle e^{-\lambda z} \rangle = \frac{(\alpha+1)^K}{\Gamma(K)}\, \lambda^{-(\alpha+1)(K-1)} \int_0^\infty x^\alpha\, e^{-\lambda x - x^{\alpha+1}}\, \left[\gamma(\alpha+1, \lambda x)\right]^{K-1}\, \mathrm{d}x \,. \tag{21} \]

This clearly shows that the distribution of the rescaled random variable z = b N^{1/(α+1)} E [see eq. (17)] converges to an N-independent form F_K^{(α)}(z) for large N, whose Laplace transform is given by ∫₀^∞ F_K^{(α)}(z) e^{−λz} dz = ⟨e^{−λz}⟩. Therefore eq. (21) demonstrates the result announced in eq. (3).

Special cases α = 0.
In this case, using γ(1, λx) = 1 − e^{−λx}, eq. (3) reduces to

\[ \int_0^\infty F_K^{(0)}(z)\, e^{-\lambda z}\, \mathrm{d}z = \frac{1}{\Gamma(K)}\, \lambda^{-(K-1)} \int_0^\infty \mathrm{d}x\, e^{-(\lambda+1)x} \left( 1 - e^{-\lambda x} \right)^{K-1} = \frac{\Gamma(1 + 1/\lambda)}{\lambda^K\, \Gamma(K + 1 + 1/\lambda)} \,. \tag{22} \]

Using the properties of the Γ function, one can express the r.h.s. of (22) as a simple product

\[ \int_0^\infty F_K^{(0)}(z)\, e^{-\lambda z}\, \mathrm{d}z = \prod_{m=1}^{K} \frac{1}{1 + m\lambda} \,. \tag{23} \]

To invert this Laplace transform, we note that the r.h.s. has simple poles at λ = −1/m with m = 1, 2, ···, K. Evaluating carefully the residues at these poles, we can invert this Laplace transform explicitly and get

\[ F_K^{(0)}(z) = \sum_{n=1}^{K} (-1)^{K-n}\, \frac{n^{K-1}}{(K-n)!\; n!}\, e^{-z/n} \,. \tag{24} \]

For instance,

\[ F_1^{(0)}(z) = e^{-z} \tag{25} \]
\[ F_2^{(0)}(z) = e^{-z/2} - e^{-z} \tag{26} \]
\[ F_3^{(0)}(z) = \frac{3}{2}\, e^{-z/3} - 2\, e^{-z/2} + \frac{1}{2}\, e^{-z} \,. \tag{27} \]

Numerical simulations.
Next, we verify our analytical predictions via numerical simulations. To test the prediction of the scaling behaviour in eq. (2), as well as the universality of the associated scaling function F_K^{(α)}(z), we consider four different distributions for the energy levels: (a) an exponential distribution p(ε) = e^{−ε} Θ(ε), (b) a half-Gaussian distribution p(ε) = √(2/π) e^{−ε²/2} Θ(ε), (c) a Pareto distribution p(ε) = 2 ε^{−3} Θ(ε − 1) and (d) p(ε) = ε e^{−ε} Θ(ε). The cases (a) and (b) clearly correspond to α = 0. Hence we expect the scaling function to be given by F_K^{(0)}(z) in eq. (24). The Pareto case (c), with support over [1, +∞), also corresponds to the α = 0 case, as seen easily after a trivial shift ε → ε − 1. Hence, in this case as well, we expect the scaling function to be given by F_K^{(0)}(z). However, case (d) is different, as it corresponds to α = 1, and hence the scaling function should be given by F_K^{(1)}(z). In fig. 1, we compare the simulation results with the analytical predictions and find very good agreement. Note that in cases (a)–(c), the scaling function F_K^{(0)}(z) has an explicit expression, eq. (24). Hence, it is easy to compare the simulation results directly with this expression (as in fig. 1 (a)–(c)). However, for case (d), where α = 1, we do not have a simple explicit formula for F_K^{(1)}(z), though we have explicitly given its Laplace transform in eq. (3) with α = 1. Hence, to compare with the simulation results, we first needed to invert this Laplace transform numerically, using an arbitrary-precision library [21]. This comparison is shown in fig. 1 (d).

To obtain the presented numerical results, one has to generate N random numbers according to the desired probability density p(ε). This is done by a standard method: we first choose a uniform random number η ∈ [0, 1] and then generate ε using the inversion formula ∫₀^ε p(ε′) dε′ = η. The exponential case (a) and the Pareto case (c) can be trivially obtained using this relation [6]. In the half-Gaussian case (b), the Gaussian random numbers can be generated using the Box-Muller method [6]. In case (d), p(ε) = ε e^{−ε}, the above relation reads η = ∫₀^ε p(ε′) dε′ = 1 − (1 + ε) e^{−ε}, which can be inverted using the Lambert W function [8]: ε = −W_{−1}((η − 1)/e) − 1. To evaluate the Lambert W function, we use the GSL implementation [7].

The sum E in eq. (1) is completely determined by the values η = (η₁, ..., η_N). If one simply generates many such vectors η of independent uniform random numbers and the correspondingly obtained random numbers (ε₁, ..., ε_N), one will obtain only typical results for E, i.e. those having a high enough probability. Here, we sample the distributions over a broad range of the support, also in the far tails, where the probabilities are extremely small. For this purpose, we use a well-tested importance-sampling scheme [9, 10]. Here the vectors η are sampled using the Metropolis algorithm, including a bias of samples away from the main part of the distribution. We use a bias e^{−E/T}, where T is a "temperature" parameter which can be positive or negative and allows us to address different ranges of the distribution. Since the bias is known, the Metropolis results can be corrected for the bias to obtain the actual distribution. This enables us to gather good statistics also in the far tails.

Fig. 1: (colour online) Scaled distribution P_{K,N}(E) for K = 20, for different values of N and for four different distributions p(ε). The insets show the behaviour near the peaks for the four different cases. (a) shows exponentially distributed p(ε) = e^{−ε} Θ(ε), which corresponds to α = 0 and b = 1 [see eq. (2)]. The scaling function F_{K=20}^{(0)} in eq. (24) matches the numerical data very well, all the way into the tails. (b) shows half-Gaussian distributed p(ε) = √(2/π) e^{−ε²/2} Θ(ε), corresponding to α = 0 and b = p(0) = √(2/π). (c) shows Pareto distributed energy levels p(ε) = 2 ε^{−3} Θ(ε − 1). After the shift ε → ε − 1, i.e. E → E − K, this falls in the α = 0 universality class, with b = 2. In this case, using the finite-size extrapolation scheme shown as black circles (see text and eq. (30) with β = 1), the data approach the theoretical scaling function F_{20}^{(0)}(z). (d) shows energy levels distributed according to p(ε) = ε e^{−ε} Θ(ε). This corresponds to the α = 1 universality class, with the scaling parameter b = 1/√2. Again, using the finite-size scaling form (see text and eq. (30)) with β = 1/2, the extrapolated data (shown as black circles) match well the theoretical scaling function F_{20}^{(1)}(z), obtained from the numerical Laplace inversion of eq. (3), setting K = 20 and α = 1.

To be more concrete, we use a Markov chain η(t) = η(0), η(1), .... Every move η(t) → η(t+1) consists of changing one entry of η(t), leading to a trial η′ ("local update"). While the simplest way to change an entry would be to replace one uniformly distributed random number by a freshly drawn one, as used in ref. [10], this leads to difficulties, especially for small values of K. In the far tails, there will be a point where all entries of η are almost one (or almost zero), and almost every new proposal will be rejected, since it is improbable to draw a random number very close to the previous one. Therefore we perform a slightly more involved protocol, where instead of redrawing we change an entry η_i → η_i + ξδ, where ξ ∈ [−1, 1] is uniformly distributed and δ ∈ {10^{−i} | i ∈ {0, 1, 2, 3, 4, 5}} with uniform probability 1/6. Thus δ determines the scale of the local change. Changes resulting in an entry η_i ∉ [0, 1) are directly rejected, i.e. η(t+1) = η(t). Note that this protocol would still result in uniformly distributed random numbers η_i if every change inside [0, 1) were accepted, i.e. η(t+1) = η′. Here, instead, each proposed change is accepted with the Metropolis acceptance probability

\[ p_{\mathrm{acc}} = \min\left\{ 1, e^{-\Delta E / T} \right\}, \tag{28} \]

where ΔE is the change in energy caused by the proposed change, and is otherwise also rejected.

For any value of T, as usual for Monte Carlo simulations, running these Markov chains "long enough" and taking measurements results in a histogram P_T(E) for each temperature, which can be corrected for the bias using

\[ P(E) = e^{E/T}\, Z(T)\, P_T(E) \,. \tag{29} \]

The a-priori unknown normalization parameter Z(T) can be obtained by enforcing continuity and normalization of the whole distribution, which is obtained by performing simulations for several values of T, including T = ∞, which corresponds to simple sampling. We will not go into further details, since this algorithm is well described in several other publications [9–11].

For the Pareto distributed case p(ε) = 2 ε^{−3} Θ(ε − 1), we used, instead of the bias e^{−E/T}, a modified Wang-Landau sampling [12–16] with multiple histograms and subsequent entropic sampling [17, 18]. We used Wang-Landau sampling for this case since the temperatures are harder to adjust, i.e. for negative temperatures it quickly happens that equilibration becomes impossible and the energy increases constantly. This effect is already known to pose difficulties for the aforementioned sampling with bias [19, 20].

We set K = 20 in fig. 1 and compare the distribution P_{K=20,N}(E) for different values of N. We verify, by a data collapse, the scaling form predicted in eq. (2) and also compare the numerical scaling function to the analytical ones. As mentioned earlier, for the α = 0 case (corresponding to cases (a)–(c)), the analytical scaling function is given in eq. (24). For α = 1 (corresponding to case (d)), we invert the Laplace transform in eq. (3) for K = 20 and α = 1, using an arbitrary-precision library [21].

While the exponential case fits the analytic result very well even for small values of N, the other cases show strong finite-size effects, especially in the extreme right tail. Such finite-size effects are known to occur frequently in the extreme statistics of i.i.d. random variables [22]. As seen in fig. 1, the discrepancy between the numerical and the analytical results is very small in the main region (i.e. in the bulk). In the tails, we need to use a finite-size ansatz to study the convergence of the numerical results as N → ∞. For example, it is natural to expect that the finite-size corrections to the leading scaling form in eq. (2) are of the form

\[ P_{K,N}(E) = b\, N^{\frac{1}{1+\alpha}} \left[ F_K^{(\alpha)}(z) + N^{-\beta}\, G_K^{(\alpha)}(z) + N^{-2\beta}\, H_K^{(\alpha)}(z) + \ldots \right], \tag{30} \]

where β = min(1/(1+α), 1), z = b N^{1/(α+1)} E is the scaling variable, and G_K^{(α)}(z), H_K^{(α)}(z) describe the finite-size scaling of the correction terms. Thus for α = 0 one has β = 1, while for α = 1 we have β = 1/2. For several values of z, we extrapolate the data by fitting the numerical data in fig. 1 pointwise, as a function of N, to obtain estimates for the asymptote F_K^{(α)}(z). We treated the cases (c) (corresponding to α = 0, and hence β = 1) and (d) (α = 1, β = 1/2) in this way. The convergence with increasing N is faster in the left tail z → 0, while it is much slower in the right tail z → ∞.

Conclusion.
In this paper, we have studied analytically and numerically the full distribution of the ground-state energy of K non-interacting fermions in a disordered environment, modelled by a Hamiltonian whose spectrum consists of N i.i.d. random energy levels with distribution p(ε) (with ε ≥ 0). The ground-state energy is the sum of the K smallest of N values drawn from a probability distribution, and is therefore a generalization of extreme-value statistics, which corresponds to the case K = 1. Thus our results should be of interest also in a very general mathematical context.

Fig. 2: (colour online) We consider the tails of the distribution P_{K=20,N}(E) for the case p(ε) = ε e^{−ε}, corresponding to α = 1. The scaling function in this case is F_{K=20}^{(1)}(z). The asymptotic behaviours of F_{K=20}^{(1)}(z) in eq. (4), for z → 0 (left tail, in (a)) and z → ∞ (right tail, in (b)), are compared to numerical simulations. The data have been plotted on a scale such that the two cases from eq. (4) appear as straight lines. In (b), for clarity, only three different values of N have been plotted.

We have shown that, for each fixed K, the distribution P_{K,N}(E) of the ground-state energy has a universal scaling form in the limit of large N (see eq. (2)). This universal distribution depends only on K and the exponent α characterizing the small-ε behaviour of p(ε) ∼ ε^α. We derive an exact expression for the Laplace transform of this scaling function in eq. (3). For generic α, the asymptotic behaviours of the scaling function are derived explicitly in eq. (4), while for the special case α = 0, the Laplace transform can be explicitly inverted, giving the full scaling function in eq. (24). Numerically, while the peak region of the distribution of E can be easily estimated by standard methods, estimating the tails of the distribution, where the probability is very small, is hard and requires more sophisticated techniques. In this paper, using an importance-sampling algorithm, we were able to estimate the tail probabilities (down to probabilities as small as 10^{−…}) and thereby to verify the theoretical predictions.
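For the α = 0 case, the explicit sum in eq. (24) makes the scaling function easy to evaluate directly. A minimal Python sketch (our own illustration, not the authors' code), with a normalisation check:

```python
import math
import numpy as np

def F0(K, z):
    """Scaling function F_K^{(0)}(z) of eq. (24), valid for alpha = 0."""
    z = np.asarray(z, dtype=float)
    total = np.zeros_like(z)
    for n in range(1, K + 1):
        # residue coefficient (-1)^{K-n} n^{K-1} / ((K-n)! n!)
        coeff = (-1) ** (K - n) * n ** (K - 1) / (
            math.factorial(K - n) * math.factorial(n)
        )
        total += coeff * np.exp(-z / n)
    return total

# normalisation check for K = 3 via a simple trapezoidal rule;
# the tail beyond z = 60 is exponentially negligible
zgrid = np.linspace(0.0, 60.0, 6001)
vals = F0(3, zgrid)
norm = float(np.sum((vals[:-1] + vals[1:]) * np.diff(zgrid)) / 2.0)
```

Note that for large K the alternating sum suffers from cancellation in double precision, so arbitrary-precision arithmetic (as provided by [21]) may be needed there as well.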
Thus the main conclusion of our work is that, even though the individual energy levels are independent random variables, the ordering needed to compute the ground-state energy induces effective correlations between the energy levels. These effective correlations then lead, for the ground-state energy, to a whole new class of universal scaling functions parameterised by K and α.

In this work, we have modelled the single-particle energy levels of a quantum disordered system by i.i.d. random variables, à la REM. This REM approximation for the energy levels is known to be valid for disordered Hamiltonians whose eigenstates are strongly localised in space [2]. Thus we expect that the results presented in this paper for the universal distribution of the ground-state energy would apply to such strongly disordered quantum systems. It is then natural to ask what happens to the ground-state energy for Hamiltonians with weakly localised eigenstates. In some weakly localised systems, a description based on Random Matrix Theory (RMT) [2] is a good approximation, where the energy levels (identified with the eigenvalues of a random matrix) are strongly correlated, with mutual level repulsion. In this RMT context, several linear statistics of ordered eigenvalues have recently been introduced and studied for large N under the name of truncated linear statistics (TLS) [23, 24]. The ground-state energy in eq. (1), or more generally the linear statistics as in eq. (12) studied here, are instances of TLS, but for i.i.d. random variables. It would thus be interesting to see how the TLS, studied here for i.i.d. variables, crosses over to the RMT case, as one goes from the strongly localised part of the spectrum of a disordered Hamiltonian to the weakly localised part.

∗ ∗ ∗

This work was supported by the German Science Foundation (DFG) through the grant HA 3169/8-1.
HS and AKH thank the LPTMS for hospitality and financial support during one- and two-month visits, respectively, where this project was conceived. The simulations were performed at the cluster of the GWDG Göttingen and the HPC cluster CARL, located at the University of Oldenburg (Germany) and funded by the DFG through its Major Research Instrumentation Programme (INST 184/157-1 FUGG) and the Ministry of Science and Culture (MWK) of the Lower Saxony State. SM and GS acknowledge support by ANR grant ANR-17-CE30-0027-01 RaMaTraF.
REFERENCES

[1] B. Derrida, Phys. Rev. B, 2613 (1981).
[2] M. Moshe, H. Neuberger, and B. Shapiro, Phys. Rev. Lett., 1497 (1994).
[3] E. J. Gumbel, Statistics of Extremes, Dover, New York (1958).
[4] H. N. Nagaraja, Ann. I. Stat. Math., 437 (1981).
[5] H. N. Nagaraja, Ann. Stat., 1306 (1982).
[6] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes, 3rd Edition: The Art of Scientific Computing (Cambridge University Press, 2007), ISBN 9780521880688.
[7] B. Gough, GNU Scientific Library Reference Manual, Third Edition (Network Theory Ltd., 2009), ISBN 0954612078, 9780954612078.
[8] R. M. Corless, G. H. Gonnet, D. E. G. Hare, D. J. Jeffrey, and D. E. Knuth, Advances in Computational Mathematics, 329 (1996), ISSN 1572-9044.
[9] A. K. Hartmann, Phys. Rev. E, 056102 (2002).
[10] A. K. Hartmann, Phys. Rev. E, 052103 (2014).
[11] H. Schawe, A. K. Hartmann, and S. N. Majumdar, Phys. Rev. E, 062159 (2018).
[12] F. Wang and D. P. Landau, Phys. Rev. Lett., 2050 (2001).
[13] F. Wang and D. P. Landau, Phys. Rev. E, 056101 (2001).
[14] B. J. Schulz, K. Binder, M. Müller, and D. P. Landau, Phys. Rev. E 67, 067102 (2003).
[15] R. E. Belardinelli and V. D. Pereyra, Phys. Rev. E, 046701 (2007).
[16] R. E. Belardinelli and V. D. Pereyra, The Journal of Chemical Physics, 184105 (2007).
[17] J. Lee, Phys. Rev. Lett., 211 (1993).
[18] R. Dickman and A. G. Cunha-Netto, Phys. Rev. E, 026701 (2011).
[19] G. Claussen, A. K. Hartmann, and S. N. Majumdar, Phys. Rev. E, 052104 (2015).
[20] H. Schawe, A. K. Hartmann, and S. N. Majumdar, Phys. Rev. E, 062101 (2017).
[21] F. Johansson et al., mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 1.0.0) (2013), http://mpmath.org/.
[22] G. Gyorgyi, N. R. Moloney, K. Ozogany, Z. Racz, and M. Droz, Phys. Rev. E, 041135 (2010).
[23] A. Grabsch, S. N. Majumdar, and C. Texier, J. Stat. Phys., 234 (2017).
[24] A. Grabsch, S. N. Majumdar, and C. Texier, J. Stat. Phys., 1452 (2017).