Some generic properties of level spacing distributions of 2D real random matrices
Siegfried Grossmann and Marko Robnik
Fachbereich Physik der Philipps-Universität, Renthof 6, D-35032 Marburg, Germany
CAMTP - Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor, Slovenia
e-mail:
[email protected], [email protected]
Abstract:
We study the level spacing distribution P(S) of 2D real random matrices, both symmetric as well as general, non-symmetric ones. In the general case we restrict ourselves to Gaussian distributed matrix elements, but different widths of the various matrix elements are admitted. The following results are obtained: An explicit exact formula for P(S) is derived and its behaviour close to S = 0 is studied analytically, showing that there is linear level repulsion, unless there are additional constraints on the probability distribution of the matrix elements. The constraint of having only positive or only negative but otherwise arbitrary non-diagonal elements leads to quadratic level repulsion with logarithmic corrections. These findings detail and extend our previous results already published in a preceding paper. For the symmetric real 2D matrices also other, non-Gaussian statistical distributions are considered. In this case we show for an arbitrary statistical distribution of the diagonal and non-diagonal elements that the level repulsion exponent ρ is always ρ = 1, provided the distribution function of the matrix elements is regular at zero value. If the distribution function of the matrix elements is a singular (but still integrable) power law near zero matrix element values, the level spacing distribution P(S) is a fractional exponent power law at small S. The tail of P(S) depends on further details of the matrix element statistics. We explicitly work out four cases: the constant (box) distribution, the Cauchy-Lorentz distribution, the exponential distribution and, as an example of a singular distribution, the power law distribution near zero value times an exponential tail.

PACS Numbers:
Zeitschrift für Naturforschung A

1 Introduction
Random matrix theory [1]-[5] has important applications quite generally in the statistical description of complex systems, e.g. for complex nuclei, for which it has been originally developed, or for chaotic systems with just a few degrees of freedom as treated in quantum chaos. Usually one restricts oneself to Gaussian ensembles of random matrices, meaning that the matrix elements have a Gaussian distribution, where the diagonal matrix elements all have the same dispersion and the off-diagonal elements also have the same dispersion, but the former is by a factor 2 larger than the latter one. See the remarks in section 2. This ensemble is the only one which is invariant under symmetry transformations of the underlying matrix A. If A is real and symmetric, the group of relevant transformations consists of the orthogonal transformations and we speak of the Gaussian Orthogonal Ensemble (GOE), while for Hermitian A the relevant group of transformations is that of the unitary transformations and we speak of the Gaussian Unitary Ensemble (GUE) of random matrices. The property of the matrix element distribution to be Gaussian is a direct consequence of just two assumptions, namely statistical independence of the distributions of the matrix elements and invariance against the group of appropriate transformations. We usually have in mind infinite dimensional matrices, although for many purposes finite dimensionality is useful and sufficient.

Several generalizations are possible. One is the generalization towards general non-normal matrices, either fully complex [6] (see also [1]) but still Gaussian (invariant), or real [7], but no longer invariant against the above group of transformations while still Gaussian (with different variances for different matrix elements). In the latter paper [7] we have also treated 2D real symmetric matrices whose matrix elements are Gaussian distributed but with different variances of the diagonal and the non-diagonal matrix elements.
Thus they no longer enjoy GOE invariance, but still have Gaussian distributed matrix elements.

In the present paper we study the level spacing distribution P(S) of 2D real random matrices by considering general distributions of the matrix elements, going beyond Gaussian. Therefore these ensembles of random matrices are no longer invariant under the mentioned transformation groups. The statistics P(S) of the level spacings S changes under transformations with the elements of those groups, so it depends on the basis chosen for their representation. Nevertheless, they are important in specific physical situations and also involve interesting mathematics.

We shall first treat rigorously the case of general 2D non-normal matrices with Gaussian distributed matrix elements, even admitting different variances of the various matrix elements. (A matrix A is non-normal [12] if it does not commute with its adjoint, i.e., [A, A⁺] ≠ 0. Non-normal matrices have important physical applications, especially in dissipative systems [7], [13], [14], [15], [16].) The level repulsion exponent ρ in this case will turn out to be still ρ = 1, unless there are constraints on the matrix elements, like e.g. having only positive or only negative non-diagonal elements. We then treat symmetric matrices with other, general distributions of the real matrix elements. For such symmetric, real matrices we shall show that the level repulsion exponent is always ρ = 1, provided the distributions of all matrix elements are regular at zero value. Next we study in detail and without approximations the following specific cases of such regular distributions, namely: the uniform (box) distribution, the Cauchy-Lorentz distribution, the exponential distribution, and also some addenda to the Gaussian case, already dealt with in [7]. If, in contrast, the distribution of matrix elements is singular at zero value, P(S) shows different behaviour at S = 0.
For example, if the singularity of the statistics of the matrix elements is an integrable power law at zero value, the level spacing statistics P(S) exhibits a fractional exponent power law level repulsion, as was discovered and treated in [8], [9], [10], [11]. This probably is characteristic for sparsed matrices, which in turn are important random matrix models for nearly integrable systems of the KAM type.

2 The 2D real random matrices and their level spacing distribution

2.1 The matrices and their eigenvalues

Consider 2 x 2 real matrices A = (A_ij), where i, j = 1 or 2; below we write 2 x 2 matrices row-wise in the bracket notation [a b; c d]. Such a matrix has two diagonal elements, which can always be chosen as a and −a: for general A_ij introduce A_s = (A_11 + A_22)/2 and subtract the diagonal matrix A_s·1. Then A_11 − A_s ≡ a = −(A_22 − A_s), i.e., one obtains the form (1) without loss of generality. Quite generally, for a matrix [a b; c d] the level spacing S = |λ_1 − λ_2| = |√((a − d)² + 4bc)| only depends on the difference a − d, so that we can arbitrarily shift a and d by a constant, in particular by A_s. Let a as well as the nondiagonal elements b_1 and b_2 all be real and write

  A = (A_ij) = [a  b_1; b_2  −a].   (1)

If b_1 = b_2, the matrix A is symmetric. (Let us make clear that in the general symmetric GOE matrix [a b; b d] the variances of the diagonal elements a and d are equal, but by a factor 2 larger than the variance of the offdiagonal element b. However, setting d = −a implies that the GOE case occurs when the variance of a is equal to the variance of b. See also subsection 4.1.)

The eigenvalues of A follow from

  det(A − λ·1) = (a − λ)(−a − λ) − b_1 b_2 = λ² − a² − b_1 b_2 = 0,   (2)

i.e.,

  λ_{1,2} = ±√(a² + b_1 b_2).   (3)

The eigenvalues are real for arbitrary symmetric (b_1 = b_2) matrices. In the more general case b_1 ≠ b_2 the eigenvalues are still real if the product b_1 b_2 is larger than −a²; otherwise they are purely imaginary, but never general complex. If the matrix A is not symmetric, it is no longer normal. Namely, in the general case one finds for the commutator

  [A, A⁺] = [b_1² − b_2²   2a(b_2 − b_1);  2a(b_2 − b_1)   b_2² − b_1²].   (4)

Apparently the commutator [A, A⁺] is zero iff b_1 = b_2, i.e., in the symmetric case. In general, 2 x 2 real matrices A are non-normal, [A, A⁺] ≠ 0.

The distribution P(S) of the level spacings S is given by

  P(S) = ∫∫∫ da db_1 db_2 δ(S − 2|√(a² + b_1 b_2)|) g_a(a) g_{b_1}(b_1) g_{b_2}(b_2),   (5)

with all three integrations running over (−∞, +∞). Here δ(.) is the Dirac delta function and g_a(a), g_{b_1}(b_1), g_{b_2}(b_2) are the normalized probability density functions for the matrix elements a, b_1, b_2, respectively. P(S) is the central object of our study in this paper. We are going to study the dependence of the main properties of P(S), especially the small-S behaviour (the level repulsion) as well as the asymptotic behaviour at large S (the tail of P(S)), upon the main features of the matrix element distribution functions g_a(a), g_{b_1}(b_1), g_{b_2}(b_2). In the special case of an ensemble of random symmetric matrices A = A⁺ we must have b_1 = b_2. It is not enough that the two statistics g_{b_1} and g_{b_2} are equal! This is achieved by replacing in the integrand of (5) the factor g_{b_2}(b_2) by the constraint δ(b_2 − b_1). Integrating then over b_2 results in the level distribution formula

  P(S) = ∫∫ da db δ(S − 2√(a² + b²)) g_a(a) g_b(b).   (6)

We shall work out exact formulae for the following cases: (i) general, non-normal matrices with Gaussian distributed elements a, b_1, b_2 in sections 2 and 3, and (ii) symmetric (normal) matrices but considering non-Gaussian distributions of the matrix elements in section 4: uniform (constant or box) distribution, Cauchy-Lorentz distribution, exponential distribution, and singular distribution (integrable power law at zero value multiplied by an exponential tail). Here we also detail more on the Gaussian case in order to extend our results of reference [7].
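Where helpful, the central object (5) can be estimated by direct Monte Carlo sampling. The following sketch is our illustration, not part of the original analysis; the choice of unit widths for the three Gaussian element densities is arbitrary. It draws a, b_1, b_2, histograms the spacing S = 2√|a² + b_1 b_2|, and the growth of the first histogram bins illustrates the linear level repulsion derived in section 3.

```python
import math, random

def sample_spacing(sigma0=1.0, sigma1=1.0, sigma2=1.0):
    """Draw one level spacing S = 2*sqrt(|a^2 + b1*b2|) of the matrix (1).
    The Gaussian densities ~ exp(-x^2/sigma_i^2) correspond to a standard
    deviation sigma_i/sqrt(2)."""
    a = random.gauss(0.0, sigma0 / math.sqrt(2.0))
    b1 = random.gauss(0.0, sigma1 / math.sqrt(2.0))
    b2 = random.gauss(0.0, sigma2 / math.sqrt(2.0))
    return 2.0 * math.sqrt(abs(a * a + b1 * b2))

def histogram_P(samples, smax=6.0, nbins=60):
    """Normalized histogram estimate of P(S) on [0, smax]."""
    width = smax / nbins
    counts = [0] * nbins
    for s in samples:
        if s < smax:
            counts[int(s / width)] += 1
    n = len(samples)
    return [c / (n * width) for c in counts]

random.seed(0)
samples = [sample_spacing() for _ in range(200_000)]
p = histogram_P(samples)
# Linear level repulsion P(S) ~ c*S near S = 0: the ratio of the first two
# bin densities should then be roughly 3 (= ratio of the bin centres).
print(p[1] / p[0])
```

The same sampler with different widths σ_0, σ_1, σ_2, or with only positive non-diagonal elements, can be used to probe the constrained cases discussed below.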
In section 5 we comment on the level distribution of the prototype non-normal matrix, the triangular matrix. The final section 6 is devoted to a discussion and conclusions.

2.2 Transformation to polar coordinates

We notice that the radicands in the arguments of the delta functions in both equations (5) and (6) are homogeneous in the moduli of a, b_1, and b_2. Thus it is natural to introduce spherical or plane polar coordinates. In the general case, described by equation (5), we define

  b_1 = r cos θ cos φ,  b_2 = r cos θ sin φ,  a = r sin θ,   (7)

where r ∈ [0, ∞), −π/2 ≤ θ ≤ π/2, and 0 ≤ φ ≤ 2π. Then we get for the level distance

  S = 2|√(a² + b_1 b_2)| = 2 r Q  with  Q(θ, φ) = |√(sin²θ + (1/2) cos²θ sin 2φ)|.   (8)

The Jacobian of the coordinate transformation is r² cos θ and therefore da db_1 db_2 = r² dr cos θ dθ dφ. The r-integration can be carried out in favour of S, resulting in

  P(S) = (S²/8) ∫_{−π/2}^{π/2} ∫_0^{2π} (cos θ dθ dφ / Q³) g_a(S sin θ/(2Q)) g_{b_1}(S cos θ cos φ/(2Q)) g_{b_2}(S cos θ sin φ/(2Q)).   (9)

Now, if the value of the double integral were regular at S = 0, the level repulsion would be quadratic, P(S) ∝ S². But Q has zeros at θ = 0 together with φ = 0, π/2, π, 3π/2, 2π, which are scanned in the integration. That leads to singularities of the integrand, explicit ones (1/Q³) as well as implicit ones (in the g-arguments). In particular the Q⁻¹ in the g's enforces a scan of the full matrix element distribution functions including their tails, for any nonzero value of S. One cannot simply expand the g's in terms of S and so find the small-S behaviour. Thus the picture of the S-dependence is far from simple. The small-S behaviour depends on all details of the matrix element distribution functions g including their tails. Therefore in this general case of independent a, b_1, b_2 we shall choose another approach to attack this problem and analyse it at least in the case of Gaussian g's in section 3.

In the case of real symmetric 2D matrices the transformation to plane polar coordinates is simpler and thus much more useful. Here the r-integration does not produce singularities, since the analog of Q here is just 1. Introduce plane polar coordinates into equation (6),

  a = r cos φ,  b = r sin φ,   (10)

where r ∈ [0, ∞) and 0 ≤ φ ≤ 2π. The Jacobian is r, i.e., da db = r dr dφ.
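As a quick numerical sanity check (our addition, not part of the derivation), the parametrization (7) and the factor Q of (8) can be verified against the direct expression for the level distance:

```python
import math, random

def spacing_direct(a, b1, b2):
    # S = 2*sqrt(|a^2 + b1*b2|); for a negative radicand the eigenvalues are
    # purely imaginary and S is the modulus of their difference
    return 2.0 * math.sqrt(abs(a * a + b1 * b2))

def spacing_polar(r, theta, phi):
    # Q(theta, phi) from equation (8)
    q = math.sqrt(abs(math.sin(theta) ** 2
                      + 0.5 * math.cos(theta) ** 2 * math.sin(2.0 * phi)))
    return 2.0 * r * q

random.seed(1)
for _ in range(1000):
    r = random.uniform(0.0, 5.0)
    theta = random.uniform(-math.pi / 2, math.pi / 2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    a = r * math.sin(theta)
    b1 = r * math.cos(theta) * math.cos(phi)
    b2 = r * math.cos(theta) * math.sin(phi)
    assert abs(spacing_direct(a, b1, b2) - spacing_polar(r, theta, phi)) < 1e-9
```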
The level distance reduces to 2√(a² + b²) = 2r, which is independent of φ, in contrast to the (θ, φ)-dependent factor Q in the general case of equation (8). We now can do the r-integration immediately and get

  P(S) = (S/4) ∫_0^{2π} dφ g_a((S/2) cos φ) g_b((S/2) sin φ).   (11)

Here, for S → 0, one can use the power law expansions of the matrix element distribution functions g_{a,b} in terms of S, independent of the nature of the g's, Gaussian or non-Gaussian. In particular, if g_a(x) and g_b(x) are both regular at x = 0, the integrand at S = 0 is just a constant and equal to g_a(0) g_b(0). We obtain for small S

  P(S) ≈ S · (π/2) g_a(0) g_b(0).   (12)

Thus in case of regular g's at zero value we always have linear level repulsion P(S) ∝ S for real, symmetric random matrices, whose amplitude reflects the probability densities g_{a,b}(0) to find the matrix elements a = 0 and b = 0 in the matrix A. Higher order corrections in S can be derived by Taylor expanding the g's around x = 0. We conclude that regular matrix element distribution functions g_{a,b} transform into regular level spacing distributions P(S). From equations (11) and (12) we also notice that the level repulsion is not linear if g_a(0) and g_b(0) do not exist, i.e., if the distributions g_a(x) and g_b(x) are singular at x = 0, if there is infinite probability density for the matrix elements to have values x = a = 0 or x = b = 0. We shall study this important case in section 4.

3 General non-normal matrices with Gaussian distributed elements

3.1 Level spacing distribution P(S): general

We start with equation (5) and observe that the dependence of the level distance S in the integrand of P(S) on b_1 and b_2 is only through the product B = b_1 b_2. Therefore it is natural to introduce hyperbolic coordinates defined as

  B = b_1 b_2,  v = b_1/b_2,  equivalent to  b_1 = √(Bv),  b_2 = √(B/v).   (13)

Both B and v run over the entire interval (−∞, ∞), but always have the same sign, sgn B = sgn b_1 · sgn b_2 = sgn v. Positive B or v indicates that both non-diagonal elements are positive or that both are negative.
Negative values of the variables B and v, instead, describe the case of non-diagonal elements with different sign. The Jacobian determinant is J = 1/(2|v|), and for the area elements we have db_1 db_2 = dB dv/(2|v|).

To analyse the integral further we assume Gaussian distributed matrix elements, though with possibly different widths σ_0, σ_1, and σ_2 for a, b_1, and b_2, respectively,

  g_a(a) = (1/(σ_0 √π)) exp(−a²/σ_0²),  g_{b_i}(b_i) = (1/(σ_i √π)) exp(−b_i²/σ_i²),  i = 1, 2.   (14)

All three distributions are normalized to one. With this assumption the equation for P(S) obtains the form

  P(S) = (1/(σ_0 σ_1 σ_2 π^{3/2})) ∫∫∫ da dB (dv/(2|v|)) exp(−(a²/σ_0² + Bv/σ_1² + B/(v σ_2²))) δ(S − 2|√(a² + B)|).   (15)

The integrand is even in the variable a; we thus can use ∫_{−∞}^{∞} da → 2∫_0^{∞} da. Next, the level distance delta function does not depend on v explicitly. But since the signs of v and B are coupled, only the first and the third quadrant of the (B, v)-plane contribute. In both these (B, v)-quadrants it is b_1² = Bv = |B||v| ≥ 0 and b_2² = B/v = |B|/|v| ≥ 0, guaranteeing the convergence of the (B, v)-integrals. In the first quadrant we have positive B, while in the third one −|B| is relevant for the level distance. This leads to

  P(S) = (2/(σ_0 σ_1 σ_2 π^{3/2})) ∫_0^∞ da exp(−a²/σ_0²) ∫_0^∞ dB ∫_0^∞ (dv/(2v)) exp(−Bv/σ_1² − B/(v σ_2²)) × [δ(S − 2√(a² + B)) + δ(S − 2|√(a² − B)|)].   (16)

Now all variables a, B, v have to be integrated over positive values only. The sum of the delta-functions then is independent of the variable v and the v-integral can be performed, cf. [17] No. 3.478,4,

  ∫_0^∞ (dv/(2v)) exp(−(B/σ_1²) v − (B/σ_2²)/v) = K_0(2B/(σ_1 σ_2)).   (17)

Here K_0(x) is the modified Bessel function of second kind and zero order. Its argument is always positive, B ≥ 0. The level spacing distribution P(S) becomes

  P(S) = (2/(σ_0 σ_1 σ_2 π^{3/2})) ∫_0^∞ dB K_0(2B/(σ_1 σ_2)) · G(B),   (18)

where G(B) is the Gaussian averaged level distribution for fixed, given product B ≥ 0 of the nondiagonal elements,

  G(B) = ∫_0^∞ da exp(−a²/σ_0²) [δ(S − 2√(a² + B)) + δ(S − 2|√(a² − B)|)],  with B ≥ 0.   (19)

The calculation of G(B) is easy because of the delta function and can be done analytically. Consider the first delta-function. Its argument, denoted as f(a) = S − 2√(a² + B), with positive B, corresponds to real eigenvalues of the matrix A. The zeros a_i of the delta function contribute to the integral only if they are real and positive. There is only one real, positive a_i, provided B ≤ S²/4. Then the variable u = 4B/S² fulfils 0 ≤ u ≤ 1 and the f-zero reads a_1 = (S/2)√(1 − u). The weight of the delta function contribution is given by the inverse of the derivative of f, which is |f′(a_1)|⁻¹ = (2√(1 − u))⁻¹. The first delta-function in G(B) then leads to G_{+re}(B) = exp(−(S²/(4σ_0²))(1 − u))/(2√(1 − u)). The label (+re) indicates that the term describes the case +B and real (re) eigenvalues. Transforming the variable B → u, this G_{+re}(B) contributes the following term to the level spacing distribution:

  P_{+re}(S) = (S²/(4 σ_0 σ_1 σ_2 π^{3/2})) ∫_0^1 (du/√(1 − u)) K_0(S² u/(2 σ_1 σ_2)) exp(−(S²/(4σ_0²))(1 − u)).   (20)

This integral has been considered already in ref. [7] and leads to quadratic level repulsion with a logarithmic correction, P(S) ∝ S² ln S⁻¹; see also section 3.2. It will turn out that this contribution (20) is subdominant relative to the other two integrals for P(S), which in contrast to (20) will lead to the repulsion exponent ρ = 1, cf. again section 3.2.

We now calculate the G(B)-contributions coming from the second delta function. They are labelled by a (−) sign (since −B enters) and correspond to real as well as to imaginary eigenvalues of the matrix A, depending on the size of B. They are labelled therefore by (re) if a² − B ≥ 0 and by (im) if a² − B < 0. The relevant (positive) zeros a_i of the argument of the delta function are a_1 = (S/2)√(1 + u) for all positive B, i.e., u = 4B/S² ≥ 0, in the case (re), and a_1 = (S/2)√(u − 1) with B ≥ S²/4, i.e., u = 4B/S² ≥ 1, in the case (im). The weights |f′(a_1)|⁻¹ of the delta function contributions are obtained from |f′(a_1)| = 2√(1 + u) for the case (re) and from |f′(a_1)| = 2√(u − 1) for the case (im). These formulae lead to the following two contributions to the level spacing distribution P(S):

  P_{−re}(S) = (S²/(4 σ_0 σ_1 σ_2 π^{3/2})) ∫_0^∞ (du/√(1 + u)) K_0(S² u/(2 σ_1 σ_2)) exp(−(S²/(4σ_0²))(1 + u))   (21)

and

  P_{−im}(S) = (S²/(4 σ_0 σ_1 σ_2 π^{3/2})) ∫_1^∞ (du/√(u − 1)) K_0(S² u/(2 σ_1 σ_2)) exp(−(S²/(4σ_0²))(u − 1)).   (22)

These three integrals (20), (21), and (22) can be summed up to give the complete level spacing distribution P(S). To do this one introduces variables y such that in all three cases the exponential is exp(−y) with y ≥ 0. In the case (−im) one in addition substitutes y → −y. The real eigenvalues, represented by equations (20) and (21), give y-integrations from 0 to S²/(4σ_0²) and from S²/(4σ_0²) to ∞. The imaginary eigenvalues lead to an integral from 0 to ∞ or, equivalently, from −∞ to 0. Respecting the always positive argument of K_0 one evaluates for the complete level spacing distribution function in closed form

  P(S) = (S/(2 σ_1 σ_2 π^{3/2})) ∫_{−∞}^{∞} (dy/√|y|) K_0((2σ_0²/(σ_1 σ_2)) |y − S²/(4σ_0²)|) exp(−|y|).   (23)

This formula has been reported to us independently by Professor H.-J. Sommers [18]. Because the integral converges also for S = 0, we can conclude from this formula that in leading order in S we have linear level repulsion due to the explicit factor S, namely

  P(S) = S · (1/(σ_1 σ_2 π^{3/2})) ∫_0^∞ (dy/√y) K_0((2σ_0²/(σ_1 σ_2)) y) exp(−y) + h.o.t.   (24)

The S-independent integral can be expressed in terms of the hypergeometric function F, as will be shown in equation (33) of section 3.2.

From equation (23) one might wish to work out also the higher order terms in S, stemming from the integrand. However, if at small S one formally expands the factor K_0 in (23) in terms of an S-series, one obtains contributions to the integrand which are no longer integrable. Therefore the small-S behaviour of the y- or u-integrals is highly nontrivial. This agrees with our earlier observation in section 2.2. We have to analyse that in detail by studying the individual integrals P_{+re}(S), P_{−re}(S), and P_{−im}(S) of equations (20), (21), and (22), respectively.

3.2 Level spacing distribution P(S): details of the small-S behaviour

As we have seen in sections 2.2 and 3.1 for non-symmetric (and thus non-normal) matrices, the behaviour of P(S) at small S is very delicate and depends on the details of the distribution functions g_{a,b_i}(x) for the matrix elements.
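Before analysing the individual integrals, the Bessel-function reduction (17) used throughout can be verified numerically with elementary means. The sketch below is our illustration: it evaluates the v-integral for β = B/σ_1² = γ = B/σ_2² = 1 by trapezoidal quadrature after the substitution v = e^t, and compares with the tabulated value K_0(2) ≈ 0.11389.

```python
import math

def v_integral(beta, gamma, n=4000, t_lim=12.0):
    """Left-hand side of (17): int_0^inf dv/(2v) exp(-beta*v - gamma/v),
    evaluated by the trapezoidal rule after substituting v = exp(t)."""
    h = 2.0 * t_lim / n
    total = 0.0
    for k in range(n + 1):
        t = -t_lim + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * 0.5 * math.exp(-beta * math.exp(t) - gamma * math.exp(-t))
    return total * h

# beta = B/sigma_1^2, gamma = B/sigma_2^2; for beta = gamma = 1 the right-hand
# side of (17) is K_0(2*sqrt(beta*gamma)) = K_0(2) ~ 0.11389
val = v_integral(1.0, 1.0)
print(round(val, 5))
```

Since the right-hand side of (17) depends only on the product βγ, the routine must give the same value for any (β, γ) with fixed βγ, which is a convenient internal consistency check.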
In order to achieve understanding we go back to equation (20) and make the substitutions 1 − u → u′ and then S² u′/(2σ_1 σ_2) → u, yielding

  P_{+re}(S) = (S/(2 σ_0 √(2 σ_1 σ_2) π^{3/2})) ∫_0^ε (du/√u) K_0(ε − u) e^{−Au}.   (25)

Here we have introduced the notations

  ε = S²/(2 σ_1 σ_2),  A = σ_1 σ_2/(2 σ_0²).   (26)

If ε is very small, ε ≪ 1, we can use the approximation of K_0(z) at small z, which is ([17], No. 8.447,1 and 3, and 8.362,3) K_0(z) = −ln(z/2) − C + O(z²), where C is Euler's constant 0.5772… . Then the leading term in the above integral, also taking into account the Taylor expansion of e^{−Au}, can be written as

  Î_{+re}(ε) = ∫_0^ε (du/√u) (−ln(ε − u)),   (27)

which after a simple substitution ε − u → u can be found in [17] (No. 2.727,5). After the evaluation for small ε we get

  Î_{+re}(ε) ≈ 2√ε (−ln ε) ∝ S ln S⁻¹,   (28)

meaning that, including the explicit factor S in (25), we have

  P_{+re}(S) ∝ S² ln S⁻¹.   (29)

The level repulsion for the case of positive B (which also guarantees real eigenvalues of the matrix A) therefore is quadratic, ρ = 2, modified by a logarithmic correction. Positive B means that we consider Gaussian distributed non-normal, real 2D matrices which have only positive or only negative non-diagonal matrix elements b_1, b_2. It is under this constraint that they have the rather strong repulsion P(S) ∝ S² ln S⁻¹, as reported already in reference [7].

The other two contributions (21) and (22) are different in behaviour. Indeed each of them gives rise to weaker level repulsion, i.e., to a smaller exponent ρ. They both exhibit linear level repulsion P(S) ∝ S, thus ρ = 1. This dominates the stronger, quadratic repulsion for small S valid for P_{+re}(S), so that the total P(S) has linear level repulsion P(S) ∝ S, as obtained in (23) and (24). The analytical reason for the quite different S-dependence of P_{+re}(S), from equation (20), in contrast to that of P_{−re} and P_{−im}, according to equations (21) and (22), is the finite range of the B- or u-integration in (20) versus the infinite integration intervals in the cases of negative B, namely (21) and (22). Because of these infinite integration intervals one cannot Taylor expand the K_0-function in the cases of P_{−re} and P_{−im}. Instead, its complete functional form including its tails affects the convergence and thus the S-behaviour of the integrals in (21), (22).

In order to demonstrate this explicitly, we use the same substitution as before and arrive from equation (21) at the following expression,

  P_{−re}(S) = (S exp(−S²/(4σ_0²))/(2 σ_0 √(2 σ_1 σ_2) π^{3/2})) ∫_0^∞ (du/√(u + ε)) K_0(u) e^{−Au},   (30)

with the relevant integral

  Î_{−re}(S) = ∫_0^∞ (du/√(u + ε)) K_0(u) e^{−Au}.   (31)

The meaning of ε and A is the same as in equation (26). The integrand decays exponentially for large u, since also K_0(u) does so. Thus the upper limit of the integral converges safely, independent of ε. For small u the integrand can be estimated in a similar way as before. The result is that the integral converges to a finite value for ε → 0, in contrast to Î_{+re}(S), i.e., Î_{−re}(S → 0) is finite. Consequently, from the explicit factor S in equation (30), we conclude that the level repulsion is linear, ρ = 1. Note that in contrast to the finite integration range in the integral (27), which for small S behaves ∝ S ln S⁻¹, the infinite range integral (31) has a finite, nonzero limit for S → 0.

Analogous substitutions in u lead us to the expression

  P_{−im}(S) = (S/(2 σ_0 √(2 σ_1 σ_2) π^{3/2})) ∫_0^∞ (du/√u) K_0(ε + u) e^{−Au}.   (32)

ε and A are defined in equations (26). As before, the integral has a finite value, which does not go to zero with ε ∝ S² → 0, and thus the level repulsion of the contribution P_{−im}, according to equations (22) and (32), is again linear, ρ = 1. In fact, the integral in (32) at ε = 0 can be calculated according to [17] (6.621,3) as

  ∫_0^∞ (du/√u) K_0(u) e^{−Au} = (π^{3/2}/√(1 + A)) F(1/2, 1/2; 1; (A − 1)/(A + 1)),   (33)

where F is the hypergeometric function.

The conclusion is that the level repulsion of the complete level spacing distribution function P(S) is linear, ρ = 1, due to the contributions P_{−re}(S) and P_{−im}(S), whereas the contribution P_{+re}(S) alone has the stronger, logarithmically corrected quadratic repulsion, as we have already found in [7] for the case of Gaussian random matrices with only positive or only negative nondiagonal elements. Note that the criterion for the different small-S behaviour is whether the product b_1 b_2 = B is always positive, B ≥ 0, or whether there are also B < 0 contributions. The latter lead to linear repulsion (∝ S), while the former ones with only a positive product b_1 b_2 lead to the repulsion behaviour ∝ S² ln S⁻¹.

4 Symmetric 2D real random matrices with general element distributions

In this section we treat 2D real random matrices which are symmetric, i.e., b_1 = b_2 ≡ b and thus B = b² ≥ 0. Such matrices are normal, i.e., the matrix commutes with its adjoint. The generalization here is that we allow for a broad variety of matrix element distribution functions g_{a,b}. The following classes of matrix element distributions are considered: (1) Gaussian distribution revisited, (2) box (uniform) distribution, (3) Cauchy-Lorentz distribution, (4) exponential distribution, and (5) singular distribution (power law approaching zero value, multiplied by an exponential tail). In all these cases we shall start with equation (11) as a useful integral representation for P(S) in terms of g_{a,b}.

4.1 Gaussian distribution revisited

This case of real, Gaussian distributed 2D matrices with possibly different widths of the diagonal and the non-diagonal matrix element statistics has been treated recently in reference [7]. We briefly repeat it here for the sake of completeness. Using the normalized Gaussians defined in (14) we immediately get

  P(S) = (S/(2 σ_a σ_b)) e^{−(S²/8)(σ_a⁻² + σ_b⁻²)} I_0((S²/8)(σ_a⁻² − σ_b⁻²)),   (34)

where I_0(z) is the modified Bessel function of the first kind and zero order. According to [17] (No. 8.447,1) its small-z expansion is I_0(z) = 1 + z²/4 + O(z⁴). Thus I_0(0) = 1 and the level repulsion is linear, ρ = 1. The details for the general case of different widths of the a and b statistics, σ_a ≠ σ_b, have been analyzed and discussed in [7]. We mention that in the case of equal statistics of all matrix elements, σ_a = σ_b = σ, we get, of course, the well known 2D GOE result

  P(S) = (S/(2σ²)) e^{−S²/(4σ²)}.   (35)

After normalizing the first moment to unity, ⟨S⟩ = 1, leading to σ = 1/√π, the level spacing distribution P(S) becomes the Wigner distribution P_Wigner(S) = (π/2) S exp(−π S²/4). Let us now consider very different matrix element distributions g_{a,b}.
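Before moving on, the GOE special case (35) lends itself to a simple Monte Carlo cross-check (an illustrative sketch added here, not taken from [7]): with σ = 1/√π the sampled spacings should follow the Wigner distribution with unit mean.

```python
import math, random

def wigner(s):
    # 2D GOE level spacing density, normalized to <S> = 1
    return 0.5 * math.pi * s * math.exp(-0.25 * math.pi * s * s)

sigma = 1.0 / math.sqrt(math.pi)   # fixes the mean spacing <S> = 1
random.seed(2)
n = 200_000
samples = []
for _ in range(n):
    # density exp(-x^2/sigma^2)/(sigma*sqrt(pi))  <=>  std dev sigma/sqrt(2)
    a = random.gauss(0.0, sigma / math.sqrt(2.0))
    b = random.gauss(0.0, sigma / math.sqrt(2.0))
    samples.append(2.0 * math.sqrt(a * a + b * b))

mean = sum(samples) / n
print(round(mean, 2))  # close to 1.0

# compare the histogram with the Wigner density at a few points
width = 0.25
for k in (1, 3, 6):
    centre = (k + 0.5) * width
    est = sum(1 for s in samples if k * width <= s < (k + 1) * width) / (n * width)
    assert abs(est - wigner(centre)) < 0.03
```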
We start bystudying the following uniform distributions for the matrix elements a and b , g a ( a ) = 12 a , if | a | ≤ a , , (36) g b ( b ) = 12 b , if | b | ≤ b , . (37)Thus the probability density product g a ( a ) g b ( b ) is constant, equal to (4 a b ) − , insidethe (centrally located) rectangle with sides 2 a × b , and therefore for S smallerthan 2 min { a , b } the level spacing distribution P ( S ) can be calculated exactly. Itis equal to P ( S ) = πS a b , if S ≤ { a , b } . (38)Again the level repulsion is linear. This is consistent with our finding in section 2.2that whenever g a,b (0) = const = 0 there is generically linear level repulsion ρ = 1.From this geometrical picture it is also clear that P ( S ) is zero for S ≥ q a + b .For S in between P ( S ) varies continuously.Indeed, we can calculate P ( S ) exactly for all S , using equation (11). We observe that g a and g b enter this expression symmetrically. Therefore without loss of generalitywe assume that b ≤ a , i.e. min { a , b } = b . Then we have to consider two moreintervals, namely (i) 2 b ≤ S ≤ a and (ii) 2 a ≤ S ≤ q a + b .In the first case (i) the circle of radius S/ ϕ , where S sin ϕ = 2 b .Therefore, the total length of the ϕ -interval contributing to the integral in (11) isjust 4 ϕ , and consequently P ( S ) = S a b arcsin b S .In the second case (ii) the circle of radius S/ ϕ -interval where we get thecontribution to the integral. But all four angles are the same due to the doublereflection symmetry of the rectangle and circle. The larger angle ϕ between the13olar ray and the abscissa is geometrically determined by S sin ϕ = 2 b , and thesmaller one by S cos ϕ = 2 a . 
Thus for each pair the length of the ϕ -interval isequal to ϕ − ϕ , and since there are four such intervals, the total length of theinterval contributing to the integral is 4( ϕ − ϕ ).Putting all together we obtain the following exact result for the level spacing distri-bution P ( S ) in the case of the uniform (box) distributions g a,b , P ( S ) = πS a b , if S ≤ b ≤ a , S a b arcsin b S , if 2 b ≤ S ≤ a , S a b (cid:16) arcsin b S − arccos a S (cid:17) , if 2 a ≤ S ≤ q a + b , , if S ≥ q a + b . (39)If instead of b ≤ a one has b ≥ a , simply interchange a and b in the aboveformulae. The normalized probability densities for the matrix elements are defined by g a ( a ) = 1 πa (1 + a a ) , g a ( b ) = 1 πb (1 + b b ) . (40)From equation (11) and using plane polar coordinates we obtain P ( S ) = S π a b Z π dϕ (1 + S a cos ϕ )(1 + S b sin ϕ ) . (41)The integral for S → π , so that at small S we have P ( S ) ≈ S/ (2 πa b ) inaccordance with equation (12). But the integral (41) can also be done exactly. Theresult is P ( S ) = S πa b · α √ β + β √ α ( α + β + α β ) √ α √ β , (42)where α = S / (4 a ) and β = S / (4 b ). The asymptotic behaviour of P ( S ) atlarge S , i. e., α ≫ and β ≫
1, is an inverse quadratic power law,

P(S) ≈ 4(a_a + a_b)/(πS²). (43)

If a_b = a_a = a_0, the complete formula for all level distances S reads

P(S) = S/(πa_0²(2 + α)√(1 + α)), with α = S²/(4a_0²). (44)

This expression for the level spacing statistics evidently mirrors the Cauchy-Lorentz form of the matrix element statistics (g_b = g_a) in the P(S)-statistics. It is interesting to note that P(S) in (42) has a divergent (infinite) first moment, as is clearly seen from the asymptotics (43). A generalized power law statistics of the type g_a(a) = C_a/(1 + (a/a_0)^q), with q = 4, 6, ..., however, has a finite first moment ⟨S⟩ < ∞. It will be treated elsewhere.

Our normalized distributions in this subsection are chosen as

g_a(a) = C_a |a|^(−μ_a) e^(−λ_a|a|), g_b(b) = C_b |b|^(−μ_b) e^(−λ_b|b|), (45)

where the normalization constants are

C_i = λ_i^(1−μ_i)/(2Γ(1 − μ_i)). (46)

Here i = a, b, the exponents satisfy 0 ≤ μ_i < 1, and Γ(x) is the gamma function. These distribution functions are singular but integrable power laws for a, b → 0. Inserting g_{a,b} from (45) we get (note that S is positive only, S ≥ 0)

P(S) = C_a C_b S (S/2)^(−(μ_a+μ_b)) ∫₀^{π/2} dφ exp(−(S/2)(λ_a cos φ + λ_b sin φ)) cos^(−μ_a)φ sin^(−μ_b)φ. (47)

We did not succeed in calculating this integral analytically in closed form. However, one can evaluate it for small argument S, i.e., S → 0, where the exponential can be approximated by 1 (equivalent to λ_a, λ_b → 0, i.e., no tail effects in this small S range). The integral then is

∫₀^{π/2} dφ cos^(−μ_a)φ sin^(−μ_b)φ = Γ((1−μ_a)/2) Γ((1−μ_b)/2) / (2Γ(1 − (μ_a+μ_b)/2)). (48)

From this we get the following level repulsion law, now being a fractional exponent power law,

P(S) = C_a C_b S (S/2)^(−(μ_a+μ_b)) Γ((1−μ_a)/2) Γ((1−μ_b)/2) / (2Γ(1 − (μ_a+μ_b)/2)). (49)

The power law distribution for the matrix elements leads to a corresponding power law for the level spacing distribution. The power law exponents μ_a and μ_b of the matrix element distribution functions g_{a,b} immediately transform into the level repulsion exponent. More precisely, the level repulsion exponent is ρ = 1 − μ_a − μ_b. We emphasize that P(S) at S = 0 is integrable, if the matrix element distributions g_a(a) and g_b(b) are integrable at a = 0 and b = 0, respectively.

The physical interpretation of this repulsion exponent ρ = 1 − μ_a − μ_b comprises two different possibilities, depending on the size of the singularity exponents μ_a and μ_b of the distribution functions g_a, g_b for the matrix elements. If the singularities are strong, more precisely, if μ_a + μ_b > 1, the repulsion exponent ρ is negative. This means that due to the rather strong sparsing of the matrix there is not a repulsion but, instead, an enhancement of the level distance S = 0; the zero spacings are emphasized. If the singularities are weak, μ_a + μ_b < 1, the phenomenon of level repulsion remains, ρ > 0, despite the singularities of the matrix element distributions at a = 0 and b = 0.

The diagonal elements ±a and the non-diagonal elements b determine the level distance with equal weight, since S = 2√(a² + b²). It is for this reason that it is just the sum of the singularity exponents which determines ρ, giving equal weight to the singularities of the diagonal and non-diagonal elements in the repulsion. There are two typical limiting cases. (i) μ_a = μ_b ≡ μ, both singularities are of equal strength; or (ii) μ_a ≡ μ ≠ 0 and μ_b = 0, or vice versa: only one of the two matrix element distributions is singular while the other one remains regular, so only the non-diagonal part is sparsed and the diagonal is regular, or the other way round. In the first case (i), if μ < 1/2 there is still level repulsion, ρ > 0, while for μ > 1/2 there is an enhancement of levels at S = 0. In the second case (ii) there will always be level repulsion despite the singularity at zero for either the non-diagonal or the diagonal distribution, since ρ = 1 − μ > 0, (μ ∈ [0, 1)).

These results are interesting in the context of quantum chaos of nearly integrable (KAM type) systems. As has been observed in a variety of different systems of mixed type, at small energies one finds the so-called fractional exponent power law level repulsion, well described by the Brody distribution rather than by the Berry-Robnik distribution [19]-[24], which in turn has been clearly demonstrated to apply at sufficiently large energies, i.e., at sufficiently small effective Planck constant. The observed deviation from the Berry-Robnik behaviour is due to the localization and tunneling effects, and is a subject of intense current research [25]. Phenomenologically it has been discovered in [8], further developed in [9], [10], and the connection with sparsed matrices was established in [11]. Quite generally, the matrix representation of a Hamilton operator of a nearly integrable (KAM type) system in the basis of the integrable part results in a sparsed banded matrix with nonzero diagonal elements [11]. Such a sparsed matrix is precisely characterised by the fact that many matrix elements are zero. In other words, the probability distribution function of the nondiagonal matrix elements g_b(b) is singular at zero value of b, in a manner described by equation (45), while the diagonal matrix elements have a regular distribution function, which is precisely the case (ii) discussed above. Our 2D random matrix theory with such singular matrix element distribution functions therefore predicts qualitatively a fractional exponent power law level repulsion, the phenomenon observed in the above mentioned works [8]-[11]. Thus we see that the study of random matrices with other than the invariant ensembles (GOE and GUE) is very important and connects to new physics. We leave this direction of research for further studies in the near future.

The large S behaviour of P(S) in this case is obviously dominated by the exponentials.
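The predicted fractional exponent ρ = 1 − μ_a − μ_b is easy to check numerically. The sketch below is our illustration, not part of the paper: it samples a and b from the singular distributions of equation (45) (the modulus of such a variable is gamma-distributed with shape 1 − μ), builds the spacings S = 2√(a² + b²), and fits the small-S slope of the histogram. NumPy and the function names are our assumptions.

```python
import numpy as np

def sample_singular(mu, lam, n, rng):
    # g(x) = C |x|^(-mu) e^(-lam|x|): the modulus |x| is Gamma-distributed
    # with shape 1-mu and scale 1/lam; the sign is symmetric, hence the +/-1.
    magnitude = rng.gamma(shape=1.0 - mu, scale=1.0 / lam, size=n)
    sign = rng.choice((-1.0, 1.0), size=n)
    return sign * magnitude

def repulsion_exponent(mu_a, mu_b, lam=1.0, n=2_000_000, seed=0):
    rng = np.random.default_rng(seed)
    a = sample_singular(mu_a, lam, n, rng)
    b = sample_singular(mu_b, lam, n, rng)
    S = 2.0 * np.hypot(a, b)              # level spacing of the 2x2 symmetric matrix
    edges = np.logspace(-3.0, -1.0, 15)   # small-S window where P(S) ~ S^rho
    hist, _ = np.histogram(S, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = hist > 0
    slope, _ = np.polyfit(np.log(centers[keep]), np.log(hist[keep]), 1)
    return slope

print(repulsion_exponent(0.3, 0.4))   # prediction: rho = 1 - 0.3 - 0.4 = 0.3
print(repulsion_exponent(0.7, 0.5))   # prediction: rho = -0.2 (enhancement at S = 0)
```

Both regimes come out close to ρ = 1 − μ_a − μ_b: level repulsion for μ_a + μ_b < 1 and enhancement of small spacings for μ_a + μ_b > 1.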
Although an exact solution of the integral (47) is not known, we shall show by applying the mean value theorem to the integral that P(S) decays roughly exponentially at large S ≫ 1. This will be analysed in the next subsection, devoted to pure exponential matrix element distributions without power law singularities at the origin.
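Even without a closed form, the integral (47) is straightforward to evaluate by numerical quadrature, so the tail can be inspected directly. The following sketch is our illustration (SciPy assumed; the parameter values are arbitrary choices): the local logarithmic derivative d ln P/dS settles near −min(λ_a, λ_b)/2, an endpoint-dominance estimate consistent with a roughly exponential tail.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

MU_A, MU_B, LAM_A, LAM_B = 0.5, 0.5, 1.0, 2.0   # arbitrary illustrative parameters

def P(S):
    """Level spacing density of eq. (47), evaluated by numerical quadrature."""
    C_a = LAM_A**(1.0 - MU_A) / (2.0 * Gamma(1.0 - MU_A))   # normalization, eq. (46)
    C_b = LAM_B**(1.0 - MU_B) / (2.0 * Gamma(1.0 - MU_B))
    def integrand(phi):
        return (np.exp(-0.5 * S * (LAM_A * np.cos(phi) + LAM_B * np.sin(phi)))
                * np.cos(phi)**(-MU_A) * np.sin(phi)**(-MU_B))
    integral, _ = quad(integrand, 0.0, np.pi / 2.0, limit=200)
    return C_a * C_b * S * (S / 2.0)**(-(MU_A + MU_B)) * integral

# local logarithmic slope of the tail; here -min(LAM_A, LAM_B)/2 = -0.5
for S in (5.0, 10.0, 20.0):
    h = 1e-2
    slope = (np.log(P(S + h)) - np.log(P(S - h))) / (2.0 * h)
    print(S, slope)
```

The same routine reproduces the small-S law (49) and integrates to unity over S, which is a useful sanity check.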
Here we start with the distribution functions of the previous subsection, but without singularities, i.e., we assume μ_a = μ_b = 0, yielding the case of purely exponential distribution of matrix elements,

g_a(a) = C_a e^(−λ_a|a|), g_b(b) = C_b e^(−λ_b|b|), with C_i = λ_i/2. (50)

The level spacing distribution function in this case is

P(S) = C_a C_b S ∫₀^{π/2} dφ exp(−(S/2)(λ_a cos φ + λ_b sin φ)). (51)

We could not evaluate this analytically in closed form. For small S the linear level repulsion law with ρ = 1 is recovered, of course,

P(S) ≈ (π/2) C_a C_b S = (π/8) λ_a λ_b S. (52)

At large S we can estimate the integral (51) using the mean value theorem. First we write λ_a cos φ + λ_b sin φ = Â sin(φ + φ₀), with φ₀ = arctan(λ_a/λ_b) and Â = √(λ_a² + λ_b²). We then substitute the integration variable from φ to χ = φ + φ₀, which now runs from φ₀ to φ₀ + π/2. This transforms P(S) into

P(S) = C_a C_b S ∫_{φ₀}^{φ₀+π/2} dχ e^(−(S/2) Â sin χ). (53)

Since the integrand is continuous and bounded we can apply the mean value theorem, saying that there is a value χ₀(S) in the interval between φ₀ and φ₀ + π/2 such that

P(S) = (π/2) C_a C_b S e^(−(S/2) Â sin χ₀(S)). (54)

χ₀(S) is expected to be only weakly dependent on S. In this sense the tail of P(S) is roughly exponential.

A special, prototype case of a non-normal matrix A from equation (1) is the triangular matrix

A = (A_ij) = ( a  b ; 0  −a ). (55)

Such matrices have been studied in detail in reference [7]. For them the product of the two off-diagonal elements vanishes, so the level distance S = 2|a| does not depend on b. With the constraint g_b(b) = δ(b) in equation (5) the a- and b-integrations factorize, leading to the level distribution function

P(S) = ∫_{−∞}^{∞} da δ(S − 2|a|) g_a(a) = (1/2)[g_a(S/2) + g_a(−S/2)]. (56)

For even distributions g_a(a) we have

P(S) = g_a(S/2), S ≥ 0, normalized on [0 ≤ S < ∞). (57)

Apparently for triangular matrices the level spacing distribution is the immediate mirror of the diagonal element distribution g_a(a). If g_a(a) is Gaussian, there is no level repulsion at all; the repulsion exponent is ρ = 0 (see already [7]), though still the S are Gaussian and not Poissonian distributed as in the integrable case. If g_a(a) is Poissonian, so is P(S). And if g_a(a) is centered about some finite value ā, this also holds for the level spacing distribution. If the a-distribution does not include the origin, there even is formally infinite level repulsion. For general matrices with non-zero mean values ā and b̄, the level spacing distribution is somewhat more tricky.

Discussion and conclusions
We have studied the level spacing distribution of general 2D real random matrices. First general, non-normal random matrices with Gaussian distribution of the matrix elements have been considered, showing that in general the level repulsion is still linear, as for symmetric matrices. Under some constraints it is quadratic with logarithmic corrections. We have given an explicit formula for the level spacing distribution function in form of a threefold integral. Then we have considered symmetric matrices with general, other than Gaussian statistics for both the diagonal and the non-diagonal elements. We have shown that the level repulsion again is always linear provided the matrix element distribution functions are regular at zero value with finite, non-zero weight. We have explicitly considered the box-type (uniform), the Cauchy-Lorentz, the exponential times singular power law at zero, and the purely exponential matrix element distributions. Explicit closed form results for P(S) have been obtained, except for the singular times exponential tail distributions, although we could present a very good understanding of the overall behaviour of P(S) also in this case.

Our approach can obviously be extended to general 2D complex random matrices, first to Hermitian complex matrices, where similar general formulae like equation (11) can be obtained by using spherical coordinates. Indeed, it becomes obvious that we shall always have quadratic level repulsion as long as the matrix element distributions g(x) are regular at zero x, and a formula analogous to (12) will apply. Denoting the diagonal elements by ±a, and the off-diagonal elements (complex conjugates of each other) by b ± ic, we obtain

P(S) = S² ∫₀^{π/2} dθ ∫₀^{π/2} dφ cos θ g_a((S/2) sin θ) g_b((S/2) cos θ sin φ) g_c((S/2) cos θ cos φ), (58)

as claimed.
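As a quick numerical cross-check of this quadratic repulsion (our illustration, not from the paper), one can sample Gaussian a, b, c, form the Hermitian spacing S = 2√(a² + b² + c²), and fit the small-S slope of the spacing histogram:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4_000_000

# 2x2 Hermitian matrix with diagonal +/-a and off-diagonal elements b -/+ ic:
# its eigenvalue spacing is S = 2 sqrt(a^2 + b^2 + c^2)
a, b, c = rng.normal(size=(3, n))
S = 2.0 * np.sqrt(a**2 + b**2 + c**2)

edges = np.logspace(-1.0, -0.3, 12)          # small-S window
hist, _ = np.histogram(S, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
slope, _ = np.polyfit(np.log(centers), np.log(hist), 1)
print(slope)   # close to 2: quadratic level repulsion
```

With regular (here Gaussian) distributions for a, b and c the fitted exponent comes out close to 2, in line with equation (58).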
The details of further analysis will be published in a separate paper. We have also discussed the connection between the singular distributions of the matrix elements with a power law at zero value and the sparsed matrices which describe nearly integrable systems. In such systems one derives a fractional exponent power law level repulsion, well known phenomenologically but poorly understood theoretically. Our approach and results promise new advances in this important direction of research in quantum chaos of mixed type systems initiated in [26] and in general random matrix theory.

Acknowledgements

This work was supported by the Cooperation Program between the Universities of Marburg, Germany, and Maribor, Slovenia, by the Ministry of Higher Education, Science and Technology of the Republic of Slovenia, by the Nova Kreditna Banka Maribor and by TELEKOM Slovenije. We thank Professor Bruno Eckhardt for useful comments, and Professor Hans-Jürgen Sommers for communicating to us his result [18].
References

[1] Haake F 2001 Quantum Signatures of Chaos (Berlin: Springer)
[2] Stöckmann H-J 1999 Quantum Chaos - An Introduction (Cambridge: Cambridge University Press)
[3] Guhr T, Müller-Groeling A and Weidenmüller H A 1998 Phys. Rep. Nos. 4-6 189-428
[4] Mehta M L 1991 Random Matrices (Boston: Academic Press)
[5] Robnik M 1986 Lecture Notes in Physics
[6] J. Math. Phys.
[7] J. Phys. A: Math. Theor. L459-L466
[10] Prosen T and Robnik M 1994 J. Phys. A: Math. Gen.
[12] Matrix Theory - Selected Topics and Useful Results (Les Ulis: Les Editions de Physique)
[13] Trefethen L N, Trefethen A E, Reddy S C and Driscoll T A 1993 Science
[14] Rev. Mod. Phys.
[15] Nonlinear Phenomena in Complex Systems (Minsk)
[16] Ecology
[17] Gradshteyn I S and Ryzhik I M Table of Integrals, Series and Products ed Alan Jeffrey (San Diego: Academic Press)
[18] Sommers H-J, private communication, December 2006
[19] Berry M V and Robnik M 1984 J. Phys. A: Math. Gen.
[20] Nonlinear Phenomena in Complex Systems (Minsk) No. 1, 1-22
[21] Robnik M and Prosen T 1997 J. Phys. A: Math. Gen.
[22] J. Phys. A: Math. Gen.
[23] J. Phys. A: Math. Gen.
[24] J. Phys. A: Math. Gen.
[25] J. Phys. A: Math. Gen.
[26] J. Phys. A: Math. Gen.