Cavity and replica methods for the spectral density of sparse symmetric random matrices
SciPost Physics Lecture Notes Submission
V.A.R. Susca, P. Vivo, R. Kühn
King's College London, Department of Mathematics, Strand, London WC2R 2LS, United Kingdom
* [email protected]
February 2, 2021
Abstract
We review the problem of how to compute the spectral density of sparse symmetric random matrices, i.e. weighted adjacency matrices of undirected graphs. Starting from the Edwards-Jones formula, we illustrate the milestones of this line of research, including the pioneering work of Bray and Rodgers using replicas. We focus first on the cavity method, showing that it quickly provides the correct recursion equations both for single instances and at the ensemble level. We also describe an alternative replica solution that proves to be equivalent to the cavity method. Both the cavity and the replica derivations allow us to obtain the spectral density via the solution of an integral equation for an auxiliary probability density function. We show that this equation can be solved using a stochastic population dynamics algorithm, and we provide its implementation. In this formalism, the spectral density is naturally written in terms of a superposition of local contributions from nodes of given degree, whose role is thoroughly elucidated. This paper does not contain original material, but rather gives a pedagogical overview of the topic. It is indeed addressed to students and researchers who consider entering the field. Both the theoretical tools and the numerical algorithms are discussed in detail, highlighting conceptual subtleties and practical aspects.
Contents

E The action S_n in terms of π and π̂
F The Kesten-McKay distribution from a peaked π̂
G Trees have a symmetric spectrum
References
1 Introduction

The calculation of the average spectral density of eigenvalues of random matrices belonging to a certain ensemble has traditionally been one of the fundamental problems in Random Matrix Theory (RMT), ever since the application of RMT to the statistics of energy levels of heavy nuclei [1]. The spectral problem has retained its centrality in RMT, with diverse applications in physics [2], computer science [3], finance [4–6] and statistics [7, 8]. The most celebrated results about the density of states, such as the Wigner semicircle law [9] for Wigner matrices (including Gaussian ensembles) and the Marčenko-Pastur law [10] for covariance matrices, refer to "dense" matrix ensembles, i.e. those for which most of the matrix entries are non-zero. On the other hand, the spectral problem is very relevant also for "sparse" matrix models, i.e. when most of the entries are zero. Indeed, the spectral properties of (weighted) adjacency matrices of sparse graphs encode the structural and topological features of many complex systems [11, 12]. For random walks on graphs, the eigenvalue spectrum is directly connected to the relaxation time spectrum [13, 14]. Moreover, from the condensed matter point of view, sparsely connected matrix models provide a test ground for physical systems described by a Hamiltonian with finite-range interactions (see for instance [15]).

In this paper, we describe the various strategies to compute the average spectral density (also known as the density of states) for ensembles of sparse symmetric random matrices, i.e. weighted adjacency matrices of undirected graphs. Given an N × N random matrix J with eigenvalues {λ_i}_{i=1,...,N}, the average spectral density is defined as

ρ(λ) = ⟨ (1/N) Σ_{i=1}^N δ(λ − λ_i) ⟩_J ,   (1)

where the limit N → ∞ is understood and ⟨...⟩_J denotes the average over the matrix ensemble to which J belongs. The latter is also referred to as the "disorder" average.
For a given (large) N, ρ(λ) can be numerically obtained by first diagonalising a large number M of N × N matrices drawn from the ensemble, collecting all their N · M eigenvalues and organising them into a normalised histogram. Our analysis is rooted in the statistical mechanics of disordered systems, with the main technical tools being the cavity (Section 3) and replica methods (Sections 4 and 5). Our analysis will follow the historical developments that led to the solution of the problem. We start from the celebrated Edwards-Jones formula [16], which is a key result linking the spectral problem to statistical mechanics. Indeed, the formula recasts the determination of the average spectral density (1) in terms of the average free energy ⟨log Z(λ)⟩_J of a disordered system with partition function Z(λ). Edwards and Jones were the first to use the replica method, extensively employed in spin-glass physics [17], in order to perform averages of this type in the context of random matrices.

Historically, the application of the Edwards-Jones recipe to sparse symmetric random matrices (in particular Erdős-Rényi adjacency matrices with the non-zero entries drawn from a Bernoulli distribution) was pioneered by Bray and Rodgers in [18] (and in a similar context in [19] and later on in [20]). However, in their formulation the evaluation of the average spectral density ρ(λ) relies on the solution of a very complicated integral equation. The same integral equation has been derived independently with a supersymmetric approach in [21] and later obtained in a rigorous manner in [22], thus confirming the exactness of the symmetry assumptions in [18]. The difficulties in dealing with this equation, even from a numerical point of view, stimulated the search for a variety of approximation schemes, such as large average connectivity expansions [18], the single defect approximation (SDA) [23] and the effective medium approximation (EMA) [24, 25].
Alongside approximation schemes, results from numerical diagonalisation such as in [26, 27] have been employed to investigate the spectral properties of sparse random matrices.

A different approach to the spectral problem of sparse symmetric random matrices was proposed in [28]. There, the order parameters of the replica calculation are represented as uncountably infinite superpositions of Gaussians with random variances, as suggested by earlier solutions of models for finitely coordinated harmonically coupled systems [29]. The intractable Bray-Rodgers integral equation is then replaced by non-linear fixed-point equations for probability density functions, which are solved by a stochastic population dynamics algorithm. We will review both approaches in Sections 4 and 5 below.

Almost in parallel to [28], the cavity method [30] started to be employed for the determination of the spectral density by Rogers and collaborators in [31]. The cavity method, also known as belief propagation, represents a much simpler alternative to replicas, and its exactness for locally tree-like graphs with finite mean degree c was proved in [32]. In [31], building on the Edwards-Jones setup, the authors used the cavity method to compute the spectrum of large single instances of sparse symmetric random matrices. The applicability of the cavity method is ensured by the tree-like structure of the underlying graphs. The ensemble average spectral density (1) is then obtained building on the single-instance results, circumventing the calculation of the average "free energy" ⟨log Z(λ)⟩_J altogether. The cavity treatment produces non-linear fixed-point integral equations that are completely equivalent to those obtained in [28] within the replica framework. It has been shown in [33] that both the cavity and the replica method yield the same results concerning the spectral density of graphs.
Both approaches in [28] and [31] recover known results, such as the Kesten-McKay law for the spectra of random regular graphs [34, 35], and the Marčenko-Pastur law and Wigner's semicircle law, respectively for sparse covariance matrices and for Erdős-Rényi adjacency matrices in the large mean degree limit. Moreover, both methods allow one to characterise the spectral density of sparse Markov matrices [36, 37] and of graphs with modular [38] and small-world [39] structure and with topological constraints [40]. The localisation transition for sparse symmetric matrices was studied in [41]. In a similar manner, both methods have also been employed to study the statistics of the top and second largest eigenpair of sparse symmetric random matrices [42–44]. The two methods have also been extended to the case of sparse non-Hermitian matrices [45–49]. Particular attention has been devoted to the spectral properties of the Hashimoto non-backtracking operator on random graphs [50, 51]. Both cavity and replica methods have recently been used to characterise the dense (c → ∞) limit of the spectral density of adjacency matrices of undirected graphs within the configuration model, revealing that the behaviour of the limiting spectral density is not universal, but actually depends on degree fluctuations. Indeed, the expected Wigner semicircle is recovered when the degree distribution tightly concentrates around the mean degree c as c → ∞, whereas non-trivial deviations from the semicircle are found when degree fluctuations are stronger [52]. Moreover, thanks to the extension of the replica method to the analysis of sparse loopy random graphs, the influence of loops on the spectra of sparse matrices has lately been investigated in [53, 54]. There is also a recent cavity analysis of the problem of loopy graphs by Newman and collaborators in [55].

In this paper, we will retrace the main milestones in the determination of the spectral density of sparse symmetric random matrices.
We start with the analysis of the Edwards-Jones formula in Section 2, providing its proof in Section 2.1 and discussing how to deal with the average ⟨log Z(λ)⟩_J in Section 2.2. For clarity and simplicity, we will first illustrate the cavity approach in Section 3. We outline the cavity setup in Section 3.1, then we deal with the spectrum of large single instances of sparse symmetric random matrices in Section 3.2. In Section 3.3 we show how the single-instance approach can be extended to the N → ∞ limit to recover the ensemble average spectral density. Besides, in Section 3.4 we evaluate the large-c limit of the average spectral density obtained within the cavity formalism, showing that it converges to the Wigner semicircle. We will then follow the historical development of the subject by documenting the Bray-Rodgers replica approach in Section 4. We will derive the Bray-Rodgers integral equation in Section 4.3, while in Section 4.4 we will obtain its large-c asymptotic expansion, showing that its leading order gives rise to the Wigner semicircle, as expected. In Section 5 we will deal with the alternative replica solution proposed in [28], showing in Section 5.1 that the solution obtained with this approach coincides with that found by the cavity treatment in Section 3.3. In Section 6, we outline the stochastic population dynamics algorithm employed to solve the non-linear fixed-point integral equations that are found within both the cavity and replica frameworks, respectively in Section 3.3 and Section 5.1.

2 The Edwards-Jones formula

Edwards and Jones in [16] provide a formula expressing the average spectral density (1) of N × N random matrices as

ρ(λ) = −(2/(πN)) lim_{ε→0⁺} Im ∂/∂λ ⟨log Z(λ)⟩_J ,   (2)

with

Z(λ) = ∫_{R^N} dv exp[ −(i/2) vᵀ(λ_ε 𝟙 − J) v ] ,   (3)

where again ⟨...⟩_J denotes the average over the matrix ensemble to which J belongs.
In (2), which is valid for any N, Im indicates the imaginary part and log is the branch of the complex logarithm for which log e^z = z. In (3), the symbol 𝟙 represents the N × N identity matrix, v is a vector in R^N, and the integral extends over R^N. Moreover, λ_ε = λ − iε, where ε is a positive parameter ensuring that the integral (3) is convergent, since the absolute value of the integrand has the leading behaviour e^{−(ε/2) Σ_{i=1}^N v_i²}. The integral (3) can be interpreted as the canonical partition function of the Gibbs-Boltzmann distribution of N harmonically coupled particles with an imaginary (inverse) temperature, viz.

P_J(v) = (1/Z(λ)) exp[−i H(v)] ,   (4)

with a complex "Hamiltonian"

H(v) = (1/2) vᵀ(λ_ε 𝟙 − J) v .   (5)

In this framework, the computation of (1) requires evaluating ⟨log Z(λ)⟩_J, which is the canonical free energy of the associated N-particle system, averaged over the random couplings.

2.1 Proof of the Edwards-Jones formula

The starting point is the definition (1). Looking for a representation of the Dirac delta, one considers the Sokhotski-Plemelj identity (see A for a proof), viz.

1/(x ± iε) → Pr(1/x) ∓ iπδ(x)   as ε → 0⁺ ,   (6)

where x ∈ R and Pr denotes the Cauchy principal value. The imaginary part of the identity, namely

δ(x) = (1/π) lim_{ε→0⁺} Im 1/(x − iε) ,   (7)

provides the desired representation. Therefore, inserting (6) into (1) results in

ρ(λ) = (1/(πN)) lim_{ε→0⁺} Im ⟨ Σ_{i=1}^N 1/(λ − λ_i − iε) ⟩_J = −(1/(πN)) lim_{ε→0⁺} Im ⟨ Σ_{i=1}^N 1/(λ_i + iε − λ) ⟩_J ,   (8)

where the minus sign has been made explicit. One would now express the ratio in the angle brackets as the derivative of the principal branch of the complex logarithm, denoted by Log.
Unlike other properties, its derivative behaves exactly like that of the real logarithm, therefore

Σ_{i=1}^N 1/(λ_i + iε − λ) = −∂/∂λ Σ_{i=1}^N Log(λ_i + iε − λ) ,   (9)

entailing for the average spectral density the formula

ρ(λ) = (1/(πN)) lim_{ε→0⁺} Im ∂/∂λ ⟨ Σ_{i=1}^N Log(λ_i + iε − λ) ⟩_J .   (10)

The sum of logarithms in (10) can be related to the partition function Z(λ) in (3) by exploiting the following identity [56, 57],

Z(λ) = ∫_{R^N} dv exp[ −(i/2) vᵀ(λ_ε 𝟙 − J) v ] = (2π)^{N/2} exp[ −(1/2) Σ_{i=1}^N Log(λ_i + iε − λ) + iπN/4 ] .   (11)

Caution is needed when taking the logarithm on both sides of (11), as in general Log(e^z) ≠ z (see B). Indeed, using the property (160) and taking the principal logarithm on both sides of (11), one would obtain

Σ_{i=1}^N Log(λ_i + iε − λ) = −2 Log Z(λ) + N Log(2π) + iπN/2 + 4πi ⌊(π − g(λ))/(2π)⌋ ,   (12)

where

g(λ) = −(1/2) Σ_{i=1}^N Arg(λ_i + iε − λ) + πN/4 ,   (13)

and ⌊...⌋ denotes the floor operation, i.e. ⌊x⌋ is the integer such that x − 1 < ⌊x⌋ ≤ x for x ∈ R. Note that this branch choice would make the r.h.s. not everywhere differentiable for λ ∈ R. Therefore, it is convenient to pick the branch of the complex logarithm such that log e^z = z, i.e. the one for which the extra (non-differentiable) phase term in (12) is killed. This choice yields

Σ_{i=1}^N Log(λ_i + iε − λ) = −2 log Z(λ) + N log(2π) + iπN/2 ,   (14)

where the constant terms on the r.h.s. depend on N, but not on λ. Taking the derivative, one eventually finds

∂/∂λ Σ_{i=1}^N Log(λ_i + iε − λ) = −2 ∂/∂λ log Z(λ) ,   (15)

and inserting (15) into (10), the Edwards-Jones formula (2) is recovered.

2.2 The average ⟨log Z(λ)⟩_J

In order to obtain the spectral density, the average ⟨log Z(λ)⟩_J must be computed.
It explicitly reads

⟨log Z(λ)⟩_J = ⟨ log ∫_{R^N} Π_{i=1}^N dv_i exp[ −(i/2) vᵀ(λ_ε 𝟙 − J) v ] ⟩_J .

Figure 1: a tree-like graph, where the indices refer to the notation used in Section 3.1 to derive the cavity single-instance equations.

Figure 2: example of the decorrelation occurring among the nodes j₁, j₂ and j₃, neighbours of the node i, after the removal of i.

Eq. (26) defines a set of recursion equations for any pair of interacting nodes (i, j). The set of recursion equations (26) is solved exactly by a zero-mean Gaussian ansatz for the cavity marginals P_j^{(i)}(v_j). Indeed, assuming that

P_j^{(i)}(v_j) = √(ω_j^{(i)}/(2π)) exp( −(ω_j^{(i)}/2) v_j² ) ,   (27)

and performing the Gaussian integrals on the r.h.s. of (26), one gets

P_j^{(i)}(v_j) = (1/Z_j^{(i)}) exp[ −(1/2) ( iλ_ε + Σ_{ℓ∈∂j\i} J_{jℓ}²/ω_ℓ^{(j)} ) v_j² ] .   (28)

The comparison between the exponents of (27) and (28) entails

ω_j^{(i)} = iλ_ε + Σ_{ℓ∈∂j\i} J_{jℓ}²/ω_ℓ^{(j)} .   (29)

Therefore, the set of equations (26) translates into a set of self-consistency equations for the cavity inverse variances ω_j^{(i)}. Similarly, the Gaussian ansatz (27) can be inserted into the single-site marginal expression (25), yielding a Gaussian structure for P_i(v_i), viz.

P_i(v_i) = (1/Z_i) exp( −(ω_i/2) v_i² ) ,   (30)

with single-site inverse variances given by

ω_i = iλ_ε + Σ_{j∈∂i} J_ij²/ω_j^{(i)} .   (31)

Once the cavity inverse variances are determined as the solution of (29), the single-site inverse variances ω_i are found from (31). Since ⟨v_i²⟩ = 1/ω_i, the spectral density is readily obtained from (22) as

ρ_J(λ) = (1/(πN)) lim_{ε→0⁺} Σ_{i=1}^N Re[1/ω_i] = (1/(πN)) lim_{ε→0⁺} Σ_{i=1}^N Re[ω_i] / ( (Re[ω_i])² + (Im[ω_i])² ) .   (32)
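The recursions (29) and (31) can be iterated numerically on a single instance. The following is a minimal sketch of our own (the function name, initialisation and convergence criteria are illustrative choices, not taken from the source); it assumes a matrix with zero diagonal, iterates the cavity inverse variances on the directed edges, and then assembles (32):

```python
import numpy as np

def cavity_spectral_density(J, lam, eps=1e-3, max_iter=2000, tol=1e-12):
    """Single-instance cavity estimate of rho_J(lam), Eqs. (29), (31), (32).

    omega[(i, j)] stores the cavity inverse variance omega_j^{(i)}, i.e. the
    inverse variance of the marginal of v_j when node i is removed.
    Assumes J is real symmetric with zero diagonal.
    """
    N = J.shape[0]
    nbrs = [np.flatnonzero(J[i]) for i in range(N)]
    edges = [(i, j) for i in range(N) for j in nbrs[i]]
    lam_eps = lam - 1j * eps
    omega = {e: 1.0 + 0.0j for e in edges}
    for _ in range(max_iter):
        # one sweep of the recursion (29) over all directed edges
        new = {(i, j): 1j * lam_eps
               + sum(J[j, l] ** 2 / omega[(j, l)] for l in nbrs[j] if l != i)
               for (i, j) in edges}
        diff = max(abs(new[e] - omega[e]) for e in edges)
        omega = new
        if diff < tol:
            break
    # single-site inverse variances (31) and the density (32)
    w = [1j * lam_eps + sum(J[i, j] ** 2 / omega[(i, j)] for j in nbrs[i])
         for i in range(N)]
    return sum((1.0 / wi).real for wi in w) / (np.pi * N)
```

On a tree the fixed point is reached exactly after a number of sweeps of the order of the graph diameter, and the result coincides with the ε-smoothed sum of Lorentzians centred at the true eigenvalues.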
As a concluding remark, it can be noticed that the set of self-consistency equations (29) for the cavity inverse variances only depends on the squares of the matrix entries, thus entailing that the spectrum of the matrix J is equal to that of the matrix −J, and is therefore perfectly symmetric around λ = 0. This property holds exactly for trees, since every tree is a bipartite graph (see [58] or G for a simple proof of this property). This check further corroborates the fact that cavity equations are exact on trees, and only approximate on tree-like structures, where they are reliable as long as cycles are negligible. The set of cavity recursions (29) can be solved by a forward iteration algorithm. A working example of a code that allows one to determine the average spectral density on a single instance is available upon request.

3.3 The ensemble average spectral density

In this section we depart from [31] and show that the ensemble average of the spectral density (1) can be recovered from the single-instance spectral density (32) as obtained through the cavity method. Indeed, by invoking the law of large numbers in (32), in the large N limit one gets

ρ_J(λ) = (1/(πN)) lim_{ε→0⁺} Σ_{i=1}^N Re[1/ω_i]  →  ρ(λ) = (1/π) lim_{ε→0⁺} ∫ dω̃ π̃(ω̃) Re[1/ω̃]   as N → ∞ ,   (33)

where π̃(ω̃) is the pdf of the inverse variances ω_i taking values around ω̃. On the r.h.s. of Eq. (33) the subscript J has been dropped, as the quantity ρ(λ) characterises the ensemble of J, rather than a single matrix. Eq. (33) implicitly assumes that the spectral density enjoys the self-averaging property, meaning that a large single instance of the ensemble faithfully represents the average behaviour over many instances. The task now is to find the pdf of the inverse variances π̃(ω̃).
Recalling the single-instance relation (31) between the single-site inverse variances ω_i and the cavity inverse variances ω_j^{(i)}, the pdf π̃(ω̃) will be determined in terms of the probability density π(ω) of the ω_j^{(i)}. In order to find the pdf π(ω), one observes that the set of self-consistency equations (29) for the cavity inverse variances refers to the links of the underlying graph. In an infinitely large network, links can be distinguished from one another by the degree of the node they are pointing to. Therefore, considering a link (i, j) pointing to a node j of degree k, the value ω of the cavity inverse variance ω_j^{(i)} living on this link is determined by the set {ω_ℓ}_{ℓ=1}^{k−1} of the k − 1 cavity inverse variances ω_ℓ^{(j)} living on the edges connecting j with its neighbours ℓ ∈ ∂j\i. In an infinite system, these values can be regarded as k − 1 independent random variables, each drawn from the same pdf π(ω). The entries of J appearing in (29) are likewise replaced by a set {K_ℓ}_{ℓ=1}^{k−1} of k − 1 i.i.d. random variables, each distributed according to the bond weight pdf p_K(K). The distribution π(ω) is then obtained by averaging the contributions coming from every link w.r.t. the probability (k/c) p(k) of having a link pointing to a node of degree k. This reasoning leads to the self-consistency equation

π(ω) = Σ_{k=1}^∞ p(k) (k/c) ∫ {dπ}_{k−1} ⟨ δ( ω − ( iλ_ε + Σ_{ℓ=1}^{k−1} K_ℓ²/ω_ℓ ) ) ⟩_{{K}_{k−1}} ,   (34)

where {dπ}_{k−1} = Π_{ℓ=1}^{k−1} dω_ℓ π(ω_ℓ) and the angle brackets ⟨·⟩_{{K}_{k−1}} denote the average over the k − 1 i.i.d. bond weights K_ℓ. Eq. (34) is generally solved via a population dynamics algorithm (see Section 6). The same reasoning can be applied to find the pdf π̃(ω̃) of the single-site inverse variances. Recalling (31), it can be noticed that the ω_i are variables related to nodes, rather than links.
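Eq. (34) lends itself to a stochastic solution in which a large population of complex values represents π(ω) and is repeatedly updated by resampling the r.h.s. of (34). The following is a minimal sketch of our own (the function names, the choice of a Poissonian degree distribution with mean degree c, and bond weights K = ±1 are illustrative assumptions, not prescriptions from the source); the density is then estimated through (33), building node-level inverse variances as in (31):

```python
import numpy as np

rng = np.random.default_rng(1)

def population_dynamics(lam, c=4.0, eps=5e-3, pop_size=2000, n_updates=100000):
    """Stochastic solution of (34) for pi(omega), for graphs with Poisson(c)
    degree distribution and bond weights K = +/-1 with equal probability.

    Each update draws the degree k of the node a random edge points to
    (distributed as k p(k)/c, i.e. 1 + Poisson(c) for Poissonian p(k)),
    resamples k-1 population members and k-1 weights, and overwrites a
    random member with the r.h.s. of (34). Since Re(omega) >= eps > 0 is
    preserved by the update, the fractions K^2/omega stay well defined."""
    lam_eps = lam - 1j * eps
    pop = np.full(pop_size, 1.0 + 0.0j)
    for _ in range(n_updates):
        k = rng.poisson(c) + 1
        members = pop[rng.integers(0, pop_size, size=k - 1)]
        K = rng.choice([-1.0, 1.0], size=k - 1)
        pop[rng.integers(0, pop_size)] = 1j * lam_eps + np.sum(K**2 / members)
    return pop

def density(lam, pop, c=4.0, eps=5e-3, n_samples=20000):
    """Estimate rho(lam) via (33), sampling node-level inverse variances
    built as in (31) with degree k ~ Poisson(c)."""
    lam_eps = lam - 1j * eps
    total = 0.0
    for _ in range(n_samples):
        k = rng.poisson(c)
        members = pop[rng.integers(0, len(pop), size=k)]
        K = rng.choice([-1.0, 1.0], size=k)
        total += (1.0 / (1j * lam_eps + np.sum(K**2 / members))).real
    return total / (np.pi * n_samples)
```

The k = 0 case in the sampling step correctly reproduces the contribution of isolated nodes, ω̃ = iλ_ε, i.e. an ε-regularised delta at λ = 0.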
Since in the infinite size limit the nodes can be distinguished from one another by their degree, the pdf π̃(ω̃) can be written in terms of (34) as

π̃(ω̃) = Σ_{k=0}^∞ p(k) ∫ {dπ}_k ⟨ δ( ω̃ − ( iλ_ε + Σ_{ℓ=1}^k K_ℓ²/ω_ℓ ) ) ⟩_{{K}_k} ,   (35)

where p(k) is the degree distribution. Inserting (35) into (33) gives (at the ensemble level)

ρ(λ) = (1/π) lim_{ε→0⁺} Σ_{k=0}^∞ p(k) Re ∫ {dπ}_k ⟨ 1/( iλ_ε + Σ_{ℓ=1}^k K_ℓ²/ω_ℓ ) ⟩_{{K}_k}
     = (1/π) lim_{ε→0⁺} Σ_{k=0}^∞ p(k) ∫ {dπ}_k ⟨ ( Re[Σ_{ℓ=1}^k K_ℓ²/ω_ℓ] + ε ) / ( ( Re[Σ_{ℓ=1}^k K_ℓ²/ω_ℓ] + ε )² + ( λ + Im[Σ_{ℓ=1}^k K_ℓ²/ω_ℓ] )² ) ⟩_{{K}_k} .   (36)

Eq. (36) is the ensemble generalisation of the single-instance formula (32) and provides the ensemble average of the spectral density (1). The average spectral density as expressed in (36) can be interpreted as a weighted sum of local densities, each pertaining to sites of degree k. As shown by (36), the solution of the spectral problem is completely determined by the distribution π satisfying the self-consistency equation (34). Once π has been obtained, the average spectral density (36) is evaluated by sampling from a large population representing the distribution π(ω). Section 6 illustrates the algorithm that produces the solution of self-consistency equations of this type, as well as the details of the sampling procedure.

3.4 The c → ∞ limit in the cavity formalism

One can easily show that by taking the c → ∞ limit in Eq. (34), (35) and then eventually (33), the Wigner semicircle law is recovered. This was first shown in [31]. According
According It can be observed that in general the probability that a node of degree k is connected to a node of degree k (cid:48) is conditional, namely P ( k (cid:48) | k ). However, configuration model ensembles (including the Erd˝os-R´enyi ensemble)are cases of random uncorrelated networks, hence P ( k (cid:48) | k ) is independent of k . Therefore, P ( k (cid:48) | k ) reduces tothe probability that an edge points to a node of degree k (cid:48) , which can be defined as the ratio between thenumber of edges pointing to nodes of degree k (cid:48) , k (cid:48) p ( k (cid:48) ), and the number of edges pointing to nodes of anydegree, i.e. the sum (cid:80) k (cid:48) k (cid:48) p ( k (cid:48) ) = c . ciPost Physics Lecture Notes Submission to [52], we consider graphs in the configuration model having a degree distribution such that σ k (cid:104) k (cid:105) = (cid:104) k (cid:105)−(cid:104) k (cid:105) (cid:104) k (cid:105) → (cid:104) k (cid:105) = c → ∞ . Here, the symbol σ k denotes the standard deviation ofthe degree distribution p ( k ). For example, σ k = √ c for Erd˝os-R´enyi graphs.A meaningful large- c limit is obtained for Eq. (34) or equivalently (35) by rescaling eachinstance of the bond random weights as K ij = K ij / √ c . Therefore, considering (34) oneobtains π ( ω ) = ∞ (cid:88) k =1 p ( k ) kc (cid:90) { d π } k − (cid:42) δ (cid:32) ω − (cid:32) i λ ε + 1 c k − (cid:88) (cid:96) =1 K (cid:96) ω (cid:96) (cid:33)(cid:33)(cid:43) {K} k − , (37)For large c , the sum over the degrees in Eq. (37) receives contributions only from k = c ±O ( σ k ).As c → ∞ , the degree distribution p ( k ) becomes highly concentrated around k = c , thus theargument of the δ -function on the r.h.s of Eq. (37) can be evaluated using the Law of LargeNumber (LLN). Indeed, one finds that the r.h.s. 
of the condition

ω = iλ_ε + (1/c) Σ_{ℓ=1}^{c−1} 𝒦_ℓ²/ω_ℓ   (38)

does not fluctuate, hence ω itself is fixed and determined by the algebraic equation

ω̄_ε = iλ_ε + ⟨𝒦²⟩/ω̄_ε  ⇔  ω̄_ε = (1/2) ( iλ_ε ± √(4⟨𝒦²⟩ − λ_ε²) ) .   (39)

For large c, the quantity ⟨𝒦²⟩ = (1/c) Σ_{ℓ=1}^{c−1} 𝒦_ℓ² ≃ (1/c) Σ_{ℓ=1}^{c} 𝒦_ℓ² represents the second moment of the pdf of the rescaled bond weights. The very same reasoning can be applied to the argument of the δ-function in (35), entailing that in the limit c → ∞ the ω̃ are non-fluctuating as well, and take the same constant values given by the solutions of Eq. (39), viz.

π̃(ω̃) = δ(ω̃ − ω̄_ε)   as c → ∞ .   (40)

Therefore, inserting Eq. (39) and (40) in Eq. (33), one finds that in the limit c → ∞

ρ(λ) = (1/π) lim_{ε→0⁺} Re[1/ω̄_ε] = (1/π) lim_{ε→0⁺} Re[ 2/( iλ_ε ± √(4⟨𝒦²⟩ − λ_ε²) ) ] = (1/(2π⟨𝒦²⟩)) lim_{ε→0⁺} Re[ −iλ_ε ± √(4⟨𝒦²⟩ − λ_ε²) ] ,   (41)

which in the ε → 0⁺ limit eventually reduces to

ρ(λ) = (1/(2π⟨𝒦²⟩)) √(4⟨𝒦²⟩ − λ²)   for −2√⟨𝒦²⟩ < λ < 2√⟨𝒦²⟩ , and zero otherwise,   (42)

where the plus sign has been chosen to get a physical (non-negative) solution. The latter expression corresponds to the Wigner semicircle.

4 The Bray-Rodgers replica approach

In this section, we illustrate the replica calculation for the average spectral density, as originally proposed by Bray and Rodgers. Following [18], the goal is to evaluate the average spectral density (1) of an ensemble of N × N real symmetric sparse matrices. Using the notation of Section 3, given a matrix J, each matrix entry can be written as J_ij = c_ij K_ij, where the c_ij ∈ {0, 1} represent the pure adjacency matrix of the underlying graph and the K_ij encode the bond weights.
In particular, the matrix model considered in [18] is the Erdős-Rényi (ER) model: the probability of having a non-zero entry is given by p = c/N, where c represents the mean degree of the nodes of the underlying graph. For more details on the ER model, see C. The joint distribution of the matrix entries is given by

P({J_ij}) = Π_{i<j} [ (1 − c/N) δ(J_ij) + (c/N) p_K(J_ij) ] δ(J_ij − J_ji) .

Within the replica-symmetric ansatz (66), g(v⃗) depends on v⃗ only through its modulus v = |v⃗|, and one can introduce spherical coordinates in replica space for n > 2. The Jacobian of the coordinate transformation is J(v, φ_1, ..., φ_{n−1}) = v^{n−1} (sin φ_1)^{n−2} (sin φ_2)^{n−3} ··· sin φ_{n−2}. In these coordinates, the factor arising from the integration over the angular degrees of freedom,

I(φ⃗) = ∫_{[0,π]^{n−2}} dφ_1 dφ_2 ··· dφ_{n−2} ∫_0^{2π} dφ_{n−1} (sin φ_1)^{n−2} (sin φ_2)^{n−3} ··· sin φ_{n−2} ,   (68)

appears both in the numerator and the denominator of (67). Hence it cancels, yielding eventually

ρ(λ) = (1/π) lim_{ε→0⁺} Re lim_{n→0} [ ∫_0^∞ dv v^{n+1} exp( −(i/2)λ_ε v² + cg(v) ) ] / [ ∫_0^∞ dv v^{n−1} exp( −(i/2)λ_ε v² + cg(v) ) ] .   (69)

The integral in the denominator can be further simplified by integrating by parts, i.e.

∫_0^∞ dv v^{n−1} exp( −(i/2)λ_ε v² + cg(v) ) = (1/n) ∫_0^∞ dv v^n exp( −(i/2)λ_ε v² + cg(v) ) ( iλ_ε v − cg′(v) ) ,   (70)

since the boundary contribution vanishes. Therefore, taking the n → 0 limit,

ρ(λ) = (1/π) lim_{ε→0⁺} Re [ ∫_0^∞ dv v exp( −(i/2)λ_ε v² + cg(v) ) ] / [ ∫_0^∞ dv exp( −(i/2)λ_ε v² + cg(v) ) ( iλ_ε v − cg′(v) ) ] .   (71)

The expression (71) shows that the function g(v) is the only ingredient needed to compute the average spectral density. The search for a (replica-symmetric) solution of (62) will be addressed in Section 4.3.

Rotational invariance in replica space is a stronger condition than symmetry upon permutation of the replicas. An example of an ansatz satisfying permutational symmetry, but not rotational invariance, would be g(v⃗) = g(Σ_{a=1}^n v_a).
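As a quick numerical sanity check of (71) (our own, not part of [18]): for c = 0 the graph is empty, g ≡ 0, and (71) must reduce to the ε-regularised form of δ(λ), i.e. ρ_ε(λ) = (1/π) ε/(λ² + ε²). A direct quadrature sketch:

```python
import numpy as np

def rho_eq71(lam, c, g_vals, gp_vals, v_grid, eps=0.5):
    """Numerical evaluation of (71) for g(v) and g'(v) tabulated on a grid."""
    lam_eps = lam - 1j * eps
    h = v_grid[1] - v_grid[0]
    weight = np.exp(-0.5j * lam_eps * v_grid**2 + c * g_vals)
    num = np.sum(v_grid * weight) * h
    den = np.sum(weight * (1j * lam_eps * v_grid - c * gp_vals)) * h
    return (num / den).real / np.pi

# empty graph: c = 0, so g = g' = 0 on the whole grid
v = np.linspace(0.0, 12.0, 60001)
rho = rho_eq71(1.0, 0.0, np.zeros_like(v), np.zeros_like(v), v, eps=0.5)
```

Here the quadrature converges because ε > 0 damps the oscillatory integrand. The same routine, fed with a numerical solution of the Bray-Rodgers equation (84) for g, would give the density at finite c; as discussed below, obtaining such a solution is the hard part.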
At this stage, in order to get a replica-symmetric version of (62), the replica-symmetric ansatz (66) is applied and spherical coordinates are again introduced. Assuming that φ = φ_1 ∈ [0, π] is the angle between the vectors v⃗ and v⃗′, and that |v⃗′| = r, one finds

g(v) = [ ∫_0^∞ dr r^{n−1} exp( −(i/2)λ_ε r² + cg(r) ) ∫_0^π dφ (sin φ)^{n−2} f(vr cos φ) ] / [ ∫_0^∞ dr r^{n−1} exp( −(i/2)λ_ε r² + cg(r) ) ∫_0^π dφ (sin φ)^{n−2} ] ,   (72)

since all the remaining angular integrations cancel between numerator and denominator. We recall that f(z) = ⟨e^{iKz}⟩_K − 1. In order to proceed, a specific bond weight distribution must be introduced. The choice made by Rodgers and Bray in [18],

p_K(K) = (1/2) δ_{K,1} + (1/2) δ_{K,−1} ,   (73)

entails that f(z) = cos z − 1. This gives

g(v) = [ ∫_0^∞ dr r^{n−1} exp( −(i/2)λ_ε r² + cg(r) ) ∫_0^π dφ (sin φ)^{n−2} ( cos(vr cos φ) − 1 ) ] / [ ∫_0^∞ dr r^{n−1} exp( −(i/2)λ_ε r² + cg(r) ) ∫_0^π dφ (sin φ)^{n−2} ] .   (74)

The angular integral in the numerator of (74) yields (see formula 21 of Section 3.715 in [59])

I_num^ang = ∫_0^π dφ (sin φ)^{n−2} ( cos(vr cos φ) − 1 ) = √π Γ((n−1)/2) [ (1/2) (2/(vr))^{n/2} ( n J_{n/2}(vr) − vr J_{n/2+1}(vr) ) − 1/Γ(n/2) ] ,   (75)

where J_α(x) indicates the Bessel function of the first kind, defined by the series

J_α(x) = Σ_{m=0}^∞ [ (−1)^m / ( m! Γ(m+α+1) ) ] (x/2)^{2m+α} .   (76)

The angular integral in the denominator of (74) is independent of the radial one and gives

I_den^ang = ∫_0^π dφ (sin φ)^{n−2} = √π Γ((n−1)/2) / Γ(n/2) ,   (77)

so that the common factor √π Γ((n−1)/2), which is the divergent one for small n, cancels between numerator and denominator. Defining G(r) = exp( −(i/2)λ_ε r² + cg(r) ), the radial integral in the denominator can be integrated by parts, giving

I_den^rad = ∫_0^∞ dr r^{n−1} G(r) = −(1/n) ∫_0^∞ dr r^n G′(r) ,   (78)

as the boundary contribution vanishes.
Collecting the results, one finds

g(v) = −n Γ(n/2) [ ∫_0^∞ dr r^{n−1} G(r) ( (1/2)(2/(vr))^{n/2} ( n J_{n/2}(vr) − vr J_{n/2+1}(vr) ) − 1/Γ(n/2) ) ] / ∫_0^∞ dr r^n G′(r) .   (79)

Lastly, the n → 0 limit is taken, recalling the definition of G(r) and noticing that

1. Γ(n/2) ≈ 2/n as n → 0, hence lim_{n→0} ( −n Γ(n/2) ) = −2 ;   (80)

2. lim_{n→0} ∫_0^∞ dr r^n G′(r) = ∫_0^∞ dr G′(r) = G(∞) − G(0) = −e^{cg(0)} ;   (81)

3. lim_{n→0} r^{n−1} G(r) [ (1/2)(2/(vr))^{n/2} ( n J_{n/2}(vr) − vr J_{n/2+1}(vr) ) − 1/Γ(n/2) ] = −(v/2) J_1(vr) G(r) .   (82)

One thus obtains

g(v) = [ v ∫_0^∞ dr J_1(vr) exp( −(i/2)λ_ε r² + cg(r) ) ] / ( −e^{cg(0)} ) .   (83)

Given the structure of Eq. (83), it follows that g(0) = 0, therefore eventually

g(v) = −v ∫_0^∞ dr J_1(vr) exp( −(i/2)λ_ε r² + cg(r) ) ,   (84)

which is equivalent to Eq. (18) in [18]. This equation, also known as the Bray-Rodgers integral equation, fully defines the quantity g(v). Despite numerous attempts, an exact solution of (84) is still unavailable. Even numerically, the equation is hard to tackle, due to the exponential nonlinearity and the oscillatory Bessel term (see however [15]).

Taking into account Eq. (84), the average spectral density (71) can be further simplified as follows. Indeed, recalling that G(v) = exp( −(i/2)λ_ε v² + cg(v) ), the denominator in Eq. (71) can be expressed as

I_{ρ,den} = −∫_0^∞ dv G′(v) = G(0) − G(∞) = e^{cg(0)} = 1 ,   (85)

therefore entailing that the average spectral density reduces to

ρ(λ) = (1/π) lim_{ε→0⁺} Re ∫_0^∞ dv v exp( −(i/2)λ_ε v² + cg(v) ) .   (86)

4.4 The c → ∞ limit

It can be shown that Eq.
(84) can be solved perturbatively in powers of $1/c$ in the limit $c \to \infty$. In this framework, the average spectral density (86) is in turn expressed as a perturbative expansion, whose leading term is the Wigner semicircular law. We will provide a derivation inspired by [18, 20].

At first, the following changes of variables are introduced,

$$v = \sqrt{-\frac{2\mathrm{i}s}{\lambda_\varepsilon}}\ , \qquad (87)$$
$$r = \sqrt{-\frac{2\mathrm{i}u}{\lambda_\varepsilon}}\ , \qquad (88)$$
$$\lambda_\varepsilon = \sqrt{c}\, x_\delta\ , \qquad (89)$$

where $x_\delta = x - \mathrm{i}\delta$, with $\delta > 0$. Moreover, it is assumed that $g(v) = \frac{1}{c}\gamma(s)$. Eq. (84) is then rewritten in terms of the new variables. The differential in (84) transforms as

$$\mathrm{d}r = -\frac{\mathrm{i}}{\lambda_\varepsilon}\sqrt{-\frac{\lambda_\varepsilon}{2\mathrm{i}u}}\;\mathrm{d}u = -\frac{\mathrm{i}}{\sqrt{c}\,x_\delta}\sqrt{-\frac{\sqrt{c}\,x_\delta}{2\mathrm{i}u}}\;\mathrm{d}u\ , \qquad (90)$$

while the integration boundaries are unchanged. Indeed, from (88) one finds

$$u = \frac{\mathrm{i}\sqrt{c}\,x_\delta}{2}\,r^2 = \frac{\sqrt{c}\,r^2}{2}\left(\delta + \mathrm{i}x\right) = \frac{\sqrt{c}\,r^2}{2}\sqrt{x^2+\delta^2}\;\mathrm{e}^{\mathrm{i}\arctan(x/\delta)} = \begin{cases} 0 & r = 0\\ \infty & r \to \infty \end{cases}\ . \qquad (91)$$

In terms of the new variables, after some algebra Eq. (84) converts to

$$\gamma(s) = \frac{s}{x_\delta^2}\int_0^\infty \mathrm{d}u\,\exp\left[-u + \gamma(u)\right]\sum_{m=0}^\infty \frac{1}{m!\,(m+1)!}\left(\frac{su}{c\,x_\delta^2}\right)^m\ , \qquad (92)$$

corresponding to Eq. (23) in [18]. With these choices, it is natural to expand $\gamma(s)$ as a power series in $s/c$: high powers of $s$ are related to high powers of $1/c$. Therefore, one expects to find a solution of the form

$$\gamma(s) = c\sum_{r=1}^\infty b_r\left(\frac{s}{c}\right)^r\ , \qquad (93)$$

where in turn the coefficients $b_r$ are defined via the expansion

$$b_r = \sum_{\ell=0}^\infty \frac{b_r^{(\ell)}}{c^{\ell}}\ . \qquad (94)$$

Eq. (93) and (94) allow one to obtain all possible combinations of powers of the type $s^r/c^{r+\ell-1}$. The target is to determine the coefficients $b_r^{(\ell)}$ by solving Eq. (92) order by order. This would permit a complete representation of $\gamma(s)$ as a power series. The following steps will be followed for the solution.

• Express $\gamma$ via the expansions (93) and (94) in both the l.h.s. and the exponent of the r.h.s. of (92).

• Integrate w.r.t. $u$ term by term in the r.h.s. of (92).
• Equate the coefficients of the powers $s^r/c^{r+\ell-1}$.

The expansion of $\gamma(s)$ will be stopped at $O(1/c)$. Hence, the l.h.s. of (92) reads

$$\gamma(s) = b_1 s + b_2\frac{s^2}{c} + O\left(\frac{s^3}{c^2}\right) = b_1^{(0)} s + b_1^{(1)}\frac{s}{c} + b_2^{(0)}\frac{s^2}{c} + O\left(\frac{s^2}{c^2}\right)\ . \qquad (95)$$

Looking at the r.h.s., one notices that only the terms $m = 0$ and $m = 1$ of the sum in (92) are needed to match the powers $s^r/c^{r+\ell-1}$ appearing in (95). Indeed, one finds

$$\gamma(s) = \frac{s}{x_\delta^2}\int_0^\infty \mathrm{d}u\,\exp\left[-u+\gamma(u)\right] + \frac{s^2}{2c\,x_\delta^4}\int_0^\infty \mathrm{d}u\, u\,\exp\left[-u+\gamma(u)\right] + O\left(\frac{s^3}{c^2}\right)\ . \qquad (96)$$

The first and second integral on the r.h.s. of (96) will be denoted respectively by $I_A$ and $I_B$. Considering first the integral $I_A$, it has a pre-factor of order $O(s)$, therefore it may yield contributions of any order in powers of $1/c$, depending on the order at which we stop the expansion of $\gamma(u)$ in the exponent. Expanding $\gamma(u)$ as in (95) is sufficient to obtain all the $O(1)$ and $O(1/c)$ contributions. Indeed, we have

$$I_A = \int_0^\infty \mathrm{d}u\,\exp\left[-u+\gamma(u)\right] \simeq \int_0^\infty \mathrm{d}u\,\exp\left[\frac{b_2^{(0)}}{c}u^2 - \left(1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}\right)u\right] = \frac{\sqrt{\pi}}{2\sqrt{-b_2^{(0)}/c}}\;\mathrm{e}^{y^2}\,\mathrm{erfc}(y)\ , \qquad (97)$$

where

$$y = \frac{1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}}{2\sqrt{-b_2^{(0)}/c}}\ , \qquad (98)$$

and $\mathrm{Re}\left(b_2^{(0)}/c\right) < 0$. The function denoted by $\mathrm{erfc}(z)$ is the complementary error function, defined for real $z$ as

$$\mathrm{erfc}(z) = \frac{2}{\sqrt{\pi}}\int_z^\infty \mathrm{d}t\,\mathrm{e}^{-t^2}\ . \qquad (99)$$

In order to express (97) as a power series, an asymptotic expansion of $\mathrm{erfc}(z)$ is employed. For large real $z$ it is given by

$$\mathrm{erfc}(z) = \frac{\mathrm{e}^{-z^2}}{z\sqrt{\pi}}\left[1 + \sum_{n=1}^\infty (-1)^n\frac{(2n)!}{n!\,(2z)^{2n}}\right]\ , \qquad (100)$$

which diverges for any finite $z$. However, a few terms are sufficient to approximate $\mathrm{erfc}(z)$ well for any large finite $z$. Using (100) in (97), one obtains
$$I_A \approx \frac{1}{1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}} + \sum_{n=1}^\infty \frac{(2n)!}{n!}\left(\frac{b_2^{(0)}}{c}\right)^n\left(\frac{1}{1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}}\right)^{2n+1}\ . \qquad (101)$$

Eq. (101) can be further simplified recalling that $(1+\beta x)^{\alpha} \approx 1 + \alpha\beta x$ for $x \ll 1$, entailing that the first term of (101) becomes

$$\frac{1}{1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}} \approx \frac{1}{1-b_1^{(0)}} + \frac{b_1^{(1)}}{\left(1-b_1^{(0)}\right)^2 c}\ . \qquad (102)$$

Since the expansion is stopped at $O(1/c)$, only the $n = 1$ term in the sum in (101) needs to be considered, as the contributions for $n \geq 2$ are at most $O(1/c^2)$. Moreover, since the $n = 1$ term exhibits the $O(1/c)$ scaling explicitly, only the $O(1)$ contribution arising from the round brackets in the general term of the sum in (101) must be taken into account. Indeed, the $n = 1$ term in (101) becomes

$$\frac{2 b_2^{(0)}}{c}\left(\frac{1}{1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}}\right)^{3} \approx \frac{2 b_2^{(0)}}{\left(1-b_1^{(0)}\right)^3\, c} + O\left(\frac{1}{c^2}\right)\ . \qquad (103)$$

Collecting all the leading contributions to the integral $I_A$, one obtains

$$I_A = \int_0^\infty \mathrm{d}u\,\exp\left[-u+\gamma(u)\right] = \frac{1}{1-b_1^{(0)}} + \frac{1}{c}\left[\frac{b_1^{(1)}}{\left(1-b_1^{(0)}\right)^2} + \frac{2 b_2^{(0)}}{\left(1-b_1^{(0)}\right)^3}\right] + O\left(\frac{1}{c^2}\right)\ . \qquad (104)$$

Considering now the second integral on the r.h.s. of (96), denoted by $I_B$, its pre-factor already has the $O(1/c)$ scaling. Therefore, only the $O(1)$ term arising from the $1/c$ expansion of $I_B$ is needed. To this purpose, it is sufficient to consider the expansion of $\gamma(u)$ no further than $O(u/c)$. Indeed, one finds

$$I_B = \int_0^\infty \mathrm{d}u\, u\,\exp\left[-u+\gamma(u)\right] \approx \int_0^\infty \mathrm{d}u\, u\,\exp\left[-\left(1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}\right)u\right] = \frac{1}{\left(1-b_1^{(0)}\right)^2}\,\frac{1}{\left(1 + \frac{b_1^{(1)}}{(b_1^{(0)}-1)\,c}\right)^2} \approx \frac{1}{\left(1-b_1^{(0)}\right)^2} + O\left(\frac{1}{c}\right)\ , \qquad (105)$$

with $\mathrm{Re}\left(1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}\right) > 0$. Collecting the results for $I_A$ and $I_B$, respectively given by Eq. (104) and (105), Eq. (96) representing the r.h.s.
of (92) becomes

$$\gamma(s) = \frac{s}{x_\delta^2\left(1-b_1^{(0)}\right)} + \frac{1}{x_\delta^2}\left[\frac{b_1^{(1)}}{\left(1-b_1^{(0)}\right)^2} + \frac{2 b_2^{(0)}}{\left(1-b_1^{(0)}\right)^3}\right]\frac{s}{c} + \frac{1}{2 x_\delta^4\left(1-b_1^{(0)}\right)^2}\,\frac{s^2}{c} + O\left(\frac{s^3}{c^2}\right)\ . \qquad (106)$$

Equating term by term the expansions in Eq. (95) and (106), one gets a closed set of equations determining the three coefficients $b_1^{(0)}$, $b_1^{(1)}$ and $b_2^{(0)}$, viz.

$$O(s):\quad b_1^{(0)} = \frac{1}{x_\delta^2\left(1-b_1^{(0)}\right)}\ , \qquad (107)$$

$$O\left(\frac{s}{c}\right):\quad b_1^{(1)} = \frac{1}{x_\delta^2}\left[\frac{b_1^{(1)}}{\left(1-b_1^{(0)}\right)^2} + \frac{2 b_2^{(0)}}{\left(1-b_1^{(0)}\right)^3}\right]\ , \qquad (108)$$

$$O\left(\frac{s^2}{c}\right):\quad b_2^{(0)} = \frac{1}{2 x_\delta^4\left(1-b_1^{(0)}\right)^2}\ , \qquad (109)$$

corresponding to Eq. (25), (26) and (27) in [18]. The latter system of equations can be easily solved, yielding

$$b_1^{(0)} = \frac{1}{2}\left[1 \pm A\right]\ , \qquad (110)$$

$$b_1^{(1)} = \mp\,\frac{x_\delta^2\left[1 \pm A\right]^4}{16\, A}\ , \qquad (111)$$

$$b_2^{(0)} = \frac{1}{2}\left(b_1^{(0)}\right)^2\ , \qquad (112)$$

where $A = \sqrt{1 - \frac{4}{x_\delta^2}}$.

At this stage, the average spectral density (86) can be evaluated perturbatively. To do so, the changes of variables (87), (89) and $g(v) = \frac{1}{c}\gamma(s)$ are applied to Eq. (86). Eq. (89) implies rescaling the spectral density such that a meaningful $c \to \infty$ limit can be considered. This rescaling is equivalent to normalising the matrix entries by $\sqrt{c}$. After the change of variables, the spectral density will be expressed in terms of $x_\delta = x - \mathrm{i}\delta$, which does not scale with $c$. In this setting, the limit $\varepsilon \to 0^+$ is replaced by the limit $\delta \to 0^+$.

Taking into account (89), for the l.h.s. of Eq. (86), before taking the $\varepsilon \to 0^+$ limit, one finds

$$\rho(\lambda_\varepsilon) = \rho(x_\delta)\left|\frac{\mathrm{d}x_\delta}{\mathrm{d}\lambda_\varepsilon}\right| = \rho(x_\delta)\,\frac{1}{\sqrt{c}}\ , \qquad (113)$$

entailing that

$$\rho(\lambda) = \lim_{\varepsilon\to 0^+}\rho(\lambda_\varepsilon) = \lim_{\delta\to 0^+}\rho(x_\delta)\,\frac{1}{\sqrt{c}} = \rho(x)\,\frac{1}{\sqrt{c}}\ . \qquad (114)$$

Transforming the variables also in the r.h.s. of Eq.
(86) yields

$$\rho(x)\,\frac{1}{\sqrt{c}} = \frac{1}{\pi}\lim_{\delta\to 0^+}\mathrm{Re}\left[\left(-\frac{\mathrm{i}}{\sqrt{c}\,x_\delta}\right)\int_0^\infty \mathrm{d}s\,\exp\left[-s+\gamma(s)\right]\right] \;\Rightarrow\; \rho(x) = \frac{1}{\pi}\lim_{\delta\to 0^+}\mathrm{Im}\left[\frac{1}{x-\mathrm{i}\delta}\int_0^\infty \mathrm{d}s\,\exp\left[-s+\gamma(s)\right]\right]\ , \qquad (115)$$

where the dependence on $c$ is only through the power series (93) expressing $\gamma(s)$. The goal is to obtain the $O(1)$ leading term dominating (115) for large $c$. In order to do so, the expansion of the function $\gamma(s)$ can be truncated as

$$\gamma(s) = b_1 s + O(s^2) = b_1^{(0)} s + b_1^{(1)}\frac{s}{c} + O\left(\frac{s^2}{c}\right)\ , \qquad (116)$$

where the coefficient expansion (94) has been used. With this assumption, the average spectral density depends only on the coefficients $b_1^{(0)}$ and $b_1^{(1)}$. Finally, using (116) in (115) one gets

$$\rho(x) = \frac{1}{\pi}\lim_{\delta\to 0^+}\mathrm{Im}\left[\frac{1}{(x-\mathrm{i}\delta)\left(1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}\right)}\right] \approx \frac{1}{\pi}\lim_{\delta\to 0^+}\mathrm{Im}\left[\frac{1}{(x-\mathrm{i}\delta)\left(1 - b_1^{(0)}\right)}\right] + O\left(\frac{1}{c}\right)\ , \qquad (117)$$

where again $\mathrm{Re}\left(1 - b_1^{(0)} - \frac{b_1^{(1)}}{c}\right) > 0$. Recalling the definition (110), after some simple algebra one obtains

$$\mathrm{Im}\left[\frac{1}{(x-\mathrm{i}\delta)\left(1 - b_1^{(0)}\right)}\right] = \frac{1}{2}\,\mathrm{Im}\left[x - \mathrm{i}\delta \pm \sqrt{(x-\mathrm{i}\delta)^2 - 4}\right]\ . \qquad (118)$$

Using (118), considering the imaginary part and taking the $\delta\to 0^+$ limit, the $O(1)$ leading term in Eq. (117) becomes

$$\rho(x) = \begin{cases} \frac{1}{2\pi}\sqrt{4-x^2} & -2 < x < 2\\ 0 & \text{elsewhere}\end{cases}\ , \qquad (119)$$

where we have used that

$$\mathrm{Im}\left[\sqrt{y}\right] = \begin{cases} \sqrt{-y} & y < 0\\ 0 & y > 0\end{cases}\ , \qquad (120)$$

and the plus sign in front of the square root has been chosen in order to get a physical (non-negative) solution. Eq. (119) corresponds to Wigner's semicircle, as expected.

An alternative replica solution

Kühn in [28] suggested a different approach to the average spectral density problem, which completely bypasses (84). At the outset, the treatment in [28] is the same as in [18], but it departs from the original Bray-Rodgers derivation at the level of the stationarity conditions (60) and (61).
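Eq. (119) can be checked numerically: with the scaling used here (matrix entries of variance $1/c$), the limiting law should be a correctly normalised pdf with unit second moment. A small self-contained check by midpoint quadrature (an illustrative sketch, not part of the original text):

```python
import math

def semicircle(x):
    """Leading-order spectral density (119): (1/2pi) * sqrt(4 - x^2) on (-2, 2)."""
    return math.sqrt(4.0 - x * x) / (2.0 * math.pi) if abs(x) < 2.0 else 0.0

# Midpoint Riemann sum over the support (-2, 2).
n = 100000
h = 4.0 / n
xs = [-2.0 + (i + 0.5) * h for i in range(n)]
mass = sum(semicircle(x) for x in xs) * h                 # should be 1
second_moment = sum(x * x * semicircle(x) for x in xs) * h  # should be 1
print(mass, second_moment)  # both close to 1
```

The unit second moment confirms that the rescaling by $\sqrt{c}$ in (113)-(114) produces entries of effective variance $1/c$, as stated above.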
The order parameter $\varphi(\vec{v}\,)$ and its conjugate $\mathrm{i}\hat\varphi(\vec{v}\,)$ are represented as uncountably infinite superpositions of complex Gaussians, i.e.

$$\varphi(\vec{v}\,) = \int \mathrm{d}\omega\,\pi(\omega)\prod_{a=1}^n \frac{\mathrm{e}^{-\frac{\omega}{2}v_a^2}}{Z(\omega)}\ , \qquad (121)$$

$$\mathrm{i}\hat\varphi(\vec{v}\,) = \hat{c}\int \mathrm{d}\hat\omega\,\hat\pi(\hat\omega)\prod_{a=1}^n \frac{\mathrm{e}^{-\frac{\hat\omega}{2}v_a^2}}{Z(\hat\omega)}\ , \qquad (122)$$

where $Z(x) = \sqrt{2\pi/x}$, and $\pi(\omega)$ and $\hat\pi(\hat\omega)$ are normalised pdfs of the inverse variances $\omega$ and $\hat\omega$, with $\mathrm{Re}(\omega),\,\mathrm{Re}(\hat\omega) \geq 0$. The constant $\hat{c}$ is chosen to enforce the normalisation of $\hat\pi(\hat\omega)$. We employ the same labels used in the cavity treatment, as the two objects will eventually coincide.

Eq. (121) and (122) are expansions of $\varphi$ and $\hat\varphi$ in an over-complete function system. The structure of this ansatz derives from the study of models for amorphous systems. In that context, it was noticed that harmonically coupled systems — such as the model defined by our "Hamiltonian" (5) — admit a solution in terms of superpositions of Gaussians [29, 60]. This ansatz exhibits permutation symmetry among replicas as well as rotational symmetry in replica space, therefore sharing the same symmetries assumed in [18] (see Eq. (66)).

The advantage of the ansätze (121) and (122) is that they allow us to extract the leading contribution to the saddle point of (54) in the limits $N \to \infty$ and $n \to 0$. The functional integral over $\varphi$ and $\hat\varphi$ is thus replaced by a path integral over $\pi$ and $\hat\pi$. Therefore, Eq. (54) becomes

$$\langle Z(\lambda)^n\rangle_J \propto \int \mathcal{D}\pi\,\mathcal{D}\hat\pi\,\exp\left(N S_n[\pi,\hat\pi,\lambda]\right)\ , \qquad (123)$$

where

$$S_n[\pi,\hat\pi,\lambda] = S[\pi,\hat\pi] + S[\pi] + S[\hat\pi,\lambda]\ , \qquad (124)$$

and

$$S[\pi,\hat\pi] = -\hat{c} - n\,\hat{c}\int \mathrm{d}\pi(\omega)\,\mathrm{d}\hat\pi(\hat\omega)\,\log\frac{Z(\omega+\hat\omega)}{Z(\omega)Z(\hat\omega)}\ , \qquad (125)$$

$$S[\pi] = n\,\frac{c}{2}\int \mathrm{d}\pi(\omega)\,\mathrm{d}\pi(\omega')\left\langle \log\frac{Z(\omega,\omega',K)}{Z(\omega)Z(\omega')}\right\rangle_K\ , \qquad (126)$$

$$S[\hat\pi,\lambda] = \hat{c} + n\sum_{k=0}^\infty p_{\hat c}(k)\int \{\mathrm{d}\hat\pi\}_k\,\log\frac{Z(\mathrm{i}\lambda_\varepsilon + \{\hat\omega\}_k)}{\prod_{\ell=1}^k Z(\hat\omega_\ell)}\ .$$
(127)

In the latter expressions, the shorthands $\mathrm{d}\pi = \mathrm{d}\omega\,\pi(\omega)$, $\{\mathrm{d}\hat\pi\}_k = \prod_{\ell=1}^k \mathrm{d}\hat\omega_\ell\,\hat\pi(\hat\omega_\ell)$ and $\{\hat\omega\}_k = \sum_{\ell=1}^k \hat\omega_\ell$ have been used. Moreover, $Z(\omega,\omega',K) = Z(\omega)\,Z\!\left(\omega' + \frac{K^2}{\omega}\right)$. The function $p_{\hat c}(k)$ is the Poisson degree distribution $p_{\hat c}(k) = \mathrm{e}^{-\hat c}\,\frac{\hat{c}^k}{k!}$, which naturally crops up in the calculation when representing $\mathrm{e}^{\mathrm{i}\hat\varphi^\star(\vec v)}$ appearing in (61) as a power series. The derivation of (125), (126) and (127) is detailed in E. We notice that the $O(1)$ contributions in (125) and (127) cancel each other, making the action (124) of $O(n)$.

The stationarity conditions (60) and (61) are replaced by the stationarity conditions w.r.t. $\pi$ and $\hat\pi$. They are

$$\frac{\delta S_n}{\delta \pi} = 0 \;\Rightarrow\; \frac{\hat c}{c}\int \mathrm{d}\hat\pi\,\log\frac{Z(\omega+\hat\omega)}{Z(\omega)Z(\hat\omega)} = \int \mathrm{d}\pi'\left\langle \log\frac{Z(\omega,\omega',K)}{Z(\omega)Z(\omega')}\right\rangle_K + \frac{\gamma}{c}\ , \qquad (128)$$

and

$$\frac{\delta S_n}{\delta \hat\pi} = 0 \;\Rightarrow\; \int \mathrm{d}\pi\,\log\frac{Z(\omega+\hat\omega)}{Z(\omega)Z(\hat\omega)} = \sum_{k=1}^\infty p_{\hat c}(k)\,\frac{k}{\hat c}\int \{\mathrm{d}\hat\pi\}_{k-1}\,\log\frac{Z(\mathrm{i}\lambda_\varepsilon + \{\hat\omega\}_{k-1} + \hat\omega)}{Z(\hat\omega)\prod_{\ell=1}^{k-1} Z(\hat\omega_\ell)} + \frac{\hat\gamma}{\hat c}\ , \qquad (129)$$

where $\gamma$ and $\hat\gamma$ are two Lagrange multipliers enforcing the normalisation of $\pi$ and $\hat\pi$. The equality (128) is realised by making sure that $\gamma$ satisfies

$$-\frac{\hat c}{c}\int \mathrm{d}\hat\pi(\hat\omega)\,\log Z(\hat\omega) = \frac{\gamma}{c}\ , \qquad (130)$$

while the remaining part (which is a function of $\omega$) should satisfy

$$\frac{\hat c}{c}\int \mathrm{d}\hat\pi(\hat\omega)\,\log\frac{Z(\omega+\hat\omega)}{Z(\omega)} = \int \mathrm{d}\pi(\omega')\left\langle \log\frac{Z\!\left(\omega + \frac{K^2}{\omega'}\right)}{Z(\omega)}\right\rangle_K\ , \qquad (131)$$

where the symmetric rewriting $Z(\omega,\omega',K) = Z(\omega')\,Z\!\left(\omega + \frac{K^2}{\omega'}\right)$ has been used. Since eq.
(131) must hold for any value of $\omega$ in order for (128) to be satisfied, one notices that the definition

$$\hat\pi(\hat\omega) = \frac{c}{\hat c}\int \mathrm{d}\pi(\omega)\left\langle \delta\!\left(\hat\omega - \frac{K^2}{\omega}\right)\right\rangle_K\ , \qquad (132)$$

once inserted in the l.h.s. of (131), indeed produces the r.h.s. Likewise, also in (129) a constant part can be isolated,

$$-\int \mathrm{d}\pi(\omega)\,\log Z(\omega) = -\sum_{k=1}^\infty p_{\hat c}(k)\,\frac{k}{\hat c}\int \{\mathrm{d}\hat\pi\}_{k-1}\,\log\prod_{\ell=1}^{k-1} Z(\hat\omega_\ell) + \frac{\hat\gamma}{\hat c}\ , \qquad (133)$$

together with a part that is a function of $\hat\omega$, viz.

$$\int \mathrm{d}\pi(\omega)\,\log\frac{Z(\omega+\hat\omega)}{Z(\hat\omega)} = \sum_{k=1}^\infty p_{\hat c}(k)\,\frac{k}{\hat c}\int \{\mathrm{d}\hat\pi\}_{k-1}\,\log\frac{Z(\mathrm{i}\lambda_\varepsilon + \{\hat\omega\}_{k-1} + \hat\omega)}{Z(\hat\omega)}\ . \qquad (134)$$

As before, since (134) must hold for any $\hat\omega$, it follows that

$$\pi(\omega) = \sum_{k=1}^\infty p_{\hat c}(k)\,\frac{k}{\hat c}\int \{\mathrm{d}\hat\pi\}_{k-1}\,\delta\big(\omega - (\mathrm{i}\lambda_\varepsilon + \{\hat\omega\}_{k-1})\big)\ . \qquad (135)$$

In order for both $\hat\pi$ and $\pi$ to be normalised to 1, the condition $\hat c = c$ must be imposed. The solutions of the two coupled functional equations

$$\hat\pi(\hat\omega) = \int \mathrm{d}\pi(\omega)\left\langle \delta\!\left(\hat\omega - \frac{K^2}{\omega}\right)\right\rangle_K\ , \qquad (136)$$

$$\pi(\omega) = \sum_{k=1}^\infty p_c(k)\,\frac{k}{c}\int \{\mathrm{d}\hat\pi\}_{k-1}\,\delta\!\left(\omega - \Big(\mathrm{i}\lambda_\varepsilon + \sum_{\ell=1}^{k-1}\hat\omega_\ell\Big)\right)\ , \qquad (137)$$

represent the saddle-point evaluation of (123). The symbol $p_c(k)$ represents the Poisson degree distribution, which is expected for ER sparse graphs. However, it has been shown in [29, 39, 43] that the above equations hold unmodified for any non-Poissonian degree distribution $p(k)$ within the configuration model framework, as long as the mean degree $\langle k\rangle = c$ is a finite constant, i.e. does not scale with $N$. Unlike (84), the equations (136) and (137) can be solved numerically very efficiently by a population dynamics algorithm (see Section 6). Some remarks are in order.
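The structure of (136)-(137) can be illustrated on the simplest non-trivial case. For random regular graphs with $p(k) = \delta_{k,c}$ and bond weights $K = \pm 1$ (so that $K^2 = 1$), both pdfs may collapse onto delta peaks, and the saddle point reduces to the single scalar fixed-point equation $\hat\omega = 1/(\mathrm{i}\lambda_\varepsilon + (c-1)\hat\omega)$; this anticipates the peaked ansatz discussed later in the text. A hedged Python sketch (an illustration, not the authors' code) solving it by damped iteration and evaluating the $k = c$ contribution to the density, which should reproduce the known Kesten-McKay law:

```python
import math

def rrg_density(lam, c=4, eps=1e-9, iters=5000):
    """Damped iteration of the peaked-ansatz fixed point
    omega_hat = 1/(i*lam_eps + (c-1)*omega_hat), followed by
    rho(lam) = (1/pi) * Re[1/(i*lam_eps + c*omega_hat)] (k = c term)."""
    lam_eps = lam - 1j * eps
    w = 1.0 + 0.0j
    for _ in range(iters):
        w = 0.5 * (w + 1.0 / (1j * lam_eps + (c - 1) * w))  # damping avoids 2-cycles
    return (1.0 / (1j * lam_eps + c * w)).real / math.pi

def kesten_mckay(lam, c=4):
    """Kesten-McKay density for RRG adjacency matrices, |lam| <= 2*sqrt(c-1)."""
    return c * math.sqrt(4 * (c - 1) - lam * lam) / (2 * math.pi * (c * c - lam * lam))

print(rrg_density(0.0), kesten_mckay(0.0))  # both ~ 0.1378
print(rrg_density(1.0), kesten_mckay(1.0))  # both ~ 0.1408
```

The damping factor 1/2 is an assumption made for numerical stability: the undamped map oscillates between two values at $\lambda = 0$.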
• Inserting (136) into (137) yields a unique self-consistency equation for $\pi$ that is exactly identical to (34), obtained using the cavity method in the thermodynamic limit. This fact demonstrates once more the equivalence between the replica and cavity methods.

• Alternatively, one could insert (137) into (136), obtaining a single self-consistency equation for $\hat\pi$ that reads

$$\hat\pi(\hat\omega) = \sum_{k=1}^\infty p_c(k)\,\frac{k}{c}\int \{\mathrm{d}\hat\pi\}_{k-1}\left\langle \delta\!\left(\hat\omega - \frac{K^2}{\mathrm{i}\lambda_\varepsilon + \sum_{\ell=1}^{k-1}\hat\omega_\ell}\right)\right\rangle_{K}\ . \qquad (138)$$

The solution of the latter equation via a population dynamics algorithm will be described below in Section 6. While the two approaches are equivalent, here we choose to work with the $\hat\omega$, since the final equation for the spectral density is more naturally expressed in terms of them, as shown in the following.

The pdf $\hat\pi(\hat\omega)$ defined in (138) fully determines the average spectral density. Indeed, recalling (2) one gets

$$\rho(\lambda) = -\frac{2}{\pi N}\lim_{\varepsilon\to 0^+}\mathrm{Im}\,\frac{\partial}{\partial\lambda}\langle \log Z(\lambda)\rangle_J \simeq -\frac{2}{\pi}\lim_{\varepsilon\to 0^+}\mathrm{Im}\lim_{n\to 0}\frac{1}{n}\frac{\partial}{\partial\lambda} S[\hat\pi,\lambda] = \frac{1}{\pi}\lim_{\varepsilon\to 0^+}\sum_{k=0}^\infty p_c(k)\,\mathrm{Re}\int \{\mathrm{d}\hat\pi\}_k\,\frac{1}{\mathrm{i}\lambda_\varepsilon + \{\hat\omega\}_k} = \frac{1}{\pi}\lim_{\varepsilon\to 0^+}\sum_{k=0}^\infty p_c(k)\int \{\mathrm{d}\hat\pi\}_k\,\frac{\mathrm{Re}\left[\{\hat\omega\}_k\right] + \varepsilon}{\left(\mathrm{Re}\left[\{\hat\omega\}_k\right] + \varepsilon\right)^2 + \left(\lambda + \mathrm{Im}\left[\{\hat\omega\}_k\right]\right)^2}\ , \qquad (139)$$

where the latter expression corresponds to Eq. (33) in [28]. We notice that (139) is completely equivalent to (36) if $\hat\pi(\hat\omega)$ is expressed in terms of $\pi(\omega)$ according to (136). All the observations made about (36) hold here as well. The average spectral density as expressed in (139) is evaluated by sampling from a large population distributed according to $\hat\pi(\hat\omega)$: this procedure will also be illustrated in Section 6.

The role of the regulariser ε

The average spectral density (139) can be rewritten in order to isolate singular pure-point contributions from the continuous spectrum.
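The last equality in (139) is pure algebra: for $\lambda_\varepsilon = \lambda - \mathrm{i}\varepsilon$ and any complex $h$ standing in for $\{\hat\omega\}_k$, one has $\mathrm{Re}\left[1/(\mathrm{i}\lambda_\varepsilon + h)\right] = (\mathrm{Re}\,h + \varepsilon)/\left[(\mathrm{Re}\,h+\varepsilon)^2 + (\lambda + \mathrm{Im}\,h)^2\right]$. A short numerical confirmation (illustrative only, with arbitrary test values):

```python
import random

random.seed(0)
lam, eps = 1.3, 1e-2
for _ in range(100):
    h = complex(random.uniform(0.0, 2.0), random.uniform(-2.0, 2.0))  # plays the role of {omega_hat}_k
    lhs = (1.0 / (1j * (lam - 1j * eps) + h)).real
    rhs = (h.real + eps) / ((h.real + eps) ** 2 + (lam + h.imag) ** 2)
    assert abs(lhs - rhs) < 1e-12
print("identity in (139) verified")
```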
Indeed, defining

$$P(a,b) = \sum_{k=0}^\infty p_c(k)\int \{\mathrm{d}\hat\pi\}_k\,\delta\big(a - \mathrm{Re}[\{\hat\omega\}_k]\big)\,\delta\big(b - \mathrm{Im}[\{\hat\omega\}_k]\big)\ , \qquad (140)$$

one finds the identity

$$\rho(\lambda) = \frac{1}{\pi}\lim_{\varepsilon\to 0^+}\int \mathrm{d}a\,\mathrm{d}b\,P(a,b)\,\frac{a+\varepsilon}{(a+\varepsilon)^2 + (\lambda+b)^2}\ . \qquad (141)$$

The integrand in (141) becomes singular as $\varepsilon \to 0^+$ for $a = 0$. These singular contributions can be isolated by representing $P(a,b)$ as

$$P(a,b) = P(b)\,\delta(a) + \tilde{P}(a,b)\ , \qquad (142)$$

yielding for the spectral density

$$\rho(\lambda) = \lim_{\varepsilon\to 0^+}\int \mathrm{d}b\,P(b)\,L_\varepsilon(\lambda+b) + \frac{1}{\pi}\lim_{\varepsilon\to 0^+}\int_{a>0} \mathrm{d}a\,\mathrm{d}b\,\tilde{P}(a,b)\,\frac{a+\varepsilon}{(a+\varepsilon)^2 + (\lambda+b)^2}\ . \qquad (143)$$

Here, $L_\varepsilon(\lambda+b)$ is a Cauchy distribution with scale (half-width at half-maximum) parameter $\varepsilon$, viz.

$$L_\varepsilon(\lambda+b) = \frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^2 + (\lambda+b)^2} \;\xrightarrow[\varepsilon\to 0^+]{}\; \delta(\lambda+b)\ , \qquad (144)$$

which reduces to a delta-peak at $b = -\lambda$ for any value of $\lambda$ as $\varepsilon \to 0^+$. The spectral density $\rho(\lambda)$ can then be easily evaluated by sampling (see Section 6) from the population of the $a$ and $b$ (i.e. the $\hat\omega$). Relying on the law of large numbers and calling $M$ the number of samples $\{(a_i,b_i)\}$, the two integrals in (143) can indeed be rewritten as

$$\rho(\lambda) \simeq \rho_S(\lambda) + \rho_C(\lambda) \simeq \frac{1}{M}\sum_{i:\ a_i = 0} L_\varepsilon(\lambda + b_i) + \frac{1}{\pi M}\sum_{i:\ a_i > 0}\frac{a_i+\varepsilon}{(a_i+\varepsilon)^2 + (\lambda+b_i)^2}\ , \qquad (145)$$

where $\rho_S(\lambda)$ and $\rho_C(\lambda)$ indicate respectively the singular and the continuous part of the average spectral density. Eq. (145) is equivalent to (152) and corresponds to Eq. (40) in [28].

Figure 2 shows the spectral density obtained for ER matrices with mean degree $c = 2$ and Gaussian weights with zero mean and variance $1/c$. The tails of the distribution and the central peak at $\lambda = 0$ are dominated by localised states, i.e. the eigenvectors corresponding to those values of $\lambda$ have most of their components equal to zero.
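The degree of localisation mentioned here is quantified below through the inverse participation ratio (IPR), Eq. (146). As a minimal standalone illustration (not taken from the original text): a vector concentrated on a single site has IPR equal to 1, while a uniformly spread vector has IPR $1/N$.

```python
def ipr(v):
    """Inverse participation ratio: (sum of v_i^4) / (sum of v_i^2)^2."""
    s2 = sum(x * x for x in v)
    s4 = sum(x ** 4 for x in v)
    return s4 / (s2 * s2)

N = 1000
localised = [0.0] * N
localised[0] = 1.0           # all weight on one site
delocalised = [1.0] * N      # uniform over all sites (normalisation is irrelevant)
print(ipr(localised))    # 1.0    -> O(1)
print(ipr(delocalised))  # 0.001  -> O(1/N)
```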
Given that there is a one-to-one matching between the eigenvector entries and the nodes of the graph, a localised state can also be described as an eigenvector that is concentrated on few sites of the graph. Quantitatively, the presence and location of localised states in the spectrum is confirmed by the numerical analysis of the Inverse Participation Ratio (IPR) of the eigenvectors in [28]. Given an eigenvector $v$ of an $N \times N$ matrix, its IPR is defined as

$$\mathrm{IPR}(v) = \frac{\sum_{i=1}^N v_i^4}{\left(\sum_{i=1}^N v_i^2\right)^2}\ . \qquad (146)$$

The above definition is independent of the eigenvector's normalisation. The IPR of localised states is $O(1)$, as opposed to the $O(N^{-1})$ scaling for delocalised states. Indeed, the numerical study in [27, 28] shows that the $O(1)$ scaling of the IPR is found in correspondence of the tails and around the peak at $\lambda = 0$. Moreover, the IPR analysis makes it possible to relate localised states to the singular contributions to the overall spectrum $\rho(\lambda)$: the regions of the spectrum where the IPR is $O(1)$ are also those where the singular contribution $\rho_S(\lambda)$ dominates over $\rho_C(\lambda)$.

Moreover, when comparing direct numerical diagonalisation and population dynamics results, two fundamental aspects must be taken into account.

• On the one hand, the eigenvalues obtained by direct diagonalisation must be suitably binned in order to produce the numerical spectral density profile. The binning procedure smoothens the localised peaks and makes them harder to detect.

• On the other hand, the parameter $\varepsilon$ plays an essential role in highlighting the singular contributions to the spectrum. In the evaluation of (145), only the samples such that $b_i \in [-\lambda - O(\varepsilon), -\lambda + O(\varepsilon)]$ for any given value of $\lambda$ contribute to $\rho_S(\lambda)$.
Therefore, in order to have enough data for a reliable evaluation of the singular contribution $\rho_S(\lambda)$, one must refrain from using a very small $\varepsilon$ (e.g. $\varepsilon = O(10^{-300})$), but rather use a relatively large value (e.g. $\varepsilon = O(10^{-3})$), to ensure that $M\varepsilon\,\rho_S(\lambda) \gg 1$. Here $M$ indicates the number of samples used to evaluate the sums in (145).

Figure 2: Spectral density of ER matrices with mean degree c = 2 and Gaussian bond weights with zero mean and variance 1/c. In all panels, direct diagonalisation results (red circles) are obtained from a sample of 10000 matrices of size N = 1000. Top: population dynamics result obtained with a regulariser $\varepsilon = 10^{-3}$ (solid blue line) vs. the direct diagonalisation results. Middle: population dynamics result obtained with a regulariser $\varepsilon = 10^{-300}$ (solid blue line) vs. the direct diagonalisation results. Bottom: comparison between the population dynamics results obtained with regularisers $\varepsilon = 10^{-3}$ (solid blue line) and $\varepsilon = 10^{-300}$ (solid green line) and direct diagonalisation, on a logarithmic scale. An extremely small value of $\varepsilon$ is not able to capture the tails of the spectral density related to localised states, where the singular contributions prevail.

Figure 3: Spectral density of ER matrices with mean degree c = 4 and Gaussian bond weights with zero mean and variance 1/c. Top: population dynamics result (solid blue line) obtained with a regulariser $\varepsilon = 10^{-3}$ vs. direct diagonalisation results (red circles) obtained from a sample of 10000 matrices of size N = 1000. Bottom: comparison between the population dynamics result with $\varepsilon = 10^{-3}$ (solid blue line), direct diagonalisation results obtained from a sample of 10000 matrices of size N = 1000 (red circles) and direct diagonalisation results obtained from a sample of 2500 matrices of size N = 4000 (green stars).

Figure 4: Spectral density of adjacency matrices of random regular graphs with coordination c = 4.
The population dynamics result (solid blue line) is compared to the analytical expression of the spectral density, found in [34, 35] and known as the Kesten-McKay pdf (red circles).

The choice of the regulariser $\varepsilon$ has evident effects in the case $c = 2$ (see Fig. 2), since for such a low $c$ localised states prevail in the spectrum. This is due to the structure of the graph itself, which consists of a giant cluster and a collection of isolated finite connected clusters of nodes of any size (see C). An excellent agreement between direct diagonalisation and population dynamics results is achieved for $\varepsilon = 10^{-3}$ (top panel): in this case, the Cauchy peaks related to the singular contributions to the spectral density are broadened into "wider" Cauchy pdfs. On the other hand, when a much smaller regulariser such as $\varepsilon = 10^{-300}$ is used (middle panel), the spectral density exhibits large fluctuations, mainly due to errors that occur when sampling isolated states, as the condition $M\varepsilon\,\rho_S(\lambda) \gg 1$ is no longer fulfilled; what is then faithfully reproduced is essentially only the continuous part $\rho_C(\lambda)$.

However, the curve $\rho_C(\lambda)$ is unable to capture the tails of the spectral density: this effect is highlighted on a logarithmic scale (bottom panel). This is the typical signature of a localisation transition: the tails of the spectral density are dominated by localised states, hence they cannot be represented by the continuous part of the spectrum. At the same time, using too small a value of $\varepsilon$ makes it impossible to observe the localised state contributions in the tails. If a larger regulariser (hence better local statistics) is employed (top panel), the tails are revealed. For an extensive discussion of these phenomena, see [28].

These effects are much less evident in the $c = 4$ case, shown in Fig.
3, as localised states become much less relevant as the mean degree $c$ increases. Indeed, it has been shown in [61, 62] that the weight of the delta-peaks related to localised states is an exponentially decreasing function of $c$; hence the peaks tend to disappear and merge into the continuous part of the spectrum as $c$ grows. Moreover, the proportion of isolated nodes and isolated tree-like clusters of nodes in the graph is strongly reduced (see again C). Therefore, the choice of the regulariser is of lesser importance, and population dynamics simulations run with different values of $\varepsilon$ yield very similar results. Here, the peak at $\lambda = 0$ due to isolated nodes can still be noticed. The case of the spectral density of ER matrices with Gaussian couplings and $c = 4$ is also used to show that finite-size effects are barely present in the spectral problem (bottom panel of Fig. 3). In fact, the results of the numerical diagonalisation of matrices of different sizes (in particular $N = 1000$ and $N = 4000$) are barely distinguishable when compared to the same population dynamics simulation.

Finally, as an illustration of the fact that the formalism presented here can be used to obtain the spectral density for other finite mean degree ensembles in the configuration model class, we show in Fig. 4 the spectral density of the ensemble of adjacency matrices of random regular graphs (RRG), having degree distribution $p(k) = \delta_{k,c}$. In Fig. 4 we consider the case $c = 4$. For RRG adjacency matrices, there are no localised states for any $c > 2$; conversely, there are mainly localised states for $c = 2$ (see again [28] for a detailed discussion). The population dynamics algorithm perfectly reproduces the Kesten-McKay distribution [34, 35], given by the analytical formula

$$\rho(\lambda) = \frac{c\,\sqrt{4(c-1) - \lambda^2}}{2\pi\,(c^2 - \lambda^2)} \qquad \text{for } |\lambda| \leq 2\sqrt{c-1}\ . \qquad (147)$$

We remark that (147) can be derived analytically within the formalism of Section 5.1, employing a "peaked" ansatz for the distribution of inverse variances, $\hat\pi(\hat\omega) = \delta(\hat\omega - \bar\omega)$.
This is shown in F.

The population dynamics algorithm

In this section, we sketch the stochastic population dynamics algorithm that allows us to solve the self-consistency equation (138), together with the sampling procedure used to evaluate (139). This kind of algorithm is widely used in the study of spin glasses [63, 64]. The procedure is general, and allows one to solve every equation having the same structure as (138) (including, for instance, (34)).

In order to solve (138), one represents the pdf $\hat\pi(\hat\omega)$ in terms of a population of $N_P$ complex values $\{\hat\omega_i\}_{1\leq i\leq N_P}$, which are assumed to be sampled from that pdf. Given that the true pdf is initially unknown, a starting population is randomly initialised with $\mathrm{Re}[\hat\omega_i] > 0$. Then, a stochastic algorithm for which the solution of Eq. (138) is the unique stationary solution is constructed.

To start, we fix $\varepsilon = 10^{-300}$: when solving (138), we may choose $\varepsilon$ to be as small as possible, in order not to bias the values of the $\hat\omega$. Moreover, we define a set $I$ of equally spaced real positive numbers, starting at zero. The parameter $\lambda$ will take values in $I$. The distance between two consecutive values in $I$, denoted by $\Delta\lambda$, represents the $\lambda$-scan mesh; its precision can be tuned depending on the desired resolution of the spectrum. Since the average spectral density as expressed by Eq. (141) (or equivalently Eq. (145)) is an even function of $\lambda$, it can be evaluated in the interval $I$ and then simply mirrored w.r.t. $0$ to obtain the full spectral density shape. As a last remark, it is convenient to normalise the bond weights drawn from $p_K(K)$ by $\sqrt{c}$, where $c$ is the mean degree.

Given these initial remarks, the stochastic algorithm consists of iterating the following steps until a statistically stationary population is obtained, for any given $\lambda \in I$.
1. Generate a random $k$ according to the distribution $\frac{k}{c}p(k)$, where $p(k)$ is the degree distribution of interest and $c = \langle k\rangle$.

2. Generate $K$ from the bond weights pdf $p_K(K)$.

3. Select $k-1$ members $\hat\omega_\ell$ of the population at random, then compute

$$\hat\omega^{(\mathrm{new})} = \frac{K^2}{\mathrm{i}\lambda_\varepsilon + \sum_{\ell=1}^{k-1}\hat\omega_\ell}\ , \qquad (148)$$

which is the equality enforced by the delta function in (138). Replace a randomly selected population member $\hat\omega_j$ (where $j = 1,\ldots,N_P$) with $\hat\omega^{(\mathrm{new})}$.

4. Return to 1.

(Fixing $\varepsilon = 10^{-300}$ morally corresponds to considering the $\varepsilon \to 0^+$ limit in Eq. (138).)

A sweep is completed when every member of the population $\hat\omega_j$, with $j = 1,\ldots,N_P$, has been updated once according to the previous steps. We denote the $i$-th sweep for a given $\lambda$ by $S_i(\lambda)$. A sufficient number $N_{\mathrm{eq}}$ of sweeps is needed to equilibrate the population. Stationarity can be assessed by monitoring the sample estimates of the first moments of the $\hat\omega$ variables.

The population dynamics algorithm can also be employed for the sampling procedure that allows one to numerically evaluate (139) (and, in a similar fashion, (36)). Once (for a given value of $\lambda$) the population $\{\hat\omega_i\}_{1\leq i\leq N_P}$ has been brought to convergence after $N_{\mathrm{eq}}$ equilibration sweeps, a number $N_{\mathrm{meas}}$ of so-called measurement sweeps $\mathcal{M}(\lambda)$ is performed.

Each measurement sweep $\mathcal{M}_j(\lambda)$ ($j = 1,\ldots,N_{\mathrm{meas}}$) can be divided into two parts. In the first part, the equilibrated population is updated via a sweep $S_j(\lambda)$, as described before. The second part is the actual measurement $m_j(\lambda)$, involving the following steps. Two (real) empty arrays $\{a_i\}_{1\leq i\leq N_{\mathrm{sam}}}$ and $\{b_i\}_{1\leq i\leq N_{\mathrm{sam}}}$ of size $N_{\mathrm{sam}}$ are initialised; here, $N_{\mathrm{sam}}$ is the number of samples per measurement sweep. Each of the $a_i$ and the $b_i$ will eventually play the role of the real and the imaginary parts of sums of the $\hat\omega_\ell$, respectively.
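Steps 1-3 above, together with the measurement procedure described next and the estimator (152), can be sketched in a few lines of Python. The example below is an illustrative toy implementation, not the authors' code: it assumes a random regular graph, $p(k) = \delta_{k,c}$ with $c = 4$, and bond weights $K = \pm 1$ (so $K^2 = 1$ and the degree-biased distribution $\frac{k}{c}p(k)$ is again $\delta_{k,c}$); for simplicity it uses a single modest $\varepsilon$ for both equilibration and measurement, and checks $\rho(0)$ against the Kesten-McKay value:

```python
import math, random

random.seed(42)

def population_dynamics(lam, c=4, n_pop=1000, eps=1e-3, n_eq=100, n_meas=20):
    """Toy population-dynamics sketch for a random regular graph:
    p(k) = delta_{k,c}, bond weights K = +/-1 (so K**2 = 1).
    Equilibration implements the update (148); measurement implements (149)-(152)."""
    lam_eps = lam - 1j * eps
    pop = [complex(random.uniform(0.1, 2.0), 0.0) for _ in range(n_pop)]

    # Equilibration: n_eq sweeps of n_pop single updates each.
    for _ in range(n_eq * n_pop):
        k = c  # degree-biased (k/c) p(k) reduces to delta_{k,c} for an RRG
        denom = 1j * lam_eps + sum(random.choice(pop) for _ in range(k - 1))
        pop[random.randrange(n_pop)] = 1.0 / denom  # K**2 = 1 for K = +/-1

    # Measurement: sample sums of k population members and apply Eq. (152).
    acc = 0.0
    n_samples = n_meas * n_pop
    for _ in range(n_samples):
        x = sum(random.choice(pop) for _ in range(c))  # Eq. (149)
        a, b = x.real, x.imag                          # Eqs. (150)-(151)
        acc += (a + eps) / ((a + eps) ** 2 + (b + lam) ** 2)
    return acc / (math.pi * n_samples)                 # Eq. (152)

rho0 = population_dynamics(0.0)
km0 = 4 * math.sqrt(4 * 3) / (2 * math.pi * 16)  # Kesten-McKay density at lambda = 0
print(rho0, km0)  # both ~ 0.138
```

For ER ensembles one would instead draw $k$ from the degree-biased Poisson distribution, draw $K$ from $p_K(K)$ normalised by $\sqrt{c}$, and equilibrate with a much smaller $\varepsilon$, reserving the larger $\varepsilon$ for the measurement stage, as explained in the surrounding text.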
Then, for $i = 1,\ldots,N_{\mathrm{sam}}$:

1. Generate a random $k$ according to the degree distribution $p_c(k)$ (or, in general, $p(k)$).

2. Select $k$ members $\hat\omega_\ell$ of the population $\{\hat\omega_i\}_{1\leq i\leq N_P}$ at random and compute

$$x_i = \sum_{\ell=1}^{k}\hat\omega_\ell\ . \qquad (149)$$

3. Compute

$$a_i = \mathrm{Re}[x_i]\ , \qquad (150)$$
$$b_i = \mathrm{Im}[x_i]\ . \qquad (151)$$

When all $N_{\mathrm{meas}}$ measurement sweeps $\mathcal{M}_j(\lambda)$ have been completed, the resulting $N_{\mathrm{meas}}$ arrays of the type $\{a_i\}_{1\leq i\leq N_{\mathrm{sam}}}$ (respectively $\{b_i\}_{1\leq i\leq N_{\mathrm{sam}}}$) are merged together, yielding a unique large array $\{A_j\}_{1\leq j\leq M}$ (respectively $\{B_j\}_{1\leq j\leq M}$) of size $M = N_{\mathrm{sam}} \times N_{\mathrm{meas}}$, which is the total number of samples; in practice, $N_{\mathrm{sam}}$ is chosen such that $M$ is large for the chosen value of $N_{\mathrm{meas}}$. Therefore we can eventually compute

$$\rho(\lambda) = \frac{1}{\pi M}\sum_{j=1}^{M}\frac{A_j+\varepsilon}{(A_j+\varepsilon)^2 + (B_j+\lambda)^2}\ , \qquad (152)$$

which represents the contribution to the average spectral density for the given value of $\lambda$. The evaluation of the sample average (152) requires a careful choice of $\varepsilon$, since the value of $\varepsilon$ in (152) determines the width of the Cauchy distributions approximating the delta-peaks in the spectrum (see the discussion in Section 5.2). In general, the value of $\varepsilon$ employed in (152) can be larger (up to $\varepsilon = O(10^{-3})$) than the value $\varepsilon = 10^{-300}$ chosen for the equilibration sweeps in (148), in order to have sufficient statistics to faithfully represent the singular part of the spectrum. Eq. (152), corresponding to Eq. (40) in [28], is a discretised version of (141). This step concludes the sampling algorithm for a given value of $\lambda$. The full sampling algorithm can be summarised as follows.

Algorithm 1 Population dynamics sampling algorithm
  for λ ∈ I do
    for e = 1, ..., N_eq do
      S_e(λ)        ▷ Equilibration sweeps.
    end for
    for j = 1, ..., N_meas do
      S_j(λ)
      m_j(λ)        ▷ S_j(λ) and m_j(λ) jointly form the measurement sweep M_j(λ).
    end for
    Compute (152) for the given $\lambda$
end for

An example of a population dynamics algorithm for the spectra of ER matrices is available upon request.

In summary, we have provided a pedagogical and comprehensive overview of the computation of the average spectral density of sparse symmetric random matrices. We started with the celebrated Edwards-Jones formula (2) and outlined its proof. The formula allows one to recast the determination of the density of states of an $N \times N$ matrix into the calculation of the average free energy of a system of $N$ interacting particles at equilibrium, described by a Gibbs-Boltzmann distribution at imaginary inverse temperature. Therefore, techniques from the statistical physics of disordered systems, such as the replica method, can be employed to correctly deal with the calculation of the average free energy (see Eq. (16)) that features in the Edwards-Jones formula. The replica method was indeed the strategy used in the seminal work of Bray and Rodgers, which represents the first attempt to obtain the spectral density for matrices with ER connectivity. We have reproduced their calculations in detail, showing how to derive the integral equation (84), whose solution still represents a challenging open problem. We also described how to obtain the Wigner semicircle as the leading order of the large mean degree expansion of Eq. (84).

Considering sparse tree-like graphs within the Edwards-Jones framework, we have described how to apply the cavity method to the spectral problem for single instances. The cavity method circumvents the averaging of the free energy by making the associated Gibbs-Boltzmann distribution the target of its analysis. We have demonstrated that in this context the only ingredients needed to compute the spectral density are the inverse variances of each of the $N$ marginal pdfs of the Gibbs-Boltzmann distribution (see Eq. (22)).
These inverse variances are easily obtained in terms of a set of self-consistency equations (29) for the cavity inverse variances. Moreover, we have explained how in the thermodynamic limit the cavity single-instance recursions give rise to a self-consistency integral equation (34) for the pdf of the inverse cavity variances, in terms of which the average spectral density (36) is fully determined at the ensemble level.

We have also illustrated an alternative replica derivation, where the high-temperature replica symmetry ansatz employed by Bray and Rodgers is realised by assuming that the order parameter and its conjugate are expressed through an infinite superposition of zero-mean complex Gaussians with random inverse variances (see Eqs. (121) and (122)). We showed that the two coupled integral equations (136) and (137) that define the pdfs of the aforementioned inverse variances reduce to a unique self-consistency equation (138), which is completely equivalent to Eq. (34) found within the cavity treatment in the thermodynamic limit. In the replica framework, too, the average spectral density (139) depends only on this single pdf defined in (138). Therefore, once again the equivalence between the cavity and replica approaches is confirmed. Indeed, both methods permit one to express the average spectral density as a weighted sum of local contributions coming from nodes of different degree $k$.

On the practical side, the average spectral density is obtained by sampling from a large population of complex numbers distributed according to the pdf of the inverse variances (138) (or equivalently (34)). We remark that both approaches are not restricted to ER graphs, but can handle any degree distribution with finite mean degree.

The essential tool for solving self-consistency equations of the kind of Eq. (138) and performing the sampling procedure to evaluate the spectral density (139) is a stochastic population dynamics algorithm.
We give a detailed description of the algorithm along with practical tips to implement it. Results obtained with the population dynamics algorithm are in excellent agreement with the numerical diagonalisation of large weighted adjacency matrices of tree-like graphs, provided that the value of the regulariser $\varepsilon$ used in the algorithm is chosen correctly. Indeed, we thoroughly describe the important role of $\varepsilon$ in unveiling the contribution of the localised states to the spectral density, also in connection with the graph's mean degree and structure. We are also able to show that the spectral density does not significantly suffer from finite size effects, away from any localisation transition. In the immediate vicinity of localisation transitions, it is expected that finite size effects will affect results as in other continuous phase transitions. However, this phenomenology has not been fully investigated.

Lastly, in line with the pedagogical gist of this work, we include a number of appendices where we provide background information on graphs, elucidations of technicalities and quick proofs of the identities that have been used in the main text.

Acknowledgements

PV is grateful to Fabian Aguirre Lopez and Ton Coolen for many useful discussions.

Funding information: The authors acknowledge funding by the Engineering and Physical Sciences Research Council (EPSRC) through the Centre for Doctoral Training in Cross-Disciplinary Approaches to Non-Equilibrium Systems (CANES, Grant Nr. EP/L015854/1).

A Sokhotski-Plemelj formula

The Sokhotski-Plemelj identity is
$$\lim_{\varepsilon \to 0^+} \frac{1}{x \pm i\varepsilon} = \mathrm{Pr}\left(\frac{1}{x}\right) \mp i\pi\, \delta(x)\ . \qquad (153)$$
It is employed for the solution of some improper integrals. A quick proof for a real test function $g(x)$ follows.
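As a purely numerical aside (our own illustration, with a Gaussian test function chosen only as an example), the Lorentzian representation of $\delta(x)$ that underlies (153) can be checked by quadrature: the smeared average tends to $g(0)$ as $\varepsilon \to 0^+$.

```python
import math

def lorentzian_average(g, eps, a=-50.0, b=50.0, n=200001):
    """Trapezoidal estimate of the integral of g(x) * (1/pi) * eps / (x^2 + eps^2),
    which approaches g(0) as eps -> 0+."""
    h = (b - a) / (n - 1)
    total = 0.0
    for i in range(n):
        x = a + i * h
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoidal end-point weights
        total += w * g(x) * (eps / math.pi) / (x * x + eps * eps)
    return total * h

g = lambda x: math.exp(-x * x)   # test function with g(0) = 1
```

Decreasing $\varepsilon$ brings the result closer to $g(0) = 1$, in line with (153).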
We have
$$\lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \mathrm{d}x\, \frac{g(x)}{x \pm i\varepsilon} = \lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \mathrm{d}x\, \frac{x\, g(x)}{x^2 + \varepsilon^2} \mp i\pi \lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \mathrm{d}x\, g(x)\, \frac{1}{\pi} \frac{\varepsilon}{x^2 + \varepsilon^2}\ , \qquad (154)$$
where the real and imaginary parts of the integrand have been separated. The first integral on the r.h.s. of (154) can be written as a Cauchy principal value, viz.
$$\lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \mathrm{d}x\, \frac{x\, g(x)}{x^2 + \varepsilon^2} = \lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \mathrm{d}x\, \frac{g(x)}{x} \frac{x^2}{x^2 + \varepsilon^2} \qquad (155)$$
$$= \lim_{\varepsilon \to 0^+} \left( \int_{-\infty}^{-\varepsilon} \mathrm{d}x\, \frac{g(x)}{x} + \int_{\varepsilon}^{\infty} \mathrm{d}x\, \frac{g(x)}{x} \right) = \left[ \mathrm{Pr}\left( \frac{1}{x} \right) \right](g)\ . \qquad (156)$$
The second integral on the r.h.s. of (154) reduces to
$$\lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \mathrm{d}x\, g(x)\, \frac{1}{\pi} \frac{\varepsilon}{x^2 + \varepsilon^2} = \int_{-\infty}^{\infty} \mathrm{d}x\, g(x)\, \delta(x) = g(0)\ , \qquad (157)$$
where $\lim_{\varepsilon \to 0^+} \frac{1}{\pi} \frac{\varepsilon}{x^2 + \varepsilon^2} = \delta(x)$ has been employed.

B The principal branch of the complex logarithm

The logarithm in the complex plane is in general a multi-valued function. Whenever a well-defined, single-valued function is needed, the principal branch of the complex logarithm can be considered. It is denoted by "Log" and defined such that for any $z \in \mathbb{C}$ with $r = |z|$,
$$\mathrm{Log}(z) = \ln(r) + i\,\mathrm{Arg}(z) \quad \text{with} \quad \mathrm{Arg}(z) \in\, ]-\pi, \pi]\ . \qquad (158)$$
The function $\mathrm{Arg}(z)$ denotes the principal value of the argument of the complex number $z$. In particular, given $z = r\mathrm{e}^{i\theta} \in \mathbb{C}$, the argument of $z$ is given by $\arg(z) = \theta$ and is in general a multi-valued function. The single-valued principal argument $\mathrm{Arg}(z)$ is related to $\arg(z)$ via the following relation,
$$\mathrm{Arg}(z) = \arg(z) + 2\pi \left\lfloor \frac{\pi - \arg(z)}{2\pi} \right\rfloor\ , \qquad (159)$$
where the symbol $\lfloor \dots \rfloor$ denotes the floor operation, i.e. $\lfloor x \rfloor$ is the integer number such that $x - 1 < \lfloor x \rfloor \leq x$ for $x \in \mathbb{R}$.

In general $\mathrm{Log}\,\mathrm{e}^z \neq z$ for $z \in \mathbb{C}$.
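The deviation of $\mathrm{Log}\,\mathrm{e}^z$ from $z$ is always an integer multiple of $2\pi i$, as quantified by eq. (160). This can be checked with Python's `cmath.log`, which implements precisely the principal branch (158); the helper name below is ours.

```python
import cmath
import math

def log_exp_correction(z):
    """The 2*pi*i multiple by which Log(e^z) differs from z, cf. eq. (160):
    Log(e^z) = z + 2*pi*i * floor((pi - Im z) / (2*pi))."""
    return 2j * math.pi * math.floor((math.pi - z.imag) / (2 * math.pi))

# Log(e^z) = z only when Im z lies in (-pi, pi]; otherwise the correction kicks in.
for z in (0.5 + 1j, 1 + 7j, -2 - 9j, 3 - 8j):
    assert abs(cmath.log(cmath.exp(z)) - (z + log_exp_correction(z))) < 1e-9
```

For instance, for $z = 1 + 7i$ the correction is $-2\pi i$, since $7 - 2\pi$ already lies in $]-\pi, \pi]$.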
Indeed, for any $z = x + iy \in \mathbb{C}$ the following property holds:
$$\mathrm{Log}(\mathrm{e}^z) = \mathrm{Log}|\mathrm{e}^z| + i\,\mathrm{Arg}(\mathrm{e}^z) = \mathrm{Log}(\mathrm{e}^x) + i\,\mathrm{Arg}(\mathrm{e}^{iy}) = x + i\left\{ \arg(\mathrm{e}^{iy}) + 2\pi \left\lfloor \frac{\pi - \arg(\mathrm{e}^{iy})}{2\pi} \right\rfloor \right\} = x + iy + 2\pi i \left\lfloor \frac{\pi - y}{2\pi} \right\rfloor = z + 2\pi i \left\lfloor \frac{\pi - \mathrm{Im}[z]}{2\pi} \right\rfloor\ , \qquad (160)$$
where $\mathrm{Log}(x) = \ln(x)$ for $x \in \mathbb{R}^+$ and the definition (159) has been used for the principal value of the argument $\mathrm{Arg}(z)$.

C Erdős-Rényi graphs

The Erdős-Rényi (ER) graph is the prototypical example of a random graph, introduced by Erdős and Rényi in [65, 66]. It is the simplest and most studied uncorrelated undirected random network. It can be denoted by $G(N, p)$, where $N$ is the number of nodes and $p \in [0, 1]$ is the probability that each of the $N(N-1)/2$ possible links exists, independently from the others. In formulae, the probability that a link exists between nodes $i$ and $j$ is
$$p_C(c_{ij}) = p\, \delta_{c_{ij},1} + (1 - p)\, \delta_{c_{ij},0}\ . \qquad (161)$$
All properties of the ER model depend on the two parameters $N$ and $p$. Its degree distribution is binomial, viz.
$$\Pr[\text{a random node has degree } k] = p(k) = \binom{N-1}{k}\, p^k (1-p)^{N-1-k}\ . \qquad (162)$$
Indeed, a node has degree $k$ if it is connected to $k$ nodes (the probability of this event being $p^k$) and at the same time it is not connected to all the remaining $N - 1 - k$ nodes (the probability of this event being $(1-p)^{N-1-k}$). The binomial coefficient accounts for the number of ways in which the specific subset of $k$ nodes can be chosen out of the remaining $N - 1$. The mean degree is $c = p(N-1)$. In the limit $N \to \infty$, where $N - 1 \simeq N$, and keeping $c = Np$ constant, the binomial distribution in (162) converges to the Poisson distribution,
$$p_c(k) = \frac{c^k \mathrm{e}^{-c}}{k!}\ . \qquad (163)$$
The condition for this limit to hold is exactly verified in the sparse ER ensemble that we consider in our analysis. Indeed, we explicitly ask that the mean degree $c$ be a finite constant, hence ensuring that $p = c/N \to 0$ as $N \to \infty$.
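The convergence of the degree statistics to (163) can be observed on generated instances. Below is a minimal stand-alone generator (our own sketch, not the code referred to elsewhere in the text): each of the $N(N-1)/2$ pairs is linked independently with probability $p = c/N$.

```python
import random

def er_adjacency(N, c, rng):
    """Adjacency matrix of G(N, p) with p = c/N: each pair (i, j) with i < j
    is linked independently with probability p; the matrix is symmetrised."""
    p = c / N
    A = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A
```

For large $N$ the empirical degree mean and variance of such an instance both approach $c$, as expected for the Poisson distribution (163).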
The Poisson distribution in (163) decays exponentially for large degree $k$.

The structure of an ER graph, and in particular the existence of a giant component, depends on the value of $p$ [66]. The giant component of a graph is the largest connected component (i.e. cluster of nodes) in the graph, containing a finite fraction of the total $N$ nodes. In a connected component, every two nodes are connected by a path, whereas there are no connections between two nodes belonging to two different components. We have the following properties [67, 68].

• For $p < 1/N$ (i.e. $c < 1$), the graph almost surely has no connected component larger than $O(\ln(N))$. The graph can be described as a disjoint union of trees and unicyclic components, i.e. trees with an extra link forming a cycle.

• For $p > 1/N$ (i.e. $c > 1$), the graph almost surely has a giant component of size $O(N)$ that contains cycles of any length, while the remaining smaller components (typically trees and unicycles) have at most size $O(\ln(N))$.

• $p = 1/N = p_c$ represents the percolation threshold, as it separates the two regimes: indeed, at $p = p_c$ (i.e. $c = 1$) most of the isolated components existing for $c < 1$ merge, giving rise to a giant component of size $O(N^{2/3})$. As the (constant) mean degree $c > 1$ grows, more and more of the isolated components merge with the giant component, which becomes $O(N)$ in size. The smaller the size of the isolated components, the longer they will survive the merging process.

• For $p \leq \ln(N)/N$, the graph contains isolated nodes almost surely, hence it is disconnected. As soon as $p > \ln(N)/N$, the graph becomes connected, as the isolated nodes attach to the giant component, entailing that every pair of nodes in the graph is connected by a path. The value $p = \ln(N)/N$ is then a threshold for the connectivity of the graph.

The structural properties of the graph are reflected in the spectrum. Indeed, the variety of peaks in the spectrum related to singular contributions is due to isolated nodes and isolated finite clusters of nodes that are still present for finite constant $c > 1$, alongside the giant component.

The ER graph can also be seen as a model of link percolation [69].
Indeed, ER graphs can also be generated by starting from a fully connected graph and removing links at random with constant probability $1 - p$.

An algorithm for the generation of the adjacency matrix of any generic random graph within the configuration model is described in Section 8.1 and detailed in Appendix J.5 (Algorithm 27) of [70]. A simple code for the generation of single instances of adjacency matrices of ER graphs is available upon request.

D How to perform the average (48)

The goal is to perform the average
$$\left\langle \exp\left( \frac{i}{2} \sum_{i,j=1}^{N} \sum_{a=1}^{n} v_i^a J_{ij} v_j^a \right) \right\rangle_J \qquad (164)$$
w.r.t. the joint distribution of the matrix entries $P(\{J_{ij}\}) = \prod_{i<j} \cdots$

A tree is a connected acyclic undirected graph. Acyclic means that it contains no cycles. In a tree, any two nodes are connected via a unique path [71]. In particular, trees are examples of bipartite graphs, in which the nodes can be divided into two disjoint subsets $S_1$ and $S_2$, such that every node in subset $S_1$ only has neighbours in the complementary subset $S_2$ and vice versa.

Here, we show that the $N \times N$ adjacency matrix $A$ (whether it is weighted or not) of a tree with $N$ nodes has a spectrum that is symmetric around $\lambda = 0$. In other words, if $\lambda$ is an eigenvalue of $A$, then $-\lambda$ is also an eigenvalue of $A$. This result is encoded in the fact that in the set of recursion equations for the cavity inverse variances (29) and single-site inverse variances (31), the matrix entries appear only through their square.

Let $x$ be the eigenvector of $A$ corresponding to the eigenvalue $\lambda$. Given $x$ and $\lambda$, we will be able to construct a vector $y$ that is an eigenvector of $A$ corresponding to the eigenvalue $-\lambda$. Indeed, considering the eigenvalue equation for the component $x_i$,
$$\lambda x_i = \sum_{j \in \partial i} A_{ij}\, x_j\ , \qquad (178)$$
the node $i$ contributing to the l.h.s. of (178) and the nodes $\{j : j \in \partial i\}$ contributing to the r.h.s. of (178) always belong to different subsets.
Therefore, the signs of all the components $x_j$ with $j \in \partial i$, all belonging to one of the two subsets ($S_1$ or $S_2$), can be changed, giving rise to
$$-\lambda x_i = \sum_{j \in \partial i} A_{ij}\, (-x_j) \quad \Leftrightarrow \quad -\lambda y_i = \sum_{j \in \partial i} A_{ij}\, y_j\ . \qquad (179)$$
This reasoning can be iterated for any $i = 1, \dots, N$. Therefore, given the eigenvector $x$ corresponding to $\lambda$, one can construct a vector $y$ such that
$$y_i = \begin{cases} x_i & i \in S_1 \\ -x_i & i \in S_2 \end{cases}\ . \qquad (180)$$
Of course, the choice of inverting the sign of the components of $x$ on the set $S_2$ is arbitrary. The same result is achieved by changing the sign of the components living on nodes in $S_1$ while leaving the components defined on nodes in $S_2$ unchanged. The vector $y$ is thus an eigenvector of the matrix $A$ corresponding to $-\lambda$.

References

[1] Eugene P Wigner. On the statistical distribution of the widths and spacings of nuclear resonance levels. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 47, pages 790–798. Cambridge University Press, 1951, doi:10.1017/S0305004100027237.

[2] Thomas Guhr, Axel Müller-Groeling, and Hans A Weidenmüller. Random-matrix theories in quantum physics: common concepts. Physics Reports, 299(4-6):189–425, 1998, doi:10.1016/S0370-1573(97)00088-4.

[3] Dragoš Cvetković and Slobodan Simić. Graph spectra in computer science. Linear Algebra and its Applications, 434(6):1545–1562, 2011, doi:10.1016/j.laa.2010.11.035.

[4] Laurent Laloux, Pierre Cizeau, Marc Potters, and Jean-Philippe Bouchaud. Random matrix theory and financial correlations. International Journal of Theoretical and Applied Finance, 3(03):391–397, 2000, doi:10.1142/S0219024900000255.

[5] Zdzisław Burda and Jerzy Jurkiewicz. Signal and noise in financial correlation matrices. Physica A: Statistical Mechanics and its Applications, 344(1-2):67–72, 2004, doi:10.1016/j.physa.2004.06.089.

[6] Jean-Philippe Bouchaud and Marc Potters. Financial applications of random matrix theory: a short review.
arXiv preprint arXiv:0910.1205, 2009, https://arxiv.org/abs/0910.1205.

[7] Debashis Paul and Alexander Aue. Random matrix theory in statistics: A review. Journal of Statistical Planning and Inference, 150:1–29, 2014, doi:10.1016/j.jspi.2013.09.005.

[8] Joël Bun, Jean-Philippe Bouchaud, and Marc Potters. Cleaning large correlation matrices: tools from random matrix theory. Physics Reports, 666:1–109, 2017, doi:10.1016/j.physrep.2016.10.005.

[9] Eugene P Wigner. On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, pages 325–327, 1958, doi:10.2307/1970008.

[10] Vladimir A Marčenko and Leonid A Pastur. Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1(4):457, 1967, doi:10.1070/sm1967v001n04abeh001994.

[11] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47, 2002, doi:10.1103/RevModPhys.74.47.

[12] Sergey N Dorogovtsev, Alexander V Goltsev, José FF Mendes, and Alexander N Samukhin. Spectra of complex networks. Physical Review E, 68(4):046109, 2003, doi:10.1103/PhysRevE.68.046109.

[13] László Lovász. Random walks on graphs: A survey. Combinatorics, Paul Erdős is eighty, 2(1):1–46, 1993.

[14] László Lovász and József Pelikán. On the eigenvalues of trees. Periodica Mathematica Hungarica, 3(1-2):175–182, 1973, doi:10.1007/BF02018473.

[15] Kurt Broderix, Timo Aspelmeier, Alexander K Hartmann, and Annette Zippelius. Stress relaxation of near-critical gels. Physical Review E, 64(2):021404, 2001, doi:10.1103/PhysRevE.64.021404.

[16] Samuel F Edwards and Raymund C Jones. The eigenvalue spectrum of a large symmetric random matrix. Journal of Physics A: Mathematical and General, 9(10):1595, 1976, doi:10.1088/0305-4470/9/10/011.

[17] Francesco Zamponi. Mean field theory of spin glasses.
arXiv preprint arXiv:1008.4844, 2010, https://arxiv.org/abs/1008.4844.

[18] Geoff J Rodgers and Alan J Bray. Density of states of a sparse random matrix. Physical Review B, 37(7):3557, 1988, doi:10.1103/PhysRevB.37.3557.

[19] Alan J Bray and Geoff J Rodgers. Diffusion in a sparsely connected space: A model for glassy relaxation. Physical Review B, 38(16):11461, 1988, doi:10.1103/PhysRevB.38.11461.

[20] Geoff J Rodgers, K Austin, Byungnam Kahng, and Doochul Kim. Eigenvalue spectra of complex networks. Journal of Physics A: Mathematical and General, 38(43):9431, 2005, doi:10.1088/0305-4470/38/43/003.

[21] Yan V Fyodorov and Alexander D Mirlin. On the density of states of sparse random matrices. Journal of Physics A: Mathematical and General, 24(9):2219, 1991, doi:10.1088/0305-4470/24/9/027.

[22] Oleksiy Khorunzhy, Mariya Shcherbina, and Valentin Vengerovsky. Eigenvalue distribution of large weighted random graphs. Journal of Mathematical Physics, 45(4):1648–1672, 2004, doi:10.1063/1.1667610.

[23] Giulio Biroli and Rémi Monasson. A single defect approximation for localized states on random lattices. Journal of Physics A: Mathematical and General, 32(24):L255, 1999, doi:10.1088/0305-4470/32/24/101.

[24] Guilhem Semerjian and Leticia F Cugliandolo. Sparse random matrices: the eigenvalue spectrum revisited. Journal of Physics A: Mathematical and General, 35(23):4837, 2002, doi:10.1088/0305-4470/35/23/303.

[25] Taro Nagao and Geoff J Rodgers. Spectral density of complex networks with a finite mean degree. Journal of Physics A: Mathematical and Theoretical, 41(26):265002, 2008, doi:10.1088/1751-8113/41/26/265002.

[26] František Slanina. Localization of eigenvectors in random graphs. The European Physical Journal B, 85(11):361, 2012, doi:10.1140/epjb/e2012-30338-1.

[27] Spiros N Evangelou. A numerical study of sparse random matrices. Journal of Statistical Physics, 69(1-2):361–383, 1992, doi:10.1007/BF01053797.

[28] Reimer Kühn. Spectra of sparse random matrices.
Journal of Physics A: Mathematical and Theoretical, 41(29):295002, 2008, doi:10.1088/1751-8113/41/29/295002.

[29] Reimer Kühn, Jort Van Mourik, Martin Weigt, and Annette Zippelius. Finitely coordinated models for low-temperature phases of amorphous systems. Journal of Physics A: Mathematical and Theoretical, 40(31):9227, 2007, doi:10.1088/1751-8113/40/31/004.

[30] Marc Mézard, Giorgio Parisi, and Miguel Virasoro. Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, volume 9. World Scientific Publishing Company, 1987, doi:10.1142/9789812799371_0006.

[31] Tim Rogers, Isaac Pérez Castillo, Reimer Kühn, and Koujin Takeda. Cavity approach to the spectral density of sparse symmetric random matrices. Physical Review E, 78(3):031116, 2008, doi:10.1103/physreve.78.031116.

[32] Charles Bordenave and Marc Lelarge. Resolvent of large random graphs. Random Structures & Algorithms, 37(3):332–352, 2010, doi:10.1002/rsa.20313.

[33] František Slanina. Equivalence of replica and cavity methods for computing spectra of sparse random matrices. Physical Review E, 83(1):011118, 2011, doi:10.1103/PhysRevE.83.011118.

[34] Harry Kesten. Symmetric random walks on groups. Transactions of the American Mathematical Society, 92(2):336–354, 1959, doi:10.2307/1993160.

[35] Brendan D McKay. The expected eigenvalue distribution of a large regular graph. Linear Algebra and its Applications, 40:203–216, 1981, doi:10.1016/0024-3795(81)90150-6.

[36] Reimer Kühn. Spectra of random stochastic matrices and relaxation in complex systems. EPL (Europhysics Letters), 109(6):60003, 2015, doi:10.1209/0295-5075/109/60003.

[37] Reimer Kühn. Random matrix spectra and relaxation in complex networks. Acta Phys. Polon. B, 46:1653–1682, 2015, doi:10.5506/aphyspolb.46.1653.

[38] Güler Ergün and Reimer Kühn. Spectra of modular random graphs.
Journal of Physics A: Mathematical and Theoretical, 42(39):395001, 2009, doi:10.1088/1751-8113/42/39/395001.

[39] Reimer Kühn and Jort Van Mourik. Spectra of modular and small-world matrices. Journal of Physics A: Mathematical and Theoretical, 44(16):165205, 2011, doi:10.1088/1751-8113/44/16/165205.

[40] Tim Rogers, Conrad Pérez Vicente, Koujin Takeda, and Isaac Pérez Castillo. Spectral density of random graphs with topological constraints. Journal of Physics A: Mathematical and Theoretical, 43(19):195002, 2010, doi:10.1088/1751-8113/43/19/195002.

[41] Fernando L Metz, Izaak Neri, and Désiré Bollé. Localization transition in symmetric random matrices. Physical Review E, 82(3):031135, 2010, doi:10.1103/physreve.82.031135.

[42] Yoshiyuki Kabashima, Hisanao Takahashi, and Osamu Watanabe. Cavity approach to the first eigenvalue problem in a family of symmetric random sparse matrices. In Journal of Physics: Conference Series, volume 233, page 012001, 2010, doi:10.1088/1742-6596/233/1/012001.

[43] Vito A R Susca, Pierpaolo Vivo, and Reimer Kühn. Top eigenpair statistics for weighted sparse graphs. Journal of Physics A: Mathematical and Theoretical, 52(48):485002, 2019, doi:10.1088/1751-8121/ab4d63.

[44] Vito A R Susca, Pierpaolo Vivo, and Reimer Kühn. Second largest eigenpair statistics for sparse graphs. Journal of Physics A: Mathematical and Theoretical, 54(1):015004, 2020, doi:10.1088/1751-8121/abcbad.

[45] Tim Rogers and Isaac Pérez Castillo. Cavity approach to the spectral density of non-hermitian sparse matrices. Physical Review E, 79(1):012101, 2009, doi:10.1103/physreve.79.012101.

[46] Izaak Neri and Fernando L Metz. Spectra of sparse non-hermitian random matrices: an analytical solution. Physical Review Letters, 109(3):030602, 2012, doi:10.1103/physrevlett.109.030602.

[47] Izaak Neri and Fernando L Metz. Eigenvalue outliers of non-hermitian random matrices with a local tree structure.
Physical Review Letters, 117(22):224101, 2016, doi:10.1103/physrevlett.117.224101.

[48] Fernando L Metz, Izaak Neri, and Tim Rogers. Spectral theory of sparse non-hermitian random matrices. Journal of Physics A: Mathematical and Theoretical, 52(43):434003, 2019, doi:10.1088/1751-8121/ab1ce0.

[49] Fernando L Metz and Izaak Neri. Localization and Universality of Eigenvectors in Directed Random Graphs. Physical Review Letters, 126(4):040604, 2021, doi:10.1103/PhysRevLett.126.040604.

[50] Alaa Saade, Florent Krzakala, and Lenka Zdeborová. Spectral density of the non-backtracking operator on random graphs. EPL (Europhysics Letters), 107(5):50005, 2014, doi:10.1209/0295-5075/107/50005.

[51] Charles Bordenave, Marc Lelarge, and Laurent Massoulié. Non-backtracking Spectrum of Random Graphs: Community Detection and Non-regular Ramanujan Graphs. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 1347–1357. IEEE, 2015, doi:10.1109/focs.2015.86.

[52] Fernando L Metz and Jeferson D Silva. Spectral density of dense random networks and the breakdown of the Wigner semicircle law. Physical Review Research, 2(4):043116, 2020, doi:10.1103/physrevresearch.2.043116.

[53] Fabián Aguirre López and Anthony CC Coolen. Imaginary replica analysis of loopy regular random graphs. Journal of Physics A: Mathematical and Theoretical, 53(6):065002, 2020, doi:10.1088/1751-8121/ab6512.

[54] Fabian Aguirre Lopez and Anthony CC Coolen. Transitions in loopy random graphs with fixed degrees and arbitrary degree distributions. arXiv preprint arXiv:2008.11002, 2020, https://arxiv.org/abs/2008.11002.

[55] George T Cantwell and MEJ Newman. Message passing on networks with loops. Proceedings of the National Academy of Sciences, 116(47):23398–23403, 2019, doi:10.1073/pnas.1914893116.

[56] Pierpaolo Vivo. Index of a matrix, complex logarithms, and multidimensional Fresnel integrals.
Journal of Physics A: Mathematical and Theoretical, 54(2):025002, 2020, doi:10.1088/1751-8121/abccf9.

[57] Giacomo Livan, Marcel Novaes, and Pierpaolo Vivo. Introduction to random matrices: theory and practice, volume 26. Springer, 2018, doi:10.1007/978-3-319-70885-0. Available on the arXiv at https://arxiv.org/abs/1712.07903.

[58] Andries E Brouwer and Willem H Haemers. Spectra of graphs. Springer Science & Business Media, 2011, doi:10.1007/978-1-4614-1939-6.

[59] Izrail S Gradshteyn and Iosif M Ryzhik. Table of integrals, series, and products. Academic Press, 2014, doi:10.1016/B978-0-12-294760-5.50017-9.

[60] Paul M Goldbart, Horacio E Castillo, and Annette Zippelius. Randomly crosslinked macromolecular systems: vulcanization transition to and properties of the amorphous solid state. Advances in Physics, 45(5):393–468, 1996, doi:10.1080/00018739600101527.

[61] Michel Bauer and Olivier Golinelli. Random incidence matrices: moments of the spectral density. Journal of Statistical Physics, 103(1-2):301–337, 2001, doi:10.1023/A:1004879905284.

[62] Olivier Golinelli. Statistics of delta peaks in the spectral density of large random trees. arXiv preprint cond-mat/0301437, 2003, https://arxiv.org/abs/cond-mat/0301437.

[63] Marc Mézard and Giorgio Parisi. The Bethe lattice spin glass revisited. The European Physical Journal B - Condensed Matter and Complex Systems, 20(2):217–233, 2001, doi:10.1007/PL00011099.

[64] Florent Krzakala, Federico Ricci-Tersenghi, Lenka Zdeborová, Riccardo Zecchina, Eric W Tramel, and Leticia F Cugliandolo. Statistical physics, optimization, inference, and message-passing algorithms. Oxford University Press, 2016, doi:10.1093/acprof:oso/9780198743736.001.0001.

[65] Paul Erdős and Alfréd Rényi. On random graphs I. Publ. Math. Debrecen, 6(290-297):18, 1959.

[66] Paul Erdős and Alfréd Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17–60, 1960.

[67] Béla Bollobás. The evolution of random graphs.
Transactions of the American Mathematical Society, 286(1):257–274, 1984, doi:10.1090/s0002-9947-1984-0756039-5.

[68] Béla Bollobás. Random graphs. Number 73. Cambridge University Press, 2001, doi:10.1017/CBO9780511814068.

[69] Sergey N Dorogovtsev and Jose FF Mendes. Evolution of networks. Advances in Physics, 51(4):1079–1187, 2002, doi:10.1080/00018730110112519.

[70] Anthony CC Coolen, Alessia Annibale, and Ekaterina Roberts. Generating random networks and graphs. Oxford University Press, 2017, doi:10.1093/oso/9780198709893.001.0001.

[71] John A Bondy and Uppaluri SR Murty. Graph theory with applications, volume 290. Macmillan London, 1976.