Multigraph limit of the dense configuration model and the preferential attachment graph
Balázs Ráth∗ László Szakács†

November 17, 2018
Abstract
The configuration model is the most natural model to generate a random multigraph with a given degree sequence. We use the notion of dense graph limits to characterize the special form of limit objects of convergent sequences of configuration models. We apply these results to calculate the limit object corresponding to the dense preferential attachment graph and the edge reconnecting model. Our main tools in doing so are (1) the relation between the theory of graph limits and that of partially exchangeable random arrays, and (2) an explicit construction of our random graphs that uses urn models.
The notion of dense graph limits was introduced in [10] and has been further developed over the years; see [9] for a recent survey. Heuristically, the theory of dense graph limits gives a compact way to characterize the statistics of a randomly chosen small subgraph of a large dense graph. In [5] the graph limits of various sequences of random dense graphs were calculated, and in this paper we proceed with the investigation of this topic.

∗ ETH Zürich, Department of Mathematics, Rämistrasse 101, 8092 Zürich. Email: [email protected].
† Eötvös Loránd University, Institute of Mathematics, Pázmány Péter sétány 1/C, 1117 Budapest. Email: [email protected].
Keywords: dense graph limits, multigraphs, configuration model, preferential attachment. MSC2010 classification: 05C80 (Random graphs).

Our objects of study are multigraphs rather than simple graphs, i.e. we allow parallel and loop edges: this choice makes the definition of the limit objects of convergent multigraph sequences (multigraphons) slightly more complicated than the limit objects of convergent simple graph sequences. Given a degree sequence, the configuration model generates a random multigraph as follows: we draw $d(v)$ stubs (half-edges) at each vertex $v$ and then uniformly choose one from the set of possible matchings of these stubs. In this paper we call such random multigraphs edge stationary (for reasons that will become clear later) and in Theorem 1 we characterize the special form of limiting multigraphons that arise as the limit of random dense edge stationary multigraph sequences. Roughly speaking, our theorem states that the number of edges connecting the vertices $v$ and $w$ has Poisson distribution with parameter proportional to $d(v)d(w)$.

We also investigate two random graph models which have different definitions but turn out to have the same distribution:

• The edge reconnecting model is a random multigraph evolving in time. Denote the multigraph at time $T$ by $G_n(T)$, where $T = 0, 1, 2, \dots$ and $n = |V(G_n(T))|$ is the number of vertices.
We denote by $m = |E(G_n(T))|$ the number of edges (the number of vertices and edges does not change over time). Given the multigraph $G_n(T)$ we get $G_n(T+1)$ by uniformly choosing an edge in $E(G_n(T))$, choosing one of the endpoints of that edge with a coin flip and reconnecting the edge to a new endpoint which is chosen using the rule of linear preferential attachment: a vertex $v$ is chosen with probability $\frac{d(v)+\kappa}{2m+n\kappa}$, where $d(v)$ is the degree of vertex $v$ in $G_n(T)$ and $\kappa \in (0,+\infty)$ is a fixed parameter of the edge reconnecting model. We look at the unique stationary distribution of this multigraph-valued Markov chain, which is a random multigraph on $n$ vertices and $m$ edges.

• In Section 3.4 of [5] a random multigraph called the preferential attachment graph with $n$ nodes and $m$ edges (briefly $\mathrm{PAG}(n,m)$) is defined. We slightly generalize the definition to obtain $\mathrm{PAG}_\kappa(n,m)$, where $\kappa \in (0,+\infty)$ is a fixed parameter: let $V = \{v_1,\dots,v_n\}$ be a set of vertices. We create a sequence $v^*_1,\dots,v^*_{2m}$ with elements from $V$ by starting with the empty sequence and appending random elements of $V$ one by one. If the current length of the sequence is $L$ then we choose the next element $v^*_{L+1}$ to be equal to $v \in V$ with probability $\frac{d(v)+\kappa}{L+n\kappa}$, where $d(v)$ is the multiplicity of $v$ in the sequence $v^*_1,\dots,v^*_L$. Now we create the random multigraph $\mathrm{PAG}_\kappa(n,m)$ on the vertex set $V$ by adding the edges of the form $\{v^*_{2k-1}, v^*_{2k}\}$ for each $k = 1,\dots,m$.

Lemma 2.1 states that the two random multigraphs described above have the same distribution. In Theorem 2 we give the limiting multigraphon of this random multigraph when $n \to \infty$ and the edge density $2m/n^2$ converges to a fixed parameter $\rho \in (0,+\infty)$ of the model, called the edge density. Roughly speaking, the limiting multigraphon can be described as follows: it is edge stationary, and the rescaled degrees of vertices have Gamma distribution with parameters depending on $\kappa$ and $\rho$.
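To make the sequential definition concrete, here is a minimal simulation sketch of the $\mathrm{PAG}_\kappa(n,m)$ construction described above (the function name `sample_pag` and the 0-indexed vertices are our own illustrative choices, not part of the paper):

```python
import random

def sample_pag(n, m, kappa, rng=random):
    """Sketch of PAG_kappa(n, m): build the sequence v*_1, ..., v*_{2m}
    by linear preferential attachment, then pair consecutive elements
    of the sequence into the m edges of the multigraph."""
    d = [0] * n                # d[v] = multiplicity of v in the sequence so far
    seq = []
    for L in range(2 * m):     # L = current length of the sequence
        # v is appended with probability (d[v] + kappa) / (L + n * kappa)
        v = rng.choices(range(n), weights=[d[u] + kappa for u in range(n)])[0]
        seq.append(v)
        d[v] += 1
    # adjacency matrix with B[i][i] = twice the number of loops at i
    B = [[0] * n for _ in range(n)]
    for k in range(m):
        i, j = seq[2 * k], seq[2 * k + 1]
        B[i][j] += 1
        B[j][i] += 1           # a loop (i == j) contributes 2 to B[i][i]
    return B

B = sample_pag(n=5, m=20, kappa=1.0, rng=random.Random(42))
assert sum(sum(row) for row in B) == 2 * 20   # total degree is 2m
```

The final assertion checks the bookkeeping convention used throughout the paper: loops contribute 2 to the diagonal entry, so the total degree is always $2m$.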
The precise statements of these theorems, along with the necessary notations, can be found in Section 2. We end the Introduction by mentioning a few related results.

The configuration model is a random multigraph, but if we condition it to have no multiple and loop edges, then the resulting random simple graph is uniformly distributed given its degree sequence. In [6] the description of the limiting graphon of such sequences of simple dense graphs (and a continuous version of the Erdős–Gallai characterization of degree sequences) is given.

In [13] we give a characterization of the time evolution of the edge reconnecting model, viewed through the prism of the theory of multigraphons: roughly speaking, if we start the edge reconnecting model from an arbitrary initial multigraph, then we have to run our process for $n \ll T$ steps until $G_n(T)$ becomes "edge stationary", and even longer until $G_n(T)$ becomes "stationary".

Acknowledgement.
The authors thank László Lovász for posing the research problem that became the subject of this paper.

The research of Balázs Ráth was partially supported by the OTKA (Hungarian National Research Fund) grants K 60708 and CNK 77778, Morgan Stanley Analytics Budapest and Collegium Budapest, and the grant ERC-2009-AdG 245728-RWPERCRI. The research of László Szakács was partially supported by the OTKA grant NK 67867.
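For readers who prefer pseudocode to prose, the configuration model of the Introduction admits a very short simulation sketch (the helper name `configuration_model` is ours; a uniform matching of the stubs is obtained by pairing up a uniformly shuffled list):

```python
import random

def configuration_model(degrees, rng=random):
    """Sketch of the configuration model: lay down d(v) stubs at each
    vertex v, then pair the stubs according to a uniform matching."""
    stubs = [v for v, deg in enumerate(degrees) for _ in range(deg)]
    assert len(stubs) % 2 == 0, "the degree sum must be even"
    rng.shuffle(stubs)                 # uniform shuffle => uniform matching
    n = len(degrees)
    B = [[0] * n for _ in range(n)]    # adjacency matrix, loops counted twice
    for e in range(len(stubs) // 2):
        i, j = stubs[2 * e], stubs[2 * e + 1]
        B[i][j] += 1
        B[j][i] += 1
    return B

B = configuration_model([3, 2, 2, 1], rng=random.Random(0))
assert [sum(row) for row in B] == [3, 2, 2, 1]   # the degree sequence is preserved
```

Conditioning any edge stationary $X_n$ on its degree sequence yields, by definition, exactly the distribution produced by this procedure.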
Denote $\mathbb{N} = \{0,1,2,\dots\}$, $[n] := \{1,\dots,n\}$ and $[k..n] := \{k,\dots,n\}$. If $H_1$ and $H_2$ are arbitrary sets, denote by $f: H_1 \hookrightarrow H_2$ a generic injective function from $H_1$ to $H_2$.

Denote by $\mathcal{M}$ the set of undirected multigraphs (graphs with multiple and loop edges) and by $\mathcal{M}_n$ the set of multigraphs on $n$ vertices. Let $G \in \mathcal{M}_n$. The adjacency matrix of a labeling of the multigraph $G$ with $[n]$ is denoted by $(B(i,j))_{i,j=1}^{n}$, where $B(i,j) \in \mathbb{N}$ is the number of edges connecting the vertices labeled by $i$ and $j$. We have $B(i,j) = B(j,i)$ since the graph is undirected, and $B(i,i)$ is two times the number of loop edges at vertex $i$ (thus $B(i,i)$ is an even number). We denote the set of adjacency matrices of multigraphs on $n$ nodes by $\mathcal{A}_n$, thus

$\mathcal{A}_n = \{ B \in \mathbb{N}^{n \times n} : B^T = B, \ \forall i \in [n] \ 2 \mid B(i,i) \}.$

The degree of the vertex labeled by $i$ in $G$ with adjacency matrix $B \in \mathcal{A}_n$ is defined by $d(B,i) := \sum_{j=1}^{n} B(i,j)$, thus $d(B,i)$ is the number of stubs at $i$ (loop edges count twice). Let

$m = m(G) = m(B) = \frac{1}{2} \sum_{i,j=1}^{n} B(i,j) = \frac{1}{2} \sum_{i=1}^{n} d(B,i)$

denote the number of edges. Denote by $\mathcal{A}^m_n$ the set of adjacency matrices on $n$ vertices with $m$ edges.

An unlabeled multigraph is the equivalence class of labeled multigraphs where two labeled graphs are equivalent if one can be obtained by relabeling the other. Thus $\mathcal{M}$ is the set of these equivalence classes of labeled multigraphs, which are also called isomorphism types. Suppose $F \in \mathcal{M}_k$, $G \in \mathcal{M}_n$ and denote by $A \in \mathcal{A}_k$ and $B \in \mathcal{A}_n$ the adjacency matrices of $F$ and $G$. If $g: \mathcal{M} \to \mathbb{R}$ then we say that $g$ is a multigraph parameter. Let $g(A) := g(F)$. Conversely, if $g: \bigcup_{k=1}^{\infty} \mathcal{A}_k \to \mathbb{R}$ is constant on isomorphism classes, then $g$ defines a multigraph parameter. We define the induced homomorphism density of $F$ into $G$ by

$t_=(F,G) := t_=(A,B) := \frac{1}{n^k} \sum_{\varphi:[k]\to[n]} 1\!1[\forall i,j \in [k]: A(i,j) = B(\varphi(i),\varphi(j))]. \quad (1)$

The notion of convergence of simple graph sequences and several equivalent characterizations of graphons (limit objects of convergent graph sequences) were given in [10]. In [8] a natural generalization of the theory of dense graph limits to multigraphs is given (see also [12] for similar results in a more general setting). We say that a sequence of multigraphs $(G_n)_{n=1}^{\infty}$ is convergent if for every $k \in \mathbb{N}$ and every multigraph $F \in \mathcal{M}_k$ the limit $g(F) = \lim_{n\to\infty} t_=(F,G_n)$ exists, and moreover we have $\sum_{A \in \mathcal{A}_k} g(A) = 1$. The limit object of a convergent multigraph sequence is a measurable function $W: [0,1] \times [0,1] \times \mathbb{N} \to [0,1]$ satisfying

$W(x,y,k) \equiv W(y,x,k), \qquad \sum_{k=0}^{\infty} W(x,y,k) \equiv 1, \qquad W(x,x,2k+1) \equiv 0. \quad (2)$

Such functions are called multigraphons. For every multigraphon $W$ and multigraph $F \in \mathcal{M}_k$ with adjacency matrix $A \in \mathcal{A}_k$ we define

$t_=(F,W) := t_=(A,W) := \int_{[0,1]^k} \prod_{1 \le i \le j \le k} W(x_i, x_j, A(i,j)) \, \mathrm{d}x_1 \, \mathrm{d}x_2 \dots \mathrm{d}x_k. \quad (3)$

We say that $G_n \to W$ if for every $k \in \mathbb{N}$ and every $F \in \mathcal{M}_k$ we have $\lim_{n\to\infty} t_=(F,G_n) = t_=(F,W)$. Theorem 1 of [8] states that if a sequence of multigraphs $(G_n)_{n=1}^{\infty}$ is convergent then $G_n \to W$ for some multigraphon $W$ and, conversely, every multigraphon $W$ arises this way. The limiting multigraphon of a convergent sequence is not unique, but if we define the equivalence relation $W_1 \cong W_2$ by $\forall F \in \mathcal{M}: t_=(F,W_1) = t_=(F,W_2)$, then obviously $G_n \to W_1$, $G_n \to W_2$ implies $W_1 \cong W_2$. For other characterisations of the equivalence relation $\cong$ for graphons, see [4].

For a multigraphon $W$ and $x \in [0,1]$ we define the average degree of $W$ at $x$ and the edge density of $W$ by

$D(W,x) := \int_0^1 \sum_{k=0}^{\infty} k \cdot W(x,y,k) \, \mathrm{d}y, \quad (4)$

$\rho(W) := \int_0^1 \int_0^1 \sum_{k=0}^{\infty} k \cdot W(x,y,k) \, \mathrm{d}y \, \mathrm{d}x. \quad (5)$

If $\rho(W) < +\infty$ then $D(W,x) < +\infty$ for Lebesgue-almost all $x$. Given a multigraphon $W$ we define the degree distribution function of $W$ by

$F_W(z) = \int_0^1 1\!1[D(W,x) \le z] \, \mathrm{d}x, \qquad z \ge 0. \quad (6)$

Indeed, $F_W(\cdot)$ is a probability distribution function on $[0,\infty)$, i.e. it is nonnegative, right continuous, increasing and satisfies $\lim_{z\to\infty} F_W(z) = 1$. It is easy to see that we have $\rho(W) = \int_0^\infty z \, \mathrm{d}F_W(z)$. Denote by

$F_W^{-}(u) := \min\{z : F_W(z) \ge u\}, \qquad u \in (0,1). \quad (7)$

We denote a random element of $\mathcal{A}_n$ by $X_n$. We may associate a random multigraph $G_n$ to $X_n$ by taking the isomorphism class of $X_n$. We say that a sequence of random multigraphs $(G_n)_{n=1}^{\infty}$ converges in probability to a multigraphon $W$ (or briefly write $G_n \xrightarrow{p} W$) if for every multigraph $F$ we have $t_=(F,G_n) \xrightarrow{p} t_=(F,W)$, i.e.

$\forall F \in \mathcal{M} \ \forall \varepsilon > 0: \ \lim_{n\to\infty} P(|t_=(F,G_n) - t_=(F,W)| > \varepsilon) = 0. \quad (8)$

We say that $X_n \xrightarrow{p} W$ if $G_n \xrightarrow{p} W$ holds for the associated random multigraphs. Note that the definitions of the edge reconnecting model and the $\mathrm{PAG}_\kappa$ (see Section 1) in fact naturally give rise to a random labeled graph, i.e. a random element $X_n$ of $\mathcal{A}_n$. The edge reconnecting Markov chain is easily seen to be irreducible and aperiodic on the state space $\mathcal{A}^m_n$, thus the stationary distribution is indeed unique.

We say that the distribution of $X_n$ is edge stationary if the conditional distribution of $X_n$ given the degree sequence $(d(X_n,i))_{i=1}^{n}$ is the same as that of the configuration model (see Section 1) with the same degree sequence.

Recall the formulas defining the Poisson and Gamma densities:

$p(k,\lambda) := e^{-\lambda} \frac{\lambda^k}{k!}, \quad (9)$

$g(x,\alpha,\beta) := \frac{x^{\alpha-1} \beta^{\alpha} e^{-\beta x}}{\Gamma(\alpha)} \, 1\!1[x > 0]. \quad (10)$

We say that a nonnegative integer-valued random variable $X$ has Poisson distribution with parameter $\lambda$ (briefly $X \sim \mathrm{POI}(\lambda)$) if $P(X = k) = p(k,\lambda)$ for all $k \in \mathbb{N}$. We say that a nonnegative real-valued random variable $Z$ has Gamma distribution with parameters $\alpha$ and $\beta$ (briefly $Z \sim \mathrm{Gamma}(\alpha,\beta)$) if $P(Z \le z) = \int_0^z g(x,\alpha,\beta) \, \mathrm{d}x$. For a real-valued nonnegative random variable $X$ define $E(X; m) := E(X \cdot 1\!1[X \ge m])$.

First we state our theorem characterizing the form of multigraph limits of edge stationary multigraph sequences:
Theorem 1.
Let $W$ denote a multigraphon with $\rho(W) < +\infty$. If $X_n$ is an $\mathcal{A}_n$-valued edge stationary random variable for all $n \in \mathbb{N}$, $X_n \xrightarrow{p} W$, and the rescaled degrees of the sequence $(X_n)_{n=1}^{\infty}$ satisfy the uniform integrability condition

$\lim_{m\to\infty} \sup_{n\in\mathbb{N}} \frac{1}{n} \sum_{i=1}^{n} E\left(\frac{1}{n} d(X_n,i); m\right) = 0, \quad (11)$

then $W \cong W'$, where

$W'(x,y,k) = \begin{cases} p\left(k, \ \frac{F_W^-(x) F_W^-(y)}{\rho(W)}\right) & \text{if } x \neq y, \\[4pt] 1\!1[2 \mid k] \cdot p\left(\frac{k}{2}, \ \frac{F_W^-(x) F_W^-(y)}{2\rho(W)}\right) & \text{if } x = y. \end{cases} \quad (13)$

Lemma 2.1. The unique stationary distribution of the edge reconnecting model with linear preferential attachment parameter $\kappa$ and state space $\mathcal{A}^m_n$ has the same distribution as $\mathrm{PAG}_\kappa(n,m)$. If $X_n$ has this distribution then for all $B \in \mathcal{A}^m_n$

$P(X_n = B) = \frac{\prod_{i=1}^{n} \prod_{j=1}^{d(B,i)} (\kappa + j - 1)}{\prod_{j=1}^{2m} (\kappa n + j - 1)} \cdot \frac{m! \, 2^{m'(B)}}{\left(\prod_{i=1}^{n} \prod_{j=1}^{i-1} B(i,j)!\right) \left(\prod_{i=1}^{n} \left(\frac{B(i,i)}{2}\right)!\right)}, \quad (14)$

where $m'(B)$ denotes the number of non-loop edges of $B$.

At the end of Section 3.4 of [5] the following theorem is stated: let $\mathrm{SPAG}(n,m)$ denote the simple graph obtained from $\mathrm{PAG}(n,m)$ by deleting loops and keeping only one copy of the parallel edges. Then

$\mathrm{SPAG}\left(n, \ \frac{n^2}{2}(\rho + o(1))\right) \xrightarrow{p} W_s, \qquad W_s(x,y) := 1 - \exp(-\rho \ln(x) \ln(y)), \quad (15)$

where (analogously to (8)) the symbol $\xrightarrow{p}$ denotes convergence in probability of a sequence of random simple graphs to a (simple) graphon. It is easy to see that (15) is a corollary of the following theorem:

Theorem 2. Let us fix $\kappa, \rho \in (0,+\infty)$. If $X_n$ is a random element of $\mathcal{A}^{m(n)}_n$ with distribution (14) for $n = 1, 2, \dots$, and moreover the asymptotic edge density is $\lim_{n\to\infty} 2m(n)/n^2 = \rho$, then $X_n \xrightarrow{p} W$ where

$W(x,y,k) = \begin{cases} p\left(k, \ \frac{F^-(x) F^-(y)}{\rho}\right) & \text{if } x \neq y, \\[4pt] 1\!1[2 \mid k] \cdot p\left(\frac{k}{2}, \ \frac{F^-(x) F^-(y)}{2\rho}\right) & \text{if } x = y, \end{cases} \quad (16)$

and $F^-$ is the inverse function of $F(z) = \int_0^z g(y, \kappa, \kappa/\rho) \, \mathrm{d}y$, see (10).

Note the similarity of the multigraphons appearing in (13) and (16): as we will see later, this is a consequence of the fact that the distribution of $\mathrm{PAG}_\kappa(n,m)$ is edge stationary. The proofs of the above stated theorems rely on the following ideas:

• We relate our random graph models to urn models with multiple colors (e.g. the well-known Pólya urn model): the number of balls is $2m$ and they are colored with $n$ possible colors. Each ball corresponds to a stub, each color corresponds to a labeled vertex, and the edge set of the multigraph depends on the positions of balls in the urn.

• We make use of the underlying symmetries of the distributions of our random graphs by relating the theory of graph limits to the theory of partially exchangeable arrays of random variables, a connection first observed in [7].

The rest of this paper is organized as follows. In Section 3 we introduce the notion of random, vertex exchangeable, infinite adjacency matrices as well as $W$-random multigraphons and deduce some useful results relating the convergence of these objects to graph limits. In Section 4 we relate the notion of edge stationarity to the ball exchangeability of the corresponding urn models and prove the convergence results stated above.

3 Vertex exchangeable arrays

In this section we introduce random infinite arrays $\mathbf{X} = (X(i,j))_{i,j=1}^{\infty}$ that arise as the adjacency matrices of random infinite labeled multigraphs and we give probabilistic meaning to the homomorphism densities $t_=(F,W)$ by introducing $W$-random infinite multigraphs $\mathbf{X}_W$. We also introduce the notion of the average degree $D(\mathbf{X},i)$ of a vertex $i$ in an infinite, dense, vertex exchangeable multigraph. In Subsection 3.1 we give a useful alternative characterisation of $G_n \xrightarrow{p} W$ using exchangeable arrays and prove that under certain technical conditions the average degrees of $G_n$ converge in distribution to the average degrees $D(\mathbf{X}_W,i)$ of the limiting $W$-random infinite array.

Let $\mathcal{A}_{\mathbb{N}}$ denote the set of adjacency matrices $(A(i,j))_{i,j=1}^{\infty}$ of countable multigraphs:

$\mathcal{A}_{\mathbb{N}} = \{ A \in \mathbb{N}^{\mathbb{N}\times\mathbb{N}} : \forall i,j \in \mathbb{N} \ A(i,j) \equiv A(j,i), \ \forall i \in \mathbb{N} \ 2 \mid A(i,i) \}.$
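The $W$-random multigraphs introduced in this section can be simulated directly for concrete multigraphons. As an illustration only (the Poisson choice below, with edge density $1$ and rescaled degree function $x \mapsto x$, is a toy instance of the edge stationary form appearing in our theorems; all names are ours):

```python
import math
import random

def sample_w_random(k, rng=random):
    """Sample a k x k W-random adjacency matrix for the toy multigraphon
    where, given uniform labels U_i, the entry X(i, j) is Poisson(U_i * U_j)
    off the diagonal and X(i, i) is twice a Poisson(U_i**2 / 2), so that
    diagonal entries are even, as adjacency matrices of multigraphs require."""
    def poisson(lam):
        # inverse-transform sampling of a Poisson(lam) random variable
        u, p, s, x = rng.random(), math.exp(-lam), math.exp(-lam), 0
        while u > s:
            x += 1
            p *= lam / x
            s += p
        return x
    U = [rng.random() for _ in range(k)]
    X = [[0] * k for _ in range(k)]
    for i in range(k):
        X[i][i] = 2 * poisson(U[i] ** 2 / 2)
        for j in range(i + 1, k):
            X[i][j] = X[j][i] = poisson(U[i] * U[j])
    return X

X = sample_w_random(6, rng=random.Random(1))
assert all(X[i][i] % 2 == 0 for i in range(6))   # X is an element of A_6
```

The conditional independence of the entries given the labels $U_i$ is exactly the structure of Definition 3.1 below.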
We consider the probability space $(\mathcal{A}_{\mathbb{N}}, \mathcal{F}, P)$ where $\mathcal{F}$ is the coarsest sigma-algebra with respect to which $A(i,j)$ is measurable for all $i,j$ and $P$ is a probability measure on the measurable space $(\mathcal{A}_{\mathbb{N}}, \mathcal{F})$. We are going to denote the infinite random array with distribution $P$ by $\mathbf{X} = (X(i,j))_{i,j=1}^{\infty}$. We use the standard notation $\mathbf{X} \sim \mathbf{Y}$ if $\mathbf{X}$ and $\mathbf{Y}$ are identically distributed (i.e. their distribution $P$ is identical on $(\mathcal{A}_{\mathbb{N}}, \mathcal{F})$). If $\mathbf{X}$ is a random element of $\mathcal{A}_{\mathbb{N}}$, let $\mathbf{X}^{[k]}$ be the random element of $\mathcal{A}_k$ defined by $\mathbf{X}^{[k]} := (X(i,j))_{i,j=1}^{k}$.

Definition 3.1 ($W$-random infinite multigraphons). Let $(U_i)_{i=1}^{\infty}$ be independent random variables uniformly distributed in $[0,1]$. Given a multigraphon $W$ we define the random countable adjacency matrix $\mathbf{X}_W = (X_W(i,j))_{i,j=1}^{\infty}$ as follows: given the background variables $(U_i)_{i=1}^{\infty}$ the random variables $(X_W(i,j))_{i \le j \in \mathbb{N}}$ are conditionally independent and $P(X_W(i,j) = m \mid (U_i)_{i=1}^{\infty}) = W(U_i, U_j, m)$, that is, if $A \in \mathcal{A}_k$ then we have

$P\left(\mathbf{X}_W^{[k]} = A \mid (U_i)_{i=1}^{\infty}\right) := \prod_{1 \le i \le j \le k} W(U_i, U_j, A(i,j)). \quad (17)$

In plain words: if $i \neq j$ and $U_i = x$, $U_j = y$ then the number of multiple edges between the vertices labeled by $i$ and $j$ in $\mathbf{X}_W$ has distribution $(W(x,y,k))_{k=0}^{\infty}$ and the number of loop edges at vertex $i$ has distribution $(W(x,x,2k))_{k=0}^{\infty}$ (these are indeed proper probability distributions by (2)). For every multigraphon $W$ and multigraph $F \in \mathcal{M}_k$ with adjacency matrix $A \in \mathcal{A}_k$ we have

$t_=(F,W) \overset{(3),(17)}{=} P\left(\mathbf{X}_W^{[k]} = A\right). \quad (18)$

Recalling (4) and (5) we have

$D(W,x) = E\left(X_W(1,2) \mid U_1 = x\right), \qquad \rho(W) = E(X_W(1,2)). \quad (19)$

If $\rho(W) < +\infty$ then $D(W, U_1) < +\infty$ almost surely.

We say that a random infinite array $\mathbf{X} = (X(i,j))_{i,j=1}^{\infty}$ is vertex exchangeable if

$(X(\tau(i), \tau(j)))_{i,j=1}^{\infty} \sim (X(i,j))_{i,j=1}^{\infty} \quad (20)$

for all finitely supported permutations $\tau: \mathbb{N} \to \mathbb{N}$. We call $\mathbf{X} = (X(i,j))_{i,j=1}^{\infty}$ dissociated if for all $m, n \in \mathbb{N}$ the $\mathcal{A}_n$-valued random variable $(X(i,j))_{i,j=1}^{n}$ is independent of the $\mathcal{A}_m$-valued random variable $(X(i,j))_{i,j=n+1}^{n+m}$. In our case an infinite exchangeable array can be thought of as the adjacency matrix of a random multigraph with vertex set $\mathbb{N}$: the adjacency matrix of this random infinite multigraph is vertex exchangeable if and only if the distribution of the random graph is invariant under the relabeling of the vertices, and dissociated if and only if subgraphs spanned by disjoint vertex sets are independent.

It follows from Definition 3.1 that $\mathbf{X}_W$ is vertex exchangeable and dissociated, and by Aldous' representation theorem (see Theorem 1.4, Proposition 3.3 and Theorem 5.1 in [1]) the converse holds: a random element $\mathbf{X}$ of $\mathcal{A}_{\mathbb{N}}$ is vertex exchangeable and dissociated if and only if $\mathbf{X} \sim \mathbf{X}_W$ for some multigraphon $W$. Although the notion of the $W$-random graph (see Definition 3.1) is already present in [10], the connection of Aldous' representation theorem with the theory of graph limits was first observed in [7]. See also Theorem 3.1, Theorem 3.2 and Proposition 3.4 of [11]. For a self-contained proof of this representation theorem for multigraphons, see Theorem 1 and Theorem 2 in [8].

For a vertex exchangeable infinite array $\mathbf{X}$ satisfying $E(X(1,2)) < +\infty$ we define the average degree of $\mathbf{X}$ at vertex $i$ by

$D(\mathbf{X}, i) := \lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} X(i,j). \quad (21)$

The sum $\frac{1}{n} \sum_{j=1}^{n} X(i,j)$ indeed almost surely converges to a random variable as $n \to \infty$ by de Finetti's theorem (see Section 2.1 of [2]) and the conditional strong law of large numbers. From (4), Definition 3.1 and (19) we get

$D(\mathbf{X}_W, i) = \lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} X_W(i,j) \overset{\text{a.s.}}{=} D(W, U_i). \quad (22)$

In this subsection we state and prove two lemmas: in Lemma 3.1 we relate convergence of dense random multigraphs to convergence of the probability measures of the corresponding random arrays, and in Lemma 3.2 we give sufficient conditions under which convergence of dense random multigraphs implies convergence of the degree distribution of these graphs.

We say that a sequence of random infinite arrays $(\mathbf{X}_n)_{n=1}^{\infty}$ converges in distribution to a random infinite array $\mathbf{X}$ (or briefly denote $\mathbf{X}_n \xrightarrow{d} \mathbf{X}$) if $\mathbf{X}_n^{[k]}$ converges in distribution to $\mathbf{X}^{[k]}$ for all $k \in \mathbb{N}$, i.e.

$\forall k \in \mathbb{N}, \ A \in \mathcal{A}_k: \ \lim_{n\to\infty} P\left(A = \mathbf{X}_n^{[k]}\right) = P\left(A = \mathbf{X}^{[k]}\right). \quad (23)$

If $\mathbf{X}_n$ is vertex exchangeable for all $n$, then $\mathbf{X}$ is also vertex exchangeable.

Let $X_n$ denote a random element of $\mathcal{A}_n$. We say that the distribution of $X_n$ is vertex exchangeable if for all permutations $\tau: [n] \to [n]$ and $B \in \mathcal{A}_n$

$P(\forall i,j \in [n]: B(i,j) = X_n(i,j)) = P(\forall i,j \in [n]: B(i,j) = X_n(\tau(i), \tau(j))), \quad (24)$

that is, $(X_n(i,j))_{i,j=1}^{n} \sim (X_n(\tau(i), \tau(j)))_{i,j=1}^{n}$ holds. If $X_n$ is a random element of $\mathcal{A}_n$ then $X_n^{[k]} = (X_n(i,j))_{i,j=1}^{k}$ is well-defined for $k \le n$, thus we may define $X_n \xrightarrow{d} \mathbf{X}$ (where $\mathbf{X}$ is a random element of $\mathcal{A}_{\mathbb{N}}$) by (23). It is easy to show that if $X_n$ is vertex exchangeable for each $n \in \mathbb{N}$ then $\mathbf{X}$ inherits this property. Also note that by (18) we have $X_n \xrightarrow{d} \mathbf{X}_W$ if and only if for all $k \in \mathbb{N}$ and for all $A \in \mathcal{A}_k$ we have $\lim_{n\to\infty} P(X_n^{[k]} = A) = t_=(A,W)$.

Lemma 3.1. Let $X_n = (X_n(i,j))_{i,j=1}^{n}$ be a random, vertex exchangeable element of $\mathcal{A}_n$ for all $n \in \mathbb{N}$. The following statements are equivalent:

(a) $X_n \xrightarrow{p} W$, that is, $\forall k \ \forall A \in \mathcal{A}_k: \ t_=(A, X_n) \xrightarrow{p} t_=(A,W)$;

(b) $X_n \xrightarrow{d} \mathbf{X}_W$, that is, $\forall k \ \forall A \in \mathcal{A}_k: \ \lim_{n\to\infty} P(X_n^{[k]} = A) = t_=(A,W)$.

Proof. We are going to use the fact $\lim_{n\to\infty} \frac{n(n-1)\cdots(n-k+1)}{n^k} = 1$ many times in this proof. We first prove (a) $\Rightarrow$ (b):

$\lim_{n\to\infty} P\left(X_n^{[k]} = A\right) \overset{(24)}{=} \lim_{n\to\infty} \frac{(n-k)!}{n!} \sum_{\varphi:[k]\hookrightarrow[n]} P\left((X_n(\varphi(i),\varphi(j)))_{i,j=1}^{k} = A\right) = \lim_{n\to\infty} \frac{1}{n^k} \sum_{\varphi:[k]\to[n]} P\left((X_n(\varphi(i),\varphi(j)))_{i,j=1}^{k} = A\right) \overset{(1)}{=} \lim_{n\to\infty} E(t_=(A,X_n)) \overset{(a)}{=} t_=(A,W). \quad (25)$

Now we prove (b) $\Rightarrow$ (a). The idea of this proof comes from Lemma 2.4 of [10]. From (b) we get $E(t_=(A,X_n)) \to t_=(A,W)$ for all $A$ by the argument used in (25). In order to have $t_=(A,X_n) \xrightarrow{p} t_=(A,W)$ we only need to show

$\lim_{n\to\infty} \mathbf{D}^2(t_=(A,X_n)) = \lim_{n\to\infty} E\left(t_=(A,X_n)^2\right) - t_=(A,W)^2 = 0.$

Indeed,

$\lim_{n\to\infty} E\left(t_=(A,X_n)^2\right) \overset{(1)}{=} \lim_{n\to\infty} \frac{1}{n^{2k}} \sum_{\varphi:[2k]\to[n]} P\left(A = (X_n(\varphi(i),\varphi(j)))_{i,j=1}^{k}, \ A = (X_n(\varphi(i),\varphi(j)))_{i,j=k+1}^{2k}\right) = \lim_{n\to\infty} \frac{(n-2k)!}{n!} \sum_{\varphi:[2k]\hookrightarrow[n]} P\left(A = (X_n(\varphi(i),\varphi(j)))_{i,j=1}^{k}, \ A = (X_n(\varphi(i),\varphi(j)))_{i,j=k+1}^{2k}\right) \overset{(24)}{=} \lim_{n\to\infty} P\left(A = (X_n(i,j))_{i,j=1}^{k}, \ A = (X_n(i,j))_{i,j=k+1}^{2k}\right) \overset{(b)}{=} P\left(A = (X_W(i,j))_{i,j=1}^{k}, \ A = (X_W(i,j))_{i,j=k+1}^{2k}\right) \overset{(*)}{=} t_=(A,W)^2.$

In the equation $(*)$ we used the fact that $\mathbf{X}_W$ is dissociated, and (18).

Recall that for a real-valued nonnegative random variable $X$ we denote $E(X; m) := E(X \cdot 1\!1[X \ge m])$. A sequence of real-valued nonnegative random variables $(X_n)_{n=1}^{\infty}$ is uniformly integrable (see Chapter 13 of [15]) if

$\lim_{m\to\infty} \sup_{n} E(X_n; m) = 0.$

Now we state and prove a lemma in which we give sufficient conditions under which $\tilde{X}_n \xrightarrow{d} \mathbf{X}$ implies $\frac{1}{n} d(\tilde{X}_n, i) \xrightarrow{d} D(\mathbf{X}, i)$.
Note that some extra conditions are indeed needed, because it might happen that very few pairs of vertices of $\tilde{X}_n$ with a huge number of parallel edges between them remain invisible if we only sample small subgraphs of $\tilde{X}_n$, but still cause a significant distortion in the distribution of the degrees of vertices in $\tilde{X}_n$. This phenomenon is related to the fact that weak convergence of a sequence of random variables $X_n \xrightarrow{d} X$ does not necessarily imply the convergence of the means of $X_n$ to that of $X$: the uniform integrability of $(X_n)_{n=1}^{\infty}$ is a sufficient (and essentially necessary) condition that rules out pathological behavior.

Lemma 3.2. (i) If $(\mathbf{X}_n)_{n=1}^{\infty}$ is a sequence of infinite vertex exchangeable arrays, the sequence $(X_n(1,2))_{n=1}^{\infty}$ is uniformly integrable and $\mathbf{X}_n \xrightarrow{d} \mathbf{X}$, then for all $k \in \mathbb{N}$ we have

$\left(\mathbf{X}_n^{[k]}, (D(\mathbf{X}_n,i))_{i=1}^{k}\right) \xrightarrow{d} \left(\mathbf{X}^{[k]}, (D(\mathbf{X},i))_{i=1}^{k}\right). \quad (26)$

(ii) If $\tilde{X}_n$ is a random, vertex exchangeable element of $\mathcal{A}_n$ for each $n \in \mathbb{N}$, $\tilde{X}_n \xrightarrow{d} \mathbf{X}$ holds for some infinite vertex exchangeable array $\mathbf{X}$ and the sequences $(\tilde{X}_n(1,2))_{n=1}^{\infty}$ and $(\tilde{X}_n(1,1))_{n=1}^{\infty}$ are uniformly integrable, then for all $k \in \mathbb{N}$

$\left(\tilde{X}_n^{[k]}, \left(\tfrac{1}{n} d(\tilde{X}_n,i)\right)_{i=1}^{k}\right) \xrightarrow{d} \left(\mathbf{X}^{[k]}, (D(\mathbf{X},i))_{i=1}^{k}\right). \quad (27)$

Proof. Proof of (i): We first prove that (26) holds if we further assume $P(X_n(i,j) \le m) \equiv 1$ for some $m \in \mathbb{N}$. By the method of moments we only need to show that for all $\mu_{i,j} \in \mathbb{N}$, $1 \le i \le j \le k$ and $\nu_i \in \mathbb{N}$, $1 \le i \le k$ we have

$\lim_{n\to\infty} E\left(\prod_{i \le j \le k} X_n(i,j)^{\mu_{i,j}} \cdot \prod_{i=1}^{k} D(\mathbf{X}_n,i)^{\nu_i}\right) = E\left(\prod_{i \le j \le k} X(i,j)^{\mu_{i,j}} \cdot \prod_{i=1}^{k} D(\mathbf{X},i)^{\nu_i}\right). \quad (28)$

For every $i \in [k]$ choose $J(i) \subseteq \mathbb{N}$ such that for all $i$ we have $|J(i)| = \nu_i$ and $J(i) \cap [k] = \emptyset$, and moreover for all $i \neq i'$ we have $J(i) \cap J(i') = \emptyset$.
In order to prove (28) we first show that if $P(X(i,j) \le m) \equiv 1$ for some $m \in \mathbb{N}$ then

$E\left(\prod_{i \le j \le k} X(i,j)^{\mu_{i,j}} \cdot \prod_{i=1}^{k} D(\mathbf{X},i)^{\nu_i}\right) = E\left(\prod_{i \le j \le k} X(i,j)^{\mu_{i,j}} \cdot \prod_{i=1}^{k} \prod_{j \in J(i)} X(i,j)\right). \quad (29)$

Denote $\nu := \sum_{i=1}^{k} \nu_i$ and $\bar{\nu} := \{(i,l) : i \in [k], l \in [\nu_i]\}$ and $X^{[k],\mu} := \prod_{i \le j \le k} X(i,j)^{\mu_{i,j}}$. Using (21) and dominated convergence, the left-hand side of (29) is equal to

$\lim_{n\to\infty} E\left(X^{[k],\mu} \prod_{i=1}^{k} \Big(\frac{1}{n} \sum_{j=1}^{n} X(i,j)\Big)^{\nu_i}\right) = \lim_{n\to\infty} \frac{1}{n^{\nu}} \sum_{j: \bar{\nu} \to [n]} E\left(X^{[k],\mu} \prod_{i=1}^{k} \prod_{l=1}^{\nu_i} X(i, j(i,l))\right) = \lim_{n\to\infty} \frac{1}{n^{\nu}} \sum_{j: \bar{\nu} \hookrightarrow [k..n]} E\left(X^{[k],\mu} \prod_{i=1}^{k} \prod_{l=1}^{\nu_i} X(i, j(i,l))\right) \overset{(20)}{=} \lim_{n\to\infty} \frac{1}{n^{\nu}} \sum_{j: \bar{\nu} \hookrightarrow [k..n]} E\left(X^{[k],\mu} \prod_{i=1}^{k} \prod_{j' \in J(i)} X(i,j')\right).$

Now the right-hand side of the above equation is easily shown to be equal to the right-hand side of (29). Having established (29), our assumptions $\mathbf{X}_n \xrightarrow{d} \mathbf{X}$ and $P(X_n(i,j) \le m) \equiv 1$ reduce (28) to the convergence of the corresponding joint moments of finite subarrays of $\mathbf{X}_n$, and this follows from the definition of $\mathbf{X}_n \xrightarrow{d} \mathbf{X}$ (for details on $\xrightarrow{d}$, see [3]).

Having established (26) under the condition $P(X_n(i,j) \le m) \equiv 1$, we now remove this restriction: define the truncated arrays by $X^m(i,j) := \min\{X(i,j), m\}$; then for each $m \in \mathbb{N}$ we have $\mathbf{X}^m_n \xrightarrow{d} \mathbf{X}^m$, from which

$\left(\mathbf{X}_n^{m,[k]}, (D(\mathbf{X}_n^m,i))_{i=1}^{k}\right) \xrightarrow{d} \left(\mathbf{X}^{m,[k]}, (D(\mathbf{X}^m,i))_{i=1}^{k}\right) \quad (30)$

follows by the previous argument. By uniform integrability, for every $\varepsilon > 0$ there is an $m$ such that for all $n$ we have

$E(D(\mathbf{X}_n,i) - D(\mathbf{X}_n^m,i)) \overset{(19)}{=} E(X_n(1,2) - \min\{X_n(1,2), m\}) \le \varepsilon. \quad (31)$

It follows from Fatou's lemma that $E(D(\mathbf{X},i) - D(\mathbf{X}^m,i)) \le \varepsilon$ also holds. In order to prove (26) we only need to check

$\lim_{n\to\infty} E\left(f\left(\mathbf{X}_n^{[k]}, (D(\mathbf{X}_n,i))_{i=1}^{k}\right)\right) = E\left(f\left(\mathbf{X}^{[k]}, (D(\mathbf{X},i))_{i=1}^{k}\right)\right)$

for any bounded and continuous $f: \mathcal{A}_k \times [0,+\infty)^k \to \mathbb{R}$. This can be easily proved using (30), (31) and the $\varepsilon/3$ argument.

Proof of (ii): For each $n \in \mathbb{N}$ let $(\eta^n_i)_{i=1}^{\infty}$ be i.i.d. and uniformly distributed on $[n]$. Define the infinite array $\mathbf{X}_n$ by $X_n(i,j) := \tilde{X}_n(\eta^n_i, \eta^n_j)$. Now $\mathbf{X}_n$ is vertex exchangeable, and using the vertex exchangeability of $\tilde{X}_n$ we get

$E(X_n(1,2); m) = \left(1 - \tfrac{1}{n}\right) E\left(\tilde{X}_n(1,2); m\right) + \tfrac{1}{n} E\left(\tilde{X}_n(1,1); m\right),$

and if we combine this with the assumptions of (ii) we get that $(X_n(1,2))_{n=1}^{\infty}$ is uniformly integrable. Note that by (21) and the law of large numbers we have $D(\mathbf{X}_n, i) = \frac{1}{n} d(\tilde{X}_n, \eta^n_i)$. Using the vertex exchangeability of $\tilde{X}_n$ we get that the following two $(\mathcal{A}_k \times \mathbb{R}^k_+)$-valued random variables have the same distribution:

• $\left(\mathbf{X}_n^{[k]}, (D(\mathbf{X}_n,i))_{i=1}^{k}\right)$ under the condition $|\{\eta^n_1, \dots, \eta^n_k\}| = k$;

• $\left(\tilde{X}_n^{[k]}, \left(\tfrac{1}{n} d(\tilde{X}_n,i)\right)_{i=1}^{k}\right)$.

Let us call this fact $(*)$. $\mathbf{X}_n \xrightarrow{d} \mathbf{X}$ easily follows from $\tilde{X}_n \xrightarrow{d} \mathbf{X}$, $(*)$ and

$\lim_{n\to\infty} P\left(|\{\eta^n_1, \dots, \eta^n_k\}| = k\right) = 1, \quad (32)$

so we can apply (i) to obtain (26). Now using $(*)$ and (32) again we obtain (27).

4 Random urn configurations and edge stationarity

In this section we define a way of constructing random adjacency matrices using random urn configurations (the basic idea comes from Section 3.4 of [5]). This construction relates edge stationary random adjacency matrices to ball exchangeable urn models and gives an easy proof of Lemma 2.1, using the fact that the distribution of $\mathrm{PAG}_\kappa(n,m)$ and that of the stationary state of the edge reconnecting model both arise from the Pólya urn model via our construction. In Subsection 4.1 we prove Theorem 1 and Theorem 2 using this machinery.

Let $n, m \in \mathbb{N}$. A random urn configuration with $2m$ balls of $n$ different colors is a probability distribution on $[n]^{[2m]}$, that is, a random function $\Psi: [2m] \to [n]$. If $l \in [2m]$ we say that the $l$'th ball has color $\Psi(l)$.
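The way an urn configuration encodes a multigraph can be sketched as follows (0-indexed colors for convenience; `urn_to_adjacency` is our own illustrative name, and the pairing of positions $2e-1$, $2e$ anticipates the formal definition given next):

```python
def urn_to_adjacency(psi, n):
    """Build the adjacency matrix X of a multigraph from an urn
    configuration psi (a list of ball colors, i.e. a function
    [2m] -> [n], 0-indexed): the balls at positions 2e-1 and 2e
    form one edge, a loop if both balls have the same color."""
    assert len(psi) % 2 == 0, "there must be 2m balls"
    X = [[0] * n for _ in range(n)]
    for e in range(len(psi) // 2):
        i, j = psi[2 * e], psi[2 * e + 1]
        X[i][j] += 1
        X[j][i] += 1               # a loop contributes 2 to X[i][i]
    return X

psi = [0, 1, 1, 1, 2, 0]           # edges {0,1}, a loop at 1, {2,0}
X = urn_to_adjacency(psi, n=3)
assert [sum(row) for row in X] == [psi.count(c) for c in range(3)]
```

The assertion illustrates the identity $d(X,i) = d(\Psi,i)$: the degree of a vertex equals the multiplicity of its color in the urn.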
Let $d(\Psi, i) := \sum_{l=1}^{2m} 1\!1[\Psi(l) = i]$ for $i \in [n]$ denote the multiplicity of color $i$ in $\Psi$.

We say that a random urn configuration $\Psi$ is ball exchangeable if for all permutations $\tau: [2m] \to [2m]$ we have $(\Psi(l))_{l=1}^{2m} \sim (\Psi(\tau(l)))_{l=1}^{2m}$. $\Psi$ is ball exchangeable if and only if the following property holds: conditioned on the value of the type vector $(d(\Psi,i))_{i=1}^{n}$, the distribution of $\Psi$ is uniform on the elements of $[n]^{[2m]}$ with this particular type vector; more precisely, if $\psi \in [n]^{[2m]}$ then

$P(\Psi = \psi) = \frac{P\left((d(\Psi,i))_{i=1}^{n} = (d(\psi,i))_{i=1}^{n}\right)}{(2m)! \big/ \prod_{i=1}^{n} d(\psi,i)!}.$

We say that $\Psi$ is color exchangeable if for all permutations $\tau: [n] \to [n]$ we have $(\Psi(l))_{l=1}^{2m} \sim (\tau(\Psi(l)))_{l=1}^{2m}$.

To a random urn configuration $\Psi$ we assign a random element $X$ of $\mathcal{A}^m_n$ by defining

$X(i,j) := \sum_{e=1}^{m} 1\!1[\Psi(2e-1) = i, \ \Psi(2e) = j] + 1\!1[\Psi(2e-1) = j, \ \Psi(2e) = i] \quad (33)$

for all $i,j \in [n]$. In plain words: the colors of the balls correspond to the labels of the vertices, and if for any $1 \le e \le m$ we see a ball of color $i$ at position $2e-1$ and a ball of color $j$ at position $2e$ then we draw an edge between the vertices $i$ and $j$ in the corresponding labeled multigraph (and if $i = j$ then we draw a loop edge at vertex $i$). With the definition (33) we have $P(d(X,i) = d(\Psi,i)) = 1$. It is easy to see that all probability measures on $\mathcal{A}^m_n$ arise this way. If $\Psi$ is color exchangeable then $X$ is vertex exchangeable, and all vertex exchangeable probability measures on $\mathcal{A}^m_n$ arise this way.

If $\Psi$ is ball exchangeable then for all $B \in \mathcal{A}^m_n$ we have

$P(X = B) = \frac{P\left((d(X,i))_{i=1}^{n} = (d(B,i))_{i=1}^{n}\right)}{(2m)! \big/ \prod_{i=1}^{n} d(B,i)!} \cdot \frac{m! \, 2^{m'(B)}}{\left(\prod_{i<j} B(i,j)!\right)\left(\prod_{i=1}^{n} \left(\frac{B(i,i)}{2}\right)!\right)}, \quad (34)$

where (as in (14)) $m'(B)$ denotes the number of non-loop edges of $B$.

• The Pólya urn scheme: Fix $\kappa \in (0,+\infty)$.
Let $\Psi_L$ be a random element of $[n]^{[L]}$. Given $\Psi_L$ we generate a random element of $[n]^{[L+1]}$, which we denote by $\Psi_{L+1}$, in the following way: let $\Psi_{L+1}(l) := \Psi_L(l)$ for all $l \in [L]$ and

$\forall i \in [n]: \ P\left(\Psi_{L+1}(L+1) = i \mid \Psi_L\right) = \frac{d(\Psi_L, i) + \kappa}{L + n\kappa}.$

• The ball replacement model: Fix $\kappa \in (0,+\infty)$. Let $\Psi_T$ be a random element of $[n]^{[2m]}$. Given $\Psi_T$ we generate a random element of $[n]^{[2m]}$, which we denote by $\Psi_{T+1}$, in the following way: let $\xi_T$ denote a uniformly chosen element of $[2m]$. For all $l \in [2m] \setminus \{\xi_T\}$ let $\Psi_{T+1}(l) := \Psi_T(l)$ and

$\forall i \in [n]: \ P\left(\Psi_{T+1}(\xi_T) = i \mid \Psi_T, \xi_T\right) = \frac{d(\Psi_T, i) + \kappa}{2m + n\kappa}. \quad (35)$

It is well-known that if we start with an empty urn $\Psi_0$ and repeatedly apply the Pólya urn scheme to get $\Psi_L$ for $L = 1, 2, \dots, 2m$, then the distribution of $\Psi_{2m}$ is of the following form:

$\forall \psi \in [n]^{[2m]}: \ P(\Psi_{2m} = \psi) = \frac{\prod_{i=1}^{n} \prod_{j=1}^{d(\psi,i)} (\kappa + j - 1)}{\prod_{j=1}^{2m} (\kappa n + j - 1)}. \quad (36)$

Thus the distribution of $\Psi_{2m}$ is ball and color exchangeable. The $\mathrm{PAG}_\kappa(n,m)$ (defined in Section 1) is in fact the random multigraph obtained as the image of the random urn configuration (36) under the mapping (33).

The ball replacement model is an $[n]^{[2m]}$-valued Markov chain, which is irreducible and aperiodic with unique stationary distribution (36): if we delete the $\xi_T$'th ball from a configuration with distribution (36), then by ball exchangeability the distribution of the resulting $[n]^{[2m-1]}$-valued random variable is the same as if we deleted the $2m$'th ball, i.e. Pólya-$\Psi_{2m-1}$. Thus, replacing the removed $\xi_T$'th ball with a new one according to (35), we get an $[n]^{[2m]}$-valued random variable with Pólya-$\Psi_{2m}$ distribution, again by ball exchangeability.

Now consider the ball replacement Markov chain $\Psi_T$, $T = 0, 1, \dots$ with $\Psi_0$ being an arbitrary $[n]^{[2m]}$-valued random variable.
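One transition of the ball replacement model can be sketched as follows (our own helper, 0-indexed; following (35), the recoloring weights use the color counts in $\Psi_T$, including the removed ball):

```python
import random

def ball_replacement_step(psi, n, kappa, rng=random):
    """One step of the ball replacement model on [n]^[2m]: pick a
    uniform ball position xi_T and recolor that ball, color i being
    chosen with probability (d(Psi_T, i) + kappa) / (2m + n * kappa)."""
    m2 = len(psi)                                # m2 = 2m balls
    xi = rng.randrange(m2)                       # the uniformly chosen ball
    counts = [0] * n
    for c in psi:
        counts[c] += 1
    new_psi = list(psi)
    new_psi[xi] = rng.choices(range(n),
                              weights=[counts[i] + kappa for i in range(n)])[0]
    return new_psi

psi = [0, 0, 1, 2]
psi2 = ball_replacement_step(psi, n=3, kappa=0.5, rng=random.Random(5))
assert len(psi2) == len(psi)                     # the number of balls is preserved
```

Applying the mapping of consecutive ball positions to edges along the trajectory $\Psi_T$ reproduces the edge reconnecting dynamics, as discussed next.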
If we use the mapping (33) to create $X(T)$ from $\Psi_T$, then it is easily seen that the resulting $\mathcal{A}^m_n$-valued stochastic process $X(T)$, $T = 0, 1, \dots$ evolves according to the rules of the edge reconnecting Markov chain defined in Section 1. Some consequences of this fact:

• If the distribution of $\Psi_0$ is ball exchangeable then $\Psi_T$ is also ball exchangeable for all $T$; thus if $X(0)$ is edge stationary then $X(T)$ is also edge stationary for all $T$ (hence the name "edge stationarity").

• The distribution (36) is stationary for the ball replacement model, thus the image of this distribution under the mapping (33) is the unique stationary distribution of the edge reconnecting model. Lemma 2.1 follows from (36) and (34).

The key result of this subsection is Lemma 4.1, which can be roughly summarized as follows: in a large dense edge stationary random multigraph the number of edges connecting the vertices $v$ and $w$ has Poisson distribution with parameter proportional to $d(v)d(w)$. Given Lemma 4.1 the proof of Theorem 1 is straightforward, and the proof of Theorem 2 reduces to a limit theorem which states that the rescaled numbers of balls with colors $1, 2, \dots, k$ in the Pólya urn model converge in distribution to i.i.d. random variables with Gamma distribution.

Lemma 4.1. Let $F : [0, +\infty) \to [0,1]$ denote the cumulative distribution function of a nonnegative random variable $Z$. Let $F^{-1}(u) := \min\{x : F(x) \ge u\}$. Let $Z_1, Z_2, \dots$ be i.i.d. random variables with $Z_i \sim Z$, so that $Z_i \sim F^{-1}(U_i)$ (where the $U_i$ are i.i.d. uniform on $[0,1]$). If $X_n$ is an $\mathcal{A}_n$-valued random variable for $n = 1, 2, \dots$
, moreover the distribution of $X_n$ is vertex exchangeable and edge stationary, and
\[
\frac{2m(X_n)}{n^2} \xrightarrow{\ p\ } \rho, \qquad n \to \infty, \qquad (37)
\]
where $0 < \rho < +\infty$ is a positive real parameter, moreover for all $k \in \mathbb{N}$ we have
\[
\Big( \frac{1}{n} d(X_n, i) \Big)_{i=1}^{k} \xrightarrow{\ d\ } (Z_i)_{i=1}^{k}, \qquad n \to \infty, \qquad (38)
\]
then $X_n \xrightarrow{p} W$ where
\[
W(x,y,k) =
\begin{cases}
p\Big( k, \frac{F^{-1}(x) F^{-1}(y)}{\rho} \Big) & \text{if } x \ne y \\[4pt]
\mathbb{1}[2 \mid k] \cdot p\Big( \frac{k}{2}, \frac{F^{-1}(x) F^{-1}(y)}{2\rho} \Big) & \text{if } x = y
\end{cases}
\qquad (39)
\]

Proof. The infinite random array $X_W$ (see Definition 3.1) can be alternatively defined in the following way: let $(X_W(i,j))_{i \le j}$ be conditionally independent given $(Z_i)_{i \in \mathbb{N}}$ with conditional distribution $X_W(i,j) \sim \mathrm{POI}\big( \frac{Z_i Z_j}{\rho} \big)$ if $i < j$ and $\frac{X_W(i,i)}{2} \sim \mathrm{POI}\big( \frac{Z_i^2}{2\rho} \big)$.

If $A \in \mathcal{A}_k$, let $A^*$ denote the following modified matrix: $A^*(i,j) := A(i,j)$ if $i \ne j$, but $A^*(i,i) := \frac{A(i,i)}{2}$. Thus $A^*(i,i)$ is the number of loop edges at vertex $i$. Let $m^{[k]} := \frac12 \sum_{i,j} A(i,j)$. Define
\[
p\big( A, (z_i)_{i=1}^{k}, \rho \big) := \exp\Big( -\frac{1}{2\rho} \Big( \sum_{i=1}^{k} z_i \Big)^2 \Big) \cdot \frac{1}{\prod_{i \le j} A^*(i,j)!} \cdot \prod_{i=1}^{k} (z_i)^{d(A,i)} \cdot \rho^{-m^{[k]}} \cdot 2^{-\sum_{i=1}^{k} A^*(i,i)}
\]
By (17) and (39) we have
\[
\mathbb{P}\Big( X^{[k]}_W = A \,\Big|\, (Z_i)_{i=1}^{k} \Big) = \prod_{i=1}^{k} \prod_{j=i}^{k} p\Big( A^*(i,j), \frac{Z_i \cdot Z_j}{\rho \cdot (1 + \mathbb{1}[i=j])} \Big) = p\big( A, (Z_i)_{i=1}^{k}, \rho \big). \qquad (40)
\]
By Lemma 3.1 we only need to show that
\[
\forall\, k \in \mathbb{N},\ \forall\, A \in \mathcal{A}_k: \qquad \lim_{n \to \infty} \mathbb{P}\big( X^{[k]}_n = A \big) = \mathbb{P}\big( X^{[k]}_W = A \big) \qquad (41)
\]
in order to prove $X_n \xrightarrow{p} W$.

Let $(d_i)_{i=1}^{n}$ denote an arbitrary degree sequence with $2m = \sum_{i=1}^{n} d_i$ and denote
\[
z_i := \frac{d_i}{n}, \qquad \rho_n := \frac{2m}{n^2}. \qquad (42)
\]
Fix $\varepsilon > 0$ and $A \in \mathcal{A}_k$. We are going to prove that if
\[
\varepsilon \le \rho_n \le \varepsilon^{-1}, \qquad \forall\, i \in [k]: z_i \le \varepsilon^{-1} \qquad (43)
\]
then
\[
\mathbb{P}\Big( X^{[k]}_n = A \,\Big|\, (d(X_n, i))_{i=1}^{k} = (d_i)_{i=1}^{k},\ \frac{2m(X_n)}{n^2} = \rho_n \Big) = \qquad (44)
\]
\[
p\big( A, (z_i)_{i=1}^{k}, \rho_n \big) + \mathrm{Err}(n, A, \varepsilon) \qquad (45)
\]
with $\lim_{n \to \infty} \mathrm{Err}(n, A, \varepsilon) = 0$.
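As a quick numerical illustration of the lemma (our own sketch, not part of the argument): for a multigraph generated by uniform stub matching, i.e. the configuration model of Section 1, the empirical mean number of edges between two fixed vertices $v, w$ is close to $d(v)d(w)/(2m) = z_v z_w / \rho_n$, the Poisson parameter appearing in (39). All names below are ours:

```python
import random

def configuration_model_edges(degrees, rng):
    """Uniform stub matching: lay out d(v) stubs for each vertex v, shuffle
    them, and pair consecutive stubs to form the m edges."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

n = 30
degrees = [12] * n                       # a small regular degree sequence
m = sum(degrees) // 2                    # m = 180, so 2m = 360 stubs
lam = degrees[0] * degrees[1] / (2 * m)  # predicted Poisson parameter 0.4
rng = random.Random(42)

trials = 5000
total = 0
for _ in range(trials):
    edges = configuration_model_edges(degrees, rng)
    # count the (non-loop) edges connecting vertices 0 and 1
    total += sum(1 for a, b in edges if {a, b} == {0, 1})

emp_mean = total / trials
print(emp_mean, lam)   # empirical mean vs. d(0)d(1)/(2m)
```

With these parameters the exact mean is $d(0)d(1)/(2m-1) \approx 0.401$, so the Monte Carlo estimate should land close to the predicted value $0.4$.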
We adopt the convention that the value of $\mathrm{Err}(n, A, \varepsilon)$ might change from line to line.

First we assume that (44) = (45) holds under the condition (43), and deduce (41) from it. Define the events $B^\varepsilon_n$ and $B^\varepsilon$ by
\[
B^\varepsilon_n := \Big\{ \varepsilon \le \frac{2m(X_n)}{n^2} \le \varepsilon^{-1},\ \forall\, i \in [k]: \frac{1}{n} d(X_n, i) \le \varepsilon^{-1} \Big\}, \qquad
B^\varepsilon := \Big\{ \varepsilon \le \rho \le \varepsilon^{-1},\ \forall\, i \in [k]: Z_i \le \varepsilon^{-1} \Big\}.
\]
By (37) and (38) we have $\limsup_{n \to \infty} \big( 1 - \mathbb{P}(B^\varepsilon_n) \big) \le 1 - \mathbb{P}(B^\varepsilon)$.
\[
\Big| \mathbb{P}\big( X^{[k]}_n = A \big) - \mathbb{P}\big( X^{[k]}_W = A \big) \Big| \overset{(40)}{=} \Big| \mathbb{P}\big( X^{[k]}_n = A \big) - \mathbb{E}\Big( p\big( A, (Z_i)_{i=1}^{k}, \rho \big) \Big) \Big| \le \qquad (46)
\]
\[
\bigg| \mathbb{E}\Big( p\Big( A, \big( \tfrac{1}{n} d(X_n, i) \big)_{i=1}^{k}, \tfrac{2m(X_n)}{n^2} \Big);\ B^\varepsilon_n \Big) - \mathbb{E}\Big( p\big( A, (Z_i)_{i=1}^{k}, \rho \big) \Big) \bigg| + \qquad (47)
\]
\[
\mathrm{Err}(n, A, \varepsilon) + \big( 1 - \mathbb{P}(B^\varepsilon_n) \big) \qquad (48)
\]
By (37), (38), $\lim_{n \to \infty} \mathrm{Err}(n, A, \varepsilon) = 0$ and the fact that $p\big( A, (\cdot)_{i=1}^{k}, \cdot \big)$ is a bounded continuous function on the domain (43), we obtain
\[
\limsup_{n \to \infty} (47) \le 1 - \mathbb{P}(B^\varepsilon) \qquad \text{and} \qquad \limsup_{n \to \infty} (48) \le 1 - \mathbb{P}(B^\varepsilon).
\]
Now $\mathbb{P}(B^\varepsilon) \to 1$ as $\varepsilon \to 0$, from which (41) and the statement of the lemma follow under the assumption that (43) implies (44) = (45).

Proof of (43) $\Longrightarrow$ (44) = (45): We use random urn configurations to generate $X_n$. Let $\Psi_n$ denote the ball exchangeable $[n]^{[2m]}$-valued random variable with $(d(\Psi_n, i))_{i=1}^{n} = (d_i)_{i=1}^{n}$; thus $\Psi_n$ is uniformly distributed on the set of urn configurations with this type vector, and $X_n$ can be generated from $\Psi_n$ via (33). To determine the distribution of $X^{[k]}_n$ we only need to know the positions of the balls of color $i \in [k]$. We paint the rest of the balls "grey". Let
\[
m^{[k]} := \frac12 \sum_{i,j} A(i,j), \qquad d^{[k]} := \sum_{i=1}^{k} d_i, \qquad m_g := m - d^{[k]} + m^{[k]}.
\]
Thus $m_g$ denotes the number of edges of the multigraph spanned by the grey vertices. In order to prove (44) = (45) we first give an explicit formula for (44). The number of grey balls is $2m - d^{[k]}$. The number of all urn configurations with type vector $(d_1, \dots, d_k, 2m - d^{[k]})$ is
\[
\frac{(2m)!}{\big( \prod_{i=1}^{k} d_i! \big) \cdot (2m - d^{[k]})!} \qquad (49)
\]
The number of urn configurations with type vector $(d_1, \dots, d_k, 2m - d^{[k]})$ for which $X^{[k]}_n = A$ is
\[
\frac{m! \cdot 2^{\,m - m_g - \sum_{i=1}^{k} A^*(i,i)}}{\big( \prod_{i \le j} A^*(i,j)! \big) \cdot \big( \prod_{i=1}^{k} (d_i - d(A,i))! \big) \cdot m_g!} \qquad (50)
\]
Thus (44) = (50)/(49). Our aim is to prove (50)/(49) = (45): after dividing both sides of this equality by $\big( \prod_{i \le j} A^*(i,j)! \big)^{-1} \cdot 2^{-\sum_i A^*(i,i)}$ we only need to prove
\[
\frac{m! \cdot \big( \prod_{i=1}^{k} d_i! \big) \cdot (2m - d^{[k]})! \cdot 2^{\,m - m_g}}{\big( \prod_{i=1}^{k} (d_i - d(A,i))! \big) \cdot m_g! \cdot (2m)!} = \qquad (51)
\]
\[
\exp\Big( -\frac{1}{2\rho_n} \Big( \sum_{i=1}^{k} z_i \Big)^2 \Big) \cdot \prod_{i=1}^{k} (z_i)^{d(A,i)} \cdot \rho_n^{-m^{[k]}} + \mathrm{Err}(n, A, \varepsilon) \qquad (52)
\]
Now we rewrite (51):
\[
(51) = \Bigg( \prod_{i=1}^{k} \prod_{l=1}^{d(A,i)} (d_i - d(A,i) + l) \Bigg) \cdot \Bigg( \prod_{l=1}^{m - m_g} (m_g + l) \Bigg) \cdot \frac{2^{\,m - m_g}}{\prod_{l=1}^{d^{[k]}} (2m - d^{[k]} + l)}
= \Bigg( \prod_{l=1}^{m - m_g} \frac{2m_g + 2l}{2m - d^{[k]} + l} \Bigg) \cdot \frac{\prod_{i=1}^{k} \prod_{l=1}^{d(A,i)} (d_i - d(A,i) + l)}{\prod_{l=1}^{m^{[k]}} (2m - m^{[k]} + l)} \qquad (53)
\]
Now we approximate various terms that appear in the right-hand side of (53) using our assumptions (43):
\[
\prod_{l=1}^{m - m_g} \frac{2m_g + 2l}{2m - d^{[k]} + l} = \Bigg( \prod_{l=1}^{d^{[k]}} \frac{2m - 2l}{2m - l} \Bigg) \cdot \Big( 1 + \frac{\mathrm{Err}(A, \varepsilon)}{n} \Big) \qquad (54)
\]
\[
\prod_{l=1}^{m^{[k]}} (2m - m^{[k]} + l) = (2m)^{m^{[k]}} \cdot \Big( 1 + \frac{\mathrm{Err}(A, \varepsilon)}{n} \Big) \qquad (55)
\]
where $0 \le |\mathrm{Err}(A, \varepsilon)| < +\infty$ is independent of $n$.

Let $d_* = \min\{ d_i : i \in [k],\ d(A,i) > 0 \}$.
We consider two cases separately. If $d_* \le n^{1/2}$ then using (43) it is easy to see that $(53) \le \mathrm{Err}(A, \varepsilon)\, n^{-1/2}$ and also $p(A, (z_i)_{i=1}^{k}, \rho_n) \le \mathrm{Err}(A, \varepsilon)\, n^{-1/2}$, so (51) = (52) holds when $d_* \le n^{1/2}$. If $d_* > n^{1/2}$ then we have
\[
\prod_{i=1}^{k} \prod_{l=1}^{d(A,i)} (d_i - d(A,i) + l) = \Bigg( \prod_{i=1}^{k} (d_i)^{d(A,i)} \Bigg) \cdot \Big( 1 + \frac{\mathrm{Err}(A, \varepsilon)}{\sqrt{n}} \Big). \qquad (56)
\]
Putting (54), (55) and (56) together we get
\[
(53) = \Bigg( \prod_{l=1}^{d^{[k]}} \frac{2m - 2l}{2m - l} \Bigg) \cdot \frac{\prod_{i=1}^{k} (d_i)^{d(A,i)}}{(2m)^{m^{[k]}}} \cdot \big( 1 + \mathrm{Err}(n, A, \varepsilon) \big)
\overset{(42)}{=} \Bigg( \prod_{l=1}^{d^{[k]}} \frac{1 - \frac{2l}{n^2 \rho_n}}{1 - \frac{l}{n^2 \rho_n}} \Bigg) \cdot \frac{\prod_{i=1}^{k} (n \cdot z_i)^{d(A,i)}}{(n^2 \rho_n)^{m^{[k]}}} \cdot \big( 1 + \mathrm{Err}(n, A, \varepsilon) \big)
\overset{(43)}{=} \exp\Big( -\frac{1}{2\rho_n} \Big( \sum_{i=1}^{k} z_i \Big)^2 \Big) \cdot \prod_{i=1}^{k} (z_i)^{d(A,i)} \cdot \rho_n^{-m^{[k]}} + \mathrm{Err}(n, A, \varepsilon)
\]
This completes the proof of (51) = (52).

Proof of Theorem 1. Given $X_n$, for every $n \in \mathbb{N}$ let us define the vertex exchangeable random adjacency matrix $\tilde X_n$ in the following way: let $\pi_n$ denote a uniformly distributed random permutation $\pi_n : [n] \to [n]$, independent of $X_n$. Let
\[
\big( \tilde X_n(i,j) \big)_{i,j=1}^{n} := \big( X_n(\pi_n(i), \pi_n(j)) \big)_{i,j=1}^{n}. \qquad (57)
\]
Then $\tilde X_n$ is indeed vertex exchangeable; moreover $\mathbb{P}\big( t_=(F, \tilde X_n) = t_=(F, X_n) \big) = 1$ for every $F \in \mathcal{M}$, so $X_n \xrightarrow{p} W$ is equivalent to $\tilde X_n \xrightarrow{p} W$, which is in turn equivalent to $\tilde X_n \xrightarrow{d} X_W$ by Lemma 3.1. By (57) the conditions (11) and (12) are equivalent to the uniform integrability of $\big( \tilde X_n(1,2) \big)_{n=1}^{\infty}$ and $\big( \tilde X_n(1,2)^2 \big)_{n=1}^{\infty}$, respectively; thus we can apply Lemma 3.2/(ii) to deduce that for all $k \in \mathbb{N}$
\[
\Big( \frac{1}{n} d(\tilde X_n, i) \Big)_{i=1}^{k} \xrightarrow{\ d\ } \big( D(X_W, i) \big)_{i=1}^{k}. \qquad (58)
\]
Note that by (6), Definition 3.1 and (22) we have that $(D(X_W, i))_{i=1}^{k}$ are i.i.d. with probability distribution function $F_W(z)$.

Now we are going to prove that $\frac{2m(\tilde X_n)}{n^2} \xrightarrow{p} \rho(W)$.
In order to do so we define the truncated adjacency matrix $\tilde X^l_n$ by $\tilde X^l_n(i,j) := \min\{ \tilde X_n(i,j), l \}$ and the truncated multigraphon $W^l$ which satisfies $(X_{W^l}(i,j))_{i,j=1}^{\infty} \sim (\min\{ X_W(i,j), l \})_{i,j=1}^{\infty}$.

Now we show that if we fix $l \in \mathbb{N}$ then $\frac{2m(\tilde X^l_n)}{n^2} \xrightarrow{p} \rho(W^l)$. The equations marked by $(*)$ below are true by exchangeability (here $\mathbb{D}^2$ denotes variance):
\[
\lim_{n \to \infty} \mathbb{E}\Bigg( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{n} d(\tilde X^l_n, i) \Bigg) \overset{(*)}{=} \lim_{n \to \infty} \mathbb{E}\Big( \frac{1}{n} d(\tilde X^l_n, 1) \Big) \overset{(58)}{=} \mathbb{E}\big( D(X_{W^l}, 1) \big) \overset{(19)}{=} \rho(W^l).
\]
\[
\lim_{n \to \infty} \mathbb{D}^2\Bigg( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{n} d(\tilde X^l_n, i) \Bigg) = \lim_{n \to \infty} \frac{1}{n^2} \sum_{i,j=1}^{n} \mathrm{Cov}\Big( \frac{1}{n} d(\tilde X^l_n, i), \frac{1}{n} d(\tilde X^l_n, j) \Big)
\overset{(*)}{=} \lim_{n \to \infty} \Bigg( \frac{1}{n} \mathbb{D}^2\Big( \frac{1}{n} d(\tilde X^l_n, 1) \Big) + \frac{n-1}{n} \mathrm{Cov}\Big( \frac{1}{n} d(\tilde X^l_n, 1), \frac{1}{n} d(\tilde X^l_n, 2) \Big) \Bigg) \overset{(58)}{=} 0.
\]
Having established $\forall\, l : \frac{2m(\tilde X^l_n)}{n^2} \xrightarrow{p} \rho(W^l)$, the relation $\frac{2m(\tilde X_n)}{n^2} \xrightarrow{p} \rho(W)$ follows from
\[
\lim_{l \to \infty} \rho(W^l) = \rho(W), \qquad \forall\, \varepsilon > 0 : \lim_{l \to \infty} \sup_{n \in \mathbb{N}} \mathbb{P}\Bigg( \frac{2m(\tilde X_n)}{n^2} - \frac{2m(\tilde X^l_n)}{n^2} \ge \varepsilon \Bigg) \overset{(11),(12)}{=} 0,
\]
and the $\varepsilon/3$ argument. Now Lemma 4.1 implies $X_n \xrightarrow{p} \hat W$, where $\hat W$ is of the form (13).

Proof of Theorem 2. The distribution (14) arises from the Pólya-$\Psi^n_{2m}$ urn model (36) with $2m$ balls and $n$ colors via (33). The distribution (36) is ball and color exchangeable, so $X_n$ is vertex exchangeable and edge stationary. If we want to prove Theorem 2, then by Lemma 4.1 we only need to show that (38) holds for all $k \in \mathbb{N}$, where $(Z_i)_{i \in \mathbb{N}}$ are i.i.d. with density function $g(x, \kappa, \kappa/\rho)$ (see (10)). We may use the method of moments to prove convergence in distribution, since the Gamma distribution is uniquely determined by its moments. Thus we need to show that if $\nu_1, \dots, \nu_k \in \mathbb{N}$ then
\[
\lim_{n \to \infty} \mathbb{E}\Bigg( \prod_{i=1}^{k} \Big( \frac{1}{n} d(\Psi^n_{2m(n)}, i) \Big)^{\nu_i} \Bigg) = \mathbb{E}\Bigg( \prod_{i=1}^{k} Z_i^{\nu_i} \Bigg) = \prod_{i=1}^{k} \Big( \frac{\rho}{\kappa} \Big)^{\nu_i} \cdot \prod_{j=1}^{\nu_i} (\kappa + j - 1).
\]
Fix $k$ and $\nu_i$, $i \in [k]$. Let $\nu = \sum_{i=1}^{k} \nu_i$ and denote by $\psi$ a particular element of $[k]^{[\nu]}$ with type vector $(\nu_1, \dots, \nu_k)$.
By the construction of the Pólya-$\Psi^n_{2m}$ distribution we have
\[
\mathbb{P}\big( \forall\, l \in [\nu] : \Psi^n_{2m}(l) = \psi(l) \big) = \frac{\prod_{i=1}^{k} \prod_{j=1}^{\nu_i} (\kappa + j - 1)}{\prod_{j=1}^{\nu} (\kappa n + j - 1)} = O(n^{-\nu}).
\]
Denote $\langle \nu \rangle := \{ (i,j) : i \in [k],\ j \in [\nu_i] \}$. The number of functions $f : \langle \nu \rangle \to [2m]$ with $|\mathcal{R}(f)| = N$ is $O\big( (2m(n))^N \big) = O(n^{2N})$ if $1 \le N \le \nu$, where $\mathcal{R}(f)$ denotes the range of $f$.
\[
\lim_{n \to \infty} \mathbb{E}\Bigg( \prod_{i=1}^{k} \Big( \frac{1}{n} d(\Psi^n_{2m(n)}, i) \Big)^{\nu_i} \Bigg)
= \lim_{n \to \infty} \frac{1}{n^\nu} \sum_{f : \langle \nu \rangle \to [2m]} \mathbb{P}\big( \forall\, (i,j) \in \langle \nu \rangle : \Psi^n_{2m(n)}(f(i,j)) = i \big)
\]
\[
= \lim_{n \to \infty} \frac{1}{n^\nu} \sum_{f : \langle \nu \rangle \hookrightarrow [2m]} \mathbb{P}\big( \forall\, (i,j) \in \langle \nu \rangle : \Psi^n_{2m(n)}(f(i,j)) = i \big) + \lim_{n \to \infty} \frac{1}{n^\nu} \sum_{N=1}^{\nu - 1} O(n^{2N}) \cdot O(n^{-N})
\]
\[
\overset{(*)}{=} \lim_{n \to \infty} \frac{\prod_{l=1}^{\nu} (2m(n) - l + 1)}{n^\nu} \cdot \mathbb{P}\big( \forall\, l \in [\nu] : \Psi^n_{2m}(l) = \psi(l) \big)
\overset{(37)}{=} \prod_{i=1}^{k} \Big( \frac{\rho}{\kappa} \Big)^{\nu_i} \cdot \prod_{j=1}^{\nu_i} (\kappa + j - 1).
\]
The equation $(*)$ holds true by ball exchangeability.

References

[1] D. J. Aldous. Representations for partially exchangeable arrays of random variables. J. Multivar. Anal. 11, 581–598. (1981)

[2] D. J. Aldous. More Uses of Exchangeability: Representations of Complex Random Structures. arXiv:0909.4339v2, to appear in "Probability and Mathematical Genetics: Papers in Honour of Sir John Kingman", Cambridge University Press. (2010)

[3] P. Billingsley. Convergence of Probability Measures. Second edition. John Wiley & Sons, Inc., New York. (1999)

[4] C. Borgs, J. Chayes, L. Lovász. Moments of Two-Variable Functions and the Uniqueness of Graph Limits. Geom. Funct. Anal. 19 (6), 1597–1619. (2010)

[5] C. Borgs, J. Chayes, L. Lovász, V. Sós, K. Vesztergombi. Limits of randomly grown graph sequences. Eur. J. Combin. 32 (7), 985–999. (2011)

[6] S. Chatterjee, P. Diaconis, A. Sly. Random graphs with a given degree sequence. Ann. Appl. Probab. 21 (4), 1400–1435. (2011)

[7] P. Diaconis and S. Janson. Graph limits and exchangeable random graphs. Rend. Mat. Appl. (7) 28, no. 1, 33–61. (2008)

[8] I. Kolossváry and B. Ráth. Multigraph limits and exchangeability. Acta Math. Hungar. (1–2), 1–34. (2011)

[9] L. Lovász.
Very large graphs. In Current Developments in Mathematics 2008, International Press, Somerville, MA, 67–128. (2009)

[10] L. Lovász and B. Szegedy. Limits of dense graph sequences. J. Combin. Theory Ser. B 96, no. 6, 933–957. (2006)

[11] L. Lovász and B. Szegedy. Random graphons and weak positivstellensatz for graphs. arXiv:0902.1327v1, to appear in Journal of Graph Theory. (2011)

[12] L. Lovász and B. Szegedy. Limits of compact decorated graphs. arXiv:1010.5155v1. (2010)

[13] B. Ráth. Time evolution of dense multigraph limits under edge-conservative preferential attachment dynamics. arXiv:0912.3904v3, to appear in Random Structures & Algorithms. (2010)

[14] M. Reed and B. Simon. Methods of Modern Mathematical Physics, Vol. 1: Functional Analysis. Gulf Professional Publishing. (1980)

[15] D. Williams.