Scaling limit of dynamical percolation on critical Erdős-Rényi random graphs
Raphaël Rossignol
Université Grenoble Alpes
Institut Fourier
CS 40700
38058 Grenoble cedex 9
France
e-mail: [email protected]
Abstract:
Consider a critical Erdős-Rényi random graph: $n$ is the number of vertices, and each of the $\binom n2$ possible edges is kept in the graph independently of the others with probability $n^{-1} + \lambda n^{-4/3}$, $\lambda$ being a fixed real number. When $n$ goes to infinity, Addario-Berry, Broutin and Goldschmidt [2] have shown that the collection of connected components, viewed as suitably normalized measured compact metric spaces, converges in distribution to a continuous limit $\mathcal G_\lambda$ made of random real graphs. In this paper, we consider notably the dynamical percolation on critical Erdős-Rényi random graphs. To each pair of vertices is attached a Poisson process of intensity $n^{-1/3}$, and every time it rings, one resamples the corresponding edge. Under this process, the collection of connected components undergoes coalescence and fragmentation. We prove that this process converges in distribution, as $n$ goes to infinity, towards a fragmentation-coalescence process on the continuous limit $\mathcal G_\lambda$. We also prove convergence of discrete coalescence and fragmentation processes and provide Feller-type properties associated to fragmentation and coalescence.

MSC 2010 subject classifications:
Primary 60K35; secondary 05C80, 60F05.
Keywords and phrases:
Erdős-Rényi, random graph, coalescence, fragmentation, dynamical percolation, scaling limit, Gromov-Hausdorff-Prokhorov distance, Feller property.
Contents

2.6.1 δ-gluing
2.6.2 The coalescence processes
2.7 R-graphs
2.8 Cutting, fragmentation and dynamical percolation
2.9 The scaling limit of critical Erdős-Rényi random graphs
3 Main results
4 Proofs of the main results for coalescence
4.1 The Coalescent on N
4.2 Structural result for Aldous' multiplicative coalescent
4.3 The Coalescent on S
4.4 Convergence of the coalescent on Erdős-Rényi random graphs
5 Proofs of the results for fragmentation
5.1 Notations
5.2 Reduction to finite graphs
5.3 The Feller property for trees
5.4 The Feller property for graphs
5.5 Application to Erdős-Rényi random graphs
6 Combining fragmentation and coalescence: dynamical percolation
6.1 Almost Feller Property
6.2 Application to Erdős-Rényi random graphs
References

imsart-generic ver. 2014/10/16 file: Dynamical_Percolation_hal_v3.tex date: June 25, 2018
1. Introduction
Consider $G(n, p(\lambda,n))$, the Erdős-Rényi random graph on $n$ vertices inside the critical window, that is, when the probability of an edge is $p(\lambda,n) := n^{-1} + \lambda n^{-4/3}$. The largest components are of order $n^{2/3}$, their diameter is of order $n^{1/3}$, and this particular scaling of $p$ with respect to $n$ corresponds to what is called the critical window associated to the emergence of a giant component. In this regime, we are particularly interested in those large components because, in some sense, they contain all the complexity of the graph. There is a clean procedure to capture the behaviour of those components in the large-$n$ limit: if one assigns mass $n^{-2/3}$ to each vertex and length $n^{-1/3}$ to each edge, this graph converges to a random collection of real graphs $\mathcal G_\lambda$ (see Theorem 2.38 below, or Theorem 24 in [2] for a more precise statement).

Now, put the following dynamics on $G(n, p(\lambda,n))$: each pair of vertices is equipped with an independent Poisson process of rate $\gamma_n$, and every time it rings, one refreshes the corresponding edge, meaning that one replaces its state by a new independent one: present with probability $p(\lambda,n)$, absent with probability $1 - p(\lambda,n)$. This procedure corresponds to dynamical percolation on the complete graph with $n$ vertices, at rate $\gamma_n$. A natural question is: at which rate should we refresh the edges in order to see a non-trivial process in the large-$n$ limit? In this question, it is understood that one keeps the same scaling as before concerning masses and lengths.

A moment of thought suggests that a good choice should be $\gamma_n = n^{-1/3}$. Indeed, since large components are of size $\Theta(n^{2/3})$, a pair of large components contains $\Theta(n^{4/3})$ pairs of vertices, which after refreshment lead to $\Theta(n^{4/3} p(\lambda,n)) = \Theta(n^{1/3})$ edges added. Thus, choosing $\gamma_n = \Theta(n^{-1/3})$ will lead large components to coalesce at rate $\Theta(1)$. Furthermore, on those large components typical distances are of order $\Theta(n^{1/3})$.
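These orders of magnitude are easy to observe numerically. The following sketch is ours, not code from the paper (the name `component_sizes` is an assumption): it samples $G(n, p(\lambda,n))$ with a union-find structure and lists the component sizes in decreasing order; the largest entry is typically of order $n^{2/3}$, up to a random multiplicative factor.

```python
import random

def component_sizes(n, lam, seed=0):
    """Sample G(n, p(lambda, n)) with p = 1/n + lam * n**(-4/3)
    and return the component sizes in decreasing order."""
    rng = random.Random(seed)
    p = 1.0 / n + lam * n ** (-4.0 / 3.0)
    parent = list(range(n))

    def find(i):
        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

    counts = {}
    for u in range(n):
        r = find(u)
        counts[r] = counts.get(r, 0) + 1
    return sorted(counts.values(), reverse=True)
```

For instance, `component_sizes(500, 1.0)` returns sizes summing to 500 whose largest entry is of order $500^{2/3} \approx 63$.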
An edge is destroyed at rate $\gamma_n(1 - p(\lambda,n))$, so the geometry inside such a component is affected at rate $\Theta(n^{1/3}\gamma_n(1 - p(\lambda,n)))$, which is again of order $\Theta(1)$ when $\gamma_n = \Theta(n^{-1/3})$. This scaling is already present in the work of Aldous [5], where he studied the evolution of the collection of rescaled masses of the components when one coalesces components at a rate proportional to the product of their masses: the so-called multiplicative coalescent. Of course, instead of refreshing the edges, one may decide to only add edges, or to only destroy edges. In the first case one will see coalescence of components, and in the second case fragmentation. Once again, one may ask the same question as before: what is the right rate in order to obtain a non-trivial process in the large-$n$ limit, and what is this limit process? One of the main purposes of this article is to answer these questions for the three cases just defined informally: dynamical percolation, coalescence and fragmentation. The limit processes will be dynamical percolation, coalescence and fragmentation processes acting on the limit $\mathcal G_\lambda$ obtained in [2]. Furthermore, we will show that coalescence is the time-reversal of fragmentation on this limit. Our method is to provide Feller-type properties for coalescence and fragmentation, which we hope will be useful in the future to study scaling limits of similar dynamics on other critical random graphs. Notice that the study of coalescence of graphs is a central tool in [7] to show convergence of a number of critical random graphs to $\mathcal G_\lambda$ (configuration models, inhomogeneous random graphs, etc.).

Since a substantial amount of notation is needed to make such statements precise, we switch to the presentation of notation in section 2 and then announce the main results and outline the plan of the rest of the article in section 3. We finish this section by mentioning a few related works.
Dynamical percolation was introduced in [12] and studied in a number of subsequent works by various authors. In the context of [12], only the edges of some fixed infinite graph are resampled, while in the definition above we resample the edges of a finite complete graph. The scaling limit of dynamical percolation for critical percolation on the two-dimensional triangular lattice was obtained in [11], with techniques quite different from the ones used in the present paper. More closely related to the present paper is the work [17], where dynamical percolation on critical Erdős-Rényi random graphs, as introduced above, is studied notably at rate 1. The authors show that the size of the largest connected component that appears during the time interval $[0,1]$ is of order $n^{2/3}\log^{1/3} n$ with probability tending to one as $n$ goes to infinity. They also study "quantitative noise-sensitivity" of the event $A_n$ that the largest component of $G(n, p(\lambda,n))$ is of size at least $an^{2/3}$ for some fixed $a > 0$ (one may wonder, for instance, whether the present results shed light on the noise sensitivity of $A_n$). However, we leave this question and precise statements for future work.
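To fix ideas, the discrete dynamics described in this introduction can be sketched in a few lines. This is an illustration of ours, not code from the paper (the name `dynamical_percolation` is an assumption): each of the $\binom n2$ edges carries an exponential clock of rate `gamma`, and its state after the last ring before the horizon is a fresh Bernoulli($p$) sample, so the marginal law at any fixed time is again $G(n,p)$.

```python
import random

def dynamical_percolation(n, p, gamma, horizon, seed=0):
    """Run dynamical percolation at rate gamma on K_n over [0, horizon],
    started from G(n, p), and return the edge set at time `horizon`.
    Since the state after the last refresh is Bernoulli(p), the process
    started from G(n, p) is stationary."""
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            # initial state: present with probability p
            state = rng.random() < p
            # first ring of the rate-gamma clock; we only need to know
            # whether the edge was refreshed at least once on [0, horizon]
            if rng.expovariate(gamma) <= horizon:
                state = rng.random() < p  # state after the last refresh
            if state:
                edges.add((u, v))
    return edges
```

Only the time-`horizon` marginal is simulated here; following the whole trajectory would require keeping all ring times, as in the processes $N_{\gamma^+,\gamma^-}$ defined in section 2.
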
2. Notations and Background
If $(X,\tau)$ is a topological space, we denote by $\mathcal B(X)$ the Borel $\sigma$-field on $X$. If $\psi$ is a measurable map between $(E,\mathcal E)$ and $(F,\mathcal F)$, and $\mu$ is a measure on $(E,\mathcal E)$, then we denote by $\psi\sharp\mu$ the push-forward of $\mu$ by $\psi$: $\psi\sharp\mu(A) = \mu(\psi^{-1}(A))$ for any $A \in \mathcal F$.

We shall frequently use Poisson processes. Let $(E,\mathcal E,\mu)$ be a measured space with $\mu$ $\sigma$-finite, denote by $\mathrm{Leb}(\mathbb R^+)$ the Lebesgue $\sigma$-field on $\mathbb R^+$ and by $\mathrm{leb}_{\mathbb R^+}$ the Lebesgue measure on $\mathbb R^+$, and let $\gamma \geq 0$. If $\mathcal P$ is a Poisson random set with intensity $\gamma$ on $(E\times\mathbb R^+,\ \mathcal E\times\mathrm{Leb}(\mathbb R^+),\ \mu\times\mathrm{leb}_{\mathbb R^+})$ (that is, with intensity measure $\gamma\,\mu\otimes\mathrm{leb}_{\mathbb R^+}$), we shall denote by $\mathcal P_t$ the points of $\mathcal P$ with birthtime at most $t$:
$$\mathcal P_t := \{x \in E : \exists s \leq t,\ (x,s)\in\mathcal P\}.$$

When $(X,d)$ is a Polish space and $D([0,\infty),X)$ is the set of càdlàg functions from $\mathbb R^+$ to $X$, we shall always put on $D([0,\infty),X)$ the topology of compact convergence, also known as the topology of uniform convergence on compact sets (see chapter V and notably section V.5 in [16]). Recall that this topology is metrizable and complete (although not separable in general), finer than Skorokhod's topology, and that a sequence $\omega_n = (\omega_n(t))_{t\in\mathbb R^+}$ in $D([0,\infty),X)$ converges in this topology to $\omega_\infty = (\omega_\infty(t))_{t\in\mathbb R^+}$ if and only if for every $T > 0$,
$$\sup_{t\in[0,T]} d(\omega_n(t), \omega_\infty(t)) \xrightarrow[n\to+\infty]{} 0.$$
Notice a slight subtlety: we shall prove convergence in distribution of a sequence of processes $((X_n(t))_{t\geq 0})_{n\geq 1}$ towards $(X(t))_{t\geq 0}$ by exhibiting couplings showing that the Lévy-Prokhorov distance between the distributions of $X_n$ and $X$ goes to zero as $n$ goes to infinity. This implies convergence in distribution (there is no need for separability or completeness in this direction). Finally, we shall use the notation $\overline{\mathbb N} := \mathbb N \cup \{+\infty\}$.

We will talk of a discrete graph to mean the usual graph-theoretic notion of an unoriented graph, that is, a pair $G = (V,E)$ with $V$ a finite set and $E$ a subset of $\binom V2 := \{\{u,v\} : u \neq v \in V\}$. Often, $E$ is seen as a point in $\{0,1\}^{\binom V2}$, where 0 codes for the absence of the corresponding edge and 1 for its membership in $E$. For a positive integer $n$ and $p \in [0,1]$, the Erdős-Rényi random graph (or Gilbert random graph) $G(n,p)$ is the random graph with vertex set $[n] := \{1,\ldots,n\}$ such that each edge is present with probability $p$, independently of the others. Alternatively, one may see it as a Bernoulli bond percolation with parameter $p$ on the complete graph with $n$ vertices, $K_n = ([n], \binom{[n]}2)$. This amounts to putting the product of Bernoulli measures with parameter $p$ on $\{0,1\}^{\binom{[n]}2}$.

Let $\gamma^+$ and $\gamma^-$ be non-negative real numbers. If $G = ([n],E)$ is a discrete graph on $n$ vertices, define a random process $N_{\gamma^+,\gamma^-}(G,t) = ([n], E_t)$, $t \geq 0$, with values in the set of subgraphs of $K_n$, as follows. To each pair $e \in \binom{[n]}2$ we attach two Poisson processes on $\mathbb R^+$: $\mathcal P^+_e$ of intensity $\gamma^+$ and $\mathcal P^-_e$ of intensity $\gamma^-$. We suppose that all these Poisson processes are independent. Each time $\mathcal P^+_e$ rings, we replace $E_{t^-}$ by $E_{t^-} \cup \{e\}$, and each time $\mathcal P^-_e$ rings, we replace $E_{t^-}$ by $E_{t^-} \setminus \{e\}$. The letter $N$ is reminiscent of "noise". If one wants to insist on the Poisson processes, we shall write $N(G, (\mathcal P^+,\mathcal P^-)_t)$ instead of $N_{\gamma^+,\gamma^-}(G,t)$, with an implicit definition for the map $N$.

One may take only $\mathcal P^+$ or only $\mathcal P^-$ into account: write $N^+(G,\mathcal P^+_t)$ for $N(G,(\mathcal P^+,\emptyset)_t)$ and $N^-(G,\mathcal P^-_t)$ for $N(G,(\emptyset,\mathcal P^-)_t)$. Then $N^+(G,\mathcal P^+_t)$ will be referred to as the discrete coalescence process of intensity $\gamma^+$ started at $G$, and $N^-(G,\mathcal P^-_t)$ as the discrete fragmentation process of intensity $\gamma^-$ started at $G$. Now, dynamical percolation of parameter $p$ and intensity $\gamma$, as described in the introduction, corresponds to the process $N_{\gamma p,\,\gamma(1-p)}$, and it is in its stationary state when started at $G(n,p)$ (independently of the Poisson processes used to define the dynamical percolation). All these processes will have continuous counterparts in the scaling limit, which will be defined in sections 2.6 and 2.8.

The main tool to analyze our coalescence and fragmentation processes will be a refinement of Aldous' work [5] on the multiplicative coalescent. In this section, we recall what we will use of his work. Let us define:
$$\ell^2 := \Big\{x \in (\mathbb R^+)^{\mathbb N^*} : \sum_{i\geq 1} x_i^2 < \infty\Big\}, \qquad \ell^2_\searrow := \{x \in \ell^2 : x_1 \geq x_2 \geq \ldots\}.$$
Let $(\mathcal N_{i,j})_{i,j\in\mathbb N^*}$ be independent Poisson point processes on the real line with intensity 1. Denote by $T_{i,j,n}$ the $n$-th jump time of $\mathcal N_{i,j}$. For $x \in \ell^2$, let $MG(x,t)$ denote the multigraph (with loops) with vertex set $\mathbb N^*$ and edge set
$$\bigcup_{n\geq 1}\Big\{\{i,j\}\in\binom{\mathbb N^*}2 \text{ s.t. } T_{i,j,n} \text{ or } T_{j,i,n} \leq t\,x_i x_j\Big\}.$$
If one forgets loops and transforms any multiple edge into a single edge, $MG(x,t)$ becomes $\mathcal W(x,t)$, the nonuniform random graph of section 1.4 of [5]. Denoting by $X(x,t)$ the sequence of sizes, listed in decreasing order, of the connected components of $\mathcal W(x,t)$, Aldous proved in [5], Proposition 5, that $X(x,t)$ defines a Markov process on $\ell^2_\searrow$ which possesses the Feller property. Following Aldous, we denote by $S(x,t)$ the sum of the squares of the masses of the components of $MG(x,t)$. We shall use later the following lemmas.

Lemma 2.1 ([5], Lemma 20). For $x$ in $\ell^2_\searrow$,
$$P(S(x,t) > s) \leq \frac{t\,s\,S(x,0)}{s - S(x,0)}, \qquad s > S(x,0).$$

Lemma 2.2 ([5], Lemma 23). Let $(z_i,\ 1\leq i\leq n)$ be strictly positive vertex weights, and let $1 \leq m < n$. Consider the bipartite random graph $\mathcal B$ on vertices $\{1,2,\ldots,m\}\cup\{m+1,\ldots,n\}$ defined by: for each pair $(i,j)$ with $1\leq i\leq m < j\leq n$, the edge $(i,j)$ is present with probability $1-\exp(-t z_i z_j)$, independently for different pairs. Write $\alpha_1 = \sum_{i=1}^m z_i^2$ and $\alpha_2 = \sum_{i=m+1}^n z_i^2$. Let $(Z_i)$ be the sizes of the components of $\mathcal B$. Then,
$$\varepsilon^2\, P\Big(\sum_i Z_i^2 \geq \alpha_1+\varepsilon\Big) \leq (1+t(\alpha_1+\varepsilon))^2\,\alpha_1\alpha_2, \qquad \varepsilon > 0.$$

Remark 2.3.
As noticed in [5], page 842, Lemma 2.2 extends to $z \in \ell^2$. In [5], this lemma is used in conjunction with the following one.
Lemma 2.4 ([5], Lemma 17). Let $\tilde G$ be a graph with vertex weights $(\tilde x_i)$. Let $G$ be a subgraph of $\tilde G$ (that is, each edge of $G$ is an edge of $\tilde G$) with vertex weights $x_i \leq \tilde x_i$. Let $\tilde a$ and $a$ be the decreasing orderings of the component sizes of $\tilde G$ and $G$. Then
$$\|\tilde a - a\|^2 \leq \sum_i \tilde a_i^2 - \sum_i a_i^2,$$
provided $\sum_i a_i^2 < \infty$.

Lemma 2.5.
Let $G = (V,E)$ be a multigraph whose vertices have weights $(x_i)_{i\in V}$. If $W \subset V$ and $E' \subset E$, let $\mathrm{comp}(W,E')$ denote the set of connected components of the graph $(W, E' \cap \binom W2)$ and define
$$S(W,E') := \sum_{m\in\mathrm{comp}(W,E')}\Big(\sum_{i\in m} x_i\Big)^2.$$
Now, let $W \subset V$ be such that for any $m_1 \in \mathrm{comp}(W,E)$ and $m_2 \in \mathrm{comp}(V\setminus W, E)$, there is at most one edge of $E$ between $m_1$ and $m_2$. Then, for any $E' \subset E$,
$$S(V,E) - S(W,E) \geq S(V,E') - S(W,E'),$$
provided $S(W,E) < \infty$.

Proof. For $i$ and $j$ in $V$ and $E' \subset E$, we denote by $i \sim_{E'} j$ the fact that $i$ and $j$ are distinct and connected by a path in $(V,E')$. The hypothesis on $W$ implies that for any $E' \subset E$, two vertices of $W$ are in the same component of $(V,E')$ if and only if they are in the same component of $(W, E' \cap \binom W2)$:
$$\forall i,j \in W,\quad i \sim_{E'} j \iff i \sim_{E'\cap\binom W2} j. \tag{2.1}$$
Now,
$$S(V,E') = \sum_{i\in V} x_i^2 + \sum_{\substack{i,j\in V\\ i\sim_{E'} j}} x_i x_j = S(W,E') + \sum_{i\in V\setminus W} x_i^2 + \sum_{\substack{i,j\in V\\ i\sim_{E'} j}} x_i x_j - \sum_{\substack{i,j\in W\\ i\sim_{E'\cap\binom W2} j}} x_i x_j$$
$$= S(W,E') + \sum_{i\in V\setminus W} x_i^2 + \sum_{\substack{(i,j)\notin W^2\\ i\sim_{E'} j}} x_i x_j + \sum_{\substack{(i,j)\in W^2\\ i\sim_{E'} j}} x_i x_j - \sum_{\substack{i,j\in W\\ i\sim_{E'\cap\binom W2} j}} x_i x_j$$
$$= S(W,E') + \sum_{i\in V\setminus W} x_i^2 + \sum_{\substack{(i,j)\notin W^2\\ i\sim_{E'} j}} x_i x_j,$$
because of (2.1). The two sums on the right of the last equation are increasing in $E'$, and this shows the result.

The main characters in this article are the connected components of Erdős-Rényi random graphs and their continuum limit, each one undergoing the updates due to dynamical percolation. One task is therefore to define a proper space where those characters can live, and first to make precise what we mean by "the connected components of a graph" seen as a single object. One option is to order the components by decreasing order of size, as in [2], or in a size-biased way, as in [5], and thus see the collected components of a graph as a sequence of graphs. However, this order is not preserved under the process of dynamical percolation.
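As an aside, the inequality of Lemma 2.5 above is elementary to check numerically. The sketch below is ours (the helper `S` mirrors the notation of the lemma): it computes $S(W,E')$ by depth-first search and verifies the inequality for every $E' \subset E$ on a toy graph satisfying the one-edge hypothesis.

```python
from itertools import chain, combinations

def S(weights, W, edges):
    """S(W, E') = sum over connected components of (W, E' restricted to W)
    of the squared total weight, as in Lemma 2.5."""
    W = set(W)
    adj = {v: [] for v in W}
    for u, v in edges:
        if u in W and v in W:
            adj[u].append(v)
            adj[v].append(u)
    seen, total = set(), 0.0
    for s in W:
        if s in seen:
            continue
        seen.add(s)
        stack, mass = [s], 0.0
        while stack:
            u = stack.pop()
            mass += weights[u]
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += mass ** 2
    return total

# Toy example: V = {0,1,2,3}, W = {0,1}; the only E-edge between a
# component of W and a component of V \ W is (1, 2), so the hypothesis holds.
weights = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
V, W, E = [0, 1, 2, 3], [0, 1], [(0, 1), (2, 3), (1, 2)]
lhs = S(weights, V, E) - S(weights, W, E)
for Ep in chain.from_iterable(combinations(E, r) for r in range(len(E) + 1)):
    assert lhs >= S(weights, V, Ep) - S(weights, W, Ep)
```
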
(Ordering the components by decreasing size requires some device to break ties, but those ties disappear in the continuum limit, at least for the Erdős-Rényi random graphs.) Also, looking only at the mass to impose which graphs are pairwise compared between two collections of graphs might lead to a larger distance than one would expect. Indeed, suppose that $(G_1,G_2)$ and $(G'_1,G'_2)$ are two pairs of graphs, $G_1$ (resp. $G'_1$) having slightly larger mass than $G_2$ (resp. $G'_2$). One might have $G_1$ close to $G'_2$ and $G_2$ close to $G'_1$ in some topology (the Gromov-Hausdorff-Prokhorov topology to be defined later), but $G_1$ far from $G'_1$ in this topology. For all these reasons, I found it somewhat uncomfortable to work with such a topology in the dynamical context. The topology we will use will be defined in section 2.5, and the story begins with the definition of a semi-metric space.

One way to present the connected components of a graph is to consider the graph as a metric space using the usual graph distance, allowing the metric to take the value $+\infty$ between points which are not in the same connected component, as in [8], page 1. In addition, the main difficulty in defining dynamical percolation on the continuum limit will be in defining coalescence. In this process some points will be identified, and one clear way to present this is to modify the metric and allow it to be equal to zero between different points, rather than performing the corresponding quotient operation. This type of space is called a semi-metric space in [8], Definition 1.1.4, and we shall stick to this terminology.

Definition 2.6. A semi-metric space is a couple $(X,d)$ where $X$ is a non-empty set and $d$ is a function from $X \times X$ to $\mathbb R^+ \cup \{+\infty\}$ such that for all $x$, $y$ and $z$ in $X$:
• $d(x,z) \leq d(x,y) + d(y,z)$,
• $d(x,x) = 0$,
• $d(x,y) = d(y,x)$.
A semi-metric space $(X,d)$ is a metric space if in addition
• $d(x,y) = 0 \Rightarrow x = y$.
A metric or semi-metric space $(X,d)$ is said to be finite if $d$ is finite.

Of course, when thinking about a semi-metric space $(X,d)$, one may visualize the quotient metric space $(X/d, d)$ where points at null distance are identified. $X$ and $X/d$ are at zero Gromov-Hausdorff distance (defined in section 2.5 below). Notice that $(X,d)$ is not necessarily a separated space, but $(X/d, d)$ always is. Furthermore, $(X,d)$ is separable if and only if $(X/d, d)$ is separable.
Definition 2.7. If $(X,d)$ is a semi-metric space, the relation $\mathcal R$ defined by $x \mathcal R y \Leftrightarrow d(x,y) < \infty$ is an equivalence relation. Each equivalence class is called a component of $(X,d)$, and $\mathrm{comp}(X,d)$ denotes the set of components. We denote by $\mathrm{diam}(X)$ the diameter of $(X,d)$:
$$\mathrm{diam}(X) = \sup_{x,y\in X} d(x,y),$$
and by $\mathrm{supdiam}(X)$ the supremum of the diameters of its components:
$$\mathrm{supdiam}(X) = \sup_{m\in\mathrm{comp}(X,d)} \mathrm{diam}(m).$$

Definition 2.8. A measured semi-metric space (m.s-m.s) is a triple $\mathbf X = (X,d,\mu)$ where $(X,d)$ is a semi-metric space and $\mu$ is a measure on $X$ defined on a $\sigma$-field containing the Borel $\sigma$-field for the topology induced by $d$. An m.s-m.s $(X,d,\mu)$ is said to be finite if $(X,d)$ is a finite totally bounded semi-metric space and $\mu$ is a finite measure. Finally, we define $\mathrm{comp}(\mathbf X) := \mathrm{comp}(X,d)$ and $\mathrm{masses}(\mathbf X) := (\mu(m))_{m\in\mathrm{comp}(\mathbf X)}$.

Notice that a finite m.s-m.s has only one component.
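Concretely, for a finite semi-metric space stored as a matrix with entries in $[0,+\infty]$, the components of Definition 2.7 and the masses of Definition 2.8 can be extracted as follows. This is a sketch of ours (the representation and the name `comp_and_masses` are assumptions); it uses the fact that, by the triangle inequality, $d(x,y) < \infty$ is an equivalence relation.

```python
import math

def comp_and_masses(dist, mass):
    """dist[i][j] in [0, inf]: a finite semi-metric given as a matrix;
    mass[i]: the measure of point i (mu = weighted counting measure).
    Components are the classes of the relation d(x, y) < infinity."""
    n = len(dist)
    assigned = [False] * n
    comps = []
    for i in range(n):
        if assigned[i]:
            continue
        # everything at finite distance from i forms one component
        members = [j for j in range(n) if not math.isinf(dist[i][j])]
        for j in members:
            assigned[j] = True
        comps.append(members)
    masses = [sum(mass[j] for j in m) for m in comps]
    return comps, masses
```
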
Remark 2.9.
One might feel more comfortable after realizing the following. Let $\pi$ denote the projection from $(X,d)$ to $X' := X/d$, $\mathcal B'$ the Borel $\sigma$-field on $X'$ and $\mathcal B$ the Borel $\sigma$-field on $X$. Then $\pi^{-1}(\mathcal B') = \mathcal B$ and the image measure $\pi\sharp\mu$ on $X'$ is a Borel $\sigma$-finite measure.

2.5. The Gromov-Hausdorff-Prokhorov distance

In the introduction, we mentioned that $G(n, p(\lambda,n))$ converges in distribution, but we did not mention precisely the underlying topology. The main topological ingredient in [3] is the Gromov-Hausdorff-Prokhorov distance between two components of the graph, and we shall use this repeatedly. To define it, we need to recall some definitions from [3].

If $\mathbf X = (X,d,\mu)$ and $\mathbf X' = (X',d',\mu')$ are two measured semi-metric spaces, a correspondence $\mathcal R$ between $X$ and $X'$ is a measurable subset of $X \times X'$ such that:
$$\forall x \in X,\ \exists x' \in X' : (x,x') \in \mathcal R \quad\text{and}\quad \forall x' \in X',\ \exists x \in X : (x,x') \in \mathcal R.$$
We let $C(X,X')$ denote the set of correspondences between $X$ and $X'$. The distortion of a correspondence $\mathcal R$ is defined as
$$\mathrm{dis}(\mathcal R) := \inf\big\{\varepsilon > 0 : \forall (x,x'),(y,y') \in \mathcal R,\ d(x,y) \leq d'(x',y') + \varepsilon \text{ and } d'(x',y') \leq d(x,y) + \varepsilon\big\}.$$
The Gromov-Hausdorff distance between two semi-metric spaces $(X,d)$ and $(X',d')$ may be defined as:
$$d_{GH}((X,d),(X',d')) := \inf_{\mathcal R\in C(X,X')} \tfrac12\,\mathrm{dis}(\mathcal R).$$
We denote by $M(X,X')$ the set of finite Borel measures on $X\times X'$. For $\pi$ in $M(X,X')$, we denote by $\pi_1$ (resp. $\pi_2$) the first (resp. the second) marginal of $\pi$. For any $\pi \in M(X,X')$ and any finite measures $\mu$ on $X$ and $\mu'$ on $X'$, one defines:
$$D(\pi;\mu,\mu') = \|\pi_1 - \mu\| + \|\pi_2 - \mu'\|,$$
where $\|\nu\|$ is the total variation of a signed measure $\nu$. The Gromov-Hausdorff-Prokhorov distance is defined as follows in [3].
Definition 2.10. If $\mathbf X = (X,d,\mu)$ and $\mathbf X' = (X',d',\mu')$ are two m.s-m.s, the Gromov-Hausdorff-Prokhorov distance between them is defined as:
$$d_{GHP}(\mathbf X,\mathbf X') = \inf_{\substack{\pi\in M(X,X')\\ \mathcal R\in C(X,X')}} \Big\{D(\pi;\mu,\mu') \vee \tfrac12\,\mathrm{dis}(\mathcal R) \vee \pi(\mathcal R^c)\Big\}.$$

It is not difficult to show that $d_{GHP}$ satisfies the axioms of a semi-metric. Let us give a bit more intuition about what the Gromov-Hausdorff-Prokhorov distance measures. On a semi-metric space $(X,\delta)$, denote by $\delta_H$ the Hausdorff distance and by $\delta_{LP}$ the Lévy-Prokhorov distance. Let us recall their definitions. For $B \subset X$ and $\varepsilon > 0$, let $B^\varepsilon := \{x \in X : \exists y \in B,\ \delta(x,y) < \varepsilon\}$. Now, for $A$ and $B$ subsets of $X$,
$$\delta_H(A,B) := \inf\{\varepsilon > 0 : A \subset B^\varepsilon \text{ and } B \subset A^\varepsilon\},$$
and for finite measures $\mu$ and $\nu$ on $X$,
$$\delta_{LP}(\mu,\nu) := \inf\{\varepsilon > 0 : \forall B \in \mathcal B(X),\ \mu(B) \leq \nu(B^\varepsilon) + \varepsilon \text{ and } \nu(B) \leq \mu(B^\varepsilon) + \varepsilon\}.$$

The following lemma shows that the Gromov-Hausdorff-Prokhorov distance measures how well two measured semi-metric spaces can be put in the same ambient space so that, simultaneously, their measures are close in Lévy-Prokhorov distance and their geometries are close in Hausdorff distance. It shows that the definitions of [2] and [1] are equivalent, and its proof is a small variation on the proof of Proposition 6 in [15], where only probability measures were considered.

Lemma 2.11. Let $\mathbf X = (X,d,\mu)$ and $\mathbf X' = (X',d',\mu')$ be two measured separable semi-metric spaces, and let
$$\tilde d_{GHP}(\mathbf X,\mathbf X') := \inf_\delta \{\delta_H(X,X') \vee \delta_{LP}(\mu,\mu')\},$$
where the infimum is over all semi-metrics $\delta$ on the disjoint union $X \sqcup X'$ extending $d$ and $d'$. Then,
$$\tfrac12\,\tilde d_{GHP}(\mathbf X,\mathbf X') \leq d_{GHP}(\mathbf X,\mathbf X') \leq \tilde d_{GHP}(\mathbf X,\mathbf X').$$
Proof.
Let $\varepsilon > 0$ be such that $\tilde d_{GHP}(\mathbf X,\mathbf X') < \varepsilon$, and let $\delta$ be a semi-metric on $X \sqcup X'$ extending $d$ and $d'$ with $\delta_H(X,X') \leq \varepsilon$ and $\delta_{LP}(\mu,\mu') \leq \varepsilon$. Let
$$\mathcal R := \{(x,x') \in X\times X' : \delta(x,x') \leq \varepsilon\},$$
so that $\mathcal R$ is a correspondence in $C(X,X')$ with distortion at most $2\varepsilon$. Suppose without loss of generality that $\mu'(X') \leq \mu(X)$. Since $\delta_{LP}(\mu,\mu') \leq \varepsilon$, for any closed set $B$ in the disjoint union $X \sqcup X'$,
$$\mu(B) \leq \mu'(B^\varepsilon) + \varepsilon, \quad\text{so}\quad \frac{\mu(B)}{\mu(X)} \leq \frac{\mu'(B^\varepsilon)}{\mu'(X')} + \frac{\varepsilon}{\mu(X)}.$$
Thus, Strassen's theorem (Theorem 11.6.2 in [9]) asserts that there exists a coupling $\pi_0 \in M(X,X')$ such that:
$$(\pi_0)_1 = \frac{\mu}{\mu(X)}, \qquad (\pi_0)_2 = \frac{\mu'}{\mu'(X')} \qquad\text{and}\qquad \pi_0(\mathcal R^c) \leq \frac{\varepsilon}{\mu(X)}.$$
Let $\pi := \mu(X)\,\pi_0$. Then $\pi(\mathcal R^c) \leq \varepsilon$ and
$$\|\pi_1 - \mu\| = 0, \qquad \|\pi_2 - \mu'\| = \mu(X) - \mu'(X') \leq \varepsilon.$$
Thus $d_{GHP}(\mathbf X,\mathbf X') \leq \varepsilon$, and this proves the second inequality.

Suppose now that $d_{GHP}(\mathbf X,\mathbf X') < \varepsilon$, and let $\mathcal R \in C(X,X')$ and $\pi \in M(X,X')$ be such that
$$\mathrm{dis}(\mathcal R) \leq 2\varepsilon, \qquad \pi(\mathcal R^c) \leq \varepsilon \qquad\text{and}\qquad D(\pi;\mu,\mu') \leq \varepsilon.$$
Then, define a semi-metric $\delta$ on the disjoint union $X \sqcup X'$ as follows:
$$\delta(x,x') := \inf_{(y,y')\in\mathcal R} \{d(x,y) + \varepsilon + d'(y',x')\};$$
it is proved in [15], Proposition 6, that $\delta$ is a semi-metric on $X \sqcup X'$ which extends $d$ and $d'$. Furthermore, if $(x,x') \in \mathcal R$, then $\delta(x,x') = \varepsilon$. Clearly, $\delta_H(X,X') \leq \varepsilon$, and for any Borel set $B$ in $X \sqcup X'$,
$$\mu(B) \leq \|\pi_1 - \mu\| + \pi_1(B) \leq \|\pi_1 - \mu\| + \pi_2(B^{2\varepsilon}) + \pi(\mathcal R^c) \leq \|\pi_1 - \mu\| + \|\pi_2 - \mu'\| + \mu'(B^{2\varepsilon}) + \pi(\mathcal R^c) \leq 2\varepsilon + \mu'(B^{2\varepsilon}).$$
Thus $\delta_{LP}(\mu,\mu') \leq 2\varepsilon$, so $\tilde d_{GHP}(\mathbf X,\mathbf X') \leq 2\varepsilon$, and this proves the first inequality.

It is easy to see that two m.s-m.s $\mathbf X$ and $\mathbf X'$ are at zero $d_{GHP}$-distance if and only if there are two distance- and measure-preserving maps $\phi$ and $\phi'$ such that $\phi$ is a map from $X$ to $X'$ and $\phi'$ a map from $X'$ to $X$. Let $\mathcal C$ denote the class of finite measured semi-metric spaces and $\mathcal R$ the equivalence relation on $\mathcal C$ defined by $\mathbf X \mathcal R \mathbf X' \Leftrightarrow d_{GHP}(\mathbf X,\mathbf X') = 0$.
Even though there is no set of all finite measured semi-metric spaces (see for instance [8], Remark 7.2.5 and above), $\mathcal C/\mathcal R$ can be considered as a set in the sense that there exists a set of representatives of elements of $\mathcal C$.

Definition 2.12. Let $\mathcal U$ be the universal Urysohn space and consider the set $\mathcal P$ of finite measured metric subspaces of $\mathcal U$. We denote by $\mathcal M$ the quotient $\mathcal P/\mathcal R$ of $\mathcal P$ by the equivalence relation $\mathcal R$, where $\mathbf X \mathcal R \mathbf X' \Leftrightarrow d_{GHP}(\mathbf X,\mathbf X') = 0$. By abuse of language, we may call $\mathcal M$ the "set of equivalence classes of finite measured semi-metric spaces, equipped with the Gromov-Hausdorff-Prokhorov distance $d_{GHP}$".

$\mathcal M$ is a set of representatives of elements of $\mathcal C$. Indeed, since every separable metric space is isometric to a metric subspace of $\mathcal U$, every member of $\mathcal C$ is at zero $d_{GHP}$-distance from some element of $\mathcal P$, and even at zero $d_{GHP}$-distance from some compact element of $\mathcal P$. Thus, for every member $\mathbf X$ of the class $\mathcal C$, there is an element $[\mathbf X']$ of $\mathcal M$ such that for any $\mathbf X'' \in [\mathbf X']$, $d_{GHP}(\mathbf X,\mathbf X'') = 0$. Abusing notation, we shall denote by $[\mathbf X]$ the member of $\mathcal M$ whose elements are at zero $d_{GHP}$-distance from $\mathbf X$. For our purpose, it is in fact not crucial to have Definition 2.12, and one could reformulate all the results of this article in terms of sequences of random variables, at the expense of much heavier statements.

The following is shown in [1].

Theorem 2.13. $(\mathcal M, d_{GHP})$ is a complete separable metric space.

Now, the Gromov-Hausdorff-Prokhorov distance of Definition 2.10 is too strong for our purpose when applied to m.s-m.s which have an infinite number of components: it essentially amounts to a uniform control of the $d_{GHP}$-distance between paired components. We are interested in a weaker distance which localizes around the largest components. We shall restrict to countable unions of finite semi-metric spaces with the additional property that for any $\varepsilon > 0$, there are only a finite number of components whose mass exceeds $\varepsilon$. To formulate the distance, it will be convenient to view those semi-metric spaces as a set of counting measures on $\mathcal M$.

Definition 2.14.
For any $\varepsilon > 0$, let
$$\mathcal M_{>\varepsilon} = \{[(X,d,\mu)] \in \mathcal M \text{ s.t. } \mu(X) > \varepsilon\}.$$
For any counting measure $\nu$ on $\mathcal M$, denote by $\nu_{>\varepsilon}$ the restriction of $\nu$ to $\mathcal M_{>\varepsilon}$. Denote by $\mathcal N$ the set of counting measures $\nu$ on $\mathcal M$ such that for any $\varepsilon > 0$, $\nu_{>\varepsilon}$ is a finite measure, and such that $\nu$ does not have atoms of mass 0, that is:
$$\nu(\{[(X,d,\mu)] \in \mathcal M \text{ s.t. } \mu(X) = 0\}) = 0.$$
When $X$ is a measured semi-metric space whose components are finite, we denote by $\nu_X$ the counting measure on $\mathcal M$ defined by
$$\nu_X := \sum_{m\in\mathrm{comp}(X)} \delta_{[m]}.$$
Abusing notation, we shall say that $X \in \mathcal N$ if $\nu_X$ belongs to $\mathcal N$, and we shall denote by $X_{>\varepsilon}$ the disjoint union of the components of $X$ whose masses are larger than $\varepsilon$.

Notice that $X$ is in $\mathcal N$ if and only if it has an at most countable number of components, each of its components has positive mass, and for any $\varepsilon > 0$, $X_{>\varepsilon}$ is the disjoint union of a finite number of components, each one being totally bounded and equipped with a finite measure.

To define a Gromov-Hausdorff-Prokhorov distance between elements of $\mathcal N$, first let $\rho_{LP}$ be the Lévy-Prokhorov distance on the set of finite measures on the metric space $(\mathcal M, d_{GHP})$. Recall that
$$\rho_{LP}(\nu,\nu') = \inf\{\varepsilon > 0 : \forall B \in \mathcal B(\mathcal M),\ \nu(B) \leq \nu'(B^\varepsilon) + \varepsilon \text{ and } \nu'(B) \leq \nu(B^\varepsilon) + \varepsilon\},$$
where $B^\varepsilon = \{\mathbf X \in \mathcal M : d_{GHP}(\mathbf X, B) < \varepsilon\}$ and $\mathcal B(\mathcal M)$ is the Borel $\sigma$-algebra on $(\mathcal M, d_{GHP})$. Now, for $\mathbf X = [(X,d,\mu)] \in \mathcal M$ and $k \geq 1$, define a function $f_k$ by
$$f_k(\mathbf X) := \begin{cases} 1 & \text{if } \mu(X) \geq \frac1k, \\ k(k+1)\big(\mu(X) - \frac1{k+1}\big) & \text{if } \mu(X) \in \big[\frac1{k+1}, \frac1k\big), \\ 0 & \text{if } \mu(X) < \frac1{k+1}. \end{cases}$$
The following distance is an analogue of the distance in Lemma 4.6 of [14], where it is used to metrize the vague topology on boundedly finite measures.
Definition 2.15.
Let $\nu$ and $\nu'$ be counting measures on $\mathcal M$. Then we define the Gromov-Hausdorff-Prokhorov metric $L_{GHP}$ between $\nu$ and $\nu'$ as:
$$L_{GHP}(\nu,\nu') := \sum_{k\geq 1} 2^{-k}\,\{1 \wedge \rho_{LP}(f_k\nu, f_k\nu')\}.$$

Remark 2.16. $L_{GHP}$ makes sense even between counting measures whose atoms are semi-metric spaces.
Remark 2.17.
Notice that $\rho_{LP}(f_k\nu_X, 0)$ is at most the number of connected components of $X$, and vanishes as soon as $\frac1{k+1} > \sup_{m\in\mathrm{comp}(X)}\mu(m)$. Thus
$$L_{GHP}(\nu_X, 0) \leq 2^{\,2 - (\sup_{m\in\mathrm{comp}(X)}\mu(m))^{-1}}.$$
One thus sees that if $X^{(n)}$ is a sequence of m.s-m.s such that
$$\sup_{m\in\mathrm{comp}(X^{(n)})}\mu(m) \xrightarrow[n\to+\infty]{} 0,$$
then, whatever the diameters of the components are, $\nu_{X^{(n)}}$ converges for $L_{GHP}$ to zero, which can be seen as an empty collection of measured metric spaces. Notably, supdiam is not continuous with respect to the $L_{GHP}$-distance.
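For concreteness, the truncation function $f_k$ entering the definition of $L_{GHP}$ can be implemented and checked directly (a sketch of ours; the function name is an assumption):

```python
def f_k(mass, k):
    """f_k from the definition of L_GHP: equals 1 for mass >= 1/k,
    0 for mass < 1/(k+1), and interpolates linearly, with slope k(k+1),
    in between."""
    if mass >= 1.0 / k:
        return 1.0
    if mass < 1.0 / (k + 1):
        return 0.0
    return k * (k + 1) * (mass - 1.0 / (k + 1))
```

Each $f_k$ is $k(k+1)$-Lipschitz in the mass, a fact used in the proof of Lemma 2.19 below; it keeps components of mass at least $1/k$, discards those of mass below $1/(k+1)$, and interpolates continuously in between.
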
The following proposition is analogous to similar results concerning vague convergence of boundedly finite measures (see section 4 of [14]).
Proposition 2.18. $(\mathcal N, L_{GHP})$ is a complete separable metric space, and if $\nu_n$, $n \geq 1$, and $\nu$ are elements of $\mathcal N$, then $(\nu_n)_{n\geq 1}$ converges to $\nu$ if and only if for every $\varepsilon > 0$ such that $\nu(\{[(X,d,\mu)] \in \mathcal M : \mu(X) = \varepsilon\}) = 0$, $\rho_{LP}(\nu^{(n)}_{>\varepsilon}, \nu_{>\varepsilon})$ goes to zero as $n$ goes to infinity.

Proof. The fact that $L_{GHP}$ is a metric is left to the reader. Let $\mathcal D$ be a countable set dense in $\{[(X,d,\mu)] \in \mathcal M : \mu(X) > 0\}$. Let $D := \{\sum_{k=1}^m \delta_{\mathbf X_k} : m \in \mathbb N,\ \mathbf X_1,\ldots,\mathbf X_m \in \mathcal D\}$. It is easy to show that $D$ is dense in $(\mathcal N, L_{GHP})$. This shows separability.

Now, suppose that $\nu^{(n)}$ is a Cauchy sequence for $L_{GHP}$. Then, for any $k \geq 1$, $f_k\nu^{(n)}$ is a Cauchy sequence of finite measures for $\rho_{LP}$. From the completeness of the Lévy-Prokhorov distance on finite measures on a Polish space, we get that for each $k$ there is some measure $\nu_k$ such that $\rho_{LP}(f_k\nu^{(n)}, \nu_k) \to 0$ as $n \to \infty$. Notice that if $2 \leq k \leq l$, $\nu_k = \nu_l$ on $\mathcal M_{>\frac1{k-1}}$. Define $\nu := \sup_{k\geq 1} \mathbf 1_{\mathcal M_{>\frac1{k+1}}}\nu_{k+2}$, so that for any $k \geq 1$, $f_k\nu = \nu_k$. Then $\nu$ is an element of $\mathcal N$ and $L_{GHP}(\nu^{(n)},\nu) \to 0$ as $n \to \infty$, showing the completeness of $(\mathcal N, L_{GHP})$.

Finally, suppose that $L_{GHP}(\nu^{(n)},\nu)$ goes to zero as $n$ goes to infinity, and let $\varepsilon > 0$ be such that $\nu(\{[(X,d,\mu)] \in \mathcal M : \mu(X) = \varepsilon\}) = 0$. Then, for any $\alpha > 0$, let $k$ be such that $\frac1k \leq \varepsilon$, and let $N$ be such that
$$\forall n \geq N,\ \forall A \in \mathcal B(\mathcal M),\quad f_k\nu(A) \leq f_k\nu^{(n)}(A^\alpha) + \alpha \quad\text{and}\quad f_k\nu^{(n)}(A) \leq f_k\nu(A^\alpha) + \alpha.$$
Then, for $n \geq N$ and $B \in \mathcal B(\mathcal M_{>\varepsilon})$,
$$\nu_{>\varepsilon}(B) \leq \nu(B \cap \mathcal M_{>\varepsilon+\alpha}) + \nu(\mathcal M_{>\varepsilon} \setminus \mathcal M_{>\varepsilon+\alpha}) = f_k\nu(B \cap \mathcal M_{>\varepsilon+\alpha}) + \nu(\mathcal M_{>\varepsilon} \setminus \mathcal M_{>\varepsilon+\alpha})$$
$$\leq f_k\nu^{(n)}\big((B \cap \mathcal M_{>\varepsilon+\alpha})^\alpha\big) + \alpha + \nu(\mathcal M_{>\varepsilon} \setminus \mathcal M_{>\varepsilon+\alpha}) = \nu^{(n)}_{>\varepsilon}\big((B \cap \mathcal M_{>\varepsilon+\alpha})^\alpha\big) + \alpha + \nu(\mathcal M_{>\varepsilon} \setminus \mathcal M_{>\varepsilon+\alpha})$$
$$\leq \nu^{(n)}_{>\varepsilon}(B^\alpha) + \alpha + \nu(\mathcal M_{>\varepsilon} \setminus \mathcal M_{>\varepsilon+\alpha}),$$
where we used the fact that $(B \cap \mathcal M_{>\varepsilon+\alpha})^\alpha \subset \mathcal M_{>\varepsilon}$ and that $f_k$ equals 1 on $\mathcal M_{>\varepsilon}$. Also, for $n \geq N$ and $B \in \mathcal B(\mathcal M_{>\varepsilon})$,
$$\nu^{(n)}_{>\varepsilon}(B) = f_k\nu^{(n)}(B) \leq f_k\nu(B^\alpha) + \alpha \leq \nu(B^\alpha) + \alpha \leq \nu(B^\alpha \cap \mathcal M_{>\varepsilon}) + \nu(\mathcal M_{>\varepsilon-\alpha} \setminus \mathcal M_{>\varepsilon}) + \alpha = \nu_{>\varepsilon}(B^\alpha) + \nu(\mathcal M_{>\varepsilon-\alpha} \setminus \mathcal M_{>\varepsilon}) + \alpha.$$
To finish the proof, note that since $\nu(\{[(X,d,\mu)] \in \mathcal M : \mu(X) = \varepsilon\}) = 0$,
$$\nu(\mathcal M_{>\varepsilon} \setminus \mathcal M_{>\varepsilon+\alpha}) + \nu(\mathcal M_{>\varepsilon-\alpha} \setminus \mathcal M_{>\varepsilon}) \xrightarrow[\alpha\to 0]{} 0.$$

Notice that any m.s-m.s of $\mathcal N$ is at zero $L_{GHP}$-distance from an m.s-m.s whose components are compact metric spaces. In this article, we are really interested in equivalence classes of m.s-m.s for the equivalence relation "being at zero $L_{GHP}$-distance", although in order to define random processes such as coalescence and fragmentation, it will be convenient to have in mind a particular representative of such a class. We shall always use the following lemmas to bound $L_{GHP}$ from above.
Lemma 2.19.
Let X = (X, d, μ) and X′ = (X′, d′, μ′) belong to N and let ε, α > 0. Suppose there exist two injective maps σ : comp(X_{>ε}) → comp(X′) and σ′ : comp(X′_{>ε}) → comp(X) such that:

∀m ∈ comp(X_{>ε}), d_GHP(m, σ(m)) ≤ α and ∀m′ ∈ comp(X′_{>ε}), d_GHP(m′, σ′(m′)) ≤ α.

Then,

L_GHP(ν_X, ν_{X′}) ≤ α(1 + 8♯comp(X_{>ε−α})) + 16ε,

and if ε > α, for p > 0,

L_GHP(ν_X, ν_{X′}) ≤ α(1 + 8 Σ_{m∈comp(X)} μ(m)^p / (ε − α)^p) + 16ε.

Proof. Consider any ε′ ≥ ε. For a component m in comp(X_{>ε′}), the difference between the masses μ(m) and μ′(σ(m)) is at most α, and the same holds between m′ and σ′(m′) when m′ ∈ comp(X′_{>ε′}). Thus σ′ sends comp(X′_{>ε′}) into comp(X_{>ε′−α}) and ♯comp(X′_{>ε′}) ≤ ♯comp(X_{>ε′−α}). Now, let k be such that 1/(k+1) ≥ ε, and let B ∈ B(M_{>1/(k+1)}). Then, for any m in comp(X_{>1/(k+1)}) ∩ B, σ(m) belongs to comp(X′) ∩ B^α. Notice that f_k is k(k+1)-Lipschitz. Hence,

f_k ν_X(B) = Σ_{m ∈ comp(X_{>1/(k+1)}) ∩ B} f_k(m) ≤ Σ_{m ∈ comp(X_{>1/(k+1)}) ∩ B} f_k(σ(m)) + αk(k+1) ♯comp(X_{>1/(k+1)}) ≤ Σ_{m′ ∈ comp(X′), m′ ∈ B^α} f_k(m′) + αk(k+1) ♯comp(X_{>1/(k+1)}) = f_k ν_{X′}(B^α) + αk(k+1) ♯comp(X_{>1/(k+1)}),

and symmetrically f_k ν_{X′}(B) ≤ f_k ν_X(B^α) + αk(k+1) ♯comp(X′_{>1/(k+1)}). Thus, for any k such that 1/(k+1) ≥ ε,

ρ_LP(f_k ν_X, f_k ν_{X′}) ≤ α(1 + k(k+1)(♯comp(X_{>1/(k+1)}) ∨ ♯comp(X′_{>1/(k+1)}))) ≤ α(1 + k(k+1) ♯comp(X_{>ε−α})).

Thus,

L_GHP(ν_X, ν_{X′}) ≤ α ♯comp(X_{>ε−α}) Σ_{k < 1/ε − 1} 2^{−k} k(k+1) + Σ_{k ≥ 1/ε − 1} 2^{−k} ≤ α(1 + 8♯comp(X_{>ε−α})) + 16ε.

The second bound follows since ♯comp(X_{>ε−α}) ≤ Σ_{m∈comp(X)} μ(m)^p / (ε − α)^p when ε > α.

If X and X′ are two m.s-m.s with a finite number of finite components, one may measure their distance with d_GHP (using Definition 2.10), with L_GHP (using Definition 2.15), or with

1 ∧ inf_σ sup_{m ∈ comp(X)} d_GHP(m, σ(m)) = 1 ∧ ρ_LP(ν_X, ν_{X′}),

where the infimum is over bijections σ between comp(X) and comp(X′). Those three distances do not necessarily coincide, and the following lemma clarifies the links between them.

Lemma 2.20.
Let X = (X, d, μ) and X′ = (X′, d′, μ′) be two m.s-m.s in N with a finite number of components.
(i) If d_GHP(X, X′) < ∞, then there is a bijection σ from comp(X) to comp(X′) such that:

∀m ∈ comp(X), d_GHP(m, σ(m)) ≤ 2 d_GHP(X, X′),

and thus, L_GHP(X, X′) ≤ 2 d_GHP(X, X′)(1 + 8♯comp(X)).
(ii) If there exists a bijection σ from comp(X) to comp(X′) such that sup_{m ∈ comp(X)} d_GHP(m, σ(m)) < ∞, then

d_GHP(X, X′) ≤ sup_{m ∈ comp(X)} d_GHP(m, σ(m)) ♯comp(X),

and L_GHP(X, X′) ≤ sup_{m ∈ comp(X)} d_GHP(m, σ(m))(1 + 8♯comp(X)).

Proof. Proof of (i).
Suppose that d_GHP(X, X′) < ε < ∞. Let R ∈ C(X, X′) and π ∈ M(X, X′) be such that D(π; μ, μ′) ∨ ½ dis(R) ∨ π(R^c) ≤ ε. Since R has finite distortion,

∀(x, x′), (y, y′) ∈ R, d(x, y) = +∞ ⇔ d′(x′, y′) = +∞,

which shows that each component m of X (resp. X′) is in correspondence through R with exactly one component σ(m) of X′ (resp. X). σ is thus a bijection, and R ∩ (m × σ(m)) ∈ C(m, σ(m)) and has distortion at most 2 d_GHP(X, X′). Furthermore, π|_{m×σ(m)}((R ∩ (m × σ(m)))^c) = π(R^c ∩ (m × σ(m))) ≤ ε. Finally, for any A ∈ B(m),

|π|_{m×σ(m)}(A × σ(m)) − μ|_m(A)| ≤ |π(A × X′) − μ(A)| + π(A × σ(m)^c) ≤ ε + π(R^c) ≤ 2ε,

and similarly, for any A′ ∈ B(σ(m)), |π|_{m×σ(m)}(m × A′) − μ′|_{σ(m)}(A′)| ≤ 2ε. Thus,

∀m ∈ comp(X), d_GHP(m, σ(m)) ≤ 2ε,

and the consequence on L_GHP(X, X′) comes from Lemma 2.19.

Proof of (ii).
Suppose that there exists a bijection σ from comp(X) to comp(X′) such that sup_{m ∈ comp(X)} d_GHP(m, σ(m)) ≤ ε < ∞. Then, for any m, let R_m ∈ C(m, σ(m)) and π_m ∈ M(m, σ(m)) be such that D(π_m; μ|_m, μ′|_{σ(m)}) ∨ ½ dis(R_m) ∨ π_m(R_m^c) ≤ ε. Let π = Σ_{m ∈ comp(X)} π_m and R = ∪_{m ∈ comp(X)} R_m. Then R is a correspondence between X and X′,

½ dis(R) ≤ ½ sup_m dis(R_m) ≤ ε, and π(R^c) = Σ_m π_m(R_m^c) ≤ ♯comp(X) ε.

For any A ∈ B(X),

|π(A × X′) − μ(A)| ≤ Σ_{m ∈ comp(X)} |π((A ∩ m) × X′) − μ(A ∩ m)| = Σ_{m ∈ comp(X)} |π((A ∩ m) × σ(m)) − μ(A ∩ m)| ≤ ♯comp(X) ε,

and symmetrically, for any A′ ∈ B(X′), |π(X × A′) − μ′(A′)| ≤ ♯comp(X) ε. Thus D(π; μ, μ′) ≤ ♯comp(X) ε, and we get d_GHP(X, X′) ≤ ♯comp(X) ε. The statement on L_GHP(X, X′) follows from the hypothesis and Lemma 2.19, applied with any ε′ > ε, letting ε′ go to zero.

2.6.1. δ-gluing

If (
X, d) and (X′, d′) are two semi-metric spaces, the disjoint union semi-metric on X ⊔ X′ is the semi-metric equal to d on X², to d′ on (X′)², and to +∞ on (X × X′) ∪ (X′ × X). Gluing corresponds to identification of points, which can belong to the same semi-metric space or to different semi-metric spaces. A formal definition is as follows (see also [8], pages 62–64).

Definition 2.21.
Let (X, d) be a semi-metric space and R be an equivalence relation on X. The gluing of (X, d) along R is the semi-metric space (X, d_R) with semi-metric defined on X by

d_R(x, y) := inf{ Σ_{i=1}^k d(p_i, q_i) : p_1 = x, q_k = y, k ∈ N* },

where the infimum is taken over all choices of {p_i} and {q_i} such that (q_i, p_{i+1}) ∈ R for all i = 1, ..., k − 1. If (X′, d′) is another semi-metric space and R̃ ⊂ X × X′, let R be the equivalence relation generated by R̃ on the disjoint union X ⊔ X′. The gluing of (X, d) and (X′, d′) along R̃ is the gluing of (X ⊔ X′, d′′) along R, where X ⊔ X′ is the disjoint union of X and X′ and d′′ is the disjoint union semi-metric.

Now, we shall define the δ-gluing of a semi-metric space X along a subset R̃ of X² as the operation of joining every couple (x, x′) ∈ R̃ by a copy of the interval [0, δ].

Definition 2.22.
Let (X, d) be a semi-metric space, R̃ ⊂ X² and δ ≥ 0. Then, the δ-gluing of (X, d) along R̃, denoted (X_{R̃,δ}, d_{R̃,δ}), is the semi-metric space which is the result of gluing an isometric copy of [0, δ] between each couple (x, x′) belonging to R̃. When X = (X, d, μ) is a measured semi-metric space, we equip the δ-gluing of (X, d) along R̃ with the restriction of the measure μ to X; we still denote this measure by μ and denote the resulting measured semi-metric space by Coal_δ(X, R̃) = (X_{R̃,δ}, d_{R̃,δ}, μ).

Remark 2.23. (i) When δ = 0, the δ-gluing of (X, d) along R̃ can be seen as the gluing of (X, d) along the equivalence relation generated by R̃.
(ii) For any (x, y) ∈ X²,

d_{R̃,δ}(x, y) = inf{ (k − 1)δ + Σ_{i=1}^k d(p_i, q_i) : p_1 = x, q_k = y, k ∈ N* },

where the infimum is taken over all choices of {p_i} and {q_i} such that (q_i, p_{i+1}) ∈ R̃ for all i = 1, ..., k − 1.
(iii) If δ > 0, one may like to consider the space X with the metric d_{R̃,δ}, which corresponds to forgetting the interior of the intervals [0, δ] that have been added in X_{R̃,δ}. If (X, d, μ) ∈ N, then L_GHP((X, d_{R̃,δ}, μ), Coal_δ(X, R̃)) ≤ δ.

When (
X, d, μ) is a measured semi-metric space, there is a natural coalescence process (of mean-field type) which draws pairs of points (x, y) with intensity μ(dx)μ(dy) (and unit intensity in time) and identifies the points x and y, changing the metric accordingly. To describe the process of addition of edges during dynamical percolation on the Erdős-Rényi random graph, one needs to replace the identification of x and y by the fact that the distance between x and y drops to 1/n (if x ≠ y). This leads to the following definition.

Definition 2.24.
Let X = (X, d, μ) be an m.s-m.s with μ sigma-finite and δ ≥ 0. Let P^+ be a Poisson random set on X² × R^+ of intensity measure μ^{⊗2} ⊗ leb_{R^+}. The coalescence process with edge-lengths δ started from X, denoted by (Coal_δ(X, t))_{t≥0}, is the random process of m.s-m.s (Coal_δ(X, P^+_t))_{t≥0}.

Notice that this process inherits the strong Markov property from the strong Markov property of the Poisson process, and the fact that for
A, B ⊂ X², Coal_δ(X, A ∪ B) = Coal_δ(Coal_δ(X, A), B).

When δ >
0, if one wants to keep the space fixed and change only the metric, Remark 2.23 (iii) shows that one can do so at the price of an L_GHP-distance at most δ. In this paper, one typically wants to understand scaling limits of N^+(G_n, P^+_t), with P^+ of intensity γ_n and G_n a discrete graph equipped with the distance d_n, which is the graph distance multiplied by some δ_n > 0 going to zero as n goes to infinity. See for instance Theorem 3.1 below. If one equips G_n and N^+(G_n, P^+_t) with their counting measures multiplied by √γ_n, then N^+_{γ_n}(G_n, t) is at L_GHP-distance at most δ_n from (Coal_{δ_n}((G_n, d_n, μ_n), t))_{t≥0}, so the scaling limits will be the same. We shall want to identify the limit itself as (Coal_0(G_λ, t))_{t≥0}, and part of our work will consist in showing that it is a nicely behaved process. In order to accomplish this task, we need to define some subsets of N.

Definition 2.25.
For p > 0, we define N_p to be the set of elements ν = Σ_{m∈I} δ_m of N such that Σ_{m∈I} μ(m)^p < ∞. For ν in N_p, we let masses(ν) be the sequence in ℓ^p_↘ of the masses μ(m) listed in decreasing order, and define, for ν and ν′ in N_p,

L_{p,GHP}(ν, ν′) = L_GHP(ν, ν′) ∨ ‖masses(ν) − masses(ν′)‖_p.

Again we shall abuse language, saying that (
X, d, μ) is in N_p when ν_X ∈ N_p, and write L_{p,GHP}(X, X′) for L_{p,GHP}(ν_X, ν_{X′}). We let the reader check that (N_p, L_{p,GHP}) is a complete separable metric space.

It is easy to see that if (X, d, μ) belongs to N_1, then almost surely, for every t ≥ 0, Coal_δ(X, t) is in N_1. We even have the Feller property on N_1, which will be proved in section 4.1. A consequence of the Feller property of the multiplicative coalescent in ℓ² is that if X = (X, d, μ) belongs to N_2, then almost surely, for every t ≥ 0, Σ_{m ∈ comp(Coal_δ(X, t))} μ(m)² < ∞. However, one cannot guarantee that the components stay totally bounded, and thus that Coal_δ(X, t), or even Coal_0(X, t), belongs to N. One will thus have to restrict to a subclass of N_2, which will fortunately contain G_λ with probability one.

Definition 2.26. We define S to be the class of m.s-m.s X = (X, d, μ) in N_2 such that

∀t ≥ 0, sup diam(Coal_0(X_{≤η}, t)) → 0 in probability as η → 0.

If X ∈ S, then almost surely, for any t ≥ 0, Coal_0(X, t) ∈ N. Of course, I suspect that S has a more intrinsic definition, and that there is a convenient topology which turns it into a Polish space; however, I could not prove this for the moment. Let us mention that there are elements in N_2 \ S, see Remark 4.8.

Let us finish this section with a description of the coalescence at the level of components. When X and P are as in Definition 2.24, one may associate to them a process of multigraphs with vertex set comp(X), which we denote by MG(X, t). It is defined as follows: there is an edge in MG(X, t) between m and m′ if there is a point (x, y, s) of P with s ≤ t, x ∈ m and y ∈ m′. If x = masses(X), this multigraph is of course closely related to the multigraph MG(x, t) defined in section 2.3.

When A is a measurable subset of X, let A = (A, d|_{A×A}, μ|_A). There is an obvious coupling between (Coal_0(X, t))_{t≥0} and (Coal_0(A, t))_{t≥0}: just take the restriction of the Poisson process P to A² × R^+. We shall call it the obvious coupling. We shall use several times the following easy fact.

Lemma 2.27.
Suppose that A is a union of components of X. Under the obvious coupling, if MG(X, t) is a forest, then for every x, y in A and every s ≤ t, if the distance between x and y in Coal_0(A, s) is finite, then it is equal to the distance between x and y in Coal_0(X, s).

2.7. R-graphs

We refer to [3] for background on the definitions and statements in this section.
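Gluing along finitely many pairs is exactly how an R-graph will be obtained from an R-tree in Lemma 2.30 below (a "tree with shortcuts"). On a finite space, the gluing semi-metric of Definition 2.21 and Remark 2.23 (ii) reduces to a shortest-path computation: identified pairs become links of length δ, and the infimum over chains is attained by Floyd-Warshall. The sketch below is our own illustration (function names are not from the paper):

```python
def glued_metric(n, dist, pairs, delta=0.0):
    """Semi-metric on the finite space {0, ..., n-1} after joining
    each pair in `pairs` by a link of length delta, as in
    Remark 2.23 (ii): d(x, y) = inf over chains of
    (k - 1) * delta + sum of original distances.  Since `dist` is a
    metric, consecutive original moves collapse by the triangle
    inequality, so Floyd-Warshall with the extra links computes
    the infimum."""
    g = [[dist(i, j) for j in range(n)] for i in range(n)]
    for a, b in pairs:
        g[a][b] = min(g[a][b], delta)  # traverse the glued link
        g[b][a] = min(g[b][a], delta)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][k] + g[k][j] < g[i][j]:
                    g[i][j] = g[i][k] + g[k][j]
    return g
```

With delta = 0 this is the quotient semi-metric of the 0-gluing; with delta > 0 it is the metric d_{R̃,δ} of Remark 2.23 (iii), in which the interiors of the added intervals are forgotten.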
Definition 2.28. An R-tree is a geodesic acyclic metric space. An R-graph is a totally bounded geodesic metric space (G, d) such that there exists R > 0 such that for any x ∈ G, (B_R(x), d|_{B_R(x)}) is an R-tree, where B_R(x) is the ball of radius R and center x.

For a semi-metric space (X, d), we shall say that it is an R-graph if the quotient metric space (X/d, d) is an R-graph.

Remark 2.29.
The definition above differs slightly from Definition 2.2 in [3], where an R-graph (X, d) is defined as a compact geodesic metric space such that for any x ∈ X, there exists ε = ε(x) > 0 such that (B_ε(x), d|_{B_ε(x)}) is an R-tree, where B_ε(x) is the ball of radius ε and center x. When (X, d) is compact, the two definitions agree: one direction is obvious, whereas the other follows from the arguments at the beginning of section 6.1 in [3]. One advantage of working with precompact spaces instead of compact ones is that one may avoid completion to recover an R-graph after fragmentation.

The degree deg_G(x) of a point x in a graph (G, d) is the number of connected components of B_{ε(x)}(x) \ {x}. A branchpoint x is a point with deg_G(x) ≥ 3. A leaf x is a point with degree one. We denote by leaves(G) the set of leaves of G. An R-tree or an R-graph is said to be finite if it is compact and has a finite number of leaves.

An R-graph (G, d) is naturally equipped with a length measure, which assigns for instance its length to the image of a geodesic. We shall denote it by ℓ_G; it is a sigma-finite diffuse measure.

The structure of an R-graph is explained thoroughly in [3]. The core of (G, d), denoted by core(G), is the union of all simple arcs with both endpoints in embedded cycles of G. It is also the maximal compact subset of G having only points of degree at least 2 (cf. Corollary 2.5 in [3], where one needs to replace "closed" by "compact" in our precompact setting). The core of a tree is empty; that of a unicyclic graph is a cycle. When G is neither a tree nor unicyclic, there is a finite connected multigraph ker(G) = (k(G), e(G)), called the kernel of G, such that the core of G may be obtained from ker(G) by gluing along each edge an isometric copy of the interval [0, l], for some l > 0. The surplus of G is defined as 0 when G is a tree, 1 when G is unicyclic, and in general as:

surplus(G) = |e(G)| − |k(G)| + 1.

Although our definition of an R-graph differs slightly from that of [3], the proof of Proposition 6.2 in [3] can be adapted straightforwardly. The following lemma characterizes an R-graph as a "tree with shortcuts", to employ the expression of [7]. We leave the proof to the reader.

Lemma 2.30.
A metric space (X, d) is an R-graph if and only if there exists a totally bounded R-tree (T, d) and a finite set A ⊂ T² such that (X, d) is isomorphic to the quotient metric space obtained from Coal_0((T, d), A).

Definition 2.31.
Let P^graph denote the set of metric subspaces of the Urysohn space U that are R-graphs. Let M^graph denote the set of equivalence classes on P^graph under the equivalence relation of being at zero d_GHP-distance. Define N^graph (resp. N^graph_p, resp. S^graph) from M^graph in the same way that N (resp. N_p, resp. S) was defined from M. If X = (X, d, μ) and X′ = (X′, d′, μ′) belong to M^graph, we let surplus(X) denote the surplus of any graph in the equivalence class of X and define

d^surplus_GHP(X, X′) := d_GHP(X, X′) ∨ |surplus(X) − surplus(X′)|.

Finally, define L^surplus_GHP and L^surplus_{p,GHP} in the same way that L_GHP and L_{p,GHP} were defined, but replacing d_GHP by d^surplus_GHP.

Thanks to Lemma 2.30, it is clear that if X ∈ N^graph and P ⊂ X² is finite, then for any δ ≥ 0, Coal_δ(X, P) still belongs to N^graph. Furthermore, it will be shown in Lemma 4.10 that if X ∈ S^graph, then almost surely, for any t ≥ 0, Coal_0(X, t) ∈ N^graph. Additional notations concerning R-graphs will be introduced when needed, in section 5.1.

Definition 2.32.
Suppose that X = (X, d, μ) is an m.s-m.s whose components are length spaces and P^− is a subset of X. Then, the cut of X along P^−, denoted by Frag(X, P^−), is the m.s-m.s (X \ P^−, d^Frag_{P^−}, μ|_{X\P^−}), where

d^Frag_{P^−}(x, y) := inf_γ { ℓ_X(γ) }

and the infimum is over all paths γ from x to y disjoint from P^−.

Remark 2.33. (i)
Frag(X, ∅) is the same as X precisely because the components of (X, d) are length spaces.
(ii) Notice that if (X, d) is complete, Frag((X, d, μ), P^−) is generally not complete anymore, but its completion is at zero L_GHP-distance from it.

Definition 2.34.
Let X = (X, d, μ) be an m.s-m.s whose components are length spaces. Let ℓ be a diffuse σ-finite Borel measure on X. Let P^− be a Poisson random set on X × R^+ of intensity measure ℓ ⊗ leb_{R^+}. The fragmentation process started from X, denoted by (Frag(X, t))_{t≥0}, is the random process of m.s-m.s (Frag(X, P^−_t))_{t≥0}. When X ∈ N^graph, we shall always take ℓ to be ℓ_X, the length measure on X.

Remark 2.35. (i) A similar fragmentation on the CRT is considered in [6].
(ii) Since ℓ is a diffuse measure, almost surely μ(P^−_t) = 0 for any t ≥ 0. Thus we shall abuse notation and consider that Frag(X, P^−_t) is still equipped with μ, instead of μ|_{X\P^−_t}.
(iii) Notice that this process inherits the strong Markov property from the strong Markov property of the Poisson process, and the fact that for A, B ⊂ X, Frag(X, A ∪ B) = Frag(Frag(X, A), B).

Now, one wants to define dynamical percolation on measured length spaces by performing coalescence and fragmentation independently and simultaneously. One needs to be a bit careful here: when (
X, d) is a geodesic space, A ⊂ X² and B ⊂ X, even if B ∩ {x ∈ X : ∃y ∈ X, (x, y) or (y, x) ∈ A} = ∅, one cannot guarantee that Coal_0(Frag(X, B), A) is the same as Frag(Coal_0(X, A), B). Indeed, let X = [0, 1] with the usual metric, let B = {1/(2n), n ≥ 1} and A = {(1/(2n+1), 1/(2n−1)), n ≥ 1}. Then, there are two components in Coal_0(Frag(X, B), A), namely {0} and ]0, 1] \ B, whereas there is only one component in Frag(Coal_0(X, A), B), namely [0, 1] \ B. However, it will be shown in Lemma 4.10 that if X ∈ S^graph, P^+ is as in Definition 2.24 and P^− is as in Definition 2.34, then almost surely,

∀t ≥ 0, Coal_0(Frag(X, P^−_t), P^+_t) = Frag(Coal_0(X, P^+_t), P^−_t). (2.3)

This will rely on the following property. Hereafter, we say that a path γ in Coal(X, A) takes a shortcut (a, b) in A ⊂ X² if (a, b) ∈ A and γ ∩ {a, b} ≠ ∅.

Lemma 2.36.
Let (X, d) be a length space and A ⊂ X² an equivalence relation. Suppose that for any (x, y) ∈ X², every simple rectifiable path in Coal(X, A) from x to y takes only a finite number of shortcuts in A. Then, for any B ⊂ X such that B ∩ {x ∈ X : ∃y ∈ X, (x, y) or (y, x) ∈ A} = ∅,

Coal_0(Frag(X, B), A) = Frag(Coal_0(X, A), B).

Proof:
Let ℓ X ( γ ) denote the length of a path γ in X . Let d fragcoal (resp. d coalfrag , resp. d frag ) denotethe distance of Frag(Coal ( X, A ) , B ) (resp. Coal (Frag( X, B ) , A ), resp. Frag( X, B )) on X \ B . We want toshow that d fragcoal = d coalfrag . First, it is always true that d fragcoal ≤ d coalfrag . Indeed, let { p i } and { q i } , i = 1 , . . . , k be such that ( q i , p i +1 ) ∈ A for all i = 1 , . . . , k − p = x , q k = y . Then, the concatenationof ( k −
1) paths γ i in X , i = 1 , . . . , k − γ i goes from p i to q i and each path avoids B gives a pathin Coal( X, A ) from x to y avoiding B . Thus, for any x and y in X \ B , d fragcoal ( x, y ) ≤ inf k, { p i } , { q i } γ i : p i → q i γ i ∩ B = ∅ k X i =1 ℓ X ( γ i ) ! = inf k, { p i } , { q i } k X i =1 d frag ( p i , q i ) ! = d coalfrag ( x, y ) . Let us show now that d coalfrag ≤ d fragcoal . Let x and y be in X \ B and let γ be a rectifiable simple pathfrom x to y in Coal( X, A ) such that γ ∩ B = ∅ . Then, γ takes only a finite number of shortcuts in A . Thus,there exists { p i } and { q i } , i = 1 , . . . , k and paths γ i , i = 1 , . . . , k − q i , p i +1 ) ∈ A and γ i is a pathfrom p i to q i in X and γ is the concatenation of γ , . . . , γ k . Thus, γ i ∩ B = ∅ for any i and ℓ Coal(
X,A ) ( γ ) = k X i =1 ℓ X ( γ i ) ≥ k X i =1 d frag ( p i , q i ) ≥ d coalfrag ( x, y )Taking the infimum over rectifiable simple paths γ from x to y in Coal( X, A ) such that γ ∩ B = ∅ gives that d coalfrag ≤ d fragcoal . (cid:3) Definition 2.37.
Let X = ( X, d, µ ) be an m.s-m.s whose components are length spaces. Let ℓ be a diffuse σ -finite Borel measure on X . Let P − be a Poisson random set on X × R + of intensity measure ℓ ⊗ leb R + and P + be a Poisson random set on X × R + of intensity measure µ × leb R + . The dynamical percolation processstarted from X , denoted by (CoalFrag( X , t )) t ≥ , is the stochastic process (Coal (Frag( X , P − t ) , P + t )) t ≥ . Property (2.3) (when it holds !) shows that (CoalFrag( X , t )) t ≥ inherits the strong Markov Property fromthat of the Poisson process. 18 .9. The scaling limit of critical Erd¨os-R´enyi random graphs The scaling limit of critical Erd¨os-R´enyi random graphs was obtained in [2], Theorem 24, for the Gromov-Hausdorff topology, and the result is extended to Gromov-Hausdorff-Prokhorov topology in [3], Theorem 4.1.Let G n, λ denote element of N graph obtained by replacing each edge of G ( n, p ) by an isometric copy of asegment of length n − / (notably, the distance is the graph distance divided by n / ) and choosing as measurethe counting measure on vertices divided by n / . Theorem 4.1 in [3] and Corollary 2 in [5] easily imply thefollowing. Theorem 2.38 ([2],[3]) . Let λ ∈ R and p ( λ, n ) = n + λn / . There is a random element G λ of N graph suchthat G n,λ ( d ) −−−−→ n →∞ G λ , where the convergence in distribution is with respect to the L surplus ,GHP -topology. We refer to [2] for the precise definition of the limit G λ and to [4] for various properties of G λ .
3. Main results
The main results concerning Erd¨os-R´enyi random graphs are the following. Notice that the fact that(Coal ( G λ , t )) t ≥ and (CoalFrag( G λ , t )) t ≥ are well-defined processes in S graph will be part of the proofs,see section 4.3 and Remark 4.11. Recall that every convergence of a process stated in this article is withrespect to the topology of compact convergence (associated to L ,GHP , L surplus ,GHP , L ,GHP or L surplus ,GHP ), seesection 2.1. Theorem 3.1.
Let G n,λ, + ( t ) , t ≥ be the discrete coalescence process of intensity n − / , started at G ( n, p ( λ, n )) , equipped with the graph distance multiplied by n − / and the counting measure on verticesmultiplied by n − / . Then, the sequence of processes G n,λ, + ( · ) converges to Coal ( G λ , · ) for L ,GHP as n goes to infinity. Theorem 3.2.
Let G n,λ, − ( t ) , t ≥ be the discrete fragmentation process of intensity n − / started at G ( n, p ( λ, n )) , equipped with the graph distance multiplied by n − / and the counting measure on verticesmultiplied by n − / . Then, the sequence of processes G n,λ, − ( · ) converges to F rag ( G λ , · ) for L surplus ,GHP as n goes to infinity. Theorem 3.3.
Let G n,λ ( t ) , t ≥ be the dynamical percolation processes of parameter p ( λ, n ) and intensity n − / started with G ( n, p ( λ, n )) , equipped with the graph distance multiplied by n − / and the counting mea-sure on vertices multiplied by n − / . Then, the sequence of processes G n,λ ( · ) converges to CoalFrag( G λ , · ) for L ,GHP as n goes to infinity. In the course of proving those results, I tried to obtain more general results, such that one could apply thesame technology to other sequences of random graphs, for instance those belonging to the basin of attractionof G λ (see [7] for this notion). This is reflected in what I called below Feller or almost Feller properties forcoalescence (Lemma 4.13), fragmentation (Proposition 5.5), and dynamical percolation (Proposition 6.2).Note that those are variations on the Feller property, often weaker than a true Feller property in the sensethat I need to add some condition in order to ensure convergence, but also a bit stronger in the sense that Iadded in the results the convergence of the whole process in the sense of the topology of compact convergence.Let me describe the rest of the article. Section 4 is devoted to the proofs of the (almost) Feller property forcoalescence and of Theorem 3.1. It contains notably the fact that (Coal ( G λ , t )) t ≥ and (CoalFrag( G λ , t )) t ≥ are processes in S graph . Probably the most important work lies inside Lemma 4.5, which is a statement aboutthe structure of the graph W ( x, t ) in Aldous’ multiplicative coalescent. It allows notably to reduce the proofof the Feller property from N to N , where it is much easier to prove. Section 5 is devoted to the proofsof the Feller property for fragmentation and of Theorem 3.2. We shall also show that for G λ , coalescence isthe time-reversal of fragmentation, see Proposition 5.13. Finally, section 6 is devoted to the proofs of the(almost) Feller property for dynamical percolation and of Theorem 3.3.19 . 
Proofs of the main results for coalescence N On N , coalescence behaves very gently since there is a finite number of coalescence events in any finitetime interval. Notably, for X ∈ N , (Coal( X , t )) t ≥ is clearly c`adl`ag. The aim of this section is to proveProposition 4.3, which is essentially a Feller property. Lemma 4.1.
Let ε ∈ ]0; 1[ , X = ( X, d, µ ) and X ′ = ( X ′ , d ′ , µ ′ ) be two m.s-m.s with a finite number of finitecomponents.Suppose that P = { ( x i , y i ) , ≤ i ≤ k } are pairs of points in X and P ′ = { ( x ′ i , y ′ i ) , ≤ i ≤ k } are pairsof points in X ′ . Suppose that there exists π ∈ M ( X, X ′ ) and R ∈ C ( X, X ′ ) such that: D ( π ; µ, µ ′ ) ∨ π ( R c ) ∨
12 dis( R ) ≤ ε and that for any i ≤ k , ( x i , x ′ i ) ∈ R and ( y i , y ′ i ) ∈ R . Then, for any δ, δ ′ > , d GHP (Coal δ ( X , P ) , Coal δ ′ ( X ′ , P ′ )) ≤ (2 ε + | δ − δ ′ | )( k + 1) Proof.
This is essentially Lemma 21 in [2] and Lemma 4.2 in [3], thus we leave the details to the reader.
Lemma 4.2.
Let ε ∈ ]0; 1[ and δ > , X = ( X, d, µ ) and X ′ = ( X ′ , d ′ , µ ′ ) be two m.s-m.s with a finitenumber of finite components. If there exists π ∈ M ( X, X ′ ) and R ∈ C ( X, X ′ ) such that: D ( π ; µ, µ ′ ) ∨ π ( R c ) ∨
12 dis( R ) ≤ ε then, one may couple two Poisson processes: P of intensity µ ⊗ ⊗ leb [0 ,T ] and P ′ of intensity ( µ ′ ) ⊗ ⊗ leb [0 ,T ] , such that with probability larger than − T ε (10 + 8 µ ( X ) + 8 µ ′ ( X ′ )) − p ε + | δ − δ ′ | , for any t ≤ T , d GHP (Coal δ ( X , P t ) , Coal δ ′ ( X ′ , P ′ t )) ≤ ( T µ ( X ) + 1) p ε + | δ − δ ′ | . Proof.
Let P ( µ ) denote the distribution of a Poisson random set of intensity measure µ . Using the couplingcharacterization of total variation distance and the gluing lemma (cf [18] page 23), one may construct threePoisson random sets on the same probability space, P , ˜ P and P ′ such that:(i) P = ( X i , Y i , t i ) i =1 ,...,N has distribution P ( µ ⊗ × leb [0 ,T ] ),(ii) P ′ = ( X ′ i , Y ′ i , t ′ i ) i =1 ,...,N ′ has distribution P (( µ ′ ) ⊗ × leb [0 ,T ] ),(iii) P = ( ˜ X i , ˜ X ′ i , ˜ Y i , ˜ Y ′ i , ˜ t i ) i =1 ,..., ˜ N has distribution P ( π ⊗ × leb [0 ,T ] )and furthermore: P [( X i , Y i , t i ) i =1 ,...,N = ( ˜ X i , ˜ Y i , ˜ t i ) i =1 ,..., ˜ N ] ≤ kP ( µ ⊗ × leb [0 ,T ] ) − P ( π ⊗ × leb [0 ,T ] ) k and P [( X ′ i , Y ′ i , t ′ i ) i =1 ,...,N ′ = ( ˜ X ′ i , ˜ Y ′ i , ˜ t i ) i =1 ,..., ˜ N ] ≤ kP (( µ ′ ) ⊗ × leb [0 ,T ] ) − P ( π ⊗ × leb [0 ,T ] ) k . Now, for any
T > kP ( µ ⊗ × leb [0 ,T ] ) − P ( π ⊗ × leb [0 ,T ] ) k≤ k µ ⊗ × leb [0 ,T ] − π ⊗ × leb [0 ,T ] k = 2 T k µ ⊗ − π ⊗ k≤ T ( µ ( X ) + π ( X )) k µ − π k≤ T ε (2 µ ( X ) + ε ) In [3], Lemma 4.2 is stated for trees and for δ = 0.
20y hypothesis. Similarly, kP (( µ ′ ) ⊗ × leb [0 ,T ] ) − P ( π ⊗ × leb [0 ,T ] ) k ≤ T ε (2 µ ′ ( X ′ ) + ε ) . Furthermore, ( ˜ X i , ˜ X ′ i , ˜ t i ) i =1 ,..., ˜ N and ( ˜ Y i , ˜ Y ′ i , ˜ t i ) i =1 ,..., ˜ N both have distribution P ( π ⊗ leb [0 ,T ] ). Thus, P ( ∃ i ≤ ˜ N , ( ˜ X i , ˜ X ′ i )
6∈ R ) ≤ T π ( R c ) ≤ T ε and P ( ∃ i ≤ ˜ N , ( ˜ Y i , ˜ Y ′ i )
6∈ R ) ≤ T ε .
Let E be the event that N = N ′ and for any i , ( ˜ X i , ˜ X ′ i ) ∈ R and ( ˜ Y i , ˜ Y ′ i ) ∈ R . Altogether, we get that E hasprobability at least 1 − T ε (10 + 8 µ ( X ) + 8 µ ′ ( X ′ )).Since the distortion of R is at most 2 ε , we get using Lemma 4.1 that on the event E , for any t ≤ Td GHP (Coal δ ( X , P t ) , Coal δ ′ ( X ′ , P ′ t )) ≤ ( N + 1)(2 ε + | δ − δ ′ | )Since N has distribution P ( µ ( X ) T ), P N ≥ T µ ( X ) p ε + | δ − δ ′ | ! ≤ p ε + | δ − δ ′ | this gives the result.In the Proposition below, recall from section 2.1 that convergence of processes uses the topology of compactconvergence (here for the metric space ( N , L ,GHP )). Proposition 4.3.
Let X n = ( X n , d n , µ n ) , n ≥ be a sequence of elements in N and ( δ n ) n ≥ a sequenceof non-negative real numbers. Suppose that:(a) ( X n ) n ≥ converges (for L ,GHP ) to X ∞ = ( X ∞ , d ∞ , µ ∞ ) as n goes to infinity(b) δ n −−−−→ n →∞ δ ∞ Then,(i) (Coal δ n ( X n , t )) t ≥ converges in distribution to (Coal δ ( X ∞ , t )) t ≥ ,(ii) if t n −−−−→ n →∞ t , Coal δ n ( X n , t n ) converges in distribution to Coal δ ( X ∞ , t ) .Proof. Let us fix ε ∈ ]0 , ε ∈ ]0 , ε/
2[ be such that ε masses( X ∞ ) and µ ( X ∞≤ ε ) ≤ ε . Proposition 2.18 shows that ρ LP ( X n>ε , X ∞ >ε ) goes to zero as n goes to infinity. Let n be large enough sothat ρ LP ( X n>ε , X ∞ >ε ) ≤ (cid:18) ε ∧ ε T µ ( X ∞ ) (cid:19) | δ n − δ ∞ | ≤ (cid:18) ε ∧ ε T µ ( X ∞ ) (cid:19) and k masses( X n ) − masses( X ∞ ) k ≤ ε . Let k := X ∞ >ε ) and notice that k ≤ µ ( X ∞ ) ε . Lemma 2.20 shows that d GHP ( X n>ε , X ∞ >ε ) ≤ (cid:18) ε ∧ ε T µ ( X ∞ ) (cid:19) µ ( X ∞ ) ε . µ ( X n ≤ ε ) ≤ µ ( X ∞≤ ε ) + k masses( X n ) − masses( X ∞ ) k ≤ ε . Thus, using Lemma 4.2, one may couple the coalescence on X n and X ∞ in such a way that with probabilitylarger than 1 − εT no point of the Poisson processes touches X ∞≤ ε or X n ≤ ε and with probability larger than1 − T ε (10 + 8 µ ( X n ) + 8 µ ∞ ( X ∞ )) − p ( ε ∧ ε ) ), for any t ≤ T , d GHP (Coal δ n ( X n>ε , t ) , Coal δ ( X ∞ >ε , t )) ≤ p ( ε ∧ ε ) . Using Lemma 2.20 and Lemma 2.19, this implies that L GHP (Coal δ n ( X n , t ) , Coal δ ( X ∞ , t )) ≤ C √ ε for some finite constant C depending only on µ ( X ∞ ). Furthermore, since the multigraphs MG ( X ∞ , t ) and MG ( X n , t ) are the same for any t ≤ T in this coupling, k masses(Coal δ n ( X n , t )) − masses(Coal δ ( X ∞ , t )) k ≤ k masses(Coal δ n ( X n>ε , t )) − masses(Coal δ ( X ∞ >ε , t )) k + k masses(Coal δ n ( X n ≤ ε , t )) − masses(Coal δ ( X ∞≤ ε , t )) k ≤ k masses( X n>ε , t ) − masses( X ∞ >ε , t ) k + µ ( X n ≤ ε ) + µ ( X ∞≤ ε ) ≤ ε . This shows ( i ). To obtain ( ii ), notice that for any s and η > P ( ∃ t ∈ [ s, s + η ] : Coal δ ∞ ( X ∞ , t ) = Coal δ ∞ ( X ∞ , s )) ≤ µ ( X ∞ ) η . Thus, ( ii ) is a simple consequence of ( i ).We shall need the following variation of Proposition 4.3 when studying simultaneous coalescence andfragmentation in section 6. Proposition 4.4.
Let X n = ( X n , d n , µ n ) , n ∈ N be a sequence of random variables in N graph and ( δ n ) n ≥ a sequence of non-negative real numbers. Suppose that:(a) ( X n ) n ≥ converges in distribution for L surplus ,GHP to X ∞ as n goes to infinity,(b) δ n −−−−→ n →∞ δ .Then,(i) (Coal δ n ( X n , t )) t ≥ converges in distribution (for the topology of compact convergence associated to L surplus ,GHP ) to (Coal δ ( X ∞ , t )) t ≥ ,(ii) if t n −−−−→ n →∞ t , Coal δ n ( X n , t n ) converges in distribution to Coal δ ( X ∞ , t ) for L surplus ,GHP .Proof: Notice first that when X belongs to N graph , then with probability one, Coal δ ( X n , t ) is in N graph for any t ≥
0. Indeed, since X has finite mass, there is with probability one a finite number of points in thePoisson process P + t on X for any t ≥ X n ) n ≥ convergesto X ∞ for L surplus ,GHP , one may use d surplusGHP instead of d GHP . The fact that the multigraphs MG ( X n , s ) and MG ( X ∞ , s ) are the same for any s ≤ T , and that no point of the Poisson processes touches X n ≤ ε or X ∞≤ ε imply that for each component of Coal δ n ( X n ≥ ε , s ), its surplus is the same as the surplus of the correspondingcomponent in Coal δ ( X ∞≥ ε , s ). (cid:3) .2. Structural result for Aldous’ multiplicative coalescent Recall the definition of the multigraph MG ( x, t ) for x ∈ ℓ in section 2.3. We shall use notations analogousto Definition 2.14. For instance, for x ∈ ℓ , x ≤ ε denotes the element in ℓ defined by: ∀ i ∈ N , x ≤ ε ( i ) = x ( i ) x ( i ) ≤ ε . Also, for i ∈ N , x \ { i } denotes the element in ℓ defined by: ∀ j ∈ N , ( x \ { i } )( j ) = x ( j ) j = i . Notice that at time 0, the components of MG ( x,
0) are the singletons {i} for i ∈ ℕ. Let us fix some ε > 0; components of MG(x, t) are called significant if they are larger than ε. We shall derive three scales (at time 0), namely Large, Medium and Small, such that with high probability (as ε goes to zero), every significant component of MG(x, t) is made of a heart consisting of Large or Medium components of MG(x, 0), to which are attached hanging trees of Small or Medium components of MG(x, 0), such that the components of the trees attached to the heart are Small components (see Figure 1) and the mass contained in the hanging trees is at most medium. Furthermore, these scales depend on x, ε and t only through the functions α ↦ ‖x_{≤α}‖ and K ↦ P(S(x, t) ≥ K).

Lemma 4.5.
Let x ∈ ℓ^2(ℕ), T ≥ 0 and 0 < ε < 1. Suppose that:
(i) K ≥ 1 is such that: P(S(x, T) ≥ K) ≤ ε,
(ii) ε_1 ∈ (0, ε) is such that: S(x_{≤ε_1}, 0) ≤ ε^3/(T + KT),
(iii) ε_2 ∈ (0, ε_1) is such that: S(x_{≤ε_2}, 0) ≤ ε^3 ε_1/(T(K + 2)).
Then with probability larger than 1 − ε, the following holds for any t ≤ T:
(a) every component of MG(x, t) of size larger than ε contains a component of MG(x, 0) of size larger than ε_1,
(b) MG(x_{≤ε_2}, t) is a forest,
(c) for each component m of MG(x_{≤ε_2}, t) and each component m′ of MG(x_{>ε_2}, t), there is at most one edge between m and m′ in MG(x, t),
(d) S(x, t) − S(x_{>ε_2}, t) ≤ ε,
(e) for any component {i} of MG(x, 0) of size larger than ε_1, the difference between the sizes of the component containing i in MG(x, t) and the one containing i in MG(x_{>ε_2}, t) is less than ε.

The picture depicted before Lemma 4.5 is then a simple corollary. We state it uniformly over a convergent sequence in ℓ^2_↘ because this will be convenient to prove the almost Feller property on S, Lemma 4.13.

Corollary 4.6.
Let x_n be a sequence in ℓ^2_↘ converging to x_∞ in ℓ^2. Then, for any ε > 0 and any T > 0, there exist ε_1 and ε_2 such that for any n ∈ ℕ, with probability at least 1 − ε, the following holds for any t ∈ [0, T]:
(a) every significant component of MG(x_n, t) is made of a connected heart consisting of Large or Medium components of MG(x_n, 0), to which are attached hanging trees (each one attached by a single edge to the heart) of Small or Medium components of MG(x_n, 0), such that the components of the trees attached to the heart are Small components and the mass contained in the hanging trees is less than ε,
(b) no Medium or Small component of MG(x_n, 0) belongs to a cycle in MG(x_n, t),
(c) S(x_n, t) − S((x_n)_{>ε_2}, t) ≤ ε.
Fig 1. The structure of a significant component, where a component is significant if it has size larger than ε, Large if it has size larger than ε_1, Medium for a size in (ε_2, ε_1] and Small for a size not larger than ε_2.

The proof of Lemma 4.5 relies essentially on Aldous' analysis of the multiplicative coalescent.
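Although the paper contains no code, the object MG(x, t) manipulated throughout this subsection is easy to simulate: independently over pairs {i, j}, an edge between blocks i and j is present at time t with probability 1 − exp(−x_i x_j t), which is the graph representation of Aldous' multiplicative coalescent. The Python sketch below (function name and the truncation of x to finitely many blocks are our own choices, not the paper's) samples MG(x, t) with a union–find structure and lists the component masses:

```python
import math
import random
from itertools import combinations

def multiplicative_coalescent_graph(x, t, rng):
    """Sample the graph MG(x, t): blocks i and j are joined by an edge
    with probability 1 - exp(-x[i] * x[j] * t), independently over pairs.
    Return the component mass vectors, sorted by decreasing total mass."""
    n = len(x)
    parent = list(range(n))

    def find(i):
        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if rng.random() < -math.expm1(-x[i] * x[j] * t):  # 1 - e^{-x_i x_j t}
            parent[find(i)] = find(j)

    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(x[i])
    return sorted(comps.values(), key=sum, reverse=True)

rng = random.Random(0)
x = [1.0 / (i + 1) for i in range(200)]  # a finite vector of block masses
comps = multiplicative_coalescent_graph(x, 2.0, rng)
# coalescence conserves the total mass
assert abs(sum(sum(c) for c in comps) - sum(x)) < 1e-9
```

On such samples one can observe the picture of Figure 1 empirically: for small thresholds, the largest components at time t are dominated by the mass of their large initial blocks.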
Proof. (of Lemma 4.5) If for some t ≤ T there exists a significant component of MG(x, t) which does not contain any Large component of x, then S(x_{≤ε_1}, t) > ε² and thus S(x_{≤ε_1}, T) > ε². Thus Lemma 2.1 shows that the probability of (a) is larger than 1 − ε/4, as soon as hypothesis (ii) of Lemma 4.5 holds. Let {i} be a component of MG(x,
0) and define the event A_i = {there exist at least two edges of MG(x, t) connecting i to a same component of MG(x \ {i}, t)}. Then,
P(A_i | MG(x \ {i}, t)) ≤ Σ_{m c.c. of MG(x\{i}, t)} (x_i Σ_{j∈m} x_j t)² = t² x_i² S(x \ {i}, t),
thus,
P(A_i ∩ {S(x, t) ≤ K}) ≤ E[P(A_i | MG(x \ {i}, t)) 1_{S(x\{i}, t)≤K}] ≤ K t² x_i². (4.1)
We obtain thus:
P(∪_{i∈ℕ s.t. x_i≤ε_2} A_i) ≤ K t² S(x_{≤ε_2},
0) + P ( S ( x, t ) > K ) , which shows that the probability of (b) is at least 1 − ε/ B = { there exists at least two edges of MG ( x, T ) connecting a component m of MG ( x ≤ ε , T ) to acomponent m ′ of MG ( x >ε , T ) } .24hen, P ( B | MG ( x >ε , T ) , MG ( x ≤ ε , T )) ≤ X m c . c . of MG ( x ≤ ε ,T ) X m ′ c . c . of MG ( x >ε ,T ) ( X i ∈ m x i X j ∈ m ′ x j T ) , = T S ( x ≤ ε , T ) S ( x >ε , T ) . Thus, P ( B ) ≤ ε P ( S ( x, T ) ≥ K ) + P ( S ( x ≤ ε , T ) ≥ ε KT ) , which shows using Lemma 2.1 that the probability of (c) is at least 1 − ε/ Y be the supremum, over Large components { i } of MG ( x, i in MG ( x, t ) and the one containing i in MG ( x >ε , t ). Notice that if Y ≥ α , then S ( x, t ) ≥ S ( x >ε , t ) + 2 ε α , which implies S ( x, T ) ≥ S ( x >ε , T ) + 2 ε α when ( c ) holds, thanks to Lemma 2.5.Thus points (e) and (d) will be proved if we show that S ( x, T ) > S ( x >ε , T ) + 2 ε with probability at most ε/ C = { S ( x, T ) > S ( x >ε , T ) + 2 ε } . Lemma 2.2 shows that: P ( C and S ( x, T ) ≤ K and S ( x ≤ ε , T ) ≤ β } ) ≤ (1 + T ( K + 2 ε )) β ε . Thus, P ( C ) ≤ (1 + T ( K + 2)) β ε + P ( S ( x ≤ ε , T ) > β ) + P ( S ( x, T ) > K ) , which is less than ε/ Proof. (of Corollary 4.6) Since x n converges to x ∞ in ℓ ,sup n ∈ N k x n ≤ ε k −−−→ ε → . Also, the Feller property implies that the distributions of the sizes of MG ( x n , T ) for n ∈ N form a compactfamily of probability measures on ℓ . Thus,sup n ∈ N P ( S ( x n , T ) ≥ K ) −−−−−→ K → + ∞ . (4.2)This shows that for any T and ε , one may find K , ε and ε such that the three hypotheses of Lemma 4.5hold for x n uniformly over n ∈ N .Now suppose that x , t , ε , ε and ε are such that (a), (b), (e) and (c) of Lemma 4.5 hold. Let m be a significant component of MG ( x, t ). It contains a large component { i } of MG ( x,
0) by point (a). Let σ ( m ) denote the component of MG ( x >ε , t ) containing { i } . Point (e) implies that two large components of MG ( x,
0) are connected in MG(x, t) if and only if they are connected in MG(x_{>ε_2}, t), that is, only through Large or Medium components. This shows that σ(m) does not depend on the choice of the Large component {i} included in m, and that there cannot be any Large component in m \ σ(m). Let us define the heart of m as σ(m). It is made of Large or Medium components, and m \ σ(m) is a graph of Medium or Small components. Now if a Medium component in m \ σ(m) were directly connected to some component of σ(m), it would be connected to m in MG(x_{>ε_2}, t), and thus would belong to σ(m). Thus, the exterior boundary of σ(m) in m is made of Small components. Point (b) shows that m \ σ(m) is a forest, and point (c) shows that each tree of this forest is attached by a single edge to σ(m), and also that no Medium or Small component of MG(x, 0) belongs to a cycle in MG(x, t). □

A useful by-product of the proof of Lemma 4.5 and Corollary 4.6 is the following simple lemma.

Lemma 4.7.
For x ∈ ℓ^2, ε > 0 and T > 0, let A(x, ε, T) be the event that for any t ≤ T:
• MG(x_{≤ε}, t) is a forest, and
• there is at most one edge between every connected component of MG(x_{≤ε}, t) and every component of MG(x_{>ε}, t).
Suppose that x_n converges to x_∞ in ℓ^2_↘ as n goes to infinity. Then, for any T > 0,
inf_{n∈ℕ} P(A(x_n, ε, T)) → 1 as ε → 0.

Remark 4.8.
Let us give an example of an m.s-m.s which is in N but not in S. Let I_i, i ≥ 1, be disjoint copies of the interval [0, 1] with its usual metric, and equip I_i with the measure i^{-1}(δ_0 + δ_1). Then, X ∈ N \ S. In fact, thanks to Lemma 4.5, for any ε > 0 and t > 0, every component of Coal(X, t) is unbounded, since it contains a forest made of an infinite number (the sizes are not in ℓ^1) of components of diameter 1.

4.3. The Coalescent on S

The aim of this section is to prove the following theorem, and a Feller-like property (in Lemma 4.13).
Theorem 4.9.
Let X = (X, d, µ) belong to S. Then (Coal(X, t))_{t≥0} defines a strong Markov process with càdlàg trajectories in S.

The proof of this result is divided into three lemmas. The last one, Lemma 4.13, will be useful in its own right to show convergence of discrete coalescence processes.
Lemma 4.10.
Let X be an m.s-m.s.(i) If X ∈ S (resp. to S graph ) then, almost surely, for any t ≥ , Coal ( X , t ) belongs to N (resp. to N graph ).(ii) If X ∈ S graph , almost surely the commutation (2.3) holds.(iii) If the components of X are R -graphs then, almost surely, for any t ≥ , the components of Frag( X , t ) are R -graphs.Proof. (i) Suppose that X belongs to S and let us show that if X ∈ N , then with probability one,Coal ( X , t ) has totally bounded components for any t ≥
0. Let α > B ( ε ) := { supdiam(Coal( X ≤ ε , T )) ≤ α } . Since X satisfies (2.2), P ( B ( ε ) c ) −−−→ ε → A ( ε ) be the event that the conclusion of Corollary 4.6 holds. This corollary shows that P ( A ( ε ) c ) −−−→ ε → A ( ε ) ∩ B ( ε ), we have that for any t ∈ [0 , T ], any component of size larger than ε of Coal( X , t )can be covered with a finite number of balls of radius 2 α . Indeed, if m is such a component, one mayfirst cover the heart with a finite number of balls of radius α since the heart of m is composed of afinite number of totally bounded components of X glued together, and then if we increase the radiusto 2 α , those balls will cover the whole component m because we are on B ( ε ). Making ε go to zero, wesee that with probability one, for any t ∈ [0 , T ] every component of Coal( X , t ) can be covered with afinite number of balls of radius 2 α . Then, letting α go to zero, we see that with probability one, for any t ∈ [0 , T ] every component of Coal( X , t ) is totally bounded, so Coal( X , t ) ∈ N .Notice also that if the components of X are R -graphs and m is a component as above, then on A ( ε ), m is an R -graph. Letting ε go to zero, this shows that if X ∈ S graph , then with probability one, forany t ∈ [0 , T ], Coal( X , t ) ∈ N graph . 26ii) If X ∈ S graph , using the same notations as above, one sees that on A ( ε ), for any x and y in a componentof mass larger than ε , there is only a finite number of simple paths from x to y , and every such simplepath takes a finite number of shortcuts of the Poisson process P + t . Letting ε go to zero, this holdsalmost surely for any component of Coal( X , P + t ). Furthermore, since ℓ X is diffuse and P − and P + areindependent, almost surely one has, for any t , P − t ∩ { x ∈ X : ∃ y ∈ X, ( x, y ) or ( y, x ) ∈ P + t } = ∅ . 
Thus, Lemma 2.36 shows that ( ii ) holds.(iii) Using Lemma 2.30, X is isometric to Coal( X ′ , A ) where X ′ is an m.s-m.s whose components are realtrees, and A ⊂ S m ∈ comp( X ′ ) m is finite on any m . Again, since ℓ X is diffuse, almost surely, for any t , P − t ∩ { x ∈ X : ∃ y ∈ X, ( x, y ) or ( y, x ) ∈ A } = ∅ . Thus, Lemma 2.36 shows thatFrag( X, P − t ) = Frag(Coal( X ′ , A ) , P − t ) = Coal(Frag( X, P − t ) , A ) . The components of Frag( X, P − t ) are totally bounded R -trees, thus the components of Coal(Frag( X, P − t ) , A )are R -graphs. Remark 4.11. If X ∈ N and P is as in Definition 2.32, it may happen that Frag( X , P ) has a componentof mass zero. In this case, Frag( X , P ) does not belong to N , stricly speaking. However, Frag( X , P ) is at zero L GHP -distance from an element of N , which is Frag( X , P ) | ∪ ε> M >ε . In fact, we could have defined N asthe quotient of the set of counting measures on M with respect to the equivalence relation defined by being atzero L GHP -distance. This space is isometric to N modulo the addition of components of null masses. Then Frag( X , P ) would have always belonged to N . But I feel that it would have obscured the definition of N . Inthe sequel, we shall keep in mind that components of null masses are neglected. Thus, Lemma 4.10 shows thatif X belongs to N graph , then almost surely, for any t ≥ , Frag( X , t ) belongs to N graph and if X belongs to S graph then almost surely, for any t ≥ , Coal ( X , t ) and CoalFrag( X , t ) belong to N graph . Lemma 4.12.
Let X_n = (X_n, d_n, µ_n), n ≥ 1, be a sequence of random variables in S and (δ_n)_{n≥1} be a sequence of non-negative real numbers. Suppose that:
(i) (X_n) converges in distribution for L_{2,GHP} to X_∞ = (X_∞, d_∞, µ_∞) as n goes to infinity,
(ii) δ_n → 0 as n → ∞,
(iii) for any α > 0 and any T > 0,
lim sup_{n∈ℕ} P(supdiam(Coal_{δ_n}((X_n)_{≤ε}, T)) > α) → 0 as ε → 0.
Then, with probability 1,
Coal ( X ∞ , t ) belongs to S for any t ≥ .Proof. First, the Feller property of the multiplicative coalescent, Proposition 5 of [5], shows that masses(Coal δ n ( X n , T ))converges in distribution (in ℓ ց ) to masses(Coal ( X ∞ , T )). Together with Prokhorov theorem and Lemma 4.7,this implies that: P [ MG ( X ∞≤ ε , T ) is not a forest ] ε → −−−→ . (4.3)Notice that under the obvious coupling, when MG ( X ∞ , t + s ) is a forest, Lemma 2.27 implies that:supdiam(Coal (Coal ( X ∞ , t ) ≤ η , s )) ≤ supdiam(Coal ( X ∞≤ η , t + s )) . Thus, thanks to Lemmas 4.7 and 4.10 it is enough to show that with probability one, X ∞ satisfies (2.2) forany t ≥
0. 27et P X ∞ be the distribution of X ∞ . Then for P X ∞ -almost every X and every t ∈ [0 , T ] and α > ε → P [supdiam(Coal ( X ≤ ε , t )) > α ]= lim sup ε → P [supdiam(Coal ( X ≤ ε , t )) > α and MG ( X ≤ ε , T ) is a forest] ≤ lim sup ε → P [supdiam(Coal ( X ≤ ε , T )) > α and MG ( X ≤ ε , T ) is a forest]Thus, P X ∞ { X ∈ N : sup t ≤ Tα> lim sup ε → P [supdiam(Coal ( X ≤ ε , t )) > α ] > } = sup α> P X ∞ { X : lim sup ε → P [supdiam(Coal ( X ≤ ε , T )) > α ] > } = sup α> sup η> lim ε → P X ∞ { X : P [ ∃ ε ′ ∈ ]0 , ε ] , supdiam(Coal ( X ≤ ε ′ , T )) > α ] > η }≤ sup α> sup η> lim ε → η P [ ∃ ε ′ ∈ ]0 , ε ] : supdiam(Coal ( X ∞≤ ε ′ , t )) > α ] . Thus, using (4.3), it is sufficient to prove that for any T ≥ α > ε > P sup ε ′ ∈ ]0 ,ε ] supdiam(Coal ( X ∞≤ ε ′ , t )) > α and MG ( x ∞≤ ˜ ε , T ) is a forest −−−→ ε → . (4.4)Let x ∞ := masses( X ∞ ). Notice first that there exists a decreasing sequence of positive numbers ( ε p ) p ≥ going to zero and such that: ∀ p ∈ N , P [ ε p ∈ x ∞ ] = 0 . Fix ˜ ε ≥ ε and choose the sequence so that ε ≤ ˜ ε . Then, using the obvious coupling and Lemma 2.27, P sup ε ′ ∈ ]0 ,ε ] supdiam(Coal ( X ∞≤ ε ′ , t )) > α and MG ( x ∞≤ ˜ ε , T ) is a forest ≤ P [supdiam(Coal ( X ∞≤ ε , t )) > α and MG ( x ∞≤ ˜ ε , T ) is a forest]Then, we have: lim ε → P (supdiam(Coal ( X ∞≤ ε , T )) > α and MG ( x ∞≤ ˜ ε , T ) is a forest)= lim m →∞ P (supdiam(Coal ( X ∞≤ ε m , T )) > α . Furthermore, define X m,p := ( X ≤ ε m ) >ε p for m ≤ p . Then, P (supdiam(Coal ( X ∞≤ ε m , T )) > α ) = lim p →∞ P (supdiam(Coal ( X ∞ m,p , T )) > α )Now, Proposition 4.3 implies that (Coal δ n ( X nm,p , T ) converges in distribution to (Coal ( X ∞ m,p , T ) for any m ≤ p . 
Since we are dealing here with finite collections of m.s-m.s with positive masses, this entails that for28ny m ≤ p , P (supdiam(Coal ( X ∞ m,p , T )) > α ) ≤ lim sup n ∞ P (supdiam(Coal δ n ( X nm,p , T )) > α ) ≤ lim sup n ∞ P (supdiam(Coal δ n ( X nm,p , T )) > α and MG ( X n ≤ ˜ ε , T ) is a forest)+ lim sup n ∞ P ( MG ( X n ≤ ε , T ) is not a forest)) ≤ lim sup n ∞ P (supdiam(Coal δ n ( X n ≤ ε m , T )) > α and MG ( X n ≤ ˜ ε , T ) is a forest)+ lim sup n ∞ P ( MG ( X n ≤ ˜ ε , T ) is not a forest)) ≤ lim sup n ∞ P (supdiam(Coal δ n ( X nε m , T )) > α )+ lim sup n ∞ P ( MG ( X n ≤ ˜ ε , T ) is not a forest))Then, using Lemma 4.7 and the hypothesis on supdiam(Coal δ n ( X nε , T )) one sees that the right-hand sideabove goes to zero when we make m and then ˜ ε go to zero. This ends the proof of (4.4).At this point, we know that if X = ( X, d, µ ) belongs to S , then the process (Coal ( X , t )) t ≥ is strongMarkov on S . Lemma 4.13 (Almost Feller property) . Let X n = ( X n , d n , µ n ) , n ≥ be a sequence of random variablesin N and ( δ n ) n ≥ a sequence of non-negative real numbers. Suppose that:(a) ( X n ) converges in distribution (for L ,GHP ) to X ∞ = ( X ∞ , d ∞ , µ ∞ ) as n goes to infinity(b) δ n −−→ n ∞ (c) For any α > and any T > , lim sup n ∈ N P (supdiam(Coal δ n ( X n ≤ ε , T )) > α ) −−−→ ε → Then,(i) (Coal δ n ( X n , t )) t ≥ converges in distribution to (Coal ( X ∞ , t )) t ≥ ,(ii) if t n −−→ n ∞ t , Coal δ n ( X n , t n ) converges in distribution to Coal ( X ∞ , t ) (for L ,GHP ).Proof. Let t n −−→ n ∞ t and let T = sup n t n . Let us fix ε ∈ ]0 ,
1[ and let ( x n ) n ∈ N ∪{∞} := (masses( X n )) n ∈ N ∪{∞} .We know that x n converges in distribution to x ∞ . Using the Skorokhod representation theorem, Corollary 4.6and (4.2), we obtain that there exists K ( ε ) ∈ ]0 , + ∞ [, ε ∈ ]0 , ε [ and ε ∈ ]0 , ε [ such that for every n ∈ N ,with probability larger than 1 − ε the event A n holds, where A n is the event that points (a), (b) and (c) ofCorollary 4.6 hold for any t ∈ [0 , T ] and S ( x n , T ) ≤ K ( ε ).Let δ ∞ := 0. On this event A n , the Gromov-Hausdorff-Prokhorov distance between a significant compo-nent of Coal δ n ( X n , t ) (at any time t ≤ T ) and its heart is at most α := δ n + supdiam(Coal δ n ( X n ≤ ε , T )) + ε .Let σ be the application from comp(Coal δ n ( X n , t ) >ε + α ) to comp(Coal δ n ( X n>ε , t )) which maps a compo-nent to its heart, and let σ ′ denote the application from comp(Coal δ n ( X n>ε , t )) >ε + α to comp(Coal δ n ( X n , t ))which maps a component to the (unique, on A n ) component of comp(Coal δ n ( X n , t )) which contains it. Thoseapplications satisfy the hypotheses of Lemma 2.19, with ε replaced by ε + α . This shows that on A n , wehave for every time t ≤ T and every ε ′ ≤ ε : L GHP (Coal δ n ( X n , t ) , Coal δ n ( X n>ε ′ , t )) ≤ α (cid:18) S ( x n , t ) ε (cid:19) + 16( ε + α ) , ≤ δ n + supdiam(Coal δ n ( X n ≤ ε , T )) + ε ) (cid:18) K ( ε ) ε (cid:19) + 16 ε . In fact, here we could talk simply of Hausdorff-Prokhorov distance since there is a trivial embedding of one measuredsemi-metric space into the other. P (supdiam(Coal δ ∞ ( X ∞≤ ε , T )) > α ) −−−→ ε → . Thus, using the hypothesis on supdiam, one may choose ε small enough (and thus ε small enough) to getthat for every n large enough (possibly infinite), with probability larger than 1 − ε , we have for every time t ≤ T and every ε ′ ≤ ε : L GHP (Coal δ n ( X n , t ) , Coal δ n ( X n>ε ′ , t )) ≤ ε . 
Furthermore, since (c) of Corollary 4.6 holds on A n , k masses(Coal δ n ( X n , t )) − masses(Coal δ n ( X n>ε , t )) k ≤ S ( x ( n ) , t ) − S ( x ( n ) >ε , t ) ≤ ε , where the first inequality comes from Lemma 2.4. This shows thatlim ε → lim N →∞ sup n ≥ Nn ∈ N P (sup t ≤ T L ,GHP (Coal δ n ( X n , t ) , Coal δ n ( X n>ε , t )) > ε ) = 0 . (4.5)Now, let ( α p ) p ≥ be a decreasing sequence of positive numbers going to zero such that: ∀ p ∈ N , P [ α p ∈ x ∞ ] = 0 . For any p , X n>α p converges to X ∞ >α p in distribution for L ,GHP . Proposition 4.3 implies that (Coal δ n ( X n>α p , t )) t ≤ T converges to (Coal ( X ∞ >α p , t )) t ≤ T for the topology of compact convergence associated to L ,GHP . Togetherwith (4.5), this proves ( i ).Furthermore, Proposition 4.3 implies that Coal δ n ( X n>α p , t n )) converges in distribution to Coal ( X ∞ >α p , t )as n goes to infinity for L ,GHP , and thus for L ,GHP . Together with (4.5) we obtain that Coal δ n ( X n , t n )converges to Coal ( X ∞ , t ) in distribution for L ,GHP . This proves ( ii ). Remark 4.14.
Notice that if δ_n > 0 and X_n ∈ N \ N, then Coal_{δ_n}(X_n, t) is not in N for t > 0, since the components are not totally bounded. Thus, the terms "convergence in distribution for L_{2,GHP}" should be understood in a larger space, where components are allowed not to be totally bounded. However, we do not insist on this, because we shall always use Lemma 4.13 with X_n ∈ N for any n ∈ ℕ, in which case Coal_{δ_n}(X_n, t) is in N for any t.

Now we can end the proof of Theorem 4.9. Let X = (X, d, µ) belong to S. We only need to show that (Coal(X, t))_{t≥0} is almost surely càdlàg. Let X_n := X_{>1/n}. Then Lemma 4.13 shows that (Coal(X_n, t))_{t≥0} converges to (Coal(X, t))_{t≥0} (in the topology of compact convergence associated to L_{2,GHP}). Since (Coal(X_n, t))_{t≥0} is càdlàg, so is (Coal(X, t))_{t≥0}.

4.4. Convergence of the coalescent on Erdős–Rényi random graphs

In this section we prove Theorem 3.1. Recall that G_{n,λ} is the element of N_graph obtained from G(n, p(λ, n)) by assigning to each edge a length n^{-1/3} and to each vertex a mass n^{-2/3}. We know by Theorem 2.38 that G_{n,λ} converges in distribution (for L_{2,GHP}) to G_λ. In view of Lemma 4.13, it is sufficient to prove that for any T and α >
0:
lim sup_{n→∞} P(supdiam(Coal_{n^{-1/3}}((G_{n,λ})_{≤ε}, T)) > α) → 0 as ε → 0.

Let us denote by h_{n,λ} the height process associated to the depth-first exploration process on G_{n,λ}, as defined in [2], sections 1 and 2, and let h̄_{n,λ} be its rescaled version: h̄_{n,λ}(x) := n^{-1/3} h_{n,λ}(x n^{2/3}). An interval I inside an excursion interval of h̄_{n,λ} (away from 0) corresponds to a connected subgraph of G(n, p(λ, n)), and the diameter of this subgraph is bounded from above by 2 sup_{x,y∈I} |h̄_{n,λ}(x) − h̄_{n,λ}(y)|.

Let us denote by A_n(ε) the event that there is a connected component of Coal_{n^{-1/3}}((G_{n,λ})_{≤ε}, T) which is connected by at least two edges to a component of Coal_{n^{-1/3}}((G_{n,λ})_{>ε}, T). Thanks to Lemma 4.7, using the Skorokhod representation theorem in ℓ^2 for the converging sequence masses(G_{n,λ}), we know that
lim sup_{n→∞} P(A_n(ε)) → 0 as ε → 0.

Now let us consider the depth-first exploration process of Coal_{n^{-1/3}}(G_{n,λ}, T). When P^+_T has intensity γ, N^+(G(n, p), P^+_T) is equal in distribution to G(n, p′) with:
p′ = p + (1 − p)(1 − e^{−γT}).
When γ = n^{−4/3}, p′ = p(λ′_n, n) with λ′_n → λ + T as n → ∞, and Coal_{n^{-1/3}}(G_{n,λ}, T) is equal, in distribution, to G_{n,λ′_n}. The difference between λ′_n and λ + T is unessential for us (for instance, using Lemma 4.7, supdiam(Coal_{n^{-1/3}}((G_{n,λ})_{≤ε}, T)) is essentially nondecreasing in T), so let us pursue as if λ′_n = λ + T. On the event A_n(ε)^c, the vertices of a connected component C of Coal_{n^{-1/3}}((G_{n,λ})_{≤ε}, T) are explored either consecutively (if the exploration process of the component of Coal_{n^{-1/3}}(G_{n,λ}, T) containing C starts outside C), or in two time intervals (which may happen if the exploration process of the component of Coal_{n^{-1/3}}(G_{n,λ}, T) containing C starts inside C).
Thus, on A_n(ε)^c,
supdiam(Coal_{n^{-1/3}}((G_{n,λ})_{≤ε}, T)) ≤ 4 sup_{x,y∈ℝ₊, |x−y|≤ε} |h̄_{n,λ+T}(x) − h̄_{n,λ+T}(y)|,
where the supremum is restricted to couples (x, y) such that x and y belong to the same excursion of h̄_{n,λ+T} above zero. Thus, it only remains to prove that for any α > 0,
lim sup_{n→∞} P( sup_{x,y∈ℝ₊, |x−y|≤ε} |h̄_{n,λ+T}(x) − h̄_{n,λ+T}(y)| > α ) → 0 as ε → 0, (4.6)
with the supremum restricted to couples (x, y) such that x and y belong to the same excursion of h̄_{n,λ+T} above zero.

Let B^λ be a Brownian motion with quadratic drift, defined by B^λ_t = B_t + λt − t²/2, with B a standard Brownian motion. Let W^λ be B^λ reflected above its current minimum: W^λ_t := B^λ_t − min_{0≤s≤t} B^λ_s. Then (4.6) is a consequence of the fact that h̄_{n,λ+T} converges in distribution to W^{λ+T} in the uniform topology. It seems however that this convergence is not written in the literature, so in order to use only available sources, one may rest on the work done in [2] as follows. One may separate the analysis of the supremum on the N largest excursions and on the others. Let B_{N,n}(ε) be the event that the maximal height of some i-th largest component in G_{n,λ+T} for i > N exceeds ε. The equation p. 402 below equation (24) in [2] shows that for any ε >
0,
lim_{N→∞} lim sup_{n→∞} P(B_{N,n}(ε)) = 0. (4.7)
Choose N large enough so that lim sup_{n→∞} P(B_{N,n}(ε)) ≤ ε. Then, for a fixed N, one may argue as in the proof of Theorem 24 of [2], p. 398: conditionally on the sizes, the rescaled height processes associated to those components are independent, and each one converges in distribution (for the uniform topology) to a continuous excursion (a tilted Brownian excursion). Together with the convergence of the sizes and the Skorokhod representation theorem, this proves that the N largest excursions of h̄_{n,λ+T} converge, as a vector in C([0, +∞[)^N, to a random vector of continuous functions with bounded support. This implies that (4.6) holds when the supremum is restricted to couples (x, y) such that x and y belong to one of the N largest components of h̄_{n,λ+T}. Finally, let N_n(ε) be the number of components of G_{n,λ+T} which are larger than ε. For each ε >
0, (N_n(ε))_{n≥1} is a tight sequence, as follows from the convergence (in distribution) in ℓ^2_↘ of the sizes of G_{n,λ+T}. Thus, we obtain (4.6), and this ends the proof of Theorem 3.1.
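As a numerical sanity check (our own, not part of the paper), the relation p′ = p + (1 − p)(1 − e^{−γT}) used in the proof above can be evaluated directly: with p = p(λ, n) = 1/n + λ n^{−4/3} and a per-pair edge-creation rate γ = n^{−4/3} (our reading of the garbled exponent, forced by the claim λ′_n → λ + T), the effective parameter λ′_n = (p′ − 1/n) n^{4/3} approaches λ + T as n grows. A minimal Python sketch, with hypothetical function name:

```python
import math

def lambda_prime(n, lam, T):
    """Effective criticality parameter after adding edges for time T.

    Start from p = p(lam, n) = 1/n + lam * n**(-4/3); each absent pair
    becomes an edge at rate gamma = n**(-4/3) (an assumption of this
    sketch), giving p' = p + (1 - p) * (1 - exp(-gamma * T)).
    Return lam' defined by p' = p(lam', n)."""
    p = 1.0 / n + lam * n ** (-4.0 / 3.0)
    gamma = n ** (-4.0 / 3.0)
    p_prime = p + (1.0 - p) * (-math.expm1(-gamma * T))  # expm1 avoids cancellation
    return (p_prime - 1.0 / n) * n ** (4.0 / 3.0)

# lam' tends to lam + T as n grows
for n in (10 ** 6, 10 ** 9):
    print(n, lambda_prime(n, 0.5, 1.0))
```

For λ = 0.5 and T = 1, the output approaches 1.5 = λ + T, the post-coalescence criticality parameter used in the proof.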
5. Proofs of the results for fragmentation
The main goal of this section is to prove the Feller property for fragmentation on N_graph, Proposition 5.5, and to apply it to prove Theorem 3.2. It is very close to the work performed in [3], which proves a continuity result for a fragmentation restricted to the core of a graph (and stopped when a tree is obtained). The main difference is that we want, in addition, to perform fragmentation on the tree part of the graphs. Another technical difference will be detailed at the beginning of section 5.4. Unfortunately, those differences force us to make substantial modifications to the arguments of [3].

5.1. Notations

We need to introduce a few more definitions to deal with fragmentation of R-graphs. For more details, we refer to [3].

Let G be an R-graph. When there is only one geodesic between x and y in G, we denote by [x, y] its image. Recall the notion of the core of G defined in section 2.7. If S is a closed connected subset of G containing core(G), then for any x ∈ G, there is a unique shortest path γ_x going from x to S. We denote by p_S(x) the unique point belonging to γ_x ∩ S. When G is not a tree and S = core(G), we let α_G(x) := p_S(x). For any η >
0, let R η ( G ) := core( G ) ∪ { x ∈ G s.t. ∃ y : x ∈ [ y, α G ( y )] and d ( y, x ) ≥ η } . When (
T, ρ) is a rooted R-tree, we let R_η(T) be defined as above, with α_G(y) replaced by the root ρ and core(G) replaced by {ρ}. Thus the definition of R_η(G) extends the definition of R_η(T) for a rooted R-tree (T, ρ) in [10]. Notably, Lemma 2.6 (i) in [10] shows that for any η > 0, R_η(G) is a finite graph.

A multigraph with edge-lengths is a triple (V, E, (ℓ(e))_{e∈E}) where (V, E) is a finite connected multigraph and for every e ∈ E, ℓ(e) is a strictly positive number. It may be viewed as a finite R-graph with no leaf by performing on V (seen as a metric space, the disjoint union of its elements) the ℓ(e)-gluing along e for each edge e ∈ E.

The ε-enlargement of a correspondence R ∈ C(X, X′) is defined as:
R^ε := {(x, x′) ∈ X × X′ : ∃(y, y′) ∈ R, d(x, y) ∨ d(x′, y′) ≤ ε}.
It is a correspondence containing R, with distortion at most dis(R) + 4ε. If R ∈ C(X, X′), two Borel subsets A ⊂ X and B ⊂ X′ are said to be in correspondence through R if R ∩ (A × B) ∈ C(A, B).

Let ε >
0. If X and X′ are R-graphs with surplus at least 2, an ε-overlay is a correspondence R ∈ C(X, X′) with distortion less than ε and such that there exists a multigraph isomorphism χ between the kernels ker(X) and ker(X′) satisfying:
1. ∀v ∈ k(X), (v, χ(v)) ∈ R,
2. for every e ∈ e(X), e and χ(e) are in correspondence through R and |ℓ_X(e) − ℓ_{X′}(χ(e))| ≤ ε.
If X and X′ have surplus one, an ε-overlay is a correspondence with distortion less than ε such that the unique cycles of X and X′ are in correspondence and the difference of their lengths is at most ε. If X and X′ are trees, an ε-overlay is simply a correspondence with distortion less than ε.
We let N_tree be the set of elements X ∈ N_graph whose components are trees.

5.2. Reduction to finite graphs

The following lemma allows one to reduce the proof of the Feller property to the case of finite graphs. This will be useful to adapt the arguments of [10].
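The surplus that the ε-overlays above must match is just the first Betti number of a component: for a finite (multi)graph it equals #edges − #vertices + 1 on each connected component, so trees have surplus 0 and a component with surplus 1 carries a unique cycle. A small illustrative computation (our own sketch, with a hypothetical function name), using a union–find over an edge list:

```python
def surpluses(n_vertices, edges):
    """Surplus (first Betti number) of each component of a multigraph:
    #edges - #vertices + 1 per component.  Trees have surplus 0; a
    component with surplus 1 has a unique cycle."""
    parent = list(range(n_vertices))

    def find(i):
        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for u, v in edges:
        parent[find(u)] = find(v)
    n_edges, size = {}, {}
    for u, v in edges:                  # count edges per component root
        r = find(u)
        n_edges[r] = n_edges.get(r, 0) + 1
    for i in range(n_vertices):         # count vertices per component root
        r = find(i)
        size[r] = size.get(r, 0) + 1
    return {r: n_edges.get(r, 0) - size[r] + 1 for r in size}

# a triangle with a pendant edge (surplus 1), a doubled edge seen as a
# multigraph (surplus 1), and an isolated vertex (surplus 0)
s = surpluses(7, [(0, 1), (1, 2), (2, 0), (2, 3), (4, 5), (4, 5)])
assert sorted(s.values()) == [0, 1, 1]
```

Because the count is per component, this is exactly the quantity preserved componentwise by the distance d_{surplus,GHP} used in section 4.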
Lemma 5.1.
Let η ∈ (0, 1) and T > 0. Let G belong to N_graph. Let S be a closed connected subset of G such that R_η(G) ⊂ S ⊂ G. Suppose that for each component H of G, S ∩ H is a connected R-graph. Let S := (S, d|_{S×S}, p_S♯µ). Then, with probability at least 1 − Tη^{1/2}, for any t ∈ [0, T], under the obvious coupling,
L_{surplus,GHP}(Frag(G, t), Frag(S, t)) ≤ η^{1/4}(1 + Σ_{H∈comp(G)} µ(H)²).

Proof.
Let P be a poisson random set of intensity measure ℓ G ⊗ leb + R on G × R + and let us use it to performthe fragmentation on S and G . Define: G ηt := { x ∈ G \ S s.t. ∃ y ∈ P t ∩ ( G \ S ) ∩ [ x, α ( x )] } . Notice that a component m of Frag( S , t ) is endowed with the distance d | m × m and the measure ( p S ♯µ ) | m ,while a component m of Frag( G , t ) is endowed with the distance d | m × m and the measure µ | m .If m is a component of Frag( G , t ) such that m ∩ S = ∅ then m ⊂ G ηt . Notably, if t ∈ [0 , T ], H ∈ comp( G ), m is a component of Frag( G , t ) included in H and µ ( m ) > µ ( G ηT ∩ H ), then m must intersect S . Furthermore,if m ∩ S = ∅ then m ∩ S is a component of Frag( S , t ).Let H ∈ comp( G ). For any component m of Frag( H, P t ) such that m ∩ S = ∅ , we claim that d GHP ( m, m ∩ S ) ≤ η ∨ µ ( G ηT ∩ H ) (5.1)Indeed, let R := { ( x, p S ( x )) : x ∈ m } , which has distortion at most 2 η , and define π := ( Id ⊗ p S ) ♯µ | m × m ∩ S .Then, π ( R c ) = 0 and D ( π ; µ | m , ( p S ♯µ ) | m ) = sup A ∈B ( m ∩ S ) µ ( p − S ( A ) \ m )= µ ( p − S ( m ) \ m ) ≤ µ ( G ηt ∩ H ) ≤ µ ( G ηT ∩ H ) . This shows (5.1). Furthermore, k masses(Frag( G , t )) − masses(Frag( S , t )) k ≤ X m ∈ Frag( G ,t ) m ∩ S = ∅ µ ( p − S ( m ) \ m ) + X m ∈ Frag( G ,t ) m ∩ S = ∅ µ ( m ) (5.2) ≤ X H ∈ comp( G ) µ ( G ηt ∩ H ) ≤ X H ∈ comp( G ) µ ( G ηT ∩ H ) . (5.3)Using Fubini’s theorem, E [ µ ( G ηT ∩ H ) ] ≤ µ ( H ) (1 − e − ηT ) ≤ µ ( H ) ηT . Thus, P X H ∈ comp( G ) µ ( G ηT ∩ H ) ≥ η / X H ∈ comp( G ) µ ( H ) ≤ T η / . E := { X H ∈ comp( G ) µ ( G ηT ∩ H ) < η / X H ∈ comp( G ) µ ( H ) } and define α := η / q P H ∈ comp( G ) µ ( H ) . Notice that on E , we have for any H ∈ comp( G ): µ ( G ηT ∩ H ) ≤ s X H ∈ comp( G ) µ ( G ηT ∩ H ) ≤ α . Let σ assign to each component of Frag( S , t ) the component of Frag( G , t ) which contains it, and let σ ′ assignto a component m of comp((Frag( G , t )) >α / + α ) the component m ∩ S of comp(Frag( S , t )). 
From (5.1) wededuce that for any component m of Frag( S , t ), d GHP ( m, σ ( m )) ≤ α and notice that m and σ ( m ) have the same surplus. Also, for any component m ′ of Frag( G , t ), d GHP ( m ′ , σ ′ ( m ′ )) ≤ α , and m ′ and σ ′ ( m ′ ) have the same surplus. According to Lemma 2.19 this shows that on the event E : L surplusGHP (Frag( G , t ) , Frag( S , t ) ≤ α P m ∈ comp(Frag( G ,t )) µ ( m ) α / ! + 16( α / + α ) ≤ α + α /
16 + 8 X H ∈ comp( G ) µ ( H ) . And, thanks to (5.3), k masses(Frag( G , t )) − masses(Frag( S , t ) k ≤ η / X H ∈ comp( G ) µ ( H ) which shows the result.Since R η ( G ) has finite length measure for each η > G is a totally bounded graph, Lemma 5.1easily implies the following continuity result. Proposition 5.2.
Let G belong to N_graph. Then Frag(G, t) converges in probability (for L_{2,GHP}) to G as t goes to zero.

The following lemma is a slight extension of Lemma 6.3 in [10], designed to take measures into account.
Lemma 5.3.
Let T = (T, d, µ) be a measured finite real tree, ρ ∈ T and ε > 0. There exists δ > 0 (depending on T, ρ and ε) such that if T′ = (T′, d′, µ′) is a measured finite real tree and d^{root}_{GHP}((T, ρ), (T′, ρ′)) < δ, then there exist subtrees S ⊂ T and S′ ⊂ T′ such that ρ ∈ S, ρ′ ∈ S′ and:
(i) d_H(S, T) < ε and d_H(S′, T′) < ε,
(ii) there is a bijective measurable map ψ : S → S′ that preserves length measure and has distortion at most ε,
(iii) ψ(ρ) = ρ′,
(iv) the length measure of the set of points a ∈ S such that {b ∈ S′ : ψ(a) ≤ b} ≠ ψ({b ∈ S : a ≤ b}) (that is, the set of points a such that the subtree above ψ(a) is not the image under ψ of the subtree above a) is less than ε,
(v) there is a correspondence R ∈ C(S, S′) and a measure π ∈ M(S, S′) such that:
(a) ∀x ∈ S, (x, ψ(x)) ∈ R,
(b) π(R^c) ≤ ε,
(c) D(π; p_S♯µ, p_{S′}♯µ′) ≤ ε,
(d) dis(R) ≤ ε.
Proof. Suppose that d^{root}_{GHP}((T, ρ), (T′, ρ′)) < δ (δ will be chosen small enough later). Then, there exist a correspondence R ∈ C(T, T′) and a measure π ∈ M(T, T′) such that:
(a) (ρ, ρ′) ∈ R,
(b) π(R^c) ≤ δ,
(c) D(π; µ, µ′) ≤ δ,
(d) dis(R) ≤ δ.
Now, one performs the proof of Lemma 6.2 in [10], whose notations we shall use. First, let f_0(ρ) := ρ′, and then for each x ∈ T, one chooses f_0(x) ∈ T′ such that (x, f_0(x)) ∈ R (notice that this can be done in a measurable way). Then, letting x_1, …, x_n be the leaves of T, one defines x′_i = f_0(x_i) and lets T″ be the subtree of T′ spanned by ρ′, x′_1, …, x′_n. Finally, f(x) is defined to be the closest point from f_0(x) on T″. Notice that x′_i = f(x_i). The proof of Lemma 6.2 in [10] shows that T″ has leaves x′_1, …, x′_n (and root ρ′ = f(ρ)), that d_H(T, T″) < δ, and that the function f from T to T″ has distortion at most 8δ. It is easy to see that
∀x ∈ T, d′(f(x), f_0(x)) ≤ δ.
(5.4)
Then, they take y_1 ∈ [ρ, x_1] and y′_1 ∈ [ρ′, x′_1] such that d(ρ, y_1) = d′(ρ′, y′_1) = d(ρ, x_1) ∧ d′(ρ′, x′_1), and define ψ from S_1 := [ρ, y_1] to S′_1 := [ρ′, y′_1] in the obvious way. The proof then proceeds inductively, defining z_{k+1} (resp. z′_{k+1}) as the closest point to x_{k+1} on S_k (resp. to x′_{k+1} on S′_k), letting y_{k+1} ∈ ]z_{k+1}, x_{k+1}] and y′_{k+1} ∈ ]z′_{k+1}, x′_{k+1}] be such that d(z_{k+1}, y_{k+1}) = d′(z′_{k+1}, y′_{k+1}) = d(z_{k+1}, x_{k+1}) ∧ d′(z′_{k+1}, x′_{k+1}), defining ψ from ]z_{k+1}, y_{k+1}] to ]z′_{k+1}, y′_{k+1}] in the obvious way, and gluing ]z_{k+1}, y_{k+1}] to S_k to get S_{k+1} (resp. ]z′_{k+1}, y′_{k+1}] to S′_k to get S′_{k+1}). Finally, S := S_n and S′ := S′_n. They then prove that:
dis(ψ) < δ, d_H(S, T) < δ, d_H(S′, T′) < δ,
which shows that ψ, S and S′ satisfy (i)–(iii) above if δ is chosen small enough. Meanwhile, they show that for any k, d(x_k, y_k) ∨ d′(x′_k, y′_k) ≤ δ (see inequality (6.28) in [10]).
Now, let us show that:
∀x ∈ S, d′(f(x), ψ(x)) ≤ δ. (5.5)
Let x ∈ ]z_k, y_k]; then d′(ψ(x), y′_k) = d(x, y_k) (recall that ψ(y_k) = y′_k). Then, |d(x, y_k) − d(x, x_k)| ≤ δ and |d′(ψ(x), y′_k) − d′(ψ(x), x′_k)| ≤ δ. Since f has distortion at most 8δ, |d(x, x_k) − d′(f(x), x′_k)| ≤ δ. We get |d′(ψ(x), x′_k) − d′(f(x), x′_k)| ≤ δ. Let z be the closest point to f(x) on [ρ′, x′_k]. Then,
d′(f(x), z) = (1/2)[d′(f(x), ρ′) + d′(f(x), x′_k) − d′(ρ′, x′_k)] ≤ (3/2) dis(f) + (1/2)[d(x, ρ) + d(x, x_k) − d(ρ, x_k)] ≤ 12δ,
since x ∈ [ρ, x_k]. Finally, since ψ(x) ∈ [ρ′, x′_k],
d′(f(x), ψ(x)) = d′(f(x), z) + d′(z, ψ(x))
= d′(f(x), z) + |d′(x′_k, ψ(x)) − d′(x′_k, z)|
≤ 2 d′(f(x), z) + |d′(x′_k, ψ(x)) − d′(x′_k, f(x))|,
which is bounded by a constant multiple of δ. This shows (5.5).
Now, let R̄ be defined by:
R̄ := {(x, x′) ∈ S × S′ : ∃(y, y′) ∈ R, d(x, y) ≤ 100δ and d′(x′, y′) ≤ 100δ},
and define π̄ := (p_S ⊗ p_{S′})♯π. It remains to prove point (v). First, recall that (x, f_1(x)) ∈ R for any x ∈ T. Thus, (v)(a) is satisfied thanks to (5.5) and (5.4). This shows also that R̄ is a correspondence on S × S′. Then,
dis(R̄) ≤ dis(R) + 400δ,
which is less than ε if δ is chosen small enough, and shows (v)(d). Since d_H(S, T) ∨ d_H(S′, T′) < δ, we see that for any x ∈ T and x′ ∈ T′, d(x, p_S(x)) < δ and d′(x′, p_{S′}(x′)) < δ. Thus, if (x, x′) ∈ R, then (p_S(x), p_{S′}(x′)) ∈ R̄, and this gives
π̄(R̄^c) ≤ π(R^c) ≤ δ,
which shows (v)(b) if δ is chosen small enough. Finally, since π̄ = (p_S ⊗ p_{S′})♯π, one sees that D(π̄; p_S♯µ, p_{S′}♯µ′) ≤ D(π; µ, µ′) < δ. Now, let us prove the Feller property.
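The quantitative bookkeeping above rests on two objects: correspondences and their distortion. As a concrete illustration (a sketch of ours, not a construction from [10]; the function name `distortion` is an assumption), the distortion of a correspondence between two finite metric spaces can be computed directly from their distance matrices:

```python
import itertools

def distortion(d, d_prime, R):
    """Distortion of a correspondence R between two finite metric
    spaces given by their distance matrices: the supremum over pairs
    (x, x'), (y, y') in R of |d(x, y) - d'(x', y')|."""
    return max(
        abs(d[x][y] - d_prime[xp][yp])
        for (x, xp), (y, yp) in itertools.product(R, repeat=2)
    )

# A 3-point path 0-1-2 with unit edge lengths, and a perturbed copy.
d = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
d_prime = [[0, 1.1, 2.1], [1.1, 0, 1.0], [2.1, 1.0, 0]]
R = [(0, 0), (1, 1), (2, 2)]
print(distortion(d, d_prime, R))  # close to 0.1
```

A correspondence with small distortion, together with a measure π of small discrepancy D, is exactly what Lemma 5.3 extracts between the two trees.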
Proposition 5.4.
Let (X^(n))_{n≥1} be a sequence in N_tree converging to X (in the L_{2,GHP} metric) and t ≥ 0. Then (Frag(X^(n), s))_{s≥0} converges in distribution to (Frag(X, s))_{s≥0} for L_{2,GHP}.
Proof. First, we argue that one may, without loss of generality, suppose that X^(n) and X contain a single component. Indeed, fix ε >
0. Since masses(X^(n)) converges to masses(X) in ℓ_2, one may choose ε′ > 0 with ε′ ∉ masses(X) such that:
‖masses(X_{≤ε′})‖_2 ∨ sup_{n∈N} ‖masses(X^(n)_{≤ε′})‖_2 ≤ ε.
Then, since X^(n)_{>ε′} converges to X_{>ε′} as n goes to infinity, they have the same number of components for n large enough. Call this number K. One may list them as follows: let T^(n)_i (resp. T_i), i = 1, . . . , K, be the components of X^(n) (resp. of X), labelled so that for any i, T^(n)_i converges to T_i. Then, for any coupling between (Frag(X, s))_{s∈[0,t]} and (Frag(X^(n), s))_{s∈[0,t]}, one has:
‖masses(Frag(X, s)) − masses(Frag(X^(n), s))‖_2 ≤ Σ_{i=1}^K ‖masses(Frag(T_i, s)) − masses(Frag(T^(n)_i, s))‖_2 + ‖masses(X_{≤ε′})‖_2 + ‖masses(X^(n)_{≤ε′})‖_2,
and
L_GHP(Frag(X, s), Frag(X^(n), s)) ≤ Σ_{i=1}^K L_GHP(Frag(T_i, s), Frag(T^(n)_i, s)) + 16ε.
Thus, it is sufficient to prove that for any fixed i, one may find, for each n, a coupling such that
sup_{s∈[0,t]} L_GHP(Frag(T^(n)_i, s), Frag(T_i, s)) → 0 in probability as n → ∞.
In the sequel, we suppose that X^(n) =: T^(n) (resp. X =: T) contains a single component. Now, Lemma 5.1 implies that it is enough to consider T^(n) and T to be finite trees. Let us fix ε > 0 and choose ρ (resp. ρ^(n)) a root in T (resp. T^(n)) such that d_GHP^root((T, ρ), (T^(n), ρ^(n))) goes to zero as n goes to infinity. For n large enough, d_GHP^root((T, ρ), (T^(n), ρ^(n))) is small enough so that one may apply Lemma 5.3. Let us write (T′, ρ′) = (T^(n), ρ^(n)) for such a large n, in order to lighten the notation. Notice that one may suppose that µ′(T′) ≤ µ(T) + ε. Let S, S′, ψ, R and π be as in Lemma 5.3. Define S := (S, d|_{S×S}, p_S♯µ) and S′ := (S′, d|_{S′×S′}, p_{S′}♯µ′). Let t >
0. Lemma 5.1 ensures that, with probability at least 1 − tε, for any s ∈ [0, t],
L_{2,GHP}(Frag(T, s), Frag(S, s)) ≤ ε(1 + µ(T)),
and
L_{2,GHP}(Frag(T′, s), Frag(S′, s)) ≤ ε(1 + µ′(T′)) ≤ ε(1 + µ(T) + ε).
For any z ∈ S (resp. z′ ∈ S′) we let S_z (resp. S′_{z′}) be the subtree above z (resp. above z′): S_z := {x ∈ S : z ∈ [ρ, x]}. Let us define
Bad := {a ∈ S : S_{ψ(a)} ≠ ψ(S_a)},
so that Lemma 5.3 ensures that ℓ_S(Bad) ≤ ε.
Now, let P be a Poisson random set of intensity ℓ_S ⊗ leb_{R_+} on S × R_+. Then, for any s, ψ(P_s) is a Poisson random set of intensity s·ℓ_{S′} on S′ (since ψ is a measure-preserving bijection), and we want to show that, for any s ≤ t, the fragmentation of S along P_s, Frag(S, P_s), and that of S′ along ψ(P_s), Frag(S′, ψ(P_s)), are close in L_GHP-distance with large probability.
Notice first that Frag(S, P_s) and Frag(S′, ψ(P_s)) have the same number of components. If m is a component of Frag(S, P_s), it can be written as S_{z_s} \ ∪_{i=1}^k S_{z_{i,s}} for some points z_s, z_{1,s}, . . . , z_{k,s} in P_s ∪ {ρ} (we identify S_z \ {z} and S_z since they are at zero d_GHP-distance). If P_t ∩ Bad = ∅, then for any s ≤ t, ψ(m) = ψ(S_{z_s}) \ ∪_{i=1}^k S_{ψ(z_{i,s})}, and this is a component of Frag(S′, ψ(P_s)).
Thus, let us place ourselves on the event E_1 := {P_t ∩ Bad = ∅} and define σ (which depends on s) to be the bijection from Frag(S, P_s) to Frag(S′, ψ(P_s)) which maps a component m to ψ(m). Since R contains the couples (x, ψ(x)) for x ∈ S, R|_{m×ψ(m)} is a correspondence between m and ψ(m) with distortion at most ε. Furthermore, π|_{m×ψ(m)} is a measure on m × ψ(m) which satisfies π|_{m×ψ(m)}(R|^c_{m×ψ(m)}) ≤ π(R^c) ≤ ε. It remains to bound D(π|_{m×ψ(m)}; (p_S♯µ)|_m, (p_{S′}♯µ′)|_{ψ(m)}) from above. For any Borel subset A of m,
|π|_{m×ψ(m)}(A × ψ(m)) − p_S♯µ(A)| ≤ |π(A × S′) − p_S♯µ(A)| + π(A × S′) − π(A × ψ(m))
≤ |π(A × S′) − p_S♯µ(A)| + π({(x, x′) ∈ S × S′ : x ∈ m, x′ ∉ ψ(m)}).
A symmetric inequality holds for A′ a Borel subset of ψ(m), and we get:
D(π|_{m×ψ(m)}; µ|_m, µ′|_{ψ(m)}) ≤ D(π; p_S♯µ, p_{S′}♯µ′) + π(m × ψ(m)^c) + π(m^c × ψ(m)).
Now, notice that for any x ∈ S and x′ ∈ S′,
x ∈ m and x′ ∉ ψ(m) ⇒ [ψ(x), x′] ∩ ψ(P_t) ≠ ∅,
and
x ∉ m and x′ ∈ ψ(m) ⇒ [ψ(x), x′] ∩ ψ(P_t) ≠ ∅,
where [ψ(x), x′] is the geodesic between ψ(x) and x′. Thus,
π(m × ψ(m)^c) + π(m^c × ψ(m)) ≤ π{(x, x′) ∈ S × S′ : [ψ(x), x′] ∩ ψ(P_t) ≠ ∅}.
Let us denote by E_2 the event
E_2 := {π{(x, x′) ∈ S × S′ : [ψ(x), x′] ∩ ψ(P_t) ≠ ∅} ≤ √ε}.
On E_1 ∩ E_2, we get, for any s ≤ t and any component m of Frag(S, P_s):
D(π|_{m×ψ(m)}; p_S♯µ|_m, p_{S′}♯µ′|_{ψ(m)}) ≤ ε + √ε.
Furthermore,
‖masses(Frag(S, P_s)) − masses(Frag(S′, ψ(P_s)))‖²_2 ≤ Σ_{m ∈ comp(Frag(S, P_s))} (p_S♯µ(m) − p_{S′}♯µ′(ψ(m)))²
≤ sup_{m ∈ comp(Frag(S, P_s))} |p_S♯µ(m) − p_{S′}♯µ′(ψ(m))| × Σ_{m ∈ comp(Frag(S, P_s))} (p_S♯µ(m) + p_{S′}♯µ′(ψ(m)))
≤ sup_{m ∈ comp(Frag(S, P_s))} D(π|_{m×ψ(m)}; p_S♯µ|_m, p_{S′}♯µ′|_{ψ(m)}) (µ(T) + µ′(T′))
≤ (ε + √ε)(2µ(T) + ε).
Thus, on E_1 ∩ E_2, we obtain:
L_{2,GHP}(Frag(S, P_s), Frag(S′, ψ(P_s))) ≤ (ε + √ε) ∨ √((ε + √ε)(2µ(T) + ε)).
It remains to bound from above the probability of (E_1 ∩ E_2)^c. Since Bad has length measure at most ε, P(E_1^c) ≤ tε. Notice that, since R contains (x, ψ(x)) for any x ∈ S and has distortion at most ε,
π{(x, x′) ∈ S × S′ : d′(ψ(x), x′) > ε} ≤ π(R^c) < ε.
Then, using Fubini's theorem,
E[π{(x, x′) ∈ S × S′ : [ψ(x), x′] ∩ ψ(P_t) ≠ ∅}]
≤ ε + E[π{(x, x′) ∈ S × S′ : [ψ(x), x′] ∩ ψ(P_t) ≠ ∅ and d′(ψ(x), x′) ≤ ε}]
= ε + ∫_{S×S′} P([ψ(x), x′] ∩ ψ(P_t) ≠ ∅) 1_{d′(ψ(x),x′) ≤ ε} dπ(x, x′)
≤ ε + tε π(S × S′) ≤ ε + tε(µ(T) + ε).
Thus, by Markov's inequality, P(E_2^c) ≤ √ε + t√ε(µ(T) + ε), which ends the proof.

5.4. The Feller property for graphs

We now want to prove the analog of Proposition 5.4 for graphs. However, this cannot be true without strengthening the metric L_{2,GHP}. For instance, consider the situation depicted in Figure 2. There, G_n converges to G for d_GHP, but the probability that a is separated from b in G_n when fragmentation occurs (until a fixed time t >
0) is asymptotically 0, whereas the probability that this event occurs in G is strictly positive. However, if we impose that the surplus of G_n converges to the surplus of G, such a situation cannot happen anymore, and one may recover the Feller property.
Fig 2. G_n is composed of n graphs in series, each one made of n intervals of length 1/n in parallel; the measures µ_n and µ charge only the points a and b, and diam(G_n) = diam(G) = 1. G_n converges to G for d_GHP when n goes to infinity, but Frag(G_n, t) will not converge to Frag(
G, t) for t > 0. Let us notice that this problem was treated a bit differently in [3]: they recover continuity (in probability) of fragmentation by imposing that G_n and G live on some common subspace A_r for some r >
0, where A_r contains the graphs which have surplus and total length of the core bounded from above, and minimal edge length of the core bounded from below (see section 6.4 in [3] for a precise statement). When one wants to have Feller-type properties, this seems to us less natural than imposing convergence of the surplus. In fact, the work below shows that if G_n converges to G in the Gromov-Hausdorff topology while having the same surplus for n large enough, then there is some r > 0 such that, for n large enough, G_n and G belong to A_r. The converse statement is also true and is a consequence of Proposition 6.5 in [3]. Proposition 5.5.
Let (G^(n))_{n≥1} be a sequence in N_graph converging to G in the L^surplus_{2,GHP} metric. Then (Frag(G^(n), t))_{t≥0} converges in distribution to (Frag(G, t))_{t≥0} (for L^surplus_{2,GHP}).
To prove this result, we first notice that the proof of section 5.3 extends to the case where one replaces trees by graphs having the same core.
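Before turning to the lemmas, the fragmentation mechanism itself can be seen in the simplest possible case: the interval [0, L] viewed as an R-tree carrying its length measure. The sketch below is our own illustration (the names `poisson_points` and `fragment_interval` are assumptions, not the paper's notation): it cuts the interval along a Poisson point set of intensity rate × length measure and returns the component masses.

```python
import random

def poisson_points(length, rate, rng):
    """Sample a Poisson point set of intensity `rate` per unit length
    on [0, length], by accumulating Exp(rate) gaps."""
    pts, x = [], 0.0
    while True:
        x += rng.expovariate(rate)
        if x >= length:
            return pts
        pts.append(x)

def fragment_interval(length, rate, rng):
    """Cut [0, length] at every Poisson point and return the lengths
    (masses) of the resulting components, largest first."""
    ends = [0.0] + poisson_points(length, rate, rng) + [length]
    return sorted((b - a for a, b in zip(ends, ends[1:])), reverse=True)

rng = random.Random(0)
masses = fragment_interval(10.0, 0.5, rng)
# Cutting only splits mass, it never destroys it:
print(len(masses), round(sum(masses), 6))
```

Running the fragmentation until time t corresponds to taking rate = t; this is the role played by the Poisson random sets P_s in the proofs of this section.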
Lemma 5.6.
Let G = (G, d, µ) be a measured finite R-graph which is not a tree. Let (G^(n))_{n≥1} be a sequence of measured finite R-graphs such that, for each n, there is a correspondence R^(n) ∈ C(G, G^(n)), a measure π^(n) ∈ M(G, G^(n)) and a homeomorphism ψ^(n) : core(G) → core(G^(n)) such that:
• ψ^(n) preserves the length-measure,
• ∀x ∈ core(G), (x, ψ^(n)(x)) ∈ R^(n),
• dis(R^(n)) ∨ π^(n)((R^(n))^c) ∨ D(π^(n); µ, µ^(n)) → 0 as n → ∞.
Then, the sequence of processes Frag(G^(n), ·) converges in distribution to Frag(G, ·) for L^surplus_{2,GHP}.
Proof. It is a straightforward extension of the arguments of section 5.3, replacing roots by cores and using ψ^(n) to map the fragmentation on core(G) to the fragmentation on core(G^(n)).
To prepare the proof of Proposition 5.5, we shall need the following lemmas. Lemma 5.7.
Let (G, d) and (G′, d′) be R-graphs and R ∈ C(G, G′). Let (a, a′) ∈ R, (b, b′) ∈ R and (c, c′) ∈ R. Suppose that a belongs to a geodesic between b and c. Let γ_{a′,b′} (resp. γ_{a′,c′}) be a geodesic from a′ to b′ (resp. from a′ to c′). Then, ∀a′′ ∈ γ_{a′,b′} ∩ γ_{a′,c′}, d′(a′′, a′) ≤ 3 dis(R). Proof.
Let a′′ ∈ γ_{a′,b′} ∩ γ_{a′,c′}. Then,
2 d′(a′, a′′) = d′(a′, b′) + d′(a′, c′) − d′(a′′, b′) − d′(a′′, c′)
≤ d′(a′, b′) + d′(a′, c′) − d′(b′, c′)
≤ d(a, b) + d(a, c) − d(b, c) + 3 dis(R)
= 3 dis(R),
where we used the triangle inequality in the second step and, in the last step, the fact that a belongs to a geodesic between b and c. In particular, d′(a′, a′′) ≤ 3 dis(R).
The following should be compared to Proposition 5.6 in [3]. Lemma 5.8.
Let G be an R-graph and ε > 0. There exists δ > 0, depending on ε and G, such that if G′ is an R-graph with the same surplus as G and if R ∈ C(G, G′) is such that dis(R) < δ, then there exists an ε-overlay R̄ ∈ C(G, G′) containing R.
Proof. If G has surplus 0, there is nothing to prove. In the sequel, we suppose that G has surplus at least 2; the easier proof for unicyclic G is left to the reader. Furthermore, to lighten notation and make the argument clearer, we shall suppose that the vertices of ker(G) are of degree 3, leaving the adaptation to the general case to the reader.
Let η := min_{e ∈ E(core(G))} ℓ(e). One may view core(G) as a multigraph with edge-lengths. However, not all the edges of this graph correspond to geodesics in G. Divide each edge of core(G) into five pieces of equal length, introducing thus four new vertices of degree 2 for each edge (all degrees will be relative to the core). The new graph obtained satisfies the following:
(i) all the edges remain of length larger than η/5,
(ii) every edge e is the unique geodesic between its two endpoints, and for any path γ between these endpoints which does not contain e, ℓ(γ) − ℓ(e) > η/5,
(iii) for any vertices a, b, c such that b ∼ a and a ∼ c, a belongs to a geodesic between b and c.
Let us call ˜core(G) this new graph (it is indeed a graph, not merely a multigraph), which has the same surplus as G, and write x_1, . . . , x_n for its vertices, which are of degree 2 or 3.
Let G′ be an R-graph with the same surplus as G and R ∈ C(G, G′). Let x′_1, . . . , x′_n be elements of G′ such that (x_i, x′_i) ∈ R. Now, we shall build a subgraph of G′ by recursively mapping the edges adjacent to a given vertex in ˜core(G) to geodesics in G′. Suppose for instance that x_1 has degree 3 (the argument is analogous for vertices of degree 2). Let x_i, x_j and x_k be its neighbours in ˜core(G), with i < j < k.
Choose a geodesic γ_{x′_1,x′_i} between x′_1 and x′_i, then choose a geodesic γ_1 between x′_1 and x′_j, and let z_1 be the point of γ_{x′_1,x′_i} ∩ γ_1 which is the farthest from x′_1 (see Figure 3). Let us call γ_{z_1,x′_j} the subpath of γ_1 from z_1 to x′_j. Notice that the path using γ_{x′_1,x′_i} from x′_1 to z_1 and γ_1 from z_1 to x′_j is a geodesic. Finally, choose a geodesic γ_2 between x′_1 and x′_k and let z_2 be the point of (γ_{x′_1,x′_i} ∪ γ_{z_1,x′_j}) ∩ γ_2 which is the farthest from x′_1. Let us call γ_{z_2,x′_k} the subpath of γ_2 from z_2 to x′_k. Let S′_1 := γ_{x′_1,x′_i} ∪ γ_{z_1,x′_j} ∪ γ_{z_2,x′_k}. Define x′′_1 to be the one among z_1 and z_2 which is the farthest from x′_1. If x_1 is of degree 2, there is only one point z_1 defined, and x′′_1 is this one.
Then, we proceed similarly for r = 2, . . . , n: we inspect the neighbours of x_r. Notice that we do not need to choose a new geodesic between x′_r and a neighbour x′_j for j < r: we just keep the one already built. Doing this, we obtain S′_r, the union of the geodesics chosen going from x′_r to the points associated to the neighbours of x_r; we get two points z_{r,1} and z_{r,2} if x_r is of degree 3 and only one point z_{r,1} if x_r is of degree 2. We define x′′_r to be the one among z_{r,1} and z_{r,2} which is the farthest from x′_r.
Fig 3. One maps core(G) to core(G′) by first mapping the neighborhood of each vertex of core(G) to a subset of G′. Here x′′_1 is a vertex of ker(G′).
Finally, let S′ = ∪_{i=1}^n S′_i, with all the vertices z_{i,b} and x′_i, which is a graph with edge-lengths (notice that the edges have pairwise disjoint interiors). Some edge-lengths might be zero. Thanks to point (ii) above and Lemma 5.7, we know that:
d′(z_{i,1}, x′_i) ≤ 3 dis(R),
and, when x′_i is of degree 3,
d′(z_{i,2}, x′_i) ≤ 3 dis(R).
Thus, for any b, b′ ∈ {1, 2} and any i ≠ j,
d′(z_{i,b}, z_{j,b′}) ≥ d′(x′_i, x′_j) − 6 dis(R) ≥ η/5 − 7 dis(R).
Thus, if dis(R) < η/35, two points z_{i,b} and z_{j,b′} are always distinct. This shows that S′ has the same surplus as core(G). Since G′ has the same surplus as G, we deduce that S′ contains core(G′). Let S′′ be the subgraph of S′ spanned by x′′_1, . . . , x′′_n, in the sense that we forget the vertices z_{i,b} distinct from x′′_i and remove the semi-open path going from x′_i to x′′_i. Notice that S′′ has positive edge-lengths and its edges have pairwise disjoint interiors. S′′ has the same surplus as S′, so it contains again core(G′). But all the vertices in S′′ have degree 2 or 3, so S′′ = core(G′) as a set.
Now, consider S′′ as a graph with edge-lengths and with vertices x′′_i, i = 1, . . . , n. The map χ_1 from ˜core(G) to S′′ which maps x_i to x′′_i is a graph isomorphism, and from the computation above, for any edge e of ˜core(G), |ℓ(e) − ℓ′(χ_1(e))| ≤ 6 dis(R). We shall denote by [x′′_i, x′′_j] the path in S′′ from x′′_i to x′′_j which does not contain any other vertex x′′_k for k
∉ {i, j}. Then, let γ be a path from x′′_i to x′′_j which does not contain [x′′_i, x′′_j]. Using point (ii) above, we see that
ℓ′([x′′_i, x′′_j]) ≤ ℓ′(γ) − η/5 + 8 dis(R).
Thus, if dis(R) < η/80, then [x′′_i, x′′_j] is the unique geodesic between x′′_i and x′′_j for any i and j, and every path γ from x′′_i to x′′_j which does not contain [x′′_i, x′′_j] satisfies:
ℓ′(γ) > ℓ′([x′′_i, x′′_j]) + η/10. (5.6)
Let us define R′ by adding to R the couples (x_i, x′′_i) for i = 1, . . . , n. Then, dis(R′) ≤ 7 dis(R). Let R̄ be the 3 dis(R′)-enlargement of R′. It has distortion at most 19 dis(R′). Let x belong to an edge [x_i, x_j] of ˜core(G) and let x′ be such that (x, x′) ∈ R. Let γ_{x′,x′′_i} (resp. γ_{x′,x′′_j}) be a geodesic between x′ and x′′_i (resp. between x′ and x′′_j). Then, let γ be the path from x′′_i to x′′_j obtained by concatenating γ_{x′,x′′_i} and γ_{x′,x′′_j}. We have
ℓ′(γ) ≤ d′(x′′_i, x′) + d′(x′, x′′_j)
≤ d(x_i, x) + d(x, x_j) + 2 dis(R′)
= d(x_i, x_j) + 2 dis(R′)
≤ ℓ′([x′′_i, x′′_j]) + 3 dis(R′).
Thus, if dis(R′) ≤ 7 dis(R) < η/30, we deduce from (5.6) that γ contains [x′′_i, x′′_j]. Thus, defining x′′ to be the farthest point from x′ on γ_{x′,x′′_i} ∩ γ_{x′,x′′_j}, we see that x′′ belongs to the geodesic [x′′_i, x′′_j]. Lemma 5.7 ensures that d′(x′, x′′) ≤ 3 dis(R′). Thus, (x, x′′) ∈ R̄. Similarly, one shows that for every x′′ in [x′′_i, x′′_j] there is an x in [x_i, x_j] such that (x, x′′) ∈ R̄. We have shown that, for each edge e of ˜core(G), e and χ_1(e) are in correspondence via R̄.
Now, notice that the multigraph with edge-lengths obtained from S′′ by keeping only the vertices of degree 3 is core(G′) seen as a multigraph with edge-lengths. The isomorphism χ_1 induces an isomorphism χ between core(G) and core(G′) (by restricting χ_1 to vertices of degree 3), and we have (since every edge of core(G) was divided into five parts):
|ℓ(e) − ℓ′(χ(e))| ≤ 30 dis(R).
Furthermore, the same correspondence R̄ as before is suitable to have that, for each edge e of core(G), e and χ(e) are in correspondence via R̄. This ends the proof by taking dis(R) ≤ δ for δ small enough, namely less than a suitable constant multiple of ε ∧ η. Lemma 5.9.
Let (G, d) and (G′, d′) be R-graphs and R ∈ C(G, G′). Suppose that core(G) and core(G′) are in correspondence through R. Let (v, v′) and (x, x′) ∈ R, with v ∈ core(G) and v′ ∈ core(G′). Then, d(α_G(x), v) ≤ d′(α_{G′}(x′), v′) + 5 dis(R). Proof.
Since core(G) and core(G′) are in correspondence through R, one may find y ∈ core(G) and y′ ∈ core(G′) such that (y, α_{G′}(x′)) ∈ R and (α_G(x), y′) ∈ R. Let us distinguish two cases.
• d(y, v) ≥ d(α_G(x), v). Then,
d(α_G(x), v) ≤ d(y, v) ≤ d′(α_{G′}(x′), v′) + dis(R),
and the result follows.
• d(y, v) < d(α_G(x), v). Then,
d(x, v) = d(x, α_G(x)) + d(α_G(x), v)
≥ d(x, α_G(x)) + d(y, v)
≥ d′(x′, y′) + d′(α_{G′}(x′), v′) − 2 dis(R)
= d′(x′, α_{G′}(x′)) + d′(α_{G′}(x′), y′) + d′(α_{G′}(x′), v′) − 2 dis(R)
= d′(x′, v′) + d′(α_{G′}(x′), y′) − 2 dis(R)
≥ d(x, v) + d′(α_{G′}(x′), y′) − 3 dis(R).
Thus,
d′(α_{G′}(x′), y′) ≤ 3 dis(R),
which implies:
d(y, α_G(x)) ≤ 4 dis(R).
Finally,
d(α_G(x), v) ≤ d(α_G(x), y) + d(y, v) ≤ 4 dis(R) + d(y, v) ≤ 5 dis(R) + d′(α_{G′}(x′), v′).
Let us introduce some notation for the following lemmas (see Figure 4).
Fig 4. G^(e,a,b) is the (a, b)-shortening of G along e = (u, v).
Definition 5.10.
For any graph G, for each oriented edge e = (u, v) ∈ ker(G) and each η ∈ [0, ℓ(e)], we denote by v − ηe the point at distance η from v on the edge (u, v), on core(G). For a < b ∈ [0, ℓ(e)], let ]v − be, v − ae[ be the open oriented arc between v − be and v − ae in (u, v).
We define G^(e,a,b), the (a, b)-shortening along e, as the measured R-graph (H, d_H, µ_H) obtained from G as follows:
• H = G \ α_G^{−1}(]v − be, v − ae[),
• d_H is obtained from (H, d|_{H×H}) by gluing it along (v − be, v − ae),
• µ_H is the restriction of µ to H.
Notice that G^(e,a,b) has the same surplus as G. Lemma 5.11.
Let G be an R-graph, and define:
γ_G(η) := sup_{e=(u,v) ∈ ker(G)} diam(α_G^{−1}(]v − ηe, v[)).
Then, γ_G(η) → 0 as η → 0. Proof.
Suppose on the contrary that γ_G(η) → γ > 0 as η → 0. Since ker(G) has finitely many edges, one may then find, for some fixed oriented edge e = (u, v) of ker(G), a sequence of couples (x_n, y_n)_{n∈N} with x_n, y_n ∈ α_G^{−1}(]v − η_n e, v[) for some η_n → 0, such that:
d(α_G(x_n), v) ∨ d(α_G(y_n), v) → 0 as n → ∞,
∀n ∈ N, d(x_n, y_n) ≥ γ/2,
and d(α_G(x_n), v) ∧ d(α_G(y_n), v) > 0.
Let z_n ∈ {x_n, y_n} be such that d(z_n, v) = d(x_n, v) ∨ d(y_n, v); notice that d(z_n, v) ≥ d(x_n, y_n)/2 ≥ γ/4. Up to extracting a further subsequence, one may also suppose that d(α_G(z_n), v) is strictly decreasing and that, for any n, d(α_G(z_n), v) < γ/8. This implies that, for n ≠ m,
d(z_n, z_m) ≥ d(z_n, α_G(z_n)) = d(z_n, v) − d(α_G(z_n), v) ≥ γ/4 − γ/8 = γ/8.
This contradicts the precompactness of G. Lemma 5.12.
Let G = (G, d, µ) be a measured R-graph with surplus at least one, let e be an edge of core(G) and a < b ∈ [0, ℓ(e)]. Let:
˜γ_G(ε) := Σ_{e=(u,v) ∈ ker(G)} µ(α_G^{−1}(]v − εe, v[)).
Then, under the natural coupling between Frag(G, ·) and Frag(G^(e,a,b), ·), we have, with probability at least 1 − t(b − a), for any s ∈ [0, t],
L^surplus_{2,GHP}(Frag(G, s), Frag(G^(e,a,b), s)) ≤ (˜γ_G(b) ∨ γ_G(b))(3 + 4µ(G)).
Proof.
Let e = (u, v), and let P be a Poisson random set of intensity ℓ_G ⊗ leb_{R_+} on G × R_+. Then, P′ := P \ (α_G^{−1}(]v − be, v − ae[) × R_+) is a Poisson random set of intensity ℓ_{G′} ⊗ leb_{R_+} on G′ × R_+, with G′ := G \ α_G^{−1}(]v − be, v − ae[).
Let t > 0, let E denote the event E := {P_t ∩ ]v − be, v − ae[ = ∅}, and let us suppose that E holds. Let ε > 0 be such that ε ≥ µ(α_G^{−1}(]v − be, v − ae[)). Let us take s ≤ t and let m be a component of Frag(G, P_s). Then:
• if m ⊂ α_G^{−1}(]v − be, v − ae[), then µ(m) ≤ ε,
• if m ∩ α_G^{−1}(]v − be, v − ae[) = ∅, then m is a component of Frag(G′, P′_s),
• if m ∩ α_G^{−1}(]v − be, v − ae[) ≠ ∅ but m ⊄ α_G^{−1}(]v − be, v − ae[), then m is the unique component of Frag(G, P_s) which intersects ]v − be, v − ae[, and m \ α_G^{−1}(]v − be, v − ae[) is a component of Frag(G′, P′_s).
This shows that the application
σ : comp(Frag(G, P_s)_{>ε}) → comp(Frag(G′, P′_s)), m ↦ m \ α_G^{−1}(]v − be, v − ae[),
is well defined and injective. This shows also that the application σ′ from comp(Frag(G′, P′_s)_{>ε}) to comp(Frag(G, P_s)), which maps m′ to the unique m which contains it, is well defined and injective.
Now, let m ∈ comp(Frag(G, P_s)_{>ε}) and let m′ := σ(m) = m \ α_G^{−1}(]v − be, v − ae[). Let
R_m := {(x, x) : x ∈ m′} ∪ {(x, v) : x ∈ m ∩ α_G^{−1}(]v − be, v − ae[)},
and let π_m be the measure in M(m, m′) defined by:
π_m(C) = µ({x ∈ m′ : (x, x) ∈ C}).
Let d′ be the distance on m′. Notice that for any x, y in m′, |d(x, y) − d′(x, y)| ≤ b − a. Thus,
dis(R_m) ≤ (b − a) + 2 diam(α_G^{−1}(]v − be, v − ae[)) ≤ b + 2γ_G(b).
Also,
π_m(R_m^c) = µ|_{m′}({x ∈ m′ : (x, x) ∈ R_m^c}) = 0.
For A a Borel subset of m, π_m(A × m′) = µ(A ∩ m′), and for A′ a Borel subset of m′, π_m(m × A′) = µ(A′). Thus,
D(π_m; µ|_m, µ|_{m′}) ≤ µ(α_G^{−1}(]v − be, v − ae[)) ≤ ˜γ_G(b).
Using Lemma 2.19 with α = ˜γ_G(b) ∨ γ_G(b) and ε = √α + α, we have shown that, as soon as E holds, for any s ∈ [0, t],
L_GHP(Frag(G, P_s), Frag(G^(e,a,b), P′_s)) ≤ (˜γ_G(b) ∨ γ_G(b)) + (16 + µ(G))√(˜γ_G(b) ∨ γ_G(b)).
Furthermore,
‖masses(Frag(G, P_s)) − masses(Frag(G^(e,a,b), P′_s))‖_2 ≤ ˜γ_G(b).
Also, for any m in comp(Frag(G, P_s)_{>ε}), m and σ(m) have the same surplus (recall the gluing in Definition 5.10). The same is true for m′ and σ′(m′). Thus,
L^surplus_{2,GHP}(Frag(G, P_s), Frag(G^(e,a,b), P′_s)) ≤ [(˜γ_G(b) ∨ γ_G(b)) + (16 + µ(G))√(˜γ_G(b) ∨ γ_G(b))] ∨ ˜γ_G(b).
Finally, notice that E has probability at least exp(−t(b − a)) ≥ 1 − t(b − a).
Now, we shall prove Proposition 5.5. Let us explain the idea of the proof. If G^(n) is close enough to G, Lemma 5.8 shows that their cores are isomorphic multigraphs whose edges have almost the same lengths. One may then shorten some edges of the core of G and other edges of the core of G^(n) in such a way that the two cores become homeomorphic as metric spaces with a length measure. Lemma 5.12 shows that one does not lose too much in doing this. Finally, Lemma 5.6 then shows that the fragmentations on the two graphs are close.
Proof (of Proposition 5.5). The argument at the beginning of the proof of Proposition 5.4 shows that it is sufficient to prove the result when G^(n) and G have a single component. Let G = (G, d, µ) be a measured R-graph and let ε >
0. We want to show that Frag(G′, t) converges in distribution to Frag(G, t) when G′ converges to G while having the same surplus.
Let δ < δ(ε, G) be given by Lemma 5.8 and let G′ be such that d_GHP(G, G′) < δ (we will take δ small enough later). Thus, there is a correspondence R ∈ C(G, G′) and a measure π ∈ M(G, G′) such that:
dis(R) ∨ π(R^c) ∨ D(π; µ, µ′) < δ,
and Lemma 5.8 provides an ε-overlay R̄ ∈ C(G, G′) containing R. Let us denote by χ the multigraph isomorphism from ker(G) to ker(G′) given by this overlay. For any edge e ∈ ker(G), |ℓ(e) − ℓ′(χ(e))| < ε.
We define two graphs G̃ and G̃′ obtained from G and G′ as follows. For each oriented edge e = (u, v) ∈ ker(G), denoting (u′, v′) = χ(e):
• if ℓ(e) is smaller than ℓ′(e′) by an amount η, we replace G′ by its (6ε − η, 6ε)-shortening along e′ (cf. Definition 5.10),
• if ℓ′(e′) is smaller than ℓ(e) by an amount η, we replace G by its (6ε − η, 6ε)-shortening along e.
Let us denote by (G̃, d̃) and (G̃′, d̃′) the resulting R-graphs, let µ̃ := µ|_{G̃}, µ̃′ := µ′|_{G̃′}, and define G̃ := (G̃, d̃, µ̃), G̃′ := (G̃′, d̃′, µ̃′).
Recalling the notation of Lemma 5.11, let κ := γ_G(11ε) + 12ε and define R̄_1 as the κ-enlargement of R̄. We will show that
G̃ and G̃′ are in correspondence through R̄_1. (5.7)
If x ∈ G̃ and (x, x′) ∈ R̄ with x′ ∉ G̃′, then x′ ∈ α_{G′}^{−1}(]v′ − 6εe′, v′ − (6ε − η)e′[) for some edge e′ = (u′, v′) of ker(G′) and some η < ε. Lemma 5.9 shows that
0 < 6ε − η − 5 dis(R̄) ≤ d(α_G(x), v) ≤ 6ε + 5 dis(R̄) ≤ 11ε, (5.8)
and thus d(x, v) ≤ γ_G(11ε) + 11ε. Thus,
d′(x′, v′) ≤ γ_G(11ε) + 12ε ≤ κ. (5.9)
This shows that (x, v′) ∈ R̄_1. Now, let x′ ∈ G̃′ and (x, x′) ∈ R̄ with x ∉ G̃. Then, x ∈ α_G^{−1}(]v − 6εe, v − (6ε − η)e[) for some edge e = (u, v) of ker(G) and some η < ε.
Notice that:
d(x, v) ≤ γ_G(6ε) + 6ε,
and
d′(x′, v′) ≤ d(x, v) + dis(R̄) ≤ γ_G(6ε) + 7ε ≤ κ.
Thus, (v, x′) ∈ R̄_1. This ends the proof of (5.7).
Notice that dis(R̄_1) ≤ ε + 4κ. Let R̄_2 := R̄_1|_{G̃×G̃′} ∈ C(G̃, G̃′). Let K be the number of edges of ker(G). Notice that
∀(x, y) ∈ G̃², |d(x, y) − d̃(x, y)| < Kε, and ∀(x′, y′) ∈ G̃′², |d′(x′, y′) − d̃′(x′, y′)| < Kε.
Thus,
dis(R̄_2) < (2K + 49)ε + 4γ_G(11ε). (5.10)
Clearly, there exists a homeomorphism ψ from core(G̃) to core(G̃′) which preserves the length-measure. For each oriented edge e = (u, v) ∈ ker(G), denoting (u′, v′) = χ(e), ψ satisfies ψ(v) = v′. Furthermore, since e and e′ are in correspondence through the overlay R̄, we have, for each x ∈ [u, v], that there exists x′ ∈ [u′, v′] such that:
|d(x, u) − d′(x′, u′)| < ε.
If furthermore x ∈ core(G̃), we know that d(x, u) = d′(ψ(x), u′), so |d′(ψ(x), u′) − d′(x′, u′)| < ε. Since x′ and ψ(x) belong to [u′, v′],
d′(ψ(x), x′) = |d′(ψ(x), u′) − d′(x′, u′)| < ε,
which shows that, for every x ∈ core(G̃),
(x, ψ(x)) belongs to R̄_2, (5.11)
the restriction to G̃ × G̃′ of the κ-enlargement of R̄.
Now, let π̃ := π|_{G̃×G̃′} ∈ M(G̃, G̃′). First,
π̃(R̄_2^c) ≤ π(R^c) < ε. (5.12)
Then,
D(π̃; µ̃, µ̃′) ≤ D(π; µ, µ′) + µ(G \ G̃) ∨ µ′(G′ \ G̃′).
Now, define
˜γ_G(ε) := Σ_{e=(u,v)∈ker(G)} µ(α_G^{−1}(]v − εe, v[)),
which goes to zero as ε goes to zero. We have
µ(G \ G̃) ≤ ˜γ_G(6ε).
Furthermore, recall inequality (5.8), which shows that if x′ ∈ G′ \ G̃′, then, for every x ∈ G such that (x, x′) ∈ R̄,
x ∈ ∪_{e=(u,v)∈ker(G)} α_G^{−1}(]v − 11εe, v[).
Thus,
µ′(G′ \ G̃′) ≤ π(G × (G′ \ G̃′)) + D(π; µ, µ′)
≤ π((G × (G′ \ G̃′)) ∩ R̄) + π(R̄^c) + ε
≤ π(∪_{e=(u,v)∈ker(G)} α_G^{−1}(]v − 11εe, v[) × G′) + 2ε
≤ µ(∪_{e=(u,v)∈ker(G)} α_G^{−1}(]v − 11εe, v[)) + D(π; µ, µ′) + 2ε
≤ ˜γ_G(11ε) + 3ε. (5.13)
Thus,
D(π̃; µ̃, µ̃′) ≤ 4ε + ˜γ_G(11ε). (5.14)
Gathering (5.11), (5.10), (5.12) and (5.14) shows that one may apply Lemma 5.6, in the sense that there is a function f_G(ε), going to zero as ε goes to zero, such that the Lévy-Prokhorov distance (for the topology of compact convergence associated to L^surplus_{2,GHP}) between the distributions of (Frag(G̃, s))_{s∈[0,t]} and (Frag(G̃′, s))_{s∈[0,t]} is less than f_G(ε).
On the other hand, inequality (5.9) shows that
γ_{G′}(ε) ≤ γ_G(11ε) + 12ε,
and inequality (5.13) shows that
˜γ_{G′}(ε) ≤ ˜γ_G(11ε) + 3ε.
Then, Lemma 5.12 shows that there is a function g_G(ε), going to zero as ε goes to zero, such that the Lévy-Prokhorov distance between the distributions of (Frag(G, s))_{s∈[0,t]} and (Frag(G̃, s))_{s∈[0,t]} is less than g_G(ε), and the Lévy-Prokhorov distance (for the topology of compact convergence associated to L^surplus_{2,GHP}) between the distributions of (Frag(G′, s))_{s∈[0,t]} and (Frag(G̃′, s))_{s∈[0,t]} is less than g_G(ε). This ends the proof of Proposition 5.5.

5.5. Application to Erdős-Rényi random graphs

In this section, we prove Theorem 3.2. Let us first compare the discrete fragmentation process and the continuous one. Let P^− be a Poisson process driving the discrete fragmentation on G(n) := G(n, p(λ, n)). Recall that we write N^−(G(n), P^−_t) for the state of this process at time t, seen as a member of N_graph. Let Q^− be a Poisson process of intensity ℓ_n ⊗ leb_{R_+} on K_n × R_+, where K_n is the complete graph on n vertices, seen as an R-graph whose edge lengths are δ_n = n^{−1/3}, and ℓ_n is its length measure. Then, one may suppose that P^− is obtained as follows:
P^− = {(e, t) : ∃x ∈ e, (x, t) ∈ Q^−}.
Then, for any t, N^−(G(n), P^−_t) is at L_{2,GHP}-distance at most n^{−1/3} from Frag(G(n), Q^−_t) (cf. for instance Proposition 3.4 in [3]). Recall that by Theorem 2.38, G_{n,λ} (which is G(n) with edge lengths δ_n and vertex weights n^{−2/3}) converges in distribution to G_λ for L^surplus_{2,GHP}. Thus, Proposition 5.5 implies that (Frag(G(n), Q^−_t))_{t≥0}, and thus (N^−(G(n), P^−_t))_{t≥0}, converges to (Frag(G_λ, t))_{t≥0} as n goes to infinity (in the topology of compact convergence associated to L^surplus_{2,GHP}). This shows Theorem 3.2.
An interesting consequence of this result is the fact that, on (G_λ)_{λ∈R}, fragmentation is the time-reversal of coalescence. Proposition 5.13.
For any $\lambda \in \mathbb R$ and $s \in \mathbb R_+$, $(\mathcal G_\lambda, \mathrm{Coal}(\mathcal G_\lambda, s))$ and $(\mathrm{Frag}(\mathcal G_{\lambda+s}, s), \mathcal G_{\lambda+s})$ have the same distribution.

Proof. Take $P^+$ of intensity $\gamma = n^{-4/3}$. Notice that the states of the edges are independent and identically distributed in $(G(n,p), N^+(G(n,p), P^+_t))$. Let $(X, Y)$ be the joint distribution of the state of one edge. Denoting by $0$ the state "absent" and by $1$ the state "present", it is easy to compute this distribution:
\[
\begin{aligned}
\mathbb P((X,Y) = (0,0)) &= (1-p)e^{-\gamma t}, \\
\mathbb P((X,Y) = (0,1)) &= (1-p)(1 - e^{-\gamma t}), \\
\mathbb P((X,Y) = (1,0)) &= 0, \\
\mathbb P((X,Y) = (1,1)) &= p.
\end{aligned}
\]
Now, take $P^-$ of intensity $\mu = n^{-1/3}$ and let $(X', Y')$ be the joint distribution of the state of one edge in $(N^-(G(n,p'), P^-_{t'}), G(n,p'))$. Then,
\[
\begin{aligned}
\mathbb P((X',Y') = (0,0)) &= 1 - p', \\
\mathbb P((X',Y') = (0,1)) &= p'(1 - e^{-\mu t'}), \\
\mathbb P((X',Y') = (1,0)) &= 0, \\
\mathbb P((X',Y') = (1,1)) &= p' e^{-\mu t'}.
\end{aligned}
\]
Thus, if one chooses
\[
t = \frac1\gamma \ln \frac{1-p}{1-p'} \quad \text{and} \quad t' = \frac1\mu \ln \frac{p'}{p},
\]
then $(G(n,p), N^+(G(n,p), P^+_t))$ and $(N^-(G(n,p'), P^-_{t'}), G(n,p'))$ have the same distribution. Now, take $p = p(\lambda, n)$ and $p' = p(\lambda+s, n)$. We have:
\[
t = n^{4/3} \ln\Big(1 + \frac{s n^{-4/3}}{1-p'}\Big) \xrightarrow[n \to \infty]{} s.
\]
We consider that $G(n,p)$ is equipped with edge lengths $n^{-1/3}$ and vertex weights $n^{-2/3}$. Thus Theorem 3.1 shows that $(G(n,p), N^+(G(n,p), P^+_t))$ converges in distribution to $(\mathcal G_\lambda, \mathrm{Coal}(\mathcal G_\lambda, s))$. Also,
\[
t' = n^{1/3} \ln \frac{1 + (\lambda+s)n^{-1/3}}{1 + \lambda n^{-1/3}} \xrightarrow[n \to \infty]{} s,
\]
so that, by the convergence of fragmentation proved above, $(N^-(G(n,p'), P^-_{t'}), G(n,p'))$ converges in distribution to $(\mathrm{Frag}(\mathcal G_{\lambda+s}, s), \mathcal G_{\lambda+s})$. Thus $(\mathcal G_\lambda, \mathrm{Coal}(\mathcal G_\lambda, s))$ and $(\mathrm{Frag}(\mathcal G_{\lambda+s}, s), \mathcal G_{\lambda+s})$ have the same distribution.

Notice a curious fact: in [6], Theorem 3, it is shown that the sizes of the components of a fragmentation on the CRT are the time-reversal (after an exponential time-change) of the standard additive coalescent. It would be interesting to make a direct link between additive and multiplicative coalescents in the context of fragmentation on $\mathcal G_\lambda$.
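The matching of the two joint laws above, and the convergence of both rescaled times to $s$, can be checked numerically. The following sketch (added here as a sanity check, not part of the original proof) assumes the conventions used in the proof: $p(\lambda, n) = n^{-1} + \lambda n^{-4/3}$, coalescence clock rate $\gamma = n^{-4/3}$ and fragmentation clock rate $\mu = n^{-1/3}$.

```python
import math

def coal_law(p, gamma, t):
    # (X, Y): X = edge state in G(n, p), Y = state after coalescence-only
    # dynamics of duration t (an absent edge appears at the first ring of
    # a rate-gamma Poisson clock; a present edge stays present).
    q = math.exp(-gamma * t)
    return {(0, 0): (1 - p) * q, (0, 1): (1 - p) * (1 - q),
            (1, 0): 0.0, (1, 1): p}

def frag_law(pp, mu, tp):
    # (X', Y'): Y' = edge state in G(n, p'), X' = state after
    # fragmentation-only dynamics of duration t' (a present edge is
    # erased at the first ring of a rate-mu Poisson clock).
    q = math.exp(-mu * tp)
    return {(0, 0): 1 - pp, (0, 1): pp * (1 - q),
            (1, 0): 0.0, (1, 1): pp * q}

n, lam, s = 10**6, 0.5, 1.0
p = 1 / n + lam * n ** (-4 / 3)           # p(lambda, n)
pp = 1 / n + (lam + s) * n ** (-4 / 3)    # p(lambda + s, n)
gamma, mu = n ** (-4 / 3), n ** (-1 / 3)
t = (1 / gamma) * math.log((1 - p) / (1 - pp))  # matches the (0,0) entries
tp = (1 / mu) * math.log(pp / p)                # matches the (1,1) entries

a, b = coal_law(p, gamma, t), frag_law(pp, mu, tp)
assert all(abs(a[k] - b[k]) < 1e-12 for k in a)  # same joint law
assert abs(t - s) < 1e-3 and abs(tp - s) < 0.02  # both times approach s
```

The two choices of time make the $(0,0)$ and $(1,1)$ entries agree; the remaining entries then agree automatically, since both $(0,1)$ entries equal $p' - p$.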
6. Combining fragmentation and coalescence: dynamical percolation
For an $\mathbb R$-graph $G$ with length measure $\ell_G$, define
\[
\mathrm{suplength}(G) := \sup\{\ell_G(\gamma) : \gamma \text{ injective path in } G\}.
\]
For a member $G$ of $\mathcal N_{\mathrm{graph}}$, we let
\[
\mathrm{suplength}(G) := \sup_{m \in \mathrm{comp}(G)} \mathrm{suplength}(m).
\]
The following lemma is a simple variation on the proof of Proposition 4.4.
Lemma 6.1.
Let $X_n = (X_n, d_n, \mu_n)$, $n \ge 1$, be a sequence of random variables in $\mathcal N_{\mathrm{graph}}$ and let $(\delta_n)_{n \ge 1}$ be a sequence of non-negative real numbers. Suppose that:

(i) $(X_n)$ converges in distribution (for $L_{2,GHP}$) to $X_\infty = (X_\infty, d_\infty, \mu_\infty)$ as $n$ goes to infinity;

(ii) $\delta_n \xrightarrow[n \to \infty]{} \delta$;

(iii) for any $\alpha > 0$ and any $T > 0$,
\[
\limsup_{n \in \mathbb N} \mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta_n}(X^n_{\le \varepsilon}, T)) > \alpha\big) \xrightarrow[\varepsilon \to 0]{} 0.
\]
Then, for any $\alpha > 0$ and any $T > 0$,
\[
\mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta}(X^\infty_{\le \varepsilon}, T)) > \alpha\big) \xrightarrow[\varepsilon \to 0]{} 0.
\]
Proof.
The situation is simpler than in the proof of Proposition 4.4, since suplength is non-decreasing under coalescence. Using the notations of the proof of Proposition 4.4,
\[
\mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta}(X^\infty_{\le \varepsilon_m}, T)) > \alpha\big) = \lim_{p \to \infty} \mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta}(X^\infty_{m,p}, T)) > \alpha\big).
\]
Now, Proposition 4.3 implies that $\mathrm{Coal}^{\delta_n}(X^n_{m,p}, T)$ converges in distribution to $\mathrm{Coal}^{\delta}(X^\infty_{m,p}, T)$ for any $m \le p$. Thus, for any $m \le p$,
\[
\mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta}(X^\infty_{m,p}, T)) > \alpha\big) \le \limsup_{n \to \infty} \mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta_n}(X^n_{m,p}, T)) > \alpha\big) \le \limsup_{n \to \infty} \mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta_n}(X^n_{\le \varepsilon_m}, T)) > \alpha\big),
\]
which goes to zero when $m$ goes to infinity.

Proposition 6.2.
Let $(X^{(n)})_{n \ge 1}$ be a sequence of random variables in $\mathcal N_{\mathrm{graph}}$ converging in distribution to $X^{(\infty)}$ in the $L^{\mathrm{surplus}}_{2,GHP}$ metric. Suppose also that for any $\alpha > 0$ and any $T \ge 0$,
\[
\lim_{\varepsilon \to 0} \limsup_{n \to +\infty} \mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^0(X^{(n)}_{\le \varepsilon}, T)) > \alpha\big) = 0. \tag{6.1}
\]
Then, the sequence of processes
$\mathrm{CoalFrag}(X^{(n)}, \cdot)$ converges in distribution to $\mathrm{CoalFrag}(X^{(\infty)}, \cdot)$ in the topology of compact convergence associated to $L_{2,GHP}$.

Proof. First, we will reduce the problem on $\mathcal N_{\mathrm{graph}}$ using a variation on the proof of Lemma 4.13. Let us first study $\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^{(n)}, P^+_t), P^-_t)$, with $P^+$ and $P^-$ as in Definitions 2.24 and 2.34. Let us fix $\varepsilon > 0$ and $0 \le t \le T$. Any component of size at least $\varepsilon$ in $\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^{(n)}, P^+_t), P^-_t)$ has to belong to a component of size at least $\varepsilon$ in $\mathrm{Coal}^{\delta_n}(X^{(n)}, P^+_t)$. Let $x^n := \mathrm{masses}(X^n)$ for $n \in \mathbb N$.

As in the proof of Lemma 4.13, we obtain that there exist $K(\varepsilon)$, $\varepsilon_1 \in \,]0, \varepsilon[$ and $\varepsilon_2 \in \,]0, \varepsilon_1[$ such that for every $n \in \mathbb N$, with probability larger than $1 - \varepsilon$, the event $A_n$ holds, where $A_n$ is the event that points (a), (b) and (c) of Corollary 4.6 hold for any $t \in [0,T]$ and $S(x^n, T) \le K(\varepsilon)$.

Let us place ourselves on $A_n$. Then, for a significant component at time $t$, notice that fragmentation on the hanging trees of components changes neither the mass nor the distances in the heart of a component. Thus, the same proof as that of Lemma 4.13 shows that on $A_n$, we have for every time $t \le T$ and every $\varepsilon' \le \varepsilon_2$:
\[
L_{GHP}\big(\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n, P^+_t), P^-_t),\ \mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n_{>\varepsilon'}, P^+_t), P^-_t)\big) \le \delta_n + C\big(\mathrm{supdiam}(\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n_{\le \varepsilon}, P^+_T), P^-_t)) + \varepsilon\big)\Big(\frac{K(\varepsilon)}{\varepsilon}\Big) + 16\varepsilon,
\]
for some constant $C$. A slight difference occurs here: $t \mapsto \mathrm{supdiam}(\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n_{\le \varepsilon}, P^+_T), P^-_t))$ is not necessarily nonincreasing. However, the supremum of the lengths of injective paths clearly decreases (non-strictly) under fragmentation. Thus, on $A_n$,
\[
L_{GHP}\big(\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n, P^+_t), P^-_t),\ \mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n_{>\varepsilon'}, P^+_t), P^-_t)\big) \le \delta_n + C\big(\mathrm{suplength}(\mathrm{Coal}^{\delta_n}(X^n_{\le \varepsilon}, P^+_T)) + \varepsilon\big)\Big(\frac{K(\varepsilon)}{\varepsilon}\Big) + 16\varepsilon.
\]
Let $V = \mathrm{comp}(\mathrm{Frag}(X^n, P^-_t))$ and $W = \mathrm{comp}(\mathrm{Frag}(X^n_{>\varepsilon'}, P^-_t)) \subset V$.
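The contrast just noted between supdiam and suplength can be illustrated on the simplest possible example (added here for clarity, not in the original): a single cut on the circle $\mathcal C_L$ of total length $L$.

```latex
% One fragmentation cut on the circle C_L of total length L.
% Before the cut:  diam(C_L) = L/2, while suplength(C_L) = L
%   (an injective path can cover all but an arbitrarily small arc).
% After the cut, C_L becomes a segment of length L:  diam = suplength = L.
\[
\operatorname{diam}(\mathcal C_L) = \tfrac{L}{2}
\;\xrightarrow{\ \text{one cut}\ }\;
\operatorname{diam}\big([0,L]\big) = L,
\qquad
\operatorname{suplength} = L \text{ both before and after.}
\]
```

So a single fragmentation event can double the diameter, whereas the supremal length of injective paths can never increase under fragmentation.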
Let $E'$ denote the set of edges on $V$ such that $i \sim j$ if and only if $i$ and $j$ are at finite distance in $\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n, P^+_t), P^-_t)$. Let $E$ denote the set of edges on $V$ such that $i \sim j$ if and only if $i$ and $j$ are at finite distance in $\mathrm{Coal}^{\delta_n}(X^n, P^+_t)$. Define:
\[
x^n(t) := \mathrm{masses}(\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n, P^+_t), P^-_t))
\]
and
\[
x^n_{>\varepsilon'}(t) := \mathrm{masses}(\mathrm{Frag}(\mathrm{Coal}^{\delta_n}(X^n_{>\varepsilon'}, P^+_t), P^-_t)).
\]
Lemma 2.4 shows that
\[
\|x^{(n)}(t) - x^{(n)}_{>\varepsilon'}(t)\|_2^2 \le \|x^{(n)}(t)\|_2^2 - \|x^{(n)}_{>\varepsilon'}(t)\|_2^2,
\]
and then, using Lemma 2.5,
\[
\|x^{(n)}(t) - x^{(n)}_{>\varepsilon'}(t)\|_2^2 \le S(x^{(n)}, t) - S(x^{(n)}_{>\varepsilon'}, t) \le \varepsilon,
\]
since (c) of Corollary 4.6 holds on $A_n$. Now, let us take $\delta_n = 0$. Define:
\[
X^n(t) := \mathrm{Frag}(\mathrm{Coal}^0(X^n, P^+_t), P^-_t) = \mathrm{CoalFrag}(X^n, t)
\]
and
\[
X^n_{>\varepsilon}(t) := \mathrm{Frag}(\mathrm{Coal}^0(X^n_{>\varepsilon}, P^+_t), P^-_t) = \mathrm{CoalFrag}(X^n_{>\varepsilon}, t).
\]
Using the hypothesis on suplength and Lemma 6.1, we get that for any $\alpha > 0$,
\[
\lim_{\varepsilon \to 0} \sup_{n \in \mathbb N} \mathbb P\Big[\sup_{t \in [0,T]} L_{2,GHP}(X^n(t), X^n_{>\varepsilon}(t)) > \alpha\Big] = 0. \tag{6.2}
\]
Thus, it is sufficient to show the theorem for $X^n$ converging to $X$ in $L_{2,GHP}$, with $X^n$ and $X$ being measured semi-metric spaces with a finite number of finite components which are finite $\mathbb R$-graphs. We shall only sketch the proof, since it is a variation on the arguments of the proofs of Propositions 4.4 and 5.5.
For any $n$ large enough, the proof of Proposition 5.5 shows that one may couple a Poisson process $P^{-,n}$ on $X^n \times \mathbb R_+$ with intensity measure $\ell_{X^n} \otimes \mathrm{leb}_{\mathbb R_+}$ with a Poisson process $P^-$ on $X \times \mathbb R_+$ with intensity $\ell_X \otimes \mathrm{leb}_{\mathbb R_+}$, and one may find $\pi_n \in M(X, X^n)$ and $R_n \in C(X, X^n)$ such that there is an event $E_n$ in the $\sigma$-algebra of $(P^{-,n}_t, P^-_t)$ and a sequence $\varepsilon_n$ such that:

(i) $\mathbb P(E_n^c) \le \varepsilon_n$;

(ii) $\varepsilon_n \xrightarrow[n \to \infty]{} 0$;

(iii) on $E_n$, for any $s \le t$, $R_n \cap (X \setminus P^-_s) \times (X^n \setminus P^{-,n}_s) \in C(X \setminus P^-_s, X^n \setminus P^{-,n}_s)$ and
\[
D\big(\pi_n|_{(X \setminus P^-_s) \times (X^n \setminus P^{-,n}_s)}; \mu|_{X \setminus P^-_s}, \mu^n|_{X^n \setminus P^{-,n}_s}\big) \vee \pi_n((R_n)^c) \vee \mathrm{dis}_s(R_n) \le \varepsilon_n,
\]
where $\mathrm{dis}_s$ is the distortion of $R_n$ as a correspondence between the semi-metric spaces $\mathrm{Frag}(X, P^-_s)$ and $\mathrm{Frag}(X^n, P^{-,n}_s)$.

Then, one may use the proof of Lemma 4.2 to couple a Poisson process $P^{+,n}$ on $(X^n)^2 \times \mathbb R_+$ with intensity measure $(\mu^n)^{\otimes 2} \otimes \mathrm{leb}_{\mathbb R_+}$ with a Poisson process $P^+$ on $X^2 \times \mathbb R_+$ with intensity $\mu^{\otimes 2} \otimes \mathrm{leb}_{\mathbb R_+}$ in such a way that there is an event $E'_n$ and a sequence $\varepsilon'_n$ such that:

(i) $\mathbb P((E'_n)^c) \le \varepsilon'_n$;

(ii) $\varepsilon'_n \xrightarrow[n \to \infty]{} 0$;

(iii) on $E'_n$, for any $s \le t$,
\[
D\big(\pi_n|_{(X \setminus P^-_s) \times (X^n \setminus P^{-,n}_s)}; \mu|_{X \setminus P^-_s}, \mu^n|_{X^n \setminus P^{-,n}_s}\big) \vee \pi_n((R_n)^c) \vee \mathrm{dis}'_s(R_n) \le \varepsilon'_n,
\]
where $\mathrm{dis}'_s$ is the distortion of $R_n$ as a correspondence between the semi-metric spaces $\mathrm{Coal}(\mathrm{Frag}(X, P^-_s), P^+_s)$ and $\mathrm{Coal}(\mathrm{Frag}(X^n, P^{-,n}_s), P^{+,n}_s)$.

Using Lemma 2.20, this ends the proof of the convergence in the sense of $L_{GHP}$. Convergence of the sizes in $\ell^2$ is disposed of by noticing, for instance, that if $X$ and $X^n$ have the same, finite, number of components,
\[
\|\mathrm{masses}(\mathrm{Coal}(\mathrm{Frag}(X, P^-_t), P^+_t)) - \mathrm{masses}(\mathrm{Coal}(\mathrm{Frag}(X^n, P^{-,n}_t), P^{+,n}_t))\|_2 \le \|\mathrm{masses}(\mathrm{Frag}(X, P^-_t)) - \mathrm{masses}(\mathrm{Frag}(X^n, P^{-,n}_t))\|_2,
\]
and one may thus use Proposition 5.5.

Remark 6.3.
In Proposition 6.2, the initial convergence is in $L^{\mathrm{surplus}}_{2,GHP}$ and the conclusion is in $L_{2,GHP}$. This is unavoidable, since convergence in $L^{\mathrm{surplus}}_{2,GHP}$ does not prevent the sequence $X^{(n)}$ from having components with masses going to zero but positive surplus. These components can, at positive time, be glued to large components, augmenting their surplus significantly. One could recover $L^{\mathrm{surplus}}_{2,GHP}$ in the conclusion if one added to (6.1) the following condition:
\[
\lim_{\varepsilon \to 0} \limsup_{n \to +\infty} \mathbb P\Big(\sup_{m \in \mathrm{comp}(X^{(n)}_{\le \varepsilon})} \mathrm{surplus}(m) \ne 0\Big) = 0.
\]
Notice however that this condition is not satisfied by the connected components of critical Erdös-Rényi random graphs.

6.2. Application to Erdös-Rényi random graphs

Now, we want to prove Theorem 3.3. Intuitively, the dynamical percolation process on the complete graph $K_n$ should be very close to the process $\mathrm{CoalFrag}(K_n, \cdot)$, but such a statement needs some care, essentially because $N^+$ and $N^-$ do not commute: some pairs of vertices might be affected by the two Poisson processes $P^+$ and $P^-$ in a time interval $[0,T]$. Furthermore, the typical number of such edges is of order $n^{1/3}$. It turns out that those edges will not be important for the $L_{GHP}$-metric, but this requires adapting the proof of Proposition 6.2.
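The order $n^{1/3}$ mentioned above can be recovered by a back-of-the-envelope computation (added here; it assumes the per-pair intensities $p\,n^{-1/3}$ for $P^+$ and $(1-p)\,n^{-1/3}$ for $P^-$ used in the proof below, with $p = p(\lambda,n) \sim 1/n$).

```latex
% Expected number of pairs of vertices rung by BOTH clocks during [0,T]:
\[
\binom{n}{2}\,
\big(1 - e^{-pTn^{-1/3}}\big)\big(1 - e^{-(1-p)Tn^{-1/3}}\big)
\;\approx\;
\frac{n^2}{2}\cdot\frac{T}{n^{4/3}}\cdot\frac{T}{n^{1/3}}
\;=\;
\frac{T^2}{2}\,n^{1/3}.
\]
```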
Proof (of Theorem 3.3). Let $G^\infty = \mathcal G_\lambda$. Let $p = p(\lambda, n)$, and let $G^{(n)}$ be the graph $G(n,p)$ seen as a measured $\mathbb R$-graph, with edge lengths $\delta_n := (1-p)n^{-1/3} \sim n^{-1/3}$ and measure the counting measure times $\sqrt{p\,n^{-1/3}} \sim n^{-2/3}$. Let $P^+$ (of intensity $p\,n^{-1/3}$) and $P^-$ (of intensity $(1-p)\,n^{-1/3}$) be the two Poisson processes driving the dynamical percolation on $G^{(n)}$. Let us write
\[
G^{(n)}(t) := N(G^{(n)}, (P^+, P^-)_t)
\quad \text{and} \quad
G^{(n)}_{>\varepsilon}(t) := N(G^{(n)}_{>\varepsilon}, (P^+, P^-)_t)
\]
for the state of this process at time $t$, seen as a member of $\mathcal N_{\mathrm{graph}}$. Let us fix $\varepsilon > 0$ and $0 \le t \le T$. Any component of size at least $\varepsilon$ in $N(G^{(n)}, (P^+, P^-)_t)$ has to belong to a component of size at least $\varepsilon$ in $N(G^{(n)}, (P^+, \emptyset)_t)$, which is nothing else but $\mathrm{Coal}^{\delta_n}(G^{(n)}, P^+_t)$. Now, we claim that
\[
\limsup_{n \in \mathbb N} \mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^{\delta_n}(G^n_{\le \varepsilon}, T)) > \alpha\big) \xrightarrow[\varepsilon \to 0]{} 0. \tag{6.3}
\]
Indeed, if $G$ is a discrete graph with height function bounded from above by $h$ and surplus bounded from above by $s$, then
\[
\mathrm{suplength}(G) \le (2h+1)(1+s),
\]
since an injective path crosses each surplus edge at most once, and therefore decomposes into at most $1+s$ injective paths in a spanning tree, each of length at most $2h$. Thus, (6.3) is a consequence of (4.7) and the fact that the maximal surpluses in $G^{(n)}$ form a tight sequence (see for instance Sections 13 and 14 in [13]). Then, (6.3) and Lemma 6.1 show that for any $\alpha > 0$ and $T > 0$,
\[
\mathbb P\big(\mathrm{suplength}(\mathrm{Coal}^0(G^\infty_{\le \varepsilon}, T)) > \alpha\big) \xrightarrow[\varepsilon \to 0]{} 0.
\]
The arguments leading to (6.2) show that:
\[
\lim_{\varepsilon \to 0} \limsup_{n \in \mathbb N} \mathbb P\Big[\sup_{t \in [0,T]} L_{2,GHP}(G^{(n)}(t), G^{(n)}_{>\varepsilon}(t)) > \alpha\Big] = 0
\]
and
\[
\lim_{\varepsilon \to 0} \mathbb P\Big[\sup_{t \in [0,T]} L_{2,GHP}(\mathrm{CoalFrag}(G^\infty, t), \mathrm{CoalFrag}(G^\infty_{>\varepsilon}, t)) > \alpha\Big] = 0.
\]
Thus, it is sufficient to show that for any $\varepsilon >$
$0$, $(G^{(n)}_{>\varepsilon}(t))_{t \ge 0}$ converges to $(\mathrm{CoalFrag}(G^\infty_{>\varepsilon}, t))_{t \ge 0}$ in the topology of compact convergence associated to $L_{2,GHP}$. Let $Y_n$ denote the number of discrete coalescence events of $P^+_T$ occurring on $G^{(n)}_{>\varepsilon}$. Since the masses of $G^{(n)}_{>\varepsilon}$ form a tight sequence, $(Y_n)$ is a tight sequence. Since $\delta_n$ goes to zero, the probability that $P^-_T$ touches an edge from $P^+_T$ in $G^{(n)}_{>\varepsilon}$ goes to zero as $n$ goes to infinity. Thus, with probability going to one, for any $t \in [0,T]$,
\[
G^{(n)}_{>\varepsilon}(t) = \tilde G^{(n)}_{>\varepsilon}(t), \quad \text{where} \quad \tilde G^{(n)}_{>\varepsilon}(t) := N(\mathrm{Coal}^{\delta_n}(G^{(n)}_{>\varepsilon}, P^+_t), (\emptyset, P^-)_t).
\]
Furthermore, since $(Y_n)$ is a tight sequence and $\delta_n$ goes to zero,
\[
\sup_{t \in [0,T]} L_{2,GHP}\big(\tilde G^{(n)}_{>\varepsilon}(t), N(\mathrm{Coal}^0(G^{(n)}_{>\varepsilon}, P^+_t), (\emptyset, P^-)_t)\big) \xrightarrow[n \to \infty]{\mathbb P} 0.
\]
Let $Q^-$ be a Poisson process of intensity $\ell_n \otimes \mathrm{leb}_{\mathbb R_+}$ on $K_n \times \mathbb R_+$, where $K_n$ is the complete graph on $n$ vertices seen as an $\mathbb R$-graph whose edge lengths are $\delta_n$ and $\ell_n$ is its length measure. Then, one may suppose that $P^-$ is obtained as follows:
\[
P^- = \{(e, t) : \exists x \in e,\ (x, t) \in Q^-\}.
\]
Then, for any $t$, $N(\mathrm{Coal}^0(G^{(n)}_{>\varepsilon}, P^+_t), (\emptyset, P^-)_t)$ is at $L_{2,GHP}$-distance at most $\delta_n$ from $\mathrm{Frag}(\mathrm{Coal}^0(G^{(n)}_{>\varepsilon}, P^+_t), Q^-_t)$ (cf. for instance Proposition 3.4 in [3]). Altogether, we get:
\[
\sup_{t \in [0,T]} L_{2,GHP}\big(G^{(n)}_{>\varepsilon}(t), \mathrm{Frag}(\mathrm{Coal}^0(G^{(n)}_{>\varepsilon}, P^+_t), Q^-_t)\big) \xrightarrow[n \to \infty]{\mathbb P} 0,
\]
and $(\mathrm{Frag}(\mathrm{Coal}^0(G^{(n)}_{>\varepsilon}, P^+_t), Q^-_t))_{t \ge 0}$ is distributed as $(\mathrm{CoalFrag}(G^{(n)}_{>\varepsilon}, t))_{t \ge 0}$. Now Proposition 6.2 shows that the sequence of processes $\mathrm{CoalFrag}(G^{(n)}_{>\varepsilon}, \cdot)$ converges to $\mathrm{CoalFrag}(G^\infty_{>\varepsilon}, \cdot)$ for the topology of compact convergence associated to $L_{2,GHP}$, which finishes the proof.

References

[1] Romain Abraham, Jean-François Delmas, and Patrick Hoscheit. A note on the Gromov-Hausdorff-Prokhorov distance between (locally) compact metric measure spaces.
Electron. J. Probab., 18:21, 2013.
[2] L. Addario-Berry, N. Broutin, and C. Goldschmidt. The continuum limit of critical random graphs. Probab. Theory Related Fields, 152(3-4):367–406, 2012.
[3] L. Addario-Berry, N. Broutin, C. Goldschmidt, and G. Miermont. The scaling limit of the minimum spanning tree of the complete graph. ArXiv e-prints, January 2013.
[4] Louigi Addario-Berry, Nicolas Broutin, and Christina Goldschmidt. Critical random graphs: limiting constructions and distributional properties. Electron. J. Probab., 15:741–775, 2010.
[5] David Aldous. Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab., 25(2):812–854, 1997.
[6] David Aldous and Jim Pitman. The standard additive coalescent. Ann. Probab., 26(4):1703–1726, 1998.
[7] S. Bhamidi, N. Broutin, S. Sen, and X. Wang. Scaling limits of random graph models at criticality: Universality and the basin of attraction of the Erdös-Rényi random graph. ArXiv e-prints, November 2014.
[8] D. Burago, Yu. Burago, and S. Ivanov. A course in metric geometry. Providence, RI: American Mathematical Society (AMS), 2001.
[9] R. M. Dudley. Real analysis and probability. Cambridge: Cambridge University Press, repr. edition, 2002.
[10] Steven N. Evans, Jim Pitman, and Anita Winter. Rayleigh processes, real trees, and root growth with re-grafting. Probab. Theory Relat. Fields, 134(1):81–126, 2006.
[11] C. Garban, G. Pete, and O. Schramm. The scaling limits of near-critical and dynamical percolation. ArXiv e-prints, May 2013.
[12] Olle Häggström, Yuval Peres, and Jeffrey E. Steif. Dynamical percolation. Ann. Inst. Henri Poincaré, Probab. Stat., 33(4):497–528, 1997.
[13] Svante Janson, Donald E. Knuth, Tomasz Łuczak, and Boris Pittel. The birth of the giant component. Random Struct. Algorithms, 4(3):233–358, 1993.
[14] Olav Kallenberg. Random measures, theory and applications. Cham: Springer, 2017.
[15] Grégory Miermont. Tessellations of random maps of arbitrary genus. Ann. Sci. Éc. Norm. Supér. (4), 42(5):725–781, 2009.
[16] David Pollard. Convergence of stochastic processes. Springer-Verlag, 1984.
[17] M. I. Roberts and B. Sengul. Exceptional times of the critical dynamical Erdös-Rényi graph. ArXiv e-prints, October 2016.
[18] Cédric Villani.