The hull process of the Brownian plane
Nicolas Curien and Jean-François Le Gall
Université Paris-Sud
Abstract
We study the random metric space called the Brownian plane, which is closely related to the Brownian map and is conjectured to be the universal scaling limit of many discrete random lattices such as the uniform infinite planar triangulation. We obtain a number of explicit distributions for the Brownian plane. In particular, we consider, for every r > 0, the hull of radius r, which is obtained by "filling in the holes" in the ball of radius r centered at the root. We introduce a quantity Z_r which is interpreted as the (generalized) length of the boundary of the hull of radius r. We identify the law of the process (Z_r)_{r>0} as the time-reversal of a continuous-state branching process starting from +∞ at time −∞ and conditioned to hit 0 at time 0, and we give an explicit description of the process of hull volumes given the process (Z_r)_{r>0}. We obtain an explicit formula for the Laplace transform of the volume of the hull of radius r, and we also determine the conditional distribution of this volume given the length of the boundary. Our proofs involve certain new formulas for super-Brownian motion and the Brownian snake in dimension one, which are of independent interest.

1 Introduction

Much recent work has been devoted to understanding continuous limits of random graphs drawn on the two-dimensional sphere or in the plane, which are called random planar maps. A fundamental object is the random compact metric space known as the Brownian map, which has been proved to be the universal scaling limit of several important classes of random planar maps conditioned to have a large size (see in particular [1, 3, 6, 23, 30]). The main goal of this work is to study the random (non-compact) metric space called the Brownian plane, which may be viewed as an infinite-volume version of the Brownian map. The Brownian plane was first introduced and studied in [9], where it was shown to be the scaling limit in distribution of the uniform infinite planar quadrangulation (UIPQ) in the local Gromov-Hausdorff sense. The Brownian plane is in fact conjectured to be the universal scaling limit of many discrete random lattices, including the uniform infinite planar triangulation (UIPT) introduced by Angel and Schramm [5] and studied then by several authors. It was proved in [9] that the Brownian plane is locally isometric to the Brownian map, in the following sense.
Recalling that both the Brownian map and the Brownian plane are equipped with a distinguished point called the root, one can couple these two random metric spaces in such a way that, for every δ > 0, there exists ε > 0 such that the balls of radius ε centered at the root in the two spaces are isometric with probability at least 1 − δ. As a consequence, the Brownian plane shares many properties of the Brownian map. On the other hand, the Brownian plane also enjoys the important additional property of invariance under scaling: multiplying the distance by a constant factor λ > 0 yields a random metric space with the same distribution. This makes the Brownian plane more tractable for calculations than the Brownian map, for which very few explicit distributions are known. Our purpose is to obtain such explicit distributions for the Brownian plane, and in particular to give a detailed probabilistic description of the growth of "hulls" centered at the root.

In order to give a more precise presentation of our results, let us introduce some notation. As in [9], we write (P_∞, D_∞) for the Brownian plane, and we let ρ_∞ stand for the distinguished point of P_∞ called the root. We recall that P_∞ is equipped with a volume measure, and we write |A| for the volume of a measurable subset A of P_∞. For every r >
0, the closed ball of radius r centered at ρ_∞ in P_∞ is denoted by B_r(P_∞). In contrast with the case of Euclidean space, the complement of B_r(P_∞) will have infinitely many connected components (see [24] for a detailed discussion of these components in the slightly different setting of the Brownian map) but only one unbounded connected component. We then define the hull of radius r as the complement of the unbounded component of the complement of B_r(P_∞), and we denote this hull by B•_r(P_∞). Informally, B•_r(P_∞) is obtained by "filling in the holes" of B_r(P_∞) (see Fig. 1 below, and Fig. 3 in Section 5 for a discrete version of the hull). In what follows, we give a complete description of the law of the process (|B•_r(P_∞)|)_{r>0}. To formulate this description, it is convenient to introduce another process (Z_r)_{r>0}, which gives, for every r > 0, the (generalized) boundary length of the hull B•_r(P_∞).

Proposition 1.1.
Let r > 0. There exists a positive random variable Z_r such that

lim_{ε→0} ε^{-2} |B•_r(P_∞)^c ∩ B_{r+ε}(P_∞)| = Z_r,

in probability.

In view of this proposition, one interprets Z_r as the (generalized) length of the boundary of the hull of radius r (this boundary is expected to be a fractal curve of dimension 2). A key intermediate step in the derivation of our main results is to identify the process (Z_r)_{r>0} as a time-reversed continuous-state branching process. For every u ≥ 0, set ψ(u) = √(8/3) u^{3/2}. The continuous-state branching process with branching mechanism ψ is the Feller Markov process (X_t)_{t≥0} with values in R_+, whose semigroup is characterized as follows: for every x, t ≥ 0 and λ > 0,

E[e^{-λX_t} | X_0 = x] = exp(-x (λ^{-1/2} + √(2/3) t)^{-2}).

See subsection 2.1 for a brief discussion of this process. Note that X gets absorbed at 0 in finite time. It is easy to construct a process (X̃_t)_{t≤0} indexed by the time interval (−∞,
0] which is distributed as the process X "started from +∞" at time −∞ and conditioned to hit zero at time 0 (see subsection 2.1 for a more rigorous presentation).

Proposition 1.2. (i) For every r > 0, we have, for every λ ≥ 0,

E[exp(-λZ_r)] = (1 + 2λr^2/3)^{-3/2}.

Equivalently, Z_r follows a Gamma distribution with parameter 3/2 and mean r^2.

(ii) The two processes (Z_r)_{r>0} and (X̃_{-r})_{r>0} have the same finite-dimensional marginals.

We observe that results closely related to Proposition 1.2 have been obtained by Krikun [16, 17] in the discrete setting of the UIPT and the UIPQ.

Figure 1:
Illustration of the geometric meaning of the processes (Z_r)_{r≥0} and (|B•_r(P_∞)|)_{r≥0}. The Brownian plane is represented as a two-dimensional "cactus" where the height of each point is equal to its distance to the root. The shaded part represents the hull B•_r(P_∞). At time s, both processes Z_· and |B•_·(P_∞)| have a jump. Geometrically this corresponds to the creation of a "bubble" above height s.

Part (ii) of the preceding proposition implies that the process (Z_r)_{r>0} has a càdlàg modification, with only negative jumps, and from now on we deal with this modification. We can now state the main results of the present work. For every r >
0, we write ∆Z_r for the jump of Z at time r.

Theorem 1.3.
Let s_1, s_2, … be a measurable enumeration of the jumps of Z, and let ξ_1, ξ_2, … be a sequence of i.i.d. real random variables with density

(2πx^5)^{-1/2} e^{-1/(2x)} 1_{(0,∞)}(x),

which is independent of the process (Z_r)_{r>0}. The following identity in distribution of random processes holds:

(Z_r, |B•_r(P_∞)|)_{r>0} =_{(d)} (Z_r, Σ_{i: s_i ≤ r} ξ_i (∆Z_{s_i})^2)_{r>0}.

This theorem identifies the conditional distribution of the process of hull volumes knowing the process of hull boundary lengths, whose distribution is given by the preceding proposition. Informally, each jump time r of Z corresponds to the creation of a new connected component of the complement of the ball B_r(P_∞), which is "swallowed" by the hull, leading to a negative jump for the boundary length of the hull and a positive jump for its volume. The common distribution of the variables ξ_i should then be interpreted as the law of the volume of a newly created connected component knowing that the "length" of its boundary is equal to 1 (see [4, Proposition 6.4] for a related result concerning the asymptotic distribution of the volume of a triangulation with a boundary of size tending to infinity). This heuristic discussion is made much more precise in the companion paper [10], where many of the results of the present work are interpreted in terms of asymptotics for the so-called "peeling process" studied by Angel [4] for the UIPT.

The proof of Theorem 1.3 depends on certain explicit calculations of distributions, which are of independent interest.

Theorem 1.4.
Let r > 0. For every μ > 0,

E[exp(-μ|B•_r(P_∞)|)] = 3^{3/2} cosh((2μ)^{1/4} r) (cosh^2((2μ)^{1/4} r) + 2)^{-3/2}.

Furthermore, for every ℓ > 0,

E[exp(-μ|B•_r(P_∞)|) | Z_r = ℓ] = (r^3 (2μ)^{3/4} cosh((2μ)^{1/4} r) / sinh^3((2μ)^{1/4} r)) exp(-ℓ (√(μ/2) (3 coth^2((2μ)^{1/4} r) - 2) - 3/(2r^2))).

In view of the first assertion of the theorem, one may ask whether a similar formula holds for the volume |B_r(P_∞)| of the ball of radius r. In principle our methods should also be applicable to this problem, but our calculations did not lead to a tractable expression. One may still compare the expected volumes of the hull and the ball. From the first formula of the theorem, one easily gets that E[|B•_r(P_∞)|] = r^4/
3. On the other hand, using the method of the proof of [26, Proposition 5], one can verify that E[|B_r(P_∞)|] = 2r^4/21. In Section 5, we also prove an invariance principle relating the (suitably rescaled) hull volume process of the UIPQ to the process (|B•_r(P_∞)|)_{r>0}. A similar invariance principle should hold for the UIPT and for more general random lattices such as the ones constructed by Addario-Berry [2] and Stephenson [32].

Our proofs depend on a new representation of the Brownian plane, which is different from the one used in [9]. Roughly speaking, this representation is a continuous analog of the construction of the UIPQ that was given by Chassaing and Durhuus in [7], whereas [9] used a continuous version of the construction in [12]. Similarly as in [9], the representation of the Brownian plane in the present work uses a random infinite real tree T_∞ whose vertices are assigned real labels. The probabilistic structure of the real tree T_∞ is more complicated than in [9], but the labels are now nonnegative and correspond to distances from the root in P_∞ (whereas in [9] labels corresponded in some sense to "distances from infinity"). This is of course similar to the well-known Schaeffer bijection between rooted quadrangulations and well-labeled trees [8]. The fact that labels are distances from the root is important for our purposes, since it allows us to give a simple representation of the hull of radius r: the complement of this hull corresponds to the set of all points a in T_∞ such that labels stay greater than r along the (tree) geodesic from a to infinity. See formula (16) below.
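The explicit laws above lend themselves to quick numerical sanity checks. The sketch below is ours, not part of the paper: using plain numpy, it checks that the Gamma law of Proposition 1.2 (i) has the stated Laplace transform, that averaging the conditional formula of Theorem 1.4 over the Gamma law of Z_r recovers the unconditional formula, and that the latter yields E[|B•_r(P_∞)|] = r^4/3.

```python
import numpy as np
from math import gamma

def trap(y, x):
    # simple trapezoid rule (kept self-contained on purpose)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def hull_lt(mu, r):
    # first formula of Theorem 1.4: E[exp(-mu |hull volume|)]
    a = (2.0 * mu) ** 0.25
    return 3.0 ** 1.5 * np.cosh(a * r) * (np.cosh(a * r) ** 2 + 2.0) ** (-1.5)

def hull_lt_given_boundary(mu, r, ell):
    # second formula of Theorem 1.4: E[exp(-mu |hull volume|) | Z_r = ell]
    a = (2.0 * mu) ** 0.25
    pref = r ** 3 * a ** 3 * np.cosh(a * r) / np.sinh(a * r) ** 3
    rate = np.sqrt(mu / 2.0) * (3.0 / np.tanh(a * r) ** 2 - 2.0) - 3.0 / (2.0 * r ** 2)
    return pref * np.exp(-ell * rate)

def z_density(ell, r):
    # Gamma distribution with parameter 3/2 and mean r^2 (Proposition 1.2 (i))
    theta = 2.0 * r ** 2 / 3.0
    return ell ** 0.5 * np.exp(-ell / theta) / (gamma(1.5) * theta ** 1.5)

r, mu, lam = 1.0, 0.7, 0.9
ell = np.linspace(1e-9, 60.0, 400_001)

# Laplace transform of the Gamma law, to compare with (1 + 2*lam*r^2/3)^(-3/2)
lt_gamma = trap(np.exp(-lam * ell) * z_density(ell, r), ell)
# mixture of the conditional formula over Z_r, to compare with hull_lt(mu, r)
mixed = trap(hull_lt_given_boundary(mu, r, ell) * z_density(ell, r), ell)
# small-mu slope of the unconditional formula, to compare with r^4/3
mean_hull = (1.0 - hull_lt(1e-8, r)) / 1e-8
```

All three quantities agree with the closed forms to within quadrature accuracy, which is a useful cross-check of the internal consistency of the two formulas of Theorem 1.4 with Proposition 1.2.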
There is a similar interpretation for the boundary of the hull, and a key observation is the fact that the "boundary length" Z_r can be obtained in terms of exit measures from (r, ∞) associated with the "subtrees" branching off the spine of the infinite tree T_∞ at a level greater than the last occurrence of label r on the spine (see formula (18) below).

The construction of the infinite tree T_∞ and of the labels assigned to its vertices, as well as the subsequent calculations, make heavy use of the Brownian snake and its properties. In particular, the special Markov property of the Brownian snake [18] and its connections with partial differential equations play an important role. Because of the close relation between super-Brownian motion and the Brownian snake, some of the results that follow can be written as statements about super-Brownian motion, which may be of independent interest. In particular, Corollary 4.7, which is essentially equivalent to the second formula of Theorem 1.4, gives the Laplace transform of the total integrated mass of a super-Brownian motion started from u δ_a (for some u, a >
0) knowing that the minimum of the range is equal to 0. Similarly, Corollary 4.9 determines, for a super-Brownian motion starting from δ_0, the law of the process indexed by r > 0 whose value is given by the exit measure from (-r, ∞) together with the mass of those historical paths that do not hit level -r.

The paper is organized as follows. Section 2 presents a number of preliminaries. In particular, we recall basic facts about the (one-dimensional) Brownian snake, including exit measures and the special Markov property, and its connections with super-Brownian motion. We also state a recent result from [25] giving a decomposition of the Brownian snake knowing its minimal spatial position. The latter result is especially useful in Section 3, where we derive our new representation of the Brownian plane. In order to show that this new construction is equivalent to the one in [9], we use the fact that the distribution of the Brownian plane is characterized by the invariance under scaling and the above-mentioned property stating that the Brownian plane is locally isometric to the Brownian map. Section 4 contains the proof of our main results: Propositions 1.1 and 1.2 are proved in subsection 4.1, Theorem 1.4 is derived in subsection 4.2, and Theorem 1.3 is proved in subsection 4.3. Finally, Section 5 is devoted to our invariance principle relating the hull process of the UIPQ to the process (|B•_r(P_∞)|)_{r>0}.

2 Preliminaries

2.1 A continuous-state branching process

An important role in this work will be played by a particular continuous-state branching process, which was already mentioned in the introduction. We refer to [19, Chapter 2] and references therein for the general theory of continuous-state branching processes, and content ourselves with a brief exposition of the case of interest in this work. We fix a constant c >
0. The continuous-state branching process with branching mechanism ψ(u) = c u^{3/2} is the Feller Markov process (X_t)_{t≥0} with values in R_+, with càdlàg paths and no negative jumps, whose semigroup is characterized as follows. If P_x stands for the probability measure under which X starts from X_0 = x, then, for every x, t ≥ 0 and λ > 0,

E_x[e^{-λX_t}] = e^{-x u_t(λ)},

where the function u_t(λ) is determined by the differential equation

d u_t(λ)/dt = -c (u_t(λ))^{3/2},  u_0(λ) = λ.

It follows that u_t(λ) = (λ^{-1/2} + ct/2)^{-2}, and thus

E_x[e^{-λX_t}] = exp(-x (λ^{-1/2} + ct/2)^{-2}). (1)

By differentiating with respect to λ, we have also

E_x[X_t e^{-λX_t}] = x λ^{-3/2} (λ^{-1/2} + ct/2)^{-3} exp(-x (λ^{-1/2} + ct/2)^{-2}). (2)

Let T := inf{t ≥ 0 : X_t = 0}, and note that X_t = 0 for every t ≥ T, a.s. Since P_x(T ≤ t) = P_x(X_t = 0) = exp(-4x/(ct)^2), we readily obtain that the density of T under P_x is (when x >
0) the function

t ↦ φ_t(x) := (8x/(c^2 t^3)) exp(-4x/(ct)^2).

For future purposes, it will be useful to introduce the process X conditioned on extinction at a fixed time. To this end, we write q_t(x, dy) for the transition kernels of X. We fix ρ > 0 and define X "conditioned on extinction at time ρ" as the time-inhomogeneous Markov process indexed by the interval [0, ρ], with values in (0, ∞) (with 0 serving as a cemetery point), whose transition kernel between times s and t is

π_{s,t}(x, dy) = (φ_{ρ-t}(y)/φ_{ρ-s}(x)) q_{t-s}(x, dy), if 0 ≤ s < t < ρ and x > 0,

and

π_{s,ρ}(x, dy) = δ_0(dy), if s ∈ [0, ρ) and x > 0.

This is just a standard h-transform in a time-inhomogeneous setting, and the interpretation can be justified by the fact that, for every choice of 0 < s_1 < ⋯ < s_p < ρ, the law of (X_{s_1}, …, X_{s_p}) under these kernels arises as the limit, as ε → 0, of the conditional laws under P_x given {|T - ρ| < ε}. We can now introduce the process mentioned in the introduction: one can construct a process (X̃_t)_{t≤0} indexed by the time interval (−∞,
0] and such that:

• X̃_t > 0 for every t < 0, and X̃_0 = 0, a.s.;

• X̃_t → +∞ as t ↓ −∞, a.s.;

• for every x > 0, if T̃_x := inf{t ∈ (−∞, 0] : X̃_t ≤ x}, the process (X̃_{(T̃_x+t)∧0})_{t≥0} has the same distribution as X started from x.

To get an explicit construction of X̃, one may concatenate independent copies of the process X started at n and stopped at the hitting time of n − 1, for every integer n ≥ 1. We omit the details.

2.2 Preliminaries about the Brownian snake
We give below a brief presentation of the Brownian snake, referring to the book [19] for more details. We write W for the set of all finite paths in R. An element of W is a continuous mapping w : [0, ζ] → R, where ζ = ζ_{(w)} ≥ 0 depends on w and is called the lifetime of w. We write ŵ = w(ζ_{(w)}) for the endpoint of w. For x ∈ R, we set W_x := {w ∈ W : w(0) = x}. The trivial path w such that w(0) = x and ζ_{(w)} = 0 is identified with the point x of R, so that we can view R as a subset of W. The space W is equipped with the distance

d(w, w′) = |ζ_{(w)} − ζ_{(w′)}| + sup_{t≥0} |w(t ∧ ζ_{(w)}) − w′(t ∧ ζ_{(w′)})|.

The Brownian snake (W_s)_{s≥0} is a continuous Markov process with values in W. We will write ζ_s = ζ_{(W_s)} for the lifetime process of W_s. The process (ζ_s)_{s≥0} evolves like a reflecting Brownian motion in R_+. Conditionally on (ζ_s)_{s≥0}, the evolution of (W_s)_{s≥0} can be described informally as follows: when ζ_s decreases, the path W_s is shortened from its tip, and when ζ_s increases the path W_s is extended by adding "little pieces of linear Brownian motion" at its tip. We refer to [19, Chapter IV] for a more rigorous presentation.

It is convenient to assume that the Brownian snake is defined on the canonical space C(R_+, W) of all continuous functions from R_+ into W, in such a way that, for ω = (ω_s)_{s≥0} ∈ C(R_+, W), we have W_s(ω) = ω_s. The notation P_w then stands for the law of the Brownian snake started from w.

For every x ∈ R, the trivial path x is a regular recurrent point for the Brownian snake, and so we can make sense of the excursion measure N_x away from x, which is a σ-finite measure on C(R_+, W). Under N_x, the process (ζ_s)_{s≥0} is distributed according to the Itô measure of positive excursions of linear Brownian motion, which is normalized so that, for every ε > 0,

N_x(sup_{s≥0} ζ_s > ε) = 1/(2ε).

We write σ := sup{s ≥ 0 : ζ_s > 0} for the duration of the excursion under N_x.
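It may help to recall concretely how a continuous excursion codes a real tree (the coding used repeatedly below, see [20, Section 2]): times s and t correspond to points of the tree at distance ζ_s + ζ_t − 2 min_{[s∧t, s∨t]} ζ. The following discrete sketch is ours, purely for illustration; a reflected random walk stands in for the lifetime excursion.

```python
import numpy as np

rng = np.random.default_rng(0)
# a nonnegative path standing in for the lifetime process (zeta_s)
zeta = np.abs(np.cumsum(rng.choice([-1.0, 1.0], size=2000)))

def d_tree(zeta, s, t):
    # distance in the tree coded by zeta: zeta_s + zeta_t - 2 * (min of zeta over [s, t])
    lo, hi = min(s, t), max(s, t)
    return zeta[s] + zeta[t] - 2.0 * zeta[lo:hi + 1].min()
```

One checks that d_tree is a pseudo-metric (symmetric, zero on the diagonal, triangle inequality); quotienting time by the relation d_tree(s, t) = 0 produces the coded tree, exactly as for the trees T_ζ appearing below.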
For every ℓ > 0, we can also define N_0^{(ℓ)} := N_0(· | σ = ℓ). We set

R := {Ŵ_s : s ≥ 0},  W_* := inf R = inf_{s≥0} Ŵ_s.

We will consider R and W_* under the excursion measures N_x, and we note that we have also R = {Ŵ_s : 0 ≤ s ≤ σ} and W_* = min{Ŵ_s : 0 ≤ s ≤ σ}, N_x a.e. Occasionally we also write ω_* = W_*(ω) for ω ∈ C(R_+, W). If x, y ∈ R and y < x, we have

N_x(y ∈ R) = N_x(W_* ≤ y) = 3/(2(x − y)^2) (4)

(see e.g. [19, Section VI.1]). It is known (see e.g. [27, Proposition 2.5]) that N_x a.e. there is a unique instant s_m ∈ [0, σ] such that Ŵ_{s_m} = W_*.

Decomposing the Brownian snake at its minimum.
We will now recall a key result of [25] that plays an important role in what follows. This result identifies the law of the minimizing path W_{s_m} under N_0, together with the distribution of the "subtrees" that branch off the minimizing path. Let us define these subtrees in a more precise way.

For every s ≥ 0, we set

ζ̂_s := ζ_{(s_m+s)∧σ},  ζ̌_s := ζ_{(s_m−s)∨0}.

We let (â_i, b̂_i), i ∈ Î, be the excursion intervals of ζ̂ above its past minimum. Equivalently, the intervals (â_i, b̂_i), i ∈ Î, are the connected components of the set

{s ≥ 0 : ζ̂_s > min_{0≤r≤s} ζ̂_r}.

Similarly, we let (ǎ_j, b̌_j), j ∈ Ǐ, be the excursion intervals of ζ̌ above its past minimum. We may assume that the indexing sets Î and Ǐ are disjoint. In terms of the tree T_ζ coded by the excursion (ζ_s)_{0≤s≤σ} under N_0 (see e.g. [20, Section 2]), each interval (â_i, b̂_i) or (ǎ_j, b̌_j) corresponds to a subtree of T_ζ branching off the ancestral line of the vertex associated with s_m. We next consider the spatial displacements corresponding to these subtrees. The properties of the Brownian snake imply that, for every i ∈ Î, the paths W_{s_m+s}, s ∈ [â_i, b̂_i], are the same up to time ζ_{s_m+â_i} = ζ_{s_m+b̂_i}, and similarly for the paths W_{s_m−s}, s ∈ [ǎ_j, b̌_j], for every j ∈ Ǐ. Then, for every i ∈ Î, we let W^{[i]} ∈ C(R_+, W) be defined by

W^{[i]}_s(t) = W_{s_m+(â_i+s)∧b̂_i}(ζ_{s_m+â_i} + t),  0 ≤ t ≤ ζ_{s_m+(â_i+s)∧b̂_i} − ζ_{s_m+â_i}.

Similarly, for every j ∈ Ǐ,

W^{[j]}_s(t) = W_{s_m−(ǎ_j+s)∧b̌_j}(ζ_{s_m−ǎ_j} + t),  0 ≤ t ≤ ζ_{s_m−(ǎ_j+s)∧b̌_j} − ζ_{s_m−ǎ_j}.

We finally introduce the point measures on R_+ × C(R_+, W) defined by

N̂ = Σ_{i∈Î} δ_{(ζ_{s_m+â_i}, W^{[i]})},  Ň = Σ_{j∈Ǐ} δ_{(ζ_{s_m−ǎ_j}, W^{[j]})}.

Theorem 2.1. (i)
Let a > 0. Under the excursion measure N_0 and conditionally on W_* = −a, the random path (a + W_{s_m}(ζ_{s_m} − t))_{0≤t≤ζ_{s_m}} is distributed as a nine-dimensional Bessel process started from 0 and stopped at its last passage time at level a.

(ii) Under N_0, conditionally on the minimizing path W_{s_m}, the point measures N̂(dt, dω) and Ň(dt, dω) are independent and their common conditional distribution is that of a Poisson point measure with intensity

2 · 1_{[0,ζ_{s_m}]}(t) 1_{{ω_* > Ŵ_{s_m}}} dt N_{W_{s_m}(t)}(dω).

We refer to [31, Chapter XI] for basic facts about Bessel processes. Parts (i) and (ii) of the theorem correspond respectively to Theorem 5 and Theorem 6 of [25]. Note that when applying Theorem 5 of [25], we also use the fact that the time-reversal of a Bessel process of dimension −5 started from a and stopped when hitting 0 is a nine-dimensional Bessel process started from 0 and stopped at its last passage time at level a (see e.g. [31, Exercise XI.1.23]).

Exit measures and the special Markov property.
Let D be an open interval of R, such that D ≠ R. We fix x ∈ D and, for every w ∈ W_x, set

τ_D(w) = inf{t ∈ [0, ζ_{(w)}] : w(t) ∉ D},

with the usual convention inf ∅ = ∞. The exit measure Z^D from D (see [19, Chapter 5]) is a random measure on ∂D, which is defined under N_x and is supported on the set of all exit points W_s(τ_D(W_s)) for the paths W_s such that τ_D(W_s) < ∞ (note that here ∂D has at most two points, but the preceding discussion remains valid for the d-dimensional Brownian snake and an arbitrary subdomain D of R^d). Note that N_x(Z^D ≠ 0) < ∞. It is easy to prove, for instance by using Proposition 2.2 below, that

{Z^D = 0} = {R ⊂ D}, N_x a.e. (5)

A crucial ingredient of our study is the special Markov property of the Brownian snake [18]. In order to state this property, we first observe that, N_x-a.e., the set

{s ≥ 0 : τ_D(W_s) < ζ_s}

is open and thus can be written as a union of disjoint open intervals (a_i, b_i), i ∈ I, where I may be empty. From the properties of the Brownian snake, one has, N_x-a.e. for every i ∈ I and every s ∈ [a_i, b_i],

τ_D(W_s) = τ_D(W_{a_i}) = ζ_{a_i},

and more precisely all paths W_s, s ∈ [a_i, b_i], coincide up to their exit time from D. For every i ∈ I, we then define an element W^{(i)} of C(R_+, W) by setting, for every s ≥ 0,

W^{(i)}_s(t) := W_{(a_i+s)∧b_i}(ζ_{a_i} + t), for 0 ≤ t ≤ ζ_{(W^{(i)}_s)} := ζ_{(a_i+s)∧b_i} − ζ_{a_i}.

Informally, the W^{(i)}'s represent the "excursions" of the Brownian snake outside D (the word "outside" is a little misleading here, because although these excursions start from a point of ∂D, they will typically come back inside D).

We also need to introduce a σ-field that contains the information about the paths W_s before they exit D.
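In a discretized picture, the exit time τ_D and the splitting of a path at its exit from D are elementary to compute. The sketch below is ours, purely for illustration, with D = (a, ∞) as in the applications that follow; the path and the threshold are made-up numbers.

```python
import numpy as np

def tau_D(w, a):
    """Index of the first time the discretized path w leaves D = (a, infinity),
    i.e. the first t with w[t] <= a; returns None if w stays in D."""
    hits = np.flatnonzero(np.asarray(w) <= a)
    return int(hits[0]) if hits.size else None

# a made-up path; it first leaves D = (0.4, infinity) at index 4
w = [1.0, 0.8, 0.5, 0.7, 0.2, 0.9]
t = tau_D(w, 0.4)
before_exit, after_exit = w[:t + 1], w[t:]  # the two pieces, glued at the exit point
```

Note that the piece after the exit time starts on the boundary of D but may well re-enter D, which mirrors the caveat about the word "outside" above.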
To this end, we set, for every s ≥ 0,

η^D_s := inf{r ≥ 0 : ∫_0^r du 1_{{ζ_u ≤ τ_D(W_u)}} > s},

and we let E^D be the σ-field generated by the process (W_{η^D_s})_{s≥0} and the class of all sets that are N_x-negligible. The random measure Z^D is measurable with respect to E^D (see [18, Proposition 2.3]). We now state the special Markov property [18, Theorem 2.4].

Proposition 2.2.
Under N_x, conditionally on E^D, the point measure

Σ_{i∈I} δ_{W^{(i)}}

is Poisson with intensity

∫ Z^D(dy) N_y(·).

Remarks. (i) Since on the event {Z^D = 0} there are no excursions outside D, the previous proposition is equivalent to the same statement where N_x is replaced by the probability measure N_x(· | Z^D ≠ 0).

(ii) In what follows we will apply the special Markov property in a conditional form. Suppose that D = (a, ∞) for some a > 0, and that x > a. Then the preceding statement remains valid if we replace N_x by N_x(· ∩ {R ⊂ (0, ∞)}), provided we also replace ∫ Z^D(dy) N_y by ∫ Z^D(dy) N_y(· ∩ {R ⊂ (0, ∞)}). This follows from the fact that conditioning a Poisson point measure on having no point on a set of finite intensity is equivalent to removing the points that fall into this set. We omit the details.

For a < x, we write Z_a := ⟨Z^{(a,∞)}, 1⟩ for the total mass of the exit measure outside (a, ∞). We will use the Laplace transform of Z_a under N_x, which is given by

N_x(1 − exp(−μZ_a)) = (μ^{-1/2} + √(2/3)(x − a))^{-2}, (6)

for every μ ≥
0. This formula is easily derived from the fact that the (nonnegative) function u(x) = N_x(1 − exp(−μZ_a)), defined for x ∈ (a, ∞), solves the differential equation u″ = 4u^2 with boundary conditions u(a) = μ and u(∞) = 0 (see [19, Chapter V]). On the other hand, an application of the special Markov property shows that, for every b < a < x,

N_x(exp(−λZ_b) | E^{(a,∞)}) = exp(−Z_a N_a(1 − exp(−λZ_b))).

If we substitute formula (6) in the last display, and compare with (1), we easily get that the process (Z_{x−a})_{a>0} is Markov under N_x, with the transition kernels of the continuous-state branching process with branching mechanism ψ(u) = √(8/3) u^{3/2}. Although N_x is an infinite measure, the preceding assertion makes sense, simply because we can restrict our attention to the finite measure event {Z_{x−ε} > 0}, for any choice of ε >
0. It follows that (Z_{x−a})_{a>0} has a càdlàg modification under N_x, which we consider from now on.

We finally explain an extension of the special Markov property, where we consider excursions outside a random domain. For definiteness, we fix x = 0, and for every a > 0, we set E_a = E^{(−a,∞)}. Let H be a random variable with values in (0, ∞], such that N_0(H < ∞) < ∞, and assume that H is a stopping time of the filtration (E_a)_{a>0}, in the sense that, for every a > 0, the event {H ≤ a} is E_a-measurable. As usual we can define the σ-field E_H that consists of all events A such that A ∩ {H ≤ a} is E_a-measurable, for every a > 0. Since Z_{−a} is E_a-measurable for every a > 0, it follows by standard arguments that the random variable Z_{−H} is E_H-measurable (at this point it is important that we have taken a càdlàg modification of the process (Z_{−a})_{a>0}). We may consider the excursions (W^{H,(i)})_{i∈I} of the Brownian snake outside (−H, ∞). These excursions are defined in exactly the same way as in the case where H is deterministic, considering now the connected components of the open set {s ≥ 0 : W_s(t) < −H for some t ∈ [0, ζ_s]}. We define W̃^{H,(i)} by shifting W^{H,(i)} so that it starts from 0.

Proposition 2.3.
Under the probability measure N_0(· | H < ∞), conditionally on the σ-field E_H, the point measure

Σ_{i∈I} δ_{W̃^{H,(i)}}

is Poisson with intensity Z_{−H} N_0(·).

This proposition can be obtained by arguments very similar to the derivation of the strong Markov property of Brownian motion from the simple Markov property: we approximate H with stopping times greater than H that take only countably many values, then use Proposition 2.2 and finally perform a suitable passage to the limit. We leave the details to the reader.

The Brownian snake and super-Brownian motion.
The initial motivation for studying the Brownian snake came from its connection with super-Brownian motion, which we briefly recall. Under the excursion measure N_x(dω), the lifetime process (ζ_s(ω))_{s≥0} is distributed as a Brownian excursion, and so we can define, for every t ≥ 0, the local time (ℓ^t_s(ω))_{s≥0} of this excursion at level t. Next let μ be a finite measure on R, and let

𝒩(dω) = Σ_{k∈K} δ_{ω^{(k)}}(dω)

be a Poisson measure on C(R_+, W) with intensity ∫ μ(dx) N_x(dω). For every t >
0, let X_t be the random measure on R defined by setting, for every nonnegative measurable function φ on R,

⟨X_t, φ⟩ = Σ_{k∈K} ∫_0^{σ(ω^{(k)})} dℓ^t_s(ω^{(k)}) φ(Ŵ_s(ω^{(k)})). (7)

If we also set X_0 = μ, the process (X_t)_{t≥0} is then a super-Brownian motion with branching mechanism ψ(u) = 2u^2 started from μ (see [19, Theorem IV.4]). A nice feature of this construction is the fact that it also gives the associated historical process: just consider, for every t > 0, the measure 𝒳_t on W defined by setting

⟨𝒳_t, Φ⟩ = Σ_{k∈K} ∫_0^{σ(ω^{(k)})} dℓ^t_s(ω^{(k)}) Φ(W_s(ω^{(k)})), (8)

for every nonnegative measurable function Φ on W. Some of the forthcoming results are stated in terms of super-Brownian motion and its historical process. Without loss of generality we may and will assume that these processes are obtained by formulas (7) and (8) of the previous construction. This also means that we consider the special branching mechanism ψ(u) = 2u^2, but of course the case of a general quadratic branching mechanism can then be handled via scaling arguments.

3 The Brownian plane

We start by giving a characterization of the Brownian plane as a random pointed metric space satisfying appropriate properties. We let K_bcl denote the space of all isometry classes of pointed boundedly compact length spaces. The space K_bcl is equipped with the local Gromov-Hausdorff distance d_LGH (see [9, Section 2.1]) and is a Polish space, that is, separable and complete for this distance. For r > 0 and F ∈ K_bcl, we use the notation B_r(F) for the closed ball of radius r centered at the distinguished point of F. Note that B_r(F) is always viewed as a pointed compact metric space. The Brownian plane P_∞ is then a random variable taking values in the space K_bcl.

Definition 3.1.
Let E and E′ be two random variables with values in K_bcl. We say that E and E′ are locally isometric if, for every δ > 0, there exists a number r > 0 and a coupling of E and E′ such that the balls B_r(E) and B_r(E′) are isometric with probability at least 1 − δ.

We leave it to the reader to verify that this is an equivalence relation (only transitivity is not obvious). The interest of this definition comes from the next proposition. If E is a (random) metric space and λ >
0, we use the notation λ·E for the same metric space where the distance has been multiplied by λ.

Proposition 3.2. The distribution of the Brownian plane is characterized in the set of all probability measures on K_bcl by the following two properties:

(i) The Brownian plane is locally isometric to the Brownian map.

(ii) The Brownian plane is scale invariant, meaning that λ·P_∞ has the same distribution as P_∞, for every λ > 0.

Proof. The fact that property (i) holds is Theorem 1 in [9]. Property (ii) is immediate from the construction in [9], or directly from the convergence (1) in [9, Theorem 1]. So we just have to prove that these two properties characterize the distribution of the Brownian plane. Let E be a random variable with values in K_bcl, which is both locally isometric to the Brownian map and scale invariant. Then, E is also locally isometric to the Brownian plane, and, for every δ > 0, there exist r > 0 and a coupling of E and P_∞ such that

P[B_r(E) = B_r(P_∞)] > 1 − δ,

where the equality is in the sense of isometry between pointed compact metric spaces. Trivially this implies that, for every a > 0,

P[B_a((a/r)·E) = B_a((a/r)·P_∞)] > 1 − δ.

By scale invariance, (a/r)·E and (a/r)·P_∞ have the same distribution as E and P_∞ respectively. So we get that for every δ >
0, for every a > 0, we can find a coupling of E and P_∞ such that

P[B_a(E) = B_a(P_∞)] > 1 − δ.

Recalling the definition of the local Gromov-Hausdorff distance d_LGH (see e.g. [9, Section 2.1]), we obtain that, for every ε > 0 and δ > 0, there exists a coupling of E and P_∞ such that

P[d_LGH(E, P_∞) < ε] > 1 − δ.

Clearly this implies that the Lévy-Prokhorov distance between the distributions of E and P_∞ is 0, and thus E and P_∞ have the same distribution.

In this section, we provide a construction of the Brownian plane, which is different from the one in [9]. We then use Proposition 3.2 and Theorem 2.1 to prove the equivalence of the two constructions. We consider a nine-dimensional Bessel process R = (R_t)_{t≥0} starting from 0 and, conditionally on R, two independent Poisson point measures 𝒩_0(dt, dω) and 𝒩_1(dt, dω) on R_+ × C(R_+, W) with the same intensity

2 · 1_{{R(ω)⊂(0,∞)}} dt N_{R_t}(dω).

It will be convenient to write

𝒩_0 = Σ_{i∈I} δ_{(t_i, ω^i)},  𝒩_1 = Σ_{i∈J} δ_{(t_i, ω^i)},

where the indexing sets I and J are disjoint. We also consider the sum 𝒩 = 𝒩_0 + 𝒩_1, which conditionally on R is Poisson with intensity

4 · 1_{{R(ω)⊂(0,∞)}} dt N_{R_t}(dω),

and we have

𝒩 = Σ_{i∈I∪J} δ_{(t_i, ω^i)}. (9)

We start by introducing the infinite random tree that will be crucial in our construction of the Brownian plane. For every i ∈ I∪J, write σ_i = σ(ω^i) and let (ζ^i_s)_{s≥0} be the lifetime process associated with ω^i. Then the function (ζ^i_s)_{0≤s≤σ_i} codes a rooted compact real tree, which is denoted by T_i, and we write p_{ζ^i} for the canonical projection from [0, σ_i] onto T_i (see e.g. [20, Section 2] for basic facts about the coding of trees by continuous functions). We construct a random non-compact real tree T_∞ by grafting to the half-line [0, ∞) (which we call the "spine") the tree T_i at the point t_i, for every i ∈ I∪J. Formally, the tree T_∞ is obtained from the disjoint union

[0, ∞) ∪ (⋃_{i∈I∪J} T_i)

by identifying the point t_i of [0, ∞) with the root ρ_i of T_i, for every i ∈ I∪J. The metric d_∞ on T_∞ is determined as follows. The restriction of d_∞ to each tree T_i is (of course) the metric d_{T_i} on T_i.
If $x\in\mathcal{T}_i$ and $t\in[0,\infty)$, we take $d_\infty(x,t)=d_{\mathcal{T}_i}(x,\rho_i)+|t_i-t|$. If $x\in\mathcal{T}_i$ and $y\in\mathcal{T}_j$, with $i\ne j$, we take $d_\infty(x,y)=d_{\mathcal{T}_i}(x,\rho_i)+|t_i-t_j|+d_{\mathcal{T}_j}(\rho_j,y)$. By convention, $\mathcal{T}_\infty$ is rooted at $0$. The infinite tree $\mathcal{T}_\infty$ is equipped with a volume measure $V$, which puts no mass on the spine and whose restriction to each tree $\mathcal{T}_i$ is the natural volume measure on $\mathcal{T}_i$, defined as the image of Lebesgue measure on $[0,\sigma_i]$ under the projection $p_{\zeta^i}$.

We also define labels on the tree $\mathcal{T}_\infty$. The label $\Lambda_x$ of a vertex $x\in\mathcal{T}_\infty$ is defined by $\Lambda_x=R_t$ if $x=t$ belongs to the spine $[0,\infty)$, and $\Lambda_x=\widehat{\omega}^i_s$ if $x=p_{\zeta^i}(s)$ belongs to the subtree $\mathcal{T}_i$, for some $i\in I\cup J$. Note that the mapping $x\mapsto\Lambda_x$ is continuous almost surely. For future use, we also notice that, if $x=p_{\zeta^i}(s)$ belongs to the subtree $\mathcal{T}_i$, the quantities $\omega^i_s(t)$, $0\le t\le\zeta^i_s$, are the labels of the ancestors of $x$ in $\mathcal{T}_i$.

We will use the fact that labels are “transient” in the sense of the following lemma. Recall the notation $\omega_*=W_*(\omega)$.

Lemma 3.3. We have a.s.
\[\lim_{r\uparrow\infty}\Big(\inf_{i\in I\cup J,\ t_i>r}\omega^i_*\Big)=+\infty.\]

Proof.
It is enough to verify that, for every
A >
0, we have
\[\lim_{r\uparrow\infty}P\Big(\inf_{i\in I\cup J,\ t_i\ge r}\omega^i_*<A\Big)=0.\]
However, by construction,
\[\begin{aligned}P\Big(\inf_{i\in I\cup J,\ t_i\ge r}\omega^i_*<A\Big)&=P\Big(\inf_{t\ge r}R_t<A\Big)+E\Big[\mathbf{1}_{\{\inf_{t\ge r}R_t\ge A\}}\Big(1-\exp\Big(-4\int_r^\infty\mathrm{d}t\,\mathbb{N}_{R_t}(0<W_*<A)\Big)\Big)\Big]\\
&=P\Big(\inf_{t\ge r}R_t<A\Big)+E\Big[\mathbf{1}_{\{\inf_{t\ge r}R_t\ge A\}}\Big(1-\exp\Big(-6\int_r^\infty\mathrm{d}t\,\big((R_t-A)^{-2}-(R_t)^{-2}\big)\Big)\Big)\Big],\end{aligned}\]
using (4). The desired result easily follows from the fact that the integral $\int^\infty\mathrm{d}t\,(R_t)^{-3}$ is a.s. convergent.

Until now, we have not used the fact that $\mathcal{N}$ is decomposed in the form $\mathcal{N}=\mathcal{N}'+\mathcal{N}''$. This decomposition corresponds intuitively to the fact that the trees $\mathcal{T}_i$ are grafted on the left side of the spine $[0,\infty)$ when $i\in I$, and on the right side when $i\in J$. We make this precise by defining an exploration process of the tree. To begin with, we define, for every $u\ge 0$,
\[\tau'_u:=\sum_{i\in I}\mathbf{1}_{\{t_i\le u\}}\,\sigma_i,\qquad \tau''_u:=\sum_{i\in J}\mathbf{1}_{\{t_i\le u\}}\,\sigma_i.\]
Note that both $u\mapsto\tau'_u$ and $u\mapsto\tau''_u$ are nondecreasing and right-continuous. The left limits of these functions are denoted by $\tau'_{u-}$ and $\tau''_{u-}$ respectively, and $\tau'_{0-}=\tau''_{0-}=0$ by convention. Then, for every $s\ge$
0, there is a unique u ≥
0, such that $\tau'_{u-}\le s\le\tau'_u$, and:

• Either there is a (unique) $i\in I$ such that $u=t_i$, and we set $\Theta'_s:=p_{\zeta^i}(s-\tau'_{t_i-})$.

• Or there is no such $i$ and we set $\Theta'_s=u$.

We define similarly $(\Theta''_s)_{s\ge 0}$ by replacing $(\tau'_u)_{u\ge 0}$ by $(\tau''_u)_{u\ge 0}$ and $I$ by $J$. Informally, $(\Theta'_s)_{s\ge 0}$ and $(\Theta''_s)_{s\ge 0}$ correspond to the exploration of respectively the left and the right side of the tree $\mathcal{T}_\infty$. Noting that $\Theta'_0=\Theta''_0=0$, we define $(\Theta_s)_{s\in\mathbb{R}}$ by setting
\[\Theta_s:=\begin{cases}\Theta'_s&\text{if }s\ge 0,\\ \Theta''_{-s}&\text{if }s\le 0.\end{cases}\]
It is straightforward to verify that the mapping $s\mapsto\Theta_s$ is continuous. We also note that the volume measure $V$ on $\mathcal{T}_\infty$ is the image of Lebesgue measure on $\mathbb{R}$ under the mapping $s\mapsto\Theta_s$.

This exploration process allows us to define intervals on $\mathcal{T}_\infty$. Let us make the convention that, if $s>t$, the “interval” $[s,t]$ is defined by $[s,t]=[s,\infty)\cup(-\infty,t]$. Then, for every $x,y\in\mathcal{T}_\infty$, there is a smallest interval $[s,t]$, with $s,t\in\mathbb{R}$, such that $\Theta_s=x$ and $\Theta_t=y$, and we define
\[[x,y]:=\{\Theta_r:r\in[s,t]\}.\]
Note that $[x,y]\ne[y,x]$ unless $x=y$. We may now turn to our construction of the Brownian plane. We set, for every $x,y\in\mathcal{T}_\infty$,
\[D^\circ_\infty(x,y)=\Lambda_x+\Lambda_y-2\max\Big(\min_{z\in[x,y]}\Lambda_z,\ \min_{z\in[y,x]}\Lambda_z\Big),\qquad(10)\]
and then
\[D_\infty(x,y)=\inf_{x_0=x,x_1,\dots,x_p=y}\ \sum_{i=1}^p D^\circ_\infty(x_{i-1},x_i),\qquad(11)\]
where the infimum is over all choices of the integer $p\ge 1$ and of $x_0,x_1,\dots,x_p$ in $\mathcal{T}_\infty$ such that $x_0=x$ and $x_p=y$. Note that we have
\[D^\circ_\infty(x,y)\ge D_\infty(x,y)\ge|\Lambda_x-\Lambda_y|,\qquad(12)\]
for every $x,y\in\mathcal{T}_\infty$. Furthermore, it is immediate from our definitions that $D_\infty(0,x)=D^\circ_\infty(0,x)=\Lambda_x$ for every $x\in\mathcal{T}_\infty$. As a consequence of the continuity of the mapping $s\mapsto\Lambda_{\Theta_s}$, we have $D^\circ_\infty(x',x)\longrightarrow 0$ (hence also $D_\infty(x',x)\longrightarrow$
0) as $x'\to x$, for every $x\in\mathcal{T}_\infty$. It is not hard to verify that $D_\infty$ is a pseudo-distance on $\mathcal{T}_\infty$. We put $x\approx y$ if and only if $D_\infty(x,y)=0$, and we introduce the quotient space $\widetilde{\mathcal{P}}_\infty=\mathcal{T}_\infty/\!\approx$, which is equipped with the metric induced by $D_\infty$ and with the distinguished point which is the equivalence class of $0$. The volume measure on $\widetilde{\mathcal{P}}_\infty$ is the image of the volume measure $V$ on $\mathcal{T}_\infty$ under the canonical projection.

Theorem 3.4.
The pointed metric space $\widetilde{\mathcal{P}}_\infty$ is locally isometric to the Brownian map and scale invariant. Consequently, $\widetilde{\mathcal{P}}_\infty$ is distributed as the Brownian plane $\mathcal{P}_\infty$.

Proof. The fact that $\widetilde{\mathcal{P}}_\infty$ is scale invariant is easy from our construction. Hence the difficult part of the proof is to verify that $\widetilde{\mathcal{P}}_\infty$ is locally isometric to the Brownian map. Let us start by briefly recalling the construction of the Brownian map $\mathbf{m}_\infty$. We argue under the conditional excursion measure $\mathbb{N}_0^{(1)}=\mathbb{N}_0(\cdot\mid\sigma=1)$. Under $\mathbb{N}_0^{(1)}$, the lifetime process $(\zeta_s)_{0\le s\le 1}$ is a normalized Brownian excursion, and the tree $\mathcal{T}_\zeta$ coded by $(\zeta_s)_{0\le s\le 1}$ is the so-called CRT. As previously, $p_\zeta$ stands for the canonical projection from $[0,$
1] onto $\mathcal{T}_\zeta$. We can define intervals on $\mathcal{T}_\zeta$ in a way analogous to what we did before for $\mathcal{T}_\infty$: if $x,y\in\mathcal{T}_\zeta$, $[x,y]=\{p_\zeta(r):r\in[s,t]\}$, where $[s,t]$ is the smallest interval such that $p_\zeta(s)=x$ and $p_\zeta(t)=y$, using now the convention that the interval $[s,t]$ is defined by $[s,t]=[s,1]\cup[0,t]$ when $s>t$. Then we equip $\mathcal{T}_\zeta$ with Brownian labels by setting $\Gamma_x=\widehat{W}_s$ if $x=p_\zeta(s)$. For every $x,y\in\mathcal{T}_\zeta$, we define $D^\circ(x,y)$, resp. $D(x,y)$, by exactly the same formula as in (10), resp. (11), replacing $\Lambda$ by $\Gamma$. We have again the bound $D(x,y)\ge|\Gamma_x-\Gamma_y|$. We then observe that $D$ is a pseudo-distance on $\mathcal{T}_\zeta$, and the Brownian map $\mathbf{m}_\infty$ is the associated quotient metric space. The distinguished point of $\mathbf{m}_\infty$ is chosen as the (equivalence class of the) vertex $x_{\mathbf{m}}$ of $\mathcal{T}_\zeta$ with minimal label, and we note that $D(x_{\mathbf{m}},x)=\Gamma_x-\Gamma_{x_{\mathbf{m}}}=\Gamma_x-W_*$ for every $x\in\mathcal{T}_\zeta$.

If we replace the normalized Brownian excursion by a Brownian excursion with duration $r>$
0, that is, if we argue under $\mathbb{N}_0^{(r)}$, and perform the same construction, simple scaling arguments show that the resulting pointed metric space is distributed as $r^{1/4}\cdot\mathbf{m}_\infty$ and is thus locally isometric to $\mathbf{m}_\infty$ (both are locally isometric to the Brownian plane). Consequently, under the probability measure $\mathbb{N}_0(\cdot\mid\sigma>$
1) $=\dfrac{1}{\mathbb{N}_0(\sigma>1)}\displaystyle\int_1^\infty\frac{\mathrm{d}r}{2\sqrt{2\pi r^3}}\,\mathbb{N}_0^{(r)}(\cdot)$,

the preceding construction also yields a random pointed metric space which is locally isometric to $\mathbf{m}_\infty$. Let us write $\mathcal{M}$ for this random pointed metric space. We will argue that $\mathcal{M}$ is locally isometric to $\widetilde{\mathcal{P}}_\infty$, which will complete the proof. Some of the arguments that follow are similar to those used in [9, Proof of Proposition 4] to verify that the Brownian plane is locally isometric to the Brownian map.

We set, for every $b>0$,
\[A_b:=\int_0^\sigma\mathrm{d}s\,\mathbf{1}_{\{\tau_{(-b,\infty)}(W_s)<\infty\}},\]
where we use the notation $\tau_D(\mathrm{w})$ introduced in Subsection 2.2. Still with the notation of this subsection, the random variable $A_b$ is $\mathcal{E}_b$-measurable, and it follows that
\[H:=\inf\{b\ge 0:A_b=1\}\]
is a stopping time of the filtration $(\mathcal{E}_a)_{a>0}$. Observe that $\{H<\infty\}=\{\sigma>1\}$, $\mathbb{N}_0$ a.e. From Proposition 2.3, we get that under the probability measure $\mathbb{N}_0(\cdot\mid\sigma>1)$, conditionally on the pair $(H,\mathcal{Z}_{-H})$, the excursions of the Brownian snake outside $(-H,\infty)$ form a Poisson point process with intensity $\mathcal{Z}_{-H}\,\mathbb{N}_{-H}$ (incidentally this also implies that $\mathcal{Z}_{-H}>0$ a.e. on $\{\sigma>1\}$). Among the excursions outside $(-H,\infty)$, there is exactly one that attains the minimal value $W_*$, and conditionally on $H=h$ and $W_*=a$ (with $a<-h$), this excursion is distributed according to $\mathbb{N}_{-h}(\cdot\mid W_*=a)$.

Now compare Theorem 2.1 with the construction of $\widetilde{\mathcal{P}}_\infty$ given above to see that we can find a coupling of the Brownian snake under $\mathbb{N}_0(\cdot\mid\sigma>$
1) and of the triplet $(R,\mathcal{N}',\mathcal{N}'')$ determining the labeled tree $(\mathcal{T}_\infty,(\Lambda_x)_{x\in\mathcal{T}_\infty})$, in such a way that the following properties hold. There exists a (random) real $\delta>0$ and an isometry $I$ from the ball $B_\delta(\mathcal{T}_\zeta)$ (centered at the distinguished vertex $x_{\mathbf{m}}=p_\zeta(s_{\mathbf{m}})$) onto the ball $B_\delta(\mathcal{T}_\infty)$ (centered at $0$). This isometry preserves intervals, in the sense that if $x,y\in B_\delta(\mathcal{T}_\zeta)$, then $I([x,y]\cap B_\delta(\mathcal{T}_\zeta))=[I(x),I(y)]\cap B_\delta(\mathcal{T}_\infty)$. Furthermore, the isometry $I$ preserves labels up to a shift by $-W_*$, meaning that $\Lambda_{I(x)}=\Gamma_x-W_*$ for every $x\in B_\delta(\mathcal{T}_\zeta)$. Consequently, we have
\[D(x_{\mathbf{m}},x)=\Gamma_x-W_*=\Lambda_{I(x)}=D_\infty(0,I(x))\]
for every $x\in B_\delta(\mathcal{T}_\zeta)$.

Next we can choose $\eta>0$ small enough so that labels on $\mathcal{T}_\zeta\setminus B_\delta(\mathcal{T}_\zeta)$ are all strictly larger than $W_*+2\eta$ and labels on $\mathcal{T}_\infty\setminus B_\delta(\mathcal{T}_\infty)$ are all strictly larger than $2\eta$ (we use Lemma 3.3 here). In particular, if $x\in\mathcal{T}_\zeta$, the condition $D(x_{\mathbf{m}},x)\le\eta$ implies that $x\in B_\delta(\mathcal{T}_\zeta)$ and, if $x\in\mathcal{T}_\infty$, the condition $D_\infty(0,x)\le\eta$ implies that $x\in B_\delta(\mathcal{T}_\infty)$. We claim that
\[D(x,y)=D_\infty(I(x),I(y)),\qquad(13)\]
for every $x,y\in\mathcal{T}_\zeta$ such that $D(x_{\mathbf{m}},x)\le\eta$ and $D(x_{\mathbf{m}},y)\le\eta$. To verify this claim, first note that, if $x',y'\in\mathcal{T}_\infty$ are such that $D_\infty(0,x')=\Lambda_{x'}\le\eta$ and $D_\infty(0,y')=\Lambda_{y'}\le\eta$, we can compute $D^\circ_\infty(x',y')$ using formula (10), and in the right-hand side of this formula we may replace the interval $[x',y']$ by $[x',y']\cap B_\delta(\mathcal{T}_\infty)$ (because obviously the minimal value of $\Lambda$ on $[x',y']$ is attained on $[x',y']\cap B_\delta(\mathcal{T}_\infty)$). A similar replacement may be made in the analogous formula for $D^\circ(x,y)$ when $x,y\in\mathcal{T}_\zeta$ are such that $\Gamma_x\le W_*+2\eta$ and $\Gamma_y\le W_*+2\eta$. Using the isometry $I$, we then obtain that
\[D^\circ(x,y)=D^\circ_\infty(I(x),I(y))\qquad(14)\]
for every $x,y\in\mathcal{T}_\zeta$ such that $D(x_{\mathbf{m}},x)\le\eta$ and $D(x_{\mathbf{m}},y)\le\eta$. Then, let $x',y'\in\mathcal{T}_\infty$ be such that $\Lambda_{x'}\le\eta$ and $\Lambda_{y'}\le\eta$.
If we use formula (11) to evaluate $D_\infty(x',y')$, we may in the right-hand side of this formula restrict our attention to “intermediate” points $x_i$ whose label $\Lambda_{x_i}$ is smaller than $2\eta$ (indeed, if one of the intermediate points has a label strictly greater than $2\eta$, the sum in the right-hand side of (11) will be strictly greater than $2\eta\ge D_\infty(x',y')$, thanks to (12)). A similar observation holds if we use the analog of (11) to compute $D(x,y)$ when $x,y\in\mathcal{T}_\zeta$ are such that $D(x_{\mathbf{m}},x)\le\eta$ and $D(x_{\mathbf{m}},y)\le\eta$. Our claim (13) is a consequence of the preceding considerations and (14).

It follows from (13) that $I$ induces an isometry from the ball $B_\eta(\mathcal{M})$ onto the ball $B_\eta(\widetilde{\mathcal{P}}_\infty)$. This implies that $\mathcal{M}$ is locally isometric to $\widetilde{\mathcal{P}}_\infty$, and the proof is complete.

In view of Theorem 3.4, we may and will write $\mathcal{P}_\infty$ instead of $\widetilde{\mathcal{P}}_\infty$ for the random metric space that we constructed in the first part of this subsection. We denote the canonical projection from $\mathcal{T}_\infty$ onto $\mathcal{P}_\infty$ by $\Pi$. The fact that $D_\infty(x',x)\longrightarrow 0$ as $x'\to x$, for every fixed $x\in\mathcal{T}_\infty$, shows that $\Pi$ is continuous. The argument of the preceding proof makes it possible to transfer several known properties of the Brownian map to the space $\mathcal{P}_\infty$. First, for every $x,y\in\mathcal{T}_\infty$, we have
\[D_\infty(x,y)=0\quad\text{if and only if}\quad D^\circ_\infty(x,y)=0.\]
Indeed this property will hold for $x$ and $y$ belonging to a sufficiently small ball centered at $0$ in $\mathcal{T}_\infty$, by [21, Theorem 3.4] and the coupling argument explained in the preceding proof. The scale invariance of the Brownian plane then completes the argument. Similarly, we have the so-called “cactus bound”: for every $x,y\in\mathcal{T}_\infty$ and every continuous path $(\gamma(t))_{0\le t\le 1}$ in $\mathcal{P}_\infty$ such that $\gamma(0)=\Pi(x)$ and $\gamma(1)=\Pi(y)$,
\[\min_{0\le t\le 1}D_\infty(0,\gamma(t))\le\min_{z\in[[x,y]]}\Lambda_z,\qquad(15)\]
where $[[x,y]]$ stands for the geodesic segment between $x$ and $y$ in the tree $\mathcal{T}_\infty$.
The bound (15) follows from the analogous result for the Brownian map [22, Proposition 3.1] and the coupling argument of the preceding proof.

Since labels correspond to distances from the distinguished point, we have, for every $r>0$,
\[B_r(\mathcal{P}_\infty)=\Pi\big(\{x\in\mathcal{T}_\infty:\Lambda_x\le r\}\big).\]
Recall the definition of the hull $B^\bullet_r(\mathcal{P}_\infty)$ in Section 1. We have
\[B^\bullet_r(\mathcal{P}_\infty)=\mathcal{P}_\infty\setminus\Pi\big(\{x\in\mathcal{T}_\infty:\Lambda_y>r,\ \forall y\in[[x,\infty[[\,\}\big),\qquad(16)\]
where $[[x,\infty[[$ is the geodesic path from $x$ to $\infty$ in the tree $\mathcal{T}_\infty$. The fact that $B^\bullet_r(\mathcal{P}_\infty)$ is contained in the right-hand side of (16) is easy: if $x\in\mathcal{T}_\infty$ is such that $\Lambda_y>r$ for every $y\in[[x,\infty[[$, then $\Pi([[x,\infty[[)$ gives a continuous path going from $\Pi(x)$ to $\infty$ and staying outside the ball $B_r(\mathcal{P}_\infty)$. Conversely, suppose that $x\in\mathcal{T}_\infty$ is such that
\[\min_{y\in[[x,\infty[[}\Lambda_y\le r.\]
Then, if $(\gamma(t))_{t\ge 0}$ is any continuous path going from $\Pi(x)$ to $\infty$ in $\mathcal{P}_\infty$, the bound (15) leads to
\[\min_{t\ge 0}D_\infty(0,\gamma(t))\le\min_{y\in[[x,\infty[[}\Lambda_y\le r,\]
and it follows that $\Pi(x)\in B^\bullet_r(\mathcal{P}_\infty)$.

Write $\partial B^\bullet_r(\mathcal{P}_\infty)$ for the topological boundary of $B^\bullet_r(\mathcal{P}_\infty)$. It follows from (16) that
\[\partial B^\bullet_r(\mathcal{P}_\infty)=\Pi\big(\{x\in\mathcal{T}_\infty:\Lambda_x=r\ \text{and}\ \Lambda_y>r,\ \forall y\in\,]]x,\infty[[\,\}\big),\qquad(17)\]
with the obvious notation $]]x,\infty[[$. The latter formula motivates the definition of the (generalized) length of the boundary of $B^\bullet_r(\mathcal{P}_\infty)$. We observe that this boundary contains (the image under $\Pi$ of) a single point on the spine, corresponding to the last visit of $r$ by the process $R$,
\[L_r=\sup\{t\ge 0:R_t=r\}.\]
Any other point $x\in\mathcal{T}_\infty$ such that $\Lambda_x=r$ and $\Lambda_y>r$ for every $y\in\,]]x,\infty[[$ must be of the form $p_{\zeta^i}(s)$, for some $i\in I\cup J$ with $t_i>L_r$, and some $s\in[0,\sigma_i]$ such that the path $\omega^i_s$ hits $r$ exactly at its lifetime. For each fixed $i$ (with $t_i>L_r$), the “quantity” of such values of $s$ is measured by the total mass $\mathcal{Z}_r(\omega^i)$ of the exit measure of $\omega^i$ from $(r,\infty)$.
Here we use the same notation $\mathcal{Z}_r=\langle\mathcal{Z}^{(r,\infty)},1\rangle$ as previously. Following the preceding discussion, we define, for every $r>0$,
\[Z_r:=\int\mathcal{N}(\mathrm{d}t,\mathrm{d}\omega)\,\mathbf{1}_{\{L_r<t\}}\,\mathcal{Z}_r(\omega).\qquad(18)\]
By formula (18) and the exponential formula for Poisson measures, we have, for every $\lambda\ge 0$,
\[E\big[\exp(-\lambda Z_a)\big]=E\Big[\exp\Big(-4\int_{L_a}^\infty\mathrm{d}t\,\mathbb{N}_{R_t}\big(\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}(1-e^{-\lambda\mathcal{Z}_a})\big)\Big)\Big].\qquad(19)\]
The quantity in the right-hand side will be computed via the following two lemmas.

Lemma 4.1.
For every $x>a$ and $\lambda\ge 0$,
\[\mathbb{N}_x\big(\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}(1-e^{-\lambda\mathcal{Z}_a})\big)=\frac{3}{2}\Bigg(\Big(x-a+\Big(\frac{2\lambda}{3}+a^{-2}\Big)^{-1/2}\Big)^{-2}-x^{-2}\Bigg).\]

Proof.
We have
\[\begin{aligned}\mathbb{N}_x\big(\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}(1-e^{-\lambda\mathcal{Z}_a})\big)&=\mathbb{N}_x\big(1-\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}e^{-\lambda\mathcal{Z}_a}\big)-\mathbb{N}_x\big(1-\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}\big)\\
&=\mathbb{N}_x\big(1-\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}e^{-\lambda\mathcal{Z}_a}\big)-\frac{3}{2x^2},\end{aligned}\]
by (4). In order to compute the first term in the right-hand side, we observe that we have $\mathcal{R}\subset(a,\infty)\subset(0,\infty)$ on the event $\{\mathcal{Z}_a=0\}$, $\mathbb{N}_x$ a.e., by (5). Therefore, we can write
\[\begin{aligned}\mathbb{N}_x\big(1-\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}e^{-\lambda\mathcal{Z}_a}\big)&=\mathbb{N}_x\big(\mathbf{1}_{\{\mathcal{Z}_a>0\}}\big)-\mathbb{N}_x\big(\mathbf{1}_{\{\mathcal{Z}_a>0,\,\mathcal{R}\subset(0,\infty)\}}e^{-\lambda\mathcal{Z}_a}\big)\\
&=\mathbb{N}_x\big(\mathbf{1}_{\{\mathcal{Z}_a>0\}}\big)-\mathbb{N}_x\Big(\mathbf{1}_{\{\mathcal{Z}_a>0\}}\,e^{-\lambda\mathcal{Z}_a}\exp\Big(-\frac{3}{2a^2}\,\mathcal{Z}_a\Big)\Big)\\
&=\mathbb{N}_x\Big(1-\exp\Big(-\Big(\lambda+\frac{3}{2a^2}\Big)\mathcal{Z}_a\Big)\Big).\end{aligned}\]
In the second equality we used the special Markov property, together with formula (4), to obtain that the conditional probability of the event $\{\mathcal{R}\subset(0,\infty)\}$ given $\mathcal{Z}_a$ is $\exp(-\frac{3}{2a^2}\mathcal{Z}_a)$. The formula of the lemma follows from the preceding two displays and (6).

Lemma 4.2. For every $\alpha\in(0,a)$,
\[E\Big[\exp\Big(6\int_{L_a}^\infty\mathrm{d}t\,\big((R_t)^{-2}-(R_t-\alpha)^{-2}\big)\Big)\Big]=\Big(\frac{a-\alpha}{a}\Big)^3.\]

Proof.
By dominated convergence, we have
\[E\Big[\exp\Big(6\int_{L_a}^\infty\mathrm{d}t\,\big((R_t)^{-2}-(R_t-\alpha)^{-2}\big)\Big)\Big]=\lim_{b\uparrow\infty}\downarrow\ E\Big[\exp\Big(6\int_{L_a}^{L_b}\mathrm{d}t\,\big((R_t)^{-2}-(R_t-\alpha)^{-2}\big)\Big)\Big].\]
Let us fix $b>a$. By the time-reversal property of Bessel processes already mentioned after the statement of Theorem 2.1, the process $(\widetilde{R}_t)_{t\ge 0}$ defined by $\widetilde{R}_t=R_{(L_b-t)\vee 0}$ is a Bessel process of dimension $-5$ started from $b$. Set $T_a:=\inf\{t\ge 0:\widetilde{R}_t=a\}=L_b-L_a$. Write $(B_t)_{t\ge 0}$ for a one-dimensional Brownian motion which starts from $r$ under the probability measure $P_r$, and for every $y\in\mathbb{R}$, let $\gamma_y:=\inf\{t\ge 0:B_t=y\}$. Then,
\[E\Big[\exp\Big(6\int_{L_a}^{L_b}\mathrm{d}t\,\big((R_t)^{-2}-(R_t-\alpha)^{-2}\big)\Big)\Big]=E\Big[\exp\Big(6\int_0^{T_a}\mathrm{d}t\,\big((\widetilde{R}_t)^{-2}-(\widetilde{R}_t-\alpha)^{-2}\big)\Big)\Big]=\Big(\frac{b}{a}\Big)^3\,E_b\Big[\exp\Big(-6\int_0^{\gamma_a}\mathrm{d}t\,(B_t-\alpha)^{-2}\Big)\Big],\]
where the last equality is a consequence of the absolute continuity relation found as Lemma 1 in [25]. Next observe that
\[E_b\Big[\exp\Big(-6\int_0^{\gamma_a}\mathrm{d}t\,(B_t-\alpha)^{-2}\Big)\Big]=E_{b-\alpha}\Big[\exp\Big(-6\int_0^{\gamma_{a-\alpha}}\mathrm{d}t\,(B_t)^{-2}\Big)\Big]=\Big(\frac{a-\alpha}{b-\alpha}\Big)^3,\]
where the second equality is well known (and can again be viewed as a consequence of Lemma 1 in [25]). By combining the last two displays, we get
\[E\Big[\exp\Big(6\int_{L_a}^{L_b}\mathrm{d}t\,\big((R_t)^{-2}-(R_t-\alpha)^{-2}\big)\Big)\Big]=\Big(\frac{b}{a}\Big)^3\times\Big(\frac{a-\alpha}{b-\alpha}\Big)^3,\]
and the desired result follows by letting $b\uparrow\infty$.

We can now identify the law of $Z_a$.

Proof of Proposition 1.2 (i). We start from formula (19) and use first Lemma 4.1 and then Lemma 4.2 to obtain, for every $\lambda\ge 0$,
\[E\big[\exp(-\lambda Z_a)\big]=E\Big[\exp\Big(6\int_{L_a}^\infty\mathrm{d}t\,\Big((R_t)^{-2}-\Big(R_t-\Big(a-\Big(\frac{2\lambda}{3}+a^{-2}\Big)^{-1/2}\Big)\Big)^{-2}\Big)\Big)\Big]=\Bigg(\frac{a-\big(a-(\frac{2\lambda}{3}+a^{-2})^{-1/2}\big)}{a}\Bigg)^3=\Big(1+\frac{2\lambda a^2}{3}\Big)^{-3/2},\]
which yields the desired result. $\square$

Our next goal is to obtain the law of the whole process $(Z_a)_{a\ge 0}$, where by convention we take $Z_0=0$.
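Proposition 1.2 (i) says that $Z_a$ has Laplace transform $(1+2\lambda a^2/3)^{-3/2}$, i.e. the Gamma law with shape $3/2$ and scale $2a^2/3$. As a sanity check of this identification (ours, not part of the original argument; the values of `a` and `lam` are arbitrary), one can compare the closed form with a Simpson quadrature of that Gamma density:

```python
# Independent numerical check of Proposition 1.2 (i): the Laplace transform
# (1 + 2*lam*a**2/3)**(-3/2) is that of a Gamma(shape=3/2, scale=2*a**2/3) law.
import math

def gamma_laplace(lam, shape, scale, upper=60.0, n=60_000):
    """Simpson quadrature of int_0^upper e^{-lam*y} times the Gamma density."""
    norm = math.gamma(shape) * scale**shape
    h = upper / n
    total = 0.0
    for k in range(n + 1):
        y = k * h
        f = 0.0 if y == 0.0 else math.exp(-lam * y - y / scale) * y ** (shape - 1.0) / norm
        w = 1 if k in (0, n) else (4 if k % 2 == 1 else 2)
        total += w * f
    return total * h / 3.0

a, lam = 1.3, 0.7
closed_form = (1.0 + 2.0 * lam * a**2 / 3.0) ** (-1.5)
numeric = gamma_laplace(lam, shape=1.5, scale=2.0 * a**2 / 3.0)
```

Any other positive choices of `a` and `lam` should agree to the same tolerance; note also that the Gamma mean, shape times scale, equals $a^2$.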
To this end it is convenient to introduce a “backward” filtration $(\mathcal{G}_a)_{a\ge 0}$, which we will define after introducing some notation. If $\mathrm{w}\in\mathcal{W}$, we set
\[\tau_a(\mathrm{w}):=\inf\{t\ge 0:\mathrm{w}(t)\notin(a,\infty)\},\]
with the usual convention $\inf\emptyset=\infty$. Then, let $a\ge 0$ and $x>a$, and let $\omega=(\omega_s)_{s\ge 0}\in C(\mathbb{R}_+,\mathcal{W}_x)$ be such that $\omega_s=x$ for all $s$ large enough. For every $s\ge$
0, we define $\mathrm{tr}_a(\omega)_s\in\mathcal{W}_x$ by the formula
\[\mathrm{tr}_a(\omega)_s=\omega_{\eta^{(a)}_s(\omega)},\]
where, for every $s\ge 0$,
\[\eta^{(a)}_s(\omega):=\inf\Big\{r\ge 0:\int_0^r\mathrm{d}u\,\mathbf{1}_{\{\zeta_{(\omega_u)}\le\tau_a(\omega_u)\}}>s\Big\}.\]
From the properties of the Brownian snake, it is easy to verify that, $\mathbb{N}_x(\mathrm{d}\omega)$ a.e., $\mathrm{tr}_a(\omega)$ belongs to $C(\mathbb{R}_+,\mathcal{W}_x)$, and the paths $\mathrm{tr}_a(\omega)_s$ do not visit $(-\infty,a)$ and may visit $a$ only at their endpoint (what we have done is removing those paths that hit $a$ and survive for some positive time after hitting $a$). Note that we are using a particular instance of the time change $\eta^D_s$ introduced when defining the $\sigma$-field $\mathcal{E}^D$ in Subsection 2.2 (indeed, the $\sigma$-field $\mathcal{E}^{(a,\infty)}$ is generated by the mapping $\omega\mapsto\mathrm{tr}_a(\omega)$, up to negligible sets).

Recall formula (9) for the point measure $\mathcal{N}$. For every $a\ge$
0, we let $\mathcal{G}_a$ be the $\sigma$-field generated by the process $(R_{L_a+t})_{t\ge 0}$ and by the point measure
\[\mathcal{N}^{(a)}:=\sum_{i\in I\cup J,\ t_i>L_a}\delta_{(t_i,\,\mathrm{tr}_a(\omega^i))}.\]
In the definition of $\mathcal{N}^{(a)}$, we keep only those excursions that start from the “spine” at a time greater than $L_a$ (so that obviously their initial point is greater than $a$) and we truncate these excursions at level $a$. Note that $\mathcal{N}^{(0)}=\mathcal{N}$. From our definitions it is clear that $\mathcal{G}_a\supset\mathcal{G}_b$ if $a<b$. Furthermore, it follows from the measurability property of exit measures that $Z_a$ is $\mathcal{G}_a$-measurable, for every $a>0$ (note that $\mathcal{Z}_a(\omega^i)$ is a measurable function of $\mathrm{tr}_a(\omega^i)$). We also notice that, for every $a>$
0, the process $(R_{L_a+t})_{t\ge 0}$ is independent of $(R_t)_{0\le t\le L_a}$. This follows from last exit decompositions for diffusion processes, or in a more straightforward way it can be deduced from the time-reversal property already mentioned above.

Proposition 4.3.
Let $0<a<b$. Then, for every $\lambda\ge 0$,
\[E\big[\exp(-\lambda Z_a)\mid\mathcal{G}_b\big]=\Bigg(\frac{b}{a+(b-a)\big(1+\frac{2\lambda a^2}{3}\big)^{1/2}}\Bigg)^3\exp\Bigg(-Z_b\,\frac{3}{2}\Bigg(\Big(b-a+\Big(\frac{2\lambda}{3}+a^{-2}\Big)^{-1/2}\Big)^{-2}-b^{-2}\Bigg)\Bigg).\]
In particular, the right-hand side depends on $\mathcal{G}_b$ only through $Z_b$, so that $E[\exp(-\lambda Z_a)\mid\mathcal{G}_b]=E[\exp(-\lambda Z_a)\mid Z_b]$.

Proof.
Recall that $0<a<b$ are fixed. We write $Z_a=Y_{a,b}+\widetilde{Y}_{a,b}$, where
\[Y_{a,b}:=\sum_{i\in I\cup J,\ t_i>L_b}\mathcal{Z}_a(\omega^i),\qquad \widetilde{Y}_{a,b}:=\sum_{i\in I\cup J,\ L_a<
Proposition 4.4. Let $\rho>0$ and $x>0$. The finite-dimensional marginal distributions of $(Z_{\rho-a})_{0\le a\le\rho}$ knowing that $Z_\rho=x$ coincide with those of the continuous-state branching process with branching mechanism $\psi(u)=\sqrt{8/3}\,u^{3/2}$ started from $x$ and conditioned on extinction at time $\rho$.

Proof. Recall the notation introduced in Subsection 2.1. By comparing the right-hand side of (3) with the formula of Proposition 4.3, we immediately see that, for $0\le s<t<\rho$,
\[E\big[\exp(-\lambda Z_{\rho-t})\mid\mathcal{G}_{\rho-s}\big]=\int e^{-\lambda y}\,\pi_{s,t}(Z_{\rho-s},\mathrm{d}y).\]
Arguing inductively, we obtain that, for every $0<s_1<\dots<s_p<\rho$, the conditional distribution of $(Z_{\rho-s_1},\dots,Z_{\rho-s_p})$ knowing $\mathcal{G}_\rho$ is $\pi_{0,s_1}(Z_\rho,\mathrm{d}y_1)\,\pi_{s_1,s_2}(y_1,\mathrm{d}y_2)\cdots\pi_{s_{p-1},s_p}(y_{p-1},\mathrm{d}y_p)$. The desired result follows.

We can now complete the proof of Proposition 1.2.

Proof of Proposition 1.2 (ii). We first verify that $Z_a$ and $\widetilde{X}_{-a}$ have the same distribution, for every fixed $a>$
0. Let $\lambda>0$ and write $f(y)=e^{-\lambda y}$ to simplify notation. By the properties of the process $\widetilde{X}$, we have
\[E\big[f(\widetilde{X}_{-a})\big]=\lim_{x\uparrow\infty}E_x\big[f(X_{T-a})\,\mathbf{1}_{\{a\le T\}}\big],\]
where $T=\inf\{t\ge 0:X_t=0\}$ as previously. On the other hand, recalling the definition of the functions $\phi_t$ in Subsection 2.1,
\[E_x\big[f(X_{T-a})\,\mathbf{1}_{\{a\le T\}}\big]=\lim_{n\uparrow\infty}\sum_{k=1}^\infty E_x\Big[\mathbf{1}_{\{a+\frac{k-1}{n}<T\le a+\frac{k}{n}\}}\cdots\Big]
Now use the form of $\phi_a$ together with formula (2) (with $c=\sqrt{2/3}$) to see that the right-hand side of the last display is equal to
\[\int_0^\infty\mathrm{d}t\,\frac{3x}{a^3}\Big(\lambda+\frac{3}{2a^2}\Big)^{-3/2}\Big(\Big(\lambda+\frac{3}{2a^2}\Big)^{-1/2}+\sqrt{\tfrac{2}{3}}\,t\Big)^{-3}\exp\Big(-x\Big(\Big(\lambda+\frac{3}{2a^2}\Big)^{-1/2}+\sqrt{\tfrac{2}{3}}\,t\Big)^{-2}\Big)=\Big(1+\frac{2\lambda a^2}{3}\Big)^{-3/2}\Big(1-\exp\Big(-x\Big(\lambda+\frac{3}{2a^2}\Big)\Big)\Big).\]
We then let $x\uparrow\infty$ to get that
\[E\big[\exp(-\lambda\widetilde{X}_{-a})\big]=\Big(1+\frac{2\lambda a^2}{3}\Big)^{-3/2}=E\big[\exp(-\lambda Z_a)\big]\]
by assertion (i) of the proposition.

Knowing that $Z_a$ and $\widetilde{X}_{-a}$ have the same distribution, the proof is completed as follows. We observe that, for every $a>$
0, the law of $(\widetilde{X}_{-a+t})_{0\le t\le a}$ conditionally on $\widetilde{X}_{-a}=x$ coincides with the law of $X$ started from $x$ and conditioned on extinction at time $a$ (we leave the easy verification to the reader). By comparing with Proposition 4.4, we get the desired statement. $\square$

As a consequence of Proposition 1.2, the process $(Z_r)_{r>0}$ has a càdlàg modification, and from now on we deal only with this modification. We conclude this subsection by proving Proposition 1.1: we need to verify that our definition of the random variable $Z_r$ matches the approximation given in this proposition.

Proof of Proposition 1.1. If $x\in\mathcal{T}_\infty$ and $x$ is not on the spine, the point $\Pi(x)$ belongs to $B^\bullet_r(\mathcal{P}_\infty)^c\cap B_{r+\varepsilon}(\mathcal{P}_\infty)$ if and only if $\Lambda_x\in(r,r+\varepsilon]$ and $\Lambda_y>r$ for every $y\in[[x,\infty[[$. Recalling our notation $V$ for the volume measure on $\mathcal{T}_\infty$, we can thus write
\[|B^\bullet_r(\mathcal{P}_\infty)^c\cap B_{r+\varepsilon}(\mathcal{P}_\infty)|=\sum_{i\in I\cup J:\ t_i>L_r}V\big(\{x\in\mathcal{T}_i:\Lambda_x\le r+\varepsilon\ \text{and}\ \Lambda_y>r,\ \forall y\in[[\rho_i,x]]\}\big).\]
We will first deal with indices $i$ such that $t_i>L_{r+\varepsilon}$, and we set
\[A_\varepsilon:=\sum_{i\in I\cup J:\ t_i>L_{r+\varepsilon}}V\big(\{x\in\mathcal{T}_i:\Lambda_x\le r+\varepsilon\ \text{and}\ \Lambda_y>r,\ \forall y\in[[\rho_i,x]]\}\big)\]
to simplify notation. Recall that if $x\in\mathcal{T}_i$ and $x=p_{\zeta^i}(s)$, we have $\Lambda_x=\widehat{\omega}^i_s$ and $\{\Lambda_y:y\in[[\rho_i,x]]\}=\{\omega^i_s(t):0\le t\le\zeta^i_s\}$. An application of the special Markov property shows that the conditional distribution of $A_\varepsilon$ knowing $Z_{r+\varepsilon}$ is the law of $U_\varepsilon(Z_{r+\varepsilon})$, where $U_\varepsilon$ is a subordinator whose Lévy measure is the “law” of
\[\int_0^\sigma\mathrm{d}s\,\mathbf{1}_{\{\widehat{W}_s\le r+\varepsilon;\ W_s(t)>r,\ \forall t\in[0,\zeta_s]\}}\]
under $\mathbb{N}_{r+\varepsilon}$ (and $U_\varepsilon$ is assumed to be independent of $Z_{r+\varepsilon}$). From the first moment formula for the Brownian snake [19, Proposition IV.2], one easily derives that
\[\mathbb{N}_{r+\varepsilon}\Big(\int_0^\sigma\mathrm{d}s\,\mathbf{1}_{\{\widehat{W}_s\le r+\varepsilon;\ W_s(t)>r,\ \forall t\in[0,\zeta_s]\}}\Big)=E_{r+\varepsilon}\Big[\int_0^\infty\mathrm{d}t\,\mathbf{1}_{\{B_t\le r+\varepsilon\}}\mathbf{1}_{\{t<\gamma_r\}}\Big]=\varepsilon^2,\]
where we have used the notation of the proof of Lemma 4.2.
On the other hand, scaling arguments show that
\[(U_\varepsilon(t))_{t\ge 0}\overset{(d)}{=}(\varepsilon^4\,U(t\varepsilon^{-2}))_{t\ge 0},\]
where $U=U_1$, and the law of large numbers implies that $t^{-1}U(t)$ converges a.s. to $1$ as $t\to\infty$. Since the conditional distribution of $\varepsilon^{-2}A_\varepsilon$ knowing $Z_{r+\varepsilon}$ is the law of $\varepsilon^2\,U(Z_{r+\varepsilon}\varepsilon^{-2})$, it follows from the preceding observations that
\[\varepsilon^{-2}A_\varepsilon-Z_{r+\varepsilon}\underset{\varepsilon\to 0}{\longrightarrow}0\quad\text{in probability.}\]
Since $Z_{r+\varepsilon}$ converges to $Z_r$ as $\varepsilon\to 0$, we conclude that
\[\varepsilon^{-2}A_\varepsilon\underset{\varepsilon\to 0}{\longrightarrow}Z_r\quad\text{in probability.}\]
To complete the proof, we just have to check that
\[\varepsilon^{-2}\sum_{i\in I\cup J:\ L_r<t_i\le L_{r+\varepsilon}}V\big(\{x\in\mathcal{T}_i:\Lambda_x\le r+\varepsilon\ \text{and}\ \Lambda_y>r,\ \forall y\in[[\rho_i,x]]\}\big)\underset{\varepsilon\to 0}{\longrightarrow}0\quad\text{in probability.}\]
0, the function $u_{\lambda,\mu}(x)$ defined for every $x>0$ by
\[u_{\lambda,\mu}(x)=\mathbb{N}_x\big(1-\exp(-\lambda\mathcal{Z}_0-\mu\mathcal{Y}_0)\big).\qquad(20)\]
Note that $u_{\lambda,0}(x)$ is given by formula (6). On the other hand, the limit of $u_{\lambda,\mu}$ as $\lambda\uparrow\infty$ is
\[u_{\infty,\mu}(x):=\mathbb{N}_x\big(1-\mathbf{1}_{\{\mathcal{R}\subset(0,\infty)\}}\exp(-\mu\mathcal{Y}_0)\big)=\sqrt{\frac{\mu}{2}}\Big(3\coth^2\big((2\mu)^{1/4}x\big)-2\Big)\qquad(21)\]
by [14, Lemma 7]. The latter formula is generalized in the next lemma.

Lemma 4.5.
We have, for every $x>0$:

• if $\lambda>\sqrt{\mu/2}$,
\[u_{\lambda,\mu}(x)=\sqrt{\frac{\mu}{2}}\Bigg(3\coth^2\Bigg((2\mu)^{1/4}x+\coth^{-1}\sqrt{\frac{2}{3}+\frac{\lambda}{3}\sqrt{\frac{2}{\mu}}}\Bigg)-2\Bigg);\]

• if $\lambda<\sqrt{\mu/2}$,
\[u_{\lambda,\mu}(x)=\sqrt{\frac{\mu}{2}}\Bigg(3\tanh^2\Bigg((2\mu)^{1/4}x+\tanh^{-1}\sqrt{\frac{2}{3}+\frac{\lambda}{3}\sqrt{\frac{2}{\mu}}}\Bigg)-2\Bigg).\]

Remark. If $\lambda=\sqrt{\mu/2}$, we have simply
\[u_{\lambda,\mu}(x)=\sqrt{\frac{\mu}{2}}.\]
This can be obtained by a passage to the limit from the previous formulas, but a direct proof is also easy.
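A quick way to check the displayed solutions is to test the differential equation of the next proof numerically. The snippet below is our own sanity check; it presumes the equation $\frac12 u''=2u^2-\mu$ stated in (22), and verifies that the boundary case $u_{\infty,\mu}(x)=\sqrt{\mu/2}\,(3\coth^2((2\mu)^{1/4}x)-2)$ of (21) satisfies it, together with the behaviour $u\to\sqrt{\mu/2}$ at infinity:

```python
# Sanity check: u(x) = sqrt(mu/2) * (3*coth((2*mu)**0.25 * x)**2 - 2) should solve
# (1/2) u'' = 2 u**2 - mu on (0, infinity), with u(x) -> sqrt(mu/2) as x -> infinity.
import math

def u_inf(x, mu):
    coth = 1.0 / math.tanh((2.0 * mu) ** 0.25 * x)   # hyperbolic cotangent
    return math.sqrt(mu / 2.0) * (3.0 * coth * coth - 2.0)

mu, x, h = 0.9, 1.1, 1e-4
# central finite difference for u''
second_derivative = (u_inf(x + h, mu) - 2.0 * u_inf(x, mu) + u_inf(x - h, mu)) / h**2
ode_residual = 0.5 * second_derivative - (2.0 * u_inf(x, mu) ** 2 - mu)
limit_gap = abs(u_inf(30.0, mu) - math.sqrt(mu / 2.0))
```

The test point `(mu, x)` is arbitrary; the residual is only finite-difference accurate, hence the loose tolerance.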
Proof.
By results due to Dynkin, the function $u_{\lambda,\mu}$ solves the differential equation
\[\begin{cases}\tfrac{1}{2}u''=2u^2-\mu&\text{on }(0,\infty),\\ u(0)=\lambda.\end{cases}\qquad(22)\]
This is indeed a very special case of Theorem 3.1 in [13]. For the reader who is unfamiliar with the general theory of superprocesses, a direct proof can be given along the lines of the proof of Lemma 6 in [14]. It is also easy to verify that
\[\lim_{x\to\infty}u_{\lambda,\mu}(x)=\mathbb{N}(1-e^{-\mu\sigma})=\sqrt{\frac{\mu}{2}}.\]
The formulas of the lemma then follow by solving equation (22), which requires some tedious but straightforward calculations.

For future reference, we note that, if $\lambda>\sqrt{\mu/2}$, we have, for every $x>0$,
\[u_{\lambda,\mu}(x)=u_{\infty,\mu}(x+\theta_\mu(\lambda)),\qquad(23)\]
where the function $\theta_\mu$, which is defined on $(\sqrt{\mu/2},\infty)$ by
\[\theta_\mu(\lambda)=(2\mu)^{-1/4}\coth^{-1}\sqrt{\frac{2}{3}+\frac{\lambda}{3}\sqrt{\frac{2}{\mu}}},\]
is the functional inverse of $u_{\infty,\mu}$. Of course (23) is nothing but the flow property of solutions of (22).

Proposition 4.6. Let $a>0$. Then, for every $\mu>0$,
\[\mathbb{N}_a\big(e^{-\mu\mathcal{Y}_0}\,\big|\,W_*=0\big)=-\frac{a^3}{3}\,u'_{\infty,\mu}(a)=a^3(2\mu)^{3/4}\,\frac{\cosh\big((2\mu)^{1/4}a\big)}{\sinh^3\big((2\mu)^{1/4}a\big)}.\]

Remark.
The conditioning on $\{W_*=0\}$ may be understood as a limit as $\varepsilon\to 0$ of the conditioning on $\{-\varepsilon<W_*\le 0\}$. Equivalently, we may use Theorem 2.1, which provides an explicit description of the conditional probabilities $\mathbb{N}(\cdot\mid W_*=y)$ for every $y<0$. We also note that under the conditioning $\{W_*=0\}$ we have $\mathcal{Y}_0=\sigma$.

Proof.
We first observe that, for every $\varepsilon>0$,
\[\mathbb{N}_a(-\varepsilon<W_*\le 0)=\frac{3}{2}\Big(\frac{1}{a^2}-\frac{1}{(a+\varepsilon)^2}\Big)\underset{\varepsilon\to 0}{\sim}\frac{3\varepsilon}{a^3},\qquad(24)\]
by (4). On the other hand, $\mathbb{N}_a\big(e^{-\mu\mathcal{Y}_0}\,\mathbf{1}_{\{-\varepsilon<W_*\le 0\}}\big)$

Corollary 4.7. Let $a>0$ and $r>0$. Assume that $(X_t)_{t\ge 0}$ is a super-Brownian motion that starts from $r\delta_a$ under the probability measure $P_{r\delta_a}$. Set
\[\Sigma=\int_0^\infty\mathrm{d}t\,\langle X_t,1\rangle,\]
and write $\mathcal{R}_X$ for the range of $X$. Then, for every $\mu>0$,
\[E_{r\delta_a}\big[e^{-\mu\Sigma}\,\big|\,\min\mathcal{R}_X=0\big]=a^3(2\mu)^{3/4}\,\frac{\cosh\big((2\mu)^{1/4}a\big)}{\sinh^3\big((2\mu)^{1/4}a\big)}\,\exp\Bigg(-r\Bigg(\sqrt{\frac{\mu}{2}}\Big(3\coth^2\big((2\mu)^{1/4}a\big)-2\Big)-\frac{3}{2a^2}\Bigg)\Bigg).\]

Proof. We may assume that $(X_t)_{t\ge 0}$ is constructed from a Poisson point measure $\mathcal{N}$ with intensity $r\,\mathbb{N}_a$ via formula (7). Then, we immediately verify that
\[\Sigma=\int\mathcal{N}(\mathrm{d}\omega)\,\sigma(\omega),\]
and properties of Poisson measures lead to the formula
\[E_{r\delta_a}\big[e^{-\mu\Sigma}\,\big|\,\min\mathcal{R}_X=0\big]=\mathbb{N}_a\big(e^{-\mu\sigma}\,\big|\,\min\mathcal{R}=0\big)\,\exp\Big(-r\,\mathbb{N}_a\big((1-e^{-\mu\sigma})\,\mathbf{1}_{\{\min\mathcal{R}>0\}}\big)\Big).\]
The first term in the right-hand side is given by Proposition 4.6. As for the second term, we observe that
\[\mathbb{N}_a\big((1-e^{-\mu\sigma})\,\mathbf{1}_{\{\min\mathcal{R}>0\}}\big)=\mathbb{N}_a\big(1-e^{-\mu\sigma}\,\mathbf{1}_{\{\min\mathcal{R}>0\}}\big)-\mathbb{N}_a(\min\mathcal{R}\le 0)=u_{\infty,\mu}(a)-\frac{3}{2a^2}.\]
This completes the proof.

Proof of Theorem 1.4. The first formula of the theorem is a straightforward consequence of the second one, since we know the distribution of $Z_a$. More precisely, using Proposition 1.2 (ii), we observe that
\[E\Bigg[\exp\Bigg(-Z_a\Bigg(\sqrt{\frac{\mu}{2}}\Big(3\coth^2\big((2\mu)^{1/4}a\big)-2\Big)-\frac{3}{2a^2}\Bigg)\Bigg)\Bigg]=\Bigg(1+\frac{2a^2}{3}\Bigg(\sqrt{\frac{\mu}{2}}\Big(3\coth^2\big((2\mu)^{1/4}a\big)-2\Big)-\frac{3}{2a^2}\Bigg)\Bigg)^{-3/2}=3^{3/2}\,a^{-3}(2\mu)^{-3/4}\Big(3\coth^2\big((2\mu)^{1/4}a\big)-2\Big)^{-3/2}.\]
If we multiply this quantity by
\[a^3(2\mu)^{3/4}\,\frac{\cosh\big((2\mu)^{1/4}a\big)}{\sinh^3\big((2\mu)^{1/4}a\big)},\]
we get the desired formula for $E[\exp(-\mu|B^\bullet_a|)]$. Not surprisingly, the second formula of Theorem 1.4 is a consequence of the analogous formula in Corollary 4.7. Let us explain this.
Using our representation of the Brownian plane, and formula (16), we can write $|B^\bullet_a|$ as the sum of two independent contributions:

• The contribution of subtrees branching off the spine at a level smaller than $L_a$. Using Theorem 2.1, we see that this contribution is distributed as $\sigma$ under the conditional probability measure $\mathbb{N}_a(\cdot\mid W_*=0)$. We also note that this contribution is independent of the $\sigma$-field $\mathcal{G}_a$.

• The contribution of subtrees branching off the spine at a level greater than $L_a$. This contribution is $\mathcal{G}_a$-measurable. Furthermore, an application of the special Markov property (similar to the one in the proof of Proposition 4.3) shows that its conditional distribution given $Z_a=r$ is the law of
\[\sum_{k\in K}\sigma(\omega^{(k)}),\]
where $\sum_{k\in K}\delta_{\omega^{(k)}}$ is a Poisson measure with intensity $r\,\mathbb{N}_a(\cdot\cap\{W_*>0\})$.

The preceding discussion shows that the conditional distribution of $|B^\bullet_a|$ given $Z_a=r$ coincides with the distribution of $\Sigma$ under $P_{r\delta_a}(\cdot\mid\min\mathcal{R}_X=0)$, with the notation of Corollary 4.7. This completes the proof. $\square$

4.3 The process of hull volumes

Our goal in this subsection is to prove Theorem 1.3. In a way similar to Corollary 4.7, we consider a super-Brownian motion $(X_t)_{t\ge 0}$, and the probability measure $P_{r\delta_0}$ under which this super-Brownian motion starts from $r\delta_0$. We also introduce the associated historical process $(\mathcal{X}_t)_{t\ge 0}$. As previously, we may and will assume that $(X_t)_{t\ge 0}$ and $(\mathcal{X}_t)_{t\ge 0}$ are constructed from a Poisson measure
\[\mathcal{N}=\sum_{k\in K}\delta_{\omega^{(k)}}\]
with intensity $r\,\mathbb{N}_0$, via formulas (7) and (8). We then set, for every $a<0$,
\[Z_a=\sum_{k\in K}\mathcal{Z}_a(\omega^{(k)}),\]
and, for every $a\le 0$,
\[Y_a=\sum_{k\in K}Y_a(\omega^{(k)}),\qquad\text{where}\quad Y_a(\omega):=\int_0^\sigma\mathrm{d}s\,\mathbf{1}_{\{\tau_a(W_s(\omega))=\infty\}}.\]
We also set $Z_0=r$ by convention. In the theory of superprocesses [13], $Z_a$ corresponds to the total mass of the exit measure of the historical process $(\mathcal{X}_t)_{t\ge 0}$ from $(a,\infty)$ (for our present purposes, we do not need this interpretation).
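For what it is worth, the two factors multiplied at the end of the proof of Theorem 1.4 can be combined explicitly. The following check is our own simplification (it relies on the hyperbolic identity $3\cosh^2 y-2\sinh^2 y=\cosh^2 y+2$, and the closed form below is ours, not quoted from the text): the product depends on $(a,\mu)$ only through $y=(2\mu)^{1/4}a$ and equals $3^{3/2}\cosh(y)\,(\cosh^2(y)+2)^{-3/2}$.

```python
# Numerical confirmation that the product of the two factors from the proof of
# Theorem 1.4 collapses to 3**1.5 * cosh(y) * (cosh(y)**2 + 2)**-1.5, y = (2*mu)**0.25 * a.
import math

def factor1(a, mu):
    y = (2.0 * mu) ** 0.25 * a
    coth = math.cosh(y) / math.sinh(y)
    return 3.0**1.5 * a**-3 * (2.0 * mu) ** -0.75 * (3.0 * coth**2 - 2.0) ** -1.5

def factor2(a, mu):
    y = (2.0 * mu) ** 0.25 * a
    return a**3 * (2.0 * mu) ** 0.75 * math.cosh(y) / math.sinh(y) ** 3

def closed(a, mu):
    y = (2.0 * mu) ** 0.25 * a
    return 3.0**1.5 * math.cosh(y) * (math.cosh(y) ** 2 + 2.0) ** -1.5

pairs = [(0.5, 0.3), (1.0, 1.0), (2.0, 0.1)]
gaps = [abs(factor1(a, mu) * factor2(a, mu) - closed(a, mu)) for a, mu in pairs]
```

As $\mu\to 0$ the closed form tends to $1$, as a Laplace transform of a finite volume must.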
We also note that, for every $a\le 0$, we have
\[Y_a=\int_0^\infty\mathrm{d}t\int\mathcal{X}_t(\mathrm{d}\mathrm{w})\,\mathbf{1}_{\{\tau_a(\mathrm{w})=\infty\}},\]
and the right-hand side is the total integrated mass of those historical paths that do not hit $a$. As previously, $X=(X_t)_{t\ge 0}$ denotes a continuous-state branching process with branching mechanism $\psi(u)=\sqrt{8/3}\,u^{3/2}$ that starts from $r$ under the probability measure $P_r$. We will use the “Lévy–Khintchine representation” of $\psi$: we have
\[\psi(u)=\int\kappa(\mathrm{d}y)\,\big(e^{-uy}-1+uy\big),\]
where $\kappa(\mathrm{d}y)$ is the measure on $(0,\infty)$ given by
\[\kappa(\mathrm{d}y)=\sqrt{\frac{3}{2\pi}}\,y^{-5/2}\,\mathrm{d}y.\]

Proposition 4.8. Let $a>0$. The law under $P_{r\delta_0}$ of the pair $(Z_{-a},Y_{-a})$ coincides with the law under $P_r$ of the pair
\[\Big(X_a,\ \sum_{i:\ s_i\le a}\xi_i\,(\Delta X_{s_i})^2\Big),\qquad(25)\]
where $s_1,s_2,\dots$ is a measurable enumeration of the jumps of $X$, and $\xi_1,\xi_2,\dots$ is a sequence of i.i.d. real random variables with density
\[\frac{1}{\sqrt{2\pi x^5}}\,e^{-1/(2x)}\,\mathbf{1}_{(0,\infty)}(x),\]
which is independent of the process $(X_t)_{t\ge 0}$.

Proof. We first observe that, for every $\lambda,\mu>0$, we have
\[E_{r\delta_0}\big[\exp(-\lambda Z_{-a}-\mu Y_{-a})\big]=\exp\big(-r\,u_{\lambda,\mu}(a)\big)\]
by the exponential formula for Poisson measures. We will prove that the joint Laplace transform of the pair (25) is given by the same expression.

To this end, we fix $\mu>0$ and set $\alpha=\sqrt{2\mu}$ to simplify notation. We also set $w_a(\lambda)=u_{\lambda,\mu}(a)$ for every $a\ge 0$. As a consequence of (22) (or directly from Lemma 4.5), we have, for every $a,b\ge 0$,
\[w_{a+b}=w_a\circ w_b,\]
and $w_0(\lambda)=\lambda$. Furthermore, the derivative of $w_a(\lambda)$ at $a=0$ is easily computed from the formulas of Lemma 4.5:
\[\frac{\mathrm{d}}{\mathrm{d}a}w_a(\lambda)\Big|_{a=0}=\sqrt{\frac{2}{3}}\,\sqrt{\alpha+\lambda}\,(\alpha-2\lambda),\qquad(26)\]
where we recall that $\alpha=\sqrt{2\mu}$.

Let us consider now the Laplace transform of the pair (25).
We first observe that the Laplace transform of the variables $\xi_i$ is given by
\[E\big[e^{-\beta\xi}\big]=\big(1+\sqrt{2\beta}\big)\,e^{-\sqrt{2\beta}}\]
for every $\beta\ge 0$, and consequently $E\big[\xi\,e^{-\beta\xi}\big]=e^{-\sqrt{2\beta}}$, by the well-known formula for the Laplace transform of a positive stable random variable with parameter $1/2$. For every $\lambda>0$,
\[E_r\Big[\exp\Big(-\lambda X_a-\mu\sum_{i:\ s_i\le a}\xi_i\,(\Delta X_{s_i})^2\Big)\Big]=E_r\Big[\exp(-\lambda X_a)\prod_{0\le s\le a}\big(1+\alpha\,\Delta X_s\big)e^{-\alpha\,\Delta X_s}\Big].\]
The additivity property of continuous-state branching processes allows us to write the right-hand side in the form $\exp(-r\,v_a(\lambda))$, where the function $v_a(\lambda)$ (which of course depends also on $\alpha$) is such that $v_0(\lambda)=\lambda$. The Markov property of $X$ readily gives the semigroup property $v_{a+b}=v_a\circ v_b$ for every $a,b\ge 0$. To complete the proof of the proposition, it suffices to verify that $w_a=v_a$ for every $a\ge 0$, and to this end it will be enough to prove that
\[\frac{\mathrm{d}}{\mathrm{d}a}w_a(\lambda)\Big|_{a=0}=\frac{\mathrm{d}}{\mathrm{d}a}v_a(\lambda)\Big|_{a=0}.\qquad(27)\]
The left-hand side is given by (26). Let us compute the right-hand side. We fix $\lambda>0$. Recall that $X$ is a Feller process with values in $[0,\infty)$. The exponential function $\varphi_\lambda(x)=e^{-\lambda x}$ belongs to the domain of the generator $L$ of $X$, and
\[L\varphi_\lambda(x)=\psi(\lambda)\,x\,\varphi_\lambda(x),\]
as a straightforward consequence of the formula for the Laplace transform of $X_t$. Consequently, we have
\[e^{-\lambda X_t}=e^{-\lambda r}+M_t+\psi(\lambda)\int_0^t X_s\,e^{-\lambda X_s}\,\mathrm{d}s,\]
where $M$ is a martingale, which is clearly bounded on every compact interval. For every $t\ge 0$, set
\[V_t:=\prod_{0\le s\le t}\big(1+\alpha\,\Delta X_s\big)e^{-\alpha\,\Delta X_s},\]
and note that $V$ is a nonnegative nonincreasing process, which is bounded by one. By applying the integration by parts formula, we have
\[V_t\,e^{-\lambda X_t}=e^{-\lambda r}+\int_0^t V_{s-}\,\mathrm{d}M_s+\psi(\lambda)\int_0^t V_s\,X_s\,e^{-\lambda X_s}\,\mathrm{d}s+\int_0^t e^{-\lambda X_s}\,\mathrm{d}V_s.\qquad(28)\]
The martingale term $\int_0^t V_{s-}\,\mathrm{d}M_s$ has zero expectation. Let us evaluate the expected value of the last term:
\[\int_0^t e^{-\lambda X_s}\,\mathrm{d}V_s=\sum_{0\le s\le t}e^{-\lambda X_s}\,\Delta V_s=\sum_{0\le s\le t}e^{-\lambda X_{s-}}\,V_{s-}\times e^{-\lambda\,\Delta X_s}\Big(\big(1+\alpha\,\Delta X_s\big)e^{-\alpha\,\Delta X_s}-1\Big).\]
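The two transforms used at the start of this computation can be verified directly. The snippet below is our own check, assuming the density $(2\pi x^5)^{-1/2}e^{-1/(2x)}$ displayed in Proposition 4.8; it computes $E[e^{-\beta\xi}]$ and $E[\xi e^{-\beta\xi}]$ by midpoint quadrature:

```python
# Quadrature check of the two Laplace-transform identities for xi with density
# (2*pi*x**5)**-0.5 * exp(-1/(2*x)) on (0, infinity):
#   E[exp(-beta*xi)]      = (1 + sqrt(2*beta)) * exp(-sqrt(2*beta))
#   E[xi * exp(-beta*xi)] = exp(-sqrt(2*beta))
import math

def density(x):
    return (2.0 * math.pi * x**5) ** -0.5 * math.exp(-1.0 / (2.0 * x))

def laplace_moments(beta, upper=80.0, n=160_000):
    h = upper / n
    m0 = m1 = 0.0
    for k in range(n):
        x = (k + 0.5) * h          # midpoint rule
        w = density(x) * math.exp(-beta * x) * h
        m0 += w
        m1 += x * w
    return m0, m1

beta = 0.8
s = math.sqrt(2.0 * beta)
m0, m1 = laplace_moments(beta)
```

Differentiating the first identity in $\beta$ reproduces the second, which is the stable-$1/2$ transform $e^{-\sqrt{2\beta}}$; the value `beta = 0.8` is an arbitrary test point.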
We note that the dual predictable projection of the random measure
$$\sum_{s\ge 0,\ \Delta X_s>0}\delta_{(s,\Delta X_s)}(\mathrm{d}u,\mathrm{d}x)$$
is the measure $X_u\,\mathrm{d}u\,\kappa(\mathrm{d}x)$, where we recall that $\kappa(\mathrm{d}x)$ is the "Lévy measure" associated with $X$ (a simple way to get this is to use the Lamperti transformation to represent $X$ as a time-change of the Lévy process with Lévy measure $\kappa$). It follows that
$$E\Big[\int_0^t e^{-\lambda X_s}\,\mathrm{d}V_s\Big]=E\Big[\int_0^t X_s\,V_s\,e^{-\lambda X_s}\,\mathrm{d}s\Big]\times\int\kappa(\mathrm{d}x)\,e^{-\lambda x}\Big((1+\alpha x)\,e^{-\alpha x}-1\Big).$$
By taking expectations in (28), we thus get
$$e^{-r\,v_t(\lambda)}-e^{-r\,v_0(\lambda)}=E\Big[\int_0^t X_s\,V_s\,e^{-\lambda X_s}\,\mathrm{d}s\Big]\times\Big(\psi(\lambda)+\int\kappa(\mathrm{d}x)\,e^{-\lambda x}\big((1+\alpha x)\,e^{-\alpha x}-1\big)\Big).$$
Note that
$$\frac{1}{t}\,E\Big[\int_0^t X_s\,V_s\,e^{-\lambda X_s}\,\mathrm{d}s\Big]\underset{t\downarrow 0}{\longrightarrow}\ r\,e^{-\lambda r},$$
and thus it immediately follows from the preceding display that
$$\frac{\mathrm{d}}{\mathrm{d}a}\,v_a(\lambda)\Big|_{a=0}=-\psi(\lambda)-\int\kappa(\mathrm{d}x)\,e^{-\lambda x}\big((1+\alpha x)\,e^{-\alpha x}-1\big)=-\int\kappa(\mathrm{d}x)\Big((1+\alpha x)\,e^{-(\alpha+\lambda)x}-1+\lambda x\Big).$$
From the expression of $\kappa$, straightforward calculations lead to the formula
$$\int\kappa(\mathrm{d}x)\Big((1+\alpha x)\,e^{-(\alpha+\lambda)x}-1+\lambda x\Big)=-\sqrt{2/3}\;\sqrt{\alpha+\lambda}\;(\alpha-2\lambda),$$
and our claim (27) follows, recalling (26). This completes the proof. $\square$

With the notation introduced in Proposition 4.8, set for every $a\ge 0$,
$$Y_a:=\sum_{i:\,s_i<a}\xi_i\,(\Delta X_{s_i})^2.$$
Corollary 4.9.
The law of the process $(Z_{-a},Y_{-a})_{a\ge 0}$ under $\mathbb{P}_{r\delta_0}$ coincides with the law of $(X_a,Y_a)_{a\ge 0}$ under $P_r$.

Proof. An application of the special Markov property shows that the process $(Z_{-a},Y_{-a})_{a\ge 0}$ is (time-homogeneous) Markov under $\mathbb{P}_{r\delta_0}$, with transition kernels given by
$$E_{r\delta_0}\big[g(Z_{-a-b},Y_{-a-b})\mid (Z_{-a},Y_{-a})\big]=\Phi_b(Z_{-a},Y_{-a}),$$
where $\Phi_b(z,y)=E_{z\delta_0}\big[g(Z_{-b},\,y+Y_{-b})\big]$. On the other hand, the Markov property of the continuous-state branching process $X$ also shows that the process $(X_a,Y_a)_{a\ge 0}$ is Markov under $P_r$, and
$$E_r\big[g(X_{a+b},Y_{a+b})\mid (X_a,Y_a)\big]=\Psi_b(X_a,Y_a),$$
where $\Psi_b(z,y)=E_z\big[g(X_b,\,y+Y_b)\big]$. By Proposition 4.8, we have $\Phi_b=\Psi_b$, and the desired result follows. $\square$

Remark. We chose to put a strict inequality $s_i<a$ in the definition of $Y_a$ so that the process $Y$ has left-continuous paths, which is also the case for $Y_{-a}$. On the other hand, both $Z_{-a}$ and $X_a$ have right-continuous paths.

Proof of Theorem 1.3. Fix $\rho>0$, and let $U$ follow a Gamma distribution with parameter $3/2$ and mean $\rho^2$, so that $U$ has the same distribution as $Z_\rho$, by Proposition 1.2 (i). Suppose that, conditionally given $U$, $\mathcal{N}$ is a Poisson point measure with intensity $U\,\mathbb{N}_0$ under the probability measure $P$. We can use formulas (7) and (8) to define a super-Brownian motion $(X_t)_{t\ge 0}$ started from $U\delta_0$ and the associated historical superprocess. We then define $(Z_a,Y_a)_{a\le 0}$ as in the beginning of this subsection. We also write $S$ for the extinction time of $X$.

The arguments used in the proof of Theorem 1.4, based on our representation of the Brownian plane and formula (16), show that the process $(Z_{\rho-a},\,|B^\bullet_\rho|-|B^\bullet_{\rho-a}|)_{0\le a\le\rho}$ has the same distribution as $(Z_{-a},Y_{-a})_{0\le a\le\rho}$ under $P(\,\cdot\mid S=\rho)$. For a precise justification, note that $B^\bullet_\rho\setminus B^\bullet_{\rho-a}$ is the image under $\Pi$ of those $x\in\mathcal{T}_\infty$ such that $\Lambda_y>\rho-a$ for every $y\in[[x,\infty[[$ and there exists $z\in[[x,\infty[[$ such that $\Lambda_z\le\rho$.
If $x$ satisfies these properties, either $x$ belongs to one of the subtrees branching off the spine at a level belonging to $]L_{\rho-a},L_\rho]$, or $x$ belongs to one of the subtrees branching off the spine at a level greater than $L_\rho$ but the label of one of the ancestors of $x$ in this subtree is less than or equal to $\rho$ (and, in both cases, the labels of the ancestors of $x$ in the subtree containing $x$ remain strictly greater than $\rho-a$). The volume of the set of points $x$ corresponding to the second case is handled via the special Markov property for the domain $(\rho,\infty)$, in a way similar to the end of the proof of Theorem 1.4. We obtain that the sum of the two contributions leads to the quantity $Y_{-a}$ for a super-Brownian motion starting from $Z_\rho\,\delta_0$ and conditioned on extinction at time $\rho$.

Write $P^{(U)}$ for a probability measure under which the continuous-state branching process $X$ starts from $U$ (and the process $Y$ is constructed by the formula preceding Corollary 4.9), and let $T$ be the extinction time of $X$ as previously. Also set, for every $a\ge 0$,
$$\widetilde{Y}_a=\sum_{i:\,\widetilde{s}_i\ge -a}\xi_i\,(\Delta\widetilde{X}_{\widetilde{s}_i})^2,$$
where $\widetilde{s}_1,\widetilde{s}_2,\ldots$ is a measurable enumeration of the jumps of $\widetilde{X}$, and the random variables $\xi_i$ are as in Proposition 4.8 and are supposed to be independent of $\widetilde{X}$.

From Corollary 4.9, we obtain that the law of $(Z_{-a},Y_{-a})_{0\le a\le\rho}$ under $P(\,\cdot\mid S=\rho)$ coincides with the law of $(X_a,Y_a)_{0\le a\le\rho}$ under $P^{(U)}(\,\cdot\mid T=\rho)$. However, using the final observation of the proof of Proposition 1.2 (ii), the latter law is also the law of $(\widetilde{X}_{-\rho+a},\,\widetilde{Y}_\rho-\widetilde{Y}_{\rho-a})_{0\le a\le\rho}$. Summarizing, we have obtained the identity in distribution
$$(Z_{\rho-a},\,|B^\bullet_\rho|-|B^\bullet_{\rho-a}|)_{0\le a\le\rho}\ \overset{(d)}{=}\ (\widetilde{X}_{-\rho+a},\,\widetilde{Y}_\rho-\widetilde{Y}_{\rho-a})_{0\le a\le\rho}.$$
This immediately gives
$$(Z_a,\,|B^\bullet_a|)_{0\le a\le\rho}\ \overset{(d)}{=}\ (\widetilde{X}_{-a},\,\widetilde{Y}_a)_{0\le a\le\rho},$$
from which the statement of Theorem 1.3 follows. $\square$

We will rely on the Chassaing-Durhuus construction of the UIPQ [7].
The fact that this construction is equivalent to the more usual construction involving local limits of finite quadrangulations can be found in [29]. The Chassaing-Durhuus construction is based on a random infinite labeled discrete ordered tree, which we denote here by $T$. In a way very analogous to the tree $\mathcal{T}_\infty$ considered above, the tree $T$ consists of a spine, which is a discrete half-line, and, for every vertex of the spine, of two finite subtrees grafted at this vertex, respectively to the left and to the right of the spine (if a grafted subtree consists only of its root, this means that we add nothing). The root of $T$ is the first vertex of the spine. The set of all corners of $T$ is equipped with a total order induced by the clockwise contour exploration of the tree. Each vertex $v$ of $T$ is assigned a positive integer label $\ell_v$, in such a way that the label of the root is $1$ and the labels of two neighboring vertices differ by at most $1$ in absolute value. We will not need the exact distribution of the tree $T$: see e.g. [26, Section 2.3].

Let us now explain the construction of the UIPQ from the tree $T$. First, the vertex set of $Q_\infty$ is the union of the vertex set $V(T)$ of $T$ and of an extra vertex denoted by $\partial$. We then generate the edges of $Q_\infty$ by the following device, which is analogous to the Schaeffer bijection between finite (rooted) quadrangulations and well-labeled trees [8]. All corners of $T$ with label $1$ are linked to $\partial$ by an edge of $Q_\infty$. Any other corner $c$ is linked by an edge of $Q_\infty$ to the last corner before $c$ (in the clockwise contour order) with strictly smaller label. The resulting collection of edges forms an infinite quadrangulation of the plane, which is the UIPQ $Q_\infty$ (see Fig. 2, and [7] for more details). It easily follows from the construction that the graph distance (in $Q_\infty$) between $\partial$ and another vertex of $Q_\infty$ is just the label of this vertex in $T$.

Let us introduce the left and right contour processes.
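To make the corner-matching rule concrete, here is a toy illustration (not from the paper): given the labels read along a finite initial segment of the clockwise contour, each corner with label $1$ is linked to $\partial$ and every other corner is linked to the last preceding corner with strictly smaller label. The helper name is hypothetical and the example only exercises the rule on a finite label sequence.

```python
def match_corners(labels):
    """Toy version of the edge-generation rule on a finite contour segment.

    labels[i] is the label of the i-th corner in clockwise contour order;
    a corner with label 1 is linked to the extra vertex "d" (standing for
    the distinguished vertex written as "del" in the text), and any other
    corner is linked to the last corner before it with strictly smaller label.
    """
    edges = []
    for i, lab in enumerate(labels):
        if lab == 1:
            edges.append((i, "d"))
        else:
            j = max(k for k in range(i) if labels[k] < lab)  # last smaller corner
            edges.append((i, j))
    return edges

# example: labels along six consecutive corners, starting from a corner of label 1
toy_edges = match_corners([1, 2, 3, 2, 2, 1])
```

In the example, corners 0 and 5 (label 1) connect to $\partial$, corner 2 (label 3) connects to corner 1 (label 2), and corners 3 and 4 both connect back to corner 0.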
Starting from the root of $T$, we list all corners of the left side of $T$ in clockwise contour order, and, for every $k\ge 0$, we denote the vertex corresponding to the $k$-th corner in this enumeration by $v_k$ (in such a way that $v_0$ is the root of $T$). We then write $C^{(L)}_k$ for the generation (distance from the root in $T$) of $v_k$, and $V^{(L)}_k=\ell_{v_k}$. Note that $|C^{(L)}_{k+1}-C^{(L)}_k|=1$ for every $k\ge 0$.

Figure 2: The Chassaing-Durhuus construction. The tree $T$ is represented in thin lines. A few of the vertices $v_k$, $\bar v_k$ have been indicated together with their labels in bold figures. The edges of $Q_\infty$ incident to faces have been drawn in thick lines.

We define similarly $C^{(R)}_k$ and $V^{(R)}_k$ using the exploration in counterclockwise order of the right side of the tree, and the analog of the sequence $(v_k)_{k\ge 0}$ is denoted by $(\bar v_k)_{k\ge 0}$. By linear interpolation, we may view all four processes $C^{(L)},V^{(L)},C^{(R)},V^{(R)}$ as indexed by $\mathbb{R}_+$. A key ingredient of the following proof is the convergence [26, Theorem 5],
$$\Big(\frac{1}{k^2}\,C^{(L)}_{k^4 s},\ \sqrt{\tfrac{3}{2}}\,\frac{1}{k}\,V^{(L)}_{k^4 s},\ \frac{1}{k^2}\,C^{(R)}_{k^4 s},\ \sqrt{\tfrac{3}{2}}\,\frac{1}{k}\,V^{(R)}_{k^4 s}\Big)_{s\ge 0}\ \underset{k\to\infty}{\longrightarrow}\ \Big(h(\widehat\Theta_s),\ \Lambda_{\widehat\Theta_s},\ h(\check\Theta_s),\ \Lambda_{\check\Theta_s}\Big)_{s\ge 0}, \qquad(29)$$
where we recall that $\widehat\Theta_s$ and $\check\Theta_s$ are the exploration processes of respectively the left and the right side of $\mathcal{T}_\infty$ (see subsection 3.2), and we use the notation $h(\widehat\Theta_s)=d_\infty(0,\widehat\Theta_s)$ for the "height" of $\widehat\Theta_s$ in $\mathcal{T}_\infty$. The convergence (29) holds in the sense of weak convergence of the laws on the space $C(\mathbb{R}_+,\mathbb{R}^4)$. We also mention another convergence in distribution concerning labels on the spine. Write $u_n$ for the $n$-th vertex on the spine of $T$. Then,
$$\Big(\sqrt{\tfrac{3}{2}}\,\frac{1}{k}\,\ell_{u_{\lfloor k^2 s\rfloor}}\Big)_{s\ge 0}\ \underset{k\to\infty}{\longrightarrow}\ (R_s)_{s\ge 0}, \qquad(30)$$
and this convergence in distribution holds jointly with (29). The convergence (30) can be found in [26, Proposition 1].
The fact that this convergence holds jointly with (29) is clear from the proof of Theorem 5 in [26]. According to [26, Lemma 3], we have, for every $A>0$,
$$\lim_{K\to\infty}\Big(\sup_{k\ge 1}\,P\Big(\inf_{t\ge K}\frac{1}{k}\,V^{(L)}_{k^4 t}<A\Big)\Big)=0, \qquad(31)$$
and by symmetry the analogous statement with $V^{(L)}$ replaced by $V^{(R)}$ also holds. Finally, we note that Lemma 3.3 implies
$$\lim_{s\uparrow\infty}\Lambda_{\widehat\Theta_s}=\lim_{s\uparrow\infty}\Lambda_{\check\Theta_s}=+\infty,\quad\text{a.s.} \qquad(32)$$

Figure 3: A representation of the UIPQ near the vertex $\partial$. The shaded part is the ball $B_k(Q_\infty)$. The hull $B^\bullet_k(Q_\infty)$, whose boundary is in thick lines on the figure, is obtained by filling in the holes of $B_k(Q_\infty)$.

For every integer $k\ge 1$, define the ball $B_k(Q_\infty)$ as the union of all faces of $Q_\infty$ that are incident to (at least) one vertex at distance smaller than or equal to $k-1$ from $\partial$. The hull $B^\bullet_k(Q_\infty)$ is then obtained by adding to $B_k(Q_\infty)$ the bounded components of the complement of $B_k(Q_\infty)$ (see Fig. 3). Define the "volume" $|B^\bullet_k(Q_\infty)|$ as the number of faces contained in $B^\bullet_k(Q_\infty)$.

Theorem 5.1. We have
$$\big(k^{-4}\,|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)|\big)_{r>0}\ \underset{k\to\infty}{\longrightarrow}\ \Big(\tfrac{1}{2}\,|B^\bullet_{r\sqrt{3/2}}(\mathcal{P}_\infty)|\Big)_{r>0},$$
in the sense of weak convergence of finite-dimensional marginals.

Remarks. (i) In the companion paper [10], we use the peeling process to give a different approach to the convergence of the sequence of processes $(k^{-4}|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)|)_{r>0}$. The limit then appears in the form given in Theorem 1.3.

(ii) By scaling, the processes $(\frac{1}{2}|B^\bullet_{r\sqrt{3/2}}(\mathcal{P}_\infty)|)_{r>0}$ and $(|B^\bullet_{(9/8)^{1/4}r}(\mathcal{P}_\infty)|)_{r>0}$ have the same distribution, and we recover the "usual" constant $(9/8)^{1/4}$ (see e.g. [8]). The reason for stating the theorem in the form above is the fact that the convergence then holds jointly with (29) or (30), as the proof will show.

Proof. Instead of dealing with $|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)|$, we will consider the quantity $\|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)\|$ defined as the number of vertices that are incident to a face of $B^\bullet_{\lfloor kr\rfloor}(Q_\infty)$.
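The equality of the two normalizations in Remark (ii) rests on the scale invariance of the Brownian plane, under which $|B^\bullet_{cr}(\mathcal{P}_\infty)|$ has the same law as $c^4|B^\bullet_r(\mathcal{P}_\infty)|$; the constants then match because $\frac{1}{2}(3/2)^2 = 9/8 = ((9/8)^{1/4})^4$. A quick exact-arithmetic check of this elementary computation (not part of the paper):

```python
from fractions import Fraction

# scale invariance: |B*_{c r}| has the law of c^4 |B*_r|, so the process
# (1/2)|B*_{r sqrt(3/2)}| has the law of (1/2)(3/2)^2 |B*_r| = (9/8)|B*_r|,
# which is in turn the law of |B*_{(9/8)^{1/4} r}|.
scale_factor = Fraction(1, 2) * Fraction(3, 2) ** 2  # (1/2) c^4 with c = sqrt(3/2)
```

So `scale_factor` equals $9/8$ exactly, the fourth power of the "usual" constant $(9/8)^{1/4}$.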
It is an easy exercise to verify that the desired convergence will follow if we can prove that the statement holds when $|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)|$ is replaced by $\|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)\|$ (the underlying idea is the fact that a finite quadrangulation with $n$ faces has $n+2$ vertices, and we also observe that, for every fixed $r>0$, the size of the boundary of $B^\bullet_{\lfloor kr\rfloor}(Q_\infty)$ is negligible with respect to $k^4$; this is clear if we know that the sequence of processes $(k^{-4}\|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)\|)_{r>0}$ converges to a limit which is continuous in probability).

We will verify that, if $r>0$ is fixed, $2k^{-4}\|B^\bullet_{\lfloor kr\rfloor}(Q_\infty)\|$ converges in distribution to $|B^\bullet_{r\sqrt{3/2}}(\mathcal{P}_\infty)|$. It will be clear that our method extends to a joint convergence in distribution if we consider a finite number of values of $r$, yielding the desired statement. To simplify the presentation, we take $r=1$ in what follows. So our goal is to show that
$$2k^{-4}\,\|B^\bullet_k(Q_\infty)\|\ \overset{(d)}{\underset{k\to\infty}{\longrightarrow}}\ |B^\bullet_{\sqrt{3/2}}(\mathcal{P}_\infty)|. \qquad(33)$$
If $u\in V(T)$, write $\mathrm{Geo}(u\to\infty)$ for the geodesic path from $u$ to $\infty$ in $T$, and set
$$m(u):=\min\{\ell_v:\ v\in\mathrm{Geo}(u\to\infty)\}.$$
Let $k\ge 1$. We note that:

(i) The condition $m(u)\ge k+3$ ensures that $u\notin B^\bullet_k(Q_\infty)$. Indeed, from the way edges of $Q_\infty$ are generated, it is easy to construct a path of $Q_\infty$ from $u$ to $\infty$ that visits only vertices at distance (at least) $m(u)-1$ from $\partial$. If $m(u)-1\ge k+2$, none of these vertices can be incident to a face of $B_k(Q_\infty)$.

(ii) If $m(u)\le k$, then $u\in B^\bullet_k(Q_\infty)$. This is an immediate consequence of the discrete "cactus bound" (see [11, Proposition 4.3], in a slightly different setting), which implies that any path of $Q_\infty$ going from $u$ to $\infty$ visits a vertex at distance less than or equal to $m(u)$ from $\partial$.

Recall our definition of the "contour sequence" $(v_k)_{k\ge 0}$ of the left side of the tree. We now extend the definition of $v_k$ to nonnegative real indices: if $k\ge 1$ and $k-1<s<k$, we take $v_s=v_k$ if $C^{(L)}_k=C^{(L)}_{k-1}+1$, and $v_s=v_{k-1}$ if $C^{(L)}_k=C^{(L)}_{k-1}-1$.
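The quantity $m(u)$ is just a running minimum of labels along the path from $u$ toward infinity. On a finite portion of the tree, encoded by (hypothetical) parent pointers oriented toward infinity, it can be computed as follows; this is a toy sketch, not part of the paper, in which the infinite geodesic is truncated at the represented spine segment.

```python
def m_value(u, parent, label):
    """Minimum label along the path from u toward infinity.

    `parent` maps each vertex to the next vertex on Geo(u -> infinity);
    vertices without an entry play the role of the truncation point of the
    (in reality infinite) spine.
    """
    best = label[u]
    while u in parent:
        u = parent[u]
        best = min(best, label[u])
    return best

# toy tree: s0 -> s1 -> s2 is a spine segment oriented toward infinity,
# and the vertex a hangs off the spine vertex s1
parent = {"a": "s1", "s0": "s1", "s1": "s2"}
label = {"s0": 1, "s1": 2, "s2": 3, "a": 4}
```

For instance, `m_value("a", parent, label)` scans the labels $4, 2, 3$ along the path from `a` and returns their minimum, $2$.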
This definition is motivated by the fact that we have $\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{v_s=u\}}=2$ for every vertex $u$ in the left side of $T$ (not on the spine), and the same integral is equal to $1$ if $u$ is on the spine and different from the root. We extend similarly the definition of $\bar v_k$.

We next observe that, for every fixed $s>0$,
$$\frac{1}{k}\,m(v_{k^4 s})\ \overset{(d)}{\underset{k\to\infty}{\longrightarrow}}\ \sqrt{\tfrac{2}{3}}\,\min\{\Lambda_y:\ y\in[[\widehat\Theta_s,\infty[[\,\}, \qquad(34)$$
and this convergence holds jointly with (29). The convergence (34) is essentially a consequence of (29) and (30). Let us only sketch the argument. A first technical ingredient is to replace $m(v_{k^4 s})$ by a truncated version obtained by replacing $\mathrm{Geo}(u\to\infty)$ in the definition of $m(u)$ by the geodesic from $u$ to the vertex $u_{\lfloor Ak^2\rfloor}$, for some large integer constant $A$. One then proves, using (29) and (30), that the analog of (34) holds for this truncated version, with a limit equal to $\sqrt{2/3}\,\min\{\Lambda_y:\ y\in[[\widehat\Theta_s,A]]\}$ (a convenient way is to use a minor variant of the homeomorphism theorem of [28] to see that (29) implies also the convergence of the associated "snakes", which is what we need here). Finally, the fact that the convergence of truncated versions suffices to get (34) is easy using (31) and (32).

If we consider a finite number of values of $s$, the corresponding convergences (34) hold jointly (and jointly with (29)). Via the method of moments, it easily follows that, for every $A>0$ and every $a>0$,
$$\int_0^A\mathrm{d}s\,\mathbf{1}_{\{m(v_{k^4 s})\le ak\}}\ \overset{(d)}{\underset{k\to\infty}{\longrightarrow}}\ \int_0^A\mathrm{d}s\,\mathbf{1}_{\{\min\{\Lambda_y:\,y\in[[\widehat\Theta_s,\infty[[\,\}\le\sqrt{3/2}\,a\}}.$$
Thanks to (31) and (32), we can replace $A$ by $\infty$ and obtain
$$\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{m(v_{k^4 s})\le ak\}}\ \overset{(d)}{\underset{k\to\infty}{\longrightarrow}}\ \int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{\min\{\Lambda_y:\,y\in[[\widehat\Theta_s,\infty[[\,\}\le\sqrt{3/2}\,a\}}.$$
By combining this convergence with the analogous result for the right side of the tree, we get
$$\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{m(v_{k^4 s})\le ak\}}+\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{m(\bar v_{k^4 s})\le ak\}}\ \overset{(d)}{\underset{k\to\infty}{\longrightarrow}}\ \int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{\min\{\Lambda_y:\,y\in[[\widehat\Theta_s,\infty[[\,\}\le\sqrt{3/2}\,a\}}+\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{\min\{\Lambda_y:\,y\in[[\check\Theta_s,\infty[[\,\}\le\sqrt{3/2}\,a\}}. \qquad(35)$$
By (16), the limit in the previous display is equal to $|B^\bullet_{\sqrt{3/2}\,a}(\mathcal{P}_\infty)|$.
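The value $2$ of the integral $\int\mathrm{d}s\,\mathbf{1}_{\{v_s=u\}}$ for vertices off the spine can be checked on any finite plane tree: each unit contour interval is assigned to the deeper of its two endpoint vertices, so every vertex other than the root receives total time $2$, once for each traversal of its parent edge (on the spine of the infinite tree, one of the two traversals is missing, whence the value $1$). A toy verification on a finite tree (not from the paper; the names are illustrative):

```python
def contour_vertex_times(children, root):
    """Total contour time assigned to each vertex by the rule v_s = deeper endpoint.

    `children` maps a vertex to its ordered list of children. The contour of a
    finite plane tree traverses each edge twice, and each unit interval
    (k-1, k) is assigned to the deeper of the two vertices v_{k-1}, v_k.
    """
    # depth of every vertex
    depth, stack = {root: 0}, [root]
    while stack:
        u = stack.pop()
        for c in children.get(u, []):
            depth[c] = depth[u] + 1
            stack.append(c)
    # contour sequence of vertices, corner by corner
    seq = [root]
    def explore(u):
        for c in children.get(u, []):
            seq.append(c)
            explore(c)
            seq.append(u)
    explore(root)
    # assign each unit interval to the deeper endpoint
    times = {u: 0 for u in depth}
    for k in range(1, len(seq)):
        a, b = seq[k - 1], seq[k]
        times[a if depth[a] > depth[b] else b] += 1
    return times

# tree with root 0, children 1 and 2, and 3 a child of 1
times = contour_vertex_times({0: [1, 2], 1: [3]}, 0)
```

Here the contour sequence is $0,1,3,1,0,2,0$, and every non-root vertex indeed receives total time $2$ while the root receives $0$.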
On the other hand, previous remarks show that, if $ak\ge 1$,
$$\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{m(v_{k^4 s})\le ak\}}+\int_0^\infty\mathrm{d}s\,\mathbf{1}_{\{m(\bar v_{k^4 s})\le ak\}}=2k^{-4}\,\Big(\#\{u\in V(T):\ m(u)\le ak\}-1\Big).$$
Furthermore, it follows from properties (i) and (ii) stated above that
$$\#\{u\in V(T):\ m(u)\le k\}\ \le\ \|B^\bullet_k(Q_\infty)\|\ \le\ \#\{u\in V(T):\ m(u)\le k+2\}.$$
Our claim (33) now follows from the convergence (35) and the preceding observations, together with the fact that the mapping $r\mapsto|B^\bullet_r(\mathcal{P}_\infty)|$ is continuous in probability. This completes the proof. $\square$

Let us conclude with a comment. It would seem more direct to derive Theorem 5.1 from the fact that the Brownian plane is the Gromov-Hausdorff scaling limit of the UIPQ [9, Theorem 2]. We refrained from doing so because the local Gromov-Hausdorff convergence does not give enough information to handle volumes of balls or hulls. It would have been necessary to establish a type of Gromov-Hausdorff-Prokhorov convergence in our setting, in the spirit of the work of Greven, Pfaffelhuber and Winter [15], who however consider the case of metric spaces equipped with a probability measure. This would require a number of additional technicalities, and for this reason we preferred to rely on the results of [26].

References

[1] Abraham, C. Rescaled bipartite planar maps converge to the Brownian map. Preprint (2013), available at arXiv:1312.5959

[2] Addario-Berry, L. Growing random 3-connected maps. Electron. Comm. Probab. 19, no. 54, 1-12 (2014)

[3] Addario-Berry, L., Albenque, M. The scaling limit of random simple triangulations and random simple quadrangulations. Preprint (2013), available at arXiv:1306.5227

[4] Angel, O. Growth and percolation on the uniform infinite planar triangulation. Geom. Funct. Anal. 13, 935-974 (2003)

[5] Angel, O., Schramm, O. Uniform infinite planar triangulations. Comm. Math. Phys. 241, 191-213 (2003)

[6] Bettinelli, J., Jacob, E., Miermont, G. The scaling limit of uniform random plane maps, via the Ambjørn-Budd bijection.
Preprint (2013), available at arXiv:1312.5842

[7] Chassaing, P., Durhuus, B. Local limit of labeled trees and expected volume growth in a random quadrangulation. Ann. Probab. 34, 879-917 (2006)

[8] Chassaing, P., Schaeffer, G. Random planar lattices and integrated super-Brownian excursion. Probab. Th. Rel. Fields 128, 161-212 (2004)

[9] Curien, N., Le Gall, J.-F. The Brownian plane. J. Theoret. Probab., to appear, available at arXiv:1204.5921

[10] Curien, N., Le Gall, J.-F. Asymptotics for the peeling process. In preparation.

[11] Curien, N., Le Gall, J.-F., Miermont, G. The Brownian cactus I. Scaling limits of discrete cactuses. Annales Inst. H. Poincaré Probab. Stat. 49, 340-373 (2013)

[12] Curien, N., Ménard, L., Miermont, G. A view from infinity of the uniform infinite quadrangulation. ALEA Lat. Am. J. Probab. Math. Stat. 10, 45-88 (2013)

[13] Dynkin, E.B. Superprocesses and partial differential equations. Ann. Probab. 21, 1185-1262 (1993)

[14] Delmas, J.-F. Computation of moments for the length of the one dimensional ISE support. Electron. J. Probab. 8, no. 17, 15 pp. (electronic) (2003)

[15] Greven, A., Pfaffelhuber, P., Winter, A. Convergence in distribution of random metric measure spaces (Λ-coalescent measure trees). Probab. Th. Rel. Fields

[16] Krikun, M. A uniformly distributed infinite planar triangulation and a related branching process. J. Math. Sci. (N.Y.)

[17] Krikun, M. Local structure of random quadrangulations. Preprint, available at arXiv:math/0512304

[18] Le Gall, J.-F. The Brownian snake and solutions of Δu = u² in a domain. Probab. Th. Rel. Fields 102, 393-432 (1995)

[19] Le Gall, J.-F. Spatial Branching Processes, Random Snakes and Partial Differential Equations. Birkhäuser 1999

[20] Le Gall, J.-F. Random trees and applications. Probab. Surveys 2, 245-311 (2005)

[21] Le Gall, J.-F. The topological structure of scaling limits of large planar maps. Inventiones Math. 169, 621-670 (2007)

[22] Le Gall, J.-F. Geodesics in large planar maps and in the Brownian map. Acta Math. 205, 287-360 (2010)

[23] Le Gall, J.-F.
Uniqueness and universality of the Brownian map. Ann. Probab. 41, 2880-2960 (2013)

[24] Le Gall, J.-F. The Brownian cactus II. Upcrossings and local times of super-Brownian motion. Probab. Th. Rel. Fields, to appear, available at arXiv:1308.6762

[25] Le Gall, J.-F. Bessel processes, the Brownian snake and super-Brownian motion. Preprint, available at arXiv:1407.0237

[26] Le Gall, J.-F., Ménard, L. Scaling limits for the uniform infinite quadrangulation. Illinois J. Math. 54, 1163-1203 (2010)

[27] Le Gall, J.-F., Weill, M. Conditioned Brownian trees. Ann. Inst. H. Poincaré Probab. Stat. 42, 455-489 (2006)

[28] Marckert, J.-F., Mokkadem, A. States spaces of the snake and of its tour: convergence of the discrete snake. J. Theoret. Probab. 16, no. 4, 1015-1046 (2003)

[29] Ménard, L. The two uniform infinite quadrangulations of the plane have the same law. Ann. Inst. H. Poincaré Probab. Stat. 46, 190-208 (2010)

[30] Miermont, G. The Brownian map is the scaling limit of uniform random plane quadrangulations. Acta Math. 210, 319-401 (2013)

[31] Revuz, D., Yor, M. Continuous Martingales and Brownian Motion. Springer 1991

[32]