Skeletal stochastic differential equations for continuous-state branching process
By Dorottya Fekete†, Joaquin Fontbona∗ and Andreas E. Kyprianou‡
University of Bath, Universidad de Chile and University of Bath
Abstract
It is well understood that a supercritical continuous-state branching process (CSBP) is equal in law to a discrete continuous-time Galton–Watson process (the skeleton of prolific individuals) whose edges are dressed in a Poissonian way with immigration which initiates subcritical CSBPs (non-prolific mass). Equally well understood in the setting of CSBPs and superprocesses is the notion of a spine or immortal particle dressed in a Poissonian way with immigration which initiates copies of the original CSBP, which emerges when conditioning the process to survive eternally. In this article, we revisit these notions for CSBPs and put them in a common framework using the well-established language of (coupled) SDEs (cf. [7, 8, 6]). In this way, we are able to deal simultaneously with all types of CSBPs (supercritical, critical and subcritical) as well as understanding how the skeletal representation becomes, in the sense of weak convergence, a spinal decomposition when conditioning on survival. We have two principal motivations. The first is to prepare the way to expand the SDE approach to the spatial setting of superprocesses, where recent results have increasingly sought the use of skeletal decompositions to transfer results from the branching particle setting to the setting of measure-valued processes; cf. [26, 14, 40]. The second is to provide a pathwise decomposition of CSBPs in the spirit of the genealogical coding of CSBPs via Lévy excursions in Duquesne and Le Gall [10], albeit precisely where the aforesaid coding fails to work because the underlying CSBP is supercritical.
1. Introduction.
In this article we are interested in $X = (X_t, t \ge 0)$, a continuous-state, finite-mean branching process (CSBP). In particular, this means that $X$ is a $[0,\infty)$-valued strong Markov process, with absorbing state at zero, whose law on $D([0,\infty), \mathbb{R})$ (the space of càdlàg mappings from $[0,\infty)$ to $\mathbb{R}$) is denoted by $P_x$ for each initial state $x \ge 0$, and which satisfies $P_{x+y} = P_x * P_y$. Here, $P_{x+y} = P_x * P_y$ means that the sum of two independent processes, one issued from $x$ and the other issued from $y$, has the same law as the process issued from $x + y$.
∗ Supported by Basal-Conicyt Centre for Mathematical Modelling and Millenium Nucleus NC120062
† Supported by a scholarship from the EPSRC Centre for Doctoral Training, SAMBa
‡ Supported by EPSRC grant EP/L002442/1
MSC 2010 subject classifications:
Primary 60J80, 60H30; secondary 60G99
Keywords and phrases:
Continuous-state branching processes, stochastic differential equations, skeletal decomposition, spine decomposition

Its semigroup is characterised by the Laplace functional
(1.1) $E_x(e^{-\theta X_t}) = e^{-x u_t(\theta)}$, $x, \theta, t \ge 0$,
where $u_t(\theta)$ uniquely solves the evolution equation
(1.2) $u_t(\theta) + \int_0^t \psi(u_s(\theta))\,ds = \theta$, $t \ge 0$.
Here, we assume that the so-called branching mechanism $\psi$ takes the form
(1.3) $\psi(\theta) = -\alpha\theta + \beta\theta^2 + \int_{(0,\infty)} (e^{-\theta x} - 1 + \theta x)\,\Pi(dx)$, $\theta \ge 0$,
where $\alpha \in \mathbb{R}$, $\beta \ge 0$ and $\Pi$ is a measure concentrated on $(0,\infty)$ which satisfies $\int_{(0,\infty)} (x \wedge x^2)\,\Pi(dx) < \infty$. These restrictions on $\psi$ are very mild and only exclude the possibility of having a non-conservative process or processes which have an infinite mean growth rate. We also assume for convenience that $-\psi$ is not the Laplace exponent of a subordinator (i.e. a Bernstein function), thereby ruling out the case that $X$ has monotone paths. It is easily checked that $\psi$ is an infinitely smooth convex function on $(0,\infty)$ with at most two roots in $[0,\infty)$. More precisely, $0$ is always a root; however, if $\psi'(0+) < 0$, then there is a second root in $(0,\infty)$. The process $X$ is henceforth referred to as a $\psi$-CSBP. It is easily verified that
(1.4) $E_x[X_t] = x e^{-\psi'(0+)t}$, $t, x \ge 0$.
The mean growth of the process is therefore characterised by $\psi'(0+)$ and accordingly we classify CSBPs by the value of this constant.
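As a sanity check on the pair (1.1)–(1.2), the evolution equation can be integrated numerically and compared against a known closed form. The following is a minimal sketch, assuming the purely quadratic (Feller) mechanism $\psi(\theta) = -\alpha\theta + \beta\theta^2$ with illustrative parameters; neither the parameters nor the code are taken from the paper.

```python
import math

# Hedged sketch: verify the evolution equation (1.2) numerically for the
# quadratic (Feller) mechanism psi(theta) = -alpha*theta + beta*theta**2,
# whose solution u_t(theta) is available in closed (logistic) form.
alpha, beta = 0.5, 1.0

def psi(theta):
    return -alpha * theta + beta * theta ** 2

def u_closed(t, theta):
    # closed form for the Feller diffusion
    e = math.exp(alpha * t)
    return alpha * theta * e / (alpha + beta * theta * (e - 1.0))

def u_numeric(t, theta, n=2000):
    # RK4 for u'(t) = -psi(u), u(0) = theta, the differential form of (1.2)
    h, u = t / n, theta
    for _ in range(n):
        k1 = -psi(u)
        k2 = -psi(u + 0.5 * h * k1)
        k3 = -psi(u + 0.5 * h * k2)
        k4 = -psi(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

print(u_numeric(1.0, 2.0), u_closed(1.0, 2.0))
```

The two values agree to solver accuracy, which is exactly the content of (1.2) for this mechanism.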
We say that the $\psi$-CSBP is supercritical, critical or subcritical accordingly as $-\psi'(0+) = \alpha$ is strictly positive, equal to zero or strictly negative, respectively. It is known that the process $(X, P_x)$, $x > 0$, can also be represented as the unique strong solution to the stochastic differential equation (SDE)
(1.5) $X_t = x + \alpha \int_0^t X_{s-}\,ds + \sqrt{2\beta} \int_0^t \int_0^{X_{s-}} W(ds, du) + \int_0^t \int_0^\infty \int_0^{X_{s-}} r\,\tilde N(ds, dr, d\nu)$,
for $x > 0$ and $t \ge 0$, where $W(ds, du)$ is a white noise process on $(0,\infty)^2$ based on the Lebesgue measure $ds \otimes du$, and $N(ds, dr, d\nu)$ is a Poisson point process on $[0,\infty)^3$ with intensity $ds \otimes \Pi(dr) \otimes d\nu$. Moreover, we denote by $\tilde N(ds, dr, d\nu)$ the compensated measure of $N(ds, dr, d\nu)$. See [7, 8, 2] for this fact and further properties of the above SDEs. Through the representation of a CSBP as either a strong Markov process whose semigroup is characterised by an integral equation, or as a solution to an SDE, there are three fundamental probabilistic decompositions that play a crucial role in motivating the main results in this paper. These concern CSBPs conditioned to die out, CSBPs conditioned to survive, and a path decomposition of supercritical CSBPs.

CSBPs conditioned to die out.
To understand what this means, let us momentarily recall that for all supercritical continuous-state branching processes (without immigration) the event $\{\lim_{t\to\infty} X_t = 0\}$ occurs with positive probability. Moreover, for all $x \ge 0$,
$P_x(\lim_{t\uparrow\infty} X_t = 0) = e^{-\lambda^* x}$,
where $\lambda^*$ is the unique root in $(0,\infty)$ of the equation $\psi(\theta) = 0$. Note that $\psi$ is strictly convex with the properties that $\psi(0) = 0$ and $\psi(+\infty) = \infty$, thereby ensuring that the root $\lambda^* > 0$ exists; see Chapters 8 and 9 of [25] for further details. It is straightforward to show that the law of $(X, P_x)$ conditional on the event $\{\lim_{t\uparrow\infty} X_t = 0\}$, say $P^*_x$, agrees with the law of a $\psi^*$-CSBP, where
(1.6) $\psi^*(\theta) = \psi(\theta + \lambda^*)$.
See for example [46].
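The root $\lambda^*$ is typically not available in closed form, but since $\psi$ is convex, negative just right of $0$ when $\alpha > 0$, and eventually positive, it can be located by bisection. Below is a hedged sketch assuming the illustrative mechanism $\Pi(dx) = c\,e^{-x}dx$, for which the integral in (1.3) evaluates to $c\theta^2/(1+\theta)$; with the chosen parameters one can check by hand that $\lambda^* = \sqrt{2}$. None of these choices come from the paper.

```python
# Hedged sketch: locate the root lambda* of psi on (0, infinity) by bisection,
# for the illustrative supercritical mechanism with Pi(dx) = c * exp(-x) dx,
# whose integral term in (1.3) equals c * theta**2 / (1 + theta).
alpha, beta, c = 1.0, 0.5, 0.5

def psi(theta):
    return -alpha * theta + beta * theta ** 2 + c * theta ** 2 / (1.0 + theta)

def find_lambda_star(lo=1e-9, hi=1.0):
    # psi is negative just right of 0 (alpha > 0); expand hi until psi(hi) > 0
    while psi(hi) <= 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

lam = find_lambda_star()
# the extinction probability from initial mass x is then exp(-lam * x)
print(lam, psi(lam))
```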
CSBPs conditioned to survive.
The event $\{\lim_{t\to\infty} X_t = 0\}$ can be categorised further according to whether its intersection with $\{X_t > 0$ for all $t \ge 0\}$ is empty or not. The classical work of Grey [22] distinguishes between these two cases according to an integral test. Indeed, the intersection is empty if and only if
(1.7) $\int^\infty \frac{d\theta}{\psi(\theta)} < \infty$.
If we additionally assume that $-\psi'(0+) = \alpha \le 0$, that is to say, the process is critical or subcritical, then it is known that the notion of conditioning the process to stay positive can be made rigorous through a limiting procedure. More precisely, if we write $\zeta = \inf\{t > 0 : X_t = 0\}$, then for all $A \in \mathcal{F}^X_t := \sigma(X_s : s \le t)$ and $x > 0$,
$P^\uparrow_x(A) := \lim_{s\to\infty} P_x(A \,|\, \zeta > t + s)$
is well defined as a probability measure and satisfies the Doob $h$-transform
(1.8) $\frac{dP^\uparrow_x}{dP_x}\Big|_{\mathcal{F}^X_t} = e^{-\alpha t}\frac{X_t}{x}\mathbf{1}_{\{t<\zeta\}}$.
In addition, $(X, P^\uparrow_x)$, $x > 0$, has been shown to be equivalent in law to a process which has a pathwise description which we give below. Before doing so, we need to introduce some more notation. To this end, define $N^*$ to be a Poisson random measure on $[0,\infty) \times (0,\infty) \times D([0,\infty), \mathbb{R})$ with intensity measure $ds \otimes r\Pi(dr) \otimes P_r(d\omega)$. Moreover, $Q$ is the intensity, or 'excursion', measure on the space $D([0,\infty), \mathbb{R})$ which satisfies
$Q(1 - e^{-\theta\omega_t}) = -\tfrac{1}{x}\log E_x(e^{-\theta X_t}) = u_t(\theta)$, $\theta, t \ge 0$.
Here, the measure $Q$ is the excursion measure on the space $D([0,\infty), \mathbb{R})$ associated to $P_x$, $x > 0$. See Theorems 3.10, 8.6 and 8.22 of [37] and [15, 31, 13, 9, 39] for further details. We can accordingly build a Poisson point process $N^c$ on $[0,\infty) \times D([0,\infty), \mathbb{R})$ with intensity $2\beta\,ds \otimes Q(d\omega)$. Then, for $x > 0$, $(X, P^\uparrow_x)$ is equal in law to the stochastic process
(1.9) $\Lambda_t = X'_t + \int_0^t \int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^c(ds, d\omega) + \int_0^t \int_0^\infty \int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^*(ds, dr, d\omega)$, $t \ge 0$,
where $X'$ has the law $P_x$ and is independent of $N^c$ and $N^*$, which are also independent of one another.
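Grey's condition (1.7) can be probed numerically: the tail integral converges for a quadratic mechanism but diverges logarithmically in the cutoff for a linear one. The sketch below uses a crude log-spaced midpoint rule; the mechanisms $\psi(\theta) = \theta^2$ and $\psi(\theta) = \theta$ are illustrative choices, not examples from the paper.

```python
import math

# Hedged numerical illustration of Grey's condition (1.7): the integral
# int^infty dtheta / psi(theta) is finite for psi(theta) = theta**2 but
# diverges (like log of the cutoff) for the linear psi(theta) = theta.
def tail_integral(psi, lower=1.0, upper=1e6, n=200000):
    # log-spaced midpoint rule for int_lower^upper dtheta / psi(theta)
    total, a = 0.0, lower
    step = math.log(upper / lower) / n
    for i in range(1, n + 1):
        b = lower * math.exp(i * step)
        total += (b - a) / psi(0.5 * (a + b))
        a = b
    return total

quad = tail_integral(lambda t: t * t)  # exact value of the full integral: 1
lin = tail_integral(lambda t: t)       # ~ log(1e6) ~ 13.8, unbounded in cutoff
print(quad, lin)
```

Raising the cutoff leaves `quad` essentially unchanged while `lin` keeps growing, which is the dichotomy Grey's test captures.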
Intuitively, one can think of the process $(\Lambda_t, t \ge 0)$ as being the result of first running a subordinator
$S_t = 2\beta t + \int_0^t \int_0^\infty r\, N^*(ds, dr)$, $t \ge 0$,
where we have slightly abused our notation and written $N^*(ds, dr)$, $s, r > 0$, in place of $\int_{D([0,\infty),\mathbb{R})} N^*(ds, dr, d\omega)$, $s, r > 0$. The subordinator $(S_t, t \ge 0)$ is usually referred to as the spine. To explain the formula (1.9): in a Poissonian way, we dress the spine with versions of $X$ sampled under the excursion measure $Q$; moreover, at each jump of $S$ we initiate an independent copy of $X$ with initial mass equal to the size of the jump of $S$. See for example (3.9) in [36], (4.3) in [35], (4.18) in [34] or the discussion in Section 12.3.2 of [25] or [37]. The reader is also referred to e.g. [43] or [29, 30] for further details of the notion of a spine. It turns out that one may also identify the effect of the change of measure within the context of the SDE setting. In [20], it was shown that $(X, P^\uparrow_x)$, $x > 0$, is the unique strong solution to the SDE
(1.10) $X_t = x + \alpha \int_0^t X_{s-}\,ds + \sqrt{2\beta} \int_0^t \int_0^{X_{s-}} W(ds, du) + \int_0^t \int_0^\infty \int_0^{X_{s-}} r\,\tilde N(ds, dr, d\nu) + \int_0^t \int_0^\infty r\, N^*(ds, dr) + 2\beta t$, $t \ge 0$,
where $W$, $N$ and $\tilde N$ are as in (1.5) and $N^*$ is as above, and all noises are independent. See also [8] and [21].

Skeletal path decomposition of supercritical CSBPs.
In [12, 5] and [4] it was shown that the law of the process $X$, where $X$ is defined by (1.5), can be recovered from a supercritical continuous-time Galton–Watson process (GW), issued with a Poisson number of initial ancestors, and dressed in a Poissonian way using the law of the original process conditioned to become extinguished. To be more precise, they showed that for each $x \ge 0$, $(X, P_x)$ has the same law as the process $(\Lambda_t, t \ge 0)$ which has the following pathwise construction. First sample from a continuous-time Galton–Watson process with branching rate $q = \psi'(\lambda^*)$ and offspring distribution $\{p_k : k \ge 0\}$ such that its branching generator is given by
(1.11) $q\left(\sum_{k\ge 0} p_k r^k - r\right) = \frac{1}{\lambda^*}\psi(\lambda^*(1-r))$, $r \in [0,1]$.
This GW process is known as the skeleton and offers the genealogy of prolific individuals, that is, individuals who have infinite genealogical lines of descent (cf. [5]). With the particular branching generator given by (1.11), $p_0 = p_1 = 0$, and for $k \ge 2$, $p_k := p_k([0,\infty))$, where, for $r \ge 0$,
$p_k(dr) = \frac{1}{\lambda^*\psi'(\lambda^*)}\left\{\beta(\lambda^*)^2\delta_0(dr)\mathbf{1}_{\{k=2\}} + (\lambda^*)^k\frac{r^k}{k!}e^{-\lambda^* r}\Pi(dr)\right\}$.
If we denote the aforesaid GW process by $Z = (Z_t, t \ge 0)$, then we shall also insist that $Z_0$ has a Poisson distribution with parameter $\lambda^* x$. Next, thinking of the trajectory of $Z$ as a graph, dress the life-lengths of $Z$ in such a way that a $\psi^*$-CSBP is independently grafted on to each edge of $Z$ at time $t$ with rate
(1.12) $2\beta\, dQ^* + \int_0^\infty y\,e^{-\lambda^* y}\,\Pi(dy)\,dP^*_y$.
Moreover, on the event that an individual dies and branches into $k \ge 2$ offspring, with probability $p_k(dx)$ an additional independent $\psi^*$-CSBP is grafted on to the branching point with initial mass $x \ge 0$. The quantity $\Lambda_t$ is now understood to be the total dressed mass present at time $t$, together with the mass present at time $t$ in an independent $\psi^*$-CSBP issued at time zero with initial mass $x$. Whilst it is clear that the pair $(Z, \Lambda)$ is Markovian, it is less clear that $\Lambda$ alone is Markovian.
This must, however, be the case given the conclusion that $\Lambda$ and $X$ are equal in law. A key element in this respect is the non-trivial observation that, for each $t \ge 0$, the law of $Z_t$ given $\Lambda_t$ is that of a Poisson random variable with parameter $\lambda^*\Lambda_t$. Such skeletal path decompositions for continuous-state branching processes, and spatial versions thereof, are by no means new. Examples include [19, 45, 44, 16, 12, 4, 23, 28, 27]. In this paper our objective is to understand the relationship between skeletal decompositions of the type described above and the emergence of a spine on conditioning the process to survive. In particular, our tool of choice will be SDE theory. The importance of this study is that it underlines a methodology that should carry over to the spatial setting of superprocesses, where recent results have increasingly sought the use of skeletal decompositions to transfer results from the branching particle setting to the setting of measure-valued processes; cf. [26, 14, 40, 41]. In future work we hope to develop the SDE approach to skeletal decompositions in the spatial setting. We also expect this approach to be helpful in studying analogous decompositions in the setting of continuous-state branching processes with competition [3, 41]. Moreover, although our method takes inspiration from the genealogical coding of CSBPs by Lévy excursions, cf. Duquesne and Le Gall [10], our approach appears to be applicable where the aforesaid method fails, namely for supercritical processes.
2. Main results.
In this section we summarise the main results of the paper. We have three main results. First, we provide a slightly more general family of skeletal decompositions in the spirit of [12], albeit with milder assumptions and in the language of SDEs. Second, taking lessons from this first result, we give a time-inhomogeneous skeletal decomposition, again using the language of SDEs, both for supercritical and (sub)critical CSBPs. Nonetheless, our proof will take inspiration from classical ideas on the genealogical coding of CSBPs through the exploration of associated excursions of reflected Lévy processes; see for example [10] and the references therein. Finally, our third main result shows that a straightforward limiting procedure in the SDE skeletal decomposition for (sub)critical processes, which corresponds to conditioning on survival, reveals a weak solution to the SDE given in (1.10). It will transpire that conditioning the process to survive until later and later times is equivalent to 'thinning' the skeleton such that, in the limit, we get the spine decomposition. The limiting procedure also intuitively explains how the spine emerges in the conditioned process as a consequence of stretching out the skeleton in the SDE decomposition of the (sub)critical processes. Before moving to the first main result, let us introduce some more notation. The reader will note that it is very similar but, nonetheless, subtly different to previously introduced terms. Define the Esscher-transformed branching mechanism $\psi_\lambda : \mathbb{R}_+ \to \mathbb{R}_+$, for $\theta \ge -\lambda$ and $\lambda \ge \lambda^*$, by
(2.1) $\psi_\lambda(\theta) = \psi(\theta + \lambda) - \psi(\lambda) = \psi'(\lambda)\theta + \beta\theta^2 + \int_{(0,\infty)}\left(e^{-\theta x} - 1 + \theta x\right)e^{-\lambda x}\,\Pi(dx)$,
where
$\psi'(\lambda) = -\alpha + 2\lambda\beta + \int_{(0,\infty)}\left(1 - e^{-\lambda x}\right)x\,\Pi(dx) > 0$.
This is the branching mechanism of a subcritical branching process on account of the fact that $-\psi'_\lambda(0+) = -\psi'(\lambda) < 0$.
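The two expressions for $\psi_\lambda$ in (2.1) can be checked against each other numerically. The sketch below assumes the illustrative mechanism $\Pi(dx) = c\,e^{-x}dx$ (not taken from the paper), for which every integral is explicit: the integral term in (1.3) is $c\theta^2/(1+\theta)$ and the exponentially tilted integral in (2.1) is $c\theta^2/((1+\lambda)^2(1+\lambda+\theta))$.

```python
# Hedged check of the Esscher identity (2.1) for the illustrative mechanism
# with Pi(dx) = c * exp(-x) dx, where the integrals are explicit:
#   psi(theta) = -alpha*theta + beta*theta**2 + c*theta**2/(1+theta)
#   int (e^{-theta x} - 1 + theta x) e^{-lam x} Pi(dx)
#              = c*theta**2 / ((1+lam)**2 * (1+lam+theta))
alpha, beta, c = 1.0, 0.5, 0.5

def psi(t):
    return -alpha * t + beta * t ** 2 + c * t ** 2 / (1.0 + t)

def psi_prime(lamb):
    return -alpha + 2.0 * beta * lamb + c * (lamb ** 2 + 2.0 * lamb) / (1.0 + lamb) ** 2

def psi_lam(theta, lamb):
    # right-hand side of (2.1), using the explicit exponential-Pi integrals
    return (psi_prime(lamb) * theta + beta * theta ** 2
            + c * theta ** 2 / ((1.0 + lamb) ** 2 * (1.0 + lamb + theta)))

lam = 2.0  # any lam >= lambda* works; here lambda* = sqrt(2)
for theta in (0.1, 1.0, 5.0):
    print(psi(theta + lam) - psi(lam), psi_lam(theta, lam))
```

Both columns agree to machine precision, confirming that the tilted mechanism is again of the form (1.3) with $\Pi$ replaced by $e^{-\lambda x}\Pi(dx)$.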
Heuristically speaking, given that $\lambda \mapsto \psi'(\lambda)$ is increasing, the $\psi_\lambda$-CSBP becomes more and more subcritical as $\lambda$ increases. Next, we need the continuous-time Galton–Watson process parameterised by $\lambda \ge \lambda^*$, which has been seen before in e.g. [12] and agrees with the process described by (1.11) when $\lambda = \lambda^*$. It branches at rate $\psi'(\lambda)$ and has branching generator given by
$F_\lambda(s) := \frac{1}{\lambda}\psi((1-s)\lambda)$, $s \in [0,1]$, $\lambda \ge \lambda^*$.
That is to say, writing $F_\lambda(s)$ as on the left-hand side of (1.11), we now have $p_0 = \psi(\lambda)/\lambda\psi'(\lambda)$, $p_1 = 0$ and, for $k \ge 2$,
$p_k = \frac{1}{\lambda\psi'(\lambda)}\left\{\beta\lambda^2\mathbf{1}_{\{k=2\}} + \int_{(0,\infty)}\frac{(\lambda r)^k}{k!}e^{-\lambda r}\,\Pi(dr)\right\}$.
We will also use the family $(\eta_k(\cdot))_{k\ge 0}$ of branch-point immigration laws (conditional on the number of offspring at the branch point), where $\eta_1(dr) = 0$, $r \ge 0$, and, otherwise,
(2.2) $\eta_k(dr) = \frac{1}{p_k\lambda\psi'(\lambda)}\left\{\psi(\lambda)\mathbf{1}_{\{k=0\}}\delta_0(dr) + \beta\lambda^2\mathbf{1}_{\{k=2\}}\delta_0(dr) + \mathbf{1}_{\{k\ge 2\}}\frac{(\lambda r)^k}{k!}e^{-\lambda r}\Pi(dr)\right\}$,
for $r \ge 0$. Note in particular that, when $\lambda > \lambda^*$, there is the possibility that no offspring are produced. Since in this case some lines of descent are finite, the Galton–Watson process no longer represents the prolific individuals. Finally, we need to introduce a series of driving sources of randomness for the SDE which will appear in Theorem 2.1 below. Let $N_0$ be a Poisson random measure on $[0,\infty)^3$ with intensity measure $ds \otimes e^{-\lambda r}\Pi(dr) \otimes d\nu$, let $\tilde N_0$ be the associated compensated version of $N_0$, let $N_1(ds, dr, dj)$ be a Poisson point process on $[0,\infty) \times (0,\infty) \times \mathbb{N}$ with intensity $ds \otimes r\,e^{-\lambda r}\Pi(dr) \otimes \sharp(dj)$, and finally let $N_2(ds, dr, dk, dj)$ be a Poisson point process on $[0,\infty) \times [0,\infty) \times \mathbb{N}_0 \times \mathbb{N}$ with intensity $\psi'(\lambda)\,ds \otimes \eta_k(dr) \otimes p_k\,\sharp(dk) \otimes \sharp(dj)$, where $\mathbb{N}_0 = \{0\} \cup \mathbb{N}$ and $\sharp(d\ell) = \sum_{i\in\mathbb{N}_0}\delta_i(d\ell)$ denotes the counting measure on $\mathbb{N}_0$. As before, $W(ds, du)$ will denote a white noise process on $(0,\infty)^2$ based on the Lebesgue measure $ds \otimes du$.

Theorem 2.1.
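A useful consistency check is that the offspring probabilities $p_0, p_1, p_2, \ldots$ defined above really do sum to one; this amounts to the identity $\lambda\psi'(\lambda) - \psi(\lambda) = \beta\lambda^2 + \int_{(0,\infty)}\sum_{k\ge 2}\frac{(\lambda r)^k}{k!}e^{-\lambda r}\Pi(dr)$. The sketch below verifies this numerically for the illustrative mechanism $\Pi(dr) = c\,e^{-r}dr$ (an assumption, not an example from the paper), where $\int r^k e^{-(\lambda+1)r}c\,dr = c\,k!/(\lambda+1)^{k+1}$.

```python
# Hedged sanity check, for Pi(dr) = c*exp(-r) dr, that the offspring
# probabilities p_0 = psi(lam)/(lam*psi'(lam)), p_1 = 0 and, for k >= 2,
# p_k = (beta*lam**2*[k==2] + int (lam r)^k/k! e^{-lam r} Pi(dr)) / (lam*psi'(lam))
# sum to one; here int (lam r)^k/k! e^{-lam r} Pi(dr) = c*lam**k/(lam+1)**(k+1).
alpha, beta, c = 1.0, 0.5, 0.5
lam = 2.0

def psi(t):
    return -alpha * t + beta * t ** 2 + c * t ** 2 / (1.0 + t)

def psi_prime(t):
    return -alpha + 2.0 * beta * t + c * (t ** 2 + 2.0 * t) / (1.0 + t) ** 2

norm = lam * psi_prime(lam)
p = {0: psi(lam) / norm, 1: 0.0}
for k in range(2, 200):
    jump = c * lam ** k / (1.0 + lam) ** (k + 1)  # Poissonised Pi mass at k
    p[k] = (beta * lam ** 2 * (k == 2) + jump) / norm
print(sum(p.values()))
```

The truncation at $k = 200$ discards only a geometrically small tail, so the printed total is $1$ to floating-point accuracy.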
Suppose that $\psi$ corresponds to a supercritical branching mechanism (i.e. $\alpha > 0$) and $\lambda \ge \lambda^*$. Consider the coupled system of SDEs
(2.3)
$\begin{pmatrix}\Lambda_t\\ Z_t\end{pmatrix} = \begin{pmatrix}\Lambda_0\\ Z_0\end{pmatrix} - \psi'(\lambda)\int_0^t\begin{pmatrix}\Lambda_{s-}\\ 0\end{pmatrix}ds + \sqrt{2\beta}\int_0^t\int_0^{\Lambda_{s-}}\begin{pmatrix}1\\ 0\end{pmatrix}W(ds,du)$
$\qquad + \int_0^t\int_0^\infty\int_0^{\Lambda_{s-}}\begin{pmatrix}r\\ 0\end{pmatrix}\tilde N_0(ds,dr,d\nu) + \int_0^t\int_0^\infty\int_0^{Z_{s-}}\begin{pmatrix}r\\ 0\end{pmatrix}N_1(ds,dr,dj)$
$\qquad + \int_0^t\int_0^\infty\int_0^\infty\int_0^{Z_{s-}}\begin{pmatrix}r\\ k-1\end{pmatrix}N_2(ds,dr,dk,dj) + 2\beta\int_0^t\begin{pmatrix}Z_{s-}\\ 0\end{pmatrix}ds, \qquad t \ge 0.$
The equation (2.3) has a unique strong solution for arbitrary ($\mathcal{F}_0$-measurable) initial values $\Lambda_0 \ge 0$ and $Z_0 \in \mathbb{N}_0$ (where $\mathcal{F}_t := \sigma((\Lambda_s, Z_s) : s \le t)$). Furthermore, under the assumption that $Z_0$ is an independent random variable which is Poisson distributed with intensity $\lambda\Lambda_0$, this unique solution satisfies the following:
(i) For $t \ge 0$, conditional on $\mathcal{F}^\Lambda_t := \sigma(\Lambda_s : s \le t)$, $Z_t$ is Poisson distributed with intensity $\lambda\Lambda_t$;
(ii) the process $(\Lambda_t, t \ge 0)$ is Markovian and a weak solution to (1.5);
(iii) if $Z_0 = 0$, then $(\Lambda_t, t \ge 0)$ is a subcritical CSBP with branching mechanism $\psi_\lambda$.

If one focuses on the second element, $Z$, in the SDE (2.3), it can be seen that there is no dependency on the first element $\Lambda$. The converse is not true, however. Indeed, the stochastic evolution of $Z$ is simply that of the continuous-time GW process with branching generator $F_\lambda(s)$, $s \in [0,1]$. Given the evolution of $Z$, the process $\Lambda$ here describes nothing more than the aggregation of a Poissonian and branch-point dressing on $Z$, together with an independent copy of a $\psi_\lambda$-CSBP. As is clear from (2.2), this results in the skeleton $Z$ having the possibility of 'dead ends' (no offspring). Of course, if $\lambda = \lambda^*$ then this occurs with zero probability and the joint system of SDEs in (2.3) describes precisely the prolific skeleton decomposition.
In the spirit of [12], albeit using different technology and in a continuum setting, Theorem 2.1 puts into a common framework a parametric family of skeletal decompositions for supercritical processes. Related work also appears in [1, 38].

Remark. Although we have assumed in the introduction that $\int_{(0,\infty)}(x \wedge x^2)\,\Pi(dx) < \infty$, the reader can verify from the proof that this is in fact not needed. Indeed, suppose that we relax the assumption on $\Pi$ to just $\int_{(0,\infty)}(1 \wedge x)\,\Pi(dx) < \infty$ and we take the branching mechanism in the form
$\psi(\theta) = -\alpha\theta + \beta\theta^2 + \int_{(0,\infty)}\left(e^{-\theta x} - 1 + \theta x\mathbf{1}_{\{x<1\}}\right)\Pi(dx)$, $\theta \ge 0$,
where $\psi'(0+) < 0$ and
$\int_{0+}\frac{d\xi}{|\psi(\xi)|} = \infty$
to ensure conservative supercriticality. Then the necessary adjustment one needs to make occurs, for example, in (1.5), where jumps of size greater than or equal to 1 in the Poisson random measure $N$ are separated out without compensation. However, the form of (2.3) remains the same, as all jumps of $N_0$ can be compensated.

Our objective, however, is to go further and demonstrate how the SDE approach can also apply in the finite-horizon setting. We do this below, but we should remark that the skeletal decomposition is heavily motivated by the description of the CSBP genealogy using the so-called height process in Duquesne and Le Gall [10]. Indeed, for (sub)critical CSBPs one may consider the conclusion of Theorem 2.2, below, as a rewording thereof. However, as the proof does not rely on the CSBP being (sub)critical, the same result holds in the supercritical case. Thus Theorem 2.2 is also a time-inhomogeneous version of Theorem 2.1 for supercritical CSBPs, a setting which was not discussed in [10]. Assume that $\psi$ is a branching mechanism that satisfies Grey's condition (1.7). We fix a time marker $T > 0$ and we want to describe a coupled system of SDEs in the spirit of (2.3) in which the second component describes prolific genealogies up to the time horizon $T$.
In other words, our aim is to provide an SDE decomposition of the CSBP along those individuals in the population who have a descendant at time $T$. To this end, recall that $(u_t(\theta), t \ge 0)$ is given by (1.1) and accordingly, for $t \ge 0$, $u_t(\infty) = -x^{-1}\log P_x(X_t = 0)$ gives the rate at which extinction has occurred by time $t$. We need a Poisson random measure $N^T_0$ on $[0,T) \times (0,\infty) \times [0,\infty)$ with intensity $ds \otimes e^{-u_{T-s}(\infty)r}\Pi(dr) \otimes d\nu$, a Poisson process $N^T_1$ on $[0,T) \times [0,\infty) \times \mathbb{N}$ with intensity $ds \otimes r\,e^{-u_{T-s}(\infty)r}\Pi(dr) \otimes \sharp(dj)$, and a Poisson process $N^T_2(ds, dr, dk, dj)$ on $[0,T) \times [0,\infty) \times \mathbb{N}_0 \times \mathbb{N}$ with intensity
$\frac{u_{T-s}(\infty)\psi'(u_{T-s}(\infty)) - \psi(u_{T-s}(\infty))}{u_{T-s}(\infty)}\,ds \otimes \eta^{T-s}_k(dr) \otimes p^{T-s}_k\,\sharp(dk) \otimes \sharp(dj)$,
where, for $k \ge 2$,
(2.4) $\eta^{T-s}_k(dr) = \frac{\beta u_{T-s}(\infty)^2\mathbf{1}_{\{k=2\}}\delta_0(dr) + (u_{T-s}(\infty)r)^k e^{-u_{T-s}(\infty)r}\Pi(dr)/k!}{p^{T-s}_k\left(u_{T-s}(\infty)\psi'(u_{T-s}(\infty)) - \psi(u_{T-s}(\infty))\right)}$, $r \ge 0$,
and $p^{T-s}_k$ is such that $p^{T-s}_0 = p^{T-s}_1 = 0$ and the remaining probabilities are computable by insisting that $\eta^{T-s}_k(\cdot)$ is itself a probability distribution for each $k \ge 2$.

Theorem 2.2. Suppose that $\psi$ corresponds to a branching mechanism which satisfies Grey's condition (1.7). Fix a time horizon $T > 0$ and consider the coupled system of SDEs
(2.5)
$\begin{pmatrix}\Lambda^T_t\\ Z^T_t\end{pmatrix} = \begin{pmatrix}\Lambda^T_0\\ Z^T_0\end{pmatrix} - \int_0^t\psi'(u_{T-s}(\infty))\begin{pmatrix}\Lambda^T_{s-}\\ 0\end{pmatrix}ds + \sqrt{2\beta}\int_0^t\int_0^{\Lambda^T_{s-}}\begin{pmatrix}1\\ 0\end{pmatrix}W(ds,du)$
$\qquad + \int_0^t\int_0^\infty\int_0^{\Lambda^T_{s-}}\begin{pmatrix}r\\ 0\end{pmatrix}\tilde N^T_0(ds,dr,d\nu) + \int_0^t\int_0^\infty\int_0^{Z^T_{s-}}\begin{pmatrix}r\\ 0\end{pmatrix}N^T_1(ds,dr,dj)$
$\qquad + \int_0^t\int_0^\infty\int_0^\infty\int_0^{Z^T_{s-}}\begin{pmatrix}r\\ k-1\end{pmatrix}N^T_2(ds,dr,dk,dj) + 2\beta\int_0^t\begin{pmatrix}Z^T_{s-}\\ 0\end{pmatrix}ds, \qquad 0 \le t < T.$
The equation (2.5) has a unique strong solution for arbitrary ($\mathcal{F}^T_0$-measurable) initial values $\Lambda^T_0 \ge 0$ and $Z^T_0 \in \mathbb{N}_0$ (where $\mathcal{F}^T_t := \sigma((\Lambda^T_s, Z^T_s) : s \le t)$, $t < T$).
Furthermore, under the assumption that $Z^T_0$ is an independent random variable which is Poisson distributed with intensity $u_T(\infty)\Lambda^T_0$, this unique solution satisfies the following:
(i) For $T > t \ge 0$, conditional on $\mathcal{F}^{\Lambda^T}_t := \sigma(\Lambda^T_s : s \le t)$, $Z^T_t$ is Poisson distributed with intensity $u_{T-t}(\infty)\Lambda^T_t$;
(ii) the process $(\Lambda^T_t, 0 \le t < T)$ is Markovian and a weak solution to (1.5);
(iii) conditional on $\{Z^T_0 = 0\}$, the process $(\Lambda^T_t, 0 \le t < T)$ corresponds to a weak solution to (1.5) conditioned to become extinct by time $T$.

The SDE evolution in Theorem 2.2 mimics the skeletal decomposition in (2.3), albeit that the different components in the decomposition are time-dependent. Putting the SDE representation aside, such time-varying skeletons have been observed in e.g. [16, 10]. We note that the underlying skeleton $Z^T$ can be thought of as a time-inhomogeneous Galton–Watson process (a $T$-prolific skeleton) such that, at time $s < T$, its branching rate is given by
(2.6) $q^{T-s} := \frac{u_{T-s}(\infty)\psi'(u_{T-s}(\infty)) - \psi(u_{T-s}(\infty))}{u_{T-s}(\infty)}$
and its offspring distribution by $\{p^{T-s}_k : k \ge 0\}$. This has the feature that the branching rate explodes towards the time horizon $T$. To see why, we can appeal to (1.1), and note that $P_x(X_t = 0) = e^{-u_t(\infty)x}$, $x, t > 0$, and hence $\lim_{t\downarrow 0} u_t(\infty) = \infty$. Moreover, one easily verifies from (1.3) that $\lim_{\lambda\to\infty}[\lambda\psi'(\lambda) - \psi(\lambda)]/\lambda = \infty$. Together, these facts imply the explosion of (2.6) as $s \to T$. We also note from the integrals involving $N^T_1$ and $N^T_2$ that there is mass immigrating off the space-time trajectory of $Z^T$. Moreover, once mass has immigrated, the first four terms of (2.5) show that it evolves as a time-inhomogeneous CSBP. Note that, in the supercritical setting, $u_{T-t}(\infty)$ converges to $\lambda^*$ for all $t > 0$ as $T \to \infty$.
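Both limiting behaviours of $u_t(\infty)$, explosion as $t \downarrow 0$ and convergence to $\lambda^*$ as $t \to \infty$, can be seen explicitly in the Feller case, where everything is in closed form. The following sketch assumes $\psi(\theta) = -\alpha\theta + \beta\theta^2$ with illustrative parameters (not from the paper), for which $u_t(\infty) = \alpha e^{\alpha t}/(\beta(e^{\alpha t} - 1))$, $\lambda^* = \alpha/\beta$, and the rate (2.6) reduces to $q^{T-s} = \beta u_{T-s}(\infty)$.

```python
import math

# Hedged illustration with the Feller mechanism psi(theta) = -alpha*theta
# + beta*theta**2 (alpha > 0), for which u_t(infinity) is explicit:
#   u_t(inf) = alpha * e^{alpha t} / (beta * (e^{alpha t} - 1)).
alpha, beta = 0.5, 1.0
lam_star = alpha / beta  # root of psi on (0, infinity)

def u_inf(t):
    e = math.exp(alpha * t)
    return alpha * e / (beta * (e - 1.0))

def q(t):
    # the rate (2.6) evaluated at time-to-horizon t
    u = u_inf(t)
    return (-alpha + 2.0 * beta * u) - (-alpha * u + beta * u ** 2) / u

print(u_inf(50.0), lam_star)  # u_t(inf) -> lambda* as t -> infinity
print(q(1e-4))                # the rate explodes as the horizon is approached
```

For this mechanism one can also check by hand that $q(t) = \beta u_t(\infty)$, so the explosion of the branching rate is exactly the blow-up of $u_t(\infty)$ near the horizon.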
This intuitively means that, when $T$ goes to $\infty$, one can recover the prolific skeleton decomposition of Theorem 2.1 from the time-inhomogeneous one of Theorem 2.2. Finally, with the finite-horizon SDE skeletal decomposition in Theorem 2.2 in hand, we may now turn our attention to understanding what happens when we observe the solution to (2.5) in the (sub)critical case on a finite time horizon $[0, t_0]$, and we condition on there being at least one $T$-prolific genealogy, while letting $T \to \infty$.

Theorem 2.3. Suppose that $\psi$ is a critical or subcritical branching mechanism such that Grey's condition (1.7) holds. Suppose, moreover, that $((\Lambda^T_t, Z^T_t), 0 \le t < T)$ is a weak solution to (2.5) and that $Z^T_0$ is an independent random variable which is Poisson distributed with intensity $u_T(\infty)\Lambda^T_0$. Then, conditional on the event $\{Z^T_0 > 0\}$, in the sense of weak convergence with respect to the Skorokhod topology, for all $t_0 > 0$,
$((\Lambda^T_t, Z^T_t), 0 \le t \le t_0) \to ((X^\uparrow_t, 1), 0 \le t \le t_0)$, as $T \to \infty$,
where $X^\uparrow$ is a weak solution to (1.10).

Theorem 2.3 puts the phenomena of spines and skeletons in the same framework. Roughly speaking, any subcritical branching population contains a naturally embedded skeleton which describes the 'fittest' genealogies. In our setting, 'fittest' means surviving until time $T$, but other notions of fitness can be considered, especially when one introduces a spatial type to mass in the branching process. For example, in [23] a branching Brownian motion in a strip is considered, where 'fittest genealogies' pertains to those lines of descent which survive in the strip for all eternity. Having at least one line of descent in the skeleton corresponds to the event of survival. Thus, conditioning on survival as we make the survival event itself increasingly unlikely, e.g.
by taking $T \to \infty$ in our model, or taking the width of the strip down to a critical value in the branching Brownian motion model, the natural stochastic behaviour of the skeleton is to thin down to a single line of descent. This phenomenon was originally observed in [16], where the scaling limit of a Galton–Watson process conditioned on survival is shown to converge to the immortal particle decomposition of the $(1+\beta)$-superprocess conditioned on survival. The remainder of the paper is structured as follows. In the next section we explain the heuristic behind how (1.5) can be decoupled into the components that arise in (2.3). The heuristic is used in Section 4, where the proof of Theorem 2.1 is given. In this sense our proof of Theorem 2.1 has the feel of a 'guess-and-verify' approach. In Section 5, again in the spirit of a 'guess-and-verify' approach, we use ideas from the classical description of the exploration process of CSBPs in e.g. [10] to provide the heuristic behind the mathematical structures that lie behind the proof of Theorem 2.2. Given the similarity of this proof to that of Theorem 2.1, it is sketched in Section 6. Finally, in Section 7 we provide the proof of Theorem 2.3.

3. Thinning of the CSBP SDE.

In this section, we will perform an initial manipulation of the SDE (1.5), which we will need in order to make comparative statements for Theorems 2.1 and 2.2. To this end, we will introduce some independent marks on the atoms of the Poisson process $N$ driving (1.5) and use them to thin out various contributions to the SDE evolution. Denote by $(t_i, r_i, \nu_i : i \in \mathbb{N})$ some enumeration of the atoms of $N$ and recall that $\mathbb{N}_0 = \{0\} \cup \mathbb{N}$. By enlarging the probability space, we can introduce an additional mark to the atoms of $N$, say $(k_i : i \in \mathbb{N})$, resulting in an 'extended' Poisson random measure,
(3.1) $N(ds, dr, d\nu, dk) := \sum_{i\in\mathbb{N}}\delta_{(t_i, r_i, \nu_i, k_i)}(ds, dr, d\nu, dk)$,
on $[0,\infty)^3 \times \mathbb{N}_0$ with intensity $ds \otimes \Pi(dr) \otimes d\nu \otimes \frac{(\lambda r)^k}{k!}e^{-\lambda r}\sharp(dk)$.
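The marking in (3.1) attaches to an atom of jump size $r$ an independent mark $k$ that is Poisson distributed with mean $\lambda r$. The following is a hedged Monte Carlo sketch of this marking, assuming an illustrative normalised jump law $e^{-r}dr$ in place of a general $\Pi$ (the sampler and parameters are not from the paper); the proportion of atoms with mark $0$ is compared with its analytic value $\int e^{-\lambda r}e^{-r}dr = 1/(1+\lambda)$.

```python
import math
import random

# Hedged simulation of the marking behind (3.1): each atom of jump size r
# carries an independent mark k ~ Poisson(lam * r); atoms are then routed
# according to k = 0, k = 1 or k >= 2, as in the thinning that follows.
random.seed(11)
lam, n = 1.0, 200000

def sample_poisson(mu):
    # inverse-transform Poisson sampler, adequate for the moderate mu here
    u, k, p = random.random(), 0, math.exp(-mu)
    s = p
    while u > s:
        k += 1
        p *= mu / k
        s += p
    return k

counts = {0: 0, 1: 0, 2: 0}
for _ in range(n):
    r = random.expovariate(1.0)  # illustrative jump size r ~ e^{-r} dr
    counts[min(sample_poisson(lam * r), 2)] += 1

# analytic check of the k = 0 proportion: E[e^{-lam r}] = 1/(1+lam) = 0.5
print(counts[0] / n, 1.0 / (1.0 + lam))
```

The empirical proportion matches $1/(1+\lambda)$ to Monte Carlo accuracy, which is the elementary thinning fact used repeatedly below.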
Now define three random measures by
$N_0(ds, dr, d\nu) = N(ds, dr, d\nu, \{0\})$, $N_1(ds, dr, d\nu) = N(ds, dr, d\nu, \{1\})$ and $N_2(ds, dr, d\nu) = N(ds, dr, d\nu, \{k \ge 2\})$.
Classical Poisson thinning now tells us that $N_0$, $N_1$ and $N_2$ are independent Poisson point processes on $[0,\infty)^3$ with respective intensities
$ds \otimes e^{-\lambda r}\Pi(dr) \otimes d\nu$, $\quad ds \otimes (\lambda r)e^{-\lambda r}\Pi(dr) \otimes d\nu$ $\quad$ and $\quad ds \otimes \sum_{k=2}^\infty\frac{(\lambda r)^k}{k!}e^{-\lambda r}\Pi(dr) \otimes d\nu$.
With these thinned Poisson random measures in hand, we may start to separate out the different stochastic integrals in (1.5). We have that, for $t \ge 0$,
(3.2)
$X_t = x + \alpha\int_0^t X_{s-}ds + \sqrt{2\beta}\int_0^t\int_0^{X_{s-}}W(ds,du) + \int_0^t\int_0^\infty\int_0^{X_{s-}}r\,\tilde N_0(ds,dr,d\nu)$
$\qquad + \int_0^t\int_0^\infty\int_0^{X_{s-}}r\,N_1(ds,dr,d\nu) + \int_0^t\int_0^\infty\int_0^{X_{s-}}r\,N_2(ds,dr,d\nu) - \int_0^t\int_0^\infty X_{s-}\sum_{n=1}^\infty\frac{(\lambda r)^n}{n!}e^{-\lambda r}\,r\,\Pi(dr)\,ds$
$\quad = x - \psi'(\lambda)\int_0^t X_s\,ds + \sqrt{2\beta}\int_0^t\int_0^{X_{s-}}W(ds,du) + \int_0^t\int_0^\infty\int_0^{X_{s-}}r\,\tilde N_0(ds,dr,d\nu)$
$\qquad + \int_0^t\int_0^\infty\int_0^{X_{s-}}r\,N_1(ds,dr,d\nu) + 2\beta\lambda\int_0^t X_{s-}ds + \int_0^t\int_0^\infty\int_0^{X_{s-}}r\,N_2(ds,dr,d\nu)$,
where in the last equality we have used the easily derived fact that $\alpha - \int_{(0,\infty)}(1 - e^{-\lambda r})r\,\Pi(dr) = -\psi'(\lambda) + 2\beta\lambda$. Recalling (2.1), the first line in the last equality of (3.2) corresponds to the dynamics of a subcritical CSBP with branching mechanism $\psi_\lambda$. Inspecting the statement of Theorem 2.1, we see intuitively that, in order to prove this result, our job is to show that the integrals on the right-hand side of (3.2) driven by $N_1$ and $N_2$ can be identified with the mass that immigrates off the skeleton.

4. $\lambda$-Skeleton: Proof of Theorem 2.1.

We start by addressing the claim that (2.3) possesses a unique strong solution. Thereafter we prove claims (i) and (ii) of the theorem in order. We can identify the existence of a weak solution to (2.3) with initial value $(\Lambda_0, Z_0) = (x, n)$, $x \ge 0$, $n \in \mathbb{N}_0$, by introducing additionally marked versions of the Poisson random measures $N_1$ and $N_2$, as well as an additional Poisson random measure $N^*$.
We will insist that $N^1(\mathrm{d}s,\mathrm{d}r,\mathrm{d}j,\mathrm{d}\omega)$ has intensity
\[
\mathrm{d}s\otimes r\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r)\otimes\sharp(\mathrm{d}j)\otimes P^{(\lambda)}_r(\mathrm{d}\omega)
\]
on $[0,\infty)\times(0,\infty)\times\mathbb{N}\times D([0,\infty),\mathbb{R})$, that $N^2(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}j,\mathrm{d}\omega)$ has intensity
\[
\psi'(\lambda)\,\mathrm{d}s\otimes \eta_k(\mathrm{d}r)\otimes p_k\,\sharp(\mathrm{d}k)\otimes\sharp(\mathrm{d}j)\otimes P^{(\lambda)}_r(\mathrm{d}\omega)
\]
on $[0,\infty)\times[0,\infty)\times\mathbb{N}\times\mathbb{N}\times D([0,\infty),\mathbb{R})$, and that $N^*(\mathrm{d}s,\mathrm{d}j,\mathrm{d}\omega)$ has intensity $2\beta\,\mathrm{d}s\otimes\sharp(\mathrm{d}j)\otimes Q^{(\lambda)}(\mathrm{d}\omega)$ on $[0,\infty)\times\mathbb{N}\times D([0,\infty),\mathbb{R})$, where $P^{(\lambda)}_r$ is the law of a $\psi_\lambda$-CSBP with initial value $r\ge 0$ (formally speaking, $P^{(\lambda)}_0$ is the law of the null process) and $Q^{(\lambda)}$ is the associated excursion measure.

Our proposed solution to (2.3) will be to first define $(Z_t, t\ge 0)$ as the continuous-time Galton–Watson process with branching rate $\psi'(\lambda)$ and offspring distribution given by $(p_k, k\ge 0)$. It is then easy to write $Z$ in the more complicated form
\[
Z_t = Z_0 + \int_0^t\int_0^\infty\int_0^\infty\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} (k-1)\, N^2(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}j,\mathrm{d}\omega),\qquad t\ge 0.
\]
Next we take
\[
\Lambda_t = X^{(\lambda)}_t + D_t,\qquad t\ge 0, \tag{4.1}
\]
where $X^{(\lambda)}$ is an autonomously independent copy of a $\psi_\lambda$-CSBP issued with initial mass $x$ and, given $N^1$ and $N^2$, $(D_t, t\ge 0)$ is the uniquely identified (up to almost sure modification) ‘dressed skeleton’ described by
\[
\begin{aligned}
D_t &= \int_0^t\int_0^\infty\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^1(\mathrm{d}s,\mathrm{d}r,\mathrm{d}j,\mathrm{d}\omega)
+ \int_0^t\int_0^\infty\int_0^\infty\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^2(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}j,\mathrm{d}\omega)\\
&\quad + \int_0^t\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^*(\mathrm{d}s,\mathrm{d}j,\mathrm{d}\omega).
\end{aligned}
\]
To see why this provides a weak solution to (2.3), we may appeal to the martingale representation of the Markov pair $(\Lambda,Z)$ described above. In particular, the generator of $(\Lambda,Z)$ can be identified consistently with the generator of the process associated to (2.3); that is to say, their common generator is given by
\[
\begin{aligned}
\mathcal{L} f(x,n) &= -\psi'(\lambda)\,x\frac{\partial f}{\partial x}(x,n) + \beta x\frac{\partial^2 f}{\partial x^2}(x,n)
+ x\int_0^\infty \Big[f(x+r,n)-f(x,n)-r\frac{\partial f}{\partial x}(x,n)\Big]\mathrm{e}^{-r\lambda}\,\Pi(\mathrm{d}r)\\
&\quad + n\int_0^\infty\sum_{j\ge 2}\big[f(x+r,n+j-1)-f(x,n)\big]\frac{r^j\lambda^{j-1}}{j!}\mathrm{e}^{-r\lambda}\,\Pi(\mathrm{d}r)\\
&\quad + \beta\lambda n\big[f(x,n+1)-f(x,n)\big] + \frac{\psi(\lambda)}{\lambda}n\big[f(x,n-1)-f(x,n)\big] + 2\beta n\frac{\partial f}{\partial x}(x,n),
\end{aligned}
\]
for $x\ge 0$, $n\in\mathbb{N}$, and for all non-negative, smooth and compactly supported functions $f$. (Here, the penultimate term is understood to be zero when $n=0$.) With this sense of commonality for their generators, it is then easy to verify the conditions of Theorem 2.3 of [24] and thus to conclude that $(\Lambda,Z)$ provides a weak solution to (2.3).

Pathwise uniqueness is also relatively easy to establish. Indeed, suppose that $\Lambda$ is the first component of any path solution to (2.3) with driving sources of randomness $N^0$, $N^1$, $N^2$ and $W$, and suppose that we write it in the form $\Lambda_t =: \Lambda_0 + H(\Lambda_s, s<t) + I_t$, $t\ge 0$, where
\[
H(\Lambda_s, s<t) = -\psi'(\lambda)\int_0^t \Lambda_{s-}\,\mathrm{d}s + \sqrt{2\beta}\int_0^t\int_0^{\Lambda_{s-}} W(\mathrm{d}s,\mathrm{d}u) + \int_0^t\int_0^\infty\int_0^{\Lambda_{s-}} r\,\tilde N^0(\mathrm{d}s,\mathrm{d}r,\mathrm{d}\nu),\qquad t\ge 0,
\]
and
\[
I_t = \int_0^t\int_0^\infty\int_0^{Z_{s-}} r\, N^1(\mathrm{d}s,\mathrm{d}r,\mathrm{d}j) + \int_0^t\int_0^\infty\int_0^\infty\int_0^{Z_{s-}} r\, N^2(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}j) + 2\beta\int_0^t Z_{s-}\,\mathrm{d}s,\qquad t\ge 0.
\]
Recalling that the almost sure path of $Z$ is uniquely defined by $N^2$, it follows that, if $\Lambda^{(1)}$ and $\Lambda^{(2)}$ are two path solutions to (2.3) with the same initial value, then
\[
\Lambda^{(1)}_t - \Lambda^{(2)}_t = H(\Lambda^{(1)}_s, s<t) - H(\Lambda^{(2)}_s, s<t),\qquad t\ge 0.
\]
The reader will now note that the above equation is precisely the SDE one obtains when looking at the path difference between two solutions of an SDE of the type given in (1.5). Since there is pathwise uniqueness for (1.5), we easily conclude that $\Lambda^{(1)} = \Lambda^{(2)}$ almost surely.

Finally, taking account of the existence of a weak solution and pathwise uniqueness, we may appeal to an appropriate version of the Yamada–Watanabe theorem, see for example Theorem 1.2 of [2], to deduce that (2.3) possesses a unique strong solution. And since this holds for every fixed initial configuration $x$ and $n$, it also holds when the initial values are independently randomised.
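The skeleton $Z$ in the construction above is an ordinary continuous-time Galton–Watson process with branching rate $\psi'(\lambda)$ and offspring law $(p_k, k\ge 0)$. As an illustration only (unit branching rate and purely binary offspring are simplifying assumptions, not the general $(p_k)$), a minimal Gillespie-type simulation confirms the elementary mean-growth formula $\mathbb{E}[Z_t] = n\,\mathrm{e}^{qt}$ for the binary case:

```python
import math
import random

def binary_gw(n0, q, t_max, rng):
    """Gillespie simulation of a continuous-time Galton-Watson process
    in which every individual branches into two at rate q (a Yule
    process); returns the population size at time t_max."""
    t, z = 0.0, n0
    while True:
        t += rng.expovariate(z * q)   # time to the next branching event
        if t > t_max:
            return z
        z += 1                        # one parent is replaced by two offspring

rng = random.Random(2024)
q, t_max, n0, runs = 1.0, 0.5, 1, 20000
mean = sum(binary_gw(n0, q, t_max, rng) for _ in range(runs)) / runs
# for binary branching, E[Z_t] = n0 * exp(q * t)
```

The simulation is exact (no time discretisation), so the empirical mean should match $n_0\mathrm{e}^{qt}$ up to Monte Carlo error.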
(i) This claim requires an analytical verification and, in some sense, is similar in spirit to the proof that, for $t\ge 0$, $Z_t\,|\,\Lambda_t$ is Poisson distributed with rate $\lambda^*\Lambda_t$ in the prolific skeletal decomposition found in [4]. A fundamental difference here is that we work with SDEs, and hence stochastic calculus, rather than the integral equations for semigroups used in [4]; moreover, the parameter $\lambda$ need not be the minimal value, $\lambda^*$, in its range.

Standard arguments show that the solution to (2.3) is a strong Markov process and accordingly we write $\mathbb{P}_{x,n}$, $x>0$, $n\in\mathbb{N}$, for its probabilities. Moreover, with an abuse of notation, we write, for $x>0$,
\[
\mathbb{P}_x(\cdot) = \sum_{n\ge 0}\frac{(\lambda x)^n}{n!}\mathrm{e}^{-\lambda x}\,\mathbb{P}_{x,n}(\cdot). \tag{4.2}
\]
Define $f_t(\eta,\theta) := \mathbb{E}_x[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}]$, $x,\theta,\eta,t\ge 0$, and let $F_t := \mathrm{e}^{-\eta\Lambda_t-\theta Z_t}$, $t\ge 0$. Using Itô's formula for semimartingales, cf. Theorem 32 of [41], for $t\ge 0$,
\[
\mathrm{d}F_t = -\eta F_{t-}\,\mathrm{d}\Lambda_t - \theta F_{t-}\,\mathrm{d}Z_t + \tfrac{1}{2}\eta^2 F_{t-}\,\mathrm{d}[\Lambda,\Lambda]^c_t + \tfrac{1}{2}\theta^2 F_{t-}\,\mathrm{d}[Z,Z]^c_t + \eta\theta F_{t-}\,\mathrm{d}[\Lambda,Z]^c_t + \Delta F_t + \eta F_{t-}\Delta\Lambda_t + \theta F_{t-}\Delta Z_t.
\]
(Here and throughout the remainder of this paper, for any stochastic process $Y$, we use the notation $\Delta Y_t = Y_t - Y_{t-}$.) As $Z$ is a pure jump process, we have that $[Z,Z]^c_t = [\Lambda,Z]^c_t = 0$. Taking advantage of the fact that $F_t = F_{t-}\mathrm{e}^{-\eta\Delta\Lambda_t-\theta\Delta Z_t}$, we may thus write in integral form
\[
F_t = F_0 - \eta\int_0^t F_{s-}\,\mathrm{d}\Lambda_s - \theta\int_0^t F_{s-}\,\mathrm{d}Z_s + \beta\eta^2\int_0^t F_{s-}\Lambda_s\,\mathrm{d}s + \sum_{s\le t} F_{s-}\big\{\mathrm{e}^{-\eta\Delta\Lambda_s-\theta\Delta Z_s} - 1 + \eta\Delta\Lambda_s + \theta\Delta Z_s\big\},
\]
where the sum is taken over the countable set of discontinuities of $(\Lambda,Z)$. We can split up the sum of discontinuities according to the Poisson random measure in (2.3) that is responsible for the discontinuity.
Hence, writing $\Delta^{(j)}$, $j=0,1,2$, to mean an increment coming from each of the three Poisson random measures,
\[
\begin{aligned}
F_t &= F_0 - \eta\int_0^t F_{s-}\,\mathrm{d}\Lambda_s - \theta\int_0^t F_{s-}\,\mathrm{d}Z_s + \beta\eta^2\int_0^t F_{s-}\Lambda_{s-}\,\mathrm{d}s
+ \sum_{s\le t}F_{s-}\big\{\mathrm{e}^{-\eta\Delta^{(0)}\Lambda_s}-1+\eta\Delta^{(0)}\Lambda_s\big\}\\
&\quad + \sum_{s\le t}F_{s-}\big\{\mathrm{e}^{-\eta\Delta^{(1)}\Lambda_s}-1+\eta\Delta^{(1)}\Lambda_s\big\}
+ \sum_{s\le t}F_{s-}\big\{\mathrm{e}^{-\eta\Delta^{(2)}\Lambda_s-\theta\Delta Z_s}-1+\eta\Delta^{(2)}\Lambda_s+\theta\Delta Z_s\big\}.
\end{aligned}
\tag{4.3}
\]
Now, note that we can re-write the first element of the vectorial SDE (2.3) as
\[
\Lambda_t = \Lambda_0 - \psi'(\lambda)\int_0^t\Lambda_{s-}\,\mathrm{d}s + \sqrt{2\beta}\int_0^t\int_0^{\Lambda_{s-}}W(\mathrm{d}s,\mathrm{d}u) + M_t + \sum_{s\le t}\Delta^{(1)}\Lambda_s + \sum_{s\le t}\Delta^{(2)}\Lambda_s + 2\beta\int_0^t Z_{s-}\,\mathrm{d}s,\qquad t\ge 0,
\]
where $M_t$ is a zero-mean martingale corresponding to the integral in (2.3) with respect to $\tilde N^0$. Therefore, performing the necessary calculus in (4.3) for the integral with respect to $\mathrm{d}\Lambda_t$, we get that
\[
F_t - F_0 - \eta(\psi'(\lambda)+\eta\beta)\int_0^t F_{s-}\Lambda_{s-}\,\mathrm{d}s - \sum_{s\le t}F_{s-}\big\{\mathrm{e}^{-\eta\Delta^{(2)}\Lambda_s-\theta\Delta Z_s}-1\big\} + 2\eta\beta\int_0^t F_{s-}Z_{s-}\,\mathrm{d}s - \sum_{s\le t}F_{s-}\big\{\mathrm{e}^{-\eta\Delta^{(1)}\Lambda_s}-1\big\} - \sum_{s\le t}F_{s-}\big\{\mathrm{e}^{-\eta\Delta^{(0)}\Lambda_s}-1+\eta\Delta^{(0)}\Lambda_s\big\},
\]
for $t\ge 0$, is equal to a zero-mean martingale, which is the sum of the previously mentioned $M_t$, $t\ge 0$, and the white-noise integral. Taking expectations, we thus have
\[
\begin{aligned}
f_t(\eta,\theta) &= f_0(\eta,\theta) + \eta(\psi'(\lambda)+\eta\beta)\,\mathbb{E}_x\int_0^t F_{s-}\Lambda_{s-}\,\mathrm{d}s - 2\eta\beta\,\mathbb{E}_x\int_0^t F_{s-}Z_{s-}\,\mathrm{d}s\\
&\quad + \mathbb{E}_x\int_0^t F_{s-}\Lambda_{s-}\,\mathrm{d}s\left(\int_0^\infty(\mathrm{e}^{-\eta r}-1+\eta r)\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r)\right)
+ \mathbb{E}_x\int_0^t F_{s-}Z_{s-}\,\mathrm{d}s\left(\int_0^\infty(\mathrm{e}^{-\eta r}-1)\,r\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r)\right)\\
&\quad + \mathbb{E}_x\int_0^t F_{s-}Z_{s-}\,\mathrm{d}s\,\sum_{k\ge 0,\,k\ne 1}\int_0^\infty(\mathrm{e}^{-\eta r-\theta(k-1)}-1)\left\{\frac{\psi(\lambda)}{\lambda}\mathbf{1}_{\{k=0\}}\delta_0(\mathrm{d}r) + \beta\lambda\mathbf{1}_{\{k=2\}}\delta_0(\mathrm{d}r) + \mathbf{1}_{\{k\ge 2\}}\frac{\lambda^{k-1}r^k}{k!}\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r)\right\}.
\end{aligned}
\]
It follows that $f_t(\eta,\theta)$ satisfies the following PDE:
\[
\frac{\partial}{\partial t}f_t(\eta,\theta) = A_\lambda(\eta,\theta)\frac{\partial}{\partial\eta}f_t(\eta,\theta) + B_\lambda(\eta,\theta)\frac{\partial}{\partial\theta}f_t(\eta,\theta),\qquad f_0(\eta,\theta) = \mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))x}, \tag{4.4}
\]
where
\[
A_\lambda(\eta,\theta) = \eta(-\psi'(\lambda)-\eta\beta) - \int_0^\infty(\mathrm{e}^{-\eta r}-1+\eta r)\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r)
\]
and
\[
B_\lambda(\eta,\theta) = 2\eta\beta - \sum_{k=0}^\infty\int_0^\infty(\mathrm{e}^{-\eta r-\theta(k-1)}-1)\left\{\frac{\psi(\lambda)}{\lambda}\mathbf{1}_{\{k=0\}}\delta_0(\mathrm{d}r) + \beta\lambda\mathbf{1}_{\{k=2\}}\delta_0(\mathrm{d}r) + \mathbf{1}_{\{k\ge 1\}}\frac{\lambda^{k-1}r^k}{k!}\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r)\right\}.
\]
(Note that the $k=1$ term of the sum in $B_\lambda$ is precisely the $N^1$-immigration integral displayed separately above.)
Standard theory for the linear partial differential equation (4.4), see for example Chapter 3 (Theorem 2, p. 107) of [18] and references therein, tells us that it has a unique local solution. Our aim now is to show that this solution is also represented by
\[
\mathbb{E}_x\big[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_t}\big] = \mathrm{e}^{-u_t(\eta+\lambda(1-\mathrm{e}^{-\theta}))x},\qquad x,t,\theta,\eta\ge 0, \tag{4.5}
\]
where we recall that $X$ is the $\psi$-CSBP. To this end, let us define $\kappa = \eta+\lambda(1-\mathrm{e}^{-\theta})$ and note that, for $x,t,\kappa\ge 0$, $g_t(\kappa) := \mathbb{E}_x[\exp\{-\kappa X_t\}]$ satisfies
\[
\frac{\partial}{\partial t}g_t(\kappa) = -\psi(\kappa)\frac{\partial}{\partial\kappa}g_t(\kappa),\qquad g_0(\kappa) = \mathrm{e}^{-\kappa x}. \tag{4.6}
\]
See for example Exercise 12.2 in [25]. After a laborious amount of algebra one can verify that $-\psi(\kappa) = A_\lambda(\eta,\theta) + \lambda\mathrm{e}^{-\theta}B_\lambda(\eta,\theta)$, and hence we may develop the right-hand side of (4.6) and write, for $x,t,\eta,\theta\ge 0$,
\[
\frac{\partial}{\partial t}g_t(\kappa) = A_\lambda(\eta,\theta)\frac{\partial}{\partial\kappa}g_t(\kappa) + \lambda\mathrm{e}^{-\theta}B_\lambda(\eta,\theta)\frac{\partial}{\partial\kappa}g_t(\kappa) = A_\lambda(\eta,\theta)\frac{\partial}{\partial\eta}g_t(\kappa) + B_\lambda(\eta,\theta)\frac{\partial}{\partial\theta}g_t(\kappa).
\]
Now we choose $x=1$. Then local uniqueness of the solution to (4.4) (or, equivalently, local uniqueness for (4.6)) tells us that there exists $t_0>0$ such that $g_t(\eta+\lambda(1-\mathrm{e}^{-\theta})) = f_t(\eta,\theta)$ for all $\eta,\theta\ge 0$ and $t\in[0,t_0]$.

In conclusion, now that we have proved that, for $t\in[0,t_0]$,
\[
\mathbb{E}_1\big[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_t}\big] = \mathbb{E}_1\big[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}\big],\qquad \theta,\eta\ge 0, \tag{4.7}
\]
we can sequentially observe the following implications. Firstly, setting $\theta=0$ and $\eta>0$, we see that $\Lambda_t$ under $\mathbb{P}_1$ has the same distribution as $X_t$ under $\mathbb{P}_1$ for all $t\in[0,t_0]$. Next, setting both $\eta,\theta>0$, we observe that $(\Lambda_t,Z_t)$ under $\mathbb{P}_1$ has the same law as $(X_t, \mathrm{Po}(\lambda x)|_{x=X_t})$ under $\mathbb{P}_1$, where $\mathrm{Po}(\lambda x)$ is an autonomously independent Poisson random variable with rate $\lambda x$. In particular, it follows that, for all $t\in[0,t_0]$, under $\mathbb{P}_1$, the law of $Z_t$ given $\Lambda_t$ is $\mathrm{Po}(\lambda\Lambda_t)$.

To get a global result, we first show that the previous conclusions hold for any initial mass $x>0$ on the time interval $[0,t_0]$; then, using the Markov property, we extend the results to any $t>0$.
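The ‘laborious algebra’ is easy to sanity-check numerically in the quadratic case $\psi(\theta)=\beta\theta^2$ (so $\alpha=0$ and $\Pi\equiv 0$; an illustrative assumption chosen purely because it gives closed forms, with $u_t(\kappa)=\kappa/(1+\beta\kappa t)$). The sketch below verifies both the identity $-\psi(\kappa)=A_\lambda(\eta,\theta)+\lambda\mathrm{e}^{-\theta}B_\lambda(\eta,\theta)$ along $\kappa=\eta+\lambda(1-\mathrm{e}^{-\theta})$ and, by a central difference, that $u_t$ solves the ODE $\partial_t u_t = -\psi(u_t)$ underlying (4.6):

```python
import math

beta, lam = 0.7, 1.3          # beta > 0 and the skeleton parameter lambda
eta, theta = 0.9, 0.4

psi = lambda z: beta * z * z                 # Feller mechanism; psi'(z) = 2*beta*z
kappa = eta + lam * (1 - math.exp(-theta))

# characteristic coefficients of the PDE (4.4), specialised to Pi = 0
A = -eta * (2 * beta * lam) - beta * eta ** 2
B = (2 * eta * beta
     - (math.exp(theta) - 1) * psi(lam) / lam
     - (math.exp(-theta) - 1) * beta * lam)
identity_gap = abs(-psi(kappa) - (A + lam * math.exp(-theta) * B))

# u_t(kappa) = kappa / (1 + beta*kappa*t) solves du/dt = -psi(u), u_0 = kappa
u = lambda t: kappa / (1 + beta * kappa * t)
h, t = 1e-6, 1.3
ode_gap = abs((u(t + h) - u(t - h)) / (2 * h) + psi(u(t)))
```

Both gaps should vanish up to floating-point and finite-difference error.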
First, from (4.5) we can observe that
\[
\mathbb{E}_x\big[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_t}\big] = \Big(\mathbb{E}_1\big[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_t}\big]\Big)^x.
\]
Thus, in order to extend the previous results to any $x>0$, we only need to prove that $\mathbb{E}_x[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}] = (\mathbb{E}_1[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}])^x$. Recalling the representation (4.1) and the notation (4.2), we can write, for $t\le t_0$,
\[
\begin{aligned}
\mathbb{E}_x\big[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}\big]
&= \sum_{n\ge 0}\frac{(\lambda x)^n}{n!}\mathrm{e}^{-\lambda x}\,\mathbb{E}_{(x,n)}\big[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}\big]
= \sum_{n\ge 0}\frac{(\lambda x)^n}{n!}\mathrm{e}^{-\lambda x}\,\mathbb{E}_x\big[\mathrm{e}^{-\eta X^{(\lambda)}_t}\big]\,\mathbb{E}_{(0,n)}\big[\mathrm{e}^{-\eta D_t-\theta Z_t}\big]\\
&= \Big(\mathbb{E}_1\big[\mathrm{e}^{-\eta X^{(\lambda)}_t}\big]\Big)^x\sum_{n\ge 0}\frac{(\lambda x)^n}{n!}\mathrm{e}^{-\lambda x}\Big(\mathbb{E}_{(0,1)}\big[\mathrm{e}^{-\eta D_t-\theta Z_t}\big]\Big)^n
= \Big(\mathbb{E}_1\big[\mathrm{e}^{-\eta X^{(\lambda)}_t}\big]\,\mathrm{e}^{\lambda(\mathbb{E}_{(0,1)}[\mathrm{e}^{-\eta D_t-\theta Z_t}]-1)}\Big)^x
= \Big(\mathbb{E}_1\big[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}\big]\Big)^x,
\end{aligned}
\]
as required. Now take $t_0 < t\le 2t_0$ and use the tower property to get
\[
\mathbb{E}_x\big[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}\big] = \mathbb{E}_x\Big[\mathbb{E}_{\Lambda_{t_0}}\big[\mathrm{e}^{-\eta\Lambda_{t-t_0}-\theta Z_{t-t_0}}\big]\Big] = \int_{\mathbb{R}_+}\mathbb{E}_y\big[\mathrm{e}^{-\eta\Lambda_{t-t_0}-\theta Z_{t-t_0}}\big]\,\mathbb{P}_x(\Lambda_{t_0}\in\mathrm{d}y),
\]
and similarly
\[
\mathbb{E}_x\big[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_t}\big] = \int_{\mathbb{R}_+}\mathbb{E}_y\big[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_{t-t_0}}\big]\,\mathbb{P}_x(X_{t_0}\in\mathrm{d}y).
\]
Thus, using local uniqueness and the previously deduced implications on $[0,t_0]$, we see that $\mathbb{E}_x[\mathrm{e}^{-\eta\Lambda_t-\theta Z_t}] = \mathbb{E}_x[\mathrm{e}^{-(\eta+\lambda(1-\mathrm{e}^{-\theta}))X_t}]$ for $t\in[0,2t_0]$, and by iterating the previous argument we get equality for any $t>0$.

Finally, on account of the fact that $((\Lambda_t,Z_t), t\ge 0)$ is a joint Markovian pair, this now global Poissonisation allows us to infer that $(\Lambda_t, t\ge 0)$ is itself Markovian. Indeed, for any bounded measurable positive $h$ and $s,t\ge 0$,
\[
\mathbb{E}\big[h(\Lambda_{t+s})\,\big|\,\mathcal{F}^\Lambda_t\big] = \sum_{n\ge 0}\frac{(\lambda\Lambda_t)^n}{n!}\mathrm{e}^{-\lambda\Lambda_t}\,\mathbb{E}_{x,n}\big[h(\Lambda_s)\big]\Big|_{x=\Lambda_t} = \mathbb{E}_x\big[h(\Lambda_s)\big]\Big|_{x=\Lambda_t}.
\]
We may now conclude that, for all $t\ge 0$ and $x>0$, under $\mathbb{P}_x$, $Z_t\,|\,\mathcal{F}^\Lambda_t$ is $\mathrm{Po}(\lambda\Lambda_t)$-distributed, as required.

(ii) We have seen that the pair $((\Lambda_t,Z_t), t\ge 0)$ is a Markov process for any initial state $(x,n)$ but, due to the dependence on $Z$, on its own $(\Lambda_t, t\ge 0)$ is not Markovian. However, considering (4.7), we see that, after the Poissonisation of $Z_0$, $(\Lambda_t, t\ge 0)$ becomes a Markov process with semigroup that agrees with that of $(X_t, t\ge 0)$. On account of the fact that $X$ is the unique weak solution to (1.5), it automatically follows that $\Lambda$ also represents the unique weak solution to (1.5).

(iii) Since the event $\{Z_0=0\}$ implies the event $\{Z_t=0, t\ge 0\}$, the system (2.3) reduces to the SDE
\[
\Lambda_t = x - \psi'(\lambda)\int_0^t\Lambda_{s-}\,\mathrm{d}s + \sqrt{2\beta}\int_0^t\int_0^{\Lambda_{s-}}W(\mathrm{d}s,\mathrm{d}u) + \int_0^t\int_0^\infty\int_0^{\Lambda_{s-}} r\,\tilde N^0(\mathrm{d}s,\mathrm{d}r,\mathrm{d}\nu),
\]
which has exactly the form of the SDE describing the evolution of a CSBP with branching mechanism $\psi_\lambda$. □
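On $\{Z_0=0\}$ the system thus runs as a $\psi_\lambda$-CSBP. In the Feller case ($\Pi\equiv 0$) the reduced SDE reads $\mathrm{d}\Lambda_t = -\psi'(\lambda)\Lambda_t\,\mathrm{d}t + \sqrt{2\beta\Lambda_t}\,\mathrm{d}B_t$, whose mean decays like $\mathrm{e}^{-\psi'(\lambda)t}$. A seeded Euler–Maruyama sketch (all parameter values are illustrative assumptions) reproduces this decay:

```python
import math
import random

rng = random.Random(7)
c, beta = 0.5, 0.3            # c plays the role of psi'(lambda) > 0
x0, t_max, steps, paths = 1.0, 1.0, 100, 5000
dt = t_max / steps

total = 0.0
for _ in range(paths):
    lam_t = x0
    for _ in range(steps):
        # Euler-Maruyama step; the positive part keeps the square root defined
        dw = rng.gauss(0.0, math.sqrt(dt))
        lam_t += -c * lam_t * dt + math.sqrt(2 * beta * max(lam_t, 0.0)) * dw
    total += max(lam_t, 0.0)
mean_est = total / paths
# for this subcritical diffusion, E[Lambda_t] = x0 * exp(-c * t)
```

The estimate carries both Monte Carlo and discretisation error, so only a loose agreement with $x_0\mathrm{e}^{-ct}$ should be expected.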
5. Exploration of subcritical CSBPs.
The objective of this section is to give a heuristic description of how the notion of a prolific skeleton emerges in the subcritical case, and specifically why the structure of the SDE (2.5) is meaningful in this respect. We need to be careful about what one means by ‘prolific’, but nonetheless the inspiration for a decomposition can be gleaned by examining in more detail the description of subcritical CSBPs through the exploration process.

We assume throughout the conditions of Theorem 2.2; that is to say, $X$ is a (sub)critical $\psi$-CSBP, where $\psi$ satisfies Grey's condition (1.7). Let $(\xi_t, t\ge 0)$ be a spectrally positive Lévy process with Laplace exponent $\psi$. Using the classical work of [32, 33] (see also [11, 10, 31]), we can use generalised Ray–Knight-type theorems to construct $X$ in terms of the so-called height process associated to $\xi$. For convenience, and to introduce more notation, we give a brief overview here.

Denote by $(\hat\xi^{(t)}_r, 0\le r\le t)$ the process time-reversed at time $t$, that is, $\hat\xi^{(t)}_r := \xi_t - \xi_{(t-r)-}$, and let $\hat S^{(t)}_r := \sup_{s\le r}\hat\xi^{(t)}_s$. We define $H_t$ as the local time at level $0$, at time $t$, of the process $\hat S^{(t)} - \hat\xi^{(t)}$. Because the reversed process has a different point from which it is reversed at each time, the process $H$ does not behave in a monotone way. The process $(H_t, t\ge 0)$ is called the $\psi$-height process, which, under assumption (1.7), is continuous. There exists a notion of local time of $H$, up to time $t$, at level $a\ge 0$, henceforth denoted by $L^a_t$. Specifically, the family $(L^a_t,\ a,t\ge 0)$ satisfies
\[
\int_0^t g(H_s)\,\mathrm{d}s = \int_0^\infty g(a)\,L^a_t\,\mathrm{d}a,\qquad t\ge 0,
\]
where $g$ is any non-negative measurable function. For $x>0$, let $T_x := \inf\{t\ge 0 : \xi_t = -x\}$. Then the generalised Ray–Knight theorem for the $\psi$-CSBP states that $(L^a_{T_x}, a\ge 0)$ has a càdlàg modification for which
\[
(L^t_{T_x}, t\ge 0) \overset{d}{=} (X, \mathbb{P}_x),
\]
the $\psi$-CSBP issued from $x$.
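Before passing to excursions, it may help to see the discrete skeleton of the Ray–Knight idea. The toy sketch below (an illustration only, not the continuum construction) explores a Galton–Watson tree in depth-first order; the resulting sequence of depths is a discrete height process, and its ‘local time’ at level $a$ — the number of visits to depth $a$ — recovers exactly the size of generation $a$:

```python
import random
from collections import Counter

def gw_height_sequence(rng, max_nodes=100000):
    """Explore a Galton-Watson tree (offspring uniform on {0,0,1,2},
    mean 3/4, so the tree is a.s. finite) in depth-first order and
    return (dfs_depths, generation_sizes)."""
    dfs_depths = []
    gen = Counter({0: 1})             # the root is generation 0
    stack = [0]                       # depths of vertices still to visit
    while stack and len(dfs_depths) < max_nodes:
        d = stack.pop()
        dfs_depths.append(d)          # one visit of the height sequence at level d
        k = rng.choice([0, 0, 1, 2])  # offspring number of this vertex
        gen[d + 1] += k
        stack.extend([d + 1] * k)
    return dfs_depths, gen

rng = random.Random(11)
depths, gen = gw_height_sequence(rng)
local_time = Counter(depths)          # visits of the height sequence at each level
```

Here the ‘local time at level $a$’ of the DFS depth sequence coincides with generation size $a$ by construction; the continuum Ray–Knight theorem is the far deeper statement that the analogous identity survives the scaling limit.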
It can be shown that the excursions of $H$ from $0$ form a time-homogeneous Poisson point process of excursions with respect to local time at $0$. We shall use $n$ to denote its intensity measure. If $X_0 = x$, then the total amount of local time of $H$ accumulated at zero is $x$. Each excursion codes a real tree (see [10] for a precise meaning), such that the excursion that occurs after $u\le x$ units of local time can be thought of as the descendants of the ‘$u$-th’ individual in the initial population.

Here we are interested in the genealogy of the conditioned process and what we will call the embedded ‘$T$-prolific’ tree, that is, the tree of the individuals that survive up to time $T$. Conditioning the process on survival up to time $T$ corresponds to conditioning the height process to have at least one excursion above level $T$. (We have the slightly confusing, but nonetheless standard, notational anomaly that a spatial height for an excursion corresponds to the spatial height in the tree that it codes, but that this may also be seen as a time into the forward evolution of the tree.) Let $n_T$ denote the conditional probability $n(\,\cdot\,|\sup_{s\ge 0}\epsilon_s\ge T)$, where $\epsilon$ is a canonical excursion of $H$ under $n$. Let $(Z^T_t, t\ge 0)$ be the process that counts the number of excursions above level $t$ that hit level $T$ within the excursion $\epsilon$. Duquesne and Le Gall in [10] describe the distribution of $Z^T$ under $n_T$ and prove the following.

Theorem. Under $n_T$ the process $(Z^T_t,\ 0\le t$
6. Finite-time horizon Skeleton: Proof of Theorem 2.2.
Now that we understand that the mathematical structure of (2.5) is little more than a time-dependent version of (2.3), the reader will not be surprised by the claim that the proof of strong uniqueness for (2.5), as well as of parts (i) and (ii) of Theorem 2.1, passes through almost verbatim, albeit needing some minor adjustments for additional time derivatives of $u_{T-t}(\infty)$, which plays the role of $\lambda$, in e.g. (4.3) and (4.6). To avoid repetition, we simply leave the proof of these two parts as an exercise for the reader.

On the event $\{Z_0=0\}$, which is concurrent with the event $\{Z_t=0,\ 0\le t<T\}$, close inspection of (2.5) allows us to note that $\Lambda$ is generated by an SDE with time-varying coefficients. Indeed, standard arguments show that, conditional on $\{Z_0=0\}$, $\Lambda$ is a time-inhomogeneous Markov process.

Suppose that we write $\mathbb{P}^T_{x,n}$, $x\ge 0$, $n\in\mathbb{N}$, for the Markov probabilities corresponding to the solution of (2.5). Moreover, we will again abuse this notation in the spirit of (4.2) and write $\mathbb{P}^T_x$, $x\ge 0$, when $Z^T_0$ is randomised to be an independent Poisson random variable with rate $u_T(\infty)x$. We can use parts (i) and (ii) of Theorem 2.2, together with (5.3), to deduce that
\[
\mathbb{E}^T_x\big[\mathrm{e}^{-\eta\Lambda_t}\,\big|\,Z_0=0\big] = \frac{\mathbb{E}^T_x\big[\mathrm{e}^{-\eta\Lambda_t},\,Z_0=0\big]}{\mathbb{P}^T_x(Z_0=0)} = \frac{\mathbb{E}^T_x\big[\mathrm{e}^{-\eta\Lambda_t},\,Z_t=0\big]}{\mathbb{P}^T_x(Z_0=0)} = \mathrm{e}^{u_T(\infty)x}\,\mathbb{E}^T_x\big[\mathrm{e}^{-(\eta+u_{T-t}(\infty))\Lambda_t}\big], \tag{6.1}
\]
which, on account of parts (i) and (ii) and the change of measure (5.3), equals the Laplace transform at time $t$ of $X$ conditioned to become extinct by time $T$. This tells us that the semigroups of $\Lambda$ conditional on $\{Z_0=0\}$ and of $X$ conditioned to become extinct by time $T$ agree. Part (iii) of Theorem 2.2 is thus proved. □
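The manipulation in (6.1) leans on Grey's condition making $u_t(\infty)$ finite and on the flow property of $u$. In the Feller case $\psi(\theta)=\beta\theta^2$ (an illustrative assumption), $u_t(\theta)=\theta/(1+\beta\theta t)$ and $u_t(\infty)=1/(\beta t)$, and the consistency $u_T(\infty) = u_t(u_{T-t}(\infty))$ — which is what lets one trade the event $\{Z_t=0\}$ for $\{Z_0=0\}$ — can be checked directly:

```python
beta, T, t = 0.4, 5.0, 2.0

u = lambda s, th: th / (1 + beta * th * s)   # u_s(theta) for psi(z) = beta*z^2
u_inf = lambda s: 1.0 / (beta * s)           # u_s(infinity), finite by Grey's condition

# semigroup/flow property of the extinction rate: u_T(inf) = u_t(u_{T-t}(inf))
flow_gap = abs(u(t, u_inf(T - t)) - u_inf(T))
```

With the chosen values, $u_{T-t}(\infty)=1/1.2$ flows through $u_t$ to exactly $u_T(\infty)=1/2$.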
7. Thinning the skeleton to a spine: Proof of Theorem 2.3.
The aim of this section is to recover the unique solution to (1.10) as a weak limit of (2.5) in the sense of Skorokhod convergence. To this end, we assume throughout the conditions of Theorem 2.3; in particular, $\psi$ is a critical or subcritical branching mechanism and Grey's condition (1.7) holds.

There are three main reasons why we should expect this result, and they pertain to the three structural features of the skeleton decomposition: the Poisson embedding, the Galton–Watson skeleton, and the branching immigration from the skeleton with an Esscher-transformed branching mechanism. Let us dwell briefly on these heuristics.

First, let us consider the behaviour of the skeleton $(Z^T_t, t<T)$ as $T\to\infty$. As we are assuming that $\psi$ is a (sub)critical branching mechanism, it holds that $\lim_{T\to\infty}u_T(\infty)=0$. Thus, recalling that $Z^T_0\sim \mathrm{Po}(u_T(\infty)x)$, i.e. independent and Poisson distributed with parameter $u_T(\infty)x$, and hence that conditioning on survival to time $T$ in the skeletal decomposition is tantamount to conditioning on the event $\{Z^T_0\ge 1\}$, we see that
\[
\varpi^{x,T}_k := \mathbb{P}^T_x\big[Z_0=k\,\big|\,Z_0\ge 1\big] = \frac{(u_T(\infty)x)^k}{k!}\,\frac{\mathrm{e}^{-u_T(\infty)x}}{1-\mathrm{e}^{-u_T(\infty)x}},\qquad k\ge 1. \tag{7.1}
\]
We thus see that the probabilities (7.1) all tend to zero unless $k=1$, in which case the limit is unity. Moreover, Theorem 2.2 (ii) and (iii) imply that the law of $(\Lambda^T_t, 0\le t<T)$, conditional on $\{Z^T_0\ge 1\}$, corresponds to the law of the $\psi$-CSBP, $X$, conditioned to survive until time $T$. Intuitively, then, one is compelled to believe that, in law, there is asymptotically a single skeletal contribution to the law of $X$ conditioned to survive.

Second, considering (5.1), it follows from l'Hôpital's rule that the rate at which the aforementioned most recent common ancestor branches begins to slow down, since
\[
\frac{\psi(u_T(\infty))}{u_T(\infty)}\,\frac{u_{T-t}(\infty)}{\psi(u_{T-t}(\infty))} \to 1,\qquad\text{as } T\to\infty.
\]
Furthermore, considering (5.2), we also get that, in the limit, the offspring distribution satisfies $\lim_{T\to\infty}p^T_k = 0$ for all $k\ge 2$. What we are thus observing is a thinning, in the weak sense, of the skeleton, both in terms of the number of branching events and in terms of the number of offspring.

Thirdly, we consider the mass that immigrates from the skeleton. For fixed $T$, it evolves as a $\psi$-CSBP conditioned to die before time $T$. We recall that conditioning to die before time $T$ is tantamount to the change of measure given in (5.3). It is easy to see that, as $T\to\infty$, the density in this change of measure converges to unity, and hence the immigrating mass should, in the weak limit, evolve as a $\psi$-CSBP.

With all this evidence in hand, Theorem 2.3 should now take on a natural meaning. We give its proof below.

Proof of Theorem 2.3.
According to Theorem 2.5 on p. 167 of [17, Chapter 4], if $E$ is a locally compact and separable metric space, $P^T := (P^T_t, t\ge 0)$, $T>0$, is a sequence of Feller semigroups on $C_0(E)$ (the space of continuous functions on $E$ vanishing at $\infty$, endowed with the supremum norm), and $P := (P_t, t\ge 0)$ is a Feller semigroup on $C_0(E)$ such that, for $f\in C_0(E)$, with respect to the supremum norm on $C_0(E)$,
\[
\lim_{T\to\infty}P^T_t f = P_t f,\qquad t\ge 0, \tag{7.2}
\]
and, moreover, $(\nu_T, T>0)$ is a sequence of Borel probability measures on $E$ such that $\lim_{T\to\infty}\nu_T = \nu$ weakly for some probability measure $\nu$, then, with respect to the Skorokhod topology on $D([0,\infty),E)$, $\Xi^T$ converges weakly to $\Xi$, where $(\Xi^T, T>0)$ are the strong Markov processes associated to $(P^T, T>0)$ with initial laws $(\nu_T, T>0)$, and $\Xi$ is the strong Markov process associated to $P$ with initial law $\nu$.

Note that such weak convergence results would normally require a tightness criterion; however, having the luxury of (7.2), where $P$ is a Feller semigroup, removes this condition, and this will be the setting in which we are able to apply the conclusion of the previous paragraph.

Fix $t_0>0$. We want to prove the weak convergence result in the finite time window $[0,t_0]$. In order to introduce the role of $P^T$, $T>0$, in our setting, we will abuse yet further previous notation and define $\mathbb{P}^T_{x,n,s}$, $x\ge 0$, $n\ge 0$, $s\in[0,t_0]$, to be the Markov probabilities associated to the three-dimensional process $((\Lambda_t, Z_t, \tau_t),\ 0\le t\le t_0)$, whenever $t_0<T$, where $((\Lambda_t,Z_t),\ 0\le t\le t_0)$ is the weak solution to (2.5) and $\tau_t := t$, $0\le t\le t_0$.
Consistently with previous related notation, we have $\mathbb{P}^T_{x,n,0} = \mathbb{P}^T_{x,n}$, $x\ge 0$, $n\ge 0$. Now define the associated time-dependent semigroup for the three-dimensional process: for $t\ge 0$ and $f\in C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$, set $P^T_t[f](x,n,s) = f(x,n,s)$ when $T\le t_0$, and, when $T>t_0$,
\[
P^T_t[f](x,n,s) := \mathbb{E}^T\big[f(\Lambda_{(t\vee s)\wedge t_0}, Z_{(t\vee s)\wedge t_0}, \tau_{(t\vee s)\wedge t_0})\,\big|\,\Lambda_s=x,\, Z_s=n,\, \tau_s=s\big]
\]
for $(x,n,s)\in[0,\infty)\times\mathbb{N}\times[0,t_0]$, and $P^T_t[f](x,n,s) := f(x,n,s)$ for $(x,n,s)\in[0,\infty)\times\mathbb{N}\times(t_0,\infty)$. We take $P^T = (P^T_t, t\ge 0)$. In order to verify the Feller property of $P^T$, we need to check two things (cf. Proposition 2.4 in Chapter III of [42]):

(i) For each $t\ge 0$, the function $(x,n,s)\mapsto P^T_t[f](x,n,s)$ belongs to $C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$, for any $f$ in that space.

(ii) For all $f\in C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$ and for each $(x,n,s)\in[0,\infty)\times\mathbb{N}\times[0,\infty)$, we have $\lim_{t\downarrow 0}P^T_t[f](x,n,s) = f(x,n,s)$.

Note that when $T\le t_0$, or $s\ge t_0$, or $t\le s\le t_0$, we have $P^T_t[f](x,n,s) = f(x,n,s)$. Since $f\in C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$, both (i) and (ii) are then trivially satisfied. We can also notice that the case when $T>t_0$, $s\le t_0$ and $t\ge t_0$ reduces to the case $t=t_0$; hence, in order to show the Feller property of $P^T$, we can restrict ourselves to the case $s\le t\le t_0<T$.

By denseness (according to the uniform topology) of the sub-algebra generated by exponential functions in $C_0(E)$, it suffices to check, for (i), that
\[
(x,n,s)\mapsto \mathbb{E}^T_{x,n,s}\big[\mathrm{e}^{-\gamma\Lambda_t-\theta Z_t-\varphi\tau_t}\big],\qquad s\le t\le t_0<T, \tag{7.3}
\]
belongs to $C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$ and, for (ii), that
\[
\lim_{t\downarrow 0}\mathbb{E}^T_{x,n,s}\big[\mathrm{e}^{-\gamma\Lambda_t-\theta Z_t-\varphi\tau_t}\big] = \mathrm{e}^{-\gamma x-\theta n-\varphi s},\qquad s\le t\le t_0<T. \tag{7.4}
\]
To this end, note that
\[
\mathbb{E}^T_{x,n,s}\big[\mathrm{e}^{-\gamma\Lambda_t-\theta Z_t-\varphi\tau_t}\big] = \mathbb{E}^{T-s}_{x,n}\big[\mathrm{e}^{-\gamma\Lambda_{t-s}-\theta Z_{t-s}}\big]\,\mathrm{e}^{-\varphi t},\qquad s\le t\le t_0<T. \tag{7.5}
\]
In order to evaluate the expectation on the right-hand side above, we want to work with an appropriate representation of the unique weak solution to (2.5). We shall do so by following the example of how the weak solution to (2.3) was identified in the form (4.1).

As before, we need to introduce additionally marked versions of the Poisson random measures $N^1_T$ and $N^2_T$, as well as an additional Poisson random measure $N^*_T$. We will insist that the Poisson random measure $N^1_T(\mathrm{d}s,\mathrm{d}r,\mathrm{d}j,\mathrm{d}\omega)$ on $[0,T)\times[0,\infty)\times\mathbb{N}\times D([0,\infty),\mathbb{R})$ has intensity
\[
\mathrm{d}s\otimes r\mathrm{e}^{-u_{T-s}(\infty)r}\,\Pi(\mathrm{d}r)\otimes\sharp(\mathrm{d}j)\otimes P^{T-s}_r(\mathrm{d}\omega),
\]
that the Poisson random measure $N^2_T(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}j,\mathrm{d}\omega)$ on $[0,T)\times[0,\infty)\times\mathbb{N}\times\mathbb{N}\times D([0,\infty),\mathbb{R})$ has intensity
\[
q_{T-s}\,\mathrm{d}s\otimes \eta^{T-s}_k(\mathrm{d}r)\otimes p^{T-s}_k\,\sharp(\mathrm{d}k)\otimes\sharp(\mathrm{d}j)\otimes P^{T-s}_r(\mathrm{d}\omega),
\]
and that the Poisson random measure $N^*_T(\mathrm{d}s,\mathrm{d}j,\mathrm{d}\omega)$ has intensity $2\beta\,\mathrm{d}s\otimes\sharp(\mathrm{d}j)\otimes Q^{T-s}(\mathrm{d}\omega)$ on $[0,T)\times\mathbb{N}\times D([0,\infty),\mathbb{R})$, where $Q^T$ is the excursion measure associated to $P^T_r$, $r\ge 0$, satisfying
\[
Q^T(1-\mathrm{e}^{-\gamma\omega_t}) = V^T_t(\gamma),\qquad \gamma>0, \tag{7.6}
\]
for $0\le t<T$, where $V^T_t$ was defined in (5.6). To recall some of the notation used in these rates, see (2.4) and (2.6).

If the pair $(\Lambda,Z)$ has law $\mathbb{P}^T_{x,n}$, then we can write
\[
\Lambda_t = X_t + D_t,\qquad t<T, \tag{7.7}
\]
where $X$ is autonomously independent with law $P^T_x$ and, given $N^2_T$, $D$ is the uniquely identified (up to almost sure modification) ‘dressed skeleton’ described by
\[
\begin{aligned}
D_t &= \int_0^t\int_0^\infty\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^1_T(\mathrm{d}s,\mathrm{d}r,\mathrm{d}j,\mathrm{d}\omega)
+ \int_0^t\int_0^\infty\int_0^\infty\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^2_T(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}j,\mathrm{d}\omega)\\
&\quad + \int_0^t\int_0^{Z_{s-}}\int_{D([0,\infty),\mathbb{R})} \omega_{t-s}\, N^*_T(\mathrm{d}s,\mathrm{d}j,\mathrm{d}\omega),
\end{aligned}
\]
with $Z_0 = n$. The verification of this claim follows almost verbatim that for (2.3), albeit with obvious changes to take account of the time-varying rates.
We therefore omit the proof and leave it as an exercise for the reader.

With the representation (7.7), as $Z$ is piecewise constant, we can condition on the sigma-algebra generated by $N^2_T$ and show, using Campbell's formula in between the jumps of $Z$, that, for $0\le t<T$, $\gamma,\theta\ge 0$, $x\ge 0$ and $n\in\mathbb{N}$,
\[
\mathbb{E}^T_{x,n}\big[\mathrm{e}^{-\gamma\Lambda_t-\theta Z_t}\big] = \mathrm{e}^{-xV^T_t(\gamma)}\,\mathbb{E}^T_{0,n}\Bigg[\mathrm{e}^{-\theta Z_t-\int_0^t Z_v\,\phi_{u_{T-v}(\infty)}(V^{T-v}_{t-v}(\gamma))\,\mathrm{d}v}\prod_{w\le t}\left(\int_0^\infty \mathrm{e}^{-rV^{T-w}_{t-w}(\gamma)}\,\eta^{T-w}_{\Delta Z_w+1}(\mathrm{d}r)\right)\Bigg], \tag{7.8}
\]
where the product is over the jump times $w$ of $Z$,
\[
V^T_t(\gamma) := u_t(\gamma+u_{T-t}(\infty)) - u_T(\infty),\qquad 0\le t<T,
\]
and, for $\lambda,z\ge 0$,
\[
\phi_\lambda(z) = 2\beta z + \int_0^\infty(1-\mathrm{e}^{-zr})\,r\mathrm{e}^{-\lambda r}\,\Pi(\mathrm{d}r).
\]
Given the identities (7.5) and (7.8), the two required verifications (7.3) and (7.4) follow easily as direct consequences of continuity and bounded convergence in (7.8).

The target semigroup $P$ on $C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$ is defined as follows. For fixed $n\in\mathbb{N}$, $x\ge 0$, let $P^{(n)}_x$ be the law of the homogeneous Markov process described by the weak solution to
\[
X_t = x + \alpha\int_0^t X_{s-}\,\mathrm{d}s + \sqrt{2\beta}\int_0^t\int_0^{X_{s-}}W(\mathrm{d}s,\mathrm{d}u) + \int_0^t\int_0^\infty\int_0^{X_{s-}} r\,\tilde N(\mathrm{d}s,\mathrm{d}r,\mathrm{d}u) + \int_0^t\int_0^\infty r\,N^{(*,n)}(\mathrm{d}s,\mathrm{d}r) + 2n\beta t,\qquad t\ge 0,
\]
with $W$ and $\tilde N$ as in (1.5) and where $N^{(*,n)}$ is a Poisson random measure on $[0,\infty)\times(0,\infty)$ with intensity measure $n\,\mathrm{d}s\otimes r\,\Pi(\mathrm{d}r)$. Note, at no detriment to consistency, that $P^{(0)}_x$ can be replaced by $\mathbb{P}_x$. Then the role of $P_t$ is played by the semigroup $P^\uparrow_t$ given by
\[
P^\uparrow_t[f](x,n,s) := E^{(n)}\big[f(X_t,n,\tau_t)\,\big|\,X_s=x,\,\tau_s=s\big],
\]
for $0\le s\le t\le t_0$, $n\ge 0$, and $P^\uparrow_t[f](x,n,s) := f(x,n,s)$ otherwise. Here $f\in C_0([0,\infty)\times\mathbb{N}\times[0,\infty))$ and $\tau_t = t$, as above. Notice that $(X,P^{(n)}_x)$ is a branching process with immigration, whose Laplace transform is given by
\[
E^{(n)}_x\big(\mathrm{e}^{-\gamma X_t}\big) = \mathrm{e}^{-xu_t(\gamma)-n\int_0^t\phi_0(u_{t-v}(\gamma))\,\mathrm{d}v},\qquad \gamma\ge 0,
\]
where $\phi_0$ denotes $\phi_\lambda$ with $\lambda=0$.
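In the Feller case $\psi(\theta)=\beta\theta^2$ (so $\Pi\equiv 0$ and $\phi_0(z)=2\beta z$; again an illustrative assumption), the immigration exponent in the last display is explicit: $\int_0^t\phi_0(u_{t-v}(\gamma))\,\mathrm{d}v = 2\log(1+\beta\gamma t)$, so that $E^{(n)}_x(\mathrm{e}^{-\gamma X_t}) = \mathrm{e}^{-xu_t(\gamma)}(1+\beta\gamma t)^{-2n}$. A quadrature sketch confirms the closed form:

```python
import math

beta, gamma, t = 0.5, 1.2, 2.0
u = lambda s: gamma / (1 + beta * gamma * s)   # u_s(gamma) for psi(z) = beta*z^2
phi0 = lambda z: 2 * beta * z                  # phi_0 with Pi = 0

# midpoint rule for the immigration exponent (per skeleton particle, n = 1);
# note int_0^t phi0(u_{t-v}(gamma)) dv = int_0^t phi0(u_v(gamma)) dv
n_steps = 100000
h = t / n_steps
integral = sum(phi0(u((i + 0.5) * h)) for i in range(n_steps)) * h

closed_form = 2 * math.log(1 + beta * gamma * t)
```

The midpoint rule converges at rate $O(h^2)$, so the two quantities agree to well below the asserted tolerance.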
From this, it is easily seen that $P^\uparrow_t$ is Feller as well.

Lastly, for each $T\ge 0$ we take $\nu_T$ to be the measure on $[0,\infty)\times\mathbb{N}\times[0,\infty)$ given, for each $x\ge 0$, by $\delta_x\otimes\pi^{T,x}\otimes\delta_0$, with $\pi^{T,x}(\cdot) = \sum_{n\ge 1}\varpi^{x,T}_n\delta_n(\cdot)$. Recall from (7.1) that $\pi^{T,x}$ converges weakly, as $T\to\infty$, to the measure $\delta_1(\cdot)$ on $\mathbb{N}$; hence $\nu_T$ converges weakly to $\nu := \delta_x\otimes\delta_1\otimes\delta_0$.

Thus, in order to invoke Theorem 2.5 in [17, Chapter 4], we just need to check the analogue of (7.2) in our setting. To this end, notice first that we can restrict ourselves to $0\le s\le t\le t_0$, since otherwise $P^\uparrow_t[f](x,n,s) = P^T_t[f](x,n,s)$ by definition. Then note from (2.6) that $q_T\to 0$ as $T\to\infty$, and this yields that, under $\mathbb{P}^T_{0,n}$, the process $Z$ converges in probability, uniformly on $[0,t_0]$, as $T\to\infty$ (cf. Theorem 6.1, Chapter 1, p. 28 of [17]), to the constant process $Z_s\equiv n$, $s\le t_0$. Referring back to (7.8), the continuity in $T$ of the deterministic quantities appearing on the right-hand side and the previously mentioned uniform convergence of $(Z,\mathbb{P}^T_{0,n})$ together imply that, for $x\ge 0$, $0\le s\le t\le t_0$, $n\in\mathbb{N}$,
\[
\lim_{T\to\infty}P^T_t[f_{\gamma,\theta,\varphi}](x,n,s) = \mathrm{e}^{-\varphi t}\lim_{T\to\infty}\mathbb{E}^{T-s}_{x,n}\big[\mathrm{e}^{-\gamma\Lambda_{t-s}-\theta Z_{t-s}}\big] = P^\uparrow_t[f_{\gamma,\theta,\varphi}](x,n,s) = \mathrm{e}^{-xu_{t-s}(\gamma)-\theta n-n\int_0^{t-s}\phi_0(u_{t-s-v}(\gamma))\,\mathrm{d}v-\varphi t},
\]
where $f_{\gamma,\theta,\varphi}(x,n,s) := \mathrm{e}^{-\gamma x-\theta n-\varphi s}$, $\gamma,\theta,\varphi,x,s\ge 0$, $n\in\mathbb{N}$.

To conclude, it is thus enough to prove that this convergence holds uniformly in $x\ge 0$, $0\le s\le t_0$, $n\in\mathbb{N}$, where $t\le t_0$. Consider fixed $R>0$ and $N\in\mathbb{N}$. Since $V^T_t(\gamma)$ defined above is nonnegative and, for each $n\in\mathbb{N}$, $Z_t\ge Z_0=n$, $t\ge 0$, a.s.
under $\mathbb{P}^T_{0,n}$ for all $T>0$, using the triangle inequality we have
\[
\sup_{x\ge 0,\,s\le t_0,\,n\in\mathbb{N}}\big|P^T_t[f_{\gamma,\theta,\varphi}](x,n,s) - P^\uparrow_t[f_{\gamma,\theta,\varphi}](x,n,s)\big| \le A^1_R(T) + A^2_R(T) + B^1_N(T) + B^2_N,
\]
where we have set
\[
A^1_R(T) := \sup_{x\le R,\,s\le t_0}\big|\mathrm{e}^{-xV^{T-s}_{t-s}(\gamma)} - \mathrm{e}^{-xu_{t-s}(\gamma)}\big|,\qquad
A^2_R(T) := \sup_{s\le t_0}\mathrm{e}^{-RV^{T-s}_{t-s}(\gamma)} + \sup_{s\le t_0}\mathrm{e}^{-Ru_{t-s}(\gamma)},
\]
\[
B^1_N(T) := \sup_{n\le N,\,s\le t_0}\Bigg|\mathbb{E}^{T-s}_{0,n}\Bigg[\mathrm{e}^{-\theta Z_{t-s}-\int_0^{t-s}Z_v\,\phi_{u_{T-s-v}(\infty)}(V^{T-s-v}_{t-s-v}(\gamma))\,\mathrm{d}v}\prod_{w\le t-s}\left(\int_0^\infty\mathrm{e}^{-rV^{T-s-w}_{t-s-w}(\gamma)}\,\eta^{T-s-w}_{\Delta Z_w+1}(\mathrm{d}r)\right)\Bigg] - \mathrm{e}^{-\theta n-n\int_0^{t-s}\phi_0(u_{t-s-v}(\gamma))\,\mathrm{d}v}\Bigg|
\]
and $B^2_N = 2\mathrm{e}^{-\theta N}$.

Firstly, it is not hard to see that
\[
A^1_R(T) \le \sup_{s\le t_0} R\,\big|V^{T-s}_{t-s}(\gamma) - u_{t-s}(\gamma)\big| = R\sup_{s\le t_0}\big|u_{t-s}(\gamma+u_{T-t}(\infty)) - u_{T-s}(\infty) - u_{t-s}(\gamma)\big|.
\]
The identity $\partial u_s(\theta)/\partial\theta = \mathrm{e}^{-\int_0^s\psi'(u_r(\theta))\,\mathrm{d}r}$ (see (12.12) in [25, Chapter 12]) and the fact that $\psi'(\theta)\ge 0$ allow us to estimate $|u_{t-s}(\gamma+u_{T-t}(\infty)) - u_{t-s}(\gamma)|$ by $u_{T-t}(\infty)$. Recalling that $u_T(\infty)\to 0$ and $u_T(\gamma)\to 0$ as $T\to\infty$, it follows that $A^1_R(T)$ tends to $0$ as $T\to\infty$, for each $R>0$.

Next, since $(s,\gamma)\mapsto u_s(\gamma)$ is increasing in $\gamma$ and decreasing in $s$, we have
\[
V^{T-s}_{t-s}(\gamma) \ge u_{t-s}(\gamma+u_{T-t}(\infty)) - u_{T-t}(\infty) \ge \inf_{\lambda\le u_{T-t}(\infty)}\big(u_{t_0}(\gamma+\lambda) - \lambda\big),
\]
which, for $T$ sufficiently large, is bounded from below by $u_{t_0}(\gamma)/2 > 0$. Fix $\varepsilon>0$. Choosing $R>0$ such that $\mathrm{e}^{-Ru_{t_0}(\gamma)/2} + \mathrm{e}^{-Ru_{t_0}(\gamma)} \le \varepsilon$, we thus get $\limsup_{T\to\infty} A^2_R(T) \le \varepsilon$.
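The key estimate here — $|u_{t-s}(\gamma+u_{T-t}(\infty)) - u_{t-s}(\gamma)| \le u_{T-t}(\infty)$ — is just the statement that $\theta\mapsto u_s(\theta)$ is a contraction, since its derivative $\mathrm{e}^{-\int_0^s\psi'(u_r(\theta))\,\mathrm{d}r}$ is at most $1$ when $\psi'\ge 0$. With the Feller closed form $u_s(\theta)=\theta/(1+\beta\theta s)$ (an illustrative assumption), the 1-Lipschitz property is immediate to test:

```python
beta, s = 0.4, 1.5
u = lambda th: th / (1 + beta * th * s)   # u_s(theta) for psi(z) = beta*z^2

# |u_s(a) - u_s(b)| <= |a - b| since du/dtheta = 1/(1 + beta*theta*s)^2 <= 1
pairs = [(0.2, 0.9), (1.0, 5.0), (0.01, 10.0)]
gaps = [(abs(u(a) - u(b)), abs(a - b)) for a, b in pairs]
```

The contraction is strict away from $\theta=0$, which is what makes the error $A^1_R(T)$ collapse at the rate of $u_{T-t}(\infty)$.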
With regard to the term $B^1_N(T)$, we have
\[
\begin{aligned}
B^1_N(T) &\le \max_{n\le N}\sup_{s\le t_0}\mathbb{E}^{T-s}_{0,n}\Bigg[1\wedge\Bigg(\sup_{v\le t_0}Z_v\int_0^{t-s}\big|\phi_{u_{T-s-v}(\infty)}(V^{T-s-v}_{t-s-v}(\gamma)) - \phi_0(u_{t-s-v}(\gamma))\big|\,\mathrm{d}v\Bigg)\Bigg]\\
&\quad + \max_{n\le N}\sup_{s\le t_0}\mathbb{E}^{T-s}_{0,n}\Big[\big|\mathrm{e}^{-\theta Z_{t-s}} - \mathrm{e}^{-\theta n}\big|\Big]
+ \max_{n\le N}\sup_{s\le t_0}\mathbb{E}^{T-s}_{0,n}\Bigg[\Bigg|1 - \prod_{w\le s}\left(\int_0^\infty\mathrm{e}^{-rV^{T-s-w}_{t-s-w}(\gamma)}\,\eta^{T-s-w}_{\Delta Z_w+1}(\mathrm{d}r)\right)\Bigg|\Bigg]\\
&\le \max_{n\le N}\sup_{s\le t_0}\mathbb{E}^{T-s}_{0,n}\Bigg[\sup_{s'\le t_0}1\wedge\Bigg(\sup_{v\le t_0}Z_v\int_0^{t-s'}\big|\phi_{u_{T-s'-v}(\infty)}(V^{T-s'-v}_{t-s'-v}(\gamma)) - \phi_0(u_{t-s'-v}(\gamma))\big|\,\mathrm{d}v\Bigg)\Bigg]\\
&\quad + \max_{n\le N}\sup_{s\le t_0}\mathbb{E}^{T-s}_{0,n}\Big[\sup_{s'\le t_0}\big|\mathrm{e}^{-\theta Z_{t-s'}} - \mathrm{e}^{-\theta n}\big|\Big]
+ \max_{n\le N}\sup_{s\le t_0}\mathbb{E}^{T-s}_{0,n}\Bigg[\sup_{s'\le t_0}\Bigg|1 - \prod_{w\le s'}\left(\int_0^\infty\mathrm{e}^{-rV^{T-s'-w}_{t-s'-w}(\gamma)}\,\eta^{T-s'-w}_{\Delta Z_w+1}(\mathrm{d}r)\right)\Bigg|\Bigg].
\end{aligned}
\tag{7.9}
\]
The first term on the right-hand side above is bounded by
\[
\max_{n\le N}\sup_{s\le t_0}\mathbb{P}^{T-s}_{0,n}\Big(\sup_{v\le t_0}Z_v > n\Big) + 1\wedge\Bigg(N t_0\sup_{w\le t_0}\big|\phi_{u_{T-w}(\infty)}(V^{T-w}_{t-w}(\gamma)) - \phi_0(u_{t-w}(\gamma))\big|\Bigg)
\]
and hence goes to $0$, for each $N$, as $T\to\infty$. On the other hand, as a function of $(Z_s, s\le t_0)$, the expression inside the expectation in the second term of (7.9) is bounded and continuous with respect to the Skorokhod topology (recall that Skorokhod continuity is preserved for $Z$ under the operation of supremum over finite time horizons). Moreover, it vanishes when $Z_s\equiv n$, $0\le s\le t_0$. This implies that this term goes to $0$ as well. Finally, the expression whose absolute value we take in the third term of (7.9) is bounded by $1$ (each factor of the product lies in $[0,1]$), and vanishes unless $Z$ jumps at least once on $[0,s']$.
This shows that the last term is bounded by $\max_{n\le N}\sup_{s\le t_0}\mathbb{P}^{T-s}_{0,n}(\sup_{w\le t_0}\Delta Z_w > 0)$, which goes to $0$ as $T\to\infty$. Note that, for all three terms in (7.9), we are using the fact that, if $g(T)\ge 0$ is continuous in $T$ and $\lim_{T\to\infty}g(T)=0$, then, for each $\varepsilon>0$, by choosing $T$ sufficiently large we have $\sup_{s\le t_0}g(T-s)<\varepsilon$; that is to say, $\lim_{T\to\infty}\sup_{s\le t_0}g(T-s)=0$.

Putting the pieces together and choosing $N\in\mathbb{N}$ large enough that $B^2_N\le\varepsilon$, we thus get
\[
\limsup_{T\to\infty}\ \sup_{x\ge 0,\,s\le t_0,\,n\in\mathbb{N}}\big|P^T_t[f_{\gamma,\theta,\varphi}](x,n,s) - P^\uparrow_t[f_{\gamma,\theta,\varphi}](x,n,s)\big| \le 2\varepsilon.
\]
Since $\varepsilon$ was arbitrary, this shows the convergence of the semigroups (7.2) in our setting which, together with the weak convergence of the initial configurations, gives the weak convergence of the associated processes on $[0,t_0]$. And since $t_0>0$ was chosen arbitrarily, this also completes the proof of Theorem 2.3. □

Acknowledgements. Part of this work was carried out whilst AEK was visiting the Centre for Mathematical Modelling, Universidad de Chile, and JF was visiting the Department of Mathematical Sciences at the University of Bath; each is grateful to the host institution of the other for its support. The authors would also like to thank the anonymous referees, whose extensive reading of earlier versions of this paper led to many improvements.
References.

[1] R. Abraham and J.-F. Delmas. A continuum-tree-valued Markov process. Ann. Probab., 40(3):1167–1211, 2012.
[2] M. Barczy, Z. Li, and G. Pap. Yamada–Watanabe results for stochastic differential equations with jumps. Int. J. Stoch. Anal., Art. ID 460472, 23 pp., 2015.
[3] J. Berestycki, M. C. Fittipaldi, and J. Fontbona. Ray–Knight representation of flows of branching processes with competition by pruning of Lévy trees. Probab. Theory Relat. Fields, 172(3):725–788, 2018.
[4] J. Berestycki, A. E. Kyprianou, and A. Murillo-Salas. The prolific backbone for supercritical superprocesses. Stochastic Process. Appl., 121(6):1315–1331, 2011.
[5] J. Bertoin, J. Fontbona, and S. Martínez. On prolific individuals in a supercritical continuous-state branching process. J. Appl. Probab., 45(3):714–726, 2008.
[6] J. Bertoin and J.-F. Le Gall. Stochastic flows associated to coalescent processes III: limit theorems. Illinois J. Math., 50(1):147–181, 2006.
[7] D. A. Dawson and Z. Li. Skew convolution semigroups and affine Markov processes. Ann. Probab., 34:1103–1142, 2006.
[8] D. A. Dawson and Z. Li. Stochastic equations, flows and measure-valued processes. Ann. Probab., 40(2):813–857, 2012.
[9] T. Duquesne and C. Labbé. On the Eve property for CSBP. Electron. J. Probab., 19:31 pp., 2014.
[10] T. Duquesne and J.-F. Le Gall. Random trees, Lévy processes and spatial branching processes. Astérisque, (281):vi+147, 2002.
[11] T. Duquesne and J.-F. Le Gall. Random trees, Lévy processes and spatial branching processes. ArXiv Mathematics e-prints, September 2005.
[12] T. Duquesne and M. Winkel. Growth of Lévy trees. Probab. Theory Related Fields, 139(3-4):313–371, 2007.
[13] E. B. Dynkin and S. E. Kuznetsov. N-measures for branching exit Markov systems and their applications to differential equations. Probab. Theory Related Fields, 130(1):135–150, 2004.
[14] M. Eckhoff, A. E. Kyprianou, and M. Winkel. Spines, skeletons and the strong law of large numbers for superdiffusions. Ann. Probab., 43(5):2545–2610, 2015.
[15] N. El Karoui and S. Roelly. Propriétés de martingales, explosion et représentation de Lévy–Khintchine d'une classe de processus de branchement à valeurs mesures. Stochastic Process. Appl., 38(2):239–266, 1991.
[16] A. M. Etheridge and D. R. E. Williams. A decomposition of the (1+β)-superprocess conditioned on survival. Proc. Roy. Soc. Edinburgh Sect. A, 133(4):829–847, 2003.
[17] S. N. Ethier and T. G. Kurtz.
Markov processes . Wiley Series in Probability and Mathematical Statistics:Probability and Mathematical Statistics. John Wiley & Sons, Inc., New York, 1986. Characterizationand convergence.[18] L. C. Evans.
Partial differential equations , volume 19 of
Graduate Studies in Mathematics . AmericanMathematical Society, Providence, RI, 1998.[19] S. N. Evans and N. O’Connell. Weighted occupation time for branching particle systems and a repre-sentation for the supercritical superprocess.
Canad. Math. Bull. , 37(2):187–196, 1994.[20] M. C. Fittipaldi and J. Fontbona. On SDE associated with continuous-state branching processes con-ditioned to never be extinct.
Electron. Commun. Probab. , 17:no. 49, 13, 2012.[21] Zongfei Fu and Zenghu Li. Stochastic equations of non-negative processes with jumps.
StochasticProcess. Appl. , 120(3):306–330, 2010.
22] D. R. Grey. On possible rates of growth of age-dependent branching processes with immigration.
J.Appl. Probability , 13(1):138–143, 1976.[23] S. C. Harris, M. Hesse, and A. E. Kyprianou. Branching Brownian motion in a strip: survival nearcriticality.
Ann. Probab. , 44(1):235–275, 2016.[24] T. Kurtz.
Stochastic Analysis 2010 . Springer, 2011. Equivalence of stochastic equations and martingaleproblems.[25] A. E. Kyprianou.
Fluctuations of Lévy processes with applications . Universitext. Springer, Heidelberg,second edition, 2014. Introductory lectures.[26] A. E. Kyprianou, A. Murillo-Salas, and J. L. Pérez. An application of the backbone decomposition tosupercritical super-Brownian motion with a barrier.
J. Appl. Probab. , 49(3):671–684, 2012.[27] A. E. Kyprianou, J.-L. Pérez, and Y.-X. Ren. The backbone decomposition for spatially dependentsupercritical superprocesses. In
Séminaire de Probabilités XLVI , volume 2123 of
Lecture Notes in Math. ,pages 33–59. Springer, Cham, 2014.[28] A. E. Kyprianou and Y.-X. Ren. Backbone decomposition for continuous-state branching processeswith immigration.
Statist. Probab. Lett. , 82(1):139–144, 2012.[29] A. Lambert. Quasi-stationary distributions and the continuous-state branching process conditioned tobe never extinct.
Electron. J. Probab. , 12:no. 14, 420–446, 2007.[30] A. Lambert. Population dynamics and random genealogies.
Stoch. Models , 24(suppl. 1):45–163, 2008.[31] J.-F. Le Gall.
Spatial branching processes, random snakes and partial differential equations . Lecturesin Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 1999.[32] J.-F. Le Gall and Y. Le Jan. Branching processes in Lévy processes: Laplace functionals of snakes andsuperprocesses.
Ann. Probab. , 26(4):1407–1432, 1998.[33] J.-F. Le Gall and Y. Le Jan. Branching processes in Lévy processes: The exploration process.
Ann.Probab. , 26(1):213–252, 1998.[34] Ch.-Kh. Li. Skew convolution semigroups and related immigration processes.
Teor. Veroyatnost. iPrimenen. , 46(2):247–274, 2001.[35] Zeng-Hu Li. Immigration structures associated with Dawson-Watanabe superprocesses.
StochasticProcess. Appl. , 62(1):73–86, 1996.[36] Zeng-Hu Li. Immigration processes associated with branching particle systems.
Adv. in Appl. Probab. ,30(3):657–675, 1998.[37] Zenghu Li.
Measure-valued branching Markov processes . Probability and its Applications (New York).Springer, Heidelberg, 2011.[38] Zenghu Li. Path-valued branching processes and nonlocal branching superprocesses.
Ann. Probab. ,42(1):41–79, 2014.[39] Zenghu Li. Continuous-state branching processes with immigration. arXiv e-prints , pagearXiv:1901.03521, Jan 2019.[40] P. Milos. Spatial central limit theorem for the supercritical Ornstein-Uhlenbeck superprocess.
Preprint ,2012.[41] P. Protter.
Stochastic integration and differential equations , volume 21 of
Applications of Mathematics(New York) . Springer-Verlag, Berlin, 1990. A new approach.[42] D. Revuz and M. Yor.
Continuous martingales and Brownian motion , volume 293 of
Grundlehren derMathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] . Springer-Verlag,Berlin, third edition, 1999.[43] S. Roelly-Coppoletta and A. Rouault. Processus de Dawson-Watanabe conditionné par le futur lointain.
C. R. Acad. Sci. Paris Sér. I Math. , 309(14):867–872, 1989.[44] T. S. Salisbury and J. Verzani. On the conditioned exit measures of super Brownian motion.
Probab.Theory Related Fields , 115(2):237–285, 1999.[45] T. S. Salisbury and J. Verzani. Non-degenerate conditionings of the exit measures of super Brownianmotion.
Stochastic Process. Appl. , 87(1):25–52, 2000.[46] Y.-C. Sheu. Lifetime and compactness of range for super-Brownian motion with a general branchingmechanism.
Stochastic Process. Appl. , 70(1):129–141, 1997. epartment of Mathematical SciencesUniversity of BathClaverton DownBath, BA2 7AYUK. E-mail: [email protected], [email protected] Centre for Mathematical Modelling,DIM CMM,UMI 2807 UChile-CNRS,Universidad de Chile,Santiago,Chile. E-mail: [email protected]@dim.uchile.cl