On population growth with catastrophes
Branda Goncalves, Thierry Huillet, and Eva Löcherbach
Abstract.
Deterministic population growth models can exhibit a large variety of flows, ranging from algebraic and exponential to hyper-exponential (with finite-time explosion). They describe the growth of the size (or mass) of some population as time goes by. Variants of such models are introduced, allowing logarithmic, exp-algebraic or even doubly exponential growth. The possibility of immigration is also raised. An important feature of such growth models is to decide whether the ground state 0 is reflecting or absorbing, and also whether state ∞ is accessible or inaccessible.
We then study a semi-stochastic catastrophe version of such models (also known as Piecewise Deterministic Markov Processes, in short, PDMP). Here, at some jump times, possibly governed by state-dependent rates, the size of the population shrinks by a random amount of its current size, an event possibly leading to instantaneous local (or total) extinction. A special separable shrinkage transition kernel is investigated in more detail, including the case of total disasters. Between the jump times, the new process grows, following the deterministic dynamics started at the newly reached state after each jump. We discuss the conditions under which such processes are either transient or recurrent (positive or null), the scale function playing a key role in this respect, together with the speed measure cancelling the Kolmogorov forward operator. The scale function is also used to compute, when relevant, the law of the height of excursions. The question of the finiteness of the time to extinction is investigated, together (when finite) with the evaluation of the mean time to extinction, either local or global. Some information on the embedded chain of the PDMP is also required when dealing with the classification of states 0 and ∞ that we exhibit.
Keywords: Deterministic population growth, catastrophe, PDMP, recurrence/transience, scale function, height and length of excursions, speed measure, expected time to extinction, classification of boundary states.
AMS Classification:

1. Introduction
Deterministic population growth models (1) with power-law rates α(x) = α x^a, α > 0, can exhibit a large variety of behaviors, ranging from algebraic (a < 1), to exponential (a = 1), to hyper-exponential (finite-time explosion if a > 1). They describe the growth of the size x_t(x) of some population at time t ≥ 0, started at x ≥ 0. In this setup, self-similarity (with Hurst index H = 1/(1 − a)) plays a key role, together with a time substitution. Variants of this model are introduced allowing logarithmic, exp-algebraic or even doubly exponential growth. The possibility of immigration is also raised. An important feature of such growth models will be to decide whether state 0 is reflecting or absorbing and also whether state ∞ is accessible or inaccessible.
We will then study a particular class of piecewise deterministic Markov processes (PDMPs) which are semi-stochastic catastrophe versions X_t(x) of the above models. The process X_t(x) describes the size of a population, initially of size x, at time t. At some random jump times the size of the population shrinks by a random amount of its current size, an event possibly leading to instantaneous local extinction. Between the jump times, X_t(x) grows following the deterministic dynamics started at the newly reached state after each jump.
Semi-stochastic models of a similar flavor were considered in [4], [5], [6], [11], [12], [18] and [20]. See also [3], [9], [10] and [16].

2. Deterministic population growth models
In this Section, we discuss several deterministic population growth models of the form ẋ_t = α(x_t), x_0 = x, where α(x) is continuous on [0, ∞), positive on (0, ∞), or even sometimes on [0, ∞).

2.1. A class of self-similar growth models.
Let x_t ≥ 0 for t ≥ 0, with initially x_0 := x ≥ 0. With α, a > 0, consider
(1) ẋ_t = α x_t^a, x_0 = x,
for some growth field α(x) := α x^a. Note that in this case α(x) is increasing with x. Integrating when a ≠ 1 (the non-linear case), we get formally
(2) x_t(x) = (x^{1−a} + α(1−a)t)^{1/(1−a)}.
In principle, such growth models are considered for some positive initial condition x. Because we will deal in the sequel with catastrophic events that can send the population to state 0, it is also important to consider such growth models when started at x = 0. Either after hitting state 0 the population remains stuck at 0, in which case 0 is absorbing; or the population can regenerate, starting afresh from 0, and 0 is reflecting. Three cases arise:
• 0 < a < 1: then x_t ≥ 0 and 1/(1−a) > 1, so the growth of x_t is algebraic at rate larger than 1. We note that x_t(x) := x_t given x(0) = x obeys the self-similarity property: for all λ > 0, t ≥ 0, x ≥ 0, x_{λt}(λ^H x) = λ^H x_t(x), with H := 1/(1−a) > 1 the Hurst exponent. When x = 0, the dynamics has two solutions, one being x_t(0) ≡ 0, t ≥ 0, the other x_t(0) = (α(1−a)t)^{1/(1−a)}, because the velocity field α(x) in (1), with α(0) = 0, is not Lipschitz as x gets close to 0, having an infinite derivative there. The solution x_t(0) = (α(1−a)t)^{1/(1−a)} started at x = 0 reflects some spontaneous generation phenomenon: following this path, the mass at time t > 0 is positive.
• a > 1: then, for x > 0, an explosion or blow-up of x(t) occurs in finite time t_*(x) = x^{1−a}/[α(a−1)], with
x_t(x) = x (1 − t/t_*(x))^{1/(1−a)},
an algebraic singularity. Up to the explosion time t_*(x), x(t) is self-similar with Hurst exponent H = 1/(1−a) < 0. Whenever x(t) blows up in finite time, following [21], we shall speak of a hyper-exponential growth regime. This model was shown meaningful as a world population growth model over the last two millennia, [21]. There is also some recent empirical interest in models with similar behavior in [19], [13] and [14]. The finite-time explosion feature, the related interpretation problems and the previous works about this interpretation have been emphasized in [17], where the author considers the technological advance of a given market. More technically, necessary and sufficient conditions for the existence of such a blowing-up regime, involving the asymptotic form of the local series representation of the general solutions around the singularities, are given in [8]. Whenever a growth process exhibits finite-time explosion, we shall say that state ∞ is accessible.
• a = 1: this is a simple special case not treated in (2), strictly speaking. However, expanding the solution (2) in the leading powers of 1 − a yields consistently:
(3) x_t(x) = e^{log(x^{1−a} + α(1−a)t)/(1−a)} = e^{log[x^{1−a}(1 + α x^{a−1}(1−a)t)]/(1−a)} ∼ x e^{(1/(1−a)) α x^{a−1}(1−a)t} ∼ x e^{αt}.
Here x ≥ 0 and x_t(x) = x e^{αt} for all t ≥ 0, x ≥ 0. This is the simple Malthus growth model. The Malthus regime with a = 1 will be called "discriminating" for (1), in the sense that it separates a slow algebraic growth regime (a < 1) from a blowing-up regime (a > 1).
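As a quick sanity check (not part of the paper, and with arbitrary illustrative parameter values), the closed form (2) can be compared with a direct Euler integration of (1):

```python
# Compare the closed-form flow (2), x_t(x) = (x**(1-a) + alpha*(1-a)*t)**(1/(1-a)),
# with a crude Euler integration of x' = alpha * x**a (illustrative values).

def closed_form(x, t, alpha, a):
    return (x ** (1 - a) + alpha * (1 - a) * t) ** (1 / (1 - a))

def euler(x, t, alpha, a, n=200_000):
    dt = t / n
    for _ in range(n):
        x += dt * alpha * x ** a
    return x

alpha, a, x0, t = 1.0, 0.5, 1.0, 2.0
exact = closed_form(x0, t, alpha, a)          # (1 + 0.5*2)**2 = 4.0
assert abs(exact - 4.0) < 1e-12
assert abs(euler(x0, t, alpha, a) - exact) < 1e-3

# For a > 1 the same formula blows up at t_*(x) = x**(1-a) / (alpha*(a-1)):
t_star = x0 ** (1 - 2.0) / (alpha * (2.0 - 1))
assert abs(t_star - 1.0) < 1e-12
```

The explosion-time line illustrates the a > 1 regime under the same hypothetical constants.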
Remark: (i) One can extend the range of a as follows: if a = 0, then for all x ≥ 0, x(t) = x + αt, a linear growth regime. If a < 0, (2) holds for all x ≥ 0, with 1/(1−a) < 1: the growth of x_t is again algebraic but now at rate smaller than 1. In this case, α(x) = αx^a is now decreasing with x. When a ≤ 0, the spontaneous generation phenomenon also holds, with the velocity field α(x) itself diverging near x = 0 if a < 0: the null solution x_t(0) ≡ 0, t ≥ 0, is ruled out. For such a, x_t(x) obeys the self-similarity property with Hurst exponent H = 1/(1−a) ∈ (0, 1).
(ii) Slow logarithmic growth: letting α(x) = α e^{−x} leads to
x_t(x) = log(e^x + αt) = x + log(1 + α e^{−x} t).
For such a model, state 0 is reflecting and state ∞ is inaccessible. Again here α(x) is decreasing with x.
(iii) One can also extend the range of α as follows: if α < 0, depending on 0 < a < 1 or a > 1, the process either goes extinct in finite time t_ext = x^{1−a}/[α(a−1)], or reaches 0 in infinite time, respectively. Because growth is our main interest, we shall avoid this case in general.

2.2. Other choices of α. In general, α(x) will be assumed continuous on [0, ∞) and positive on (0, ∞). Then
∫_x^{x_t(x)} dy/α(y) = t.
If, for x > 0, I(x) := ∫_0^x dy/α(y) < ∞, then we have x_t(x) = I^{−1}(I(x) + t). If, for x > 0, I(x) = ∞ and I_∞(x) := ∫_x^∞ dy/α(y) < ∞, then x_t(x) = I_∞^{−1}(I_∞(x) − t). Finally, we have in all cases x_t(x) = I^{−1}(I(x) + t), where I(x) = ∫^x dy/α(y) is an indefinite integral. This occurs for example when α(x) = x^a e^{−bx} with a > 1, b > 0. Clearly, I(x) being the time needed to reach some state x inside the domain (0, ∞) starting from 0, and I_∞(x) being the time needed to reach ∞ starting from some x inside the domain,
I(x) < ∞ ⟺ state 0 is reflecting, I_∞(x) < ∞ ⟺ state ∞ is accessible,
I(x) = ∞ ⟺ state 0 is absorbing, I_∞(x) = ∞ ⟺ state ∞ is inaccessible.

2.3. Exponentiating and log-self-similarity.
With μ, a > 0, consider now the dynamics driven by α(x) = μ(1+x)(log(1+x))^a, given by
(4) ẋ_t = μ(1 + x_t)(log(1 + x_t))^a, x_0 = x ≥ 0.
Then we have I(x) < ∞ ⟺ a < 1 and I_∞(x) < ∞ ⟺ a > 1. Introducing z_t = log(1 + x_t) and z = log(1 + x), z_t obeys (1) with initial condition z. Integrating (4), we get formally, if a ≠ 1,
(5) x_t(x) = exp[((log(1+x))^{1−a} + μ(1−a)t)^{1/(1−a)}] − 1.
We conclude:
• 0 < a < 1: the integrated solution makes sense and the growth of x_t is exp-algebraic at algebraic rate 1/(1−a) > 1. In this case, x_t is log-self-similar with Hurst exponent H = 1/(1−a) > 1.
• a > 1: an explosion or blow-up of x_t occurs in finite time t_*(x) given by t_*(x) = (log(1+x))^{1−a}/[μ(a−1)]. Up to t_*(x), x_t is log-self-similar with Hurst exponent H = 1/(1−a) < 0. We get
x_t(x) = (1 + x)^{(1 − t/t_*(x))^{1/(1−a)}} − 1,
with an essential singularity.
• a = 1: then (4) has a super-exponential (doubly exponential) solution x_t(x) = (1 + x)^{e^{μt}} − 1, t ≥ 0. The case a = 1 is discriminating for (4), again separating an exp-algebraic growth regime from a blowing-up regime. State 0 is absorbing (I(x) = ∞) and state ∞ is not accessible in finite time (I_∞(x) = ∞). In this case, with I(x) = ∫^x dy/[μ(1+y) log(1+y)] = (1/μ) log(log(1+x)), x_t(x) = I^{−1}(I(x) + t).
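The integrated solution (5) can again be cross-checked numerically (an illustrative sketch with arbitrary parameter values, not from the paper); the substitution z_t = log(1 + x_t) is what reduces (4) to (1):

```python
import math

# Check the integrated solution (5) of (4) against a crude Euler
# integration of x' = mu*(1+x)*log(1+x)**a (illustrative values).

def x_explicit(x, t, mu, a):
    z0 = math.log(1 + x)
    return math.exp((z0 ** (1 - a) + mu * (1 - a) * t) ** (1 / (1 - a))) - 1

def euler(x, t, mu, a, n=200_000):
    dt = t / n
    for _ in range(n):
        x += dt * mu * (1 + x) * math.log(1 + x) ** a
    return x

mu, a, x0, t = 0.7, 0.5, 2.0, 1.5
exact, approx = x_explicit(x0, t, mu, a), euler(x0, t, mu, a)
assert abs(exact - approx) / exact < 1e-3
```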
One can extend the range of a as follows: if a = 0, x_t = (1 + x)e^{μt} − 1, the Malthusian exponential growth regime. If a < 0, (5) holds for all x ≥ 0, with 1/(1−a) < 1: the growth of x_t is exp-algebraic with time, now at algebraic rate smaller than 1, and x_t is log-self-similar with Hurst exponent H = 1/(1−a) ∈ (0, 1).
Other choices of α(x):
- α(x) = α e^x leads to
x_t(x) = −log(e^{−x} − αt) = x − log(1 − t/t_*(x)), t < t_*(x),
which explodes logarithmically at t_*(x) = e^{−x}/α. Here, I(x) < ∞ and I_∞(x) < ∞, such that state 0 is reflecting and state ∞ accessible.
- α(x) = α x^a e^{bx}, again emphasizing that it is possible to have ∞ accessible in finite time and 0 reflecting. Indeed, I(x) < ∞ ⟺ a < 1 and I_∞(x) < ∞ ⟺ b > 0.

2.4. Immigration.
We will now briefly consider two cases involving immigration (α_0 > 0):
1/ α(x) = α_0 + α x^a (constant immigration rate α_0).
2/ α(x) = α_0 x + α x^a (linear immigration rate α_0 x).
Case 1/: The solution to ẋ_t = α(x_t) = α_0 + α x_t^a, x_0 = x, is x_t(x) = I^{−1}(I(x) + t), where
I(x) = ∫_0^x dy/(α_0 + α y^a) = (x/α_0) · ₂F₁(1, 1/a; 1/a + 1; −(α/α_0) x^a),
involving the Gauss hypergeometric function ₂F₁(a, b; c; z). Clearly, I_∞(x) < ∞ ⟺ a > 1 (state ∞ accessible in finite time) and I(x) < ∞ for all a (state 0 reflecting). When a = 1,
x_t(x) = x e^{αt} + (α_0/α)(e^{αt} − 1),
corresponding to a version of the Malthus growth model having state 0 reflecting.
Case 2/: The solution to ẋ_t = α(x_t) = α_0 x_t + α x_t^a, x_0 = x, is explicitly known (Bernoulli ODE). It is given by
x_t(x) = e^{α_0 t} (x^{1−a} + (α/α_0)(1 − e^{−(1−a)α_0 t}))^{1/(1−a)}, for all a ≠ 1.
When a = 1, x_t(x) = x e^{(α_0 + α)t} (Malthus), already discussed. Clearly, I_∞(x) < ∞ ⟺ a > 1 (state ∞ accessible in finite time t_*(x) = [1/((a−1)α_0)] log(1 + (α_0/α) x^{1−a})) and I(x) < ∞ ⟺ a < 1.

Conclusion: For a large class of relevant α(x), it is easy to decide
I(x) < ∞ ⟺ state 0 is reflecting, I_∞(x) < ∞ ⟺ state ∞ is accessible,
I(x) = ∞ ⟺ state 0 is absorbing, I_∞(x) = ∞ ⟺ state ∞ is inaccessible.
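These dichotomies are easy to probe numerically. A small illustrative sketch for α(x) = x^a (the truncated-integral helper `I0` is ours, not from the paper): the integral I(x) = ∫_0^x dy/α(y) converges iff a < 1.

```python
import math

# Truncated approximation of I(x) = ∫_0^x dy / y**a on a geometric grid,
# cut off at eps: if the true integral is finite the value stabilises as
# eps -> 0; otherwise it grows like the divergent tail (illustrative helper).

def I0(a, x=1.0, eps=1e-8, n=100_000):
    total, lo = 0.0, eps
    ratio = (x / eps) ** (1.0 / n)
    for _ in range(n):
        hi = lo * ratio
        mid = math.sqrt(lo * hi)          # midpoint on the log scale
        total += mid ** (-a) * (hi - lo)
        lo = hi
    return total

# a = 0.5: I(1) = 2 < ∞, so state 0 is reflecting
assert abs(I0(0.5) - 2.0) < 1e-2
# a = 1: the truncation grows like log(1/eps) ≈ 18.4, i.e. I(1) = ∞,
# so state 0 is absorbing
assert I0(1.0) > 15
```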
2.5. Time-changes.
Consider the simple dynamical system
(6) ẏ_τ = 1, y_0 = x,
in integrated form y_τ = x + τ. This most simple growth process was considered in [3]. Consider the time change
t_τ = ∫_0^τ dτ′/α(y_{τ′}).
Its inverse τ_t, defined by t_{τ_t} = t, satisfies τ̇_t = 1/ṫ_{τ_t} = α(y_{τ_t}), showing that x_t := y_{τ_t} obeys
ẋ_t = ẏ_{τ_t} · τ̇_t = α(x_t), x_0 = y_0 = x,
which is (1). The system (1) is thus a time-changed version of (6).

3. Including catastrophes
In this Section we study semi-stochastic catastrophe versions X_t(x) of such models.

3.1. The PDMP model (sample paths).
With α(x) continuous on [0, ∞), positive on (0, ∞) and non-negative on [0, ∞), consider the population growth models ẋ_t = α(x_t), x_0 = x. Then, for t′ > t ≥ 0, x_{t′}(x) > x_t(x), provided x > 0 and x_{t′}(x) < ∞, possibly reaching ∞ at some time t_*(x) = I_∞(x) ≤ ∞. Let β(x) be a continuous rate function on [0, ∞), positive on (0, ∞). To define a new process X_t including catastrophes, suppose jumps occur at a state-dependent rate β(x). At the jump times, the size of the population shrinks by a random amount ∆(X_{t−}) ∈ (0, X_{t−}] of its current size X_{t−}. Up to the next jump time, X grows following the deterministic dynamics started at Y(X_{t−}) := X_{t−} − ∆(X_{t−}). Let
P(X ≤ y | X_− = x) = P(∆(x) ≥ x − y) = H(x, y), 0 ≤ y ≤ x,
be the kernel H which fixes the law of the jump amplitude. Clearly, H(x, y) is a non-decreasing function of y with H(x, y) = 1 for all y ≥ x. We shall also write
H(x, dy) = H(x, 0) δ_0(dy) + H̄(x, dy), H(x, y) = ∫_0^y H(x, dy′) = H(x, 0) + H̄(x, y),
with H̄(x, 0) = 0 and H̄(x, x) = 1 − H(x, 0). If H(x, 0) > 0, there is a positive probability of disasters (instantaneous local extinction).
A special (separable) interesting case is when
H(x, y) = h(y)/h(x) = h(0)/h(x) + (h(y) − h(0))/h(x),
for some positive non-decreasing right-continuous function h. Although our main concern will deal with this particular structure of H, we mention other interesting shapes that it can take, opening the way to further studies.
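In the separable case, the post-jump position can be sampled by inverse transform, since y ↦ h(y)/h(x) is the conditional distribution function of the post-jump state. A minimal illustration (ours, not from the paper) with h(x) = e^x, for which the atom at 0 has mass H(x, 0) = e^{−x}:

```python
import math, random

# Inverse-transform sampling of the post-jump state for the separable
# kernel H(x, y) = h(y)/h(x) with h(x) = e**x: Y = max(x + log U, 0),
# U uniform on (0,1); the disaster probability is H(x, 0) = e**(-x).

def sample_post_jump(x, rng):
    return max(x + math.log(rng.random()), 0.0)   # h^{-1}(z) = log z, clipped

rng = random.Random(42)
x, n = 1.5, 100_000
samples = [sample_post_jump(x, rng) for _ in range(n)]

p0 = sum(s == 0.0 for s in samples) / n
assert abs(p0 - math.exp(-x)) < 0.01              # atom mass e^{-1.5}

y = 0.8
emp_cdf = sum(s <= y for s in samples) / n
assert abs(emp_cdf - math.exp(y - x)) < 0.01      # H(x, y) = e^{y-x}
```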
Example 1. (i) If H(x, y) = h(y)/h(x) (the separable case), then necessarily x ↦ H(x, y) is non-increasing in x for all y (because y ↦ H(x, y) is non-decreasing in y for all x, entailing h non-decreasing). Particular cases are:
* h(x) = e^x, in which case H(x, 0) = e^{−x} > 0 (an instantaneous disaster can occur with some positive probability). This is the continuous version of the truncated geometric model defined in [16]. Letting Z > 0 be random, with complementary distribution function F̄_Z(z) = P(Z > z), H(x, y) = F̄_Z(x)/F̄_Z(y) is also in this class, with H(x, 0) = F̄_Z(x) > 0.
* h(x) = x, in which case H(x, 0) = 0 (no instantaneous disaster).
In the latter two examples H(∞, y) = 0 and there is no way to come down from infinity.
- Let Z > 0 be random and proper, with F̄_Z(z) = P(Z > z). Suppose H(x, y) = h(y)/h(x) with h(x) = h(∞) − (h(∞) − h(0)) F̄_Z(x), for some constants ∞ > h(∞) > h(0) > 0. Then, h(x) being bounded above,
H(∞, y) = 1 − [(h(∞) − h(0))/h(∞)] F̄_Z(y),
and there is a possibility to come down from infinity. Note that H(x, 0) = h(0)/h(x) > 0.
(ii) If, with u ∈ (0, 1), H(x, dy) = δ_{ux}(dy), then after each catastrophe a fixed fraction u of the previous population is kept. In this case H(x, y) = 1(y ≥ ux), which is not separable.
- Let U ∈ (0, 1) be random, with distribution function F_U(u) = P(U ≤ u). Define H(x, y) = F_U(y/x). After each catastrophe a random fraction U of the previous population is kept. If F_U(u) = u^α, α > 0 (U ∼ beta(α, 1)), we are led to a separable case: H(x, y) = (y/x)^α = h(y)/h(x) with h(x) = x^α, h(0) = 0, whence H(x, 0) = 0. The case α = 1 was already discussed. If F_U(u) = 1 − (1 − u)^β, β > 0 (U ∼ beta(1, β)), we are led to a non-separable case: H(x, y) = 1 − (1 − y/x)^β. If U ∼ beta(α, β) with α, β > 0 and β ≠ 1, the model is not separable.
- Let Z > 0 be random, with F̄_Z(z) = P(Z > z). Suppose H(x, y) = F̄_Z(x − y). Except when F̄_Z(z) = e^{−z}, this is a non-separable case, which is non-decreasing in y for all x and non-increasing in x for all y. While H(x, 0) = F̄_Z(x) > 0, there is a positive probability of disasters. Example (Pareto): F̄_Z(z) = (1 + z)^{−α}, α > 0.
- Let Z > 0 be random, with distribution function F_Z(z) = P(Z ≤ z), and suppose that H(x, y) = F_Z(x + y)/F_Z(2x). This is a non-separable case which is non-decreasing in y for all x and not necessarily non-increasing in x for all y. While H(x, 0) = F_Z(x)/F_Z(2x) > 0, there is a positive probability of disasters. Note also that for all y ≥ 0, H(∞, y) = 1: if the process X ever hits ∞ and jumps down, it is instantaneously reset to 0. Example (exponential): F_Z(z) = 1 − e^{−αz}, α > 0.
- In the separable case, with l(z) = (d/dz) log h(z),
H(x, y) = e^{−∫_y^x l(z) dz},
where the integral only depends on the terminal and initial values x and y.
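A minimal time-discretised simulation sketch of such a catastrophe process (ours, not the paper's exact jump-time construction), under illustrative choices α(x) = x, β(x) = x and the kernel H(x, dy) = δ_{x/2}(dy), so each catastrophe halves the population:

```python
import random

# Crude Euler/thinning simulation: deterministic flow x' = alpha(x) = x
# between jumps, jumps at state-dependent rate beta(x) = x, each jump
# keeping the fraction u = 1/2 (all choices illustrative).

def simulate(x, horizon, dt=1e-3, seed=0):
    rng = random.Random(seed)
    t = 0.0
    while t < horizon:
        x += dt * x                          # deterministic growth
        if rng.random() < min(1.0, x * dt):  # jump with prob ≈ beta(x) dt
            x *= 0.5                         # partial catastrophe
        t += dt
    return x

vals = [simulate(1.0, 5.0, seed=s) for s in range(200)]
assert all(v > 0 for v in vals)   # partial catastrophes never reach state 0
```

Since H(x, 0) = 0 here, state 0 is never visited, in line with the "partial catastrophes" case below.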
Introducing a Poisson random measure M(dt, dz) on [0, ∞) × [0, ∞) with intensity dt dz, we are thus led to consider the piecewise deterministic Markov process (PDMP) X_t(x) with state-space [0, ∞], obeying
(7) dX_t(x) = α(X_{t−}(x)) dt − ∆(X_{t−}(x)) ∫_0^∞ 1{z ≤ β(X_{t−}(x))} M(dt, dz), X_0(x) = x.
The associated infinitesimal generator is given, for any smooth test function u, by
(8) Gu(x) = α(x) u′(x) + β(x) ∫_0^x [u(y) − u(x)] H(x, dy), x ≥ 0.
In the separable case H(x, y) = h(y)/h(x), this reads
(9) Gu(x) = α(x) u′(x) − [β(x)/h(x)] ∫_0^x u′(y) h(y) dy, x ≥ 0.
The underlying jump counting process is
(10) dN_t(x) = ∫_0^∞ 1{z ≤ β(X_{t−}(x))} M(dt, dz), with E(N_t(x)) = E ∫_0^t β(X_s(x)) ds.
Defining
(11) T_x = inf{t > 0 : X_t(x) ≠ X_{t−}(x)} = inf{t > 0 : X_t ≠ X_{t−} | X_0 = x}
(with the convention inf ∅ = ∞), T_x is the time at which a first jump occurs. In what follows, we shall write S_0 = 0 ≤ S_1 ≤ S_2 ≤ ... ≤ S_n for the successive jump times of the process X_t(x). Notice that S_1 = T_x. Moreover, conditionally on X_{S_1} = x_1, S_2 − S_1 =_L T_{x_1}, etc. We shall also consider
τ_{x,0} = inf{t > 0 : X_t(x) = 0} = inf{t > 0 : X_t = 0 | X_0 = x}, inf ∅ := +∞,
which is the first time of local extinction. We are led to the following distinctions:
1/ Total catastrophes (disasters): H(y, 0) = 1 for all y > 0, which means that
P(X_{T_x} = 0 | X_{T_x−} = y) = P(∆(y) = y) = 1.
Given x > 0, state 0 is reached with probability 1, provided T_x < ∞ almost surely.
- If 0 is absorbing for x_t, then X_t = 0 for all t ≥ T_x. Moreover, T_x coincides with the first time to extinction τ_{x,0}.
- If 0 is reflecting for x_t, X_t possibly visits 0 a finite or an infinite number of times, depending on whether T_x < ∞ almost surely or not.
2/ Partial catastrophes (catastrophes without disasters): H(x, 0) = 0 for all x > 0, which is equivalent to P(∆(x) < x) = 1 for all x > 0. Given x > 0, state 0 is never visited. The reflecting/absorbing status of state 0 is unimportant, 0 being never reached. Formally, τ_{x,0} = ∞.
3/ General catastrophes: H(x, 0) ∈ (0, 1), which means that P(∆(x) < x) ∈ (0, 1) for all x > 0. Then P(X > 0 | X_− = x) = P(x − ∆(x) > 0) = 1 − H(x, 0) ∈ (0, 1).
- If 0 is absorbing for x_t, X_t = 0 for all t ≥ τ_{x,0}, where τ_{x,0} is stochastically larger than T_x.
- If 0 is reflecting for x_t, X_t possibly visits 0 a finite or an infinite number of times.
Remark 1. In [3], a special case of PDMP with α(x) = 1 (corresponding to α(x) = α x^a with α = 1 and a = 0) was considered. In [18], a special (Malthusian) case of PDMP corresponding to α(x) = α x was considered.

3.2. First jump distribution.
Given X_0 = x ≥ 0, the first jump time T_x is defined by T_x = inf(t > 0 : X_t ≠ X_{t−} | X_0 = x). Thus, for x > 0, the law of the first jump time T_x is
P(T_x > t) = P_x(N_t = 0) = P_x( ∫_0^t ∫_0^∞ 1{z ≤ β(x_s(x))} M(ds, dz) = 0 ),
where N_t(x) was defined in (10) above. Suppose that I_∞(x) = ∞, that is, there is no finite-time explosion of x_t(x). Then, with γ(x) := β(x)/α(x) and Γ(x) := ∫^x γ(y) dy, an increasing function defined as an indefinite integral, we get, since α > 0 on (0, ∞),
(12) P(T_x > t) = e^{−∫_0^t β(x_s(x)) ds} = e^{−[Γ(x_t(x)) − Γ(x)]}.
Note that the left endpoint (lower bound) of the support of the law of T_x is 0. In the sequel, we shall impose two standing conditions (Assumptions 1 and 2 below).
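Formula (12) can be checked by simulation. A sketch (ours, with illustrative choices a = 0, so that x_t(x) = x + αt, and β(x) = βx, whence Γ(x) = βx²/(2α)), sampling the first jump time by Poisson thinning:

```python
import math, random

# Monte Carlo check of (12): with alpha(x) = alpha (flow x_t = x + alpha*t)
# and beta(x) = beta*x, Gamma(x) = beta*x**2/(2*alpha), so
# P(T_x > t) = exp(-(Gamma(x_t(x)) - Gamma(x))). T_x is sampled by thinning.

def sample_T(x, alpha, beta, T, rng):
    lam_max = beta * (x + alpha * T)      # majorant of beta(x_s(x)) on [0, T]
    t = 0.0
    while t < T:
        t += rng.expovariate(lam_max)
        if t < T and rng.random() < beta * (x + alpha * t) / lam_max:
            return t
    return math.inf                        # no jump before the horizon T

alpha, beta, x0, t0, T = 1.0, 0.5, 1.0, 1.0, 5.0
rng = random.Random(0)
n = 20_000
surv = sum(sample_T(x0, alpha, beta, T, rng) > t0 for _ in range(n)) / n
exact = math.exp(-beta / (2 * alpha) * ((x0 + alpha * t0) ** 2 - x0 ** 2))
assert abs(surv - exact) < 0.02            # exp(-0.75) ≈ 0.472
```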
Assumption 1.
Γ(∞) = ∞.

Assumption 2.
Γ(0) > −∞.
Notice that imposing Assumption 1 ensures P(T_x < ∞) = 1. Indeed, since α > 0 on (0, ∞), for any x > 0, x_t(x) → ∞ as t → ∞, which, together with (12), allows to conclude. Moreover, imposing Assumption 2 implies that for all t ≥ 0, lim_{x→0} P(T_x > t) > 0. If 0 is absorbing, P(T_0 > t) = 1 for all t ≥ 0, meaning that T_0 = ∞ almost surely: being absorbed at 0, the process X never leaves state 0 in finite time. If 0 is reflecting, the definition of T_0 in (11) makes sense replacing x by 0, and (12) remains valid, since t ↦ x_t(0) is invertible. In this case, Assumption 2 is automatically satisfied.
Under Assumption 1, together with I_∞(x) = ∞, we obtain for x > 0
E(T_x) = ∫_0^∞ e^{−∫_x^{x_t(x)} γ(y) dy} dt = ∫_x^∞ [1/α(z)] e^{−∫_x^z γ(y) dy} dz = e^{Γ(x)} ∫_x^∞ [1/α(z)] e^{−Γ(z)} dz.
Notice that under Assumption 1, the above expression is finite if we assume that β is lower-bounded in a neighborhood of ∞, say by a strictly positive constant c > 0. Then, for x sufficiently large,
E(T_x) = e^{Γ(x)} ∫_x^∞ [1/α(z)] e^{−Γ(z)} dz = e^{Γ(x)} ∫_x^∞ [dz/β(z)] γ(z) e^{−Γ(z)} ≤ (1/c) e^{Γ(x)} ∫_x^∞ γ(z) e^{−Γ(z)} dz < ∞,
since we supposed that Γ(∞) = ∞.
Remark 2. If β(0) > 0, then Assumption 2 implies I(x) < ∞, such that 0 is necessarily reflecting. Notice also that β(∞) < ∞ together with Assumption 1 implies that I_∞(x) = ∞.

Remark 3.
Under Assumption 1, and if I_∞(x) = ∞, for x > 0, we may rewrite E(T_x) as follows:
E(T_x) = e^{Γ(x)} ∫_x^∞ [dz/β(z)] γ(z) e^{−Γ(z)}.
Introducing the random variable G(x) with density
P(G(x) ∈ dz) = dz e^{Γ(x)} γ(z) e^{−Γ(z)} 1{z > x},
this is also E(T_x) = E(1/β(G(x))).

Example 2.
We take α(x) = α x^a with a ≤ 1, such that state ∞ is inaccessible. Moreover, we choose β(x) = β x^b with b > a − 1, whence γ(x) = γ x^{b−a} with γ := β/α, and Γ(x) = ∫_0^x γ(y) dy = [γ/(b−a+1)] x^{b−a+1}. Notice that Γ(0) = 0, Γ(∞) = ∞ and
Γ(x_t(x)) − Γ(x) = [γ/(b−a+1)] [y^{b−a+1}]_x^{x_t(x)} = [γ/(b−a+1)] (x_t(x)^{b−a+1} − x^{b−a+1}) = [γ/(b−a+1)] ((x^{1−a} + α(1−a)t)^{(b−a+1)/(1−a)} − x^{b−a+1}).
In this case, T_x has a shifted Weibull distribution, with mean
E(T_x) = [e^{[γ/(b−a+1)] x^{b−a+1}} / (α(b−a+1))] ∫_{x^{b−a+1}}^∞ u^{(1−a)/(b−a+1) − 1} e^{−[γ/(b−a+1)] u} du < ∞.
As x → 0, Γ(x_t(x)) − Γ(x) → [γ/(b−a+1)] (α(1−a)t)^{(b−a+1)/(1−a)}. T_0 has a Weibull distribution, with
E(T_0) = [1/(α(1−a))] [γ/(b−a+1)]^{−(1−a)/(b−a+1)} Γ(1 + (1−a)/(b−a+1)).
If a = 1 (0 absorbing), E(T_0) = ∞ since β(0) = 0. If a < 1 (0 reflecting), 0 < E(T_0) < ∞. We finally present an example where Assumption 2 is not verified. Such cases will not be considered in the sequel of this work.
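The Weibull mean for T_0 can be verified numerically (illustrative parameter values; the quadrature below is ours):

```python
import math

# Check of the Weibull mean in Example 2: with
# P(T_0 > t) = exp(-(gamma/(b-a+1)) * (alpha*(1-a)*t)**((b-a+1)/(1-a))),
# the mean equals lambda * Gamma(1 + 1/k), k = (b-a+1)/(1-a).

alpha, gam, a, b = 1.0, 2.0, 0.5, 1.0
k = (b - a + 1) / (1 - a)                       # Weibull shape, = 3 here
c = (gam / (b - a + 1)) * (alpha * (1 - a)) ** k

formula = (1 / (alpha * (1 - a))) \
    * (gam / (b - a + 1)) ** (-(1 - a) / (b - a + 1)) \
    * math.gamma(1 + (1 - a) / (b - a + 1))

# E(T_0) = ∫_0^∞ exp(-c t**k) dt by trapezoidal quadrature
n, T = 200_000, 10.0
h = T / n
quad = 0.5 * (1 + math.exp(-c * T ** k)) * h
for i in range(1, n):
    t = i * h
    quad += math.exp(-c * t ** k) * h
assert abs(quad - formula) < 1e-4
```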
Example 3 (critical case). If α(x) = α x^a with a < 1 (state ∞ inaccessible and state 0 reflecting) and β(x) = β x^b with b = a − 1, then γ(x) = γ/x and Γ(x) = ∫^x γ(y) dy = γ log x, such that Γ(0) = −∞ and Γ(∞) = ∞. We have
Γ(x_t(x)) − Γ(x) = γ log(x_t(x)/x) and P(T_x > t) = (x_t(x)/x)^{−γ} = (1 + α(1−a) t x^{a−1})^{−γ/(1−a)},
following a Pareto distribution. We have
E(T_x) = (x^γ/α) ∫_x^∞ z^{−(a+γ)} dz,
which is finite if and only if γ > 1 − a. In this case, E(T_x) = x^{1−a}/[α(γ + a − 1)]. Clearly, lim_{x→0} E(T_x) = E(T_0) = 0, since a < 1, and this corresponds to Γ(0) = −∞.

3.3. First jump time in case of finite time explosion. If x_t(x) explodes in finite time t_*(x) > 0, that is, if I_∞(x) < ∞, then we still have, for all t ≥ 0,
P(T_x > t) = e^{−∫_0^t β(x_s(x)) ds},
which equals, for all t < t_*(x),
(13) P(T_x ≥ t) = P(T_x > t) = e^{−∫_0^t β(x_s(x)) ds} = e^{−[Γ(x_t(x)) − Γ(x)]}.
Letting t ↑ t_*(x) in the above equation, we get P(T_x ≥ t_*(x)) = e^{−[Γ(∞) − Γ(x)]} by monotone convergence, since Γ is increasing, whence the necessary and sufficient condition
(14) P(T_x ≥ t_*(x)) = 0 ⟺ Γ(∞) = ∞.
Notice that under Assumption 1, the representation (13) remains valid for all x > 0, and also for x = 0 if 0 is reflecting. Notice finally that E(T_x) < ∞, since T_x < t_*(x) almost surely.

Example 4.
We consider α(x) = α x^a with a > 1, such that the solution x_t(x) = (x^{1−a} + α(1−a)t)^{1/(1−a)} explodes in finite time at t_*(x) = x^{1−a}/[α(a−1)]. Taking β(x) = β x^b, we have, for b ≠ a − 1, Γ(x) = [γ/(b−a+1)] x^{b−a+1}, such that Γ(∞) = ∞ ⟺ b > a − 1. If b > a − 1, then T_x < t_*(x) almost surely. If 0 < b < a − 1, then Γ(∞) = 0 and β(∞) = ∞, and T_x has an atom at t_*(x) with mass e^{[γ/(b−a+1)] x^{b−a+1}}. If b = 0, the process jumps at constant rate, independently of its value (finite or infinite). Finally, if b < 0, then β(∞) = 0 and T_x = +∞ with probability e^{[γ/(b−a+1)] x^{b−a+1}}.

Example 5.
We continue the preceding example with α(x) = α x^a, a > 1, but now we take β(x) = β. Then γ(x) = γ x^{−a} and Γ(x) = [γ/(1−a)] x^{1−a}. In particular, Γ(0) = −∞ and Γ(∞) = 0. 0 is absorbing in this case, such that, formally, T_0 = ∞ almost surely. Moreover, for all x > 0, T_x follows an exponential distribution with parameter β. We check that
Γ(x_t(x)) − Γ(x) = ∫_x^{x_t(x)} γ(y) dy = [γ/(1−a)] [y^{1−a}]_x^{x_t(x)} = [γ/(1−a)] (x_t(x)^{1−a} − x^{1−a}) = β t if t < t_*(x) = x^{1−a}/[α(a−1)],
where we recall that x_t(x) = (x^{1−a} + α(1−a)t)^{1/(1−a)}. Applying the representation in the middle of (12), we obtain E(T_x) = β^{−1}, as expected.

3.4. Joint distribution of (T_x, X_{T_x}). Under the assumption I_∞(x) = +∞, we have, for all y ∈ [0, x_t(x)),
P(T_x ∈ dt, X_{T_x} ∈ dy) = dt β(x_t(x)) e^{−∫_0^t β(x_s(x)) ds} H(x_t(x), dy) = dt β(x_t(x)) e^{−∫_x^{x_t(x)} γ(z) dz} H(x_t(x), dy).
Moreover,
P(T_x > τ, X_{T_x} ∈ dy) = ∫_τ^∞ dt β(x_t(x)) e^{−∫_x^{x_t(x)} γ(z) dz} H(x_t(x), dy) = e^{Γ(x)} ∫_{x_τ(x)}^∞ dz γ(z) e^{−Γ(z)} H(z, dy)
and
E(T_x 1(X_{T_x} ∈ dy)) = ∫_0^∞ dτ ∫_{x_τ(x)}^∞ dz γ(z) e^{−(Γ(z) − Γ(x))} H(z, dy) = e^{Γ(x)} ∫_x^∞ [dz′/α(z′)] ∫_{z′}^∞ dz γ(z) e^{−Γ(z)} H(z, dy),
such that
P(X_{T_x} ∈ dy) = ∫_0^∞ dt β(x_t(x)) e^{−∫_x^{x_t(x)} γ(z) dz} H(x_t(x), dy) = ∫_x^∞ dz γ(z) e^{−∫_x^z γ(z′) dz′} H(z, dy) = −∫_x^∞ d(e^{−∫_x^z γ(z′) dz′}) H(z, dy)
and
∫_0^∞ P(X_{T_x} ∈ dy) = ∫_x^∞ dz γ(z) e^{−∫_x^z γ(z′) dz′} = −∫_x^∞ d(e^{−∫_x^z γ(z′) dz′}) = 1.

3.5. Classification of state 0. We start classifying state 0. With x > 0, state 0 is non-absorbing, or reflecting, if and only if
I(x) = ∫_0^x dy/α(y) < ∞
(if I(x) = ∞, state 0 is absorbing). I(x) is the time necessary for x_t to move from 0 to x > 0. In particular, if I(x) < ∞, then state 0 is a reflecting boundary; if I(x) = ∞, it is an absorbing boundary.
We can get IN from some x ∈ (0, ∞) to the boundary point 0 iff H(x, 0) > 0 for some x ∈ (0, ∞). We can get OUT of the boundary point 0 iff I(x) < ∞ for some x ∈ (0, ∞).
This leads to four possible combinations for the boundary state 0:
H(x, 0) > 0 and I(x) < ∞: regular (accessible and reflecting).
H(x, 0) > 0 and I(x) = ∞: exit (accessible and absorbing).
H(x, 0) = 0 and I(x) < ∞: entrance (inaccessible and reflecting).
H(x, 0) = 0 and I(x) = ∞: natural (inaccessible and absorbing).
The first case is called regular because we can get in to 0 and we can start the process afresh from there. The second case is called exit because we can get in to 0 but cannot get out. The third is called an entrance boundary because we cannot get in to 0 but we can start the process there. Finally, in the fourth case the process can neither get to nor start afresh from 0, so it is reasonable to exclude 0 from the state space.

3.6. Classification of state ∞ and explosion. We now classify state ∞. State ∞ is absorbing iff, for all y ∈ [0, ∞), H(∞, y) = 0. However, under Assumption 1, X_t(x) is not able to hit state ∞ before its explosion time. Here, we say that the process possesses a finite explosion time S_∞ if
(15) lim_{n→∞} S_n = S_∞ < ∞,
where the sequence of successive jump times of the process is strictly increasing, that is, S_1 < S_2 < ...

Proposition 1.
Suppose that Γ(∞) = ∞ and I_∞(x) < ∞ for some (and hence all) x > 0. Let T_∞(x) = inf{t > 0 : X_{t−}(x) = ∞}. Then P(T_∞(x) < S_∞) = 0.
The above result implies that the process is not able to reach the state +∞ before the time of explosion S_∞.

Proof.
Suppose that T_∞(x) < S_∞ with positive probability, and write T = T_∞(x). Let S_T = sup{S_n : S_n < T} be the last jump of the process strictly before hitting the state +∞. T < S_∞ implies that there is only a finite number of jumps on [0, T], such that, almost surely, S_T < T and X_{S_T} < ∞. Moreover, conditionally on X_{S_T} = y < ∞,
X_{S_T + t} = x_t(y), for all t < T − S_T, and T − S_T = t_*(y).
In particular, X does not jump in (S_T, T). However, since Γ(∞) = ∞, by (14), almost surely T_y < t_*(y), implying that X does indeed jump strictly before time T, which is a contradiction. □
The above arguments show that on the event of explosion {S_∞ < ∞}, the process approaches state ∞ in finite time, that is, on {S_∞ < ∞} we have lim_{n→∞} X_{S_n} = ∞ almost surely. This also follows from the following result, which extends the classical explosion criterion for pure Markov jump processes without drift (see e.g. [15]) to the present frame of PDMPs.

Proposition 2.
Grant Assumptions 1 and 2 and suppose moreover that I(x) < ∞. Then, almost surely,
(S_∞ < ∞) ⟺ ( Σ_n e^{Γ(X_{S_n})} ∫_{X_{S_n}}^∞ [1/α(z)] e^{−Γ(z)} dz < ∞ ).

Proof.
Let us write for short
(16) e(x) := E(T_x) = e^{Γ(x)} ∫_x^∞ [1/α(z)] e^{−Γ(z)} dz.
Then the process
A_n = Σ_{k=1}^n E(S_k − S_{k−1} | F_{S_{k−1}}) = Σ_{k=1}^n e(X_{S_{k−1}})
is the predictable increasing compensator of S_n, that is, M_n := S_n − A_n is a martingale. Putting τ_a := inf{n : A_{n+1} > a}, it follows that M^−_{n∧τ_a} ≤ a, and the martingale convergence theorem implies that {A_∞ < ∞} ⊂ {S_∞ < ∞} almost surely. To prove the opposite inclusion, suppose S_∞ < ∞ with positive probability. Then necessarily I_∞(x) < ∞. In particular, recalling (14),
sup_n (S_n − S_{n−1}) ≤ sup_n t_*(X_{S_{n−1}}) ≤ ∫_0^∞ dy/α(y) < ∞,
since 0 is reflecting by assumption and since I_∞(x) < ∞. Introducing the stopping time σ_a := inf{n : S_n > a}, it follows from the above that sup_n E(M^+_{n∧σ_a}) < ∞. Classical arguments then allow to conclude that {S_∞ < ∞} ⊂ {A_∞ < ∞} almost surely. □
In what follows, we give conditions ensuring that the process reaches state +∞ starting from any point x ∈ (0, ∞). We also exhibit conditions implying that the process comes down from infinity to y ∈ (0, ∞). We can get IN from some x ∈ (0, ∞) to the boundary point ∞ iff Γ(∞) < ∞ and I_∞(x) < ∞. We can get OUT from the boundary point ∞ iff H(∞, y) > 0 for some y ∈ (0, ∞) (see e.g. Example 8).
This leads to four possible combinations for the boundary state ∞. To classify them, we introduce Σ(x) = Σ_{n≥0} e(X_{S_n}), where X_{S_n} is the embedded chain of X_t(x) started at x. Then we have:
Σ(x) < ∞ (so that I_∞(x) < ∞) and H(∞, y) > 0: regular (accessible and reflecting).
Σ(x) < ∞ (so that I_∞(x) < ∞) and H(∞, y) = 0: exit (accessible and absorbing).
I_∞(x) = ∞ (so that Σ(x) = ∞) and H(∞, y) > 0: entrance (inaccessible and reflecting).
I_∞(x) = ∞ (so that Σ(x) = ∞) and H(∞, y) = 0: natural (inaccessible and absorbing).

3.7. Kolmogorov backward and forward equations.
We describe the infinitesimal generator of the process $X_t(x)$.

Backward: With $u_t(x) := E_x u(X_t)$, $u_0(x) = u(x)$, we have (Kolmogorov backward equation)
\[
\partial_t u_t(x) = ( G u_t )(x),
\]
where $G$ is given in (8).

Forward:
With $\Pi_{t,x}(dy) = P_x( X_t \in dy )$, $\Pi_{0,x}(dy) = \delta_x$, this also means
\[
\frac{d}{dt} \int_0^\infty u(y)\, \Pi_{t,x}(dy) = \int_0^\infty ( G u )(y)\, \Pi_{t,x}(dy).
\]
Considering the family of test functions $u(y) = e_\lambda(y) := e^{-\lambda y}$, $\lambda \ge 0$, for which
\[
( G e_\lambda )(x) = -\lambda \alpha(x) e_\lambda(x) + \lambda \beta(x) \int_0^x H(x,y) e_\lambda(y)\, dy,
\]
we get, using Fubini's theorem and putting $\Pi_{t,x}(y) = \int_0^y \Pi_{t,x}(dz)$,
\[
(17)\qquad \frac{d}{dt} \int_0^\infty dy\, e_\lambda(y) \Pi_{t,x}(y) = \frac{d}{dt} \frac1\lambda \int_0^\infty e_\lambda(y) \Pi_{t,x}(dy) = -\int_0^\infty e_\lambda(y) \alpha(y) \Pi_{t,x}(dy) + \int_0^\infty dy\, e_\lambda(y) \int_y^\infty \beta(z) H(z,y) \Pi_{t,x}(dz).
\]
Writing $\mathcal D'_+(\mathbb R)$ for all distributions having support in $[0,\infty)$, we may define the distribution $\delta_t \Pi_{t,x}$ by
\[
\langle \delta_t \Pi_{t,x}, u \rangle := \frac{d}{dt} \int u(y) \Pi_{t,x}(y)\, dy
\]
for any smooth test function $u$ having compact support. Therefore, Laplace transforms characterizing distributions with support in $\mathbb R_+$, we obtain by duality (Kolmogorov forward equation)
\[
(18)\qquad \delta_t \Pi_{t,x} = -\alpha(y) \Pi_{t,x}(dy) + dy \int_y^\infty \beta(z) H(z,y) \Pi_{t,x}(dz).
\]
Proposition 3.
The measure Π t,x ( dy ) has support [0 , x t ( x )] with an atom at x t ( x ) with mass P ( T x > t ) . In particular, δ t Π t,x is of compact support. Proposition 4.
Suppose either that $\alpha$ is strictly positive on $[0,\infty)$ or, in case that $\alpha(0) = 0$, either that $I_0(x) < \infty$ or that $H(x,0) = 0$ for all $x > 0$. Then for all $x > 0$, $\Pi_{t,x}$ is absolutely continuous on $[0, x_t(x))$. Proof.
Let $g$ be a smooth test function having compact support in $[0, x_t(x))$. Then $E( g( X_t(x) ) ) = E( g( X_t(x) ) 1_{\{ T_x \le t \}} )$. Recall that $S_1 < S_2 < \dots$ denote the successive jump times of $X_t(x)$. Then we have
\[
E( g( X_t(x) ) ) = \sum_{n=1}^\infty E( g( X_t(x) ) 1_{\{ N_t = n \}} ).
\]
The joint law of $Y_n := ( S_1, \dots, S_{n+1}, X_{S_1}(x), \dots, X_{S_n}(x) )$ is given by
\[
f_{Y_n}( s_1, \dots, s_{n+1}, dx_1, \dots, dx_n )\, ds_1 \dots ds_{n+1} = \beta( x_{s_1}(x) ) e_{s_1}(x)\, ds_1\, H( x_{s_1}(x), dx_1 )\, \beta( x_{s_2 - s_1}(x_1) ) e_{s_2 - s_1}(x_1)\, ds_2 \cdots H( x_{s_n - s_{n-1}}(x_{n-1}), dx_n )\, \beta( x_{s_{n+1} - s_n}(x_n) ) e_{s_{n+1} - s_n}(x_n)\, ds_{n+1},
\]
where $e_t(x) := e^{ -\int_0^t \beta( x_s(x) )\, ds }$. Therefore,
\[
E( g( X_t(x) ) 1_{\{ N_t = n \}} ) = \int_{[0,t]^n \times [t,\infty)} \int_{\mathbb R_+^n} f_{Y_n}( s_1, \dots, s_{n+1}, dx_1, \dots, dx_n )\, g( x_{t - s_n}(x_n) )\, ds_1 \dots ds_{n+1}.
\]
Notice that under our condition, $x_{t - s_n}(x_n) > 0$ for all $s_n < t$. In particular we also have that $\alpha( x_{t - s_n}(x_n) ) > 0$. Using the change of variables $s_n \mapsto z(s_n)$ with $z(s_n) := x_{t - s_n}(x_n) \in [ x_n, x_t(x_n) ]$, for fixed $x_n$, with $s_n = z^{-1}(z, x_n)$, we then have $dz/ds_n = -\alpha( x_{t - s_n}(x_n) ) = -\alpha(z)$, such that
\[
E( g( X_t(x) ) 1_{\{ N_t = n \}} ) = \int_{\mathbb R_+} dz\, \frac{g(z)}{\alpha(z)} \Big( \int_{[0,t]^{n-1} \times [t,\infty)} \int_{\mathbb R_+^n} 1_{\{ x_n \le z \le x_t(x_n) \}}\, f_{Y_n}( s_1, \dots, z^{-1}(z, x_n), s_{n+1}, dx_1, \dots, dx_n )\, ds_1 \dots ds_{n-1}\, ds_{n+1} \Big).
\]
Summing over $n$ implies the result. $\Box$

Let us come back to equation (18) together with the preceding considerations. We now know that under the conditions of Proposition 4, $\Pi_{t,x}(dy)$ admits a density $\pi_{t,x}(y)$ on $[0, x_t(x))$ and we have
\[
\Pi_{t,x}(dy) = P( T_x > t )\, \delta_{x_t(x)}(dy) + \pi_{t,x}(y)\, 1_{\{ y \in [0, x_t(x)) \}}\, dy.
\]
(18) implies that on $[0, x_t(x))$, the distribution $\delta_t \Pi_{t,x}$ has a density $\delta_t \Pi_{t,x}(y)$ given by
\[
\delta_t \Pi_{t,x}(y) = -\alpha(y) \pi_{t,x}(y) + \int_y^\infty \beta(z) H(z,y) \Pi_{t,x}(dz) = -\alpha(y) \pi_{t,x}(y) + \int_y^\infty \beta(z) H(z,y) \pi_{t,x}(z)\, dz + \beta( x_t(x) ) H( x_t(x), y )\, P( T_x > t ).
\]
In the separable case $H(x,y) = h(y)/h(x)$, this can be rewritten as
\[
\delta_t \Pi_{t,x}(y) = -\alpha(y) \pi_{t,x}(y) + h(y) \int_y^\infty \frac{\beta(z)}{h(z)} \pi_{t,x}(z)\, dz + \beta( x_t(x) ) \frac{h(y)}{h( x_t(x) )}\, P( T_x > t ).
\]
If $\widetilde\pi_{t,x}(y) := \alpha(y) \pi_{t,x}(y)$, putting
\[
(19)\qquad \gamma(x) := \beta(x)/\alpha(x),
\]
we have for all $y \in [0, x_t(x))$,
\[
\delta_t \Pi_{t,x}(y) = -\widetilde\pi_{t,x}(y) + \int_y^\infty \gamma(z) H(z,y) \widetilde\pi_{t,x}(z)\, dz + \beta( x_t(x) ) H( x_t(x), y )\, P( T_x > t ).
\]
In the separable case, this reads
\[
\delta_t \Pi_{t,x}(y) = -\widetilde\pi_{t,x}(y) + h(y) \int_y^\infty \frac{\gamma(z)}{h(z)} \widetilde\pi_{t,x}(z)\, dz + \beta( x_t(x) ) \frac{h(y)}{h( x_t(x) )}\, P( T_x > t ).
\]
Clearly $\Pi_{t,x}(0) = 0$. We conclude for $y = 0$: if $h(0) = 0$, $\widetilde\pi_{t,x}(0) = 0$. If $h(0) > 0$, then
\[
\widetilde\pi_{t,x}(0) = h(0) \int_0^\infty \frac{\gamma(z)}{h(z)} \widetilde\pi_{t,x}(z)\, dz + \beta( x_t(x) ) \frac{h(0)}{h( x_t(x) )}\, P( T_x > t ),
\]
and the value of $\widetilde\pi_{t,x}(0)$ requires the knowledge of the whole $\widetilde\pi_{t,x}(z)$, for all $z \in (0, x_t(x))$.

Remark 4. If $y = \infty$, $\Pi_{t,x}(\infty) = P( X_t(x) < \infty ) = 1$ since $X_t(x) \le x_t(x)$ almost surely. Thus, $\delta_t \Pi_{t,x}(\infty) = 0$ and there is no mass loss.

Remark 5.
Let $T > 0$ and suppose that $x_T(x) < \infty$. Fix any $y \in ( x, x_T(x) )$. Then $t \mapsto \Pi_{t,x}(y)$ is not differentiable in $t = I_x(y) := \int_x^y \frac{ds}{\alpha(s)}$. The proof of this remark is in the appendix. We close this section with the following observation.
Proposition 5.
Suppose that $I_\infty(x) < \infty$ and that $P( T_x < t^*(x) ) = 1$. Grant moreover the assumptions of Proposition 4. Then $\Pi_{t,x}$ is absolutely continuous on $\mathbb R_+$ for all $t \ge t^*(x)$.

4. Recurrence criteria
In this section, we discuss several different recurrence criteria.

4.1. Recurrence of $X_t$ and of the embedded chain. In what follows we shall rely on the notion of Harris recurrence for Markov processes, which we recall here for the convenience of the reader.
Definition 1 (see [2]). $X$ is called Harris recurrent if there exists some $\sigma$-finite measure $m$ on $( \mathbb R_+, \mathcal B(\mathbb R_+) )$ such that for all $A \in \mathcal B(\mathbb R_+)$, $m(A) > 0$ implies
\[
P_x \Big( \int_0^\infty 1_A( X_s )\, ds = \infty \Big) = 1 \quad \text{for all } x \in \mathbb R_+.
\]
It is well-known (see again [2]) that if $X$ is Harris recurrent, then there is a unique (up to constant multiples) invariant measure $\pi$ for $X$, and the above property holds with $\pi$ in place of $m$. $X$ is then called positive recurrent (or also sometimes ergodic) if $\pi(\mathbb R_+) < \infty$, null recurrent if $\pi(\mathbb R_+) = \infty$. Whenever an invariant measure $\pi$ exists which is not equal to $\delta_0$, the same argument leading to (17) implies that $\alpha(x) \pi(dx)$ admits a Lebesgue density $\widetilde\pi(x)$ solving the functional equation
\[
\widetilde\pi(y) = \int_y^\infty \gamma(z) H(z,y) \widetilde\pi(z)\, dz
\]
for $\lambda$-almost all $y > 0$. In the separable case $H(z,y) = \frac{h(y)}{h(z)}$, this yields the explicit expression
\[
(20)\qquad \pi(y) = C\, \frac{h(y)}{\alpha(y)}\, e^{-\Gamma(y)},
\]
up to a multiplicative constant $C > 0$. Notice that under Assumption 2, $\pi$ is integrable in $0+$ if and only if $\int_0 h(x)/\alpha(x)\, dx < \infty$, which is equivalent to $0$ being reflecting in case $h(0) > 0$. These expressions of the speed measure were also obtained by [9], using a different approach.
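The fixed-point characterization of the speed density above, and its separable solution (20), can be checked by quadrature on a toy model. The sketch below uses an illustrative choice of coefficients that is ours, not taken from the text: $\alpha \equiv 1$, $\beta \equiv 2$, $h(y) = e^y$, hence $\Gamma(y) = 2y$ and $\widetilde\pi(y) = \alpha(y)\pi(y) = C e^{-y}$ (we take $C = 1$).

```python
import math

def rhs(y, upper=50.0, n=200000):
    # right-hand side  \int_y^infty gamma(z) H(z, y) pi~(z) dz  of the
    # functional equation, with gamma(z) = 2, H(z, y) = exp(y - z) and
    # pi~(z) = exp(-z); midpoint-rule quadrature on [y, upper]
    step = (upper - y) / n
    total = 0.0
    for k in range(n):
        z = y + (k + 0.5) * step
        total += 2.0 * math.exp(y - z) * math.exp(-z) * step
    return total

# the fixed-point equation pi~(y) = rhs(y) should hold, i.e. rhs(y) close to exp(-y)
```

For this model the integral can also be done by hand, $2 e^{y} \int_y^\infty e^{-2z}\, dz = e^{-y}$, so the quadrature simply confirms that $e^{-y}$ solves the equation.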
Example 6. If $h(x) \sim e^{\Gamma(x)}$ as $x \to \infty$, we have $\pi(x) \sim C/\alpha(x)$ as $x \to \infty$. In particular, $\int^\infty \pi(y)\, dy < \infty$ if and only if $I_\infty(x) < \infty$ for some (and thus all) $x > 0$. This means that the deterministic flow hits state $+\infty$ in finite time. Thus, finite time explosion of the deterministic flow helps the process being positive recurrent (compare also to (14)).

Example 7. (Non-separable cases) $-$ If for some fixed $u \in (0,1)$, $H(x, dy) = \delta_{ux}$, then $H(x,y) = 1_{\{ y \ge ux \}}$, and
\[
\widetilde\pi(x) = \int_x^{x/u} \gamma(y) \widetilde\pi(y)\, dy.
\]
The solution to this functional equation is given in [9].

$-$ If, with $U \in (0,1)$ random, with distribution function $F_U(u) = P( U \le u )$, $H(x,y) = F_U\big( \frac{y}{x} \big)$, then we have
\[
\widetilde\pi(x) = \int_x^\infty \gamma(y) F_U\Big( \frac{x}{y} \Big) \widetilde\pi(y)\, dy = x \int_1^\infty \gamma(xz) F_U\Big( \frac1z \Big) \widetilde\pi(xz)\, dz.
\]
Let us now come back to our general framework. The following result establishes a relation between $\pi$ and the invariant measure of the jump chain. Proposition 6.
Suppose that $X_t$ is Harris recurrent having invariant measure $\pi$ such that $0 < \pi(\beta) < \infty$. Let $S_k$, $k \ge 1$, be the successive jump times of the process and suppose that $( S_k )_{k \ge 1}$ is a strictly increasing sequence. Then $( U_k )_k$ and $( Z_k )_k$ are both Harris recurrent, where $U_k = X_{S_k-}$ and $Z_k = X_{S_k}$. Their invariant measures $\pi_U$ and $\pi_Z$ are respectively given by
\[
\pi_U(g) = \frac1{\pi(\beta)}\, \pi( \beta g ), \qquad \pi_Z(g) = \frac1{\pi(\beta)}\, \pi( \beta H g ),
\]
for any $g : \mathbb R_+ \to \mathbb R$ measurable and bounded, where $\beta H g(x) = \beta(x) \int H(x, dy)\, g(y)$. Proof.
We just give the proof for $( Z_k )_k$; the case of $( U_k )_k$ is treated analogously. Let $g \ge 0$ be measurable and bounded. We have to show that $\frac1n \sum_{k=1}^n g( Z_k ) \to \pi_Z(g)$ as $n \to \infty$, $P_x$-almost surely, for any fixed starting point $x$. But
\[
\frac1n \sum_{k=1}^n g( Z_k ) = \frac1n \sum_{k=1}^n g( X_{S_k} ).
\]
Introduce the jump measure
\[
\mu( ds, dy, dz ) = \sum_{n \ge 1} 1_{\{ S_n < \infty \}}\, \delta_{( S_n, X_{S_n-}, X_{S_n} )}( ds, dy, dz ).
\]
Its compensator is given by
\[
\nu( ds, dy, dz ) = \beta( X_{s-} )\, ds\, \delta_{X_{s-}}( dy )\, H( y, dz ).
\]
Putting $N_t = \sup\{ n : S_n \le t \}$,
\[
\lim_{n \to \infty} \frac1n \sum_{k=1}^n g( X_{S_k} ) = \lim_{t \to \infty} \frac{t}{N_t} \frac1t \sum_{k=1}^{N_t} g( X_{S_k} ) = \lim_{t \to \infty} \frac{t}{N_t} \frac{A_t}{t},
\]
where $A_t = \int_0^t \int_{\mathbb R_+} \int_{\mathbb R_+} g(z)\, \mu( ds, dy, dz )$ and $N_t$ are additive functionals of the process $X$. By the ergodic theorem for the process $X$ (which holds thanks to the Harris recurrence of $X_t$), $N_t/t \to E_\pi( N_1 )$ and $A_t/t \to E_\pi( A_1 )$, and this convergence holds almost surely, for every starting point $x$. But $E_\pi( N_1 ) = E_\pi( \hat N_1 )$ and $E_\pi( A_1 ) = E_\pi( \hat A_1 )$, where
\[
\hat N_t = \int_0^t \int \int \nu( ds, dy, dz ) = \int_0^t \beta( X_s )\, ds
\]
and
\[
\hat A_t = \int_0^t \int \int g(z)\, \nu( ds, dy, dz ) = \int_0^t \beta( X_s ) \int H( X_s, dz )\, g(z)\, ds = \int_0^t \beta H g( X_s )\, ds.
\]
Therefore, $E_\pi( N_1 ) = \pi( \beta )$ and $E_\pi( A_1 ) = \pi( \beta H g )$, and this finishes the proof. $\Box$

We use the above considerations to discuss rapidly that explosion of the process $X_t$, in the sense that $S_\infty < \infty$, is only possible if the jump chain $Z_n$ is transient.

Proposition 7. If $Z_n$ is recurrent, explosion of $X_t$ (that is, $\lim S_n = S_\infty < \infty$ with positive probability) is not possible.

Proof. We know that explosion of $X_t$ is equivalent to $\sum_{n \ge 0} e( Z_n ) < \infty$ (recall the definition of $e$ in (16)). But, if $Z_n$ is recurrent (possibly null-recurrent), we know that for any function $g > 0$ with $\pi_Z(g) \in (0, \infty)$,
\[
\frac{ \sum_{k=1}^n e( Z_k ) }{ \sum_{k=1}^n g( Z_k ) } \to \pi_Z(e)/\pi_Z(g)
\]
almost surely. Since $\sum_{k=1}^n g( Z_k ) \uparrow \infty$ as $n \to \infty$, explosion implies that $\pi_Z(e) = 0$, whence $e = 0$ $\pi_Z$-almost surely. $e$ being strictly positive on $(0,\infty)$, this yields a contradiction. $\Box$

Corollary 8.
In particular, if $Z_n$ is recurrent (positive or null), then $X$ is also recurrent (positive or null).

Proof. $Z_n$ recurrent implies $S_n \uparrow \infty$ almost surely, thanks to Proposition 7. Now let $A \in \mathcal B( \mathbb R_+ )$ be such that $\pi_Z(A) > 0$; then $1_A( Z_n ) = 1$ infinitely often. Then $\limsup_{t \to \infty} 1_A( X_t ) \ge \limsup_{n \to \infty} 1_A( X_{S_n} ) = \lim_{n \to \infty} 1_A( Z_n ) = 1$, whence the recurrence of $X_t$. $\Box$

4.2. Sufficient conditions for positive recurrence.
With $i(x) = x$ the identity function, we obtain
\[
( G i )(x) = \alpha(x) - \beta(x) \int_0^x ( x - y )\, H(x, dy),
\]
such that
\[
( G i )(x) < 0 \iff \int_0^x ( x - y )\, H(x, dy) > 1/\gamma(x).
\]
The quantity $\int_0^x ( x - y )\, H(x, dy)$ is the average size of a downward jump from state $x$. The quantity $1/\gamma(x) = \alpha(x)/\beta(x)$ is the local size of a move up.

$-$ Suppose $0$ is reflecting: If for some $x^* > 0$,
\[
(21)\qquad \int_0^x ( x - y )\, H(x, dy) > 1/\gamma(x) \quad \text{for all } x > x^*,
\]
the process $X_t$ is positive recurrent because above this threshold, $X_t$ has a negative drift pointing towards state $0$. The speed density is integrable and can be tuned to a probability (invariant) density.

$-$ If now $0$ is absorbing and accessible ($H(x,0) > 0$), $X_t$ is transient at $0$.

Condition (21) is not a necessary condition for recurrence, as the following example shows.
Example 8.
Take $h(x) = 2 - e^{-x}$, $H(x,y) = h(y)/h(x)$ and let $\alpha(x) = 1 + 3x$, $\beta(x) = 1$. Assumption 1 is trivially satisfied, Assumption 2 is verified since $0$ is reflecting. Then
\[
\int_0^x ( x - y )\, H(x, dy) = \int_0^x H(x,y)\, dy = \frac1{h(x)} \int_0^x h(y)\, dy = \frac1{h(x)} \big( 2x + e^{-x} - 1 \big) \sim x \quad \text{as } x \to \infty.
\]
As a consequence,
\[
\limsup_{x \to \infty} \Big( \int_0^x ( x - y )\, H(x, dy) - 1/\gamma(x) \Big) < 0,
\]
such that the drift criterion (21) is not satisfied. However, $\inf_x H(x,0) = \inf_x h(0)/h(x) = 1/2$, and jumps occur at constant rate $1$. Thus, at each jump time of the process, there is a minimal probability of $1/2$ of jumping directly to $0$, which implies, by the conditional Borel–Cantelli lemma, that the hitting time of $0$ is finite almost surely, whence the recurrence of the process.

4.3. Exit probabilities and excursions.
With $x > 0$, we introduce $\tau_{x,0} = \inf\{ t > 0 : X_t = 0 \}$, $X_0 = x$, the first time the process comes back to $0$.
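Return times to $0$ and excursions are easy to explore by simulation. The following sketch uses a minimal model of our own choosing (not one from the text): unit growth $\alpha \equiv 1$, constant jump rate $\beta = 2$, and the separable kernel $H(x,y) = h(y)/h(x)$ with $h = \exp$, so that $\Gamma(y) = 2y$. The empirical height of excursions above $0$ can then be compared with the law $P(H < h) = s(h)/(\kappa + s(h))$ derived below (equations (22)–(25)), which for this model gives $\kappa = 1$ and $s(h) = 2(e^h - 1)$.

```python
import math
import random

def excursion_height(rng, beta=2.0):
    # One excursion of the PDMP away from 0: unit-speed growth between jumps,
    # jumps at constant rate beta; from state x the post-jump position has CDF
    # H(x, y) = exp(y - x) on [0, x], with an atom of mass exp(-x) at 0 that
    # terminates the excursion.  Returns the excursion height H.
    x, peak = 0.0, 0.0
    while True:
        x += rng.expovariate(beta)       # deterministic growth until next jump
        peak = max(peak, x)
        v = rng.random()
        if v <= math.exp(-x):            # jump lands exactly at 0
            return peak
        x += math.log(v)                 # inverse-CDF sample of the kernel

def height_cdf(h, beta=2.0):
    # P(H < h) = s(h)/(kappa + s(h)) with kappa = 1 and
    # s(h) = (beta/(beta-1)) * (exp((beta-1) h) - 1) for this model
    s = beta / (beta - 1.0) * (math.exp((beta - 1.0) * h) - 1.0)
    return s / (1.0 + s)

rng = random.Random(0)
est = sum(excursion_height(rng) < 1.0 for _ in range(200000)) / 200000
```

With $2 \cdot 10^5$ excursions, the empirical frequency `est` agrees with `height_cdf(1.0)` (about $0.77$) to roughly two decimal places.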
Proposition 9.
We have $\tau_{x,0} < \infty$ almost surely if and only if
\[
\int_0^\infty \beta( X_s(x) )\, H( X_s(x), 0 )\, ds = \infty
\]
almost surely.

Proof. Writing $N_t := \sum_{n \ge 1} 1_{\{ S_n \le t \}} 1_{\{ X_{S_n} = 0 \}}$, the result follows from the fact that the predictable compensator $\hat N_t$ of $N_t$ is given by
\[
\hat N_t = \int_0^t \beta( X_s(x) )\, H( X_s(x), 0 )\, ds,
\]
together with the fact that $\tau_{x,0} < \infty$ if and only if $N_\infty = \infty$. $\Box$

Corollary 10. If $\beta(\cdot) H(\cdot, 0)$ is lower bounded, then $( X_t )_t$ is recurrent.

In what follows we fix $0 < x < h$ and are interested in establishing explicit formulae for $p(x,h) = P( \tau_{x,0} < \tau_{x,h} )$. Notice that it follows from the properties of our process that $\lim_{x \to h} p(x,h) = p(h,h) = 0$. However, we do not have that $\lim_{x \to 0} p(x,h) = p(0,h) = 1$. In general, $p(0,h) < 1$. We have
\[
p(x,h) = \int_0^{t_x(h)} \mathcal L( T_x )( ds ) \Big( H( x_s(x), 0 ) + \int_{0+}^{x_s(x)} H( x_s(x), dy )\, p(y,h) \Big),
\]
with $t_x(h) = \int_x^h \frac{dy}{\alpha(y)}$ the time needed to go from $x$ to $h$. A simple change of variables implies that
\[
p(x,h) = \int_x^h \gamma(v) e^{ -(\Gamma(v) - \Gamma(x)) } H(v, 0)\, dv + \int_x^h \gamma(v) e^{ -(\Gamma(v) - \Gamma(x)) } \int_{0+}^v H(v, dy)\, p(y,h)\, dv.
\]
In the sequel we shall only consider the separable case $H(x,y) = \frac{h(y)}{h(x)}$ with $h(0) > 0$. In this case, the above formula implies that $x \mapsto p(x,h) \in C^1( [0,h] )$. Recalling that $p(h,h) = 0$, we rewrite
\[
p(y,h) = -\int_y^h p'(z,h)\, dz,
\]
where $p'(x,h) = \partial_x p(x,h)$ denotes the partial derivative with respect to the initial position. We obtain
\[
p(x,h) = ( 1 - p(0,h) ) \int_x^h \gamma(v) e^{ -(\Gamma(v) - \Gamma(x)) } \frac{h(0)}{h(v)}\, dv + \int_x^h \gamma(v) e^{ -(\Gamma(v) - \Gamma(x)) } p(v,h)\, dv - \int_x^h \frac{\gamma(v)}{h(v)} e^{ -(\Gamma(v) - \Gamma(x)) } \int_0^v h(z) p'(z,h)\, dz\, dv.
\]
Taking derivatives, we obtain
\[
p'(x,h)\, h(x) = \gamma(x) \int_0^x h(z) p'(z,h)\, dz - \gamma(x) ( 1 - p(0,h) )\, h(0).
\]
Let
\[
\kappa(x) := \int_0^x h(z) p'(z,h)\, dz - ( 1 - p(0,h) )\, h(0);
\]
then we have $\kappa'(x) = h(x) p'(x,h)$ and $\kappa(0) = -( 1 - p(0,h) ) h(0)$. The above equation reads
\[
\kappa'(x) = \gamma(x) \kappa(x),
\]
leading to $\kappa(x) = C e^{\Gamma(x)}$, where $C$ is such that $C e^{\Gamma(0)} = -( 1 - p(0,h) ) h(0)$; that is, $C = -e^{-\Gamma(0)} h(0) ( 1 - p(0,h) )$. We deduce from this that
\[
p'(x,h) = C\, \frac{\gamma(x)}{h(x)}\, e^{\Gamma(x)},
\]
and thus, using once more that $p(h,h) = 0$,
\[
p(x,h) = -C \int_x^h \frac{\gamma(y)}{h(y)} e^{\Gamma(y)}\, dy = e^{-\Gamma(0)} h(0) ( 1 - p(0,h) ) \int_x^h \frac{\gamma(y)}{h(y)} e^{\Gamma(y)}\, dy.
\]
Finally, the value of $p(0,h)$ is deduced from
\[
p(0,h) = e^{-\Gamma(0)} h(0) ( 1 - p(0,h) ) \int_0^h \frac{\gamma(y)}{h(y)} e^{\Gamma(y)}\, dy.
\]
Let
\[
(22)\qquad s(x) = \int_0^x \frac{\gamma(y)}{h(y)} e^{\Gamma(y)}\, dy.
\]
Then we obtain
\[
(23)\qquad p(0,h) = \frac{ e^{-\Gamma(0)} h(0)\, s(h) }{ 1 + e^{-\Gamma(0)} h(0)\, s(h) } \quad \text{and} \quad P( \tau_{x,0} < \tau_{x,h} ) = p(0,h) \Big[ 1 - \frac{s(x)}{s(h)} \Big].
\]
We have just proven the following
Proposition 11.
Grant Assumptions 1 and 2 and let $0 < x < h$. Suppose moreover that $H(x,y) = \frac{h(y)}{h(x)}$ with $h(0) > 0$. Put $\kappa := e^{\Gamma(0)}/h(0)$. Then
\[
(24)\qquad P( \tau_{x,0} > \tau_{x,h} ) = \frac{ \kappa + s(x) }{ \kappa + s(h) }.
\]
Notice that in case $h(x) \equiv 1$ (total disaster), we obtain $P( \tau_{x,h} < \tau_{x,0} ) = e^{ -(\Gamma(h) - \Gamma(x)) }$.

Discussion of the role of $0$. Proposition 11 holds true in both cases $0$ reflecting or absorbing. However, what follows only makes sense in case $0$ is reflecting, that is, $I_0(x) < \infty$. In this case we may introduce the height $H$ of an excursion by
\[
H = \sup\{ X_t(0) : t < \tau_{0,0} \},
\]
where $\tau_{0,0} = \inf\{ t > 0 : X_t(0) = 0 \} > 0$. Since $\tau_{x,0} \stackrel{\mathcal L}{\to} \tau_{0,0}$ as $x \to 0$, we may interpret $p(0,h)$ by means of the distribution function of the height of an excursion. Proposition 12.
Grant the assumptions of Proposition 11 and suppose that $I_0(x) < \infty$. Then
\[
(25)\qquad P( H < h ) = P( \tau_{0,0} < \tau_{0,h} ) = p(0,h) = \frac{ s(h) }{ \kappa + s(h) }.
\]
Remark 6.
Suppose $0$ is absorbing, that is, $I_0(x) = \infty$. In this case, letting $x \to 0$ in (24), we still obtain
\[
\lim_{x \to 0} P( \tau_{x,0} > \tau_{x,h} ) = \frac{ \kappa }{ \kappa + s(h) } \ne 0 \quad \text{and} \quad \lim_{x \to 0} P( \tau_{x,0} < \tau_{x,h} ) = \frac{ s(h) }{ \kappa + s(h) } \ne 1.
\]
This means that $\lim_{x \to 0} \tau_{x,0} \ne \tau_{0,0}$; in other words, $x \mapsto \tau_{x,0}$ is discontinuous in $0$. Corollary 13.
Grant Assumptions 1 and 2 and suppose moreover that $H(x,y) = \frac{h(y)}{h(x)}$ with $h(0) > 0$, that $I_\infty(x) = \infty$ and $I_0(x) < \infty$. Then the process is recurrent if and only if $s(\infty) = \infty$, where the function $s(x)$ is given by (22). In this latter case, $\tau_{x,0} < \infty$ almost surely, and the unique invariant measure possesses a Lebesgue density on $\mathbb R_+$ which is given by (20). The process is positive recurrent if $\int^\infty \frac{h(x)}{\alpha(x)} e^{-\Gamma(x)}\, dx < \infty$, null-recurrent else.

Proof. Suppose $s(\infty) = \infty$. We let $h \to \infty$ in (23) and notice that $\lim_{h \to \infty} p(0,h) = 1$, such that $P( \tau_{x,0} < \tau_{x,\infty} ) = 1$. This implies that $\tau_{x,0} < \infty$ almost surely. On the other hand, suppose that the process is recurrent. It is straightforward to show that the recurrence implies that $\tau_{0,0} < \infty$ almost surely (recall that $0$ is reflecting by assumption and that $\beta$ is positive on $(0,\infty)$). Since $H \le x_{\tau_{0,0}}(0)$ and since $I_\infty(x) = \infty$, this implies that $H < \infty$ almost surely, i.e., $\lim_{h \to \infty} P( H < h ) = \lim_{h \to \infty} p(0,h) = 1$. Under our assumptions, this is only possible if $s(\infty) = \infty$. $\Box$ Remark 7.
We impose all assumptions of Corollary 13 except that now we consider the absorbing case $I_0(x) = \infty$. In this case we still have that $\tau_{x,0} < \infty$ almost surely if and only if $s(\infty) = \infty$: the process gets absorbed in $0$ after a finite time almost surely and then stays there forever. When $h(x) \equiv 1$ (total disasters), the event $\tau_{x,h} < \tau_{x,0}$ coincides with the event $T_x > t_h(x)$, where $t_h(x) = \int_x^h dy/\alpha(y)$ is the time needed for the flow to reach level $h$ starting from $x$. Example 9.
Consider a growth model with $\alpha(x) = \alpha x^a$, $\beta(x) = \beta$, $\gamma(x) = \frac\beta\alpha x^{-a}$, and assume $h(x) \equiv 1$. Assuming $a < 1$, for which boundary $0$ is reflecting, then
\[
x_t(x) = \big( x^{1-a} + \alpha (1-a) t \big)^{1/(1-a)} = h \;\Longrightarrow\; t_h(x) = \frac{ h^{1-a} - x^{1-a} }{ \alpha (1-a) }.
\]
Thus,
\[
P( \tau_{x,h} < \tau_{x,0} ) = P( T_x > t_h(x) ) = P\Big( T_x > \frac{ h^{1-a} - x^{1-a} }{ \alpha (1-a) } \Big) = e^{ -[ \Gamma( x_t(x) ) - \Gamma(x) ] } \Big|_{ t = \frac{ h^{1-a} - x^{1-a} }{ \alpha (1-a) } } = \frac{ e^{\Gamma(x)} }{ e^{\Gamma(h)} },
\]
with $\Gamma(x) = \frac{ \gamma }{ 1-a } x^{1-a}$, where $\gamma = \beta/\alpha$. As $x \to 0$, with $t_h := t_h(0)$,
\[
P( \tau_{0,h} < \tau_{0,0} ) = P( H \ge h ) = P\Big( T_0 > \frac{ h^{1-a} }{ \alpha (1-a) } = t_h(0) \Big) = e^{ -\Gamma(h) },
\]
where $H$ denotes the height of an excursion, which makes sense because boundary $0$ is reflecting and the chain is recurrent ($s(\infty) = \infty$). So here $H \stackrel{d}{=} ( \alpha (1-a) T )^{1/(1-a)}$, showing how height and length of excursions scale. Example 10.
Consider a growth model with $\alpha(x) = \alpha_0 + \alpha_1 x$ (Malthusian growth with immigration), $\beta(x) = \beta_1$, $\gamma(x) = \beta_1 / ( \alpha_0 + \alpha_1 x )$, and assume $h(x) = e^x$. We have
\[
\Gamma(x) = \frac{ \beta_1 }{ \alpha_1 } \log( \alpha_0 + \alpha_1 x ),
\]
satisfying Assumptions 1 and 2. State $0$ is reflecting and the process $X$ is transient at $\infty$. Here $\kappa = e^{\Gamma(0)}/h(0) = \alpha_0^{ \beta_1/\alpha_1 }$, and
\[
s(x) = \beta_1 \int_0^x ( \alpha_0 + \alpha_1 y )^{ \beta_1/\alpha_1 - 1 } e^{-y}\, dy = \frac{ \beta_1 e^{ \alpha_0/\alpha_1 } }{ \alpha_1 } \int_{\alpha_0}^{ \alpha_0 + \alpha_1 x } z^{ \beta_1/\alpha_1 - 1 } e^{ -z/\alpha_1 }\, dz,
\]
involving an incomplete Gamma function. It holds that
\[
P( H \ge h ) = \frac{ \kappa }{ \kappa + s(h) }, \quad \text{with} \quad P( H = \infty ) = \kappa / ( \kappa + s(\infty) ) > 0, \qquad s(\infty) < \infty.
\]
Remark 8.
Under the assumptions of Proposition 11, let us discuss the situation $s(\infty) < \infty$. In this case we have $P( \tau_{x,0} < \tau_{x,\infty} ) < 1$. Then either $\tau_{x,\infty} = \infty$: in this case, with positive probability the process never comes back to $0$ and thus is transient, that is, converges to $+\infty$ as $t \to \infty$. Or $\tau_{x,\infty} < \infty$, such that the process hits state $+\infty$ even in finite time. Proposition 1 implies that in this case $S_\infty < \infty$, such that the jump chain $Z_n = X_{S_n}$ is transient. However, in case $\infty$ is regular, we can add state $+\infty$ to the state space. In this particular situation the process $X_t$ is even recurrent, having $+\infty$ as recurrent state. Remark 9.
Under the assumptions of Proposition 11, let us introduce the modified generator $\widetilde G$ of the process $X$ by
\[
\widetilde G u(x) = \alpha(x) u'(x) - \frac{ \beta(x) }{ h(x) } \int^x h(y) u'(y)\, dy,
\]
for any smooth test function $u$. This modified generator differs from the true generator $G u(x)$ defined in (9) only through the fact that the definite integral $\int_0^x h(y) u'(y)\, dy$ appearing in $G u(x)$ is replaced by an indefinite integral $\int^x h(y) u'(y)\, dy$. The function $s$ introduced in (22) above satisfies
\[
\widetilde G s = 0,
\]
with boundary condition $s(0) = 0$. We shall call $s$ a modified scale function of the process. Notice however that in general a true scale function, that is, a function transforming $X_t$ into a martingale, does not exist.

4.4. Classification of the recurrence/transience of state $0$ in the separable case. We close this section with a classification of the recurrence/transience of state $0$ in the separable case with $h(0) > 0$. We have:

$s(\infty) = \infty$, $I_0(x) < \infty$: $0$ is recurrent, positive recurrent iff $\int^\infty \frac{h(x)}{\alpha(x)} e^{-\Gamma(x)}\, dx < \infty$.

$s(\infty) = \infty$, $I_0(x) = \infty$: the process is transient in $0$ (almost surely hits $0$ in finite time and stays there forever).

$s(\infty) < \infty$, $I_\infty(x) = \infty$: the process is transient (converges to $+\infty$ with positive probability).

$s(\infty) < \infty$, $I_\infty(x) < \infty$: the process is either transient (converges to $+\infty$ with positive probability) or hits state $\infty$ in finite time ($\tau_{x,\infty} < \infty$ with positive probability). If state $+\infty$ is REGULAR, we can add it to the state space, and it will become a recurrent state. If it is EXIT, the process hits $+\infty$ in finite time and then stays there forever with positive probability.

4.5. Expected return times to $0$. This section is devoted to obtaining an explicit formula for $u(x) = E( \tau_{x,0} )$ in the case of positive recurrence. In case of total disaster, when $H(x,0) = 1$ for all $x$, we have $\tau_{x,0} = T_x$, which has already been discussed. So we suppose $0 < H(x,0) < 1$ for all $x > 0$ in this subsection. If $x > 0$, we have
\[
(26)\qquad \tau_{x,0} \stackrel{d}{=} T_x\, 1_{( X_{T_x} = 0 )} + 1_{( X_{T_x} > 0 )} \big( T_x + \tau'_{X_{T_x}, 0} \big),
\]
where $\tau'_{X_{T_x}, 0}$ is independent of $\mathcal F_{T_x}$ and distributed as $\tau_{X_{T_x}, 0}$. The first time to local extinction distribution is given in principle by ($x_0 = x$):
\[
P( \tau_{x_0, 0} > t ) = P( T_{x_0} > t ) + \sum_{n \ge 1} \int \cdots
\]
2. The function $\mathbb R_+ \ni x \mapsto \int_0^x g(y) H(x, dy)$ is continuous for all bounded test functions $g$. Proposition 14.
Suppose that Assumptions 1, 2 and 3 hold. Suppose moreover that $u(x) = E( \tau_{x,0} )$ is locally bounded, that is, $\sup\{ u(y), 0 \le y \le x \} < \infty$ for all $x > 0$. Then $u \in C^1( (0,\infty) )$, and it solves
\[
(28)\qquad G u(x) = -1 \quad \text{on } (0,\infty),
\]
where for all $x > 0$,
\[
G u(x) = \alpha(x) u'(x) - \beta(x) H(x,0)\, u(x) + \beta(x) \int_{0+}^x \bar H(x, dy)\, [ u(y) - u(x) ].
\]
Proof.
From (26), we have
\[
E( \tau_{x,0} ) = E( T_x ) + \int_{0+}^\infty P( X_{T_x} \in dy )\, E( \tau_{y,0} ).
\]
If $y > 0$, $P( X_{T_x} \in dy ) = \int_x^\infty dz\, \gamma(z) e^{ -\int_x^z \gamma(z')\, dz' } H(z, dy)$. Therefore
\[
E( \tau_{x,0} ) = E( T_x ) + \int_x^\infty dz\, \gamma(z) e^{ -\int_x^z \gamma(z')\, dz' } \int_{0+}^z H(z, dy)\, E( \tau_{y,0} ) = u_0(x) + e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z) e^{-\Gamma(z)} \int_{0+}^z H(z, dy)\, E( \tau_{y,0} ),
\]
where $u_0(x) = E( T_x ) = e(x)$ (see (16)) is differentiable on $(0,\infty)$. Since $z \mapsto \gamma(z) e^{-\Gamma(z)} \int_{0+}^z H(z, dy)\, E( \tau_{y,0} )$ is continuous, $u(x) = E( \tau_{x,0} )$ is differentiable on $(0,\infty)$ and obeys
\[
u'(x) = u_0'(x) + \gamma(x) ( u(x) - u_0(x) ) - \gamma(x) \int_{0+}^x H(x, dy)\, u(y).
\]
Recalling $u_0'(x) = \gamma(x) u_0(x) - 1/\alpha(x)$, this is
\[
(29)\qquad u'(x) = -1/\alpha(x) + \gamma(x) \Big[ u(x) - \int_{0+}^x H(x, dy)\, u(y) \Big].
\]
Moreover,
\[
u(x) - \int_{0+}^x H(x, dy)\, u(y) = H(x,0)\, u(x) + \int_{0+}^x \bar H(x, dy)\, ( u(x) - u(y) ) = H(x,0)\, u(x) + \int_0^x u'(y)\, \bar H(x, y)\, dy,
\]
where we have used Fubini's theorem to obtain the second equality. This implies the assertion. $\Box$

In what follows, $\pi(y)$ designates the speed density with integration constant $C$ introduced in (20) above. By our assumptions, $\pi(y)$ is integrable. We also recall the definition of the modified scale function $s$ in (22). Corollary 15.
Grant the assumptions of Proposition 14 and suppose that $H(x,y) = h(y)/h(x)$, where $h$ is differentiable, non-decreasing, with $h(0) > 0$. Then $u(x)$ is given by
\[
(30)\qquad u(x) = u(0) + \int_0^x dy\, \frac{ \gamma(y) e^{\Gamma(y)} }{ h(y) } \int_y^\infty e^{-\Gamma(z)} \frac{ h(z) }{ \alpha(z) }\, dz - \int_0^x \frac{ dy }{ \alpha(y) } = u(0) + \frac1C \Big( s(x) \int_x^\infty \pi(y)\, dy + \int_0^x s(y) \pi(y)\, dy \Big) - \int_0^x \frac{ dy }{ \alpha(y) },
\]
with
\[
(31)\qquad u(0) = \frac1{ h(0) e^{\Gamma(0)} } \int_0^\infty e^{-\Gamma(y)} \frac{ h(y) }{ \alpha(y) }\, dy = \frac{ \pi( \mathbb R_+ ) }{ h(0)\, e^{\Gamma(0)}\, C }.
\]
Proof.
We come back to (29) and we put $\bar h(y) = h(y) - h(0)$. Using Fubini and the fact that for $y > 0$, $u(x) - u(y) = \int_y^x u'(z)\, dz$, since $u$ is differentiable on $(0,\infty)$,
\[
u(x) - \int_{0+}^x H(x, dy)\, u(y) = H(x,0)\, u(x) + \int_{0+}^x ( u(x) - u(y) )\, \bar H(x, dy) = H(x,0)\, u(x) + \int_0^x \bar H(x, y)\, u'(y)\, dy = \frac{ h(0) }{ h(x) } u(x) + \frac1{ h(x) } \int_0^x \bar h(y) u'(y)\, dy = \frac{ h(0) }{ h(x) } u(0) + \frac1{ h(x) } \int_0^x h(y) u'(y)\, dy.
\]
Therefore, $u$ solves
\[
\alpha(x) u'(x) - \frac{ \beta(x) }{ h(x) } \int_0^x h(y) u'(y)\, dy - \frac{ \beta(x) }{ h(x) } h(0) u(0) = -1 \quad \text{on } (0,\infty).
\]
Put $v(x) = \int_0^x h(y) u'(y)\, dy + h(0) u(0)$, for $x > 0$. Then $v'(x) = h(x) u'(x)$ and $v(0) = h(0) u(0)$, and thus
\[
(32)\qquad v'(x) - \gamma(x) v(x) = -\frac{ h(x) }{ \alpha(x) }.
\]
Putting $w(x) := e^{-\Gamma(x)} v(x)$, we have
\[
w'(x) = -e^{-\Gamma(x)} \frac{ h(x) }{ \alpha(x) } = -\frac1C \pi(x),
\]
where $\pi$ is the speed density given in (20). By our assumptions, $\pi$, and hence $w'$, is integrable on $\mathbb R_+$, implying that the explicit solution of the above equation is given by
\[
(33)\qquad w(x) = \int_x^\infty e^{-\Gamma(y)} \frac{ h(y) }{ \alpha(y) }\, dy,
\]
such that
\[
(34)\qquad v(x) = e^{\Gamma(x)} \int_x^\infty e^{-\Gamma(y)} \frac{ h(y) }{ \alpha(y) }\, dy.
\]
Since by (32)
\[
\frac{ v'(x) }{ h(x) } = \gamma(x) \frac{ v(x) }{ h(x) } - \frac1{ \alpha(x) } = u'(x),
\]
this implies
\[
u(x) = u(0) + \int_0^x u'(y)\, dy = u(0) + \int_0^x dy\, \frac{ \gamma(y) e^{\Gamma(y)} }{ h(y) } \int_y^\infty e^{-\Gamma(z)} \frac{ h(z) }{ \alpha(z) }\, dz - \int_0^x \frac{ dy }{ \alpha(y) }.
\]
The value of $u(0)$ is deduced from the fact that on the one hand $v(0) = h(0) u(0)$ and on the other hand, by (34),
\[
v(0) = e^{\Gamma(0)} \int_0^\infty e^{-\Gamma(y)} \frac{ h(y) }{ \alpha(y) }\, dy. \qquad \Box
\]
Example 11.
Consider a growth model with catastrophe for which $h(x) = e^x$. Let $\alpha(x) = \alpha x^a$, $a < 1$ (entailing $0$ reflecting), $\beta(x) = \beta x^a$ ($b = a > a - 1$). Assumptions 1 and 2 are satisfied. To ensure recurrence, we assume $\gamma(x) = \gamma = \beta/\alpha > 1$, and due to this, we obtain the expected first return time to $0$ as
\[
u(0) = E( \tau_{0,0} ) = \frac1\alpha \int_0^\infty y^{-a} e^{ -( \gamma - 1 ) y }\, dy = \frac{ \Gamma( 1-a ) }{ \alpha ( \gamma - 1 )^{1-a} } < \infty,
\]
where $\Gamma(\cdot)$ here denotes the Euler Gamma function. Note that, consistently, $u(0)$ diverges when $\gamma \downarrow 1$ and also when $a \uparrow 1$. We also have
\[
u(x) = u(0) + \frac{ \gamma }{ ( \gamma - 1 ) \alpha } \int_0^x de^{ ( \gamma - 1 ) y } \int_y^\infty e^{ -( \gamma - 1 ) z } z^{-a}\, dz - \frac{ x^{1-a} }{ \alpha ( 1-a ) } \sim \frac{ x^{1-a} }{ ( \gamma - 1 ) \alpha ( 1-a ) } \quad \text{as } x \to \infty,
\]
where, after integration by parts, we used a large-$x$ estimate of the incomplete Gamma function. The large-$x$ expected time to local extinction is algebraic. An exact expression (involving the incomplete Gamma function) of $u(x)$ for all $x$ is available from the first expression of $u(x)$.
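The value of $u(0)$ in Example 11 can be cross-checked numerically. The sketch below uses the illustrative constants $a = 1/2$, $\gamma = 2$, $\alpha = 1$ (our choice), for which $u(0) = \Gamma(1/2) = \sqrt\pi$; the substitution $y = u^2$ removes the integrable singularity of $y^{-a}$ at $0$.

```python
import math

def u0_quad(alpha=1.0, a=0.5, gamma_=2.0, upper=8.0, n=200000):
    # u(0) = (1/alpha) \int_0^infty y^{-a} exp(-(gamma-1) y) dy,
    # computed by midpoint quadrature after the substitution y = u^2
    # (dy = 2 u du), which removes the singularity at 0 when a = 1/2.
    step = upper / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * step
        y = u * u
        total += (y ** (-a)) * math.exp(-(gamma_ - 1.0) * y) * 2.0 * u * step
    return total / alpha

def u0_exact(alpha=1.0, a=0.5, gamma_=2.0):
    # closed form Gamma(1-a) / (alpha (gamma-1)^{1-a}) from Example 11
    return math.gamma(1.0 - a) / (alpha * (gamma_ - 1.0) ** (1.0 - a))
```

With these constants the integrand becomes $2 e^{-u^2}$, so the quadrature reproduces $\sqrt\pi$ to several digits.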
A short discussion of the absorbing case.
In case $0$ is absorbing, the arguments used in the proof of Proposition 14 do not apply directly. Indeed, since $I_0(x) = \infty$, we have that $u(0) = \infty$ (compare to (31)), such that $x \mapsto u(x)$ will no longer be bounded as $x \to 0$. However, we obtain the following (semi-)explicit formula for the expected return time to $0$ in the absorbing case, showing that $u(x)$ is known up to an additive constant. Proposition 16.
Grant Assumptions 1 and 2. Suppose moreover
\[
\int^\infty \pi(x)\, dx < \infty \quad \text{but} \quad \int_0 \pi(x)\, dx = \infty,
\]
that $H(x,y) = h(y)/h(x)$ with $h(0) > 0$, and that $u(x) < \infty$ for any $x > 0$. Then for all $0 < x < y < \infty$,
\[
(35)\qquad u(y) = u(x) + \int_x^y du\, \frac{ \gamma(u) e^{\Gamma(u)} }{ h(u) } \int_u^\infty e^{-\Gamma(z)} \frac{ h(z) }{ \alpha(z) }\, dz - \int_x^y \frac{ du }{ \alpha(u) }.
\]
Proof.
Throughout the proof, $0 < x < y$ are fixed. We start from the equation
\[
u(x) = u_0(x) + e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z) \frac{ e^{-\Gamma(z)} }{ h(z) } \int_0^z u(y)\, d\bar h(y).
\]
Now fix some $0 < \varepsilon < x$. In a first step, we consider an approximate solution $\widetilde u_\varepsilon(x)$ of the above equation, where we use truncated integrals, that is,
\[
\widetilde u_\varepsilon(x) = u_0(x) + e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z) \frac{ e^{-\Gamma(z)} }{ h(z) } \int_\varepsilon^z \widetilde u_\varepsilon(y)\, d\bar h(y), \qquad x \ge \varepsilon.
\]
Due to our assumptions, $\widetilde u_\varepsilon \in C^1( ( \varepsilon, \infty ) )$, and we obtain, following the same arguments as in the proof of Corollary 15, that $\widetilde u_\varepsilon$ solves
\[
\alpha(x) \widetilde u_\varepsilon'(x) - \frac{ \beta(x) }{ h(x) } \int_\varepsilon^x h(y) \widetilde u_\varepsilon'(y)\, dy - \frac{ \beta(x) }{ h(x) } h(\varepsilon) \widetilde u_\varepsilon(\varepsilon) = -1.
\]
Putting $v_\varepsilon(x) = \int_\varepsilon^x h(y) \widetilde u_\varepsilon'(y)\, dy + h(\varepsilon) \widetilde u_\varepsilon(\varepsilon)$, we obtain the explicit representation
\[
\widetilde u_\varepsilon(x) = \widetilde u_\varepsilon(\varepsilon) + \int_\varepsilon^x dy\, \frac{ \gamma(y) e^{\Gamma(y)} }{ h(y) } \int_y^\infty e^{-\Gamma(z)} \frac{ h(z) }{ \alpha(z) }\, dz - \int_\varepsilon^x \frac{ dy }{ \alpha(y) }.
\]
In particular, for any $y > x$,
\[
\widetilde u_\varepsilon(y) = \widetilde u_\varepsilon(x) + \int_x^y du\, \frac{ \gamma(u) e^{\Gamma(u)} }{ h(u) } \int_u^\infty e^{-\Gamma(z)} \frac{ h(z) }{ \alpha(z) }\, dz - \int_x^y \frac{ du }{ \alpha(u) }.
\]
The claim now follows from the fact that, by monotone convergence, for any $x > 0$, $\widetilde u_\varepsilon(x) \to u(x)$ as $\varepsilon \to 0$. $\Box$

It follows from (35) that
\[
u(x) = \int^x dy\, \frac{ \gamma(y) e^{\Gamma(y)} }{ h(y) } \int_y^\infty e^{-\Gamma(z)} \frac{ h(z) }{ \alpha(z) }\, dz - \int^x \frac{ dy }{ \alpha(y) },
\]
where $\int^x f(y)\, dy$ denotes one of the antiderivatives of $f$, defined up to an additive constant. So $u(x)$ is known up to an additive constant, say $\lambda$. The exact knowledge of this constant is of no use if one is to estimate the large-$x$ expected time to total extinction starting from $x$. Example 12.
Consider a transient growth model with catastrophe for which $h(x) = e^x$. Let $\alpha(x) = \alpha x$ (Malthus), with $0$ absorbing. Let $\beta(x) = \beta x$, so that $\gamma(x) = \gamma = \beta/\alpha$. Assumptions 1 and 2 hold, $0$ is hit with probability $1$, and we assume $\gamma > 1$. We have
\[
u(x) = \frac{ \gamma }{ ( \gamma - 1 ) \alpha } \int^x de^{ ( \gamma - 1 ) y } \int_y^\infty e^{ -( \gamma - 1 ) z } \frac{ dz }{ z } - \frac1\alpha \log x = \lambda + \frac{ \log x }{ ( \gamma - 1 ) \alpha } + \frac{ \gamma }{ ( \gamma - 1 ) \alpha } e^{ ( \gamma - 1 ) x } \int_x^\infty e^{ -( \gamma - 1 ) y } \frac{ dy }{ y }.
\]
Here,
\[
\frac{ \gamma }{ ( \gamma - 1 ) \alpha } e^{ ( \gamma - 1 ) x } \int_x^\infty \frac{ dz }{ z } e^{ -( \gamma - 1 ) z } = \frac{ \gamma }{ ( \gamma - 1 ) \alpha } e^{ ( \gamma - 1 ) x } \int_{ ( \gamma - 1 ) x }^\infty \frac{ dz' }{ z' } e^{ -z' },
\]
involving the exponential integral function $E_1(x) = \int_x^\infty \frac{ dz' }{ z' } e^{ -z' }$. Using a large-$x$ estimate of the $E_1$ function, it can be shown that
\[
u(x) \sim \frac{ \log x }{ ( \gamma - 1 ) \alpha } \quad \text{as } x \to \infty.
\]
The large-$x$ expected time from $x$ to total extinction is logarithmic.

Some Simulations
We illustrate our results by some simulations involving a growth model with immigration. In our simulations we take $\alpha(x) = \alpha_0 + \alpha_1 x^a$ and $\beta(x) = x^b$ with $\alpha_0 = \alpha_1 = 1$ and $a = 2$. In this case, the state $0$ is reflecting, and there is explosion of the flow $x_t(x)$ in finite time. Assumptions 1 and 2 are both satisfied. We work in the separable case $H(x,y) = \frac{ h(y) }{ h(x) }$. The following simulations are done in discrete time by using the embedded chain $Z_n = X_{S_n}$, in the case where $0$ is not absorbing. In this case, we have for all $x \ge 0$,
\[
P( Z_n \in dy \,|\, Z_{n-1} = x ) = \int_0^\infty dt\, \beta( x_t(x) )\, e^{ -\int_x^{ x_t(x) } \gamma(z)\, dz } H( x_t(x), dy ) = e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z) e^{-\Gamma(z)} H(z, dy),
\]
translating that $Z_n$ is a time-homogeneous discrete-time Markov chain on $[0,\infty]$. We also have
\[
(36)\qquad P( Z_n \le y \,|\, Z_{n-1} = x ) = e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z) e^{-\Gamma(z)} \int_0^y H(z, dy') = 1 - e^{ -( \Gamma( x \vee y ) - \Gamma(x) ) } + \int_{ x \vee y }^\infty dz\, \gamma(z) e^{ -( \Gamma(z) - \Gamma(x) ) } H(z, y).
\]
Indeed, since H(z, y) = 1 for all y ≥ z (which, the integration variable satisfying z > x, can happen only when y > x), the integral in the first equation has to be cut into two pieces, corresponding to z > x ∨ y and x < z ≤ y. Equivalently,

P(Z_n > y \mid Z_{n-1} = x) = e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z)\, e^{-\Gamma(z)} \int_y^\infty H(z, dy') = e^{-(\Gamma(x \vee y) - \Gamma(x))} - \int_{x \vee y}^\infty dz\, \gamma(z)\, e^{-(\Gamma(z) - \Gamma(x))}\, H(z, y).

Note that

E(Z_n \mid Z_{n-1} = x) = e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z)\, e^{-\Gamma(z)} \int_0^z y'\, H(z, dy')
= e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z)\, e^{-\Gamma(z)} \left( z(1 - H(z, 0)) - \int_0^z (H(z, y) - H(z, 0))\, dy \right)
= e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z)\, e^{-\Gamma(z)} \left[ z - \int_0^z H(z, y')\, dy' \right],

such that

E(Z_n \mid Z_{n-1} = x) - x = e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z)\, e^{-\Gamma(z)} \left[ (z - x) - \int_0^z H(z, y')\, dy' \right].

In the above equation, the first part of the bracket corresponds to a move up, the second part to a move down. To simulate the embedded chain, we first have to decide whether, given Z_{n-1} = x, the forthcoming move is down or up.
- A move down occurs with probability P(Z_n \le x \mid Z_{n-1} = x) = \int_x^\infty dz\, \gamma(z)\, e^{-(\Gamma(z) - \Gamma(x))}\, H(z, x).
- A move up occurs with the complementary probability.
As soon as the type of move is fixed (down or up), to decide where the process goes precisely, we use the inverse of the corresponding distribution function (36) (with y ≤ x or y > x), conditioned on the type of move.

Remark 10. (i) If the jump kernel H(z, y) is decreasing in z for each fixed y, then, from (36), the embedded chain is stochastically monotone in the sense that, for each fixed y, P(Z_n \le y \mid Z_{n-1} = x) is decreasing in x. Note that

P(Z_n \in dy \mid Z_{n-1} = x) = e^{\Gamma(x)} \int_x^\infty dz\, \gamma(z)\, e^{-\Gamma(z)}\, H(z, dy) = E[H(G(x), dy)].

(ii) If state 0 is absorbing, equation (36) is valid only when x > 0, and the boundary condition P(Z_n = 0 \mid Z_{n-1} = 0) = 1 should be added.
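To make both (36) and the simulation recipe concrete, here is a minimal sketch for an assumed toy example (not the parameters used for the figures below): γ(z) ≡ g constant, so Γ(z) − Γ(x) = g(z − x), and the separable kernel with h(x) = e^x, i.e. H(z, y) = e^{y−z} for 0 ≤ y ≤ z, with an atom e^{−z} at 0. The code first checks the two expressions in (36) against each other by quadrature, then estimates the down-move probability by simulating one chain step directly: the pre-jump position is x plus an Exp(g) overshoot, and the jump subtracts an Exp(1) amount truncated at 0 (the truncation mass is exactly the disaster atom). In this example P(Z_n ≤ x | Z_{n−1} = x) = g/(g+1).

```python
import math
import random

g = 2.0  # assumed constant jump-rate density gamma(z) = g

def H(z, y):
    """Distribution function y -> H(z, y) for h(x) = e^x (atom at 0 included)."""
    return 1.0 if y >= z else math.exp(y - z)

def cdf_direct(x, y, n=200_000, width=40.0):
    """First expression in (36), by trapezoidal quadrature truncated at x + width."""
    f = lambda z: g * math.exp(-g * (z - x)) * H(z, y)
    h = width / n
    s = 0.5 * (f(x) + f(x + width))
    for k in range(1, n):
        s += f(x + k * h)
    return h * s

def cdf_split(x, y):
    """Second expression in (36); for this kernel the remaining integral is explicit."""
    m = max(x, y)  # x v y
    return 1.0 - math.exp(-g * (m - x)) + g / (g + 1.0) * math.exp(-g * (m - x) - (m - y))

for xx, yy in [(1.0, 0.3), (1.0, 2.5)]:
    assert abs(cdf_direct(xx, yy) - cdf_split(xx, yy)) < 1e-4

# Direct simulation of one embedded-chain step Z_{n-1} -> Z_n:
random.seed(1)

def embedded_step(x):
    z = x + random.expovariate(g)                  # pre-jump position, survival e^{-g(z-x)}
    return max(z - random.expovariate(1.0), 0.0)   # Exp(1) drop; truncation = disaster atom

x, n = 1.5, 200_000
down = sum(embedded_step(x) <= x for _ in range(n)) / n
exact = g / (g + 1.0)   # P(Z_n <= x | Z_{n-1} = x), from (36) with y = x
print(down, exact)
```

Note that the down-move probability here does not depend on the starting point x, a peculiarity of the constant-γ, exponential-kernel example.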
The first simulation is done with the choice h(x) = e^x. Here, state +∞ is an absorbing state.

(Figure: simulated trajectory of the embedded chain Z_n with h(x) = exp(x).)

We can remark the occurrence of many jumps for small values of the process and the scarcity of jumps for large values. In other words, the probability of disaster when the process is at position x tends to 0 as x tends to infinity. It is decreasing in x: the greater x is, the smaller the probability of disaster at that point. In particular, H(∞, {∞}) = 1, that is, state +∞ is absorbing. By a simple calculation we notice that s(∞) < ∞ and I_∞(x) < ∞. Using the last criterion of Section 4, X is transient (it converges to +∞ as t → ∞) or hits +∞ in finite time and then stays there forever.

In the next simulation we choose h(x) = 1 for all x (total disaster case). In this case Z_n = 0 for all n ≥ 1. To obtain some information about the process in this case, we have simulated U_n = X_{S_n−}, the position just before the n-th jump. Since s(∞) = ∞, the process X is recurrent and comes back to 0 infinitely often. We have

P(U_n \in dy \mid U_{n-1} = x) = \int_0^x H(x, dz) \int_0^\infty dt\, \beta(x_t(z))\, e^{-\int_z^{x_t(z)} \gamma(u)\, du}\, \delta_{x_t(z)}(dy) = \int_0^x H(x, dz)\, e^{\Gamma(z)} \int_z^\infty du\, \gamma(u)\, e^{-\Gamma(u)}\, \delta_u(dy).

In the particular case h(x) = 1, that is, H(x, dz) = \delta_0(dz), this gives

(37) P(U_n \in dy \mid U_{n-1} = x) = \gamma(y)\, e^{-(\Gamma(y) - \Gamma(0))}\, dy,

that is, (U_n)_{n \ge 1} is an i.i.d. sequence with common distribution given by (37).
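Formula (37) is easy to test by simulation. In the hedged sketch below, γ(u) ≡ g is constant (an illustrative assumption, not the text's choice of β), so that (37) becomes the Exp(g) density; U_n is sampled by inverting the distribution function u ↦ 1 − e^{−(Γ(u)−Γ(0))}, and the empirical mean of the i.i.d. sequence is compared with 1/g.

```python
import math
import random

random.seed(2)
g = 2.0  # assumed constant jump-rate density gamma(u) = g, so Gamma(u) = g*u

def next_U():
    """Position just before the next jump, started from 0 after a total disaster.
    P(U > u) = e^{-(Gamma(u) - Gamma(0))} = e^{-g*u}: inverse-CDF sampling."""
    return -math.log(1.0 - random.random()) / g

n = 200_000
sample = [next_U() for _ in range(n)]
mean = sum(sample) / n
print(mean, 1.0 / g)  # (37) is here the Exp(g) density, whose mean is 1/g
```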
(Figure: simulated trajectory of U_n in the total disaster case.)

Appendix
Proof of Remark 5.
Notice that t = I_x(y) is the unique time needed to go from x to y, that is, x_t(x) = x_{I_x(y)}(x) = y. As a consequence, for this choice of t and for any h > 0, using that x_{t+h}(x) > x_t(x),

\Pi_{t+h,x}(y) = P(X_{t+h}(x) \le x_t(x)) \le P(X_{t+h}(x) < x_{t+h}(x)) = 1 - P(T_x > t + h),

while \Pi_{t,x}(y) = 1, implying that

\frac{\Pi_{t,x}(y) - \Pi_{t+h,x}(y)}{h} \ge \frac{P(T_x > t + h)}{h} \to \infty \text{ as } h \to 0,

since P(T_x > t + h) \to P(T_x > t) > 0. On the other hand, obviously, \Pi_{t-h,x}(y) = 1 for all h > 0, such that

\frac{\Pi_{t-h,x}(y) - \Pi_{t,x}(y)}{h} = 0

for all h > 0. This implies that \partial_{t-} \Pi_{t,x}(y) = 0, while \partial_{t+} \Pi_{t,x}(y) = -\infty for t = I_x(y). □

Acknowledgments:
T. Huillet acknowledges partial support from the "Chaire Modélisation mathématique et biodiversité". B. Goncalves and T. Huillet acknowledge support from the labex MME-DII Center of Excellence (Modèles mathématiques et économiques de la dynamique, de l'incertitude et des interactions, ANR-11-LABX-0023-01 project). Finally, this work was also funded by the CY Initiative of Excellence (grant "Investissements d'Avenir" ANR-16-IDEX-0008), Project EcoDep PSI-AAP 2020-0000000013.

B. Goncalves and T. Huillet: Laboratoire de Physique Théorique et Modélisation, CY Cergy Paris Université, CNRS UMR-8089, 2 avenue Adolphe-Chauvin, 95302 Cergy-Pontoise, France. E-mails: [email protected], [email protected]