Real Self-Similar Processes Started from the Origin
arXiv: math.PR
STEFFEN DEREICH, LEIF DÖRING, AND ANDREAS E. KYPRIANOU
Abstract.
Since the seminal work of Lamperti there has been a lot of interest in understanding the general structure of self-similar Markov processes. Lamperti gave a representation of positive self-similar Markov processes with initial condition strictly larger than 0, which was subsequently extended to zero initial condition.
For real self-similar Markov processes (rssMps) there is a generalization of Lamperti's representation giving a one-to-one correspondence between Markov additive processes and rssMps with initial condition different from the origin.
We develop fluctuation theory for Markov additive processes and use Kuznetsov measures to construct the law of transient real self-similar Markov processes issued from the origin. The construction gives a pathwise representation through two-sided Markov additive processes, extending the Lamperti-Kiu representation to the origin.
Contents
1. Introduction
1.1. Positive Self-Similar Markov Processes
1.2. Real Self-Similar Markov Processes - Main Results
1.3. Sketch of the Proof
1.4. Organisation of the Article
2. Proof
2.1. Convergence Lemma
2.2. Verification of Conditions (1a)-(1c)
2.3. Verification of Conditions (2a)-(2b) and Construction of $P_0$

1. Introduction
A fundamental property appearing in probabilistic models is self-similarity, also called the scaling property. In the context of stochastic processes, this is the phenomenon of scaling time and space in a carefully chosen manner such that there is distributional invariance. An example of the latter is Brownian motion, for which the distribution of $(cB_{c^{-2}t})_{t\ge 0}$ and $(B_t)_{t\ge 0}$ is the same for any $c>0$; its so-called scaling index is thus understood as $2$. A natural question is whether the knowledge of the scaling property alone implies structural properties for a given model and whether such properties can be used to deduce non-trivial implications. In this article we focus on the case of Markov processes taking values in $\mathbb{R}$ that fulfil the same scaling relation as Brownian motion, except that the scaling index is taken more generally to be $\alpha>0$. In particular we focus on entrance laws of such processes from the origin, a problem which, although well understood in the case of Brownian motion, is more difficult to address in the general setting of real self-similar Markov processes. Before coming to our main results we review results and ideas for self-similar Markov processes with non-negative sample paths.
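The Brownian scaling just described can be checked numerically. The following sketch is ours, not part of the article: it simulates endpoint samples of $B_t$ and of the rescaled process $cB_{c^{-2}t}$ and compares their sample variances, both of which should be close to $t$.

```python
import numpy as np

def brownian_at(t, n_samples, rng):
    # B_t ~ Normal(0, t): simulate the endpoint directly.
    return rng.standard_normal(n_samples) * np.sqrt(t)

rng = np.random.default_rng(0)
t, c = 1.5, 3.0
n = 200_000

# Samples of B_t and of the rescaled process c * B_{c^{-2} t}.
b = brownian_at(t, n, rng)
b_scaled = c * brownian_at(c**-2 * t, n, rng)

# Both sample variances should be close to t, illustrating scaling index 2.
print(b.var(), b_scaled.var())
```

The factor $c^2 \cdot c^{-2}t = t$ in the variance of the rescaled endpoint is exactly the cancellation behind the scaling index $2$.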
1.1. Positive Self-Similar Markov Processes.
A strong Markov family $\{P_z, z>0\}$ with càdlàg paths on the state space $[0,\infty)$, with 0 being an absorbing cemetery state, is called a positive self-similar Markov process of index $\alpha>0$ (pssMp) if the scaling property holds:

the law of $(cZ_{c^{-\alpha}t})_{t\ge 0}$ under $P_z$ is $P_{cz}$ (1)

for all $z,c>0$. Here, and in what follows, $Z=(Z_t)_{t\ge 0}$ denotes the canonical process. The analysis of positive self-similar processes is typically based on the Lamperti representation (see for instance Chapter 13 of [24]). It ensures the existence of a Lévy process $(\xi_t)_{t\ge 0}$, possibly killed at an exponential time with cemetery state $-\infty$, such that, under $P_z$ for $z>0$,

$Z_t = \exp(\xi_{\varphi^{-1}(t)})$, $t\ge 0$,

where $\varphi(t)=\int_0^t \exp(\alpha\xi_s)\,ds$ and the Lévy process $\xi$ is started in $\log(z)$. We use the convention that $\exp(\xi_{\varphi^{-1}(t)})$ is equal to zero if $t\notin\varphi([0,\infty))$. It is a consequence of the Lamperti representation that pssMps can be split into two regimes, where $T_0=\inf\{t : Z_t=0\}$:

(R) $P_z(T_0<\infty)=1$ for all $z>0$ $\iff$ $\xi$ drifts to $-\infty$ or is killed,
(T) $P_z(T_0<\infty)=0$ for all $z>0$ $\iff$ $\xi$ drifts to $+\infty$ or oscillates.

Two major questions remained open after Lamperti:
(i) How to extend a pssMp after hitting 0 in the recurrent regime (R) with an instantaneous entrance from zero?
(ii) How to start a pssMp from the origin in the transient regime (T)? More precisely, one asks for extensions $\{P_z, z\ge 0\}$ with the Feller property so that in particular $P_0 := \text{w-}\lim_{z\downarrow 0} P_z$ exists in the Skorokhod topology.

Both questions have been solved in recent years: In the recurrent regime it was proved by Fitzsimmons [12] and Rivero [29] that there is a unique recurrent self-similar Markov extension (or equivalently a self-similar excursion measure with summable excursion lengths) that leaves zero continuously if and only if

$E[e^{\lambda\xi_1}]=1$ for some $0<\lambda<\alpha$. (2)

For the transient regime, it was shown in Chaumont et al. [8] and also in Bertoin and Savov [4] that, if the ascending ladder height process of $\xi$ is non-lattice, the weak limit $P_0$ exists if and only if the weak limit of overshoots

$\mathcal{O} := \text{w-}\lim_{x\uparrow\infty} (\xi_{\tau_x}-x)$ exists, (3)

where $\tau_x := \inf\{t : \xi_t\ge x\}$.
If (3) holds then one says $\xi$ has stationary overshoots. There are different ways of proving the results that involve more or less complicated constructions for the underlying Lévy process $\xi$. The construction that appears to be the most natural to us, in the sense that it works for (R) and (T), was carried out for the recurrent regime by Fitzsimmons and shall be developed in this article for the transient regime.
It has been known for a long time in probabilistic potential theory that excessive measures of Markov processes are closely linked to the entrance behaviour from so-called entrance boundaries. One way the relation is implemented involves Markov processes with random birth and death (Kuznetsov measures), and apart from diffusion processes not many examples are known in which the general theory yields concrete results. Self-similar Markov processes form a nice class of non-trivial examples for which the abstract theory gives explicit results. The essence is a combination of Lamperti's representation with Kaspi's theorem on time-changing Kuznetsov measures. Excursions away from the origin are governed by an excursion measure $n$ corresponding to a particular excessive measure for the pssMp that itself turns out to be a transformation of an invariant measure of $\xi$. Invariant measures for Lévy processes are known explicitly from the Choquet-Dény theorem; hence, excursion measures for pssMps can be identified and constructed through Kuznetsov measures.
It is interesting to observe that the constructions of self-similar excursion measures $n$ as Kuznetsov measures work in the recurrent and transient regimes without using Conditions (2) and (3). [Note that we also interpret $P_0$ as a normalized "excursion measure" even though an excursion starts at 0, does not return to 0 and $P_0$ must be a probability measure.]
The necessity and sufficiency enters as follows:
(R) Condition (2) is necessary and sufficient to construct from $n$ a Markov process by gluing excursions drawn according to a point process of excursions (using Blumenthal's theorem on Itô's synthesis).
(T) To define $P_0$ as a normalized "excursion measure" the Kuznetsov measure needs to be finite, and this is equivalent to Condition (3); see Remark 16 below.
We present our constructions for (T) directly in the more general setting of real self-similar Markov processes.

Remark 1.
The argument of Fitzsimmons [14] for recurrent extensions extends readily to the real-valued setting by replacing Lévy processes with MAPs. Since our main purpose is to show how the potential theoretic approach has to be carried out in the transient case, and since the article is already technical enough, we do not address this topic here.
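Returning to the Lamperti representation, the time change is easy to sketch numerically. The following illustration is ours, not from the article: it discretizes a Lévy path $\xi$ (here a Brownian motion with negative drift, so regime (R)) started in $\log(z)$ and evaluates $Z_t=\exp(\xi_{\varphi^{-1}(t)})$, with $Z_t=0$ once $t$ leaves the simulated range of $\varphi$.

```python
import numpy as np

def lamperti_transform(xi, dt, alpha):
    """Map a discretized Levy path xi (grid step dt) to the pssMp
    Z_t = exp(xi_{phi^{-1}(t)}) via the Lamperti time change."""
    # phi(t) = int_0^t exp(alpha * xi_s) ds, left-endpoint Riemann sum.
    phi = np.concatenate([[0.0], np.cumsum(np.exp(alpha * xi[:-1]) * dt)])

    def Z(t):
        if t >= phi[-1]:
            return 0.0  # convention: zero outside the simulated range of phi
        # phi^{-1}(t) is approximated by the grid index k with phi[k] <= t.
        k = np.searchsorted(phi, t, side="right") - 1
        return float(np.exp(xi[k]))

    return Z

rng = np.random.default_rng(1)
dt, n, alpha = 1e-3, 10_000, 1.0
z = 2.0
# Brownian motion with drift -1 started at log(z): xi drifts to -infinity.
xi = np.concatenate(
    [[np.log(z)],
     np.log(z) + np.cumsum(-1.0 * dt + np.sqrt(dt) * rng.standard_normal(n))])
Z = lamperti_transform(xi, dt, alpha)
print(Z(0.0))  # starts at z, i.e. approximately 2.0
```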
1.2. Real Self-Similar Markov Processes - Main Results.
Let $D^*$ be the space of càdlàg functions $w:\mathbb{R}_+\to\mathbb{R}$ with 0 as absorbing cemetery state, endowed with the Skorokhod topology and the corresponding Borel $\sigma$-field $\mathcal{D}^*$. A family of distributions $\{P_z : z\in\mathbb{R}\setminus\{0\}\}$ on $(D^*,\mathcal{D}^*)$ is called a strong Markov family on $\mathbb{R}\setminus\{0\}$ if the canonical process $(Z_t)_{t\ge 0}$ is strong Markov with respect to the canonical right-continuous filtration. If additionally the process satisfies the scaling property (1) for all $z\in\mathbb{R}\setminus\{0\}$ and $c>0$, then the process is called a real self-similar Markov process. A result of Chaumont et al. [7], completing earlier work of Kiu [21], is that for any real self-similar Markov process there is a Markov additive process $(\xi_t,J_t)_{t\ge 0}$ on $\mathbb{R}\times\{\pm 1\}$ such that under $P_z$ the canonical process can be represented as

$Z_t = \exp(\xi_{\varphi^{-1}(t)})\, J_{\varphi^{-1}(t)}$, $t\ge 0$, (4)

where $\varphi(t)=\int_0^t\exp(\alpha\xi_s)\,ds$ and $(\xi_0,J_0)=(\log|z|,[z])$ with

$[z]=1$ if $z>0$ and $[z]=-1$ if $z<0$.

Again we use the convention that $\exp(\xi_{\varphi^{-1}(t)})\, J_{\varphi^{-1}(t)}$ is equal to zero if $t\notin\varphi([0,\infty))$. A stochastic process $(\xi_t,J_t)_{t\ge 0}$ on $\mathbb{R}\times E$, where $E$ is a finite set, is called a Markov additive process (MAP) if $(J_t)_{t\ge 0}$ is a continuous-time Markov chain on $E$ (called the modulating chain) and, for any $i\in E$ and $s,t\ge 0$, conditionally on $\{J_t=i\}$, the pair $(\xi_{t+s}-\xi_t, J_{t+s})_{s\ge 0}$ is independent of the past and has the same distribution as $(\xi_s,J_s)_{s\ge 0}$ under $P_{0,i}$. If the MAP is killed, then $\xi$ shall be set to $-\infty$. An important feature of MAPs that will be used throughout our analysis is their close proximity to Lévy processes. For a textbook treatment of standard results for MAPs see for instance Asmussen [2].

Proposition 2.
A process $(\xi,J)$ is a MAP if and only if there exist sequences of
• Lévy processes $(\xi^{n,i})_{n\in\mathbb{N}}$, iid for $i\in E$ fixed,
• real random variables $(\Delta^n_{i,j})_{n\in\mathbb{N}}$, iid for $i,j\in E$ fixed,
independent of $J$ and of each other such that, if $T_n$ is the $n$th jump-time of $J$, then $\xi$ can be written as

$\xi_t = \begin{cases} \xi_0 + \xi^{0,J_0}_t & : t<T_1,\\ \xi_{T_n-} + \Delta^n_{J_{T_n-},J_{T_n}} + \xi^{n,J_{T_n}}_{t-T_n} & : t\in[T_n,T_{n+1}),\ t<k,\\ -\infty & : t\ge k, \end{cases}$

where the killing time $k$ is the first time one of the appearing Lévy processes is killed:

$k=\inf\{t>0 : \exists n\in\mathbb{N},\ T_n\le t \text{ such that } \xi^{n,J_{T_n}} \text{ is killed at time } t-T_n\}$.

In words, the idea behind a MAP is as follows: There is a time-dependent random environment governed by the state of $J$ and for every state there is a corresponding Lévy process $\xi^i$ with triplet $(a_i,\sigma_i,\Pi_i)$. If $J$ is in state $i$, then $\xi$ evolves according to a copy of $\xi^i$. Once $J$ changes from $i$ to $j$, which happens at rate $q_{i,j}$, $\xi$ has an additional transitional jump $\Delta_{i,j}$ and, until the next jump of $J$, $\xi$ evolves according to a copy of $\xi^j$. The MAP is killed as soon as one of the Lévy processes is killed.
Consequently, the mechanism behind the Lamperti-Kiu representation is simple: $J$ governs the sign of $Z$ and on intervals with constant sign the Lamperti-Kiu representation simplifies to the Lamperti representation.
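The description in Proposition 2 is essentially a simulation recipe. A minimal sketch (ours, not from the article): $J$ is a two-state chain, the Lévy components are Brownian motions with state-dependent drifts, and the transitional jumps are taken deterministic for simplicity; all concrete parameter values are illustrative choices.

```python
import numpy as np

def simulate_map(T, dt, rng, q=(1.0, 1.0), drifts=(0.5, -0.5), jump=0.1):
    """Simulate a MAP (xi, J) on R x {+1, -1} on a grid of step dt.
    Between jumps of J, xi follows a Brownian motion whose drift depends
    on the current state; at each jump of J, xi receives an additional
    transitional jump (here deterministic, standing in for Delta_{i,j})."""
    n = int(T / dt)
    xi = np.empty(n + 1)
    J = np.empty(n + 1, dtype=int)
    xi[0], J[0] = 0.0, 1
    for k in range(n):
        i = 0 if J[k] == 1 else 1
        # Increment of the active Levy component xi^i.
        xi[k + 1] = xi[k] + drifts[i] * dt + np.sqrt(dt) * rng.standard_normal()
        # Jump of the modulating chain, approximately at rate q[i].
        if rng.random() < q[i] * dt:
            J[k + 1] = -J[k]
            xi[k + 1] += jump  # transitional jump
        else:
            J[k + 1] = J[k]
    return xi, J

rng = np.random.default_rng(2)
xi, J = simulate_map(T=5.0, dt=1e-3, rng=rng)
print(len(xi), sorted(set(J.tolist())))
```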
Remark 3. The MAP formalism for the Lamperti-Kiu representation does not appear in [7] but has been introduced in [23]. From now on we assume

(I) $J$ is irreducible on $\{\pm 1\}$, that is, neither 1 nor $-1$ is absorbing.

(I) involves no loss of generality: If $J$ is not irreducible, then (4) implies that the self-similar process changes sign at most once and can thus be treated as a positive (or negative) self-similar process, to which the results for pssMps apply. Note also that (I) ensures that the modulating chain $J$ has a unique stationary distribution, which we denote by $\pi=(\pi_+,\pi_-)$. In keeping with this notation, we shall also write the off-diagonal elements of the transition matrix of $J$ as $q_{+,-}$ and $q_{-,+}$. We also assume

(NL) $\xi$ is non-lattice,

which is a standard assumption to avoid technicalities. The reader is referred to the end of Appendix A.3 for some discussion of this assumption.
Throughout the article some notation for first hitting times is used: For a real-valued process $Z$,

$T_{\{0\}}=\inf\{t : Z_t=0\}$ and $T_\varepsilon=\inf\{t : |Z_t|\ge\varepsilon\}$,

and for the MAP $(\xi,J)$,

$\tau^-_x=\inf\{t : \xi_t\le x\}$ and $\tau^+_x=\inf\{t : \xi_t\ge x\}$ (5)

for $x\in\mathbb{R}$. Analogously to Lévy processes one knows that an unkilled MAP $(\xi,J)$ almost surely either drifts to $+\infty$ (i.e. $\lim_{t\uparrow\infty}\xi_t=+\infty$), drifts to $-\infty$ (i.e. $\lim_{t\uparrow\infty}\xi_t=-\infty$) or oscillates (i.e. $\liminf_{t\uparrow\infty}\xi_t=-\infty$ and $\limsup_{t\uparrow\infty}\xi_t=+\infty$). As for pssMps, a simple 0-1 law for real self-similar Markov processes can be deduced from the Lamperti-Kiu representation:
Proposition 4. If $(\xi,J)$ is the Markov additive process corresponding to a real self-similar Markov process through the Lamperti-Kiu representation, then one has the following dichotomy:

(R) $P_z(T_{\{0\}}<\infty)=1$ for all $z\ne 0$ $\iff$ $(\xi,J)$ drifts to $-\infty$ or is killed,
(T) $P_z(T_{\{0\}}<\infty)=0$ for all $z\ne 0$ $\iff$ $(\xi,J)$ drifts to $+\infty$ or oscillates.

The proof is very close in spirit to the proof of the analogous result for pssMps (see for instance Chapter 13 of [24]). For the rest of this article we assume (T) and ask for the existence and a construction of a measure $P_0$ on the Skorokhod space $(D(\mathbb{R}),\mathcal{D}(\mathbb{R}))$ of càdlàg functions $w:\mathbb{R}_+\to\mathbb{R}$ such that the extension $\{P_z : z\in\mathbb{R}\}$ of $\{P_z : z\in\mathbb{R}\setminus\{0\}\}$ is a self-similar Markov family. In other words, the aim is to extend the Lamperti-Kiu representation to transient self-similar Markov processes that do not have zero as a trap.
Let $\xi^+$ and $\xi^-$ be the Lévy processes and $\Delta_{+,-}$ and $\Delta_{-,+}$ the random variables appearing in the representation of $(\xi,J)$ from Proposition 2 when applied to the two-state MAP of the Lamperti-Kiu representation (4).

(C) $\xi_1$ has a finite absolute moment and either of the following holds:
(i) $(\xi,J)$ drifts to $+\infty$,
(ii) $(\xi,J)$ oscillates and

$\int_1^\infty \frac{x\,\Pi([x,\infty))}{1+\int_0^x\int_y^\infty \Pi((-\infty,-z])\,dz\,dy}\,dx<\infty$,

where $\Pi$ is the measure

$\Pi=\Pi_+ + \Pi_- + q_{+,-}\mathcal{L}(\Delta_{+,-}) + q_{-,+}\mathcal{L}(\Delta_{-,+})$

for the Lévy measure $\Pi_+$ of $\xi^+$ (resp. $\Pi_-$ of $\xi^-$) and the probability distribution $\mathcal{L}(\Delta_{+,-})$ of $\Delta_{+,-}$ (resp. $\mathcal{L}(\Delta_{-,+})$ of $\Delta_{-,+}$).

Condition (C) shall be called the stationary overshoot condition for $(\xi,J)$ as it is the precise condition for the corresponding MAP to have stationary overshoots in the following sense:

Theorem 5. If (NL) and (I) hold, then

$\text{w-}\lim_{a\to+\infty} P_{0,i}(\xi_{\tau^+_a}-a\in dx,\ J_{\tau^+_a}=j) = \text{w-}\lim_{a\to+\infty} P_{-a,i}(\xi_{\tau^+_0}\in dx,\ J_{\tau^+_0}=j)$

exists independently of $i\in\{\pm 1\}$ and is non-degenerate if and only if Condition (C) holds.
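As an illustration of how Condition (C)(ii) is checked in a concrete case (our example, not from the article), suppose both Lévy components are Brownian motions, so $\Pi_\pm\equiv 0$, and the transitional jumps are exponential, $\Delta_{+,-},\Delta_{-,+}\sim\mathrm{Exp}(\lambda)$, with the drifts chosen so that $(\xi,J)$ oscillates. Then $\Pi$ has no negative mass, so $\Pi((-\infty,-z])=0$ and the denominator equals 1, while $\Pi([x,\infty))=(q_{+,-}+q_{-,+})e^{-\lambda x}$ for $x>0$:

```latex
\int_1^\infty \frac{x\,\Pi([x,\infty))}{1+\int_0^x\int_y^\infty \Pi((-\infty,-z])\,dz\,dy}\,dx
  = (q_{+,-}+q_{-,+})\int_1^\infty x\,e^{-\lambda x}\,dx
  = (q_{+,-}+q_{-,+})\,\frac{(1+\lambda)e^{-\lambda}}{\lambda^2} < \infty ,
```

so (C)(ii) holds and Theorem 5 yields a non-degenerate stationary overshoot distribution in this case.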
Theorem 5 is the MAP version of an important result on the existence of stationary overshoots for Lévy processes (see for instance Chapter 7 of [24]), for which $\Pi$ reduces to the Lévy measure only. From Theorem 28 in the Appendix it follows that stationary overshoots are equivalent to requiring finite mean for the ladder height processes of $(\xi,J)$; the analytic condition is provided in Theorem 35. We can now state the main theorem of the present article:
Theorem 6.
Suppose $\{P_z : z\ne 0\}$ is a real self-similar Markov process for which the corresponding MAP $(\xi,J)$ satisfies Conditions (I) and (NL). Then Condition (C) for $(\xi,J)$ is necessary and sufficient for the existence of an extension $\{P_z : z\in\mathbb{R}\}$ on $(D(\mathbb{R}),\mathcal{D}(\mathbb{R}))$ such that the following properties hold:
(1) Under $P_0$ the process leaves the origin instantaneously.
(2) The corresponding transition semigroup $(P_t)$ on $\mathbb{R}$ has the Feller property.
(3) The family $\{P_z : z\in\mathbb{R}\}$ is self-similar.
Furthermore, $P_0$ is the unique distribution satisfying one of the properties (1) or (2).

The reader might have realized that Assumption (I) excludes the special case of positive self-similar Markov processes, which occurs in the trivial case that the Markov chain $J$ is constant and the MAP $(\xi,J)$ reduces to a Lévy process. In fact, the proof for pssMps is a line-by-line translation of the proof given here, replacing in all arguments MAPs by Lévy processes. Since the fluctuation theory for MAPs developed in the Appendix is classical for Lévy processes, the proof for pssMps only requires the main body of the article, which also simplifies drastically in notation.

1.3. Sketch of the Proof.
The necessity of Condition (C) is straightforward. Combining the Lamperti-Kiu representation and Theorem 35, the failure of Condition (C) implies

$\lim_{|z|\to 0} P_z(|Z_{T_\varepsilon}|<c) = \lim_{|z|\to 0} P_{\log|z|,[z]}\big(\exp(\xi_{\tau^+_{\log(\varepsilon)}})<c\big) = \lim_{|z|\to 0} P_{\log|z|,[z]}\big(\xi_{\tau^+_{\log(\varepsilon)}}-\log(\varepsilon)<\log(c/\varepsilon)\big) = 0$ (6)

for any positive $c,\varepsilon$ fixed. Now define

$f(z)=P_z(|Z_{T_\varepsilon}|<c)$ for $z\ne 0$ and $f(0)=0$;

then, using the calculation from (6) and the remark following (28) in the Appendix, $f$ is continuous. Hence, for any $\delta>0$ there is $a>0$ such that $\sup_{|z|\le a} f(z)<\delta$. Suppose $P_0$ is as in Theorem 6; then, by the strong Markov property,

$P_0(|Z_{T_\varepsilon}|<c) = \lim_{\varepsilon'\to 0}\int P_z(|Z_{T_\varepsilon}|<c)\,P_0(Z_{T_{\varepsilon'}}\in dz) = \lim_{\varepsilon'\to 0}\Big(\int_{|z|\le a} f(z)\,P_0(Z_{T_{\varepsilon'}}\in dz) + \int_{|z|>a} f(z)\,P_0(Z_{T_{\varepsilon'}}\in dz)\Big) \le \delta + \lim_{\varepsilon'\to 0} P_0(|Z_{T_{\varepsilon'}}|\in[a,\infty))$.

By assumption, under $P_0$, paths are right-continuous and start from zero, so the limiting probability on the right-hand side vanishes. As $\delta$ is arbitrary, we proved that $P_0(|Z_{T_\varepsilon}|<c)=0$ for all $\varepsilon,c>0$.

Step 1:
Suppose $\{P_z : z\ne 0\}$ is a Markov family that is continuous in $\mathbb{R}\setminus\{0\}$ with respect to weak convergence on the Skorokhod space (which is true for real self-similar Markov processes due to the Lamperti-Kiu representation) and $P_0$ is a candidate for the weak limit $\lim_{|z|\downarrow 0} P_z$. Then a natural guess, for instance from Aldous's criterion, of conditions for the weak convergence is as follows:
(a) All overshoots for given levels should converge weakly to the overshoot of $P_0$ for that level. If so, then nothing has to be controlled past the overshoots due to the strong Markov property and the weak continuity of $z\mapsto P_z$ away from 0.
(b) The behaviour before the overshoots should be nice in the sense that overshoots over small levels will occur quickly.
To summarize, and this is the content of our Proposition 7, to have weak convergence one needs control on overshoots and times of overshoots. For real self-similar Markov processes both quantities can be expressed and analyzed through the Lamperti-Kiu representation and fluctuation theory for Markov additive processes.
Step 2:
To construct the candidate $P_0$ assumed in Step 1 we use potential theory: If $P_0$ is the self-similar process started from zero, then it is a restriction of the Kuznetsov measure $Q_\eta$ corresponding to the excessive measure $\eta(dx)=E_0[\int_0^\infty \mathbf{1}(Z_s\in dx)\,ds]$; see Proposition 3.2 of [13]. Of course, $\eta$ is not known a priori, but this Ansatz leads to a good guess for $P_0$: $P_0$ is necessarily the restriction of a Kuznetsov measure for some purely excessive measure. Since there are many excessive measures, of which only one can be the good one, the Ansatz might be too naive.
What saves us here is the Lamperti-Kiu representation and Kaspi's time-change theorem: Combined they tell us that the excessive measure should be the Revuz measure of an invariant measure of the MAP. Since invariant measures for MAPs are easy to find, this approach works.
Potential theory is most effective when the Markov process is transient. We distinguish two cases in our proof: $(\xi,J)$ drifting to $+\infty$ and $(\xi,J)$ oscillating. In the latter case, the transience is artificially achieved by killing at $T_1$. Such a killing is by no means unnatural: Since only the entrance behavior from 0 needs clarification, it is equivalent to explain the entrance behavior for the entire process or the process killed at a set bounded away from 0.

1.4. Organisation of the Article.
The main argument is relatively short, but we also need to develop a fair amount of fluctuation theory for Markov additive processes. In order to keep a clear focus the proof is split into two parts: In the next section we give the main argument containing Lamperti-Kiu based calculations for overshoots and times of overshoots (Subsection 2.2) and the potential theoretic construction of $P_0$ (Subsection 2.3). The fluctuation theory is collected in an Appendix.

2. Proof
Throughout the proof, fluctuation theory for Markov additive processes is applied as developed in the Appendix. Unless otherwise stated, we assume throughout that (NL), (I) and (C) are in force. An initial browse of the Appendix at this point may prove to be instructive before digesting the remainder of this section. The main items that are needed from the Appendix are the occupation formula (Theorem 27), the Markov Renewal Theorem (Theorem 28) and the equivalent conditions for the existence of stationary overshoots (Theorem 35).

2.1. Convergence Lemma.
The following proposition is the formalization of Step 1 in the sketch of the proof given in Section 1.3.
Proposition 7.
Suppose the following conditions hold for a strong Markov family $\{P_z : z\in\mathbb{R}\setminus\{0\}\}$ and a candidate law $P_0$ on $(D(\mathbb{R}),\mathcal{D}(\mathbb{R}))$:
(1a) $\lim_{\varepsilon\to 0}\limsup_{|z|\to 0} E_z[T_\varepsilon]=0$,
(1b) $\text{w-}\lim_{z\to 0} P_z(Z_{T_\varepsilon}\in\cdot)=:\mu_\varepsilon(\cdot)$ exists for all $\varepsilon>0$,
(1c) $\mathbb{R}\setminus\{0\}\ni z\mapsto P_z$ is continuous in the weak topology on the Skorokhod space,
and
(2a) $P_0$-almost surely, $Z_0=0$ and $Z_t\ne 0$ for all $t>0$,
(2b) $P_0((Z_{T_\varepsilon+t})_{t\ge 0}\in\cdot)=P_{\mu_\varepsilon}(\cdot)$ for every $\varepsilon>0$.
Then the mapping $\mathbb{R}\ni z\mapsto P_z$ is continuous in the weak topology on the Skorokhod space.
Proof.
To show convergence in the Skorokhod topology we work with Prokhorov's metric: for $m\in\mathbb{N}$ and two càdlàg paths $x,y:\mathbb{R}_+\to\mathbb{R}$ define

$d_m(x,y)=\inf\big\{\delta>0 : \exists$ an increasing continuous function $S:[0,m]\to[0,\infty)$ with $S_0=0$, $\|S-\mathrm{id}\|_{[0,m]}\le\delta$ and $\|x\circ S-y\|_{[0,m]}\le\delta\big\}$

and set

$d(x,y)=\sum_{m=1}^\infty 2^{-m}\,(d_m(x,y)+d_m(y,x))\wedge 1$.

Since $d$ generates the Skorokhod topology it suffices to verify that, for arbitrary bounded Lipschitz functions $f:D(\mathbb{R})\to\mathbb{R}$ with Lipschitz constant $\kappa$, say, one has $E_{z_n}[f(Z)]\to E_0[f(Z)]$ for every sequence $(z_n)\to 0$. By property (1b), $\text{w-}\lim_{z_n\to 0} P_{z_n}(Z_{T_\varepsilon}\in\cdot)=\mu_\varepsilon(\cdot)$, so that by the continuity property (1c)

$\text{w-}\lim_{z_n\to 0}\int P_x(\cdot)\,P_{z_n}(Z_{T_\varepsilon}\in dx)=\int P_x(\cdot)\,\mu_\varepsilon(dx)=P_{\mu_\varepsilon}(\cdot)$.

In combination with the Markov property and property (2b) we get

$\text{w-}\lim_{z_n\to 0} P_{z_n}\big((Z_{T_\varepsilon+\cdot})\in\cdot\big)=\text{w-}\lim_{z_n\to 0}\int P_x(\cdot)\,P_{z_n}(Z_{T_\varepsilon}\in dx)=P_{\mu_\varepsilon}(\cdot)=P_0\big((Z_{T_\varepsilon+\cdot})\in\cdot\big)$.

Using the Skorokhod coupling we can define càdlàg processes $Z^0,Z^1,Z^2,\ldots$ on an appropriate probability space $(\Omega,\mathcal{F},\mathbb{P})$ on which
• $\mathcal{L}(Z^n)=\mathcal{L}_{z_n}(Z)$ for $n\in\mathbb{N}$ and $\mathcal{L}(Z^0)=\mathcal{L}_0(Z)$,
• $(Z^n_{T^n_\varepsilon+\cdot})\to(Z^0_{T^0_\varepsilon+\cdot})$, almost surely, in the Skorokhod space.
For $n\in\{0,1,\ldots\}$ we denote by $T^n_\varepsilon$ the first entrance time of $Z^n$ into $(-\varepsilon,\varepsilon)^c$. We note that, for every $m\in\mathbb{N}$ and $n,n'\in\{0,1,\ldots\}$,

$d_m(Z^n,Z^{n'})\le 2\varepsilon+|T^n_\varepsilon-T^{n'}_\varepsilon|+d_m\big((Z^n_{T^n_\varepsilon+\cdot}),(Z^{n'}_{T^{n'}_\varepsilon+\cdot})\big)$ (7)

which yields

$d(Z^n,Z^0)\le 2\varepsilon+2\,|T^n_\varepsilon-T^0_\varepsilon|\wedge 1+d\big((Z^n_{T^n_\varepsilon+\cdot}),(Z^0_{T^0_\varepsilon+\cdot})\big)$.

Consequently, using Lipschitz continuity of $f$, we get

$\big|E[f(Z^n)]-E[f(Z^0)]\big|\le\kappa E[d(Z^n,Z^0)]\le 2\kappa\varepsilon+2\kappa E\big[|T^n_\varepsilon-T^0_\varepsilon|\wedge 1\big]+\kappa E\big[d\big((Z^n_{T^n_\varepsilon+\cdot}),(Z^0_{T^0_\varepsilon+\cdot})\big)\big]$.

By dominated convergence this gives

$\limsup_{n\to\infty}\big|E[f(Z^n)]-E[f(Z^0)]\big|\le 2\kappa\varepsilon+2\kappa\limsup_{n\to\infty} E\big[|T^n_\varepsilon-T^0_\varepsilon|\wedge 1\big]$

and letting $\varepsilon\to 0$ the claim follows, since Condition (1a) gives $\lim_{\varepsilon\to 0}\limsup_{n\to\infty} E[T^n_\varepsilon\wedge 1]=0$ and, using (2a), $\lim_{\varepsilon\to 0} E[T^0_\varepsilon\wedge 1]=0$. □
2.2. Verification of Conditions (1a)-(1c).
To verify the first three conditions of Proposition 7 we use the Lamperti-Kiu representation and fluctuation theory for Markov additive processes.
Lemma 8.
Condition (1a) from Proposition 7 holds.
Proof.
Using the Lamperti-Kiu representation (4), one has

$T_\varepsilon=\inf\{t\ge 0 : |Z_t|\ge\varepsilon\}\stackrel{(d)}{=}\inf\{t : \xi_{\varphi^{-1}(t)}\ge\log(\varepsilon)\}=\varphi(\tau^+_{\log(\varepsilon)})$

with $\tau^+_{\log(\varepsilon)}=\inf\{t : \xi_t\ge\log(\varepsilon)\}$. Taking expectations and applying the definition of $\varphi$ yields

$E_z[T_\varepsilon]=E_{\log|z|,[z]}\big[\varphi(\tau^+_{\log(\varepsilon)})\big]=E_{\log|z|,[z]}\Big[\int_0^{\tau^+_{\log(\varepsilon)}} e^{\alpha\xi_s}\,ds\Big]$.

In order to calculate the right-hand side we use the preparations from the Appendix. Let $\hat P$ be the law of the dual MAP introduced in Section A.2. It will be useful below to note that, for example, for bounded measurable functions $f$,

$E_{z,i}[f(-\xi_t),\ J_t=j]=\frac{\pi_j}{\pi_i}\,\hat E_{-z,j}[f(\xi_t),\ J_t=i]$, $z\in\mathbb{R}$, $i,j\in\{\pm 1\}$, $t\ge 0$.

(Compare for instance (18) in the Appendix.) Similarly to Lévy processes, MAPs are spatially homogeneous in the first variable. Using duality in the second and homogeneity in the third equality gives

$E_{\log|z|,[z]}\Big[\int_0^{\tau^+_{\log(\varepsilon)}} e^{\alpha\xi_s}\,ds\Big] = \sum_{j=\pm 1} E_{\log|z|,[z]}\Big[\int_0^{\tau^+_{\log(\varepsilon)}} e^{\alpha\xi_s}\,ds;\ J_{\tau^+_{\log(\varepsilon)}}=j\Big]$
$= \sum_{j=\pm 1}\frac{\pi_j}{\pi_{[z]}}\,\hat E_{-\log|z|,j}\Big[\int_0^{\tau^-_{-\log(\varepsilon)}} e^{-\alpha\xi_s}\,ds;\ J_{\tau^-_{-\log(\varepsilon)}}=[z]\Big]$
$= \sum_{j=\pm 1}\frac{\pi_j}{\pi_{[z]}}\,\hat E_{\log(\varepsilon/|z|),j}\Big[\int_0^{\tau^-_0} e^{-\alpha(\xi_s-\log(\varepsilon))}\,ds;\ J_{\tau^-_0}=[z]\Big]$
$= \varepsilon^\alpha\sum_{j=\pm 1}\frac{\pi_j}{\pi_{[z]}}\,\hat E_{\log(\varepsilon/|z|),j}\Big[\int_0^{\tau^-_0} e^{-\alpha\xi_s}\,ds;\ J_{\tau^-_0}=[z]\Big]$
$\le \varepsilon^\alpha\sum_{j,k=\pm 1}\frac{\pi_j}{\pi_{[z]}}\,\hat E_{\log(\varepsilon/|z|),j}\Big[\int_0^{\tau^-_0} e^{-\alpha\xi_s}\mathbf{1}(J_s=k)\,ds\Big]$.

Appealing to Remark 25 and Theorem 27 in Appendix A.5, we can put the pieces above together and write

$E_z[T_\varepsilon]\le\varepsilon^\alpha\sum_{j,k=\pm 1}\frac{\pi_j}{\pi_{[z]}}\sum_{\ell=\pm 1}\int_{[0,\infty)} e^{-\alpha y}\,\hat U^+_{j,\ell}(dy)\int_{[0,\log(\varepsilon/|z|)]} e^{-\alpha(\log(\varepsilon/|z|)-u)}\,U^+_{k,\ell}(du)$,

where the measure $U^+_{k,\ell}$ (resp. $\hat U^+_{j,\ell}$) is the potential measure of the ascending (resp. descending) Markov additive ladder height process of $\xi$. The reader is referred to Section A.5 of the Appendix for the precise definition. What is important to note for their use in this proof are the following two facts.
First, the integrals $\int_{[0,\infty)} e^{-\alpha y}\,\hat U^+_{j,\ell}(dy)$ are all finite; see e.g. formula (26) in Section A.5 of the Appendix. Second, the Key Renewal-type theorem given in Theorem 28 (ii) of Appendix A.6 ensures that

$\lim_{|z|\to 0}\int_{[0,\log(\varepsilon/|z|)]} e^{-\alpha(\log(\varepsilon/|z|)-u)}\,U^+_{k,\ell}(du)=\frac{\pi_\ell}{\alpha E_{0,\pi}[H^+_1]}$ for each $k,\ell\in\{\pm 1\}$,

where the exact nature of the expectation $E_{0,\pi}[H^+_1]\in(0,\infty]$ is again explained in the Appendix. All that we need to know at this point of the argument is that it is finite. This follows from Theorem 35 in the Appendix thanks to Condition (C). In conclusion, we have

$\lim_{\varepsilon\to 0}\lim_{|z|\to 0} E_z[T_\varepsilon]\le\lim_{\varepsilon\to 0}\frac{\varepsilon^\alpha}{\alpha E_{0,\pi}[H^+_1]}\sum_{j,k,\ell=\pm 1}\frac{\pi_j\pi_\ell}{\pi_{[z]}}\int_{[0,\infty)} e^{-\alpha y}\,\hat U^+_{j,\ell}(dy)=0$,

and the proof is complete. □

In the next lemma we deduce the overshoot distributions for real self-similar Markov processes from the overshoot distributions of the corresponding Markov additive processes. In particular, we prove that Condition (1b) is satisfied in the setting of Theorem 6.
Lemma 9. (i)
There are proper weak limits

$\text{w-}\lim_{|z|\to 0} P_z(Z_{T_\varepsilon}\in dy)=\mu_\varepsilon(dy)$, $\varepsilon>0$,

if and only if Condition (C) holds.
(ii) If Condition (C) holds, then $P_{\mu_\varepsilon}(Z_{T_{\varepsilon'}}\in dy)=\mu_{\varepsilon'}(dy)$ for $0<\varepsilon<\varepsilon'$.

Proof. (i) The Lamperti-Kiu representation (4) and spatial homogeneity of Markov additive processes imply that, for $0<a<b$,

$P_z\big(Z_{T_\varepsilon}\in[a,b]\big)=P_{\log|z|,[z]}\big(\exp(\xi_{\tau^+_{\log(\varepsilon)}})\in[a,b];\ J_{\tau^+_{\log(\varepsilon)}}=1\big)=P_{\log|z|,[z]}\big(\xi_{\tau^+_{\log(\varepsilon)}}-\log(\varepsilon)\in[\log(a/\varepsilon),\log(b/\varepsilon)];\ J_{\tau^+_{\log(\varepsilon)}}=1\big)$

and, analogously,

$P_z\big(Z_{T_\varepsilon}\in[-b,-a]\big)=P_{\log|z|,[z]}\big(\xi_{\tau^+_{\log(\varepsilon)}}-\log(\varepsilon)\in[\log(a/\varepsilon),\log(b/\varepsilon)];\ J_{\tau^+_{\log(\varepsilon)}}=-1\big)$.

Hence, by Theorem 5, the distributions $\mathcal{L}_z(Z_{T_\varepsilon})$ converge for $|z|\to 0$ if and only if Condition (C) holds.
(ii) We use the strong Markov property and (i) for an interval $A$:

$\mu_{\varepsilon'}(A)=\lim_{|z|\to 0} P_z\big(Z_{T_{\varepsilon'}}\in A\big)=\lim_{|z|\to 0}\int P_x(Z_{T_{\varepsilon'}}\in A)\,P_z(Z_{T_\varepsilon}\in dx)=\lim_{|z|\to 0}\int f_A(x)\,P_z(Z_{T_\varepsilon}\in dx)$

with $f_A(x):=P_x(Z_{T_{\varepsilon'}}\in A)$. Using that $f_A$ is bounded and continuous (see (28) and the remark beneath it) and the weak convergence from (i) yields

$\mu_{\varepsilon'}(A)=\int f_A(x)\,\mu_\varepsilon(dx)=P_{\mu_\varepsilon}(Z_{T_{\varepsilon'}}\in A)$,

as required. □

A direct consequence of the Lamperti-Kiu representation (4) is
Lemma 10.
Condition (1c) from Proposition 7 holds.
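The identity $T_\varepsilon=\varphi(\tau^+_{\log(\varepsilon)})$ underlying the computations of this subsection can be checked on a simulated path. The following sketch is ours, not from the article; it uses a Brownian motion with positive drift for $\xi$ (no modulation, so effectively the pssMp case) and compares the two time scales.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n, alpha = 1e-3, 50_000, 1.0
z, eps = 0.1, 1.0

# Discretized Levy path started at log(z), drift +1 (regime (T)).
xi = np.log(z) + np.concatenate(
    [[0.0], np.cumsum(1.0 * dt + np.sqrt(dt) * rng.standard_normal(n))])

# tau^+_{log eps}: first passage of xi over the level log(eps).
k = int(np.argmax(xi >= np.log(eps)))
tau_plus = k * dt

# T_eps = phi(tau^+_{log eps}) with phi(t) = int_0^t exp(alpha xi_s) ds.
T_eps = float(np.sum(np.exp(alpha * xi[:k]) * dt))

print(tau_plus > 0, T_eps > 0)
```

Since $\xi_s<\log(\varepsilon)$ strictly before the passage time, every integrand value $e^{\alpha\xi_s}$ is below $\varepsilon^\alpha=1$ here, so the self-similar clock $T_\varepsilon$ runs slower than the MAP clock $\tau^+_{\log(\varepsilon)}$ on this event.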
2.3. Verification of Conditions (2a)-(2b) and Construction of $P_0$. In this section we construct the measure $P_0$ and verify conditions (2a)-(2b) of Proposition 7. Before doing so, a brief overview of some notation and results from probabilistic potential theory is given. For a more detailed account the reader is referred to Dellacherie et al. [9] (available in French only).

Notation.
We work in the setting of Fitzsimmons and Maisonneuve [15] that was also used by Kaspi [19]. Let $E$ be a locally compact Polish space equipped with its Borel $\sigma$-algebra $\mathcal{E}$. We extend $E$ by an isolated cemetery state $\partial$ and also equip the extended space $E\cup\{\partial\}$ with its respective Borel $\sigma$-algebra. Let $W$ be the space of functions $w:\mathbb{R}\to E\cup\{\partial\}$ that are $E$-valued and càdlàg on a nonempty interval $(\alpha(w),\beta(w))$ and are equal to $\partial$ on the complement of $(\alpha(w),\beta(w))$. One calls $\alpha(w)=\inf\{t : w_t\in E\}$ the time of birth, $\beta(w)=\sup\{t : w_t\in E\}$ the time of death and $\zeta(w):=\beta(w)-\alpha(w)$ the life-time. We denote by $(Y_t(w))_{t\in\mathbb{R}}=(w_t)_{t\in\mathbb{R}}$ the canonical process on $W$ and by $\mathcal{G}=\sigma(Y_s : s\in\mathbb{R})$ the canonical $\sigma$-algebra on $W$. We assume that $P=(P_t)_{t\ge 0}$ is the transition semigroup of a Feller process on $E$. A family $(\eta_t)_{t\in\mathbb{R}}$ of measures on $(E,\mathcal{E})$ is called an entrance rule for $P$ if $\eta_t P_{s-t}\le\eta_s$ for $s>t$, and an entrance law (at time zero) if $\eta_t=0$ for $t\le 0$ and $\eta_t P_{s-t}=\eta_s$ for $s\ge t>0$. In the stationary case where $\eta_t\equiv m$, $m$ is called an excessive measure. Write $Q_\eta$ for the Kuznetsov measure corresponding to $(\eta,P)$ and $Q_m$ for the stationary case. That is to say, $Q_\eta$ is the unique measure on $(W,\mathcal{G})$ with one-dimensional marginals $\eta_t$ and transition semigroup $(P_t)$. More precisely,

$Q_\eta\big(\alpha(Y)<t_1,\ Y_{t_1}\in dx_1,\cdots,Y_{t_n}\in dx_n,\ t_n<\beta(Y)\big)=\eta_{t_1}(dx_1)\,P_{t_2-t_1}(x_1,dx_2)\cdots P_{t_n-t_{n-1}}(x_{n-1},dx_n)$

for $-\infty<t_1<\cdots<t_n<+\infty$. Under a Kuznetsov measure the canonical process is a strong Markov process with random birth and death, i.e. if $\tau$ is a stopping time with respect to the canonical right-continuous filtration $(\mathcal{G}_t)$ one has

$Q_\eta\big((Y_{\tau+t})_{t\ge 0}\in\cdot\,\big|\,\mathcal{G}_\tau\big)=P_{Y_\tau}(\cdot)$ on $\{\alpha<\tau<\beta\}$.

The existence and uniqueness of Kuznetsov measures $Q_\eta$ follows from Kuznetsov's work [22]. For the stationary case $\eta_t\equiv m$, a particularly simple construction of Kuznetsov measures was given by Mitro [27] for a Markov process in duality, with respect to $m$, to a second Markov process $(\hat X_t)_{t\ge 0}$ with transition semigroup $(\hat P_t)_{t\ge 0}$, i.e.

$P_t(x,dy)\,m(dx)=\hat P_t(y,dx)\,m(dy)$. (8)

In the dual setting $Q_m$ is the unique measure on $(W,\mathcal{G})$ that is translation invariant and has finite-dimensional marginals

$Q_m\big(\alpha(Y)<s_l,\ Y_{s_l}\in dy_l,\ldots,Y_{s_1}\in dy_1,\ Y_0\in dx_0,\ Y_{t_1}\in dx_1,\ldots,Y_{t_k}\in dx_k,\ \beta(Y)>t_k\big)=\int_E m(dx_0)\,\hat P_{x_0}\big[\hat X_{s_1}\in dy_1,\cdots,\hat X_{s_l}\in dy_l\big]\,P_{x_0}\big[X_{t_1}\in dx_1,\cdots,X_{t_k}\in dx_k\big]$

at the times $s_l<\cdots<s_1<0\le t_1<\cdots<t_k$. In words, to build $Q_m|_{\{\alpha<0<\beta\}}$ one samples the invariant measure $m$ at time 0, and from the outcome starts an independent copy of $X$ to the right and an independent copy of the dual $\hat X$ to the left. An important consequence is that time-reversing the Kuznetsov measure for $(\eta,P)$ yields the Kuznetsov measure for $(\eta,\hat P)$. We should also recall the facts that
• $Q_m(\alpha=-\infty)=0$ if $m$ is purely excessive (i.e. $mP_t\to 0$ as $t\to\infty$),
• $Q_m(\alpha>-\infty)=0$ if $m$ is invariant (i.e. $mP_t=m$ for all $t>0$).
We shall construct a suitable Kuznetsov measure $Q_\eta$ (recall that automatically $\alpha=0$ for almost all trajectories) and via $Q_\eta$ extend the Markov family $\{P_z : z\in\mathbb{R}\setminus\{0\}\}$ in the following way:

Lemma 11.
Let $E\cup\{\theta\}$ be a Polish space and let $\{P_x : x\in E\}$ denote a (killed) Markov family on the space $E$. Suppose that $(\eta_t)$ is an entrance law for the Markov family on $E$ for which the corresponding Kuznetsov measure $Q_\eta$ fulfills
(i) $Q_\eta$ is a finite non-trivial measure,
(ii) $\lim_{t\to 0} Y_t=\theta$, $Q_\eta$-a.e., in the space $E\cup\{\theta\}$,
and define the restriction mapping

$\pi:W\to D(E\cup\{\theta,\partial\})$, $\pi(w)_t=\theta$ for $t=0$ and $\pi(w)_t=w_t$ for $t>0$.

For the normalized measure

$P_\theta(A):=\frac{Q_\eta(\pi^{-1}(A))}{Q_\eta(W)}$, $A\in\mathcal{D}(E\cup\{\theta,\partial\})$,

the extended family $\{P_x : x\in E\cup\{\theta\}\}$ is a (killed) Markov family on $E\cup\{\theta\}$ so that under $P_\theta$ the canonical process leaves the initial value $\theta$ instantaneously and satisfies the strong Markov property for strictly positive stopping times.

The lemma is an immediate consequence of the strong Markov property of Kuznetsov measures. In order to construct a good entrance law at zero for the real self-similar Markov process we use the theory of random time-changes for Kuznetsov measures as developed by Kaspi.
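Mitro's two-sided recipe (sample the stationary measure at time 0, run the process forward and its dual backward) is easiest to see in a toy discrete setting. The following sketch is ours, not from the article: a two-state discrete-time chain with kernel P, its stationary distribution pi, and the dual (time-reversed) kernel obtained from the duality relation pi(x) P(x, y) = pi(y) P_hat(y, x), the discrete analogue of equation (8).

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution pi solving pi P = pi (left eigenvector for 1).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Dual kernel from pi(x) P(x, y) = pi(y) P_hat(y, x).
P_hat = (pi[:, None] * P).T / pi[:, None]

rng = np.random.default_rng(3)

def two_sided_path(n):
    """Path (Y_{-n}, ..., Y_0, ..., Y_n): Y_0 ~ pi, then forward steps
    with P and backward steps with the dual kernel P_hat."""
    y0 = rng.choice(2, p=pi)
    fwd = [y0]
    for _ in range(n):
        fwd.append(rng.choice(2, p=P[fwd[-1]]))
    bwd = [y0]
    for _ in range(n):
        bwd.append(rng.choice(2, p=P_hat[bwd[-1]]))
    return bwd[::-1][:-1] + fwd

path = two_sided_path(5)
print(len(path))  # 11 states, for times -5, ..., 5
```

Because pi is stationary, the rows of P_hat are again probability vectors, and the two-sided path has the stationary chain's law at every time; this is the finite analogue of sampling m at time 0 under Q_m.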
Random Time-Change.
Let us recall Theorems (2.3) and (2.10) of Kaspi [19] in their simplest form. Given a (killed) Markov process on $E$ with transition semigroup $(P_t)$ and a locally bounded measurable function $h : E \cup \{\partial\} \to (0, \infty)$, define a time-changed Markov transition semigroup via
\[
\tilde P_t f(x) := E_x\big[f(Z_{S_t})\big], \qquad \text{where } S_t = \inf\Big\{ s > 0 : \int_0^s h(Z_u)\, du > t \Big\}.
\]
Let $Q_m$ be the Kuznetsov measure for $(m, P)$ and suppose $B_t := \int_{(\alpha, t]} h(Y_s)\, ds < \infty$ for almost all realizations (by time homogeneity of $Q_m$ it suffices to check the property only for time $t = 0$). Then there is an entrance law $(\eta_t)$ at time zero for $(\tilde P_t)$ such that the corresponding Kuznetsov measure $\tilde Q_\eta$ satisfies
\[
\tilde Q_\eta(A, \beta > t) = Q_m\big(\pi^{-1}(A),\ 0 < B^{-1}_t \le \beta\big), \qquad A \in \mathcal G,\ t > 0,
\]
where
\[
\pi(Y)_t = \begin{cases} Y_{B^{-1}_t} & : t > 0, \\ \partial & : t \le 0. \end{cases}
\]
In what follows we fix the MAP $(\xi, J)$ on $\mathbb R \times \{\pm 1\}$ obtained from the given real self-similar Markov process through the Lamperti-Kiu representation and consider the time-change
\[
\tilde P_t f(x, i) := E_{x,i}\big[f(\xi_{S_t}, J_{S_t})\big], \qquad \text{where } S_t = \inf\Big\{ s > 0 : \int_0^s e^{\alpha \xi_u}\, du > t \Big\}. \tag{9}
\]
We use the knowledge of invariant measures for MAPs to construct an entrance law at zero for $(\tilde P_t)$ and thus, through composition with $h(x, i) = \exp(x)\, i$, for the real self-similar Markov process.

Lemma 12. If $(\xi, J)$ drifts to $+\infty$, then there exists a distribution $P$ on $(D(\mathbb R), \mathcal D(\mathbb R))$ for which Conditions (2a) and (2b) of Proposition 7 hold.

Proof.
We construct an entrance law $(\eta_t)$ at time zero for $(\tilde P_t)$ such that the associated Kuznetsov measure $\tilde Q_\eta$ satisfies, writing $Y = (Y^1, Y^2)$ for the two coordinates of the canonical process,
(i) $\lim_{t \downarrow 0} Y^1_t = -\infty$ and $\beta(Y) = \infty$, $\tilde Q_\eta$-a.e.;
(ii) $\tilde Q_\eta$ is a finite measure;
(iii) if $\tau^+_z = \inf\{t : Y^1_t \ge z\}$ for $z \in \mathbb R$, then
\[
\tilde Q_\eta\big((Y^1_{\tau^+_z} - z,\ Y^2_{\tau^+_z}) \in (dx, \{i\})\big) = \tilde Q_\eta(W)\, \nu(dx, \{i\}),
\]
where $\nu$ is the stationary overshoot distribution appearing in Theorem 28 for the MAP $(\xi, J)$.
If such a measure $\tilde Q_\eta$ can be constructed, then by the Lamperti-Kiu representation (4) and through Lemma 11 we obtain $P$ from $\tilde Q_\eta$ by applying $h(x, i) = \exp(x)\, i$ pathwise and normalizing to a probability measure. The claimed properties (2a) and (2b) follow from the construction.
Lemma 22 in the Appendix shows that $(\xi, J)$ and $(\hat\xi, \hat J)$ are in duality on $E = \mathbb R \times \{\pm 1\}$ with respect to the invariant measure $m(dx, \{i\}) = dx\, \pi(i)$. By assumption $(\xi, J)$ drifts to $+\infty$ and the dual $(\hat\xi, \hat J)$ drifts to $-\infty$. We use Mitro's construction for $Q_m$: sample $(x, i)$ from $m$ and start independent copies of $P_{x,i}$ in the positive time-direction and $\hat P_{x,i}$ in the negative time-direction. We conclude that, $Q_m$-a.e., $\alpha(Y) = -\infty$ and $\beta(Y) = +\infty$ as well as
\[
\lim_{t \to -\infty} Y^1_t = -\infty \quad \text{and} \quad \lim_{t \to +\infty} Y^1_t = +\infty. \tag{10}
\]
We now apply Kaspi's time-change as discussed above the lemma to $Q_m$ with $B_t = \int_{-\infty}^t \exp(\alpha Y^1_r)\, dr$. In order to use Kaspi's result we need to check that $B_0 < \infty$ for $Q_m$-almost all realizations. From the two-sided construction of $Q_m$ it is enough to show that $\int_0^\infty \exp(\alpha \xi_r)\, dr < \infty$ for $\hat P_{x,i}$-almost all $(\xi, J)$. This holds due to the law of large numbers for the dual Markov additive process, which drifts to $-\infty$.
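The law-of-large-numbers step can be spelled out in one line (a sketch; the drift rate $c > 0$ below is our notation, not taken from the text): if $\xi_r / r \to -c < 0$ almost surely, then for almost every path there is an $r_0$ with $\xi_r \le -cr/2$ for all $r \ge r_0$, whence

```latex
\int_0^\infty e^{\alpha \xi_r}\, dr
  \;\le\; \int_0^{r_0} e^{\alpha \xi_r}\, dr
      \;+\; \int_{r_0}^\infty e^{-\alpha c r / 2}\, dr
  \;<\; \infty,
```

using $\alpha > 0$ and the local boundedness of the paths on $[0, r_0]$.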
Hence, there is an entrance law $(\eta_t)$ at time zero for $(\tilde P_t)$ and the corresponding Kuznetsov measure $\tilde Q_\eta$ satisfies
\[
\tilde Q_\eta(A, \beta > t) = Q_m\big(\pi^{-1}(A),\ 0 < B^{-1}_t \le \beta\big), \qquad A \in \mathcal F, \tag{11}
\]
with $\pi(Y)_t = Y_{B^{-1}_t}$ for $t > 0$ and $\pi(Y)_t = \partial$ for $t \le 0$. Formula (11) combined with (10) entails property (i).
Next we show that the measure $\tilde Q_\eta$ is finite. We combine convergence of the overshoots of the MAP with Theorem (2.3) of Kaspi. By Theorem 28 in the Appendix, there exists a limiting overshoot distribution for the MAP, say $\nu$. We choose $c > 0$ with $\nu((0, c) \times \{\pm 1\}) > 0$ and set $A = (0, c) \times \{\pm 1\}$. Note that, for fixed $i$, the map
\[
\mathbb R \ni x \mapsto E_{x,i}\Big[\int_0^\infty \mathbf 1_A(\xi_{S_t}, J_{S_t})\, dt\Big]
\]
is lower semi-continuous, so that by the Markov property and weak convergence of the overshoot distribution
\[
\liminf_{x \downarrow -\infty} E_{x,i}\Big[\int_0^\infty \mathbf 1_A(\xi_{S(t)}, J_{S(t)})\, dt\Big] \ge E_\nu\Big[\int_0^\infty \mathbf 1_A(\xi_{S(t)}, J_{S(t)})\, dt\Big] =: \kappa > 0.
\]
Hence, by Fatou's inequality and the strong Markov property for $\tilde Q_\eta$,
\[
\tilde Q_\eta\Big(\int_0^\infty \mathbf 1_A(Y_s)\, ds\Big) \ge \liminf_{\varepsilon \downarrow 0} \tilde Q_\eta\Big(E_{Y_\varepsilon}\Big[\int_0^\infty \mathbf 1_A(\xi_{S(t)}, J_{S(t)})\, dt\Big]\Big) \ge \kappa\, \tilde Q_\eta(W),
\]
where we have used that $\lim_{\varepsilon \downarrow 0} Y^1_\varepsilon = -\infty$, $\tilde Q_\eta$-a.e. (recall $Y = (Y^1, Y^2)$). Conversely, Theorem (2.3) of Kaspi relates the occupation time of the set $A$ under the measures $\tilde Q_\eta$ and $Q_m$ as follows:
\[
\tilde Q_\eta\Big(\int_0^\infty \mathbf 1_A(Y_s)\, ds\Big) = Q_m\Big(\int_{[0,1)} \mathbf 1_A(Y_t)\, e^{\alpha Y^1_t}\, dt\Big) = \int_A e^{\alpha y}\, m(dy) < \infty.
\]
Here we used in the latter step that we can interchange the order of integration by Fubini's theorem since $Q_m$ is $\sigma$-finite by construction. Combining the two displayed formulas gives that $\tilde Q_\eta(W)$ is finite and nonzero. This proves property (ii).
To prove property (iii) we note that the overshoot distribution is not affected by a time change and hence agrees for $(P_t)$ and $(\tilde P_t)$. Consequently, using the Markov property under the measure $\tilde Q_\eta$, we get that
\[
\tilde Q_\eta\big((Y^1_{\tau^+_z} - z, Y^2_{\tau^+_z}) \in \cdot\big)
= \operatorname{w-}\lim_{k \downarrow -\infty} \tilde Q_\eta\Big[\tilde P_{Y_{\tau^+_k}}\big((\xi_{\tau^+_z} - z, J_{\tau^+_z}) \in \cdot\big)\Big]
= \operatorname{w-}\lim_{k \downarrow -\infty} \tilde Q_\eta\Big[P_{Y_{\tau^+_k}}\big((\xi_{\tau^+_z} - z, J_{\tau^+_z}) \in \cdot\big)\Big]
= \tilde Q_\eta(W)\, \nu(\cdot).
\]
This shows (iii) and the proof is complete. □
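On the level of paths, the passage from $\tilde Q_\eta$ to $P$ can be summarised as follows (a sketch consistent with the time-change (9) and the Lamperti-Kiu representation (4)): normalizing $\tilde Q_\eta$ and applying $h(x, i) = \exp(x)\, i$ pathwise produces the canonical process

```latex
Z_t \;=\; \exp\!\big( \xi_{S_t} \big)\, J_{S_t},
  \qquad
  S_t \;=\; \inf\Big\{ s > 0 \,:\, \int_0^s e^{\alpha \xi_u}\, du > t \Big\},
```

with $(\xi, J)$ now governed by the two-sided Kuznetsov construction (the time origin fixed by the entrance law), so that $\lim_{t \downarrow 0} \xi_{S_t} = -\infty$ forces $Z_{0+} = 0$ while $Z_t \neq 0$ for all $t > 0$.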
The same proof cannot be carried out if $(\xi, J)$ oscillates. Choosing the same invariant measure $\eta$ leads to a Kuznetsov measure $Q_\eta$ under which trajectories oscillate in both directions of time. Hence, there is no way this construction yields a law $P$ satisfying (2a) of Proposition 7. Essentially, the problem is that $Z$ is not transient. To circumvent this issue, $Z$ is killed at $T$ and then we proceed similarly as before. This is captured in the lemma below.

Remark 13.
Before turning to the aforesaid lemma, let us note that the cases where $(\xi, J)$ drifts to $+\infty$ or oscillates can, of course, both be treated with killing as in the proof of Lemma 14. In order to work out the main ideas clearly we prefer to give two proofs. In particular, the reader will find it easier to compare our proof to Fitzsimmons' [14] construction of excursion measures in the recurrent case.

Lemma 14. If $(\xi, J)$ oscillates, then there exists a distribution $P$ on $(D(\mathbb R), \mathcal D(\mathbb R))$ for which Conditions (2a) and (2b) of Proposition 7 hold.

Proof. We mimic the proof of Lemma 12 with additional killing. Recall from Remark 32 in the Appendix that there exists a harmonic function $(x, i) \mapsto U^+_i(x)$ related to the MAP killed when its first component reaches the positive half-line, henceforth denoted by $(\xi^\dagger, J^\dagger)$. The corresponding $h$-transformed process is indicated with the superscript $\downarrow$. We shall also write their respective transition kernels as $P^\dagger_t((x,i), (dy, \{j\}))$ and $P^\downarrow_t((x,i), (dy, \{j\}))$, with the addition of a hat to mean the dual MAP as defined in Section A.2. Next, we show duality in the sense of (8) for $(\hat\xi^\downarrow, \hat J^\downarrow)$ and $(\xi^\dagger, J^\dagger)$ with respect to the duality measure $m(dx, \{i\}) = \pi_i\, \hat U^+_i(x)\, dx$ on $(-\infty, 0) \times \{\pm 1\}$.
The duality comes from the short calculation
\[
\hat P^\downarrow_t\big((x,i), (dy, \{j\})\big)\, m(dx, \{i\})
= \frac{\hat U^+_j(y)}{\hat U^+_i(x)}\, \hat P^\dagger_t\big((x,i), (dy, \{j\})\big)\, \pi_i\, \hat U^+_i(x)\, dx
= \pi_j\, \hat U^+_j(y)\, P^\dagger_t\big((y,j), (dx, \{i\})\big)\, dy
= P^\dagger_t\big((y,j), (dx, \{i\})\big)\, m(dy, \{j\}),
\]
where we used the generic $h$-transform formula for semigroups
\[
P^h_t(x, dy) = \frac{h(y)}{h(x)}\, P_t(x, dy) \tag{12}
\]
for transition probabilities of $h$-transformed processes and the ordinary MAP duality formula
\[
\hat P^\dagger_t\big((x,i), (dy, \{j\})\big)\, \pi_i\, dx = P^\dagger_t\big((y,j), (dx, \{i\})\big)\, \pi_j\, dy
\]
from Lemma 22 in the Appendix.
Mitro's construction of the Kuznetsov measure $Q^\dagger_m$ for the killed MAP with respect to $m$ works as follows: sample $(x, i) \in (-\infty, 0) \times \{\pm 1\}$ according to $m$ at time zero and start independently a copy of the killed process $P_{x,i,\dagger}$ in the positive time-direction and a copy of the conditioned process $\hat P_{x,i,\downarrow}$ in the negative time-direction. Since the MAP was assumed to oscillate, the killing time of the former is finite almost surely. Furthermore, the conditioned process drifts to $-\infty$ almost surely by Proposition 33 in the Appendix. Hence, almost all trajectories $Y = (Y^1, Y^2)$ under $Q^\dagger_m$ are born at time $\alpha(Y) = -\infty$, die at a finite time $\beta(Y) < +\infty$ and satisfy $\lim_{t \to -\infty} Y^1_t = -\infty$.
We now apply Kaspi's time-change to $Q^\dagger_m$ with $B_t = \int_{-\infty}^t \exp(\alpha Y^1_r)\, dr$. In order to use Kaspi's result we need to check that $B_0 < \infty$ for $Q^\dagger_m$-almost all realizations. From the two-sided construction of $Q^\dagger_m$ it is clearly enough to show that $\hat P_{x,i,\downarrow}$-almost surely $\int_0^\infty \exp(\alpha \xi_r)\, dr < \infty$ for all $(x, i) \in (-\infty, 0) \times \{\pm 1\}$.
To do so we show finiteness of the expectation:
\[
\begin{aligned}
\hat E_{x,i,\downarrow}\Big[\int_0^\infty e^{\alpha \xi_s}\, ds\Big]
&= \int_0^\infty \hat E_{x,i,\downarrow}\big[e^{\alpha \xi_s}\big]\, ds
= \int_0^\infty \sum_{j = \pm 1} \int_{\mathbb R} e^{\alpha y}\, \hat P_{x,i,\downarrow}(\xi_s \in dy, J_s = j)\, ds \\
&= \sum_{j = \pm 1} \int_{\mathbb R} e^{\alpha y} \int_0^\infty \hat P_{x,i,\downarrow}(\xi_s \in dy, J_s = j)\, ds
=: \sum_{j = \pm 1} \int_{\mathbb R} e^{\alpha y}\, \hat U^\downarrow\big((x,i), (dy, \{j\})\big) \\
&= \frac{1}{\hat U^+_i(x)} \sum_{j = \pm 1} \int e^{\alpha y}\, \hat U^+_j(y)\, \hat U^\dagger\big((x,i), (dy, \{j\})\big)
\le \frac{C}{\hat U^+_i(x)} \sum_{j = \pm 1} \int e^{\alpha y}\, \hat U^\dagger\big((x,i), (dy, \{j\})\big) \\
&= \frac{C}{\hat U^+_i(x)}\, \hat E_{x,i,\dagger}\Big[\int_0^\infty e^{\alpha \xi_s}\, ds\Big]
= \frac{C}{\hat U^+_i(x)}\, \hat E_{x,i}\Big[\int_0^{\tau^+_0} e^{\alpha \xi_s}\, ds\Big], 
\end{aligned}
\tag{13}
\]
where we used Fubini's theorem and the relation
\[
\hat U^\downarrow\big((x,i), (dy, \{j\})\big) = \frac{\hat U^+_j(y)}{\hat U^+_i(x)}\, \hat U^\dagger\big((x,i), (dy, \{j\})\big),
\]
with $\hat U^\dagger((x,i), (dy, \{j\}))$ the potential measure of $(\xi^\dagger, J^\dagger)$ (a consequence of (12)), and the fact that the potentials $y \mapsto \hat U^+_j(y)$ grow at most linearly (see Theorem 28 of the Appendix). The right-hand side of (13) was already shown to be finite in the proof of Lemma 8.
Theorems (2.3) and (2.10) of Kaspi [19] thus give us an entrance law $(\eta_t)$ at zero and a corresponding Kuznetsov measure $\tilde Q^\dagger_\eta$ for the time-changed killed process
\[
\tilde P^\dagger_t f(x, i) := E_{x,i,\dagger}\big[f(\xi_{S(t)}, J_{S(t)})\big], \qquad \text{with } S_t = \inf\Big\{ s > 0 : \int_0^s \exp(\alpha \xi_u)\, du > t \Big\}, \tag{14}
\]
and furthermore
\[
\tilde Q^\dagger_\eta(A, \beta > t) = Q^\dagger_m\big(\pi^{-1}(A),\ 0 < B^{-1}_t \le \beta\big), \qquad A \in \mathcal F, \tag{15}
\]
with $\pi(Y)_t = Y_{B^{-1}_t}$. As in the previous proof, (15) and the almost sure behavior under $Q^\dagger_m$ imply the following claim:

Claim: $\tilde Q^\dagger_\eta$-almost all trajectories satisfy $\lim_{t \downarrow 0} Y^1_t = -\infty$ and $\beta(Y) < +\infty$.

Claim: $\tilde Q^\dagger_\eta(W) < \infty$. The proof is exactly as in the proof of Lemma 12.
Claim: $\tilde Q^\dagger_\eta\big((Y^1_{\tau^+_z} - z, Y^2_{\tau^+_z}) \in (dx, \{i\})\big) = \tilde Q^\dagger_\eta(W)\, \nu(dx, \{i\})$ for all $z < 0$.

Normalizing $\tilde Q^\dagger_\eta$ to a probability measure and composing pathwise with $h(x, i) = \exp(x)\, i$ yields a law $P^{0,\dagger}$, which is a Kuznetsov measure for the transition semigroup $(P^\dagger_t)$ killed at $T$. The overshoot distribution under $P^{0,\dagger}$ at levels $\varepsilon < 0$ is $\mu_\varepsilon$ (see the proof of Lemma 9). Concatenating $P^{0,\dagger}$ with an independent copy of $P^\mu$, i.e. running a trajectory under $P^{0,\dagger}$ until $T$ and then continuing with an independent copy of $P^\mu$, yields $P$. From the above and Lemma 9, $P$ has the claimed properties. □

Proof of Theorem 6.
The argument for the necessity of Condition (C) was given in Section 1.3. Now suppose (C) holds and let $P$ be as in Lemma 12 or Lemma 14, respectively. Then property (1) of Theorem 6 is satisfied and the canonical process under $P$ is strongly Markov for strictly positive stopping times as it comes from a Kuznetsov measure. In particular, properties (2a) and (2b) of Proposition 7 are true. As shown in Section 2.2, properties (1a) to (1c) are also fulfilled; thus,
\[
\operatorname{w-}\lim_{|z| \to 0} P_z = P.
\]
We will use these properties to conclude the remaining assertions of Theorem 6.
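Before the steps below, it is convenient to record how the scaling property reads at the level of the transition semigroup; the following display is a standard consequence of self-similarity (our notation) and is the identity implicitly used in Steps 1 and 2: for $c > 0$, bounded measurable $f$ and $z \neq 0$,

```latex
P_{c^{\alpha} t} f(cz)
  \;=\; E_{cz}\big[ f(Z_{c^{\alpha} t}) \big]
  \;=\; E_{z}\big[ f(c\, Z_{t}) \big]
  \;=\; P_t\big( f(c\,\cdot) \big)(z),
  \qquad t \ge 0 .
```

In particular, continuity and decay properties of $P_t f$ transfer between different times by scaling.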
Step 1:
We show that the extension $\{P_z : z \in \mathbb R\}$ is Feller. First we show that for arbitrary $t > 0$ and continuous bounded $f : \mathbb R \to \mathbb R$ the semigroup $P_t f(x) = E_x[f(Z_t)]$ is continuous on $\mathbb R$. Suppose that the sequence $(x_n)_{n \in \mathbb N}$ converges to $x \in \mathbb R$. We know already that $\operatorname{w-}\lim_{n \to \infty} P_{x_n} = P_x$ on the Skorokhod space, and it follows that
\[
P_t f(x_n) = E_{x_n}[f(Z_t)] \to E_x[f(Z_t)] = P_t f(x),
\]
once we ensure that under $P_x$ the canonical process $Z$ is almost surely continuous at $t$, since point evaluations on the Skorokhod space are continuous on the set of functions that are continuous at the respective point. To show this we recall that the paths of real self-similar Markov processes are quasi-left-continuous because the same is true of MAPs, in particular when they are time-changed by the sequence of stopping times that appear in the Lamperti-Kiu transform. In particular, this means that $Z$ is continuous at $t$, almost surely, under $P_x$ if $x \neq 0$. In the case $x = 0$ we use the Markov property to conclude that
\[
P(Z \text{ has a jump at } t) = E\big[P_{Z_{t/2}}(Z \text{ has a jump at } t/2)\big] = 0.
\]
Next, we show that if additionally $f$ vanishes at infinity, then so does $P_t f$. This is a consequence of the fact that for every $C > 0$,
\[
\lim_{|x| \to \infty} P_x\Big(\min_{s \in [0,t]} |Z_s| < C\Big) = 0,
\]
which itself follows easily from the Lamperti-Kiu representation. Indeed, this estimate implies that
\[
|P_t f(x)| \le \max_{y : |y| \ge C} |f(y)| + P_x\Big(\min_{s \in [0,t]} |Z_s| < C\Big)\, \max_{y \in \mathbb R} |f(y)| \to \max_{y : |y| \ge C} |f(y)| \tag{16}
\]
for $|x| \to \infty$. Thus, $P_t f$ vanishes at infinity since $C > 0$ was arbitrary and $f : \mathbb R \to \mathbb R$ vanishes at infinity. Let $(t_n)$ be a decreasing sequence with $t_n \to 0$ and $(x_n)$ a sequence in $\mathbb R$ with either $|x_n| \to \infty$ or $x_n \to x$ for some $x \in \mathbb R$. In the case $|x_n| \to \infty$, with the same estimate as in (16), we find
\[
|P_{t_n} f(x_n) - f(x_n)| \le |P_{t_n} f(x_n)| + |f(x_n)| \to 0.
\]
Moreover, if $x_n \to x$, we get that
\[
P_{t_n} f(x_n) = E_{x_n}[f(Z_{t_n})] \to E_x[f(Z_0)] = f(x),
\]
since the functional
\[
D(\mathbb R) \times [0, \infty) \ni (w, t) \mapsto w_t \in \mathbb R
\]
is continuous in $P_x \otimes \delta_0$-almost all entries. Consequently, one has
\[
\lim_{t \downarrow 0} \sup_{x \in \mathbb R} |P_t f(x) - f(x)| = 0,
\]
since we could otherwise construct sequences $(t_n)$ and $(x_n)$ as above contradicting the above properties (based on the compactness of the one-point compactification of $\mathbb R$).

Step 2:
Next we show that $P$ is self-similar. For a continuous and bounded functional $f : D(\mathbb R) \to \mathbb R$ we have
\[
E\big[f(cZ_{c^{-\alpha} \cdot})\big] = \lim_{z \to 0} E_z\big[f(cZ_{c^{-\alpha} \cdot})\big] = \lim_{z \to 0} E_{cz}\big[f(Z)\big] = E\big[f(Z)\big].
\]

Step 3:
Finally, we show that $P$ is the unique Markovian extension satisfying one of the properties (1) or (2). Suppose there exists another Markovian extension satisfying property (1) in the statement of the theorem and denote it by $\bar P$. Then, for $t > 0$,
\[
\bar P(Z_t \in \cdot) = \operatorname{w-}\lim_{\varepsilon \downarrow 0} \bar P(Z_{t+\varepsilon} \in \cdot) = \operatorname{w-}\lim_{\varepsilon \downarrow 0} \bar P\big(P_{Z_\varepsilon}(Z_t \in \cdot)\big) = P(Z_t \in \cdot),
\]
where we used in the first step that $(Z_t)$ is right-continuous, in the second step the Markov property of $\bar P$, and in the third step that $Z_\varepsilon \Rightarrow \delta_0$ under $\bar P$ and $\operatorname{w-}\lim_{z \to 0} P_z(Z_t \in \cdot) = P(Z_t \in \cdot)$ by the Feller property for $P$. By using the Markov property one easily sees that the distributions $\bar P$ and $P$ coincide.
Suppose now, instead, that $\bar P$ satisfies the Feller property (2) instead of (1). Then, using the Feller property twice, we get
\[
\bar P(Z_t \in \cdot) = \operatorname{w-}\lim_{x \to 0} P_x(Z_t \in \cdot) = P(Z_t \in \cdot),
\]
so that $\bar P$ and $P$ coincide again by the Markov property.

2.5. Remarks on the Proof.

Remark 15.
The way the limiting law $P$ is constructed, one can say that the Lamperti-Kiu representation extends, in a slightly unwieldy way, to the initial condition $0$. Due to the explicit construction of the Kuznetsov measure from two-sided MAPs one can, for instance, deduce almost sure results for self-similar Markov processes started from zero from almost sure results for MAPs.

Remark 16 (Proof of Theorem 6 fails if (C) fails). Calculations similar to those from Lemma 12 (resp. Lemma 14) can be used to show that the divergence of overshoots implies $\tilde Q_\eta(W) = \infty$ (resp. $\tilde Q^\dagger_\eta(W) = \infty$). Hence, if Condition (C) fails, then necessarily $\tilde Q_\eta$ (resp. $\tilde Q^\dagger_\eta$) is an infinite measure and as such cannot be normalized to a probability measure $P$.

Remark 17.
The previous remark has an interesting consequence: in contrast to other known constructions of $P$ in the setting of pssMps, our construction works irrespective of Condition (C). When (C) fails, the infinite Kuznetsov measure can still be used to study conditional limits, such as $\lim_{|z| \to 0} P_z(\cdot \mid \text{the interval } [a, b] \text{ is hit})$.

Remark 18 (Relation to Bertoin, Savov [4]). For pssMps, Bertoin and Savov constructed $P$ by hand, without appealing to the probabilistic potential theory centred around Kuznetsov measures. Their construction is in the spirit of the Fitzsimmons and Taksar [16] construction of stationary regenerative sets as ranges of stationary subordinators. In essence, we first constructed a Kuznetsov measure and then produced the so-called quasi-process by taking Palm measures in (11) (resp. in (15)). Bertoin and Savov directly wrote down the quasi-process, and their construction only works under Condition (C).

Remark 19.
The advantage of taking the detour through Kuznetsov measures is mostly of a technical nature. It allowed us to write down the limiting object $P$ with a minimal use of fluctuation theory. For instance, there was no need to use the non-trivial existence of $\hat P^\downarrow$ issued from the origin. Since fluctuation theory is delicate, a proof with minimal use of it is desirable, in particular for possible future generalizations to more general domains. One direction for which our construction works but fluctuation theory is not available are the multi-self-similar Markov processes introduced in Yor, Jacobson [17].
Remark 20.
For real self-similar Markov processes with jumps only towards the origin, a construction of $P$ was already given in [10] through jump-type stochastic differential equations. That approach lacks full generality since the weak uniqueness argument does not extend. It might be an interesting question to ask whether the potential theory of the present article can be used to prove weak uniqueness of the differential equations.

Appendix A. Results for Markov additive processes
Unlike the case of Lévy processes, general fluctuation theory for Markov additive processes (MAPs) appears to be relatively incomplete in the literature. Accordingly, in this Appendix, we address those parts of the fluctuation theory that are needed in the main body of the text above. The contents of the Appendix are as follows:
A.1 Basics
A.2 Duality
A.3 Local time and Cox process of excursions
A.4 Splitting at the maximum
A.5 Occupation formula
A.6 Markov renewal theory
A.7 Harmonic functions
A.8 Conditioning to stay positive
A.9 Laws of large numbers
A.10 Tightness of the overshoots
Unfortunately, a complete treatment would require a whole book's worth of text. Therefore, as a compromise and with an apology to the reader, the presentation of A.1 to A.6 mostly highlights selected results and the main steps needed to prove them. Almost all of the fluctuation theory can be constructed by analogy with the fluctuation theory of Lévy processes. The selected computations we dwell on below pertain largely to the peculiarities that are specific to the case of MAPs. The results in A.9 and A.10 are not in analogy with Lévy processes and are non-trivial, so full proofs are given.

A.1.
Basics.
Recall that $(\xi_t, J_t)_{t \ge 0}$ denotes a MAP on $\mathbb R \times E$, where $E$ is a finite set. Recall also that its natural filtration is denoted by $(\mathcal F_t)_{t \ge 0}$ and its probabilities by $(P_{x,i})_{x \in \mathbb R, i \in E}$. We shall also assume that $J$ is irreducible and aperiodic and hence ergodic. Denote the intensity matrix of $J$ by $Q = (q_{i,j})_{i,j \in E}$. Its stationary distribution is denoted by $\pi = (\pi_1, \dots, \pi_{|E|})$. Unless otherwise stated, we assume throughout that $\xi$ is non-lattice, that is, (NL) is in force.
Referring to Proposition 2, the characteristic exponents of the 'pure-state' Lévy processes appearing in Proposition 2 will be denoted by $\psi_i(z) = \log E[\exp(z \xi^i_1)]$, $z \in \mathbb C$, whenever the right-hand side exists. It suffices for us to deal with the case that $\psi_i(0) = 0$ for all $i \in E$, i.e. none of the Lévy processes are killed. Furthermore, whenever it exists, define the matrix $G(z) = (G_{i,j}(z))_{i,j \in E}$, where $G_{i,j}(z) = E[\exp(z \Delta_{i,j})]$, $i, j \in E$. For each $i, j \in E$ such that $i \neq j$, the random variable $\Delta_{i,j}$ has law $F_{i,j}$, corresponding to the distribution of the additional jump that is inserted into the path of the MAP when $J$ undergoes a transition from $i$ to $j$. For convenience we assume that $\Delta_{i,j} = 0$ whenever $q_{i,j} = 0$ and also set $\Delta_{i,i} = 0$ for each $i \in E$. According to Proposition 2 this assumption is without loss of generality since those transitional jumps never occur.
A crucial role will be played by the matrices
\[
F(z) := \operatorname{diag}\big(\psi_1(z), \dots, \psi_{|E|}(z)\big) + \big(q_{i,j}\, G_{i,j}(z)\big)_{i,j \in E}, \tag{17}
\]
which are defined, for $z \in \mathbb C$, whenever the right-hand side exists. The matrix $F$ is called the matrix exponent of the MAP $(\xi, J)$ because
\[
E_{0,i}\big[e^{z \xi_t},\ J_t = j\big] = \big(e^{F(z)t}\big)_{i,j}, \qquad i, j \in E,
\]
for all $z \in \mathbb C$ for which one of the sides is defined.

A.2. Duality.
Given the MAP $\xi$ with probabilities $P_{x,i}$, $x \in \mathbb R$, $i \in E$, we can introduce the dual process; that is, the MAP with probabilities $\hat P_{x,i}$, $x \in \mathbb R$, $i \in E$, whose matrix exponent, when it is defined, is given by
\[
\hat E_{0,i}\big[e^{z \xi_t},\ J_t = j\big] = \big(e^{\hat F(z)t}\big)_{i,j}, \qquad i, j \in E,
\]
where
\[
\hat F(z) := \operatorname{diag}\big(\psi_1(-z), \dots, \psi_{|E|}(-z)\big) + \hat Q \circ G(-z)^{\mathrm T}
\]
and $\hat Q$ is the intensity matrix of the modulating Markov chain on $E$ with entries given by
\[
\hat q_{i,j} = \frac{\pi_j}{\pi_i}\, q_{j,i}, \qquad i, j \in E.
\]
Note that the latter can also be written $\hat Q = \Delta_\pi^{-1} Q^{\mathrm T} \Delta_\pi$, where $\Delta_\pi = \operatorname{diag}(\pi_1, \dots, \pi_{|E|})$, and hence, when it exists,
\[
\hat F(z) = \Delta_\pi^{-1}\, F(-z)^{\mathrm T}\, \Delta_\pi,
\]
showing that
\[
\pi_i\, \hat E_{0,i}\big[e^{z \xi_t},\ J_t = j\big] = \pi_j\, E_{0,j}\big[e^{-z \xi_t},\ J_t = i\big]. \tag{18}
\]
At the level of processes, one can understand (18) as changing time-directions:

Lemma 21.
We have that $\{(\xi_{(t-s)-} - \xi_t,\ J_{(t-s)-}) : s \le t\}$ under $P_{0,\pi} = \sum_{i=1}^{|E|} \pi_i P_{0,i}$ is equal in law to $\{(\xi_s, J_s) : s \le t\}$ under $\hat P_{0,\pi}$.

In addition to the ordinary duality (18) we will use duality in the general sense of (8) for the killed MAP
\[
P^\dagger_t\big((x,i), (dy, \{j\})\big) = P_{x,i}\big[\xi_t \in dy,\ \bar\xi_t \le 0,\ J_t = j\big], \qquad x, y \le 0,\ t \ge 0,\ i, j \in E,
\]
where $\bar\xi_t = \sup_{s \le t} \xi_s$. The next two duality formulas are called switching identities:

Lemma 22. If $x, y \in \mathbb R$ and $i, j \in E$, then
\[
\hat P_{x,i}(\xi_t \in dy;\ J_t = j)\, \pi_i\, dx = P_{y,j}(\xi_t \in dx;\ J_t = i)\, \pi_j\, dy
\]
and, for $x, y \le 0$,
\[
\hat P^\dagger_t\big((x,i), (dy, \{j\})\big)\, \pi_i\, dx = P^\dagger_t\big((y,j), (dx, \{i\})\big)\, \pi_j\, dy.
\]
The proofs of the previous two lemmas are standard, especially in light of the straightforward nature of the analogous proofs for Lévy processes (see for example Chapter II of [3]), and we leave them to the reader.

A.3.
Local time and Cox process of excursions.
Let $Y^{(x)}_t = (x \vee \bar\xi_t) - \xi_t$, $t \ge 0$, where we recall that $\bar\xi_t = \sup_{s \le t} \xi_s$. Following ideas that are well known from the theory of Lévy processes, it is straightforward to show that, as a pair, the process $(Y^{(x)}, J)$ is a strong Markov process. For convenience, write $Y$ in place of $Y^{(0)}$. Since $(Y, J)$ is a strong Markov process, by the general theory (c.f. Chapter IV of [3]) there exists a local time at the point $(0, i)$, which we henceforth denote by $\{\bar L^{(i)}_t : t \ge 0\}$. Now consider the process
\[
\bar L_t := \sum_{i \in E} \bar L^{(i)}_t, \qquad t \ge 0.
\]
Since, almost surely, for each $i \neq j$ in $E$, the points of increase of $\bar L^{(i)}$ and $\bar L^{(j)}$ are disjoint, it follows that $(\bar L^{-1}, H^+, J^+) := \{(\bar L^{-1}_t, H^+_t, J^+_t) : t \ge 0\}$ is a (possibly killed) Markov additive bivariate subordinator, where
\[
H^+_t := \xi_{\bar L^{-1}_t} \quad \text{and} \quad J^+_t := J_{\bar L^{-1}_t}, \quad \text{if } \bar L^{-1}_t < \infty,
\]
and $H^+_t := \infty$ and $J^+_t := \infty$ otherwise. Note that the rate at which the process $(\bar L^{-1}, H^+, J^+)$ is killed depends on the state of the chain $J^+$ when killing occurs. This will be addressed in more detail shortly. We also note that $\{\epsilon_t : t \ge 0\}$ is a (killed) Cox process, where
\[
\epsilon_t = \big\{\xi_{\bar L^{-1}_{t-} + s} - \xi_{\bar L^{-1}_{t-}} : s \le \Delta \bar L^{-1}_t\big\}, \quad \text{if } \Delta \bar L^{-1}_t > 0,
\]
and $\epsilon_t = \partial$, some isolated state, otherwise. Henceforth, write $n_i$ for the intensity measure of this Cox process when the underlying modulating chain $J^+$ is in state $i \in E$. As a bivariate Markov additive subordinator, the process $(\bar L^{-1}, H^+, J^+)$ has a matrix Laplace exponent given by
\[
E_{0,i}\big[e^{-\alpha \bar L^{-1}_t - \beta H^+_t},\ J^+_t = j\big] = \big(e^{-\kappa^+(\alpha, \beta)t}\big)_{i,j}, \qquad \alpha, \beta \ge 0,
\]
where the matrix $\kappa^+(\alpha, \beta)$ has the structure
\[
\kappa^+(\alpha, \beta) = \operatorname{diag}\big(\Phi^+_1(\alpha, \beta), \dots, \Phi^+_{|E|}(\alpha, \beta)\big) - Q^+ \circ G^+(\beta), \qquad \alpha, \beta \ge 0,
\]
where, for $i \in E$, $\Phi^+_i(\alpha, \beta)$ is the subordinator exponent that describes the movement of $(\bar L^{-1}, H^+)$ when the modulating chain $J^+$ is in state $i$. Moreover, $Q^+$ is the intensity matrix of $J^+$ and the matrix $G^+(\beta) = (G^+_{i,j}(\beta))$ is such that, for $i \neq j$ in $E$, its $(i,j)$-th entry is the Laplace transform of $F^+_{i,j}$, the distribution of the additional jump incurred by $H^+$ when the modulating chain changes state from $i$ to $j$. The diagonal elements of $G^+(\beta)$ are set to unity. Note that there is no additional jump incurred by $\bar L^{-1}$ when the modulating chain changes state. For future reference, write
\[
\Phi^+_i(\alpha, \beta) = n_i(\zeta = \infty) + a_i \alpha + b_i \beta + \int_0^\infty \int_0^\infty (1 - e^{-\alpha x - \beta y})\, n_i(\zeta \in dx,\ \epsilon_\zeta \in dy,\ J_\zeta = i), \qquad \alpha, \beta \ge 0,
\]
where $a_i, b_i \ge 0$ and $\zeta = \inf\{s > 0 : \epsilon_s = 0\}$ denotes the excursion length. Note in particular that the matrix
\[
\kappa^+(0, 0) = \operatorname{diag}\big(n_1(\zeta = \infty), \dots, n_{|E|}(\zeta = \infty)\big)
\]
encodes the respective killing rates of $(\bar L^{-1}, H^+, J^+)$ when $J^+$ is in each state of $E$.
The assumption that $\xi$ is non-lattice implies that the jump measures associated to $H^+$, namely $n_i(\epsilon_\zeta \in dx, J^+_\zeta = i)$, $i \in E$, and $F^+_{i,j}$, $i \neq j$, $i, j \in E$, are diffuse on $(0, \infty)$. For the sake of brevity, we give no proof of this fact here. Instead we refer to the proof of the analogous result for the case of Lévy processes. In that case, one may draw the desired conclusion out of, for example, Vigon's identity for the jump measure of the ascending ladder height process; see Theorem 7.8 in [24]. As one sees from the proof there, this identity is derived using the so-called quintuple law of the first passage problem, which itself follows from a straightforward application of the compensation formula for the Poisson point process of jumps. A quintuple law can also be derived in the MAP setting using the same technique as in the Lévy setting, where one appeals to an analogue of the compensation formula for the Cox process of jumps. This would also form the basis of the proof that the jump measures associated to $H^+$ are diffuse in the MAP case.
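Returning to the reflected pair $(Y^{(x)}, J)$ introduced at the beginning of this subsection, its Markov property rests on a simple pathwise identity, which we record as a sketch in our own notation: writing $\xi'_u = \xi_{t+u} - \xi_t$ for the post-$t$ increment process and $\bar\xi'_s = \sup_{u \le s} \xi'_u$,

```latex
Y^{(x)}_{t+s}
  \;=\; \big( x \vee \bar\xi_{t+s} \big) - \xi_{t+s}
  \;=\; \Big( \big( (x \vee \bar\xi_t) - \xi_t \big) \vee \bar\xi'_s \Big) - \xi'_s
  \;=\; \big( Y^{(x)}_t \vee \bar\xi'_s \big) - \xi'_s .
```

Given $(Y^{(x)}_t, J_t)$, the pair $(\xi', J_{t+\cdot})$ is again a MAP issued from $(0, J_t)$, so the future of $(Y^{(x)}, J)$ depends on the past only through the current state.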
A.4.
Splitting at the maximum.
Now suppose that $e_q$ is an exponentially distributed random variable with rate $q > 0$. Consider a marked version of the Cox process described above in which each excursion $\epsilon_t \neq \partial$ is marked with an independent copy of $e_q$, denoted by $e^{(t)}_q$, for $t \ge 0$. Let $m_t = \sup\{s \le t : \bar\xi_t = \xi_s\}$. Poisson thinning dictates that $(\bar\xi_{e_q}, m_{e_q})$ is equal in law to the process $(\bar L^{-1}, H^+)$ conditioned on $\{\Delta \bar L^{-1}_t < e^{(t)}_q \text{ for all } t \ge 0\}$ and stopped with rate matrix
\[
\operatorname{diag}\big(a_1 q + n_1(\zeta > e_q), \dots, a_{|E|} q + n_{|E|}(\zeta > e_q)\big)
= \operatorname{diag}\big(a_1 q + n_1(1 - e^{-q\zeta}), \dots, a_{|E|} q + n_{|E|}(1 - e^{-q\zeta})\big)
= \operatorname{diag}\big(\Phi^+_1(q, 0), \dots, \Phi^+_{|E|}(q, 0)\big).
\]
In particular, the conditioned process is stopped at a random time $\theta_q$ with the property that
\[
P_{0,i}\big(\theta_q > t \,\big|\, \sigma\{J^+_s : s \le t\}\big) = \exp\Big(-\int_0^t \Phi^+_{J^+_s}(q, 0)\, ds\Big).
\]
The aforementioned conditioned process has a matrix exponent which can be derived from the matrix exponent $\kappa^+(\alpha, \beta)$. Indeed, whereas in $\kappa^+(\alpha, \beta)$ the pure states are represented by $\Phi^+_i(\alpha, \beta)$, in the conditioned process this is replaced by
\[
n_i(\zeta = \infty) + a_i \alpha + b_i \beta + \int_0^\infty \int_0^\infty (1 - e^{-\alpha x - \beta y})\, e^{-qx}\, n_i(\zeta \in dx,\ \epsilon_\zeta \in dy,\ J_\zeta = i), \qquad \alpha, \beta \ge 0,
\]
which is also equal to $\Phi^+_i(q + \alpha, \beta) - \Phi^+_i(q, 0)$. Accordingly, set
\[
\kappa^+_q(\alpha, \beta) := \operatorname{diag}\big(\Phi^+_1(q + \alpha, \beta) - \Phi^+_1(q, 0), \dots, \Phi^+_{|E|}(q + \alpha, \beta) - \Phi^+_{|E|}(q, 0)\big) - Q^+ \circ G^+(\beta), \tag{19}
\]
for $\alpha, \beta \ge 0$, and denote by $(L^{-1}, H, J^+)$ the process corresponding to $(\bar L^{-1}, H^+)$ conditioned on $\{\Delta \bar L^{-1}_t < e^{(t)}_q \text{ for all } t \ge 0\}$, i.e. the Markov additive process with joint Laplace exponent given by (19). It now follows that the pair $(\bar\xi_{e_q}, m_{e_q})$ has matrix Laplace transform given by
\[
\begin{aligned}
E_{0,i}\big(e^{-\alpha m_{e_q} - \beta \bar\xi_{e_q}},\ J_{m_{e_q}} = j\big)
&= E_{0,i}\Big[e^{-\alpha L^{-1}_{\theta_q} - \beta H_{\theta_q}}\, \mathbf 1\big(J^+_{\theta_q} = j\big)\Big] \\
&= E_{0,i}\Big[\int_0^\infty du\, \mathbf 1\big(J^+_u = j\big)\, \Phi^+_{J^+_u}(q, 0)\, e^{-\int_0^u \Phi^+_{J^+_s}(q, 0)\, ds}\, e^{-\alpha L^{-1}_u - \beta H_u}\Big] \\
&= \int_0^\infty du\, \Phi^+_j(q, 0)\, E_{0,i}\Big[e^{-\int_0^u \Phi^+_{J^+_s}(q, 0)\, ds}\, e^{-\alpha L^{-1}_u - \beta H_u}\, \mathbf 1\big(J^+_u = j\big)\Big],
\end{aligned}
\tag{20}
\]
for $\alpha, \beta \ge 0$. Note that the final expectation above can be written in terms of the matrix Laplace exponent of $(L^{-1}, H, J^+)$ with a potential corresponding to $\operatorname{diag}(\Phi^+_1(q, 0), \dots, \Phi^+_{|E|}(q, 0))$, namely
\[
\kappa^+(q + \alpha, \beta) = \operatorname{diag}\big(\Phi^+_1(q + \alpha, \beta), \dots, \Phi^+_{|E|}(q + \alpha, \beta)\big) - Q^+ \circ G^+(\beta), \qquad \alpha, \beta \ge 0.
\]
Indeed, one has
\[
E_{0,i}\Big[e^{-\int_0^u \Phi^+_{J^+_s}(q, 0)\, ds}\, e^{-\alpha L^{-1}_u - \beta H_u}\, \mathbf 1\big(J^+_u = j\big)\Big] = \big[e^{-\kappa^+(q + \alpha, \beta) u}\big]_{i,j}.
\]
Continuing the computation in (20), we now have the following result.
Theorem 23.
For $i, j \in E$, $\alpha, \beta \ge 0$ and $q > 0$,
\[
E_{0,i}\big[e^{-\alpha m_{e_q} - \beta \bar\xi_{e_q}},\ J_{m_{e_q}} = j\big] = \Phi^+_j(q, 0)\, \big[\kappa^+(q + \alpha, \beta)^{-1}\big]_{i,j}. \tag{21}
\]

We can go a little further in our analysis of the previous section and note that, on the event $\{J^+_{\theta_q} = j\}$, the excursion $\epsilon_{\theta_q}$ is independent of $\{(\bar L^{-1}_t, H^+_t, J^+_t) : t < \theta_q\}$. In particular, on $\{J^+_{\theta_q} = j\}$, we have that $(\bar\xi_{e_q}, m_{e_q})$ is independent of $(\bar\xi_{e_q} - \xi_{e_q},\ e_q - m_{e_q})$.
Duality allows us to conclude that on the event $\{J^+_{\theta_q} = j,\ J_{e_q} = k\} = \{J_{m_{e_q}} = j,\ J_{e_q} = k\}$ the pair $(\bar\xi_{e_q} - \xi_{e_q},\ e_q - m_{e_q})$ is equal in law to the pair $(\hat{\bar\xi}_{e_q}, \hat m_{e_q})$ on $\{\hat J_0 = k,\ \hat J_{\hat m_{e_q}} = j\}$, where $\{(\hat\xi_s, \hat J_s) : s \le t\} := \{(\xi_{(t-s)-} - \xi_t,\ J_{(t-s)-}) : s \le t\}$, $t \ge 0$, is equal in law to the dual of $\xi$, and $\hat{\bar\xi}_t = \sup_{s \le t} \hat\xi_s$ and $\hat m_t = \sup\{s \le t : \hat\xi_s = \hat{\bar\xi}_t\}$. From the previous section, we may now deduce that, for $i, j, k \in E$ and $\alpha, \beta \ge 0$,
\[
E_{0,i}\big[e^{-\alpha(e_q - m_{e_q}) - \beta(\bar\xi_{e_q} - \xi_{e_q})},\ J_{m_{e_q}} = j,\ J_{e_q} = k\big]
= E_{0,k}\big[e^{-\alpha \hat m_{e_q} - \beta \hat{\bar\xi}_{e_q}},\ \hat J_{\hat m_{e_q}} = j\big]
= \hat E_{0,k}\big[e^{-\alpha m_{e_q} - \beta \bar\xi_{e_q}},\ J_{m_{e_q}} = j\big]. \tag{22}
\]
We can also use the ideas above to prove the following technical lemma, which will be of use later on.

Lemma 24.
The constant
\[
c := \sum_{j \in E} \lim_{q \downarrow 0} \frac{\Phi^+_j(q, 0)\, \hat\Phi^+_j(q, 0)}{q}
\]
exists in $(0, \infty)$ and, for each $j \in E$,
\[
c_j := \lim_{q \downarrow 0} \frac{\Phi^+_j(q, 0)\, \hat\Phi^+_j(q, 0)}{q} \tag{23}
\]
exists in $[0, \infty)$.

Proof. Write $\hat\kappa^+(\alpha, \beta)$ for the dual matrix exponent, that is, to $\hat F(z)$ what $\kappa^+(\alpha, \beta)$ is to $F(z)$. On the one hand, for all $i, k \in E$ and $\alpha > 0$,
\[
E_{0,i}\big[e^{-\alpha e_q},\ J_{e_q} = k\big] = \Big[\int_0^\infty q e^{-(\alpha + q)t} e^{Qt}\, dt\Big]_{i,k} = q\,\big[\big((q + \alpha)I - Q\big)^{-1}\big]_{i,k}.
\]
On the other hand, from (22), for all $i, k \in E$ and $\alpha > 0$,
\[
E_{0,i}\big[e^{-\alpha e_q},\ J_{e_q} = k\big]
= \sum_{j \in E} E_{0,i}\big[e^{-\alpha(m_{e_q} + e_q - m_{e_q})},\ J_{m_{e_q}} = j,\ J_{e_q} = k\big]
= \sum_{j \in E} \Phi^+_j(q, 0)\,\big[\kappa^+(q + \alpha, 0)^{-1}\big]_{i,j}\, \hat\Phi^+_j(q, 0)\,\big[\hat\kappa^+(q + \alpha, 0)^{-1}\big]_{k,j}.
\]
Taking limits as $q \downarrow 0$ we get
\[
\big[(\alpha I - Q)^{-1}\big]_{i,k} = \sum_{j \in E} \lim_{q \downarrow 0} \frac{\Phi^+_j(q, 0)\, \hat\Phi^+_j(q, 0)}{q}\,\big[\kappa^+(\alpha, 0)^{-1}\big]_{i,j}\,\big[\hat\kappa^+(\alpha, 0)^{-1}\big]_{k,j},
\]
where the limit on the right-hand side exists because the limit exists on the left-hand side. The statement of the lemma now follows. □

The next theorem gives the Wiener–Hopf factorisation for MAPs. It is a natural consequence of Theorem 23 and a well-established method of splitting stochastic processes at their maximum. Some results in this direction already exist in the literature, see for example Chapter XI of [2] and [19]; however, none of them are in an appropriate form for our purposes.
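For orientation it may help to recall the shape of the factorisation in the Lévy case, i.e. $|E| = 1$, which the theorem below generalises; with $\Psi$ the characteristic exponent of the Lévy process and $\kappa, \hat\kappa$ the ascending and descending ladder exponents, one has, up to the normalisation of local time (standard notation, not taken from the text),

```latex
\alpha + \Psi(z) \;=\; \hat\kappa(\alpha, \mathrm{i} z)\,\kappa(\alpha, -\mathrm{i} z),
  \qquad z \in \mathbb{R},\ \alpha \ge 0 .
```

Theorem 26 replaces this scalar product of ladder exponents by a product of matrices, conjugated by $\Delta_\pi$ to account for the time reversal of the modulating chain.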
Remark 25.
As a consequence of the Wiener–Hopf factorisation, it will turn out that the constants c j , j ∈ E , are all strictly positive and may be taken to be equal to unity without loss of generality. Theorem 26.
For $z\in\mathbb{R}\setminus\{0\}$ and $\alpha\ge 0$,
\[ \alpha I - F(\mathrm{i}z) = \Delta_\pi^{-1}\,\big[\hat\kappa^+(\alpha,\mathrm{i}z)\big]^{\mathrm{T}}\,\Delta_\pi\,\kappa^+(\alpha,-\mathrm{i}z). \]

Proof.
We start by sampling $\xi$ over an independent, exponentially distributed time horizon, denoted as usual by $e_q$. By splitting at the maximum, applying duality and appealing to the identity (21), we have, for $\alpha\ge 0$,
\begin{align*}
\mathbf{E}_{0,i}\big[e^{-\alpha e_q+\mathrm{i}z\xi_{e_q}},\ J_{e_q}=j\big]
&= \sum_{k\in E}\mathbf{E}_{0,i}\big[e^{-\alpha(e_q-m_{e_q}+m_{e_q})+\mathrm{i}z\bar\xi_{e_q}}e^{\mathrm{i}z(\xi_{e_q}-\bar\xi_{e_q})},\ J_{m_{e_q}}=k,\ J_{e_q}=j\big] \\
&= \sum_{k\in E}\mathbf{E}_{0,i}\big[e^{-\alpha m_{e_q}+\mathrm{i}z\bar\xi_{e_q}},\ J_{m_{e_q}}=k\big]\,\frac{\pi_j}{\pi_k}\,\hat{\mathbf{E}}_{0,j}\big[e^{-\alpha m_{e_q}-\mathrm{i}z\bar\xi_{e_q}},\ J_{m_{e_q}}=k\big] \\
&= \sum_{k\in E}\Phi^+_k(q,0)\big[\kappa^+(q+\alpha,-\mathrm{i}z)^{-1}\big]_{i,k}\,\frac{\pi_j}{\pi_k}\,\hat\Phi^+_k(q,0)\big[\hat\kappa^+(q+\alpha,\mathrm{i}z)^{-1}\big]_{j,k}.
\end{align*}
Noting that the left-hand side above equals $q[((q+\alpha)I-F(\mathrm{i}z))^{-1}]_{i,j}$, we can divide by $q$ and take limits as $q\downarrow 0$ to obtain
\[ \big[(\alpha I-F(\mathrm{i}z))^{-1}\big]_{i,j} = \sum_{k\in E}c_k\,\big[\kappa^+(\alpha,-\mathrm{i}z)^{-1}\big]_{i,k}\,\frac{\pi_j}{\pi_k}\,\big[[\hat\kappa^+(\alpha,\mathrm{i}z)^{\mathrm{T}}]^{-1}\big]_{k,j}, \]
where we recall that the constants $c_k$, $k\in E$, were introduced in (23). In matrix form, the above equality can be rewritten as

(24) \[ \big(\alpha I - F(\mathrm{i}z)\big)^{-1} = \kappa^+(\alpha,-\mathrm{i}z)^{-1}\,\Delta_{c/\pi}\,\big[\hat\kappa^+(\alpha,\mathrm{i}z)^{\mathrm{T}}\big]^{-1}\,\Delta_\pi, \]

where $\Delta_{c/\pi} = \mathrm{diag}(c_1/\pi_1,\dots,c_{|E|}/\pi_{|E|})$. Since all matrices appearing are invertible except possibly $\Delta_{c/\pi}$ (on account of the fact that some of the constants $c_k$ may a priori be zero), it follows that necessarily $c_k>0$ for all $k\in E$; hence the matrix $\Delta_{c/\pi}$ is indeed invertible and its inverse is equal to $\Delta_{\pi/c}$ (in the obvious notation). The proof is now completed by inverting the matrices on both the left- and right-hand sides of (24) and noting that, without loss of generality, the constants $c_k$ may be taken as unity by choosing an appropriate normalisation of local time (which in turn means that the equality in (23) is determined up to a multiplicative constant). □

A.5.
Occupation formula.
The objective in this section is to use the preceding constructions to establish a key identity which is central to the analysis of real self-similar Markov processes in the main body of the text. In order to state the main result, some more notation is needed. For $i,j\in E$ the potential measure $U^+_{i,j}$ on $[0,\infty)$ is defined by

(25) \[ U^+_{i,j}(dx) = \mathbf{E}_{0,i}\Big[\int_0^\infty \mathbf{1}\big(H^+_t\in dx,\ J^+_t=j\big)\,dt\Big], \quad x\ge 0. \]

Note that, for $\lambda>0$,

(26) \[ \int_0^\infty e^{-\lambda x}\,U^+_{i,j}(dx) = \int_0^\infty \mathbf{E}_{0,i}\big[e^{-\lambda H^+_t},\ J^+_t=j\big]\,dt = \big[\kappa^+(0,\lambda)^{-1}\big]_{i,j}. \]

Moreover, it should also be noted that the non-lattice assumption on the process $\xi$ ensures that the measure $U^+_{i,j}$ is diffuse on $(0,\infty)$; see the discussion at the end of A.3 as well as the proof of Theorem 5.4 in [24] in the Lévy case for guidance. We define by analogy the measures $\hat U^+_{i,j}$, $i,j\in E$, for the dual process $\hat\xi$. The reader might also want to recall the definitions of $\tau^-_0$ and $\tau^+_0$ from (5).

Theorem 27.
There exist non-negative constants $c_j$, $j\in E$, satisfying $\sum_{j\in E}c_j>0$, such that for all bounded measurable $f:\mathbb{R}\to[0,\infty)$ and $x>0$,
\[ \mathbf{E}_{x,i}\Big[\int_0^{\tau^-_0} f(\xi_t)\,\mathbf{1}(J_t=k)\,dt\Big] = \sum_{j\in E}c_j\int_{y\in[0,\infty)}\int_{z\in[0,x]} U^+_{i,j}(dy)\,\hat U^+_{k,j}(dz)\,f(x+y-z). \]

Proof.
Start by noting that
\begin{align*}
\mathbf{E}_{x,i}\Big[\int_0^{\tau^-_0}e^{-qt}f(\xi_t)\,\mathbf{1}(J_t=k)\,dt\Big]
&= \frac1q\,\mathbf{E}_{x,i}\big[f(\xi_{e_q})\,\mathbf{1}(J_{e_q}=k),\ e_q<\tau^-_0\big] \\
&= \frac1q\sum_{j\in E}\mathbf{E}_{x,i}\big[f\big(\bar\xi_{e_q}-(\bar\xi_{e_q}-\xi_{e_q})\big)\,\mathbf{1}(J_{m_{e_q}}=j)\,\mathbf{1}(J_{e_q}=k),\ e_q<\tau^-_0\big] \\
&= \int_{y\in[0,\infty)}\int_{z\in[0,x]} f(x+y-z)\sum_{j\in E}\frac1q\,\mathbf{P}_{0,i}\big(\bar\xi_{e_q}\in dy,\ J_{m_{e_q}}=j\big)\,\mathbf{P}_{0,i}\big(\bar\xi_{e_q}-\xi_{e_q}\in dz,\ J_{m_{e_q}}=j,\ J_{e_q}=k\big) \\
&= \int_{y\in[0,\infty)}\int_{z\in[0,x]} f(x+y-z)\sum_{j\in E}\frac1q\,\mathbf{P}_{0,i}\big(\bar\xi_{e_q}\in dy,\ J_{m_{e_q}}=j\big)\,\mathbf{P}_{0,k}\big(\hat{\bar\xi}_{e_q}\in dz,\ \hat J_{\hat m_{e_q}}=j\big). \tag{27}
\end{align*}
Next, with the help of (21), for $\lambda,\mu>0$,
\[ \int_{[0,\infty)^2} e^{-\lambda y-\mu z}\sum_{j\in E}\frac1q\,\mathbf{P}_{0,i}\big(\bar\xi_{e_q}\in dy,\ J_{m_{e_q}}=j\big)\,\mathbf{P}_{0,k}\big(\hat{\bar\xi}_{e_q}\in dz,\ \hat J_{\hat m_{e_q}}=j\big) = \sum_{j\in E}\frac{\Phi^+_j(q,0)\,\hat\Phi^+_j(q,0)}{q}\,\big[\kappa^+(q,\lambda)^{-1}\big]_{i,j}\big[\hat\kappa^+(q,\mu)^{-1}\big]_{k,j}. \]
Taking account of (26), it follows with the help of Lebesgue's continuity theorem for Laplace transforms that, in the vague sense, the product measure on the right-hand side of (27) satisfies
\[ \lim_{q\downarrow 0}\sum_{j\in E}\frac1q\,\mathbf{P}_{0,i}\big(\bar\xi_{e_q}\in dy,\ J_{m_{e_q}}=j\big)\,\mathbf{P}_{0,k}\big(\hat{\bar\xi}_{e_q}\in dz,\ \hat J_{\hat m_{e_q}}=j\big) = \sum_{j\in E}c_j\,U^+_{i,j}(dy)\,\hat U^+_{k,j}(dz). \]
The result now follows, first for non-negative, compactly supported, continuous $f$, and then for all bounded measurable $f\ge 0$. □

A.6.
Markov renewal theory.
The measures $U^+_{i,j}$ play a role analogous to that of the potential measure $U$ of the ascending ladder process of a Lévy process, which can also be seen as a renewal measure. For example, using an analogue of the compensation formula for Cox processes, it is straightforward to deduce that, for $a,x>0$,

(28) \[ \mathbf{P}_{0,i}\big(\xi_{\tau^+_a}-a>x,\ J^+_{\tau^+_a}=j\big) = \int_{[0,a)} U^+_{i,j}(dy)\,n_j\big(\epsilon_\zeta>a-y+x,\ J_\zeta=j\big) + \sum_{k\ne j}\int_{[0,a)} q^+_{k,j}\,U^+_{i,k}(dy)\,\big(1-F^+_{k,j}(a-y+x)\big). \]

It is worth noting here that the fact that $U^+_{i,j}$ is diffuse on $(0,\infty)$ ensures that the right-hand side above is continuous in $x$.
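The renewal flavour of these measures can already be seen in the scalar case, where $U^+$ is the renewal measure of the ladder height subordinator and the elementary renewal theorem gives $U(x)/x \to 1/\mu$. A small simulation sketch (the exponential increment distribution and all parameters are arbitrary illustrative choices, not tied to any process in the paper):

```python
import random

random.seed(2)

def renewal_count(x, mean_inc=2.0, n_runs=2000):
    """Estimate the renewal function U(x): the expected number of
    renewal epochs needed for the partial sums to exceed x, for a
    renewal process with Exp(1/mean_inc) increments."""
    total = 0
    for _ in range(n_runs):
        s, count = 0.0, 0
        while s <= x:
            count += 1
            s += random.expovariate(1.0 / mean_inc)
        total += count
    return total / n_runs

x = 200.0
est = renewal_count(x) / x
print(est)  # elementary renewal theorem: U(x)/x ~ 1/mean_inc = 0.5 here
```

For exponential increments the renewal function is affine in $x$, so the ratio is already close to $1/\mu$ at moderate levels; for general increment laws the convergence only holds as $x\to\infty$.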
There is a relatively wide body of literature concerning Markov additive renewal theory; see for example [25], [20] and [1]. Although mostly developed for discrete time, we can nonetheless identify the following renewal-type theorem for the non-lattice measures $U^+_{i,j}$.

Theorem 28.
The family $\{\xi_{\tau^+_a}-a : a>0\}$ of overshoots converges in distribution under $\mathbf{P}_{0,i}$ for every $i\in E$ if and only if
\[ \mathbf{E}_{0,\pi}[H^+_1] := \sum_{i\in E}\pi_i\,\mathbf{E}_{0,i}[H^+_1] < \infty, \]
and in that case the following hold:

(i) For all $i,j\in E$,
\[ \lim_{x\to\infty}\frac{U^+_{i,j}(x)}{x} = \frac{\pi_j}{\mathbf{E}_{0,\pi}[H^+_1]}. \]

(ii) In the spirit of the Key Renewal Theorem, for $\alpha>0$ and $i,j\in E$,
\[ \lim_{y\to\infty}\int_{[0,y]} e^{-\alpha(y-z)}\,U^+_{i,j}(dz) = \frac{\pi_j}{\alpha\,\mathbf{E}_{0,\pi}[H^+_1]}. \]

(iii) For $x>0$ and $i,j\in E$,
\[ \nu(dx,\{j\}) := \text{w-}\lim_{a\to\infty}\mathbf{P}_{0,i}\big(\xi_{\tau^+_a}-a\in dx,\ J^+_{\tau^+_a}=j\big) = \frac{1}{\mathbf{E}_{0,\pi}[H^+_1]}\Big[\pi_j\,n_j\big(\epsilon_\zeta>x,\ J_\zeta=j\big) + \sum_{k\ne j}\pi_k\,q^+_{k,j}\big(1-F^+_{k,j}(x)\big)\Big]\,dx, \]
where $F^+_{k,j}$ is the distribution whose Laplace transform is $G^+_{k,j}$.

For all limits above, we interpret the right-hand side as zero when $\mathbf{E}_{0,\pi}[H^+_1]=\infty$; in particular, this means that the overshoot distributions diverge to an atom at $+\infty$ and are not tight.

Parts (i) and (iii) are the continuous-time analogue of the Markov Additive Renewal Theorem in [25], whereas part (ii) is the continuous-time analogue of the version of the Markov Additive Renewal Theorem in [20].

A.7.
Harmonic functions.

The main objective of this section is to prove a result which identifies a harmonic function for the process $(\xi,J)$ killed on entering $(-\infty,0)\times E$. In the forthcoming analysis we use $L_t$ to denote $\sum_{k\in E}L^{(k)}_t$, the sum of the local times of $\xi-\underline\xi$ at $(0,k)$, $k\in E$, where $\underline\xi_t = \inf_{s\le t}\xi_s$, $t\ge 0$. Moreover, similarly to previous sections of this Appendix, we work with $H^-_t := \xi_{L^{-1}_t}$ and $J^-_t := J_{L^{-1}_t}$, for all $t$ such that $L^{-1}_t<\infty$; otherwise the pair $H^-_t$ and $J^-_t$ are both assigned the value $\infty$. Furthermore, define
\[ U^-_i(x) = \mathbf{E}_{0,i}\Big[\int_0^\infty \mathbf{1}\big(H^-_t\le x\big)\,dt\Big], \quad x\ge 0. \]
Then the following theorem holds:

Theorem 29.
For all $i\in E$ and $x>0$,
\[ U^-_{J_t}(\xi_t)\,\mathbf{1}_{(t<\tau^-_0)}, \quad t\ge 0, \]
is a $\mathbf{P}_{x,i}$-martingale if and only if $\xi-\underline\xi$ is recurrent at zero; that is to say, the Markov process $(\xi-\underline\xi, J)$ is recurrent at $(0,k)$ for some (and hence all) $k\in E$.

We start by proving a preliminary lemma giving us an important fluctuation identity. To this end, define, for $q>0$,
\[ {}^qU^-_{i,j}(dx) = \mathbf{E}_{0,i}\Big[\int_0^\infty e^{-qL^{-1}_t}\,\mathbf{1}\big(H^-_t\in dx,\ J^-_t=j\big)\,dt\Big], \quad x\ge 0, \]
and set
\[ {}^qU^-_i(x) = \sum_{j\in E}{}^qU^-_{i,j}(x), \quad x\ge 0. \]
Recall that $e_q$ denotes an independent exponentially distributed random variable with rate $q>0$ and $\tau^-_0 := \inf\{t>0 : \xi_t<0\}$. Let $n_i$ be the excursion measure of $\xi-\underline\xi$ from the point $(0,i)$, $i\in E$. For convenience, let us assume that each of the subordinators $[L^{(k)}]^{-1}$, $k\in E$, has no drift component; the corresponding computation when this is not the case is a straightforward modification, e.g. in the spirit of the proof of Lemma VI.8 of [3]. If we mark the excursion from the minimum indexed by local time $t>0$ with an independent exponential random variable $e^{(t)}_q$, then the compensation formula for the Cox process of excursions of $\xi-\underline\xi$ from $0$ yields
\[ \mathbf{P}_{x,i}\big(\tau^-_0>e_q,\ J_{m_{e_q}}=j\big) = {}^qU^-_i(x)\,\Phi^-_j(q,0), \]
where $\Phi^-_j(q,0) := n_j(1-e^{-q\zeta})$ is a notational choice that, by analogy, respects the definition of $\Phi^+_j(\alpha,\beta)$ given in Section A.2; here $\zeta$ denotes the canonical excursion length. In conclusion, we have established the following lemma.

Lemma 30.

For all $i,j\in E$ and $x>0$,

(29) \[ \mathbf{P}_{x,i}\big(\tau^-_0>e_q,\ J_{m_{e_q}}=j\big) = {}^qU^-_i(x)\,\Phi^-_j(q,0). \]

We now return to the proof of Theorem 29.
Proof of Theorem 29.
Thanks to the Markov property, it suffices to prove that, for all $i\in E$ and $x>0$,
\[ \mathbf{E}_{x,i}\big[U^-_{J_t}(\xi_t)\,\mathbf{1}_{(t<\tau^-_0)}\big] = U^-_i(x). \]
To proceed, we use ideas from [6] and Chapter 13 of [24]. With the help of monotone convergence and Lemma 30, we have that
\begin{align*}
\mathbf{E}_{x,i}\big[U^-_{J_t}(\xi_t),\ t<\tau^-_0\big]
&= \lim_{q\downarrow 0}\mathbf{E}_{x,i}\bigg[\mathbf{1}_{(t<\tau^-_0)}\,\frac{\mathbf{P}_{\xi_t,J_t}(\tau^-_0>e_q)}{\sum_{j\in E}\Phi^-_j(q,0)}\bigg] \\
&= \lim_{q\downarrow 0}\frac{1}{\sum_{j\in E}\Phi^-_j(q,0)}\,\mathbf{P}_{x,i}\big[\tau^-_0>e_q \,\big|\, e_q>t\big] \\
&= \lim_{q\downarrow 0}\bigg[\frac{e^{qt}\,\mathbf{P}_{x,i}(\tau^-_0>e_q)}{\sum_{j\in E}\Phi^-_j(q,0)} - \frac{e^{qt}}{\sum_{j\in E}\Phi^-_j(q,0)}\int_0^t q e^{-qs}\,\mathbf{P}_{x,i}(\tau^-_0>s)\,ds\bigg] \\
&= U^-_i(x) - \lim_{q\downarrow 0}\frac{q}{\sum_{j\in E}\Phi^-_j(q,0)}\int_0^t \mathbf{P}_{x,i}(\tau^-_0>s)\,ds.
\end{align*}
The proof is complete as soon as we can show that the limit preceding the integral term is equal to zero. To this end, note that, for each $j\in E$,
\[ \lim_{q\downarrow 0}\frac{\Phi^-_j(q,0)}{q} = \Phi^{-\prime}_j(0,0) = \mathbf{E}_{0,j}\big[[L^{(j)}]^{-1}_1\big] \in (0,\infty]. \]
We want to show that the expectation on the right-hand side above is $+\infty$, as a consequence of the fact that $0$ is recurrent for $\xi-\underline\xi$. Appealing to (21), we have
\[ \mathbf{E}_{0,i}\big[e^{-\alpha m_{e_q}},\ J_{m_{e_q}}=k\big] = \Phi^-_k(q,0)\,\big[\kappa^-(q+\alpha,0)^{-1}\big]_{i,k}. \]
Duality dictates that
\[ \mathbf{E}_{0,i}\big[e^{-\alpha m_{e_q}},\ J_{m_{e_q}}=k\big] = \frac{\pi_k}{\pi_i}\,\hat{\mathbf{E}}_{0,k}\big[e^{-\alpha m_{e_q}},\ J_{m_{e_q}}=i\big], \]
which tells us that
\[ \frac1q\,\Phi^-_k(q,0)\,\big[\kappa^-(q+\alpha,0)^{-1}\big]_{i,k} = \frac{\pi_k}{\pi_i}\,\frac1q\,\hat\Phi^+_k(q,0)\,\big[\hat\kappa^+(q+\alpha,0)^{-1}\big]_{k,i}. \]
In turn, this means that $\lim_{q\downarrow 0}\Phi^-_k(q,0)/q$ and $\lim_{q\downarrow 0}\hat\Phi^+_k(q,0)/q$ are simultaneously (in)finite. Note that both have limits because they are Bernstein functions. Now recall from (23) that

(30) \[ c_k := \lim_{q\downarrow 0}\frac{\Phi^+_k(q,0)\,\hat\Phi^+_k(q,0)}{q}, \]

so it follows that $\lim_{q\downarrow 0}\Phi^-_k(q,0)/q = \infty$ if $\Phi^+_k(0,0)=0$. However, the assumption that $\xi-\underline\xi$ is recurrent at $0$ ensures that $\Phi^+_k(0,0)=0$ for all $k\in E$. In conclusion, under the assumption that $\xi-\underline\xi$ is recurrent at $0$,
\[ \lim_{q\downarrow 0}\frac{q}{\sum_{j\in E}\Phi^-_j(q,0)} = 0, \]
and subsequently the claim of the theorem is proved. □

A.8.
Conditioning to stay positive.
It turns out that the harmonic function $U^-_j(x)$, $j\in E$, $x>0$, is the $h$-function that appears in the Doob $h$-transform corresponding to the process $\xi$ conditioned to stay positive. Let $A\in\mathcal{F}_t := \sigma((\xi_s,J_s): s\le t)$ and assume that $0$ is recurrent for $\xi-\underline\xi$. Appealing to the Markov and lack-of-memory properties, we have
\[ \lim_{q\downarrow 0}\mathbf{P}_{x,i}\big(A,\ t<e_q \,\big|\, \tau^-_0>e_q\big) = \lim_{q\downarrow 0}\mathbf{E}_{x,i}\bigg[\mathbf{1}_{(A,\,t<\tau^-_0)}\,\frac{\mathbf{P}_{\xi_t,J_t}(\tau^-_0>e_q)}{\mathbf{P}_{x,i}(\tau^-_0>e_q)}\bigg]. \]
Next note that, for all $0<q<q_0$,
\[ \frac{\mathbf{P}_{\xi_t,J_t}(\tau^-_0>e_q)}{\mathbf{P}_{x,i}(\tau^-_0>e_q)} = \frac{{}^qU^-_{J_t}(\xi_t)}{{}^qU^-_i(x)} \le \frac{U^-_{J_t}(\xi_t)}{{}^{q_0}U^-_i(x)}. \]
Hence, by dominated convergence, we have that
\[ \lim_{q\downarrow 0}\mathbf{P}_{x,i}\big(A,\ t<e_q \,\big|\, \tau^-_0>e_q\big) = \mathbf{E}_{x,i}\bigg[\mathbf{1}_{(A,\,t<\tau^-_0)}\,\frac{U^-_{J_t}(\xi_t)}{U^-_i(x)}\bigg]. \]
In conclusion, we have the following theorem, which confirms the existence of the law of $(\xi,J)$ with $\xi$ conditioned to stay positive.

Theorem 31.

Suppose that $0$ is recurrent for $\xi-\underline\xi$. Then there exists a family of probability measures on the Skorokhod space, say $\mathbf{P}^\uparrow_{x,i}$, defined via the Doob $h$-transform
\[ \frac{d\mathbf{P}^\uparrow_{x,i}}{d\mathbf{P}_{x,i}}\bigg|_{\mathcal{F}_t} = \frac{U^-_{J_t}(\xi_t)}{U^-_i(x)}\,\mathbf{1}_{(t<\tau^-_0)}, \quad t\ge 0,\ i\in E,\ x>0, \]
such that, for all $A\in\mathcal{F}_t$,
\[ \mathbf{P}^\uparrow_{x,i}(A) = \lim_{q\downarrow 0}\mathbf{P}_{x,i}\big(A,\ t<e_q \,\big|\, \tau^-_0>e_q\big). \]

Remark 32.

Setting

(31) \[ U^+_i(x) = \mathbf{E}_{0,i}\Big[\int_0^\infty \mathbf{1}\big(H^+_t\le -x\big)\,dt\Big], \quad x<0, \]

the above discussion applied to the MAP $(-\xi,J)$ implies that $U^+_{J_t}(\xi_t)\,\mathbf{1}_{(t<\tau^+_0)}$ is a martingale and the $h$-transformed law
\[ \frac{d\mathbf{P}^\downarrow_{x,i}}{d\mathbf{P}_{x,i}}\bigg|_{\mathcal{F}_t} = \frac{U^+_{J_t}(\xi_t)}{U^+_i(x)}\,\mathbf{1}_{(t<\tau^+_0)}, \quad t\ge 0,\ i\in E,\ x<0, \]
is that of the MAP conditioned to stay negative.

For the proof of Lemma 43 we shall need that conditioned MAPs tend to infinity. In the context of Lévy processes many proofs exist for the analogue of the next lemma. Those proofs rest on involved pathwise constructions of the conditioned processes, which we do not want to repeat in the setting of MAPs. Instead we give a simple argument based on potential calculations only. The argument is inspired by more explicit calculations for spectrally negative Lévy processes in Lemma VII.12 of [3].
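The Doob $h$-transform mechanism in Theorem 31 has a well-known discrete toy model, which may help the reader fix ideas: the simple random walk killed on hitting $0$ has harmonic function $h(x)=x$, and the transformed kernel $p^\uparrow(x,y) = p(x,y)h(y)/h(x)$ is the walk conditioned to stay positive. A self-contained sketch (purely illustrative, unrelated to the MAP setting):

```python
from fractions import Fraction

def h(x):
    # harmonic for the killed walk: E[h(X_1), X_1 > 0 | X_0 = x] = h(x)
    return Fraction(x)

def p_up(x, y):
    """Doob h-transform of the simple random walk kernel p(x, y) = 1/2,
    killed on hitting 0."""
    if x <= 0 or y <= 0 or abs(x - y) != 1:
        return Fraction(0)
    return Fraction(1, 2) * h(y) / h(x)

# the transformed transition probabilities sum to one precisely because
# h is harmonic for the killed walk: (1/2)(x-1) + (1/2)(x+1) = x
for x in range(1, 6):
    print(x, p_up(x, x - 1) + p_up(x, x + 1))
```

Note that from $x=1$ the transformed walk steps up with probability one, mirroring how the conditioned process is repelled from the killing barrier.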
Proposition 33.
For each $x<0$ and $i\in E$, we have that $\mathbf{P}^\downarrow_{x,i}\big(\lim_{t\to\infty}\xi_t = -\infty\big) = 1$.

Proof. First note that, for all $z<x<0$ and $i\in E$, with the help of (12),

(32) \[ \mathbf{E}^\downarrow_{x,i}\Big[\int_0^\infty \mathbf{1}(\xi_t\ge z)\,dt\Big] = U^\downarrow\big((x,i),([z,0),E)\big) = \sum_{j\in E}\int_{[z,0)}\frac{U^+_j(y)}{U^+_i(x)}\,U^\dagger\big((x,i),(dy,\{j\})\big), \]

where $U^\dagger((x,i),(dy,\{j\}))$ is the potential measure of the process $(\xi,J)$ killed when $\xi$ first enters $(0,\infty)$. Since $U^+$ is locally bounded, the right-hand side can be estimated from above by $C\,U^+_i(x)^{-1}\,U^\dagger((x,i),([z,0],E))$, which is finite by Theorem 27 applied with $f\equiv 1$. We claim that

(33) \[ \mathbf{P}^\downarrow_{x,i}\big(\tau^-_z<\infty\big) = 1, \quad \text{for all } z<x<0,\ i\in E. \]

Otherwise, the trajectory of $\xi$ is bounded from below by $z$ with positive probability under $\mathbf{P}^\downarrow_{x,i}$ and, hence, $\int_0^\infty \mathbf{1}(\xi_t\ge z)\,dt = \infty$ with positive probability. But then the left-hand side of (32) would be infinite, giving a contradiction. Next, we show that

(34) \[ \lim_{z\to-\infty}\mathbf{P}^\downarrow_{z,i}\big(\xi_t<a \text{ for all } t\ge 0\big) = 1, \quad \text{for all } a<0,\ i\in E. \]

To see this, define $\tau^{[a,0)} = \inf\{t>0 : \xi_t\in[a,0)\}$. Use the change of measure in Remark 32 to note that, for $z<a$,
\[ \mathbf{P}^\downarrow_{z,i}\big(\text{there is } t\ge 0 \text{ with } \xi_t\ge a\big) = \mathbf{E}^\downarrow_{z,i}\big[\mathbf{1}(\tau^{[a,0)}<\infty)\big] = \mathbf{E}^\dagger_{z,i}\bigg[\frac{U^+_{J_{\tau^{[a,0)}}}(\xi_{\tau^{[a,0)}})}{U^+_i(z)}\,\mathbf{1}_{(\tau^{[a,0)}<\tau^+_0)}\,\mathbf{1}_{(\tau^{[a,0)}<\infty)}\bigg]. \]
Using the monotonicity of $z\mapsto U^+_i(z)$ from the definition (31), the right-hand side can be bounded from above by
\[ \max_{j\in E}\frac{U^+_j(a)}{U^+_i(z)}\,\mathbf{P}^\dagger_{z,i}\big(\tau^{[a,0)}<\tau^+_0\big). \]
Finally, since $\lim_{z\to-\infty}U^+_i(z) = +\infty$, (34) is proved. The claim of the proposition now follows from the strong Markov property applied at $\tau^-_z$, which is finite by (33), together with (34). □

A.9.
Laws of large numbers.
Similarly to the case of Lévy processes, it is known that a MAP $(\xi,J)$ grows linearly, meaning that

(35) \[ \lim_{t\to\infty}\frac{\xi_t}{t} = \mathbf{E}_{0,\pi}[\xi_1], \]

provided
\[ \mathbf{E}_{0,\pi}[\xi_1] = \sum_{i\in E}\pi_i\,\mathbf{E}_{0,i}[\xi_1] \]
is defined. Moreover, when $\mathbf{E}_{0,\pi}[\xi_1]$ is defined there is a trichotomy which dictates whether $(\xi,J)$ drifts to $+\infty$, drifts to $-\infty$ or oscillates, accordingly as $\mathbf{E}_{0,\pi}[\xi_1]$ is positive, negative or zero. Fix $k\in E$ and consider the MAP at the discrete set of return times of $J$ to $k$. Let $\sigma_0 = \inf\{t\ge 0 : J_t=k\}$ and inductively define, for $n\in\mathbb{N}$,

(36) \[ \sigma_{n+1} = \inf\{t>\sigma_n : J_t=k \text{ and } \exists\, s\in(\sigma_n,t) \text{ with } J_s\ne k\}. \]

The skeleton $(\xi_{\sigma_n})_{n\in\mathbb{N}}$ is a Markov chain. The following theorem relates the law of large numbers to moments of the underlying Lévy processes and transition jumps appearing in Proposition 2, and gives an identity that is crucial for the next section.

Theorem 34.

The following statements are equivalent for a MAP $(\xi,J)$:

(i) $\xi_1$ has finite absolute mean for one (any) starting distribution with $\xi_0=0$.

(ii) $\xi_{\sigma_1}$ has finite absolute mean when started in $(0,k)$.

(iii) The Lévy processes $\xi^i$ have finite absolute first moment and any $\Delta_{i,j}$ with $q_{i,j}>0$ has finite absolute first moment.

(iv) $\lim_{t\to\infty}\xi_t/t$ exists almost surely for one (any) starting distribution.

Under (i) to (iv) we have

(37) \[ \lim_{t\to\infty}\frac{\xi_t}{t} = \mathbf{E}_{0,\pi}[\xi_1] = \frac{\mathbf{E}_{0,k}[\xi_{\sigma_1}]}{\mathbf{E}_{0,k}[\sigma_1]}, \quad k\in E. \]

Proof.
Throughout the proof, we shall use the fact that, for any Lévy process $\{\eta_t : t\ge 0\}$,
\[ \mathbf{E}[|\eta_s|]<\infty \text{ for some } s>0 \iff \mathbf{E}[|\eta_t|]<\infty \text{ for all } t\ge 0 \iff \mathbf{E}\big[\sup_{s\le t}|\eta_s|\big]<\infty \text{ for all } t\ge 0. \]
See Theorem 25.18 of Sato [30] for a proof.

(iii) $\Rightarrow$ (i): Note that, for a fixed initial distribution $\mathbf{P}_{0,\mu}$, by Proposition 2 the distribution of $\xi_1$ is identical to the law of

(38) \[ \sum_{i\in E}\xi^i_{t_i(1)} + \sum_{i\ne j}\sum_{\ell=1}^{n_{i,j}(1)}\Delta^\ell_{i,j}, \]

where $t_i(1)$ denotes the time $J$ spends in state $i$, and $n_{i,j}(1)$ the number of jumps of $J$ from $i$ to $j$, over the time interval $[0,1]$ and, for each $i,j\in E$ with $i\ne j$, $\{\Delta^\ell_{i,j}:\ell\ge 1\}$ are iid copies of $\Delta_{i,j}$. Since the expected total number of jumps is finite, the triangle inequality shows that (iii) implies (i).

(i) $\Rightarrow$ (iii): By considering the event that the first jump away from the initial state $i\in E$ occurs after time 1, we have that $\mathbf{E}_{0,i}[|\xi_1|] \ge \mathbf{E}[|\xi^i_1|]\,e^{-|q_{i,i}|}$, thereby showing that each of the single-state Lévy processes $\xi^i$, $i\in E$, has finite absolute first moment. Now consider the event that the first jump of the Markov chain $J$ occurs before time 1 and the second jump occurs after time 1. In that case, we have
\[ \int_0^1 |q_{i,i}|e^{-|q_{i,i}|t}\sum_{j\ne i}\frac{q_{i,j}}{|q_{i,i}|}e^{-|q_{j,j}|(1-t)}\,\mathbf{E}\big[|\xi^i_t+\Delta_{i,j}+\xi^j_{1-t}|\big]\,dt \le \mathbf{E}_{0,i}[|\xi_1|] < \infty. \]
This tells us that, for each $j\in E$, Lebesgue almost everywhere in $[0,1]$,

(39) \[ \mathbf{E}\big[|\xi^i_t+\Delta_{i,j}+\xi^j_{1-t}|\big] < \infty. \]

For a given $j\in E$ with $j\ne i$, fix such a $t\in[0,1]$ and note that
\[ \mathbf{E}[|\Delta_{i,j}|] = \mathbf{E}\big[|\xi^i_t+\Delta_{i,j}+\xi^j_{1-t}-\xi^i_t-\xi^j_{1-t}|\big] \le \mathbf{E}\big[|\xi^i_t+\Delta_{i,j}+\xi^j_{1-t}|\big] + \mathbf{E}[|\xi^i_t|] + \mathbf{E}[|\xi^j_{1-t}|] < \infty, \]
where the final inequality follows from (39), the previously established fact that $\mathbf{E}[|\xi^i_1|]<\infty$ for $i\in E$, and the opening remark at the beginning of this proof.

(i) $\Rightarrow$ (ii): We can identify the distribution of $\xi_{\sigma_1}$ with that of

(40) \[ \sum_{i\in E}\xi^i_{t_i(\sigma_1)} + \sum_{i\ne j}\sum_{\ell=1}^{n_{i,j}(\sigma_1)}\Delta^\ell_{i,j}, \]

where $t_i(\sigma_1)$ denotes the time $J$ spends in state $i$, and $n_{i,j}(\sigma_1)$ the number of jumps of $J$ from $i$ to $j$, over the time interval $[0,\sigma_1]$ and, for each $i,j\in E$ with $i\ne j$, $\{\Delta^\ell_{i,j}:\ell\ge 1\}$ are iid copies of $\Delta_{i,j}$ (also independent of $n_{i,j}(\sigma_1)$, which depends only on the chain $J$). Note that $t_i(\sigma_1)$ is a random sum of an independent, geometrically distributed number of independent exponential random variables that depend only on $J$, so that $\mathbf{E}_{0,k}[|\xi^i_{t_i(\sigma_1)}|]<\infty$ whenever (iii) holds. Having already shown the equivalence of (i) and (iii), it follows from the triangle inequality and the distributional identity (40) that (i) implies (ii).

(ii) $\Rightarrow$ (iii): On the event that the sojourn of $J$ from $k$ consists of a first jump from $k$ to some $j\ne k$, followed by a jump back to $k$, written $\{k\to j\to k\}$, we can write
\[ \xi_{\sigma_1} = \xi^k_{e_{|q_{k,k}|}} + \Delta_{k,j} + \xi^j_{e_{|q_{j,j}|}} + \Delta_{j,k}, \]
where, for $i\in E$, $e_{|q_{i,i}|}$ is an independent exponentially distributed random variable with rate $|q_{i,i}|$ and the sum on the right-hand side consists of four independent random variables. This means that

(41) \[ \infty > \mathbf{E}_{0,k}[|\xi_{\sigma_1}|] \ge \mathbf{E}_{0,k}\big[|\xi_{\sigma_1}|\,\mathbf{1}_{\{k\to j\to k\}}\big] = \mathbf{E}\big[|\xi^k_{e_{|q_{k,k}|}}+\Delta_{k,j}+\xi^j_{e_{|q_{j,j}|}}+\Delta_{j,k}|\big], \]

if we denote by $\mathbf{E}$ the expectation over the product space carrying the two Lévy processes, the two transition jumps and the two exponential variables. From the aforesaid independence we can deduce (iii).
As a first step, integrate out the final three summands on the right-hand side of (41):
\[ \mathbf{E}_{0,k}\big[|\xi_{\sigma_1}|\,\mathbf{1}_{\{k\to j\to k\}}\big] = \int_{\mathbb{R}}\int_{\mathbb{R}}\int_{\mathbb{R}} \mathbf{E}\big[|\xi^k_{e_{|q_{k,k}|}}+a+b+c|\big]\,\mathbf{P}\big(\Delta_{k,j}\in da,\ \xi^j_{e_{|q_{j,j}|}}\in db,\ \Delta_{j,k}\in dc\big). \]
The left-hand side is finite, so there is some $x\in\mathbb{R}$ with $\mathbf{E}[|\xi^k_{e_{|q_{k,k}|}}+x|]<\infty$. Integrating out the independent exponential time and using that $\xi^k$ is a Lévy process implies that $\mathbf{E}[|\xi^k_1|]<\infty$ and $\mathbf{E}[|\xi^k_{e_{|q_{k,k}|}}|]<\infty$ (compare the remark at the beginning of the proof and note also that $\mathbf{E}[|\xi^k_1|]<\infty$ if and only if $\mathbf{E}[|\xi^k_1+x|]<\infty$ for any $x\in\mathbb{R}$). Similarly, we find that $\mathbf{E}[|\xi^j_1|]<\infty$ and $\mathbf{E}[|\xi^j_{e_{|q_{j,j}|}}|]<\infty$. The triangle inequality then implies
\[ \mathbf{E}\big[|\Delta_{k,j}+\Delta_{j,k}|\big] \le \mathbf{E}\big[|(\Delta_{k,j}+\Delta_{j,k})+(\xi^k_{e_{|q_{k,k}|}}+\xi^j_{e_{|q_{j,j}|}})|\big] + \mathbf{E}\big[|\xi^k_{e_{|q_{k,k}|}}|\big] + \mathbf{E}\big[|\xi^j_{e_{|q_{j,j}|}}|\big], \]
and the right-hand side is finite by (41) and the above. Hence, by the independence of the two transition jumps, we obtain $\mathbf{E}[|\Delta_{j,k}|]<\infty$ and $\mathbf{E}[|\Delta_{k,j}|]<\infty$. In total we have proved that $\Delta_{j,k}$, $\Delta_{k,j}$, $\xi^j$ and $\xi^k$ all have finite absolute mean, which confirms (iii).

(iv) $\Leftrightarrow$ (i)--(iii): First note that, under $\mathbf{P}_{0,k}$, $\sigma_1$ has finite first moment, so that
\[ \lim_{n\to\infty}\frac{\sigma_n}{n} = \lim_{n\to\infty}\frac{\sum_{i=1}^n(\sigma_i-\sigma_{i-1})}{n} = \mathbf{E}_{0,k}[\sigma_1]. \]
Assume that the limit $\lim_{t\to\infty}\xi_t/t$ exists almost surely. In this case the limit is equal to
\[ \lim_{n\to\infty}\frac{1}{n\,\mathbf{E}_{0,k}[\sigma_1]}\sum_{l=1}^n\big(\xi_{\sigma_l}-\xi_{\sigma_{l-1}}\big). \]
However, considering the strong law of large numbers for random walks (cf. Theorem 7.2 of [24]), the latter limit exists and is finite if and only if $\mathbf{E}_{0,k}[|\xi_{\sigma_1}|]<\infty$, in which case the limit above must equal $\mathbf{E}_{0,k}[\xi_{\sigma_1}]/\mathbf{E}_{0,k}[\sigma_1]$. It follows that (iv) implies (i)--(iii) and also that $\lim_{t\to\infty}\xi_t/t$ equals the second claimed limit in (37). Conversely, now assuming the equivalent statements (i), (ii) and (iii), in particular (ii), we can conclude that

(42) \[ \lim_{n\to\infty}\frac{\xi_{\sigma_n}}{\sigma_n} = \frac{\mathbf{E}_{0,k}[\xi_{\sigma_1}]}{\mathbf{E}_{0,k}[\sigma_1]} \]

almost surely, by the strong law of large numbers for random walks. Next, we need that

(43) \[ \mathbf{E}_{0,k}\Big[\sup_{t\in[0,\sigma_1]}|\xi_t|\Big] < \infty, \]

which can be seen as follows. By the triangle inequality and (40),
\[ \sup_{t\in[0,\sigma_1]}|\xi_t| \le \sum_{i\in E}\sup_{t\in[0,\sigma_1]}|\xi^i_t| + \sum_{i\ne j}\sum_{\ell=1}^{n_{i,j}(\sigma_1)}|\Delta^\ell_{i,j}|. \]
The expectation of the right-hand side is finite thanks to the independence of the $\xi^i$, $i\in E$, and $J$, the assumption (iii), and the remark at the very beginning of this proof. Now we use (43) to deduce
\[ \lim_{n\to\infty}\frac{\sup_{t\in[\sigma_{n-1},\sigma_n]}|\xi_t-\xi_{\sigma_{n-1}}|}{n} = 0, \]
which, in combination with (42), implies almost sure convergence of $\xi_t/t$ to a finite constant; this is (iv). It remains to verify (37) under any of the equivalent conditions (i) to (iv). The first equality is the law of large numbers (35) under a finite mean, and the second equality was already derived in the argument that (iv) implies (i)--(iii). □

A.10.
Tightness of the overshoots.
We now characterise when a general MAP has tight overshoots. That is to say, taking account of the conclusion in Theorem 28, we provide necessary and sufficient conditions for $\mathbf{E}_{0,\pi}[H^+_1]<\infty$, thereby giving a proof of Theorem 5. Although we have assumed the non-lattice condition in this Appendix, the results given below do not need it.
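As a concrete instance of tight overshoots in the drifting case: for a random walk with steps $e-c$, where $e\sim\mathrm{Exp}(1)$ and $c\in(0,1)$, any upward level crossing happens through the exponential part of a step, so by lack of memory the overshoot over any level is exactly $\mathrm{Exp}(1)$, and in particular tight. A quick simulation sketch (a toy example, not from the paper; the drift value is an arbitrary choice):

```python
import random

random.seed(3)

def overshoot(level, drift=0.5):
    """First overshoot over `level` for the walk with steps Exp(1) - drift."""
    s = 0.0
    while s <= level:
        s += random.expovariate(1.0) - drift
    return s - level

samples = [overshoot(50.0) for _ in range(5000)]
m = sum(samples) / len(samples)
print(m)  # memorylessness predicts an Exp(1) overshoot, so a mean near 1
```

In the oscillating case no such lack-of-memory shortcut exists, and tightness instead hinges on the balance of tails expressed by condition (TO) below.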
Theorem 35.
The MAP $(\xi,J)$ has tight overshoots if and only if $\xi_1$ has finite absolute first moment and

(i) $(\xi,J)$ drifts to $+\infty$; or

(ii) $(\xi,J)$ oscillates and satisfies

(TO) \[ \int_\kappa^\infty \frac{x\,\Pi([x,\infty))}{1+\int_0^x\int_y^\infty \Pi((-\infty,-z])\,dz\,dy}\,dx < \infty \]

for one (any) $\kappa>0$, where

(44) \[ \Pi := \sum_{i\ne j,\ i,j\in E} q_{i,j}\,\mathcal{L}(\Delta_{i,j}) + \sum_{i\in E}\Pi_i, \]

where $\Pi_i$ is the Lévy measure of the $i$-th Lévy process and $\mathcal{L}(\Delta_{i,j})$ is the probability distribution of the transition jump from $i$ to $j$ in Proposition 2.
In order to prove the theorem it suffices to analyse tightness of the overshoots of the discrete-time skeleton embedded in $(\xi,J)$ at the return times of the Markov chain to a fixed state $k\in E$. As in A.9 we let $\sigma_0 = \inf\{t\ge 0 : J_t=k\}$ and inductively define, for $n\in\mathbb{N}$,
\[ \sigma_{n+1} = \inf\{t>\sigma_n : J_t=k \text{ and } \exists\, s\in(\sigma_n,t) \text{ with } J_s\ne k\}, \]
so that $(\xi_{\sigma_n})_{n\in\mathbb{N}}$ is a Markov chain.

Lemma 36.
The MAP $(\xi,J)$ has tight overshoots if and only if the Markov chain $(\xi_{\sigma_n})_{n\in\mathbb{N}}$ has tight overshoots under $\mathbf{P}_{0,k}$.

Proof. For $x,s\ge 0$ set
\[ \rho_x = \inf\{t : \xi_t\ge x,\ J_t=k,\ J_{t-}\ne k\} \quad\text{and}\quad \sigma(s) = \inf\{t>s : J_t=k,\ J_{t-}\ne k\}. \]
For $c\ge 0$,
\[ \{\xi_{\rho_x}-x\ge 3c\} \subset \big\{\xi_{\tau^+_{x+c}}-(x+c)\ge c\big\} \cup \Big\{\sup_{s\in[\tau^+_{x+c},\,\sigma(\tau^+_{x+c})]}|\xi_s-\xi_{\tau^+_{x+c}}|\ge c\Big\}. \]
Indeed, in the case where the overshoot of the discrete-time process $(\xi_{\sigma_n})$ is larger than $3c$ and the overshoot of the continuous-time process over $x+c$ is smaller than $c$, one has $\xi_{\tau^+_{x+c}}\in[x+c,\,x+2c]$, so that the process has to oscillate between time $\tau^+_{x+c}$ and the next entry of $J$ into $k$ at time $\sigma(\tau^+_{x+c})$ by at least $c$. For every $i,j\in E$ and $x\ge 0$,
\[ \mathbf{P}_{0,i}\Big(\sup_{s\in[\tau^+_{x+c},\,\sigma(\tau^+_{x+c})]}|\xi_s-\xi_{\tau^+_{x+c}}|\ge c \,\Big|\, J_{\tau^+_{x+c}}=j\Big) = \mathbf{P}_{0,j}\Big(\sup_{s\in[0,\sigma(0)]}|\xi_s|\ge c\Big), \]
and since finite families and mixtures thereof are always tight, there exists a decreasing function $g_1:[0,\infty)\to[0,1]$ with limit $0$ such that
\[ \mathbf{P}_{0,i}\Big(\sup_{s\in[\tau^+_{x+c},\,\sigma(\tau^+_{x+c})]}|\xi_s-\xi_{\tau^+_{x+c}}|\ge c\Big) \le g_1(c), \quad \text{for } i\in E,\ x,c\ge 0. \]
If the continuous-time process has tight overshoots, then there is a function $g_2:[0,\infty)\to[0,1]$ with limit $0$ such that
\[ \mathbf{P}_{0,i}\big(\xi_{\tau^+_{x+c}}-(x+c)\ge c\big) \le g_2(c), \quad \text{for } i\in E,\ x,c\ge 0, \]
so that altogether
\[ \sup_{i\in E,\,x\ge 0}\mathbf{P}_{0,i}\big(\xi_{\rho_x}-x\ge 3c\big) \le g_1(c)+g_2(c), \]
and the overshoots of the discrete-time process are tight.

The converse direction follows analogously. Using that
\[ \{\xi_{\tau^+_x}-x\ge 3c\} \subset \{\xi_{\rho_x}-x\ge c\} \cup \Big\{\sup_{s\in[\tau^+_x,\,\sigma(\tau^+_x)]}|\xi_s-\xi_{\tau^+_x}|\ge c\Big\}, \]
one deduces that the continuous-time process has tight overshoots if the discrete-time process has tight overshoots under any of the laws $\mathbf{P}_{0,i}$. Further using that, for $i\in E$ and $x,c\ge 0$,
\[ \mathbf{P}_{0,i}\big(\xi_{\rho_x}-x\ge c\big) \le \mathbf{P}_{0,i}\Big(\sup_{s\in[0,\sigma(0)]}\xi_s\ge c\Big) + \mathbf{E}_{0,i}\Big[\mathbf{P}_{\xi_{\sigma(0)}\wedge x,\,k}\big(\xi_{\rho_x}-x\ge c\big)\Big], \]
one deduces that tightness of the overshoots of the discrete-time process under the law $\mathbf{P}_{0,k}$ induces tightness under any law $\mathbf{P}_{0,i}$ with $i\in E$. □

The following lemma is a consequence of Theorem 8 of [11].
Lemma 37.
A random walk has tight overshoots if and only if the distribution of its increments has finite absolute mean and it either drifts to infinity, or oscillates and the distribution $\Pi$ of its increments satisfies the integrability condition (TO).

The next result will be helpful later to separate big jumps from small jumps in the Lévy processes corresponding, through Proposition 2, to the MAP $(\xi,J)$.

Lemma 38.
Let $X$, $Y$ be real random variables with $Y$ square integrable. Then the distribution of $X$ satisfies (TO) if and only if the distribution of $X+Y$ satisfies (TO).

Proof. It suffices to show that $X+Y$ satisfies (TO) if $X$ satisfies (TO); for the reverse statement the same argument applies with $-Y$ in place of $Y$. We use that, for $z\ge 0$,
\[ \mathbf{P}(X+Y\ge z) \le \mathbf{P}(X\ge z/2) + \mathbf{P}(Y\ge z/2) \quad\text{and}\quad \mathbf{P}(X+Y\le -z) \ge \mathbf{P}(X\le -2z) - \mathbf{P}(Y\ge z), \]
to deduce that
\[ \int_\kappa^\infty \frac{x\,\mathbf{P}(X+Y\ge x)}{1+\int_0^x\int_y^\infty \mathbf{P}(X+Y\le -z)\,dz\,dy}\,dx \le \int_\kappa^\infty \frac{x\,\mathbf{P}(X\ge x/2)}{1+\int_0^x\int_y^\infty \big(\mathbf{P}(X\le -2z)-\mathbf{P}(Y\ge z)\big)_+\,dz\,dy}\,dx + \int_\kappa^\infty x\,\mathbf{P}(Y\ge x/2)\,dx. \]
The latter integral is finite since $Y$ has finite second moment, and the proof is finished once we show that the former integral is finite. One has
\[ \int_0^\infty\int_y^\infty \mathbf{P}(Y\ge z)\,dz\,dy \le \frac12\,\mathbf{E}[Y^2] < \infty, \]
and hence, after the substitution $x\mapsto 2x$ (and $z\mapsto 2z$ in the inner integral), there is a constant $c'>0$, depending only on $\mathbf{E}[Y^2]$, such that
\[ \int_\kappa^\infty \frac{x\,\mathbf{P}(X\ge x/2)}{1+\int_0^x\int_y^\infty \big(\mathbf{P}(X\le -2z)-\mathbf{P}(Y\ge z)\big)_+\,dz\,dy}\,dx \le c'\int_{\kappa/2}^\infty \frac{x\,\mathbf{P}(X\ge x)}{1+\int_0^x\int_y^\infty \mathbf{P}(X\le -z)\,dz\,dy}\,dx < \infty, \]
since $X$ satisfies (TO). □

Lemma 39.
Let $\Pi_i$, $i\in E$, be probability distributions on $\mathbb{R}$ and let $\{X_{i,n} : i\in E,\ n\in\mathbb{N}\}$ be a family of independent random variables with $X_{i,n}\sim\Pi_i$. Define
\[ Z = \sum_{n=1}^N X_{Y_n,n}, \]
with $(Y_n)_{n\in\mathbb{N}}$ an $E$-valued process and $N$ an $\mathbb{N}$-valued random variable, jointly independent of $(X_{i,n})$. If furthermore we suppose $\mathbf{E}[N^3]<\infty$ and $\mathbf{P}(i\in\{Y_1,\dots,Y_N\})>0$ for all $i\in E$, then the following properties are equivalent:

(i) The distribution of $Z$ satisfies (TO).

(ii) For one (any) sequence $(\rho_i)_{i\in E}$ of strictly positive numbers,
\[ \Pi_{\mathrm{sum}}(\cdot) := \sum_{i\in E}\rho_i\,\Pi_i(\cdot) \]
satisfies (TO).

(iii) The measure $\Pi_{\max}$ on $\mathbb{R}\setminus\{0\}$ defined by
\[ \Pi_{\max}([t,\infty)) = \max_{i\in E}\Pi_i([t,\infty)), \qquad \Pi_{\max}((-\infty,-t]) = \max_{i\in E}\Pi_i((-\infty,-t]), \quad t>0, \]
satisfies (TO).

Proof. The equivalence of (ii) and (iii) follows immediately from the definition of (TO), the estimate
\[ \min_{i\in E}\rho_i\,\Pi_{\max}([x,\infty)) \le \Pi_{\mathrm{sum}}([x,\infty)) \le \sum_{i\in E}\rho_i\,\Pi_{\max}([x,\infty)), \quad x\ge 0, \]
and its analogous version for the sets $(-\infty,-x]$. It remains to show that property (i) is equivalent to properties (ii) and (iii).

We start with proving that (iii) implies (i). Note that, for $x\ge 0$,

(45) \[ \mathbf{P}(Z\ge x\mid N) \le N\,\Pi_{\max}([x/N,\infty)). \]

Furthermore, for any $i\in E$ there exist $1\le n'_i\le n_i$ such that $\mathbf{P}(Y_{n'_i}=i,\ N=n_i)>0$. Hence, for all $\kappa_i\in[0,\infty)$,
\[ \mathbf{P}(Z\le -z) \ge \mathbf{P}(Y_{n'_i}=i,\ N=n_i)\,\mathbf{P}\Big(\sum_{n=1,\,n\ne n'_i}^{n_i} X_{Y_n,n}\le\kappa_i \,\Big|\, Y_{n'_i}=i,\ N=n_i\Big)\,\mathbf{P}(X_{i,1}\le -z-\kappa_i). \]
Now we fix $\kappa_i$ such that
\[ q_i := \mathbf{P}(Y_{n'_i}=i,\ N=n_i)\,\mathbf{P}\Big(\sum_{n=1,\,n\ne n'_i}^{n_i} X_{Y_n,n}\le\kappa_i \,\Big|\, Y_{n'_i}=i,\ N=n_i\Big) > 0. \]
We set $\kappa = \max\{\kappa_i : i\in E\}$ and $q = \min\{q_i : i\in E\}$ and get, for $z\ge 0$,

(46) \[ \mathbf{P}(Z\le -z) \ge q\,\max_{i\in E}\mathbf{P}(X_{i,1}\le -z-\kappa) = q\,\Pi_{\max}((-\infty,-z-\kappa]). \]

Combining this estimate with (45), we get that
\[ \int_{[\kappa,\infty)} \frac{x\,\mathbf{P}(Z\ge x)}{1+\int_0^x\int_y^\infty \mathbf{P}(Z\le -z)\,dz\,dy}\,dx \le \int_{[\kappa,\infty)} \frac{x\,\mathbf{E}\big[N\,\Pi_{\max}([x/N,\infty))\big]}{1+q\int_\kappa^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy}\,dx. \]
Since $\int_0^\kappa\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy$ is finite, there exists a constant $c>0$ such that, for $x\ge\kappa$,
\[ 1+q\int_\kappa^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy \ge c\,\Big(1+\int_0^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy\Big). \]
Hence, we have
\begin{align*}
\int_{[\kappa,\infty)} \frac{x\,\mathbf{P}(Z\ge x)}{1+\int_0^x\int_y^\infty \mathbf{P}(Z\le -z)\,dz\,dy}\,dx
&\le c^{-1}\sum_{n=1}^\infty \mathbf{P}(N=n)\,n\int_{(0,\infty)}\frac{x\,\Pi_{\max}([x/n,\infty))}{1+\int_0^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy}\,dx \\
&= c^{-1}\sum_{n=1}^\infty \mathbf{P}(N=n)\,n^3\int_{(0,\infty)}\frac{x\,\Pi_{\max}([x,\infty))}{1+\int_0^{nx}\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy}\,dx \\
&\le c^{-1}\,\mathbf{E}[N^3]\int_{(0,\infty)}\frac{x\,\Pi_{\max}([x,\infty))}{1+\int_0^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy}\,dx < \infty.
\end{align*}

Next, we consider the converse direction. In analogy to the derivation of (46), one sees that there are constants $\kappa,q>0$ with
\[ \mathbf{P}(Z\ge z) \ge q\,\Pi_{\max}([z+\kappa,\infty)), \quad z\ge 0. \]
Further, it is also the case that
\begin{align*}
\int_0^x\int_y^\infty \mathbf{P}(Z\le -z)\,dz\,dy
&\le \sum_{n=1}^\infty \mathbf{P}(N=n)\int_0^x\int_y^\infty n\,\Pi_{\max}((-\infty,-z/n])\,dz\,dy \\
&= \sum_{n=1}^\infty \mathbf{P}(N=n)\,n^3\int_0^{x/n}\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy \\
&\le \mathbf{E}[N^3]\int_0^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy,
\end{align*}
so that we have
\[ \int_{[2\kappa,\infty)}\frac{x\,\Pi_{\max}([x,\infty))}{1+\int_0^x\int_y^\infty \Pi_{\max}((-\infty,-z])\,dz\,dy}\,dx \le q^{-1}\big(\mathbf{E}[N^3]\vee 1\big)\int_{[2\kappa,\infty)}\frac{x\,\mathbf{P}(Z\ge x/2)}{1+\int_0^x\int_y^\infty \mathbf{P}(Z\le -z)\,dz\,dy}\,dx \le 4q^{-1}\big(\mathbf{E}[N^3]\vee 1\big)\int_{[\kappa,\infty)}\frac{x\,\mathbf{P}(Z\ge x)}{1+\int_0^x\int_y^\infty \mathbf{P}(Z\le -z)\,dz\,dy}\,dx < \infty. \]
The proof is now complete. □
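For intuition about condition (TO), consider a hypothetical two-sided Pareto-type law with tails $\mathbf{P}(X\ge x)=\min(1,x^{-a})$ and $\mathbf{P}(X\le -x)=\min(1,x^{-b})$ for $1<a,b<2$: the denominator in (TO) then grows like $x^{2-b}$, so the integrand behaves like $x^{b-a-1}$ and (TO) holds precisely when the right tail is the lighter one, $a>b$. The following numeric sketch of the truncated (TO) integral (an illustrative example distribution, not from the paper) makes this visible by comparing two truncation levels:

```python
def left_tail_double_integral(x, b):
    """D(x) = int_0^x int_y^inf min(1, z**-b) dz dy, in closed form for 1 < b < 2."""
    # inner integral: I(y) = (1 - y) + 1/(b-1) for y < 1, and y**(1-b)/(b-1) for y >= 1
    if x <= 1.0:
        return x - x * x / 2.0 + x / (b - 1.0)
    head = 0.5 + 1.0 / (b - 1.0)                              # int_0^1 I(y) dy
    tail = (x ** (2.0 - b) - 1.0) / ((b - 1.0) * (2.0 - b))   # int_1^x I(y) dy
    return head + tail

def to_integral(a, b, upper, kappa=1.0, steps=200000):
    """Trapezoid approximation of the (TO) integral on [kappa, upper],
    using a log-spaced grid."""
    total, prev = 0.0, None
    for i in range(steps + 1):
        x = kappa * (upper / kappa) ** (i / steps)
        f = x * x ** (-a) / (1.0 + left_tail_double_integral(x, b))
        if prev is not None:
            x0, f0 = prev
            total += 0.5 * (f0 + f) * (x - x0)
        prev = (x, f)
    return total

# a > b: integrand ~ x**(b-a-1) is integrable, truncated integrals stabilise
conv = [to_integral(1.8, 1.2, M) for M in (1e3, 1e6)]
# a < b: exponent b-a-1 = -0.4, the truncated integrals keep growing
div = [to_integral(1.2, 1.8, M) for M in (1e3, 1e6)]
print(conv, div)
```

Pushing the upper truncation from $10^3$ to $10^6$ barely moves the first pair of values but multiplies the second by a large factor, matching the heuristic exponent count.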
Proof of Theorem 35.
We again consider the process $\xi$ at the discrete set of return times to the state $k$. By Lemma 36, tightness of the overshoots of the MAP is equivalent to tightness of the overshoots of the discrete-time process $(\xi_{\sigma_n})_{n\in\mathbb{N}}$ under the law $\mathbf{P}_{0,k}$, which is the underlying measure in the following considerations. By the Markov property and the translation invariance of the MAP, the process $(\xi_{\sigma_n})$ has iid increments and starts in $0$, and thus is a random walk. By Lemma 37, $(\xi_{\sigma_n})$ has tight overshoots if and only if $\xi_{\sigma_1}$ has finite absolute mean and either

• $(\xi_{\sigma_n})$ drifts to infinity, or

• $(\xi_{\sigma_n})$ oscillates and the distribution of $\xi_{\sigma_1}$ satisfies (TO).

By Theorem 34, Formula (37), the latter properties are equivalent to the ones obtained when replacing the discrete-time process $(\xi_{\sigma_n})$ by the continuous-time process $(\xi_t)$ and keeping the (TO) property for $\xi_{\sigma_1}$.
To finish the proof it remains to show that, in the oscillating case with finite absolute mean, one has the equivalence
\[ \mathcal{L}(\xi_{\sigma_1}) \text{ satisfies (TO)} \iff \Pi \text{ from (44) satisfies (TO)}. \]
In order to do so, let us identify the distribution of $\xi_{\sigma_1}$. Enumerate the times at which either $\xi$ has jumps of modulus larger than $1$ or $J$ changes its state, in increasing order $0\le\tau_1<\tau_2<\dots$, and represent $\xi_{\sigma_1}$ as the telescopic sum

(47) \[ \xi_{\sigma_1} = \sum_{j:\,\tau_j\le\sigma_1}\big(\xi_{\tau_j}-\xi_{\tau_j-}\big) + \sum_{j:\,\tau_j\le\sigma_1}\big(\xi_{\tau_j-}-\xi_{\tau_{j-1}}\big), \]

with $\tau_0 := 0$. Using the representation from Proposition 2, we can identify the conditional distributions of the terms appearing in the former sum when conditioning on $J$ and the set of times $\{\tau_1,\tau_2,\dots\}$: if $\tau_j$ is triggered by a large jump of the Lévy process (meaning that the process $J$ does not switch states at that time), the conditional distribution of $\xi_{\tau_j}-\xi_{\tau_j-}$ is the normalised Lévy measure, restricted to jumps of modulus larger than one, of the Lévy process that is switched on by the modulating chain; if $\tau_j$ is triggered by a change of $J$, the conditional distribution of $\xi_{\tau_j}-\xi_{\tau_j-}$ is $\mathcal{L}(\Delta_{J(\tau_j-),J(\tau_j)})$, as in Proposition 2. The random number of $j$'s with $\tau_j\le\sigma_1$ has finite third moment, and applying Lemma 39 we get that
\[ \mathcal{L}\Big(\sum_{j:\,\tau_j\le\sigma_1}\big(\xi_{\tau_j}-\xi_{\tau_j-}\big)\Big) \text{ satisfies (TO)} \iff \sum_{i\ne j} q_{i,j}\,\mathcal{L}(\Delta_{i,j}) + \sum_{i\in E}\Pi_i|_{B(0,1)^c} \text{ satisfies (TO)}. \]
An elementary calculation furthermore shows that
\[ \sum_{i\ne j} q_{i,j}\,\mathcal{L}(\Delta_{i,j}) + \sum_{i\in E}\Pi_i|_{B(0,1)^c} \text{ satisfies (TO)} \iff \Pi \text{ from (44) satisfies (TO)}. \]
Combining the two equivalences with (47), the theorem is proved (compare Lemma 38) provided the remainder $\sum_{j:\,\tau_j \le \sigma_1} \big(\xi_{\tau_j-} - \xi_{\tau_{j-1}}\big)$ has a finite second moment. However, the latter term is just the value of a MAP started at $(0, k)$, evaluated at the time of the first return of $J$ to $k$, with an appropriately modified evolution: the Lévy measures are replaced by the old ones restricted to the unit ball, and the process has no discontinuity when $J$ switches states. Such a MAP obviously has a finite second moment. $\square$