Distributional representations and dominance of a Lévy process over its maximal jump processes
Bernoulli 22(4), 2016, 2325–2371. DOI: 10.3150/15-BEJ731
BORIS BUCHMANN, YUGUANG FAN and ROSS A. MALLER

Research School of Finance, Actuarial Studies & Statistics, Mathematical Sciences Institute, Australian National University, Australia. E-mail: [email protected]; [email protected]

School of Mathematics & Statistics, University of Melbourne, ARC Centre of Excellence for Mathematical and Statistical Frontiers, Australia. E-mail: [email protected]
Distributional identities for a Lévy process X_t, its quadratic variation process V_t and its maximal jump processes are derived, and used to make "small time" (as t ↓ 0) asymptotic comparisons between them. The representations are constructed using properties of the underlying Poisson point process of the jumps of X. Apart from providing insight into the connections between X, V, and their maximal jump processes, they enable investigation of a great variety of limiting behaviours. As an application, we study "self-normalised" versions of X_t, that is, X_t after division by sup_{0<s≤t} ∆X_s or by sup_{0<s≤t} |∆X_s|, as t ↓ 0, so that X is either comparable to, or dominates, its largest jump. The former situation tends to occur when the singularity at 0 of the Lévy measure of X is fairly mild (its tail is slowly varying at 0), while the latter situation is related to the relative stability or attraction to normality of X at 0 (a steeper singularity at 0). An important component in the analyses is the way the largest positive and negative jumps interact with each other. Analogous "large time" (as t → ∞) versions of the results can also be obtained.

Keywords: distributional representation; domain of attraction to normality; dominance; Lévy process; maximal jump process; relative stability
1. Introduction
We study relations between a Lévy process X = (X_t)_{t≥0}, its quadratic variation process V = (V_t)_{t≥0} and its maximal jump processes, with particular interest in how these processes, and how the positive and negative parts of the X process, interact. Representations of distributions related to these processes are calculated and used as a basis for making asymptotic (small time) comparisons of their behaviours.

This is an electronic reprint of the original article published by the ISI/BS in Bernoulli, 2016, Vol. 22, No. 4, 2325–2371. This reprint differs from the original in pagination and typographic detail.
A convenient way of proceeding is to derive identities for the distributions of X_t modified by subtracting a number of its largest jumps, or its jumps of largest modulus, up until time t, jointly with V_t, modified similarly. These identities are obtained by considering the Poisson point process of jumps of X, allowing for possible ties in the order statistics of the jumps.

The distributions thus obtained enable the study of a wide variety of small or large time kinds of behaviour of X. As an application, we investigate "self-normalised" versions of X_t, giving a comprehensive analysis of the behaviour of X_t / sup_{0<s≤t} ∆X_s and X_t / sup_{0<s≤t} |∆X_s| as t ↓ 0, and similarly with X_t replaced by |X_t|. Two extreme situations are considered: first, when X is of comparable size to a maximal jump process, for example, when X_t / sup_{0<s≤t} |∆X_s| converges in probability as t ↓ 0; or, alternatively, when X dominates a maximal jump process, in the sense that X_t / sup_{0<s≤t} |∆X_s| →P ∞ as t ↓ 0; and similarly with X_t replaced by |X_t|, and/or |∆X_s| replaced by ∆X_s. Complementary to these is the way the largest positive and negative jumps interact with each other.

Such results can be seen as continuations, in one way or another, of a growing literature in this area which has some classical antecedents. The original developments occurred in the context of random walks, where the concept of "trimming" by removing extremes from a sample sum has been studied extensively in the past. Our particular emphasis on the ratio of the process to its extremes goes back, in the random walk situation, to results of Darling [10] and Arov and Bobrov [2]. Later, Maller and Resnick [41] gave conditions for a random walk to be comparable in magnitude to its large values (a heavy-tailed situation), while Kesten and Maller [24, 25] studied the other end of the spectrum, when the sum dominates its large values (see Table 1 of [25] for a convenient summary).

Subsequent to these papers there was much development in the general area of trimmed sums, especially concerning heavy tailed distributions; see, for example, Csörgő, Haeusler and Mason [8], Berkes and Horváth [3], Berkes, Horváth and Schauer [4], and Griffin and Pruitt [21]. We mention in this context also results of Silvestrov and Teugels [48] concerning sums and maxima of random walks and triangular arrays, and Ladoucette and Teugels [31] for an insurance application. There are also recent results about the St. Petersburg game; Gut and Martin-Löf [22] give a "maxtrimmed" version of the game, while Fukker, Györfi and Kevei [18] determine the limit distribution of the St. Petersburg sum conditioned on its maximum. Csörgő and Simons [9] give a review of the later St. Petersburg literature. For almost sure versions of particular kinds of sum/max relationships, see Feller [16], Kesten and Maller [26] and Pruitt [43].

Studies of small time or local behaviour of Lévy processes go back to the work of Lévy and Khintchine [28, 29] in the 1930s. More recent work, relevant to our topic, includes that of Doney [11], who gives conditions for a Lévy process X to remain positive near 0 with probability approaching 1, and Andrew [1], who similarly analyses the behaviours of the positive and negative jump processes near 0. There is a connection also with results of Bertoin [6], who, in studying regularity of a Lévy process X at 0, was concerned with the dominance of the positive part of X over its negative part when X is of bounded variation. For further background along these lines, we refer to Doney [12].

Section 2 contains our distributional representations of a Lévy process X. Particular attention is paid to the possibility of tied jumps, related to atoms in the canonical measure of X. We make brief mention of some other possible applications of the methodology in the final discussion, Section 6.
2. Distributional representations
Our object of study is a real-valued Lévy process X = (X_t)_{t≥0} with canonical triplet (γ, σ², Π), thus having characteristic function Ee^{iθX_t} = e^{tΨ(θ)}, t ≥ 0, θ ∈ R, with characteristic exponent

Ψ(θ) := iθγ − σ²θ²/2 + ∫_{R∗} (e^{iθx} − 1 − iθx 1_{{|x|≤1}}) Π(dx).   (2.1)

Here, γ ∈ R, σ² ≥ 0, and Π is a Lévy measure on R, that is, a Borel measure on R∗ := R \ {0} such that ∫_{R∗} (x² ∧ 1) Π(dx) < ∞. Define measures Π^(+), Π^(−) and Π^|·| on (0, ∞) such that Π^(+) is Π restricted to (0, ∞), Π^(−) is Π(−·) restricted to (0, ∞), and Π^|·| := Π^(+) + Π^(−). The positive, negative and two-sided tails of Π are

Π⁺(x) := Π{(x, ∞)}, Π⁻(x) := Π{(−∞, −x)} and Π(x) := Π⁺(x) + Π⁻(x), x > 0.   (2.2)

We are only interested in small time behaviour of X_t, so we eliminate trivial cases by assuming Π(0+) = ∞ or Π⁺(0+) = ∞, as appropriate. Let ∆Π(y) := Π({y}), y ∈ R∗, and ∆Π(y) := Π(y−) − Π(y), y > 0. Denote the jump process of X by (∆X_t)_{t≥0}, where ∆X_t = X_t − X_{t−}, t > 0, with ∆X_0 ≡ 0. The quadratic variation process associated with X is V_t := σ²t + Σ_{0<s≤t} (∆X_s)², with V_0 ≡ 0.

Recall that X is of bounded variation if σ = 0 and ∫_{|x|≤1} |x| Π(dx) < ∞. If this is the case, (2.1) takes the form

Ψ(θ) = iθd_X + ∫_{R∗} (e^{iθx} − 1) Π(dx),

where d_X is the drift of X.

In deriving representations for the joint distributions of X_t, V_t and the r-th maximal jump processes, it is convenient to work with the processes having the r largest jumps, or the r jumps largest in modulus, subtracted. These "trimmed" processes are no longer Lévy processes, but we can give useful representations for their marginal distributions. The expressions are in terms of a truncated Lévy process, together with one or two Poisson processes and a Gamma random variable, all processes and random variables independent of one another.

For any integer r = 1, 2, ..., let ∆X^(r)_t and ∆X̃^(r)_t be the r-th largest positive jump and the r-th largest jump in modulus up to time t, respectively. Formal definitions of these, allowing for the possibility of tied values (we choose the order uniformly among the ties), are given in Section 2.1 below. "One-sided" and "modulus" trimmed versions of X are then defined as

^(r)X_t := X_t − Σ_{i=1}^r ∆X^(i)_t and ^(r)X̃_t := X_t − Σ_{i=1}^r ∆X̃^(i)_t,   (2.3)

with corresponding trimmed quadratic variation processes

^(r)V_t := V_t − Σ_{i=1}^r (∆X^(i)_t)² and ^(r)Ṽ_t := V_t − Σ_{i=1}^r (∆X̃^(i)_t)², t > 0.

Recall the definitions of the tails of Π in (2.2). Let

Π←(x) = inf{y > 0: Π(y) ≤ x}, x > 0,

be the right-continuous inverse of the nonincreasing function Π, and similarly for Π⁺,← and Π⁻,←. By convention, the inf of the empty set is taken as ∞. The following properties of the inverse function will be used frequently (see Resnick [45], Section 0.2). For each x, y > 0: Π←(x) ≤ y if and only if Π(y) ≤ x; Π(Π←(x)) ≤ x ≤ Π(Π←(x)−); and Π←(Π(x)) ≤ x; similarly for Π±. We refer to Appendix A in Fan [15] for more details.

We introduce four families of processes, indexed by v > 0, truncating jumps from sample paths of X_t and V_t, respectively. Let v, t > 0. When Π(0+) = ∞, we set

X̃^v_t := X_t − Σ_{0<s≤t} ∆X_s 1_{{|∆X_s| ≥ Π←(v)}} and Ṽ^v_t := V_t − Σ_{0<s≤t} (∆X_s)² 1_{{|∆X_s| ≥ Π←(v)}},

and, when Π⁺(0+) = ∞,

X^v_t := X_t − Σ_{0<s≤t} ∆X_s 1_{{∆X_s ≥ Π⁺,←(v)}} and V^v_t := V_t − Σ_{0<s≤t} (∆X_s)² 1_{{∆X_s ≥ Π⁺,←(v)}}.
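Since everything below is driven by Π← and the listed inverse-function properties, it can help to compute such an inverse numerically. A minimal sketch (the bisection routine and the power-law tail Π(y) = y^(−α) are our own illustrative choices, not taken from the paper):

```python
def tail_inverse(tail, x, hi=1e12, tol=1e-12):
    """Right-continuous inverse of a nonincreasing tail function:
    inf{y > 0 : tail(y) <= x}, located by bisection on [0, hi]."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if tail(mid) <= x:
            hi = mid   # mid already lies in {y : tail(y) <= x}; move down
        else:
            lo = mid
    return hi

# Illustrative power-law tail Pi(y) = y^(-alpha), whose exact inverse is x^(-1/alpha)
alpha = 2.0
tail = lambda y: y ** (-alpha) if y > 0 else float("inf")
print(tail_inverse(tail, 4.0))   # close to 4^(-1/2) = 0.5
```

The routine realises the defining property Π←(x) ≤ y if and only if Π(y) ≤ x directly, so it also works for step-function tails with atoms.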
Theorem 2.1. Let r ∈ N = {1, 2, 3, ...} and let S_r be a Gamma(r, 1) random variable. Suppose Y± = (Y±_t)_{t≥0} and Y = (Y_t)_{t≥0} are independent Poisson processes with EY±_1 = EY_1 = 1. Assume that X, S_r, Y⁺, Y⁻ and Y are independent as random elements.

(i) Assume Π(0+) = ∞. For each v > 0, let

κ±(v) := (Π(Π←(v)−) − v) · [∆Π(±Π←(v)) / ∆Π(Π←(v))] · 1_{{∆Π(Π←(v)) ≠ 0}}   (2.7)

and, for v > 0, t > 0, set

G̃^v_t := Π←(v)(Y⁺_{tκ⁺(v)} − Y⁻_{tκ⁻(v)}) and H̃^v_t := (Π←(v))²(Y⁺_{tκ⁺(v)} + Y⁻_{tκ⁻(v)}).   (2.8)

Then, for each t > 0, we have

(^(r)X̃_t, ^(r)Ṽ_t, |∆X̃^(r)_t|) ≐ (X̃^v_t + G̃^v_t, Ṽ^v_t + H̃^v_t, Π←(v))|_{v=S_r/t}.   (2.9)

(ii) Assume Π⁺(0+) = ∞. For each v > 0, let κ(v) := Π⁺(Π⁺,←(v)−) − v, and, for v > 0, t > 0, set

G^v_t := Π⁺,←(v) Y_{tκ(v)} and H^v_t := (Π⁺,←(v))² Y_{tκ(v)}.   (2.10)

Then, for each t > 0, we have

(^(r)X_t, ^(r)V_t, ∆X^(r)_t) ≐ (X^v_t + G^v_t, V^v_t + H^v_t, Π⁺,←(v))|_{v=S_r/t}.   (2.11)
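When Π⁺ has atoms, the quantity κ(v) = Π⁺(Π⁺,←(v)−) − v in (2.10) is typically nonzero, reflecting possible ties at the r-th largest jump. A small numerical illustration (the two-atom measure below is our own toy choice):

```python
# Toy atomic jump measure: Pi+ has mass 1 at x = 1 and mass 2 at x = 0.5.
# Its tail is Pi+(x) = 0 for x >= 1, 1 for 0.5 <= x < 1, and 3 for 0 < x < 0.5.

def tail_plus(x):
    if x >= 1.0:
        return 0.0
    if x >= 0.5:
        return 1.0
    return 3.0

def tail_plus_inverse(v):
    # inf{y > 0 : tail_plus(y) <= v}, evaluated directly on this step function
    if v >= 3.0:
        return 0.0   # every y > 0 qualifies, so the infimum is 0
    if v >= 1.0:
        return 0.5
    return 1.0

def tail_left_limit(x):
    # tail_plus(x-): left limit of the right-continuous tail at x
    return tail_plus(x - 1e-12)

def kappa(v):
    # kappa(v) = Pi+(Pi+,<-(v)-) - v, as in (2.10)
    return tail_left_limit(tail_plus_inverse(v)) - v

print(kappa(0.4))   # tail_plus(1-) - 0.4 = 1 - 0.4 = 0.6
print(kappa(1.5))   # tail_plus(0.5-) - 1.5 = 3 - 1.5 = 1.5
```

For a continuous tail, Π⁺(Π⁺,←(v)−) = v and κ(v) vanishes, so the correction process G^v in (2.10) disappears, as the theorem's indicator structure suggests.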
Remark 2.1.
Processes ^(r)X̃_t and ^(r)X_t are not Lévy processes; their increments are neither independent nor homogeneous in distribution. But the identities (2.9) and (2.11) express their marginal distributions in terms of distributions of Lévy processes, mixed in a sense according to their r-th largest jumps, with allowance made for ties. This opens the possibility for results obtained from analyses of the underlying Lévy processes to be transferred to the trimmed processes. We exemplify this procedure in a variety of ways in Sections 3 and 5.

As an immediate corollary of Theorem 2.1, the following identities will be useful.

Corollary 1. Using the notation in Theorem 2.1, we have, for x ∈ R, y ≥ 0, t > 0, r = 1, 2, ...:

(i) when Π(0+) = ∞,

P(^(r)X̃_t ≤ x|∆X̃^(r)_t|, ^(r)Ṽ_t ≤ y|∆X̃^(r)_t|²)   (2.12)
= ∫₀^∞ P(X̃^v_t + G̃^v_t ≤ xΠ←(v), Ṽ^v_t + H̃^v_t ≤ y(Π←(v))²) P(S_r ∈ t dv);

(ii) when Π⁺(0+) = ∞,

P(^(r)X_t ≤ x∆X^(r)_t, ^(r)V_t ≤ y(∆X^(r)_t)²)   (2.13)
= ∫₀^∞ P(X^v_t + G^v_t ≤ xΠ⁺,←(v), V^v_t + H^v_t ≤ y(Π⁺,←(v))²) P(S_r ∈ t dv).

In proving Theorem 2.1, we make use of the underlying Poisson point process (PPP) structure of the jumps of a Lévy process. We begin in Section 2.1 with a precise definition of the order statistics of a PPP when tied values may be present. In Section 2.2, we review basic properties of standard PPPs, and in Section 2.3 we construct the distribution of a Poisson random measure (PRM) from the jumps of a Lévy process through a series of marking and deterministic transformations. Also in Section 2.3, we derive the joint distribution of the trimmed point process using the point process order statistics. This machinery allows us to complete the proof of Theorem 2.1 in Section 2.4.
Introduce 𝒳 as the point measure associated with the jumps of X:

𝒳 = Σ_{s>0} δ_{(s, ∆X_s)}.

Then 𝒳 is a Poisson point process (PPP) on [0, ∞) × R∗ with intensity measure ds ⊗ Π(dx). Analogously, the PPPs of positive and negative jumps and of jumps in modulus associated with X are

𝒳⁺ = Σ_s 1_{(0,∞)}(∆X_s) δ_{(s, ∆X_s)}, 𝒳⁻ = Σ_s 1_{(0,∞)}(−∆X_s) δ_{(s, −∆X_s)}, 𝒳^|·| = 𝒳⁺ + 𝒳⁻ = Σ_s δ_{(s, |∆X_s|)},

having intensity measures ds ⊗ Π^(±)(dx) and ds ⊗ Π^|·|(dx), respectively. For t > 0, we consider restrictions of these processes to the time interval [0, t] by introducing

𝒳_t(·) := 𝒳([0, t] × R∗ ∩ ·) and 𝒳^{±,|·|}_t(·) := 𝒳^{±,|·|}([0, t] × (0, ∞) ∩ ·).

Assume Π(0+) = ∞ and t > 0. Our first task is to specify the points with maximum modulus in 𝒳_t. Let T̃^(1)(𝒳_t) be randomly chosen, independently of (X_t)_{t≥0}, according to the discrete uniform distribution on the set {0 ≤ s ≤ t: |∆X_s| = sup_{0≤u≤t} |∆X_u|}, which is almost surely finite. Then define ∆X̃^(1)_t = ∆X̃^(1)(𝒳_t) := ∆X_{T̃^(1)(𝒳_t)}. Define the maximum modulus trimmed point process on [0, t] × R∗ by

^(1)𝒳̃_t := 𝒳_t − δ_{(T̃^(1)(𝒳_t), ∆X̃^(1)_t)}.

Let r = 2, 3, .... Iteratively, we define T̃^(r)(𝒳_t) := T̃^(1)(^(r−1)𝒳̃_t) and ∆X̃^(r)_t := ∆X_{T̃^(r)(𝒳_t)}. The r-fold modulus trimmed point process is then defined by

^(r)𝒳̃_t := 𝒳_t − Σ_{i=1}^r δ_{(T̃^(i)(𝒳_t), ∆X̃^(i)_t)}.

In a similar way, under the assumption Π⁺(0+) = ∞, we can define the ordered pairs

(T^(1)(𝒳⁺_t), ∆X^(1)_t), (T^(2)(𝒳⁺_t), ∆X^(2)_t), (T^(3)(𝒳⁺_t), ∆X^(3)_t), ... ∈ [0, t] × (0, ∞),

such that ∆X^(1)_t ≥ ··· ≥ ∆X^(r)_t are the r largest order statistics of the positive jumps of X sampled on the time interval [0, t]. By subtracting the points corresponding to large jumps, analogously as we did for ^(r)𝒳̃_t, we then define the r-fold trimmed point process of positive jumps by

^(r)𝒳⁺_t := 𝒳⁺_t − Σ_{1≤i≤r} δ_{(T^(i)(𝒳⁺_t), ∆X^(i)_t)}.

For necessary material on point processes, we refer to Chapter 12 in Kallenberg [23] or Chapter 5 in Resnick [44].
In this section, we provide alternative constructions of 𝒳_t, ^(r)𝒳̃_t, 𝒳⁺_t, ^(r)𝒳⁺_t, this time starting from homogeneous processes.

Let (U_i), (U′_i) and (E_i) be independent, where (U_i) and (U′_i) are i.i.d. sequences of uniformly distributed random variables in (0, 1), and (E_i) is an i.i.d. sequence of exponentially distributed random variables with common parameter EE_1 = 1. Then S_r = Σ_{i=1}^r E_i is a Gamma(r, 1) random variable, r ∈ N.

For t > 0, we introduce

𝒱_t := Σ_{i≥1} δ_{(tU_i, S_i/t)} and 𝒱′_t := Σ_{i≥1} δ_{(tU_i, U′_i, S_i/t)}.

Then 𝒱_t and 𝒱′_t are homogeneous PPPs on [0, t] × (0, ∞) and [0, t] × (0, 1) × (0, ∞) with intensity measures ds ⊗ dv and ds ⊗ du′ ⊗ dv, respectively. For r ∈ N := {1, 2, 3, ...}, we define their r-fold trimmed counterparts by

^(r)𝒱_t := Σ_{i>r} δ_{(tU_i, S_i/t)} and ^(r)𝒱′_t := Σ_{i>r} δ_{(tU_i, U′_i, S_i/t)}.

When Π(0+) = ∞, we consider the transformation

(I, I, Π←): [0, t] × (0, 1) × (0, ∞) → [0, t] × (0, 1) × (0, ∞), (s, u′, v) ↦ (s, u′, Π←(v)).

Still assuming Π(0+) = ∞, by the Radon–Nikodym theorem there exist Borelian functions g±: (0, ∞) → (0, ∞) with g⁺ + g⁻ ≡ 1 and dΠ^(±) = g± dΠ^|·|; in particular,

Π±(x) = ∫_{(x,∞)} g±(y) Π^|·|(dy), x > 0.   (2.14)

We use g⁺ to return the sign to the process by a second transformation m: [0, t] × (0, 1) × (0, ∞) → [0, t] × R∗, defined by

m(s, u′, x) := (s, x), if u′ < g⁺(x); (s, −x), if u′ ≥ g⁺(x).   (2.15)

In summary, let 𝒱′^{m∘(I,I,Π←)}_t be the point process on [0, t] × R∗ which is the image of the composition of the above transformations applied to 𝒱′_t:

𝒱′_t ⟶ 𝒱′^{(I,I,Π←)}_t := Σ_{i≥1} δ_{(tU_i, U′_i, Π←(S_i/t))} ⟶ 𝒱′^{m∘(I,I,Π←)}_t := Σ_{i≥1} δ_{m(tU_i, U′_i, Π←(S_i/t))},

and similarly, for r ∈ N,

^(r)𝒱′_t ⟶ ^(r)𝒱′^{(I,I,Π←)}_t := Σ_{i>r} δ_{(tU_i, U′_i, Π←(S_i/t))} ⟶ ^(r)𝒱′^{m∘(I,I,Π←)}_t := Σ_{i>r} δ_{m(tU_i, U′_i, Π←(S_i/t))}.

When Π⁺(0+) = ∞, we can regard Π⁺,← as a transformation of (0, ∞) into (0, ∞), and we will consider the image measures of 𝒱_t and ^(r)𝒱_t under (I, Π⁺,←): [0, t] × (0, ∞) → [0, ∞) × (0, ∞), defined by

𝒱^{(I,Π⁺,←)}_t := Σ_{i≥1} δ_{(tU_i, Π⁺,←(S_i/t))} and ^(r)𝒱^{(I,Π⁺,←)}_t := Σ_{i>r} δ_{(tU_i, Π⁺,←(S_i/t))}.
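The image points Π⁺,←(S_i/t) are, in decreasing order, exactly the ordered positive jumps on [0, t]. In particular, for r = 1, the largest positive jump is distributed as Π⁺,←(E/t) with E unit exponential, so for the illustrative tail Π⁺(x) = x^(−α) (our assumption, not the paper's) it has the Fréchet law P(∆X^(1)_t ≤ x) = e^(−t x^(−α)). A Monte Carlo sketch:

```python
import math
import random

random.seed(7)

alpha, t = 1.0, 2.0
inv_tail = lambda v: v ** (-1.0 / alpha)        # Pi+,<-(v) for Pi+(x) = x^(-alpha)

def largest_jump():
    # S_1 = E_1 ~ Exp(1); the largest positive jump on [0, t] is Pi+,<-(S_1/t)
    return inv_tail(random.expovariate(1.0) / t)

n, x = 200_000, 3.0
empirical = sum(largest_jump() <= x for _ in range(n)) / n
exact = math.exp(-t * x ** (-alpha))            # Frechet CDF e^(-t x^(-alpha))
print(empirical, exact)                         # both close to 0.513
```

The agreement is a direct check of the marginal case of the homogeneous construction: P(Π⁺,←(E/t) ≤ x) = P(E ≥ tΠ⁺(x)) = e^(−tΠ⁺(x)).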
r-trimmed PPPs

In this section, the original point process 𝒳, its ordered jumps, and the trimmed point process are related to a corresponding standard version 𝒱.

Lemma 1.
Let t > 0 and r ∈ N.

(i) If Π(0+) = ∞, we have the following distributional equivalences:

𝒳_t ≐ 𝒱′^{m∘(I,I,Π←)}_t,   (2.16)

(T̃^(i)(𝒳_t), ∆X̃^(i)_t)_{i≥1} ≐ (m(tU_i, U′_i, Π←(S_i/t)))_{i≥1},   (2.17)

{(T̃^(i)(𝒳_t), ∆X̃^(i)_t)_{1≤i≤r}, ^(r)𝒳̃_t}   (2.18)
≐ {(m(tU_i, U′_i, Π←(S_i/t)))_{1≤i≤r}, ^(r)𝒱′^{m∘(I,I,Π←)}_t}.

(ii) If Π⁺(0+) = ∞, we have the following distributional equivalences:

𝒳⁺_t ≐ 𝒱^{(I,Π⁺,←)}_t,

(T^(i)(𝒳⁺_t), ∆X^(i)_t)_{i≥1} ≐ (tU_i, Π⁺,←(S_i/t))_{i≥1},

{(T^(i)(𝒳⁺_t), ∆X^(i)_t)_{1≤i≤r}, ^(r)𝒳⁺_t} ≐ {(tU_i, Π⁺,←(S_i/t))_{1≤i≤r}, ^(r)𝒱^{(I,Π⁺,←)}_t}.

Proof. (i) Assume Π(0+) = ∞, and introduce m̃: (0, 1) × (0, ∞) → R∗, m̃(u′, x) := x(1_{{u′ < g⁺(x)}} − 1_{{u′ ≥ g⁺(x)}}).
For t > 0 and x > 0, write 𝒳^{+,·<x}_t for the restriction of 𝒳⁺_t to points with magnitudes less than x, and 𝒳^{|·|<x}_t for the restriction of 𝒳_t to points with modulus less than x.

Theorem 1. Assume that X, (U_i), (U′_i), S_r, Y± = (Y±(t))_{t≥0}, Y = (Y(t))_{t≥0} are independent processes, with Y± and Y being standard Poisson processes.

(i) Assume Π(0+) = ∞. Then, for all t > 0, r ∈ N,

(|∆X̃^(r)_t|, ^(r)𝒳̃_t)   (2.21)
≐ (Π←(v), 𝒳^{|·|<Π←(v)}_t + Σ_{i=1}^{Y⁺(tκ⁺(v))} δ_{(tU_i, Π←(v))} + Σ_{i=1}^{Y⁻(tκ⁻(v))} δ_{(tU′_i, −Π←(v))})|_{v=S_r/t},

where κ±(v) are the quantities in (2.7).

(ii) Assume Π⁺(0+) = ∞. Then, for all t > 0, r ∈ N,

(∆X^(r)_t, ^(r)𝒳⁺_t) ≐ (Π⁺,←(v), 𝒳^{+,·<Π⁺,←(v)}_t + Σ_{i=1}^{Y(tκ(v))} δ_{(tU_i, Π⁺,←(v))})|_{v=S_r/t},

where κ(v) = Π⁺(Π⁺,←(v)−) − v.

Proof. Let t > 0, r ∈ N, and introduce a point measure 𝒱̃′_t as follows:

𝒱̃′_t := Σ_{i≥1} δ_{(tU_{i+r}, U′_{i+r}, (S_{i+r} − S_r)/t)}.

Then 𝒱̃′_t is independent of V := S_r/t, with 𝒱̃′_t ≐ 𝒱′_t. Observe that

E exp{−λV − ∫ f d{δ_{(0,1,V)} ⋆ 𝒱̃′_t}} = E exp{−λV − ∫₀^t ∫₀^1 ∫_V^∞ (1 − e^{−f(s,u′,v)}) ds du′ dv}   (2.22)
= E exp{−λV − ∫ f d𝒱̃′^{·≥V}_t},

for all nonnegative Borelian f and λ ≥ 0. Here 𝒱̃′^{·≥v}_t(·) := 𝒱̃′_t([0, t] × (0, 1) × [v, ∞) ∩ ·).

Assume Π(0+) = ∞. Combining (2.18) and (2.22) yields

(|∆X̃^(r)_t|, ^(r)𝒳̃_t) ≐ (Π←(V), {δ_{(0,1,V)} ⋆ 𝒱̃′_t}^{m∘(I,I,Π←)})   (2.23)
≐ (Π←(V), {𝒱̃′^{·≥V}_t}^{m∘(I,I,Π←)}).

Next, set 𝒴_t := {𝒱̃′^{·≥V}_t}^{m∘(I,I,Π←)}.

Recall the Lévy–Itô decomposition of X:

X_t = γt + σZ_t + X^(J)_t, t ≥ 0,   (2.28)

where σ ≥ 0, (Z_t)_{t≥0} is a standard Brownian motion, and (X^(J)_t)_{t≥0}, the jump process of X, is independent of (Z_t)_{t≥0}. It satisfies, locally uniformly in t ≥ 0,

X^(J)_t = a.s. lim_{ε↓0} (Σ_{0<s≤t} ∆X_s 1_{{|∆X_s|>ε}} − t ∫_{ε<|x|≤1} x Π(dx)).   (2.29)

We will prove part (i), the identity for the r-fold modulus trimmed Lévy process; trimming of positive jumps as in part (ii) follows similarly. Let t > 0 and r ∈ N be fixed.
By (2.28) and the definition of ^(r)X̃_t, the r-fold modulus trimmed Lévy process is

^(r)X̃_t = γt + σZ_t + X^(J)_t − Σ_{i=1}^r ∆X̃^(i)_t, t > 0.

Note that the jump process of ^(r)X̃_t and its quadratic variation are obtained by applying the summing functional to the r-fold modulus trimmed point process ^(r)𝒳̃ and to the squared jumps of ^(r)𝒳̃. Using (2.29), we can write

X^(J)_t − Σ_{i=1}^r ∆X̃^(i)_t = a.s. lim_{ε↓0} (∫_{[0,t]×{|x|>ε}} x ^(r)𝒳̃(ds, dx) − t ∫_{ε<|x|≤1} x Π(dx)).   (2.30)

The corresponding r-trimmed quadratic variation is simply

^(r)Ṽ_t = ∫_{[0,t]×R∗} x² ^(r)𝒳̃(ds, dx).

Recall from Lemma 1 and Theorem 1 that the distribution of ^(r)𝒳̃_t can be decomposed as the superposition of three independent point measures, as in (2.21). Splitting the integral in (2.30) into these components gives

a.s. lim_{ε↓0} (∫_{[0,t]×{|x|>ε}} x ^(r)𝒳̃(ds, dx) − t ∫_{ε<|x|≤1} x Π(dx))
≐ a.s. lim_{ε↓0} (∫_{[0,t]×{|x|>ε}} x 𝒳^{|·|<Π←(S_r/t)}(ds, dx) − t ∫_{ε<|x|≤1} x Π(dx)) + Π←(S_r/t)(Y⁺(tκ⁺(S_r/t)) − Y⁻(tκ⁻(S_r/t))).

A similar expression holds for ^(r)Ṽ_t. Thus, we conclude

(^(r)X̃_t, ^(r)Ṽ_t, |∆X̃^(r)_t|) ≐ {X̃^v_t + Π←(v)(Y⁺_{tκ⁺(v)} − Y⁻_{tκ⁻(v)}), Ṽ^v_t + (Π←(v))²(Y⁺_{tκ⁺(v)} + Y⁻_{tκ⁻(v)}), Π←(v)}_{v=S_r/t}.

This is (2.9) and completes the proof of part (i). □

This completes our derivation of the trimming identities. In the next sections, we turn to applications of them.

3. X comparable with its large jump processes

In this section, we apply Theorem 2.1 to complete a result of Maller and Mason [38] concerning the ratio of the process to its jump of largest magnitude. Note that when Π(0+) = ∞, we have |∆X̃^(1)_t| = sup_{0<s≤t} |∆X_s| > 0 a.s. for each t > 0; similarly, when Π⁺(0+) = ∞, ∆X^(1)_t = sup_{0<s≤t} ∆X_s > 0 a.s. for each t > 0.
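The two maximal jumps just recalled need not agree in sign: the largest-modulus jump may come from either tail. Largest positive and largest negative jumps on [0, t] are independent (they arise from restrictions of the jump PPP to disjoint half-lines), with distribution functions e^(−tΠ⁺(x)) and e^(−tΠ⁻(x)). For the illustrative pair Π⁺(x) = x^(−α), Π⁻(x) = c·x^(−α) (our own choice, not from the paper), the largest-modulus jump is negative with probability c/(1 + c), for every t > 0. A Monte Carlo sketch:

```python
import random

random.seed(3)

alpha, t, c = 1.0, 0.5, 0.25   # Pi+(x) = x^(-alpha), Pi-(x) = c * x^(-alpha): toy choice

def largest(scale):
    # Largest jump of a PPP with tail scale * x^(-alpha) on [0, t]:
    # invert scale * x^(-alpha) = E/t with E ~ Exp(1).
    return (scale * t / random.expovariate(1.0)) ** (1.0 / alpha)

n = 200_000
neg_wins = sum(largest(c) > largest(1.0) for _ in range(n)) / n
print(neg_wins, c / (1 + c))   # both close to 0.2
```

The closed form follows because W± := (largest jump)^(−α) are then exponential with rates t and ct, and P(W⁻ < W⁺) = c/(1 + c), independently of t.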
Recall that Π(x) is said to be slowly varying (SV) as x ↓ 0 if lim_{x↓0} Π(ux)/Π(x) = 1 for all u > 0.

Theorem 2. Suppose σ = 0 and Π(0+) = ∞. Then

X_t/∆X̃^(1)_t →P 1, as t ↓ 0,   (3.1)

iff Π(x) ∈ SV at 0 (so that X is of bounded variation) and X has drift 0. These imply

|∆X̃^(2)_t|/|∆X̃^(1)_t| →P 0, as t ↓ 0;   (3.2)

and conversely (3.2) implies Π(x) ∈ SV at 0.

For the proof, we need two preliminary lemmas. The first calculates a distribution related to the large jumps, and the second applies Theorem 2.1 to derive a useful inequality.

Lemma 2. Assume Π(0+) = ∞. Then for t > 0, 0 < u < 1,

P(|∆X̃^(2)_t| ≤ u|∆X̃^(1)_t|) = t ∫_{(0,∞)} e^{−tΠ(uΠ←(v))} dv.   (3.3)

A similar expression to (3.3) is true when Π⁺(0+) = ∞, with |∆X̃^(1)_t| and |∆X̃^(2)_t| replaced by ∆X^(1)_t and ∆X^(2)_t, and Π and Π← replaced by Π⁺ and Π⁺,←.

Proof. Assume Π(0+) = ∞ and take t > 0. We get from (2.17) that

(|∆X̃^(1)_t|, |∆X̃^(2)_t|) ≐ (Π←(E₁/t), Π←((E₁ + E₂)/t)),   (3.4)

where E₁ and E₂ are independent unit exponential random variables. Take 0 < u < 1 and, for v > 0, set y_{t,u}(v) := tΠ(uΠ←(v/t)). Then, in view of (3.4) and the inverse-function properties,

P(|∆X̃^(2)_t| ≤ u|∆X̃^(1)_t|) = P(Π←((E₁ + E₂)/t) ≤ uΠ←(E₁/t))
= P(E₁ + E₂ ≥ y_{t,u}(E₁))
= ∫_{(0,∞)} e^{−(y_{t,u}(v) − v)} e^{−v} dv = ∫_{(0,∞)} exp{−tΠ(uΠ←(v/t))} dv.

Changing the variable from v/t to v gives (3.3). The version for large jumps, rather than jumps large in modulus, is proved similarly. □

Lemma 3. Assume Π(0+) = ∞, and let a_t be any nonstochastic function in R. Then for t > 0 and 0 < u < 1/4,

4P(|^(1)X̃_t − a_t| > u|∆X̃^(1)_t|) ≥ P(|∆X̃^(2)_t| > 4u|∆X̃^(1)_t|).   (3.5)

Assuming Π⁺(0+) = ∞, the same inequality (3.5) holds with ^(1)X_t, ∆X^(1)_t and ∆X^(2)_t in place of ^(1)X̃_t, |∆X̃^(1)_t| and |∆X̃^(2)_t|.

Proof. Let E be an exponential random variable with EE = 1; thus, E ≐ S₁. Using the identity in (2.12) with r = 1, the left-hand side of (3.5) is, for u > 0, four times

∫₀^∞ P(|X̃^v_t + G̃^v_t − a_t| > u y_v) P(E ∈ t dv),   (3.6)

where we abbreviate y_v := Π←(v), v > 0. For each v > 0, let (X̄^v_t)_{t≥0} and (Ḡ^v_t)_{t≥0} be independent copies of (X̃^v_t)_{t≥0} and (G̃^v_t)_{t≥0}, with (Ḡ^v_t)_{t≥0} also independent of (X̄^v_t)_{t≥0}. Define the symmetrised process (Ŷ^v_t)_{t≥0} by

Ŷ^v_t = (X̃^v_t + G̃^v_t) − (X̄^v_t + Ḡ^v_t), t > 0,

with jump process ∆Ŷ^v_t = Ŷ^v_t − Ŷ^v_{t−}, t > 0. Then the integrand in (3.6) satisfies

4P(|X̃^v_t + G̃^v_t − a_t| > u y_v) = 2P(|X̃^v_t + G̃^v_t − a_t| > u y_v) + 2P(|X̄^v_t + Ḡ^v_t − a_t| > u y_v)
≥ 2P(|(X̃^v_t + G̃^v_t − a_t) − (X̄^v_t + Ḡ^v_t − a_t)| > 2u y_v)   (3.7)
= 2P(|Ŷ^v_t| > 2u y_v).

Substitute the inequality (3.7) in (3.6) and equate to the left-hand side of (3.5) to get

4P(|^(1)X̃_t − a_t| > u|∆X̃^(1)_t|) ≥ ∫₀^∞ 2P(|Ŷ^v_t| > 2u y_v) P(E ∈ t dv), u > 0.   (3.8)

Take u ∈ (0, 1/4). By Lévy's maximal inequality for the symmetric process Ŷ^v,

2P(|Ŷ^v_t| > 2u y_v) = 2 lim_{n→∞} P(|Σ_{i=1}^n (Ŷ^v_{it/n} − Ŷ^v_{(i−1)t/n})| > 2u y_v)
≥ lim_{n→∞} P(max_{1≤j≤n} |Ŷ^v_{jt/n} − Ŷ^v_{(j−1)t/n}| > 4u y_v)   (3.9)
≥ P(sup_{0<s≤t} |∆Ŷ^v_s| > 4u y_v).

Fix v > 0 with ∆Π(y_v) ≠ 0. Then G̃^v_t is nonzero. Its Lévy measure consists of point masses at ±y_v with magnitudes κ±(v), given by (2.7). Hence, it has tail

(Π(y_v−) − v) · [(∆Π(y_v) + ∆Π(−y_v))/∆Π(y_v)] · 1_{{x<y_v}} = (Π(y_v−) − v) 1_{{x<y_v}},

so that X̃^v + G̃^v has Lévy measure with modulus tail Π(x) − Π(y_v−) + Π(y_v−) − v = Π(x) − v for 0 < x < y_v. The symmetrisation Ŷ^v has Lévy tail being twice the magnitude of this. This result remains true when ∆Π(y_v) = 0, as G̃^v_t ≡ 0 and Π(y_v−) = v then.

We can now calculate the right-hand side of (3.9), and deduce from it that

2P(|Ŷ^v_t| > 2u y_v) ≥ 1 − e^{−2t(Π(4u y_v) − v)}   (3.10)
≥ 1 − e^{−t(Π(4u y_v) − v)}.

Finally, (3.8), (3.10) and Lemma 2 give

4P(|^(1)X̃_t − a_t| > u|∆X̃^(1)_t|) ≥ t ∫₀^∞ (e^{−tv} − e^{−tΠ(4u y_v)}) dv = P(|∆X̃^(2)_t| > 4u|∆X̃^(1)_t|).

This proves (3.5).
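For the illustrative tail Π(x) = x^(−α) (our assumption), the right-hand side of (3.3) evaluates in closed form: Π←(v) = v^(−1/α), so Π(uΠ←(v)) = u^(−α)v and t∫₀^∞ e^(−tu^(−α)v) dv = u^α, independently of t. A Monte Carlo check of the representation (3.4) against this value:

```python
import random

random.seed(1)

alpha, t, u = 1.5, 0.3, 0.5
inv_tail = lambda v: v ** (-1.0 / alpha)   # Pi<-(v) for Pi(x) = x^(-alpha)

n, hits = 200_000, 0
for _ in range(n):
    e1 = random.expovariate(1.0)
    e2 = random.expovariate(1.0)
    big = inv_tail(e1 / t)                 # |largest jump| on [0, t], as in (3.4)
    second = inv_tail((e1 + e2) / t)       # |second largest jump|
    hits += second <= u * big
prob = hits / n

print(prob, u ** alpha)                    # both close to 0.5^1.5, about 0.354
```

The t-independence of the answer is special to exact power-law tails: the ratio of the two largest jumps reduces to (E₁/(E₁ + E₂))^(1/α), and E₁/(E₁ + E₂) is uniform on (0, 1).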
To derive the version for ^(1)X_t, define the one-sided Lévy process X*_t having triplet (γ, σ², Π*(dx) = Π(dx)1_{(x>0)}), and let ∆X̃*^(r)_t be the jump of r-th largest modulus up until time t for (X*_t)_{t≥0}, r ∈ N. Then ∆X̃*^(r)_t = ∆X^(r)_t and ^(1)X_t = ^(1)X̃*_t = X*_t − ∆X̃*^(1)_t. Assuming Π⁺(0+) = ∞, inequality (3.5) with ^(1)X_t, ∆X^(1)_t and ∆X^(2)_t replacing ^(1)X̃_t, |∆X̃^(1)_t| and |∆X̃^(2)_t| then follows from (3.5) itself, applied to X*_t. □

Lemma 4. Assume Π(0+) = ∞. Then

|∆X̃^(2)_t|/|∆X̃^(1)_t| →P 0, as t ↓ 0,   (3.11)

implies Π(x) is SV at 0.

Proof. From (3.3), for 0 < u < 1, with y_v := Π←(v),

P(|∆X̃^(2)_t| > u|∆X̃^(1)_t|) = t ∫₀^∞ (e^{−tv} − e^{−tΠ(u y_v)}) dv   (3.12)
= ∫₀^∞ (e^{−v} − e^{−tΠ(u y_{v/t})}) dv.

Assume (3.11), so the integral on the right-hand side of (3.12) tends to 0 as t ↓ 0. Take any sequence t_k ↓ 0 and u > 0. By a Helly selection argument, there is a subsequence t_{k′} = t_{k′}(u) ↓ 0 such that t_{k′}Π(u y_{v/t_{k′}}) converges vaguely to g_u(v), as k′ → ∞, where g_u(v) is a monotone function of v. Since tΠ(u y_{v/t}) ≥ tΠ(y_{v/t}−) ≥ v, we have g_u(v) ≥ v. Fatou's lemma applied to (3.12) shows then that g_u(v) = v for v > 0, thus t_{k′}Π(u y_{v/t_{k′}}) → v, and since this is true for all subsequences we deduce

lim_{t↓0} tΠ(u y_{v/t}) = v, v > 0, 0 < u < 1.

Given x > 0 and v > 0, let t(x) = v/Π(x). Then y_{v/t(x)} = Π←(Π(x)) ≤ x, implying Π(u y_{v/t(x)}) ≥ Π(ux). So we get, for 0 < u < 1,

1 ≤ Π(ux)/Π(x) ≤ t(x)Π(u y_{v/t(x)})/v → v/v = 1, as x ↓ 0,

and Π ∈ SV at 0. □

Proof of Theorem 2. Observe that (3.1) is equivalent to

|^(1)X̃_t|/|∆X̃^(1)_t| →P 0, as t ↓ 0,

and this implies (3.11) by Lemma 3. Thus, by Lemma 4, Π ∈ SV at 0. Hence, ∫₀^1 Π(x) dx < ∞ and X is of bounded variation, with drift d_X. By, for example, Bertoin ([5], Proposition 11, page 167), X_t/t →P d_X as t ↓ 0, while, for any δ > 0,

P(sup_{0<s≤t} |∆X_s| > δt) = 1 − e^{−tΠ(δt)} → 0, as t ↓ 0,

since xΠ(x) → 0 as x ↓ 0 when ∫₀^1 Π(x) dx < ∞. Thus |∆X̃^(1)_t|/t →P 0.
But then

|X_t|/t = (|X_t|/|∆X̃^(1)_t|) · (|∆X̃^(1)_t|/t) →P (1)(0) = 0,

showing that d_X = 0. Conversely, (3.1) holds when Π ∈ SV at 0 and d_X = 0, as shown in Lemma 5.1 of Maller and Mason [38]. □

The next result follows by applying Theorem 2 to the Lévy process (Σ_{0<s≤t} |∆X_s|)_{t≥0}.

Corollary 2. Suppose σ = 0 and Π(0+) = ∞. Then X_t is of bounded variation and Σ_{0<s≤t} |∆X_s|/|∆X̃^(1)_t| →P 1, as t ↓ 0, iff Π(x) ∈ SV at 0.

As another corollary of Theorem 2, it is not hard to show that Π(x) ∈ SV at 0 implies tΠ(|X_t|) →D E as t ↓ 0, where E is a unit exponential random variable. The variance gamma model, widely used in financial modelling, has Lévy measure whose tail is slowly varying at 0 (Madan and Seneta ([35], page 519)). Our results for such processes provide useful intuition and, more specifically, may be of immediate use in applications, such as for estimation of Π or simulation, and so forth.

The next theorem gives a one-sided version of Theorem 2. Condition (3.13) reflects a kind of dominance of the positive part of X over its negative part. We defer the proof of Theorem 3 to the following section, where we study such dominance ideas in detail.

Theorem 3. Suppose Π⁺(0+) = ∞. Then

X_t/∆X^(1)_t →P 1, as t ↓ 0,   (3.13)

iff Π⁺(x) ∈ SV at 0, X is of bounded variation with drift 0, and lim_{x↓0} Π⁻(x)/Π⁺(x) = 0.

4. Comparing positive and negative jumps

In this section, we are concerned with comparing magnitudes of positive and negative jumps of X, in various ways. Define ∆X⁺_t = max(∆X_t, 0) and ∆X⁻_t = max(−∆X_t, 0), with (∆X⁺)^(1)_t = sup_{0<s≤t} ∆X⁺_s and (∆X⁻)^(1)_t = sup_{0<s≤t} ∆X⁻_s.

Theorem 4. Suppose Π±(0+) = ∞.

To prove the equivalence in (4.1), note that, for any λ > 0, the Laplace transform of the ratio on the left of (4.1) can be evaluated by conditioning on the value of (∆X⁺)^(1)_t. It is easily checked that A⁻(z)/z is nonincreasing for z > 0, so the last integral in (4.8) is not smaller than

(1 − e^{−tA⁻(z)/z}) λ⁺_t(z) = (1 − e^{−tA⁻(z)/z}) e^{−tΠ⁺(z)}.

Now choose t = 1/Π⁺(z) and let t ↓ 0 (so z ↓ 0) to get the right-hand relation in (4.1).

Conversely, assume the right-hand relation in (4.1).
Then the upper bound in (4.7) shows that the integral in (4.6) is no larger than

∫_{(0,∞)} (1 − e^{−λtA⁻(y/λ)/y}) λ⁺_t(dy).

This is a nondecreasing function of λ, so it suffices to show that it tends to 0 as t ↓ 0 for each fixed λ > 1. Then, since A⁻ is nondecreasing, for any z > 0, the integral is no larger than

∫_{[z,∞)} (1 − e^{−λtA⁻(y)/y}) λ⁺_t(dy) + λ⁺_t(z−) ≤ 1 − e^{−λtA⁻(z)/z} + e^{−tΠ⁺(z−)}.

Take t > 0, a > 0 and put z = Π⁺,←(a/t). Then the last expression is no larger than

1 − e^{−aλA⁻(z)/(zΠ⁺(z))} + e^{−a}.

Letting t ↓ 0, so z ↓ 0, then a → ∞, this tends to 0 by the right-hand relation in (4.1).

The equivalence in (4.2) is proved similarly to that in (4.1), by reversing the numerator and denominator and interchanging + and −, and noting that the left-hand relation in (4.2) holds if and only if the Laplace transform of the ratio on the left of (4.2) tends to 1 as t ↓ 0.

Then ρ(y) is nonincreasing, ρ(0+) = ∞, ρ(+∞) = 0, and ∫₀^1 yρ(y) dy < ∞. So ρ is a Lévy measure, and we can define a Lévy process (U_t)_{t≥0}, independent of (X_t)_{t≥0}, having Lévy characteristics (0, 0, ρ) and jump process ∆U_t := U_t − U_{t−}, t > 0.

Proof of Theorem 3. Assume Π⁺(0+) = ∞ and suppose first that (3.13) holds. Then the same proof as used for showing that (3.1) is equivalent to Π(x) ∈ SV at 0 shows here that Π⁺(x) ∈ SV at 0. This implies that the Lévy process of positive jumps is of bounded variation, and ∆X^(1)_t/t →P 0 as t ↓ 0. So by (3.13),

X_t/t = (X_t/∆X^(1)_t) · (∆X^(1)_t/t) →P 0.   (4.13)

Then σ = 0 and xΠ(x) → 0 as x ↓ 0, by Doney and Maller ([13], Theorem 2.1). Use the Lévy–Itô decomposition (2.28) (with σ = 0) to write X_t as the sum of γt and its compensated positive-jump and negative-jump parts, and let X^(−)_t be the part formed from the negative jumps:

X^(−)_t := a.s. lim_{ε↓0} (Σ_{0<s≤t} ∆X_s 1_{{∆X_s<−ε}} + t ∫_{ε<y≤1} y Π^(−)(dy)).

Since the positive-jump part, divided by t, converges in probability as t ↓ 0, so does X^(−)_t/t, and so, by Doney and Maller [13], Theorem 2.1 (see also Doney [11]), the integral ∫_{(x,1]} y Π^(−)(dy) has a finite limit as x ↓ 0.
This means that X^(−)_t, and hence X_t, are of bounded variation, with drift d_X = 0 by (4.13) and Lemma 4.1 of Doney and Maller [13]. So we can write

1 + o_P(1) = X_t / sup_{0<s≤t} ∆X_s, as t ↓ 0.

In addition, Π⁻(x) = o(Π⁺(x)) implies

∫₀^x Π⁻(y) dy = o(∫₀^x Π⁺(y) dy) = o(xΠ⁺(x)), as x ↓ 0,

and then (4.16) follows as in (4.1). Thus, we get (3.13) from (4.15). □

5. X dominating its large jump processes

In this section, we characterise divergences like

X_t / sup_{0<s≤t} |∆X_s| →P ∞, as t ↓ 0,   (5.1)

and similarly with |∆X_s| replaced by ∆X_s. We think of these kinds of conditions as expressing the "dominance" of X over its largest jump processes, at small times. These conditions will be shown to be related to the relative stability of the process X, and to its attraction to normality, as t ↓ 0. Relative stability is the convergence of the normed process to a finite nonzero constant which, by rescaling of the norming function, can be taken as ±1. Thus, we are concerned with the property

X_t/b_t →P ±1, as t ↓ 0,   (5.2)

where b_t > 0 is a nonstochastic function. When X_t is replaced by |X_t| in (5.1), we also bring into play the idea of X being in the domain of attraction of the normal distribution, as t ↓ 0: there are functions a_t ∈ R, b_t > 0, such that (X_t − a_t)/b_t →D N(0, 1) as t ↓ 0. Recall that Π(0+) = ∞ implies sup_{0<s≤t} |∆X_s| > 0 a.s. for each t > 0. The main result concerning relative stability is in Section 5.2, while Section 5.3 deals with 2-sided versions; the domain of attraction of the normal is needed there. Subsequential versions of the results are in Sections 5.4 and 5.5.

5.1. X staying positive near 0, in probability

Versions of truncated first and second moment functions we will use are

ν(x) = γ − ∫_{x<|y|≤1} y Π(dy) and V(x) = σ² + ∫_{0<|y|≤x} y² Π(dy), x > 0.   (5.3)

Variants of ν(x) and V(x) are Winsorised first and second moment functions, defined by

A(x) = γ + Π⁺(1) − Π⁻(1) − ∫_x^1 (Π⁺(y) − Π⁻(y)) dy   (5.4)

and

U(x) = σ² + 2 ∫₀^x y Π(y) dy, for x > 0.   (5.5)

A(x) and U(x) are continuous for x > 0.
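These moment functions are straightforward to compute for a concrete measure, and the identity U(x) = V(x) + x²Π(x) of (5.6)–(5.7) below can be checked numerically. A sketch with the illustrative one-sided choice Π(dy) = αy^(−α−1) dy, α = 1/2, σ = 0 (our assumption, not from the paper), whose tail is Π(x) = x^(−α):

```python
# Numerical check of U(x) = V(x) + x^2 * Pi(x) for the illustrative
# one-sided Levy measure Pi(dy) = alpha * y^(-alpha-1) dy, tail Pi(x) = x^(-alpha).
alpha = 0.5
tail = lambda x: x ** (-alpha)
density = lambda y: alpha * y ** (-alpha - 1.0)

def integrate(f, a, b, n=100_000):
    """Midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x = 0.25
V = integrate(lambda y: y * y * density(y), 0.0, x)   # V(x) with sigma = 0
U = 2.0 * integrate(lambda y: y * tail(y), 0.0, x)    # U(x) = 2 * int_0^x y Pi(y) dy
print(U, V + x * x * tail(x))                         # both close to 2 * x**1.5 / 1.5
```

Here both sides reduce analytically to 2x^(2−α)/(2 − α), so the quadrature doubles as a sanity check of the closed form.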
Using Fubini's theorem, we can show that

A(x) = ν(x) + x(Π^+(x) − Π^-(x)) (5.6)

and

U(x) = V(x) + x^2 (Π^+(x) + Π^-(x)) = V(x) + x^2 Π(x). (5.7)

These functions are finite for all x > 0, as a consequence of the property ∫_{0<|y|≤1} y^2 Π(dy) < ∞ of the Lévy measure Π, which further implies that lim_{x↓0} x^2 Π(x) = 0, and, as is easily verified,

lim_{x↓0} x ν(x) = lim_{x↓0} x A(x) = 0. (5.8)

Also, lim_{x→∞} A(x)/x = lim_{x→∞} U(x)/x^2 = 0. We have the obvious inequality U(x) ≥ σ^2 + x^2 Π(x) ≥ x^2 Π(x), x ≥ 0. This can be amplified to

U(x) ≥ σ^2 + x^2 Π(x−) ≥ x^2 Π(x−), x > 0. (5.9)

Another calculation gives (recall ∆Π(x) = Π{x})

ν(x) − x(∆Π(x) − ∆Π(−x)) = A(x) − x(Π^+(x−) − Π^-(x−)). (5.10)

Lemma 5. Suppose σ^2 > 0. Then X_t/√t →_D N(0, σ^2) and P(X_t > 0) → 1/2, as t ↓ 0.

Proof. The asymptotic normality of X_t/√t when σ^2 > 0 is standard, and it implies lim_{t↓0} P(X_t > 0) = 1/2. □

Next, we quote the (slightly modified) theorem originally due to Doney [11]. It shows that X remains positive with probability approaching 1 iff X dominates its large negative jumps, and explicit equivalences for this are given in terms of the functions A(x), U(x) and the negative tail of Π. The latter conditions reflect the positivity of X in that the function A(x) remains positive for small values of x; and A(x) dominates U(x) and the negative tail of Π in certain ways. Recall the notation ∆X^+_t = max(∆X_t, 0), ∆X^-_t = max(−∆X_t, 0), and (∆X^±)^(1)_t = sup_{0<s≤t} ∆X^±_s.

Theorem 5. Suppose Π^+(0+) = ∞. (i) Suppose also that Π^-(0+) > 0. Then the following are equivalent:

lim_{t↓0} P(X_t > 0) = 1; (5.11)

X_t / (∆X^-)^(1)_t →_P ∞, as t ↓ 0; (5.12)

σ^2 = 0 and lim_{x↓0} A(x) / (x Π^-(x)) = ∞; (5.13)

lim_{x↓0} A(x) / √(U(x) Π^-(x)) = ∞; (5.14)

there is a nonstochastic nondecreasing function ℓ(x) > 0, which is slowly varying at 0, such that

X_t / (t ℓ(t)) →_P ∞, as t ↓ 0. (5.15)

(ii) Suppose X is spectrally positive, so Π^-(x) = 0 for x > 0.
Then (5.11) is equivalent to

σ^2 = 0 and A(x) ≥ 0 for all small x, (5.16)

and this happens if and only if X is a subordinator. Furthermore, we then have A(x) ≥ 0, not only for small x, but for all x > 0.

Remark 2. We adopt the convention that (5.12) is taken to hold when (5.11) holds but sup_{0<s≤t} ∆X^-_s = 0; this is the case when Π^-(0+) < ∞.

Lemma 6. If Π^-(0+) > 0, then

limsup_{x↓0} A(x) / √(Π^-(x)) < ∞. (5.17)

If Π^-(0+) = 0 and Π^+(0+) > 0, then

limsup_{x↓0} A(x) / √(Π^+(x)) < ∞. (5.18)

Proof of Lemma 6. (i) Assume Π^-(0+) > 0, and suppose there are x_k ↓ 0 as k → ∞ such that

A(x_k)/√(Π^-(x_k)) = [γ + Π^+(1) − Π^-(1) − ∫_{x_k}^1 Π^+(y) dy + ∫_{x_k}^1 Π^-(y) dy] / √(Π^-(x_k)) → ∞.

Since Π^-(0+) > 0, we deduce from this that

[−Π^-(1) + ∫_{x_k}^1 Π^-(y) dy] / √(Π^-(x_k)) → ∞.

Thus, integrating by parts, [−x_k Π^-(x_k) + ∫_{(x_k,1]} y Π^-(dy)] / √(Π^-(x_k)) → ∞. But, by the Cauchy–Schwarz inequality, this ratio is bounded above by √(∫_{(x_k,1]} y^2 Π^-(dy)) ≤ √(V(1)) < ∞, a contradiction, so (5.17) is proved.

(ii) Suppose Π^-(0+) = 0 and Π^+(0+) > 0. Then, for 0 < x < 1,

A(x)/√(Π^+(x)) = [γ + Π^+(1) − ∫_x^1 Π^+(y) dy] / √(Π^+(x)) ≤ (γ + Π(1)) / √(Π^+(x)),

and since Π^+(0+) > 0, the right-hand side remains bounded as x ↓ 0, so (5.18) is proved. □

Proof of Theorem 5. Theorem 5 only differs from Theorem 1 in Doney [11] (and his remark following the theorem, regarding part (ii) of our Theorem 5) in that he assumes a priori that σ^2 = 0. Clearly, (5.11), (5.12) and (5.15) imply this by Lemma 5. (5.14) also implies σ^2 = 0. To see this, suppose on the contrary that σ^2 > 0. Then U(x) ≥ σ^2 for all x ≥ 0, and (5.14) would give lim_{x↓0} A(x)/√(Π^-(x)) = ∞, contradicting (5.17). □

We have the following subsequential version of Theorem 5. We omit the proof, which is along the lines of Doney's proof, together with similar ideas as in Theorem 9.

Theorem 6. Suppose Π^+(0+) = ∞. (i) Suppose also that Π^-(0+) > 0. Then the following are equivalent: there is a nonstochastic sequence t_k ↓ 0 such that

P(X_{t_k} > 0) → 1; (5.19)

there is a nonstochastic sequence t_k ↓ 0 such that

X_{t_k} / (∆X^-)^(1)_{t_k} →_P ∞, as k → ∞; (5.20)

limsup_{x↓0} A(x) / √(U(x) Π^-(x)) = ∞. (5.21)

(ii) Suppose X is spectrally positive, that is, Π^-(x) = 0 for all x > 0.
Then (5.19) is equivalent to lim_{t↓0} P(X_t > 0) = 1, thus to (5.16); equivalently, X is a subordinator and A(x) ≥ 0 for all x > 0.

Remark 3. We get equivalences for

X_t / (∆X^+)^(1)_t →_P −∞

(or the subsequential version) by applying Theorem 5 (or Theorem 6) with X replaced by −X.

In the next two subsections, we characterise when X dominates its large positive jumps and its jumps large in modulus, while remaining positive in probability, and when |X| dominates its jumps large in modulus. These kinds of behaviour require more stringent conditions on X, namely, relative stability or attraction to normality, in the respective cases.

Recall that X is said to be relatively stable (RS) at 0 if (5.2) holds. X is positively relatively stable (PRS) at 0 if (5.2) holds with a "+" sign, and negatively relatively stable (NRS) at 0 if (5.2) holds with a "−" sign. In either case, the function b_t > 0 may be taken regularly varying with index 1 as t ↓ 0, and there is a measurable nonstochastic function b_t > 0 such that

|X_t| / b_t →_P 1, as t ↓ 0, (5.22)

iff X ∈ RS at 0, equivalently, iff

σ^2 = 0 and lim_{x↓0} |A(x)| / (x Π(x)) = ∞. (5.23)

The following conditions characterise the convergence in (5.2) (Kallenberg [23], Theorem 15.14): for all x > 0,

lim_{t↓0} t Π(x b_t) = 0, lim_{t↓0} t A(x b_t)/b_t = ±1, lim_{t↓0} t U(x b_t)/b_t^2 = 0. (5.24)

Obvious modifications of these characterise convergence through a subsequence t_k in (5.2).

Next is our main result relating "one-sided" dominance to positive relative stability. The identity (2.12) supplies a key step in the proof. Write ∆X~^(1)_t for the jump of X of largest modulus in (0, t].

Theorem 7. Assume Π^+(0+) = ∞. Then the following are equivalent:

X_t / (∆X^+)^(1)_t →_P ∞, as t ↓ 0; (5.25)

X_t / |∆X~^(1)_t| →_P ∞, as t ↓ 0; (5.26)

σ^2 = 0 and lim_{x↓0} A(x) / (x Π(x)) = ∞; (5.27)

X ∈ PRS at 0; (5.28)

lim_{x↓0} A(x) / √(U(x) Π(x)) = ∞; (5.29)

lim_{x↓0} x A(x) / U(x) = ∞. (5.30)

Before proving the theorem, we record the following moment formulae.
Recall that X~_{vt} is defined in (2.4).

Lemma 7. When Π^←(v) < 1 and t > 0:

t^{-1} E X~_{vt} = ν(Π^←(v)) − Π^←(v)(∆Π(Π^←(v)) − ∆Π(−Π^←(v))) (5.31)
= A(Π^←(v)) − Π^←(v)(Π^+(Π^←(v)−) − Π^-(Π^←(v)−)).

For all t > 0, v > 0,

E(X~_{vt})^2 = t ( σ^2 + ∫_{|x| < Π^←(v)} x^2 Π(dx) ) + (E X~_{vt})^2. (5.32)

Proof. Let (U_t)_{t≥0} be a Lévy process with triplet (γ_U, σ_U^2, Π_U). Provided the participating integrals are finite (see Example 25.11 in Sato [47]), we have

E U_t = t ( γ_U + ∫_{|y|>1} y Π_U(dy) ) and E(U_t)^2 = t ( σ_U^2 + ∫_{R*} y^2 Π_U(dy) ) + (E U_t)^2.

Apply these to X~_{vt}, with triplet as in (2.5), to get, when Π^←(v) < 1 and t > 0,

t^{-1} E X~_{vt} = γ − ∫_{Π^←(v) ≤ |x| ≤ 1} x Π(dx)
= γ − ∫_{Π^←(v) < |x| ≤ 1} x Π(dx) − Π^←(v)(∆Π(Π^←(v)) − ∆Π(−Π^←(v))),

which gives the first equation in (5.31). For the second equation in (5.31), use (5.10). (5.32) is proved similarly. □

Proof of Theorem 7. Assume Π^+(0+) = ∞ throughout. Case (i). Suppose Π^-(0+) > 0.

(5.25) ⟹ (5.26): Assume (5.25). This implies lim_{t↓0} P(X_t > 0) = 1, so by Theorem 5, (5.12) holds. (5.12) together with (5.25) implies (5.26), because |∆X~^(1)_t| = max((∆X^+)^(1)_t, (∆X^-)^(1)_t).

(5.26) ⟹ (5.27): Assume (5.26). Then lim_{t↓0} P(X_t > 0) = 1, so (5.13) holds. Since Π^-(0+) > 0, (5.13) implies lim_{x↓0} A(x)/x = ∞; in particular, A(x) > 0 for all small x. Since lim_{t↓0} P(X_t > 0) = 1, Lemma 5 in Doney [11] gives

U(x) ≤ 3 x A(x) for all small x, 0 < x ≤ x_0, say. (5.33)

Without loss of generality, assume x_0 < 1. From (5.26), we then have

^(1)X~_t / |∆X~^(1)_t| →_P ∞, as t ↓ 0,

so

lim_{t↓0} P( ^(1)X~_t ≤ a |∆X~^(1)_t| ) = 0 for some a > 0. (5.34)

(In fact, this holds for all a > 0. But it will be enough to assume (5.34).) Without loss of generality, take a ≤ 1. Abbreviate Π^←(v) to y_v throughout this proof.
Then by (2.12), we can write

P( ^(1)X~_t ≤ a |∆X~^(1)_t| ) = ∫_0^∞ P( X~_{vt} + G~_{vt} ≤ a y_v ) P(E ∈ t dv), (5.35)

where E is a unit exponential random variable. By (2.7) and (2.8),

|E G~_{vt}| = y_v |E Y^+_{tκ^+(v)} − E Y^-_{tκ^-(v)}|
≤ t y_v (Π(y_v−) − v) · ((∆Π(y_v) + ∆Π(−y_v)) / ∆Π(y_v)) · 1{∆Π(y_v) ≠ 0} (5.36)
≤ t y_v Π(y_v−) ≤ t U(y_v)/y_v (by (5.9)) ≤ 3 t A(y_v) (by (5.33)),

and similarly

Var(G~_{vt}) ≤ t y_v^2 Π(y_v−). (5.37)

With x_0 as in (5.33), keep v ≥ Π(x_0), so y_v ≤ x_0 < 1. Then

E X~_{vt} = t (A(y_v) − y_v (Π^+(y_v−) − Π^-(y_v−))) (by (5.31))
≤ t (A(y_v) + y_v Π^-(y_v−)) (5.38)
≤ 4 t A(y_v) (by (5.9) and (5.33)).

Apply (5.36) and (5.38) to obtain from (5.35)

P( ^(1)X~_t ≤ a |∆X~^(1)_t| ) (5.39)
≥ ∫_{Π(x_0)}^∞ P( X~_{vt} − E X~_{vt} + G~_{vt} − E G~_{vt} ≤ a y_v − 7 t A(y_v) ) P(E ∈ t dv).

For t > 0 and a as in (5.34), define

b_t := sup{ x > 0: A(x)/x > a^2/(56 t) }, (5.40)

with b_0 := 0. Recall that lim_{x↓0} A(x)/x = ∞, lim_{x→∞} A(x)/x = 0, and A(x) is continuous. So 0 < b_t < ∞, b_t is strictly increasing, b_t ↓ 0 as t ↓ 0, and

t A(b_t)/b_t = a^2/56. (5.41)

Assume t is small enough for b_t ≤ x_0, and keep v < Π(b_t). Then y_v ≥ b_t, and so t A(y_v) ≤ a^2 y_v/56. This implies 7 t A(y_v) ≤ a y_v/2. Thus, by Chebyshev's inequality and (5.39),

P( ^(1)X~_t ≤ a |∆X~^(1)_t| ) ≥ ∫_{Π(x_0)}^{Π(b_t)} P( X~_{vt} − E X~_{vt} + G~_{vt} − E G~_{vt} ≤ a y_v/2 ) P(E ∈ t dv) (5.42)
≥ ∫_{Π(x_0)}^{Π(b_t)} ( 1 − 4 (Var(X~_{vt}) + Var(G~_{vt})) / (a^2 y_v^2) ) P(E ∈ t dv).

Also,

Var(X~_{vt}) + Var(G~_{vt}) ≤ t (V(y_v) + y_v^2 Π(y_v−)) (by (5.32) and (5.37))
≤ 2 t U(y_v) (see (5.7) and (5.9)) (5.43)
≤ 6 t y_v A(y_v) (by (5.33), since y_v ≤ x_0) ≤ a^2 y_v^2/8.

The last inequality holds since y_v ≥ b_t. Hence, from (5.42),

P( ^(1)X~_t ≤ a |∆X~^(1)_t| ) ≥ t ∫_{Π(x_0)}^{Π(b_t)} e^{−tv} dv / 2 = e^{−tΠ(x_0)} (1 − e^{−t(Π(b_t) − Π(x_0))}) / 2.
Since the left-hand side tends to 0 as t ↓ 0, we conclude that

t Π(b_t) = a^2 b_t Π(b_t) / (56 A(b_t)) → 0, as t ↓ 0. (5.44)

Now take λ > 1. Then

b_{λt}/b_t = 56 λ t A(b_{λt}) / (a^2 b_t)
= 56 λ t A(b_t) / (a^2 b_t) + 56 λ t (A(b_{λt}) − A(b_t)) / (a^2 b_t)
= λ + 56 λ t ∫_{b_t}^{b_{λt}} (Π^+(y) − Π^-(y)) dy / (a^2 b_t)
= λ + O(t Π(b_t)) ((b_{λt} − b_t)/b_t) = λ + o(b_{λt}/b_t).

Thus, b_t is regularly varying with index 1 as t ↓ 0. Also, (5.44) implies A(b_t)/(b_t Π(b_t)) → ∞ as t ↓ 0. From those we obtain (5.27) as follows. Given small x > 0, choose t = t(x) so that b_{t−} ≤ x ≤ b_{t+}. Then, for any ε ∈ (0, 1), b_{t(1−ε)} ≤ x ≤ b_{t(1+ε)}, while b_{t(1+ε)} ∼ (1 + ε) b_t ∼ (1 + ε) b_{(1−ε)t} / (1 − ε) as t ↓ 0. So

A(x) = A(b_{t(1−ε)}) + ∫_{b_{t(1−ε)}}^x (Π^+(y) − Π^-(y)) dy
≥ A(b_{t(1−ε)}) − b_{t(1+ε)} Π(b_{t(1−ε)})
≥ (1 + o(1)) A(b_{t(1−ε)}) (by (5.44)).

Hence, as x ↓ 0,

A(x)/(x Π(x)) ≥ (1 + o(1)) · ( A(b_{t(1−ε)}) / (b_{t(1−ε)} Π(b_{t(1−ε)})) ) · ( b_{t(1−ε)}/b_{t(1+ε)} ) → ∞, (5.45)

and (5.27) is proved.

(5.27) ⟺ (5.28) is in Theorem 2.2 of Doney and Maller [13].

(5.28) ⟹ (5.29): (5.28) implies A(b_t)/√(U(b_t) Π(b_t)) → ∞ by (5.24), and then (5.29) follows from the regular variation of b_t (noted prior to (5.22)), by similar arguments as we used in proving (5.45) from (5.44).

(5.29) ⟺ (5.30): (5.29) implies X ∈ PRS at 0, so b_t A(b_t)/U(b_t) → ∞ by (5.24), and b_t is regularly varying with index 1 at 0. Then (5.30) follows by similar arguments as we used in proving (5.45) from (5.44). Conversely, (5.30) implies (5.29) because U(x) ≥ x^2 Π(x).

In the reverse direction, we will show that (5.29) ⟹ (5.27) ⟹ (5.26) ⟹ (5.25).

(5.29) ⟹ (5.27): (5.29) implies (5.14), hence σ^2 = 0 by Theorem 5. Then (5.27) follows from (5.29) since U(x) ≥ x^2 Π(x).

(5.27) ⟹ (5.26): Assume (5.27). This implies X ∈ PRS, so X_t/b_t →_P +1 as t ↓ 0 for some nonstochastic b_t > 0. By (5.24), lim_{t↓0} t Π(ε b_t) = 0 for all ε > 0.
This implies P( sup_{0<s≤t} |∆X_s| > ε b_t ) → 0 for every ε > 0, that is, |∆X~^(1)_t| = o_P(b_t). So we get (5.26).

(5.26) ⟹ (5.25) is true since |∆X~^(1)_t| ≥ (∆X^+)^(1)_t. So we have shown the equivalence of (5.25)–(5.30) for case (i).

Case (ii). Suppose Π^-(0+) = 0. By part (ii) of Theorem 5, each of (5.25)–(5.28) implies X is a subordinator (with drift) and A(x) ≥ 0 for x ≥ 0. (5.25) and (5.26) are the same thing in this case.

(5.26) ⟹ (5.27): Assume (5.26). Since X is a subordinator, we can write

A(x) = d_X + ∫_0^x Π^+(y) dy, x ≥ 0,

where d_X ≥ 0 is the drift of X and ∫_0^x Π^+(y) dy < ∞. The latter implies lim_{x↓0} x Π^+(x) = 0. Of course σ^2 = 0, and if d_X > 0 then (5.27) is immediate, so assume d_X = 0. As in (5.38), we get E X~_{vt} ≤ t A(y_v), and (5.36) and (5.37) remain true. Since Π^+(0+) = ∞,

lim_{x↓0} A(x)/x ≥ ∫_0^1 liminf_{x↓0} Π^+(xy) dy = ∞.

Define b_t again by (5.40). Then the same working as in case (i) gives t Π(b_t) → 0 and b_t regularly varying with index 1, so again we get (5.27).

(5.27) ⟺ (5.28) is in Theorem 2.2 of Doney and Maller [13] in this case also; their theorem only requires Π(0+) > 0. □

The domain of attraction of the normal distribution, as t ↓ 0, appears in the next result, which is a corollary to Theorem 7. We say X ∈ D(N) at 0 if there are functions a_t ∈ R, b_t > 0, such that (X_t − a_t)/b_t →_D N(0, 1) (a standard normal random variable) as t ↓ 0. When a_t may be taken as 0, we write X ∈ D_0(N) (no centering required). The following condition characterises the domain of attraction of the normal at 0 (Doney and Maller [13], Theorem 2.5):

lim_{x↓0} U(x)/(x^2 Π(x)) = ∞; (5.46)

in fact, D(N) (at 0) equals D_0(N) (at 0) (Maller and Mason [38], Theorem 2.4). A characterisation for D_0(N) at 0 (equivalent to (5.46)) is

lim_{x↓0} U(x) / (x |A(x)| + x^2 Π(x)) = ∞. (5.47)

The following conditions are also equivalent to X_t/b_t →_D N(0, 1) (Kallenberg [23], Theorem 15.14): for all x > 0,

lim_{t↓0} t Π(x b_t) = 0, lim_{t↓0} t A(x b_t)/b_t = 0, lim_{t↓0} t U(x b_t)/b_t^2 = 1.
(5.48)

Obvious modifications of these characterise X_t/b_t →_D N(0, 1) through a subsequence t_k ↓ 0.

Corollary 3 (Corollary to Theorem 7). Assume Π^+(0+) = ∞. Then the following are equivalent: there is a nonstochastic function c_t > 0 such that

V_t/c_t →_P 1, as t ↓ 0; (5.49)

V_t / sup_{0<s≤t} (∆X_s)^2 →_P ∞, as t ↓ 0; (5.50)

X is in the domain of attraction of the normal distribution, as t ↓ 0.

Proof. V_t is a subordinator with drift d_V = σ^2 and Lévy measure Π_V, where Π_V(x) = Π(√x) 1{x > 0}. Let the triplet of V_t be (γ_V, 0, Π_V(·)). Then d_V = γ_V − ∫_{0<y≤1} y Π_V(dy). Thus, in obvious notation,

A_V(x) = γ_V + Π_V(1) − ∫_x^1 Π_V(y) dy = d_V + ∫_0^x Π_V(y) dy
= σ^2 + 2 ∫_0^{√x} y Π(y) dy = U(√x), x > 0.

Hence,

A_V(x) / (x Π_V(x)) = U(√x) / ((√x)^2 Π(√x))

tends to ∞ iff (5.46) holds. By Theorem 7, these are equivalent to (5.49) and (5.50), and (5.46) characterises the domain of attraction of the normal, as noted. □

Remark 4. (i) Another interesting kind of "self-normalisation" of a Lévy process is to divide X_t by √V_t, possibly after removal of one or the other kind of maximal jump. See, for example, Maller and Mason [36, 39]. Our methods can be used to extend these results in a variety of directions, but we omit further details here.

(ii) Relative stability of X is directly related to the stability of the "one-sided" and "two-sided" passage times over power law boundaries defined by

T_b(r) := inf{ t ≥ 0: X_t > r t^b }, r ≥ 0, and T*_b(r) := inf{ t ≥ 0: |X_t| > r t^b }, r ≥ 0,

when 0 ≤ b < 1. (Relative stability of T_b(r) or T*_b(r) cannot obtain when b ≥ 1; see Griffin and Maller [20].) Griffin and Maller [20] show that, then, T_b(r) is relatively stable as r ↓ 0, in the sense that T_b(r)/C(r) →_P 1 as r ↓ 0 for a nonstochastic function C(r) > 0, iff X ∈ PRS, while T*_b(r) is relatively stable as r ↓ 0, in the sense that T*_b(r)/C(r) →_P 1 as r ↓ 0 for a nonstochastic C(r) > 0, iff X ∈ RS.
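The key computation in the proof of Corollary 3, A_V(x) = U(√x), can be checked by quadrature. The sketch below is illustrative only (the truncated stable-like measure and its parameters are hypothetical choices, not from the paper); it builds the tail of Π_V as Π(√x) and compares the two sides with σ^2 = 0.

```python
import math

ALPHA, CP, CM = 0.6, 1.0, 0.5   # hypothetical truncated stable-like measure

def tail(x):                     # two-sided tail of Pi on 0 < x <= 1
    return (CP + CM) * (x**-ALPHA - 1.0) / ALPHA

def integrate(f, a, b, n=50000): # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def U(x):                        # Winsorised second moment of X, sigma^2 = 0
    return 2.0 * integrate(lambda y: y * tail(y), 0.0, x)

def A_V(x):                      # Winsorised mean of V: drift sigma^2 = 0 plus
    return integrate(lambda y: tail(math.sqrt(y)), 0.0, x)  # int_0^x Pi_V(y) dy

for x in (0.04, 0.25, 0.81):
    assert abs(A_V(x) - U(math.sqrt(x))) < 1e-3
print("A_V(x) = U(sqrt(x)) verified numerically")
```

The agreement reflects nothing more than the substitution y = u^2 in the integral defining A_V, which is exactly the step used in the proof.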
Further connections made in Griffin and Maller [20] are that X ∈ PRS iff the supremum process X̄_t := sup_{0<s≤t} X_s is PRS at 0.

Theorem 8. Assume Π(0+) = ∞. Then the following are equivalent:

|X_t| / |∆X~^(1)_t| →_P ∞, as t ↓ 0; (5.51)

lim_{x↓0} (x |A(x)| + U(x)) / (x^2 Π(x)) = ∞; (5.52)

lim_{x↓0} U(x)/(x |A(x)| + x^2 Π(x)) = +∞, or lim_{x↓0} |A(x)|/(x Π(x)) = +∞; (5.53)

X ∈ D_0(N) ∪ RS at 0. (5.54)

Proof. Assume Π(0+) = ∞.

(5.51) ⟹ (5.52): Assume (5.51). This implies

|^(1)X~_t| / |∆X~^(1)_t| →_P ∞, as t ↓ 0,

so we have

lim_{t↓0} P( |^(1)X~_t| ≤ a |∆X~^(1)_t| ) = 0 for some a > 0. (5.55)

Without loss of generality take a ≤ 1, and abbreviate Π^←(v) to y_v throughout. Then by (2.12), we can write

P( |^(1)X~_t| ≤ a |∆X~^(1)_t| ) = ∫_0^∞ P( |X~_{vt} + G~_{vt}| ≤ a y_v ) P(E ∈ t dv). (5.56)

By (5.36), we have

|E G~_{vt}| ≤ t y_v Π(y_v−) ≤ t U(y_v)/y_v, (5.57)

and

|E X~_{vt}| = t |A(y_v) − y_v (Π^+(y_v−) − Π^-(y_v−))| ≤ t (|A(y_v)| + y_v Π(y_v−)) (5.58)
≤ t (|A(y_v)| + U(y_v)/y_v).

Apply (5.57) and (5.58) to obtain from (5.56)

P( |^(1)X~_t| ≤ a |∆X~^(1)_t| )
≥ ∫_0^∞ P( |X~_{vt} − E X~_{vt} + G~_{vt} − E G~_{vt}| ≤ a y_v − |E X~_{vt}| − |E G~_{vt}| ) P(E ∈ t dv) (5.59)
≥ ∫_0^∞ P( |X~_{vt} − E X~_{vt} + G~_{vt} − E G~_{vt}| ≤ a y_v − 2 t (|A(y_v)| + U(y_v)/y_v) ) P(E ∈ t dv).

For t > 0, define

b_t := sup{ x > 0: (x |A(x)| + U(x))/x^2 > a^2/(56 t) }, (5.60)

with b_0 := 0. Since Π(0+) = ∞, we have lim_{x↓0} (x |A(x)| + U(x))/x^2 = ∞. In addition, lim_{x→∞} (x |A(x)| + U(x))/x^2 = 0. Then 0 < b_t < ∞, b_t is strictly increasing, b_t ↓ 0 as t ↓ 0, and

t (b_t |A(b_t)| + U(b_t)) / b_t^2 = a^2/56, t > 0. (5.61)

Now keep v < Π(b_t). Then y_v ≥ b_t, and so t (|A(y_v)| + U(y_v)/y_v) ≤ a^2 y_v/56 ≤ a y_v/4, by definition of b_t.
Thus, by Chebyshev's inequality and (5.59),

P( |^(1)X~_t| ≤ a |∆X~^(1)_t| ) ≥ ∫_0^{Π(b_t)} P( |X~_{vt} − E X~_{vt} + G~_{vt} − E G~_{vt}| ≤ a y_v/2 ) P(E ∈ t dv)
≥ ∫_0^{Π(b_t)} ( 1 − 4 (Var(X~_{vt}) + Var(G~_{vt})) / (a^2 y_v^2) ) P(E ∈ t dv).

Also, as in (5.43), Var(X~_{vt}) + Var(G~_{vt}) ≤ a^2 y_v^2/8, giving

P( |^(1)X~_t| ≤ a |∆X~^(1)_t| ) ≥ t ∫_0^{Π(b_t)} e^{−tv} dv / 2 = (1 − e^{−tΠ(b_t)}) / 2. (5.62)

Since the left-hand side tends to 0 as t ↓ 0, we conclude that

t Π(b_t) = a^2 b_t^2 Π(b_t) / (56 (b_t |A(b_t)| + U(b_t))) → 0, as t ↓ 0. (5.63)

We need to replace b_t by a continuous variable x ↓ 0. Take λ > 1 and t > 0. Then

b_{tλ}^2/b_t^2 = 56 tλ (b_{tλ} |A(b_{tλ})| + U(b_{tλ})) / (a^2 b_t^2)
= 56 tλ (b_{tλ} |A(b_t)| + U(b_t)) / (a^2 b_t^2) + 56 tλ b_{tλ} (|A(b_{tλ})| − |A(b_t)|) / (a^2 b_t^2) (5.64)
+ 56 tλ (U(b_{tλ}) − U(b_t)) / (a^2 b_t^2)
≤ λ + 56 tλ (b_{tλ} − b_t) |A(b_t)| / (a^2 b_t^2) + 56 tλ b_{tλ} (b_{tλ} − b_t) Π(b_t) / (a^2 b_t^2) + 112 tλ b_{tλ} (b_{tλ} − b_t) Π(b_t) / (a^2 b_t^2).

Observe that 56 tλ (b_{λt} − b_t) |A(b_t)| / (a^2 b_t^2) ≤ λ (b_{λt} − b_t)/b_t. Since t Π(b_t) = o(1), (5.64) implies

b_{tλ}^2/b_t^2 ≤ λ + λ (b_{tλ}/b_t − 1) + o(b_{tλ}^2/b_t^2) ≤ λ + λ b_{tλ}/b_t + o(b_{tλ}^2/b_t^2).

From this, we deduce that limsup_{t↓0} b_{tλ}/b_t < ∞.

Now return to (5.63) and take x > 0. Choose t = t(x) such that b_t ≤ x ≤ b_{λt}, λ > 1. It is shown in Klass and Wittmann [30] that the function x |A(x)| + U(x) is nondecreasing in x > 0 (Klass and Wittmann prove this for versions of A and U defined for distribution functions, but their proof is easily modified to apply to the present A and U). Thus,

(x |A(x)| + U(x)) / (x^2 Π(x)) ≥ ( (b_t |A(b_t)| + U(b_t)) / (b_t^2 Π(b_t)) ) · ( b_t^2/b_{λt}^2 ).

The first factor on the right tends to ∞ as t ↓ 0, and liminf_{t↓0} b_t/b_{tλ} > 0, so we get (5.52).

(5.52) ⟺ (5.53) is proved in Lemma 4 of Doney and Maller [14].

(5.53) ⟹ (5.54): Assume (5.53). If σ^2 > 0 then X ∈ D(N), hence X ∈ D_0(N) ∪ RS. So suppose σ^2 = 0. Then the left-hand side of (5.53) is equivalent to
X ∈ D_0(N) at 0, by (5.47), and the right-hand side of (5.53) is equivalent to X ∈ RS at 0, by (5.23). Thus again, X ∈ D_0(N) ∪ RS.

(5.54) ⟹ (5.51): Finally, if X ∈ D_0(N) ∪ RS, then X_t/b_t →_D N(0, 1) for some b_t > 0, in which case ∆X~^(1)_t = o_P(b_t), or X_t/c_t →_P ±1 for some c_t > 0, in which case ∆X~^(1)_t = o_P(c_t); in either case (5.51) holds. This completes the proof of Theorem 8. □

We say that X is subsequentially relatively stable (SRS) at 0 if there are nonstochastic sequences t_k ↓ 0 and b_k > 0 such that

X_{t_k}/b_k →_P ±1, as k → ∞. (5.65)

Define positive and negative subsequential relative stability (PSRS and NSRS) in the obvious ways.

Theorem 9. Assume Π^+(0+) = ∞. Then the following are equivalent: there is a nonstochastic sequence t_k ↓ 0 such that

X_{t_k} / |∆X~^(1)_{t_k}| →_P ∞, as k → ∞; (5.66)

there is a nonstochastic sequence t_k ↓ 0 such that

X_{t_k} / (∆X^+)^(1)_{t_k} →_P ∞, as k → ∞; (5.67)

X ∈ PSRS at 0; (5.68)

limsup_{x↓0} A(x) / √(U(x) Π(x)) = ∞; (5.69)

limsup_{x↓0} x A(x) / U(x) = ∞. (5.70)

Proof. Assume Π^+(0+) = ∞. Each of (5.66)–(5.70) implies σ^2 = 0: by Lemma 5 in the case of (5.66) and (5.67), by Lemma 6 in the case of (5.69), and by (5.8) and U(x) ≥ σ^2 in the case of (5.70). So we assume throughout that σ^2 = 0.

(5.66) ⟺ (5.67): clearly, (5.66) implies (5.67). Conversely, assume (5.67). From (5.20), we have that X_{t_k}/(∆X^-)^(1)_{t_k} →_P ∞, as k → ∞, when lim_{k→∞} P(X_{t_k} > 0) = 1. Together with (5.67) and |∆X~^(1)_t| = max((∆X^+)^(1)_t, (∆X^-)^(1)_t), this implies (5.66).

(5.69) ⟺ (5.70): Assume (5.69), so there is a nonstochastic sequence x_k ↓ 0 as k → ∞ such that

A(x_k) / √(U(x_k) Π(x_k)) → ∞, as k → ∞.

Define

t_k = (1/A(x_k)) √(U(x_k)/Π(x_k)).

Then

t_k Π(x_k) = √(U(x_k) Π(x_k)) / A(x_k) → 0,

and, since Π(0+) > 0, also t_k → 0. Also,

U(x_k) / (t_k A(x_k)^2) = (1/A(x_k)) √(Π(x_k) U(x_k)) → 0.
Let b_k = t_k A(x_k); then

b_k/x_k = t_k A(x_k)/x_k = √( U(x_k) / (x_k^2 Π(x_k)) ) ≥ 1.

Now since b_k ≥ x_k we have

t_k U(b_k)/b_k^2 = U(x_k)/(t_k A(x_k)^2) + 2 t_k ∫_{x_k}^{b_k} y Π(y) dy / b_k^2 ≤ o(1) + O(t_k Π(x_k)) = o(1).

This implies t_k U(x b_k)/b_k^2 = o(1) for all x ∈ (0, 1], and

lim_{k→∞} t_k Π(x b_k) = 0 for all x ∈ (0, 1], (5.71)

because U(x) ≥ x^2 Π(x). But then, since Π is nonincreasing, (5.71) holds for all x > 0, and, for x > 1,

t_k U(x b_k)/b_k^2 = t_k U(b_k)/b_k^2 + O(t_k Π(b_k)) = o(1). (5.72)

Again since b_k ≥ x_k, we can write

t_k A(b_k)/b_k = 1 + t_k ∫_{x_k}^{b_k} (Π^+(y) − Π^-(y)) dy / b_k = 1 + O(t_k Π(x_k)) = 1 + o(1). (5.73)

It follows that b_k A(b_k)/U(b_k) = (t_k A(b_k)/b_k)(b_k^2/(t_k U(b_k))) → ∞, which gives (5.70). Conversely, (5.70) implies (5.69) because U(x) ≥ x^2 Π(x).

(5.69) ⟺ (5.68): (5.69) implies (5.71)–(5.73), as just shown, and these together imply (5.65) (with a "+" sign) by the subsequential version of (5.24). Thus, (5.68) holds. Conversely, assuming (5.68), we get (5.71)–(5.73) by the subsequential version of (5.24). But then (5.69) holds because

A(b_k) / √(U(b_k) Π(b_k)) = (t_k A(b_k)/b_k) √( (b_k^2/(t_k U(b_k))) (1/(t_k Π(b_k))) ) → ∞.

So we have proved the equivalence of (5.68)–(5.70).

(5.66) ⟹ (5.69): Assume (5.66). Case (i). Suppose Π^-(0+) > 0. Then, using Theorem 6, we have lim_{k→∞} P(X_{t_k} > 0) = 1, σ^2 = 0, and (5.21). Since Π^-(0+) > 0 and U(x) ≥ x^2 Π^-(x), (5.21) implies limsup_{x↓0} A(x)/x = ∞. (5.66) also implies

^(1)X~_{t_k} / |∆X~^(1)_{t_k}| →_P ∞, as k → ∞,

so we have

lim_{k→∞} P( ^(1)X~_{t_k} ≤ a |∆X~^(1)_{t_k}| ) = 0 for some a ∈ (0, 1].

Define b_k similarly as in (5.60):

b_k := sup{ x > 0: (x |A(x)| + U(x))/x^2 > a^2/(56 t_k) }. (5.74)

Then, by the same calculation as in (5.60)–(5.62), we find, for large k,

P( ^(1)X~_{t_k} ≤ a |∆X~^(1)_{t_k}| ) ≥ t_k ∫_0^{Π(b_k)} e^{−t_k v} dv / 2 = (1 − e^{−t_k Π(b_k)}) / 2.

From this, we conclude that t_k Π(b_k) → 0.
Take a further subsequence k′ → ∞ if necessary so that

t_{k′} A(b_{k′})/b_{k′} → A and t_{k′} U(b_{k′})/b_{k′}^2 → B, (5.75)

where B ≥ 0 and |A| + B = a^2/56. Suppose, for a contradiction, that A ≤ 0. Take a further subsequence k′ if necessary so that, for some functions Λ^±(x) and B(x),

lim_{k′→∞} t_{k′} Π^±(x b_{k′}) = Λ^±(x) and lim_{k′→∞} t_{k′} U(x b_{k′})/b_{k′}^2 = B(x)

at continuity points x > 0 of Λ^±. Then Λ(x) := Λ^+(x) + Λ^-(x) = 0 for all x ≥ 1. Fatou's lemma gives

∞ > B = lim_{k′→∞} t_{k′} U(b_{k′})/b_{k′}^2 = 2 lim_{k′→∞} ∫_0^1 y t_{k′} Π(y b_{k′}) dy ≥ 2 ∫_0^1 y Λ(y) dy,

and shows that the integral on the right is finite. This means that Λ is the tail of a Lévy measure on R, and, by Kallenberg ([23], Theorem 15.14), as k′ → ∞ we have (X_{t_{k′}} − t_{k′} ν(b_{k′}))/b_{k′} →_D Y′, an infinitely divisible r.v. with canonical measure Λ. Since Λ(x) = 0 for all x ≥ 1, Y′ has finite variance. Further, since t_k Π(b_k) → 0, lim_{k′→∞} t_{k′} ν(b_{k′})/b_{k′} = A (recall (5.6)). The Lévy–Itô decomposition can equivalently be written as

X_t = t ν(b) + σ Z_t + X^(S,b)_t + X^(B,b)_t, t ≥ 0, (5.76)

where b > 0, X^(S,b)_t is the compensated small jump component of X, that is, having jumps less than or equal to b in modulus, and X^(B,b)_t is the sum of jumps larger in modulus than b; see, for example, Doney and Maller ([13], Lemma 6.1). Choose b = b_{k′} in (5.76), and notice that the sum of jumps larger in modulus than b_{k′} is o_P(b_{k′}) as k′ → ∞, because t_k Π(b_k) → 0. Also, σ^2 = 0. So we deduce

X^(S,b_{k′})_{t_{k′}} / b_{k′} = (X_{t_{k′}} − t_{k′} ν(b_{k′})) / b_{k′} + o_P(1) →_D Y′. (5.77)

From the inequality

E( X^(S,b_{k′})_{t_{k′}} / b_{k′} )^2 ≤ t_{k′} U(b_{k′}) / b_{k′}^2 ≤ a^2/56,

the sequence (X^(S,b_{k′})_{t_{k′}}/b_{k′}) is uniformly integrable. Thus, we deduce from (5.77) that

E( X^(S,b_{k′})_{t_{k′}} ) / b_{k′} → E Y′ + A.

The expectation on the left equals 0, so this implies E Y′ = −A.
Now argue that

lim_{k′→∞} P(X_{t_{k′}} ≤ 0) = lim_{k′→∞} P( (X_{t_{k′}} − t_{k′} ν(b_{k′}))/b_{k′} ≤ −t_{k′} ν(b_{k′})/b_{k′} ) = P(Y′ ≤ −A).

But since Y′ + A has mean 0 and finite variance, P(Y′ ≤ −A) = P(Y′ + A ≤ 0) > 0, in contradiction to (5.66). Thus, A ≤ 0 is impossible; so A > 0 and B < ∞. It follows from (5.75) that

A(b_{k′}) / √( U(b_{k′}) Π(b_{k′}) ) → ∞,

which implies (5.69).

Case (ii). Still assuming (5.66), suppose Π^-(0+) = 0. (5.66) implies P(X_{t_k} > 0) → 1, so by Theorem 6, X is a subordinator and A(x) ≥ 0 for all x ≥ 0. Then

x^{-1} A(x) = x^{-1} ( d_X + ∫_0^x Π^+(y) dy ) ≥ ∫_0^1 Π^+(xy) dy → ∞, as x ↓ 0,

so we can define b_k by (5.74) and proceed as before to get t_k Π(b_k) → 0, and hence (5.69).

Conversely, in either case (i) or (ii), we know (5.69) ⟹ (5.68), and (5.68) ⟹ (5.66) follows easily from the subsequential version of (5.24). □

The following corollary to Theorem 9 is also proved in Theorem 4 of Maller [40].

Corollary 4. Assume Π(0+) > 0. The following are equivalent: (i) X ∈ SRS at 0; (ii) there are nonstochastic sequences t_k ↓ 0 and b_k > 0, such that, as k → ∞,

|X_{t_k}|/b_k →_P 1; (5.78)

(iii) σ^2 = 0 and

limsup_{x↓0} |A(x)| / √( Π(x) U(x) ) = ∞; (5.79)

(iv)

limsup_{x↓0} x |A(x)| / U(x) = ∞. (5.80)

Proof. Assume Π(0+) > 0. First, X ∈ SRS at 0 ⟹ (5.78) is obvious by definition.

(5.78) ⟹ (5.79) and (5.80): Let (5.78) hold with t_k ↓ 0 and b_k > 0. Take a further subsequence t_{k′} ↓ 0 such that X_{t_{k′}}/b_{k′} →_D Z′. Z′ is infinitely divisible by Lemma 4.1 of Maller and Mason [36]. Then |Z′| = 1 a.s.; thus, as a bounded infinitely divisible random variable, Z′ is degenerate at a constant, which must be ±1. When Z′ = +1, X ∈ PSRS. Apply Theorem 9 to get (5.79) and (5.80). If Z′ = −1, then −X ∈ PSRS. Then apply Theorem 9 to −X to get (5.79) and (5.80) again.

(5.79) or (5.80) ⟹ X ∈ SRS at 0: Let (5.79) or (5.80) hold. Then there is a sequence x_k ↓ 0 as k → ∞ such that |A(x_k)| > 0.
By taking a further subsequence, we may assume that A(x_k) > 0 for all k, or A(x_k) < 0 for all k. Suppose the former; then (5.69) or (5.70) holds, so we get X ∈ PSRS by Theorem 9. If the latter, then, by applying Theorem 9 to −X, we get X ∈ NSRS. □

We can also have subsequential convergence to normality, as t ↓ 0. The next theorem gives an "uncentered" version of this. We describe (5.81) as "X ∈ D_P(N) at 0".

Theorem 10. Assume σ^2 > 0 or Π(0+) = ∞. Then there are nonstochastic sequences t_k ↓ 0 and b_k ↓ 0 such that, as k → ∞,

X_{t_k}/b_k →_D N(0, 1) (5.81)

iff

limsup_{x↓0} U(x) / (x^2 Π(x) + x |A(x)|) = ∞. (5.82)

Proof. Both conditions hold when σ^2 > 0, so we can assume σ^2 = 0, thus Π(0+) = ∞. Let (5.82) hold and choose x_k ↓ 0 such that

U(x_k)/(x_k^2 Π(x_k)) → ∞ and U(x_k)/(x_k |A(x_k)|) → ∞. (5.83)

Then define

t_k = min{ (1/Π(x_k)) √( x_k^2 Π(x_k)/U(x_k) ), (x_k/|A(x_k)|) √( x_k |A(x_k)|/U(x_k) ) }. (5.84)

(If A(x_k) = 0, interpret the second component in (5.84) as +∞.) Thus,

t_k Π(x_k) ≤ √( x_k^2 Π(x_k)/U(x_k) ) → 0,

and, since Π(0+) > 0, we have t_k → 0 as k → ∞. Now let b_k = √( t_k U(x_k) ). Since σ^2 = 0, U(x_k) = 2 ∫_0^{x_k} y Π(y) dy → 0 as k → ∞. Then b_k → 0 as k → ∞. Also,

b_k^2/x_k^2 = min{ √( U(x_k)/(x_k^2 Π(x_k)) ), √( U(x_k)/(x_k |A(x_k)|) ) } → ∞ (by (5.83)).

Take x > 0 and k so large that x b_k ≥ x_k. Then

t_k Π(x b_k) ≤ t_k Π(x_k) → 0,

and

t_k U(x b_k)/b_k^2 = 1 + t_k (U(x b_k) − U(x_k))/b_k^2 = 1 + 2 t_k ∫_{x_k}^{x b_k} y Π(y) dy / b_k^2 (5.85)
= 1 + O(t_k Π(x_k)) = 1 + o(1).

Also,

t_k |A(x_k)|/x_k ≤ √( x_k |A(x_k)|/U(x_k) ) → 0,

while

t_k |A(b_k)|/b_k ≤ o( t_k |A(x_k)|/x_k ) + t_k | ∫_{x_k}^{b_k} (Π^+(y) − Π^-(y)) dy | / b_k (5.86)
≤ o(1) + t_k Π(x_k) = o(1).
It follows from (5.85), (5.86) and the subsequential version of (5.48) that X_{t_k}/b_k →_D N(0, 1). The converse, that (5.81) implies (5.82), follows by applying the subsequential version of (5.48) in the reverse direction. □

Our final result in this section shows that a 2-sided version of (5.66) holds iff X ∈ D_P(N) at 0 or X ∈ SRS at 0.

Theorem 11. Assume Π(0+) = ∞. Then the following are equivalent: there is a nonstochastic sequence t_k ↓ 0 such that

|X_{t_k}| / |∆X~^(1)_{t_k}| →_P ∞, as k → ∞; (5.87)

limsup_{x↓0} (x |A(x)| + U(x)) / (x^2 Π(x)) = ∞; (5.88)

(a) limsup_{x↓0} U(x)/(x |A(x)| + x^2 Π(x)) = +∞, or (b) limsup_{x↓0} x |A(x)|/U(x) = +∞; (5.89)

X ∈ D_P(N) ∪ SRS at 0. (5.90)

Proof. Assume Π(0+) = ∞.

(5.87) ⟹ (5.88): Assume (5.87). Then, just as in the proof of Theorem 8, we find t_k Π(b_k) → 0 as k → ∞, where b_k satisfies the subsequential version of (5.61). Thus, (5.88) holds.

(5.88) ⟹ (5.89) follows from Theorem 3 of Maller [40].

(5.89) ⟺ (5.90) follows from Theorem 10 and Corollary 4.

(5.90) ⟹ (5.87): (5.90) implies that there are t_k ↓ 0 and b_k ↓ 0 such that X_{t_k}/b_k →_D N(0, 1) or |X_{t_k}|/b_k →_P 1 as k → ∞. Either of these implies t_k Π(b_k) → 0 as k → ∞, and hence sup_{0<s≤t_k} |∆X_s| = o_P(b_k), giving (5.87). □

Remark 5. (i) It is shown in Theorem 3 of Maller [40] that (5.88) is equivalent to the existence of a nonstochastic function B_t > 0 such that limsup_{t↓0} |X_t|/B_t = 1 a.s. Maller [40] also gives a.s. equivalences for (5.46) and (5.89)(a). We hope to consider a.s. results related to those in Sections 3–5 elsewhere.

(ii) We note that in many conditions, such as (5.88) and (5.89), we may replace the functions A(x) and U(x) in (5.4) and (5.5) by the functions ν(x) and V(x) in (5.3). This is because

x |A(x) − ν(x)| ≤ x^2 Π(x) and 0 ≤ U(x) − V(x) = x^2 Π(x), x > 0.

But there is some advantage to working with the continuous functions A(x) and U(x), and sometimes it is essential, for example, in Theorem 5.
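The criteria of this section are easy to probe numerically. The sketch below is illustrative only (both measures and all constants are hypothetical choices, not from the paper). It evaluates the ratio A(x)/(xΠ(x)) from criterion (5.27) for two subordinators: a gamma process with unit drift, for which the ratio diverges as x ↓ 0 (consistent with X ∈ PRS at 0, since X_t ≈ t at small times), and a driftless truncated 1/2-stable subordinator, for which the ratio stays near 2, so the largest jump remains comparable to X_t and X is not PRS.

```python
import math

def integrate(f, a, b, n=20000):      # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def E1(x):
    # Exponential integral E1(x) = int_x^inf e^-y / y dy, the tail of the
    # gamma-process Lévy measure Pi(dy) = y^-1 e^-y dy; computed via the
    # substitution y = e^u to tame the 1/y singularity.
    return integrate(lambda u: math.exp(-math.exp(u)), math.log(x), math.log(40.0))

def ratio_gamma(x, drift=1.0):
    # A(x) = drift + int_0^x tail(y) dy; for the gamma measure,
    # int_0^x E1(y) dy = (1 - e^-x) + x E1(x) (by int min(z,x) Pi(dz)).
    A = drift + (1.0 - math.exp(-x)) + x * E1(x)
    return A / (x * E1(x))

def ratio_stable(x):
    # Truncated 1/2-stable subordinator: tail(y) = y^(-1/2) - 1 on (0,1],
    # no drift, so A(x) = int_0^x tail(y) dy = 2*sqrt(x) - x.
    A = 2.0 * math.sqrt(x) - x
    return A / (x * (x**-0.5 - 1.0))

for x in (1e-2, 1e-3, 1e-4):
    print(f"x={x:g}  gamma+drift: {ratio_gamma(x):9.1f}   stable: {ratio_stable(x):.3f}")
```

The gamma-with-drift ratio grows without bound because xΠ(x) → 0 while A(x) → drift, whereas for the stable subordinator A(x) and xΠ(x) are of the same order, matching the dichotomy described after Theorem 4 in the abstract.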
6. Related large time results

Most of the small time results derived herein have exact or close analogues for large times (i.e., allowing t → ∞ rather than t ↓ 0). Some of the results in fact hold for all t > 0; this is the case for all results in Section 2, as well as Lemmas 2 and 3. Some analogous large time results for Lévy processes can be found in Kevei and Mason [27], and Maller and Mason [37, 39], and we expect that others can be derived by straightforward modification of our small time methods. These would include compound Poisson processes as special cases.

Acknowledgements

We are grateful to a referee for a very careful reading and for suggesting substantial improvements to the original version of the paper. R. Maller's research was partially supported by ARC grant DP1092502.

References

[1] Andrew, P. (2008). On the limiting behaviour of Lévy processes at zero. Probab. Theory Related Fields.
[2] Arov, D.Z. and Bobrov, A.A. (1960). The extreme members of a sample and their role in the sum of independent variables. Theory Probab. Appl.
[3] Berkes, I. and Horváth, L. (2012). The central limit theorem for sums of trimmed variables with heavy tails. Stochastic Process. Appl.
[4] Berkes, I., Horváth, L. and Schauer, J. (2010). Non-central limit theorems for random selections. Probab. Theory Related Fields.
[5] Bertoin, J. (1996). Lévy Processes. Cambridge Tracts in Mathematics. Cambridge: Cambridge Univ. Press. MR1406564
[6] Bertoin, J. (1997). Regularity of the half-line for Lévy processes. Bull. Sci. Math.
[7] Bingham, N.H., Goldie, C.M. and Teugels, J.L. (1987). Regular Variation. Encyclopedia of Mathematics and Its Applications. Cambridge: Cambridge Univ. Press. MR0898871
[8] Csörgő, S., Haeusler, E. and Mason, D.M. (1988). A probabilistic approach to the asymptotic distribution of sums of independent, identically distributed random variables. Adv. in Appl. Math.
[9] Csörgő, S. and Simons, G. (2002). A Bibliography of the St. Petersburg Paradox.
Anal-ysis and Stochastic Research Group of the Hungarian Academy of Sciences and theUniversity of Szeged.[10] Darling, D.A. (1952). The influence of the maximum term in the addition of independentrandom variables. Trans. Amer. Math. Soc. Doney, R.A. (2004). Small-time behaviour of L´evy processes. Electron. J. Probab. Doney, R.A. (2005). Fluctuation theory for L´evy processes: Ecole d’Et´e de Probabilit´esde Saint-Flour XXXV-2005, Issue 1897.[13] Doney, R.A. and Maller, R.A. (2002). Stability and attraction to normality for L´evyprocesses at zero and at infinity. J. Theoret. Probab. Doney, R.A. and Maller, R.A. (2002). Stability of the overshoot for L´evy processes. Ann. Probab. Fan, Y. (2015). A study in lightly trimmed L´evy processes. PhD thesis, the AustralianNational Univ.[16] Feller, W. (1968/1969). An extension of the law of the iterated logarithm to variableswithout variance. J. Math. Mech. Feller, W. (1971). An Introduction to Probability Theory and Its Applications. Vol. II ,2nd ed. New York: Wiley. MR0270403[18] Fukker, G. , Gy¨orfi, L. and Kevei, P. (2015). Asymptotic behaviour of the St. Petersburgsum conditioned on its maximum. Bernoulli . To appear.[19] Griffin, P.S. and Maller, R.A. (2011). Stability of the exit time for L´evy processes. Adv. in Appl. Probab. Griffin, P.S. and Maller, R.A. (2013). Small and large time stability of the time takenfor a L´evy process to cross curved boundaries. Ann. Inst. Henri Poincar´e Probab. Stat. Griffin, P.S. and Pruitt, W.E. (1989). Asymptotic normality and subsequential limitsof trimmed sums. Ann. Probab. B. Buchmann, Y. Fan and R.A. Maller [22] Gut, A. and Martin-L¨of, A. (2014). A maxtrimmed St. Petersburg game. J. Theoret.Probab. To appear.[23] Kallenberg, O. (2002). Foundations of Modern Probability , 2nd ed. New York: Springer.MR1876169[24] Kesten, H. and Maller, R.A. (1992). Ratios of trimmed sums and order statistics. Ann.Probab. Kesten, H. and Maller, R.A. (1994). 
Infinite limits and infinite limit points of randomwalks and trimmed sums. Ann. Probab. Kesten, H. and Maller, R.A. (1995). The effect of trimming on the strong law of largenumbers. Proc. Lond. Math. Soc. (3) Kevei, P. and Mason, D.M. (2013). Randomly weighted self-normalized L´evy processes. Stochastic Process. Appl. Khintchine, A. (1939). Sur la croissance locale des processus stochastiques homog`enes `aaccroissements ind´ependants. Bull. Acad. Sci. URSS. S´er. Math. [Izvestia Akad. NaukSSSR] Khintchine, Y.A. (1937). Zur Theorie der unbeschr¨ankt teilbaren Verteilungsgesetze. Mat.Sb. Klass, M.J. and Wittmann, R. (1993). Which i.i.d. sums are recurrently dominated bytheir maximal terms? J. Theoret. Probab. Ladoucette, S.A. and Teugels, J.L. (2007). Asymptotics for ratios with applicationsto reinsurance. Methodol. Comput. Appl. Probab. LePage, R. (1980). Multidimensional infinitely divisible variables and processes. I, Tech-nical Rept. 292, Dept. Statistics, Stanford Univ.[33] LePage, R. (1981). Multidimensional infinitely divisible variables and processes. II. In Probability in Banach Spaces, III (Medford, Mass., 1980) . Lecture Notes in Math. LePage, R. , Woodroofe, M. and Zinn, J. (1981). Convergence to a stable distributionvia order statistics. Ann. Probab. Madan, D.B. and Seneta, E. (1990). The variance gamma (V.G.) model for share marketreturns. J. Business Maller, R. and Mason, D.M. (2008). Convergence in distribution of L´evy processes atsmall times with self-normalization. Acta Sci. Math. (Szeged) Maller, R. and Mason, D.M. (2009). Stochastic compactness of L´evy processes. In HighDimensional Probability V: The Luminy Volume . Inst. Math. Stat. Collect. Maller, R. and Mason, D.M. (2010). Small-time compactness and convergence behaviorof deterministically and self-normalised L´evy processes. Trans. Amer. Math. Soc. Maller, R. and Mason, D.M. (2013). A characterization of small and large time limitlaws for self-normalized L´evy processes. 
In Limit Theorems in Probability, Statisticsand Number Theory ( P. Eichelsbacher et al ., eds.). Springer Proc. Math. Stat. Maller, R.A. (2009). Small-time versions of Strassen’s law for L´evy processes. Proc. Lond.Math. Soc. (3) Maller, R.A. and Resnick, S.I. (1984). Limiting behaviour of sums and the term ofmaximum modulus. Proc. Lond. Math. Soc. (3) Mori, T. (1984). On the limit distributions of lightly trimmed sums. Math. Proc. Cam-bridge Philos. Soc. istributional representations of a L´evy process [43] Pruitt, W.E. (1987). The contribution to the sum of the summand of maximum modulus. Ann. Probab. Resnick, S.I. (2007). Heavy-Tail Phenomena: Probabilistic and Statistical Modeling . Springer Series in Operations Research and Financial Engineering . New York:Springer. MR2271424[45] Resnick, S.I. (2008). Extreme Values, Regular Variation and Point Processes . SpringerSeries in Operations Research and Financial Engineering . New York: Springer. Reprintof the 1987 original. MR2364939[46] Rosi´nski, J. (2001). Series representations of L´evy processes from the perspective of pointprocesses. In L´evy Processes Sato, K.-i. (1999). L´evy Processes and Infinitely Divisible Distributions . Cambridge Studiesin Advanced Mathematics . Cambridge: Cambridge Univ. Press. MR1739520[48] Silvestrov, D.S. and Teugels, J.L. (2004). Limit theorems for mixed max-sum processeswith renewal stopping. Ann. Appl. Probab.1838–1868. MR2099654ε } − t Z ε< | x |≤ x Π(d x ) (cid:19) . (2.29)Now we can complete the proof of Theorem 2.1. Proof of Theorem 2.1. t > t > uy v (cid:17) . istributional representations of a L´evy process e X vt is Π(d x ) {| x | δt (cid:17) = 1 − e − t Π( δt ) → , thus g ∆ X (1) t /t P → t ↓ ,when X t is of bounded variation. istributional representations of a L´evy process Corollary 2.
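The maximal-jump law $P(\widetilde{\Delta X}{}^{(1)}_t \le x) = e^{-t\overline{\Pi}(x)}$, which drives the bounded-variation estimate in the proof of Theorem 2.1, comes straight from the Poisson point process of jumps and is easy to check by simulation. Below is a hedged sketch of our own construction (not from the paper): the tail $\overline{\Pi}(x) = x^{-\alpha}$, the cutoff $\varepsilon$, and all numerical values are illustrative assumptions, and jumps below the cutoff are ignored, which does not affect events involving larger thresholds.

```python
import math, random

random.seed(1)
ALPHA, EPS = 0.5, 0.01   # illustrative tail index and jump-size cutoff
T, DELTA = 0.5, 0.2      # time horizon and threshold (DELTA > EPS)

def tail(x):             # Pi_bar(x) = x**(-ALPHA), a toy Levy tail
    return x ** (-ALPHA)

def poisson(lam):
    """Knuth's method; fine for small means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def max_jump(t):
    """Largest jump of size > EPS on [0, t]: the number of such jumps is
    Poisson(t * tail(EPS)); sizes are iid with P(J > x) = (x/EPS)**(-ALPHA)."""
    n = poisson(t * tail(EPS))
    return max((EPS * (1.0 - random.random()) ** (-1.0 / ALPHA)
                for _ in range(n)), default=0.0)

reps = 20000
emp = sum(max_jump(T) > DELTA for _ in range(reps)) / reps
theory = 1.0 - math.exp(-T * tail(DELTA))   # P(max jump > DELTA)
print(emp, theory)
```

With these values $t\overline{\Pi}(\delta) \approx 1.118$, so the theoretical probability is about 0.67. Taking $\delta = \delta' t$ and letting $t \downarrow 0$ gives $t\overline{\Pi}(\delta' t) = (\delta')^{-\alpha} t^{1-\alpha} \to 0$ for $\alpha < 1$, which is exactly the bounded-variation estimate above.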
In the Poisson point process of jumps $(\Delta X_t)_{t>0}$, the numbers of jumps and their magnitudes in disjoint regions are independent. Thus, the positive and negative jump processes are independent. When the integrals are finite, define
$$A_{\pm}(x) := \int_0^x \overline{\Pi}^{\pm}(y)\,\mathrm{d}y = x \int_0^1 \overline{\Pi}^{\pm}(xy)\,\mathrm{d}y.$$
We obtain the following. [...] (4.4)

Proof. [...] for $t > 0$. By (4.5), the left-hand relation in (4.1) holds if and only if, for all $\lambda > 0$,
$$\lim_{t \downarrow 0} \int_{(0,\infty)} \Bigl(1 - \exp\Bigl(-t \int_{(0,\infty)} \bigl(1 - e^{-\lambda x/y}\bigr)\,\Pi^{(-)}(\mathrm{d}x)\Bigr)\Bigr)\,\lambda_t^{+}(\mathrm{d}y) = 0. \tag{4.6}$$
Use the lower bound in the inequalities (cf. Bertoin [5], Proposition 1, page 74)
$$\frac{\lambda}{2y}\,A_-\!\Bigl(\frac{y}{\lambda}\Bigr) \le \int_{(0,\infty)} \bigl(1 - e^{-\lambda x/y}\bigr)\,\Pi^{(-)}(\mathrm{d}x) \le \frac{\lambda}{y}\,A_-\!\Bigl(\frac{y}{\lambda}\Bigr), \qquad y > 0,\ \lambda > 0, \tag{4.7}$$
with $\lambda = 1$ to get a lower bound for the integral in (4.6) of
$$\int_{(0,\infty)} \bigl(1 - e^{-tA_-(y)/2y}\bigr)\,\lambda_t^{+}(\mathrm{d}y) \ge \int_{(0,z]} \bigl(1 - e^{-tA_-(y)/2y}\bigr)\,\lambda_t^{+}(\mathrm{d}y) \tag{4.8}$$
for any $z > 0$. [...] (4.9) The Laplace transform on the left-hand side of (4.9) equals
$$\int_{(0,\infty)} \exp\Bigl(-t \int_{(0,\infty)} \bigl(1 - e^{-x/y}\bigr)\,\Pi^{(-)}(\mathrm{d}x)\Bigr)\, P\bigl(X [\ldots]\bigr),$$
and the right-hand side of (4.10) is
$$\int_{(0,\infty)} e^{-t\rho(y)}\, P\bigl(X [\ldots]\bigr).$$

[...] Since
$$P\Bigl(\sup_{0 < s \le t} \Delta X_s > \delta t\Bigr) \le 1 - e^{-t\overline{\Pi}^{+}(\delta t)} \to 0, \qquad \text{as } t \downarrow 0 \text{ for each } \delta > 0,$$
we have $\sup_{0 < s \le t} \Delta X_s [\ldots] = o_P(t)$, as $t \downarrow 0$, because
$$P\bigl(|X(B, t)| > \delta t\bigr) \le 1 - e^{-t\overline{\Pi}(1)} \to 0.$$

[...] Likewise,
$$P\Bigl(\sup_{0 < s \le t} \Delta X_s > \varepsilon b_t\Bigr) = 1 - e^{-t\overline{\Pi}(\varepsilon b_t)} \to 0, \qquad \text{thus } \sup_{0 < s \le t} \Delta X_s [\ldots].$$

[...] there exist $0 < c_1 < c_2 < \infty$ such that $\lim_{t \downarrow 0} P(c_1 < |X_t|/b^{*}_t < c_2) = 1$ iff $X \in RS$; and (ii) there is a nonstochastic function $b^{\dagger}_t > 0$ [...] such that, for sequences $t_k \downarrow 0$ [...] with a subsequence $t_{k'} \downarrow 0$, $|X_{t_{k'}}|/b^{\dagger}_{t_{k'}} \xrightarrow{P} c'$, where $0 < |c'| < \infty$, iff $X \in RS$. See also Griffin and Maller [19].

The next theorems look at two-sided results, concerning stability and dominance of $|X|$. Now the domain of attraction of the normal enters as an alternative to relative stability.

Theorem 8.
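To see the "attraction to normality" alternative at work, here is a toy simulation of our own (not from the paper): let $X_t = B_t + N_t$, a standard Brownian motion plus an independent rate-1 Poisson process of unit jumps; the rate, jump size and time horizon below are illustrative choices. As $t \downarrow 0$, $X_t/\sqrt{t}$ is approximately $N(0,1)$, while the probability of seeing any jump at all on $[0, t]$ is $1 - e^{-t} = O(t)$: the Gaussian part, of order $\sqrt{t}$, dominates the maximal jump, in the spirit of the dominance results above.

```python
import math, random

random.seed(2)
LAM = 1.0      # finite jump rate; unit jump size (toy process)
T = 1e-4       # small time

def poisson(lam):
    """Knuth's method; fine for small means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample(t):
    """One draw of (X_t / sqrt(t), number of jumps), where
    X_t = B_t + N_t: Brownian motion plus a rate-LAM Poisson process."""
    n = poisson(LAM * t)
    x = random.gauss(0.0, math.sqrt(t)) + n
    return x / math.sqrt(t), n

reps = 20000
draws = [sample(T) for _ in range(reps)]
cover = sum(abs(v) <= 1.96 for v, _ in draws) / reps   # ~ P(|N(0,1)| <= 1.96)
jump_freq = sum(n > 0 for _, n in draws) / reps        # ~ 1 - exp(-LAM * T)
print(cover, jump_freq)
```

Here `cover` should be near 0.95 and `jump_freq` near $10^{-4}$; the rare jump shifts $X_t/\sqrt{t}$ by $1/\sqrt{t} = 100$, visible as an extreme outlier rather than as a change in the bulk of the distribution.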