Sample Path Large Deviations for Lévy Processes and Random Walks with Regularly Varying Increments
Chang-Han Rhee, Jose Blanchet, Bert Zwart
July 2, 2018
Abstract
Let $X$ be a Lévy process with regularly varying Lévy measure $\nu$. We obtain sample-path large deviations for the scaled processes $\bar X_n(t) \triangleq X(nt)/n$ and obtain a similar result for random walks. Our results yield detailed asymptotic estimates in scenarios where multiple big jumps in the increment are required to make a rare event happen; we illustrate this through detailed conditional limit theorems. In addition, we investigate connections with the classical large deviations framework. In that setting, we show that a weak large deviation principle (with logarithmic speed) holds, but a full large deviation principle does not hold.

Keywords
Sample Path Large Deviations · Regular Variation · $\mathbb{M}$-convergence · Lévy Processes
Mathematics Subject Classification

1 Introduction

In this paper, we develop sample-path large deviations for one-dimensional Lévy processes and random walks, assuming the jump sizes are heavy-tailed. Specifically, let $X(t)$, $t \ge 0$, be a centered Lévy process with regularly varying Lévy measure $\nu$. Assume that $P(X(1) > x)$ is regularly varying of index $-\alpha$, and that $P(X(1) < -x)$ is regularly varying of index $-\beta$; i.e., there exist slowly varying functions $L_+$ and $L_-$ such that
$$P(X(1) > x) = L_+(x)\,x^{-\alpha}, \qquad P(X(1) < -x) = L_-(x)\,x^{-\beta}. \qquad (1.1)$$
Throughout the paper, we assume $\alpha, \beta > 1$. We also consider spectrally one-sided processes; in that case only $\alpha$ plays a role. Define $\bar X_n = \{\bar X_n(t),\, t \in [0,1]\}$, with $\bar X_n(t) = X(nt)/n$, $t \ge 0$. We are interested in large deviations of $\bar X_n$.

Author affiliations: Stochastics Group, Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands; Management Science & Engineering, Stanford University, Stanford, CA 94305, USA. This research was supported by an NWO VICI grant and by NSF grants DMS-0806145/0902075, CMMI-0846816 and CMMI-1069064.

The study of large deviations of $\bar X_n$ (or random walks with heavy-tailed step size distribution) was initiated in Nagaev (1969, 1977). The state of the art of such results is well summarized in Borovkov and Borovkov (2008); Denisov et al. (2008); Embrechts et al. (1997); Foss et al. (2011). In particular, Denisov et al. (2008) describe in detail how fast $x$ needs to grow with $n$ for the asymptotic relation
$$P(X(n) > x) = n\,P(X(1) > x)\,(1 + o(1)) \qquad (1.2)$$
to hold, as $n \to \infty$, in settings that go beyond (1.1). If (1.2) is valid, the so-called principle of one big jump is said to hold. A functional version of this insight has been derived in Hult et al. (2005). A significant number of studies investigate the question of if and how the principle of a single big jump is affected by the impact of (various forms of) dependence, and cover stable processes, autoregressive processes, modulated processes, and stochastic differential equations; see Buraczewski et al. (2013); Foss et al. (2007); Hult and Lindskog (2007); Konstantinides and Mikosch (2005); Mikosch and Wintenberger (2013, 2016); Mikosch and Samorodnitsky (2000); Samorodnitsky (2004).

The problem we investigate in this paper is markedly different from all of these works. Our aim is to develop asymptotic estimates of $P(\bar X_n \in A)$ for a sufficiently general collection of sets $A$, so that it is possible to study continuous functionals of $\bar X_n$ in a systematic manner. For many such functionals, and many sets $A$, the associated rare event will not be caused by a single big jump, but by multiple jumps. The results in this domain (e.g. Blanchet and Shi (2012); Foss and Korshunov (2012); Zwart et al.
(2004)) are few, each with an ad-hoc approach. As in large deviations theory for light tails, it is desirable to have more general tools available.

Another aspect of heavy-tailed large deviations we aim to clarify in this paper is the connection with the standard large-deviations approach, which has not been touched upon in any of the above-mentioned references. In our setting, the goal would be to obtain a function $I$ such that
$$-\inf_{\xi \in A^\circ} I(\xi) \le \liminf_{n\to\infty} \frac{\log P(\bar X_n \in A)}{\log n} \le \limsup_{n\to\infty} \frac{\log P(\bar X_n \in A)}{\log n} \le -\inf_{\xi \in \bar A} I(\xi), \qquad (1.3)$$
where $A^\circ$ and $\bar A$ are the interior and closure of $A$; all our large deviations results are derived in the Skorokhod $J_1$ topology. Equation (1.3) is a classical large deviations principle (LDP) with sub-linear speed (cf. Dembo and Zeitouni (2009)). Using existing results in the literature (e.g. Denisov et al. (2008)), it is not difficult to show that $X(n)/n = \bar X_n(1)$ satisfies an LDP with rate function $I = I(x)$ which is $0$ at $0$, equal to $(\alpha - 1)$ if $x > 0$, and $(\beta - 1)$ if $x < 0$. This is a lower-semicontinuous function of which the level sets are not compact. Thus, in large-deviations terminology, $I$ is a rate function, but not a good one. This implies that techniques such as the projective limit approach cannot be applied. In fact, in Section 4.4, we show that there does not exist an LDP of the form (1.3) for general sets $A$, by giving a counterexample. A version of (1.3) for compact sets is derived in Section 4.3, as a corollary of our main results. A result similar to (1.3) for random walks with semi-exponential (Weibullian) tails has been derived in Gantert (1998) (see also Gantert (2000); Gantert et al. (2014) for related results). Though an LDP for finite-dimensional distributions can be derived, lack of exponential tightness also persists at the sample-path level. To make the rate function good (i.e., to have compact level sets), the topology chosen in Gantert (1998) is considerably weaker than any of the Skorokhod topologies (but sufficient for the application that is central in that work).

The approach followed in the present paper is based on recent developments in the theory of regular variation. In particular, in Lindskog et al. (2014), the classical notion of regular variation is re-defined through a new convergence concept called $\mathbb{M}$-convergence (this is in itself a refinement of other reformulations of regular variation in function spaces; see de Haan and Lin (2001); Hult and Lindskog (2005, 2006)). In Section 2, we further investigate the $\mathbb{M}$-convergence framework by deriving a number of general results that facilitate the development of our proofs.

This paves the way towards our main large deviations results, which are presented in Section 3. We actually obtain estimates that are sharper than (1.3), though we impose a condition on $A$. For one-sided Lévy processes, our result takes the form
$$C_{J(A)}(A^\circ) \le \liminf_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}} \le \limsup_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}} \le C_{J(A)}(\bar A). \qquad (1.4)$$
Precise definitions can be found in Section 3.1; for now we just mention that $C_j$ is a measure on the Skorokhod space, and $J(\cdot)$ is an integer-valued set function defined as $J(A) = \inf_{\xi \in A \cap \mathbb{D}^{\uparrow}_s} \mathcal{D}^+(\xi)$, where $\mathcal{D}^+(\xi)$ is the number of discontinuities of $\xi$, and $\mathbb{D}^{\uparrow}_s$ is the set of all non-decreasing step functions vanishing at the origin. Throughout the paper, we adopt the convention that the infimum over an empty set is $\infty$. Our results also cover random walks $S_k$, $k \ge 0$: one can study the subordinated version $S_{N(t)}$, $t \ge 0$, where $N(t)$, $t \ge 0$, is an independent unit-rate Poisson process, since the Skorokhod $J_1$ distance between rescaled versions of $S_k$, $k \ge 0$, and $S_{N(t)}$, $t \ge 0$, is controlled by the deviations of $N(t)$ from $t$, which have been studied thoroughly.

In Section 4.2, we provide conditional limit theorems which give a precise description of the limit behavior of $\bar X_n$ given that $\bar X_n \in A$ as $n \to \infty$. An early result of this type is given in Durrett (1980), which focuses on regularly varying random walks with finite variance conditioned on the event $A = \{\bar X_n(1) > a\}$. Using the recent results that we have discussed (e.g. Hult et al. (2005)), more general conditional limit theorems can be derived for single-jump events.

We prove an LDP of the form (1.3) in Section 4.3, where the upper bound requires a compactness assumption. We construct a counterexample showing that the compactness assumption cannot be totally removed, and thus, a full LDP does not hold. Essentially, if a rare event is caused by $j$ big jumps, then the framework developed in this paper applies if each of these jumps is bounded from below by a strictly positive constant. Our counterexample in Section 4.4 indicates that it is not trivial to remove this condition.

As one may expect, it is not possible to apply classical variational methods to derive an expression for the exponent $J(A)$, as is often the case in large deviations for light tails. Nevertheless, there seems to be a generic connection with a class of control problems called impulse control problems.
Equation (1.5) is a specific deterministic impulse-control problem, which is related to Barles (1985). We expect that techniques similar to those in Barles (1985) will be useful to characterize optimality of solutions for problems like (1.5). The latter challenge is not taken up in the present study and will be addressed elsewhere. Instead, in Section 6, we analyse (1.5) directly in several examples; see also Chen et al. (2017). In each case, a condition needs to be checked to see whether our framework is applicable. We provide a general result that essentially states that we only need to check this condition for step functions in $A$, which makes this check rather straightforward.

In summary, this paper is organized as follows. After developing some preliminary results in Section 2, we present our main results in Section 3. Applications to random walks and connections with classical large deviations theory are investigated in Section 4. Section 5 is devoted to proofs. We collect some useful bounds in Appendix A.

2 $\mathbb{M}$-convergence

This section reviews and develops general concepts and tools that are useful in deriving our large deviations results. The proofs of the lemmas and corollaries stated throughout this section are provided in Section 5.1. We start with briefly reviewing the notion of $\mathbb{M}$-convergence, introduced in Lindskog et al. (2014).

Let $(\mathbb{S}, d)$ be a complete separable metric space, and $\mathscr{S}$ be the Borel $\sigma$-algebra on $\mathbb{S}$. Given a closed subset $\mathbb{C}$ of $\mathbb{S}$, let $\mathbb{S} \setminus \mathbb{C}$ be equipped with the relative topology as a subspace of $\mathbb{S}$, and consider the associated sub-$\sigma$-algebra $\mathscr{S}_{\mathbb{S}\setminus\mathbb{C}} \triangleq \{A : A \subseteq \mathbb{S}\setminus\mathbb{C},\ A \in \mathscr{S}\}$ on it. Define $\mathbb{C}^r \triangleq \{x \in \mathbb{S} : d(x, \mathbb{C}) < r\}$ for $r > 0$, and let $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ be the class of measures defined on $\mathscr{S}_{\mathbb{S}\setminus\mathbb{C}}$ whose restrictions to $\mathbb{S}\setminus\mathbb{C}^r$ are finite for all $r > 0$.
Topologize $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ with the sub-basis
$$\bigl\{\{\nu \in \mathbb{M}(\mathbb{S}\setminus\mathbb{C}) : \nu(f) \in G\} : f \in \mathcal{C}(\mathbb{S}\setminus\mathbb{C}),\ G \text{ open in } \mathbb{R}_+\bigr\},$$
where $\mathcal{C}(\mathbb{S}\setminus\mathbb{C})$ is the set of real-valued, non-negative, bounded, continuous functions whose support is bounded away from $\mathbb{C}$ (i.e., $f(\mathbb{C}^r) = \{0\}$ for some $r > 0$). A sequence $\mu_n \in \mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ converges to $\mu \in \mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ if $\mu_n(f) \to \mu(f)$ for each $f \in \mathcal{C}(\mathbb{S}\setminus\mathbb{C})$. Note that this notion of convergence in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ coincides with the classical notion of weak convergence of measures (Billingsley, 2013) if $\mathbb{C}$ is an empty set. We say that a set $A \subseteq \mathbb{S}$ is bounded away from another set $B \subseteq \mathbb{S}$ if $\inf_{x\in A,\, y\in B} d(x,y) > 0$. An important characterization of $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$-convergence is as follows:

Theorem 2.1 (Theorem 2.1 of Lindskog et al., 2014). Let $\mu, \mu_n \in \mathbb{M}(\mathbb{S}\setminus\mathbb{C})$. Then $\mu_n \to \mu$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ as $n \to \infty$ if and only if
$$\limsup_{n\to\infty} \mu_n(F) \le \mu(F) \qquad (2.1)$$
for all closed $F \in \mathscr{S}_{\mathbb{S}\setminus\mathbb{C}}$ bounded away from $\mathbb{C}$, and
$$\liminf_{n\to\infty} \mu_n(G) \ge \mu(G) \qquad (2.2)$$
for all open $G \in \mathscr{S}_{\mathbb{S}\setminus\mathbb{C}}$ bounded away from $\mathbb{C}$.

We now introduce a new notion of equivalence between two families of random objects, which will prove to be useful in Section 3.1 and Section 4.1. Let $F_\delta \triangleq \{x \in \mathbb{S} : d(x,F) \le \delta\}$ and $G^{-\delta} \triangleq ((G^c)_\delta)^c$. (Compare these notations to $\mathbb{C}^r$; note that we are using the convention that a superscript implies open sets and a subscript implies closed sets.)

Definition 1. Suppose that $X_n$ and $Y_n$ are random elements taking values in a complete separable metric space $(\mathbb{S}, d)$, and $\epsilon_n$ is a sequence of positive real numbers. $Y_n$ is said to be asymptotically equivalent to $X_n$ with respect to $\epsilon_n$ if, for each $\delta > 0$,
$$\limsup_{n\to\infty}\, \epsilon_n^{-1}\, P\bigl(d(X_n, Y_n) \ge \delta\bigr) = 0.$$

The usefulness of this notion of equivalence comes from the following lemma, which states that if $Y_n$ is asymptotically equivalent to $X_n$, and $X_n$ satisfies a limit theorem, then $Y_n$ satisfies the same limit theorem.
Moreover, it also allows one to extend the lower and upper bounds to more general sets in case there are asymptotically equivalent distributions that are supported on a subspace $\mathbb{S}_0$ of $\mathbb{S}$:

Lemma 2.1. Suppose that $\epsilon_n^{-1} P(X_n \in \cdot) \to \mu(\cdot)$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ for some sequence $\epsilon_n$ and a closed set $\mathbb{C}$. In addition, suppose that $\mu(\mathbb{S}\setminus\mathbb{S}_0) = 0$ and $P(X_n \in \mathbb{S}_0) = 1$ for each $n$. If $Y_n$ is asymptotically equivalent to $X_n$ with respect to $\epsilon_n$, then
$$\liminf_{n\to\infty}\, \epsilon_n^{-1} P(Y_n \in G) \ge \mu(G)$$
if $G$ is open and $G \cap \mathbb{S}_0$ is bounded away from $\mathbb{C}$, and
$$\limsup_{n\to\infty}\, \epsilon_n^{-1} P(Y_n \in F) \le \mu(F)$$
if $F$ is closed and there is a $\delta > 0$ such that $F_\delta \cap \mathbb{S}_0$ is bounded away from $\mathbb{C}$.

In our applications, $\mathbb{S}_0$ is typically the class of step functions. Taking $\mathbb{S}_0 = \mathbb{S}$, a simpler version of Lemma 2.1 follows immediately:

Corollary 2.1. Suppose that $\epsilon_n^{-1} P(X_n \in \cdot) \to \mu(\cdot)$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ for some sequence $\epsilon_n$. If $Y_n$ is asymptotically equivalent to $X_n$ with respect to $\epsilon_n$, then the law of $Y_n$ has the same (normalized) limit, i.e., $\epsilon_n^{-1} P(Y_n \in \cdot) \to \mu(\cdot)$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$.

Next, we discuss $\mathbb{M}$-convergence in a product space as a result of the $\mathbb{M}$-convergences on each space.

Lemma 2.2. Suppose that $\mathbb{S}_1, \ldots, \mathbb{S}_d$ are separable metric spaces and $\mathbb{C}_1, \ldots, \mathbb{C}_d$ are closed subsets of $\mathbb{S}_1, \ldots, \mathbb{S}_d$, respectively. If $\mu_n^{(i)}(\cdot) \to \mu^{(i)}(\cdot)$ in $\mathbb{M}(\mathbb{S}_i\setminus\mathbb{C}_i)$ for each $i = 1, \ldots, d$, then
$$\mu_n^{(1)} \times \cdots \times \mu_n^{(d)}(\cdot) \to \mu^{(1)} \times \cdots \times \mu^{(d)}(\cdot) \qquad (2.3)$$
in $\mathbb{M}\Bigl(\bigl(\prod_{i=1}^d \mathbb{S}_i\bigr) \setminus \bigcup_{i=1}^d \bigl(\prod_{j=1}^{i-1}\mathbb{S}_j\bigr) \times \mathbb{C}_i \times \bigl(\prod_{j=i+1}^{d}\mathbb{S}_j\bigr)\Bigr)$.

It should be noted that Lemma 2.2 itself is not exactly "right" in the sense that the set we take away is unnecessarily large, and hence, has limited applicability. More specifically, the $\mathbb{M}$-convergence in (2.3) applies only to the sets that are contained in a "rectangular" domain $\prod_{i=1}^d (\mathbb{S}_i\setminus\mathbb{C}_i)$.
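As a concrete anchor for these notions, here is a standard one-dimensional illustration (ours, not part of the paper): classical regular variation is recovered by taking $\mathbb{S} = [0,\infty)$, $\mathbb{C} = \{0\}$, and a random variable $X \ge 0$ with $P(X > x) = L(x)\,x^{-\alpha}$ for a slowly varying function $L$.

```latex
\[
  \mu_n(\,\cdot\,) \,\triangleq\, \frac{P(X/n \in \,\cdot\,)}{P(X > n)},
  \qquad
  \mu_n\bigl((x,\infty)\bigr)
    \;=\; \frac{P(X > nx)}{P(X > n)}
    \;=\; \frac{L(nx)}{L(n)}\,x^{-\alpha}
    \;\longrightarrow\; x^{-\alpha}
    \;=\; \nu_\alpha\bigl((x,\infty)\bigr),
  \qquad x > 0.
\]
```

Every set bounded away from $\mathbb{C} = \{0\}$ lies at distance at least some $r > 0$ from the origin, and the limit $\nu_\alpha$ is finite on all such sets, so $\mu_n \to \nu_\alpha$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$, in line with the characterization in Theorem 2.1. The sample-path results of this paper can be read as an infinite-dimensional analogue of this computation.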
Our next observation allows one to combine multiple instances of $\mathbb{M}$-convergence to establish a more refined one, so that (2.3) applies to a class of sets that are not confined to a rectangular domain. In particular, we will see later in Theorem 3.3 and Theorem 5.1 that, in combination with Lemma 2.2, the following lemma produces the "right" $\mathbb{M}$-convergence for two-sided Lévy processes and random walks.

Lemma 2.3. Consider a family of measures $\{\mu(i)\}_{i=0,1,\ldots,m}$ and a family of closed subsets $\{\mathbb{C}(i)\}_{i=0,1,\ldots,m}$ of $\mathbb{S}$ such that $\epsilon_n(i)^{-1} P(X_n \in \cdot) \to \mu(i)(\cdot)$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C}(i))$ for $i = 0, \ldots, m$, where $\{\{\epsilon_n(i) : n \ge 1\}\}_{i=0,1,\ldots,m}$ is the family of associated normalizing sequences. Suppose that $\mu(0) \in \mathbb{M}\bigl(\mathbb{S}\setminus\bigcap_{i=0}^m \mathbb{C}(i)\bigr)$; that $\limsup_{n\to\infty} \epsilon_n(i)/\epsilon_n(0) = 0$ for $i = 1, \ldots, m$; and that for each $r > 0$, there exist positive numbers $r_0, \ldots, r_m$ such that $\bigcap_{i=0}^m \mathbb{C}(i)^{r_i} \subseteq \bigl(\bigcap_{i=0}^m \mathbb{C}(i)\bigr)^r$. Then $\epsilon_n(0)^{-1} P(X_n \in \cdot) \to \mu(0)$ in $\mathbb{M}\bigl(\mathbb{S}\setminus\bigcap_{i=0}^m \mathbb{C}(i)\bigr)$.

A version of the continuous mapping principle is satisfied by $\mathbb{M}$-convergence. Let $(\mathbb{S}', d')$ be a complete separable metric space, and let $\mathbb{C}'$ be a closed subset of $\mathbb{S}'$.

Theorem 2.2 (Mapping theorem; Theorem 2.3 of Lindskog et al. (2014)). Let $h : (\mathbb{S}\setminus\mathbb{C}, \mathscr{S}_{\mathbb{S}\setminus\mathbb{C}}) \to (\mathbb{S}'\setminus\mathbb{C}', \mathscr{S}_{\mathbb{S}'\setminus\mathbb{C}'})$ be a measurable mapping such that $h^{-1}(A')$ is bounded away from $\mathbb{C}$ for any $A' \in \mathscr{S}_{\mathbb{S}'\setminus\mathbb{C}'}$ bounded away from $\mathbb{C}'$. Then $\hat h : \mathbb{M}(\mathbb{S}\setminus\mathbb{C}) \to \mathbb{M}(\mathbb{S}'\setminus\mathbb{C}')$ defined by $\hat h(\nu) = \nu \circ h^{-1}$ is continuous at $\mu$ provided $\mu(D_h) = 0$, where $D_h$ is the set of discontinuity points of $h$.

For our purpose, the following slight extension will prove to be useful in developing rigorous arguments.

Lemma 2.4. Let $\mathbb{S}_0$ be a measurable subset of $\mathbb{S}$, and let $h : (\mathbb{S}_0, \mathscr{S}_{\mathbb{S}_0}) \to (\mathbb{S}'\setminus\mathbb{C}', \mathscr{S}_{\mathbb{S}'\setminus\mathbb{C}'})$ be a measurable mapping such that $h^{-1}(A')$ is bounded away from $\mathbb{C}$ for any $A' \in \mathscr{S}_{\mathbb{S}'\setminus\mathbb{C}'}$ bounded away from $\mathbb{C}'$.
Then $\hat h : \mathbb{M}(\mathbb{S}\setminus\mathbb{C}) \to \mathbb{M}(\mathbb{S}'\setminus\mathbb{C}')$ defined by $\hat h(\nu) = \nu \circ h^{-1}$ is continuous at $\mu$ provided that $\mu(\partial\mathbb{S}_0 \setminus \mathbb{C}^r) = 0$ and $\mu(D_h \setminus \mathbb{C}^r) = 0$ for all $r > 0$, where $D_h$ is the set of discontinuity points of $h$.

When we focus on Lévy processes, we are specifically interested in the case where $\mathbb{S}$ is $\mathbb{R}_+^{\infty\downarrow} \times [0,1]^\infty$, where $\mathbb{R}_+^{\infty\downarrow} \triangleq \{x \in \mathbb{R}_+^\infty : x_1 \ge x_2 \ge \cdots\}$, and $\mathbb{S}'$ is the Skorokhod space $\mathbb{D} = \mathbb{D}([0,1], \mathbb{R})$, the space of real-valued RCLL functions on $[0,1]$. We use the metrics $d_{\mathbb{R}_+^{\infty\downarrow}}(x,y) = \sum_{i=1}^\infty \frac{|x_i - y_i| \wedge 1}{2^i}$ and $d_{[0,1]^\infty}(u,v) = \sum_{i=1}^\infty \frac{|u_i - v_i|}{2^i}$ for $\mathbb{R}_+^{\infty\downarrow}$ and $[0,1]^\infty$, respectively. For the finite product of metric spaces, we use the maximum metric; i.e., we use $d_{\mathbb{S}_1\times\cdots\times\mathbb{S}_d}\bigl((x_1,\ldots,x_d),(y_1,\ldots,y_d)\bigr) \triangleq \max_{i=1,\ldots,d} d_{\mathbb{S}_i}(x_i,y_i)$ for the product $\mathbb{S}_1\times\cdots\times\mathbb{S}_d$ of metric spaces $(\mathbb{S}_i, d_{\mathbb{S}_i})$. For $\mathbb{D}$, we use the usual Skorokhod $J_1$ metric $d(x,y) \triangleq \inf_{\lambda\in\Lambda} \|\lambda - e\| \vee \|x\circ\lambda - y\|$, where $\Lambda$ denotes the set of all non-decreasing homeomorphisms from $[0,1]$ onto itself, $e$ denotes the identity, and $\|\cdot\|$ denotes the supremum norm. Let
$$\mathbb{S}_j \triangleq \bigl\{(x,u) \in \mathbb{R}_+^{\infty\downarrow} \times [0,1]^\infty : 0, 1, u_1, \ldots, u_j \text{ are all distinct}\bigr\}.$$
This set will play the role of $\mathbb{S}_0$ of Lemma 2.4. Define $T_j : \mathbb{S}_j \to \mathbb{D}$ by $T_j(x,u) = \sum_{i=1}^j x_i \mathbb{1}_{[u_i,1]}$. Let $\mathbb{D}_j$ be the subspace of the Skorokhod space consisting of non-decreasing step functions, vanishing at the origin, with exactly $j$ jumps, and $\mathbb{D}_{\le j} \triangleq \bigcup_{0\le i\le j}\mathbb{D}_i$; i.e., non-decreasing step functions vanishing at the origin with at most $j$ jumps. Similarly, let $\mathbb{D}_{<j} \triangleq \bigcup_{0\le i\le j-1}\mathbb{D}_i$.

To obtain the large deviations for two-sided Lévy measures, we will first establish the large deviations for independent spectrally positive Lévy processes, and then apply Lemma 2.4 with $h(\xi,\zeta) = \xi - \zeta$. The next lemma verifies two important conditions of Lemma 2.4 for such $h$. Let $\mathbb{D}_{l,m}$ denote the subspace of the Skorokhod space consisting of step functions vanishing at the origin with exactly $l$ upward jumps and $m$ downward jumps.
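The step-path machinery just introduced is concrete enough to code directly. The following sketch (our own illustration; the function and variable names are ours, not the paper's) implements the map $T_j(x,u) = \sum_{i=1}^j x_i \mathbb{1}_{[u_i,1]}$ and evaluates the resulting element of $\mathbb{D}_j$:

```python
def T(x, u):
    """Return the step path t -> sum_i x_i * 1[u_i <= t] determined by
    jump sizes x = (x_1, ..., x_j) and jump times u = (u_1, ..., u_j).
    When all x_i > 0 and the u_i are distinct points of (0, 1), the
    result lies in D_j: a non-decreasing step function, vanishing at
    the origin, with exactly j upward jumps."""
    if len(x) != len(u):
        raise ValueError("need exactly one jump time per jump size")

    def path(t):
        # Sum the sizes of all jumps that have occurred by time t.
        return sum(xi for xi, ui in zip(x, u) if ui <= t)

    return path

# A two-jump path: size 2.0 at time 0.25 and size 0.5 at time 0.75.
xi = T([2.0, 0.5], [0.25, 0.75])
```

Evaluating gives $\xi(0) = 0$, $\xi(1/2) = 2$, and $\xi(1) = 2.5$. Perturbing a jump time slightly moves the path only a little in the $J_1$ metric, whereas deleting a jump of size $x_i$ moves it a positive distance away; this is the geometric picture behind sets bounded away from $\mathbb{D}_{<j}$.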
Given $\alpha, \beta > 1$, let $\mathbb{D}_{<j,k} \triangleq \bigcup_{(l,m)\in\mathbb{I}_{<j,k}} \mathbb{D}_{l,m}$ and $\mathbb{D}_{<(j,k)} \triangleq \bigcup_{(l,m)\in\mathbb{I}_{<j,k}} \mathbb{D}_l \times \mathbb{D}_m$, where
$$\mathbb{I}_{<j,k} \triangleq \bigl\{(l,m) \in \mathbb{Z}_+^2 \setminus \{(j,k)\} : (\alpha-1)l + (\beta-1)m \le (\alpha-1)j + (\beta-1)k\bigr\}$$
and $\mathbb{Z}_+$ denotes the set of non-negative integers. Note that in the definition of $\mathbb{I}_{<j,k}$, the inequality is not strict; however, we choose to use the strict-inequality notation to emphasize that $(j,k)$ is not included in $\mathbb{I}_{<j,k}$.

Lemma 2.6. Let $h : \mathbb{D}\times\mathbb{D} \to \mathbb{D}$ be defined as $h(\xi,\zeta) \triangleq \xi - \zeta$. Then $h$ is continuous at every $(\xi,\zeta) \in \mathbb{D}\times\mathbb{D}$ such that $(\xi(t)-\xi(t-))(\zeta(t)-\zeta(t-)) = 0$ for all $t \in (0,1]$. Moreover, $h^{-1}(A) \subseteq \mathbb{D}\times\mathbb{D}$ is bounded away from $\mathbb{D}_{<(j,k)}$ for any $A \subseteq \mathbb{D}$ bounded away from $\mathbb{D}_{<j,k}$.

We next characterize convergence-determining classes for the convergence in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$.

Lemma 2.7. Suppose that (i) $\mathcal{A}_p$ is a $\pi$-system; (ii) each open set $G \subseteq \mathbb{S}$ bounded away from $\mathbb{C}$ is a countable union of sets in $\mathcal{A}_p$; and (iii) for each closed set $F \subseteq \mathbb{S}$ bounded away from $\mathbb{C}$, there is a set $A \in \mathcal{A}_p$ bounded away from $\mathbb{C}$ such that $F \subseteq A^\circ$ and $\mu(A\setminus A^\circ) = 0$. If, in addition, $\mu \in \mathbb{M}(\mathbb{S}\setminus\mathbb{C})$ and $\mu_n(A) \to \mu(A)$ for every $A \in \mathcal{A}_p$ such that $A$ is bounded away from $\mathbb{C}$, then $\mu_n \to \mu$ in $\mathbb{M}(\mathbb{S}\setminus\mathbb{C})$.

Remark 1. Since $\mathbb{S}$ is a separable metric space, the Lindelöf property holds. Therefore, a sufficient condition for assumption (ii) of Lemma 2.7 is that for every $x \in \mathbb{S}\setminus\mathbb{C}$ and $\epsilon > 0$, there is $A \in \mathcal{A}_p$ such that $x \in A^\circ \subseteq B(x,\epsilon)$. To see that this implies assumption (ii), note that for any given open set $G$, one can construct a cover $\{(A_x)^\circ : x \in G\}$ of $G$ by choosing $A_x$ so that $x \in (A_x)^\circ \subseteq G$, and then extract a countable subcover (due to the Lindelöf property) whose union is equal to $G$. Note also that if $A$ in assumption (iii) is open, then $\mu(A\setminus A^\circ) = \mu(\emptyset) = 0$ automatically.

3 Large Deviations for Lévy Processes

In this section, we present large-deviations results for scaled Lévy processes with heavy-tailed Lévy measures.
Section 3.1 studies a special case, where the Lévy measure is concentrated on the positive part of the real line, and Section 3.2 extends this result to Lévy processes with two-sided Lévy measures. In both cases, let $X_n(t) \triangleq X(nt)$ be a scaled process of $X$, where $X$ is a Lévy process with Lévy measure $\nu$. Recall that $X_n$ has the Itô representation (see, for example, Section 2 of Kyprianou, 2014):
$$X_n(s) = nsa + B(ns) + \int_{|x|\le 1} x\,\bigl[N([0,ns]\times dx) - ns\,\nu(dx)\bigr] + \int_{|x|>1} x\,N([0,ns]\times dx),$$
with $a$ a drift parameter, $B$ a Brownian motion, and $N$ a Poisson random measure with mean measure $\mathrm{Leb}\times\nu$ on $[0,n]\times(0,\infty)$; $\mathrm{Leb}$ denotes the Lebesgue measure.

3.1 One-Sided Lévy Processes

Let $X$ be a Lévy process with Lévy measure $\nu$. In this section, we assume that $\nu$ is a regularly varying (at infinity, with index $-\alpha < -1$) Lévy measure concentrated on $(0,\infty)$. Consider the centered and scaled process
$$\bar X_n(s) \triangleq \tfrac{1}{n}X_n(s) - sa - \mu_1^+\nu_1^+ s, \qquad (3.1)$$
where $\mu_1^+ \triangleq \frac{1}{\nu_1^+}\int_{[1,\infty)} x\,\nu(dx)$ and $\nu_1^+ \triangleq \nu[1,\infty)$. For each constant $\gamma > 1$, let $\nu_\gamma(x,\infty) \triangleq x^{-\gamma}$, and let $\nu_\gamma^j$ denote the restriction (to $\mathbb{R}_+^{j\downarrow}$) of the $j$-fold product measure of $\nu_\gamma$. Let $C_0(\cdot) \triangleq \delta_{\mathbf 0}(\cdot)$ be the Dirac measure concentrated on the zero function. Additionally, for each $j \ge 1$, define a measure $C_j \in \mathbb{M}(\mathbb{D}\setminus\mathbb{D}_{<j})$ concentrated on $\mathbb{D}_j$ as
$$C_j(\cdot) \triangleq \mathbb{E}\Bigl[\nu_\alpha^j\bigl\{y \in (0,\infty)^j : \textstyle\sum_{i=1}^j y_i \mathbb{1}_{[U_i,1]} \in \cdot\,\bigr\}\Bigr],$$
where the random variables $U_i$, $i \ge 1$, are i.i.d. uniform on $[0,1]$.

Theorem 3.1. For each $j \ge 0$,
$$(n\nu[n,\infty))^{-j}\, P(\bar X_n \in \cdot) \to C_j(\cdot) \qquad (3.2)$$
in $\mathbb{M}(\mathbb{D}\setminus\mathbb{D}_{<j})$, as $n \to \infty$. Moreover, $\bar X_n$ is asymptotically equivalent to a process that assumes values in $\mathbb{D}_{\le j}$ almost surely.

Proof Sketch. The proof of Theorem 3.1 is based on establishing the asymptotic equivalence of $\bar X_n$ and the process obtained by just keeping its $j$ biggest jumps, which we will denote by $\hat J_n^j$ in Section 5. Such an equivalence is established via Proposition 5.1 and Proposition 5.2.
Then, Proposition 5.3 identifies the limit of $\hat J_n^j$, which coincides with the limit in (3.2). The full proof of Theorem 3.1 is provided in Section 5.2.

Recall that $\mathbb{D}^{\uparrow}_s$ denotes the subset of $\mathbb{D}$ consisting of non-decreasing step functions vanishing at the origin, and $\mathcal{D}^+(\xi)$ denotes the number of upward jumps of an element $\xi$ in $\mathbb{D}$. Finally, set
$$J(A) \triangleq \inf_{\xi \in \mathbb{D}^{\uparrow}_s \cap A} \mathcal{D}^+(\xi). \qquad (3.3)$$
Now we are ready to present the main result of this section, which is the following large-deviations theorem for $\bar X_n$.

Theorem 3.2. Suppose that $A$ is a measurable set. If $J(A) < \infty$, and if $A^\delta \cap \mathbb{D}_{\le J(A)}$ is bounded away from $\mathbb{D}_{<J(A)}$ for some $\delta > 0$, then
$$C_{J(A)}(A^\circ) \le \liminf_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}} \le \limsup_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}} \le C_{J(A)}(\bar A). \qquad (3.4)$$
If $J(A) = \infty$, and $A^\delta \cap \mathbb{D}_{\le i+1}$ is bounded away from $\mathbb{D}_{\le i}$ for some $\delta > 0$ and $i \ge 0$, then
$$\lim_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^i} = 0. \qquad (3.5)$$
In particular, in case $J(A) < \infty$, (3.4) holds if $A$ is bounded away from $\mathbb{D}_{<J(A)}$; in case $J(A) = \infty$, (3.5) holds if $A$ is bounded away from $\mathbb{D}_{\le i}$.

Proof. We first consider the case $J(A) < \infty$. Note that $J(A^\circ) > J(A)$ implies that $A^\circ$ does not contain any element of $\mathbb{D}_{J(A)}$. Since $C_{J(A)}$ is supported on $\mathbb{D}_{J(A)}$, $A^\circ$ is then a $C_{J(A)}$-null set. Therefore, the lower bound holds trivially if $J(A^\circ) > J(A)$. On the other hand, $J(A) = J(\bar A)$. To see this, suppose not; i.e., $J(\bar A) < J(A)$. Then there exists $\zeta \in \mathbb{D}^{\uparrow}_s \cap \bar A$ such that $\zeta \in \mathbb{D}_{<J(A)}$. This implies that $\zeta \in A^\delta \cap \mathbb{D}_{\le J(A)}$ for any $\delta > 0$, which contradicts the assumption that $A^\delta \cap \mathbb{D}_{\le J(A)}$ is bounded away from $\mathbb{D}_{<J(A)}$ for some $\delta > 0$. Hence, we may assume that $J(A^\circ) = J(A) = J(\bar A)$.

Now, from Theorem 3.1 with $j = J(A^\circ)$ along with the lower bound of Lemma 2.1,
$$C_{J(A)}(A^\circ) = C_{J(A^\circ)}(A^\circ) \le \liminf_{n\to\infty} \frac{P(\bar X_n \in A^\circ)}{(n\nu[n,\infty))^{J(A^\circ)}} \le \liminf_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}}.$$
Similarly, from Theorem 3.1 with $j = J(\bar A)$ along with the upper bound of Lemma 2.1,
$$\limsup_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}} \le \limsup_{n\to\infty} \frac{P(\bar X_n \in \bar A)}{(n\nu[n,\infty))^{J(\bar A)}} \le C_{J(\bar A)}(\bar A) = C_{J(A)}(\bar A).$$
In case $J(A) = \infty$, we reach the conclusion by applying Theorem 3.1 with $j = i$ along with noting that $C_i(\bar A) = 0$.

Theorem 3.2 dictates the "right" choice of $j$ in Theorem 3.1 for which (3.2) can lead to a limit in $(0,\infty)$. We conclude this section with an investigation of a sufficient condition for $C_j$-continuity; i.e., we provide a sufficient condition on $A$ which guarantees $C_j(\partial A) = 0$. The latter property implies
$$C_j(A^\circ) = C_j(A) = C_j(\bar A), \qquad (3.6)$$
so that the liminf and limsup in our asymptotic estimates yield the same result. Assume that $A$ is a subset of $\mathbb{D}_j$ bounded away from $\mathbb{D}_{<j}$; i.e., $d(A, \mathbb{D}_{<j}) > \gamma$ for some $\gamma > 0$. Consider a path $\xi \in A$. Note that every $\xi \in \mathbb{D}_j$ is determined by the pair of jump sizes and jump times $(x,u) \in (0,\infty)^j \times [0,1]^j$; i.e., $\xi(t) = \sum_{i=1}^j x_i \mathbb{1}_{[u_i,1]}(t)$. Formally, we define a mapping $\hat T_j : \hat{\mathbb{S}}_j \to \mathbb{D}_j$ by $\hat T_j(x,u) = \sum_{i=1}^j x_i \mathbb{1}_{[u_i,1]}$, where $\hat{\mathbb{S}}_j \triangleq \{(x,u) \in \mathbb{R}_+^{j\downarrow} \times [0,1]^j : 0, 1, u_1, \ldots, u_j \text{ are all distinct}\}$. Since $d(A, \mathbb{D}_{<j}) > \gamma$, we know that $\hat T_j(x,u) \in A$ implies $x \in (\gamma,\infty)^j$; see Lemma 5.4 (b). In view of this, we can see that (3.6) holds if the Lebesgue measure of $\hat T_j^{-1}(\partial A)$ is 0, since $C_j(A) = \int_{(x,u)\in\hat T_j^{-1}(A)} du\, d\nu_\alpha^j(x)$. One of the typical settings that arises in applications is that the set $A$ can be written as a finite combination of unions and intersections of $\phi_1^{-1}(A_1), \ldots, \phi_m^{-1}(A_m)$, where each $\phi_i : \mathbb{D} \to S_i$ is a continuous function, and all sets $A_i$ are subsets of a general topological space $S_i$. If we denote this operation of taking unions and intersections by $\Psi$ (i.e., $A = \Psi(\phi_1^{-1}(A_1), \ldots, \phi_m^{-1}(A_m))$), then
$$\Psi(\phi_1^{-1}(A_1^\circ), \ldots, \phi_m^{-1}(A_m^\circ)) \subseteq A^\circ \subseteq A \subseteq \bar A \subseteq \Psi(\phi_1^{-1}(\bar A_1), \ldots, \phi_m^{-1}(\bar A_m)).$$
Therefore, (3.6) holds if $\hat T_j^{-1}\bigl(\Psi(\phi_1^{-1}(\bar A_1), \ldots, \phi_m^{-1}(\bar A_m))\bigr) \setminus \hat T_j^{-1}\bigl(\Psi(\phi_1^{-1}(A_1^\circ), \ldots, \phi_m^{-1}(A_m^\circ))\bigr)$ has Lebesgue measure zero. A similar principle holds for the limit measures $C_{j,k}$, defined in the next section where we deal with two-sided Lévy processes.

3.2 Two-Sided Lévy Processes

Consider a two-sided Lévy measure $\nu$ for which $\nu[x,\infty)$ is regularly varying with index $-\alpha$ and $\nu(-\infty,-x]$ is regularly varying with index $-\beta$. Let
$$\bar X_n(s) \triangleq \tfrac{1}{n}X_n(s) - sa - (\mu_1^+\nu_1^+ - \mu_1^-\nu_1^-)\,s,$$
where
$$\mu_1^+ \triangleq \frac{1}{\nu_1^+}\int_{[1,\infty)} x\,\nu(dx), \quad \nu_1^+ \triangleq \nu[1,\infty), \quad \mu_1^- \triangleq -\frac{1}{\nu_1^-}\int_{(-\infty,-1]} x\,\nu(dx), \quad \nu_1^- \triangleq \nu(-\infty,-1].$$
Recall the definition of $\mathbb{D}_{j,k}$ given below Corollary 2.2, and the definitions of $\nu_\alpha^j$ and $\nu_\beta^k$ as given below (3.1). Let $C_{0,0}(\cdot) \triangleq \delta_{\mathbf 0}(\cdot)$ be the Dirac measure concentrated on the zero function. For each $(j,k) \in \mathbb{Z}_+^2 \setminus \{(0,0)\}$, define a measure $C_{j,k} \in \mathbb{M}(\mathbb{D}\setminus\mathbb{D}_{<j,k})$ concentrated on $\mathbb{D}_{j,k}$ as
$$C_{j,k}(\cdot) \triangleq \mathbb{E}\Bigl[\nu_\alpha^j \times \nu_\beta^k \bigl\{(y,z) : \textstyle\sum_{i=1}^j y_i \mathbb{1}_{[U_i,1]} - \sum_{i=1}^k z_i \mathbb{1}_{[V_i,1]} \in \cdot\,\bigr\}\Bigr],$$
where $U_i$, $V_i$, $i \ge 1$, are i.i.d. uniform on $[0,1]$.

Theorem 3.3. For each $(j,k) \in \mathbb{Z}_+^2$,
$$(n\nu[n,\infty))^{-j}\,(n\nu(-\infty,-n])^{-k}\, P(\bar X_n \in \cdot) \to C_{j,k}(\cdot) \qquad (3.7)$$
in $\mathbb{M}(\mathbb{D}\setminus\mathbb{D}_{<j,k})$, as $n \to \infty$.

For a measurable set $A$, define
$$(J(A), K(A)) \triangleq \operatorname*{arg\,min}_{(j,k)\in\mathbb{Z}_+^2 :\ \mathbb{D}_{j,k}\cap A \neq \emptyset} (\alpha-1)j + (\beta-1)k. \qquad (3.8)$$

Theorem 3.4. Suppose that $A$ is a measurable set. If the argument minimum in (3.8) is non-empty and $A$ is bounded away from $\mathbb{D}_{<J(A),K(A)}$, then the argument minimum is unique and
$$\liminf_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}(n\nu(-\infty,-n])^{K(A)}} \ge C_{J(A),K(A)}(A^\circ),$$
$$\limsup_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{J(A)}(n\nu(-\infty,-n])^{K(A)}} \le C_{J(A),K(A)}(\bar A). \qquad (3.9)$$
Moreover, if the argument minimum in (3.8) is empty and $A$ is bounded away from $\mathbb{D}_{<j,k}$ for some $(j,k) \in \mathbb{Z}_+^2$, then
$$\lim_{n\to\infty} \frac{P(\bar X_n \in A)}{(n\nu[n,\infty))^{j}(n\nu(-\infty,-n])^{k}} = 0. \qquad (3.10)$$

Lemma 3.1. Suppose that a sequence of $\mathbb{D}$-valued random elements $Y_n$ satisfies (3.7) (with $\bar X_n$ replaced with $Y_n$) for each $(j,k) \in \mathbb{Z}_+^2$. Then (3.9) (with $\bar X_n$ replaced with $Y_n$) holds if $A$ is a measurable set for which the argument minimum in (3.8) is non-empty, and $A$ is bounded away from $\mathbb{D}_{<J(A),K(A)}$.
Moreover, (3.10) (with $\bar X_n$ replaced with $Y_n$) holds if the argument minimum in (3.8) is empty and $A$ is bounded away from $\mathbb{D}_{<j,k}$.

Proof of Theorem 3.4. The uniqueness of the argument minimum is immediate from the assumption that $A$ is bounded away from $\mathbb{D}_{<J(A),K(A)}$. Since $\bar X_n$ satisfies (3.7) by Theorem 3.3, the conclusion of the theorem follows from applying Lemma 3.1 with $Y_n = \bar X_n$.

In case one is interested in a set for which the arg min in (3.8) is not unique, a natural approach is to partition $A$ into smaller sets and analyze each element separately. In the next theorem, we show that this strategy can be successfully employed with a minimal requirement on $A$. However, due to the presence of two different slowly varying functions $n^\alpha\nu[n,\infty)$ and $n^\beta\nu(-\infty,-n]$, the limit behavior may not be dominated by a single $\mathbb{D}_{l,m}$.

To deal with this case, let $\mathbb{I}_{=j,k} \triangleq \{(l,m) : (\alpha-1)l + (\beta-1)m = (\alpha-1)j + (\beta-1)k\}$, $\mathbb{I}_{\ll j,k} \triangleq \{(l,m) : (\alpha-1)l + (\beta-1)m < (\alpha-1)j + (\beta-1)k\}$, $\mathbb{D}_{=j,k} \triangleq \bigcup_{(l,m)\in\mathbb{I}_{=j,k}} \mathbb{D}_{l,m}$, and $\mathbb{D}_{\ll j,k} \triangleq \bigcup_{(l,m)\in\mathbb{I}_{\ll j,k}} \mathbb{D}_{l,m}$. Denote the slowly varying functions $n^\alpha\nu[n,\infty)$ and $n^\beta\nu(-\infty,-n]$ by $L_+(n)$ and $L_-(n)$, respectively.

Theorem 3.5. Let $A$ be a measurable set and suppose that the argument minimum in (3.8) is non-empty and contains a pair of integers $(J(A), K(A))$. If $A^\delta \cap \mathbb{D}_{=J(A),K(A)}$ is bounded away from $\mathbb{D}_{\ll J(A),K(A)}$ for some $\delta > 0$, then for any given $\epsilon > 0$, there exists $N \in \mathbb{N}$ such that
$$P(\bar X_n \in A) \ge \sum_{(l,m)} \bigl(C_{l,m}(A^\circ) - \epsilon\bigr)\, \frac{L_+^l(n)\,L_-^m(n)}{n^{(\alpha-1)J(A)+(\beta-1)K(A)}},$$
$$P(\bar X_n \in A) \le \sum_{(l,m)} \bigl(C_{l,m}(\bar A) + \epsilon\bigr)\, \frac{L_+^l(n)\,L_-^m(n)}{n^{(\alpha-1)J(A)+(\beta-1)K(A)}}, \qquad (3.11)$$
for all $n \ge N$, where the summations are over the pairs $(l,m) \in \mathbb{I}_{=J(A),K(A)}$. In particular, (3.11) holds if $A$ is bounded away from $\mathbb{D}_{\ll J(A),K(A)}$.

Proof.
Note first that, from Lemma 5.5 (i), there exists a $\delta' > 0$ such that $\mathbb{D}_{\ll J(A),K(A)}$ is bounded away from $A \cap (\mathbb{D}_{l,m})^{\delta'}$ for all $(l,m) \in \mathbb{I}_{=J(A),K(A)}$. Moreover, applying Lemma 5.5 (ii) to each $A \cap (\mathbb{D}_{l,m})^{\delta'}$, we conclude that there exists $\rho > 0$ such that $A \cap (\mathbb{D}_{l,m})^\rho$ is bounded away from $(\mathbb{D}_{j,k})^\rho$ for any two distinct pairs $(l,m), (j,k) \in \mathbb{I}_{=J(A),K(A)}$. This means that the sets $A \cap (\mathbb{D}_{l,m})^\rho$ are all disjoint and bounded away from $\mathbb{D}_{\ll J(A),K(A)}$. Applying the lower bound of Theorem 3.4 to each of them, we find that for any given $\epsilon > 0$, there exists an $N_{l,m} \in \mathbb{N}$ such that
$$\bigl(C_{l,m}(A^\circ) - \epsilon\bigr)\, \frac{L_+^l(n)\,L_-^m(n)}{n^{(\alpha-1)l+(\beta-1)m}} \le P\bigl(\bar X_n \in A \cap (\mathbb{D}_{l,m})^\rho\bigr), \qquad (3.12)$$
for all $n \ge N_{l,m}$. Meanwhile, an obvious bound holds for $A \setminus \bigcup_{(l,m)\in\mathbb{I}_{=J(A),K(A)}} (\mathbb{D}_{l,m})^\rho$; i.e.,
$$0 \le P\Bigl(\bar X_n \in A \setminus \textstyle\bigcup_{(l,m)\in\mathbb{I}_{=J(A),K(A)}} (\mathbb{D}_{l,m})^\rho\Bigr). \qquad (3.13)$$
Since $(\alpha-1)l + (\beta-1)m = (\alpha-1)J(A) + (\beta-1)K(A)$ for $(l,m) \in \mathbb{I}_{=J(A),K(A)}$, summing (3.12) over $(l,m) \in \mathbb{I}_{=J(A),K(A)}$ together with (3.13), we arrive at the lower bound of the theorem, with $N = \max_{(l,m)\in\mathbb{I}_{=J(A),K(A)}} N_{l,m}$.

Turning to the upper bound, we apply Theorem 3.4 to $\bar A \cap (\mathbb{D}_{l,m})^\rho$ to get
$$\limsup_{n\to\infty} \frac{P(\bar X_n \in \bar A \cap (\mathbb{D}_{l,m})^\rho)}{(n\nu[n,\infty))^l (n\nu(-\infty,-n])^m} \le C_{l,m}\bigl(\bar A \cap (\mathbb{D}_{l,m})^\rho\bigr) = C_{l,m}(\bar A)$$
for each $(l,m) \in \mathbb{I}_{=J(A),K(A)}$. That is, for any given $\epsilon > 0$, there exists $N'_{l,m} \in \mathbb{N}$ such that
$$P\bigl(\bar X_n \in A \cap (\mathbb{D}_{l,m})^\rho\bigr) \le \bigl(C_{l,m}(\bar A) + \epsilon/2\bigr)\, \frac{L_+^l(n)\,L_-^m(n)}{n^{(\alpha-1)l+(\beta-1)m}}, \qquad (3.14)$$
for all $n \ge N'_{l,m}$. On the other hand, since $\bar A \setminus \bigcup_{(l,m)\in\mathbb{I}_{=J(A),K(A)}} (\mathbb{D}_{l,m})^\rho$ is closed and bounded away from $\mathbb{D}_{<J(A),K(A)}$,
$$\limsup_{n\to\infty} \frac{P\bigl(\bar X_n \in A \setminus \bigcup_{(l,m)} (\mathbb{D}_{l,m})^\rho\bigr)}{(n\nu[n,\infty))^{J(A)} (n\nu(-\infty,-n])^{K(A)}} \le C_{J(A),K(A)}\Bigl(\bar A \setminus \textstyle\bigcup_{(l,m)} (\mathbb{D}_{l,m})^\rho\Bigr),$$
where the union is over the pairs $(l,m) \in \mathbb{I}_{=J(A),K(A)}$.
Therefore, there exists $N'$ such that
$$P\Bigl(\bar X_n \in A \setminus \textstyle\bigcup_{(l,m)} (\mathbb{D}_{l,m})^\rho\Bigr) \le \Bigl(C_{J(A),K(A)}\Bigl(\bar A \setminus \textstyle\bigcup_{(l,m)} (\mathbb{D}_{l,m})^\rho\Bigr) + \epsilon/2\Bigr)\, \frac{L_+^{J(A)}(n)\,L_-^{K(A)}(n)}{n^{(\alpha-1)J(A)+(\beta-1)K(A)}} = (\epsilon/2)\, \frac{L_+^{J(A)}(n)\,L_-^{K(A)}(n)}{n^{(\alpha-1)J(A)+(\beta-1)K(A)}}, \qquad (3.15)$$
for $n \ge N'$, since $\bar A \setminus \bigcup_{(l,m)} (\mathbb{D}_{l,m})^\rho$ is disjoint from the support of $C_{J(A),K(A)}$. Summing (3.14) over $(l,m) \in \mathbb{I}_{=J(A),K(A)}$ and adding (3.15),
$$P(\bar X_n \in A) \le \sum_{(l,m)} \bigl(C_{l,m}(\bar A) + \epsilon\bigr)\, \frac{L_+^l(n)\,L_-^m(n)}{n^{(\alpha-1)J(A)+(\beta-1)K(A)}}, \qquad (3.16)$$
for $n \ge N$, where $N = N' \vee \max_{(l,m)\in\mathbb{I}_{=J(A),K(A)}} N'_{l,m}$.

4 Applications

This section explores the implications of the large-deviations results in Section 3, and is organized as follows. Section 4.1 proves a result similar to Theorem 3.4, now focusing on random walks with regularly varying increments. Section 4.2 illustrates that conditional limit theorems can easily be studied by means of the limit theorems established in Section 3. Section 4.3 develops a weak large deviation principle (LDP) of the form (1.3) for the scaled Lévy processes. Finally, Section 4.4 shows that the weak LDP proved in Section 4.3 is the best one can hope for in the presence of regularly varying tails, by showing that a full LDP of the form (1.3) does not exist.

4.1 Random Walks

Let $S_k$, $k \ge 0$, be a random walk, set $\bar S_n(t) = S_{[nt]}/n$, $t \ge 0$, and define $\bar S_n = \{\bar S_n(t),\, t \in [0,1]\}$. Let $N(t)$, $t \ge 0$, be an independent unit-rate Poisson process. Define the Lévy process $X(t) \triangleq S_{N(t)}$, $t \ge 0$, and set $\bar X_n(t) \triangleq X(nt)/n$, $t \ge 0$. We derive the large deviations of $\bar S_n$ from those of $\bar X_n$. Let $J(\cdot)$, $K(\cdot)$, and $C_{j,k}(\cdot)$ be defined as in Section 3.2.

Theorem 4.1. Suppose that $P(S_1 \ge x)$ is regularly varying with index $-\alpha$ and $P(S_1 \le -x)$ is regularly varying with index $-\beta$. Let $A$ be a measurable set bounded away from $\mathbb{D}_{<J(A),K(A)}$.
Then
\[
\liminf_{n\to\infty}\frac{\mathbb{P}(\bar S_n\in A)}{(n\mathbb{P}(S_1\ge n))^{J(A)}\,(n\mathbb{P}(S_1\le-n))^{K(A)}} \ge C_{J(A),K(A)}(A^\circ),
\qquad
\limsup_{n\to\infty}\frac{\mathbb{P}(\bar S_n\in A)}{(n\mathbb{P}(S_1\ge n))^{J(A)}\,(n\mathbb{P}(S_1\le-n))^{K(A)}} \le C_{J(A),K(A)}(\bar A). \tag{4.1}
\]

*Proof.* The idea is to combine our notion of asymptotic equivalence with Theorem 3.4. First, we need to derive the asymptotic behavior of the Lévy measure of the constructed Lévy process. From Example A3.17 in Embrechts et al. (1997), we obtain $\mathbb{P}(X(1)\ge x)\sim\mathbb{P}(S_1\ge x)$. Moreover, Embrechts et al. (1979) implies that $\nu(x,\infty)\sim\mathbb{P}(X(1)\ge x)$. Similarly, it follows that $\nu(-\infty,-x)\sim\mathbb{P}(S_1\le-x)$.

Now, from Lemma 3.1, (4.1) is proved if (3.7) holds for $\bar S_n$. In view of Corollary 2.1, (3.7) holds—and hence the proof is complete—if we prove the asymptotic equivalence between $\bar X_n$ and $\bar S_n$ (w.r.t. a geometrically decaying sequence). To prove the asymptotic equivalence, we first argue that the Skorokhod distance between $\bar S_n$ and $\bar X_n$ is bounded by $\sup_{t\in[0,1]}|N(tn)/n-t|$. To see this, define the homeomorphism $\lambda_n(t)$ as the linear interpolation of the jump points of $N(nt)/n$, and observe that $\bar X_n(t)=\bar S_n(\lambda_n(t))$. Thus, the distance between $\bar S_n$ and $\bar X_n$ is bounded by $\sup_{t\in[0,1]}|\lambda_n(t)-t|$, which is itself bounded by $\sup_{t\in[0,1]}|N(tn)/n-t|$. From Lemma A.4,
\[
\mathbb{P}\Bigl(\sup_{t\in[0,1]}|N(tn)/n-t|>\delta\Bigr) \le 3\sup_{t\in[0,1]}\mathbb{P}\bigl(|N(tn)/n-t|>\delta/3\bigr),
\]
where $\mathbb{P}(|N(tn)/n-t|>\delta/3)$ vanishes at a geometric rate w.r.t. $n$, uniformly in $t\in[0,1]$. $\square$

4.2 Conditional Limit Theorems

As before, $\bar X_n$ denotes the scaled Lévy process defined as in Section 3.1 for the one-sided case and Section 3.2 for the two-sided case, respectively. In this section, we present conditional limit theorems which give a precise description of the limit law of $\bar X_n$ conditional on $\bar X_n\in A$. The next result, for the one-sided case, follows immediately from the definition of weak convergence and Theorem 3.2.
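As a concrete illustration of what such conditional limits look like (anticipating the probabilistic description given with Corollary 4.1 below), one can sample the limiting one-sided description—finitely many i.i.d. Pareto jumps at i.i.d. uniform times—by rejection. The sketch below is an illustration only; the helper names and the target set $B$ are ours, not part of the paper.

```python
import random

def pareto(alpha, delta, rng):
    # inverse CDF: P(chi > x) = (x/delta)^(-alpha) for x >= delta
    return delta * (1.0 - rng.random()) ** (-1.0 / alpha)

def sample_conditional_path(j, alpha, delta_b, in_B, rng, max_tries=100000):
    """Rejection sampler for the one-sided conditional limit: j i.i.d.
    Pareto(alpha, delta_B) jump sizes at i.i.d. Uniform(0,1) times,
    accepted when the resulting step path lies in B (membership test in_B)."""
    for _ in range(max_tries):
        jumps = sorted((rng.random(), pareto(alpha, delta_b, rng))
                       for _ in range(j))
        path = lambda t, js=tuple(jumps): sum(x for u, x in js if u <= t)
        if in_B(path):
            return jumps, path
    raise RuntimeError("acceptance event too rare for this B")

rng = random.Random(7)
# hypothetical target set: B = {xi : xi(1) >= 3}, reachable with j = 1 jump
jumps, path = sample_conditional_path(1, alpha=1.5, delta_b=1.0,
                                      in_B=lambda xi: xi(1.0) >= 3.0, rng=rng)
print(jumps, path(1.0))
```

The accepted sample has one jump of size at least 3 at a uniform time, matching the qualitative picture of the rare event being caused by a single suitably Pareto-distributed big jump.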
**Corollary 4.1.** Suppose that a subset $B$ of $\mathbb{D}$ satisfies the conditions in Theorem 3.2 and that $C_{J(B)}(B^\circ)=C_{J(B)}(B)=C_{J(B)}(\bar B)>0$. Let $\bar X_n^{|B}$ be a process having the conditional law of $\bar X_n$ given that $\bar X_n\in B$. Then there exists a process $\bar X_\infty^{|B}$ such that $\bar X_n^{|B}\Rightarrow\bar X_\infty^{|B}$ in $\mathbb{D}$. Moreover, if $\mathbb{P}^{|B}(\cdot)$ is the law of $\bar X_\infty^{|B}$, then
\[
\mathbb{P}^{|B}\bigl(\bar X_\infty^{|B}\in\cdot\bigr) := \frac{C_{J(B)}(\,\cdot\cap B)}{C_{J(B)}(B)}.
\]

Let us provide a more direct probabilistic description of the process $\bar X_\infty^{|B}$. Directly from the definition of $\mathbb{P}^{|B}$ we have that
\[
\bar X_\infty^{|B}(t) = \sum_{n=1}^{J(B)}\chi_n 1_{[U_n,1]}(t),
\]
where $U_1,\ldots,U_{J(B)}$ are i.i.d. uniform random variables on $[0,1]$ and
\[
\mathbb{P}^{|B}\bigl(\chi_1\in dx_1,\ldots,\chi_{J(B)}\in dx_{J(B)}\bigr) = \frac{\prod_{i=1}^{J(B)}\bigl(\alpha x_i^{-\alpha-1}\,dx_i\bigr)\,I\bigl(x_{J(B)}>\cdots>x_1>0\bigr)\,\mathbb{P}\bigl(\sum_{n=1}^{J(B)}x_n 1_{[U_n,1]}(\cdot)\in B\bigr)}{C_{J(B)}(B)}.
\]
An easier-to-interpret description of $\mathbb{P}^{|B}$ can be obtained by using the fact that $\delta_B:=d\bigl(B,\mathbb{D}_{J(B)-1}\bigr)>0$. Define an auxiliary probability measure $\tilde{\mathbb{P}}^{|B}$, under which not only are $U_1,\ldots,U_{J(B)}$ i.i.d. Uniform$(0,1)$, but also $\chi_1,\ldots,\chi_{J(B)}$ are i.i.d. Pareto$(\alpha,\delta_B)$ and independent of the $U_i$'s; that is,
\[
\tilde{\mathbb{P}}^{|B}\bigl(\chi_1\in dx_1,\ldots,\chi_{J(B)}\in dx_{J(B)}\bigr) = (\alpha/\delta_B)^{J(B)}\prod_{i=1}^{J(B)}(x_i/\delta_B)^{-\alpha-1}\,dx_i\,I(x_i\ge\delta_B).
\]
Then, we have that
\[
\mathbb{P}^{|B}\bigl(\bar X_\infty^{|B}\in\cdot\bigr) = \tilde{\mathbb{P}}^{|B}\bigl(\bar X_\infty^{|B}\in\cdot\,\big|\,\bar X_\infty^{|B}\in B\bigr). \tag{4.2}
\]
Moreover, note that
\[
\tilde{\mathbb{P}}^{|B}\bigl(\bar X_\infty^{|B}\in B\bigr) = \delta_B^{-J(B)(\alpha+2)}\,C_{J(B)}(B)>0. \tag{4.3}
\]
In view of (4.2) and (4.3), one can say, at least qualitatively, that the most likely way in which the event $\bar X_n\in B$ occurs is by means of $J(B)$ i.i.d. jumps which are suitably Pareto distributed and occur uniformly throughout the time interval $[0,1]$.

**Corollary 4.2.**
Suppose that a subset $B$ of $\mathbb{D}$ satisfies the conditions in Theorem 3.4 and that $C_{J(B),K(B)}(B^\circ)=C_{J(B),K(B)}(B)=C_{J(B),K(B)}(\bar B)>0$. Let $\bar X_n^{|B}$ be a process having the conditional law of $\bar X_n$ given that $\bar X_n\in B$. Then $\bar X_n^{|B}\Rightarrow\bar X_\infty^{|B}$ in $\mathbb{D}$. Moreover, if $\mathbb{P}^{|B}(\cdot)$ is the law of $\bar X_\infty^{|B}$, then
\[
\mathbb{P}^{|B}\bigl(\bar X_\infty^{|B}\in\cdot\bigr) := \frac{C_{J(B),K(B)}(\,\cdot\cap B)}{C_{J(B),K(B)}(B)}.
\]

A probabilistic description, completely analogous to that given for the one-sided case, can also be provided here. Define $\delta_B=d\bigl(B,\mathbb{D}_{<J(B),K(B)}\bigr)>0$ and an auxiliary probability measure $\tilde{\mathbb{P}}^{|B}$ under which we have the following: $U_1,\ldots,U_{J(B)},V_1,\ldots,V_{K(B)}$ are i.i.d. Uniform$(0,1)$; $\chi_1,\ldots,\chi_{J(B)}$ are i.i.d. Pareto$(\alpha,\delta_B)$; and, finally, $\varrho_1,\ldots,\varrho_{K(B)}$ are i.i.d. Pareto$(\beta,\delta_B)$ random variables (all of these random variables being mutually independent). Then, write
\[
\bar X_\infty^{|B}(t) = \sum_{n=1}^{J(B)}\chi_n 1_{[U_n,1]}(t) - \sum_{n=1}^{K(B)}\varrho_n 1_{[V_n,1]}(t).
\]
Applying the same reasoning as in the one-sided case, we have that
\[
\mathbb{P}^{|B}\bigl(\bar X_\infty^{|B}\in\cdot\bigr) = \tilde{\mathbb{P}}^{|B}\bigl(\bar X_\infty^{|B}\in\cdot\mid\bar X_\infty^{|B}\in B\bigr)
\quad\text{and}\quad
\tilde{\mathbb{P}}^{|B}\bigl(\bar X_\infty^{|B}\in B\bigr) = \delta_B^{-J(B)(\alpha+2)-K(B)(\beta+2)}\,C_{J(B),K(B)}(B)>0.
\]
We note that these results also hold for random walks, and thus constitute a significant extension of Theorem 3.1 in Durrett (1980), where it is assumed that $\alpha>2$ and $B=\{\bar X_n(1)\ge a\}$.

4.3 Large Deviation Principle

In this section, we show that $\bar X_n$ satisfies a weak large deviation principle with speed $\log n$ and a rate function which is piecewise linear in the number of discontinuities. More specifically, define
\[
I(\xi) \triangleq \begin{cases}(\alpha-1)\,D_+(\xi)+(\beta-1)\,D_-(\xi), & \text{if $\xi$ is a step function and $\xi(0)=0$};\\ \infty, & \text{otherwise},\end{cases} \tag{4.4}
\]
where $D_+(\xi)$ and $D_-(\xi)$ denote the number of upward and downward jumps in $\xi$, respectively.

**Theorem 4.2.**
The scaled process $\bar X_n$ satisfies the weak large deviation principle with rate function $I$ and speed $\log n$; i.e.,
\[
-\inf_{x\in G}I(x) \le \liminf_{n\to\infty}\frac{\log\mathbb{P}(\bar X_n\in G)}{\log n} \tag{4.5}
\]
for every open set $G$, and
\[
\limsup_{n\to\infty}\frac{\log\mathbb{P}(\bar X_n\in K)}{\log n} \le -\inf_{x\in K}I(x) \tag{4.6}
\]
for every compact set $K$.

The proof of Theorem 4.2 is provided in Section 5.3. It is based on Theorem 3.4 and a reduction of the case of general $A$ to open neighborhoods, reminiscent of arguments made in the proof of Cramér's theorem; see Dembo and Zeitouni (2009).

We conclude the current section by showing that the weak LDP presented in the previous section is the best one can hope for in our setting, in the sense that for any Lévy process $X$ with a regularly varying Lévy measure, $\bar X_n$ cannot satisfy a strong LDP; i.e., (4.6) in Theorem 4.2 cannot be extended to all closed sets.

Consider the mapping $\pi:\mathbb{D}\to\mathbb{R}^2$ that maps a path in $\mathbb{D}$ to its largest upward and downward jump sizes, i.e.,
\[
\pi(\xi) \triangleq \Bigl(\sup_{t\in(0,1]}\bigl(\xi(t)-\xi(t-)\bigr),\ \sup_{t\in(0,1]}\bigl(\xi(t-)-\xi(t)\bigr)\Bigr).
\]
Note that $\pi$ is continuous, since each coordinate is continuous: for example, if the first coordinates (the largest upward jump sizes) of $\pi(\xi)$ and $\pi(\zeta)$ differ by $\epsilon$, then $d(\xi,\zeta)\ge\epsilon/2$, which implies that the first coordinate is continuous. Now, to derive a contradiction, suppose that $\bar X_n$ satisfies a strong LDP; in particular, suppose that (4.6) in Theorem 4.2 holds for all closed sets rather than just compact sets. Since $\pi$ is continuous w.r.t. the $J_1$ metric, $\pi(\bar X_n)$ has to satisfy a strong LDP with rate function $I'(y)=\inf\{I(\xi):\xi\in\mathbb{D},\,y=\pi(\xi)\}$ by the contraction principle, in case $I'$ is a rate function. (Since $I$ is not a good rate function, $I'$ is not automatically guaranteed to be a rate function per se; see, for example, Theorem 4.2.1 and the subsequent remarks of Dembo and Zeitouni, 2009.)
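For step paths with finitely many jumps, the rate function (4.4) and its contraction under $\pi$ are elementary to evaluate. A minimal sketch (helper names are ours; the infinite branch of (4.4) for non-step paths is not modeled):

```python
def rate_I(jumps, alpha, beta):
    """Rate function (4.4) on a step path given as a list of signed jump
    sizes (path assumed to start at 0): (alpha-1)*D_plus + (beta-1)*D_minus."""
    d_plus = sum(1 for x in jumps if x > 0)
    d_minus = sum(1 for x in jumps if x < 0)
    return (alpha - 1) * d_plus + (beta - 1) * d_minus

def rate_I_prime(y1, y2, alpha, beta):
    """Contraction of I under pi (largest up/down jump sizes): the cheapest
    path needs one up-jump iff y1 > 0 and one down-jump iff y2 > 0."""
    return (alpha - 1) * (y1 > 0) + (beta - 1) * (y2 > 0)

# two up-jumps and one down-jump: I = 2*(alpha-1) + (beta-1)
print(rate_I([2.0, 0.5, -1.0], alpha=2.0, beta=3.0))  # 4.0
print(rate_I_prime(2.0, 1.0, alpha=2.0, beta=3.0))    # 3.0
```

Under the weak LDP, $\mathbb{P}(\bar X_n \approx \xi)$ behaves like $n^{-I(\xi)}$, so each extra big jump costs one more power of $n^{-(\alpha-1)}$ or $n^{-(\beta-1)}$.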
From the exact form of $I'$, given by
\[
I'(y_1,y_2) = (\alpha-1)\,I(y_1>0)+(\beta-1)\,I(y_2>0),
\]
one can check that $I'$ indeed happens to be a rate function. For the sake of simplicity, suppose that $\alpha=\beta=2$ and $\nu[x,\infty)=\nu(-\infty,-x]=x^{-2}$. Let $\hat J_n\triangleq\frac1n Q_n^\leftarrow(\Gamma_1)$ and $\hat K_n\triangleq\frac1n R_n^\leftarrow(\Delta_1)$, where $Q_n^\leftarrow(y)\triangleq\inf\{s>0:n\nu[s,\infty)<y\}=(n/y)^{1/2}$ and $R_n^\leftarrow(y)\triangleq\inf\{s>0:n\nu(-\infty,-s]<y\}=(n/y)^{1/2}$. The random variables $\Gamma_1$ and $\Delta_1$ are standard exponential, and $U_1$, $V_1$ are uniform on $[0,1]$ (see also Section 5 for similar and more general notational conventions). Note that $\bar Y_n\triangleq(\hat J_n,\hat K_n)$ is exponentially equivalent to $\pi(\bar X_n)$ if we couple $\pi(\bar X_n)$ and $(\hat J_n,\hat K_n)$ using the representation of $\bar X_n$ as in (5.4): for any $\delta>0$,
\[
\mathbb{P}\bigl(|\bar Y_n-\pi(\bar X_n)|>\delta\bigr) \le \mathbb{P}\bigl(\bar Y_n\ne\pi(\bar X_n)\bigr) = \mathbb{P}\bigl(Q_n^\leftarrow(\Gamma_1)\le1\ \text{or}\ R_n^\leftarrow(\Delta_1)\le1\bigr),
\]
which decays at an exponential rate. Hence,
\[
\frac{\log\mathbb{P}\bigl(|\bar Y_n-\pi(\bar X_n)|>\delta\bigr)}{\log n}\to-\infty,
\]
as $n\to\infty$, where $|\cdot|$ is the Euclidean distance. As a result, $\bar Y_n$ should satisfy the same (strong) LDP as $\pi(\bar X_n)$. Now, consider the set $A\triangleq\bigcup_{k=2}^\infty[\log k,\infty)\times[k^{-1/2},\infty)$. Then, since $[\log k,\infty)\times[k^{-1/2},\infty)\subseteq A$ for $k\ge2$, taking $k=n$ gives
\[
\mathbb{P}(\bar Y_n\in A) \ge \mathbb{P}\bigl((\hat J_n,\hat K_n)\in[\log n,\infty)\times[n^{-1/2},\infty)\bigr)
= \mathbb{P}\bigl(Q_n^\leftarrow(\Gamma_1)\ge n\log n,\ R_n^\leftarrow(\Delta_1)\ge n^{1/2}\bigr)
= \mathbb{P}\Bigl(\Bigl(\frac{n}{\Gamma_1}\Bigr)^{1/2}\ge n\log n,\ \Bigl(\frac{n}{\Delta_1}\Bigr)^{1/2}\ge n^{1/2}\Bigr)
= \mathbb{P}\Bigl(\Gamma_1\le\frac{1}{n(\log n)^2}\Bigr)\,\mathbb{P}(\Delta_1\le1)
= \Bigl(1-e^{-\frac{1}{n(\log n)^2}}\Bigr)\bigl(1-e^{-1}\bigr).
\]
Thus, using $1-e^{-x}\ge x(1-x)$,
\[
\limsup_{n\to\infty}\frac{\log\mathbb{P}(\bar Y_n\in A)}{\log n}
\ge \limsup_{n\to\infty}\frac{\log\bigl[\bigl(1-e^{-\frac{1}{n(\log n)^2}}\bigr)(1-e^{-1})\bigr]}{\log n}
\ge \limsup_{n\to\infty}\frac{\log\bigl[\frac{1}{n(\log n)^2}\bigl(1-\frac{1}{n(\log n)^2}\bigr)(1-e^{-1})\bigr]}{\log n} = -1. \tag{4.7}
\]
On the other hand, since $A\subseteq(0,\infty)\times(0,\infty)$,
\[
-\inf_{(y_1,y_2)\in A}I'(y_1,y_2) = -2. \tag{4.8}
\]
Noting that $A$ is a closed (but not compact) set, we arrive at a contradiction with the large deviation upper bound for $\bar Y_n$. This, in turn, proves that $\bar X_n$ cannot satisfy a full LDP.
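The gap between (4.7) and (4.8) is easy to check numerically. The sketch below (our own helper; not part of the proof) evaluates $\log$ of the explicit lower bound $(1-e^{-1/(n\log^2 n)})(1-e^{-1})$ normalized by $\log n$: it climbs toward $-1$, strictly above the value $-2$ that a strong LDP upper bound over the closed set $A$ would force.

```python
import math

def normalized_log_bound(n):
    """log[(1 - exp(-1/(n log^2 n))) * (1 - e^{-1})] / log n, the normalized
    explicit lower bound from (4.7); it tends to -1 as n -> infinity."""
    x = 1.0 / (n * math.log(n) ** 2)
    lb = -math.expm1(-x) * -math.expm1(-1.0)   # (1 - e^{-x}) * (1 - e^{-1})
    return math.log(lb) / math.log(n)

for n in (10**3, 10**6, 10**12):
    print(n, normalized_log_bound(n))          # increases toward -1
```

`math.expm1` is used instead of `1 - math.exp(-x)` to keep precision when $x$ is tiny, which it is for large $n$.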
Section 5.1, Section 5.2, and Section 5.3 provide proofs of the results in Section 2, Section 3, and Section 4, respectively. Recall that $F^\delta=\{x\in S:d(x,F)\le\delta\}$ and $G^{-\delta}=((G^c)^\delta)^c$.

*Proof of Lemma 2.1.* Let $G$ be an open set such that $G\cap S$ is bounded away from $C$. For a given $\delta>0$, due to the assumed asymptotic equivalence, $\mathbb{P}(X_n\in G^{-\delta},\,d(X_n,Y_n)\ge\delta)=o(\epsilon_n)$. Therefore,
\[
\liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(Y_n\in G)
\ge \liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}\bigl(X_n\in G^{-\delta},\,d(X_n,Y_n)<\delta\bigr)
= \liminf_{n\to\infty}\epsilon_n^{-1}\bigl\{\mathbb{P}\bigl(X_n\in G^{-\delta}\bigr)-\mathbb{P}\bigl(X_n\in G^{-\delta},\,d(X_n,Y_n)\ge\delta\bigr)\bigr\}
= \liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}\bigl(X_n\in G^{-\delta}\bigr). \tag{5.1}
\]
Pick $r>0$ such that $\mu(G^{-\delta}\cap S\cap C^r)=0$ and note that $G^{-\delta}\cap(C^r)^c$ is an open set bounded away from $C$. Then,
\[
\liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(X_n\in G^{-\delta})
= \liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(X_n\in G^{-\delta}\cap S)
= \liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(X_n\in G^{-\delta}\cap S\cap(C^r)^c)
= \liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(X_n\in G^{-\delta}\cap(C^r)^c)
\ge \mu(G^{-\delta}\cap(C^r)^c)
= \mu(G^{-\delta}\cap(C^r)^c\cap S) = \mu(G^{-\delta}\cap S) = \mu(G^{-\delta}).
\]
Since $G$ is an open set, $G=\bigcup_{\delta>0}G^{-\delta}$. Due to the continuity of measures, $\lim_{\delta\to0}\mu(G^{-\delta})=\mu(G)$, and hence we arrive at the lower bound $\liminf_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(Y_n\in G)\ge\mu(G)$ by taking $\delta\to0$.

Next, let $F$ be a closed set such that $F^\delta\cap S$ is bounded away from $C$ for some $\delta>0$. Given such a $\delta$, by the equivalence assumption, $\mathbb{P}(Y_n\in F,\,d(X_n,Y_n)\ge\delta)=o(\epsilon_n)$. Therefore,
\[
\limsup_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(Y_n\in F)
= \limsup_{n\to\infty}\epsilon_n^{-1}\bigl\{\mathbb{P}(Y_n\in F,\,d(X_n,Y_n)<\delta)+\mathbb{P}(Y_n\in F,\,d(X_n,Y_n)\ge\delta)\bigr\}
\le \limsup_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(X_n\in F^\delta)
= \limsup_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(X_n\in F^\delta\cap S)
\le \mu\bigl(\overline{F^\delta}\cap S\bigr) = \mu\bigl(\overline{F^\delta}\bigr) = \mu(F^\delta). \tag{5.2}
\]
Note that $\{F^\delta\}$ is a decreasing family of sets with $F=\bigcap_{\delta>0}F^\delta$ (since $F$ is closed), and $\mu\in M(S\setminus C)$ (hence $\mu$ is a finite measure on $S\setminus C^r$ for some $r>0$, and $F^\delta\subseteq S\setminus C^r$ for some $\delta>0$), so $\lim_{\delta\to0}\mu(F^\delta)=\mu(F)$.
Therefore, we arrive at the upper bound $\limsup_{n\to\infty}\epsilon_n^{-1}\mathbb{P}(Y_n\in F)\le\mu(F)$ by taking $\delta\to0$. $\square$

Given a measure $\mu$ on a measurable space $S$, denote the restriction of $\mu$ to a subspace $O\subseteq S$ by $\mu|_O$.

*Proof of Lemma 2.2.* We provide a proof for $d=2$, which suffices for the application in this article; the extension to general $d$ is straightforward, and hence omitted. In view of the Portmanteau theorem for $M$-convergence—in particular, item (v) of Theorem 2.1 of Lindskog et al. (2014)—it is enough to show that for all but countably many $r>0$, $(\mu_n^{(1)}\times\mu_n^{(2)})|_{(S\times S)\setminus((C\times S)\cup(S\times C))^r}(\cdot)$ converges to $(\mu^{(1)}\times\mu^{(2)})|_{(S\times S)\setminus((C\times S)\cup(S\times C))^r}(\cdot)$ weakly on $(S\times S)\setminus\bigl((C\times S)\cup(S\times C)\bigr)^r$, which is equipped with the relative topology as a subspace of $S\times S$. From the assumptions of the lemma, and again by the Portmanteau theorem for $M$-convergence, we note that $\mu_n^{(1)}|_{S\setminus C^r}$ converges to $\mu^{(1)}|_{S\setminus C^r}$ weakly in $S\setminus C^r$, and $\mu_n^{(2)}|_{S\setminus C^r}$ converges to $\mu^{(2)}|_{S\setminus C^r}$ weakly in $S\setminus C^r$, for all but countably many $r>0$. For such $r$'s, $\mu_n^{(1)}|_{S\setminus C^r}\times\mu_n^{(2)}|_{S\setminus C^r}$ converges weakly to $\mu^{(1)}|_{S\setminus C^r}\times\mu^{(2)}|_{S\setminus C^r}$ in $(S\setminus C^r)\times(S\setminus C^r)$. Noting that $(S\times S)\setminus\bigl((C\times S)\cup(S\times C)\bigr)^r$ coincides with $(S\setminus C^r)\times(S\setminus C^r)$, and that $\mu^{(1)}|_{S\setminus C^r}\times\mu^{(2)}|_{S\setminus C^r}$ and $\mu_n^{(1)}|_{S\setminus C^r}\times\mu_n^{(2)}|_{S\setminus C^r}$ coincide with $(\mu^{(1)}\times\mu^{(2)})|_{(S\times S)\setminus((C\times S)\cup(S\times C))^r}$ and $(\mu_n^{(1)}\times\mu_n^{(2)})|_{(S\times S)\setminus((C\times S)\cup(S\times C))^r}$, respectively, we reach the conclusion. $\square$

*Proof of Lemma 2.3.* Starting with the upper bound, suppose that $F$ is a closed set bounded away from $\bigcap_{i=0}^m C(i)$. From the assumption, there exist $r_0,\ldots,r_m$ such that $F\subseteq\bigcup_{i=0}^m\bigl(S\setminus C(i)^{r_i}\bigr)$, and hence,
\[
\limsup_{n\to\infty}\frac{\mathbb{P}(X_n\in F)}{\epsilon_n(0)}
\le \limsup_{n\to\infty}\sum_{i=0}^m\frac{\mathbb{P}\bigl(X_n\in F\cap(S\setminus C(i)^{r_i})\bigr)}{\epsilon_n(i)}\,\frac{\epsilon_n(i)}{\epsilon_n(0)}
\le \limsup_{n\to\infty}\sum_{i=0}^m\frac{\mathbb{P}(X_n\in F\setminus C(i)^{r_i})}{\epsilon_n(i)}\,\frac{\epsilon_n(i)}{\epsilon_n(0)}
= \limsup_{n\to\infty}\frac{\mathbb{P}(X_n\in F\setminus C(0)^{r_0})}{\epsilon_n(0)}
\le \mu^{(0)}(F\setminus C(0)^{r_0}) \le \mu^{(0)}(F).
\]
Turning to the lower bound, if $G$ is an open set bounded away from $\bigcap_{i=0}^m C(i)$,
\[
\liminf_{n\to\infty}\frac{\mathbb{P}(X_n\in G)}{\epsilon_n(0)}
\ge \liminf_{n\to\infty}\frac{\mathbb{P}(X_n\in G\setminus C(0)^{r_0})}{\epsilon_n(0)}
\ge \mu^{(0)}(G\setminus C(0)^{r_0}).
\]
Taking $r_0\to0$ completes the proof. $\square$

*Proof of Lemma 2.4.* Suppose that $\mu_n\to\mu$ in $M(S\setminus C)$, and that $\mu(D_h\setminus C^r)=0$ and $\mu(\partial S\setminus C^r)=0$ for each $r>0$. Note that $\partial h^{-1}(A')\subseteq S\setminus C^r$ for some $r>0$, and that $\partial h^{-1}(A')\subseteq h^{-1}(\partial A')\cup D_h\cup\partial S$. Therefore, $\mu(\partial h^{-1}(A'))\le\mu\circ h^{-1}(\partial A')+\mu(D_h\setminus C^r)+\mu(\partial S\setminus C^r)=0$. Applying Theorem 2.1 (iv) of Lindskog et al. (2014) to $h^{-1}(A')$, we conclude that $\mu_n(h^{-1}(A'))\to\mu(h^{-1}(A'))$. Again by Theorem 2.1 (iv) of Lindskog et al. (2014), this means that $\mu_n\circ h^{-1}\to\mu\circ h^{-1}$ in $M(S'\setminus C')$, and hence $\hat h$ is continuous at $\mu$. $\square$

*Proof of Lemma 2.6.* The continuity of $h$ is well known; see, for example, Whitt (1980). For the second claim, it is enough to prove that for each $j$ and $k$, $h^{-1}(A)\subseteq\mathbb{D}\times\mathbb{D}$ is bounded away from $\mathbb{D}_j\times\mathbb{D}_k$ whenever $A\subseteq\mathbb{D}$ is bounded away from $\mathbb{D}_{j,k}$. Given $j$ and $k$, let $A\subseteq\mathbb{D}$ be bounded away from $\mathbb{D}_{j,k}$. To prove by contradiction that $h^{-1}(A)$ is bounded away from $\mathbb{D}_j\times\mathbb{D}_k$, suppose that it is not. Then, for any given $\epsilon>0$, one can find $\xi\in\mathbb{D}$ and $\zeta\in\mathbb{D}$ such that $d(\xi,\mathbb{D}_j)<\epsilon/2$, $d(\zeta,\mathbb{D}_k)<\epsilon/2$, and $\xi-\zeta\in A$. Since a time-change of a step function doesn't change the number of jumps and the jump sizes, there exist $\xi'\in\mathbb{D}_j$ and $\zeta'\in\mathbb{D}_k$ such that $\|\xi-\xi'\|_\infty<\epsilon/2$ and $\|\zeta-\zeta'\|_\infty<\epsilon/2$. Therefore,
\[
d(\xi-\zeta,\ \xi'-\zeta') \le \|(\xi-\zeta)-(\xi'-\zeta')\|_\infty \le \|\xi-\xi'\|_\infty+\|\zeta-\zeta'\|_\infty < \epsilon.
\]
From this, along with the property $d(\xi'-\zeta',\mathbb{D}_{j,k})=0$, we conclude that $d(\xi-\zeta,\mathbb{D}_{j,k})<\epsilon$. Taking $\epsilon\to0$, we arrive at $d(A,\mathbb{D}_j\times\mathbb{D}_k)=0$, which contradicts the assumption. $\square$

*Proof of Lemma 2.7.* From (i) and the inclusion–exclusion formula, $\mu_n(\bigcup_{i=1}^m A_i)\to\mu(\bigcup_{i=1}^m A_i)$ as $n\to\infty$ for any finite $m$ if $A_i\in\mathcal{A}_p$ is bounded away from $C$ for $i=1,\ldots,m$. If $G$ is open and bounded away from $C$, there is a sequence of sets $A_i\in\mathcal{A}_p$, $i\ge1$, such that $G=\bigcup_{i=1}^\infty A_i$; note that since $G$ is bounded away from $C$, the $A_i$'s are also bounded away from $C$. For any $\epsilon>0$, one can find $M_\epsilon$ such that $\mu(\bigcup_{i=1}^{M_\epsilon}A_i)\ge\mu(G)-\epsilon$, and hence,
\[
\liminf_{n\to\infty}\mu_n(G) \ge \liminf_{n\to\infty}\mu_n\Bigl(\bigcup_{i=1}^{M_\epsilon}A_i\Bigr) = \mu\Bigl(\bigcup_{i=1}^{M_\epsilon}A_i\Bigr) \ge \mu(G)-\epsilon.
\]
Taking $\epsilon\to0$, we arrive at the lower bound in (2.2). Turning to the upper bound, given a closed set $F$, we pick $A\in\mathcal{A}_p$ bounded away from $C$ such that $F\subseteq A^\circ$. Then,
\[
\mu(A)-\limsup_{n\to\infty}\mu_n(F)
= \lim_{n\to\infty}\mu_n(A)+\liminf_{n\to\infty}(-\mu_n(F))
= \liminf_{n\to\infty}\bigl(\mu_n(A)-\mu_n(F)\bigr)
= \liminf_{n\to\infty}\mu_n(A\setminus F)
\ge \liminf_{n\to\infty}\mu_n(A^\circ\setminus F)
\ge \mu(A^\circ\setminus F) = \mu(A)-\mu(F).
\]
Note that $\mu(A)<\infty$ since $A$ is bounded away from $C$, which, together with the above inequality, establishes the upper bound in (2.2). $\square$

This section provides the proofs of the limit theorems (Theorem 3.1 and Theorem 3.3) presented in Section 3. The proof of Theorem 3.1 is based on:

1. The asymptotic equivalence between the target object $\bar X_n$ and the process obtained by keeping its $j$ largest jumps, which will be denoted by $\hat J^j_n$: Proposition 5.1 and Proposition 5.2 prove such asymptotic equivalences. Two technical lemmas (Lemma 5.1 and Lemma 5.2) play key roles in Proposition 5.2.

2.
$M$-convergence of $\hat J^j_n$: Lemma 5.3 identifies the convergence of the jump-size sequences, and Proposition 5.3 deduces the convergence of $\hat J^j_n$ from the convergence of the jump-size sequences via the mapping theorem established in Section 2.

For Theorem 3.3, we first establish a general result (Theorem 5.1) for the $M$-convergence of multiple Lévy processes in the associated product space using Lemmas 2.2 and 2.3. We then apply Lemma 2.6 to prove Theorem 3.3.

Recall that $X_n(t)\triangleq X(nt)$ is a scaled process of $X$, where $X$ is a Lévy process with a Lévy measure $\nu$ supported on $(0,\infty)$. Recall also that $X_n$ has the Itô representation
\[
X_n(s) = nsa + B(ns) + \int_{|x|\le1}x\,[N([0,ns]\times dx)-ns\,\nu(dx)] + \int_{|x|>1}x\,N([0,ns]\times dx), \tag{5.3}
\]
where $N$ is a Poisson random measure with mean measure $\mathrm{Leb}\times\nu$ on $[0,n]\times(0,\infty)$, and $\mathrm{Leb}$ denotes the Lebesgue measure. It is easy to see that
\[
J_n(s) \triangleq \sum_{l=1}^{\tilde N_n}Q_n^\leftarrow(\Gamma_l)\,1_{[U_l,1]}(s) \overset{D}{=} \int_{|x|>1}x\,N([0,ns]\times dx),
\]
where $\Gamma_l=E_1+E_2+\cdots+E_l$; the $E_i$'s are i.i.d. standard exponential random variables; the $U_l$'s are i.i.d. uniform on $[0,1]$; $\tilde N_n=\mathcal{N}_n\bigl([0,1]\times[1,\infty)\bigr)$, with $\mathcal{N}_n=\sum_{l=1}^\infty\delta_{(U_l,\,Q_n^\leftarrow(\Gamma_l))}$, where $\delta_{(x,y)}$ is the Dirac measure concentrated on $(x,y)$; and $Q_n(x)\triangleq n\nu[x,\infty)$, $Q_n^\leftarrow(y)\triangleq\inf\{s>0:n\nu[s,\infty)<y\}$. Note that $\tilde N_n$ is the number of $\Gamma_l$'s such that $\Gamma_l\le n\nu_1^+$, where $\nu_1^+\triangleq\nu[1,\infty)$, and hence $\tilde N_n\sim\text{Poisson}(n\nu_1^+)$. Throughout the rest of this section, we use the following representation for the centered and scaled process $\bar X_n\triangleq\frac1n X_n$:
\[
\bar X_n(s) \overset{D}{=} \frac1n J_n(s) + \frac1n B(ns) + \frac1n\int_{|x|\le1}x\,[N([0,ns]\times dx)-ns\,\nu(dx)] - (\mu_1^+\nu_1^+)\,s. \tag{5.4}
\]

*Proof of Theorem 3.1.* We decompose $\bar X_n$ into a centered compound Poisson process $\bar J_n$, a centered Lévy process with small jumps and continuous increments $\bar Y_n$, and a residual process $\bar Z_n$ that arises due to centering.
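The series representation behind (5.4)—jump sizes $Q_n^\leftarrow(\Gamma_l)$ placed at uniform times $U_l$, stopped once $\Gamma_l$ exceeds $n\nu_1^+$—can be sketched numerically. The snippet below is an illustration only (function names are ours), taking a pure Pareto tail $\nu[x,\infty)=x^{-\alpha}$ so that $Q_n^\leftarrow$ has a closed form.

```python
import random

def Q_inv(y, n, alpha):
    """Q_n^{<-}(y) = inf{s > 0 : n*nu[s, inf) < y} for nu[x, inf) = x^{-alpha}."""
    return (n / y) ** (1.0 / alpha)

def large_jumps(n, alpha, rng):
    """Jumps of size >= 1 of the scaled process: (U_l, Q_inv(Gamma_l)) pairs,
    generated until Gamma_l exceeds n*nu[1, inf) = n (size drops below 1)."""
    gamma, out = 0.0, []
    while True:
        gamma += rng.expovariate(1.0)        # Gamma_l = E_1 + ... + E_l
        size = Q_inv(gamma, n, alpha)
        if size < 1.0:                       # past the truncation level
            return out
        out.append((rng.random(), size))

rng = random.Random(3)
jumps = large_jumps(n=50, alpha=2.0, rng=rng)
print(len(jumps), jumps[:3])                 # roughly Poisson(50) many jumps
```

Because $\Gamma_1<\Gamma_2<\cdots$ and $Q_n^\leftarrow$ is non-increasing, the generated jump sizes come out in decreasing order, which is exactly why keeping the first $j$ terms of the series keeps the $j$ largest jumps.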
After that, we will show that the compound Poisson part determines the limit. More specifically, consider the following decomposition:
\[
\bar X_n(s) \overset{D}{=} \bar Y_n(s) + \bar J_n(s) + \bar Z_n(s),
\]
with
\[
\bar Y_n(s) \triangleq \frac1n B(ns) + \frac1n\int_{|x|\le1}x\,[N([0,ns]\times dx)-ns\,\nu(dx)],\qquad
\bar J_n(s) \triangleq \frac1n\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu_1^+\bigr)1_{[U_l,1]}(s),\qquad
\bar Z_n(s) \triangleq \frac1n\sum_{l=1}^{\tilde N_n}\mu_1^+\,1_{[U_l,1]}(s) - \mu_1^+\nu_1^+\,s, \tag{5.5}
\]
where $\mu_1^+\triangleq\frac{1}{\nu_1^+}\int_{[1,\infty)}x\,\nu(dx)$. Let $\hat J^j_n\triangleq\frac1n\sum_{l=1}^{j}Q_n^\leftarrow(\Gamma_l)1_{[U_l,1]}$ be, roughly speaking, the process obtained by keeping just the $j$ largest (un-centered) jumps of $\bar J_n$. In view of Corollary 2.1 and Proposition 5.3, it suffices to show that $\bar X_n$ and $\hat J^j_n$ are asymptotically equivalent. Proposition 5.1 along with Proposition 5.2 proves the desired asymptotic equivalence, and hence concludes the proof of Theorem 3.1. $\square$

**Proposition 5.1.** Let $\bar X_n$ and $\bar J_n$ be as in the proof of Theorem 3.1. Then $\bar X_n$ and $\bar J_n$ are asymptotically equivalent w.r.t. $\bigl(n\nu[n,\infty)\bigr)^j$ for any $j\ge0$.

*Proof.* In view of the decomposition (5.5), we are done if we show that $\mathbb{P}(\|\bar Y_n\|>\delta)=o\bigl((n\nu[n,\infty))^j\bigr)$ and $\mathbb{P}(\|\bar Z_n\|>\delta)=o\bigl((n\nu[n,\infty))^j\bigr)$. For the tail probability of $\|\bar Y_n\|$,
\[
\mathbb{P}\Bigl[\sup_{t\in[0,1]}|\bar Y_n(t)|>\delta\Bigr]
\le \mathbb{P}\Bigl[\sup_{t\in[0,n]}|B(t)|>n\delta/2\Bigr]
+ \mathbb{P}\Bigl[\sup_{t\in[0,n]}\Bigl|\int_{|x|\le1}x\,[N((0,t]\times dx)-t\,\nu(dx)]\Bigr|>n\delta/2\Bigr].
\]
We have an explicit expression for the first term by the reflection principle; in particular, it decays at a geometric rate w.r.t. $n$. For the second term, let $Y'(t)\triangleq\int_{|x|\le1}x\,[N((0,t]\times dx)-t\,\nu(dx)]$.
Using Etemadi's bound for Lévy processes (see Lemma A.4), we obtain
\[
\mathbb{P}\Bigl[\sup_{t\in[0,n]}|Y'(t)|>n\delta/2\Bigr]
\le 3\sup_{t\in[0,n]}\mathbb{P}\bigl[|Y'(t)|>n\delta/6\bigr]
\le 3\sup_{t\in[0,n]}\Bigl\{\mathbb{P}\bigl[|Y'(\lfloor t\rfloor)|>n\delta/12\bigr]+\mathbb{P}\bigl[|Y'(t)-Y'(\lfloor t\rfloor)|>n\delta/12\bigr]\Bigr\}
= 3\sup_{0\le k\le n}\mathbb{P}\bigl[|Y'(k)|>n\delta/12\bigr]+3\sup_{t\in[0,1]}\mathbb{P}\bigl[|Y'(t)|>n\delta/12\bigr]
\le 3\sup_{0\le k\le n}\mathbb{P}\Bigl[\Bigl|\sum_{i=1}^k\{Y'(i)-Y'(i-1)\}\Bigr|>n\delta/12\Bigr]+3\,\mathbb{P}\Bigl[\sup_{t\in[0,1]}|Y'(t)|^m>(n\delta/12)^m\Bigr].
\]
Since the $Y'(i)-Y'(i-1)$ are i.i.d. with $Y'(i)-Y'(i-1)\overset{D}{=}Y'(1)=\int_{|x|\le1}x\,[N((0,1]\times dx)-\nu(dx)]$, and $Y'(1)$ has exponential moments, the first term decreases at a geometric rate w.r.t. $n$ by the Chernoff bound; on the other hand, since $Y'(t)$ is a martingale, the second term is bounded by $3\,\mathbb{E}|Y'(1)|^m/\bigl(n^m(\delta/12)^m\bigr)$ for any $m$ by Doob's submartingale maximal inequality. Therefore, by choosing $m$ large enough, this term can be made negligible. For the tail probability of $\|\bar Z_n\|$, note that $\bar Z_n$ is a mean-zero Lévy process with the same distribution as $\mu_1^+\bigl(N^\circ(ns)/n-\nu_1^+s\bigr)$, where $N^\circ$ is a Poisson process with rate $\nu_1^+$. Therefore, again from the continuous-time version of Etemadi's bound, we see that $\mathbb{P}(\|\bar Z_n\|>\delta)$ decays at a geometric rate w.r.t. $n$ for any $\delta>0$. $\square$

**Proposition 5.2.** For each $j\ge0$, let $\bar J_n$ and $\hat J^j_n$ be defined as in the proof of Theorem 3.1. Then $\bar J_n$ and $\hat J^j_n$ are asymptotically equivalent w.r.t. $\bigl(n\nu[n,\infty)\bigr)^j$.

*Proof.*
With the convention that a summation is $0$ when its upper limit is strictly smaller than its lower limit, consider the following decomposition of $\bar J_n$:
\[
\hat J^j_n \triangleq \frac1n\sum_{l=1}^{j}Q_n^\leftarrow(\Gamma_l)1_{[U_l,1]},\qquad
\bar J^{>j}_n \triangleq \frac1n\sum_{l=j+1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu_1^+\bigr)1_{[U_l,1]},\qquad
\check J^j_n \triangleq \frac1n\sum_{l=1}^{j}\bigl(-\mu_1^+\bigr)1_{[U_l,1]},\qquad
\bar R_n \triangleq \frac1n I(\tilde N_n<j)\sum_{l=\tilde N_n+1}^{j}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu_1^+\bigr)1_{[U_l,1]},
\]
so that $\bar J_n=\hat J^j_n+\check J^j_n+\bar J^{>j}_n-\bar R_n$. Note that $\mathbb{P}(\|\check J^j_n\|\ge\delta)=0$ for sufficiently large $n$, since $\|\check J^j_n\|=j\mu_1^+/n$. On the other hand, $\mathbb{P}(\|\bar R_n\|\ge\delta)$ decays at a geometric rate since $\{\|\bar R_n\|\ge\delta\}\subseteq\{\tilde N_n<j\}$ and $\mathbb{P}(\tilde N_n<j)$ decays at a geometric rate. Since $\mathbb{P}(\|\bar J^{>j}_n\|\ge\delta)\le\mathbb{P}(\|\bar J^{>j}_n\|\ge\delta,\ Q_n^\leftarrow(\Gamma_j)\ge n\gamma)+\mathbb{P}(\|\bar J^{>j}_n\|\ge\delta,\ Q_n^\leftarrow(\Gamma_j)\le n\gamma)$, Lemma 5.1 and Lemma 5.2 below imply $\mathbb{P}(\|\bar J^{>j}_n\|\ge\delta)=o\bigl((n\nu[n,\infty))^j\bigr)$ by choosing $\gamma$ small enough. Therefore, $\hat J^j_n$ and $\bar J_n$ are asymptotically equivalent w.r.t. $(n\nu[n,\infty))^j$. $\square$

Define a measure $\mu_\alpha^{(j)}$ on $\mathbb{R}^{\infty\downarrow}_+$ by
\[
\mu_\alpha^{(j)}(dx_1,dx_2,\cdots) \triangleq \prod_{i=1}^{j}\nu_\alpha(dx_i)\,I_{[x_1\ge x_2\ge\cdots\ge x_j>0]}\prod_{i=j+1}^{\infty}\delta_0(dx_i),
\]
where $\nu_\alpha(x,\infty)=x^{-\alpha}$ and $\delta_0$ is the Dirac measure concentrated at $0$.

**Proposition 5.3.** For each $j\ge0$, $\bigl(n\nu[n,\infty)\bigr)^{-j}\,\mathbb{P}(\hat J^j_n\in\cdot)\to C_j(\cdot)$ in $M\bigl(\mathbb{D}\setminus\mathbb{D}_{<j}\bigr)$ as $n\to\infty$.

*Proof.* Noting that $(\mu_\alpha^{(j)}\times\mathrm{Leb})\circ T_j^{-1}=C_j$ and $\mathbb{P}(\hat J^j_n\in\cdot)=\mathbb{P}\bigl(\bigl((Q_n^\leftarrow(\Gamma_l)/n,\ l\ge1),\ (U_l,\ l\ge1)\bigr)\in T_j^{-1}(\cdot)\bigr)$, Lemma 5.3 and Corollary 2.2 prove the proposition. $\square$

**Lemma 5.1.** For any fixed $\gamma>0$, $\delta>0$, and $j\ge0$,
\[
\mathbb{P}\bigl\{\|\bar J^{>j}_n\|\ge\delta,\ Q_n^\leftarrow(\Gamma_j)\ge n\gamma\bigr\} = o\bigl((n\nu[n,\infty))^j\bigr). \tag{5.6}
\]

*Proof.* (Throughout the proof of this lemma, we use $\mu$ and $\nu$ in place of $\mu_1^+$ and $\nu_1^+$, respectively.)
We start with the following decomposition of $\bar J^{>j}_n$: for any fixed $\lambda\in\bigl(0,\ \frac{\delta}{3\nu\mu}\bigr)$,
\[
\bar J^{>j}_n = \frac1n\sum_{l=j+1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}
= \tilde J^{[j+1,\,n\nu(1+\lambda)]}_n - \tilde J^{[\tilde N_n+1,\,n\nu(1+\lambda)]}_n I\bigl(\tilde N_n<n\nu(1+\lambda)\bigr) + \tilde J^{[n\nu(1+\lambda)+1,\,\tilde N_n]}_n I\bigl(\tilde N_n>n\nu(1+\lambda)\bigr),
\]
where
\[
\tilde J^{[a,b]}_n \triangleq \frac1n\sum_{l=\lceil a\rceil}^{\lfloor b\rfloor}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}.
\]
Therefore,
\[
\mathbb{P}\bigl\{\|\bar J^{>j}_n\|\ge\delta,\ Q_n^\leftarrow(\Gamma_j)\ge n\gamma\bigr\}
\le \mathbb{P}\bigl(\|\tilde J^{[j+1,\,n\nu(1+\lambda)]}_n\|\ge\delta/3,\ Q_n^\leftarrow(\Gamma_j)\ge n\gamma\bigr)
+ \mathbb{P}\bigl(\|\tilde J^{[\tilde N_n+1,\,n\nu(1+\lambda)]}_n\|\ge\delta/3\bigr)
+ \mathbb{P}\bigl(\tilde N_n>n\nu(1+\lambda)\bigr)
= \text{(i)}+\text{(ii)}+\text{(iii)}.
\]
Noting that $\|\tilde J^{[\tilde N_n+1,\,n\nu(1+\lambda)]}_n\|\le\bigl(\nu(1+\lambda)-\tilde N_n/n\bigr)\mu$—recall that $\tilde N_n$ is defined to be the number of $l$'s such that $Q_n^\leftarrow(\Gamma_l)\ge1$, and hence $0\le Q_n^\leftarrow(\Gamma_l)<1$ for $l>\tilde N_n$—we see that (ii) is bounded by
\[
\mathbb{P}\bigl(\bigl(\nu(1+\lambda)-\tilde N_n/n\bigr)\mu\ge\delta/3\bigr) = \mathbb{P}\Bigl(\frac{\tilde N_n}{n\nu}\le 1+\lambda-\frac{\delta}{3\nu\mu}\Bigr),
\]
which decays at a geometric rate w.r.t. $n$ since $\tilde N_n$ is Poisson with rate $n\nu$. For the same reason, (iii) decays at a geometric rate w.r.t. $n$. We are done if we prove that (i) is $o\bigl((n\nu[n,\infty))^j\bigr)$. Note that $Q_n^\leftarrow(\Gamma_j)\ge n\gamma$ implies $Q_n(n\gamma)\ge\Gamma_j$, and hence,
\[
\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j+Q_n(n\gamma))-\mu\bigr)1_{[U_l,1]}
\le \sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}
\le \sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j)-\mu\bigr)1_{[U_l,1]}.
\]
If we define
\[
A_n \triangleq \{Q_n^\leftarrow(\Gamma_j)\ge n\gamma\},\qquad
B'_n \triangleq \Bigl\{\sup_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j)-\mu\bigr)1_{[U_l,1]}(t)\ge n\delta/3\Bigr\},\qquad
B''_n \triangleq \Bigl\{\inf_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j+Q_n(n\gamma))-\mu\bigr)1_{[U_l,1]}(t)\le -n\delta/3\Bigr\},
\]
then we have that
\[
\text{(i)} \le \mathbb{P}\bigl(A_n\cap(B'_n\cup B''_n)\bigr) \le \mathbb{P}(A_n\cap B'_n)+\mathbb{P}(A_n\cap B''_n) = \mathbb{P}(A_n)\bigl(\mathbb{P}(B'_n)+\mathbb{P}(B''_n)\bigr),
\]
where the last equality follows from the independence of $A_n$ and $B'_n$, as well as of $A_n$ and $B''_n$ (which is, in turn, due to the independence of $\Gamma_j$ and $\Gamma_l-\Gamma_j$). From Lemma 5.4 (c) and Proposition 5.3, $\mathbb{P}(A_n)=\mathbb{P}\bigl(\hat J^j_n\in(\mathbb{D}\setminus\mathbb{D}_{<j})^{-\gamma/2}\bigr)=O\bigl((n\nu[n,\infty))^j\bigr)$, and hence it suffices to show that the probabilities of the complements of $B'_n$ and $B''_n$ converge to 1—i.e., for any fixed $\gamma>0$,
\[
\mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3\Bigr)\to1, \tag{5.7}
\]
and
\[
\mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j+Q_n(n\gamma))-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3\Bigr)\to1. \tag{5.8}
\]
Starting with (5.7),
\[
\mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3\Bigr)
= \mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu-j}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3\Bigr)
\ge \mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu-j}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3,\ \tilde N_n\le(1+\lambda)n\nu-j\Bigr)
\ge \mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3,\ \tilde N_n\le(1+\lambda)n\nu-j\Bigr)
\ge \mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3\Bigr) - \mathbb{P}\bigl\{\tilde N_n>(1+\lambda)n\nu-j\bigr\}.
\]
The second inequality uses the monotonicity of $Q_n^\leftarrow$ and the fact that $\mu\ge1$ (so that $Q_n^\leftarrow(\Gamma_l)-\mu\le0$ for $l\ge\tilde N_n$), while the last inequality comes from the generic inequality $\mathbb{P}(A\cap B)\ge\mathbb{P}(A)-\mathbb{P}(B^c)$. The second probability converges to 0 since $\tilde N_n$ is Poisson with rate $n\nu$.
Moving on to the first probability in the last expression, observe that $\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(\cdot)$ has the same distribution as the compound Poisson process $\sum_{i=1}^{J(n\,\cdot)}(D_i-\mu)$, where $J$ is a Poisson process with rate $\nu$ and the $D_i$'s are i.i.d. random variables with the distribution $\nu$ conditioned (and normalized) on $[1,\infty)$, i.e., $\mathbb{P}\{D_i\ge s\}=1\wedge\bigl(\nu[s,\infty)/\nu[1,\infty)\bigr)$. Using this, we obtain
\[
\mathbb{P}\Bigl(\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)<n\delta/3\Bigr)
= \mathbb{P}\Bigl\{\sup_{0\le m\le J(n)}\sum_{l=1}^m(D_l-\mu)<n\delta/3\Bigr\} \tag{5.9}
\ge \mathbb{P}\Bigl\{\sup_{0\le m\le 2n\nu}\sum_{l=1}^m(D_l-\mu)<n\delta/3,\ J(n)\le 2n\nu\Bigr\}
\ge \mathbb{P}\Bigl\{\sup_{0\le m\le 2n\nu}\sum_{l=1}^m(D_l-\mu)<n\delta/3\Bigr\} - \mathbb{P}\bigl\{J(n)>2n\nu\bigr\}.
\]
The second probability vanishes at a geometric rate w.r.t. $n$ because $J(n)$ is Poisson with rate $n\nu$. The first term can be investigated by the generalized Kolmogorov inequality, cf. Shneer and Wachtel (2009) (given as Result A.1 in Appendix A):
\[
\mathbb{P}\Bigl\{\max_{0\le m\le 2n\nu}\sum_{l=1}^m(D_l-\mu)\ge n\delta/6\Bigr\} \le C\,\frac{2n\nu\,V(n\delta/6)}{(n\delta/6)^2},
\]
where $V(x)=\mathbb{E}\bigl[(D_l-\mu)^2;\ \mu-x\le D_l\le\mu+x\bigr]\le\mu^2+\mathbb{E}\bigl[D_l^2;\ D_l\le\mu+x\bigr]$. Note that
\[
\mathbb{E}\bigl[D_l^2;\ D_l\le\mu+x\bigr] \le \int_0^1 2s\,ds + 2\int_1^{\mu+x}s\,\frac{\nu[s,\infty)}{\nu[1,\infty)}\,ds = 1 + (\mu+x)^{2-\alpha}L_0(\mu+x),
\]
for some slowly varying $L_0$. Hence,
\[
\mathbb{P}\Bigl\{\max_{0\le m\le 2n\nu}\sum_{l=1}^m(D_l-\mu)<n\delta/3\Bigr\} \ge 1 - \mathbb{P}\Bigl\{\max_{0\le m\le 2n\nu}\sum_{l=1}^m(D_l-\mu)\ge n\delta/6\Bigr\} \to 1,
\]
as $n\to\infty$. Now, turning to (5.8), let $\gamma_n\triangleq Q_n(n\gamma)$.
We have
\[
\mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu}\bigl(Q_n^\leftarrow(\Gamma_l-\Gamma_j+Q_n(n\gamma))-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3\Bigr)
= \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu-j}\bigl(Q_n^\leftarrow(\Gamma_l+\gamma_n)-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3\Bigr)
\ge \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu-j}\bigl(Q_n^\leftarrow(\Gamma_l+\gamma_n)-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3,\ E_0\ge\gamma_n\Bigr)
\ge \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu-j}\bigl(Q_n^\leftarrow(\Gamma_l+E_0)-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3,\ E_0\ge\gamma_n\Bigr)
= \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu-j+1}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3,\ \Gamma_1\ge\gamma_n\Bigr)
\ge \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu-j+1}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3\Bigr) - \mathbb{P}\{\Gamma_1<\gamma_n\}
= (A)-(B),
\]
where $E_0$ is a standard exponential random variable, independent of everything else. (Recall that $\Gamma_l\triangleq E_1+E_2+\cdots+E_l$, and hence $(\Gamma_l+E_0,\,U_l)\overset{D}{=}(\Gamma_{l+1},\,U_l)\overset{D}{=}(\Gamma_{l+1},\,U_{l+1})$.) Since $\gamma_n=n\nu[n\gamma,\infty)$ and $\nu$ is regularly varying with index $-\alpha<-1$, we have $\gamma_n\to0$, and hence $(B)=\mathbb{P}\{\Gamma_1<\gamma_n\}\to0$. Moreover,
\[
(A) \ge \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu-j+1}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)>-n\delta/3,\ \tilde N_n\le(1+\lambda)n\nu-j+1\Bigr)
\]
\[
\ge \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)\ge-n\delta/9,\ \inf_{t\in[0,1]}-\bigl(Q_n^\leftarrow(\Gamma_1)-\mu\bigr)1_{[U_1,1]}(t)>-n\delta/9,\ \inf_{t\in[0,1]}\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu-j+1}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)\ge-n\delta/9,\ \tilde N_n\le(1+\lambda)n\nu-j+1\Bigr)
\]
\[
\ge \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=1}^{\tilde N_n}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)\ge-n\delta/9\Bigr)
+ \mathbb{P}\bigl\{Q_n^\leftarrow(\Gamma_1)-\mu<n\delta/9\bigr\}
+ \mathbb{P}\Bigl(\inf_{t\in[0,1]}\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu-j+1}\bigl(Q_n^\leftarrow(\Gamma_l)-\mu\bigr)1_{[U_l,1]}(t)\ge-n\delta/9\Bigr)
+ \mathbb{P}\bigl\{\tilde N_n\le(1+\lambda)n\nu-j+1\bigr\} - 3
= \text{(AI)}+\text{(AII)}+\text{(AIII)}+\text{(AIV)}-3.
\]
The third inequality comes from applying the generic inequality $\mathbb{P}(\bigcap_{i}A_i)\ge\sum_i\mathbb{P}(A_i)-3$ for four events. Since $\tilde N_n$ is Poisson with rate $n\nu$,
\[
\text{(AIV)} = \mathbb{P}\bigl\{\tilde N_n\le(1+\lambda)n\nu-j+1\bigr\} = \mathbb{P}\Bigl(\frac{\tilde N_n}{n\nu}\le 1+\lambda-\frac{j-1}{n\nu}\Bigr)\to1.
\]
P ( inf t ∈ [0 , 1] ˜ N n X l =1 (cid:0) Q ← n (Γ l ) − µ (cid:1) [ U l , ( t ) ≥ − nδ/ ) = P ( sup t ∈ [0 , 1] ˜ N n X l =1 (cid:0) µ − Q ← n (Γ l ) (cid:1) [ U l , ( t ) ≤ nδ/ ) = P ( sup ≤ m ≤ J ( n ) m X l =1 ( µ − D l ) ≤ nδ/ ) , where D i is defined as before. Note that this is of exactly same form as (5.9)except for the sign of D l , and hence, we can proceed exactly the same way usingthe generalized Kolmogorov inequality to prove that this quantity converges to1 — recall that the formula only involves the square of the increments, andhence, the change of the sign has no effect. For the second term (AII),(AII) ≥ P (cid:8) Q ← n (Γ ) ≤ nδ/ (cid:9) ≥ P (cid:8) Γ > Q n ( nδ/ (cid:9) → , since Q n ( nδ/ → 0. For the third term (AIII),(AIII) = P ( inf t ∈ [0 , 1] (1+ λ ) nν − j +1 X l = ˜ N n +1 (cid:0) Q ← n (Γ l ) − µ (cid:1) [ U l , ( t ) ≥ − nδ/ ) ≥ P ( inf t ∈ [0 , 1] (1+ λ ) nν − j +1 X l = ˜ N n +1 (1 − µ )1 [ U l , ( t ) ≥ − nδ/ ) ≥ P ( (1+ λ ) nν − j +1 X l = ˜ N n +1 ( µ − ≤ nδ/ ) ≥ P ( ( µ − (cid:0) (1 + λ ) nν − j − ˜ N n + 1 (cid:1) ≤ nδ/ ) ≥ P ( λ − δ ν ( µ − ≤ ˜ N n nν + j − nν ) → , since λ < δ ν ( µ − . This concludes the proof of the lemma. Lemma 5.2. For any j ≥ , δ > , and m < ∞ , there is γ > such that P (cid:8)(cid:13)(cid:13) ¯ J >jn (cid:13)(cid:13) > δ, Q ← n (Γ j ) ≤ nγ (cid:9) = o ( n − m ) . Proof. (Throughout the proof of this lemma, we use µ and ν in place of µ +1 and ν +1 respectively, for the sake of notational simplicity.) Note first that33 ← n (Γ j ) = ∞ if j = 0 and hence the claim of the lemma is trivial. Therefore,we assume j ≥ λ > P (cid:8)(cid:13)(cid:13) ¯ J >jn (cid:13)(cid:13) > δ, Q ← n (Γ j ) ≤ nγ (cid:9) ≤ P ((cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˜ N n X l = j +1 ( Q ← n (Γ l ) − µ )1 [ U l , (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) > nδ, Q ← n (Γ j ) ≤ nγ, (5.10)˜ N n nν ∈ (cid:20) jnν , λ (cid:21) ) + P ( ˜ N n nν / ∈ (cid:20) jnν , λ (cid:21)) , and P n ˜ N n nν / ∈ h jnν , λ io decays at a geometric rate w.r.t. 
$n$, it suffices to show that (5.10) is $o(n^{-m})$ for small enough $\gamma>0$. First, recall that by the definition of $Q_n^\leftarrow(\cdot)$,
$$Q_n^\leftarrow(x)\ge s\ \iff\ x\le Q_n(s), \qquad\text{and}\qquad n\nu(Q_n^\leftarrow(x),\infty)\ \le\ x\ \le\ n\nu[Q_n^\leftarrow(x),\infty).$$
Let $L$ be a random variable conditionally (on $\tilde N_n$) independent of everything else and uniformly sampled on $\{j+1,j+2,\dots,\tilde N_n\}$. Recall that given $\tilde N_n$ and $\Gamma_j$, the distribution of $\{\Gamma_{j+1},\Gamma_{j+2},\dots,\Gamma_{\tilde N_n}\}$ is the same as that of the order statistics of $\tilde N_n-j$ uniform random variables on $[\Gamma_j,\,n\nu[1,\infty)]$. Let $D_l$, $l\ge1$, be i.i.d. random variables whose conditional distribution is the same as the conditional distribution of $Q_n^\leftarrow(\Gamma_L)$ given $\tilde N_n$ and $\Gamma_j$. Then the conditional distribution of $\sum_{l=j+1}^{\tilde N_n}(Q_n^\leftarrow(\Gamma_l)-\mu)\mathbb{1}_{[U_l,1]}$ is the same as that of $\sum_{l=1}^{\tilde N_n-j}(D_l-\mu)\mathbb{1}_{[U_l,1]}$. Therefore, the conditional distribution of $\big\|\sum_{l=j+1}^{\tilde N_n}(Q_n^\leftarrow(\Gamma_l)-\mu)\mathbb{1}_{[U_l,1]}\big\|_\infty$ is the same as the corresponding conditional distribution of $\sup_{0\le m\le\tilde N_n-j}\big|\sum_{l=1}^m(D_l-\mu)\big|$. To make use of this in the analysis that follows, we make a few observations on the conditional distribution of $Q_n^\leftarrow(\Gamma_L)$ given $\Gamma_j$ and $\tilde N_n$.

(a) The conditional distribution of $Q_n^\leftarrow(\Gamma_L)$: Let $q:=Q_n^\leftarrow(\Gamma_j)$. Since $\Gamma_L$ is uniformly distributed on $[\Gamma_j,Q_n(1)]=[\Gamma_j,n\nu[1,\infty)]$, the tail probability is, for $s\in[1,q]$,
$$P\{Q_n^\leftarrow(\Gamma_L)\ge s\mid\Gamma_j,\tilde N_n\} = P\{\Gamma_L\le Q_n(s)\mid\Gamma_j,\tilde N_n\} = P\{\Gamma_L\le n\nu[s,\infty)\mid\Gamma_j,\tilde N_n\}$$
$$= P\Big\{\frac{\Gamma_L-\Gamma_j}{n\nu[1,\infty)-\Gamma_j}\le\frac{n\nu[s,\infty)-\Gamma_j}{n\nu[1,\infty)-\Gamma_j}\,\Big|\,\Gamma_j,\tilde N_n\Big\} = \frac{n\nu[s,\infty)-\Gamma_j}{n\nu[1,\infty)-\Gamma_j};$$
since this is non-increasing w.r.t. $\Gamma_j$ and $n\nu(q,\infty)\le\Gamma_j\le n\nu[q,\infty)$, we have that
$$\frac{\nu[s,q)}{\nu[1,q)}\ \le\ P\{Q_n^\leftarrow(\Gamma_L)\ge s\mid\Gamma_j,\tilde N_n\}\ \le\ \frac{\nu[s,q]}{\nu[1,q]}.$$
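Observation (a) rests on the generalized-inverse identities recalled above. As a quick numerical sanity check, the sketch below assumes, purely for illustration, a Pareto-type tail $Q_n(s)=n\,s^{-\alpha}$ for $s\ge1$ (so that $n\nu[s,\infty)=ns^{-\alpha}$); the function names are hypothetical.

```python
# Hedged sketch: assumes an illustrative Pareto tail Q_n(s) = n * s**(-alpha), s >= 1.
alpha, n = 1.5, 1000.0

def Q(s):
    # Q_n(s) = n * nu[s, infinity): non-increasing in s
    return n * s ** (-alpha)

def Q_inv(x):
    # generalized inverse: Q_inv(x) = sup{ s >= 1 : Q(s) >= x }
    return max(1.0, (n / x) ** (1.0 / alpha))

# Q_inv(x) >= s  <=>  x <= Q(s), checked on a grid away from boundary cases
for s in (1.1, 2.0, 5.0):
    for x in (0.5, 10.0, 500.0):
        assert (Q_inv(x) >= s) == (x <= Q(s))

# For a continuous tail, the sandwich n*nu(Q_inv(x), inf) <= x <= n*nu[Q_inv(x), inf)
# collapses to equality:
assert abs(Q(Q_inv(500.0)) - 500.0) < 1e-9
```

For an atomless tail, as here, the two inequalities of the sandwich coincide; the strict-inequality gap only appears when $\nu$ has atoms, which is exactly the case tracked by the $\nu\{q\}$ terms in observation (b) below.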
(b) Difference in mean between the conditional and unconditional distributions: From (a), we obtain
$$\tilde\mu_n := E[Q_n^\leftarrow(\Gamma_L)\mid\Gamma_j,\tilde N_n]\ \in\ \Big[1+\int_1^q\frac{\nu[s,q)}{\nu[1,q)}\,ds,\ 1+\int_1^q\frac{\nu[s,q]}{\nu[1,q]}\,ds\Big],$$
and hence,
$$|\mu-\tilde\mu_n|\ \le\ \Bigg|\frac{\nu[1,q)\int_1^\infty\nu[s,\infty)ds-\nu[1,\infty)\int_1^q\nu[s,q)ds}{\nu[1,\infty)\,\nu[1,q)}\Bigg|\ \vee\ \Bigg|\frac{\nu[1,q]\int_1^\infty\nu[s,\infty)ds-\nu[1,\infty)\int_1^q\nu[s,q]ds}{\nu[1,\infty)\,\nu[1,q]}\Bigg|.$$
Since
$$\frac{\nu[1,q)\int_1^\infty\nu[s,\infty)ds-\nu[1,\infty)\int_1^q\nu[s,q)ds}{\nu[1,\infty)\,\nu[1,q)} = \frac{\nu[q,\infty)}{\nu[1,q)}(q-1)+\frac{\int_q^\infty\nu[s,\infty)ds}{\nu[1,\infty)}-\frac{\nu[q,\infty)\int_1^q\nu[s,\infty)ds}{\nu[1,\infty)\,\nu[1,q)},$$
and
$$\frac{\nu[1,q)\int_1^\infty\nu[s,\infty)ds-\nu[1,\infty)\int_1^q\nu[s,q)ds}{\nu[1,\infty)\,\nu[1,q)}-\frac{\nu[1,q]\int_1^\infty\nu[s,\infty)ds-\nu[1,\infty)\int_1^q\nu[s,q]ds}{\nu[1,\infty)\,\nu[1,q]} = \frac{\nu\{q\}\big((q-1)\nu[1,\infty)+\int_q^\infty\nu[s,\infty)ds+\int_1^q\nu[s,\infty)ds\big)}{\nu[1,\infty)\big(\nu[1,q)+\nu\{q\}\big)},$$
we see that $|\mu-\tilde\mu_n|$ is bounded by a regularly varying function of index $1-\alpha$ (w.r.t. $q$) by Karamata's theorem.

(c) Variance of $Q_n^\leftarrow(\Gamma_L)$: Turning to the second moment, we observe that, if $\alpha\le2$,
$$E\big[(Q_n^\leftarrow(\Gamma_L))^2\mid\Gamma_j,\tilde N_n\big]\ \le\ 2\int_0^1s\,ds+2\int_1^qs\,\frac{\nu[s,q]}{\nu[1,q]}\,ds\ \le\ 1+\frac{2}{\nu[1,q]}\int_1^qs\,\nu[s,\infty)\,ds\ =\ 1+q^{2-\alpha}L(q) \qquad(5.11)$$
for some slowly varying function $L(\cdot)$. If $\alpha>2$, the variance is bounded w.r.t. $q$.

Now, with (b) and (c) in hand, we can proceed with an explicit bound, since all the randomness is contained in $q$. Namely, we infer
$$P\Big(\Big\|\sum_{l=j+1}^{\tilde N_n}\big(Q_n^\leftarrow(\Gamma_l)-\mu\big)\mathbb{1}_{[U_l,1]}\Big\|_\infty>n\delta,\ Q_n^\leftarrow(\Gamma_j)\le n\gamma,\ \frac{\tilde N_n}{n\nu}\in\Big[\frac{j}{n\nu},1+\lambda\Big]\Big)$$
$$= P\Big(\Big\|\sum_{l=j+1}^{\tilde N_n}\big(Q_n^\leftarrow(\Gamma_l)-\mu\big)\mathbb{1}_{[U_l,1]}\Big\|_\infty>n\delta,\ \Gamma_j\ge Q_n(n\gamma),\ \frac{\tilde N_n}{n\nu}\in\Big[\frac{j}{n\nu},1+\lambda\Big]\Big)$$
$$= E\Big[P\Big(\Big\|\sum_{l=j+1}^{\tilde N_n}\big(Q_n^\leftarrow(\Gamma_l)-\mu\big)\mathbb{1}_{[U_l,1]}\Big\|_\infty>n\delta\,\Big|\,\Gamma_j,\tilde N_n\Big);\ \Gamma_j\ge Q_n(n\gamma),\ \frac{\tilde N_n}{n\nu}\in\Big[\frac{j}{n\nu},1+\lambda\Big]\Big]$$
$$= E\Big[P\Big(\max_{0\le m\le\tilde N_n-j}\Big|\sum_{l=1}^m(D_l-\mu)\Big|>n\delta\,\Big|\,\Gamma_j,\tilde N_n\Big);\ \Gamma_j\ge Q_n(n\gamma),\ \frac{\tilde N_n}{n\nu}\in\Big[\frac{j}{n\nu},1+\lambda\Big]\Big].$$
By Etemadi's bound (Result A.2 in the Appendix),
$$P\Big(\max_{0\le m\le\tilde N_n-j}\Big|\sum_{l=1}^m(D_l-\mu)\Big|\ge n\delta\,\Big|\,\Gamma_j,\tilde N_n\Big)\ \le\ 3\max_{0\le m\le\tilde N_n}P\Big(\Big|\sum_{l=1}^m(D_l-\mu)\Big|\ge n\delta/3\,\Big|\,\Gamma_j,\tilde N_n\Big)$$
$$\le\ 3\max_{0\le m\le\tilde N_n}\Big\{P\Big(\sum_{l=1}^m(D_l-\mu)\ge n\delta/3\,\Big|\,\Gamma_j,\tilde N_n\Big)+P\Big(\sum_{l=1}^m(\mu-D_l)\ge n\delta/3\,\Big|\,\Gamma_j,\tilde N_n\Big)\Big\}, \qquad(5.12)$$
and as $|D_l-\tilde\mu_n|$ is bounded by $q$, we can apply Prokhorov's bound (Result A.3 in the Appendix) to get
$$P\Big(\sum_{l=1}^m(\mu-D_l)\ge n\delta/3\,\Big|\,\Gamma_j,\tilde N_n\Big) = P\Big(\sum_{l=1}^m(\tilde\mu_n-D_l)\ge n\delta/3-m(\mu-\tilde\mu_n)\,\Big|\,\Gamma_j,\tilde N_n\Big)$$
$$\le\ P\Big(\sum_{l=1}^m(\tilde\mu_n-D_l)\ge n\delta/3-n\nu(1+\lambda)(\mu-\tilde\mu_n)\,\Big|\,\Gamma_j,\tilde N_n\Big)$$
$$\le\ \Big(\frac{qn(\delta/3-\nu(1+\lambda)(\mu-\tilde\mu_n))}{2\,m\,\mathrm{var}(Q_n^\leftarrow(\Gamma_L))}\Big)^{-\frac{n(\delta/3-\nu(1+\lambda)(\mu-\tilde\mu_n))}{2q}}\ \le\ \Big(\frac{2\,n\nu(1+\lambda)\,\mathrm{var}(Q_n^\leftarrow(\Gamma_L))}{qn(\delta/3-\nu(1+\lambda)(\mu-\tilde\mu_n))}\Big)^{\frac{n(\delta/3-\nu(1+\lambda)(\mu-\tilde\mu_n))}{2q}}$$
$$=\ \begin{cases}\Big(\dfrac{2\nu(1+\lambda)(1+q^{2-\alpha}L(q))}{q(\delta/3-\nu(1+\lambda)q^{1-\alpha}L(q))}\Big)^{\frac{n(\delta/3-\nu(1+\lambda)q^{1-\alpha}L(q))}{2q}} & \text{if }\alpha\le2,\\[3mm] \Big(\dfrac{2\nu(1+\lambda)C}{q(\delta/3-\nu(1+\lambda)q^{1-\alpha}L(q))}\Big)^{\frac{n(\delta/3-\nu(1+\lambda)q^{1-\alpha}L(q))}{2q}} & \text{otherwise},\end{cases}$$
for some $C>0$, where we used $m\le(1+\lambda)n\nu$ and bounded $|\mu-\tilde\mu_n|$ by $q^{1-\alpha}L(q)$ as in (b). Therefore, there exist constants $M$ and $c$ such that $q\ge M$ (i.e., $\Gamma_j\le Q_n(M)$) implies
$$P\Big(\sum_{l=1}^m(\mu-D_l)\ge n\delta/3\,\Big|\,\Gamma_j\Big)\ \le\ c\,\big(q^{1-\alpha\wedge2}\big)^{\frac{n\delta}{q}},$$
and since we are conditioning on $q=Q_n^\leftarrow(\Gamma_j)\le n\gamma$, so that $n/q\ge1/\gamma$,
$$c\,\big(q^{1-\alpha\wedge2}\big)^{\frac{n\delta}{q}}\ \le\ c\,\big(q^{1-\alpha\wedge2}\big)^{\frac{\delta}{\gamma}}.$$
Hence,
$$P\Big(\sum_{l=1}^m(\mu-D_l)\ge n\delta/3\,\Big|\,\Gamma_j\Big)\ \le\ c\,\big(Q_n^\leftarrow(\Gamma_j)^{1-\alpha\wedge2}\big)^{\frac{\delta}{\gamma}}.$$
With the same argument, we also get
$$P\Big(\sum_{l=1}^m(D_l-\mu)\ge n\delta/3\,\Big|\,\Gamma_j\Big)\ \le\ c\,\big(Q_n^\leftarrow(\Gamma_j)^{1-\alpha\wedge2}\big)^{\frac{\delta}{\gamma}}.$$
Combining (5.12) with the two previous estimates, we obtain
$$P\Big(\max_{0\le m\le\tilde N_n-j}\Big|\sum_{l=1}^m(D_l-\mu)\Big|\ge n\delta\,\Big|\,\Gamma_j,\tilde N_n\Big)\ \le\ c\,\big(Q_n^\leftarrow(\Gamma_j)^{1-\alpha\wedge2}\big)^{\frac{\delta}{\gamma}}$$
on the event $\{\Gamma_j\ge Q_n(n\gamma),\ \tilde N_n-j\le n\nu(1+\lambda),\ \Gamma_j\le Q_n(M)\}$, adjusting $c$ if necessary. Now,
$$E\Big[P\Big(\max_{0\le m\le\tilde N_n-j}\Big|\sum_{l=1}^m(D_l-\mu)\Big|>n\delta\,\Big|\,\Gamma_j,\tilde N_n\Big);\ \Gamma_j\ge Q_n(n\gamma),\ \frac{\tilde N_n}{n\nu}\in\Big[\frac{j}{n\nu},1+\lambda\Big]\Big]$$
$$\le\ E\Big[P\Big(\max_{0\le m\le\tilde N_n-j}\Big|\sum_{l=1}^m(D_l-\mu)\Big|>n\delta\,\Big|\,\Gamma_j,\tilde N_n\Big)$$
$$;\ \Gamma_j\ge Q_n(n\gamma);\ \frac{\tilde N_n}{n\nu}\in\Big[\frac{j}{n\nu},1+\lambda\Big];\ \Gamma_j\le Q_n(M)\Big]+P\big(\Gamma_j>Q_n(M)\big)$$
$$\le\ E\Big[c\,\big(Q_n^\leftarrow(\Gamma_j)^{1-\alpha\wedge2}\big)^{\delta/\gamma}\Big]+P\big(\Gamma_j>Q_n(M)\big)$$
$$\le\ E\Big[c\,\big(Q_n^\leftarrow(\Gamma_j)^{1-\alpha\wedge2}\big)^{\delta/\gamma};\ Q_n^\leftarrow(\Gamma_j)\ge n^{\beta_0}\Big]+P\big(Q_n^\leftarrow(\Gamma_j)<n^{\beta_0}\big)+P\big(\Gamma_j>Q_n(M)\big)$$
$$\le\ c\,\big(n^{\beta_0(1-\alpha\wedge2)}\big)^{\delta/\gamma}+P\big(\Gamma_j>Q_n(n^{\beta_0})\big)+P\big(\Gamma_j>Q_n(M)\big)$$
$$\le\ c\,\big(n^{\beta_0(1-\alpha\wedge2)}\big)^{\delta/\gamma}+P\big(\Gamma_j>n^{1-\alpha\beta_0}L(n)\big)+P\big(\Gamma_j>Q_n(M)\big),$$
for any $\beta_0>0$. If one chooses $\beta_0$ small enough that $1-\alpha\beta_0>0$ (e.g., $\beta_0=1/(2\alpha)$), the second and third terms vanish at a geometric rate w.r.t. $n$. On the other hand, we can pick $\gamma$ small enough compared to $\delta$, so that the first term decreases at an arbitrarily fast polynomial rate. This concludes the proof of the lemma.

Recall that we denote the Lebesgue measure on $[0,1]$ by $\mathrm{Leb}$, and that we defined the measures $\mu_\alpha^{(j)}$ and $\mu_\beta^{(j)}$ on $\mathbb{R}_+^{\infty\downarrow}$ as
$$\mu_\alpha^{(j)}(dx_1,dx_2,\dots) := \prod_{i=1}^j\nu_\alpha(dx_i)\,\mathbb{I}_{[x_1\ge x_2\ge\cdots\ge x_j>0]}\,\prod_{i=j+1}^\infty\delta_0(dx_i), \qquad \nu_\alpha(x,\infty)=x^{-\alpha},$$
where $\delta_0$ is the Dirac measure concentrated at 0.

Lemma 5.3. For each $j\ge0$,
$$\big(n\nu[n,\infty)\big)^{-j}\,P\big[\big((Q_n^\leftarrow(\Gamma_l)/n,\ l\ge1),(U_l,\ l\ge1)\big)\in\cdot\,\big]\ \to\ \big(\mu_\alpha^{(j)}\times\mathrm{Leb}^\infty\big)(\cdot)$$
in $M\big((\mathbb{R}_+^{\infty\downarrow}\times[0,1]^\infty)\setminus(H_{<j}\times[0,1]^\infty)\big)$ as $n\to\infty$.

Proof. We first prove that
$$\big(n\nu[n,\infty)\big)^{-j}\,P\big[(Q_n^\leftarrow(\Gamma_l)/n,\ l\ge1)\in\cdot\,\big]\ \to\ \mu_\alpha^{(j)}(\cdot) \qquad(5.13)$$
in $M(\mathbb{R}_+^{\infty\downarrow}\setminus H_{<j})$ as $n\to\infty$. To show this, we only need to check that
$$\big(n\nu[n,\infty)\big)^{-j}\,P\big[(Q_n^\leftarrow(\Gamma_l)/n,\ l\ge1)\in A\big]\ \to\ \mu_\alpha^{(j)}(A) \qquad(5.14)$$
for $A$'s that belong to the convergence-determining class
$$\mathcal{A}_j := \big\{\{z\in\mathbb{R}_+^{\infty\downarrow}:\ x_1\le z_1,\dots,x_l\le z_l\}:\ l\ge j,\ x_1\ge\cdots\ge x_l>0\big\}.$$
To see that $\mathcal{A}_j$ is a convergence-determining class for $M(\mathbb{R}_+^{\infty\downarrow}\setminus H_{<j})$-convergence, note that
$$\mathcal{A}'_j := \big\{\{z\in\mathbb{R}_+^{\infty\downarrow}:\ x_1\le z_1<y_1,\dots,x_l\le z_l<y_l\}:\ l\ge j,\ x_1,\dots,x_l\in(0,\infty),\ y_1,\dots$$
$,y_l\in(0,\infty]\big\}$ satisfies conditions (i), (ii), and (iii) of Lemma 2.7, and hence is a convergence-determining class. Now define the $\mathcal{A}_j(i)$'s recursively by $\mathcal{A}_j(i+1):=\{B\setminus A:\ A,B\in\mathcal{A}_j(i),\ A\subseteq B\}$ for $i\ge0$, and
$$\mathcal{A}_j(0) = \mathcal{A}''_j := \big\{\{z\in\mathbb{R}_+^{\infty\downarrow}:\ x_1\le z_1,\dots,x_l\le z_l\}:\ l\ge j,\ x_1,\dots,x_l>0\big\}.$$
Since we restrict the set-difference operation to nested sets, the limit associated with the sets in $\mathcal{A}_j(i+1)$ is determined by the sets in $\mathcal{A}_j(i)$, and, eventually, by $\mathcal{A}''_j$. Noting that $\mathcal{A}'_j\subseteq\bigcup_{i=0}^\infty\mathcal{A}_j(i)$, we see that $\mathcal{A}''_j$ is a convergence-determining class. Now, since both $P[(Q_n^\leftarrow(\Gamma_l)/n,\ l\ge1)\in\cdot\,]$ and $\mu_\alpha^{(j)}(\cdot)$ are supported on $\mathbb{R}_+^{\infty\downarrow}$, one can further reduce the convergence-determining class from $\mathcal{A}''_j$ to $\mathcal{A}_j$.

To check the desired convergence for the sets in $\mathcal{A}_j$, we first characterize the limit measure. Let $l\ge j$ and $x_1\ge\cdots\ge x_l>0$. By the change of variables $v_i=x_i^\alpha y_i^{-\alpha}$ for $i=1,\dots,j$,
$$\mu_\alpha^{(j)}\big(\{z\in\mathbb{R}_+^{\infty\downarrow}:\ x_1\le z_1,\dots,x_l\le z_l\}\big) = \mathbb{I}(j=l)\cdot\int_{x_j}^\infty\cdots\int_{x_1}^\infty\mathbb{I}(y_1\ge\cdots\ge y_j)\,d\nu_\alpha(y_1)\cdots d\nu_\alpha(y_j)$$
$$= \mathbb{I}(j=l)\cdot\Big(\prod_{i=1}^jx_i\Big)^{-\alpha}\cdot\int_0^1\cdots\int_0^1\mathbb{I}\big(x_1^{-\alpha}v_1\le\cdots\le x_j^{-\alpha}v_j\big)\,dv_1\cdots dv_j.$$
Next, we find a similar representation for the distribution of $\Gamma_1,\dots,\Gamma_l$. Let $U_{(1)},\dots,U_{(l-1)}$ be the order statistics of $l-1$ i.i.d. uniform random variables on $[0,1]$. Recall that the conditional distribution of $(\Gamma_1/\Gamma_l,\dots,\Gamma_{l-1}/\Gamma_l)$ given $\Gamma_l$ does not depend on $\Gamma_l$ and coincides with the distribution of $(U_{(1)},\dots,U_{(l-1)})$; see, for example, Pyke (1965). Suppose that $l\ge j$ and $0\le y_1\le\cdots\le y_l$. By the change of variables $u_i=\gamma^{-1}y_iv_i$ for $i=1,\dots,l-1$, and $\gamma=y_lv_l$,
$$P\big(\Gamma_1\le y_1,\dots,\Gamma_l\le y_l\big) = E\Big[P\big(\Gamma_1/\Gamma_l\le y_1/\Gamma_l,\dots,\Gamma_{l-1}/\Gamma_l\le y_{l-1}/\Gamma_l\,\big|\,\Gamma_l\big)\cdot\mathbb{I}\big(\Gamma_l\le y_l\big)\Big] = \int_0^{y_l}P\big(U_{(1)}\le y_1/\gamma,\dots$$
$,U_{(l-1)}\le y_{l-1}/\gamma\big)\,e^{-\gamma}\frac{\gamma^{l-1}}{(l-1)!}\,d\gamma = \int_0^{y_l}e^{-\gamma}\gamma^{l-1}\int_0^{y_{l-1}/\gamma}\cdots\int_0^{y_1/\gamma}\mathbb{I}(u_1\le\cdots\le u_{l-1}\le1)\,du_1\cdots du_{l-1}\,d\gamma$$
$$= \Big(\prod_{i=1}^{l-1}y_i\Big)\int_0^{y_l}e^{-\gamma}\int_0^1\cdots\int_0^1\mathbb{I}(y_1v_1\le\cdots\le y_{l-1}v_{l-1}\le\gamma)\,dv_1\cdots dv_{l-1}\,d\gamma$$
$$= \Big(\prod_{i=1}^{l}y_i\Big)\cdot\int_0^1\cdots\int_0^1e^{-y_lv_l}\,\mathbb{I}(y_1v_1\le\cdots\le y_lv_l)\,dv_1\cdots dv_l.$$
Since $0\le Q_n(nx_1)\le\cdots\le Q_n(nx_l)$ for $x_1\ge\cdots\ge x_l>0$,
$$\big(n\nu[n,\infty)\big)^{-j}P\big[Q_n^\leftarrow(\Gamma_1)/n\ge x_1,\dots,Q_n^\leftarrow(\Gamma_l)/n\ge x_l\big] = \big(n\nu[n,\infty)\big)^{-j}P\big[\Gamma_1\le Q_n(nx_1),\dots,\Gamma_l\le Q_n(nx_l)\big]$$
$$= \big(n\nu[n,\infty)\big)^{-j}\cdot\Big(\prod_{i=1}^lQ_n(nx_i)\Big)\cdot\int_0^1\cdots\int_0^1e^{-Q_n(nx_l)v_l}\,\mathbb{I}\big(Q_n(nx_1)v_1\le\cdots\le Q_n(nx_l)v_l\big)\,dv_1\cdots dv_l$$
$$= \Big(\prod_{i=1}^j\frac{Q_n(nx_i)}{n\nu[n,\infty)}\Big)\cdot\Big(\prod_{i=j+1}^lQ_n(nx_i)\Big)\cdot\int_0^1\cdots\int_0^1e^{-Q_n(nx_l)v_l}\,\mathbb{I}\Big(\frac{Q_n(nx_1)}{n\nu[n,\infty)}v_1\le\cdots\le\frac{Q_n(nx_l)}{n\nu[n,\infty)}v_l\Big)\,dv_1\cdots dv_l.$$
Note that $Q_n(nx_i)\to0$ and $\frac{Q_n(nx_i)}{n\nu[n,\infty)}\to x_i^{-\alpha}$ as $n\to\infty$ for each $i=1,\dots,l$. Therefore, by bounded convergence,
$$\big(n\nu[n,\infty)\big)^{-j}P\big[Q_n^\leftarrow(\Gamma_1)/n\ge x_1,\dots,Q_n^\leftarrow(\Gamma_l)/n\ge x_l\big]\ \to\ \mathbb{I}(j=l)\Big(\prod_{i=1}^jx_i\Big)^{-\alpha}\cdot\int_0^1\cdots\int_0^1\mathbb{I}\big(x_1^{-\alpha}v_1\le\cdots\le x_j^{-\alpha}v_j\big)\,dv_1\cdots dv_j$$
$$= \mu_\alpha^{(j)}\big(\{z\in\mathbb{R}_+^{\infty\downarrow}:\ x_1\le z_1,\dots,x_l\le z_l\}\big),$$
which concludes the proof of (5.13). The conclusion of the lemma follows from the independence of $(Q_n^\leftarrow(\Gamma_l)/n,\ l\ge1)$ and $(U_l,\ l\ge1)$ and Lemma 2.2.

Lemma 5.4. Suppose that $x_1\ge\cdots\ge x_j\ge0$; $u_i\in(0,1)$ for $i=1,\dots,j$; $y_1\ge\cdots\ge y_k\ge0$; $v_i\in(0,1)$ for $i=1,\dots,k$; and $u_1,\dots,u_j,v_1,\dots,v_k$ are all distinct.
(a) For any $\epsilon>0$,
$$\{x\in G:\ d(x,y)<(1+\epsilon)\delta\text{ implies }y\in G\}\ \subseteq\ G^{-\delta}\ \subseteq\ \{x\in G:\ d(x,y)<\delta\text{ implies }y\in G\}.$$
Also, $(A\cap B)^\delta\subseteq A^\delta\cap B^\delta$ and $A^{-\delta}\cup B^{-\delta}\subseteq(A\cup B)^{-\delta}$ for any $A$ and $B$.
(b) $\sum_{i=1}^jx_i\mathbb{1}_{[u_i,1]}\in(D\setminus D_{<j})^{-\delta}$ implies $x_j\ge\delta$.
(c) $\sum_{i=1}^jx_i\mathbb{1}_{[u_i,1]}\notin(D\setminus D_{<j})^{-\delta}$ implies $x_j\le\delta$.
(d) $\sum_{i=1}^jx_i\mathbb{1}_{[u_i,1]}-\sum_{i=1}^ky_i\mathbb{1}_{[v_i,1]}\in(D\setminus D_{<(j,k)})^{-\delta}$ implies $x_j\ge\delta$ and $y_k\ge\delta$.
(e) If $\xi=\sum_{i=1}^jx_i\mathbb{1}_{[u_i,1]}-\sum_{i=1}^ky_i\mathbb{1}_{[v_i,1]}$ and $\zeta\in D_{l,m}$ with $l<j$, then $d(\zeta,\xi)\ge x_j/2$. To see this, suppose to the contrary that $d(\zeta,\xi)<x_j/2$.
Then there exists a non-decreasing homeomorphism $\phi$ of $[0,1]$ onto itself such that $\big\|\sum_{i=1}^jx_i\mathbb{1}_{[u_i,1]}-\sum_{i=1}^ky_i\mathbb{1}_{[v_i,1]}-\zeta\circ\phi\big\|_\infty<x_j/2$. Note that this implies that at each upward jump point $s$ of $\xi$ (where the jump size is at least $x_j$), $\zeta\circ\phi$ should also be discontinuous; otherwise,
$$|\zeta\circ\phi(s)-\xi(s)|+|\zeta\circ\phi(s-)-\xi(s-)|\ \ge\ |\xi(s)-\xi(s-)|\ \ge\ x_j,$$
which contradicts the bound on the supremum distance between $\xi$ and $\zeta\circ\phi$. However, this implies that $\zeta$ has $j$ upward jumps, which contradicts the assumption $\zeta\in D_{l,m}$ with $l<j$, proving the claim. Likewise, $d(\zeta,\xi)\ge y_k/2$ if $\zeta\in D_{l,m}$ with $m<k$.

(f) Note that in case $I(\xi)$ is finite, $D^+(\xi)>j$ or $D^-(\xi)>k$; in this case, the conclusion is immediate from (e). In case $I(\xi)=\infty$, either $D^+(\xi)=\infty$, $D^-(\xi)=\infty$, $\xi(0)\ne0$, or $\xi$ contains a continuous non-constant piece. By containing a continuous non-constant piece, we refer to the case where there exist $t_0$ and $t_1$ such that $t_0<t_1$, $\xi(t_0)\ne\xi(t_1)$, and $\xi$ is continuous on $[t_0,t_1]$. For the first two cases, where the number of jumps is infinite, the conclusion is an immediate consequence of (e). The case $\xi(0)\ne0$ is also obvious. We are now left with the last case, where $\xi$ has a continuous non-constant piece. To discuss this case, assume w.l.o.g. that $\xi(t_1)>\xi(t_0)$. We claim that
$$d(\xi,D_{j,k})\ \ge\ \frac{\xi(t_1)-\xi(t_0)}{2(j+1)}.$$
Note that for any step function $\zeta$,
$$\|\xi-\zeta\|\ \ge\ |\xi(t_0)-\zeta(t_0)|\vee|\xi(t_1)-\zeta(t_1)|\ \ge\ (\zeta(t_0)-\xi(t_0))\vee(\xi(t_1)-\zeta(t_1))$$
$$\ge\ \tfrac12\Big\{(\xi(t_1)-\xi(t_0))-(\zeta(t_1)-\zeta(t_0))\Big\}\ \ge\ \tfrac12\Big\{(\xi(t_1)-\xi(t_0))-\sum_{t\in(t_0,t_1]}\big(\zeta(t)-\zeta(t-)\big)^+\Big\}\ \ge\ \tfrac12\Big\{(\xi(t_1)-\xi(t_0))-2D^+(\zeta)\|\xi-\zeta\|\Big\},$$
where the fourth inequality is due to the fact that $\|\xi-\zeta\|\ge\frac{\zeta(t)-\zeta(t-)}{2}$ for all $t\in(t_0,t_1]$, since $\xi$ is continuous there. From this, we get
$$\|\xi-\zeta\|\ \ge\ \frac{\xi(t_1)-\xi(t_0)}{2(D^+(\zeta)+1)}\ \ge\ \frac{\xi(t_1)-\xi(t_0)}{2(j+1)},$$
for $\zeta\in D_{j,k}$. Now, suppose that $\zeta\in D_{j,k}$.
Since $\zeta\circ\phi$ is again in $D_{j,k}$ for any non-decreasing homeomorphism $\phi$ of $[0,1]$ onto itself,
$$d(\xi,\zeta)\ \ge\ \frac{\xi(t_1)-\xi(t_0)}{2(j+1)},$$
which proves the claim.

Now we move on to the proof of Theorem 3.3. We first establish Theorem 5.1, which plays a key role in the proof.

Theorem 5.1. Consider independent one-dimensional Lévy processes $X^{(1)},\dots,X^{(d)}$ with spectrally positive Lévy measures $\nu_1(\cdot),\dots,\nu_d(\cdot)$, respectively. Suppose that each $\nu_i$ is regularly varying (at infinity) with index $-\alpha_i<-1$, and let $\bar X_n^{(i)}$ be the centered and scaled version of $X^{(i)}$ for each $i=1,\dots,d$. Then, for each $(j_1,\dots,j_d)\in\mathbb{Z}_+^d$,
$$\frac{P\big((\bar X_n^{(1)},\dots,\bar X_n^{(d)})\in\cdot\,\big)}{\prod_{i=1}^d\big(n\nu_i[n,\infty)\big)^{j_i}}\ \to\ C^{(1)}_{j_1}\times\cdots\times C^{(d)}_{j_d}(\cdot)$$
in $M\big(\prod_{i=1}^dD\setminus D_{<(j_1,\dots,j_d)}\big)$.

Proof. From Theorem 3.1, we know that $(n\nu_i[n,\infty))^{-j_i}P(\bar X_n^{(i)}\in\cdot)\to C_{j_i}(\cdot)$ in $M(D\setminus D_{<j_i})$ for each $j_i\ge0$. Along with Lemma 2.2, this yields, for each $(l_1,\dots,l_d)\in\mathbb{Z}_+^d$,
$$\prod_{i=1}^d\big(n\nu_i[n,\infty)\big)^{-l_i}\,P\big((\bar X_n^{(1)},\dots,\bar X_n^{(d)})\in\cdot\,\big)\ \to\ C^{(1)}_{l_1}\times\cdots\times C^{(d)}_{l_d}(\cdot)$$
in $M\big(\prod_{i=1}^dD\setminus C_{(l_1,\dots,l_d)}\big)$, where $C_{(l_1,\dots,l_d)}:=\bigcup_{i=1}^d\big(D^{i-1}\times D_{<l_i}\times D^{d-i}\big)$. Fix $r>0$ and let $m_i:=\inf\{k\ge0:\ \xi_i\in(D_k)^r\}$. In case $m_i=\infty$ for some $i$, one can pick a large enough $M\in\mathbb{Z}_+$ such that $Me_i\notin I_{<(j_1,\dots,j_d)}$, where $e_i$ is the unit vector whose entries are 0 except for the $i$-th coordinate. Letting $(l_1,\dots,l_d)\in J_{j_1,\dots,j_d}$ be an index such that $C_{(l_1,\dots,l_d)}\subseteq C_{Me_i}$, we find that $\xi\notin(C_{(l_1,\dots,l_d)})^r\subseteq(C_{Me_i})^r$, verifying the premise. If $\max_{i=1,\dots,d}m_i<\infty$, then $\xi\in\big(\prod_{i=1}^dD_{m_i}\big)^r$ and hence $(m_1,\dots,m_d)\notin I_{<(j_1,\dots,j_d)}$, which, in turn, implies that there exists $(l_1,\dots,l_d)\in J_{j_1,\dots,j_d}$ such that $C_{(l_1,\dots,l_d)}\subseteq C_{(m_1,\dots,m_d)}$.
However, due to the construction of the $m_i$'s, each $\xi_i$ is bounded away from $D_{<m_i}$.

Proof of Theorem 3.3. Let $X^{(+)}$ and $X^{(-)}$ be Lévy processes with spectrally positive Lévy measures $\nu_+$ and $\nu_-$, respectively, where $\nu_+[x,\infty)=\nu[x,\infty)$ and $\nu_-[x,\infty)=\nu(-\infty,-x]$ for each $x>0$, and denote the corresponding scaled processes by $\bar X_n^{(+)}(\cdot):=X^{(+)}(n\cdot)/n$ and $\bar X_n^{(-)}(\cdot):=X^{(-)}(n\cdot)/n$. More specifically, let
$$\bar X_n^{(+)}(s) = sa+\frac{B(ns)}{n}+\frac1n\int_{|x|\le1}x\,\big[N([0,ns]\times dx)-ns\,\nu(dx)\big]+\frac1n\int_{x>1}x\,N([0,ns]\times dx),$$
$$\bar X_n^{(-)}(s) = -\frac1n\int_{x<-1}x\,N([0,ns]\times dx).$$
From Theorem 5.1, we know that $(n\nu[n,\infty))^{-j}(n\nu(-\infty,-n])^{-k}\,P\big((\bar X_n^{(+)},\bar X_n^{(-)})\in\cdot\,\big)\to C^+_j\times C^-_k(\cdot)$ in $M\big((D\times D)\setminus D_{<(j,k)}\big)$, where $C^+_j(\cdot):=E\big[\nu_\alpha^j\{x\in(0,\infty)^j:\ \sum_{i=1}^jx_i\mathbb{1}_{[U_i,1]}\in\cdot\,\}\big]$ and $C^-_k(\cdot):=E\big[\nu_\beta^k\{y\in(0,\infty)^k:\ \sum_{i=1}^ky_i\mathbb{1}_{[U_i,1]}\in\cdot\,\}\big]$. In view of Lemma 2.6 and the fact that
$$C^+_j\times C^-_k\big\{(\xi,\zeta)\in D\times D:\ (\xi(t)-\xi(t-))(\zeta(t)-\zeta(t-))\ne0\ \text{for some }t\in(0,1]\big\} = 0,$$
we can apply Lemma 2.4 to $h(\xi,\zeta)=\xi-\zeta$. Noting that $C_{j,k}(\cdot)=(C^+_j\times C^-_k)\circ h^{-1}(\cdot)$, we conclude that $(n\nu[n,\infty))^{-j}(n\nu(-\infty,-n])^{-k}\,P\big(\bar X_n^{(+)}-\bar X_n^{(-)}\in\cdot\,\big)\to C_{j,k}(\cdot)$ in $M(D\setminus D_{<(j,k)})$.

In general,
$$\min_{(j,k)\in\mathbb{Z}_+^2:\ D_{j,k}\cap\bar A\ne\emptyset}I(j,k)\ \le\ I(J(A),K(A))\ \le\ \min_{(j,k)\in\mathbb{Z}_+^2:\ D_{j,k}\cap A^\circ\ne\emptyset}I(j,k),$$
and the left inequality cannot be strict, since $A$ is bounded away from $D_{<(J(A),K(A))}$. On the other hand, in case the right inequality is strict, then $D_{J(A),K(A)}\cap A^\circ=\emptyset$, which in turn implies $C_{J(A),K(A)}(A^\circ)=0$, since $C_{J(A),K(A)}$ is supported on $D_{J(A),K(A)}$. Therefore, the lower bound is trivial if the right inequality is strict. In view of these observations, we can assume w.l.o.g.
that $(J(A),K(A))$ is also in both $\arg\min_{(j,k)\in\mathbb{Z}_+^2:\ D_{j,k}\cap A^\circ\ne\emptyset}I(j,k)$ and $\arg\min_{(j,k)\in\mathbb{Z}_+^2:\ D_{j,k}\cap\bar A\ne\emptyset}I(j,k)$. Since $A^\circ$ and $\bar A$ are also bounded away from $D_{<(J(A),K(A))}$, the upper bound of (3.9) is obtained from (2.1) and Theorem 3.3 applied to $\bar A$, $j=J(\bar A)=J(A)$, and $k=K(\bar A)=K(A)$; the lower bound of (3.9) is obtained from (2.2) and Theorem 3.3 applied to $A^\circ$, $j=J(A^\circ)=J(A)$, and $k=K(A^\circ)=K(A)$. Finally, we obtain (3.10) from Theorem 3.3 and (2.1) with $j=l$, $k=m$, $F=\bar A$, along with the fact that $C_{l,m}(\bar A)=0$ since $A$ is bounded away from $D_{l,m}$.

Lemma 5.5. Let $A$ be a measurable set, and suppose that the argument minimum in (3.8) is non-empty and contains a pair of integers $(J(A),K(A))$. Let $(l,m)\in I^=_{J(A),K(A)}$.
(i) If $A^\delta\cap D_{l,m}$ is bounded away from $D_{\ll J(A),K(A)}$ for some $\delta>0$, then $A\cap(D_{l,m})^\gamma$ is bounded away from $D_{\ll J(A),K(A)}$ for some $\gamma>0$.
(ii) If $A$ is bounded away from $D_{\ll J(A),K(A)}$, then there exists $\delta>0$ such that $A\cap(D_{l,m})^\delta$ is bounded away from $D_{j,k}$ for any $(j,k)\in I^=_{J(A),K(A)}\setminus\{(l,m)\}$.

Proof. For (i), we prove that if $d\big(A^\delta\cap D_{l,m},D_{\ll J(A),K(A)}\big)>2\delta$, then $d\big(A\cap(D_{l,m})^\delta,D_{\ll J(A),K(A)}\big)\ge\delta$. Suppose, to the contrary, that $d\big(A\cap(D_{l,m})^\delta,D_{\ll J(A),K(A)}\big)<\delta$. Then there exist $\xi\in A\cap(D_{l,m})^\delta$ and $\zeta\in D_{\ll J(A),K(A)}$ such that $d(\xi,\zeta)<\delta$. Note that we can find $\xi'\in D_{l,m}$ such that $d(\xi,\xi')\le\delta$, which means that $\xi'\in A^\delta\cap D_{l,m}$. Therefore,
$$d\big(A^\delta\cap D_{l,m},D_{\ll J(A),K(A)}\big)\ \le\ d(\xi',\zeta)\ \le\ d(\xi',\xi)+d(\xi,\zeta)\ \le\ \delta+\delta\ =\ 2\delta,$$
a contradiction.

For (ii), suppose that $d\big(A,D_{\ll J(A),K(A)}\big)>\gamma$ for some $\gamma>0$, and let $(l,m)$ and $(j,k)$ be two distinct pairs that belong to $I^=_{J(A),K(A)}$. Assume w.l.o.g. that $j<l$.
(If $j>l$, it should be the case that $k<m$, and hence one can proceed in the same way by switching the roles of upward jumps and downward jumps in the following argument.) Let $c$ be a positive number such that $c>4(l-j)+2$, and set $\delta=\gamma/c$. We will show that $A\cap(D_{l,m})^\delta$ and $(D_{j,k})^\delta$ are bounded away from each other. Let $\xi$ be an arbitrary element of $A\cap(D_{l,m})^\delta$. Then there exists a $\zeta\in D_{l,m}$ such that $d(\zeta,\xi)\le\delta$. Note that $d\big(\zeta,D_{\ll J(A),K(A)}\big)\ge(c-1)\delta$; in particular, $d(\zeta,D_{j,m})\ge(c-1)\delta$. If we write $\zeta=\sum_{i=1}^lx_i\mathbb{1}_{[u_i,1]}-\sum_{i=1}^my_i\mathbb{1}_{[v_i,1]}$, this implies that $x_{j+1}\ge\frac{(c-1)\delta}{l-j}$. Otherwise,
$$(c-1)\delta\ >\ \sum_{i=j+1}^lx_i\ =\ \|\zeta-\zeta'\|\ \ge\ d(\zeta,\zeta'),$$
where $\zeta':=\zeta-\sum_{i=j+1}^lx_i\mathbb{1}_{[u_i,1]}\in D_{j,m}$. Therefore,
$$d(\zeta,D_{j,k})\ \ge\ \frac{(c-1)\delta}{2(l-j)},$$
which in turn implies $d(\xi,D_{j,k})\ge\frac{(c-1)\delta}{2(l-j)}-\delta>\delta$. Since $\xi$ was arbitrary, we conclude that $A\cap(D_{l,m})^\delta$ is bounded away from $(D_{j,k})^\delta$.

Recall that
$$I(\xi) := \begin{cases}(\alpha-1)D^+(\xi)+(\beta-1)D^-(\xi) & \text{if $\xi$ is a step function with }\xi(0)=0,\\ \infty & \text{otherwise}.\end{cases}$$

Proof of Theorem 4.2. Observe first that $I(\cdot)$ is a rate function. The level sets $\{\xi:\ I(\xi)\le x\}$ equal $\bigcup_{(l,m)\in\mathbb{Z}_+^2:\ (\alpha-1)l+(\beta-1)m\le x}D_{l,m}$ and are therefore closed; note that the level sets are not compact, so $I(\cdot)$ is not a good rate function (see, for example, Dembo and Zeitouni (2009) for the definition and properties of good rate functions).

Starting with the lower bound, suppose that $G$ is an open set. We assume w.l.o.g. that $\inf_{\xi\in G}I(\xi)<\infty$, since the inequality is trivial otherwise. Due to the discrete nature of $I(\cdot)$, there exists a $\xi^*\in G$ such that $I(\xi^*)=\inf_{\xi\in G}I(\xi)$. Set $j:=D^+(\xi^*)$ and $k:=D^-(\xi^*)$. Let $u_1^+,\dots,u_j^+$ be the sorted (from the earliest to the latest) upward jump times of $\xi^*$; $x_1^+,\dots,x_j^+$ the sorted (from the largest to the smallest) upward jump sizes of $\xi^*$; $u_1^-,\dots,u_k^-$ the sorted downward jump times of $\xi^*$; and $x_1^-,\dots$
$,x_k^-$ the sorted (from the largest to the smallest) downward jump sizes of $\xi^*$. Also, let $x_{j+1}^+=x_{k+1}^-=0$, $u_0^+=u_0^-=0$, and $u_{j+1}^+=u_{k+1}^-=1$. Note that if $\zeta\in D_{l,m}$ for $l<j$, then $d(\xi^*,\zeta)\ge x_j^+/2$, since the $j$ upward jumps of $\xi^*$ cannot all be matched by $\zeta$. Likewise, if $\zeta\in D_{l,m}$ for $m<k$, then $d(\xi^*,\zeta)\ge x_k^-/2$. Therefore, $d\big(D_{<(j,k)},\xi^*\big)\ge(x_j^+\wedge x_k^-)/2$. On the other hand, since $G$ is an open set, we can pick $\delta>0$ such that the ball $B_{\xi^*,\delta}:=\{\zeta\in D:\ d(\zeta,\xi^*)<\delta\}$ centered at $\xi^*$ with radius $\delta$ is a subset of $G$, i.e., $B_{\xi^*,\delta}\subset G$; by shrinking $\delta$ if necessary, we may also assume $\delta\le(x_j^+\wedge x_k^-)/4$. If $j=k=0$, then $\xi^*\equiv0$, and hence $\{\bar X_n\in G\}$ contains $\{\|\bar X_n\|\le\delta\}$, which corresponds to a subset of $B_{\xi^*,\delta}$. One can apply Lemma A.4 to show that $P(\bar X_n\in G)$ converges to 1, which, in turn, proves the inequality. Now, suppose that either $j\ge1$ or $k\ge1$. Then $d\big(B_{\xi^*,\delta},D_{<(j,k)}\big)>0$. Note that we can assume w.l.o.g. that the $x_i^\pm$'s are all distinct, since $G$ is open (because, if some of the jump sizes are identical, we can pick $\epsilon$ such that $B_{\xi^*,\epsilon}\subseteq G$, and then perturb those jump sizes by less than $\epsilon$ to get a new $\xi^*$ which still belongs to $G$ but whose jump sizes are all distinct). Suppose that $\xi^*=\sum_{l=1}^jx^+_{i^+_l}\mathbb{1}_{[u_l^+,1]}-\sum_{l=1}^kx^-_{i^-_l}\mathbb{1}_{[u_l^-,1]}$, where $\{i_1^+,\dots,i_j^+\}$ and $\{i_1^-,\dots,i_k^-\}$ are permutations of $\{1,\dots,j\}$ and $\{1,\dots,k\}$, respectively. Let
$$2\delta' := \delta\wedge\Delta_u^+\wedge\Delta_x^+\wedge\Delta_u^-\wedge\Delta_x^-,$$
where $\Delta_u^+=\min_{i=1,\dots,j+1}(u_i^+-u_{i-1}^+)$, $\Delta_x^+=\min_{i=1,\dots,j}(x_i^+-x_{i+1}^+)$, $\Delta_u^-=\min_{i=1,\dots,k+1}(u_i^--u_{i-1}^-)$, and $\Delta_x^-=\min_{i=1,\dots,k}(x_i^--x_{i+1}^-)$. Consider the following subset $B'$ of $B_{\xi^*,\delta}$:
$$B' := \Big\{\sum_{l=1}^jy^+_{i^+_l}\mathbb{1}_{[v_l^+,1]}-\sum_{l=1}^ky^-_{i^-_l}\mathbb{1}_{[v_l^-,1]}:\ v_i^+\in(u_i^+-\delta',u_i^++\delta'),\ y_i^+\in(x_i^+-\delta',x_i^++\delta'),\ i=1,\dots,j;$$
$$v_i^-\in(u_i^--\delta',u_i^-+\delta'),\ y_i^-\in(x_i^--\delta',x_i^-+\delta'),\ i=1,\dots,k\Big\}.$$
Then,
$$C_{j,k}(B_{\xi^*,\delta})\ \ge\ C_{j,k}(B') = \int_{(u_1^+-\delta',u_1^++\delta')\times\cdots\times(u_j^+-\delta',u_j^++\delta')}d\mathrm{Leb}\cdot\int_{(x_1^+-\delta',x_1^++\delta')\times\cdots\times(x_j^+-\delta',x_j^++\delta')}d\nu_\alpha$$
$$\cdot\int_{(u_1^--\delta',u_1^-+\delta')\times\cdots\times(u_k^--\delta',u_k^-+\delta')}d\mathrm{Leb}\cdot\int_{(x_1^--\delta',x_1^-+\delta')\times\cdots\times(x_k^--\delta',x_k^-+\delta')}d\nu_\beta$$
$$\ge\ (2\delta')^{j+k}\prod_{i=1}^j\nu_\alpha\big(x_i^+-\delta',x_i^++\delta'\big)\prod_{i=1}^k\nu_\beta\big(x_i^--\delta',x_i^-+\delta'\big)\ >\ 0.$$
We conclude that
$$\liminf_{n\to\infty}\frac{\log P(\bar X_n\in G)}{\log n}\ \ge\ \liminf_{n\to\infty}\frac{\log P(\bar X_n\in B_{\xi^*,\delta})}{\log n}\ \ge\ \liminf_{n\to\infty}\frac{\log\big(C_{j,k}(B_{\xi^*,\delta})\,(n\nu[n,\infty))^j\,(n\nu(-\infty,-n])^k\,(1+o(1))\big)}{\log n} = -\big((\alpha-1)j+(\beta-1)k\big), \qquad(5.15)$$
which is the lower bound. Turning to the upper bound, suppose that $K$ is a compact set. We first consider the case where $\inf_{\xi\in K}I(\xi)<\infty$. Pick $\xi^*$, $j$, and $k$ as in the lower bound, i.e., $I(\xi^*):=\inf_{\xi\in K}I(\xi)$, $j:=D^+(\xi^*)$, and $k:=D^-(\xi^*)$. Here we can assume w.l.o.g. that either $j\ge1$ or $k\ge1$; the case $j=k=0$ is trivial. For each $\zeta\in K$, either $I(\zeta)>I(\xi^*)$ or $I(\zeta)=I(\xi^*)$. We construct an open cover of $K$ by considering these two cases separately: if $I(\zeta)>I(\xi^*)$, $\zeta$ is bounded away from $D_{<(j,k)}$.

To see this, it suffices to show that
1) $\sup_{t\in[0,1]}[\xi(t)-\xi(t-)]\le b$ and $\sup_{t\in[0,1]}[\zeta(t)-\zeta(t-)]>b'$ imply $d(\xi,\zeta)\ge\frac{b'-b}{2}$; and
2) $\sup_{t\in[0,1]}[\xi(t)-ct]<a'$ and $\sup_{t\in[0,1]}[\zeta(t)-ct]\ge a$ imply $d(\xi,\zeta)\ge\frac{a-a'}{c+1}$.
It is straightforward to check 1). To see 2), note that for any $\epsilon>0$, one can find $t^*$ such that $\zeta(t^*)-ct^*\ge a-\epsilon$. Of course, $\xi(\lambda(t^*))-c\lambda(t^*)<a'$ for any homeomorphism $\lambda(\cdot)$. Subtracting the latter inequality from the former, we obtain
$$\zeta(t^*)-\xi(\lambda(t^*))\ \ge\ a-a'-\epsilon+c\big(t^*-\lambda(t^*)\big). \qquad(6.1)$$
One can choose $\lambda$ so that $d(\xi,\zeta)+\epsilon\ge\|\lambda-e\|\ge\lambda(t^*)-t^*$ and $d(\zeta,\xi)+\epsilon\ge\|\zeta-\xi\circ\lambda\|\ge\zeta(t^*)-\xi(\lambda(t^*))$, which together with (6.1) yields
$$d(\xi,\zeta)\ >\ a-a'-(c+1)\epsilon-c\,d(\xi,\zeta).$$
This leads to $d(\xi,\zeta)\ge\frac{a-a'}{c+1}$ by taking $\epsilon\to0$. With 1) and 2) in hand, it follows that $\phi_1(\xi):=\sup_{t\in[0,1]}[\xi(t)-ct]$ and $\phi_2(\xi):=\sup_{t\in[0,1]}[\xi(t)-\xi(t-)]$ are continuous functionals and $A^\delta\subseteq A(\delta)$, where
$$A(\delta) := \Big\{\xi\in D:\ \sup_{t\in[0,1]}[\xi(t)-ct]\ge a-(c+1)\delta;\ \sup_{t\in[0,1]}[\xi(t)-\xi(t-)]\le b+3\delta\Big\}.$$
Since $\xi\in A(\delta)\cap D_j$ implies that the smallest jump size of $\xi$ is bounded from below by $\big(a-(c+1)\delta\big)-(b+3\delta)(j-1)$, one can choose $\delta>0$ such that $A(\delta)\cap D_j$ is bounded away from $D_{j-1}$. This implies that $A^\delta\cap D_j$ is also bounded away from $D_{j-1}$ for sufficiently small $\delta>0$, so that $J(A)=j$.

Next, to identify the limit, recall the discussion at the end of Section 3.1. Note that $A=\phi_1^{-1}[a,\infty)\cap\phi_2^{-1}(-\infty,b]$ and
$$\hat T_j^{-1}\big(\phi_1^{-1}[a,\infty)\cap\phi_2^{-1}(-\infty,b]\big) = \Big\{(x,u)\in\hat S_j:\ \sum_{i=1}^jx_i\ge a+c\max_{i=1,\dots,j}u_i,\ \max_{i=1,\dots,j}x_i\le b\Big\},$$
$$\hat T_j^{-1}\big(\phi_1^{-1}(a,\infty)\cap\phi_2^{-1}(-\infty,b)\big) = \Big\{(x,u)\in\hat S_j:\ \sum_{i=1}^jx_i>a+c\max_{i=1,\dots,j}u_i,\ \max_{i=1,\dots,j}x_i<b\Big\}. \qquad(6.2)$$
We see that $\hat T_j^{-1}\big(\phi_1^{-1}[a,\infty)\cap\phi_2^{-1}(-\infty,b]\big)\setminus\hat T_j^{-1}\big(\phi_1^{-1}(a,\infty)\cap\phi_2^{-1}(-\infty,b)\big)$ has Lebesgue measure 0, and hence, $A$ is $C_j$-continuous. Thus, (3.6) holds with
$$C_j(A) = E\Big[\nu_\alpha^j\Big\{x\in(0,\infty)^j:\ \sum_{i=1}^jx_i\mathbb{1}_{[U_i,1]}\in A\Big\}\Big] = \int_{(x,u)\in\hat T_j^{-1}(A)}\prod_{i=1}^j\big[\alpha x_i^{-\alpha-1}\,dx_i\,du_i\big]\ >\ 0.$$
Therefore, we conclude that
$$P\Big(\sup_{t\in[0,1]}[\bar X_n(t)-ct]\ge a;\ \sup_{t\in[0,1]}[\bar X_n(t)-\bar X_n(t-)]\le b\Big)\ \sim\ C_j(A)\,\big(n\nu[n,\infty)\big)^j. \qquad(6.3)$$
In particular, the probability of interest is regularly varying with index $-(\alpha-1)\lceil a/b\rceil$.

We consider a Lévy-driven Ornstein–Uhlenbeck process of the form
$$d\bar Y_n(t) = -\kappa\,\bar Y_n(t)\,dt+d\bar X_n(t), \qquad \bar Y_n(0)=0.$$
We apply our results to provide sharp large-deviations estimates for
$$b(n) = P\big(\inf\{\bar Y_n(t):\ 0\le t\le1\}\le-a_-,\ \bar Y_n(1)\ge a_+\big)$$
as $n\to\infty$, where $a_-,a_+>0$.
This probability can be interpreted as the price of a barrier digital option (see Cont and Tankov, 2004, Section 11.3). In order to apply our results, it is useful to represent $\bar Y_n$ as an explicit function of $\bar X_n$. In particular, we have that
$$\bar Y_n(t) = \exp(-\kappa t)\Big(\bar Y_n(0)+\int_0^t\exp(\kappa s)\,d\bar X_n(s)\Big) \qquad(6.4)$$
$$= \bar X_n(t)-\kappa\exp(-\kappa t)\int_0^t\exp(\kappa s)\,\bar X_n(s)\,ds. \qquad(6.5)$$
Hence, if $\phi:D([0,1],\mathbb{R})\to D([0,1],\mathbb{R})$ is defined via
$$\phi(\xi)(t) = \xi(t)-\kappa\exp(-\kappa t)\int_0^t\exp(\kappa s)\,\xi(s)\,ds,$$
then $\bar Y_n=\phi(\bar X_n)$. Moreover, if we let
$$A = \Big\{\xi\in D:\ \inf_{0\le t\le1}\phi(\xi)(t)\le-a_-,\ \phi(\xi)(1)\ge a_+\Big\},$$
then we obtain $b(n)=P(\bar X_n\in A)$. In order to easily verify topological properties of $A$, let us define $m,\pi:D([0,1],\mathbb{R})\to\mathbb{R}$ by $m(\xi)=\inf_{0\le t\le1}\xi(t)$ and $\pi(\xi)=\xi(1)$. Note that $\pi$ is continuous (see Billingsley, 2013, Theorem 12.5), that $m$ is continuous as well, and so is $\phi$. Thus, $m\circ\phi$ and $\pi\circ\phi$ are continuous. We can therefore write
$$A = (m\circ\phi)^{-1}(-\infty,-a_-]\cap(\pi\circ\phi)^{-1}[a_+,\infty),$$
concluding that $A$ is a closed set. We now apply Theorem 3.4. To show that $D_{i,0}$ is bounded away from $(m\circ\phi)^{-1}(-\infty,-a_-]$, select $\theta$ such that $d(\theta,D_{i,0})<r$ with $r<a_-/(1+\kappa\exp(\kappa))$. There exists a $\xi\in D_{i,0}$ such that $d(\theta,\xi)<r$. There also exists a homeomorphism $\lambda:[0,1]\to[0,1]$ such that
$$\sup_{t\in[0,1]}|\lambda(t)-t|\vee|(\xi\circ\lambda)(t)-\theta(t)|\ <\ r. \qquad(6.6)$$
Now, define $\psi=\theta-(\xi\circ\lambda)$. Due to the linearity of $\phi$, and representations (6.4) and (6.5), we obtain that
$$\phi(\theta)(t) = \phi(\xi\circ\lambda)(t)+\phi(\psi)(t) = \exp(-\kappa t)\sum_{j=1}^i\exp\big(\kappa\lambda^{-1}(u_j)\big)x_j\,\mathbb{1}_{[\lambda^{-1}(u_j),1]}(t)+\psi(t)-\kappa\exp(-\kappa t)\int_0^t\exp(\kappa s)\,\psi(s)\,ds.$$
Since $x_j\ge0$, applying the triangle inequality and inequality (6.6), we conclude (by our choice of $r$) that
$$\inf_{0\le t\le1}\phi(\theta)(t)\ \ge\ -r\big(1+\kappa\exp(\kappa)\big)\ >\ -a_-.$$
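The display above uses the fact that $\phi$ maps a unit jump at $u$ to an exponentially decaying response, $\phi(\mathbb{1}_{[u,1]})(t)=e^{-\kappa(t-u)}\mathbb{1}_{[u,1]}(t)$. This can be confirmed against the integral representation (6.5) by direct quadrature; the sketch below uses arbitrary illustrative parameter values and a hypothetical helper name.

```python
import math

kappa, u, x = 0.7, 0.3, 1.0              # arbitrary illustrative parameters
xi = lambda s: x if s >= u else 0.0      # xi = x * 1_{[u,1]}

def phi(t, steps=200000):
    # phi(xi)(t) = xi(t) - kappa * e^{-kappa t} * int_0^t e^{kappa s} xi(s) ds
    h = t / steps
    integral = sum(math.exp(kappa * (i + 0.5) * h) * xi((i + 0.5) * h)
                   for i in range(steps)) * h
    return xi(t) - kappa * math.exp(-kappa * t) * integral

t = 0.8
closed_form = x * math.exp(-kappa * (t - u))   # exponentially decayed jump response
assert abs(phi(t) - closed_form) < 1e-4
```

The closed form drops straight out of (6.5): the integral of $e^{\kappa s}$ over $[u,t]$ telescopes, leaving $x\,e^{-\kappa(t-u)}$.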
A similar argument allows us to conclude that $D_{0,i}$ is bounded away from $(\pi\circ\phi)^{-1}[a_+,\infty)$. Hence, in addition to being closed, $A$ is bounded away from $D_{0,i}\cup D_{i,0}$ for any $i\ge1$. Moreover, let $\xi\in A\cap D_{1,1}$, with
$$\xi(t) = x\,\mathbb{1}_{[u,1]}(t)-y\,\mathbb{1}_{[v,1]}(t), \qquad(6.7)$$
where $x,y>0$. Using (6.4), we obtain that $\xi\in A\cap D_{1,1}$ is equivalent to $y\ge a_-$, $u>v$, and
$$x\ \ge\ a_+\exp(\kappa(1-u))+y\exp(-\kappa(u-v)).$$
Furthermore,
$$A^\circ = \Big\{\xi\in D:\ \inf_{0\le t\le1}\phi(\xi)(t)<-a_-,\ \phi(\xi)(1)>a_+\Big\} \qquad(6.8)$$
$$= (m\circ\phi)^{-1}(-\infty,-a_-)\cap(\pi\circ\phi)^{-1}(a_+,\infty).$$
It is clear that $A^\circ$ contains the open set on the right-hand side. We now argue that such a set is actually maximal, so that equality holds. Suppose that $\phi(\xi)(1)=a_+$, while $\min_{0\le t\le1}\phi(\xi)(t)<-a_-$. We then consider $\psi=-\delta\,\mathbb{1}_{\{1\}}(t)$ with $\delta>0$, and note that $d(\xi,\xi+\psi)\le\delta$, and
$$\phi(\xi+\psi)(t) = \phi(\xi)(t)\,\mathbb{1}_{[0,1)}(t)+(a_+-\delta)\,\mathbb{1}_{\{1\}}(t),$$
so that $\xi+\psi\notin A$. Similarly, we can see that the other inequality (involving $a_-$) must also be strict, hence concluding that (6.8) holds. We deduce that, if $\xi\in A^\circ\cap D_{1,1}$ with $\xi$ satisfying (6.7), then $y>a_-$, $u>v$, and $x>a_+\exp(\kappa(1-u))+y\exp(-\kappa(u-v))$. Thus, we can see that $A$ is $C_{1,1}(\cdot)$-continuous, either directly or by invoking our discussion in Section 3.1 regarding continuity of sets. Therefore, applying Theorem 3.4, we conclude that
$$b(n)\ \sim\ n\nu[n,\infty)\cdot n\nu(-\infty,-n]\cdot C_{1,1}(A)$$
as $n\to\infty$, where
$$C_{1,1}(A) = \int_0^1\int_{a_-}^\infty\int_v^1\int_{a_+\exp(\kappa(1-u))+y\exp(-\kappa(u-v))}^\infty\nu_\alpha(dx)\,du\,\nu_\beta(dy)\,dv.$$
In particular, the probability of interest is regularly varying with index $2-\alpha-\beta$.

6.3 $A=\{\xi:\ l\le\xi\le u\}$

The sets that appeared in the examples in Section 6.1 and Section 6.2 lend themselves to a direct characterization of the optimal numbers of jumps $(J(A),K(A))$. However, in more complicated problems, deciding what kind of paths the most probable limit behaviors consist of may not be as obvious.
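As an aside, the constant $C_{1,1}(A)$ from the barrier-option example above can be evaluated numerically once the Pareto forms $\nu_\alpha(dx)=\alpha x^{-\alpha-1}dx$ and $\nu_\beta(dy)=\beta y^{-\beta-1}dy$ are substituted: the inner $\nu_\alpha$-integral then equals $\big(a_+e^{\kappa(1-u)}+ye^{-\kappa(u-v)}\big)^{-\alpha}$. The rough Riemann-sum sketch below uses arbitrary illustrative parameter and grid choices.

```python
import math

alpha, beta, kappa, a_plus = 2.5, 2.5, 0.7, 1.0   # illustrative parameters only

def C11(a_minus, grid=60, y_max=50.0):
    du = dv = 1.0 / grid
    dy = (y_max - a_minus) / grid
    total = 0.0
    for iv in range(grid):
        v = (iv + 0.5) * dv
        for iy in range(grid):
            y = a_minus + (iy + 0.5) * dy
            weight = beta * y ** (-beta - 1.0) * dy * du * dv   # nu_beta(dy) du dv
            for iu in range(grid):
                u = (iu + 0.5) * du
                if u <= v:
                    continue                                    # u ranges over (v, 1)
                thresh = (a_plus * math.exp(kappa * (1.0 - u))
                          + y * math.exp(-kappa * (u - v)))
                total += thresh ** (-alpha) * weight            # inner nu_alpha tail
    return total

# positivity, and monotone decrease as the lower barrier a_minus grows:
assert C11(1.0) > C11(2.0) > 0.0
```

The monotonicity check reflects the obvious qualitative behavior: deepening the barrier $a_-$ shrinks the feasible region of the down-jump size $y$ and raises the required up-jump size $x$, so $C_{1,1}(A)$ decreases.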
In this section, we show that for sets of a certain form, we can identify an optimal path. Consider continuous real-valued functions $l$ and $u$, which satisfy $l(t) < u(t)$ for every $t\in[0,1]$ and $l(0) < 0 < u(0)$. Define $A = \{\xi : l(t) \le \xi(t) \le u(t)\ \text{for all}\ t\in[0,1]\}$. We assume that both $\alpha, \beta < \infty$, which is the most interesting case.

The goal of this section is to construct an algorithm which yields an expression for $J(A)$ and $K(A)$. In fact, we can completely identify a function $h$ that solves the optimization problem defining $(J(A), K(A))$. This function will be a step function with both positive and negative steps. We first construct such a function, and then verify its optimality. The first step is to identify the times at which this function jumps. Define the sets
\[
A_t \triangleq \{x : l(t) \le x \le u(t)\}, \qquad A^*_{s,t} \triangleq \bigcap_{s\le r\le t} A_r,
\]
and the times $(t_n, n\ge 1)$ by
\[
t_1 \triangleq 1 \wedge \inf\{t > 0 : 0 \notin A_t\}, \qquad t_{n+1} \triangleq 1 \wedge \inf\{t > t_n : A^*_{t_n,t} = \emptyset\} \quad \text{for } n \ge 1.
\]
Let $n^* = \inf\{n\ge 1 : t_n = 1\}$. Assume that $n^* > 1$, since the zero function is the obvious optimal path in case $n^* = 1$. Due to the construction of the times $t_n$, $n\ge 1$, we have the following properties:
• Either $l(t_1) = 0$ or $u(t_1) = 0$.
• For every $n = 1, \dots, n^*-2$, $\sup_{t\in[t_n,t_{n+1}]} l(t) = \inf_{t\in[t_n,t_{n+1}]} u(t)$.
• $H_{\mathrm{fin}} \triangleq [\sup_{t\in[t_{n^*-1},t_{n^*}]} l(t),\ \inf_{t\in[t_{n^*-1},t_{n^*}]} u(t)]$ is nonempty.
Set $h_n \triangleq \sup_{t\in[t_n,t_{n+1}]} l(t)$ for $n = 1, \dots, n^*-2$, and set $h_{n^*-1} \triangleq h_{\mathrm{fin}}$ for any $h_{\mathrm{fin}} \in H_{\mathrm{fin}}$. Define now $h(t)$ as $0$ on $t\in[0,t_1)$, $h(t) = h_n$ on $t\in[t_n,t_{n+1})$ for $n = 1, \dots, n^*-2$, and $h(t) = h_{n^*-1}$ on $t\in[t_{n^*-1},1]$. Then $(J(A), K(A)) = (J(\{h\}), K(\{h\}))$. In fact, we can prove that if $g\in A$ is a step function, then $D_+(g) \ge D_+(h)$ and $D_-(g) \ge D_-(h)$, which implies the optimality of $h$. The proof is based on the following observation.
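Before turning to that observation, we note that the construction of $h$ is easy to realize on a grid. The sketch below is our own illustration (the helper name `optimal_path`, the discretization, and the choice $h_{\mathrm{fin}} = \sup l$ over the last interval are assumptions, not from the paper): a first pass finds the jump times by tracking the running intersection of the intervals $A_t = [l(t), u(t)]$, and a second pass sets each level to the supremum of $l$ over the corresponding closed interval.

```python
import numpy as np

def optimal_path(l, u, m=20001):
    """Discretized sketch of the construction of h.
    Pass 1: t_1 is the first time the zero level leaves [l(t), u(t)]; a new
    jump occurs whenever the running intersection of the tubes becomes empty.
    Pass 2: on [t_n, t_{n+1}) the level is sup of l over [t_n, t_{n+1}]
    (on the final interval this is one admissible choice of h_fin).
    l, u: vectorizable callables on [0,1] with l < u and l(0) < 0 < u(0)."""
    t = np.linspace(0.0, 1.0, m)
    lv, uv = l(t), u(t)

    # pass 1: locate t_1, then subsequent jump indices
    i = 0
    while i < m and lv[i] <= 0.0 <= uv[i]:
        i += 1
    jumps = []
    while i < m:
        jumps.append(i)
        lo, hi = lv[i], uv[i]
        i += 1
        while i < m and max(lo, lv[i]) <= min(hi, uv[i]):
            lo, hi = max(lo, lv[i]), min(hi, uv[i])
            i += 1

    # pass 2: piecewise-constant levels between consecutive jump times
    h = np.zeros(m)
    bounds = jumps + [m - 1]
    for n in range(len(jumps)):
        a, b = bounds[n], bounds[n + 1]
        h[a:b + 1] = lv[a:b + 1].max()   # h_n = sup of l over [t_n, t_{n+1}]
    return t, h, [t[j] for j in jumps]

# example barriers: two upward jumps are forced, near t = 1/3 and t = 5/6
t, h, jumps = optimal_path(lambda s: 3 * s - 1, lambda s: 3 * s + 0.5)
print(len(jumps), [round(v, 3) for v in jumps])
```

For these barriers the zero function exits the tube once $l(t) > 0$, i.e. at $t = 1/3$, and the running intersection next empties at $t = 5/6$, so the sketch returns two upward jumps, matching the definition of the times $t_n$.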
At each $t_{n+1}$, either

1) for any $\epsilon > 0$, there exists $t\in[t_{n+1}, t_{n+1}+\epsilon]$ such that $u(t) < h_n$, or

2) for any $\epsilon > 0$, there exists $t\in[t_{n+1}, t_{n+1}+\epsilon]$ such that $l(t) > h_n$.

Otherwise, there exists $\epsilon > 0$ such that $h_n \in A^*_{t_n, t_{n+1}+\epsilon}$, contradicting the definition of $t_{n+1}$, which requires $A^*_{t_n, t_{n+1}+\epsilon} = \emptyset$. From this observation, we can prove that on each interval $(t_n, t_{n+1}]$, any feasible path must jump at least once in the same direction as that of the jump of $h$. To see this, first suppose that 1) is the case at $t_{n+1}$, and $g\in A$ is a step function. Note that, due to its continuity, $l(\cdot)$ achieves its supremum over $[t_n, t_{n+1}]$ at some $t_{\sup}\in[t_n,t_{n+1}]$, i.e., $l(t_{\sup}) = h_n$, and hence $g(t_{\sup}) \ge h_n$. On the other hand, due to the right continuity of $g$ and 1), $g$ has to be strictly less than $h_n$ at $t_{n+1}$, i.e., $g(t_{n+1}) < h_n$. Therefore, $g$ must have a downward jump on $(t_{\sup}, t_{n+1}] \subseteq (t_n, t_{n+1}]$. Note that the direction of the jump of $h$ in the interval $(t_n, t_{n+1}]$ (more specifically, at $t_{n+1}$) also has to be downward. Since $g$ is an arbitrary feasible path, this means that whenever $h$ jumps downward on $(t_n, t_{n+1}]$, any feasible path in $A$ must also jump downward there. Hence, any feasible path has at least as many downward jumps as $h$ on $[0,1]$. The same argument applies to the upward jumps of $h$, proving that $h$ is indeed the optimal path.

6.4 Multiple Optima

This section illustrates how to handle a case where we require Theorem 3.5, and considers an illustrative example where a rare event can be caused by two different configurations of big jumps. Suppose that the regularly varying indices $-\alpha$ and $-\beta$ for the positive and negative parts of the Lévy measure $\nu$ of $X$ are equal, and consider the set
\[
A \triangleq \{\xi\in D : |\xi(t)| \ge t - 1/2\ \text{for all}\ t\in[0,1]\}.
\]
Then $\arg\min_{(j,k):\, D_{j,k}\cap A \neq \emptyset} I(j,k) = \{(1,0), (0,1)\}$, and $D^{\ll}_{1,0} = D^{\ll}_{0,1} = D_{0,0}$. Since $|\xi(1)| \ge 1/2$ for $\xi\in A$, $d(A, D_{0,0}) \ge 1/2 > 0$.
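Before invoking Theorem 3.5, the one-jump description of $A$ can be sanity-checked numerically: a path $x\,I_{[u,1]}$ stays in $A$ precisely when $x \ge 1/2$ and $0 < u \le 1/2$ (and symmetrically for a single negative jump). The following sketch is our own illustration (the helper `in_A` and the grid are assumptions, not part of the argument):

```python
import numpy as np

def in_A(x, u, m=4001):
    """Check on a grid whether the one-jump path xi = x * 1_{[u,1]}
    satisfies |xi(t)| >= t - 1/2 for all t in [0, 1]."""
    t = np.linspace(0.0, 1.0, m)
    xi = np.where(t >= u, x, 0.0)
    return bool(np.all(np.abs(xi) >= t - 0.5))

print(in_A(0.5, 0.5))   # boundary case: just feasible -> True
print(in_A(0.5, 0.51))  # jump too late: fails on (1/2, 0.51) -> False
print(in_A(0.49, 0.3))  # jump too small: fails at t = 1 -> False
```

This matches the characterization of $A \cap D_{1,0}$ used below: the jump must occur by time $1/2$ (before the constraint $t - 1/2$ becomes positive) and must have size at least $1/2$ (the constraint at $t = 1$).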
Theorem 3.5 therefore applies, and for each $\epsilon > 0$, there exists $N$ such that
\[
P(\bar X_n \in A) \ge \big(C_{1,0}(A^\circ\cap D_{1,0}) - \epsilon\big) L_+(n)\, n^{1-\alpha} + \big(C_{0,1}(A^\circ\cap D_{0,1}) - \epsilon\big) L_-(n)\, n^{1-\alpha},
\]
\[
P(\bar X_n \in A) \le \big(C_{1,0}(A^-\cap D_{1,0}) + \epsilon\big) L_+(n)\, n^{1-\alpha} + \big(C_{0,1}(A^-\cap D_{0,1}) + \epsilon\big) L_-(n)\, n^{1-\alpha},
\]
for all $n\ge N$. Note that $A$ is closed, since if there are $\xi\in D$ and $s\in[0,1]$ with $|\xi(s)| < s - 1/2$, then $B\big(\xi, \tfrac{s-1/2-|\xi(s)|}{2}\big) \subseteq A^c$. Therefore,
\[
A^-\cap D_{1,0} = A\cap D_{1,0} = \{\xi = x I_{[u,1]} : x \ge 1/2,\ 0 < u \le 1/2\},
\]
and hence,
\[
C_{1,0}(A^-\cap D_{1,0}) = P(U\in(0,1/2])\, \nu_\alpha[1/2,\infty) = (1/2)^{1-\alpha}.
\]
Noting that
\[
A^\circ\cap D_{1,0} \supseteq \{\xi = x I_{[u,1]} : x > 1/2,\ 0 < u < 1/2\},
\]
we deduce $C_{1,0}(A^\circ\cap D_{1,0}) \ge P(U\in(0,1/2))\, \nu_\alpha(1/2,\infty) = (1/2)^{1-\alpha}$. Therefore, $C_{1,0}(A^\circ\cap D_{1,0}) = C_{1,0}(A^-\cap D_{1,0}) = (1/2)^{1-\alpha}$. Similarly, we can check that $C_{0,1}(A^\circ\cap D_{0,1}) = C_{0,1}(A^-\cap D_{0,1}) = (1/2)^{1-\beta}\ (= (1/2)^{1-\alpha})$. Therefore, for $n\ge N$,
\[
\big((1/2)^{1-\alpha} - \epsilon\big)\big(L_+(n)+L_-(n)\big) n^{1-\alpha} \le P(\bar X_n\in A) \le \big((1/2)^{1-\alpha} + \epsilon\big)\big(L_+(n)+L_-(n)\big) n^{1-\alpha}.
\]
This is equivalent to
\[
\Big(\frac12\Big)^{1-\alpha} \le \liminf_{n\to\infty} \frac{P(\bar X_n\in A)}{(L_+(n)+L_-(n))\, n^{1-\alpha}} \le \limsup_{n\to\infty} \frac{P(\bar X_n\in A)}{(L_+(n)+L_-(n))\, n^{1-\alpha}} \le \Big(\frac12\Big)^{1-\alpha}.
\]
Hence,
\[
\lim_{n\to\infty} \frac{P(\bar X_n\in A)}{(L_+(n)+L_-(n))\, n^{1-\alpha}} = \Big(\frac12\Big)^{1-\alpha}.
\]

A Inequalities

Lemma A.1 (Generalized Kolmogorov inequality; Shneer and Wachtel (2009)). Let $S_n = X_1 + \cdots + X_n$ be a random walk with mean-zero increments, i.e., $E X_i = 0$. Then there exists a constant $C$ such that
\[
P\Big(\max_{k\le n} S_k \ge x\Big) \le C\, \frac{n V(x)}{x^2}, \quad \text{where } V(x) = E\big(X_1^2;\, |X_1| \le x\big),
\]
for all $x > 0$.

Lemma A.2 (Etemadi's inequality). Let $X_1, \dots, X_n$ be independent real-valued random variables defined on some common probability space, and let $x \ge 0$. Let $S_k$ denote the partial sum $S_k = X_1 + \cdots + X_k$. Then
\[
P\Big(\max_{1\le k\le n} |S_k| \ge 3x\Big) \le 3 \max_{1\le k\le n} P\big(|S_k| \ge x\big).
\]

Lemma A.3 (Prokhorov's inequality; Prokhorov (1959)). Suppose that $\xi_i$, $i = 1, \dots$
$, n$, are independent, zero-mean random variables such that there exists a constant $c$ for which $|\xi_i| \le c$ for $i = 1, \dots, n$, and $\sum_{i=1}^n \operatorname{var} \xi_i < \infty$. Then
\[
P\Big(\sum_{i=1}^n \xi_i > x\Big) \le \exp\Big\{-\frac{x}{2c}\,\operatorname{arcsinh}\Big(\frac{xc}{2\sum_{i=1}^n \operatorname{var}\xi_i}\Big)\Big\},
\]
which, in turn, implies
\[
P\Big(\sum_{i=1}^n \xi_i > x\Big) \le \Big(\frac{cx}{\sum_{i=1}^n \operatorname{var}\xi_i}\Big)^{-x/(2c)}.
\]

We extend Etemadi's inequality to Lévy processes in the following lemma.

Lemma A.4. Let $Z$ be a Lévy process. Then,
\[
P\Big(\sup_{t\in[0,n]} |Z(t)| \ge x\Big) \le 3 \sup_{t\in[0,n]} P\big(|Z(t)| \ge x/3\big).
\]

Proof. Since $Z$ (and hence $|Z|$ as well) has paths in $D$, $\sup_{0\le k\le m} |Z(nk/m)|$ converges to $\sup_{t\in[0,n]} |Z(t)|$ almost surely as $m\to\infty$. To see this, note that one can choose $t_i$'s such that $|Z(t_i)| \ge \sup_{t\in[0,n]} |Z(t)| - i^{-1}$. Since the $t_i$'s lie in the compact set $[0,n]$, there is a subsequence, say $t'_i$, such that $t'_i \to t$ for some $t\in[0,n]$. The supremum has to be achieved at either $t-$ or $t$. Either way, for large enough $m$, $\sup_{0\le k\le m} |Z(nk/m)|$ becomes arbitrarily close to the supremum. Now, by bounded convergence,
\[
P\Big(\sup_{t\in[0,n]} |Z(t)| > x\Big) = \lim_{m\to\infty} P\Big(\sup_{0\le k\le m} \big|Z(nk/m)\big| > x\Big) = \lim_{m\to\infty} P\Big(\sup_{0\le k\le m} \Big|\sum_{i=0}^k \big(Z(ni/m) - Z(n(i-1)/m)\big)\Big| > x\Big)
\]
\[
\le \lim_{m\to\infty} 3\max_{0\le k\le m} P\Big(\Big|\sum_{i=0}^k \big(Z(ni/m) - Z(n(i-1)/m)\big)\Big| > x/3\Big) = \lim_{m\to\infty} 3\max_{0\le k\le m} P\big(|Z(nk/m)| > x/3\big) \le 3\sup_{t\in[0,n]} P\big(|Z(t)| > x/3\big),
\]
where we use the convention $Z(t) = 0$ for $t < 0$. □

References

Asmussen, S. and Pihlsgård, M. (2005). Performance analysis with truncated heavy-tailed distributions. Methodol. Comput. Appl. Probab., 7(4):439–457.

Barles, G. (1985).
Deterministic impulse control problems. SIAM J. Control Optim., 23(3):419–432.

Billingsley, P. (2013). Convergence of probability measures. John Wiley & Sons.

Blanchet, J. and Shi, Y. (2012). Measuring systemic risks in insurance-reinsurance networks. Preprint.

Borovkov, A. A. and Borovkov, K. A. (2008). Asymptotic analysis of random walks: Heavy-tailed distributions. Number 118. Cambridge University Press.

Buraczewski, D., Damek, E., Mikosch, T., and Zienkiewicz, J. (2013). Large deviations for solutions to stochastic recurrence equations under Kesten's condition. Ann. Probab., 41(4):2755–2790.

Chen, B., Blanchet, J., Rhee, C.-H., and Zwart, B. (2017). Efficient rare-event simulation for multiple jump events in regularly varying random walks and compound Poisson processes. arXiv preprint arXiv:1706.03981.

Cont, R. and Tankov, P. (2004). Financial modelling with jump processes. Chapman & Hall.

de Haan, L. and Lin, T. (2001). On convergence toward an extreme value distribution in C[0,1]. Ann. Probab., 29(1):467–483.

Dembo, A. and Zeitouni, O. (2009). Large deviations techniques and applications, volume 38. Springer Science & Business Media.

Denisov, D., Dieker, A., and Shneer, V. (2008). Large deviations for random walks under subexponentiality: the big-jump domain. The Annals of Probability, 36(5):1946–1991.

Durrett, R. (1980). Conditioned limit theorems for random walks with negative drift. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 52(3):277–287.

Embrechts, P., Goldie, C. M., and Veraverbeke, N. (1979). Subexponentiality and infinite divisibility. Z. Wahrsch. Verw. Gebiete, 49(3):335–347.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling extremal events, volume 33 of Applications of Mathematics (New York). Springer-Verlag, Berlin. For insurance and finance.

Foss, S., Konstantopoulos, T., and Zachary, S. (2007). Discrete and continuous time modulated random walks with heavy-tailed increments.
Journal of Theoretical Probability, 20(3):581–612.

Foss, S. and Korshunov, D. (2012). On large delays in multi-server queues with heavy tails. Mathematics of Operations Research, 37(2):201–218.

Foss, S., Korshunov, D., and Zachary, S. (2011). An introduction to heavy-tailed and subexponential distributions. Springer.

Gantert, N. (1998). Functional Erdős–Rényi laws for semiexponential random variables. The Annals of Probability, 26(3):1356–1369.

Gantert, N. (2000). The maximum of a branching random walk with semiexponential increments. Ann. Probab., 28(3):1219–1229.

Gantert, N., Ramanan, K., and Rembart, F. (2014). Large deviations for weighted sums of stretched exponential random variables. Electron. Commun. Probab., 19:no. 41, 14.

Hult, H. and Lindskog, F. (2005). Extremal behavior of regularly varying stochastic processes. Stochastic Process. Appl., 115(2):249–274.

Hult, H. and Lindskog, F. (2006). Regular variation for measures on metric spaces. Publ. Inst. Math. (Beograd) (N.S.), 80(94):121–140.

Hult, H. and Lindskog, F. (2007). Extremal behavior of stochastic integrals driven by regularly varying Lévy processes. Ann. Probab., 35(1):309–339.

Hult, H., Lindskog, F., Mikosch, T., and Samorodnitsky, G. (2005). Functional large deviations for multivariate regularly varying random walks. The Annals of Applied Probability, 15(4):2651–2680.

Konstantinides, D. G. and Mikosch, T. (2005). Large deviations and ruin probabilities for solutions to stochastic recurrence equations with heavy-tailed innovations. Ann. Probab., 33(5):1992–2035.

Kyprianou, A. E. (2014). Fluctuations of Lévy processes with applications: Introductory lectures. Springer Science & Business Media.

Lindskog, F., Resnick, S. I., and Roy, J. (2014). Regularly varying measures on metric spaces: Hidden regular variation and hidden jumps. Probability Surveys, 11:270–314.

Mikosch, T. and Samorodnitsky, G. (2000). Ruin probability with claims modeled by a stationary ergodic stable process. Ann. Probab.
, 28(4):1814–1851.

Mikosch, T. and Wintenberger, O. (2013). Precise large deviations for dependent regularly varying sequences. Probab. Theory Related Fields, 156(3-4):851–887.

Mikosch, T. and Wintenberger, O. (2016). A large deviations approach to limit theory for heavy-tailed time series. Probability Theory and Related Fields, 166(1-2):233–269.

Nagaev, A. V. (1969). Limit theorems that take into account large deviations when Cramér's condition is violated. Izv. Akad. Nauk UzSSR Ser. Fiz.-Mat. Nauk, 13(6):17–22.

Nagaev, A. V. (1977). A property of sums of independent random variables. Teor. Verojatnost. i Primenen., 22(2):335–346.

Prokhorov, Y. V. (1959). An extremal problem in probability theory. Theor. Probability Appl., 4:201–203.

Pyke, R. (1965). Spacings. Journal of the Royal Statistical Society. Series B (Methodological), pages 395–449.

Samorodnitsky, G. (2004). Extreme value theory, ergodic theory and the boundary between short memory and long memory for stationary stable processes. Ann. Probab., 32(2):1438–1468.

Shneer, S. and Wachtel, V. (2009). Heavy-traffic analysis of the maximum of an asymptotically stable random walk. arXiv preprint arXiv:0902.2185.

Whitt, W. (1980). Some useful functions for functional limit theorems. Mathematics of Operations Research, 5(1):67–85.

Zwart, B., Borst, S., and Mandjes, M. (2004). Exact asymptotics for fluid queues fed by multiple heavy-tailed on-off flows.