On the distribution of the last exit time over a slowly growing linear boundary for a Gaussian process
N.A. Karagodin, M.A. Lifshits

Abstract
For a class of Gaussian stationary processes, we prove a limit theorem on the convergence of the distributions of the scaled last exit time over a slowly growing linear boundary. The limit is a double exponential (Gumbel) distribution.
Key words and phrases: last exit time, Gaussian process, limit theorem, double exponential law.
1 Introduction

Consider a stationary Gaussian process with continuous trajectories and its "last exit time over a linear boundary", i.e. the last instant when the process hits the line a t (where t denotes time and a > 0 is the slope), staying below that line afterwards. We are interested in the asymptotic distribution of the last exit time when the trend a tends to zero. In this work, we prove a limit theorem on the convergence of the distribution of the properly centered and scaled last exit time to a double exponential (Gumbel) law.

A special case of this problem, for a particular process, emerged in the recent works [1, 2] providing a mathematical study of a physical model (Brownian chain break). Quite naturally, the question arises whether it is possible to extend that result to a sufficiently wide class of processes. This is what we do here.

∗ The work of M.A. Lifshits was supported by RFBR-DFG grant 20-51-12004.
† St. Petersburg State University, Department of Mathematics and Computer Sciences, 199034, St. Petersburg, Universitetskaya emb., 7–9. Email: [email protected].
‡ St. Petersburg State University, Department of Mathematics and Computer Sciences, 199034, St. Petersburg, Universitetskaya emb., 7–9. Email: [email protected].
As far as we know, the problem setting involving a small trend is new, although the last exit time is a fairly popular object in problems of economical applications such as studies of ruin probabilities. In those settings, however, as a rule, one considers processes with stationary increments, and the trend is fixed; see [3, 4, 6].
Let Y(t), t ∈ ℝ, be a real-valued centered stationary Gaussian process with covariance function ρ(t) := E[Y(t)Y(0)]. We make two assumptions on the covariance function: at zero,

ρ(t) = v²(1 − Q|t|^α + o(|t|^α)), as t → 0,   (1)

for some v > 0, Q > 0, α ∈ (0, 2], while at infinity

ρ(t) = o((ln t)^{−1}), as t → +∞.   (2)

Recall that relation (1) appears in the following lemma that will serve as one of the two basic tools in our calculations.

Lemma 1 (Pickands–Piterbarg lemma). Let Y(t), t ∈ ℝ, be a real-valued centered stationary Gaussian process satisfying condition (1) and lim sup_{t→∞} ρ(t) < v². Then

P{ max_{s∈[0,t]} Y(s) ≥ x } ∼ (Q^{1/α} H_α / √(2π)) · t · (x/v)^{2/α−1} e^{−x²/(2v²)}

for all x and t such that the right-hand side tends to zero and t x^{2/α} → ∞. Here H_α are the Pickands constants (in particular, H₁ = 1, H₂ = π^{−1/2}).

Here and elsewhere throughout the paper, f ∼ g stands for lim f/g = 1. A first version of this lemma with fixed t was obtained by Pickands [7], while this version with variable t (which is very important for our goals) is due to Piterbarg, see [8, Lecture 9].

Define the last exit time of Y over a linear boundary as

T = T(ε) := max{ t : Y(t) = εt }.

The main result of our work is as follows.
Theorem 2. Let Y(t), t ∈ ℝ, be a real-valued centered stationary Gaussian process satisfying assumptions (1) and (2). Let

c := Q^{1/α} H_α / √(2π),  ε_v := ε/v.

Then for each r ∈ ℝ it is true that

lim_{ε→0} P{ (T(ε) − A_ε)/B_ε ≤ r } = exp(−c exp(−r)),

with the scaling constants

A_ε := ε_v^{−1} ( √(−2 ln ε_v) + (1/α − 1) · ln(−2 ln ε_v)/√(−2 ln ε_v) ),
B_ε := ( ε_v √(−2 ln ε_v) )^{−1}.

We stress that the research technique of the initial work [1] was based on the exponential strong mixing property satisfied by the process studied there. In our work, the assumption is only imposed on the covariance function (which is by far easier to check than strong mixing), and the required decay of the covariance is logarithmic instead of exponential.

The double exponential law usually emerges in the study of maxima of identically distributed random variables. Amazingly, in our problem it is related to the maxima of non-identically distributed variables.
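As a purely illustrative sanity check (ours, not part of the paper): for the Ornstein–Uhlenbeck process one has ρ(t) = e^{−|t|} = 1 − |t| + o(|t|), so v = Q = α = 1, the logarithmic correction in A_ε vanishes, and c = H₁/√(2π) = (2π)^{−1/2}; the exponential covariance decay is far stronger than (2) requires. The sketch below simulates OU paths by their exact AR(1) recursion on a grid and extracts the last grid point lying above the line εt; since ε is only moderately small here, the agreement with the Gumbel limit is necessarily rough.

```python
import numpy as np

rng = np.random.default_rng(0)

eps = 0.3                      # slope of the boundary (only moderately small)
L = -2.0 * np.log(eps)         # -2 ln(eps)
A = np.sqrt(L) / eps           # A_eps for v = Q = alpha = 1
B = 1.0 / (eps * np.sqrt(L))   # B_eps
c = 1.0 / np.sqrt(2 * np.pi)   # limiting constant c = H_1 / sqrt(2 pi)

dt, t_max, n_sim = 0.01, 6.0 * A, 2000
t = np.arange(0.0, t_max, dt)
phi = np.exp(-dt)              # AR(1) coefficient of the OU transition over dt
sigma = np.sqrt(1.0 - phi**2)

# exact simulation of OU paths: Y_{k+1} = phi * Y_k + sigma * xi_k
Y = np.empty((n_sim, t.size))
Y[:, 0] = rng.standard_normal(n_sim)
for k in range(1, t.size):
    Y[:, k] = phi * Y[:, k - 1] + sigma * rng.standard_normal(n_sim)

above = Y >= eps * t                              # grid points above the boundary
last = t.size - 1 - np.argmax(above[:, ::-1], axis=1)  # last such index per path
T = t[np.where(above.any(axis=1), last, 0)]       # last exit time (0 if never above)

scaled = (T - A) / B
# Gumbel limit: P{scaled <= r} -> exp(-c e^{-r}), whose median is ln(c / ln 2)
print("empirical median:", np.median(scaled),
      "limit median:", np.log(c / np.log(2.0)))
```

Decreasing ε (and enlarging t_max accordingly) should move the empirical median of the scaled statistic toward the limiting median ln(c/ln 2) ≈ −0.55.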
2 Proofs

2.1 Proof of Theorem 2

By the linear variable change Y(t) = v Ỹ(Q^{1/α} t) one may reduce the problem to the case v = Q = 1, ε_v = ε, which is considered in the following.

Let us fix r ∈ ℝ and let

τ = τ(ε, r) := A_ε + B_ε r.

The theorem's statement is equivalent to

lim_{ε→0} P{T(ε) ≤ τ} = exp(−c exp(−r)).

In order to prove this, we cover the halfline [τ, ∞) by the following system of sets:

• the halfline [σ, ∞), where σ := A_ε + B_ε R and R = R(ε) slowly tends to infinity; the choice of R will be further specified at the end of the proof;
• long intervals L_i = [(ℓ+s)i, (ℓ+s)i + ℓ], i ∈ ℤ, of some length ℓ;
• shorter intervals S_i = [(ℓ+s)i + ℓ, (ℓ+s)(i+1)], i ∈ ℤ, of length s.

In fact, the main part of the exits of the process over the linear boundary will occur on the long intervals, while the shorter intervals placed between the long ones play the role of separators, providing weak dependence between the values of the process on different long intervals.

The interval lengths ℓ = ℓ(ε), s = s(ε) must satisfy the relations

ln s ∼ |ln ε|,   (3)
s/ℓ → 0,   (4)
ℓ ε √(−ln ε) → 0.   (5)

The parameter R should grow to infinity so slowly that

τ ∼ σ.   (6)

Let

X_i^ε := max_{t∈L_i} Y(t);  V_i^ε := max_{t∈S_i} Y(t).

By using stationarity, we infer from the Pickands–Piterbarg lemma the asymptotics

P{X_i^ε ≥ x} ∼ c ℓ x^{2/α−1} exp(−x²/2)  and  P{V_i^ε ≥ x} ∼ c s x^{2/α−1} exp(−x²/2),

where c = H_α/√(2π). Define the index sets

I₁ := {i : (ℓ+s)i + ℓ ≥ τ, (ℓ+s)i < σ},
I₂ := {i : (ℓ+s)i ≥ τ, (ℓ+s)i + ℓ < σ},
I₃ := {i : (ℓ+s)(i+1) ≥ τ, (ℓ+s)i + ℓ < σ},

chosen so that the following inclusions hold:

∪_{i∈I₂} L_i ⊂ [τ, σ] ⊂ (∪_{i∈I₁} L_i) ∪ (∪_{i∈I₃} S_i).   (7)

In the first inclusion one considers the long intervals belonging to [τ, σ]; in the second one, the long intervals and the short intervals separating them cover [τ, σ].

Let us define the events related to the exits of our process over the linear boundary:

E₁ := ∪_{i∈I₁} {X_i^ε ≥ (ℓ+s)iε},
E₂ := ∪_{i∈I₂} {X_i^ε ≥ (ℓ+s)(i+1)ε},
E₃ := ∪_{i∈I₃} {V_i^ε ≥ (ℓ+s)iε},
E₄ := {∃ t > σ : Y(t) ≥ εt}.

By using the inclusions (7) and the monotonicity of the linear function, it is easy to see that the following bounds are true:

P{T(ε) > τ} = P{∃ t > τ : Y(t) ≥ εt} ≤ P{E₁} + P{E₃} + P{E₄},
P{T(ε) > τ} ≥ P{E₂}.
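The conditions (3)–(5) leave plenty of room. For concreteness (this specific choice is ours, not the paper's), take s(ε) = ε^{−1}(−ln ε)^{−2} and ℓ(ε) = ε^{−1}(−ln ε)^{−1}; then ln s/|ln ε| → 1, s/ℓ = (−ln ε)^{−1} → 0, and ℓε√(−ln ε) = (−ln ε)^{−1/2} → 0. A quick numerical confirmation:

```python
import numpy as np

def blocks(eps):
    # one admissible choice of the short and long interval lengths (our example)
    log = -np.log(eps)
    s = (1.0 / eps) * log**-2.0
    l = (1.0 / eps) * log**-1.0
    return s, l, log

for eps in [1e-4, 1e-8, 1e-16, 1e-32]:
    s, l, log = blocks(eps)
    print(eps,
          np.log(s) / log,          # (3): tends to 1
          s / l,                    # (4): tends to 0
          l * eps * np.sqrt(log))   # (5): tends to 0
```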
We will show that, as ε → 0,

P{E₁}, P{E₂} → 1 − exp(−c exp(−r)),  P{E₃}, P{E₄} → 0.

Let us first prove that the probabilities of the events E₁ and E₂ are almost equal; it will then be enough to find the limit of P{E₂}. Indeed, let the indices m and n be such that I₁ = [m, n] ∩ ℤ. Then

P{E₂} ≤ P{E₁} = P{ ∪_{i=m}^{n} {X_i^ε ≥ (ℓ+s)iε} }
≤ P{X_m^ε ≥ (ℓ+s)mε} + P{X_{m+1}^ε ≥ (ℓ+s)(m+1)ε} + P{ ∪_{i=m+2}^{n} {X_i^ε ≥ (ℓ+s)iε} }
= P{X_m^ε ≥ (ℓ+s)mε} + P{X_{m+1}^ε ≥ (ℓ+s)(m+1)ε} + P{ ∪_{j=m+1}^{n−1} {X_j^ε ≥ (ℓ+s)(j+1)ε} }
≤ 2 P{X_m^ε ≥ (τ − ℓ)ε} + P{E₂},

where in the penultimate equality we used the stationarity of the sequence (X_i^ε) following from the stationarity of the process Y, and in the last step the inclusion [m+1, n−1] ⊂ I₂ together with the bound (ℓ+s)m ≥ τ − ℓ, valid for m ∈ I₁. For the remaining term we use the Pickands–Piterbarg bound and obtain

P{X_m^ε ≥ (τ − ℓ)ε} ∼ c ℓ [(τ−ℓ)ε]^{2/α−1} exp{−[(τ−ℓ)ε]²/2}.

Let us take into account that the definitions of τ and σ yield the following three relations:

τ ∼ ε^{−1} √(−2 ln ε),   (8)
(1/ε) (τε)^{2/α−2} exp{−(τε)²/2} ∼ e^{−r},   (9)
(1/ε) (σε)^{2/α−2} exp{−(σε)²/2} = o(1).   (10)

We stress that relation (9) is the key to the choice of the scaling constants A_ε and B_ε in the theorem's assertion.

From (5) and (8) we obtain ℓ/τ → 0 and ℓτε² → 0. Therefore,

ℓ [(τ−ℓ)ε]^{2/α−1} exp{−[(τ−ℓ)ε]²/2} ∼ ℓ (τε)^{2/α−1} exp{−(τε)²/2} ∼ ℓ τ ε² e^{−r} → 0.

We conclude that P{X_m^ε ≥ (τ − ℓ)ε} → 0.
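Relation (9) can also be checked directly: in the reduced case v = Q = 1, substituting τ = A_ε + B_ε r into (1/ε)(τε)^{2/α−2} exp{−(τε)²/2} must return e^{−r} in the limit. A small numerical sketch (α = 2 and r = 1 are arbitrary test values; α = 2 keeps both the power factor and the logarithmic correction in A_ε active):

```python
import numpy as np

def lhs_of_9(eps, r, alpha):
    # tau = A_eps + B_eps * r, with A_eps, B_eps as in Theorem 2 (v = Q = 1)
    L = -2.0 * np.log(eps)
    A = (np.sqrt(L) + (1.0 / alpha - 1.0) * np.log(L) / np.sqrt(L)) / eps
    B = 1.0 / (eps * np.sqrt(L))
    u = (A + B * r) * eps                          # u = tau * eps
    return u**(2.0 / alpha - 2.0) * np.exp(-u**2 / 2.0) / eps

alpha, r = 2.0, 1.0
for eps in [1e-4, 1e-8, 1e-12]:
    print(eps, lhs_of_9(eps, r, alpha) / np.exp(-r))   # ratio approaches 1
```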
Hence the difference between P{E₁} and P{E₂} is indeed negligible.

In the sequel, we will repeatedly use the following technical lemma. Its proof is postponed to Section 2.2.

Lemma 3. For each α ≠ 0 and all θ(ε), a(ε), b(ε) such that, as ε → 0, one has

θε → ∞,  a = o(θ),  θ a ε² → 0,

it is true that

Σ_{i : ai+b ≥ θ} [(ai+b)ε]^{2/α−1} exp{−[(ai+b)ε]²/2} ∼ (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2}.

Let us evaluate P{E₃}. By the Pickands–Piterbarg asymptotics, we have

P{E₃} ≤ Σ_{i∈I₃} P{V_i^ε ≥ (ℓ+s)iε} ≤ c s Σ_{i : (ℓ+s)(i+1) ≥ τ} [(ℓ+s)iε]^{2/α−1} exp{−[(ℓ+s)iε]²/2} (1 + o(1)).

In order to find the asymptotic behavior of this sum, we apply Lemma 3 with the parameters a = ℓ+s, b = 0, θ = τ − ℓ − s. Then, by using (4), (5), and (8), we have a ∼ ℓ, θ ∼ τ, (θε)² = (τε)² + o(1). Therefore, Lemma 3, relation (9), and assumption (4) yield

c s (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2} ∼ (c s/(ℓε)) (τε)^{2/α−2} exp{−(τε)²/2} ∼ (c s/ℓ) e^{−r} = o(1).

The evaluation of P{E₄} goes along the same lines. By splitting the halfline [σ, ∞) into intervals of unit length, we obtain

P{E₄} ≤ Σ_{j=0}^{∞} P{ max_{t∈[σ+j, σ+j+1]} Y(t) > ε(σ+j) } ≤ c Σ_{j=0}^{∞} [(σ+j)ε]^{2/α−1} exp{−[(σ+j)ε]²/2} (1 + o(1)).

We use Lemma 3 with the parameters a = 1, b = σ, θ = σ. By applying (10), we obtain the asymptotics

(c/ε) (σε)^{2/α−2} exp{−(σε)²/2} = o(1).

The subsequent estimates use the effect of weak dependence of the values of the process Y at distant times. Our main tool here is the following classical inequality due to Slepian (see, e.g., [5]).
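Before turning to the Slepian comparison, Lemma 3 itself is easy to sanity-check numerically: for admissible parameters the grid sum should approach (aε)^{−1}(θε)^{2/α−2} exp{−(θε)²/2}. In the sketch below, the values α = 1, a = 1, b = 0, ε = 10^{−3}, θ = 5000 (so θε = 5 is large while θaε² = 5·10^{−3} is small) are arbitrary test choices:

```python
import numpy as np

alpha, a, b = 1.0, 1.0, 0.0
eps, theta = 1e-3, 5.0e3       # theta*eps = 5, theta*a*eps**2 = 5e-3

i = np.arange(np.ceil((theta - b) / a), 1e5)   # tail truncated far beyond theta
x = (a * i + b) * eps
lhs = np.sum(x**(2.0 / alpha - 1.0) * np.exp(-x**2 / 2.0))
rhs = (theta * eps)**(2.0 / alpha - 2.0) * np.exp(-(theta * eps)**2 / 2.0) / (a * eps)
print(lhs / rhs)               # close to 1
```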
Lemma 4 (Slepian inequality). Let (U₁, ..., U_n) and (V₁, ..., V_n) be two centered Gaussian vectors such that E U_j² = E V_j², 1 ≤ j ≤ n, and E(U_i U_j) ≤ E(V_i V_j), 1 ≤ i, j ≤ n. Then for each r ∈ ℝ one has

P{ max_{1≤j≤n} U_j ≥ r } ≥ P{ max_{1≤j≤n} V_j ≥ r }.

In particular, for all non-negative r₁, ..., r_n one has

P{∃ j : U_j ≥ r_j} ≥ P{∃ j : V_j ≥ r_j}.

This fact follows by application of the Slepian inequality to the vectors (U₁/r₁, ..., U_n/r_n), (V₁/r₁, ..., V_n/r_n) and r = 1.

The latter inequality obviously extends to Gaussian processes with continuous trajectories defined on a metric space (by the way, the processes satisfying assumption (1) belong to this class). Namely, let {U(t), t ∈ 𝕋} and {V(t), t ∈ 𝕋} be two Gaussian processes with continuous trajectories defined on a common metric space 𝕋. Let E U(t)² = E V(t)², t ∈ 𝕋, and E(U(t₁)U(t₂)) ≤ E(V(t₁)V(t₂)), t₁, t₂ ∈ 𝕋. Then for all compact sets T₁, ..., T_n in 𝕋 and for all non-negative r₁, ..., r_n it is true that

P{ ∪_{j=1}^{n} { max_{t∈T_j} U(t) ≥ r_j } } ≥ P{ ∪_{j=1}^{n} { max_{t∈T_j} V(t) ≥ r_j } }.   (11)

Now we may proceed to the proof of the remaining claim

1 − P{E₁} → exp(−c exp(−r)), as ε → 0.   (12)

We provide the corresponding upper and lower bounds. In both cases we will use the Slepian inequality in the form (11).

Upper bound.
Let us compare our process Y with an auxiliary process Z defined as follows. First, consider a process Ỹ(t), t ∈ ∪L_i, which consists of independent copies of Y(t) on the intervals L_i.

Further, let

δ = δ(ε) := sup_{t ≥ s(ε)} |E[Y(t)Y(0)]|.

Taking into account the correlation decay assumption (2) and assumption (3) concerning the choice of s, we have

δ = o((ln s)^{−1}) = o((−ln ε)^{−1}).   (13)

Let ξ be an auxiliary standard normal random variable independent of the process Ỹ. We define the centered Gaussian process Z(t), t ∈ ∪L_i, by the equality

Z(t) := √(1−δ) Ỹ(t) + √δ ξ.

Then for all t the variances are equal: E Y(t)² = E Z(t)² = 1. For the covariances we have the following inequalities:

• for t₁ and t₂ that belong to the same interval L_i we have

E[Z(t₁)Z(t₂)] = E[(√(1−δ) Ỹ(t₁) + √δ ξ)(√(1−δ) Ỹ(t₂) + √δ ξ)] = (1−δ) E[Y(t₁)Y(t₂)] + δ ≥ E[Y(t₁)Y(t₂)],

where the last inequality follows from E[Y(t₁)Y(t₂)] ≤ √(E Y(t₁)² E Y(t₂)²) = 1;

• for t₁ and t₂ that belong to different intervals L_i and L_j, by the definition of δ and by the intervals' construction we have

E[Z(t₁)Z(t₂)] = E[(√(1−δ) Ỹ(t₁) + √δ ξ)(√(1−δ) Ỹ(t₂) + √δ ξ)] = δ ≥ E[Y(t₁)Y(t₂)].

Let X̃_i^ε := max_{t∈L_i} Ỹ(t). By applying the Slepian inequality (11) to the processes Y and Z, we obtain

P{E₁} = P{ ∪_{i∈I₁} {X_i^ε ≥ (ℓ+s)iε} } ≥ P{ ∪_{i∈I₁} {√(1−δ) X̃_i^ε + √δ ξ ≥ (ℓ+s)iε} }.

Let us pass to the complementary events; for every h = h(ε) > 0,

1 − P{E₁} = P{ ∩_{i∈I₁} {X_i^ε ≤ (ℓ+s)iε} }
≤ P{ ∩_{i∈I₁} {√(1−δ) X̃_i^ε + √δ ξ ≤ (ℓ+s)iε} }
≤ P{ ∩_{i∈I₁} {√(1−δ) X̃_i^ε ≤ (ℓ+s)iε + hε} } + P{√δ ξ ≤ −hε}
= Π_{i∈I₁} P{ X_i^ε ≤ ((ℓ+s)iε + hε)/√(1−δ) } + P{ξ ≤ −hε/√δ},   (14)

where the last equality holds because the X̃_i^ε are independent copies of the X_i^ε.
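The mechanism behind this comparison is Slepian's monotonicity: raising correlations, as in passing from Y to Z, can only lower the probability that some coordinate clears its level. A standalone Monte Carlo illustration in the simplest bivariate case (the level, correlations, and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, level = 10**6, 1.0

def p_max_exceeds(rho):
    # P{max(U1, U2) >= level} for a centered standard Gaussian pair with correlation rho
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    return np.mean(np.maximum(z1, z2) >= level)

p_low, p_high = p_max_exceeds(0.0), p_max_exceeds(0.9)
print(p_low, p_high)   # the weakly correlated pair exceeds the level more often
```

For ρ = 0 the exact value is 1 − Φ(1)² ≈ 0.292, and increasing ρ strictly decreases the exceedance probability, in line with Lemma 4.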
We choose the level h = h(δ, ε) so that

hε/√δ → ∞,   (15)
hε √(−ln ε) → 0,   (16)

which is possible under (13). It also follows from (16) and (8) that

h/τ = (hε)/(τε) ∼ (hε)/√(−2 ln ε) → 0.

The second term in (14) vanishes in the limit due to (15). Let us show that the product in (14) converges to exp(−c exp(−r)). Taking the logarithm and passing to the complementary events, we see that it is necessary to prove the convergence

Σ_{i∈I₁} P{ X_i^ε ≥ ((ℓ+s)iε + hε)/√(1−δ) } → c exp(−r).

By the Pickands–Piterbarg lemma this is equivalent to

c ℓ Σ_{i∈I₁} ( ((ℓ+s)iε + hε)/√(1−δ) )^{2/α−1} exp( −( ((ℓ+s)iε + hε)/√(1−δ) )²/2 ) → c exp(−r).

We represent this expression as the difference of the two sums

c ℓ Σ_{i : (ℓ+s)i+ℓ ≥ τ} ( ((ℓ+s)iε + hε)/√(1−δ) )^{2/α−1} exp( −( ((ℓ+s)iε + hε)/√(1−δ) )²/2 )   (17)

and

c ℓ Σ_{i : (ℓ+s)i ≥ σ} ( ((ℓ+s)iε + hε)/√(1−δ) )^{2/α−1} exp( −( ((ℓ+s)iε + hε)/√(1−δ) )²/2 ).   (18)

The asymptotics of the first sum follows from Lemma 3 applied with the parameters a = (ℓ+s)/√(1−δ) ∼ ℓ, b = h/√(1−δ), and θ = (τ − ℓ + h)/√(1−δ) ∼ τ, where equations (16), (8), (5), and (13) yield

(θε)² = ((τ−ℓ+h)ε)²/(1−δ) = (τε)² + (τε)² δ/(1−δ) + O(τ(h+ℓ)ε²) = (τε)² + o(1).

By this relation and (9), Lemma 3 provides the following asymptotics for (17):

c ℓ (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2} ∼ (c/ε) (τε)^{2/α−2} exp{−(τε)²/2} ∼ c e^{−r}.

Similarly, Lemma 3 applied with the parameters a = (ℓ+s)/√(1−δ) ∼ ℓ and θ = (σ + h)/√(1−δ) ∼ σ provides the asymptotics for (18). Since (θε)² = (σε)² + o(1), by using (10) we obtain

c ℓ (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2} ∼ (c/ε) (σε)^{2/α−2} exp{−(σε)²/2} = o(1).

Subtraction of the sums' asymptotics implies the required upper bound for 1 − P{E₁}.

Lower bound.
In order to obtain the opposite bound for 1 − P{E₁}, we introduce and compare two more auxiliary processes Y₁, Ỹ₁. Let ξ be an auxiliary standard normal random variable independent of the process Y, and let

Y₁(t) := Y(t) + √δ ξ,  t ∈ ∪L_i.

Furthermore, consider a sequence of independent standard Gaussian random variables (ξ_i) independent of Ỹ(t) and let

Ỹ₁(t) := Ỹ(t) + √δ ξ_i,  t ∈ L_i.

Then for all t we have the equality of variances: E Y₁(t)² = E Ỹ₁(t)² = 1 + δ. For the covariances we have the following inequalities:

• for t₁ and t₂ that belong to the same interval L_i we have E[Y₁(t₁)Y₁(t₂)] = E[Ỹ₁(t₁)Ỹ₁(t₂)];

• for t₁ and t₂ that belong to different intervals L_i and L_j we have

E[Y₁(t₁)Y₁(t₂)] = E[(Y(t₁) + √δ ξ)(Y(t₂) + √δ ξ)] = E[Y(t₁)Y(t₂)] + δ ≥ 0 = E[Ỹ₁(t₁)Ỹ₁(t₂)].

We choose h = h(δ, ε) as before, i.e. satisfying assumptions (15) and (16). The Slepian inequality (11) yields

P{ ∪_{i∈I₁} { X̃_i^ε + √δ ξ_i ≥ (ℓ+s)iε − hε } } = P{ ∪_{i∈I₁} { max_{t∈L_i} Ỹ₁(t) ≥ (ℓ+s)iε − hε } }
≥ P{ ∪_{i∈I₁} { max_{t∈L_i} Y₁(t) ≥ (ℓ+s)iε − hε } } = P{ ∪_{i∈I₁} { X_i^ε + √δ ξ ≥ (ℓ+s)iε − hε } }.

By passing to the complementary events, we obtain

P{ ∩_{i∈I₁} { X_i^ε + √δ ξ ≤ (ℓ+s)iε − hε } } ≥ P{ ∩_{i∈I₁} { X̃_i^ε + √δ ξ_i ≤ (ℓ+s)iε − hε } }
= Π_{i∈I₁} P{ X̃_i^ε + √δ ξ_i ≤ (ℓ+s)iε − hε } = Π_{i∈I₁} P{ X_i^ε + √δ ξ ≤ (ℓ+s)iε − hε },

where we used the independence of the X̃_i^ε. Further, we apply an elementary bound:

1 − P{E₁} = P{ ∩_{i∈I₁} { X_i^ε ≤ (ℓ+s)iε } }
≥ P{ ∩_{i∈I₁} { X_i^ε + √δ ξ ≤ (ℓ+s)iε − hε } } − P{√δ ξ ≤ −hε}
≥ Π_{i∈I₁} P{ X_i^ε + √δ ξ ≤ (ℓ+s)iε − hε } − P{√δ ξ ≤ −hε}.
By (15), P{√δ ξ ≤ −hε} = P{ξ ≤ −hε/√δ} → 0. It remains to prove that the product is greater than exp(−c exp(−r))(1 + o(1)). Taking the logarithm and passing to the complementary events, we see that it is necessary to prove the bound

Σ_{i∈I₁} P{ X_i^ε + √δ ξ ≥ (ℓ+s)iε − hε } ≤ c exp(−r)(1 + o(1)).

We start with the estimate

Σ_{i∈I₁} P{ X_i^ε + √δ ξ ≥ (ℓ+s)iε − hε }
≤ Σ_{i∈I₁} [ P{ X_i^ε ≥ (ℓ+s)iε − 2hε } + P{√δ ξ > hε} ]
≤ Σ_{i : (ℓ+s)i+ℓ ≥ τ} P{ X_i^ε ≥ (ℓ+s)iε − 2hε } + N P{√δ ξ > hε},   (19)

where N denotes the number of elements in the set I₁; it has the asymptotics

N ∼ (σ − τ)/(ℓ + s) = (R − r) B_ε/(ℓ + s) ∼ R/(ℓ ε √(−2 ln ε)).

For the sum in (19), the Pickands–Piterbarg lemma provides the equivalent expression

c ℓ Σ_{i : (ℓ+s)i+ℓ ≥ τ} ((ℓ+s)iε − 2hε)^{2/α−1} exp( −((ℓ+s)iε − 2hε)²/2 ).   (20)

Next, Lemma 3 applied with the parameters a = ℓ + s, b = −2h, θ = τ − ℓ − 2h yields the asymptotics of the latter sum. Here, as in the derivation of the upper bound, we have a ∼ ℓ, θ ∼ τ, (θε)² = (τε)² + o(1). By combining the result of Lemma 3 with (9), we obtain

c ℓ (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2} ∼ (c/ε) (τε)^{2/α−2} exp{−(τε)²/2} ∼ c e^{−r}.

This means that

Σ_{i : (ℓ+s)i+ℓ ≥ τ} P{ X_i^ε ≥ (ℓ+s)iε − 2hε } = c e^{−r} (1 + o(1)).

It remains to estimate the last term in (19). To this aim, we have to specify the choice of the parameters ℓ and R.
Since P{√δ ξ > hε} → 0 by (15), we may choose ℓ = ℓ(ε), still satisfying (5), in such a way that

P{√δ ξ > hε} / (ℓ ε √(−ln ε)) → 0.

Then we may choose R = R(ε) tending to infinity so slowly that (6) holds and

N P{√δ ξ > hε} ∼ R P{√δ ξ > hε} / (ℓ ε √(−2 ln ε)) → 0.

By summing up the estimates for the terms of (19), we arrive at the required lower estimate for 1 − P{E₁}.

2.2 Proof of Lemma 3

The monotone decay of the function x ↦ x^{2/α−1} exp{−x²/2} at large x yields the following two bounds:

[(ai+b)ε]^{2/α−1} exp{−[(ai+b)ε]²/2} ≤ (aε)^{−1} ∫_{(a(i−1)+b)ε}^{(ai+b)ε} x^{2/α−1} exp{−x²/2} dx,
[(ai+b)ε]^{2/α−1} exp{−[(ai+b)ε]²/2} ≥ (aε)^{−1} ∫_{(ai+b)ε}^{(a(i+1)+b)ε} x^{2/α−1} exp{−x²/2} dx.

By summing up over all considered i, we infer that the sum under consideration is contained between the two integrals

(aε)^{−1} ∫_{(θ−a)ε}^{∞} x^{2/α−1} exp{−x²/2} dx  and  (aε)^{−1} ∫_{(θ+a)ε}^{∞} x^{2/α−1} exp{−x²/2} dx.

Furthermore, under our assumptions on θ and a it is true that

(aε)^{−1} ∫_{(θ−a)ε}^{∞} x^{2/α−1} exp{−x²/2} dx ∼ (aε)^{−1} [(θ−a)ε]^{2/α−2} exp{−[(θ−a)ε]²/2} ∼ (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2}

and

(aε)^{−1} ∫_{(θ+a)ε}^{∞} x^{2/α−1} exp{−x²/2} dx ∼ (aε)^{−1} (θε)^{2/α−2} exp{−(θε)²/2},

and the required estimate follows.

References

[1] F. Aurzada, V. Betz, M. Lifshits, Breaking a chain of interacting Brownian particles. Preprint, https://arxiv.org/abs/1912.05168.
[2] F. Aurzada, V. Betz, M. Lifshits, Universal break law for chains of Brownian particles with nearest neighbour interaction. Preprint.
[3] K. Debicki, P. Liu, The time of ultimate recovery in Gaussian risk model. Extremes, 22(3), 499–521, 2019. https://arxiv.org/abs/1801.02469.
[4] J. Hüsler, Y. Zhang, On first and last ruin times of Gaussian processes. Statist. Probab. Lett., 78(10), 1230–1235, 2008.
[5] M. Lifshits, Gaussian Random Functions. Kluwer, Dordrecht, 1995.
[6] Ch. Paroissin, L. Rabehasaina, First and last passage times of spectrally positive Lévy processes with application to reliability. Methodology and Computing in Applied Probability, 17, 351–372, 2015.
[7] J. Pickands III, Asymptotic properties of the maximum in a stationary Gaussian process. Trans. Amer. Math. Soc., 145, 75–86, 1969.
[8] V.I. Piterbarg, Twenty Lectures on Gaussian Processes. Atlantic Financial Press, 2015.