Piterbarg's max-discretisation theorem for stationary vector Gaussian processes observed on different grids
ENKELEJD HASHORVA AND ZHONGQUAN TAN
Abstract:
In this paper we derive Piterbarg's max-discretisation theorem for two different grids, considering centered stationary vector Gaussian processes. So far in the literature, results in this direction have been derived for the joint distribution of the maximum of Gaussian processes over $[0,T]$ and over a grid $R(\delta_1(T)) = \{k\delta_1(T): k = 0,1,2,\dots\}$. In this paper we extend the recent findings by considering additionally the maximum over another grid $R(\delta_2(T))$. We derive the joint limiting distribution of the maxima of stationary Gaussian vector processes for different choices of such grids by letting $T \to \infty$. As a by-product we find that the joint limiting distribution of the maxima over different grids, which we refer to as the Piterbarg distribution, is in the case of weakly dependent Gaussian processes a max-stable distribution.

Key Words:
Piterbarg's max-discretisation theorem; Limiting distribution; Piterbarg distribution; Pickands constant; Extremes of Gaussian processes; Gumbel limit law; Berman condition.
AMS Classification:
Primary 60F05; Secondary 60G15.

1. Introduction
Date: September 9, 2018.

Let $\{X(t), t \ge 0\}$ be a centered stationary Gaussian process with continuous sample paths, unit variance and correlation function $r(\cdot)$ which satisfies for some $\alpha \in (0,2]$
\[
r(t) = 1 - C|t|^\alpha + o(|t|^\alpha) \ \text{as } t \to 0, \qquad r(t) < 1 \ \text{for all } t \neq 0, \tag{1}
\]
where $C$ is some positive constant. In various applications only realisations of $X$ on a discrete time grid are available. For simplicity, in this paper we shall consider uniform grids of points $R(\delta) = \{k\delta : k = 0,1,2,\dots\}$, where $\delta := \delta(T) > 0$ for $T > 0$. In view of the findings of Berman (see [5, 7]) the maximum of $X$ taken over such a discrete grid has a limiting Gumbel distribution if
\[
\lim_{T\to\infty} (2\ln T)^{1/\alpha}\, \delta(T) = D \tag{2}
\]
with $D = \infty$ and the Berman condition
\[
\lim_{T\to\infty} r(T)\ln T = r \tag{3}
\]
holds with $r = 0$. Specifically, for the maximum $M(\delta,T) = \max_{i:\, 0 \le i\delta \le T} X(i\delta)$ over $R(\delta) \cap [0,T]$ we have
\[
\lim_{T\to\infty} \sup_{x\in\mathbb{R}} \Big| P\{a_T(M(\delta,T) - b_{\delta,T}) \le x\} - e^{-e^{-x}} \Big| = 0,
\]
provided that both (2) and (3) hold, where
\[
a_T = \sqrt{2\ln T}, \qquad b_{\delta,T} = a_T - \frac{\ln(a_T\, \delta\, \sqrt{2\pi})}{a_T}, \quad T > 1. \tag{4}
\]
For the maximum over $[0,T]$, defined as $M(T) = \max_{t\in[0,T]} X(t)$, it is well known (see e.g., [21, 1, 2, 7, 26]) that (1) and (3) imply
\[
\lim_{T\to\infty} \sup_{x\in\mathbb{R}} \Big| P\{a_T(M(T) - b_T) \le x\} - e^{-e^{-x}} \Big| = 0, \tag{5}
\]
where
\[
b_T = a_T + a_T^{-1} \ln\big((2\pi)^{-1/2}\, C^{1/\alpha}\, H_\alpha\, a_T^{2/\alpha - 1}\big) \tag{6}
\]
and $H_\alpha \in (0,\infty)$ denotes the Pickands constant; see [24, 25, 6, 21, 2, 26, 11, 3, 14, 12, 9, 17] for more details and generalisations of $H_\alpha$.

The seminal contribution [27] derives the joint convergence as $T\to\infty$ of $M(T)$ and $M(\delta,T)$, showing their asymptotic independence, i.e.,
\[
\lim_{T\to\infty} \sup_{x,y\in\mathbb{R}} \Big| P\{a_T(M(T) - b_T) \le x,\ a_T(M(\delta,T) - b_{\delta,T}) \le y\} - e^{-e^{-x} - e^{-y}} \Big| = 0.
\]
Hereafter we set $B^*_{\alpha/2}(t) := \sqrt{2}\, B_{\alpha/2}(t) - |t|^\alpha$, $t \ge 0$, with $B_{\alpha/2}$ a standard fractional Brownian motion with Hurst index $\alpha/2 \in (0,1]$, and $\delta = \delta(T)$ given by (2). Define further for any $D > 0$
\[
H_{D,\alpha} = \lim_{\lambda\to\infty} \lambda^{-1} E\Big\{ \exp\Big( \max_{k\in\mathbb{N}:\, kD \in [0,\lambda]} B^*_{\alpha/2}(kD) \Big) \Big\} \in (0,\infty)
\]
and set (with the constant $C > 0$ as in (1))
\[
b_T(D) = a_T + a_T^{-1} \ln\big((2\pi)^{-1/2}\, C^{1/\alpha}\, H_{D,\alpha}\, a_T^{2/\alpha - 1}\big). \tag{7}
\]
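The normalising constants above are explicit and easy to compute. The following sketch (not from the paper; all parameter choices are hypothetical) evaluates $a_T$ and the sparse-grid constant $b_{\delta,T}$ of (4), and simulates a stationary Ornstein–Uhlenbeck process, whose correlation $r(t) = e^{-|t|}$ satisfies (1) with $\alpha = 1$ and $C = 1$, to illustrate that the maximum over a coarser sub-grid never exceeds the maximum over a finer grid:

```python
import math
import numpy as np

def a_T(T):
    # a_T = sqrt(2 ln T), cf. (4)
    return math.sqrt(2.0 * math.log(T))

def b_sparse(T, delta):
    # b_{delta,T} = a_T - ln(a_T * delta * sqrt(2*pi)) / a_T, cf. (4)
    a = a_T(T)
    return a - math.log(a * delta * math.sqrt(2.0 * math.pi)) / a

def simulate_ou_on_grid(T, delta, rng):
    # Exact simulation of a stationary OU process (r(t) = exp(-|t|),
    # hence alpha = 1, C = 1 in (1)) on the grid {0, delta, 2*delta, ...}.
    n = int(T / delta) + 1
    phi = math.exp(-delta)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    innov = rng.standard_normal(n - 1) * math.sqrt(1.0 - phi * phi)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + innov[i - 1]
    return x

rng = np.random.default_rng(0)
T = 200.0
path = simulate_ou_on_grid(T, 0.05, rng)   # fine grid, spacing 0.05
m_fine = path.max()
m_coarse = path[::20].max()                # sub-grid of spacing 1.0 = 20 * 0.05
# The maximum over a sub-grid can never exceed the maximum over the finer grid:
assert m_coarse <= m_fine
print(b_sparse(T, 1.0), b_sparse(T, 0.05), m_coarse, m_fine)
```

Note that $b_{\delta,T}$ increases as $\delta$ decreases, reflecting that maxima over finer grids are stochastically larger; for $\delta = D a_T^{-2/\alpha}$ this centering interpolates towards the continuous-time constant (6).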
For a Pickands grid $R(D a_T^{-2/\alpha})$, $D > 0$ (i.e., $\delta = \delta(T) = D a_T^{-2/\alpha}$), in view of Theorem 2 in [27] the stated asymptotic independence does not hold, since
\[
\lim_{T\to\infty} \sup_{x,y\in\mathbb{R}} \Big| P\{a_T(M(T) - b_T) \le x,\ a_T(M(\delta,T) - b_T(D)) \le y\} - e^{-e^{-x} - e^{-y} + H_{D,\alpha}^{\ln H_\alpha + x,\, \ln H_{D,\alpha} + y}} \Big| = 0,
\]
where the function $H_{D,\alpha}^{x,y}$ is defined for any $x,y\in\mathbb{R}$ as
\[
H_{D,\alpha}^{x,y} = \lim_{\lambda\to\infty} \lambda^{-1} H_{D,\alpha}^{x,y}(\lambda) \in (0,\infty), \tag{8}
\]
with
\[
H_{D,\alpha}^{x,y}(\lambda) = \int_{s\in\mathbb{R}} e^s\, P\Big\{ \max_{t\in[0,\lambda]} B^*_{\alpha/2}(t) > s + x,\ \max_{k\in\mathbb{N}:\, kD\in[0,\lambda]} B^*_{\alpha/2}(kD) > s + y \Big\}\, ds.
\]
Since it follows that for any $w\in\mathbb{R}$
\[
\lim_{x\to-\infty} H_{D,\alpha}^{x,w} = e^{-w} H_{D,\alpha}, \qquad \lim_{y\to-\infty} H_{D,\alpha}^{w,y} = e^{-w} H_\alpha \in (0,\infty), \tag{9}
\]
then
\[
Q(x,y) = e^{-e^{-x} - e^{-y} + H_{D,\alpha}^{\ln H_\alpha + x,\, \ln H_{D,\alpha} + y}}, \quad x,y\in\mathbb{R}
\]
is a bivariate distribution function which has Gumbel marginals $Q(z,\infty) = Q(\infty,z) = e^{-e^{-z}}$, $z\in\mathbb{R}$. Moreover $Q$ is a bivariate max-stable distribution, which we shall refer to as the Piterbarg distribution. This bivariate distribution is of some independent interest for the statistical modelling of dependent multivariate risks.

In the extreme case of a dense grid, which in the terminology of [27] means that (2) holds with $D = 0$, by Theorem 3 in [27]
\[
\lim_{T\to\infty} \sup_{x,y\in\mathbb{R}} \Big| P\{a_T(M(T) - b_T) \le x,\ a_T(M(\delta,T) - b_T) \le y\} - e^{-e^{-\min(x,y)}} \Big| = 0;
\]
thus the continuous-time and the discrete-time maxima are asymptotically completely dependent.

In the case of two different uniform grids $R(\delta_1)$ and $R(\delta_2)$ a natural question arises: what is the joint limiting behaviour of $M(T), M(T,\delta_1), M(T,\delta_2)$ for different types of grids? Motivated by this question, our findings in this contribution include:

a) We show that $M(T,\delta_1)$ and $M(T,\delta_2)$ are always asymptotically independent if one grid is sparse and the other grid is a Pickands or a dense grid.
Further, we obtain the joint limiting distribution if one of the grids is a Pickands grid and the other grid is a Pickands or a dense grid.

b) The Berman condition is relaxed by assuming that (3) holds for some $r \in [0,\infty)$. When $r > 0$, $X$ is said to be strongly dependent, see [22, 26, 23, 32, 29, 8] for details on the extremes of such Gaussian processes. The contribution [34] derives Piterbarg's max-discretisation theorem for strongly dependent Gaussian processes. In applications, the modelling of the maximum of functionals of a Gaussian vector process is often of interest, see e.g., [38, 4, 10]. Our results in this paper are derived in the more general framework of Gaussian vector processes, extending the recent findings of [31] by considering simultaneously two different grids. This paper highlights the role of different grids in the approximation of the maximum over a continuous interval. Our results are therefore of interest for simulation studies, which were the main motivation of [27, 19, 20, 36, 37, 30, 35].

c) As a by-product we show that for weakly dependent stationary Gaussian processes the limiting distributions are max-stable. In Extreme Value Theory max-stable distributions and processes are characterised in different ways, see e.g., [15, 13]. In order for a multivariate max-stable distribution to be useful also for statistical modelling, it is important to know how that distribution approximates the maxima of certain sequences (or triangular arrays). Piterbarg max-stable distributions are therefore important, since we also show their usefulness in the approximation of maxima over different grids.

The organisation of the article is as follows. Our main results are presented in the next section. All the proofs are relegated to Section 3, which is followed by an Appendix.
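The max-stability of the Piterbarg distribution $Q$ introduced above can be verified directly from a shift property of the functional $H_{D,\alpha}^{x,y}$; the following short derivation uses only the definitions given earlier:

```latex
% Shift property: substituting s -> s - c in the defining integral of
% H_{D,\alpha}^{x,y}(\lambda) gives, for every c \in \mathbb{R},
%   H_{D,\alpha}^{x+c,\,y+c}(\lambda) = e^{-c} H_{D,\alpha}^{x,y}(\lambda),
% hence H_{D,\alpha}^{x+c,\,y+c} = e^{-c} H_{D,\alpha}^{x,y}.
\begin{align*}
\big(Q(x+\ln n,\, y+\ln n)\big)^n
 &= \exp\Big( n\big( -e^{-x-\ln n} - e^{-y-\ln n}
      + H_{D,\alpha}^{\ln H_\alpha + x + \ln n,\, \ln H_{D,\alpha} + y + \ln n} \big) \Big)\\
 &= \exp\Big( n\big( -n^{-1}e^{-x} - n^{-1}e^{-y}
      + n^{-1} H_{D,\alpha}^{\ln H_\alpha + x,\, \ln H_{D,\alpha} + y} \big) \Big)
  = Q(x,y).
\end{align*}
```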
2. Main results

We shall investigate in the following the asymptotics of maxima over different grids of a centered stationary $p$-dimensional Gaussian process $\{X(t), t\ge 0\}$. Each component $X_k$, $k\le p$ of $X$ is assumed to have constant variance equal to 1, continuous sample paths and correlation function $r_{kk}(t) = \mathrm{Cov}(X_k(s), X_k(s+t))$ which satisfies for any index $k\le p$
\[
r_{kk}(t) = 1 - C|t|^\alpha + o(|t|^\alpha) \ \text{as } t\to 0, \qquad r_{kk}(t) < 1 \ \text{for all } t\neq 0 \tag{10}
\]
for some positive constant $C$. Hereafter we suppose that $X$ has jointly stationary components with cross-correlation functions $r_{kl}(t) = \mathrm{Cov}(X_k(s), X_l(s+t))$ which do not depend on $s$ for any $s,t$ positive. The strong dependence condition for the vector Gaussian process $X$ reads
\[
\lim_{T\to\infty} r_{kl}(T)\ln T = r_{kl} \in [0,\infty), \quad 1\le k,l\le p. \tag{11}
\]
In order to exclude the possibility that $|X_k(t_0)| = |X_l(t_0+t)|$ for some $k\neq l$, $t > 0$, we further require for $k\neq l$
\[
\sup_{t\in[0,\infty)} |r_{kl}(t)| < 1. \tag{12}
\]
We consider two uniform grids $R(\delta_1)$ and $R(\delta_2)$. Recall that $\delta_i$, $i = 1,2$ are functions of $T > 0$; in the case of a Pickands grid we set $R(\delta_i) = R(D_i a_T^{-2/\alpha})$ for some constant $D_i > 0$, $i = 1,2$. The vector of maxima in continuous time will be denoted by $M(T)$ and that with respect to the discrete uniform grid $R(\delta_i)$, $i = 1,2$ by $M(\delta_i,T)$. This means that the $k$th components of these two random vectors are $M_k(T)$ and $M_k(\delta_i,T)$, respectively, which are defined by
\[
M_k(T) = \max_{t\in[0,T]} X_k(t), \qquad M_k(\delta_i,T) = \max_{t\in R(\delta_i)\cap[0,T]} X_k(t), \quad k\le p.
\]
For notational simplicity we shall set below
\[
\widetilde{M}(T) = \big(a_T(M_1(T) - b_T), \dots, a_T(M_p(T) - b_T)\big)
\]
and
\[
\widetilde{M}(\delta_i,T) = \big(a_T(M_1(\delta_i,T) - b_{\delta_i,T}), \dots, a_T(M_p(\delta_i,T) - b_{\delta_i,T})\big),
\]
where $b_{\delta_i,T}$ is defined in (4) if the grid $R(\delta_i)$ is sparse, $b_{\delta_i,T} = b_T(D_i)$ is given by (7) if we consider a Pickands grid $R(\delta_i) = R(D_i a_T^{-2/\alpha})$, and for a dense grid we set $b_{\delta_i,T} = b_T$ with $b_T$ defined in (6).

In the following $x, y_1, y_2 \in \mathbb{R}^p$ are fixed vectors and $Z$ is a $p$-dimensional centered Gaussian random vector with covariances
\[
\mathrm{Cov}(Z_k, Z_l) = \frac{r_{kl}}{\sqrt{r_{kk}\, r_{ll}}}, \quad 1\le l\le k\le p. \tag{13}
\]
When $r_{kk} r_{ll} = 0$ we assume that $Z_k$ and $Z_l$ are independent, i.e., we shall set $\mathrm{Cov}(Z_k, Z_l) = 0$.
The operations with vectors are meant componentwise; for instance $x\le y$ means $x_k\le y_k$ for any index $k\le p$, with $x_k$ and $y_k$ the $k$th components of $x$ and $y$, respectively. Hereafter we define
\[
p_{T,x,y_1,y_2,\delta} := P\big\{ \widetilde{M}(T) \le x,\ \widetilde{M}(\delta_i,T) \le y_i,\ i = 1,2 \big\}.
\]
In the first theorem below we discuss the case when one of the grids is sparse. Our results shall establish that
\[
\lim_{T\to\infty} \sup_{x,y_1,y_2\in\mathbb{R}^p} \Big| p_{T,x,y_1,y_2,\delta} - E\Big\{ \exp\Big( -\sum_{k=1}^p f(x_k, y_{k1}, y_{k2})\, e^{-r_{kk} + \sqrt{2 r_{kk}}\, Z_k} \Big) \Big\} \Big| = 0, \tag{14}
\]
where the function $f$ is given below explicitly for each particular case.
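The right-hand side of (14) can be evaluated by Monte Carlo once $f$ and the dependence parameters are fixed. The sketch below (not from the paper; the values of $f$ and of $r_{kl}$ are hypothetical) also checks the weakly dependent case: when all $r_{kk} = 0$, the Gaussian vector $Z$ drops out and the limit reduces exactly to $\exp(-\sum_k f_k)$.

```python
import numpy as np

def limit_functional(f_vals, r_diag, corr_Z, n_samples=200_000, seed=1):
    # Monte Carlo evaluation of E{ exp( - sum_k f_k * exp(-r_kk + sqrt(2 r_kk) Z_k) ) },
    # the right-hand side of (14); f_vals are hypothetical values f(x_k, y_k1, y_k2).
    rng = np.random.default_rng(seed)
    f = np.asarray(f_vals, float)
    r = np.asarray(r_diag, float)
    L = np.linalg.cholesky(np.asarray(corr_Z, float))
    Z = rng.standard_normal((n_samples, len(f))) @ L.T
    inner = f * np.exp(-r + np.sqrt(2.0 * r) * Z)  # broadcasts over samples
    return np.exp(-inner.sum(axis=1)).mean()

f_vals = [1.5, 0.7]                     # hypothetical f(x_k, y_k1, y_k2) values
# Weak dependence (r_kk = 0): Z drops out and the limit equals exp(-sum f) exactly.
weak = limit_functional(f_vals, [0.0, 0.0], np.eye(2))
assert abs(weak - np.exp(-sum(f_vals))) < 1e-12
# Strong dependence (r_kk > 0): the Gaussian mixing changes the limit.
strong = limit_functional(f_vals, [0.5, 0.5], [[1.0, 0.3], [0.3, 1.0]])
print(weak, strong)
```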
Theorem 2.1. Let $\{X(t), t\ge 0\}$ be a centered stationary Gaussian vector process as defined above and let $R(\delta_1)$ be a sparse grid. Assume that (10), (11) and (12) hold and the Gaussian random vector $Z$ has a positive-definite covariance matrix with elements defined in (13).

i) If $R(\delta_2)$ is another sparse grid such that $R(\delta_1)\cap R(\delta_2) = \emptyset$ or $\lim_{T\to\infty}\delta_2(T)/\delta_1(T) = \infty$, then (14) holds with
\[
f(x_k, y_{k1}, y_{k2}) = e^{-x_k} + e^{-y_{k1}} + e^{-y_{k2}}.
\]
ii) Let $R(\delta_2)$ be a sparse grid such that $R(\delta_1)\cap R(\delta_2) = R(\delta_0)$. If $R(\delta_0)$ is a non-empty grid such that
\[
\lim_{T\to\infty} \ln\Big(\frac{\delta_0(T)}{\delta_1(T)}\Big) = \theta_1 \in [0,\infty), \qquad \lim_{T\to\infty} \ln\Big(\frac{\delta_0(T)}{\delta_2(T)}\Big) = \theta_2 \in [0,\infty),
\]
then (14) holds with (write $\theta = \theta_2 - \theta_1$)
\[
f(x_k, y_{k1}, y_{k2}) = e^{-x_k} + e^{-y_{k1}} + e^{-y_{k2}} - e^{-y_{k1}-\theta_1} I(y_{k1} > y_{k2} + \theta) - e^{-y_{k2}-\theta_2} I(y_{k1} \le y_{k2} + \theta),
\]
where $I(\cdot)$ is the indicator function.

iii) If $R(\delta_2) = R(D_2 a_T^{-2/\alpha})$ is a Pickands grid, then (14) holds with
\[
f(x_k, y_{k1}, y_{k2}) = e^{-x_k} + e^{-y_{k1}} + e^{-y_{k2}} - H_{D_2,\alpha}^{\ln H_\alpha + x_k,\, \ln H_{D_2,\alpha} + y_{k2}}.
\]
iv) If $R(\delta_2)$ is a dense grid, then again (14) holds with
\[
f(x_k, y_{k1}, y_{k2}) = e^{-y_{k1}} + e^{-\min(x_k,\, y_{k2})}.
\]
We consider next the cases where one grid is a Pickands grid and the second one is either a Pickands or a dense grid. For positive constants $D_1, D_2, \lambda$ and $x, z_1, z_2 \in \mathbb{R}$ define (recall $B^*_{\alpha/2}(t) := \sqrt{2}B_{\alpha/2}(t) - |t|^\alpha$)
\[
H_{D_1,D_2,\alpha}^{z_1,z_2}(\lambda) = \int_{s\in\mathbb{R}} e^s\, P\Big\{ \max_{k\in\mathbb{N}:\, kD_i\in[0,\lambda]} B^*_{\alpha/2}(kD_i) > s + z_i,\ i = 1,2 \Big\}\, ds
\]
and
\[
H_{D_1,D_2,\alpha}^{x,z_1,z_2}(\lambda) = \int_{s\in\mathbb{R}} e^s\, P\Big\{ \max_{t\in[0,\lambda]} B^*_{\alpha/2}(t) > s + x,\ \max_{k\in\mathbb{N}:\, kD_i\in[0,\lambda]} B^*_{\alpha/2}(kD_i) > s + z_i,\ i = 1,2 \Big\}\, ds.
\]
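Constants of this Pickands type can be approximated by simulation. The sketch below (not from the paper; $D$, $\lambda$ and the sample size are arbitrary illustrative choices) estimates the discrete constant $H_{D,\alpha}$ for $\alpha = 1$, where $B_{1/2}$ is a standard Brownian motion, via the finite-$\lambda$ approximation $H_{D,1} \approx \lambda^{-1} E\exp(\max_{k:\, kD\in[0,\lambda]} B^*_{1/2}(kD))$:

```python
import numpy as np

def estimate_H_D_alpha1(D, lam, n_paths=4000, seed=2):
    # Crude Monte Carlo estimate of H_{D,alpha} for alpha = 1, where
    # B*_{1/2}(t) = sqrt(2) B(t) - t with B a standard Brownian motion:
    #   H_{D,1} ~ lambda^{-1} E exp( max_{k: kD in [0,lambda]} B*(kD) ).
    # This is a finite-lambda, finite-sample approximation only.
    rng = np.random.default_rng(seed)
    n_pts = int(lam / D) + 1
    # Brownian increments over steps of length D, for all paths at once.
    incr = rng.standard_normal((n_paths, n_pts - 1)) * np.sqrt(D)
    B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1)
    t = D * np.arange(n_pts)
    B_star = np.sqrt(2.0) * B - t          # the drifted field B*(kD)
    return np.exp(B_star.max(axis=1)).mean() / lam

est = estimate_H_D_alpha1(D=0.5, lam=40.0)
# B*(0) = 0 lies on every grid, so exp(max) >= 1 and the estimate is positive.
assert est >= 1.0 / 40.0 and np.isfinite(est)
print(est)
```

As $D \downarrow 0$ the estimate approaches the continuous-time Pickands constant $H_1 = 1$, which offers a rough sanity check of the implementation; the estimator is high-variance, since $\exp(\max)$ is heavy-tailed.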
Theorem 2.2.
Under the assumptions of Theorem 2.1 suppose further that $R(\delta_1) = R(D_1 a_T^{-2/\alpha})$, $D_1 > 0$, is a Pickands grid.

i) If $R(\delta_2) = R(D_2 a_T^{-2/\alpha})$, $D_2 \in (0,\infty)\setminus\{D_1\}$, is also a Pickands grid, then for any $x, z_1, z_2 \in \mathbb{R}$
\[
H_{D_1,D_2,\alpha}^{z_1,z_2} = \lim_{\lambda\to\infty} \frac{H_{D_1,D_2,\alpha}^{z_1,z_2}(\lambda)}{\lambda} \in (0,\infty) \quad\text{and}\quad H_{D_1,D_2,\alpha}^{x,z_1,z_2} = \lim_{\lambda\to\infty} \frac{H_{D_1,D_2,\alpha}^{x,z_1,z_2}(\lambda)}{\lambda} \in (0,\infty),
\]
and further (14) holds with $f$ given by
\[
f(x_k,y_{k1},y_{k2}) = e^{-x_k} + e^{-y_{k1}} + e^{-y_{k2}} - H_{D_1,\alpha}^{\ln H_\alpha + x_k,\,\ln H_{D_1,\alpha}+y_{k1}} - H_{D_2,\alpha}^{\ln H_\alpha + x_k,\,\ln H_{D_2,\alpha}+y_{k2}} - H_{D_1,D_2,\alpha}^{\ln H_{D_1,\alpha}+y_{k1},\,\ln H_{D_2,\alpha}+y_{k2}} + H_{D_1,D_2,\alpha}^{\ln H_\alpha + x_k,\,\ln H_{D_1,\alpha}+y_{k1},\,\ln H_{D_2,\alpha}+y_{k2}}.
\]
ii) If $R(\delta_2)$ is a dense grid, then (14) holds with
\[
f(x_k,y_{k1},y_{k2}) = e^{-\min(x_k,y_{k2})} + e^{-y_{k1}} - H_{D_1,\alpha}^{\ln H_\alpha + \min(x_k,y_{k2}),\,\ln H_{D_1,\alpha}+y_{k1}}.
\]
iii) If both $R(\delta_1)$ and $R(\delta_2)$ are dense grids, then again (14) holds with
\[
f(x_k,y_{k1},y_{k2}) = e^{-\min(x_k,y_{k1},y_{k2})}.
\]
Remarks: a) From the above results it follows that the joint convergence stated therein is determined by the choice of the grids. The dependence parameters $r_{lk}$, $l,k\le p$ determine the covariance of the Gaussian random vector $Z$ and appear explicitly in the definition of the limiting distribution. Clearly, if each $r_{kk}$ equals 0, i.e., the Berman condition holds for each component of the vector process, then $Z$ does not appear in any of the limiting results above. For such cases the maxima over a sparse grid are independent of those taken over a Pickands or a dense grid.

b) Condition (10) can be stated in a slightly more general form, putting therein $C_k$ instead of $C$. Our results can then be restated with some obvious modifications of the constants involved.

c) In [33] a particular case of Piterbarg's max-discretisation theorem was investigated, which in our notation corresponds to $r_{kk} = \infty$. Considering for simplicity $p = 1$, so that $r = \infty$: if (1) holds with $\alpha \in (0,1]$, $r(t) = o(1)$ as $t\to\infty$ is a convex function, and $(r(t)\ln t)^{-1}$ is monotone for large $t$ and $o(1)$, then for any two different sparse, Pickands or dense grids $R(\delta_1)$ and $R(\delta_2)$ we have
\[
\lim_{T\to\infty} P\big\{ a^*_T(M(T) - b^*_T) \le x,\ a^*_T(M(\delta_1,T) - b^*_{\delta_1,T}) \le y,\ a^*_T(M(\delta_2,T) - b^*_{\delta_2,T}) \le z \big\} = \Phi(\min(x,y,z)) \tag{15}
\]
for any $x,y,z\in\mathbb{R}$, where
\[
a^*_T = 1/\sqrt{r(T)}, \qquad b^*_{\delta_i,T} = \sqrt{(1 - r(T))/r(T)}\; b_{\delta_i,T}
\]
and $\Phi$ denotes the distribution function of an $N(0,1)$ random variable. The proof of the above claim follows by Theorem 2.1 in [33] and Lemma 4.5. Consequently, for this case different grids do not play a role in the limiting distribution. Note however that the normalisation constant $b^*_{\delta_i,T}$ depends on the type of the grid.

d) Set for $x, y_1, y_2 \in \mathbb{R}^p$
\[
G(x,y_1,y_2) = E\Big\{ \exp\Big( -\sum_{k=1}^p f(x_k,y_{k1},y_{k2})\, e^{-r_{kk}+\sqrt{2r_{kk}}\, Z_k} \Big) \Big\},
\]
where $f$ and $Z$ are as in Theorem 2.1 and Theorem 2.2. It follows that
\[
\lim_{x\to-\infty} H_{D_1,D_2,\alpha}^{x,y_1,y_2} = H_{D_1,D_2,\alpha}^{y_1,y_2}, \qquad \lim_{y_1\to-\infty,\,y_2\to-\infty} H_{D_1,D_2,\alpha}^{x,y_1,y_2} = e^{-x} H_\alpha,
\]
\[
\lim_{x\to-\infty,\,y_2\to-\infty} H_{D_1,D_2,\alpha}^{x,y_1,y_2} = \lim_{y_2\to-\infty} H_{D_1,D_2,\alpha}^{y_1,y_2} = H_{D_1,\alpha}^{y_1} = e^{-y_1} H_{D_1,\alpha}.
\]
Hence, using further (9), we conclude that $G$ is a non-degenerate multivariate distribution on $\mathbb{R}^{3p}$, which we refer to as the Piterbarg distribution. One important property of $G$ is that when $r_{kk} = 0$ for all indices $k\le p$, it has unit Gumbel marginals $\Lambda(x) = e^{-e^{-x}}$, $x\in\mathbb{R}$. Moreover, $G$ is a max-stable distribution since
\[
\big(G(x+\ln n,\, y_1+\ln n,\, y_2+\ln n)\big)^n = G(x,y_1,y_2), \quad x,y_1,y_2\in\mathbb{R}^p,\ n\in\mathbb{N}.
\]
In Extreme Value Theory max-stable distributions are important for the modelling of extremes and rare events, see e.g., [28, 15] for details.
3. Proofs
In this section we present several lemmas needed for the proof of the main results. In order to establish Piterbarg's max-discretisation theorem for multivariate stationary Gaussian processes we need to closely follow [27], and of course to strongly rely on the deep ideas and techniques presented in [26]. First, for $1\le k,l\le p$ define
\[
\rho_{kl}(T) = r_{kl}/\ln T.
\]
Following the former reference, we divide the interval $[0,T]$ into intervals of length $S$ alternating with shorter intervals of length $R$, where the constants $0 < b < a < 1$ will be chosen below (see (29)). We shall denote throughout in the sequel
\[
S = T^a, \qquad R = T^b, \qquad T > 1.
\]
Denote the long intervals by $S_l$, $l = 1,\dots,n_T$, and the short intervals by $R_l$, $l = 1,\dots,n_T$, where
\[
n_T := [T/(S+R)]. \tag{16}
\]
It will be seen from the proofs that a possible remaining interval with length different from $S$ or $R$ plays no role in our asymptotic considerations; we also call this interval a short interval. Define further $S = \cup_{l=1}^{n_T} S_l$, $R = \cup_{l=1}^{n_T} R_l$, and thus $[0,T] = S\cup R$.

Our proofs also rely on the ideas of [22]; we shall construct new Gaussian processes to approximate the original ones. For each index $k\le p$ we define a Gaussian process $\eta_k$ as
\[
\eta_k(t) = Y_k^{(j)}(t), \quad t \in R_j \cup S_j = [(j-1)(S+R),\, j(S+R)), \tag{17}
\]
where $\{Y_k^{(j)}(t), t\ge 0\}$, $j = 1,\dots,n_T$, are independent copies of $\{X_k(t), t\ge 0\}$. We construct the processes so that $\eta_k$, $k = 1,\dots,p$ are independent, by taking $Y_k^{(j)}$ to be independent for any two possible indices $j$ and $k$. The independence of $\eta_k$ and $\eta_l$ implies
\[
\gamma_{kl}(s,t) := E\{\eta_k(s)\eta_l(t)\} = 0, \quad k\neq l,
\]
whereas for any fixed $k$
\[
\gamma_{kk}(s,t) := E\{\eta_k(s)\eta_k(t)\} =
\begin{cases}
r_{kk}(s,t), & t,s \in R_i\cup S_i \ \text{for some } i\le n_T;\\
0, & t\in R_i\cup S_i,\ s\in R_j\cup S_j \ \text{for some } i\neq j\le n_T.
\end{cases}
\]
For $k = 1,\dots,p$ define
\[
\xi_{Tk}(t) = \big(1 - \rho_{kk}(T)\big)^{1/2}\eta_k(t) + \rho_{kk}^{1/2}(T)\, Z_k, \quad 0\le t\le T,
\]
where $Z = (Z_1,\dots,Z_p)$ is the $p$-dimensional centered Gaussian random vector introduced in Section 2, which is independent of $\{\eta_k(t), t\ge 0\}$, $k = 1,\dots,p$. Denote by $\{\varrho_{kl}(s,t), 1\le k,l\le p\}$ the covariance functions of $\{\xi_{Tk}(t), 0\le t\le T, k = 1,\dots,p\}$. We have
\[
\varrho_{kl}(s,t) = E\{\xi_{Tk}(s)\xi_{Tl}(t)\} = \rho_{kl}(T), \quad k\neq l,
\]
and
\[
\varrho_{kk}(s,t) =
\begin{cases}
r_{kk}(s,t) + (1 - r_{kk}(s,t))\rho_{kk}(T), & t,s\in R_i\cup S_i \ \text{for some } i;\\
\rho_{kk}(T), & t\in R_i\cup S_i,\ s\in R_j\cup S_j,\ i\neq j.
\end{cases}
\]
For any $\varepsilon > 0$ set
\[
q_\varepsilon = \varepsilon (\ln T)^{-1/\alpha}. \tag{18}
\]
For notational simplicity we write
\[
\widetilde{M}^\xi(q_\varepsilon, S) = \big(a_T(M_1^\xi(q_\varepsilon,S) - b_T), \dots, a_T(M_p^\xi(q_\varepsilon,S) - b_T)\big)
\]
and
\[
\widetilde{M}^\xi(\delta_i, S) = \big(a_T(M_1^\xi(\delta_i,S) - b_{\delta_i,T}), \dots, a_T(M_p^\xi(\delta_i,S) - b_{\delta_i,T})\big),
\]
where
\[
M_k^\xi(q_\varepsilon, S) = \max_{t\in R(q_\varepsilon)\cap S} \xi_{Tk}(t)
\]
and $b_{\delta_i,T}$ is defined in (4) if the grid $R(\delta_i)$ is sparse, $b_{\delta_i,T} = b_T(D_i)$ is given by (7) if we consider a Pickands grid $R(\delta_i) = R(D_i a_T^{-2/\alpha})$, and for a dense grid $b_{\delta_i,T} = b_T$ with $b_T$ defined in (6).

We present first four lemmas. Since their proofs are similar to those of Lemmas 3.1–3.4 in [31], we shall not give them here.
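The cross-covariance identity $\varrho_{kl}(s,t) = \rho_{kl}(T)$ for $k\neq l$ is a purely algebraic consequence of (13) and the independence of the $\eta_k$; the following tiny numerical check (not from the paper; the values of $r_{kl}$ and $T$ are hypothetical) makes this explicit:

```python
import math

# Hypothetical dependence parameters r_kl (cf. (11)) and a value of T.
r = {(1, 1): 0.8, (2, 2): 0.2, (1, 2): 0.3}
T = 1000.0

def rho(k, l):
    # rho_kl(T) = r_kl / ln T
    return r[min(k, l), max(k, l)] / math.log(T)

# Cov(Z_k, Z_l) = r_kl / sqrt(r_kk * r_ll), cf. (13).
cov_Z = r[1, 2] / math.sqrt(r[1, 1] * r[2, 2])

# Cross-covariance of xi_T1 and xi_T2: since eta_1 and eta_2 are independent
# of each other and of Z, only the Z-part contributes:
#   Cov(xi_T1(s), xi_T2(t)) = sqrt(rho_11(T) * rho_22(T)) * Cov(Z_1, Z_2).
cross = math.sqrt(rho(1, 1) * rho(2, 2)) * cov_Z

# This reproduces rho_12(T) exactly, as claimed for varrho_kl:
assert abs(cross - rho(1, 2)) < 1e-12
print(cross, rho(1, 2))
```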
Lemma 3.1. If $R(\delta_1)$ and $R(\delta_2)$ are sparse or Pickands grids, then for any $B > 0$ there exists some $K > 0$ such that for all $x_k, y_{ki}\in[-B,B]$, $i = 1,2$, $k\le p$,
\[
\Big| P\big\{\widetilde{M}(T)\le x,\ \widetilde{M}(\delta_i,T)\le y_i,\ i=1,2\big\} - P\big\{\widetilde{M}(S)\le x,\ \widetilde{M}(\delta_i,S)\le y_i,\ i=1,2\big\} \Big| \le K(\ln T)^{1/\alpha - 1/2}\, T^{b-a}
\]
holds for some $0 < b < a < 1$ and all $T$ large.

In the following $R(q_\varepsilon) = R(\varepsilon(\ln T)^{-1/\alpha})$ denotes a Pickands grid, where $\varepsilon > 0$ and $q_\varepsilon$ is defined in (18).

Lemma 3.2. If $R(\delta_1)$ and $R(\delta_2)$ are sparse or Pickands grids, then for any $B > 0$ and for all $x_k, y_{ki}\in[-B,B]$, $i = 1,2$, $k\le p$,
\[
\Big| P\big\{\widetilde{M}(S)\le x,\ \widetilde{M}(\delta_i,S)\le y_i,\ i=1,2\big\} - P\big\{\widetilde{M}(q_\varepsilon,S)\le x,\ \widetilde{M}(\delta_i,S)\le y_i,\ i=1,2\big\} \Big| \to 0
\]
as $\varepsilon\downarrow 0$.

Lemma 3.3. If $R(\delta_1)$ and $R(\delta_2)$ are sparse or Pickands grids, then for any $B > 0$ and for all $x_k, y_{ki}\in[-B,B]$, $i=1,2$, $k\le p$,
\[
\lim_{T\to\infty} \Big| P\big\{\widetilde{M}(q_\varepsilon,S)\le x,\ \widetilde{M}(\delta_i,S)\le y_i,\ i=1,2\big\} - P\big\{\widetilde{M}^\xi(q_\varepsilon,S)\le x,\ \widetilde{M}^\xi(\delta_i,S)\le y_i,\ i=1,2\big\} \Big| = 0
\]
uniformly for $\varepsilon > 0$.

Let in the following $\Phi_p$ denote the distribution function of the $p$-dimensional Gaussian random vector $Z$ and set for $\eta_k$ defined in (17)
\[
\widehat{M}^\eta(\delta_i, S_j) = \Big( \max_{t\in R(\delta_i)\cap S_j} \eta_1(t), \cdots, \max_{t\in R(\delta_i)\cap S_j}\eta_p(t) \Big), \qquad \widehat{M}^\eta(S_j) = \Big( \max_{t\in S_j}\eta_1(t), \cdots, \max_{t\in S_j}\eta_p(t) \Big).
\]
Lemma 3.4.
If $R(\delta_1)$ and $R(\delta_2)$ are sparse or Pickands grids, then for any $B > 0$ and for all $x_k, y_{ki}\in[-B,B]$, $i=1,2$, $k\le p$,
\[
\Big| P\big\{\widetilde{M}^\xi(q_\varepsilon,S)\le x,\ \widetilde{M}^\xi(\delta_i,S)\le y_i,\ i=1,2\big\} - \int_{z\in\mathbb{R}^p} \prod_{j=1}^{n_T} P\big\{\widehat{M}^\eta(S_j)\le u(x,z),\ \widehat{M}^\eta(\delta_i,S_j)\le u(y_i,z),\ i=1,2\big\}\, d\Phi_p(z) \Big| \to 0
\]
as $\varepsilon\downarrow 0$, where $u(x,z)$, $u(y_i,z)$, $i=1,2$ have components
\[
u(x_k,z_k) = \frac{b_T + x_k/a_T - \rho_{kk}^{1/2}(T)\, z_k}{(1-\rho_{kk}(T))^{1/2}} = \frac{x_k + r_{kk} - \sqrt{2r_{kk}}\, z_k}{a_T} + b_T + o(a_T^{-1}), \tag{19}
\]
\[
u(y_{ki},z_k) = \frac{b_{\delta_i,T} + y_{ki}/a_T - \rho_{kk}^{1/2}(T)\, z_k}{(1-\rho_{kk}(T))^{1/2}} = \frac{y_{ki} + r_{kk} - \sqrt{2r_{kk}}\, z_k}{a_T} + b_{\delta_i,T} + o(a_T^{-1}), \tag{20}
\]
for all $x_k, y_{ki}\in[-B,B]$, $i=1,2$, $k\le p$.

Proof of Theorem 2.1. In view of Lemmas 3.1–3.4, for all $x_k, y_{ki}\in[-B,B]$, $i=1,2$, $k\le p$, by letting $\varepsilon\downarrow 0$ we have
\[
P\big\{\widetilde{M}(T)\le x,\ \widetilde{M}(\delta_i,T)\le y_i,\ i=1,2\big\} \sim \int_{z\in\mathbb{R}^p} \prod_{j=1}^{n_T} P\big\{\widehat{M}^\eta(S_j)\le u(x,z),\ \widehat{M}^\eta(\delta_i,S_j)\le u(y_i,z),\ i=1,2\big\}\, d\Phi_p(z)
\]
as $T\to\infty$. Thus, if we can prove
\[
\lim_{T\to\infty} \Big| \prod_{j=1}^{n_T} P\big\{\widehat{M}^\eta(S_j)\le u(x,z),\ \widehat{M}^\eta(\delta_i,S_j)\le u(y_i,z),\ i=1,2\big\} - \exp\Big(-\sum_{k=1}^p f(x_k,y_{k1},y_{k2})\, e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}\Big) \Big| = 0, \tag{21}
\]
where $f(x_k,y_{k1},y_{k2})$ is defined in Theorem 2.1, then applying the dominated convergence theorem we complete the proof of Theorem 2.1 for the cases i)–iii). Define next the events
\[
\mathcal{A}_k = \Big\{\max_{t\in[0,S]}\eta_k(t) > u(x_k,z_k)\Big\}, \quad \mathcal{A}_{p+k} = \Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\}, \quad \mathcal{A}_{2p+k} = \Big\{\max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\},
\]
$k = 1,\dots,p$.

i) Using the stationarity of $\{\eta_k(t), k=1,\dots,p\}$ (we write $\mathcal{A}_k^c$ for the complementary event of $\mathcal{A}_k$)
\[
\prod_{j=1}^{n_T} P\big\{\widehat{M}^\eta(S_j)\le u(x,z),\ \widehat{M}^\eta(\delta_i,S_j)\le u(y_i,z),\ i=1,2\big\} = \big(P\{\cap_{k=1}^{3p}\mathcal{A}_k^c\}\big)^{n_T} = \exp\big(n_T\ln(P\{\cap_{k=1}^{3p}\mathcal{A}_k^c\})\big) = \exp\big(-n_T P\{\cup_{k=1}^{3p}\mathcal{A}_k\} + W_{n_T}\big),
\]
where $n_T$ is defined in (16). Since $\lim_{T\to\infty} P\{\cap_{k=1}^{3p}\mathcal{A}_k^c\} = 1$ we get that the remainder $W_{n_T}$ satisfies $W_{n_T} = o(n_T P\{\cup_{k=1}^{3p}\mathcal{A}_k\})$, $T\to\infty$. Next, by the Bonferroni inequality
\[
\sum_{k=1}^{3p} P\{\mathcal{A}_k\} \ge P\Big\{\bigcup_{k=1}^{3p}\mathcal{A}_k\Big\} \ge \sum_{k=1}^{3p} P\{\mathcal{A}_k\} - \sum_{1\le k<l\le 3p} P\{\mathcal{A}_k\cap\mathcal{A}_l\}. \tag{23}
\]
Further, Lemma 2 in [27] and (19), (20) imply (recall $S = T^a$)
\[
A_1 := \sum_{k=1}^{3p} P\{\mathcal{A}_k\} \sim \sum_{k=1}^{p} ST^{-1}\big(e^{-x_k}+e^{-y_{k1}}+e^{-y_{k2}}\big)e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}, \quad T\to\infty.
\]
For the pairwise terms, by the independence of $\eta_k(t)$ and $\eta_l(t)$ for $k\neq l$, Lemma 2 of [27] and (19), (20), all terms involving two different components are $o(ST^{-1})$, while the pairwise terms involving the same component are $o(ST^{-1})$ in case i) by Lemma 4.1 i); this establishes (21) for case i).

ii) The definition of $u(y_{ki},z_k)$ together with (4) yields
\[
u(y_{ki},z_k) = \sqrt{2\ln T} - \frac{\ln\ln T}{2\sqrt{2\ln T}} + \frac{\ln\delta_i^{-1}(T)}{\sqrt{2\ln T}} + \frac{\ln(2^{-1}\pi^{-1/2})}{\sqrt{2\ln T}} + \frac{y_{ki}+r_{kk}-\sqrt{2r_{kk}}\, z_k}{\sqrt{2\ln T}} + \frac{o_T(1)}{\sqrt{2\ln T}} \tag{24}
\]
for sparse grids. From the assumptions we know that
\[
\lim_{T\to\infty}\ln\Big(\frac{\delta_1(T)}{\delta_2(T)}\Big) = \theta = \theta_2-\theta_1.
\]
Consequently, we have
\[
u(y_{k1},z_k) - u(y_{k2},z_k) = \Big[\ln\Big(\frac{\delta_2(T)}{\delta_1(T)}\Big) + y_{k1}-y_{k2}\Big](2\ln T)^{-1/2} + o_T(1)(2\ln T)^{-1/2} \sim \big[-\theta + y_{k1}-y_{k2}\big](2\ln T)^{-1/2} + o_T(1)(2\ln T)^{-1/2}
\]
as $T\to\infty$. Letting first $y_{k1} > y_{k2}+\theta$, we thus have $u(y_{k1},z_k) > u(y_{k2},z_k)$ for sufficiently large $T$. Further,
\[
A_{10} = \sum_{k=1}^p P\Big\{ \max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k) \Big\}
\]
\[
= \sum_{k=1}^p \Big[ P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} + P\Big\{\max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} - \Big(1 - P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) \le u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) \le u(y_{k2},z_k)\Big\}\Big)\Big]
\]
\[
= \sum_{k=1}^p \Big[ P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} + P\Big\{\max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} - \Big(1 - P\Big\{\max_{t\in R(\delta_1)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) \le u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) \le u(y_{k2},z_k)\Big\}\Big)\Big]
\]
\[
= \sum_{k=1}^p \Big[ P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} - P\Big\{\max_{t\in R(\delta_1)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} + P\Big\{\max_{t\in R(\delta_1)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} \Big].
\]
By Lemma 4.2 and (24) we have for $i = 1,2$, as $T\to\infty$,
\[
P\Big\{\max_{t\in R(\delta_i)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{ki},z_k)\Big\} \sim \delta_i\, S\, T^{-1}\Big(\frac{1}{\delta_i} - \frac{1}{\delta_0}\Big) e^{-y_{ki}-r_{kk}+\sqrt{2r_{kk}}\, z_k} \sim ST^{-1}(1 - e^{-\theta_i})\, e^{-y_{ki}-r_{kk}+\sqrt{2r_{kk}}\, z_k}.
\]
Further, applying Lemma 2 in [27] (recall (24)) we obtain as $T\to\infty$
\[
P\Big\{\max_{t\in R(\delta_i)\cap[0,S]}\eta_k(t) > u(y_{ki},z_k)\Big\} \sim ST^{-1}\, e^{-y_{ki}-r_{kk}+\sqrt{2r_{kk}}\, z_k}, \quad i = 1,2.
\]
By the second assertion of Lemma 4.1, the third term is $o(T^{a-1})$. Next, for $y_{k1}\le y_{k2}+\theta$, we have $u(y_{k1},z_k)\le u(y_{k2},z_k)$ for sufficiently large $T$. Similarly, we have
\[
A_{10} = \sum_{k=1}^p P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\}
\]
\[
= \sum_{k=1}^p\Big[ P\Big\{\max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} - P\Big\{\max_{t\in R(\delta_2)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} + P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\}\Big].
\]
Again, in view of the second assertion of Lemma 4.1 the third term is also $o(T^{a-1})$. Consequently,
\[
A_{10} = \sum_{k=1}^p T^{a-1}\big[e^{-y_{k1}-\theta_1} I(y_{k1}>y_{k2}+\theta) + e^{-y_{k2}-\theta_2} I(y_{k1}\le y_{k2}+\theta)\big]e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k} + o(T^{a-1}), \quad T\to\infty,
\]
implying that as $T\to\infty$
\[
n_T\, P\Big\{\bigcup_{k=1}^{3p}\mathcal{A}_k\Big\} \sim \sum_{k=1}^p\big(e^{-x_k}+e^{-y_{k1}}+e^{-y_{k2}} - e^{-y_{k1}-\theta_1} I(y_{k1}>y_{k2}+\theta) - e^{-y_{k2}-\theta_2} I(y_{k1}\le y_{k2}+\theta)\big)e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k},
\]
which completes the proof of (21).

iii) We proceed as for the proof of cases i) and ii) using the bound (23). By Lemmas 2 and 3 in [27] and (19), (20) we obtain
\[
A_1 \sim T^{a-1}\sum_{k=1}^p\big(e^{-x_k}+e^{-y_{k1}}+e^{-y_{k2}}\big)e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}, \quad T\to\infty.
\]
With similar arguments as for case i), the pairwise terms involving two different components are $o(A_1)$. Further, Lemma 2 in [27] implies $A_8 = o(A_1)$ and Lemma 4.3 yields $A_{10} = o(T^{a-1}) = o(A_1)$, $T\to\infty$, while similar arguments as in the proof of case ii) imply that the remaining pairwise terms except $A_9$ are also $o(A_1)$, $T\to\infty$. Borrowing the arguments of [26], p. 176 and using Lemma 3 in [27] it follows that
\[
A_9 = \sum_{k=1}^p P\Big\{\max_{t\in[0,S]}\eta_k(t) > u(x_k,z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} \sim T^{a-1}\sum_{k=1}^p H_{D_2,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_2,\alpha}+y_{k2}}\, e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}, \quad T\to\infty.
\]
Consequently, as $T\to\infty$
\[
n_T\, P\Big\{\bigcup_{k=1}^{3p}\mathcal{A}_k\Big\} \sim \sum_{k=1}^p\big(e^{-x_k}+e^{-y_{k1}}+e^{-y_{k2}} - H_{D_2,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_2,\alpha}+y_{k2}}\big)e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k},
\]
which completes the proof of the claim in (21).

iv) By Lemma 5 in [27], we have
\[
\Big| P\big\{\widetilde{M}(T)\le x,\ \widetilde{M}(\delta_1,T)\le y_1,\ \widetilde{M}(\delta_2,T)\le y_2\big\} - P\big\{\widetilde{M}(T)\le x,\ \widetilde{M}(\delta_1,T)\le y_1,\ \widetilde{M}(T)\le y_2\big\} \Big| \le \Big| P\big\{\widetilde{M}(\delta_2,T)\le y_2\big\} - P\big\{\widetilde{M}(T)\le y_2\big\} \Big| \to 0, \quad T\to\infty.
\]
Now, by Theorem 2.1 of [31], we have
\[
P\big\{\widetilde{M}(T)\le x,\ \widetilde{M}(\delta_1,T)\le y_1,\ \widetilde{M}(T)\le y_2\big\} = P\big\{\widetilde{M}(T)\le \min(x,y_2),\ \widetilde{M}(\delta_1,T)\le y_1\big\} \to E\Big\{\exp\Big(-\sum_{k=1}^p f(x_k,y_{k1},y_{k2})\, e^{-r_{kk}+\sqrt{2r_{kk}}\, Z_k}\Big)\Big\}
\]
as $T\to\infty$ with $f(x_k,y_{k1},y_{k2}) = e^{-\min(x_k,y_{k2})} + e^{-y_{k1}}$, establishing the proof. $\Box$

Proof of Theorem 2.2. i) The limiting properties of the two constants can be found in Lemma 4.4. We give the proof of the relation (14). As for the proof of Theorem 2.1, in view of Lemmas 3.1–3.4 and the dominated convergence theorem, in order to establish the proof we need to show that (21) holds with
\[
f(x_k,y_{k1},y_{k2}) = e^{-x_k} + e^{-y_{k1}} + e^{-y_{k2}} - H_{D_1,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_1,\alpha}+y_{k1}} - H_{D_2,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_2,\alpha}+y_{k2}} - H_{D_1,D_2,\alpha}^{\ln H_{D_1,\alpha}+y_{k1},\,\ln H_{D_2,\alpha}+y_{k2}} + H_{D_1,D_2,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_1,\alpha}+y_{k1},\,\ln H_{D_2,\alpha}+y_{k2}}.
\]
We proceed as in the proof of case ii) of Theorem 2.1 using the bound (23); we have thus
\[
P\Big\{\bigcup_{k=1}^{3p}\mathcal{A}_k\Big\} = \sum_{k=1}^{3p} P\{\mathcal{A}_k\} - \sum_{1\le k<l\le 3p} P\{\mathcal{A}_k\cap\mathcal{A}_l\} + \sum_{1\le k<l<j\le 3p} P\{\mathcal{A}_k\cap\mathcal{A}_l\cap\mathcal{A}_j\} + \cdots =: \Sigma_1 - \Sigma_2 + \Sigma_3 + \Sigma_4, \tag{25}
\]
where $\Sigma_4$ collects the higher-order intersection terms. By Lemmas 2 and 3 in [27] and (19), (20) we obtain that
\[
\Sigma_1 \sim T^{a-1}\sum_{k=1}^p\big(e^{-x_k}+e^{-y_{k1}}+e^{-y_{k2}}\big)e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}, \quad T\to\infty.
\]
Further, write
\[
\Sigma_2 = A_2 + A_3 + A_4 + 2A_5 + 2A_6 + 2A_7 + A_8 + A_9 + A_{10}, \tag{26}
\]
where $A_i$, $i = 2,\dots,10$ are defined in the proof of ii) of Theorem 2.1.
Hence, with similar arguments as above, $A_i = o(T^{a-1})$ for $i = 2,\dots,7$, whereas
\[
A_8 \sim T^{a-1}\sum_{k=1}^p H_{D_1,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_1,\alpha}+y_{k1}}\, e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k},
\]
\[
A_9 \sim T^{a-1}\sum_{k=1}^p H_{D_2,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_2,\alpha}+y_{k2}}\, e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k},
\]
\[
A_{10} \sim T^{a-1}\sum_{k=1}^p H_{D_1,D_2,\alpha}^{\ln H_{D_1,\alpha}+y_{k1},\,\ln H_{D_2,\alpha}+y_{k2}}\, e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}
\]
as $T\to\infty$, where for the estimates of $A_8$ and $A_9$ we applied Lemma 3 in [27] and for the estimate of $A_{10}$ we have used Lemma 4.4. Further, the only term in $\Sigma_3$ which is not negligible is the one collecting the three events of the same component; by Lemma 4.4 it satisfies
\[
\Sigma_3 \sim T^{a-1}\sum_{k=1}^p H_{D_1,D_2,\alpha}^{\ln H_\alpha+x_k,\,\ln H_{D_1,\alpha}+y_{k1},\,\ln H_{D_2,\alpha}+y_{k2}}\, e^{-r_{kk}+\sqrt{2r_{kk}}\, z_k}, \quad T\to\infty,
\]
and $\Sigma_4 = o(T^{a-1})$, which together with (25) and (26) completes the proof of (21) in this case.

4. Appendix

For the proof of the main results, we need the following technical lemmas. Let in the sequel $C$ be a positive constant whose value may change from place to place, and let $\overline{\Phi}$ and $\varphi$ denote the survival function and the density function of an $N(0,1)$ random variable, respectively.

Lemma 4.1. Suppose that $R(\delta_1)$ and $R(\delta_2)$ are sparse grids and $a\in(0,1)$.

i) If $\lim_{T\to\infty}\delta_2(T)/\delta_1(T) = \infty$ or $R(\delta_1)\cap R(\delta_2) = \emptyset$, then we have for $k\le p$ as $T\to\infty$
\[
P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} = o(T^{a-1}).
\]
ii) Let $R(\delta_1)\cap R(\delta_2) = R(\delta_0)$ and suppose
\[
\lim_{T\to\infty}\ln\Big(\frac{\delta_0(T)}{\delta_1(T)}\Big) = \theta_1\in[0,\infty), \qquad \lim_{T\to\infty}\ln\Big(\frac{\delta_0(T)}{\delta_2(T)}\Big) = \theta_2\in[0,\infty)
\]
hold. If $y_{k1} > y_{k2} + \theta_2 - \theta_1$, then we have for $k\le p$ as $T\to\infty$
\[
P\Big\{\max_{t\in R(\delta_1)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} = o(T^{a-1}),
\]
whereas if $y_{k1}\le y_{k2}+\theta_2-\theta_1$,
\[
P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]\setminus R(\delta_0)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\} = o(T^{a-1})
\]
holds.

Proof of Lemma 4.1. By (10), we can choose $\epsilon\in(0, 2^{-2/\alpha})$ such that for all $|s-t|\le\epsilon$
\[
\tfrac{C}{2}\,|s-t|^\alpha \le 1 - r_{kk}(s,t) \le 2C\,|s-t|^\alpha. \tag{27}
\]
i) We first deal with the case $\lim_{T\to\infty}\delta_2(T)/\delta_1(T) = \infty$.
It is easy to check that
\[
P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\}
\le P\Big\{\max_{t\in R(\delta_1)\cap R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} + P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]\setminus R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\}.
\]
By Lemma 2 of [27] and the definition of $u(y_{k1},z_k)$, we have as $T\to\infty$
\[
P\Big\{\max_{t\in R(\delta_1)\cap R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} \le C\, S\delta_2^{-1}(T)\,\overline{\Phi}(u(y_{k1},z_k)) = C\, S\delta_2^{-1}(T)\, T^{-1}\delta_1(T) = C\, T^{a-1}\,\frac{\delta_1(T)}{\delta_2(T)} = o(T^{a-1}).
\]
Now, for $m,n\in\mathbb{N}$ and the $\epsilon$ chosen in (27), we have
\[
P\Big\{\max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k),\ \max_{t\in R(\delta_2)\cap[0,S]}\eta_k(t) > u(y_{k2},z_k)\Big\}
\le \sum_{n=0}^{[S/\delta_2]+1} P\Big\{\eta_k(n\delta_2) > u(y_{k2},z_k),\ \max_{t\in R(\delta_1)\cap[0,S]}\eta_k(t) > u(y_{k1},z_k)\Big\} + o(T^{a-1})
\]
\[
\le \sum_{n=0}^{[S/\delta_2]+1} P\Big\{\eta_k(n\delta_2) > u(y_{k2},z_k),\ \max_{\substack{0\le t\le S\\ |t-n\delta_2|\le\epsilon}}\eta_k(t) > u(y_{k1},z_k)\Big\} + \sum_{n=0}^{[S/\delta_2]+1} P\Big\{\eta_k(n\delta_2) > u(y_{k2},z_k),\ \max_{\substack{0\le m\delta_1\le S\\ |n\delta_2-m\delta_1|>\epsilon}}\eta_k(m\delta_1) > u(y_{k1},z_k)\Big\} + o(T^{a-1})
=: S_{T,1} + S_{T,2} + o(T^{a-1}),
\]
where $[x]$ denotes the integer part of $x$. By stationarity we have, setting $\eta^*_{nk}(t) = \eta_k(n\delta_2)+\eta_k(t)$,
\[
S_{T,1} \le \sum_{n=0}^{[S/\delta_2]+1} P\Big\{\max_{n\delta_2-\epsilon\le t\le n\delta_2+\epsilon}\eta^*_{nk}(t) > u(y_{k1},z_k)+u(y_{k2},z_k)\Big\} = C\,\frac{S}{\delta_2}\, P\Big\{\max_{0\le t<\epsilon}\eta^*_k(t) > u(y_{k1},z_k)+u(y_{k2},z_k)\Big\}.
\]
For the correlation function of $\eta^*_k(t) = \eta_k(0) + \eta_k(t)$, $t \in [0, \epsilon]$, we have by (27)
$$1 - \frac{E(\eta^*_k(s)\,\eta^*_k(t))}{\sqrt{E((\eta^*_k(s))^2)\, E((\eta^*_k(t))^2)}} \le \frac{1 - r_{kk}(t-s)}{2\sqrt{1 + r_{kk}(t)}\sqrt{1 + r_{kk}(s)}} \le \frac{|t-s|^{\alpha}}{2(1 - \epsilon^{\alpha})} \le 1 - \exp(-|t-s|^{\alpha}).$$
Further,
$$\operatorname{Var}(\eta^*_k(t)) = 2 + 2 r_{kk}(t) = 4 - 2|t|^{\alpha}(1 + o(1))$$
as $t \to 0$. Hence by Slepian's inequality (see e.g. Theorem 7.4.2 of [21]) we have
$$P\Big\{ \max_{0 \le t < \epsilon} \eta^*_k(t) > u_1(y_{1k}, z_k) + u_2(y_{2k}, z_k) \Big\} = P\Big\{ \max_{0 \le t < \epsilon} \frac{\eta^*_k(t)}{\sqrt{E((\eta^*_k(t))^2)}}\,\sqrt{E((\eta^*_k(t))^2)} > u_1(y_{1k}, z_k) + u_2(y_{2k}, z_k) \Big\}$$
$$\le P\Big\{ \max_{0 \le t < \epsilon} W(t)\,\sqrt{E((\eta^*_k(t))^2)} > u_1(y_{1k}, z_k) + u_2(y_{2k}, z_k) \Big\},$$
where $W$ is a zero-mean stationary Gaussian process with covariance function $\exp(-|t|^{\alpha})$; thus the conditions of Theorem D.3 in [26] for the case $\alpha = \beta$ hold. By that theorem
$$S_{T,1} \le C S \delta_1^{-1}\,\overline{\Phi}\Big( \frac{u_1(y_{1k}, z_k) + u_2(y_{2k}, z_k)}{2} \Big).$$
The definition of $u_i(y_{ik}, z_k)$, $i = 1, 2$, implies
$$u_i^2(y_{ik}, z_k) = 2\ln T - \ln\ln T + 2\ln \delta_i^{-1}(T) + O(1). \tag{28}$$
Consequently, from the fact that $\lim_{T\to\infty} \delta_1(T)/\delta_2(T) = \infty$,
$$S_{T,1} \le C S \delta_1^{-1}(T)\, \frac{1}{\sqrt{\ln T}}\, T^{-1} \sqrt{\ln T}\,\delta_1^{1/2}(T)\,\delta_2^{1/2}(T) = C T^{a-1} \Big( \frac{\delta_2(T)}{\delta_1(T)} \Big)^{1/2} = o(T^{a-1}), \qquad T \to \infty.$$
Now, let $\vartheta_{kk}(t) = \sup_{t \le s \le S} r_{kk}(s)$. Assumption (10) implies that $\vartheta_{kk}(\epsilon) < 1$ for all large $T$ and any $\epsilon \in (0, 2^{-1/\alpha})$. Consequently, we may choose some positive constant $\beta_{kk}$ such that
$$\beta_{kk} < \frac{1 - \vartheta_{kk}(\epsilon)}{1 + \vartheta_{kk}(\epsilon)} < 1.$$
In the following we choose
$$0 < a < b < \min_{1 \le k \le p} \beta_{kk}. \tag{29}$$
For the second term, by stationarity and Berman's inequality (see e.g.
Theorem 4.2.1 of [21], Theorem C.2 of [26]), we have, writing for brevity $u_i := u_i(y_{ik}, z_k)$, $i = 1, 2$,
$$S_{T,2} \le \sum_{n=0}^{[S/\delta_1]+1} \sum_{\substack{0 \le m\delta_2 \le S \\ |n\delta_1 - m\delta_2| > \epsilon}} P\{ \eta_k(n\delta_1) > u_1,\ \eta_k(m\delta_2) > u_2 \}$$
$$\le \sum_{n=0}^{[S/\delta_1]+1} \sum_{\substack{0 \le m\delta_2 \le S \\ |n\delta_1 - m\delta_2| > \epsilon}} \Big[ \overline{\Phi}(u_1)\,\overline{\Phi}(u_2) + C \exp\Big( -\frac{u_1^2 + u_2^2}{2(1 + r_{kk}(|n\delta_1 - m\delta_2|))} \Big) \Big]$$
$$\le \frac{S}{\delta_1}\,\frac{S}{\delta_2}\,\Big[ \overline{\Phi}(u_1)\,\overline{\Phi}(u_2) + C \exp\Big( -\frac{u_1^2 + u_2^2}{2(1 + \vartheta_{kk}(\epsilon))} \Big) \Big] =: S_{T,3} + S_{T,4}.$$
Utilising again (28),
$$S_{T,3} \le C\,\frac{S}{\delta_1}\,\frac{S}{\delta_2}\,\frac{\varphi(u_1)}{u_1}\,\frac{\varphi(u_2)}{u_2} \le C\,\frac{S}{\delta_1}\,\frac{S}{\delta_2}\,\frac{1}{\ln T}\,\exp\Big(-\frac{u_1^2}{2}\Big)\exp\Big(-\frac{u_2^2}{2}\Big) \le C\,\frac{S}{\delta_1}\,\frac{S}{\delta_2}\,\frac{1}{\ln T}\; T^{-1}(\ln T)^{1/2}\delta_1\; T^{-1}(\ln T)^{1/2}\delta_2 = C\, T^{2(a-1)} = o(T^{a-1})$$
as $T \to \infty$. Since $u_i \sim (2\ln T)^{1/2}$, $i = 1, 2$,
$$S_{T,4} \le C\,\frac{S}{\delta_1}\,\frac{S}{\delta_2}\,\exp\Big( -\frac{u_1^2 + u_2^2}{2(1 + \vartheta_{kk}(\epsilon))} \Big) \le C\,\frac{T^a}{\delta_1}\,\frac{T^a}{\delta_2}\, T^{-\frac{2}{1 + \vartheta_{kk}(\epsilon)}} \le C\, T^{a-1}\, T^{a - \frac{1 - \vartheta_{kk}(\epsilon)}{1 + \vartheta_{kk}(\epsilon)}}\,(\delta_1 \delta_2)^{-1}.$$
Both (29) and $\lim_{T\to\infty} (\ln T)^{1/\alpha} \delta_i(T) = \infty$, $i = 1, 2$, imply $S_{T,4} = o(T^{a-1})$ as $T \to \infty$.

Let us consider now the case $\mathcal{R}(\delta_1) \cap \mathcal{R}(\delta_2) = \emptyset$. Without loss of generality, we suppose that $u_1 \ge u_2$. Splitting again into close and distant pairs of grid points,
$$P\Big\{ \max_{t \in \mathcal{R}(\delta_1) \cap [0,S]} \eta_k(t) > u_1,\ \max_{t \in \mathcal{R}(\delta_2) \cap [0,S]} \eta_k(t) > u_2 \Big\} \le \sum_{n=0}^{[S/\delta_1]+1} P\Big\{ \eta_k(n\delta_1) > u_1,\ \max_{\substack{0 \le m\delta_2 \le S \\ 0 < |n\delta_1 - m\delta_2| \le \epsilon}} \eta_k(m\delta_2) > u_2 \Big\} + \cdots =: R_{T,1} + R_{T,2}.$$
Since the grids are disjoint, ordering the points of $\mathcal{R}(\delta_2)$ near $n\delta_1$ by their distance, the $m$-th one is at distance of order $m\delta_2$, so that by (27) $1 - r_{kk}(\cdot) \ge \frac{1}{4}(m\delta_2)^{\alpha}$, and using (28) we obtain
$$R_{T,1} \le C\,\frac{S}{\delta_1}\,\overline{\Phi}(u_1) \sum_{m \ge 1} \overline{\Phi}\Big( \frac{1}{2}(m\delta_2)^{\alpha/2}\, u_2 \Big) = C\, T^{a-1} \sum_{m \ge 1} \overline{\Phi}\Big( \frac{1}{2}(m\delta_2)^{\alpha/2}\, u_2 \Big) \le C\, T^{a-1} \sum_{m \ge 1} \exp\Big( -\frac{1}{8}(m\delta_2)^{\alpha}\, u_2^2 \Big)$$
$$= C\, T^{a-1} \sum_{m \ge 1} \exp\Big( -\frac{1}{4}\big[ m\delta_2 (\ln T)^{1/\alpha} \big]^{\alpha} (1 + o(1)) \Big) \le C\, T^{a-1} \big[ \delta_2 (\ln T)^{1/\alpha} \big]^{-\alpha/2} = T^{a-1}\, o(1),$$
where we used additionally the fact that $\lim_{T\to\infty} (\ln T)^{1/\alpha} \delta_i(T) = \infty$, $i = 1, 2$. By repeating the calculations for $S_{T,2}$ we obtain further $R_{T,2} = o(T^{a-1})$ as $T \to \infty$, which completes the proof of i).
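The expansion (28) of the squared normalising levels drives all of the estimates above. As a sanity check of our own (not from the paper), one can evaluate $u^2 - (2\ln T - \ln\ln T + 2\ln \delta^{-1}(T))$ numerically for the grid level $u = b_{\delta,T} + x/a_T$ with $a_T = \sqrt{2\ln T}$ and $b_{\delta,T} = a_T - \ln(a_T \delta \sqrt{2\pi})/a_T$ as in (4); the sparse spacing $\delta(T) = (\ln T)^{-1/2}$ used below is our illustrative choice, for which the remainder tends to the constant $2x - \ln(4\pi)$ and is in particular $O(1)$.

```python
import math

def u_level(T, delta, x):
    """Grid normalising level u = b_{delta,T} + x / a_T, cf. (4)."""
    a_T = math.sqrt(2.0 * math.log(T))
    b = a_T - math.log(a_T * delta * math.sqrt(2.0 * math.pi)) / a_T
    return b + x / a_T

def remainder(T, x=0.0):
    """u^2 minus the leading terms of (28), for the (illustrative) sparse
    spacing delta(T) = (ln T)^{-1/2}; stays bounded, tending to 2x - ln(4*pi)."""
    delta = math.log(T) ** (-0.5)
    u = u_level(T, delta, x)
    leading = 2.0 * math.log(T) - math.log(math.log(T)) + 2.0 * math.log(1.0 / delta)
    return u * u - leading
```

The $O(1/\ln T)$ error visible numerically comes from the squared correction term $(\ln(a_T\delta\sqrt{2\pi}) - x)^2 / a_T^2$.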
ii) If $y_{2k} \le y_{1k} + \theta_1 - \theta_2$, then we have $u_2(y_{2k}, z_k) \le u_1(y_{1k}, z_k)$ for sufficiently large $T$. By stationarity we have for $m, n \in \mathbb{N}$ and $\epsilon > 0$
$$P\Big\{ \max_{t \in \mathcal{R}(\delta_1) \cap [0,S]} \eta_k(t) > u_1(y_{1k}, z_k),\ \max_{t \in \mathcal{R}(\delta_2) \cap [0,S] \setminus \mathcal{R}(\delta_0) \cap [0,S]} \eta_k(t) > u_2(y_{2k}, z_k) \Big\}$$
$$\le \sum_{n=0}^{[S/\delta_1]+1} P\Big\{ \eta_k(n\delta_1) > u_1(y_{1k}, z_k),\ \max_{t \in \mathcal{R}(\delta_2) \cap [0,S] \setminus \mathcal{R}(\delta_0) \cap [0,S]} \eta_k(t) > u_2(y_{2k}, z_k) \Big\},$$
and since every point of $\mathcal{R}(\delta_2) \setminus \mathcal{R}(\delta_0)$ lies at positive distance from $\mathcal{R}(\delta_1)$, the terms can be estimated as $S_{T,1}$ and $S_{T,2}$ above, yielding the bound $o(T^{a-1})$; the other case follows by symmetry. $\Box$

Lemma 4.2. For $S = T^a$, $a \in (0,1)$, we have for any $k \le p$
$$P\Big\{ \max_{t \in \mathcal{R}(\delta) \cap [0,S]} \eta_k(t) > u_T \Big\} = \sharp\big( \mathcal{R}(\delta) \cap [0,S] \big)\,\overline{\Phi}(u_T)\,(1 + o(1))$$
as $u_T \to \infty$, where $\sharp(A)$ denotes the number of elements of the set $A$.

Proof of Lemma 4.2. For all $T$ large (set $\Theta_T := \sharp(\mathcal{R}(\delta) \cap [0,S])$ and $u := u_T$)
$$\sum_{l=1}^{\Theta_T} P\{ \eta_k(t_l(T)) > u \} \ge P\Big\{ \max_{t \in \mathcal{R}(\delta) \cap [0,S]} \eta_k(t) > u \Big\} \ge \sum_{l=1}^{\Theta_T} P\{ \eta_k(t_l(T)) > u \} - \sum_{\substack{1 \le m < l \le \Theta_T \\ |t_l(T) - t_m(T)| \le \epsilon}} P\{\cdot\} - \sum_{\substack{1 \le m < l \le \Theta_T \\ |t_l(T) - t_m(T)| > \epsilon}} P\{\cdot\} =: \Sigma_1 - P_1 - P_2.$$
Using $\frac{1}{4}(t_l(T) - t_m(T))^{\alpha} \ge \frac{1}{4}\delta_{\min}^{\alpha}$, with $\delta_{\min}$ the minimal distance between distinct grid points, and the fact that $u = u_T = (2\ln T)^{1/2}(1 + o(1))$ and $\delta_{\min}(\ln T)^{1/\alpha} \to \infty$, we get
$$P_1 \le \Theta_T\,\overline{\Phi}(u)\,\frac{\epsilon}{\delta_{\min}}\,\overline{\Phi}\Big( \frac{u\,\delta_{\min}^{\alpha/2}}{2} \Big) \le C\,\Theta_T\,\overline{\Phi}(u)\,\frac{\epsilon}{\delta_{\min}}\,\frac{1}{u\,\delta_{\min}^{\alpha/2}}\exp\Big( -\frac{u^2 \delta_{\min}^{\alpha}}{8} \Big) \le C\,\Theta_T\,\overline{\Phi}(u)\,\frac{1}{u\,\delta_{\min}^{\alpha/2}} = \Theta_T\,\overline{\Phi}(u)\, o(1).$$
Recalling the bound derived for $S_{T,2}$, by stationarity and Berman's inequality
$$P_2 \le \sum_{\substack{1 \le m < l \le \Theta_T \\ |t_l(T) - t_m(T)| > \epsilon}} \Big[ \overline{\Phi}(u)^2 + C\exp\Big( -\frac{u^2}{1 + \vartheta_{kk}(\epsilon)} \Big) \Big] = \Theta_T\,\overline{\Phi}(u)\, o(1),$$
which completes the proof. $\Box$

Lemma 4.4. For any $x, y \in \mathbb{R}$ we have
$$0 < H^{x}_{c,d,\alpha} = \lim_{\lambda\to\infty} \frac{H^{x}_{c,d,\alpha}(\lambda)}{\lambda} < \infty \qquad\text{and}\qquad 0 < H^{x,y}_{c,d,\alpha} = \lim_{\lambda\to\infty} \frac{H^{x,y}_{c,d,\alpha}(\lambda)}{\lambda} < \infty.$$
Furthermore, for any $S > 0$,
$$P_S(u, x) \sim S\, H^{x}_{c,d,\alpha}\, u^{2/\alpha}\,\overline{\Phi}(u) \qquad\text{and}\qquad P_S(u, x, y) \sim S\, H^{x,y}_{c,d,\alpha}\, u^{2/\alpha}\,\overline{\Phi}(u)$$
as $u \to \infty$.

Lemma 4.5. Let $\{ Z_{T,ij},\ 1 \le i \le p,\ 1 \le j \le m \}$, $T > 0$, be a random matrix. Suppose that the following convergence in distribution
$$\mathbf{Z}_{T,j} := (Z_{T,1j}, \ldots, Z_{T,pj}) \stackrel{d}{\to} (W_1, \ldots, W_p) =: \mathbf{W}, \qquad T \to \infty,$$
is valid for any index $j \le m$.
If further $Z_{T,ij} \le Z_{T,i(j+1)}$ holds almost surely for any index $i \le p$, $1 \le j < m$, then we have the joint convergence in distribution
$$(\mathbf{Z}_{T,1}, \ldots, \mathbf{Z}_{T,m}) \stackrel{d}{\to} (\mathbf{W}, \ldots, \mathbf{W}), \qquad T \to \infty.$$

Proof of Lemma 4.5. For notational simplicity we consider only the case $m = p = 2$. By the assumptions, Lemma 2.3 in [18] implies the convergence in distribution
$$(Z_{T,11}, Z_{T,12}) \stackrel{d}{\to} (W_1, W_1), \qquad (Z_{T,21}, Z_{T,22}) \stackrel{d}{\to} (W_2, W_2), \qquad T \to \infty.$$
Hence we have the convergence in probability
$$Z_{T,11} - Z_{T,12} \stackrel{p}{\to} 0, \qquad Z_{T,21} - Z_{T,22} \stackrel{p}{\to} 0, \qquad T \to \infty,$$
which then entails
$$(Z_{T,11}, Z_{T,21}, Z_{T,12}, Z_{T,22}) \stackrel{d}{\to} (W_1, W_2, W_1, W_2), \qquad T \to \infty,$$
establishing thus the proof. $\Box$

Acknowledgments. E. Hashorva kindly acknowledges partial support by the Swiss National Science Foundation grant 200021-140633/1 and RARE-318984 (an FP7 Marie Curie IRSES Fellowship). Z. Tan also acknowledges support by the National Science Foundation of China (No. 11326175), RARE-318984, and the Natural Science Foundation of Zhejiang Province of China (No. LQ14A010012).

References

[1] J.M.P. Albin. On extremal theory for non differentiable stationary processes. PhD Thesis, University of Lund, Sweden, 1987.
[2] J.M.P. Albin. On extremal theory for stationary processes. Ann. Probab., 18(1):92–128, 1990.
[3] J.M.P. Albin and H. Choi. A new proof of an old result by Pickands. Electron. Commun. Probab., 15:339–345, 2010.
[4] M.T. Alodat, M. Al-Rawwash, and M.A. Jebrini. Duration distribution of the conjunction of two independent F processes. J. Appl. Probab., 47(1):179–190, 2010.
[5] S.M. Berman. Limit theorems for the maximum term in stationary sequences. Ann. Math. Statist., 35:502–516, 1964.
[6] S.M. Berman. Sojourns and extremes of stationary processes. Ann. Probab., 10(1):1–46, 1982.
[7] S.M. Berman. Sojourns and Extremes of Stochastic Processes. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1992.
[8] K. Dębicki, E. Hashorva, and S.N. Kukieła. Extremes of homogeneous Gaussian random fields. J. Appl. Probab., in press, 2014.
[9] K. Dębicki, E. Hashorva, and L. Ji.
Tail asymptotics of supremum of certain Gaussian processes over threshold dependent random intervals. Extremes, 17(3):411–429, 2014.
[10] K. Dębicki, E. Hashorva, L. Ji, and K. Tabis. On the probability of conjunctions of stationary Gaussian processes. Statist. Probab. Lett., 88(5):141–148, 2014.
[11] K. Dębicki and P. Kisowski. A note on upper estimates for Pickands constants. Statist. Probab. Lett., 78(14):2046–2051, 2008.
[12] K. Dębicki and K. Kosiński. On the infimum attained by the reflected fractional Brownian motion. Extremes, 17(3):431–446, 2014.
[13] A.B. Dieker and T. Mikosch. Exact simulation of Brown-Resnick random fields. arXiv:1406.5624, 2014.
[14] A.B. Dieker and B. Yakir. On asymptotic constants in the theory of Gaussian processes. Bernoulli, 20(3):1600–1619, 2014.
[15] M. Falk, J. Hüsler, and R.-D. Reiss. Laws of Small Numbers: Extremes and Rare Events. DMV Seminar. Birkhäuser, Basel, third edition, 2010.
[16] E. Hashorva. Asymptotics and bounds for multivariate Gaussian tails. J. Theoret. Probab., 18(1):79–97, 2005.
[17] E. Hashorva and L. Ji. Extremes and first passage times of correlated fractional Brownian motions. Stochastic Models, 30(3):272–299, 2014.
[18] E. Hashorva and L. Ji. Gaussian approximation of passage times of γ-reflected processes with fBm as input. J. Appl. Probab., 51(3):713–726, 2014.
[19] J. Hüsler. Dependence between extreme values of discrete and continuous time locally stationary Gaussian processes. Extremes, 7(2):179–190, 2004.
[20] J. Hüsler and V.I. Piterbarg. Limit theorem for maximum of the storage process with fractional Brownian motion as input. Stochastic Process. Appl., 114(2):231–250, 2004.
[21] M.R. Leadbetter, G. Lindgren, and H. Rootzén. Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, 1983.
[22] Y. Mittal and D. Ylvisaker. Limit distributions for the maxima of stationary Gaussian processes. Stochastic Process. Appl., 3(1):1–18, 1975.
[23] Z.
Peng, L. Cao, and S. Nadarajah. Asymptotic distributions of maxima of complete and incomplete samples from multivariate stationary Gaussian sequences. J. Multivariate Anal., 101(10):2641–2647, 2010.
[24] J. Pickands, III. Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc., 145:51–73, 1969.
[25] V.I. Piterbarg. On the paper by J. Pickands "Upcrossing probabilities for stationary Gaussian processes". Vestnik Moskov. Univ. Ser. I Mat. Meh., 27(5):25–30, 1972.
[26] V.I. Piterbarg. Asymptotic Methods in the Theory of Gaussian Processes and Fields, volume 148 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1996.
[27] V.I. Piterbarg. Discrete and continuous time extremes of Gaussian processes. Extremes, 7(2):161–177, 2004.
[28] S.I. Resnick. Extreme Values, Regular Variation, and Point Processes. Springer-Verlag, New York, 1987.
[29] Z. Tan and E. Hashorva. Limit theorems for extremes of strongly dependent cyclo-stationary χ-processes. Extremes, 16(2):241–254, 2013.
[30] Z. Tan and E. Hashorva. On Piterbarg's max-discretisation theorem for standardised maximum of stationary Gaussian processes. Methodol. Comput. Appl. Probab., 16(1):169–185, 2014.
[31] Z. Tan and E. Hashorva. On Piterbarg's max-discretisation theorem for multivariate stationary Gaussian processes. J. Math. Anal. Appl., 409(1):299–314, 2014.
[32] Z. Tan, E. Hashorva, and Z. Peng. Asymptotics of maxima of strongly dependent Gaussian processes. J. Appl. Probab., 49(4):1106–1118, 2012.
[33] Z. Tan and L. Tang. The dependence of extreme values of discrete and continuous time strongly dependent Gaussian processes. Stochastics, 86(1):60–69, 2014.
[34] Z. Tan and Y. Wang. Extreme values of discrete and continuous time strongly dependent Gaussian processes. Commun. Stat. Theory Methods, 42(13):2451–2463, 2013.
[35] M. Teimouri and S.
Nadarajah. On simulating truncated stable random variables. Computational Statistics, 28(5):2367–2377, 2013.
[36] K.F. Turkman. Discrete and continuous time series extremes of stationary processes. In T.S. Rao, S.S. Rao, and C.R. Rao, editors, Handbook of Statistics, Vol. 30: Time Series Methods and Applications, pages 565–580. Elsevier, 2012.
[37] K.F. Turkman, M.A.A. Turkman, and J.M. Pereira. Asymptotic models and inference for extremes of spatio-temporal data. Extremes, 13(4):375–397, 2010.
[38] K.J. Worsley and K.J. Friston. A test for a conjunction. Statist. Probab. Lett., 47(2):135–140, 2000.

Enkelejd Hashorva, Department of Actuarial Science, Faculty of Business and Economics (HEC Lausanne), University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: [email protected]

Zhongquan Tan, College of Mathematics, Physics and Information Engineering, Jiaxing University, Jiaxing 314001, PR China, and Department of Actuarial Science, Faculty of Business and Economics (HEC Lausanne), University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland. Corresponding author.
E-mail address: