Comparison Inequalities for Order Statistics of Gaussian Arrays
arXiv: math.PR
KRZYSZTOF DĘBICKI, ENKELEJD HASHORVA, LANPENG JI, AND CHENGXIU LING
Abstract:
The normal comparison lemma and Slepian's inequality are essential tools in the study of Gaussian processes. In this paper we extend the normal comparison lemma and derive various related comparison inequalities, including Slepian's inequality, for order statistics of two Gaussian arrays. The derived results can be applied in numerous problems related to the study of the supremum of order statistics of Gaussian processes. In order to illustrate the range of possible applications, we analyze the lower tail behaviour of order statistics of self-similar Gaussian processes and derive mixed Gumbel limit theorems.
Key Words: comparison inequality; order statistics process; Slepian's inequality; mixed Gumbel limit theorem; lower tail probability; self-similar Gaussian process; fractional Brownian motion.

1. Introduction
The normal comparison inequality is crucial for the study of extremes of Gaussian processes, chi-processes and Gaussian random fields; see, e.g., [3, 4, 15, 18, 25]. It has been shown to be valuable in many other fields of mathematics, such as, for instance, certain problems in number theory; see, e.g., [10, 11]. In the simpler framework of two d-dimensional Gaussian distributions Φ_{Σ^{(1)}} and Φ_{Σ^{(0)}} with N(0,1) marginal distributions, the normal comparison inequality gives bounds for the difference

∆(u) := Φ_{Σ^{(1)}}(u) − Φ_{Σ^{(0)}}(u), ∀ u = (u_1, …, u_d) ∈ ℝ^d,

by a function of the covariance matrices Σ^{(k)} = (σ^{(k)}_{ij})_{d×d}, k = 0,
1. As mentioned in [14], the derivation of the bounds for ∆(u) by Slepian [27], Berman [2, 4], Cramér [6], Bickel and Rosenblatt [5] and Piterbarg [24, 25] relies strongly on Plackett's partial differential equation; see [26]. The most elaborate version of the normal comparison inequality is due to Li and Shao [17]. Specifically, Theorem 2.1 therein shows that

∆(u) ≤ (1/(2π)) Σ_{1≤i<j≤d} (arcsin(σ^{(1)}_{ij}) − arcsin(σ^{(0)}_{ij}))^+ exp( −(u_i² + u_j²)/(2(1 + ρ_{ij})) ), ρ_{ij} := max(|σ^{(0)}_{ij}|, |σ^{(1)}_{ij}|),

while Theorem 2.2 therein shows that, if further σ^{(1)}_{ij} ≥ σ^{(0)}_{ij} ≥ 0 for all 1 ≤ i < j ≤ d, then for any u ∈ [0,∞)^d

1 ≤ Φ_{Σ^{(1)}}(u)/Φ_{Σ^{(0)}}(u) ≤ exp( (1/(√2 π)) Σ_{1≤i<j≤d} (arcsin(σ^{(1)}_{ij}) − arcsin(σ^{(0)}_{ij}))^+ exp( −(u_i² + u_j²)/(2(1 + ρ_{ij})) ) ),   (1)

where Φ denotes the distribution function (df) of an N(0,1) random variable. Recent extensions of the normal comparison inequalities are presented in [7, 10, 12, 13, 20, 22].

In this paper, we are interested in the derivation of comparison inequalities for order statistics of Gaussian arrays, which are useful in several applications. To fix the notation, we denote by X = (X_{ij})_{d×n} and Y = (Y_{ij})_{d×n} two random d × n arrays with N(0,1) components (hereafter referred to as standard Gaussian arrays), and let Σ^{(1)} = (σ^{(1)}_{ij,lk})_{dn×dn} and Σ^{(0)} = (σ^{(0)}_{ij,lk})_{dn×dn} be the covariance matrices of X and Y, respectively, with σ^{(1)}_{ij,lk} := E{X_{ij}X_{lk}} and σ^{(0)}_{ij,lk} := E{Y_{ij}Y_{lk}}.

Date: October 2, 2018.

Furthermore, define X_{(r)} = (X_{1(r)}, …, X_{d(r)}), 1 ≤ r ≤ n, to be the r-th order statistics vector generated by X as follows:

X_{i(1)} = min_{1≤j≤n} X_{ij} ≤ ⋯ ≤ X_{i(r)} ≤ ⋯ ≤ max_{1≤j≤n} X_{ij} = X_{i(n)}, 1 ≤ i ≤ d.

Similarly, we write Y_{(r)} = (Y_{1(r)}, …, Y_{d(r)}) for the vector generated by Y. Our principal results, Theorem 2.1 and Theorem 2.4, derive bounds for the difference

∆^{(r)}(u) := P{X_{(r)} ≤ u} − P{Y_{(r)} ≤ u}, u ∈ ℝ^d.   (2)

A direct application of these bounds concerns the study of the supremum of the r-th order statistics process {X_{r:n}(t), t ≥ 0} of {X_j(t), t ≥ 0}, 1 ≤ j ≤ n, which are independent copies of a centered Gaussian process {X(t), t ≥ 0}. More precisely, X_{r:n} is defined by

X_{n:n}(t) = min_{1≤i≤n} X_i(t) ≤ ⋯ ≤ X_{r:n}(t) ≤ ⋯ ≤ max_{1≤i≤n} X_i(t) = X_{1:n}(t), t ≥ 0.
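The componentwise order statistics vectors just defined can be sanity-checked numerically. The following Python sketch (illustrative only; the dimensions and seed are arbitrary choices, not from the paper) builds a d × n standard Gaussian array and verifies the monotonicity X_{i(1)} ≤ ⋯ ≤ X_{i(n)} row by row.

```python
import random

def order_stats_vector(X, r):
    """r-th order statistics vector of a d x n array:
    componentwise, X_i(r) is the r-th smallest entry of row i."""
    return [sorted(row)[r - 1] for row in X]

random.seed(1)
d, n = 3, 5
X = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(d)]
X1 = order_stats_vector(X, 1)   # rowwise minima, X_(1)
Xn = order_stats_vector(X, n)   # rowwise maxima, X_(n)
assert all(X1[i] <= order_stats_vector(X, r)[i] <= Xn[i]
           for i in range(d) for r in range(1, n + 1))
```

The same sorting convention gives the process X_{r:n}(t) when applied pointwise in t to n independent copies.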
Below we call X_{r:n} the r-th order statistics process generated by X; we refer to [7, 8, 9] for the study of the extremes of order statistics processes.

With motivation from Theorem 3.1 in [18], we apply the findings of Theorem 2.1 to show that

P{ sup_{t∈[0,1]} B_{r:n,α}(t) ≤ x } = x^{2p_{r:n,α}/α + o(1)}, x ↓ 0,   (3)

with some constant p_{r:n,α} ∈ (0,∞), where B_{r:n,α} is the r-th order statistics process generated by a fractional Brownian motion (fBm) B_α with Hurst index α/2 ∈ (0,1). Moreover, if B^{(0)}_α is an fBm which is independent of B_{r:n,α}, then

P{ sup_{t∈[0,1]} (B_{r:n,α}(t) − B^{(0)}_α(t)) ≤ x } = x^{2q_{r:n,α}/α + o(1)}, x ↓ 0,   (4)

with some constant q_{r:n,α} ∈ (0,∞). This result is related to the problem of the capture time of a fractional Brownian pursuit; see Theorem 4.1 in [18]. In Proposition 2.5 we derive bounds for the ratio

Θ^{(r)}(u) := P{X_{(r)} ≤ u} / P{Y_{(r)} ≤ u}, ∀ u ∈ [0,∞)^d,

which extend (1). Relying on the findings of Li and Shao [17], results for Θ^{(r)}(u) can be applied for the estimation of p_{r:n,α} and q_{r:n,α} appearing in (3) and (4), respectively; this topic will be investigated in a forthcoming paper.

We organize this paper as follows. In Section 2 we display our main results. Section 3 is devoted to the study of the lower tail probability of order statistics of self-similar Gaussian processes, where Slepian-type inequalities for order statistics processes are also derived. We present the limit theorems for stationary order statistics processes in Section 4. Finally, all the proofs are relegated to Section 5 and the Appendix.

2. Main Results

We begin this section by deriving some sharp bounds for ∆^{(r)}(u) defined in (2), which go in line with Li and Shao's [17] normal comparison inequality. For notational simplicity we set below

Q_{ij,lk} := | arcsin(σ^{(1)}_{ij,lk}) − arcsin(σ^{(0)}_{ij,lk}) |, Q^+_{ij,lk} := ( arcsin(σ^{(1)}_{ij,lk}) − arcsin(σ^{(0)}_{ij,lk}) )^+.
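For d = 2 and n = 1 the quantities Q_{ij,lk} reduce to the arcsin increments of the classical Li–Shao bound, which can be verified numerically via Plackett's identity ∂Φ₂(u₁,u₂;ρ)/∂ρ = φ₂(u₁,u₂;ρ) — the same identity underlying the proofs in this paper. A minimal, self-contained Python sketch; the test point and grid size are arbitrary choices for illustration:

```python
import math

def Phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi2(x, y, rho):
    # bivariate standard normal density with correlation rho
    det = 1.0 - rho * rho
    q = (x * x - 2.0 * rho * x * y + y * y) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def Phi2(x, y, rho, steps=2000):
    # Plackett's identity: d/d rho Phi2 = phi2, so integrate the
    # bivariate density over the correlation (midpoint rule)
    h = rho / steps
    acc = sum(phi2(x, y, (k + 0.5) * h) for k in range(steps)) * h
    return Phi(x) * Phi(y) + acc

u1, u2 = 1.0, 0.5
r0, r1 = 0.2, 0.6                 # sigma^(0)_{12} <= sigma^(1)_{12}
delta = Phi2(u1, u2, r1) - Phi2(u1, u2, r0)
rho = max(abs(r0), abs(r1))
bound = (math.asin(r1) - math.asin(r0)) / (2.0 * math.pi) \
        * math.exp(-(u1 * u1 + u2 * u2) / (2.0 * (1.0 + rho)))
assert 0.0 <= delta <= bound + 1e-9   # Li-Shao bound holds here
```

The inequality holds because φ₂(u₁,u₂;t) ≤ exp(−(u₁²+u₂²)/(2(1+t)))/(2π√(1−t²)) for t ≥ 0, and ∫ dt/√(1−t²) produces exactly the arcsin difference.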
Theorem 2.1. If X and Y are two standard d × n Gaussian arrays, then for any 1 ≤ r ≤ n we have

| ∆^{(r)}(u) | ≤ (1/(2π)) Σ_{(i−1)n+j < (l−1)n+k} Q_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ),   (5)

where ρ_{ij,lk} := max(|σ^{(0)}_{ij,lk}|, |σ^{(1)}_{ij,lk}|). If further

σ^{(0)}_{ij,ik} = σ^{(1)}_{ij,ik}, 1 ≤ i ≤ d, 1 ≤ j, k ≤ n,   (6)

holds, then

| ∆^{(r)}(u) | ≤ (1/(2π)) Σ_{1≤i<l≤d} Σ_{1≤j,k≤n} Q_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ).   (7)

Moreover, for all u ∈ ℝ^d with min_{1≤i≤d} u_i sufficiently large,

∆^{(r)}(u) ≤ (1/(2π)) Σ_{1≤i<l≤d} Σ_{1≤j,k≤n} Q^+_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ).   (8)

A direct consequence of Theorem 2.1 is the following Slepian's inequality for the order statistics of Gaussian arrays.

Corollary 2.3. If (6) is satisfied and further

σ^{(0)}_{ij,lk} ≥ σ^{(1)}_{ij,lk}, 1 ≤ i < l ≤ d, 1 ≤ j, k ≤ n,

holds, then ∆^{(r)}(u) ≤ 0, or equivalently,

P{ ∪_{i=1}^d {X_{i(r)} > u_i} } ≥ P{ ∪_{i=1}^d {Y_{i(r)} > u_i} }, ∀ u ∈ ℝ^d.   (9)

Note that the bounds in Theorem 2.1 do not involve r, which indicates that in some cases they may not be sharp. Below we present a sharper result, which holds under the assumption that the columns of both X and Y are mutually independent, i.e.,

σ^{(κ)}_{ij,lk} = σ^{(κ)}_{il} I{j = k}, 1 ≤ i, l ≤ d, 1 ≤ j, k ≤ n, κ = 0, 1,   (10)

with some σ^{(κ)}_{il}, 1 ≤ i, l ≤ d, κ = 0, 1, where I{·} stands for the indicator function. This result is useful for establishing mixed Gumbel limit theorems; see Section 4.

In order to simplify the presentation, we shall define

c_{n,r} = n!/(r!(n−r)!), 0 ≤ r ≤ n, ρ_{il} = max(|σ^{(0)}_{il}|, |σ^{(1)}_{il}|), 1 ≤ i, l ≤ d,

and

A^{(r)}_{il} = ∫_{σ^{(0)}_{il}}^{σ^{(1)}_{il}} (1 + |h|)^{n−r} (1 − h²)^{−(n−r+1)/2} dh, 1 ≤ i, l ≤ d, 1 ≤ r ≤ n.

Theorem 2.4.
Under the assumptions of Theorem 2.1, if further (10) is satisfied, then for any u ∈ (0,∞)^d, with u_* := min_{1≤i≤d} u_i,

∆^{(r)}(u) ≤ n (c_{n−1,r−1})² (2π)^{−(n−r+1)} u_*^{−2(n−r)} Σ_{1≤i<l≤d} A^{(r)}_{il} exp( −(2(n−r)u_*² + u_i² + u_l²)/(2(1 + ρ_{il})) ).   (11)

Proposition 2.5. Under the assumptions of Theorem 2.1, if further (6) holds and σ^{(1)}_{ij,lk} ≥ σ^{(0)}_{ij,lk} ≥ 0 for all 1 ≤ i < l ≤ d, 1 ≤ j, k ≤ n, then for any u ∈ [0,∞)^d

1 ≤ Θ^{(r)}(u) ≤ exp( (1/(√2 π)) Σ_{1≤i<l≤d} Σ_{1≤j,k≤n} Q^+_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ) ).   (12)

3. Lower Tail Probabilities of Order Statistics Processes

The seminal contributions [18, 19] show that the investigation of the lower tail probability of Gaussian processes is of special interest in many applied fields, for example, in the study of real zeros of random polynomials, in the study of fractional Brownian pursuit, and in the study of the first-passage time for the Slepian process. In this section, we aim at extending some results in [18, 19] by considering order statistics processes instead of Gaussian processes.

Our first result is concerned with Slepian's inequality for order statistics processes. In the following X, Y, Z are three independent mean-zero Gaussian processes with almost surely (a.s.) continuous sample paths. In accordance with our notation above, X_{r:n}, Y_{r:n} and Z_{r:n} are the corresponding r-th order statistics processes. Below, we shall denote by σ_X(s,t) = E{X(s)X(t)} and σ_Y(s,t) = E{Y(s)Y(t)} the covariance functions of X and Y, respectively.

Proposition 3.1. If for all s, t ≥ 0

σ_X(t,t) = σ_Y(t,t) and σ_X(s,t) ≤ σ_Y(s,t),

then for any c ≥ 0, T > 0 and u ∈ ℝ we have

P{ sup_{t∈[0,T]} (X_{r:n}(t) + cZ(t)) > u } ≥ P{ sup_{t∈[0,T]} (Y_{r:n}(t) + cZ(t)) > u }   (13)

and

P{ sup_{t∈[0,T]} (Z_{r:n}(t) + cX(t)) > u } ≥ P{ sup_{t∈[0,T]} (Z_{r:n}(t) + cY(t)) > u }.   (14)

We shall display the proof of this proposition in the Appendix.

Remark 3.2.
Similarly to [18], as an immediate consequence of Proposition 3.1, for any c ≥ 0 and x ∈ ℝ we obtain that

p_r(x) := lim_{T→∞} (1/T) ln P{ sup_{0≤t≤T} (X_{r:n}(t) + cZ(t)) ≤ x } = sup_{T>0} (1/T) ln P{ sup_{0≤t≤T} (X_{r:n}(t) + cZ(t)) ≤ x }   (15)

exists and p_r(x), x ∈ ℝ, is left-continuous, provided that σ_X(0,t) ≥ 0 and σ_Z(0,t) ≥ 0 for all t ≥ 0.

Next, we present the main result of this section, which gives a lower tail probability for order statistics processes. Let {X(t), t ≥ 0} be a centered self-similar Gaussian process with a.s. continuous sample paths and index α/2, α > 0, i.e.,

X(0) = 0, E{X²(1)} = 1, {X(λt), t ≥ 0} ≐ {λ^{α/2} X(t), t ≥ 0}, ∀ λ > 0,

where ≐ denotes equality of the (finite-dimensional) distribution functions. It is well known that by Lamperti's transformation a dual stationary Gaussian process {X*(t), t ≥ 0} can be defined as

X*(t) = e^{−αt/2} X(e^t), t ≥ 0.

Theorem 3.3. Let {X(t), t ≥ 0} and {Z(t), t ≥ 0} be two independent centered self-similar Gaussian processes with continuous sample paths and common self-similarity index α/2 ∈ (0,∞). Suppose that σ_X(s,t) ≥ 0 and σ_Z(s,t) ≥ 0 for all s, t ≥ 0, and that both

ρ(t) := E{X*(t)X*(0)}, ρ̃(t) := E{Z*(t)Z*(0)}, t ≥ 0,

are non-increasing. If further, for any h ∈ (0,∞) and θ ∈ (0,1),

a_{h,θ}² := inf_{0<t≤h} (ρ(θt) − ρ(t))/(1 − ρ(t)) > 0,   (16)

then, for any c ≥ 0 and Ỹ_{r:n}(t) := X_{r:n}(t) + cZ(t),

P{ sup_{t∈[0,1]} Ỹ_{r:n}(t) ≤ x } = x^{2c_{r:n,α}/α + o(1)}, x ↓ 0, with c_{r:n,α} := −lim_{T→∞} (1/T) ln P{ sup_{0≤t≤T} Ỹ_{r:n}(e^t) ≤ 1 } ∈ (0,∞).   (17)

Remark 3.4. As discussed in [19], two examples of {X(t), t ≥ 0} that satisfy all the conditions of Theorem 3.3 are the standard fBm B_α and the centered Gaussian process {X_β(t), t ≥ 0}, β > 0, with

E{X_β(s)X_β(t)} = 2^β (st)^{(1+β)/2} / (s+t)^β, s, t > 0.

Moreover, we have that (3) holds with p_{r:n,α} given by

p_{r:n,α} = −lim_{T→∞} (1/T) ln P{ sup_{0≤t≤T} B_{r:n,α}(e^t) ≤ 1 }.

Next, we discuss a modification of the fractional Brownian pursuit problem considered in [17, 18].
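As a preview of this pursuit problem, the capture time defined below can be simulated crudely for α = 1 (standard Brownian motion). The Python sketch is illustrative only — step size, horizon and replication count are arbitrary assumptions, and the Euler discretisation biases the hitting time — but it shows the mechanics with n = 3 policemen, r = 1, and the prisoner starting 1 unit ahead.

```python
import random

def capture_time(n, r, horizon=50.0, dt=0.02, rng=random):
    # first time the r-th largest of B_k(t) - B_0(t) reaches 1
    # (the prisoner starts 1 unit ahead); Euler scheme, alpha = 1
    steps = int(horizon / dt)
    sd = dt ** 0.5
    prisoner = 0.0
    cops = [0.0] * n
    for step in range(1, steps + 1):
        prisoner += sd * rng.gauss(0.0, 1.0)
        cops = [c + sd * rng.gauss(0.0, 1.0) for c in cops]
        gaps = sorted(c - prisoner for c in cops)   # ascending
        if gaps[n - r] >= 1.0:                      # r-th largest gap
            return step * dt
    return float("inf")                             # no capture before horizon

rng = random.Random(0)
times = [capture_time(3, 1, rng=rng) for _ in range(40)]
captured = [t for t in times if t < float("inf")]
assert captured and min(captured) > 0.0
```

Estimating P{τ > s} from such replications is exactly the survival probability whose power-law decay is quantified by Corollary 3.5 below.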
Let B^{(k)}_α, 0 ≤ k ≤ n, be independent standard fBms, and define

τ_{r:n,α} = inf{ t ≥ 0 : B_{r:n,α}(t) − B^{(0)}_α(t) = 1 },   (18)

where {B_{r:n,α}(t), t ≥ 0} is the r-th order statistics process of B^{(k)}_α, 1 ≤ k ≤ n. Then τ_{r:n,α} can be viewed as a capture time in a random pursuit setting. Assume that a fractional Brownian prisoner escapes, running along the path of B^{(0)}_α. In his pursuit, there are n independent fractional Brownian policemen running along the paths of B^{(k)}_α, 1 ≤ k ≤ n, respectively. At the outset, the prisoner is ahead of the policemen by 1 unit of distance. Then τ_{r:n,α} represents the capture time at which at least r policemen have caught the prisoner. As shown in the aforementioned papers, the study of the capture time of the fractional Brownian pursuit is related to the analysis of the lower tail probability of the order statistics process, since

P{ τ_{r:n,α} > s } = P{ sup_{t∈[0,1]} ( B_{r:n,α}(t) − B^{(0)}_α(t) ) ≤ s^{−α/2} }.

As an application of Theorem 3.3, we have that (4) holds with

q_{r:n,α} = −lim_{T→∞} (1/T) ln P{ sup_{0≤t≤T} ( B_{r:n,α}(e^t) − B^{(0)}_α(e^t) ) ≤ 1 },

which leads to the following result.

Corollary 3.5. If τ_{r:n,α} is defined as in (18), then we have

P{ τ_{r:n,α} > s } = s^{−q_{r:n,α} + o(1)}, s → ∞.

4. Limit Theorems for Stationary Order Statistics Processes

In this section we suppose that {X_{r:n}(t), t ≥ 0} is the r-th order statistics process generated by a centered stationary Gaussian process {X(t), t ≥ 0} with a.s. continuous sample paths, unit variance and correlation function ρ(·) satisfying

ρ(t) = 1 − |t|^α + o(|t|^α), t → 0, with α ∈ (0,2], and ρ(t) < 1, ∀ t ≠ 0.   (19)
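A standard example satisfying the conditions in (19) with γ = 0 in Theorem 4.1 a) below is ρ(t) = exp(−|t|^α) (a hypothetical choice for illustration, not taken from the paper). A quick numerical check in Python:

```python
import math

alpha = 1.5

def rho(t):
    # a correlation function with rho(t) = 1 - |t|^alpha + o(|t|^alpha), t -> 0
    return math.exp(-abs(t) ** alpha)

# local expansion: (1 - rho(t)) / |t|^alpha -> 1 as t -> 0
for t in (1e-2, 1e-3, 1e-4):
    assert abs((1.0 - rho(t)) / t ** alpha - 1.0) < 1e-2

# Berman-type condition: rho(t) ln t -> 0, i.e. gamma = 0
assert abs(rho(50.0) * math.log(50.0)) < 1e-100
```

Since exp(−|t|^α) is strictly less than 1 for every t ≠ 0, the non-degeneracy requirement in (19) holds as well.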
From Theorem 1.1 in [7] or Theorem 2.2 in [9], for any T > 0,

P{ sup_{t∈[0,T]} X_{r:n}(t) > u } = T A_{r,α} c_{n,r} (2π)^{−r/2} u^{2/α − r} exp(−ru²/2) (1 + o(1)), u → ∞,   (20)

where A_{r,α} ∈ (0,∞) is a positive constant. As a continuation of [7] we establish below a limit theorem for X_{r:n}.

Theorem 4.1. Let {X_{r:n}(t), t ≥ 0} be the r-th order statistics process generated by X as above. Suppose that (19) holds and lim_{t→∞} ρ(t) ln t = γ ∈ [0,∞].

a) If γ = 0, then

lim_{T→∞} sup_{x∈ℝ} | P{ a_{r,T}( sup_{t∈[0,T]} X_{r:n}(t) − b_{r,T} ) ≤ x } − exp(−e^{−x}) | = 0,

where, with D := (r/2)^{r/2 − 1/α} c_{n,r} A_{r,α} (2π)^{−r/2},

a_{r,T} = √(2r ln T), b_{r,T} = √((2/r) ln T) + ( (1/α − r/2) ln ln T + ln D ) / √(2r ln T), T > e.   (21)

b) If γ = ∞, α ∈ (0,1], ρ(t) is convex for t ≥ 0 with lim_{t→∞} ρ(t) = 0, and further ρ(t) ln t is monotone for large t, then, with Φ(·) the df of an N(0,1) random variable,

lim_{T→∞} sup_{x∈ℝ} | P{ (ρ(T))^{−1/2}( sup_{t∈[0,T]} X_{r:n}(t) − √(1 − ρ(T)) b_{r,T} ) ≤ x } − Φ(x) | = 0.

c) If γ ∈ (0,∞), then, with W an N(0,1) random variable,

lim_{T→∞} sup_{x∈ℝ} | P{ a_{r,T}( sup_{t∈[0,T]} X_{r:n}(t) − b_{r,T} ) ≤ x } − E{ exp( −e^{−(x + γ − √(2γr) W)} ) } | = 0.

The proof of Theorem 4.1 is presented in the Appendix.

5. Proofs

Hereafter, we write ≐ for equality of the distribution functions. A vector z = (z_1, z_2, …, z_{dn}) ∈ ℝ^{dn} will also be denoted by z = (z_1, …, z_d), with z_i = (z_{i1}, z_{i2}, …, z_{in}), 1 ≤ i ≤ d, where z_{ij} = z_{(i−1)n+j}, 1 ≤ i ≤ d, 1 ≤ j ≤ n. Note that for any p = (i−1)n+j, q = (l−1)n+k, 1 ≤ i, l ≤ d, 1 ≤ j, k ≤ n,

{p < q} = {i < l, or i = l and j < k}.
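For concreteness, the norming constants a_{r,T}, b_{r,T} of (21) can be evaluated directly. In the Python sketch below, the constant A_{r,α} is not known in closed form, so a placeholder value A = 1.0 is substituted, and n = 5 is an arbitrary choice — both are assumptions for illustration only.

```python
import math

def norming(r, alpha, T, A=1.0, n=5):
    # Gumbel norming constants as in (21); A stands in for the
    # (here unknown) constant A_{r,alpha}
    c_nr = math.comb(n, r)
    D = (r / 2.0) ** (r / 2.0 - 1.0 / alpha) * c_nr * A \
        * (2.0 * math.pi) ** (-r / 2.0)
    a = math.sqrt(2.0 * r * math.log(T))
    b = math.sqrt((2.0 / r) * math.log(T)) \
        + ((1.0 / alpha - r / 2.0) * math.log(math.log(T)) + math.log(D)) / a
    return a, b

a, b = norming(r=2, alpha=1.0, T=1000.0)
assert a > 0 and b > 0
```

As expected from the exp(−ru²/2) tail in (20), both constants grow like √(ln T) as T → ∞.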
Furthermore, for any z ∈ ℝ^n we denote

d z / dz_i = dz_1 dz_2 ⋯ dz_{i−1} dz_{i+1} ⋯ dz_n, 1 ≤ i ≤ n,

and, for 1 ≤ i < j ≤ n,

d z / (dz_i dz_j) = dz_1 dz_2 ⋯ dz_{i−1} dz_{i+1} ⋯ dz_{j−1} dz_{j+1} ⋯ dz_n.

Proof of Theorem 2.1. We consider the cases r = 1, r = 2 and 2 < r ≤ n separately.

Case r = 1. Note that X ≐ −X for the standard Gaussian array X. It follows from Theorem 2.1 in [20] that

| ∆^{(1)}(u) | = | P{ ∪_{i=1}^d ∩_{j=1}^n {−Y_{ij} < −u_i} } − P{ ∪_{i=1}^d ∩_{j=1}^n {−X_{ij} < −u_i} } | ≤ (1/(2π)) Σ_{(i−1)n+j < (l−1)n+k} Q_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ),

establishing the claim for r = 1.

Next, by a standard approximation procedure we may assume that both Σ^{(1)} and Σ^{(0)} are positive definite. Let further Z = (Z_{ij})_{d×n} be a standard Gaussian array with covariance matrix Γ^h = hΣ^{(1)} + (1−h)Σ^{(0)} = (δ^h_{ij,lk})_{dn×dn}, where by our notation δ^h_{ij,lk} = E{Z_{ij}Z_{lk}}. Clearly, Γ^h is also positive definite for any h ∈ [0,1]. Denote by g_h(z) the probability density function (pdf) of Z. It is known that (see, e.g., [15], p. 82, or [20])

∂g_h(z)/∂δ^h_{ij,lk} = ∂²g_h(z)/(∂z_{ij}∂z_{lk}), 1 ≤ i, l ≤ d, 1 ≤ j, k ≤ n.   (22)

Case r = 2. Hereafter, we write λ = −u and set

Q(Z; Γ^h) = P{ Z_{(n−1)} > λ } = ∫_{∩_{i=1}^d ∪_{j≠j′} {z_{ij} > λ_i, z_{ij′} > λ_i}} g_h(z) d z.   (23)

Since X_{(2)} ≐ −X_{(n−1)}, we have

∆^{(2)}(u) = Q(Z; Γ^1) − Q(Z; Γ^0) = ∫_0^1 (∂Q(Z; Γ^h)/∂h) dh.   (24)

Note that the quantities Q(Z; Γ^h) and g_h(z) depend on h only through the entries δ^h_{ij,lk} of Γ^h. Hence we have by (22)

∂Q(Z; Γ^h)/∂h = Σ_{(i−1)n+j < (l−1)n+k} (∂Q(Z; Γ^h)/∂δ^h_{ij,lk}) (∂δ^h_{ij,lk}/∂h) = Σ_{(i−1)n+j < (l−1)n+k} (σ^{(1)}_{ij,lk} − σ^{(0)}_{ij,lk}) E_{il}(j,k),   (25)

with

E_{il}(j,k) := ∫_{∩_{s=1}^d ∪_{t≠t′} {z_{st} > λ_s, z_{st′} > λ_s}} (∂²g_h(z)/(∂z_{ij}∂z_{lk})) d z, (i−1)n+j < (l−1)n+k.
Next, in order to establish (5) we shall show that

| E_{il}(j,k) | ≤ φ(λ_i, λ_l; δ^h_{ij,lk}), (i−1)n+j < (l−1)n+k,   (26)

where φ(·,·; δ^h_{ij,lk}) is the pdf of (Z_{ij}, Z_{lk}), given by

φ(x, y; δ) = (1/(2π√(1−δ²))) exp( −(x² − 2δxy + y²)/(2(1−δ²)) ), x, y ∈ ℝ, δ = δ^h_{ij,lk}.

We consider below two sub-cases: a) i = l and b) i < l.

a) Proof of (26) for i = l. Letting

A′_i = ∩_{s=1, s≠i}^d ∪_{t≠t′} {z_{st} > λ_s, z_{st′} > λ_s}, A_i = ∪_{t≠t′} {z_{it} > λ_i, z_{it′} > λ_i},

we rewrite E_{ii}(j,k) as

E_{ii}(j,k) = ∫_{A′_i} ∫_{A_i} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z, 1 ≤ i ≤ d, 1 ≤ j < k ≤ n.   (27)

Next, we decompose the integration region A_i according to

a_1) z_{ij} > λ_i, z_{ik} > λ_i; a_2) z_{ij} > λ_i, z_{ik} ≤ λ_i; a_3) z_{ij} ≤ λ_i, z_{ik} > λ_i; a_4) z_{ij} ≤ λ_i, z_{ik} ≤ λ_i.

For case a_1) we have

∫_{A_i ∩ {z_{ij}>λ_i, z_{ik}>λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = ∫_{ℝ^{n−2}} g_h(z_{ij} = λ_i, z_{ik} = λ_i) d z_i/(dz_{ij} dz_{ik}),   (28)

where g_h(z_{ij} = λ_i, z_{ik} = λ_i) denotes the function of the remaining dn − 2 components obtained from g_h(z) by putting z_{ij} = λ_i, z_{ik} = λ_i. Similarly, for cases a_2) and a_3),

∫_{A_i ∩ {z_{ij}>λ_i, z_{ik}≤λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = ∫_{A_i ∩ {z_{ij}≤λ_i, z_{ik}>λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = −∫_{∪_{t≠j,k} {z_{it}>λ_i}} g_h(z_{ij} = λ_i, z_{ik} = λ_i) d z_i/(dz_{ij} dz_{ik}).   (29)

Finally, for case a_4),

∫_{A_i ∩ {z_{ij}≤λ_i, z_{ik}≤λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = ∫_{∪_{t≠t′, t,t′≠j,k} {z_{it}>λ_i, z_{it′}>λ_i}} g_h(z_{ij} = λ_i, z_{ik} = λ_i) d z_i/(dz_{ij} dz_{ik}).
This together with (27)–(29) yields

E_{ii}(j,k) = ∫_{A′_i} ∫_{ℝ^{n−2} − ∪_{t≠j,k}{z_{it}>λ_i}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z/(dz_{ij} dz_{ik}) − ∫_{A′_i} ∫_{∪_{t≠j,k}{z_{it}>λ_i} − ∪_{t≠t′, t,t′≠j,k}{z_{it}>λ_i, z_{it′}>λ_i}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z/(dz_{ij} dz_{ik})
= φ(λ_i, λ_i; δ^h_{ij,ik}) ( P{ (∩_{s=1, s≠i}^d {Z_{s(n−1)} > λ_s}) ∩ {Z″_i ∈ {w″_{i1} = ∞}} | Z_{ij} = λ_i, Z_{ik} = λ_i } − P{ (∩_{s=1, s≠i}^d {Z_{s(n−1)} > λ_s}) ∩ {Z″_i ∈ {w″_{i1} ≤ n, w″_{i2} = ∞}} | Z_{ij} = λ_i, Z_{ik} = λ_i } ),   (30)

where Z″_i is the (n−2)-dimensional vector obtained from Z_i by deleting Z_{ij} and Z_{ik}, and w″_{i1}, w″_{i2} are defined by (recall inf{∅} = ∞)

w″_{i1} = inf{ t : z_{it} > λ_i, t ≠ j, k }, w″_{i2} = inf{ t : z_{it} > λ_i, t ≠ j, k, t > w″_{i1} }.   (31)

It follows from (30) that (26) holds for i = l.

b) Proof of (26) for i < l. With A″_{il} = ∩_{s=1, s≠i,l}^d ∪_{t≠t′} {z_{st} > λ_s, z_{st′} > λ_s}, we have (recall A_i in (27))

E_{il}(j,k) = ∫_{A″_{il}} ∫_{A_l} ∫_{A_i} (∂²g_h(z)/(∂z_{ij}∂z_{lk})) d z.   (32)

Next, we decompose the integration region A_i according to z_{ij} > λ_i and z_{ij} ≤ λ_i. We have

∫_{A_i ∩ {z_{ij}>λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{lk})) d z_i + ∫_{A_i ∩ {z_{ij}≤λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{lk})) d z_i = −∫_{∪_{t≠j}{z_{it}>λ_i} − ∪_{t≠t′, t,t′≠j}{z_{it}>λ_i, z_{it′}>λ_i}} (∂g_h(z_{ij}=λ_i)/∂z_{lk}) d z_i/dz_{ij} = −∫_{{w′_{i1} ≤ n, w′_{i2} = ∞}} (∂g_h(z_{ij}=λ_i)/∂z_{lk}) d z_i/dz_{ij},

where w′_{i1}, w′_{i2} are defined by (with similar notation below for w′_{l1}, w′_{l2} with respect to k)

w′_{i1} = inf{ t : z_{it} > λ_i, t ≠ j }, w′_{i2} = inf{ t : z_{it} > λ_i, t ≠ j, t > w′_{i1} }.   (33)
Using similar arguments for the integral over the region A_l, we obtain by (32)

E_{il}(j,k) = ∫_{A″_{il}} ∫_{{w′_{i1}≤n, w′_{i2}=∞}} ∫_{{w′_{l1}≤n, w′_{l2}=∞}} g_h(z_{ij}=λ_i, z_{lk}=λ_l) d z/(dz_{ij} dz_{lk}) = φ(λ_i, λ_l; δ^h_{ij,lk}) P{ ∩_{s=1, s≠i,l}^d {Z_{s(n−1)} > λ_s} ∩ (Z′_i ∈ {w′_{i1} ≤ n, w′_{i2} = ∞}) ∩ (Z′_l ∈ {w′_{l1} ≤ n, w′_{l2} = ∞}) | Z_{ij} = λ_i, Z_{lk} = λ_l },   (34)

where Z′_i and Z′_l are the (n−1)-dimensional vectors obtained from Z_i and Z_l by deleting Z_{ij} and Z_{lk}, respectively. Consequently, by (30) and (34) the validity of (26) follows. Next, by combining (24)–(26), the claim in (5) for r = 2 follows by the fact that (see [17])

∫_0^1 φ(λ_i, λ_l; δ^h_{ij,lk}) dh ≤ ( (arcsin(σ^{(1)}_{ij,lk}) − arcsin(σ^{(0)}_{ij,lk})) / (2π(σ^{(1)}_{ij,lk} − σ^{(0)}_{ij,lk})) ) exp( −(λ_i² + λ_l²)/(2(1 + ρ_{ij,lk})) ).   (35)

Case 2 < r ≤ n. Letting Q̃(Z; Γ^h) = P{Z_{(n−r+1)} > λ}, we have

∆^{(r)}(u) = ∫_0^1 dh Σ_{(i−1)n+j < (l−1)n+k} (σ^{(1)}_{ij,lk} − σ^{(0)}_{ij,lk}) Ẽ_{il}(j,k),   (36)

where

Ẽ_{il}(j,k) := ∫_{∩_{s=1}^d ∪_{t_1,…,t_r distinct} {z_{st_1} > λ_s, …, z_{st_r} > λ_s}} (∂²g_h(z)/(∂z_{ij}∂z_{lk})) d z.

With the aid of (35), it suffices to show that

| Ẽ_{il}(j,k) | ≤ φ(λ_i, λ_l; δ^h_{ij,lk}), (i−1)n+j < (l−1)n+k.   (37)

Similarly as above, the two sub-cases a) i = l and b) i < l need to be considered separately.

a) Proof of (37) for i = l. Similarly to E_{ii}(j,k), we rewrite Ẽ_{ii}(j,k) as

Ẽ_{ii}(j,k) = ∫_{Ã′_i} ∫_{Ã_i} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z,   (38)

where

Ã′_i = ∩_{s=1, s≠i}^d ∪_{t_1,…,t_r distinct} {z_{st_1} > λ_s, …, z_{st_r} > λ_s}, Ã_i = ∪_{t_1,…,t_r distinct} {z_{it_1} > λ_i, …, z_{it_r} > λ_i}.
Next, we decompose the integration region Ã_i according to the four cases a_1)–a_4) introduced for A_i (see the two lines right above (28)).

For case a_1),

∫_{Ã_i ∩ {z_{ij}>λ_i, z_{ik}>λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = ∫_{{w″_{i,r−2} ≤ n}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z_i/(dz_{ij} dz_{ik}),   (39)

where w″_{i1} is given by (31) and (with the notation w″_{i,s} = w″_{is})

w″_{is} = inf{ t ≤ n : z_{it} > λ_i, t ≠ j, k, t > w″_{i,s−1} }, 2 ≤ s ≤ r, 1 ≤ i ≤ d.

Next, for cases a_2) and a_3),

∫_{Ã_i ∩ {z_{ij}>λ_i, z_{ik}≤λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = ∫_{Ã_i ∩ {z_{ij}≤λ_i, z_{ik}>λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = −∫_{{w″_{i,r−1} ≤ n}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z_i/(dz_{ij} dz_{ik}).   (40)

Finally, for case a_4),

∫_{Ã_i ∩ {z_{ij}≤λ_i, z_{ik}≤λ_i}} (∂²g_h(z)/(∂z_{ij}∂z_{ik})) d z_i = ∫_{{w″_{ir} ≤ n}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z_i/(dz_{ij} dz_{ik}).

This together with (38)–(40) yields that

Ẽ_{ii}(j,k) = ∫_{Ã′_i} ∫_{{w″_{i,r−2} ≤ n, w″_{i,r−1} = ∞}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z/(dz_{ij} dz_{ik}) − ∫_{Ã′_i} ∫_{{w″_{i,r−1} ≤ n, w″_{ir} = ∞}} g_h(z_{ij}=λ_i, z_{ik}=λ_i) d z/(dz_{ij} dz_{ik})
= φ(λ_i, λ_i; δ^h_{ij,ik}) ( P{ ∩_{s=1, s≠i}^d {Z_{s(n−r+1)} > λ_s} ∩ (Z″_i ∈ {w″_{i,r−2} ≤ n, w″_{i,r−1} = ∞}) | Z_{ij}=λ_i, Z_{ik}=λ_i } − P{ ∩_{s=1, s≠i}^d {Z_{s(n−r+1)} > λ_s} ∩ (Z″_i ∈ {w″_{i,r−1} ≤ n, w″_{ir} = ∞}) | Z_{ij}=λ_i, Z_{ik}=λ_i } ),   (41)

establishing the validity of (37) for i = l.

b) Proof of (37) for i < l. With Ã″_{il} = ∩_{s=1, s≠i,l}^d ∪_{t_1,…,t_r distinct} {z_{st_1} > λ_s, …, z_{st_r} > λ_s} and Ã_i as in (38),

Ẽ_{il}(j,k) = ∫_{Ã″_{il}} ∫_{Ã_i} ∫_{Ã_l} (∂²g_h(z)/(∂z_{ij}∂z_{lk})) d z.   (42)
By decomposing the integration regions Ã_i and Ã_l according to z_{ij} >, ≤ λ_i and z_{lk} >, ≤ λ_l, respectively, we obtain by similar arguments as for E_{il}(j,k) that

Ẽ_{il}(j,k) = φ(λ_i, λ_l; δ^h_{ij,lk}) P{ ∩_{s=1, s≠i,l}^d {Z_{s(n−r+1)} > λ_s} ∩ (Z′_i ∈ {w′_{i,r−1} ≤ n, w′_{ir} = ∞}) ∩ (Z′_l ∈ {w′_{l,r−1} ≤ n, w′_{lr} = ∞}) | Z_{ij} = λ_i, Z_{lk} = λ_l },   (43)

where w′_{i1} is introduced in (33) and (with similar notation for w′_{ls} with respect to k)

w′_{is} = inf{ t ≤ n : z_{it} > λ_i, t ≠ j, t > w′_{i,s−1} }, 2 ≤ s ≤ r, 1 ≤ i ≤ d.

It follows then from (43) that (37) holds. Consequently, the claim in (5) for 2 < r ≤ n follows. Finally, in view of (6), the indices in the sums in (25) and (36) reduce to 1 ≤ i < l ≤ d, 1 ≤ j, k ≤ n. The claims in (7) and (8) then follow immediately from (34), (35) and (43). This completes the proof. □

Proof of Theorem 2.4. The case r = 1 follows from condition (10) and Theorem 2.1. We shall present next the proofs for a) r = 2 and b) 2 < r ≤ n.

a) Proof of (11) for r = 2. It follows from (10), (24) and (25) that only the diagonal pairs j = k contribute, and thus

∆^{(2)}(u) = n Σ_{1≤i<l≤d} ∫_0^1 (σ^{(1)}_{il} − σ^{(0)}_{il}) E_{il}(j,j) dh.

Bounding the conditional probability in (34) and integrating as in (35) yields the claim in (11) for r = 2.

b) Proof of (11) for 2 < r ≤ n. Similar arguments as for E_{il} (considering the number of indices s, t < r with w′_{it} = w′_{ls}) yield that

Ẽ_{il}(j,k) / φ(−u_i, −u_l; δ^h_{il}) ≤ P{ Z′_i ∈ {w′_{i,r−1} ≤ n, w′_{ir} = ∞}, Z′_l ∈ {w′_{l,r−1} ≤ n, w′_{lr} = ∞} } ≤ (c_{n−1,r−1})² ( ((1 + |δ^h_{il}|)/u_*²) φ(u_*, u_*; |δ^h_{il}|) )^{n−r}.

Consequently, the claim in (11) for 2 < r ≤ n follows. This completes the proof. □

Proof of Proposition 2.5. We first consider the case r = 2. Hereafter, we adopt the same notation as in the proof of Theorem 2.1. Further, define

f(h) = exp( (h/(√2 π)) Σ_{1≤i<l≤d} Σ_{1≤j,k≤n} Q^+_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ) ), h ∈ [0,1].

It suffices to show that Q(Z; Γ^h)/f(h) is non-increasing in h, i.e.,

(∂Q(Z; Γ^h)/∂h) / Q(Z; Γ^h) ≤ (∂f(h)/∂h) / f(h), h ∈ [0,1].   (47)
Moreover, since

(∂f(h)/∂h) / f(h) = (1/(√2 π)) Σ_{1≤i<l≤d} Σ_{1≤j,k≤n} Q^+_{ij,lk} exp( −(u_i² + u_l²)/(2(1 + ρ_{ij,lk})) ),

the inequality (47) follows by combining (25) with the bounds on E_{il}(j,k) established in the proof of Theorem 2.1; the case 2 < r ≤ n is analogous, with Q̃ and Ẽ_{il} in place of Q and E_{il}. □

Proof of Theorem 3.3. We first show that, for any x, y > 0, θ ∈ (0,1), m ∈ ℕ and h ∈ (0,∞),

P{ sup_{0≤t≤mh} Y_{r:n}(t) ≤ x + y } ≤ Φ^{−m}( (−y + x(√(1 + a_{h,θ}²) − 1)) / a_{h,θ} ) P{ sup_{0≤t≤θmh} Y_{r:n}(t) ≤ x },   (53)

where Y_{r:n} denotes the r-th order statistics process generated by the dual processes. Let therefore W_k, 1 ≤ k ≤ m, be independent N(0,1) random variables which are further independent of the dual processes X*_i, Z*, 1 ≤ i ≤ n, and write, for simplicity, a = a_{h,θ}. We have

p_{h,θ}(x, Y) := P{ max_{1≤k≤m} sup_{(k−1)h≤t≤kh} Y_{r:n}(t, aW_k)/√(1 + a²) ≤ x }
≥ P{ max_{1≤k≤m} sup_{(k−1)h≤t≤kh} Y_{r:n}(t) ≤ x + y } P{ max_{1≤k≤m} aW_k ≤ −y + x(√(1 + a²) − 1) }
= P{ max_{1≤k≤m} sup_{(k−1)h≤t≤kh} Y_{r:n}(t) ≤ x + y } Φ^m( (−y + x(√(1 + a²) − 1)) / a ),

where {Y_{r:n}(t, aW_k), t ∈ ((k−1)h, kh]} is the r-th order statistics process generated by {Y_i(t) + aW_k, t ∈ ((k−1)h, kh]}, 1 ≤ i ≤ n. Furthermore, it follows by (16) and the monotonicity of ρ(·), ρ̃(·) that (set I_k = ((k−1)h, kh] and τ = |t − s|)

E{(Y(t) + aW_{[t/h]+1})(Y(s) + aW_{[s/h]+1})}/(1 + a²) − E{Y(θt)Y(θs)}
= −((1 − ρ(τ))/(1 + a²)) ( a² − (ρ(θτ) − ρ(τ))/(1 − ρ(τ)) ) + c² ( ρ̃(τ)/(1 + a²) − ρ̃(θτ) ) ≤ 0, for t, s ∈ I_k;
= ρ(τ)/(1 + a²) − ρ(θτ) + c² ( ρ̃(τ)/(1 + a²) − ρ̃(θτ) ) ≤ 0, for t ∈ I_k, s ∈ I_l, k ≠ l,

which implies by Proposition 3.1 that

p_{h,θ}(x, Y) ≤ P{ max_{1≤k≤m} sup_{(k−1)h≤t≤kh} Y_{r:n}(θt) ≤ x },

establishing (53); thus the continuity of c_{r:n,α}(x) follows. In order to complete the proof, it suffices to show that (set below Ỹ_{r:n}(t) := X_{r:n}(t) + cZ(t))

−(2/α) c_{r:n,α} ≤ liminf_{x↓0} ln P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ x } / ln(1/x) ≤ limsup_{x↓0} ln P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ x } / ln(1/x) ≤ −(2/α) c_{r:n,α}.   (54)
By the self-similarity of the process Ỹ_{r:n}, for any x ∈ (0,1) we have

P{ sup_{0≤t≤(2/α)ln(1/x)} Ỹ*_{r:n}(t) ≤ 0 } = P{ sup_{1≤t≤x^{−2/α}} Ỹ_{r:n}(t) ≤ 0 } = P{ sup_{x^{2/α}≤t≤1} Ỹ_{r:n}(t) ≤ 0 } ≤ P{ sup_{x^{2/α}≤t≤1} Ỹ_{r:n}(t) ≤ x },

where Ỹ*_{r:n} denotes the dual stationary process, and moreover, using the nonnegativity of the underlying covariances,

P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ x } ≥ P{ sup_{x^{2/α}≤t≤1} Ỹ_{r:n}(t) ≤ 0 } P{ sup_{0≤t≤x^{2/α}} Ỹ_{r:n}(t) ≤ x },

with the second factor equal to P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ 1 } > 0 by self-similarity. Consequently, the lower bound in (54) follows, since c_{r:n,α} = c_{r:n,α}(0). Next, we establish the upper bound in (54). It follows that, for any y > 0 and h > 0,

(1/(αh/2)) ln P{ sup_{0≤t≤h} Ỹ*_{r:n}(t) ≤ y } = (1/(αh/2)) ln P{ sup_{e^{−h}≤t≤1} t^{−α/2} Ỹ_{r:n}(t) ≤ y } ≥ (1/(αh/2)) ln P{ sup_{e^{−h}≤t≤1} Ỹ_{r:n}(t) ≤ y e^{−αh/2} } ≥ (1/(αh/2)) ln P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ y e^{−αh/2} } = ( ln(1/(y e^{−αh/2})) / (αh/2) ) · ( ln P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ y e^{−αh/2} } / ln(1/(y e^{−αh/2})) ).

Letting h → ∞ in the above we obtain that

limsup_{x↓0} ln P{ sup_{0≤t≤1} Ỹ_{r:n}(t) ≤ x } / ln(1/x) ≤ −(2/α) c_{r:n,α}(y) → −(2/α) c_{r:n,α}, y ↓ 0,

where the last step follows by the right-continuity of c_{r:n,α}(x) at 0. Consequently, (54) holds and thus the proof is complete. □

6. Appendix

We present next the proof of (8) and then two lemmas which are used in the proof of Theorem 4.1. We conclude this section with the proofs of Proposition 3.1 and Theorem 4.1.

Proof of (8). The claim for r = 1 follows from Theorem 2.1 in [20]. For 2 ≤ r ≤ n, we see from the proof of Theorem 2.1 that it suffices to prove that E_{ii}(j,k) ≤ 0 and Ẽ_{ii}(j,k) ≤ 0, 1 ≤ i ≤ d, 1 ≤ j < k ≤ n. From Remark 2.5(3) in [16], we see that all orthant tail dependence parameters of multivariate normal distributions are zero. Therefore we have, for instance for j = 1 and 1 ≤ i ≤ d,

P{ Z″_i ∈ {w″_{i1} = ∞} } − P{ Z″_i ∈ {w″_{i1} = 1, w″_{i2} = ∞} } = ( 1 − 2P{ Z_{i1} > λ_i | Z_{it} ≤ λ_i, t ≠ 1, j, k } ) P{ Z_{it} ≤ λ_i, t ≠ 1, j, k } ≤ 0

for λ_i sufficiently small, since the conditional probability above tends to 1 as λ_i → −∞. It follows then by (30) that E_{ii}(j,k) ≤ 0 for all large u_i (recall u_i equals −λ_i). Thus we complete the proof for r = 2. Similar arguments show that Ẽ_{ii}(j,k) ≤ 0 (recall (41)).
Consequently, the claim for 2 < r ≤ n follows. □

For notational simplicity, we set q = q(u) = u^{−2/α}, u > 0, and write [x] for the integer part of x.

Lemma 6.1. Under the assumptions of Theorem 4.1 with γ = 0, for any a, T > 0,

lim_{u→∞} Σ_{j=[T/(aq)]}^{[ε/(q P{X_{r:n}(0)>u})]} P{ X_{r:n}(aqj) > u | X_{r:n}(0) > u } → 0, ε ↓ 0.   (55)

Proof of Lemma 6.1. Set below

p_u(t) := P{ X_{r:n}(t) > u | X_{r:n}(0) > u } ≤ K P{ X_{r:r}(t) > u | X_{r:r}(0) > u },

where X_{r:r} denotes the minimum of r independent copies of X and K > 0 is some constant which might change below from line to line. Since further X(t) − ρ(t)X(0) is independent of X(0), we have

p_u(t) ≤ 2^{r+1} ( P{ X(t) > u | X(0) > u } )^r ≤ 2^{r+1} ( P{ X(t) − ρ(t)X(0) > u(1 − ρ(t)) } )^r = 2^{r+1} ( 1 − Φ( u √((1 − ρ(t))/(1 + ρ(t))) ) )^r ≤ K u^{−r} ( (1 − |ρ(t)|)/(1 + |ρ(t)|) )^{−r/2} exp( −(ru²/2) (1 − |ρ(t)|)/(1 + |ρ(t)|) ),   (56)

where the last inequality follows by Mill's ratio inequality 1 − Φ(x) ≤ exp(−x²/2)/(√(2π) x), x > 0. Choose g = g(u) such that lim_{u→∞} g(u) = ∞ and |ρ(g(u))| = u^{−2}. Since γ = 0, it follows from u^{−2} ln g(u) = o(1) that g(u) ≤ exp(ε′u²) for some 0 < ε′ < (r/2)(1 − |ρ(T)|)/(1 + |ρ(T)|) (recall that sup_{t≥T} |ρ(t)| < 1; see [15], p. 86) and sufficiently large u. Next, we split the sum in (55) at aqj = g(u). The first part satisfies

Σ_{j=[T/(aq)]}^{[g(u)/(aq)]} P{ X_{r:n}(aqj) > u | X_{r:n}(0) > u } ≤ K (g(u)/(aq)) u^{−r} ( (1 − |ρ(T)|)/(1 + |ρ(T)|) )^{−r/2} exp( −(ru²/2) (1 − |ρ(T)|)/(1 + |ρ(T)|) ) ≤ K u^{2/α − r} exp( ε′u² − (ru²/2) (1 − |ρ(T)|)/(1 + |ρ(T)|) ) → 0, u → ∞.
For the remaining part, by Lemma 1 in [7],

Σ_{j=[g(u)/(aq)]}^{[ε/(q P{X_{r:n}(0)>u})]} P{ X_{r:n}(aqj) > u | X_{r:n}(0) > u } ≤ K (ε/P{X_{r:n}(0) > u}) u^{−r} ( (1 − u^{−2})/(1 + u^{−2}) )^{−r/2} exp( −(ru²/2) (1 − u^{−2})/(1 + u^{−2}) ) ≤ K ε exp( −(ru²/2)( (1 − u^{−2})/(1 + u^{−2}) − 1 ) ) ≤ K ε, u → ∞.

Therefore, the claim follows by letting ε ↓ 0. □

Next, with the notation as in (20), we set

T = T(u) = ( (2π)^{r/2} u^{r − 2/α} / (c_{n,r} A_{r,α}) ) exp(ru²/2), u > 0.   (57)

Lemma 6.2. Let T = T(u) be defined as in (57) and let a > 0, 0 < λ < 1 be any given constants. Under the assumptions of Lemma 6.1, for any 0 ≤ s_1 < ⋯ < s_p < t_1 < ⋯ < t_{p′} in { aqj : j ∈ ℤ, 0 ≤ aqj ≤ T } with t_1 − s_p ≥ λT,

| P{ ∩_{i=1}^p {X_{r:n}(s_i) ≤ u}, ∩_{j=1}^{p′} {X_{r:n}(t_j) ≤ u} } − P{ ∩_{i=1}^p {X_{r:n}(s_i) ≤ u} } P{ ∩_{j=1}^{p′} {X_{r:n}(t_j) ≤ u} } | → 0, u → ∞.   (58)

Proof of Lemma 6.2. Set

X_{ij} = X_j(s_i) I{i ≤ p} + X_j(t_{i−p}) I{p < i ≤ p + p′}, 1 ≤ i ≤ p + p′, 1 ≤ j ≤ n,

and let {Y_{ij}, 1 ≤ i ≤ p, 1 ≤ j ≤ n} ≐ {X_{ij}, 1 ≤ i ≤ p, 1 ≤ j ≤ n} be independent of {Y_{ij}, p+1 ≤ i ≤ p+p′, 1 ≤ j ≤ n} ≐ {X_{ij}, p+1 ≤ i ≤ p+p′, 1 ≤ j ≤ n}. Applying Theorem 2.4 with X_{i(n−r+1)} = X_{r:n}(s_i) I{i ≤ p} + X_{r:n}(t_{i−p}) I{p < i ≤ p+p′} and Y_{i(n−r+1)} = Y_{r:n}(s_i) I{i ≤ p} + Y_{r:n}(t_{i−p}) I{p < i ≤ p+p′}, it follows by similar arguments as for Lemma 8.2.4 in [15] that the left-hand side of (58) is bounded from above by

K u^{−2(r−1)} Σ_{λT ≤ t_j − s_i ≤ T} exp( −ru²/(1 + |ρ(t_j − s_i)|) ) ∫_0^{|ρ(t_j − s_i)|} (1 + |h|)^{r−1} (1 − h²)^{−r/2} dh ≤ K u^{−2(r−1)} (T/q) Σ_{λT ≤ aqj ≤ T} |ρ(aqj)| exp( −ru²/(1 + |ρ(aqj)|) )

for u large, where K is some constant.
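The Mill's ratio inequality 1 − Φ(x) ≤ exp(−x²/2)/(√(2π) x), x > 0, invoked in the proof of Lemma 6.1, can be checked numerically with the complementary error function (Python; the test points are arbitrary):

```python
import math

def mills_bound(x):
    # upper bound: exp(-x^2/2) / (sqrt(2 pi) x), valid for x > 0
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

def survival(x):
    # 1 - Phi(x) for the standard normal df Phi
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in (0.5, 1.0, 2.0, 4.0):
    assert survival(x) <= mills_bound(x)
```

The bound is asymptotically sharp: the ratio mills_bound(x)/survival(x) tends to 1 as x → ∞, which is exactly what makes the exponential factors in (56) accurate for large u.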
The rest of the proof consists of the same arguments as that of Lemma 12.3.1 in [15], using further the following asymptotic relation (recall (57)):
$$u^2 = \frac{2}{r}\ln T + \left(\frac{2}{r\alpha}-1\right)\ln\ln T + 2\ln\left(\left(\frac{2}{r}\right)^{1/(r\alpha)-1/2}\frac{(c_{n,r}A_{r,\alpha})^{1/r}}{\sqrt{2\pi}}\right)(1+o(1)), \quad u\to\infty.$$
Hence the proof is complete. $\Box$

Below, $W$ is an $N(0,1)$ random variable independent of any other random element involved.

Proof of Proposition 3.1. Consider first a finite set $\mathcal{T}_d = \{t_1, \ldots, t_d\} \subset [0,T]$ containing $d$ elements. Further, we define
$$f(t_i) = \sqrt{\sigma_X(t_i,t_i) + c^2\sigma_Z(t_i,t_i)} = \sqrt{\sigma_Y(t_i,t_i) + c^2\sigma_Z(t_i,t_i)}$$
and
$$X^*_{ij} := \frac{X_j(t_i) + cZ(t_i)}{f(t_i)}, \quad Y^*_{ij} := \frac{Y_j(t_i) + cZ(t_i)}{f(t_i)}, \quad 1\le i\le d,\ 1\le j\le n.$$
Then $X^*_{ij}$ and $Y^*_{ij}$ are $N(0,1)$ distributed, and
$$P\left\{\sup_{t\in\mathcal{T}_d}\big(X_{n-r+1:n}(t) + cZ(t)\big) > u\right\} = P\Big\{\bigcup_{i=1}^{d}\{X^*_{i(r)} > u_i\}\Big\}, \quad u_i := \frac{u}{f(t_i)}, \quad 1\le i\le d.$$
Noting that $\{Z(t), t\ge 0\}$ is independent of $\{X(t), t\ge 0\}$ and $\{Y(t), t\ge 0\}$, we have
$$E\{X^*_{ij}X^*_{ik}\} = \frac{E\{X_j(t_i)X_k(t_i) + c^2 Z^2(t_i)\}}{(f(t_i))^2} = \frac{\sigma_X(t_i,t_i)\mathbb{I}\{j=k\} + c^2\sigma_Z(t_i,t_i)}{(f(t_i))^2} = E\{Y^*_{ij}Y^*_{ik}\}, \quad 1\le i\le d,\ 1\le j,k\le n,$$
and
$$E\{X^*_{ij}X^*_{lk}\} = \frac{\sigma_X(t_i,t_l)\mathbb{I}\{j=k\} + c^2\sigma_Z(t_i,t_l)}{f(t_i)f(t_l)} \le \frac{\sigma_Y(t_i,t_l)\mathbb{I}\{j=k\} + c^2\sigma_Z(t_i,t_l)}{f(t_i)f(t_l)} = E\{Y^*_{ij}Y^*_{lk}\}, \quad 1\le i<l\le d,\ 1\le j,k\le n.$$
Therefore, by (9),
$$P\left\{\sup_{t\in\mathcal{T}_d}\big(Y_{r:n}(t) + cZ(t)\big) > u\right\} \le P\left\{\sup_{t\in\mathcal{T}_d}\big(X_{r:n}(t) + cZ(t)\big) > u\right\}.$$
The passage from $\mathcal{T}_d$ to $[0,T]$ is standard and therefore we omit the details.
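The asymptotic relation linking $u$ and $T$ through (57), used in the proof of Lemma 6.2, can be checked numerically by inverting $T(u)$. In the sketch below the values of $r$, $\alpha$ and of the product $c_{n,r}A_{r,\alpha}$ are placeholders chosen purely for illustration (with $r = 2$ and $c_{n,r}A_{r,\alpha} = (2\pi)^{r/2}$ the constant term of the expansion happens to vanish); they are not the constants of the theorem:

```python
import math

# Placeholder parameters (hypothetical, for illustration only); with r = 2 and
# c_{n,r} * A_{r,alpha} = (2*pi)^{r/2} the prefactor of (57) equals 1 and the
# constant term of the expansion of u^2 vanishes.
r, alpha = 2, 2.0

def T_of_u(u):
    # (57) with prefactor 1: T(u) = u^{r - 2/alpha} * exp(r * u^2 / 2)
    return u ** (r - 2 / alpha) * math.exp(r * u * u / 2)

def u_of_T(T):
    # invert the strictly increasing map T_of_u by bisection on [1, 10]
    lo, hi = 1.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if T_of_u(mid) < T:
            lo = mid
        else:
            hi = mid
    return lo

T = 1e12
u = u_of_T(T)
# leading terms: u^2 = (2/r) ln T + (2/(r*alpha) - 1) ln ln T + o(1)
approx = (2 / r) * math.log(T) + (2 / (r * alpha) - 1) * math.log(math.log(T))
print(u * u - approx)  # small already for moderate T
```

Already at $T = 10^{12}$ the two leading terms capture $u^2$ up to a remainder of order a few percent of $\ln\ln T$.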
We thus complete the proof of (13). Next, for (14), we define instead
$$f(t_i) = \sqrt{\sigma_Z(t_i,t_i) + c^2\sigma_X(t_i,t_i)} = \sqrt{\sigma_Z(t_i,t_i) + c^2\sigma_Y(t_i,t_i)}$$
and
$$X^*_{ij} := \frac{Z_j(t_i) + cX(t_i)}{f(t_i)}, \quad Y^*_{ij} := \frac{Z_j(t_i) + cY(t_i)}{f(t_i)}, \quad 1\le i\le d,\ 1\le j\le n.$$
The rest of the proof is the same as that for (13). $\Box$

Proof of Theorem 4.1. a) First, we show that for the $r$-th order statistics process $\{X_{r:n}(t), t\ge 0\}$ we have, with $T = T(u)$ defined as in (57),
$$\lim_{u\to\infty} P\left\{\sup_{t\in[0,T(u)]} X_{r:n}(t) \le u + \frac{x}{ru}\right\} = \exp\big(-e^{-x}\big), \quad x\in\mathbb{R}.$$
Expressing $u$ in terms of $T$ using (57), we obtain the required claim for any $x\in\mathbb{R}$, with $a_{r,T}$, $b_{r,T}$ given as in (21); the uniform convergence in $x$ follows since all functions involved (with respect to $x$) are continuous, bounded and increasing.

b) The proof follows the main arguments of Theorem 3.1 in [21] by showing that, for any $\varepsilon > 0$ and $x\in\mathbb{R}$,
$$\Phi(x-\varepsilon) \le \liminf_{T\to\infty} P\Big\{M_X(T) \le c_T b_{r,T} + \sqrt{\rho(T)}\,x\Big\} \le \limsup_{T\to\infty} P\Big\{M_X(T) \le c_T b_{r,T} + \sqrt{\rho(T)}\,x\Big\} \le \Phi(x+\varepsilon), \tag{59}$$
where $M_X(T) := \sup_{t\in[0,T]} X_{r:n}(t)$ and $c_T := \sqrt{1-\rho(T)}$. We start with the proof of the first inequality. Let $\rho^*(t)$, $t\ge 0$, be a correlation function such that $\rho^*(t) = 1 - |t|^{\alpha} + o(|t|^{\alpha})$ as $t\to 0$. Then there exists some $t_0 > 0$ such that, for all $T$ large,
$$\rho^*(t)\,c_T^2 + \rho(T) \le \rho(t), \quad 0\le t\le t_0. \tag{60}$$
Denote by $\{Y_k(t), t\ge 0\}$, $k\in\mathbb{N}$, independent centered stationary Gaussian processes with a.s. continuous sample paths and common covariance function $\rho^*(\cdot)$, and define $\{Y(t), t\ge 0\}$ by
$$Y(t) = \sum_{k=1}^{\infty} Y_k(t)\,\mathbb{I}\{t\in[(k-1)t_0,\,kt_0)\}, \quad t\ge 0. \tag{61}$$
It follows from (60) that, for $T$ sufficiently large,
$$E\{X(s)X(t)\} \ge E\Big\{\big(c_T Y(s) + \sqrt{\rho(T)}\,W\big)\big(c_T Y(t) + \sqrt{\rho(T)}\,W\big)\Big\}, \quad s,t\ge 0.$$
Therefore, by Proposition 3.1,
$$P\Big\{M_X(T) \le c_T b_{r,T} + \sqrt{\rho(T)}\,x\Big\} \ge P\Big\{c_T M_Y(T) + \sqrt{\rho(T)}\,W \le c_T b_{r,T} + \sqrt{\rho(T)}\,x\Big\}$$
$$\ge \Phi(x-\varepsilon)\left(P\left\{\sup_{t\in[0,t_0]} Y_{r:n}(t) \le b_{r,T} + \varepsilon\sqrt{\rho(T)}\right\}\right)^{[T/t_0]+1}.$$
By Theorem 1.1 in [7] (see also (20)) we have
$$\lim_{T\to\infty}\frac{P\Big\{\sup_{t\in[0,t_0]} Y_{r:n}(t) > b_{r,T} + \varepsilon\sqrt{\rho(T)}\Big\}}{t_0\, c_{n,r}\, b_{r,T}^{2/\alpha}\Big(1-\Phi\big(b_{r,T}+\varepsilon\sqrt{\rho(T)}\big)\Big)^{r}} = 2^{1/\alpha} A_{r,\alpha}.$$
Consequently, since $\gamma = \infty$, we have
$$\lim_{T\to\infty}\big([T/t_0]+1\big)\ln P\left\{\sup_{t\in[0,t_0]} Y_{r:n}(t) \le b_{r,T}+\varepsilon\sqrt{\rho(T)}\right\} = -\lim_{T\to\infty}\frac{T}{t_0}\,P\left\{\sup_{t\in[0,t_0]} Y_{r:n}(t) > b_{r,T}+\varepsilon\sqrt{\rho(T)}\right\}$$
$$= -\lim_{T\to\infty} T\, c_{n,r}\, 2^{1/\alpha} A_{r,\alpha}\, b_{r,T}^{2/\alpha}\Big(1-\Phi\big(b_{r,T}+\varepsilon\sqrt{\rho(T)}\big)\Big)^{r} = 0,$$
establishing the first inequality in (59).

Next, we consider the last inequality in (59). Note that, by the convexity of $\rho(\cdot)$, there is a separable stationary Gaussian process $\{Y(t), t\in[0,T]\}$ with correlation function given by
$$\widetilde{\rho}(t) = \frac{\rho(t)-\rho(T)}{1-\rho(T)}, \quad t\in[0,T]. \tag{62}$$
We have $M_X(T) \stackrel{d}{=} c_T M_Y(T) + \sqrt{\rho(T)}\,W$. Therefore,
$$P\Big\{M_X(T) \le c_T b_{r,T} + \sqrt{\rho(T)}\,x\Big\} = \int_{-\infty}^{\infty} P\left\{M_Y(T) \le b_{r,T} + \frac{\sqrt{\rho(T)}}{c_T}(x-u)\right\}\varphi(u)\,du \le \Phi(x+\varepsilon) + P\left\{M_Y(T) \le b_{r,T} - \varepsilon\,\frac{\sqrt{\rho(T)}}{c_T}\right\}, \tag{63}$$
which means that we only need to prove
$$\lim_{T\to\infty} P\Big\{M_Y(T) \le b_{r,T} - \varepsilon\sqrt{\rho(T)}\Big\} = 0.$$
To this end, using again the convexity of $\widetilde{\rho}(\cdot)$, we construct a separable stationary Gaussian process $\{Z(t), t\in[0,T]\}$ with the correlation function (recall $\widetilde{\rho}(\cdot)$ in (62))
$$\sigma(t) = \max\Big(\widetilde{\rho}(t),\ \widetilde{\rho}\big(T\exp\big(-\sqrt{\ln T}\big)\big)\Big), \quad t\in[0,T]. \tag{64}$$
Again by Proposition 3.1, we have
$$P\Big\{M_Y(T) \le b_{r,T} - \varepsilon\sqrt{\rho(T)}\Big\} \le P\Big\{M_Z(T) \le b_{r,T} - \varepsilon\sqrt{\rho(T)}\Big\}. \tag{65}$$
Now we construct a grid as follows. Let $I_1, \ldots, I_{[T]}$ be $[T]$ consecutive unit intervals, with an interval of length $\delta$ removed from the right-hand side of each one, where $\delta\in(0,1)$ is given, and set
$$\mathcal{G}_T = \big\{k(2\ln T)^{-1/\alpha},\ k\in\mathbb{N}\big\} \cap \Big(\bigcup_{i=1}^{[T]} I_i\Big).$$
It follows from Theorem 10 in [1] and Theorem 1.1 in [7] that $\sup_{t\in[0,T]} Z_{r:n}(t)$ and $\sup_{t\in\mathcal{G}_T} Z_{r:n}(t)$ have the same asymptotic distribution, and thus we only need to show that
$$\lim_{T\to\infty} P\left\{\sup_{t\in\mathcal{G}_T} Z_{r:n}(t) \le b_{r,T} - \varepsilon\sqrt{\rho(T)}\right\} = 0.$$
Let $\{Z'_{r:n}(t), t\ge 0\}$ be generated by $\{Z'(t), t\in[0,T]\}$, which is again a separable stationary process, with the correlation function (recall $\sigma(\cdot)$ in (64))
$$\sigma'(t) = \frac{\sigma(t)-\sigma(T)}{1-\sigma(T)}, \quad t\in[0,T].$$
Analogously to the derivation of (63) we obtain
$$P\left\{\sup_{t\in\mathcal{G}_T} Z_{r:n}(t) \le b_{r,T} - \varepsilon\sqrt{\rho(T)}\right\} = P\left\{\sqrt{1-\sigma(T)}\,\max_{t\in\mathcal{G}_T} Z'_{r:n}(t) + \sqrt{\sigma(T)}\,W \le b_{r,T} - \varepsilon\sqrt{\rho(T)}\right\}$$
$$\le \Phi\left(-\frac{\varepsilon}{2}\left(\frac{\rho(T)}{\sigma(T)}\right)^{1/2}\right) + P\left\{\max_{t\in\mathcal{G}_T} Z'_{r:n}(t) \le b_{r,T} + \frac{b_{r,T}\,\sigma(T)}{\sqrt{1-\sigma(T)}\big(1+\sqrt{1-\sigma(T)}\big)} - \frac{\varepsilon\sqrt{\rho(T)}}{2\sqrt{1-\sigma(T)}}\right\},$$
which tends to 0 as $T\to\infty$; the proof of this is the same as that of Theorem 3.1 in [21], using instead Theorem 1.1 in [7] and our Theorem 2.4. Consequently, the last inequality in (59) follows from (63) and (65). This completes the proof for $\gamma = \infty$.

c) Given $\delta\in(0,1)$, define $I_1, \ldots, I_{[T]}$ as in b). For $\{Y_k(t), t\ge 0\}$, $k\in\mathbb{N}$, independent copies of $X$, define
$$Y(t) := \sum_{k=1}^{\infty} Y_k(t)\,\mathbb{I}\{t\in[k-1,\,k)\}, \quad t\ge 0, \qquad X^*(t) := \sqrt{1-\rho^*(T)}\,Y(t) + \sqrt{\rho^*(T)}\,W, \quad t\in\bigcup_{k=1}^{[T]} I_k,$$
where $\rho^*(T) = \gamma/\ln T$. The rest of the proof is similar to that of Theorem 2.1 in [28], using our Theorem 2.4 instead of Berman's inequality; we omit the details.

Combining the arguments for the three cases above, we complete the proof of Theorem 4.1. $\Box$

Acknowledgments

Support from SNSF grant 200021-140633/1 and the project RARE-318984 (an FP7 Marie Curie IRSES Fellowship) is gratefully acknowledged. The first author also acknowledges partial support by NCN Grant No. 2013/09/B/ST1/01778 (2014-2016).

References

[1] J.M.P. Albin. On extremal theory for stationary processes. Ann. Probab., 18(1):92–128, 1990.
[2] S.M. Berman.
Limit theorems for the maximum term in stationary sequences. Ann. Math. Statist., 35:502–516, 1964.
[3] S.M. Berman. Sojourns and extremes of stationary processes. Ann. Probab., 10(1):1–46, 1982.
[4] S.M. Berman. Sojourns and Extremes of Stochastic Processes. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1992.
[5] P.J. Bickel and M. Rosenblatt. On some global measures of the deviations of density function estimates. Ann. Statist., 1:1071–1095, 1973.
[6] H. Cramér and M.R. Leadbetter. Stationary and Related Stochastic Processes. Sample Function Properties and Their Applications. John Wiley & Sons, New York, 1967.
[7] K. Dębicki, E. Hashorva, L. Ji, and C. Ling. Extremes of order statistics of stationary processes. TEST, DOI 10.1007/s11749-014-0404-4, 2014.
[8] K. Dębicki, E. Hashorva, L. Ji, and K. Tabiś. Extremes of vector-valued Gaussian processes: exact asymptotics. Submitted, 2014.
[9] K. Dębicki, E. Hashorva, L. Ji, and K. Tabiś. On the probability of conjunctions of stationary Gaussian processes. Statist. Probab. Lett., 88:141–148, 2014.
[10] A.J. Harper. Bounds on the suprema of Gaussian processes, and omega results for the sum of a random multiplicative function. Ann. Appl. Probab., 23(2):584–616, 2013.
[11] A.J. Harper. A note on the maximum of the Riemann zeta function, and log-correlated random variables. http://arxiv.org/abs/1304.0677, 2013.
[12] A.J. Harper. Pickands' constant $H_\alpha$ does not equal $1/\Gamma(1/\alpha)$, for small $\alpha$. http://arxiv.org/abs/1404.5505, 2014.
[13] E. Hashorva and C. Weng. Berman's inequality under random scaling. Statistics and Its Interface, 7:339–349, 2014.
[14] M.F. Kratz. Level crossings and other level functionals of stationary Gaussian processes. Probab. Surv., 3:230–288, 2006.
[15] M.R. Leadbetter, G. Lindgren, and H. Rootzén. Extremes and Related Properties of Random Sequences and Processes, volume 11. Springer Verlag, 1983.
[16] H. Li. Orthant tail dependence of multivariate extreme value distributions. J. Multivariate Anal., 100(1):243–256, 2009.
[17] W.V. Li and Q.-M. Shao. A normal comparison inequality and its applications. Probab. Theory Related Fields, 122(4):494–508, 2002.
[18] W.V. Li and Q.-M. Shao. Lower tail probabilities for Gaussian processes. Ann. Probab., 32(1A):216–242, 2004.
[19] W.V. Li and Q.-M. Shao. Recent developments on lower tail probabilities for Gaussian processes. COSMOS, 1(1):95–106, 2005.
[20] D. Lu and X. Wang. Some new normal comparison inequalities related to Gordon's inequality. Statist. Probab. Lett., 88:133–140, 2014.
[21] Y. Mittal and D. Ylvisaker. Limit distributions for the maxima of stationary Gaussian processes. Stochastic Processes Appl., 3:1–18, 1975.
[22] D. Monhor. Inequalities for correlated bivariate normal distribution function. Probab. Engrg. Inform. Sci., 27(1):115–123, 2013.
[23] J. Pickands, III. Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc., 145:51–73, 1969.
[24] V.I. Piterbarg. On the paper by J. Pickands "Upcrossing probabilities for stationary Gaussian processes". Vestnik Moskov. Univ. Ser. I Mat. Meh., 27(5):25–30, 1972.
[25] V.I. Piterbarg. Asymptotic Methods in the Theory of Gaussian Processes and Fields, volume 148 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1996.
[26] R.L. Plackett. A reduction formula for normal multivariate integrals. Biometrika, 41:351–360, 1954.
[27] D. Slepian. The one-sided barrier problem for Gaussian noise. Bell System Tech. J., 41:463–501, 1962.
[28] Z. Tan, E. Hashorva, and Z. Peng. Asymptotics of maxima of strongly dependent Gaussian processes. J. Appl. Probab., 49(4):1106–1118, 2012.
[29] L. Yan. Comparison inequalities for one sided normal probabilities. J. Theoret. Probab., 22(4):827–836, 2009.

Krzysztof Dębicki, Mathematical Institute, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland
E-mail address: [email protected]

Enkelejd Hashorva, Department of Actuarial Science, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: [email protected]

Lanpeng Ji, Department of Actuarial Science, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: [email protected]

Chengxiu Ling, Department of Actuarial Science, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: