Extremal behavior of squared Bessel processes attracted by the Brown-Resnick process
Bikramjit Das^{a,∗}, Sebastian Engelke^{b,c}, Enkelejd Hashorva^{b}

a Singapore University of Technology and Design, 20 Dover Drive, Singapore 138682
b University of Lausanne, Faculty of Business and Economics (HEC Lausanne), Lausanne 1015, Switzerland
c Ecole Polytechnique Fédérale de Lausanne, EPFL-FSB-MATHAA-STAT, Station 8, 1015 Lausanne, Switzerland
Abstract
The convergence of properly time-scaled and normalized maxima of independent standard Brownian motions to the Brown-Resnick process is well known in the literature. In this paper, we study the extremal functional behavior of non-Gaussian processes, namely squared Bessel processes and scalar products of Brownian motions. It is shown that maxima of independent samples of those processes converge weakly on the space of continuous functions to the Brown-Resnick process.
Keywords:
Bessel process, Brown-Resnick process, extreme value theory, functional convergence
1. Introduction
The study of Gaussian processes, their suprema and sojourns has been of interest to researchers for quite some time; see the excellent monographs by Leadbetter et al. [23], Adler [1], Berman [4], Lifshits [24], Piterbarg [26], and Adler and Taylor [2] for a detailed overview. These studies involve investigations of the asymptotic behavior of the maximum of a Gaussian (and sometimes non-Gaussian) process over a specific set under time and space scalings. On the other hand, in spatial extreme value theory, the main focus is on pointwise maxima of independent processes representing, for instance, regular measurements of an environmental quantity.

Suppose a large number, n, of particles start at the origin and move along the trajectories of independent Brownian motions in an m-dimensional Euclidean space. Denote by M_n(t), t ≥ 0, the maximal squared displacement from the origin of those n particles at time t. It is well known that for a fixed t > 0 there exist constants a_n > 0 and b_n ∈ R such that we have the weak convergence

  lim_{n→∞} P( (M_n(t) − b_n t)/(a_n t) ≤ x ) = Λ(x),  x ∈ R,    (1)

where Λ(x) = exp(−exp(−x)), x ∈ R, denotes the Gumbel distribution; see, e.g., [10, p. 156]. In this paper we are interested in the functional convergence of the quantity in (1) on the space of continuous functions.

∗ Corresponding author.
Email addresses: [email protected] (Bikramjit Das), [email protected] (Sebastian Engelke), [email protected] (Enkelejd Hashorva)
October 3, 2018

In the one-dimensional case, Brown and Resnick [6] showed that the functional limit is given by a stationary, max-stable process. This Brown-Resnick process and its generalizations in Kabluchko et al. [22] and Kabluchko [21] are now well known in extreme value theory and have recently found importance as models for spatial extreme weather events; see Davis et al. [7], Davison et al. [8], Engelke et al. [13].

The finite-dimensional distributions of a Brown-Resnick process can be naturally identified as the so-called Hüsler-Reiss distributions introduced in Hüsler and Reiss [20], which appear as the limit of maxima of a triangular array of Gaussian random vectors. Those distributions arise even in more general, non-Gaussian settings, as shown in Hashorva [15] and Hashorva et al. [16]. In fact, the latter paper provides conditions for the weak convergence of maxima of independent, multivariate chi-square random vectors to the Hüsler-Reiss distribution. Such an observation naturally points us towards the question whether there are some non-Gaussian processes whose maxima are attracted by the Brown-Resnick process under appropriate linear scaling.

This is the principal focus of our paper, which is organized as follows. In Section 2 we introduce necessary notation, recall the definition of Brown-Resnick processes and provide the two main theorems. They state the functional convergence of maxima of independent squared Bessel processes and, furthermore, it is shown that the Brown-Resnick process also appears as the limit of maxima processes obtained by the scalar product of two independent, m-dimensional Brownian motions. The main lemma, which might be of some independent interest, and the proofs of the theorems are relegated to Section 3. Section 4 concludes the paper. Further necessary tools can be found in the Appendix.
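The fixed-time convergence (1) is straightforward to observe numerically. The following sketch (Python with numpy; the dimension, sample sizes and seed are illustrative choices of ours, and the norming constants used are the Gumbel constants for the chi-square maximum that appear as (5) in Section 2) simulates the maximal squared displacement of n Brownian particles at a fixed time t and compares its standardized law with the Gumbel distribution:

```python
import numpy as np
from math import lgamma

# Sketch of (1): n particles follow independent m-dimensional Brownian
# motions started at the origin; at a fixed time t the squared distance of
# one particle from the origin is t times a chi-square(m) variable, so
# M_n(t) is t times the maximum of n chi-square(m) variables.
rng = np.random.default_rng(0)
m, n, t, reps = 2, 2000, 3.0, 4000

# Gumbel norming constants for the chi-square(m) maximum (cf. (5) below)
a_n = 2.0
b_n = 2 * np.log(n) + (m - 2) * np.log(np.log(n)) - 2 * lgamma(m / 2)

sq_disp = t * rng.chisquare(df=m, size=(reps, n))   # squared displacements
M_t = sq_disp.max(axis=1)                           # M_n(t), one per replication
standardized = (M_t - b_n * t) / (a_n * t)

# empirical CDF at x = 0 versus the Gumbel value Lambda(0) = exp(-1)
emp = (standardized <= 0.0).mean()
gumbel0 = np.exp(-np.exp(-0.0))
print(emp, gumbel0)  # the two agree up to Monte Carlo error
```

The agreement improves only at a logarithmic rate in n, as is typical for convergence to the Gumbel law.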
2. Extremal behavior of squared Bessel processes and Brownian scalar product processes
In the sequel, for T > 0 we denote by C[0,T] and C[0,∞) the spaces of real-valued continuous functions on [0,T] and [0,∞), respectively, equipped with the topology of uniform convergence (on bounded intervals). Let {X_i, i ∈ N} be the points of a Poisson point process on R with intensity measure e^{−x} dx, x ∈ R, and let {B_i, i ∈ N} be independent standard Brownian motions on [0,∞) which are also independent of {X_i, i ∈ N}. The original Brown-Resnick process initially presented in [6] is denoted by M_B and defined as

  M_B(t) = max_{i∈N} ( X_i + B_i(t) − t/2 ),  t ≥ 0.    (2)

More generally, for a centered Gaussian process {η(t), t ∈ R} with stationary increments and variance function σ²(t), the corresponding max-stable, stationary Brown-Resnick process M_η is defined by

  M_η(t) = max_{i∈N} ( X_i + η_i(t) − σ²(t)/2 ),  t ≥ 0,    (3)

where η_i, i ∈ N, are independent and identically distributed (i.i.d.) copies of η; see Kabluchko et al. [22], Kabluchko [21], Dombry and Eyi-Minko [9].

Originally, the standard Brown-Resnick process was derived as the limit of the maximum of i.i.d. Gaussian processes, namely Brownian motions and Ornstein-Uhlenbeck processes. Motivated by the recent findings of Hashorva et al. [16], in this section we exhibit two other classes of non-Gaussian processes leading to the same limit process M_B. More precisely, we investigate chi-square, or squared Bessel, processes and scalar-product processes related to standard Brownian motions.

Let therefore {B_{i,j}, i ∈ N, 1 ≤ j ≤ m} be independent standard Brownian motions on [0,∞) and define for i ∈ N the squared Bessel process of dimension m ≥ 1 by

  ξ_i(t) = B²_{i,1}(t) + … + B²_{i,m}(t),  t ≥ 0.    (4)

Hence, {ξ_i, i ∈ N} are i.i.d. with one-dimensional marginals given by a χ²_m-distribution with m degrees of freedom. Then, for constants a_n, b_n defined by

  a_n = 2,  b_n = 2 ln n + (m − 2) ln(ln n) − 2 ln Γ(m/2),  n ≥ 2,    (5)

the maximum M_{n,ξ}(t) = max{ξ_1(t), …, ξ_n(t)} satisfies for any fixed t > 0

  lim_{n→∞} P( (M_{n,ξ}(t) − b_n t)/(a_n t) ≤ x ) = Λ(x),  x ∈ R.    (6)

In their paper, Hashorva et al. [16] prove that the normalized maxima of independent chi-square random vectors converge to the Hüsler-Reiss distribution [20], which gives the finite-dimensional distributions of the Brown-Resnick processes M_η defined above. On the other hand, Brown and Resnick [6] showed that the rescaled maxima of an independent sequence of rescaled Brownian motions tend to the Brown-Resnick process. Thus a Brown-Resnick limit for the maxima of squared Bessel processes is quite intuitive. The sequence of processes M_{n,ξ}, n ≥ 1, is defined on C[0,∞), but weak convergence of M_{n,ξ} holding on C[0,T] for all T > 0 implies weak convergence on C[0,∞), and proving convergence on C[0,T] is similar to proving it for C[0,1]. We therefore work on C[0,1]. For 1 ≤ i ≤ n, n ∈ N, define the local, or rescaled, processes

  ξ_{i,n}(t) = ( ξ_i(1 + t/b_n) − b_n(1 + t/b_n) ) / 2,  t ≥ 0.    (7)

Our first result below shows the functional convergence of the maximum process max_{1≤i≤n} ξ_{i,n} to the standard Brown-Resnick process M_B.

Theorem 2.1.
We have the weak convergence, as n → ∞,

  max_{i=1,…,n} ξ_{i,n}(t) →d M_B(t),  t ∈ [0,1],

on the space of continuous functions C[0,1].

Since Bessel processes are the norm of multivariate Brownian motions, we shall investigate further the extremal behavior of the scalar product of two independent Brownian motion vector processes. Let therefore {B_{i,j}, B̃_{i,j}, i ∈ N, 1 ≤ j ≤ m} be independent standard Brownian motions on [0,∞) and define for i ∈ N

  γ_i(t) = B_{i,1}(t) B̃_{i,1}(t) + … + B_{i,m}(t) B̃_{i,m}(t),  t ∈ [0,∞).    (8)

By Lemma 4.2 in the Appendix it follows that for constants a*_n, b*_n defined by

  a*_n = 1,  b*_n = ln n + (m/2 − 1) ln(ln n) − (m/2 − 1) ln 2 − ln Γ(m/2),  n ≥ 2,    (9)

the maximum M_{n,γ}(t) = max{γ_1(t), …, γ_n(t)} satisfies for a fixed t > 0

  lim_{n→∞} P( (M_{n,γ}(t) − b*_n t)/(a*_n t) ≤ x ) = Λ(x),  x ∈ R.    (10)

Note in passing that a*_n, b*_n are, however, different from the constants in the case of squared Bessel processes. Similarly as above, we define for 1 ≤ i ≤ n the local processes

  γ_{i,n}(t) = γ_i( 1 + t/(2b*_n) ) − b*_n ( 1 + t/(2b*_n) ),  t ≥ 0.    (11)

We have the following result for the convergence of max_{1≤i≤n} γ_{i,n}, as n → ∞.

Theorem 2.2. For n → ∞, we have the weak convergence

  max_{i=1,…,n} γ_{i,n}(t) →d M_B(t),  t ∈ [0,1],

on the space of continuous functions C[0,1].
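The limit process itself can be simulated directly from the definition (2) by truncating the Poisson point process. The following minimal sketch (Python with numpy; the truncation level and replication count are ad hoc choices of ours) does this and checks the standard Gumbel margins P(M_B(t) ≤ x) = Λ(x):

```python
import numpy as np

# Direct simulation sketch of the Brown-Resnick process in (2).  The points
# X_i = -log(Gamma_i), with Gamma_i the partial sums of i.i.d. standard
# exponentials, form a Poisson point process with intensity e^{-x} dx; each
# point is shifted by an independent drifted Brownian motion B_i(t) - t/2.
# We truncate the point process at `npts` points, which is harmless on [0, 1].
rng = np.random.default_rng(1)
reps, npts = 4000, 300

gammas = rng.exponential(size=(reps, npts)).cumsum(axis=1)
X = -np.log(gammas)                       # Poisson points, X_1 > X_2 > ...
B1 = rng.standard_normal((reps, npts))    # B_i(1) ~ N(0, 1)

M0 = X.max(axis=1)                        # M_B(0) = max_i X_i
M1 = (X + B1 - 0.5).max(axis=1)           # M_B(1) = max_i (X_i + B_i(1) - 1/2)

gumbel0 = np.exp(-1.0)                    # Lambda(0) = exp(-exp(-0)) = e^{-1}
err0 = abs((M0 <= 0).mean() - gumbel0)
err1 = abs((M1 <= 0).mean() - gumbel0)
print(err0, err1)  # both are within Monte Carlo error of zero
```

At t = 0 the truncated representation is exact: max_i X_i = −log Γ_1 is standard Gumbel. At t = 1 the truncation error is negligible, since a point of index i near npts would need a Brownian fluctuation of roughly log(npts) standard deviations to contribute to the maximum.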
3. Proofs
Let us remark that the space C[0,1] of continuous functions is not locally compact. This fact prevents us from applying the standard theory of Poisson point processes in extreme value theory. In particular, [28, Theorem 5.3] is not applicable for Poisson point processes on the space C[0,1].

Lemma 3.1. For n ∈ N, 1 ≤ i ≤ n, let the following triangular arrays be given, where the elements within the rows of each array are i.i.d.:

1. Identically distributed random variables Y_{i,n} satisfying

  P(Y_{1,1} > u) = (1 + o(1)) K u^β e^{−cu},  u → ∞,    (12)

with constants K, c > 0, β ∈ R. By Theorem 3.3.26 in Embrechts et al. [10], this implies that

  lim_{n→∞} n P(X_{1,n} > s) = e^{−s},  for all s ∈ R,    (13)

where X_{i,n} = a_n^{−1}(Y_{i,n} − b_n) and

  a_n = c^{−1},  b_n = c^{−1}( ln n + β ln(c^{−1} ln n) + ln K ),  n ≥ 2.    (14)

Assume further that for all large r and any p > 0,

  lim sup_{n→∞} n ∫_{−b_n/(2a_n)}^{−r} e^{−x²/p} P(X_{1,n} ∈ dx) < ∞.    (15)

2. Stochastic processes {R_{i,n}(t) : t ∈ [0,1]} such that the vector (X_{i,n}, R_{i,n}(·)) has the same distribution as (X_{i,n}, φ_{i,n} W_{i,n}(·)), where W_{i,n} ∼ {W(t) : t ∈ [0,1]} are standard Brownian motions independent of the X_{i,n}, and φ_{i,n} are positive random variables, independent of W_{i,n}, such that for some q > 1,

  lim_{n→∞} n P(φ_{1,n} > q) = 0,    (16)

and

  lim_{n→∞} P( |1 − φ_{1,n}| > ε | X_{1,n} ∈ K ) = 0,  for all ε > 0,    (17)

for any compact set K ⊂ R.

3. Stochastic processes {δ_{i,n}(t) : t ∈ [0,1]}, independent of X_{i,n}, such that

  lim_{n→∞} P( ‖δ_{1,n}‖ > ε ) = 0,  for all ε > 0,    (18)
  lim_{n→∞} n P( ‖δ_{1,n}‖ > C ) = 0,  for some C > 0,    (19)

where ‖·‖ denotes the supremum norm on [0,1].

Then we have the weak convergence

  η_n(t) := max_{i=1,…,n} ζ_{i,n}(t) := max_{i=1,…,n} ( X_{i,n} + R_{i,n}(t) − t/2 + δ_{i,n}(t) ) →d M_B(t),  t ∈ [0,1],    (20)

on the function space C[0,1], where {M_B(t) : t ∈ [0,1]} is the original Brown-Resnick process given by (2).

Remark 3.2. If (12) holds, then condition (15) is satisfied if Y_{1,1} possesses a density h such that for some c > 0,

  P(Y_{1,1} > u) = (1 + o(1)) h(u)/c,  u → ∞.

We first prove the following useful result.
Lemma 3.3.
With the notation and under the assumptions of Lemma 3.1, for any ε > 0 we can find constants R, N > 0 such that for any r > R and n > N, we have

  P(A_n) := P( ∃ t ∈ [0,1] : η_n(t) ≠ max_{i∈{1,…,n}, |X_{i,n}|<r} ζ_{i,n}(t) ) ≤ ε.    (21)

Proof.
We apply a similar technique as in the proof of Theorem 17 in Kabluchko et al. [22]. First note that

  A_n ⊂ C_n ∪ D_n ∪ ( A_n \ [C_n ∪ D_n] ),

where for some r > 0,

  C_n = { inf_{t∈[0,1]} η_n(t) < −2r },  D_n = ∪_{i=1}^n { X_{i,n} > r }.

Clearly, by (13), for N and R large enough it holds that P(D_n) ≤ n P(X_{1,n} > r) < ε/3, for any n > N, r > R. Moreover, note that C_n ⊂ ∩_{i=1}^n F_{i,n}^c, where

  F_{i,n} = { X_{i,n} ∈ [−r, r],  inf_{t∈[0,1]} ( R_{i,n}(t) − t/2 + δ_{i,n}(t) ) ≥ −r }.

In view of assumption (17), setting

  Δ_{i,n}(t) := W_{i,n}(t) ( φ_{i,n} − 1 ),  t ∈ [0,1],    (22)

we obtain for any δ > 0

  P( ‖Δ_{i,n}‖ > δ | X_{i,n} ∈ [−r,r] ) ≤ P( ‖W_{i,n}‖ > δ/τ ) + P( |φ_{i,n} − 1| > τ | X_{i,n} ∈ [−r,r] )    (23)
   ≤ 4Φ̄(δ/τ) + ε/2 ≤ ε

for sufficiently small τ > 0, where Φ̄ denotes the tail of an N(0,1) random variable. Further, using assumption 2 of Lemma 3.1, (18) and (23), we obtain for any δ > 0

  P( inf_{t∈[0,1]} ( R_{i,n}(t) − t/2 + δ_{i,n}(t) ) < −r | X_{i,n} ∈ [−r,r] )
   ≤ P( ‖δ_{i,n}‖ > δ ) + P( inf_{t∈[0,1]} ( W_{i,n}(t) − t/2 + Δ_{i,n}(t) ) < −r + δ | X_{i,n} ∈ [−r,r] )
   ≤ P( ‖δ_{i,n}‖ > δ ) + P( ‖Δ_{i,n}‖ > δ | X_{i,n} ∈ [−r,r] ) + P( inf_{t∈[0,1]} W_{i,n}(t) < −r + 2δ + 1/2 )
   ≤ 1/2

for n and r sufficiently large. Thus, by (13),

  P(F_{i,n}) ≥ (1/2) P( X_{i,n} ∈ [−r,r] ) ≥ r/n + o(1/n),  n → ∞,

for r large enough and uniformly in i ∈ N, and consequently

  P(C_n) ≤ ( 1 − P(F_{1,n}) )^n ≤ ( 1 − r/n + o(1/n) )^n ≤ e^{−r} < ε/3

for r and n large. It remains to show that P( A_n \ (C_n ∪ D_n) ) becomes small. To this end, define the events

  E_{i,n} = { X_{i,n} < −r,  sup_{t∈[0,1]} ζ_{i,n}(t) > −2r }

and note that A_n \ (C_n ∪ D_n) is a subset of the union ∪_{i=1}^n E_{i,n}. Let C > 0 be the constant from (19) in assumption 3. Then

  P(E_{i,n}) ≤ P( ‖δ_{i,n}‖ > C ) + P( X_{i,n} < −r,  sup_{t∈[0,1]} ( X_{i,n} + φ_{i,n} W_{i,n}(t) − t/2 ) > −2r − C ).    (24)

For n large enough, (19) implies that the first summand is bounded by ε/(9n). A coupling argument yields that the second summand can be bounded from above by

  P( φ_{i,n} > q ) + P( X_{i,n} < −r,  sup_{t∈[0,1]} ( X_{i,n} + q W_{i,n}(t) ) > −2r − C ),

where again the first summand is bounded by ε/(9n) by (16). Clearly, we can estimate

  P( sup_{t∈[0,1]} W_{i,n}(t) > u ) ≤ 2Φ̄(u) ≤ e^{−u²/2}

for large u > 0. Choosing r > 2C such that (−2r − C − x)/q > −x/(2q) for all x < −r thus gives

  P( X_{i,n} < −r, sup_{t∈[0,1]} ( X_{i,n} + q W_{i,n}(t) ) > −2r − C )
   ≤ P( sup_{t∈[0,1]} W_{i,n}(t) > ( −2r − C + b_n/(2a_n) )/q )
    + ∫_{−b_n/(2a_n)}^{−r} P( sup_{t∈[0,1]} W_{i,n}(t) > ( −2r − C − x )/q | X_{i,n} = x ) P( X_{i,n} ∈ dx )
   ≤ e^{−(b_n/(4a_n q))²/2} + ∫_{−b_n/(2a_n)}^{−r} e^{−x²/(8q²)} P( X_{i,n} ∈ dx )
   ≤ e^{−(b_n/(4a_n q))²/2} + K′/n ≤ ε/(9n),

with some constant K′ > 0 that can be made arbitrarily small by enlarging r, and a_n, b_n defined in (14); the second inequality is a consequence of (15). Collecting all parts together yields

  P( ∪_{i=1}^n E_{i,n} ) ≤ n P(E_{1,n}) ≤ ε/3

and thus P(A_n) ≤ ε for all n > N and r > R with N, R large enough.

Corollary 3.4. With the notation and under the assumptions of Lemma 3.1, for any ε > 0 we can find an N ∈ N such that for all n > N we have

  P( sup_{t∈[0,1]} | η_n(t) − η̃_n(t) | > ε ) ≤ ε,

where

  η̃_n(t) = max_{i=1,…,n} ( X_{i,n} + R_{i,n}(t) − t/2 ),  t ∈ [0,1].    (25)

Proof.
For any ε > 0,

  P( sup_{t∈[0,1]} | η_n(t) − η̃_n(t) | > ε )
   ≤ P( ∃ t ∈ [0,1] : η_n(t) ≠ max_{i∈{1,…,n}, |X_{i,n}|<r} ( X_{i,n} + R_{i,n}(t) − t/2 + δ_{i,n}(t) ) )
    + P( ∃ t ∈ [0,1] : η̃_n(t) ≠ max_{i∈{1,…,n}, |X_{i,n}|<r} ( X_{i,n} + R_{i,n}(t) − t/2 ) )
    + P( ∃ i ∈ {1,…,n} : |X_{i,n}| < r, ‖δ_{i,n}‖ > ε )
   ≤ ε/3 + ε/3 + n P( |X_{1,n}| < r ) P( ‖δ_{1,n}‖ > ε ) ≤ ε,

where for the first and second summand r and N can be chosen according to Lemma 3.3. The last inequality then follows from assumptions (13) and (18).

Proof of Lemma 3.1.
The proof consists of two steps. First, we establish convergence of the finite-dimensional margins in (20), and, second, we show that the sequence of probability measures {η_n}_{n∈N} on C[0,1] is tight. In fact, by Corollary 3.4, {η_n}_{n∈N} converges weakly on C[0,1] if and only if the sequence of probability measures {η̃_n}_{n∈N} in (25) converges weakly on C[0,1]. We may therefore work with {η̃_n}_{n∈N} instead of {η_n}_{n∈N}.

For the first part, let t = (t_1, …, t_m) ∈ [0,1]^m and (y_1, …, y_m) ∈ R^m be fixed. It follows from Lemma 4.1.3 in Falk et al. [14] that it suffices to prove the convergence

  lim_{n→∞} n P( ∀j : X_{1,n} + R_{1,n}(t_j) − t_j/2 > y_j ) = ∫_R e^{−y} P( ∀j : W(t_j) − t_j/2 > y_j − y ) dy,    (26)

where {W(t) : t ∈ [0,1]} is a standard Brownian motion. To this end, we recall the definition of Δ_{1,n} in (22) and, for clarity, denote by {W̄_{1,n}(t) = W_{1,n}(t) − t/2, t ∈ [0,1]} the drifted process. For arbitrary δ, r > 0,

  P( ∀j : X_{1,n} + W̄_{1,n}(t_j) + Δ_{1,n}(t_j) > y_j )
   ≤ P( ∀j : X_{1,n} + W̄_{1,n}(t_j) > y_j − δ, |X_{1,n}| < r )
    + P( ∀j : X_{1,n} + W̄_{1,n}(t_j) + Δ_{1,n}(t_j) > y_j, |X_{1,n}| > r )
    + P( ‖Δ_{1,n}‖ > δ, |X_{1,n}| < r ).    (27)

Furthermore, as n → ∞, the first summand fulfills

  n P( ∀j : X_{1,n} + W̄_{1,n}(t_j) > y_j − δ, |X_{1,n}| < r )
   = ∫_{−r}^{r} P( ∀j : W̄(t_j) > y_j − y − δ ) n P( X_{1,n} ∈ dy )
   → ∫_{−r}^{r} e^{−y} P( ∀j : W̄(t_j) > y_j − y − δ ) dy,

since n P(X_{1,n} ∈ dy) converges weakly to e^{−y} dy, as n → ∞. Now, in view of the calculations following (24) for the second summand in (27), and (23) and (13) for the third summand in (27), we have

  lim sup_{n→∞} n P( ∀j : X_{1,n} + W̄_{1,n}(t_j) + Δ_{1,n}(t_j) > y_j )
   ≤ lim_{r→∞} ∫_{−r}^{r} e^{−y} P( ∀j : W̄(t_j) > y_j − y − δ ) dy
    + lim_{r→∞} lim sup_{n→∞} n P( ∀j : X_{1,n} + W̄_{1,n}(t_j) + Δ_{1,n}(t_j) > y_j, |X_{1,n}| > r )
    + lim_{r→∞} lim sup_{n→∞} n P( ‖Δ_{1,n}‖ > δ, |X_{1,n}| < r )
   = ∫_R e^{−y} P( ∀j : W̄(t_j) > y_j − y − δ ) dy.    (28)

Similarly, we can show that

  lim inf_{n→∞} n P( ∀j : X_{1,n} + W̄_{1,n}(t_j) + Δ_{1,n}(t_j) > y_j ) ≥ ∫_R e^{−y} P( ∀j : W̄(t_j) > y_j − y + δ ) dy.    (29)

Since δ > 0 was arbitrary, letting δ ↓ 0 yields (26), and thus the convergence of finite-dimensional margins.

In order to show the tightness of the sequence {η̃_n}_{n∈N}, we note that the sequence {η̃_n(0)}_{n∈N} is tight since it equals {max_{i=1,…,n} X_{i,n}}_{n∈N}, which converges to the Gumbel distribution by (13). For a function g ∈ C[0,1] and κ > 0, we define the modulus of continuity ω_κ(g) by

  ω_κ(g) = sup_{s,t∈[0,1], |s−t|≤κ} | g(s) − g(t) |.

By Theorem 7.3 in Billingsley [5] it suffices to find, for any ε, α > 0, a κ > 0 and an N ∈ N such that

  P( ω_κ(η̃_n) > α ) < ε,  n > N.

By choosing κ > 0 small enough, we obtain for any ε′ > 0 and any r > 0

  P( ω_κ( X_{1,n} + W̄_{1,n} + Δ_{1,n} ) > α | X_{1,n} ∈ [−r,r] ) ≤ P( ω_κ(W̄_{1,n}) > α/2 ) + P( ‖Δ_{1,n}‖ > α/4 | X_{1,n} ∈ [−r,r] ) ≤ ε′    (30)

for n > N with N large enough, because W_{1,n} is independent of X_{1,n}, its distribution does not depend on n, and condition (23) holds. We proceed by noting that for any n we have

  { ω_κ(η̃_n) > α } ⊂ ( { ω_κ(η̃_n) > α } ∩ Ã_n^c ) ∪ Ã_n ⊂ ( ∪_{i=1}^n G_{i,n} ) ∪ Ã_n,    (31)

where

  Ã_n = { ∃ t ∈ [0,1] : η̃_n(t) ≠ max_{i∈{1,…,n}, |X_{i,n}|<r} ( X_{i,n} + R_{i,n}(t) − t/2 ) }

and

  G_{i,n} = { X_{i,n} ∈ [−r,r],  ω_κ( X_{i,n} + W̄_{i,n} + Δ_{i,n} ) > α }.

For any ε′ > 0,

  P(G_{i,n}) = P( ω_κ( X_{i,n} + W̄_{i,n} + Δ_{i,n} ) > α | X_{i,n} ∈ [−r,r] ) P( X_{i,n} ∈ [−r,r] ) ≤ ε′ P( X_{i,n} ∈ [−r,r] )

by (30) and κ > 0 small enough, for all n > N. Thus, since by (13), P(X_{i,n} ∈ [−r,r]) is of order 1/n, we have for any n > N with N large enough

  P( ∪_{i=1}^n G_{i,n} ) ≤ n P(G_{1,n}) < ε/2.    (32)

Consequently, (31) together with (21) (applied to η̃_n, i.e., with δ_{i,n} ≡ 0) and (32) implies P(ω_κ(η̃_n) > α) < ε, for n > N, and hence the tightness of {η̃_n}_{n∈N}.

Proof of Theorem 2.1.
For i ∈ N and 1 ≤ j ≤ m, write

  B_{i,j}(1 + t/b_n) =d B_{i,j}(1) + (1/√b_n) B*_{i,j}(t),  t ≥ 0,

where {B*_{i,j}(t), i ∈ N, 1 ≤ j ≤ m} are independent standard Brownian motions, further independent of {B_{i,j}(1), i ∈ N, 1 ≤ j ≤ m}. We thus have

  ξ_{i,n}(t) =d (1/2) [ Σ_{j=1}^m B²_{i,j}(1) + (2/√b_n) Σ_{j=1}^m B_{i,j}(1) B*_{i,j}(t) + (1/b_n) Σ_{j=1}^m (B*_{i,j}(t))² − b_n(1 + t/b_n) ]
   = ( Σ_{j=1}^m B²_{i,j}(1) − b_n )/2 + (1/√b_n) Σ_{j=1}^m B_{i,j}(1) B*_{i,j}(t) − t/2 + (1/(2b_n)) Σ_{j=1}^m (B*_{i,j}(t))²
   =: X_{i,n} + R_{i,n}(t) − t/2 + δ_{i,n}(t),  t ∈ [0,1].    (33)

We check the assumptions of Lemma 3.1. By Lemma 4.2 in the Appendix, Y_{i,n} := 2X_{i,n} + b_n satisfies for u → ∞

  P( Y_{1,1} > u ) = (1 + o(1)) ( 1/(2^{m/2−1} Γ(m/2)) ) u^{m/2−1} exp(−u/2) = (1 + o(1)) 2 h(u),

where h denotes the χ²_m density, and hence assumption 1 of Lemma 3.1 holds (recall Remark 3.2).

A simple calculation with characteristic functions yields for X_{i,n} and R_{i,n} in (33) the joint stochastic representation

  ( X_{i,n}, R_{i,n} ) =d ( X_{i,n}, φ_{i,n} W_{i,n}(·) ),  φ_{i,n} := √( 2X_{i,n}/b_n + 1 ),

where {W_{i,n}(t) : t ∈ [0,1]} are i.i.d. standard Brownian motions, independent of the X_{i,n}. Clearly, it holds for any q > 1 that

  lim_{n→∞} n P( φ_{1,n} > q ) = lim_{n→∞} n P( X_{1,n} > b_n(q² − 1)/2 ) = 0,

since X_{1,n} is in the max-domain of attraction of the Gumbel distribution and lim_{n→∞} b_n(q² − 1)/2 = ∞. Furthermore, for arbitrary ε, r > 0,

  lim_{n→∞} P( |1 − φ_{1,n}| > ε | X_{1,n} ∈ [−r, r] ) = 0.

Moreover, δ_{i,n} in (33) is independent of X_{i,n} and for any ε > 0,

  P( ‖δ_{1,n}‖ > ε ) = P( sup_{t∈[0,1]} Σ_{j=1}^m (B*_{1,j}(t))² > 2 b_n ε ) → 0,  n → ∞.

Moreover, for C > 1, in view of the Piterbarg inequality given in Proposition 3.2 in Tan and Hashorva [29] (see also Theorem 8.1 in Piterbarg [26], or Piterbarg [27]), we have for some positive constant λ,

  n P( ‖δ_{1,n}‖ > C ) = n P( sup_{t∈[0,1]} Σ_{j=1}^m (B*_{1,j}(t))² > 2 b_n C ) ≤ n b_n^λ e^{−b_n C} → 0,  n → ∞,

and thus assumption 3 of Lemma 3.1 holds, and the assertion of the theorem follows.

Proof of Theorem 2.2.
For i ∈ N and 1 ≤ j ≤ m, write

  B_{i,j}( 1 + t/(2b*_n) ) =d B_{i,j}(1) + (1/√(2b*_n)) B*_{i,j}(t),  B̃_{i,j}( 1 + t/(2b*_n) ) =d B̃_{i,j}(1) + (1/√(2b*_n)) B̃*_{i,j}(t),  t ≥ 0,

where {B*_{i,j}, B̃*_{i,j}, i ∈ N, 1 ≤ j ≤ m} are independent standard Brownian motions, further independent of {B_{i,j}(1), B̃_{i,j}(1), i ∈ N, 1 ≤ j ≤ m}. We thus have

  γ_{i,n}(t) =d Σ_{j=1}^m B_{i,j}(1) B̃_{i,j}(1) + (1/√(2b*_n)) Σ_{j=1}^m B_{i,j}(1) B̃*_{i,j}(t) + (1/√(2b*_n)) Σ_{j=1}^m B̃_{i,j}(1) B*_{i,j}(t) + (1/(2b*_n)) Σ_{j=1}^m B*_{i,j}(t) B̃*_{i,j}(t) − b*_n ( 1 + t/(2b*_n) )
   = ( Σ_{j=1}^m B_{i,j}(1) B̃_{i,j}(1) − b*_n ) + (1/√(2b*_n)) Σ_{j=1}^m [ B_{i,j}(1) B̃*_{i,j}(t) + B̃_{i,j}(1) B*_{i,j}(t) ] − t/2 + (1/(2b*_n)) Σ_{j=1}^m B*_{i,j}(t) B̃*_{i,j}(t)
   =: X_{i,n} + R_{i,n}(t) − t/2 + δ_{i,n}(t),  t ∈ [0,1].    (34)

As above, we only have to check the assumptions of Lemma 3.1. By Lemma 4.2 in the Appendix, Y_{i,n} := X_{i,n} + b*_n satisfies for u → ∞

  P( Y_{1,1} > u ) = (1 + o(1)) ( 1/(2^{m/2} Γ(m/2)) ) u^{m/2−1} exp(−u) = (1 + o(1)) f_m(u),

where f_m is the density from Lemma 4.2, and hence assumption 1 of Lemma 3.1 holds (recall again Remark 3.2).

A simple calculation with characteristic functions yields for X_{i,n} and R_{i,n} in (34) the joint stochastic representation

  ( X_{i,n}, R_{i,n} ) =d ( X_{i,n}, φ_{i,n} W_{i,n}(·) ),  φ_{i,n} := √( Ψ_{i,n}/(2b*_n) ),  Ψ_{i,n} := Σ_{j=1}^m ( B²_{i,j}(1) + B̃²_{i,j}(1) ),

where {W_{i,n}(t) : t ∈ [0,1]} are i.i.d. standard Brownian motions, independent of the X_{i,n}.

Clearly, since Ψ_{1,n} is chi-square distributed with 2m degrees of freedom, it holds for any q > 1 that

  lim_{n→∞} n P( φ_{1,n} > q ) = lim_{n→∞} n P( Ψ_{1,n} > 2 b*_n q² ) ≤ lim_{n→∞} n K (b*_n)^{m−1} exp(−b*_n q²) = 0,

where K > 0 is some constant. Moreover, for arbitrary ε, r > 0,

  P( |1 − φ_{1,n}| > ε | X_{1,n} ∈ [−r, r] )    (35)
   = P( Ψ_{1,n} ∉ [ 2b*_n(1−ε)², 2b*_n(1+ε)² ] | Σ_{j=1}^m B_{1,j}(1) B̃_{1,j}(1) ∈ [b*_n − r, b*_n + r] )
   = P( Ψ_{1,n} ∉ [ 2b*_n(1−ε)², 2b*_n(1+ε)² ], Σ_{j=1}^m B_{1,j}(1) B̃_{1,j}(1) ∈ [b*_n − r, b*_n + r] ) / P( Σ_{j=1}^m B_{1,j}(1) B̃_{1,j}(1) ∈ [b*_n − r, b*_n + r] ).

By Lemma 4.2, for large n ∈ N the denominator can be bounded from below by

  P( Σ_{j=1}^m B_{1,j}(1) B̃_{1,j}(1) ∈ [b*_n − r, b*_n + r] ) ≥ K′ ( (b*_n − r)^{m/2−1} e^{r} − (b*_n + r)^{m/2−1} e^{−r} ) e^{−b*_n},    (36)

for some constant K′ > 0. For the numerator we first note that

  Ψ_{1,n} = Σ_{j=1}^m ( B²_{1,j}(1) + B̃²_{1,j}(1) ) ≥ 2 Σ_{j=1}^m B_{1,j}(1) B̃_{1,j}(1),

so that on the conditioning event Ψ_{1,n} ≥ 2(b*_n − r) > 2b*_n(1−ε)² for n large enough, and thus it suffices to consider

  P( Ψ_{1,n} > 2b*_n(1+ε)², Σ_{j=1}^m B_{1,j}(1) B̃_{1,j}(1) ∈ [b*_n − r, b*_n + r] )
   ≤ P( Σ_{j=1}^m ( B_{1,j}(1) + B̃_{1,j}(1) )² > 2b*_n(1+ε)² + 2(b*_n − r) )
   = P( χ²_m > b*_n(1+ε)² + b*_n − r )
   ≤ K″ ( b*_n(1+ε)² + b*_n − r )^{m/2−1} e^{−( b*_n(1+ε)² + b*_n − r )/2},    (37)

where χ²_m denotes a chi-square random variable with m degrees of freedom and K″ > 0 is a constant. Comparing (37) with (36), the conditional probability in (35) tends to 0 as n → ∞. Thus, assumption 2 of Lemma 3.1 is fulfilled.

Note that δ_{i,n} in (34) is independent of X_{i,n}. For any ε > 0,

  P( ‖δ_{1,n}‖ > ε ) = P( sup_{t∈[0,1]} | Σ_{j=1}^m B*_{1,j}(t) B̃*_{1,j}(t) | > 2 b*_n ε )
   ≤ m P( sup_{t∈[0,1]} |B*_{1,1}(t)| · sup_{t∈[0,1]} |B̃*_{1,1}(t)| > 2 b*_n ε/m )
   ≤ M exp( −b*_n ε/m ) → 0,  n → ∞,

where the second inequality follows from Lemma 4.1 in the Appendix and M is a positive constant. Clearly, for C > m we further have

  n P( ‖δ_{1,n}‖ > C ) → 0,  n → ∞.

Thus assumption 3 of Lemma 3.1 holds, and the proof is complete.
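The norming recipe (12)-(14) of Lemma 3.1, which both proofs above instantiate, can also be checked numerically. The sketch below (Python, standard library only; the parameter values K, β, c and the sample sizes n are arbitrary illustrative choices of ours) evaluates n P(Y > a_n s + b_n) for a Weibull-type tail; in the exponential case (β = 0, K = 1) the Gumbel limit e^{−s} is attained exactly, while for β ≠ 0 the convergence is only logarithmic in n:

```python
import math

def tail(u, K, beta, c):
    """Weibull-type tail K * u^beta * e^{-c u}; exact for Exp(c) if K=1, beta=0."""
    return K * u ** beta * math.exp(-c * u)

def scaled(n, s, K, beta, c):
    """n * P(Y > a_n * s + b_n) with a_n, b_n from (14)."""
    a_n = 1.0 / c
    b_n = (math.log(n) + beta * math.log(math.log(n) / c) + math.log(K)) / c
    return n * tail(a_n * s + b_n, K, beta, c)

s = 1.0
# Exponential case (beta = 0, K = 1, c = 1/2): the Gumbel limit is exact.
exact = scaled(10 ** 6, s, K=1.0, beta=0.0, c=0.5)

# General case: the error shrinks only logarithmically as n grows.
r1 = scaled(10 ** 4, s, K=0.5, beta=1.5, c=2.0)
r2 = scaled(10 ** 12, s, K=0.5, beta=1.5, c=2.0)
print(exact, math.exp(-s), r1, r2)
```

The slow improvement from r1 to r2 mirrors the logarithmic rate that is typical throughout this paper's Gumbel limits.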
4. Conclusion and further work
Brown-Resnick processes have gained a lot of attention recently, both because of their theoretical intricacies and because of their potential applicability, especially in space-time modeling of extreme events; see Davison et al. [8]. To this end, it is an important fact that this class of max-stable processes naturally appears as max-limits of Gaussian processes (cf. Kabluchko et al. [22], Kabluchko [21]). We have shown that these processes appear more generally as limits of maxima not only of Gaussian processes, but also of squared Bessel processes and Brownian scalar product processes. Further generalizations are under investigation. A recent work by Engelke et al. [12] shows, for instance, that Hüsler-Reiss type limit distributions are obtained for non-identically distributed independent Gaussian random vectors. A natural extension could thus be to consider maxima of non-identically distributed independent Gaussian processes and their functional limits. Furthermore, the independence assumption can eventually be relaxed, as in Hashorva and Weng [18], so that the limit process still remains Brown-Resnick. In a different direction, there have been some developments in simulating Brown-Resnick processes [11, 25]. An alternative formulation as the limit of other processes, as described in this paper, can potentially lead to further simulation techniques.

Acknowledgement
The authors are grateful to an anonymous referee for several comments and corrections that considerably improved the paper. The authors kindly acknowledge partial support by the Swiss National Science Foundation grants 200021-1401633/1, 200021-134785 and the project RARE-318984 (a Marie Curie FP7 IRSES Fellowship). Bikramjit Das would also like to thank RiskLab for partial financial support.
Appendix

Lemma 4.1.
Let X_i, i = 1, 2, be two positive independent random variables such that

  P( X_i > x ) = (1 + o(1)) C_i x^{α_i} exp( −L_i x^{p_i} ),    (38)

with L_i, C_i, p_i, i = 1, 2, positive constants and α_1, α_2 ∈ R. Then, as x → ∞, we have

  P( X_1 X_2 > x ) = (1 + o(1)) ( 2π p_2 L_2 / (p_1 + p_2) )^{1/2} C_1 C_2 A^{p_2/2 + α_2 − α_1} x^{ (p_2 α_1 + p_1 α_2 + p_1 p_2/2) / (p_1 + p_2) } × exp( −( L_1 A^{−p_1} + L_2 A^{p_2} ) x^{ p_1 p_2 / (p_1 + p_2) } ),    (39)

where A = ( (p_1 L_1)/(p_2 L_2) )^{1/(p_1 + p_2)}. If further X_1 possesses a density h_1 which is bounded and ultimately decreasing such that h_1(x) = (1 + o(1)) L_1 p_1 x^{p_1 − 1} P(X_1 > x) as x → ∞, then X_1 X_2 possesses a density h satisfying

  h(x) = (1 + o(1)) L_1 p_1 A^{−p_1} x^{ p_1 p_2/(p_1 + p_2) − 1 } P( X_1 X_2 > x ),  x → ∞.    (40)

Proof. The tail asymptotics of X_1 X_2 are proved in Arendarczyk and Dębicki [3], whereas (40) is given in [19], Corollary 2.2. An alternative, somewhat shorter proof of (39) is derived using the following arguments. We have, for some 0 < l_1 < 1 < l_2 < ∞ and z_x = A x^{p_1/(p_1+p_2)}, A = ((p_1 L_1)/(p_2 L_2))^{1/(p_1+p_2)},

  P( X_1 X_2 > x ) ∼ C_1 C_2 p_2 L_2 x^{α_1} ∫_{l_1 z_x}^{l_2 z_x} y^{p_2 + α_2 − α_1 − 1} exp( −L_1 (x/y)^{p_1} − L_2 y^{p_2} ) dy
   ∼ C_1 C_2 p_2 L_2 x^{α_1} z_x^{p_2 + α_2 − α_1} ∫_{l_1}^{l_2} exp( −L_1 (x/z_x)^{p_1} y^{−p_1} − L_2 z_x^{p_2} y^{p_2} ) dy
   = C_1 C_2 p_2 L_2 x^{α_1} z_x^{p_2 + α_2 − α_1} ∫_{l_1}^{l_2} exp( −x^{p_1 p_2/(p_1+p_2)} [ L_1 A^{−p_1} y^{−p_1} + L_2 A^{p_2} y^{p_2} ] ) dy.

Since the function ψ(y) = L_1 A^{−p_1} y^{−p_1} + L_2 A^{p_2} y^{p_2} attains its minimum on [l_1, l_2] at y = 1, applying the Laplace approximation we obtain

  ∫_{l_1}^{l_2} exp( −x^{p_1 p_2/(p_1+p_2)} ψ(y) ) dy ∼ √( 2π / ( x^{p_1 p_2/(p_1+p_2)} ψ″(1) ) ) exp( −ψ(1) x^{p_1 p_2/(p_1+p_2)} ),  x → ∞,

where

  ψ(1) = L_1 A^{−p_1} + L_2 A^{p_2},  ψ′(1) = 0,  ψ″(1) = L_2 A^{p_2} p_2 (p_1 + p_2) > 0,

hence the claim follows.

Lemma 4.2.
If X_i, Y_i, i ≥ 1, are independent N(0,1) Gaussian random variables, then for any positive integer m, as x → ∞, we have

  P( Σ_{i=1}^m X_i Y_i > x ) = (1 + o(1)) ( 1/(2^{m/2} Γ(m/2)) ) x^{m/2−1} exp(−x) = (1 + o(1)) f_m(x),    (41)
  P( Σ_{i=1}^m X_i² > x ) = (1 + o(1)) ( 1/(2^{m/2−1} Γ(m/2)) ) x^{m/2−1} exp(−x/2) = (1 + o(1)) 2 g_m(x),    (42)

where f_m and g_m are the densities of Σ_{i=1}^m X_i Y_i and Σ_{i=1}^m X_i², respectively. Furthermore, Σ_{i=1}^m X_i Y_i is in the Gumbel max-domain of attraction with norming constants

  a*_n = 1,  b*_n = ln n + (m/2 − 1) ln(ln n) − (m/2 − 1) ln 2 − ln Γ(m/2).

Proof. The proof follows from Examples 5 and 6 in Hashorva et al. [17]. We give here a direct proof utilizing Lemma 4.1. The sum Σ_{i=1}^m X_i Y_i is symmetric about 0 and

  | Σ_{i=1}^m X_i Y_i | =d | X_1 | √( Σ_{i=1}^m Y_i² ) =: |X| Z,

with Z := √( Σ_{i=1}^m Y_i² ) being independent of X := X_1. The asymptotic behavior of the tail of Z follows immediately from the properties of Gamma random variables, as given in (42). Consequently, applying Lemma 4.1 we obtain

  P( Σ_{i=1}^m X_i Y_i > x ) = (1/2) P( | Σ_{i=1}^m X_i Y_i | > x ) = (1/2) P( |X| Z > x ) = (1 + o(1)) ( 1/(2^{m/2} Γ(m/2)) ) x^{m/2−1} exp(−x),  x → ∞,

and thus the norming constants can easily be found; see, e.g., [10, p. 155], hence the claim follows.

References

[1] R. Adler, An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes, volume 12 of IMS Lecture Notes, Institute of Mathematical Statistics, Hayward, 1990.
[2] R. Adler, J. Taylor, Random Fields and Geometry, Springer Monographs in Mathematics, Springer, New York, 2007.
[3] M. Arendarczyk, K. Dębicki, Asymptotics of supremum distribution of a Gaussian process over a Weibullian time, Bernoulli 17 (2011) 194–210.
[4] S. Berman, Sojourns and Extremes of Stochastic Processes, Wadsworth and Brooks/Cole, Pacific Grove, CA, 1992.
[5] P. Billingsley, Convergence of Probability Measures, second ed., John Wiley & Sons Inc., New York, 1999.
[6] B. Brown, S.I. Resnick, Extreme values of independent stochastic processes, Journal of Applied Probability 14 (1977) 732–739.
[7] R.A. Davis, C. Klüppelberg, C. Steinkohl, Statistical inference for max-stable processes in space and time, Journal of the Royal Statistical Society, Series B 75 (2013) 791–819.
[8] A.C. Davison, S.A. Padoan, M. Ribatet, Statistical modeling of spatial extremes, Statistical Science 27 (2012) 161–186.
[9] C. Dombry, F. Eyi-Minko, Strong mixing properties of max-infinitely divisible random fields, Stochastic Processes and their Applications 122 (2012) 3790–3811.
[10] P. Embrechts, C. Klüppelberg, T. Mikosch, Modelling Extremal Events for Insurance and Finance, Springer-Verlag, Berlin, 1997.
[11] S. Engelke, Z. Kabluchko, M. Schlather, An equivalent representation of the Brown-Resnick process, Statistics & Probability Letters 81 (2011) 1150–1154.
[12] S. Engelke, Z. Kabluchko, M. Schlather, Maxima of independent, non-identically distributed Gaussian vectors, Bernoulli (to appear).
[20] J. Hüsler, R. Reiss, Maxima of normal random vectors: between independence and complete dependence, Statistics & Probability Letters 7 (1989) 283–286.
[21] Z. Kabluchko, Extremes of independent Gaussian processes, Extremes 14 (2011) 285–310.
[22] Z. Kabluchko, M. Schlather, L. de Haan, Stationary max-stable fields associated to negative definite functions, The Annals of Probability 37 (2009) 2042–2065.
[23] M. Leadbetter, G. Lindgren, H. Rootzén, Extremes and Related Properties of Random Sequences and Processes, volume 11, Springer Verlag, 1983.
[24] M.A. Lifshits, Gaussian Random Functions, volume 322 of Mathematics and its Applications, Kluwer Academic Publishers, Dordrecht, 1995.
[25] M. Oesting, Z. Kabluchko, M. Schlather, Simulation of Brown-Resnick processes, Extremes 15 (2012) 89–107.
[26] V.I. Piterbarg, Asymptotic Methods in the Theory of Gaussian Processes and Fields, volume 148 of Translations of Mathematical Monographs, American Mathematical Society, Providence, RI, 1996.
[27] V.I. Piterbarg, Large deviations of a storage process with fractional Brownian motion as input, Extremes 4 (2001) 147–164.
[28] S.I. Resnick, Heavy Tail Phenomena: Probabilistic and Statistical Modeling, Springer Series in Operations Research and Financial Engineering, Springer-Verlag, New York, 2007.
[29] Z. Tan, E. Hashorva, Exact asymptotics and limit theorems for supremum of stationary χ-processes over a random interval, Stochastic Processes and their Applications 123 (2013) 2983–2998.