Change in the mean in the domain of attraction of the normal law via Darling-Erdős theorems
arXiv preprint, [math.ST], April.
Miklós Csörgő ∗
School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
[email protected]

Hu †
Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
[email protected]
Abstract.
This paper studies the problem of testing the null assumption of no change in the mean of chronologically ordered independent observations on a random variable X versus the at most one change in the mean alternative hypothesis. The approach taken is via a Darling–Erdős type self-normalized maximal deviation between sample means before and sample means after possible times of a change in the expected values of the observations of a random sample. Asymptotically, the thus formulated maximal deviations are shown to have a standard Gumbel distribution under the null assumption of no change in the mean. A first such result is proved under the condition that EX² log log(|X| + 1) < ∞, while in the case of a second one, X is assumed to be in a specific class of the domain of attraction of the normal law, possibly with infinite variance.

Key Words:
Change in the mean, domain of attraction of the normal law, Darling–Erdős theorems, Gumbel distribution, weighted metrics, Brownian bridge.
AMS 2000 Subject Classification:
Primary 60F05; secondary 62G10.

∗ Research supported by an NSERC Canada Discovery Grant at Carleton University.
† Partially supported by NSFC (No. 10801122) and RFDP (No. 200803581009), and by an NSERC Canada Discovery Grant of M. Csörgő at Carleton University.

1 Introduction and main results
Let X, X_1, X_2, … be non-degenerate independent identically distributed (i.i.d.) real-valued random variables (r.v.'s) with a finite mean EX = µ. We are interested in testing the null assumption

H_0: X_1, X_2, …, X_n is a random sample on X with a finite mean EX = µ

versus the "at most one change in the mean" (AMOC) alternative hypothesis

H_A: there is an integer k*, 1 ≤ k* < n, such that EX_1 = ⋯ = EX_{k*} ≠ EX_{k*+1} = ⋯ = EX_n.

The hypothesized time k* of at most one change in the mean is usually unknown. Hence, given chronologically ordered independent observables X_1, X_2, …, X_n, n ≥ 1, in order to test H_0 versus H_A, from a non-parametric point of view it appears to be reasonable to compare the sample mean (X_1 + ⋯ + X_k)/k =: S_k/k at any time 1 ≤ k < n to the sample mean (X_{k+1} + ⋯ + X_n)/(n − k) =: (S_n − S_k)/(n − k) after time 1 ≤ k < n via functionals in k of the family of the standardized statistics

Γ_n(k) := ( n (k/n)(1 − k/n) )^{1/2} ( S_k/k − (S_n − S_k)/(n − k) )
        = ( (k/n)(1 − k/n) )^{−1/2} ( S_k/n^{1/2} − (k/n) S_n/n^{1/2} ),  1 ≤ k < n.  (1.1)

For instance, one would want to reject H_0 in favor of H_A for large observed values of

Γ_n := max_{1≤k<n} |Γ_n(k)|.  (1.2)

If the observables X_1, …, X_n, n > 1, are N(µ, σ²) random variables, then we find ourselves modeling and testing for a parametric shift in the mean AMOC problem. It is, however, easy to check that, when the variance σ² is known, then

−2 log Λ_k = (1/σ²) (Γ_n(k))²,  (1.3)

where Λ_k is the likelihood ratio statistic if the change in the mean occurs at k* = k. Hence, the maximally selected likelihood ratio statistic max_{1≤k<n} (−2 log Λ_k)^{1/2} coincides with Γ_n/σ.

When X ∈ DAN, i.e., when X belongs to the domain of attraction of the normal law, possibly with infinite variance, weighted sup-norm approximations in terms of a Brownian bridge are available for appropriately self-normalized versions Z_n(t), 0 ≤ t ≤ 1, of the process in (1.1); we refer to the latter result as (1.7) (cf. Csörgő et al. (2004), Remark 5.2, as well as Corollaries 2 and 4 of Csörgő et al. (2008a) and their extension (46) in Theorem 4 of Csörgő et al. (2008b)). The applicability of (1.7) is much enhanced by Orasch and Pouliot (2004), tabulating functionals in weighted sup-norm.

An alternative way of studying change in the mean is via Darling–Erdős type theorems. For example (cf. Theorems 2.1.2, A.4.2 and Corollary 2.1.2 in Csörgő and Horváth (1997)), under H_0 with EX² log log(|X| + 1) < ∞, we have, for all t ∈ ℝ,

lim_{n→∞} P( a(n) max_{1≤k<n} Γ_n(k)/σ̂_n ≤ t + b(n) ) = exp(−e^{−t}),  (1.8)

where σ̂²_n := n^{−1} Σ_{1≤i≤n} (X_i − S_n/n)² and

a(x) := (2 log log x)^{1/2},  b(x) := 2 log log x + (1/2) log log log x − (1/2) log π.  (1.9)
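The two algebraically equivalent forms of Γ_n(k) in (1.1) are easy to check numerically. The following sketch (Python with NumPy; the function name gamma_n and the simulated sample are ours, not the paper's) computes the family Γ_n(k) in the CUSUM form and verifies it against the before/after sample-mean form.

```python
import numpy as np

def gamma_n(x):
    """Standardized statistics Gamma_n(k) of (1.1), for k = 1, ..., n-1.

    Computed in the CUSUM form ((k/n)(1-k/n))^{-1/2}(S_k/sqrt(n) - (k/n)S_n/sqrt(n)),
    which is algebraically identical to the before/after sample-mean form.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(1, n)            # k = 1, ..., n-1
    s = np.cumsum(x)               # S_1, ..., S_n
    t = k / n
    return (s[:-1] / np.sqrt(n) - t * s[-1] / np.sqrt(n)) / np.sqrt(t * (1 - t))

rng = np.random.default_rng(0)
x = rng.normal(size=200)           # a sample under H_0
g = gamma_n(x)

# the "mean before vs mean after" form of (1.1) gives the same values
n = len(x)
k = np.arange(1, n)
s = np.cumsum(x)
direct = np.sqrt(n * (k / n) * (1 - k / n)) * (s[:-1] / k - (s[-1] - s[:-1]) / (n - k))
assert np.allclose(g, direct)

print(np.max(np.abs(g)))           # the maximally selected statistic Gamma_n of (1.2)
```

The CUSUM form is numerically convenient because it needs only one pass of partial sums.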
In view of (1.7), the aim of this paper is to explore the possibility of extending the result of (1.8) to versions of Z_n(k/(n+1)) under H_0 with X ∈ DAN, for the sake of having an alternative approach to the sup-norm procedure of (1.7) for studying the problem of a change in the mean in DAN, possibly with EX² = ∞.

Define the family of statistics

T_{k,n} = ( S_k/k − (S_n − S_k)/(n − k) ) / ( Σ_{i=1}^{k} (X_i − S_k/k)²/(k(k − 1)) + Σ_{i=k+1}^{n} (X_i − (S_n − S_k)/(n − k))²/((n − k)(n − k − 1)) )^{1/2},  2 ≤ k ≤ n − 2.  (1.10)

We note in passing that, on writing

σ̃²_{k,n} := Σ_{1≤i≤k} (X_i − S_k/k)²/(k(k − 1)) + Σ_{k<i≤n} (X_i − (S_n − S_k)/(n − k))²/((n − k)(n − k − 1)),  (1.11)

we have

T_{k,n} = ( S_k/k − (S_n − S_k)/(n − k) )/σ̃_{k,n},  2 ≤ k ≤ n − 2.  (1.12)
Theorem 1.1. Assume that H_0 holds and

EX² log log(|X| + 1) < ∞.  (1.13)

Then, for all t ∈ ℝ,

lim_{n→∞} P( a(n) max_{2≤k≤n−2} T_{k,n} ≤ t + b(n) ) = exp(−e^{−t}).

Write l(x) := E(X − µ)² I(|X − µ| ≤ x). Assume that X belongs to the domain of attraction of the normal law. Then l(x) is a slowly varying function as x → ∞. Consequently, there exists some a > 0 such that, for x > a (see, for example, Galambos and Seneta (1973)),

l(x) = exp{ c(x) + ∫_a^x (ε(t)/t) dt },  (1.14)

where c(x) → c (|c| < ∞) as x → ∞ and ε(t) → 0 as t → ∞.

Theorem 1.2. Assume that H_0 holds and l(x) is a slowly varying function at ∞ that, in terms of the representation (1.14), satisfies the additional conditions c(x) ≡ c and ε(t) ≤ C_1/log t for some C_1 > 0, i.e., X ∈ DAN, possibly with infinite variance, under the latter specific conditions on l(x). Then, for all t ∈ ℝ,

lim_{n→∞} P( a(n) max_{2≤k≤n−2} T_{k,n} ≤ t + b(n) ) = exp(−e^{−t}).

Remark 1. The additional conditions in Theorem 1.2 are satisfied by a large class of slowly varying functions, such as l(x) = (log log x)^α and l(x) = (log x)^α, for example, for some 0 < α < ∞.

Remark 2. Csörgő, Szyszkowicz and Wang (2003) obtained the following Darling–Erdős theorem for self-normalized sums: suppose that H_0 holds with EX = 0 and l(x) is a slowly varying function at ∞, satisfying

l(x²) ≤ C l(x) for some C > 0.  (1.15)

Then, for every t ∈ ℝ,

lim_{n→∞} P( a(n) max_{1≤k≤n} S_k/V_k ≤ t + b(n) ) = exp(−e^{−t}),

where V_k² := Σ_{1≤i≤k} X_i². If l(x) has the representation (1.14) with c(x) ≡ c and ε(t) ≤ C_1/log t for some C_1 > 0, then

l(x²)/l(x) = exp{ ∫_x^{x²} (ε(t)/t) dt } ≤ exp{ C_1 ∫_x^{x²} dt/(t log t) } = 2^{C_1}.

So, (1.15) holds under the additional smoothness conditions for l(x) that are needed for results like Lemma 2.1, for example. On the other hand, if ε(x) = (log x)^{−α} for some 0 < α < 1, then lim_{x→∞} l(x²)/l(x) = ∞, i.e., (1.15) fails.
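For readers who want to experiment with Theorems 1.1 and 1.2, the statistic max_{2≤k≤n−2} T_{k,n} of (1.10) and the Gumbel-based rejection rule the theorems suggest can be sketched as follows (Python; the helper names are ours, a(·), b(·) are taken as in (1.9), and the level-α threshold (t_α + b(n))/a(n) with t_α = −log(−log(1 − α)) is only an asymptotic approximation).

```python
import numpy as np

def t_kn(x):
    """T_{k,n} of (1.10): difference of the sample means before and after k,
    self-normalized by the pooled standard error, for k = 2, ..., n-2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(2, n - 1)                  # k = 2, ..., n-2
    s, s2 = np.cumsum(x), np.cumsum(x * x)
    out = np.empty(len(ks))
    for i, k in enumerate(ks):
        m1 = s[k - 1] / k                     # mean of X_1, ..., X_k
        m2 = (s[-1] - s[k - 1]) / (n - k)     # mean of X_{k+1}, ..., X_n
        ss1 = s2[k - 1] - k * m1 ** 2         # sum of squared deviations, first block
        ss2 = (s2[-1] - s2[k - 1]) - (n - k) * m2 ** 2
        se2 = ss1 / (k * (k - 1)) + ss2 / ((n - k) * (n - k - 1))
        out[i] = (m1 - m2) / np.sqrt(se2)
    return ks, out

def gumbel_threshold(n, alpha=0.05):
    # a(n), b(n) as in (1.9); asymptotically, reject H_0 at level alpha
    # when a(n) * max T_{k,n} - b(n) exceeds t_alpha
    a = np.sqrt(2 * np.log(np.log(n)))
    b = 2 * np.log(np.log(n)) + 0.5 * np.log(np.log(np.log(n))) - 0.5 * np.log(np.pi)
    t_alpha = -np.log(-np.log(1 - alpha))
    return (t_alpha + b) / a

# a stylized alternative: the mean drops at k* = 150 (a drop makes the
# signed maximum in the theorems large, since T_{k,n} compares before minus after)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1, 150), rng.normal(-1.5, 1, 150)])
ks, t = t_kn(x)
print(int(ks[np.argmax(t)]), bool(t.max() > gumbel_threshold(len(x))))
```

Since the convergence in Darling–Erdős theorems is slow, such an asymptotic critical value should be treated as a rough guide for moderate n.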
Thus, the additional conditions on l(x) in Theorem 1.2 that are sufficient for having (1.15) are seen to be not far from being also necessary.

Before proving Theorems 1.1 and 1.2, we pose the following questions.

Question 1. In view of Theorems 1.1 and 1.2, one may like to know if the result of (1.8) could also hold true when replacing condition (1.13) by X ∈ DAN, possibly with EX² = ∞.

Question 2. In view of having Theorems 1.1 and 1.2, one would hope to have (1.7) in terms of T_{k,n}, i.e., when replacing σ̂_{[nt]+1,n} by σ̃_{[nt]+1,n} ( n/(([nt]+1)(n − [nt])) )^{−1/2} on the left hand side of (1.7), with σ̃_{k,n}, 2 ≤ k ≤ n − 2, as in (1.11), and σ̂²_{n,n} := n^{−1} Σ_{1≤i≤n} (X_i − S_n/n)².

As to these questions, it is clear from the respective proofs of (1.8) (cf. Corollary 2.1.2 in Csörgő and Horváth (1997)) and Theorem 1.1 that, under the condition (1.13), the two estimators σ̂²_{k,n} and (k(n − k)/n) σ̃²_{k,n} of σ² are asymptotically equivalent. When Var(X) = ∞, this does not appear to be true any more, i.e., when these "estimators" in hand are being used as self-normalizers. However, we could not resolve this problem as posed in the context of these two questions.

2 Proofs

Without loss of generality, in this section we assume that µ = 0.

Proof of Theorem 1.1. Write K_n = exp{log^{1/2} n}. With σ̃_{k,n} as in (1.11), in view of (1.12), at first we prove that, as n → ∞,

max_{K_n ≤ k ≤ n − K_n} …

Since

1 ≥ l(η_{n/log log n})/l(η_n) ≥ exp{ −C_1 ∫_{η_{n/log log n}}^{η_n} du/(u log u) } ≥ exp{ −C_1 (η_n − η_{n/log log n})/( η_{n/log log n} log η_{n/log log n} ) },

and η_n is a regularly varying function with index 1/2, for any ε > 0 we have η_n/η_{n/log log n} ≤ (log log n)^{1/2+ε} for sufficiently large n, and log η_{n/log log n} ∼ (1/2) log n as n → ∞. Hence

( l(η_n) − l(η_{n/log log n}) )/l(η_n) = o(1/log log n). ✷

Lemma 2.2. As n → ∞, we have

Σ_{j=1}^n ( |Z_j| + E|Z_j| ) / ( B_n/√(log log n) ) →_P 0.

Proof. Let τ_j = η_j log log j and Z*_j = X_j I(η_j < |X_j| < τ_j).
From the proof of Lemma 2 in Csörgő et al. (2003), we have P(Z_j ≠ Z*_j  i.o.) = 0. Hence, by Chebyshev's inequality, in order to prove Lemma 2.2, we only need to prove that, as n → ∞,

Σ_{j=1}^n E|Z*_j| = o( B_n/√(log log n) ),  (2.6)

Σ_{j=1}^n E(Z*_j)² = o( B_n²/log log n ),  (2.7)

Σ_{j=1}^n E|X_j| I(|X_j| > τ_j) = o( B_n/√(log log n) ).  (2.8)

We only prove (2.6) and (2.8), for the proof of (2.7) is similar to that of (2.6). Since η_n is a regularly varying function with index 1/2, we have that, for sufficiently large n,

η_{n/log log n} log log n ≥ η_n.

Also, similarly, by the fact that √j/√(l(η_j)) is a regularly varying function with index 1/2, we have that, for sufficiently large n,

max_{1≤j≤n/log log n} j/η_j = max_{1≤j≤n/log log n} √j/√(l(η_j)) ≤ √n/√( l(η_n) log log n ).

Consequently,

Σ_{j=1}^n E|Z*_j| ≤ Σ_{j=1}^{n/log log n} E|X| I( η_j < |X| < η_{n/log log n} log log n ) + n E|X| I( η_{n/log log n} < |X| < η_n log log n )
 ≤ Σ_{j=1}^{n/log log n} j E|X| I( η_j < |X| < η_{j+1} ) + n ( l(η_n log log n) − l(η_{n/log log n}) )/η_{n/log log n}
 = o( B_n/√(log log n) ), n → ∞.

Thus (2.6) is proved.

Next, we prove (2.8). By the fact that E|X| I(|X| ≥ x) = o(1) l(x)/x as x → ∞,

Σ_{j=1}^n E|X_j| I(|X_j| > τ_j) = o(1) Σ_{j=1}^n l(τ_j)/τ_j ≤ o(1) l(τ_n) Σ_{j=1}^n 1/τ_j.

Since 1/τ_n is a regularly varying function with index −1/2, by a Tauberian theorem (see, for instance, Theorem 5 in Feller (1971), page 447), we have Σ_{j=1}^n 1/τ_j ∼ 2n/τ_n as n → ∞. Hence, as n → ∞,

Σ_{j=1}^n E|X_j| I(|X_j| > τ_j) = o(1) n l(τ_n)/τ_n = o( B_n/log log n ).

Thus (2.8) is proved and the proof of Lemma 2.2 is complete. ✷

Lemma 2.3. For all t ∈ ℝ, we have

lim_{n→∞} P( a(B_n²) max_{2≤k≤n−2} S*_{k,n}/B_{k,n} ≤ t + b(B_n²) ) = exp(−e^{−t}),  (2.9)

and

lim_{n→∞} P( a(n) max_{2≤k≤n−2} S*_{k,n}/B_{k,n} ≤ t + b(n) ) = exp(−e^{−t}).  (2.10)

Proof. We only prove (2.9), since the proof of (2.10) is similar. Since l(x²) ≤ C l(x), by (42) in Csörgő et al.
(2003), there exist two independent Wiener processes W⁽¹⁾ and W⁽²⁾ such that, as n → ∞,

S*_n − W⁽¹⁾(B_n²) = o( B_n/√(log log n) ) a.s.  (2.11)

and

S̃*_n − W⁽²⁾(B_n²) = o( B_n/√(log log n) ) a.s.  (2.12)

Define K_n = exp{log^{1/2} n} and

W(n, t) := n^{−1/2} ( W⁽¹⁾(nt) − t ( W⁽¹⁾(n/2) + W⁽²⁾(n/2) ) ),  0 ≤ t ≤ 1/2,
W(n, t) := n^{−1/2} ( −W⁽²⁾(n − nt) + (1 − t)( W⁽¹⁾(n/2) + W⁽²⁾(n/2) ) ),  1/2 < t ≤ 1.

Computing its covariance function, one concludes that W(n, t) is a Brownian bridge in 0 ≤ t ≤ 1 for each n ≥ 1. Now, as n → ∞, we have

√(log log n) max_{K_n ≤ k ≤ n/2} | S*_{k,n}/B_{k,n} − B_n² W(B_n², B_k²/B_n²)/√( B_k²(B_n² − B_k²) ) | →_P 0.  (2.13)

To prove (2.13), we notice that, for k ≤ n/2,

S*_{k,n} = ( n/(k(n − k)) ) ( S*_k − (k/n)( S̃*_{n−[n/2]} + S*_{[n/2]} ) ).

Hence, for k ≤ n/2,

| S*_{k,n}/B_{k,n} − B_n² W(B_n², B_k²/B_n²)/√( B_k²(B_n² − B_k²) ) |
 ≤ | W(B_n², B_k²/B_n²) | | nB_n²/(k(n − k)B_{k,n}) − B_n²/√( B_k²(B_n² − B_k²) ) |
 + ( nB_n²/(k(n − k)B_{k,n}) ) | ( k(n − k)/(nB_n²) ) S*_{k,n} − W(B_n², B_k²/B_n²) |
 := L_1(k, n) + L_2(k, n).  (2.14)

First, we estimate L_1(k, n). We have

k(n − k)B_{k,n}/(nB_n²) − √( B_k²(B_n² − B_k²) )/B_n² = ( B_k²/B_n² − k/n ) − k ( B_n² − B²_{[n/2]} − B²_{n−[n/2]} )/(nB_n²).

Note that (k/n)^{1/2+ε} ≤ B_k/B_n ≤ (k/n)^{1/2−ε} holds for all K_n ≤ k ≤ n and sufficiently large n, for any fixed ε ∈ (0, 1/2), by the fact that B_n is a regularly varying function with index 1/2. Then

max_{K_n ≤ k ≤ n/(log log n)} (B_n/B_k) | B_k²/B_n² − k/n | ≤ max_{K_n ≤ k ≤ n/(log log n)} ( B_k/B_n + (B_n k)/(B_k n) ) ≤ 2 (log log n)^{−1/2}.
Also, by Lemma 2.1,

max_{n/(log log n) < k ≤ n/2} …

For any ε > 0, it follows from (2.16) and (2.17) that

√(log log n) max_{K_n ≤ k ≤ n/2} | nB_n²/(k(n − k)B_{k,n}) − B_n²/√( B_k²(B_n² − B_k²) ) |
 ≤ √(log log n) max_{K_n ≤ k ≤ n/2} | k(n − k)B_{k,n}/(nB_n²) − √( B_k²(B_n² − B_k²) )/B_n² | ( B_k²(B_n² − B_k²)/B_n⁴ )^{−1/2}
 ≤ √(log log n) max_{K_n ≤ k ≤ n/2} (B_n/B_k) | k(n − k)B_{k,n}/(nB_n²) − √( B_k²(B_n² − B_k²) )/B_n² | → 0.  (2.18)

By properties of Brownian motion,

max_{K_n ≤ k ≤ n/2} | W(B_n², B_k²/B_n²) | ≤ B_n^{−1} sup_{0≤t≤B_n²} | W⁽¹⁾(t) | + B_n^{−1} | W⁽²⁾(B_n²/2) | =_d 2 sup_{0≤t≤1} | W⁽¹⁾(t) | + | W⁽²⁾(1/2) |.

This together with (2.18) yields

√(log log n) max_{K_n ≤ k ≤ n/2} L_1(k, n) →_P 0, n → ∞.  (2.19)

Next, we estimate L_2(k, n). By (2.11) and (2.12),

| ( k(n − k)/(nB_n²) ) S*_{k,n} − W(B_n², B_k²/B_n²) |
 ≤ ( k/(nB_n) ) | W⁽¹⁾(B_n²/2) − W⁽¹⁾(B²_{[n/2]}) | + ( k/(nB_n) ) | W⁽²⁾(B_n²/2) − W⁽²⁾(B²_{n−[n/2]}) |
 + | k/n − B_k²/B_n² | ( | W⁽¹⁾(B_n²/2) | + | W⁽²⁾(B_n²/2) | )/B_n + o_k(1) B_k/( B_n √(log log k) ),

where o_k(1) → 0 as k → ∞. Similarly to the proof of (2.15), we have

√(log log n) max_{K_n ≤ k ≤ n/2} (B_n/B_k) | B_k²/B_n² − k/n | → 0, n → ∞.

This, together with (2.17) and the fact that

( | W⁽¹⁾(B_n²/2) | + | W⁽²⁾(B_n²/2) | )/B_n =_d | W⁽¹⁾(1/2) | + | W⁽²⁾(1/2) |,

as n → ∞, yields

√(log log n) max_{K_n ≤ k ≤ n/2} ( nB_n²/(k(n − k)B_{k,n}) ) | k/n − B_k²/B_n² | ( | W⁽¹⁾(B_n²/2) | + | W⁽²⁾(B_n²/2) | )/B_n →_P 0.

Similarly to the proof of Lemma 2.1, we have

( √(log log n)/B_n ) | W⁽¹⁾(B_n²/2) − W⁽¹⁾(B²_{[n/2]}) | =_d √(log log n) ( (B_n²/2 − B²_{[n/2]})/B_n² )^{1/2} | W⁽¹⁾(1) |
 = √(log log n) ( ( (n/2) l(η_n) − [n/2] l(η_{[n/2]}) )/( n l(η_n) ) )^{1/2} | W⁽¹⁾(1) | →_P 0, n → ∞.
Hence, by (2.17), as n → ∞,

√(log log n) max_{K_n ≤ k ≤ n/2} ( nB_n²/(k(n − k)B_{k,n}) ) ( k/(nB_n) ) | W⁽¹⁾(B_n²/2) − W⁽¹⁾(B²_{[n/2]}) | →_P 0.

Similarly, as n → ∞,

√(log log n) max_{K_n ≤ k ≤ n/2} ( nB_n²/(k(n − k)B_{k,n}) ) ( k/(nB_n) ) | W⁽²⁾(B_n²/2) − W⁽²⁾(B²_{n−[n/2]}) | →_P 0.

Also, by (2.17), as n → ∞,

√(log log n) max_{K_n ≤ k ≤ n/2} ( nB_n²/(k(n − k)B_{k,n}) ) o_k(1) B_k/( B_n √(log log k) ) →_P 0.

Hence

√(log log n) max_{K_n ≤ k ≤ n/2} L_2(k, n) →_P 0, n → ∞.  (2.20)

Now (2.13) follows from (2.14), (2.19) and (2.20). Now, similarly, as n → ∞,

√(log log n) max_{n/2 < k ≤ n−K_n} | S*_{k,n}/B_{k,n} − B_n² W(B_n², B_k²/B_n²)/√( B_k²(B_n² − B_k²) ) | →_P 0.  (2.21)

Hence, to prove (2.21), we only need to show that, as n → ∞,

√(log log n) sup_{B²_{K_n}/B_n² ≤ t, s ≤ B²_{n−K_n}/B_n²} sup_{|t−s|≤∆_n} | ( W(t) − tW(1) )/√( t(1 − t) ) − ( W(s) − sW(1) )/√( s(1 − s) ) | →_P 0,

where W(t) is a standard Brownian motion. This follows from results on the increments of a Brownian motion (see for instance Csörgő and Révész (1981), Theorem 1.2.1) and by some basic calculations. We omit the details here. Hence, as n → ∞,

√(log log n) | max_{K_n ≤ k ≤ n−K_n} S*_{k,n}/B_{k,n} − sup_{B²_{K_n}/B_n² ≤ s ≤ B²_{n−K_n}/B_n²} W(B_n², s)/√( s(1 − s) ) | →_P 0.  (2.22)

By using (A.4.30) and (A.4.31) in Csörgő and Horváth (1997), as n → ∞, we conclude

( 2 log log B_n² )^{−1/2} sup_{1/B_n² ≤ s ≤ c(B_n²)} W(B_n², s)/√( s(1 − s) ) →_P √(1/2),

( 2 log log B_n² )^{−1/2} sup_{1−c(B_n²) ≤ s ≤ 1−1/B_n²} W(B_n², s)/√( s(1 − s) ) →_P √(1/2),

where c(B_n²) = exp{ (log B_n²)^{1/2} }/B_n². Notice that B²_{K_n}/B_n² ≤ c(B_n²) and B²_{n−K_n}/B_n² ≥ 1 − c(B_n²) for sufficiently large n. Hence, as n → ∞,

a(B_n²) sup_{1/B_n² ≤ s ≤ B²_{K_n}/B_n²} W(B_n², s)/√( s(1 − s) ) − b(B_n²) →_P −∞,  (2.23)

a(B_n²) sup_{B²_{n−K_n}/B_n² ≤ s ≤ 1−1/B_n²} W(B_n², s)/√( s(1 − s) ) − b(B_n²) →_P −∞.
(2.24)

By (A.4.29) and Theorem A.3.1 in Csörgő and Horváth (1997), we arrive at

lim_{n→∞} P( a(B_n²) sup_{1/B_n² ≤ s ≤ 1−1/B_n²} W(B_n², s)/√( s(1 − s) ) ≤ t + b(B_n²) ) = exp(−e^{−t}).  (2.25)

Now, from (2.22)–(2.25) it follows that, for all t ∈ ℝ,

lim_{n→∞} P( a(B_n²) max_{K_n ≤ k ≤ n−K_n} S*_{k,n}/B_{k,n} ≤ t + b(B_n²) ) = exp(−e^{−t}).

This, together with (2.28) below, implies that, for all t ∈ ℝ,

lim_{n→∞} P( a(B_n²) max_{2≤k≤n−2} S*_{k,n}/B_{k,n} ≤ t + b(B_n²) ) = exp(−e^{−t}),

i.e., (2.9). ✷

Since X ∈ DAN, we have V_n/b_n →_P 1 and Ṽ_n/b_n →_P 1 as n → ∞, where W is a Brownian motion and b_n is a regularly varying function with index 1/2. Hence

min_{k≤n/2} ( Ṽ²_{n−[n/2]} + V²_{[n/2]} − V²_k − ( S̃_{n−[n/2]} + S_{[n/2]} − S_k )²/(n − k) )/b_n²
 ≥ Ṽ²_{n−[n/2]}/b_n² − ( S̃²_{n−[n/2]} + 6 ( max_{1≤k≤n/2} |S_k| )² )/( (n/2) b_n² ) →_P 1/2, n → ∞.

Notice that, by the self-normalized LIL of Griffin and Kuelbs (1989), as n → ∞, we have

lim sup_{n→∞} |S_n| / √( 2 ( V_n² − S_n²/n ) log log n ) = 1 a.s.

Hence

(1/√n) max_{1≤k≤K_n} |S_k| / √( V_k² − S_k²/k ) ≤ ( √(K_n)/√n ) (1 + o(1)) = o(1) a.s.

Similarly, by (18) in Csörgő et al. (2003), we conclude

(1/√n) max_{k>K_n, k∈Ω′_1} |S_k| / √( V_k² − S_k²/k ) = o(1) a.s., n → ∞.

Thus, by noting that (a + b)/√(c + d) ≤ a/√c + b/√d holds for all a, b, c, d > 0,

(1/√n) max_{k∈Ω′_1} |S_{k,n}| / V̄_{k,n}
 ≤ (1/√n) max_{k∈Ω′_1} ( n/(n − k) ) |S_k| / √( V_k² − S_k²/k )
 + ( ( |S_{[n/2]}| + |S̃_{n−[n/2]}| )/( b_n √n ) ) / min_{k≤n/2} ( Ṽ²_{n−[n/2]} + V²_{[n/2]} − V²_k − ( S̃_{n−[n/2]} + S_{[n/2]} − S_k )²/(n − k) )^{1/2}/b_n
 = √2 o_P(1), n → ∞.

This, as n → ∞, implies

a(n) max_{k∈Ω′_1} |S_{k,n}| / V̄_{k,n} − b(n) →_P −∞,  (2.26)

and, similarly,

a(n) max_{k∈Ω′_2} |S_{k,n}| / V̄_{k,n} − b(n) →_P −∞.  (2.27)

Furthermore, similarly, by using (20) in Csörgő et al. (2003), and by the facts that, as n → ∞, S*_n/B_n →_d N(0, 1) and lim sup_{n→∞} S*_n/( 2 B_n² log log n )^{1/2} = 1 a.s. (by (2.11)), we infer

a(n) max_{k∈Ω′_1 ∪ Ω′_2} |S*_{k,n}| / B_{k,n} − b(n) →_P −∞.
(2.28)

Now, in order to prove Theorem 1.2, we only need to show that, as n → ∞,

a(n) max_{k∈Ω′} | S_{k,n}/V_{k,n} − S*_{k,n}/B_{k,n} | →_P 0,  (2.29)

and

a(n) max_{k∈Ω′′} | S_{k,n}/V_{k,n} − S*_{k,n}/B_{k,n} | →_P 0.  (2.30)

In fact, if (2.29) and (2.30) hold true, then it follows from (2.28) and Lemma 2.3 that, for all t ∈ ℝ,

lim_{n→∞} P( a(n) max_{k∈Ω′∪Ω′′} S_{k,n}/V_{k,n} ≤ t + b(n) ) = exp(−e^{−t}).  (2.31)

And also by Lemma 2.3, we obtain that

(1/√n) max_{2≤k≤n−2} …

For 1 < r < 2, it follows from the Marcinkiewicz–Zygmund strong law of large numbers (cf. Chow and Teicher (1978), page 125) that S_n/n^{1/r} → 0 a.s. Hence, as n → ∞,

(log log n) S_n²/(n B_n²) → 0 a.s.

Note that, for n/2 < k ≤ n − 2,

( Σ_{j=1}^k ( Z_j² + |EZ_j|² )/k ) / ( B²_{n/2}/(n − k) ) ≤ Σ_{j=1}^k ( Z_j² + |EZ_j|² )/B²_{n/2},

and, by Lemma 2.2,

( Σ_{j=1}^n |Z_j| )² / ( B_n²/log log n ) ≤ ( Σ_{j=1}^n |Z_j| / ( B_n/√(log log n) ) )² →_P 0,

( Σ_{j=1}^n |EY_j| )² / ( B_n²/log log n ) = ( Σ_{j=1}^n |EZ_j| )² / ( B_n²/log log n ) ≤ ( Σ_{j=1}^n |EZ_j| / ( B_n/√(log log n) ) )² → 0, n → ∞.

Now, by (40) of Csörgő et al. (2003), we have

(log log n) max_{k∈Ω′} | V²_{k,n} − B²_{k,n} | / B²_{k,n}
 ≤ 3 max_{k∈Ω′} (log log k) | Σ_{j=1}^k ( Y_j² − EY_j² ) |/B_k² + 10 (log log n) | Σ_{j=1}^{[n/2]} ( Y_j² − EY_j² ) |/B²_{[n/2]} + 10 (log log n) | Σ_{j=1}^{n−[n/2]} ( Ỹ_j² − EỸ_j² ) |/B²_{n−[n/2]}
 + 3 max_{k∈Ω′} (log log k) Σ_{j=1}^k ( Z_j² + |EY_j|² )/B_k² + 10 (log log n) Σ_{j=1}^{[n/2]} ( Z_j² + |EY_j|² )/B²_{[n/2]} + 10 (log log n) Σ_{j=1}^{n−[n/2]} ( Z̃_j² + |EY_j|² )/B²_{n−[n/2]}
 + 12 max_{k∈Ω′} (log log k) S_k²/(k B_k²) + 3 (log log n) S̃²_{n−[n/2]}/( (n/2) B²_{n−[n/2]} ) + 3 (log log n) S²_{[n/2]}/( (n/2) B²_{[n/2]} )
 →_P 0, n → ∞.  (2.36)

By the self-normalized LIL of Griffin and Kuelbs (1989), we conclude

max_{k≤n/2} | S_{[n/2]} − S_k | / ( V_n √n ) ≤ 2 max_{1≤k≤n/2} |S_k| / ( V_n √n ) = o(1) a.s., n → ∞.  (2.37)

By the facts that V_n/b_n →_P 1 and Ṽ_n/b_n →_P 1, as n → ∞, we get

V_n/Ṽ_{n−[n/2]} = ( V_n/b_n ) ( b_{n−[n/2]}/Ṽ_{n−[n/2]} ) ( b_n/b_{n−[n/2]} ) →_P √2.
(2.38)

Thus, by using (2.35)–(2.38) and applying again the self-normalized LIL of Griffin and Kuelbs (1989), as n → ∞, we arrive at

a(n) max_{k∈Ω′} | S_{k,n}/V_{k,n} | | V_{k,n} − B_{k,n} |/B_{k,n} →_P 0.  (2.39)

Similarly to the proof of (2.36), by using Lemma 2.2, we have

a(n) max_{k∈Ω′} | S_{k,n} − S*_{k,n} |/B_{k,n}
 ≤ 4 max_{k∈Ω′} √(log log k) Σ_{j=1}^k ( |Z_j| + |EZ_j| )/B_k + 4 √(log log n) Σ_{j=1}^{[n/2]} ( |Z_j| + |EZ_j| )/B_{[n/2]} + 4 √(log log n) Σ_{j=1}^{n−[n/2]} ( |Z̃_j| + |EZ_j| )/B_{n−[n/2]}
 →_P 0, n → ∞.  (2.40)

Now (2.29) follows from (2.34), (2.39) and (2.40). This also completes the proof of Theorem 1.2. ✷

References

[1] Brodsky, B. E. and Darkhovsky, B. S. (1993). Nonparametric Methods in Change-Point Problems. Kluwer, Dordrecht.

[2] Chow, Y. S. and Teicher, H. (1978). Probability Theory. Springer-Verlag, New York.

[3] Csörgő, M. and Horváth, L. (1988). Nonparametric methods for changepoint problems. In Quality Control and Reliability (P. R. Krishnaiah, C. R. Rao, eds.), Handbook of Statistics, Elsevier, Amsterdam, 403–425.

[4] Csörgő, M. and Horváth, L. (1997). Limit Theorems in Change-Point Analysis. Wiley, New York.

[5] Csörgő, M. and Révész, P. (1981). Strong Approximations in Probability and Statistics. Probability and Mathematical Statistics. Academic Press.

[6] Csörgő, M., Szyszkowicz, B. and Wang, Q. (2003). Darling–Erdős theorem for self-normalized sums. Ann. Probab., 676–692.

[7] Csörgő, M., Szyszkowicz, B. and Wang, Q. (2004). On weighted approximations and strong limit theorems for self-normalized partial sums processes. In Asymptotic Methods in Stochastics (L. Horváth, B. Szyszkowicz, eds.), 489–521, Fields Inst. Commun., Amer. Math. Soc., Providence, RI.

[8] Csörgő, M., Szyszkowicz, B. and Wang, Q. (2008a). On weighted approximations in D[0,1] with applications to self-normalized partial sum processes. Acta Mathematica Hungarica, 307–332.
[9] Csörgő, M., Szyszkowicz, B. and Wang, Q. (2008b). Asymptotics of Studentized U-type processes for changepoint problems. Acta Mathematica Hungarica, 333–357.

[10] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, New York.

[11] Galambos, J. and Seneta, E. (1973). Regularly varying sequences. Proc. Amer. Math. Soc., 110–116.

[12] Gombay, E. and Horváth, L. (1994). An application of the maximum likelihood test to the changepoint problem. Stochastic Processes and their Applications, 161–171.

[13] Gombay, E. and Horváth, L. (1996a). Approximations for the time of change and the power function in change-point models. Journal of Statistical Planning and Inference, 43–66.

[14] Gombay, E. and Horváth, L. (1996b). On the rate of approximations for maximum likelihood tests in change-point models. Journal of Multivariate Analysis, 120–152.

[15] Griffin, P. and Kuelbs, J. (1989). Self-normalized laws of the iterated logarithm. Ann. Probab., 1571–1601.

[16] Orasch, M. and Pouliot, W. (2004). Tabulating weighted sup-norm functionals used in change-point analysis. Journal of Statistical Computation and Simulation, 74.