A Central Limit Theorem for Non-Stationary Strongly Mixing Random Fields
Richard C. Bradley · Cristina Tone
Abstract
In this paper we extend a central limit theorem of Peligrad for uniformly strong mixing random fields satisfying the Lindeberg condition, in the absence of stationarity. More precisely, we study the asymptotic normality of the partial sums of uniformly α-mixing non-stationary random fields satisfying the Lindeberg condition, in the presence of an extra dependence assumption involving maximal correlations.

Keywords: Central limit theorem · non-stationary random fields · strong mixing · Lindeberg condition · Kolmogorov's distance
Mathematics Subject Classification (2010)

In applications of statistics to data indexed by location, there is often an apparent lack of both stationarity and independence, but with a reasonable indication of "weak dependence" between data whose locations are "far apart". This has motivated a large amount of research on the theoretical question of to what extent central limit theorems hold for non-stationary random fields. This paper will examine that theoretical question for "arrays of (non-stationary) random fields" under mixing assumptions analogous to those studied by Peligrad [3] in central limit theorems for "arrays of random sequences".

Supported partially by the NSA grant H98230-15-1-0006.

Richard C. Bradley, Department of Mathematics, Indiana University, Rawles Hall 313, Bloomington, Indiana 47405. E-mail: [email protected]

Cristina Tone, Department of Mathematics, University of Louisville, 328 Natural Sciences Building, Louisville, Kentucky 40292. E-mail: [email protected]

Let (Ω, F, P) be a probability space. For any two σ-fields A, B ⊆ F, define the strong mixing coefficient

α(A, B) := sup_{A∈A, B∈B} |P(A ∩ B) − P(A)P(B)|

and the maximal coefficient of correlation

ρ(A, B) := sup |Corr(f, g)|, f ∈ L²(A), g ∈ L²(B).

Suppose d is a positive integer and X := (X_k, k ∈ Z^d) is a not necessarily strictly stationary random field. In this context, for each positive integer n, define the following quantity:

α(X, n) := sup α(σ(X_k, k ∈ Q), σ(X_k, k ∈ S)),

where the supremum is taken over all pairs of nonempty, disjoint sets Q, S ⊂ Z^d with the following property: there exist u ∈ {1, 2, ..., d} and j ∈ Z such that Q ⊂ {k := (k_1, k_2, ..., k_d) ∈ Z^d : k_u ≤ j} and S ⊂ {k := (k_1, k_2, ..., k_d) ∈ Z^d : k_u ≥ j + n}.

The random field X := (X_k, k ∈ Z^d) is said to be "strongly mixing" (or "α-mixing") if α(X, n) → 0 as n → ∞.

Also, for each positive integer n, define the following quantity:

ρ′(X, n) := sup ρ(σ(X_k, k ∈ Q), σ(X_k, k ∈ S)),

where the supremum is taken over all pairs of nonempty, finite disjoint sets Q, S ⊂ Z^d with the following property: there exist u ∈ {1, 2, ..., d} and nonempty disjoint sets A, B ⊂ Z, with dist(A, B) := min_{a∈A, b∈B} |a − b| ≥ n, such that Q ⊂ {k := (k_1, k_2, ..., k_d) ∈ Z^d : k_u ∈ A} and S ⊂ {k := (k_1, k_2, ..., k_d) ∈ Z^d : k_u ∈ B}.

The random field X := (X_k, k ∈ Z^d) is said to be "ρ′-mixing" if ρ′(X, n) → 0 as n → ∞.

Again, suppose d is a positive integer. For a given random field X := (X_k, k ∈ Z^d) and for each L := (L_1, L_2, ..., L_d) ∈ N^d, define the "box"

B(L) := {k := (k_1, k_2, ..., k_d) ∈ N^d : ∀u ∈ {1, 2, ..., d}, 1 ≤ k_u ≤ L_u}.  (1.1)

Obviously, the number of elements in the set B(L) is L_1 · L_2 · ... · L_d.

For any given L ∈ N^d and any given "collection" X := (X_k, k ∈ B(L)), the dependence coefficients introduced above can be defined for n ∈ N in the following convenient way: one trivially extends the collection X to a random field X̃ := (X_k, k ∈ Z^d) by defining X_k = 0 for each k ∈ Z^d − B(L), and then defines, for example, ρ′(X, n) := ρ′(X̃, n) for n ∈ N.
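To make the first definition concrete, the strong mixing coefficient between the σ-fields generated by two discrete random variables can be computed by brute force over events. This sketch is ours, not the paper's (the function and variable names are invented), and it is feasible only for tiny state spaces:

```python
from itertools import product

def alpha(joint):
    """Strong mixing coefficient alpha(A, B) for the sigma-fields generated
    by two discrete random variables with joint pmf joint[(a, b)].
    We take the sup of |P(A n B) - P(A)P(B)| over all events A in sigma(X)
    and B in sigma(Y), enumerated by brute force over subsets of values."""
    xs = sorted({a for a, _ in joint})
    ys = sorted({b for _, b in joint})
    best = 0.0
    for mask_a in range(1 << len(xs)):
        A = {xs[i] for i in range(len(xs)) if mask_a >> i & 1}
        pA = sum(p for (a, b), p in joint.items() if a in A)
        for mask_b in range(1 << len(ys)):
            B = {ys[i] for i in range(len(ys)) if mask_b >> i & 1}
            pB = sum(p for (a, b), p in joint.items() if b in B)
            pAB = sum(p for (a, b), p in joint.items() if a in A and b in B)
            best = max(best, abs(pAB - pA * pB))
    return best

# Two independent fair coins: the coefficient is 0.
indep = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}
# Two identical fair coins: the coefficient attains its maximal value 1/4.
same = {(0, 0): 0.5, (1, 1): 0.5}
print(alpha(indep), alpha(same))  # → 0.0 0.25
```

The value 1/4 is the well-known universal upper bound for α between any two σ-fields.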
We are interested in obtaining CLTs for non-stationary strongly mixing random fields, in the presence of an extra condition involving the maximal correlation coefficient ρ′(X, n) defined above. Our main result presents a central limit theorem for sequences of random fields that satisfy a Lindeberg condition and uniformly satisfy both strong mixing and an upper bound less than 1 on ρ′(·, 1). The condition on ρ′(·, 1) cannot simply be deleted altogether from the theorem, even in the case of strict stationarity. For the case d = 1, that can be seen from any (finite-variance) strictly stationary, strongly mixing counterexample to the CLT such that the rate of growth of the variances of the partial sums is at least linear; for several such examples, see e.g. [1], Theorem 10.25 and Chapters 30-33. Our main theorem and an extension of it, given at the end of the paper, extend certain central limit theorems of Peligrad [3] involving "arrays of random sequences".

The main result of this paper will be given in Theorem 1.1. The rest of the material of this article is organized as follows: background results needed in the proof of the main result are given in Section 2. Sections 3, 4, and 5 contain the proof of Theorem 1.1. More precisely, Section 3 sets up the induction assumption of the proof and contains two special cases, introduced in Lemma 3.1 and Lemma 3.2 respectively, that imply our result. The general case is presented in Lemma 4.1, which occupies Section 4 entirely. Section 5 deals with the Lindeberg condition and the truncation argument. Finally, Section 6 states an extension of Theorem 1.1 to a more general setup.

Theorem 1.1
Suppose d is a positive integer. For each n ∈ N, suppose L_n := (L_{n1}, L_{n2}, ..., L_{nd}) is an element of N^d, and suppose X^{(n)} := (X_k^{(n)}, k ∈ B(L_n)) is an array of random variables such that for each k ∈ B(L_n), E X_k^{(n)} = 0 and E(X_k^{(n)})² < ∞. Suppose the following mixing assumptions hold:

α(m) := sup_n α(X^{(n)}, m) → 0 as m → ∞  (1.2)

and

ρ′(1) := sup_n ρ′(X^{(n)}, 1) < 1.  (1.3)

For each n ∈ N, define the random sum S(X^{(n)}, L_n) := Σ_{k∈B(L_n)} X_k^{(n)}, define the quantity σ_n² := E(S(X^{(n)}, L_n))², and assume that σ_n² > 0. Suppose also that the Lindeberg condition

∀ε > 0, lim_{n→∞} (1/σ_n²) Σ_{k∈B(L_n)} E(X_k^{(n)})² I(|X_k^{(n)}| > εσ_n) = 0  (1.4)

holds. Then σ_n^{−1} S(X^{(n)}, L_n) ⇒ N(0, 1) as n → ∞.

(Here and throughout the paper, ⇒ denotes convergence in distribution.)

This result extends a theorem of Peligrad (see [3], Theorem 2.2), which is Theorem 1.1 for the case d = 1. Later on, Peligrad and Utev [5] obtained an invariance principle for random elements associated to sums of strongly mixing triangular arrays of random variables associated with the interlaced mixing coefficients ρ*_n. Their invariance principle generalizes the corresponding results for independent random variables treated e.g. by Prohorov [6]. For the strictly stationary case see Peligrad [4]. For a sequence of strictly stationary random fields that are uniformly ρ′-mixing and satisfy a Lindeberg condition, a central limit theorem is obtained in [7] for sequences of "rectangular" sums from the given random fields.
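Theorem 1.1 can be illustrated (not proved) by simulation in the simplest degenerate case: a field of independent entries with site-dependent variances is non-stationary, yet has α(X, n) = 0 and ρ′(X, n) = 0, so the mixing hypotheses hold trivially. A sketch, with all names and the variance pattern our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_box_sum(L, trials):
    """Simulate independent mean-zero Gaussian entries X_k on the box B(L)
    with site-dependent variances (a non-stationary, trivially mixing field)
    and return samples of sigma_n^{-1} S(X, L)."""
    # Site-dependent standard deviations taking values 1, 2, 3 over the box.
    sd = 1.0 + np.indices(L).sum(axis=0) % 3
    sigma2 = np.sum(sd ** 2)          # sigma_n^2 = sum of the variances
    draws = rng.normal(size=(trials,) + tuple(L)) * sd
    S = draws.reshape(trials, -1).sum(axis=1)
    return S / np.sqrt(sigma2)

Z = normalized_box_sum((20, 20), trials=5000)
print(round(Z.mean(), 2), round(Z.var(), 2))  # close to 0 and 1
```

Here each normalized sum is exactly N(0, 1), so the empirical mean and variance of the samples should be near 0 and 1; the theorem's content is that the same normal limit survives under mere mixing rather than independence.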
The "Lindeberg CLT" is then used there to prove a CLT for certain kernel estimators of probability density for some strictly stationary random fields satisfying ρ′-mixing, whose probability density and joint densities are absolutely continuous, generalizing the results in [2] obtained under ρ*-mixing.

The proof of Theorem 1.1 frequently uses the following results. The first one is a consequence of Theorem 28.10(I) of [1], which gives an upper bound for the variance of partial sums.
Theorem 2.1
Suppose d is a positive integer, L ∈ N^d, and X := (X_k, k ∈ B(L)) is a (not necessarily strictly stationary) random field such that for each k ∈ B(L), the random variable X_k has mean zero and finite second moment. Suppose ρ′(X, j) < 1 for some j ∈ N. Then for any nonempty finite set S ⊆ B(L),

E|Σ_{k∈S} X_k|² ≤ C Σ_{k∈S} E(X_k²),  (2.1)

where C := j^d (1 + ρ′(X, j))^d / (1 − ρ′(X, j))^d.

The second result is a consequence of Theorem 28.9 of [1], which gives lower and upper bounds for the variance of partial sums.
Theorem 2.2
Suppose d is a positive integer, L ∈ N^d, and X := (X_k, k ∈ B(L)) is a (not necessarily strictly stationary) random field such that for each k ∈ B(L), the random variable X_k has mean zero and finite second moment. Suppose ρ′(X, 1) < 1. Then for any nonempty finite set S ⊆ B(L),

C^{−1} Σ_{k∈S} E|X_k|² ≤ E|Σ_{k∈S} X_k|² ≤ C Σ_{k∈S} E|X_k|²,  (2.2)

where C := (1 + ρ′(X, 1))^d / (1 − ρ′(X, 1))^d.

The next result used is a particular case of the Rosenthal inequality (see Theorem 29.30 of [1]) for the exponent 4.
Theorem 2.3
Suppose d and m are each a positive integer and r ∈ [0, 1). Then there exists a constant C := C(d, 4, r, m) such that the following holds: Suppose L ∈ N^d and X := (X_k, k ∈ B(L)) is a (not necessarily strictly stationary) random field such that for each k ∈ B(L), E X_k = 0 and E|X_k|⁴ < ∞, and ρ′(X, m) ≤ r. Then for any nonempty finite set S ⊆ B(L), one has that

E|Σ_{k∈S} X_k|⁴ ≤ C · ( Σ_{k∈S} E|X_k|⁴ + (Σ_{k∈S} E|X_k|²)² ).  (2.3)

The proof of Theorem 1.1 will be done by induction on d. For d = 1, Theorem 1.1 was proved by Peligrad ([3], Theorem 2.2). Now suppose d is an integer such that d ≥ 2. As the induction hypothesis, suppose Theorem 1.1 holds in the case where d is replaced by the particular integer d − 1. To complete the induction step (and thereby the proof of Theorem 1.1), it suffices to prove Theorem 1.1 in the case of the given integer d.

To carry out the induction step, we will first treat the case where

inf_{n∈N} σ_n² > 0  (3.1)

and

θ_n := sup_{k∈B(L_n)} ||X_k^{(n)}||_∞ → 0 as n → ∞.  (3.2)

Notice that (3.2) (together with (3.1)) implies the Lindeberg condition (1.4). Our goal in Sections 3 and 4 is to show that for X^{(n)} := (X_k^{(n)}, k ∈ B(L_n)) satisfying (1.2), (1.3), (3.1), and (3.2), the CLT holds, that is,

(1/σ_n) Σ_{k∈B(L_n)} X_k^{(n)} ⇒ N(0, 1) as n → ∞.  (3.3)

Then in Section 5, the induction argument will be completed with the use of a standard truncation argument to reduce to the case of the restrictions (3.1)-(3.2). In what follows, for convenience, we shall use the notation L_n := L^{(n)} := (L_1^{(n)}, L_2^{(n)}, ..., L_d^{(n)}).

Lemma 3.1
Suppose, in addition to the properties (1.2), (1.3), (3.1), and (3.2), that sup_{n∈N} L_1^{(n)} < ∞. For each n ≥ 1, define the element L̃^{(n)} ∈ N^{d−1} by L̃^{(n)} := (L_2^{(n)}, L_3^{(n)}, ..., L_d^{(n)}). For each n ≥ 1, define the random field W^{(n)} := (W_k^{(n)}, k ∈ B(L̃^{(n)})) as follows: for each k := (k_2, k_3, ..., k_d) ∈ B(L̃^{(n)}),

W_k^{(n)} := Σ_{u∈{1,2,...,L_1^{(n)}}} X^{(n)}_{(u,k)}.

Then

(1/σ_n) Σ_{k∈B(L^{(n)})} X_k^{(n)} = [E(Σ_{k∈B(L̃^{(n)})} W_k^{(n)})²]^{−1/2} Σ_{k∈B(L̃^{(n)})} W_k^{(n)} ⇒ N(0, 1) as n → ∞.

Proof. It is easy to see that

E(Σ_{k∈B(L̃^{(n)})} W_k^{(n)})² = E(Σ_{k∈B(L̃^{(n)})} Σ_{u=1}^{L_1^{(n)}} X^{(n)}_{(u,k)})² = σ_n².

The random field W^{(n)} inherits the mixing and moment properties from the parent random field X^{(n)}. In addition,

sup_{k∈B(L̃^{(n)})} ||W_k^{(n)}||_∞ = sup_{k∈B(L̃^{(n)})} ||Σ_{u=1}^{L_1^{(n)}} X^{(n)}_{(u,k)}||_∞ ≤ sup_{k∈B(L̃^{(n)})} Σ_{u=1}^{L_1^{(n)}} ||X^{(n)}_{(u,k)}||_∞ ≤ Σ_{u=1}^{L_1^{(n)}} sup_{k∈B(L̃^{(n)})} ||X^{(n)}_{(u,k)}||_∞ ≤ Σ_{u=1}^{L_1^{(n)}} sup_{k∈B(L^{(n)})} ||X_k^{(n)}||_∞ = L_1^{(n)} θ_n → 0 as n → ∞.

By the induction hypothesis for d − 1, the CLT holds, and the proof of Lemma 3.1 is complete.
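The reduction in Lemma 3.1 is bookkeeping: collapsing the first coordinate leaves the total box sum, and hence σ_n², unchanged, and the sup-norm of the collapsed field grows by at most the factor L_1^{(n)}. A toy numerical check of those two facts (the array and names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# A field X on the box B((L1, L2, L3)) with d = 3.
X = rng.normal(size=(4, 5, 6))

# Collapse the first coordinate as in Lemma 3.1:
# W_k := sum over u of X_{(u, k)} lives on the (d-1)-dimensional box B((L2, L3)).
W = X.sum(axis=0)

# The total box sums coincide, so sigma_n^2 is preserved ...
print(np.isclose(X.sum(), W.sum()))                       # → True
# ... and the sup-norm of W is at most L1 times the sup-norm of X.
print(np.abs(W).max() <= X.shape[0] * np.abs(X).max())    # → True
```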
Lemma 3.2
Suppose that L_1^{(n)} → ∞ as n → ∞, together with the properties mentioned earlier, namely (1.2), (1.3), (3.1), and (3.2). For each n ∈ N and each j ∈ {1, 2, ..., L_1^{(n)}}, define the random variable

Y_j^{(n)} := Σ_{{k=(k_1,...,k_d)∈B(L^{(n)}): k_1 = j}} X_k^{(n)}.

Assume also that

sup_{j∈{1,2,...,L_1^{(n)}}} (s_j^{(n)})² → 0 as n → ∞, where (s_j^{(n)})² := E(Y_j^{(n)})².  (3.4)

Then

(1/σ_n) Σ_{k∈B(L^{(n)})} X_k^{(n)} = (1/σ_n) Σ_{j=1}^{L_1^{(n)}} Y_j^{(n)} ⇒ N(0, 1) as n → ∞.  (3.5)
Proof
We shall first give some notation and basic observations that will be used both in the main argument below for Lemma 3.2 and in the argument for Lemma 4.1 in Section 4.

For each n ∈ N and each j ∈ {1, 2, ..., L_1^{(n)}}, define the ("slice") set

slice_j^{(n)} := {k := (k_1, ..., k_d) ∈ B(L^{(n)}) : k_1 = j}.

Then for each such n and j, Y_j^{(n)} = Σ_{k∈slice_j^{(n)}} X_k^{(n)}. By Theorem 2.2, for each such n and j, the two numbers (s_j^{(n)})² = E(Y_j^{(n)})² = E(Σ_{k∈slice_j^{(n)}} X_k^{(n)})² and Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})² either are both 0 or are both positive and within a constant factor (in [c^{−1}, c], where c := (1 + ρ′(1))^d / (1 − ρ′(1))^d) of each other. Similarly, by (3.1) and Theorem 2.2, for each n ∈ N, the following three quantities are positive and within a constant factor (in the same interval [c^{−1}, c]) of each other:

σ_n² = E(Σ_{k∈B(L_n)} X_k^{(n)})² = E(Σ_{j=1}^{L_1^{(n)}} Y_j^{(n)})²;

Σ_{j=1}^{L_1^{(n)}} (s_j^{(n)})² = Σ_{j=1}^{L_1^{(n)}} E(Y_j^{(n)})² = Σ_{j=1}^{L_1^{(n)}} E(Σ_{k∈slice_j^{(n)}} X_k^{(n)})²;

Σ_{j=1}^{L_1^{(n)}} Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})² = Σ_{k∈B(L^{(n)})} E(X_k^{(n)})².

Finally, by (3.1), σ_n² ≪ σ_n⁴ as n → ∞. Here and below, the notation "≪" means O(...).

To prove (3.5), the main task will be to show that Lyapounov's condition holds (with exponent 4), that is,

lim_{n→∞} (1/σ_n⁴) Σ_{j=1}^{L_1^{(n)}} E(Y_j^{(n)})⁴ = 0.  (3.6)

For each n ∈ N, applying (1.3) and Theorem 2.3 (and using its constant C) and then adding up over all j ∈ {1, 2, ..., L_1^{(n)}}, we obtain that

Σ_{j=1}^{L_1^{(n)}} E(Y_j^{(n)})⁴ ≤ C ( Σ_{j=1}^{L_1^{(n)}} Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})⁴ + Σ_{j=1}^{L_1^{(n)}} (Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})²)² ).  (3.7)

Using (3.2) and Theorem 2.2, the first term on the right-hand side of (3.7) can be bounded above in the following way:

Σ_{j=1}^{L_1^{(n)}} Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})⁴ = Σ_j Σ_{k∈slice_j^{(n)}} E[(X_k^{(n)})² (X_k^{(n)})²] ≤ θ_n² Σ_j Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})² ≪ θ_n² σ_n² ≪ θ_n² σ_n⁴ = o(σ_n⁴) as n → ∞.

By (3.4) (and the fact σ_n² ≪ σ_n⁴), the second term on the right-hand side of (3.7) can be bounded above in the following way:

Σ_{j=1}^{L_1^{(n)}} (Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})²)² = Σ_j (Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})²)(Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})²) ≪ sup_{j∈{1,2,...,L_1^{(n)}}} (s_j^{(n)})² Σ_j Σ_{k∈slice_j^{(n)}} E(X_k^{(n)})² ≪ sup_{j∈{1,2,...,L_1^{(n)}}} (s_j^{(n)})² σ_n² = o(σ_n⁴) as n → ∞.

Hence (3.6) holds, and as a consequence the Lindeberg condition is satisfied. Applying Peligrad's CLT for d = 1 (see [3], Theorem 2.2) to the array (Y_j^{(n)}, n ∈ N, j ∈ {1, 2, ..., L_1^{(n)}}), one has that (3.5) holds. The proof of Lemma 3.2 is complete.

The following lemma deals with the most general case under the restrictions (3.1) and (3.2).
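For orientation, the Lyapounov quantity in (3.6) can be computed exactly in the simplest independent toy case, taking each slice sum Y_j to be a single Uniform[−1, 1] variable, so that E Y_j² = 1/3 and E Y_j⁴ = 1/5; the ratio then equals (9/5)/n and the condition visibly holds. This example is ours, not the paper's:

```python
from fractions import Fraction

# One "slice" = one i.i.d. Uniform[-1, 1] variable: E Y^2 = 1/3, E Y^4 = 1/5.
EY2, EY4 = Fraction(1, 3), Fraction(1, 5)

def lyapounov_ratio(n):
    """(sum_j E Y_j^4) / sigma_n^4 for n independent slices."""
    sigma2 = n * EY2              # variance of the total sum
    return (n * EY4) / sigma2 ** 2

print([lyapounov_ratio(n) for n in (10, 100, 1000)])
# The ratios shrink like (9/5)/n, so the Lyapounov condition (3.6) holds.
```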
Lemma 4.1
Suppose that for each n ∈ N, L_n ∈ N^d and X^{(n)} := (X_k^{(n)}, k ∈ B(L_n)) is a (not necessarily strictly stationary) random field such that for each k ∈ B(L_n), X_k^{(n)} has mean zero and finite second moment. Suppose that (1.2), (1.3), (3.1), and (3.2) are satisfied. Then

(1/σ_n) Σ_{k∈B(L_n)} X_k^{(n)} ⇒ N(0, 1) as n → ∞.
Proof
It suffices to show that for an arbitrary fixed infinite set S ⊆ N, there exists an infinite set T ⊆ S such that

(1/σ_n) Σ_{k∈B(L_n)} X_k^{(n)} ⇒ N(0, 1) as n → ∞, n ∈ T.  (4.1)

Again we write L_n as L^{(n)} := (L_1^{(n)}, L_2^{(n)}, ..., L_d^{(n)}). We freely use the notations Y_j^{(n)}, (s_j^{(n)})², and slice_j^{(n)} from Lemma 3.2 and its proof. The observations in the first part of the proof of Lemma 3.2 (that is, prior to the paragraph containing equation (3.6)) hold in our context here, and will be used freely. (Of course the convergence to 0 in (3.4) is not assumed, and may not hold, in our context here.) Applying those observations, without loss of generality (that is, without sacrificing (3.1) or (3.2)) we now normalize so that

∀n ≥ 1, Σ_{j=1}^{L_1^{(n)}} (s_j^{(n)})² = 1.  (4.2)

The proof of (4.1) (including the choice of an appropriate infinite set T ⊆ S) will be divided into twelve "steps".

Step 1:
Consider first the case where sup_{n∈S} L_1^{(n)} < ∞. By Lemma 3.1, the asymptotic normality in (4.1) holds with T := S, and for this case we are done.

Step 2:
Now henceforth suppose that sup_{n∈S} L_1^{(n)} = ∞. Let us choose an infinite set S_0 ⊆ S such that L_1^{(n)} → ∞ as n → ∞, n ∈ S_0.

For each n ≥ 1, let p(n, j), j ∈ {1, 2, ..., L_1^{(n)}}, be a permutation of the set {1, 2, ..., L_1^{(n)}} such that

(s^{(n)}_{p(n,1)})² ≥ (s^{(n)}_{p(n,2)})² ≥ ... ≥ (s^{(n)}_{p(n,L_1^{(n)})})².  (4.3)

By (4.2), we obtain that

Σ_{j=1}^{L_1^{(n)}} (s^{(n)}_{p(n,j)})² = 1.  (4.4)

As a consequence, by (4.3) and (4.4),

∀n ≥ 1, ∀j ∈ {1, 2, ..., L_1^{(n)}}, (s^{(n)}_{p(n,j)})² ≤ 1/j.  (4.5)

Of course, since L_1^{(n)} → ∞ as n → ∞, n ∈ S_0, one has that for each l ≥ 1, p(n, l) and the number (s^{(n)}_{p(n,l)})² are defined for all sufficiently large n ∈ S_0. That will be used repeatedly in what follows.

Let us now define the following infinite sets: S_1 ⊆ S_0 such that λ_1 := lim_{n→∞, n∈S_1} (s^{(n)}_{p(n,1)})² exists; S_2 ⊆ S_1 such that λ_2 := lim_{n→∞, n∈S_2} (s^{(n)}_{p(n,2)})² exists; S_3 ⊆ S_2 such that λ_3 := lim_{n→∞, n∈S_3} (s^{(n)}_{p(n,3)})² exists; and so on. By the Cantor diagonalization method, we obtain an infinite set S* := {ñ_1 < ñ_2 < ñ_3 < ...} such that ñ_l ∈ S_l and S_l ⊇ {ñ_l, ñ_{l+1}, ñ_{l+2}, ...}. For the resulting infinite set S*, one has that S* ⊆ S_0 ⊆ S, and by (4.3) one also has that

∀l ≥ 1, lim_{n→∞, n∈S*} (s^{(n)}_{p(n,l)})² = λ_l, with λ_1 ≥ λ_2 ≥ λ_3 ≥ ....  (4.6)

In addition, for every m ≥ 1, one has by (4.4) that Σ_{j=1}^m (s^{(n)}_{p(n,j)})² ≤ 1 for n ∈ S* sufficiently large such that L_1^{(n)} ≥ m; and hence for every m ≥ 1, Σ_{j=1}^m λ_j ≤ 1. Therefore

λ := Σ_{j=1}^∞ λ_j ≤ 1.  (4.7)

Step 3:
Consider first the case where λ = 0. Then λ_j = 0 for all j ≥ 1. By (4.5), (4.6), and a simple argument,

sup_{j∈{1,2,...,L_1^{(n)}}} (s^{(n)}_{p(n,j)})² → 0 as n → ∞, n ∈ S*.

By Lemma 3.2,

(1/σ_n) Σ_{j=1}^{L_1^{(n)}} Y_j^{(n)} = (1/σ_n) Σ_{k∈B(L^{(n)})} X_k^{(n)} ⇒ N(0, 1) as n → ∞, n ∈ S*.  (4.8)

Thus (4.1) holds with T := S*, and for this case we are done.

Step 4:
Now henceforth suppose that λ > 0. (Then by (4.6) and (4.7), λ_1 > 0.) Our task now is to construct an appropriate infinite set T ⊆ S*. Recall again that L_1^{(n)} → ∞ as n → ∞, n ∈ S*. For each q ≥ 1 and n ∈ S* such that L_1^{(n)} > q, define the set

Γ_1^{(q,n)} := {p(n,1), p(n,2), ..., p(n,q)}

and the random variable

W^{(q,n)} := Σ_{j∈Γ_1^{(q,n)}} Y_j^{(n)}.

Recall that (here in Step 4 and henceforth) λ > 0. By (4.6),

E(Y^{(n)}_{p(n,1)})² = (s^{(n)}_{p(n,1)})² > λ_1/2 for n ∈ S* sufficiently large.

For each positive integer q, the following observations hold: trivially, we have that Σ_{j∈Γ_1^{(q,n)}} E(Y_j^{(n)})² ≥ E(Y^{(n)}_{p(n,1)})² ≥ λ_1/2 for n ∈ S* sufficiently large. Hence, by Theorem 2.2, there exists a positive number c (not even depending on q) such that E(W^{(q,n)})² ≥ c for all n ∈ S* sufficiently large. That is the analog of (3.1) for sufficiently large n ∈ S* when the indices k := (k_1, ..., k_d) ∈ B(L^{(n)}) are restricted to the ones such that k_1 ∈ Γ_1^{(q,n)}. Hence, one can apply Lemma 3.1, and one obtains that

W^{(q,n)} / ||W^{(q,n)}||_2 ⇒ N(0, 1) as n → ∞, n ∈ S*.

The convergence above was shown for arbitrary q ≥ 1. By a well known theorem for continuous limiting distributions, one now has that

∀q ≥ 1, sup_{x∈R} |F_{W^{(q,n)}/||W^{(q,n)}||_2}(x) − Φ(x)| → 0 as n → ∞, n ∈ S*.

Here Φ(x) denotes the distribution function of a N(0, 1) random variable, and F_V the distribution function of a given random variable V.

Step 5:
For each q ≥ 1, let m_q ∈ N be such that

α(m_q) < 1/q².  (4.9)

Let n_1 < n_2 < ... ∈ S* be such that for all q ≥ 1, the following hold:

L_1^{(n_q)} > q³ m_q;  (4.10)

E(W^{(q,n_q)})² ≥ c and sup_{x∈R} |F_{W^{(q,n_q)}/||W^{(q,n_q)}||_2}(x) − Φ(x)| ≤ 1/q;  (4.11)

|Σ_{j=q+1}^{q³m_q} (s^{(n_q)}_{p(n_q,j)})² − Σ_{j=q+1}^{q³m_q} λ_j| ≤ 1/q.  (4.12)

(To justify (4.12), see (4.6).)

For each q ≥ 1, define the following four index sets:

Γ_1^{(q)} := {p(n_q, 1), p(n_q, 2), ..., p(n_q, q)},
Γ_2^{(q)} := {p(n_q, q+1), p(n_q, q+2), ..., p(n_q, q³m_q)},
Γ_3^{(q)} := {j ∈ {1, ..., L_1^{(n_q)}} − (Γ_1^{(q)} ∪ Γ_2^{(q)}) : ∃i ∈ Γ_1^{(q)} such that |i − j| ≤ m_q},
Γ_4^{(q)} := {j ∈ {1, ..., L_1^{(n_q)}} − (Γ_1^{(q)} ∪ Γ_2^{(q)}) : ∀i ∈ Γ_1^{(q)}, |i − j| > m_q}.  (4.13)

For each q ≥ 1, those four sets in (4.13) form a partition of the set {1, 2, ..., L_1^{(n_q)}} (see (4.10)). For a given q ≥ 1, one of the latter two sets Γ_3^{(q)}, Γ_4^{(q)} could perhaps be empty. Note that for each q ≥ 1, the set Γ_1^{(q)} here is the set Γ_1^{(q,n_q)} in the notations in Step 4.

For each q ≥ 1 and i ∈ {1, 2, 3, 4}, define the random variable

U_i^{(q)} := Σ_{j∈Γ_i^{(q)}} Y_j^{(n_q)}.  (4.14)

Note that for each q ≥ 1, U_1^{(q)} = W^{(q,n_q)} by (4.14) (see Step 4), and also

Σ_{i=1}^{4} U_i^{(q)} = Σ_{j=1}^{L_1^{(n_q)}} Y_j^{(n_q)} = Σ_{k∈B(L^{(n_q)})} X_k^{(n_q)}.  (4.15)

Step 6:
Notice that due to (1.3) and Theorem 2.2, followed by (4.2), we obtain that for each q ≥ 1,

0 ≤ ((1 − ρ′(1)) / (1 + ρ′(1)))^d Σ_{j∈Γ_1^{(q)}} E(Y_j^{(n_q)})² ≤ E(U_1^{(q)})² ≤ ((1 + ρ′(1)) / (1 − ρ′(1)))^d Σ_{j∈Γ_1^{(q)}} E(Y_j^{(n_q)})² ≤ ((1 + ρ′(1)) / (1 − ρ′(1)))^d < ∞.

Similarly, for each q ≥ 1,

0 ≤ E(U_4^{(q)})² ≤ ((1 + ρ′(1)) / (1 − ρ′(1)))^d < ∞.

Hence, there exists an infinite set T ⊆ N such that

η_1 := lim_{q→∞, q∈T} E(U_1^{(q)})² exists (in R), and η_4 := lim_{q→∞, q∈T} E(U_4^{(q)})² exists (in R).

Our goal now is to prove that for the infinite set T just specified here,

σ_{n_q}^{−1} Σ_{k∈B(L^{(n_q)})} X_k^{(n_q)} ⇒ N(0, 1) as q → ∞, q ∈ T.

That will accomplish (4.1) (and therefore complete the proof of Lemma 4.1) with the set T in (4.1) replaced here by the set {n_q : q ∈ T}, which is an infinite subset of S* and hence of S.

In what follows, the "N(0, 0) distribution" will of course mean the degenerate "point mass at 0". It will be tacitly kept in mind and used freely that if a sequence of random variables converges to 0 in the 2-norm, then it converges to 0 in probability and hence converges to N(0, 0) in distribution.
Step 7: "The asymptotic normality of U_1^{(q)}". By (4.11), we obtain that

sup_{x∈R} |F_{W^{(q,n_q)}/||W^{(q,n_q)}||_2}(x) − Φ(x)| → 0 as q → ∞,

hence

U_1^{(q)} / ||U_1^{(q)}||_2 ⇒ N(0, 1) as q → ∞, q ∈ T.

So, we obtain the asymptotic normality of the random variable U_1^{(q)}, namely

U_1^{(q)} ⇒ N(0, η_1) as q → ∞, q ∈ T.  (4.16)

Step 8: "The asymptotic normality of U_4^{(q)}". Recall from (4.10) that L_1^{(n_q)} → ∞ as q → ∞. In addition, by (4.5) and the definition of Γ_4^{(q)} in (4.13),

sup_{j∈Γ_4^{(q)}} E(Y_j^{(n_q)})² ≤ 1/(q³m_q + 1) → 0 as q → ∞, q ∈ T.

Trivially if η_4 = 0, or if instead η_4 > 0 then by Lemma 3.2 (applied with the indices k := (k_1, ..., k_d) ∈ B(L^{(n_q)}) restricted to the ones such that k_1 ∈ Γ_4^{(q)}), one has that

U_4^{(q)} ⇒ N(0, η_4) as q → ∞, q ∈ T.  (4.17)

Step 9: "Negligibility of U_2^{(q)}". By (4.7),

Σ_{j=q+1}^{∞} λ_j → 0 as q → ∞, q ∈ T.

Therefore,

Σ_{j=q+1}^{q³m_q} λ_j → 0 as q → ∞, q ∈ T,

which gives us by (4.12) that

Σ_{j=q+1}^{q³m_q} E(Y^{(n_q)}_{p(n_q,j)})² → 0 as q → ∞, q ∈ T.

As a consequence, referring to (4.13) and (4.14) and bounding above the second moment of the random variable U_2^{(q)} using Theorem 2.2, we obtain that

E(Σ_{j∈Γ_2^{(q)}} Y_j^{(n_q)})² → 0 as q → ∞, q ∈ T,

hence

U_2^{(q)} → 0 in probability as q → ∞, q ∈ T.  (4.18)

Step 10: "Negligibility of U_3^{(q)}". By (4.13), for each q ≥
1, card Γ_1^{(q)} = q, and hence by a simple argument card Γ_3^{(q)} ≤ 2q·m_q. Using the definition of U_3^{(q)} given in (4.14), by Theorem 2.2 and equations (4.5), (4.10), and (4.13) (and using an obvious constant C),

E(U_3^{(q)})² = E(Σ_{j∈Γ_3^{(q)}} Y_j^{(n_q)})² ≤ ((1 + ρ′(1)) / (1 − ρ′(1)))^d Σ_{j∈Γ_3^{(q)}} (s_j^{(n_q)})² ≤ C · Σ_{j∈Γ_3^{(q)}} 1/(q³m_q) ≤ C · 2q·m_q / (q³m_q) → 0 as q → ∞, q ∈ T.

Therefore,

U_3^{(q)} → 0 in probability as q → ∞, q ∈ T.  (4.19)

Step 11: "A Special Blocking Argument". We now return to the index sets Γ_1^{(q)} and Γ_4^{(q)} and the random variables U_1^{(q)} and U_4^{(q)} from (4.13), (4.14), and Steps 6, 7, and 8. We will set up (possibly "porous") "blocks" that alternate between indices in Γ_1^{(q)} and Γ_4^{(q)}. We carry out this process for the case where, for a given q ≥ 1, the minimum and maximum elements of Γ_1^{(q)} ∪ Γ_4^{(q)} both belong to Γ_4^{(q)}. Then we will indicate the trivial changes needed for the other cases.

Suppose q ≥ 1. Suppose that min(Γ_1^{(q)} ∪ Γ_4^{(q)}) and max(Γ_1^{(q)} ∪ Γ_4^{(q)}) each belong to Γ_4^{(q)}. Recall from (4.13) that card Γ_1^{(q)} = q. For some positive integer h(q) such that h(q) ≤ q, there exists an "alternating sequence" of nonempty, finite, (pairwise) disjoint subsets of Z, namely β_1^{(q)}, γ_1^{(q)}, β_2^{(q)}, γ_2^{(q)}, ..., β_{h(q)}^{(q)}, γ_{h(q)}^{(q)}, and β_{h(q)+1}^{(q)}, with the following properties:

Γ_1^{(q)} = ∪_{i=1}^{h(q)} γ_i^{(q)};  Γ_4^{(q)} = ∪_{i=1}^{h(q)+1} β_i^{(q)};

∀i ∈ {1, 2, ..., h(q)}, m_q + max β_i^{(q)} ≤ min γ_i^{(q)};

∀i ∈ {1, 2, ..., h(q)}, m_q + max γ_i^{(q)} ≤ min β_{i+1}^{(q)}.

(The last two properties come from the definition of Γ_4^{(q)} in (4.13).) Next, define the following random variables:

∀i ∈ {1, 2, ..., h(q)+1}, V_i^{(q)} := Σ_{j∈β_i^{(q)}} Y_j^{(n_q)}, and  (4.20)

∀i ∈ {1, 2, ..., h(q)}, Z_i^{(q)} := Σ_{j∈γ_i^{(q)}} Y_j^{(n_q)}.  (4.21)

Then by (4.14), we have the following identities:

U_1^{(q)} = Σ_{i=1}^{h(q)} Z_i^{(q)};  (4.22)

U_4^{(q)} = Σ_{i=1}^{h(q)+1} V_i^{(q)}.  (4.23)

For a given q ≥ 1, those notations were defined in the case where min(Γ_1^{(q)} ∪ Γ_4^{(q)}) and max(Γ_1^{(q)} ∪ Γ_4^{(q)}) both belong to Γ_4^{(q)}. In the other cases, the notations are the same, but with one or both of the following trivial changes: (i) if min(Γ_1^{(q)} ∪ Γ_4^{(q)}) belongs to Γ_1^{(q)}, then the set β_1^{(q)} is empty and the random variable V_1^{(q)} is identically 0; (ii) if max(Γ_1^{(q)} ∪ Γ_4^{(q)}) belongs to Γ_1^{(q)}, then the set β_{h(q)+1}^{(q)} is empty and the random variable V_{h(q)+1}^{(q)} is identically 0.

The rest of the argument here in Step 11 will be carried out in the case where for each q ≥ 1, min(Γ_1^{(q)} ∪ Γ_4^{(q)}) and max(Γ_1^{(q)} ∪ Γ_4^{(q)}) both belong to Γ_4^{(q)}. The changes needed in the argument to accommodate all other cases are trivial and need not be spelled out here.

For each q ≥ 1, construct independent copies of the random variables defined in (4.20) and (4.21), denoted Ṽ_1^{(q)}, Z̃_1^{(q)}, Ṽ_2^{(q)}, Z̃_2^{(q)}, ..., Ṽ_{h(q)}^{(q)}, Z̃_{h(q)}^{(q)}, and Ṽ_{h(q)+1}^{(q)}. By (4.22) and Step 7, we obtain that

Σ_{i=1}^{h(q)} Z_i^{(q)} ⇒ N(0, η_1) as q → ∞, q ∈ T.

By (4.9), the following holds:

Σ_{k=1}^{h(q)−1} α(σ(Z_i^{(q)}, 1 ≤ i ≤ k), σ(Z_{k+1}^{(q)})) ≤ Σ_{k=1}^{h(q)−1} α(2m_q) ≤ q·α(m_q) ≤ 1/q → 0 as q → ∞, q ∈ T.

Hence, by [1] (Theorem 25.56),

Σ_{i=1}^{h(q)} Z̃_i^{(q)} ⇒ N(0, η_1) as q → ∞, q ∈ T.  (4.24)

Similarly, we obtain that

Σ_{k=1}^{h(q)} α(σ(V_i^{(q)}, 1 ≤ i ≤ k), σ(V_{k+1}^{(q)})) ≤ Σ_{k=1}^{h(q)} α(2m_q) ≤ q·α(m_q) ≤ 1/q → 0 as q → ∞, q ∈ T,

and hence,

Σ_{i=1}^{h(q)+1} Ṽ_i^{(q)} ⇒ N(0, η_4) as q → ∞, q ∈ T.  (4.25)

By equations (4.24), (4.25), and the independence of the random variables Ṽ_i^{(q)}, Z̃_j^{(q)}, with i ∈ {1, 2, ..., h(q)+1} and j ∈ {1, 2, ..., h(q)}, we obtain that

Σ_{i=1}^{h(q)} Z̃_i^{(q)} + Σ_{i=1}^{h(q)+1} Ṽ_i^{(q)} ⇒ N(0, η_1 + η_4) as q → ∞, q ∈ T.  (4.26)

Next, for the entire "alternating sequence" V_1^{(q)}, Z_1^{(q)}, V_2^{(q)}, Z_2^{(q)}, ..., V_{h(q)+1}^{(q)}, we note from (4.9) that

2q · α(m_q) ≤ 2/q → 0 as q → ∞, q ∈ T,

and applying again [1] (Theorem 25.56) and (4.26), we obtain the analog of (4.26) with Z̃_i^{(q)} and Ṽ_i^{(q)} replaced by Z_i^{(q)} and V_i^{(q)}, that is,

U_1^{(q)} + U_4^{(q)} ⇒ N(0, η_1 + η_4) as q → ∞, q ∈ T.  (4.27)

Applying Slutski's theorem, by (4.18), (4.19), and (4.27), we obtain that

Σ_{k∈B(L^{(n_q)})} X_k^{(n_q)} = Σ_{j=1}^{L_1^{(n_q)}} Y_j^{(n_q)} = Σ_{i=1}^{4} U_i^{(q)} ⇒ N(0, η_1 + η_4) as q → ∞, q ∈ T.  (4.28)

Step 12: "Convergence of Variance". Refer to (3.1), the last paragraph of Step 6, and the last line of Step 11. To complete the proof of Lemma 4.1, we now only need to show that

σ_{n_q}² → η_1 + η_4 as q → ∞, q ∈ T.
(4.29)

To accomplish that, it will (by a well known theorem) suffice to show that there is an upper bound on the fourth moments of the random variables Σ_{i=1}^{4} U_i^{(q)}, q ∈ T. Referring to the first equality in (4.28), one of course has by (4.2), (1.3), and Theorem 2.2 that the set of numbers σ_{n_q}², q ∈ T, is bounded. Since ρ′(1) < 1, by Theorem 2.3, we obtain (for the constant C in Theorem 2.3) that

E(Σ_{i=1}^{4} U_i^{(q)})⁴ = E(Σ_{k∈B(L^{(n_q)})} X_k^{(n_q)})⁴ ≤ C ( Σ_{k∈B(L^{(n_q)})} E(X_k^{(n_q)})⁴ + (Σ_{k∈B(L^{(n_q)})} E(X_k^{(n_q)})²)² ).  (4.30)
Using (3.2) and Theorem 2.2, the first term on the right-hand side of (4.30) can be bounded above in the following way:

Σ_{k∈B(L^{(n_q)})} E(X_k^{(n_q)})⁴ = Σ_{k∈B(L^{(n_q)})} E[(X_k^{(n_q)})² (X_k^{(n_q)})²] ≤ θ_{n_q}² Σ_{k∈B(L^{(n_q)})} E(X_k^{(n_q)})² ≪ θ_{n_q}² · σ_{n_q}² → 0 as q → ∞, q ∈ T.
The second term on the right-hand side of (4.30) can be bounded above as follows: as $q \to \infty$, $q \in T$, by Theorem 2.2 again,
$$
\Biggl(\sum_{k \in B(L_{n_q})} E\bigl(X_k^{(n_q)}\bigr)^2\Biggr)^2 \ll \bigl(\sigma_{n_q}^2\bigr)^2 \ll 1.
$$
Hence $\sup_{q \in T} E\bigl(\sum_i U_i^{(q)}\bigr)^4 < \infty$. That completes the proof of Lemma 4.1.

Recall the Lindeberg condition in (1.4). Without loss of generality, we can assume $\sigma_n = 1$ for each $n \in \mathbb{N}$. Then by a simple argument, there exist numbers $\epsilon_1 \ge \epsilon_2 \ge \dots \downarrow 0$ such that
$$
\lim_{n \to \infty} \sum_{k \in B(L_n)} E\Bigl[\bigl(X_k^{(n)}\bigr)^2 I\bigl(\bigl|X_k^{(n)}\bigr| > \epsilon_n\bigr)\Bigr] = 0. \quad (5.1)
$$
We now truncate at the level $\epsilon_n$. Define the following random variables: for every $n \in \mathbb{N}$ and every $k \in B(L_n)$,
$$
X'^{(n)}_k := X^{(n)}_k I\bigl(|X^{(n)}_k| \le \epsilon_n\bigr) - E X^{(n)}_k I\bigl(|X^{(n)}_k| \le \epsilon_n\bigr) \quad \text{and} \quad (5.2)
$$
$$
X''^{(n)}_k := X^{(n)}_k I\bigl(|X^{(n)}_k| > \epsilon_n\bigr) - E X^{(n)}_k I\bigl(|X^{(n)}_k| > \epsilon_n\bigr). \quad (5.3)
$$
Obviously (since $E X^{(n)}_k = 0$ for each $n$ and $k$),
$$
\sum_{k \in B(L_n)} X^{(n)}_k = \sum_{k \in B(L_n)} X'^{(n)}_k + \sum_{k \in B(L_n)} X''^{(n)}_k. \quad (5.4)
$$
Since $\rho'(1) < 1$, we can again apply Theorem 2.2, and by (5.1) we obtain that
$$
0 \le E\Biggl(\sum_{k \in B(L_n)} X''^{(n)}_k\Biggr)^2
\le \Biggl(\frac{1 + \rho'(1)}{1 - \rho'(1)}\Biggr)^d \sum_{k \in B(L_n)} E\bigl(X''^{(n)}_k\bigr)^2
\le C \sum_{k \in B(L_n)} E\Bigl[\bigl(X^{(n)}_k\bigr)^2 I\bigl(|X^{(n)}_k| > \epsilon_n\bigr)\Bigr] \to 0 \quad \text{as } n \to \infty.
$$
Therefore $\sum_{k \in B(L_n)} X''^{(n)}_k \to 0$ in probability as $n \to \infty$. As a consequence, by Slutsky's theorem, to prove that
$$
\sum_{k \in B(L_n)} X^{(n)}_k \Rightarrow N(0,1) \quad \text{as } n \to \infty, \quad (5.5)
$$
it remains only to show that
$$
\sum_{k \in B(L_n)} X'^{(n)}_k \Rightarrow N(0,1) \quad \text{as } n \to \infty. \quad (5.6)
$$
Note that $\|X'^{(n)}_k\|_\infty \le 2\epsilon_n$ for every $n \in \mathbb{N}$ and every $k \in B(L_n)$. Since $\epsilon_n \to 0$ as $n \to \infty$ by (5.1), we have that
$$
\sup_{k \in B(L_n)} \bigl\|X'^{(n)}_k\bigr\|_\infty \to 0 \quad \text{as } n \to \infty.
$$
Hence by Lemma 4.1, (5.6) holds, and hence also (5.5). The proof of Theorem 1.1 is complete.
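The truncation device in (5.2)–(5.3) splits each centered variable into a bounded part and a tail part, and the decomposition identity (5.4) together with the sup-norm bound $2\epsilon_n$ on the truncated part can be checked exactly on a toy finite-valued random variable. A minimal sketch (the values, probabilities, and truncation level are illustrative):

```python
# Finite-valued random variable, centered so that E X = 0.
values = [-3.0, -1.0, 0.5, 1.0, 2.5]
probs  = [0.1, 0.25, 0.3, 0.25, 0.1]
mean = sum(p * v for v, p in zip(values, probs))
values = [v - mean for v in values]

eps = 1.2  # truncation level, analogue of epsilon_n
e_low  = sum(p * v for v, p in zip(values, probs) if abs(v) <= eps)
e_high = sum(p * v for v, p in zip(values, probs) if abs(v) >  eps)

def x_prime(v):   # analogue of (5.2): truncated part, re-centered
    return (v if abs(v) <= eps else 0.0) - e_low

def x_dprime(v):  # analogue of (5.3): tail part, re-centered
    return (v if abs(v) > eps else 0.0) - e_high

# Since E X = 0, the two centering constants cancel and X = X' + X''
# pointwise, which is the content of (5.4).
for v in values:
    assert abs(v - (x_prime(v) + x_dprime(v))) < 1e-12
assert abs(e_low + e_high) < 1e-12
# The truncated part is uniformly bounded by 2 * eps.
assert all(abs(x_prime(v)) <= 2 * eps for v in values)
```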
Suppose $d$ is a positive integer. For each $n \in \mathbb{N}$, suppose $L_n := (L_{n1}, L_{n2}, \dots, L_{nd})$ is an element of $\mathbb{N}^d$, and suppose $X^{(n)} := \bigl(X^{(n)}_k,\ k \in B(L_n)\bigr)$ is an array of random variables such that for each $k \in B(L_n)$, $E X^{(n)}_k = 0$ and $E\bigl(X^{(n)}_k\bigr)^2 < \infty$, and for at least one $k \in B(L_n)$, $E\bigl(X^{(n)}_k\bigr)^2 > 0$. Suppose also that the mixing assumption (1.2) and
$$
\lim_{m \to \infty} \rho'(m) < 1 \quad (6.1)
$$
hold, where for each $m \in \mathbb{N}$, $\rho'(m) := \sup_{n \in \mathbb{N}} \rho'\bigl(X^{(n)}, m\bigr)$.
For each $n \in \mathbb{N}$, define the random sum $S\bigl(X^{(n)}, L_n\bigr) := \sum_{k \in B(L_n)} X^{(n)}_k$ and define the quantity $\sigma_n^2 := E\bigl(S\bigl(X^{(n)}, L_n\bigr)\bigr)^2$. Suppose there exists a positive constant $C$ such that for every $n \in \mathbb{N}$ and every nonempty set $S \subseteq B(L_n)$,
$$
E\Biggl(\sum_{k \in S} X^{(n)}_k\Biggr)^2 \ge C \cdot \sum_{k \in S} E\bigl(X^{(n)}_k\bigr)^2. \quad (6.2)
$$
Suppose the Lindeberg condition (1.4) holds. Then $\sigma_n^{-1} S\bigl(X^{(n)}, L_n\bigr) \Rightarrow N(0,1)$ as $n \to \infty$.

For $d = 1$, this result was proved by Peligrad ([3], Theorem 2.1), with (6.2) replaced by a weaker assumption. The proof of Theorem 6.1 again involves induction on the dimension $d$, and is just a slight modification of the argument in Sections 3, 4, and 5 for Theorem 1.1. In essence, in place of (1.3) and Theorem 2.2, one uses (6.1), Theorem 2.1, and (6.2).

In fact, to make that argument work smoothly, it suffices to have a weaker version of (6.2) in which, for a given $n \in \mathbb{N}$, the sets $S \subseteq B(L_n)$ are restricted to certain special "rectangles" of the form $S = S_1 \times S_2 \times \dots \times S_d$, where for each $j \in \{1, 2, \dots, d\}$, the set $S_j$ either is $\{1, 2, \dots, L_{nj}\}$ or is $\{k\}$ for some $k \in \{1, 2, \dots, L_{nj}\}$.

References
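Condition (6.2) asks that the variance of a partial sum not collapse relative to the sum of the individual variances. As a hedged illustration of the simplest case: for independent centered summands one has $E\bigl(\sum_{k \in S} X_k\bigr)^2 = \sum_{k \in S} E X_k^2$ exactly, so (6.2) holds with $C = 1$. The sketch below verifies this over all nonempty index sets for weighted Rademacher variables (the weights are arbitrary illustrative values).

```python
import itertools

def second_moment_of_sum(coeffs):
    """Exact E[(sum_i c_i * eps_i)^2] for independent Rademacher signs eps_i,
    by averaging over all sign patterns."""
    n = len(coeffs)
    total = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        total += sum(c * e for c, e in zip(coeffs, signs)) ** 2
    return total / 2 ** n

a = [0.5, 1.5, 0.8, 1.1]  # standard deviations of the summands
for r in range(1, len(a) + 1):
    for S in itertools.combinations(range(len(a)), r):
        lhs = second_moment_of_sum([a[i] for i in S])
        rhs = sum(a[i] ** 2 for i in S)  # (6.2) with C = 1
        assert abs(lhs - rhs) < 1e-9
```

For genuinely dependent fields, (6.2) is a real restriction: negative correlations can make the left-hand side much smaller than the right.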
1. Bradley, R.C.: Introduction to Strong Mixing Conditions, Vols. 1, 2, & 3. Kendrick Press, Heber City, Utah (2007)
2. Miller, C.: Three theorems on ρ∗-mixing random fields. J. Theor. Probab., 867–882 (1994)
3. Peligrad, M.: On the asymptotic normality of sequences of weak dependent random variables. J. Theor. Probab., 703–715 (1996)
4. Peligrad, M.: Maximum of partial sums and an invariance principle for a class of weak dependent random variables. Proc. Amer. Math. Soc., 1181–1189 (1998)
5. Peligrad, M., Utev, S.A.: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. (1), 101–115 (2003)
6. Prohorov, Y.V.: Convergence of random processes and limit theorems in probability theory. Theor. Probab. Appl., 157–214 (1956)
7. Tone, C.: Kernel density estimators for random fields satisfying an interlaced mixing condition. Journal of Statistical Planning and Inference, 143