Central Limit Theorem for Linear Processes with Infinite Variance
Magda Peligrad^a and Hailin Sang^b

^a Department of Mathematical Sciences, University of Cincinnati, PO Box 210025, Cincinnati, OH 45221-0025, USA. E-mail address: [email protected]

^b National Institute of Statistical Sciences, PO Box 14006, Research Triangle Park, NC 27709, USA. E-mail address: [email protected]
MSC 2000 subject classification: Primary 60F05, 60G10, 60G42.

Key words and phrases: linear process, central limit theorem, martingale, mixing, infinite variance.
Abstract
This paper addresses the following classical question: given a sequence of identically distributed random variables in the domain of attraction of a normal law, does the associated linear process satisfy the central limit theorem? We study the question for several classes of dependent random variables. For independent and identically distributed random variables we show that the central limit theorem for the linear process is equivalent to the fact that the variables are in the domain of attraction of a normal law, answering in this way an open problem in the literature. The study is also motivated by models arising in economic applications, where often the innovations have infinite variance, the coefficients are not absolutely summable, and the innovations are dependent.
Let $(\xi_n)_{n\in\mathbb{Z}}$ be a sequence of identically distributed random variables and let $(c_{ni})_{1\le i\le m_n}$ be a triangular array of numbers. In this paper we analyze the asymptotic behavior of statistics of the type

$$S_n = \sum_{i=1}^{m_n} c_{ni}\,\xi_i \qquad (1)$$

when the variables are centered and satisfy

$$H(x) = E\big(\xi^2 I(|\xi|\le x)\big) \ \text{ is a slowly varying function at } \infty. \qquad (2)$$

(Supported in part by a Charles Phelps Taft Memorial Fund grant and NSA grant H98230-09-1-0005.)

Condition (2) is equivalent to the fact that $\xi$ is in the domain of attraction of a normal law, that is, there is a sequence of constants $b_n\to\infty$ such that $\sum_{i=1}^{n}\xi_i/b_n$ is convergent in distribution to a standard normal variable (see for instance Feller, 1966; Ibragimov and Linnik, 1971; Araujo and Giné, 1980). It is an open question to extend the general central limit theorem from equal weights to weighted sums of i.i.d. random variables with infinite variance. Linear combinations of identically distributed random variables are important, since many random evolutions, and also statistical procedures such as parametric or nonparametric estimation of regression with fixed design, produce statistics of type (1) (see for instance Chapter 9 in Beran, 1994, for the case of parametric regression, or the paper by Robinson, 1997, where kernel estimators are used for nonparametric regression). One example is the simple parametric regression model $Y_i = \beta\alpha_i + \xi_i$, where $(\xi_i)$ is a sequence of identically distributed random variables with marginal distribution satisfying (2), $(\alpha_i)$ is a sequence of real numbers and $\beta$ is the parameter of interest. The least squares estimator $\hat\beta$ of $\beta$, based on a sample of size $n$, satisfies

$$\hat\beta - \beta = \Big(\sum_{i=1}^{n}\alpha_i^2\Big)^{-1}\sum_{i=1}^{n}\alpha_i\,\xi_i,$$

so the representation of type (1) holds with $c_{ni} = \alpha_i/\big(\sum_{i=1}^{n}\alpha_i^2\big)$.

We shall also see that the asymptotic behavior of sums of variables of the form

$$X_k = \sum_{j=-\infty}^{\infty} a_{k+j}\,\xi_j \qquad (3)$$

can be obtained by studying sums of the type (1). We shall refer to such a process as a linear process with innovations $(\xi_i)_{i\in\mathbb{Z}}$. In 1971 Ibragimov and Linnik extended the central limit theorem from i.i.d. random variables to linear processes defined by (3), for innovations that have finite second moment, under the conditions $\sum_{j=-\infty}^{\infty} a_j^2 < \infty$ and $\mathrm{stdev}\big(\sum_{j=1}^{n}X_j\big)\to\infty$. They showed that

$$\sum_{j=1}^{n}X_j \Big/ \mathrm{stdev}\Big(\sum_{j=1}^{n}X_j\Big) \overset{D}{\to} N(0,1).$$

This result is striking, since $\mathrm{var}\big(\sum_{j=1}^{n}X_j\big)$ can be of order different from $n$; practically it can be any positive sequence going to infinity of an order $o(n^2)$. It was conjectured that a similar result might hold without the assumption of finite second moment. Steps in this direction are the papers by Knight (1991), Mikosch et al. (1995) and Wu (2003), who studied this problem under the additional assumption $\sum_{j=-\infty}^{\infty}|a_j| < \infty$. Our Theorem 2.5 positively answers this conjecture. Under condition (2) we show that $X_k$ is well defined if and only if

$$\sum_{j\in\mathbb{Z},\,a_j\neq 0} a_j^2\,H(|a_j|^{-1}) < \infty,$$

and under this condition we show that the central limit theorem for $\sum_{j=1}^{n}X_j$, properly normalized, is equivalent to condition (2).

As an example in this class we mention the particular linear process with regularly varying weights with exponent $-\alpha$, where $1/2 < \alpha < 1$. This means that the coefficients are of the form $a_n = n^{-\alpha}L(n)$ for $n\ge 1$, $a_n = 0$ for $n\le 0$, where $L(n)$ is a slowly varying function at $\infty$. This class incorporates the fractionally integrated processes that play an important role in financial econometrics, climatology and so on, and are widely studied. Such processes are defined for $0 < d < 1/2$ by

$$X_k = (1-B)^{-d}\xi_k = \sum_{i\ge 0} a_i\,\xi_{k-i}, \quad \text{where } a_i = \frac{\Gamma(i+d)}{\Gamma(d)\,\Gamma(i+1)},$$

and $B$ is the backward shift operator, $B\xi_k = \xi_{k-1}$. For this example, by the well-known fact that $\lim_{n\to\infty}\Gamma(n+x)/(n^x\Gamma(n)) = 1$ for any real $x$, we have $\lim_{n\to\infty} a_n/n^{d-1} = 1/\Gamma(d)$. Notice that these processes have long memory, because $\sum_{j\ge 0}|a_j| = \infty$.
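The Gamma-ratio asymptotics above are easy to check numerically. The following short script (our own illustration, not part of the paper; `fi_coefficients` is a made-up helper name) generates the fractionally integrated weights through the recursion $a_0 = 1$, $a_i = a_{i-1}(i-1+d)/i$, which follows from $\Gamma(i+d) = (i-1+d)\,\Gamma(i-1+d)$, and checks that $a_n\,n^{1-d}\,\Gamma(d)\to 1$:

```python
import math

def fi_coefficients(d, n):
    """Weights a_i = Gamma(i+d) / (Gamma(d) * Gamma(i+1)) of (1 - B)^{-d},
    computed by the stable recursion a_0 = 1, a_i = a_{i-1} * (i-1+d) / i."""
    a = [1.0]
    for i in range(1, n + 1):
        a.append(a[-1] * (i - 1 + d) / i)
    return a

d = 0.3                      # any 0 < d < 1/2 would do
a = fi_coefficients(d, 5000)
# a_n ~ n^{d-1} / Gamma(d): regular variation with exponent d - 1 in (-1, -1/2),
# so sum |a_i| diverges (long memory) while sum a_i^2 converges.
print(a[5000] * 5000 ** (1 - d) * math.gamma(d))  # ≈ 1
```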
This particular class was recently investigated by Peligrad and Sang (2010), where further reaching properties were pointed out.

Our study is not restricted to the class of independent identically distributed random variables. We also consider larger classes, including martingales and mixing processes. The results obtained for the class of martingale innovations are also useful to study more general innovations that can be approximated by martingale differences. The martingale approximation method was recently used by Jara et al. (2009) to study the attraction to stable laws with exponent $\alpha\in(0,2)$ for additive functionals of a stationary Markov chain.

There is a huge literature on the central limit theorem for linear processes with dependent innovations and finite second moment, but we are not aware of any study considering the infinite variance case in its full generality. A step in this direction, under the assumption $\sum_{j=-\infty}^{\infty}|a_j| < \infty$, is the paper by Tyran-Kamińska (2010).

In all the central limit theorems for variables with infinite variance the construction of the normalizer is rather complicated and is based heavily on the function $H(x)$. This is the reason why it is important to replace the normalizer by a selfnormalizer, constructed from the data. We mention in this direction the recent results by Kulik (2006), under the assumption that $\sum_{j=-\infty}^{\infty}|a_j| < \infty$, and by Peligrad and Sang (2010) for regularly varying weights with exponent $-\alpha$, where $1/2 < \alpha < 1$. In this paper, as in Mason (2005), we suggest a Raikov-type selfnormalizer based on a weighted sum of squares of the innovations.

Our paper is organized in the following way: Section 2 contains the definitions and the results; Section 3 contains the proofs. For convenience, in the Appendix, we give some auxiliary results and we also mention some known facts needed for the proofs.

In the sequel we shall use the following notations: a double indexed sequence with indexes $n$ and $i$ will be denoted by $a_{ni}$ and sometimes $a_{n,i}$; we use the notation $a_n\sim b_n$ instead of $a_n/b_n\to 1$; $a_n = o(b_n)$ means that $a_n/b_n\to 0$; $I(A)$ denotes the indicator function of $A$; the notation $\Rightarrow$ is used for convergence in distribution and also for convergence in probability to a constant.

In this paper we shall make two conventions in order to simplify the notations.

Convention 1. By convention, for $x = 0$ we set $x^2 H(|x|^{-1}) = 0$. For instance, we can write simply $\sum_{j\in\mathbb{Z}} a_j^2\,H(|a_j|^{-1}) < \infty$ instead of $\sum_{j\in\mathbb{Z},\,a_j\neq 0} a_j^2\,H(|a_j|^{-1}) < \infty$.

Convention 2. The second convention refers to the function $H(x)$ defined in (2). Since the case $E(\xi^2) < \infty$ is known, we shall consider the case $E(\xi^2) = \infty$. Let $b = \inf\{x\ge 0 : H(x) > 1\}$ and $H_b(x) = H(x\vee(b+1))$. Then clearly $b < \infty$, $H_b(x)\ge 1$ and $H_b(x) = H(x)$ for $x > b+1$. From now on we shall redenote $H_b(x)$ by $H(x)$. Therefore, since our results are asymptotic, without restricting the generality we shall assume that $H(x)\ge 1$ for all $x\ge 0$.

Our first results treat general weights and identically distributed martingale differences with infinite second moment. The case of finite second moment was treated in Peligrad and Utev (1997, 2006). We shall establish first a general theorem for martingale differences under a convergence in probability condition (5). This condition will be verified in the next main results for classes of martingale differences and i.i.d. random variables.
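A concrete example may help fix ideas before the formal statements (this illustration is ours, not part of the paper). For the symmetric Pareto-type law with $P(|\xi| > x) = x^{-2}$ for $x\ge 1$, a direct computation gives $H(x) = E(\xi^2 I(|\xi|\le x)) = 2\log x$, which is slowly varying although $E(\xi^2) = \infty$, so condition (2) holds. A short simulation shows that the self-normalized sums $\sum_{i=1}^{n}\xi_i/(\sum_{i=1}^{n}\xi_i^2)^{1/2}$ are then approximately standard normal:

```python
import math
import random
import statistics

random.seed(0)

def pareto2_symmetric():
    """One draw with tail P(|xi| > x) = x^(-2) for x >= 1 and a random sign,
    so H(x) = 2 log x is slowly varying while E(xi^2) is infinite."""
    u = random.random()                 # uniform on [0, 1)
    magnitude = (1.0 - u) ** -0.5       # inverse transform of the tail
    return magnitude if random.random() < 0.5 else -magnitude

def self_normalized_sum(n):
    """T_n = (sum of xi_i) / (sum of xi_i^2)^(1/2), the equal-weights statistic."""
    xs = [pareto2_symmetric() for _ in range(n)]
    return sum(xs) / math.sqrt(sum(x * x for x in xs))

reps = [self_normalized_sum(2000) for _ in range(1000)]
# The empirical law of T_n is close to N(0, 1):
print(round(statistics.mean(reps), 2), round(statistics.pvariance(reps), 2))
```

By sign symmetry the statistic has mean zero and second moment exactly one for every $n$, so the simulation mainly checks the shape of the limiting distribution.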
Theorem 2.1
Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of identically distributed martingale differences adapted to the filtration $(\mathcal{F}_k)_{k\in\mathbb{Z}}$ that satisfy (2), and let $(c_{nk})_{1\le k\le m_n}$ be a triangular array of real numbers such that

$$\sup_n \sum_{k=1}^{m_n} c_{nk}^2\,H(|c_{nk}|^{-1}) < \infty \quad\text{and}\quad \max_{1\le k\le m_n} c_{nk}^2\,H(|c_{nk}|^{-1}) \to 0 \ \text{ as } n\to\infty. \qquad (4)$$

Assume

$$\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 \Rightarrow 1 \ \text{ as } n\to\infty. \qquad (5)$$

Then

$$\sum_{k=1}^{m_n} c_{nk}\,\xi_k \Rightarrow N(0,1) \ \text{ as } n\to\infty. \qquad (6)$$

We shall mention two pairwise mixing type conditions that are sufficient for (5).

Proposition 2.1
Assume that all the conditions of Theorem 2.1 (except for (5)) are satisfied, and that one of the following two conditions holds:

(M1) There is a sequence of positive numbers $\psi_k\to 0$ such that for all positive numbers $a$ and $b$ and all integers $j$,

$$\mathrm{cov}\big(\xi_j^2 I(|\xi_j|\le a),\,\xi_{j+k}^2 I(|\xi_{j+k}|\le b)\big) \le \psi_k\,E\big(\xi^2 I(|\xi|\le a)\big)\,E\big(\xi^2 I(|\xi|\le b)\big).$$

(M2) There is a sequence of positive numbers $(\varphi_k)$ with $\sum_{k\ge 1}\varphi_k < \infty$ such that for all $a$ and $b$ and all integers $j$,

$$\mathrm{cov}\big(\xi_j^2 I(|\xi_j|\le a),\,\xi_{j+k}^2 I(|\xi_{j+k}|\le b)\big) \le a^2\,\varphi_k\,E\big(\xi^2 I(|\xi|\le b)\big).$$

If either (M1) or (M2) holds, then (5) is satisfied and therefore the conclusion of Theorem 2.1 holds.

Remark 2.1
According to the above proposition, for independent identically distributed innovations satisfying (2) and coefficients $(c_{ni})$ satisfying condition (4), the central limit theorem (6) holds.

For further applications to time series we shall comment on the normalized form of the above results, which is important for the case when condition (4) is not satisfied. Recall Conventions 1 and 2 and define

$$D_n = \inf\Big\{ s\ge 1 : \sum_{k=1}^{m_n}\frac{c_{nk}^2}{s^2}\,H\Big(\frac{s}{|c_{nk}|}\Big) \le 1 \Big\}. \qquad (7)$$

$D_n$ is well defined since, by Convention 2, $H(x)\ge 1$ and, since $H(x)$ is a slowly varying function, we have $\lim_{x\to\infty} x^{-2}H(x) = 0$. By using this definition along with Theorem 2.1 we obtain the following corollary:

Corollary 2.1
Let $(\xi_k)_{k\in\mathbb{Z}}$ be as in Theorem 2.1 and assume

$$\max_{1\le k\le m_n}\frac{c_{nk}^2}{D_n^2}\,H\Big(\frac{D_n}{|c_{nk}|}\Big) \to 0 \ \text{ as } n\to\infty \qquad (8)$$

and

$$\frac{1}{D_n^2}\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 \Rightarrow 1 \ \text{ as } n\to\infty. \qquad (9)$$

Then

$$\frac{1}{D_n}\sum_{k=1}^{m_n} c_{nk}\,\xi_k \Rightarrow N(0,1) \ \text{ as } n\to\infty. \qquad (10)$$

Moreover, as in Proposition 2.1, condition (9) is satisfied under either (M1) or (M2).

Clearly, by combining (9) with (10), we also have

$$\Big(\sum_{k=1}^{m_n} c_{nk}\,\xi_k\Big)\Big/\Big(\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2\Big)^{1/2} \Rightarrow N(0,1) \ \text{ as } n\to\infty. \qquad (11)$$

For equal weights, when $c_{n,i} = 1$ for all $n$ and $1\le i\le n$, $D_n$ becomes the standard normalizer for the central limit theorem for variables in the domain of attraction of a normal law:

$$D_n = \inf\Big\{ s\ge 1 : \frac{n}{s^2}\,H(s) \le 1 \Big\}.$$

Then it is well known that $D_n\to\infty$ and, by the properties of slowly varying functions, condition (8) is satisfied with $m_n = n$. For this case we easily obtain the following central limit theorem for martingales with infinite variance:
Corollary 2.2
Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of identically distributed martingale differences adapted to the filtration $(\mathcal{F}_k)_{k\in\mathbb{Z}}$ satisfying (2), and assume

$$\frac{1}{D_n^2}\sum_{k=1}^{n}\xi_k^2 \Rightarrow 1 \ \text{ as } n\to\infty. \qquad (12)$$

Let $M_n = \sum_{k=1}^{n}\xi_k$. Then

$$\frac{M_n}{D_n} \Rightarrow N(0,1) \ \text{ as } n\to\infty.$$

Moreover, as in Proposition 2.1, condition (12) is satisfied under either (M1) or (M2).

Remark 2.2
This corollary can be used to obtain the CLT for classes of stochastic processes that can be approximated by stationary martingale differences. For instance, assume $(\eta_k)_{k\in\mathbb{Z}}$ is a stationary sequence of centered, integrable random variables, $\mathcal{F}_i = \sigma(\eta_j, j\le i)$ and $V_n = \sum_{k=1}^{n}\eta_k$. An idea going back to Gordin (1969) is to decompose $V_n$ into a martingale with stationary differences and a telescoping rest called a coboundary. More precisely, $\eta_n = \xi_n + Z_{n-1} - Z_n$, where $(Z_n)$ is a stationary integrable sequence and $(\xi_n)$ is a stationary sequence of martingale differences. Volný (1993) gave necessary and sufficient conditions for such an approximation. Under the assumption that $E(V_n|\mathcal{F}_0)$ is convergent in $L_1$, we have $V_n = M_n + R_n$, where $M_n$ is a martingale with stationary differences adapted to $\mathcal{F}_n$, and $R_n = Z_0 - Z_n$ has the property that $\sup_n E|R_n| < \infty$. Then clearly, after normalizing by a sequence of constants $B_n$ converging to $\infty$, the limiting distribution of $V_n/B_n$ is equivalent to the limiting distribution of $M_n/B_n$.

By combining our Corollary 2.1 with the main result in Giné et al. (1997), we formulate next a theorem for i.i.d. sequences.
Theorem 2.2
Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of independent and identically distributed centered random variables. Then the following three statements are equivalent:

(1) $\xi$ is in the domain of attraction of a normal law (i.e. condition (2) is satisfied).
(2) For any sequence of constants $(c_{nk})_{1\le k\le m_n}$ satisfying (8), the CLT in (10) holds.
(3) For any sequence of constants $(c_{nk})_{1\le k\le m_n}$ satisfying (8), the selfnormalized CLT in (11) holds.

The implication (2) → (3) also follows by Mason (2005). We shall apply our general results to time series of the form (3) with identically distributed martingale differences as innovations. We shall prove first the following proposition:

Proposition 2.2 Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of identically distributed martingale differences adapted to the filtration $(\mathcal{F}_k)_{k\in\mathbb{Z}}$ that satisfy (2). The linear process $X_0 = \sum_{j=-\infty}^{\infty} a_j\,\xi_j$ is well defined in the almost sure sense under the condition

$$\sum_{j\in\mathbb{Z}} a_j^2\,H(|a_j|^{-1}) < \infty. \qquad (13)$$

If the innovations are independent and identically distributed random variables satisfying (2), then condition (13) is necessary and sufficient for the existence of $X_0$ a.s.

Denote $b_{nj} = a_{j+1} + \cdots + a_{j+n}$; with this notation,

$$S_n = \sum_{k=1}^{n} X_k = \sum_{j\in\mathbb{Z}} b_{nj}\,\xi_j. \qquad (14)$$

Construct $D_n$ by (7), where we replace $c_{nj}$ by $b_{nj}$. Then we have:

Theorem 2.3
Assume that $(\xi_k)_{k\in\mathbb{Z}}$ is a sequence of identically distributed martingale differences satisfying (2), the coefficients satisfy condition (13), and $\sum_k b_{nk}^2\to\infty$. Assume in addition that

$$\frac{1}{D_n^2}\sum_k b_{nk}^2\,\xi_k^2 \Rightarrow 1 \ \text{ as } n\to\infty. \qquad (15)$$

Then

$$\frac{S_n}{D_n} \Rightarrow N(0,1) \ \text{ as } n\to\infty. \qquad (16)$$

Notice that the normalizer $D_n$ is rather complicated and contains the slowly varying function $H(x)$. By combining, however, the convergences in (15) and (16), it is easy to see that we can use in applications the selfnormalized form of this theorem:

$$S_n\Big/\Big(\sum_k b_{nk}^2\,\xi_k^2\Big)^{1/2} \Rightarrow N(0,1) \ \text{ as } n\to\infty. \qquad (17)$$

By simple arguments, $D_n$ in (16) can also be replaced by $\sqrt{\pi/2}\,E(|S_n|)$, or by a consistent estimator of this quantity. We mention now sufficient conditions for the validity of (15).

Theorem 2.4
Assume that $(\xi_k)_{k\in\mathbb{Z}}$ is a sequence of identically distributed martingale differences that satisfy (2), the coefficients $(a_i)$ are as in Theorem 2.3, and one of the conditions (M1) or (M2) of Proposition 2.1 is satisfied. Then both the central limit theorem (16) and its selfnormalized form (17) hold.

In the independent case, by combining Theorem 2.4 with the result on the selfnormalized CLT in Giné et al. (1997), we have:

Theorem 2.5

Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of independent and identically distributed centered random variables. Then the following three statements are equivalent:

(1) $\xi$ satisfies condition (2).
(2) For any sequence of constants $(a_n)_{n\in\mathbb{Z}}$ satisfying (13) and $\sum_k b_{nk}^2\to\infty$, the CLT in (16) holds.
(3) For any sequence of constants $(a_n)_{n\in\mathbb{Z}}$ satisfying (13) and $\sum_k b_{nk}^2\to\infty$, the selfnormalized CLT in (17) holds.

This theorem is an extension of Theorem 18.6.5 in Ibragimov and Linnik (1971) from i.i.d. innovations with finite second moment to innovations in the domain of attraction of a normal law. It positively answers the question on the stability of the central limit theorem for i.i.d. variables under the formation of linear sums. The implication (2) → (3) also follows by Mason (2005).

The causal linear process is obtained when $a_i = 0$ for $i\le 0$. Then $X_k = \sum_{i\ge 1} a_i\,\xi_{k-i}$ is well defined if and only if

$$\sum_{j\ge 1} a_j^2\,H(|a_j|^{-1}) < \infty. \qquad (18)$$

For this case the coefficients $b_{nj}$ have the following expression:

$$b_{nj} = a_1 + \cdots + a_j \ \text{ for } j < n, \qquad b_{nj} = a_{j-n+1} + \cdots + a_j \ \text{ for } j\ge n,$$

and with this notation, $S_n = \sum_{i=1}^{\infty} b_{ni}\,\xi_{n-i}$.

For particular causal linear processes with coefficients $a_i = i^{-\alpha}L(i)$, where $1/2 < \alpha < 1$ and $L(i)$ is a slowly varying function at $\infty$ in the strong sense (i.e. there is $h(t)$ continuous such that $L(n) = h(n)$ and $h(t)$ is slowly varying), the normalizer can be made more precise. Peligrad and Sang (2010) studied the case of i.i.d. innovations and showed that

$$D_n \sim c_\alpha\,H^{1/2}(\eta_n)\,n^{3/2-\alpha}L(n),$$

where

$$c_\alpha = \frac{1}{1-\alpha}\Big(\int_0^\infty\big[x^{1-\alpha} - \max(x-1,0)^{1-\alpha}\big]^2\,dx\Big)^{1/2}$$

and $\eta_n$ is defined by

$$\eta_j = \inf\Big\{ s > 0 : \frac{H(s)}{s^2} \le \frac{1}{j} \Big\}, \quad j = 1, 2, \cdots \qquad (19)$$

Furthermore, in this context, Peligrad and Sang (2010) showed that the selfnormalizer can be estimated by observing only the variables $X_k$. More precisely, they showed that $c_\alpha\,\sqrt{n}\,a_n V_n/(A D_n) \Rightarrow 1$, where $V_n^2 = \sum_{i=1}^{n} X_i^2$ and $A^2 = \sum_{i=1}^{\infty} a_i^2$. Therefore

$$\frac{S_n}{\sqrt{n}\,a_n V_n} \Rightarrow N\Big(0,\ \frac{c_\alpha^2}{A^2}\Big).$$
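The reindexing behind these formulas can be sanity-checked numerically. The script below (ours, with hypothetical finitely supported weights; not from the paper) builds a causal process $X_k = \sum_{i\ge 1} a_i\,\xi_{k-i}$ with $a_i = 0$ for $i > M$ and confirms that $\sum_{k=1}^{n} X_k$ coincides with $\sum_{i\ge 1} b_{ni}\,\xi_{n-i}$ for the two-case coefficients $b_{ni}$ above:

```python
import random

random.seed(1)

# Hypothetical finitely supported causal weights: a_i = 0 for i <= 0 and i > M.
M, n = 6, 4
a = {i: (i ** -0.7 if 1 <= i <= M else 0.0) for i in range(-M, M + n + 1)}
xi = {j: random.gauss(0, 1) for j in range(-M, n + 1)}   # innovations

# X_k = sum_{i >= 1} a_i xi_{k-i} (causal form) and S_n = X_1 + ... + X_n.
X = {k: sum(a[i] * xi[k - i] for i in range(1, M + 1)) for k in range(1, n + 1)}
S_direct = sum(X[k] for k in range(1, n + 1))

def b(n, i):
    """b_{ni} = a_1 + ... + a_i for i < n;  a_{i-n+1} + ... + a_i for i >= n."""
    lo = 1 if i < n else i - n + 1
    return sum(a.get(m, 0.0) for m in range(lo, i + 1))

S_reindexed = sum(b(n, i) * xi[n - i] for i in range(1, M + n))
print(abs(S_direct - S_reindexed) < 1e-9)  # True
```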
By combining this result with Theorem 2.3, we notice that for this case we have the following striking law of large numbers:

$$\frac{A^2\sum_{i=1}^{\infty} b_{ni}^2\,\xi_{n-i}^2}{c_\alpha^2\,n\,a_n^2\,V_n^2} \Rightarrow 1 \ \text{ as } n\to\infty.$$

Another particular case of linear processes with i.i.d. innovations in the domain of attraction of a normal law was studied by Kulik (2006), under the condition $\sum_{j>0}|a_j| < \infty$. For this case his result is:

$$\frac{S_n}{V_n} \Rightarrow N\Big(0,\ \frac{\big(\sum_{i>0} a_i\big)^2}{A^2}\Big). \qquad (20)$$

It is an interesting related question to extend Kulik's result to martingale differences. We shall not pursue this path here, where the goal is to consider general coefficients.

We shall also study the case of weakly dependent random variables whose definition is based on the maximum coefficient of correlation.

Definition 2.1
Let $\mathcal{A}$ and $\mathcal{B}$ be two $\sigma$-algebras of events and define

$$\rho(\mathcal{A},\mathcal{B}) = \sup_{f\in L_2(\mathcal{A}),\,g\in L_2(\mathcal{B})}|\mathrm{corr}(f,g)|,$$

where $L_2(\mathcal{A})$ denotes the class of random variables that are $\mathcal{A}$-measurable and square integrable.

Definition 2.2
Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of random variables and let $\mathcal{F}_n^m = \sigma(\xi_i,\ n\le i\le m)$. We call the sequence $\rho$-mixing if

$$\rho_n = \sup_k \rho\big(\mathcal{F}_{-\infty}^k,\ \mathcal{F}_{k+n}^{\infty}\big) \to 0 \ \text{ as } n\to\infty.$$

This class is significant for studying functions of Gaussian processes, as well as additive functionals of Markov processes. A convenient reference for basic properties and for the computation of these coefficients for functions of Markov chains and functions of Gaussian processes is Bradley (2007), chapters 7, 9 and 27. The next theorem solves the same problem as Theorem 2.1 for this class of dependent random variables. The conditions imposed on the variables and mixing rates are similar to those used by Bradley (1988), who studied the central limit theorem for partial sums of stationary $\rho$-mixing sequences under (2). Bradley's result was extended in Shao (1993) in several directions, but still for partial sums. Our theorem extends Theorem 1 of Bradley (1988) from equal weights to linear processes, and Theorem 2.2 (b) in Peligrad and Utev (1997) to variables with infinite second moment.

Theorem 2.6
Let $(\xi_k)$ be a sequence of centered identically distributed random variables satisfying (2). Assume that $(\xi_k)$ is $\rho$-mixing with $\sum_k \rho(2^k) < \infty$ and $\rho(1) < 1$. Let $(c_{nk})$ be a triangular array of real numbers satisfying

$$\sup_n \sum_{k=1}^{m_n} c_{nk}^2\,H(|c_{nk}|^{-1}) < \infty \quad\text{and}\quad \max_{1\le k\le m_n} c_{nk}^2\,H(|c_{nk}|^{-1}) \to 0 \ \text{ as } n\to\infty.$$

Then

$$\frac{1}{B_n}\sum_{k=1}^{m_n} c_{nk}\,\xi_k \Rightarrow N(0,1) \ \text{ as } n\to\infty,$$

where $B_n = (\pi/2)^{1/2}\,E\big|\sum_{k=1}^{m_n} c_{nk}\,\xi_k\big|$.

Proof of Theorem 2.1.
The proof of Theorem 2.1 involves a few steps. Define

$$\xi_i' = \xi_{ni}' = \xi_i I(|c_{ni}\xi_i|\le 1) - E_{i-1}\big(\xi_i I(|c_{ni}\xi_i|\le 1)\big), \qquad \xi_i'' = \xi_{ni}'' = \xi_i I(|c_{ni}\xi_i| > 1) - E_{i-1}\big(\xi_i I(|c_{ni}\xi_i| > 1)\big),$$

where we write $E_i(X)$ instead of $E(X|\mathcal{F}_i)$. We show now that

$$\sum_{k=1}^{m_n} c_{nk}\,\xi_k'' \Rightarrow 0. \qquad (21)$$

Indeed, by item 3 of Lemma 4.1,

$$E\Big|\sum_{k=1}^{m_n} c_{nk}\,\xi_k''\Big| \le 2\sum_{i=1}^{m_n}|c_{ni}|\,E\big(|\xi|\,I(|c_{ni}\xi| > 1)\big) = o\Big(\sum_{i=1}^{m_n} c_{ni}^2\,H(|c_{ni}|^{-1})\Big) = o(1) \ \text{ as } n\to\infty.$$

To prove the theorem, by Theorem 3.1 in Billingsley (1999) and (21), it is enough to study the limiting distribution of the linear process associated to $(\xi_i')$. We shall verify the sufficient conditions for the CLT for sums of a triangular array of martingale differences with finite second moment, given for convenience in Theorem 4.1 in the Appendix.

We start by verifying point (a) of Theorem 4.1. Fix $0 < \varepsilon < 1$; then

$$E\big(\max_{1\le i\le m_n}|c_{ni}\xi_i'|^2\big) \le \varepsilon^2 + \sum_{k=1}^{m_n} E\big(|c_{nk}\xi_k'|^2 I(|c_{nk}\xi_k'| > \varepsilon)\big) \le \varepsilon^2 + \frac{1}{\varepsilon}\sum_{k=1}^{m_n}|c_{nk}|^3\,E|\xi_k'|^3$$
$$\le \varepsilon^2 + \frac{8}{\varepsilon}\sum_{k=1}^{m_n}|c_{nk}|^3\,E\big(|\xi_k|^3 I(|c_{nk}\xi_k|\le 1)\big) \le \varepsilon^2 + \frac{8}{\varepsilon}\,o\Big(\sum_{k=1}^{m_n} c_{nk}^2\,H(|c_{nk}|^{-1})\Big) \ \text{ as } n\to\infty,$$

where the last step uses item 4 of Lemma 4.1. Now we take into account condition (4) and obtain $E\big(\max_{1\le i\le m_n}|c_{ni}\xi_i'|^2\big)\to 0$ by letting $n\to\infty$ followed by $\varepsilon\to 0$.

In order to verify item (b) of Theorem 4.1, we have to study the limit in probability of $\sum_{k=1}^{m_n} c_{nk}^2(\xi_k')^2$. We start from the decomposition

$$\sum_{k=1}^{m_n} c_{nk}^2(\xi_k')^2 = \sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1) + \sum_{i=1}^{m_n} c_{ni}^2\big(E_{i-1}(\xi_i I(|c_{ni}\xi_i|\le 1))\big)^2$$
$$-\ 2\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k I(|c_{nk}\xi_k|\le 1)\,E_{k-1}\big(\xi_k I(|c_{nk}\xi_k|\le 1)\big) = A + B - 2C.$$

We shall show that it is enough to analyze the first term, by the following simple argument. We discuss now the term $B$. By the fact that $\big|c_{ni}E_{i-1}\big(\xi_i I(|c_{ni}\xi_i|\le 1)\big)\big|\le 1$, we have

$$0\le B \le \sum_{i=1}^{m_n}\big|c_{ni}E_{i-1}\big(\xi_i I(|c_{ni}\xi_i|\le 1)\big)\big| \quad\text{a.s.}$$

By the martingale property, $E_{i-1}\big(\xi_i I(|c_{ni}\xi_i|\le 1)\big) = -E_{i-1}\big(\xi_i I(|c_{ni}\xi_i| > 1)\big)$, whence

$$E(B) \le E\Big(\sum_{i=1}^{m_n}\big|c_{ni}E_{i-1}\big(\xi_i I(|c_{ni}\xi_i| > 1)\big)\big|\Big) \le \sum_{i=1}^{m_n}|c_{ni}|\,E\big(|\xi|\,I(|c_{ni}\xi| > 1)\big) = o(1) \ \text{ as } n\to\infty. \qquad (22)$$

Then, by the Cauchy-Schwarz inequality, condition (4) and (22), we get

$$E|C| \le \big(E(A)\,E(B)\big)^{1/2} \to 0 \ \text{ as } n\to\infty.$$

By these arguments, the limit in probability of $\sum_{k=1}^{m_n} c_{nk}^2(\xi_k')^2$ coincides with the limit of $\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1)$. Notice now that

$$\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1) = \sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 - \sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k| > 1).$$

Since $(c_{nk}\xi_k'')_k$ is a sequence of martingale differences, by the weak type inequality for the martingale square function (see Burkholder, 1966) there is a constant $c$ such that for any $\varepsilon > 0$,

$$P\Big(\sum_{k=1}^{m_n} c_{nk}^2(\xi_k'')^2 > \varepsilon/2\Big) \le \frac{c}{\sqrt{\varepsilon}}\,E\Big|\sum_{k=1}^{m_n} c_{nk}\,\xi_k''\Big|,$$

which, combined with (21) and the arguments in (22), gives

$$\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k| > 1) \Rightarrow 0. \qquad (23)$$

We just have to take into account condition (5) to conclude that $\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1) \Rightarrow 1$, and therefore $\sum_{k=1}^{m_n} c_{nk}^2(\xi_k')^2 \Rightarrow 1$. ◊

Proof of Proposition 2.1.
We assume that (M1) holds. Because of (23), it is enough to prove

$$\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1) \Rightarrow 1.$$

We start by computing the variance of $\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1)$ and, taking into account that the variables are identically distributed, we majorate the covariances by using the coefficients $\psi_i$ defined in condition (M1):

$$\mathrm{var}\Big(\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1)\Big) \le \sum_{k=1}^{m_n} c_{nk}^4\,E\big(\xi^4 I(|c_{nk}\xi|\le 1)\big)$$
$$+\ 2\sum_{i=1}^{m_n-1}\psi_i\sum_{k=1}^{m_n-i} c_{nk}^2\,c_{n,k+i}^2\,E\big(\xi^2 I(|c_{nk}\xi|\le 1)\big)\,E\big(\xi^2 I(|c_{n,k+i}\xi|\le 1)\big) = D + 2E.$$

By item 4 of Lemma 4.1,

$$D = \sum_{k=1}^{m_n} c_{nk}^4\,E\big(\xi^4 I(|c_{nk}\xi|\le 1)\big) = o\Big(\sum_{k=1}^{m_n} c_{nk}^2\,H(|c_{nk}|^{-1})\Big) = o(1) \ \text{ as } n\to\infty.$$

In order to estimate the second term, we split the sum in two parts, one up to $h$ and another after $h$, where $h$ is a fixed integer. Since $E\big(\xi^2 I(|c_{nk}\xi|\le 1)\big) = H(|c_{nk}|^{-1})$,

$$E = \sum_{i=1}^{m_n-1}\psi_i\sum_{k=1}^{m_n-i} c_{nk}^2\,c_{n,k+i}^2\,H(|c_{nk}|^{-1})\,H(|c_{n,k+i}|^{-1})$$
$$\le \Big(\sum_{i=1}^{h}\psi_i\Big)\Big(\max_{1\le k\le m_n} c_{nk}^2\,H(|c_{nk}|^{-1})\Big)\sum_{k=1}^{m_n} c_{nk}^2\,H(|c_{nk}|^{-1}) + \Big(\max_{h\le i\le m_n}\psi_i\Big)\Big(\sum_{k=1}^{m_n} c_{nk}^2\,H(|c_{nk}|^{-1})\Big)^2.$$

By letting $n\to\infty$ and then $h\to\infty$, taking into account conditions (4) and (M1), we obtain $E\to 0$ as $n\to\infty$. Then, clearly, by the above considerations, $\mathrm{var}\big(\sum_{k=1}^{m_n} c_{nk}^2(\xi_k')^2\big)\to 0$.

The proof of this proposition under (M2) is similar. Because $c_{ni}^2\,E\big(\xi_i^2 I(|c_{ni}\xi_i|\le 1)\big)\le 1$,

$$\mathrm{var}\Big(\sum_{k=1}^{m_n} c_{nk}^2\,\xi_k^2 I(|c_{nk}\xi_k|\le 1)\Big) \le \sum_{k=1}^{m_n} c_{nk}^4\,E\big(\xi^4 I(|c_{nk}\xi|\le 1)\big) + 2\sum_{i=1}^{m_n-1}\sum_{k=i+1}^{m_n}\varphi_{k-i}\,c_{nk}^2\,E\big(\xi^2 I(|c_{nk}\xi|\le 1)\big).$$

The first term is $o(1)$ as before. With $h$ fixed, we easily obtain

$$\sum_{i=1}^{m_n-1}\sum_{k=i+1}^{m_n}\varphi_{k-i}\,c_{nk}^2\,E\big(\xi^2 I(|c_{nk}\xi|\le 1)\big) \le h\Big(\max_{1\le k\le m_n} c_{nk}^2\,H(|c_{nk}|^{-1})\Big)\sum_{k\ge 1}\varphi_k + \sum_{i=h}^{\infty}\varphi_i\sum_{k=1}^{m_n} c_{nk}^2\,E\big(\xi^2 I(|c_{nk}\xi|\le 1)\big),$$

which converges to $0$ by letting first $n\to\infty$ followed by $h\to\infty$. ◊

Proof of Proposition 2.2
By the three series theorem for martingales (Theorem 2.16 in Hall and Heyde, 1980), $X_0$ exists in the almost sure sense if and only if:

1. $\sum_i P\big(|a_i\xi_i| > 1\,\big|\,\mathcal{F}_{i-1}\big) < \infty$ a.s.;
2. $\sum_i E\big(a_i\xi_i I(|a_i\xi_i|\le 1)\,\big|\,\mathcal{F}_{i-1}\big)$ converges a.s.;
3. $\sum_i \mathrm{Var}\big(a_i\xi_i I(|a_i\xi_i|\le 1)\,\big|\,\mathcal{F}_{i-1}\big) < \infty$ a.s.

Notice that, by taking into account Convention 1, the fact that the variables are identically distributed and item 2 of Lemma 4.1 from the Appendix,

$$\sum_i P(|a_i\xi_i| > 1) = \sum_i P(|a_i\xi| > 1) = \sum_i a_i^2\,o\big(H(|a_i|^{-1})\big) < \infty,$$

and this easily implies 1. Then, since $E_{i-1}(\xi_i) = 0$ gives $E_{i-1}\big(a_i\xi_i I(|a_i\xi_i|\le 1)\big) = -E_{i-1}\big(a_i\xi_i I(|a_i\xi_i| > 1)\big)$, by item 3 of Lemma 4.1 and again by the fact that the variables are identically distributed,

$$\sum_i E\big|E\big(a_i\xi_i I(|a_i\xi_i|\le 1)\,\big|\,\mathcal{F}_{i-1}\big)\big| \le \sum_i |a_i|\,E\big(|\xi|\,I(|a_i\xi| > 1)\big) = \sum_i a_i^2\,o\big(H(|a_i|^{-1})\big) < \infty.$$

This implies

$$\sum_i \big|E\big(a_i\xi_i I(|a_i\xi_i|\le 1)\,\big|\,\mathcal{F}_{i-1}\big)\big| < \infty \ \text{ a.s.} \qquad (24)$$

and 2. follows. Finally,

$$\sum_i E\big(a_i^2\xi_i^2 I(|a_i\xi_i|\le 1)\big) = \sum_i a_i^2\,E\big(\xi^2 I(|a_i\xi|\le 1)\big) = \sum_i a_i^2\,H(|a_i|^{-1}) < \infty.$$

Then

$$\sum_i E\big(a_i^2\xi_i^2 I(|a_i\xi_i|\le 1)\,\big|\,\mathcal{F}_{i-1}\big) < \infty \ \text{ a.s.},$$

which together with (24) gives 3. For the i.i.d. case the proof is similar, based on the i.i.d. version of the three series theorem. ◊

Proof of Theorem 2.3
We start by rewriting $S_n$ as in relation (14), by changing the order of summation:

$$S_n = \sum_{k=1}^{n} X_k = \sum_{j=-\infty}^{\infty}\Big(\sum_{k=1}^{n} a_{k+j}\Big)\xi_j = \sum_{j=-\infty}^{\infty} b_{nj}\,\xi_j.$$

We shall verify the conditions of Theorem 2.1. According to Corollary 2.1 it is sufficient to show that

$$\sup_k \frac{b_{nk}^2}{D_n^2}\,H\Big(\frac{D_n}{|b_{nk}|}\Big) \to 0 \ \text{ as } n\to\infty,$$

where

$$D_n = \inf\Big\{ x\ge 1 : \sum_k \frac{b_{nk}^2}{x^2}\,H\Big(\frac{x}{|b_{nk}|}\Big) \le 1 \Big\}.$$

Notice that condition (13) implies $\sum_k a_k^2 < \infty$. Therefore we can apply the argument from Peligrad and Utev (1997, pages 448-449) and obtain

$$\sup_k \frac{b_{nk}^2}{\sum_i b_{ni}^2} \to 0 \ \text{ as } n\to\infty, \qquad (25)$$

since we imposed $\sum_k b_{nk}^2\to\infty$. Notice that, by taking into account Convention 2, we obviously have

$$\frac{1}{\sum_k b_{nk}^2}\sum_k b_{nk}^2\,H\Big(\frac{(\sum_k b_{nk}^2)^{1/2}}{|b_{nk}|}\Big) \ge 1;$$

this implies $D_n^2 \ge \sum_k b_{nk}^2$, whence, by (25),

$$\sup_k \frac{b_{nk}^2}{D_n^2} \to 0 \ \text{ as } n\to\infty.$$

Now, by the properties of slowly varying functions, for every $\varepsilon > 0$ we know that $H(x) = o(x^\varepsilon)$ as $x\to\infty$. We then obtain

$$\sup_k \frac{b_{nk}^2}{D_n^2}\,H\Big(\frac{D_n}{|b_{nk}|}\Big) = o(1) \ \text{ as } n\to\infty.$$

This completes the proof of this theorem. ◊

Proof of Theorem 2.6
In order to prove Theorem 2.6 we start with the truncation argument

$$\xi'_{ni} = \xi_i I(|c_{ni}\xi_i|\le 1) - E\big(\xi_i I(|c_{ni}\xi_i|\le 1)\big), \qquad \xi''_{ni} = \xi_i I(|c_{ni}\xi_i| > 1) - E\big(\xi_i I(|c_{ni}\xi_i| > 1)\big).$$

As in the proof of Theorem 2.1,

$$E\Big|\sum_{k=1}^{m_n} c_{nk}\,\xi''_{nk}\Big| \to 0, \qquad (26)$$

whence, by Theorem 3.1 in Billingsley (1999), the proof is reduced to studying the asymptotic behavior of $\sum_{k=1}^{m_n} c_{nk}\,\xi'_{nk}$.

According to Theorem 4.1 and Theorem 5.5 in Utev (1990), given for convenience in the Appendix (Theorem 4.2), we have only to verify Lindeberg's condition. Denote $(\sigma'_n)^2 = E\big(\sum_{k=1}^{m_n} c_{nk}\,\xi'_{nk}\big)^2$. Our conditions on the mixing coefficients allow us, by relation (29), to bound $(\sigma'_n)^2$ above and below by the sum of the variances of the individual summands; so we can find two positive constants $C_1 < C_2$ such that

$$C_1\sum_{i=1}^{m_n} c_{ni}^2\,H(|c_{ni}|^{-1}) \le (\sigma'_n)^2 \le C_2\sum_{i=1}^{m_n} c_{ni}^2\,H(|c_{ni}|^{-1}). \qquad (27)$$

Lindeberg's condition is satisfied because Lyapunov's condition is satisfied. To see this, by (27) and item 4 of Lemma 4.1, we have

$$\frac{1}{(\sigma'_n)^3}\sum_{i=1}^{m_n}|c_{ni}|^3\,E\big(|\xi|^3 I(|c_{ni}\xi|\le 1)\big) \le \frac{1}{(\sigma'_n)^3}\sum_{i=1}^{m_n} c_{ni}^2\,o\big(H(|c_{ni}|^{-1})\big) = o(1).$$

The central limit theorem follows and we have

$$\Big(\sum_{k=1}^{m_n} c_{nk}\,\xi'_{nk}\Big)\Big/\sigma'_n \Rightarrow N(0,1) \ \text{ as } n\to\infty. \qquad (28)$$

Now, $\big(\sum_{k=1}^{m_n} c_{nk}\,\xi'_{nk}\big)^2/(\sigma'_n)^2$ is uniformly integrable and, by Theorem 3.4 in Billingsley (1999), this implies that $E\big(\big|\sum_{k=1}^{m_n} c_{nk}\,\xi'_{nk}\big|/\sigma'_n\big)\to\sqrt{2/\pi}$. By taking now into account (26), we obtain $E\big(\big|\sum_{k=1}^{m_n} c_{nk}\,\xi_k\big|/\sigma'_n\big)\to\sqrt{2/\pi}$, which, combined with (28) and (26), leads to the conclusion of this theorem. ◊

Appendix

We mention first a lemma which contains some equivalent formulations for variables in the domain of attraction of the normal law. It is Lemma 1 in Csörgő et al. (2003).
Lemma 4.1
Let $H(x) := E\big(X^2 I(|X|\le x)\big)$. The following statements are equivalent:

1. $H(x)$ is a slowly varying function at $\infty$;
2. $P(|X| > x) = o\big(x^{-2}H(x)\big)$;
3. $E\big(|X|\,I(|X| > x)\big) = o\big(x^{-1}H(x)\big)$;
4. $E\big(|X|^\alpha I(|X|\le x)\big) = o\big(x^{\alpha-2}H(x)\big)$ for $\alpha > 2$.

Theorem 4.1
Let $(D_{ni})_{1\le i\le k_n}$ be a square integrable martingale difference array and $(\mathcal{F}_i)$ a filtration of sigma algebras such that, for each $n$ and $1\le i\le k_n$, $D_{ni}$ is $\mathcal{F}_i$-measurable. Suppose that

(a) $\max_{1\le i\le k_n}|D_{ni}|$ is uniformly integrable;
(b) $\sum_{i=1}^{k_n} D_{ni}^2 \Rightarrow 1$ as $n\to\infty$.

Then $S_n \Rightarrow N(0,1)$ as $n\to\infty$, where $S_n = \sum_{i=1}^{k_n} D_{ni}$.

By combining Theorem 4.1 and Theorem 5.5 in Utev (1990), we formulate a general result for triangular arrays of $\rho$-mixing random variables.

Theorem 4.2
Assume $(Y_{nk})_{1\le k\le m_n}$ is a triangular array of centered random variables with finite second moment such that $\rho(1) < 1$ and $\sum_k \rho(2^k) < \infty$, where

$$\rho(k) = \sup_{s\ge 1,\,n\ge 1}\rho\big(\sigma(Y_{ni};\ i\le s),\ \sigma(Y_{ni};\ i\ge s+k)\big).$$

Set $S_n = \sum_{i=1}^{m_n} Y_{ni}$ and denote $\mathrm{var}(S_n) = \sigma_n^2$. Then there are two positive constants $C_1$ and $C_2$ such that

$$C_1\sum_{i=1}^{m_n} E(Y_{ni}^2) \le \sigma_n^2 \le C_2\sum_{i=1}^{m_n} E(Y_{ni}^2). \qquad (29)$$

Moreover, if Lindeberg's condition is satisfied:

$$\frac{1}{\sigma_n^2}\sum_{i=1}^{m_n} E\big(Y_{ni}^2\,I(|Y_{ni}| > \varepsilon\sigma_n)\big) \to 0 \ \text{ for every } \varepsilon > 0,$$

then

$$\frac{1}{\sigma_n}\sum_{i=1}^{m_n} Y_{ni} \Rightarrow N(0,1).$$

The authors are grateful to the referees for their useful suggestions, which improved the presentation of this paper.
References

[1] Araujo, A. and Giné, E. (1980). The Central Limit Theorem for Real and Banach Valued Random Variables. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York-Chichester-Brisbane.
[2] Beran, J. (1994). Statistics for Long-Memory Processes. Monographs on Statistics and Applied Probability. Chapman and Hall, New York.
[3] Billingsley, P. (1999). Convergence of Probability Measures. Second edition, Wiley, New York.
[4] Bradley, R. C. (1988). A central limit theorem for stationary ρ-mixing sequences with infinite variance. Ann. Probab.
[5] Bradley, R. C. (2007). Introduction to Strong Mixing Conditions. Volumes 1-3, Kendrick Press.
[6] Burkholder, D. L. (1966). Martingale transforms. Ann. Math. Statist.
[7] Csörgő, M., Szyszkowicz, B. and Wang, Q. (2003). Donsker's theorem for self-normalized partial sums processes. Ann. Probab.
[8] Feller, W. (1966). An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, New York.
[9] Gaenssler, P. and Haeusler, E. (1986). On martingale central limit theory. Dependence in Probability and Statistics, Birkhäuser, Boston, 303-334.
[10] Giné, E., Götze, F. and Mason, D. M. (1997). When is the Student t-statistic asymptotically standard normal? Ann. Probab., 1514-1531.
[11] Gordin, M. I. (1969). The central limit theorem for stationary processes. Soviet Math. Dokl., 1174-1176.
[12] Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press.
[13] Jara, M., Komorowski, T. and Olla, S. (2009). Limit theorems for additive functionals of a Markov chain. Ann. Appl. Probab., 2270-2300.
[14] Knight, K. (1991). Limit theory for M-estimates in an integrated infinite variance process. Econometric Theory.
[15] Kulik, R. (2006). Limit theorems for self-normalized linear processes. Statistics and Probability Letters.
[16] Ibragimov, I. A. and Linnik, Yu. V. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen.
[17] Mason, D. M. (2005). The asymptotic distribution of self-normalized triangular arrays. J. Theor. Probab., 853-870.
[18] Mikosch, T., Gadrich, T., Klüppelberg, C. and Adler, R. J. (1995). Parameter estimation for ARMA models with infinite variance innovations. Annals of Statistics.
[19] Peligrad, M. and Utev, S. (1997). Central limit theorem for linear processes. Ann. Probab., 443-456.
[20] Peligrad, M. and Utev, S. (2006). Central limit theorem for stationary linear processes. Ann. Probab., 1608-1622.
[21] Peligrad, M. and Sang, H. (2010). Asymptotic properties of self-normalized linear processes with long memory. arXiv:1006.1572.
[22] Robinson, P. M. (1997). Large-sample inference for nonparametric regression with dependent errors. Ann. Statist., 2054-2083.
[23] Shao, Q. (1993). On the invariance principle for ρ-mixing sequences of random variables with infinite variance. Chinese Ann. Math. Ser. B, 27-42.
[24] Tyran-Kamińska, M. (2010). Functional limit theorems for linear processes in the domain of attraction of stable laws. Statistics and Probability Letters, 1535-1541.
[25] Utev, S. A. (1990). Central limit theorem for dependent random variables. Prob. Theory and Math. Stat., Vol. 2, B. Grigelionis et al. (eds.), VSP/Mokslas, 519-528.
[26] Volný, D. (1993). Approximating martingales and the central limit theorem for strictly stationary processes. Stoch. Proc. Appl.
[27] Wu, W. B. (2003). Additive functionals of infinite-variance moving averages. Statistica Sinica 13.