Operator Fractional Brownian Motion and Martingale Differences
Hongshuai Dai [a], Tien-Chung Hu [b], June-Yung Lee [b]

September 28, 2018
Abstract
It is well known that martingale difference sequences are very useful in applications and theory. On the other hand, the operator fractional Brownian motion, as an extension of the well-known fractional Brownian motion, also plays an important role in both applications and theory. In this paper, we study the relationship between them: we construct an approximation sequence of operator fractional Brownian motion based on a martingale difference sequence.
Keywords:
Operator fractional Brownian motion, martingale differences, weak convergence
1. Introduction
Fractional Brownian motion (FBM) is a continuous Gaussian process with stationary increments, and one of the best-known self-similar processes. Studies of financial time series and telecommunication networks have shown that this kind of process, with its long-range dependence, can be a better model in some cases than the traditional standard Brownian motion. Due to its applications in the real world and its interesting theoretical properties, fractional Brownian motion has become an object of intense study. One line of research concerns its weak limit theorems; see, for example, Enriquez [7], Nieminen [15], Sottinen [17], Li and Dai [12] and the references therein.

Building on the study of FBMs, many authors have proposed generalizations of it and obtained many new processes. One extension of FBM is the operator fractional Brownian motion (OFBM). OFBMs are multivariate analogues of one-dimensional FBMs. They arise in the context of multivariate time series and long-range dependence (see, for example, Chung [1], Davidson and de Jong [4], Dolado and Marmol [6], Robinson [16], and Marinucci and Robinson [13]). Another context is that of queuing systems, where reflected OFBMs model the sizes of multiple queues in particular classes of queuing models. They are also studied in problems related to, for example, large deviations (see Delgado [5], and Konstantopoulos and Lin [9]). As for FBMs, weak limit theorems for OFBMs have been studied recently, and some new results on approximations of OFBMs have been obtained.

[a] College of Mathematics and Information Sciences, Guangxi University, Nanning, 530004, P.R. China
[b] Department of Mathematics, National Tsing-Hua University, Hsinchu, Taiwan, 30043
See Dai [2, 3] and the references therein.

It is well known that a martingale difference sequence is extremely useful because it imposes much milder restrictions on the memory of the sequence than independence does, yet most limit theorems that hold for independent sequences also hold for martingale difference sequences. In recent years, some researchers have used this type of sequence to construct approximations of known processes. For example, Nieminen [15] studied limit theorems for FBMs based on martingale difference sequences. This is a natural motivation for the present paper. The direct motivation is the recent work of Dai [2, 3], in which, based on a sequence of i.i.d. random variables, the author presented weak limit theorems for some special kinds of OFBMs.

In this short paper, we establish a weak limit theorem for a special case of OFBMs, which comes from Maejima and Mason [14]. The rest of the paper is organized as follows. In Section 2, we recall OFBMs and martingale difference sequences, and present the main result. Section 3 is devoted to the proof of the main result.
2. Operator fractional Brownian motion and martingale differences
In this section, we first introduce a special type of OFBM. Let $\mathrm{End}(\mathbb{R}^d)$ be the set of linear operators on $\mathbb{R}^d$ (endomorphisms) and $\mathrm{Aut}(\mathbb{R}^d)$ be the set of invertible linear operators (automorphisms) in $\mathrm{End}(\mathbb{R}^d)$. For convenience, we will not distinguish an operator $D \in \mathrm{End}(\mathbb{R}^d)$ from its associated matrix relative to the standard basis of $\mathbb{R}^d$. As usual, for $c > 0$,
$$c^D = \exp\big((\log c)D\big) = \sum_{k=0}^{\infty} \frac{(\log c)^k D^k}{k!}.$$
Throughout this paper, we will use $\|x\|$ to denote the usual Euclidean norm of $x \in \mathbb{R}^d$. Without confusion, for $A \in \mathrm{End}(\mathbb{R}^d)$, we also let $\|A\| = \max_{\|x\|=1} \|Ax\|$ denote the operator norm of $A$. It is easy to see that for $A, B \in \mathrm{End}(\mathbb{R}^d)$,
$$\|AB\| \le \|A\| \cdot \|B\|, \tag{2.1}$$
and for every $A = (A_{ij})_{d \times d} \in \mathrm{End}(\mathbb{R}^d)$,
$$\max_{1 \le i,j \le d} |A_{ij}| \le \|A\| \le d \max_{1 \le i,j \le d} |A_{ij}|. \tag{2.2}$$
Let $\sigma(A)$ be the collection of all eigenvalues of $A$. We denote
$$\lambda_A = \min\{\mathrm{Re}\,\lambda : \lambda \in \sigma(A)\} \quad \text{and} \quad \Lambda_A = \max\{\mathrm{Re}\,\lambda : \lambda \in \sigma(A)\}. \tag{2.3}$$
Let $x'$ denote the transpose of a vector $x \in \mathbb{R}^d$. We now extend the fractional Brownian motion of Riemann-Liouville type studied by Lévy [11, p. 357] to the multivariate case.

Definition 2.1
Let $D$ be a linear operator on $\mathbb{R}^d$ with $0 < \lambda_D, \Lambda_D < 1$. For $t \in \mathbb{R}_+$, define
$$X(t) = \int_0^t (t-u)^{D - I/2}\, dW(u), \tag{2.4}$$
where $W(u) = \{W_1(u), \cdots, W_d(u)\}'$ is a standard $d$-dimensional Brownian motion. We call the process $X = \{X(t)\}$ an operator fractional Brownian motion of Riemann-Liouville type (RL-OFBM).

As is standard in the multivariate context, we assume that the RL-OFBM is proper. A random variable in $\mathbb{R}^d$ is proper if the support of its distribution is not contained in a proper hyperplane of $\mathbb{R}^d$.

Remark 2.1
The operator fractional Brownian motion in the current work is a special case of the operator fractional Brownian motions in the work of Maejima and Mason [14, Theorem 3.1].
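As an illustration, the stochastic integral in (2.4) can be discretized on a grid; the matrix power $(t-u)^{D-I/2}$ is computed through the exponential series defining $c^D$ in Section 2. The following is a minimal numerical sketch, not part of the paper: the function names, the grid size, and the truncation of the series are our own assumptions.

```python
import numpy as np

def mat_power(A, c, terms=50):
    """c**A = exp((log c) * A), computed by the truncated exponential series (c > 0)."""
    M = np.log(c) * A
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def rl_ofbm(D, n=500, t_grid=None, seed=None):
    """Riemann-sum discretization of X(t) = int_0^t (t - u)^(D - I/2) dW(u), eq. (2.4)."""
    rng = np.random.default_rng(seed)
    d = D.shape[0]
    E = D - 0.5 * np.eye(d)                         # the exponent D - I/2
    dW = rng.standard_normal((n, d)) / np.sqrt(n)   # Brownian increments on [0, 1]
    u = (np.arange(n) + 0.5) / n                    # midpoints, so that t - u > 0
    if t_grid is None:
        t_grid = np.linspace(0.0, 1.0, 11)
    X = np.zeros((len(t_grid), d))
    for j, t in enumerate(t_grid):
        for i in range(int(t * n)):
            X[j] += mat_power(E, t - u[i]) @ dW[i]
    return t_grid, X
```

For a diagonal $D = \mathrm{diag}(H_1, \cdots, H_d)$ the components decouple into independent one-dimensional Riemann-Liouville FBMs with indices $H_k$, which gives a quick sanity check of the construction.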
Remark 2.2
The RL-OFBM $X$ defined by (2.4) is an operator self-similar Gaussian process.

In this short note, we want to obtain an approximation of RL-OFBMs. Inspired by Nieminen [15], we construct an approximation sequence of the RL-OFBM $X$ by martingale differences. Let $\{\xi^{(n)} = (\xi^{(n)}_i, \mathcal{F}^n_i)_{1 \le i \le n}\}_{n \in \mathbb{N}}$ be a sequence of square-integrable martingale differences such that for every sequence $\{i_n\}$ with $\lim_{n\to\infty} i_n = \infty$, where $1 \le i_n \le n$,
$$\lim_{n \to \infty} n\,\big(\xi^{(n)}_{i_n}\big)^2 = 1, \quad a.s., \tag{2.5}$$
and
$$\max_{1 \le i \le n} \big|\xi^{(n)}_i\big| \le \frac{C}{\sqrt{n}}, \quad a.s., \tag{2.6}$$
for some constant $C > 0$. The following lemma follows from Jacod and Shiryaev [10].

Lemma 2.1
Lemma 2.1
Under the condition (2.6) and the condition
$$\sum_{i=1}^{\lfloor nt \rfloor} \big(\xi^{(n)}_i\big)^2 \to t, \quad a.s., \tag{2.7}$$
the processes
$$B_n(t) = \sum_{i=1}^{\lfloor nt \rfloor} \xi^{(n)}_i \tag{2.8}$$
converge in distribution to a Brownian motion $B$, as $n \to \infty$.

Remark 2.3
Such sequences are very useful, since they are easy to obtain in practice. See Nieminen [15], for example.

Below, we extend Lemma 2.1 to the $d$-dimensional case. Define
$$\eta^{(n)}_i = \big(\xi^{(n)}_{i,1}, \cdots, \xi^{(n)}_{i,d}\big)', \tag{2.9}$$
where $\xi^{(n)}_{i,k}$, $k = 1, \cdots, d$, are independent copies of $\xi^{(n)}_i$ in Lemma 2.1. Define
$$\eta_n(t) = \sum_{i=1}^{\lfloor nt \rfloor} \eta^{(n)}_i. \tag{2.10}$$
Then $\{\eta^{(n)}\}_{n \in \mathbb{N}} = \{\eta^{(n)}_i, \mathcal{F}^n_i\}$ is still a sequence of square-integrable martingale differences on the probability space $(\Omega, \mathcal{F}, P)$. Inspired by Lemma 2.1, we have the following lemma.

Lemma 2.2
Under conditions (2.6) and (2.7), the sequence of processes $\eta_n(t)$ converges in law to a $d$-dimensional Brownian motion $W$, as $n \to \infty$.

Noting that $W_i(u)$, $i = 1, \cdots, d$, are mutually independent, and so are $\xi^{(n)}_{i,k}$, we can directly get Lemma 2.2 from Lemma 2.1 and Theorem 11.4.4 in Whitt [19].

Inspired by Lemma 2.2 and (2.4), we construct the approximation sequence by
$$X_n(t) = \sum_{i=1}^{\lfloor nt \rfloor} n \int_{(i-1)/n}^{i/n} \Big( \frac{\lfloor nt \rfloor}{n} - u \Big)_+^{D - I/2} \eta^{(n)}_i \, du. \tag{2.11}$$
Our main objective in this paper is to state and prove the following theorem.

Theorem 2.1
The sequence of processes $\{X_n(t), t \in [0,1]\}$ given by (2.11) converges weakly, as $n \to \infty$, to the operator fractional Brownian motion $X$ given by (2.4).

In the rest of this paper, most of the estimates contain unspecified constants. An unspecified positive and finite constant will be denoted by $\tilde{K}$, which may not be the same in each occurrence.
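To make the construction concrete, here is a minimal numerical sketch of (2.11). We use scaled Rademacher variables $\xi^{(n)}_{i,k} = \pm 1/\sqrt{n}$, which satisfy (2.5)-(2.7), and a midpoint rule for the inner integral; the function names and the quadrature resolution `m` are our own choices, not part of the paper.

```python
import numpy as np

def mat_power(A, c, terms=50):
    """c**A = exp((log c) * A) via the truncated exponential series (c > 0)."""
    M = np.log(c) * A
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def x_n(D, t, n=200, m=8, seed=None):
    """Approximation X_n(t) of eq. (2.11) driven by Rademacher martingale differences."""
    rng = np.random.default_rng(seed)
    d = D.shape[0]
    E = D - 0.5 * np.eye(d)                                   # the exponent D - I/2
    eta = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(n)   # eta_i^(n): +-1/sqrt(n) entries
    tn = np.floor(n * t) / n                                  # floor(nt)/n
    X = np.zeros(d)
    for i in range(1, int(np.floor(n * t)) + 1):
        # midpoint quadrature of int_{(i-1)/n}^{i/n} (floor(nt)/n - u)_+^(D - I/2) du
        nodes = (i - 1) / n + (np.arange(m) + 0.5) / (m * n)
        integral = sum(mat_power(E, tn - u) for u in nodes if tn - u > 0) / (m * n)
        X += n * integral @ eta[i - 1]
    return X
```

Theorem 2.1 then says that, for large $n$, the law of $(x_n(t_1), \cdots, x_n(t_q))$ over many independent draws approaches that of $(X(t_1), \cdots, X(t_q))$ from (2.4).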
3. Proof of Theorem 2.1
In order to prove the main result of this paper, we need a technical lemma. Before we state it, we introduce the following notation:
$$K(t,s) = (t-s)_+^{D - I/2} = \big(K_{i,j}(t,s)\big)_{d \times d}, \tag{3.1}$$
and
$$K_n(t,s) = \Big(\frac{\lfloor nt \rfloor}{n} - s\Big)_+^{D - I/2} = \big(K^n_{i,j}(t,s)\big)_{d \times d}. \tag{3.2}$$
The technical lemma follows.

Lemma 3.1
For any $k, j \in \{1, 2, \cdots, d\}$,
$$\sum_{i=1}^{n} n^2 \int_{(i-1)/n}^{i/n} K^n_{k,j}(t_l, s)\, ds \int_{(i-1)/n}^{i/n} K^n_{k,j}(t_q, s)\, ds\, \big(\xi^{(n)}_{i,j}\big)^2 \to \int_0^1 K_{k,j}(t_l, s) K_{k,j}(t_q, s)\, ds, \quad a.s. \tag{3.3}$$
for $t_l, t_q \in [0,1]$, as $n \to \infty$.

Before we prove it, we need the following lemma, which is due to Maejima and Mason [14].

Lemma 3.2
Lemma 3.2
Let $D \in \mathrm{End}(\mathbb{R}^d)$. If $\lambda_D > 0$ and $r > 0$, then for any $\delta > 0$ there exist positive constants $K_1$ and $K_2$ such that
$$\|r^D\| \le \begin{cases} K_1\, r^{\lambda_D - \delta}, & \text{for all } r \le 1, \\ K_2\, r^{\Lambda_D + \delta}, & \text{for all } r \ge 1. \end{cases} \tag{3.4}$$

Next, we give the detailed proof of Lemma 3.1.

Proof of Lemma 3.1:
In order to simplify the discussion, we split the proof into two steps.
Step 1.
We claim that for any $t \in [0,1]$,
$$\sum_{i=1}^{n} n^2 \Big( \int_{(i-1)/n}^{i/n} K_{k,j}(t,s)\, ds \Big)^2 \big(\xi^{(n)}_{i,j}\big)^2 \to \int_0^1 K_{k,j}(t,s)^2\, ds, \quad a.s., \tag{3.5}$$
as $n \to \infty$. For convenience, define
$$G_n(t,u) = n^{3/2} \sum_{i=1}^n 1_{((i-1)/n,\, i/n]}(u) \int_{(i-1)/n}^{i/n} K_{k,j}(t,s)\, ds\; \xi^{(n)}_{i,j}.$$
Therefore, we have
$$\int_0^1 G_n(t,u)^2\, du = \sum_{i=1}^{n} n^2 \Big( \int_{(i-1)/n}^{i/n} K_{k,j}(t,s)\, ds \Big)^2 \big(\xi^{(n)}_{i,j}\big)^2 \le \sum_{i=1}^{n} n \int_{(i-1)/n}^{i/n} K_{k,j}(t,s)^2\, ds\, \big(\xi^{(n)}_{i,j}\big)^2, \tag{3.6}$$
where we have used the Cauchy-Schwarz inequality. Therefore, by (2.6),
$$\int_0^1 G_n(t,u)^2\, du \le \tilde{K} \int_0^1 K_{k,j}(t,s)^2\, ds. \tag{3.7}$$
On the other hand, by (2.2) and Lemma 3.2,
$$\big| K_{k,j}(t,s) \big| \le \big\| K(t,s) \big\| \le \tilde{K}\, (t-s)_+^{(\lambda_D - \delta) - 1/2}, \tag{3.8}$$
since $t - s \in [0,1]$. Hence
$$\int_0^1 G_n(t,u)^2\, du \le \tilde{K} \int_0^1 K_{k,j}(t,s)^2\, ds \le \tilde{K} \int_0^t (t-s)^{2(\lambda_D - \delta) - 1}\, ds < \infty, \tag{3.9}$$
since $\lambda_D - \delta > 0$. Therefore, $\{G_n(t,\cdot)^2\}$ is uniformly integrable.

On the other hand, we have, for almost every $u \in (0,1)$,
$$G_n(t,u)^2 \to K_{k,j}(t,u)^2, \quad a.s., \tag{3.10}$$
since for $u \in ((i-1)/n,\, i/n]$,
$$n \int_{(i-1)/n}^{i/n} K_{k,j}(t,s)\, ds \to K_{k,j}(t,u), \quad \text{as } n \to \infty,$$
and $n (\xi^{(n)}_{i,j})^2 \to 1$ by condition (2.5). By (3.9) and (3.10), we get that, as $n \to \infty$,
$$\int_0^1 G_n(t,u)^2\, du \to \int_0^1 K_{k,j}(t,s)^2\, ds, \quad a.s. \tag{3.11}$$
Therefore, (3.5) holds.

Step 2. We prove the original claim. In order to simplify the discussion, we let $t^n_q = \frac{\lfloor n t_q \rfloor}{n}$ and $t^n_l = \frac{\lfloor n t_l \rfloor}{n}$. By (3.5), we can get
$$\sum_{i=1}^{n} n^2 \int_{(i-1)/n}^{i/n} K_{k,j}(t_l,s)\, ds \int_{(i-1)/n}^{i/n} K_{k,j}(t_q,s)\, ds\, \big(\xi^{(n)}_{i,j}\big)^2 \to \int_0^1 K_{k,j}(t_l,s) K_{k,j}(t_q,s)\, ds, \quad a.s. \tag{3.12}$$
for $t_l, t_q \in [0,1]$, as $n \to \infty$. In fact, it follows from the argument for (3.5) that
$$\sum_{i=1}^{n} n^2 \Big( \int_{(i-1)/n}^{i/n} \big( K_{k,j}(t_l,s) + K_{k,j}(t_q,s) \big)\, ds \Big)^2 \big(\xi^{(n)}_{i,j}\big)^2 \to \int_0^1 \big( K_{k,j}(t_l,s) + K_{k,j}(t_q,s) \big)^2\, ds. \tag{3.13}$$
On the other hand, with all inner integrals taken over $((i-1)/n,\, i/n]$, we have
$$\Big( \int \big( K_{k,j}(t_l,s) + K_{k,j}(t_q,s) \big)\, ds \Big)^2 = \Big( \int K_{k,j}(t_l,s)\, ds \Big)^2 + \Big( \int K_{k,j}(t_q,s)\, ds \Big)^2 + 2 \int K_{k,j}(t_l,s)\, ds \int K_{k,j}(t_q,s)\, ds. \tag{3.14}$$
Hence (3.5), (3.13), and (3.14) imply (3.12).

Therefore, in order to prove (3.3), it suffices to prove that
$$\sum_{i=1}^{n} n^2 \Big( \int_{(i-1)/n}^{i/n} K_{k,j}(t_l,s)\, ds \int_{(i-1)/n}^{i/n} K_{k,j}(t_q,s)\, ds - \int_{(i-1)/n}^{i/n} K_{k,j}(t^n_l,s)\, ds \int_{(i-1)/n}^{i/n} K_{k,j}(t^n_q,s)\, ds \Big) \big(\xi^{(n)}_{i,j}\big)^2 \to 0, \quad a.s., \tag{3.15}$$
as $n \to \infty$. For the left-hand side of (3.15), we have
$$\begin{aligned}
& \int_{(i-1)/n}^{i/n} K_{k,j}(t_l,s)\, ds \int_{(i-1)/n}^{i/n} K_{k,j}(t_q,s)\, ds - \int_{(i-1)/n}^{i/n} K_{k,j}(t^n_l,s)\, ds \int_{(i-1)/n}^{i/n} K_{k,j}(t^n_q,s)\, ds \\
&= \int_{(i-1)/n}^{i/n} K_{k,j}(t_l,s)\, ds \int_{(i-1)/n}^{i/n} \big( K_{k,j}(t_q,s) - K_{k,j}(t^n_q,s) \big)\, ds \\
&\quad - \int_{(i-1)/n}^{i/n} \big( K_{k,j}(t^n_l,s) - K_{k,j}(t_l,s) \big)\, ds \int_{(i-1)/n}^{i/n} \big( K_{k,j}(t^n_q,s) - K_{k,j}(t_q,s) \big)\, ds \\
&\quad + \int_{(i-1)/n}^{i/n} K_{k,j}(t_q,s)\, ds \int_{(i-1)/n}^{i/n} \big( K_{k,j}(t_l,s) - K_{k,j}(t^n_l,s) \big)\, ds. \tag{3.16}
\end{aligned}$$
By (2.2), we have
$$\big| K_{k,j}(t_q,s) - K_{k,j}(t^n_q,s) \big| \le \big\| K(t_q,s) - K(t^n_q,s) \big\|. \tag{3.17}$$
On the other hand, using the same method as in the proof of inequality (3.52) below,
$$\sum_{i=1}^n \int_{(i-1)/n}^{i/n} \big\| K(t_q,s) - K(t^n_q,s) \big\|^2\, ds \le \int_0^1 \big\| K(t_q,s) - K(t^n_q,s) \big\|^2\, ds \le \tilde{K}\, |t_q - t^n_q|^{2H}, \tag{3.18}$$
where $H = \lambda_D - \delta$. By condition (2.6), (3.16), and the Cauchy-Schwarz inequality, the left-hand side of (3.15) can be bounded by
$$\begin{aligned}
& \tilde{K} \Big( \int_0^1 \| K(t_l,s) \|^2\, ds \Big)^{1/2} \Big( \int_0^1 \| K(t_q,s) - K(t^n_q,s) \|^2\, ds \Big)^{1/2} \\
&\quad + \tilde{K} \Big( \int_0^1 \| K(t^n_l,s) - K(t_l,s) \|^2\, ds \Big)^{1/2} \Big( \int_0^1 \| K(t^n_q,s) - K(t_q,s) \|^2\, ds \Big)^{1/2} \\
&\quad + \tilde{K} \Big( \int_0^1 \| K(t_q,s) \|^2\, ds \Big)^{1/2} \Big( \int_0^1 \| K(t_l,s) - K(t^n_l,s) \|^2\, ds \Big)^{1/2}. \tag{3.19}
\end{aligned}$$
It follows from (3.9), (3.18), and (3.19) that the left-hand side of (3.15) can be bounded by
$$\tilde{K}\, n^{-H}, \tag{3.20}$$
since $|t^n_q - t_q| \le \frac{1}{n}$ and $|t^n_l - t_l| \le \frac{1}{n}$. From (3.20), we can easily prove the lemma. □

From the proof of Lemma 3.1 and (2.2), we can easily get the following.
Corollary 3.1
Let $H_n(t,s) = \sum_{k=1}^d a_k K^n_{k,j}(t,s)$ and $H(t,s) = \sum_{k=1}^d a_k K_{k,j}(t,s)$ for any $a_1, \cdots, a_d \in \mathbb{R}$. Then
$$\sum_{i=1}^{n} n^2 \int_{(i-1)/n}^{i/n} H_n(t_l,s)\, ds \int_{(i-1)/n}^{i/n} H_n(t_q,s)\, ds\, \big(\xi^{(n)}_{i,j}\big)^2 \to \int_0^1 H(t_l,s) H(t_q,s)\, ds, \quad a.s. \tag{3.21}$$
for any $t_l, t_q \in (0,1]$.

Next, we prove the main result of this paper. Before we give the details, we first introduce a technical tool.
Lemma 3.3
Let $t \in (0,1]$, $\sigma_t > 0$, and let $\{\xi^{(n)}\}$ be a sequence of martingale differences as in Section 2 satisfying the following Lindeberg condition: for every $\epsilon > 0$,
$$\sum_{i=1}^{\lfloor nt \rfloor} E\Big[ \big(\xi^{(n)}_i\big)^2 I_{\{|\xi^{(n)}_i| > \epsilon\}} \,\Big|\, \mathcal{F}^n_{i-1} \Big] \xrightarrow{P} 0. \tag{3.22}$$
Then
$$\sum_{i=1}^{\lfloor nt \rfloor} \big(\xi^{(n)}_i\big)^2 \xrightarrow{P} \sigma^2_t \tag{3.23}$$
implies
$$B_n(t) \xrightarrow{D} N \sim \mathcal{N}(0, \sigma^2_t), \tag{3.24}$$
where $\xrightarrow{D}$ denotes convergence in distribution.

Lemma 3.3 can be found in Shiryaev [18, p. 511].
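The role of Lemma 3.3 can be illustrated numerically with scaled Rademacher differences $\xi^{(n)}_i = \pm 1/\sqrt{n}$ (which satisfy (2.5)-(2.7)): the Lindeberg condition (3.22) is trivial once $1/\sqrt{n} \le \epsilon$, condition (3.23) holds with $\sigma^2_t = \lfloor nt \rfloor / n \to t$, and (3.24) then says $B_n(t)$ is approximately $\mathcal{N}(0, t)$. A quick Monte Carlo check, with sample sizes and seed chosen arbitrarily by us:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, reps = 400, 0.5, 20000
k = int(np.floor(n * t))

# xi_i^(n) = +-1/sqrt(n): sum_{i <= floor(nt)} (xi_i^(n))^2 = floor(nt)/n -> t,
# i.e. condition (3.23) holds with sigma_t^2 = t (here exactly, for every n).
xi = rng.choice([-1.0, 1.0], size=(reps, k)) / np.sqrt(n)
B_nt = xi.sum(axis=1)          # independent samples of B_n(t) from (2.8)

# (3.24): the sample mean and variance of B_n(t) should be close to 0 and t
assert abs(B_nt.mean()) < 0.05
assert abs(B_nt.var() - k / n) < 0.05
```

The empirical histogram of `B_nt` is close to the $\mathcal{N}(0, t)$ density, matching the conclusion (3.24).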
Proof of Theorem 2.1: We prove this theorem in two steps.
Step 1: First, we show that the finite-dimensional distributions of $X_n$ converge to those of $X$. It suffices to prove that for any $q \in \mathbb{N}$, $a_1, \cdots, a_q \in \mathbb{R}$ and $t_1, \cdots, t_q \in [0,1]$,
$$\sum_{l=1}^q a_l X_n(t_l) \xrightarrow{D} \sum_{l=1}^q a_l X(t_l). \tag{3.25}$$
By the Cramér-Wold device (see Whitt [19, Chapter 4]), in order to prove (3.25), we only need to show
$$\sum_{l=1}^q a_l\, b X_n(t_l) \xrightarrow{D} \sum_{l=1}^q a_l\, b X(t_l) \tag{3.26}$$
for any vector $b = (b^{(1)}, \cdots, b^{(d)}) \in \mathbb{R}^d$.

For convenience, write
$$X_n(t) = \big( X^{(n)}_1(t), \cdots, X^{(n)}_d(t) \big)', \quad \text{where } X^{(n)}_j(t) = n \sum_{i=1}^{\lfloor nt \rfloor} \int_{(i-1)/n}^{i/n} K^n_j(t,s)\, \eta^{(n)}_i\, ds,$$
with $K^n_j(t,s) = \big( K^n_{j,1}(t,s), \cdots, K^n_{j,d}(t,s) \big)$, and
$$X(t) = \big( X^{(1)}(t), \cdots, X^{(d)}(t) \big)', \quad \text{where } X^{(j)}(t) = \int_0^t K_j(t,s)\, dW(s),$$
with $K_j(t,s) = \big( K_{j,1}(t,s), \cdots, K_{j,d}(t,s) \big)$. By some calculations, we can get that (3.26) is equivalent to
$$\sum_{l=1}^q \sum_{k=1}^d \sum_{j=1}^d \sum_{i=1}^{\lfloor nt_l \rfloor} n \int_{(i-1)/n}^{i/n} a_l b^{(k)} K^n_{k,j}(t_l,s)\, \xi^{(n)}_{i,j}\, ds \xrightarrow{D} \sum_{l=1}^q \sum_{k=1}^d \sum_{j=1}^d \int_0^{t_l} a_l b^{(k)} K_{k,j}(t_l,s)\, dW_j(s). \tag{3.27}$$
In order to simplify the discussion, we define
$$\bar{X}_n(l,k,j) = \sum_{i=1}^{\lfloor nt_l \rfloor} n \int_{(i-1)/n}^{i/n} K^n_{k,j}(t_l,s)\, \xi^{(n)}_{i,j}\, ds \quad \text{and} \quad \bar{X}(l,k,j) = \int_0^{t_l} K_{k,j}(t_l,s)\, dW_j(s).$$
Hence (3.27) can be rewritten as follows:
$$\sum_{l=1}^q \sum_{k,j=1}^d a_l b^{(k)} \bar{X}_n(l,k,j) \xrightarrow{D} \sum_{l=1}^q \sum_{k,j=1}^d a_l b^{(k)} \bar{X}(l,k,j). \tag{3.28}$$
By the independence of $\xi^{(n)}_{i,j}$, $j = 1, \cdots, d$, it suffices to show that for every $j \in \{1, \cdots, d\}$,
$$\sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} \bar{X}_n(l,k,j) \xrightarrow{D} \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} \bar{X}(l,k,j). \tag{3.29}$$
We will prove (3.29) by Lemma 3.3. We first prove that the Lindeberg condition holds in our case. For convenience, define
$$Z^n_{k,i}(t) = n \int_{(i-1)/n}^{i/n} K^n_{k,j}(t,s)\, \xi^{(n)}_{i,j}\, ds.$$
We have
$$\big( Z^n_{k,i}(t) \big)^2 = n^2 \big(\xi^{(n)}_{i,j}\big)^2 \Big( \int_{(i-1)/n}^{i/n} K^n_{k,j}(t,s)\, ds \Big)^2 \le n \big(\xi^{(n)}_{i,j}\big)^2 \int_{(i-1)/n}^{i/n} \big( K^n_{k,j}(t,s) \big)^2\, ds, \tag{3.30}$$
where we have used the Hölder inequality. By (2.2), we have
$$\big| K^n_{k,j}(t,s) \big| \le \Big\| \Big( \frac{\lfloor nt \rfloor}{n} - s \Big)_+^{D - I/2} \Big\|. \tag{3.31}$$
By Lemma 3.2, we have
$$\big\| (t - s)_+^{D - I/2} \big\| \le \tilde{K}\, (t-s)_+^{(\lambda_D - \delta) - 1/2}, \tag{3.32}$$
since $t, s \in [0,1]$. Hence
$$\int_{(i-1)/n}^{i/n} \big( K^n_{k,j}(t,s) \big)^2\, ds \le \tilde{K} \int_{(i-1)/n}^{i/n} \Big( \frac{\lfloor nt \rfloor}{n} - s \Big)_+^{2(\lambda_D - \delta) - 1} ds \le \tilde{K}\, \delta_n, \tag{3.33}$$
where $\delta_n = \int_0^{1/n} u^{2(\lambda_D - \delta) - 1}\, du + n^{-1} \to 0$, since $\lambda_D - \delta > 0$; here we used that the integral over a window of length $1/n$ is largest for the window adjacent to the singularity when the exponent is negative, and is at most a constant times $n^{-1}$ otherwise. Therefore, it follows from (3.30) and (3.33) that
$$\big( Z^n_{k,i}(t) \big)^2 \le \tilde{K}\, n \big(\xi^{(n)}_{i,j}\big)^2 \delta_n. \tag{3.34}$$
On the other hand, from (3.2), we get, for any $s \ge \frac{\lfloor nt \rfloor}{n}$,
$$K^n_{k,j}(t,s) = 0. \tag{3.35}$$
Hence, by (3.35),
$$\sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} \bar{X}_n(l,k,j) = \sum_{i=1}^{n} \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l). \tag{3.36}$$
Finally, by the Cauchy-Schwarz inequality, we have
$$\Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 \le \tilde{K} \sum_{l=1}^q \sum_{k=1}^d \big( b^{(k)} \big)^2 a_l^2 \big( Z^n_{k,i}(t_l) \big)^2. \tag{3.37}$$
Combining (3.34) and (3.37), we have
$$\Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 \le \tilde{K}\, n \big(\xi^{(n)}_{i,j}\big)^2 \delta_n. \tag{3.38}$$
Noting that
$$\Big\{ \Big| \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big| > \epsilon \Big\} = \Big\{ \Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 > \epsilon^2 \Big\}, \tag{3.39}$$
from (3.38), we have
$$\Big\{ \Big| \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big| > \epsilon \Big\} \subset \Big\{ \tilde{K}\, n \big(\xi^{(n)}_{i,j}\big)^2 \delta_n > \epsilon^2 \Big\}. \tag{3.40}$$
Therefore, by (3.38) and (3.40),
$$\begin{aligned}
& E\Big[ \Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 I_{\{ |\sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l)| > \epsilon \}} \,\Big|\, \mathcal{F}^n_{i-1} \Big] \\
&\quad \le \tilde{K}\, n \big(\xi^{(n)}_{i,j}\big)^2 \delta_n\, E\Big[ I_{\{ \tilde{K} n (\xi^{(n)}_{i,j})^2 \delta_n > \epsilon^2 \}} \,\Big|\, \mathcal{F}^n_{i-1} \Big] \le \tilde{K}\, \delta_n\, E\Big[ I_{\{ \tilde{K} \delta_n > \epsilon^2 \}} \,\Big|\, \mathcal{F}^n_{i-1} \Big], \tag{3.41}
\end{aligned}$$
where the last inequality uses (2.6). Combining (3.36) and (3.41), one can easily prove that, as $n \to \infty$,
$$\sum_{i=1}^{n} E\Big[ \Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 I_{\{ |\sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l)| > \epsilon \}} \,\Big|\, \mathcal{F}^n_{i-1} \Big] \to 0,$$
since $\delta_n \to 0$ makes the indicator in (3.41) vanish for all large $n$. Hence the Lindeberg condition holds.

Next, we show that condition (3.23) holds. We first study the right-hand side of (3.29). We have
$$\sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} \bar{X}(l,k,j) = \sum_{l=1}^q a_l \tilde{W}(t_l), \tag{3.42}$$
where
$$\tilde{W}(t) = \int_0^t \Big[ \sum_{k=1}^d b^{(k)} K_{k,j}(t,s) \Big]\, dW_j(s) = \int_0^t \bar{K}(t,s)\, dW_j(s), \tag{3.43}$$
with $\bar{K}(t,s) = \sum_{k=1}^d b^{(k)} K_{k,j}(t,s)$. Combining (3.42) and (3.43), we have
$$E\Big[ \Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} \bar{X}(l,k,j) \Big)^2 \Big] = E\Big[ \Big( \sum_{l=1}^q a_l \tilde{W}(t_l) \Big)^2 \Big] = \sum_{l_1,l_2=1}^q a_{l_1} a_{l_2} \int_0^1 \bar{K}(t_{l_1},s) \bar{K}(t_{l_2},s)\, ds. \tag{3.44}$$
In order to verify condition (3.23), we only need to show
$$\sum_{i=1}^{n} \Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 \xrightarrow{P} \sum_{l_1,l_2=1}^q a_{l_1} a_{l_2} \int_0^1 \bar{K}(t_{l_1},s) \bar{K}(t_{l_2},s)\, ds. \tag{3.45}$$
Now, we focus on the left-hand side of (3.45). Similar to (3.42), we have
$$\sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) = \sum_{l=1}^q a_l \bar{Z}^n_{l,i}, \tag{3.46}$$
where
$$\bar{Z}^n_{l,i} = n \int_{(i-1)/n}^{i/n} \bar{K}^n(t_l,s)\, \xi^{(n)}_{i,j}\, ds \tag{3.47}$$
with $\bar{K}^n(t_l,s) = \sum_{k=1}^d b^{(k)} K^n_{k,j}(t_l,s)$. Hence
$$\sum_{i=1}^{n} \Big( \sum_{l=1}^q \sum_{k=1}^d a_l b^{(k)} Z^n_{k,i}(t_l) \Big)^2 = \sum_{i=1}^{n} \sum_{l_1,l_2=1}^q n^2 a_{l_1} a_{l_2} \int_{(i-1)/n}^{i/n} \bar{K}^n(t_{l_1},s)\, ds \int_{(i-1)/n}^{i/n} \bar{K}^n(t_{l_2},s)\, ds\, \big(\xi^{(n)}_{i,j}\big)^2. \tag{3.48}$$
It follows from Corollary 3.1 that the right-hand side of (3.48) converges to
$$\sum_{l_1,l_2=1}^q a_{l_1} a_{l_2} \int_0^1 \bar{K}(t_{l_1},s) \bar{K}(t_{l_2},s)\, ds, \quad a.s., \tag{3.49}$$
as $n \to \infty$.
On the other hand, one can easily get that
$$E\big[ \tilde{W}(t_l) \tilde{W}(t_k) \big] = \int_0^1 \bar{K}(t_l,s) \bar{K}(t_k,s)\, ds. \tag{3.50}$$
By (3.44), (3.49), and (3.50), we get condition (3.23).

Step 2: We prove the tightness of the sequence $\{X_n(t)\}$. By some calculations,
$$E\big( \| X_n(t) - X_n(s) \|^2 \big) \le \tilde{K} \int_0^1 \Big\| \Big( \frac{\lfloor nt \rfloor}{n} - u \Big)_+^{D-I/2} - \Big( \frac{\lfloor ns \rfloor}{n} - u \Big)_+^{D-I/2} \Big\|^2\, du. \tag{3.51}$$
In order to simplify the discussion, let
$$\tilde{t} = \frac{\lfloor nt \rfloor}{n} \quad \text{and} \quad \tilde{s} = \frac{\lfloor ns \rfloor}{n}.$$
Next, we show that
$$\int_0^1 \Big\| \big( \tilde{t} - u \big)_+^{D-I/2} - \big( \tilde{s} - u \big)_+^{D-I/2} \Big\|^2\, du \le K (\tilde{t} - \tilde{s})^{2H}, \tag{3.52}$$
where $H = \lambda_D - \delta$. In fact,
$$\int_0^1 \Big\| (\tilde{t} - u)_+^{D-I/2} - (\tilde{s} - u)_+^{D-I/2} \Big\|^2\, du = \int_0^{\tilde{s}} \big\| (\tilde{t} - u)^{D-I/2} - (\tilde{s} - u)^{D-I/2} \big\|^2\, du + \int_{\tilde{s}}^{\tilde{t}} \big\| (\tilde{t} - u)^{D-I/2} \big\|^2\, du. \tag{3.53}$$
It follows from Lemma 3.2 and (2.1) that
$$\big\| (\tilde{t} - u)^{D-I/2} \big\| \le \tilde{K}\, (\tilde{t} - u)^{(\lambda_D - \delta) - 1/2},$$
since $\tilde{t} - u \in [0,1]$. Hence
$$\int_{\tilde{s}}^{\tilde{t}} \big\| (\tilde{t} - u)^{D-I/2} \big\|^2\, du \le \tilde{K} \int_{\tilde{s}}^{\tilde{t}} (\tilde{t} - u)^{2(\lambda_D - \delta) - 1}\, du = \tilde{K}\, \frac{(\tilde{t} - \tilde{s})^{2(\lambda_D - \delta)}}{2(\lambda_D - \delta)}. \tag{3.54}$$
Next, we deal with the first term on the right-hand side of (3.53). Note that
$$\begin{aligned}
\int_0^{\tilde{s}} \big\| (\tilde{t} - u)^{D-I/2} - (\tilde{s} - u)^{D-I/2} \big\|^2\, du
&= \int_0^{\tilde{s}} \big\| (\tilde{t} - \tilde{s} + u)^{D-I/2} - u^{D-I/2} \big\|^2\, du \\
&= \int_0^{\tilde{s}/(\tilde{t}-\tilde{s})} \Big\| \big[ (\tilde{t}-\tilde{s})(1+u) \big]^{D-I/2} - \big[ (\tilde{t}-\tilde{s}) u \big]^{D-I/2} \Big\|^2\, du\, (\tilde{t}-\tilde{s}) \\
&\le \big\| (\tilde{t}-\tilde{s})^{D-I/2} \big\|^2 (\tilde{t}-\tilde{s}) \int_0^{\tilde{s}/(\tilde{t}-\tilde{s})} \big\| (1+u)^{D-I/2} - u^{D-I/2} \big\|^2\, du \\
&\le \big\| (\tilde{t}-\tilde{s})^{D-I/2} \big\|^2 (\tilde{t}-\tilde{s}) \int_{\mathbb{R}_+} \big\| (1+u)^{D-I/2} - u^{D-I/2} \big\|^2\, du, \tag{3.55}
\end{aligned}$$
where we used the change of variables and the fact that $(ab)^A = a^A \cdot b^A$ for scalars $a, b > 0$. It follows from Lemma 3.2 and (2.1) that
$$\big\| (\tilde{t}-\tilde{s})^{D-I/2} \big\|^2 (\tilde{t}-\tilde{s}) \le \tilde{K}\, (\tilde{t}-\tilde{s})^{2(\lambda_D - \delta)}.$$
In order to prove our result, it suffices to show that
$$\int_{\mathbb{R}_+} \big\| (1+u)^{D-I/2} - u^{D-I/2} \big\|^2\, du < \infty. \tag{3.56}$$
To prove (3.56), it suffices to show that
$$\int_{u \le T} \big\| u^{D-I/2} \big\|^2\, du < \infty, \tag{3.57}$$
and, for large enough $T > 1$, that
$$\int_{u \ge T} \big\| (1+u)^{D-I/2} - u^{D-I/2} \big\|^2\, du < \infty. \tag{3.58}$$
It follows from Lemma 3.2 and (2.1) that
$$\big\| u^{D-I/2} \big\|^2 \le \tilde{K}\, u^{2(\lambda_D - \delta) - 1} \quad \text{for } u \le 1.$$
Hence, one can easily see that (3.57) holds.

Next, we show that (3.58) holds. We see that
$$(1+u)^{D-I/2} - u^{D-I/2} = \int_u^{u+1} \big( D - I/2 \big)\, s^{D - I/2}\, s^{-1}\, ds. \tag{3.59}$$
Then
$$\big\| (1+u)^{D-I/2} - u^{D-I/2} \big\| \le \big\| D - I/2 \big\| \int_u^{u+1} \big\| s^{D-I/2} \big\|\, s^{-1}\, ds. \tag{3.60}$$
It follows from Lemma 3.2 and (2.1) that
$$\int_u^{u+1} \big\| s^{D-I/2} \big\|\, s^{-1}\, ds \le \int_u^{u+1} \tilde{K}\, s^{\Lambda_D + \delta - 3/2}\, ds, \tag{3.61}$$
since $u \ge T > 1$. Hence
$$\big\| (1+u)^{D-I/2} - u^{D-I/2} \big\| \le \tilde{K}\, u^{(\Lambda_D + \delta) - 3/2}. \tag{3.62}$$
By (3.62), we have that (3.58) holds, since $\Lambda_D + \delta < 1$ implies $2\big((\Lambda_D + \delta) - 3/2\big) < -1$. Therefore (3.52) holds, and
$$E\big( \| X_n(t) - X_n(s) \|^2 \big) \le \tilde{K}\, (\tilde{t} - \tilde{s})^{2H}. \tag{3.63}$$
Hence, for any $s \le t \le u \in [0,1]$,
$$\begin{aligned}
E\Big[ \| X_n(t) - X_n(s) \| \cdot \| X_n(t) - X_n(u) \| \Big]
&\le \Big[ E \| X_n(t) - X_n(s) \|^2 \Big]^{1/2} \Big[ E \| X_n(t) - X_n(u) \|^2 \Big]^{1/2} \\
&\le \tilde{K} \Big| \frac{\lfloor nt \rfloor}{n} - \frac{\lfloor ns \rfloor}{n} \Big|^{H} \Big| \frac{\lfloor nu \rfloor}{n} - \frac{\lfloor nt \rfloor}{n} \Big|^{H} \le \tilde{K} \Big| \frac{\lfloor nu \rfloor}{n} - \frac{\lfloor ns \rfloor}{n} \Big|^{2H}. \tag{3.64}
\end{aligned}$$
If $u - s \ge \frac{1}{n}$, then one can easily see that
$$E\Big[ \| X_n(t) - X_n(s) \| \cdot \| X_n(t) - X_n(u) \| \Big] \le \tilde{K}\, (u - s)^{2H}. \tag{3.65}$$
On the other hand, if $u - s < \frac{1}{n}$, then either $s$ and $t$, or $t$ and $u$, belong to the same interval $[\frac{i}{n}, \frac{i+1}{n}]$ for some $i$. Thus the left-hand side of (3.64) is zero, and therefore (3.65) still holds in this case.
Hence it follows from Ethier and Kurtz [8, Chapter 3] that $\{X_n(t)\}$ is tight. By Theorem 7.8 in Ethier and Kurtz [8, Chapter 3], we get that Theorem 2.1 holds. This completes the proof. □

Acknowledgments
This work was supported by the Natural Science Foundation of China (No. 11361007) and the Guangxi Natural Science Foundation (No. 2012NSFGXBA05301).

References

[1] Chung, C. F.: Sample means, sample autocovariances, and linear regression of stationary multivariate long memory processes. Econometric Theory, 18, 51–78 (2002)
[2] Dai, H.: Convergence in law to operator fractional Brownian motion. Journal of Theoretical Probability, 26, 676–696 (2013)
[3] Dai, H.: Convergence in law to operator fractional Brownian motion of Riemann-Liouville type. Acta Mathematica Sinica, English Series, 29, 777–788 (2013)
[4] Davidson, J., de Jong, R. M.: The functional central limit theorem and weak convergence to stochastic integrals II. Econometric Theory, 16, 643–666 (2000)
[5] Delgado, R.: A reflected fBm limit for fluid models with ON/OFF sources under heavy traffic. Stoch. Process. Appl.
[6] Dolado, J., Marmol, F.: Econometrics Journal, 7, 168–190 (2004)
[7] Enriquez, N.: A simple construction of the fractional Brownian motion. Stoch. Process. Appl.
[8] Ethier, S. N., Kurtz, T. G.: Markov Processes: Characterization and Convergence. Wiley, New York, 1986
[9] Konstantopoulos, T., Lin, S. J.: Fractional Brownian approximations of queuing networks. In: Lecture Notes in Statistics 117: Stochastic Networks, Springer, New York, 257–273 (1996)
[10] Jacod, J., Shiryaev, A. N.: Limit Theorems for Stochastic Processes. Springer, Berlin, 1987
[11] Lévy, P.: Random functions: General theory with special reference to Laplacian random functions. Univ. California Publ. Statist., 1, 331–390 (1953)
[12] Li, Y., Dai, H.: Approximations of fractional Brownian motion. Bernoulli, 17, 1195–1216 (2011)
[13] Marinucci, D., Robinson, P.: Weak convergence of multivariate fractional processes. Stoch. Process. Appl., 86, 103–120 (2000)
[14] Maejima, M., Mason, J. D.: Operator self-similar stable processes. Stoch. Process. Appl., 54, 139–163 (1994)
[15] Nieminen, A.: Fractional Brownian motion and martingale-differences. Stat. Probab. Lett., 70, 1–10 (2004)
[16] Robinson, P.: Multiple local Whittle estimation in stationary systems. Ann. Statist., 36, 2508–2530 (2008)
[17] Sottinen, T.: Fractional Brownian motion, random walks and binary market models. Finance Stoch., 5, 343–355 (2001)
[18] Shiryaev, A. N.: Probability. Springer, New York, 1984
[19] Whitt, W.: Stochastic-Process Limits. Springer, New York, 2002