Berry-Esseen bounds of second moment estimators for Gaussian processes observed at high frequency
Soukaina Douissi, Khalifa Es-Sebaiy, George Kerchev, Ivan Nourdin
Abstract.
Let $Z := \{Z_t,\, t \ge 0\}$ be a stationary Gaussian process. We study two estimators of $E[Z_0^2]$, namely $\widehat f_T(Z) := \frac{1}{T}\int_0^T Z_t^2\,dt$ and $\widetilde f_n(Z) := \frac{1}{n}\sum_{i=1}^n Z_{t_i}^2$, where $t_i = i\Delta_n$, $i = 0, 1, \dots, n$, with $\Delta_n \to 0$ and $T_n := n\Delta_n \to \infty$. We prove that the two estimators are strongly consistent and establish Berry-Esseen bounds for a central limit theorem involving $\widehat f_T(Z)$ and $\widetilde f_n(Z)$. We apply these results to asymptotically stationary Gaussian processes and estimate the drift parameter for Gaussian Ornstein-Uhlenbeck processes.

Mathematics Subject Classifications (2010): Primary 60F05; Secondary 60G15, 60G10, 62F12, 62M09.
Keywords:
Parameter estimation, strong consistency, rate of normal convergence of the estimators, stationary Gaussian processes, continuous-time observation, high-frequency data.

1. Introduction
Statistical inference for stochastic processes is of great importance for theoreticians and practitioners alike. While for some processes, like Itô-type diffusions and semimartingales, there is an extensive literature, the statistical analysis of fractional Gaussian processes is relatively recent. In this paper, we are interested in the parametric estimation of the variance of a stationary Gaussian process which is not necessarily a semimartingale. Let $Z = \{Z_t,\, t \ge 0\}$ be a continuous centered stationary Gaussian process and $f_Z := E(Z_0^2) > 0$. We consider the following estimators of $f_Z$:

• When a complete path of the process over a large finite interval is observable, we use the estimator
$$\widehat f_T(Z) := \frac{1}{T}\int_0^T Z_t^2\,dt, \qquad T > 0. \tag{1.1}$$

• A more practical assumption is that the process $Z$ is observed at discrete time instants $t_i = i\Delta_n$, where $i = 0, \dots, n$ and $\Delta_n$ is the step size. Then we consider the following estimator over the observation window $T_n := n\Delta_n$:
$$\widetilde f_n(Z) := \frac{1}{n}\sum_{i=1}^n Z_{t_i}^2, \qquad n \ge 1. \tag{1.2}$$

These estimators are unbiased, and we show that they are strongly consistent and admit a central limit theorem. Moreover, we bound the rate of convergence to the normal distribution in terms of the total variation distance and the Wasserstein distance. Recall that, for two random variables $X$ and $Y$, these metrics are respectively given by
$$d_{TV}(X, Y) := \sup_{A \in \mathcal B(\mathbb R)} \big|P[X \in A] - P[Y \in A]\big|, \tag{1.3}$$
where the supremum is over all Borel sets, and
$$d_W(X, Y) := \sup_{f \in \mathrm{Lip}(1)} \big|E[f(X)] - E[f(Y)]\big|, \tag{1.4}$$
where $\mathrm{Lip}(1)$ is the set of all Lipschitz functions with Lipschitz constant at most 1. Throughout, we write $\rho(t) = \rho(-t) := E[Z_0 Z_t]$, $t \ge 0$, for the covariance function of $Z$.

The central result for $\widehat f_T(Z)$, whose proof follows the lines of the approach developed in [28, Chapter 7], is the following.

Theorem 1.1.
Assume $\int_{\mathbb R} \rho^2(r)\,dr < \infty$. Let $N \sim \mathcal N(0,1)$ be the standard normal random variable. Then for all $T > 0$,
$$d_{TV}\left(\frac{\widehat f_T(Z) - f_Z}{\sqrt{\mathrm{Var}(\widehat f_T(Z) - f_Z)}},\, N\right) \le \varphi_T(Z), \tag{1.5}$$
where
$$\varphi_T(Z) = C \max\left\{\frac{1}{\sqrt T}\left(\int_{-T}^{T} |\rho(t)|^{3/2}\,dt\right)^{2},\; \frac{1}{T}\left(\int_{-T}^{T} |\rho(t)|^{4/3}\,dt\right)^{3}\right\}, \tag{1.6}$$
for some absolute constant $C > 0$. The same result holds for the Wasserstein distance.

For the discrete estimator $\widetilde f_n(Z)$ we have the following.

Theorem 1.2.
Assume $\int_{\mathbb R} \rho^2(r)\,dr < \infty$ and that $E[|Z_t - Z_s|^2] \le c\,|t-s|^{\alpha}$ for some $c > 0$ and $\alpha \in (0, 2]$, when $|t-s|$ is small enough. Let $N \sim \mathcal N(0,1)$ be the standard normal random variable. If $\Delta_n \to 0$ and $n\Delta_n \to \infty$ as $n \to \infty$, then there is $C > 0$ such that, for every $n \ge 1$,
$$d_{TV}\left(\frac{\widetilde f_n(Z) - f_Z}{\sqrt{\mathrm{Var}(\widetilde f_n(Z) - f_Z)}},\, N\right) \le \varphi_{T_n}(Z) + 2\left|1 - \frac{\mathrm{Var}(\widehat f_{T_n}(Z) - f_Z)}{\mathrm{Var}(\widetilde f_n(Z) - f_Z)}\right| + C\big[n\Delta_n^{\alpha+1}\big]^{1/2},$$
where $\varphi_{T_n}(Z)$ satisfies (1.6). The same result holds for the Wasserstein distance.

Thanks to the robustness of our approach, we can extend Theorem 1.1 and Theorem 1.2 to the case when the process is only an asymptotically stationary Gaussian process. In particular, we prove the rate of convergence of the second moment estimators for $X := Z + Y$, where $Z$ is the stationary Gaussian process as above and $Y$ is a stochastic process with $\|Y_t\|_{L^2} = O(t^{-\gamma})$ for some absolute $\gamma > 1$. The bounds will be the same up to an extra term $CT^{(1-\gamma)/2}$ (or $CT_n^{(1-\gamma)/2}$ in the discrete case).

Moreover, we explore applications to drift estimation for the Ornstein-Uhlenbeck process. Let $X^\theta = (X^\theta_t)_{t\ge 0}$ be an ergodic-type Gaussian Ornstein-Uhlenbeck process given by the differential equation
$$dX^\theta_t = -\theta X^\theta_t\,dt + dG_t, \qquad X^\theta_0 = 0,$$
where $\theta$ is the drift parameter and $(G_t)_{t\ge 0}$ is an arbitrary mean-zero Gaussian process. One can show that $X^\theta$ is asymptotically stationary and write $X^\theta = Z^\theta + Y^\theta$, where $Z^\theta$ is a stationary Gaussian process and $\|Y^\theta_t\|_{L^2} = O(t^{-\gamma})$ for any $\gamma > 1$. Then we provide Berry-Esseen bounds for the estimators $\widehat\theta := g_{Z^\theta}\big(\widehat f_T(X^\theta)\big)$ and $\widetilde\theta := g_{Z^\theta}\big(\widetilde f_n(X^\theta)\big)$. The function $g_{Z^\theta}$ is given via $g_{Z^\theta}^{-1}(\theta) = E[(Z^\theta_0)^2]$, and $\widehat f_T$, $\widetilde f_n$ are as in (1.1) and (1.2). Concrete bounds are computed for the cases when the process $X^\theta$ is of the first and second kind, i.e., when $(G_t)_{t\ge 0}$ is a particular Gaussian process, following the terminology in [22].

Parameter estimation for stationary Gaussian processes is usually done via the maximum likelihood estimator because of its asymptotic optimality; see [35] and [34]. For instance, the MLE of a stationary ARMA process is strongly consistent and asymptotically efficient [5, Section 10.8]. The method of moments is more computationally tractable, especially when one considers discrete estimators. Some recent studies include [11, 17, 21], where the mesh in time is $\Delta_n = 1$ in (1.2), which is akin to the discretization of a least-squares method for fractional Gaussian processes using fixed-time-step observations. See also [8] for an application of the second moment method to an AR(1) model.

In the last few years the estimators (1.1) and (1.2) have been used, in a number of instances, to study parameter estimation problems in various fractional Gaussian models. Some of these results can be summarized as follows.

• The case of continuous-time observations for ergodic-type Gaussian processes, using (1.1): The work [36] derived a central limit theorem and a Berry-Esseen bound in Kolmogorov distance for the second moment estimator (1.1) of the limiting variance of an Ornstein-Uhlenbeck (OU) process driven by stationary-increment Gaussian noise. In [20], the authors considered the estimator (1.1) (called the "alternative estimator" there) to estimate the drift parameter of an OU process driven by fBm with Hurst parameter $H \in (0,1)$, obtaining a Berry-Esseen bound for $H \in (0, 3/4]$ and a noncentral limit theorem for $H \in (3/4, 1)$. Related results, for $H \in (1/2, 1)$ and for an OU process driven by an $\alpha$-stable Lévy motion, appear in [7]. On the other hand, the consistency and speed of convergence in the TV and Wasserstein norms for the estimator (1.1) of the drift parameter in infinite-dimensional linear stochastic equations driven by an fBm are studied in [26].

• The case of discrete-time observations for ergodic-type Gaussian processes, using (1.2): In the case when the mesh in time is $\Delta_n = 1$, the consistency and speed of convergence in the TV and Wasserstein distances for the estimator (1.2) of the limiting variance of general Gaussian sequences were recently developed in the papers [17, 11]. Also, the drift parameter in a linear stochastic evolution equation driven by an fBm is considered in [26]. On the other hand, results in the case of high-frequency data, corresponding to $\Delta_n \to 0$, are much more scarce; this is the setting of the present paper.

2. Elements of Malliavin calculus on Wiener space
This section gives a brief overview of some useful facts from the Malliavin calculus on Wiener space. Some of the results presented here are essential for the proofs in the present paper. For our purposes we focus on the special cases that are relevant for our setting and omit the general high-level theory. We direct the interested reader to [32, Chapter 1] and [28, Chapter 2].

Fix $(\Omega, \mathcal F, P)$ the Wiener space of a standard Wiener process $W = (W_t)_{t\ge 0}$. The first step is to identify the general centered Gaussian process $(Z_t)_{t\ge 0}$ with an isonormal Gaussian process $X = \{X(h),\, h \in \mathcal H\}$ for some Hilbert space $\mathcal H$. Recall that for such processes $X$, for every $h_1, h_2 \in \mathcal H$, one has $E[X(h_1)X(h_2)] = \langle h_1, h_2\rangle_{\mathcal H}$. One can define $\mathcal H$ as the closure of the real-valued step functions on $[0, \infty)$ with respect to the inner product $\langle \mathbf 1_{[0,t]}, \mathbf 1_{[0,s]}\rangle_{\mathcal H} = E[Z_t Z_s]$. Then the isonormal process $X$ is given by the Wiener integral $X(h) := \int_{\mathbb R_+} h(s)\,dW_s$. Note that, in particular, $X(\mathbf 1_{[0,t]}) \stackrel{d}{=} Z_t$.

The next step involves the multiple Wiener-Itô integrals. The formal definition involves the concepts of Malliavin derivative and divergence; we refer the reader to [32, Chapter 1] and [28, Chapter 2]. For our purposes we define the multiple Wiener-Itô integral $I_p$ via the Hermite polynomials $H_p$. In particular, for $h \in \mathcal H$ with $\|h\|_{\mathcal H} = 1$ and any $p \ge 1$,
$$H_p(X(h)) = I_p(h^{\otimes p}).$$
For $p = 1$ and $p = 2$ we have the following:
$$H_1(X(\mathbf 1_{[0,t]})) = X(\mathbf 1_{[0,t]}) = I_1(\mathbf 1_{[0,t]}) = Z_t, \tag{2.1}$$
$$H_2(X(\mathbf 1_{[0,t]})) = X(\mathbf 1_{[0,t]})^2 - E\big[X(\mathbf 1_{[0,t]})^2\big] = I_2\big(\mathbf 1_{[0,t]}^{\otimes 2}\big) = Z_t^2 - E[Z_t^2]. \tag{2.2}$$
Note also that $I_1$ can be taken to be the identity operator.

Remark 2.1 (Some notation for Hilbert spaces).
Let $\mathcal H$ be a Hilbert space. Given an integer $q \ge 1$, $\mathcal H^{\otimes q}$ and $\mathcal H^{\odot q}$ denote the $q$th tensor product and the $q$th symmetric tensor product of $\mathcal H$. If $f \in \mathcal H^{\otimes q}$ is given by $f = \sum_{j_1, \dots, j_q} a(j_1, \dots, j_q)\, e_{j_1} \otimes \cdots \otimes e_{j_q}$, where $(e_j)$ is an orthonormal basis of $\mathcal H$, then the symmetrization $\tilde f$ is given by
$$\tilde f = \frac{1}{q!}\sum_{\sigma}\sum_{j_1, \dots, j_q} a(j_1, \dots, j_q)\, e_{\sigma(j_1)} \otimes \cdots \otimes e_{\sigma(j_q)},$$
where the first sum runs over all permutations $\sigma$ of $\{1, \dots, q\}$. Then $\tilde f$ is an element of $\mathcal H^{\odot q}$.

We also make use of the concept of contraction. The $r$th contraction of two tensor products $e_{j_1} \otimes \cdots \otimes e_{j_p}$ and $e_{k_1} \otimes \cdots \otimes e_{k_q}$ is an element of $\mathcal H^{\otimes(p+q-2r)}$ given by
$$(e_{j_1} \otimes \cdots \otimes e_{j_p}) \otimes_r (e_{k_1} \otimes \cdots \otimes e_{k_q}) = \left[\prod_{\ell=1}^{r} \langle e_{j_\ell}, e_{k_\ell}\rangle_{\mathcal H}\right] e_{j_{r+1}} \otimes \cdots \otimes e_{j_p} \otimes e_{k_{r+1}} \otimes \cdots \otimes e_{k_q}. \tag{2.3}$$

The main motivation for introducing the multiple integrals comes from the following properties:

• Isometry property of integrals [28, Proposition 2.7.5]. Fix integers $p, q \ge 1$ and let $f \in \mathcal H^{\odot p}$ and $g \in \mathcal H^{\odot q}$. Then
$$E[I_p(f) I_q(g)] = \begin{cases} p!\,\langle f, g\rangle_{\mathcal H^{\otimes p}} & \text{if } p = q,\\ 0 & \text{otherwise.} \end{cases} \tag{2.4}$$

• Product formula [28, Proposition 2.7.10]. Let $p, q \ge 1$. If $f \in \mathcal H^{\odot p}$ and $g \in \mathcal H^{\odot q}$, then
$$I_p(f) I_q(g) = \sum_{r=0}^{p \wedge q} r! \binom{p}{r}\binom{q}{r} I_{p+q-2r}\big(f \,\widetilde\otimes_r\, g\big). \tag{2.5}$$

• Hypercontractivity in Wiener chaos.
For every $q \ge 1$, $\mathcal H_q$ denotes the $q$th Wiener chaos of $W$, defined as the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{H_q(W(h)),\, h \in \mathcal H,\, \|h\|_{\mathcal H} = 1\}$, where $H_q$ is the $q$th Hermite polynomial. For any $F \in \oplus_{l=1}^{q} \mathcal H_l$ (i.e., in a fixed finite sum of Wiener chaoses), we have
$$\big(E[|F|^p]\big)^{1/p} \le c_{p,q}\,\big(E[|F|^2]\big)^{1/2} \qquad \text{for any } p \ge 2. \tag{2.6}$$
It should be noted that the constants $c_{p,q}$ above are known with some precision when $F$ is a single chaos term: indeed, by [28, Corollary 2.8.14], $c_{p,q} = (p-1)^{q/2}$.
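Although not part of the paper, the isometry property (2.4) specialized to $p = q = 2$ can be checked numerically: if $X$ and $Y$ are jointly standard normal with correlation $\rho$, then $H_2(X)$ and $H_2(Y)$ are second-chaos elements and $E[H_2(X)H_2(Y)] = 2\rho^2$, with $H_2(x) = x^2 - 1$. A minimal Monte Carlo sketch (all names below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def chaos2_covariance(rho, n=1_000_000):
    """Monte Carlo estimate of E[H_2(X) H_2(Y)] for standard normal X, Y
    with correlation rho, where H_2(x) = x**2 - 1."""
    x = rng.standard_normal(n)
    # y is standard normal with corr(x, y) = rho
    y = rho * x + np.sqrt(1.0 - rho * rho) * rng.standard_normal(n)
    return float(np.mean((x * x - 1.0) * (y * y - 1.0)))

for rho in (0.0, 0.5, 0.9):
    print(rho, chaos2_covariance(rho), 2.0 * rho * rho)  # last two columns agree
```

This identity, written as $E[I_2(\mathbf 1_{[0,t]}^{\otimes 2}) I_2(\mathbf 1_{[0,s]}^{\otimes 2})] = 2\rho^2(t-s)$, is exactly the computation used later for the variance of the continuous estimator.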
The second group of results we borrow from the Malliavin calculus concerns estimates of the distance between random variables. There are two key estimates linking the total variation distance and the Malliavin calculus, both obtained by Nourdin and Peccati. The first one is an observation relating an integration-by-parts formula on Wiener space to a classical result of Ch. Stein. The second is a quantitatively sharp version of the famous fourth moment theorem of Nualart and Peccati.

Let $N$ denote the standard normal law. For each integer $n$, let $F_n \in \mathcal H_q$. Assume $\mathrm{Var}[F_n] = 1$ and that $(F_n)_n$ converges in distribution to a normal law. It is known (the fourth moment theorem of [33]) that this convergence is equivalent to $\lim_n E[F_n^4] = 3$. The following optimal estimate for $d_{TV}(F_n, N)$, known as the optimal fourth moment theorem, was proved in [29]: with the sequence $(F_n)$ as above, assuming convergence, there exist two constants $c, C > 0$, depending on the sequence but not on $n$, such that
$$c \max\big\{E[F_n^4] - 3,\, |E[F_n^3]|\big\} \le d_{TV}(F_n, N) \le C \max\big\{E[F_n^4] - 3,\, |E[F_n^3]|\big\}. \tag{2.7}$$
Recall that for a standardized random variable $F$, i.e., with $E[F] = 0$ and $E[F^2] = 1$, the third and fourth cumulants are respectively
$$\kappa_3(F) := E[F^3], \qquad \kappa_4(F) := E[F^4] - 3.$$
Throughout the paper we use the notation $N \sim \mathcal N(0,1)$ for a standard normal random variable, and $C$ for any positive real constant, independently of its value, which may change from line to line when this does not lead to ambiguity.

Remark 2.2. We note that the optimal bound (2.7) holds with $d_{TV}$ replaced by $d_W$. Indeed, by [28, Theorem 3.5.2],
$$d_W(F, N) \le \sup_{\varphi \in \mathcal F}\big|E[\varphi'(F)] - E[F\varphi(F)]\big|,$$
where $\mathcal F$ is the set of $C^1$ functions $\varphi$ such that $\|\varphi'\|_\infty \le \sqrt{2/\pi}$. Now, using the concepts of the Malliavin derivative operator $D$ and the Ornstein-Uhlenbeck generator $L$ (see [27, Theorem 4.15]), one has
$$\big|E[\varphi'(F)] - E[F\varphi(F)]\big| = \big|E\big[\varphi'(F)\, E[(1 - \langle DF, -DL^{-1}F\rangle_{\mathcal H})\,|\,F]\big]\big|,$$
and thus
$$d_W(F, N) \le \sqrt{2/\pi}\; E\big|E[1 - \langle DF, -DL^{-1}F\rangle_{\mathcal H}\,|\,F]\big|.$$
However, this is the same bound one has (up to the constant $\sqrt{2/\pi}$) in the proof of the optimal fourth moment theorem [29, Proof of Theorem 1.2].

3. Parameter estimation for stationary Gaussian processes
In this section we present a general framework for the parameter estimation of the variance of a stationary Gaussian process. We prove consistency and provide upper bounds in the total variation and Wasserstein distances for the rate of normal convergence of the moment estimators (1.1) and (1.2).

Let $Z := \{Z_t,\, t \ge 0\}$ be a continuous centered stationary Gaussian process that can be represented as a Wiener-Itô (multiple) integral $Z_t = I_1(\mathbf 1_{[0,t]})$ for every $t \ge 0$, as in (2.1).
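Before turning to the theory, the two estimators can be illustrated on a concrete stationary process. The following sketch is ours, not the paper's: it simulates the standard stationary Ornstein-Uhlenbeck process $dZ_t = -\theta Z_t\,dt + dW_t$, for which $\rho(t) = e^{-\theta|t|}/(2\theta)$ and $f_Z = 1/(2\theta)$, using the exact AR(1) transition, and evaluates the discrete estimator (1.2).

```python
import numpy as np

rng = np.random.default_rng(1)

theta = 0.5                          # OU drift; f_Z = 1/(2*theta) = 1.0
delta, n = 0.01, 100_000             # step size and sample count; T_n = 1000
a = np.exp(-theta * delta)           # exact AR(1) coefficient over one step
s = np.sqrt((1.0 - a * a) / (2.0 * theta))   # conditional standard deviation
eps = rng.standard_normal(n)

z = np.empty(n + 1)
z[0] = rng.normal(0.0, np.sqrt(1.0 / (2.0 * theta)))  # stationary start
for i in range(n):
    z[i + 1] = a * z[i] + s * eps[i]

f_tilde = np.mean(z[1:] ** 2)        # discrete estimator (1.2)
f_Z = 1.0 / (2.0 * theta)
print(f_tilde, f_Z)                  # f_tilde is close to f_Z = 1.0
```

The fluctuation of `f_tilde` around `f_Z` is of order $\sigma_Z/\sqrt{T_n}$, which is exactly what the results of this section quantify.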
Let $\rho(r) = E(Z_r Z_0)$ denote the covariance of $Z$ for every $r \ge 0$, and let $\rho(r) = \rho(-r)$ for all $r < 0$. Our main assumption throughout the paper is that
$$\sigma_Z^2 := 4\int_0^\infty \rho^2(r)\,dr = 2\int_{\mathbb R} \rho^2(r)\,dr < \infty.$$

3.1. Continuous-time observations.
We estimate the variance $f_Z := E(Z_0^2)$ when the whole trajectory of $Z$ is observed up to time $T > 0$. We consider the estimator (1.1), given by
$$\widehat f_T(Z) = \frac{1}{T}\int_0^T Z_t^2\,dt, \qquad T > 0,$$
based on the continuous-time observation of $Z$. Our goal is to establish Theorem 1.1, i.e., a Berry-Esseen bound for the convergence of $\widehat f_T(Z) - f_Z$. First we show some simpler properties of $\widehat f_T(Z)$.

Lemma 3.1.
The estimator $\widehat f_T(Z)$ is unbiased and strongly consistent. In particular,
$$\sqrt T\, \big\|\widehat f_T(Z) - f_Z\big\|_{L^2} \uparrow \sigma_Z \quad \text{as } T \to \infty. \tag{3.1}$$

Proof.
By stationarity, $E[Z_s^2] = E[Z_0^2]$ for every $s \ge 0$, hence $E[\widehat f_T(Z)] = f_Z$, so the estimator $\widehat f_T(Z)$ is unbiased.

The next step is to show that the estimator $\widehat f_T(Z)$ is strongly consistent, i.e., that $\widehat f_T(Z) \to f_Z$ almost surely as $T \to \infty$. Let
$$V_T(Z) := \sqrt T\,\big(\widehat f_T(Z) - f_Z\big) = \frac{1}{\sqrt T}\int_0^T \big(Z_t^2 - E[Z_t^2]\big)\,dt = \frac{1}{\sqrt T}\int_0^T I_2\big(\mathbf 1_{[0,t]}^{\otimes 2}\big)\,dt, \tag{3.2}$$
where we have used (2.2). Then
$$E[V_T(Z)^2] = \frac{1}{T}\, E\left[\left(\int_0^T I_2\big(\mathbf 1_{[0,t]}^{\otimes 2}\big)\,dt\right)^2\right] = \frac{1}{T}\int_{[0,T]^2} E\Big[I_2\big(\mathbf 1_{[0,t]}^{\otimes 2}\big)\, I_2\big(\mathbf 1_{[0,s]}^{\otimes 2}\big)\Big]\,dt\,ds. \tag{3.3}$$
By (2.4),
$$E\Big[I_2\big(\mathbf 1_{[0,t]}^{\otimes 2}\big)\, I_2\big(\mathbf 1_{[0,s]}^{\otimes 2}\big)\Big] = 2!\,\big\langle \mathbf 1_{[0,t]}^{\otimes 2},\, \mathbf 1_{[0,s]}^{\otimes 2}\big\rangle_{\mathcal H^{\otimes 2}} = 2\,(E[Z_t Z_s])^2 = 2\rho^2(t-s).$$
Therefore,
$$E[V_T(Z)^2] = \frac{2}{T}\int_{[0,T]^2} \rho^2(t-s)\,dt\,ds = \frac{4}{T}\int_0^T (T-u)\,\rho^2(u)\,du = 4\int_0^\infty \Big(1 - \frac{u}{T}\Big)_+\, \rho^2(u)\,du \le 4\int_0^T \rho^2(u)\,du < \infty, \tag{3.4}$$
where $(x)_+ = \max\{x, 0\}$. Note that the above also implies that $E[V_T(Z)^2] \uparrow \sigma_Z^2$, establishing (3.1). In particular, $\|\widehat f_T(Z) - f_Z\|_{L^2} \le \sigma_Z/\sqrt T$, which shows that $\widehat f_T(Z)$ converges to $f_Z$ in $L^2$. At this point we recall [25, Lemma 2.1]:

Lemma 3.2.
Let $\gamma > 0$ and let $(Z_n)_{n\in\mathbb N}$ be a sequence of random variables. If for every $p \ge 1$ there exists a constant $c_p > 0$ such that for all $n \in \mathbb N$,
$$\|Z_n\|_{L^p(\Omega)} \le c_p \cdot n^{-\gamma},$$
then for all $\varepsilon > 0$ there exists a random variable $\alpha_\varepsilon$, almost surely finite, such that
$$|Z_n| \le \alpha_\varepsilon \cdot n^{-\gamma + \varepsilon} \quad \text{almost surely, for all } n \in \mathbb N.$$
Moreover, $E|\alpha_\varepsilon|^p < \infty$ for all $p \ge 1$.

We can apply Lemma 3.2 as soon as $\|\widehat f_T(Z) - f_Z\|_{L^p} \le C_p/\sqrt T$ for every $p \ge 1$. We have shown this for $p = 2$; the inequality between $L^p$ and $L^q$ norms yields the same bound for $p \in [1, 2]$, while the hypercontractivity property (2.6) yields it for $p \ge 2$, since $\widehat f_T(Z) - f_Z$ belongs to the second Wiener chaos. Consequently, $\widehat f_n(Z)$ converges almost surely to $f_Z$ as $n \to \infty$ (along $n \in \mathbb N$). We now use the following more technical result, whose proof is postponed to Section 6.

Lemma 3.3.
Let $\{u_t,\, t \ge 0\}$ be a continuous stochastic process such that for any $p \ge 1$ there is a positive constant $C_p > 0$ with $\sup_{t \ge 0} E[|u_t|^p] < C_p$. In addition, assume that
$$\frac{1}{n}\int_0^n u_t\,dt \longrightarrow 0 \quad \text{almost surely as } n \to \infty.$$
Then,
$$\frac{1}{T}\int_0^T u_t\,dt \longrightarrow 0 \quad \text{almost surely as } T \to \infty.$$

Note that by stationarity $\sup_{t \ge 0} E[|Z_t|^p] = E[|Z_0|^p] < C_p$ for some $C_p > 0$. Therefore, by Lemma 3.3 applied to $u_t = Z_t^2 - f_Z$, $\widehat f_T(Z)$ converges almost surely to $f_Z$ as $T \to \infty$ (along $T \in \mathbb R_+$). □

We now turn to the proof of Theorem 1.1.

3.2. Proof of Theorem 1.1.
The random variable
$$\frac{\widehat f_T(Z) - f_Z}{\sqrt{\mathrm{Var}(\widehat f_T(Z) - f_Z)}} = \frac{V_T(Z)}{\sqrt{E[V_T(Z)^2]}}$$
is centered and normalized, and belongs to the second Wiener chaos. The fourth moment theorem (2.7) therefore applies, and
$$d_{TV}\left(\frac{\widehat f_T(Z) - f_Z}{\sqrt{\mathrm{Var}(\widehat f_T(Z) - f_Z)}},\, N\right) = d_{TV}\left(\frac{V_T(Z)}{\sqrt{E[V_T(Z)^2]}},\, N\right) \le C \max\left\{\frac{|\kappa_3(V_T(Z))|}{E[V_T(Z)^2]^{3/2}},\; \frac{\kappa_4(V_T(Z))}{E[V_T(Z)^2]^{2}}\right\}. \tag{3.5}$$
We are left to study the third and fourth cumulants of $V_T(Z)$. The following technical result is a slight modification of [4, Propositions 6.3, 6.4]:

Lemma 3.4.
For every $T > 0$,
$$|\kappa_3(V_T(Z))| \le \frac{C}{\sqrt T}\left(\int_{-T}^{T} |\rho(t)|^{3/2}\,dt\right)^{2}, \tag{3.6}$$
$$|\kappa_4(V_T(Z))| \le \frac{C}{T}\left(\int_{-T}^{T} |\rho(t)|^{4/3}\,dt\right)^{3}. \tag{3.7}$$

Proof.
The proof follows the same approach as in [4] and is included in Section 6. We note that our proof is more detailed than the one in [4]. □
The corresponding bound for the Wasserstein distance follows from Remark 2.2. Thus Theorem 1.1 is established.

At this point we present two corollaries. First, we study the asymptotic behavior of the bound (1.6) under an additional assumption on the decay of the correlations of $Z$. We have the following:

Corollary 3.5.
Assume there exists $0 < \beta < 1/2$ such that $|\rho(t)| = O(t^{\beta - 1})$. Then,
$$d_{TV}\left(\frac{\widehat f_T(Z) - f_Z}{\sqrt{\mathrm{Var}(\widehat f_T(Z) - f_Z)}},\, N\right) \le C \begin{cases} T^{-1/2} & \text{if } 0 < \beta < 1/3,\\ \log^2(T)\, T^{-1/2} & \text{if } \beta = 1/3,\\ T^{3\beta - 3/2} & \text{if } 1/3 < \beta < 1/2. \end{cases} \tag{3.8}$$

Proof.
Note that there is a constant $C > 0$ such that $|\rho(s)| < C$ for $s \in [-1, 1]$, so that, using the symmetry of $\rho$,
$$\int_{-T}^{T} |\rho(t)|^p\,dt \le C_p\left(1 + \int_1^T |\rho(t)|^p\,dt\right)$$
for $p = 3/2$ and $p = 4/3$. Using $|\rho(t)| = O(t^{\beta-1})$ and Lemma 3.4, one can then establish the following bounds on $\kappa_3(V_T(Z))$ and $\kappa_4(V_T(Z))$:
$$|\kappa_3(V_T(Z))| \le C \begin{cases} T^{-1/2} & \text{if } 0 < \beta < 1/3,\\ \log^2(T)\, T^{-1/2} & \text{if } \beta = 1/3,\\ T^{3\beta - 3/2} & \text{if } 1/3 < \beta < 1/2, \end{cases} \tag{3.9}$$
and
$$|\kappa_4(V_T(Z))| \le C \begin{cases} T^{-1} & \text{if } 0 < \beta < 1/4,\\ \log^3(T)\, T^{-1} & \text{if } \beta = 1/4,\\ T^{4\beta - 2} & \text{if } 1/4 < \beta < 1/2. \end{cases} \tag{3.10}$$
The result follows by a direct application of Theorem 1.1, since the third-cumulant contribution dominates for all $\beta \in (0, 1/2)$. □

Next, using the convergence of $E[V_T(Z)^2]$, one can also establish the following corollary to Theorem 1.1.

Corollary 3.6.
There exists a constant $C > 0$ such that, for all $T > 0$,
$$d_{TV}\left(\frac{\sqrt T}{\sigma_Z}\big(\widehat f_T(Z) - f_Z\big),\, N\right) \le \varphi_T(Z) + 2\left|1 - \frac{\sigma_Z^2}{E(V_T(Z)^2)}\right|,$$
where $\varphi_T(Z)$ is as in Theorem 1.1. Moreover, if there exists $0 < \beta < 1/2$ such that $|\rho(t)| = O(t^{\beta-1})$, then
$$d_{TV}\left(\frac{\sqrt T}{\sigma_Z}\big(\widehat f_T(Z) - f_Z\big),\, N\right) \le C \begin{cases} T^{-1/2} & \text{if } 0 < \beta \le 1/4,\\ T^{2\beta - 1} & \text{if } 1/4 < \beta < 1/2. \end{cases} \tag{3.11}$$
The same result holds for the Wasserstein distance.

Proof.
The first part follows from the following technical result:
Lemma 3.7 ([11, Lemma 5.1]). Let $\mu \in \mathbb R$ and $\sigma > 0$. Then, for every integrable real-valued random variable $F$,
$$d_{TV}(\mu + \sigma F,\, N) \le d_{TV}(F, N) + \sqrt{\frac{2}{\pi}}\,|\mu| + 2\left|1 - \frac{1}{\sigma^2}\right|.$$

Therefore,
$$d_{TV}\left(\frac{\sqrt T}{\sigma_Z}\big(\widehat f_T(Z) - f_Z\big),\, N\right) \le d_{TV}\left(\frac{\widehat f_T(Z) - f_Z}{\sqrt{\mathrm{Var}(\widehat f_T(Z) - f_Z)}},\, N\right) + 2\left|1 - \frac{\sigma_Z^2}{E[V_T(Z)^2]}\right|,$$
by the definition of $V_T(Z)$ and the fact that $E[\widehat f_T(Z)] = f_Z$. Recall that $E[V_T(Z)^2] \uparrow \sigma_Z^2$, so the second term on the right-hand side above is bounded by $C\,|E[V_T(Z)^2] - \sigma_Z^2|$ for some $C > 0$. Assuming $|\rho(t)| = O(t^{\beta-1})$ and using the representation (3.4),
$$\big|E[V_T(Z)^2] - \sigma_Z^2\big| = 4\int_T^\infty \rho^2(u)\,du + \frac{4}{T}\int_0^T u\,\rho^2(u)\,du \le C\left(\int_T^\infty u^{2\beta-2}\,du + \frac{1}{T}\int_0^1 \rho^2(u)\,du + \frac{1}{T}\int_1^T u^{2\beta-1}\,du\right).$$
A direct computation yields
$$\big|E[V_T(Z)^2] - \sigma_Z^2\big| \le C\big(T^{-1} + T^{2\beta-1}\big) \le C\,T^{2\beta-1}, \tag{3.12}$$
since $2\beta - 1 > -1$. Finally, the bound (3.11) follows from (3.8) and (3.12). □
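The normalized central limit theorem behind Corollary 3.6 can be illustrated numerically; the sketch below is ours, not the paper's. For the stationary OU process $dZ_t = -\theta Z_t\,dt + dW_t$ one has $\rho(t) = e^{-\theta|t|}/(2\theta)$, hence $f_Z = 1/(2\theta)$ and $\sigma_Z^2 = 4\int_0^\infty \rho^2 = 1/(2\theta^3)$, so the statistic $\sqrt T\,\sigma_Z^{-1}(\widehat f_T(Z) - f_Z)$ should have mean about 0 and variance about 1 for large $T$ (here approximated by its fine-grid discrete version):

```python
import numpy as np

rng = np.random.default_rng(2)

def standardized_errors(theta, delta, n, reps):
    """Monte Carlo sample of sqrt(T)/sigma_Z * (estimator - f_Z) for the
    stationary OU process, simulated exactly as an AR(1) recursion."""
    a = np.exp(-theta * delta)
    s = np.sqrt((1.0 - a * a) / (2.0 * theta))      # one-step conditional std
    z = rng.normal(0.0, np.sqrt(1.0 / (2.0 * theta)), size=reps)  # stationary start
    acc = np.zeros(reps)
    for _ in range(n):                               # advance all replications
        z = a * z + s * rng.standard_normal(reps)
        acc += z * z
    T = n * delta
    f_tilde = acc / n                                # discrete estimator (1.2)
    f_Z = 1.0 / (2.0 * theta)
    sigma_Z = np.sqrt(1.0 / (2.0 * theta ** 3))      # sigma_Z^2 = 4*int_0^inf rho^2
    return np.sqrt(T) / sigma_Z * (f_tilde - f_Z)

errs = standardized_errors(theta=0.5, delta=0.05, n=20_000, reps=400)
print(errs.mean(), errs.var())   # near 0 and 1 respectively
```

The residual deviation of the empirical variance from 1 is of order $1/T$, consistent with the bound (3.12).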
3.3. Discrete-time observations.
In this section we estimate the limiting variance $f_Z$ based on discrete high-frequency observations of $Z$, by considering the discrete version $\widetilde f_n(Z)$ of the estimator $\widehat f_T(Z)$:
$$\widetilde f_n(Z) := \frac{1}{n}\sum_{i=1}^n Z_{t_i}^2,$$
where $t_i = i\Delta_n$, $i = 0, \dots, n$, with $\Delta_n \to 0$ and $T_n := n\Delta_n \to \infty$. We will assume an additional property of the process $Z$, which allows us to compare quantitatively the two estimators $\widehat f_T(Z)$ and $\widetilde f_n(Z)$:

Assumption 1. For all $s, t \in \mathbb R_+$ such that $|s - t|$ is small enough,
$$E[|Z_t - Z_s|^2] \le C\,|t - s|^{\alpha}, \tag{3.13}$$
for some constant $0 < \alpha \le 2$.

Mirroring the properties of $\widehat f_T(Z)$, we first show the following:

Lemma 3.8.
The estimator $\widetilde f_n(Z)$ is unbiased. Assume that Assumption 1 holds. Then,
$$E\big|\widehat f_{T_n}(Z) - \widetilde f_n(Z)\big|^2 \le C_\alpha\, \Delta_n^{\alpha}, \tag{3.14}$$
where $C_\alpha > 0$ is a constant that depends only on $\alpha$. Moreover, if $n\Delta_n^\eta \to 0$ as $n \to \infty$ for some $\eta > 1$, then $\widetilde f_n(Z)$ is strongly consistent.

Proof. The first property follows from the fact that the process $Z$ is stationary. To show strong consistency define, similarly to (3.2),
$$U_n(Z) := \sqrt{T_n}\,\big(\widetilde f_n(Z) - f_Z\big) = \frac{\Delta_n}{\sqrt{T_n}}\sum_{i=1}^n \big(Z_{t_i}^2 - E[Z_{t_i}^2]\big). \tag{3.15}$$
Then
$$\widetilde f_n(Z) - f_Z = \frac{U_n(Z)}{\sqrt{T_n}} = \frac{V_{T_n}(Z)}{\sqrt{T_n}} + \frac{U_n(Z) - V_{T_n}(Z)}{\sqrt{T_n}}. \tag{3.16}$$
We know from (3.1) and Lemma 3.2 that $V_{T_n}/\sqrt{T_n}$ converges almost surely to 0. Let $\delta_n(Z) := U_n(Z) - V_{T_n}(Z) = \sqrt{T_n}\,(\widetilde f_n(Z) - \widehat f_{T_n}(Z))$. We estimate the second moment of $\delta_n(Z)$:
$$E[\delta_n(Z)^2] \le E\left[\left(\frac{1}{\sqrt{T_n}}\sum_{i=1}^n \int_{t_{i-1}}^{t_i} \big|Z_{t_i}^2 - Z_t^2\big|\,dt\right)^2\right] \le \frac{n}{T_n}\sum_{i=1}^n E\left[\left(\int_{t_{i-1}}^{t_i} \big|Z_{t_i}^2 - Z_t^2\big|\,dt\right)^2\right],$$
where we have applied the inequality between the arithmetic and the quadratic mean. Next, by the Cauchy-Schwarz inequality, for $s, t \in [t_{i-1}, t_i]$,
$$E\big[|Z_{t_i}^2 - Z_t^2|\,|Z_{t_i}^2 - Z_s^2|\big] \le \Big(E[(Z_{t_i} - Z_t)^4]\, E[(Z_{t_i} - Z_s)^4]\, E[(Z_{t_i} + Z_t)^4]\, E[(Z_{t_i} + Z_s)^4]\Big)^{1/4}.$$
Now, by the hypercontractivity property (2.6) and (3.13),
$$E[(Z_{t_i} - Z_t)^4]^{1/4} \le C\, E[(Z_{t_i} - Z_t)^2]^{1/2} \le C\,|t_i - t|^{\alpha/2}.$$
Next, using stationarity, $E[(Z_{t_i} + Z_t)^4] \le C\, E[Z_0^4]$. Therefore, for some constant $C > 0$ depending only on the law of $Z_0$,
$$E[\delta_n(Z)^2] \le \frac{n}{T_n}\sum_{i=1}^n \int_{t_{i-1}}^{t_i}\int_{t_{i-1}}^{t_i} C\,|t_i - t|^{\alpha/2}\,|t_i - s|^{\alpha/2}\,ds\,dt = \frac{C\,n^2\,\Delta_n^{\alpha+2}}{T_n} \int_0^1\int_0^1 (uv)^{\alpha/2}\,du\,dv \le C\,n\,\Delta_n^{\alpha+1},$$
with the substitution $u = (t_i - t)/\Delta_n$, $v = (t_i - s)/\Delta_n$. Thus (3.14) is established, since $E|\widehat f_{T_n}(Z) - \widetilde f_n(Z)|^2 = E[\delta_n(Z)^2]/T_n \le C\Delta_n^{\alpha}$.

Now, using that $n\Delta_n^\eta \to 0$ for some $\eta > 1$,
$$E\left[\left(\frac{\delta_n(Z)}{\sqrt{T_n}}\right)^2\right] \le C\,\Delta_n^{\alpha} = C\,n^{-\alpha/\eta}\,\big(n\Delta_n^\eta\big)^{\alpha/\eta} \le C\,n^{-\alpha/\eta}. \tag{3.17}$$
By the hypercontractivity property (2.6) and Lemma 3.2, we obtain $\delta_n(Z)/\sqrt{T_n} \to 0$ almost surely, and thus $\widehat f_{T_n}(Z) - \widetilde f_n(Z) \to 0$ almost surely. Since $\widehat f_{T_n}(Z) - f_Z \to 0$ almost surely, it follows that $\widetilde f_n(Z) - f_Z \to 0$ almost surely. □

We now turn to the proof of Theorem 1.2.

3.4. Proof of Theorem 1.2.
The discrete estimator $\widetilde f_n(Z)$ is unbiased, so $\mathrm{Var}(\widetilde f_n(Z) - f_Z) = E[(\widetilde f_n(Z) - f_Z)^2]$. Next, by the triangle inequality,
$$d_{TV}\left(\frac{\widetilde f_n(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}},\, N\right) \le d_{TV}\left(\frac{\widetilde f_n(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}},\, \frac{\widehat f_{T_n}(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}}\right) + d_{TV}\left(\frac{\widehat f_{T_n}(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}},\, N\right). \tag{3.18}$$
By Lemma 3.7,
$$d_{TV}\left(\frac{\widehat f_{T_n}(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}},\, N\right) \le d_{TV}\left(\frac{\widehat f_{T_n}(Z) - f_Z}{\sqrt{E[(\widehat f_{T_n}(Z) - f_Z)^2]}},\, N\right) + 2\left|1 - \frac{\mathrm{Var}(\widehat f_{T_n}(Z) - f_Z)}{\mathrm{Var}(\widetilde f_n(Z) - f_Z)}\right|.$$
Then, by Theorem 1.1, the first term on the right-hand side is further bounded by $\varphi_{T_n}(Z)$ as defined in (1.6).
Thus, we are left to bound the first term in (3.18). By the definition of the total variation distance,
$$d_{TV}\left(\frac{\widetilde f_n(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}},\, \frac{\widehat f_{T_n}(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}}\right) = d_{TV}\big(U_n(Z),\, V_{T_n}(Z)\big),$$
with the two quantities given in (3.15) and (3.2). We recall the following technical result.

Lemma 3.9 ([24, Theorem 3.5]). There is a positive constant $C > 0$ such that, for all multiple integrals $F$ and $G$ of order 2,
$$d_{TV}(F, G) \le C\left(\frac{E[(F - G)^2]}{E[F^2]}\right)^{1/2}.$$

Therefore, there is a positive constant $C > 0$ such that, for all $n \ge 1$,
$$d_{TV}\big(U_n(Z),\, V_{T_n}(Z)\big) \le C\left(\frac{E[\delta_n(Z)^2]}{E[V_{T_n}(Z)^2]}\right)^{1/2} \le C\,\big(n\Delta_n^{\alpha+1}\big)^{1/2}, \tag{3.19}$$
where we have used (3.1) and (3.14). The corresponding bound for the Wasserstein distance follows from Remark 2.2. Thus the proof of Theorem 1.2 is complete.

Remark 3.10. Let us present an example where the bounds on the total variation distance in Theorem 1.2 decrease to 0 and thus a central limit theorem holds. Let $\Delta_n = 1/n^\lambda$ with $\frac{1}{\alpha+1} < \lambda < 1$. Then $n\Delta_n \to \infty$ and $n\Delta_n^{\alpha+1} \to 0$. Moreover,
$$2\left|1 - \frac{\mathrm{Var}(\widehat f_{T_n}(Z) - f_Z)}{\mathrm{Var}(\widetilde f_n(Z) - f_Z)}\right| = 2\left|\frac{E[U_n(Z)^2] - E[V_{T_n}(Z)^2]}{E[U_n(Z)^2]}\right|,$$
and by the Cauchy-Schwarz inequality $|E[U_n(Z)^2] - E[V_{T_n}(Z)^2]| \le E[\delta_n(Z)^2]^{1/2}\big(E[U_n(Z)^2]^{1/2} + E[V_{T_n}(Z)^2]^{1/2}\big)$, so this term is also of order $(n\Delta_n^{\alpha+1})^{1/2}$; in particular, since $E[\delta_n(Z)^2] \to 0$ and $E[V_{T_n}(Z)^2] \uparrow \sigma_Z^2$, we have $E[U_n(Z)^2] \to \sigma_Z^2$. Thus,
$$d_{TV}\left(\frac{\widetilde f_n(Z) - f_Z}{\sqrt{E[(\widetilde f_n(Z) - f_Z)^2]}},\, N\right) \le \varphi_{T_n}(Z) + C\,\big(n\Delta_n^{\alpha+1}\big)^{1/2}.$$
Further bounds on $\varphi_{T_n}(Z)$ are presented, for instance, in Corollary 3.5 when $|\rho(t)| = O(t^{\beta-1})$ for some $\beta \in (0, 1/2)$.

Corollary 3.11.
Let $\sigma_Z^2 = 4\int_0^\infty \rho^2(u)\,du$. Under the same assumptions as in Theorem 1.2, for all $n \ge 1$,
$$d_{TV}\left(\frac{\sqrt{T_n}}{\sigma_Z}\big(\widetilde f_n(Z) - f_Z\big),\, N\right) \le \varphi_{T_n}(Z) + 2\left|1 - \frac{\sigma_Z^2}{E(V_{T_n}(Z)^2)}\right| + C\,\big(n\Delta_n^{\alpha+1}\big)^{1/2},$$
where $\varphi_{T_n}(Z)$ is as in Theorem 1.1 and $C > 0$ is a constant depending only on $E[Z_0^4]$. The same result holds for the Wasserstein distance.

4. Parameter estimation for non-stationary Gaussian processes
In practical applications, the data rarely come from a stationary process. This is reflected on the modeling side by, for instance, studying stochastic systems started at a point mass rather than at the system's stationary distribution. Some more specific examples are included in Section 5.

The present section is devoted to the general treatment of a process that is asymptotically stationary. In particular, let $Z$ be a centered stationary Gaussian process (as in Section 3), and let $Y$ be a stochastic process satisfying the following: there exists a constant $\gamma > 1$ such that for every $p \ge 1$ and $T > 0$,
$$\|Y_T\|_{L^p} = O\big(T^{-\gamma}\big). \tag{4.1}$$
Then we consider second moment estimators for the process $X := Z + Y$. Our goal is to estimate the limiting variance $f_X := \lim_{T\to\infty} E[X_T^2]$. The limit exists and in fact $f_X = f_Z$. Indeed,
$$E[X_T^2] = E[(Z_T + Y_T)^2] = E[Z_T^2] + 2E[Z_T Y_T] + E[Y_T^2] \to f_Z,$$
since $E[Z_T^2] = f_Z$, $E[Y_T^2] = O(T^{-2\gamma})$ and, by the Cauchy-Schwarz inequality, $|E[Z_T Y_T]| = O(T^{-\gamma})$. As in Section 3 we consider the second moment estimators $\widehat f_T(X)$ and $\widetilde f_n(X)$, based on continuous-time and discrete-time observations of $X$:
$$\widehat f_T(X) := \frac{1}{T}\int_0^T X_t^2\,dt, \qquad T > 0, \tag{4.2}$$
$$\widetilde f_n(X) := \frac{1}{n}\sum_{i=1}^n X_{t_i}^2, \qquad n \ge 1, \tag{4.3}$$
where $t_i = i\Delta_n$, $i = 0, \dots, n$, with $\Delta_n \to 0$ and $T_n := n\Delta_n \to \infty$. We establish some basic properties of the two estimators.

Proposition 4.1.
The estimators $\widehat f_T(X)$ and $\widetilde f_n(X)$ are asymptotically unbiased. Moreover, there is a constant $C > 0$ such that for all $T > 0$ and $n \ge 1$,
$$|E[\widehat f_T(X) - f_X]| \le CT^{-1} \quad \text{and} \quad |E[\widetilde f_n(X) - f_X]| \le CT_n^{-1}. \tag{4.4}$$

Proof.
We established that for all $T > 0$, $E[X_T^2] = f_X + \mathrm{err}(T)$, where the error term satisfies $|\mathrm{err}(T)| = O(T^{-\gamma})$ with $\gamma > 1$. Therefore, for the continuous estimator,
$$|E[\widehat f_T(X) - f_X]| \le \frac{1}{T}\int_0^T |\mathrm{err}(t)|\,dt \le \frac{C}{T} \longrightarrow 0 \quad \text{as } T \to \infty,$$
since $\int_0^\infty |\mathrm{err}(t)|\,dt < \infty$; thus $\widehat f_T(X)$ is asymptotically unbiased. A similar computation yields the same for $\widetilde f_n(X)$. □
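The $O(T^{-1})$ bias rate can be seen in closed form on the simplest example of Section 5; the computation below is ours, not the paper's. For the OU process $dX_t = -\theta X_t\,dt + dW_t$ started at $X_0 = 0$, one has $E[X_t^2] = (1 - e^{-2\theta t})/(2\theta)$, so
$$E[\widehat f_T(X)] - f_X = -\frac{1 - e^{-2\theta T}}{4\theta^2 T} = O(1/T).$$

```python
import numpy as np

theta = 0.5
f_X = 1.0 / (2.0 * theta)   # limiting variance of the OU started at 0

def bias(T, theta=theta):
    """Exact bias E[f_hat_T(X)] - f_X of the continuous estimator for the
    OU process started at X_0 = 0 (obtained by integrating E[X_t^2])."""
    return -(1.0 - np.exp(-2.0 * theta * T)) / (4.0 * theta ** 2 * T)

for T in (10.0, 100.0, 1000.0):
    print(T, bias(T), T * bias(T))   # T * bias tends to -1/(4*theta^2) = -1
```

So the bias vanishes at the rate $1/T$ announced in Proposition 4.1, and not faster, even though $\|Y_T\|_{L^p}$ decays exponentially here.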
ERRY-ESSEEN BOUNDS FOR GAUSSIAN PROCESS ESTIMATORS 15
Proposition 4.2. If R R ρ ( r ) dr < ∞ and the condition (4.1) holds, then (cid:13)(cid:13)(cid:13) b f T ( X ) − f X (cid:13)(cid:13)(cid:13) L ≤ CT − / . (4.5) Moreover, b f T ( X ) −→ f X almost surely as T → ∞ . (4.6) In addition, if Z satisfies the helix property (3.13), and n ∆ α +1 n → , as n → ∞ , then e f n ( X ) −→ f X almost surely as T → ∞ . (4.7) Proof.
The first step, as before, is to establish a bound on (cid:13)(cid:13)(cid:13) b f T ( X ) − f X (cid:13)(cid:13)(cid:13) L . We have that (cid:13)(cid:13)(cid:13) b f T ( X ) − f X (cid:13)(cid:13)(cid:13) L ≤ (cid:13)(cid:13)(cid:13) b f T ( X ) − b f T ( Z ) (cid:13)(cid:13)(cid:13) L + (cid:13)(cid:13)(cid:13) b f T ( Z ) − f X (cid:13)(cid:13)(cid:13) L . Note, that by the triangle inequality and the Cauchy-Schwarz inequality, (cid:13)(cid:13)(cid:13) b f T ( X ) − b f T ( Z ) (cid:13)(cid:13)(cid:13) L ≤ s E (cid:12)(cid:12)(cid:12)(cid:12) T Z T Y t dt (cid:12)(cid:12)(cid:12)(cid:12) ≤ T (cid:18)Z [0 ,T ] (cid:0) E [ Y t ] E [ Y s ] (cid:1) / dsdt (cid:19) / . (4.8)Therefore, using k Y t k L p = O ( t − γ ), (cid:13)(cid:13)(cid:13) b f T ( X ) − f X (cid:13)(cid:13)(cid:13) L ≤ C ( T − γ + T − / ) ≤ CT − / , where we have applied (3.1).By hypercontractivity (2.6) we can extend the bound to L p norms for p ≥
1. Thenapplying Lemma 3.2 and Lemma 3.3 consecutively establishes strong consistency for b f T ( X ).The proof for e f n ( X ) is similar and a main ingredient is the bound from (3.14). (cid:3) We now turn to the alternative formulations for our main results Theorem 1.1 andTheorem 1.2 when the stochastic process is asymptotically stationary. Recall that ρ ( t ) = ρ ( − t ) := E [ Z Z t ] for t ≥ Theorem 4.3.
Assume that R R ρ ( r ) dr < ∞ and that the condition (4.1) holds. Let N ∼ N (0 , be the standard normal random variable. Then, there exists a constant C > such that, for all T > , d T V b f T ( X ) − f X − E [ b f T ( X ) − f X ] q V ar ( b f T ( X ) − f X ) , N ≤ ϕ T ( Z ) + CT − γ , (4.9) where ϕ T ( Z ) is defined in (1.6) . The same result holds for the Wasserstein distance.Proof. By the triangle inequality and Lemma (3.7): d T V b f T ( X ) − f X − E [ b f T ( X ) − f X ] q V ar ( b f T ( X ) − f X ) , N ≤ d T V b f T ( Z ) − f Z q V ar ( b f T ( Z ) − f Z ) , N + d T V b f T ( X ) − f Z q V ar ( b f T ( Z ) − f Z ) , b f T ( Z ) − f Z q V ar ( b f T ( Z ) − f Z ) + r π (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) E [ b f T ( X ) − f X ] q V ar ( b f T ( X ) − f X ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + 2 (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − V ar ( b f T ( X ) − f X ) V ar ( b f T ( Z ) − f Z ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) . (4.10)We analyze each term on the right hand side of (4.10). First, by Theorem 1.1, d T V b f T ( Z ) − f Z q V ar ( b f T ( Z ) − f Z ) , N ≤ ϕ T ( Z ) , with ϕ T ( Z ) as defined in (1.6).Then, using standard properties of the total variation distance and Lemma 3.9 : d T V b f T ( X ) − f Z q V ar ( b f T ( Z ) − f Z ) , b f T ( Z ) − f Z q V ar ( b f T ( Z ) − f Z ) = d T V (cid:16) b f T ( X ) − f Z , b f T ( Z ) − f Z (cid:17) ≤ C E | b f T ( X ) − b f T ( Z ) | E | b f T ( Z ) − f Z | ! / ≤ CT (1 − γ ) / . Indeed, E | b f T ( X ) − b f T ( Z ) | = O ( T − γ ) and T E | b f T ( Z ) − f Z | → σ Z < ∞ , see (3.1). Moreoversince | E [ b f T ( X ) − f X ] | = O ( T − γ ), one has (cid:12)(cid:12)(cid:12) V ar ( b f T ( X ) − f X ) − V ar ( b f T ( Z ) − f Z ) (cid:12)(cid:12)(cid:12) = O ( T − γ ) . (4.11)Then, there exists a constant C >
T > r π (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) E [ b f T ( X ) − f X ] q V ar ( b f T ( X ) − f X ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ CT / − γ and 2 (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − V ar ( b f T ( X ) − f X ) V ar ( b f T ( Z ) − f Z ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ CT − γ . Therefore, d T V b f T ( X ) − f X − E [ b f T ( X ) − f X ] q V ar ( b f T ( X ) − f X ) , N ≤ ϕ T ( Z ) + CT − γ , as desired. The corresponding bound for the Wasserstein distance follows from Remark 2.2. (cid:3) We now turn to the discrete version of the previous result.
Theorem 4.4.
Assume $\int_{\mathbb R}|\rho(r)|^{4/3}\,dr<\infty$, that $Z$ verifies (3.13), and that condition (4.1) holds. Let $N\sim\mathcal N(0,1)$. Then there is $C>0$ such that, for every $n\ge1$,
$$d_{TV}\left(\frac{\sqrt{T_n}}{\sigma_Z}\big(\tilde f_n(X)-f_X\big),\,N\right)\le\varphi_{T_n}(Z)+C\big[n\Delta_n^{\alpha+1}\big]^{1/2}+C\,T_n^{(1-\gamma)/2}.$$

Proof.
We follow the same steps as in the proof of Theorem 4.3. The extra term $C\big[n\Delta_n^{\alpha+1}\big]^{1/2}$ comes from the bound in Theorem 1.2. □

5. Applications to Gaussian Ornstein-Uhlenbeck processes
In this section we consider the Gaussian Ornstein-Uhlenbeck process $X^\theta:=\{X^\theta_t,\ t\ge0\}$ defined via the linear stochastic differential equation
$$dX^\theta_t=-\theta X^\theta_t\,dt+dG_t,\qquad X^\theta_0=0.\tag{5.1}$$
Here $\theta>0$ is the unknown drift parameter and $G$ is an arbitrary mean-zero Gaussian process such that $Z^\theta_t:=\int_{-\infty}^t e^{-\theta(t-s)}\,dG_s$, for $t\ge0$, is a stationary Gaussian process. Equation (5.1) has the explicit solution (see [10])
$$X^\theta_t=e^{-\theta t}\int_0^t e^{\theta s}\,dG_s,\qquad t\ge0,$$
where the integral can be understood in the Wiener sense. Then $X^\theta_t=Z^\theta_t-e^{-\theta t}Z^\theta_0$, and $X^\theta$ thus satisfies (4.1) with $Y_t:=-e^{-\theta t}Z^\theta_0$. Moreover, $\|Y_T\|_{L^p}=O(T^{-\gamma})$ is satisfied for every $p\ge1$ and every $\gamma>0$. Our approach is based on writing
$$\theta=g_{Z^\theta}\big(E[(Z^\theta_0)^2]\big),$$
for some invertible function $g_{Z^\theta}:\mathbb R_+\to\mathbb R_+$. There are certain cases where there is an explicit expression for $g_{Z^\theta}$ (or its inverse); we explore those in the coming sections. For now, we describe the general treatment and make only one assumption on $g_{Z^\theta}$:

Assumption 2. The function $g_{Z^\theta}$ is a diffeomorphism, and is twice continuously differentiable.

We estimate $\theta$ based on continuous and discrete observations of $X^\theta$:
$$\hat\theta_T:=g_{Z^\theta}\Big(\frac1T\int_0^T(X^\theta_t)^2\,dt\Big)=g_{Z^\theta}\big(\hat f_T(X^\theta)\big),\qquad T>0,\tag{5.2}$$
$$\tilde\theta_n:=g_{Z^\theta}\Big(\frac1n\sum_{i=1}^n(X^\theta_{t_i})^2\Big)=g_{Z^\theta}\big(\tilde f_n(X^\theta)\big),\qquad n\ge1,\tag{5.3}$$
where $t_i=i\Delta_n$, $i=0,\dots,n$, $\Delta_n\to0$, $T_n=n\Delta_n\to\infty$, whereas $\hat f_T(X^\theta)$ and $\tilde f_n(X^\theta)$ are given by (4.2) and (4.3), respectively. The following holds for the continuous estimator $\hat\theta_T$.

Theorem 5.1.
Assume that $\int_{\mathbb R}|\rho(r)|^{4/3}\,dr<\infty$ and that Assumption 2 holds. Then, for every $p\ge1$,
$$d_W\left(\frac{\hat\theta_T-\theta-E[\hat\theta_T-\theta]}{g'_{Z^\theta}(f_{X^\theta})\sqrt{\mathrm{Var}(\hat f_T(X^\theta)-f_{X^\theta})}},\,N\right)\le\frac{C\,\big(E|g''_{Z^\theta}(\zeta_T)|^p\big)^{1/p}}{\sqrt T}+\varphi_T(Z^\theta)+C\,T^{-1/2},\tag{5.4}$$
for some constant $C>0$, where $\zeta_T$ is a random point between $\hat f_T(X^\theta)$ and $f_{X^\theta}$.

Proof. Recall that by definition $\theta=g_{Z^\theta}(f_{X^\theta})$. Under Assumption 2, a second-order Taylor expansion gives
$$\hat\theta_T-\theta=g'_{Z^\theta}(f_{X^\theta})\big(\hat f_T(X^\theta)-f_{X^\theta}\big)+\tfrac12\,g''_{Z^\theta}(\zeta_T)\big(\hat f_T(X^\theta)-f_{X^\theta}\big)^2$$
for some random point $\zeta_T$ between $\hat f_T(X^\theta)$ and $f_{X^\theta}$. Denote $V_f:=\sqrt{\mathrm{Var}(\hat f_T(X^\theta)-f_{X^\theta})}$. Then
$$d_W\left(\frac{\hat\theta_T-\theta-E[\hat\theta_T-\theta]}{g'_{Z^\theta}(f_{X^\theta})\,V_f},\,N\right)\le\frac{|E[\hat\theta_T-\theta]|+\tfrac12\,E\big|(\hat f_T(X^\theta)-f_{X^\theta})^2\,g''_{Z^\theta}(\zeta_T)\big|}{|g'_{Z^\theta}(f_{X^\theta})|\,V_f}+d_W\left(\frac{\hat f_T(X^\theta)-f_{X^\theta}}{V_f},\,N\right),$$
where we have used that $d_W(x_1+x_2,y)\le E[|x_1|]+d_W(x_2,y)$ for any random variables $x_1,x_2,y$. This property of the Wasserstein distance is the main reason our results in Section 5 concern $d_W$ and not $d_{TV}$.

The second term in the inequality above is bounded in Theorem 4.3. By Hölder's inequality and the hypercontractivity property, for $p,q>1$ with $1/p+1/q=1$,
$$E\big|(\hat f_T(X^\theta)-f_{X^\theta})^2\,g''_{Z^\theta}(\zeta_T)\big|\le\big(E|g''_{Z^\theta}(\zeta_T)|^p\big)^{1/p}\Big(E\big|\hat f_T(X^\theta)-f_{X^\theta}\big|^{2q}\Big)^{1/q}\le C\,\big(E|g''_{Z^\theta}(\zeta_T)|^p\big)^{1/p}\,E\big|\hat f_T(X^\theta)-f_{X^\theta}\big|^2,$$
for some constant $C>0$ depending only on $p$. Moreover,
$$|E[\hat\theta_T-\theta]|\le|g'_{Z^\theta}(f_{X^\theta})|\,\big|E[\hat f_T(X^\theta)-f_{X^\theta}]\big|+\tfrac12\,E\big|(\hat f_T(X^\theta)-f_{X^\theta})^2\,g''_{Z^\theta}(\zeta_T)\big|.$$
Recall that, by (4.5), $E|\hat f_T(X^\theta)-f_{X^\theta}|^2\le C/T$, and by (4.4), $|E[\hat f_T(X^\theta)-f_{X^\theta}]|\le C\,T^{-\gamma}$. Therefore,
$$|E[\hat\theta_T-\theta]|+\tfrac12\,E\big|(\hat f_T(X^\theta)-f_{X^\theta})^2\,g''_{Z^\theta}(\zeta_T)\big|\le C\Big(|g'_{Z^\theta}(f_{X^\theta})|+\tfrac32\big(E|g''_{Z^\theta}(\zeta_T)|^p\big)^{1/p}\Big)\,T^{-1}.$$
Now, by (3.1) and (4.11), $T\,V_f^2\to\sigma_Z^2$. The bound (5.4) follows. □

Similarly, we obtain the rate of convergence in law of $\sqrt{T_n}(\tilde\theta_n-\theta)$ as follows.
Theorem 5.2.
Suppose that the conditions of Theorem 4.4 hold. If $g''_{Z^\theta}(\xi_n)$, with $\xi_n$ a random point between $\tilde f_n(X^\theta)$ and $f_{X^\theta}$, has a moment of order greater than $1$ which is bounded in $n$, then
$$d_W\left(\frac{\sqrt{T_n}}{\sigma_{Z^\theta}\,g'_{Z^\theta}(f_{X^\theta})}\big(\tilde\theta_n-\theta\big),\,N\right)\le\frac{C}{\sqrt{T_n}}+\varphi_{T_n}(Z^\theta)+C\big[n\Delta_n^{\alpha+1}\big]^{1/2}.$$

5.1. Fractional Ornstein-Uhlenbeck process of the first kind.
Here we consider the Ornstein-Uhlenbeck process $X^\theta:=\{X^\theta_t,\ t\ge0\}$ driven by a fractional Brownian motion $\{B^H_t,\ t\ge0\}$ of Hurst index $H\in(0,\tfrac34)$. More precisely, $X^\theta$ is the solution of the following linear stochastic differential equation
$$X^\theta_0=0;\qquad dX^\theta_t=-\theta X^\theta_t\,dt+dB^H_t,\quad t\ge0,\tag{5.5}$$
where $\theta>0$ is unknown. The process $X^\theta$ is called a fractional Ornstein-Uhlenbeck process of the first kind, following the terminology in [22]. As in the general case described above, there is an explicit solution to (5.5):
$$X^\theta_t=\int_0^t e^{-\theta(t-s)}\,dB^H_s.\tag{5.6}$$
Moreover,
$$Z^\theta_t=\int_{-\infty}^t e^{-\theta(t-s)}\,dB^H_s\tag{5.7}$$
is a stationary Gaussian process, see [10, 17]. The process $Z^\theta$ is also the stationary solution of equation (5.5) when $X^\theta_0=Z^\theta_0$. We are thus in the setup of Section 4 with $X^\theta=Z^\theta+Y^\theta$, where $Y^\theta_t=-e^{-\theta t}Z^\theta_0$. Again, $\|Y^\theta_t\|_{L^p}=O(t^{-\gamma})$ for every $p\ge1$ and every $\gamma>0$. Note that, by integration by parts and the covariance formula of fractional Brownian motion,
$$E[(Z^\theta_0)^2]=E\Big[\Big(\int_{-\infty}^0 e^{\theta s}\,dB^H_s\Big)^2\Big]=\theta^2\int_{-\infty}^0\int_{-\infty}^0 E[B^H_sB^H_r]\,e^{\theta(s+r)}\,ds\,dr=\frac{\theta^2}{2}\int_{-\infty}^0\int_{-\infty}^0 e^{\theta(s+r)}\big(|r|^{2H}+|s|^{2H}-|r-s|^{2H}\big)\,dr\,ds=\theta^{-2H}H\,\Gamma(2H).$$
Therefore, $\theta=g_{Z^\theta}\big(E[(Z^\theta_0)^2]\big)$, where the invertible function $g_{Z^\theta}:\mathbb R_+\to\mathbb R_+$ is given by
$$g_{Z^\theta}(x)=\left(\frac{H\,\Gamma(2H)}{x}\right)^{1/(2H)},\qquad x>0.\tag{5.8}$$
The estimators $\hat\theta_T$ and $\tilde\theta_n$ given by (5.2) and (5.3) were carefully studied in [20]. In particular, due to [20, Theorem 9] and [20, Theorem 11], the following holds:

Proposition 5.3.
Let $H\in(0,\tfrac34)$. The estimators $\hat\theta_T$ and $\tilde\theta_n$ are strongly consistent. Denote
$$\delta_H:=\frac{\theta}{(2H)^2}\times\begin{cases}\displaystyle(4H-1)_++\frac{\Gamma(2-4H)\,\Gamma(4H)}{\Gamma(2H)\,\Gamma(1-2H)}&\text{if }H\in\big(0,\tfrac12\big),\\[2mm]\displaystyle(4H-1)\Big(1+\frac{\Gamma(3-4H)\,\Gamma(4H-1)}{\Gamma(2-2H)\,\Gamma(2H)}\Big)&\text{if }H\in\big[\tfrac12,\tfrac34\big).\end{cases}\tag{5.9}$$
The following limit theorems hold:
(1) $\sqrt T\,(\hat\theta_T-\theta)\xrightarrow{\;\mathcal L\;}\mathcal N(0,\delta_H)$ as $T\to\infty$.
(2) Assume there is $p>1$ (in the admissible range specified in [20, Theorem 11]) such that $n\Delta_n^p\to0$. Then $\sqrt{T_n}\,(\tilde\theta_n-\theta)\xrightarrow{\;\mathcal L\;}\mathcal N(0,\delta_H)$ as $n\to\infty$.

We can extend these results and obtain Berry-Esseen bounds using Theorems 5.1 and 5.2.
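Before turning to those bounds, here is a minimal numerical sketch (our own illustration, not code from the paper) of the moment-estimator recipe (5.2)-(5.3) in the simplest case $H=1/2$, where the driving fBm is a standard Brownian motion. Then $E[(Z^\theta_0)^2]=H\Gamma(2H)\theta^{-2H}=1/(2\theta)$ and $g_{Z^\theta}(x)=1/(2x)$; for simplicity we sample the stationary solution $Z^\theta$ directly, using its exact Gaussian transition.

```python
import numpy as np

# Sketch: second-moment drift estimation for a classical OU process (H = 1/2).
# Stationary variance: E[(Z_0)^2] = 1/(2*theta), so g(x) = 1/(2x).
rng = np.random.default_rng(0)
theta, dt, n = 1.0, 0.01, 200_000  # observation window T_n = n*dt = 2000

# Exact transition: Z_{t+dt} = a Z_t + s * xi,  xi ~ N(0,1)
a = np.exp(-theta * dt)
s = np.sqrt((1 - a**2) / (2 * theta))
z = np.empty(n)
z[0] = rng.normal(scale=np.sqrt(1 / (2 * theta)))  # start from stationarity
for i in range(1, n):
    z[i] = a * z[i - 1] + s * rng.normal()

f_tilde = np.mean(z**2)            # discrete second-moment estimator, cf. (1.2)
theta_tilde = 1.0 / (2 * f_tilde)  # plug into g, cf. (5.3)
print(theta_tilde)                 # close to the true theta = 1
```

The estimator's fluctuation here is of order $1/\sqrt{T_n}\approx 0.02$, consistent with the CLT in Proposition 5.3.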
Theorem 5.4.
Let $H\in(0,\tfrac34)$, and let $\delta_H$ be given by (5.9). Then
$$d_W\left(\sqrt{\frac{T}{\delta_H}}\,\big(\hat\theta_T-\theta\big),\,N\right)\le C\begin{cases}\dfrac{1}{\sqrt T}&\text{if }0<H\le\tfrac58,\\[2mm]T^{4H-3}&\text{if }\tfrac58<H<\tfrac34.\end{cases}\tag{5.10}$$
Moreover,
$$d_W\left(\sqrt{\frac{T_n}{\delta_H}}\,\big(\tilde\theta_n-\theta\big),\,N\right)\le C\big[n\Delta_n^{2H+1}\big]^{1/2}+C\begin{cases}\dfrac{1}{\sqrt{n\Delta_n}}&\text{if }0<H\le\tfrac58,\\[2mm](n\Delta_n)^{4H-3}&\text{if }\tfrac58<H<\tfrac34.\end{cases}\tag{5.11}$$

Proof.
Recall that $E[Z_0Z_r]=O(r^{2H-2})$ and $H\in(0,\tfrac34)$, so that $\varphi_T(Z)$ and $\varphi_{T_n}(Z)$ are respectively bounded via (3.11) with $\beta=2-2H$. To establish (5.10) and (5.11), it remains to show that $E|g''_{Z^\theta}(\zeta_T)|^p<\infty$ for some $p\ge1$. Given the form of $g''_{Z^\theta}$ and the fact that $\zeta_T$ lies between $\hat f_T(X^\theta)$ and $f_{X^\theta}$, it is enough to show that $E|\hat f_T(X^\theta)|^{-p}<\infty$ for some $p\ge1$. This follows as an application of the technical Proposition 6.3 in Section 6.2. Indeed, since $X^\theta_t=Z^\theta_t-e^{-\theta t}Z^\theta_0$ for a centered stationary Gaussian process $(Z^\theta_t)_{t\ge0}$, one only needs to check that $Z^\theta$ satisfies condition (6.4), i.e.,
$$\lim_{t\to0}E[(Z^\theta_t-Z^\theta_0)^2]=0,\quad\text{and}\quad\bigcap_{t\in\mathbb R}\overline{\mathrm{sp}}\{Z^\theta_s:\ -\infty<s\le t\}=\{0\},\tag{5.12}$$
where $\overline{\mathrm{sp}}$ denotes the $L^2$-closure of the linear span of a set of square-integrable random variables. Note that, by (5.7),
$$E[(Z^\theta_t-Z^\theta_0)^2]\le 2\,E\Big[\Big(\int_0^t e^{-\theta(t-s)}\,dB^H_s\Big)^2\Big]+2\big(1-e^{-\theta t}\big)^2\,E[(Z^\theta_0)^2],$$
which tends to $0$ as $t\to0$, so the first part of (5.12) holds. For the second part, note that $\overline{\mathrm{sp}}\{Z^\theta_s:-\infty<s\le t\}\subset L^2\big(\sigma(B^H_s:-\infty<s\le t)\big)$ and the intersection of these sigma-algebras is trivial. □

5.2. Fractional Ornstein-Uhlenbeck process of the second kind.
The last example we consider is the so-called fractional Ornstein-Uhlenbeck process of the second kind, defined via the stochastic differential equation
$$S^\mu_0=0,\qquad dS^\mu_t=-\mu S^\mu_t\,dt+dY^{(1)}_t,\quad t\ge0,\tag{5.13}$$
where $Y^{(1)}_t=\int_0^t e^{-s}\,dB^H_{a_s}$ with $a_s=He^{s/H}$, where $\{B^H_t,\ t\ge0\}$ is a fractional Brownian motion with Hurst parameter $H\in(\tfrac12,1)$, and where $\mu>0$ is the parameter we would like to estimate. Equation (5.13) admits an explicit solution, see [22, Equation 3.9]:
$$S^\mu_t=e^{-\mu t}\int_0^t e^{\mu s}\,dY^{(1)}_s=e^{-\mu t}\int_0^t e^{(\mu-1)s}\,dB^H_{a_s}=H^{(1-\mu)H}\,e^{-\mu t}\int_{a_0}^{a_t}r^{(\mu-1)H}\,dB^H_r.$$
Hence we can also write $S^\mu_t=Z^\mu_t-e^{-\mu t}Z^\mu_0$, where
$$Z^\mu_t=e^{-\mu t}\int_{-\infty}^t e^{(\mu-1)s}\,dB^H_{a_s}=H^{(1-\mu)H}\,e^{-\mu t}\int_0^{a_t}r^{(\mu-1)H}\,dB^H_r.$$
From [17, Lemma 37], for every $H\in(\tfrac12,1)$,
$$g^{-1}_{Z^\mu}(\mu)=f_{X^\mu}=f_{Z^\mu}=E\big[(Z^\mu_0)^2\big]=\frac{(2H-1)\,H^{2H}}{\mu}\,B\big(1-H+\mu H,\ 2H-1\big),$$
where $B(\cdot,\cdot)$ is the usual beta function. The function $\mu\mapsto g^{-1}_{Z^\mu}(\mu)$ is monotone (decreasing) and convex from $\mathbb R_+$ to $\mathbb R_+$. Now, the following Berry-Esseen result holds:

Theorem 5.5.
Assume $H\in(\tfrac12,1)$. Then
$$d_W\left(\frac{\sqrt T}{\sigma_{Z^\mu}\,g'_{Z^\mu}(f_{X^\mu})}\big(\hat\mu_T-\mu\big),\,N\right)\le\frac{C}{\sqrt T}.$$
Also,
$$d_W\left(\frac{\sqrt{T_n}}{\sigma_{Z^\mu}\,g'_{Z^\mu}(f_{X^\mu})}\big(\tilde\mu_n-\mu\big),\,N\right)\le C\big[n\Delta_n^{2H+1}\big]^{1/2}+\frac{C}{\sqrt{n\Delta_n}}.$$

Proof.
From [22], there exist $c,C>0$ such that, for all $t$, $\rho_{Z^\mu}(t)=E[Z^\mu_0Z^\mu_t]\le C\,e^{-c|t|}$. Thus, by a straightforward calculation, $\varphi_T(Z^\mu)\le C/\sqrt T$.
Moreover, according to [3, Lemma 4], for all $H\in(0,1)$ and $|t-s|$ small enough, $E\big|Y^{(1)}_t-Y^{(1)}_s\big|^2=C|t-s|^{2H}$, and we are in the setting of Corollary 3.6. Finally, one needs to bound $E|g''_{Z^\mu}(\zeta_T)|^p$. As in the proof of Theorem 5.4, we use the technical Proposition 6.3 in Section 6.2. Indeed, since $S^\mu_t=Z^\mu_t-e^{-\mu t}Z^\mu_0$ for a centered stationary Gaussian process $(Z^\mu_t)_{t\ge0}$, one only needs to check that $Z^\mu$ satisfies condition (6.4), i.e.,
$$\lim_{t\to0}E[(Z^\mu_t-Z^\mu_0)^2]=0,\quad\text{and}\quad\bigcap_{t\in\mathbb R}\overline{\mathrm{sp}}\{Z^\mu_s:\ -\infty<s\le t\}=\{0\},\tag{5.14}$$
where $\overline{\mathrm{sp}}$ denotes the $L^2$-closure of the linear span of a set of square-integrable random variables. Recall [2, Lemma 2.1]: one has $E[(Z^\mu_t-Z^\mu_0)^2]\sim C_H\,t^{2H}$ as $t\to0^+$, for some constant $C_H>0$. Thus the first part of (5.14) is satisfied. For the second part, note that $\overline{\mathrm{sp}}\{Z^\mu_s:-\infty<s\le t\}\subset L^2\big(\sigma(B^H_{He^{s/H}}:-\infty<s\le t)\big)$ and the intersection of these sigma-algebras is trivial. □

6. Technical results
6.1. Two technical lemmas.
Proof of Lemma 3.3.
We employ similar arguments to [30, Proposition 4.1]. First, write
$$\frac1T\int_0^T u_t\,dt=\frac{1}{\lfloor T\rfloor}\int_0^{\lfloor T\rfloor}u_t\,dt+\frac1T\int_{\lfloor T\rfloor}^T u_t\,dt+\Big(\frac1T-\frac{1}{\lfloor T\rfloor}\Big)\int_0^{\lfloor T\rfloor}u_t\,dt.$$
Notice that
$$\left|\Big(\frac1T-\frac{1}{\lfloor T\rfloor}\Big)\int_0^{\lfloor T\rfloor}u_t\,dt\right|=\Big(1-\frac{\lfloor T\rfloor}{T}\Big)\left|\frac{1}{\lfloor T\rfloor}\int_0^{\lfloor T\rfloor}u_t\,dt\right|\le\left|\frac{1}{\lfloor T\rfloor}\int_0^{\lfloor T\rfloor}u_t\,dt\right|\cdot\frac1T.$$
We know that $\frac{1}{\lfloor T\rfloor}\int_0^{\lfloor T\rfloor}u_t\,dt\to0$ almost surely as $T\to\infty$. Thus, we are left to show that
$$\left|\frac1T\int_{\lfloor T\rfloor}^T u_t\,dt\right|\to0,\quad\text{almost surely as }T\to\infty.\tag{6.1}$$
Recall that $\sup_{t\ge0}E|u_t|^p<C_p$ for any $p\ge1$. Thus, by Minkowski's inequality,
$$\left(E\left|\frac{1}{\lfloor T\rfloor}\int_{\lfloor T\rfloor}^{\lfloor T\rfloor+1}|u_t|\,dt\right|^p\right)^{1/p}\le\frac{1}{\lfloor T\rfloor}\int_{\lfloor T\rfloor}^{\lfloor T\rfloor+1}\big(E|u_t|^p\big)^{1/p}\,dt\le\frac{C_p^{1/p}}{\lfloor T\rfloor}.$$
Now, by Lemma 3.2,
$$\left|\frac{1}{\lfloor T\rfloor}\int_{\lfloor T\rfloor}^{\lfloor T\rfloor+1}|u_t|\,dt\right|\to0,\quad\text{almost surely as }T\to\infty,$$
and (6.1) follows. □

Proof of Lemma 3.4.
According to (3.2) and since $E[V_T(Z)]=0$, we have
$$\kappa_3(V_T(Z))=E[V_T(Z)^3]=E\Big[\Big(\frac1{\sqrt T}\int_0^T I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)\,dt\Big)^3\Big]=\frac1{T^{3/2}}\int_{[0,T]^3}E\Big[I_2\big(\mathbf 1_{[0,r]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)\Big]\,dr\,ds\,dt.$$
Applying the product formula (2.5),
$$I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)=I_4\big(\mathbf 1_{[0,s]}^{\otimes2}\widetilde\otimes\,\mathbf 1_{[0,t]}^{\otimes2}\big)+4\,I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\widetilde\otimes_1\mathbf 1_{[0,t]}^{\otimes2}\big)+2\,\mathbf 1_{[0,s]}^{\otimes2}\widetilde\otimes_2\mathbf 1_{[0,t]}^{\otimes2}.\tag{6.2}$$
Now, by the isometry (2.4),
$$E\Big[I_2\big(\mathbf 1_{[0,r]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)\Big]=4\,E\Big[I_2\big(\mathbf 1_{[0,r]}^{\otimes2}\big)\,I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\widetilde\otimes_1\mathbf 1_{[0,t]}^{\otimes2}\big)\Big]=8\,\big\langle\mathbf 1_{[0,r]}^{\otimes2},\,\mathbf 1_{[0,s]}^{\otimes2}\widetilde\otimes_1\mathbf 1_{[0,t]}^{\otimes2}\big\rangle_{\mathfrak H^{\otimes2}}.$$
Recall the formula (2.3) for contractions. Then
$$\mathbf 1_{[0,s]}^{\otimes2}\otimes_1\mathbf 1_{[0,t]}^{\otimes2}=\big\langle\mathbf 1_{[0,s]},\mathbf 1_{[0,t]}\big\rangle_{\mathfrak H}\,\mathbf 1_{[0,s]}\otimes\mathbf 1_{[0,t]},$$
whose symmetrization is
$$\mathbf 1_{[0,s]}^{\otimes2}\widetilde\otimes_1\mathbf 1_{[0,t]}^{\otimes2}=\frac12\,\big\langle\mathbf 1_{[0,s]},\mathbf 1_{[0,t]}\big\rangle_{\mathfrak H}\,\big(\mathbf 1_{[0,s]}\otimes\mathbf 1_{[0,t]}+\mathbf 1_{[0,t]}\otimes\mathbf 1_{[0,s]}\big).$$
Therefore,
$$E\Big[I_2\big(\mathbf 1_{[0,r]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)\Big]=8\,\big\langle\mathbf 1_{[0,r]},\mathbf 1_{[0,s]}\big\rangle_{\mathfrak H}\big\langle\mathbf 1_{[0,r]},\mathbf 1_{[0,t]}\big\rangle_{\mathfrak H}\big\langle\mathbf 1_{[0,s]},\mathbf 1_{[0,t]}\big\rangle_{\mathfrak H},$$
and then
$$\kappa_3(V_T(Z))=\frac{8}{T^{3/2}}\int_{[0,T]^3}E[Z_rZ_s]\,E[Z_rZ_t]\,E[Z_sZ_t]\,dr\,ds\,dt=\frac{8}{T^{3/2}}\int_0^T\int_{-t}^{T-t}\int_{-t}^{T-t}\rho(x-y)\,\rho(x)\,\rho(y)\,dx\,dy\,dt.$$
Let $\rho_T(x):=|\rho(x)|\,\mathbf 1_{\{|x|\le T\}}$. Then
$$\kappa_3(V_T(Z))\le\frac{8}{\sqrt T}\int_{\mathbb R}\int_{\mathbb R}\rho_T(x-y)\,\rho_T(x)\,\rho_T(y)\,dx\,dy=\frac{8}{\sqrt T}\int_{\mathbb R}(\rho_T*\rho_T)(y)\,\rho_T(y)\,dy\le\frac{8}{\sqrt T}\,\|\rho_T*\rho_T\|_{L^2(\mathbb R)}\,\|\rho_T\|_{L^2(\mathbb R)},$$
where $\rho_T*\rho_T$ is the convolution of $\rho_T$ with itself and we have applied Hölder's inequality in the last step. Now, recall Young's inequality: if $p,q,r\ge1$, $f\in L^p(\mathbb R)$, $g\in L^q(\mathbb R)$ and $1/p+1/q=1/r+1$, then $\|f*g\|_{L^r(\mathbb R)}\le\|f\|_{L^p(\mathbb R)}\|g\|_{L^q(\mathbb R)}$. Hence, with $p=q=4/3$ and $r=2$,
$$\kappa_3(V_T(Z))\le\frac{8}{\sqrt T}\,\|\rho_T\|^2_{L^{4/3}(\mathbb R)}\,\|\rho_T\|_{L^2(\mathbb R)}=\frac{8}{\sqrt T}\Big(\int_{-T}^T|\rho(t)|^{4/3}\,dt\Big)^{3/2}\|\rho_T\|_{L^2(\mathbb R)},$$
and (3.6) is established.

Similarly, we have
$$\kappa_4(V_T(Z))=E[V_T(Z)^4]-3\,E[V_T(Z)^2]^2=\frac1{T^2}\int_{[0,T]^4}E\Big[I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,u]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,v]}^{\otimes2}\big)\Big]\,ds\,dt\,du\,dv-\frac{3}{T^2}\Big(\int_{[0,T]^2}E\Big[I_2\big(\mathbf 1_{[0,t]}^{\otimes2}\big)I_2\big(\mathbf 1_{[0,s]}^{\otimes2}\big)\Big]\,dt\,ds\Big)^2.$$
By (6.2) and the isometry property, the fourth moment expands into products of pairwise covariances. For the contractions, the following hold:
$$\mathbf 1_{[0,s]}^{\otimes2}\otimes_0\mathbf 1_{[0,t]}^{\otimes2}=\mathbf 1_{[0,s]}\otimes\mathbf 1_{[0,s]}\otimes\mathbf 1_{[0,t]}\otimes\mathbf 1_{[0,t]},\qquad\mathbf 1_{[0,s]}^{\otimes2}\otimes_1\mathbf 1_{[0,t]}^{\otimes2}=\big\langle\mathbf 1_{[0,s]},\mathbf 1_{[0,t]}\big\rangle_{\mathfrak H}\,\mathbf 1_{[0,s]}\otimes\mathbf 1_{[0,t]},\qquad\mathbf 1_{[0,s]}^{\otimes2}\otimes_2\mathbf 1_{[0,t]}^{\otimes2}=\big\langle\mathbf 1_{[0,s]},\mathbf 1_{[0,t]}\big\rangle^2_{\mathfrak H}.$$
After taking symmetrizations into account and using the isometry formula (2.4), the resulting terms are of two types: "disconnected" terms of the form $\big(E[Z_sZ_u]E[Z_tZ_v]\big)^2$, $\big(E[Z_sZ_v]E[Z_tZ_u]\big)^2$ and $\big(E[Z_sZ_t]E[Z_uZ_v]\big)^2$, which upon integration over $[0,T]^4$ are exactly cancelled by the term $3\,E[V_T(Z)^2]^2$, and "connected" cycle terms such as $E[Z_sZ_u]E[Z_sZ_v]E[Z_tZ_u]E[Z_tZ_v]$. Therefore, by symmetry and (3.3),
$$\kappa_4(V_T(Z))=\frac{48}{T^2}\int_{[0,T]^4}E[Z_sZ_u]\,E[Z_sZ_v]\,E[Z_tZ_u]\,E[Z_tZ_v]\,ds\,dt\,du\,dv\le\frac{48}{T^2}\int_{[0,T]^2}\int_{\mathbb R^2}\rho_T(u-s)\,\rho_T(v-s)\,\rho_T(t-u)\,\rho_T(t-v)\,du\,dv\,ds\,dt.$$
Applying the change of variables $x=t-u$, $y=t-v$ and the formula for convolution,
$$\kappa_4(V_T(Z))\le\frac{48}{T^2}\int_{[0,T]^2}\big((\rho_T*\rho_T)(t-s)\big)^2\,ds\,dt\le\frac{48}{T}\,\|\rho_T*\rho_T\|^2_{L^2(\mathbb R)}.$$
Next, by Young's inequality, $\|\rho_T*\rho_T\|^2_{L^2(\mathbb R)}\le\|\rho_T\|^4_{L^{4/3}(\mathbb R)}$, and this establishes (3.7). □

6.2. A bound on $E|\hat f_T(X)|^{-p}$. First, we recall an important representation of stationary Gaussian processes.
Definition 6.1.
Let $\phi_S$ denote the set of functions $\xi\in L^2(\mathbb R)$ such that $\xi(t)=0$ for all $t<0$. If $\xi\in\phi_S$, we can define, for all $t\in\mathbb R$,
$$Z^\xi_t:=\int_{\mathbb R}\xi(t-u)\,dW_u,\tag{6.3}$$
where $(W_t)_{t\in\mathbb R}$ is a two-sided Wiener process. The process $(Z^\xi_t)_{t\in\mathbb R}$ is a stationary centered Gaussian process.

The following result is key for our approach.

Theorem 6.2 (Karhunen [23]). Let $\{Z_t,\ t\in\mathbb R\}$ be a stationary centered Gaussian process such that
$$\lim_{t\to0}E[(Z_t-Z_0)^2]=0,\qquad\bigcap_{t\in\mathbb R}\overline{\mathrm{sp}}\{Z_s:\ -\infty<s\le t\}=\{0\},\tag{6.4}$$
where $\overline{\mathrm{sp}}$ denotes the $L^2$-closure of the linear span of a set of square-integrable random variables. Then there exists $\xi\in\phi_S$ such that
$$\{Z_t,\ t\in\mathbb R\}\overset{(d)}{=}\{Z^\xi_t,\ t\in\mathbb R\},$$
where $\overset{(d)}{=}$ denotes equality of all finite-dimensional distributions, and $\{Z^\xi_t,\ t\in\mathbb R\}$ is a stationary centered Gaussian process defined as in (6.3).

Now, our goal is to show the following:
Proposition 6.3.
Let $\{Z_t,\ t\ge0\}$ be a stationary centered Gaussian process satisfying (6.4), with $E[Z_0^2]=1$. Then, for every $p>0$, there is $T_0>0$ such that
$$\sup_{T\ge T_0}E\Big[\Big(\frac1T\int_0^T Z_t^2\,dt\Big)^{-p}\Big]<\infty,\quad\text{and}\quad\sup_{T\ge T_0}E\Big[\Big(\frac1T\int_0^T\big(Z_t-e^{-\theta t}Z_0\big)^2\,dt\Big)^{-p}\Big]<\infty.\tag{6.5}$$
Moreover, for every $p>0$, there is $n_0\ge1$ such that
$$\sup_{n\ge n_0}E\Big[\Big(\frac1n\sum_{i=1}^n Z_{t_i}^2\Big)^{-p}\Big]<\infty,\quad\text{and}\quad\sup_{n\ge n_0}E\Big[\Big(\frac1n\sum_{i=1}^n\big(Z_{t_i}-e^{-\theta t_i}Z_0\big)^2\Big)^{-p}\Big]<\infty,\tag{6.6}$$
where $t_i=i\Delta_n$, for $i=1,\dots,n$, with $n\Delta_n\to\infty$ and $\Delta_n\to0$, as $n\to\infty$.

Remark. We note that the main idea of the proof is inspired by the approach in [31, Theorem 1.1], which provides a bound on some negative moments of the Malliavin derivative.
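The boundedness claimed in (6.6) can be probed by simulation. The following sketch (our own illustration, not part of the paper) uses a stationary Ornstein-Uhlenbeck process, normalized so that $E[Z_0^2]=1$ as in the proposition, and estimates the negative moment for $p=2$ by Monte Carlo:

```python
import numpy as np

# Monte Carlo sanity check of Proposition 6.3 (a sketch, not a proof):
# the negative moment E[(1/n sum Z_{t_i}^2)^{-p}] stays of moderate size.
rng = np.random.default_rng(1)
theta, dt, n, reps, p = 1.0, 0.05, 1000, 500, 2

a = np.exp(-theta * dt)
s = np.sqrt(1 - a**2)         # keeps E[Z_t^2] = 1 along the recursion
z = rng.normal(size=reps)     # reps independent stationary starting points
acc = np.zeros(reps)
for _ in range(n):
    z = a * z + s * rng.normal(size=reps)
    acc += z**2
neg_moment = np.mean((acc / n) ** (-p))
print(neg_moment)  # finite and close to 1, since the averages concentrate near E[Z^2] = 1
```

Because the time average concentrates around $1$ (its standard deviation here is roughly $\sqrt{2/(n\Delta_n)}\approx0.2$), the empirical negative moment is finite and stable across replications, as (6.6) predicts.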
Proof.
Let $p>0$. Condition (6.4) is satisfied, and thus Theorem 6.2 implies that $\{Z_t,\ t\in\mathbb R_+\}\overset{(d)}{=}\{Z^\xi_t,\ t\in\mathbb R_+\}$, where $\xi\in L^2(\mathbb R)$ with $\xi(t)=0$ for $t\le0$ and $Z^\xi_t$ is given by (6.3). Then, for a positive integer $m>2p$,
$$E\Big[\Big(\frac1T\int_0^T Z_t^2\,dt\Big)^{-p}\Big]=E\Big[\Big(\frac1T\int_0^T(Z^\xi_t)^2\,dt\Big)^{-p}\Big]=E\Big[\Big(\sum_{k=1}^m\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\Big)^{-p}\Big].$$
Recall that, by the inequality between arithmetic and geometric means, for any positive reals $x_1,\dots,x_m$, $\sum_{k=1}^m x_k\ge m\prod_{k=1}^m x_k^{1/m}$. Then
$$E\Big[\Big(\frac1T\int_0^T Z_t^2\,dt\Big)^{-p}\Big]\le m^{-p}\,E\Big[\prod_{k=1}^m\Big(\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\Big)^{-p/m}\Big].\tag{6.7}$$
We proceed by conditioning. Let $\mathcal F_t:=\sigma(W_u,\ u\le t)$. By definition, $Z^\xi_t$ is $\mathcal F_t$-measurable. Thus,
$$E\Big[\Big(\frac1T\int_0^T Z_t^2\,dt\Big)^{-p}\Big]\le m^{-p}\,E\Big[\prod_{k=1}^m E\Big[\Big(\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\Big)^{-p/m}\,\Big|\,\mathcal F_{(k-1)T/m}\Big]\Big].$$
Next, note that
$$E\Big[\Big(\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\Big)^{-p/m}\,\Big|\,\mathcal F_{(k-1)T/m}\Big]=\int_0^\infty P\Big(\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\le x^{-m/p}\,\Big|\,\mathcal F_{(k-1)T/m}\Big)\,dx\le1+\int_1^\infty P\Big(\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\le x^{-m/p}\,\Big|\,\mathcal F_{(k-1)T/m}\Big)\,dx.\tag{6.8}$$
Applying the Carbery-Wright inequality [6], there is a universal constant $c>0$ such that, for all $\varepsilon>0$,
$$P\Big(\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\le\varepsilon\,\Big|\,\mathcal F_{(k-1)T/m}\Big)\le\frac{c\,\sqrt\varepsilon}{\sqrt{E\big[\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\,\big|\,\mathcal F_{(k-1)T/m}\big]}}.\tag{6.9}$$
Next, note that for any $0\le a<b$,
$$E\Big[\int_a^b(Z^\xi_t)^2\,dt\,\Big|\,\mathcal F_a\Big]=\int_a^b E\big[(Z^\xi_t)^2\,\big|\,\mathcal F_a\big]\,dt\ge\int_a^b\int_a^t\xi^2(t-u)\,du\,dt=\int_0^{b-a}(b-a-v)\,\xi^2(v)\,dv,\tag{6.10}$$
where we have used the Itô isometry and the fact that $Z^\xi_t-E[Z^\xi_t\,|\,\mathcal F_a]=\int_a^t\xi(t-u)\,dW_u$ is independent of $\mathcal F_a$. By isometry, $\int_0^\infty\xi^2(v)\,dv=E[Z_0^2]=1$, so there is $T_0>0$ such that $\int_0^{T_0/(2m)}\xi^2(v)\,dv\ge\frac12$. Thus, by (6.10), for every $T\ge T_0$,
$$E\Big[\frac1T\int_{(k-1)T/m}^{kT/m}(Z^\xi_t)^2\,dt\,\Big|\,\mathcal F_{(k-1)T/m}\Big]\ge\frac1{2m}\int_0^{T/(2m)}\xi^2(v)\,dv\ge\frac1{4m}.\tag{6.11}$$
Therefore, combining (6.8), (6.9) and (6.11), we obtain, for every $T\ge T_0$,
$$E\Big[\Big(\frac1T\int_{(k-1)T/m}^{kT/m}Z_t^2\,dt\Big)^{-p/m}\,\Big|\,\mathcal F_{(k-1)T/m}\Big]\le1+2c\sqrt m\int_1^\infty x^{-\frac{m}{2p}}\,dx<\infty,\tag{6.12}$$
since $m>2p$. Hence,
$$\gamma_{m,T_0}:=\sup_{T\ge T_0}\max_{1\le k\le m}E\Big[\Big(\frac1T\int_{(k-1)T/m}^{kT/m}Z_t^2\,dt\Big)^{-p/m}\,\Big|\,\mathcal F_{(k-1)T/m}\Big]<\infty.\tag{6.13}$$
Consequently, it follows from (6.7) and (6.13) that, for all $T\ge T_0$,
$$E\Big[\Big(\frac1T\int_0^T Z_t^2\,dt\Big)^{-p}\Big]\le m^{-p}\,(\gamma_{m,T_0})^m<\infty,$$
which completes the proof of the first part of (6.5). To establish the second part of (6.5), one need only replace $Z_t$ by $Z_t-e^{-\theta t}Z_0$ in the proof above. Indeed, the key inequalities (6.7), (6.8) and (6.9) are the same (with $Z^\xi_t$ replaced by $Z^\xi_t-e^{-\theta t}Z^\xi_0$), and the analogue of (6.10) is
$$E\Big[\int_a^b\big(Z^\xi_t-e^{-\theta t}Z^\xi_0\big)^2\,dt\,\Big|\,\mathcal F_a\Big]\ge\int_a^b\int_a^t\xi^2(t-u)\,du\,dt=\int_0^{b-a}(b-a-v)\,\xi^2(v)\,dv.\tag{6.14}$$
Now, let us prove the discrete version (6.6). First we prove it for the case $n=m^2$. For every $m\ge1$ put $T_m:=m^2\Delta_{m^2}$, so that $\Delta_{m^2}\to0$ and $T_m\to\infty$ as $m\to\infty$. Fix $p>0$ and let $m_0$ be a positive integer such that, for every $m\ge m_0$, $m>2p$ and $\int_0^{T_m/(2m)}\xi^2(v)\,dv\ge\frac12$. We have $t_i=i\Delta_{m^2}$ for $i=1,\dots,m^2$. Write
$$\frac1{m^2}\sum_{i=1}^{m^2}Z_{t_i}^2=\frac1{T_m}\int_0^{T_m}Y_t^2\,dt,\qquad\text{where }Y_t:=\sum_{i=1}^{m^2}Z_{t_i}\,\mathbf 1_{(t_{i-1},t_i]}(t).$$
Also, denote $Y^\xi_t:=\sum_{i=1}^{m^2}Z^\xi_{t_i}\,\mathbf 1_{(t_{i-1},t_i]}(t)$. We follow the same techniques as in the proof of (6.5): the inequalities (6.7)-(6.10) hold with $Y^\xi_t$ instead of $Z^\xi_t$. Thus, it suffices to prove the following analogue of (6.11): for every $m\ge m_0$ and $k=1,\dots,m$,
$$E\Big[\frac1{T_m}\int_{(k-1)T_m/m}^{kT_m/m}(Y^\xi_t)^2\,dt\,\Big|\,\mathcal F_{(k-1)T_m/m}\Big]\ge\frac1{4m}.$$
Notice that, for every $k=1,\dots,m$,
$$\frac1{T_m}\int_{(k-1)T_m/m}^{kT_m/m}(Y^\xi_t)^2\,dt=\frac1{m^2}\sum_{j=1}^m\big(Z^\xi_{t_{(k-1)m+j}}\big)^2.$$
Moreover, as in (6.10),
$$E\Big[\big(Z^\xi_{t_{(k-1)m+j}}\big)^2\,\Big|\,\mathcal F_{t_{(k-1)m}}\Big]\ge\int_{t_{(k-1)m}}^{t_{(k-1)m+j}}\xi^2\big(t_{(k-1)m+j}-u\big)\,du=\int_0^{t_j}\xi^2(v)\,dv.$$
Therefore, for every $m\ge m_0$,
$$E\Big[\frac1{T_m}\int_{(k-1)T_m/m}^{kT_m/m}(Y^\xi_t)^2\,dt\,\Big|\,\mathcal F_{(k-1)T_m/m}\Big]\ge\frac1{m^2}\sum_{j=1}^m\int_0^{t_j}\xi^2(v)\,dv\ge\frac1{m^2}\cdot\frac m2\int_0^{t_{\lceil m/2\rceil}}\xi^2(v)\,dv\ge\frac1{4m},$$
since $t_{\lceil m/2\rceil}\ge T_m/(2m)$, which yields the proof of (6.6) for $n=m^2$. For general $n$, a simple computation yields
$$\frac1n\sum_{i=1}^n Z_{t_i}^2\ge\frac1n\sum_{i=1}^{\lfloor\sqrt n\rfloor^2}Z_{t_i}^2\ge C\,\frac1{\lfloor\sqrt n\rfloor^2}\sum_{i=1}^{\lfloor\sqrt n\rfloor^2}Z_{t_i}^2,\tag{6.15}$$
for some absolute constant $C>0$, and thus the first part of (6.6) is established. The second part follows using the same techniques as above and (6.14). □
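Young's convolution inequality with $p=q=4/3$ and $r=2$, the key analytic tool in the proof of Lemma 3.4 above, can be sanity-checked numerically. The following sketch (our own; the correlation-like function $\rho(t)=(1+|t|)^{-0.9}$ is an arbitrary choice for illustration) verifies $\|\rho_T*\rho_T\|_{L^2}\le\|\rho_T\|^2_{L^{4/3}}$ on a grid:

```python
import numpy as np

# Numerical sanity check (a sketch) of Young's inequality:
# ||f * g||_{L^2} <= ||f||_{L^{4/3}} ||g||_{L^{4/3}}, with f = g = |rho_T|.
dx = 0.02
x = np.arange(-40, 40, dx)
rho = (1 + np.abs(x)) ** -0.9          # sample decaying "correlation" function

conv = np.convolve(rho, rho, mode="full") * dx   # Riemann-sum convolution
lhs = np.sqrt(np.sum(conv**2) * dx)              # L^2 norm of rho * rho
norm43 = (np.sum(rho ** (4 / 3)) * dx) ** 0.75   # L^{4/3} norm of rho
rhs = norm43**2
print(lhs, rhs)  # lhs is strictly below rhs
```

The margin is comfortable here because the sharp constant in Young's inequality for these exponents is strictly less than $1$.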
References

[1] Alazemi, F., Alsenafi, A., Es-Sebaiy, K. (2020). Parameter estimation for Gaussian mean-reverting Ornstein-Uhlenbeck processes of the second kind: non-ergodic case. Stochastics and Dynamics 19(5), 2050011 (25 pages).
[2] Azmoodeh, E., Viitasaari, L. (2015). Parameter estimation based on discrete observations of fractional Ornstein-Uhlenbeck process of the second kind. Stat. Inference Stoch. Process. 18, 205-227.
[3] Bajja, S., Es-Sebaiy, K., Viitasaari, L. (2020). Volatility estimation in fractional Ornstein-Uhlenbeck models. Stochastic Models 36(1), 94-111.
[4] Biermé, H., Bonami, A., Nourdin, I., Peccati, G. (2012). Optimal Berry-Esseen rates on the Wiener space: the barrier of third and fourth cumulants. ALEA, no. 2, 473-500.
[5] Brockwell, P., Davis, R. (2009). Time Series: Theory and Methods (Springer Series in Statistics), second ed. Springer.
[6] Carbery, A., Wright, J. (2001). Distributional and L^q norm inequalities for polynomials over convex bodies in R^n. Math. Res. Lett. 8(3), 233-248.
[7] Chen, Y., Hu, Y., Long, H. (2019). Generalized moment estimators for α-stable Ornstein-Uhlenbeck motions from discrete observations. Stat. Inference Stoch. Process. 23(1), 53-81.
[8] Chen, Y., Tian, L., Ying, L. (2020). Second moment estimator for an AR(1) model driven by a long memory Gaussian noise. arXiv preprint arXiv:2008.12443.
[9] Cheridito, P. (2004). Gaussian moving averages, semimartingales and option pricing. Stochastic Processes and their Applications 109(1), 47-68.
[10] Cheridito, P., Kawaguchi, H., Maejima, M. (2003). Fractional Ornstein-Uhlenbeck processes. Electron. J. Probab. 8, 1-14.
[11] Douissi, S., Es-Sebaiy, K., Viens, F. (2019). Berry-Esseen bounds for parameter estimation of general Gaussian processes. ALEA, Lat. Am. J. Probab. Math. Stat., 633-664.
[12] El Machkouri, M., Es-Sebaiy, K., Ouknine, Y. (2016). Least squares estimator for non-ergodic Ornstein-Uhlenbeck processes driven by Gaussian processes. Journal of the Korean Statistical Society 45, 329-341.
[13] El Onsy, B., Es-Sebaiy, K., Viens, F. (2017). Parameter estimation for a partially observed Ornstein-Uhlenbeck process with long-memory noise. Stochastics 89(2), 431-468.
[14] Es-Sebaiy, K. (2013). Berry-Esseen bounds for the least squares estimator for discretely observed fractional Ornstein-Uhlenbeck processes. Statist. Probab. Lett. 83(10), 2372-2385.
[15] Es-Sebaiy, K., Alazemi, F., Al-Foraih, M. (2019). Least squares type estimation for discretely observed non-ergodic Gaussian Ornstein-Uhlenbeck processes. Acta Mathematica Scientia 39(4), 989-1002.
[16] Es-Sebaiy, K., Es-Sebaiy, M. (2020). Estimating drift parameters in a non-ergodic Gaussian Vasicek-type model. Statistical Methods and Applications.
[17] Es-Sebaiy, K., Viens, F. (2019). Optimal rates for parameter estimation of stationary Gaussian processes. Stochastic Processes and their Applications 129(9), 3018-3054.
[18] Jiang, H., Liu, J., Wang, S. (2018). Self-normalized asymptotic properties for the parameter estimation in fractional Ornstein-Uhlenbeck process. Stochastics and Dynamics 18(5) (29 pages).
[19] Hu, Y., Nualart, D. (2010). Parameter estimation for fractional Ornstein-Uhlenbeck processes. Statist. Probab. Lett., 1030-1038.
[20] Hu, Y., Nualart, D., Zhou, H. (2019). Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter. Statistical Inference for Stochastic Processes 22(1), 111-142.
[21] Hu, Y., Song, J. (2013). Parameter estimation for fractional Ornstein-Uhlenbeck processes with discrete observations. In: F. Viens et al. (eds), Malliavin Calculus and Stochastic Analysis: A Festschrift in Honor of David Nualart, 427-442, Springer.
[22] Kaarakka, T., Salminen, P. (2011). On fractional Ornstein-Uhlenbeck process. Communications on Stochastic Analysis 5(1), 121-133.
[23] Karhunen, K. (1950). Über die Struktur stationärer zufälliger Funktionen. Ark. Mat. 1(3), 141-160.
[24] Kosov, E. (2019). Total variation distance estimates via L2-norm for polynomials in log-concave random vectors. International Mathematics Research Notices, rnz278. https://doi.org/10.1093/imrn/rnz278
[25] Kloeden, P., Neuenkirch, A. (2007). The pathwise convergence of approximation schemes for stochastic differential equations. LMS J. Comp. Math., 235-253.
[26] Kříž, P., Maslowski, B. (2019). Central limit theorems and minimum-contrast estimators for linear stochastic evolution equations. Stochastics 91, 1109-1140.
[27] Nourdin, I. (2013). Lectures on Gaussian approximations with Malliavin calculus. Séminaire de Probabilités XLV. Lecture Notes in Math. 2078, Springer, Cham.
[28] Nourdin, I., Peccati, G. (2012). Normal Approximations with Malliavin Calculus: From Stein's Method to Universality. Cambridge Tracts in Mathematics 192. Cambridge University Press, Cambridge.
[29] Nourdin, I., Peccati, G. (2015). The optimal fourth moment theorem. Proc. Amer. Math. Soc. 143, 3123-3133.
[30] Nourdin, I., Tran, T. D. (2019). Statistical inference for Vasicek-type model driven by Hermite processes. Stochastic Processes and their Applications 129(10), 3774-3791.
[31] Nourdin, I., Nualart, D. (2013). Fisher information and the fourth moment theorem. Ann. Inst. H. Poincaré Probab. Statist. 52(2), 849-867.
[32] Nualart, D. (2006). The Malliavin Calculus and Related Topics. Springer-Verlag, Berlin.
[33] Nualart, D., Peccati, G. (2005). Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33(1), 177-193.
[34] Pham-Dinh-Tuan (1977). Estimation of parameters of a continuous time Gaussian stationary process with rational spectral density. Biometrika 64(2), 385-399.
[35] Schervish, M. (1997). Theory of Statistics (Springer Series in Statistics). Springer.
[36] Sottinen, T., Viitasaari, L. (2018). Parameter estimation for the Langevin equation with stationary-increment Gaussian noise. Statistical Inference for Stochastic Processes 21(3), 569-601.
[37] Wood, A., Chan, G. (1994). Simulation of stationary Gaussian processes. Journal of Computational and Graphical Statistics 3(4), 409-432.
National School of Applied Sciences, Marrakech, Morocco
Email address : [email protected] Department of Mathematics, Faculty of Science, Kuwait University, Kuwait
Email address : [email protected] University of Luxembourg, Department of Mathematics, Luxembourg
Email address : [email protected] University of Luxembourg, Department of Mathematics, Luxembourg
Email address :