Central Limit Theorems and Minimum-Contrast Estimators for Linear Stochastic Evolution Equations
PAVEL KŘÍŽ AND BOHDAN MASLOWSKI
Abstract.
Central limit theorems and asymptotic properties of the minimum-contrast estimators of the drift parameter in linear stochastic evolution equations driven by fractional Brownian motion are studied. Both singular ($H < 1/2$) and regular ($H > 1/2$) types of fractional Brownian motion are considered. Strong consistency is achieved by ergodicity of the stationary solution. The fundamental tool for the limit theorems and asymptotic normality (shown for Hurst parameter $H < 3/4$) is the so-called 4th moment theorem considered on the second Wiener chaos. This technique also provides Berry-Esseen-type bounds for the speed of the convergence. The general results are illustrated for parabolic equations with distributed and pointwise fractional noises.

1. Introduction
Estimation of the drift parameter in linear stochastic PDEs with additive noise is a problem that is both well-motivated by practical needs and theoretically challenging. Two pioneering works on this topic, [13] by Koski and Loges and [12] by Huebner and Rozovskii, both considered the Wiener process as the source of noise (i.e. white noise in time). In [13] the minimum contrast (MC) estimator of the drift parameter was derived and its time asymptotics were studied (namely strong consistency and asymptotic normality). The paper [12] constructs the maximum likelihood estimator (MLE) of the drift parameter and studies conditions for strong consistency and asymptotic normality with an increasing number of observed dimensions (the space asymptotics). Several other works dealing with estimation in the white noise setting followed. The literature on parameter estimation in the drift term for SPDEs driven by a fractional
Brownian motion (fBm), which is capable of generating noise coloured in time, is rather limited. The work [5] develops the space asymptotics for the MLE in the fBm setting, but only in the regular case, when the Hurst parameter $H \ge 1/2$. In [15] the strong consistency of the MC estimator is proved, considering fBm with trace-class covariance operator as the driving noise. Other contributions in this direction are the papers [17] and [1], dealing with the least squares estimators constructed from the one-dimensional projection of the mild solution to linear SPDEs driven by fBm and integrated fBm, respectively. Both works assume the Hurst parameter $1/2 \le H \le 3/4$. To the authors' best knowledge, no proof of asymptotic normality of the MC estimator in the fBm setting has been published so far. The strong consistency of the estimator follows from an appropriate strong law of large numbers (in the case of fBm-driven equations see e.g. [15] or [17]), while the proof of asymptotic normality requires a central limit theorem-type result, which may be viewed as a more challenging problem in the present case. Both problems are, of course, of independent interest.

Date: 28 May 2018.
2010 Mathematics Subject Classification.
Key words and phrases. Stochastic evolution equations, fractional Ornstein-Uhlenbeck process, parameter estimation, asymptotic normality, Wiener chaos.
The work of B. Maslowski was partially supported by the GACR Grant no. 15-08819S.

However, the recently developed theory combining the Malliavin calculus and Stein's method (widely known as the 4th moment theorems) provides a very powerful tool for studying limit theorems for Gaussian-subordinated sequences and processes and, consequently, for the MC estimators. The idea of the 4th moment theorem was first presented in [21] and further developed in many papers (see for example [2] or [20] and references therein). Applications of the Malliavin calculus to the study of asymptotic normality of the MC estimators of the drift parameter in one-dimensional linear SDEs driven by fBm were first presented in [9] (the MC estimator is called the "Alternative estimator" there), but that work covers only the regular case $1/2 < H < 3/4$ and continuous-time observations. The results were later generalized to arbitrary $H \in (0,1)$ and to both continuous-time and discrete-time observations (with a combination of increasing time-horizon and observation frequency considered) in [10]. It was demonstrated there that asymptotic normality is violated for $H > 3/4$. A similar approach, used in the more general setting of the fractional Vasicek model, was presented in [26] and [27], where both ergodic and non-ergodic cases are considered. Recently, the 4th moment theorem was successfully utilized not only to demonstrate asymptotic normality, but also to establish the speed of the convergence to the normal distribution (Berry-Esseen-type bounds) of the MC estimator of the drift parameter in one-dimensional SDEs driven by fBm; see [11] for discrete-time observations with increasing time-horizon and fixed mesh, or [24] for continuous-time observations and for discrete-time observations with a combination of increasing time-horizon and observation frequency.

In this paper, we study the asymptotic properties (in time) of the MC estimator of the drift parameter in infinite-dimensional linear stochastic equations driven by a fBm (the solutions represent infinite-dimensional fractional Ornstein-Uhlenbeck processes). We consider both continuous-time observations and discrete-time observations with increasing time-horizon and fixed mesh. We believe that most of the results concerning the speed of convergence in limit theorems are new also in the classical case of equations driven by a standard cylindrical Wiener process. The noise term in the equation may contain an unbounded operator, which makes the general results applicable to parabolic SPDEs with pointwise or boundary noise (see e.g. Example 7.2). In Section 3, we revise and generalize the proof of strong consistency of the estimator. Strong consistency is proved without assuming that the covariance operator of the driving noise is trace-class, which extends a result from [15]. This generalization thus covers the basic equations with white noise in space (cylindrical fBm) and many others. Motivated by the recent development, we apply the techniques based on the 4th moment theorem to prove asymptotic normality and construct Berry-Esseen-type bounds on the speed of the convergence.
In Section 4 we generalize the essential theory related to the 4th moment theorem to processes with continuous time and with values possibly in infinite-dimensional Hilbert spaces. To the authors' best knowledge, this approach has so far been studied only for real-valued random sequences with discrete time. For continuously observed real-valued fractional Ornstein-Uhlenbeck processes, a different approach, based on the variances of the Malliavin derivatives of their Skorokhod integrals, was presented in [10]. In Section 5 we apply this theory to construct the Berry-Esseen-type bounds for sample moments of the observed solution. These can be utilized for statistical inference on the drift parameter (such as confidence intervals or hypothesis testing) after reformulation in terms of the second moment. Finally, in Section 6, the Berry-Esseen bounds for the MC estimators are constructed and asymptotic normality is proved. All the results on asymptotic normality and Berry-Esseen bounds assume that the Hurst parameter of the driving fBm satisfies $H < 3/4$. Hence, both regular and singular cases are covered. Note that if $H > 3/4$, asymptotic normality cannot be expected in general (cf. Remark 7.1). Section 7 contains two examples illustrating the general results: stochastic linear parabolic equations with fractional distributed and pointwise noises, respectively (Examples 7.1 and 7.2).

Let us note that in practice it is often important to estimate the parameter $H$ as well. In the one-dimensional case this has been studied in many works, for example in [3] and references therein. These methods are, in principle, applicable also in the present case (for instance, by taking projections of solutions to finite-dimensional subspaces). However, we do not address this problem here.

2. Basic setting and preliminaries
Consider a linear stochastic evolution equation in a separable Hilbert space $V$, driven by a fractional Brownian motion in a separable Hilbert space $U$:

(1) $dX(t) = \alpha A X(t)\,dt + \Phi\,dB^H(t), \quad t > 0,$

(2) $X(0) = X_0.$

In this equation, $\alpha > 0$ is the unknown parameter and the linear operator $A : \mathrm{Dom}(A) \subset V \to V$ generates an analytic semigroup $(S(t),\ t \ge 0)$ on $V$. Let $\beta > 0$ be such that $\beta I - A$ is strictly positive, and denote by $D_A^\delta$ the domain of the fractional power $(\beta I - A)^\delta$, $\delta > 0$, equipped with the graph norm $|\cdot|_{D_A^\delta} = |(\beta I - A)^\delta \cdot|_V$ (in the sequel, $\beta$ is supposed to be fixed). Furthermore, let $(B^H(t),\ t \in \mathbb{R})$ be a standard (two-sided) cylindrical fractional Brownian motion on $U$ with Hurst parameter $H \in (0,1)$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$. The initial condition $X_0$ is assumed to be a $V$-valued random variable such that $E|X_0|_V^2 < \infty$. The noise term satisfies the following condition:

(A0) $\Phi : U \supset \mathrm{Dom}(\Phi) \to (\mathrm{Dom}(A^*))'$, the dual of $\mathrm{Dom}(A^*)$ with respect to the topology of $V$, and $(\beta I - A)^{\epsilon - 1}\Phi \in \mathcal{L}(U, V)$ for a given $\epsilon \in (0, 1]$.

Notice that in the simplest case, when (A0) is satisfied with $\epsilon = 1$ (equivalently $\Phi \in \mathcal{L}(U,V)$) and $H > 1/2$, the semigroup $(S(t),\ t \ge 0)$ need not be analytic. This is the most usual case, which holds true in standard examples (for instance, a stochastic parabolic PDE with distributed fractional noise, cf. Example 7.1). On the other hand, in some situations (a stochastic parabolic PDE with pointwise or boundary fractional noise, cf. Example 7.2) the value of $\epsilon$ must be chosen strictly smaller than one (see e.g. [14] for more general results). Obviously, $\alpha A$ generates the analytic semigroup $(S_\alpha(t),\ t \ge 0)$, where $S_\alpha(t) = S(\alpha t)$. Existence of the $V$-valued mild solution is established in the following proposition (for proofs, see the papers [7] for $H > 1/2$ and [8] for $H < 1/2$).

Proposition 2.1.
Assume (A0) and

(A1) $|S(t)\Phi|_{\mathcal{L}_2} \le c\,t^{-\gamma}$ for all $t \in (0, T]$, for some $T > 0$, $c > 0$ and $\gamma \in [0, H)$, where $|\cdot|_{\mathcal{L}_2}$ denotes the Hilbert-Schmidt norm of operators with values in $V$.

Then

(3) $X(t) = S_\alpha(t) X_0 + \int_0^t S_\alpha(t-u)\Phi\,dB^H(u)$

is a well-defined $V$-valued process with continuous paths in $C([0,T], V)$, and it is called the mild solution to the equation (1) satisfying the initial condition (2).

The following proposition (established in [15] and [16]) ensures existence and ergodicity of stationary solutions.
Proposition 2.2.
Assume (A0), (A1) and

(A2) $|S(t)|_{\mathcal{L}} \le M e^{-\rho t}$ for all $t > 0$, for some constants $M > 0$ and $\rho > 0$, where $|\cdot|_{\mathcal{L}}$ denotes the operator norm.

Then there exists a strictly stationary continuous solution to (1), i.e. there exists an initial value $\tilde{X}_0$ (a $V$-valued random variable) such that the solution

(4) $Z(t) = S_\alpha(t) \tilde{X}_0 + \int_0^t S_\alpha(t-u)\Phi\,dB^H(u), \quad t \ge 0,$

is a strictly stationary process with continuous paths. Moreover, under the assumptions (A0)-(A2) the strictly stationary solution $(Z(t),\ t \ge 0)$ is ergodic, i.e. for any measurable functional $\varrho : V \to \mathbb{R}$ satisfying $E|\varrho(\tilde{X}_0)| < \infty$, the following holds:

(5) $\lim_{T\to\infty} \frac{1}{T}\int_0^T \varrho(Z(t))\,dt = \int_V \varrho(x)\,\mu(dx)$ a.s., $\quad \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n \varrho(Z(i)) = \int_V \varrho(x)\,\mu(dx)$ a.s.,

where $\mu = \mathrm{Law}(\tilde{X}_0) = \mathrm{Law}(Z(t)) = N(0, Q_\infty^\alpha)$ for all $t \ge 0$.

Proof. See the proofs of Theorems 3.1 and 4.6 in [15] and of Theorems 3.1 and 3.2 in [16]. Note that the discrete-time ergodicity can be shown by the same means as the continuous-time ergodicity. □

In [15] it is shown that

(6) $Q_\infty^\alpha = \frac{1}{\alpha^{2H}}\,Q_\infty,$

where $Q_\infty$ is the covariance operator of the invariant measure in the case $\alpha = 1$. The minimum contrast estimator is based on the ergodic behaviour of the solutions to the equation (1). The ergodicity of the stationary solution implies the following (almost sure) convergence of the sample second moments:

$\lim_{T\to\infty} \frac{1}{T}\int_0^T |Z(t)|_V^2\,dt = \int_V |x|_V^2\,\mu(dx) = E|Z(t)|_V^2 = \mathrm{Tr}(Q_\infty^\alpha) = \frac{1}{\alpha^{2H}}\,\mathrm{Tr}(Q_\infty).$

Similar behaviour of the (non-stationary) solution ( X(t), t ≥
0 ) for large $t$ motivates the construction of the minimum-contrast estimator of the parameter $\alpha$. Assume that

(7) $\Phi \ne 0,$

to ensure that $\mathrm{Tr}(Q_\infty) \ne 0$ and the following minimum contrast estimator is well defined:

(8) $\hat{\alpha}_T := \left( \frac{ \frac{1}{T}\int_0^T |X(t)|_V^2\,dt }{ \mathrm{Tr}(Q_\infty) } \right)^{-\frac{1}{2H}}.$

Similarly, for observations at discrete time instants with fixed step size, define

(9) $\check{\alpha}_n := \left( \frac{ \frac{1}{n}\sum_{i=1}^n |X(i)|_V^2 }{ \mathrm{Tr}(Q_\infty) } \right)^{-\frac{1}{2H}}.$

Obviously, in the degenerate case $\Phi = 0$ the stationary solution is constantly zero and the parameter is not identifiable. We may also consider the case when only a finite-dimensional projection of the solution $(X(t),\ t \ge 0)$ is observed, for instance, the process $(\langle X(t), w\rangle_V,\ t \ge 0)$ for a given vector $w \in V$ such that

(10) $\langle Q_\infty w, w\rangle_V \ne 0.$

We obtain the estimators

(11) $\tilde{\alpha}_T := \left( \frac{ \frac{1}{T}\int_0^T |\langle X(t), w\rangle_V|^2\,dt }{ \langle Q_\infty w, w\rangle_V } \right)^{-\frac{1}{2H}},$

and, for observations at discrete time instants with fixed step size,

(12) $\bar{\alpha}_n := \left( \frac{ \frac{1}{n}\sum_{i=1}^n |\langle X(i), w\rangle_V|^2 }{ \langle Q_\infty w, w\rangle_V } \right)^{-\frac{1}{2H}}.$

3. Strong consistency
Note that in [15] the strong consistency of the estimators (8) and (11) is proved for fractional Brownian motion ($H \in (0,1)$) with trace-class covariance operator. We will need the following auxiliary lemma.

Lemma 3.1. Consider continuous functions $g, h : [0, +\infty) \to \mathbb{R}$ such that
• $\frac{1}{T}\int_0^T |g(t)|\,dt \to K < \infty$ as $T \to \infty$; and
• $h(t) \to 0$ as $t \to \infty$.
Then

$\frac{1}{T}\int_0^T g(t)\,h(t)\,dt \to 0 \quad \text{as } T \to \infty.$

To show the convergence of the sample second moment of the non-stationary solution, we utilize ergodicity of the stationary solution and the following theorem:
Theorem 3.1.
Let $X = (x_t,\ t \ge 0)$ and $Y = (y_t,\ t \ge 0)$ be two real-valued continuous random processes defined on a probability space $(\Omega, \mathcal{F}, P)$. Let $Y$ be strictly stationary and ergodic. Further let

(13) $|x_t - y_t| \to 0$ a.s. as $t \to \infty$.

Consider a smooth function $g \in C^p(\mathbb{R})$ for some integer $p \ge 1$. Denote its $k$-th derivative by $g^{(k)}$ and assume:
• $E|g^{(k)}(y_0)| < \infty$ for $k = 0, 1, \dots, p$; and
• $g^{(p)}$ is globally Lipschitz.
Then

(14) $\frac{1}{T}\int_0^T g(x_t)\,dt \to E\,g(y_0)$ a.s. as $T \to \infty$.

Proof. Ergodicity of $Y$ implies $\frac{1}{T}\int_0^T g(y_t)\,dt \to E\,g(y_0)$ a.s. It therefore suffices to prove

$\left| \frac{1}{T}\int_0^T \big( g(x_t) - g(y_t) \big)\,dt \right| \to 0 \quad \text{a.s.}$

Fix $\omega \in \Omega$ and apply Taylor's approximation with the Lagrange remainder:

$\left| \frac{1}{T}\int_0^T \big( g(x_t(\omega)) - g(y_t(\omega)) \big)\,dt \right| \le \frac{1}{T}\int_0^T \big| g(x_t(\omega)) - g(y_t(\omega)) \big|\,dt$

$= \frac{1}{T}\int_0^T \Big| \sum_{k=1}^{p-1} \frac{g^{(k)}(y_t(\omega))}{k!}\,(x_t(\omega) - y_t(\omega))^k + \frac{g^{(p)}(\eta_t(\omega))}{p!}\,(x_t(\omega) - y_t(\omega))^p \Big|\,dt$

$\le \sum_{k=1}^{p-1} \frac{1}{T}\int_0^T |g^{(k)}(y_t(\omega))|\,\frac{|x_t(\omega) - y_t(\omega)|^k}{k!}\,dt + \frac{1}{T}\int_0^T |g^{(p)}(\eta_t(\omega))|\,\frac{|x_t(\omega) - y_t(\omega)|^p}{p!}\,dt,$

where $\eta_t$ is a random point between $x_t$ and $y_t$.

Firstly, for almost all (a.a.) $\omega$ we have $\frac{1}{T}\int_0^T |g^{(k)}(y_t(\omega))|\,dt \to E|g^{(k)}(y_0)| < \infty$ by ergodicity of $Y$, and $\frac{|x_t(\omega) - y_t(\omega)|^k}{k!} \to 0$. Hence Lemma 3.1 yields, for a.a. $\omega$,

$\frac{1}{T}\int_0^T |g^{(k)}(y_t(\omega))|\,\frac{|x_t(\omega) - y_t(\omega)|^k}{k!}\,dt \to 0, \quad k = 1, \dots, p-1.$

Secondly,

$\lim_{T\to\infty} \frac{1}{T}\int_0^T |g^{(p)}(\eta_t(\omega))|\,dt = \lim_{T\to\infty} \frac{1}{T}\int_0^T \big| \big( g^{(p)}(\eta_t(\omega)) - g^{(p)}(y_t(\omega)) \big) + g^{(p)}(y_t(\omega)) \big|\,dt.$

However, the Lipschitz condition implies

$|g^{(p)}(\eta_t(\omega)) - g^{(p)}(y_t(\omega))| \le C\,|\eta_t(\omega) - y_t(\omega)| \le C\,|x_t(\omega) - y_t(\omega)| \to 0$

for a.a. $\omega$. Hence, we obtain for a.a. $\omega$:

$\lim_{T\to\infty} \frac{1}{T}\int_0^T |g^{(p)}(\eta_t(\omega))|\,dt = \lim_{T\to\infty} \frac{1}{T}\int_0^T |g^{(p)}(y_t(\omega))|\,dt = E|g^{(p)}(y_0)| < \infty$

by ergodicity of $Y$. If we combine this with the a.s. convergence $\frac{|x_t(\omega) - y_t(\omega)|^p}{p!} \to 0$, Lemma 3.1 yields

$\lim_{T\to\infty} \frac{1}{T}\int_0^T |g^{(p)}(\eta_t(\omega))|\,\frac{|x_t(\omega) - y_t(\omega)|^p}{p!}\,dt = 0$

for a.a. $\omega$. □
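The mechanism behind Theorem 3.1 and the estimator (8) can be sanity-checked numerically in the simplest finite-dimensional instance of (1). The sketch below is our own illustration, not part of the paper; the names alpha, lam and sigma are ours. It takes $V = U = \mathbb{R}$, $A = -\lambda$, $\Phi = \sigma$ and $H = 1/2$, so that $X$ is a classical Ornstein-Uhlenbeck process with an exact Gaussian one-step transition, and checks that the time-averaged second moment approaches the stationary value and that (8) recovers $\alpha$.

```python
import numpy as np

# Toy check of the consistency machinery in one dimension (our own sketch,
# not from the paper): V = U = R, A = -lam, Phi = sigma, H = 1/2, so that
# dX = -alpha*lam*X dt + sigma dB(t) is an ordinary Ornstein-Uhlenbeck process
# with stationary variance Tr(Q_inf^alpha) = sigma^2 / (2*alpha*lam).
rng = np.random.default_rng(42)
alpha, lam, sigma, H = 2.0, 1.0, 1.0, 0.5
dt, n_steps = 0.01, 200_000            # observation horizon T = 2000

theta = alpha * lam                    # effective drift rate
a = np.exp(-theta * dt)                # exact one-step autoregression coefficient
noise_sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))
x = np.empty(n_steps + 1)
x[0] = 5.0                             # start far away from stationarity
for k in range(n_steps):
    x[k + 1] = a * x[k] + noise_sd * rng.standard_normal()

m2 = np.mean(x**2)                     # sample second moment (1/T) int_0^T |X|^2 dt
tr_Q = sigma**2 / (2.0 * lam)          # Tr(Q_inf) for alpha = 1 and H = 1/2
alpha_hat = (m2 / tr_Q) ** (-1.0 / (2.0 * H))   # the estimator (8)
print(m2, alpha_hat)                   # m2 near 0.25, alpha_hat near 2
```

For $H \ne 1/2$ the same experiment requires simulating fBm increments (e.g. by a circulant-embedding method) and replacing $\sigma^2/(2\lambda)$ by the corresponding fractional Ornstein-Uhlenbeck stationary variance; the logic of (8) is unchanged.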
Let (A0), (A1) and (A2) be satisfied and let $(X(t),\ t \ge 0)$ be the solution to (1)-(2) with $X_0$ such that $E|X_0|_V^2 < \infty$. For any integer $p \ge 1$ we have

(15) $\frac{1}{T}\int_0^T |X(t)|_V^p\,dt \to \int_V |y|_V^p\,\mu(dy)$ a.s. as $T \to \infty$.

Moreover, for any $w \in V$,

(16) $\frac{1}{T}\int_0^T |\langle X(t), w\rangle_V|^p\,dt \to \int_V |\langle y, w\rangle_V|^p\,\mu(dy)$ a.s. as $T \to \infty$.
Starting with (15), consider the strictly stationary solution $(Z(t),\ t \ge 0)$ to (1) and set $y_t = |Z(t)|_V$ and $x_t = |X(t)|_V$. Obviously, $(y_t,\ t \ge 0)$ and $(x_t,\ t \ge 0)$ are continuous (by continuity of $Z$ and $X$, cf. Proposition 2.1 and Proposition 2.2), and $(y_t,\ t \ge 0)$ is strictly stationary and ergodic (by stationarity and ergodicity of $Z$). Clearly,

$|x_t - y_t| \le |S_\alpha(t)|_{\mathcal{L}}\,|X(0) - Z(0)|_V \to 0$ a.s. as $t \to \infty$.

Setting $g(x) = x^p$, in view of the Fernique theorem we obtain

$E|g^{(k)}(y_0)| = E\big( c_k\,|Z(0)|_V^{p-k} \big) < \infty, \quad k = 0, 1, \dots, p-1,$

and $g^{(p-1)}(x) = p!\,x$ is globally Lipschitz. Therefore, the proof of (15) is completed in virtue of Theorem 3.1 (applied with $p-1$ in place of $p$). The proof of the convergence (16) runs similarly, using the processes $y_t = |\langle Z(t), w\rangle_V|$ and $x_t = |\langle X(t), w\rangle_V|$. □

Corollary 3.2. Under the assumptions (A0), (A1) and (A2), the estimators (8), (9), (11) and (12) are strongly consistent, i.e.

$\hat{\alpha}_T = \left( \frac{ \frac{1}{T}\int_0^T |X(t)|_V^2\,dt }{ \mathrm{Tr}(Q_\infty) } \right)^{-\frac{1}{2H}} \to \alpha$ a.s. as $T \to \infty$,

$\check{\alpha}_n = \left( \frac{ \frac{1}{n}\sum_{i=1}^n |X(i)|_V^2 }{ \mathrm{Tr}(Q_\infty) } \right)^{-\frac{1}{2H}} \to \alpha$ a.s. as $n \to \infty$,

$\tilde{\alpha}_T = \left( \frac{ \frac{1}{T}\int_0^T |\langle X(t), w\rangle_V|^2\,dt }{ \langle Q_\infty w, w\rangle_V } \right)^{-\frac{1}{2H}} \to \alpha$ a.s. as $T \to \infty$,

$\bar{\alpha}_n = \left( \frac{ \frac{1}{n}\sum_{i=1}^n |\langle X(i), w\rangle_V|^2 }{ \langle Q_\infty w, w\rangle_V } \right)^{-\frac{1}{2H}} \to \alpha$ a.s. as $n \to \infty$.
The continuous-time versions of the statements follow immediately from Corollary 3.1 by setting $p = 2$. The long-span asymptotics with fixed time step in the discrete-time versions may be shown analogously, using the discrete version of ergodicity. □

4. Normal approximation for Gaussian-subordinated sequences/processes
4.1. Preliminaries.
The fundamental tool for proving asymptotic normality (as well as for assessing the speed of convergence) will be the celebrated 4th moment theorem (see [20]):

Proposition 4.1.
Consider an isonormal Gaussian process $X$ on a separable Hilbert space $\mathcal{H}$. Let $(F_n : n \in \mathbb{N})$ be a sequence of random variables belonging to the $q$-th Wiener chaos of $X$ with $E F_n^2 = 1$, and consider a normally distributed random variable $N \sim N(0,1)$. Then the sequence $F_n$ converges in distribution to $N$ if and only if the 4th cumulants of $F_n$ ($\kappa_4(F_n)$) converge to zero, i.e.

(17) $F_n \xrightarrow{d} N \iff E F_n^4 - 3 \to 0.$

In this case $E F_n^3 \to 0$ and there exist positive finite constants $0 < c < C < \infty$ (which may depend on the sequence $(F_n)$, but not on $n$) such that

(18) $c\,M(F_n) \le d_{TV}(F_n, N) \le C\,M(F_n),$

where $d_{TV}$ denotes the total-variation distance and

$M(F_n) = \max\{ |E F_n^3|,\ |E F_n^4 - 3| \} = \max\{ |\kappa_3(F_n)|,\ |\kappa_4(F_n)| \}$

represents the optimal bound of Berry-Esseen type for $F_n$, expressed in terms of the 3rd and the 4th cumulants.

Remark 4.1.
It directly follows from the proof of Proposition 4.1 (see the proof of Theorem 1.2 in [20]) that the corresponding upper bound

$d_{TV}(F_T, N) \le C \max\{ |\kappa_3(F_T)|,\ |\kappa_4(F_T)| \}$

holds also for a random process $(F_T : T > 0)$ from the $q$-th Wiener chaos. Furthermore, the upper bound in (18) also holds for the Wasserstein distance:

$d_W(F_n, N) \le C \max\{ |\kappa_3(F_n)|,\ |\kappa_4(F_n)| \},$

possibly with a different constant $C$. To show this inequality, we can employ conditional expectation in the corresponding proof in [18] (cf. Theorem 3.1 therein) to reformulate Proposition 2.4 from [20] in terms of the Wasserstein distance, and then apply the proof of the upper bound in Theorem 1.2 therein.

In order to use the 4th moment theorem, we will need to calculate the 3rd and the 4th cumulants of the sample second moments of the observations.

4.2. ∞-dimensional Gaussian-subordinated sequences. Following the approach for one-dimensional Gaussian-subordinated sequences presented in the paper [2], we derive corresponding results for Gaussian sequences with values in an infinite-dimensional separable Hilbert space $V$ with an orthonormal basis $\{e_k\}_{k=1}^\infty$. Let $(Z_i : i \in \mathbb{Z})$ be a non-degenerate stationary $V$-valued centered Gaussian sequence with a trace-class covariance operator $Q \ne 0$, i.e. $Z_i \sim N(0, Q)$, and denote by $Q(i-j) = \mathrm{cov}(Z_i, Z_j)$ the auto-covariance operators of the sequence, which means that

$E\big( \langle Z_i, v\rangle_V \langle Z_j, w\rangle_V \big) = \langle Q(i-j)\,v, w\rangle_V \quad \forall v, w \in V.$

Now define

(19) $V_n := \frac{1}{\sqrt{n}} \sum_{i=1}^n \big( |Z_i|_V^2 - \mathrm{Tr}(Q) \big), \quad s_n := E V_n^2, \quad F_n := \frac{V_n}{\sqrt{s_n}}.$

We can express $s_n$ in terms of $Q$ (using the spectral decomposition and the Isserlis theorem):

(20) $s_n = 2 \sum_{i=-(n-1)}^{n-1} \left( 1 - \frac{|i|}{n} \right) |Q(i)|_{\mathcal{L}_2}^2.$

We need to define a Hilbert space with scalar product generated by the auto-covariance operators $Q(i)$. To this end, we have to assume positive definiteness of $Q$. This assumption, however, can be made without loss of generality.
If $Q$ is only positive-semidefinite, we can take an orthonormal basis $\{e_k\}_{k=1}^\infty$ of $V$ consisting of eigenvectors of $Q$ and restrict ourselves to the subspace of $V$ generated by those eigenvectors corresponding to positive eigenvalues (denote it by $V_{pos}$). Clearly, the projection of $Z_i$ onto the orthogonal complement of the subspace $V_{pos}$ is (almost surely) zero. Hence, $Z_i \in V_{pos}$ almost surely and $Q$, restricted to $V_{pos}$, is positive-definite.

The construction of the corresponding Hilbert space $\mathcal{H}$ and of the isonormal Gaussian process for the Wiener chaos decomposition follows the approach from [19] (cf. Proposition 7.2.3). Consider the linear span of the abstract set $\mathcal{E} := \{ h_{i,v} : i \in \mathbb{Z},\ v \in V \}$, endowed with the scalar product obtained by the natural extension of the following binary operation on $\mathcal{E}$:

$\langle h_{i,v}, h_{j,w} \rangle_{\mathcal{H}} := \langle Q(i-j)\,v, w \rangle_V.$

Taking the completion of this linear space with respect to the scalar product yields the separable Hilbert space $\mathcal{H}$; separability of $\mathcal{H}$ follows from separability of $V$. The appropriate isonormal Gaussian process $X$ is defined as the $L^2$-isometric linear extension of the mapping

$X(h_{i,v}) = \langle Z_i, v \rangle_V, \quad \forall h_{i,v} \in \mathcal{E}.$

The next step is to express $F_n$ as an element of the second Wiener chaos of $X$. Consider the elements $h_{i,e_k}$ of $\mathcal{H}$. Then

(21) $F_n = \frac{1}{\sqrt{n\,s_n}} \sum_{i=1}^n \sum_{k=1}^\infty \big( \langle Z_i, e_k\rangle_V^2 - \langle Q e_k, e_k\rangle_V \big) = \frac{1}{\sqrt{n\,s_n}} \sum_{i=1}^n \sum_{k=1}^\infty \big( X(h_{i,e_k})^2 - \langle Q e_k, e_k\rangle_V \big) = \frac{1}{\sqrt{n\,s_n}} \sum_{i=1}^n L^2\text{-}\lim_{N\to\infty} \sum_{k=1}^N I_2\big( h_{i,e_k}^{\otimes 2} \big) = I_2\Big( \frac{1}{\sqrt{n\,s_n}} \sum_{i=1}^n \sum_{k=1}^\infty h_{i,e_k}^{\otimes 2} \Big) = I_2(f_n),$

where $I_2$ is the isometric isomorphism between the space of symmetric tensor products $\mathcal{H}^{\odot 2}$ (equipped with the modified tensor norm $\sqrt{2}\,|\cdot|_{\mathcal{H}^{\otimes 2}}$) and the second Wiener chaos of $X$ (equipped with the $L^2$ norm).

The 3rd and the 4th cumulants of $F_n$ can be bounded from above (see Lemma 8.1 in the Appendix for details):

(22) $\kappa_3(F_n) \le \frac{C}{\sqrt{n}\,s_n^{3/2}} \Big( \sum_{|i| < n} |Q(i)|_{\mathcal{L}_2}^{3/2} \Big)^2,$

(23) $\kappa_4(F_n) \le \frac{C}{n\,s_n^{2}} \Big( \sum_{|i| < n} |Q(i)|_{\mathcal{L}_2}^{4/3} \Big)^3.$

Real-valued Gaussian-subordinated sequences are also covered by the theory above.
The bounds (22) and (23) in this special case correspond to those in [2].

4.3. ∞-dimensional Gaussian-subordinated processes. Consider a non-degenerate centered stationary continuous Gaussian random process $(Z_t : t \in \mathbb{R})$ with values in a separable Hilbert space $V$ with orthonormal basis $\{e_k\}_{k=1}^\infty$. Denote by $Q(t-s)$ the (trace-class) auto-covariance operators of the process (i.e. $Q(t-s) = \mathrm{cov}(Z_t, Z_s)$) and by $Q = Q(0) \ne 0$ the covariance operator of $Z_t$. Define the corresponding Gaussian-subordinated processes:

(24) $V_T := \frac{1}{\sqrt{T}} \int_0^T \big( |Z_t|_V^2 - \mathrm{Tr}(Q) \big)\,dt, \quad s_T := E V_T^2, \quad F_T := \frac{V_T}{\sqrt{s_T}}.$

The spectral decomposition, the Isserlis theorem and a change of variables within the integrals yield:

(25) $s_T = 2 \int_{-T}^{T} \left( 1 - \frac{|t|}{T} \right) |Q(t)|_{\mathcal{L}_2}^2\,dt.$

To construct the appropriate Hilbert space $\mathcal{H}$ with the isonormal Gaussian process $X$, start with the set $\mathcal{E} := \{ h_{t,v} : t \in \mathbb{R},\ v \in V \}$ and proceed analogously to the previous subsection. Separability of $\mathcal{H}$ follows from separability of $V$ and from the $L^2$-continuity of the process $(Z_t : t \in \mathbb{R})$.

To enable using the results for random sequences, fix $T > 0$ and consider the equidistant partition $\{ t_i = i\frac{T}{n} : i = 0, \dots, n \}$ of the interval $[0, T]$. Further consider $V_n^{(T)}$ constructed from the stationary sequence $(Z_{t_1}, Z_{t_2}, \dots, Z_{t_n})$ with the auto-covariance operators $Q_n^{(T)}(i) = Q(i\,T/n)$. The $L^2$-continuity of the real-valued random process $t \mapsto |Z_t|_V^2$ implies:

$V_T = \frac{1}{\sqrt{T}} \int_0^T \big( |Z_t|_V^2 - \mathrm{Tr}(Q) \big)\,dt = L^2\text{-}\lim_{n\to\infty} \frac{1}{\sqrt{T}} \sum_{i=1}^n \big( |Z_{t_i}|_V^2 - \mathrm{Tr}(Q) \big)\,\frac{T}{n} = L^2\text{-}\lim_{n\to\infty} \sqrt{\frac{T}{n}}\,V_n^{(T)}.$

Therefore

$s_T = E V_T^2 = \lim_{n\to\infty} E\Big( \sqrt{\tfrac{T}{n}}\,V_n^{(T)} \Big)^2 = \lim_{n\to\infty} \frac{T}{n}\,s_n^{(T)}, \qquad F_T = L^2\text{-}\lim_{n\to\infty} \frac{ \sqrt{\tfrac{T}{n}}\,V_n^{(T)} }{ \sqrt{ \tfrac{T}{n}\,s_n^{(T)} } } = L^2\text{-}\lim_{n\to\infty} \frac{ V_n^{(T)} }{ \sqrt{ s_n^{(T)} } } = L^2\text{-}\lim_{n\to\infty} F_n^{(T)}.$

Since $F_n^{(T)} = I_2\Big( \frac{1}{\sqrt{n\,s_n^{(T)}}} \sum_{i=1}^n \sum_{k=1}^\infty h_{t_i,e_k}^{\otimes 2} \Big)$ belongs to the second chaos, $F_T$ is in the second chaos as well, with

$F_T = I_2\Big( \frac{1}{\sqrt{T\,s_T}} \int_0^T \sum_{k=1}^\infty h_{t,e_k}^{\otimes 2}\,dt \Big).$
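Before passing to the limits of the cumulants, it may help to see this second-chaos bookkeeping in the simplest scalar instance of (19)-(20) (our own toy example, not from the paper): take $V = \mathbb{R}$ and $Z_i$ i.i.d. $N(0,1)$, so that $Q(i) = \mathbb{1}_{\{i=0\}}$, $s_n = 2$ by (20), and $F_n = \frac{1}{\sqrt{2n}}\sum_{i=1}^n (Z_i^2 - 1)$. Its 3rd and 4th cumulants are exactly computable from $\kappa_2(Z^2 - 1) = 2$, $\kappa_3 = 8$, $\kappa_4 = 48$, using additivity of cumulants over independent summands and the scaling $\kappa_k(cX) = c^k \kappa_k(X)$:

```python
import numpy as np

# F_n = sum_i (Z_i^2 - 1)/sqrt(2n) is a second-chaos variable with E F_n^2 = 1.
# Exact cumulants via additivity and scaling: kappa_3(F_n) = 2*sqrt(2)/sqrt(n),
# kappa_4(F_n) = 12/n = E F_n^4 - 3, so M(F_n) -> 0 and F_n -> N(0,1).
def kappa3(n):
    return n * 8.0 / (2.0 * n) ** 1.5      # = 2*sqrt(2)/sqrt(n)

def kappa4(n):
    return n * 48.0 / (2.0 * n) ** 2       # = 12/n

M = {n: max(abs(kappa3(n)), abs(kappa4(n))) for n in (10, 100, 1000)}
print(M)                                    # shrinking Berry-Esseen-type bound

# Monte Carlo cross-check of the 4th moment identity E F_n^4 = 3 + kappa_4(F_n).
rng = np.random.default_rng(0)
n, reps = 100, 50_000
F = (rng.standard_normal((reps, n)) ** 2 - 1).sum(axis=1) / np.sqrt(2 * n)
print(F.var(), (F**4).mean())               # near 1.0 and near 3.12
```

The decay rates $\kappa_3 \sim n^{-1/2}$ and $\kappa_4 \sim n^{-1}$ observed here are consistent with the bounds (22) and (23), in which the sums over $|i| < n$ reduce to a single term for this i.i.d. example.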
By the hypercontractivity property on the second Wiener chaos, $E|F_n^{(T)} - F_T|^p \to 0$ for every $p \ge 2$, which implies

$\kappa_3(F_T) = \lim_{n\to\infty} \kappa_3(F_n^{(T)}) \quad \text{and} \quad \kappa_4(F_T) = \lim_{n\to\infty} \kappa_4(F_n^{(T)}).$

Now we can utilize the bounds from the previous subsection (note that $t \mapsto |Q(t)|_{\mathcal{L}_2}$ is continuous):

(26) $\kappa_3(F_T) = \lim_{n\to\infty} \kappa_3(F_n^{(T)}) \le \frac{C}{\sqrt{T}\,s_T^{3/2}} \Big( \int_{-T}^{T} |Q(t)|_{\mathcal{L}_2}^{3/2}\,dt \Big)^2,$

(27) $\kappa_4(F_T) = \lim_{n\to\infty} \kappa_4(F_n^{(T)}) \le \frac{C}{T\,s_T^{2}} \Big( \int_{-T}^{T} |Q(t)|_{\mathcal{L}_2}^{4/3}\,dt \Big)^3.$

Real-valued Gaussian-subordinated processes are again a special case of the theory above.

5. Berry-Esseen bounds for sample moments

In this section, we derive upper bounds on the speed of convergence to the normal distribution of the (centered and standardized) sample second moments calculated from the solutions to the original SPDE (1). Our approach is motivated by the work [11], which covers real-valued fractional Ornstein-Uhlenbeck processes. We study only the case $H < 3/4$. Note that asymptotic normality in the case $H > 3/4$ can in general not be expected (see the discussion in subsection 7.1).

Recall that $(Z(t) : t \ge 0)$ denotes the stationary solution to (1), which is a stationary centered Gaussian process with values in $V$ and with covariance operator $Q_\infty^\alpha \ne 0$ (by assumption (7)). To obtain an explicit formula for the auto-covariance operators $Q_\infty^\alpha(t) = \mathrm{cov}(Z(t), Z(0))$, we first formulate a proposition on calculating the covariance of stochastic integrals driven by a fractional Brownian motion. This proposition is a straightforward generalization of Lemma 2.1 in [4].

Proposition 5.1. Let $(\beta^H(t) : t \in \mathbb{R})$ be a scalar fractional Brownian motion with Hurst parameter $H \in (0,1)$. Consider real numbers $a < b \le c < d$ and continuously differentiable deterministic functions $f \in C^1([a,b])$, $g \in C^1([c,d])$ such that

$\int_a^b f'(t)\,\beta^H(t)\,dt \quad \text{and} \quad \int_c^d g'(t)\,\beta^H(t)\,dt$

exist almost surely (as Riemann integrals). Then

(28) $E\left( \int_a^b f(t)\,d\beta^H(t) \right)\left( \int_c^d g(s)\,d\beta^H(s) \right) = H(2H-1) \int_a^b \int_c^d f(t)\,g(s)\,(s-t)^{2H-2}\,ds\,dt.$
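Formula (28) can be verified numerically in the simplest case $f \equiv g \equiv 1$ on disjoint intervals, where the left-hand side reduces to the covariance of two fBm increments, for which the classical closed form in terms of $|\cdot|^{2H}$ is available. The sketch below is our own check (the grid size m is an arbitrary choice); it compares a midpoint-rule evaluation of the right-hand side of (28) with that closed form, in one singular and one regular case.

```python
import numpy as np

# Check (28) with f = g = 1 on [a,b] = [0,1] and [c,d] = [2,3]: the left-hand
# side is then E(B_H(b)-B_H(a))(B_H(d)-B_H(c))
#            = 0.5*(|d-a|^{2H} + |c-b|^{2H} - |c-a|^{2H} - |d-b|^{2H}).
a, b, c, d = 0.0, 1.0, 2.0, 3.0

def rhs_midpoint(H, m=1000):
    # midpoint rule for H*(2H-1) * int_a^b int_c^d (s-t)^(2H-2) ds dt
    t = a + (np.arange(m) + 0.5) * (b - a) / m
    s = c + (np.arange(m) + 0.5) * (d - c) / m
    grid = (s[None, :] - t[:, None]) ** (2.0 * H - 2.0)
    return H * (2.0 * H - 1.0) * grid.sum() * (b - a) * (d - c) / m**2

def lhs_closed_form(H):
    return 0.5 * (abs(d - a) ** (2 * H) + abs(c - b) ** (2 * H)
                  - abs(c - a) ** (2 * H) - abs(d - b) ** (2 * H))

for H in (0.3, 0.7):                   # one singular and one regular case
    print(H, rhs_midpoint(H), lhs_closed_form(H))
```

For $H < 1/2$ both sides are negative (disjoint increments of a singular fBm are negatively correlated), while at $H = 1/2$ both sides vanish, consistent with independent Brownian increments.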
Note that the equality (28) is well known in the regular case $H \ge 1/2$. This proposition, however, covers also the singular case $H < 1/2$, because it relies on the disjoint integration domains.

Proof. The proof consists of integration by parts applied on the left-hand side, calculating the expectations and applying the reverse integration by parts. For details, see Lemma 2.1 in [4]. □

Lemma 5.1. Assume (A0), (A1) and (A2). The auto-covariance operators of the stationary solution $(Z(t) : t \ge 0)$ to (1) may be expressed by the formula

(29) $Q_\infty^\alpha(t) = \mathrm{cov}(Z(t), Z(0)) = Q_\infty^\alpha S_\alpha^*(t) + \int_0^t \int_{-\infty}^0 S_\alpha(-r)\,\Phi\Phi^*\,S_\alpha^*(t-s)\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds.$

Proof. Notice that the conditions (A0) and (A2) imply

(30) $|S(r)\Phi|_{\mathcal{L}_2} \le C\,e^{-\rho r}\,r^{\epsilon - 1}, \quad r > 0,$

for a constant $C > 0$, because we have

$|S(r)\Phi|_{\mathcal{L}_2} \le |S(r)(\beta I - A)^{1-\epsilon}|_{\mathcal{L}} \cdot |(\beta I - A)^{\epsilon - 1}\Phi|_{\mathcal{L}(U,V)}$

and (30) follows by [22], Theorem 6.13. Consequently, the second term on the right-hand side of (29) is well defined and the integral converges in the operator norm. Indeed, for each $t \ge 0$ there is a constant $c_t$ such that

(31) $\int_0^t \int_{-\infty}^0 H(2H-1)\,(s-r)^{2H-2}\,|S_\alpha(-r)\,\Phi\Phi^*\,S_\alpha^*(t-s)|_{\mathcal{L}}\,dr\,ds \le c_t \int_0^t \int_{-\infty}^0 e^{\rho r}\,e^{-\rho(t-s)}\,(-r)^{\epsilon-1}\,(t-s)^{\epsilon-1}\,(s-r)^{2H-2}\,dr\,ds < \infty$

in view of Lemma 8.2 in the Appendix. By the construction of the stationary solution, for arbitrary $x, y \in \mathrm{Dom}((\beta I - A^*)^{1-\epsilon})$ we have

$\langle Q_\infty^\alpha(t)\,x, y \rangle_V = E\big( \langle Z(t), x\rangle_V \langle Z(0), y\rangle_V \big) = E\Big( \Big\langle S_\alpha(t) Z(0) + \int_0^t S_\alpha(t-s)\Phi\,dB^H(s),\ x \Big\rangle_V \big\langle Z(0), y \big\rangle_V \Big)$

$= E\,\big\langle S_\alpha(t) Z(0), x \big\rangle_V \big\langle Z(0), y \big\rangle_V + E\,\Big\langle \int_0^t S_\alpha(t-s)\Phi\,dB^H(s), x \Big\rangle_V \Big\langle \int_{-\infty}^0 S_\alpha(-r)\Phi\,dB^H(r), y \Big\rangle_V = (A) + (B).$

For the first term, we have

$(A) = E\,\big\langle Z(0), S_\alpha^*(t)\,x \big\rangle_V \big\langle Z(0), y \big\rangle_V = \langle Q_\infty^\alpha S_\alpha^*(t)\,x, y \rangle_V.$
For the second term, we consider an orthonormal basis $\{e_k\}_{k=1}^\infty$ of $U$ and denote by $\beta_k^H$, $k = 1, 2, \dots$, the system of independent real-valued fractional Brownian motions $\beta_k^H(t) = \langle B^H(t), e_k \rangle_U$. Then

$(B) = E\,\Big\langle \sum_{k=1}^\infty \int_0^t S_\alpha(t-s)\Phi e_k\,d\beta_k^H(s), x \Big\rangle_V \Big\langle \sum_{k=1}^\infty \int_{-\infty}^0 S_\alpha(-r)\Phi e_k\,d\beta_k^H(r), y \Big\rangle_V = \sum_{k=1}^\infty E \int_0^t \langle S_\alpha(t-s)\Phi e_k, x\rangle_V\,d\beta_k^H(s) \int_{-\infty}^0 \langle S_\alpha(-r)\Phi e_k, y\rangle_V\,d\beta_k^H(r).$

In view of the assumption (A0), the functions $s \mapsto \Phi^* S_\alpha^*(t-s)\,x$ and $r \mapsto \Phi^* S_\alpha^*(-r)\,y$ are continuously differentiable on their respective domains, hence we may apply Proposition 5.1 to obtain

$(B) = \sum_{k=1}^\infty \int_0^t \int_{-\infty}^0 \langle S_\alpha(t-s)\Phi e_k, x\rangle_V\,\langle S_\alpha(-r)\Phi e_k, y\rangle_V\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds.$

Taking into account (31) and using the Dominated Convergence Theorem, we have

$(B) = \int_0^t \int_{-\infty}^0 \sum_{k=1}^\infty \langle e_k, \Phi^* S_\alpha^*(t-s)\,x\rangle_U\,\langle e_k, \Phi^* S_\alpha^*(-r)\,y\rangle_U\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds$

$= \int_0^t \int_{-\infty}^0 \langle \Phi^* S_\alpha^*(t-s)\,x,\ \Phi^* S_\alpha^*(-r)\,y\rangle_U\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds = \int_0^t \int_{-\infty}^0 \langle S_\alpha(-r)\,\Phi\Phi^*\,S_\alpha^*(t-s)\,x,\ y\rangle_V\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds$

$= \Big\langle \Big( \int_0^t \int_{-\infty}^0 S_\alpha(-r)\,\Phi\Phi^*\,S_\alpha^*(t-s)\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds \Big)\,x,\ y \Big\rangle_V.$

Since $x, y \in \mathrm{Dom}((\beta I - A^*)^{1-\epsilon})$ are arbitrary and $\mathrm{Dom}((\beta I - A^*)^{1-\epsilon})$ is dense in $V$, this concludes the proof of (29). □

Lemma 5.2. The family of auto-covariance operators $(Q_\infty^\alpha(t) : t \in \mathbb{R})$ is right-continuous with respect to the Hilbert-Schmidt norm and

(32) $|Q_\infty^\alpha(t)|_{\mathcal{L}_2} \le \min\{ \mathrm{Tr}(Q_\infty^\alpha),\ C\,|t|^{2H-2} \}, \quad \forall t \in \mathbb{R},$

where $C$ is a constant which does not depend on $t$.

Proof. Since $|Q_\infty^\alpha(-t)|_{\mathcal{L}_2} = |(Q_\infty^\alpha(t))^*|_{\mathcal{L}_2} = |Q_\infty^\alpha(t)|_{\mathcal{L}_2}$, we can consider only the case $t \ge 0$. By the Cauchy-Schwarz inequality, $|Q_\infty^\alpha(t)|_{\mathcal{L}_2} \le \mathrm{Tr}(Q_\infty^\alpha)$.
For the asymptotic behaviour, we can utilize the previous Lemma, which yields

$|Q_\infty^\alpha(t)|_{\mathcal{L}_2} \le |Q_\infty^\alpha S_\alpha^*(t)|_{\mathcal{L}_2} + \int_0^t \int_{-\infty}^0 |S_\alpha(-r)\,\Phi\Phi^*\,S_\alpha^*(t-s)|_{\mathcal{L}_2}\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds.$

Using the exponential stability (A2), the first term can easily be bounded by an exponential function:

$|Q_\infty^\alpha S_\alpha^*(t)|_{\mathcal{L}_2} = |S_\alpha(t)\,Q_\infty^\alpha|_{\mathcal{L}_2} \le |S_\alpha(t)|_{\mathcal{L}}\,|Q_\infty^\alpha|_{\mathcal{L}_2} \le M e^{-\rho\alpha t}\,|Q_\infty^\alpha|_{\mathcal{L}_2}.$

For the second term, we employ both assumptions (A1) and (A2) and the relation $S_\alpha(t) = S(\alpha t)$, which imply

$|S_\alpha(t)\Phi|_{\mathcal{L}_2} \le c\,e^{-\rho\alpha t}\,t^{-\gamma}, \quad \forall t > 0,$

for some constants $c > 0$ and $\gamma \in [0, H)$. If we apply this inequality to the operators inside the integral, we obtain

$\int_0^t \int_{-\infty}^0 |S_\alpha(-r)\,\Phi\Phi^*\,S_\alpha^*(t-s)|_{\mathcal{L}_2}\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds \le C \int_0^t \int_{-\infty}^0 e^{-\rho\alpha(-r)}\,(-r)^{-\gamma}\,e^{-\rho\alpha(t-s)}\,(t-s)^{-\gamma}\,H(2H-1)\,(s-r)^{2H-2}\,dr\,ds \le C\,t^{2H-2},$

for some constant $C > 0$. □

For clarity of exposition, we shall describe the construction of the Berry-Esseen type bounds for the discretely observed solution (the continuous-time case is analogous). The calculations below are similar to those presented in Section 7.4 of [19], where increments of fBm are considered. The definition of $F_n$ from (19), applied to the stationary solution of the SPDE, takes the form

$F_n = \frac{1}{\sqrt{s_n}\,\sqrt{n}} \sum_{i=1}^n \big( |Z(i)|_V^2 - \mathrm{Tr}(Q_\infty^\alpha) \big),$

where

(33) $s_n = 2 \sum_{i=-(n-1)}^{n-1} \left( 1 - \frac{|i|}{n} \right) |Q_\infty^\alpha(i)|_{\mathcal{L}_2}^2.$

The upper bounds (32) on the auto-covariance operators, combined with (22) and (23), yield

(34) $\max\{ |\kappa_3(F_n)|,\ |\kappa_4(F_n)| \} \le C\,\xi_H(n),$

with the rate function $\xi_H$ defined in (38) below. Moreover, since $|Q_\infty^\alpha(i)|_{\mathcal{L}_2}^2 \le C\,|i|^{4H-4}$ and $4H - 4 < -1$ for $H < 3/4$, the limit

(35) $s_\infty^* := 2 \sum_{i=-\infty}^{\infty} |Q_\infty^\alpha(i)|_{\mathcal{L}_2}^2$

is finite, and

(36) $s_n \to s_\infty^* \quad \text{as } n \to \infty.$

Lemma 5.3. Consider $F_n$, $s_n$, $s_\infty^*$ defined above and $N \sim N(0,1)$. Then

(37) $d_{TV}\left( \frac{\sqrt{s_n}}{\sqrt{s_\infty^*}}\,F_n,\ N \right) \le \left| 1 - \frac{s_n}{s_\infty^*} \right| + C \max\{ |\kappa_3(F_n)|,\ |\kappa_4(F_n)| \}.$

Proof. Denote by $D$ and $L^{-1}$ the Malliavin derivative and the pseudo-inverse of the generator of the Ornstein-Uhlenbeck semigroup with respect to the isonormal Gaussian process $X$ on the Hilbert space $\mathcal{H}$ generated from the stationary solution $Z$, as described in Section 4.2.
In particular, for random elements from the second Wiener chaos of $X$, we have $L^{-1} I_2(f) = -\frac12 I_2(f)$. For more details on these two linear operators, consult e.g. [19]. It immediately follows from the proof of Theorem 1.2 in [20] that
\[
d_{TV}\Big(\frac{\sqrt{s_n}}{\sqrt{s^*_\infty}}\, F_n,\; N\Big)
\le 2\, \mathbb{E}\Big[\Big| \mathbb{E}\Big(1 - \Big\langle D \tfrac{\sqrt{s_n}}{\sqrt{s^*_\infty}} F_n,\, -D L^{-1} \tfrac{\sqrt{s_n}}{\sqrt{s^*_\infty}} F_n \Big\rangle_{\mathcal{H}} \,\Big|\, F_n\Big) \Big|\Big]
\]
\[
= 2\, \mathbb{E}\Big[\Big| \mathbb{E}\Big(1 - \frac{s_n}{s^*_\infty} + \frac{s_n}{s^*_\infty} - \frac{s_n}{s^*_\infty} \langle D F_n,\, -D L^{-1} F_n\rangle_{\mathcal{H}} \,\Big|\, F_n\Big) \Big|\Big]
\le 2\Big|1 - \frac{s_n}{s^*_\infty}\Big| + 2\, \frac{s_n}{s^*_\infty}\, \mathbb{E}\big[\big| \mathbb{E}\big(1 - \langle D F_n,\, -D L^{-1} F_n\rangle_{\mathcal{H}} \mid F_n\big) \big|\big].
\]
Clearly $0 < s_n/s^*_\infty$ is bounded in $n$, and $\mathbb{E}\big[\big| \mathbb{E}\big(1 - \langle D F_n, -D L^{-1} F_n\rangle_{\mathcal{H}} \mid F_n\big) \big|\big]$ can be bounded above by $C \max\{|\kappa_3(F_n)|, |\kappa_4(F_n)|\}$ (which again follows from the proof of Theorem 1.2 in [20]). □

Note that by Remark 4.1 we can reformulate the previous results in terms of the Wasserstein distance.

The next step is to extend the result from the stationary solution to general solutions to (1). Following the approach in [11], we consider the Wasserstein distance. Consider a solution $(X(t),\, t \ge 0)$ to (1) with a general initial condition $X_0$ and the stationary solution $(Z(t),\, t \ge 0)$, so that
\[
X(t) = Z(t) + S_\alpha(t)\big(X(0) - Z(0)\big).
\]
Furthermore, the effect of the non-stationary term can then be controlled by the following elementary (but useful) lemma ([11]):

Lemma 5.4. Consider two random variables $X$ and $Z$ defined on the same probability space. Then
\[
d_W(X + Z,\, N) \le d_W(Z,\, N) + \mathbb{E}|X|.
\]

Theorem 5.1. Let (A0), (A1) and (A2) be satisfied and let the previous notation prevail. Let $N_v$ denote a Gaussian random variable, $N_v \sim \mathcal{N}(0, s^*_\infty)$, and define the upper-bound function:
(38) \[ \xi_H(n) := \begin{cases} \dfrac{1}{\sqrt{n}}, & H \in (0, \tfrac58], \\[2pt] n^{4H-3}, & H \in (\tfrac58, \tfrac34). \end{cases} \]
Then for the stationary solution $(Z(t),\, t \ge 0)$ to the equation (1) we have that
(39) \[ d_{TV}\Big[ \sqrt{n}\Big( \frac{1}{n} \sum_{i=1}^n |Z(i)|_V^2 - \alpha^{-2H} \operatorname{Tr}(Q_\infty) \Big),\; N_v \Big] \le C\, \xi_H(n). \]
Now consider a general solution $(X(t),\, t \ge 0)$ to (1) with initial condition $X_0$ such that $\mathbb{E}|X_0|_V^2 < \infty$. Then
(40) \[ d_W\Big[ \sqrt{n}\Big( \frac{1}{n} \sum_{i=1}^n |X(i)|_V^2 - \alpha^{-2H} \operatorname{Tr}(Q_\infty) \Big),\; N_v \Big] \le C\, \xi_H(n). \]

Proof. In the stationary case, we use Lemma 5.3 to see that
\[
d_{TV}\Big[ \sqrt{n}\Big( \frac{1}{n}\sum_{i=1}^n |Z(i)|_V^2 - \alpha^{-2H}\operatorname{Tr}(Q_\infty) \Big),\; N_v \Big] \le 2\Big|1 - \frac{s_n}{s^*_\infty}\Big| + C \max\{|\kappa_3(F_n)|, |\kappa_4(F_n)|\}.
\]
The term $\max\{|\kappa_3(F_n)|, |\kappa_4(F_n)|\}$ can be estimated as in (34). Next, we use the formulas (33) and (36) together with (32) to obtain
\[
\Big|1 - \frac{s_n}{s^*_\infty}\Big| \le \begin{cases} \dfrac{C}{n}, & H < \tfrac12, \\[2pt] \dfrac{C \ln n}{n}, & H = \tfrac12, \\[2pt] C\, n^{4H-3}, & \tfrac12 < H < \tfrac34. \end{cases}
\]
Combining these two bounds, we arrive at (39).

For the non-stationary case, we apply Lemma 5.4. Consequently, we need to estimate
\[
\mathbb{E}\big|\, |X(i)|_V^2 - |Z(i)|_V^2 \,\big| \le \sqrt{\mathbb{E}|X(i) - Z(i)|_V^2}\; \Big( \sqrt{\mathbb{E}|X(i)|_V^2} + \sqrt{\mathbb{E}|Z(i)|_V^2} \Big).
\]
For the first factor on the r.h.s., we have
\[
\sqrt{\mathbb{E}|X(i) - Z(i)|_V^2} = \sqrt{\mathbb{E}\big|S_\alpha(i)(X(0) - Z(0))\big|_V^2} \le |S_\alpha(i)|_{L(V)}\, \sqrt{\mathbb{E}|X(0) - Z(0)|_V^2} \le C e^{-\rho\alpha i},
\]
where we employed the assumption $\mathbb{E}|X_0|_V^2 < \infty$ and the exponential stability (A2). The second factor can be bounded above by a constant, since (using the above bound)
\[
\sqrt{\mathbb{E}|X(i)|_V^2} + \sqrt{\mathbb{E}|Z(i)|_V^2} \le \sqrt{\mathbb{E}|X(i) - Z(i)|_V^2} + 2\sqrt{\mathbb{E}|Z(i)|_V^2} \le C e^{-\rho\alpha i} + 2\sqrt{\operatorname{Tr}(Q^\alpha_\infty)}.
\]
Hence,
\[
\mathbb{E}\big|\, |X(i)|_V^2 - |Z(i)|_V^2 \,\big| \le C e^{-\rho\alpha i},
\]
where $C$ is a constant independent of $i$ and $\rho$ is the coefficient in the exponential stability (A2).
It follows directly that
\[
\mathbb{E}\Big| \frac{1}{\sqrt{n}} \sum_{i=1}^n \big( |X(i)|_V^2 - |Z(i)|_V^2 \big) \Big| \le \frac{C}{\sqrt{n}},
\]
where $C$ does not depend on $n$. This, according to Lemma 5.4, does not distort the upper bound $\xi_H(n)$. The continuous-time case can be treated similarly. □

Remark 5.1. In the continuous-time case, we can proceed analogously to show
\[
d_{TV}\Big[ \sqrt{T}\Big( \frac{1}{T} \int_0^T |Z(t)|_V^2\, dt - \alpha^{-2H} \operatorname{Tr}(Q_\infty) \Big),\; N_u \Big] \le C\, \xi_H(T),
\]
\[
d_W\Big[ \sqrt{T}\Big( \frac{1}{T} \int_0^T |X(t)|_V^2\, dt - \alpha^{-2H} \operatorname{Tr}(Q_\infty) \Big),\; N_u \Big] \le C\, \xi_H(T),
\]
where $N_u \sim \mathcal{N}(0, u^*_\infty)$ and $u^*_\infty = 2 \int_{-\infty}^{\infty} |Q^\alpha_\infty(t)|^2_{L_2}\, dt < \infty$.

Remark 5.2. If we observe a one-dimensional projection of the solution, we can proceed similarly. Considering the stationary centered Gaussian process $z_t := \langle Z(t), w\rangle_V$ with auto-covariance function
(41) \[ r^\alpha_z(t) = \mathbb{E}(z_t z_0) = \langle Q^\alpha_\infty(t) w, w\rangle_V, \]
we can show again that $r^\alpha_z$ is right-continuous with
(42) \[ |r^\alpha_z(t)| \le \min\{ r^\alpha_z(0),\; C|t|^{2H-2} \}, \qquad \forall t \in \mathbb{R}. \]
The above approach then leads to the following Berry–Esseen bounds for the stationary solution:
(43) \[ d_{TV}\Big[ \sqrt{n}\Big( \frac{1}{n} \sum_{i=1}^n \langle Z(i), w\rangle_V^2 - \alpha^{-2H} \langle Q_\infty w, w\rangle_V \Big),\; N_v \Big] \le C\, \xi_H(n), \]
\[
d_{TV}\Big[ \sqrt{T}\Big( \frac{1}{T} \int_0^T \langle Z(t), w\rangle_V^2\, dt - \alpha^{-2H} \langle Q_\infty w, w\rangle_V \Big),\; N_u \Big] \le C\, \xi_H(T),
\]
and for a general solution $(X(t),\, t \ge 0)$ to (1) with initial condition $X_0$ such that $\mathbb{E}|X_0|_V^2 < \infty$:
(44) \[ d_W\Big[ \sqrt{n}\Big( \frac{1}{n} \sum_{i=1}^n \langle X(i), w\rangle_V^2 - \alpha^{-2H} \langle Q_\infty w, w\rangle_V \Big),\; N_v \Big] \le C\, \xi_H(n), \]
\[
d_W\Big[ \sqrt{T}\Big( \frac{1}{T} \int_0^T \langle X(t), w\rangle_V^2\, dt - \alpha^{-2H} \langle Q_\infty w, w\rangle_V \Big),\; N_u \Big] \le C\, \xi_H(T),
\]
where $\xi_H(n)$ is defined in Theorem 5.1, $N_v \sim \mathcal{N}(0, s_\infty)$, $N_u \sim \mathcal{N}(0, u_\infty)$, $s_\infty = 2\sum_{i=-\infty}^{\infty} (r^\alpha_z(i))^2$ and $u_\infty = 2\int_{-\infty}^{\infty} (r^\alpha_z(t))^2\, dt$.
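The mechanism of Remark 5.2 can be illustrated numerically. The sketch below is not from the paper: it replaces the projected stationary sequence $z_i = \langle Z(i), w\rangle_V$ by a stationary Gaussian AR(1) surrogate with autocovariance $r(k) = \varphi^{|k|}$, for which $s_\infty = 2\sum_{k\in\mathbb{Z}} r(k)^2 = 2(1+\varphi^2)/(1-\varphi^2)$ is available in closed form, and checks that the standardized statistic $F_n$ is approximately standard normal. The function name `simulate_F` and all parameter values are our illustrative choices.

```python
import math
import random

def simulate_F(n, phi, rng):
    """One replication of F_n = sum_i (z_i^2 - 1) / sqrt(n * s_inf)
    for a stationary AR(1) sequence z with autocovariance r(k) = phi^|k|."""
    sigma_eps = math.sqrt(1.0 - phi * phi)
    z = rng.gauss(0.0, 1.0)  # start in the stationary law N(0, 1)
    acc = 0.0
    for _ in range(n):
        acc += z * z - 1.0
        z = phi * z + sigma_eps * rng.gauss(0.0, 1.0)
    # closed-form long-run variance: 2 * sum_k r(k)^2
    s_inf = 2.0 * (1.0 + phi * phi) / (1.0 - phi * phi)
    return acc / math.sqrt(n * s_inf)

rng = random.Random(0)
reps = [simulate_F(1000, math.exp(-1.0), rng) for _ in range(2000)]
mean = sum(reps) / len(reps)
var = sum((f - mean) ** 2 for f in reps) / len(reps)
# The standardized statistic should be approximately N(0, 1)
print(mean, var)
```

Over the replications the empirical mean and variance of $F_n$ should come out close to $0$ and $1$, in line with the central limit theorem for the sample second moments.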
6. Berry–Esseen bounds for estimators

Recall that the estimators (8), (9), (11) and (12) are all constructed (under assumption (7)) as monotone, twice differentiable functions of the corresponding sample second moments, whose asymptotic properties were studied in the previous section. It is well known that such a monotone differentiable transformation does not distort asymptotic normality. However, when constructing the Berry–Esseen bounds for the transformed processes, we have to switch to the Kolmogorov distance localized on compacts, as suggested by the following proposition:

Proposition 6.1. Denote by $\Psi$ the distribution function of the standard normal distribution $\mathcal{N}(0,1)$ and consider a stochastic process $(U_T : T > 0)$ with mean value $\mu$ and a standardizing function $\sigma_T$, $\sigma_T \xrightarrow{T\to\infty} 0$, such that
\[
\sup_{z \in \mathbb{R}} \Big| P\Big( \frac{U_T - \mu}{\sigma_T} \le z \Big) - \Psi(z) \Big| \le \xi(T), \qquad \forall T > 0,
\]
$\xi(T)$ being the upper bound for the Kolmogorov distance. Now let $g$ be a monotone function, $g \in C^2(A)$, where $P(U_T \in A) = 1$ for all $T$. Then for each $K > 0$ there exist a constant $C_K$ and a $T_0 > 0$ such that
\[
\sup_{z \in [-K,K]} \Big| P\Big( \frac{g(U_T) - g(\mu)}{|g'(\mu)|\, \sigma_T} \le z \Big) - \Psi(z) \Big| \le C_K \max\{ \xi(T), \sigma_T \}, \qquad \forall T > T_0.
\]

Proof. The idea of the proof follows the calculations from [24] (see the proof of Theorem 3.2 therein).
Denote $\psi := g^{-1}$, $\varphi := g(\mu)$ and assume $g' < 0$ (the case $g' > 0$ is analogous). Then
(45)
\[
\Big| P\Big( \frac{g(U_T) - g(\mu)}{|g'(\mu)|\sigma_T} \le z \Big) - \Psi(z) \Big|
\le \Big| P\Big( \frac{U_T - \mu}{\sigma_T} \ge \nu \Big) - \big(1 - \Psi(\nu)\big) \Big| + \big| \Psi(-\nu) - \Psi(z) \big|
\le \xi(T) + |{-\nu} - z|,
\]
where
\[
\nu = \frac{\psi\big(z\sigma_T |g'(\mu)| + \varphi\big) - \psi(\varphi)}{\sigma_T} = \frac{1}{\sigma_T}\, \psi'(\eta_T)\, z\sigma_T |g'(\mu)| = z\, \frac{\psi'(\eta_T)}{|\psi'(\varphi)|} = -z\, \frac{\psi'(\eta_T)}{\psi'(\varphi)},
\]
with $\eta_T \in \big(\varphi,\, \varphi + z\sigma_T |g'(\mu)|\big)$. Now
(46)
\[
|{-\nu} - z| = \Big| z\, \frac{\psi'(\eta_T)}{\psi'(\varphi)} - z \Big| = \Big| \frac{z}{\psi'(\varphi)} \Big|\, \big|\psi'(\eta_T) - \psi'(\varphi)\big| = \Big| \frac{z}{\psi'(\varphi)} \Big|\, \big|\psi''(\xi_T)(\eta_T - \varphi)\big|
\le \Big| \frac{z}{\psi'(\varphi)} \Big|\, |\psi''(\xi_T)|\, |z\sigma_T|\, |g'(\mu)|
\]
\[
\le z^2\, |g'(\mu)|^2\, \sigma_T \sup\big\{ |\psi''(y)| : y \in \big[\,g(\mu) - |z|\sigma_T |g'(\mu)|,\; g(\mu) + |z|\sigma_T |g'(\mu)|\,\big] \big\}.
\]
Clearly, for any $K > 0$ there exist $\delta > 0$ and $T_0$ such that for all $T > T_0$
\[
\sup_{|z| \le K} |{-\nu} - z| \le K^2\, |g'(\mu)|^2\, \sigma_T \sup\big\{ |\psi''(y)| : y \in [g(\mu) - \delta,\, g(\mu) + \delta] \big\} \le C_K\, \sigma_T.
\]
Combining this estimate with (45), we obtain the desired result. □

We shall first formulate the Berry–Esseen bounds for the estimators constructed by means of the stationary solution. The minimum contrast estimators are well suited for these situations. Note that the Berry–Esseen bounds for the sample moments from the previous section can be reformulated in terms of the Kolmogorov distance, therefore Proposition 6.1 can be used.

Theorem 6.1. Let (A0), (A1) and (A2) be satisfied. Consider the estimators $\hat\alpha_T$, $\check\alpha_n$, $\tilde\alpha_T$, $\bar\alpha_n$ defined in (8)–(12), based on the stationary solution to the equation (1) (i.e. $X(t) = Z(t)$), and recall the upper-bound function:
(47) \[ \xi_H(t) := \begin{cases} \dfrac{1}{\sqrt{t}}, & H \in (0, \tfrac58], \\[2pt] t^{4H-3}, & H \in (\tfrac58, \tfrac34). \end{cases} \]
Then for each $K > 0$ there exists a constant $C_K$ such that
\[
\sup_{z \in [-K,K]} \Big| P\Big( \sqrt{n}\, (\check\alpha_n - \alpha)\, \frac{\gamma_\alpha}{\sqrt{s^*_\infty}} \le z \Big) - \Psi(z) \Big| \le C_K\, \xi_H(n), \qquad \forall n \ge 1,
\]
\[
\sup_{z \in [-K,K]} \Big| P\Big( \sqrt{T}\, (\hat\alpha_T - \alpha)\, \frac{\gamma_\alpha}{\sqrt{u^*_\infty}} \le z \Big) - \Psi(z) \Big| \le C_K\, \xi_H(T), \qquad \forall T > T_0 \text{ for some } T_0 > 0,
\]
\[
\sup_{z \in [-K,K]} \Big| P\Big( \sqrt{n}\, (\bar\alpha_n - \alpha)\, \frac{\delta_\alpha}{\sqrt{s_\infty}} \le z \Big) - \Psi(z) \Big| \le C_K\, \xi_H(n), \qquad \forall n \ge 1,
\]
\[
\sup_{z \in [-K,K]} \Big| P\Big( \sqrt{T}\, (\tilde\alpha_T - \alpha)\, \frac{\delta_\alpha}{\sqrt{u_\infty}} \le z \Big) - \Psi(z) \Big| \le C_K\, \xi_H(T), \qquad \forall T > T_0 \text{ for some } T_0 > 0,
\]
where
\[
\gamma_\alpha = \frac{2H}{\alpha^{2H+1}} \operatorname{Tr}(Q_\infty), \qquad \delta_\alpha = \frac{2H}{\alpha^{2H+1}} \langle Q_\infty w, w\rangle_V.
\]

Proof. It is a direct consequence of Theorem 5.1 (cf. also Remarks 5.1 and 5.2) and Proposition 6.1. □

Remark 6.1. In the case of a general (non-stationary) solution, we can proceed similarly. First, we have to replace the Wasserstein distance by the Kolmogorov distance. For this purpose, we can utilize the following well-known general inequality:
(48) \[ d_{Kol}(X, N) \le 2\sqrt{C\, d_W(X, N)}, \]
where $X$ and $N$ are any random variables and $N$ has an absolutely continuous distribution with density function bounded by the constant $C$. This inequality, however, decelerates the speed of convergence of the local Berry–Esseen bounds. In view of Theorem 5.1 (cf. Remarks 5.1 and 5.2) and Proposition 6.1, we obtain results corresponding to Theorem 6.1, but with the upper bounds for the local Kolmogorov distance being $c_K \sqrt{\xi_H(n)}$ and $c_K \sqrt{\xi_H(T)}$ instead of $C_K\, \xi_H(n)$ and $C_K\, \xi_H(T)$, respectively.

The central limit theorem for the estimators is now an easy corollary to the previous theorem.

Corollary 6.1. Assume (A0), (A1), (A2), $\mathbb{E}|X_0|_V^2 < \infty$ and $H < \frac34$.
For the estimators (8), (9), (11) and (12) the central limit theorems
\[
\sqrt{n}\,(\check\alpha_n - \alpha) \xrightarrow{d} \mathcal{N}(0, \sigma_1^2), \qquad \sqrt{T}\,(\hat\alpha_T - \alpha) \xrightarrow{d} \mathcal{N}(0, \sigma_2^2), \qquad \sqrt{n}\,(\bar\alpha_n - \alpha) \xrightarrow{d} \mathcal{N}(0, \sigma_3^2), \qquad \sqrt{T}\,(\tilde\alpha_T - \alpha) \xrightarrow{d} \mathcal{N}(0, \sigma_4^2)
\]
hold true for
\[
\sigma_1 = \frac{\alpha^{2H+1}}{2H \operatorname{Tr}(Q_\infty)} \sqrt{ 2 \sum_{i=-\infty}^{\infty} |Q^\alpha_\infty(i)|^2_{L_2} }, \qquad
\sigma_2 = \frac{\alpha^{2H+1}}{2H \operatorname{Tr}(Q_\infty)} \sqrt{ 2 \int_{-\infty}^{\infty} |Q^\alpha_\infty(t)|^2_{L_2}\, dt },
\]
\[
\sigma_3 = \frac{\alpha^{2H+1}}{2H \langle Q_\infty w, w\rangle_V} \sqrt{ 2 \sum_{i=-\infty}^{\infty} \big( \langle Q^\alpha_\infty(i) w, w\rangle_V \big)^2 }, \qquad
\sigma_4 = \frac{\alpha^{2H+1}}{2H \langle Q_\infty w, w\rangle_V} \sqrt{ 2 \int_{-\infty}^{\infty} \big( \langle Q^\alpha_\infty(t) w, w\rangle_V \big)^2\, dt }.
\]

Proof. Observe that for $H < \frac34$ we have $\xi_H(n) \to 0$ as $n \to \infty$. □

7. Examples

7.1. Parabolic equation with distributed fractional noise. Consider the following formal parabolic equation with fractional noise
(49) \[ \frac{\partial u}{\partial t}(t,\xi) = -\alpha(-\Delta)^m u(t,\xi) + \eta^H(t,\xi), \qquad (t,\xi) \in \mathbb{R}_+ \times \mathcal{O}, \]
accompanied by the initial condition
(50) \[ u(0,\xi) = x(\xi), \qquad \xi \in \mathcal{O}, \]
and the Dirichlet boundary conditions
(51) \[ \frac{\partial^j u}{\partial \nu^j}(t,\xi) = 0, \quad \forall j = 0, \ldots, m-1, \qquad (t,\xi) \in [0,+\infty) \times \partial\mathcal{O}, \]
where $\mathcal{O}$ is a bounded domain in $\mathbb{R}^d$ with smooth boundary $\partial\mathcal{O}$, $\Delta$ is the Laplace operator, $\alpha > 0$ is the unknown parameter, and $\frac{\partial}{\partial\nu}$ denotes the normal derivative. The noise term $\eta^H(t,\xi)$ can be viewed as a formal time derivative of a fractional Brownian motion with Hurst parameter $H \in (0,1)$. The case $m = 1$ corresponds to the stochastic heat equation.

We can reformulate the above parabolic problem rigorously as the stochastic evolution equation
(52) \[ dX(t) = \alpha A X(t)\, dt + \Phi\, dB^H(t), \qquad X(0) = X_0, \]
in the Hilbert space $V = U = L^2(\mathcal{O})$, where $X_0 \in L^2(\mathcal{O})$ and $A = -(-\Delta)^m$ with domain
\[
\mathrm{Dom}(A) = \Big\{ y \in W^{2m,2}(\mathcal{O}) : \frac{\partial^j y}{\partial \nu^j} = 0 \text{ on } \partial\mathcal{O},\ j = 0, \ldots, m-1 \Big\},
\]
where $W^{2m,2}(\mathcal{O})$ is the standard Sobolev space (see e.g. [25]).
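Before verifying the abstract conditions, it may help to record the diagonal structure that Remark 7.1 below exploits. The following standard computation is our illustration (not taken from the paper) for the simplest case $\mathcal{O} = (0,1)$, $m = 1$ and $\Phi = I$:

```latex
% Eigenpairs of the Dirichlet Laplacian A = \Delta on O = (0,1):
e_k(\xi) = \sqrt{2}\,\sin(k\pi\xi), \qquad A e_k = -(k\pi)^2\, e_k, \qquad k = 1, 2, \ldots
% Projecting equation (52) onto e_k gives a scalar Langevin equation,
d\langle X(t), e_k\rangle_V = -\alpha (k\pi)^2 \langle X(t), e_k\rangle_V\, dt + d\beta^H_k(t),
% i.e. a one-dimensional fractional Ornstein--Uhlenbeck process with
% drift coefficient \alpha (k\pi)^2.
```

Each such scalar fractional Ornstein–Uhlenbeck process has stationary variance proportional to $\big(\alpha (k\pi)^2\big)^{-2H} \sim k^{-4H}$, so the stationary covariance operator is trace class precisely when $4H > 1$; this matches the condition $H > \frac{d}{4m}$ below with $d = m = 1$.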
For the noise part, $B^H$ denotes a cylindrical fractional Brownian motion with Hurst parameter $H$, and $\Phi$, being a bounded operator on $L^2(\mathcal{O})$, determines the space correlations. It clearly fulfills the condition (A0) with $\epsilon = 1$. It is a well-known fact that $A$ generates an analytic semigroup $(S(t) : t \ge 0)$ satisfying
\[
|S(t)\Phi|_{L_2} \le |S(t)|_{L_2}\, |\Phi|_{L} \le c\, t^{-\frac{d}{4m}}, \qquad \forall t \in (0,T],
\]
and
\[
|S(t)|_{L} \le M e^{-\rho t}, \qquad \forall t > 0,
\]
where $-\rho < 0$ is an upper bound for the spectrum of $A$. Hence, the conditions (A1) and (A2) are fulfilled whenever $H > \frac{d}{4m}$. In this case, we can apply Theorem 3.2 to see that the estimators (8) and (9) (defined if $\Phi \ne 0$) and the estimators (11) and (12) (defined if $\langle Q_\infty w, w\rangle_V \ne 0$) are strongly consistent. If we assume, in addition, that $H < \frac34$, Corollary 6.1 implies asymptotic normality of the estimators with the upper bounds for the speed of the convergence given in Theorem 6.1 (resp. Remark 6.1).

Remark 7.1. Note that for $H > \frac34$, Corollary 6.1 may not hold. If the equation (52) is diagonalizable (e.g. if $\Phi$ is the identity, so that the noise is white in space), the projection $(\langle X(t), e_k\rangle_V,\ t \ge 0)$ onto a common eigenvector $e_k$ of the operators $\Phi$ and $A$ is a one-dimensional fractional Ornstein–Uhlenbeck process. It is well known for such a process (see e.g. Remark 16 in [11]) that if $H > \frac34$, the sequence
\[
\frac{ \sum_{i=1}^n \big( \langle X(i), e_k\rangle_V^2 - \langle Q_\infty e_k, e_k\rangle_V \big) }{ n^{2H-1} }
\]
converges in distribution to the (non-Gaussian) law of a second-chaos variable, also called the (scaled) Rosenblatt law.

7.2. Parabolic equation with pointwise noise. Consider the heat equation
(53) \[ \frac{\partial u}{\partial t}(t,\xi) = \alpha\, \Delta u(t,\xi) + \delta_y\, \eta^H(t), \qquad (t,\xi) \in \mathbb{R}_+ \times \mathcal{O}, \]
with the Dirichlet boundary condition $u|_{\mathbb{R}_+ \times \partial\mathcal{O}} = 0$ and the initial condition $u(0, \cdot) = X_0$, where $\mathcal{O} \subset \mathbb{R}^d$ is a bounded domain with smooth boundary $\partial\mathcal{O}$, $\alpha > 0$ is the unknown parameter, $y \in \mathcal{O}$ is fixed, $\delta_y$ stands for the Dirac distribution at the point $y$, and $(\eta^H(t))$ is a formally one-dimensional fractional noise with Hurst parameter $H \in (0,1)$.

The equation (53) may be treated rigorously as the system
(54) \[ dX(t) = \alpha A X(t)\, dt + \Phi\, dB^H(t), \qquad t > 0, \]
(55) \[ X(0) = X_0, \]
in a standard way in the state space $V = L^2(\mathcal{O})$, where $A = \Delta|_{\mathrm{Dom}(A)}$, $\mathrm{Dom}(A) = W^{2,2}(\mathcal{O}) \cap W^{1,2}_0(\mathcal{O})$, is the Dirichlet Laplacian, $(B^H(t),\, t \in \mathbb{R})$ is a fractional Brownian motion in $U = \mathbb{R}$, and $\Phi = \delta_y$. It is well known that the operator $A$ is strictly negative and generates an exponentially stable analytic semigroup (cf. (A2)). To verify (A0) (where we put $\beta = 0$), note that $\Phi \in L(\mathbb{R}, (C(\bar{\mathcal{O}}))') \cong (C(\bar{\mathcal{O}}))'$ and $\mathrm{Dom}((-A)^{1-\epsilon}) \subset W^{2(1-\epsilon),2}(\mathcal{O})$ for each $\epsilon \in (0,1)$, so that
\[
(C(\bar{\mathcal{O}}))' \hookrightarrow \big(W^{2(1-\epsilon),2}(\mathcal{O})\big)' \hookrightarrow \big(\mathrm{Dom}((-A)^{1-\epsilon})\big)', \qquad \text{if } 0 < \epsilon < 1 - \tfrac{d}{4},
\]
which verifies (A0). Furthermore, we have that
\[
|S(t)\Phi|_{L_2} \le |S(t)|_{L(\mathrm{Dom}((-A)^{\epsilon-1}),\, V)} \cdot |(-A)^{\epsilon-1}\Phi|_{L_2} \le c_T\, t^{\epsilon-1}, \qquad t \in (0,T],\ T > 0,
\]
and since $U = \mathbb{R}$, the operator norm and the Hilbert–Schmidt norm of $S(t)\Phi$ are equivalent. Hence, the condition (A1) is satisfied if $\gamma = 1 - \epsilon < H$. Therefore, all the conditions (A0)–(A2) are satisfied by a choice of $\epsilon$ with
\[
1 - \frac{d}{4} > \epsilon > 1 - H,
\]
which is possible if $H > \frac{d}{4}$. Note that this restriction is imposed because in this example we consider only solutions with values in a function space (more precisely, the state space $V$ is $L^2(\mathcal{O})$). If it is omitted, the solution will still exist in a suitable distribution space.

Given $w \in V = L^2(\mathcal{O})$, let us examine the condition $\langle Q_\infty w, w\rangle_V > 0$. A random variable $\tilde{X}$ for the equation (1) (with $\alpha = 1$), the law of which is the stationary one, may be defined as
\[
\tilde{X} = \int_{-\infty}^0 S(-t)\Phi\, dB^H(t)
\]
(see e.g. [16]).
Therefore, by the symmetry of the increments of the process $(B^H(t),\, t \in \mathbb{R})$,
\[
\mathrm{Law}(\tilde{X}) = \mathrm{Law}\Big( \int_0^\infty S(t)\Phi\, dB^H(t) \Big) = \mathcal{N}(0, Q_\infty),
\]
hence
\[
\langle Q_\infty w, w\rangle_V = \mathbb{E}\Big\langle \int_0^\infty S(t)\Phi\, dB^H(t),\, w \Big\rangle_V^2 = \mathbb{E}\Big( \int_0^\infty \langle \Phi^* S^*(t) w,\, dB^H(t)\rangle_U \Big)^2.
\]
For simplicity, assume that $H > \frac12$ (for $H < \frac12$ the argument is similar). Computing the variance on the r.h.s. we obtain
(56) \[ \langle Q_\infty w, w\rangle_V = \int_0^\infty \big|\mathcal{K}^*\big(\Phi^* S^*(\cdot)\, w\big)(r)\big|^2\, dr, \]
where
\[
\mathcal{K}^*(f)(r) = r^{\frac12 - H}\, I^{H-\frac12}_-\big( u^{H-\frac12} f(u) \big)(r), \qquad r > 0,
\]
is defined for $f : \mathbb{R}_+ \to \mathbb{R}$ such that $\int_0^\infty |\mathcal{K}^*(f)(r)|^2\, dr < \infty$, and $I^\alpha_-$ denotes the fractional integral
\[
(I^\alpha_- f)(r) = \Gamma(\alpha)^{-1} \int_r^\infty f(u)(u-r)^{\alpha-1}\, du, \qquad \alpha \in (0,1)
\]
(cf. [23] for details). Assume that $\langle Q_\infty w, w\rangle_V = 0$ for some $w \in V$. By (56) and Lemma 6.1 in [23] we have that $\Phi^* S^*(t) w = 0$ for $t > 0$. On the other hand,
\[
\Phi^* S^*(t) w = \int_{\mathcal{O}} G(t, y, \xi)\, w(\xi)\, d\xi,
\]
where $G$ is the Green function corresponding to the Dirichlet Laplacian on the domain $\mathcal{O}$. Therefore, if we assume that $w \ge 0$, $w \ne 0$, then since $G > 0$ on the domain $\mathbb{R}_+ \times \mathcal{O} \times \mathcal{O}$ we obtain $\Phi^* S^*(t) w \ne 0$, and by contradiction we get $\langle Q_\infty w, w\rangle_V > 0$, the condition needed in (11) and (12).

For instance, if $w$ is a characteristic function (indicator) of a subset of $\mathcal{O}$ of positive Lebesgue measure, we may consider the estimators (11) and (12) to be built upon a partial observation of the process "through a window". On the other hand, if for example $\mathcal{O} = (0,1)$ and $w(\xi) = \sin 4\pi\xi$, we have that
\[
S^*(t) w(\xi) = S(t) w(\xi) = e^{-16\pi^2 t} \sin 4\pi\xi,
\]
so choosing $y = \frac12$ we obtain $\Phi^* S^*(t) w = 0$ and the estimators (11) and (12) cannot be defined.

Now, Theorem 3.2 provides the strong consistency of the estimators (8), (9), (11) and (12). Moreover, if $\frac{d}{4} < H < \frac34$, Corollary 6.1 implies asymptotic normality of the estimators with the upper bounds for the speed of the convergence given in Theorem 6.1 and Remark 6.1.

8. Appendix

Lemma 8.1. Consider the sequence $F_n$ defined in (19).
The 3rd and 4th cumulants thereof satisfy the following bounds:
\[
\kappa_3(F_n) \le \frac{C}{\sqrt{n}\; s_n^{3/2}} \Big( \sum_{|i|<n} |Q^\alpha_\infty(i)|^{3/2}_{L_2} \Big)^2, \qquad
\kappa_4(F_n) \le \frac{C}{n\; s_n^{2}} \Big( \sum_{|i|<n} |Q^\alpha_\infty(i)|^{4/3}_{L_2} \Big)^3.
\]

Proof. To avoid technical difficulties with infinite sums, we shall work with projections onto finite-dimensional subspaces and then pass to the limit. Recall the setting in subsection 4.2. For $N < \infty$, define $V^{(N)}$ as the subspace of $V$ spanned by its first $N$ orthonormal vectors $\{e_k\}_{k=1}^N$. Consider the projections of the stationary Gaussian sequence $Z_i$ onto $V^{(N)}$:
\[
Z^{(N)}_i := \sum_{k=1}^N \langle Z_i, e_k\rangle_V\, e_k.
\]
Clearly, $(Z^{(N)}_i : i \in \mathbb{Z})$ is a stationary $V^{(N)}$-valued centered Gaussian sequence with a covariance operator $Q^{(N)}$ and auto-covariance operators $Q^{(N)}(i-j)$ being restrictions of $Q$ and $Q(i-j)$ onto $V^{(N)}$ in the following sense:
\[
\langle Q^{(N)} u, v\rangle_V = \langle Q u, v\rangle_V, \qquad \langle Q^{(N)}(i-j)\, u, v\rangle_V = \langle Q(i-j)\, u, v\rangle_V, \qquad \forall u, v \in V^{(N)}.
\]
Following the approach in subsection 4.2, for each $N$ we define an appropriate isonormal Gaussian process $X^{(N)}$ on a suitable Hilbert space $\mathcal{H}^{(N)}$ corresponding to the stationary sequence $(Z^{(N)}_i : i \in \mathbb{Z})$ and consider the sequences
\[
V^{(N)}_n := \frac{1}{\sqrt{n}} \sum_{i=1}^n \big( |Z^{(N)}_i|^2_{V^{(N)}} - \operatorname{Tr}(Q^{(N)}) \big), \qquad
s^{(N)}_n := \mathbb{E}\big( V^{(N)}_n \big)^2, \qquad
F^{(N)}_n := \frac{V^{(N)}_n}{\sqrt{s^{(N)}_n}}.
\]
Rewrite $F^{(N)}_n$ as in (21):
\[
F^{(N)}_n = \frac{1}{\sqrt{n\, s^{(N)}_n}} \sum_{i=1}^n \sum_{k=1}^N \big( \langle Z^{(N)}_i, e_k\rangle^2_{V^{(N)}} - \langle Q e_k, e_k\rangle_{V^{(N)}} \big)
= I_2\Big( \frac{1}{\sqrt{n\, s^{(N)}_n}} \sum_{i=1}^n \sum_{k=1}^N h^{\otimes 2}_{i,e_k} \Big) = I_2\big(f^{(N)}_n\big).
\]
Recall that the 4th cumulant of $F^{(N)}_n$ can be bounded above by the norm of the tensor contraction of $f^{(N)}_n$:
(57) \[ \kappa_4\big(F^{(N)}_n\big) \le C\, \big| f^{(N)}_n \otimes_1 f^{(N)}_n \big|^2_{(\mathcal{H}^{(N)})^{\otimes 2}}, \]
where $C$ is a universal constant (see [2]).
The tensor contraction of order 1, denoted by $\otimes_1$, is defined as follows:
\[
f \otimes_1 g := \sum_{k=1}^\infty \langle f, h_k\rangle_{\mathcal{H}^{(N)}} \otimes \langle g, h_k\rangle_{\mathcal{H}^{(N)}} \in (\mathcal{H}^{(N)})^{\otimes 2}, \qquad \forall f, g \in (\mathcal{H}^{(N)})^{\odot 2},
\]
where $\{h_k\}_{k=1}^\infty$ is an orthonormal basis of $\mathcal{H}^{(N)}$. Now, by the definition of $\otimes_1$ and its bilinearity, we can write
\[
f^{(N)}_n \otimes_1 f^{(N)}_n = \frac{1}{n\, s^{(N)}_n} \sum_{i,j=1}^n \sum_{k,l=1}^N h^{\otimes 2}_{i,e_k} \otimes_1 h^{\otimes 2}_{j,e_l}
= \frac{1}{n\, s^{(N)}_n} \sum_{i,j=1}^n \sum_{k,l=1}^N \langle h_{i,e_k}, h_{j,e_l}\rangle_{\mathcal{H}^{(N)}}\, \big( h_{i,e_k} \otimes h_{j,e_l} \big),
\]
and, consequently, using the fact that
\[
\langle f_1 \otimes f_2,\, g_1 \otimes g_2\rangle_{(\mathcal{H}^{(N)})^{\otimes 2}} = \langle f_1, g_1\rangle_{\mathcal{H}^{(N)}}\, \langle f_2, g_2\rangle_{\mathcal{H}^{(N)}}
\]
and the relation for adjoint operators $(Q^{(N)}(i-j))^* = Q^{(N)}(j-i)$, we calculate further:
\[
\big| f^{(N)}_n \otimes_1 f^{(N)}_n \big|^2_{(\mathcal{H}^{(N)})^{\otimes 2}}
= \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n \sum_{k,k'=1}^N \sum_{l,l'=1}^N \langle h_{i,e_k}, h_{j,e_l}\rangle_{\mathcal{H}^{(N)}} \langle h_{i',e_{k'}}, h_{j',e_{l'}}\rangle_{\mathcal{H}^{(N)}} \langle h_{i,e_k}, h_{i',e_{k'}}\rangle_{\mathcal{H}^{(N)}} \langle h_{j,e_l}, h_{j',e_{l'}}\rangle_{\mathcal{H}^{(N)}}
\]
\[
= \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n \sum_{k,k'=1}^N \sum_{l,l'=1}^N \langle Q^{(N)}(i-j) e_k, e_l\rangle_{V^{(N)}} \langle Q^{(N)}(i'-j') e_{k'}, e_{l'}\rangle_{V^{(N)}} \langle Q^{(N)}(i-i') e_k, e_{k'}\rangle_{V^{(N)}} \langle Q^{(N)}(j-j') e_l, e_{l'}\rangle_{V^{(N)}}
\]
\[
= \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n \sum_{k'=1}^N \sum_{l=1}^N \langle Q^{(N)}(i'-j') e_{k'},\, Q^{(N)}(j-j') e_l\rangle_{V^{(N)}} \langle Q^{(N)}(j-i) e_l,\, Q^{(N)}(i'-i) e_{k'}\rangle_{V^{(N)}}
\]
\[
= \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n \sum_{k'=1}^N \langle Q^{(N)}(j'-j) Q^{(N)}(i'-j') e_{k'},\, Q^{(N)}(i-j) Q^{(N)}(i'-i) e_{k'}\rangle_{V^{(N)}}
\]
\[
= \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n \operatorname{Tr}\Big( Q^{(N)}(i-i')\, Q^{(N)}(j-i)\, Q^{(N)}(j'-j)\, Q^{(N)}(i'-j') \Big)
\]
\[
\le \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n |Q^{(N)}(i-i')|_{L_2}\, |Q^{(N)}(j-i)|_{L_2}\, |Q^{(N)}(j'-j)|_{L_2}\, |Q^{(N)}(i'-j')|_{L_2}.
\]
If we denote $\rho(i) := |Q^{(N)}(i)|_{L_2}$ for $i \in \mathbb{Z}$, we have $\rho(i) = \rho(-i)$ and
\[
\big| f^{(N)}_n \otimes_1 f^{(N)}_n \big|^2_{(\mathcal{H}^{(N)})^{\otimes 2}} \le \Big( \frac{1}{n\, s^{(N)}_n} \Big)^2 \sum_{i,i'=1}^n \sum_{j,j'=1}^n \rho(i-i')\,\rho(j-i)\,\rho(j'-j)\,\rho(i'-j').
\]
Now we apply the method used in the proof of Proposition 6.4 in [2], based on rewriting the last sums as convolutions and then applying the Young inequality. This leads to the inequality
\[
\big| f^{(N)}_n \otimes_1 f^{(N)}_n \big|^2_{(\mathcal{H}^{(N)})^{\otimes 2}} \le \frac{1}{n\, \big(s^{(N)}_n\big)^2} \Big( \sum_{|i|<n} \rho(i)^{4/3} \Big)^3.
\]
The bound for the third cumulant is obtained analogously, and passing to the limit $N \to \infty$ yields the bounds stated in the Lemma. □

Lemma 8.2. Consider constants $\rho > 0$, $H \in (\frac12, 1)$ and $\gamma \in [0, H)$. Then for sufficiently large $t > 0$:
\[
\int_0^t \int_{-\infty}^0 e^{-\rho(-r)}(-r)^{-\gamma}\, e^{-\rho(t-s)}(t-s)^{-\gamma}\, H(2H-1)(s-r)^{2H-2}\, dr\, ds \le C\, t^{2H-2}.
\]

Proof. Fix $\delta \in (1/\rho, t)$ and divide the range of integration into four disjoint areas:
\[
\int_0^t \int_{-\infty}^0 \ldots\, dr\, ds = \int_0^{1/\rho} \int_{-\infty}^0 \ldots\, dr\, ds + \int_{1/\rho}^{t-\delta} \int_{-\infty}^{-\delta} \ldots\, dr\, ds + \int_{1/\rho}^{t-\delta} \int_{-\delta}^0 \ldots\, dr\, ds + \int_{t-\delta}^t \int_{-\infty}^0 \ldots\, dr\, ds.
\]
We shall treat each integral separately. Here $C$ stands for a positive constant (independent of $t$), which may change from line to line. For the first integral,
\[
\int_0^{1/\rho} \int_{-\infty}^0 e^{-\rho(-r)}(-r)^{-\gamma}\, e^{-\rho(t-s)}(t-s)^{-\gamma}\, H(2H-1)(s-r)^{2H-2}\, dr\, ds
\le C\, e^{-\rho(t-1/\rho)} (t - 1/\rho)^{-\gamma} \int_0^{1/\rho} \int_{-\infty}^0 e^{-\rho(-r)}(-r)^{-\gamma}(s-r)^{2H-2}\, dr\, ds \le C e^{-\rho t}.
\]
The second integral (after a slight modification) can be treated as the corresponding term in the proof of Theorem 2.3 in [4], which justifies the last inequality in the following calculation:
\[
\int_{1/\rho}^{t-\delta} \int_{-\infty}^{-\delta} e^{-\rho(-r)}(-r)^{-\gamma}\, e^{-\rho(t-s)}(t-s)^{-\gamma}\, H(2H-1)(s-r)^{2H-2}\, dr\, ds
\le C \int_{1/\rho}^{t-\delta} \int_{-\infty}^{-\delta} e^{-\rho(-r)}\, e^{-\rho(t-s)}\, (s-r)^{2H-2}\, dr\, ds
\]
\[
\le C \int_{1/\rho}^{t} \int_{-\infty}^{0} e^{-\rho(-r)}\, e^{-\rho(t-s)}\, (s-r)^{2H-2}\, dr\, ds \le C\, t^{2H-2}.
\]
For the third integral, note that
\[
\int_{1/\rho}^{t-\delta} \int_{-\delta}^0 e^{-\rho(-r)}(-r)^{-\gamma}\, e^{-\rho(t-s)}(t-s)^{-\gamma}\, H(2H-1)(s-r)^{2H-2}\, dr\, ds
\le C \int_{1/\rho}^{t-\delta} e^{-\rho(t-s)}\, s^{2H-2} \Big( \int_{-\delta}^0 (-r)^{-\gamma}\, dr \Big) ds
\le C \int_{1/\rho}^{t-\delta} e^{-\rho(t-s)}\, s^{2H-2}\, ds.
\]
To see that this can be bounded above by $C t^{2H-2}$, we can observe that
\[
\frac{ \int_{1/\rho}^{t-\delta} e^{-\rho(t-s)}\, s^{2H-2}\, ds }{ t^{2H-2} }
\le \int_{1/\rho}^{t} e^{-\rho(t-s)} \Big( \frac{s}{t} \Big)^{2H-2} ds
= \int_0^{t-1/\rho} e^{-\rho y} \Big( 1 - \frac{y}{t} \Big)^{2H-2} dy
= \int_0^\infty e^{-\rho y} \Big( 1 - \frac{y}{t} \Big)^{2H-2} \mathbb{1}_{[0,\, t-1/\rho]}(y)\, dy
\xrightarrow{t \to \infty} \int_0^\infty e^{-\rho y}\, dy < \infty,
\]
where $\mathbb{1}$ denotes the indicator function and the last limit follows from the Dominated Convergence Theorem with the integrable majorant
\[
e^{-\rho y} \Big( 1 - \frac{y}{t} \Big)^{2H-2} \mathbb{1}_{[0,\, t-1/\rho]}(y) \le e^{-\rho y} \Big( 1 - \frac{y}{y + 1/\rho} \Big)^{2H-2} = (1/\rho)^{2H-2}\, e^{-\rho y}\, (y + 1/\rho)^{2-2H}.
\]
Now let us find the upper bound for the fourth integral:
\[
\int_{t-\delta}^t \int_{-\infty}^0 e^{-\rho(-r)}(-r)^{-\gamma}\, e^{-\rho(t-s)}(t-s)^{-\gamma}\, H(2H-1)(s-r)^{2H-2}\, dr\, ds
\le C\, (t-\delta)^{2H-2} \int_{t-\delta}^t \int_{-\infty}^0 e^{-\rho(-r)}(-r)^{-\gamma}\, (t-s)^{-\gamma}\, dr\, ds
\]
\[
= C\, (t-\delta)^{2H-2} \Big( \int_0^\delta u^{-\gamma}\, du \Big) \Big( \int_0^\infty e^{-\rho v}\, v^{-\gamma}\, dv \Big) \le C\, t^{2H-2},
\]
which completes the proof. □

References

[1] M. F. Balde, K. Es-Sebaiy, and C. A. Tudor, Ergodicity and drift parameter estimation for infinite-dimensional fractional Ornstein–Uhlenbeck process of the second kind, Applied Mathematics & Optimization (2018), https://doi.org/10.1007/s00245-018-9519-4.
[2] H. Biermé, A. Bonami, I. Nourdin, and G. Peccati, Optimal Berry–Esséen rates on the Wiener space: the barrier of third and fourth cumulants, ALEA (2012), no. 2, 473–500.
[3] A. Brouste and S. M. Iacus, Parameter estimation for the discretely observed fractional Ornstein–Uhlenbeck process and the Yuima R package, Computational Statistics (2013), no. 4, 1529–1547.
[4] P. Cheridito, H. Kawaguchi and M. Maejima, Fractional Ornstein–Uhlenbeck processes, Electron. J. Probab. (2003), no. 3, 1–14.
[5] I. Cialenco, S. V. Lototsky and J. Pospíšil, Asymptotic properties of the maximum likelihood estimator for stochastic parabolic equations with additive fractional Brownian motion, Stoch. Dyn. (2009), no. 02, 169–185.
[6] P.
Čoupek, Limiting measure and stationarity of solutions to stochastic evolution equations with Volterra noise.
[7] T. E. Duncan, B. Maslowski, and B. Pasik-Duncan, Fractional Brownian motion and stochastic equations in Hilbert spaces, Stoch. Dyn. (2002), no. 2, 225–250.
[8] T. E. Duncan, B. Maslowski, and B. Pasik-Duncan, Linear stochastic equations in a Hilbert space with a fractional Brownian motion, International Series in Operations Research & Management Science, vol. 94, pp. 201–222, Springer Verlag, 2006.
[9] Y. Hu and D. Nualart, Parameter estimation for fractional Ornstein–Uhlenbeck processes, Statistics & Probability Letters (2010), 1030–1038.
[10] Y. Hu, D. Nualart and H. Zhou, Parameter estimation for fractional Ornstein–Uhlenbeck processes of general Hurst parameter, Stat. Inference Stoch. Process. (2017), https://doi.org/10.1007/s11203-017-9168-2.
[11] K. Es-Sebaiy and F. Viens, Optimal rates for parameter estimation of stationary Gaussian processes, Stochastic Processes and their Applications (2018), https://doi.org/10.1016/j.spa.2018.08.010.
[12] M. Huebner and B. L. Rozovskii, On asymptotic properties of maximum likelihood estimators for parabolic stochastic PDE's, Probability Theory and Related Fields (1995), no. 2, 143–163.
[13] T. Koski and W. Loges, On identification for distributed parameter systems, In: Albeverio S., Blanchard P., Streit L. (eds), Stochastic Processes – Mathematics and Physics II, Lecture Notes in Mathematics, vol. 1250, pp. 152–159, Springer, Berlin, Heidelberg, 1987.
[14] B. Maslowski, Stability of semilinear equations with boundary and pointwise noise, Annali della Scuola Normale Superiore di Pisa (1995), no. 1, 55–93.
[15] B. Maslowski and J. Pospíšil, Ergodicity and parameter estimates for infinite-dimensional fractional Ornstein–Uhlenbeck process, J. Appl. Math. Optim. (2008), no. 3, 401–429.
[16] B. Maslowski and J. Pospíšil, Parameter estimates for linear partial differential equations with fractional boundary noise, Commun. Inf. Syst. (2007), 1–20.
[17] B. Maslowski and C. A.
Tudor, Drift parameter estimation for infinite-dimensional fractional Ornstein–Uhlenbeck process, Bulletin des Sciences Mathématiques (2013), no. 7, 880–901.
[18] I. Nourdin and G. Peccati, Stein's method on Wiener chaos, Probability Theory and Related Fields (2009), no. 1, 75–118.
[19] I. Nourdin and G. Peccati, Normal Approximations with Malliavin Calculus: From Stein's Method to Universality, Cambridge Tracts in Mathematics, Cambridge University Press, Cambridge, 2012. doi:10.1017/CBO9781139084659.
[20] I. Nourdin and G. Peccati, The optimal fourth moment theorem, Proceedings of the American Mathematical Society (2015), no. 7, 3123–3133.
[21] D. Nualart and G. Peccati, Central limit theorems for sequences of multiple stochastic integrals, Ann. Probab. (2005), no. 1, 177–193.
[22] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, 1983.
[23] V. Pipiras and M. S. Taqqu, Are classes of deterministic integrands for fractional Brownian motion on an interval complete?, Bernoulli (2001), no. 6, 873–897.
[24] T. Sottinen and L. Viitasaari, Parameter estimation for the Langevin equation with stationary-increment Gaussian noise, Stat. Inference Stoch. Process. 21 (2018), 569. https://doi.org/10.1007/s11203-017-9156-6.
[25] H. Triebel, Theory of Function Spaces, Birkhäuser, 1983.
[26] W. Xiao and J. Yu, Asymptotic theory for estimating drift parameters in the fractional Vasicek model, Econometric Theory (2018), 1–34. doi:10.1017/S0266466618000051.
[27] W. Xiao and J. Yu, Asymptotic Theory for Rough Fractional Vasicek Models, Economics and Statistics Working Papers 7-2018, Singapore Management University, School of Economics, 2018.

University of Chemistry and Technology, Prague, Department of Mathematics, Technická 5, Prague 6, Czech Republic
E-mail address: [email protected]

Charles University in Prague, Faculty of Mathematics and Physics, Sokolovská 83, Prague 8, Czech Republic
E-mail address: [email protected]