Second Moment Estimator for An AR(1) Model Driven by A Long Memory Gaussian Noise
LI TIAN
Abstract.
In this paper, we consider an inference problem for the first order autoregressive process driven by a long memory Gaussian process. The assumptions and results are stated in terms of the covariance function of the noise. Examples include fractional Gaussian noise and the fractional ARIMA model. For the second moment estimator, we prove strong consistency and asymptotic normality, and obtain a Berry-Esseen bound under appropriate conditions. The proofs are based on the decay rate of the covariance of the stationary solution associated with the Gaussian noise.
Keywords: moment estimator; long memory; strong consistency; asymptotic normality; Breuer-Major theorem; Berry-Esseen bound.

1. Introduction
We consider the first order autoregressive model $(X_t, t \in \mathbb{N}^+)$ defined by

X_t = \theta X_{t-1} + \xi_t, \quad t \in \mathbb{N}^+, \quad X_0 = 0,    (1.1)

where $\xi = (\xi_t, t \in \mathbb{Z})$ is a centered long memory Gaussian sequence. We will analyze consistency of the second moment estimator of the unknown parameter $\theta$ and study its asymptotic behavior. When $\xi_t$ is an independent identically distributed (i.i.d.) sequence or a martingale difference sequence, statistical inference for the parameter $\theta$ has been intensively studied over the past decades (see [1, 2] and the references therein). In the case of heavy-tailed noise, the least squares estimator (LSE) of AR(p) models was studied in [3]. Models with long memory noise have received considerable attention from researchers in various disciplines. The monograph by Beran [4] provides an updated survey of developments on long memory processes (see also [5, 6, 7]). Classes of M- and R-estimators of linear regression models were discussed in [8]. The LSE of the regression model was considered in [9, 10]. Recently, the maximum likelihood estimator (MLE) of AR(p) was investigated in [11].

In this paper, we assume $0 < \theta < 1$ and that the noise, which satisfies the following Hypothesis 1.1, is a long-range dependent Gaussian process.

Hypothesis 1.1.
The autocovariance function $\rho(k) = E[\xi_0 \xi_k]$, $k \in \mathbb{Z}$, satisfies

\rho(k) = L(k) |k|^{2H-2}, \quad H \in (1/2, 1),    (1.2)

with $L : (0, \infty) \to (0, \infty)$ slowly varying at infinity in Zygmund's sense. Moreover, $\rho(0) = 1$.

Remark 1.2.
By Lemma 2.3 below, equation (1.2) is equivalent to the spectral density having a pole at the origin of the form

h_\xi(\lambda) \sim C_H L(\lambda^{-1}) |\lambda|^{1-2H}, \quad \text{as } \lambda \to 0,

with $C_H = \pi^{-1} \Gamma(2H-1) \sin(\pi - \pi H)$. In this case $\xi_t$ is said to have long memory.

We will see that fractional Gaussian noise, the fractional ARIMA model driven by an independent standard Gaussian noise, and some other long memory Gaussian processes are special examples satisfying Hypothesis 1.1. If the unknown parameter satisfies $|\theta| < 1$, then the stationary solution of the model (1.1) is $Y_t = \sum_{j=0}^{\infty} \theta^j \xi_{t-j}$. In addition, the solution of (1.1) is $X_t = Y_t + \theta^t \zeta$, where $\zeta$ is a normal random variable. Consider the second moment of the stationary solution $Y_t$:

f(\theta) = E[Y_t^2] = \sum_{i,j=0}^{\infty} \theta^{i+j} \rho(i-j).

Assume $0 < \theta < 1$; then $f(\theta)$ is positive and strictly increasing when $1/2 < H < 1$. The autocovariance function of $Y_t$ is

R(k) = \mathrm{Cov}(Y_t, Y_{t+k}) = \sum_{i,j=0}^{\infty} \theta^{i+j} \rho(k-i+j).

The second moment estimator is given by

\tilde{\theta}_n = f^{-1}\Big( \frac{1}{n} \sum_{t=1}^{n} X_t^2 \Big).    (1.3)

In this paper, we prove the strong consistency and the asymptotic normality of the second moment estimator. A Berry-Esseen bound will also be obtained. These results are stated in the following theorems.
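Before stating the theorems, here is a quick numerical illustration (not part of the paper): the sketch below simulates model (1.1) driven by fractional Gaussian noise (Example 1.6 below) and evaluates the estimator (1.3) by inverting a truncated version of $f$ with bisection. All function names, the truncation level, and the sample size are ad hoc choices for this sketch.

```python
import numpy as np

def fgn_cov(k, H):
    """Fractional Gaussian noise autocovariance rho(k) (Example 1.6)."""
    k = np.abs(k)
    return 0.5 * ((k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

def sample_fgn(n, H, rng):
    """Draw one fGn path via Cholesky factorization of its covariance matrix."""
    idx = np.arange(n)
    cov = fgn_cov(idx[:, None] - idx[None, :], H)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def f_theta(theta, H, m=400):
    """Truncated second moment f(theta) = sum_{i,j<m} theta^{i+j} rho(i-j)."""
    i = np.arange(m)
    return float(np.sum(theta ** (i[:, None] + i[None, :])
                        * fgn_cov(i[:, None] - i[None, :], H)))

def theta_tilde(x, H, lo=0.0, hi=0.99):
    """Second moment estimator (1.3): invert f by bisection (f is increasing)."""
    target = np.mean(x ** 2)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f_theta(mid, H) < target else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
theta, H, n = 0.5, 0.6, 2000
xi = sample_fgn(n + 1, H, rng)
x = np.zeros(n + 1)                      # X_0 = 0
for t in range(1, n + 1):
    x[t] = theta * x[t - 1] + xi[t]      # model (1.1)
print(theta_tilde(x[1:], H))             # typically lands near the true theta = 0.5
```

The Cholesky sampler is exact but O(n^3); for long paths one would swap in a circulant-embedding (Davies-Harte) generator instead.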
Theorem 1.3.
Let the parameter set be $\Theta = \{\theta \in \mathbb{R} \mid 0 < \theta < 1\}$. When Hypothesis 1.1 is satisfied, the second moment estimator $\tilde{\theta}_n$ is strongly consistent, i.e., $\lim_{n\to\infty} \tilde{\theta}_n = \theta$ a.s.

Theorem 1.4.
Suppose that Hypothesis 1.1 holds. As $n \to \infty$, for any $\theta \in \Theta$ the following asymptotic relations for $\tilde{\theta}_n$ hold:

• If $H \in (1/2, 3/4)$, then

\sqrt{n}\,(\tilde{\theta}_n - \theta) \xrightarrow{law} N\Big(0, \frac{\sigma_H^2}{[f'(\theta)]^2}\Big),    (1.4)

where $\sigma_H^2 = 2 \sum_{k\in\mathbb{Z}} R^2(k)$.

• If $H = 3/4$, then

\frac{\sqrt{n}\,(\tilde{\theta}_n - \theta)}{\sqrt{\log n}} \xrightarrow{law} N\Big(0, \frac{\sigma_H^2}{[f'(\theta)]^2}\Big).    (1.5)

• If $H \in (3/4, 1)$, then

n^{2-2H} (L(n))^{-1} (\tilde{\theta}_n - \theta) \xrightarrow{law} \frac{1}{f'(\theta)} \times R_H,    (1.6)

where $R_H$ is a so-called 'Rosenblatt distribution' and $L(\cdot)$ is given in Hypothesis 1.1.

Theorem 1.5.
Let $Z$ be a standard Gaussian random variable. Suppose that Hypothesis 1.1 is satisfied and that $L(\lambda^{-1})$ (given in Hypothesis 1.1) converges to a constant $c_h$ as $\lambda \to 0$. Then, when $1/2 < H < 3/4$, there exists a constant $C_{\theta,H} > 0$ such that, for $n$ large enough and any $\theta \in \Theta$,

\sup_{z\in\mathbb{R}} \Big| P\Big\{ \frac{f'(\theta)\sqrt{n}\,(\tilde{\theta}_n - \theta)}{\sigma_H} \le z \Big\} - P\{Z \le z\} \Big| \le \frac{C_{\theta,H}}{n^{\gamma}},    (1.7)

where $\gamma = 1/2 - \varepsilon$ (for any fixed $\varepsilon > 0$) if $H \in (1/2, 5/8]$, and $\gamma = 3 - 4H$ if $H \in (5/8, 3/4)$, the constant $C_{\theta,H}$ then depending also on $\varepsilon$. Similarly, when $H = 3/4$, we have

\sup_{z\in\mathbb{R}} \Big| P\Big\{ \frac{f'(\theta)\sqrt{n}\,(\tilde{\theta}_n - \theta)}{\sigma_H \sqrt{\log n}} \le z \Big\} - P\{Z \le z\} \Big| \le C_{\theta,H} (\log n)^{-1}.    (1.8)

As far as we know, the Berry-Esseen bound for the second moment estimator of an AR(1) model with long memory noise is not covered in the literature. Next, we give some well known processes that satisfy Hypothesis 1.1.

Example 1.6.
Clearly the fractional Gaussian noise with covariance function

\rho(k) = \frac{1}{2} \big( |k+1|^{2H} + |k-1|^{2H} - 2|k|^{2H} \big), \quad k \in \mathbb{Z},

satisfies Hypothesis 1.1.

Example 1.7.
The fractional ARIMA(0, $d$, 0) model generated by an independent standard Gaussian noise with parameter $d \in (-1/2, 1/2)$ has spectral density

h_\xi(\lambda) \sim \frac{1}{2\pi} |\lambda|^{-2d}, \quad \lambda \to 0,

and satisfies Hypothesis 1.1 when $d := H - 1/2 > 0$.

Example 1.8.
Let $\xi_t$ be generated by a strictly stationary moving average sequence

\xi_t = \sum_{k \le t} b_{t-k} \varepsilon_k, \quad t \in \mathbb{Z}.

Here $(\varepsilon_t, t \in \mathbb{Z})$ is an independent standard Gaussian noise, and $(b_t, t \in \mathbb{N}^+)$ are (nonrandom) weights such that $\sum_t b_t^2 < \infty$. We assume that the weights decay slowly, hyperbolically:

b_k = L_0(k) |k|^{H - 3/2},

where $H \in (1/2, 1)$ and $L_0(\cdot)$ is a slowly varying function. This condition implies Hypothesis 1.1, with $L(k) \propto L_0^2(k)$ (see [8]). If $L_0(k)$ converges to a constant $c_0$ as $k \to \infty$, then Theorem 1.5 holds.

Throughout the paper, $C$ and $c$ denote generic positive constants independent of $n$ whose value may differ from line to line.

2. Preliminary
Consider a second-order stationary long memory process $\xi_t$ $(t \in \mathbb{Z})$ with autocovariance function $\rho(k)$ $(k \in \mathbb{Z})$ and spectral density $h_\xi(\lambda) = (2\pi)^{-1} \sum_{k=-\infty}^{\infty} \rho(k) \exp(-ik\lambda)$. A heuristic definition of linear long-range dependence is the following: $\xi_t$ has long memory if $h_\xi(\lambda)$ diverges to infinity as $\lambda \to 0$. Since $2\pi h_\xi(0) = \sum_k \rho(k)$, this is essentially (in a sense specified more precisely below) equivalent to $\sum_k \rho(k) = \infty$.

First, we define the so-called slowly varying functions. A standard definition is due to Zygmund [12].

Definition 2.1.
A function $L : (c, \infty) \to \mathbb{R}$ $(c > 0)$ is called slowly varying at infinity in Zygmund's sense if, for $x$ large enough, it is positive and, for any $\delta > 0$, there exists a finite number $x_0(\delta) > 0$ such that for $x > x_0(\delta)$ both functions $p_1(x) = x^{\delta} L(x)$ and $p_2(x) = x^{-\delta} L(x)$ are monotone.

The standard formal definition of a linear long-range dependence structure is given as follows (see [4]).
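For intuition (an illustrative aside, not from the paper): the logarithm is a textbook slowly varying function, and Zygmund's condition can be checked numerically on a grid of large $x$ values. Note that $x^{-\delta}\log x$ is only eventually monotone — for $\delta = 0.1$ it decreases once $x > e^{10}$, which is why the grid below starts at $10^5$.

```python
import numpy as np

delta = 0.1
x = np.logspace(5, 12, 2000)         # grid of large x values (x > e^10 ~ 2.2e4)
L = np.log(x)                        # candidate slowly varying function

p1 = x ** delta * L                  # x^delta L(x): increasing on the grid
p2 = x ** (-delta) * L               # x^{-delta} L(x): decreasing on the grid
print(np.all(np.diff(p1) > 0), np.all(np.diff(p2) < 0))   # prints: True True

# Slow variation itself: L(t x) / L(x) -> 1 for fixed t as x -> infinity
print(np.log(2 * 1e8) / np.log(1e8))                      # close to 1
```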
Definition 2.2.
Let $\xi_t$ be a second-order stationary process with autocovariance function $\rho(k)$ $(k \in \mathbb{Z})$ and spectral density

h_\xi(\lambda) = (2\pi)^{-1} \sum_{k=-\infty}^{\infty} \rho(k) \exp(-ik\lambda), \quad \lambda \in [-\pi, \pi].

Then $\xi_t$ is said to exhibit (linear) long-range dependence if

h_\xi(\lambda) = L_h(\lambda) |\lambda|^{1-2H},

where $L_h(\lambda) > 0$ is a symmetric function that is slowly varying at zero and $H \in (1/2, 1)$.

The following Lemma 2.3 expresses the equivalence between the behaviour of the spectral density at the origin and the asymptotic decay of the autocovariance function (see Theorem 1.3 in [4]).
Lemma 2.3.
Let $R(k)$ $(k \in \mathbb{Z})$ and $h(\lambda)$ $(\lambda \in [-\pi, \pi])$ be the autocovariance function and spectral density, respectively, of a second-order stationary process $Y_t$. Then the following holds:

(1) If

R(k) = L_R(k) |k|^{2H-2}, \quad k \in \mathbb{Z},

where $L_R(k)$ is slowly varying at infinity in Zygmund's sense and $H \in (1/2, 1)$, then

h(\lambda) \sim L_h(\lambda) |\lambda|^{1-2H}, \quad \lambda \to 0,

with

L_h(\lambda) = L_R(\lambda^{-1}) \pi^{-1} \Gamma(2H-1) \sin(\pi - \pi H).

(2) If

h(\lambda) = L_h(\lambda) |\lambda|^{1-2H}, \quad 0 < \lambda \le \pi,

where $H \in (1/2, 1)$, and $L_h(\lambda)$ is slowly varying at the origin in Zygmund's sense and of bounded variation on $(a, \pi)$ for any $a > 0$, then

R(k) \sim L_R(k) |k|^{2H-2}, \quad k \to \infty,

where

L_R(k) = 2 L_h(k^{-1}) \Gamma(2-2H) \sin\Big( \pi H - \frac{\pi}{2} \Big).

The following Theorem 2.4, a well known property of stationary Gaussian processes, provides a sufficient condition for ergodicity (see [13]).
Theorem 2.4.
The stationary Gaussian process $Y_t$ is mixing if and only if its autocovariance function satisfies $\lim_{k\to\infty} \mathrm{Cov}(Y_t, Y_{t+k}) = 0$.

Define

V_n := \frac{1}{\sqrt{n}} \sum_{t=1}^{n} [Y_t^2 - f(\theta)], \quad n \ge 1.

We also set $\sigma_H^2 = \lim_{n\to\infty} E V_n^2$ and assume $\sigma_H > 0$.

The following Theorem 2.5, known as the Breuer-Major theorem, provides a sufficient condition for the convergence of a sequence of stationary random variables to a normal distribution (see [14]).

Theorem 2.5.
Let $Y = (Y_t, t \in \mathbb{Z})$ be a centered stationary Gaussian sequence. For all $k \in \mathbb{Z}$, set $R(k) = E[Y_0 Y_k]$. If $\sum_{k\in\mathbb{Z}} R^2(k) < \infty$, then

V_n = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} [Y_t^2 - f(\theta)] \xrightarrow{law} N(0, \sigma_H^2), \quad \text{as } n \to \infty,

where $\sigma_H^2 = 2 \sum_{k\in\mathbb{Z}} R^2(k)$.

The following Theorem 2.6 is similar to Theorem 7.3.1 and Corollary 7.4.3 of [15]; it gives an estimate of the total variation distance between the normalized Gaussian functional $V_n/\sigma_H$ and a standard normal random variable.
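To make the variance formula in the Breuer-Major theorem concrete, one can compute $E V_n^2$ exactly for a centered stationary Gaussian sequence, using $\mathrm{Cov}(Y_s^2, Y_t^2) = 2 R^2(s-t)$, which gives $E V_n^2 = 2 \sum_{|k|<n} (1 - |k|/n) R^2(k)$, and watch it approach $\sigma_H^2 = 2\sum_k R^2(k)$. The sketch below (illustrative, not from the paper) uses a toy short-memory covariance $R(k) = \theta^{|k|}$ rather than the paper's $R$, so that $\sigma_H^2$ has the closed form $2(1+\theta^2)/(1-\theta^2)$:

```python
import numpy as np

theta = 0.5
R = lambda k: theta ** np.abs(k)                   # toy short-memory covariance
sigma2 = 2 * (1 + theta ** 2) / (1 - theta ** 2)   # closed form of 2 * sum_k R(k)^2

def var_Vn(n):
    """Exact E V_n^2 for the quadratic functional: 2 * sum_{|k|<n} (1-|k|/n) R(k)^2."""
    k = np.arange(-(n - 1), n)
    return 2 * np.sum((1 - np.abs(k) / n) * R(k) ** 2)

print(var_Vn(100), var_Vn(2000), sigma2)   # E V_n^2 increases towards sigma_H^2
```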
Theorem 2.6.
Let $N \sim N(0, 1)$. Set $\sigma_H^2 = 2 \sum_{k\in\mathbb{Z}} R^2(k)$ and let $L(\lambda^{-1})$ (given in Hypothesis 1.1) converge to a constant $c_h$ as $\lambda \to 0$. Then, for all $n \ge 1$,

d_{TV}(V_n/\sigma_H, N) \le \frac{C}{\sigma_H^2 \sqrt{n}} \Big( \sum_{k=-n+1}^{n-1} |R(k)|^{4/3} \Big)^{3/2}.

Furthermore, assume $H \le 3/4$. Then there exists a constant $c_H > 0$ (depending only on $H$) such that, for all $n \ge 1$,

d_{TV}(V_n/\sigma_H, N) \le c_H \times
  n^{-1/2},                if H \in (1/2, 5/8);
  n^{-1/2} (\log n)^{3/2}, if H = 5/8;
  n^{4H-3},                if H \in (5/8, 3/4);
  (\log n)^{-1},           if H = 3/4.

3. Strong Consistency
Lemma 3.1.
Let $\alpha > 0$, and let $(Y_t)_{t\in\mathbb{N}}$ and $\zeta$ be given by the solution of model (1.1). Then for all $\varepsilon > 0$ there exists a random constant $C_\varepsilon$, almost surely finite, such that

\Big| n^{-\alpha} \zeta \sum_{t=1}^{n} \theta^t Y_t \Big| \le C_\varepsilon n^{-\alpha + \varepsilon} \quad a.s.    (3.1)

for all $n \in \mathbb{N}$.

Proof. For every $p \ge 1$, the Hölder inequality implies that

\Big\| \zeta \sum_{t=1}^{n} \theta^t Y_t \Big\|_p \le \|\zeta\|_{2p} \Big\| \sum_{t=1}^{n} \theta^t Y_t \Big\|_{2p} \le c \Big\| \sum_{t=1}^{n} \theta^t Y_t \Big\|_{2p}.

Since the $p$-th and 2nd moments of a normal random variable agree up to a multiplicative constant, we have

\Big\| \sum_{t=1}^{n} \theta^t Y_t \Big\|_{2p} \le c \Big\| \sum_{t=1}^{n} \theta^t Y_t \Big\|_2 = c \sqrt{ E\Big( \sum_{t=1}^{n} \theta^t Y_t \Big)^2 } = c \sqrt{ \sum_{i,j=1}^{n} \theta^{i+j} E[Y_i Y_j] } \le C_\theta.

Then

\Big\| n^{-\alpha} \zeta \sum_{t=1}^{n} \theta^t Y_t \Big\|_p \le C_p n^{-\alpha}.

By Lemma 2.1 of [16], the desired (3.1) is obtained. □
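The moment-equivalence step in this proof (all $L^p$ norms of a centered Gaussian are proportional to its $L^2$ norm, with a constant depending only on $p$) can be checked numerically. The helper name below and the Monte Carlo check are illustrative, not from the paper; the closed form $E|G|^p = 2^{p/2}\,\Gamma((p+1)/2)\,\pi^{-1/2}\,\sigma^p$ is a standard Gaussian moment formula.

```python
import math
import numpy as np

def abs_moment(p, sigma=1.0):
    """E|G|^p for G ~ N(0, sigma^2): equals c_p * sigma^p, c_p depending only on p."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi) * sigma ** p

# The ratio ||G||_p / ||G||_2 does not depend on sigma:
print(abs_moment(4, 1.0) ** 0.25, abs_moment(4, 5.0) ** 0.25 / 5.0)  # both 3^(1/4)

# Monte Carlo sanity check for sigma = 2, p = 4 (E|G|^4 = 3 sigma^4 = 48)
rng = np.random.default_rng(1)
g = 2.0 * rng.standard_normal(10 ** 6)
print(np.mean(np.abs(g) ** 4))   # close to 48
```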
Proof of Theorem 1.3.
Since the bounded function $\rho(k) \to 0$ as $k \to \infty$ and $\sum_{i,j} \theta^{i+j} < \infty$, the dominated convergence theorem implies $\lim_{k\to\infty} R(k) = 0$. From Theorem 2.4 we obtain that the stationary Gaussian process $Y_t$ is mixing. Since mixing is a stronger property than ergodicity, $Y_t$ is ergodic. Furthermore, since $E[Y_t^2] = f(\theta)$, the ergodic theorem gives

\lim_{n\to\infty} \frac{1}{n} \sum_{t=1}^{n} Y_t^2 = f(\theta) \quad a.s.
From Lemma 3.1 we obtain $\lim_{n\to\infty} \frac{1}{n} \zeta \sum_{t=1}^{n} \theta^t Y_t = 0$ almost surely, and clearly $\frac{1}{n} \zeta^2 \sum_{t=1}^{n} \theta^{2t} \to 0$ a.s. Since $X_t^2 = Y_t^2 + 2\theta^t \zeta Y_t + \theta^{2t} \zeta^2$, we conclude

\lim_{n\to\infty} \frac{1}{n} \sum_{t=1}^{n} X_t^2 = f(\theta) \quad a.s.

Finally, the continuous mapping theorem (applied to $f^{-1}$) implies that the second moment estimator $\tilde{\theta}_n$ is strongly consistent. ✷

4. The Asymptotic Normality
Lemma 4.1.
Let $L(\cdot)$ be given in Hypothesis 1.1. When $H \in (1/2, 1)$, the stationary solution $Y_t$ of the model (1.1) has long memory. Namely,

h_Y(\lambda) \sim C_{\theta,H} L(\lambda^{-1}) |\lambda|^{1-2H}, \quad \text{as } \lambda \to 0,    (4.1)

where the constant $C_{\theta,H} > 0$ depends on $\theta$ and $H$.

Proof. Consider the stationary process $\xi_t = Y_t - \theta Y_{t-1}$, $t \in \mathbb{Z}$. The spectral density of $\xi_t$ is

h_\xi(\lambda) = |1 - \theta e^{-i\lambda}|^2 h_Y(\lambda),

which implies that the spectral density of $Y_t$ satisfies

h_Y(\lambda) = |1 - \theta e^{-i\lambda}|^{-2} h_\xi(\lambda) \sim (1-\theta)^{-2} h_\xi(\lambda) \sim C_{\theta,H} L(\lambda^{-1}) |\lambda|^{1-2H}, \quad \text{as } \lambda \to 0. □
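The only analytic fact used in this proof is that the AR(1) transfer function $|1 - \theta e^{-i\lambda}|^{-2}$ tends to the finite positive constant $(1-\theta)^{-2}$ as $\lambda \to 0$, so the pole of $h_\xi$ passes through to $h_Y$ unchanged. A two-line numerical check (illustrative only):

```python
import numpy as np

theta = 0.5
transfer = lambda lam: 1.0 / np.abs(1 - theta * np.exp(-1j * lam)) ** 2

print(transfer(1e-4), (1 - theta) ** -2)   # both approximately 4.0
print(transfer(1.0) < transfer(0.1))       # the transfer function peaks at lambda = 0
```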
Corollary 4.2.
When $H \in (1/2, 3/4)$,

\sum_{k\in\mathbb{Z}} R^2(k) < \infty.    (4.2)

When $H \in [3/4, 1)$,

\sum_{k\in\mathbb{Z}} R^2(k) = \infty.    (4.3)

Proof.
By Lemma 2.3, the relation (4.1) is equivalent to

R(k) \sim C'_{\theta,H} L(k) |k|^{2H-2}, \quad \text{as } k \to \infty,

if $1/2 < H < 1$. Furthermore, when $H < 3/4$ the slowly varying factor $L(k)$ is dominated by the power function $|k|^{2(H_0 - H)}$ for any $H < H_0 < 3/4$, so that $R^2(k)$ is summable; when $H \ge 3/4$ the terms $R^2(k)$ decay too slowly to be summable. Thus the desired (4.2) and (4.3) are obtained. □

Proof of Theorem 1.4. If $1/2 < H < 3/4$, Theorem 2.5 together with the relation (4.2) implies that, as $n \to \infty$,

V_n \xrightarrow{law} N(0, \sigma_H^2).
From Lemma 3.1, we have $\frac{1}{\sqrt{n}} \zeta \sum_{t=1}^{n} \theta^t Y_t \xrightarrow{a.s.} 0$ as $n \to \infty$. Thus, Slutsky's theorem implies that, as $n \to \infty$,

\frac{1}{\sqrt{n}} \sum_{t=1}^{n} \big( X_t^2 - f(\theta) \big) \xrightarrow{law} N(0, \sigma_H^2).

Consequently, the delta method implies the desired (1.4). If $H = 3/4$, we get, as $n \to \infty$,

\frac{V_n}{\sqrt{\log n}} \xrightarrow{law} N(0, \sigma_H^2).

Similarly to Theorem 5.6 of [17], if $3/4 < H < 1$, we obtain, as $n \to \infty$,

n^{3/2 - 2H} (L(n))^{-1} V_n \xrightarrow{law} R_H,

where $R_H$ is the so-called 'Rosenblatt distribution'. Arguing as in the case $1/2 < H < 3/4$, we obtain the desired (1.5) and (1.6). This completes the proof. ✷

5. The Berry-Esseen Bound
Proof of Theorem 1.5. If $1/2 < H < 3/4$, the Berry-Esseen bound (1.7) can be obtained by arguments similar to those of Theorem 3.2 in [18]. Denote

A := P\Big\{ \frac{f'(\theta)\sqrt{n}\,(\tilde{\theta}_n - \theta)}{\sigma_H} \le z \Big\} - P\{Z \le z\}.

Since $\tilde{\theta}_n > 0$, we may suppose $z > -\frac{f'(\theta)\sqrt{n}}{\sigma_H}\theta$. Otherwise, the standard estimate for a normal random variable, $P(|Z| \ge t) \le C/t$ for all $t > 0$, yields

|A| = P\{Z \le z\} \le \frac{C}{\sqrt{n}}.

Since $f(\theta)$ is strictly increasing and continuous, we have

A = P\Big\{ \tilde{\theta}_n \le \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \Big\} - P\{Z \le z\}
  = P\Big\{ \frac{1}{n} \sum_{t=1}^{n} X_t^2 \le f\Big( \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \Big) \Big\} - P\{Z \le z\}
  = P\Big\{ \frac{1}{n} \sum_{t=1}^{n} X_t^2 - f(\theta) \le f\Big( \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \Big) - f(\theta) \Big\} - P\{Z \le z\}.

Let us then introduce the short-hand notation

u = \frac{\sqrt{n}}{\sigma_H} \Big[ f\Big( \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \Big) - f(\theta) \Big]
\quad \text{and} \quad
w = \frac{2\zeta}{\sigma_H \sqrt{n}} \sum_{t=1}^{n} \theta^t Y_t + \frac{\zeta^2}{\sigma_H \sqrt{n}} \sum_{t=1}^{n} \theta^{2t}.
By using the calculation and the short-hand notation above, we split

|A| = \Big| P\Big\{ \frac{V_n}{\sigma_H} + w \le u \Big\} - P\{Z \le z\} \Big|
    \le \Big| P\Big\{ \frac{V_n}{\sigma_H} \le u - w \Big\} - \Phi(u - w) \Big| + |\Phi(u - w) - \Phi(u)| + |\Phi(u) - P\{Z \le z\}|,

where the first term is bounded by $C n^{-\gamma}$ by Theorem 2.6, the third term is bounded by $C n^{-1/2}$ by Lemma 5.1 below, and the second term is bounded by $C n^{-1/2 + \varepsilon}$ by Lemma 3.1 and the standard estimate for a normal random variable, $|\Phi(z_1) - \Phi(z_2)| \le |z_1 - z_2|$.

Similarly, the Berry-Esseen bound (1.8) holds when $H = 3/4$; the details are omitted. This completes the proof. ✷

Lemma 5.1.
Denote $u(z) = \frac{\sqrt{n}}{\sigma_H} \big[ f\big( \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \big) - f(\theta) \big]$ for $z > -\frac{f'(\theta)\sqrt{n}}{\sigma_H}\theta$. Then there exists some positive number $C$ independent of $n$ such that

\sup_{z > -\frac{f'(\theta)\sqrt{n}}{\sigma_H}\theta} |\Phi(u) - \Phi(z)| \le \frac{C}{\sqrt{n}}.

Proof.
We follow the lines of the proof of Theorem 3.2 in [18]. By the mean value theorem, there exists some number $\eta \in [\theta, \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z]$ such that

u = \frac{\sqrt{n}}{\sigma_H} \Big[ f\Big( \theta + \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \Big) - f(\theta) \Big] = \frac{\sqrt{n}}{\sigma_H} f'(\eta) \Big[ \frac{\sigma_H}{f'(\theta)\sqrt{n}} z \Big] = \frac{f'(\eta)}{f'(\theta)} z.

Hence,

|\Phi(u) - \Phi(z)| = \Big| \Phi\Big( \frac{f'(\eta)}{f'(\theta)} z \Big) - \Phi(z) \Big| = \frac{1}{\sqrt{2\pi}} \Big| \int_{z}^{\frac{f'(\eta)}{f'(\theta)} z} e^{-t^2/2} \, dt \Big|.

When $-\frac{f'(\theta)\sqrt{n}}{\sigma_H}\theta < z \le -\frac{f'(\theta)\sqrt{n}}{2\sigma_H}\theta$, since the function $f_1(x, z) = z^2 e^{-x^2 z^2/2} |x - 1|$ is uniformly bounded, we have

\frac{1}{\sqrt{2\pi}} \Big| \int_{z}^{\frac{f'(\eta)}{f'(\theta)} z} e^{-t^2/2} \, dt \Big| \le \frac{|f'(\eta) - f'(\theta)|}{f'(\theta)} |z| \, e^{-\frac{[f'(\eta)]^2}{2[f'(\theta)]^2} z^2} \le \frac{C}{|z|} \le \frac{C}{\sqrt{n}}.

When $z > -\frac{f'(\theta)\sqrt{n}}{2\sigma_H}\theta$, using the mean value theorem and the change of variable $t = zs$, together with the fact that $f_2(s, z) = z^2 e^{-s^2 z^2/2}$ is also uniformly bounded, we conclude that there exists a number $\eta' \in [\theta, \eta]$ such that

\Big| \int_{z}^{\frac{f'(\eta)}{f'(\theta)} z} e^{-t^2/2} \, dt \Big| = \Big| \int_{1}^{\frac{f'(\eta)}{f'(\theta)}} z \, e^{-s^2 z^2/2} \, ds \Big| \le \frac{C}{|z|} \frac{|f'(\eta) - f'(\theta)|}{f'(\theta)} = \frac{C}{|z|} \frac{f''(\eta') \sigma_H |z|}{[f'(\theta)]^2 \sqrt{n}} \le \frac{C}{\sqrt{n}}. □

Acknowledgements: I would like to thank my supervisor, Prof. Y. Chen, for helpful discussions.
References

[1] Anderson, T. W. and Taylor, J. B. (1979). Strong consistency of least squares estimates in dynamic models. Ann. Statist. 7(3): 484-489.
[2] Lai, T. L. and Wei, C. Z. (1983). Asymptotic properties of general autoregressive models and strong consistency of least squares estimates of their parameters. J. Multivar. Anal. 13(1): 1-23.
[3] Zhang, R. M. and Ling, S. Q. (2015). Asymptotic inference for AR models with heavy-tailed G-GARCH noises. Econometric Theory, 31(4): 880-890.
[4] Beran, J., Feng, Y., Ghosh, S. and Kulik, R. (2013). Long-Memory Processes: Probabilistic Properties and Statistical Methods. Berlin: Springer-Verlag.
[5] Samorodnitsky, G. (2006). Long range dependence. Foundations and Trends in Stochastic Systems, 1(3): 163-257.
[6] Palma, W. (2007). Long-Memory Time Series: Theory and Methods (Vol. 662). New York: John Wiley & Sons.
[7] Giraitis, L., Koul, H. L. and Surgailis, D. (2012). Large Sample Inference for Long Memory Processes. London: Imperial College Press.
[8] Giraitis, L., Koul, H. L. and Surgailis, D. (1996). Asymptotic normality of regression estimators with long memory errors. Statistics & Probability Letters, 29(4): 317-335.
[9] Yajima, Y. (1988). On estimation of a regression model with long-memory stationary errors. Ann. Statist. 16(2): 791-807.
[10] Yajima, Y. (1991). Asymptotic properties of the LSE in a regression model with long-memory stationary errors. Ann. Statist. 19(1): 158-177.
[11] Brouste, A., Cai, C. and Kleptsyna, M. (2014). Asymptotic properties of the MLE for the autoregressive process coefficients under stationary Gaussian noise. Mathematical Methods of Statistics, 23(2): 103-115.
[12] Zygmund, A. (1968). Trigonometric Series (Vol. 1). Cambridge: Cambridge University Press.
[13] Maruyama, G. (1949). The harmonic analysis of stationary stochastic processes. Mem. Fac. Sci. Kyushu Univ., 4(1): 45-106.
[14] Breuer, P. and Major, P. (1983). Central limit theorems for non-linear functionals of Gaussian fields. J. Multivar. Anal. 13(3): 425-441.
[15] Nourdin, I. and Peccati, G. (2012). Normal Approximations with Malliavin Calculus: From Stein's Method to Universality (Vol. 192). Cambridge: Cambridge University Press.
[16] Kloeden, P. and Neuenkirch, A. (2007). The pathwise convergence of approximation schemes for stochastic differential equations. LMS J. Comput. Math. 10: 235-253.
[17] Taqqu, M. S. (1979). Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 50(1): 53-83.
[18] Sottinen, T. and Viitasaari, L. (2018). Parameter estimation for the Langevin equation with stationary-increment Gaussian noise. Stat. Inference Stoch. Process., 21(3): 569-601.
School of Mathematics and Statistics, Jiangxi Normal University, Nanchang, 330022, Jiangxi, China
E-mail address: