Least Squares Estimator for Vasicek Model Driven by Sub-fractional Brownian Processes from Discrete Observations
Cuiyun Zhang, Jingjun Guo∗, Aiqin Ma, Bo Peng
School of Statistics, Lanzhou University of Finance and Economics, Lanzhou, Gansu 730020, PR China.†
July 6, 2020
Abstract:
We study the parameter estimation problem for the Vasicek model driven by a sub-fractional Brownian motion observed at discrete times. Let $\{S^H_t, t \ge 0\}$ denote a sub-fractional Brownian motion with Hurst parameter $H \in (1/2, 1)$. The contributions are as follows: first, the two unknown drift parameters of the model are estimated by the least squares method; second, the strong consistency and the asymptotic distribution of the estimators are established; finally, our estimators are validated by numerical simulation.
Keywords: least squares method, Vasicek model, strong consistency, asymptotic distribution
The following Vasicek (1977) model driven by a standard Brownian motion $\{B_t, t \ge 0\}$ has been extensively applied in various fields, such as economics, finance and environmental science:
$$dX_t = (\mu + \theta X_t)\,dt + \sigma\,dB_t, \quad t \ge 0,$$
where $\mu, \theta$ are unknown parameters. The first term $(\mu + \theta X_t)$ is called the drift component; its economic interpretation is that prices fluctuate stochastically around a mean and that price peaks are only temporary, caused for example by power plant outages or capacity shortages.

Many extensions of this model have been made. For example, motivated by the long-range dependence found in telecommunication, economic and financial data, the Brownian motion in the Vasicek model has been replaced by fractional Brownian motion (fBm). The fractional Vasicek model (fVm) was first used to describe the dynamics of volatility by Comte et al. (1998). Although the fVm has many practical applications, little attention has been paid to its estimation and asymptotic theory in the literature. Xiao et al. (2019) developed the asymptotic theory for estimators of two parameters in the fVm. Tanaka et al. (2019) studied maximum likelihood estimation (MLE) of the drift parameters in the fVm from continuous observations.

Although the model driven by fBm has been applied in different areas, more general fractional Gaussian processes, such as sub-fractional Brownian motion (sub-fBm), have also been proposed. However, compared with the extensive studies of fBm, there are few systematic studies on statistical inference for other fractional Gaussian processes. The main reason for this is the complexity of the dependence structure of fractional Gaussian processes that do not have stationary increments.

∗ Corresponding author (Jingjun Guo, [email protected])
† Research supported by the National Natural Science Foundation of China under Grants 71561017 and 71961013.
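The classical Vasicek dynamics above can be explored numerically with a standard Euler-Maruyama discretization. The following is a minimal sketch, not a reference implementation from the cited literature; the function name and defaults are our own:

```python
import numpy as np

def simulate_vasicek(x0, mu, theta, sigma, n, T=1.0, rng=None):
    """Euler-Maruyama scheme for dX_t = (mu + theta * X_t) dt + sigma dB_t."""
    rng = rng or np.random.default_rng(0)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dB = rng.normal(0.0, np.sqrt(dt), size=n)  # independent Brownian increments
    for i in range(n):
        x[i + 1] = x[i] + (mu + theta * x[i]) * dt + sigma * dB[i]
    return x
```

With $\sigma = 0$ the recursion reduces to a deterministic scheme for the drift ODE, whose exact solution is $x_0 e^{\theta t} + \frac{\mu}{\theta}(e^{\theta t} - 1)$; comparing the two gives a quick correctness check of the implementation.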
Li et al. (2018) constructed least squares estimators (LSE) and discussed the consistency and asymptotic distributions of the two estimators in the Vasicek model driven by sub-fBm based on continuous observations. Xiao et al. (2018) considered parameter estimation for the continuously observed Vasicek model with sub-fBm; the strong consistency as well as the asymptotic distributions of these estimators were obtained in both the non-ergodic case and the null recurrent case.

From a practical point of view, it is more realistic and interesting to consider parameter estimation based on discrete observations, and the asymptotic theory of parameter estimation for discretely observed stochastic processes is also well developed. Shen et al. (2020) considered parameter estimation for the Vasicek model driven by small fractional Lévy noise based on discrete high-frequency observations at regularly spaced time points. For the general case and the null recurrent case, the consistency as well as the asymptotic behavior of the LSE of two unknown parameters were established.

Motivated by the aforementioned works, in this article we study the LSE for the Vasicek model
$$dX_t = (\mu + \theta X_t)\,dt + \sigma\,dS^H_t, \quad t \ge 0, \quad X_0 = x_0, \eqno(1.1)$$
where $S^H_t$ is a sub-fBm with Hurst index $H \in (1/2, 1)$ and $x_0$ is a fixed value. In almost all empirically relevant cases, the parameters $\mu$ and $\theta$ in the drift component of model (1.1) are unknown, and we denote their true values again by $\theta$ and $\mu$. We assume that $\{X_t, t \ge 0\}$ is observed at the $n$ regular time points $\{t_i = i/n,\ i = 1, 2, \cdots, n\}$, so an important problem is to estimate the parameters $\theta$ and $\mu$ from these observations.

The rest of the paper is organized as follows. In Section 2, we introduce the necessary background on sub-fBm in preparation for our proofs and describe the LSE of the Vasicek model driven by sub-fBm from discrete observations. The strong consistency of the LSE for our model is given in Section 3.
Section 4 is devoted to the asymptotic distribution of the LSE for the Vasicek model. In Section 5, our estimators are validated by numerical simulations: the true values of the parameters are given and used to simulate the Vasicek model driven by sub-fBm; with these simulated values we compute our estimators and compare them with the true parameters. Numerical results show that our estimators converge to the true parameters.
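Before developing the theory, note that the sub-fBm driving model (1.1) is a centered Gaussian process with the explicit covariance $C_H(s,t) = s^{2H} + t^{2H} - \frac{1}{2}\big[|s-t|^{2H} + (s+t)^{2H}\big]$ recalled in Section 2, so paths on a finite grid can be generated exactly by a Cholesky factorization of the covariance matrix. The sketch below is a minimal illustration (helper names are ours, and $H \in (1/2,1)$ is assumed); it also doubles as a numerical check of the increment-variance identity (2.1):

```python
import numpy as np

def sub_fbm_cov(s, t, H):
    """Covariance C_H(s, t) of sub-fractional Brownian motion."""
    return s**(2*H) + t**(2*H) - 0.5 * ((s + t)**(2*H) + abs(t - s)**(2*H))

def simulate_sub_fbm(n, H, T=1.0, rng=None):
    """Exact simulation of S^H on the grid t_i = i*T/n via Cholesky factorization."""
    rng = rng or np.random.default_rng(0)
    t = np.linspace(T / n, T, n)
    cov = np.array([[sub_fbm_cov(u, v, H) for v in t] for u in t])
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # small jitter for numerical stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))  # S^H_0 = 0
```

For example, $C_H(t,t) = (2 - 2^{2H-1})\,t^{2H}$, and $C_H(t,t) + C_H(s,s) - 2C_H(s,t)$ reproduces the right-hand side of (2.1).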
In this section, we describe some basic facts about sub-fBm and the LSE of the Vasicek model driven by sub-fBm from discrete observations. For more complete introductions to these subjects, see Mendy (2013), Nourdin et al. (2017), Tudor (2007) and the references therein.

The sub-fBm arises from occupation time fluctuations of branching particle systems with a Poisson initial condition. The sub-fBm has properties similar to those of fBm, such as self-similarity, long-range dependence and Hölder continuous paths. However, in contrast to fBm, sub-fBm has non-stationary increments; the increments over non-overlapping intervals are more weakly correlated and their covariance decays polynomially at a higher rate. For this reason it is called sub-fBm in Bojdecki et al. (2004). These properties make the sub-fBm a possible candidate for models involving long-range dependence, self-similarity and non-stationarity.

The sub-fBm $S^H_t$ is a mean zero Gaussian process with $S^H_0 = 0$ and covariance
$$C_H(s,t) = E(S^H_t S^H_s) = s^{2H} + t^{2H} - \frac{1}{2}\big[|s-t|^{2H} + (s+t)^{2H}\big], \quad s, t \ge 0.$$
When $H = 1/2$, $S^H_t$ coincides with the standard Brownian motion. In fact, $S^H_t$ is neither a semimartingale nor a Markov process unless $H = 1/2$. For all $s \le t$,
$$E\big(|S^H_t - S^H_s|^2\big) = -2^{2H-1}\big(t^{2H} + s^{2H}\big) + (t+s)^{2H} + (t-s)^{2H}. \eqno(2.1)$$
The increments of sub-fBm satisfy the inequalities
$$\big[(2 - 2^{2H-1}) \wedge 1\big](t-s)^{2H} \le E\big(|S^H_t - S^H_s|^2\big) \le \big[(2 - 2^{2H-1}) \vee 1\big](t-s)^{2H}. \eqno(2.2)$$
Moreover, for $u \le v \le s \le t$ the covariance of increments of sub-fBm over non-overlapping intervals can be written as
$$E\big((S^H_t - S^H_s)(S^H_v - S^H_u)\big) = \frac{1}{2}\big[(t+u)^{2H} + (t-u)^{2H} + (s+v)^{2H} + (s-v)^{2H} - (t+v)^{2H} - (t-v)^{2H} - (s+u)^{2H} - (s-u)^{2H}\big].$$
Fix a time interval $[0,T]$. We denote by $\mathcal{H}$ the canonical Hilbert space associated to the sub-fBm $S^H_t$.
That is, $\mathcal{H}$ is the closure of the linear span $\mathcal{E}$ generated by the indicator functions with respect to the scalar product $\langle I_{[0,t]}, I_{[0,s]} \rangle_{\mathcal{H}} = C_H(s,t)$. The covariance of sub-fBm can also be written as
$$C_H(s,t) = E(S^H_t S^H_s) = \int_0^t \int_0^s \phi_H(u,v)\,du\,dv,$$
where $\phi_H(u,v) = H(2H-1)\big[|u-v|^{2H-2} - (u+v)^{2H-2}\big]$ and $1/2 < H < 1$. For $H > 1/2$, we have $L^{1/H}([0,T]) \subset \mathcal{H}$, and for any pair of step functions $\varphi, \psi \in L^{1/H}([0,T])$:
$$\langle \varphi, \psi \rangle_{\mathcal{H}} = \int_0^T \int_0^T \varphi_s \psi_t\, \phi_H(s,t)\,ds\,dt, \quad 1/2 < H < 1. \eqno(2.3)$$
Next, let us consider the Vasicek model driven by sub-fBm, which takes the sub-fBm as the governing force of the state variable instead of the usual Brownian motion. For the stochastic differential equation (1.1), we discuss the LSE of the two parameters. The motivation of the LSE is to minimize the contrast function
$$\rho_{n,\sigma}(\theta, \mu) = \sum_{i=1}^n \big|X_{t_i} - X_{t_{i-1}} - (\mu + \theta X_{t_{i-1}})\,\Delta t_i\big|^2,$$
where $\Delta t_i = t_i - t_{i-1} = 1/n$, $i = 1, 2, \cdots, n$. Setting the partial derivatives with respect to $\mu$ and $\theta$ to zero, we get
$$\frac{\partial \rho_{n,\sigma}(\theta,\mu)}{\partial \mu} = -2\sum_{i=1}^n \big(X_{t_i} - X_{t_{i-1}} - (\mu + \theta X_{t_{i-1}})\,\Delta t_i\big) = 0,$$
$$\frac{\partial \rho_{n,\sigma}(\theta,\mu)}{\partial \theta} = -2\sum_{i=1}^n \big(X_{t_i} - X_{t_{i-1}} - (\mu + \theta X_{t_{i-1}})\,\Delta t_i\big)\,X_{t_{i-1}} = 0.$$
Solving these equations, we obtain
$$\hat\theta = \frac{\sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})\,X_{t_{i-1}} - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}} \sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})}{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n^2}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2}, \eqno(2.4)$$
$$\hat\mu = \frac{\sum_{i=1}^n (X_{t_i} - X_{t_{i-1}}) \sum_{i=1}^n X^2_{t_{i-1}} - \sum_{i=1}^n X_{t_{i-1}} \sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})\,X_{t_{i-1}}}{\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2}. \eqno(2.5)$$

In this section, our main purpose is to state and prove Theorem 3.1, which gives the consistency of the estimators defined by equations (2.4) and (2.5). Consider the solution of the stochastic differential equation (1.1):
$$X_t = x_0 e^{\theta t} + \frac{\mu}{\theta}\big(e^{\theta t} - 1\big) + \sigma \int_0^t e^{\theta(t-s)}\,dS^H_s, \quad t \in [0,1]. \eqno(3.1)$$
More specifically, the numerical approximation of model (1.1) can be expressed as the Euler scheme (Ait-Sahalia (2002)):
$$X_{t_i} = X_{t_{i-1}} + (\mu + \theta X_{t_{i-1}})\,\Delta t_i + \sigma\big(S^H_{t_i} - S^H_{t_{i-1}}\big). \eqno(3.2)$$
Hence, substituting (3.2) into (2.4) and (2.5) respectively, we get
$$\hat\theta = \theta + \sigma\,\frac{\sum_{i=1}^n X_{t_{i-1}}\big(S^H_{t_i} - S^H_{t_{i-1}}\big) - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\,S^H_1}{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n^2}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2}, \eqno(3.3)$$
$$\hat\mu = \mu + \sigma\,\frac{\sum_{i=1}^n X^2_{t_{i-1}}\,S^H_1 - \sum_{i=1}^n X_{t_{i-1}} \sum_{i=1}^n X_{t_{i-1}}\big(S^H_{t_i} - S^H_{t_{i-1}}\big)}{\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2}, \eqno(3.4)$$
where $\theta$ and $\mu$ on the right-hand sides are the true values of the parameters. Next, we state our main results.

Theorem 3.1.
For $H \in (1/2, 1)$, we have
(1) $\hat\theta \xrightarrow{a.s.} \theta$, as $n \to \infty$ and $\sigma \to 0$;
(2) $\hat\mu \xrightarrow{a.s.} \mu$, as $n \to \infty$ and $\sigma \to 0$.
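The consistency asserted in Theorem 3.1 can be illustrated numerically. The sketch below (function name ours) evaluates the closed-form estimators (2.4) and (2.5) on a discretely observed path; on noise-free data generated by the Euler recursion (3.2) with $\sigma = 0$, the regression is exact and the true parameters are recovered, which is the $\sigma \to 0$ limit of the theorem:

```python
import numpy as np

def lse_vasicek(x):
    """Least squares estimators (2.4)-(2.5) from observations x[0], ..., x[n]
    of the Vasicek model on the regular grid t_i = i/n over [0, 1]."""
    n = len(x) - 1
    xp = x[:-1]               # X_{t_{i-1}}
    dx = np.diff(x)           # increments X_{t_i} - X_{t_{i-1}}
    den = (xp**2).sum() / n - (xp.sum() / n) ** 2
    theta_hat = (dx @ xp - xp.sum() * dx.sum() / n) / den
    # first normal equation: sum(dx) = mu_hat + theta_hat * mean(X_{t_{i-1}})
    mu_hat = dx.sum() - theta_hat * xp.mean()
    return theta_hat, mu_hat
```

For $\sigma > 0$ the estimates are random, and Theorem 3.1 asserts that they concentrate on the true values as $n \to \infty$ and $\sigma \to 0$.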
In order to simplify the proof of Theorem 3.1, we first give the following lemmas and propositions. For simplicity, we write
$$\bar X_t = \frac{\mu}{\theta}\big(e^{\theta t} - 1\big) + x_0 e^{\theta t}, \quad t \in [0,1], \eqno(3.5)$$
for the solution of (1.1) with $\sigma = 0$.

Lemma 3.2. [21] For any $0 \le u_1 \le u_2$, $0 \le v_1 \le v_2$ with $u_2 - u_1 = v_2 - v_1$, there exists a constant $C$ depending on $\theta$ and $H$ such that
$$\Big|\int_{u_1}^{u_2}\int_{v_1}^{v_2} e^{-\theta(s+t)}\,|s-t|^{2H-2}\,ds\,dt\Big| \le C\,\big|e^{-\theta(u_1+v_1)} - e^{-\theta(u_2+v_2)}\big|\,|u_2 - u_1|^{2H-2}, \quad \theta \ne 0.$$

Lemma 3.3.
For $\theta < 0$, we have
$$E\Big(\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s\Big)^2 \le C\,\big|e^{-2\theta/n}-1\big|\,n^{2-2H}, \qquad E\Big(\int_0^{t_{i-1}} e^{\theta(t_{i-1}-s)}\,dS^H_s\Big)^2 \le C\,\big|e^{-2\theta t_{i-1}}-1\big|\,|t_{i-1}|^{2H-2},$$
where $C$ depends on $H$ and $\theta$.

Proof: From (2.3), we compute directly:
$$E\Big(\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s\Big)^2 = H(2H-1)\int_{t_{i-1}}^{t_i}\int_{t_{i-1}}^{t_i} e^{\theta(2t_i-u-v)}\big(|u-v|^{2H-2}-(u+v)^{2H-2}\big)\,du\,dv \le H(2H-1)\int_{t_{i-1}}^{t_i}\int_{t_{i-1}}^{t_i} e^{\theta(2t_i-u-v)}\,|u-v|^{2H-2}\,du\,dv.$$
Writing $e^{\theta(2t_i-u-v)} = e^{2\theta t_i}\,e^{-\theta(u+v)}$ and applying Lemma 3.2, the right-hand side satisfies
$$H(2H-1)\,e^{2\theta t_i}\int_{t_{i-1}}^{t_i}\int_{t_{i-1}}^{t_i} e^{-\theta(u+v)}\,|u-v|^{2H-2}\,du\,dv \le C\,\big|e^{-2\theta/n}-1\big|\,n^{2-2H},$$
which proves the first inequality. By the same method,
$$E\Big(\int_0^{t_{i-1}} e^{\theta(t_{i-1}-s)}\,dS^H_s\Big)^2 \le H(2H-1)\,e^{2\theta t_{i-1}}\int_0^{t_{i-1}}\int_0^{t_{i-1}} e^{-\theta(u+v)}\,|u-v|^{2H-2}\,du\,dv \le C\,\big|e^{-2\theta t_{i-1}}-1\big|\,|t_{i-1}|^{2H-2}.$$

Proposition 3.1. As $\sigma \to 0$, we have $\sup_{t\in[0,1]}\big|(X_t)^2-(\bar X_t)^2\big| \to 0$, where $\bar X_t$ is given by (3.5).

Proof: Equation (1.1) can be rewritten as
$$X_t = X_0 + \int_0^t (\mu+\theta X_s)\,ds + \sigma S^H_t, \quad t\in[0,1].$$
Then, from equation (3.5), we have
$$X_t - \bar X_t = \int_0^t \theta\big(X_s-\bar X_s\big)\,ds + \sigma S^H_t, \quad t\in[0,1].$$
On the one hand, by the Cauchy-Schwarz inequality, we get
$$|X_t - \bar X_t|^2 \le 2\Big|\int_0^t \theta\big(X_s-\bar X_s\big)\,ds\Big|^2 + 2\sigma^2|S^H_t|^2 \le 2\theta^2 t\int_0^t |X_s-\bar X_s|^2\,ds + 2\sigma^2|S^H_t|^2,$$
and by Gronwall's inequality,
$$|X_t - \bar X_t|^2 \le 2\sigma^2 e^{2\theta^2 t^2}\,|S^H_t|^2 \le 2\sigma^2 e^{2\theta^2 t^2}\sup_{0\le s\le t}|S^H_s|^2.$$
Therefore,
$$\sup_{0\le t\le 1}|X_t - \bar X_t| \le \sqrt{2}\,\sigma\, e^{\theta^2}\sup_{0\le t\le 1}|S^H_t| \to 0, \quad \sigma\to 0. \eqno(3.6)$$
On the other hand, by the same method,
$$|X_t|^2 \le 2\big(|x_0| + \sigma|S^H_t| + |\mu t|\big)^2 + 2\Big|\int_0^t \theta X_s\,ds\Big|^2 \le 2\Big(|x_0| + \sigma\sup_{0\le t\le 1}|S^H_t| + |\mu|\Big)^2 + 2\theta^2 t\int_0^t |X_s|^2\,ds,$$
and by Gronwall's inequality,
$$|X_t| \le \sqrt{2}\Big(|x_0| + \sigma\sup_{0\le t\le 1}|S^H_t| + |\mu|\Big)e^{\theta^2 t^2} < \infty.$$
Hence
$$\sup_{0\le t\le 1}\big|(X_t)^2-(\bar X_t)^2\big| \le \Big(\sup_{0\le t\le 1}|X_t| + \sup_{0\le t\le 1}|\bar X_t|\Big)\Big(\sup_{0\le t\le 1}|X_t-\bar X_t|\Big) \to 0, \quad \sigma\to 0.$$

Proposition 3.2. As $\sigma\to 0$ and $n\to\infty$, we have
$$\frac{1}{n}\sum_{i=1}^n X_{t_{i-1}} \to \int_0^1 \bar X_t\,dt, \qquad \frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} \to \int_0^1 (\bar X_t)^2\,dt.$$
Proof: We have
$$\frac{1}{n}\sum_{i=1}^n X_{t_{i-1}} = \sum_{i=1}^n \int_{t_{i-1}}^{t_i} X_{t_{i-1}}\,dt = \sum_{i=1}^n \int_{t_{i-1}}^{t_i} X_{[nt]/n}\,dt = \int_0^1 X_{[nt]/n}\,dt,$$
where $t_i - t_{i-1} = 1/n$ and $[nt]$ denotes the integer part of $nt$. Then $\int_0^1 \bar X_{[nt]/n}\,dt \to \int_0^1 \bar X_t\,dt$ as $n\to\infty$, and by Proposition 3.1,
$$\Big|\frac{1}{n}\sum_{i=1}^n X_{t_{i-1}} - \int_0^1 \bar X_t\,dt\Big| = \Big|\int_0^1 X_{[nt]/n}\,dt - \int_0^1 \bar X_t\,dt\Big| \le \sup_{0\le t\le 1}\big|X_{[nt]/n} - \bar X_{[nt]/n}\big| + \sup_{0\le t\le 1}\big|\bar X_{[nt]/n} - \bar X_t\big| \to 0, \quad n\to\infty.$$
In the same way, we obtain
$$\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} \to \int_0^1 (\bar X_t)^2\,dt, \quad n\to\infty.$$

Lemma 3.4.
For $n\to\infty$ and $\sigma\to 0$,
$$\sum_{i=1}^n X_{t_{i-1}}\big(S^H_{t_i}-S^H_{t_{i-1}}\big) \to x_0\int_0^1 e^{\theta t}\,dS^H_t + \frac{\mu}{\theta}\int_0^1\big(e^{\theta t}-1\big)\,dS^H_t < \infty.$$
Proof: According to the solution of (1.1), we have
$$X_{t_{i-1}} = x_0 e^{\theta t_{i-1}} + \frac{\mu}{\theta}\big(e^{\theta t_{i-1}}-1\big) + \sigma\int_0^{t_{i-1}} e^{\theta(t_{i-1}-s)}\,dS^H_s,$$
where $t_i-t_{i-1}=1/n$, $i=1,2,\cdots,n$. Hence
$$\sum_{i=1}^n X_{t_{i-1}}\big(S^H_{t_i}-S^H_{t_{i-1}}\big) = \sum_{i=1}^n\Big(x_0 e^{\theta t_{i-1}} + \frac{\mu}{\theta}\big(e^{\theta t_{i-1}}-1\big) + \sigma\int_0^{t_{i-1}} e^{\theta(t_{i-1}-s)}\,dS^H_s\Big)\big(S^H_{t_i}-S^H_{t_{i-1}}\big) = I_1+I_2+I_3.$$
For $\theta<0$, as $n\to\infty$,
$$I_1 = \sum_{i=1}^n x_0 e^{\theta t_{i-1}}\big(S^H_{t_i}-S^H_{t_{i-1}}\big) \to x_0\int_0^1 e^{\theta t}\,dS^H_t, \qquad I_2 = \sum_{i=1}^n \frac{\mu}{\theta}\big(e^{\theta t_{i-1}}-1\big)\big(S^H_{t_i}-S^H_{t_{i-1}}\big) \to \frac{\mu}{\theta}\int_0^1\big(e^{\theta t}-1\big)\,dS^H_t.$$
For $I_3$, by the Markov inequality, for any $\delta>0$,
$$P\Big(\Big|\sigma\sum_{i=1}^n\int_0^{t_{i-1}} e^{\theta(t_{i-1}-s)}\,dS^H_s\,\big(S^H_{t_i}-S^H_{t_{i-1}}\big)\Big|>\delta\Big) \le \delta^{-1}\sigma\sum_{i=1}^n\Big(E\Big(\int_0^{t_{i-1}} e^{\theta(t_{i-1}-s)}\,dS^H_s\Big)^2\Big)^{1/2}\Big(E\big(S^H_{t_i}-S^H_{t_{i-1}}\big)^2\Big)^{1/2} \le C\delta^{-1}\sigma\sum_{i=1}^n\big|e^{-2\theta t_{i-1}}-1\big|^{1/2}\,|t_{i-1}|^{H-1}\,n^{-H} \to 0$$
as $n\to\infty$ and $\sigma\to 0$, where $C$ is a constant depending on $H$ and $\theta$; so $I_3\to 0$.

Proof of Theorem 3.1: Combining Propositions 3.1 and 3.2 with Lemma 3.4, when $n\to\infty$ and $\sigma\to 0$ we have
$$\sigma\Big(\sum_{i=1}^n X_{t_{i-1}}\big(S^H_{t_i}-S^H_{t_{i-1}}\big) - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\,S^H_1\Big) \to 0, \qquad \frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n^2}\Big(\sum_{i=1}^n X_{t_{i-1}}\Big)^2 \to \int_0^1(\bar X_t)^2\,dt - \Big(\int_0^1 \bar X_t\,dt\Big)^2.$$
So from (3.3) we immediately conclude that $\hat\theta \xrightarrow{a.s.} \theta$ as $n\to\infty$ and $\sigma\to 0$. Moreover, dividing the numerator and denominator of (3.4) by $n$,
$$\hat\mu = \mu + \sigma\,\frac{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}}\,S^H_1 - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\sum_{i=1}^n X_{t_{i-1}}\big(S^H_{t_i}-S^H_{t_{i-1}}\big)}{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \big(\frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\big)^2},$$
and similarly $\hat\mu \xrightarrow{a.s.} \mu$ as $n\to\infty$ and $\sigma\to 0$.

According to Es-Sebaiy (2013), Wang et al. (2017) and the solution of equation (1.1), the process observed at the equidistant discrete times $\{t_i=i/n,\ i=1,2,\cdots,n\}$ satisfies
$$X_{t_i} = X_{t_{i-1}}e^{\theta/n} + \frac{\mu}{\theta}\big(e^{\theta/n}-1\big) + \sigma\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s, \qquad t_i-t_{i-1}=\frac{1}{n}.$$
The estimators of the parameters $\theta,\mu$ can then be rewritten as
$$\hat\theta = n\big(e^{\theta/n}-1\big) + \sigma\,\frac{\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\sum_{i=1}^n\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s}{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n^2}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2}, \eqno(4.1)$$
$$\hat\mu = \frac{\mu}{\theta}\,n\big(e^{\theta/n}-1\big) + \sigma\,\frac{\sum_{i=1}^n X^2_{t_{i-1}}\sum_{i=1}^n\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s - \sum_{i=1}^n X_{t_{i-1}}\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s}{\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2}. \eqno(4.2)$$

Lemma 4.1. As $n\to\infty$ and $\sigma\to 0$,
$$\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s \to \int_0^1 \bar X_s\,dS^H_s.$$
Proof: Using the same method as in the proof of Lemma 3.4,
$$\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s = \int_0^1\sum_{i=1}^n X_{t_{i-1}}\,e^{\theta(t_i-s)}\,I_{[t_{i-1},t_i]}(s)\,dS^H_s = \int_0^1 e^{\theta\big(\frac{[ns]+1}{n}-s\big)}\,X_{[ns]/n}\,dS^H_s.$$
On the other hand, $e^{\theta(([ns]+1)/n-s)}\,X_{[ns]/n} \to X_s$ as $n\to\infty$, and according to (3.6), $X_s\to\bar X_s$ as $\sigma\to 0$. Therefore,
$$\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s \to \int_0^1 \bar X_s\,dS^H_s.$$

Theorem 4.2. As $n\to\infty$, $\sigma\to 0$ and $n\sigma\to\infty$, we obtain
$$\sigma^{-1}\big(\hat\theta-\theta\big) \to \frac{\int_0^1 \bar X_s\,dS^H_s - \int_0^1 \bar X_s\,ds\int_0^1 dS^H_s}{\int_0^1(\bar X_s)^2\,ds - \big(\int_0^1 \bar X_s\,ds\big)^2}, \qquad \sigma^{-1}\big(\hat\mu-\mu\big) \to \frac{\int_0^1(\bar X_s)^2\,ds\int_0^1 dS^H_s - \int_0^1 \bar X_s\,ds\int_0^1 \bar X_s\,dS^H_s}{\int_0^1(\bar X_s)^2\,ds - \big(\int_0^1 \bar X_s\,ds\big)^2}.$$
Proof: According to (4.1),
$$\sigma^{-1}\big(\hat\theta-\theta\big) = \sigma^{-1}\big(n(e^{\theta/n}-1)-\theta\big) + \frac{\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\sum_{i=1}^n\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s}{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \frac{1}{n^2}\big(\sum_{i=1}^n X_{t_{i-1}}\big)^2},$$
and since $n(e^{\theta/n}-1)-\theta = O(1/n)$, we have $\sigma^{-1}\big(n(e^{\theta/n}-1)-\theta\big) \to 0$ if $n\to\infty$ and $n\sigma\to\infty$.
Moreover,
$$\sum_{i=1}^n\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s = \int_0^1\sum_{i=1}^n e^{\theta(t_i-s)}\,I_{[t_{i-1},t_i]}(s)\,dS^H_s = \int_0^1 e^{\theta\big(\frac{[ns]+1}{n}-s\big)}\,dS^H_s \to \int_0^1 dS^H_s, \quad n\to\infty.$$
Combining Lemma 4.1 with Proposition 3.2, as $n\to\infty$, $\sigma\to 0$ and $n\sigma\to\infty$, we obtain
$$\sigma^{-1}\big(\hat\theta-\theta\big) \to \frac{\int_0^1 \bar X_s\,dS^H_s - \int_0^1 \bar X_s\,ds\int_0^1 dS^H_s}{\int_0^1(\bar X_s)^2\,ds - \big(\int_0^1 \bar X_s\,ds\big)^2}.$$
Further, according to (4.2), dividing the numerator and denominator of the second term by $n$,
$$\sigma^{-1}\big(\hat\mu-\mu\big) = \sigma^{-1}\Big(\frac{\mu}{\theta}\,n\big(e^{\theta/n}-1\big)-\mu\Big) + \frac{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}}\sum_{i=1}^n\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s - \frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\sum_{i=1}^n X_{t_{i-1}}\int_{t_{i-1}}^{t_i} e^{\theta(t_i-s)}\,dS^H_s}{\frac{1}{n}\sum_{i=1}^n X^2_{t_{i-1}} - \big(\frac{1}{n}\sum_{i=1}^n X_{t_{i-1}}\big)^2}.$$
In the same way, combining Lemma 4.1 with Proposition 3.2, as $n\to\infty$, $\sigma\to 0$ and $n\sigma\to\infty$, we get
$$\sigma^{-1}\big(\hat\mu-\mu\big) \to \frac{\int_0^1(\bar X_s)^2\,ds\int_0^1 dS^H_s - \int_0^1 \bar X_s\,ds\int_0^1 \bar X_s\,dS^H_s}{\int_0^1(\bar X_s)^2\,ds - \big(\int_0^1 \bar X_s\,ds\big)^2}.$$

In this section, we use Monte Carlo simulation to examine the unbiasedness and effectiveness of the estimators of $\theta$ and $\mu$. For the true values of $\theta$ and $\mu$ shown in Tables 1 and 2, we fix $\sigma$ and $x_0$, take different values of $H$ and $\theta$, and use R to generate 500 sample paths of $X_t$ according to the scheme (3.2). With these simulated paths we compute our estimators and compare them with the true parameters.

Table 1: the mean value and standard deviation of the estimator $\hat\mu$.

Table 2: the mean value and standard deviation of the estimator $\hat\theta$ for $\theta \in \{-0.7, -0.8, -0.9, -0.95\}$.

Conclusion and future work

In this article, we mainly focused on the LSE of the Vasicek-type stochastic differential equation driven by sub-fBm from discrete observations. In Theorems 3.1 and 4.2, the consistency and the asymptotic distribution of the estimators were established. Building on this work, other properties of the LSE of Vasicek-type stochastic differential equations with discrete observations can be studied in the future.
References

[1] Ait-Sahalia Y. (2002). Maximum likelihood estimation of discretely sampled diffusions: a closed-form approximation approach[J]. Econometrica, 70(1), 223-262.
[2] Brouste A. (2010). Asymptotic properties of MLE for partially observed fractional diffusion system with dependent noises[J]. Journal of Statistical Planning and Inference, 140(2), 551-558.
[3] Barchielli A., Paganoni A M. and Zucca F. (1998). On stochastic differential equations and semigroups of probability operators in quantum probability[J]. Stochastic Processes and their Applications, 73(1), 69-86.
[4] Bojdecki T., Gorostiza L G. and Talarczyk A. (2004). Sub-fractional Brownian motion and its relation to occupation times[J]. Statistics and Probability Letters, 69(4), 405-419.
[5] Bercu B., Coutin L. and Savy N. (2011). Sharp large deviations for the fractional Ornstein-Uhlenbeck process[J]. Theory of Probability and Its Applications, 55(4), 575-610.
[6] Comte F., Renault E. (1998). Long memory in continuous-time stochastic volatility models[J]. Mathematical Finance, 8(4), 291-323.
[7] Es-Sebaiy K. (2013). Berry-Esseen bounds for the least squares estimator for discretely observed fractional Ornstein-Uhlenbeck processes[J]. Statistics and Probability Letters, 83(11), 2524-2525.
[8] Hu Y., Long H. (2009). Least squares estimator for Ornstein-Uhlenbeck processes driven by α-stable motions[J].