Asymptotic behavior of maximum likelihood estimator for time inhomogeneous diffusion processes
Mátyás Barczy^{*,⋄} and Gyula Pap^{*}

^{*} University of Debrecen, Faculty of Informatics, Pf. 12, H-4010 Debrecen, Hungary; e-mail: [email protected] (M. Barczy), [email protected] (G. Pap). ^{⋄} Corresponding author.
Abstract
First we consider a process $(X^{(\alpha)}_t)_{t\in[0,T)}$ given by a SDE $\mathrm{d}X^{(\alpha)}_t=\alpha b(t)X^{(\alpha)}_t\,\mathrm{d}t+\sigma(t)\,\mathrm{d}B_t$, $t\in[0,T)$, with a parameter $\alpha\in\mathbb{R}$, where $T\in(0,\infty]$ and $(B_t)_{t\in[0,T)}$ is a standard Wiener process. We study the asymptotic behavior of the MLE $\widehat\alpha(X^{(\alpha)})_t$ of $\alpha$ based on the observation $(X^{(\alpha)}_s)_{s\in[0,t]}$ as $t\uparrow T$. We formulate sufficient conditions under which $\sqrt{I_{X^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)$ converges to the distribution of $c\int_0^1 W_s\,\mathrm{d}W_s\big/\int_0^1 W_s^2\,\mathrm{d}s$, where $I_{X^{(\alpha)}}(t)$ denotes the Fisher information for $\alpha$ contained in the sample $(X^{(\alpha)}_s)_{s\in[0,t]}$, $(W_s)_{s\in[0,1]}$ is a standard Wiener process, and $c=1/\sqrt2$ or $c=-1/\sqrt2$. We also weaken the sufficient conditions due to Luschgy [19, Section 4.2] under which $\sqrt{I_{X^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)$ converges to the Cauchy distribution. Furthermore, we give sufficient conditions so that the MLE of $\alpha$ is asymptotically normal with some appropriate random normalizing factor.

Next we study a SDE $\mathrm{d}Y^{(\alpha)}_t=\alpha b(t)a(Y^{(\alpha)}_t)\,\mathrm{d}t+\sigma(t)\,\mathrm{d}B_t$, $t\in[0,T)$, with a perturbed drift satisfying $a(x)=x+O(1+|x|^{\gamma})$ with some $\gamma\in[0,1)$, and we formulate sufficient conditions under which $\sqrt{I_{Y^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(Y^{(\alpha)})_t-\alpha\bigr)$ converges to the distribution of $c\int_0^1 W_s\,\mathrm{d}W_s\big/\int_0^1 W_s^2\,\mathrm{d}s$.

We emphasize that our results are valid in both cases $T\in(0,\infty)$ and $T=\infty$, and we develop a unified approach to handle these cases.

2000 Mathematics Subject Classifications: 62M05, 62F12, 60J60.

Key words and phrases: maximum likelihood estimator for inhomogeneous diffusions, perturbed drift.

Acknowledgements: The first author has been supported by the Hungarian Scientific Research Fund under Grants No. OTKA-F046061/2004 and OTKA T-048544/2005. The second author has been supported by the Hungarian Scientific Research Fund under Grant No. OTKA T-048544/2005.

1 Introduction

Statistical estimation of parameters of diffusion processes has been studied for a long time. Feigin [8] gave a good historical overview of the very early investigations and provided a general asymptotic theory of maximum likelihood estimation (MLE) for continuous-time homogeneous diffusion processes without stationarity assumptions and without resorting to the use of …

Let $T\in(0,\infty]$ be fixed. Let us consider a time inhomogeneous diffusion process $(Y^{(\alpha)}_t)_{t\in[0,T)}$ given by the stochastic differential equation (SDE)

\[
\mathrm{d}Y^{(\alpha)}_t=\alpha b(t)\,a(Y^{(\alpha)}_t)\,\mathrm{d}t+\sigma(t)\,\mathrm{d}B_t,\quad t\in[0,T),\qquad Y^{(\alpha)}_0=0, \tag{1.1}
\]

where $b:[0,T)\to\mathbb{R}$, $a:\mathbb{R}\to\mathbb{R}$ and $\sigma:[0,T)\to(0,\infty)$ are known Borel-measurable functions, $(B_t)_{t\in[0,T)}$ is a standard Wiener process, and $\alpha\in\mathbb{R}$ is an unknown parameter.

One can obtain sufficient conditions for asymptotic normality in case $T=\infty$ from the general Theorem 5.1 in Chapter 9 due to Basawa and Prakasa Rao [2]: namely, if $\alpha$, $b$, $a$ and $\sigma$ are such that there exists a unique strong solution of the SDE (1.1) and

\[
\frac1t\int_0^t\frac{b(s)^2\,a(Y^{(\alpha)}_s)^2}{\sigma(s)^2}\,\mathrm{d}s\xrightarrow{\ \mathbb{P}\ }K_\alpha\quad\text{as } t\to\infty, \tag{1.2}
\]

with some $K_\alpha\in(0,\infty)$, where $\xrightarrow{\mathbb{P}}$ denotes convergence in probability, then the MLE $\widehat\alpha(Y^{(\alpha)})_t$ of $\alpha$ based on the observation $(Y^{(\alpha)}_s)_{s\in[0,t]}$ is weakly consistent, and $\sqrt t\,\bigl(\widehat\alpha(Y^{(\alpha)})_t-\alpha\bigr)$ converges in distribution to the normal distribution with mean $0$ and variance $K_\alpha^{-1}$ as $t\to\infty$. We note that this theorem of Basawa and Prakasa Rao [2] is valid for multidimensional diffusion processes, and the drift and diffusion coefficients can have a more general form.
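As a quick numerical illustration of the estimator discussed in this paper (this sketch is not part of the paper; the choices $b\equiv1$, $a(x)=x$, $\sigma\equiv1$, and all tuning parameters are illustrative assumptions), one can simulate the Ornstein-Uhlenbeck special case of the SDE by the Euler-Maruyama scheme and evaluate the MLE $\widehat\alpha_t=\int_0^t X\,\mathrm{d}X\big/\int_0^t X^2\,\mathrm{d}s$ by Itô and Riemann sums:

```python
import numpy as np

def euler_maruyama_mle(alpha, T=50.0, n=100_000, seed=1):
    """Simulate dX_t = alpha * X_t dt + dB_t (b = sigma = 1, a(x) = x)
    by Euler-Maruyama on [0, T] and return the MLE
    alpha_hat = (int_0^T X dX) / (int_0^T X^2 ds),
    approximated by an Ito sum and a left-point Riemann sum."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    X = np.empty(n + 1)
    X[0] = 0.0
    for k in range(n):
        X[k + 1] = X[k] + alpha * X[k] * dt + dB[k]
    numerator = np.sum(X[:-1] * np.diff(X))   # int_0^T X dX (Ito sum)
    denominator = np.sum(X[:-1] ** 2) * dt    # int_0^T X^2 ds
    return numerator / denominator

# For alpha = -0.5, condition (1.2) holds with K_alpha = 1/(2|alpha|),
# so sqrt(t)(alpha_hat - alpha) is asymptotically N(0, 2|alpha|) and the
# estimate should be close to -0.5 for T = 50.
alpha_hat = euler_maruyama_mle(-0.5)
```

The loop is the naive first-order scheme; it suffices here because the drift is linear and the discretization bias of the Itô-sum estimator is of order of the step size.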
It is not easy to check condition (1.2), and hence, as a general task, it is desirable to describe the asymptotic behavior of the MLE of $\alpha$ (considering more general normalizing factors than $\sqrt t$) by giving simpler sufficient conditions.

In the first part of the present paper we investigate the SDE (1.1) with $a(x)=x$, $x\in\mathbb{R}$, namely,

\[
\mathrm{d}X^{(\alpha)}_t=\alpha b(t)X^{(\alpha)}_t\,\mathrm{d}t+\sigma(t)\,\mathrm{d}B_t,\quad t\in[0,T),\qquad X^{(\alpha)}_0=0, \tag{1.3}
\]

which is a special case of the Hull-White (or extended Vasicek) model, see, e.g., Bishwal [4, page 3]. As one of our main results, we give sufficient conditions under which the MLE of $\alpha$ normalized by the Fisher information converges to the distribution of $c\int_0^1 W_s\,\mathrm{d}W_s\big/\int_0^1 W_s^2\,\mathrm{d}s$, where $(W_s)_{s\in[0,1]}$ is a standard Wiener process and $c=1/\sqrt2$ or $c=-1/\sqrt2$, see Theorem 2.5. In the special case $T=\infty$ and $\sigma\equiv1$, Luschgy [19, Section 4.2] gave sufficient conditions for the MLE of $\alpha$ normalized by the Fisher information to converge to a normal or to a Cauchy distribution. In case of a Cauchy limit distribution, we weaken and generalize the conditions of Luschgy, see Theorem 2.8 and Remark 2.9. Moreover, one can easily formulate conditions for asymptotic normality generalizing Luschgy's conditions, see Theorem 2.11. (We do not know whether any other limit distribution can appear.) We also prove that, under the conditions of Theorem 2.8 or Theorem 2.11, the MLE of $\alpha$ is asymptotically normal with an appropriate random normalizing factor, see Corollaries 2.10 and 2.12. Furthermore, we prove strong consistency of the MLE of $\alpha$, see Theorem 3.4.

The above results are generalizations of the case of an Ornstein-Uhlenbeck process, when $T=\infty$, $b\equiv1$, $a(x)=x$, $x\in\mathbb{R}$, and $\sigma\equiv1$. In this special case, if $\alpha<0$, then the MLE of $\alpha$ is asymptotically normal. This fact has been known for a long time, see, e.g., Example 1.35 in Kutoyants [15], (1.3) in Dietz and Kutoyants [7], page 189 in Basawa and Prakasa Rao [2] or Example 2.1 in Gushchin [9]. If $\alpha>0$, then the MLE of $\alpha$ is asymptotically Cauchy. This result has also been known for a while, see, e.g., Basawa and Scott [3], Kutoyants [14], Theorem 5.1 in Dietz and Kutoyants [7] or Example 2.1 in Gushchin [9]. If $\alpha=0$, then

\[
t\,\widehat\alpha(X^{(0)})_t\overset{\mathcal L}{=}\frac{\int_0^1 W_s\,\mathrm{d}W_s}{\int_0^1 W_s^2\,\mathrm{d}s},\qquad t\in(0,\infty),
\]

where $\overset{\mathcal L}{=}$ denotes equality in distribution; hence we have not only a limit theorem, but the appropriately normalized MLE of $\alpha$ has the same distribution for all $t\in(0,\infty)$. This has also been known for a long time, see, e.g., (1.4) in Dietz and Kutoyants [7], page 189 in Basawa and Prakasa Rao [2] or Example 2.1 in Gushchin [9]. We also note that this distribution is the same as the limit distribution of the Dickey-Fuller statistics, see, e.g., the Ph.D. Thesis of Bobkoski [5], or (7.14) and Theorem 9.5.1 in Tanaka [24]. The strong consistency of the MLE of $\alpha$ has also been known for a long time, see, e.g., Theorem 17.4 in Liptser and Shiryaev [18].

In the second part of the present paper we investigate the SDE (1.1) with $a(x)=x+r(x)$, $x\in\mathbb{R}$, where $r$ is a known Lipschitz function satisfying $r(x)=O(1+|x|^{\gamma})$ with some $\gamma\in[0,1)$. We give sufficient conditions under which the MLE of $\alpha$ normalized by the Fisher information converges to the distribution of $c\int_0^1 W_s\,\mathrm{d}W_s\big/\int_0^1 W_s^2\,\mathrm{d}s$, where $c=1/\sqrt2$ or $c=-1/\sqrt2$, see Theorem 4.3. Our proof is based on a generalization of Grönwall's inequality (see Lemma 4.4). Note that Dietz and Kutoyants [7] investigated the asymptotic properties of the MLE $\widehat\alpha(Y^{(\alpha)})_t$ of $\alpha$ in the special case $T=\infty$, $\alpha>0$, $b(t)=c$, $t>0$, with some $c>0$, and $\sigma\equiv1$. They showed

\[
\sqrt{\frac{c}{2\alpha}}\;e^{\alpha ct}\,\bigl(\widehat\alpha(Y^{(\alpha)})_t-\alpha\bigr)\xrightarrow{\ \mathcal L\ }\frac{\xi}{\eta^{(\alpha)}}\quad\text{as } t\to\infty,
\]

where $\xrightarrow{\mathcal L}$ denotes convergence in distribution,

\[
\eta^{(\alpha)}:=\int_0^\infty e^{-\alpha cs}\,\mathrm{d}B_s+\alpha c\int_0^\infty e^{-\alpha cs}\,r(Y^{(\alpha)}_s)\,\mathrm{d}s,
\]

and $\xi$ is a standard normally distributed random variable independent of $\eta^{(\alpha)}$, provided that $\mathbb{P}(\eta^{(\alpha)}=0)=0$. Dietz and Kutoyants [7, Theorem 4.1] also showed that $\widehat\alpha(Y^{(\alpha)})_t$ is strongly consistent provided that $\mathbb{P}(\eta^{(\alpha)}=0)=0$.

We emphasize that our results are valid in both cases $T\in(0,\infty)$ and $T=\infty$, and we develop a unified approach to handle these cases.

2 A special time inhomogeneous SDE
Let $T\in(0,\infty]$ be fixed. Let $b:[0,T)\to\mathbb{R}$ and $\sigma:[0,T)\to\mathbb{R}$ be continuous functions. Suppose that $\sigma(t)>0$ for all $t\in[0,T)$, and that there exists $t_0\in(0,T)$ such that $b(t)\ne0$ for all $t\in[t_0,T)$. For all $\alpha\in\mathbb{R}$, consider the SDE (1.3). Note that the drift and diffusion coefficients of the SDE (1.3) satisfy the local Lipschitz condition and the linear growth condition (see, e.g., Jacod and Shiryaev [10, Theorem 2.32, Chapter III]). By Jacod and Shiryaev [10, Theorem 2.32, Chapter III], the SDE (1.3) has a unique strong solution

\[
X^{(\alpha)}_t=\int_0^t\sigma(s)\exp\Bigl\{\alpha\int_s^t b(u)\,\mathrm{d}u\Bigr\}\,\mathrm{d}B_s,\qquad t\in[0,T), \tag{2.1}
\]

defined on a filtered probability space $\bigl(\Omega,\mathcal F,(\mathcal F_t)_{t\in[0,T)},\mathbb{P}\bigr)$ constructed with the help of the standard Wiener process $B$, see, e.g., Karatzas and Shreve [11, page 285]. This filtered probability space satisfies the so-called usual conditions, i.e., $(\Omega,\mathcal F,\mathbb{P})$ is complete, the filtration $(\mathcal F_t)_{t\in[0,T)}$ is right-continuous, $\mathcal F_0$ contains all the $\mathbb{P}$-null sets in $\mathcal F$, and $\mathcal F=\mathcal F_{T-}$, where $\mathcal F_{T-}:=\sigma\bigl(\bigcup_{t\in[0,T)}\mathcal F_t\bigr)$. Note that $(X^{(\alpha)}_t)_{t\in[0,T)}$ has continuous sample paths by the definition of a strong solution, see, e.g., Jacod and Shiryaev [10, Definition 2.24, Chapter III].

For all $\alpha\in\mathbb{R}$ and $t\in(0,T)$, let $\mathbb{P}_{X^{(\alpha)},t}$ denote the distribution of the process $(X^{(\alpha)}_s)_{s\in[0,t]}$ on $\bigl(C([0,t]),\mathcal B(C([0,t]))\bigr)$, where $C([0,t])$ and $\mathcal B(C([0,t]))$ denote the set of all continuous real-valued functions defined on $[0,t]$ and the Borel $\sigma$-field on $C([0,t])$, respectively.
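For the reader's convenience, here is the standard variation-of-constants check (not spelled out in the text) that the process in (2.1) solves (1.3). Writing $\beta(t):=\int_0^t b(u)\,\mathrm{d}u$ and $Z_t:=\int_0^t\sigma(s)e^{-\alpha\beta(s)}\,\mathrm{d}B_s$, the process in (2.1) is $X^{(\alpha)}_t=e^{\alpha\beta(t)}Z_t$, so the product rule for Itô processes gives

```latex
\mathrm{d}X^{(\alpha)}_t
  = \alpha b(t)\,e^{\alpha\beta(t)}Z_t\,\mathrm{d}t + e^{\alpha\beta(t)}\,\mathrm{d}Z_t
  = \alpha b(t)\,X^{(\alpha)}_t\,\mathrm{d}t
    + e^{\alpha\beta(t)}\sigma(t)e^{-\alpha\beta(t)}\,\mathrm{d}B_t
  = \alpha b(t)\,X^{(\alpha)}_t\,\mathrm{d}t + \sigma(t)\,\mathrm{d}B_t ,
```

with $X^{(\alpha)}_0=0$, as required; no Itô correction term appears because $e^{\alpha\beta(t)}$ has finite variation.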
The measures $\mathbb{P}_{X^{(\alpha)},t}$ and $\mathbb{P}_{X^{(0)},t}$ are equivalent and

\[
\frac{\mathrm{d}\mathbb{P}_{X^{(\alpha)},t}}{\mathrm{d}\mathbb{P}_{X^{(0)},t}}\Bigl(X^{(\alpha)}\big|_{[0,t]}\Bigr)
=\exp\Biggl\{\alpha\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)^2}\,\mathrm{d}X^{(\alpha)}_s
-\frac{\alpha^2}{2}\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s\Biggr\},
\]

see Liptser and Shiryaev [17, Theorem 7.20].

For all $t\in(0,T)$, the maximum likelihood estimator $\widehat\alpha(X^{(\alpha)})_t$ of the parameter $\alpha$ based on the observation $(X^{(\alpha)}_s)_{s\in[0,t]}$ is defined by

\[
\widehat\alpha(X^{(\alpha)})_t:=\operatorname*{arg\,max}_{\alpha\in\mathbb{R}}\,
\ln\Biggl(\frac{\mathrm{d}\mathbb{P}_{X^{(\alpha)},t}}{\mathrm{d}\mathbb{P}_{X^{(0)},t}}\Bigl(X^{(\alpha)}\big|_{[0,t]}\Bigr)\Biggr).
\]

The following lemma guarantees the existence of a unique MLE of $\alpha$.

2.1 Lemma. For all $\alpha\in\mathbb{R}$ and $t\in[t_0,T)$, we have

\[
\mathbb{P}\Biggl(\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s>0\Biggr)=1.
\]

Proof.
Let $\alpha\in\mathbb{R}$ be fixed. On the contrary, let us suppose that there exists some $t\in[t_0,T)$ such that $\mathbb{P}(A)>0$, where

\[
A:=\Biggl\{\omega\in\Omega:\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s(\omega)\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s=0\Biggr\}.
\]

For $\omega\in A$, we have $b(s)X^{(\alpha)}_s(\omega)=0$ for all $s\in[0,t]$, since $b$, $\sigma$ and $X^{(\alpha)}_\cdot(\omega)$ are continuous on $[0,T)$. Using the SDE (1.3), we get

\[
X^{(\alpha)}_s(\omega)=\Bigl(\int_0^s\sigma(u)\,\mathrm{d}B_u\Bigr)(\omega),\qquad s\in[0,t],\ \omega\in A,
\]

and hence

\[
b(s)\Bigl(\int_0^s\sigma(u)\,\mathrm{d}B_u\Bigr)(\omega)=0,\qquad s\in[0,t],\ \omega\in A.
\]

By $b(t_0)\ne0$, we conclude

\[
\mathbb{P}\Bigl(\int_0^{t_0}\sigma(s)\,\mathrm{d}B_s=0\Bigr)>0.
\]

Here $\int_0^{t_0}\sigma(s)\,\mathrm{d}B_s$ is a normally distributed random variable with mean $0$ and variance $\int_0^{t_0}\sigma(s)^2\,\mathrm{d}s>0$, since $\sigma(s)>0$ for all $s\in[0,T)$, which leads us to a contradiction. ✷

By Lemma 2.1, for all $t\in[t_0,T)$, there exists a unique maximum likelihood estimator $\widehat\alpha(X^{(\alpha)})_t$ of the parameter $\alpha$ based on the observation $(X^{(\alpha)}_s)_{s\in[0,t]}$, given by

\[
\widehat\alpha(X^{(\alpha)})_t=\frac{\displaystyle\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)^2}\,\mathrm{d}X^{(\alpha)}_s}{\displaystyle\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s},\qquad t\in[t_0,T).
\]

To be more precise, by Lemma 2.1, the maximum likelihood estimator $\widehat\alpha(X^{(\alpha)})_t$, $t\in[t_0,T)$, exists $\mathbb{P}$-almost surely. Using the SDE (1.3) we obtain

\[
\widehat\alpha(X^{(\alpha)})_t-\alpha=\frac{\displaystyle\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s}{\displaystyle\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s},\qquad t\in[t_0,T). \tag{2.2}
\]

For all $t\in(0,T)$, the Fisher information for $\alpha$ contained in the observation $(X^{(\alpha)}_s)_{s\in[0,t]}$ is defined by

\[
I_{X^{(\alpha)}}(t):=\mathbb{E}\Biggl(\biggl(\frac{\partial}{\partial\alpha}\ln\Bigl(\frac{\mathrm{d}\mathbb{P}_{X^{(\alpha)},t}}{\mathrm{d}\mathbb{P}_{X^{(0)},t}}\bigl(X^{(\alpha)}\big|_{[0,t]}\bigr)\Bigr)\biggr)^{\!2}\Biggr)
=\int_0^t\frac{b(s)^2\,\mathbb{E}\bigl(\bigl(X^{(\alpha)}_s\bigr)^2\bigr)}{\sigma(s)^2}\,\mathrm{d}s,
\]

where the last equality follows by the SDE (1.3) and Karatzas and Shreve [11, Proposition 3.2.10]. Note also that, again by Karatzas and Shreve [11, Proposition 3.2.10],

\[
\mathbb{E}\bigl(\bigl(X^{(\alpha)}_s\bigr)^2\bigr)=\int_0^s\sigma(u)^2\exp\Bigl\{2\alpha\int_u^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}u,\qquad s\in[0,T),
\]

and then, by the conditions on $b$ and $\sigma$, $\mathbb{E}\bigl((X^{(\alpha)}_s)^2\bigr)>0$ for $s\in(0,T)$, and $I_{X^{(\alpha)}}:(0,T)\to[0,\infty)$ is an increasing function with $I_{X^{(\alpha)}}(t)>0$ for $t\in[t_0,T)$.

The aim of the present paragraph is to formulate a theorem (see Theorem 2.4) which we will use for studying asymptotic properties of the MLE of $\alpha$. First we recall a limit theorem for continuous local martingales. Theorem 4.1 in van Zanten [26], which is stated for continuous local martingales with time interval $[0,\infty)$, can be applied to continuous local martingales with time interval $[0,T)$, $T\in(0,\infty)$, with appropriate modifications of the conditions, as follows.

2.2 Theorem.
Let $T\in(0,\infty]$ be fixed and let $\bigl(\Omega,\mathcal F,(\mathcal F_t)_{t\in[0,T)},\mathbb{P}\bigr)$ be a filtered probability space satisfying the usual conditions. Let $(M_t)_{t\in[0,T)}$ be a continuous local martingale with respect to the filtration $(\mathcal F_t)_{t\in[0,T)}$ such that $\mathbb{P}(M_0=0)=1$. Suppose that there exists a function $Q:[0,T)\to\mathbb{R}\setminus\{0\}$ such that $\lim_{t\uparrow T}Q(t)=0$ and

\[
Q(t)^2\,\langle M\rangle_t\xrightarrow{\ \mathbb{P}\ }\eta^2\quad\text{as } t\uparrow T,
\]

where $\eta$ is a random variable defined on $(\Omega,\mathcal F,\mathbb{P})$, and $(\langle M\rangle_t)_{t\in[0,T)}$ denotes the quadratic variation of $M$. Then for each random variable $Z$ defined on $(\Omega,\mathcal F,\mathbb{P})$, we have

\[
(Q(t)M_t,\,Z)\xrightarrow{\ \mathcal L\ }(\eta\xi,\,Z)\quad\text{as } t\uparrow T,
\]

where $\xi$ is a standard normally distributed random variable independent of $(\eta,Z)$.

To derive a consequence of Theorem 2.2 we need the following lemma, which is a multidimensional version of Lemma 3 due to Kátai and Mogyoródi [12].
2.3 Lemma. Let $T\in(0,\infty]$ be fixed. Suppose that $(X_t)_{t\in[0,T)}$ and $(Y_t)_{t\in[0,T)}$ are stochastic processes on a probability space $(\Omega,\mathcal F,\mathbb{P})$ such that $X_t$ converges in distribution as $t\uparrow T$ and $Y_t\xrightarrow{\mathbb{P}}Y$ as $t\uparrow T$, where $Y$ is a random variable defined on $(\Omega,\mathcal F,\mathbb{P})$. If $g:\mathbb{R}^2\to\mathbb{R}^d$ is a continuous function (where $d\in\mathbb{N}$), then

\[
g(X_t,Y_t)-g(X_t,Y)\xrightarrow{\ \mathbb{P}\ }0\quad\text{as } t\uparrow T.
\]

Proof.
The assertion follows from Kátai and Mogyoródi [12, Lemma 3], using that convergence in probability of a $d$-dimensional stochastic process is equivalent to the convergence in probability of all of its coordinates separately (see, e.g., van der Vaart [25, page 10]). ✷

As a consequence of Theorem 2.2 and Lemma 2.3 one can derive the following theorem.
2.4 Theorem. Let $\alpha\in\mathbb{R}$. Suppose that there exists a function $Q:[0,T)\to\mathbb{R}\setminus\{0\}$ such that $\lim_{t\uparrow T}Q(t)=0$ and

\[
Q(t)^2\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s\xrightarrow{\ \mathbb{P}\ }\eta^2\quad\text{as } t\uparrow T, \tag{2.3}
\]

where $\eta$ is a random variable defined on $(\Omega,\mathcal F,\mathbb{P})$. Then

\[
\Biggl(Q(t)\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s,\;
Q(t)^2\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s\Biggr)
\xrightarrow{\ \mathcal L\ }(\eta\xi,\,\eta^2)\quad\text{as } t\uparrow T,
\]

where $\xi$ is a standard normally distributed random variable independent of $\eta$. Moreover, if $\mathbb{P}(\eta>0)=1$, then

\[
\frac{1}{Q(t)}\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)\xrightarrow{\ \mathcal L\ }\frac{\xi}{\eta}\quad\text{as } t\uparrow T.
\]

Proof. With the notation $M_t:=\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s$, $t\in[0,T)$, the process $(M_t)_{t\in[0,T)}$ is a continuous square integrable martingale with respect to the filtration $(\mathcal F_t)_{t\in[0,T)}$. By (2.3) and Theorem 2.2, we have

\[
\Biggl(Q(t)\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s,\;\eta^2\Biggr)\xrightarrow{\ \mathcal L\ }(\eta\xi,\,\eta^2)\quad\text{as } t\uparrow T.
\]

By (2.3) and Lemma 2.3, we get

\[
\Biggl(Q(t)\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s,\;
Q(t)^2\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s\Biggr)
-\Biggl(Q(t)\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s,\;\eta^2\Biggr)\xrightarrow{\ \mathbb{P}\ }0
\]

as $t\uparrow T$. This implies the first part of the assertion using Slutsky's lemma (see, e.g., van der Vaart [25, Lemma 2.8]). Using (2.2) and the continuous mapping theorem (see, e.g., van der Vaart [25, Theorem 2.3]), we also have the second part of the assertion. ✷

Next we turn to the investigation of the asymptotic properties of the MLE of $\alpha$.

2.5 Theorem. Suppose that $\alpha\in\mathbb{R}$ is such that

\[
\lim_{t\uparrow T}I_{X^{(\alpha)}}(t)=\infty, \tag{2.4}
\]
\[
\lim_{t\uparrow T}\frac{b(t)}{\sigma(t)^2}\exp\Bigl\{2\alpha\int_0^t b(w)\,\mathrm{d}w\Bigr\}=C\in\mathbb{R}\setminus\{0\}. \tag{2.5}
\]

Then

\[
\sqrt{I_{X^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)\xrightarrow{\ \mathcal L\ }\frac{\operatorname{sign}(C)}{\sqrt2}\,\frac{\int_0^1 W_s\,\mathrm{d}W_s}{\int_0^1 W_s^2\,\mathrm{d}s}\quad\text{as } t\uparrow T,
\]

where $\operatorname{sign}$ denotes the signum function and $(W_s)_{s\in[0,1]}$ is a standard Wiener process.

For the proof of Theorem 2.5 we need the following lemma.
2.6 Lemma. Let $\alpha\in\mathbb{R}$ be such that condition (2.5) is satisfied. Then (2.4) is equivalent to any of the following conditions:

\[
\lim_{t\uparrow T}\int_0^t|b(v)|\,\mathrm{d}v=\infty, \tag{2.6}
\]
\[
\lim_{t\uparrow T}\int_0^t\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}s=\infty. \tag{2.7}
\]

Proof.
By (2.5), there exist $c_2>c_1>0$ and $t_1\in(0,T)$ such that

\[
0<c_1|b(t)|\exp\Bigl\{2\alpha\int_0^t b(w)\,\mathrm{d}w\Bigr\}\le\sigma(t)^2\le c_2|b(t)|\exp\Bigl\{2\alpha\int_0^t b(w)\,\mathrm{d}w\Bigr\},\qquad t\in[t_1,T). \tag{2.8}
\]

First we show that (2.4) and (2.6) are equivalent. By (2.8), we have for all $t\in[t_1,T)$,

\[
I_{X^{(\alpha)}}(t)-I_{X^{(\alpha)}}(t_1)
=\int_{t_1}^t\frac{b(s)^2}{\sigma(s)^2}\biggl(\int_0^s\sigma(u)^2\exp\Bigl\{2\alpha\int_u^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}u\biggr)\mathrm{d}s
\]
\[
\le\int_{t_1}^t\frac{b(s)^2}{\sigma(s)^2}\exp\Bigl\{2\alpha\int_0^s b(v)\,\mathrm{d}v\Bigr\}
\biggl(\int_0^{t_1}\sigma(u)^2\exp\Bigl\{-2\alpha\int_0^u b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}u
+c_2\int_{t_1}^s|b(u)|\,\mathrm{d}u\biggr)\mathrm{d}s
\le a_1\int_{t_1}^t|b(u)|\,\mathrm{d}u+a_2\biggl(\int_{t_1}^t|b(u)|\,\mathrm{d}u\biggr)^{\!2},
\]

where

\[
a_1:=\frac1{c_1}\int_0^{t_1}\sigma(u)^2\exp\Bigl\{-2\alpha\int_0^u b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}u,
\qquad a_2:=\frac{c_2}{2c_1}.
\]

Moreover, again by (2.8), for all $t\in[t_1,T)$, we have

\[
I_{X^{(\alpha)}}(t)-I_{X^{(\alpha)}}(t_1)>\frac{c_1}{2c_2}\biggl(\int_{t_1}^t|b(u)|\,\mathrm{d}u\biggr)^{\!2}.
\]

This implies the equivalence of (2.4) and (2.6), since if $(x_n)_{n\in\mathbb N}$ is a monotone increasing sequence of real numbers and $a_1>0$, $a_2>0$, then $a_1x_n+a_2x_n^2$ tends to $\infty$ if and only if $x_n\to\infty$. Indeed, since $(x_n)_{n\in\mathbb N}$ is monotone increasing, either $\lim_{n\to\infty}x_n\in\mathbb{R}$ exists or $x_n\uparrow\infty$; in the first case $a_1x_n+a_2x_n^2$ does not converge to $\infty$.

Now we show that (2.6) and (2.7) are equivalent. Using (2.8), we have

\[
\int_0^{t_1}\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}s+c_1\int_{t_1}^t|b(s)|\,\mathrm{d}s
\le\int_0^t\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}s
\le\int_0^{t_1}\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}s+c_2\int_{t_1}^t|b(s)|\,\mathrm{d}s
\]

for $t\in[t_1,T)$, which implies the corresponding part of the assertion. ✷

Proof of Theorem 2.5.
Note that condition (2.5) yields that there exists $t_0\in(0,T)$ such that $b(t)\ne0$ for all $t\in[t_0,T)$. By Lemma 2.6, since (2.4) is assumed, conditions (2.6) and (2.7) are also satisfied. By (2.7), for each $t\in(0,T)$, there exists a function $\tau_t:[0,\infty)\to[0,T)$ such that

\[
\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^{\tau_t(u)}\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(w)\,\mathrm{d}w\Bigr\}\,\mathrm{d}s=u,\qquad u\in[0,\infty).
\]

Clearly, $\tau_t$ is strictly increasing (hence invertible), and again by (2.7), $\lim_{u\to\infty}\tau_t(u)=T$ for all $t\in(0,T)$, and

\[
\tau_t^{-1}(v)=\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^v\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(w)\,\mathrm{d}w\Bigr\}\,\mathrm{d}s,\qquad v\in[0,T).
\]

Then $\lim_{v\uparrow T}\tau_t^{-1}(v)=\infty$ for all $t\in(0,T)$, and

\[
(\tau_t^{-1})'(v)=\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\,\sigma(v)^2\exp\Bigl\{-2\alpha\int_0^v b(w)\,\mathrm{d}w\Bigr\},\qquad v\in(0,T).
\]

By the theorem on differentiation of the inverse function, $\tau_t$ is also continuously differentiable and

\[
\tau_t'(u)=\sqrt{I_{X^{(\alpha)}}(t)}\;\sigma(\tau_t(u))^{-2}\exp\Bigl\{2\alpha\int_0^{\tau_t(u)}b(v)\,\mathrm{d}v\Bigr\},\qquad u\in(0,\infty).
\]

The process

\[
M(X^{(\alpha)})_t:=X^{(\alpha)}_t\exp\Bigl\{-\alpha\int_0^t b(u)\,\mathrm{d}u\Bigr\}=\int_0^t\sigma(s)\exp\Bigl\{-\alpha\int_0^s b(u)\,\mathrm{d}u\Bigr\}\,\mathrm{d}B_s,\qquad t\in[0,T), \tag{2.9}
\]

is a continuous square-integrable martingale with respect to the filtration induced by $B$. With this notation we have for all $t\in(0,T)$,

\[
\frac1{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b(v)^2\bigl(X^{(\alpha)}_v\bigr)^2}{\sigma(v)^2}\,\mathrm{d}v
=\frac1{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b(v)^2}{\sigma(v)^2}\exp\Bigl\{2\alpha\int_0^v b(w)\,\mathrm{d}w\Bigr\}\bigl(M(X^{(\alpha)})_v\bigr)^2\,\mathrm{d}v
\]
\[
=\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^{\tau_t^{-1}(t)}\frac{b(\tau_t(u))^2}{\sigma(\tau_t(u))^4}\exp\Bigl\{4\alpha\int_0^{\tau_t(u)}b(w)\,\mathrm{d}w\Bigr\}\bigl(M(X^{(\alpha)})_{\tau_t(u)}\bigr)^2\,\mathrm{d}u.
\]

Then for all $t\in(0,T)$,

\[
\frac1{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b(v)^2\bigl(X^{(\alpha)}_v\bigr)^2}{\sigma(v)^2}\,\mathrm{d}v
=\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))^2\bigl(\widetilde M(X^{(\alpha)},t)_u\bigr)^2\,\mathrm{d}u, \tag{2.10}
\]

where

\[
c(s):=\frac{b(s)}{\sigma(s)^2}\exp\Bigl\{2\alpha\int_0^s b(w)\,\mathrm{d}w\Bigr\},\qquad s\in[0,T),
\]

and

\[
\widetilde M(X^{(\alpha)},t)_u:=\frac1{\sqrt[4]{I_{X^{(\alpha)}}(t)}}\,M(X^{(\alpha)})_{\tau_t(u)},\qquad u\in[0,\infty).
\]

By (2.5), we have $\lim_{s\uparrow T}c(s)=C$, and for all $t\in(0,T)$, the process $(\widetilde M(X^{(\alpha)},t)_u)_{u\in[0,\infty)}$ is a continuous Gauss martingale with respect to the filtration $(\widetilde{\mathcal F}^t_u)_{u\ge0}$, where $\widetilde{\mathcal F}^t_u:=\sigma\bigl(B_v,\ v\le\tau_t(u)\bigr)$, $u\ge0$. Moreover, for all $t\in(0,T)$, the process $(\widetilde M(X^{(\alpha)},t)_u)_{u\in[0,\infty)}$ has quadratic variation

\[
\langle\widetilde M(X^{(\alpha)},t)\rangle_u=\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^{\tau_t(u)}\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(w)\,\mathrm{d}w\Bigr\}\,\mathrm{d}s=u,\qquad u\in[0,\infty).
\]

Consequently, for all $t\in(0,T)$, the process $(\widetilde M(X^{(\alpha)},t)_u)_{u\in[0,\infty)}$ is a standard Wiener process with respect to the filtration $(\widetilde{\mathcal F}^t_u)_{u\ge0}$. In a similar way we get

\[
\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(v)X^{(\alpha)}_v}{\sigma(v)}\,\mathrm{d}B_v
=\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(v)}{\sigma(v)^2}\exp\Bigl\{2\alpha\int_0^v b(w)\,\mathrm{d}w\Bigr\}M(X^{(\alpha)})_v\,\mathrm{d}M(X^{(\alpha)})_v
\]
\[
=\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^{\tau_t^{-1}(t)}\frac{b(\tau_t(u))}{\sigma(\tau_t(u))^2}\exp\Bigl\{2\alpha\int_0^{\tau_t(u)}b(w)\,\mathrm{d}w\Bigr\}M(X^{(\alpha)})_{\tau_t(u)}\,\mathrm{d}M(X^{(\alpha)})_{\tau_t(u)}
=\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))\,\widetilde M(X^{(\alpha)},t)_u\,\mathrm{d}\widetilde M(X^{(\alpha)},t)_u, \tag{2.11}
\]

for $t\in(0,T)$, where the last but one equality follows by the construction of the stochastic integral with respect to $M(X^{(\alpha)})$, see, e.g., Jacod and Shiryaev [10, Proposition 4.44, Chapter I]. By assumption (2.4) and the fact that $b(t)\ne0$ for all $t\in[t_0,T)$, we can use L'Hospital's rule and obtain

\[
\lim_{t\uparrow T}\bigl(\tau_t^{-1}(t)\bigr)^2
=\lim_{t\uparrow T}\frac{\Bigl(\int_0^t\sigma(s)^2\exp\bigl\{-2\alpha\int_0^s b(w)\,\mathrm{d}w\bigr\}\,\mathrm{d}s\Bigr)^2}{I_{X^{(\alpha)}}(t)}
=\lim_{t\uparrow T}\frac{2\,\sigma(t)^2\exp\bigl\{-2\alpha\int_0^t b(w)\,\mathrm{d}w\bigr\}\int_0^t\sigma(s)^2\exp\bigl\{-2\alpha\int_0^s b(w)\,\mathrm{d}w\bigr\}\,\mathrm{d}s}{\frac{b(t)^2}{\sigma(t)^2}\int_0^t\sigma(s)^2\exp\bigl\{2\alpha\int_s^t b(w)\,\mathrm{d}w\bigr\}\,\mathrm{d}s}
\]
\[
=\lim_{t\uparrow T}\frac{2\,\sigma(t)^4}{b(t)^2}\exp\Bigl\{-4\alpha\int_0^t b(w)\,\mathrm{d}w\Bigr\}
=\lim_{t\uparrow T}\frac{2}{c(t)^2}=\frac{2}{C^2}, \tag{2.12}
\]

where the last equality follows by (2.5). Hence, using that $\tau_t^{-1}(t)\in[0,\infty)$, we also have $\lim_{t\uparrow T}\tau_t^{-1}(t)=\sqrt2/|C|$.

Now we prove that

\[
\Biggl(\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s,\;
\frac1{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b(s)^2\bigl(X^{(\alpha)}_s\bigr)^2}{\sigma(s)^2}\,\mathrm{d}s\Biggr)
\xrightarrow{\ \mathcal L\ }\Biggl(C\int_0^{\sqrt2/|C|}W_s\,\mathrm{d}W_s,\;C^2\int_0^{\sqrt2/|C|}W_s^2\,\mathrm{d}s\Biggr)\quad\text{as } t\uparrow T. \tag{2.13}
\]

Using (2.10), (2.11) and that $(\widetilde M(X^{(\alpha)},t)_u)_{u\in[0,\infty)}$ is a standard Wiener process for all $t\in(0,T)$, we conclude that the pair on the left-hand side of (2.13) equals in distribution

\[
\Biggl(\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))\,W_u\,\mathrm{d}W_u,\;\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))^2\,W_u^2\,\mathrm{d}u\Biggr)
\]

for all $t\in(0,T)$ with some fixed standard Wiener process $(W_u)_{u\ge0}$. Hence to prove (2.13), using Slutsky's lemma, it is enough to check that the following convergences hold:

\[
\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]W_u\,\mathrm{d}W_u\xrightarrow{\ \mathcal L\ }0\quad\text{as } t\uparrow T, \tag{2.14}
\]
\[
\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))\,W_u\,\mathrm{d}W_u-\int_0^{\sqrt2/|C|}c(\tau_t(u))\,W_u\,\mathrm{d}W_u\xrightarrow{\ \mathcal L\ }0\quad\text{as } t\uparrow T, \tag{2.15}
\]
\[
\mathbb{P}\Biggl(\lim_{t\uparrow T}\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))^2-C^2\bigr]W_u^2\,\mathrm{d}u=0\Biggr)=1, \tag{2.16}
\]
\[
\mathbb{P}\Biggl(\lim_{t\uparrow T}\biggl(\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))^2\,W_u^2\,\mathrm{d}u-\int_0^{\sqrt2/|C|}c(\tau_t(u))^2\,W_u^2\,\mathrm{d}u\biggr)=0\Biggr)=1. \tag{2.17}
\]

Using that $\lim_{v\to\infty}\tau_t(v)=T$ for all $t\in(0,T)$ and $\lim_{t\uparrow T}\tau_t(v)=T$ for all $v\in(0,\infty)$, first we prove (2.14). An easy calculation shows that for all $t\in(0,T)$,

\[
\mathbb{E}\Biggl(\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]W_u\,\mathrm{d}W_u\Biggr)^{\!2}
=\mathbb{E}\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2W_u^2\,\mathrm{d}u
=\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2u\,\mathrm{d}u
\le\frac{\sqrt2}{|C|}\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u.
\]

The only non-trivial step is to verify that the first equality holds. By Karatzas and Shreve [11, Proposition 3.2.10], for this equality it is enough to check that

\[
\mathbb{E}\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2W_u^2\,\mathrm{d}u
=\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2u\,\mathrm{d}u<\infty,\qquad t\in(0,T),
\]

which holds, since the integrand $u\mapsto\bigl[c(\tau_t(u))-C\bigr]^2u$ is continuous on $[0,\sqrt2/|C|]$ and hence bounded. Finally, we prove that

\[
\lim_{t\uparrow T}\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u=0. \tag{2.18}
\]

Since $\lim_{s\uparrow T}c(s)=C\in\mathbb{R}\setminus\{0\}$, for all $\delta>0$ there exists $\varepsilon>0$ such that $|c(s)-C|<\delta$ for all $s\in(T-\varepsilon,T)$. For all $\varepsilon>0$ and $u_0\in(0,\infty)$ there exists $t_1\in(0,T)$ such that $\tau_t(u_0)\in(T-\varepsilon,T)$ for all $t\in(t_1,T)$, and hence $\tau_t(u)\in(T-\varepsilon,T)$ for all $t\in(t_1,T)$ and $u\ge u_0$, since $\tau_t$ is increasing. Consequently, for all $\delta>0$ and $u_0\in(0,\infty)$ there exists $t_1\in(0,T)$ such that $|c(\tau_t(u))-C|<\delta$ for all $t\in(t_1,T)$ and all $u\ge u_0$. Thus for all $\delta>0$ and $u_0\in(0,\sqrt2/|C|)$, there exists $t_1\in(0,T)$ such that

\[
\int_{u_0}^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u\le\frac{\sqrt2}{|C|}\,\delta^2,\qquad t\in(t_1,T).
\]

Then for all $\delta>0$ and $u_0\in(0,\sqrt2/|C|)$, there exists $t_1\in(0,T)$ such that

\[
\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u
=\int_0^{u_0}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u+\int_{u_0}^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u
\le\sup_{(u,t)\in[0,u_0]\times[t_1,T)}\bigl[c(\tau_t(u))-C\bigr]^2\,u_0+\frac{\sqrt2}{|C|}\,\delta^2,\qquad t\in(t_1,T).
\]

Since $\lim_{s\uparrow T}c(s)=C\in\mathbb{R}\setminus\{0\}$ implies that there exists $K_1\in(0,\infty)$ such that

\[
\sup_{(u,t)\in[0,u_0]\times[t_1,T)}\bigl[c(\tau_t(u))-C\bigr]^2\le\sup_{s\in[0,T)}\bigl(c(s)-C\bigr)^2\le K_1,
\]

we have that for all $\delta>0$ and $u_0\in(0,\sqrt2/|C|)$, there exists $t_1\in(0,T)$ such that

\[
\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u\le K_1u_0+\frac{\sqrt2}{|C|}\,\delta^2,\qquad t\in(t_1,T),
\]

which yields (2.18), and then we obtain (2.14).

Now we check (2.15). Similarly as above, we have for all $t\in(0,T)$,

\[
\mathbb{E}\Biggl(\int_0^{\tau_t^{-1}(t)}c(\tau_t(u))\,W_u\,\mathrm{d}W_u-\int_0^{\sqrt2/|C|}c(\tau_t(u))\,W_u\,\mathrm{d}W_u\Biggr)^{\!2}
=\int_{\frac{\sqrt2}{|C|}\wedge\tau_t^{-1}(t)}^{\frac{\sqrt2}{|C|}\vee\tau_t^{-1}(t)}c(\tau_t(u))^2\,u\,\mathrm{d}u
\le K_2\Biggl(\frac{\sqrt2}{|C|}\vee\tau_t^{-1}(t)\Biggr)\Biggl|\frac{\sqrt2}{|C|}-\tau_t^{-1}(t)\Biggr|,
\]

where the last step follows from $K_2:=\sup_{s\in[0,T)}c(s)^2<\infty$, since $\lim_{s\uparrow T}c(s)=C\in\mathbb{R}\setminus\{0\}$. By (2.12), we have $\lim_{t\uparrow T}\tau_t^{-1}(t)=\sqrt2/|C|$ and hence

\[
\lim_{t\uparrow T}\Biggl|\frac{\sqrt2}{|C|}-\tau_t^{-1}(t)\Biggr|=0,
\]

which implies (2.15).

Now we check (2.16). Using that, by the Cauchy-Schwarz inequality,

\[
\Biggl(\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))^2-C^2\bigr]W_u^2\,\mathrm{d}u\Biggr)^{\!2}
\le\Biggl(\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))^2-C^2\bigr]^2\,\mathrm{d}u\Biggr)\Biggl(\int_0^{\sqrt2/|C|}W_u^4\,\mathrm{d}u\Biggr),\qquad t\in(0,T),
\]

it is enough to check that

\[
\lim_{t\uparrow T}\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))^2-C^2\bigr]^2\,\mathrm{d}u=0,
\]

since $\mathbb{P}\bigl(\int_0^{\sqrt2/|C|}W_u^4\,\mathrm{d}u<\infty\bigr)=1$. Using that $\lim_{s\uparrow T}c(s)=C\in\mathbb{R}\setminus\{0\}$ and that $c$ is continuous, there exists $K_3\in(0,\infty)$ such that $\sup_{s\in[0,T)}|c(s)|\le K_3$. Hence

\[
\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))^2-C^2\bigr]^2\,\mathrm{d}u
=\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))+C\bigr]^2\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u
\le(K_3+|C|)^2\int_0^{\sqrt2/|C|}\bigl[c(\tau_t(u))-C\bigr]^2\,\mathrm{d}u\to0\quad\text{as } t\uparrow T,
\]

where the last step follows by (2.18). Using the very same arguments as above, one can check (2.17).

By (2.13) and the continuous mapping theorem, we have

\[
\sqrt{I_{X^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)
=\frac{\frac1{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,\mathrm{d}B_s}{\frac1{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b(s)^2(X^{(\alpha)}_s)^2}{\sigma(s)^2}\,\mathrm{d}s}
\xrightarrow{\ \mathcal L\ }\frac{C\int_0^{\sqrt2/|C|}W_s\,\mathrm{d}W_s}{C^2\int_0^{\sqrt2/|C|}W_s^2\,\mathrm{d}s}\quad\text{as } t\uparrow T.
\]

Using that for all $\lambda>0$, the process $(\lambda^{-1/2}W_{\lambda t})_{t\ge0}$ is a standard Wiener process, by the substitution $s=(\sqrt2/|C|)u$, $u\in[0,1]$, one can check that the random variable

\[
\frac{C\int_0^{\sqrt2/|C|}W_s\,\mathrm{d}W_s}{C^2\int_0^{\sqrt2/|C|}W_s^2\,\mathrm{d}s}
\]

has the same distribution as $\dfrac{\operatorname{sign}(C)}{\sqrt2}\,\dfrac{\int_0^1W_s\,\mathrm{d}W_s}{\int_0^1W_s^2\,\mathrm{d}s}$. ✷

For historical fidelity, we remark that the corresponding part of Example 8.1 in Luschgy [20] is a special case of our Theorem 2.5, and in our proof we used some ideas of Luschgy's example. Note also that, by Lemma 2.6, condition (2.4) in Theorem 2.5 can be replaced by (2.6) or (2.7).

In the next remark we give an example of functions $b$ and $\sigma$ for which conditions (2.4) and (2.5) are satisfied. First let $\alpha\ne0$. Let $\sigma:[0,T)\to(0,\infty)$ be some continuously differentiable function such that $\int_0^T\sigma(s)^2\,\mathrm{d}s:=\lim_{u\uparrow T}\int_0^u\sigma(s)^2\,\mathrm{d}s<\infty$, and let

\[
b(t):=-\frac1{2\alpha}\,\frac{\sigma(t)^2}{\int_t^T\sigma(s)^2\,\mathrm{d}s},\qquad t\in[0,T).
\]

Then for $0\le s<t<T$,

\[
\int_s^t|b(u)|\,\mathrm{d}u=-\frac1{2|\alpha|}\ln\Biggl(\frac{\int_t^T\sigma(v)^2\,\mathrm{d}v}{\int_s^T\sigma(v)^2\,\mathrm{d}v}\Biggr),
\]

which implies that $\lim_{t\uparrow T}\int_0^t|b(u)|\,\mathrm{d}u=\infty$. Moreover,

\[
\frac{b(t)}{\sigma(t)^2}\exp\Bigl\{2\alpha\int_0^t b(u)\,\mathrm{d}u\Bigr\}=-\frac1{2\alpha\int_0^T\sigma(s)^2\,\mathrm{d}s}\in\mathbb{R}\setminus\{0\},
\]

which implies (2.5). By Lemma 2.6, since $\lim_{t\uparrow T}\int_0^t|b(u)|\,\mathrm{d}u=\infty$, condition (2.4) is also satisfied.

Let us suppose now that $\alpha=0$. Let $\sigma:[0,T)\to(0,\infty)$ be some continuously differentiable function such that $\int_0^T\sigma(s)^2\,\mathrm{d}s=\infty$, and let $b(t):=\sigma(t)^2$, $t\in[0,T)$. Then $\lim_{t\uparrow T}\int_0^t|b(u)|\,\mathrm{d}u=\infty$, and, with $\alpha=0$,

\[
\frac{b(t)}{\sigma(t)^2}\exp\Bigl\{2\alpha\int_0^t b(u)\,\mathrm{d}u\Bigr\}=1\in\mathbb{R}\setminus\{0\},\qquad t\in[0,T),
\]

which implies (2.5). By Lemma 2.6, since $\lim_{t\uparrow T}\int_0^t|b(u)|\,\mathrm{d}u=\infty$, condition (2.4) is also satisfied.
Next we deal with the case of a Cauchy limit distribution.

2.8 Theorem. Suppose that $\alpha\in\mathbb{R}$ is such that

\[
\lim_{t\uparrow T}I_{X^{(\alpha)}}(t)=\infty, \tag{2.19}
\]
\[
\lim_{t\uparrow T}\int_0^t\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(v)\,\mathrm{d}v\Bigr\}\,\mathrm{d}s<\infty. \tag{2.20}
\]

Then

\[
\sqrt{I_{X^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)\xrightarrow{\ \mathcal L\ }\zeta\quad\text{as } t\uparrow T,
\]

where $\zeta$ is a random variable with standard Cauchy distribution, admitting the density function $\frac1{\pi(1+x^2)}$, $x\in\mathbb{R}$.

Proof.
The process $(M(X^{(\alpha)})_t)_{t\in[0,T)}$ introduced in (2.9) is a continuous square-integrable martingale with respect to the filtration induced by $B$ and with quadratic variation

\[
\langle M(X^{(\alpha)})\rangle_t=\int_0^t\sigma(s)^2\exp\Bigl\{-2\alpha\int_0^s b(u)\,\mathrm{d}u\Bigr\}\,\mathrm{d}s,\qquad t\in[0,T).
\]

By (2.20), we have $\lim_{t\uparrow T}\langle M(X^{(\alpha)})\rangle_t<\infty$. Hence Proposition 1.26 in Chapter IV and Proposition 1.8 in Chapter V in Revuz and Yor [22] imply that the limit $M(X^{(\alpha)})_T:=\lim_{t\uparrow T}M(X^{(\alpha)})_t$ exists almost surely. Since $M(X^{(\alpha)})_t$ is normally distributed with mean $0$ and variance $\langle M(X^{(\alpha)})\rangle_t$ for all $t\in[0,T)$, the random variable $M(X^{(\alpha)})_T$ is also normally distributed with mean $0$ and variance $\lim_{t\uparrow T}\langle M(X^{(\alpha)})\rangle_t$. Indeed, normally distributed random variables can converge in distribution only to a normally distributed random variable, by the continuity theorem, see, e.g., page 304 in Shiryaev [23]. Hence for the random variable $M(X^{(\alpha)})_T$ we obtain

\[
\mathbb{P}\Bigl(\lim_{t\uparrow T}M(X^{(\alpha)})_t=M(X^{(\alpha)})_T\Bigr)=1
\quad\text{and}\quad
M(X^{(\alpha)})_T\overset{\mathcal L}{=}\mathcal N\Bigl(0,\ \lim_{t\uparrow T}\langle M(X^{(\alpha)})\rangle_t\Bigr).
\]

By (2.19) and the fact that $b(t)\ne0$ for all $t\in[t_0,T)$, we can use L'Hospital's rule and obtain

\[
\lim_{t\uparrow T}\frac{\int_0^t\frac{b(s)^2(X^{(\alpha)}_s)^2}{\sigma(s)^2}\,\mathrm{d}s}{I_{X^{(\alpha)}}(t)}
=\lim_{t\uparrow T}\frac{\int_0^t\frac{b(s)^2(X^{(\alpha)}_s)^2}{\sigma(s)^2}\,\mathrm{d}s}{\int_0^t\frac{b(s)^2\,\mathbb{E}((X^{(\alpha)}_s)^2)}{\sigma(s)^2}\,\mathrm{d}s}
=\lim_{t\uparrow T}\frac{(X^{(\alpha)}_t)^2}{\mathbb{E}\bigl((X^{(\alpha)}_t)^2\bigr)}
=\lim_{t\uparrow T}\frac{(M(X^{(\alpha)})_t)^2\exp\bigl\{2\alpha\int_0^t b(v)\,\mathrm{d}v\bigr\}}{\int_0^t\sigma(u)^2\exp\bigl\{2\alpha\int_u^t b(v)\,\mathrm{d}v\bigr\}\,\mathrm{d}u}
\]
\[
=\lim_{t\uparrow T}\frac{(M(X^{(\alpha)})_t)^2}{\int_0^t\sigma(u)^2\exp\bigl\{-2\alpha\int_0^u b(v)\,\mathrm{d}v\bigr\}\,\mathrm{d}u}
=\frac{(M(X^{(\alpha)})_T)^2}{\lim_{t\uparrow T}\langle M(X^{(\alpha)})\rangle_t}=:\xi^2\qquad\mathbb{P}\text{-almost surely},
\]

where $\xi\overset{\mathcal L}{=}\mathcal N(0,1)$. Then, applying Theorem 2.4 with $Q(t):=1/\sqrt{I_{X^{(\alpha)}}(t)}$, $t\in(0,T)$, we have

\[
\sqrt{I_{X^{(\alpha)}}(t)}\,\bigl(\widehat\alpha(X^{(\alpha)})_t-\alpha\bigr)\xrightarrow{\ \mathcal L\ }\frac{\eta}{|\xi|},
\]

where $\eta$ is a standard normally distributed random variable independent of $\xi$. This yields the assertion, since one can easily check that $\eta/|\xi|$ has a standard Cauchy distribution. ✷
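In the proof above, the Cauchy limit arises as a ratio $\eta/|\xi|$ of independent standard normal random variables. That such a ratio is indeed standard Cauchy can be checked numerically (a throwaway sketch; sample size, evaluation points and seed are arbitrary choices) by comparing the empirical CDF with $F(x)=\tfrac12+\arctan(x)/\pi$:

```python
import numpy as np

def max_cdf_gap(n=200_000, seed=3):
    """Compare the empirical CDF of eta/|xi| (eta, xi independent standard
    normals) with the standard Cauchy CDF F(x) = 1/2 + arctan(x)/pi
    at a few evaluation points; return the largest absolute gap."""
    rng = np.random.default_rng(seed)
    ratio = rng.normal(size=n) / np.abs(rng.normal(size=n))
    xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    empirical = np.array([(ratio <= x).mean() for x in xs])
    cauchy_cdf = 0.5 + np.arctan(xs) / np.pi
    return np.max(np.abs(empirical - cauchy_cdf))

gap = max_cdf_gap()   # of order n^{-1/2}, i.e. well below 0.01 here
```

The point is only the distributional identity; no claim is made about rates for the MLE itself.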
✷ We note that if condition (2.20) is satisfied then lim t ↑ T R t | b ( s ) | d s = ∞ yieldscondition (2.19). Indeed, for all t ∈ (0 , T ), I X ( α ) ( t ) = Z t b ( s ) σ ( s ) exp (cid:26) α Z s b ( u ) d u (cid:27) Z s σ ( u ) exp (cid:26) − α Z u b ( v ) d v (cid:27) d u d s, and, by (2.20), for all ε > t ε ∈ [0 , T ) such that for all t ∈ [ t ε , T ), (cid:12)(cid:12)(cid:12)(cid:12)Z t σ ( s ) exp (cid:26) − α Z s b ( v ) d v (cid:27) d s − Z T σ ( s ) exp (cid:26) − α Z s b ( v ) d v (cid:27) d s (cid:12)(cid:12)(cid:12)(cid:12) < ε. Hence for all 0 < ε < Z T σ ( s ) exp (cid:26) − α Z s b ( v ) d v (cid:27) d s, and for all t ∈ [ t ε , T ), we have I X ( α ) ( t ) > Z t ε b ( s ) σ ( s ) Z s σ ( u ) exp (cid:26) α Z su b ( v ) d v (cid:27) d u d s + Z tt ε b ( s ) σ ( s ) exp (cid:26) α Z s b ( u ) d u (cid:27) d s (cid:18)Z T σ ( s ) exp (cid:26) − α Z s b ( v ) d v (cid:27) d s − ε (cid:19) . t ↑ T Z t b ( s ) σ ( s ) exp (cid:26) α Z s b ( u ) d u (cid:27) d s = ∞ . (2.21)By Cauchy-Schwartz’s inequality, we get for all t ∈ [0 , T ), (cid:18)Z t | b ( s ) | d s (cid:19) = (cid:18)Z t | b ( s ) | σ ( s ) exp (cid:26) α Z s b ( u ) d u (cid:27) σ ( s ) exp (cid:26) − α Z s b ( u ) d u (cid:27) d s (cid:19) (cid:18)Z t b ( s ) σ ( s ) exp (cid:26) α Z s b ( u ) d u (cid:27) d s (cid:19)(cid:18)Z t σ ( s ) exp (cid:26) − α Z s b ( u ) d u (cid:27) d s (cid:19) . Using (2.20) and that lim t ↑ T R t | b ( s ) | d s = ∞ , we have (2.21).We note that in case of T = ∞ and σ ≡
1, Theorem 2.8 with condition (2.19) replaced by $\lim_{t\uparrow\infty}\int_0^t|b(s)|\,ds=\infty$ was proved by Luschgy [19, Section 4.2], and hence in Theorem 2.8 we weaken and generalize Luschgy's above mentioned result. Luschgy's original proof is based on his general theorem (see Theorem 1 in [19]), and we note that the conditions of this general theorem are not easy to verify. Our proof can be considered a direct one, based on limit theorems for local martingales (see Theorem 2.4).

The next corollary states that, under the conditions of Theorem 2.8, the MLE of $\alpha$ is asymptotically normal with an appropriate random normalizing factor.

2.9 Corollary. Suppose that $\alpha\in\mathbb R$ is such that conditions (2.19) and (2.20) are satisfied. Then
$$\bigg(\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\bigg)^{1/2}\big(\widehat\alpha(X^{(\alpha)})_t-\alpha\big)\stackrel{\mathcal L}{\longrightarrow}\mathcal N(0,1)\qquad\text{as }t\uparrow T.$$

Proof.
By (2.2), we have for all $t\in[t_0,T)$,
$$\bigg(\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\bigg)^{1/2}\big(\widehat\alpha(X^{(\alpha)})_t-\alpha\big)=\frac{\int_0^t\frac{b(u)X^{(\alpha)}_u}{\sigma(u)}\,dB_u}{\big(\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\big)^{1/2}}=\frac{\frac{1}{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(u)X^{(\alpha)}_u}{\sigma(u)}\,dB_u}{\big(\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\big)^{1/2}}.$$
By the proof of Theorem 2.8, we have
$$P\bigg(\lim_{t\uparrow T}\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du=\xi^2\bigg)=1,$$
where $\xi\stackrel{\mathcal L}{=}\mathcal N(0,1)$. Moreover,
$$\bigg(\frac{1}{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(s)X^{(\alpha)}_s}{\sigma(s)}\,dB_s,\ \frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(s)(X^{(\alpha)}_s)^2}{\sigma^2(s)}\,ds\bigg)\stackrel{\mathcal L}{\longrightarrow}\big(|\xi|\eta,\ \xi^2\big)\qquad\text{as }t\uparrow T,$$
where $\eta$ is a standard normally distributed random variable independent of $|\xi|$. Then, by the continuous mapping theorem, we have
$$\frac{\frac{1}{\sqrt{I_{X^{(\alpha)}}(t)}}\int_0^t\frac{b(u)X^{(\alpha)}_u}{\sigma(u)}\,dB_u}{\big(\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\big)^{1/2}}\stackrel{\mathcal L}{\longrightarrow}\frac{|\xi|\eta}{\sqrt{\xi^2}}=\eta\qquad\text{as }t\uparrow T,$$
as stated. $\Box$

Finally we formulate conditions for asymptotic normality. In the case of $T=\infty$ and $\sigma\equiv$
1, the corresponding assertion has already been formulated and proved in Luschgy [19,Section 4.2]. The proof of our more general theorem is the same as in Luschgy [19, Section 4.2].
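The two normalizations discussed above can be contrasted by simulation. The following Monte Carlo sketch is illustrative only and not part of the paper: it takes the explosive Ornstein--Uhlenbeck-type case $T=\infty$, $b\equiv 1$, $\sigma\equiv 1$, $\alpha>0$, discretizes the SDE by the Euler--Maruyama scheme (an added assumption), and compares the deterministically normalized error of Theorem 2.8, which is approximately standard Cauchy, with the randomly normalized error of the corollary above, which is approximately standard normal. All names and parameter values are hypothetical choices for the demonstration.

```python
import numpy as np

# Illustrative sketch (not from the paper): explosive OU-type case with
# b == 1, sigma == 1, alpha > 0, simulated by Euler-Maruyama.
rng = np.random.default_rng(0)

alpha, t_end, dt, n_paths = 0.5, 8.0, 0.002, 400
n_steps = int(t_end / dt)

x = np.zeros(n_paths)
num = np.zeros(n_paths)   # int_0^t X dX    (numerator of the MLE)
den = np.zeros(n_paths)   # int_0^t X^2 ds  (observed information)
for _ in range(n_steps):
    db = np.sqrt(dt) * rng.standard_normal(n_paths)
    dx = alpha * x * dt + db          # Euler-Maruyama step for dX = alpha X dt + dB
    num += x * dx
    den += x * x * dt
    x += dx

alpha_hat = num / den                 # MLE on each simulated path

# Deterministic Fisher information I(t) = int_0^t E X_s^2 ds for X_0 = 0.
fisher = (np.exp(2 * alpha * t_end) - 1) / (4 * alpha**2) - t_end / (2 * alpha)

cauchy_like = np.sqrt(fisher) * (alpha_hat - alpha)   # approximately standard Cauchy
normal_like = np.sqrt(den) * (alpha_hat - alpha)      # approximately N(0, 1)

print("median of Cauchy-normalized errors:", np.median(cauchy_like))
print("std of randomly normalized errors:", normal_like.std())
```

The heavy tails of `cauchy_like` (a few very large values) against the unit-scale spread of `normal_like` mirror why the random normalizing factor restores asymptotic normality in the explosive case.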
2.11 Theorem. Suppose that $\alpha\in\mathbb R$ is such that
$$\lim_{t\uparrow T}I_{X^{(\alpha)}}(t)=\infty,\tag{2.22}$$
$$\lim_{t\uparrow T}\frac{1}{\sqrt{I_{X^{(\alpha)}}(t)}}\,\frac{b^2(t)}{\sigma^2(t)}\int_0^t\sigma^2(s)\exp\Big\{2\alpha\int_s^t b(v)\,dv\Big\}\,ds=0.\tag{2.23}$$
Then $\sqrt{I_{X^{(\alpha)}}(t)}\,\big(\widehat\alpha(X^{(\alpha)})_t-\alpha\big)\stackrel{\mathcal L}{\longrightarrow}\mathcal N(0,1)$ as $t\uparrow T$.

The next corollary states that, under the conditions of Theorem 2.11, the MLE of $\alpha$ is also asymptotically normal with an appropriate random normalizing factor.

2.12 Corollary. Suppose that $\alpha\in\mathbb R$ is such that conditions (2.22) and (2.23) are satisfied. Then
$$\bigg(\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\bigg)^{1/2}\big(\widehat\alpha(X^{(\alpha)})_t-\alpha\big)\stackrel{\mathcal L}{\longrightarrow}\mathcal N(0,1)\qquad\text{as }t\uparrow T.$$

Proof.
For all $t\in[t_0,T)$, we have
$$\bigg(\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\bigg)^{1/2}\big(\widehat\alpha(X^{(\alpha)})_t-\alpha\big)=\sqrt{I_{X^{(\alpha)}}(t)}\,\big(\widehat\alpha(X^{(\alpha)})_t-\alpha\big)\bigg(\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\bigg)^{1/2}.\tag{2.24}$$
By Theorem 2.11, we have $\sqrt{I_{X^{(\alpha)}}(t)}\,(\widehat\alpha(X^{(\alpha)})_t-\alpha)\stackrel{\mathcal L}{\longrightarrow}\mathcal N(0,1)$ as $t\uparrow T$. Moreover, one can show (see Luschgy [19, Section 4.2])
$$\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\stackrel{P}{\longrightarrow}1\qquad\text{as }t\uparrow T.$$
Note that if $\xi_n$, $n\in\mathbb N$, are nonnegative random variables such that $\xi_n\stackrel{P}{\longrightarrow}1$ as $n\to\infty$, then $\xi_n\stackrel{\mathcal L}{\longrightarrow}1$ as $n\to\infty$, and hence $\sqrt{\xi_n}\stackrel{\mathcal L}{\longrightarrow}1$ as $n\to\infty$. Since the limit $1$ is non-random, we have $\sqrt{\xi_n}\stackrel{P}{\longrightarrow}1$ as $n\to\infty$. Hence
$$\bigg(\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(u)(X^{(\alpha)}_u)^2}{\sigma^2(u)}\,du\bigg)^{1/2}\stackrel{P}{\longrightarrow}1\qquad\text{as }t\uparrow T.$$
By (2.24) and Slutsky's lemma (see, e.g., Lemma 2.8 (ii) in van der Vaart [25]), we have the assertion. $\Box$

2.13 Remark. We note that the sets of those parameters $\alpha$ for which (2.4) and (2.5), for which (2.19) and (2.20), and for which (2.22) and (2.23) hold are pairwise disjoint. This is an immediate consequence of the fact that under the conditions of Theorems 2.5, 2.8 and 2.11 the asymptotic distributions of the MLE of $\alpha$ are different from each other. That is, if we can apply one of Theorems 2.5, 2.8 and 2.11, then it is certain that the other two cannot be applied. We also remark that in general the set of those parameters $\alpha$ for which one of Theorems 2.5, 2.8 and 2.11 can be applied is not necessarily the whole of $\mathbb R$. Due to Luschgy [19, Section 4.2], if $T=\infty$, $b(t):=-e^{-t}$, $t>$
0, and σ ≡
1, then $\lim_{t\uparrow T}I_{X^{(\alpha)}}(t)=\infty$ is not satisfied, and hence none of Theorems 2.5, 2.8 and 2.11 can be applied.

First we recall a strong law of large numbers which can be applied to stochastic integrals. The following theorem is a modification of Theorem 3.4.6 in Karatzas and Shreve [11] (due to Dambis, Dubins and Schwarz), see also Theorem 1.6 in Chapter V in Revuz and Yor [22]. In fact, our next Theorem 3.1 is Exercise 1.18 in Chapter V in Revuz and Yor [22].
3.1 Theorem. Let $T\in(0,\infty]$ be fixed and let $\big(\Omega,\mathcal F,(\mathcal F_t)_{t\in[0,T)},P\big)$ be a filtered probability space satisfying the usual conditions. Let $(M_t)_{t\in[0,T)}$ be a continuous local martingale with respect to the filtration $(\mathcal F_t)_{t\in[0,T)}$ such that $P(M_0=0)=1$ and $P(\lim_{t\uparrow T}\langle M\rangle_t=\infty)=1$. For each $s\in[0,\infty)$, define the stopping time
$$\tau_s:=\inf\{t\in[0,T):\langle M\rangle_t>s\}.$$
Then the time-changed process $(B_s:=M_{\tau_s},\,\mathcal F_{\tau_s})_{s\ge 0}$ is a standard Wiener process. In particular, the filtration $(\mathcal F_{\tau_s})_{s\ge 0}$ satisfies the usual conditions and
$$P\big(M_t=B_{\langle M\rangle_t}\ \text{for all }t\in[0,T)\big)=1.$$

Now we formulate a strong law of large numbers for continuous local martingales. Compare with Lépingle [16, Théorème 1] or with 3°) in Exercise 1.16 in Chapter V in Revuz and Yor [22]. We note that the above mentioned citations are about continuous local martingales with time interval $[0,\infty)$, but they are also valid for continuous local martingales with time interval $[0,T)$, $T\in(0,\infty)$, with appropriate modifications in the conditions, as follows.

3.2 Theorem. Let $T\in(0,\infty]$ be fixed and let $\big(\Omega,\mathcal F,(\mathcal F_t)_{t\in[0,T)},P\big)$ be a filtered probability space satisfying the usual conditions. Let $(M_t)_{t\in[0,T)}$ be a continuous local martingale with respect to the filtration $(\mathcal F_t)_{t\in[0,T)}$ such that $P(M_0=0)=1$ and $P(\lim_{t\uparrow T}\langle M\rangle_t=\infty)=1$. Let $f:[1,\infty)\to(0,\infty)$ be an increasing function such that
$$\int_1^\infty\frac{1}{f^2(x)}\,dx<\infty.$$
Then
$$P\bigg(\lim_{t\uparrow T}\frac{M_t}{f(\langle M\rangle_t)}=0\bigg)=1.$$

Theorem 3.1 has the following consequence on stochastic integrals.
3.3 Theorem. Let $T\in(0,\infty]$ be fixed and let $\big(\Omega,\mathcal F,(\mathcal F_t)_{t\in[0,T)},P\big)$ be a filtered probability space satisfying the usual conditions. Let $(M_t)_{t\in[0,T)}$ be a continuous local martingale with respect to the filtration $(\mathcal F_t)_{t\in[0,T)}$ such that $P(M_0=0)=1$. Let $(\xi_t)_{t\in[0,T)}$ be a progressively measurable process such that
$$P\bigg(\int_0^t(\xi_u)^2\,d\langle M\rangle_u<\infty\bigg)=1,\qquad t\in[0,T),$$
and
$$P\bigg(\lim_{t\uparrow T}\int_0^t(\xi_u)^2\,d\langle M\rangle_u=\infty\bigg)=1.\tag{3.1}$$
Let
$$\tau_s:=\inf\bigg\{t\in[0,T):\int_0^t(\xi_u)^2\,d\langle M\rangle_u>s\bigg\},\qquad s\ge 0.$$
Then the process $(\eta_s,\mathcal F_{\tau_s})_{s\ge 0}$, defined by
$$\eta_s:=\int_0^{\tau_s}\xi_u\,dM_u,\qquad s\ge 0,$$
is a standard Wiener process, and
$$P\bigg(\lim_{t\uparrow T}\frac{\int_0^t\xi_u\,dM_u}{\int_0^t(\xi_u)^2\,d\langle M\rangle_u}=0\bigg)=1.\tag{3.2}$$
In the case $M_t=B_t$, $t\in[0,T)$, where $(B_t)_{t\in[0,T)}$ is a standard Wiener process, the progressive measurability of $(\xi_t)_{t\in[0,T)}$ can be relaxed to measurability and adaptedness to the filtration $(\mathcal F_t)_{t\in[0,T)}$.

For historical fidelity, we note that if $T=\infty$ and $M$ is a standard Wiener process, then Theorem 3.3 was already formulated and proved in Lemma 17.4 in Liptser and Shiryaev [18]. Our proof differs from the original proof of Liptser and Shiryaev.

Proof of Theorem 3.3.
By Proposition 3.2.24 in Karatzas and Shreve [11], the process $\int_0^t\xi_u\,dM_u$, $t\in[0,T)$, is a continuous local martingale with respect to the filtration $(\mathcal F_t)_{t\in[0,T)}$. Moreover, by page 147 in Karatzas and Shreve [11], the quadratic variation process of the continuous local martingale $\int_0^t\xi_u\,dM_u$, $t\in[0,T)$, is
$$\int_0^t(\xi_u)^2\,d\langle M\rangle_u,\qquad t\in[0,T).$$
Hence Theorem 3.1 implies that $(\eta_s,\mathcal F_{\tau_s})_{s\ge 0}$ is a standard Wiener process. Using condition (3.1), Theorem 3.2 implies (3.2). In the case $M_t=B_t$, $t\in[0,T)$, Remark 3.2.11 in Karatzas and Shreve [11] gives us that the progressive measurability of $(\xi_t)_{t\in[0,T)}$ can be relaxed to measurability and adaptedness to the filtration $(\mathcal F_t)_{t\in[0,T)}$. $\Box$

3.4 Theorem. Suppose that $\alpha\in\mathbb R$ is such that
$$P\bigg(\lim_{t\uparrow T}\int_0^t\frac{b^2(s)(X^{(\alpha)}_s)^2}{\sigma^2(s)}\,ds=\infty\bigg)=1.\tag{3.3}$$
Then the maximum likelihood estimator $\widehat\alpha(X^{(\alpha)})_t$ of $\alpha$ is strongly consistent, i.e., $P\big(\lim_{t\uparrow T}\widehat\alpha(X^{(\alpha)})_t=\alpha\big)=1$.

Proof.
Using (2.2) and (3.3), Theorem 3.3 yields the assertion. $\Box$

Note that in the case of an Ornstein--Uhlenbeck process, condition (3.3) is satisfied for all $\alpha\in\mathbb R$ (see, e.g., Liptser and Shiryaev [18, (17.57)]), and hence in this case the strong consistency of the MLE of $\alpha$ is an immediate consequence of Theorem 3.4. We also remark that if the conditions of Theorem 2.5, Theorem 2.8 or Theorem 2.11 are satisfied, then weak consistency of the MLE of $\alpha$ holds.

Let $T\in(0,\infty]$ be fixed. Let $b:[0,T)\to\mathbb R$ and $\sigma:[0,T)\to\mathbb R$ be continuous functions. Suppose that $\sigma^2(t)>0$ for all $t\in[0,T)$, and there exists $t_0\in(0,T)$ such that $b(t)\ne 0$ for all $t\in[t_0,T)$. Let $a:\mathbb R\to\mathbb R$ be a function such that $a(x)=x+r(x)$, $x\in\mathbb R$, where
$$|r(x)|\le L(1+|x|^\gamma),\qquad x\in\mathbb R,$$
with some $L>0$ and $\gamma\in[0,1)$, and $r$ satisfies the global Lipschitz condition
$$|r(x)-r(y)|\le M|x-y|,\qquad x,y\in\mathbb R,\tag{4.1}$$
with some $M>$
0. Note that continuity of $r$ implies continuity of $a$. For all $\alpha\in\mathbb R$, let us consider the SDE (1.1). Note that the drift and diffusion coefficients of the SDE (1.1) satisfy the local Lipschitz condition and the linear growth condition (see, e.g., Jacod and Shiryaev [10, Theorem 2.32, Chapter III]). Again by Jacod and Shiryaev [10, Theorem 2.32, Chapter III], the SDE (1.1) has a unique strong solution. Note also that $(Y^{(\alpha)}_t)_{t\in[0,T)}$ has continuous sample paths by the definition of a strong solution, see, e.g., Jacod and Shiryaev [10, Definition 2.24, Chapter III]. For all $\alpha\in\mathbb R$ and $t\in(0,T)$, let $P_{Y^{(\alpha)},t}$ denote the distribution of the process $(Y^{(\alpha)}_s)_{s\in[0,t]}$ on $\big(C([0,t]),\mathcal B(C([0,t]))\big)$. The measures $P_{Y^{(\alpha)},t}$ and $P_{Y^{(0)},t}$ are equivalent and
$$\frac{dP_{Y^{(\alpha)},t}}{dP_{Y^{(0)},t}}\big(Y^{(\alpha)}\big|_{[0,t]}\big)=\exp\bigg\{\alpha\int_0^t\frac{b(s)}{\sigma^2(s)}\,a\big(Y^{(\alpha)}_s\big)\,dY^{(\alpha)}_s-\frac{\alpha^2}{2}\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a\big(Y^{(\alpha)}_s\big)^2\,ds\bigg\},$$
see Liptser and Shiryaev [17, Theorem 7.20]. The MLE $\widehat\alpha(Y^{(\alpha)})_t$ of $\alpha$ based on the observation $(Y^{(\alpha)}_s)_{s\in[0,t]}$ is defined by
$$\widehat\alpha(Y^{(\alpha)})_t:=\operatorname*{arg\,max}_{\alpha\in\mathbb R}\,\ln\bigg(\frac{dP_{Y^{(\alpha)},t}}{dP_{Y^{(0)},t}}\big(Y^{(\alpha)}\big|_{[0,t]}\big)\bigg).$$
If $\omega\in\Omega$ is such that $\int_0^t\frac{b^2(s)}{\sigma^2(s)}a(Y^{(\alpha)}_s(\omega))^2\,ds=0$ and $\big(\int_0^t\frac{b(s)}{\sigma^2(s)}a(Y^{(\alpha)}_s)\,dY^{(\alpha)}_s\big)(\omega)\ne 0$, then
$$\sup_{\alpha\in\mathbb R}\,\ln\bigg(\frac{dP_{Y^{(\alpha)},t}}{dP_{Y^{(0)},t}}\big(Y^{(\alpha)}\big|_{[0,t]}\big)\bigg)(\omega)=\infty,$$
which yields that $\widehat\alpha(Y^{(\alpha)})_t(\omega)$ does not exist. If condition
$$P\bigg(\lim_{t\uparrow T}\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a(Y^{(\alpha)}_s)^2\,ds>0\bigg)=1\tag{4.2}$$
holds, then the MLE $\widehat\alpha(Y^{(\alpha)})_t$ of $\alpha$ based on the observation $(Y^{(\alpha)}_s)_{s\in[0,t]}$ exists asymptotically as $t\uparrow T$ with probability one.
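The likelihood-based estimator for the perturbed model can be computed numerically from a discretized path. The following sketch is an illustrative assumption-laden example, not part of the paper: it takes $b\equiv 1$, $\sigma\equiv 1$, $r(x)=\sin(x)$ (so $|r|\le 1$, $\gamma=0$, and $r$ is globally Lipschitz with $M=1$), $\alpha=-1$, and approximates the integrals in the MLE by Euler--Maruyama sums; all concrete values are hypothetical.

```python
import numpy as np

# Hedged sketch (not from the paper): MLE for dY = alpha * a(Y) dt + dB with
# a(x) = x + sin(x), approximated on an Euler-Maruyama path.
rng = np.random.default_rng(1)

alpha_true, t_end, dt = -1.0, 300.0, 0.005
n_steps = int(t_end / dt)
a = lambda x: x + np.sin(x)

y = 0.0
num = 0.0   # int_0^t (b(s) a(Y_s) / sigma^2(s)) dY_s
den = 0.0   # int_0^t (b^2(s) a(Y_s)^2 / sigma^2(s)) ds
db = np.sqrt(dt) * rng.standard_normal(n_steps)
for k in range(n_steps):
    ay = a(y)
    dy = alpha_true * ay * dt + db[k]   # Euler-Maruyama step
    num += ay * dy
    den += ay * ay * dt
    y += dy

alpha_hat = num / den
print("MLE estimate:", alpha_hat)   # should lie near alpha_true for large t
```

Here `den` is exactly the quantity appearing in condition (4.2): it must be positive for the estimator to be defined at all, which in this ergodic setting it is along essentially every path.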
(Note that the case $r\equiv 0$ corresponds to the process $X^{(\alpha)}$ of Section 2.) If condition (4.2) holds, then
$$\widehat\alpha(Y^{(\alpha)})_t=\frac{\int_0^t\frac{b(s)\,a(Y^{(\alpha)}_s)}{\sigma^2(s)}\,dY^{(\alpha)}_s}{\int_0^t\frac{b^2(s)\,a(Y^{(\alpha)}_s)^2}{\sigma^2(s)}\,ds}$$
holds asymptotically as $t\uparrow T$ with probability one. To be more precise, if condition (4.2) holds, there exists an event $A\in\mathcal F$ such that $P(A)=1$ and for all $\omega\in A$ there exists $t(\omega)\in[0,T)$ with the property that $\widehat\alpha(Y^{(\alpha)})_t(\omega)$ exists for all $t\in[t(\omega),T)$ and
$$\widehat\alpha(Y^{(\alpha)})_t(\omega)=\frac{\big(\int_0^t\frac{b(s)\,a(Y^{(\alpha)}_s)}{\sigma^2(s)}\,dY^{(\alpha)}_s\big)(\omega)}{\int_0^t\frac{b^2(s)\,a(Y^{(\alpha)}_s(\omega))^2}{\sigma^2(s)}\,ds}.$$
In all that follows, by the expression `exists/holds asymptotically as $t\uparrow T$ with probability one' we mean the above property. Using the SDE (1.1), we have for all $\alpha\in\mathbb R$,
$$\widehat\alpha(Y^{(\alpha)})_t-\alpha=\frac{\int_0^t\frac{b(s)\,a(Y^{(\alpha)}_s)}{\sigma(s)}\,dB_s}{\int_0^t\frac{b^2(s)\,a(Y^{(\alpha)}_s)^2}{\sigma^2(s)}\,ds}$$
asymptotically as $t\uparrow T$ with probability one.

The following lemma gives a sufficient condition under which (4.2) is satisfied for all $\alpha\in\mathbb R$.

4.1 Lemma. If $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\infty$, then (4.2) is satisfied for all $\alpha\in\mathbb R$.

Proof.
We follow the ideas of the proof of Lemma 3.2 in Dietz and Kutoyants [7]. Let α ∈ R be fixed. On the contrary, let us suppose that P ( A ) >
0, where
$$A:=\bigg\{\omega\in\Omega:\ \lim_{t\uparrow T}\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a\big(Y^{(\alpha)}_s(\omega)\big)^2\,ds=0\bigg\}.$$
Then for all $t\in[0,T)$ and $\omega\in A$, we have
$$\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a\big(Y^{(\alpha)}_s(\omega)\big)^2\,ds=0.$$
Since $b$, $\sigma$, $a$ are continuous and $Y^{(\alpha)}_\cdot(\omega)$ is also continuous on $[0,T)$ for all $\omega\in\Omega$, we have
$$b(t)\,a\big(Y^{(\alpha)}_t(\omega)\big)=0,\qquad\forall\,t\in[0,T),\ \forall\,\omega\in A.$$
This yields that $A\subset A_1$, where
$$A_1:=\Big\{\omega\in\Omega:\ b(t)\,a\big(Y^{(\alpha)}_t(\omega)\big)=0,\ \forall\,t\in[0,T)\Big\}.$$
Let $Z:=\{x\in\mathbb R:\ a(x)=0\}$. We show that $Z$ is compact. First we check that $\lim_{x\to\pm\infty}a(x)=\pm\infty$. Since
$$\bigg|\frac{r(x)}{x}\bigg|\le L\bigg(\frac{1}{|x|}+|x|^{\gamma-1}\bigg)\to 0\qquad\text{as }x\to\pm\infty,$$
we have
$$\lim_{x\to\pm\infty}a(x)=\lim_{x\to\pm\infty}x\bigg(1+\frac{r(x)}{x}\bigg)=\pm\infty.$$
Hence, using also that $a$ is continuous, we have that $Z$ is compact. Using that $b(t)\ne 0$ for all $t\in[t_0,T)$, we have
$$a\big(Y^{(\alpha)}_t(\omega)\big)=0,\qquad\forall\,\omega\in A,\ \forall\,t\in[t_0,T),$$
i.e., $Y^{(\alpha)}_t(\omega)\in Z$ for all $\omega\in A$ and for all $t\in[t_0,T)$. By the SDE (1.1), we have
$$Y^{(\alpha)}_t(\omega)=\bigg(\int_0^t\sigma(s)\,dB_s\bigg)(\omega),\qquad\forall\,\omega\in A,\ \forall\,t\in[0,T),$$
and hence
$$a\bigg(\bigg(\int_0^t\sigma(s)\,dB_s\bigg)(\omega)\bigg)=0,\qquad\forall\,\omega\in A,\ \forall\,t\in[t_0,T),$$
i.e., $\big(\int_0^t\sigma(s)\,dB_s\big)(\omega)\in Z$ for all $\omega\in A$ and for all $t\in[t_0,T)$. Then
$$0<P(A)\le P(A_1)\le P\bigg(\bigg\{\omega\in\Omega:\ \bigg(\int_0^t\sigma(s)\,dB_s\bigg)(\omega)\in Z,\ \forall\,t\in[t_0,T)\bigg\}\bigg).$$
This leads us to a contradiction. Indeed, the Gauss process $\big(\int_0^t\sigma(s)\,dB_s\big)_{t\in[0,T)}$ has expectation function $0$ and variance function $\int_0^t\sigma^2(s)\,ds$, $t\in[0,T)$. Using that $Z$ is compact, there exists $K>0$ such that $|x|<K$ for all $x\in Z$.
Hence
$$0<P\bigg(\bigg\{\omega\in\Omega:\ \bigg|\bigg(\int_0^t\sigma(s)\,dB_s\bigg)(\omega)\bigg|<K,\ \forall\,t\in[t_0,T)\bigg\}\bigg)\le P\bigg(\bigg|\int_0^t\sigma(s)\,dB_s\bigg|<K\bigg),\qquad\forall\,t\in[t_0,T).\tag{4.3}$$
Using that, by our assumption, $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\infty$ and that $\int_0^t\sigma(s)\,dB_s\stackrel{\mathcal L}{=}\mathcal N\big(0,\int_0^t\sigma^2(s)\,ds\big)$, $t\in[0,T)$, we get
$$\frac{\int_0^t\sigma(s)\,dB_s}{\sqrt{\int_0^t\sigma^2(s)\,ds}}\stackrel{\mathcal L}{=}\mathcal N(0,1),\qquad t\in(0,T).$$
Hence
$$\lim_{t\uparrow T}P\bigg(\bigg|\int_0^t\sigma(s)\,dB_s\bigg|<K\bigg)=\lim_{t\uparrow T}P\Bigg(\frac{\big|\int_0^t\sigma(s)\,dB_s\big|}{\sqrt{\int_0^t\sigma^2(s)\,ds}}<\frac{K}{\sqrt{\int_0^t\sigma^2(s)\,ds}}\Bigg)=P(|\xi|<0)=0,$$
where $\xi$ is a standard normally distributed random variable. Here the last but one equality follows from the fact that if $F_n$, $n\in\mathbb N$, are distribution functions such that $\lim_{n\to\infty}F_n(x)=F(x)$ for all $x\in\mathbb R$, where $F$ is a continuous distribution function, then for all sequences $(x_n)_{n\in\mathbb N}$ for which $\lim_{n\to\infty}x_n=x\in\mathbb R$, we have $\lim_{n\to\infty}F_n(x_n)=F(x)$. By (4.3), we arrive at a contradiction. $\Box$

In the next remark we give an example of $\alpha$, $b$, $r$ and $\sigma$ for which condition (4.2) does not hold, and also an example for which it holds.

4.2 Remark. We will give an example of $\alpha$, $b$, $r$ and $\sigma$ such that for all $t\in[0,T)$,
$$P\bigg(\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a(Y^{(\alpha)}_s)^2\,ds=0\bigg)>0.$$
In this case, for all $t\in(0,T)$, the MLE $\widehat\alpha(Y^{(\alpha)})_t$ of $\alpha$ exists only with probability less than one. We note that in our example condition (4.2) will not hold, and hence the MLE of $\alpha$ will exist asymptotically as $t\uparrow T$ only with probability less than one. We also give an example of $\alpha$, $b$, $r$ and $\sigma$ such that for all $t\in(0,T)$,
$$P\bigg(\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a(Y^{(\alpha)}_s)^2\,ds=0\bigg)=0.$$
In this case, for all $t\in(0,T)$, the MLE $\widehat\alpha(Y^{(\alpha)})_t$ exists with probability one, and condition (4.2) holds trivially.

First we consider the case $T\in(0,\infty)$. Let $b(t):=1$, $t\in[0,T)$, $\sigma(t):=\frac{1}{\sqrt{T-t}}$, $t\in[0,T)$, and $r(x):=0$, $x\in\mathbb R$. Since
$$\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\lim_{t\uparrow T}\int_0^t\frac{1}{T-s}\,ds=\infty,$$
by Lemma 4.1, we get that condition (4.2) is satisfied for all $\alpha\in\mathbb R$.

In what follows we give an example of $\alpha$, $b$, $r$ and $\sigma$ such that for all $t\in[0,T)$,
$$P\bigg(\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a(Y^{(\alpha)}_s)^2\,ds=0\bigg)>0.$$
In fact, we just reformulate Remark 3.1 in Dietz and Kutoyants [7], which is originally stated for the time interval $[0,\infty)$.
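The mechanism behind such examples can be illustrated numerically. The following sketch is a hypothetical setup in the spirit of Dietz and Kutoyants' construction, not part of the paper: it assumes $b\equiv 1$, $\sigma\equiv 1$, $\alpha=1$, and a perturbation for which $a$ vanishes on $[-1,1]$. As long as a simulated path stays in $[-1,1]$, the drift is zero and the information integral $\int_0^t a(Y_s)^2\,ds$ remains exactly zero, so the MLE denominator vanishes with positive probability.

```python
import numpy as np

# Illustrative assumptions: b == 1, sigma == 1, alpha = 1, and
# a(x) = (x + 1) for x < -1,  0 for |x| <= 1,  (x - 1) for x > 1.
rng = np.random.default_rng(2)

def a(x):
    return np.where(x < -1, x + 1, np.where(x > 1, x - 1, 0.0))

t_end, dt, n_paths = 1.0, 0.001, 2000
n_steps = int(t_end / dt)

y = np.zeros(n_paths)
info = np.zeros(n_paths)   # int_0^t a(Y_s)^2 ds, accumulated per path
for _ in range(n_steps):
    info += a(y) ** 2 * dt
    y += a(y) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)  # Euler-Maruyama

frac_degenerate = np.mean(info == 0.0)
print("fraction of paths with int a(Y)^2 ds == 0:", frac_degenerate)
```

With these choices a substantial fraction of paths has not left $[-1,1]$ by time $t=1$ (the fraction approximates $P(\sup_{s\le 1}|B_s|\le 1)$), and for every such path the MLE is undefined at time $t$.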
Let $b(t):=1$, $t\in[0,T)$, $\sigma(t):=1$, $t\in[0,T)$, and
$$r(x):=\begin{cases}1&\text{if }x<-1,\\-x&\text{if }-1\le x<1,\\-1&\text{if }1\le x,\end{cases}$$
so that $a(x)=x+r(x)$ vanishes on $[-1,1]$. Note that in this case $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=T<\infty$, and hence one cannot use Lemma 4.1 for proving (4.2). It will turn out that (4.2) is not satisfied for $\alpha=1$. Clearly, $r$ is continuous, piecewise continuously differentiable and has everywhere left and right derivatives. Moreover, $|r(x)|\le 1$, $x\in\mathbb R$, and all of its (one-sided) derivatives are bounded by 1. Therefore, $|r(x)|\le L(1+|x|^\gamma)$, $x\in\mathbb R$, and $|r(x)-r(y)|\le M|x-y|$, $x,y\in\mathbb R$, with $L:=1$, $\gamma:=0$ and $M:=1$. (The fact that one can choose $M$ to be 1 follows from Lagrange's mean value theorem. Note that $r$ is not differentiable everywhere, but we can apply Lagrange's theorem on different subintervals of $\mathbb R$ separately, where $r$ is differentiable.) Let $\alpha:=1$. Then, by the SDE (1.1),
$$Y^{(1)}_t=\int_0^t a(Y^{(1)}_s)\,ds+B_t,\qquad t\in[0,T).\tag{4.4}$$
Let us define the random variable $\tau$ by
$$\tau(\omega):=\begin{cases}\inf\big\{t\in[0,T):|Y^{(1)}_t(\omega)|>1\big\}&\text{if }\exists\,t\in[0,T):|Y^{(1)}_t(\omega)|>1,\\[2pt]T&\text{if }|Y^{(1)}_t(\omega)|\le 1,\ \forall\,t\in[0,T).\end{cases}$$
Since $Y^{(1)}_0=0$ and $(Y^{(1)}_t(\omega))_{t\in[0,T)}$ is continuous for all $\omega\in\Omega$, we have $P(\tau>0)=1$, and if $\tau(\omega)<T$, then $|Y^{(1)}_{\tau(\omega)}(\omega)|=1$. By the definition of $\tau$, we have $|Y^{(1)}_t(\omega)|\le 1$ for all $0\le t<\tau(\omega)$. Hence, using that $a(x)=x+r(x)=0$ for $|x|\le 1$, we have $a(Y^{(1)}_t(\omega))=0$ for all $0\le t<\tau(\omega)$, and then
$$\int_0^t a(Y^{(1)}_s(\omega))\,ds=\int_0^t a(Y^{(1)}_s(\omega))^2\,ds=0,\qquad 0\le t<\tau(\omega).$$
Hence, by (4.4), we have $Y^{(1)}_t(\omega)=B_t(\omega)$, $0\le t<\tau(\omega)$. Note that if $\tau(\omega)<T$, then we also have $Y^{(1)}_{\tau(\omega)}(\omega)=B_{\tau(\omega)}(\omega)$ and hence $|B_{\tau(\omega)}(\omega)|=1$. Let us define the random variable $\kappa$ by
$$\kappa(\omega):=\inf\big\{t\in[0,\infty):|B_t(\omega)|>1\big\}.$$
Hence, if $\tau(\omega)<T$, we get $\kappa(\omega)=\tau(\omega)$, and if $\tau(\omega)=T$, then $\kappa(\omega)\ge T$. By formula 2.0.2 on page 163 in Borodin and Salminen [6], $\kappa$ is unbounded and $P(\kappa<\infty)=1$. Consequently,
$$0<P(\kappa>t)=P(\{\kappa>t\}\cap\{\tau<T\})+P(\{\kappa>t\}\cap\{\tau=T\})=P(\{\tau>t\}\cap\{\tau<T\})+P(\tau=T)\le P\bigg(\int_0^t a(Y^{(1)}_s)^2\,ds=0\bigg),\qquad\forall\,t\in[0,T),$$
as desired. This also implies that
$$0<P(\kappa\ge T)\le\lim_{t\uparrow T}P\bigg(\int_0^t a(Y^{(1)}_s)^2\,ds=0\bigg)=P\bigg(\lim_{t\uparrow T}\int_0^t a(Y^{(1)}_s)^2\,ds=0\bigg),$$
hence (4.2) is not satisfied for $\alpha=1$.

Now we consider the case $T=\infty$. Let $b(t):=1$, $t>0$, $\sigma(t):=1$, $t>$
0, and r ( t ) := 0, t >
0. Since lim t ↑∞ Z t σ ( s ) d s = lim t ↑∞ t = ∞ ,
By Lemma 4.1, we get that (4.2) holds for all $\alpha\in\mathbb R$. Remark 3.1 in Dietz and Kutoyants [7] (which we already reformulated for the case $T\in(0,\infty)$) gives an example of $\alpha$, $b$, $r$ and $\sigma$ such that
$$P\bigg(\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a(Y^{(\alpha)}_s)^2\,ds=0\bigg)>0,\qquad t\in[0,\infty).$$
In this example we also have $\lim_{t\uparrow\infty}\int_0^t\sigma^2(s)\,ds=\lim_{t\uparrow\infty}t=\infty$, and hence, by Lemma 4.1, (4.2) holds for all $\alpha\in\mathbb R$.

In the case $T=\infty$ we are not able to give an example of $\alpha$, $b$, $r$ and $\sigma$ such that
$$P\bigg(\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,a(Y^{(\alpha)}_s)^2\,ds=0\bigg)>0,\qquad\forall\,t\in[0,\infty),$$
and condition (4.2) is not satisfied. For such an example, by Proposition 1.26 in Chapter IV, Proposition 1.8 in Chapter V in Revuz and Yor [22] and Lemma 4.1, it is necessary to have
$$\lim_{t\uparrow\infty}\int_0^t\sigma^2(s)\,ds<\infty\qquad\text{and}\qquad P\bigg(\lim_{t\uparrow\infty}\int_0^t\sigma(s)\,dB_s=\zeta\bigg)=1,$$
where $\zeta$ is a normally distributed random variable with mean $0$ and with variance $\int_0^\infty\sigma^2(s)\,ds$.

For all $t\in(0,T)$, the Fisher information for $\alpha$ contained in the observation $(Y^{(\alpha)}_s)_{s\in[0,t]}$ is defined by
$$I_{Y^{(\alpha)}}(t):=E\bigg(\frac{\partial}{\partial\alpha}\ln\bigg(\frac{dP_{Y^{(\alpha)},t}}{dP_{Y^{(0)},t}}\big(Y^{(\alpha)}\big|_{[0,t]}\big)\bigg)\bigg)^2=\int_0^t\frac{b^2(s)}{\sigma^2(s)}\,E\,a\big(Y^{(\alpha)}_s\big)^2\,ds,$$
where the last equality follows by the SDE (1.1) and Karatzas and Shreve [11, Proposition 3.2.10]. Note that $I_{Y^{(\alpha)}}(t)\ge 0$, $t\in[0,T)$, but in general $I_{Y^{(\alpha)}}(t)>0$ for all $t\in(0,T)$ does not necessarily hold.

4.3 Theorem. Suppose that $\alpha\in\mathbb R$ is such that
$$\lim_{t\uparrow T}I_{X^{(\alpha)}}(t)=\infty,\tag{4.5}$$
$$\lim_{t\uparrow T}\frac{b(t)}{\sigma^2(t)}\exp\bigg\{\alpha\int_0^t b(w)\,dw\bigg\}=C\in\mathbb R\setminus\{0\},\tag{4.6}$$
and $\operatorname{sign}(\alpha)=\operatorname{sign}(C)$ or $\alpha=0$. Then
$$\sqrt{I_{Y^{(\alpha)}}(t)}\,\big(\widehat\alpha(Y^{(\alpha)})_t-\alpha\big)\stackrel{\mathcal L}{\longrightarrow}\frac{\operatorname{sign}(C)}{\sqrt 2}\,\frac{\int_0^1 W_s\,dW_s}{\int_0^1(W_s)^2\,ds}\qquad\text{as }t\uparrow T.$$

Note that conditions (4.5) and (4.6) do not contain the function $r$. For the proof of Theorem 4.3, we need a generalization of Grönwall's inequality.
Our generalization can be considered a slight improvement of Bainov and Simeonov [1, Lemma 1.1]. The proof goes along the same lines.

4.4 Lemma. (A generalization of Grönwall's inequality) Let $s_0,s_1\in\mathbb R$ with $s_0<s_1$, let $\varphi:[s_0,s_1]\to[0,\infty)$ and $\psi_2:[s_0,s_1]\to[0,\infty)$ be continuous functions, and let $\psi_1:[s_0,s_1]\to\mathbb R$ be a continuously differentiable function. Suppose that
$$\varphi(s)\le\psi_1(s)+\int_{s_0}^s\psi_2(u)\varphi(u)\,du,\qquad s\in[s_0,s_1].$$
Then
$$\varphi(s)\le\psi_1(s_0)\exp\bigg\{\int_{s_0}^s\psi_2(u)\,du\bigg\}+\int_{s_0}^s\psi_1'(u)\exp\bigg\{\int_u^s\psi_2(v)\,dv\bigg\}\,du,\qquad s\in[s_0,s_1].$$

Proof of Theorem 4.3.
Note that condition (4.6) yields that there exists $t_0\in(0,T)$ such that $b(t)\ne 0$ for all $t\in[t_0,T)$. First we check that $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\infty$. By (4.6), there exist $c_2>c_1>0$ and $t_1\in[t_0,T)$ such that (2.8) is satisfied. Hence for all $t\in[t_1,T)$,
$$\int_0^t\sigma^2(s)\,ds\ge\int_0^{t_1}\sigma^2(s)\,ds+\int_{t_1}^t\frac{1}{c_2}\,|b(s)|\exp\bigg\{\alpha\int_0^s b(w)\,dw\bigg\}\,ds.$$
By Lemma 2.6, we have
$$\lim_{t\uparrow T}\int_0^t|b(s)|\,ds=\infty.\tag{4.7}$$
If $\alpha>0$ and $C>0$, then $b(t)>0$ for $t\in[t_1,T)$ and $\lim_{t\uparrow T}\int_0^t b(s)\,ds=\infty$. Hence
$$\lim_{t\uparrow T}\int_{t_1}^t\frac{1}{c_2}\,|b(s)|\exp\bigg\{\alpha\int_0^s b(w)\,dw\bigg\}\,ds=\infty,$$
which yields $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\infty$. If $\alpha=0$, by (4.7), we have
$$\lim_{t\uparrow T}\int_{t_1}^t\frac{1}{c_2}\,|b(s)|\exp\bigg\{\alpha\int_0^s b(w)\,dw\bigg\}\,ds=\frac{1}{c_2}\lim_{t\uparrow T}\int_{t_1}^t|b(s)|\,ds=\infty,$$
which yields $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\infty$. If $\alpha<0$ and $C<0$, then $b(t)<0$ for $t\in[t_1,T)$ and $\lim_{t\uparrow T}\int_0^t b(s)\,ds=-\infty$. Hence
$$\lim_{t\uparrow T}\int_{t_1}^t\frac{1}{c_2}\,|b(s)|\exp\bigg\{\alpha\int_0^s b(w)\,dw\bigg\}\,ds=\infty,$$
which yields $\lim_{t\uparrow T}\int_0^t\sigma^2(s)\,ds=\infty$. By Lemma 4.1, condition (4.2) holds for all $\alpha$ satisfying the assumptions of the present theorem. Hence the MLE $\widehat\alpha(Y^{(\alpha)})_t$ of $\alpha$ based on the observation $(Y^{(\alpha)}_s)_{s\in[0,t]}$ exists asymptotically as $t\uparrow T$ with probability one.

Consider the SDEs (1.1) and (1.3). Introduce the stochastic process
$$\Delta^{(\alpha)}_t:=Y^{(\alpha)}_t-X^{(\alpha)}_t,\qquad t\in[0,T).$$
This process satisfies the ordinary differential equation
$$d\Delta^{(\alpha)}_t=\alpha\,b(t)\Delta^{(\alpha)}_t\,dt+\alpha\,b(t)\,r(Y^{(\alpha)}_t)\,dt,\qquad t\in[0,T),\qquad\Delta^{(\alpha)}_0=0,$$
and hence
$$\Delta^{(\alpha)}_t=\alpha\int_0^t r(Y^{(\alpha)}_s)\,b(s)\exp\bigg\{\alpha\int_s^t b(u)\,du\bigg\}\,ds,\qquad t\in[0,T).\tag{4.8}$$
Using the decomposition
$$a(Y^{(\alpha)}_t)=Y^{(\alpha)}_t+r(Y^{(\alpha)}_t)=X^{(\alpha)}_t+\Delta^{(\alpha)}_t+r(Y^{(\alpha)}_t),\qquad t\in[0,T),\tag{4.9}$$
we get that
$$\widehat\alpha(Y^{(\alpha)})_t-\alpha=\frac{\int_0^t\frac{b(s)}{\sigma(s)}X^{(\alpha)}_s\,dB_s+J^{(1)}_t+J^{(2)}_t}{\int_0^t\frac{b^2(s)}{\sigma^2(s)}(X^{(\alpha)}_s)^2\,ds+J^{(3)}_t+J^{(4)}_t+J^{(5)}_t+J^{(6)}_t}$$
holds asymptotically as $t\uparrow T$ with probability one, where for all $t\in[0,T)$,
$$J^{(1)}_t:=\int_0^t\frac{b(s)}{\sigma(s)}\Delta^{(\alpha)}_s\,dB_s,\qquad J^{(2)}_t:=\int_0^t\frac{b(s)}{\sigma(s)}r(Y^{(\alpha)}_s)\,dB_s,$$
$$J^{(3)}_t:=2\int_0^t\frac{b^2(s)}{\sigma^2(s)}X^{(\alpha)}_s\Delta^{(\alpha)}_s\,ds,\qquad J^{(4)}_t:=\int_0^t\frac{b^2(s)}{\sigma^2(s)}(\Delta^{(\alpha)}_s)^2\,ds,$$
$$J^{(5)}_t:=2\int_0^t\frac{b^2(s)}{\sigma^2(s)}Y^{(\alpha)}_s\,r(Y^{(\alpha)}_s)\,ds,\qquad J^{(6)}_t:=\int_0^t\frac{b^2(s)}{\sigma^2(s)}r(Y^{(\alpha)}_s)^2\,ds.$$
By (2.13), using Slutsky's lemma and the continuous mapping theorem, in order to prove the statement it is sufficient to show
$$\lim_{t\uparrow T}\frac{I_{Y^{(\alpha)}}(t)}{I_{X^{(\alpha)}}(t)}=1,\tag{4.10}$$
$$\frac{J^{(j)}_t}{\sqrt{I_{X^{(\alpha)}}(t)}}\stackrel{P}{\longrightarrow}0\quad\text{as }t\uparrow T\ \text{ for }j=1,2,\tag{4.11}$$
$$\frac{J^{(j)}_t}{I_{X^{(\alpha)}}(t)}\stackrel{P}{\longrightarrow}0\quad\text{as }t\uparrow T\ \text{ for }j=3,4,5,6.\tag{4.12}$$
Since $b(t)\ne 0$ for all $t\in[t_0,T)$, we can apply L'Hospital's rule and we obtain
$$\lim_{t\uparrow T}\frac{I_{Y^{(\alpha)}}(t)}{I_{X^{(\alpha)}}(t)}=\lim_{t\uparrow T}\frac{E\,a(Y^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2}.$$
Using again the decomposition (4.9), we have
$$E\,a(Y^{(\alpha)}_t)^2=E(X^{(\alpha)}_t)^2+2E\big[X^{(\alpha)}_t\Delta^{(\alpha)}_t\big]+E(\Delta^{(\alpha)}_t)^2+2E\big[Y^{(\alpha)}_t r(Y^{(\alpha)}_t)\big]+E\,r(Y^{(\alpha)}_t)^2,\qquad t\in[0,T).$$
By the Cauchy--Schwarz inequality,
$$\big|E\big[X^{(\alpha)}_t\Delta^{(\alpha)}_t\big]\big|\le E(X^{(\alpha)}_t)^2\sqrt{\frac{E(\Delta^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2}},\tag{4.13}$$
$$\big|E\big[Y^{(\alpha)}_t r(Y^{(\alpha)}_t)\big]\big|\le E(X^{(\alpha)}_t)^2\sqrt{\frac{2\big(E(X^{(\alpha)}_t)^2+E(\Delta^{(\alpha)}_t)^2\big)}{E(X^{(\alpha)}_t)^2}\cdot\frac{E\,r(Y^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2}}.\tag{4.14}$$
Thus, in order to show (4.10), it is enough to check
$$\lim_{t\uparrow T}\frac{E(\Delta^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2}=0,\tag{4.15}$$
$$\lim_{t\uparrow T}\frac{E\,r(Y^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2}=0.\tag{4.16}$$
In order to show (4.11), it is enough to prove $J^{(j)}_t/\sqrt{I_{X^{(\alpha)}}(t)}\longrightarrow 0$ in $L^2$ as $t\uparrow T$ for $j=1,2$, i.e.,
$$\lim_{t\uparrow T}\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(s)}{\sigma^2(s)}E(\Delta^{(\alpha)}_s)^2\,ds=0,\qquad\lim_{t\uparrow T}\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(s)}{\sigma^2(s)}E\,r(Y^{(\alpha)}_s)^2\,ds=0.$$
By L'Hospital's rule,
$$\lim_{t\uparrow T}\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(s)}{\sigma^2(s)}E(\Delta^{(\alpha)}_s)^2\,ds=\lim_{t\uparrow T}\frac{E(\Delta^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2},\qquad\lim_{t\uparrow T}\frac{1}{I_{X^{(\alpha)}}(t)}\int_0^t\frac{b^2(s)}{\sigma^2(s)}E\,r(Y^{(\alpha)}_s)^2\,ds=\lim_{t\uparrow T}\frac{E\,r(Y^{(\alpha)}_t)^2}{E(X^{(\alpha)}_t)^2},$$
hence (4.11) also follows from (4.15) and (4.16). In order to show (4.12), it is enough to prove $J^{(j)}_t/I_{X^{(\alpha)}}(t)\longrightarrow 0$ in $L^1$ as $t\uparrow T$ for $j=3,4,5,6$. For $j=4$ and $j=6$, by the previous argument, this follows directly from (4.15) and (4.16). For $j=3$ and $j=5$, this also follows from (4.15) and (4.16), applying (4.13) and (4.14). The aim of the following discussions is to check (4.15) and (4.16).

First we consider the case $\alpha>0$, $C>$
0. Then $b(t)>0$ for $t\in[t_1,T)$, and, by Lemma 2.6, condition (4.5) is equivalent to $\lim_{t\uparrow T}\int_0^t b(s)\,ds=\infty$. Let us introduce the stochastic process
$$Z^{(\alpha)}_t:=Y^{(\alpha)}_t\exp\bigg\{-\alpha\int_0^t b(u)\,du\bigg\},\qquad t\in[0,T).\tag{4.17}$$
Using $Y^{(\alpha)}_t=X^{(\alpha)}_t+\Delta^{(\alpha)}_t$, $t\in[0,T)$, and the equations (2.1) and (4.8), we get
$$Z^{(\alpha)}_t=\int_0^t\sigma(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,dB_s+\alpha\int_0^t r(Y^{(\alpha)}_s)\,b(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,ds$$
for all $t\in[0,T)$. Consequently, for all $t\in[0,T)$,
$$(Z^{(\alpha)}_t)^2\le 2\bigg(\int_0^t\sigma(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,dB_s\bigg)^2+2\alpha^2\bigg(\int_0^t r(Y^{(\alpha)}_s)\,b(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,ds\bigg)^2.\tag{4.18}$$
Clearly, by Karatzas and Shreve [11, Proposition 3.2.10], for all $t\in[0,T)$,
$$E\bigg(\int_0^t\sigma(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,dB_s\bigg)^2=\int_0^t\sigma^2(s)\exp\bigg\{-2\alpha\int_0^s b(u)\,du\bigg\}\,ds,$$
and
$$E\bigg(\int_0^t r(Y^{(\alpha)}_s)\,b(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,ds\bigg)^2=\int_0^t\!\!\int_0^t E\big[r(Y^{(\alpha)}_u)r(Y^{(\alpha)}_v)\big]\,b(u)b(v)\exp\bigg\{-\alpha\int_0^u b(w)\,dw-\alpha\int_0^v b(w)\,dw\bigg\}\,du\,dv.$$
Using that $\big|E[r(Y^{(\alpha)}_u)r(Y^{(\alpha)}_v)]\big|\le\sqrt{E\,r(Y^{(\alpha)}_u)^2\,E\,r(Y^{(\alpha)}_v)^2}$ for all $u,v\in[0,T)$, we obtain
$$E\bigg(\int_0^t r(Y^{(\alpha)}_s)\,b(s)\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,ds\bigg)^2\le\bigg(\int_0^t\sqrt{E\,r(Y^{(\alpha)}_s)^2}\,|b(s)|\exp\bigg\{-\alpha\int_0^s b(u)\,du\bigg\}\,ds\bigg)^2,\qquad t\in[0,T).\tag{4.19}$$
Since $|x|^{2\gamma}\le 1+x^2$ for all $x\in\mathbb R$ and all $\gamma\in[0,1]$, we have
$$r(Y^{(\alpha)}_s)^2\le 2L^2\big(1+|Y^{(\alpha)}_s|^{2\gamma}\big)=2L^2\bigg(1+|Z^{(\alpha)}_s|^{2\gamma}\exp\bigg\{2\gamma\alpha\int_0^s b(u)\,du\bigg\}\bigg)\le 2L^2\bigg(1+\big(1+(Z^{(\alpha)}_s)^2\big)\exp\bigg\{2\gamma\alpha\int_0^s b(u)\,du\bigg\}\bigg),\qquad s\in[0,T).$$
Recall that condition (4.6) implied the existence of $c_2>c_1>0$ and $t_1\in[t_0,T)$ such that (2.8) holds. By (4.7), there exists $t_2\in[t_1,T)$ such that $\int_0^s b(u)\,du>0$ for $s\in[t_2,T)$. We check that
$$r(Y^{(\alpha)}_s)^2\le c\,\big(1+(Z^{(\alpha)}_s)^2\big)\exp\bigg\{2\gamma\alpha\int_0^s b(u)\,du\bigg\},\qquad s\in[0,T),\tag{4.20}$$
with some appropriate $c>$
0. If s ∈ [0 , t ], then r ( Y ( α ) s ) L (cid:0) Z ( α ) s ) (cid:1) exp ( γα sup v ∈ [0 ,t ] Z v | b ( u ) | d u )! L (cid:0) Z ( α ) s ) (cid:1) exp ( γα sup v ∈ [0 ,t ] Z v | b ( u ) | d u ) c ′ (cid:0) Z ( α ) s ) (cid:1) exp (cid:26) γα Z s b ( u ) d u (cid:27) , c ′ := 8 L exp (cid:8) γα sup v ∈ [0 ,t ] R v | b ( u ) | d u (cid:9) exp (cid:8) γα inf v ∈ [0 ,t ] R v b ( u ) d u (cid:9) . If s ∈ ( t , T ), then r ( Y ( α ) s ) L (cid:0) Z ( α ) s ) (cid:1) exp (cid:26) γα Z s b ( u ) d u (cid:27) . Hence (4.20) is satisfied with c := max { c ′ , L } . Thus, using that √ a + b √ a + √ b for all a, b >
0, we have(4.21) q E r ( Y ( α ) s ) √ c (cid:18) q E ( Z ( α ) s ) (cid:19) exp (cid:26) γα Z s b ( u ) d u (cid:27) , s ∈ [0 , T ) . Applying (4.18) and (4.21), we obtain for all t ∈ [0 , T ), E ( Z ( α ) t ) Z t σ ( s ) exp (cid:26) − α Z s b ( u ) d u (cid:27) d s + 2 cα (cid:18)Z t (cid:18) q E ( Z ( α ) s ) (cid:19) | b ( s ) | exp (cid:26) − (1 − γ ) α Z s b ( u ) d u (cid:27) d s (cid:19) . By the choice of t , we have for all t ∈ [ t , T ), Z t | b ( s ) | exp (cid:26) − (1 − γ ) α Z s b ( u ) d u (cid:27) d s Z t | b ( s ) | exp (cid:26) − (1 − γ ) α Z s b ( u ) d u (cid:27) d s + Z tt b ( s ) exp (cid:26) − (1 − γ ) α Z s b ( u ) d u (cid:27) d s, and for all t ∈ [ t , T ), Z tt b ( s ) exp (cid:26) − (1 − γ ) α Z s b ( u ) d u (cid:27) d s − γ ) α exp (cid:26) − (1 − γ ) α Z t b ( u ) d u (cid:27) . Hence there exists K > Z t | b ( s ) | exp (cid:26) − (1 − γ ) α Z s b ( u ) d u (cid:27) d s K , t ∈ [0 , T ) . Using (2.8), (4.22) and that √ a + b √ a + √ b for all a, b >
0, we obtain q E ( Z ( α ) t ) ψ ( t ) + Z tt ψ ( u ) q E ( Z ( α ) u ) d u, t ∈ [ t , T ) , where ψ ( t ) := (cid:18) Z t σ ( s ) exp (cid:26) − α Z s b ( u ) d u (cid:27) d s (cid:19) / + (cid:18) c Z tt b ( s ) d s (cid:19) / + √ cαK + Z t ψ ( u ) q E ( Z ( α ) u ) d u =: (cid:18) c Z tt b ( s ) d s (cid:19) / + K , t ∈ [ t , T ) ,ψ ( t ) := √ cα | b ( t ) | exp (cid:26) − (1 − γ ) α Z t b ( u ) d u (cid:27) , t ∈ [ t , T ) . q E ( Z ( α ) t ) K exp (cid:26)Z tt ψ ( u ) d u (cid:27) + Z tt ψ ′ ( u ) exp (cid:26)Z tu ψ ( v ) d v (cid:27) d u, t ∈ [ t , T ) . Using that, by (4.22),exp (cid:26)Z tt ψ ( u ) d u (cid:27) e √ cαK =: K , t ∈ [ t , T ) , and that ψ ′ ( t ) = c b ( t ) (cid:18) c Z tt b ( s ) d s (cid:19) − / > , t ∈ [ t , T ) , we have q E ( Z ( α ) t ) K K + K Z tt ψ ′ ( u ) d u = K K + K (cid:18) c Z tt b ( s ) d s (cid:19) / , t ∈ [ t , T ) . Then, using (4.7), we have there exist c ′′ > t ∈ ( t , T ) such that q E ( Z ( α ) t ) c ′′ (cid:18)Z tt b ( s ) d s (cid:19) / , t ∈ [ t , T ) . By Lyapunov’s inequality, since 0 γ < E | Z ( α ) t | γ (cid:16) E ( Z ( α ) t ) (cid:17) γ ( c ′′ ) γ (cid:18)Z tt b ( s ) d s (cid:19) γ , t ∈ [ t , T ) , and thus, by (4.17), E | Y ( α ) t | γ = exp (cid:26) γα Z t b ( u ) d u (cid:27) E | Z ( α ) t | γ ( c ′′ ) γ exp (cid:26) γα Z t b ( u ) d u (cid:27) (cid:18)Z tt b ( s ) d s (cid:19) γ , t ∈ [ t , T ) . (4.23)Applying again (2.8), for all t ∈ [ t , T ), we have E ( X ( α ) t ) = exp (cid:26) α Z t b ( v ) d v (cid:27) Z t σ ( u ) exp (cid:26) − α Z u b ( v ) d v (cid:27) d u > exp (cid:26) α Z t b ( v ) d v (cid:27) (cid:18)Z t σ ( u ) exp (cid:26) − α Z u b ( v ) d v (cid:27) d u + Z tt c b ( u ) d u (cid:19) . 
Hence, using also (4.1), for all $t\in[t_1,T)$, we have
$$\frac{\mathsf{E}\,r(Y^{(\alpha)}_t)^2}{\mathsf{E}(X^{(\alpha)}_t)^2}\le\frac{L^2\big(1+\mathsf{E}|Y^{(\alpha)}_t|^{\gamma}\big)^2}{\mathsf{E}(X^{(\alpha)}_t)^2}\le\frac{L^2\Big(1+(c'')^{\gamma}\exp\big\{\gamma\alpha\int_0^t b(u)\,\mathrm{d}u\big\}\big(\int_{t_0}^t b(s)^2\,\mathrm{d}s\big)^{\gamma/2}\Big)^2}{\exp\big\{2\alpha\int_0^t b(u)\,\mathrm{d}u\big\}\Big(\int_0^{t_0}\sigma(u)^2\exp\big\{-2\alpha\int_0^u b(v)\,\mathrm{d}v\big\}\,\mathrm{d}u+c\int_{t_0}^t b(s)^2\,\mathrm{d}s\Big)}.$$
Since $\gamma\in[0,1)$, $c>0$, and $\lim_{x\to\infty}x^ae^{-bx}=0$ for all $a>0$ and $b>0$, by (4.7), we get (4.16).

Now we turn to prove (4.15). Using (4.8), in the same way that we derived (4.19), one can get
$$\mathsf{E}(\Delta^{(\alpha)}_t)^2\le\alpha^2\left(\int_0^t\sqrt{\mathsf{E}\,r(Y^{(\alpha)}_s)^2}\,|b(s)|\exp\left\{\alpha\int_s^t b(u)\,\mathrm{d}u\right\}\mathrm{d}s\right)^2,\qquad t\in[0,T).$$
By (4.7), there exists $t_2\in(t_1,T)$ such that
$$\exp\left\{2\gamma\alpha\int_0^t b(u)\,\mathrm{d}u\right\}\left(\int_{t_0}^t b(s)^2\,\mathrm{d}s\right)^{\gamma}>1,\qquad t\in[t_2,T).$$
Hence, using (4.1) and (4.23), we have
$$\sqrt{\mathsf{E}\,r(Y^{(\alpha)}_t)^2}\le\sqrt{2}\,L\sqrt{1+(c'')^{2\gamma}}\exp\left\{\gamma\alpha\int_0^t b(u)\,\mathrm{d}u\right\}\left(\int_{t_0}^t b(u)^2\,\mathrm{d}u\right)^{\gamma/2},\qquad t\in[t_2,T),$$
and then for all $t\in[t_2,T)$,
$$\mathsf{E}(\Delta^{(\alpha)}_t)^2\le\alpha^2\Bigg(\int_0^{t_2}\sqrt{\mathsf{E}\,r(Y^{(\alpha)}_s)^2}\,|b(s)|\exp\left\{\alpha\int_s^t b(u)\,\mathrm{d}u\right\}\mathrm{d}s+\sqrt{2}\,L\sqrt{1+(c'')^{2\gamma}}\int_{t_2}^t|b(s)|\exp\left\{\gamma\alpha\int_0^s b(u)\,\mathrm{d}u+\alpha\int_s^t b(u)\,\mathrm{d}u\right\}\left(\int_{t_0}^s b(u)^2\,\mathrm{d}u\right)^{\gamma/2}\mathrm{d}s\Bigg)^2.$$
Hence, by (4.24), for all $t\in[t_2,T)$,
$$\frac{\sqrt{\mathsf{E}(\Delta^{(\alpha)}_t)^2}}{\sqrt{\mathsf{E}(X^{(\alpha)}_t)^2}}\le\frac{\alpha\int_0^{t_2}\sqrt{\mathsf{E}\,r(Y^{(\alpha)}_s)^2}\,|b(s)|\exp\left\{\alpha\int_s^t b(u)\,\mathrm{d}u\right\}\mathrm{d}s}{\exp\left\{\alpha\int_0^t b(u)\,\mathrm{d}u\right\}\Big(\int_0^{t_0}\sigma(u)^2\exp\big\{-2\alpha\int_0^u b(v)\,\mathrm{d}v\big\}\,\mathrm{d}u+c\int_{t_0}^t b(s)^2\,\mathrm{d}s\Big)^{1/2}}+\frac{\sqrt{2}\,\alpha L\sqrt{1+(c'')^{2\gamma}}\int_{t_2}^t|b(s)|\exp\left\{-(1-\gamma)\alpha\int_0^s b(u)\,\mathrm{d}u\right\}\left(\int_{t_0}^s b(u)^2\,\mathrm{d}u\right)^{\gamma/2}\mathrm{d}s}{\Big(\int_0^{t_0}\sigma(u)^2\exp\big\{-2\alpha\int_0^u b(v)\,\mathrm{d}v\big\}\,\mathrm{d}u+c\int_{t_0}^t b(s)^2\,\mathrm{d}s\Big)^{1/2}},$$
where the first term tends to 0 as $t\uparrow T$ by (4.7), and then, by L'Hospital's rule, we conclude
$$\lim_{t\uparrow T}\frac{\sqrt{\mathsf{E}(\Delta^{(\alpha)}_t)^2}}{\sqrt{\mathsf{E}(X^{(\alpha)}_t)^2}}\le\frac{\sqrt{2}\,\alpha L\sqrt{1+(c'')^{2\gamma}}}{\sqrt{c}}\lim_{t\uparrow T}\frac{\int_{t_2}^t|b(s)|\exp\left\{-(1-\gamma)\alpha\int_0^s b(u)\,\mathrm{d}u\right\}\left(\int_{t_0}^s b(u)^2\,\mathrm{d}u\right)^{\gamma/2}\mathrm{d}s}{\left(\int_{t_0}^t b(s)^2\,\mathrm{d}s\right)^{1/2}}=\frac{2\sqrt{2}\,\alpha L\sqrt{1+(c'')^{2\gamma}}}{\sqrt{c}}\lim_{t\uparrow T}\frac{|b(t)|\exp\left\{-(1-\gamma)\alpha\int_0^t b(u)\,\mathrm{d}u\right\}\left(\int_{t_0}^t b(u)^2\,\mathrm{d}u\right)^{\gamma/2}}{\left(\int_{t_0}^t b(s)^2\,\mathrm{d}s\right)^{-1/2}b(t)^2}=0,$$
since $\lim_{t\uparrow T}\int_0^t b(s)^2\,\mathrm{d}s=\infty$ and $\lim_{x\to\infty}x^ae^{-bx}=0$ for all $a>0$ and $b>0$.

Next we consider the case $\alpha=0$. Then the SDE (1.1) and the SDE (1.3) have the same unique strong solution
$$Y^{(0)}_t=\int_0^t\sigma(s)\,\mathrm{d}B_s,\qquad t\in[0,T).$$
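In the case $\alpha=0$ the solution is a Wiener integral, so $Y^{(0)}_t$ is centered Gaussian with variance $\int_0^t\sigma(s)^2\,\mathrm{d}s$. A quick Monte Carlo sketch illustrates this variance identity; the choice $\sigma(s)=1+s$ on $[0,1]$ (so that the variance is $\int_0^1(1+s)^2\,\mathrm{d}s=7/3$) is purely illustrative and not from the paper.

```python
import math, random

random.seed(0)
T, m, N = 1.0, 100, 10000   # horizon, time steps per path, number of paths
dt = T / m

def sigma(s):  # hypothetical diffusion coefficient (assumption, not from the paper)
    return 1.0 + s

samples = []
for _ in range(N):
    y = 0.0
    for k in range(m):
        s_mid = (k + 0.5) * dt          # midpoint rule reduces discretisation bias
        y += sigma(s_mid) * math.sqrt(dt) * random.gauss(0.0, 1.0)
    samples.append(y)

mean = sum(samples) / N
var = sum((y - mean) ** 2 for y in samples) / (N - 1)

# Exact values: E Y_1 = 0 and Var Y_1 = int_0^1 (1+s)^2 ds = 7/3.
assert abs(mean) < 0.08
assert abs(var - 7.0 / 3.0) < 0.2
```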
Hence $\Delta^{(0)}_t=Y^{(0)}_t-X^{(0)}_t=0$, $t\in[0,T)$, and then we get that
$$\widehat{\alpha}(Y^{(0)})_t=\frac{\int_0^t\frac{b(s)}{\sigma(s)}X^{(0)}_s\,\mathrm{d}B_s+J^{(2)}_t}{\int_0^t\frac{b(s)^2}{\sigma(s)^2}(X^{(0)}_s)^2\,\mathrm{d}s+J^{(5)}_t+J^{(6)}_t}$$
holds asymptotically as $t\uparrow T$ with probability one. Note that (4.15) is satisfied, since $\Delta^{(0)}_t=0$, $t\in[0,T)$. Hence, in order to prove the statement, it is enough to check (4.16). Clearly,
$$X^{(0)}_t\stackrel{\mathcal{L}}{=}Y^{(0)}_t\stackrel{\mathcal{L}}{=}\mathcal{N}\left(0,\int_0^t\sigma(s)^2\,\mathrm{d}s\right),\qquad t\in[0,T),$$
and hence
$$\mathsf{E}(X^{(0)}_t)^2=\int_0^t\sigma(s)^2\,\mathrm{d}s,\qquad\mathsf{E}|Y^{(0)}_t|^{\gamma}=\left(\int_0^t\sigma(s)^2\,\mathrm{d}s\right)^{\gamma/2}\mathsf{E}|\xi|^{\gamma},\qquad t\in[0,T),$$
where $\xi$ is a standard normally distributed random variable. Then, by (4.1), we have
$$\lim_{t\uparrow T}\frac{\mathsf{E}\,r(Y^{(0)}_t)^2}{\mathsf{E}(X^{(0)}_t)^2}\le\lim_{t\uparrow T}\frac{L^2\big(1+\mathsf{E}|Y^{(0)}_t|^{\gamma}\big)^2}{\mathsf{E}(X^{(0)}_t)^2}=\lim_{t\uparrow T}\frac{L^2\Big(1+\mathsf{E}|\xi|^{\gamma}\big(\int_0^t\sigma(s)^2\,\mathrm{d}s\big)^{\gamma/2}\Big)^2}{\int_0^t\sigma(s)^2\,\mathrm{d}s}=0,$$
where the last step can be checked as follows. By (4.6), $\lim_{t\uparrow T}|b(t)|/\sigma(t)=|C|\in(0,\infty)$, and hence there exist $c_1>0$ and $t_1\in[0,T)$ such that $|b(t)|<c_1\sigma(t)$ for all $t\in[t_1,T)$. Then (4.7) yields $\lim_{t\uparrow T}\int_0^t\sigma(s)^2\,\mathrm{d}s=\infty$, concluding the proof of the present case.

Finally, we consider the case $\alpha<0$ and $C<0$. For all $\beta\in\mathbb{R}$, let us consider the process $(V^{(\beta)}_t)_{t\in[0,T)}$ given by the SDE
$$\begin{cases}\mathrm{d}V^{(\beta)}_t=\beta\,\widetilde{b}(t)\,a(V^{(\beta)}_t)\,\mathrm{d}t+\sigma(t)\,\mathrm{d}B_t,\qquad t\in[0,T),\\ V^{(\beta)}_0=0,\end{cases}$$
where $\widetilde{b}(t):=-b(t)$, $t\in[0,T)$. Note that if conditions (4.5) and (4.6) are satisfied with the functions $b$ and $\sigma$ and with parameters $\alpha<0$ and $C<0$, then they are also satisfied with the functions $\widetilde{b}=-b$ and $\sigma$ and with parameters $-\alpha>0$ and $-C>0$. Hence
$$\sqrt{I_{V^{(-\alpha)}}(t)}\left(\widehat{(-\alpha)}(V^{(-\alpha)})_t-(-\alpha)\right)\stackrel{\mathcal{L}}{\longrightarrow}\frac{1}{\sqrt{2}}\,\frac{\int_0^1 W_s\,\mathrm{d}W_s}{\int_0^1(W_s)^2\,\mathrm{d}s}\qquad\text{as }t\uparrow T.$$
By the uniqueness of the strong solution, the process $(Y^{(\alpha)}_t)_{t\in[0,T)}$ given by the SDE (1.1) and the process $(V^{(-\alpha)}_t)_{t\in[0,T)}$ coincide, and hence $I_{V^{(-\alpha)}}(t)=I_{Y^{(\alpha)}}(t)$ for all $t\in(0,T)$, and $\widehat{(-\alpha)}(V^{(-\alpha)})_t=-\widehat{\alpha}(Y^{(\alpha)})_t$ holds asymptotically as $t\uparrow T$ with probability one, concluding the proof. $\Box$

We note that in the proof of Theorem 4.3, instead of Lemma 4.4 (a generalization of Grönwall's inequality), we could use Bainov and Simeonov [1, Theorem 1.3], which is another generalization of Grönwall's inequality. However, the calculations would become more complicated without any improvement or refinement of the result.
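The limit variable $c\int_0^1 W_s\,\mathrm{d}W_s\big/\int_0^1(W_s)^2\,\mathrm{d}s$ appearing in the theorem can be sampled by discretising a standard Wiener path. By Itô's formula the numerator satisfies $\int_0^1 W_s\,\mathrm{d}W_s=(W_1^2-1)/2$, which serves as a cross-check of the left-endpoint (Itô) discretisation. The sketch below is illustrative and not from the paper; the grid sizes are arbitrary.

```python
import math, random

random.seed(1)
m, N = 2000, 200   # time steps per path, number of sampled paths
dt = 1.0 / m

ratios = []
for _ in range(N):
    w, num, den = 0.0, 0.0, 0.0
    for _ in range(m):
        dw = math.sqrt(dt) * random.gauss(0.0, 1.0)
        num += w * dw          # left-endpoint sum -> Ito integral int_0^1 W dW
        den += w * w * dt      # Riemann sum -> int_0^1 (W_s)^2 ds
        w += dw
    # cross-check: Ito's formula gives int_0^1 W dW = (W_1^2 - 1)/2
    assert abs(num - (w * w - 1.0) / 2.0) < 0.2
    ratios.append(num / den)
```

The list `ratios` then holds draws (without the constant $c$) from the limit distribution; for the linear case the paper normalizes it by $c=\pm 1/\sqrt{2}$ depending on the sign of the drift parameter.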
References

[1] D. Bainov and P. Simeonov, Integral Inequalities and Applications. Kluwer Academic Publishers, Dordrecht, 1992.

[2] I. V. Basawa and B. L. S. Prakasa Rao, Statistical Inference for Stochastic Processes. Academic Press, London, 1980.

[3] I. V. Basawa and D. J. Scott, Asymptotic Optimal Inference for Non-ergodic Models, Lecture Notes in Statistics 17. Springer, 1983.

[4] J. P. N. Bishwal, Parameter Estimation in Stochastic Differential Equations. Springer, 2007.

[5] M. J. Bobkoski, Hypothesis Testing in Nonstationary Time Series. Ph.D. Dissertation, University of Wisconsin, 1983.

[6] A. N. Borodin and P. Salminen, Handbook of Brownian Motion – Facts and Formulae. Birkhäuser, 1996.

[7] H. M. Dietz and Yu. A. Kutoyants, Parameter estimation for some non-recurrent solutions of SDE. Statistics & Decisions (1), 29–46 (2003).

[8] P. D. Feigin, Maximum likelihood estimation for continuous-time stochastic processes. Advances in Applied Probability (4), 712–736 (1976).

[9] A. A. Gushchin, On asymptotic optimality of estimators of parameters under the LAQ condition. Theory of Probability and Its Applications (2), 261–272 (1995).

[10] J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, 2nd edition. Springer-Verlag, Berlin, 2003.

[11] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus, 2nd edition. Springer-Verlag, Berlin, Heidelberg, 1991.

[12] I. Kátai and J. Mogyoródi, Some remarks concerning the stable sequences of random variables. Publicationes Mathematicae Debrecen, 227–238 (1967).

[13] Yu. A. Kutoyants, Parameter Estimation for Stochastic Processes. Heldermann Verlag, Berlin, 1984.

[14] Yu. A. Kutoyants, Identification of Dynamical Systems with Small Noise. Kluwer Academic Publishers, Dordrecht, 1994.

[15] Yu. A. Kutoyants, Statistical Inference for Ergodic Diffusion Processes. Springer-Verlag, Berlin, Heidelberg, 2004.

[16] D. Lépingle, Sur le comportement asymptotique des martingales locales. Séminaire de Probabilités XII, Lecture Notes in Mathematics, 148–161 (1978).

[17] R. S. Liptser and A. N. Shiryaev, Statistics of Random Processes I. General Theory, 2nd edition. Springer-Verlag, Berlin, Heidelberg, 2001.

[18] R. S. Liptser and A. N. Shiryaev, Statistics of Random Processes II. Applications, 2nd edition. Springer-Verlag, Berlin, Heidelberg, 2001.

[19] H. Luschgy, Local asymptotic mixed normality for semimartingale experiments. Probability Theory and Related Fields (2), 151–176 (1992).

[20] H. Luschgy, Asymptotic inference for semimartingale models with singular parameter points. Journal of Statistical Planning and Inference (2), 155–186 (1994).

[21] M. N. Mishra and B. L. S. Prakasa Rao, Asymptotic study of maximum likelihood estimation for nonhomogeneous diffusion processes. Statistics & Decisions (3–4), 193–203 (1985).

[22] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, 3rd edition, corrected 2nd printing. Springer-Verlag, Berlin, 2001.

[23] A. N. Shiryaev, Probability, 2nd edition. Springer-Verlag, New York, 1996.

[24] K. Tanaka, Time Series Analysis: Nonstationary and Noninvertible Distribution Theory. Wiley Series in Probability and Statistics, 1996.

[25] A. W. van der Vaart, Asymptotic Statistics. Cambridge University Press, 1998.

[26] H. van Zanten, A multivariate central limit theorem for continuous local martingales. Statistics & Probability Letters 50