Estimation of Tempered Stable Lévy Models of Infinite Variation
EEstimation of Tempered Stable L´evy Models of Infinite Variation
Jos´e E. Figueroa-L´opez ∗ Ruoting Gong † Yuchen Han ‡ January 5, 2021
Abstract
In this paper we propose a new method for the estimation of a semiparametric temperedstable L´evy model. The estimation procedure combines iteratively an approximate semipara-metric method of moment estimator, Truncated Realized Quadratic Variations (TRQV), anda newly found small-time high-order approximation for the optimal threshold of the TRQVof tempered stable processes. The method is tested via simulations to estimate the volatilityand the Blumenthal-Getoor index of the generalized CGMY model as well as the integratedvolatility of a Heston type model with CGMY jumps. The method outperforms other efficient alternatives proposed in the literature.
MSC 2000 subject classifications : 60G51, 62M09.
Keywords and phrases : Threshold estimator, high-frequency estimation, L´evy models, methodof moment estimators, optimal parameter tuning.
L´evy processes have experienced a revival in the past 20 years, propelled by the need for more real-istic modeling of irregular behavior in many phenomena of nature and society. These fundamentalbuilding blocks of stochastic modeling have been widely applied in many fields, including statisticalphysics, meteorology, seismology, insurance, finance, and telecommunication.While, in principle, L´evy models offer ideal conditions for estimation purposes, two main bot-tlenecks complicate their estimation. Firstly, the marginal distributions often lack tractable orclosed-form representations. In those situations, the marginal distributions must be approximatedby Fourier, Monte Carlo, or other methods, which makes the estimation slower and noisier. Thesecond issue comes from the need to handle high-frequency sampling data of the process. This typeof data has been widely available in finance during the last 15 years and is increasingly more com-mon in other fields. The two just-mentioned issues have rendered traditional statistical methodssuch as likelihood and Bayesian estimation unfeasible.In this paper, we study a new method for the estimation of the parameters of a L´evy model.A semiparametric model is considered in which the jump component is assumed to exhibit small ∗ Department of Mathematics & Statistics, Washington University in St. Louis, St. Louis, MO 63130, U.S.A.(Email: [email protected] ). Research supported in part by the NSF Grants: DMS-2015323, DMS-1613016. † Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616, U.S.A. (Email: [email protected] ). ‡ Department of Mathematics & Statistics, Washington University in St. Louis, St. Louis, MO 63130, U.S.A.(Email: [email protected] ). a r X i v : . [ ec on . E M ] J a n umps that behave like those of an Y -stable L´evy process. Specifically, the class of tempered stableprocesses introduced in [8] and [11] is considered. We focus on models of infinite variation (i.e., Y ∈ (1 , efficient estimatorof the integrated volatility of an Itˆo semimartingale model in the presence of a L´evy jump modelof infinite variation with Blumenthal-Getoor index β ∈ (1 , /
2) or when the jump component issymmetric. Recently, Mies [16] proposed an efficient estimation method for L´evy models basedon a type of approximate semiparametric method of moments with scaling. Specifically, for somesuitable moment functions f , f , . . . , f m and a scaling factor u n → ∞ , [16] proposed to look forthe parameters (cid:98) θ = ( (cid:98) θ , . . . , (cid:98) θ m ) such that1 n n (cid:88) i =1 f j (cid:0) u n ∆ ni X (cid:1) − E (cid:98) θ (cid:16) f j (cid:0) u n ∆ ni (cid:101) Z (cid:1)(cid:17) = 0 , j = 1 , . . . , m, (1.1)where (cid:101) Z is the superposition of a Brownian motion and independent stable L´evy processes closelyapproximating X in a certain sense. The distribution measure P θ of (cid:101) Z depends on some parameters θ , including the volatility σ of X , and E θ ( · ) denotes the expectation with respect to P θ . Above,∆ ni L := L t i − L t i − is the i -th increment of a generic process ( L t ) t ≥ given n evenly spaced randomsamples L t , . . . , L t n over a fixed time interval [0 , T ] (i.e., t i = ih n with h n = T /n ). If X wereassumed to be a parametric L´evy model and we replaced E (cid:98) θ ( f j ( u n ∆ ni (cid:101) Z )) with E (cid:98) θ ( f j ( u n ∆ ni X )) in(1.1), we will recover a standard Method of Moment Estimator (MME). However, we are assumingthat X is semiparametric and that it can be approximated closely enough by a parametric L´evymodel (cid:101) Z . The scaling u n , which is taken to converge to ∞ at the order of 1 / (cid:112) ln( n ) /n , is also anew feature of this method compare to the standard MME.The moment functions f , . . . , f m and the scaling factor u n in (1.1) play key roles in the perfor-mance of the estimators. To determine an appropriate scaling u n , we connect it to the thresholdparameter ε n of a Truncated Realized Quadratic Variation (TRQV),TRQV n ( ε n ) = n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ ε n } , which is known to be a consistent estimator for the integrated volatility of a general semimartingalemodel. Indeed, by taking f ( x ) = x {| x |≤ } in (1.1), we uncover the relationship u n = 1 /ε n . Thatis, 1 /u n plays the same role as the threshold in TRQV.Recently, [10] studied the problem of optimal thresholding of TRQV under the mean-squareerror. Specifically, in the case of a L´evy process with volatility σ , it is shown that the threshold ε = ε (cid:63)n that minimizes the mean-square error, E ((TRQV n ( ε ) − σ T ) ), solves the equation: ε + 2( n − E (cid:0) b ,h n ( ε ) (cid:1) − T σ = 0 , where b ,h n ( ε ) := (∆ X ) {| ∆ X |≤ ε } . By analyzing the small-time asymptotic behavior of E ( b ,h n ( ε ))(i.e., when n → ∞ so that h n → ε (cid:63)n for a L´evy processwith a Y -stable jump component behaves like ε (cid:63)n ∼ (cid:112) (2 − Y ) σ h n ln(1 /h n ) , n → ∞ , (1.2) The term “tempered stable” is understood here in a much more general sense than in several classical sources offinancial mathematics (e.g., [2], [6], [15]) and even more general than in [18]. In fact, such class of L´evy processes iscalled the class of tempered-stable-like
L´evy processes in [8]. h n is the time span between observations and, as usual, a n ∼ b n means a n /b n → n → ∞ .The proportionality constant √ − Y roughly tells us that the higher the jump activity is, the lowerthe optimal threshold has to be if we want to discard the higher noise represented by the smalljumps. This fact opens the door to an iterative method to estimate σ . We can first estimate Y and σ using, for instance, the method of moments (1.1). We can then use the TRQV with thethreshold (cid:98) ε (cid:63)n = (cid:113) (2 − (cid:98) Y ) (cid:98) σ h n ln(1 /h n ).In this paper, we first extend the result of [10] to allow for a general tempered stable L´evyprocess. Furthermore, we propose a new approximation for ε (cid:63)n of the form: (cid:101) ε (cid:63)n := (cid:115) (2 − Y ) σ h n ln (cid:16) h n (cid:17) + 2 σ h n ln (cid:18) (2 − α ) σC (cid:19) , (1.3)where C controls the overall intensity of jumps. The approximation (1.3) says that if C is small(relative to σ ) then the threshold can be loosened up (in fact, (cid:101) ε (cid:63)n (cid:37) ∞ as C (cid:38) C is small compare to σ and (1.3) provides a significant correction compare to (1.2).We then proceed to devise a new method to estimate the volatility, the index of jump activity Y ,and C by combining a variation of the approximate semiparametric method of moments in [16],TRQVs, and the approximate optimal threshold (1.3). Compared to [16] we introduce simplermoment functions f , . . . , f m , and a systematic and objective method to tune the scaling factor u n in (1.1). The performance of the proposed procedure is superior to the efficient methods of [13]and [16]. Finally, as in [13], we use a localization technique to estimate the integrated volatility ofan Itˆo semimartingale. Specifically, the idea is to split the time horizon into small blocks where theprocess is approximately L´evy and, hence, its volatility level can be estimated using our method.For values of Y ≥ .
5, our method outperforms the method proposed by [13].The rest of this paper is organized as follows. Section 2 provides the framework and assumptionsas well as some known preliminary results from the literature. Section 3 obtains the asymptoticbehavior of E ( b ,h n ( ε )) and derives (1.2). The second-order approximation (1.3) is derived in Section4 as well as a numerical assessment of the approximations in the case of a CGMY jump component.The new method to estimate the parameters of a tempered stable L´evy model is presented inSection 5 together with an analysis of its performance via Monte Carlo simulations. The proofs aredeferred to an appendix section. Throughout, R + := [0 , ∞ ) and R := R \{ } , and we let (Ω , F , F , P ) be a complete filtered proba-bility space on which all stochastic processes are defined, where F := ( F t ) t ∈ R + satisfies the usualconditions. We consider a L´evy process X := ( X t ) t ∈ R + of the form X t = σW t + J t , t ∈ R + , (2.1)where W := ( W t ) t ∈ R + is a Wiener process and J := ( J t ) t ∈ R + is an independent pure-jump temperedstable L´evy process with L´evy triplet ( b, , ν ). The L´evy measure ν is assumed to be absolutelycontinuous with a density s : R → R + of the form s ( x ) := ν ( dx ) dx := (cid:0) C + (0 , ∞ ) ( x ) + C − ( −∞ , ( x ) (cid:1) q ( x ) | x | − − Y , x ∈ R . (2.2)3ere, C ± > Y ∈ (1 , q : R → R + is a bounded Borel-measurable function. Concretely,we make the following assumptions on q . Assumption 2.1. (i) q ( x ) → , as x → ;(ii) There exist α ± (cid:54) = 0 such that (cid:90) (0 , (cid:12)(cid:12) q ( x ) − − α + x (cid:12)(cid:12) x − Y − dx + (cid:90) [ − , (cid:12)(cid:12) q ( x ) − − α − x (cid:12)(cid:12) x − Y − dx < ∞ ; (iii) lim sup | x |→∞ | ln q ( x ) || x | < ∞ ;(iv) For any ε > , inf | x | <ε q ( x ) > ;(v) (cid:90) | x | > q ( x ) | x | − − Y dx < ∞ .Remark . The class of L´evy processes considered above is sometimes termed tempered stable processes (or tempered-stable-like processes as in [8]) and includes a wide range of models appearingin finance. Roughly, the conditions above amount to say that the small jumps of X behave likethose of a Y -stable L´evy process. We refer the reader to [12] for further background about this class.The parameter Y is called the index of jump activity and coincides with the Blumenthal-Getoorindex, which controls the jump activity of X in that (cid:80) s ∈ (0 ,t ] | ∆ X s | γ < ∞ for all γ > Y and t > X s := X s − X s − is the jump of X at time s . The range of Y considered here (namely, Y ∈ (1 , P to another locally absolutely continuous measure (cid:101) P , under which J is a Y -stableL´evy process and W is a standard Brownian motion independent of J . Concretely, let (cid:101) ν ( dx ) := (cid:0) C + (0 , ∞ ) ( x ) + C − ( −∞ , ( x ) (cid:1) | x | − Y − dx, (cid:101) b := b + (cid:90) < | x |≤ x (˜ ν − ν )( dx ) . Note that (cid:101) ν is the L´evy measure of a Y -stable L´evy process and, also, (cid:101) ν ( dx ) = e ϕ ( x ) ν ( dx ) , with ϕ ( x ) := − ln q ( x ) . Next, define (cid:101) P such that, for any t ∈ R + ,ln (cid:32) d (cid:101) P (cid:12)(cid:12) F t d P (cid:12)(cid:12) F t (cid:33) = U t := lim ε → (cid:88) s ∈ (0 ,t ]: | ∆ J s | >ε ϕ (∆ J s ) + t (cid:90) | x | >ε (cid:0) e − ϕ ( x ) − (cid:1) ˜ ν ( dx ) . (2.3)By virtue of [19, Theorem 33.1], a necessary and sufficient condition for the measure transformationfrom P to (cid:101) P to be well defined is given by (cid:90) R (cid:0) e ϕ ( x ) / − (cid:1) ν ( dx ) < ∞ , − (i) & (ii) (cf. [12, Lemma 2.1]). Under (cid:101) P , J is a L´evy process with L´evy triplet ( (cid:101) b, , (cid:101) ν ), and W is a standard Brownian motion which isindependent of J . In particular, under (cid:101) P , the centered process Z := ( Z t ) t ∈ R + , given by Z t := J t − t (cid:101) γ, (cid:101) γ := (cid:101) E ( J ) = (cid:101) b + (cid:90) | x | > x (cid:101) ν ( dx ) , is a strictly Y -stable process with its scale, skewness, and location parameters given by ( C + + C − )Γ( − Y ) | cos( πY / | , ( C + − C − ) / ( C + + C − ), and 0, respectively. Let p Z denote the marginaldensity of Z under (cid:101) P . It is well known (cf. [19, (14.37)] and references therein) that p Z ( z ) ∼ C ± | z | − Y − , as z → ±∞ , respectively , so that (cid:101) P (cid:0) ± Z > z (cid:1) = C ± Y z − Y + O (cid:0) z − Y (cid:1) , z → ∞ . The processes U := ( U t ) t ∈ R + and Z can be expressed in terms of the jump-measure N ( dt, dx )of the process J and its compensator (cid:101) N ( dt, dx ) := N ( dt, dx ) − (cid:101) ν ( dx ) dt (under (cid:101) P ), as follows: U t = (cid:101) U t + ηt := (cid:90) t (cid:90) R ϕ ( x ) (cid:101) N ( ds, dx ) + tη, (2.4) J t = Z t + t (cid:101) γ := Z + t + Z − t + t (cid:101) γ, (2.5)where Z + t := (cid:90) t (cid:90) (0 , ∞ ) x (cid:101) N ( dt, dx ) , Z − t := (cid:90) t (cid:90) ( −∞ , x (cid:101) N ( dt, dx ) , η := (cid:90) R (cid:0) e − ϕ ( x ) − ϕ ( x ) (cid:1)(cid:101) ν ( dx ) . The existence of the integral defining η follows from Assumption 2.1 − (i) & (ii). Clearly, Z + :=( Z + t ) t ∈ R + and − Z − := ( − Z − t ) t ∈ R + are independent one-sided Y -stable processes with scale, skew-ness, and location parameters given by C ± | Γ( − Y ) cos( πY / | , 1, and 0, respectively, so that (cid:101) P (cid:0) ± Z ± > z (cid:1) = C ± Y z − Y + O (cid:0) z − Y (cid:1) , z → ∞ , (cid:101) E (cid:16) e ∓ Z ± t (cid:17) = exp (cid:18) C ± Γ( − Y ) cos (cid:18) πY (cid:19) sgn(1 − Y ) t (cid:19) < ∞ . Moreover, it can be shown that (cf. [9, Lemma 2.1]) there exists universal constants K ∈ (0 , ∞ ),such that for any z > (cid:101) P (cid:0) ± Z ± > z (cid:1) ≤ Kz − Y . (2.6)Combining (2.5) and (2.6), we deduce that there exists a universal constant (cid:101) K ∈ (0 , ∞ ), such thatfor any z > p Z ( ± z ) ≤ (cid:101) K z − Y − , (2.7) (cid:12)(cid:12)(cid:12) p Z ( ± z ) − C ± z − Y − (cid:12)(cid:12)(cid:12) ≤ (cid:101) K (cid:0) z − Y − ∧ z − Y − (cid:1) , (2.8) (cid:101) P (cid:0) ± Z > z (cid:1) ≤ (cid:101) Kz − Y . (2.9)5 Main Result
The TRQV, defined as (cid:98) σ n ( ε ) = 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ ε } , (3.1)is one of the most commonly used estimators for the integrated volatility of an Itˆo semimartingale.Above, ∆ ni X := X t i − X t i − for i = 1 , . . . , n , where X t , X t , . . . , X t n are evenly spaced observationsof X over a fixed time horizon [0 , T ], so that t i = t i,n = ih n for i = 0 , , . . . , n , with h n := T /n . Oneof its drawbacks is the necessity of tuning the threshold ε up, which strongly affects the performanceof the estimator. It is shown in [10] that, for a L´evy process X with volatility σ >
0, there exists aunique threshold ε = ε (cid:63)n , which minimizes the mean-square error, E (( (cid:98) σ n ( ε ) − σ ) ). Furthermore,the minimizer ε (cid:63)n is such that ε (cid:63)n → , ε (cid:63)n √ h n → ∞ , as n → ∞ , (3.2)and solves the equation ε + 2( n − E (cid:0) b ,h n ( ε ) (cid:1) − T σ = 0 , (3.3)where b ,h n ( ε ) := X h n {| X hn |≤ ε } . Therefore, in order to determine the asymptotic behavior of theoptimal threshold ε (cid:63)n , we need to study the asymptotic behavior of E ( b ,h ( ε )) as both h → ε = ε ( h ) →
0+ in such a way that ε ( h ) / √ h → ∞ , as h →
0. Our main theoretical resultaccomplishes this for the tempered stable L´evy processes of Section 2, and its proof is deferred toAppendix A.
Theorem 3.1.
Under Assumption 2.1, we have E (cid:0) b ,h ( ε ) (cid:1) = σ h − σε √ h √ π e − ε / (2 σ h ) + C + + C − − Y hε − Y + O (cid:16) he − ε / (2 σ h ) (cid:17) + O (cid:0) hε − Y/ (cid:1) + O (cid:0) h − Y/ (cid:1) , as h → and ε = ε ( h ) → + , with ε/ √ h → ∞ . The following result gives the asymptotic behavior of the optimal threshold ε (cid:63)n . Its proof issimilar to that of [10, Proposition 2] and is outline below for completeness and also to motivatesome approximation methods proposed below. Corollary 3.2.
Under Assumption 2.1, the optimal threshold ε (cid:63)n is such that ε (cid:63)n ∼ (cid:114) (2 − Y ) σ h n ln 1 h n , as n → ∞ . (3.4) Furthermore, setting C = ( C + + C − ) / , we have, as n → ∞ , ε (cid:63)n = (cid:115) σ h n (cid:20) (2 − Y ) ln 1 h n +( Y −
1) ln ln 1 h n +( Y −
1) ln (cid:0) (2 − Y ) σ (cid:1) +2 ln (cid:18) (2 − Y ) σC √ π (cid:19) + o (1) (cid:21) . (3.5)6 roof. For simplicity, we take T = 1 so that h n = 1 /n . With C = ( C + + C − ) / E ( b ,h n ( ε (cid:63)n )) described in Theorem 3.1, we can write (3.3) as( ε (cid:63)n ) + 2( n − (cid:18) σ h n − √ σ √ π ε (cid:63)n (cid:112) h n e − ( ε ∗ n ) / (2 σ h n ) + 2 C − Y h n ( ε (cid:63)n ) − Y + h.o.t. (cid:19) − nh n σ = 0 , where h.o.t. means “higher-order terms” as n → ∞ . In view of (3.2) and since Y ∈ (1 , C − Y ( ε (cid:63)n ) − Y − √ σ √ π ε (cid:63)n √ h n e − ( ε ∗ n ) / (2 σ h n ) + o (cid:18) ε (cid:63)n √ h n e − ( ε ∗ n ) / (2 σ h n ) (cid:19) + o (cid:0) ( ε ∗ n ) − Y (cid:1) = 0 . Dividing by ε ∗ n , rearranging the terms, and taking logarithms of both sides, we deduce that(1 − Y ) ln ε ∗ n + o (1) = − ( ε (cid:63)n ) σ h n −
12 ln h n + ln (cid:18) √ σ (2 − Y )2 C √ π (cid:19) + o (1) , which can be written as( ε (cid:63)n ) σ h n + (1 − Y ) ln (cid:18) ( ε (cid:63)n ) σ h n (cid:19) + (1 − Y ) ln (cid:0) σ (cid:1) + (2 − Y ) ln h n − (cid:18) σ (2 − Y ) C √ π (cid:19) = o (1) . (3.6)Dividing by ( ε ∗ n ) / ( σ h n ) and using (3.2), we obtain the first result (3.4). For the second asymp-totics, note that (3.4) implies thatln (cid:18) ( ε (cid:63)n ) σ h n (cid:19) = ln (cid:18) (2 − Y ) ln 1 h n (cid:19) + o (1) . Finally, plugging the above in (3.6) and solving for ε (cid:63)n gives the desired asymptotics.The proportionality constant √ − Y of the previous result is intuitive and roughly tells us thatthe higher the jump activity is, the lower the optimal threshold has to be if we want to discard thehigher noise represented by the jumps and to catch information about the Brownian component. In this section, we introduce other approximations to the optimal threshold derived from the for-mulas in Theorem 3.1 and the proof of Corollary 3.2. We then illustrate their performance in thecase of a L´evy process with a CGMY jump component J (cf. [5]). The CGMY model is considereda prototypical jump process of infinite activity in finance. In the notation of the L´evy density (2.2),a CGMY model is given by q ( x ) = e − Mx (0 , ∞ ) ( x ) + e Gx ( −∞ , ( x ) and C + = C − = C. Thus, the conditions of Assumption 2.1 are satisfied with α + = − M and α − = G . We adopt theparameter setting C = 0 . , G = 2 . , M = 4 . , Y = 1 . . (4.1)These values are similar to those used in [12] , who themselves took them from an empirical studyin [14]. We take T = 1 year and n = 252(6 . . [12] considers the asymmetric case ν ( dx ) = C ( x/ | x | )¯ q ( x ) | x | − − Y dx with C (1) = 0 .
015 and C ( −
1) = 0 . C = ( C (1) + C ( − / G , M , and Y are thesame as in [12].
7o compute E ( b ,h ( ε )), we use Monte Carlo and the change of probability measure (2.3). Con-cretely, under (cid:101) P , we have the following representation: E (cid:0) b ,h ( ε ) (cid:1) = (cid:101) E (cid:16) e − U h (cid:0) σW h + J h (cid:1) {| σW h + J h |≤ ε } (cid:17) = (cid:101) E (cid:16) e − MZ + h + GZ − h − ηt (cid:0) σW h + Z + h + Z − h + (cid:101) γt (cid:1) {| σW h + Z + h + Z − h + (cid:101) γh |≤ ε } (cid:17) , where Z + h and − Z − h are independent one-sided Y -stable random variables with common scale,skewness, and location parameters given by C | Γ( − Y ) cos( πY / | h /Y , 1, and 0, respectively. Sucha distribution can be simulated efficiently .We consider two different approximations of the equation (3.3) defining the optimal threshold ε (cid:63)n . For the first approximation, we replace E ( b ( ε, h n )) (where h n = 1 /n ) in (3.3) with its leadingorder terms as given by Theorem 3.1, namely, ε + 2( n − (cid:18) − √ σ √ π ε (cid:112) h n e − ε / (2 σ h n ) + 2 C − Y h n ε − Y (cid:19) − σ h n = 0 . (4.2)For the second approximation, we take a simplified version of (3.5), only keeping those terms thatare found to be significant: (cid:101) ε (cid:63)n := (cid:115) (2 − Y ) σ h n ln (cid:16) h n (cid:17) + 2 σ h n ln (cid:18) (2 − Y ) σC (cid:19) . (4.3)Interestingly, as C →
0, we have (cid:101) ε (cid:63)n → ∞ , which makes sense. The approximation (4.3) says thatif C is small (relative to σ ) then the threshold can be loosened up.Figure 1 shows the graphs of the left-hand expressions of (3.3) (solid blue) and the approximation(4.2) (dashed red) against ε for three different values of σ : 0 .
1, 0 .
2, and 0 .
4. The solid blue verticalline is the “true” optimum threshold ε = ε (cid:63)n , the dotted brown vertical line shows ε = (cid:101) ε (cid:63)n with (cid:101) ε (cid:63)n given as in approximation (4.3), and the dotted/dashed vertical green line is the approximation ε = ε n := (cid:112) (2 − Y ) σ h n ln(1 /h n ) derived in (3.4) of Corollary 3.2. We also show the vertical linepassing at the root of (4.2) (vertical dashed red). It is evident that for the considered values of Y and σ , the root of (4.2) and (cid:101) ε (cid:63)n are reasonably good approximations of ε (cid:63)n . However, we cannot saythe same about ε n = (cid:112) (2 − Y ) σ h n ln(1 /h n ), which is a good approximation of ε (cid:63)n only for smallvalues of σ and, otherwise, it underestimates ε (cid:63)n .Next, we consider the value of Y = 1 .
5, while all the other CGMY parameter values remainunchanged. Figure 2 below shows the graphs of the left-hand expressions of (3.3) (solid blue)and (4.2) (dashed red), against ε for three different values of σ : 0 .
1, 0 .
2, and 0 .
4. The Equation(4.2) derived from Theorem 3.1 is a relatively accurate approximation of (3.3), especially for largervalues of σ . As before, the approximation (3.4) established in Corollary 3.2 is accurate for smalland medium values of σ but not for larger values. The approximation (4.3) is reasonably accuratefor all considered values of σ .Finally, we consider the value of Y = 1 .
7. All the other CGMY parameter values remain thesame. The approximations are shown in Figure 3. We deduce that for such a large value of Y , theapproximation (4.2) derived from Theorem 3.1 is not accurate anymore, though it improves as σ gets larger. On the other hand, the other suggested approximation (4.3) is still relatively accurateto approximate the optimal threshold ε (cid:63)n (the root of (3.3)). We again have that for small andmedium values of σ , the approximation (3.4) is good, which is not the case for large values of σ . In our code, we use the R package stabledist to generate them. .000 - . - . . Threshold, ε - . - . Threshold, ε - . - . - . . Threshold, ε Figure 1:
Graphs of the respective left-hand expressions of (3.3) (solid blue) and (4.2) (dashed red)against ε for σ = 0 . (left panel), σ = 0 . (center panel), and σ = 0 . (right panel), respectively. Wealso show the vertical lines ε = ε (cid:63)n (solid blue), ε = the root of (4.2) (dashed red), ε = (cid:101) ε (cid:63)n (dotted brown),and ε = (cid:112) (2 − Y ) σ h n ln(1 /h n ) (dotted/dashed green). The parameters for the CGMY model are set as C = 0 . , G = 2 . , M = 4 . , and Y = 1 . . - . - . . Threshold, ε - . - . Threshold, ε - . - . - . . Threshold, ε Figure 2:
Graphs of the respective left-hand expressions of (3.3) (solid blue) and (4.2) (dashed red)against ε for σ = 0 . (left panel), σ = 0 . (center panel), and σ = 0 . (right panel), respectively. Wealso show the vertical lines ε = ε (cid:63)n (solid blue), ε = the root of (4.2) (dashed red), ε = (cid:101) ε (cid:63)n (dotted brown),and ε = (cid:112) (2 − Y ) σ h n ln(1 /h n ) (dotted/dashed green). The parameters for the CGMY model are set as C = 0 . , G = 2 . , M = 4 . , and Y = 1 . . To summarize, while for values of Y ≤ .
5, the approximation (4.2) may be the most accurate,this is not the case anymore for larger values of Y . On the other hand, the approximation (4.3) isreasonably good for a large range of values of Y . Due to this reason, in our simulations of Section5, we use (4.3) to assess the finite sample performance of the proposed estimation method below. In this section, we propose a new method for estimating the volatility σ and other parametersof a tempered-stable L´evy process using the TRQV (3.1) and the approximations of the optimalthreshold derived in Section 3. Then, we illustrate the method in the case of a CGMY L´evyprocess. Finally, using a localization technique, we adapt our method to estimate the integrated9 .000 . . Threshold, ε - . . . Threshold, ε - . - . Threshold, ε Figure 3:
Graphs of the respective left-hand expressions of (3.3) (solid blue) and (4.2) (dashed red)against ε for σ = 0 . (left panel), σ = 0 . (center panel), and σ = 0 . (right panel), respectively. Wealso show the vertical lines ε = ε (cid:63)n (solid blue), ε = the root of (4.2) (dashed red), ε = (cid:101) ε (cid:63)n (dotted brown),and ε = (cid:112) (2 − Y ) σ h n ln(1 /h n ) (dotted/dashed green). The parameters of the CGMY model are set as C = 0 . , G = 2 . , M = 4 . , and Y = 1 . . variance under a Heston stochastic volatility model with CGMY jumps and compare it to themethod proposed by [13], which is known to be efficient when Y ≤ . As shown by (3.3) and the asymptotic expansion of Theorem 3.1, the optimal threshold ε (cid:63)n dependson the volatility, and vice versa. It is then natural to consider an iterative method to estimate ε (cid:63)n . But before this, we need to estimate C ± and Y . Several methods have been proposed in theliterature for this purpose (see, e.g., [1], [4], and [17]). Mies [16] recently proposed an efficientmethod using the method of moments. In this part, we adapt and modify this method and applyit in combination with the approximations of Theorem 3.1 to estimate the optimal threshold ε (cid:63)n ofthe TRQV and subsequently the other parameters σ , Y , and C ± .Consider a L´evy process X := ( X t ) t ∈ R + with characteristic triplet ( µ, σ , ν ). The approach of[16] builds on the assumption that ν can be well approximated by the superposition of stable L´evymeasures in the sense that (cid:12)(cid:12) ν (cid:0) [ x, ∞ ) (cid:1) − (cid:101) ν (cid:0) [ x, ∞ ) (cid:1)(cid:12)(cid:12) ≤ L | x | − ρ , x ∈ (0 , , (5.1) (cid:12)(cid:12) ν (cid:0) ( −∞ , x ] (cid:1) − (cid:101) ν (cid:0) ( −∞ , x ] (cid:1)(cid:12)(cid:12) ≤ L | x | − ρ , x ∈ [ − , , (5.2)for some L, ρ ∈ (0 , ∞ ), where (cid:101) ν is given by (cid:101) ν ( dz ) = N (cid:88) m =1 α m | z | α m (cid:0) r + m { x> } + r − m { x< } (cid:1) dz, (5.3)for some N ∈ N , α = ( α , . . . , α N ) ∈ (0 , N , and r = ( r +1 , r − , . . . , r + N , r − N ) ∈ R N + such that α > α > · · · > α N > α , α N > ρ, r + m + r − m > , m = 1 , . . . , N, for some α ∈ (0 , ∞ ). We want to estimate θ := ( σ , r , α ) given n observations, X t , X t , . . . , X t n ,of the process X at known times 0 = t < t < · · · < t n = T . As before, we assume the sampling10imes are evenly spaced and we done the time step between observations as h n := T /n . Conditions(5.1), (5.2), and (5.3) essentially say that we can approximate X by a fully specified L´evy process (cid:101) Z := ( (cid:101) Z t ) t ∈ R + with characteristic triplet (0 , σ , (cid:101) ν ) and, hence, with the decomposition (cid:101) Z t = σW t + N (cid:88) m =1 S mt , t ∈ R + , where W := ( W t ) t ∈ R + is a standard Brownian motion and S m := ( S mt ) t ∈ R + , m = 1 , . . . , N , are in-dependent α m -stable processes, independent of W , each with L´evy density α m | z | − − α m ( r + m { x> } + r − m { x< } ), respectively.Mies [16] proposed to estimate the parameters, θ = ( σ , r , α ), of the approximating process (cid:101) Z using the method of moments. We now proceed to briefly review her method. The first step isto choose 3 N + 1 moment functions f = ( f , . . . , f N +1 ) T , one for each parameters of (cid:101) Z , and asuitable scaling factor u n ∝ / (cid:112) h n ln(1 /h n ), where “ ∝ ” hereafter means “proportional to”. Next,define the MME (cid:98) θ n to be a solution of the following equation F n ( θ ) := 1 n n (cid:88) i =1 f (cid:0) u n ∆ ni X (cid:1) − E θ (cid:16) f (cid:0) u n (cid:101) Z h n (cid:1)(cid:17) = , (5.4)where = (0 , . . . , T ∈ R N +1 and E θ ( f ( u n (cid:101) Z h n )) denotes the expectation such that (cid:101) Z h n is deter-mined by the parameter vector θ . Since (cid:101) Z h n is fully specified and, thus, its characteristic functionis available, E θ ( f ( u n (cid:101) Z h n )) can be computed by, e.g., Fourier methods.Our idea is to combine a version of Mies’ method with our results in Section 3 to improve ourestimation of σ and Y . Concretely, we propose to first find the roots of F n ( θ ) and plug them intoa suitable approximation of Equation (3.3) to obtain an estimate of the optimal threshold ε (cid:63)n . Thiscan in turn be used to estimate the volatility via thresholding. To solve the 3 N + 1 equations (5.4),we propose to solve an optimization problem with objective function of the form V n ( θ ; f ) := F T n ( θ ) Λ − n ( θ ) Λ − n ( θ ) F n ( θ ) , (5.5)where Λ n ( θ ) := diag (cid:0) h n u n , h n u α n , h n u α n , h n u α n , . . . , h n u α N n , h n u α N n , h n u α N n (cid:1) . This particular weights are motivated by the scaling of the Central Limit Theorem for F n ( θ )established in [16, Lemma 5.4].For simplicity, suppose we only want to estimate α = α , r ± , and σ (the method can easily beadapted to estimate more parameters of (cid:101) ν ). We then proposed the following procedure:1. Start with some initial values θ := ( σ , r , α ) and a suitable scaling factor u n (to be specifiedlater on).2. Find the roots of F n ( θ ), which we call (cid:98) θ n, := ( (cid:98) σ n, , (cid:98) r n, , (cid:98) α n, ), by minimizing the objectivefunction V n ( θ ; f ) in (5.5).3. Using (cid:98) θ n, , we solve a suitable approximation of (3.3) (e.g., (4.2) or (4.3)) to get an estimationof the optimal threshold ε (cid:63)n , denoted by (cid:98) ε n, . This estimate is then used to compute anestimate of σ as (cid:98) σ n, := 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ (cid:98) ε n, } .
11. Fix (cid:98) σ n, and use 3 N moment functions g := ( g , . . . , g N ) T to find ( (cid:98) r n, , (cid:98) α n, ) by solving G n (cid:0) r , α ; (cid:98) σ n, (cid:1) := 1 n n (cid:88) i =1 g (cid:0) u n ∆ ni X (cid:1) − E ( (cid:98) σ n, , r , α ) (cid:16) g (cid:0) u n (cid:101) Z h n (cid:1)(cid:17) = , (5.6)or minimizing V n ( r , α ; (cid:98) σ n, , g ) := G T n (cid:0) r , α ; (cid:98) σ n, (cid:1) G n (cid:0) r , α ; (cid:98) σ n, (cid:1) . (5.7)5. Using ( (cid:98) σ n, , (cid:98) r n, , (cid:98) α n, ) and solving the same approximating equation as in Step 3, we obtaina new estimation of ε (cid:63)n , denoted by (cid:98) ε (cid:63)n , and update (cid:98) σ n, by (cid:0)(cid:98) σ (cid:63)n (cid:1) := 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ (cid:98) ε (cid:63)n } . Remark . We could stop right after Step 3 and make (cid:98) σ n, our final estimate of the volatility.However, our simulation results show that Step 4 significantly improves our estimates of ( r , α )when Y ≤ .
5. We could also follow the Steps 1 − (cid:98) σ n, = ( (cid:98) σ (cid:63)n ) until the sequence of estimates ( (cid:98) σ (cid:63)n ) stabilizes. This approach, however, tends toincrease the sample error of the estimators. In this subsection, we apply the method introduced in the previous subsection to the case of aCGMY jump component and compare it to the estimators of Mies [16] and Jacod and Todorov[13]. Specifically, we work with simulated data from the model (2.1) where J , is a pure-jumpCGMY L´evy process, independent of the Brownian motion W , with L´evy measure ν CGMY ( dx ) := ν CGMY ( x ) dx := C | x | Y (cid:16) e − Mx { x> } + e Gx { x< } (cid:17) dx. We use the same values of C , G , and M as in (4.1), but with different values of σ and Y . We considerobservations of a 5 minutes frequency over a one-year (252 days) time horizon with a trading timeof 6 . n = 252(6 . x → ν CGMY ( x ) = C | x | − − Y + O ( | x | − Y ). Thissuggests us to take N = 1 in (5.3) and to use a Y -stable process to approximate the CGMY processbecause only the parameters σ , C , and Y are of primary interest. Then Assumptions (5.1) − (5.3)are satisfied with ρ = Y and (cid:101) Z t = σW t + S t , t ∈ R + , where ( S t ) t ∈ R + is a Y -stable process with L´evymeasure (cid:101) ν ( dx ) := C | x | − − Y dx . The parameters of the approximating model are θ = ( σ , C, Y ).Next, we choose the 3 moment functions f = ( f , f , f ) T as f ( x ) := e −| x | , f ( x ) := e − √ | x | , f ( x ) := x {| x | < } , x ∈ R , (5.8)and a suitable scaling factor u n to be specified below. These functions are simpler than the onesproposed in [16, Section 4] and were chosen because of their superior performance. Even thoughthe moment functions (5.8) do not meet the strict constraints imposed in [16] (see Assumptions(F1) − (F2) therein), we believe that most of the assumptions therein are not needed for the validity12f the asymptotic theory in [16]. This will be investigated in a future work together with anobjective and systematic method to calibrate the moment functions.To determine a suitable scaling factor u n , we will connect it to the threshold parameter ε of theTRQV estimator (3.1). The key observation is to analyze the moment equation corresponding tothe function f , namely, 1 n n (cid:88) i =1 f (cid:0) u n ∆ ni X (cid:1) − E θ (cid:16) f (cid:0) u n (cid:101) Z h n (cid:1)(cid:17) = 0 , which, after some trivial simplifications, can be written as1 n n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ /u n } − E θ (cid:16) (cid:101) Z h n {| (cid:101) Z hn |≤ /u n } (cid:17) = 0 . This suggests that 1 /u n has a similar role to that of the threshold ε in the TRQV estimator; namely,the choice of u n should ensure that (cid:80) ni =1 f ( u n ∆ ni X ) is dominated by the Brownian componentor, equivalently, to eliminate the increments in which the jump component J of X dominates theBrownian component. Hence, in what follows, we will fix u n as 1 /ε n , where ε n = (cid:112) σ h n ln(1 /h n )and σ is a suitable initial estimate of σ . We consider the following initial values for σ : (cid:98) σ n, := 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ √ h n ln(1 /h n ) } , (cid:98) σ n, := 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ (cid:113) (cid:98) σ n, h n ln(1 /h n ) } , (5.9)where (cid:98) σ n, := 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) {| ∆ ni X |≤ (cid:113) (cid:98) σ n, RV h n ln(1 /h n ) } , (cid:98) σ n, RV := 1 T n (cid:88) i =1 (cid:0) ∆ ni X (cid:1) . Broadly, we recommend to use the loose estimator (cid:98) σ n, as our initial value σ if the volatility is“large” (say, 0.4 or larger), and, otherwise, use the tighter estimator (cid:98) σ n, .For the moment functions g := ( g , g ) T in Step 4 of the algorithm in Subsection 5.1 (the onesused to correct estimates of r and α while fixing that of σ ), we choose g ( x ) := (cid:0) − | x | (cid:1) {| x | < } , g ( x ) := (cid:0) − x (cid:1) {| x | < } , x ∈ R . (5.10)Finally, we use the approximation (4.3) in Steps 3 and 5 of the algorithm outlined in Subsection5.1. For clarity and easy reference, we outline below the precise estimation procedure for the caseof the CGMY model.1. Start with some initial guesses ( σ , C , Y ). Here we take C = 0 . Y = 1 .
3, and σ = (cid:98) σ n, when σ = 0 . σ = (cid:98) σ n, when σ = 0 .
2, as defined in (5.9). Given σ , we fix the scalingfactor u n = 1 / (cid:112) σ h n ln(1 /h n ).2. Find the roots of F n ( θ ) with the moment functions f in (5.8), which we call ( (cid:98) σ n, , (cid:98) C n, , (cid:98) Y n, ),by minimizing the objective function V n ( θ ; f ) in (5.5).3. Using ( (cid:98) σ n, , (cid:98) C n, , (cid:98) Y n, ), we apply the second-order approximation (4.3) to get an estimate of ε (cid:63)n , denoted by (cid:98) ε n, , and compute its corresponding TRQV estimator (cid:98) σ n, .13. Fix (cid:98) σ n, and then use the moment functions g in (5.10) with u n = 1 / (cid:113) (cid:98) σ n, h n ln(1 /h n ) toget the estimates ( (cid:98) C n, , (cid:98) Y n, ) by solving the roots of G n ( C, Y ; (cid:98) σ n, ) in (5.6) or minimizing V n ( C, Y ; (cid:98) σ n, , g ) in (5.7)
5. Using ( (cid:98) σ n, , (cid:98) C n, , (cid:98) Y n, ), we again apply (4.3) to get a new estimate of ε (cid:63)n , denoted by (cid:98) ε (cid:63)n . Thisthreshold is plugged into the TRQV estimator to compute a final estimation of σ , denotedby ( (cid:98) σ (cid:63)n ) .We compare the simulated performance of our estimator ( (cid:98) σ (cid:63)n ) to the estimator (cid:98) σ n, (whichcould be considered the plain estimator proposed by [16]) and the estimator (cid:98) σ n, JT in [13]. Inthe latter one, we use the equation (5.3) therein with ζ = 1 . k n = 252(6 . u n = (ln(1 /h n )) − / , as proposed in the simulationportion of [16], and ¯ u n = (8 / u n for the term S nT of equation (5.3) in [13]. [13] suggests to use u n = (ln(1 /h n )) − / / √ BV and ¯ u n = 0 . u n , where BV is the bipower variation, which we alsotried in our simulation, but obtained worst results. In fact, we tried different parameters settingsfor ζ , k n , u n , and ¯ u n , and select the values with the best performance. In each simulation, wedivided the one-year data into 12 months and compute the estimate of σ for each month, and thentake the average of these 12 monthly estimators as (cid:98) σ n, JT .The results are summarized in Tables 1 − (cid:101) σ (cid:63)n ) , using the threshold (cid:101) ε (cid:63)n given in (4.3) with the true values of σ , C , and Y . Finally,we also report the TRQV estimator, denoted by ( σ (cid:63)n ) , corresponding to the true optimal threshold ε (cid:63)n obtained by solving (3.3) after finding E ( b ( ε )) via a large scale Monte Carlo experiment.Tables 1 − σ = 0 .
2, the MSEs of σ = (cid:98) σ n, , (cid:98) σ n, , (cid:98) σ n, , and ( (cid:98) σ (cid:63)n ) are gettingsmaller in each step. The MSE of ( (cid:98) σ (cid:63)n ) is about 81 . . .
8% lower than the MSE of (cid:98) σ n, , for Y = 1 . , . , .
35, respectively, while this is 98 . . .
7% lower than the MSEof (cid:98) σ n, JT , for Y = 1 . , . , .
35, respectively. Similarly, as shown in Tables 4 −
6, when σ = 0 .
4, theMSEs of σ = (cid:98) σ n, , (cid:98) σ n, , (cid:98) σ n, , and ( (cid:98) σ (cid:63)n ) are also getting smaller in each step. The MSE of ( (cid:98) σ (cid:63)n ) is about 35 . . .
8% lower than the MSE of (cid:98) σ n, , for Y = 1 . , . , .
35, respectively.The MSE of ( (cid:98) σ (cid:63)n ) are 96 . . .
5% lower than the MSE of (cid:98) σ n, JT , for Y = 1 . , . , . (cid:98) σ n, and (cid:98) σ n, JT . Regarding the estimates of Y and C , we notice that the secondstep estimates (cid:98) Y n, and (cid:98) C n, (obtained from fixing (cid:98) σ n, and then applying (5.6)) are significantlybetter than the first step estimates (cid:98) Y n, and (cid:98) C n, when Y ≤ .
5. When Y = 1 .
7, there is nosignificant improvement.
In this subsection, we apply the method in the previous subsection to estimate the integratedvariance under a stochastic volatility model with a CGMY jump component. We also examine thefinite sample performance of the resulting estimator and compare it with the estimator of Jacod In Steps 2 and 4, we choose the minimization method. We use the R function nloptr from the package nloptr with the algorithm NLOPT LD LBFGS. σ (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε n, (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε (cid:63)n (cid:98) σ (cid:63)n ) (cid:98) σ n, JT (cid:101) σ (cid:63)n ) σ (cid:63)n ) Table 1:
Estimation based on simulated -minute observations of paths over a one-year time horizon.The parameters are C = 0 . , Y = 1 . , and σ = 0 . . We take σ = (cid:98) σ n, . In this case, we compute ε (cid:63)n = 0 . and (cid:101) ε (cid:63)n = 0 . . and Todorov [13]. The basic idea is to split the time-period [0 , T ] into smaller subintervals sothat σ would be approximately constant in each subinterval and, hence, X is approximately L´evywithin that interval. We then apply the method developed in Subsection 5.1 to each subintervalto estimate the volatility level in each subinterval and finally aggregate the resulting estimates toestimate the integrated volatility.Specifically, we consider the following Heston model: X t = 1 + (cid:90) t (cid:112) V s dW s + J t , V t = θ + (cid:90) t κ (cid:0) θ − V s (cid:1) ds + ξ (cid:90) t (cid:112) V s dB s , t ∈ R + , where ( W t ) t ∈ R + and ( B t ) t ∈ R + are two independent standard Brownian motions and ( J t ) t ∈ R + isa CGMY L´evy process independent of ( W t ) t ∈ R + and ( B t ) t ∈ R + . The parameters of the volatilityspecification are set as κ = 5 , ξ = 0 . , θ = 0 . . The values of κ and ξ above are borrowed from [21]. In the simulation, we experiment with valuesof Y = 1 . Y = 1 .
7, and compute the estimated integrated volatility for one day under twodifferent estimators.We consider 5-second observations over a one-year (252 days) time horizon with 6 . k n = 160, which corresponds to 30 blocks per day. As mentioned above,we treat the stochastic volatility as a constant volatility in each block, so that we can estimatethe integrated volatility for each block by computing our estimator ( (cid:98) σ (cid:63)n ) with all the estimationparameters specified as in Subsection 5.2. We then add the integrated volatilities for the 30 blocks tocompute our daily estimator of the integrated volatility (cid:82) t +1 / t V s ds for that day. For the estimatorof [13], we use both equations (4.2) and (5.3) therein with k n = 160 (number of observation in eachblock), ξ = 1 .
5, and u n = 0 . − ln h n ) − / / √ BV , where BV is the bipower variation of the15ample Mean Sample SD Mean ofRelative Error SD ofRelative Error MSE σ (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε n, (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε (cid:63)n (cid:98) σ (cid:63)n ) (cid:98) σ n, JT (cid:101) σ (cid:63)n ) σ (cid:63)n ) Table 2:
Estimation based on simulated -minute observations of paths over a one-year time horizon.The parameters are C = 0 . , Y = 1 . , and σ = 0 . . We take σ = (cid:98) σ n, . In this case, we compute ε (cid:63)n = 0 . and (cid:101) ε (cid:63)n = 0 . . previous day. To assess the accuracy of the different methods, we compute the Median AbsoluteDeviation (MAD) around the true value, (cid:82) t +1 / t V s ds , over 200 simulation paths.Figure 4 shows the estimated integrated volatility for each day computed by our MME (solidblack line) and the JT estimator in [13] (dotted blue line) (see Eq. (5.3) therein), and the true dailyintegrated volatility (dashed red) for 1 simulation path. When Y = 1 .
7, the JT estimator tends tojitter around the true value, while our MME exhibits better performance. This behavior is furthercorroborated by Table 7, which shows the MADs of both our MME and JT estimator for 6 differentdays based on 200 simulated paths. However, when Y = 1 .
5, the JT estimator outperforms ourMME, as shown in the right panel of Figure 4 and Table 7. Now, it is important to point out thatthe daily estimate (5.3) of [13] is based on more data than that used in our estimates. Indeed,the estimator (5.3) in [13] employs a debiasing procedure, whose debiasing term consists of twocomponents: one that can be computed using the data in each day and another term, dependingonly on Y , that is computed using the data during the whole time horizon (in this case, one yearworth of data). To explore the performance of the estimator using only contemporary data, wealso analyze the performance of the estimator (4.2) in [13], which can be computed using only thedata collected in each day. When Y = 1 . Y = 1 . Y > . σ (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε n, (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε (cid:63)n (cid:98) σ (cid:63)n ) (cid:98) σ n, JT (cid:101) σ (cid:63)n ) σ (cid:63)n ) Table 3:
Estimation based on simulated -minute observations of paths over a one-year time horizon.The parameters are C = 0 . , Y = 1 . , and σ = 0 . . We take σ = (cid:98) σ n, . In this case, we compute ε (cid:63)n = 0 . and (cid:101) ε (cid:63)n = 0 . . Sample Mean Sample SD Mean ofRelative Error SD ofRelative Error MSE σ (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε n, (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε (cid:63)n (cid:98) σ (cid:63)n ) (cid:98) σ n, JT (cid:101) σ (cid:63)n ) σ (cid:63)n ) Table 4:
Estimation based on simulated -minute observations of paths over a one-year time horizon.The parameters are C = 0 . , Y = 1 . , and σ = 0 . . We take σ = (cid:98) σ n, . In this case, we compute ε (cid:63)n = 0 . and (cid:101) ε (cid:63)n = 0 . . σ (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε n, (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε (cid:63)n (cid:98) σ (cid:63)n ) (cid:98) σ n, JT (cid:101) σ (cid:63)n ) σ (cid:63)n ) Table 5:
Estimation based on simulated -minute observations of paths over a one-year time horizon.The parameters are C = 0 . , Y = 1 . , and σ = 0 . . We take σ = (cid:98) σ n, . In this case, we compute ε (cid:63)n = 0 . and (cid:101) ε (cid:63)n = 0 . . Sample Mean Sample SD Mean ofRelative Error SD ofRelative Error MSE σ (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε n, (cid:98) σ n, (cid:98) C n, (cid:98) Y n, (cid:98) ε (cid:63)n (cid:98) σ (cid:63)n ) (cid:98) σ n, JT (cid:101) σ (cid:63)n ) σ (cid:63)n ) Table 6:
Estimation performance based on simulated -minute observations of paths over a one-yeartime horizon. The parameters are C = 0 . , Y = 1 . , and σ = 0 . . We take σ = (cid:98) σ n, . In this case, wecompute ε (cid:63)n = 0 . and (cid:101) ε (cid:63)n = 0 . . A Proof of Theorem 3.1
We start with the decomposition: E (cid:0) b ( ε ) (cid:1) = E (cid:16)(cid:0) σW h + J h (cid:1) {| σW h + J h |≤ ε } (cid:17) = σ E (cid:16) W h {| σW h + J h |≤ ε } (cid:17) + E (cid:16) J h {| σW h + J h |≤ ε } (cid:17) +2 σ E (cid:16) W h J h {| σW h + J h |≤ ε } (cid:17) . (A.1)18
50 100 150 200 250 + − − − − − Y=1.7
Day I V + − − − − − Y=1.5
Day I V Figure 4:
Graphs of the daily integrated volatility estimates for Y = 1 . (left panel) and Y = 1 . (right panel), respectively. The dashed red line is the true daily integrated volatility, while the solid black(respectively, dotted blue) line shows the daily estimates using our MME (respectively, (5.3) in [13]). . . . Y=1.7
Day I V . . . . Y=1.5
Day I V Figure 5:
Graphs of the daily integrated volatility estimates for Y = 1 . (left panel) and Y = 1 . (right panel), respectively. The dashed red line is the true daily integrated volatility, while the solid black(respectively, dotted blue) line shows the daily estimates using our MME (respectively, (4.2) in [13]). The asymptotic behavior of each of the terms above is obtained in three steps.
Step 1.
We first analyze the behavior of the first term in (A.1) as h →
0. By (2.3) and (2.4), wefirst decompose it as E (cid:16) W h {| σW h + J h |≤ ε } (cid:17) = h − E (cid:16) W h {| σW h + J h | >ε } (cid:17) = h − (cid:101) E (cid:16) e − (cid:101) U h − ηh W h {| σW h + J h | >ε } (cid:17) = h − he − ηh (cid:101) E (cid:16) W {| σ √ hW + J h | >ε } (cid:17) − he − ηh (cid:101) E (cid:16)(cid:16) e − (cid:101) U h − (cid:17) W {| σ √ hW + J h | >ε } (cid:17) =: h − he − ηh I ( h ) − he − ηh I ( h ) . (A.2)In what follows, we will analyze the asymptotic behavior of I ( h ) and I ( h ), as h →
0, respectively,in two sub-steps. 19
Method Day 2 Day 52 Day 102 Day 152 Day 202 Day 2521 . . Table 7:
The MADs for our MME and the estimators in [13] for arbitrary sampled days. The resultsare based on simulated -second observations of paths over a one-year time horizon with Y = 1 . and Y = 1 . , respectively. Step 1.1.
Clearly, by (2.5) and the symmetry of W , I ( h ) = (cid:101) E (cid:16) W { σ √ hW + J h >ε } (cid:17) + (cid:101) E (cid:16) W { σ √ hW − J h >ε } (cid:17) =: I +1 ( h ) + I − ( h ) . (A.3)Denote by φ ( x ) := 1 √ π e − x / , Φ( x ) := (cid:90) ∞ x φ ( x ) dx, x ∈ R . By conditioning on J h and using the fact that (cid:101) E ( W { W >x } ) = xφ ( x ) + Φ( x ), for all x ∈ R , wehave that I ± ( h ) = (cid:101) E (cid:18)(cid:18) εσ √ h ∓ J h σ √ h (cid:19) φ (cid:18) εσ √ h ∓ J h σ √ h (cid:19) + Φ (cid:18) εσ √ h ∓ J h σ √ h (cid:19)(cid:19) . (A.4)Let p ± J,h be the density of ± J h under (cid:101) P , and recall the Fourier transform and its inverse transformdefined by ( F g )( z ) := 1 √ π (cid:90) R g ( x ) e − izx dx, (cid:0) F − g (cid:1) ( x ) := 1 √ π (cid:90) R g ( z ) e izx dz. In what follows, we set ψ ( x ) := (cid:18) F − φ (cid:18) · σ √ h − εσ √ h (cid:19)(cid:19) ( x ) = 1 √ π (cid:90) R φ (cid:18) zσ √ h − εσ √ h (cid:19) e izx dz = σ √ h e iεx √ π (cid:90) R φ ( ω ) e iσ √ hωx dω = σ √ h √ π exp (cid:18) iεx − σ x h (cid:19) . (A.5)Then, we deduce that (cid:101) E (cid:18) φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = (cid:90) R (cid:0) F ψ (cid:1) ( z ) p ∓ J,h ( z ) dz = (cid:90) R ψ ( u ) (cid:0) F p ∓ J,h (cid:1) ( u ) du, where (cid:0) F p ± J,h (cid:1) ( u ) = e ∓ iu (cid:101) γh √ π exp (cid:18) − ( C + + C − ) (cid:12)(cid:12)(cid:12)(cid:12) cos (cid:18) πY (cid:19)(cid:12)(cid:12)(cid:12)(cid:12) Γ( − Y ) | u | Y h (cid:18) − i C + − C − C + + C − tan (cid:18) πY (cid:19) sgn( u ) (cid:19)(cid:19) =: 1 √ π exp (cid:16) c | u | Y h + ic | u | Y h sgn( u ) ∓ iu (cid:101) γh (cid:17) , (A.6)20ith c := ( C + + C − ) cos( πY / − Y ) and c := ( C − − C + ) sin( πY / − Y ). Hence, we have (cid:101) E (cid:18) φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = σ √ h π (cid:90) R exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h + iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du (A.7)= σ √ hπ (cid:90) ∞ exp (cid:18) c u Y h − σ u h (cid:19) cos (cid:16) c u Y h + u (cid:0) ε ± (cid:101) γh (cid:1)(cid:17) du = 1 π (cid:90) ∞ exp (cid:18) c · ω Y h − Y/ σ Y − ω (cid:19) cos (cid:18) c · ω Y h − Y/ σ Y + ω · ε ± (cid:101) γhσ √ h (cid:19) dω = 1 π (cid:90) ∞ exp (cid:18) c · ω Y h − Y/ σ Y − ω (cid:19) cos (cid:18) c · ω Y h − Y/ σ Y (cid:19) cos (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) dω − π (cid:90) ∞ exp (cid:18) c · ω Y h − Y/ σ Y − ω (cid:19) sin (cid:18) c · ω Y h − Y/ σ Y (cid:19) sin (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) dω. By expanding the Taylor series for exp( c σ − Y | ω | Y h − Y/ ), as well as for cos( c σ − Y ω Y h − Y/ ) andsin( c σ − Y ω Y h − Y/ ), we deduce that (cid:101) E (cid:18) φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = 1 π (cid:90) ∞ cos (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) e − ω / dω + 1 π (cid:88) ( k,j ) ∈ Z : ( k,j ) (cid:54) =(0 , ( − j a ± k, j, ( h ) − π ∞ (cid:88) k,j =0 ( − j d ± k, j +1 , ( h )= 1 √ π exp (cid:18) − ( ε ± (cid:101) γh ) σ h (cid:19) + 1 π (cid:88) ( k,j ) ∈ Z : ( k,j ) (cid:54) =(0 , ( − j a ± k, j, ( h ) − π ∞ (cid:88) k,j =0 ( − j d ± k, j +1 , ( h ) , (A.8)where, for m, n ∈ Z + and r ∈ R + , a ± m,n,r ( h ) := c m c n h ( m + n )(1 − Y/ m ! n ! σ ( m + n ) Y (cid:90) ∞ ω ( m + n ) Y + r cos (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) e − ω / dω, (A.9) d ± m,n,r ( h ) := c m c n h ( m + n )(1 − Y/ m ! n ! σ ( m + n ) Y (cid:90) ∞ ω ( m + n ) Y + r sin (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) e − ω / dω. (A.10)By applying the formula for the integrals of ω kY cos( βω ) and ω kY sin( βω ) with respect to e − ω / on R + , as well as the asymptotics for the Kummer’s function M ( a, b, z ), as h →
0, we deduce that a ± m,n,r ( h ) = c m c n h ( m + n )(1 − Y/ m ! n ! σ ( m + n ) Y · (( m + n ) Y + r − / Γ (cid:18) ( m + n ) Y + r + 12 (cid:19) · M (cid:18) ( m + n ) Y + r + 12 , , − (cid:0) ε ± (cid:101) γh (cid:1) σ h (cid:19) ∼ c m c n h ( m + n )(1 − Y/ m ! n ! σ ( m + n ) Y · (( m + n ) Y + r − / Γ (cid:18) ( m + n ) Y + r + 12 (cid:19) · (cid:32) Γ(1 / (cid:0) − (( m + n ) Y + r ) / (cid:1) (cid:18) ε σ h (cid:19) − (( m + n ) Y + r +1) / + Γ(1 / e − ε / (2 σ h ) Γ (cid:0) (( m + n ) Y + r + 1) / (cid:1) (cid:18) ε σ h (cid:19) (( m + n ) Y + r ) / (cid:33) , d ± m,n,r ( h ) = c m c n h ( m + n )(1 − Y/ m ! n ! σ ( m + n ) Y · ε ± (cid:101) γhσ √ h · (( m + n ) Y + r ) / Γ (cid:18) ( m + n ) Y + r (cid:19) · M (cid:18) ( m + n ) Y + r , , − (cid:0) ε ± (cid:101) γh (cid:1) σ h (cid:19) ∼ c m c n h ( m + n )(1 − Y/ m ! n ! σ ( m + n ) Y · εσ √ h · (( m + n ) Y + r ) / Γ (cid:18) ( m + n ) Y + r (cid:19) · (cid:32) Γ(3 / (cid:0) (1 − ( m + n ) Y − r ) / (cid:1) · (cid:18) ε σ h (cid:19) − (( m + n ) Y + r ) / − + Γ(3 / e − ε / (2 σ h ) Γ (cid:0) ( m + n ) Y + r ) / (cid:1) · (cid:18) ε σ h (cid:19) (( m + n ) Y + r − / (cid:33) . In the asymptotic formulas for the Kummers function in the expression of a ± n,m,r ( h ) above, the firstterm vanishes if Γ( − (( m + n ) Y + r ) /
2) are infinity. This happens when − (( m + n ) Y + r ) / d ± m,n,r ( h ),the first term vanishes if (1 − ( m + n ) Y − r ) / m, n ∈ Z + and r ∈ R + , as h → a ± m,n,r ( h ) = O (cid:18) h m + n +( r +1) / ε ( m + n ) Y + r +1 (cid:19) + O (cid:18) ε ( m + n ) Y + r e − ε / (2 σ h ) h ( m + n )( Y − r/ (cid:19) = O (cid:18) h m + n +( r +1) / ε ( m + n ) Y + r +1 (cid:19) , (A.11) d ± m,n,r ( h ) = O (cid:18) h m + n +( r +1) / ε ( m + n ) Y + r +1 (cid:19) + O (cid:18) ε ( m + n ) Y + r e − ε / (2 σ h ) h ( m + n )( Y − r/ (cid:19) = O (cid:18) h m + n +( r +1) / ε ( m + n ) Y + r +1 (cid:19) . (A.12)Therefore, by combining (A.8), (A.11), and (A.12), we obtain that (cid:101) E (cid:18) φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = e − ε / (2 σ h ) √ π + O (cid:0) h / ε − − Y (cid:1) , as h → . (A.13)Next, we note that (cid:101) E (cid:18) ∓ J h φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = (cid:90) R ( F ψ )( z ) zp ∓ J,h ( z ) dz = (cid:90) R ψ ( u ) F (cid:0) zp ∓ J,h ( z ) (cid:1) ( u ) du, where by (A.6), F (cid:0) zp ± J,h ( z ) (cid:1) ( u ) = i ddu (cid:0) F p ± J,h (cid:1) ( u ) = i √ π exp (cid:16) c | u | Y h + ic | u | Y h sgn( u ) ∓ iu (cid:101) γh (cid:17) · (cid:16) c Y | u | Y − sgn( u ) h + ic Y | u | Y − h ∓ i (cid:101) γh (cid:17) . Together with (A.5), we obtain that (cid:101) E (cid:18) ∓ J h φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = ic σY h / π (cid:90) R sgn( u ) | u | Y − exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du − c σY h / π (cid:90) R | u | Y − exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du ∓ (cid:101) γσh / π (cid:90) R exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du (A.14)22he asymptotics of the last term above follows from (A.7) and (A.13), namely (cid:101) γσh / π (cid:90) R exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du = O (cid:16) h e − ε / (2 σ h ) (cid:17) + O (cid:0) h / ε − − Y (cid:1) , as h → . (A.15)For the first term in (A.14), by expanding the Taylor series for exp( c σ − Y | ω | Y h − Y/ ), as well asfor cos( c σ − Y ω Y h − Y/ ) and sin( c σ − Y ω Y h − Y/ ) below, and using (A.9) and (A.10), we have ic σY h / π (cid:90) R sgn( u ) | u | Y − exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du = − c σY h / π (cid:90) ∞ u Y − exp (cid:18) c u Y h − σ u h (cid:19) sin (cid:16) c u Y h + u (cid:0) ε ± (cid:101) γh (cid:1)(cid:17) du = − c Y h (3 − Y ) / σ Y − π (cid:90) ∞ ω Y − exp (cid:18) c ω Y h − Y/ σ Y − ω (cid:19) sin (cid:18) c ω Y h − Y/ σ Y + ω · ε ± (cid:101) γhσ √ h (cid:19) dω = − c Y h (3 − Y ) / σ Y − π (cid:90) ∞ ω Y − exp (cid:18) c ω Y h − Y/ σ Y − ω (cid:19) sin (cid:18) c ω Y h − Y/ σ Y (cid:19) cos (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) dω − c Y h (3 − Y ) / σ Y − π (cid:90) ∞ ω Y − exp (cid:18) c ω Y h − Y/ σ Y − ω (cid:19) cos (cid:18) c ω Y h − Y/ σ Y (cid:19) sin (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) dω = − c Y h (3 − Y ) / σ Y − π ∞ (cid:88) k,j =0 ( − j a ± k, j +1 ,Y − ( h ) + ∞ (cid:88) k,j =0 ( − j d ± k, j,Y − ( h ) = O (cid:0) h / ε − Y (cid:1) , (A.16)as h →
0, where we have used the asymptotic formulas (A.11) and (A.12) in the last equality. Fi-nally, for the second term in (A.14), again by expanding the Taylor series for exp( c σ − Y | ω | Y h − Y/ ),cos( c σ − Y ω Y h − Y/ ), and sin( c σ − Y ω Y h − Y/ ) below, and using (A.9), (A.10), (A.11), and (A.12),we deduce that c σY h / π (cid:90) R | u | Y − exp (cid:18) c | u | Y h + ic | u | Y h sgn( u ) − σ u h iu (cid:0) ε ± (cid:101) γh (cid:1)(cid:19) du = c σY h / π (cid:90) ∞ u Y − exp (cid:18) c u Y h − σ u h (cid:19) cos (cid:16) c u Y h + u (cid:0) ε ± (cid:101) γh (cid:1)(cid:17) du = c Y h (3 − Y ) / πσ Y − (cid:90) ∞ ω Y − exp (cid:18) c ω Y h − Y/ σ Y − ω (cid:19) cos (cid:18) c ω Y h − Y/ σ Y + ω · ε ± (cid:101) γhσ √ h (cid:19) dω = c Y h (3 − Y ) / πσ Y − (cid:90) ∞ ω Y − exp (cid:18) c ω Y h − Y/ σ Y − ω (cid:19) cos (cid:18) c ω Y h − Y/ σ Y (cid:19) cos (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) dω − c Y h (3 − Y ) / πσ Y − (cid:90) ∞ ω Y − exp (cid:18) c ω Y h − Y/ σ Y − ω (cid:19) sin (cid:18) c ω Y h − Y/ σ Y (cid:19) sin (cid:18) ω · ε ± (cid:101) γhσ √ h (cid:19) dω = c Y h (3 − Y ) / πσ Y − ∞ (cid:88) k,j =0 ( − j a ± k, j,Y − ( h ) − ∞ (cid:88) k,j =0 ( − j d ± k, j +1 ,Y − ( h ) = O (cid:0) h / ε − Y (cid:1) , (A.17)as h →
0. Therefore, by combining (A.14), (A.15), (A.16), and (A.17), we obtain that (cid:101) E (cid:18) ∓ J h φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = O (cid:16) he − ε / (2 σ h ) (cid:17) + O (cid:0) h / ε − Y (cid:1) , as h → . (A.18)23t remains to analyze the asymptotic behavior of E (Φ(( ε ± J h ) / ( σ √ h ))). We first note thatthere exists a universal constant K >
0, such that (cid:101) E (cid:18) Φ (cid:18) εσ √ h ± J h σ √ h (cid:19) { ε ± J h ≥ } (cid:19) ≤ K (cid:101) E (cid:18) φ (cid:18) εσ √ h ± J h σ √ h (cid:19) { ε ± J h ≥ } (cid:19) = O (cid:18)(cid:101) E (cid:18) φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19)(cid:19) = O (cid:16) e − ε / (2 σ h ) (cid:17) + O (cid:0) h / ε − − Y (cid:1) , as h → , (A.19)where the last inequality above follows from (A.13). Moreover, by (2.9), as h → (cid:101) E (cid:18) Φ (cid:18) εσ √ h ± J h σ √ h (cid:19) { ε ± J h ≤ } (cid:19) = (cid:90) R φ ( u ) (cid:101) P (cid:16) ε ± J h ≤ σ √ hu, ε ± J h ≤ (cid:17) du = (cid:90) ∞ φ ( u ) (cid:101) P (cid:16) ± Z h ≤ − ε ∓ (cid:101) γh (cid:17) du + (cid:90) −∞ φ ( u ) (cid:101) P (cid:16) ± Z h ≤ σ √ hu − ε ∓ (cid:101) γh (cid:17) du = 12 (cid:101) P (cid:18) ± Z ≤ − ε ∓ (cid:101) γhh /Y (cid:19) + (cid:90) −∞ φ ( u ) (cid:101) P (cid:18) ± Z ≤ σ √ hu − ε ∓ (cid:101) γhh /Y (cid:19) du ≤ (cid:101) P (cid:18) ± Z ≤ − ε ∓ (cid:101) γhh /Y (cid:19) + (cid:101) Khε Y (cid:90) −∞ φ ( u ) (cid:18) − σ √ h ± (cid:101) γhε u (cid:19) − Y du = O (cid:0) hε − Y (cid:1) . (A.20)Therefore, by combining (A.19) and (A.20), we obtain that (cid:101) E (cid:18) Φ (cid:18) εσ √ h ± J h σ √ h (cid:19)(cid:19) = O (cid:16) e − ε / (2 σ h ) (cid:17) + O (cid:0) hε − Y (cid:1) , as h → . (A.21)Finally, by combining (A.3), (A.4), (A.13), (A.18), and (A.21), we conclude that I ( h ) = √ σ √ π · ε √ h e − ε / (2 σ h ) + O (cid:16) e − ε / (2 σ h ) (cid:17) + O (cid:0) hε − Y (cid:1) , as h → . (A.22) Step 1.2.
We now study the asymptotic behavior of I ( h ), defined in (A.2), as h →
0. Let us firstconsider the following decomposition I ( h ) = (cid:101) E (cid:16)(cid:16) e − (cid:101) U h − (cid:101) U h (cid:17) W {| σ √ hW + J h | >ε } (cid:17) − (cid:101) E (cid:16) (cid:101) U h W {| σ √ hW + J h | >ε } (cid:17) =: I ( h ) − I ( h ) . (A.23)The first term I ( h ) can be bounded as follows: as h → ≤ I ( h ) ≤ (cid:101) E (cid:16) e − (cid:101) U h − (cid:101) U h (cid:17) = exp (cid:18) h (cid:90) R (cid:16) e − φ ( x ) − φ ( x ) (cid:17)(cid:101) ν ( dx ) (cid:19) − O ( h ) . (A.24)To deal with I , for any t ∈ R + , we further decompose (cid:101) U t as (cid:101) U t = (cid:90) t (cid:90) R (cid:0) ϕ ( x ) + α sgn( x ) x (cid:1) (cid:101) N ( ds, dx ) − (cid:90) t (cid:90) R α sgn( x ) x (cid:101) N ( ds, dx ) =: (cid:101) U BV t − α + Z + t − α − Z − t , where the first integral is well-defined in light of Assumption 2.1-(i) & (ii), so that I ( h ) = (cid:101) E (cid:16) (cid:101) U BV h W {| σ √ hW + J h | >ε } (cid:17) − α + (cid:101) E (cid:16) Z + h W {| σ √ hW + J h | >ε } (cid:17) − α − (cid:101) E (cid:16) Z − h W {| σ √ hW + J h | >ε } (cid:17) =: I BV22 ( h ) − α + I +22 ( h ) − α − I − ( h ) . I BV22 ( h ), note that (cid:12)(cid:12) I BV22 ( h ) (cid:12)(cid:12) ≤ (cid:101) E (cid:16)(cid:12)(cid:12) (cid:101) U BV h (cid:12)(cid:12)(cid:17) ≤ h (cid:90) R (cid:12)(cid:12) ϕ ( x ) + α sgn( x ) x (cid:12)(cid:12) (cid:101) ν ( dx ) , where the last integral is finite since in a neighborhood of the origin, (cid:12)(cid:12) ϕ ( x ) + α sgn( x ) x (cid:12)(cid:12) = (cid:12)(cid:12) − ln q ( x ) + α sgn( x ) x (cid:12)(cid:12) = O (cid:16)(cid:12)(cid:12) − q ( x ) + α sgn( x ) x (cid:12)(cid:12)(cid:17) , which is integrable with respect to (cid:101) ν ( dx ) in view of Assumption 2.1-(ii). As for the terms I ± ,due to the self-similarity of Z ± t and the fact that εh − /Y → ∞ (since Y ∈ (1 , I ± ( h ) = o ( h /Y ), as h →
0. Hence, we obtain that I ( h ) = o (cid:0) h /Y (cid:1) , as h → . (A.25)By combining (A.23), (A.24), and (A.25), we conclude that I ( h ) = o (cid:0) h /Y (cid:1) , as h → . (A.26)Finally, from (A.2), (A.22), and (A.26), we obtain that E (cid:16) W h {| σW h + J h |≤ ε } (cid:17) = h − √ h εσ √ π e − ε / (2 σ h ) + O (cid:16) he − ε / (2 σ h ) (cid:17) + O (cid:0) h ε − Y (cid:1) + o (cid:0) h /Y (cid:1) , (A.27)as h →
0, which completes the analysis in Step 1.
Step 2.
In this step, we will study the asymptotic behavior of the second term in (A.1), as h → E (cid:16) J h {| σW h + J h |≤ ε } (cid:17) = (cid:101) E (cid:16) e − (cid:101) U h − ηh J h {| σW h + J h |≤ ε } (cid:17) = e − ηh (cid:101) E (cid:16) e − (cid:101) U h Z h {| W h + Z h + (cid:101) γh |≤ ε } (cid:17) + 2 (cid:101) γhe − ηh (cid:101) E (cid:16) e − (cid:101) U h Z h {| W h + Z h + (cid:101) γh |≤ ε } (cid:17) + (cid:101) γ h e − ηh (cid:101) E (cid:16) e − (cid:101) U h {| W h + Z h + (cid:101) γh |≤ ε } (cid:17) =: e − ηh I ( h ) + 2 (cid:101) γhe − ηh I ( h ) + (cid:101) γ h e − ηh (cid:101) E (cid:16) e − (cid:101) U h {| W h + Z h + (cid:101) γh |≤ ε } (cid:17) . (A.28)Clearly, (cid:101) γ h e − ηh (cid:101) E (cid:16) e − (cid:101) U h {| W h + Z h + (cid:101) γh |≤ ε } (cid:17) = O (cid:0) h (cid:1) , as h → . (A.29)It remains to analyze the asymptotic behavior of the first two terms in (A.28). Step 2.1.
We begin with the analysis of I ( h ). Clearly, I ( h ) = (cid:101) E (cid:16) Z h {| σW h + Z h + (cid:101) γh |≤ ε } (cid:17) + (cid:101) E (cid:16)(cid:16) e − (cid:101) U h − (cid:17) Z h {| σW h + Z h + (cid:101) γh |≤ ε } (cid:17) = h /Y (cid:101) E (cid:16) Z {| σ √ hW + h /Y Z + (cid:101) γh |≤ ε } (cid:17) + (cid:101) E (cid:16)(cid:16) e − (cid:101) U h − (cid:17) Z h {| σW h + Z h + (cid:101) γh |≤ ε } (cid:17) =: h /Y I ( h ) + I ( h ) . (A.30)25y the symmetry of W , we note that (cid:101) E (cid:16) Z {− ε ≤ σ √ hW + h /Y Z + (cid:101) γh ≤ } (cid:17) = (cid:101) E (cid:16) Z { ≤ σ √ hW − h /Y Z − (cid:101) γh ≤ ε } (cid:17) . In what follows, we let h > ε − | (cid:101) γ | h > I ( h ), as h →
0, let us first consider E ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≥ , ± Z ≥ } (cid:17) = (cid:90) ε ∓ (cid:101) γhσ √ h (cid:18)(cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx = ε ∓ (cid:101) γhσ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u p Z ( ± u ) du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω = C ± (cid:0) ε ∓ (cid:101) γh (cid:1) σ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u − Y du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω + ε ∓ (cid:101) γhσ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u (cid:16) p Z ( ± u ) − C ± u − − Y (cid:17) du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω. (A.31)For the first term in (A.31), we have (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u − Y du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω = (cid:0) ε ∓ (cid:101) γh (cid:1) − Y (2 − Y ) h (2 − Y ) /Y (cid:90) (1 − ω ) − Y φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω ∼ (cid:0) ε ∓ (cid:101) γh (cid:1) − Y − Y ) h (2 − Y ) /Y · σ √ hε ∓ (cid:101) γh , as h → . (A.32)For the second term in (A.31), since Y ∈ (1 , z > (cid:90) z u (cid:0) u − Y − ∧ u − Y − (cid:1) du = z − Y − Y (0 , ( z ) + 1 − z − Y Y − (1 , ∞ ) ( z ) ≤ Y Y − − Y ) . (A.33)Hence, we deduce from (2.8) that (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u (cid:12)(cid:12) p Z ( ± u ) − C ± u − − Y (cid:12)(cid:12) du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω ≤ (cid:101) KY Y − − Y ) (cid:90) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω = O (cid:0) √ h ε − (cid:1) , as h → . Therefore, we obtain that E ± ( h ) = C ± − Y ) h − /Y ε − Y + O (1) , as h → . (A.34)Using the same argument as above and since ε (cid:29) h , we also obtain that, when ± (cid:101) γ >
0, as h → E ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≤ , ± Z ≤ } (cid:17) = (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ≤∓ (cid:101) γh, W ≤ , ± Z ≤ } (cid:17) = O (cid:0) h − Y − /Y (cid:1) + O (1) . (A.35)26ext, we consider E ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≥ , ± Z ≤ } (cid:17) = (cid:90) ε ∓ (cid:101) γhσ √ h (cid:18)(cid:90) − σ √ hxh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx + (cid:90) ∞ ε ∓ (cid:101) γhσ √ h (cid:18)(cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hxh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx. (A.36)By (2.7), the first term in (A.36) is such that (cid:90) ε ∓ (cid:101) γhσ √ h (cid:18) (cid:90) − σ √ hxh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx ≤ (cid:101) K (cid:90) ε ∓ (cid:101) γhσ √ h (cid:18) (cid:90) σ √ hxh /Y u − Y du (cid:19) φ ( x ) dx = (cid:101) Kσ − Y h − Y/ − /Y − Y (cid:90) ε ∓ (cid:101) γhσ √ h x − Y φ ( x ) dx = O (cid:0) h − Y/ − /Y (cid:1) , as h → . Similarly, the second term in (A.36) can be estimated as follows: (cid:90) ∞ ε ∓ (cid:101) γhσ √ h (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hxh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx ≤ (cid:101) K (cid:90) ∞ ε ∓ (cid:101) γhσ √ h (cid:18) (cid:90) σ √ hxh /Yσ √ hx − ( ε ∓ (cid:101) γh ) h /Y u − Y du (cid:19) φ ( x ) dx ≤ (cid:101) Kσ − Y h − /Y − Y/ − Y (cid:90) ∞ ε ∓ (cid:101) γhσ √ h x − Y φ ( x ) dx = o (cid:0) h − /Y − Y/ (cid:1) , as h → . Therefore, we obtain that E ± ( h ) = O (cid:0) h − Y/ − /Y (cid:1) , as h → . (A.37)To complete the analysis for I ( h ), it remains to study E ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≤ , ± Z ≥ } (cid:17) = (cid:90) −∞ (cid:18)(cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx = C ± (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u − Y du (cid:19) φ ( x ) dx + (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u (cid:16) p Z ( ± u ) − C ± u − − Y (cid:17) du (cid:19) φ ( x ) dx. (A.38)For the first term in (A.38), we have C ± (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u − Y du (cid:19) φ ( x ) dx = C ± ε − Y (2 − Y ) h /Y − (cid:90) −∞ (cid:32)(cid:18) − σ √ hx ± (cid:101) γhε (cid:19) − Y − (cid:18) − σ √ hx ± (cid:101) γhε (cid:19) − Y (cid:33) φ ( x ) dx ∼ C ± h − /Y ε − Y − Y ) , as h → . (A.39)27or the second term in (A.38), we deduce from (A.33) that (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u (cid:12)(cid:12)(cid:12) p Z ( ± u ) − C ± u − − Y (cid:12)(cid:12)(cid:12) du (cid:19) φ ( x ) dx = O (1) , h → . Therefore, we obtain that E ± ( h ) = C ± − Y ) h − /Y ε − Y + O (1) , as h → . (A.40)By combining (A.34), (A.35), (A.37), and (A.40), we conclude that I ( h ) = (cid:88) i =1 (cid:0) E + i ( h )+ E − i ( h ) (cid:1) = C + + C − − Y h − /Y ε − Y + O (cid:0) h − Y/ − /Y (cid:1) , as h → . (A.41)Next, we will study the asymptotic behavior of I ( h ), as h →
0. Clearly, by Cauchy-Schwarzinequality and self-similarity of Z h and W h under (cid:101) P , we have that I ( h ) ≤ h /Y (cid:18)(cid:101) E (cid:18)(cid:16) e − (cid:101) U h − (cid:17) (cid:19)(cid:19) / (cid:18)(cid:101) E (cid:16) Z {| σ √ hW + h /Y Z + (cid:101) γh |≤ ε } (cid:17)(cid:19) / . (A.42)By Assumption 2.1-(v) and denoting (cid:101) C (cid:96) = (cid:82) R (cid:0) e − (cid:96)ϕ ( x ) − (cid:96)ϕ ( x ) (cid:1) ˜ ν ( dx ), (cid:96) = 1 ,
2, we first have (cid:101) E (cid:18)(cid:16) e − (cid:101) U h − (cid:17) (cid:19) = e (cid:101) C h − e (cid:101) C h + 1 ∼ (cid:0) (cid:101) C − (cid:101) C (cid:1) h, as h → . (A.43)The analysis of the asymptotic behavior, as h →
0, of the second factor in (A.42) is similar to thatof I ( h ). More precisely, we first consider F ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≥ , ± Z ≥ } (cid:17) = C ± (cid:0) ε ∓ (cid:101) γh (cid:1) σ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u − Y du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω + ε ∓ (cid:101) γhσ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u (cid:16) p Z ( ± u ) − C ± u − − Y (cid:17) du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω. A similar argument as in (A.32) shows that C ± (cid:0) ε ∓ (cid:101) γh (cid:1) σ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u − Y du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω ∼ C ± − Y ) h − /Y ε − Y , as h → , and by (2.8), ε ∓ (cid:101) γhσ √ h (cid:90) (cid:18) (cid:90) ( ε ∓ (cid:101) γh )(1 − ω ) h /Y u (cid:12)(cid:12) p Z ( u ) − Cu − − Y (cid:12)(cid:12) du (cid:19) φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω ≤ ε ∓ (cid:101) γhσ √ h · (cid:101) K (cid:0) ε ∓ (cid:101) γh (cid:1) − Y − Y ) h (4 − Y ) /Y (cid:90) (1 − ω ) − Y φ (cid:18) ε ∓ (cid:101) γhσ √ h ω (cid:19) dω = O (cid:0) h − /Y ε − Y (cid:1) , as h → . F ± ( h ) = C ± − Y ) h − /Y ε − Y + O (cid:0) h − /Y ε − Y (cid:1) , as h → . (A.44)Using the same argument as above and since ε (cid:29) h , we also obtain that, when ± (cid:101) γ >
0, as h → F ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≤ , ± Z ≤ } (cid:17) = (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ≤∓ (cid:101) γh, W ≥ , ± Z ≤ } (cid:17) = O (cid:0) h − /Y − Y (cid:1) . (A.45)Moreover, using arguments similar to those for E ± ( h ), we deduce that F ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≥ , ± Z ≤ } (cid:17) = O (cid:0) h − /Y − Y/ (cid:1) , as h → . (A.46)Finally, we consider F ± ( h ) := (cid:101) E (cid:16) Z { ≤ σ √ hW ± h /Y Z ± (cid:101) γh ≤ ε, W ≤ , ± Z ≥ } (cid:17) = (cid:90) −∞ (cid:18)(cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u p Z ( ± u ) du (cid:19) φ ( x ) dx = C ± (cid:90) −∞ (cid:18)(cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u − Y du (cid:19) φ ( x ) dx + (cid:90) −∞ (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u (cid:18) p Z ( ± u ) − C ± u Y (cid:19) du φ ( x ) dx. A similar argument as in (A.39) shows that C ± (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u − Y du (cid:19) φ ( x ) dx ∼ C ± − Y ) h − /Y ε − Y , as h → , and by (2.8), (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u (cid:12)(cid:12)(cid:12) p Z ( ± u ) − C ± u − − Y (cid:12)(cid:12)(cid:12) du (cid:19) φ ( x ) dx ≤ (cid:101) K (cid:90) −∞ (cid:18) (cid:90) ε ∓ (cid:101) γh − σ √ hxh /Y − σ √ hx ∓ (cid:101) γhh /Y u − Y du (cid:19) φ ( x ) dx = (cid:101) Kε − Y − Y ) h /Y − (cid:90) −∞ (cid:32)(cid:18) − σ √ hx ± (cid:101) γhε (cid:19) − Y − (cid:18) − σ √ hx ± (cid:101) γhε (cid:19) − Y (cid:33) φ ( x ) dx = O (cid:18) ε − Y h /Y − (cid:19) . Hence, we obtain that F ± ( h ) = C ± − Y ) h − /Y ε − Y + O (cid:0) h − /Y ε − Y (cid:1) , as h → . (A.47)Combining (A.44), (A.45), (A.46), and (A.47), leads to (cid:101) E (cid:16) Z {| σ √ hW + h /Y Z + (cid:101) γh |≤ ε } (cid:17) = (cid:88) i =1 (cid:0) F + i ( h ) + F − i ( h ) (cid:1) = (cid:0) C + + C − (cid:1) h − /Y ε − Y − Y + O (cid:0) h − /Y ε − Y (cid:1) + O (cid:0) h − /Y − Y/ (cid:1) , as h → . (A.48)Therefore, by combining (A.42), (A.43), and (A.48), we have I ( h ) = O (cid:0) h ε − Y/ (cid:1) + O (cid:0) h / ε − Y (cid:1) + O (cid:0) h − Y/ (cid:1) , as h → . (A.49)29inally, by combining (A.30), (A.41), and (A.49), we obtain that I ( h ) = C + + C − − Y hε − Y + O (cid:0) hε − Y/ (cid:1) + O (cid:0) h − Y/ (cid:1) , as h → . (A.50) Step 2.2.
In this step, we will investigate the asymptotic behavior of I ( h ), as h →
0. Note that I ( h ) = h /Y (cid:101) E (cid:16) Z {| σ √ hW + h /Y Z + (cid:101) γh |≤ ε } (cid:17) + (cid:101) E (cid:18)(cid:16) e − (cid:101) U h − (cid:17) Z h {| σW h + Z h + (cid:101) γh |≤ ε } (cid:19) ≤ h /Y (cid:18)(cid:101) E (cid:16) Z {| σ √ hW + h /Y Z + (cid:101) γh |≤ ε } (cid:17)(cid:19) / (cid:32) (cid:18)(cid:101) E (cid:18)(cid:16) e − (cid:101) U h − (cid:17) (cid:19)(cid:19) / (cid:33) = O (cid:32) h /Y (cid:18)(cid:101) E (cid:16) Z {| σ √ hW + h /Y Z + (cid:101) γh |≤ ε } (cid:17)(cid:19) / (cid:33) , h → , where the second inequality above follows from Cauchy-Schwarz inequality. Therefore, by (A.30)and (A.41), we obtain that I ( h ) = O (cid:0) √ h ε − Y/ (cid:1) + O (cid:0) h − Y/ (cid:1) , as h → . (A.51)Finally, by combining (A.28), (A.29), (A.50), and (A.51), we conclude that E (cid:16) J h {| σW h + J h |≤ ε } (cid:17) = C + + C − − Y hε − Y + O (cid:0) hε − Y/ (cid:1) + O (cid:0) h − Y/ (cid:1) , as h → , (A.52)which completes the analysis of Step 2. Step 3.
In this last step, we will study the asymptotic behavior of the third term in (A.1), as h →
0. By (2.3) and (2.4), we first decompose it as E (cid:16) W h J h {| σW h + J h |≤ ε } (cid:17) = (cid:101) E (cid:16) e − (cid:101) U h − ηh W h J h {| σW h + J h |≤ ε } (cid:17) = √ h e − ηh (cid:101) E (cid:16) W J h {| σ √ hW + J h |≤ ε } (cid:17) + √ h e − ηh (cid:101) E (cid:16)(cid:16) e − (cid:101) U h − (cid:17) W J h {| σ √ hW + J h |≤ ε } (cid:17) =: e − ηh √ h I ( h ) + e − ηh √ h I ( h ) . (A.53)For I ( h ), by conditioning on J h , and using the fact that, for any x , x ∈ R with x < x , (cid:101) E (cid:16) W { W ∈ [ x ,x ] } (cid:17) = φ ( x ) − φ ( x ) , we obtain from (A.18) that, as h → I ( h ) = (cid:101) E (cid:18) J h (cid:18) φ (cid:18) ε + J h σ √ h (cid:19) − φ (cid:18) ε − J h σ √ h (cid:19)(cid:19)(cid:19) = O (cid:16) h e − ε / (2 σ h ) (cid:17) + O (cid:0) h / ε − Y (cid:1) . (A.54)As for I ( h ), by Cauchy-Schwarz inequality, (2.5), (A.30), (A.41), and (A.43), we obtain that (cid:12)(cid:12) I ( h ) (cid:12)(cid:12) ≤ (cid:18)(cid:101) E (cid:18)(cid:16) e − (cid:101) U h − (cid:17) (cid:19)(cid:19) / (cid:18)(cid:101) E (cid:16) J h {| σ √ hW + J h |≤ ε } (cid:17)(cid:19) / ≤ (cid:18)(cid:101) E (cid:18)(cid:16) e − (cid:101) U h − (cid:17) (cid:19)(cid:19) / (cid:18)(cid:101) E (cid:16) (cid:0) Z h + (cid:101) γ h (cid:1) {| σ √ hW + J h |≤ ε } (cid:17)(cid:19) / = O (cid:0) hε − Y/ (cid:1) , as h → . (A.55)30herefore, by combining (A.53), (A.54), and (A.55), we obtain that, as h → E (cid:16) W h J h {| σW h + J h |≤ ε } (cid:17) = O (cid:16) h / e − ε / (2 σ h ) (cid:17) + O (cid:0) h ε − Y (cid:1) + O (cid:0) h / ε − Y/ (cid:1) , (A.56)which completes the analysis in Step 3.Finally, by combining (A.1), (A.27), (A.52), and (A.56), we conclude that, as h → E (cid:0) b ( ε ) (cid:1) = σ h − σε √ h √ π e − ε / (2 σ h ) + C + + C − − Y hε − Y + O (cid:16) he − ε / (2 σ h ) (cid:17) + O (cid:0) hε − Y/ (cid:1) + O (cid:0) h − Y/ (cid:1) , which completes the proof of the lemma. References [1] Y. A¨ıt-Sahalia and J. Jacod. Estimating the Degree of Activity of Jumps in High FrequencyData.
Ann. Stat. , 37(5A):2202 − L´evy Processes and Stochastic Calculus , 2nd Ed..
Cambridge Stud. Adv. Math. ,116, Cambridge University Press, Cambridge, U.K., 2004.[3] D. Belomestny. Spectral Estimation of the Fractional Order of a L´evy Process.
Ann. Stat. ,38(1):317 − Ann. Stat. ,44(1):58 −
86, 2016.[5] P. Carr, H. Geman, D. B. Madan, and M. Yor. The Fine Structure of Asset Returns: AnEmpirical Investigation.
J. Bus. , 75(2):305 − Financial Modelling with Jump Processes . Chapman & Hall/CRCFinanc. Math. Ser. , Chapman & Hall/CRC, Boca Raton, FL, U.S.A., 2004.[7] J. E. Figueroa-L´opez. Statistical Estimation of L´evy-Type Stochastic Volatility Models.
Ann.Finance , 8(2):309 − Math. Financ. , 26(3):516 − Appl. Math. Financ. , 24(6):547 − J. Econom. , 208(1):179 − Financ. Stoch. , 20(1):219 − Financ. Stoch. , 20(4):973 − Ann. Stat. , 42(3):1029 − IAENG Int. J. Appl. Math. , 40(4):239 − Exotic Option Pricing and Advanced L´evyModels . John Wiley & Sons Ltd., ChiChester, England, 2005.[16] F. Mies. Rate-Optimal Estimation of the Blumenthal-Getoor Index of a L´evy Process.
Preprint ,2019. arXiv:1906.08062[17] M. Reiß. Testing the Characteristics of a L´evy Process.
Stoch. Proc. Appl. , 123(7):2808 − Stoch. Proc. Appl. , 117(6):677 − L´evy Processes and Infinitely Divisible Distributions . Cambridge Stud. Adv. Math. ,68, Cambridge University Press, Cambridge, U.K., 1999.[20] P. Tankov. Pricing and Hedging in Exponential L´evy Models: Review of Recent Results.
Paris-Princeton Lectures in Mathematical Finance 2010 (R. Carmona, E. C¸ inlar, I. Ekeland,E. Jouini, J. A. Scheinkman, and N. Touzi (eds.)),
Lect. Notes Math. , 2003, 319 − J. Am. Stat. Assoc. , 100(472):1394 −−