Global jump filters and realized volatility
Haruhiko Inatsugu and Nakahiro Yoshida†
Graduate School of Mathematical Sciences, University of Tokyo
Japan Science and Technology Agency CREST

February 16, 2021
Summary
For a semimartingale with jumps, we propose a new estimation method for integrated volatility, i.e., the quadratic variation of the continuous martingale part, based on the global jump filter proposed by Inatsugu and Yoshida [8]. To decide whether each increment of the process has jumps, the global jump filter adopts the upper $\alpha$-quantile of the absolute increments as the threshold. This jump filter is called global since it uses all the observations to classify one increment. We give a rate of convergence and prove asymptotic mixed normality of the global realized volatility and its variant, the "Winsorized global volatility". By simulation studies, we show that our estimators outperform previous realized volatility estimators that use a few adjacent increments to mitigate the effects of jumps.

Keywords and phrases: Volatility, semimartingales with jumps, global filter, high-frequency data, order statistics, rate of convergence, asymptotic mixed normality.
Introduction

Let $(\Omega, \mathcal{F}, P)$ be a probability space equipped with a filtration $\mathbf{F} = (\mathcal{F}_t)_{t\in[0,T]}$. We consider a one-dimensional semimartingale $X = (X_t)_{t\in[0,T]}$ having a decomposition
$$X_t = X_0 + \int_0^t b_s\,ds + \int_0^t \sigma_s\,dw_s + J_t \qquad (t\in[0,T]) \tag{1.1}$$
where $X_0$ is an $\mathcal{F}_0$-measurable random variable, $b = (b_t)_{t\in[0,T]}$ and $\sigma = (\sigma_t)_{t\in[0,T]}$ are càdlàg $\mathbf{F}$-adapted processes, and $w = (w_t)_{t\in[0,T]}$ is an $\mathbf{F}$-standard Wiener process. $J = (J_t)_{t\in[0,T]}$ is the jump part of $X$. We will assume that $J$ is finitely active, that is, $J_t = \sum_{s\in(0,t]} \Delta J_s$ for $\Delta J_s = J_s - J_{s-}$ and $\sum_{t\in[0,T]} 1_{\{\Delta J_t \neq 0\}} < \infty$ a.s. In this paper, we are interested in the estimation of the integrated volatility
$$\Theta = \int_0^T \sigma_t^2\,dt \tag{1.2}$$
based on the data $(X_{t_j})_{j=0,1,\dots,n}$, where $t_j = t_j^n = jT/n$. The jump part $J$ can be endogenous or exogenous, as well as $b$ and $\sigma$; however, $J$ is a nuisance in any case. The simple realized volatility is heavily damaged when jumps exist. To avoid the effects of the jumps, various methods have been proposed so far. For example, the bipower variation (Barndorff-Nielsen and Shephard [2], Barndorff-Nielsen et al. [3]) and the minimum realized volatility (Andersen et al. [1]) are shown to be consistent estimators of the integrated volatility even in the presence of jumps.

*This work was in part supported by Japan Science and Technology Agency CREST JPMJCR14D7; Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research No. 17H01702 (Scientific Research); and by a Cooperative Research Program of the Institute of Statistical Mathematics.
†Graduate School of Mathematical Sciences, University of Tokyo: 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan. e-mail: [email protected]
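The sensitivity of the simple realized volatility to jumps is easy to see numerically. The following minimal sketch (our own illustration, not from the paper; the parameter values, the zero drift, and the constant $\sigma$ are all simplifying assumptions) simulates a path of (1.1) with compound-Poisson-type jumps and compares the plain realized volatility with the true integrated volatility $\Theta = \sigma^2 T$:

```python
import math
import random

def simulate_path(n=2000, T=1.0, sigma=0.5, jump_rate=10.0, jump_scale=0.3, seed=1):
    """Euler-type simulation of (1.1) with b = 0 and constant sigma.
    Jumps are finitely active: at most one per step in this discretization,
    occurring with probability jump_rate * h per step (illustrative values)."""
    random.seed(seed)
    h = T / n
    X = [0.0]
    for _ in range(n):
        dX = sigma * math.sqrt(h) * random.gauss(0.0, 1.0)
        if random.random() < jump_rate * h:       # a jump arrives in this step
            dX += random.gauss(0.0, jump_scale)
        X.append(X[-1] + dX)
    return X

def realized_volatility(X):
    """Plain realized volatility: sum of squared increments."""
    return sum((X[j] - X[j - 1]) ** 2 for j in range(1, len(X)))

X = simulate_path()
theta = 0.5 ** 2 * 1.0   # integrated volatility for constant sigma = 0.5, T = 1
rv = realized_volatility(X)
```

Every jump contributes its square to the sum, so the plain estimator overshoots $\Theta$; the jump filters discussed below are designed to remove exactly those contributions.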
The idea of these methods is that, to mitigate the effect of jumps, they employ adjacent increments in constructing the estimator. Another direction to handle jumps is to introduce a threshold to detect jumps. Parametric inference for sampled diffusion type processes was studied by Dohnal [5], Prakasa Rao [16, 15], Yoshida [21, 22], Kessler [10], Genon-Catalot and Jacod [6], Uchida and Yoshida [19, 18, 20], Ogihara and Yoshida [14], Kamatani and Uchida [9] and others. Limit theorems used to analyse the realized volatility appeared in the studies of parametric inference. If no jump part exists, then the distribution of the increment $\Delta_j X = X_{t_j} - X_{t_{j-1}}$ admits a Gaussian approximation in a short time interval, and a quasi-likelihood function can be constructed with the conditional Gaussian density. When the jump part exists, the local Gaussian approximation is no longer valid. Then it is necessary to detect jumps and classify the increments in order to apply the local Gaussian quasi-likelihood function for estimation of the parameters in the continuous part. The threshold method was investigated by Shimizu and Yoshida [17] and Ogihara and Yoshida [13] in the context of parametric inference for a stochastic differential equation with jumps. The idea of thresholding is rather old, going back at least to the studies of limit theorems for Lévy processes. Mancini [12] used this idea in a nonparametric situation. Koike [11] applied the threshold method to covariance estimation for asynchronously observed semimartingales with jumps. The classical jump filters compare the size of an increment with a threshold determined by a (conditionally/unconditionally deterministic) function of the length of the time interval. If an increment is so large that it exceeds the threshold, it is regarded as having jumps. Otherwise, the increment is regarded as having no jump.
Once classified, the increments are used to estimate the parameters in the continuous and jump parts, respectively. Though the efficiency of the traditional thresholding parametric estimators has been established theoretically, it is known that their real performance strongly depends on the choice of tuning parameters; see, e.g., Iacus and Yoshida [7]. Examining each individual increment without other data is not always effective in finding jumps. It sometimes overlooks relatively small jumps due to a conservative level of the threshold, set so as to incorporate all Brownian increments. To resolve this problem, Inatsugu and Yoshida [8] introduced the so-called global filters that examine all increments simultaneously and regard an increment of high rank in order of absolute size as a jump. Using the information about the size of other increments helps us detect jumps more accurately than the previous methods that ignore such information. Moreover, Inatsugu and Yoshida [8] also removed the assumption of low intensity of small jumps that was used in Shimizu and Yoshida [17] and Ogihara and Yoshida [13]. This is a theoretical advantage of the global jump filters, in addition to their outperformance in practice.

In this paper, we will apply the global filtering method to nonparametric volatility estimation. Specifically, we will construct the "global realized volatility (GRV) estimator" of the integrated volatility for the semimartingale $X$ having the decomposition (1.1). Though $J$ and the jump part of $\sigma$ are assumed to be finitely active for each $n$, we permit the number of jumps to diverge as $n$ tends to infinity. We will investigate the theoretical properties of the GRV and then conduct numerical simulations to study its performance compared with traditional methods, that is, the deterministic threshold estimator, the bipower variation and the minimum realized volatility.

The organization of this paper is as follows. Section 2 introduces the GRV and its variant, the Winsorized GRV (WGRV).
In Section 3, we introduce the local-global realized volatility (LGRV) and prove its convergence to the spot volatility. The LGRV will be used for normalizing the increments to compute the global filter. Section 4 gives the rate of convergence of the GRV and WGRV in the situation where the intensity of jumps is high. In this case, we need a high and fixed cut-off rate $\alpha$ to eliminate harmful jumps. In Section 5, we allow the cut-off rate to vary according to the sample size. This "moving threshold" method is for the situation where the intensity of jumps is moderate and a small cut-off rate is applicable. Section 6 briefly discusses the situation where the true volatility is constant. In this case, normalizing the increments is not necessary, so the estimator becomes a little simpler. Section 7 presents some simulation results to compare the real performance of the GRV, WGRV, bipower variation, and the minimum realized volatility.

Concluding, let us mention some technical aspects. The global jump filter causes theoretical difficulty. By nature, it uses all the data to classify each increment $\Delta_j X$. This completely destroys the martingale structure in the model, which makes it difficult to use orthogonality between the selected increments to validate the law of large numbers and the central limit theorem. However, it is possible to asymptotically recover the orthogonality by the local and global filtering lemmas presented in Sections 3 and 4. Technically, the argument here is closed within semimartingale theory, although the global filter breaks adaptivity of the functionals; in other words, a quadratic variation with anticipative weights is treated. On the other hand, Yoshida [23] suggests a use of the Malliavin calculus to analyse robustified volatility estimators with anticipative weights.

Global realized volatility

The global jump filter introduced by Inatsugu and Yoshida [8] uses the order statistics of the transformed increments of the observations.
Suppose that an estimator $S_{n,j-1}$ of the spot volatility $\sigma_{t_{j-1}}^2$ (up to a common scaling factor) is given for each $j \in I_n = \{1, \dots, n\}$. Denote $\Delta_j U = U_{t_j} - U_{t_{j-1}}$ for a process $U = (U_t)_{t\in[0,T]}$. Then the distribution of the scaled increment $S_{n,j-1}^{-1/2} \Delta_j X$ is expected to be well approximated by the standard normal distribution $N(0,1)$. Let
$$V_j = \big| (S_{n,j-1})^{-1/2} \Delta_j X \big|. \tag{2.1}$$
If $V_j$ is relatively very large among $\mathcal{V}_n = \{V_k\}_{k\in I_n}$, then plausibly we can infer that the increment $\Delta_j X$ involves jumps with high probability. The idea of the global jump filter is to eliminate the increment $\Delta_j X$ from the data if the corresponding $V_j$ is ranked within the top $100\alpha\%$ in $\mathcal{V}_n$. More precisely, let
$$\mathcal{J}_n(\alpha) = \big\{ j \in I_n;\ V_j < V_{(s_n(\alpha))} \big\}$$
where $s_n(\alpha) = \lfloor n(1-\alpha) \rfloor$ for $\alpha \in [0,1)$, $V_{(r)}$ denotes the $r$-th order statistic of $\mathcal{V}_n$, and $r_n(U_j)$ denotes the rank of $U_j$ among the variables $\{U_i\}_{i\in I_n}$. Let
$$q(\alpha) = \int_{\{|z| \le c(\alpha)^{1/2}\}} z^2 \phi(z; 0, 1)\,dz \tag{2.2}$$
where $\phi(z; 0, 1)$ is the density function of $N(0,1)$ and $c(\alpha)$ is defined by $P\big[ \zeta^2 \le c(\alpha) \big] = 1 - \alpha$ for $\zeta \sim N(0,1)$ and $\alpha \in [0,1)$. The global realized volatility (globally truncated realized volatility, GRV) with cut-off ratio $\alpha$ is defined by
$$\mathbb{V}_n(\alpha) = \sum_{j\in\mathcal{J}_n(\alpha)} q(\alpha)^{-1} |\Delta_j X|^2 K_{n,j} \tag{2.3}$$
where $K_{n,j} = 1_{\{|\Delta_j X| \le n^{-1/4}\}}$. As remarked in Inatsugu and Yoshida [8], the indicator function $K_{n,j}$ is set just for relaxing the conditions for validation. Generalization by using an indicator like $1_{\{|\Delta_j X| \le B n^{-\delta}\}}$ with constants $B > 0$ and $\delta \in (0, 1/4]$ is straightforward, but we prefer simplicity in the presentation of this article. In practice, the probability that $K_{n,j}$ executes the truncation is exponentially small by the large deviation principle. However, the moments of $\Delta J_t$ are not controllable without assumptions, and we can simply avoid the problem with the cut-off function $K_{n,j}$.

Winsorization is a popular technique in robust statistics. In the present context, the Winsorized global realized volatility (WGRV) is given by
$$\mathbb{W}_n(\alpha) = \sum_{j=1}^n w(\alpha)^{-1} \Big\{ |\Delta_j X|^2 \wedge \big( S_{n,j-1}^{1/2} V_{(s_n(\alpha))} \big)^2 \Big\} K_{n,j}$$
where
$$w(\alpha) = \int_{\mathbb{R}} \big( z^2 \wedge c(\alpha) \big) \phi(z; 0, 1)\,dz.$$
The cut-off ratio $\alpha \in [0,1)$ is a tuning parameter in estimation procedures. A bigger $\alpha$ provides more stable estimates even under a high intensity of jumps. On the other hand, a smaller $\alpha$ gives more precise estimates if the intensity of jumps is low. Making a trade-off between stability and precision is necessary in practice. As a matter of fact, these cases require different theoretical treatments. We will consider fixed $\alpha$ in Section 4, and shrinking $\alpha$ in Section 5.

Local-global filter
An estimator $S_{n,j-1}$ for the spot volatility (up to a constant scaling) is necessary to construct a global realized volatility. Naturally, we use the data around time $t$ to estimate $\sigma_t^2$. Since these data are also contaminated with jumps, we need a jump filter to construct a temporally-local estimator $S_{n,j-1}$. The idea of the global jump filter with the order statistics of the data around $t$ serves to eliminate the effects of jumps, not only theoretically but also practically, as demonstrated by the simulation studies of Section 7. In this section, we propose a local-global realized volatility and validate it by establishing in Section 3.2 the rate of convergence of the estimator. Since the local-global filter involves the order statistics, which destroy the martingale structure, we try to recover it by the somewhat sophisticated lemmas given in Section 3.1. The minimum realized volatility (minRV) made of the temporally-local data is also a candidate estimator for the spot volatility. A rate of convergence of the local minRV is mentioned in Section 3.3.

For each $j \in I_n$, let
$$j_n = \begin{cases} 1 & (j \le \kappa_n), \\ j - \kappa_n & (\kappa_n + 1 \le j \le n - \kappa_n), \\ n - 2\kappa_n & (j \ge n - \kappa_n + 1) \end{cases}$$
for $\kappa_n \in \mathbb{Z}_+$ satisfying $2\kappa_n + 1 \le n$. Let $I_{n,j} = \{ j_n, j_n+1, \dots, j_n+2\kappa_n \}$. Let $\widehat{U}_{j,k} = h^{-1/2} \sigma_{t_{j_n-1}}^{-1} \Delta_k X$ and $W_j = h^{-1/2} \Delta_j w$ for $j, k \in I_n$, where $h = T/n$. Both variables $\widehat{U}_{j,k}$ and $W_j$ depend on $n$. Let
$$\widehat{R}_{j,k} = \widehat{U}_{j,k} - W_k - h^{-1/2} \sigma_{t_{j_n-1}}^{-1} \Delta_k J$$
for $j, k \in I_n$. Denote $L^{\infty-} = \cap_{p>1} L^p$. Let $N = \sum_{s\in(0,\,\cdot\,]} 1_{\{\Delta J_s \neq 0\}}$. Let $\widetilde{\sigma} = \sigma - J^\sigma$ for $J^\sigma = \sum_{s\in(0,\,\cdot\,]} \Delta\sigma_s$, and let $N^\sigma = \sum_{s\in(0,\,\cdot\,]} 1_{\{\Delta J^\sigma_s \neq 0\}}$. We assume that $N^\sigma_T < \infty$ a.s. Moreover, let $\overline{N} = N + N^\sigma$. Let $\widetilde{X} = X - J$. A counting process will be identified with a random measure. Let $\overline{I}_{n,j} = \big( t_{j_n-1},\, t_{j_n+2\kappa_n} \big]$.

[G1] (i) $\sup_{t\in[0,T]} \|\sigma_t\|_p < \infty$ and $\big\| \widetilde{\sigma}_t - \widetilde{\sigma}_s \big\|_p \le C(p) |t-s|^{1/2}$ ($t, s \in [0,T]$) for some constant $C(p)$, for every $p > 1$.

(ii) $\sup_{t\in[0,T]} \|b_t\|_p < \infty$ for every $p > 1$.

(iii) $\sigma_t \neq 0$ a.s. for every $t \in [0,T]$, and $\sup_{t\in[0,T]} \big\| \sigma_t^{-1} \big\|_p < \infty$ for every $p > 1$.

Lemma 3.1. Under [G1],
$$\sup_{j\in I_n} \sup_{k\in I_{n,j}} \big\| \widehat{R}_{j,k}\, 1_{\{N^\sigma(\overline{I}_{n,j})=0\}} \big\|_p = O\bigg( \Big( \frac{\kappa_n}{n} \Big)^{1/2} \bigg) \tag{3.1}$$
as $n \to \infty$ for every $p > 1$.

Proof. For $j \in I_n$, let $E(j) = \{ N^\sigma(\overline{I}_{n,j}) = 0 \}$. Then, for $k \in I_{n,j}$,
$$\widehat{R}_{j,k}\, 1_{E(j)} = \big( h^{-1/2} \sigma_{t_{j_n-1}}^{-1} \Delta_k \widetilde{X} - h^{-1/2} \Delta_k w \big) 1_{E(j)} = h^{-1/2} \int_{t_{k-1}}^{t_k} \sigma_{t_{j_n-1}}^{-1} \big( \widetilde{\sigma}_t - \widetilde{\sigma}_{t_{j_n-1}} \big)\,dw_t\, 1_{E(j)} + h^{-1/2} \sigma_{t_{j_n-1}}^{-1} \int_{t_{k-1}}^{t_k} b_t\,dt\, 1_{E(j)}. \tag{3.2}$$
We obtain (3.1) by applying the Burkholder-Davis-Gundy inequality to the martingale part of (3.2), after the trivial estimate $1_{E(j)} \le 1$.

For $j \in I_n$, denote by $r_{n,j}(U_k)$ the rank of the element $U_k$ among a collection of random variables $\{U_\ell\}_{\ell\in I_{n,j}}$. Let $0 < \eta_1 < \eta_0$,
$$\overline{\kappa}_n = 2\kappa_n + 1, \qquad a_n = \big\lfloor (1-\alpha)\overline{\kappa}_n - \kappa_n^{1-\eta_0} \big\rfloor, \qquad \widehat{a}_n = \big\lfloor a_n - \kappa_n^{1-\eta_0} \big\rfloor$$
for $\alpha \in [0,1)$, and let
$$L_{n,j,k} = \big\{ r_{n,j}(|W_k|) \le a_n - \kappa_n^{1-\eta_0} \big\} \cap \big\{ |W|_{(j,a_n)} - |W_k| < \kappa_n^{-\eta_1} \big\} \tag{3.3}$$
where $\big( |W|_{(j,k)} \big)_{k\in I_{n,j}}$ are the order statistics made from $\{ |W_k| \}_{k\in I_{n,j}}$. In the same way as Lemma 1 of Inatsugu and Yoshida [8], we obtain the following result.

Lemma 3.2. Let $\alpha \in (0,1)$. Suppose that $\eta_1 < 1/2$ and that $n^{-\epsilon}\kappa_n \to \infty$ as $n \to \infty$ for some $\epsilon \in (0,1)$. Then
$$\sup_{j\in I_n} P\bigg[ \bigcup_{k\in I_{n,j}} L_{n,j,k} \bigg] = O(n^{-L})$$
as $n \to \infty$ for every $L > 0$.

Define $\mathcal{K}_{n,j}(\alpha)$ by
$$\mathcal{K}_{n,j}(\alpha) = \big\{ k \in I_{n,j};\ r_{n,j}(|\Delta_k X|) \le (1-\alpha)\overline{\kappa}_n \big\},$$
where $r_{n,j}(|\Delta_k X|)$ is the rank of $|\Delta_k X|$ among $\{ |\Delta_{k'} X| \}_{k'\in I_{n,j}}$. Let
$$\widehat{\mathcal{K}}_{n,j}(\alpha) = \big\{ k \in I_{n,j};\ r_{n,j}(|W_k|) \le \widehat{a}_n \big\}.$$
Let
$$\Omega_{n,j} = \bigcap_{k\in I_{n,j}} \bigg[ \Big\{ |\widehat{R}_{j,k}|\, 1_{\{N^\sigma(\overline{I}_{n,j})=0\}} < 2^{-1}\kappa_n^{-\eta_1} \Big\} \cap L_{n,j,k}^c \bigg]$$
and let
$$\mathcal{L}_n = \big\{ j \in I_n;\ \overline{N}(\overline{I}_{n,j}) \neq 0 \big\}. \tag{3.4}$$

Lemma 3.3. (a) $\widehat{\mathcal{K}}_{n,j}(\alpha) \subset \mathcal{K}_{n,j}(\alpha)$ on $\Omega_{n,j}$ if $j \in \mathcal{L}_n^c$. (b) $1_{\Omega_{n,j}} 1_{\{j\in\mathcal{L}_n^c\}}\, \#\big( \mathcal{K}_{n,j}(\alpha) \setminus \widehat{\mathcal{K}}_{n,j}(\alpha) \big) \le 3\big( \kappa_n^{1-\eta_0} + 1 \big)$ ($j \in I_n$, $n \in \mathbb{N}$).

Proof.
Let $n \in \mathbb{N}$ and suppose that $j \in \mathcal{L}_n^c$. We will work on $\Omega_{n,j}$. For a pair $(k_1, k_2) \in I_{n,j}^2$, suppose that
$$r_{n,j}(|W_{k_1}|) \le \widehat{a}_n \qquad\text{and}\qquad r_{n,j}(|W_{k_2}|) \ge a_n. \tag{3.5}$$
Then $|\widehat{U}_{j,k_1}| < |W_{k_1}| + 2^{-1}\kappa_n^{-\eta_1}$, since $\Delta_{k_1} N = 0$ and $N^\sigma(\overline{I}_{n,j}) = 0$ when $j \in \mathcal{L}_n^c$, and then $|\widehat{R}_{j,k_1}| < 2^{-1}\kappa_n^{-\eta_1}$ on $\Omega_{n,j}$. By the first inequality of (3.5), $r_{n,j}(|W_{k_1}|) \le a_n - \kappa_n^{1-\eta_0}$, and hence on $\Omega_{n,j} \subset L_{n,j,k_1}^c$ we have $|W|_{(j,a_n)} - |W_{k_1}| \ge \kappa_n^{-\eta_1}$ by the definition (3.3) of $L_{n,j,k_1}$. Therefore
$$|\widehat{U}_{j,k_1}| < |W|_{(j,a_n)} - 2^{-1}\kappa_n^{-\eta_1}. \tag{3.6}$$
The assumption $j \in \mathcal{L}_n^c$ entails $|\widehat{R}_{j,k_2}| < 2^{-1}\kappa_n^{-\eta_1}$ on $\Omega_{n,j}$, and hence $|W_{k_2}| - 2^{-1}\kappa_n^{-\eta_1} < |\widehat{U}_{j,k_2}|$ due to $\Delta_{k_2} J = 0$. Since $|W_{k_2}| \ge |W|_{(j,a_n)}$ by the second inequality of (3.5), we obtain from (3.6)
$$|\widehat{U}_{j,k_1}| < |\widehat{U}_{j,k_2}| \tag{3.7}$$
on $\Omega_{n,j}$ if $j \in \mathcal{L}_n^c$ and if the pair $(k_1, k_2) \in I_{n,j}^2$ satisfies (3.5).

We are still working on $\Omega_{n,j}$. Suppose that $j \in \mathcal{L}_n^c$ and $k_1 \in \widehat{\mathcal{K}}_{n,j}(\alpha)$. Then the inequality (3.7) holds for any $k_2 \in I_{n,j}$ satisfying $r_{n,j}(|W_{k_2}|) \ge a_n$. So, there are at least $\overline{\kappa}_n - a_n + 1$ $\big( \ge \lfloor \alpha\overline{\kappa}_n \rfloor + 1 \big)$ variables $\widehat{U}_{j,k_2}$ satisfying (3.7). Then $r_{n,j}(|\widehat{U}_{j,k_1}|) \le a_n \le (1-\alpha)\overline{\kappa}_n$, and hence $k_1 \in \mathcal{K}_{n,j}(\alpha)$. Thus, we have found
$$\widehat{\mathcal{K}}_{n,j}(\alpha) \subset \mathcal{K}_{n,j}(\alpha)$$
on $\Omega_{n,j}$ if $j \in \mathcal{L}_n^c$, that is, (a).

We still work on $\Omega_{n,j}$. Suppose that $j \in \mathcal{L}_n^c$ and $k_1 \in \mathcal{K}_{n,j}(\alpha) \setminus \widehat{\mathcal{K}}_{n,j}(\alpha)$. When $r_{n,j}(|W_{k_1}|) < a_n$, since $r_{n,j}(|W_{k_1}|) > \widehat{a}_n$ due to $k_1 \in \widehat{\mathcal{K}}_{n,j}(\alpha)^c$, we see
$$1_{\{j\in\mathcal{L}_n^c\}}\, \#\big\{ k_1 \in \mathcal{K}_{n,j}(\alpha) \setminus \widehat{\mathcal{K}}_{n,j}(\alpha);\ r_{n,j}(|W_{k_1}|) < a_n \big\} \le \kappa_n^{1-\eta_0} + 1 \tag{3.8}$$
on $\Omega_{n,j}$. When $r_{n,j}(|W_{k_1}|) \ge a_n$, for any $k_2$ satisfying $r_{n,j}(|W_{k_2}|) \le \widehat{a}_n$, we have (3.7) with the roles of $k_1$ and $k_2$ exchanged. Therefore
$$\#\big\{ k_2 \in I_{n,j};\ |\widehat{U}_{j,k_2}| < |\widehat{U}_{j,k_1}| \big\} \ge 1_{\{j\in\mathcal{L}_n^c\}}\, 1_{\{r_{n,j}(|W_{k_1}|)\ge a_n\}}\, \widehat{a}_n,$$
in other words,
$$r_{n,j}(|\widehat{U}_{j,k_1}|) > \widehat{a}_n \tag{3.9}$$
on $\Omega_{n,j}$ if $j \in \mathcal{L}_n^c$ and $r_{n,j}(|W_{k_1}|) \ge a_n$. Moreover, $r_{n,j}(|\widehat{U}_{j,k_1}|) \le \lfloor (1-\alpha)\overline{\kappa}_n \rfloor$ since $k_1 \in \mathcal{K}_{n,j}(\alpha)$. Combining this estimate with (3.9), we obtain
$$1_{\{j\in\mathcal{L}_n^c\}}\, \#\big\{ k_1 \in \mathcal{K}_{n,j}(\alpha) \setminus \widehat{\mathcal{K}}_{n,j}(\alpha);\ r_{n,j}(|W_{k_1}|) \ge a_n \big\} \le (1-\alpha)\overline{\kappa}_n - \widehat{a}_n \le 2\kappa_n^{1-\eta_0} + 2. \tag{3.10}$$
From (3.8) and (3.10), we obtain (b).

For $\eta_3 \in \mathbb{R}$, $j \in I_n$ and a sequence of random variables $(V_j)_{j\in I_n}$, let
$$D_{n,j} = \kappa_n^{\eta_3} \bigg| \frac{1}{\overline{\kappa}_n} \sum_{k\in\mathcal{K}_{n,j}(\alpha)} V_k - \frac{1}{\overline{\kappa}_n} \sum_{k\in\widehat{\mathcal{K}}_{n,j}(\alpha)} V_k \bigg|.$$
The following lemma follows from Lemma 3.3 immediately.
Lemma 3.4. (i) Let $p \ge 1$. Then
$$\big\| D_{n,j} \big\|_p \le 4\kappa_n^{\eta_3-\eta_0} \Big\| \max_{k\in I_{n,j}} |V_k|\, 1_{\Omega_{n,j}\cap\{j\in\mathcal{L}_n^c\}} \Big\|_p + \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} |V_k|\, 1_{\Omega_{n,j}^c} \Big\|_p + \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} |V_k|\, 1_{\{j\in\mathcal{L}_n\}} \Big\|_p$$
for $j \in I_n$, $n \in \mathbb{N}$.

(ii) Let $p \ge 1$ and $\eta_4 > 0$. Then
$$\big\| D_{n,j} \big\|_p \le 4\kappa_n^{\eta_3-\eta_0} \bigg( \kappa_n^{\eta_4} + \overline{\kappa}_n \max_{k\in I_{n,j}} \Big\| |V_k|\, 1_{\{|V_k|>\kappa_n^{\eta_4}\}}\, 1_{\Omega_{n,j}\cap\{j\in\mathcal{L}_n^c\}} \Big\|_p \bigg) + \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} |V_k|\, 1_{\Omega_{n,j}^c} \Big\|_p + \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} |V_k|\, 1_{\{j\in\mathcal{L}_n\}} \Big\|_p$$
for $j \in I_n$, $n \in \mathbb{N}$.

Let
$$\widetilde{\mathcal{K}}_{n,j}(\alpha) = \big\{ k \in I_{n,j};\ |W_k| \le c(\alpha)^{1/2} \big\}.$$
For $\eta_3 > 0$, $j \in I_n$ and a sequence of random variables $(V_j)_{j\in I_n}$, let
$$\widetilde{D}_{n,j} = \kappa_n^{\eta_3} \bigg| \frac{1}{\overline{\kappa}_n} \sum_{k\in\widehat{\mathcal{K}}_{n,j}(\alpha)} V_k - \frac{1}{\overline{\kappa}_n} \sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} V_k \bigg|.$$
Let
$$\widetilde{\Omega}_{n,j} = \Big\{ \big| |W|_{(j,\widehat{a}_n)} - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big\} \tag{3.11}$$
for $j \in I_n$, where $\check{C}$ is a positive constant.

Lemma 3.5. Let $\eta_3 \in \mathbb{R}$. Then

(i) For $p \ge 1$ and $j \in I_n$,
$$\big\| \widetilde{D}_{n,j} \big\|_p \le \kappa_n^{\eta_3} \bigg\| \max_{k'\in I_{n,j}} |V_{k'}|\, \frac{1}{\overline{\kappa}_n} \sum_{k\in I_{n,j}} 1_{\{ ||W_k|-c(\alpha)^{1/2}| < \check{C}\kappa_n^{-\eta_1} \}} \bigg\|_p + \kappa_n^{\eta_3} \Big\| 1_{\widetilde{\Omega}_{n,j}^c} \max_{k'\in I_{n,j}} |V_{k'}| \Big\|_p.$$

(ii) For $p_1 > p \ge 1$ and $j \in I_n$,
$$\big\| \widetilde{D}_{n,j} \big\|_p \le \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} |V_k| \Big\|_p\, P\Big[ \big| |W_1| - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big] + \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} |V_k| \Big\|_{pp_1(p_1-p)^{-1}} \bigg\| \frac{1}{\overline{\kappa}_n} \sum_{k\in I_{n,j}} \Big( 1_{\{||W_k|-c(\alpha)^{1/2}|<\check{C}\kappa_n^{-\eta_1}\}} - P\big[ ||W_k|-c(\alpha)^{1/2}| < \check{C}\kappa_n^{-\eta_1} \big] \Big) \bigg\|_{p_1} + \kappa_n^{\eta_3}\, P\big[ \widetilde{\Omega}_{n,j}^c \big]^{1/p_1} \Big\| \max_{k\in I_{n,j}} |V_k| \Big\|_{pp_1(p_1-p)^{-1}}.$$

Proof.
For $k \in I_{n,j}$,
$$\widetilde{\Omega}_{n,j} \cap \big\{ r_{n,j}(|W_k|) \le \widehat{a}_n \big\}^c \cap \big\{ |W_k| \le c(\alpha)^{1/2} \big\} = \Big\{ \big| |W|_{(j,\widehat{a}_n)} - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big\} \cap \big\{ |W_k| > |W|_{(j,\widehat{a}_n)} \big\} \cap \big\{ |W_k| \le c(\alpha)^{1/2} \big\} \subset \Big\{ \big| |W_k| - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big\}$$
and
$$\widetilde{\Omega}_{n,j} \cap \big\{ r_{n,j}(|W_k|) \le \widehat{a}_n \big\} \cap \big\{ |W_k| \le c(\alpha)^{1/2} \big\}^c = \Big\{ \big| |W|_{(j,\widehat{a}_n)} - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big\} \cap \big\{ |W_k| \le |W|_{(j,\widehat{a}_n)} \big\} \cap \big\{ |W_k| > c(\alpha)^{1/2} \big\} \subset \Big\{ \big| |W_k| - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big\}.$$
Thus we obtain (i). Property (ii) follows from (i).
Lemma 3.6.
If the constant $\check{C}$ in (3.11) is sufficiently large, then
$$\sup_{j\in I_n} P\big[ \widetilde{\Omega}_{n,j}^c \big] = O(n^{-L})$$
as $n \to \infty$ for any $L > 0$.

Proof. We have
$$P\big[ |W|_{(j,\widehat{a}_n)} - c(\alpha)^{1/2} < -\check{C}\kappa_n^{-\eta_1} \big] \le P\bigg[ \sum_{k\in I_{n,j}} 1_{A_{n,k}} \ge \big\lfloor a_n - \kappa_n^{1-\eta_0} \big\rfloor - 1 \bigg] = P\bigg[ \overline{\kappa}_n^{-1/2} \sum_{k\in I_{n,j}} \big\{ 1_{A_{n,k}} - P[A_{n,k}] \big\} \ge C_n \bigg] \tag{3.12}$$
where
$$A_{n,k} = \big\{ |W_k| < c(\alpha)^{1/2} - \check{C}\kappa_n^{-\eta_1} \big\}, \qquad C_n = \overline{\kappa}_n^{-1/2} \Big( \big\lfloor a_n - \kappa_n^{1-\eta_0} \big\rfloor - 1 - \overline{\kappa}_n P[A_{n,1}] \Big).$$
By using the mean-value theorem, we obtain
$$C_n \sim \overline{\kappa}_n^{-1/2} \Big[ (1-\alpha)\overline{\kappa}_n - 2\kappa_n^{1-\eta_0} - \overline{\kappa}_n \big\{ 1 - \alpha - 2\phi\big( c(\alpha)^{1/2}; 0, 1 \big)\check{C}\kappa_n^{-\eta_1} \big\} \Big] \gtrsim \kappa_n^{1/2-\eta_1}$$
as $n \to \infty$ if we choose a sufficiently large $\check{C}$. Therefore, the $L^p$-boundedness of the random variables in (3.12) gives
$$\sup_{j\in I_n} P\big[ |W|_{(j,\widehat{a}_n)} - c(\alpha)^{1/2} < -\check{C}\kappa_n^{-\eta_1} \big] = O(n^{-L}) \tag{3.13}$$
as $n \to \infty$ for any $L > 0$. In a similar way, we obtain
$$\sup_{j\in I_n} P\big[ |W|_{(j,\widehat{a}_n)} - c(\alpha)^{1/2} > \check{C}\kappa_n^{-\eta_1} \big] = O(n^{-L}) \tag{3.14}$$
as $n \to \infty$ for any $L > 0$. Then we obtain the result from (3.13) and (3.14).
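The constants $c(\alpha)$ and $q(\alpha)$ appearing in these lemmas (and in the GRV of Section 2) are explicit: $P[\zeta^2 \le c(\alpha)] = 1-\alpha$ gives $c(\alpha) = \Phi^{-1}(1-\alpha/2)^2$, integration by parts gives $q(\alpha) = (1-\alpha) - 2c(\alpha)^{1/2}\phi(c(\alpha)^{1/2})$, and splitting the integral gives $w(\alpha) = q(\alpha) + \alpha c(\alpha)$. A short sketch (our own illustration, not the paper's code):

```python
import math
from statistics import NormalDist  # standard normal: pdf, cdf, inv_cdf

_N = NormalDist()

def c_alpha(alpha):
    """c(alpha): P[zeta^2 <= c(alpha)] = 1 - alpha for zeta ~ N(0,1)."""
    z = _N.inv_cdf(1.0 - alpha / 2.0)
    return z * z

def q_alpha(alpha):
    """q(alpha) = int_{|z| <= sqrt(c)} z^2 phi(z) dz
    = (1 - alpha) - 2 a phi(a) with a = sqrt(c(alpha)), by integration by parts."""
    a = math.sqrt(c_alpha(alpha))
    return (1.0 - alpha) - 2.0 * a * _N.pdf(a)

def w_alpha(alpha):
    """w(alpha) = int (z^2 ^ c(alpha)) phi(z) dz = q(alpha) + alpha * c(alpha)."""
    return q_alpha(alpha) + alpha * c_alpha(alpha)
```

For instance, $c(0.05) \approx 1.96^2 \approx 3.84$, the familiar two-sided 5% cut for $\zeta^2$.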
We introduce the local-global realized volatility (LGRV)
$$\mathbb{L}_{n,j}(\alpha) = \frac{n}{\overline{\kappa}_n T} \sum_{k\in\mathcal{K}_{n,j}(\alpha)} q(\alpha)^{-1} |\Delta_k X|^2 K_{n,k}. \tag{3.15}$$

Theorem 3.7. Suppose that [G1] is fulfilled. For $c \in (0,1)$ and $B > 0$, suppose that $\kappa_n \sim Bn^c$ as $n \to \infty$. Then
$$\sup_{n\in\mathbb{N}} \sup_{j\in I_n} \sup_{k\in I_{n,j}} n^{\gamma_*} \big\| 1_{\{j\in\mathcal{L}_n^c\}} \big( \mathbb{L}_{n,j}(\alpha) - \sigma_{t_k}^2 \big) \big\|_p < \infty \tag{3.16}$$
for every $p > 1$ and any constant $\gamma_*$ satisfying
$$\gamma_* < \min\Big\{ \frac{1-c}{2},\ \frac{c}{2} \Big\}.$$

Proof. (I) We have $\kappa_n \sim Bn^c$ and $n/\overline{\kappa}_n \sim (2B)^{-1} n^{1-c}$, where $h = T/n$. Let
$$D^*_{n,j} = \kappa_n^{\eta_3} \bigg| \frac{n}{\overline{\kappa}_n} \sum_{k\in\mathcal{K}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} - \frac{n}{\overline{\kappa}_n} \sum_{k\in\widehat{\mathcal{K}}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} \bigg|.$$
Applied to $V_k = n|\Delta_k X|^2 K_{n,k} 1_{\{j\in\mathcal{L}_n^c\}}$, Lemma 3.4 (ii) gives
$$\big\| D^*_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p \le \Phi^{(1)}_{n,j} + \Phi^{(2)}_{n,j} \tag{3.17}$$
for every $p > 1$, where
$$\Phi^{(1)}_{n,j} = 4\kappa_n^{\eta_3-\eta_0} \bigg( \kappa_n^{\eta_4} + \overline{\kappa}_n \max_{k\in I_{n,j}} \Big\| n|\Delta_k X|^2\, 1_{\{n|\Delta_k X|^2 > \kappa_n^{\eta_4}\}}\, 1_{\{j\in\mathcal{L}_n^c\}} \Big\|_p \bigg) \tag{3.18}$$
and
$$\Phi^{(2)}_{n,j} = \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} n|\Delta_k X|^2 K_{n,k}\, 1_{\Omega_{n,j}^c} \Big\|_p. \tag{3.19}$$
Since there is no jump of $J$ on $\{j\in\mathcal{L}_n^c\}$, we see
$$\sup_{j\in I_n} \sup_{k\in I_{n,j}} \big\| n|\Delta_k X|^2\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p = O(1) \tag{3.20}$$
for every $p > 1$; as a result, the $L^p$-norm on the right-hand side of (3.18) is of $O(n^{-L})$ for arbitrary $L > 0$, and hence
$$\Phi^{(1)}_{n,j} = O\big( \kappa_n^{\eta_3-\eta_0+\eta_4} \big) \tag{3.21}$$
as $n \to \infty$. Similarly to (3.20), we obtain
$$\sup_{j\in I_n} P\big[ \Omega_{n,j}^c \big] = O(n^{-L}) \tag{3.22}$$
as $n \to \infty$ for every $L > 0$, from Lemma 3.2 as well as Lemma 3.1, because $(n/\kappa_n)^{1/2}\kappa_n^{-\eta_1} \to \infty$, i.e., $2^{-1}(c^{-1}-1) > \eta_1$. Then
$$\Phi^{(2)}_{n,j} \le \kappa_n^{\eta_3} n^{1/2}\, P\big[ \Omega_{n,j}^c \big]^{1/p'} = O(n^{-L}) \tag{3.23}$$
for every $L > 0$ and $p' > 1$. From (3.17), (3.21) and (3.23),
$$\big\| D^*_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p = O\big( \kappa_n^{\eta_3-\eta_0+\eta_4} \big) = O\big( n^{-c(\eta_0-\eta_3-\eta_4)} \big) \tag{3.24}$$
as $n \to \infty$ for every $p > 1$. We recall that the parameters should satisfy
$$0 < \eta_1 < \eta_0 < \min\Big\{ \frac12,\ \frac{c^{-1}-1}{2} \Big\}, \qquad \eta_3 + \eta_4 < \eta_0.$$
[ In particular, if $c = 1/2$, then $0 < \eta_1 < \eta_0 < 1/2$. The positive parameters $\eta_3$ and $\eta_4$ can be sufficiently small at this stage. Remark that $c\eta_0 < \min\{c/2, (1-c)/2\} \le 1/4$. ]

(II) Let
$$\widetilde{D}^*_{n,j} = \kappa_n^{\eta_3} \bigg| \frac{n}{\overline{\kappa}_n} \sum_{k\in\widehat{\mathcal{K}}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} - \frac{n}{\overline{\kappa}_n} \sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} \bigg|.$$
Applying Lemma 3.5 (ii) to $V_k = n|\Delta_k X|^2 K_{n,k} 1_{\{j\in\mathcal{L}_n^c\}}$, we have
$$\big\| \widetilde{D}^*_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p \le \Phi^{(3)}_{n,j} + \Phi^{(4)}_{n,j} + \Phi^{(5)}_{n,j}, \tag{3.25}$$
where
$$\Phi^{(3)}_{n,j} = \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} n|\Delta_k X|^2 K_{n,k}\, 1_{\{j\in\mathcal{L}_n^c\}} \Big\|_p\, P\Big[ \big| |W_1| - c(\alpha)^{1/2} \big| < \check{C}\kappa_n^{-\eta_1} \Big], \tag{3.26}$$
$$\Phi^{(4)}_{n,j} = \kappa_n^{\eta_3} \Big\| \max_{k\in I_{n,j}} n|\Delta_k X|^2 K_{n,k}\, 1_{\{j\in\mathcal{L}_n^c\}} \Big\|_{pp_1(p_1-p)^{-1}} \times \bigg\| \frac{1}{\overline{\kappa}_n} \sum_{k\in I_{n,j}} \Big( 1_{\{||W_k|-c(\alpha)^{1/2}|<\check{C}\kappa_n^{-\eta_1}\}} - P\big[ ||W_k|-c(\alpha)^{1/2}| < \check{C}\kappa_n^{-\eta_1} \big] \Big) \bigg\|_{p_1} \tag{3.27}$$
and
$$\Phi^{(5)}_{n,j} = \kappa_n^{\eta_3}\, P\big[ \widetilde{\Omega}_{n,j}^c \big]^{1/p_1} \Big\| \max_{k\in I_{n,j}} n|\Delta_k X|^2 K_{n,k}\, 1_{\{j\in\mathcal{L}_n^c\}} \Big\|_{pp_1(p_1-p)^{-1}} \tag{3.28}$$
for $j \in I_n$, $n \in \mathbb{N}$. Then, paying $\kappa_n^{\eta_4}$ for the maximum, we have the following estimates for any $p_1 > p \ge 1$:
$$\sup_{j\in I_n} \Phi^{(3)}_{n,j} = O\big( \kappa_n^{\eta_3+\eta_4-\eta_1} \big) = O\big( n^{-c(\eta_1-\eta_3-\eta_4)} \big), \tag{3.29}$$
$$\sup_{j\in I_n} \Phi^{(4)}_{n,j} = O\big( \kappa_n^{\eta_3} \times \kappa_n^{\eta_4} \times \kappa_n^{-(1+\eta_1)/2} \big) = O\big( n^{-c(2^{-1}+2^{-1}\eta_1-\eta_3-\eta_4)} \big), \tag{3.30}$$
and
$$\sup_{j\in I_n} \Phi^{(5)}_{n,j} = O(n^{-L}) \tag{3.31}$$
as $n \to \infty$ for any $L > 0$; the estimate (3.31) follows from Lemma 3.6. In this way,
$$\big\| \widetilde{D}^*_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p = O\big( n^{-c(\eta_1-\eta_3-\eta_4)} \big) + O\big( n^{-c(2^{-1}+2^{-1}\eta_1-\eta_3-\eta_4)} \big) \tag{3.32}$$
as $n \to \infty$ for every $p \ge 1$.

(III) On $\{j\in\mathcal{L}_n^c\}$, we have
$$\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} = \sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \bigg( \int_{t_{k-1}}^{t_k} \sigma_t\,dw_t + \int_{t_{k-1}}^{t_k} b_t\,dt \bigg)^{\!2} K_{n,k} = \Phi^{(6)}_{n,j} + \Phi^{(7)}_{n,j} + \Phi^{(8)}_{n,j} + \Phi^{(9)}_{n,j} + \Phi^{(10)}_{n,j} \tag{3.33}$$
where
$$\Phi^{(6)}_{n,j} = \sum_{k\in I_{n,j}} \sigma_{t_{j_n-1}}^2 h W_k^2\, 1_{\{|W_k|\le c(\alpha)^{1/2}\}}, \tag{3.34}$$
$$\Phi^{(7)}_{n,j} = \sum_{k\in I_{n,j}} \sigma_{t_{j_n-1}}^2 h W_k^2\, 1_{\{|W_k|\le c(\alpha)^{1/2}\}} \big( K_{n,k} - 1 \big) + 2\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \int_{t_{k-1}}^{t_k} \int_{t_{k-1}}^{t} \big( \widetilde{\sigma}_s - \widetilde{\sigma}_{t_{j_n-1}} \big)\,dw_s\, \sigma_t\,dw_t\, K_{n,k} + 2\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \int_{t_{k-1}}^{t_k} \int_{t_{k-1}}^{t} \widetilde{\sigma}_{t_{j_n-1}}\,dw_s\, \big( \widetilde{\sigma}_t - \widetilde{\sigma}_{t_{j_n-1}} \big)\,dw_t\, K_{n,k} + 2\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \int_{t_{k-1}}^{t_k} \widetilde{\sigma}_{t_{j_n-1}} \big( \widetilde{\sigma}_t - \widetilde{\sigma}_{t_{j_n-1}} \big)\,dt\, K_{n,k} + \sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \bigg( \int_{t_{k-1}}^{t_k} \big( \widetilde{\sigma}_t - \widetilde{\sigma}_{t_{j_n-1}} \big)\,dw_t \bigg)^{\!2} K_{n,k}, \tag{3.35}$$
$$\Phi^{(8)}_{n,j} = 2\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \int_{t_{k-1}}^{t_k} \int_{t_{k-1}}^{t} b_s\,ds\, \sigma_t\,dw_t\, K_{n,k}, \tag{3.36}$$
$$\Phi^{(9)}_{n,j} = 2\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \int_{t_{k-1}}^{t_k} \int_{t_{k-1}}^{t} \sigma_s\,dw_s\, b_t\,dt\, K_{n,k}, \tag{3.37}$$
and
$$\Phi^{(10)}_{n,j} = 2\sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} \int_{t_{k-1}}^{t_k} \int_{t_{k-1}}^{t} b_s\,ds\, b_t\,dt\, K_{n,k}. \tag{3.38}$$
By assumption,
$$\sup_{j\in I_n} \sup_{s\in[t_{j_n-1},\, t_{j_n+2\kappa_n}]} \big\| 1_{\{j\in\mathcal{L}_n^c\}} \big( \sigma_s - \sigma_{t_{j_n-1}} \big) \big\|_p \le \sup_{j\in I_n} \sup_{s\in[t_{j_n-1},\, t_{j_n+2\kappa_n}]} \big\| \widetilde{\sigma}_s - \widetilde{\sigma}_{t_{j_n-1}} \big\|_p \lesssim (\kappa_n h)^{1/2} \lesssim h^{(1-c)/2} \tag{3.39}$$
for every $p > 1$. First, a primitive estimate gives
$$\sup_{j\in I_n} \frac{n}{\overline{\kappa}_n} \big\| \Phi^{(7)}_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p \lesssim h^{(1-c)/2} \tag{3.40}$$
as $n \to \infty$; we note that the orthogonality cannot apply due to $\widetilde{\mathcal{K}}_{n,j}(\alpha)$ even after $K_{n,k}$ is decoupled. We also have
$$\sup_{j\in I_n} \frac{n}{\overline{\kappa}_n} \big\| \Phi^{(8)}_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p \lesssim h^{1/2}. \tag{3.41}$$
For $\Phi^{(9)}_{n,j}$ and $\Phi^{(10)}_{n,j}$, in the same way, we can get
$$\sup_{j\in I_n} \frac{n}{\overline{\kappa}_n} \big\| \Phi^{(9)}_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p \lesssim h^{1/2} \tag{3.42}$$
and
$$\sup_{j\in I_n} \frac{n}{\overline{\kappa}_n} \big\| \Phi^{(10)}_{n,j}\, 1_{\{j\in\mathcal{L}_n^c\}} \big\|_p \lesssim h \tag{3.43}$$
as $n \to \infty$. Furthermore, we have
$$\sup_{j\in I_n} \bigg\| \Big\{ \frac{1}{\overline{\kappa}_n h} \Phi^{(6)}_{n,j} - \sigma_{t_{j_n-1}}^2 q(\alpha) \Big\} 1_{\{j\in\mathcal{L}_n^c\}} \bigg\|_p \le \sup_{j\in I_n} \bigg\| \frac{1}{\overline{\kappa}_n} \sum_{k\in I_{n,j}} \sigma_{t_{j_n-1}}^2 \Big( W_k^2\, 1_{\{|W_k|\le c(\alpha)^{1/2}\}} - q(\alpha) \Big) \bigg\|_p = O\big( \kappa_n^{-1/2} \big) = O\big( h^{c/2} \big) \tag{3.44}$$
for every $p > 1$. Combining (3.33) and (3.39)-(3.44), we obtain
$$\sup_{j\in I_n} \sup_{k'\in I_{n,j}} \bigg\| 1_{\{j\in\mathcal{L}_n^c\}} \bigg( \frac{n}{\overline{\kappa}_n T} \sum_{k\in\widetilde{\mathcal{K}}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} - \sigma_{t_{k'}}^2 q(\alpha) \bigg) \bigg\|_p = O\big( n^{-(1-c)/2} \big) + O\big( n^{-c/2} \big) \tag{3.45}$$
as $n \to \infty$ for every $p > 1$. Consequently,
$$\sup_{j\in I_n} \sup_{k'\in I_{n,j}} \kappa_n^{\eta_3} \bigg\| 1_{\{j\in\mathcal{L}_n^c\}} \bigg( \frac{n}{\overline{\kappa}_n T} \sum_{k\in\mathcal{K}_{n,j}(\alpha)} |\Delta_k X|^2 K_{n,k} - \sigma_{t_{k'}}^2 q(\alpha) \bigg) \bigg\|_p = O\big( n^{-c(\eta_0-\eta_3-\eta_4)} \big) + \Big\{ O\big( n^{-c(\eta_1-\eta_3-\eta_4)} \big) + O\big( n^{-c(2^{-1}+2^{-1}\eta_1-\eta_3-\eta_4)} \big) \Big\} + \kappa_n^{\eta_3} \Big\{ O\big( n^{-(1-c)/2} \big) + O\big( n^{-c/2} \big) \Big\} = O\big( n^{-c(\eta_1-\eta_3-\eta_4)} \big) + O\big( n^{c(\eta_3+\eta_4) - \min\{(1-c)/2,\, c/2\}} \big) =: O_n \tag{3.46}$$
as $n \to \infty$ for every $p > 1$; the discarded terms decay faster than the two retained ones under the constraints (3.47) below. Here we are assuming that the parameters satisfy
$$c \in (0,1), \quad B > 0, \quad \eta_0 \in \Big( 0,\ \min\Big\{ \frac{c^{-1}-1}{2},\ \frac12 \Big\} \Big), \quad \eta_1 \in (0, \eta_0), \quad \eta_3 > 0, \quad \eta_4 > 0, \quad \eta_3 + \eta_4 < \eta_1. \tag{3.47}$$
The LGRV $\mathbb{L}_{n,j}(\alpha)$ of (3.15) does not depend on the $\eta_i$ ($i = 0, 1, 3, 4$) within the ranges (3.47). When $c > 1/2$, we make
$$\frac12 > \frac{c^{-1}-1}{2} > \eta_0 > \eta_1 \uparrow \frac{c^{-1}-1}{2}, \qquad \eta_3, \eta_4 \downarrow 0,$$
to make $\kappa_n^{-\eta_3} O_n = O(n^{-\gamma_*})$. When $c \le 1/2$, we make
$$\frac12 > \eta_0 > \eta_1 \uparrow \frac12, \qquad \eta_3, \eta_4 \downarrow 0,$$
to make $\kappa_n^{-\eta_3} O_n = O(n^{-\gamma_*})$. Thus, the proof of Theorem 3.7 is concluded.

According to the error bound (3.16), we should in general take $c = 1/2$, i.e., $\kappa_n \sim Bn^{1/2}$, to obtain an optimal error estimate. However, this is not always true. If the process $\sigma$ is (unknown) constant, for example, then we do not need any spot volatility estimator to construct a global jump filter, and the convergence of the resulting estimator for $\Theta$ becomes much faster than in the non-constant $\sigma$ case.

Estimation of spot volatilities can also be done by the minimum realized volatility (minRV) method of Andersen et al. [1]. This method is localized to define the local minRV by
$$M_{n,j} = \frac{\pi}{\pi-2} \cdot \frac{n}{\overline{\kappa}_n T} \sum_{k\in I_{n,j}} \big\{ |\Delta_k X| \wedge |\Delta_{k+1} X| \big\}^2. \tag{3.48}$$

Theorem 3.8. Suppose that [G1] is fulfilled. For $c \in (0,1)$ and $B > 0$, suppose that $\kappa_n \sim Bn^c$ as $n \to \infty$. Then
$$\sup_{n\in\mathbb{N}} \sup_{j\in I_n} \sup_{k\in I_{n,j}} n^{\gamma_{**}} \big\| 1_{\{j\in\mathcal{L}_n^c\}} \big( M_{n,j} - \sigma_{t_k}^2 \big) \big\|_p < \infty$$
for any $p > 1$, where
$$\gamma_{**} = \min\Big\{ \frac{1-c}{2},\ \frac{c}{2} \Big\}.$$

Proof.
Consider $k\in I_{n,j}$ for $j\in\mathcal{L}_n^c$. Then we can decompose $\Delta_kX$ as
\[ \Delta_kX=\sigma_{t_j}\Delta_kw+\int_{t_{k-1}}^{t_k}\bigl(\sigma_t-\sigma_{t_j}\bigr)dw_t+\int_{t_{k-1}}^{t_k}b_t\,dt. \]
By (3.39),
\[ \sup_{j\in I_n}\sup_{k\in I_{n,j}}\Bigl\|\int_{t_{k-1}}^{t_k}\bigl(\sigma_t-\sigma_{t_j}\bigr)dw_t\,1_{\{j\in\mathcal{L}_n^c\}}\Bigr\|_p\;\lesssim\;\sup_{j\in I_n}\sup_{k\in I_{n,j}}\sqrt{\int_{t_{k-1}}^{t_k}\bigl\|\widetilde\sigma_t-\widetilde\sigma_{t_j}\bigr\|_p^2\,dt}\;=\;\sqrt{O(h\times\kappa_nh)}\;=\;O\bigl(h^{(2-c)/2}\bigr)
\]
for every $p>1$. Hence, we obtain
\[ |\Delta_kX|^2=\sigma_{t_j}^2h\,\overline{W}_k^2+\mathcal{X}_k,\qquad \overline{W}_k=h^{-1/2}|\Delta_kw|, \]
where $\mathcal{X}_k$ is a random variable satisfying $\sup_{j\in I_n}\sup_{k\in I_{n,j}}\|\mathcal{X}_k\|_p=O(h^{(3-c)/2})$. By using this approximation (and the equality $a\wedge b=2^{-1}(a+b-|a-b|)$ for $a,b>0$),
\[ |\Delta_kX|^2\wedge|\Delta_{k+1}X|^2=\sigma_{t_j}^2h\bigl(\overline{W}_k\wedge\overline{W}_{k+1}\bigr)^2+\mathcal{X}'_k, \]
where $\mathcal{X}'_k$ is a random variable satisfying $\sup_{j\in I_n}\sup_{k\in I_{n,j}}\|\mathcal{X}'_k\|_p=O(h^{(3-c)/2})$. Hence, we obtain
\[ M_{n,j}-\sigma_{t_k}^2=\sigma_{t_j}^2\,\frac{1}{\kappa_n}\sum_{k\in I_{n,j}}\Bigl(\frac{\pi}{\pi-2}\bigl(\overline{W}_k\wedge\overline{W}_{k+1}\bigr)^2-1\Bigr)+\bigl(\sigma_{t_j}^2-\sigma_{t_k}^2\bigr)+\frac{\pi}{\pi-2}\,\frac{n}{\kappa_nT}\sum_{k\in I_{n,j}}\mathcal{X}'_k. \tag{3.49} \]
The first term on the right-hand side of (3.49) is $O(\kappa_n^{-1/2})=O(h^{c/2})$. As for the second term, (3.39) gives $\|\sigma_{t_j}^2-\sigma_{t_k}^2\|_p=O(h^{(1-c)/2})$. Finally, as for the third term, we can estimate
\[ \Bigl\|\frac{n}{\kappa_nT}\sum_{k\in I_{n,j}}\mathcal{X}'_k\Bigr\|_p\;\lesssim\;n\times O\bigl(h^{(3-c)/2}\bigr)=O\bigl(h^{(1-c)/2}\bigr). \]
With these estimates, we obtain the desired result.
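As a concrete computational illustration of (3.48), the following sketch computes block-wise local minRV spot-variance estimates. It is not code from the paper: the use of non-overlapping blocks, the array layout, and all variable names are our own assumptions.

```python
import numpy as np

def local_min_rv(x, kappa, T=1.0):
    """Block-wise local minRV spot-variance estimates, a sketch of (3.48).

    x     : array of n+1 observations X_{t_0}, ..., X_{t_n} on [0, T].
    kappa : block length (number of adjacent-increment minima per block).
    Returns one spot-variance estimate per non-overlapping block.
    """
    dx = np.abs(np.diff(x))                  # |Delta_k X|, k = 1, ..., n
    m = np.minimum(dx[:-1], dx[1:]) ** 2     # (|Delta_k X| ^ |Delta_{k+1} X|)^2
    n = len(dx)
    corr = np.pi / (np.pi - 2.0)             # corrects E[min(|Z|,|Z'|)^2] = (pi-2)/pi
    starts = range(0, len(m) - kappa + 1, kappa)
    return np.array([corr * (n / (kappa * T)) * m[s:s + kappa].sum() for s in starts])
```

Taking the minimum of two adjacent absolute increments is what makes the estimator robust: a single jump can corrupt at most two of the minima in a block.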
4 Rate of convergence of the global realized volatilities in high intensity of jumps
In this section, we present a rate of convergence of the GRV and the WGRV, both defined in Section 2. When the frequency of the jumps is high, it is recommended to choose a value of $\alpha$ that is not extremely small, in order to cover the jumps by the index set $\mathcal{J}_n(\alpha)^c$. We will assume the following properties of $S_{n,j-}$, which we already proved in Section 3 for the LGRV and the local minRV. Thus, the GRV and the WGRV equipped with an LGRV or a local minRV are global realized volatilities.

[G2] (i) $S_{n,j-}$ is positive a.s. and $\sup_{n\in\mathbb{N}}\sup_{j\in I_n}\|S_{n,j-}^{-1}\|_p<\infty$ for every $p>1$.
(ii) There exist positive constants $\gamma_0$ and $c$ such that
\[ \sup_{n\in\mathbb{N}}\sup_{j\in I_n}n^{\gamma_0}\bigl\|1_{\{j\in\mathcal{L}_n^c\}}\bigl(\sigma_{t_{j-1}}^2-c\,S_{n,j-}\bigr)\bigr\|_p<\infty\quad\text{for every }p>1. \]
The constant $c$ in [G2] is known. We note that
\[ \sup_{n\in\mathbb{N}}\sup_{j\in I_n}\bigl\|1_{\{j\in\mathcal{L}_n^c\}}S_{n,j-}\bigr\|_p<\infty\quad\text{for every }p>1 \]
under [G1] and [G2] (ii) for $S_{n,j-}$.

If $\sigma_t$ is equal to a (possibly unknown) constant, then $\gamma_0$ can be arbitrarily large, since we can let $S_{n,j-}=1$. In other words, we do not need any pre-estimate of $\sigma_{t_{j-1}}^2$. So the constant volatility case is very special, and it will be discussed briefly and separately in Section 6. This section logically includes the constant volatility case (hence gives a less efficient method for it), but we will consider a general non-constant volatility and assume that a given local estimator attains only a limited rate of convergence.

Remark 4.1.
When $v_0=2^{-1}\inf_{\omega\in\Omega,\,t\in[0,T]}\sigma_t^2>0$, given a local estimator $L^{\mathrm{loc}}_{n,j-}$ of $\sigma_{t_{j-1}}^2$, we can use $S_{n,j-}(v_0)=L^{\mathrm{loc}}_{n,j-}\vee v_0$ for $S_{n,j-}$. For example, this is the case when $X$ satisfies a stochastic differential equation with jumps whose diffusion coefficient is uniformly elliptic. When $v_0=0$, an appropriate modification of $L^{\mathrm{loc}}_{n,j-}$ is necessary and possible. We only give an idea without going into details here. Preset a positive constant $v_0$. Using $S_{n,j-}(v_0)$ for $S_{n,j-}$, we obtain an estimator $\widetilde V_n[v_0]$ of $\Theta(v_0)=\int_0^T\sigma_t^21_{\{\sigma_t^2\ge v_0\}}dt$, and indeed, the rate of convergence of $\widetilde V_n[v_0]$ is established in this paper. Then it is natural to use $\widetilde V_n[v_n]$ to estimate $\Theta=\int_0^T\sigma_t^2dt$ with a sequence of numbers $v_n$ tending to $0$ as $n\to\infty$. Consistency is not an issue because the mapping $v\mapsto\Theta(v)$ is continuous and $v_n\downarrow0$, though the attained rate depends on $v_n$. However, the error caused by the truncation at level $v_n$ is the difference $\int_0^T\sigma_t^21_{\{\sigma_t^2<v_n\}}dt$, and it is rather easy to control for small $v_n$.

4.1 Rate of convergence of the GRV with a fixed $\alpha$

We consider the GRV given by (2.3):
\[ V_n(\alpha)=\sum_{j\in\mathcal{J}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^2K_{n,j}. \]
Denote by $r_n(U_j)$ the rank of $U_j$ among the variables $\{U_i\}_{i\in I_n}$ as before, and let $|U|_{(r)}$ denote the $r$-th order statistic of $\{|U_i|\}_{i\in I_n}$. Let $0<\gamma_1<\gamma_2<\gamma_3$, and define numbers $a_n$ and $\widehat a_n$ by $a_n=\lfloor(1-\alpha)n-n^{1-\gamma_1}\rfloor$ and $\widehat a_n=\lfloor a_n-n^{1-\gamma_2}\rfloor$, respectively. Define the event $N_{n,j}$ by
\[ N_{n,j}=\bigl\{r_n(|W_j|)\le a_n-n^{1-\gamma_3}\bigr\}\cap\bigl\{|W|_{(a_n)}-|W_j|<n^{-\gamma_3}\bigr\}. \]
The following lemma is Lemma 2.6 of Inatsugu and Yoshida [8].
Lemma 4.2. $P\bigl[\bigcup_{j=1,\dots,n}N_{n,j}\bigr]=O(n^{-L})$ as $n\to\infty$ for every $L>0$.

We need some notation:
\[ \widehat{\mathcal{J}}_n(\alpha)=\bigl\{j\in I_n;\ r_n(|W_j|)\le\widehat a_n\bigr\},\qquad U_j=c^{-1/2}h^{-1/2}(S_{n,j-})^{-1/2}\Delta_jX,\qquad R_j=U_j-W_j-c^{-1/2}h^{-1/2}(S_{n,j-})^{-1/2}\Delta_jJ, \]
as well as
\[ \Omega_n=\bigl\{\#\mathcal{L}_n<n^{1-\gamma_2}\bigr\}\cap\Bigl(\bigcap_{j=1,\dots,n}\bigl[\bigl\{|R_j|1_{\{j\in\mathcal{L}_n^c\}}<2^{-1}n^{-\gamma_3}\bigr\}\cap(N_{n,j})^c\bigr]\Bigr). \]
Let
\[ \mathbb{L}_n=\bigl\{j\in I_n;\ \Delta_jN\ne0\bigr\}. \tag{4.1} \]
The definition of $L^{(k)}_n$ in the extended version arXiv:1806.10706v3 of Inatsugu and Yoshida [8] is essentially the same as $\mathbb{L}_n$, and different from $\mathcal{L}_n$ defined by (3.4); the random set $L^{(k)}_n$ therein corresponds to $\mathbb{L}_n$. We assume that the distribution of the variable $N_T$ depends on $n$, and consider the case where $N_T$ may diverge as $n\to\infty$. More precisely, we will assume the following situation.

[G3] There exists a constant $\xi\ge0$ such that $\|\#\mathbb{L}_n\|_p=O(n^{\xi})$ as $n\to\infty$ for every $p>1$.
3] of arXiv:2102.05307v1. emma 4.3. Suppose that [ G and [ G are satisfied. Suppose that < γ < γ < / . Then sup j ∈ I n P (cid:2) | R j | { j ∈L cn } ≥ − n − γ (cid:3) = O ( n − L ) (4.2) as n → ∞ for every L > . In particular, if the conditions κ n = O ( n / ) , γ < − ξ and [ G are additionally satisfied, then P [Ω cn ] = O ( n − L ) (4.3) as n → ∞ for every L > .Proof. We have sup j ∈ I n (cid:13)(cid:13) R j { j ∈L cn } (cid:13)(cid:13) p = O ( n − γ )for every p >
1. The Markov inequality implies (4.2). This estimate and Lemma 4.2 give (4.3)if the Markov inequality is used with the estimate k L n k p = O ( n ξ +1 / ) from [ G L n is given by (3.4). Lemma 4.4. b J n ( α ) ∩ L cn ⊂ J n ( α ) (4.4) on Ω n . In particular (cid:2) J n ( α ) ⊖ b J n ( α ) (cid:3) ≤ c ∗ n − γ + L n (4.5) on Ω n , where c ∗ is a positive constant. Here ⊖ denotes the symmetric difference operator ofsets. For γ > U j ) j =1 ,...,n , let D n = n γ (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) n X j ∈J n ( α ) U j − n X j ∈ b J n ( α ) U j (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) . We refer the reader to Lemmas 2.8 and 2.9 of Inatsugu and Yoshida [8] (or see arXiv:1806.10706v3)for proof of the following two lemmas.
Lemma 4.5. (i) Let $p_1>p>1$. Then
\[ \|\mathcal{D}_n\|_p\le\bigl(c^*n^{\gamma_4-\gamma_1}+n^{\gamma_4-1}\|\#\mathcal{L}_n\|_{p_1}\bigr)\Bigl\|\max_{j=1,\dots,n}|U_j|\Bigr\|_{pp_1(p_1-p)^{-1}}+n^{\gamma_4}\Bigl\|\max_{j=1,\dots,n}|U_j|\,1_{\Omega_n^c}\Bigr\|_p \]
for $p\in(1,p_1)$.
(ii) Let $\gamma_5>0$ and $p_1>1$. Then
\[ \|\mathcal{D}_n\|_p\le\bigl(c^*n^{\gamma_4-\gamma_1}+n^{\gamma_4-1}\|\#\mathcal{L}_n\|_{p_1}\bigr)\times\Bigl(n^{\gamma_5}+n\max_{j=1,\dots,n}\bigl\||U_j|1_{\{|U_j|>n^{\gamma_5}\}}\bigr\|_{pp_1(p_1-p)^{-1}}\Bigr)+n^{\gamma_4}\Bigl\|\max_{j=1,\dots,n}|U_j|\,1_{\Omega_n^c}\Bigr\|_p \]
for $p\in(1,p_1)$.

Let
\[ \widetilde{\mathcal{D}}_n=n^{\gamma_4}\Bigl|n^{-1}\sum_{j\in\widehat{\mathcal{J}}_n(\alpha)}U_j-n^{-1}\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}U_j\Bigr| \]
for a collection of random variables $\{U_j\}_{j\in I_n}$, where
\[ \widetilde{\mathcal{J}}_n(\alpha)=\bigl\{j\in I_n;\ |W_j|\le c(\alpha)^{1/2}\bigr\}. \tag{4.6} \]
Let
\[ \widetilde\Omega_n=\bigl\{\bigl||W|_{(\widehat a_n)}-c(\alpha)^{1/2}\bigr|<\check{C}n^{-\gamma_6}\bigr\}, \tag{4.7} \]
where $\check{C}$ is a positive constant. See Lemma 4 of Inatsugu and Yoshida [8] for a proof of the following lemma.

Lemma 4.6.
Let $\check{C}>0$ and $\gamma_6>0$. Then
(i) For $p\ge1$,
\[ \|\widetilde{\mathcal{D}}_n\|_p\le n^{\gamma_4}\Bigl\|\max_{j'=1,\dots,n}|U_{j'}|\;n^{-1}\sum_{j=1}^n1_{\{||W_j|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\}}\Bigr\|_p+n^{\gamma_4}\Bigl\|1_{\widetilde\Omega_n^c}\max_{j'=1,\dots,n}|U_{j'}|\Bigr\|_p. \]
(ii) For $p_1>p\ge1$,
\[ \|\widetilde{\mathcal{D}}_n\|_p\le n^{\gamma_4}\Bigl\|\max_{j=1,\dots,n}|U_j|\Bigr\|_p\,P\bigl[||W_1|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\bigr]+n^{\gamma_4}\Bigl\|\max_{j=1,\dots,n}|U_j|\Bigr\|_{pp_1(p_1-p)^{-1}}\times\Bigl\|n^{-1}\sum_{j=1}^n\Bigl(1_{\{||W_j|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\}}-P\bigl[||W_1|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\bigr]\Bigr)\Bigr\|_{p_1}+n^{\gamma_4}P[\widetilde\Omega_n^c]^{1/p}\Bigl\|\max_{j=1,\dots,n}|U_j|\Bigr\|_{pp_1(p_1-p)^{-1}}. \]

Lemma 4.7. If the constant $\check{C}$ in (4.7) is sufficiently large, then $P[\widetilde\Omega_n^c]=O(n^{-L})$ as $n\to\infty$ for any $L>0$.

Now we shall investigate the rate of convergence of $V_n(\alpha)$ for a constant $\alpha\in(0,1)$. Under [G1] and [G3],
\[ \Bigl\|\sum_{j\in\mathbb{L}_n}|\Delta_jX|^2K_{n,j}\Bigr\|_p\le n^{-1/2}\bigl\|\#\mathbb{L}_n\bigr\|_p=O(n^{-1/2+\xi}). \tag{4.8} \]
Let
\[ \widehat V_n(\alpha)=\sum_{j\in\widehat{\mathcal{J}}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^2K_{n,j}. \]

Lemma 4.8.
Suppose that [G1], [G2] and [G3] are fulfilled. Suppose that $\xi<1/2$. Let $\gamma<\min\{\gamma_0,\,1/2-\xi\}$ and $\kappa_n=O(n^{1/2})$. Then $\sup_{n\in\mathbb{N}}n^{\gamma}\|V_n(\alpha)-\widehat V_n(\alpha)\|_p<\infty$.

Proof.
By (4.8), we obtain
\[ \|V_n(\alpha)-\widehat V_n(\alpha)\|_p=\Bigl\|\sum_{j\in\mathcal{J}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}-\sum_{j\in\widehat{\mathcal{J}}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\Bigr\|_p+O(n^{-1/2+\xi}). \]
By Lemmas 4.5 and 4.3,
\[ n^{\gamma_4}\|V_n(\alpha)-\widehat V_n(\alpha)\|_p\lesssim\bigl(c^*n^{\gamma_4-\gamma_1}+n^{\gamma_4-1}\|\#\mathcal{L}_n\|_{p_1}\bigr)\Bigl(n^{\gamma_5}+n\max_{j=1,\dots,n}\bigl\|n|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\,1_{\{n|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}>n^{\gamma_5}\}}\bigr\|_{pp_1(p_1-p)^{-1}}\Bigr)+n^{\gamma_4}\Bigl\|\max_{j=1,\dots,n}\bigl(n|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\bigr)1_{\Omega_n^c}\Bigr\|_p+O(n^{\gamma_4-1/2+\xi})\lesssim c^*n^{\gamma_4+\gamma_5-\gamma_1}+n^{-1/2+\gamma_4+\gamma_5+\xi}+n^{\gamma_4-1/2+\xi}, \]
where $1<p<p_1$. The parameters should satisfy
\[ 0<\gamma_1<\gamma_2<\gamma_3<\gamma_0\wedge2^{-1},\qquad\gamma_2<1/2-\xi,\qquad\gamma_5>0. \]
We make $\gamma_5\downarrow0$ and $\gamma_4<\gamma_1<\gamma_2<\gamma_3$ with $\gamma_1\uparrow\min\{\gamma_0,\,1/2-\xi\}$ to obtain the desired exponent.

For $\widetilde{\mathcal{J}}_n(\alpha)$ defined in (4.6), let
\[ \widetilde V_n(\alpha)=\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^2K_{n,j}. \]

Lemma 4.9.
Suppose that [G1] and [G3] are fulfilled. Suppose that $\xi<1/2$. Let $\gamma<1/2-\xi$. Then $\sup_{n\in\mathbb{N}}n^{\gamma}\|\widehat V_n(\alpha)-\widetilde V_n(\alpha)\|_p<\infty$.

Proof.
By (4.8), we obtain
\[ \|\widehat V_n(\alpha)-\widetilde V_n(\alpha)\|_p=\Bigl\|\sum_{j\in\widehat{\mathcal{J}}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}-\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\Bigr\|_p+O(n^{-1/2+\xi}). \]
By Lemma 4.6, we obtain
\[ n^{\gamma}\|\widehat V_n(\alpha)-\widetilde V_n(\alpha)\|_p\lesssim n^{\gamma}\Bigl\|\max_{j=1,\dots,n}\bigl(n|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\bigr)\Bigr\|_p\,P\bigl[||W_1|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\bigr]+n^{\gamma}\Bigl\|\max_{j=1,\dots,n}\bigl(n|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\bigr)\Bigr\|_{pp_1(p_1-p)^{-1}}\times\Bigl\|n^{-1}\sum_{j=1}^n\Bigl(1_{\{||W_j|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\}}-P\bigl[||W_1|-c(\alpha)^{1/2}|<\check{C}n^{-\gamma_6}\bigr]\Bigr)\Bigr\|_{p_1}+n^{\gamma}P[\widetilde\Omega_n^c]^{1/p}\Bigl\|\max_{j=1,\dots,n}\bigl(n|\Delta_jX|^21_{\{\Delta_jN=0\}}K_{n,j}\bigr)\Bigr\|_{pp_1(p_1-p)^{-1}}+O(n^{\gamma-1/2+\xi})\lesssim n^{\gamma+\gamma_7-\gamma_6}+n^{\gamma+\gamma_7-1/2-\gamma_6}+n^{\gamma+\gamma_7-1/2+\xi}, \]
where $1\le p<p_1$ and $\gamma_7$ is an arbitrary positive number. Lemma 4.7 was used in the above derivation. Making $\gamma_7\downarrow0$ and $\gamma_6\uparrow1/2$, under $\gamma<1/2-\xi$ we conclude the proof.

Lemma 4.10.
Suppose that [G1] and [G3] are satisfied. Suppose that $\xi<1/2$. Then
\[ \Bigl\|\widetilde V_n(\alpha)-\int_0^T\sigma_t^2\,dt\Bigr\|_p=O\bigl(n^{-1/2+\xi}\bigr) \]
as $n\to\infty$ for every $p>1$.

Proof. Recall that $\mathbb{L}_n$ is defined by (4.1). We have
\[ \sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}|\Delta_jX|^2K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}=\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\Bigl(\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t+\int_{t_{j-1}}^{t_j}b_t\,dt\Bigr)^2K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}=\Phi^{(4.1)}_n+\Phi^{(4.2)}_n+\Phi^{(4.3)}_n \tag{4.9} \]
where
\[ \Phi^{(4.1)}_n=\sum_{j\in I_n}\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}, \tag{4.10} \]
\[ \Phi^{(4.2)}_n=\sum_{j\in I_n}\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}\bigl(K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}-1\bigr)+2\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}\bigl(\widetilde\sigma_s-\widetilde\sigma_{t_{j-1}}\bigr)dw_s\,\sigma_t\,dw_t\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}+2\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}\sigma_{t_{j-1}}dw_s\,\bigl(\widetilde\sigma_t-\widetilde\sigma_{t_{j-1}}\bigr)dw_t\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}+2\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\widetilde\sigma_{t_{j-1}}\bigl(\widetilde\sigma_t-\widetilde\sigma_{t_{j-1}}\bigr)dt\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}+\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\Bigl(\int_{t_{j-1}}^{t_j}\bigl(\widetilde\sigma_t-\widetilde\sigma_{t_{j-1}}\bigr)dw_t\Bigr)^2K_{n,j}1_{\{j\in\mathbb{L}_n^c\}} \tag{4.11} \]
and
\[ \Phi^{(4.3)}_n=2\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}b_s\,ds\,\sigma_t\,dw_t\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}+2\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}\sigma_s\,dw_s\,b_t\,dt\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}+2\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}b_s\,ds\,b_t\,dt\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}. \tag{4.12} \]
Let $\epsilon_0>\xi$. For $p>1$ and $\epsilon'>0$,
\[ \Bigl\|\sum_{j\in I_n}\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}\bigl(K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}-1\bigr)\Bigr\|_p\le\Bigl\|\sum_{j\in I_n}\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}1_{\{j\in\mathbb{L}_n\}}\Bigr\|_p+\Bigl\|\sum_{j\in I_n}\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}\bigl(K_{n,j}-1\bigr)1_{\{j\in\mathbb{L}_n^c\}}\Bigr\|_p \tag{4.13} \]
\[ \le\Bigl\|\max_{j\in I_n}\Bigl(\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}K_{n,j}\Bigr)\#\mathbb{L}_n\Bigr\|_p+O(n^{-L})\le\bigl\|\#\mathbb{L}_n1_{\{\#\mathbb{L}_n>n^{\epsilon_0}\}}\bigr\|_{2p}\Bigl\|\max_{j\in I_n}\Bigl(\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}K_{n,j}\Bigr)\Bigr\|_{2p}+n^{\epsilon_0}\Bigl\|\max_{j\in I_n}\Bigl(\sigma_{t_{j-1}}^2hW_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}K_{n,j}\Bigr)\Bigr\|_p+O(n^{-L})\lesssim n^{-\frac{L(\epsilon_0-\xi)}{2p}+\xi}\,n^{-1+\epsilon'}+n^{-1+\epsilon_0+\epsilon'}+O(n^{-L})\lesssim n^{-1+\epsilon_0+\epsilon'} \]
since $\epsilon_0>\xi$, where $L$ is a sufficiently large number chosen suitably depending on $(\epsilon_0,\xi,p,\epsilon')$. From the estimate (4.13), we have
\[ \|\Phi^{(4.2)}_n\|_p\lesssim h^{1/2}+h^{1-\epsilon_0-\epsilon'}\lesssim h^{1/2} \tag{4.14} \]
if letting $\epsilon_0\downarrow\xi<1/2$ and $\epsilon'\downarrow0$. Next,
\[ \Bigl\|\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}b_s\,ds\,\sigma_t\,dw_t\,K_{n,j}1_{\{j\in\mathbb{L}_n^c\}}\Bigr\|_p\le\sum_{j\in I_n}\Bigl\|\int_{t_{j-1}}^{t_j}\int_{t_{j-1}}^{t}b_s\,ds\,\sigma_t\,dw_t\Bigr\|_p\lesssim\sum_{j\in I_n}\sqrt{\int_{t_{j-1}}^{t_j}\Bigl\|\Bigl(\int_{t_{j-1}}^{t}b_s\,ds\Bigr)\sigma_t\Bigr\|_p^2\,dt}\lesssim h^{1/2}. \]
From this and similar estimates, we have
\[ \|\Phi^{(4.3)}_n\|_p\lesssim h^{1/2} \tag{4.15} \]
as $n\to\infty$ for every $p>1$. Moreover,
\[ \Bigl\|\Phi^{(4.1)}_n-\sum_{j\in I_n}\sigma_{t_{j-1}}^2q(\alpha)h\Bigr\|_p\le\Bigl\|h\sum_{j\in I_n}\sigma_{t_{j-1}}^2\bigl(W_j^21_{\{|W_j|\le c(\alpha)^{1/2}\}}-q(\alpha)\bigr)\Bigr\|_p=O(h^{1/2}) \tag{4.16} \]
for every $p>1$, and
\[ \sup_{j\in I_n}\sup_{t\in[t_{j-1},t_j]}\bigl\|1_{\{j\in\mathcal{L}_n^c\}}\bigl(\sigma_t^2-\sigma_{t_{j-1}}^2\bigr)\bigr\|_p\le\sup_{j\in I_n}\sup_{t\in[t_{j-1},t_j]}\bigl\|\widetilde\sigma_t^2-\widetilde\sigma_{t_{j-1}}^2\bigr\|_p\lesssim h^{1/2} \tag{4.17} \]
for every $p>1$. In view of (4.17), we deduce that
\[ \Bigl\|\sum_{j\in I_n}\sigma_{t_{j-1}}^2h-\int_0^T\sigma_t^2\,dt\Bigr\|_p\le\Bigl\|\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\bigl|\widetilde\sigma_t^2-\widetilde\sigma_{t_{j-1}}^2\bigr|dt\Bigr\|_p+\Bigl\|\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\bigl(\sigma_t^2-\sigma_{t_{j-1}}^2\bigr)dt\,1_{\{j\in\mathcal{L}_n\}}\Bigr\|_p\le O(h^{1/2})+\Bigl\|\max_{j\in I_n}\Bigl\{\int_{t_{j-1}}^{t_j}\bigl(|\sigma_t|^2+|\sigma_{t_{j-1}}|^2\bigr)dt\Bigr\}\#\mathcal{L}_n\Bigr\|_p=O(h^{1/2}), \tag{4.18} \]
following the passage from (4.13) to (4.14). Easily,
\[ \Bigl\|\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}|\Delta_jX|^2K_{n,j}1_{\{j\in\mathbb{L}_n\}}\Bigr\|_p\le\bigl\|n^{-1/2}\#\mathbb{L}_n\bigr\|_p\lesssim n^{-1/2+\xi}. \tag{4.19} \]
Combining (4.19), (4.9), (4.14), (4.15), (4.16) and (4.18), we obtain
\[ \Bigl\|\widetilde V_n(\alpha)-\int_0^T\sigma_t^2\,dt\Bigr\|_p=O\bigl(n^{-1/2+\xi}\bigr) \]
as $n\to\infty$ for every $p>1$.

Theorem 4.11.
Suppose that [G1], [G2] and [G3] are fulfilled. Suppose that $\xi<1/2$ and $\kappa_n=O(n^{1/2})$. Let $\alpha\in(0,1)$ and $\beta<\min\{\gamma_0,\,1/2-\xi\}$. Then
\[ \|V_n(\alpha)-\Theta\|_p=O(n^{-\beta}) \]
as $n\to\infty$ for every $p>1$.

Proof. Use Lemmas 4.8, 4.9 and 4.10.

4.2 Rate of convergence of the WGRV with a fixed $\alpha$

Next, we discuss the convergence of the WGRV with a fixed $\alpha$. Recall that the WGRV is defined as
\[ W_n(\alpha)=\sum_{j\in I_n}w(\alpha)^{-1}\bigl\{|\Delta_jX|\wedge\bigl(S_{n,j-}^{1/2}V_{(s_n(\alpha))}\bigr)\bigr\}^2K_{n,j}. \]
The WGRV has entirely the same rate of convergence as the GRV.
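As a minimal numerical sketch (not the authors' implementation) of how the fixed-$\alpha$ GRV and WGRV are computed, the following simplification omits the auxiliary truncation $K_{n,j}$, takes the constant $c$ of [G2] to be $1$, and uses a constant pre-estimate `spot_var` in place of $S_{n,j-}$; all variable names are our own.

```python
import numpy as np
from statistics import NormalDist

def grv_wgrv(x, alpha, spot_var, T=1.0):
    """Global realized volatility (GRV) and Winsorized GRV (WGRV), fixed alpha.

    Increments are studentized by the pre-estimate `spot_var`; the upper-alpha
    fraction of studentized increments is discarded (GRV) or capped at the
    threshold (WGRV), and the bias from trimming/capping a Gaussian sample is
    corrected by q(alpha) resp. w(alpha) = q(alpha) + alpha*c(alpha).
    """
    dx = np.diff(x)
    n = len(dx)
    h = T / n
    u2 = dx**2 / (spot_var * h)              # squared studentized increments
    s = int(np.ceil((1 - alpha) * n))        # rank of the threshold order statistic
    thr = np.sort(u2)[s - 1]                 # empirical (1-alpha)-quantile of u2
    a = NormalDist().inv_cdf(1 - alpha / 2)  # P(|Z| <= a) = 1 - alpha, c(alpha) = a^2
    phi_a = np.exp(-a * a / 2) / np.sqrt(2 * np.pi)
    q = (1 - alpha) - 2 * a * phi_a          # q(alpha) = E[Z^2; |Z| <= a]
    w = q + alpha * a * a                    # w(alpha) = q(alpha) + alpha*c(alpha)
    grv = (dx**2)[u2 <= thr].sum() / q
    wgrv = np.minimum(dx**2, spot_var * h * thr).sum() / w
    return grv, wgrv
```

On a simulated diffusion with a few added jumps, both estimates stay close to the integrated volatility while the raw realized volatility is inflated by the jumps.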
Theorem 4.12. Suppose that [G1], [G2], and [G3] are fulfilled. Suppose that $\xi<1/2$. Let $\alpha\in(0,1)$ and $\beta<\min\{\gamma_0,\,1/2-\xi\}$. Moreover, assume that $\kappa_n=O(n^{1/2})$. Then $\|W_n(\alpha)-\Theta\|_p=O(n^{-\beta})$ as $n\to\infty$ for every $p>1$.

Proof. Decompose $W_n(\alpha)$ as
\[ W_n(\alpha)=\sum_{j\in\mathcal{J}_n(\alpha)}w(\alpha)^{-1}|\Delta_jX|^2K_{n,j}+\sum_{j\in\mathcal{J}_n(\alpha)^c}w(\alpha)^{-1}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}=\frac{q(\alpha)}{w(\alpha)}V_n(\alpha)+\sum_{j\in\mathcal{J}_n(\alpha)^c}w(\alpha)^{-1}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}. \]
Note that $w(\alpha)=q(\alpha)+\alpha c(\alpha)$. Hence, it suffices to show that
\[ \Bigl\|\sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}-\alpha c(\alpha)\Theta\Bigr\|_p=O(n^{-\beta}) \]
as $n\to\infty$ for every $p>1$. Decompose the left-hand side as
\[ \sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}-\alpha c(\alpha)\Theta=\Bigl(\sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}1_{\{j\in\mathcal{L}_n^c\}}-\alpha c(\alpha)\Theta\Bigr)+\sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}1_{\{j\in\mathbb{L}_n\cap\mathcal{L}_n\}}+\sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}1_{\{j\in\mathbb{L}_n^c\cap\mathcal{L}_n\}}=:A_1+A_2+A_3. \]
Since $S_{n,j-}V_{(s_n(\alpha))}^2K_{n,j}\le|\Delta_jX|^2K_{n,j}\le n^{-1/2}$ for $j\in\mathcal{J}_n(\alpha)^c$, we have $\|A_2\|_p\lesssim n^{-1/2+\xi}$. As for $A_3$, note that $\#\mathcal{L}_n\lesssim\#\mathbb{L}_n\times\bar\kappa_n=O(n^{\xi+1/2})$ and that $\Delta_jX=\Delta_j\widetilde X$ for $j\in\mathbb{L}_n^c$. Hence we have
\[ \|A_3\|_p\le\Bigl\|\max_{j\in I_n}|\Delta_j\widetilde X|^2\,\#\bigl(\mathbb{L}_n^c\cap\mathcal{L}_n\bigr)\Bigr\|_p\lesssim n^{-1/2+\xi+\epsilon}, \]
where $\epsilon$ is an arbitrarily small positive number. As for $A_1$, we can set $c=1$ in the condition [G2], and
\[ \|A_1\|_p\le\Bigl\|\bigl(h^{-1}V_{(s_n(\alpha))}^2-c(\alpha)\bigr)h\sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}K_{n,j}1_{\{j\in\mathcal{L}_n^c\}}\Bigr\|_p+c(\alpha)\Bigl\|h\sum_{j\in\mathcal{J}_n(\alpha)^c}\bigl(S_{n,j-}-\sigma_{t_{j-1}}^2\bigr)1_{\{j\in\mathcal{L}_n^c\}}\Bigr\|_p+c(\alpha)\Bigl\|h\sum_{j\in\mathcal{J}_n(\alpha)^c}S_{n,j-}\bigl(1-K_{n,j}\bigr)1_{\{j\in\mathcal{L}_n^c\}}\Bigr\|_p+c(\alpha)\Bigl\|h\sum_{j\in\mathcal{J}_n(\alpha)^c}\sigma_{t_{j-1}}^21_{\{j\in\mathcal{L}_n^c\}}-\alpha\Theta\Bigr\|_p=:B_1+B_2+B_3+B_4. \]
By condition [G2], $B_2=O(n^{-\gamma_0})$. As for $B_3$, with the estimate $\|1_{\{j\in\mathcal{L}_n^c\}}(1-K_{n,j})\|_p\le P[|\Delta_j\widetilde X|>n^{-1/4}]^{1/p}=O(n^{-L})$ (for all $p>1$ and $L>0$) and the Cauchy–Schwarz inequality, we have
\[ B_3\le c(\alpha)\,h\sum_{j\in I_n}\bigl\|1_{\{j\in\mathcal{L}_n^c\}}S_{n,j-}\bigr\|_{2p}\bigl\|1_{\{j\in\mathcal{L}_n^c\}}(1-K_{n,j})\bigr\|_{2p}=O(n^{-L}). \]
For $B_4$, we use the following decomposition:
\[ h\sum_{j\in\mathcal{J}_n(\alpha)^c}\sigma_{t_{j-1}}^21_{\{j\in\mathcal{L}_n^c\}}-\alpha\Theta=\Bigl(h\sum_{j\in I_n}\sigma_{t_{j-1}}^2-\Theta\Bigr)-h\sum_{j\in I_n}\sigma_{t_{j-1}}^21_{\{j\in\mathcal{L}_n\}}+h\sum_{j\in\mathcal{J}_n(\alpha)}\sigma_{t_{j-1}}^21_{\{j\in\mathcal{L}_n\}}+\Bigl((1-\alpha)\Theta-h\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\sigma_{t_{j-1}}^2\Bigr)+h\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\sigma_{t_{j-1}}^2-h\sum_{j\in\mathcal{J}_n(\alpha)}\sigma_{t_{j-1}}^2. \]
Hence, with the aid of Lemmas 4.5 and 4.6 and the estimate $\|\#\mathcal{L}_n\|_p\lesssim n^{\xi+1/2}$, we have
\[ \Bigl\|h\sum_{j\in\mathcal{J}_n(\alpha)^c}\sigma_{t_{j-1}}^21_{\{j\in\mathcal{L}_n^c\}}-\alpha\Theta\Bigr\|_p\lesssim\Bigl\|h\sum_{j\in I_n}\sigma_{t_{j-1}}^2-\Theta\Bigr\|_p+\Bigl\|(1-\alpha)\Theta-h\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\sigma_{t_{j-1}}^2\Bigr\|_p+O(n^{-\beta}) \tag{4.20} \]
since $\beta<1/2-\xi$. The first term on the right-hand side of the above inequality is $O(n^{-1/2})$ by (4.18). As for the second term on the right-hand side of (4.20),
\[ \Bigl\|(1-\alpha)\Theta-h\sum_{j\in\widetilde{\mathcal{J}}_n(\alpha)}\sigma_{t_{j-1}}^2\Bigr\|_p=\Bigl\|(1-\alpha)\Theta-h\sum_{j\in I_n}\sigma_{t_{j-1}}^21_{\{|W_j|\le c(\alpha)^{1/2}\}}\Bigr\|_p\le\Bigl\|h\sum_{j\in I_n}\sigma_{t_{j-1}}^2\Bigl(1_{\{|W_j|\le c(\alpha)^{1/2}\}}-P\bigl[|W_j|\le c(\alpha)^{1/2}\bigr]\Bigr)\Bigr\|_p+(1-\alpha)\Bigl\|h\sum_{j\in I_n}\sigma_{t_{j-1}}^2-\Theta\Bigr\|_p=O(n^{-1/2}) \]
since $P[|W_j|\le c(\alpha)^{1/2}]=1-\alpha$. Hence we have $B_4=O(n^{-1/2+\xi})$. Finally, for $B_1$, it suffices to show that
\[ P\bigl[\bigl|h^{-1/2}V_{(s_n(\alpha))}-c(\alpha)^{1/2}\bigr|>n^{-\beta}\bigr]=O(n^{-L}) \tag{4.21} \]
as $n\to\infty$ for every $L>0$, under $\beta<\min\{\gamma_0,\,1/2-\xi\}$.
Let
\[ A_{n,j}=\bigl\{h^{-1/2}V_j<c(\alpha)^{1/2}-n^{-\beta}\bigr\},\qquad V_{n,j}=1_{\{|W_j|\le c(\alpha)^{1/2}-n^{-\beta}+2^{-1}n^{-\gamma_3}\}} \]
and
\[ \mu_n=(1-\alpha)n-1-n^{1/2+\xi+\epsilon}-nE[V_{n,j}] \]
for $\epsilon>0$. Then
\[ P\bigl[h^{-1/2}V_{(s_n(\alpha))}-c(\alpha)^{1/2}<-n^{-\beta}\bigr]\le P\Bigl[\sum_{j\in I_n}1_{A_{n,j}}\ge(1-\alpha)n-1\Bigr]=P\Bigl[\sum_{j\in I_n}1_{A_{n,j}\cap\{j\in\mathcal{L}_n^c\}}+\sum_{j\in I_n}1_{A_{n,j}\cap\{j\in\mathcal{L}_n\}}\ge(1-\alpha)n-1\Bigr]\le P\Bigl[\sum_{j\in I_n}1_{A_{n,j}\cap\{j\in\mathcal{L}_n^c\}}+\#\mathcal{L}_n\ge(1-\alpha)n-1\Bigr]\le P\Bigl[\sum_{j\in I_n}V_{n,j}\ge(1-\alpha)n-1-n^{1/2+\xi+\epsilon}\Bigr]+P\bigl[\#\mathcal{L}_n>n^{1/2+\xi+\epsilon}\bigr]+P[\Omega_n^c]\le P\Bigl[\sum_{j\in I_n}\bigl(V_{n,j}-E[V_{n,j}]\bigr)\ge\mu_n\Bigr]+P\bigl[\#\mathcal{L}_n>n^{1/2+\xi+\epsilon}\bigr]+P[\Omega_n^c]. \]
We see
\[ \mu_n\sim(1-\alpha)n-1-n^{1/2+\xi+\epsilon}-n\bigl\{(1-\alpha)-c_*\bigl(n^{-\beta}-2^{-1}n^{-\gamma_3}\bigr)\bigr\}\ge c_*'n^{1-\beta}, \]
where $c_*$ and $c_*'$ are positive constants, if we take a sufficiently small $\epsilon$ and $\gamma_3\in(\beta,\gamma_0)$, thanks to $\beta<\min\{\gamma_0,\,1/2-\xi\}$. Since $n^{-1/2}\mu_n\ge2^{-1}c_*'n^{1/2-\beta}$, from $\beta<1/2$, we obtain
\[ P\Bigl[n^{-1/2}\sum_{j\in I_n}\bigl(V_{n,j}-E[V_{n,j}]\bigr)\ge n^{-1/2}\mu_n\Bigr]=O(n^{-L}) \]
for every $L>0$. Therefore,
\[ P\bigl[h^{-1/2}V_{(s_n(\alpha))}-c(\alpha)^{1/2}<-n^{-\beta}\bigr]=O(n^{-L}) \]
as $n\to\infty$ for every $L>0$. Similarly, we can obtain the estimate $P[h^{-1/2}V_{(s_n(\alpha))}-c(\alpha)^{1/2}>n^{-\beta}]=O(n^{-L})$ to show (4.21), which concludes the proof.

5 Global realized volatilities with a moving threshold

In this section, we will consider a situation where the intensity of jumps is moderate. Then it is possible to keep the cut-off ratio of the data small, and to get a precise estimate of the integrated volatility. Let
\[ \delta_1\in(0,1/2)\quad\text{and}\quad\delta_2\in(0,1/4). \tag{5.1} \]
In the context of the global jump filtering, given a collection $(S_{n,j-})_{j\in I_n}$ of nonnegative random variables, we consider the index set $\mathcal{J}_n$ given by
\[ \mathcal{J}_n=\bigl\{j\in I_n;\ V_j<V_{(s_n)}\bigr\} \tag{5.2} \]
where
\[ V_j=\bigl|(S_{n,j-}^{-})^{1/2}\Delta_jX\bigr| \tag{5.3} \]
and
\[ s_n=n-\lfloor Bn^{\delta_1}\rfloor \tag{5.4} \]
for a positive constant $B$. Here $x^{-}=1_{\{x\ne0\}}x^{-1}$ for $x\in\mathbb{R}$.

Remark 5.1.
It is natural to set a spot volatility estimator of $\sigma_{t_{j-1}}^2$ in $S_{n,j-}$, though it is not definitively necessary (Remark 5.2). In Section 3, we discussed some constructions of $S_{n,j-}$. In the terminology of Section 2, the cut-off rate by $\mathcal{J}_n$ is $\alpha_n=\lfloor Bn^{\delta_1}\rfloor/n$, $\mathcal{J}_n=\mathcal{J}_n(\alpha_n)$, and $\alpha_n$ goes to $0$ as $n$ tends to $\infty$. We note that the definition of $V_j$ is different from that in (2.1).

For estimation of $\Theta$ of (1.2), we consider the global realized volatility (GRV) with a moving threshold
\[ G_n=\sum_{j\in\mathcal{J}_n}q_n^{-1}|\Delta_jX|^2H_{n,j} \tag{5.5} \]
where $(q_n)_{n\in\mathbb{N}}$ is a sequence of positive numbers, and
\[ H_{n,j}=1_{\{|\Delta_jX|\le B_1n^{-1/4-\delta_2}\}} \tag{5.6} \]
for a positive constant $B_1$.

[G1$^o$] For every $p>1$, $\sup_{t\in[0,T]}\|\sigma_t\|_p+\|b_t\|_p<\infty$.

[G2$^o$] $q_n>0$ ($n\in\mathbb{N}$) and $q_n-1=o(n^{-1/2})$ as $n\to\infty$.

Remark 5.2.
Theoretically, we may set S n,j − = 1. Condition [ G o ] is satisfied with q n = 1.Asymptotically the choice ( S n,j − , q n ) = (1 ,
1) is sufficient and valid. However, in practice, anatural choice is a pair S n,j − satisfying [ G
2] in Section 2 and q n = q ( α n ), where the function q is defined by (2.2) in Section 2.For the jump part J of the semimartingale X , we only assumeΛ n := { j ∈ I n ; ∆ j J = 0 } < ∞ a.s. for every n ∈ N , and the following estimate: [G3 o ] There exists a constant ξ ≥ k Λ n k p = O ( n ξ ) as n → ∞ for every p > Remark 5.3.
The diverging Λ n models high intensity of the jump part for a fixed n in practice.Mathematically, we are assuming that the process σ is independent of n . This makes sensenaturally in particular when the jumps are exogenous. It is sufficient for the limit theorem byusing the c`adl`ag property of σ . On the other hand, though details are omitted, we can treat σ depending on n if uniform L ∞ – -continuity of σ and uniformity in [ G o ] are satisfied.Define Γ by Γ = 2 T Z T σ t dt. Extend (Ω , F , P ) so that there is a standard normal random variable ζ independent of F onthe extension. The F -stable convergence is denoted by → d s . We obtain asymptotic mixednormality of the global realized volatility G n with a moving threshold. Theorem 5.4.
Suppose that [G1$^o$], [G2$^o$] and [G3$^o$] are satisfied. Suppose that $\xi<\delta_1\wedge2\delta_2$. Then
\[ n^{1/2}\bigl(G_n-\Theta\bigr)\to^{d_s}\Gamma^{1/2}\zeta \]
as $n\to\infty$.

Theorem 5.4 follows from Theorem 5.9, which is presented in a slightly more general setting.

5.2 The WGRV with a moving threshold
Suppose that a collection $(S_{n,j-})_{j\in I_n}$ ($n\in\mathbb{N}$) of positive random variables is given. Consider constants $\delta_1$ and $\delta_2$ satisfying (5.1), and $V_j$ and $s_n$ given by (5.3) and (5.4), respectively. We define the Winsorized global realized volatility (WGRV) with a moving threshold by
\[ W_n=\sum_{j\in I_n}q_n^{-1}\bigl\{|\Delta_jX|\wedge\bigl(S_{n,j-}^{1/2}V_{(s_n)}\bigr)\bigr\}^2H_{n,j}, \tag{5.7} \]
where $(q_n)_{n\in\mathbb{N}}$ is a sequence of positive numbers. The error of the WGRV has the same limit as that of the GRV $G_n$.

Theorem 5.5. Suppose that [G1$^o$], [G2$^o$] and [G3$^o$] are satisfied. Suppose that $\xi<\delta_1\wedge2\delta_2$. Then
\[ n^{1/2}\bigl(W_n-\Theta\bigr)\to^{d_s}\Gamma^{1/2}\zeta \]
as $n\to\infty$, where $\zeta$ is a standard Gaussian random variable independent of $\mathcal{F}$.

Proof. Let $\widetilde X=X-J$. It suffices to show that $n^{1/2}\|W_n-G_n\|_p\to0$ as $n\to\infty$ for every $p>1$. From (5.7),
\[ W_n-G_n=\sum_{j\in\mathcal{J}_n^c}q_n^{-1}S_{n,j-}V_{(s_n)}^2H_{n,j}. \]
We have
\[ \Bigl\|\sum_{j\in\mathcal{J}_n^c}S_{n,j-}V_{(s_n)}^2H_{n,j}\Bigr\|_p\le\Bigl\|\sum_{j\in\mathcal{J}_n^c}|\Delta_jX|^21_{\{j\in\mathbb{L}_n\}}H_{n,j}\Bigr\|_p+\Bigl\|\sum_{j\in\mathcal{J}_n^c}|\Delta_j\widetilde X|^21_{\{j\in\mathbb{L}_n^c\}}H_{n,j}\Bigr\|_p\lesssim n^{-1/2-2\delta_2+\xi}+n^{-1+\epsilon+\delta_1} \]
as $n\to\infty$ for any $\epsilon>0$. Since $\delta_1<1/2$, $\xi<2\delta_2$ and $\epsilon$ is arbitrary, we obtain the desired convergence from Theorem 5.4.

We are now about to establish asymptotic mixed normality of the integrated volatility estimator having a moving threshold. We will solve this problem by showing a stability of estimation under elimination of a certain portion of the data. In other words, this is a question of stability under missing data. In what follows, we will consider the variable $V_n$ defined by
\[ V_n=\sum_{j\in\mathcal{M}_n}q_n^{-1}|\Delta_jX|^2H_{n,j} \tag{5.8} \]
where $(q_n)_{n\in\mathbb{N}}$ is a sequence of positive numbers, $H_{n,j}$ is given in (5.6), and $\mathcal{M}_n$ is an abstract random index set in $I_n$. It is not necessary to specify $\mathcal{M}_n$ like $\mathcal{J}_n$ by (5.2) and (5.3). Let
\[ V_n^{\dagger}=\sum_{j\in\mathcal{M}_n}q_n^{-1}\bigl|\Delta_j\widetilde X\bigr|^2H_{n,j}. \]
Recall $\widetilde X=X-J$.

[G2$'$] (i) For every $n\in\mathbb{N}$, $\mathcal{M}_n$ is a random set in $I_n$ such that $\#(I_n\setminus\mathcal{M}_n)\le B_0n^{\delta_1}$ ($n\in\mathbb{N}$) for some positive constant $B_0$.
(ii) $q_n>0$ ($n\in\mathbb{N}$) and $q_n-1=o(n^{-1/2})$ as $n\to\infty$.

Lemma 5.6.
Suppose that [G1$^o$], [G2$'$] and [G3$^o$] are satisfied. Suppose that $\xi<2\delta_2$. Then $n^{1/2}\|V_n-V_n^{\dagger}\|_p\to0$ as $n\to\infty$ for every $p>1$.

Proof. We have the estimate
\[ n^{1/2}\|V_n-V_n^{\dagger}\|_p\le q_n^{-1}\Phi^{(5.1)}_n+q_n^{-1}\Phi^{(5.2)}_n, \tag{5.9} \]
where
\[ \Phi^{(5.1)}_n=n^{1/2}\Bigl\|\sum_{j\in\mathcal{M}_n}\bigl|\Delta_j\widetilde X\,\Delta_jJ\bigr|H_{n,j}\Bigr\|_p \tag{5.10} \]
and
\[ \Phi^{(5.2)}_n=n^{1/2}\Bigl\|\sum_{j\in\mathcal{M}_n}\bigl|\Delta_jJ\bigr|^2H_{n,j}\Bigr\|_p \tag{5.11} \]
for $p>1$. By using the inequality $|\Delta_jJ|H_{n,j}\le(|\Delta_j\widetilde X|+B_1n^{-1/4-\delta_2})1_{\{\Delta_jJ\ne0\}}$, we obtain
\[ \Phi^{(5.1)}_n\le n^{1/2}\Bigl\|\max_{j\in I_n}\bigl\{|\Delta_j\widetilde X|\bigl(|\Delta_j\widetilde X|+B_1n^{-1/4-\delta_2}\bigr)\bigr\}\Bigr\|_{2p}\bigl\|\Lambda_n\bigr\|_{2p}\lesssim n^{-1/4-\delta_2+\xi+\epsilon} \]
as $n\to\infty$ for any $\epsilon>0$ and $p>1$. Therefore,
\[ \Phi^{(5.1)}_n\to0 \tag{5.12} \]
as $n\to\infty$ for every $p>1$, since $\xi<2\delta_2\le1/4+\delta_2$. Similarly,
\[ \Phi^{(5.2)}_n\lesssim n^{1/2}\Bigl\|\max_{j\in I_n}\bigl(|\Delta_j\widetilde X|+B_1n^{-1/4-\delta_2}\bigr)^2\Bigr\|_{2p}\bigl\|\Lambda_n\bigr\|_{2p}\lesssim n^{-2\delta_2+\xi+\epsilon} \]
as $n\to\infty$ for any $\epsilon>0$ and $p>1$, where we used $\delta_2<1/4$. In particular,
\[ \Phi^{(5.2)}_n\to0 \tag{5.13} \]
as $n\to\infty$ since $\xi<2\delta_2$. Now the proof is completed with (5.9), (5.12) and (5.13).

Define $\widetilde V_n$ by
\[ \widetilde V_n=\sum_{j\in I_n}\bigl|\Delta_j\widetilde X\bigr|^2. \]

Lemma 5.7. Suppose that $\xi<1/2$. Then $n^{1/2}\|V_n^{\dagger}-\widetilde V_n\|_p\to0$ as $n\to\infty$ for every $p>1$.

Proof. Recall that $\delta_1<1/2$ and $\delta_2<1/4$. Define $V_n^{\ddagger}$ by
\[ V_n^{\ddagger}=\sum_{j\in\mathcal{M}_n}q_n^{-1}\bigl|\Delta_j\widetilde X\bigr|^2. \]
Then
\[ n^{1/2}\|V_n^{\dagger}-V_n^{\ddagger}\|_p\lesssim n^{1/2}\Bigl\|\sum_{j\in\mathcal{M}_n}|\Delta_j\widetilde X|^2|H_{n,j}-1|\Bigr\|_p\le n^{1/2}\Bigl\|\sum_{j\in\mathcal{M}_n}|\Delta_j\widetilde X|^2|H_{n,j}-1|1_{\{\Delta_jJ\ne0\}}\Bigr\|_p+n^{1/2}\Bigl\|\sum_{j\in\mathcal{M}_n}|\Delta_j\widetilde X|^21_{\{|\Delta_j\widetilde X|>B_1n^{-1/4-\delta_2}\}}1_{\{\Delta_jJ=0\}}\Bigr\|_p\le n^{1/2}\Bigl\|\max_{j\in I_n}|\Delta_j\widetilde X|^2\,\Lambda_n\Bigr\|_p+O(n^{-L})\lesssim n^{-1/2+\epsilon+\xi} \]
for any positive number $\epsilon>0$. Here $L$ is an arbitrary positive number greater than $1/2$, and we used the inequality $\delta_2<1/4$ to obtain $O(n^{-L})$. Since $\xi<1/2$, we obtain
\[ n^{1/2}\|V_n^{\dagger}-V_n^{\ddagger}\|_p=o(1) \tag{5.14} \]
as $n\to\infty$ for every $p>1$. By $q_n-1=o(n^{-1/2})$ of [G2$'$] (ii), obviously,
\[ n^{1/2}\|V_n^{\ddagger}-\widetilde V_n\|_p\le n^{1/2}\Bigl\|\sum_{j\in I_n\setminus\mathcal{M}_n}|\Delta_j\widetilde X|^2\Bigr\|_p+o(1)\lesssim n^{-1/2+\epsilon+\delta_1}+o(1)=o(1) \tag{5.15} \]
as $n\to\infty$ for every $p>1$, since $\#(I_n\setminus\mathcal{M}_n)\lesssim n^{\delta_1}$ with $\delta_1<1/2$ by [G2$'$] (i) and (5.1), and
\[ \Bigl\|\max_{j\in I_n}|\Delta_j\widetilde X|^2\Bigr\|_p=O(n^{-1+\epsilon}) \]
for any $p>1$ and $\epsilon>0$. The proof ends with (5.14) and (5.15).

Lemma 5.8.
Suppose that $[G_1]$ is satisfied. Then $n^{1/2}\big(\widetilde{V}_n-\Theta\big)\to^{d_s}\Gamma^{1/2}\zeta$ as $n\to\infty$.

Proof. We have
$$ \widetilde{V}_n=\sum_{j\in I_n}\bigg(\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t+\int_{t_{j-1}}^{t_j}b_t\,dt\bigg)^2
=\Phi_n^{(1)}+\Phi_n^{(2)}+2\Phi_n^{(3)}+\Phi_n^{(4)} \tag{5.16} $$
where
$$ \Phi_n^{(1)}=2\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\bigg(\int_{t_{j-1}}^{t}\sigma_s\,dw_s\bigg)\sigma_t\,dw_t, \tag{5.17} $$
$$ \Phi_n^{(2)}=\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\sigma_t^2\,dt=\Theta, \tag{5.18} $$
$$ \Phi_n^{(3)}=\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t\int_{t_{j-1}}^{t_j}b_t\,dt, \tag{5.19} $$
and
$$ \Phi_n^{(4)}=\sum_{j\in I_n}\bigg(\int_{t_{j-1}}^{t_j}b_t\,dt\bigg)^2. \tag{5.20} $$
Since $b$ is a c\`adl\`ag process, for any $\epsilon>0$ there exists a number $\delta>0$ such that $P\big[w'(b,\delta)\ge\epsilon\big]<\epsilon$. Here $w'(b,\delta)$ is a modulus of continuity defined by
$$ w'(b,\delta)=\inf_{(s_i)\in S_\delta}\max_i\sup_{r_1,r_2\in[s_{i-1},s_i)}\big|b_{r_1}-b_{r_2}\big|, $$
where $S_\delta$ is the set of sequences $(s_i)$ such that $0=s_0<s_1<\cdots<s_v=T$ and $\min_{1\le i\le v}(s_i-s_{i-1})>\delta$. Let
$$ \dot\Phi_n^{(3)}=\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t\int_{t_{j-1}}^{t_j}\big(b_t-b_{t_{j-1}}\big)\,dt. \tag{5.21} $$
Write
$$ E_j=\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t,\qquad
V_j=n^{1/2}\bigg|\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t\bigg|\int_{t_{j-1}}^{t_j}\big(|b_t|+|b_{t_{j-1}}|\big)\,dt. $$
For $\omega\in\Omega$ such that $w'(b(\omega),\delta)<\epsilon$, there exists a sequence $(s_i)$ (depending on $\omega$) such that
$$ \max_i\sup_{r_1,r_2\in[s_{i-1},s_i)}\big|b_{r_1}(\omega)-b_{r_2}(\omega)\big|<\epsilon,\qquad
\min_{1\le i\le v}(s_i-s_{i-1})>\delta. $$
For $n>T/\delta$, every interval $[t_{j-1},t_j)$ ($j\in I_n$) includes at most one point among $(s_i)$; therefore the number of intervals $[t_{j-1},t_j)$ that include some $s_i$ is at most $T/\delta$. The increment of $b(\omega)$ in $[t_{j-1},t_j)$ is less than $\epsilon$ if $[t_{j-1},t_j)\cap\{s_i\}=\emptyset$. Thus, we have the inequality
$$ \big\|n^{1/2}\dot\Phi_n^{(3)}\big\|_p
\;\le\;\bigg\|\sum_{j\in I_n}n^{1/2}|E_j|\bigg\|_p\,\epsilon h
\;+\;\bigg\|\max_{j\in I_n}V_j\bigg\|_p\,\frac{T}{\delta}
\;+\;\bigg\|\sum_{j\in I_n}V_j\bigg\|_{2p}\,P\big[w'(b,\delta)\ge\epsilon\big]^{1/(2p)} $$
for every $p>1$. Therefore,
$$ \big\|n^{1/2}\dot\Phi_n^{(3)}\big\|_p
\le C\bigg[\epsilon+\bigg(n^{-1/2}+\sum_{j\in I_n}\big\|V_j\,1_{\{V_j>n^{-1/2}\}}\big\|_p\bigg)\frac{T}{\delta}+\epsilon^{1/(2p)}\bigg]
\le C'\big(\epsilon+n^{-1/2}+\epsilon^{1/(2p)}\big) $$
for all $n>T/\delta$, where $C$ and $C'$ are constants independent of $n$. Consequently,
$$ \lim_{n\to\infty}\big\|n^{1/2}\dot\Phi_n^{(3)}\big\|_p=0 \tag{5.22} $$
for every $p>1$. Moreover, for
$$ \ddot\Phi_n^{(3)}=\sum_{j\in I_n}\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t\int_{t_{j-1}}^{t_j}b_{t_{j-1}}\,dt
=\sum_{j\in I_n}h\,b_{t_{j-1}}\int_{t_{j-1}}^{t_j}\sigma_t\,dw_t, $$
we have
$$ \lim_{n\to\infty}\big\|n^{1/2}\ddot\Phi_n^{(3)}\big\|_p=0 \tag{5.23} $$
for every $p>1$, by orthogonality. From (5.22) and (5.23),
$$ \lim_{n\to\infty}\big\|n^{1/2}\Phi_n^{(3)}\big\|_p=0 \tag{5.24} $$
for every $p>1$. Similarly,
$$ \lim_{n\to\infty}\big\|n^{1/2}\Phi_n^{(4)}\big\|_p=0 \tag{5.25} $$
for every $p>1$. Now, we can show the claim of the lemma by using (5.16), (5.24) and (5.25) together with the mixture type of martingale central limit theorem applied to $n^{1/2}\Phi_n^{(1)}$, with the aid of the c\`adl\`ag property of $\sigma$. $\Box$

Theorem 5.9.
Suppose that $[G_1]$, $[G_2']$ and $[G_3]$ are satisfied. Suppose that $\xi<\delta$. Then $n^{1/2}\big(V_n-\Theta\big)\to^{d_s}\Gamma^{1/2}\zeta$ as $n\to\infty$.

Proof. Just combine Lemmas 5.6, 5.7 and 5.8. $\Box$
6 Constant volatility
The case of constant $\sigma$ is special, and its theoretical treatment differs slightly from that of the previous sections. In this situation, we do not need to pre-estimate the local spot volatility; hence, we can take $S_{n,j}=1$ constantly, and no approximation error is caused. The case $\sigma_t=\theta\tilde\sigma_t$ is also covered if the $\tilde\sigma_{t_{j-1}}$ are observable. For example, the GRV with a fixed cut-off rate $\alpha$ is redefined as
$$ V_n(\alpha)=\sum_{j\in\mathcal{J}_n(\alpha)}q(\alpha)^{-1}|\Delta_jX|^2K_{n,j}, $$
where $\mathcal{J}_n(\alpha)=\big\{j\in I_n\,;\,|\Delta_jX|<|\Delta X|_{(s_n(\alpha))}\big\}$. Then we have the following theorem. Note that the condition involving $\gamma$ that was needed in the previous sections is not required here.

Theorem 6.1.
Suppose that $[G_1]$ and $[G_2]$ are fulfilled. Suppose that $\xi<1/2$. Let $\alpha\in(0,1)$ and $\beta<1/2-\xi$. Then $\|V_n(\alpha)-\Theta\|_p=O(n^{-\beta})$ as $n\to\infty$ for every $p>1$.

The other global-threshold estimators are discussed similarly.
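In the constant-volatility case the global filter reduces to order-statistic trimming plus rescaling, which is easy to exercise numerically. The sketch below (plain Python, not code from the paper) implements this reading of $V_n(\alpha)$ with $S_{n,j}=K_{n,j}=1$; computing $q(\alpha)$ as the second moment of the standard normal truncated at its upper-$\alpha$ absolute quantile is our assumption about its closed form:

```python
import math
import random
from statistics import NormalDist

def q_alpha(alpha: float) -> float:
    # Second moment of N(0,1) truncated at its upper-alpha absolute quantile:
    # with c = Phi^{-1}(1 - alpha/2), E[Z^2 1{|Z| <= c}] = 1 - 2*c*phi(c) - alpha.
    nd = NormalDist()
    c = nd.inv_cdf(1.0 - alpha / 2.0)
    return 1.0 - 2.0 * c * nd.pdf(c) - alpha

def global_rv(dx, alpha: float) -> float:
    # Keep the increments below the empirical upper-alpha quantile of |dx|
    # (the order-statistic threshold), then rescale the trimmed sum by q(alpha).
    n = len(dx)
    kept = sorted(dx, key=abs)[: n - int(alpha * n)]
    return sum(x * x for x in kept) / q_alpha(alpha)

random.seed(1)
n, sigma, T = 2000, 1.0, 1.0
h = T / n
dx = [sigma * math.sqrt(h) * random.gauss(0.0, 1.0) for _ in range(n)]
for j in range(50, n, 100):          # 20 unit-size jumps on an even grid
    dx[j] += 1.0

plain_rv = sum(x * x for x in dx)    # heavily inflated by the jumps
grv = global_rv(dx, alpha=0.1)       # stays close to sigma^2 * T = 1
```

With twenty unit jumps the plain realized volatility lands far above the target, while the trimmed-and-rescaled sum stays close to $\sigma^2T=1$, up to the small bias caused by the jumps occupying part of the trimmed fraction.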
7 Simulation studies

In this section, we conduct several numerical simulations to show that our global realized volatility estimators outperform those proposed in previous studies.
Here we consider a process $X=(X_t)_{t\in[0,1]}$ satisfying the stochastic differential equation
$$ dX_t=\theta X_t\,dt+(\sigma+\eta X_t)\,dw_t+dJ_t,\qquad t\in[0,1], \tag{7.1} $$
with $X_0=1$, where $J_t$ is the jump part of $X$. In this section, we assume that $J$ is a compound Poisson process of the form $J_t=\sum_{i=1}^{N_t}\xi_i$, where $(N_t)_t$ is a Poisson process with intensity $\lambda>0$ and the $(\xi_i)_i$ are independent, normally distributed random variables with mean $\mu$ and variance $\nu$. For the intensity parameter, we consider both cases where $\lambda$ is high and low. Our aim is to estimate the integrated volatility $\Theta=\int_0^1(\sigma+\eta X_t)^2\,dt$.

By simulation, we will compare the performance of the threshold realized volatility (TRV), the bipower variation (BV), the minimum realized volatility (minRV), the GRV, and the WGRV, where TRV, BV and minRV are given by
$$ \mathrm{TRV}_n=\sum_{j=1}^{n}|\Delta_jX|^2\,1_{\{|\Delta_jX|\le n^{-\rho}\}},\qquad \rho\in(0,1/2), $$
$$ \mathrm{BV}_n=\frac{\pi}{2}\sum_{j=1}^{n-1}|\Delta_jX|\,|\Delta_{j+1}X|,\qquad
\mathrm{minRV}_n=\frac{\pi}{\pi-2}\sum_{j=1}^{n-1}\big(|\Delta_jX|\wedge|\Delta_{j+1}X|\big)^2, $$
respectively. The package YUIMA (cf. [4], [7]) was used for the simulation studies below. Note that, although TRV is based on a threshold method, it is completely different from our GRV, since TRV employs a deterministic threshold and never uses information from other increments. In this sense, TRV is based on a "local" approach.

The set-up of the simulation is as follows. The number of samples is $n=2000$. We repeat calculating the estimators 500 times to obtain their average and quantiles. The true parameters are $\theta=0.{\cdots}$, $\sigma=1$, $\eta=3$, $\mu=0.{\cdots}$, $\nu=0.2$. Throughout this subsection, we set the cut-off ratio $\alpha=0.2$ for the estimators that use a spot volatility $S_{n,j-1}$; that is, we trim the upper 20% of the absolute increments. While it may seem that we eliminate too many observations and that the estimator suffers from downward bias, the GRV and WGRV estimate the integrated volatility well thanks to the adjustment coefficients $q(\alpha)$ and $w(\alpha)$. In calculating the TRV, we set $\rho=0.45,\,0.2,\,0.1$.

The spot volatility $\sigma_s$ in (1.1) is not directly observable and depends on $X_t$. Hence, we need $S_{n,j-1}$ to normalize the increment $\Delta_jX$ when constructing the GRV. In this simulation, we use the LGRV $L_{n,j}(\alpha)$ of (3.15) with $\alpha=0.2$, and the local minRV $M_{n,j}$ of (3.48), for $S_{n,j-1}$. We adopt $\kappa_n=\lfloor Bn^{c}\rfloor$ ($=305$ for $n=2000$; see Section 7.3) for the length of a subinterval to calculate these local volatilities. Moreover, we calculate the GRV without normalization (defined in Section 6) for comparison. Note that $\kappa_n$ depends on two tuning parameters, the choice of which can affect the precision of estimation. We discuss this point in the final Section 7.3.

We use the labels in Table 1 to describe the estimators.

Table 1: Definitions of estimators

  Label            Method  Spot volatility  Cut-off ratio α   Exponent ρ
  trv[ρ]           TRV     –                –                 0.45, 0.2, 0.1
  bv               BV      –                –                 –
  mrv              minRV   –                –                 –
  grv.lgrv[α]      GRV     local GRV        0.2               –
  grv.mrv[α]       GRV     local minRV      0.2               –
  wgrv.lgrv[α]     WGRV    local GRV        0.2               –
  wgrv.mrv[α]      WGRV    local minRV      0.2               –
  grv[α]           GRV     –                0.2, 0.1, 0.05    –
  grv.lgrv.mov     GRV     local GRV        depends on n      –
  wgrv.lgrv.mov    WGRV    local GRV        depends on n      –

First, we deal with the case of high intensity. Here we set $\lambda=30$ so that the data include many jumps. An example of a sample path and its increments is shown in Figure 1. Obviously, there are many large spikes in the data, suggesting the existence of jumps. Note that the volatility is non-constant here. In fact, in Panel (b) of Figure 1, the sizes of the increments tend to increase as time passes. Hence, to estimate the volatility, we have to use estimated spot volatilities to normalize the increments.

In this example, we also show the error ratios of the GRV and WGRV with shrinking cut-off ratio (the tuning parameters that determine the cut-off ratio $\alpha_n=\lfloor Bn^{\delta}\rfloor/n$ for these estimators are $B=10$ and $\delta=0.45$, the same as those used in the next subsection). Theoretically, these are available in the case of moderate intensity of jumps; we show their results just for reference. We will discuss the case of moderate intensity in more detail in the next subsection.

Figure 1: Sample path of X and its increments (λ = 30). (a) Sample path of X; (b) increments of X.

Table 2 shows the summary of the error ratios (percentage deviations of the estimated values from the true value for each estimator), and Figure 2 gives their box plots. In this case, both BV (bv) and minRV (mrv) seem to suffer from upward bias due to jumps. In particular, the BV deviates from the true value considerably. On the other hand, the GRVs with normalization perform well, with errors concentrating around zero (grv.lgrv, grv.mrv). Note that, although the WGRV performs relatively well, it seems to have a small upward bias (wgrv.lgrv, wgrv.mrv). This suggests that, if there are many large jumps, using an upper quantile ($|\Delta X|_{(s_n(\alpha))}$) may sometimes lead to biases rather than a robust estimate.

Table 2: Summary of error ratios: λ = 30

                    Min.     1st Qu.  Median   3rd Qu.  Max.
  trv[0.45]        -29.88   -28.16   -27.42   -26.45   -21.64
  trv[0.20]         -9.40    -3.57    -1.95    -0.67     3.54
  trv[0.10]
  bv                -0.25     2.66     3.87     5.11     8.98
  mrv               -2.61     0.36     1.43     2.48     5.75
  grv.lgrv[0.20]    -6.65    -1.03    -0.10     0.88     3.64
  grv.mrv[0.20]     -6.65    -1.33    -0.40     0.64     3.65
  wgrv.lgrv[0.20]   -3.50    -0.39     0.60     1.46     3.68
  wgrv.mrv[0.20]    -3.53    -0.50     0.44     1.32     3.72
  grv[0.20]         -9.29    -4.39    -3.21    -1.91     2.15
  grv[0.10]        -10.26    -2.84    -1.76    -0.76     2.71
  grv[0.05]        -13.85    -4.41    -1.77    -0.42     3.38
  grv.lgrv.mov      -9.13    -1.18    -0.00     0.95     3.55
  wgrv.lgrv.mov     -3.19    -0.14     0.80     1.62     4.61

Figure 2: Error ratios [%] for the case of high intensity: λ = 30.

The three right box plots in Figure 2 (grv[0.20], grv[0.10], grv[0.05]) are the results of the GRV without normalizing the increments by local-global filters, with cut-off ratios $\alpha=0.2,\,0.1,\,0.05$, respectively. We see that they seem to be less precise (especially when $\alpha$ is extremely small) than the GRV or WGRV with local volatility. This result suggests that, if we do not normalize the increments by spot volatilities in the case of non-constant volatility, we end up with inappropriate estimates.

Intuitively, when we ignore normalization, we tend to eliminate increments where the volatility is high (because they are typically large), even if they come from the Brownian motion, while keeping relatively small jumps that we should actually remove. In addition, theoretically, the adjusting constant $q(\alpha)$ in the definition of the GRV (2.3) comes from the standard normal distribution. Therefore, when the volatility is non-constant, we should normalize the increments $|\Delta_jX|$ by the local volatility to make them approximately standard normally distributed.

The good news is that the estimators also perform well even in the case of extremely high intensity. We consider $\lambda=50$ here to see their accuracy.
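The competing estimators are short to implement. The following toy check (plain Python; constant volatility and evenly spaced jumps of a fixed size are our simplifications, unlike the compound Poisson design and YUIMA code used in the experiments) illustrates why TRV, BV and minRV survive jumps while the plain realized volatility does not:

```python
import math
import random

def trv(dx, rho):
    # Threshold RV: deterministic cutoff n^{-rho}, as in the definition above.
    u = len(dx) ** (-rho)
    return sum(x * x for x in dx if abs(x) <= u)

def bv(dx):
    # Bipower variation.
    return (math.pi / 2.0) * sum(abs(a) * abs(b) for a, b in zip(dx, dx[1:]))

def min_rv(dx):
    # Minimum realized volatility of Andersen, Dobrev and Schaumburg.
    c = math.pi / (math.pi - 2.0)
    return c * sum(min(abs(a), abs(b)) ** 2 for a, b in zip(dx, dx[1:]))

random.seed(7)
n, T, sigma = 2000, 1.0, 1.0
h = T / n                                  # integrated volatility is sigma^2*T = 1
dx = [sigma * math.sqrt(h) * random.gauss(0.0, 1.0) for _ in range(n)]
for j in range(100, n, 200):               # 10 jumps of size 0.5, evenly spaced
    dx[j] += 0.5

plain_rv = sum(x * x for x in dx)          # inflated by roughly the jump squares
robust = (trv(dx, 0.2), bv(dx), min_rv(dx))  # all much closer to 1 than plain_rv
```

Even in this toy setting the BV retains a visible upward bias from the cross terms between jump and diffusion increments, while the minRV and the TRV remove the jumps almost entirely, consistent with the box plots above.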
Figure 3 shows a sample path and its increments. It is obvious that there are numerous jumps, and one can easily imagine that the standard realized volatility estimator can never estimate the true volatility. Table 3 and Figure 4 show the error ratios of each estimator for $\lambda=50$ with the cut-off ratio $\alpha=0.2$.

Figure 3: Sample path of X and its increments (λ = 50). (a) Sample path of X; (b) increments of X.

Table 3: Summary of error ratios: λ = 50, α = 0.2

                    Min.     1st Qu.  Median   3rd Qu.  Max.
  trv[0.45]        -99.52   -98.90   -98.52   -98.09   -94.42
  trv[0.20]        -52.18   -27.61   -19.86   -12.98     2.35
  trv[0.10]
  bv
  mrv               -6.36     3.40     7.72    11.58    27.20
  grv.lgrv[0.20]   -38.58    -7.51    -2.38     1.82    18.41
  grv.mrv[0.20]    -39.65    -8.54    -3.51     0.81    16.58
  wgrv.lgrv[0.20]  -10.64     0.87     3.99     7.44    25.22
  wgrv.mrv[0.20]   -12.60    -0.08     3.18     6.74    23.23
  grv[0.20]        -33.92   -15.51   -11.40    -7.01     5.42
  grv[0.10]        -56.46   -23.57   -11.63    -4.84     9.21
  grv[0.05]        -66.14   -40.55   -29.94   -17.97     5.88
  grv.lgrv.mov     -48.17   -16.03    -6.55    -0.14    16.16
  wgrv.lgrv.mov    -10.31     2.16     5.85     9.61    31.21

Figure 4: Error ratios [%] for the case of high intensity: λ = 50.

By taking $\alpha$ larger, the accuracy improves. Table 4 shows the error ratios of each estimator in the case of $\lambda=50$, with the cut-off ratio $\alpha$ ranging from 0.1 to 0.5. We can see that, for large cut-off ratios, the estimators with spot volatilities (grv.lgrv, grv.mrv, wgrv.lgrv, wgrv.mrv) still perform well. Looking in more detail, we see that the local GRV outperforms the local minRV for both GRV and WGRV.
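The normalization of increments by a rolling spot-volatility estimate, which distinguishes grv.lgrv and grv.mrv from the unnormalized grv[α], can be sketched as follows. The rolling estimator below is a simplified stand-in for the local minRV $M_{n,j}$ of (3.48); the window placement and edge conventions are our assumptions:

```python
import math
import random

def spot_minrv(dx, j, kappa, h):
    # Spot variance near t_{j-1} from the kappa increments preceding j
    # (a simplified rolling local minRV; conventions are ours, not (3.48)).
    w = dx[j - kappa:j]
    c = math.pi / (math.pi - 2.0)
    s = c * sum(min(abs(a), abs(b)) ** 2 for a, b in zip(w, w[1:]))
    return s / ((len(w) - 1) * h)

random.seed(3)
n, T = 2000, 1.0
h = T / n
# piecewise-constant volatility: 1 on the first half, 2 on the second
sig = [1.0 if i < n // 2 else 2.0 for i in range(n)]
dx = [sig[i] * math.sqrt(h) * random.gauss(0.0, 1.0) for i in range(n)]

kappa = 300
spot = [spot_minrv(dx, j, kappa, h) for j in range(kappa, n)]
# normalized increments: roughly N(0, h) wherever the spot estimate has caught up
z = [dx[j] / math.sqrt(spot[j - kappa]) for j in range(kappa, n)]
norm_rv = sum(x * x for x in z)   # of order (n - kappa) * h, in both regimes
```

After normalization the increments are comparable across the two volatility regimes, so a single order-statistic threshold discriminates jumps in both; an under- or over-estimated spot volatility would destroy exactly this comparability, which is the issue revisited in Section 7.3.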
This example implies that we should take a fairly large cut-off ratio in order to obtain a precise estimate.

Table 4: Error ratios [%] for the case of extremely high intensity: λ = 50

                   Cut-off ratio α
                    0.1      0.2      0.3      0.4      0.5
  trv[0.45]        -98.40   -98.40   -98.40   -98.40   -98.40
  trv[0.20]        -20.35   -20.35   -20.35   -20.35   -20.35
  trv[0.10]
  bv
  mrv
  grv.lgrv[α]      -18.04    -3.49    -0.41    -0.96    -1.47
  grv.mrv[α]       -18.05    -4.35    -2.62    -3.90    -4.89
  wgrv.lgrv[α]      10.28     4.05     2.27     1.35     0.59
  wgrv.mrv[α]       10.87     3.21     1.06    -0.30    -1.40
  grv[0.20]        -11.50   -11.50   -11.50   -11.50   -11.50
  grv[0.10]        -14.60   -14.60   -14.60   -14.60   -14.60
  grv[0.05]        -28.85   -28.85   -28.85   -28.85   -28.85
  grv.lgrv.mov      -8.31    -8.76    -8.78    -8.80    -8.83
  wgrv.lgrv.mov

Next, we consider the case of low intensity. In this case, we can use the shrinking cut-off rate. Recall that the shrinking cut-off rate is defined by $\alpha_n=\lfloor Bn^{\delta}\rfloor/n$. In this simulation, we set $B=10$ and $\delta=0.45$, so the cut-off rate is $\alpha_n=305/2000=0.1525$. Table 5 and Figure 5 show the error ratios for $\lambda=5$. Even for this moderate intensity, trv[0.45] still deviates considerably from the true value, although the TRV with smaller $\rho$ performs well. This implies that the accuracy of estimation is still highly vulnerable to the choice of $\rho$ for the TRV, even in the case of moderate intensity of jumps.

Table 5: Summary of error ratios: λ = 5

                    Min.     1st Qu.  Median   3rd Qu.  Max.
  trv[0.45]        -29.61   -27.40   -26.09   -23.89   -18.27
  trv[0.20]         -2.79    -0.58     0.14     0.85     3.86
  trv[0.10]         -2.39     1.16     2.52     4.25    13.51
  bv                -2.26     0.31     1.29     2.24     6.34
  mrv               -3.63    -0.69     0.29     1.24     4.27
  grv.lgrv[0.20]    -3.62    -1.44    -0.55     0.21     4.79
  grv.mrv[0.20]     -3.59    -1.40    -0.55     0.30     5.00
  wgrv.lgrv[0.20]   -3.52    -1.11    -0.39     0.44     4.55
  wgrv.mrv[0.20]    -3.53    -1.06    -0.40     0.46     4.62
  grv[0.20]         -7.14    -2.82    -1.75    -0.75     2.90
  grv[0.10]         -5.46    -2.15    -1.32    -0.46     3.17
  grv[0.05]         -4.15    -1.63    -0.86    -0.10     3.23
  grv.lgrv.mov      -3.54    -1.32    -0.50     0.27     4.87
  wgrv.lgrv.mov     -3.15    -0.96    -0.34     0.47     4.17

Figure 5: Error ratios [%] for the case of low intensity: λ = 5.

For the GRV and WGRV with shrinking cut-off ratio, we proved asymptotic mixed normality. Hence, the distributions of the Studentized errors $\Gamma^{-1/2}\sqrt{n}(V_n-\Theta)$ and $\Gamma^{-1/2}\sqrt{n}(W_n-\Theta)$ are expected to be close to the standard normal distribution.

Figure 6 shows QQ plots comparing the theoretical quantiles of the standard normal distribution with the Studentized errors of the GRV and WGRV estimators with shrinking threshold, BV, and minRV. In this example, wgrv.lgrv.mov outperforms the others: it is close to the standard normal distribution.
On the other hand, grv.lgrv.mov seems to deviate from $N(0,1)$, and bv and mrv are far from $N(0,1)$.

Figure 6: QQ plots of the Studentized errors: λ = 5. Panels: grv.lgrv.mov, wgrv.lgrv.mov, bv, mrv.

The important tuning parameter for the shrinking-threshold GRV is the exponent $\delta$, an appropriate choice of which may strongly depend on the intensity of jumps. Recall that a small $\delta$ means that we keep almost all the samples untrimmed. Table 6 shows the average error ratios of the GRV and WGRV with shrinking cut-off ratio for several values of the intensity $\lambda$ and the parameter $\delta$. For moderate intensity ($\lambda=5$), the estimators remain accurate for a wide range of $\delta$. On the other hand, for high intensity ($\lambda=30$), the GRV with small $\delta$ is biased. This can be interpreted as meaning that the multiplication by $q(\alpha_n)^{-1}$ for the GRV is insufficient to compensate for its elimination of jumps (a small $\delta$ implies a small $\alpha_n$, making $q(\alpha_n)^{-1}$ close to 1). Moreover, as for the WGRV, large upward biases occur for small $\delta$, since it keeps almost all large increments and uses an extremely large increment for the winsorization.

It is worth noting that a large $\delta$ makes both the GRV and WGRV accurate to a certain extent, even in the case of high intensity of jumps. Thus, in practice, one may use the shrinking cut-off GRV and WGRV by setting the tuning parameter $\delta$ sufficiently close to $1/2$.

Figure 7: QQ plots of the Studentized errors: λ = 50 (grv.lgrv.mov, wgrv.lgrv.mov).

Table 6: Average error ratios of GRV and WGRV with shrinking cut-off ratio for several intensities λ and exponents δ.

Since we assumed that the volatility is location-dependent in the previous sections, normalization by estimated spot volatilities was needed to obtain an accurate estimator. However, if the true volatility of the data is constant, we may omit the normalization. Here we set $\eta=0$ so that the data are driven by a constant-volatility diffusion process.
The intensity is $\lambda=30$. A summary of the estimated values is shown in Table 7. Obviously, all types of GRV and WGRV outperform the other estimators.

Figure 8 shows the error ratios for this case. The GRVs without normalization (grv[0.20], grv[0.10] and grv[0.05]) perform as well as those with normalization. This suggests that, if the true process can be regarded as having constant volatility, we may skip the normalization procedure (the calculation of spot volatilities). However, it would be more typical that the volatility is non-constant. Thus, basically, it would be advisable to use the normalization.

Table 7: Summary table of estimated values: λ = 30.

Figure 8: Error ratios [%] for the case of constant volatility: λ = 30.

As the previous examples show, the minRV performs relatively well in the case of compound-Poisson-type jumps. However, even if the intensity of jumps is small, the minRV may suffer from an upward bias depending on the structure of the jumps. In particular, if there are consecutive jumps (which are quite rare for compound Poisson processes), the minRV loses its advantage. Here we show an example of such a situation.

We consider the case where the data-generating process is given by $X=U+J$, where $U$ is the continuous part and $J$ is the jump part. Here we assume that $J$ is a marked Neyman-Scott clustering process (simply denoted by NS hereafter), instead of a compound Poisson process. The NS process is a typical point process representing consecutive jumps.
That is, there may be jumps within some consecutive intervals. This leads to an upward bias of the BV and minRV, because both of two adjacent increments can contain large jumps. The NS process is constructed as follows.

(1) Set "centers" on the time interval $[0,1]$ by a Poisson process $(N_t)$ with intensity $\lambda$. A center is defined as a point $t\in[0,1]$ which satisfies $\Delta N_t=1$.

(2) For each center $c\in[0,1]$, generate the number $N_c$ of "children," assuming $N_c$ is Poisson-distributed with mean $\lambda_c$.

(3) For each center $c\in[0,1]$, generate independent exponentially distributed random variables $\big(v_i^{(c)}\big)_{1\le i\le N_c}$ with mean $h$. Then the location of child $i$ derived from center $c$ is defined as $c-v_i^{(c)}$. This defines the location of a jump.

(4) For each child $i$, generate an independent, normally distributed random variable $\xi_i\sim N(0,\nu_J)$. This determines the size and direction of the jump $\Delta J_s$.

(5) The NS process is defined as $J_t=\sum_{s\in[0,t]}\Delta J_s$.

We generate $X=U+J$, where $U$ is a Brownian semimartingale independent of $J$, satisfying the stochastic differential equation
$$ dU_t=\theta U_t\,dt+(\sigma+\eta U_t)\,dw_t \tag{7.2} $$
with $U_0=1$. We set $\lambda=\lambda_c=5$ and $\nu_J=0.5$. For the continuous part $U$, we use $\theta=0.{\cdots}$, $\sigma=1$, $\eta=3$. As before, the number of samples is $n=2000$, and the number of trials is 500.

Table 8: Summary of error ratios: Neyman-Scott clustering jumps

                    Min.     1st Qu.  Median   3rd Qu.  Max.
  trv[0.45]        -97.04   -88.67   -82.29   -75.74   -59.83
  trv[0.20]        -74.35   -29.07   -11.09     6.46   138.67
  trv[0.10]        -68.02   -14.24     3.17    24.43   157.66
  bv               -54.60     5.52    27.19    69.17   369.71
  mrv              -67.31    -1.40    19.59    61.35   300.83
  grv.lgrv[0.20]   -74.39   -31.45   -14.02     3.64   136.14
  grv.mrv[0.20]    -70.53   -26.44    -9.19     8.39   139.11
  wgrv.lgrv[0.20]  -74.49   -31.31   -13.35     4.32   137.83
  wgrv.mrv[0.20]   -74.48   -30.70   -12.73     4.23   136.96
  grv[0.20]        -74.89   -33.38   -16.67     0.73   134.64
  grv[0.10]        -74.70   -32.63   -15.03     1.88   136.44
  grv[0.05]        -74.38   -31.68   -14.04     3.40   136.67
  grv.lgrv.mov     -74.50   -31.63   -13.79     3.84   138.23
  wgrv.lgrv.mov    -74.32   -30.82   -12.89     4.34   139.78

Table 8 and Figure 9 show the error ratios in the case of NS jumps. Because of the possible consecutive jumps, both the bipower variation and the minRV have upward biases, whereas the GRV and WGRV are all robust to such clustering jumps. This suggests that the GRV and WGRV perform very well for various structures of jumps.

Figure 9: Error ratios [%] for the case of Neyman-Scott clustering jumps.
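Steps (1)-(5) above can be sketched as follows (plain Python; the Poisson sampler, the treatment of $\nu_J$ as a standard deviation, and the discarding of children that fall outside $[0,1]$ are our assumptions where the construction is silent):

```python
import random

def poisson(mean, rng):
    # Poisson sampler: count unit-rate exponential inter-arrivals up to `mean`.
    k, s = 0, rng.expovariate(1.0)
    while s < mean:
        k += 1
        s += rng.expovariate(1.0)
    return k

def neyman_scott_jumps(lam, lam_c, h0, nu_j, rng):
    # Marked Neyman-Scott cluster process on [0, 1], following steps (1)-(5):
    # centers ~ Poisson(lam); each center spawns Poisson(lam_c) children,
    # displaced by Exp(mean h0); each child carries a N(0, nu_j) mark
    # (nu_j used as the standard deviation here).
    centers = [rng.random() for _ in range(poisson(lam, rng))]
    jumps = []
    for c in centers:
        for _ in range(poisson(lam_c, rng)):
            t = c - rng.expovariate(1.0 / h0)   # child located at c - v_i^{(c)}
            if 0.0 <= t <= 1.0:                 # discard children outside [0, 1]
                jumps.append((t, rng.gauss(0.0, nu_j)))
    return sorted(jumps)                        # (time, size) pairs

rng = random.Random(11)
jumps = neyman_scott_jumps(lam=5.0, lam_c=5.0, h0=0.01, nu_j=0.5, rng=rng)
```

With a small displacement mean, the children of one center cluster within a few sampling intervals of each other, which is exactly the consecutive-jump configuration that breaks the adjacent-increment logic of BV and minRV.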
Finally, we discuss how the estimation of the spot volatilities affects the accuracy of the GRV and WGRV. We have used $\kappa_n=\lfloor Bn^{c}\rfloor=305$ for the local GRV and the local minRV, and we have seen that the GRV and WGRV with these spot volatilities perform very well. However, the choice of $\kappa_n$ may affect the accuracy of the GRV and WGRV. In fact, if the true volatility varies greatly, a wide subinterval (a large $\kappa_n$) leads to imprecise estimation of the spot volatilities and causes misdetection of jumps based on this information; it thus ends up producing biases in the GRV and WGRV. To see this, consider the following SDE:
$$ dX_t=\theta X_t\,dt+(\sigma+\eta\sin X_t)\,dw_t+dJ_t, $$
where $J_t=\sum_{j=1}^{N_t}\xi_j$ is the same compound Poisson process with intensity $\lambda$ as in Section 7.1. We set $\sigma=1$, $\eta=5$, $\lambda=10$, $\mu=0.{\cdots}$, $\nu=0.2$. Again, the number of samples is $n=2000$, and the number of trials is 500. In this example, the volatility $(\sigma+\eta\sin X_t)$ swings over a wide range, so the choice of $\kappa_n$ matters.

Figure 10: Sample path of X and its increments. (a) Sample path of X; (b) increments of X.

Table 9 shows the average error ratios of the GRV and WGRV for several values of $c$ and $B$ that determine the width $2\kappa_n+1$ of the subintervals for spot volatility estimation. This indicates that large $B$ and $c$ (a wide subinterval) tend to give imprecise estimates. Since the volatility varies within a wide subinterval, as Figure 10 shows, the estimated spot volatility is prone to deviate from the true value. This leads to misdetection of jumps and thus distorts the estimates of the GRV and WGRV. For instance, an underestimated spot volatility makes the normalized increments too large, so these increments are likely to be regarded as jumps and eliminated from the calculation of the estimates. As a result, the GRV and WGRV are underestimated. In this example, it seems that small values of $c$ and $B$ work well. After all, a proper choice of the tuning parameters $B$ and $c$, made while observing the data in detail, is needed to obtain precise estimates by the GRV and WGRV, especially when the volatility switches between high and low states frequently.

Table 9: Average error ratios of GRV and WGRV for different c and B determining the width of subintervals. Panels: (a) GRV with local GRV (grv.lgrv); (b) GRV with local minRV (grv.mrv); (c) WGRV with local GRV (wgrv.lgrv); (d) WGRV with local minRV (wgrv.mrv).

In this paper, we constructed the global realized volatility estimator in a nonparametric context. We proved the consistency and the asymptotic mixed normality of the GRV and WGRV, and, by numerical simulations, we showed that these new approaches outperform previous estimators that use increments within only one or two intervals. Our new approach for eliminating jumps is highly versatile.
For example, with normalization, it works well when the volatility of the data is non-constant. Moreover, both the GRV and WGRV are accurate enough in the case of not only compound-Poisson sporadic jumps but also Neyman-Scott consecutive jumps.

The global-filtering method could be extended to covariance estimation, even under a nonsynchronous sampling scheme. Furthermore, this approach could also be applied to construct a test statistic for jumps. It would also be valuable to apply our approach to empirical research on high-frequency time series data. These are important topics for future research.
References

[1] Andersen, T.G., Dobrev, D., Schaumburg, E.: Jump-robust volatility estimation using nearest neighbor truncation. J. Econometrics 169(1), 75-93 (2012). DOI 10.1016/j.jeconom.2012.01.011
[2] Barndorff-Nielsen, O.E., Shephard, N.: Power and bipower variation with stochastic volatility and jumps. Journal of Financial Econometrics 2(1), 1-48 (2004)
[3] Barndorff-Nielsen, O.E., Shephard, N., Winkel, M.: Limit theorems for multipower variation in the presence of jumps. Stochastic Process. Appl. 116(5), 796-806 (2006). DOI 10.1016/j.spa.2006.01.007
[4] Brouste, A., Fukasawa, M., Hino, H., Iacus, S., Kamatani, K., Koike, Y., Masuda, H., Nomura, R., Ogihara, T., Shimizu, Y., Uchida, M., Yoshida, N.: Statistical inference for stochastic processes: overview and prospects. Journal of Statistical Software 57(4), 1-51 (2014)
[5] Dohnal, G.: On estimating the diffusion coefficient. J. Appl. Probab. 24(1), 105-114 (1987)
[6] Genon-Catalot, V., Jacod, J.: On the estimation of the diffusion coefficient for multi-dimensional diffusion processes. Ann. Inst. H. Poincare Probab. Statist. 29(1), 119-151 (1993)
[7] Iacus, S.M., Yoshida, N.: Simulation and Inference for Stochastic Processes with YUIMA. Springer (2018)
[8] Inatsugu, H., Yoshida, N.: Global jump filters and quasi-likelihood analysis for volatility. Annals of the Institute of Statistical Mathematics, online (2021)
[9] Kamatani, K., Uchida, M.: Hybrid multi-step estimators for stochastic differential equations based on sampled data. Statistical Inference for Stochastic Processes 18(2), 177-204 (2014)
[10] Kessler, M.: Estimation of an ergodic diffusion from discrete observations. Scand. J. Statist. 24(2), 211-229 (1997)
[11] Koike, Y.: An estimator for the cumulative co-volatility of asynchronously observed semimartingales with jumps. Scandinavian Journal of Statistics 41(2), 460-481 (2014)
[12] Mancini, C.: Disentangling the jumps of the diffusion in a geometric jumping Brownian motion. Giornale dell'Istituto Italiano degli Attuari 64(1), 19-47 (2001)
[13] Ogihara, T., Yoshida, N.: Quasi-likelihood analysis for the stochastic differential equation with jumps. Stat. Inference Stoch. Process. 14(3), 189-229 (2011). DOI 10.1007/s11203-011-9057-z
[14] Ogihara, T., Yoshida, N.: Quasi-likelihood analysis for nonsynchronously observed diffusion processes. Stochastic Processes and their Applications 124(9), 2954-3008 (2014)
[15] Prakasa Rao, B.L.S.: Statistical inference from sampled data for stochastic processes. In: Statistical Inference from Stochastic Processes (Ithaca, NY, 1987), pp. 249-284 (1988)
[16] Prakasa Rao, B.L.S.: Asymptotic theory for nonlinear least squares estimator for diffusion processes. Math. Operationsforsch. Statist. Ser. Statist. 14(2), 195-209 (1983)
[17] Shimizu, Y., Yoshida, N.: Estimation of parameters for diffusion processes with jumps from discrete observations. Stat. Inference Stoch. Process. 9(3), 227-277 (2006). DOI 10.1007/s11203-005-8114-x
[18] Uchida, M., Yoshida, N.: Adaptive estimation of an ergodic diffusion process based on sampled data. Stochastic Process. Appl. 122(8), 2885-2924 (2012). DOI 10.1016/j.spa.2012.04.001
[19] Uchida, M., Yoshida, N.: Quasi likelihood analysis of volatility and nondegeneracy of statistical random field. Stochastic Process. Appl. 123(7), 2851-2876 (2013). DOI 10.1016/j.spa.2013.04.008
[20] Uchida, M., Yoshida, N.: Adaptive Bayes type estimators of ergodic diffusion processes from discrete observations. Statistical Inference for Stochastic Processes 17(2), 181-219 (2014)
[21] Yoshida, N.: Estimation for diffusion processes from discrete observation. J. Multivariate Anal. 41(2), 220-242 (1992)
[22] Yoshida, N.: Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations. Ann. Inst. Statist. Math. 63(3), 431-479 (2011). DOI 10.1007/s10463-009-0263-z