On the statistics of scaling exponents and the Multiscaling Value at Risk
Giuseppe Brandi∗,†, T. Di Matteo†,§,¶
† Department of Mathematics, King's College London, The Strand, London, WC2R 2LS, UK
§ Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, UK
¶ Complexity Science Hub Vienna, Josefstaedter Strasse 39, A-1080 Vienna, Austria
April 29, 2020
Abstract
Scaling and multiscaling financial time series have been widely studied in the literature. The research on this topic is vast and still flourishing. One way to analyse the scaling properties of time series is through the estimation of scaling exponents. These exponents are recognized as valuable measures to discriminate between random, persistent, and anti-persistent behaviours in time series. In the literature, several methods have been proposed to study the multiscaling property, and in this paper we use the generalized Hurst exponent (GHE). On the basis of this methodology, we propose a novel statistical procedure to robustly estimate and test the multiscaling property, which we name RNSGHE; this methodology is used together with a combination of t-tests and F-tests to discriminate between real and spurious scaling. Moreover, we introduce a new methodology to estimate the optimal aggregation time used in our procedure. We numerically validate our procedure on simulated time series using the Multifractal Random Walk (MRW) and then apply it to real financial data. We also present results for time series with and without anomalies, and we compute the bias that such anomalies introduce in the measurement of the scaling exponents. Finally, we show how the use of proper scaling and multiscaling can ameliorate the estimation of risk measures such as Value at Risk (VaR). We also propose a methodology based on Monte Carlo simulation, which we name Multiscaling Value at Risk (MSVaR), that takes into account the statistical properties of multiscaling time series. We show that by using this statistical procedure in combination with the robustly estimated multiscaling exponents, the one-year forecasted MSVaR mimics the VaR on the annual data for the majority of the stocks analysed.

∗ Corresponding author. Email: [email protected]

Submitted to the European Journal of Finance.

Keywords:
Multiscaling time series, Robust scaling, long-memory, Value at Risk.
Nowadays, scaling and multiscaling are widely accepted as empirical stylized facts in financial time series, and they need to be properly addressed and analysed since they provide important information to risk and asset managers. The (multi)scaling property of time series is particularly important in risk management, especially when the model used assumes independence of asset returns. If this assumption is not met, risk measures might be severely biased, especially if there is long-range dependence acting with a different degree across the time series' statistical moments. In particular, multiscaling has been adopted as a formalism in two different branches of quantitative finance, i.e. econophysics and mathematical finance. The former devoted most of its attention to price and return series in order to understand the source of multifractality from an empirical and theoretical point of view (Mantegna and Stanley, 1999; Dacorogna et al., 2001; Mantegna and Stanley, 1995; Di Matteo, 2007; Calvet and Fisher, 2002; Lux, 2004; Lux and Marchesi, 1999; Di Matteo, Aste, and Dacorogna, 2005; Buonocore et al., 2019), and has recently identified a new stylized fact which relates (non-linearly) the strength of multiscaling and the dependence between stocks (Buonocore et al., 2019). The latter instead builds on the work of (Gatheral, Jaisson, and Rosenbaum, 2018) on rough volatility and has been used to construct stochastic models with anti-persistent volatility dynamics (Gatheral, Jaisson, and Rosenbaum, 2018; Takaishi, 2019; Fukasawa, Takabatake, and Westphal, 2019; Livieri et al., 2018). Even if the research questions come from different perspectives, it is important to recognize the relevance that this study has in finance. Multiscaling has been understood to originate from one or more phenomena related to trading dynamics.
In particular, it can be attributed to fat tails, the autocorrelation of the absolute value of log-returns, liquidity dynamics, or the (non-linear) correlation between high and low returns generated by the different time horizons of traders and the consequent volumes traded. It can also be caused by the endogeneity of markets, in which a given order generates many other orders; this occurs especially in markets where algorithmic trading is prevalent. There are different methodologies used to extract the scaling exponent from time series. Among them are the Multifractal Detrended Fluctuation Analysis (MFDFA) proposed in (Kantelhardt et al., 2002), the Wavelet Transform Modulus Maxima (WTMM) introduced by (Muzy, Bacry, and Arneodo, 1991, 1993), and the structure function approach, also known as the generalized Hurst exponent (GHE) method (Di Matteo, Aste, and Dacorogna, 2003; Di Matteo, 2007; Sornette et al., 2018). In a recent paper, (Barunik and Kristoufek, 2010) tested different methodologies against several data specifications and empirically showed that the GHE approach outperforms the other models. For this reason, throughout this work we will use the GHE method. Notwithstanding the importance of the correct estimation of the Hurst exponent, the analysis has rarely been addressed from a statistical point of view. In this paper, we propose a step-by-step procedure for a robust estimation and test of the multiscaling property. Application to simulated and empirical data also allows us to demonstrate the impact of bias on scaling and multiscaling estimation. After the estimation and test of multiscaling, we also show how the use of proper scaling and multiscaling can ameliorate the estimation of risk measures such as Value at Risk (VaR).
We finally propose a methodology based on Monte Carlo simulation, which we name Multiscaling VaR, that takes into account the statistical properties of multiscaling time series by using a multiscaling-consistent data generating process. The paper is structured as follows. Sections 2 and 3 are devoted to a brief description of multiscaling in finance and to the statistical procedure proposed to consistently estimate and test the scaling spectrum. Section 4 shows the results of the proposed methodology applied to synthetic data, while Section 5 reports the results of an empirical application to real financial time series. Section 6 is devoted to a practical application of the scaling and multiscaling property to VaR, while Section 7 concludes.
In this Section, we explain the importance of the multifractal (multiscaling) formalism in financial markets. Let us first fix the notation by defining the price time series as P_t and the log-prices as p_t = ln(P_t). From this, the log-returns over a time aggregation τ are r_τ(t) = p(t + τ) − p(t), where τ is expressed in days. Financial models are usually based on the assumption that log-prices follow a Brownian Motion and, for this model, the rescaled second moment of the log-returns over time aggregation τ follows

σ_τ = E[|r_τ(t)|²]^{1/2} ∼ σ τ^{0.5},    (1)

where σ_τ is the standard deviation at aggregation horizon τ while σ is the standard deviation at daily aggregation. This Equation is usually referred to as the square root of time rule and it is widely applied in quantitative finance (Danielsson and Zigrand, 2006; Wang, Yeh, and Cheng, 2011). Examples are the Black and Scholes model, in which the volatility evolves as σ√τ, or the VaR, which under the Basel regulatory framework can be computed for higher time aggregations, e.g. the τ-days VaR can be computed as the daily VaR multiplied by √τ. In the analysis of the Nile river, Hurst found that the scaling behaviour described by a Brownian Motion was not in line with the empirical data (Hurst, 1956). This formalism has later been introduced in finance (Mandelbrot, 1963, 2013; Sornette et al., 2018). To detect multiscaling, it is necessary to study the non-linearity of the scaling exponents of the q-order moments of the absolute value of log-returns (Di Matteo, 2007). In particular, the process (p_t) with stationary increments is said to be multiscaling if

Ξ(τ, q) = E[|r_τ(t)|^q] ∼ K_q τ^{qH_q},    (2)

where q = {q₁, q₂, ..., q_M} is the set of evaluated moments, τ = {τ₁, τ₂, ..., τ_N} is the set of lags used to compute the log-returns, K_q is the q-moment for τ = 1, and H_q is the so-called generalized Hurst exponent, which is a function of q.
Finally, the function qH_q is concave (Mandelbrot, Fisher, and Calvet, 1997) and codifies the scaling exponents of the process. A process is uniscaling when the function H_q does not depend on q, i.e. H_q = H (Di Matteo, 2007), while it is multiscaling otherwise. If H ≠ 0.5, the process does not behave as a standard Brownian Motion, and neglecting this feature would significantly bias the estimation of the true risk. In particular, if H < 0.5 (H > 0.5) the process is said to be anti-persistent (persistent), while if H = 0.5 the process has independent increments. Given Equation 2, a possible way to define a multiscaling proxy is by quantifying the degree of non-linearity of the function qH_q. The standard procedure used to extract qH_q consists in running a linear regression in log-log scale of Equation 2, which reads as

ln(Ξ(τ, q)) = qH_q ln(τ) + ln(K_q),    (3)

where τ is defined in the range τ = [τ_min, τ_max] and q = [q_min, q_max] (Di Matteo, 2007). A multiscaling proxy can be obtained by fitting the measured scaling exponents with a second degree polynomial (Buonocore, Aste, and Di Matteo, 2016; Buonocore et al., 2019) of the form

qH_q = Aq + Bq²,    (4)

where A and B are two constants. In this mathematical setting, the measured B, B̂, represents the curvature of qH_q. If B̂ = 0, the process is uniscaling, while if B̂ ≠ 0, the process is multiscaling (Buonocore, Aste, and Di Matteo, 2016; Buonocore et al., 2019). What is of vital importance in order to widely apply the multiscaling formalism in finance is the ability to correctly estimate the value of qH_q and, consequently, of A and B. As highlighted in the previous Sections, estimating the Hurst exponent from empirical data is a difficult task. The difficulties can be categorized in two different classes:

• Those due to the statistical procedure adopted;
• Those linked to the financial data themselves.
Technical details on the choice of this functional form can be found in (Buonocore, Aste, and Di Matteo, 2016; Buonocore et al., 2019). In the first class, issues arise mainly from:

• The statistical model used to compute the scaling exponents;
• The input variables used in the statistical procedure.

In the second class, issues arise mainly from the following question:

• If the data contain an anomaly, how does this impact the estimation of the scaling and multiscaling exponents?

In this work we first focus on the statistical procedure and its implications for financial time series with and without anomalies. Then, we discuss practical implications related to finance, with special attention to Value at Risk.
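To make the estimation pipeline of Equations 2-4 concrete, the following is a minimal sketch in Python (all function names are ours, for illustration only): it computes the empirical structure function on grids of τ and q, extracts the slopes qH_q by ordinary least squares in log-log coordinates, and fits the quadratic qH_q = Aq + Bq² to obtain the multiscaling proxy B.

```python
import numpy as np

def structure_function(logp, taus, qs):
    """Empirical structure function Xi(tau, q) = E[|r_tau(t)|^q] (Equation 2)."""
    xi = np.empty((len(taus), len(qs)))
    for i, tau in enumerate(taus):
        r = logp[tau:] - logp[:-tau]              # overlapping tau-day log-returns
        for j, q in enumerate(qs):
            xi[i, j] = np.mean(np.abs(r) ** q)
    return xi

def ghe_spectrum(logp, taus, qs):
    """Scaling exponents qH_q: slopes of ln Xi(tau, q) on ln tau (Equation 3)."""
    xi = structure_function(logp, taus, qs)
    ln_tau = np.log(taus)
    return np.array([np.polyfit(ln_tau, np.log(xi[:, j]), 1)[0]
                     for j in range(len(qs))])

def multiscaling_proxy(qs, qhq):
    """Least-squares fit of qH_q = A q + B q^2 (Equation 4); B is the curvature."""
    X = np.column_stack([qs, qs ** 2])
    A, B = np.linalg.lstsq(X, qhq, rcond=None)[0]
    return A, B

# sanity check on a uniscaling Gaussian random walk: expect A near 0.5, B near 0
rng = np.random.default_rng(0)
logp = np.cumsum(rng.normal(size=10000))
qs = np.linspace(0.1, 1.0, 10)
A, B = multiscaling_proxy(qs, ghe_spectrum(logp, np.arange(1, 20), qs))
```

For a Brownian-type benchmark the fitted A is close to 0.5 and B is close to 0, consistent with the uniscaling case described above.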
Multiscaling properties of financial time series have been understood to come from one or more phenomena related to trading dynamics. From the point of view of financial microstructure, scaling can be attributed to liquidity dynamics, endogeneity of markets, or any other trading dynamic existing in the market. In particular, the superimposition of different strategies and investment horizons generates the long-range dependence and its variation when evaluated at different order moments. In this Section we propose a methodology to estimate the Hurst exponent qH_q and the multiscaling depth (curvature) coefficient B in a robust manner. As specified in Equation 3, the estimation of scaling laws is generally performed through a linear regression in log-log scale. The statistical problem which might arise in this context is that the regression is performed minimizing the squared log-errors instead of the true errors. This procedure might, in case of strong deviations from the assumed statistical model for the errors, severely impact the results. The solution to this problem consists of applying a nonlinear regression to the original (i.e. not transformed) data, comparing the fit of the two specifications on the original data, and using the one which performs better. Another issue related to the statistical model is the variation of the intercept across the q regressions. In particular, we can exactly compute the value of K_q rather than estimating it, thus eliminating possible errors and bias. We can define the standardized Ξ(τ, q) as

Ξ̃(τ, q) = Ξ(τ, q) / K_q,    (5)

from which it is possible to notice that Equation 2 becomes

Ξ̃(τ, q) ∼ τ^{qH_q}.    (6)

Equation 6 eliminates the possible bias introduced by the estimation of K_q via regression. To easily exploit and model the multiscaling behaviour, we define the q-order normalized moment as

Ξ⃛(τ, q) = Ξ̃(τ, q)^{1/q},    (7)

which transforms Equation 6 into
Ξ⃛(τ, q) ∼ τ^{H_q}.    (8)

Within this new formulation, the analysis is much easier since now all the q regressions have a null intercept, and multiscaling is present only if the regression coefficients H_q differ for different values of q. In fact, for uniscaling time series all regression lines are overlapped, while for multiscaling time series they diverge. Given the formalism introduced by Equation 8, it is easy to check with the naked eye whether a process is multiscaling or not. In addition, we can now rewrite Equation 4 for the normalized and standardized structure function of Equation 8 as

H_q = A + Bq.    (9)

This Equation, even if mathematically equivalent to Equation 4, has a statistical advantage. Eliminating the multiplication by q from both sides of the Equation, we reduce the possibility of spurious results in case q is a dominant factor in the multiplication. Indeed, the interpretation is equivalent, i.e. A is the linear scaling index while B is the multiscaling proxy. Let us now define the relative structure function between two consecutive moments q_i and q_j, with q_j > q_i, as

Ξ⃛(τ, q_i, q_j) = Ξ⃛(τ, q_j) / Ξ⃛(τ, q_i) ∼ τ^{H_{q_j}} / τ^{H_{q_i}} = τ^{H_{q_j} − H_{q_i}} = τ^{H(q_i, q_j)}.    (10)

This formalization helps in the statistical analysis since we can now test whether a process is statistically multiscaling using a significance test on the estimated H(q_i, q_j). In fact, for uniscaling time series we have that H_q = H, which implies that the difference between different order moments is always 0. On the contrary, for multiscaling time series it should be different from 0 for all q. This reduces to a t-test on the regression coefficients estimated using Equation 10. Besides the multi-regression approach, it is possible to perform a multivariate regression by rewriting Equation 10 as

[Ξ⃛(τ, 0, q₁), Ξ⃛(τ, q₁, q₂), ..., Ξ⃛(τ, q_{M−1}, q_M)] = τ^{[H(0, q₁), H(q₁, q₂), ..., H(q_{M−1}, q_M)]},    (11)

where M is the maximum number of moments used.
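As an illustration of Equations 5-10, the sketch below (our own naming, not taken from any published implementation) computes the normalized and standardized structure function, takes the q-th root, and returns the relative exponents H(q_i, q_j) as differences of consecutive slopes; for a uniscaling series all relative increments beyond the first are close to zero.

```python
import numpy as np

def relative_exponents(logp, taus, qs):
    """Relative scaling exponents H(q_i, q_j) from Equations 5-10.

    K_q is computed exactly as Xi(1, q); the q-th root of Xi(tau, q)/K_q
    scales as tau^{H_q} (Equation 8), and consecutive slope differences
    give the relative exponents H(q_i, q_j) of Equation 10."""
    ln_tau = np.log(taus)
    K = {q: np.mean(np.abs(np.diff(logp)) ** q) for q in qs}   # exact Xi(1, q)
    H = []
    for q in qs:
        xi = np.array([np.mean(np.abs(logp[t:] - logp[:-t]) ** q) for t in taus])
        xi_norm = (xi / K[q]) ** (1.0 / q)                     # Equations 5 and 7
        H.append(np.polyfit(ln_tau, np.log(xi_norm), 1)[0])    # slope H_q (Eq. 8)
    return np.diff(np.asarray(H), prepend=0.0)   # H(0,q1), H(q1,q2), ...

rng = np.random.default_rng(1)
logp = np.cumsum(rng.normal(size=10000))         # uniscaling benchmark series
incs = relative_exponents(logp, np.arange(1, 20), [0.2, 0.4, 0.6, 0.8, 1.0])
# incs[0] is close to 0.5, and incs[1:] are close to 0 for a uniscaling series
```

Equation 11 stacks exactly these relative structure functions into one multivariate system.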
This is a multivariate nonlinear regression which can be easily solved via nonlinear optimization algorithms. Such a methodological approach implies a possible relationship between the q-moments used in the regression. Depending on the model assumptions, one can use Equation 11 or perform M separate regressions, one for each exponent. In the first case, it is then possible to use an F-test to test whether all the coefficients except the first one (H(0, q₁)) are jointly equal to 0, against the alternative that some coefficients are different from 0. This is a less restrictive multiscaling condition. We call strongly multiscaling those processes which reject both the null hypothesis of all the t-tests and the null of the F-test. Conversely, we call weakly multiscaling those processes for which the null hypothesis of all the t-tests is rejected but not the null of the F-test. This is quite intuitive since, if a process is multiscaling, all the relative increments (excluding the special case H(0, q₁)) are statistically significant. However, if the process reconstructed with a single exponent is statistically equivalent to the one reconstructed with the full multiscaling spectrum, it means that such multiscaling behaviour is weak. As already mentioned, it is recommended to fit the model both in log-log scale and in the original coordinate system, and base the choice of the model on the goodness of fit.

3.1.1 Choice of q and τ

An important point in the statistical evaluation of the multiscaling exponent is the choice of q and τ. They must be selected using specific statistical criteria, as wrong values of q and τ can severely bias the result. Regarding q, many research papers about multiscaling systems propose the use of a vast spectrum of qs. This approach has two fallacies. The first one lies in the fact that multiscaling processes are such even for small values of q.
Secondly, and most importantly, given a distribution of returns with tail exponent α, for q ≥ α we have that E[|r|^q] diverges. Hence, we need to have q < α. Any multiscaling behaviour found by neglecting or ignoring this fact is severely biased and possibly false. The method used to set q can come from two different approaches, i.e. established research results or direct tail computation. Since it is empirically well documented that financial time series are fat-tailed, with tail exponent α greater than 1, a conservative approach is to use q ≤ 1. Alternatively, it is possible to estimate α on the empirical distribution through a tail estimator (e.g. (Clauset, Shalizi, and Newman, 2009; Virkar, Clauset et al., 2014)) and use it as a threshold for q. In this paper, we follow the conservative approach and use q ≤ 1. In fact, as already stated, if the multiscaling phenomenon is present, it can be extrapolated from this range of moments. Note also that if the null hypothesis for one or more t-tests is not rejected but the F-test rejects the null hypothesis, the process is a non-stable multiscaling process.

Regarding the time aggregation τ, a general rule would be to use the minimum possible value of τ, τ*, such that the autocorrelation information of the series is preserved. The autocorrelation ρ of the return series at lag τ is defined as

ρ_τ(r(t)) = E[(r(t + τ) − µ)(r(t) − µ)] / σ²,

where µ and σ² are respectively the mean and variance of r(t). It is a well-known stylized fact that returns are expected to be uncorrelated at daily frequency, while the absolute and squared returns exhibit long persistence (Cont, 2001; Chakraborti et al., 2011). Among the different procedures used to estimate τ*, we mention:

• Segmented regression on the structure function (Yue et al., 2017);
• Autocorrelation significance test (Buonocore, Aste, and Di Matteo, 2017).

The first method computes the structure function for each q-moment and then fits a segmented regression in log-log coordinates between τ and Ξ(τ, q), finding two slopes: one for the scaling component and one for the non-scaling component (Yue et al., 2017). The second approach instead chooses the value of τ prior to computing the structure function, setting τ* as the minimum value of τ for which the autocorrelation is not statistically significant (Buonocore, Aste, and Di Matteo, 2017). In this paper we propose a new approach which takes the advantages of both methods. We name it Autocorrelation Segmented Regression (ACSR). The rationale behind this approach is to perform a segmented regression on the autocorrelation function computed on the absolute returns and take τ* as the value which minimizes the sum of squared residuals over the high-autocorrelation state and the random-noise state (plateau). This approach has the advantage of setting the value of τ in advance, avoiding ad-hoc solutions and reducing computations. Moreover, the method is robust to a single non-significant lag: in noisy data it is possible to have an isolated lag for which the autocorrelation is not significant, even though the dependence persists over a considerable number of subsequent lags. The Equation for the proposed Autocorrelation Segmented Regression (ACSR) takes the form

ρ_τ(r(t)) = α + βτ    if τ < τ*,
ρ_τ(r(t)) = α + βτ*   if τ ≥ τ*,    (12)

where α is the intercept of the regression and can be fixed to be equal to ρ_τ with τ = 1, β is a slope parameter for the autocorrelation function, τ is the lag at which the autocorrelation is computed, and τ* is the value of aggregation which maximizes the autocorrelation information. We use ln(τ) instead of τ for a better estimation of τ*. Figure 1 shows how this method works. We generated a process with known τ_max = 250 and ran the ACSR on its autocorrelation function.
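The ACSR fit of Equation 12 can be sketched as a simple grid search over candidate breakpoints (a simplified illustration of ours, not necessarily the authors' exact implementation): for each candidate τ*, fit the linear part in ln(τ), extend it as a plateau, and keep the breakpoint with the smallest total sum of squared residuals.

```python
import numpy as np

def acsr_breakpoint(rho):
    """Grid-search fit of Equation 12 on sample autocorrelations rho[lag-1].

    rho_tau = alpha + beta * ln(tau) for tau < tau*, flat at the tau* level
    afterwards; returns the tau* minimising the sum of squared residuals."""
    lags = np.arange(1, len(rho) + 1)
    ln_tau = np.log(lags)
    best_tau, best_ssr = None, np.inf
    for tau_star in range(3, len(rho) + 1):
        mask = lags < tau_star
        beta, alpha = np.polyfit(ln_tau[mask], rho[mask], 1)   # linear regime
        fit = np.where(mask, alpha + beta * ln_tau,
                       alpha + beta * np.log(tau_star))        # plateau regime
        ssr = np.sum((rho - fit) ** 2)
        if ssr < best_ssr:
            best_tau, best_ssr = tau_star, ssr
    return best_tau

# synthetic check: autocorrelation decaying linearly in ln(tau) up to tau* = 50
lags = np.arange(1, 201)
rho = np.where(lags < 50, 0.9 - 0.15 * np.log(lags), 0.9 - 0.15 * np.log(50))
tau_star = acsr_breakpoint(rho)    # recovers the breakpoint at 50
```

On noise-free piecewise autocorrelations the search recovers the true breakpoint exactly; on empirical data the minimum-SSR criterion plays the role described in the text.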
As shown in the Figure, we get a value of ln(τ*) ≈ 5.5, which corresponds to τ* = 247.

Before turning our attention to the simulation experiments, let us recall the full procedure to follow in order to robustly extract the scaling exponents:

1. Compute τ* with the Autocorrelation Segmented Regression method;
2. Set q_max from the estimated tail exponent α, or rely on the empirical evidence available in the literature;
3. Perform the linear and nonlinear regressions with the above parameters (Equation 11).

(Valsamis, Husband, and Chan, 2019) reviews different segmented regression specifications. It is important to notice that the segmented regression on the structure function and the ACSR method yield similar results.

The first step of the testing procedure verifies that each estimated H(q_i, q_j) is statistically significant through a t-test. The second step is devoted to the F-test. In particular, we perform the F-test using the predicted relative moments from the regression with the full estimated scaling spectrum and from the one in which only the first scaling coefficient is different from 0, i.e. Ξ⃛̂(τ, q) and Ξ⃛̂(τ, q̄), where q̄ = {q₁, 0, ..., 0}. If the null hypothesis is rejected, it means that the full spectrum is necessary to recover all the relative moments. The third step of the procedure consists of a random walk (RW) hypothesis test. Assuming the multiscaling parameter B = 0, we perform the regression of Equation 9 with only the constant A and test whether Â = 0.5 with a t-test. If the null hypothesis is rejected, the RW scaling is incorrect and the use of the square-root-of-time rule severely biases the risk measures. The last step involves a confirmatory test of the results deriving from the first and second steps of the test procedure. In particular, we perform the full regression of Equation 9 and test for Â = 0.5 and B̂ = 0 using a t-test.
If this test gives a conflicting result with respect to the first and second steps, we cannot assert anything precise about the process, and a deeper analysis needs to be performed by controlling for different input specifications. For the non-linear regression, in order to use the F-test we have to use ln(Ξ⃛̂(τ, q)) and ln(Ξ⃛̂(τ, q̄)).

In the simulation experiment, we focus on one of the most used models to generate multifractal time series: the Multifractal Random Walk (MRW), proposed by (Bacry, Delour, and Muzy, 2001b,a). This model is capable of generating multifractal time series with a known multiscaling spectrum. In addition, it is able to generate time series with the empirically observed stylized facts. In the discrete version of the MRW, the process r_τ(t) = p(t + τ) − p(t) is defined as (Bacry, Delour, and Muzy, 2001b)

r_τ(t) = p(t + τ) − p(t) = Σ_{k = t/Δt + 1}^{(t+τ)/Δt} ε_Δt(k) e^{ω_Δt(k)},    (13)

with ε_Δt ∼ N(0, σ²Δt) and ω_Δt ∼ N(−λ² ln(L/Δt), λ² ln(L/Δt)), where λ is called the intermittency parameter and determines the strength of the multifractality, L is the autocorrelation length, σ² is the variance of the process and Δt is the discretization step. The distinctive feature of the MRW is that, even if the ε_Δt(k) are independent, the ω_Δt(k) are not, having autocovariance

Cov(ω_Δt(k₁), ω_Δt(k₂)) = λ² ln ρ_Δt(k₁ − k₂),

with

ρ_Δt(k₁ − k₂) = L / ((|k₁ − k₂| + 1)Δt)   for |k₁ − k₂| < L/Δt,   and 1 otherwise.

In the continuous limit, the scaling exponents of this model are

ζ(q) = qH_q = (λ² + 1/2) q − (λ²/2) q².    (14)

The power of this model is that it encompasses all the major stylized facts using only three parameters (λ, L, σ). In fact, this model is able to reproduce fat tails, volatility clustering and a multiscaling spectrum.
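A compact way to simulate the discrete MRW of Equation 13 (with Δt = 1) is to draw the correlated Gaussian log-volatility ω from its full covariance matrix. The sketch below is our own illustrative implementation, feasible for moderate path lengths; it uses an eigen-decomposition square root of the covariance for numerical robustness.

```python
import numpy as np

def simulate_mrw(T, lam, L, sigma=1.0, seed=0):
    """Discrete Multifractal Random Walk (Equation 13) with Delta t = 1.

    r(t) = eps(t) * exp(omega(t)), with iid eps ~ N(0, sigma^2) and Gaussian
    omega with mean -lam^2 * ln(L) and covariance lam^2 * ln(rho(k)), where
    rho(k) = L / (|k| + 1) for |k| < L and 1 otherwise."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    cov = lam ** 2 * np.log(np.where(k < L, L / (k + 1.0), 1.0))
    w, V = np.linalg.eigh(cov)                     # robust matrix square root
    sqrt_cov = V * np.sqrt(np.clip(w, 0.0, None))
    omega = -lam ** 2 * np.log(L) + sqrt_cov @ rng.normal(size=T)
    eps = rng.normal(scale=sigma, size=T)
    return eps * np.exp(omega)                     # log-returns r(t)

returns = simulate_mrw(T=2000, lam=0.3, L=250)
logp = np.cumsum(returns)                          # a multiscaling log-price path
```

With this parameterization the unconditional variance of the returns stays equal to σ², since the mean of ω compensates for the lognormal volatility factor.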
For the purpose of the simulation, we generated a set of sample paths, each of length T = 10000, and we set the model parameters to L = 250, σ = 1 and three values of the intermittency parameter λ, in accordance with empirical findings (Bacry, Kozhemyak, and Muzy, 2008a,b; Løvsletten and Rypdal, 2012). As explained in the previous Section, q_max = 1; in particular, we use a grid of 50 equally spaced moments q ∈ (0, 1]. Since the process is generated to be endogenously correlated and with volatility clustering, the anomaly procedure does not find any anomaly. To select τ_max, we use the τ* estimated by the ACSR. Table 1 shows the results of the method for the different specifications of λ. As it is possible to observe, the procedure is quite accurate and the 95% confidence intervals (C.I.) always contain the value L = 250, which is the truncation parameter.
Table 1: Mean estimated τ̄* and 95% C.I. for the three specifications of λ, compared with the truncation parameter τ_max = 250. C.I. computed over 200000 bootstrapped samples.

Once the parameters are estimated, we compute the multiscaling exponents and evaluate their statistical significance. Since Equation 14 gives us the true multiscaling spectrum, we can easily test the performance of the GHE approach and compare it with the new methodology proposed in this paper, based on the normalized and standardized structure function (NSSF) of Equation 8 and the relative normalized and standardized structure function (RNSSF) of Equation 11, which we name the Normalized and Standardized Generalized Hurst Exponent (NSGHE) and the Relative Normalized and Standardized Generalized Hurst Exponent (RNSGHE), respectively. We then use the latter methodology to test the multiscaling spectrum. Tables 2 and 3 present the root mean squared errors (RMSE) of the different methodologies computed over the simulated realizations for both A and B of Equation 4. As it is possible to notice, the RNSGHE generally outperforms the other specifications. It is important to highlight that the elimination of the slope ambiguity has improved the results considerably. In fact, the standard GHE approach has the highest RMSE among all the specifications. This result was expected, since the new methodology helps to remove uncertainty and thus improves the estimation performance. We finally
report the results for the multiscaling test.

Table 2: RMSE for the parameter A for the different methodologies and the three specifications of λ. Subscript L refers to the linear regression, while NL refers to the non-linear regression. For the linear regression case, the NSGHE and RNSGHE are equivalent models, so we report only the result for the RNSGHE.

Given the superiority of the RNSGHE_L and RNSGHE_NL models, from this stage on we will use these models in the paper. We now illustrate the nature of a process by performing the multiscaling test. Figure 2 shows the p-values of all the coefficients related to the q-moment Equations for a realization of the MRW model with the same parameter specification as above. What we can observe is that if we choose a standard confidence level
, the null hypothesis that the scaling increments are equal to 0 will be rejected for all the moments. However, if we set a more stringent confidence level, the null is not rejected for some coefficients, resulting in a uniscaling process.

Table 3: RMSE for the parameter B for the different methodologies and the three specifications of λ. Subscript L refers to the linear regression, while NL refers to the non-linear regression.

Figure 2: P-values of all the 50 coefficients related to the q-moment Equations for a Multifractal Random Walk with T = 10000, L = 250 and σ = 1.

For the other two values of λ, all the p-values are almost 0. This is not unexpected, since we generated processes with a non-negligible amount of multiscaling. Once the t-test has been carried out, we perform an F-test on the overall scaling spectrum (Equation 11). Results are reported in Table 4. It is possible to infer that the process generated with the smallest λ does not reject the null hypothesis for which all the scaling increments are equal to 0, while for the other two values of λ the null is rejected and the full scaling spectrum is necessary to reconstruct the relative moments. By combining this result with the outcome of the t-test, we can state that the process generated with the smallest λ is weakly multiscaling at the standard confidence level but is not multiscaling at the more stringent one. The other two specifications are strongly multiscaling at any reasonable confidence level. Since these processes have been generated such that they have a specific multiscaling spectrum, the other two tests of the multiscaling test procedure have trivial results.

Table 4: F-test results for the null hypothesis that all the scaling increments except H(0, q₁) are equal to 0, for the RNSGHE_L and RNSGHE_NL models and the three specifications of λ. Subscript L refers to the linear regression, while NL refers to the non-linear regression.
In this Section we perform the empirical application of our statistical methodology.
The dataset used for the analyses is composed of stocks listed in the
Dow Jones (DJ). In particular, close prices are recorded on a daily basis from 03/05/1999 up to 20/11/2019, i.e. 5363 trading days. We use 27 of the 30 listed stocks, as they are the ones for which the entire time series is available. For the purpose of our analysis, we use log-prices and log-returns. Table 5 reports the summary statistics of the stocks under analysis. As it is possible to notice from Table 5, all the log-returns are centered at 0, the skewness is (in most cases) different from 0, while the kurtosis clearly shows one of the stylized facts of most financial time series: fat tails.

In this Section, we report the results of the multiscaling test, covering all the steps of the testing procedure explained at the end of Section 3.1.2. Results are presented using the RNSGHE_L because, as explained in the previous Section, it has the best performance in the correct estimation of the scaling spectrum; it is important to highlight that if the other methods proposed in this paper are adopted, the results remain qualitatively unchanged. All the tests are performed using a fixed confidence level unless differently stated. Results are summarized in Table 6. The second column of the Table presents the τ* calculated using the ACSR methodology presented in Section 3.1.1. For its estimation, we fix a maximum value for the choice of τ equal to T/5 = 1072 in order not to bias the scaling estimation with too few values. We notice that several stocks reach this boundary value, suggesting a very high rate of persistence in the time series. The third column of the Table reports the response to the weak multiscaling (weak M-S) process hypothesis, i.e. the hypothesis that a single
Table 5: Summary statistics (mean, median, min, max, standard deviation, skewness, kurtosis, and number of observations) of the daily log-returns for the 27 stocks analysed: AAPL, AXP, BA, CAT, CSCO, CVX, DIS, GS, HD, IBM, INTC, JNJ, JPM, KO, MCD, MMM, MRK, MSFT, NKE, PFE, PG, TRV, UNH, UTX, VZ, WMT, XOM.
The next columns report the response to the strong multiscaling (strong M-S) hypothesis and the RW test, for which we run the regression H_q = A + ε and test whether the estimated A, Â, is equal to 0.5 (this is the standard procedure to estimate the Hurst exponent for uniscaling processes, i.e. Â = Ĥ). We note that only for two stocks the null hypothesis is not rejected. A confirmatory step then tests Â = 0.5 and the statistical significance of B̂ in the full regression model of Equation 9. Finally, the last three columns of the Table report the estimated Hurst exponent Ĥ computed for the RW test, the linear scaling index Â and the multiscaling proxy B̂. These results point out that multiscaling is a stylized fact and can be statistically tested by rewriting the structure function in a convenient way.
963 NO YES NO YES 0.5567 0.5774 -0.0407(0.5534, 0.5600) (0.5770, 0.5779) (-0.0414, 0.0399)
AXP BA CAT
527 NO YES NO YES 0.5422 0.5603 -0.0355(0.5393, 0.5451) (0.5597, 0.5608) (-0.0364, -0.0345)
CSCO
CVX
313 NO YES NO YES 0.4722 0.4872 -0.0293(0.4699, 0.4746) (0.4868, 0.4876) (-0.0300, -0.0286)
DIS GS HD IBM
INTC JN
488 NO YES NO YES 0.4664 0.4880 -0.0423(0.4629, 0.4698) (0.4874, 0.4885 ) (-0.0432, -0.0415)
JPM KO MCD
480 NO YES NO YES 0.5116 0.5266 -0.02930.5092, 0.5140) (0.5261, 0.5270) (-0.0301, -0.0286)
MMM
MRK
867 NO YES NO YES 0.5231 0.5498 -0.0522(0.5189, 0.5274) (0.5484, 0.5512) (-0.0546, -0.0499)
MSFT
NKE
PFE PG
969 NO YES NO YES 0.4617 0.4834 -0.0426(0.4582, 0.4652) (0.4829, 0.4839) (-0.0435, -0.0417)
TRV
604 NO YES NO YES 0.4673 0.4861 -0.0368(0.4643, 0.4703) (0.4857, 0.4865) (-0.0375, -0.0362)
UNH
580 NO YES NO YES 0.5163 0.5346 -0.0360(0.5134, 0.5192) (0.5342, 0.5351) (-0.0367, -0.0352)
UTX VZ WMT
XOM
344 NO YES NO YES 0.4642 0.4852 -0.0413(0.4608, 0.4675) (0.4846, 0.4858) (-0.0422, -0.0403)
Table 6: Results of the multiscaling estimation and testing procedure.15ubmitted to the European Journal of FinanceIn the next subsection, these results are used to compare the standard VaR calculationusing square root of time rule and using the estimated scaling.
Multiscaling time series are generated from trading dynamics. One of the fundamental aspects of multiscaling systems is the strong endogeneity of the sample paths, which is considered to originate from financial trading dynamics. For this reason, transient exogenous shocks only distort the analysis and, consequently, the estimation procedure. Hence, the statistical procedures used to analyse multiscaling systems are highly sensitive to exogenous shocks. In this context, we refer to an exogenous shock as an unexpected and transient behaviour of the stock price, not explainable by the market conditions or by the price path. In addition, it is possible to have anomalies in the time series due to errors or algorithmic trading crashes. The anomalies in financial time series can be grouped into 3 main categories: spikes, jumps and contamination errors. Figure 3 shows these possible anomalies. The top left panel is dedicated to the original log-price time series for Verizon. The time series is quite volatile and, in fact, the log-returns have a high Kurtosis index. However, although the distribution of log-returns is fat tailed, there are no clear anomalies. The top right panel of Figure 3 depicts the same log-price time series to which a strongly fat-tailed series (with much larger Kurtosis) is added. This is the case of a contamination error, which is generally due to machine errors in the data transmission process. The bottom left panel reflects the Verizon log-price time series with a random spike added. The spike can arise from multiple sources, among others algorithmic trading errors or contamination errors due to data manipulations. The last panel, in the bottom right corner, represents the log-price series with an added jump. Jumps per se can arise from endogenous or exogenous shocks, but if they come from an endogenous driving force, they persist in the jump direction. If they come from an exogenous source instead, they tend to be transient. In a relatively recent paper (D.
Sornette and Muzy, 2003), the authors explain that huge financial crashes can originate from endogenous shocks which have a highly persistent behaviour. This kind of shock is inherent in the price process, so it is not a transient anomaly.

In a mostly technical paper, (Katsev and L'Heureux, 2003) showed, both theoretically and experimentally, that such data anomalies can strongly bias the results, especially for short datasets. In particular, the paper shows that under certain circumstances, these irregularities can generate spurious scaling. For this reason, it is suggested to analyse the time series and eliminate such anomalies before proceeding with the scaling estimation. In order to do so, we propose a methodology based on financial stylized facts. More precisely, we use volatility clustering and long-range dependence of asset returns (Cont, 2001; Chakraborti et al., 2011).

Figure 3: Verizon log-prices time series and some anomalies of financial time series.

In this empirical context, the quantities that we name cumulative variance (CV) and cumulative auto-covariance (CAV),

CV(t) = Σ_{i=1}^{t} r_i²    and    CAV(t) = Σ_{i=1}^{t} |r_{i+1}| |r_i|,

should be very similar, except when an exogenous (unexpected) anomaly exists. The volatility clustering drives the similarity in the short period, since |r_{i+1}| and |r_i| are expected to be very similar (same cluster), while the long-range dependence drives the similarity of the two measures over the long-run.
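As a sketch, the two cumulative measures and their difference D(t) = CV(t) - CAV(t) are one-liners in NumPy. The toy locator below (the largest one-step increment of D) is a deliberately crude stand-in for a proper change-point detection on the mean level of D(t); the spiked series is synthetic.

```python
import numpy as np

def cv_cav(r):
    """CV(t) = sum_{i<=t} r_i^2 and CAV(t) = sum_{i<=t} |r_{i+1}| |r_i|,
    both returned on the common range t = 1, ..., T-1."""
    a = np.abs(np.asarray(r, dtype=float))
    cv = np.cumsum(a[:-1] ** 2)
    cav = np.cumsum(a[1:] * a[:-1])
    return cv, cav

# Toy example: small homogeneous returns plus one exogenous spike.
r = np.full(100, 0.01)
r[50] = 1.0

cv, cav = cv_cav(r)
D = cv - cav                               # D(t) = CV(t) - CAV(t)

# The spike inflates r_t^2 but not |r_{t+1}||r_t|, so the largest
# one-step increment of D(t) marks its position.
steps = np.diff(np.concatenate([[0.0], D]))
spike_at = int(np.argmax(np.abs(steps)))
print(spike_at)  # -> 50
```

Away from the spike the two measures track each other and D(t) stays flat, which is exactly the behaviour exploited in the text.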
In the financial High Frequency literature, these quantities are called Realized Variance and Bipower Variation. Figure 4 represents the two quantities for the Verizon stock. These two quantities are approximately equal, confirming that even with high volatility and many tail events, the time series does not contain exogenous shocks. The difference between the two cumulative series is given by D(t) = CV(t) - CAV(t). By running a change-point detection on the mean level of D(t), it is possible to detect the anomalies in the price series and replace the corresponding values in the original series according to a specific rule, e.g. the mean of the previous and subsequent datapoints. The top panel of Figure 5 reports CV(t) and CAV(t) for the series with the added spike, while the bottom panel reports D(t). The panel shows that the procedure correctly identifies the position of the anomaly.

Given the fact that such events (spikes or jumps) are rare and of unconventional magnitude, their removal can only benefit the analysis. Let us now estimate the multiscaling exponents for the Verizon stock when the anomalies reported in Figure 3 (bottom panels) are not removed from the time series. Table 7 reports the results. As can be noticed, the estimated values change considerably, especially in the scenario in which a spike is added.

        Ĥ       Â       B̂
Spike   0.5290  0.6013  -0.1417
Jump    0.4942  0.5220  -0.0543

Table 7: Results of the multiscaling estimation on the time series reported in Figure 3 (bottom panels) with anomalies not removed.

For completeness, we also performed a t-test with the null hypothesis of no difference between the estimates with anomalies and the ones reported in Table 6. The null hypothesis is strongly rejected for all the coefficients at any confidence level. To show how these anomalies can generate spurious multiscaling, we generate 100 fractional Brownian motions (a uniscaling process) with Hurst exponent H equal to the one estimated for Verizon.
We add a spike and a jump to these simulated time series and estimate Ĥ, Â and B̂ for the series both with and without the anomalies. The results are reported in Table 8. As we can see, when the anomalies are not present in the time series, the average values of Ĥ, Â and B̂ are in line with the true values and not statistically different from them. In the case of a spike, Ĥ, Â and B̂ are severely biased; in particular, the spike makes the time series look multiscaling, while we know it is not. Finally, in the case of the added exogenous jump, the scaling exponent curves with q and the parameter B̂ is not equal to 0; the other two estimated coefficients are also upward biased and statistically different from the theoretical ones. These results clearly show that the scaling exponents are sensitive to such anomalies, and estimates can be biased if such anomalies are not treated with care. Therefore, it is fundamental to analyse the data before performing the multiscaling analysis.

Figure 5: Cumulative variance (CV(t)) and cumulative auto-covariance (CAV(t)) for the Verizon time series (top panel) and the measure D(t) (bottom panel) in case of the added spike.

Figure 6: Cumulative variance (CV(t)) and cumulative auto-covariance (CAV(t)) for the Verizon time series (top panel) and the measure D(t) (bottom panel) in case of the added jump.

              Ĥ                  Â                  B̂
No anomalies  0.4663             0.4654             -0.0018
              (0.4597, 0.4710)   (0.4596, 0.4728)   (-0.0048, 0.0012)
Spike         0.4403             0.4893             -0.0961
              (0.4330, 0.4481)   (0.4799, 0.4998)   (-0.1027, -0.0899)
Jump          0.4845             0.4806             0.0076
              (0.4785, 0.4905)   (0.4737, 0.4874)   (0.0045, 0.0110)

Table 8: Average of the 100 estimated Ĥ, Â and B̂ for fractional Brownian motions with H equal to the value estimated for Verizon. C.I.
computed over 200000 bootstrapped samples are reported in parentheses.
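The spurious-multiscaling effect of Table 8 can be reproduced in miniature. The sketch below uses a plain Brownian motion (H = 0.5) instead of fBm for brevity, a textbook structure-function estimator of the generalized Hurst exponents, and a linear fit H(q) = A + B(q - 1) as a simple stand-in for the regression of Equation 9; this is an illustration, not the paper's RNSGHE pipeline.

```python
import numpy as np

def ghe(x, qs=(1.0, 2.0, 3.0), taus=range(1, 20)):
    """Generalized Hurst exponents H(q) from the structure function
    K_q(tau) = mean(|x(t+tau) - x(t)|^q) ~ tau^(q H(q))."""
    taus = np.asarray(list(taus))
    H = []
    for q in qs:
        K = [np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus]
        slope = np.polyfit(np.log(taus), np.log(K), 1)[0]  # slope = q H(q)
        H.append(slope / q)
    return np.array(H)

def ab_fit(qs, H):
    """Least-squares fit of H(q) = A + B (q - 1); B = 0 for uniscaling."""
    B, A = np.polyfit(np.asarray(qs) - 1.0, H, 1)
    return A, B

rng = np.random.default_rng(0)
qs = (1.0, 2.0, 3.0)
bm = np.cumsum(rng.standard_normal(5000))  # uniscaling benchmark, H = 0.5

spiked = bm.copy()
spiked[2500] += 60.0                       # a single contaminating spike

A0, B0 = ab_fit(qs, ghe(bm, qs))
A1, B1 = ab_fit(qs, ghe(spiked, qs))
# The clean series gives B close to 0; the spike flattens the high-order
# structure functions at short lags and drags B well below 0, i.e. it
# fakes multiscaling in a process that has none.
```

The spike dominates the high-q moments at short lags, bending H(q) downward with q, which is exactly the bias pattern reported for the spike row of Table 8.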
In this section we show that a simple VaR configuration without scaling or multiscaling considerations might bias the VaR estimation at higher aggregation scales. We use daily stock data from the Dow Jones index to estimate the multiscaling spectrum and to perform the multiscaling test described in Section 5. After the procedure is carried out, we estimate the Historical and Gaussian VaR at 1 day and we successively use it to compute the yearly VaR using the square root of time rule. We then compare it with the fractional VaR with proper scaling and highlight possible biases. To conclude, we propose a methodology to compute a multifractal VaR. VaR is an easy and intuitive way to quantify risk for assets and portfolios. Let
VaR(τ, 1-α) be the Value at Risk at frequency τ for a confidence level equal to 1-α, which satisfies

P(r_τ(t) < VaR(τ, 1-α)) = α,    (15)

where r_τ(t) are the log-returns at frequency τ. Several methodologies are used to compute the VaR. Among them, we recall the Historical VaR (HVaR) and the Gaussian VaR (GVaR). The former is a non-parametric approach which uses historical data to compute the VaR, while the latter assumes a Gaussian distribution of stock returns and applies the Gaussian formula for the percentiles computation to extract the VaR at a given confidence level. The issue faced in applied finance is that the square root of time rule works only under the assumption of iid Gaussian returns. However, this technique is widely adopted regardless of its assumptions. In our analysis, VaR is computed using the two aforementioned approaches at τ = 1 day (VaR(1, 1-α)). The annual VaR (VaR(250, 1-α)) is calculated both with the scaling exponent equal to 0.5, i.e. VaR(250, 1-α) = VaR(1, 1-α) × 250^0.5, and with the estimated H, i.e. VaR_H(250, 1-α) = VaR(1, 1-α) × 250^Ĥ. We further estimate the true
VaR(250, 1-α) using annual returns (τ = 250 days) and compare them. Results are shown in Figures 7 and 8. These Figures show that when we compare the VaR with the H scaling time rule and the VaR with the square root of time rule, the deviation from the true VaR is lower when the former approach is used. In fact, the bias (with respect to the true VaR) of the VaR with H scaling time is considerably lower for most of the stocks in both the HVaR and GVaR settings. This is due to the fact that, over the long-run, even a small divergence from the assumption of a scaling exponent equal to 0.5 can make a substantial impact. To conclude the analysis, we also report the relative error, i.e.

RE = |VaR(250, 1-α) - VaR(1, 1-α) × 250^K| / |VaR(250, 1-α)|,

where K is equal to 0.5 for the VaR computed with the square root of time rule or equal to the estimated H, Ĥ, reported in Table 6 (we will discuss the multiscaling case later in the paper). This helps us to identify the magnitude of the bias from the true VaR and to compare the two scaling approaches. Figure 9 shows the results. As the Figure shows, using the correct scaling results in a smaller relative error. This confirms that the choice of a proper scaling exponent should not be neglected by the financial community, considering that its estimation and testing is relatively simple.

Figure 7: HVaR using annual data and using daily data rescaled by the factor 250^0.5 and by 250^Ĥ. The stocks are sorted in order of the magnitude of the true VaR.

Figure 8: GVaR using annual data and using daily data rescaled by the factor 250^0.5 and by 250^Ĥ. The stocks are sorted in order of the magnitude of the true VaR.

Figure 9: Relative error between the true VaR calculated using annual data and the HVaR and GVaR computed using daily data scaled by the factor 0.5 and by the estimated factor Ĥ.

In the previous subsection, we showed that using the correct scaling contributes to reducing the computation error for VaR at lower frequencies.
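The daily-to-annual comparison can be sketched in a few lines (NumPy plus the standard library; the Gaussian quantile comes from Python's statistics.NormalDist, and the Ĥ value below is borrowed from the AAPL row of Table 6 purely as an illustration, on synthetic returns).

```python
import numpy as np
from statistics import NormalDist

def historical_var(returns, alpha=0.01):
    """Historical VaR: the alpha-quantile of the empirical return distribution."""
    return np.quantile(returns, alpha)

def gaussian_var(returns, alpha=0.01):
    """Gaussian VaR: mean + std * z_alpha under a normality assumption."""
    r = np.asarray(returns, dtype=float)
    return r.mean() + r.std(ddof=1) * NormalDist().inv_cdf(alpha)

def scale_var(var_1d, horizon=250, h=0.5):
    """Scale a 1-day VaR to `horizon` days: VaR(horizon) = VaR(1) * horizon**h.
    h = 0.5 is the square root of time rule; h = H-hat uses the estimated scaling."""
    return var_1d * horizon ** h

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 5000)          # synthetic daily log-returns
v1 = historical_var(r, alpha=0.01)       # 1-day VaR (a negative return)
v_sqrt = scale_var(v1, 250, 0.5)         # square root of time rule
v_h = scale_var(v1, 250, 0.5567)         # H-hat scaling (AAPL value, illustrative)
```

For a persistent series (Ĥ > 0.5) the square-root rule understates annual risk, while for an anti-persistent one (Ĥ < 0.5) it overstates it, which is the bias pattern visible in Figures 7-9.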
However, as explained in Section 5.2, all the time series analysed present strong multiscaling. To deal with such situations, we report here a possible solution. While VaR is related to the log-returns, multiscaling is a property of the moments of the log-returns. For this reason, there is no straightforward formula to compute a VaR which takes multiscaling into account. An exception to this is the Multifractal VaR proposed in (Lee, Song, and Chang, 2016), where the authors introduce a VaR consistent with the multifractality of financial time series using the Multifractal Model of Asset Returns (MMAR). In a previous paper, (Batten, Kinateder, and Wagner, 2014) perform a similar analysis, relying on MMAR Monte Carlo simulations and then computing the VaR on the simulated time series. The Monte Carlo approach has the advantage of letting the researcher use the model that best depicts the data. In fact, one can calibrate the MRW or the MMAR and generate a large number of sample paths which can be used to compute the VaR. In case of moderate multiscaling, the difference can be low, but for multiscaling processes with a sufficiently large |B|, neglecting this feature can strongly distort the VaR. In this work, we use the MRW to simulate 250 trading days (i.e. 1 year) of log-returns and compute the VaR of the simulated paths. For this purpose, three parameters need to be estimated: the variance σ², the autocorrelation scale parameter L and the intermittency parameter λ. The variance can be estimated from the log-returns time series as σ² = Var(r_τ(t)) with τ = 1, the parameter L is set equal to τ*, while the intermittency parameter λ can be extracted by equating the estimated coefficients of Equation 4 to the parameters in Equation 14, obtaining two (possibly different) estimates of λ, i.e. λ_A and λ_B. If the data generating process is not exactly an MRW, the estimation of λ by using A or B can differ substantially; in our case, the two estimates are very similar for most of the stocks analysed.
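A minimal Monte Carlo sketch of this construction, assuming the standard MRW specification r_i = σ ε_i e^{ω_i} with ω Gaussian, Cov(ω_i, ω_j) = λ² ln(L/(|i-j|+1)) for |i-j| < L and mean -λ² ln L (so that E[e^{2ω}] = 1). The parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

def simulate_mrw(n_days=250, n_paths=2000, sigma=0.01, lam=0.1, L=250, seed=0):
    """Simulate MRW-style daily log-returns r_i = sigma * eps_i * exp(w_i)."""
    rng = np.random.default_rng(seed)
    d = np.abs(np.subtract.outer(np.arange(n_days), np.arange(n_days)))
    cov = np.where(d < L, lam ** 2 * np.log(L / (d + 1.0)), 0.0)
    # Eigen-decomposition with clipping keeps the sampler valid even if the
    # truncated log-covariance is only approximately positive semi-definite.
    vals, vecs = np.linalg.eigh(cov)
    root = vecs * np.sqrt(np.clip(vals, 0.0, None))
    w = root @ rng.standard_normal((n_days, n_paths)) - lam ** 2 * np.log(L)
    eps = rng.standard_normal((n_days, n_paths))
    return sigma * eps * np.exp(w)          # shape (n_days, n_paths)

def msvar(alpha=0.01, **kw):
    """Multiscaling VaR: alpha-quantile of the simulated annual log-returns."""
    annual = simulate_mrw(**kw).sum(axis=0)
    return np.quantile(annual, alpha)
```

In practice one would plug in the σ², L = τ* and λ calibrated per stock (once with λ_A and once with λ_B) and read the MSVaR off the simulated annual distribution.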
For each stock we estimate the three parameters σ², L and λ, and we generate independent paths of daily log-returns for one year (i.e. 250 days). Finally, on these simulations we compute the VaR, which we name Multiscaling VaR (MSVaR), at the chosen confidence level. Results are depicted in Figure 10.

Figure 10: Annual Historical VaR (HVaR) and Multiscaling VaR (MSVaR). Stocks are sorted according to the magnitude of the Historical VaR.

It is possible to appreciate that the MSVaR computed on the simulated paths has a size comparable to the Historical VaR computed on annual data. It is also important to notice that the values predicted using λ_A and λ_B are very similar, suggesting that the stock log-returns can be adequately approximated by the MRW model. Nevertheless, we remark the importance of the full multiscaling estimation and testing procedure which leads to the MSVaR: if the previous analysis is bypassed, the estimated risk metrics can be severely biased and inconsistent.

In this paper we proposed a step-by-step procedure to robustly estimate and test multiscaling in financial time series. By rewriting the structure function in a convenient way, we were able to perform multiple tests on the scaling spectrum and assess the statistical significance of multiscaling, discriminating between weak and strong multiscaling. We have shown the effect of anomalies in financial time series and studied their impact on the estimated scaling exponents. Moreover, we have shown how the use of a proper scaling exponent can help reduce the error in forecasting VaR at lower frequencies with respect to the commonly used square root of time rule. Finally, we have proposed a multiscaling-consistent VaR (MSVaR) based on Monte Carlo simulations of the calibrated MRW.

Acknowledgement
We want to thank the ESRC Network Plus project 'Rebuilding macroeconomics'. We are grateful to NVIDIA Corporation for supporting our research in this area with the donation of a GPU. We thank Bloomberg for providing the data.
References
Bacry, E., A. Kozhemyak, and J.-F. Muzy. 2008a. "Log-Normal continuous cascades: aggregation properties and estimation. Application to financial time-series." arXiv preprint arXiv:0804.0185.
Bacry, Emmanuel, Jean Delour, and Jean-François Muzy. 2001a. "Modelling financial time series using multifractal random walks." Physica A: Statistical Mechanics and its Applications 299 (1): 84–92.
Bacry, Emmanuel, Jean Delour, and Jean-François Muzy. 2001b. "Multifractal random walk." Physical Review E 64 (2): 026103.
Bacry, Emmanuel, Alexey Kozhemyak, and Jean-François Muzy. 2008b. "Continuous cascade models for asset returns." Journal of Economic Dynamics and Control 32 (1): 156–199.
Barunik, Jozef, and Ladislav Kristoufek. 2010. "On Hurst exponent estimation under heavy-tailed distributions." Physica A: Statistical Mechanics and its Applications 389 (18): 3844–3855.
Batten, Jonathan A., Harald Kinateder, and Niklas Wagner. 2014. "Multifractality and value-at-risk forecasting of exchange rates." Physica A: Statistical Mechanics and its Applications.
Buonocore, Riccardo J., Tomaso Aste, and Tiziana Di Matteo. 2016. "Measuring multiscaling in financial time-series." Chaos, Solitons & Fractals 88: 38–47. Complexity in Quantitative Finance and Economics.
Buonocore, Riccardo J., Tomaso Aste, and Tiziana Di Matteo. 2017. "Asymptotic scaling properties and estimation of the generalized Hurst exponents in financial data." Physical Review E 95: 042311. https://link.aps.org/doi/10.1103/PhysRevE.95.042311.
Buonocore, R.J., G. Brandi, R.N. Mantegna, and T. Di Matteo. 2019. "On the interplay between multiscaling and stock dependence." Quantitative Finance 20 (1): 133–145.
Calvet, Laurent E., and Adlai J. Fisher. 2002. "Multifractality in asset returns: Theory and evidence." Review of Economics and Statistics 84 (3): 381–406.
Chakraborti, Anirban, Ioane Muni Toke, Marco Patriarca, and Frédéric Abergel. 2011. "Econophysics review: I. Empirical facts." Quantitative Finance 11 (7): 991–1012.
Clauset, Aaron, Cosma Rohilla Shalizi, and Mark E. J. Newman. 2009. "Power-law distributions in empirical data." SIAM Review 51 (4): 661–703.
Cont, Rama. 2001. "Empirical properties of asset returns: stylized facts and statistical issues." Quantitative Finance 1 (2): 223–236.
Dacorogna, Michel M., Ramazan Gençay, Ulrich A. Müller, Richard B. Olsen, and Olivier V. Pictet. 2001. An Introduction to High-Frequency Finance. San Diego: Academic Press.
Danielsson, Jon, and Jean-Pierre Zigrand. 2006. "On time-scaling of risk and the square-root-of-time rule." Journal of Banking & Finance 30 (10): 2701–2713.
Di Matteo, Tiziana. 2007. "Multi-scaling in finance." Quantitative Finance 7 (1): 21–36.
Di Matteo, T., T. Aste, and M. M. Dacorogna. 2003. "Scaling behaviors in differently developed markets." Physica A: Statistical Mechanics and its Applications 324: 183–188.
Di Matteo, T., T. Aste, and M. M. Dacorogna. 2005. "Long-term memories of developed and emerging markets: Using the scaling analysis to characterize their stage of development." Journal of Banking & Finance 29 (4): 827–851.
Fukasawa, Masaaki, Tetsuya Takabatake, and Rebecca Westphal. 2019. "Is Volatility Rough?" arXiv preprint arXiv:1905.04852.
Gatheral, Jim, Thibault Jaisson, and Mathieu Rosenbaum. 2018. "Volatility is rough." Quantitative Finance 18 (6): 933–949.
Hurst, Harold Edwin. 1956. "Methods of using long-term storage in reservoirs." Proceedings of the Institution of Civil Engineers.
Kantelhardt, Jan W., Stephan A. Zschiegner, Eva Koscielny-Bunde, Shlomo Havlin, Armin Bunde, and H. Eugene Stanley. 2002. "Multifractal detrended fluctuation analysis of nonstationary time series." Physica A: Statistical Mechanics and its Applications 316 (1): 87–114.
Katsev, Sergei, and Ivan L'Heureux. 2003. "Are Hurst exponents estimated from short or irregular time series meaningful?" Computers & Geosciences 29 (9): 1085–1089.
Lee, Hojin, Jae Wook Song, and Woojin Chang. 2016. "Multifractal value at risk model." Physica A: Statistical Mechanics and its Applications.
IISE Transactions 50 (9): 767–776.
Løvsletten, Ola, and Martin Rypdal. 2012. "Approximated maximum likelihood estimation in multifractal random walks." Physical Review E 85 (4): 046705.
Lux, Thomas. 2004. "Detecting multi-fractal properties in asset returns: The failure of the scaling estimator." International Journal of Modern Physics C 15 (4): 481–491.
Lux, Thomas, and Michele Marchesi. 1999. "Scaling and criticality in a stochastic multi-agent model of a financial market." Nature 397 (6719): 498.
Mandelbrot, Benoit B. 1963. "The variation of certain speculative prices." The Journal of Business 36 (4): 394–419.
Mandelbrot, Benoit B. 2013. Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. Selecta Volume E. Springer Science & Business Media.
Mandelbrot, Benoit B., Adlai Fisher, and Laurent E. Calvet. 1997. A Multifractal Model of Asset Returns. Cowles Foundation Discussion Papers 1164. Cowles Foundation for Research in Economics, Yale University. https://ideas.repec.org/p/cwl/cwldpp/1164.html.
Mantegna, Rosario N., and H. Eugene Stanley. 1995. "Scaling behaviour in the dynamics of an economic index." Nature 376 (6535): 46–49.
Mantegna, Rosario N., and H. Eugene Stanley. 1999. Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge University Press.
Muzy, Jean-François, Emmanuel Bacry, and Alain Arneodo. 1991. "Wavelets and multifractal formalism for singular signals: Application to turbulence data." Physical Review Letters 67 (25): 3515–3518.
Muzy, Jean-François, Emmanuel Bacry, and Alain Arneodo. 1993. "Multifractal formalism for fractal signals: The structure-function approach versus the wavelet-transform modulus-maxima method." Physical Review E 47 (2): 875.
Sornette, D., Y. Malevergne, and J.-F. Muzy. 2003. "What causes crashes?" Risk 16: 67–71.
Sornette, Didier, Zhi-Qiang Jiang, Wen-Jie Xie, and Wei-Xing Zhou. 2018. "Multifractal analysis of financial markets." Physics Reports.
Takaishi, Tetsuya. 2019. "Rough volatility of Bitcoin." Finance Research Letters.