lCARE -- localizing Conditional AutoRegressive Expectiles
Xiu Xu ‡, Andrija Mihoci §, Wolfgang Karl Härdle ¶

Abstract
We account for time-varying parameters in the conditional expectile-based value at risk (EVaR) model. The EVaR downside risk is more sensitive to the magnitude of portfolio losses compared to the quantile-based value at risk (QVaR). Rather than fitting the expectile models over ad-hoc fixed data windows, this study focuses on parameter instability of tail risk dynamics by utilising a local parametric approach. Our framework yields a data-driven optimal interval length at each time point by a sequential test. Empirical evidence at three stock markets from 2005-2016 shows that the selected lengths account for approximately 3-6 months of daily observations. This method performs favorably compared to the models with one-year fixed intervals, as well as quantile-based candidates, while employing a time invariant portfolio protection (TIPP) strategy for the DAX, FTSE 100 and S&P 500 portfolios. The tail risk measure implied by our model finally provides valuable insights for asset allocation and portfolio insurance.
JEL classification : C32, C51, G17
Keywords: expectiles, tail risk, local parametric approach, risk management

∗ Financial support from the Deutsche Forschungsgemeinschaft via CRC 649 "Economic Risk" and IRTG 1792 "High Dimensional Non Stationary Time Series", Humboldt-Universität zu Berlin, is gratefully acknowledged.
† This is a post-peer-review, pre-copyedit version of an article published in the Journal of Empirical Finance. The final authenticated version is available online at: http://dx.doi.org/10.1016/j.jempfin.2018.06.006
‡ Humboldt-Universität zu Berlin, C.A.S.E. - Center for Applied Statistics and Economics, Spandauer Str. 1, 10178 Berlin, Germany, tel: +49 (0)30 2093 5721, fax: +49 (0)30 2093 5649; Xiamen University, Wang Yanan Institute for Studies in Economics (WISE), 361005 Xiamen, China. Email: [email protected]
§ Brandenburg University of Technology, Chair of Economic Statistics and Econometrics, Erich Weinert Str. 1, 03046 Cottbus, Germany, tel: +49 (0)355 69 38 20
¶ Humboldt-Universität zu Berlin, C.A.S.E. - Center for Applied Statistics and Economics, Spandauer Str. 1, 10178 Berlin, Germany and School of Business, Singapore Management University, 50 Stamford Road, Singapore 178899

Introduction
Value at risk (VaR) is commonly used to measure the downside risk in finance, especially in portfolio risk management. Given a predetermined probability level, VaR evaluates the potential maximum loss for the targeted portfolio value; statistically it represents the quantile of the portfolio loss distribution, see Jorion (2000). Although it is straightforward to understand the VaR concept, it has recently been criticized. VaR lacks the property of sub-additivity, that is, under the VaR risk measure, the risk of a diversified portfolio may be larger than the sum of each individual asset risk, which in turn contradicts the common wisdom of diversification. Artzner et al. (1999) thus proposed the expected shortfall (ES) as a portfolio risk measure, i.e., the expected loss below a given threshold (e.g., VaR) given the risk probability level.

Another undesirable aspect of the VaR measure is its insensitivity to the magnitude of the portfolio loss. Kuan et al. (2009) provide an example where, under a given probability level, the potential downside risk changes under different tail loss distributions while the corresponding VaR remains the same. Since VaR merely depends on the probability value and neglects the size of the downside loss, Kuan et al. (2009) proposed a downside risk measure, the expectile-based Value at Risk (EVaR), a more sensitive measure of the magnitude of extreme losses than the conventional quantile-based VaR (QVaR). The expectile at a given level is estimated by minimizing asymmetrically weighted least squared errors, following the method proposed by Newey and Powell (1987). The expectile level represents the relative cost of the expected margin shortfall, interpreted as the level of prudentiality.
EVaR may be interpreted as a flexible QVaR (Kuan et al., 2009), because of the one-to-one mapping between quantiles and expectiles for a given loss distribution, see Efron (1991), Jones (1994) and Yao and Tong (1996).

Models based on the expectile risk measure framework have thus been proposed, see e.g. Taylor (2008) and Kuan et al. (2009), after Engle and Manganelli (2004) successfully initialized the conditional autoregressive framework to model VaR. Kuan et al. (2009) moreover extend the EVaR to conditional EVaR and propose various Conditional AutoRegressive Expectile (CARE) specifications to accommodate stationary and weakly dependent data, extending the work by Newey and Powell (1987). Potential time-varying parameters resulting from the dynamic state of the economic and financial environment are however barely analysed. This is where this research comes into play. We focus on incorporating and reacting to potential structural breaks when estimating the expectile tail risk measure.

The proposed local parametric approach (LPA) utilizes a parametric model over adaptively chosen intervals. The essential idea of the LPA is to find the longest interval length guaranteeing a relatively small modelling bias, see e.g. Spokoiny (1998) and Spokoiny (2009). The main advantage of the approach is the achievement of a balance between the modelling bias and parameter variability in data modelling. This approach has been successfully applied in many research areas: Čížek et al. (2009) analyse the GARCH(1,1) model.
At the 0.25% expectile level, the typical interval lengths that strike a balance between bias and variability in daily time series include on average 100 days. At the higher, 5% expectile level, the selected interval lengths range roughly between 80-90 days. The resulting time-varying expectile series moreover allows us to consider the dynamics of other tail risk measures, most prominently that of quantiles or the expected shortfall.

The methodology presented here is successfully applied to a portfolio insurance strategy for the DAX, FTSE 100 and S&P 500 index portfolios. A portfolio insurance strategy is designed to guarantee a minimum asset portfolio value over a selected investment horizon, where the downside risk can be reduced and controlled while investors can participate in the potential gains. The proportion of the value invested into the risky asset (here the selected index portfolio), denoted as the multiplier, is directly related to the estimated tail risk measure. A standard approach keeps the multiplier fixed regardless of the market conditions, Estep and Kritzman (1988), Hamidi et al. (2014), whereas we exercise the protection strategy utilising the dynamic tail risk measure implied by the lCARE model. Comparison to the benchmarks - one-year fixed rolling window CARE estimation and quantile-based (CAViaR) estimation - reveals that the lCARE model presents a striking outperformance in portfolio insurance.

This paper is structured as follows: firstly, the data is presented in Section 2, whereas Section 3 introduces the lCARE modelling framework based on the CARE setup and the local parametric approach in tail risk modelling. Section 4 presents the empirical results and finally, Section 5 concludes.
Data

In risk modelling we consider three stock markets and focus on the dynamics of the representative index time series, namely, the DAX, FTSE 100 and S&P 500 series. Daily index returns are obtained from Datastream and our data cover the period from 3 January 2005 to 30 December 2016, in total 3130 trading days. The daily returns evolve similarly across the selected markets and all present relatively large variations during the financial crisis period from 2008-2010, see Figure 1. Although the return time series exhibit nearly zero mean with slightly pronounced skewness values, all present comparatively high kurtosis, see Table 1 that collects the summary statistics. Please note that the empirical results of this paper as well as the corresponding MATLAB programming codes can be found in the folder https://github.com/QuantLet/lCARE-BTU-HUB as well as at http://quantlet.de/d3/ia/.

Figure 1: Selected index return time series from 3 January 2005 to 30 December 2016 (3130 trading days).

Index      Mean    Median  Max     Min      Std     Skew.    Kurt.
DAX        0.0003  0.0007  0.1080  -0.0743  0.0137  -0.0406   9.2297
FTSE 100   0.0001  0.0001  0.0938  -0.0927  0.0117  -0.1481  11.2060
S&P 500    0.0002  0.0003  0.1096  -0.0947  0.0121  -0.3403  14.6949

Table 1: Descriptive statistics for the selected index return time series from 3 January 2005 to 30 December 2016 (3130 trading days): mean, median, maximum (Max), minimum (Min), standard deviation (Std), skewness (Skew.) and kurtosis (Kurt.).
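The entries of Table 1 follow from standard sample moments. A minimal sketch (the function name and the toy input are illustrative, not part of the paper):

```python
import numpy as np

def summary_stats(r):
    """Sample mean, median, max, min, standard deviation, skewness and
    kurtosis of a daily return series, as reported in Table 1."""
    r = np.asarray(r, dtype=float)
    m, s = r.mean(), r.std(ddof=0)
    z = (r - m) / s
    return {"Mean": m, "Median": float(np.median(r)),
            "Max": r.max(), "Min": r.min(), "Std": s,
            "Skew.": (z ** 3).mean(), "Kurt.": (z ** 4).mean()}
```

For a Gaussian series the kurtosis entry would be close to 3; the values of roughly 9-15 reported in Table 1 therefore indicate heavy-tailed returns.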
Understanding tail risk plays an essential role in asset pricing, portfolio allocation, investment performance evaluation and external regulation. Tail event dynamics is commonly assessed through the employment of parametric, semi-parametric or nonparametric techniques, see, e.g., Taylor (2008). Our paper contributes to the econometric literature by localizing the parametric CARE specifications by Kuan et al. (2009) and explores the effects of potential market structure changes when modelling tail risk measures. In this section we summarise the current research on expectile-based risk management and conduct a detailed empirical study of the parameter dynamics of the introduced DAX, FTSE 100 and S&P 500 return series. The results motivate the application of the local parametric approach by Spokoiny (1998) and finally the localized conditional autoregressive expectile (lCARE) model provides a sound downside risk assessment framework for quantitative finance practice.
Tail risk exposure can successfully be captured by an expectile-based risk measure, in contrast to modelling risk solely using Value at Risk (VaR). Despite being the most commonly used (not coherent) tail risk measure, VaR exhibits insensitivity to the potential magnitude of the loss, see, e.g., Acerbi and Tasche (2002), Taylor (2008). After the conditional autoregressive value at risk (CAViaR) model by Engle and Manganelli (2004) was proposed, Taylor (2008) found that VaR, based on the conditional autoregressive expectile model, is more sensitive to the underlying tail risk distribution. The conditional autoregressive expectile (CARE) model specifications by Kuan et al. (2009) nevertheless directly model the return time series and extend the asymmetric least squares estimation method by Newey and Powell (1987) to the analysis of stationary but weakly dependent time series data.

CARE model specifications provide insights into the dynamics of financial data and offer valuable economic interpretation. Although quantiles and expectiles both belong to the class of M-quantiles, see, e.g., Jones (1994), the implications in risk assessment differ considerably. VaR is a zero-moment whereas the expectile is a first-moment tail risk measure; thus in the former case the proportion of asymmetric downside and upside quantile levels is determined only by the ratio between downside and upside probabilities. Expectiles measure the proportion of asymmetric downside and upside expectile levels while capturing the ratio between the expected marginal shortfalls. Equivalently, the potential cost of more extreme losses and the opportunity cost due to the expected marginal overcharge are captured by expectiles.
The CARE specifications furthermore accommodate stylised facts of the return time series, such as weak serial dependence or volatility heteroskedasticity. Accommodating asymmetric effects of the positive and negative returns on the tail expectiles becomes essential in interpreting tail risk dynamics.

Consider the CARE model specification for a return time series y = {y_t}_{t=1}^{n}:

y_t = e_{t,τ} + ε_{t,τ}

e_{t,τ} = α_{0,τ} + α_{1,τ} y_{t−1} + α_{2,τ} (y⁺_{t−1})² + α_{3,τ} (y⁺_{t−2})² + α_{4,τ} (y⁺_{t−3})² + α_{5,τ} (y⁻_{t−1})² + α_{6,τ} (y⁻_{t−2})² + α_{7,τ} (y⁻_{t−3})²   (1)

where e_{t,τ} and ε_{t,τ} denote the expectile and the error term at level τ ∈ (0, 1) and time t, respectively. For j = 1, 2, 3, y⁺_{t−j} = max{y_{t−j}, 0} and y⁻_{t−j} = min{y_{t−j}, 0} denote the positive and negative observed j-th period lagged returns at time t, respectively.

The τ-level expectile e_{t,τ} in Equation (1) can be estimated by minimising the asymmetric least squares (ALS) loss function

Σ_{t=4}^{n} |τ − I(y_t ≤ e_{t,τ})| (y_t − e_{t,τ})²   (2)

with I(·) denoting the indicator function.

Within the CARE framework, Gerlach and Chen (2015) and Gerlach et al. (2012) assume that the error term ε_{t,τ} follows the asymmetric normal distribution (AND). We assume that, conditional on the information set F_{t−1}, the data process follows an asymmetric normal distribution AND(μ, σ²_{ε_τ}, τ) with pdf

f(y_t − μ | F_{t−1}) = (2/σ_{ε_τ}) {√(π/(1−τ)) + √(π/τ)}^{−1} exp{−η_τ((y_t − μ)/σ_{ε_τ})}   (3)

where η_τ(u) = |τ − I{u ≤ 0}| u² is the employed check function, μ represents the expectile value to be estimated and σ²_{ε_τ} denotes the variance of the error term. It is worth noting that maximising the likelihood based on the distribution (3) is mathematically equivalent to minimising the asymmetric least squares loss function (2).

Conditional on the information set F_{t−1} up to observation (t − 1), the expectile e_{t,τ} includes a lagged return component and it mimics several financial series features, namely, volatility clustering and potential asymmetric magnitude effects. Note that at level τ = 0.5 the expectile equals the mean value. Given specification (1), the parameter vector finally contains nine elements, namely θ_τ = (α_{0,τ}, α_{1,τ}, α_{2,τ}, α_{3,τ}, α_{4,τ}, α_{5,τ}, α_{6,τ}, α_{7,τ}, σ_{ε_τ})ᵀ.

In specification (1), the parameter α_{1,τ} indirectly measures the persistence level in the conditional expectile tail through the lagged return series. Since the parameters related to the past positive or negative squared returns potentially differ, specification (1) accounts for the asymmetric effects of the positive and negative squared lagged returns on the conditional tail expectile magnitude. This similarly mimics the leverage effect associated with volatility modelling, where negative (positive) returns are followed by relatively larger (lower) variability. Under the working assumption that the expectile tail dynamics can be well approximated over a given data interval by a model with constant parameters, it suffices to include three lags in modelling the return series.

The resulting quasi log-likelihood function for observed data Y = {y_1, . . . , y_n} over a fixed interval I is given by

ℓ_I(Y; θ_τ) = Σ_{t∈I} log f(y_t − e_{t,τ} | F_{t−1})   (4)

The quasi maximum likelihood estimate (QMLE) for the CARE parameter is then obtained through

θ̃_{I,τ} = arg max_{θ_τ ∈ Θ} ℓ_I(Y; θ_τ)   (5)

over a right-end fixed interval I = [t − m, t] of (m + 1) observations at observation t.

The idea behind the local parametric approach (LPA) is to find the optimal (in-sample) data interval over which one can safely fit a parametric model with time-invariant parameters. This optimal interval, the so-called interval of homogeneity, is selected among pre-specified right-end interval candidates at each time point. The proposed lCARE model is thus able to incorporate potential structural breaks in expectile dynamics.
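The equivalence between the ALS problem (2) and the AND likelihood (3) also suggests a simple estimation routine. As a minimal sketch, the unconditional τ-expectile (the intercept-only special case of the CARE model; the function name is ours) can be computed by iteratively reweighted least squares:

```python
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=1000):
    """tau-expectile of a sample: minimises sum_t |tau - I(y_t <= e)| (y_t - e)^2.

    Each iteration solves the weighted least-squares first-order condition
    e = sum(w_t * y_t) / sum(w_t) with w_t = |tau - I(y_t <= e)|."""
    y = np.asarray(y, dtype=float)
    e = y.mean()
    for _ in range(max_iter):
        w = np.where(y <= e, 1.0 - tau, tau)
        e_new = np.sum(w * y) / np.sum(w)
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e
```

At τ = 0.5 the weights are constant and the routine returns the sample mean; for τ < 0.5 the expectile moves into the lower tail, mirroring the EVaR interpretation of expectiles as downside risk measures.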
In this part we implement a fixed rolling window exercise in order to provide empirical evidence on the time-varying characteristics of the CARE estimates, as well as to select the 'true' parameter constellations used in the LPA simulation. At the end we discuss the estimation quality of the QMLE (5).
Dynamics and Distributional Characteristics
In the analysis of the selected (daily) stock market indices presented in Section 2, we consider different interval lengths (e.g., 60, 125 and 250 observations) and analyse the corresponding estimates. One may observe a relatively large variability of the estimated parameters while fitting the model over short data intervals and vice versa. Note that the modelling bias moves in the opposite direction: shorter (longer) intervals lead to a relatively low (high) modelling bias. The distributional features of the estimated CARE parameters are here studied through three expectile level cases, namely τ = 0.0025, τ = 0.01 and τ = 0.05. Our conducted rolling window estimation exercise finally provides valuable insights into the expectile (distribution) dynamics.

Parameter estimates are indeed more volatile while fitting the data over shorter intervals, with a comparably smaller modelling bias as compared to schemes using longer window sizes, see e.g. Figures 2, 3 and 4. Here we display the estimated CARE parameters α̃_{1,0.0025}, α̃_{1,0.01} and α̃_{1,0.05} in a rolling window exercise across the three selected stock market indices from 2 January 2006 to 30 December 2016 at levels τ = 0.0025, τ = 0.01 and τ = 0.05.

Figure 2: Estimated parameter α̃_{1,0.0025} across the three selected stock markets from 2 January 2006 to 30 December 2016, with 60 (upper panel) and 250 (lower panel) observations used in the rolling window exercise at fixed expectile level τ = 0.0025.

Descriptive Statistics

Figure 3: Estimated parameter α̃_{1,0.01} across the three selected stock markets from 2 January 2006 to 30 December 2016, with 60 (upper panel) and 250 (lower panel) observations used in the rolling window exercise at fixed expectile level τ = 0.01.

The rolling window results show that the estimates vary over time at each of the levels τ = 0.0025, τ = 0.01 and τ = 0.05, and that at a given expectile level, there are three 'true' parameter constellations, i.e., parameter values most likely found in practice.
Estimation Quality
Here we address the estimation quality of the quasi-maximum likelihood approach. Denote the 'true' parameter vector at expectile level τ as θ*_τ. The quality of estimating the unknown parameter by the quasi-maximum likelihood estimator (QMLE) θ̃_{I,τ} given in (5) is measured in terms of the Kullback-Leibler divergence

E_{θ*_τ} |ℓ_I(Y; θ̃_{I,τ}) − ℓ_I(Y; θ*_τ)|^r ≤ R_r(θ*_τ)   (6)

with R_r(θ*_τ) denoting the risk bound, see, e.g., Mercurio and Spokoiny (2004) and Spokoiny (2009). In the selection of the risk power level r, we follow empirical evidence. In recent studies, a lower selected risk power level r leads to relatively shorter intervals of homogeneity and vice versa; thus it is recommended to consider the moderate risk case (r = 0.5 or r = 0.8) or the so-called conservative risk case, r = 1, see Härdle et al. (2015). Our results favour the conservative risk case, r = 1.

Figure 4: Estimated parameter α̃_{1,0.05} across the three selected stock markets from 2 January 2006 to 30 December 2016, with 60 (upper panel) and 250 (lower panel) observations used in the rolling window exercise at fixed expectile level τ = 0.05.

            τ = 0.0025                 τ = 0.01                  τ = 0.05
α̃_{0,τ}    -0.016  -0.013  -0.009     -0.026  -0.021  -0.015    -0.034  -0.026  -0.021
α̃_{1,τ}    -0.035   0.051   0.153     -0.075   0.079   0.240    -0.131   0.090   0.295
…
α̃_{7,τ}    -0.014   0.099   0.152     -0.861   0.106   0.149    -3.124   0.108   0.161

Table 2: Descriptive statistics of the estimated CARE parameters at expectile levels τ = 0.0025, τ = 0.01 and τ = 0.05.

How to account for the time-varying characteristics of CARE parameters in tail risk modelling? Here we utilize the aforementioned local parametric approach (LPA), which has been gradually introduced into the time series literature. The essential idea of the proposed lCARE framework is to find the longest time series data interval over which the CARE model can be approximated by a specification with time-invariant parameters. This interval is labelled as the interval of homogeneity. By a sequential testing procedure, the so-called local change point detection test, we adaptively select the interval of homogeneity among interval candidates. The critical values of the sequential test are simulated by a Monte Carlo method; for details we refer to Appendix B. Finally, the adaptively estimated parameter vector at every time point (for example, at each trading day) is selected based on the test outcome.
Interval Selection
There are many possible candidates for these intervals of homogeneity. To alleviate the computational burden, we choose (K + 1) nested intervals of length n_k = |I_k|, k = 0, . . . , K, i.e., I_0 ⊂ I_1 ⊂ · · · ⊂ I_K. Interval lengths are assumed to be geometrically increasing with n_k = [n_0 c^k]. Based on the empirical results reported above, it is reasonable to select (K + 1) = 8 intervals, starting with 60 observations and, for convenience, ending with 250 observations (one trading year). We assume that the model parameters are constant within the initial interval I_0. Furthermore, c = 1.20 is selected in accordance with the current literature.
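Under the stated choices n₀ = 60, c = 1.20 and (K + 1) = 8, the candidate grid can be reproduced along the following lines (the rounding rule and the final cap at 250 observations are our assumptions, since the explicit set of lengths is not listed here):

```python
# Geometrically increasing interval lengths n_k = round(n0 * c**k),
# with the longest candidate fixed at one trading year (250 days).
n0, c, K = 60, 1.20, 7
lengths = [round(n0 * c ** k) for k in range(K + 1)]
lengths[-1] = 250  # assumption: the last interval is set to 250 observations

print(lengths)
```

The geometric spacing keeps the number of candidate intervals, and hence the number of sequential tests per trading day, small while still covering horizons from roughly three months to one year.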
Local Change Point Detection Test
A sequential testing procedure enables us to adaptively find the homogeneous interval at a fixed data point t. Assuming that I_0 is homogeneous, consider now the interval J_k = I_k \ I_{k−1}, and sequentially conduct the test over interval index steps k = 1, . . . , K. The hypotheses of the test at step k read as

H_0: parameter homogeneity of I_k vs. H_1: ∃ change point within J_k = I_k \ I_{k−1}.

The test statistic is

T_{k,τ} = sup_{s ∈ J_k} { ℓ_{A_{k,s}}(Y; θ̃_{A_{k,s},τ}) + ℓ_{B_{k,s}}(Y; θ̃_{B_{k,s},τ}) − ℓ_{I_{k+1}}(Y; θ̃_{I_{k+1},τ}) }   (7)

where A_{k,s} = [t − n_{k+1}, s] and B_{k,s} = (s, t] are subintervals of I_{k+1}. Since the change point position is unknown, we test every point s ∈ J_k.

Figure 5: Sequential testing for parameter homogeneity in interval I_k with length n_k ending at fixed time point t.

The algorithm at step k is visualized in Figure 5. Assuming that the null of homogeneity of interval I_{k−1} has not been rejected, the testing procedure at step k tests for the homogeneity of I_k. Since the position of a change point within J_k = I_k \ I_{k−1} is unknown, the test statistic is calculated based on all points s ∈ J_k, i.e. s ∈ (t − n_k, t − n_{k−1}], utilizing data from I_{k+1}. Compute the sum of the log-likelihood values over the sample intervals A_{k,s} = [t − n_{k+1}, s] (dotted area) and B_{k,s} = (s, t] (solid area) and subtract the log-likelihood value over I_{k+1}. The likelihood ratio test statistic T_{k,τ} at each predetermined expectile level τ is then found by (7).

In order to identify the homogeneous interval length, the test statistic (7) is at every step k = 1, . . . , K compared to the corresponding simulated critical value, here denoted by z_{k,τ} and elaborated below. If the test statistics at all steps up to and including k are lower than the critical values, we do not reject the null hypothesis that I_k is homogeneous and proceed to the next (k + 1)-st step. If, however, the test statistic first exceeds the critical value at step k, then I_{k−1} is our adaptive choice. For convenience, we denote by k̂ the index of the interval of homogeneity. If the null is already rejected at the first step, k̂ = 0 and similarly, if I_K represents the interval of homogeneity, k̂ = K.

The adaptive estimate is finally represented by the QMLE at the interval of homogeneity. Formally, it is obtained by θ̂_τ = θ̃_{I_k̂,τ}, with k̂ = max_{k ≤ K} {k : T_{ℓ,τ} ≤ z_{ℓ,τ}, ℓ ≤ k}. Here the index and the length of the interval of homogeneity are denoted by k̂ and n_k̂, respectively. Again, if the null is already rejected at the first step, θ̂_τ = θ̃_{I_0,τ} and if I_K is selected, θ̂_τ = θ̃_{I_K,τ}. Before presenting our key empirical results we now discuss the basic idea of calculating critical values and provide at the end of the section a summary of the LCP testing procedure.

Critical Values

The critical value defines the level of significance for the aforementioned test statistic (7). In classical hypothesis testing, critical values are selected to ensure a prescribed test level, the probability of rejecting the null under the null hypothesis (type I error). In the considered framework, we similarly control the loss of a 'false alarm', i.e., of detecting a non-existing change point.

Under the null hypothesis of time-invariant parameters, the desired interval of homogeneity is the longest interval I_K. When the selected interval is relatively shorter, one effectively detects a non-existing change point, here regarded as a 'false alarm'. Therefore, we aim at controlling the loss associated with selecting the adaptive estimate θ̂_τ = θ̃_{I_k̂,τ} instead of θ̃_{I_K,τ}: the loss is stochastically bounded

E_{θ*_τ} |ℓ_{I_K}(Y; θ̃_{I_K,τ}) − ℓ_{I_K}(Y; θ̂_τ)|^r ≤ ρ R_r(θ*_τ)   (8)

where ρ denotes a given significance level, see, e.g., Spokoiny (2009).
This condition (8) ensures that the loss associated with a 'false alarm' (i.e., selecting k̂ < K) is at most equal to a ρ-fraction of the parametric risk bound (6).

In a similar way, at each step k = 1, . . . , K, the algorithm satisfies the so-called propagation condition

E_{θ*_τ} |ℓ_{I_k}(Y; θ̃_{I_k,τ}) − ℓ_{I_k}(Y; θ̂_τ)|^r ≤ ρ_k R_r(θ*_τ)   (9)

with ρ_k = ρk/K and the adaptive estimator θ̂_τ = θ̃_{I_k,τ}. This propagation condition (9) controls not only the frequency but also accounts for the deviation of the selected (adaptive) estimate from the unknown 'true' parameter. A relatively small likelihood loss on the left hand side of equation (9) implies that the adaptive estimate θ̂_τ lies with high probability in the confidence set of the optimal parameter θ̃_{I_k,τ} within the interval I_k. A large deviation value indicates that the adaptive estimate belongs to the confidence set of the optimal estimate with a small probability, i.e., θ̂_τ differs significantly from θ̃_{I_k,τ} and there may be a change point present within the interval I_k. Under homogeneity at every step up to k, it is ensured that the adaptively selected homogeneous interval I_k̂ extends to the underlying optimal I_k with high probability.

A risk power r close to zero (r → 0) leads back to only counting the occurrence of false alarms. Larger risk power levels also account for the deviation of the adaptive estimate from the true parameter. Equation (9) provides the essential requirements for calculating critical values. A detailed description of the simulation steps, as well as the resulting critical value figures, are for convenience provided in Appendix B.
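Given the simulated critical values, the selection rule k̂ = max{k : T_{ℓ,τ} ≤ z_{ℓ,τ} for all ℓ ≤ k} reduces to a single scan over the test steps. A minimal sketch (the function name is ours; the test statistics and critical values are taken as given):

```python
def interval_of_homogeneity(test_stats, crit_vals):
    """Return the index k_hat of the adaptively selected interval: the largest
    k such that T_l <= z_l for every step l <= k; 0 if the very first test
    already rejects, K if no test rejects."""
    k_hat = 0
    for k, (t_stat, z) in enumerate(zip(test_stats, crit_vals), start=1):
        if t_stat > z:
            break  # first rejection at step k: keep I_{k-1}
        k_hat = k
    return k_hat
```

For example, with K = 3 steps and statistics (1.2, 3.4, 9.9) against critical values (5, 5, 5), the first rejection occurs at step 3, so the rule returns k̂ = 2 and I₂ is the interval of homogeneity.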
LCP Detection Test in Practice
The scheme of the conducted LCP detection test at fixed time point t, expectile level τ, risk power r and significance level ρ is:

1. Select intervals I_{k+1}, J_k, A_{k,s} and B_{k,s} at step k and compute the test statistic T_{k,τ}, see equation (7).
2. Testing procedure - select the set of critical values according to the persistence parameter estimate α̃_1 (based on I_K), see Appendix B.
3. Interval of homogeneity - the interval I_k̂ for which the null has first been rejected at step k̂ + 1; k̂ = max_{k ≤ K} {k : T_{ℓ,τ} ≤ z_{ℓ,τ}, ℓ ≤ k}.
4. Adaptive estimation - the adaptively estimated parameter vector equals the QMLE at the interval of homogeneity, θ̂_τ = θ̃_{I_k̂,τ}.

lCARE accommodates and reacts to structural changes. From the fixed rolling window exercise in Subsection 3.2 one observes time-varying parameter characteristics while facing the trade-off between parameter variability and the modelling bias. How to account for the effects of potential market changes on the tail risk based on the intervals of homogeneity? In this section, we utilize the lCARE model to estimate the tail risk exposure across three stock markets. Using the time series of the adaptively selected interval lengths, we improve a portfolio insurance strategy employing our tail risk estimate and furthermore enhance its performance in the financial applications part.

Figure 6: Estimated length of the interval of homogeneity in trading days across the selected three stock markets from 2 January 2006 to 30 December 2016 for the modest (upper panel, r = 0.8) and the conservative (lower panel, r = 1) risk cases. The expectile level equals τ = 0.05.

The interval of homogeneity in tail expectile dynamics is obtained here by the lCARE framework for the time series of DAX, FTSE 100 and S&P 500 returns. Using the sequential local change point detection test, the optimal interval length is considered at three expectile levels, namely, τ = 0.0025, τ = 0.01 and τ = 0.05. The homogeneity intervals are interestingly relatively longer at the end of 2009 and at the beginning of 2010, especially at τ = 0.05, the period following the financial crisis across all three stock markets, see, e.g., Figures 6, 7 and 8. All figures present the estimated lengths of the interval of homogeneity in trading days across the selected three stock market indices from 2 January 2006 to 30 December 2016. The upper panel depicts the modest risk case r = 0.5, whereas the lower panel denotes the conservative risk case r = 1. Recall that the lCARE model selects the longest interval over which the null hypothesis of parameter homogeneity is not rejected.

Figure 7: Estimated length of the interval of homogeneity in trading days across the selected three stock markets from 2 January 2006 to 30 December 2016 for the modest (upper panel, r = 0.8) and the conservative (lower panel, r = 1) risk cases. The expectile level equals τ = 0.01.

Figure 8: Estimated length of the interval of homogeneity in trading days across the selected three stock markets from 2 January 2006 to 30 December 2016 for the modest (upper panel, r = 0.8) and the conservative (lower panel, r = 1) risk cases. The expectile level equals τ = 0.0025.

            modest (r = 0.5)              conservative (r = 1.0)
            DAX   FTSE 100  S&P 500       DAX   FTSE 100  S&P 500
τ = 0.05    85    83        90            89    88        95
τ = 0.01    93    95        95            100   102       101
τ = 0.0025  –     –         –             –     –         –

Table 3: Average daily selected optimal interval length (in trading days) at expectile levels τ = 0.0025, τ = 0.01 and τ = 0.05 for the modest (r = 0.50) and the conservative (r = 1.00) risk case.

In a similar way, the intervals of homogeneity are relatively shorter in the modest risk case r = 0.8, as compared to the conservative risk case r = 1. The average daily selected optimal interval length supports this, see, e.g., Table 3. The results are presented for all expectile levels at the modest and the conservative risk cases, r = 0.80 and r = 1, respectively. At expectile levels τ = 0.0025 and τ = 0.01, the intervals of homogeneity are slightly larger than the intervals at τ = 0.05.

Based on the lCARE model, one can directly estimate dynamic tail risk exposure measures using the adaptively selected intervals. The tail risk at a smaller expectile level is lower than the risk at higher levels, see, e.g., Figure 9. Here the estimated expectile risk exposure for the three stock market indices from 2 January 2006 to 30 December 2016 is displayed for all three expectile levels. The left panel represents the conservative risk case r = 1 results, whereas the right panel considers the modest risk case r = 0.8. The former leads on average to slightly lower variability, as compared to the modest risk case which results in shorter homogeneity intervals.

Figure 9: Estimated expectile risk exposure at levels τ = 0.05 (blue), τ = 0.01 (red) and τ = 0.0025 across the three stock market indices from 2 January 2006 to 30 December 2016. The left panel represents the conservative risk case r = 1 and the right panel depicts the results of the modest risk case r = 0.8.

The estimated expectiles allow us to compute other tail risk measures, most prominently the expected shortfall - the expected value of the portfolio loss beyond a certain threshold - with the expectile level τ_α being selected such that e_{t,τ_α} = q_α, i.e., the α-quantile:

τ_α = { α q_α − ∫_{−∞}^{q_α} y dF(y) } / { E[Y] − 2 ∫_{−∞}^{q_α} y dF(y) − (1 − 2α) q_α }   (10)

where F(·) denotes the cumulative distribution function (cdf) of a random variable Y. The corresponding expected shortfall can be expressed as

ES_{t,τ_α} = |1 + τ_α {(1 − 2τ_α) α}^{−1}| e_{t,τ_α}   (11)

with e_{t,τ_α} denoting the expectile at level τ_α. In order to apply (11), one needs to fix a certain cdf F(·) in (10). For convenience we chose the asymmetric normal distribution.

Consider the tail risk exposure of the DAX, FTSE 100 and S&P 500 index series at expectile level τ = 0.05 and conservative risk case r = 1.0. During market distress periods, the 2008 financial crisis and the 2012 European sovereign debt crisis, the estimated expected shortfall (11) exhibits a high variation as depicted in the upper panels of Figures 10, 11 and 12. Similarly to current research developments, the estimated expected shortfall using the proposed lCARE model exceeds (by magnitude) the estimated expectile e_{t,τ} value.

Dynamic tail risk measures are useful tools in quantitative practice. Portfolio insurance deals, for instance, with (portfolio) protection strategies tailored especially for mutual fund management while solving portfolio optimization tasks. Consider particularly the task of preserving a given proportion of an initial asset portfolio value at the end of a predetermined time horizon. In this strategy the downside risk is limited under bearish market conditions and simultaneously the optimal profit return emerges in bullish market situations; thus fund managers can utilize the time invariant portfolio protection (TIPP), Estep and Kritzman (1988), Hamidi et al. (2014).

Figure 10: Adaptively estimated expectile (blue) and expected shortfall (red) series for DAX index returns from 2 January 2006 to 30 December 2016 (upper panel). The lower panel shows the corresponding multiplier dynamics. We choose r = 1 and τ = 0.05.

Figure 11: Adaptively estimated expectile (blue) and expected shortfall (red) series for FTSE 100 index returns from 2 January 2006 to 30 December 2016 (upper panel). The lower panel shows the corresponding multiplier dynamics. We choose r = 1 and τ = 0.05.

Figure 12: Adaptively estimated expectile (blue) and expected shortfall (red) series for S&P 500 index returns from 2 January 2006 to 30 December 2016 (upper panel). The lower panel shows the corresponding multiplier dynamics. We choose r = 1 and τ = 0.05.

Time Invariant Portfolio Protection Strategy (TIPP)
Denote the initial asset portfolio value as V_t at time t ∈ (0, T]. An investor aims to preserve a predetermined protection value F^s_t, the so-called floor, at each day,

V_t ≥ s × max{F · e^{−rf_t·(T−t)}, sup_{p≤t} V_p} = F^s_t    (12)

with an exogenous proportion parameter s ∈ (0, 1) and a non-negative cushion value C_t = V_t − F^s_t ≥ 0, where rf_t denotes the risk-free rate; we set the initial value F = 100 and the proportion value s = 0.9. The allocation decision states that G_t = m · C_t is invested into the risky asset with return r_t (here the index portfolio), where m denotes a non-negative multiplier that controls the portfolio performance. The remaining amount V_t − G_t is invested into a riskless asset. The portfolio value V_t and consequently the cushion value C_t = V_t − F^s_t evolve as

V_{t+1} = V_t + G_t r_{t+1} + (V_t − G_t) rf_{t+1}    (13)

C_{t+1} = C_t {1 + m · r_{t+1} + (1 − m) rf_{t+1}}    (14)

Since the cushion value must satisfy C_t ≥ 0 for all t ≤ T, an upper bound on the multiplier m can be derived from equation (14) when rf_t is negligibly small and the risky asset return is negative,

m ≤ (−r^−_{t+1})^{−1}, ∀ t ≤ T    (15)

with r^−_{t+1} = min(0, r_{t+1}). Equation (15) reflects a relationship between m and the tail structure of the distribution of r_t: when the downside return loss is, for example, 10%, then m ≤ 10, and for a downside of 20%, m ≤ 5. When the market is bullish (bearish), the investor is more prone to invest into the risky (risk-free) asset.

In the above TIPP strategy, the cushion value is always expected to be near or above zero. This property, however, only holds in continuous time and assumes that the investor can modify the portfolio allocation in time before a large downside return occurs. In practice, fund managers have to account for the risk that the cushion value becomes negative: an unpredictable large downside market movement may occur and the managers may fail to reschedule their portfolio allocations under discrete rebalancing. This risk is known as gap risk.

How should one deal with gap risk and correspondingly calculate the multiplier? There are two common approaches: the first is the quantile hedging method, see e.g. Föllmer and Leukert (1999), which exploits VaR to imply the multiplier; the second is based on expected shortfall, see e.g. Hamidi et al. (2014) and Ameur and Prigent (2014). In the quantile hedging framework, for a given level α, the protection portfolio condition is given by P(C_t ≥ 0, ∀ t ≤ T) ≥ 1 − α. Similarly to the derivation of (15), the multiplier can now be expressed through the (1 − α)-quantile of the return distribution,

P{m_t ≤ (−r^−_{t+1})^{−1}, ∀ t ≤ T} ≥ 1 − α

from which the bound on m_t is obtained. The expected shortfall, in contrast, is a coherent risk measure and is more suitable for reflecting tail risk, since the quantile technique does not take the magnitude of tail losses into account at all. When the investor prefers a more conservative asset allocation, ES is proposed to estimate the multiplier, see Hamidi et al. (2014).

Performance Comparison
Here we employ the lCARE method to estimate ES and thereby control the gap risk. The corresponding multiplier selection is thus expressed through the lCARE-based ES,

m_{t,τ} = |ES^e_{t,τ}|^{−1}    (16)

with e_{t,τ} denoting the associated expectile value; the conditional multiplier is the inverse of the expected shortfall. In practice, we assume that the data process follows an asymmetric normal distribution, and the threshold range m_{t,τ} ∈ {1, 2, ..., 12} is used. The dynamics of the implied multipliers for the selected indices corresponding to the ES estimates are displayed in the lower panels of Figures 10, 11 and 12, based on the lCARE model with r = 1 and τ = 0.05 from 2 January 2006 to 30 December 2016 for the DAX, FTSE 100 and S&P 500 series, respectively.

The one-year rolling window estimation strategy is also selected as one of the benchmark models. In the appendix, the left panels of Figures 17, 18 and 19 present the estimated expectile and ES based on a one-year fixed rolling window estimation and the corresponding multipliers for the three stock markets, respectively. The constant multiplier cases (from 1 to 12) are included for benchmark comparisons as well.

ES can also be implied by the CAViaR framework, one of the popular conditional autoregressive modelling approaches for the Value at Risk. Given the one-to-one mapping between expectiles and quantiles, the expected shortfall can be formulated through the quantile at the corresponding quantile level at which the expectile and quantile values coincide, see (10). Here we include the CAViaR-based ES as another benchmark and provide the corresponding multiplier dynamics implemented in the insurance strategy. We first choose the corresponding quantile level, then illustrate the CAViaR specification of Engle and Manganelli (2004), before presenting the final results. Under the asymmetric normal distribution assumption, given the expectile level τ = 0.05, we select the quantile level α such that e_{τ_α} = q_α. The CAViaR specification reads

y_t = q_{t,α} + ε_{t,α},  Quant_α(ε_{t,α} | F_{t−1}) = 0    (17)

q_{t,α} = β_0 + β_1 q_{t−1,α} + β_2 q_{t−2,α} + β_3 q_{t−3,α} + β_4 y^+_{t−1} + β_5 y^−_{t−1}    (18)

where q_{t,α} represents the quantile (VaR) at level α ∈ (0, 1) and Quant_α(ε_{t,α} | F_{t−1}) is the α-quantile of ε_{t,α} conditional on the information set F_{t−1}.

We implement the TIPP strategy with the initial floor value F = 100 in equation (12). In line with the cushioned portfolio strategy, the daily asset allocation decision at time t is to invest the multiple of the difference between the portfolio value and the discounted floor up to t into the stock portfolio, and the rest into a riskless asset.
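The cushion dynamics (12)-(14) just described can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: the function name, the synthetic Gaussian return process and the parameter values are assumptions, discounting is ignored (rf ≈ 0), and a constant multiplier m is used instead of the conditional m_{t,τ}.

```python
import random

def tipp_path(returns, rf=0.0, m=5.0, s=0.9, v0=100.0):
    """Simulate the TIPP portfolio value under the cushion dynamics.

    The floor is ratcheted with the running maximum of the portfolio value,
    cf. eq. (12) with rf ~ 0; G_t = m * C_t is invested in the risky asset
    and the remainder V_t - G_t in the riskless asset, cf. eq. (13).
    """
    v, peak = v0, v0
    path = []
    for r in returns:
        peak = max(peak, v)
        floor = s * peak                # ratcheted floor F_t^s, eq. (12)
        cushion = max(v - floor, 0.0)   # C_t = V_t - F_t^s
        g = m * cushion                 # risky exposure G_t = m * C_t
        v = v + g * r + (v - g) * rf    # portfolio update, eq. (13)
        path.append(v)
    return path

random.seed(1)
rets = [random.gauss(0.0003, 0.01) for _ in range(250)]
path = tipp_path(rets)
# the floor is preserved whenever daily losses stay above -1/m, cf. eq. (15)
print(min(path) >= 0.9 * 100.0)  # True
```

With m = 5 the floor survives any daily loss above −20%, which illustrates why the multiplier bound (15) is governed by the tail of the return distribution.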
Figure 13 presents the performance of the portfolio values based on the cushioned portfolio strategy with unconditional constant multipliers as well as with conditional time-varying multipliers. The black solid line represents the index, the blue line the cushioned portfolio with the lCARE-based conditional dynamic multiplier, the green line the portfolio value using a one-year fixed rolling window estimated multiplier, and the brown line the value under the CAViaR-based one-year rolling estimated multiplier. The comparatively best performing portfolio among the constant multipliers uses m = 5, denoted by the red line.

Figure 13: Performance of the portfolio value: (a) DAX index (black), (b) m = 5 (red), (c) one-year rolling approach (green), (d) CAViaR-based one-year rolling approach (brown), (e) lCARE-based m_{t,τ} (blue), from 2 January 2006 to 30 December 2016.

The cushioned portfolio with the dynamic multiplier closely tracks the observed index series and simultaneously guarantees the target portfolio value floor at every trading day until the end of the investment horizon, see Figure 13. The lCARE strategy performs very well in comparison to the cushioned portfolio with a constant multiplier and to the one-year rolling window estimates based on expectile or quantile levels.

lCARE exhibits the best return moment performance of the portfolio insurance strategy, see Table 6. We list the statistical results for the empirical data, the TIPP strategy with the lCARE-based multiplier, the one-year fixed rolling window CARE-implied multiplier, the one-year rolling window CAViaR-implied multiplier, and the constant multipliers. The average return of the lCARE-based strategy, 7.36%, is larger than the counterpart based on a fixed rolling window, 5.70%. It is also observed that the CAViaR-based strategy performs less favourably.
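The expectile-quantile level mapping (10), used above to align the CAViaR quantile level with the expectile level, can be evaluated numerically. The sketch below specialises (10) to a standard normal F, an illustrative choice rather than the asymmetric normal used in the paper; for N(0, 1) one has E[Y] = 0 and ∫_{−∞}^{q} y dF(y) = −φ(q), and the function name is our own.

```python
from statistics import NormalDist

def expectile_level_from_quantile(alpha: float) -> float:
    """Solve eq. (10) for tau_alpha under a standard normal F, where the
    partial expectation up to q_alpha equals minus the normal density."""
    nd = NormalDist()
    q = nd.inv_cdf(alpha)           # the alpha-quantile q_alpha
    partial = -nd.pdf(q)            # integral of y dF(y) up to q_alpha
    num = alpha * q - partial
    den = 0.0 - 2.0 * partial - (1.0 - 2.0 * alpha) * q
    return num / den

# for normal data, the 5% quantile coincides with roughly the 1.24% expectile
print(round(expectile_level_from_quantile(0.05), 4))  # 0.0124
```

The mapping is monotone, so the same routine run in reverse order (searching over α) recovers the quantile level attached to a given expectile level, which is how the CAViaR benchmark level is fixed in spirit.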
Although the lCARE strategy leads to a slightly lower average return than the observed return series, 8.79%, it performs favourably relative to all other benchmark strategies.

Conclusion

The localized conditional autoregressive expectiles (lCARE) model accounts for time-varying parameter characteristics and potential structural changes in modelling tail risk exposure. The parameter dynamics implied by a fixed rolling window exercise for three stock market indices, DAX, FTSE 100 and S&P 500, indicate that there is a trade-off between modelling bias and parameter variability. A local parametric approach (LPA) assumes that locally one can successfully fit a parametric model. Based on a sequential testing procedure, one determines the interval of homogeneity over which a parametric model can be approximated by a constant parameter vector.

The lCARE model adaptively estimates the tail risk exposure by relying on the (in-sample) 'optimal' interval of homogeneity. Setting the expectile levels to τ = 0.05 and τ = 0.01, the dynamic expectile tail risk measures for the three selected stock markets are successfully obtained by lCARE. Furthermore, the expected shortfall has been introduced, evaluated and employed in the asset allocation example: the portfolio protection strategy is improved by the lCARE modelling framework.
References
Acerbi, C. and Tasche, D. (2002). Expected Shortfall: a natural coherent alternative to Value at Risk, Economic Notes (2): 379–388.
Ameur, H. and Prigent, J.-L. (2014). Portfolio insurance: Gap risk under conditional multiples, European Journal of Operational Research (1): 238–253.
Artzner, P., Delbaen, F., Eber, J. and Heath, D. (1999). Coherent Measures of Risk, Mathematical Finance: 203–228.
Black, F. and Jones, R. W. (1987). Simplifying portfolio insurance, The Journal of Portfolio Management (1): 48–51.
Black, F. and Perold, A. F. (1992). Theory of constant proportion portfolio insurance, Journal of Economic Dynamics and Control (3): 403–426.
Cai, Z. and Xu, X. (2008). Nonparametric Quantile Estimation for Dynamic Smooth Coefficient Models, Journal of the American Statistical Association (492): 1595–1608.
Chen, Y., Härdle, W. K. and Pigorsch, U. (2010). Localized Realized Volatility, Journal of the American Statistical Association (492): 1376–1393.
Chen, Y. and Niu, L. (2014). Adaptive dynamic Nelson–Siegel term structure model with applications, Journal of Econometrics (1): 98–115.
Čížek, P., Härdle, W. K. and Spokoiny, V. (2009). Adaptive pointwise estimation in time-inhomogeneous conditional heteroscedasticity models, Econometrics Journal: 248–271.
De Rossi, G. and Harvey, A. (2009). Quantiles, expectiles and splines, Journal of Econometrics (2): 179–185.
Efron, B. (1991). Regression percentiles using asymmetric squared error loss, Statistica Sinica (1): 93–125.
Engle, R. F. and Manganelli, S. (2004). CAViaR: Conditional autoregressive value at risk by regression quantiles, Journal of Business & Economic Statistics (4): 367–381.
Estep, T. and Kritzman, M. (1988). TIPP: Insurance without complexity, The Journal of Portfolio Management (4): 38–42.
Föllmer, H. and Leukert, P. (1999). Quantile hedging, Finance and Stochastics (3): 251–273.
Gerlach, R. and Chen, C. W. (2015). Bayesian Expected Shortfall Forecasting Incorporating the Intraday Range, Journal of Financial Econometrics (1): 128–158.
Gerlach, R. H., Chen, C. W. S. and Lin, L. Y. (2012). Bayesian GARCH Semi-parametric Expected Shortfall Forecasting in Financial Markets, Business Analytics Working Paper No. 01/2012.
Hamidi, B., Maillet, B. and Prigent, J. L. (2014). A dynamic autoregressive expectile for time-invariant portfolio protection strategies, Journal of Economic Dynamics and Control: 1–29.
Härdle, W. K., Hautsch, N. and Mihoci, A. (2015). Local Adaptive Multiplicative Error Models for High-Frequency Forecasts, Journal of Applied Econometrics (4): 529–550.
Honda, T. (2004). Quantile Regression in Varying Coefficient Models, Journal of Statistical Planning and Inference: 113–125.
Inoue, A., Jin, L. and Rossi, B. (2014). Window selection for out-of-sample forecasting with time-varying parameters, CEPR Discussion Paper No. DP10168.
Jones, M. C. (1994). Expectiles and M-quantiles are quantiles, Statistics & Probability Letters (2): 149–153.
Jorion, P. (2000). Value at Risk: The New Benchmark for Managing Market Risk, 2nd edition, McGraw-Hill, New York.
Kim, M. O. (2007). Quantile Regression With Varying-Coefficients, The Annals of Statistics (2): 92–108.
Kuan, C. M., Yeh, J. H. and Hsu, Y. C. (2009). Assessing value at risk with CARE, the Conditional Autoregressive Expectile models, Journal of Econometrics (2): 261–270.
Mercurio, D. and Spokoiny, V. (2004). Statistical inference for time-inhomogeneous volatility models, The Annals of Statistics (2): 577–602.
Newey, W. K. and Powell, J. L. (1987). Asymmetric least squares estimation and testing, Econometrica (4): 819–847.
Pesaran, M. H. and Timmermann, A. (2007). Selection of estimation window in the presence of breaks, Journal of Econometrics (1): 134–161.
Spokoiny, V. (1998). Estimation of a function with discontinuities via local polynomial fit with an adaptive window choice, The Annals of Statistics (4): 1356–1378.
Spokoiny, V. (2009). Multiscale local change point detection with applications to Value-at-Risk, The Annals of Statistics (3): 1405–1436.
Spokoiny, V. and Willrich, N. (2015). Bootstrap tuning in ordered model selection, arXiv preprint arXiv:1507.05034.
Spokoiny, V. and Zhilova, M. (2015). Bootstrap confidence sets under model misspecification, The Annals of Statistics (6): 2653–2675.
Taylor, J. W. (2008). Estimating Value at Risk and Expected Shortfall Using Expectiles, Journal of Financial Econometrics (2): 231–252.
Xie, S., Zhou, Y. and Wan, A. T. (2014). A varying-coefficient expectile model for estimating Value at Risk, Journal of Business & Economic Statistics (4): 576–592.
Yao, Q. and Tong, H. (1996). Asymmetric least squares regression estimation: a non-parametric approach, Journal of Nonparametric Statistics (2): 273–292.

Appendix A Parametric Risk Bound
Data Simulation
Adaptive estimation of the CARE parameters demands critical values, since the distribution of the test statistics in our finite sample environment is unknown. The critical values of the proposed sequential testing procedure are therefore found by a simulation study: training data are simulated at each expectile level, the test statistics are calculated, and the corresponding critical values are obtained. Concerning the optimal implementation of the simulation procedure, there is unfortunately little literature covering this issue. Gerlach and Chen (2015) rely on the asymmetric normal distribution (AND) assumption and develop an MCMC simulation study to estimate autoregressive expectiles.

In a similar context, we follow Gerlach and Chen (2015) and Gerlach et al. (2012) in assuming the same AND framework. The AND has three parameters, the mean, the variance and a shape parameter, where the shape largely governs the tail structure of the distribution. After initially fixing the mean and variance parameters at their empirical estimates, we set the expectile value of the AND equal to its counterpart from the empirical data at one specific expectile level and then back out the shape parameter. In this way, we can generate the independent disturbance term ε_{t,τ} in (1) at a given expectile level τ.

Further, as discussed in Section 3, three pseudo-true parameter constellations are selected at each expectile level. These parameters are estimated from the one-year rolling sample, which is regarded as the longest homogeneous interval. The test statistics are then simulated for each pseudo-true parameter vector from Table 2 and for each given expectile level (τ = 0.0025, τ = 0.01 and τ = 0.05).
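The expectile fitting that underlies this simulation step can be illustrated with the plain asymmetric least squares fixed point of Newey and Powell (1987). This is a generic sketch on a synthetic Gaussian sample, not the paper's estimation code, and the function name is our own.

```python
import random

def expectile(sample, tau, tol=1e-10, max_iter=500):
    """Sample tau-expectile via the asymmetric least squares fixed point:
    e = sum(w_i * y_i) / sum(w_i) with weights w_i = |tau - 1(y_i <= e)|,
    i.e. tau above the current iterate and (1 - tau) below it."""
    e = sum(sample) / len(sample)   # the mean is the 0.5-expectile
    e_new = e
    for _ in range(max_iter):
        w = [tau if y > e else 1.0 - tau for y in sample]
        e_new = sum(wi * yi for wi, yi in zip(w, sample)) / sum(w)
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e_new

random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(50000)]
# expectiles are monotone in tau, with the mean sitting at tau = 0.5
print(expectile(y, 0.05) < expectile(y, 0.5) < expectile(y, 0.95))  # True
```

Matching such a fitted expectile to its AND counterpart at a fixed τ is what pins down the shape parameter in the procedure described above.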
Risk Bound

The largest average value of the (r-th power) difference between the respective log-likelihood values, see equation (6), is taken as the corresponding risk bound. Note that the interval candidates considered in this simulation cover the interval lengths detailed in sub-section 3.3.

The values of the simulated risk bound R_r(θ*_τ) across the different setups are provided in Table 5. We consider the modest (r = 0.8) and the conservative (r = 1) risk case and set three expectile levels, namely τ = 0.0025, τ = 0.01 as well as τ = 0.05. The risk bounds are obtained by Monte Carlo simulation for each selected parameter vector corresponding to Table 2, where we label the first quartile of the estimated parameter values as 'low', the median as 'mid' and the third quartile as 'high'. It turns out that the risk bounds in the conservative case are relatively larger than the bounds obtained in the modest risk case.

Table 5: Simulated risk bounds R_r(θ*_τ) given three expectile levels, τ = 0.0025, τ = 0.01 and τ = 0.05. We consider the modest (r = 0.8) and the conservative (r = 1.0) risk case. The risk bounds are obtained by Monte Carlo simulation for each selected parameter vector from Table 2, where we label the first quartile of the estimated parameters as 'low', the median as 'mid' and the third quartile as 'high'.
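The aggregation behind the risk bound can be sketched as follows. The log-likelihood gap values and interval lengths below are purely hypothetical stand-ins for the Monte Carlo output; only the max-of-averages structure reflects the text.

```python
def risk_bound(loglik_gaps, r=1.0):
    """Risk bound R_r(theta*): the largest Monte Carlo average of the
    r-th power absolute fitted log-likelihood difference across the
    interval candidates, cf. equation (6)."""
    return max(
        sum(abs(g) ** r for g in gaps) / len(gaps)
        for gaps in loglik_gaps.values()
    )

# hypothetical simulated gaps, keyed by interval length in observations
gaps = {60: [1.2, 0.8, 1.0], 125: [1.5, 1.1, 1.3], 250: [1.8, 1.4, 1.6]}
print(risk_bound(gaps, r=1.0) > risk_bound(gaps, r=0.8))  # True
```

For these illustrative gaps the conservative bound (r = 1) exceeds the modest one (r = 0.8), mirroring the ordering reported in Table 5.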
B Critical Values and Adaptive Estimation
Critical Values
Here we present the sequential choice of the critical values z_{k,τ} in practice. Considering the situation after the first k steps of the algorithm, we need to distinguish two cases: either a change point is detected at some step, or no change point is detected. In the first case, denote by B_q the event that the change point is first detected at step q,

B_q = {T_{1,τ} ≤ z_{1,τ}, ..., T_{q−1,τ} ≤ z_{q−1,τ}, T_{q,τ} > z_{q,τ}}    (19)

on which θ̂_τ = θ̃_{I_{q−1},τ}, q = 1, 2, ..., k. The sequential choice of z_{k,τ} is based on the decomposition

|ℓ_{I_k}(Y; θ̃_{I_k,τ}) − ℓ_{I_k}(Y; θ̂_τ)|^r = Σ_{q=1}^{k} |ℓ_{I_k}(Y; θ̃_{I_k,τ}) − ℓ_{I_k}(Y; θ̃_{I_{q−1},τ})|^r 1(B_q)    (20)

where k ≤ K. Note that the event B_q only depends on z_{1,τ}, ..., z_{q,τ}. For example, B_1 means T_{1,τ} > z_{1,τ} and θ̂_τ = θ̃_{I_0,τ} for all k̂ ≥ 1. We select z_{1,τ} as the minimal value that ensures

max_{k=1,...,K} E_{θ*_τ} |ℓ_{I_k}(Y; θ̃_{I_k,τ}) − ℓ_{I_k}(Y; θ̃_{I_0,τ})|^r 1(T_{1,τ} > z_{1,τ}) ≤ ρ_k R_r(θ*_τ)    (21)

Similarly, for every q ≥ 2, the event B_q means that the first false alarm occurs at step q and θ̂_τ = θ̃_{I_{q−1},τ}. If z_{1,τ}, ..., z_{q−1,τ} have already been fixed, the event B_q is controlled only by z_{q,τ}, which is chosen as the minimal value that ensures

max_{k≥q} E_{θ*_τ} |ℓ_{I_k}(Y; θ̃_{I_k,τ}) − ℓ_{I_k}(Y; θ̃_{I_{q−1},τ})|^r 1(B_q) ≤ ρ_k R_r(θ*_τ)    (22)

Hence the values z_{q,τ} can be obtained numerically by Monte Carlo simulation for the nine different scenarios of fixed θ*_τ. In view of the decomposition (20), it is easy to verify that the z_{q,τ} defined in this way fulfil the propagation condition (9). We summarize the concrete steps of calculating the critical values:

1. Select the minimal value satisfying (21) as the critical value of interval I_1, z_{1,τ}.
2. Given z_{1,τ}, select the minimal value satisfying (22) for q = 2 as the critical value of interval I_2, z_{2,τ}.
3. Repeat step 2 for q = 3, ..., K. We thereby sequentially obtain z_{k,τ}.

The resulting critical value curves for the selected six 'true' parameter constellations from Table 2 are presented in Figure 14.

Figure 14: Simulated critical values across the different parameter constellations given in Table 2 for the modest (upper panel, r = 0.8) and conservative (lower panel, r = 1) risk cases. We consider three expectile levels, τ = 0.05 (blue), τ = 0.01 (red) and τ = 0.0025.

Adaptive Estimation
Figure 14 shows that the critical values decrease along the interval index, with a similar magnitude across all cases. When carrying out the adaptive estimation, it is reasonable to choose the set of critical values in a data-driven fashion: at a fixed time point, the yearly parameter estimate α̂_τ serves as a benchmark for selecting the appropriate scenario. If its value is, for example, lower (higher) than the reported first (third) quartile case in Table 2, then the corresponding left (right) critical value curve is selected. Figure 15 presents the frequencies with which each critical value scenario is selected at the three expectile levels, according to the closeness of the parameter estimate α̂_τ.

Discussion
In addition, one possible alternative for obtaining the critical values is a bootstrap-based tuning, cf. Spokoiny and Willrich (2015).

Figure 15: Histogram of the selected parameter scenarios (Low, Mid and High) in the adaptive estimation for τ = 0.05 (blue), τ = 0.01 (red) and τ = 0.0025.

Recall that, for a given ρ, the unknown true critical values satisfy P(T_{k,τ} > z^{true}_{k,τ}) = ρ, with z^{true}_{k,τ} denoting the true critical value for interval index k and expectile level τ. In practice we use the simulated critical value z_{k,τ} as a substitute for z^{true}_{k,τ}. We can thus check the quality of the approximation by investigating the difference δ = |ρ − P(T_{k,τ} > z_{k,τ})| with ρ = 0.25, see Figure 16. Most of the differences δ are relatively small, largely below 5%, and tend to decline as the interval length rises.

Figure 16: Validation of the critical values at expectile levels τ = 0.05 (blue), τ = 0.01 (red) and τ = 0.0025 for the modest (upper panel, r = 0.8) and conservative (lower panel, r = 1) risk cases.

C Application
Figures 17, 18 and 19: Multipliers of the alternative strategies and performance comparison for the DAX, FTSE 100 and S&P 500 portfolios, respectively.