Estimation of conditional extreme risk measures from heavy-tailed elliptical random vectors
EESTIMATION OF CONDITIONAL EXTREME RISK MEASURES FROMHEAVY-TAILED ELLIPTICAL RANDOM VECTORS
A. USSEGLIO-CARLEVE
Abstract.
In this work, we focus on some conditional extreme risk measures estimation forelliptical random vectors. In a previous paper, we proposed a methodology to approximate ex-treme quantiles, based on two extremal parameters. We thus propose some estimators for theseparameters, and study their consistency and asymptotic normality in the case of heavy-taileddistributions. Thereafter, from these parameters, we construct extreme conditional quantilesestimators, and give some conditions that ensure consistency and asymptotic normality. Us-ing recent results on the asymptotic relationship between quantiles and other risk measures,we deduce estimators for extreme conditional L p − quantiles and Haezendonck-Goovaerts riskmeasures. Under similar conditions, consistency and asymptotic normality are provided. Inorder to test the effectiveness of our estimators, we propose a simulation study. A financial dataexample is also proposed. Keywords:
Elliptical distribution; Extreme quantiles; Extreme value theory; Haezendonck-Goovaerts riskmeasures; Heavy-tailed distributions; L p − quantiles. Introduction
In many fields such as finance or actuarial science, quantile, or Value-at-Risk (see Linsmeier and Pear-son (2000)) is a recognized tool for risk measurement. In Koenker and Bassett (1978), quantile is seenas minimum of an asymmetric loss function. However, Value-at-Risk, or VaR, has some disadvantages,such as that of not being a coherent measure in the sense of Artzner et al. (1999). These limits have ledmany authors to use alternative risk measures.On the basis of Koenker’s approach, Newey and Powell (1987) proposed another measure called expectile,which has since been widely studied (see for example Sobotka and Kneib (2012) or more recently Daouiaet al. (2017a)) and applied (Taylor (2008) and Cai and Weng (2016)). Later, Breckling and Chambers(1988) introduced M-quantiles, a family of measures minimizing an asymmetric loss function, and Chen(1996) focused on asymmetric power functions to define L p − quantiles. The cases p = 1 and p = 2 corre-spond respectively to the quantile and expectile. Recently, Bernardi et al. (2017) provided some resultsconcerning L p − quantiles for Student distributions, and have shown that closed formula are difficult toobtain in the general case.In parallel, Artzner et al. (1999) introduced the Tail-Value-at-Risk as an alternative to Value-at-Risk, andthis risk measure subsequently had many applications (see e.g. Barg`es et al. (2009)). Moreover, TVaRbelongs to a larger family of risk measures called Haezendonck-Goovaerts risk measures and introducedin Haezendonck and Goovaerts (1982), Goovaerts et al. (2004) and Tang and Yang (2012). In the sameway as L p − quantiles, we do not have an explicit formula in the general case.However, for a heavy-tailed random variable, Daouia et al. (2017b) proved that L p − quantile and L − quantile(or quantile) are asymptotically proportional. Then, as proposed in Daouia et al. (2017a), an estimatorof a L p − quantile may be deduced from a suitable estimator of the quantile, for extreme levels. In thesame spirit, Tang and Yang (2012) provided a similar asymptotic relationship between a subclass ofHaezendonck-Goovaerts risk measures and quantiles. Finally, all these risk measures we introduced maybe estimated through a quantile estimation in an asymptotic setting.Extreme quantiles estimation is a very active area of research. In recent years, we can give many exam-ples : Gardes and Girard (2005) focused on Weibull tail distributions, El Methni et al. (2012) proposed astudy for heavy and light tailed distributions, Gong et al. (2015) was interested in functions of dependentvariables, and de Valk (2016) provided a methodology for high quantiles estimation. The question ofextreme conditional quantiles estimation has also been explored in Wang et al. (2012) in a regressionframework. However, Maume-Deschamps et al. (2017) and Maume-Deschamps et al. (2018) have shownthat the regression setting may lead to a poor estimation of extreme measures in the case of ellipticaldistributions. Elliptical distributions, introduced in Kelker (1970), aim to generalize the gaussian distri-bution, i.e to define symmetric distributions with different properties, such as a heavy tail. This is why a r X i v : . [ m a t h . S T ] J u l A. USSEGLIO-CARLEVE elliptical distributions are more and more used in finance (see for example Owen and Rabinovitch (1983)or Xiao and Valdez (2015)).For all these reasons, we consider, in this paper, an elliptical random vector Z = ( X , Y ) with the con-sistency property (in the sense of Kano (1994)), where X ∈ R N , Y ∈ R , and propose to estimate someextreme quantiles (and deduce L p − quantiles and Haezendonck-Goovaerts risk measures) of Y | X = x ,i.e. of a component conditionnally to the others. In order to improve the conditional quantile estimation,we proposed in Maume-Deschamps et al. (2017) a methodology based on two extremal parameters, andthe unconditional quantile of Y . Indeed, if we denote F − Y | x ( α ) the quantile of level α of Y | X = x , thelatter is asymptotically equivalent to a quantile of Y ( F − Y will be the quantile function of Y ), in thefollowing manner :(1.1) F − Y | x ( α ) ∼ α → F − Y ( δ ( α, η, (cid:96) )) , where δ is a known function (detailed later) depending on α and two parameters η and (cid:96) called extremalparameters. One can notice that Equation (1.1) may only holds under the consistency property of Z .Maume-Deschamps et al. (2017) has also shown that extremal parameters do not exist for some consistentelliptical distributions (see e.g. the Laplace distribution).In this paper, the goal will be in a first time to give a sufficient condition on Z that ensures the existenceof η and (cid:96) . This is why a regularly varying assumption is done. After having proved their existence,estimators for the parameters η and (cid:96) are proposed, and therefore for extreme conditional quantiles.The paper is organized as follows. Section 2 provides some definitions and properties of elliptical distri-butions, including the extremal parameters introduced in Maume-Deschamps et al. (2017). A particularinterest is given to consistent elliptical distributions. Section 3 is devoted to extremal parameters η and (cid:96) . Under a regularly varying assumption, their existence is proved, and estimators are proposed. Byadding some conditions, consistency and asymptotic normality results are given. In Section 4, we usethe results of Section 3 to introduce some estimators of extreme quantiles, and give consistency and as-ymptotic normality results. The asymptotic relationships between L p − quantiles and quantiles recalled inSection 5 allow us to give extreme L p − quantiles estimators. The same approach is proposed for extremeHaezendonck-Goovaerts risk measures. In order to analyze the efficiency of our estimators, we propose asimulation study in Section 6, and a real data example in Section 7.2. Preliminaries
In this section, we first recall some classical results on elliptical distributions. We consider a d − dimensionalvector Z from an elliptical distribution with parameters µ ∈ R d and Σ ∈ R d × d . Then the density of Z ,if it exists, is given by :(2.1) c d | Σ | g d (cid:16) ( z − µ ) T Σ − ( z − µ ) (cid:17) .c d and g d will respectively be called normalization coefficient and generator of Z . Cambanis et al. (1981)gives another way to characterize an elliptical distribution, through the following stochastic representa-tion :(2.2) Z d = µ + R Λ U ( d ) , where ΛΛ T = Σ , U ( d ) is a d − dimensional random vector uniformly distributed on the unit sphere ofdimension d , and R is a non-negative random variable independent of U ( d ) . R is called radius of Z . Inthe following, the radius must have a particular shape. Indeed, Huang and Cambanis (1979) and Kano(1994) propose a representation for some particular elliptical distributions. Let us consider ( Z d ) d ∈ N ∗ afamily of elliptical distributions of dimension d . Then ( Z d ) d ∈ N ∗ possesses the consistency property if itadmits the following representation for all d ∈ N ∗ :(2.3) Z d d = µ + χ d ξ Λ U ( d ) , where χ d is the square root of a χ distribution with d degrees of freedom, ξ is a non-negative randomvariable which does not depend on d , and χ d , ξ and U ( d ) are mutually independent. In Kano (1994),such elliptical distributions are said consistent, have the advantage of being stable by linear combina-tions (combining Theorem 2.16 of Fang et al. (1990) and Theorem 1 in Kano (1994)), and allow us todefine elliptical random fields (see, e.g., Opitz (2016)). In the following, we focus on consistent ellipticaldistributions, and take the notation(2.4) R d = χ d ξ. ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 3
For the sake of clarity, we will say that a random variable with stochastic representation (2.3) is( ξ, d ) − elliptical with parameters µ and Σ . Using this terminology, the purpose of the paper is as follows.Let Z = ( X , Y ) ∈ R N +1 be a ( ξ, N + 1) − elliptical random vector with parameters µ and Σ , where X ∈ R N and Y ∈ R . Consistency property of Z implies that X and Y are respectively ( ξ, N ) − and( ξ, − elliptical distributions with parameters µ X ∈ R N , Σ X ∈ R N × N and µ Y ∈ R , Σ Y ∈ R . We alsodenote Σ X Y the covariance vector between X and Y . The aim is thus to provide a predictor for thequantile of the conditional distribution Y | X = x . According to Theorem 7 of Frahm (2004), such adistribution is still elliptical, with a radius R ∗ different from R in the general case. In particular, wehave :(2.5) { Y | X = x } d = µ Y | X + σ Y | X R ∗ U (1) , where µ Y | X = µ Y + Σ T X Y Σ X − ( x − µ X ) and σ Y | X = Σ Y − Σ T X Y Σ X − Σ X Y . Then, denoting Φ R ∗ ( t ) = P (cid:0) R ∗ U (1) ≤ t (cid:1) , and using the translation equivariance and positive homogeneity of elliptical quantiles(see McNeil et al. (2015)), conditional quantiles of Y | X = x may be expressed as :(2.6) q α ( Y | X = x ) = µ Y | X + σ Y | X Φ − R ∗ ( α ) , where α ∈ ]0 , q α ( Y | X = x ), we need to estimate theconditional function Φ − R ∗ . Unfortunately, when we have a data set X , ..., X n , we only observe theunconditional distribution of X . This is why, in Maume-Deschamps et al. (2017), we have given apredictor for conditional quantiles, based solely on the unconditional c.d.f Φ R ( t ) = P (cid:0) R U (1) ≤ t (cid:1) . Thisapproximation is based on two parameters η ∈ R and 0 < (cid:96) < + ∞ such that :(2.7) lim t →∞ ¯Φ R ∗ ( t )¯Φ R ( t η ) = (cid:96). Table 1 gives some examples of coefficients η and (cid:96) for classical elliptical distributions. However, we have Distribution η (cid:96)
Gaussian 1 1Student, ν > Nν + 1 Γ ( ν + N +12 ) Γ ( ν ) Γ ( ν + N ) Γ ( ν +12 ) (cid:16) M ( x ) ν (cid:17) N + ν ν N ν + N UGM 1 min( θ ,...,θ n ) N exp (cid:26) − min( θ ,...,θ n)22 M( x ) (cid:27) n (cid:80) k =1 π k θ Nk exp (cid:18) − θ k M ( x ) (cid:19) Slash, a > Na + 1 Γ ( N +1+ a ) M ( x ) N + a Γ ( N + a ) ( N + a ) χ N + a ( M ( x ))2 a − Γ ( a ) Table 1.
Coefficients η and (cid:96) for classical distributions, where M ( x ) = ( x − µ X ) (cid:62) Σ X − ( x − µ X ). shown in Maume-Deschamps et al. (2017) that such parameters not always exist for all elliptical distri-bution (see, e.g, Laplace distribution). In a first time, we can wonder in which setting these parametersexist. We thus consider the following assumption, that will ensures the existence of η and (cid:96) . Assumption 1 (Second order regular variations) . We assume that there exist a function A such that A ( t ) → as t → + ∞ , and (2.8) lim t → + ∞ Φ − R ( − ωt ) Φ − R ( − t ) − ω γ A ( t ) = ω γ ω ρ − ρ , where γ > and ρ < . This assumption is widespread in literature of extreme quantiles (see, e.g, Daouia et al. (2017a)). Afirst consequence is that Φ R , or equivalently F R is attracted to the maximum domain of Pareto-typedistributions with tail index γ . Furthermore, it entails Φ − R (1 − /t ) ∼ c t γ , or equivalently ¯Φ R ( t ) ∼ c t − γ as t → + ∞ (see de Haan and Ferreira (2006)). As example, Student distribution satisfies Assumption 1.The following lemma provides some results concerning asymptotic equivalences. Lemma 2.1 (Regular variation properties) . Under Assumption 1, we get the following regular variationsproperties :
A. USSEGLIO-CARLEVE (i) The random variable ξ satisfies (2.9) ¯ F ξ ( t ) ∼ t → + ∞ λt − γ , λ ∈ R . (ii) For all d ∈ N ∗ , the random variable R d = χ d ξ is attracted to the maximum domain of Pareto-typedistribution with tail index γ , and (2.10) ¯ F R d ( t ) ∼ t → + ∞ γ Γ (cid:16) d + γ − (cid:17) Γ (cid:0) d (cid:1) ¯ F ξ ( t ) ∼ t → + ∞ γ Γ (cid:16) d + γ − (cid:17) Γ (cid:0) d (cid:1) λt − γ , λ ∈ R . (iii) For all η > , d ∈ N ∗ , (2.11) f R d ( t ) f R ( t η ) ∼ t → + ∞ √ π Γ (cid:16) d + γ − (cid:17) Γ (cid:0) d (cid:1) Γ (cid:16) γ − (cid:17) t ( η − γ − +1) . These results will be usefull throughout the paper, and especially in the following result which provesthe existence of our parameters.
Proposition 2.2 (Existence of extremal parameters) . Under Assumption 1, parameters η and (cid:96) exist,and are expressed : (2.12) η = 1 + γN(cid:96) = Γ (cid:16) N + γ − (cid:17) Γ (cid:16) γ − (cid:17) γ − π − N ( N + γ − ) c N g N ( M ( x )) . One can notice that η is only related to the tail index γ , and not to the covariate vector x , while (cid:96) is depending on c N g N ( M ( x )). In the next, we thus denote rather (cid:96) ( x ), in order to emphasize the roleplayed by the covariate vector x . We can now give the following predictor for q α ( Y | X = x ) :(2.13) q α ↑ ( Y | X = x ) = µ Y | X + σ Y | X (cid:34) Φ − R (cid:40) − (cid:96) ( x )1 − α + 2(1 − (cid:96) ( x )) (cid:41)(cid:35) /η . From there, we have proved in Theorem 7 of Maume-Deschamps et al. (2017) that q α ↑ ( Y | X = x ) and q α ( Y | X = x ) were asymptoticaly equivalent as α →
1, i.e(2.14) q α ↑ ( Y | X = x ) ∼ α → q α ( Y | X = x ) . A similar equivalence has been easily deduced for α →
0, using the symmetry properties of ellipticaldistributions. In this paper, we focus on the case α →
1, case α → η and (cid:96) ( x ). Before that, we need to do a littlesimplification. Indeed, Equation (2.13) shows that the extreme quantile estimation requires the priorestimation of quantities µ Y | X and σ Y | X . These quantities may be easily estimated by the method ofmoments or fixed-point algorithm (c.f p.66 of Frahm (2004)). In a spatial setting, even if the variable Y is not observed, a stationarity assumption on the random field makes it possible to estimate these values(see Cressie (1988)). Furthermore, the speed of convergence of these methods is higher than those of theestimators we propose in this paper, and therefore do not interfere in the asymptotic results. This iswhy, in the following, we suppose that µ Y | X , σ Y | X , and therefore µ X , Σ X are known. Then, it remainsto estimate η , (cid:96) ( x ) and Φ − R ∗ . Section 3 focuses on η and (cid:96) ( x ), while Section 4 deals with Φ − R ∗ .3. Extremal coefficients estimation
In this section, the aim is to estimate the extremal parameters η and (cid:96) ( x ) conditionaly to the co-variates vector X = x . For that purpose, we consider a random sample X , ..., X n independent andidentically distributed from an ( ξ, N ) − elliptical vector with the same distribution as X , and denote M ( x ) = ( x − µ X ) T Σ X − ( x − µ X ). The aim is then to give two suitable estimator ˆ η and ˆ (cid:96) ( x ), respec-tively for η and (cid:96) ( x ). ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 5
Estimation of η . We notice that coefficient η is directly related on the tail index γ . Then, using asuitable estimator of γ , we easily deduce η . There are several estimators widespread in the literature. Asexamples, Pickands (1975), Schultze and Steinebach (1996) or Kratz and Resnick (1996) provide someestimators for γ . In the following, we use the Hill estimator, introduced in Hill (1975) :(3.1) ˆ γ k n = 1 k n k n (cid:88) i =1 ln (cid:18) W [ i ] W [ k n +1] (cid:19) , where W [1] ≥ . . . ≥ W [ k n +1] ≥ . . . ≥ W [ n ] and k n = o ( n ) such that k n → + ∞ as n → + ∞ . In thiscontext, the statistic W may be : • The first (or indifferently any) component of the reduced centered covariate vector Λ X − ( X − µ X ),where Λ X T Λ X = Σ X . This approach works well, but we do not use all available data. • The Mahalanobis norm (cid:113) ( X − µ X ) T Σ X − ( X − µ X ). This approach has the advantage ofusing all available data.Indeed, according to Theorem 2 of Hashorva (2007b), the two last quantities both admit γ as tail index.In the following we will use the one-component approach, since the asymptotic results we give are validunder Assumption 1, applied to the univariate c.d.f Φ R . Moreover, numerical comparisons seem show thatthe second approach does not significantly improve the estimation of the parameters. Main properties ofˆ γ k n may be found in de Haan and Resnick (1998). Under second order condition given in Assumption 1,de Haan and Ferreira (2006) proved the following asymptotic normality for ˆ γ k n .(3.2) (cid:112) k n (ˆ γ k n − γ ) → n → + ∞ N (cid:18) λ − ρ , γ (cid:19) , where λ = lim n → + ∞ √ k n A (cid:16) nk n (cid:17) and k n = o ( n ) such that k n → + ∞ as n → + ∞ . Then, using Proposition 2.2and Equation (3.1), we define the following estimator for η . Definition 3.1 (Estimator of η ) . We define ˆ η k n as (3.3) ˆ η k n = Nk n k n (cid:88) i =1 ln (cid:18) W [ i ] W [ k n +1] (cid:19) + 1 . As an affine transformation of Hill estimator, asymptotic normality of ˆ η k n is obvious. In order tosimplify the next results, we suppose λ = 0 in what follows. Proposition 3.1 (Asymptotic normality of ˆ η k n ) . Under Assumption 1, and if lim n → + ∞ √ k n A (cid:16) nk n (cid:17) = 0 ,then (3.4) (cid:112) k n (ˆ η k n − η ) → n → + ∞ N (cid:0) , N γ (cid:1) . Estimation of (cid:96) ( x ) . The form of (cid:96) ( x ), given in Proposition 2.2, leads to a more complicated estima-tion. Indeed, (cid:96) ( x ) is related on both γ and c N g N ( M ( x )). Our estimator for γ is given in Equation (3.1).Concerning c N g N ( M ( x )), we propose a kernel estimator. Class of kernel estimators, introduced in Parzen(1962), makes it possible to estimate probability densities. Then, the following lemma will be usefull forthe construction of our estimator. This result comes from p.108 of Johnson (1987). Lemma 3.2.
The Mahalanobis distance M ( X ) = ( X − µ X ) T Σ X − ( X − µ X ) has density : (3.5) f M ( X ) ( t ) = π N Γ (cid:0) N (cid:1) x N − c N g N ( t ) . Using Lemma 3.2, we introduced a kernel estimator ˆ g h n for c N g N ( M ( x )). Definition 3.2 (Generator estimator) . We define ˆ g h n as (3.6)ˆ g h n = M ( x ) − N Γ (cid:0) N (cid:1) π N ˆ f M ( X ) ( M ( x )) = M ( x ) − N Γ (cid:0) N (cid:1) π N nh n n (cid:88) i =1 K (cid:32) M ( x ) − ( X i − µ X ) T Σ X − ( X i − µ X ) h n (cid:33) , where the kernel K fills some conditions given in Parzen (1962) and bandwith h n verifies h n → and nh n → + ∞ as n → + ∞ . A. USSEGLIO-CARLEVE
Parzen (1962) provided the asymptotic normality for kernel estimators. We first define some assump-tions concerning K and g N needed for the next results. • ( K
1) : K is compactly supported on [ − ,
1] and bounded. In addition, (cid:82) R K ( u ) du = 1, K ( u ) = K ( − u ) ∀ u ∈ R and (cid:82) R u K ( u ) du (cid:54) = 0. • ( K
2) : In the neighborhood of M ( x ), g N is bounded and twice continuously differentiable withbounded derivatives.The following results may be found in Li and Racine (2007). Under conditions ( K − ( K E (cid:104) ˆ f M ( X ) ( M ( x )) (cid:105) − f M ( X ) ( M ( x )) = O ( h n ) V ar (cid:104) ˆ f M ( X ) ( M ( x )) (cid:105) = O (cid:16) nh n (cid:17) . By adding the condition nh n → n → + ∞ , we also obtain the asymptotic normality :(3.8) (cid:112) nh n (cid:16) ˆ f M ( X ) ( M ( x )) − f M ( X ) ( M ( x )) (cid:17) → n → + ∞ N (cid:18) , f M ( X ) ( M ( x )) (cid:90) K ( u ) du (cid:19) . Using the previous results given above, the following asymptotic normality for ˆ g h n is easily deduced. Proposition 3.3 (Asymptotic normality of generator estimator) . Under conditions ( K − ( K , andtaking a sequence h n such that h n → , nh n → + ∞ and nh n → as n → + ∞ , then the followingrelationship holds : (3.9) (cid:112) nh n (ˆ g h n − c N g N ( M ( x ))) → n → + ∞ N (cid:32) , M ( x ) − N Γ (cid:0) N (cid:1) π N c N g N ( M ( x )) (cid:90) K ( u ) du (cid:33) . Replacing γ by ˆ γ and c N g N ( M ( x )) by ˆ g h n in Equation (2.12), we are now able to provide an estimatorˆ (cid:96) ( x ) for (cid:96) ( x ), in the following definition. Furthermore, under Assumption 1, we give the asymptoticnormality of ˆ (cid:96) ( x ). Definition 3.3 (Estimator of (cid:96) ( x )) . We define ˆ (cid:96) k n ,h n ( x ) as : (3.10) ˆ (cid:96) k n ,h n ( x ) = Γ (cid:18) N +ˆ γ − kn +12 (cid:19) Γ (cid:18) ˆ γ − kn +12 (cid:19) ˆ γ − k n π − N (cid:0) N + ˆ γ − k n (cid:1) ˆ g h n . where ˆ γ k n and ˆ g h n are respectively given in Equations (3.1) and (3.6) . Proposition 3.4.
Under Assumption 1, conditions ( K − ( K and if lim n → + ∞ √ k n A (cid:16) nk n (cid:17) = 0 , thefollowing asymptotic relationships hold :(i) If nh n /k n → n → + ∞ + ∞ and √ k n h n → n → + ∞ , then (3.11) (cid:112) k n (cid:16) ˆ (cid:96) k n ,h n ( x ) − (cid:96) ( x ) (cid:17) → n → + ∞ N (0 , V ( γ, c N g N ( M ( x )))) . (ii) If nh n /k n → n → + ∞ and nh n → n → + ∞ , then (3.12) (cid:112) nh n (cid:16) ˆ (cid:96) k n ,h n ( x ) − (cid:96) ( x ) (cid:17) → n → + ∞ N (0 , V ( γ, c N g N ( M ( x )))) , where ( Ψ is the digamma function (see p.258 of Abramowitz et al. (1966))) (3.13) V ( γ, c N g N ( M ( x ))) = π − N γ c N g N ( M ( x )) Γ (cid:16) N + γ − (cid:17) Γ (cid:16) γ − (cid:17) (cid:34) Ψ (cid:16) γ − (cid:17) − Ψ (cid:16) N + γ − (cid:17) γ ( Nγ +1) − N ( Nγ +1) (cid:35) V ( γ, c N g N ( M ( x ))) = Γ ( N ) M ( x ) N − π N c N g N ( M ( x )) (cid:82) K ( u ) du (cid:34) Γ (cid:16) N + γ − (cid:17) Γ (cid:16) γ − (cid:17) γ − π − N ( N + γ − ) c N g N ( M ( x )) (cid:35) . We have the asymptotic normality for our estimators ˆ η k n and ˆ (cid:96) k n ,h n ( x ). The next proposition givesthe joint distribution according to the asymptotic relations between k n and h n . The proof derives fromdelta method. ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 7
Proposition 3.5.
Under Assumption 1, conditions ( K − ( K and if √ k n A (cid:16) nk n (cid:17) → as n → + ∞ ,then the following asymptotic relationships hold :(i) If nh n /k n → n → + ∞ and nh n → n → + ∞ , then (3.14) (cid:112) nh n (cid:18) ˆ (cid:96) k n ,h n ( x ) − (cid:96) ( x )ˆ η k n − η (cid:19) → n → + ∞ N (cid:18)(cid:18) (cid:19) , (cid:18) V ( γ, c N g N ( M ( x ))) 00 0 (cid:19)(cid:19) , where V ( γ, c N g N ( M ( x ))) is given in Equation (3.13) .(ii) If nh n /k n → n → + ∞ + ∞ and √ k n h n → n → + ∞ , then (3.15) (cid:112) k n (cid:18) ˆ (cid:96) k n ,h n ( x ) − (cid:96) ( x )ˆ η k n − η (cid:19) → n → + ∞ N (cid:18)(cid:18) (cid:19) , (cid:18) V ( γ, c N g N ( M ( x ))) − N γ (cid:112) V ( γ, c N g N ( M ( x ))) − N γ (cid:112) V ( γ, c N g N ( M ( x ))) N γ (cid:19)(cid:19) , where V ( γ, c N g N ( M ( x ))) is given in Equation (3.13) . Using the previous results, we propose, in Section 4, some estimators of extreme conditional quantilesbased on ˆ (cid:96) k n ,h n ( x ) and ˆ η k n . 4. Extreme quantiles estimation
In this section, we propose some estimators of extreme quantiles q α n ( Y | X = x ), for a sequence α n → n → + ∞ . For that purpose, we divide the study in two cases : • Intermediate quantiles, i.e we suppose n (1 − α n ) → + ∞ . It entails that the estimation of the α n − quantile leads to an interpolation of sample results. • High quantiles. According to de Haan and Rootz´en (1993), we suppose n (1 − α n ) →
0, i.e weneed to extrapolate sample results to areas where no data are observed.In both cases, the asymptotic results require some conditions we will provide throughout the section.The first one brings together the assumptions of Proposition 3.5. • ( C ) : Kernel conditions ( K − ( K
2) hold. In addition, k n → + ∞ , h n → k n = o ( nh n ), √ k n h n → √ k n A (cid:16) nk n (cid:17) → n → + ∞ .Condition ( C ) will be common to both approaches, and ensures in a first time that Hill estimator isunbiased, according to Equation (3.2). Moreover, k n = o ( nh n ) means that ˆ g h n converges to c N g N ( M ( x ))faster than ˆ γ k n to γ . In practice, this condition seems appropriate, because k n must not be too large forthe Hill estimator to be unbiased, and h n must be tall enough to provide a good estimation of (cid:96) ( x ).4.1. Intermediate quantiles.
We consider the case where n (1 − α n ) → + ∞ with α n → n → + ∞ .We recall q α n ( Y | X = x ) = µ Y | X + σ Y | X Φ − R ∗ ( α n ). According to Equation (2.14), we can approximateΦ − R ∗ ( α n ) by Φ − R (cid:16) − (cid:0) (cid:96) ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:1) − (cid:17) . The idea is then to estimate a quantile of level1 − v n = 1 − (cid:0) (cid:96) ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:1) − on the unconditional radius R , easier to deal with. Bynoticing that nv n ∼ (cid:96) ( x ) − n (1 − α n ) → + ∞ as n → + ∞ , we introduce the following statistic order basedestimator ˆ q α n ( Y | X = x ) for q α n ( Y | X = x ), inspired by Theorem 2.4.1 in de Haan and Ferreira (2006). Definition 4.1 (Intermediate quantile estimator) . We define (ˆ q α n ( Y | X = x )) n ∈ N as : (4.1) ˆ q α n ( Y | X = x ) = µ Y | X + σ Y | X (cid:0) W [ n ˜ v n +1] (cid:1) ηkn , where ˜ v n = (cid:16) (cid:96) k n ,h n ( x ) (cid:16) − α n − (cid:17)(cid:17) − , ˆ η k n and ˆ (cid:96) k n ,h n ( x ) are respectively given in Definitions 3.1and 3.3, and W is the first (or indifferently any) component of the vector Λ X − ( X − µ X ) . In order to prove the consistency of our estimator, we need a further condition ( C int ) concerning thesequences α n and k n , usefull in the proof. • ( C int ) : n (1 − α n ) → + ∞ , ln(1 − α n ) = o ( √ k n ) and √ k n ln(1 − α n ) = o (cid:16)(cid:112) n (1 − α n ) (cid:17) as n → + ∞ .Obviously, ( C int ) contains n (1 − α n ) → + ∞ , as mentioned above. Furthermore, ln(1 − α n ) = o ( √ k n )ensures that the rate of convergence in Theorem 4.1 goes to infinity (see below) and the last relationshipallows us to eliminate a term in the proof. In order to make this condition more meaningful, let uspropose a simple example: we choose our sequences in polynomial forms k n = n b , 0 < b < α n = 1 − n − a , a >
0. It is straightforward to see that ln(1 − α n ) = o ( k n ) and ln( n (1 − α n )) = o ( k n ) , ∀ a > , < b <
1. However, √ k n ln(1 − α n ) = o (cid:16)(cid:112) n (1 − α n ) (cid:17) if and only if a <
1, i.e n (1 − α n ) → + ∞ as A. USSEGLIO-CARLEVE n → + ∞ .In a first time, we give a result concerning the asymptotic behavior of ˆ q α n ( Y | X = x ) with respect to q α n ↑ ( Y | X = x ). Then, with Equation (2.14), we easily deduce a consistency result for ˆ q α n ( Y | X = x ). Theorem 4.1 (Consistency of ˆ q α n ( Y | X = x )) . Let us denote v n = (cid:0) (cid:96) ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:1) − and ˜ v n = (cid:16) (cid:96) k n ,h n ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:17) − . Under Assumption 1, and conditions ( C ) , ( C int ) : (4.2) √ k n ln (1 − α n ) (cid:18) ˆ q α n ( Y | X = x ) q α n ↑ ( Y | X = x ) − (cid:19) → n → + ∞ N (cid:32) , N γ ( γN + 1) (cid:33) . And therefore : (4.3) ˆ q α n ( Y | X = x ) q α n ( Y | X = x ) P → . The same asymptotic normality with Φ − R ∗ ( α n ) instead of Φ − R (1 − v n ) η may be deduced from Propo-sition 4.1 under the condition lim n → + ∞ √ k n ln (1 − α n ) ln (cid:32) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) (cid:33) = 0 . This condition, which seems quite simple, is difficult to prove in a general context. Indeed, we need asecond order expansion of Equation (2.14). But the second order properties of the unconditional quantileΦ − R given by Assumption 1 are not necessarily the same as those of the conditional quantile Φ − R ∗ , whichmakes the study complicated. However, in some simple cases, we are able to solve the problem. We thusgive another assumption, stronger that Assumption 1. In the following, we refer to this assumption forresults of asymptotic normality. Assumption 2. ∀ d ∈ N ∗ , there exists λ , λ ∈ R such that : (4.4) c d g d ( t ) = λ t − d + γ − (cid:104) λ t ρ γ + o (cid:16) t ρ γ (cid:17)(cid:105) . It is obvious that Assumption 2 implies Assumption 1. Indeed, according to Hua and Joe (2011),Equation (4.4) is equivalent to say that c g ( t ) is regularly varying of second order with indices − − γ − , ρ/γ and an auxiliary function proportional to t ργ . Then, Proposition 6 in Hua and Joe (2011) entails¯Φ R ( t ) is second order regularly varying with − γ − , ρ/γ and the same kind of auxiliary function. Finally,this is equivalent (see de Haan and Ferreira (2006)) to Assumption 1 with indicated γ and ρ , and anauxiliary function A ( t ) proportional to t ρ .Furthermore, according to Kano (1994), the dependance on d in Equation (4.4) remains coherent withthe assumption of consistent elliptical distributions, the latter having to have a function g d depending on d . As an example, the Student distribution fills Assumption 2. The latter allows us to provide a secondorder expansion for Equation (2.14).In order to prove the asymptotic normality of ˆ q α n ( Y | X = x ), we add a technical condition (cid:0) C HGint (cid:1) thatinvolves tail indices γ and ρ . • (cid:0) C HGint (cid:1) : ( C int ) holds. In addition, √ k n (1 − α n ) = o (ln(1 − α n )), and :(4.5) lim n → + ∞ √ k n ln (1 − α n ) (1 − α n ) min( − ρ, γ ) γN +1 = 0 . Condition (cid:0) C HGint (cid:1) means that sequence k n must not be too large. In view of Equation (4.5), it is obviousthat if N or γ goes to infinity, (cid:0) C HGint (cid:1) is not filled. The tail of the underlying distribution may thus notbe too heavy, and the size N of the covariate not too large. Similarly, they no longer hold if γ or ρ goesto 0, i.e. if the underlying distribution is either too lightly varying, or its c.d.f. takes too long to behavelike λt − /γ . Proposition 4.2 (Asymptotic normality of ˆ q α n ( Y | X = x )) . Assume that Assumption 2 and conditions ( C ) , (cid:0) C HGint (cid:1) hold. Then : (4.6) √ k n ln (1 − α n ) (cid:18) ˆ q α n ( Y | X = x ) q α n ( Y | X = x ) − (cid:19) → n → + ∞ N (cid:32) , N γ ( γN + 1) (cid:33) . ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 9
We notice that asymptotic variance in Equation (4.2) tends to 0 as the number of covariates N goesto + ∞ . Indeed, we observe a fast convergence of ˆ q α n to q α n ↑ when N is large. However, (cid:0) C HGint (cid:1) is notfilled if N is tall. Then asymptotic normality (4.6) no longer holds. This is explained by the fact thatmore N is tall, more q α n ( Y | X = x ) /q α n ↑ ( Y | X = x ) (see Equation (2.14)) tends to 1 slowly.4.2. High quantiles.
We now consider n (1 − α n ) → n → + ∞ . In the following definition, weintroduce another quantile estimator ˆˆ q α n ( Y | X = x ) for q α n ( Y | X = x ). We first recall that the ideais to estimate an unconditional quantile of level 1 − v n = 1 − (cid:0) (cid:96) ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:1) − . A quickcalculation proves that v n is asymptotically equivalent to (cid:96) ( x ) − (1 − α n ), and therefore nv n → n → + ∞ . The use of statistic order (at level nv n ) is then impossible in that case. According to Theorem4.3.8 in de Haan and Ferreira (2006), a way to estimate such a quantile may be to take the statistic orderat the intermediate level k n (we recall k n → + ∞ ), and apply an extrapolation coefficient ( k n / ( nv n )) γ .This approach inspired the following estimator. Definition 4.2 (High quantile estimator) . We define (cid:16) ˆˆ q α n ( Y | X = x ) (cid:17) n ∈ N as : (4.7) ˆˆ q α n ( Y | X = x ) = µ Y | X + σ Y | X (cid:34) W [ k n +1] (cid:18) k n n (cid:18) (cid:96) k n ,h n ( x ) (cid:18) − α n − (cid:19)(cid:19)(cid:19) ˆ γ kn (cid:35) ηkn . The aim is now to study the asymptotic properties of ˆˆ q α n ( Y | X = x ). As for the intermediate quantileestimator, we propose a result of asymptotic normality, under a condition ( C high ) (given below) whichwe then refine under Assumption 2. • ( C high ) : n (1 − α n ) →
0, ln ( n (1 − α n )) = o ( √ k n ) and ln(1 − α n )ln ( nkn (1 − α n ) ) → θ ∈ [0 , + ∞ [ as n → + ∞ .The second statement is added in order to apply Theorem 4.3.8 in de Haan and Ferreira (2006), and thethird one is a notation used in the following. Let us propose a simple example: if we choose our sequencesin polynomial forms k n = n b , 0 < b < α n = 1 − n − a , a >
0, the first condition is filled if and onlyif a >
1, ln( n (1 − α n )) = o ( √ k n ) and the last assertion holds with a particular θ given later.The consistency result that follows immediatly is given just below. Theorem 4.3 (Consistency of high quantile estimator) . Let us denote v n = (cid:0) (cid:96) ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:1) − and ˜ v n = (cid:16) (cid:96) k n ,h n ( x ) (cid:0) (1 − α n ) − − (cid:1)(cid:17) − . Under Assumption 1, and conditions ( C ) , ( C high ) : (4.8) √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (cid:32) ˆˆ q α n ( Y | X = x ) q α n ↑ ( Y | X = x ) − (cid:33) → n → + ∞ N (cid:32) , (cid:18) γγN + 1 − θ N γ ( γN + 1) (cid:19) (cid:33) . And therefore : (4.9) ˆˆ q α n ( Y | X = x ) q α n ( Y | X = x ) P → as n → + ∞ . We can emphasize that condition ( C high ) is filled in most of the common cases. Indeed, the simpleexamples to find that do not satisfy (ii) are of the form α n = 1 − n − ln( n ) − κ , κ > k n = ln( n ). Butsuch a choice of sequences would lead to a poor estimation of ˆ γ k n and ˆ η k n , since k n → + ∞ very slowly,and moreover a poor estimation of the quantile, the level α n tending to 1 slowly. These sequences aretherefore not recommanded in practice. Next corollary gives the value of θ when sequences k n and α n have a polynomial form. Corollary 4.4.
Under Assumption 1, conditions ( C ) , ( C high ) , and taking k n = n b , < b < and α n = 1 − n − a , a > , asymptotic relationship (4.8) holds with θ = aa + b − . As for the intermediate quantile estimator, asymptotic normality (4.8) may be improved under thecondition lim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) ln (cid:32) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) (cid:33) = 0Assumption 2 places us in a framework where it is quite simple to prove it, if we add the followingcondition : • (cid:16) C HGhigh (cid:17) : ( C high ) holds. In addition,(4.10) lim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (1 − α n ) min( − ρ, γ ) γN +1 = 0 . As (cid:0) C HGint (cid:1) , condition (cid:16) C HGhigh (cid:17) means that sequence k n must be small enough. In view of Equation (4.10),we deduce that if N or γ goes to infinity, (cid:16) C HGhigh (cid:17) is not filled. The tail of the underlying distributionmay thus not be too heavy, and the size N of the covariate not too large. Similarly, they no longer holdif γ or ρ goes to 0.By combining Assumption 2 and (cid:16) C HGhigh (cid:17) , the following result is obtained.
Proposition 4.5 (Asymptotic normality of high quantile estimator) . Assume that Assumption 2 andconditions ( C ) , (cid:16) C HGhigh (cid:17) hold. Then : (4.11) √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (cid:32) ˆˆ q α n ( Y | X = x ) q α n ( Y | X = x ) − (cid:33) → n → + ∞ N (cid:32) , (cid:18) γγN + 1 − θ N γ ( γN + 1) (cid:19) (cid:33) . We can make the same kind of remark as in the previous subsection when N is large. In the following,we give estimators for two other classes of extreme risk measures, based on the estimators given inEquations (4.1) and (4.7). The first one generalizes quantiles.5. Some extreme risk measures estimators L p − quantiles. Let Z be a real random variable. The L p − quantiles of Z with level α ∈ ]0 ,
1[ and p >
0, denoted q p,α ( Z ), is solution of the minimization problem (see Chen (1996)) :(5.1) q p,α ( Z ) = arg min z ∈ R E (cid:2) (1 − α ) ( z − Z ) p + + α ( Z − z ) p + (cid:3) , where Z + = Z { Z> } . According to Koenker and Bassett (1978), the case p = 1 leads to the quantile q ,α ( Z ) = F − Z ( α ), where F Z is the c.d.f of Z . The case p = 2, formalized in Newey and Powell (1987),leads to more complicated calculations, and admits, with the exception of some particular cases (see,e.g., Koenker (1992)), no general formula. The general case p ≥ L p − quantiles get the translation equivariance and positively homogeneityproperties for p >
1. More recently, the particular case of Student distributions has, for example, beenexplored in Bernardi et al. (2017). However, it seems difficult to obtain a general formula. On the otherhand, in the case of extreme levels α , i.e. when α tends to 1, Daouia et al. (2017b) proved that thefollowing relationship holds, for a heavy-tailed random variable with tail index γ .(5.2) q p,α ( Z ) q α ( Z ) → α → (cid:20) γB ( p, γ − − p + 1) (cid:21) − γ := f L ( γ, p ) , where B ( ., . ) is the beta function. We add that for a Pareto-type distribution with tail index γ , the L p − quantile exists if and the only if the moment of order p − γ < /p . The expectilecase p = 2 leads to the result of Bellini et al. (2014). Using this result, we can estimate the conditional L p − quantiles from the quantile estimated in Section 4. For that purpose, we need to know the tail indexof the conditional radius R ∗ , given in the following lemma. Lemma 5.1.
The conditional distribution Y | X = x is attracted to a maximum domain of Pareto-typedistribution with tail index ( γ − + N ) − , i.e (5.3) lim t → + ∞ ¯Φ R ∗ ( ωt )¯Φ R ∗ ( t ) = ω − γ − N . With Lemma 5.1 and Equation (5.2), we define the following estimators for the L p − quantile of Y | X = x , according to whether if n (1 − α n ) tends to 0 or + ∞ . ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 11
Definition 5.1.
Let ( α n ) n ∈ N be a sequence such that α n → as n → + ∞ . If either p ≤ N or γ < p − N ,we define: (5.4) ˆ q p,α n ( Y | X = x ) = µ Y | X + σ Y | X (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn f L (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) ˆˆ q p,α n ( Y | X = x ) = µ Y | X + σ Y | X (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) / ˆ η kn f L (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) . where ˆ γ k n and ˜ v n are respectively given in Equation (3.1) and Theorem 4.1. We have proved the convergence in probability of ˆ q α n ( Y | X = x ) and ˆˆ q α n ( Y | X = x ). Furthermore,the convergence in probability of the asymptotic term, and consequently the empirical L p − quantile isnot difficult to get, this is why we omit the proof. Proposition 5.2 (Consistency of L p − quantile estimators) . Assume that Assumption 1 and condition ( C ) hold. Under conditions ( C int ) and ( C high ) respectively, ˆ q p,α n ( Y | X = x ) and ˆˆ q p,α n ( Y | X = x ) areconsistent, i.e. : (5.5) ˆ q p,αn ( Y | X = x ) q p,αn ( Y | X = x ) P → ˆˆ q p,αn ( Y | X = x ) q p,αn ( Y | X = x ) P → . Using the second order expansion of Equation (5.2) given in Daouia et al. (2017b), and doing somestronger assumptions, we can deduce the following asymptotic normality results. For that purpose, letus add two conditions. • (cid:16) C L p int (cid:17) : ( C int ) holds. In addition, √ k n (1 − α n ) = o (ln(1 − α n )), and :(5.6) lim n → + ∞ √ k n ln (1 − α n ) (1 − α n ) min( − ρ,γ ) γN +1 = 0 . • (cid:16) C L p high (cid:17) : ( C high ) holds. In addition,(5.7) lim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (1 − α n ) min( − ρ,γ ) γN +1 = 0 . These conditions will be used below. If we compare (cid:16) C L p int (cid:17) and (cid:16) C L p int (cid:17) with (cid:0) C HGint (cid:1) and (cid:0) C HGint (cid:1) respec-tively, sequence k n must be chosen smaller. Finally, we can draw the same conclusions than above, i.e.these conditions are applicable for regularly varying distributions with an intermediate level γ , and asmall number of covariates N .To sum up, among all these conditions, we can deduce the following ordering : (cid:16) C L p int (cid:17) ⇒ (cid:0) C HGint (cid:1) ⇒ ( C int ) (cid:16) C L p high (cid:17) ⇒ (cid:16) C HGhigh (cid:17) ⇒ ( C high ) . Proposition 5.3 (Asymptotic normality of L p − quantile estimators) . Assume that Assumption 2 andcondition ( C ) hold. Under conditions (cid:16) C L p int (cid:17) and (cid:16) C L p high (cid:17) respectively, and if p > , then : (5.8) √ k n ln(1 − α n ) (cid:16) ˆ q p,αn ( Y | X = x ) q p,αn ( Y | X = x ) − (cid:17) → n → + ∞ N (cid:16) , N γ ( γN +1) (cid:17) √ k n ln ( knn (1 − αn ) ) (cid:16) ˆˆ q p,αn ( Y | X = x ) q p,αn ( Y | X = x ) − (cid:17) → n → + ∞ N (cid:18) , (cid:16) γγN +1 − θ Nγ ( γN +1) (cid:17) (cid:19) . An example of L − quantile, or expectile, is provided in Section 6. The second risk measure we focuson is called Haezendonck-Goovaerts risk measure.5.2. Haezendonck-Goovaerts risk measures.
Let Z be a real random variable, and ϕ a non negativeand convex function with ϕ (0) = 0, ϕ (1) = 1 and ϕ (+ ∞ ) = + ∞ . The Haezendonck-Goovaerts riskmeasure of Z with level α ∈ ]0 ,
1[ associated to ϕ , is given by the following (see Tang and Yang (2012)) :(5.9) H α ( Z ) = inf z ∈ R { z + H α ( Z, z ) } , where H α ( Z, z ) is the unique solution h to the equation :(5.10) E (cid:20) ϕ (cid:18) ( Z − z ) + h (cid:19)(cid:21) = 1 − α. ϕ is called Young function. This family of risk measures has been firstly introduced as Orlicz riskmeasure in Haezendonck and Goovaerts (1982), then Haezendonck risk measure in Goovaerts et al.(2004), and finally Haezendonck-Goovaerts risk measure in Tang and Yang (2012). According to Belliniand Rosazza Gianin (2008), such a risk measure is coherent, and therefore translation equivariant andpositively homogeneous. The particular case ϕ ( t ) = t leads to the Tail Value at Risk with level α TVaR α ( X ), introduced in Artzner et al. (1999). In the following, we denote H p,α ( Z ) the Haezendonck-Goovaerts risk measure of Z with a power Young function t p , p ≥
1. In Tang and Yang (2012), theauthors provided the following result.
Proposition 5.4 (Tang and Yang (2012)) . If Z fills Assumption 1, and taking a Young function ϕ ( t ) = t p , p ≥ , then the following relationship holds : (5.11) H p,α ( Z ) q α ( Z ) → α → γ − (cid:0) γ − − p (cid:1) pγ − p γ ( p − B (cid:0) γ − − p, p (cid:1) γ := f H ( γ, p ) . In particular, taking p = 1 leads to TVaR α ( Z ) ∼ (1 − γ ) − q α ( Z ) as α →
1. Using Lemma 5.1, extremequantiles estimators in Definitions 4.1, 4.2 and Proposition 5.4, we can deduce estimators for extremeHaezendonck-Goovaerts risk measure H p,α ( Y | X = x ) (with power Young function ϕ ( t ) = t p , p ≥
1) of Y | X = x . Definition 5.2.
Let ( α n ) n ∈ N be a sequence such that α n → as n → + ∞ . If either p ≤ N or γ < p − N ,we define : (5.12) ˆ H p,α n ( Y | X = x ) = µ Y | X + σ Y | X (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn f H (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) ˆˆ H p,α n ( Y | X = x ) = µ Y | X + σ Y | X (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) / ˆ η kn f H (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) . The condition p ≤ N or γ < p − N simply ensures the existence of H p,α n ( Y | X = x ). Using theconsistency results given in Propositions 4.1 and 4.3, the consistency of these estimators is immediate.The proof is also omitted from the appendix. Proposition 5.5 (Consistency of H-G estimators) . Assume that Assumption 1 and condition ( C ) hold.Under conditions ( C int ) and ( C high ) respectively, ˆ H p,α n ( Y | X = x ) and ˆˆ H p,α n ( Y | X = x ) are consistent,i.e. : (5.13) ˆ H p,αn ( Y | X = x ) H p,αn ( Y | X = x ) P → ˆˆ H p,αn ( Y | X = x ) H p,αn ( Y | X = x ) P → . Proposition 5.6 (Asymptotic normality of H-G estimators) . Assume that Assumption 2 and condition ( C ) hold. Under conditions (cid:0) C HGint (cid:1) and (cid:16) C HGhigh (cid:17) respectively, we have : (5.14) √ k n ln(1 − α n ) (cid:16) ˆ H p,αn ( Y | X = x ) H p,αn ( Y | X = x ) − (cid:17) → n → + ∞ N (cid:16) , N γ ( γN +1) (cid:17) √ k n ln ( knn (1 − αn ) ) (cid:18) ˆˆ H p,αn ( Y | X = x ) H p,αn ( Y | X = x ) − (cid:19) → n → + ∞ N (cid:18) , (cid:16) γγN +1 − θ Nγ ( γN +1) (cid:17) (cid:19) . We can emphasize that conditions for asymptotic normality are less strong in the case of Haezendonck-Goovaerts risk measures. We propose some examples (with p = 1, i.e TVaR) in Sections 6 and 7.6. Simulation study
In this section, we apply our estimators to 100 samples of n simulations of a Student vector Z =( X , Y ) ∈ R ( X ∈ R and Y ∈ R ) with ν = 2 degrees of freedom, and compare with theoreticalresults. According to de Haan and Ferreira (2006), the Student distribution with ν degrees of freedomfills Assumption 1 with indices γ = 1 /ν , ρ = − /ν , and an auxiliary function A ( t ) proportional to t − /ν .The latter even fills Assumption 2, and is the only heavy-tailed elliptical distribution (to our knowledge)where we can obtain closed formula for conditional quantiles. In addition, such a degree of freedom makesthe tail of the distribution sufficiently heavy to easily observe the asymptotic results. We can notice thatthe unconditional distribution Y has tail index 1 /
2, then, using Lemma 5.1, the conditional distribution Y | X = x has tail index 2 / < /
2, and admits quantile, expectile ( L − quantile) and TVaR. This sectionbeeing uniquely devoted to the performance of our estimators, we take for conveniance µ = R and ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 13 Σ = I . Let us now estimate the extreme quantiles of Y | X = x . For that purpose, we have to chose anarbitrary value of x . We thus suppose for example that the observed covariates x satisfy M ( x ) = 1.6.1. Choice of parameters.
As mentioned in Sections 3 and 4, the asymptotic results obtained aresensitive to the choice of sequences k n , h n , α n , and to a lesser extent to the kernel K . The latter will bethe gaussian p.d.f in the following. Concerning the sequences, we propose in this section to consider thepolynomial forms α n = 1 − n − a , a > k n = n b , b > h n = n − c , c >
0. In order to deal with highquantiles, we fix in a first time a = 1 .
25. We now have to chose carefully the parameters b and c , fulfillingthe conditions ( C ), ( C high ) and (cid:16) C HGhigh (cid:17) . ( C ) imposes b < − c , b < c and b < / ( ν +4) = 2 /
3, ( C high ) issatisfied with θ = a/ ( a + b −
1) (see Corollary 4.4), (cid:16) C HGhigh (cid:17) entails b ≤ a = 2 . b ≤ a/ ( N + ν ) = 1.Finally, it seems reasonable to chose b (respectively c ) as tall (respectively small) as possible. The choices b = 0 . c = 0 . Extremal parameters estimation.
The next step is to estimate the quantities η and (cid:96) ( x ).For that purpose, we use our estimators ˆ η k n and ˆ (cid:96) k n ,h n ( x ) respectively introduced in Equations (3.3)and (3.10). These two estimators are related to the Hill estimator ˆ γ k n , and asymptotic results of Sec-tion 3 hold only if the data is independent. This is why we do the estimation of γ only with the n realizations of the first component from the vector Z . Figure 1.
From left to right: boxplots of 100 estimators ˆ η k n and ˆ (cid:96) k n ,h n ( x ), fordifferent sample sizes n . Theoretical values are in red. The chosen sequences are k n = n . , h n = n − . and α n = 1 − n − . . Figure 1 shows the boxplots of our estimators ˆ η k n and ˆ (cid:96) k n ,h n ( x ). In this example, the theoretical valueof η is 3 / .
5, and (cid:96) ( x ) is equal to 5 . Extreme risk measures estimation.
It remains to estimate the conditional quantiles, expectilesand TVaRs of Y | X = x . Theoretical formulas (or algorithm) for conditional quantiles and expectilesmay be found in (Maume-Deschamps et al., 2017) and Maume-Deschamps et al. (2018). Furthermore,using straightforward calculations, formulas for Tail-Value-at-Risk may be obtained.(6.1) q α ( Y | X = x ) = (cid:113) ν + M ( x ) ν + N Φ − ν + N ( α )TVaR α ( Y | X = x ) = − α Γ ( N +1+ ν ) Γ ( N + ν ) √ ν + M ( x ) √ π ( ν + N − (cid:18) Φ − ν + N ( α ) ν + N (cid:19) − N − ν , where Φ ν is the c.d.f of a Student distribution with ν degrees of freedom. In order to give an idea ofthe performance of our estimator, we propose in Figure 2 some box plots representing 100 relative errors(based on sample sizes n from 1 000 to 10 000 000) of our quantile estimator (4.7) with α n = 1 − n − . .Finally, we would like to compare these results with other estimators already used. The most commonand widespread methods for estimating conditional quantiles and expectiles are respectively quantile and expectile regression, introduced in Koenker and Bassett (1978) and Newey and Powell (1987). In Maume-Deschamps et al. (2017) and Maume-Deschamps et al. (2018), we have shown that such approach leads toa poor estimation in case of extreme levels. Indeed, in this example, a quantile regression estimator willconverge to Φ − ν ( α n ) = 1530 .
15, very far from 7 .
31, the theoretical result. Obviously, since the quantileregression estimator does not assume any structure on the underlying distribution, the latter is clearlyless efficient than the tailored extreme quantile estimators introduced in this paper.
Figure 2.
Box plots representing 100 relative errors ˆˆ q αn ( Y | X = x ) q αn ( Y | X = x ) − n from 1 000 to 10 000 000) with α n = 1 − n − . , k n = n . and h n = n − . . It may also be interesting to compare the empirical variance of our estimator with our asymptoticresult given in Proposition 4.5. Furthermore, the latter allow us to provide confidence intervals for q α n ( Y | X = x ). We thus introduce the notation ˆ ζ n the empirical variance of √ k n ln ( knn (1 − αn ) ) (cid:16) ˆˆ q αn ( Y | X = x ) q αn ( Y | X = x ) − (cid:17) ,while ζ = (cid:16) γγN +1 − θ Nγ ( γN +1) (cid:17) = 0 . m n the numberof times the theoretical value q α n ( Y | X = x ) is in the 95% confidence interval. Table 2 gives an overviewof the behavior of these quantities according to n . n α n ˆ ζ n m n ∞ Table 2.
Empirical variance ˆ ζ n , number of confidence intervals containing thetheoretical value m n for 100 estimations ˆˆ q α n ( Y | X = x ) of q α n ( Y | X = x ), with n ranging from 1 000 to 10 000 000. Chosen sequences are α n = 1 − n − . , k n = n . and h n = n − . . Finally, based on these quantile estimates, we deduce, using Definitions 5.1 and 5.2, L − quantile (orexpectile) and Tail-Value-at-Risk estimates. Figure 3 provides relative errors for estimators ˆˆ q ,α n andˆˆ H ,α n . ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 15
Figure 3.
From left to right : Box plots representing 100 relative errors ˆˆ q ,αn ( Y | X = x ) q ,αn ( Y | X = x ) − ˆˆ H ,αn ( Y | X = x ) H ,αn ( Y | X = x ) − n from 1 000to 10 000 000) with α n = 1 − n − . , k n = n . and h n = n − . . In the previous figures, only the first component of the vector is used to estimate the tail index. Thereis therefore some loss of information. We have suggested in Section 3 another approach. Furthermore,Resnick and St˘aric˘a (1995) or Hsing (1991) proved that the Hill estimator may also work with dependentdata. Thus it would be possible to improve the estimation of ˆ γ k n by adding the other components ofthe vector in Equation (3.1), but in that case the asymptotic results of Propositions 3.1 or 3.3 would nothold anymore. 7. Real data example
As an application, we use the daily market returns (computed from the closing prices) of financialassets from 2006 to 2016, available at http://stanford.edu/class/ee103/portfolio.html . We focuson the first four assets, i.e iShares Core U.S. Aggregate Bond ETF, PowerShares DB Commodity IndexTracking Fund, WisdomTree Europe SmallCap Dividend Fund and SPDR Dow Jones Industrial AverageETF which will be our covariate X . Figure 4 represents the daily return for each day. Figure 4.
Daily market returns of 4 different assets.
The reason for focusing solely on the value of these assets could be, for example, that they arethe first available every day. The aim would be to anticipate the behavior of another asset on an-other market. We thus consider the return of WisdomTree Japan Hedged Equity Fund as randomvariable Y . The size of the sample is 2520. The first 2519 days (from January 3, 2007 to Decem-ber 5, 2016) will be our learning sample, and we focus on the 2520th day, when the covariate X is x = ( − . , − . , . , . Y given X = x .After a brief study of the autocorrelation functions, we consider that the daily returns can be consid-ered as independent. Concerning the shape of the data, histograms of the marginals seem symmetrical. Furthermore, the measured tail index is approximately the same for the 4 marginals. This is why sup-pose that the data is elliptical. After having estimated µ and Σ by the method of moments, we get M ( x ) = 1 . η k n and ˆ (cid:96) k n ,h n ( x ) given in Equations (3.3) and (3.10). Wetake as sequences k n = n . ( b = 0 .
6) and h n = n − . ( c = 0 . K the gaussian p.d.f, hencewe deduce the asymptotic confidence bounds from Equation (3.14). We then obtain ˆ η k n = 2 . (cid:96) k n ,h n ( x ) = 6 . q α n ( Y | X = x ) with level α n = 1 − n − a , a > a = (1 − b ) (ˆ γ k n + 1) = 1 . . . Y | X = x . Inother words, before the opening of the second market, we consider that given the returns of our first fourassets, that of WisdomTree Japan Hedged Equity Fund has a probability 0 . . . Conclusion
In this paper, we propose two estimators ˆ (cid:96) k n ,h n ( x ) and ˆ η k n respectively for extremal parameters (cid:96) ( x )and η introduced in Equation (2.13). We have proved their consistency and asymptotic normality ac-cording to the asymptotic relationships between the sequences k n and h n . Using these estimators, wehave defined estimators for intermediate and high quantiles, proved their consistency, given their as-ymptotic normality under stronger conditions, and deduced estimators for extreme L p − quantiles andHaezendonck-Goovaerts risk measures. Consistency and asymptotic normality are also provided for theseestimators, under conditions. We have also illustrated with a numerical example the performance of ourestimators, and applied them to real data set.As working perspectives, we intend to propose a method of optimal choice of the sequences k n and h n ,which is not totally discussed in this paper. Furthermore, the shape of (cid:96) ( x ) and η leaving Assumption 1is a current research topic. More generally, the asymptotic relationships between conditional and un-conditional quantile in other maximum domains of attraction, using for example the results of Hashorva(2007a), may be developed. However, we need a second-order refinement, as we need a second-orderrefinement of Equation (2.14) to propose asymptotic normalities 4.2 and 4.5 under weaker assumptionsthan Assumption 2. Finally, it seems that the ratio of the two terms in Equation (2.14) tends to 1 moreand more slowly when the covariate vector size N becomes large. Then, our estimation approach mayperform poorly if N is tall. This is why it might be wise to propose another method when the covariatevector size N is large. Appendix
Proof of Lemma 2.1. (i) Since R d = χ ξ , where χ has a Lebesgue density (cid:113) π e − x . According to Lemma 4.3 in Jessenand Mikosch (2006), ξ satisfies ¯ F ξ ( tω ) / ¯ F ξ ( t ) → ω − γ as t → + ∞ . Furthermore, Lemma 4.2in Jessen and Mikosch (2006) entails P ( ξ > t ) ∼ t → + ∞ E (cid:20) χ γ (cid:21) − P ( R > t ) . Assumption 1 provides P ( R > t ) ∼ λt − γ , hence the result.(ii) Using again Lemma 4.2 in Jessen and Mikosch (2006) for R d d = χ d ξ , it comes immediatly P ( R d > t ) ∼ t → + ∞ E (cid:20) χ γ d (cid:21) P ( ξ > t ) . Some straightforward calculations provide E (cid:20) χ γ d (cid:21) = 2 γ Γ (cid:16) d + γ − (cid:17) Γ ( d ) .(iii) From (ii), we have, for all d ∈ N , f R d ( t ) ∼ t → + ∞ γ Γ (cid:16) d + γ − (cid:17) Γ ( d ) λ (cid:48) t − γ − , where λ (cid:48) ∈ R is not relatedto d . The result is immediate with this expression. (cid:50) Proof of Proposition 2.2.
The conditional density (Proposition 3 in Maume-Deschamps et al. (2017))leads to :lim t →∞ ¯Φ R ∗ ( t )¯Φ R ( t η ) = lim t →∞ c N +1 g N +1 ( M ( x ) + t ) c N g N ( M ( x )) ηt η − c g ( t η ) = lim t →∞ Γ (cid:0) N +12 (cid:1) ( M ( x ) + t ) − N π N +12 c N g N ( M ( x )) ηt η − f R N +1 (cid:16)(cid:112) M ( x ) + t (cid:17) f R ( t η ) . ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 17
Using Equation (2.11) of Lemma 2.1, it comes¯Φ R ∗ ( t )¯Φ R ( t η ) ∼ t → + ∞ π N c N g N ( M ( x )) η Γ (cid:16) N +1+ γ − (cid:17) Γ (cid:16) γ − (cid:17) t ( η − γ − +1)+1 − η − N . Obviously, we impose 0 < (cid:96) ( x ) < + ∞ , then 1 − η − N + ( η − γ − + 1) = 0, hence η = N γ + 1.Replacing η in the previous equation, (cid:96) ( x ) is easily deduced : (cid:96) ( x ) = 1 π N c N g N ( M ( x )) η Γ (cid:16) N +1+ γ − (cid:17) Γ (cid:16) γ − (cid:17) . (cid:50) Proof of Proposition 3.4.
It is obvious that under conditions ( K − ( K √ k n (ˆ g h n − c N g N ( M ( x ))) P → n → + ∞ if k n = o ( nh n ) and √ k n h n →
0. Then we get the following asymptotic normality : (cid:112) k n (cid:18) ˆ γ k n − γ ˆ g h n − c N g N ( M ( x )) (cid:19) → n → + ∞ N (cid:18)(cid:18) (cid:19) , (cid:18) γ
00 0 (cid:19)(cid:19) . Since (cid:96) ( x ) = u ( γ ), the delta method entails (cid:112) k n (cid:16) ˆ (cid:96) k n ,h n ( x ) − (cid:96) ( x ) (cid:17) → n → + ∞ N (cid:0) , u (cid:48) ( γ ) γ (cid:1) . A quick calculation of u (cid:48) , using Equation (2.12), gives the first result. The second part of the proof issimilar. Indeed, if nh n = o ( k n ) and nh n → n → + ∞ , then (cid:112) nh n (cid:18) ˆ γ k n − γ ˆ g h n − c N g N ( M ( x )) (cid:19) → n → + ∞ N (cid:18) (cid:19) , M ( x ) − N Γ ( N ) π N c N g N ( M ( x )) (cid:82) K ( u ) du . The delta method completes the proof. (cid:50)
In order to make the proof of Theorem 4.1 easier to read, we give the following lemma, which providesthe asymptotic behavior of a statistic order under Assumption 1.
Lemma 8.1.
Under Assumption 1 and condition ( C ) , (8.1) √ nv n (cid:32) (cid:0) W [ nv n +1] (cid:1) Φ − R (1 − v n ) − (cid:33) → n → + ∞ N (cid:0) , γ (cid:1) . Proof of Lemma 8.1.
The proof is inspired by Theorem 2.4.1 in de Haan and Ferreira (2006). Let Y , Y , . . . be independant and identically distributed random variables with c.d.f. 1 − y − , y >
1. Wedenote in addition Y [ n ] ≤ . . . ≤ Y [1] . We thus have √ nv n (cid:0) v n Y [ nv n +1] − (cid:1) → n → + ∞ N (0 , . By noticing that W [ nv n +1] d = Φ − R (cid:0) − /Y [ nv n +1] (cid:1) , it comes √ nv n (cid:32) (cid:0) W [ nv n +1] (cid:1) Φ − R (1 − v n ) − (cid:33) d = √ nv n (cid:32) Φ − R (cid:0) − /Y [ nv n +1] (cid:1) Φ − R (1 − v n ) − (cid:0) v n Y [ nv n +1] (cid:1) γ (cid:33) + √ nv n (cid:0)(cid:0) v n Y [ nv n +1] (cid:1) γ − (cid:1) . The delta method entails that the second term tends to N (0 , γ ). Moreover, Assumption 1 and √ k n A (cid:16) nk n (cid:17) → n → + ∞ ensure the asymptotic nullity of the first term. Proof of Theorem 4.1.
In a first time, we can notice ˜ v n is related to ˆ (cid:96) k n ,h n ( x ). Then, according toProposition 3.4, (i) entails that we can deal with v n instead of ˜ v n in Equation (4.2). Furthermore, wegive the decomposition : √ k n ln (1 − α n ) (cid:18) ˆ q α n ( Y | X = x ) q α n ↑ ( Y | X = x ) − (cid:19) ∼ n → + ∞ √ k n ln (1 − α n ) (cid:32) (cid:0) W [ nv n +1] (cid:1) / ˆ η kn Φ − R (1 − v n ) /η − (cid:33) = √ k n ln (1 − α n ) (cid:32) (cid:0) W [ nv n +1] (cid:1) / ˆ η kn Φ − R (1 − v n ) / ˆ η kn − (cid:33) Φ − R (1 − v n ) / ˆ η kn − /η + √ k n ln (1 − α n ) (cid:16) Φ − R (1 − v n ) / ˆ η kn − /η − (cid:17) . Under Assumption 1, and according to Proposition 3.1 and Theorem 2.4.1 in de Haan and Ferreira (2006)(with ( C )), we have : √ k n (cid:16) η kn − η (cid:17) → n → + ∞ N (cid:16) , N γ ( γN +1) (cid:17) √ nv n (cid:18) ( W [ nvn +1] ) Φ − R (1 − v n ) − (cid:19) → n → + ∞ N (cid:0) , γ (cid:1) , By noticing that v n is equivalent to (cid:96) ( x ) − (1 − α n ) as n → + ∞ , and using condition √ k n = o (cid:16) ln(1 − α n ) (cid:112) n (1 − α n ) (cid:17) in ( C int ), it comes √ k n ln (1 − α n ) (cid:32) (cid:0) W [ nv n +1] (cid:1) / ˆ η kn Φ − R (1 − v n ) / ˆ η kn − (cid:33) → n → + ∞ . Furthermore, under Assumption 1, ln (cid:0) Φ − R (1 − v n ) (cid:1) is cleary equivalent to − γ ln( v n ), or − γ ln(1 − α n ).Then ( C int ) ensures Φ − R (1 − v n ) / ˆ η kn − /η → n → + ∞ , and therefore the first term of the decom-position tends to 0. It thus remains to calculate the limit of the second term. It is not complicated tonotice that √ k n ln (cid:0) Φ − R (1 − v n ) (cid:1) (cid:16) Φ − R (1 − v n ) / ˆ η kn − /η − (cid:17) → n → + ∞ N (cid:32) , N γ ( γN + 1) (cid:33) . Using the equivalence ln (cid:0) Φ − R (1 − v n ) (cid:1) ∼ − γ ln( v n ) ∼ − γ ln(1 − α n ), we get the result (4.2). Usingasymptotic relationship (2.14), the consistency 4.3 is obvious. (cid:50) Proof of Proposition 4.2.
We recall that density of Φ R ∗ is proportional to c N +1 g N +1 (cid:0) M ( x ) + t (cid:1) ,and, from Assumption 2, there exist λ , λ ∈ R such that : c N +1 g N +1 (cid:0) M ( x ) + t (cid:1) = λ (cid:0) M ( x ) + t (cid:1) − N +1+ γ − (cid:104) λ (cid:0) M ( x ) + t (cid:1) ρ γ + o (cid:16) t ργ (cid:17)(cid:105) The previous expression may be rewritten as follows, where λ , λ , λ ∈ R : c N +1 g N +1 (cid:0) M ( x ) + t (cid:1) = λ t − ( N +1+ γ − ) (cid:104) λ (cid:0) M ( x ) + t (cid:1) ρ γ + λ t − + o (cid:16) t ργ (cid:17)(cid:105) In order to make the proof more readable, we do not specify the values of constants λ i , because they arenot essential. Then, in the case, ρ/γ ≤ −
2, we get c N +1 g N +1 (cid:0) M ( x ) + t (cid:1) = λ t − ( N +1+ γ − ) (cid:2) λ t − + o (cid:0) t − (cid:1)(cid:3) , λ , λ ∈ R In other terms, c N +1 g N +1 (cid:0) M ( x ) + t (cid:1) is regularly varying of second order with indices − N − − γ − , −
2, and an auxiliary function propotional to t − . According to Proposition 6 of Hua and Joe (2011),¯Φ R ∗ ( t ) = (cid:82) + ∞ t c N +1 g N +1 (cid:0) M ( x ) + u (cid:1) du ∈ RV − N − γ − , − with an auxiliary function proportional to t − . Equivalently, there exists λ , λ ∈ R such thatΦ − R ∗ (cid:18) − t (cid:19) = λ t γγN +1 (cid:104) λ t − γγN +1 + o (cid:16) t − γγN +1 (cid:17)(cid:105) . Since Assumption 1 and Assumption 2 provide Φ − R (1 − /t ) = λ t γ [1 + λ t ρ + o ( t ρ )], it comesΦ − R (1 − v n ) η Φ − R ∗ ( α n ) = (cid:96) ( x ) − γγN +1 (cid:18) − α n v n (cid:19) γγN +1 λ v − ρn + o ( v − ρn )1 + λ (1 − α n ) γγN +1 + o (cid:16) (1 − α n ) γγN +1 (cid:17) , for some constants λ , λ ∈ R . In that case, we considered ρ ≤ − γ , hence − ρ > γ/ ( γN + 1). We thendeduce the following expansion :Φ − R (1 − v n ) η Φ − R ∗ ( α n ) = (cid:96) ( x ) − γγN +1 (cid:18) − α n v n (cid:19) γγN +1 (cid:104) λ (1 − α n ) γγN +1 + o (cid:16) (1 − α n ) γγN +1 (cid:17)(cid:105) , for a certain constant λ ∈ R . We can notice that (1 − α n ) /v n = 2(1 − (cid:96) ( x ))(1 − α n ) + (cid:96) ( x ), and let usnow focus on the limit :lim n → + ∞ √ k n ln (1 − α n ) ln (cid:32) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) (cid:33) = γγN + 1 lim n → + ∞ √ k n ln (1 − α n ) ln (cid:18) − (cid:96) ( x ) (cid:96) ( x ) (1 − α n ) + 1 (cid:19) + lim n → + ∞ √ k n ln (1 − α n ) ln (cid:16) λ (1 − α n ) γγN +1 + o (cid:16) (1 − α n ) γγN +1 (cid:17)(cid:17) . ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 19
The first term gives is easy to calculate. Indeed, since √ k n (1 − α n ) / ln (1 − α n ) → n → + ∞ , wededuce lim n → + ∞ √ k n ln (1 − α n ) ln (cid:18) − (cid:96) ( x ) (cid:96) ( x ) (1 − α n ) + 1 (cid:19) = 2 1 − (cid:96) ( x ) (cid:96) ( x ) lim n → + ∞ √ k n ln (1 − α n ) (1 − α n ) = 0 . By a similar calculation, the second term also tends to 0, supposing √ k n ln(1 − α n ) (1 − α n ) γγN +1 → n → + ∞ . Then, we deduce, using Proposition 4.1 : √ k n ln (1 − α n ) (cid:18) ˆ q α n ( Y | X = x ) q α n ( Y | X = x ) − (cid:19) ∼ √ k n ln (1 − α n ) (cid:32) (cid:0) W [ nv n +1] (cid:1) / ˆ η kn Φ − R ∗ ( α n ) − (cid:33) = √ k n ln (1 − α n ) (cid:32) ˆ q α n (cid:0) R ∗ U (1) (cid:1) Φ − R (1 − v n ) η − (cid:33) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) + √ k n ln (1 − α n ) (cid:32) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) − (cid:33) → n → + ∞ N (cid:32) , N γ ( γN + 1) (cid:33) . Now, let us focus on the case ρ/γ > −
2. The proof is exactly the same, with c N +1 g N +1 (cid:0) M ( x ) + t (cid:1) = λ t − ( N +1+ γ − ) (cid:104) λ t ργ + o (cid:16) t ργ (cid:17)(cid:105) , λ , λ ∈ R . Using the same calculations and doing the further assumption lim n → + ∞ √ k n ln(1 − α n ) (1 − α n ) − ργN +1 = 0 leadsto the result. (cid:50) Proof of Theorem 4.3.
Firstly, we can notice that(8.2) ˆˆ q α n ( Y | X = x ) q α n ↑ ( Y | X = x ) − ∼ n → + ∞ (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) η − (cid:20) W [ k n +1] (cid:16) k n nv n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) η − (cid:18) v n ˜ v n (cid:19) ˆ γkn ˆ ηkn + (cid:18) v n ˜ v n (cid:19) ˆ γkn ˆ ηkn − . Since k n = o ( nh n ), we deduce √ k n ln (cid:16) k n nv n (cid:17) (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) η − ∼ n → + ∞ √ k n ln (cid:16) k n nv n (cid:17) (cid:20) W [ k n +1] (cid:16) k n nv n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) η − . Furthermore, according to Theorem 4.3.8 in de Haan and Ferreira (2006), ( C ) and ( C high ) lead to √ k n ln (cid:16) k n nv n (cid:17) W [ k n +1] (cid:16) k n nv n (cid:17) ˆ γ kn Φ − R (1 − v n ) − ∼ n → + ∞ √ k n ln (cid:16) k n nv n (cid:17) (cid:32)(cid:18) k n nv n (cid:19) ˆ γ kn − γ − (cid:33) . From Assumption 1, it is not difficult to prove that ln (cid:0) Φ − R (1 − v n ) (cid:1) / ln ( k n / ( nv n )) is asymptoticallyequivalent to γ ln(1 − α n ) / ln ( n (1 − α n ) /k n ). Then, if we focus on the second term, it comes, using thelimit given in ( C high ) : √ k n ln (cid:16) k n nv n (cid:17) (cid:16) k n nv n (cid:17) ˆ γ kn − γ − R (1 − v n ) ηkn − η − → n → + ∞ N (cid:32)(cid:18) (cid:19) , (cid:32) γ − θ Nγ ( γN +1) − θ Nγ ( γN +1) θ N γ ( γN +1) (cid:33)(cid:33) . Finally,(8.3) √ k n ln (cid:16) k n nv n (cid:17) (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) η − = √ k n ln (cid:16) k n nv n (cid:17) (cid:16) Φ − R (1 − v n ) ηkn − η − (cid:17) + √ k n ln (cid:16) k n nv n (cid:17) (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) ηkn − Φ − R (1 − v n ) ηkn − η . When n → ∞ , this expression is the sum of the following bivariate normal distribution : √ k n ln (cid:16) k n nv n (cid:17) (cid:104) W [ kn +1] ( knn ˜ vn ) ˆ γkn (cid:105) ηkn Φ − R (1 − v n ) ηkn − R (1 − v n ) ηkn − η − → n → + ∞ N (cid:32)(cid:18) (cid:19) , (cid:32) γ ( γN +1) − θ Nγ ( γN +1) − θ Nγ ( γN +1) θ N γ ( γN +1) (cid:33)(cid:33) , To conclude, ln (cid:16) k n nv n (cid:17) ∼ ln (cid:16) k n n (1 − α n ) (cid:17) as n → + ∞ , hence the result. The consistency is immediate. (cid:50) Proof of Proposition 4.5.
The proof is similar to that of Proposition 4.2. Indeed, we have given, inthe case ρ/γ ≤ − − R (1 − v n ) η Φ − R ∗ ( α n ) = (cid:96) ( x ) − γγN +1 (2(1 − (cid:96) ( x ))(1 − α n ) + (cid:96) ( x )) γγN +1 (cid:104) λ (1 − α n ) γγN +1 + o (cid:16) (1 − α n ) γγN +1 (cid:17)(cid:105) , for a certain constant λ ∈ R . It thus remains to calculatelim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) ln (cid:32) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) (cid:33) = γγN + 1 lim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) ln (cid:18) − (cid:96) ( x ) (cid:96) ( x ) (1 − α n ) + 1 (cid:19) + lim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) ln (cid:16) λ (1 − α n ) γγN +1 + o (cid:16) (1 − α n ) γγN +1 (cid:17)(cid:17) . The first term is easy to calculate. Indeed, since n (1 − α n ) → k n = o ( n ) as n → + ∞ , we deducelim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) ln (cid:18) − (cid:96) ( x ) (cid:96) ( x ) (1 − α n ) + 1 (cid:19) = 2 1 − (cid:96) ( x ) (cid:96) ( x ) lim n → + ∞ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (1 − α n ) = 0 . By a similar calculation, the second term also tends to 0, supposing √ k n ln ( knn (1 − αn ) ) (1 − α n ) γγN +1 → n → + ∞ . Then, we deduce, using Proposition 4.3 : √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R ∗ ( α n ) − = √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (cid:20) W [ k n +1] (cid:16) k n n ˜ v n (cid:17) ˆ γ kn (cid:21) ηkn Φ − R (1 − v n ) η − Φ − R (1 − v n ) η Φ − R ∗ ( α n )+ √ k n ln (cid:16) k n n (1 − α n ) (cid:17) (cid:32) Φ − R (1 − v n ) η Φ − R ∗ ( α n ) − (cid:33) → n → + ∞ N (cid:18) , γ ( γN + 1) − θ N γ ( γN + 1) + θ N γ ( γN + 1) (cid:19) . Now, let us focus on the case ρ/γ > −
2. The proof is exactly the same, withΦ − R (1 − v n ) η Φ − R ∗ ( α n ) = (cid:96) ( x ) − γγN +1 (2(1 − (cid:96) ( x ))(1 − α n ) + (cid:96) ( x )) γγN +1 (cid:104) λ (1 − α n ) − ργN +1 + o (cid:16) (1 − α n ) − ργN +1 (cid:17)(cid:105) , λ ∈ R . Using the same calculations and doing the further assumption lim n → + ∞ √ k n ln ( knn (1 − αn ) ) (1 − α n ) − ργN +1 = 0 leadsto the result. (cid:50) ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 21
Proof of Lemma 5.1.
The density of Y | X = x is given by c N +1 g N +1 (cid:16) M ( x ) + ( t − µ Y | X ) σ − Y | X (cid:17) ( c N g N ( M ( x ))) − , where M ( x ) = ( x − µ X ) (cid:62) Σ − X ( x − µ X ). In order to simplify, we consider the case reduced and centered,i.e µ Y | X = 0 and σ Y | X = 1. A quick calculation giveslim t → + ∞ ¯Φ R ∗ ( ωt )¯Φ R ∗ ( t ) = ω lim t → + ∞ g N +1 ( M ( x ) + ω t ) g N +1 ( M ( x ) + t ) = ω lim t → + ∞ ( M ( x ) + ω t ) − N ( M ( x ) + t ) − N f R N +1 (cid:16)(cid:112) M ( x ) + ω t (cid:17) f R N +1 (cid:16)(cid:112) M ( x ) + t (cid:17) . Equation (2.10) leads to lim t → + ∞ ¯Φ R ∗ ( ωt )¯Φ R ∗ ( t ) = ωω − N ω − γ − = ω − γ − N . (cid:50) Proof of Proposition 5.3.
We recall in a first time that condition (cid:16) C L p int (cid:17) entails (cid:0) C HGint (cid:1) . We have thefollowing decomposition : √ k n ln (1 − α n ) (cid:18) ˆ q p,α n ( Y | X = x ) q p,α n ( Y | X = x ) − (cid:19) ∼ n → + ∞ √ k n ln (1 − α n ) (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn f L (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) q p,α n (cid:0) R ∗ U (1) (cid:1) − = √ k n ln (1 − α n ) f L (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) f L (cid:16) ( γ − + N ) − , p (cid:17) − (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn Φ − R ∗ ( α n ) f L (cid:16)(cid:0) γ − + N (cid:1) − , p (cid:17) Φ − R ∗ ( α n ) q p,α n (cid:0) R ∗ U (1) (cid:1) + √ k n ln (1 − α n ) (cid:32) (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn Φ − R ∗ ( α n ) − (cid:33) f L (cid:16)(cid:0) γ − + N (cid:1) − , p (cid:17) Φ − R ∗ ( α n ) q p,α n (cid:0) R ∗ U (1) (cid:1) + √ k n ln (1 − α n ) f L (cid:16)(cid:0) γ − + N (cid:1) − , p (cid:17) Φ − R ∗ ( α n ) q p,α n (cid:0) R ∗ U (1) (cid:1) − . We know that f L (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) , as a function of ˆ γ k n , is asymptotically normal with rate √ k n (seeEquation (3.2)). Then, the first term in the sum clearly tends to 0 as n → + ∞ . Using Proposition 4.2,the second term tends to the normal distribution given in (4.6). Finally, we have to check that the thirdterm tends to 0. For that purpose, we use the second order expansion given in Daouia et al. (2017b): q p,α n (cid:0) R ∗ U (1) (cid:1) f L (cid:16) ( γ − + N ) − , p (cid:17) q α n (cid:0) R ∗ U (1) (cid:1) = 1 − ( γ − + N ) − r ( α n , p ) + ( λ + o (1)) A ∗ (cid:18) − α n (cid:19) , where r ( α n , p ) = λ q αn ( R ∗ U (1) ) (cid:0) E (cid:2) R ∗ U (1) (cid:3) + o (1) (cid:1) + λ A ∗ (cid:16) − α n (cid:17) (1 + o (1)), λ, λ , λ ∈ R are notrelated to n and A ∗ ( t ) is the auxiliary function of Φ R ∗ (cid:0) − t (cid:1) . It seems important to precise that theconditional distribution R ∗ U (1) is regularly varying with tail index γ − + N >
1, then E (cid:2) R ∗ U (1) (cid:3) existsand, R ∗ U (1) being symmetric, equals 0. Then, a sufficient condition for asymptotic normality may be lim n → + ∞ √ k n ln(1 − α n ) q αn ( R ∗ U (1) ) = 0lim n → + ∞ √ k n ln(1 − α n ) A ∗ (cid:16) − α n (cid:17) = 0 . We know, using Assumption 2 and the proof of Proposition 4.5, that q α n (cid:0) R ∗ U (1) (cid:1) = Φ − R ∗ ( α n ) is asymp-totically proportional to (1 − α n ) − γγN +1 , while A ∗ (cid:16) − α n (cid:17) is asymptotically proportional to (1 − α n ) − ργN +1 if ρ > − γ and (1 − α n ) γγN +1 otherwise. Finally, it is not difficult to check that (cid:16) C L p int (cid:17) leads to the nullityof the two limits, and therefore to the third term of the decomposition, hence the result. The proof isexactly the same for the second normality, replacing ˆ q p,α n ( Y | X = x ) by ˆˆ q p,α n ( Y | X = x ), ln (1 − α n ) byln (cid:16) k n n (1 − α n ) (cid:17) and using Proposition 4.5 instead of 4.2. (cid:50) Proof of Proposition 5.6.
We have the following decomposition : √ k n ln (1 − α n ) (cid:32) ˆ H p,α n ( Y | X = x ) H p,α n ( Y | X = x ) − (cid:33) ∼ n → + ∞ √ k n ln (1 − α n ) (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn f H (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) H p,α n (cid:0) R ∗ U (1) (cid:1) − = √ k n ln (1 − α n ) f H (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) f H (cid:16) ( γ − + N ) − , p (cid:17) − (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn Φ − R ∗ ( α n ) f H (cid:16)(cid:0) γ − + N (cid:1) − , p (cid:17) Φ − R ∗ ( α n ) H p,α n (cid:0) R ∗ U (1) (cid:1) + √ k n ln (1 − α n ) (cid:32) (cid:0) W [ n ˜ v n +1] (cid:1) / ˆ η kn Φ − R ∗ ( α n ) − (cid:33) f H (cid:16)(cid:0) γ − + N (cid:1) − , p (cid:17) Φ − R ∗ ( α n ) H α n (cid:0) R ∗ U (1) (cid:1) + √ k n ln (1 − α n ) f H (cid:16)(cid:0) γ − + N (cid:1) − , p (cid:17) Φ − R ∗ ( α n ) H p,α n (cid:0) R ∗ U (1) (cid:1) − . We know that f H (cid:16)(cid:0) ˆ γ − k n + N (cid:1) − , p (cid:17) , as a function of ˆ γ k n , is asymptotically normal with rate √ k n (seeEquation (3.2)). Then, the first term in the sum clearly tends to 0 as n → + ∞ . Using Proposition 4.2,the second term tends to the normal distribution given in (4.6). Finally, we have to check that the thirdterm tends to 0. For that purpose, we use the result of Theorem 4.5 in Mao and Hu (2012), which ensuresthat there exists λ ∈ R such that : H p,α n (cid:0) R ∗ U (1) (cid:1) f H (cid:16) ( γ − + N ) − , p (cid:17) Φ − R ∗ ( α n ) = 1 + λA ∗ (cid:18) − α n (cid:19) (1 + o (1)) , where A ∗ is the auxiliary function of Φ − R ∗ (cid:0) − t (cid:1) . In the proof of Proposition 4.2, we have seen that A ∗ ( t ) was proportional either to t − γγN +1 if ρ ≤ − γ or t ργN +1 otherwise. Then condition (cid:0) C HGint (cid:1) ensureslim n → + ∞ √ k n ln (1 − α n ) ln H α n (cid:0) R ∗ U (1) (cid:1) f H (cid:16) ( γ − + N ) − , p (cid:17) Φ − R ∗ ( α n ) = 0 . Hence the third term in the sum tends to 0, and the first result of (5.14) is proved. The proof is exactlythe same for the second one, with rate √ k n ln ( knn (1 − αn ) ) instead of √ k n ln(1 − α n ) . Then condition (cid:16) C HGhigh (cid:17) gives theexpected result. (cid:50)
Acknowledgements
The author would like to thank the Editor-in-Chief, the Associate Editor and the referees, who did anextremely detailed and relevant report that widely helped to improve the paper. This work was partiallysupported by the MultiRisk LEFE-INSU Program, and by the LABEX MILYON (ANR-10-LABX-0070)of Universit´e de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated bythe French National Research Agency (ANR).
ONDITIONAL EXTREME RISK MEASURES FROM HEAVY-TAILED ELLIPTICAL RANDOM VECTORS 23
References
Abramowitz, M., Stegun, I. A., et al. (1966). Handbook of mathematical functions. Applied mathematicsseries, 55(62):39.Artzner, P., Delbaen, F., Eber, J., and Heath, D. (1999). Coherent measures of risk. MathematicalFinance, 9(3):203–228.Barg`es, M., Cossette, H., and Marceau, E. (2009). TVaR-based capital allocation with copulas. Insurance:Mathematics and Economics, 45(3):348–361.Bellini, F., Klar, B., Muller, A., and Gianin, E. R. (2014). Generalized quantiles as risk measures.Insurance: Mathematics and Economics, 54:41–48.Bellini, F. and Rosazza Gianin, E. (2008). On Haezendonck risk measures. Journal of Banking & Finance,32:986–994.Bernardi, M., Bignozzi, V., and Petrella, L. (2017). On the Lp-quantiles for the Student t distribution.Statistics & Probability Letters.Breckling, J. and Chambers, R. (1988). M-quantiles. Biometrika, 75(4).Cai, J. and Weng, C. (2016). Optimal reinsurance with expectile. Scandinavian Actuarial Journal,2016(7):624–645.Cambanis, S., Huang, S., and Simons, G. (1981). On the theory of elliptically contoured distributions.Journal of Multivariate Analysis, 11:368–385.Chen, Z. (1996). Conditional Lp-quantiles and their application to the testing of symmetry in non-parametric regression. Statistics & Probability Letters, 29(2):107 – 115.Cressie, N. (1988). Spatial prediction and ordinary kriging. Mathematical Geology, 20(4):405–421.Daouia, A., Girard, S., and Stupfler, G. (2017a). Estimation of tail risk based on Extreme Expectiles.Journal of the Royal Statistical Society: Series B.Daouia, A., Girard, S., and Stupfler, G. (2017b). Extreme M-quantiles as risk measures: from L1 to Lpoptimization. Bernoulli.de Haan, L. and Ferreira, A. (2006). Extreme value theory: an introduction. Springer Science & BusinessMedia.de Haan, L. and Resnick, S. (1998). On asymptotic normality of the hill estimator. Communications inStatistics, 14(4):849–866.de Haan, L. and Rootz´en, H. (1993). On the estimation of high quantiles. Journal of Statistical Planningand Inference, 35(1):1–13.de Valk, C. (2016). Approximation of high quantiles from intermediate quantiles. Extremes, 19:661–686.El Methni, J., Gardes, L., Girard, S., and Guillou, A. (2012). Estimation of extreme quantiles from heavyand light tailed distributions. Journal of Statistical Planning and Inference, 142(10):2735–2747.Fang, K.-T., Kotz, S., and Ng, K. W. (1990). Symmetric multivariate and related distributions. Chapmanand Hall.Frahm, G. (2004). Generalized Elliptical Distributions: Theory and Applications. PhD thesis, Universit¨atzu K¨oln.Gardes, L. and Girard, S. (2005). Estimating extreme quantiles of weibull tail distributions.Communications in Statistics-Theory and Methods, 34:1065–1080.Gong, J., Li, Y., Peng, L., and Yao, Q. (2015). Estimation of extreme quantiles for functions of depen-dent random variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology),77(5):1001–1024.Goovaerts, M., Kaas, R., Dhaene, J., and Tang, Q. (2004). Some new classes of consistent risk measures.Insurance: Mathematics and Economics, 34:505–516.Haezendonck, J. and Goovaerts, M. (1982). A new premium calculation principle based on Orlicz norms.Insurance: Mathematics and Economics, 1:41–53.Hashorva, E. (2007a). Extremes of conditioned elliptical random vectors. Journal of Multivariate Analysis,98(8):1583–1591.Hashorva, E. (2007b). Sample extremes of l p -norm asymptotically spherical distributions. AlbanianJournal of Mathematics, 1(3):157–172.Hill, B. M. (1975). A simple general approach to inference about the tail of a distribution. The Annalsof Statistics, 3.Hsing, T. (1991). On Tail Index Estimation Using Dependant Data. The Annals of Statistics, 19(3):1547–1569.Hua, L. and Joe, H. (2011). Second order regular variation and conditional tail expectation of multiplerisks. Insurance: Mathematics and Economics, 49:537–546. Huang, S. T. and Cambanis, S. (1979). Spherically invariant processes: Their nonlinear structure, dis-crimination, and estimation. Journal of Multivariate Analysis, 9(1):59–83.Jessen, H. A. and Mikosch, T. (2006). Regularly varying functions. Publications de l’InstitutMathematique, 80(94):171–192.Johnson, M. (1987). Multivariate Statistical Simulation. Wiley & Sons.Kano, Y. (1994). Consistency property of elliptical probability density functions. Journal of MultivariateAnalysis, 51:139–147.Kelker, D. (1970). Distribution theory of spherical distributions and a location-scale parameter general-ization. Sankhya: The Indian Journal of Statistics, Series A, 32(4):419–430.Koenker, R. (1992). When are expectiles percentiles? Econometric Theory, 8:423–424.Koenker, R. and Bassett, G. J. (1978). Regression quantiles. Econometrica, 46(1):33–50.Kratz, M. and Resnick, S. I. (1996). The qq-estimator and heavy tails. Stochastic Models, 12(4):699–724.Li, Q. and Racine, J. S. (2007). Nonparametric econometrics: theory and practice. Princeton UniversityPress.Linsmeier, T. J. and Pearson, N. D. (2000). Value at risk. Financial Analysts Journal, 56(2):47–67.Mao, T. and Hu, T. (2012). Second-order properties of the haezendonck-goovaerts risk measure forextreme risks. Insurance: Mathematics and Economics, 51:333–343.Maume-Deschamps, V., Rulli`ere, D., and Usseglio-Carleve, A. (2017). Quantile predictions for ellipticalrandom fields. Journal of Multivariate Analysis, 159:1 – 17.Maume-Deschamps, V., Rulli`ere, D., and Usseglio-Carleve, A. (2018). Spatial expectile predictions forelliptical random fields. Methodology and Computing in Applied Probability, 20(2):643–671.McNeil, A. J., Frey, R., and Embrechts, P. (2015). Quantitative risk management : Concepts, techniquesand tools. Princeton university press.Newey, W. and Powell, J. (1987). Asymmetric least squares estimation and testing. Econometrica,(55):819–847.Opitz, T. (2016). Modeling asymptotically independent spatial extremes based on Laplace random fields.Spatial Statistics, 16:1–18.Owen, J. and Rabinovitch, R. (1983). On the class of elliptical distributions and their applications to thetheory of portfolio choice. The Journal of Finance, 38(3):745–752.Parzen, E. (1962). On estimation of a probability density function and mode. Annals of MathematicalStatistics, 33(3):1065–1076.Pickands, J. (1975). Statistical inference using extreme order statistics. The Annals of Statistics, 3(1):119–131.Resnick, S. and St˘aric˘a, C. (1995). Consistency of Hill’s Estimator for Dependent Data. Journal ofApplied Probability, 32(1):139–167.Schultze, J. and Steinebach, J. (1996). On least squares estimates of an exponential tail coefficient.Statistics & Decisions, 14(3):353–372.Sobotka, F. and Kneib, T. (2012). Geoadditive expectile regression. Computational Statistics & DataAnalysis, 56(4):755–767.Tang, Q. and Yang, F. (2012). On the Haezendonck-Goovaerts risk measure for extreme risks. Insurance:Mathematics and Economics, 50:217–227.Taylor, J. W. (2008). Estimating Value at Risk and Expected Shortfall Using Expectiles. Journal ofFinancial Econometrics, 6(2):231–252.Wang, H. J., Li, D., and He, X. (2012). Estimation of high conditional quantiles for heavy-tailed distri-butions. Journal of the American Statistical Association, 107(500):1453–1464.Xiao, Y. and Valdez, E. (2015). A Black-Litterman asset allocation under Elliptical distributions.Quantitative Finance, 15(3):509–519.
Universit´e de Lyon, Universit´e Lyon 1, Institut Camille Jordan ICJ UMR 5208 CNRS
E-mail address ::