Range Value-at-Risk: Multivariate and Extreme Values
Roba Bairakdar, Lu Cao, Mélina Mailhot∗

May 27, 2020
Abstract
The concept of univariate Range Value-at-Risk, presented by Cont et al. (2010), is extended to the multidimensional setting. Traditional risk measures are not well suited when dealing with heavy-tailed distributions and infinite tail expectations. Multivariate definitions of robust truncated tail expectations are provided to overcome this problem. Robustness and other properties, as well as empirical estimators, are derived. Closed-form expressions and special cases in the extreme value framework are also discussed. Numerical and graphical examples are provided to examine the accuracy of the empirical estimators.
Keywords:
Multivariate Risk Measures, Dependence, Robustness, Extreme Values.
∗ Department of Mathematics and Statistics, Concordia University, 1400 de Maisonneuve Blvd. West, Montréal (Québec), Canada H3G 1M8; e-mail: [email protected]

1 Introduction

Recent progress in understanding the specific risks faced by an entity mainly arises from the emergence of models reflecting the entity more precisely, and of measures used to quantify and represent a company's global and granular structures. Risk measures are essential for insurance companies and financial institutions for several reasons, such as quantifying the capital requirements that protect against unexpected future losses and setting insurance premiums for all lines of business and risk categories. Different univariate risk measures have been proposed in the literature. The most common risk measures are Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR). VaR, which represents an $\alpha$-level quantile, found its way through the G-30 report; see Group et al. (1993) for details. Artzner et al. (1999) show that VaR is not a coherent risk measure, in addition to not providing any information about the tail of the distribution, and thus suggest other risk measures such as TVaR, which evaluates the average of all VaR values at confidence levels greater than $\alpha$ and is a significant measure for heavy-tailed distributions.

Dependencies between risks need to be taken into account to obtain accurate capital allocation and systemic risk evaluation. For example, systemic risk refers to the risks imposed by interdependencies in a system. Univariate risk measures are not suitable for heterogeneous classes of homogeneous risks. Therefore, multivariate risk measures have been developed and have gained popularity in the last decade. The notion of quantile curves is employed in Embrechts and Puccetti (2006), Nappo and Spizzichino (2009) and Prékopa (2012) to define a multivariate risk measure called the upper and lower orthant VaR. Based on the same idea, Cossette et al.
(2013) redefine the lower and upper orthant VaR, and Cossette et al. (2015) propose the lower and upper orthant TVaR. Cousin and Di Bernardino (2013) develop a finite vector version of the lower and upper orthant VaR. A drawback of the multivariate VaR is that it represents the boundary of the $\alpha$-level set and provides no additional tail information, similarly to the univariate VaR. Furthermore, relationships holding for univariate risk measures can be different in a multivariate setting.

Most risk measures are defined as functions of the loss distribution, which should be estimated from the data. In Cont et al. (2010), risk measurement procedures are defined and an analysis of the robustness of different risk measures is performed. They point out the conflict between subadditivity and robustness and propose a robust risk measure called weighted VaR (WVaR). The use of a truncated version of TVaR, defined as Range Value-at-Risk (RVaR), is suggested by Bignozzi and Tsanakas (2016). The lower and upper orthant RVaR in the multivariate setting are developed in this paper, in order to provide a new robust multivariate risk measure. We aim to study their properties in detail and derive their estimators. We also focus on extreme value distributions, which can be used to model the heavy tail of the data.

The paper is organized as follows. In Section 2, definitions and properties of the univariate RVaR are given, with examples in the Extreme Value Theory (EVT) framework. Sections 3.1 and 3.2 define the multivariate lower and upper orthant RVaR, respectively. Section 3.3 presents their interesting and desirable properties, such as their behavior under transformations or translations of the multivariate variables and monotonicity. We also develop asymptotic results and the behavior with aggregate risks, and we prove their robustness. In Section 3.4, we define the empirical estimator of the lower and upper orthant RVaR and we illustrate the accuracy of this estimator graphically.
Concluding remarks are given in Section 4.

2 Univariate RVaR

In this section, we present the univariate definition of RVaR and provide resulting measures, based on the univariate RVaR, in the EVT framework and in asymptotic scenarios.
Consider a random loss variable $X$ on a probability space $(\Omega, \mathcal{F}, P)$ with cumulative distribution function (cdf) $F_X$. A risk measure $\rho(X)$ for a random variable (r.v.) $X$ corresponds to the required amount that has to be maintained so that the financial position $\rho(X) - X$ is acceptable. Since there are several definitions of risk measures, an appropriate choice becomes crucial for stakeholders.

Definition 2.1.
For a continuous random variable $X$ with cumulative distribution function (cdf) $F_X$, the univariate Range Value-at-Risk at significance level range $[\alpha_1, \alpha_2] \subseteq [0,1]$ is defined by
\[ \mathrm{RVaR}_{\alpha_1,\alpha_2}(X) = E\left[X \mid \mathrm{VaR}_{\alpha_1}(X) \le X \le \mathrm{VaR}_{\alpha_2}(X)\right] = \frac{1}{\alpha_2-\alpha_1}\int_{\alpha_1}^{\alpha_2} \mathrm{VaR}_u(X)\,du, \]
where $\mathrm{VaR}_\alpha(X) = \inf\{x \in \mathbb{R} : F_X(x) \ge \alpha\}$ is the univariate Value-at-Risk at significance level $\alpha \in [0,1]$.

For a continuous random variable $X$ with strictly increasing cdf, $\mathrm{VaR}_\alpha(X) = F_X^{-1}(\alpha)$ is also called the $\alpha$-quantile, where $F_X^{-1}$ is the inverse of the cdf. VaR fails to give any information beyond the level $\alpha$; RVaR, however, quantifies the magnitude of the loss in the worst $100(1-\alpha_1)\%$ to $100(1-\alpha_2)\%$ of cases. When $\alpha_2 = 1$, we obtain a special case of RVaR, which is referred to as TVaR in this article.

Robust statistics can be defined as statistics that are not unduly affected by outliers. In order to establish the robustness of RVaR, we need to define a measure of this effect. Consider a continuous random variable $X$ with cdf $F \in \mathcal{D}$, where $\mathcal{D}$ is the convex set of cdfs. Notice that a risk measure is distribution-based if $\rho(X_1) = \rho(X_2)$ when $F_{X_1} = F_{X_2}$. Hence, we use $\rho(F) \triangleq \rho(X)$ to represent distribution-based risk measures. To quantify the sensitivity of a risk measure to a change in the distribution, we use the sensitivity function. This method is used by Cont et al. (2010) and can be explained as the one-sided directional derivative of the effective risk measure at $F$ in the direction $\delta_z$.

Definition 2.2.
Consider $\rho$, a distribution-based risk measure of a continuous random variable $X$ with distribution function $F \in \mathcal{D}$, where $\mathcal{D}$ is the convex set of cdfs. For $\varepsilon \in [0,1]$, set $F_\varepsilon = \varepsilon \delta_z + (1-\varepsilon)F$ such that $F_\varepsilon \in \mathcal{D}$, where $\delta_z \in \mathcal{D}$ is the probability measure giving mass 1 to $\{z\}$. The distribution $F_\varepsilon$ is differentiable at any $x \ne z$ and has a jump at $x = z$. The sensitivity function is defined by
\[ S(z) = S(z; F) \triangleq \lim_{\varepsilon \to 0^+} \frac{\rho(F_\varepsilon) - \rho(F)}{\varepsilon}, \]
for any $z \in \mathbb{R}$ such that the limit exists.

The sensitivity function of a robust statistic does not go to infinity as $z$ becomes arbitrarily large. In other words, a bounded sensitivity function ensures that the risk measure does not blow up when a small change in the distribution happens. Accordingly, Cont et al. (2010) show that VaR and RVaR are robust statistics by showing that their respective sensitivity functions are bounded.

In this section, we provide some examples of the discussed risk measures in the EVT framework. Most statistical techniques focus on the behavior of the center of the distribution, usually the mean. EVT, however, is a branch of statistics focused on the behavior of the tail of the distribution. There are two principal models for extreme values: the block maxima model and the peaks-over-threshold model. The block maxima approach is used to model the largest observations from samples of identically distributed observations in successive periods. The peaks-over-threshold approach is used to model all large observations that exceed a given high threshold, denoted $u$. The limiting distribution of block maxima, from Fisher and Tippett (1928), is given in the theorem below.

Theorem 2.1. Let $X_1, \ldots, X_n$ be a sequence of independent random variables having a common distribution function $F$ and consider $M_n = \max\{X_1, \ldots, X_n\}$.
If there exist norming constants $(a_n)$ and $(b_n)$, with $a_n \in \mathbb{R}$ and $b_n > 0$ for all $n \in \mathbb{N}$, and some non-degenerate distribution function $H$ such that
\[ \frac{M_n - a_n}{b_n} \xrightarrow{d} H, \]
then $H$ is the Generalized Extreme Value (GEV) distribution, given by
\[ H_\xi(x) = \begin{cases} \exp\left\{-(1+\xi x)^{-1/\xi}\right\}, & \xi \ne 0, \\ \exp\{-\exp(-x)\}, & \xi = 0, \end{cases} \]
where $1 + \xi x > 0$. A three-parameter family is obtained by defining $H_{\xi,\mu,\sigma}(x) := H_\xi\!\left(\frac{x-\mu}{\sigma}\right)$ for a location parameter $\mu \in \mathbb{R}$, a scale parameter $\sigma > 0$, and a shape parameter $\xi \in \mathbb{R}$.

The one-parameter GEV is the limiting distribution of the normalized maxima; in practice, however, the norming constants $(a_n)$ and $(b_n)$ are unknown, so the three-parameter GEV provides a more general and flexible approach, being the limiting distribution of the unnormalized maxima. Pickands III et al. (1975) and Balkema and De Haan (1974) show that the theorem below provides a very powerful result regarding the excess distribution function.

Theorem 2.2.
Let $X$ be a random variable with distribution function $F$ and upper end-point $x_F \le \infty$. If $F$ belongs to the maximum domain of attraction of a GEV distribution $H_{\xi,\mu,\sigma}$, then
\[ \lim_{u \to x_F}\ \sup_{0 \le x < x_F - u} \left| F_u(x) - G_{\xi,\sigma(u)}(x) \right| = 0, \]
where $F_u(x) = P(X - u \le x \mid X > u)$ is the excess distribution function over the threshold $u$ and $G_{\xi,\sigma}$ is the Generalized Pareto Distribution (GPD),
\[ G_{\xi,\sigma}(x) = \begin{cases} 1 - \left(1 + \xi x/\sigma\right)^{-1/\xi}, & \xi \ne 0, \\ 1 - \exp(-x/\sigma), & \xi = 0. \end{cases} \]

Proposition 2.1.
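The peaks-over-threshold model can be sketched numerically. The following minimal example, assuming scipy is available, simulates Pareto-type losses, keeps the exceedances over a high threshold $u$, and fits a GPD to them; the parameter values and the 95% threshold choice are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

# Simulate heavy-tailed losses with true shape xi = 0.5 (so F is in MDA(H_xi)).
rng = np.random.default_rng(42)
xi_true = 0.5
losses = stats.genpareto.rvs(c=xi_true, scale=1.0, size=50_000, random_state=rng)

# Peaks-over-threshold: keep the excesses over a high threshold u.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u

# Fit a GPD to the excesses (location fixed at 0). By the threshold-stability of
# the GPD, the fitted shape should be near xi_true and the fitted scale near
# sigma + xi_true * u.
xi_hat, _, sigma_hat = stats.genpareto.fit(excesses, floc=0.0)
print(f"xi_hat = {xi_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```

With roughly 2,500 exceedances, the maximum-likelihood shape estimate is typically within a few hundredths of the true value.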
Assume $X \sim \mathrm{GEV}(\mu, \sigma, \xi)$. Then, for $0 \le \alpha \le 1$,
\[ \mathrm{VaR}_\alpha(X) = \begin{cases} \mu - \dfrac{\sigma}{\xi}\left[1 - (-\ln\alpha)^{-\xi}\right] = E[X] - \dfrac{\sigma}{\xi}\left[\Gamma(1-\xi) - (-\ln\alpha)^{-\xi}\right], & \xi \ne 0,\ \xi < 1, \\[1ex] \mu - \sigma\ln(-\ln\alpha) = E[X] - \sigma\gamma - \sigma\ln(-\ln\alpha), & \xi = 0, \end{cases} \]
where
\[ E[X] = \begin{cases} \mu + \dfrac{\sigma}{\xi}\left(\Gamma(1-\xi) - 1\right), & \xi \ne 0,\ \xi < 1, \\[1ex] \mu + \sigma\gamma, & \xi = 0, \\[1ex] \infty, & \xi \ge 1, \end{cases} \]
$\Gamma(x,a) = \int_a^\infty t^{x-1}e^{-t}\,dt$ is the upper incomplete gamma function, with $\Gamma(x) = \Gamma(x,0)$, and $\gamma$ is Euler's constant. Note that the first expression for $\mathrm{VaR}_\alpha(X)$ holds for every $\xi \ne 0$, while $E[X]$ diverges for $\xi \ge 1$.

Let $0 \le \alpha_1 \le \alpha_2 \le 1$. Then
\[ \mathrm{RVaR}_{\alpha_1,\alpha_2}(X) = \begin{cases} \mu - \dfrac{\sigma}{\xi(\alpha_2-\alpha_1)}\left[(\alpha_2-\alpha_1) - \Gamma(1-\xi, -\ln\alpha_2) + \Gamma(1-\xi, -\ln\alpha_1)\right], & \xi \ne 0, \\[1ex] \mu - \dfrac{\sigma}{\alpha_2-\alpha_1}\left[\alpha_2\ln(-\ln\alpha_2) - \alpha_1\ln(-\ln\alpha_1) - \mathrm{li}(\alpha_2) + \mathrm{li}(\alpha_1)\right], & \xi = 0, \end{cases} \]
where $\mathrm{li}(x) = \int_0^x \frac{dt}{\ln t}$ is the logarithmic integral for $0 < x < 1$, which has a singularity at $x = 1$.

As a special case, let $\alpha_2 = 1$; then, for $\xi \ne 0$ and $\xi < 1$,
\begin{align*} \mathrm{TVaR}_\alpha(X) &= E[X] - \frac{\sigma}{\xi(1-\alpha)}\left[\Gamma(1-\xi, -\ln\alpha) - \alpha\,\Gamma(1-\xi)\right] \\ &= \mathrm{VaR}_\alpha(X) - \frac{\sigma}{\xi(1-\alpha)}\left[(1-\alpha)(-\ln\alpha)^{-\xi} + \Gamma(1-\xi, -\ln\alpha) - \Gamma(1-\xi)\right], \end{align*}
and TVaR diverges for $\xi \ge 1$.

Proof. See Appendix A.

In addition to the risk measures themselves, it is interesting to observe how the ratio of the risk measures behaves for large confidence levels $\alpha$.

Proposition 2.2.
Assume $X \sim \mathrm{GEV}(\mu, \sigma, \xi)$ and let $0 \le \alpha_1 \le \alpha_2 \le 1$. Then
\[ \lim_{\alpha_1 \to 1}\left[\lim_{\alpha_2 \to 1} \frac{\mathrm{RVaR}_{\alpha_1,\alpha_2}(X)}{\mathrm{VaR}_{\alpha_1}(X)}\right] = \lim_{\alpha \to 1}\left[\frac{\mathrm{TVaR}_{\alpha}(X)}{\mathrm{VaR}_{\alpha}(X)}\right] = \begin{cases} (1-\xi)^{-1}, & \xi > 0, \\ 1, & \xi < 0. \end{cases} \]

Proof.
See Appendix A.

Hence, the shape parameter $\xi$ is a strong factor affecting this ratio for large values of $\alpha$. For $\xi < 0$, TVaR approaches the value of VaR for high values of $\alpha$, while for $\xi > 0$, TVaR becomes significantly larger than VaR.
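As a hedged numerical check of Proposition 2.1 (assuming scipy is available, and noting that scipy's `genextreme` uses the shape convention $c = -\xi$), the closed-form VaR can be compared with the library quantile, and RVaR can be approximated by averaging quantiles over a fine grid as in Definition 2.1. Parameter values are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as Gamma, gammaincc

mu, sigma, xi = 0.0, 1.0, 0.3
a1, a2 = 0.95, 0.99

def var_gev(a):
    """VaR_alpha of GEV(mu, sigma, xi) for xi != 0 (Proposition 2.1)."""
    return mu - sigma / xi * (1.0 - (-np.log(a)) ** (-xi))

# Cross-check against scipy: its genextreme shape parameter is c = -xi.
assert np.isclose(var_gev(a1), stats.genextreme.ppf(a1, c=-xi, loc=mu, scale=sigma))

# RVaR_{a1,a2}: average VaR_u over a midpoint grid on [a1, a2] (Definition 2.1).
u = a1 + (np.arange(200_000) + 0.5) / 200_000 * (a2 - a1)
rvar_numeric = var_gev(u).mean()

def upper_gamma(s, x):
    """Unnormalized upper incomplete gamma Gamma(s, x), for s > 0."""
    return gammaincc(s, x) * Gamma(s)

# Closed form from Proposition 2.1 (xi != 0 case).
rvar_closed = mu - sigma / (xi * (a2 - a1)) * (
    (a2 - a1) - upper_gamma(1 - xi, -np.log(a2)) + upper_gamma(1 - xi, -np.log(a1))
)
print(rvar_numeric, rvar_closed)
```

The two RVaR values agree to several decimal places, which is a useful sanity check before using the closed forms at very high confidence levels.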
Proposition 2.3.
Consider the random variable $X$ with cdf $F$ and survival function $\bar F$. Assume $F \in \mathrm{MDA}(H_\xi)$. Then, for $0 \le \alpha \le 1$ and $x \ge u$, where $u$ is a high threshold, we have
\[ \mathrm{VaR}_\alpha(X) = \begin{cases} u + \dfrac{\sigma}{\xi}\left[\left(\dfrac{1-\alpha}{\zeta_u}\right)^{-\xi} - 1\right], & \xi \ne 0, \\[1ex] u - \sigma\ln\left(\dfrac{1-\alpha}{\zeta_u}\right), & \xi = 0. \end{cases} \]
Let $0 \le \alpha_1 \le \alpha_2 \le 1$. Then, for any value of $\xi \ne 1$,
\[ \mathrm{RVaR}_{\alpha_1,\alpha_2}(X) = \frac{(1-\alpha_1)\,\mathrm{VaR}_{\alpha_1}(X) - (1-\alpha_2)\,\mathrm{VaR}_{\alpha_2}(X)}{(\alpha_2-\alpha_1)(1-\xi)} + \frac{\sigma - \xi u}{1-\xi}. \]
As a special case, let $\alpha_2 = 1$; then, for $\xi < 1$,
\[ \mathrm{TVaR}_\alpha(X) = \frac{\mathrm{VaR}_\alpha(X)}{1-\xi} + \frac{\sigma - \xi u}{1-\xi}, \]
where $\zeta_u = \bar F(u) = P(X > u)$, and TVaR is infinite for $\xi \ge 1$.

Proof. See Appendix A.

RVaR has not been explored in the Extreme Value Theory literature; we have therefore derived a closed-form expression for it. Even though TVaR is infinite for $\xi \ge 1$, RVaR exists. Thus, RVaR is useful for the cases where $\xi \ge 1$, due to its ability to capture an expected value over a range of high extremes. In theory, this might not be representative of the heavy tail; in practice, however, it can be used to eliminate the issue of an infinite mean for real data: insurance companies and financial institutions would still be interested in calculating their reserves and economic capital, and it is not possible to hold an infinite amount of reserves. In this scenario, RVaR can be used with high values of $\alpha_1$ and $\alpha_2$.

In addition to the results for VaR and TVaR, it is interesting to observe how the ratio of the two risk measures behaves for large confidence levels $\alpha$.

Proposition 2.4.
Consider the random variable $X$ with cdf $F$. Assume $F \in \mathrm{MDA}(H_\xi)$. Then, for $0 \le \alpha_1 \le \alpha_2 \le 1$, $\xi < 1$ and $x \ge u$, where $u$ is a high threshold, we have
\[ \lim_{\alpha_1 \to 1}\left[\lim_{\alpha_2 \to 1} \frac{\mathrm{RVaR}_{\alpha_1,\alpha_2}(X)}{\mathrm{VaR}_{\alpha_1}(X)}\right] = \lim_{\alpha \to 1}\left[\frac{\mathrm{TVaR}_{\alpha}(X)}{\mathrm{VaR}_{\alpha}(X)}\right] = \begin{cases} (1-\xi)^{-1}, & 0 \le \xi < 1, \\ 1, & \xi < 0. \end{cases} \]

Proof.
See Appendix A.

Hence, similarly to the GEV distribution, the shape parameter $\xi$ is a strong factor affecting this ratio for large values of $\alpha$.

3 Multivariate RVaR

In this section, we define the multivariate lower and upper orthant RVaR and study their properties. Examples and illustrations of the findings are provided. Finally, empirical estimators are presented.

3.1 Lower Orthant RVaR
Consider the continuous random vector $\mathbf{X} = (X_1, X_2, \ldots, X_d) \in \mathbb{R}_+^d$ with joint cdf $F$ and joint survival function $\bar F$. Define the random vector $\mathbf{X}_{\setminus i} = (X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_d)$ with joint cdf $F_{\setminus i}$ and joint survival function $\bar F_{\setminus i}$, for $i = 1, \ldots, d$. Let $\mathbf{x} = (x_1, \ldots, x_d)$ be a realization of $\mathbf{X}$ and consider the vector $\mathbf{x}_{\setminus i} = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d)$.

Definition 3.1.
Consider a continuous random vector $\mathbf{X} = (X_1, \ldots, X_d)$ on the probability space $(\Omega, \mathcal{F}, P)$ with joint cdf $F$. The lower orthant RVaR at significance level range $[\alpha_1, \alpha_2] \subseteq [0,1]$ is given by
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}) = \bigcup_{i=1}^d \left\{ \left(x_1, \ldots, x_{i-1}, \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}), x_{i+1}, \ldots, x_d\right) \right\}, \]
where
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = E\left[X_i \mid \underline{\mathrm{VaR}}_{\alpha_1,\mathbf{x}_{\setminus i}}(\mathbf{X}) \le X_i \le \mathrm{VaR}_{\alpha_2}(X_i),\ \mathbf{X}_{\setminus i} \le \mathbf{x}_{\setminus i}\right], \]
for $\underline{\mathrm{VaR}}_{\alpha_1, \mathrm{VaR}_{\alpha_2}(\mathbf{X}_{\setminus i})}(\mathbf{X}) \le x_j \le \mathrm{VaR}_{\alpha_2}(X_j)$, for all $j = 1, \ldots, d$, $j \ne i$, in which the lower orthant VaR at significance level $\alpha$ is defined by
\[ \underline{\mathrm{VaR}}_{\alpha}(\mathbf{X}) = \bigcup_{i=1}^d \left\{ \left(x_1, \ldots, x_{i-1}, \underline{\mathrm{VaR}}_{\alpha,\mathbf{x}_{\setminus i}}(\mathbf{X}), x_{i+1}, \ldots, x_d\right) : x_j \ge \mathrm{VaR}_{\alpha}(X_j),\ \forall j \ne i \right\}, \]
where $\underline{\mathrm{VaR}}_{\alpha,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \inf\left\{x_i \in \mathbb{R} : F_{\mathbf{x}_{\setminus i}}(x_i) \ge \alpha\right\}$ and $F_{\mathbf{x}_{\setminus i}}(x_i)$ denotes the joint cdf seen as a function of $x_i$ with $\mathbf{x}_{\setminus i}$ fixed.

Proposition 3.1.
For a continuous random vector $\mathbf{X} = (X_1, \ldots, X_d)$ with joint cdf $F$ and for the subvector $\mathbf{X}_{\setminus i} = (X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_d)$ with joint cdf $F_{\setminus i}$, $\underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X})$ can be restated as
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \frac{1}{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} \int_{\alpha_1}^{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i))} \underline{\mathrm{VaR}}_{u,\mathbf{x}_{\setminus i}}(\mathbf{X})\,du, \]
for $\underline{\mathrm{VaR}}_{\alpha_1, \mathrm{VaR}_{\alpha_2}(\mathbf{X}_{\setminus i})}(\mathbf{X}) \le x_j \le \mathrm{VaR}_{\alpha_2}(X_j)$, for all $j = 1, \ldots, d$, $j \ne i$.

Proof.
See Appendix A.

Figure 1: (a) Lower orthant VaR at level 0.95 and RVaR at level range [0.95, 0.99] for fixed values of $X_2$, and (b) lower orthant VaR at level 0.95 and RVaR at level range [0.95, 0.99] for fixed values of $X_1$.

Example 3.1.
Consider the random vector $(X_1, X_2)$ with joint cdf defined by a Gumbel copula with dependence parameter $\theta$ and marginals $X_1 \sim$ Weibull(2, 50) and $X_2 \sim$ Weibull(2, 150). Let the confidence level range be $\alpha_1 = 0.95$ and $\alpha_2 = 0.99$. Then, we obtain the bivariate lower orthant RVaR in Figure 1. For comparison, we plot $\underline{\mathrm{VaR}}_{0.95, x_i}(\mathbf{X})$ on the same graph.

One can observe from Figure 1 that $\underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ converges to the univariate RVaR when $x_i$ ($i = 1, 2$) approaches infinity. Also, when $x_i$ gets close to $\mathrm{VaR}_{\alpha_1}(X_i)$, $\underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ approaches $\mathrm{VaR}_{\alpha_2}(X_j)$. By letting $\alpha_2 = 1$, a special case of the lower orthant RVaR is obtained, namely the lower orthant TVaR, as defined, studied and illustrated by Cossette et al. (2015).

3.2 Upper Orthant RVaR

Definition 3.2.
Consider a continuous random vector $\mathbf{X} = (X_1, \ldots, X_d)$ on the probability space $(\Omega, \mathcal{F}, P)$ with joint cdf $F$. The upper orthant RVaR at significance level range $[\alpha_1, \alpha_2] \subseteq [0,1]$ is given by
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}) = \bigcup_{i=1}^d \left\{ \left(x_1, \ldots, x_{i-1}, \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}), x_{i+1}, \ldots, x_d\right) \right\}, \]
where
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = E\left[X_i \mid \mathrm{VaR}_{\alpha_1}(X_i) \le X_i \le \overline{\mathrm{VaR}}_{\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}),\ \mathbf{X}_{\setminus i} \ge \mathbf{x}_{\setminus i}\right], \]
for $\mathrm{VaR}_{\alpha_1}(X_j) \le x_j \le \overline{\mathrm{VaR}}_{\alpha_2, \mathrm{VaR}_{\alpha_1}(\mathbf{X}_{\setminus i})}(\mathbf{X})$, for all $j = 1, \ldots, d$, $j \ne i$, in which the upper orthant VaR at significance level $\alpha$ is defined by
\[ \overline{\mathrm{VaR}}_{\alpha}(\mathbf{X}) = \bigcup_{i=1}^d \left\{ \left(x_1, \ldots, x_{i-1}, \overline{\mathrm{VaR}}_{\alpha,\mathbf{x}_{\setminus i}}(\mathbf{X}), x_{i+1}, \ldots, x_d\right) : x_j \le \mathrm{VaR}_{\alpha}(X_j),\ \forall j \ne i \right\}, \]
where $\overline{\mathrm{VaR}}_{\alpha,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \inf\left\{x_i \in \mathbb{R} : \bar F_{\mathbf{x}_{\setminus i}}(x_i) \le 1 - \alpha\right\}$.

Similarly to the lower orthant RVaR, we can express the upper orthant RVaR as an integral of $\overline{\mathrm{VaR}}_{u,\mathbf{x}_{\setminus i}}(\mathbf{X})$.

Proposition 3.2.
For a continuous random vector $\mathbf{X} = (X_1, \ldots, X_d)$ with joint cdf $F$ and for the subvector $\mathbf{X}_{\setminus i} = (X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_d)$ with joint cdf $F_{\setminus i}$, $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X})$ can be restated as
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \frac{1}{\alpha_2 - \left(1 - \bar F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i))\right)} \int_{1 - \bar F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i))}^{\alpha_2} \overline{\mathrm{VaR}}_{u,\mathbf{x}_{\setminus i}}(\mathbf{X})\,du, \]
for $\mathrm{VaR}_{\alpha_1}(X_j) \le x_j \le \overline{\mathrm{VaR}}_{\alpha_2, \mathrm{VaR}_{\alpha_1}(\mathbf{X}_{\setminus i})}(\mathbf{X})$, for all $j = 1, \ldots, d$, $j \ne i$.

Proof.
See Appendix A.
Example 3.2.
Consider the same random vector defined in Example 3.1. Let the confidence level range be $\alpha_1 = 0.95$ and $\alpha_2 = 0.99$. Then, we obtain the bivariate upper orthant RVaR in Figure 2. For comparison, we plot $\overline{\mathrm{VaR}}_{0.99, x_i}(\mathbf{X})$ on the same graph.

Figure 2: (a) Upper orthant VaR at level 0.99 and RVaR at level range [0.95, 0.99] for fixed values of $X_2$, and (b) upper orthant VaR at level 0.99 and RVaR at level range [0.95, 0.99] for fixed values of $X_1$.

One observes from Figure 2 that $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ converges to the univariate $\mathrm{RVaR}_{\alpha_1,\alpha_2}(X_j)$ when $x_i$ ($i = 1, 2$) gets close to the lower support of $X_i$. Also, when $x_i$ approaches $\mathrm{VaR}_{\alpha_2}(X_i)$, $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ approaches $\mathrm{VaR}_{\alpha_1}(X_j)$. As a result, the curves of the bivariate RVaR are bounded by the curves of the univariate VaR, similarly to the univariate RVaR. Analogously to the lower orthant case, letting $\alpha_2 = 1$ yields a special case of the upper orthant RVaR, namely the upper orthant TVaR.

3.3 Properties

For simplicity of notation and proofs, we will consider the bivariate case.
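To make the orthant quantities concrete, here is a hedged numerical sketch in the spirit of Examples 3.1 and 3.2: the lower orthant RVaR of Proposition 3.1 for Weibull marginals coupled by a Gumbel copula. The copula parameter `theta` and the fixed value `x1` are illustrative assumptions (the exact $\theta$ is not recoverable from this text), and the Weibull parameters are read as (shape, scale).

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

theta = 1.5                                   # assumed Gumbel dependence parameter
F1 = stats.weibull_min(c=2, scale=50).cdf     # X1 ~ Weibull(2, 50)
F2 = stats.weibull_min(c=2, scale=150).cdf    # X2 ~ Weibull(2, 150)
q2 = stats.weibull_min(c=2, scale=150).ppf

def gumbel_cdf(u, v):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_cdf(x1, x2):
    return gumbel_cdf(F1(x1), F2(x2))

def lower_var(u, x1):
    """VaR_{u, x1}(X): smallest x2 with F(x1, x2) >= u (Definition 3.1)."""
    return brentq(lambda x2: joint_cdf(x1, x2) - u, 1e-6, 1e6)

def lower_rvar(a1, a2, x1, m=200):
    """Proposition 3.1: average VaR_{u, x1} for u in [a1, F(x1, VaR_{a2}(X2))]."""
    B = joint_cdf(x1, q2(a2))
    u_grid = a1 + np.arange(1, m + 1) / m * (B - a1)
    return float(np.mean([lower_var(u, x1) for u in u_grid]))

x1 = 120.0   # fixed value of X1, chosen large enough that F(x1, VaR_a2(X2)) > a1
print(lower_rvar(0.95, 0.99, x1))
```

As expected from Proposition 3.4, the result lies between the univariate $\mathrm{VaR}_{0.95}(X_2)$ and $\mathrm{VaR}_{0.99}(X_2)$.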
Proposition 3.3.
Let $\mathbf{X} = (X_1, X_2)$ be a continuous random vector.

1. (Translation invariance) For all $\mathbf{c} = (c_1, c_2) \in \mathbb{R}^2$ and $i, j = 1, 2$, $i \ne j$,
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_j+c_j}(\mathbf{X} + \mathbf{c}) = \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_j}(\mathbf{X}) + c_i, \qquad \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_j+c_j}(\mathbf{X} + \mathbf{c}) = \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_j}(\mathbf{X}) + c_i. \]

2. (Positive homogeneity) For all $\mathbf{c} = (c_1, c_2) \in \mathbb{R}_+^2$ and $i, j = 1, 2$, $i \ne j$,
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,c_j x_j}(\mathbf{c}\mathbf{X}) = c_i\,\underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_j}(\mathbf{X}), \qquad \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,c_j x_j}(\mathbf{c}\mathbf{X}) = c_i\,\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_j}(\mathbf{X}). \]

3. (Monotonicity) Let $\mathbf{X} = (X_1, X_2)$ and $\mathbf{X}' = (X_1', X_2')$ be two pairs of risks with joint cdfs $F_{\mathbf{X}}$ and $F_{\mathbf{X}'}$, respectively. If $\mathbf{X} \prec_{co} \mathbf{X}'$, then
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}') \prec \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}), \qquad \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}) \prec \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}'). \]

Proof. See Appendix A.

Consider the random vectors $\mathbf{X}^M$, $\mathbf{X}^W$ and $\mathbf{X}^\Pi$, denoting the comonotonic, counter-monotonic and independent vector, respectively. They satisfy
\[ \mathbf{X}^W \prec_{co} \mathbf{X}^\Pi \prec_{co} \mathbf{X}^M, \]
which means, according to Proposition 3.3, that
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}^M) \prec \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}^\Pi) \prec \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}^W) \]
and
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}^W) \prec \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}^\Pi) \prec \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2}(\mathbf{X}^M). \]

Example 3.3.
Consider a bivariate random vector $(X_1, X_2)$ which is either comonotonic, counter-monotonic or independent. We obtain the lower orthant RVaR based on Proposition 3.1: for $i, j = 1, 2$ ($i \ne j$),
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}^\Pi) = \frac{1}{\alpha_2 F_{X_i}(x_i) - \alpha_1}\int_{\alpha_1}^{\alpha_2 F_{X_i}(x_i)} \mathrm{VaR}_{u/F_{X_i}(x_i)}(X_j)\,du, \]
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}^M) = \frac{1}{F_{X_i}(x_i) - \alpha_1}\int_{\alpha_1}^{F_{X_i}(x_i)} \mathrm{VaR}_{u}(X_j)\,du, \]
and
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}^W) = \frac{1}{F_{X_i}(x_i) + \alpha_2 - 1 - \alpha_1}\int_{\alpha_1}^{F_{X_i}(x_i) + \alpha_2 - 1} \mathrm{VaR}_{u - F_{X_i}(x_i) + 1}(X_j)\,du. \]
Let the random vector above have exponential marginal cdfs, i.e. $X_j \sim \mathrm{Exp}(\lambda_j)$; then we obtain
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}^\Pi) = \frac{1}{\alpha_2 F_{X_i}(x_i) - \alpha_1}\cdot\frac{1}{\lambda_j}\left[ F_{X_i}(x_i)(1 - \alpha_2)\ln(1-\alpha_2) - (F_{X_i}(x_i) - \alpha_1)\ln\!\left(\frac{F_{X_i}(x_i) - \alpha_1}{F_{X_i}(x_i)}\right) + (\alpha_2 F_{X_i}(x_i) - \alpha_1)\right], \]
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}^M) = \frac{1}{F_{X_i}(x_i) - \alpha_1}\cdot\frac{1}{\lambda_j}\left[ (1 - F_{X_i}(x_i))\ln(1 - F_{X_i}(x_i)) - (1-\alpha_1)\ln(1-\alpha_1) + (F_{X_i}(x_i) - \alpha_1)\right], \]
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}^W) = \frac{1}{F_{X_i}(x_i) + \alpha_2 - 1 - \alpha_1}\cdot\frac{1}{\lambda_j}\left[ (1-\alpha_2)\ln(1-\alpha_2) - (F_{X_i}(x_i) - \alpha_1)\ln(F_{X_i}(x_i) - \alpha_1) + (F_{X_i}(x_i) + \alpha_2 - 1 - \alpha_1)\right]. \]

Now, we illustrate some examples of the multivariate RVaR in the context of EVT. We present closed-form expressions obtained in the independence case. Dependence between random variables is considered in Section 3.4.
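A hedged numerical sketch of Example 3.3: compute the three lower orthant RVaRs by averaging the conditional quantiles under the independence, comonotonic and counter-monotonic copulas, and check the ordering implied by Proposition 3.3. All parameter values are illustrative.

```python
import numpy as np

lam = 1.0          # X_j ~ Exp(lam), illustrative
Fi = 0.97          # assumed value of F_{X_i}(x_i), with Fi > a1
a1, a2 = 0.90, 0.99
m = 100_000

q = lambda p: -np.log1p(-p) / lam   # exponential quantile VaR_p(X_j)

def midgrid(lo, hi):
    """Midpoint grid on [lo, hi] for a simple Riemann average."""
    return lo + (np.arange(m) + 0.5) / m * (hi - lo)

# Example 3.3 integral forms of the lower orthant RVaR given x_i:
rvar_M = q(midgrid(a1, Fi)).mean()                            # comonotonic
rvar_P = q(midgrid(a1, a2 * Fi) / Fi).mean()                  # independence
rvar_W = q(midgrid(a1, Fi + a2 - 1.0) - Fi + 1.0).mean()      # counter-monotonic

print(rvar_M, rvar_P, rvar_W)   # ordering: comonotonic <= independent <= counter-monotonic
```

The printed values respect the ordering $\underline{\mathrm{RVaR}}(\mathbf{X}^M) \le \underline{\mathrm{RVaR}}(\mathbf{X}^\Pi) \le \underline{\mathrm{RVaR}}(\mathbf{X}^W)$ stated after Proposition 3.3.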
Example 3.4.
Assume $X_i \sim \mathrm{GEV}(\mu_i, \sigma_i, \xi_i)$ and $X_j \sim \mathrm{GEV}(\mu_j, \sigma_j, \xi_j)$. Let $0 \le \alpha \le 1$ and consider the independence copula $C(u,v) = uv$. Let $A = F_{X_i}(x_i)$, $B = F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))$ and $C = 1 - \bar F(x_i, \mathrm{VaR}_{\alpha_1}(X_j))$. Then,
\[ \underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}) = \begin{cases} \mu_j - \dfrac{\sigma_j}{\xi_j}\left[1 - \left(\ln\dfrac{A}{\alpha}\right)^{-\xi_j}\right], & \xi_i, \xi_j \ne 0, \\[1ex] \mu_j - \sigma_j \ln\left(\ln\dfrac{A}{\alpha}\right), & \xi_i = \xi_j = 0, \end{cases} \]
and
\[ \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}) = \begin{cases} \mu_j - \dfrac{\sigma_j}{\xi_j}\left[1 - \left(\ln\dfrac{1-A}{\alpha-A}\right)^{-\xi_j}\right], & \xi_i, \xi_j \ne 0, \\[1ex] \mu_j - \sigma_j \ln\left(\ln\dfrac{1-A}{\alpha-A}\right), & \xi_i = \xi_j = 0. \end{cases} \]
Then, using the above results, for $\xi_i, \xi_j \ne 0$ we obtain the multivariate lower and upper orthant RVaR, respectively represented by
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mu_j - \frac{\sigma_j}{\xi_j}\left[1 - \frac{A}{B-\alpha_1}\left[\Gamma\!\left(1-\xi_j, \ln\frac{A}{B}\right) - \Gamma\!\left(1-\xi_j, \ln\frac{A}{\alpha_1}\right)\right]\right], \]
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mu_j - \frac{\sigma_j}{\xi_j}\left[1 - \frac{1-A}{\alpha_2-C}\left[\Gamma\!\left(1-\xi_j, \ln\frac{1-A}{\alpha_2-A}\right) - \Gamma\!\left(1-\xi_j, \ln\frac{1-A}{C-A}\right)\right]\right], \]
while for $\xi_i = \xi_j = 0$,
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mu_j - \frac{\sigma_j}{B-\alpha_1}\left[B\ln\left(\ln\frac{A}{B}\right) - \alpha_1\ln\left(\ln\frac{A}{\alpha_1}\right)\right] - \frac{\sigma_j A}{B-\alpha_1}\left[\mathrm{Ei}\!\left(\ln\frac{\alpha_1}{A}\right) - \mathrm{Ei}\!\left(\ln\frac{B}{A}\right)\right], \]
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mu_j - \frac{\sigma_j}{\alpha_2-C}\left[(\alpha_2-A)\ln\left(\ln\frac{1-A}{\alpha_2-A}\right) - (C-A)\ln\left(\ln\frac{1-A}{C-A}\right)\right] - \frac{\sigma_j(1-A)}{\alpha_2-C}\left[\mathrm{Ei}\!\left(\ln\frac{C-A}{1-A}\right) - \mathrm{Ei}\!\left(\ln\frac{\alpha_2-A}{1-A}\right)\right], \]
where $\mathrm{Ei}(x) = -\int_{-x}^\infty \frac{e^{-t}}{t}\,dt$.

As a special case, when $\alpha_2 = 1$,
\[ \underline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X}) = \begin{cases} \mu_j - \dfrac{\sigma_j}{\xi_j}\left[1 - \dfrac{A}{A-\alpha}\left[\Gamma(1-\xi_j) - \Gamma\!\left(1-\xi_j, \ln\dfrac{A}{\alpha}\right)\right]\right], & \xi_i, \xi_j \ne 0, \\[1ex] \infty, & \xi_i = \xi_j = 0, \end{cases} \]
and
\[ \overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X}) = \begin{cases} \mu_j - \dfrac{\sigma_j}{\xi_j}\left[1 - \dfrac{1-A}{1-C}\left[\Gamma(1-\xi_j) - \Gamma\!\left(1-\xi_j, \ln\dfrac{1-A}{C-A}\right)\right]\right], & \xi_i, \xi_j \ne 0, \\[1ex] \infty, & \xi_i = \xi_j = 0. \end{cases} \]

Proof.
See Appendix A.
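A small consistency check of the lower orthant VaR in Example 3.4 under the independence copula. The parameter values and the assumed $A = F_{X_i}(x_i)$ are illustrative; scipy's `genextreme` uses the shape convention $c = -\xi$.

```python
import numpy as np
from scipy import stats

mu_j, sigma_j, xi_j = 0.0, 1.0, 0.2
Xj = stats.genextreme(c=-xi_j, loc=mu_j, scale=sigma_j)   # scipy shape is c = -xi
A = 0.97        # assumed value of A = F_{X_i}(x_i), with A > alpha
alpha = 0.95

# Closed form: VaR_{alpha, x_i}(X) = mu_j - sigma_j/xi_j * [1 - ln(A/alpha)^(-xi_j)].
var_closed = mu_j - sigma_j / xi_j * (1.0 - np.log(A / alpha) ** (-xi_j))

# Direct inversion: under independence, F(x_i, x_j) = A * F_j(x_j) = alpha.
var_direct = Xj.ppf(alpha / A)

print(var_closed, var_direct)
```

The two values coincide, and both exceed the univariate $\mathrm{VaR}_\alpha(X_j)$, illustrating that the conditional (orthant) quantile is more conservative than the marginal one.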
Proposition 3.4.
Let $\mathbf{X} = (X_1, X_2)$ be a pair of random variables with cdf $F_{\mathbf{X}}$ and marginal distributions $F_{X_1}$ and $F_{X_2}$. Assume that $F_{\mathbf{X}}$ is continuous and strictly increasing. Then, for $i, j = 1, 2$ and $i \ne j$,
\[ \lim_{x_i \to \underline{\mathrm{VaR}}_{\alpha_1, \mathrm{VaR}_{\alpha_2}(X_j)}(\mathbf{X})} \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mathrm{VaR}_{\alpha_2}(X_j), \qquad \lim_{x_i \to \overline{\mathrm{VaR}}_{\alpha_2, \mathrm{VaR}_{\alpha_1}(X_j)}(\mathbf{X})} \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mathrm{VaR}_{\alpha_1}(X_j). \]
Moreover,
\[ \lim_{x_i \to u_{x_i}} \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mathrm{RVaR}_{\alpha_1,\alpha_2}(X_j), \qquad \lim_{x_i \to l_{x_i}} \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \mathrm{RVaR}_{\alpha_1,\alpha_2}(X_j), \]
where $u_{x_i}$ (resp. $l_{x_i}$) denotes the upper (resp. lower) support endpoint of the r.v. $X_i$.

Proof. See Appendix A.

Now, we consider the behavior of aggregate risks defined as
\[ \mathbf{S} = \begin{pmatrix} S_1 \\ S_2 \end{pmatrix} = \sum_{i=1}^n \begin{pmatrix} X_i \\ Y_i \end{pmatrix}, \]
where $S_1$ and $S_2$ denote the aggregate claim amounts of two different business classes, and $X_i$ and $Y_i$, $i = 1, \ldots, n$, represent the risks within each class, such that $S_1 = \sum_{i=1}^n X_i$ and $S_2 = \sum_{i=1}^n Y_i$.

Unlike the univariate TVaR, the univariate RVaR does not satisfy subadditivity. Hence, it seems impossible to prove that the bivariate RVaR is subadditive. However, if we suppose that $(X_1, \ldots, X_n)$ (respectively $(Y_1, \ldots, Y_n)$) is comonotonic, the following results can be obtained.

Proposition 3.5.
Let $(X_1, \ldots, X_n)$ (respectively $(Y_1, \ldots, Y_n)$) be comonotonic with cdfs $F_{X_1}, \ldots, F_{X_n}$ (respectively $G_{Y_1}, \ldots, G_{Y_n}$). The dependence structure between $(X_1, \ldots, X_n)$ and $(Y_1, \ldots, Y_n)$ is unknown. Then,
\[ \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,s_1}(\mathbf{S}) = \sum_{i=1}^n \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(X_i, Y_i), \qquad \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,s_2}(\mathbf{S}) = \sum_{i=1}^n \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,y_i}(X_i, Y_i), \]
and
\[ \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,s_1}(\mathbf{S}) = \sum_{i=1}^n \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(X_i, Y_i), \qquad \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,s_2}(\mathbf{S}) = \sum_{i=1}^n \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,y_i}(X_i, Y_i). \]

Proof.
See Appendix A.

In conclusion, the bivariate RVaR has properties similar to those of the bivariate VaR and TVaR, such as translation invariance, positive homogeneity and monotonicity. Furthermore, it has an advantage over the bivariate VaR and TVaR. Compared to the bivariate VaR, the bivariate TVaR and RVaR provide essential information about the tail of the distribution. Moreover, $\underline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X})$ and $\overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X})$ go to infinity when $x_i$ approaches $\mathrm{VaR}_{\alpha}(X_i)$, whereas the bivariate RVaR is bounded in the area $[\mathrm{VaR}_{\alpha_1}(X_1), \mathrm{VaR}_{\alpha_2}(X_1)] \times [\mathrm{VaR}_{\alpha_1}(X_2), \mathrm{VaR}_{\alpha_2}(X_2)]$. This measure could be useful for insurance companies that must set aside capital for risks that are transferred to a reinsurer after reaching a certain level. Assume that the insurance company transfers the risks to the reinsurer when the total losses exceed the VaR at level $\alpha$. Then, to comply with solvency capital requirements, the insurance company needs to measure the risks with truncated data. In this case, the multivariate RVaR could be helpful.

We now check the robustness of the estimator of the bivariate RVaR. Since RVaR is distribution-based, the sensitivity function can be used to quantify the robustness.

Proposition 3.6.
For a pair of continuous random variables $\mathbf{X}$ with joint cdf $F(x_1, x_2)$ and marginals $F_{X_1}(x_1)$ and $F_{X_2}(x_2)$, the sensitivity function of $\underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})$ is given by
\[ S(z) = \begin{cases} -\dfrac{F_{X_i}(x_i) - \alpha}{f_{x_i}\!\left[\underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})\right] F_{X_i}(x_i)}, & z < \underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[1.5ex] \dfrac{\alpha}{f_{x_i}\!\left[\underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})\right] F_{X_i}(x_i)}, & z > \underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[1.5ex] 0, & \text{otherwise}, \end{cases} \]
which is bounded. Thus, $\underline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})$ is a robust risk measure.

Proof. See Appendix A.
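The same finite-$\varepsilon$ recipe from Definition 2.2 can be used to probe robustness numerically. The sketch below does this for the univariate RVaR under $F = \mathrm{Exp}(1)$ (whose bounded sensitivity is established by Cont et al. (2010)), approximating $S(z) \approx (\rho(F_\varepsilon) - \rho(F))/\varepsilon$; all numeric choices are illustrative.

```python
import numpy as np

a1, a2, eps, m = 0.90, 0.99, 1e-4, 200_000
u = a1 + (np.arange(m) + 0.5) / m * (a2 - a1)   # midpoint grid on [a1, a2]

def rvar_mixture(z, e):
    """RVaR_{a1,a2} of F_e = e*delta_z + (1-e)*Exp(1), by averaging quantiles."""
    Fz = 1.0 - np.exp(-z)          # Exp(1) cdf at the contamination point z
    lo = (1.0 - e) * Fz            # F_e just below the atom at z
    q = np.where(u < lo, -np.log1p(-u / (1.0 - e)),          # below the atom
        np.where(u <= lo + e, z,                             # at the atom
                 -np.log1p(-(u - e) / (1.0 - e))))           # above the atom
    return q.mean()

base = rvar_mixture(0.0, 0.0)      # e = 0 recovers RVaR of Exp(1) itself
sens = {z: (rvar_mixture(z, eps) - base) / eps for z in (10.0, 50.0)}
print(sens)   # roughly equal values: the sensitivity is flat beyond VaR_{a2}
```

The approximate sensitivity stops depending on $z$ once $z$ exceeds $\mathrm{VaR}_{a_2}$, which is the numerical signature of boundedness; an unbounded-sensitivity measure (such as the mean) would instead grow linearly in $z$.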
Proposition 3.7.
For a pair of continuous random variables $\mathbf{X}$ with joint cdf $F(x_1, x_2)$ and marginals $F_{X_1}(x_1)$ and $F_{X_2}(x_2)$, let $A = F_{X_i}(x_i)$ and $B = F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))$. Then the sensitivity function of $\underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ is given by $S(z) = S'(z) - \underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$, where
\[ S'(z) = \begin{cases} \dfrac{(A - \alpha_1)\,\underline{\mathrm{VaR}}_{\alpha_1,x_i}(\mathbf{X}) - (A - B)\,\mathrm{VaR}_{\alpha_2}(X_j)}{B - \alpha_1}, & z < \underline{\mathrm{VaR}}_{\alpha_1,x_i}(\mathbf{X}), \\[1.5ex] \dfrac{zA - \alpha_1\,\underline{\mathrm{VaR}}_{\alpha_1,x_i}(\mathbf{X}) - (A - B)\,\mathrm{VaR}_{\alpha_2}(X_j)}{B - \alpha_1}, & \underline{\mathrm{VaR}}_{\alpha_1,x_i}(\mathbf{X}) \le z \le \mathrm{VaR}_{\alpha_2}(X_j), \\[1.5ex] \dfrac{B\,\mathrm{VaR}_{\alpha_2}(X_j) - \alpha_1\,\underline{\mathrm{VaR}}_{\alpha_1,x_i}(\mathbf{X})}{B - \alpha_1}, & z > \mathrm{VaR}_{\alpha_2}(X_j), \end{cases} \]
is a bounded function. Thus, $\underline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ is robust.

Proof. See Appendix A.

Proposition 3.8.
For a pair of continuous random variables $\mathbf{X}$ with joint survival function $\bar F(x_1, x_2)$ and marginals $F_{X_1}(x_1)$ and $F_{X_2}(x_2)$, the sensitivity function of $\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})$ is given by
\[ S(z) = \begin{cases} -\dfrac{1 - \alpha}{\bar f_{x_i}\!\left[\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})\right]\left(1 - F_{X_i}(x_i)\right)}, & z < \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[1.5ex] \dfrac{\alpha - F_{X_i}(x_i)}{\bar f_{x_i}\!\left[\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})\right]\left(1 - F_{X_i}(x_i)\right)}, & z > \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[1.5ex] 0, & z = \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}). \end{cases} \]
The bounded sensitivity function implies that $\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})$ is a robust risk measure.

Proof. See Appendix A.
Proposition 3.9.
For a pair of continuous random variables $\mathbf{X}$ with joint survival function $\bar F(x_1, x_2)$ and marginals $F_{X_1}(x_1)$ and $F_{X_2}(x_2)$, let $A = F_{X_i}(x_i)$ and $C = 1 - \bar F(x_i, \mathrm{VaR}_{\alpha_1}(X_j))$. Then the sensitivity function of $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ is given by $S(z) = S'(z) - \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$, where
\[ S'(z) = \begin{cases} \dfrac{(1-C)\,\mathrm{VaR}_{\alpha_1}(X_j) - (1-\alpha_2)\,\overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X})}{\alpha_2 - C}, & z < \mathrm{VaR}_{\alpha_1}(X_j), \\[1.5ex] \dfrac{z(1-A) - (C-A)\,\mathrm{VaR}_{\alpha_1}(X_j) - (1-\alpha_2)\,\overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X})}{\alpha_2 - C}, & \mathrm{VaR}_{\alpha_1}(X_j) \le z \le \overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}), \\[1.5ex] \dfrac{(\alpha_2-A)\,\overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}) - (C-A)\,\mathrm{VaR}_{\alpha_1}(X_j)}{\alpha_2 - C}, & z > \overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}). \end{cases} \]
The boundedness of this function proves that $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ is robust.

Proof. See Appendix A.

3.4 Empirical Estimator for Multivariate Lower and Upper Orthant RVaR
Next, we propose empirical estimators for the lower and upper orthant RVaR, based on the estimators developed by Beck (2015), and provide numerical examples.
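Definition 3.3 below formalizes the lower orthant estimator: average the empirical conditional quantiles over an equally spaced grid. As a preview, here is a minimal bivariate sketch on simulated data; the marginals, sample size and fixed $x_1$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4000, 250
a1, a2 = 0.95, 0.99
x1 = rng.pareto(3.0, n) + 1.0    # illustrative heavy-tailed marginals
x2 = rng.pareto(3.0, n) + 1.0

def empirical_lower_rvar(x1_fixed):
    """Average the empirical conditional quantiles over an equally spaced grid."""
    var_a2 = np.quantile(x2, a2)                 # empirical VaR_{a2}(X2)
    mask1 = x1 <= x1_fixed
    B = np.mean(mask1 & (x2 <= var_a2))          # F_n(x1_fixed, VaR_{a2}(X2))
    if B <= a1:
        raise ValueError("x1_fixed too small: need F_n(x1, VaR_a2(X2)) > a1")
    u_grid = a1 + np.arange(1, m + 1) / m * (B - a1)
    order = np.argsort(x2)
    xs = x2[order]
    Fn = np.cumsum(mask1[order]) / n             # F_n(x1_fixed, s) along sorted x2
    # Empirical VaR_{u, x1}: smallest x2-value with F_n(x1_fixed, x2) >= u.
    return float(xs[np.searchsorted(Fn, u_grid)].mean())

print(empirical_lower_rvar(np.quantile(x1, 0.999)))
```

The estimate sits between the empirical 95% quantile of $X_2$ and its sample maximum, consistent with the bounds discussed after Example 3.2.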
Definition 3.3.
Consider a series of observations $\mathbf{X} = (\mathbf{X}_1, \ldots, \mathbf{X}_d)$ with $\mathbf{X}_i = (x_{1i}, \ldots, x_{ni})$ and $\mathbf{X}_{\setminus i} = (\mathbf{X}_1, \ldots, \mathbf{X}_{i-1}, \mathbf{X}_{i+1}, \ldots, \mathbf{X}_d)$, $i = 1, \ldots, d$. Denote by $F_n$ and $F_{n,\setminus i}$ the empirical cdfs (ecdfs) of $\mathbf{X}$ and $\mathbf{X}_{\setminus i}$, $i = 1, \ldots, d$, respectively. We define the estimator of the lower orthant RVaR for fixed $\mathbf{x}_{\setminus i}$, $i = 1, \ldots, d$, by
\[ \underline{\mathrm{RVaR}}^n_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \frac{1}{F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} \int_{\alpha_1}^{F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i))} \underline{\mathrm{VaR}}^n_{u,\mathbf{x}_{\setminus i}}(\mathbf{X})\,du. \]
For $m \in \mathbb{N}$ large enough, let $s = \frac{F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1}{m}$ and $u_k = \alpha_1 + ks$; then the above expression can be approximated by
\[ \underline{\mathrm{RVaR}}^n_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \sum_{k=1}^m \underline{\mathrm{VaR}}^n_{u_k,\mathbf{x}_{\setminus i}}(\mathbf{X}) \cdot \frac{s}{F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} = \sum_{k=1}^m \frac{\underline{\mathrm{VaR}}^n_{u_k,\mathbf{x}_{\setminus i}}(\mathbf{X})}{m}, \]
where $\underline{\mathrm{VaR}}^n_{u,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \inf\{x_i \in \mathbb{R}_+ : F_{n,\mathbf{x}_{\setminus i}}(x_i) \ge u\}$ is the empirical lower orthant VaR for a given $\mathbf{x}_{\setminus i}$ and $F_{n,\mathbf{x}_{\setminus i}}$ is the ecdf of $\mathbf{X}$ given the same $\mathbf{x}_{\setminus i}$.

Note that $\underline{\mathrm{VaR}}^n_{u,\mathbf{x}_{\setminus i}}(\mathbf{X})$ is the smallest value of $X_i$ given $\mathbf{x}_{\setminus i}$ such that $F_n$ is larger than $u$. Similarly, we define the empirical estimator of the upper orthant RVaR as follows.

Definition 3.4.
Consider a series of observations $\mathbf{X} = (\mathbf{X}_1, \ldots, \mathbf{X}_d)$ with $\mathbf{X}_i = (x_{1i}, \ldots, x_{ni})$ and $\mathbf{X}_{\setminus i} = (\mathbf{X}_1, \ldots, \mathbf{X}_{i-1}, \mathbf{X}_{i+1}, \ldots, \mathbf{X}_d)$, $i = 1, \ldots, d$. Denote by $\bar F_n$ and $\bar F_{n,\setminus i}$ the empirical survival functions of $\mathbf{X}$ and $\mathbf{X}_{\setminus i}$, $i = 1, \ldots, d$, respectively. We define the estimator of the upper orthant RVaR for fixed $\mathbf{x}_{\setminus i}$, $i = 1, \ldots, d$, by
\[ \overline{\mathrm{RVaR}}^n_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \frac{1}{\alpha_2 - \left(1 - \bar F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i))\right)} \int_{1 - \bar F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i))}^{\alpha_2} \overline{\mathrm{VaR}}^n_{u,\mathbf{x}_{\setminus i}}(\mathbf{X})\,du. \]
For $m \in \mathbb{N}$ large enough, let $s = \frac{\alpha_2 - \left(1 - \bar F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i))\right)}{m}$ and $v_k = 1 - \bar F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i)) + ks$; then the above expression can be approximated by
\[ \overline{\mathrm{RVaR}}^n_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \sum_{k=1}^m \overline{\mathrm{VaR}}^n_{v_k,\mathbf{x}_{\setminus i}}(\mathbf{X}) \cdot \frac{s}{\alpha_2 - \left(1 - \bar F_n(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_1}(X_i))\right)} = \sum_{k=1}^m \frac{\overline{\mathrm{VaR}}^n_{v_k,\mathbf{x}_{\setminus i}}(\mathbf{X})}{m}, \]
where $\overline{\mathrm{VaR}}^n_{v,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \inf\{x_i \in \mathbb{R}_+ : \bar F_{n,\mathbf{x}_{\setminus i}}(x_i) \le 1 - v\}$ is the empirical upper orthant VaR given $\mathbf{x}_{\setminus i}$ and $\bar F_{n,\mathbf{x}_{\setminus i}}$ is the empirical survival function of $\mathbf{X}$ given the same $\mathbf{x}_{\setminus i}$.

The following proposition, based on the proof of the consistency of the bivariate VaR by Cousin and Di Bernardino (2013), shows the consistency of the bivariate RVaR in the Hausdorff distance. For $\alpha \in (0,1)$ and $r, \zeta > 0$, consider the ball
\[ E = B\left(\{\mathbf{x} \in \mathbb{R}^2 : |F(\mathbf{x}) - \alpha| \le r\}, \zeta\right). \]
Denote by $m_\nabla = \inf_{\mathbf{x} \in E} \|(\nabla F)_{\mathbf{x}}\|$ the infimum of the Euclidean norm of the gradient vector and by $M_H = \sup_{\mathbf{x} \in E} \|(HF)_{\mathbf{x}}\|$ the matrix norm of the Hessian matrix evaluated at $\mathbf{x}$, for a twice differentiable $F(x_1, x_2)$.

Proposition 3.10.
Let $[\alpha_1, \alpha_2] \subset (0,1)$ and let $F(x_1, x_2)$ be twice differentiable on $\mathbb{R}^2_+$. Assume there exist $r, \zeta > 0$ such that $m_\nabla > 0$ and $M_H < \infty$. Assume that, for each $n$, $F_n$ is continuous with probability one (wp1) and $\|F - F_n\| \xrightarrow[n \to \infty]{wp1} 0$. Also, let $F_{n,i}$ be a consistent estimator of $F_i$. Then, we have
\[
\mathrm{RVaR}^n_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) \xrightarrow[n \to \infty]{wp1} \mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}).
\]

Proof.
See Appendix A.

A simulation study is performed to compare the empirical estimators to the theoretical lower and upper orthant RVaR. Marginally, the random variables follow GEV distributions. The dependence is represented by an independence copula. 50 simulations are performed for samples of 4000 observations from each marginal distribution and for $m = 250$. The results of the simulation are presented in Figure 3. As shown, the differences between the theoretical values and their empirical estimates are negligible. This can be attributed to the robustness and consistency of the empirical estimators of VaR and RVaR. The accuracy of the estimates improves with the sample size and the value of $m$.

Figure 3: (a) Lower orthant VaR at level 0.95 and RVaR at level range [0.95, 0.99] for fixed values of the conditioning variable, for GEV samples where $\xi_1, \xi_2 \neq 0$, and (b) lower orthant VaR at level 0.95 and RVaR at level range [0.95, 0.99] for fixed values of the conditioning variable, for GEV samples where $\xi_1 = \xi_2 = 0$.

Conclusion

In this paper, we introduce the multivariate extension of the RVaR risk measure, which results in a tool for assessing dependent risks. This tool is particularly useful for heavy-tailed distributions. Similar to its univariate counterpart, the multivariate lower and upper orthant RVaR are defined as the conditional expectation of the lower and upper orthant VaR for large confidence levels. Their properties are discussed, such as translation invariance, positive homogeneity and monotonicity. Subadditivity can be satisfied for aggregated risks if each risk class is comonotonic. Moreover, we develop the resulting measures for specific extreme value distributions. The method of sensitivity functions to study robustness is extended to distribution-based multivariate RVaR. Finally, empirical estimators of the multivariate RVaR are proposed, and their robustness and consistency are confirmed. Furthermore, the simulations illustrate the accuracy of the empirical estimators without the need to assume any statistical distribution. RVaR may be extremely relevant in instances where the loss distribution has an infinite mean, which results in an infinite TVaR.
Acknowledgements
Mélina Mailhot acknowledges financial support from the Natural Sciences and Engineering Research Council. Roba Bairakdar acknowledges financial support from the Society of Actuaries Hickman Scholar Program.
A Proofs
Proposition 2.1
Proof.
Finding VaR is straightforward by inverting $F(\mathrm{VaR}_\alpha(X)) = \alpha$, where $F(x)$ is the three-parameter GEV distribution. RVaR can be derived from its definition as follows:
\[
\mathrm{RVaR}_{\alpha_1,\alpha_2}(X) =
\begin{cases}
\dfrac{1}{\alpha_2 - \alpha_1} \displaystyle\int_{\alpha_1}^{\alpha_2} \mu - \dfrac{\sigma}{\xi}\left[1 - \{-\ln w\}^{-\xi}\right] \mathrm{d}w, & \xi \neq 0, \\[2mm]
\dfrac{1}{\alpha_2 - \alpha_1} \displaystyle\int_{\alpha_1}^{\alpha_2} \mu - \sigma \ln\{-\ln w\} \, \mathrm{d}w, & \xi = 0,
\end{cases}
\]
\[
= \begin{cases}
\mu - \dfrac{\sigma}{\xi(\alpha_2 - \alpha_1)}\left[(\alpha_2 - \alpha_1) - \Gamma(1 - \xi, -\ln \alpha_2) + \Gamma(1 - \xi, -\ln \alpha_1)\right], & \xi \neq 0, \\[2mm]
\mu - \dfrac{\sigma}{\alpha_2 - \alpha_1}\left[\alpha_2 \ln(-\ln \alpha_2) - \alpha_1 \ln(-\ln \alpha_1) - \mathrm{li}(\alpha_2) + \mathrm{li}(\alpha_1)\right], & \xi = 0,
\end{cases}
\]
where $\mathrm{li}(x) = \int_0^x \mathrm{d}t/\ln t$ is the logarithmic integral for $0 < x < 1$, which diverges at $x = 1$. Thus, TVaR diverges for $\xi = 0$. TVaR can be directly derived as a special case of RVaR, when $\alpha_2 = 1$.

Proposition 2.2

Proof.
When $\xi \neq 0$, we have that
\begin{align*}
\lim_{\alpha_2 \to 1} \frac{\mathrm{RVaR}_{\alpha_1,\alpha_2}(X)}{\mathrm{VaR}_{\alpha_1}(X)}
&= \lim_{\alpha_2 \to 1} \frac{\mu - \frac{\sigma}{\xi} - \frac{\sigma}{\xi(\alpha_2 - \alpha_1)}\left[-\Gamma(1-\xi, -\ln\alpha_2) + \Gamma(1-\xi, -\ln\alpha_1)\right]}{\mu - \frac{\sigma}{\xi}\left[1 - (-\ln\alpha_1)^{-\xi}\right]} \\
&= \frac{\mu - \frac{\sigma}{\xi(1 - \alpha_1)}\left[(1-\alpha_1) - \Gamma(1-\xi) + \Gamma(1-\xi, -\ln\alpha_1)\right]}{\mu - \frac{\sigma}{\xi}\left[1 - (-\ln\alpha_1)^{-\xi}\right]}
= \frac{\mathrm{TVaR}_{\alpha_1}(X)}{\mathrm{VaR}_{\alpha_1}(X)}.
\end{align*}
Then,
\[
\lim_{\alpha_1 \to 1} \frac{\mathrm{TVaR}_{\alpha_1}(X)}{\mathrm{VaR}_{\alpha_1}(X)}
= \lim_{\alpha_1 \to 1} \frac{\mu - \frac{\sigma}{\xi(1-\alpha_1)}\left[(1-\alpha_1) - \Gamma(1-\xi) + \Gamma(1-\xi, -\ln\alpha_1)\right]}{\mu - \frac{\sigma}{\xi}\left[1 - (-\ln\alpha_1)^{-\xi}\right]}
= \begin{cases}
(1-\xi)^{-1}, & \xi > 0, \\
1, & \xi < 0.
\end{cases}
\]

Proposition 2.3
Proof.
Given that $F \in \mathrm{MDA}(H_\xi)$, we can make use of Theorem 2.2, such that for a high threshold $u$ we can model $F_u$ by a GPD, where we assume that $\xi \neq 0$. Note that for $x \geq u$, we have that
\[
P(X > x \mid X > u) = \frac{P(X > x)}{P(X > u)} = \frac{\bar{F}(x)}{\bar{F}(u)}.
\]
Let $0 \leq \alpha_1 \leq \alpha_2 \leq 1$. Then, for any value of $\xi$, we have that
\begin{align*}
\mathrm{RVaR}_{\alpha_1,\alpha_2}(X) &= \frac{1}{\alpha_2 - \alpha_1} \int_{\alpha_1}^{\alpha_2} \left[u + \frac{\sigma}{\xi}\left(\left(\frac{1-w}{\zeta_u}\right)^{-\xi} - 1\right)\right] \mathrm{d}w \\
&= \frac{1}{(\alpha_2 - \alpha_1)(1-\xi)} \left[(1-\alpha_1)\frac{\sigma}{\xi}\left(\frac{1-\alpha_1}{\zeta_u}\right)^{-\xi} - (1-\alpha_2)\frac{\sigma}{\xi}\left(\frac{1-\alpha_2}{\zeta_u}\right)^{-\xi} + \left(u - \frac{\sigma}{\xi}\right)(\alpha_2 - \alpha_1)(1-\xi)\right] \\
&= \frac{(1-\alpha_1)\mathrm{VaR}_{\alpha_1}(X) - (1-\alpha_2)\mathrm{VaR}_{\alpha_2}(X)}{(\alpha_2 - \alpha_1)(1-\xi)} + \frac{\sigma - \xi u}{1-\xi}.
\end{align*}
If we assume that $\xi < 1$, then TVaR can be obtained directly from its definition, as
\[
\mathrm{TVaR}_{\alpha}(X) = \frac{1}{1-\alpha} \int_{\alpha}^{1} \left[u + \frac{\sigma}{\xi}\left(\left(\frac{1-\omega}{\zeta_u}\right)^{-\xi} - 1\right)\right] \mathrm{d}\omega = \frac{\mathrm{VaR}_{\alpha}(X)}{1-\xi} + \frac{\sigma - \xi u}{1-\xi}.
\]
Note that if $\xi \geq 1$, the integral does not converge.
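Both closed forms can be checked against direct numerical integration of the corresponding quantile function. The sketch below does this for the GEV expression of Proposition 2.1 (through a quadrature of the upper incomplete gamma function) and for the GPD-threshold expression above; all parameter values are illustrative assumptions.

```python
import numpy as np

def trap(y, x):                            # trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def upper_gamma(s, x):                     # Gamma(s, x) = int_x^inf t^(s-1) e^(-t) dt
    t = np.linspace(x, x + 60.0, 400001)
    return trap(t ** (s - 1.0) * np.exp(-t), t)

a1, a2 = 0.95, 0.99
w = np.linspace(a1, a2, 200001)

# GEV with xi != 0 (mu, sigma, xi assumed illustrative)
mu, sigma, xi = 0.0, 1.0, 0.3
var_gev = lambda p: mu - sigma / xi * (1.0 - (-np.log(p)) ** (-xi))
gev_direct = trap(var_gev(w), w) / (a2 - a1)           # definition of RVaR
gev_closed = mu - sigma / (xi * (a2 - a1)) * (
    (a2 - a1) - upper_gamma(1 - xi, -np.log(a2)) + upper_gamma(1 - xi, -np.log(a1)))

# GPD tail above a threshold u0, with zeta_u = P(X > u0); valid for levels >= 1 - zeta_u
u0, sigma_u, zeta_u = 3.0, 1.0, 0.05
var_gpd = lambda a: u0 + sigma_u / xi * (((1.0 - a) / zeta_u) ** (-xi) - 1.0)
gpd_direct = trap(var_gpd(w), w) / (a2 - a1)
gpd_closed = ((1 - a1) * var_gpd(a1) - (1 - a2) * var_gpd(a2)) / ((a2 - a1) * (1 - xi)) \
             + (sigma_u - xi * u0) / (1 - xi)
# each direct/closed pair agrees to numerical precision
```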
Proposition 2.4
Proof.
We first observe that
\[
\lim_{\alpha \to 1} \mathrm{VaR}_\alpha(X) = \lim_{\alpha \to 1} \left[u + \frac{\sigma}{\xi}\left(\left(\frac{1-\alpha}{\zeta_u}\right)^{-\xi} - 1\right)\right]
= \begin{cases}
\infty, & \xi \geq 0, \\
u - \dfrac{\sigma}{\xi}, & \xi < 0.
\end{cases}
\]
Thus,
\[
\lim_{\alpha_2 \to 1} \frac{\mathrm{RVaR}_{\alpha_1,\alpha_2}(X)}{\mathrm{VaR}_{\alpha_1}(X)}
= \lim_{\alpha_2 \to 1} \left[\frac{1-\alpha_1}{(\alpha_2 - \alpha_1)(1-\xi)} - \frac{(1-\alpha_2)\mathrm{VaR}_{\alpha_2}(X)}{(\alpha_2 - \alpha_1)(1-\xi)\,\mathrm{VaR}_{\alpha_1}(X)} + \frac{\sigma - \xi u}{(1-\xi)\mathrm{VaR}_{\alpha_1}(X)}\right]
= \frac{\mathrm{TVaR}_{\alpha_1}(X)}{\mathrm{VaR}_{\alpha_1}(X)},
\]
and
\[
\lim_{\alpha \to 1} \frac{\mathrm{TVaR}_{\alpha}(X)}{\mathrm{VaR}_{\alpha}(X)}
= \lim_{\alpha \to 1} \left[\frac{1}{1-\xi} + \frac{\sigma - \xi u}{(1-\xi)\mathrm{VaR}_{\alpha}(X)}\right]
= \begin{cases}
(1-\xi)^{-1}, & \xi \geq 0, \\
1, & \xi < 0.
\end{cases}
\]

Proposition 3.1
Proof.
Let $F_{\mathbf{x}_{\setminus i}}(x_i) = \Pr(X_i \leq x_i \mid \mathbf{X}_{\setminus i} \leq \mathbf{x}_{\setminus i})$ and let $F_{\setminus i}(\mathbf{x}_{\setminus i}) = \Pr(\mathbf{X}_{\setminus i} \leq \mathbf{x}_{\setminus i})$. Then
\begin{align*}
\mathrm{RVaR}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X})
&= E\!\left[X_i \mid \mathrm{VaR}_{\alpha_1,\mathbf{x}_{\setminus i}}(\mathbf{X}) \leq X_i \leq \mathrm{VaR}_{\alpha_2}(X_i),\ \mathbf{X}_{\setminus i} \leq \mathbf{x}_{\setminus i}\right] \\
&= \frac{\displaystyle\int_{\mathrm{VaR}_{\alpha_1,\mathbf{x}_{\setminus i}}(\mathbf{X})}^{\mathrm{VaR}_{\alpha_2}(X_i)} x_i \, \mathrm{d}F_{\mathbf{x}_{\setminus i}}(x_i)}{\dfrac{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i))}{F_{\setminus i}(\mathbf{x}_{\setminus i})} - \dfrac{\alpha_1}{F_{\setminus i}(\mathbf{x}_{\setminus i})}}
= \frac{F_{\setminus i}(\mathbf{x}_{\setminus i})}{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} \int_{\mathrm{VaR}_{\alpha_1,\mathbf{x}_{\setminus i}}(\mathbf{X})}^{\mathrm{VaR}_{\alpha_2}(X_i)} x_i \, \mathrm{d}F_{\mathbf{x}_{\setminus i}}(x_i).
\end{align*}
Note that one has $\mathrm{VaR}_{\alpha,\mathbf{x}_{\setminus i}}(\mathbf{X}) = \mathrm{VaR}_{\alpha / F_{\setminus i}(\mathbf{x}_{\setminus i})}(X_i \mid \mathbf{X}_{\setminus i} \leq \mathbf{x}_{\setminus i})$. Then, by letting $u = F_{\mathbf{x}_{\setminus i}}(x_i)$,
\begin{align*}
\mathrm{RVaR}_{\alpha_1,\alpha_2,\mathbf{x}_{\setminus i}}(\mathbf{X})
&= \frac{F_{\setminus i}(\mathbf{x}_{\setminus i})}{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} \int_{\alpha_1 / F_{\setminus i}(\mathbf{x}_{\setminus i})}^{F_{\mathbf{x}_{\setminus i}}(\mathrm{VaR}_{\alpha_2}(X_i))} F^{-1}_{\mathbf{x}_{\setminus i}}(u) \, \mathrm{d}u \\
&= \frac{1}{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} \int_{\alpha_1}^{F_{\mathbf{x}_{\setminus i}}(\mathrm{VaR}_{\alpha_2}(X_i))\, F_{\setminus i}(\mathbf{x}_{\setminus i})} F^{-1}_{\mathbf{x}_{\setminus i}}\!\left(\frac{u}{F_{\setminus i}(\mathbf{x}_{\setminus i})}\right) \mathrm{d}u \\
&= \frac{1}{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i)) - \alpha_1} \int_{\alpha_1}^{F(\mathbf{x}_{\setminus i}, \mathrm{VaR}_{\alpha_2}(X_i))} \mathrm{VaR}_{u,\mathbf{x}_{\setminus i}}(\mathbf{X}) \, \mathrm{d}u.
\end{align*}

Proposition 3.2

Proof.
This follows the same reasoning as for Proposition 3.1.
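The change of variables in Proposition 3.1 can be illustrated numerically. Under an assumed toy model with independent $U(0,1)$ margins (so $F(x_1, x_2) = x_1 x_2$), the truncated conditional expectation and the average of the conditional lower-orthant VaR coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)

x1_fix, a1, a2 = 0.9, 0.8, 0.95
A = x1_fix                      # F_{X1}(x1_fix) for a U(0,1) margin
B = A * a2                      # F(x1_fix, VaR_{a2}(X2)) = A * a2 under independence
var_low = a1 / A                # VaR_{a1,x1_fix}(X): smallest t with A * t >= a1

# left-hand side: E[X2 | VaR_{a1,x1_fix}(X) <= X2 <= VaR_{a2}(X2), X1 <= x1_fix]
sel = (x1 <= x1_fix) & (x2 >= var_low) & (x2 <= a2)
lhs = x2[sel].mean()

# right-hand side: average of VaR_{u,x1_fix}(X) = u / A over u in [a1, B]
u = np.linspace(a1, B, 100001)
rhs = np.mean(u / A)
# lhs ≈ rhs, both equal to (a1/A + a2)/2 for this toy model
```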
Proposition 3.3
Proof.
We invite the reader to refer to Cossette et al. (2013) for the properties of the bivariate VaR needed to prove the results.
Example 3.4
Proof.
Consider the independence copula $C(u,v) = uv$ and let $A = F_{X_i}(x_i)$, $B = F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))$ and $C = 1 - \bar{F}(x_i, \mathrm{VaR}_{\alpha_1}(X_j))$. Choose $x_i$ such that $F_{X_i}(x_i) \geq \alpha_1$. Then, for $\xi_1 \neq 0$, $\xi_2 \neq 0$, we have that
\begin{align*}
\mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) &= \frac{1}{B - \alpha_1} \int_{\alpha_1}^{B} \mathrm{VaR}_{u,x_i}(\mathbf{X}) \, \mathrm{d}u
= \frac{1}{B - \alpha_1} \int_{\alpha_1}^{B} \mu_j - \frac{\sigma_j}{\xi_j}\left[1 - (\ln A - \ln u)^{-\xi_j}\right] \mathrm{d}u \\
&= \mu_j - \frac{\sigma_j}{\xi_j}\left[1 - \frac{A}{B - \alpha_1}\left[\Gamma\!\left(1 - \xi_j, \ln\frac{A}{B}\right) - \Gamma\!\left(1 - \xi_j, \ln\frac{A}{\alpha_1}\right)\right]\right],
\end{align*}
and for $\xi_1 = \xi_2 = 0$, we have
\begin{align*}
\mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) &= \frac{1}{B - \alpha_1} \int_{\alpha_1}^{B} \mu_j - \sigma_j \ln\left[\ln\left(\frac{A}{u}\right)\right] \mathrm{d}u
= \mu_j - \frac{\sigma_j}{B - \alpha_1}\left[u \ln\left(\ln\frac{A}{u}\right) - A\,\mathrm{Ei}\!\left(-\ln\frac{A}{u}\right)\right]_{\alpha_1}^{B} \\
&= \mu_j - \frac{\sigma_j}{B - \alpha_1}\left[B \ln\left(\ln\frac{A}{B}\right) - \alpha_1 \ln\left(\ln\frac{A}{\alpha_1}\right)\right] + \frac{\sigma_j A}{B - \alpha_1}\left[\mathrm{Ei}\!\left(-\ln\frac{A}{B}\right) - \mathrm{Ei}\!\left(-\ln\frac{A}{\alpha_1}\right)\right],
\end{align*}
where $\mathrm{Ei}(x) = -\int_{-x}^{\infty} \frac{e^{-t}}{t} \, \mathrm{d}t$.

Also, for $\xi_1 \neq 0$, $\xi_2 \neq 0$, we have
\[
\mathrm{TVaR}_{\alpha,x_i}(\mathbf{X}) = \frac{1}{A - \alpha} \int_{\alpha}^{A} \mu_j - \frac{\sigma_j}{\xi_j}\left[1 - \left(\ln\frac{A}{u}\right)^{-\xi_j}\right] \mathrm{d}u
= \mu_j - \frac{\sigma_j}{\xi_j}\left[1 - \frac{A}{A - \alpha}\left[\Gamma(1 - \xi_j) - \Gamma\!\left(1 - \xi_j, \ln\frac{A}{\alpha}\right)\right]\right],
\]
and for $\xi_1 = \xi_2 = 0$, we have
\[
\mathrm{TVaR}_{\alpha,x_i}(\mathbf{X}) = \frac{1}{A - \alpha} \int_{\alpha}^{A} \mu_j - \sigma_j \ln\left[\ln\left(\frac{A}{u}\right)\right] \mathrm{d}u = \infty.
\]
An analogous reasoning applies for the results of $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ and $\overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X})$.

Proposition 3.4
Proof.
One has that
\[
\lim_{x_i \to \mathrm{VaR}_{\alpha_1, \mathrm{VaR}_{\alpha_2}(X_j)}(\mathbf{X})} \mathrm{VaR}_{\alpha_1, x_i}(\mathbf{X}) = \mathrm{VaR}_{\alpha_2}(X_j).
\]
Thus, integrating this constant on the interval $[\alpha_1, F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]$ results in $\mathrm{VaR}_{\alpha_2}(X_j)$. Similarly, we can prove that when $x_i$ approaches the upper bound $\mathrm{VaR}_{\alpha_1, \mathrm{VaR}_{\alpha_2}(X_j)}(\mathbf{X})$, $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ approaches $\mathrm{VaR}_{\alpha_2}(X_j)$. Furthermore, denoting by $u_{x_i}$ the upper endpoint of the support of $X_i$, we have that
\[
\lim_{x_i \to u_{x_i}} \mathrm{VaR}_{u,x_i}(\mathbf{X}) = \mathrm{VaR}_u(X_j).
\]
Since $F(u_{x_i}, \mathrm{VaR}_{\alpha_2}(X_j)) = \alpha_2$, we get the result that
\[
\lim_{x_i \to u_{x_i}} \mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \frac{1}{\alpha_2 - \alpha_1} \int_{\alpha_1}^{\alpha_2} \mathrm{VaR}_u(X_j) \, \mathrm{d}u = \mathrm{RVaR}_{\alpha_1,\alpha_2}(X_j).
\]
Similarly, we can prove that the limit of $\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X})$ is also $\mathrm{RVaR}_{\alpha_1,\alpha_2}(X_j)$.

Proposition 3.5
Proof.
Define $F_{S_1}^{-1}(u) = \sum_{i=1}^{n} F_{X_i}^{-1}(u)$ and $F_{S_2}^{-1}(u) = \sum_{i=1}^{n} G_{Y_i}^{-1}(u)$. If $(X_1, \ldots, X_n)$ (respectively $(Y_1, \ldots, Y_n)$) is comonotonic, then there exists a uniform random variable $U_1$ (respectively $U_2$) such that $S_1 = F_{S_1}^{-1}(U_1)$ (respectively $S_2 = F_{S_2}^{-1}(U_2)$). Hence,
\begin{align*}
\mathrm{RVaR}_{\alpha_1,\alpha_2,s_1}(\mathbf{S})
&= \frac{\int_{\alpha_1}^{F(s_1, \mathrm{VaR}_{\alpha_2}(S_2))} \mathrm{VaR}_{u,s_1}\!\left(F_{S_1}^{-1}(U_1), F_{S_2}^{-1}(U_2)\right) \mathrm{d}u}{F(s_1, \mathrm{VaR}_{\alpha_2}(S_2)) - \alpha_1} \\
&= \frac{\int_{\alpha_1}^{F(s_1, \mathrm{VaR}_{\alpha_2}(S_2))} F_{S_2}^{-1}\!\left(\mathrm{VaR}_{u, F_{S_1}(s_1)}(U_1, U_2)\right) \mathrm{d}u}{F(s_1, \mathrm{VaR}_{\alpha_2}(S_2)) - \alpha_1} \\
&= \sum_{i=1}^{n} \frac{\int_{\alpha_1}^{F(x_i, \mathrm{VaR}_{\alpha_2}(Y_i))} \mathrm{VaR}_{u,y_i}\!\left(F_{X_i}^{-1}(U_1), G_{Y_i}^{-1}(U_2)\right) \mathrm{d}u}{F(x_i, \mathrm{VaR}_{\alpha_2}(Y_i)) - \alpha_1} \\
&= \sum_{i=1}^{n} \frac{\int_{\alpha_1}^{F(x_i, \mathrm{VaR}_{\alpha_2}(Y_i))} \mathrm{VaR}_{u,y_i}(X_i, Y_i) \, \mathrm{d}u}{F(x_i, \mathrm{VaR}_{\alpha_2}(Y_i)) - \alpha_1}
= \sum_{i=1}^{n} \mathrm{RVaR}_{\alpha_1,\alpha_2,y_i}(X_i, Y_i).
\end{align*}
The other results of Proposition 3.5 are developed the same way.
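The comonotonicity mechanism used above — each component is a nondecreasing transform of one common uniform, so quantiles add across components — can be sketched as follows, with assumed toy margins:

```python
import numpy as np

# Comonotonic pair driven by a single uniform U: quantiles are additive.
rng = np.random.default_rng(2)
u = rng.uniform(size=500_000)
x = -np.log(1.0 - u)            # X = F1^{-1}(U), Exp(1) margin (assumed)
y = 2.0 * u                     # Y = F2^{-1}(U), U(0,2) margin (assumed)
s = x + y                       # comonotonic sum S = F1^{-1}(U) + F2^{-1}(U)

levels = (0.90, 0.95, 0.99)
q_sum = [np.quantile(s, p) for p in levels]           # empirical VaR_p(S)
q_add = [-np.log(1.0 - p) + 2.0 * p for p in levels]  # VaR_p(X) + VaR_p(Y)
# elementwise q_sum ≈ q_add, since s is a nondecreasing function of u
```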
Proposition 3.6
Proof.
Let $F_{x_i}(x_j) = \Pr(X_j \leq x_j \mid X_i \leq x_i)$ be the conditional distribution of $X_j$ knowing $X_i$, $i, j = 1, 2$ ($i \neq j$). For any fixed $x_i$ and $\varepsilon \in [0,1]$, define $F_{\varepsilon,x_i}(x_j) = \varepsilon \delta_z + (1 - \varepsilon) F_{x_i}(x_j)$. The distribution $F_{\varepsilon,x_i}$ is differentiable at any $x_j \neq z$ with $F'_{\varepsilon,x_i}(x_j) = (1 - \varepsilon) f_{x_i}(x_j) > 0$, and it has a jump at $x_j = z$. We have that $\mathrm{VaR}_{\alpha,x_i}(\mathbf{X}) = \mathrm{VaR}_{\alpha / F_{X_i}(x_i)}(X_j \mid X_i \leq x_i)$. Then,
\[
\mathrm{VaR}_{\alpha,x_i}(F_{\varepsilon,x_i}) = F^{-1}_{\varepsilon,x_i}\!\left(\frac{\alpha}{F_{X_i}(x_i)}\right) =
\begin{cases}
F^{-1}_{x_i}\!\left(\dfrac{\alpha}{(1-\varepsilon) F_{X_i}(x_i)}\right), & \dfrac{\alpha}{F_{X_i}(x_i)} < (1-\varepsilon) F_{x_i}(z), \\[2mm]
F^{-1}_{x_i}\!\left(\dfrac{\alpha / F_{X_i}(x_i) - \varepsilon}{1 - \varepsilon}\right), & \dfrac{\alpha}{F_{X_i}(x_i)} \geq (1-\varepsilon) F_{x_i}(z) + \varepsilon, \\[2mm]
z, & \text{otherwise}.
\end{cases}
\]
As a consequence, the sensitivity function of $\mathrm{VaR}_{\alpha,x_i}(\mathbf{X})$ can be evaluated as
\[
S(z) = \lim_{\varepsilon \to 0^+} \frac{\mathrm{VaR}_{\alpha,x_i}(F_{\varepsilon,x_i}) - \mathrm{VaR}_{\alpha,x_i}(F_{x_i})}{\varepsilon}
= \left[\frac{\mathrm{d}}{\mathrm{d}\varepsilon} \mathrm{VaR}_{\alpha,x_i}(F_{\varepsilon,x_i})\right]_{\varepsilon = 0}
= \begin{cases}
-\dfrac{F_{X_i}(x_i) - \alpha}{f_{x_i}\!\left[\mathrm{VaR}_{\alpha,x_i}(\mathbf{X})\right] F_{X_i}(x_i)}, & z < \mathrm{VaR}_{\alpha,x_i}(\mathbf{X}), \\[2mm]
\dfrac{\alpha}{f_{x_i}\!\left[\mathrm{VaR}_{\alpha,x_i}(\mathbf{X})\right] F_{X_i}(x_i)}, & z > \mathrm{VaR}_{\alpha,x_i}(\mathbf{X}), \\[2mm]
0, & z = \mathrm{VaR}_{\alpha,x_i}(\mathbf{X}).
\end{cases}
\]
The result shows that $\mathrm{VaR}_{\alpha,x_i}(\mathbf{X})$ has a bounded sensitivity function for any fixed $x_i$, which means it is a robust statistic. Note that this conclusion coincides with the one associated with the univariate VaR.

Proposition 3.7
Proof.
Let $A = F_{X_i}(x_i)$ and $B = F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))$. Then the sensitivity function of
\[
\mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \frac{1}{B - \alpha_1} \int_{\alpha_1}^{B} \mathrm{VaR}_{u,x_i}(\mathbf{X}) \, \mathrm{d}u
\]
is given by
\begin{align*}
S(z) &= \frac{1}{B - \alpha_1} \int_{\alpha_1}^{B} \left[\lim_{\varepsilon \to 0^+} \frac{\mathrm{VaR}_{u,x_i}(F_{\varepsilon,x_i}) - \mathrm{VaR}_{u,x_i}(F_{x_i})}{\varepsilon}\right] \mathrm{d}u
= \frac{1}{B - \alpha_1} \int_{\alpha_1}^{B} \left[\frac{\mathrm{d}}{\mathrm{d}\varepsilon} \mathrm{VaR}_{u,x_i}(F_{\varepsilon,x_i})\right]_{\varepsilon = 0} \mathrm{d}u \\
&= \begin{cases}
\dfrac{1}{B - \alpha_1} \displaystyle\int_{\alpha_1}^{B} -\frac{A - u}{f_{x_i}\!\left[\mathrm{VaR}_{u,x_i}(\mathbf{X})\right] A} \, \mathrm{d}u, & z < \mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X}), \\[2mm]
\dfrac{1}{B - \alpha_1} \left\{\displaystyle\int_{\alpha_1}^{F(x_i,z)} \frac{u}{f_{x_i}\!\left[\mathrm{VaR}_{u,x_i}(\mathbf{X})\right] A} \, \mathrm{d}u + \displaystyle\int_{F(x_i,z)}^{B} -\frac{A - u}{f_{x_i}\!\left[\mathrm{VaR}_{u,x_i}(\mathbf{X})\right] A} \, \mathrm{d}u\right\}, & \mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X}) \leq z \leq \mathrm{VaR}_{\alpha_2}(X_j), \\[2mm]
\dfrac{1}{B - \alpha_1} \displaystyle\int_{\alpha_1}^{B} \frac{u}{f_{x_i}\!\left[\mathrm{VaR}_{u,x_i}(\mathbf{X})\right] A} \, \mathrm{d}u, & z > \mathrm{VaR}_{\alpha_2}(X_j),
\end{cases} \\
&= S'(z) - \mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}),
\end{align*}
where
\[
S'(z) = \begin{cases}
\dfrac{(A - \alpha_1)\mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X}) - (A - B)\mathrm{VaR}_{\alpha_2}(X_j)}{B - \alpha_1}, & z < \mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X}), \\[2mm]
\dfrac{zA - \alpha_1 \mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X}) - (A - B)\mathrm{VaR}_{\alpha_2}(X_j)}{B - \alpha_1}, & \mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X}) \leq z \leq \mathrm{VaR}_{\alpha_2}(X_j), \\[2mm]
\dfrac{B \,\mathrm{VaR}_{\alpha_2}(X_j) - \alpha_1 \mathrm{VaR}_{\alpha_1,x_i}(\mathbf{X})}{B - \alpha_1}, & z > \mathrm{VaR}_{\alpha_2}(X_j).
\end{cases}
\]
Furthermore, the sensitivity function of $\mathrm{TVaR}_{\alpha,x_i}(\mathbf{X})$ can be obtained when $B = A$. Then,
\[
S(z) = \begin{cases}
\mathrm{VaR}_{\alpha,x_i}(\mathbf{X}) - \mathrm{TVaR}_{\alpha,x_i}(\mathbf{X}), & z < \mathrm{VaR}_{\alpha,x_i}(\mathbf{X}), \\[2mm]
\dfrac{zA - \alpha \,\mathrm{VaR}_{\alpha,x_i}(\mathbf{X})}{A - \alpha} - \mathrm{TVaR}_{\alpha,x_i}(\mathbf{X}), & z \geq \mathrm{VaR}_{\alpha,x_i}(\mathbf{X}).
\end{cases}
\]
Obviously, $S(z)$ is linear in $z$, and hence unbounded, which implies that $\mathrm{TVaR}_{\alpha,x_i}(\mathbf{X})$ is not a robust statistic. This also coincides with the univariate TVaR.

Proposition 3.8
Proof.
Let $F_{\bar{x}_i}(x_j) = \Pr(X_j \leq x_j \mid X_i \geq x_i)$ be the conditional distribution of $X_j$ given $X_i \geq x_i$, $i, j = 1, 2$. For any fixed $x_i$ and $\varepsilon \in [0,1]$, define $F_{\varepsilon,\bar{x}_i}(x_j) = \varepsilon \delta_z + (1 - \varepsilon) F_{\bar{x}_i}(x_j)$. The distribution $F_{\varepsilon,\bar{x}_i}$ is differentiable at any $x_j \neq z$ with $F'_{\varepsilon,\bar{x}_i}(x_j) = (1 - \varepsilon) f_{\bar{x}_i}(x_j) > 0$, and it has a jump at $x_j = z$. Then, given that $\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}) = \mathrm{VaR}_{\frac{\alpha - F_{X_i}(x_i)}{1 - F_{X_i}(x_i)}}(X_j \mid X_i \geq x_i)$, we have
\[
\overline{\mathrm{VaR}}_{\alpha,x_i}(F_{\varepsilon,\bar{x}_i}) = F^{-1}_{\varepsilon,\bar{x}_i}\!\left(\frac{\alpha - F_{X_i}(x_i)}{1 - F_{X_i}(x_i)}\right) =
\begin{cases}
F^{-1}_{\bar{x}_i}\!\left(\dfrac{\alpha - F_{X_i}(x_i)}{(1-\varepsilon)(1 - F_{X_i}(x_i))}\right), & \dfrac{\alpha - F_{X_i}(x_i)}{1 - F_{X_i}(x_i)} < (1-\varepsilon) F_{\bar{x}_i}(z), \\[2mm]
F^{-1}_{\bar{x}_i}\!\left(\dfrac{\frac{\alpha - F_{X_i}(x_i)}{1 - F_{X_i}(x_i)} - \varepsilon}{1 - \varepsilon}\right), & \dfrac{\alpha - F_{X_i}(x_i)}{1 - F_{X_i}(x_i)} \geq (1-\varepsilon) F_{\bar{x}_i}(z) + \varepsilon, \\[2mm]
z, & \text{otherwise}.
\end{cases}
\]
Hence, the sensitivity function of $\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})$ can be obtained as
\[
S(z) = \lim_{\varepsilon \to 0^+} \frac{\overline{\mathrm{VaR}}_{\alpha,x_i}(F_{\varepsilon,\bar{x}_i}) - \overline{\mathrm{VaR}}_{\alpha,x_i}(F_{\bar{x}_i})}{\varepsilon}
= \left[\frac{\mathrm{d}}{\mathrm{d}\varepsilon} \overline{\mathrm{VaR}}_{\alpha,x_i}(F_{\varepsilon,\bar{x}_i})\right]_{\varepsilon = 0}
= \begin{cases}
-\dfrac{1 - \alpha}{f_{\bar{x}_i}\!\left[\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})\right](1 - F_{X_i}(x_i))}, & z < \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[2mm]
\dfrac{\alpha - F_{X_i}(x_i)}{f_{\bar{x}_i}\!\left[\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})\right](1 - F_{X_i}(x_i))}, & z > \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[2mm]
0, & z = \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}).
\end{cases}
\]
As $\mathrm{VaR}_{\alpha,x_i}(\mathbf{X})$, $\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})$ also has a bounded sensitivity function, meaning it is also robust. The difference between the two results arises because the bivariate lower and upper orthant VaR are evaluated using the cdf and the survival function, respectively.

Proposition 3.9
Proof.
Let $A = F_{X_i}(x_i)$ and $C = 1 - \bar{F}(x_i, \mathrm{VaR}_{\alpha_1}(X_j))$. Then the sensitivity function of
\[
\overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}) = \frac{1}{\alpha_2 - C} \int_{C}^{\alpha_2} \overline{\mathrm{VaR}}_{v,x_i}(\mathbf{X}) \, \mathrm{d}v
\]
is given by
\begin{align*}
S(z) &= \frac{1}{\alpha_2 - C} \int_{C}^{\alpha_2} \left[\lim_{\varepsilon \to 0^+} \frac{\overline{\mathrm{VaR}}_{v,x_i}(F_{\varepsilon,\bar{x}_i}) - \overline{\mathrm{VaR}}_{v,x_i}(F_{\bar{x}_i})}{\varepsilon}\right] \mathrm{d}v
= \frac{1}{\alpha_2 - C} \int_{C}^{\alpha_2} \left[\frac{\mathrm{d}}{\mathrm{d}\varepsilon} \overline{\mathrm{VaR}}_{v,x_i}(F_{\varepsilon,\bar{x}_i})\right]_{\varepsilon = 0} \mathrm{d}v \\
&= \begin{cases}
\dfrac{1}{\alpha_2 - C} \displaystyle\int_{C}^{\alpha_2} -\frac{1 - v}{f_{\bar{x}_i}\!\left[\overline{\mathrm{VaR}}_{v,x_i}(\mathbf{X})\right](1 - A)} \, \mathrm{d}v, & z < \mathrm{VaR}_{\alpha_1}(X_j), \\[2mm]
\dfrac{1}{\alpha_2 - C} \left\{\displaystyle\int_{C}^{1 - \bar{F}(x_i,z)} \frac{v - A}{f_{\bar{x}_i}\!\left[\overline{\mathrm{VaR}}_{v,x_i}(\mathbf{X})\right](1 - A)} \, \mathrm{d}v + \displaystyle\int_{1 - \bar{F}(x_i,z)}^{\alpha_2} -\frac{1 - v}{f_{\bar{x}_i}\!\left[\overline{\mathrm{VaR}}_{v,x_i}(\mathbf{X})\right](1 - A)} \, \mathrm{d}v\right\}, & \mathrm{VaR}_{\alpha_1}(X_j) \leq z \leq \overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}), \\[2mm]
\dfrac{1}{\alpha_2 - C} \displaystyle\int_{C}^{\alpha_2} \frac{v - A}{f_{\bar{x}_i}\!\left[\overline{\mathrm{VaR}}_{v,x_i}(\mathbf{X})\right](1 - A)} \, \mathrm{d}v, & z > \overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}),
\end{cases} \\
&= S'(z) - \overline{\mathrm{RVaR}}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}),
\end{align*}
where
\[
S'(z) = \begin{cases}
\dfrac{(1 - C)\mathrm{VaR}_{\alpha_1}(X_j) - (1 - \alpha_2)\overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X})}{\alpha_2 - C}, & z < \mathrm{VaR}_{\alpha_1}(X_j), \\[2mm]
\dfrac{z(1 - A) - (C - A)\mathrm{VaR}_{\alpha_1}(X_j) - (1 - \alpha_2)\overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X})}{\alpha_2 - C}, & \mathrm{VaR}_{\alpha_1}(X_j) \leq z \leq \overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}), \\[2mm]
\dfrac{(\alpha_2 - A)\overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}) - (C - A)\mathrm{VaR}_{\alpha_1}(X_j)}{\alpha_2 - C}, & z > \overline{\mathrm{VaR}}_{\alpha_2,x_i}(\mathbf{X}).
\end{cases}
\]
Furthermore, the sensitivity function of $\overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X})$ can be obtained by setting $\alpha_2 = 1$ and $C = \alpha$. Then,
\[
S(z) = \begin{cases}
\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}) - \overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X}), & z < \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}), \\[2mm]
\dfrac{z(1 - A) - (\alpha - A)\overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X})}{1 - \alpha} - \overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X}), & z \geq \overline{\mathrm{VaR}}_{\alpha,x_i}(\mathbf{X}).
\end{cases}
\]
Because of their analogous definitions, the sensitivity function of $\overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X})$ is similar to the one of $\mathrm{TVaR}_{\alpha,x_i}(\mathbf{X})$. Consequently, $\overline{\mathrm{TVaR}}_{\alpha,x_i}(\mathbf{X})$ is not robust.

Proposition 3.10
Proof.
According to Theorem 2.1 of Beck (2015), we have
\[
\mathrm{VaR}^n_{u,x_i}(\mathbf{X}) \xrightarrow[n \to \infty]{wp1} \mathrm{VaR}_{u,x_i}(\mathbf{X})
\]
for any $u \in (0,1)$, and $F_{n,i} \xrightarrow[n \to \infty]{wp1} F_i$. It follows that
\[
\mathbb{1}_{[\alpha_1, F_n(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]}(u) =
\begin{cases}
1, & u \in [\alpha_1, F_n(x_i, \mathrm{VaR}_{\alpha_2}(X_j))], \\
0, & \text{otherwise},
\end{cases}
\xrightarrow[n \to \infty]{wp1}
\begin{cases}
1, & u \in [\alpha_1, F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))], \\
0, & \text{otherwise},
\end{cases}
= \mathbb{1}_{[\alpha_1, F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]}(u).
\]
As a result, it can be seen that
\[
\frac{\mathrm{VaR}^n_{u,x_i}(\mathbf{X}) \, \mathbb{1}_{[\alpha_1, F_n(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]}(u)}{F_n(x_i, \mathrm{VaR}_{\alpha_2}(X_j)) - \alpha_1}
\xrightarrow[n \to \infty]{wp1}
\frac{\mathrm{VaR}_{u,x_i}(\mathbf{X}) \, \mathbb{1}_{[\alpha_1, F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]}(u)}{F(x_i, \mathrm{VaR}_{\alpha_2}(X_j)) - \alpha_1}.
\]
Therefore, by the dominated convergence theorem,
\begin{align*}
\lim_{n \to \infty} \mathrm{RVaR}^n_{\alpha_1,\alpha_2,x_i}(\mathbf{X})
&= \lim_{n \to \infty} \int \frac{\mathrm{VaR}^n_{u,x_i}(\mathbf{X}) \, \mathbb{1}_{[\alpha_1, F_n(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]}(u)}{F_n(x_i, \mathrm{VaR}_{\alpha_2}(X_j)) - \alpha_1} \, \mathrm{d}u \\
&= \int \frac{\mathrm{VaR}_{u,x_i}(\mathbf{X}) \, \mathbb{1}_{[\alpha_1, F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))]}(u)}{F(x_i, \mathrm{VaR}_{\alpha_2}(X_j)) - \alpha_1} \, \mathrm{d}u
= \frac{\int_{\alpha_1}^{F(x_i, \mathrm{VaR}_{\alpha_2}(X_j))} \mathrm{VaR}_{u,x_i}(\mathbf{X}) \, \mathrm{d}u}{F(x_i, \mathrm{VaR}_{\alpha_2}(X_j)) - \alpha_1}
= \mathrm{RVaR}_{\alpha_1,\alpha_2,x_i}(\mathbf{X}).
\end{align*}
Note that the consistency of the upper orthant RVaR can be proved in the same way.
References
Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9(3):203–228.

Balkema, A. A. and De Haan, L. (1974). Residual life time at great age. The Annals of Probability, pages 792–804.

Beck, N. (2015). Multivariate Risk Measures and a Consistent Estimator for the Orthant Based Tail Value-at-Risk. PhD thesis, Concordia University.

Bignozzi, V. and Tsanakas, A. (2016). Parameter uncertainty and residual estimation risk. Journal of Risk and Insurance, 83(4):949–978.

Cont, R., Deguest, R., and Scandolo, G. (2010). Robustness and sensitivity analysis of risk measurement procedures. Quantitative Finance, 10(6):593–606.

Cossette, H., Mailhot, M., Marceau, É., and Mesfioui, M. (2013). Bivariate lower and upper orthant value-at-risk. European Actuarial Journal, 3(2):321–357.

Cossette, H., Mailhot, M., Marceau, É., and Mesfioui, M. (2015). Vector-valued tail value-at-risk and capital allocation. Methodology and Computing in Applied Probability, 18(3):653–674.

Cousin, A. and Di Bernardino, E. (2013). On multivariate extensions of value-at-risk. Journal of Multivariate Analysis, 119:32–46.

Embrechts, P. and Puccetti, G. (2006). Bounds for functions of multivariate risks. Journal of Multivariate Analysis, 97(2):526–547.

Fisher, R. A. and Tippett, L. H. C. (1928). Limiting forms of the frequency distribution of the largest or smallest member of a sample. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 24, pages 180–190. Cambridge University Press.

Group of Thirty Derivatives Study Group (1993). Derivatives: Practices and Principles. Washington, DC: Group of Thirty.

Nappo, G. and Spizzichino, F. (2009). Kendall distributions and level sets in bivariate exchangeable survival models. Information Sciences, 179(17):2878–2890.

Pickands III, J. (1975). Statistical inference using extreme order statistics. The Annals of Statistics, 3(1):119–131.

Prékopa, A. (2012). Multivariate value at risk and related topics.