Modality for Scenario Analysis and Maximum Likelihood Allocation
Takaaki Koike∗ and Marius Hofert†

May 7, 2020
Abstract
We analyze dependence, tail behavior and multimodality of the conditional distribution of a loss random vector given that the aggregate loss equals an exogenously provided capital. This conditional distribution is a building block for calculating risk allocations such as the Euler capital allocation of Value-at-Risk. A level set of this conditional distribution can be interpreted as a set of severe and plausible stress scenarios the given capital is supposed to cover. We show that various distributional properties of this conditional distribution are inherited from those of the underlying joint loss distribution. Among these properties, we find that multimodality of the conditional distribution is an important feature related to the number of risky scenarios likely to occur in a stressed situation. Moreover, Euler allocation becomes less sound under multimodality than under unimodality. To overcome this issue, we propose a novel risk allocation called the maximum likelihood allocation (MLA), defined as the mode of the conditional distribution given the total capital. The process of estimating MLA turns out to be beneficial for detecting multimodality, evaluating the soundness of risk allocations, and constructing more flexible risk allocations based on multiple risky scenarios. Properties of the conditional distribution and MLA are demonstrated in numerical experiments. In particular, we observe that negative dependence among losses typically leads to multimodality, and thus to multiple risky scenarios and less sound risk allocations.
JEL classification:
C02, G32
Keywords:
Risk allocation, Scenario analysis, Conditional distribution, Dependence modeling, Unimodality, Mode
1 Introduction

Risk allocation concerns the quantification of the risk of each unit of a portfolio. For a d-dimensional portfolio of risks or losses (typically risk-factor changes) represented by an R^d-valued random vector X = (X_1, . . . , X_d), d ∈ N, the overall loss S = X_1 + · · · + X_d is quantified as a total capital K ∈ R, typically determined as K = ϱ(S) for a risk measure ϱ. The Euler principle, proposed in Tasche (1995), is one of the most well-known rules of risk allocation. It is economically justified, for example, in Tasche (1995) and Kalkbrener (2005), and the derived allocated capital is also known as the Aumann–Shapley value (Aumann and Shapley, 2015) in cooperative game theory; see Denault (2001) and Boonen et al. (2020).

The Euler principle is applicable when the total capital is determined by a risk measure via K = ϱ(S). However, as pointed out by Asimit et al. (2019), the total capital in practice may not always coincide with the risk measure itself but includes various adjustments such as stress scenarios and liquidity adjustments. In such cases, the capital does not possess the original meaning as a risk measure and the formula under the Euler principle is not available. In addition, there are situations when the total capital is given exogenously as a constant; see Laeven and Goovaerts (2004). For the case when the total capital is regarded as a constant, various allocation methods have been proposed in the literature. One of the main streams, found, for example, in Laeven and Goovaerts (2004) and Dhaene et al. (2012), is to derive an allocation as a minimizer of some loss function over the set of allocations K_d(K) = {x ∈ R^d : x_1 + · · · + x_d = K}. Another method is to find a confidence level for which the corresponding risk measure coincides with K, and then allocate K by regarding it as measured by that risk measure. For example, if Value-at-Risk (VaR) or Expected Shortfall (ES) is chosen as the risk measure, confidence levels p_VaR, p_ES ∈ (0, 1) are first found such that K = VaR_{p_VaR}(S) or, respectively, K = ES_{p_ES}(S) holds for the given total capital K. After performing this procedure, the Euler principle becomes applicable to K and the resulting risk allocation of K assigns E[X_j | {S = K}] or, respectively, E[X_j | {S ≥ VaR_{p_ES}(S)}] to the jth risk X_j; see Section 2.1 for details.

∗ Corresponding author: Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, Canada, E-mail: [email protected]
† Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, Canada, E-mail: [email protected]

Although these methods often provide plausible risk allocations, they sometimes ignore important distributional properties of X related to the soundness of risk allocations and to the risky scenarios expected to be covered by the allocated capitals. As we will see in Section 2.2, all these allocation methods provide the homogeneous allocation (K/d, . . . , K/d) when X is exchangeable in the sense that X equals (X_{π(1)}, . . . , X_{π(d)}) in distribution for any permutation (π(1), . . . , π(d)) of {1, . . . , d}. This homogeneous allocation can be sound when the conditional distribution of X in a stressed situation is unimodal with mode (K/d, . . . , K/d), since the homogeneous allocation then covers the risky scenario most likely to occur in a stressed situation. On the other hand, the same allocation (K/d, . . . , K/d) arises when the conditional distribution in a stressed situation is multimodal and (K/d, . . . , K/d) is supposed to cover multiple risky scenarios on average. In this multimodal case, the homogeneous allocation is less sound than in the former unimodal case without identifying the multiple risky scenarios hidden in the single vector (K/d, . . . , K/d). Consequently, the soundness of a risk allocation can depend on the distributional properties of the conditional distribution of X in a stressed situation.

In this paper, we focus on the conditional distribution of X given {S = K}. Since X | {S = K} takes values in K_d(K), this random vector can be a building block for deriving a risk allocation. For example, the Euler allocation (2) arises when K = VaR_p(S) for some p ∈ (0, 1) and the expectation of X | {S = K} is considered. In Section 2.2 we show that a level set of X | {S = K} can be regarded as a set of severe and plausible stress scenarios the given capital K is supposed to cover. Based on the motivation provided there, we investigate distributional properties of X | {S = K} in Section 3. We show that dependence, tail behavior and unimodality of X | {S = K} are typically inherited from those of the underlying unconditional loss X. In addition, we demonstrate by simulation that negative dependence among X typically leads to multimodality of X | {S = K}; see Section 5.2. These observations can be useful to detect the hidden risk of multimodality in risk allocation. Furthermore, the properties of X | {S = K} studied in this paper are of potential importance in simulation and statistical inference of X | {S = K} using Markov chain Monte Carlo (MCMC) methods for efficiently simulating the distribution of interest; see Remark 1 and Appendix D.

We also propose a novel risk allocation method termed maximum likelihood allocation (MLA), defined as the (typically unique) mode of X | {S = K}. Besides the mean (which leads to the Euler allocation of VaR), the mode is also an important summary statistic of X | {S = K}. It can be interpreted as the risky scenario most likely to occur in the stressed situation {S = K}. By searching for the global mode of X | {S = K}, possibly multiple local modes can be detected. As explained in Section 2.2, this procedure of detecting multimodality is beneficial for evaluating the soundness of risk allocations, for discovering hidden multiple scenarios likely to occur in the stressed situation {S = K}, and for constructing more flexible risk allocations by weighting important scenarios. Definitions and required assumptions on MLA are provided in Section 4.1. In Section 4.2, we investigate properties of MLA expected to hold for a risk allocation.
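As a rough numerical illustration of this idea (a minimal sketch under an assumed bivariate lognormal loss model, not the estimation procedure developed later in the paper), the mode of X | {S = K} can be approximated by conditioning simulated losses on a thin slab {|S − K| < ε} and locating the highest-density point of the first d − 1 components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bivariate lognormal portfolio (illustrative choice only).
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=200_000)
X = np.exp(Z)                      # losses X = (X_1, X_2)
S = X.sum(axis=1)                  # aggregate loss

K = np.quantile(S, 0.99)           # total capital, here the empirical 99% quantile of S
eps = 0.02 * K                     # slab half-width approximating the event {S = K}
cond = X[np.abs(S - K) < eps]      # approximate sample from X | {S = K}

# On K_d(K) the first d - 1 components determine the last one, so for d = 2
# a one-dimensional histogram of X_1 | {S = K} suffices to locate the mode.
counts, edges = np.histogram(cond[:, 0], bins=40)
i = np.argmax(counts)
x1_mode = 0.5 * (edges[i] + edges[i + 1])

mla = np.array([x1_mode, K - x1_mode])  # full allocation: components sum to K
print(mla, K)
```

Several well-separated histogram peaks would indicate multimodality of X | {S = K}: the global peak yields the MLA, while the remaining peaks correspond to the additional risky scenarios discussed above.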
MLA is estimated and compared with the Euler allocation in numerical experiments based on real data in Section 5.1 and based on simulated data in Section 5.2. Concluding remarks are given in Section 6, and all proofs can be found in the Appendix.

2 Preliminaries
On a standard atomless probability space (Ω, A, P), let X = (X_1, . . . , X_d), d ≥ 2, be a d-dimensional random vector with joint distribution function F_X, margins F_{X_1}, . . . , F_{X_d} and a copula C. Furthermore, let S = X_1 + · · · + X_d and denote by F_S its distribution function. If F_S and F_X have densities, we denote them by f_S and f_X, respectively, with marginal densities f_{X_1}, . . . , f_{X_d} of f_X and copula density c. The variable X_j is interpreted as the loss of the jth asset, business line, economic entity and so on, of the portfolio X in a fixed period of time. Similarly, S is regarded as the aggregate risk of the portfolio X. Positive values of X_1, . . . , X_d and S are understood as losses and negative values are interpreted as profits.

The amount of total capital required to cover the risk of the portfolio X is often determined as ϱ(S), where ϱ is a risk measure, that is, a map from a set of random variables to a real number. Examples of risk measures include Value-at-Risk (VaR) at confidence level p ∈ (0, 1), defined by

  VaR_p(X) := inf{x ∈ R : F_X(x) ≥ p},

for a random variable X on (Ω, A, P) with distribution function F_X, and Expected Shortfall (ES) at confidence level p ∈ (0, 1), also known as Conditional VaR, Tail VaR and Average VaR, defined by

  ES_p(X) = (1/(1 − p)) ∫_p^1 VaR_q(X) dq,

provided that E[|X|] < ∞.

Once the total capital is determined as K ∈ R, it is decomposed into d real numbers AC_1, . . . , AC_d such that the full allocation property

  AC_1 + · · · + AC_d = K    (1)

holds. The set of all possible allocations is denoted by K_d(K) := {x ∈ R^d : x_1 + · · · + x_d = K}. If K = ϱ(S) for a risk measure ϱ, the so-called Euler principle determines the jth allocated capital by

  AC_j^Euler = ∂ϱ(λ^⊤X)/∂λ_j |_{λ = 1_d},

which leads to the VaR contributions and ES contributions given by

  ∂ϱ(λ^⊤X)/∂λ_j |_{λ = 1_d} = E[X_j | {S = VaR_p(S)}]  when ϱ = VaR_p,    (2)

and

  ∂ϱ(λ^⊤X)/∂λ_j |_{λ = 1_d} = E[X_j | {S ≥ VaR_p(S)}]  when ϱ = ES_p,

respectively; 1_d denotes (1, . . . , 1) ∈ R^d.

We consider the case when the capital is an exogenously given constant K ∈ R. Our proposed risk allocation introduced in Section 4 is based on the conditional distribution

  F_{X | {S = K}}(x) = P(X ≤ x | {S = K}),  x ∈ R^d.    (3)

The conditional distribution (3) is degenerate and its first d′ = d − 1 components X′ | {S = K} = (X_1, . . . , X_{d′}) | {S = K} determine the last one via X_d | {S = K} = K − (X_1 + · · · + X_{d′}) | {S = K}. Therefore, it suffices to consider the d′-dimensional marginal distribution F_{X′ | {S = K}}. Note that throughout
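For concreteness, the quantities above can be estimated from a Monte Carlo sample of X. The following sketch (under a hypothetical portfolio of three exponential losses, with a thin slab |S − VaR_p(S)| < ε standing in for the zero-probability event {S = VaR_p(S)}) computes empirical versions of VaR_p(S), ES_p(S) and the VaR and ES contributions in (2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sample of a d = 3 portfolio (hypothetical distribution).
X = rng.exponential(scale=1.0, size=(200_000, 3))
S = X.sum(axis=1)
p = 0.95

# VaR_p(S): empirical p-quantile; ES_p(S): average loss beyond VaR_p(S).
var_p = np.quantile(S, p)
es_p = S[S >= var_p].mean()

# ES contributions E[X_j | S >= VaR_p(S)]: average each margin over the tail event.
ac_es = X[S >= var_p].mean(axis=0)

# VaR contributions E[X_j | S = VaR_p(S)]: average over a thin slab around VaR_p(S).
eps = 0.01 * var_p
slab = np.abs(S - var_p) < eps
ac_var = X[slab].mean(axis=0)

# Full allocation property (1): the ES contributions sum to ES_p(S) exactly
# in-sample, and the VaR contributions sum to VaR_p(S) up to the slab width.
print(ac_es.sum(), es_p)
print(ac_var.sum(), var_p)
```

The exact in-sample additivity of the ES contributions follows from linearity of the conditional mean: summing E[X_j | S ≥ VaR_p(S)] over j gives E[S | S ≥ VaR_p(S)] = ES_p(S).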
[Figure: scatter-plot panels (extraction residue removed); panel (a) plots samples of X against Y.]
Figure 1: Scatter plots (black dots) of (a) (X_1, Y_1) and (b) (X_2, Y_2) such that all of X_1, Y_1, X_2 and Y_2 are identically Pareto distributed with shape parameter 3 and scale parameter 5, and (X_1, Y_1) and (X_2, Y_2) have Student t copulas C^t_{ν,ρ_1} and C^t_{ν,ρ_2}, respectively, where ν = 5 is the degrees of freedom, ρ_1 = 0.… and ρ_2 = −0.…. The line x + y = K for K = 35 is also drawn. Histograms (blue) of the conditional distributions of (a) (X_1, Y_1) and (b) (X_2, Y_2) on the (approximate) set of allocations {(x, y) ∈ R² : K − δ < x + y < K + δ}, δ = 0.5, are drawn on K_d(K) = {(x, y) ∈ R² : x + y = K}.

In this paper, the ′-notation is used to denote quantities related to this non-degenerate distribution in d' = d − 1 dimensions, such as X' = (X_1, ..., X_{d'}) and x' = (x_1, ..., x_{d'}); the ⊤-symbol denotes transposition. Assuming that X and (X', S) admit densities, X' | {S = K} also has a density, given by

    f_{X'|S=K}(x') = f_{(X',S)}(x', K) / f_S(K) = f_X(x', K − 1_{d'}^⊤ x') / f_S(K),  x' ∈ R^{d'},  (4)

where the last equality follows from the affine transformation (X', S) ↦ X with unit Jacobian.

The distribution of X | {S = K} is a primary subject of this paper. In this section, we provide a motivating example for investigating this distribution from the viewpoint of risk allocation. To this end, consider two bivariate risks (a) (X_1, Y_1) and (b) (X_2, Y_2) such that all of X_1, Y_1, X_2 and Y_2 are identically Pareto distributed with shape parameter 3 and scale parameter 5, and (X_1, Y_1) and (X_2, Y_2) have Student t copulas C^t_{ν,ρ_1} and C^t_{ν,ρ_2}, respectively, where ν = 5 is the degrees of freedom parameter, ρ_1 = 0.… and ρ_2 = −0.…, and the total capital is K = 35. By exchangeability of the risk models (a) and (b), most allocation rules provide the homogeneous allocation (K/2, K/2) = (17.5, 17.5) in both cases. For instance, if K is regarded as VaR or ES at some confidence level and is allocated according to the Euler principle, then both VaR and ES contributions lead to homogeneous allocations. As we see in Figure 1, however, the conditional distributions of (X_1, Y_1) and of (X_2, Y_2) on the set of allocations K_d(K) differ substantially. Positive dependence between X_1 and Y_1 prevents the two random variables from moving in opposite directions under the constraint X_1 + Y_1 = K, which results in unimodality of the conditional distribution on K_d(K). On the other hand, negative dependence between X_2 and Y_2 allows them to move in opposite directions, which leads to bimodality of the conditional distribution. From the viewpoint of risk management, the homogeneous allocation (K/2, K/2) seems to be a sounder capital allocation in Case (a) because it covers the most likely risky scenario. In Case (b), the two risky scenarios around the corners (K, 0) and (0, K) occur equally likely, and the allocation (K/2, K/2) can be understood as an average of these scenarios. However, the likelihood around (K/2, K/2) is quite small, and a single vector of the equal allocation (K/2, K/2) obscures the two distinct risky scenarios. Consequently, the soundness of the allocated capital depends on the modality of the conditional loss distribution, and multiple risky scenarios can be hidden in a single vector resulting from capital allocation.

Inspecting modes of X | {S = K} can also be regarded as a stress test of risk allocations. Breuer et al. (2009) require stress scenarios to be severe and plausible. Suppose that a plausible scenario set is defined by L_t(X) := {x ∈ R^d : f_X(x) ≥ t}, where t > 0 and f_X is the density function of X. Then the set L_t(X) ∩ K_d(K) can be regarded as a set of most severe scenarios the given total capital K can cover. Among the severe and plausible scenarios L_t(X) ∩ K_d(K), the mode of X | {S = K} is the most severe and plausible scenario that K can cover, since the convention f_{X|S=K}(x) = f_X(x) 1{1_d^⊤ x = K} / f_S(K), x ∈ R^d, implies that

    L_t(X) ∩ K_d(K) = {x ∈ R^d : f_X(x) 1{1_d^⊤ x = K} ≥ t} = {x ∈ R^d : f_{X|S=K}(x) ≥ t / f_S(K)} = L_{t/f_S(K)}(X | {S = K})

and the mode of X | {S = K} attains the highest level of plausibility t. Unimodality of X | {S = K} implies that there exists one representative stress scenario the total capital K can cover, and thus the mode is a sound allocation covering the risky scenario most likely to occur. On the other hand, multimodality of X | {S = K} means that there are multiple distinct stress scenarios that are severe and plausible, and thus it may not be sufficient to focus only on a single scenario without identifying the other ones.

Remark (Sampling X | {S = K} with MCMC methods).
Another motivation for investigating distributional properties of X | {S = K} is to be able to efficiently simulate this conditional distribution. This is a challenging task since there are no general and tractable sampling methods known for X | {S = K}. Although samples from X satisfying the constraint {S = K} can be regarded as samples from X | {S = K}, the probability P(S = K) is zero, and thus such samples virtually never exist when S admits a density. A potential remedy for this problem is to modify the conditioning set {S = K} to {K − δ < S < K + δ} for a small δ > 0 such that P(K − δ < S < K + δ) > 0. However, this modification distorts the distribution of X | {S = K}, and the resulting estimates of risk allocations are biased. To overcome this issue, Koike and Minami (2019) and Koike and Hofert (2020) proposed MCMC methods for exact simulation from X | {S = K}. Although MCMC methods improve sample efficiency and the resulting estimates are unbiased, their performance highly depends on distributional properties of X | {S = K}, in particular on its modality and heavy-tailedness; see Appendix D for more details. From this viewpoint, investigating properties of X | {S = K} is important for constructing efficient MCMC methods for simulating X | {S = K}.

In this section we study the support, dependence, tail behavior and modality of the conditional distribution of X given {S = K} for a given constant K > 0. As above, we work with the d'-dimensional random vector X' | {S = K} for d' = d − 1 instead of the degenerate distribution of X | {S = K}.

Support of X' | {S = K}

We start with the support of f_{X'|S=K}. By Equation (4),

    supp(X' | {S = K}) := {x' ∈ R^{d'} : f_{X'|S=K}(x') > 0} = {x' ∈ R^{d'} : f_X(x', K − 1_{d'}^⊤ x') > 0}.

If X is supported on all of R^d, we have supp(X' | {S = K}) = R^{d'}. Another typical case is when X_1, ..., X_d are bounded from below, that is, there exist l_1, ..., l_d > −∞ such that X_j ≥ l_j P-almost surely for j = 1, ..., d. In this case, supp(X) = (l_1, ∞) × ··· × (l_d, ∞) and thus the support of f_{X'|S=K} is given by

    supp(X' | {S = K}) = { x' ∈ R^{d'} : x'_1 > l_1, ..., x'_{d'} > l_{d'}, ∑_{j=1}^{d'} x'_j < K − l_d }.  (5)

If l_1 = ··· = l_d = 0, that is, when X models the nonnegative part of losses, the closure of (5) is known as the K-simplex.
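As a minimal illustration (a Python sketch; the dimension d and capital K below are hypothetical choices, not values from the text), uniform samples on the support (5) with l_1 = ··· = l_d = 0 can be drawn by scaling a flat Dirichlet vector:

```python
# Uniform sampling on the support (5) with l_1 = ... = l_d = 0, i.e. on the
# (open) K-simplex {x' in R^{d'} : x'_j > 0, sum_j x'_j < K}.  If
# (W_1, ..., W_d) ~ Dirichlet(1, ..., 1), then K * (W_1, ..., W_{d'}) with
# d' = d - 1 is uniformly distributed on this set.
# Hypothetical choices (not from the paper): d = 4, K = 35.
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 35.0
d_prime = d - 1

W = rng.dirichlet(np.ones(d), size=100_000)  # uniform on the probability simplex
X_prime = K * W[:, :d_prime]                 # drop the last coordinate and scale

# every draw lies in the support (5)
assert (X_prime > 0).all() and (X_prime.sum(axis=1) < K).all()
print(X_prime.mean(axis=0))  # each coordinate has mean K/d = 8.75 by symmetry
```

Such draws can serve as the uniform proposal of the independence sampler discussed next; only the ratio of target densities at the proposed and current states is then required.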
Since the set in (5) is bounded, simulation of X' | {S = K} can be more straightforward than in the former case when supp(X' | {S = K}) = R^{d'}. For instance, an independent Metropolis-Hastings (MH) algorithm can be applied by first generating a sample y' uniformly on the set in (5) (which is a location-shifted simplex, so uniform sampling from this set can be achieved by simulating a specific Dirichlet distribution) and then replacing the current state x' with the new state y' with probability min{1, α(x', y')}, where α(x', y') = f_{X'|S=K}(y') / f_{X'|S=K}(x') = f_X(y', K − 1_{d'}^⊤ y') / f_X(x', K − 1_{d'}^⊤ x').

X' | {S = K} in the elliptical case

Elliptical distributions are important exceptions for which the distribution of X' | {S = K} can be derived explicitly. For applications of elliptical distributions to risk management see, for example, Landsman and Valdez (2003), Dhaene et al. (2008) or Chapter 6 of McNeil et al. (2015). Throughout this work, the set of all d × d positive definite matrices is denoted by M^{d×d}_+. The characteristic function of a random vector X is given by φ_X(t) = E[exp(i t^⊤ X)], t ∈ R^d. If a function ψ(t) : [0, ∞) → R is such that ψ(t^⊤ t) is a d-dimensional characteristic function, then ψ is called a characteristic generator; see Fang (2018) for details. Let Ψ_d denote the class of all characteristic generators. A d-dimensional random vector X is said to have an elliptical distribution, denoted by X ∼ E_d(µ, Σ, ψ), if its characteristic function can be expressed as

    φ_X(t) = exp(i t^⊤ µ) ψ(t^⊤ Σ t)

for a location vector µ ∈ R^d, a dispersion matrix Σ ∈ M^{d×d}_+ and a characteristic generator ψ ∈ Ψ_d.
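To preview the kind of result available in the elliptical case, consider the multivariate normal distribution, a member of this family for which the conditional law of X_1 given S = K follows from classical Gaussian conditioning. The following Python sketch (all parameter values are hypothetical, not from the text) checks the closed-form conditional mean and variance against direct numerical integration of the conditional density (4):

```python
# Conditional distribution of X_1 given S = X_1 + X_2 = K for a bivariate
# normal X ~ N(mu, Sigma): classical Gaussian conditioning gives
#   E[X_1 | S = K]   = mu_1 + c_1 (K - mu_S) / var_S,
#   Var[X_1 | S = K] = Sigma_11 - c_1^2 / var_S,
# with c = Sigma @ 1, mu_S = 1^T mu, var_S = 1^T Sigma 1.  We verify this
# against numerical integration of f_X(x, K - x) / f_S(K), i.e. Equation (4).
# All parameter values below are hypothetical.
import numpy as np

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
K = 35.0

P = np.linalg.inv(Sigma)
det = np.linalg.det(Sigma)

def f_X(x1, x2):
    """Bivariate normal density of X at (x1, x2)."""
    d1, d2 = x1 - mu[0], x2 - mu[1]
    q = P[0, 0] * d1**2 + 2 * P[0, 1] * d1 * d2 + P[1, 1] * d2**2
    return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(det))

mu_S, var_S = mu.sum(), Sigma.sum()
f_S = np.exp(-0.5 * (K - mu_S)**2 / var_S) / np.sqrt(2 * np.pi * var_S)

x = np.linspace(-40.0, 80.0, 200_001)        # fine grid for quadrature
dx = x[1] - x[0]
f_cond = f_X(x, K - x) / f_S                 # conditional density, Equation (4)

mass = f_cond.sum() * dx                     # should integrate to 1
mean_num = (x * f_cond).sum() * dx
var_num = ((x - mean_num)**2 * f_cond).sum() * dx

c = Sigma @ np.ones(2)
mean_cl = mu[0] + c[0] * (K - mu_S) / var_S  # = 21.0 for these parameters
var_cl = Sigma[0, 0] - c[0]**2 / var_S       # = 0.4375
print(mass, mean_num, mean_cl, var_num, var_cl)
```

Note that the conditional mean 21.0 lies far above the unconditional mean mu_1 = 1 here, since X_1 carries most of the covariance with the sum; this is the Gaussian special case of the location shift discussed below.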
When an elliptical distribution X ∼ E_d(µ, Σ, ψ) admits a density, it is of the form

    f_X(x) = (c_d / √|Σ|) g( (1/2) (x − µ)^⊤ Σ^{-1} (x − µ); d ),  x ∈ R^d,

for some normalizing constant c_d > 0 and a density generator g(·; d) satisfying

    ∫_0^∞ t^{d/2 − 1} g(t; d) dt < ∞;

see Fang (2018). We omit the second argument and write g(·) = g(·; d) when it is not necessary to indicate it. In the following proposition we derive the distribution of X' | {S = K} provided that X ∼ E_d(µ, Σ, ψ).

Proposition 1 (Ellipticality of X' | {S = K}). Suppose X ∼ E_d(µ, Σ, ψ). Then X' | {S = K} follows an elliptical distribution E_{d'}(µ_K, Σ_K, ψ_K) for some characteristic generator ψ_K ∈ Ψ_{d'}, where

    µ_K = µ' + ((K − µ_S) / σ_S²) (Σ 1_d)'  and  Σ_K = Σ' − (1 / σ_S²) (Σ 1_d)' (Σ 1_d)'^⊤,  (6)

where µ' and (Σ 1_d)' are the first d' components of µ and Σ 1_d, respectively, Σ' is the principal submatrix of Σ obtained by deleting the d-th row and column, µ_S = 1_d^⊤ µ and σ_S² = 1_d^⊤ Σ 1_d. Furthermore, if X admits a density with density generator g, then X' | {S = K} admits a density with density generator g_K(t) = g(t + ∆_K), where

    ∆_K = (1/2) ((K − µ_S) / σ_S)².  (7)

Note that the characteristic generator ψ_K of X' | {S = K} is in general different from that of X; see the proof in Appendix A. By Proposition 1, ellipticality is preserved under conditioning on {S = K}, and thus a change of the shape of the distribution as observed in Figure 1 (b) does not occur when X is elliptical. The capital K is typically much larger than the mean µ_S of the total loss in practice. By (7), the density generator g_K is thus typically the tail part of the generator g. Moreover, the location vector µ_K typically increases in proportion to the covariances (Σ 1_d)' = (Cov[X_1, S], ..., Cov[X_{d'}, S])^⊤. As a consequence, more (less) capital is assigned to losses which are positively (negatively) correlated with the other losses. On the other hand, the dispersion matrix Σ_K decreases in proportion to the term (Σ 1_d)' (Σ 1_d)'^⊤, and the reduction depends on the variance of the sum.

Example 1 (Student t distribution). A d-dimensional Student t distribution t_ν(µ, Σ) is an elliptical distribution E_d(µ, Σ, ψ) with density generator

    g(t; d) = (1 + 2t/ν)^{−(d+ν)/2},  t ≥ 0,  (8)

where ν ≥ 1 is the degrees of freedom parameter. It is known, for example, from Roth (2012) and Ding (2016) that the conditional distribution of the Student t distribution is again Student t. We can check this closedness property with Proposition 1. By (7), the random variable X' | {S = K} follows an elliptical distribution E_{d'}(µ_K, Σ_K, g_K) with density generator (up to a constant) given by

    g_K(t) = (1 + 2(t + ∆_K)/ν)^{−(d+ν)/2},

for which the corresponding distribution is known as the Pearson type VII distribution; see Schmidt (2002). In fact, this distribution reduces to a d'-dimensional Student t distribution since

    g_K(t) ∝ (1 + 2t/(ν + 2∆_K))^{−(d+ν)/2} = (1 + ((ν + 1)/(ν + 2∆_K)) · 2t/(ν + 1))^{−(d' + ν + 1)/2},

and the multiplier (ν + 1)/(ν + 2∆_K) can be absorbed by redefining the dispersion matrix as Σ̃_K = ((ν + 2∆_K)/(ν + 1)) Σ_K, where (ν + 2∆_K)/(ν + 1) > 0. Consequently, X' | {S = K} has distribution t_{ν+1}(µ_K, Σ̃_K). Since the degrees of freedom of X' | {S = K} increase by 1, X' | {S = K} has slightly lighter tails than X.

Dependence of X' | {S = K} and stochastic order

The dependence structure of X' | {S = K} is typically described in terms of dependence among X_j and S for j = 1, ..., d'. For instance, when X ∼ E_d(µ, Σ, ψ), Proposition 1 yields

    Cov[X_i, X_j | {S = K}] = (Σ_K)_{i,j} = Cov[X_i, X_j] − (1/σ_S²)(Σ 1_d)_i (Σ 1_d)_j = Cov[X_i, X_j] − (1/σ_S²) Cov[X_i, S] Cov[X_j, S] = σ_i σ_j (ρ_{X_i,X_j} − ρ_{X_i,S} ρ_{X_j,S}),

where σ_j² = Var(X_j) and ρ_{X_i,X_j} is the correlation coefficient of (X_i, X_j). In this section we study the dependence, especially the total positivity and its related order, of X' | {S = K} for a general distribution beyond the elliptical case. To this end, we define the following concepts.

Definition 1 (Multivariate total positivity of order 2). Suppose random vectors X and Y have densities f_X and f_Y, respectively.
1. X is said to be multivariate totally positive of order 2 (MTP2) if f_X(x) f_X(y) ≤ f_X(x ∧ y) f_X(x ∨ y) for all x, y ∈ R^d.
2. X is said to be multivariate reverse rule of order 2 (MRR2) if f_X(x) f_X(y) ≥ f_X(x ∧ y) f_X(x ∨ y) for all x, y ∈ R^d.
3. Y is said to be larger than X in the TP2 order, denoted by X ≤_tp Y, if f_X(x) f_Y(y) ≤ f_X(x ∧ y) f_Y(x ∨ y) for all x, y ∈ R^d.

For examples and implied dependence properties of MTP2, MRR2 and TP2-ordered distributions, see Karlin and Rinott (1980a) and Karlin and Rinott (1980b). The following proposition states that the MTP2 and MRR2 properties and the TP2 order of X' | {1_d^⊤ X = K} and Y' | {1_d^⊤ Y = K} are inherited from those of (X', 1_d^⊤ X) and (Y', 1_d^⊤ Y).

Proposition 2 (MTP2, MRR2 and TP2 order of X' | {S = K}).
Suppose (X', S) and (Y', T), where S = 1_d^⊤ X and T = 1_d^⊤ Y, have densities f_{(X',S)} and f_{(Y',T)}, respectively.
1. If (X', S) is MTP2 (MRR2), then X' | {S = K} is MTP2 (MRR2).
2. If (X', S) ≤_tp (Y', T), then X' | {S = K} ≤_tp Y' | {T = K}.

The MTP2 (MRR2) properties and the TP2 order have various implications. For example, when X' | {S = K} is MTP2, then X' | {S = K} is positively associated in the sense that Cov[g(X_i), h(X_j) | {S = K}] ≥ 0 for all increasing functions g : R → R and h : R → R. If X' | {S = K} ≤_tp Y' | {T = K}, then X' | {S = K} ≤_st Y' | {T = K}, that is, E[h(X') | {S = K}] ≤ E[h(Y') | {T = K}] for all bounded and increasing functions h : R^{d'} → R. The reader is referred to Müller and Stoyan (2002) for more implications of the MTP2, MRR2 and TP2 order.

Next, we consider the special but important case when X_1, ..., X_d are perfectly positively dependent, that is, when X is a comonotone random vector X =_d (F_1^{-1}(U), ..., F_d^{-1}(U)) for some U ∼ U(0, 1). As the following proposition shows, X | {S = K} is degenerate when X is comonotone.

Proposition 3 (X' | {S = K} under comonotonicity). Suppose X is a comonotone random vector with continuous margins F_1, ..., F_d. Then

    X | {S = K} = (F_1^{-1}(u*), ..., F_d^{-1}(u*))  P-a.s.,

where u* ∈ [0, 1] is the unique solution of ∑_{j=1}^d F_j^{-1}(u) = K as an equation of u ∈ [0, 1].

This result can be understood as an extremal case in which positive dependence (comonotonicity) implies unimodality of X | {S = K} (taking on the single point (F_1^{-1}(u*), ..., F_d^{-1}(u*)) with probability 1). When negative dependence comes into play, a wider variety of distributions, possibly multimodal ones, arise as X | {S = K} compared with the positively dependent case; see the following example for a case in which negative dependence implies multimodality of X | {S = K}.

Example 2 (X | {S = K} under extremal negative dependence). Let K > 0 and X_1 ∼ F for a continuous distribution function F supported on [0, ∞) such that X_1 | {X_1 ≤ K} is radially symmetric about K/2, that is, (X_1 − K/2) | {X_1 ≤ K} =_d (K/2 − X_1) | {X_1 ≤ K}. For U ∼ U(0, 1) define (X_1, X_2) by

    X_1 = F^{-1}(U) 1{U ≤ F(K)} + F^{-1}(U) 1{U > F(K)} = F^{-1}(U),
    X_2 = (K − F^{-1}(U)) 1{U ≤ F(K)} + F^{-1}(U) 1{U > F(K)}.

Then P(X_1 ≤ x) = P(F^{-1}(U) ≤ x) = P(U ≤ F(x)) = F(x) for all x ≥ 0. Moreover, the conditional radial symmetry of F implies that P(K − F^{-1}(U) ≤ x, U ≤ F(K)) = P(F^{-1}(U) ≤ x, U ≤ F(K)) and thus that

    P(X_2 ≤ x) = P(X_2 ≤ x, U ≤ F(K)) + P(X_2 ≤ x, U > F(K))
               = P(K − F^{-1}(U) ≤ x, U ≤ F(K)) + P(F^{-1}(U) ≤ x, U > F(K))
               = P(F^{-1}(U) ≤ x, U ≤ F(K)) + P(F^{-1}(U) ≤ x, U > F(K))
               = P(F^{-1}(U) ≤ x) = F(x),  x ≥ 0.

Therefore X_1 ∼ F and X_2 ∼ F. The dependence structure of (X_1, X_2) is a combination of positive and negative dependence. The body part {X_1 ≤ K} of X_1 and the tail part {X_2 > K} of X_2 are mutually exclusive in the sense that P(X_1 ≤ K, X_2 > K) = 0. Similarly, P(X_1 > K, X_2 ≤ K) = 0. In the tail part, X_1 and X_2 are comonotone in the sense that (X_1, X_2) = (F^{-1}(U), F^{-1}(U)) on {U > F(K)}. In the body part, X_1 and X_2 are countermonotone in the sense that (X_1, X_2) = (F^{-1}(U), K − F^{-1}(U)) on {U ≤ F(K)}. Since X_1 + X_2 = F^{-1}(U) + K − F^{-1}(U) = K on {U ≤ F(K)} and X_1 + X_2 = 2 F^{-1}(U) > 2K > K on {U > F(K)}, we have that

    {X_1 + X_2 = K} = {X_1 + X_2 = K, U ≤ F(K)} ∪ {X_1 + X_2 = K, U > F(K)} = {U ≤ F(K)}

and thus that

    (X_1, X_2) | {X_1 + X_2 = K} = (X_1, X_2) | {U ≤ F(K)} = (F^{-1}(U), K − F^{-1}(U)) | {U ≤ F(K)}.

Consequently, (X_1, X_2) | {S = K} has the homogeneous marginal distribution F_{X_1|{X_1≤K}} and the countermonotonic copula W. Therefore, multimodality of X | {S = K} appears when, for example, X_1 ∼ F has a bimodal distribution on the body part {X_1 ≤ K}.

Remark. The construction of (X_1, X_2) in Example 2 based on countermonotonicity can be extended to the multivariate case. Let K > 0 and X_1 ∼ F for a continuous distribution function F supported on [0, ∞) such that the conditional distribution F_{X_1|{X_1≤K}} is d-completely mixable with center K/d for d ≥ 3, that is, there exists a d-dimensional random vector Y = (Y_1, ..., Y_d), called the d-complete mix, such that Y_j ∼ F_{X_1|{X_1≤K}}, j = 1, ..., d, and Y_1 + ··· + Y_d = K almost surely. Such a random vector exists, for example, when F_{X_1|{X_1≤K}} admits a decreasing density with E[Y_1] = K/d; see Wang and Wang (2011, Corollary 2.9). Define X = (X_1, ..., X_d) by

    X_j = Y_j 1{U ≤ F(K)} + Z_j 1{U > F(K)},

where Y = (Y_1, ..., Y_d) is the d-complete mix of F_{X_1|{X_1≤K}}, U ∼ U(0, 1), Z_j ∼ F_{X_1|{X_1>K}}, j = 1, ..., d, and Y, U and Z_1, ..., Z_d are independent of each other. Then one can check that X_j ∼ F. Moreover, {X_1 + ··· + X_d = K} = {U ≤ F(K)} since

    S := X_1 + ··· + X_d = K 1{U ≤ F(K)} + (Z_1 + ··· + Z_d) 1{U > F(K)}

and Z_1 + ··· + Z_d > dK > K. Consequently, X | {X_1 + ··· + X_d = K} = X | {U ≤ F(K)} = Y almost surely, and thus X | {S = K} is the d-complete mix of X_1 | {X_1 ≤ K}. To construct a multimodal X | {S = K}, one can choose Y as an equally weighted mixture of the three Dirichlet distributions Dir(α, α, β), Dir(α, β, α) and Dir(β, α, α) for 0 < α < β. This mixture is a 3-complete mix since it has homogeneous marginal distributions and a constant sum. Moreover, Y has three distinct modes when, for example, α = 2 and β = 10, and thus X' | {S = K} is multimodal.

Tail behavior of X' | {S = K}

We now study the tail behavior of X' | {S = K} through its density. Since boundedness of X from below leads to a bounded support of X' | {S = K}, as shown in Section 3.1, we focus on the case when X is supported on R^d. In this case, the support of X' | {S = K} is R^{d'} and thus there are 2^{d'} orthants to be considered. Hereafter we consider tail behavior only on the first orthant {x' ∈ R^{d'} : x_1, ..., x_{d'} > 0} since tails on the other orthants can be discussed similarly. We study the following limiting behaviors of the ratio of densities.
Definition 2 (Multivariate regular and rapid variation of a density). Let X be a d-dimensional random vector with a density f_X.
1. X is called multivariate regularly varying with limit function λ : R^d_+ × R^d_+ → R_+ (at ∞ and on the first orthant), denoted by MRV(λ), if

    lim_{t→∞} f_X(t y) / f_X(t x) =: λ(x, y) > 0 for any x, y ∈ R^d_+,  (9)

provided the limit function λ exists.
2. X is called multivariate rapidly varying (at ∞ and on the first orthant), denoted by MRV(∞), if

    lim_{t→∞} f_X(st x) / f_X(t x) = { 0 if s > 1;  ∞ if 0 < s < 1 },  for any x ∈ R^d_+.

Note that we adopt this definition of regular variation of densities for its potential application to MCMC methods, where the ratio of target densities f_{X'|S=K}(y') / f_{X'|S=K}(x') at any two points x', y' ∈ R^{d'} is of interest; see Appendix D. Taking x = 1_d in (9) leads to the standard definition of regular variation introduced, for example, in Resnick (2007). Regular variation is typically described in terms of probability measures or survival functions, and these concepts of variation are connected to regular variation of densities through Resnick (2007, Theorem 6.4).

The following proposition states that one can find a limit function for X' | {S = K} based on that of X through the auxiliary random vector X̃ = (X', K − X_d).

Proposition 4 (Multivariate regular and rapid variation of X' | {S = K}).
1. Assume that X̃ = (X', K − X_d) is MRV(λ̃). Then X' | {S = K} is MRV(λ') with limit function

    λ'(x', y') = λ̃((x', 1_{d'}^⊤ x'), (y', 1_{d'}^⊤ y')),  x', y' ∈ R^{d'}_+.
2. If X̃ is MRV(∞), then X' | {S = K} is MRV(∞).

The sufficient conditions in Proposition 4 are more straightforward to check than those in Proposition 2 since X̃ does not depend on the sum S, and the joint distribution of X̃ can be specified through its marginal distributions and copula. The margins of X̃ are F̃_j = F_j, j = 1, ..., d', and F̃_d(x_d) = F̄_d(K − x_d), and the copula C̃ of X̃ is the distribution function of (U_1, ..., U_{d'}, 1 − U_d), where U ∼ C and C is the copula of X. This enables one to find a limit function for X̃; see Li (2013), Li and Wu (2013), Li and Hua (2015) and Joe and Li (2019).

As the following proposition shows, in the elliptical case the limit function is determined by the density generator g.

Proposition 5 (Multivariate regular and rapid variation for elliptical distributions). Assume X ∼ E_d(µ, Σ, ψ) admits a density with density generator g continuous on R_+.
1. If g is regularly varying in the sense that

    lim_{t→∞} g(tu) / g(ts) = λ_g(s, u),  s, u > 0,

then X' | {S = K} is MRV(λ_K) with

    λ_K(x', y') = λ_g(x'^⊤ Σ_K^{-1} x', y'^⊤ Σ_K^{-1} y'),  x', y' ∈ R^{d'}_+.
2. If g is rapidly varying in the sense that

    lim_{t→∞} g(st) / g(t) = { 0 if s > 1;  ∞ if 0 < s < 1 },

then X' | {S = K} is MRV(∞).

Example 3 (Normal and Student t distributions). The multivariate normal distribution has a rapidly varying density generator g(t) = exp(−t), and thus its corresponding conditional distribution X' | {S = K} is also rapidly varying by Proposition 5, Part 2. Next, suppose X follows a d-dimensional Student t distribution with degrees of freedom ν ≥ 1. Its density generator (8) is regularly varying with limit function

    lim_{t→∞} g(tu) / g(ts) = (u/s)^{−(ν+d)/2},  u, s > 0.

Therefore, X' | {S = K} is regularly varying with limit function

    lim_{t→∞} f_{X'|S=K}(t y') / f_{X'|S=K}(t x') = ( ||Σ_K^{-1/2} y'|| / ||Σ_K^{-1/2} x'|| )^{−(ν+d)},  x', y' ∈ R^{d'}_+,

where || · || is the Euclidean norm on R^{d'}.

Modality of X' | {S = K}

Next we study the modality of X' | {S = K}. Among the various definitions of unimodality considered in the literature, we adopt those defined based on the level set

    L_t(f) := {x ∈ R^d : f(x) ≥ t},  t ∈ (0, max{f(x) : x ∈ R^d}],

where f is a density on R^d which is assumed to be bounded for simplicity so that max{f(x) : x ∈ R^d} exists. By definition, L_t(f) is a decreasing set, that is, L_{t'}(f) ⊆ L_t(f) for 0 < t ≤ t'. We also write L_t(X) for L_t(f) if X has density f. A set A ⊆ R^d is called star-shaped about x ∈ A if, for any y ∈ A, the line segment from x to y is in A.

Definition 3 (Concepts of unimodality). For a bounded density function f on R^d, we call M(f) = L_{t*}(f) the mode set for t* = max{f(x) : x ∈ R^d}. If L_{t*}(f) = {m}, then we call m ∈ R^d the mode of f. Furthermore, f is said to be weakly unimodal if L_t(f) is connected, star unimodal about the center x ∈ R^d if L_t(f) is star-shaped about x, and convex unimodal if L_t(f) is convex, for all 0 < t ≤ t*.

From Definition 3, convex unimodality implies star unimodality, and star unimodality implies weak unimodality. Other notions of unimodality, such as block unimodality, linear unimodality, monotone unimodality, α-unimodality, orthounimodality and Khinchin's unimodality, are not introduced in this paper due to their intractability for our purpose; see Dharmadhikari and Joag-Dev (1988) for a comprehensive discussion of unimodality.
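The connectivity requirement of weak unimodality can be probed numerically from a density evaluated on a grid. A small Python sketch (one-dimensional, with a hypothetical two-component normal mixture standing in for a conditional density) counts the connected components of L_t(f):

```python
# Number of connected components of the level set L_t(f) = {x : f(x) >= t},
# evaluated on a grid, for a (hypothetical) bimodal normal mixture density.
import numpy as np

def dnorm(x, m, s):
    return np.exp(-0.5 * ((x - m) / s)**2) / (s * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 20_001)
f = 0.5 * dnorm(x, -3.0, 1.0) + 0.5 * dnorm(x, 3.0, 1.0)  # modes near +-3

def n_components(t):
    """Count maximal runs of consecutive grid points with f >= t."""
    mask = f >= t
    prev = np.concatenate(([False], mask[:-1]))
    return int(np.count_nonzero(mask & ~prev))  # number of run starts

# high level: two separate components around the modes; low level: connected
print(n_components(0.10), n_components(0.003))  # -> 2 1
```

In the same way, the weak unimodality of X' | {S = K} can be probed by evaluating f_X(x', K − 1_{d'}^⊤ x') on a grid of candidate allocations, since by (4) only density evaluations up to the constant f_S(K) are needed.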
Defining notions of unimodality in terms of the shape of the level set L_t(f) fits our purpose in several ways. As mentioned in Section 2.2, L_t(X) can be understood as a plausible scenario set with plausibility level t > 0, and L_t(X | {S = K}) can be regarded as a set of severe and plausible stress scenarios the total capital K is supposed to cover. From these interpretations, we believe that unimodality should describe the simplicity of these level sets, such as connectivity and convexity. The level set L_t(f) is also important when f is simulated with MCMC methods since the ratio of levels of f is a primary quantity of interest for such methods. MCMC methods need to be specifically designed when L_t(f) is not connected since in this case a Markov chain needs to traverse distinct regions to simulate samples from the entire space.

Note that uniqueness of the maximum of a density f, that is, the mode set of f being a singleton L_{t*}(f) = {m} for m ∈ R^d, is an important but different concept of unimodality from those in Definition 3. The notions of unimodality in Definition 3 concern the overall shape of a density through its level sets, whereas uniqueness of the maximum of f is a purely analytical property of the derivative of f. In addition, uniqueness of the maximum is not an appropriate concept of unimodality when the relationship between X and X' | {S = K} is of interest. In fact, uniqueness of the maximum of f_{X'|S=K} is equivalent to that of f_X on the restricted domain K_d(K) via (4), and thus uniqueness of the maximum of f_X on the entire support R^d does not provide any information on the shape of f_X on K_d(K) unless the mode of f_X on R^d lies in K_d(K).

The following proposition reveals relationships between unimodality of X and that of X' | {S = K}.

Proposition 6 (Unimodality of X' | {S = K}).
1. Suppose X ∼ E_d(µ, Σ, ψ) admits a density with density generator g. If g is decreasing on R_+, then f_{X'|{S=K}} is convex unimodal. Furthermore, if the equation g(t) = Δ_K in t ∈ R_+ has a unique solution t*_K, then f_{X'|{S=K}} has the mode m = µ_K.

2. If X is convex unimodal, then X' | {S = K} is convex unimodal.

In contrast, neither weak nor star unimodality of X implies any of the unimodality concepts introduced in Definition 3 for X' | {S = K}. To provide a counterexample, we introduce the following class of distributions.

Definition 4 (Homothetic density). A d-dimensional random vector X is said to have a homothetic density, denoted by X ∼ H(µ, D, r), with location parameter µ ∈ R^d, shape set D ⊆ R^d and scaling function r : R_+ → R_+ if X − µ admits a density f_D satisfying

L_t(f_D) = r(t)D := {s x : 0 ≤ s ≤ r(t), x ∈ D}

for some continuous and decreasing function r and a bounded set D ⊆ R^d that is star-shaped around 0, such that

∫_0^∞ Leb_d(r(t)D) dt = 1,   (10)

where Leb_d denotes the Lebesgue measure on R^d.

Note that Condition (10) is required to ensure that ∫_{R^d} f_D(x) dx = 1. To see this, observe that

∫_{R^d} f_D(x) dx = ∫_{R^d} ∫_0^{f_D(x)} dt dx = ∫_{R^d} ∫_0^∞ 1{x ∈ L_t(f_D)} dt dx = ∫_0^∞ Leb_d(L_t(f_D)) dt = ∫_0^∞ Leb_d(r(t)D) dt = 1.

Homothetic distributions arise partly from l_p-spherical distributions (Osiewalski, 1993), where the level sets are determined as balls in the l_p-norm, and from a further generalized class of distributions called the v-spherical distributions (Fernandez et al., 1995). Examples of homothetic distributions include skew-normal distributions and rotund-exponential distributions; see Balkema and Nolde (2010). It is straightforward to check that X ∼ H(0_d, D, r) is star unimodal about x ∈ R^d if D is star-shaped about x, and convex unimodal if D is convex. Suppose X ∼ H(0_d, D, r) for a convex set D.
Then X is convex unimodal and so is X' | {S = K} by Proposition 6. For this homothetic distribution, the level set of X' | {S = K} embedded in R^d has the following representation:

{x ∈ R^d : x' ∈ L_t(X' | {S = K}), x_d = K − 1_{d'}^T x'}
= {x ∈ R^d : f_{X'|{S=K}}(x') ≥ t, x_d = K − 1_{d'}^T x'}
= {x ∈ R^d : f_X(x) ≥ t f_S(K)} ∩ K_d(K)
= r(t f_S(K)) D ∩ K_d(K)
= {s x : x ∈ D, 0 ≤ s ≤ r(t f_S(K))} ∩ K_d(K)
= {(K / 1_d^T x) x : x ∈ D, 1_d^T x ≥ K / r(t f_S(K))}
= {(K / 1_d^T x) x : x ∈ ∪_{k ≥ K/r(t f_S(K))} D ∩ K_d(k)},

that is, the level set L_t(X' | {S = K}) embedded in R^d is the collection of points of D lying in the upper half-space {x ∈ R^d : 1_d^T x ≥ K / r(t f_S(K))} projected onto K_d(K).

The following example shows that neither weak unimodality nor star unimodality of X implies any of the unimodality concepts introduced in Definition 3 for X' | {S = K}.

Example 1. Consider X ∼ H(0_2, D, r), where D = ([−2, 2] × [−1, 1]) ∪ ([−1, 1] × [−2, 2]) and r(t) = √(exp(−t)/12). The cross-shaped set D is star-shaped (and thus connected) around (0, 0), and r is a continuous and decreasing function. Furthermore, the pair (D, r) satisfies Condition (10) since

∫_0^∞ Leb_2(r(t)D) dt = Leb_2(D) ∫_0^∞ r(t)² dt = 12 ∫_0^∞ (1/12) exp(−t) dt = 1.

Take K = 1/3. For the level t > 0 with r(t f_S(K)) = 1/8 (such t exists since r is continuous and decreasing), we have r(t f_S(K)) D = D/8 = ([−1/4, 1/4] × [−1/8, 1/8]) ∪ ([−1/8, 1/8] × [−1/4, 1/4]), and intersecting with the line x_1 + x_2 = 1/3 yields L_t(X' | {S = K}) = [1/12, 1/8] ∪ [5/24, 1/4]. This level set is a union of two disjoint intervals and hence not connected, so X' | {S = K} is not even weakly unimodal although X is star unimodal.

Moreover, even if X is convex unimodal, this does not imply any unimodality for its marginal distributions; see Balkema and Nolde (2010, Example A.3) for a counterexample. The following example shows that marginal unimodality also does not imply joint unimodality.

Example 2. Consider the following bivariate density:

f(u, v) = (9/4) 1{(u, v) ∈ ∪_{i=1}^3 [(i−1)/3, i/3]²} + (9/4) 1{(u, v) ∈ [1/3, 2/3]²},   (u, v) ∈ [0, 1]²,

which has the convex unimodal marginal densities

f_1(u) = f_2(u) = (3/4) 1{u ∈ [0, 1]} + (3/4) 1{u ∈ [1/3, 2/3]},   u ∈ [0, 1].

However, L_{9/4}(f) = [0, 1/3]² ∪ [1/3, 2/3]² ∪ [2/3, 1]² is neither convex nor star-shaped.

Joint unimodality implies marginal unimodality for certain classes of distributions. As shown in Balkema and Nolde (2010), l_p-spherical distributions form a subclass of homothetic densities for which unimodality is preserved under marginalization. This property also holds for the class of s-concave densities, which is also closed under the operation X ↦ X' | {S = K}; see Appendix B.

The point x* ∈ R^d maximizing the density f_X on K_d(K) can be regarded as the most likely loss occurring under the scenario {S = K} that is covered by the given total capital K. In addition to this interpretation, and as we saw in Section 2.2, detecting the mode(s) of X | {S = K} is beneficial for discovering hidden risky scenarios and for evaluating the soundness of risk allocations. In this section we focus on the global mode x* and study the properties it should satisfy as a risk allocation. For notational convenience, we denote by U_d(K) the set of all d-dimensional random vectors X such that X and (X', S) admit density functions and x ↦ f_X(x) 1{x ∈ K_d(K)} has a unique maximum.
For X ∈ U_d(K), X' | {S = K} admits a density through (4), and f_{X'|{S=K}} has a unique maximum attained at the mode of X' | {S = K}. By Proposition 6, elliptical random vectors with continuous and decreasing density generators form a subclass of U_d(K). Although some exchangeable random vectors possessing negative dependence, such as Model (b) in Section 2.2, may not be included in U_d(K), we believe that most loss models used in the practice of risk management are contained in U_d(K). As explained in Section 3.5, uniqueness of the mode of X' | {S = K} and its unimodality are different concepts, and thus the class U_d(K) contains multimodal random vectors in the sense that the level set L_t(X' | {S = K}) is not connected for some t > 0, or f_{X'|{S=K}} has multiple local maximizers (we call them the local modes of X' | {S = K}). In this section we solely focus on the unique global maximizer of f_{X'|{S=K}} (not on local ones) and study properties of the mode as a risk allocation. As we emphasized in Section 2.2, such multimodal distributions should be treated with care. In Section 5, we show that multimodality can be detected by searching for the modes of f_{X'|{S=K}}. In the following we define the unique mode of X' | {S = K} as a risk allocation of K.

Definition 5 (Maximum likelihood allocation). For
K > 0 and X ∈ U_d(K), the maximum likelihood allocation (MLA) on a set 𝒦 ⊆ K_d(K) is defined by

K^M[X; 𝒦] = argmax{f_X(x) : x ∈ 𝒦},

provided the function x ↦ f_X(x) 1{x ∈ 𝒦} has a unique maximum. When 𝒦 = K_d(K), we call it the maximum likelihood allocation.
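As a minimal numerical sketch of Definition 5 (our own illustration with a hypothetical bivariate normal loss, not the paper's code), the MLA can be computed by maximizing the joint density along the hyperplane K_2(K); for this elliptical example the result can be checked against the closed form obtained from a Lagrange-multiplier argument, consistent with the mode µ_K in Proposition 6.

```python
import numpy as np

mu = np.array([1.0, 2.0])                       # assumed location
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])      # assumed dispersion
A = np.linalg.inv(Sigma)
K = 5.0

# Parameterize K_2(K) = {x : x1 + x2 = K} by x1 and evaluate the Gaussian
# log-density (up to an additive constant) along the line
x1 = np.linspace(-10.0, 10.0, 200_001)
d1, d2 = x1 - mu[0], (K - x1) - mu[1]
logf = -0.5 * (A[0, 0]*d1**2 + 2*A[0, 1]*d1*d2 + A[1, 1]*d2**2)
best = x1[np.argmax(logf)]
mla_num = np.array([best, K - best])

# Lagrange-multiplier closed form for a normal density under 1'x = K:
# x* = mu + Sigma 1 (K - 1'mu) / (1'Sigma 1)
one = np.ones(2)
mla_exact = mu + (Sigma @ one) * (K - one @ mu) / (one @ Sigma @ one)
print(mla_num, mla_exact)
```

Note that the numerical MLA satisfies the full allocation property by construction, since the search is restricted to the hyperplane.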
By (4), the MLA of K on 𝒦 can be equivalently formulated as

K^M[X; 𝒦] = argmax{f_{X'|{S=K}}(x') : (x', K − 1_{d'}^T x') ∈ 𝒦}.

By definition, the MLA on
𝒦 ⊆ K_d(K) is an allocation of K in the sense that it satisfies the full allocation property 1_d^T K^M[X; 𝒦] = K. We mainly study the case 𝒦 = K_d(K). However, as we will see in Section 4.2 and Appendix D.2, the set 𝒦 can be chosen so that K^M[X; 𝒦] satisfies additional desirable properties of a risk allocation. We now investigate properties of the MLA as a risk allocation principle. For desirable properties of risk allocations in the case when the capital K is exogenously given as a constant, see Maume-Deschamps et al. (2016). By construction, K^M[X; 𝒦] always satisfies the full allocation property (1). The following proposition summarizes further desirable properties of the MLA.

Proposition 7 (Properties of MLA). Suppose
K > 0 and X ∈ U_d(K).

1. Translation invariance: K^M[X + c; K_d(K + 1_d^T c)] = K^M[X; K_d(K)] + c for c ∈ R^d.

2. Positive homogeneity: K^M[cX; K_d(cK)] = c K^M[X; K_d(K)] for c > 0.

3. Symmetry: For (i, j) ∈ {1, ..., d}², i ≠ j, let X̃ be a d-dimensional random vector such that X̃_j = X_i, X̃_i = X_j and X̃_k = X_k for k ∈ {1, ..., d}\{i, j}. If X =_d X̃, then K^M[X; K_d(K)]_i = K^M[X; K_d(K)]_j, where K^M[X; K_d(K)]_l is the l-th component of K^M[X; K_d(K)] for l = 1, ..., d.

4. Continuity: Suppose X_n, X ∈ U_d(K) have densities f_n and f for n = 1, 2, ..., respectively. If f_n is uniformly continuous and bounded for n = 1, 2, ..., and X_n → X weakly, then lim_{n→∞} K^M[X_n; K_d(K)] = K^M[X; K_d(K)].

Translation invariance states that a sure loss c ∈ R^d requires the same amount of allocated capital, and the rest of the total capital is allocated to the random loss X. Positive homogeneity means that, for a proportion c >
0, 100c% of the loss X requires 100c% of the total capital K, and the resulting MLA of cX is 100c% of the allocation derived from X and K. Symmetry implies that, if exchanging two marginal losses does not change the distribution of the joint loss, then equal amounts of capital are allocated to them. Finally, continuity ensures that if the MLA is calculated from an estimated model f_n of f, then the estimated MLA is close to the true MLA as long as f_n correctly estimates f.

Next we cover properties that need to be considered separately.

1. RORAC compatibility and core compatibility: RORAC compatibility and core compatibility are important properties of risk allocations since either of them characterizes the Euler allocation; see Tasche (1995) and Denault (2001). However, the definitions of these properties are not meaningful when K is exogenously given as a constant. Moreover, constraints similar to those in core compatibility can additionally be imposed on K_d(K) so that the resulting MLA satisfies desirable core properties; see Appendix D.2 for details.

2. Riskless asset: The riskless asset condition requires a sure loss X_j = c_j, c_j ∈ R, to be covered by the allocated capital c_j. This property needs to be considered separately since in this case X does not admit a density. Suppose that X_j = c_j ∈ R for j ∈ I ⊆ {1, ..., d}, and that X_{−I} := (X_j, j ∈ {1, ..., d}\I) admits a density f_{X_{−I}}. Since

(X_I, X_{−I}) | {S = K} =_d (c, X_{−I}) | {1_{|−I|}^T X_{−I} = K − 1_{|I|}^T c} =_d (c, X_{−I} | {1_{|−I|}^T X_{−I} = K − 1_{|I|}^T c}),   (11)

any realization x of X | {S = K} satisfies x_I = c, and the likelihood of x is quantified through the density f_{X_{−I} | {1_{|−I|}^T X_{−I} = K − 1_{|I|}^T c}}(x_{−I}).
According to this discussion, a natural extension of the definition of the MLA to such a random vector X is

K^M[X; K_d(K)]_I = c,   K^M[X; K_d(K)]_{−I} = K^M[X_{−I}; K_{|−I|}(K − 1_{|I|}^T c)],   (12)

which is compatible with the riskless asset property.

3. Allocation under comonotonicity: Suppose X is a comonotone random vector with continuous margins F_1, ..., F_d. By Proposition 3, X | {S = K} = (F_1^{−1}(u*), ..., F_d^{−1}(u*)) almost surely, where u* ∈ [0,
1] is the unique solution of Σ_{j=1}^d F_j^{−1}(u) = K. According to the extended definition (12), we have

K^M(X; K_d(K)) = (F_1^{−1}(u*), ..., F_d^{−1}(u*)).

We now discuss the suitability of the MLA as a risk allocation method and compare Euler and maximum likelihood allocations. Here we define the Euler allocation by E[X | {S = K}], which equals the VaR contributions (2) with K = VaR_p(S) for some confidence level p ∈ (0, 1). The Euler allocation satisfies E[X + c | {1_d^T(X + c) = K + 1_d^T c}] = E[X | {1_d^T X = K}] + c for c ∈ R^d (translation invariance) and E[cX | {1_d^T(cX) = cK}] = c E[X | {1_d^T X = K}] for c > 0 (positive homogeneity), and it coincides with the MLA when X is elliptically distributed. From a statistical point of view, one advantage of the MLA as a mode of X' | {S = K} is that it is robust to outliers, that is, the MLA is insensitive to severe but implausible scenarios. Another advantage of the MLA is that one can detect multimodality of X' | {S = K}, and thus hidden risky scenarios, by searching for the modes of X' | {S = K}. Based on this information on multimodality, one can evaluate the soundness of risk allocations and design more flexible allocations, for example by averaging the (local) modes with appropriate weights. On the other hand, the main disadvantage compared with the Euler allocation is that estimating modes becomes more difficult than estimating a mean as the dimension of the portfolio becomes larger. Summarizing these aspects, we believe that the MLA and the procedure of searching for (local) modes of X' | {S = K} are best suited for assessing the soundness of risk allocations in stress testing applications, for discovering multiple hidden scenarios likely to occur in the stressed situation {S = K}, and eventually for constructing more flexible risk allocations by weighting important scenarios.
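The comonotone case above reduces to a one-dimensional root search for u*. A sketch with hypothetical exponential margins (our own example; any continuous, strictly increasing margins work the same way, and for exponentials a closed form is available as a check):

```python
import math

# Comonotone losses: every component equals F_j^{-1}(U) for one uniform U, so
# X | {S = K} is the deterministic point (F_1^{-1}(u*), ..., F_d^{-1}(u*))
# with sum_j F_j^{-1}(u*) = K. Assumed exponential margins: F_j^{-1}(u) = -log(1-u)/lam_j.
lam = [1.0, 0.5, 0.25]
K = 7.0

def quantile_sum(u):
    return sum(-math.log(1.0 - u) / l for l in lam)

# Bisection for u* on (0, 1): quantile_sum is continuous and increasing
lo, hi = 0.0, 1.0 - 1e-15
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if quantile_sum(mid) < K:
        lo = mid
    else:
        hi = mid
u_star = 0.5 * (lo + hi)
alloc = [-math.log(1.0 - u_star) / l for l in lam]

# Closed form for exponentials: u* = 1 - exp(-K / sum_j 1/lam_j)
u_exact = 1.0 - math.exp(-K / sum(1.0 / l for l in lam))
print([round(a, 4) for a in alloc], round(u_star, 6), round(u_exact, 6))
```

The resulting allocation sums to K by construction, so the full allocation property holds automatically under comonotonicity.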
In this section we conduct an empirical and a simulation study to compute Euler and maximum likelihood allocations, and compare them for various models. Simulating from the conditional distribution given a constant sum is in general challenging. Throughout this section, we adopt the so-called (crude) Monte Carlo (MC) method to simulate X' | {S = K}: unconditional samples from X are first generated, and the samples falling in the region K_d(K, δ) = {x ∈ R^d : K − δ < 1_d^T x < K + δ} for a sufficiently small δ > 0 are then standardized via x_j ↦ K x_j / Σ_{j=1}^d x_j so that their componentwise sum equals K. Finally, the standardized samples are used as pseudo-samples from X' | {S = K}. See Appendix D.1 for the potential bias caused by this method, and for more sophisticated simulation approaches to X' | {S = K} based on MCMC methods.

In this section we estimate the proposed MLA nonparametrically for real financial data. We consider daily log-returns of the stock indices FTSE (X_{t,1}), S&P 500 (X_{t,2}) and Dow Jones Index (DJI) (X_{t,3}) from January 2, 1990 to March 25, 2004, which contains 3713 days and thus T = 3712 log-returns. We consider the two portfolios (a) X^pos_t = (X_{t,1}, X_{t,2}, X_{t,3}) and (b) X^neg_t = (X_{t,1}, −X_{t,2}, X_{t,3}). For each portfolio, we aim at allocating the capital K = 1 based on the conditional loss distribution at time T + 1 given the history up to and including time T. Taking into account the stylized facts of stock returns listed in Chapter 3 of McNeil et al. (2015) (such as unimodality, heavy-tailedness and volatility clusters), we adopt a copula-GARCH model with marginal skew-t innovations (ST-GARCH; see, for example, Jondeau and Rockinger (2006) and Huang et al. (2009)).

Table 1: Maximum likelihood estimates and estimated standard errors (in parentheses) of the ST-GARCH(1,1) models X_{t,j} = μ_j + σ_{t,j} Z_{t,j} with σ²_{t,j} = ω_j + α_j X²_{t−1,j} + β_j σ²_{t−1,j} and Z_{t,j} iid ∼ ST(ν_j, γ_j) for j = 1, ..., d.

                       μ_j      ω_j      α_j      β_j      γ_j      ν_j
X^{pos/neg}_{t,1}      0.053    0.006    0.052    0.943    0.969    6.
                      (0.013)  (0.002)  (0.008)  (0.008)  (0.021)  (0. )
X^{pos}_{t,2}          0.050    0.003    0.049    0.950    0.983    6.
                      (0.013)  (0.001)  (0.007)  (0.007)  (0.021)  (0. )
X^{neg}_{t,2}         −0.050    0.003    0.049    0.950    1.018    6.
                      (0.013)  (0.001)  (0.007)  (0.007)  (0.022)  (0. )
X^{pos/neg}_{t,3}      0.031    0.011    0.071    0.920    0.966   10.
                      (0.014)  (0.003)  (0.009)  (0.010)  (0.023)  (1. )

We utilize a GARCH(1,
1) model with skew-t innovations with degrees of freedom ν_j > 0 and skewness parameter γ_j > 0 for the j-th marginal time series. That is, within a fixed time period {1, ..., T + 1}, the j-th return series (X_{1,j}, ..., X_{T+1,j}) follows

X_{t,j} = μ_j + σ_{t,j} Z_{t,j},   σ²_{t,j} = ω_j + α_j X²_{t−1,j} + β_j σ²_{t−1,j},   Z_{t,j} iid ∼ ST(ν_j, γ_j),   j = 1, ..., d,

where ω_j > 0, α_j, β_j ≥ 0, α_j + β_j <
1, and Z_{t,j} follows a skew-t distribution ST(ν_j, γ_j) with density given by

f_j(x_j; ν_j, γ_j) = (2 / (γ_j + 1/γ_j)) { t(x_j/γ_j, ν_j) 1[x_j ≥ 0] + t(γ_j x_j, ν_j) 1[x_j < 0] },   (13)

where t(x, ν) is the density function of a Student t distribution with ν > 0 degrees of freedom and γ_j > 0 is a skewness parameter, with γ_j = 1 leading to the standard symmetric case; see Fernández and Steel (1998) for more details. The copula of the stationary process Z_t = (Z_{t,1}, ..., Z_{t,d}), denoted by C, is estimated nonparametrically. Under this model, the joint distribution of the returns X_{T+1}|F_T = (X_{T+1,1}|F_T, ..., X_{T+1,d}|F_T) has marginal distributions ST(μ_j, σ_{T+1,j}, ν_j, γ_j), j = 1, ..., d, and copula C, where ST(μ_j, σ_{T+1,j}, ν_j, γ_j) is the skew-t distribution with density f_j((x_j − μ_j)/σ_{T+1,j}; ν_j, γ_j) with f_j(·; ν_j, γ_j) defined in (13). Parameters of the ST-GARCH(1,1) models are estimated by maximum likelihood; the results are summarized in Table 1.

For each of the cases (a) and (b), we take K = 1 and estimate the Euler allocation and the MLA by a resampling method. After extracting the marginal standardized residuals, we build their pseudo-observations as a pseudo-sample from C. We then generate samples of size N = 3712 by resampling with replacement. The samples from C are then marginally transformed by skew-t distributions with parameters as specified in Table 1. From these samples of X_{T+1}|F_T, we extract the subsamples falling in the region K_d(K, δ) := {x ∈ R³ : K − δ < Σ_{j=1}³ x_j < K + δ} where δ = 0.
3. These samples are then standardized via K X_{t,j} / Σ_{j=1}^d X_{t,j} so that they add up to K. Scatter plots of the first two components of these data are shown in Figure 2.

The 3712 data points lead to 354 and 558 samples from X^pos_{T+1}|F_T and X^neg_{T+1}|F_T on K_d(K, δ), respectively. Based on these conditional samples, we estimate the Euler allocation E[X | {S = K}] and the maximum likelihood allocation, that is, the mode of f_{X|{S=K}}, provided it is unique. The (possibly multiple) modes

Figure 2: Scatter plots (black dots) of the first two components of (a) X^pos_t = (X_{t,1}, X_{t,2}, X_{t,3}) and
(b) X^neg_t = (X_{t,1}, −X_{t,2}, X_{t,3}) for daily log-returns of the stock indices FTSE (X_{t,1}), S&P 500 (X_{t,2}) and Dow Jones Index (DJI) (X_{t,3}) falling in the region K_d(K, δ) = {x ∈ R³ : K − δ < Σ_{j=1}³ x_j < K + δ} with δ = 0.3 and K = 1. The dotted lines represent the line x + y = K. The red dot represents the Euler allocation E[X' | {S = K}] and the blue dot represents the maximum likelihood allocation, the mode of f_{X'|{S=K}}.

were estimated by the function kms of the R package ks. As inferred from the ellipticality of the scatter plots in Figure 2, a unique mode was discovered in each case. The first two components of the two allocations are indicated in Figure 2.

Next, we estimate the standard errors of the Euler and maximum likelihood allocations using the bootstrap. We compute the Euler allocation, the MLA and their standard errors based on B = 100 bootstrap samples of size N = 3712 resampled from the original data with replacement. The results are summarized in Table 2.

In Figure 2 we observe that, compared with Case (a), the distribution in Case (b) is more spread out and losses take larger absolute values. If the samples are regarded as stressed scenarios, the scenario set in Case (b) contains a wider variety of scenarios than that in Case (a), since both positive and negative losses can appear in Case (b), whereas most realizations are positive in Case (a). Nevertheless, as observed in Table 2, in both cases the Euler allocation and the MLA are close to each other, also in terms of standard errors. This observation does not conflict with the stylized fact that joint log-returns approximately follow an elliptical distribution, for which the mean (Euler allocation) of X | {S = K} coincides with its mode; see Proposition 1 and Proposition 6 Part 1.
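The crude-MC conditioning and the two estimators of this section can be sketched as follows (illustrative only: a hypothetical lognormal loss model and an ad-hoc kernel bandwidth stand in for the fitted ST-GARCH model and for dedicated mode seeking such as ks::kms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Crude MC: simulate unconditional losses, keep those with |1'x - K| < delta,
# then rescale each kept sample to sum exactly to K (the standardization step).
d, K, delta, N = 3, 5.0, 0.25, 100_000
cov = 0.2 * np.eye(d) + 0.05               # assumed positively dependent model
X = np.exp(rng.multivariate_normal(np.zeros(d), cov, size=N))  # lognormal losses
cond = X[np.abs(X.sum(axis=1) - K) < delta]
cond = K * cond / cond.sum(axis=1, keepdims=True)

euler = cond.mean(axis=0)                  # estimate of E[X | {S = K}]

# Naive mode estimate: the conditional sample with the largest Gaussian-kernel
# density value (a crude stand-in for kernel mean-shift mode estimation).
sub = cond[:1000]                          # cap the pairwise computation
h = 0.2                                    # assumed bandwidth
sq = ((sub[:, None, :] - sub[None, :, :]) ** 2).sum(-1)
mode = sub[np.exp(-sq / (2 * h * h)).sum(axis=1).argmax()]

print(euler.round(3), mode.round(3))       # both sum to K by construction
```

Since the rescaled samples lie on K_d(K), both estimators automatically satisfy the full allocation property, mirroring the construction in the text.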
A potential drawback of the nonparametric estimation of Euler and maximum likelihood allocations is that the sample size is often not sufficient for statistical estimation due to the sum constraint. To avoid this issue, one can first fit a parametric model to the unconditional samples, and then take subsamples of simulated samples from the fitted parametric model to estimate Euler and maximum likelihood allocations. In this section, we consider four models, referred to as (M1), (M2), (M3) and (M4), respectively, with d = 3 and having the same marginal distributions X_1 ∼ Par(2, 5), X_2 ∼ Par(2.5, 5) and X_3 ∼ Par(3, 5) (where Par(θ, λ) denotes the Pareto distribution with shape parameter θ > 0 and scale parameter λ > 0) but different t copulas with degrees of freedom ν = 5 and dispersion matrices P_1, P_2, P_3 and P_4 given in (14), corresponding to strong positive dependence (M1), positive dependence (M2), independence (M3) and negative dependence (M4), respectively.

Table 2: Bootstrap estimates and estimated standard errors of the Euler allocation and the MLA of X^pos = (X_1, X_2, X_3) and X^neg = (X_1, −X_2, X_3) for daily log-returns of the stock indices FTSE (X_1), S&P 500 (X_2) and Dow Jones Index (DJI) (X_3). The subsample size is N = 3712 and the bootstrap sample size is B = 100.

                        Estimate                  Standard error
                      X_1     X_2     X_3      X_1     X_2     X_3
E[X^pos | {S = K}]   0.378   0.338   0.285    0.019   0.022   0.038
K^M[X^pos; K_d(K)]   0.367   0.365   0.268    0.019   0.024   0.041
E[X^neg | {S = K}]   0.345   −
K^M[X^neg; K_d(K)]   0.371   −

For these parametric models, we first simulate N = 10^5 samples from the unconditional distribution and then extract the subsamples falling in the region K_d(K, δ) with K = 40 and δ = 1. The (pseudo-)samples from X' | {S = K} are shown in Figure 3. The red point in the figure represents the Euler allocation and the blue points are the (local) modes, which are estimated similarly as in Section 5.1.

In Figure 3 we observe that the conditional distribution is more concentrated under positive dependence (Models (M1) and (M2)) and more dispersed under negative dependence (Model (M4)). Regarding the samples as stressed scenarios, the scenario sets in Models (M3) and (M4) are more worrisome than those of Models (M1) and (M2), since the former contain two distinct scenarios, one around the first axis and one around the upper-left corner of the plot region, both of which are likely to occur in the stressed situation {S = K}. Unimodality of the conditional distribution in Models (M1) and (M2) leads to a closer Euler allocation and MLA; for these two models, the choice between the Euler allocation and the MLA does not significantly change the resulting allocation. On the other hand, for Models (M3) and (M4), the conditional distributions are multimodal. Therefore, the MLA may not be uniquely determined, and the Euler allocation provides an average over multiple modes. For these two models, more careful decision making is required, not only by computing the Euler allocation but also by searching for the distinct modes interpreted as risky scenarios, and by considering how much weight is to be assigned to each of them based on, for example, expert opinion and the tolerance of each business line.

To investigate the standard errors of the estimators, we compute the estimates of the Euler allocation and the (local) modes of f_{X'|{S=K}} 100 times for each model. For each repetition, we simulate samples from X so that there are 500 samples in the region K_d(K, δ). The estimates and standard errors are computed based on the 100 replications, and the results are summarized in Table 3. We can again see that for Models (M1) and (M2) the two allocations are close. On the other hand, especially for Models (M3) and (M4), where the conditional distributions are multimodal, the standard errors of the (local) modes are higher than those of the Euler allocation.
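The parametric variant of this section can be sketched as follows, with two loud assumptions: a Gaussian copula stands in for the t copulas of (14) (whose entries we do not reproduce), and Par(θ, λ) is taken with survival function (λ/(λ+x))^θ, so that F^{-1}(u) = λ((1−u)^{−1/θ} − 1).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)

def pareto_quantile(u, theta, lam=5.0):
    # Par(theta, lam) with survival function (lam/(lam + x))^theta (assumption)
    return lam * ((1.0 - u) ** (-1.0 / theta) - 1.0)

def sample(P, n, thetas):
    # Gaussian-copula stand-in: correlate normals, map to U(0,1), then to margins
    Z = rng.standard_normal((n, P.shape[0])) @ np.linalg.cholesky(P).T
    U = 0.5 * (1.0 + np.vectorize(erf)(Z / sqrt(2.0)))   # standard normal cdf
    return np.column_stack([pareto_quantile(U[:, j], th)
                            for j, th in enumerate(thetas)])

thetas, K, delta = (2.0, 2.5, 3.0), 40.0, 1.0
P_neg = np.array([[1.0, -0.3, -0.3],       # assumed negative-dependence matrix,
                  [-0.3, 1.0, -0.3],       # a hypothetical stand-in for P_4
                  [-0.3, -0.3, 1.0]])

X = sample(P_neg, 300_000, thetas)
cond = X[np.abs(X.sum(axis=1) - K) < delta]       # conditional subsample
print(cond.shape[0], cond.mean(axis=0).round(2))  # Euler-allocation estimate
```

Replacing P_neg by a positively dependent dispersion matrix reproduces the qualitative contrast discussed above: the conditional sample concentrates instead of spreading out over several distinct regions.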
Figure 3: Scatter plots (black dots) of the first two components of the four models (M1), (M2), (M3) and (M4) falling in the region K_d(K, δ) with K = 40 and δ = 1. All four models have the same marginal distributions X_1 ∼ Par(2, 5), X_2 ∼ Par(2.5, 5) and X_3 ∼ Par(3, 5) but different t copulas with parameters provided in (14). The red lines represent x + y = K. The red dot represents the Euler allocation E[X' | {S = K}] and the blue dots represent the (local) modes of f_{X'|{S=K}}.

Table 3: Estimates and estimated standard errors of the Euler allocation and the MLA of the four models (M1), (M2), (M3) and (M4), all having the same marginal distributions X_1 ∼ Par(2, 5), X_2 ∼ Par(2.5, 5) and X_3 ∼ Par(3, 5) but different t copulas with parameters provided in (14). Estimates and estimated standard errors are computed based on 100 replications, each of which utilizes 500 conditional samples falling in the region K_d(K, δ) with K = 40 and δ = 1.

                         Estimate                   Standard error
                      X_1     X_2     X_3       X_1     X_2     X_3
(M1) Pareto + t copula: strong positive dependence
E[X | {S = K}]      15.549  13.889  10.562     0.336   0.157   0.288
K^M[X; K_d(K)]      15.849  14.434   9.718     0.482   0.213   0.356
(M2) Pareto + t copula: positive dependence
E[X | {S = K}]      16.228  13.042  10.562     0.399   0.355   0.288
K^M[X; K_d(K)]      17.689  12.481   9.830     0.759   0.663   0.475
(M3) Pareto + t copula: independence
E[X | {S = K}]      17.479  11.368  10.562     0.517   0.530   0.288
K^{M,1}[X; K_d(K)]  25.678   3.107  11.215     1.185   0.278   1.205
K^{M,2}[X; K_d(K)]   2.639  35.275   2.086     0.973   1.306   0.424
(M4) Pareto + t copula: negative dependence
E[X | {S = K}]      19.062   9.272  10.562     0.556   0.614   0.288
K^{M,1}[X; K_d(K)]  28.353   0.684  10.962     2.125   1.646   2.154
K^{M,2}[X; K_d(K)]   0.710  38.385   0.905     1.719   3.537   2.705

In this paper we investigated properties of the conditional distribution of the loss vector X given the constant sum constraint {S = K} (motivated by scenario analysis of risk allocations) and introduced a novel risk allocation method called maximum likelihood allocation (MLA). We first provided a motivating example of why to consider X | {S = K}, especially its level sets and modality. The level set of X | {S = K} can be regarded as a set of stress (severe and plausible) scenarios, and the modality of X | {S = K} can be interpreted as the number of distinct risky scenarios, which turned out to be an important feature related to the soundness of risk allocations. We then studied properties of X | {S = K}, for example dependence (Proposition 2 and Proposition 3), tail behavior (Proposition 4 and Proposition 5) and modality (Proposition 6), most of which are inherited from those of the unconditional loss X. Next we defined the MLA as the mode of X | {S = K}. Various properties of the MLA, such as translation invariance and positive homogeneity, were studied in Proposition 7.
Euler allocation and MLA were then compared in numerical experiments. Through these experiments, we demonstrated that Euler allocation and MLA lead to close values, and $X \mid \{S = K\}$ is typically unimodal, when $X$ possesses positive dependence. On the other hand, when the losses are negatively dependent, multimodality is likely to occur and the two allocation principles result in distinct values. In such a case, searching for the modes of $X \mid \{S = K\}$ is beneficial to discover the risky scenarios which cannot be captured by a single vector of risk allocations, and to inspect the soundness of risk allocations. The detected (local) modes can also help to design more flexible allocations, for example, by averaging the (local) modes with appropriate weights.

Although we empirically observed the relationship between multimodality of $X \mid \{S = K\}$ and negative dependence among $X$, this relationship requires further theoretical investigation. Another aspect of future research is to study more distributional properties of $X \mid \{S = K\}$, such as tail dependence and measures of concordance, especially without assuming the existence of a density. Finally, efficient simulation of $X \mid \{S = K\}$ may need to rely on MCMC methods as introduced in Appendix D, and further investigation is required to assess to what extent the distributional properties proven in this paper carry over to MCMC methods, since the performance of MCMC methods typically depends on the tail-heaviness and modality of the target distribution.

Funding

This research was funded by NSERC through Discovery Grant RGPIN-5010-2015.
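To make the estimation procedure behind Table 3 concrete, the following minimal sketch draws unconditional samples and keeps those falling in the band $\mathcal{K}_d(K, \delta)$; the Pareto margins are the assumed ones above, but an independence copula replaces the $t$ copulas of (14), so this is an illustration of the sampling scheme only, not a reproduction of the paper's numbers:

```python
import random

def rpar(alpha, theta, rng):
    # Inverse-CDF sample from Par(alpha, theta): F(x) = 1 - (theta / (x + theta))^alpha
    return theta * ((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0)

def euler_estimate(K=40.0, delta=1.0, n=200_000, seed=1):
    """Delta-band estimate of the Euler allocation E[X | {K - delta < S < K + delta}]."""
    rng = random.Random(seed)
    margins = [(2.5, 5.0), (2.5, 5.0), (3.0, 5.0)]  # assumed Pareto margins; independence copula
    cond = []
    for _ in range(n):
        x = [rpar(a, t, rng) for a, t in margins]
        if K - delta < sum(x) < K + delta:  # keep samples falling in K_d(K, delta)
            cond.append(x)
    est = [sum(x[j] for x in cond) / len(cond) for j in range(len(margins))]
    return est, len(cond)
```

By construction each retained sample sums to a value in $(K - \delta, K + \delta)$, so the component estimates add up to approximately $K$, the full-allocation property of the Euler principle.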
References

Asimit, V., Peng, L., Wang, R., and Yu, A. (2019). An efficient approach to quantile capital allocation and sensitivity analysis. Mathematical Finance.
Aumann, R. J. and Shapley, L. S. (2015). Values of Non-Atomic Games. Princeton University Press, Princeton, New Jersey.
Balkema, G. and Nolde, N. (2010). Asymptotic independence for unimodal densities. Advances in Applied Probability, 42(2):411–432.
Boonen, T. J., De Waegenaere, A., and Norde, H. (2020). A generalization of the Aumann–Shapley value for risk capital allocation problems. European Journal of Operational Research, 282(1):277–287.
Breuer, T., Jandacka, M., Rheinberger, K., Summer, M., et al. (2009). How to Find Plausible, Severe, and Useful Stress Scenarios. Österreichische Nationalbank.
Denault, M. (2001). Coherent allocation of risk capital. Journal of Risk, 4(1):1–34.
Dhaene, J., Henrard, L., Landsman, Z., Vandendorpe, A., and Vanduffel, S. (2008). Some results on the CTE-based capital allocation rule. Insurance: Mathematics and Economics, 42(2):855–863.
Dhaene, J., Tsanakas, A., Valdez, E. A., and Vanduffel, S. (2012). Optimal capital allocation principles. Journal of Risk and Insurance, 79(1):1–28.
Dharmadhikari, S. and Joag-Dev, K. (1988). Unimodality, Convexity, and Applications. Elsevier.
Ding, P. (2016). On the conditional distribution of the multivariate t distribution. The American Statistician, 70(3):293–295.
Fang, K. W. (2018). Symmetric Multivariate and Related Distributions. Chapman and Hall/CRC.
Fernandez, C., Osiewalski, J., and Steel, M. F. (1995). Modeling and inference with v-spherical distributions. Journal of the American Statistical Association, 90(432):1331–1340.
Fernández, C. and Steel, M. F. (1998). On Bayesian modeling of fat tails and skewness. Journal of the American Statistical Association, 93(441):359–371.
Huang, J.-J., Lee, K.-J., Liang, H., and Lin, W.-F. (2009). Estimating value at risk of portfolio by conditional copula-GARCH method. Insurance: Mathematics and Economics, 45(3):315–324.
Joe, H. and Li, H. (2019). Tail densities of skew-elliptical distributions. Journal of Multivariate Analysis, 171:421–435.
Jondeau, E. and Rockinger, M. (2006). The copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance, 25(5):827–853.
Kalkbrener, M. (2005). An axiomatic approach to capital allocation. Mathematical Finance, 15(3):425–437.
Karlin, S. and Rinott, Y. (1980a). Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions. Journal of Multivariate Analysis, 10(4):467–498.
Karlin, S. and Rinott, Y. (1980b). Classes of orderings of measures and related correlation inequalities. II. Multivariate reverse rule distributions. Journal of Multivariate Analysis, 10(4):499–516.
Koike, T. and Hofert, M. (2020). Markov chain Monte Carlo methods for estimating systemic risk allocations. Risks, 8(1):6.
Koike, T. and Minami, M. (2019). Estimation of risk contributions with MCMC. Quantitative Finance, 19(9):1579–1597.
Laeven, R. J. and Goovaerts, M. J. (2004). An optimization approach to the dynamic allocation of economic capital. Insurance: Mathematics and Economics, 35(2):299–319.
Landsman, Z. M. and Valdez, E. A. (2003). Tail conditional expectations for elliptical distributions. North American Actuarial Journal, 7(4):55–71.
Li, H. (2013). Toward a copula theory for multivariate regular variation. In Copulae in Mathematical and Quantitative Finance, pages 177–199. Springer.
Li, H. and Hua, L. (2015). Higher order tail densities of copulas and hidden regular variation. Journal of Multivariate Analysis, 138:143–155.
Li, H. and Wu, P. (2013). Extremal dependence of copulas: A tail density approach. Journal of Multivariate Analysis, 114:99–111.
Maume-Deschamps, V., Rullière, D., and Said, K. (2016). On a capital allocation by minimization of some risk indicators. European Actuarial Journal, 6(1):177–196.
McNeil, A. J., Frey, R., and Embrechts, P. (2015). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton.
Müller, A. and Stoyan, D. (2002). Comparison Methods for Stochastic Models and Risks, volume 389. Wiley, New York.
Norkin, V. and Roenko, N. (1991). α-concave functions and measures and their applications. Cybernetics and Systems Analysis, 27(6):860–869.
Osiewalski, J. (1993). Robust Bayesian inference in l_q-spherical models. Biometrika, 80(2):456–460.
Resnick, S. I. (2007). Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer Science & Business Media.
Roth, M. (2012). On the Multivariate t Distribution. Linköping University Electronic Press.
Saumard, A. and Wellner, J. A. (2014). Log-concavity and strong log-concavity: a review. Statistics Surveys, 8:45–114.
Schmidt, R. (2002). Tail dependence for elliptically contoured distributions. Mathematical Methods of Operations Research, 55(2):301–327.
Sweeting, T. J., et al. (1986). On a converse to Scheffé's theorem. The Annals of Statistics, 14(3):1252–1256.
Tasche, D. (1995). Risk contributions and performance measurement. Working paper, Technische Universität München.
Wang, B. and Wang, R. (2011). The complete mixability and convex minimization problems with monotone marginal densities. Journal of Multivariate Analysis, 102(10):1344–1360.

Appendices

A Proofs
Proof of Proposition 1

Proof. Notice that $(X', S) = A X \sim E_d(A\mu, A\Sigma A^\top, \psi)$, where
\[
A = \begin{pmatrix} I_{d'} & 0_{d'} \\ 1_{d'}^\top & 1 \end{pmatrix} \in \mathbb{R}^{d \times d}.
\]
Therefore, the conditional distribution $X' \mid \{S = K\}$ also follows an elliptical distribution with location parameter $\mu_K$ and dispersion parameter $\Sigma_K$ as specified in (6). The corresponding characteristic generator $\psi_K$ can be specified through Theorem 2.18 of Fang (2018). If $X$ admits a density with density generator $g$, then
\[
f_{X' \mid \{S=K\}}(x') = \frac{f_{(X',S)}(x', K)}{f_S(K)} \propto g\left( \frac{1}{2} (x' - \mu', K - \mu_S)^\top \begin{pmatrix} \Sigma' & (\Sigma 1_d)' \\ (\Sigma 1_d)'^\top & \sigma_S^2 \end{pmatrix}^{-1} (x' - \mu', K - \mu_S) \right).
\]
The quadratic term reduces to
\[
(x' - \mu', K - \mu_S)^\top \begin{pmatrix} \Sigma' & (\Sigma 1_d)' \\ (\Sigma 1_d)'^\top & \sigma_S^2 \end{pmatrix}^{-1} (x' - \mu', K - \mu_S) = (x' - \mu_K)^\top \Sigma_K^{-1} (x' - \mu_K) + \frac{(K - \mu_S)^2}{\sigma_S^2}.
\]
Therefore, we have that
\[
f_{X' \mid \{S=K\}}(x') \propto g\left( \frac{1}{2} (x' - \mu_K)^\top \Sigma_K^{-1} (x' - \mu_K) + \Delta_K \right) = g_K\left( \frac{1}{2} (x' - \mu_K)^\top \Sigma_K^{-1} (x' - \mu_K) \right),
\]
where $\Delta_K = (K - \mu_S)^2 / (2\sigma_S^2)$ and $g_K(t) = g(t + \Delta_K)$, as specified in (7).

Proof of Proposition 2
Proof. By (4) we have, for $x', y' \in \mathbb{R}^{d'}$, that
\[
f_{X' \mid \{S=K\}}(x')\, f_{X' \mid \{S=K\}}(y') = \frac{f_{(X',S)}(x', K)\, f_{(X',S)}(y', K)}{f_S(K)^2} \le \frac{f_{(X',S)}(x' \wedge y', K \wedge K)\, f_{(X',S)}(x' \vee y', K \vee K)}{f_S(K)^2} = f_{X' \mid \{S=K\}}(x' \wedge y')\, f_{X' \mid \{S=K\}}(x' \vee y'),
\]
which proves the first part on MTP2. The MRR2 and TP2 parts are shown in a similar manner.

Proof of Proposition 3
Proof. When $X$ has continuous margins $F_1, \ldots, F_d$, the quantile functions $F_j^{-1}$, $j = 1, \ldots, d$, are continuous and strictly increasing, and thus the equation $\sum_{j=1}^d F_j^{-1}(u) = K$ has a unique solution $u^*$. Therefore,
\[
P\left( \bigcup_{j=1}^d \{X_j \ne F_j^{-1}(u^*)\} \;\middle|\; \{S = K\} \right) = P\left( \bigcup_{j=1}^d \left\{ F_j^{-1}(U) \ne F_j^{-1}(u^*) \right\} \;\middle|\; \left\{ \sum_{j=1}^d F_j^{-1}(U) = K \right\} \right) = 0.
\]

Proof of Proposition 4
Proof. Let $\tilde{X} = (X_1, \ldots, X_{d'}, K - X_d)$. Since the density of $\tilde{X}$ is given by $f_{\tilde{X}}(x_1, \ldots, x_d) = f_X(x_1, \ldots, x_{d'}, K - x_d)$, we have, by (4), that
\[
f_{X' \mid \{S=K\}}(x') = \frac{f_X(x', K - 1_{d'}^\top x')}{f_S(K)} = \frac{f_{\tilde{X}}(x', 1_{d'}^\top x')}{f_S(K)}, \quad x' \in \mathbb{R}_+^{d'}.
\]
If $\tilde{X}$ has a limit function $\tilde\lambda$, then the density of $X' \mid \{S = K\}$ satisfies
\[
\lim_{t \to \infty} \frac{f_{X' \mid \{S=K\}}(t y')}{f_{X' \mid \{S=K\}}(t x')} = \lim_{t \to \infty} \frac{f_{\tilde{X}}(t y', t\, 1_{d'}^\top y')}{f_{\tilde{X}}(t x', t\, 1_{d'}^\top x')} = \tilde\lambda\big( (x', 1_{d'}^\top x'), (y', 1_{d'}^\top y') \big) =: \lambda'(x', y')
\]
for any $x', y' \in \mathbb{R}_+^{d'}$ since $(x', 1_{d'}^\top x'), (y', 1_{d'}^\top y') \in \mathbb{R}_+^d$. Similarly, if $\tilde{X}$ is MRV($\infty$), then
\[
\lim_{t \to \infty} \frac{f_{X' \mid \{S=K\}}(s t x')}{f_{X' \mid \{S=K\}}(t x')} = \lim_{t \to \infty} \frac{f_{\tilde{X}}(s t x', s t\, 1_{d'}^\top x')}{f_{\tilde{X}}(t x', t\, 1_{d'}^\top x')} = \begin{cases} 0, & s > 1, \\ \infty, & 0 < s < 1, \end{cases}
\]
for any $s > 0$ and $x' \in \mathbb{R}_+^{d'}$.

Proof of Proposition 5
Proof. Proposition 1 yields that $X' \mid \{S = K\}$ follows a $d'$-dimensional elliptical distribution with location vector $\mu_K$, dispersion matrix $\Sigma_K$ and density generator $g_K$. If $g$ is regularly varying, then
\[
\lim_{t \to \infty} \frac{f_{X' \mid \{S=K\}}(t y')}{f_{X' \mid \{S=K\}}(t x')}
= \lim_{t \to \infty} \frac{g_K\big( \tfrac12 (t y' - \mu_K)^\top \Sigma_K^{-1} (t y' - \mu_K) \big)}{g_K\big( \tfrac12 (t x' - \mu_K)^\top \Sigma_K^{-1} (t x' - \mu_K) \big)}
= \lim_{t \to \infty} \frac{g\big( \tfrac{t^2}{2} (y' - \mu_K/t)^\top \Sigma_K^{-1} (y' - \mu_K/t) + \Delta_K \big)}{g\big( \tfrac{t^2}{2} (x' - \mu_K/t)^\top \Sigma_K^{-1} (x' - \mu_K/t) + \Delta_K \big)}
= \lim_{t \to \infty} \frac{g\big( \tfrac{t^2}{2}\, y'^\top \Sigma_K^{-1} y' \big)}{g\big( \tfrac{t^2}{2}\, x'^\top \Sigma_K^{-1} x' \big)}
= \lambda_g\big( x'^\top \Sigma_K^{-1} x',\; y'^\top \Sigma_K^{-1} y' \big) = \lambda_K(x', y')
\]
for any $x', y' \in \mathbb{R}^{d'}$, where the third equality comes from continuity of $g$ and the fourth equality holds since $x'^\top \Sigma_K^{-1} x',\, y'^\top \Sigma_K^{-1} y' > 0$. Therefore, $X' \mid \{S = K\}$ is MRV($\lambda_K$). For the rapidly varying case,
\[
\lim_{t \to \infty} \frac{f_{X' \mid \{S=K\}}(s t x')}{f_{X' \mid \{S=K\}}(t x')} = \lim_{t \to \infty} \frac{g\big( \tfrac{t^2 s^2}{2}\, x'^\top \Sigma_K^{-1} x' \big)}{g\big( \tfrac{t^2}{2}\, x'^\top \Sigma_K^{-1} x' \big)} = \begin{cases} 0, & s > 1, \\ \infty, & 0 < s < 1, \end{cases}
\]
for any $s > 0$ and $x' \in \mathbb{R}^{d'}$, since $s^2 > 1$ if $s > 1$, $0 < s^2 < 1$ if $0 < s < 1$, and $x'^\top \Sigma_K^{-1} x' > 0$. Therefore, $X' \mid \{S = K\}$ is rapidly varying.

Proof of Proposition 6
Proof. 1. By Proposition 1, $X' \mid \{S = K\}$ follows a $d'$-dimensional elliptical distribution with location vector $\mu_K$, dispersion matrix $\Sigma_K$ and density generator $g_K$. Furthermore, $g_K$ is decreasing if $g$ is. Therefore, for $0 < s \le c_K t_K^* / \sqrt{|\Sigma_K|}$,
\[
L_s(X' \mid \{S = K\}) = \left\{ x' \in \mathbb{R}^{d'} : g_K\left( \tfrac12 (x' - \mu_K)^\top \Sigma_K^{-1} (x' - \mu_K) \right) \ge \frac{s \sqrt{|\Sigma_K|}}{c_K} \right\}
= \left\{ x' \in \mathbb{R}^{d'} : 0 \le (x' - \mu_K)^\top \Sigma_K^{-1} (x' - \mu_K) \le 2\left\{ g^{-1}\left( \frac{s \sqrt{|\Sigma_K|}}{c_K} \right) - \Delta_K \right\} \right\},
\]
which is a convex set with an ellipsoid as surface. Moreover, when $s^* = c_K t_K^* / \sqrt{|\Sigma_K|}$, we have
\[
L_{s^*}(X' \mid \{S = K\}) = \left\{ x' \in \mathbb{R}^{d'} : (x' - \mu_K)^\top \Sigma_K^{-1} (x' - \mu_K) = 0 \right\} = \{\mu_K\},
\]
and thus $X' \mid \{S = K\}$ has a mode $\mu_K$.

2. For $t > 0$ and $x' \in \mathbb{R}^{d'}$, we have the equivalence relation
\[
x' \in L_t(X' \mid \{S = K\}) \quad \text{if and only if} \quad (x', K - 1_{d'}^\top x') \in L_{t f_S(K)}(X) \quad (15)
\]
since $f_{X' \mid \{S=K\}}(x') = f_X(x', K - 1_{d'}^\top x') / f_S(K)$ and thus
\[
L_t(X' \mid \{S = K\}) = \{ x' \in \mathbb{R}^{d'} : f_{X' \mid \{S=K\}}(x') \ge t \} = \{ x' \in \mathbb{R}^{d'} : f_X(x', K - 1_{d'}^\top x') \ge t f_S(K) \}.
\]
Suppose $x', y' \in L_t(X' \mid \{S = K\})$. By (15), we have that $(x', K - 1_{d'}^\top x'), (y', K - 1_{d'}^\top y') \in L_{t f_S(K)}(X)$. Since $X$ is convex unimodal, $L_{t f_S(K)}(X)$ is a convex set. Therefore, we have, for $\theta \in (0, 1)$,
\[
\theta (x', K - 1_{d'}^\top x') + (1 - \theta)(y', K - 1_{d'}^\top y') = \big( \theta x' + (1 - \theta) y',\; K - 1_{d'}^\top (\theta x' + (1 - \theta) y') \big) \in L_{t f_S(K)}(X),
\]
which implies that $\theta x' + (1 - \theta) y' \in L_t(X' \mid \{S = K\})$ by (15).

Proof of Proposition 7
Proof. 1. Translation invariance: Let $\tilde{X} = X + c$, $\tilde{S} = S + 1_d^\top c$ and $\tilde{K} = K + 1_d^\top c$. Since $f_{X+c}(x) = f_X(x - c)$, we have that
\[
f_{\tilde{X}' \mid \{\tilde{S} = \tilde{K}\}}(\tilde{x}') = \frac{f_{(\tilde{X}', \tilde{S})}(\tilde{x}', \tilde{K})}{f_{\tilde{S}}(\tilde{K})} = \frac{f_{(X',S)}(\tilde{x}' - c', K)}{f_S(K)} = f_{X' \mid \{S=K\}}(\tilde{x}' - c').
\]
Therefore, uniqueness of the maximizer of $f_{X' \mid \{S=K\}}$ implies that of $f_{\tilde{X}' \mid \{\tilde{S} = \tilde{K}\}}$, and these maximizers are related via $K^M[X + c; \mathcal{K}_d(K + 1_d^\top c)] = K^M[X; \mathcal{K}_d(K)] + c$.

2. Positive homogeneity: Let $\tilde{X} = c X$, $\tilde{S} = c S$ and $\tilde{K} = c K$. Since $f_{cX}(x) = f_X(x/c)/c^d$, we have that
\[
f_{\tilde{X}' \mid \{\tilde{S} = \tilde{K}\}}(\tilde{x}') = \frac{f_{(\tilde{X}', \tilde{S})}(\tilde{x}', \tilde{K})}{f_{\tilde{S}}(\tilde{K})} = \frac{1}{c^{d'}} \cdot \frac{f_{(X',S)}(\tilde{x}'/c, K)}{f_S(K)} \propto f_{X' \mid \{S=K\}}(\tilde{x}'/c).
\]
As seen in the case of translation invariance, this equality implies that $\tilde{X} \in \mathcal{U}_d(\tilde{K})$ and $K^M[cX; \mathcal{K}_d(cK)] = c\, K^M[X; \mathcal{K}_d(K)]$.

3. Symmetry: Without loss of generality, consider $i = 1$ and $j = 2$. Let $\tilde{X} = (X_2, X_1, X_{-(1,2)})$ and $\tilde{S} = 1_d^\top \tilde{X}$, where $x_{-(1,2)}$ is shorthand for $(x_3, \ldots, x_d)$ for $x \in \mathbb{R}^d$. Then $f_{\tilde{X}}(x) = f_X(\tilde{x})$ for $x = (x_1, x_2, x_{-(1,2)}) \in \mathbb{R}^d$ and $\tilde{x} = (x_2, x_1, x_{-(1,2)}) \in \mathbb{R}^d$. Moreover, when $X \stackrel{\mathrm{d}}{=} \tilde{X}$, we have $\tilde{X} \in \mathcal{U}_d(K)$ and $f_X = f_{\tilde{X}}$. Consequently, we have that
\[
f_{X' \mid \{S=K\}}(x') = \frac{f_X(x', K - 1_{d'}^\top x')}{f_S(K)} = \frac{f_{\tilde{X}}(x', K - 1_{d'}^\top x')}{f_S(K)} = \frac{f_X(\tilde{x}', K - 1_{d'}^\top \tilde{x}')}{f_S(K)} = f_{X' \mid \{S=K\}}(\tilde{x}'), \quad (16)
\]
where the third equality holds since $1_{d'}^\top x' = 1_{d'}^\top \tilde{x}'$. Now suppose $K^M_1[X; \mathcal{K}_d(K)] \ne K^M_2[X; \mathcal{K}_d(K)]$. Then the two distinct vectors $K^M[X; \mathcal{K}_d(K)]$ and $(K^M_2[X; \mathcal{K}_d(K)], K^M_1[X; \mathcal{K}_d(K)], K^M_{-(1,2)}[X; \mathcal{K}_d(K)])$ attain the maximum of $f_{X' \mid \{S=K\}}$ by (16). Since $K^M[X; \mathcal{K}_d(K)]$ is obtained from the unique maximizer of $f_{X' \mid \{S=K\}}(x')$, this leads to a contradiction.

4. Continuity: When $f_n$ is uniformly continuous and bounded for $n = 1, 2, \ldots$, the sequence $(f_n)$ is asymptotically uniformly equicontinuous and bounded in the sense introduced in Sweeting et al. (1986). Together with the assumption that $X_n \to X$ weakly, Theorem 2 of Sweeting et al. (1986) implies that $f_n \to f$ pointwise and uniformly in $\mathbb{R}^d$ for the uniformly continuous density $f$ of $X$. Define $g_n(x') = f_n(x', K - 1_{d'}^\top x')$ for $n = 1, 2, \ldots$ and $g(x') = f(x', K - 1_{d'}^\top x')$, $x' \in \mathbb{R}^{d'}$. By (4) and since $X_n, X \in \mathcal{U}_d(K)$, the maximizers of $g_n$ and $g$ are uniquely determined. Denote them by $x_n^* = \operatorname{argmax}_{x \in \mathbb{R}^{d'}} g_n(x)$ and $x^* = \operatorname{argmax}_{x \in \mathbb{R}^{d'}} g(x)$. By definition of $x_n^*$, we have that $g_n(x_n^*) \ge g_n(x)$ for any $x \in \mathbb{R}^{d'}$. Since $g_n$ converges uniformly to $g$, it holds that $g(\limsup_{n \to \infty} x_n^*) \ge g(x)$ and $g(\liminf_{n \to \infty} x_n^*) \ge g(x)$ for any $x \in \mathbb{R}^{d'}$. If $\limsup_{n \to \infty} x_n^* > \liminf_{n \to \infty} x_n^*$, then two points attain the maximum of $g$, which contradicts the uniqueness of the maximizer of $g$. As a consequence, $\limsup_{n \to \infty} x_n^* = \liminf_{n \to \infty} x_n^* = \lim_{n \to \infty} x_n^* = x^*$ and thus $\lim_{n \to \infty} K^M[X_n; \mathcal{K}_d(K)] = K^M[X; \mathcal{K}_d(K)]$.

B Modality and s-concave densities

As we saw in Section 3.5, neither joint unimodality nor marginal unimodality implies the other. However, unimodality is preserved under marginalization for a specific class of densities, the so-called s-concave densities.
In this appendix we briefly introduce the connection between unimodality and s-concavity of the conditional distribution given a constant sum.

Definition 6 (s-concavity). For $s \in \mathbb{R}$, a density $f$ on $\mathbb{R}^d$ is called s-concave on a convex set $A \subseteq \mathbb{R}^d$ if
\[
f(\theta x + (1 - \theta) y) \ge M_s(f(x), f(y); \theta), \quad x, y \in A, \; \theta \in (0, 1),
\]
where $M_s$ is the generalized mean defined, by continuity, as
\[
M_s(a, b; \theta) =
\begin{cases}
\{\theta a^s + (1 - \theta) b^s\}^{1/s}, & 0 < s < \infty \text{ or } (-\infty < s < 0 \text{ and } ab \ne 0), \\
0, & -\infty < s < 0 \text{ and } ab = 0, \\
a^\theta b^{1-\theta}, & s = 0, \\
a \wedge b, & s = -\infty, \\
a \vee b, & s = +\infty,
\end{cases}
\]
for $a, b \ge 0$ and $\theta \in (0, 1)$.

Definition 6 of s-concavity is based on densities and can be extended to a measure-based definition for distributions that do not admit a density; see Dharmadhikari and Joag-Dev (1988). For $s = -\infty$, s-concavity is also known as quasi-concavity, and 0-concavity is also known as log-concavity. By definition, for $0 < s < \infty$, $f$ is s-concave if and only if $f^s$ is a concave function. As shown in Dharmadhikari and Joag-Dev (1988), the function $s \mapsto M_s(a, b; \theta)$ is increasing for fixed $(a, b; \theta)$. From this we have that t-concavity of $f$ implies s-concavity for $s < t$. Examples of s-concave densities include the skew-normal distribution (Balkema and Nolde, 2010), the Wishart distribution, the Dirichlet distribution for a certain range of parameters (Dharmadhikari and Joag-Dev, 1988) and the uniform distribution on a convex set in $\mathbb{R}^d$ (Norkin and Roenko, 1991).

Convex unimodality (Definition 3) is related to s-concavity since a density $f$ is convex unimodal if and only if it is $-\infty$-concave (Dharmadhikari and Joag-Dev, 1988). Therefore, $f$ is convex unimodal if it is s-concave for some $s \in \mathbb{R}$. Furthermore, it is straightforward to show that $X' \mid \{S = K\}$ has an s-concave density if $X$ has one. As shown in Dharmadhikari and Joag-Dev (1988) and Saumard and Wellner (2014), s-concavity is preserved under marginalization, convolution and weak limits for certain ranges of $s \in \mathbb{R}$.
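The generalized mean $M_s$ of Definition 6 is easy to compute; the following minimal helper (the function name `gen_mean` is ours, not the paper's notation) covers all five cases:

```python
import math

def gen_mean(a, b, theta, s):
    """Generalized mean M_s(a, b; theta) of Definition 6, for a, b >= 0 and theta in (0, 1)."""
    if s == math.inf:
        return max(a, b)                          # M_{+inf} = a ∨ b
    if s == -math.inf:
        return min(a, b)                          # M_{-inf} = a ∧ b
    if s == 0:
        return a ** theta * b ** (1.0 - theta)    # limiting (geometric-mean) case
    if s < 0 and a * b == 0.0:
        return 0.0                                # negative s with ab = 0
    return (theta * a ** s + (1.0 - theta) * b ** s) ** (1.0 / s)
```

For fixed $(a, b; \theta)$, evaluating `gen_mean` over increasing $s$ reproduces the monotonicity $s \mapsto M_s(a, b; \theta)$ cited from Dharmadhikari and Joag-Dev (1988), which underlies the fact that t-concavity implies s-concavity for $s < t$.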
Therefore, convex unimodality can also be preserved under these operations if the density $f_X$ of $X$ is s-concave.

C Properties which do not hold for MLA
In this appendix we summarize properties which intuitively should hold but fail for MLA in general.

1. Invariance under independence: For two integers $d, \tilde{d} \ge 2$, consider a $d$-dimensional random vector $X$ with $S = 1_d^\top X$ and a $\tilde{d}$-dimensional random vector $\tilde{X}$ with $\tilde{S} = 1_{\tilde{d}}^\top \tilde{X}$. For $K, \tilde{K} > 0$, we call the MLA invariant under independence if
\[
K^M\big( (X, \tilde{X}); \mathcal{K}_{d + \tilde{d}}(K + \tilde{K}) \big) = \big( K^M(X; \mathcal{K}_d(K)),\; K^M(\tilde{X}; \mathcal{K}_{\tilde{d}}(\tilde{K})) \big)
\]
provided that $X$ and $\tilde{X}$ are independent of each other. This property does not hold since
\[
f_{(X, \tilde{X}) \mid \{S + \tilde{S} = K + \tilde{K}\}}\big( (x, \tilde{x}) \big) = \frac{f_{(X, \tilde{X})}(x, \tilde{x})\, 1_{\{1_d^\top x + 1_{\tilde{d}}^\top \tilde{x} = K + \tilde{K}\}}}{f_{S + \tilde{S}}(K + \tilde{K})} = \frac{f_X(x)\, f_{\tilde{X}}(\tilde{x})\, 1_{\{1_d^\top x + 1_{\tilde{d}}^\top \tilde{x} = K + \tilde{K}\}}}{f_{S + \tilde{S}}(K + \tilde{K})} \propto f_X(x)\, f_{\tilde{X}}(\tilde{x})\, 1_{\bigcup_{\{(k, \tilde{k}) \in \mathbb{R}^2 : k + \tilde{k} = K + \tilde{K}\}} \{1_d^\top x = k\} \cap \{1_{\tilde{d}}^\top \tilde{x} = \tilde{k}\}} \quad (17)
\]
and
\[
f_{X \mid \{S = K\}}(x)\, f_{\tilde{X} \mid \{\tilde{S} = \tilde{K}\}}(\tilde{x}) \propto f_X(x)\, 1_{\{1_d^\top x = K\}}\, f_{\tilde{X}}(\tilde{x})\, 1_{\{1_{\tilde{d}}^\top \tilde{x} = \tilde{K}\}} \quad (18)
\]
are in general not equal (up to a constant). For example, let $d = \tilde{d}$ and let $X$ and $\tilde{X}$ be two independent standard normal random vectors. Then the maximum of (17) is attained at $(K + \tilde{K})\, 1_{2d} / (2d)$, whereas that of (18) is attained at $(K 1_d / d,\; \tilde{K} 1_d / d)$. These maximizers are not equal unless $K = \tilde{K}$.

2. Additivity under convolution: Consider two independent $d$-dimensional random vectors $X$ and $\tilde{X}$ with $S = 1_d^\top X$ and $\tilde{S} = 1_d^\top \tilde{X}$. For $K, \tilde{K} > 0$, we call $K^M$ additive under convolution if
\[
K^M(X + \tilde{X}; \mathcal{K}_d(K + \tilde{K})) = K^M(X; \mathcal{K}_d(K)) + K^M(\tilde{X}; \mathcal{K}_d(\tilde{K})).
\]
This property does not hold in general; for example, let $X \sim N_d(\mu, \Sigma)$ and $\tilde{X} \sim N_d(\tilde{\mu}, \tilde{\Sigma})$ be two independent normal random vectors for $\mu, \tilde{\mu} \in \mathbb{R}^d$ and $\Sigma, \tilde{\Sigma} \in M_{d \times d}^+$. By Proposition 1, Equation (6), and Proposition 6, Part 1, we have that
\[
K^M(X; \mathcal{K}_d(K)) = \mu' + \frac{K - \mu_S}{\sigma_S^2} (\Sigma 1_d)' \quad \text{and} \quad K^M(\tilde{X}; \mathcal{K}_d(\tilde{K})) = \tilde{\mu}' + \frac{\tilde{K} - \mu_{\tilde{S}}}{\sigma_{\tilde{S}}^2} (\tilde{\Sigma} 1_d)'.
\]
Similarly, since $X + \tilde{X} \sim N_d(\mu + \tilde{\mu}, \Sigma + \tilde{\Sigma})$, we have that $\sigma_{S + \tilde{S}}^2 = \sigma_S^2 + \sigma_{\tilde{S}}^2$ and that
\[
K^M(X + \tilde{X}; \mathcal{K}_d(K + \tilde{K})) = \mu' + \tilde{\mu}' + \frac{K + \tilde{K} - (\mu_S + \mu_{\tilde{S}})}{\sigma_S^2 + \sigma_{\tilde{S}}^2} ((\Sigma + \tilde{\Sigma}) 1_d)' = \mu' + \tilde{\mu}' + \left( \frac{\sigma_S^2}{\sigma_S^2 + \sigma_{\tilde{S}}^2} \cdot \frac{K - \mu_S}{\sigma_S^2} + \frac{\sigma_{\tilde{S}}^2}{\sigma_S^2 + \sigma_{\tilde{S}}^2} \cdot \frac{\tilde{K} - \mu_{\tilde{S}}}{\sigma_{\tilde{S}}^2} \right) ((\Sigma + \tilde{\Sigma}) 1_d)',
\]
which is not equal to $K^M(X; \mathcal{K}_d(K)) + K^M(\tilde{X}; \mathcal{K}_d(\tilde{K}))$ unless, for instance, $\Sigma = \tilde{\Sigma}$.

D Simulation of X | {S = K} with MCMC

Efficient simulation of the conditional distribution of $X$ given a constant sum $\{S = K\}$ for $K \in \mathbb{R}$ is a challenging task in general. In Section 2.2 and Section 5, the constraint $\{S = K\}$ was replaced by $\{K - \delta < S < K + \delta\}$ for a small $\delta > 0$ such that $P(K - \delta < S < K + \delta) > 0$. However, this modification distorts the conditional distribution $X \mid \{S = K\}$, and the resulting estimates of risk allocations suffer from inevitable biases. To overcome this issue, we briefly review MCMC methods, specifically the Metropolis-Hastings (MH) algorithm, and then demonstrate their efficiency for simulating $X \mid \{S = K\}$.

D.1 MCMC methods

As mentioned in Section 2.1, it suffices to simulate $X' \mid \{S = K\} = (X_1, \ldots, X_{d'}) \mid \{S = K\}$ for $d' = d - 1$. Assume that $X' \mid \{S = K\}$ admits a density (4). We call this density the target density and denote it by $\pi$. In the MCMC approach, a Markov chain is constructed such that its stationary distribution is $\pi$. Constructing such a Markov chain can be achieved by the MH algorithm, as we now explain. From the current state $X'_n$, a candidate $Y'_n$ for the next state is simulated from $q(X'_n, \cdot)$, where $q(x', y')$, $x', y' \in \mathbb{R}^{d'}$, is called the proposal density and satisfies the two conditions: (i) $x' \mapsto q(x', y')$ is measurable for all $y' \in \mathbb{R}^{d'}$ and (ii) $y' \mapsto q(x', y')$ is a density function for all $x' \in \mathbb{R}^{d'}$. The candidate is accepted, that is, $X'_{n+1} = Y'_n$, with probability $\alpha(X'_n, Y'_n)$, where
\[
\alpha(x', y') = 1 \wedge \frac{q(y', x')\, \pi(y')}{q(x', y')\, \pi(x')} = 1 \wedge \frac{q(y', x')\, f_X(y', K - 1_{d'}^\top y')}{q(x', y')\, f_X(x', K - 1_{d'}^\top x')}, \quad (19)
\]
and otherwise the chain stays at the current state, $X'_{n+1} = X'_n$. Calculation of the acceptance probability (19) is often possible since it does not depend on $f_S(K)$. The resulting Markov chain is shown to have $\pi$ as a stationary distribution, and thus $X'_1, X'_2, \ldots$ can be used as samples from $\pi$ in order to estimate Euler and maximum likelihood allocations.

An appropriate choice of $q$ is important since MCMC samples are typically positively correlated due to the acceptance-rejection procedure.
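A minimal sketch of the MH step above, with a symmetric Gaussian random-walk proposal (so $q$ cancels in (19)) and, as a stand-in target, independent Pareto margins rather than the $t$-copula model of (14); all parameter values are illustrative assumptions:

```python
import math
import random

def pareto_logpdf(x, alpha, theta):
    # log-density of Par(alpha, theta): f(x) = alpha * theta^alpha / (x + theta)^(alpha + 1), x > 0
    if x <= 0.0:
        return -math.inf
    return math.log(alpha) + alpha * math.log(theta) - (alpha + 1.0) * math.log(x + theta)

def log_target(xp, K, margins):
    # log f_X(x', K - 1^T x'), i.e. the target pi up to the constant f_S(K)
    lp = pareto_logpdf(K - sum(xp), *margins[-1])
    for xj, par in zip(xp, margins[:-1]):
        lp += pareto_logpdf(xj, *par)
    return lp

def mh_conditional(K=40.0, n=5000, step=1.0, seed=7):
    """Random-walk MH chain targeting X' | {S = K} for d = 3 (so d' = 2)."""
    rng = random.Random(seed)
    margins = [(2.5, 5.0), (2.5, 5.0), (3.0, 5.0)]   # assumed Par(alpha, theta) margins
    x = [K / 3.0, K / 3.0]                           # start inside the support
    lp = log_target(x, K, margins)
    chain = []
    for _ in range(n):
        y = [xj + rng.gauss(0.0, step) for xj in x]  # symmetric proposal, q cancels in (19)
        ly = log_target(y, K, margins)
        if math.log(rng.random()) < ly - lp:         # accept with probability 1 ∧ pi(y)/pi(x)
            x, lp = y, ly
        chain.append(tuple(x))
    return chain
```

Note that a candidate outside the support (any component of $(y', K - 1_{d'}^\top y')$ nonpositive) has log-density $-\infty$ and is rejected automatically, which illustrates the support issue discussed next.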
At the $n$th iteration, $\rho(X'_n, X'_{n+1}) = 1$ if the candidate is rejected. To avoid this, the acceptance probability $\alpha$ needs to be kept high, and thus a candidate $Y'_n \sim q(X'_n, \cdot)$ should not change the level of $\pi$ too much. Therefore, to reduce positive correlation among MCMC samples, $q$ must reflect properties of $\pi$, such as the shape of its support, tail-heaviness and modality, which are investigated in Section 3. First, the support of $\pi$ must be taken into account since a candidate outside of $\operatorname{supp}(\pi)$ is immediately rejected. Second, tail-heaviness of $\pi$ requires a specific design of $q$ since most standard MCMC methods, such as random walk MH, independent MH, Gibbs samplers and the Hamiltonian Monte Carlo method, cannot guarantee theoretical convergence when $\pi$ is heavy-tailed. Finally, multimodality of $\pi$ also requires special treatment since the chain needs to traverse from one mode to another in order to sample from the entire support of $\pi$.

D.2 An application of MCMC to core allocation
In this section we compute the Euler allocation and MLA on a restricted set of allocations called the (atomic) core, defined by
\[
\mathcal{K}^C_d(K; r) = \{ x \in \mathbb{R}^d : 1_d^\top x = K,\; \lambda^\top x \le r(\lambda),\; \lambda \in \{0, 1\}^d \} \subseteq \mathcal{K}_d(K),
\]
where $r : \{0, 1\}^d \to \mathbb{R}$ is called a participation profile function, typically determined as $r(\lambda) = \varrho(\lambda^\top X)$ for a $d$-dimensional loss random vector $X$. We call an element of $\mathcal{K}^C_d(K; r)$ a core allocation. As explained in Denault (2001), core allocations possess an important property as risk allocations: any subportfolio of $X = (X_1, \ldots, X_d)$ of the form $(\lambda_1 X_1, \ldots, \lambda_d X_d)$ gains a benefit of capital reduction from managing risk as part of the portfolio $X$. In fact, for a participation profile $\lambda = (\lambda_1, \ldots, \lambda_d)$, where $\lambda_j \in \{0, 1\}$ represents the presence ($\lambda_j = 1$) or absence ($\lambda_j = 0$) of the $j$th entity, the total amount of capital required to cover the loss $\lambda^\top X$ is $\lambda^\top x$ for an allocation $x \in \mathcal{K}_d(K)$. The value $r(\lambda) = \varrho(\lambda^\top X)$ is interpreted as the stand-alone capital that would have been required if the total loss $\lambda^\top X$ had been managed individually. Therefore, under a core allocation $x \in \mathcal{K}^C_d(K; r)$, the subportfolio $(\lambda_1 X_1, \ldots, \lambda_d X_d)$ gains a benefit of capital reduction since $\lambda^\top x \le r(\lambda)$.

Given $K$, $r$ and the joint loss $X$, we are interested in calculating the core-compatible versions of the Euler allocation $E[X \mid \{X \in \mathcal{K}^C_d(K; r)\}]$, the MLA $K^M[X; \mathcal{K}^C_d(K; r)]$ and the local modes of $f_{X \mid \{X \in \mathcal{K}^C_d(K; r)\}}$ if they exist. However, generating a large number of samples from $X' \mid \{X \in \mathcal{K}^C_d(K; r)\}$ is computationally involved since an unconditional sample of $X$ is first filtered by the condition $X \in \{x \in \mathbb{R}^d : K - \delta < 1_d^\top x < K + \delta\} =: \mathcal{K}_d(K, \delta)$ for a small $\delta > 0$, and then filtered again by the core condition $\lambda^\top X \le r(\lambda)$ for all possible $\lambda \in \{0, 1\}^d$. To overcome this issue, we utilize an MCMC method (the Hamiltonian Monte Carlo (HMC) method with reflection) to directly simulate from $f_{X' \mid \{X \in \mathcal{K}^C_d(K; r)\}}$. Note that the support of $X' \mid \{X \in \mathcal{K}^C_d(K; r)\}$ is a projection of $\mathcal{K}^C_d(K; r)$ onto $\mathbb{R}^{d'}$, which is an intersection of the half-spaces $\{x' \in \mathbb{R}^{d'} : \lambda^\top (x', K - 1_{d'}^\top x') \le r(\lambda)\}$, $\lambda \in \{0, 1\}^d$.

[Scatter plots accompanying this section (panel (a): the first two components) were lost in text extraction.]
ll l ll ll ll l llll l ll ll l ll l l lll l ll ll l llll lll ll l llll llll l ll ll l ll ll ll ll l lll ll l llll l ll l ll l ll l ll lll llll lll ll lllll ll ll l ll l l ll lll ll l ll lll l ll l ll llll lll lll lll lll lllll lll ll ll ll lll lll llll llll l ll lll ll ll ll ll ll ll l ll lllll ll l ll ll lll l ll l ll l llll ll lll llll llllllll ll ll ll llll ll llll ll l l ll lll l ll ll lll l llll lll lll lll ll llll ll ll lll ll lll l ll l l lll lll llll l lll ll lll l ll l l lll lll l l ll lllll ll lll lll l l llll l ll ll ll llll lll lll ll l ll ll ll l ll ll lllllll l ll ll l ll l l lllll ll l lll l ll ll llll llll ll ll ll ll ll lll l lll ll ll ll llll ll lll ll ll ll ll ll ll ll lll ll llll ll ll llll lll l llll lllll ll lll lll l llll l ll ll lllll l lllll ll llll l l ll l llll ll llll ll l l llll l lll l ll ll ll l l ll ll lll lll l lll l llllll llll l ll ll l lll l l ll ll lll l l ll lllll lll ll llll ll ll ll l lll ll l lll ll ll l ll ll lll ll ll l ll ll ll lll ll ll ll llllllll ll l lllll l ll lll lllllll ll lll l lllll ll lll ll lllll ll l l l ll lll lllll l lll lll l lll lll lll l lll ll llll ll l llll l ll lll ll ll llll ll ll lllllll ll llll lll lll ll ll ll ll l ll ll ll llll lll lll lll l l ll ll ll ll ll lll l lll l l ll l lll llll ll l lll l ll l lll llll ll llll l l l ll ll lll ll ll l ll ll ll lll ll ll lll ll lllll lll l l ll llll l ll ll lll llll l lll llll l l llll ll l lll l llll l l ll lll lll ll llll ll ll ll llll lll l ll ll ll l lll ll llll llll ll ll ll l lll ll lll ll ll ll lll ll lll ll ll ll ll l llll ll l l lll ll lll ll l llll llll ll lll lll ll lllll ll l l l ll ll ll lll llll ll ll l ll ll lll lll ll l ll ll ll ll l l lll lll ll l lll l llllll llll ll l ll ll ll l lll ll ll lll ll llll lllll llll ll ll ll ll l ll lll ll ll llll ll lll ll ll ll l lll lll l ll ll ll ll l llll l l ll ll ll llll ll l l ll l llll lll ll lll l lllll l l ll lll ll l lll llll l lll ll ll l ll ll l ll ll ll l llllll lll ll l lll ll lll ll l lll ll l 
ll ll l l lllll ll lll ll l l ll ll llll ll l ll l lll ll l ll lll l l lll lllll ll ll l lllll l ll lllll ll llll ll l l lll ll llll lll llll lll lllllllllllll ll l ll llll llll ll llll ll ll ll ll ll ll l ll ll ll lll ll lllllll l ll ll l ll lll llll l ll lll ll lll llll l ll ll l ll ll l llllll ll llll ll lll l llll lll lll l llll llll l ll ll ll lll llllll lll ll l ll l lll ll ll llll l ll lllll l ll ll l ll lllll l lll lll l ll lll llll lll lll ll ll ll ll ll lll llll l ll ll lllll ll lllllll lll llll lll ll ll ll l l ll ll l ll ll l ll ll lll ll lll ll llll ll l lll l l lll l l l lllllll lll lll l lll l lll l ll ll l ll ll llllll llll lllll ll ll l lllll ll lllll l lll lll l lllll lll ll llllllll ll lllll l ll l ll lllll ll llll ll l l ll ll ll lllllll ll lll lllll ll ll ll l llllll ll llll l lll ll lll ll l l llll lll l ll ll lll lllll ll l l l l ll ll lll ll lll ll ll l ll ll ll lll ll ll ll ll lllll ll lll llll ll l llll l ll ll l llllll ll lll ll lll lll ll ll l ll l ll lll lll ll ll ll ll l l lll ll l ll l lllll llll llll llll llll lll l ll ll lll ll ll lllllll lll ll llll ll ll ll l l lll llll lllll lll ll l lll ll lll l lll l lll llll ll lll ll ll l ll lll lll llll ll lll ll l lll ll ll l ll ll l lll ll ll lll llll l lll llll lll ll llll llll llll l lll l lll ll ll ll l lll l l llll ll l lll lll ll l lll ll lll ll l ll ll l ll ll ll l lll ll l ll llll llll l lllll ll lll l ll lll lll l ll lll llll l l lll ll l ll l lll ll ll lll l l lll l lll l l llll lll ll l lllll l l lll ll lll ll lll l ll ll lll ll ll ll lll lll ll ll l ll ll ll ll l ll l lll ll llll lll l ll lll l llll ll lll l lll ll ll l ll lll ll lll l ll lll l lll lllll lll l ll ll l ll ll lllll ll l l ll llll lll llllll lll llll ll l lll ll lll lll ll l lllll l lll lll lll l llll l l ll lllll ll ll l llllll ll ll ll ll l lll llllll llllllll l llll lll l lll lll ll ll lllllll llll llll ll lll lll lll ll ll l lll lll l llll lll ll ll l ll llll lllll ll ll ll ll l ll ll ll lll ll l . . . . . 
Figure 4: Scatter plots of (a) MC samples from X' | {X ∈ K_d(K, δ)} (black) and X' | {X ∈ K^C_d(K, δ; r)} (blue), and of (b) MCMC samples from X' | {X ∈ K^C_d(K; r)} (black), where X ∼ t_ν(0_d, P) with d = 3, ν = 5 and P = (ρ_{i,j}) being a correlation matrix with ρ_{1,2} = ρ_{2,3} = 1/3 and ρ_{1,3} = 2/3, r(λ) = VaR_p(λ⊤X) with p = 0.99 for λ ∈ {0,1}^d, K = r(1_d) and δ = 0. The straight lines indicate the boundaries {x' ∈ R^2 : λ⊤(x', K − 1_2⊤ x') = r(λ)} for λ ∈ {0,1}^d.

In the HMC method, a candidate is proposed according to the so-called Hamiltonian dynamics, and the chain reflects at the boundaries {x' ∈ R^{d'} : λ⊤(x', K − 1_{d'}⊤ x') = r(λ)}, λ ∈ {0,1}^d, so that it does not violate the support constraint; see Koike and Hofert (2020) for details.

For a numerical experiment, let X ∼ t_ν(0_d, P) with d = 3, ν = 5 and P = (ρ_{i,j}) being a correlation matrix with ρ_{1,2} = ρ_{2,3} = 1/3 and ρ_{1,3} = 2/3. For p = 0.99, we set r(λ) = VaR_p(λ⊤X) for λ ∈ {0,1}^d and K = r(1_d). For δ = 0., we simulate N_MC = 10^6 samples from X and estimate K and (r(λ), λ ∈ {0,1}^d) from these samples. Then we extract the samples of X falling in the region

K^C_d(K, δ; r) = K_d(K, δ) ∩ {x ∈ R^d : λ⊤x ≤ r(λ), λ ∈ {0,1}^d \ {1_d}}.

Figure 4 (a) shows the first two components of the MC samples from X and the conditional samples falling in K^C_d(K, δ; r). Among the N_MC = 10^6 samples, 2000 samples were contained in K_d(K, δ) and only 189 samples fell in K^C_d(K, δ; r). Therefore, this crude simulation method is not efficient since 99.98% of the unconditional samples are discarded.

Instead, we conduct an MCMC simulation to generate N_MCMC = 10^4 samples directly from X | {X ∈ K^C_d(K; r)}. Hyperparameters of the HMC method are estimated based on the 189 MC samples; see Koike and Hofert (2020). The resulting stepsize and integration time are ε = 0.105 and T = 24, respectively. It took 49.534 seconds to simulate a Markov chain of length N_MCMC = 10^4 on a MacBook Air with a 1.4 GHz Intel Core i5 processor and 4 GB of 1600 MHz DDR3 RAM. The resulting acceptance rate was 0.866 and serial correlations were below 0.03 at lag 1. Based on these inspections we conclude that the MCMC method performed correctly. The first 3000 MCMC samples of X' | {X ∈ K^C_d(K; r)} are plotted in Figure 4 (b).

By Proposition 1, X' | {X ∈ K_d(K)} still follows a multivariate Student t distribution, and thus the mode of this conditional distribution is uniquely determined by K_M[X; K_d(K)] = E[X | {X ∈ K_d(K)}] by Part 1 of Proposition 6. Moreover, when this point is contained in the core K^C_d(K; r), we have K_M[X; K_d(K)] = K_M[X; K^C_d(K; r)] since the distributions of X | {X ∈ K_d(K)} and X | {X ∈ K^C_d(K; r)} share the same mode. We check these observations numerically by calculating the corresponding estimates.

Table 4 summarizes the MC and MCMC estimates and standard errors of the Euler and maximum likelihood allocations on K_d(K) and those on the atomic core K^C_d(K; r).

Table 4: Monte Carlo (superscript "MC") and Markov chain Monte Carlo (superscript "MCMC") estimates and standard errors of the Euler and maximum likelihood allocations on K_d(K) and those on the atomic core K^C_d(K; r). The MC sample size of the unconditional sample X is N_MC = 10^6 and the sample size of the conditional sample X | {X ∈ K^C_d(K; r)} in the MCMC method is N_MCMC = 10^4.

                                        Estimator               Standard error
                                   X_1     X_2     X_3       X_1     X_2     X_3
Ê^MC[X | {X ∈ K_d(K)}]           2.865   2.310   2.846     0.026   0.034   0.026
K̂^MC_M[X; K_d(K)]                2.861   2.366   2.793       –       –       –
Ê^MC[X | {X ∈ K^C_d(K; r)}]      2.852   2.267   2.903     0.016   0.019   0.016
K̂^MC_M[X; K^C_d(K; r)]           2.838   2.262   2.920       –       –       –
Ê^MCMC[X | {X ∈ K^C_d(K; r)}]    2.876   2.269   2.877     0.002   0.003   0.002
K̂^MCMC_M[X; K^C_d(K; r)]         2.866   2.283   2.871       –       –       –
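The equality of mode and conditional mean invoked here can be made concrete: for a centered elliptical distribution such as t_ν(0_d, P) with ν > 1, the conditional mean given the hyperplane {1⊤x = K} follows the usual linear-regression formula E[X | 1⊤X = K] = K (P 1)/(1⊤P 1), and by the unimodality argument above this point is also the mode. A minimal sketch, where K = 8.0 is an illustrative capital (not a value from the paper) and the correlation structure mirrors this section's example:

```python
# For centered elliptical X (e.g. multivariate t) the conditional mean given
# the hyperplane {1'x = K} -- and hence, for a unimodal elliptical law, the
# mode of the conditional distribution -- is  K * (P @ 1) / (1' P 1).
# K = 8.0 is an illustrative capital, not a value from the paper.
import numpy as np

P = np.array([[1.0, 1/3, 2/3],
              [1/3, 1.0, 1/3],
              [2/3, 1/3, 1.0]])
K = 8.0

one = np.ones(3)
alloc = K * (P @ one) / (one @ P @ one)   # candidate maximum likelihood allocation
print(alloc)                              # components sum to K by construction
```

With these inputs the allocation is proportional to (6, 5, 6)/17, so the second component receives the smallest share (mirroring the ordering of X_2 in Table 4) and the allocation is "full" by construction, i.e. the components sum exactly to K.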
MC estimates are calculated based on the samples in Figure 4 (a) and MCMC estimates are computed based on the samples in Figure 4 (b). As expected from the theory, the MC estimates of K_M[X; K_d(K)] and E[X | {X ∈ K_d(K)}] were close to each other. We can also observe that the MC and MCMC estimates are close to each other for all the estimators. The standard errors of the MCMC estimator of E[X | {X ∈ K^C_d(K; r)}] were smaller than those of the MC estimator because of sample efficiency. Provided that K̂^MC_M[X; K_d(K)] belongs to the core K^C_d(K; r), we expect an estimate of K_M[X; K^C_d(K; r)] to be close to K̂^MC_M[X; K_d(K)]. Although this was the case for both the MC and MCMC estimates of K_M[X; K^C_d(K; r)], the MCMC estimate was slightly closer to K̂^MC_M[X; K_d(K)] than the MC estimate. Consequently, the MCMC estimator of K_M[X; K^C_d(K; r