Computation of systemic risk measures: a mixed-integer linear programming approach
Çağın Ararat∗   Nurtai Meimanjanov†

January 17, 2021
Abstract
Systemic risk is concerned with the instability of a financial system whose members are interdependent in the sense that the failure of a few institutions may trigger a chain of defaults throughout the system. Recently, several systemic risk measures have been proposed in the literature that are used to determine capital requirements for the members subject to joint risk considerations. We address the problem of computing systemic risk measures for systems with sophisticated clearing mechanisms. In particular, we consider the Eisenberg-Noe network model and the Rogers-Veraart network model, where the former is extended to the case where operating cash flows in the system are unrestricted in sign. We propose novel mixed-integer linear programming problems that can be used to compute clearing vectors for these models. Due to the binary variables in these problems, the corresponding (set-valued) systemic risk measures fail to have convex values in general. We associate nonconvex vector optimization problems to these systemic risk measures and provide theoretical results related to the weighted-sum and minimum step-length scalarizations of these problems under the extended Eisenberg-Noe and Rogers-Veraart models. We test the proposed formulations on computational examples and perform sensitivity analyses with respect to some model-specific and structural parameters.
Keywords and phrases: systemic risk measure, set-valued risk measure, Eisenberg-Noe model, Rogers-Veraart model, mixed-integer linear programming, vector optimization.
Mathematics Subject Classification (2010):

∗ Bilkent University, Department of Industrial Engineering, Ankara, Turkey, [email protected].
† Deloitte CIS, Bishkek, Kyrgyzstan, [email protected].

1 Introduction
Financial contagion is usually associated with a chain of failures in a financial system triggered by external correlated shocks as well as direct or indirect interdependencies among the members of the system, leading to economically undesirable consequences such as financial crises, the necessity for bailout loans, economic regression, a rise in national debt, and so on. A good example is a bank run, in which a large number of account holders withdraw their money from a bank due to panic or a decrease in confidence in the bank, causing the bank to become insolvent. In turn, the bank may call its claims from the other banks, decreasing confidence in them and causing new bank runs. Being unable to meet their liabilities, some of the banks may become bankrupt and, thus, aggravate the contagion even further. Unlike the more traditional institutional risk, systemic risk is related to the strength of an entire financial system against financial contagion.

In this paper, we consider financial systems in which members have direct links to each other through contractual liabilities. When the members realize their operating cash flows, the actual interbank payments are determined through a clearing procedure. As an example of such systems, Eisenberg and Noe (2001) models a financial system as a static directed network of banks where interbank liabilities are attached to the arcs. Assuming a positive operating cash flow for each bank, the paper develops two approaches to calculate a clearing vector, that is, a vector of payments to meet interbank liabilities. The first is a simple algorithm, called the fictitious default algorithm, which gradually calculates a clearing vector by finitely many updates. The second is a laconic mathematical programming problem with linear constraints determined by the liabilities, the operating cash flows, and an arbitrary strictly increasing objective function.
In particular, one can choose a linear objective function so that a clearing vector is calculated as an optimal solution of a linear programming problem.

As an important extension of the Eisenberg-Noe model, Rogers and Veraart (2013) introduces default costs to the model in Eisenberg and Noe (2001). In addition, one of the main focuses in Rogers and Veraart (2013) is the investigation of the necessity of bailing-out procedures for the defaulting institutions. It is shown that, under strictly positive default costs, it might be beneficial for some of the solvent institutions to take over insolvent institutions. For a detailed review of clearing systems, the reader is referred to the survey Kabanov et al. (2017), which focuses on the existence and uniqueness of clearing vectors as well as their calculation by certain variations of the fictitious default algorithm in Eisenberg and Noe (2001). However, to the best of our knowledge, none of these works builds on the mathematical programming approach of Eisenberg and Noe (2001).

On the other hand, the operating cash flows of the members of a network are typically subject to uncertainty due to correlated risk factors. Hence, these cash flows can be modeled as one possible realization of a random vector with possibly correlated components. Then, the resulting clearing vector is a deterministic function of the operating cash flow random vector, where the deterministic function is defined through the underlying clearing mechanism. Based on the random clearing vector, one can define various systemic risk measures to calculate the necessary capital allocations for the members of the network in order to control some (nonlinear) averages over different scenarios. This is the main focus of a recent stream of research started with Chen et al. (2013).
Using the clearing mechanism, one defines a random aggregate quantity associated to the clearing vector, such as the total debt paid in the system or the total equity made by all members as a result of clearing. This aggregate quantity can be seen as a deterministic and scalar function, called the aggregation function, of the operating cash flow vector. In Chen et al. (2013), a systemic risk measure is defined as a scalar functional of the operating cash flow vector that measures the risk of the random aggregate quantity through a convex risk measure (Föllmer and Schied, 2011, Chapter 4) such as negative expected value, average value-at-risk or entropic risk measure.

The value of the systemic risk measure in Chen et al. (2013) can be seen as the total capital requirement for the system to keep the risk of the aggregate quantity at an acceptable level. However, since the total capital is used only after the shock is aggregated, the allocation of this total back to the members of the system remains a question to be addressed by an additional procedure. To that end, set-valued and scalar systemic risk measures that are considered "sensitive" to capital levels are proposed in Feinstein et al. (2017) and Biagini et al. (2019), respectively. These systemic risk measures look for deterministic capital allocation vectors that are directly used to augment the random operating cash flow vector. Hence, the new augmented cash flow vector is aggregated and the risk of the resulting random aggregate quantity is controlled by a convex risk measure as in Chen et al. (2013). In particular, the value of the set-valued systemic risk measure in Feinstein et al. (2017) is the set of all "feasible" capital allocation vectors, which addresses the measurement and allocation of systemic risk as a joint problem.

The sensitive systemic risk measures studied in Feinstein et al. (2017) and Biagini et al.
(2019) have convenient theoretical properties when the underlying aggregation function is simple enough. In Ararat and Rudloff (2020), assuming a monotone and concave aggregation function, it has been shown that the set-valued sensitive systemic risk measure is a convex set-valued risk measure in the sense of Hamel et al. (2011), and dual representations are obtained in terms of the conjugate function of the aggregation function. In particular, the aggregation function for the Eisenberg-Noe model, assuming positive operating cash flows as in the original formulation in Eisenberg and Noe (2001), is monotone and concave, and an explicit dual representation is obtained for the corresponding systemic risk measure of this model.
In this paper, we are concerned with the computation of a sensitive systemic risk measure. We relate the value of this systemic risk measure to a vector (multiobjective) optimization problem whose "efficient frontier" corresponds to the boundary of the systemic risk measure. The vector optimization problem has a risk constraint written in terms of the aggregation function. The main challenge in solving this problem is that the aggregation function needs to be evaluated for every scenario of the underlying probability space as well as for every choice of the capital allocation vector, which is the decision variable of the optimization problem. For the standard Eisenberg-Noe model, thanks to the linear programming characterization of the clearing vectors, one can formulate the aggregation function in terms of a linear programming problem parametrized by the scenario and the capital allocation vector. Hence, the ultimate vector optimization problem can be seen as a nested optimization problem.

We focus particularly on models beyond the standard Eisenberg-Noe framework with positive operating cash flows. First, we extend the Eisenberg-Noe model by relaxing the positivity assumption. Second, we study the Rogers-Veraart model with default costs. It turns out that both models have a common type of singularity that can be formulated in terms of binary variables, a novel feature studied in this paper. One of our main contributions is to develop mixed-integer linear programming (MILP) problems whose optimal solutions yield clearing vectors in these models. We choose the objective functions of these optimization problems in such a way that the optimal values give the total debts paid at clearing in the corresponding models. Hence, we calculate the aggregation functions as the optimal values of these optimization problems.

The existence of binary variables in the associated optimization problems results in a lack of concavity for the corresponding aggregation functions.
Consequently, in contrast to the existing literature on set-valued and scalar systemic risk measures, the sensitive systemic risk measures for the two models do not possess the nice theoretical feature of being convex. In particular, the dual representations studied for systemic risk measures in Ararat and Rudloff (2020) are not applicable in our setting. Indeed, the values of these systemic risk measures even fail to be convex sets, in general. Therefore, one of our fundamental observations is that binary variables and the accompanying lack of concavity/convexity show up naturally as the cost of using more sophisticated aggregation mechanisms beyond the standard Eisenberg-Noe framework.

Although our main interest is in the signed Eisenberg-Noe and Rogers-Veraart models for computations, we follow a unifying approach by using a general aggregation function defined in terms of a mixed-integer optimization problem for the theoretical development. To be able to approximate the nonconvex values of the corresponding systemic risk measure, we associate a (generally nonconvex) vector optimization problem to it. As a general paradigm, algorithms for solving vector optimization problems iterate by solving certain scalarization problems (for instance, weighted-sum scalarizations) along with additional computational procedures (for instance, vertex enumeration subroutines). For instance, Benson's algorithm (Benson, 1998) for linear vector optimization, the Benson-type algorithm in Löhne et al.
(2014) for convex vector optimization, and the Benson-type algorithm in Nobakhtian and Shafiei (2017) for nonconvex vector optimization follow this pattern. Consequently, the validity of such algorithms is conditional on the ability to solve the scalarization problems.

We develop methods to solve two commonly used scalarization problems for the vector optimization problem for systemic risk measures: the weighted-sum scalarization problem and the Pascoletti-Serafini scalarization, which consists of calculating the minimum step-length to hit a set along a fixed direction. We prove that both scalarization problems for both the signed Eisenberg-Noe and Rogers-Veraart models can be formulated as MILP problems. We also prove some results related to the feasibility and boundedness of these MILP problems. With these methods, we solve the scalarization problems as subroutines of the nonconvex algorithm in Nobakhtian and Shafiei (2017). It should be noted that the choice of this algorithm is not arbitrary at all; indeed, due to the existence of binary variables in the formulations, the corresponding vector optimization problems for both models are nonconvex, hence the use of an algorithm that works without convexity assumptions is essential.

We perform a detailed computational study for both models as well as sensitivity analyses with respect to some model parameters such as the default cost parameters in the Rogers-Veraart model, the threshold level used in the risk constraint, and also some parameters determining the interconnectedness of the network.

The rest of this paper is organized as follows. We study the Eisenberg-Noe and Rogers-Veraart network models in detail, together with the mathematical programming characterizations of clearing vectors, in Section 2. In Section 3, we study sensitive systemic risk measures in the context of these models and the associated nonconvex vector optimization problems. The proofs of the results in Sections 2 and 3 are deferred to Appendices A and B, respectively.
We present the computational resultsin Section 4.
2 Network models

In this section, we first review the original Eisenberg-Noe model in Section 2.1. In this model, clearing vectors can be calculated by solving a simple linear programming problem (Proposition 2.3). In Section 2.2, we propose a seniority-based extension of the Eisenberg-Noe model by allowing signed operating cash flows and provide a novel mixed-integer linear programming (MILP) formulation of clearing vectors (Theorem 2.7). Finally, in Section 2.3, we consider the Rogers-Veraart model and provide a novel MILP formulation of clearing vectors (Theorem 2.15).

Let us introduce the related notation. Let n ∈ N := {1, 2, . . .}. Given a, b ∈ R, we write a ∧ b = min{a, b}, a ∨ b = max{a, b}, a^+ = 0 ∨ a, and a^- = 0 ∨ (−a). Similarly, given a = (a_1, . . . , a_n)^T, b = (b_1, . . . , b_n)^T ∈ R^n, we write a ∧ b = (a_1 ∧ b_1, . . . , a_n ∧ b_n)^T, a ∨ b = (a_1 ∨ b_1, . . . , a_n ∨ b_n)^T as well as a^+ = 0 ∨ a and a^- = 0 ∨ (−a), where 0 = (0, . . . , 0)^T ∈ R^n. We sometimes use 1 = (1, . . . , 1)^T ∈ R^n as well. The vector a ⊙ b = (a_1 b_1, . . . , a_n b_n)^T denotes the Hadamard product of a, b. We write a ≤ b if and only if a_i ≤ b_i for each i ∈ {1, . . . , n}. In this case, we also define the rectangle [a, b] = [a_1, b_1] × . . . × [a_n, b_n] ⊆ R^n. Using ≤ on R^n, we define R^n_+ := {x ∈ R^n | 0 ≤ x}, whose elements are said to be positive. Finally, ‖a‖_∞ := max{|a_1|, . . . , |a_n|} is the ℓ_∞-norm of a.

2.1 The original Eisenberg-Noe network model

In this section, the original Eisenberg-Noe network model in Eisenberg and Noe (2001) and its corresponding aggregation function are provided for completeness.
Definition 2.1.
A quadruple (N, π, p̄, x) is called an Eisenberg-Noe network if N = {1, . . . , n} for some n ∈ N, π = (π_ij)_{i,j∈N} ∈ R^{n×n}_+ is a stochastic matrix with π_ii = 0 and ∑_{j=1}^n π_ji < n for each i ∈ N, p̄ = (p̄_1, . . . , p̄_n)^T ∈ R^n_{++}, and x = (x_1, . . . , x_n)^T ∈ R^n_+.

In Definition 2.1, N is the index set of nodes in a network that represents a financial system of n institutions. For every i ∈ N, p̄_i > 0 is the total amount that node i owes to the other nodes; we call p̄ the total obligation vector. For every i, j ∈ N such that i ≠ j, π_ij is the proportion of the total obligation of node i that is owed to node j; we call π the relative liabilities matrix. For every i ∈ N, the assumption π_ii = 0 means that node i cannot have liabilities to itself. By ∑_{j=1}^n π_ji < n for every i ∈ N, we assume that no node owns all the claims in the network. Note that, given p̄ and π, for every i, j ∈ N, the nominal liability l_ij of node i to node j can be calculated as l_ij = π_ij p̄_i. For each i ∈ N, x_i ≥ 0 is the operating cash flow of node i; we call x the operating cash flow vector.

Let (N, π, p̄, x) be an Eisenberg-Noe network. For each i ∈ N, let p_i ≥ 0 denote the total payment made by node i to the other nodes in the network. Then, p = (p_1, . . . , p_n)^T ∈ R^n_+ is called a payment vector.

Definition 2.2.
A vector p ∈ [0, p̄] is called a clearing vector for (N, π, p̄, x) if it satisfies the following properties:

• Limited liability: for each i ∈ N, p_i ≤ ∑_{j=1}^n π_ji p_j + x_i, which implies that node i cannot pay more than it has.

• Absolute priority: for each i ∈ N, either p_i = p̄_i or p_i = ∑_{j=1}^n π_ji p_j + x_i, which implies that node i either meets its obligations in full or else it defaults by paying as much as it has.

Let Φ^EN_+ : [0, p̄] → [0, p̄] be defined by

Φ^EN_+(p) := (π^T p + x) ∧ p̄.   (2.1)

It is shown in Eisenberg and Noe (2001) that a clearing vector p for (N, π, p̄, x) is a fixed point of Φ^EN_+, that is, Φ^EN_+(p) = p.

Next, we recall the mathematical programming characterization of clearing vectors, which is the basis of our generalizations to follow. We say that a function f : R^n → R is strictly increasing if a ≤ b and a ≠ b imply f(a) < f(b) for every a, b ∈ R^n.

Proposition 2.3. (Eisenberg and Noe, 2001, Lemma 4) Let f : R^n_+ → R be a strictly increasing function. Consider the following optimization problem with linear constraints:

maximize   f(p)
subject to p ≤ π^T p + x,  p ∈ [0, p̄].   (2.2)

If p ∈ R^n_+ is an optimal solution to this optimization problem, then it is a clearing vector for (N, π, p̄, x).

Each member in a network has its impact on the economy. As in the recent literature on systemic risk measures (see Section 1.1), we use aggregation functions to summarize these individual effects and to quantify the total impact of the network. We define the aggregation function Λ^EN_+ : R^n_+ → R for the Eisenberg-Noe network (N, π, p̄, x) by

Λ^EN_+(x) := sup{ f(p) | p ≤ π^T p + x, p ∈ [0, p̄] }   (2.3)

for each x ∈ R^n_+, where f : R^n_+ → R is a strictly increasing function; namely, Λ^EN_+(x) is the optimal value of the problem in (2.2).
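The fixed-point view also suggests a direct way to compute a clearing vector numerically: start from p̄ and iterate Φ^EN_+ in (2.1). Since the map is monotone, the iterates decrease to the greatest clearing vector. The following minimal Python sketch (ours, not from the paper; the three-node network is a hypothetical example) illustrates this:

```python
def clearing_vector_en(pi, p_bar, x, tol=1e-12, max_iter=10_000):
    """Greatest clearing vector of an Eisenberg-Noe network via fixed-point iteration.

    Iterates p <- (pi^T p + x) ∧ p_bar starting from p_bar; monotonicity of the
    map yields a decreasing sequence converging to the greatest fixed point.
    """
    n = len(p_bar)
    p = list(p_bar)  # start from the total obligation vector
    for _ in range(max_iter):
        # Phi_i(p) = min(sum_j pi[j][i] * p[j] + x[i], p_bar[i])
        p_new = [min(sum(pi[j][i] * p[j] for j in range(n)) + x[i], p_bar[i])
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(p, p_new)) < tol:
            return p_new
        p = p_new
    return p

# Hypothetical 3-node example: node 3 cannot meet its obligations in full.
pi = [[0.0, 0.5, 0.5],
      [0.5, 0.0, 0.5],
      [0.5, 0.5, 0.0]]
p_bar = [10.0, 10.0, 40.0]
x = [5.0, 5.0, 1.0]
p = clearing_vector_en(pi, p_bar, x)  # node 3 defaults, paying 11 of its 40
```

Here nodes 1 and 2 pay in full while node 3 clears at 0.5·10 + 0.5·10 + 1 = 11 < 40, which is the same vector the fictitious default algorithm would produce on this example.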
As a special case, f can be taken to be a linear function, in which case the problem in (2.2) is a linear programming problem.

2.2 Signed Eisenberg-Noe network model

In the original Eisenberg-Noe network model, it is assumed that the operating cash flow vector has positive components. In reality, however, it may happen that an institution has liabilities to external entities that are not modeled as part of the network, which results in a negative operating cash flow or a positive operating cost.

Definition 2.4. A quadruple (N, π, p̄, x) is called a signed Eisenberg-Noe network if N, π and p̄ are as in Definition 2.1, and x = (x_1, . . . , x_n)^T ∈ R^n.

Note that Definition 2.4 removes the positivity assumption on the operating cash flow vector x. Our aim is to provide a new definition of clearing vector by extending Definition 2.2 with an additional seniority assumption for negative operating cash flows. Based on this definition, we prove a fixed-point and a mathematical programming characterization of clearing vectors. Finally, we introduce an associated aggregation function through a MILP problem.

Let (N, π, p̄, x) be a signed Eisenberg-Noe network. We assume that the nodes that have obligations outside the network, that is, the nodes with negative operating cash flows, have to meet these obligations first, and if they do not default in this "first round," then they should meet their obligations to the other nodes inside the network. At this "second round," as in the original Eisenberg-Noe network model, they either meet their obligations to the other nodes in full or pay as much as they have at hand and default. This motivates the following definition.

Definition 2.5.
A vector p ∈ [0, p̄] is called a clearing vector for (N, π, p̄, x) if it satisfies the following properties:

• Immediate default: for each i ∈ N, if ∑_{j=1}^n π_ji p_j + x_i ≤ 0, then p_i = 0.

• Limited liability: for each i ∈ N, if ∑_{j=1}^n π_ji p_j + x_i > 0, then p_i ≤ ∑_{j=1}^n π_ji p_j + x_i, which implies that if node i has a strictly positive cash position, then it cannot pay more than it has.

• Absolute priority: for each i ∈ N, if ∑_{j=1}^n π_ji p_j + x_i > 0, then either p_i = p̄_i or p_i = ∑_{j=1}^n π_ji p_j + x_i, which implies that if node i has a strictly positive cash position, then it either meets its obligations in full or else it defaults by paying as much as it has.

Let Φ^EN : [0, p̄] → [0, p̄] be defined by

Φ^EN(p) := (p̄ ∧ (π^T p + x))^+,   (2.4)

or, more explicitly, for each i ∈ N,

Φ^EN_i(p) = 0                          if ∑_{j=1}^n π_ji p_j + x_i ≤ 0,
Φ^EN_i(p) = ∑_{j=1}^n π_ji p_j + x_i   if 0 < ∑_{j=1}^n π_ji p_j + x_i ≤ p̄_i,
Φ^EN_i(p) = p̄_i                       if ∑_{j=1}^n π_ji p_j + x_i > p̄_i.   (2.5)

Observe that, if x ∈ R^n_+, then Φ^EN coincides with the function Φ^EN_+ in (2.1) defined for the original Eisenberg-Noe network model. We establish the fixed-point characterization of clearing vectors next.

Proposition 2.6.
A vector p ∈ [0, p̄] is a clearing vector for (N, π, p̄, x) if and only if it is a fixed point of Φ^EN.

The next theorem is the main result of Section 2.2. It extends Proposition 2.3 to the signed Eisenberg-Noe network model by showing that a clearing vector can be calculated as an optimal solution of a certain MILP. Hence, relaxing the positivity assumption on the operating cash flow vector comes at the cost of using binary variables in the mathematical programming characterization of clearing vectors, that is, of adding a discrete feature to the originally continuous optimization problem.

Theorem 2.7.
Let x ∈ R^n and denote by Λ^EN(x) the optimal value of the MILP problem

maximize   f(p)   (2.6)
subject to p_i ≤ ∑_{j=1}^n π_ji p_j + x_i + M(1 − s_i),  i ∈ N,   (2.7)
           p_i ≤ p̄_i s_i,  i ∈ N,   (2.8)
           ∑_{j=1}^n π_ji p_j + x_i ≤ M s_i,  i ∈ N,   (2.9)
           0 ≤ p_i ≤ p̄_i,  s_i ∈ {0, 1},  i ∈ N,   (2.10)

where f : R^n_+ → R is a strictly increasing linear function and M = n‖p̄‖_∞ + ‖x‖_∞. If (p, s) is an optimal solution to the above problem, then p is a clearing vector for (N, π, p̄, x).

The proof of Theorem 2.7 is in Appendix A.2.

Remark 2.8. The function Λ^EN fails to be concave in general.

Let u = (u_1, . . . , u_n)^T ∈ {0, 1}^n be a binary vector, where u_i = 0 if x_i < 0, and u_i = 1 if x_i ≥ 0, for each i ∈ N. Then (p, s) = (0, u) ∈ R^n × Z^n is a feasible solution to the MILP in (2.6). Moreover, since f is a bounded function on the rectangle [0, p̄] ⊆ R^n_+, by Meyer (1974, Theorem 2.1), the MILP has an optimal solution. Observe that, by Theorem 2.7, the existence of an optimal solution to the MILP in (2.6) proves the existence of a clearing vector for the network (N, π, p̄, x).

Remark 2.9.
The linearity of f is not a necessary condition for Theorem 2.7 to hold.

Remark 2.10.
Instead of the seniority-based approach developed above, a naive approach would be to introduce an additional node and consider the negative operating cash flows of the nodes as liabilities to this additional node, which itself has neither obligations nor an operating cash flow, as suggested in Eisenberg and Noe (2001). This approach is valid for the fictitious default algorithm described in Eisenberg and Noe (2001), and in this way a clearing vector for the original network can be found. However, the modified network lacks a solid interpretation in terms of the original network since the relative liabilities matrix of the new network depends on the operating cash flow vector. Hence, we do not follow this route here.
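To make the clearing mechanism of this section concrete, the following Python sketch (ours, not from the paper; the two-node network is a hypothetical example) computes a clearing vector for a signed network by iterating the map Φ^EN in (2.4) starting from p̄. Since Φ^EN is monotone and continuous on [0, p̄], the iterates decrease to the greatest fixed point, which is a clearing vector by Proposition 2.6:

```python
def clearing_vector_signed_en(pi, p_bar, x, tol=1e-12, max_iter=10_000):
    """Greatest clearing vector of a signed Eisenberg-Noe network.

    Iterates Phi^EN(p) = (p_bar ∧ (pi^T p + x))^+ starting from p_bar.
    Nodes whose cash position pi^T p + x is nonpositive default
    immediately and pay 0, as required by Definition 2.5.
    """
    n = len(p_bar)
    p = list(p_bar)
    for _ in range(max_iter):
        p_new = []
        for i in range(n):
            cash = sum(pi[j][i] * p[j] for j in range(n)) + x[i]
            p_new.append(max(min(cash, p_bar[i]), 0.0))  # clamp to [0, p_bar_i]
        if max(abs(a - b) for a, b in zip(p, p_new)) < tol:
            return p_new
        p = p_new
    return p

# Hypothetical 2-node example: node 2 has a negative operating cash flow
# (external obligations), defaults immediately despite its inflows, and
# this in turn reduces what node 1 can pay.
pi = [[0.0, 1.0],
      [1.0, 0.0]]
p_bar = [10.0, 10.0]
x = [4.0, -20.0]
p = clearing_vector_signed_en(pi, p_bar, x)  # node 2 pays 0, node 1 pays 4
```

For x ∈ R^n_+, the routine reduces to the iteration for the original model, consistent with the observation that Φ^EN coincides with Φ^EN_+ on R^n_+.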
2.3 Rogers-Veraart network model

In Rogers and Veraart (2013), the original Eisenberg-Noe network model is extended by including default costs. It is assumed that a defaulting node is not able to use all of its liquid assets to meet its obligations. Unlike the Eisenberg-Noe model, the possibility of a mathematical programming formulation for clearing vectors seems to be an open problem for the Rogers-Veraart model. We fill this gap by proposing a MILP whose optimal solution includes a clearing vector for the Rogers-Veraart network model. Based on this characterization, we define an aggregation function and describe its relationship to the network model. Finally, inspired by Definition 2.2, we propose a weaker definition of a clearing vector for the Rogers-Veraart network model.

Definition 2.11.
A sextuple (N, π, p̄, x, α, β) is called a Rogers-Veraart network if N = {1, . . . , n} for some n ∈ N, π = (π_ij)_{i,j∈N} ∈ R^{n×n}_+ is a stochastic matrix with π_ii = 0 and ∑_{j=1}^n π_ji < n for each i ∈ N, p̄ = (p̄_1, . . . , p̄_n)^T ∈ R^n_{++}, x = (x_1, . . . , x_n)^T ∈ R^n_+, and α, β ∈ (0, 1].

In Definition 2.11, N is the set of nodes in a network with n institutions, p̄ is the total obligation vector, π is the matrix of relative liabilities, and x is the operating cash flow vector. It is assumed that a defaulting node may not be able to use all of its liquid assets to meet its obligations. For this purpose, we use α as the fraction of the operating cash flow and β as the fraction of the cash inflow from other nodes that can be used by a defaulting node to meet its obligations.

Let (N, π, p̄, x, α, β) be a Rogers-Veraart network. For each i ∈ N, let p_i ≥ 0 denote the total payment made by node i to the other nodes in the network. Then, p = (p_1, . . . , p_n)^T ∈ R^n_+ is called a payment vector.

Motivated by Definition 2.2 of a clearing vector for an Eisenberg-Noe network, we suggest the following similar definition of a clearing vector for the Rogers-Veraart network (N, π, p̄, x, α, β).

Definition 2.12.
A vector p ∈ [0, p̄] is called a clearing vector for (N, π, p̄, x, α, β) if it satisfies the following properties:

• Limited liability: for each i ∈ N, p_i ≤ x_i + ∑_{j=1}^n π_ji p_j, which implies that node i cannot pay more than it has.

• Absolute priority: for each i ∈ N, either p_i = p̄_i or p_i = αx_i + β ∑_{j=1}^n π_ji p_j, which implies that node i either meets its obligations in full or else it defaults by paying as much as it can.

Let Φ^RV_+ : [0, p̄] → [0, p̄] be defined by

Φ^RV_{+,i}(p) := p̄_i                            if p̄_i ≤ x_i + ∑_{j=1}^n π_ji p_j,
Φ^RV_{+,i}(p) := αx_i + β ∑_{j=1}^n π_ji p_j    if p̄_i > x_i + ∑_{j=1}^n π_ji p_j,   (2.11)

for each i ∈ N. Observe that, if α = 1 and β = 1, then the function Φ^RV_+ becomes the usual Φ^EN_+ in (2.1) from the original Eisenberg-Noe network model.

The next proposition is novel to this work and is useful in calculating clearing vectors in the Rogers-Veraart model. Its proof is given in Appendix A.3.

Proposition 2.13.
A fixed point p ∈ [0, p̄] of Φ^RV_+ is a clearing vector for (N, π, p̄, x, α, β).

Remark 2.14. The converse of Proposition 2.13 fails to hold in general. Here is a counterexample. Consider a Rogers-Veraart network (N, π, p̄, x, α, β) and a payment vector p, where N = {1, 2},

π = [0 1; 0 0],  p̄ = (25, 5)^T,  x = (30, 0)^T,  p = (15, 5)^T,

α = 0.5 and β = 0.5. According to Definition 2.12, p is a clearing vector for (N, π, p̄, x, α, β) since it satisfies absolute priority and limited liability. However, by (2.11), Φ^RV_{+,1}(p) = p̄_1 = 25 > p_1 = 15. Hence, p is not a fixed point of Φ^RV_+.

The next theorem is the main result of Section 2.3. In the spirit of Theorem 2.7 for a signed Eisenberg-Noe network, it provides a MILP characterization of clearing vectors for the Rogers-Veraart network (N, π, p̄, x, α, β).

Theorem 2.15.
For each x ∈ R^n_+, denote by Λ^RV_+(x) the optimal value of the MILP

maximize   f(p)   (2.12)
subject to p_i ≤ αx_i + β ∑_{j=1}^n π_ji p_j + p̄_i s_i,  i ∈ N,   (2.13)
           p̄_i s_i ≤ x_i + ∑_{j=1}^n π_ji p_j,  i ∈ N,   (2.14)
           0 ≤ p_i ≤ p̄_i,  s_i ∈ {0, 1},  i ∈ N,   (2.15)

where f : R^n_+ → R is a strictly increasing linear function; for each x ∈ R^n \ R^n_+, set Λ^RV_+(x) = −∞ for definiteness. If (p, s) is an optimal solution to the above MILP, then p is a clearing vector for (N, π, p̄, x, α, β).

The proof of Theorem 2.15 is given in Appendix A.4.

Remark 2.16.
The function Λ^RV_+ fails to be concave in general.

Remark 2.17. Let us comment on the MILP problems in Theorem 2.7 and Theorem 2.15. While both problems have a discrete feature through the binary variables, the natures of this feature are quite different from each other. In Theorem 2.7, the binary variables serve for quantifying the switch from the "first round" to the "second round" in the definition of Φ^EN, which is described above Definition 2.5. In this case, in addition to the binary variables, one also uses a large constant M in the problem formulation. On the other hand, binary variables are used in Theorem 2.15 to model the discontinuity in Φ^RV_+, which occurs when α < 1 or β < 1. In this case, a formulation without using a large constant M is possible.

Remark 2.18.
It is easy to check that (p, s) = (0, 0) ∈ R^n × Z^n is a feasible solution to the MILP in (2.12). Moreover, since f is a bounded function on the rectangle [0, p̄] ⊆ R^n_+, by Meyer (1974, Theorem 2.1), the MILP in (2.12) has an optimal solution. Observe that, by Theorem 2.15, the existence of an optimal solution to the MILP in (2.12) proves the existence of a clearing vector for (N, π, p̄, x, α, β). Hence, Theorem 2.15 provides an alternative argument for the proof of Rogers and Veraart (2013, Theorem 3.1) on the existence of a clearing vector.

The MILP aggregation functions Λ^EN and Λ^RV_+ developed in this section are used in Section 3 to define and calculate systemic risk measures.

3 Systemic risk measures

In this section, we consider the computation of (sensitive) systemic risk measures, which are set-valued functionals of a random operating cash flow vector and are defined in terms of the aggregation function of the underlying network model. While the aforementioned articles focus mainly on the case where the aggregation function is concave, which results in the convex-valuedness of the corresponding systemic risk measure, the aggregation functions we use are not concave and the corresponding systemic risk measures fail to have convex values, in general.

Without specifying a particular network model, we consider a financial network with n ∈ N institutions. As in Section 2, we write N = {1, . . . , n}. Similarly, let K = {1, . . . , K} for some K ∈ N. We consider a finite probability space (Ω, F, P), where Ω = {ω^1, . . . , ω^K}, F is the power set of Ω, and P is a probability measure determined by the elementary probabilities q_k := P{ω^k} > 0, k ∈ K. We denote by L(R^n) the linear space of all random vectors X : Ω → R^n. For every X ∈ L(R^n), let ‖X‖_∞ := max_{i∈N, k∈K} |X_i(ω^k)|.

We use the notion of grouping, also discussed in Feinstein et al. (2017), to keep the dimension of the systemic risk measure at a reasonable level for computational purposes. This notion allows one to categorize the members of the network into groups and assign the same capital level to all the members of a group. To that end, let G ≥ 1 and denote by G = {1, . . . , G} the set of groups in the network. For the computations in Section 4, we will use G = 2 or G = 3 groups. Let (N_ℓ)_{ℓ∈G} be a partition of N, where N_ℓ denotes the set of all institutions that belong to group ℓ ∈ G. For each ℓ ∈ G, let n_ℓ := |N_ℓ| and denote by B_ℓ ∈ R^{G×n_ℓ} the matrix having 1's in the ℓth row and 0's elsewhere. Let B ∈ R^{G×n} be the grouping matrix defined by

B := [B_1 B_2 . . . B_G].   (3.1)

We consider the (grouped) sensitive systemic risk measure R^OPT : L(R^n) → 2^{R^G} defined by

R^OPT(X) := {z ∈ R^G | Λ^OPT(X + B^T z) ∈ A},   (3.2)

where Λ^OPT : R^n → R ∪ {−∞} is an aggregation function and A ⊆ L(R) is an acceptance set, that is, the set of all random aggregate outputs that are at an acceptable level of risk. We assume that Λ^OPT is a general optimization aggregation function of the form

Λ^OPT(x) := sup{ f(p) | (p, s) ∈ Y(x), p ∈ R^n, s ∈ Z^n },   (3.3)

where f : R^n → R is a strictly increasing and continuous function, and Y is a set-valued constraint function on R^n such that Y(x) is either the empty set or a nonempty compact subset of R^n × Z^n for every x ∈ R^n. In particular, this general structure covers the aggregation functions Λ^OPT = Λ^EN and Λ^OPT = Λ^RV_+ defined in Section 2. On the other hand, we assume that A is a halfspace-type acceptance set defined by

A = {Y ∈ L(R) | E[Y] ≥ γ},   (3.4)

where γ ∈ R is some suitable threshold. Hence, the corresponding systemic risk measure R^OPT becomes

R^OPT(X) = {z ∈ R^G | E[Λ^OPT(X + B^T z)] ≥ γ}.   (3.5)

We write R^OPT = R^EN when Λ^OPT = Λ^EN and R^OPT = R^RV_+ when Λ^OPT = Λ^RV_+, and refer to them as the Eisenberg-Noe and Rogers-Veraart systemic risk measures, respectively.

Remark 3.1.
For $R^{RV+}(X)$, the definition of $\Lambda^{RV+}$ in Theorem 2.15 implies the implicit condition $X + B^{\mathsf{T}} z \ge 0$.

Let us fix $X \in L(\mathbb{R}^n)$ and consider the vector optimization problem
$$\text{minimize } z \in \mathbb{R}^G \text{ with respect to} \le \text{ subject to } \mathbb{E}[\Lambda^{OPT}(X + B^{\mathsf{T}} z)] \ge \gamma, \quad (3.6)$$
where $\le$ denotes the usual componentwise ordering on $\mathbb{R}^G$. Note that $R^{OPT}(X)$ coincides with the so-called upper image of this vector optimization problem in the sense that
$$R^{OPT}(X) = \bigcup\big\{z + \mathbb{R}^G_+ \,\big|\, \mathbb{E}[\Lambda^{OPT}(X + B^{\mathsf{T}} z)] \ge \gamma\big\}. \quad (3.7)$$
In the next two subsections, we propose methods to solve certain scalarization problems associated to $R^{OPT}(X)$ by exploiting the structure of the optimization aggregation function $\Lambda^{OPT}$.

For each $w \in \mathbb{R}^G_+ \setminus \{0\}$, we consider the weighted-sum scalarization problem
$$P_1(w) = \inf_{z \in R^{OPT}(X)} w^{\mathsf{T}} z = \inf_{z \in \mathbb{R}^G}\{w^{\mathsf{T}} z \mid \mathbb{E}[\Lambda^{OPT}(X + B^{\mathsf{T}} z)] \ge \gamma\}. \quad (3.8)$$
The following theorem provides an alternative formulation for $P_1(w)$.

Theorem 3.2. Let $w \in \mathbb{R}^G_+ \setminus \{0\}$. Consider the problem in (3.8) and let
$$Z_1(w) := \inf_{z \in \mathbb{R}^G}\Big\{w^{\mathsf{T}} z \,\Big|\, \sum_{k \in \mathcal{K}} q_k f(p^k) \ge \gamma,\ (p^k, s^k) \in Y(X(\omega^k) + B^{\mathsf{T}} z),\ p^k \in \mathbb{R}^n,\ s^k \in \mathbb{Z}^n\ \forall k \in \mathcal{K}\Big\}. \quad (3.9)$$
Then, $P_1(w) = Z_1(w)$. In particular, if one of the problems in (3.8) and (3.9) has a finite optimal value, then so does the other one and the optimal values coincide.

The proof of Theorem 3.2 is given in Appendix B.1.

Remark 3.3.
Let $\ell \in \mathcal{G}$ and let $e^\ell$ be the corresponding standard unit vector in $\mathbb{R}^G$. Observe that the weighted-sum scalarization problem
$$P_1(e^\ell) = \inf_{z \in \mathbb{R}^G}\{z_\ell \mid \mathbb{E}[\Lambda^{OPT}(X + B^{\mathsf{T}} z)] \ge \gamma\} \quad (3.10)$$
is a single-objective optimization problem obtained from the vector optimization problem in (3.6). By Theorem 3.2, $P_1(e^\ell) = Z_1(e^\ell)$.

Remark 3.4.
Let $z^{\mathrm{ideal}} \in \mathbb{R}^G$ be the ideal point of the vector optimization problem in (3.6) in the sense that the entries of $z^{\mathrm{ideal}}$ minimize each of the objective functions of the vector optimization problem. In other words, one can define
$$z^{\mathrm{ideal}} := \big(P_1(e^1), \ldots, P_1(e^G)\big)^{\mathsf{T}} \in \mathbb{R}^G \quad (3.11)$$
assuming that $P_1(e^\ell)$ is finite for each $\ell \in \mathcal{G}$. Theorem 3.2 allows one to solve $G$ optimization problems with compact feasible sets, namely, the problems $(Z_1(e^\ell))_{\ell \in \mathcal{G}}$, to obtain the ideal point of the vector optimization problem in (3.6).

In the following two subsections, we apply Theorem 3.2 to the special cases $\Lambda^{OPT} = \Lambda^{EN}$ and $\Lambda^{OPT} = \Lambda^{RV+}$, respectively. For this purpose, we fix the function $f\colon \mathbb{R}^n \to \mathbb{R}$ in the objective functions of the MILP aggregation functions as $f(p) := \mathbf{1}^{\mathsf{T}} p$.
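Before specializing to the MILP formulations, it may help to see the quantity being aggregated. The sketch below (plain Python, with hypothetical 4-bank data) computes an Eisenberg-Noe clearing vector by the classical fictitious-default fixed-point iteration and then evaluates $f(p) = \mathbf{1}^{\mathsf{T}} p$. This is an illustrative alternative to the paper's MILP computation of $\Lambda^{EN}$, not the method proposed here.

```python
# Illustrative sketch (not the paper's MILP): an Eisenberg-Noe clearing vector
# via Picard iteration of the map p -> min(p_bar, (Pi^T p + x)^+), starting
# from p_bar; the iteration decreases monotonically to the greatest clearing
# vector. The 4-bank liability data below are hypothetical.

def clearing_vector_en(liabilities, x, tol=1e-10, max_iter=10_000):
    """liabilities[i][j] = nominal amount bank i owes bank j; x = cash flows."""
    n = len(liabilities)
    p_bar = [sum(row) for row in liabilities]  # total obligation of each bank
    # relative liabilities: pi[i][j] = liabilities[i][j] / p_bar[i] (0 if no debt)
    pi = [[liabilities[i][j] / p_bar[i] if p_bar[i] > 0 else 0.0
           for j in range(n)] for i in range(n)]
    p = p_bar[:]  # start from full payment
    for _ in range(max_iter):
        # resources of bank i: operating cash flow plus payments received
        received = [sum(pi[j][i] * p[j] for j in range(n)) for i in range(n)]
        p_new = [min(p_bar[i], max(x[i] + received[i], 0.0)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(p, p_new)) < tol:
            return p_new
        p = p_new
    return p

liab = [[0, 10, 0, 0], [5, 0, 5, 0], [0, 5, 0, 5], [0, 0, 10, 0]]
x = [2.0, 1.0, 3.0, 0.5]
p = clearing_vector_en(liab, x)
aggregate = sum(p)  # f(p) = 1^T p, the objective used for Lambda^EN
```

For this toy instance the iteration stabilizes after two steps, and `aggregate` is the value that the acceptance constraint $\mathbb{E}[\Lambda^{EN}(\cdot)] \ge \gamma$ compares against $\gamma$ scenario by scenario.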
It is clear that $f$ is a strictly increasing, continuous, linear function that is bounded on the interval $[0, \bar p] \subseteq \mathbb{R}^n$. Moreover, since the vector optimization algorithm in Section C requires solving weighted-sum scalarization problems only for the standard unit vectors, we state our results for such direction vectors.

Let $(\mathcal{N}, \pi, \bar p, X)$ be a signed Eisenberg-Noe network.

Corollary 3.5.
Let $\ell \in \mathcal{G}$. Consider the single-objective optimization problem
$$P^{EN}_1(e^\ell) := \inf_{z \in \mathbb{R}^G}\{z_\ell \mid \mathbb{E}[\Lambda^{EN}(X + B^{\mathsf{T}} z)] \ge \gamma\}, \quad (3.12)$$
and let $Z^{EN}_1(e^\ell)$ be the optimal value of the MILP problem
$$\begin{aligned}
\text{minimize}\quad & z_\ell & (3.13)\\
\text{subject to}\quad & \textstyle\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf{T}} p^k \ge \gamma, & (3.14)\\
& \textstyle p^k_i \le \sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega^k) + (B^{\mathsf{T}} z)_i\big) + M(1 - s^k_i), \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.15)\\
& p^k_i \le \bar p_i s^k_i, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.16)\\
& \textstyle\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega^k) + (B^{\mathsf{T}} z)_i\big) \le M s^k_i, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.17)\\
& 0 \le p^k_i \le \bar p_i,\ s^k_i \in \{0, 1\}, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.18)\\
& z \in \mathbb{R}^G, & (3.19)
\end{aligned}$$
where $M = 2\|X\|_\infty + (n+1)\|\bar p\|_\infty$. Then, $P^{EN}_1(e^\ell) = Z^{EN}_1(e^\ell)$. In particular, if one of the problems in (3.12) and (3.13) has a finite optimal value, then so does the other one and the optimal values coincide.

The next proposition presents some boundedness and feasibility results for the problem in (3.13).

Proposition 3.6.
Let $\ell \in \mathcal{G}$ and consider the MILP problem in (3.13).

1. If the problem has an optimal solution, then $P^{EN}_1(e^\ell) = Z^{EN}_1(e^\ell) \le \|X\|_\infty + \|\bar p\|_\infty$.
2. If the problem has a feasible solution, then it has a finite optimal value, that is, $Z^{EN}_1(e^\ell) \in \mathbb{R}$.

3. The problem has a feasible solution if and only if $\gamma \le \mathbf{1}^{\mathsf{T}} \bar p$.

The proofs of the results in this subsection are given in Appendix B.2.

Remark 3.7.
Let $\ell \in \mathcal{G}$. Suppose that there exists an optimal solution $(z, (p^k, s^k)_{k \in \mathcal{K}})$ of the MILP problem in (3.13). By the structure of the matrix $B$, for each $i \in \mathcal{N}$, it holds $(B^{\mathsf{T}} z)_i = z_t$ for some $t \in \mathcal{G}$. Hence, by Proposition 3.6(i), $(B^{\mathsf{T}} z)_i \le \|X\|_\infty + \|\bar p\|_\infty$ holds for each $i \in \mathcal{N}$. In addition, for every $i \in \mathcal{N}$, $k \in \mathcal{K}$, and $p^k \in [0, \bar p]$, it holds $\sum_{j=1}^n \pi_{ji} p^k_j \le n\|\bar p\|_\infty$ and $X_i(\omega^k) \le \|X\|_\infty$. Hence, the choice of $M = 2\|X\|_\infty + (n+1)\|\bar p\|_\infty$ in Corollary 3.5 is justified since, to ensure the feasibility of the constraint in (3.17), it is enough to choose $M$ such that
$$\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega^k) + (B^{\mathsf{T}} z)_i\big) \le M$$
for every $i \in \mathcal{N}$, $k \in \mathcal{K}$ and $p^k \in [0, \bar p]$.

Let $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$ be a Rogers-Veraart network.

Corollary 3.8.
Let $\ell \in \mathcal{G}$. Consider the single-objective optimization problem
$$P^{RV+}_1(e^\ell) := \inf_{z \in \mathbb{R}^G}\{z_\ell \mid \mathbb{E}[\Lambda^{RV+}(X + B^{\mathsf{T}} z)] \ge \gamma\}, \quad (3.20)$$
and let $Z^{RV+}_1(e^\ell)$ be the optimal value of the MILP problem
$$\begin{aligned}
\text{minimize}\quad & z_\ell & (3.21)\\
\text{subject to}\quad & \textstyle\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf{T}} p^k \ge \gamma, & (3.22)\\
& \textstyle p^k_i \le \alpha\big(X_i(\omega^k) + (B^{\mathsf{T}} z)_i\big) + \beta \sum_{j=1}^n \pi_{ji} p^k_j + \bar p_i s^k_i, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.23)\\
& \textstyle\bar p_i s^k_i \le \big(X_i(\omega^k) + (B^{\mathsf{T}} z)_i\big) + \sum_{j=1}^n \pi_{ji} p^k_j, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.24)\\
& X_i(\omega^k) + (B^{\mathsf{T}} z)_i \ge 0, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.25)\\
& 0 \le p^k_i \le \bar p_i,\ s^k_i \in \{0, 1\}, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.26)\\
& z \in \mathbb{R}^G. & (3.27)
\end{aligned}$$
(Here, constraint (3.25) ensures $X + B^{\mathsf{T}} z \ge 0$ so that $\Lambda^{RV+}(X(\omega^k) + B^{\mathsf{T}} z) > -\infty$ for every $k \in \mathcal{K}$.) Then, $P^{RV+}_1(e^\ell) = Z^{RV+}_1(e^\ell)$. In particular, if one of the problems in (3.20) and (3.21) has a finite optimal value, then so does the other one and the optimal values coincide.

The next proposition presents some boundedness and feasibility results for the problem in (3.21).

Proposition 3.9.
Let $\ell \in \mathcal{G}$. Consider the MILP problem in (3.21).

1. If the problem has an optimal solution, then $P^{RV+}_1(e^\ell) = Z^{RV+}_1(e^\ell) \le \|X\|_\infty + \frac{1}{\alpha}\|\bar p\|_\infty$.
2. If the problem has a feasible solution, then it has a finite optimal value, that is, $Z^{RV+}_1(e^\ell) \in \mathbb{R}$.

3. The problem has a feasible solution if and only if $\gamma \le \mathbf{1}^{\mathsf{T}} \bar p$.

The proofs of the results in this subsection are given in Appendix B.3.

Weighted-sum scalarizations are used to calculate supporting halfspaces for the value of a systemic risk measure, and they can be sufficient to characterize the entire risk set when the set is convex. In our nonconvex case, we make use of additional scalarizations that calculate the minimum step-lengths needed to enter the risk set from points possibly outside of it. Such scalarizations are well-known in vector optimization; see Pascoletti and Serafini (1984), Gerstewitz and Iwanow (1985), Göpfert et al. (2003), for instance.

For each $v \in \mathbb{R}^G$, we consider
$$P_2(v) := \inf\{\mu \in \mathbb{R} \mid v + \mu\mathbf{1} \in R^{OPT}(X)\} = \inf\{\mu \in \mathbb{R} \mid \mathbb{E}[\Lambda^{OPT}(X + B^{\mathsf{T}} v + \mu\mathbf{1})] \ge \gamma\}, \quad (3.28)$$
which can be interpreted as the minimum step-length in the direction $\mathbf{1}$ from the point $v$ to the boundary of the set $R^{OPT}(X)$.

The following theorem provides an alternative formulation for $P_2(v)$.

Theorem 3.10.
Let $v \in \mathbb{R}^G$. Consider the problem in (3.28) and let
$$Z_2(v) := \inf\Big\{\mu \in \mathbb{R} \,\Big|\, \sum_{k \in \mathcal{K}} q_k f(p^k) \ge \gamma,\ (p^k, s^k) \in Y(X(\omega^k) + B^{\mathsf{T}} v + \mu\mathbf{1}),\ p^k \in \mathbb{R}^n,\ s^k \in \mathbb{Z}^n\ \forall k \in \mathcal{K}\Big\}. \quad (3.29)$$
Then, $P_2(v) = Z_2(v)$. In particular, if one of the problems in (3.28) and (3.29) has a finite optimal value, then so does the other one and the optimal values coincide.

The proof of Theorem 3.10 is given in Appendix B.4.

The following two subsections apply Theorem 3.10 to the special cases $\Lambda^{OPT} = \Lambda^{EN}$ and $\Lambda^{OPT} = \Lambda^{RV+}$, respectively.

Let $(\mathcal{N}, \pi, \bar p, X)$ be an Eisenberg-Noe network.

Corollary 3.11.
Let $v \in \mathbb{R}^G$. Consider the problem
$$P^{EN}_2(v) := \inf\{\mu \in \mathbb{R} \mid \mathbb{E}[\Lambda^{EN}(X + B^{\mathsf{T}} v + \mu\mathbf{1})] \ge \gamma\}, \quad (3.30)$$
and let $Z^{EN}_2(v)$ be the optimal value of the MILP problem
$$\begin{aligned}
\text{minimize}\quad & \mu & (3.31)\\
\text{subject to}\quad & \textstyle\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf{T}} p^k \ge \gamma, & (3.32)\\
& \textstyle p^k_i \le \sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega^k) + (B^{\mathsf{T}} v)_i + \mu\big) + M(1 - s^k_i), \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.33)\\
& p^k_i \le \bar p_i s^k_i, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.34)\\
& \textstyle\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega^k) + (B^{\mathsf{T}} v)_i + \mu\big) \le M s^k_i, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.35)\\
& 0 \le p^k_i \le \bar p_i,\ s^k_i \in \{0, 1\}, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.36)\\
& \mu \in \mathbb{R}, & (3.37)
\end{aligned}$$
where $M = 2\|X\|_\infty + 2\|v\|_\infty + (n+1)\|\bar p\|_\infty$. Then, $P^{EN}_2(v) = Z^{EN}_2(v)$. In particular, if one of the problems in (3.30) and (3.31) has a finite optimal value, then so does the other one and the optimal values coincide.

The next proposition presents some boundedness and feasibility results for the problem in (3.31).

Proposition 3.12.
Let $v \in \mathbb{R}^G$. Consider the MILP problem in (3.31).

1. If the problem has an optimal solution, then $P^{EN}_2(v) = Z^{EN}_2(v) \le \|X\|_\infty + \|v\|_\infty + \|\bar p\|_\infty$.
2. If the problem has a feasible solution, then it has a finite optimal value, that is, $Z^{EN}_2(v) \in \mathbb{R}$.

3. The problem has a feasible solution if and only if $\gamma \le \mathbf{1}^{\mathsf{T}} \bar p$.

The proofs of the results in this subsection are given in Appendix B.5.

Remark 3.13.
Let $v \in \mathbb{R}^G$ and let $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ be an optimal solution of the MILP problem in (3.31). By Proposition 3.12, $\mu \le \|X\|_\infty + \|v\|_\infty + \|\bar p\|_\infty$. By the structure of the matrix $B$, for each $i \in \mathcal{N}$, it holds $(B^{\mathsf{T}} v)_i = v_t$ for some $t \in \mathcal{G}$. Hence, for every $v \in \mathbb{R}^G$, $(B^{\mathsf{T}} v)_i \le \|v\|_\infty$. In addition, for every $i \in \mathcal{N}$, $k \in \mathcal{K}$, and $p^k \in [0, \bar p]$, it holds $\sum_{j=1}^n \pi_{ji} p^k_j \le n\|\bar p\|_\infty$ and $X_i(\omega^k) \le \|X\|_\infty$. Hence, the choice of $M = 2\|X\|_\infty + 2\|v\|_\infty + (n+1)\|\bar p\|_\infty$ in Corollary 3.11 is justified since, to ensure the feasibility of constraint (3.35), it is enough to choose $M$ such that
$$\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega^k) + (B^{\mathsf{T}} v)_i + \mu\big) \le M$$
for every $i \in \mathcal{N}$, $k \in \mathcal{K}$, $v \in \mathbb{R}^G$ and $p^k \in [0, \bar p]$.

Remark 3.14.
Proposition 3.6(ii) shows that if the MILP problem $Z^{EN}_1(e^\ell)$ in (3.13) is feasible for every $\ell \in \mathcal{G}$, then the ideal point $z^{\mathrm{ideal}} \in \mathbb{R}^G$ exists for the vector optimization problem in (3.6) with $\Lambda^{OPT} = \Lambda^{EN}$. Proposition 3.9(ii) provides the same result for the vector optimization problem in (3.6) with $\Lambda^{OPT} = \Lambda^{RV+}$. In addition, Propositions 3.6(i), 3.6(iii), 3.12(i), 3.12(iii) allow one to choose the exact value for the upper bound $M$ in the corresponding MILP problems instead of assuming some heuristic values.

Let $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$ be a Rogers-Veraart network.

Corollary 3.15.
Let $v \in \mathbb{R}^G$. Consider the problem
$$P^{RV+}_2(v) := \inf\{\mu \in \mathbb{R} \mid \mathbb{E}[\Lambda^{RV+}(X + B^{\mathsf{T}} v + \mu\mathbf{1})] \ge \gamma\}, \quad (3.38)$$
and let $Z^{RV+}_2(v)$ be the optimal value of the MILP problem
$$\begin{aligned}
\text{minimize}\quad & \mu & (3.39)\\
\text{subject to}\quad & \textstyle\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf{T}} p^k \ge \gamma, & (3.40)\\
& \textstyle p^k_i \le \alpha\big(X_i(\omega^k) + (B^{\mathsf{T}} v)_i + \mu\big) + \beta \sum_{j=1}^n \pi_{ji} p^k_j + \bar p_i s^k_i, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.41)\\
& \textstyle\bar p_i s^k_i \le \big(X_i(\omega^k) + (B^{\mathsf{T}} v)_i + \mu\big) + \sum_{j=1}^n \pi_{ji} p^k_j, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.42)\\
& X_i(\omega^k) + (B^{\mathsf{T}} v)_i + \mu \ge 0, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.43)\\
& 0 \le p^k_i \le \bar p_i,\ s^k_i \in \{0, 1\}, \quad \forall i \in \mathcal{N},\, k \in \mathcal{K}, & (3.44)\\
& \mu \in \mathbb{R}. & (3.45)
\end{aligned}$$
(Here, constraint (3.43) ensures $X + B^{\mathsf{T}} v + \mu\mathbf{1} \ge 0$ so that $\Lambda^{RV+}(X(\omega^k) + B^{\mathsf{T}} v + \mu\mathbf{1}) > -\infty$ for every $k \in \mathcal{K}$.) Then, $P^{RV+}_2(v) = Z^{RV+}_2(v)$. In particular, if one of the problems in (3.38) and (3.39) has a finite optimal value, then so does the other one and the optimal values coincide.

The next proposition presents some boundedness and feasibility results for the problem in (3.39).

Proposition 3.16.
Let $v \in \mathbb{R}^G$. Consider the MILP problem in (3.39).

1. If the problem has an optimal solution, then $P^{RV+}_2(v) = Z^{RV+}_2(v) \le \|X\|_\infty + \|v\|_\infty + \frac{1}{\alpha}\|\bar p\|_\infty$.
2. If the problem has a feasible solution, then it has a finite optimal value, that is, $Z^{RV+}_2(v) \in \mathbb{R}$.

3. The problem has a feasible solution if and only if $\gamma \le \mathbf{1}^{\mathsf{T}} \bar p$.

The proofs of the results in this subsection are given in Appendix B.6.

Remark 3.17.
For $\ell \in \mathcal{G}$ and $v \in \mathbb{R}^G$, the threshold $\gamma$ appearing in $R^{EN}$, $R^{RV+}$ can be taken as some percentage of $\mathbf{1}^{\mathsf{T}} \bar p$, the sum of the debts of all nodes in the network. This threshold then ensures that the expected total amount of payments exceeds this fraction of the total debt in the system. Indeed, Propositions 3.6(iii), 3.9(iii), 3.12(iii), 3.16(iii) show that the MILP problems for calculating $Z^{EN}_1(e^\ell)$, $Z^{RV+}_1(e^\ell)$, $Z^{EN}_2(v)$, $Z^{RV+}_2(v)$ are feasible if and only if $\gamma \le \mathbf{1}^{\mathsf{T}} \bar p$. Hence, this choice of $\gamma$ is justified.

The methods for solving weighted-sum and Pascoletti-Serafini scalarizations developed in the preceding section can be embedded into any meta vector optimization algorithm that makes use of these scalarizations. For the computational analysis in this section, we employ the Benson-type algorithm in Nobakhtian and Shafiei (2017), which is provided in Section C for the convenience of the reader.

We implement the algorithm in Java (Eclipse Photon, Release 4.8.0), calling Gurobi Interactive Shell (Version 7.5.2), and run it on an Intel(R) Core(TM) i7-4790 processor with 3.60 GHz and 4 GB RAM. We approximate the Eisenberg-Noe and Rogers-Veraart systemic risk measures within a two-group framework and perform a detailed sensitivity analysis. In the last part, we present several computational results for three-group networks.

Recall that $n$ is the number of institutions in a financial system, referred to as banks here, $n_\ell$ is the number of nodes in a group $\ell \in \mathcal{G}$, $K$ is the number of scenarios, $\epsilon$ is a user-defined approximation error, and $z^{UB}$ is a user-defined upper-bound vector that limits the approximated region of a systemic risk measure.
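To make the grouping notation concrete, the following sketch builds the grouping matrix $B$ of (3.1) for a hypothetical five-bank, two-group network and applies the lift $z \mapsto B^{\mathsf{T}} z$ used in (3.2); the group sizes and indices here are illustrative only.

```python
# Sketch of the grouping matrix B in (3.1) for a hypothetical network with
# n = 5 banks split into G = 2 groups: N_1 = {0, 1} (big), N_2 = {2, 3, 4}
# (small). B has one row per group; B[l][i] = 1 iff bank i belongs to group l,
# so a group-level capital vector z in R^G is lifted to bank level via B^T z.

def grouping_matrix(groups, n):
    """groups[l] = list of bank indices in group l (a partition of range(n))."""
    G = len(groups)
    B = [[0] * n for _ in range(G)]
    for l, members in enumerate(groups):
        for i in members:
            B[l][i] = 1
    return B

def lift(B, z):
    """Return B^T z: each bank receives the capital level of its group."""
    G, n = len(B), len(B[0])
    return [sum(B[l][i] * z[l] for l in range(G)) for i in range(n)]

B = grouping_matrix([[0, 1], [2, 3, 4]], n=5)
print(lift(B, [10.0, 2.0]))  # -> [10.0, 10.0, 2.0, 2.0, 2.0]
```

Since each bank belongs to exactly one group, every column of $B$ contains a single 1, which is the structural fact used in Remarks 3.7 and 3.13 ($(B^{\mathsf{T}} z)_i = z_t$ for some $t \in \mathcal{G}$).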
Throughout the computation of systemic risk measures, except for the Rogers-Veraart case in a three-group framework (Section 4.6), $z^{UB}$ is taken as $z^{UB} = z^{\mathrm{ideal}} + 2\|\bar p\|_\infty \mathbf{1}$, where $z^{\mathrm{ideal}}$ is the ideal point of the corresponding systemic risk measure (Remark 3.4) for the case $\gamma = \mathbf{1}^{\mathsf{T}} \bar p$, that is, when it is required that the expected total value of payments is at least as much as the total amount of liabilities in the network. For convenience, let us write $\gamma = \gamma_p (\mathbf{1}^{\mathsf{T}} \bar p)$, where $\gamma_p \in [0, 1]$.

We consider a network with $n$ banks forming $G = 2$ or $G = 3$ groups. Recall that $\mathcal{G} = \{1, \ldots, G\}$, $\mathcal{N} = \bigcup_{\ell \in \mathcal{G}} \mathcal{N}_\ell = \{1, \ldots, n\}$, and $n_\ell = |\mathcal{N}_\ell|$. When $G = 2$, the groups $\ell = 1$ and $\ell = 2$ correspond to big and small banks, respectively. When $G = 3$, the groups $\ell = 1$, $\ell = 2$ and $\ell = 3$ correspond to big, medium and small banks, respectively.

In order to construct a signed Eisenberg-Noe network $(\mathcal{N}, \pi, \bar p, X)$ and a Rogers-Veraart network $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$, the corresponding interbank liabilities matrix $l := (l_{ij})_{i,j \in \mathcal{N}} \in \mathbb{R}^{n \times n}_+$ and the random operating cash flow vector $X$ are generated in the following fashion. For $l$, we use an Erdős-Rényi random graph model (Erdős and Rényi, 1959; Gilbert, 1959). First, we fix a connectivity probabilities matrix $q^{\mathrm{con}} := (q^{\mathrm{con}}_{\ell,\hat\ell})_{\ell,\hat\ell \in \mathcal{G}} \in \mathbb{R}^{G \times G}$ and an intergroup liabilities matrix $l^{\mathrm{gr}} := (l^{\mathrm{gr}}_{\ell,\hat\ell})_{\ell,\hat\ell \in \mathcal{G}} \in \mathbb{R}^{G \times G}$. For any two banks $i, j \in \mathcal{N}$ with $i \in \mathcal{N}_\ell$, $j \in \mathcal{N}_{\hat\ell}$ and $\ell, \hat\ell \in \mathcal{G}$, $q^{\mathrm{con}}_{\ell,\hat\ell}$ is interpreted as the probability that bank $i$ owes the amount $l^{\mathrm{gr}}_{\ell,\hat\ell}$ to bank $j$. Then, the liability $l_{ij}$ is generated by the Bernoulli trial
$$l_{ij} = \begin{cases} l^{\mathrm{gr}}_{\ell,\hat\ell}, & \text{if } U_{ij} < q^{\mathrm{con}}_{\ell,\hat\ell}, \\ 0, & \text{otherwise}, \end{cases}$$
where $U_{ij}$ is the realization of a continuous random variable with a standard uniform distribution on a separate probability space. Then, the relative liabilities matrix $\pi$ and the total obligation vector $\bar p$ are calculated accordingly.

Recall that the operating cash flow vector $X = (X_1, \ldots, X_n) \in L(\mathbb{R}^n)$ is a multivariate random vector and $\Omega$ is a finite set of $K$ scenarios. It is assumed that all scenarios are equally likely to happen, the operating cash flows have a common standard deviation $\sigma$, and there is a common correlation $\varrho$ between any two operating cash flows. Then, each entry $X_i$, $i \in \mathcal{N}$, is generated as a random sample of size $K$ as described below.

For the Eisenberg-Noe network, the mean values of operating cash flows in each group, $\nu := (\nu_\ell)_{\ell \in \mathcal{G}}$, are fixed and the random vector $X$ is generated from $K$ instances of a Gaussian random vector. For the Rogers-Veraart network, first, shape parameters $\kappa := (\kappa_\ell)_{\ell \in \mathcal{G}}$ and scale parameters $\theta := (\theta_\ell)_{\ell \in \mathcal{G}}$ are fixed in accordance with the choices of $\sigma, \varrho$, and then $X$ is generated from $K$ instances of a random vector whose cumulative distribution function is stated in terms of a Gaussian copula with gamma marginal distributions with the chosen parameters. In particular, $\nu_\ell = \kappa_\ell \theta_\ell$ and $\sigma = \sqrt{\kappa_\ell}\,\theta_\ell$ for each $\ell \in \mathcal{G}$.

We consider a two-group Eisenberg-Noe network with $n = 50$ banks that consists of $n_1 = 15$ big banks and $n_2 = 35$ small banks. We take $K = 100$, $\sigma = 100$, $\varrho = 0.\cdot$ and
$$q^{\mathrm{con}} = \begin{bmatrix} \cdot & \cdot \\ \cdot & \cdot \end{bmatrix}, \qquad l^{\mathrm{gr}} = \begin{bmatrix} 10 & 5 \\ 8 & 5 \end{bmatrix}, \qquad \nu = \begin{bmatrix} -\cdot & -\cdot \end{bmatrix}.$$
In the corresponding Eisenberg-Noe systemic risk measure, we take $\gamma_p = 0.\cdot$.

The Benson-type algorithm is run with four different approximation errors $\epsilon$ to demonstrate different inner approximation levels. Table 1 presents the computational performance of the algorithm for $\epsilon \in \{20, 10, 5, 1\}$. As the number of $P_2$ problems increases, the average computation time per $P_2$ problem decreases. This may be attributed to the warm start feature of the Gurobi solver. When a sequence of MILP problems is solved, the solver constructs an initial solution out of the previously obtained optimal solution. This feature is explained in detail in the Gurobi Optimizer Reference Manual (2018, Chapter 10.2, pp. 594-595).

$\epsilon$ | Inner approx. vertices | Outer approx. vertices | $P_2$ problems | Avg. time per $P_2$ prob. (seconds) | Total algorithm time (seconds) | Total algorithm time (hours)
20 | 18 | 19 | 18 | 663.546 | 11944 | 3.318
10 | 35 | 36 | 35 | 541.419 | 18950 | 5.264
5 | 73 | 74 | 73 | 512.998 | 37449 | 10.403
1 | 394 | 395 | 394 | 492.597 | 194083 | 53.912

Table 1: Computational performance of the algorithm for a network of 15 big and 35 small banks, 100 scenarios and approximation errors $\epsilon \in \{20, 10, 5, 1\}$.

In the rest of this section, we perform sensitivity analyses on this network with respect to the connectivity probabilities between big and small banks and with respect to the number of scenarios.

Connectivity probabilities play a major role in determining the topology of the network because they define the existence of liabilities between the banks. We would like to identify the sensitivity of the systemic risk measure with respect to changes in the connectivity probability $q^{\mathrm{con}}_{1,2}$, corresponding to the liabilities of big banks to small banks, and the probability $q^{\mathrm{con}}_{2,1}$, corresponding to the liabilities of small banks to big banks.

For the sensitivity analysis with respect to $q^{\mathrm{con}}_{1,2}$, originally taken as $q^{\mathrm{con}}_{1,2} = 0.3$, we present in Table 2 the computational performance of the algorithm for $q^{\mathrm{con}}_{1,2} \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$. Figure 1 consists of the corresponding inner approximations.

Table 2: Computational performance of the algorithm for $q^{\mathrm{con}}_{1,2} \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.

Observe from Table 2 that the average time per $P_2$ problem increases with $q^{\mathrm{con}}_{1,2}$. This is the case because, as $q^{\mathrm{con}}_{1,2}$ increases, big and small banks in the network become more connected in terms of liabilities. Hence, the corresponding MILP formulations of $P_2$ problems need more time to be solved. This seems to be the only factor behind the increase because most of the algorithm runtime is devoted to solving $P_2$ problems and the number of $P_2$ problems in each case does not change much.

Figure 1: Inner approximations of the Eisenberg-Noe systemic risk measure for $q^{\mathrm{con}}_{1,2} \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.

It can be observed that, as $q^{\mathrm{con}}_{1,2}$ increases, the corresponding inner approximations of the systemic risk measures in Figure 1 shift from the top left corner towards the bottom right corner. This can be interpreted as follows: as $q^{\mathrm{con}}_{1,2}$ increases, the first group, the group of big banks, loses capital allocation options, while the second group, the group of small banks, gains a wider range of capital allocation options. It can also be observed from Figure 1 that generating a network with one of the values of $q^{\mathrm{con}}_{1,2}$ yields a set that appears nonconvex, while for the other four values of $q^{\mathrm{con}}_{1,2}$ the corresponding Eisenberg-Noe systemic risk measures seem to be convex sets. For these cases, there might be some breakpoint such that, whenever $q^{\mathrm{con}}_{1,2}$ is less than this breakpoint, big banks are less likely to be liable to small banks and have even more capital allocation options than they have in the other cases.

Next, for the sensitivity analysis with respect to $q^{\mathrm{con}}_{2,1}$, we present in Table 3 the computational performance of the algorithm for $q^{\mathrm{con}}_{2,1} \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$. Figure 2 consists of the corresponding inner approximations.

As in the previous sensitivity analysis, observe from Table 3 that the average time per $P_2$ problem increases with $q^{\mathrm{con}}_{2,1}$.
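For reference, the Erdős-Rényi-type liability generation described at the beginning of this section can be sketched in a few lines of Python. The group assignment, connectivity probabilities and intergroup amounts below are hypothetical stand-ins for the (partly unrecovered) parameter values used in the experiments, and self-liabilities are excluded as an assumption.

```python
import random

# Sketch of the Erdos-Renyi-type liability generation described above: bank i
# in group g_i owes l_gr[g_i][g_j] to bank j with probability q_con[g_i][g_j].
# The relative liabilities pi and total obligations p_bar are then computed
# from l. All numeric parameters here are hypothetical.

def generate_network(group_of, q_con, l_gr, rng):
    n = len(group_of)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < q_con[group_of[i]][group_of[j]]:
                l[i][j] = l_gr[group_of[i]][group_of[j]]
    p_bar = [sum(row) for row in l]                      # total obligations
    pi = [[l[i][j] / p_bar[i] if p_bar[i] > 0 else 0.0   # relative liabilities
           for j in range(n)] for i in range(n)]
    return l, pi, p_bar

group_of = [0, 0, 1, 1, 1]            # 2 big banks, 3 small banks (hypothetical)
q_con = [[0.7, 0.3], [0.5, 0.2]]      # hypothetical connectivity probabilities
l_gr = [[10.0, 5.0], [8.0, 5.0]]      # hypothetical intergroup liability amounts
l, pi, p_bar = generate_network(group_of, q_con, l_gr, random.Random(0))
```

Each row of $\pi$ sums to one whenever the corresponding bank has positive obligations, which is the property the clearing formulations (3.15) and (3.23) rely on.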
This further supports the presumption that the increase happens because, with higher connectivity probabilities, the network becomes more connected in terms of liabilities and the corresponding MILP formulations of $P_2$ problems need more time to be solved.

Table 3: Computational performance of the algorithm for $q^{\mathrm{con}}_{2,1} \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.

Note that as $q^{\mathrm{con}}_{2,1}$ increases, the inner approximations of the corresponding Eisenberg-Noe systemic risk measures in Figure 2 shift from the bottom right corner towards the top left corner. Conversely to the previous sensitivity analysis, this can be interpreted as follows: as $q^{\mathrm{con}}_{2,1}$ increases, the first group gains a wider range of capital allocation options, while the second group loses capital allocation options. It can also be observed from Figure 2 that generating a network with one of the values of $q^{\mathrm{con}}_{2,1}$ yields a set that appears nonconvex, while for the other four values of $q^{\mathrm{con}}_{2,1}$ the corresponding Eisenberg-Noe systemic risk measures seem to be convex sets. As in the previous sensitivity analysis, it can be presumed that for these cases there is some breakpoint at which the corresponding systemic risk measure switches from a convex shape to a nonconvex one.

Figure 2: Inner approximations of the Eisenberg-Noe systemic risk measure for $q^{\mathrm{con}}_{2,1} \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.
$K$ | Inner approx. vertices | Outer approx. vertices | $P_2$ problems | Avg. time per $P_2$ prob. (seconds) | Total algorithm time (seconds) | Total algorithm time (hours)
10 | 376 | 377 | 376 | 3.088 | 1161 | 0.323
20 | 380 | 381 | 380 | 11.977 | 4551 | 1.264
30 | 389 | 390 | 389 | 28.134 | 10944 | 3.040
40 | 381 | 382 | 381 | 56.685 | 21597 | 5.999
50 | 373 | 374 | 373 | 96.488 | 35990 | 9.997
60 | 381 | 382 | 381 | 151.635 | 57773 | 16.048
70 | 385 | 386 | 385 | 206.924 | 79666 | 22.129
80 | 390 | 391 | 390 | 293.155 | 114330 | 31.758
90 | 381 | 382 | 381 | 378.346 | 144150 | 40.042
100 | 394 | 395 | 394 | 492.597 | 194083 | 53.912

Table 4: Computational performance of the algorithm for $K \in \{10, 20, \ldots, 100\}$.

Whenever the probability $q^{\mathrm{con}}_{2,1}$ is higher than this breakpoint, small banks are more likely to be liable to big banks and the latter have even more capital allocation options than they have in the other cases.

Next, we analyze how computation times and the corresponding systemic risk measures change with the number $K$ of scenarios. Since the network structure remains the same all the time, it is expected that there will be no major changes in the Eisenberg-Noe systemic risk measures. However, since each scenario adds $n$ continuous and $n$ binary variables to the corresponding $P_2$ problem and its MILP formulation $Z^{EN}_2$ (Corollary 3.11), one would expect major changes in computation times. Table 4 shows the computational performance of the algorithm for $K \in \{10, 20, \ldots, 100\}$. The plots in Figure 3 suggest that the average time per $P_2$ problem and the total algorithm time increase faster than linearly with $K$. Hence, the obtained results justify the expectations.

In this section, we consider an Eisenberg-Noe network $(\mathcal{N}, \pi, \bar p, X)$ with $n = 70$, $n_1 = 10$, $n_2 = 60$, $K = 50$, $\sigma = 100$, $\varrho = 0.05$ and
$$q^{\mathrm{con}} = \begin{bmatrix} \cdot & \cdot \\ \cdot & \cdot \end{bmatrix}, \qquad l^{\mathrm{gr}} = \begin{bmatrix} 10 & 5 \\ 8 & 5 \end{bmatrix}, \qquad \nu = \begin{bmatrix} -\cdot & -\cdot \end{bmatrix}.$$

Figure 3: Scenarios versus average time per $P_2$ problem and scenarios versus total algorithm time for the signed Eisenberg-Noe network of 50 banks.

In the corresponding Eisenberg-Noe systemic risk measure, we take $\gamma_p = 0.9$. The approximation error in the algorithm is taken as $\epsilon = 1$. On this network, we perform sensitivity analyses with respect to the threshold $\gamma_p$, the distribution of nodes among groups, and the number of scenarios.

Threshold level

We investigate how the Eisenberg-Noe systemic risk measures and their computation times change when the requirement that some fraction of the total amount of liabilities in the network should be met on average gets more strict. Table 5 illustrates the computational performance of the algorithm for $\gamma_p \in \{0.\cdot, 0.\cdot, \ldots, 0.\cdot, 0.\cdot, 0.\cdot, 1\}$ and Figure 4 represents the corresponding inner approximations of the Eisenberg-Noe systemic risk measures.

Table 5: Computational performance of the algorithm for $\gamma_p \in \{0.\cdot, 0.\cdot, \ldots, 0.\cdot, 0.\cdot, 0.\cdot, 1\}$.

Figure 4: Inner approximations of the Eisenberg-Noe systemic risk measure for $\gamma_p \in \{0.\cdot, 0.\cdot, \ldots, 0.\cdot, 0.\cdot, 0.\cdot, 1\}$.

It can be noted from Table 5 that the average times per $P_2$ problem are high for the values of $\gamma_p$ around 0.3, and the numbers of $P_2$ problems are high for the values of $\gamma_p$ around 0.5. These two factors result in high total algorithm times for the values of $\gamma_p$ around 0.4. In addition, it can be observed that the difference between the number of inner and outer approximation vertices and the number of $P_2$ problems increases drastically for the values of $\gamma_p$ around 0.5. This happens because the boundaries of the corresponding Eisenberg-Noe systemic risk measures in Figure 4 contain "flat" regions, which makes the algorithm solve more $P_2$ problems without actually improving the approximation. Observe from Figure 4 that as $\gamma_p$ increases, each subsequent Eisenberg-Noe systemic risk measure is contained in the previous one. This result is fully consistent with the definition of the corresponding Eisenberg-Noe systemic risk measure since capital allocations that are feasible at a particular $\gamma_p$ level are also feasible for any level lower than $\gamma_p$.

In this part, we perform a sensitivity analysis with respect to the distribution of nodes among the groups for a fixed total number of nodes $n = 70$. We take the number of big banks $n_1$ in the set $\{5, \ldots, 65\}$. Then, the number of small banks is $n_2 = n - n_1$. The generated random operating cash flows remain the same all the time, while the network structure changes at each run. Hence, the corresponding Eisenberg-Noe systemic risk measures are expected to vary significantly. Table 6 shows the computational performance of the algorithm for $n_1 \in \{5, \ldots, 65\}$ and Figure 5 represents the corresponding inner approximations of the Eisenberg-Noe systemic risk measures.

Note that the average time per $P_2$ problem in Table 6 tends to increase as the number of big banks increases. This happens because the highest connectivity probability, $q^{\mathrm{con}}_{1,1} = 0.7$, is the probability that one big bank is liable to another big bank. Hence, as the number of big banks increases, the nodes in the network become more connected with liabilities and it takes more time to solve a $P_2$ problem because the MILP formulations of $P_2$ problems get more complex in terms of constraints.

Table 6: Computational performance of the algorithm for $n_1 \in \{5, \ldots, 65\}$.

Figure 5: Inner approximations of the Eisenberg-Noe systemic risk measure for $n_1 \in \{5, \ldots, 65\}$.

In addition, it can be observed that the difference between the numbers of inner and outer approximation vertices and the number of $P_2$ problems increases as the distribution of nodes changes toward the two extreme cases: 5 big banks and 65 big banks. As in the previous sensitivity analysis, this happens because the boundaries of the Eisenberg-Noe systemic risk measures around these extreme cases in Figure 5 contain "flat" regions, which makes the algorithm solve more $P_2$ problems without actually improving the approximation.

We observe from Figure 5 that as the number of big banks increases and the number of small banks decreases, the small banks get a wider range of capital allocation options, as opposed to the big banks. This happens because the total number of banks is fixed and the group with the smaller number of banks has a wider range of capital allocation options since it has more claims on the other group's banks. When the number of banks in each group is evenly distributed, the group of big banks has a wider range of capital allocation options. The reason lies in the connectivity probabilities. Recall that for this set-up it is assumed that the connectivity probability from big banks to small banks is $q^{\mathrm{con}}_{1,2} = 0.1$, while the connectivity probability from small banks to big banks is $q^{\mathrm{con}}_{2,1} = 0.5$. This means that small banks are more likely to be liable to big banks and, since big banks have more claims compared to small banks, they have a wider range of capital allocation options.

In this section, we consider a Rogers-Veraart network $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$ generated with the following parameters: $n = 45$, $n_1 = 15$, $n_2 = 30$, $K = 50$, $\varrho = 0.05$ and
$$q^{\mathrm{con}} = \begin{bmatrix} \cdot & \cdot \\ \cdot & \cdot \end{bmatrix}, \qquad l^{\mathrm{gr}} = \begin{bmatrix} 200 & 100 \\ 50 & 50 \end{bmatrix}.$$
In addition, the liquid fraction of the random operating cash flows available to a defaulting node is fixed as $\alpha = 0.7$, and the liquid fraction of the realized claims available to a defaulting node is fixed as $\beta = 0.9$. The shape and scale parameters of the gamma distributions of the random operating cash flows $X_i$, $i \in \mathcal{N}_\ell$, $\ell \in \mathcal{G}$, are chosen as $\kappa = [100\ \ 64]$ and $\theta = [1\ \ 1.25]$, so that $\nu = [100\ \ 80]$ and the common standard deviation is $\sigma = 10$. In the corresponding Rogers-Veraart systemic risk measure, we take $\gamma_p = 0.9$. The approximation error in the algorithm is taken as $\epsilon = 1$.

α parameter

In this part, we perform a sensitivity analysis with respect to $\alpha$, the liquid fraction of the operating cash flow that can be used by a defaulting node to meet its obligations. The generated network $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$ remains the same in all cases. Table 7 illustrates the computational performance of the algorithm for $\alpha \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$ and Figure 6 consists of the inner approximations of the corresponding Rogers-Veraart systemic risk measures.

Table 7: Computational performance of the algorithm for $\alpha \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.

Note from Table 7 that the average time per $P_2$ problem decreases with $\alpha$. It can be presumed that this happens because of the following observation: as the parameter $\alpha$ increases, the discontinuity in the fixed-point characterization of clearing vectors in the Rogers-Veraart model in (2.11) decreases, and it gets easier to solve the corresponding MILP formulation of a $P_2$ problem because it contains the constraints of the problem in Theorem 2.15, the MILP characterization of clearing vectors.

Figure 6: Inner approximations of the Rogers-Veraart systemic risk measures for $\alpha \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.

Table 8: Computational performance of the algorithm for $\beta \in \{0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot, 0.\cdot\}$.

Observe from Figure 6 that the Rogers-Veraart systemic risk measures expand significantly as $\alpha$ increases. This means that both big and small banks get less strict capital requirements as default costs decrease. One can also observe that in each case allocating zero capital requirement to both groups is not an available option. In addition, in each case big banks can be allocated a negative amount of capital requirement given that the capital requirements for small banks are high enough. On the other hand, small banks do not have this privilege.
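The role of $\alpha$ (and $\beta$) in the clearing mechanism can be illustrated directly. The sketch below iterates the Rogers-Veraart fixed-point map from $\bar p$ downward: a bank pays in full when its cash flow plus incoming payments cover its obligations, and otherwise pays $\alpha$ times its cash flow plus $\beta$ times its incoming payments. This Picard iteration and the 3-bank data are illustrative assumptions, not the paper's MILP computation.

```python
# Illustrative sketch (not the paper's MILP): greatest clearing vector of a
# Rogers-Veraart network by iterating the fixed-point map from p_bar downward.
# A bank pays p_bar_i if x_i + sum_j pi[j][i] p_j >= p_bar_i; otherwise it pays
# alpha * x_i + beta * (incoming payments). All numeric data are hypothetical.

def clearing_vector_rv(pi, p_bar, x, alpha, beta, tol=1e-10, max_iter=10_000):
    n = len(p_bar)
    p = p_bar[:]
    for _ in range(max_iter):
        recv = [sum(pi[j][i] * p[j] for j in range(n)) for i in range(n)]
        p_new = [p_bar[i] if x[i] + recv[i] >= p_bar[i]
                 else min(p_bar[i], alpha * x[i] + beta * recv[i])
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(p, p_new)) < tol:
            break
        p = p_new
    return p

# Hypothetical 3-bank ring: banks 0 and 1 owe 10 to the next bank, bank 2 owes 20.
pi = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
p_bar = [10.0, 10.0, 20.0]
x = [6.0, 2.0, 4.0]
low = clearing_vector_rv(pi, p_bar, x, alpha=0.5, beta=0.5)
high = clearing_vector_rv(pi, p_bar, x, alpha=0.9, beta=0.9)
# Larger alpha, beta (smaller default costs) yield componentwise larger payments.
```

In this toy ring, the defaulting bank's payment rises from 7.0 to 12.6 as $(\alpha, \beta)$ increases from $(0.5, 0.5)$ to $(0.9, 0.9)$, consistent with the expansion of the Rogers-Veraart risk measures observed in Figure 6.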
β parameter

In this part, we perform a sensitivity analysis with respect to β, the liquid fraction of the realized claims from the other nodes that can be used by a defaulting node to meet its obligations. The generated network $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$ remains the same in all cases. Table 8 shows the computational performance of the algorithm for β ∈ { . , . , . , . , . }, and Figure 7 provides the inner approximations of the corresponding Rogers-Veraart systemic risk measures.

[Table 8: Computational performance of the algorithm for β ∈ { . , . , . , . , . }; columns: inner approx. vertices, outer approx. vertices, P problems, avg. time per P problem (seconds), total algorithm time (seconds), total algorithm time (hours).]

Note from Table 8 that the total number of P problems increases with β, while the average time per P problem is smaller for higher values of β. As in the case of the α parameter, it can be presumed that this happens because of the following observation: as β increases, the discontinuity in the fixed-point characterization of clearing vectors in the Rogers-Veraart model in (2.11) decreases, which makes it easier to solve the MILP formulation of a P problem.

[Figure 7: Inner approximations of the Rogers-Veraart systemic risk measures for β ∈ { . , . , . , . , . }.]

Observe from Figure 7 that the Rogers-Veraart systemic risk measures expand significantly as β increases. This means that both big and small banks receive less strict capital requirements if defaulting banks are able to use larger fractions of their realized claims. It can also be observed that, in each case, allocating zero capital requirement to the groups is not an available option. In addition, if β = 0.[…]

.4.3 Threshold level

In this part, different $\gamma_p$ levels are compared. Table 9 shows the computational performance of the algorithm for $\gamma_p$ ∈ { . , . , . . . , . , . , . , }, and Figure 8 consists of the inner approximations of the corresponding Rogers-Veraart systemic risk measures.

[Table 9: Computational performance of the algorithm for $\gamma_p$ ∈ { . , . , . . . , . , . , . , }; columns: inner approx. vertices, outer approx. vertices, P problems, avg. time per P problem (seconds), total algorithm time (seconds), total algorithm time (hours).]

[Figure 8: Inner approximations of the Rogers-Veraart systemic risk measures for $\gamma_p$ ∈ { . , . , . . . , . , . , . , }.]

It can be noted from Table 9 that the average time per P problem and the total algorithm time are high for $\gamma_p$ values around 0.7. In addition, the number of P problems increases up to $\gamma_p = 0.$[…]. Moreover, the risk measures corresponding to lower $\gamma_p$ values contain the ones corresponding to higher $\gamma_p$ values, which is consistent with the definition of these risk measures.

In this part, we perform a sensitivity analysis by changing the distribution of nodes among the groups for a fixed total number of nodes $n = 45$, where the number of big banks $n_1$ takes values in { , , , , , , , }. Then, the number of small banks is $n_2 = n - n_1$. Table 10 shows the computational performance of the algorithm and Figure 9 provides the inner approximations of the corresponding Rogers-Veraart systemic risk measures.

Note that the average time per P problem in Table 10 is relatively high for the values $n_1$ ∈ { , , , , }. In addition, the number of P problems is greater for the values around $n_1 = 20$. Observe from Figure 9 that, as the number of big banks increases and the number of small banks decreases, the small banks get a wider range of capital allocation options, as opposed to the big banks. This happens because the total number of banks is fixed, and the group with the smaller number of banks has a wider range of capital allocation options since it has more claims on the other group's banks in the scope of this set-up.

[Table 10: Computational performance of the algorithm for $n_1$ ∈ { , , , , , , , }; columns: inner approx. vertices, outer approx. vertices, P problems, avg. time per P problem (seconds), total algorithm time (seconds), total algorithm time (hours).]

A three-group signed Eisenberg-Noe network with 60 nodes

In this section, we consider a three-group signed Eisenberg-Noe network $(\mathcal{N}, \pi, \bar p, X)$ generated with $n = 60$, $n_1 = 10$, $n_2 = 20$, $n_3 = 30$, $K = 50$, $\sigma = 100$, $\varrho = 0.05$ and
$$q^{\mathrm{con}} = \begin{bmatrix} . & . & . \\ . & . & . \\ . & . & . \end{bmatrix}, \qquad l^{\mathrm{gr}} = \begin{bmatrix} 20 & 15 & 8 \\ 15 & 10 & 6 \\ 8 & 6 & 5 \end{bmatrix}, \qquad \nu = \begin{bmatrix} -\, & -\, & -\, \end{bmatrix}.$$

Table 11: Computational performance of the algorithm for a signed Eisenberg-Noe network with 10 big, 20 medium and 30 small banks, 50 scenarios and approximation error ǫ = 20 — inner approx. vertices: 413; outer approx. vertices: 516; P problems: 1250; avg. time per P problem: 2.904 s; total algorithm time: 3631 s (1.009 h).

In the corresponding Eisenberg-Noe systemic risk measure, we take $\gamma_p = 0.$[…] and the approximation error $\epsilon = 20$. Figure 10 represents the inner approximation of the corresponding three-group Eisenberg-Noe systemic risk measure. It can be presumed that the value of this Eisenberg-Noe systemic risk measure is convex.

[Figure 10: Inner approximation of the three-group Eisenberg-Noe systemic risk measure with 60 nodes, 50 scenarios and approximation error ǫ = 20.]

.6 A three-group Rogers-Veraart network with 60 nodes

In this section, we consider a Rogers-Veraart network $(\mathcal{N}, \pi, \bar p, X, \alpha, \beta)$ generated with $n = 60$, $n_1 = 10$, $n_2 = 20$, $n_3 = 30$, $K = 50$, $\varrho = 0.05$, and
$$q^{\mathrm{con}} = \begin{bmatrix} . & . & . \\ . & . & . \\ . & . & . \end{bmatrix}, \qquad l^{\mathrm{gr}} = \begin{bmatrix} 200 & 190 & 180 \\ 190 & 190 & 180 \\ 180 & 180 & 170 \end{bmatrix}.$$

In addition, the liquid fraction of the random operating cash flows and the liquid fraction of the realized claims available to defaulting banks are fixed as $\alpha = \beta = 0.9$. The shape and scale parameters of the gamma distributions of $X_i$, $i \in \mathcal{N}_\ell$, $\ell \in \mathcal{G}$, are chosen as $\kappa = [100\ \ 81\ \ 64]$, $\theta = 1.$[…]. In the corresponding Rogers-Veraart systemic risk measure, we take $\gamma_p = 0.99$. The upper bound point in the approximation is taken as $z^{\mathrm{UB}} = \hat z^{\mathrm{ideal}} + \|\bar p\|_\infty$.

Table 12: Computational performance of the algorithm for a Rogers-Veraart network with 10 big, 20 medium and 30 small banks, 50 scenarios and approximation error ǫ = 40 — inner approx. vertices: 975; outer approx. vertices: 1323; P problems: 19382; avg. time per P problem: 0.427 s; total algorithm time: 8284 s (2.301 h).

Table 12 shows the computational performance of the algorithm for $\epsilon = 40$. Figure 11 provides the inner approximation of the corresponding three-group Rogers-Veraart systemic risk measure. It can be observed that the value of this Rogers-Veraart systemic risk measure is not convex.

A Proofs of the results in Section 2
A.1 Proof of Proposition 2.6
To prove the "only if" part, let $p = (p_1, \ldots, p_n)^{\mathsf T} \in [0, \bar p]$ be a clearing vector. To show that $p$ is a fixed point of $\Phi^{\mathrm{EN}}$, let $i \in \mathcal{N}$.

If $\sum_{j=1}^n \pi_{ji} p_j + x_i \le 0$, then $p_i = 0$, by immediate default, and $\Phi^{\mathrm{EN}}_i(p) = 0$, by (2.4). Hence, $\Phi^{\mathrm{EN}}_i(p) = p_i$.

If $\sum_{j=1}^n \pi_{ji} p_j + x_i > 0$, then, by absolute priority, either $p_i = \bar p_i$ or $p_i = \sum_{j=1}^n \pi_{ji} p_j + x_i$. If $p_i = \bar p_i$, then, by limited liability, $\bar p_i \le \sum_{j=1}^n \pi_{ji} p_j + x_i$ and, thus, by (2.4), $\Phi^{\mathrm{EN}}_i(p) = \bar p_i$. Hence, $\Phi^{\mathrm{EN}}_i(p) = p_i$. On the other hand, if $p_i = \sum_{j=1}^n \pi_{ji} p_j + x_i < \bar p_i$, then, by (2.4), $\Phi^{\mathrm{EN}}_i(p) = \sum_{j=1}^n \pi_{ji} p_j + x_i$. Hence, again $\Phi^{\mathrm{EN}}_i(p) = p_i$. Thus, $p$ is a fixed point of $\Phi^{\mathrm{EN}}$.

[Figure 11: Inner approximation of the three-group Rogers-Veraart systemic risk measure with 60 nodes, 50 scenarios and approximation error ǫ = 40.]

To prove the "if" part, let $p = (p_1, \ldots, p_n)^{\mathsf T}$ be a fixed point of $\Phi^{\mathrm{EN}}$. In other words, for every $i \in \mathcal{N}$, $\Phi^{\mathrm{EN}}_i(p) = p_i$. To show that $p$ is a clearing vector, let $i \in \mathcal{N}$.

If $\sum_{j=1}^n \pi_{ji} p_j + x_i \le 0$, then $\Phi^{\mathrm{EN}}_i(p) = p_i = 0$, by (2.4). Hence, immediate default holds.

If $\sum_{j=1}^n \pi_{ji} p_j + x_i > 0$, then $\Phi^{\mathrm{EN}}_i(p) = p_i \le \sum_{j=1}^n \pi_{ji} p_j + x_i$, by (2.4). Hence, limited liability holds.

Now assume $\sum_{j=1}^n \pi_{ji} p_j + x_i > 0$. If $\sum_{j=1}^n \pi_{ji} p_j + x_i \le \bar p_i$, then $\Phi^{\mathrm{EN}}_i(p) = p_i = \sum_{j=1}^n \pi_{ji} p_j + x_i$. If $\sum_{j=1}^n \pi_{ji} p_j + x_i > \bar p_i$, then $\Phi^{\mathrm{EN}}_i(p) = p_i = \bar p_i$, by (2.4). Hence, absolute priority holds as well. Hence, $p$ is a clearing vector.

A.2 Proof of Theorem 2.7
The proof of Theorem 2.7 is based on the following lemma.
Lemma A.1.
Let $(p, s)$ be an optimal solution to the MILP for $\Lambda^{\mathrm{EN}}(x)$. Let $i \in \mathcal{N}$ be such that $0 < \sum_{j=1}^n \pi_{ji} p_j + x_i$. Then, $p_i = \min\{\sum_{j=1}^n \pi_{ji} p_j + x_i,\ \bar p_i\}$.

Proof. If $s_i = 0$, then constraint (2.9) is infeasible by assumption. Hence, $s_i = 1$, and this yields $p_i \le \sum_{j=1}^n \pi_{ji} p_j + x_i$ and $p_i \le \bar p_i$, by constraints (2.7) and (2.8), respectively. Hence,
$$p_i \le \min\Big\{\sum_{j=1}^n \pi_{ji} p_j + x_i,\ \bar p_i\Big\}.$$
To get a contradiction to the claim of the lemma, suppose that $p_i < \min\{\sum_{j=1}^n \pi_{ji} p_j + x_i, \bar p_i\}$. Now let $p^\epsilon \in \mathbb{R}^n_+$ be equal to $p$ in all components except the $i$th one, and let $p^\epsilon_i = p_i + \epsilon$, where
$$\epsilon := \min\bigg\{ \min\Big\{\sum_{j=1}^n \pi_{ji} p_j + x_i,\ \bar p_i\Big\} - p_i,\ \ M - \max_{l \in \mathcal{N}}\Big(\sum_{j=1}^n \pi_{jl} p_j + x_l\Big),\ \ \epsilon' \bigg\} > 0,$$
and
$$\epsilon' := \min\bigg\{ \Big|\sum_{j=1}^n \pi_{jl} p_j + x_l\Big| \ \bigg|\ \sum_{j=1}^n \pi_{jl} p_j + x_l < 0,\ l \in \mathcal{N} \bigg\}.$$
(Here, we assume that $\epsilon' = +\infty$ if there is no qualifying $l \in \mathcal{N}$ in the above definition.) This choice of $\epsilon$ ensures
$$p^\epsilon_i \le \bar p_i \quad\text{and}\quad p^\epsilon_i \le \sum_{j=1}^n \pi_{ji} p^\epsilon_j + x_i,$$
and will also be justified by other technical details later in this proof.

Let $s^\epsilon \in \{0,1\}^n$ be a vector of binaries, where $s^\epsilon_l = 0$ if $\sum_{j=1}^n \pi_{jl} p^\epsilon_j + x_l < 0$ and $s^\epsilon_l = 1$ if $\sum_{j=1}^n \pi_{jl} p^\epsilon_j + x_l \ge 0$, for each $l \in \mathcal{N}$. We show that $(p^\epsilon, s^\epsilon)$ is a feasible solution to $\Lambda^{\mathrm{EN}}(x)$ by showing that all constraints in (2.6) are satisfied. First, for fixed $k \in \mathcal{N} \setminus \{i\}$, we verify the $k$th constraints in (2.6) for $(p^\epsilon, s^\epsilon)$. We consider three cases:

1. Assume that $\sum_{j=1}^n \pi_{jk} p_j + x_k < 0$. If $s_k = 1$, then, by constraint (2.7),
$$p_k \le \sum_{j=1}^n \pi_{jk} p_j + x_k + M(1 - 1) = \sum_{j=1}^n \pi_{jk} p_j + x_k < 0,$$
which is a contradiction to the feasibility of $(p, s)$ in constraint (2.10). Hence, $s_k = 0$, which in its turn implies $p_k = 0$ by (2.8) and (2.10).

By the definitions of $p^\epsilon$ and $s^\epsilon$, it holds that $p^\epsilon_k = p_k = 0$ since $k \ne i$, and $s^\epsilon_k = 0$. Constraint (2.7) holds as
$$p^\epsilon_k = p_k = 0 \le \sum_{j=1}^n \pi_{jk} p^\epsilon_j + x_k + M(1 - s^\epsilon_k) = \sum_{j=1}^n \pi_{jk} p_j + x_k + M + \epsilon \pi_{ik}$$
by the feasibility of $p_k = 0$ and $s_k = 0$, and since $\epsilon > 0$ and $\pi_{ik} \ge 0$. Constraint (2.9) holds as
$$\sum_{j=1}^n \pi_{jk} p^\epsilon_j + x_k = \sum_{j=1}^n \pi_{jk} p_j + x_k + \epsilon \pi_{ik} \le \sum_{j=1}^n \pi_{jk} p_j + x_k + \epsilon \le 0 \le M s^\epsilon_k,$$
since $\sum_{j=1}^n \pi_{jk} p_j + x_k < 0$, $0 \le \pi_{ik} \le 1$, $\epsilon > 0$ and $\sum_{j=1}^n \pi_{jk} p_j + x_k + \epsilon \le 0$ by the choice $\epsilon \le \epsilon'$. Constraints (2.8) and (2.10) for node $k$ hold trivially by the feasibility of $p_k = 0$ and $s_k = 0$. Hence, $p^\epsilon_k = 0$ and $s^\epsilon_k = 0$ satisfy the corresponding constraints in (2.6).

2. Assume that $\sum_{j=1}^n \pi_{jk} p_j + x_k = 0$. Now, either $s_k = 0$ or $s_k = 1$ holds. If $s_k = 0$, then $p_k = 0$ by constraints (2.8) and (2.10). If $s_k = 1$, then, by the assumption of this case and (2.7), $p_k \le \sum_{j=1}^n \pi_{jk} p_j + x_k + M(1 - 1) = 0$, which, together with (2.10), implies $p_k = 0$.

Also, $p^\epsilon_k = p_k = 0$ and $s^\epsilon_k = 1$, by the definitions of $p^\epsilon$ and $s^\epsilon$. Constraint (2.7) holds as
$$p^\epsilon_k = p_k = 0 \le \sum_{j=1}^n \pi_{jk} p^\epsilon_j + x_k + M(1 - s^\epsilon_k) = \sum_{j=1}^n \pi_{jk} p_j + x_k + M(1 - 1) + \epsilon \pi_{ik} = \epsilon \pi_{ik},$$
since $\sum_{j=1}^n \pi_{jk} p_j + x_k = 0$, $\epsilon > 0$ and $\pi_{ik} \ge 0$. Constraint (2.9) holds as
$$\sum_{j=1}^n \pi_{jk} p^\epsilon_j + x_k = \sum_{j=1}^n \pi_{jk} p_j + x_k + \epsilon \pi_{ik} = \epsilon \pi_{ik} \le M s^\epsilon_k = M$$
since $\sum_{j=1}^n \pi_{jk} p_j + x_k = 0$, $\epsilon \le \min_{l \in \mathcal{N}}\{M - (\sum_{j=1}^n \pi_{jl} p_j + x_l)\} \le M$ by the definition of $\epsilon$, and $0 \le \pi_{ik} \le 1$. It is easy to observe that all other constraints in (2.6) for node $k$ are satisfied trivially by $p^\epsilon_k = 0$ and $s^\epsilon_k = 1$.

3. Assume that $0 < \sum_{j=1}^n \pi_{jk} p_j + x_k$. If $s_k = 0$, then, by constraint (2.9), $\sum_{j=1}^n \pi_{jk} p_j + x_k \le M s_k = 0$, a contradiction. Hence, $s_k = 1$. Also, $s^\epsilon_k = 1$, by the definition of $s^\epsilon$.

Since $s_k = 1$, (2.8) and (2.10) hold by the feasibility of $p_k$ since $p^\epsilon_k = p_k$ for $k \ne i$. Also, (2.9) holds since
$$\sum_{j=1}^n \pi_{jk} p^\epsilon_j + x_k = \sum_{j=1}^n \pi_{jk} p_j + x_k + \epsilon \pi_{ik} \le M. \tag{A.1}$$
Indeed, recall the assumption $\sum_{j=1}^n \pi_{jl} < n$, for each $l \in \mathcal{N}$. Hence, for each $l \in \mathcal{N}$ and for every $p \in [0, \bar p]$, $\sum_{j=1}^n \pi_{jl} p_j + x_l < M$, where $M = n\|\bar p\|_\infty + \|x\|_\infty$. So, (A.1) is guaranteed by the choice of $\epsilon$. (This is the reason behind including the term $M - \max_{l \in \mathcal{N}}(\sum_{j=1}^n \pi_{jl} p_j + x_l)$ in the definition of $\epsilon$.)

Note that, since $s_k = 1$, $p_k \le \sum_{j=1}^n \pi_{jk} p_j + x_k$ holds. Then constraint (2.7) is satisfied since
$$p^\epsilon_k = p_k \le \sum_{j=1}^n \pi_{jk} p_j + x_k \le \sum_{j=1}^n \pi_{jk} p_j + x_k + \epsilon \pi_{ik} = \sum_{\substack{j \in \mathcal{N} \\ j \ne i}} \pi_{jk} p_j + \pi_{ik}(p_i + \epsilon) + x_k = \sum_{j=1}^n \pi_{jk} p^\epsilon_j + x_k.$$
Constraint (2.10) is satisfied trivially. Hence, $p^\epsilon_k$ and $s^\epsilon_k$ satisfy the corresponding constraints in (2.6).

Next, we show that $p^\epsilon_i$ and $s^\epsilon_i$ satisfy the constraints in (2.6) for $i$. It holds $s^\epsilon_i = 1$, since $\sum_{j=1}^n \pi_{ji} p^\epsilon_j + x_i = \sum_{j=1}^n \pi_{ji} p_j + x_i > 0$ (note that $\pi_{ii} = 0$). Constraint (2.8) holds as $p^\epsilon_i = p_i + \epsilon \le p_i + (\bar p_i - p_i) = \bar p_i$, where $\epsilon \le \bar p_i - p_i$ holds since $\epsilon \le \min\{\sum_{j=1}^n \pi_{ji} p_j + x_i, \bar p_i\} - p_i$. Constraint (2.7) holds as
$$p^\epsilon_i = p_i + \epsilon \le p_i + \Big(\sum_{j=1}^n \pi_{ji} p_j + x_i - p_i\Big) = \sum_{j=1}^n \pi_{ji} p_j + x_i = \sum_{j=1}^n \pi_{ji} p^\epsilon_j + x_i,$$
where $\epsilon \le \sum_{j=1}^n \pi_{ji} p_j + x_i - p_i$ holds since $\epsilon \le \min\{\sum_{j=1}^n \pi_{ji} p_j + x_i, \bar p_i\} - p_i$, and the last equality holds since $\pi_{ii} = 0$. Constraint (2.9) holds as
$$\sum_{j=1}^n \pi_{ji} p^\epsilon_j + x_i = \sum_{j=1}^n \pi_{ji} p_j + x_i + \epsilon \pi_{ii} = \sum_{j=1}^n \pi_{ji} p_j + x_i \le M$$
by the feasibility of $p$ and since $\pi_{ll} = 0$, for each $l \in \mathcal{N}$. Constraint (2.10) is satisfied trivially. Hence, $p^\epsilon_i$ and $s^\epsilon_i$ satisfy the corresponding constraints in (2.6).

Hence, $(p^\epsilon, s^\epsilon)$ is a feasible solution to $\Lambda^{\mathrm{EN}}(x)$. However, since $p^\epsilon \ge p$ with $p^\epsilon \ne p$ and $f$ is a strictly increasing function, it holds that $f(p^\epsilon) > f(p)$, which is a contradiction to the optimality of $p$. Hence, $p_i = \min\{\sum_{j=1}^n \pi_{ji} p_j + x_i, \bar p_i\}$.

Proof of Theorem 2.7. Let $(p, s)$ be an optimal solution to the MILP for $\Lambda^{\mathrm{EN}}(x)$. To prove that $p$ is a clearing vector, by Proposition 2.6, we equivalently show that $\Phi^{\mathrm{EN}}(p) = p$. Let $i \in \mathcal{N}$. Recalling (2.5), we consider three cases:

1. Assume that $\sum_{j=1}^n \pi_{ji} p_j + x_i \le 0$. Then, by (2.4), $\Phi^{\mathrm{EN}}_i(p) = 0$. By the arguments from the proof of Lemma A.1 for this case, $p_i = 0$. Hence, $p_i = 0 = \Phi^{\mathrm{EN}}_i(p)$.

2. Assume that $0 < \sum_{j=1}^n \pi_{ji} p_j + x_i \le \bar p_i$. Then, by (2.4), $\Phi^{\mathrm{EN}}_i(p) = \sum_{j=1}^n \pi_{ji} p_j + x_i$. Since $0 < \sum_{j=1}^n \pi_{ji} p_j + x_i$, by Lemma A.1,
$$p_i = \min\Big\{\sum_{j=1}^n \pi_{ji} p_j + x_i,\ \bar p_i\Big\} = \sum_{j=1}^n \pi_{ji} p_j + x_i.$$
Hence, $p_i = \sum_{j=1}^n \pi_{ji} p_j + x_i = \Phi^{\mathrm{EN}}_i(p)$.

3. Assume $\sum_{j=1}^n \pi_{ji} p_j + x_i > \bar p_i$. Then, by (2.4), $\Phi^{\mathrm{EN}}_i(p) = \bar p_i$. Since $\sum_{j=1}^n \pi_{ji} p_j + x_i > \bar p_i \ge 0$, by Lemma A.1,
$$p_i = \min\Big\{\sum_{j=1}^n \pi_{ji} p_j + x_i,\ \bar p_i\Big\} = \bar p_i.$$
Hence, $p_i = \bar p_i = \Phi^{\mathrm{EN}}_i(p)$.

Therefore, $p$ is a clearing vector for $(\mathcal{N}, \pi, \bar p, x)$.

Remark A.2.
In Theorem 2.7, $M = n\|\bar p\|_\infty + \|x\|_\infty$ is taken to ensure the feasibility of constraint (2.9). In other words, it is enough to choose $M$ such that $\sum_{j=1}^n \pi_{ji} p_j + x_i \le M$, for each $i \in \mathcal{N}$ and for every $p \in [0, \bar p]$. Furthermore, for each $i \in \mathcal{N}$ and for every $p \in [0, \bar p]$, since $\sum_{j=1}^n \pi_{ji} < n$, it holds $\sum_{j=1}^n \pi_{ji} p_j < n\|\bar p\|_\infty$. Hence, $\sum_{j=1}^n \pi_{ji} p_j + x_i \le n\|\bar p\|_\infty + \|x\|_\infty = M$.

A.3 Proof of Proposition 2.13
Let $p = (p_1, \ldots, p_n)^{\mathsf T}$ be a fixed point of $\Phi^{\mathrm{RV+}}$. To show that $p$ is a clearing vector for $(\mathcal{N}, \pi, \bar p, x, \alpha, \beta)$, let $i \in \mathcal{N}$.

If $\bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j$, then $\Phi^{\mathrm{RV+}}_i(p) = \bar p_i = p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j$, and if $\bar p_i > x_i + \sum_{j=1}^n \pi_{ji} p_j$, then $\Phi^{\mathrm{RV+}}_i(p) = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j = p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j$, by the definition of $\Phi^{\mathrm{RV+}}$ in (2.11) and since $p$ is a fixed point of $\Phi^{\mathrm{RV+}}$. Hence, both limited liability and absolute priority in Definition 2.12 hold. Hence, $p$ is a clearing vector for $(\mathcal{N}, \pi, \bar p, x, \alpha, \beta)$.

A.4 Proof of Theorem 2.15
The proof of Theorem 2.15 relies on the following three lemmata.
Lemma A.3.
Let $(p, s)$ be an optimal solution to the MILP for $\Lambda^{\mathrm{RV+}}(x)$. Let $i \in \mathcal{N}$ be such that
$$\alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j < \bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j.$$
Then, $s_i = 1$.

Proof. To get a contradiction, suppose that $s_i = 0$. Then $p_i \le \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j < \bar p_i$ by constraint (2.13) and the assumption. Let $p' \in \mathbb{R}^n_+$ be equal to $p$ in all components except the $i$th one, and let $p'_i = \bar p_i$. Also, let $s' \in \{0,1\}^n$ be equal to $s$ in all components except the $i$th one, and let $s'_i = 1$.

We show that $(p', s')$ is a feasible solution to $\Lambda^{\mathrm{RV+}}(x)$ by checking that all constraints in (2.12) are satisfied. First, for fixed $k \in \mathcal{N} \setminus \{i\}$, we verify the $k$th constraints in (2.12) for $(p', s')$. Constraints (2.13), (2.14) hold as
$$p'_k = p_k \le \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p_j + \bar p_k s_k \le \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p_j + \beta \pi_{ik}(\bar p_i - p_i) + \bar p_k s_k = \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p'_j + \bar p_k s'_k,$$
and
$$\bar p_k s'_k = \bar p_k s_k \le x_k + \sum_{j=1}^n \pi_{jk} p_j \le x_k + \sum_{j=1}^n \pi_{jk} p_j + \pi_{ik}(\bar p_i - p_i) = x_k + \sum_{j=1}^n \pi_{jk} p'_j,$$
since $p'_k = p_k$ and $s'_k = s_k$ for every $k \in \mathcal{N}$ with $k \ne i$, $\bar p_i - p_i > 0$, $\pi_{ik} \ge 0$, and by the feasibility of $(p, s)$. Constraint (2.15) holds trivially by the feasibility of $(p, s)$.

Next, we verify the $i$th constraints in (2.12) for $p'_i = \bar p_i$, $s'_i = 1$. Constraints (2.13), (2.14) hold as
$$p'_i = \bar p_i \le \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j + \bar p_i s'_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p'_j + \bar p_i,$$
and
$$\bar p_i s'_i = \bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j = x_i + \sum_{j=1}^n \pi_{ji} p'_j,$$
since $\alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p'_j \ge 0$, $\pi_{ii} = 0$ (so that $\sum_{j=1}^n \pi_{ji} p'_j = \sum_{j=1}^n \pi_{ji} p_j$) and by the assumption of Lemma A.3. Constraint (2.15) is satisfied trivially.

Hence, $(p', s')$ is a feasible solution to $\Lambda^{\mathrm{RV+}}(x)$. However, since $p' \ge p$ with $p' \ne p$ and $f$ is a strictly increasing function, it holds that $f(p') > f(p)$, which is a contradiction to the optimality of $p$. Hence, $s_i = 1$.

Lemma A.4.
Let $(p, s)$ be an optimal solution to the MILP for $\Lambda^{\mathrm{RV+}}(x)$. Let $i \in \mathcal{N}$ with $\bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j$. Then, $p_i = \bar p_i$.

Proof. To get a contradiction, suppose that $p_i < \bar p_i$. Let $p' \in \mathbb{R}^n_+$ be equal to $p$ in all components except the $i$th one, and let $p'_i = \bar p_i$.

We show that $(p', s)$ is a feasible solution to $\Lambda^{\mathrm{RV+}}(x)$ by showing that all constraints in (2.12) are satisfied. First, for fixed $k \in \mathcal{N} \setminus \{i\}$, we verify the $k$th constraints in (2.12) for $(p', s)$. Constraints (2.13) and (2.14) hold as
$$p'_k = p_k \le \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p_j + \bar p_k s_k \le \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p_j + \beta \pi_{ik}(\bar p_i - p_i) + \bar p_k s_k = \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p'_j + \bar p_k s_k,$$
and
$$\bar p_k s_k \le x_k + \sum_{j=1}^n \pi_{jk} p_j \le x_k + \sum_{j=1}^n \pi_{jk} p_j + \pi_{ik}(\bar p_i - p_i) = x_k + \sum_{j=1}^n \pi_{jk} p'_j,$$
since $p'_k = p_k$ for every $k \in \mathcal{N}$ with $k \ne i$, $\bar p_i - p_i > 0$, $\pi_{ik} \ge 0$, and by the feasibility of $(p, s)$. Constraint (2.15) holds trivially by the feasibility of $(p, s)$.

Next, we verify the $i$th constraints in (2.12) for $p'_i = \bar p_i$ and $s_i$. We consider two cases:

1. Assume that $\bar p_i \le \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$. Then, constraints (2.13) and (2.14) hold for both $s_i = 0$ and $s_i = 1$ as
$$p'_i = \bar p_i \le \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j + \bar p_i s_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p'_j + \bar p_i s_i,$$
and
$$\bar p_i s_i \le \bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j = x_i + \sum_{j=1}^n \pi_{ji} p'_j,$$
since $\pi_{ii} = 0$ and by the assumption of Lemma A.4. Constraint (2.15) holds trivially.

2. Assume that $\alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j < \bar p_i$. Then, by Lemma A.3, $s_i = 1$. Then constraints (2.13) and (2.14) hold as
$$p'_i = \bar p_i \le \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j + \bar p_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p'_j + \bar p_i s_i,$$
and
$$\bar p_i s_i = \bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j = x_i + \sum_{j=1}^n \pi_{ji} p'_j,$$
since $\pi_{ii} = 0$ and by the assumption of Lemma A.4. Constraint (2.15) is satisfied trivially.

Hence, $(p', s)$ is a feasible solution to $\Lambda^{\mathrm{RV+}}(x)$. However, since $p' \ge p$ with $p' \ne p$ and $f$ is a strictly increasing function, it holds that $f(p') > f(p)$, which is a contradiction to the optimality of $p$. Hence, $p_i = \bar p_i$.

Lemma A.5.
Let $(p, s)$ be an optimal solution to the MILP for $\Lambda^{\mathrm{RV+}}(x)$. Let $i \in \mathcal{N}$ with $\bar p_i > x_i + \sum_{j=1}^n \pi_{ji} p_j$. Then, $p_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$.

Proof. To get a contradiction, suppose that $p_i \ne \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$. If $s_i = 1$, then constraint (2.14) is not satisfied by the assumption of the lemma. Hence, $s_i = 0$ and $p_i < \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$ by constraint (2.13). Let $p' \in \mathbb{R}^n_+$ be equal to $p$ in all components except the $i$th one, and let $p'_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$.

We show that $(p', s)$ is a feasible solution to $\Lambda^{\mathrm{RV+}}(x)$ by checking that all constraints in (2.12) are satisfied. First, for fixed $k \in \mathcal{N} \setminus \{i\}$, we verify the $k$th constraints in (2.12) for $(p', s)$. Constraints (2.13) and (2.14) hold as
$$p'_k = p_k \le \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p_j + \bar p_k s_k \le \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p_j + \beta \pi_{ik}(p'_i - p_i) + \bar p_k s_k = \alpha x_k + \beta \sum_{j=1}^n \pi_{jk} p'_j + \bar p_k s_k,$$
and
$$\bar p_k s_k \le x_k + \sum_{j=1}^n \pi_{jk} p_j \le x_k + \sum_{j=1}^n \pi_{jk} p_j + \pi_{ik}(p'_i - p_i) = x_k + \sum_{j=1}^n \pi_{jk} p'_j,$$
since $p'_k = p_k$ for every $k \in \mathcal{N}$ with $k \ne i$, $p'_i - p_i > 0$, $\pi_{ik} \ge 0$, and by the feasibility of $(p, s)$. Constraint (2.15) holds trivially by the feasibility of $(p, s)$.

Next, we verify the $i$th constraints in (2.12) for $p'_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$ and $s_i = 0$. Constraints (2.13) and (2.14) hold as
$$p'_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j \le \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j + \bar p_i s_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p'_j,$$
and
$$\bar p_i s_i = 0 \le x_i + \sum_{j=1}^n \pi_{ji} p_j = x_i + \sum_{j=1}^n \pi_{ji} p'_j,$$
since $\pi_{ii} = 0$ and $x_i + \sum_{j=1}^n \pi_{ji} p_j \ge 0$. Constraint (2.15) is satisfied trivially.

Hence, $(p', s)$ is a feasible solution to $\Lambda^{\mathrm{RV+}}(x)$. However, since $p' \ge p$ with $p' \ne p$ and $f$ is a strictly increasing function, it holds that $f(p') > f(p)$, which is a contradiction to the optimality of $p$. Hence, $p_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$.

We combine the results of the above lemmata to conclude with the proof of Theorem 2.15.

Proof of Theorem 2.15. Let $(p, s)$ be an optimal solution to the MILP for $\Lambda^{\mathrm{RV+}}(x)$. To prove that $p$ is a clearing vector, thanks to Proposition 2.13, it suffices to show $\Phi^{\mathrm{RV+}}(p) = p$. Let us fix $i \in \mathcal{N}$. Recalling (2.11), we consider two cases:

1. Assume that $\bar p_i \le x_i + \sum_{j=1}^n \pi_{ji} p_j$. Then, by (2.11), $\Phi^{\mathrm{RV+}}_i(p) = \bar p_i$. By Lemma A.4, $p_i = \bar p_i$. Hence, $p_i = \bar p_i = \Phi^{\mathrm{RV+}}_i(p)$.

2. Assume that $\bar p_i > x_i + \sum_{j=1}^n \pi_{ji} p_j$. Then, by (2.11), $\Phi^{\mathrm{RV+}}_i(p) = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$. By Lemma A.5, $p_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j$. Hence, $p_i = \alpha x_i + \beta \sum_{j=1}^n \pi_{ji} p_j = \Phi^{\mathrm{RV+}}_i(p)$.

Therefore, $p$ is a clearing vector.

B Proofs of the results in Section 3
B.1 Proof of Theorem 3.2
Let $(z, (p^k, s^k)_{k \in \mathcal{K}})$ be a feasible solution for the problem in (3.9). Then, for each $k \in \mathcal{K}$, $(p^k, s^k)$ is a feasible solution to $\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} z)$ in (3.3) because the optimization problem in (3.9) includes the constraints of (3.3). Hence, for every $k \in \mathcal{K}$,
$$\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} z) \ge f(p^k),$$
which implies
$$\mathbb{E}[\Lambda^{\mathrm{OPT}}(X + B^{\mathsf T} z)] \ge \sum_{k=1}^K q_k f(p^k) \ge \gamma,$$
where the second inequality holds by the feasibility of $(z, (p^k, s^k)_{k \in \mathcal{K}})$. Hence, $z$ is a feasible solution for the problem in (3.8). So $P(w) \le Z(w)$.

Conversely, let $z^\bullet$ be a feasible solution for the problem in (3.8). For each $k \in \mathcal{K}$, there exists an optimal solution $(p^{\bullet k}, s^{\bullet k})$ to the problem for $\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} z^\bullet)$. Then,
$$\sum_{k=1}^K q_k f(p^{\bullet k}) = \mathbb{E}[\Lambda^{\mathrm{OPT}}(X + B^{\mathsf T} z^\bullet)] \ge \gamma,$$
by the definition of $P(w)$. Hence, $(z^\bullet, (p^{\bullet k}, s^{\bullet k})_{k \in \mathcal{K}})$ is a feasible solution for the problem in (3.9). So $P(w) \ge Z(w)$.

B.2 Proofs of the results in Section 3.1.1
Proof of Corollary 3.5. Let $Y^{\mathrm{EN}}: \mathbb{R}^n \to \mathbb{R}^n \times \mathbb{Z}^n$ be a set-valued function defined by
$$Y^{\mathrm{EN}}(x) := \Big\{ (p, s) \in \mathbb{R}^n \times \mathbb{Z}^n \ \Big|\ p \le \big(\Pi^{\mathsf T} p + x + M(\mathbf{1} - s)\big) \wedge (\bar p \odot s),\ \Pi^{\mathsf T} p + x \le M s,\ p \in [0, \bar p],\ s \in \{0,1\}^n \Big\}. \tag{B.1}$$
Then, applying Theorem 3.2 with $Y = Y^{\mathrm{EN}}$ gives $P^{\mathrm{EN}}_1(e^\ell) = Z^{\mathrm{EN}}_1(e^\ell)$.

Proof of Proposition 3.6.

1. Let $(z, (p^k, s^k)_{k \in \mathcal{K}})$ be an optimal solution of the problem. To get a contradiction, suppose that $z_\ell > \|X\|_\infty + \|\bar p\|_\infty$. Let $z' \in \mathbb{R}^{\mathcal{G}}$ be the vector such that $z'_\ell = \|X\|_\infty + \|\bar p\|_\infty$ and $z'_{\hat\ell} = z_{\hat\ell}$ for each $\hat\ell \in \mathcal{G} \setminus \{\ell\}$. We claim that $(z', (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution of the problem. Indeed, for each $i \in \mathcal{N}$, $k \in \mathcal{K}$ such that $(B^{\mathsf T} z')_i = \|X\|_\infty + \|\bar p\|_\infty$, constraint (3.15) holds as
$$p^k_i \le \sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega_k) + (B^{\mathsf T} z')_i\big) + M(1 - s^k_i) = \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) + \|X\|_\infty + \|\bar p\|_\infty + M(1 - s^k_i)$$
since $\sum_{j=1}^n \pi_{ji} p^k_j \ge 0$, $X_i(\omega_k) + \|X\|_\infty \ge 0$, $p^k_i \le \bar p_i \le \|\bar p\|_\infty$ and $M(1 - s^k_i) \ge 0$. Also, for each $i \in \mathcal{N}$, $k \in \mathcal{K}$ such that $(B^{\mathsf T} z')_i = \|X\|_\infty + \|\bar p\|_\infty$, constraint (3.17) holds as
$$\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega_k) + (B^{\mathsf T} z')_i\big) = \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) + \|X\|_\infty + \|\bar p\|_\infty < \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) + z_\ell \le M s^k_i,$$
which holds by the supposition $\|X\|_\infty + \|\bar p\|_\infty < z_\ell$ and the feasibility of $(z, (p^k, s^k)_{k \in \mathcal{K}})$. All the other constraints in (3.13) hold by the feasibility of $(z, (p^k, s^k)_{k \in \mathcal{K}})$, since they are free of $\|X\|_\infty + \|\bar p\|_\infty$. Hence, the claim follows, which yields $z_\ell = Z^{\mathrm{EN}}_1(e^\ell) \le z'_\ell = \|X\|_\infty + \|\bar p\|_\infty$. As this is a contradiction, the result follows.

2. To get a contradiction, suppose that the problem has a feasible solution but $Z^{\mathrm{EN}}_1(e^\ell) = -\infty$. Since $Z^{\mathrm{EN}}_1(e^\ell) = -\infty$, there exist $\epsilon > 0$ and $(z, (p^k, s^k)_{k \in \mathcal{K}})$, where $z \in \mathbb{R}^{\mathcal{G}}$ and $(p^k, s^k) \in \mathbb{R}^n \times \mathbb{Z}^n$ for each $k \in \mathcal{K}$, such that $(e^\ell)^{\mathsf T} z = z_\ell = -M$ and $(z - \epsilon e^\ell, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Fix $i \in \mathcal{N}$, $k \in \mathcal{K}$ such that $(B^{\mathsf T} z)_i = z_\ell = -M$. Then, constraint (3.15) contradicts constraint (3.18) as (in the case $s^k_i = 1$)
$$p^k_i \le \sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega_k) + (B^{\mathsf T}(z - \epsilon e^\ell))_i\big) + M(1 - s^k_i) = \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) - M - \epsilon$$
$$= \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) - \epsilon - \|X\|_\infty - (n+1)\|\bar p\|_\infty = \Big(\sum_{j=1}^n \pi_{ji} p^k_j - n\|\bar p\|_\infty\Big) + \big(X_i(\omega_k) - \|X\|_\infty\big) - \|\bar p\|_\infty - \epsilon < 0,$$
since $\sum_{j=1}^n \pi_{ji} p^k_j < n\|\bar p\|_\infty$, $X_i(\omega_k) \le \|X\|_\infty$, $-\|\bar p\|_\infty < 0$ and $-\epsilon < 0$. Hence, $(z - \epsilon e^\ell, (p^k, s^k)_{k \in \mathcal{K}})$ is infeasible, which is a contradiction to the assumption. Hence, $Z^{\mathrm{EN}}_1(e^\ell) > -\infty$. In addition, the existence of a feasible solution implies that $Z^{\mathrm{EN}}_1(e^\ell) < +\infty$. Hence, $Z^{\mathrm{EN}}_1(e^\ell) \in \mathbb{R}$.

3. Assume that $\gamma \le \mathbf{1}^{\mathsf T} \bar p$. Let $z = (\|X\|_\infty + \|\bar p\|_\infty)\mathbf{1}$, $p^k = \bar p$, $s^k = \mathbf{1}$ for each $k \in \mathcal{K}$. We show that $(z, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Since $p^k = \bar p$ for each $k \in \mathcal{K}$, it is clear that $\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf T} p^k = \mathbf{1}^{\mathsf T} \bar p \ge \gamma$. Hence, constraint (3.14) holds. Let $i \in \mathcal{N}$, $k \in \mathcal{K}$. Constraint (3.15) holds as
$$\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega_k) + (B^{\mathsf T} z)_i\big) + M(1 - s^k_i) = \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) + \|X\|_\infty + \|\bar p\|_\infty \ge \bar p_i = p^k_i$$
since $\sum_{j=1}^n \pi_{ji} p^k_j \ge 0$, $X_i(\omega_k) + \|X\|_\infty \ge 0$, $(B^{\mathsf T}\mathbf{1})_i = 1$ and $s^k_i = 1$. Constraint (3.17) holds as
$$\sum_{j=1}^n \pi_{ji} p^k_j + \big(X_i(\omega_k) + (B^{\mathsf T} z)_i\big) = \sum_{j=1}^n \pi_{ji} p^k_j + X_i(\omega_k) + \|X\|_\infty + \|\bar p\|_\infty \le M = M s^k_i,$$
since $\sum_{j=1}^n \pi_{ji} p^k_j \le n\|\bar p\|_\infty$, $X_i(\omega_k) \le \|X\|_\infty$ and by the choice of $M$. All the other constraints in (3.13) hold trivially by the choice of $z$, $p^k$ and $s^k$, for each $k \in \mathcal{K}$. Hence, $(z, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution of the problem.

Conversely, if $\gamma > \mathbf{1}^{\mathsf T} \bar p$, then constraint (3.14) is infeasible, since $\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf T} p^k \le \mathbf{1}^{\mathsf T} \bar p < \gamma$ by constraint (3.18). Hence, the problem is infeasible, which concludes the proof.

B.3 Proofs of the results in Section 3.1.2

Proof of Corollary 3.8. Let $Y^{\mathrm{RV+}}: \mathbb{R}^n \to \mathbb{R}^n \times \mathbb{Z}^n$ be a set-valued function defined by
$$Y^{\mathrm{RV+}}(x) := \Big\{ (p, s) \in \mathbb{R}^n \times \mathbb{Z}^n \ \Big|\ p \le \alpha x + \beta \Pi^{\mathsf T} p + \bar p \odot s,\ \bar p \odot s \le x + \Pi^{\mathsf T} p,\ p \in [0, \bar p],\ s \in \{0,1\}^n \Big\} \tag{B.2}$$
for each $x \in \mathbb{R}^n_+$ and $Y^{\mathrm{RV+}}(x) = \emptyset$ for each $x \in \mathbb{R}^n \setminus \mathbb{R}^n_+$. Then, applying Theorem 3.2 with $Y = Y^{\mathrm{RV+}}$ gives $P^{\mathrm{RV+}}(e^\ell) = Z^{\mathrm{RV+}}(e^\ell)$.

Proof of Proposition 3.9.

1. Let $(z, (p^k, s^k)_{k \in \mathcal{K}})$ be an optimal solution of the problem. To get a contradiction, suppose that $z_\ell > \|X\|_\infty + \alpha\|\bar p\|_\infty$. Let $z' \in \mathbb{R}^{\mathcal{G}}$ be the vector such that $z'_\ell = \|X\|_\infty + \alpha\|\bar p\|_\infty$ and $z'_{\hat\ell} = z_{\hat\ell}$ for each $\hat\ell \in \mathcal{G} \setminus \{\ell\}$. Similar to the argument in the proof of Proposition 3.6(i), it can be checked that $(z', (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution of the problem. Hence, $z_\ell = Z^{\mathrm{RV+}}(e^\ell) \le z'_\ell = \|X\|_\infty + \alpha\|\bar p\|_\infty$. As this is a contradiction, the result follows.

2. To get a contradiction, suppose that the problem has a feasible solution but $Z^{\mathrm{RV+}}(e^\ell) = -\infty$. Let $M = \|X\|_\infty + \alpha(n+1)\|\bar p\|_\infty$. Since $Z^{\mathrm{RV+}}(e^\ell) = -\infty$, there exist $\epsilon > 0$ and $(z, (p^k, s^k)_{k \in \mathcal{K}})$, where $z \in \mathbb{R}^{\mathcal{G}}$ and $(p^k, s^k) \in \mathbb{R}^n \times \mathbb{Z}^n$ for each $k \in \mathcal{K}$, such that $(e^\ell)^{\mathsf T} z = z_\ell = -M$ and $(z - \epsilon e^\ell, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Fix $i \in \mathcal{N}$, $k \in \mathcal{K}$ such that $(B^{\mathsf T} z)_i = z_\ell = -M$. Similar to the argument in the proof of Proposition 3.6(ii), it can be checked that constraint (3.23) contradicts constraint (3.26). Hence, $(z - \epsilon e^\ell, (p^k, s^k)_{k \in \mathcal{K}})$ is infeasible, which is a contradiction to the assumption. Hence, $Z^{\mathrm{RV+}}(e^\ell) > -\infty$. In addition, the existence of a feasible solution implies that $Z^{\mathrm{RV+}}(e^\ell) < +\infty$. Hence, $Z^{\mathrm{RV+}}(e^\ell) \in \mathbb{R}$.

3. Assume that $\gamma \le \mathbf{1}^{\mathsf T} \bar p$. Let $z = (\|X\|_\infty + \alpha\|\bar p\|_\infty)\mathbf{1}$, $p^k = \bar p$, $s^k = \mathbf{1}$ for each $k \in \mathcal{K}$. As in the proof of Proposition 3.6(iii), it can be checked that $(z, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Conversely, if $\gamma > \mathbf{1}^{\mathsf T} \bar p$, then constraint (3.22) is infeasible, since $\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf T} p^k \le \mathbf{1}^{\mathsf T} \bar p < \gamma$ by constraint (3.26). Hence, the problem in (3.21) is infeasible.

B.4 Proof of Theorem 3.10

Let $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ be a feasible solution of the problem in (3.29).
For each $k \in \mathcal{K}$, $(p^k, s^k)$ is a feasible solution to $\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} v + \mu\mathbf{1})$ in (3.3) because the problem in (3.29) includes the constraints of (3.3). Hence, for each $k \in \mathcal{K}$,
$$\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} v + \mu\mathbf{1}) \ge f(p^k),$$
which implies
$$\mathbb{E}[\Lambda^{\mathrm{OPT}}(X + B^{\mathsf T} v + \mu\mathbf{1})] \ge \sum_{k=1}^K q_k f(p^k) \ge \gamma,$$
where the second inequality holds by the feasibility of $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$. Then, $\mu$ is a feasible solution for the problem in (3.28). Hence, $P(v) \le Z(v)$.

Conversely, let $\mu^\bullet \in \mathbb{R}$ be a feasible solution for the problem in (3.28). Then, for each $k \in \mathcal{K}$, $\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} v + \mu^\bullet\mathbf{1}) \in \mathbb{R}$ and, by the compactness of $Y(X(\omega_k) + B^{\mathsf T} v + \mu^\bullet\mathbf{1})$, there exists an optimal solution $(p^{\bullet k}, s^{\bullet k})$ for the problem $\Lambda^{\mathrm{OPT}}(X(\omega_k) + B^{\mathsf T} v + \mu^\bullet\mathbf{1})$ in (3.3). Then,
$$\sum_{k=1}^K q_k f(p^{\bullet k}) = \mathbb{E}[\Lambda^{\mathrm{OPT}}(X + B^{\mathsf T} v + \mu^\bullet\mathbf{1})] \ge \gamma$$
by the definition of $P(v)$. Hence, $(\mu^\bullet, (p^{\bullet k}, s^{\bullet k})_{k \in \mathcal{K}})$ is a feasible solution for the problem in (3.29). Hence, $P(v) \ge Z(v)$.

B.5 Proofs of the results in Section 3.2.1
Proof of
Corollary 3.11 . Let Y = Y EN as in the proof of Corollary 3.5. Then, applying Theo-rem 3.10 gives P EN2 ( v ) = Z EN2 ( v ). Proof of
Proposition 3.12 .
1. Let $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ be an optimal solution of the problem. To get a contradiction, suppose that $\mu > \|X\|_\infty + \|v\|_\infty + \|\bar p\|_\infty =: \mu^{\max}$. We claim that $(\mu^{\max}, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution of the problem. Let $i \in \mathcal{N}$, $k \in \mathcal{K}$. Note that constraint (3.33) holds as
\[
p_i^k \le \sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + (B^{\mathsf T} v)_i + \mu^{\max}\bigr) + M(1 - s_i^k)
= \sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + \|X\|_\infty\bigr) + \bigl((B^{\mathsf T} v)_i + \|v\|_\infty\bigr) + \|\bar p\|_\infty + M(1 - s_i^k),
\]
since $\sum_{j=1}^n \pi_{ji} p_j^k \ge 0$, $X_i(\omega^k) + \|X\|_\infty \ge 0$, $(B^{\mathsf T} v)_i + \|v\|_\infty \ge 0$, $p_i^k \le \|\bar p\|_\infty$, and $M(1 - s_i^k) \ge 0$. Constraint (3.35) holds as
\[
\sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + (B^{\mathsf T} v)_i + \mu^{\max}\bigr) < \sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + (B^{\mathsf T} v)_i + \mu\bigr) \le M s_i^k
\]
by the assumption $\mu^{\max} < \mu$ and the feasibility of $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$. All the other constraints in (3.31) hold by the feasibility of $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$, since they are free of $\mu$. Hence, the claim follows, which yields $\mu = Z^{\mathrm{EN}}_2(v) \le \mu^{\max}$. As this is a contradiction, we obtain the desired result.

2. To get a contradiction, suppose that the problem has a feasible solution but $Z^{\mathrm{EN}}_2(v) = -\infty$. Then, there exist $\epsilon > 0$ and $(p^k, s^k)_{k \in \mathcal{K}}$, where $(p^k, s^k) \in \mathbb{R}^n \times \mathbb{Z}^n$ for each $k \in \mathcal{K}$, such that $(-M - \epsilon, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution of the problem. Fix $i \in \mathcal{N}$, $k \in \mathcal{K}$. Then constraint (3.33) violates constraint (3.36) as
\[
\begin{aligned}
p_i^k &\le \sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + (B^{\mathsf T} v)_i - M - \epsilon\bigr) + M(1 - s_i^k)
\le \sum_{j=1}^n \pi_{ji} p_j^k + X_i(\omega^k) + (B^{\mathsf T} v)_i - \epsilon - M \\
&\le \Bigl(\sum_{j=1}^n \pi_{ji} p_j^k - (n+1)\|\bar p\|_\infty\Bigr) + \bigl(X_i(\omega^k) - \|X\|_\infty\bigr) + \bigl((B^{\mathsf T} v)_i - \|v\|_\infty\bigr) - \epsilon < 0,
\end{aligned}
\]
since $M \ge \|X\|_\infty + \|v\|_\infty + (n+1)\|\bar p\|_\infty$, $\sum_{j=1}^n \pi_{ji} p_j^k < (n+1)\|\bar p\|_\infty$, $X_i(\omega^k) \le \|X\|_\infty$, $(B^{\mathsf T} v)_i \le \|v\|_\infty$, and $-\epsilon < 0$. Hence, $(-M - \epsilon, (p^k, s^k)_{k \in \mathcal{K}})$ is infeasible, which contradicts the assumption. Hence, $Z^{\mathrm{EN}}_2(v) > -\infty$. On the other hand, $Z^{\mathrm{EN}}_2(v) < +\infty$ by the existence of a feasible solution. So $Z^{\mathrm{EN}}_2(v) \in \mathbb{R}$.

3. Assume that $\gamma \le \mathbf{1}^{\mathsf T} \bar p$. Let $\mu = \|X\|_\infty + \|v\|_\infty + \|\bar p\|_\infty$, $p^k = \bar p$, $s^k = \mathbf{1}$ for each $k \in \mathcal{K}$. We show that $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Since $p^k = \bar p$ for each $k \in \mathcal{K}$, it holds that $\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf T} p^k = \mathbf{1}^{\mathsf T} \bar p \ge \gamma$. Hence, constraint (3.32) holds. Now fix $i \in \mathcal{N}$, $k \in \mathcal{K}$. Constraint (3.33) holds as
\[
\sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + (B^{\mathsf T} v)_i + \mu\bigr) + M(1 - s_i^k)
= \sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + \|X\|_\infty\bigr) + \bigl((B^{\mathsf T} v)_i + \|v\|_\infty\bigr) + \|\bar p\|_\infty
\ge \bar p_i = p_i^k,
\]
since $\sum_{j=1}^n \pi_{ji} p_j^k \ge 0$, $X_i(\omega^k) + \|X\|_\infty \ge 0$, $(B^{\mathsf T} v)_i + \|v\|_\infty \ge 0$, $\|\bar p\|_\infty \ge \bar p_i$, and $s_i^k = 1$ by the choice of $s^k$. Constraint (3.35) holds as
\[
\sum_{j=1}^n \pi_{ji} p_j^k + \bigl(X_i(\omega^k) + (B^{\mathsf T} v)_i + \mu\bigr) \le 2\|X\|_\infty + 2\|v\|_\infty + (n+1)\|\bar p\|_\infty \le M = M s_i^k,
\]
since $\sum_{j=1}^n \pi_{ji} p_j^k \le n\|\bar p\|_\infty$, $X_i(\omega^k) \le \|X\|_\infty$, and $(B^{\mathsf T} v)_i \le \|v\|_\infty$. All the other constraints hold trivially by the choice of $\mu$, $p^k$ and $s^k$ for each $k \in \mathcal{K}$. Hence, $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Conversely, if $\gamma > \mathbf{1}^{\mathsf T} \bar p$, then constraint (3.32) is infeasible, since $\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf T} p^k \le \mathbf{1}^{\mathsf T} \bar p < \gamma$ by constraint (3.36). Hence, the problem is infeasible, which finishes the proof.

B.6 Proofs of the results in Section 3.2.2

Proof of
Corollary 3.15. Let $Y = Y^{\mathrm{RV}}_+$ be as in the proof of Corollary 3.8. Then the result follows by Theorem 3.10.

Proof of Proposition 3.16.
1. Let $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ be an optimal solution for the problem. To get a contradiction, suppose that $\mu > \|X\|_\infty + \|v\|_\infty + \alpha\|\bar p\|_\infty$. Following similar arguments as in the proof of Proposition 3.12(i), it can be shown that $(\|X\|_\infty + \|v\|_\infty + \alpha\|\bar p\|_\infty, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Hence, $\mu = Z^{\mathrm{RV}}_+(v) \le \|X\|_\infty + \|v\|_\infty + \alpha\|\bar p\|_\infty$, which is a contradiction.

2. To get a contradiction, suppose that the problem has a feasible solution but $Z^{\mathrm{RV}}_+(v) = -\infty$. Let $M = \|X\|_\infty + \|v\|_\infty + \alpha(n+1)\|\bar p\|_\infty$. Then, there exist $\epsilon > 0$ and $(p^k, s^k)_{k \in \mathcal{K}}$, where $(p^k, s^k) \in \mathbb{R}^n \times \mathbb{Z}^n$ for each $k \in \mathcal{K}$, such that $(-M - \epsilon, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Fix $i \in \mathcal{N}$, $k \in \mathcal{K}$. As in the proof of Proposition 3.12(ii), it can be checked that constraint (3.41) violates constraint (3.44). Hence, $(-M - \epsilon, (p^k, s^k)_{k \in \mathcal{K}})$ is infeasible, which contradicts the assumption. Hence, $Z^{\mathrm{RV}}_+(v) > -\infty$. Together with the feasibility of the problem, it follows that $Z^{\mathrm{RV}}_+(v) \in \mathbb{R}$.

3. Assume that $\gamma \le \mathbf{1}^{\mathsf T} \bar p$. Let $\mu = \|X\|_\infty + \|v\|_\infty + \alpha\|\bar p\|_\infty$, $p^k = \bar p$, $s^k = \mathbf{1}$ for each $k \in \mathcal{K}$. As in the proof of Proposition 3.12(iii), it can be shown that $(\mu, (p^k, s^k)_{k \in \mathcal{K}})$ is a feasible solution for the problem. Conversely, if $\gamma > \mathbf{1}^{\mathsf T} \bar p$, then constraint (3.40) is infeasible, since $\sum_{k \in \mathcal{K}} q_k \mathbf{1}^{\mathsf T} p^k \le \mathbf{1}^{\mathsf T} \bar p < \gamma$ by constraint (3.44). Hence, the problem is infeasible, which concludes the proof.

C The nonconvex Benson-type algorithm
In this section, we present an algorithm that approximates the Eisenberg-Noe and Rogers-Veraart systemic risk measures. The risk measures are approximated with respect to a user-defined approximation error $\epsilon > 0$ and a point $z^{\mathrm{UB}} \in \mathbb{R}^G$. The algorithm is based on the Benson-type algorithm for nonconvex multi-objective programming problems described in Nobakhtian and Shafiei (2017), from which the following definitions are borrowed.

Let $\mathcal{L} \subseteq \mathbb{R}^G$. A point $v \in \mathcal{L}$ is called a vertex of $\mathcal{L}$ if there exists a neighborhood $N$ of $v$ for which $v$ cannot be expressed as a strict convex combination of two distinct points in $\mathcal{L} \cap N$. The set of all vertices of $\mathcal{L}$ is denoted by $\operatorname{vert} \mathcal{L}$, and $\operatorname{int} \mathcal{L}$ denotes the interior of $\mathcal{L}$. Given a point $z \in \mathbb{R}^G$ and $\mathcal{L} \subseteq \mathbb{R}^G$, we define $\mathcal{L}|_z := \{v \in \mathcal{L} \mid v \le z\}$.

Let $\mathcal{R}, \mathcal{L}, \mathcal{U} \subseteq \mathbb{R}^G$, $z \in \mathbb{R}^G$ and $\epsilon > 0$. The set $\mathcal{L}$ is called an outer approximation for $\mathcal{R}$ with respect to $\epsilon$ and $z$ if $\mathcal{R} \subseteq \mathcal{L}$ and $\mathcal{L}|_z \subseteq \mathcal{R} + B(0, \epsilon)$, where $B(0, \epsilon)$ is the closed ball in $\mathbb{R}^G$ centered at $0$ with radius $\epsilon$. The set $\mathcal{U}$ is called an inner approximation for $\mathcal{R}$ with respect to $\epsilon$ and $z$ if $\mathcal{R}$ is an outer approximation for $\mathcal{U}$ with respect to $\epsilon$ and $z$.

The algorithm that calculates inner and outer approximations of a systemic risk measure works as follows. It is provided in detail only for the Eisenberg-Noe systemic risk measures, since it works similarly for the Rogers-Veraart systemic risk measures. Let $(\mathcal{N}, \pi, \bar p, X)$ be a signed Eisenberg-Noe network. Let $G$ be the number of groups in the network and $\mathcal{G} = \{1, \ldots, G\}$. Consider the corresponding Eisenberg-Noe systemic risk measure $R^{\mathrm{EN}}(X)$. Let $z^{\mathrm{ideal}} \in \mathbb{R}^G$ be the ideal point of the vector optimization problem in (3.6) with $\Lambda^{\mathrm{OPT}} = \Lambda^{\mathrm{EN}}$; see Remark 3.4 for its definition. One can calculate $z^{\mathrm{ideal}} = (Z^{\mathrm{EN}}_1(e^1), \ldots, Z^{\mathrm{EN}}_1(e^G))^{\mathsf T}$ by Corollary 3.5.
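The per-scenario object that the MILPs above encode is the Eisenberg-Noe clearing vector, i.e., the fixed point $p = \bar p \wedge (\Pi^{\mathsf T} p + x)^+$. As a self-contained illustration of this fixed point (not of the paper's MILP formulation), the following Python sketch computes the greatest clearing vector of a network with nonnegative operating cash flows by Picard iteration started from $\bar p$, which converges by the monotonicity argument of Eisenberg and Noe (2001); the function name and the toy two-node network are our own illustrative choices.

```python
import numpy as np

def en_clearing_vector(pi, p_bar, x, tol=1e-10, max_iter=10_000):
    """Greatest Eisenberg-Noe clearing vector by fixed-point iteration.

    pi[i, j] is the fraction of node i's total obligation p_bar[i] owed
    to node j; x[i] >= 0 is the operating cash flow of node i.  Iterates
    p <- min(p_bar, (Pi^T p + x)^+); the map is monotone, so starting
    from p_bar yields the greatest clearing vector.
    """
    p = p_bar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, np.maximum(pi.T @ p + x, 0.0))
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Toy example: node 1 owes 1.0 entirely to node 2; node 2 owes 1.0 to
# creditors outside the network (row of zeros in pi).
pi = np.array([[0.0, 1.0],
               [0.0, 0.0]])
p_bar = np.array([1.0, 1.0])
x = np.array([0.4, 0.0])
p = en_clearing_vector(pi, p_bar, x)  # both nodes default: p = [0.4, 0.4]
```

Node 1 can pay only its cash flow 0.4, so node 2 receives 0.4 and can in turn pay only 0.4, which the iteration recovers after two passes.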
In addition, for $v \in \mathbb{R}^G$, the minimum step-length $P^{\mathrm{EN}}_2(v)$ can be obtained by solving the MILP problem $Z^{\mathrm{EN}}_2(v)$ in Corollary 3.11.

The algorithm starts with the initial inner approximation $\mathcal{U}^0 := z^{\mathrm{UB}} + \mathbb{R}^G_+$ and the initial outer approximation $\mathcal{L}^0 := z^{\mathrm{ideal}} + \mathbb{R}^G_+$, which satisfy $\mathcal{U}^0 \subseteq R^{\mathrm{EN}}(X) \subseteq \mathcal{L}^0$. Let $\varepsilon = \epsilon \mathbf{1}$, where $\mathbf{1} = (1, \ldots, 1)^{\mathsf T} \in \mathbb{R}^G$, and initially set $t \leftarrow 0$. At the $t$th iteration, for a vertex $v^t \in \operatorname{vert} \mathcal{L}^t|_{z^{\mathrm{UB}}}$ such that $v^t + \varepsilon \notin \operatorname{int} \mathcal{U}^t$, the algorithm solves $Z^{\mathrm{EN}}_2(v^t)$ to obtain the minimum step-length $\mu^t$ from the point $v^t$ to the boundary of $R^{\mathrm{EN}}(X)$ in the direction $\mathbf{1} \in \mathbb{R}^G$. In other words, $y^t = v^t + \mu^t \mathbf{1}$ is a boundary point of the set $R^{\mathrm{EN}}(X)$. Then the algorithm excludes the cone $y^t - \mathbb{R}^G_+$ from $\mathcal{L}^t$ to obtain $\mathcal{L}^{t+1} := \mathcal{L}^t \setminus (y^t - \mathbb{R}^G_+)$, and adds the cone $y^t + \mathbb{R}^G_+$ to $\mathcal{U}^t$ to obtain $\mathcal{U}^{t+1} := \mathcal{U}^t \cup (y^t + \mathbb{R}^G_+)$. Therefore, at each step of the algorithm, we have $\mathcal{U}^t \subseteq \mathcal{U}^{t+1} \subseteq R^{\mathrm{EN}}(X) \subseteq \mathcal{L}^{t+1} \subseteq \mathcal{L}^t$. At the end of the $t$th iteration, $\operatorname{vert} \mathcal{L}^{t+1}$ is computed; this computation is described in detail in Gourion and Luc (2010). The above process repeats for $t \leftarrow t + 1$. The algorithm stops at the $T$th iteration, when $(\operatorname{vert} \mathcal{L}^T|_{z^{\mathrm{UB}}}) + \varepsilon \subseteq \operatorname{int} \mathcal{U}^T$. The sets $\mathcal{U}^T$ and $\mathcal{L}^T$ are the inner and outer approximations for $R^{\mathrm{EN}}(X)$ with respect to $\epsilon > 0$ and $z^{\mathrm{UB}} \in \mathbb{R}^G$. Note that $z^{\mathrm{UB}}$ has to be chosen such that $z^{\mathrm{UB}} \in R^{\mathrm{EN}}(X)$ to get nonempty approximations. The pseudocode of the algorithm for the Eisenberg-Noe systemic risk measures is provided in Algorithm 1.

Algorithm 1.
Inner and outer approximation algorithm for $R^{\mathrm{EN}}(X)$

Initialization.
(i1) Let $z^{\mathrm{UB}} \in R^{\mathrm{EN}}(X)$, $\mathcal{L}^0 = z^{\mathrm{ideal}} + \mathbb{R}^G_+$, $\mathcal{U}^0 = z^{\mathrm{UB}} + \mathbb{R}^G_+$ and $\epsilon > 0$.
(i2) Put $\varepsilon = \epsilon \mathbf{1}$ and set $t \leftarrow 0$, $S \leftarrow \emptyset$.

Iterations.
(k1) If $(\operatorname{vert} \mathcal{L}^t|_{z^{\mathrm{UB}}}) \subseteq S$, then set $T = t$ and go to (r1). Otherwise, choose $v^t \in (\operatorname{vert} \mathcal{L}^t|_{z^{\mathrm{UB}}}) \setminus S$.
(k2) If $v^t + \varepsilon \in \operatorname{int} \mathcal{U}^t$, then set $S \leftarrow S \cup \{v^t\}$ and go to (k1).
(k3) Compute $\mu^t = P^{\mathrm{EN}}_2(v^t)$. Define $y^t = v^t + \mu^t \mathbf{1}$.
(k4) Define $\mathcal{L}^{t+1} := \mathcal{L}^t \setminus (y^t - \mathbb{R}^G_+)$ and $\mathcal{U}^{t+1} := \mathcal{U}^t \cup (y^t + \mathbb{R}^G_+)$.
(k5) Determine $\operatorname{vert} \mathcal{L}^{t+1}$ and set $t \leftarrow t + 1$. Go to (k1).

Results.
(r1) $\mathcal{L}^T$ is an outer approximation and $\mathcal{U}^T$ is an inner approximation for $R^{\mathrm{EN}}(X)$.

References
Ç. Ararat and B. Rudloff. Dual representations for systemic risk measures. Mathematics and Financial Economics, 14(1):139–174, 2020.
P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9(3):203–228, 1999.
H. P. Benson. An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. Journal of Global Optimization, 13:1–24, 1998.
F. Biagini, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis. A unified approach to systemic risk measures via acceptance sets. Mathematical Finance, 29(1):329–367, 2019.
C. Chen, G. Iyengar, and C. C. Moallemi. An axiomatic approach to systemic risk. Management Science, 59(6):1373–1388, 2013.
L. Eisenberg and T. H. Noe. Systemic risk in financial systems. Management Science, 47(2):236–249, 2001.
P. Erdős and A. Rényi. On random graphs I. Publicationes Mathematicae, 6:290–297, 1959.
Z. Feinstein, B. Rudloff, and S. Weber. Measures of systemic risk. SIAM Journal on Financial Mathematics, 8(1):672–708, 2017.
H. Föllmer and A. Schied. Stochastic Finance: An Introduction in Discrete Time. De Gruyter Textbook Series, 3rd edition, 2011.
C. Gerstewitz and E. Iwanow. Dualität für nichtkonvexe Vektoroptimierungsprobleme. Wiss. Z. Tech. Hochsch. Ilmenau, 2:61–81, 1985.
E. N. Gilbert. Random graphs. Annals of Mathematical Statistics, 30(4):1141–1144, 1959.
A. Göpfert, H. Riahi, C. Tammer, and C. Zalinescu. Variational Methods in Partially Ordered Spaces. Springer-Verlag, New York, 2003.
D. Gourion and D. T. Luc. Finding efficient solutions by free disposal outer approximation. SIAM Journal on Optimization, 20(6):2939–2958, 2010.
Gurobi Optimizer Reference Manual. Version 8.0. Gurobi Optimization, LLC, 2018.
A. H. Hamel and F. Heyde. Duality for set-valued measures of risk. SIAM Journal on Financial Mathematics, 1(1):66–95, 2010.
A. H. Hamel, F. Heyde, and B. Rudloff. Set-valued risk measures for conical market models. Mathematics and Financial Economics, 5(1):1–28, 2011.
A. H. Hamel, A. Löhne, and B. Rudloff. A Benson type algorithm for linear vector optimization and applications. Journal of Global Optimization, 59(4):811–836, 2014.
J. Jahn. Vector Optimization: Theory, Applications, and Extensions. Springer, 2nd edition, 2011.
Yu. Kabanov, R. Mokbel, and Kh. El Bitar. Clearing in financial networks. Theory of Probability and Its Applications, 62(2):311–344, 2017.
A. Löhne, B. Rudloff, and F. Ulus. Primal and dual approximation algorithms for convex vector optimization problems. Journal of Global Optimization, 60(4):713–736, 2014.
R. R. Meyer. On the existence of optimal solutions to integer and mixed-integer programming problems. Mathematical Programming, 7(1):223–235, 1974.
S. Nobakhtian and N. Shafiei. A Benson type algorithm for nonconvex multiobjective programming problems. TOP, 25(2):271–287, 2017.
A. Pascoletti and P. Serafini. Scalarizing vector optimization problems. Journal of Optimization Theory and Applications, 42(4):499–524, 1984.
L. C. G. Rogers and L. A. M. Veraart. Failure and rescue in an interbank network. Management Science, 59(4):882–898, 2013.