A Linear Input Dependence Model for Interdependent Networks
Hemanshu Kaul∗ and Adam Rumpf†

February 11, 2021
Abstract
We consider a linear relaxation of a generalized minimum-cost network flow problem with binary input dependencies. In this model the flows through certain arcs are bounded by linear (or, more generally, piecewise linear concave) functions of the flows through other arcs. This formulation can be used to model interrelated systems in which the components of one system require the delivery of material from another system in order to function (for example, components of a subway system may require delivery of electrical power from a separate system). We propose and study randomized rounding schemes for how this model can be used to approximate solutions to a related mixed integer linear program for modeling binary input dependencies. The introduction of side constraints prevents this problem from being solved using the well-known network simplex algorithm; however, by characterizing its basis structure we develop a generalization of the network simplex algorithm that can be used for its efficient solution.
1 Introduction

As society continues its trend towards urbanization it also becomes more reliant on highly interconnected civil infrastructure systems, which include services such as transportation, electrical power, telecommunications, water, and waste management. Interconnections arise when the functionality of one system relies on another, for example the reliance of telephone lines and subway systems on the functionality of the electrical power grid. These interdependencies make the analysis of such systems much more complicated, and cause them to become more vulnerable to both attack and natural disaster due to the possibility of cascading failures.

A cascading failure occurs when the failure of one component of a system causes another component (possibly in a different system) to also fail. This may in turn cause another component to fail, and then another, resulting in a chain reaction that causes damage far beyond the initial failures. This phenomenon has been the subject of a great deal of recent research incorporating a wide variety of modeling techniques [21]. For example, Kinney et al. 2005 [13] studied the North American power grid by examining the effects of removing random substations, which may result in a chain reaction of substation overloads that damages a large portion of the grid. Ouyang and Dueñas-Osorio 2011 [23] examined different strategies for designing the interfaces between different infrastructure systems in an effort to maximize resistance to random natural disaster scenarios, Ouyang and Wang 2015 [24] examined different interdependent network restoration strategies, and Ouyang 2017 [22] examined worst-case scenarios as the results of directed attacks. Zhong et al. 2019 [31] studied the repair process of various interdependent networks with load-dependent cascading failures.

∗Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616.
E-mail: [email protected]
†Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616. E-mail: [email protected]

Many interdependent network studies focus purely on topological aspects of the networks, drawing from random graph theory and percolation theory to examine how damage can propagate through a collection of interconnected networks. Gao et al. 2012 [10] showed that reducing the number of interdependencies between networks could reduce their susceptibility to attack. Parandehgheibi and Modiano 2013 [26], Nguyen et al. 2013 [18], and Sen et al. 2014 [29] all studied the interaction between an electrical power network and a communication network by finding optimal strategies for causing as much damage propagation as possible by removing as few nodes as possible. Havlin et al. 2014 [11] showed results for various attack strategies on various types of random network. Lam and Tai 2018 [15] used fuzzy set theory to model networks with uncertain interdependencies.

Rather than focusing purely on the network topology, many other studies of interdependent infrastructure systems have instead made use of network flows models [25], which have long been in use for other common civil infrastructure problems like transportation design [9]. Under this modeling paradigm each infrastructure system is thought of as transporting a flow of material through a collection of nodes and arcs, with the flows and the network components representing different things in different systems. For example, in part of the transportation network, flow might represent cars, arcs roads, and nodes intersections, while in the water network, flow might represent water, arcs pipelines, and nodes water treatment facilities and homes. This model allows design and recovery problems to be stated as minimum-cost network flow (MCNF) problems [1, pp.
4–5].

Interdependencies can be introduced into the standard MCNF problem by including side constraints in addition to the usual flow conservation and capacity constraints. Lee et al. 2007 [16] put forward a model for describing these relationships. This paper described a type of interdependence called input dependence, in which a component of one infrastructure system requires delivery of resources from another system in order to function (for example, the relationship between the electrical power network and the subway network described above). Input dependence can be modeled by causing an arc in one network (the child) to become unable to transport flow unless a demand node in another network (the parent) receives its full flow demand. Instead of strictly enforcing that all demand be satisfied as in the standard MCNF, shortfall is allowed but is penalized by adding a penalty cost to the objective and by causing other components of the network to fail. This type of constraint has since been incorporated into other models, notably by Cavdaroglu et al., who used it as part of their disaster recovery scheduling model [7] to study a series of similar problems involving interdependent infrastructure systems [8, 19, 20, 30]. A similar notion of input dependence has been used in other network flows-based models [3, 12].

The modeling technique used by Lee et al. included a binary indicator variable for each parent/child pair with logical constraints to force its value to 0 if the parent's demand was not fully met and 1 otherwise. This variable was used as a multiplier for the child arc's capacity constraint, effectively leaving its capacity unchanged if the parent's demand was satisfied and setting it to 0 otherwise. The use of binary variables can cause the resulting mixed integer linear program (MILP) to become computationally intractable for large civil infrastructure systems.

The purpose of this paper is to examine the linear program (LP) relaxation of the Lee et al.
input dependence MILP model, obtained by replacing the binary indicator variables with real-valued variables on the interval [0, 1], yielding the minimum-cost network flow problem with linear interdependencies (MCNFLI). We explore applications of the MCNFLI, including its ability to describe relationships outside the scope of the original binary input dependence model, and its use in a randomized rounding algorithm for obtaining near-optimal feasible solutions to the MILP model. We present a generalized version of the well-known network simplex algorithm [1, pp. 402–460] that can be used for its efficient solution.

Structure of this paper: In Section 2 we describe the formulation of the MCNFLI model by generalizing the LP relaxation of the binary input dependence model from Lee et al. 2007, and the motivation underlying its formulation. Section 3 discusses how the solutions of the MCNFLI model combined with a variety of randomized rounding schemes can be used to build approximate solutions for the MILP model. This motivates us to efficiently find solutions for the MCNFLI model, which we study in the remainder of the paper. In Section 4 we explore the structure of the MCNFLI and derive some important theoretical results, including the main result of this paper: the characterization of the basic feasible solution. These results are then applied in Section 5 to derive a generalized network simplex algorithm. Finally we conclude in Section 6 with a summary of results and a discussion of future work.
2 Model Formulation

The MCNFLI consists of an MCNF with side constraints. Since any collection of interdependent networks can be merged into a single network using artificial arcs with high cost, for the remainder of the paper we will consider a flow network $G = (V, E)$ with node set $V$ and arc set $E$. For each node $i \in V$ we have a supply value $b_i$ which is positive for supply nodes, negative for demand nodes, and zero for transshipment nodes. We assume that $\sum_{i \in V} b_i = 0$. For each arc $ij \in E$ we have a constant upper bound $u_{ij} \in [0, \infty]$, a unit flow cost $c_{ij}$, and a nonnegative flow variable $x_{ij}$.

We also have a set $I$ of interdependencies. Rather than the node-to-arc interdependencies described in Section 1, we will use arc-to-arc interdependencies in which one arc acts as the parent of a different arc. This formulation can be obtained from a node-to-arc interdependence using a transformation similar to that of the minimum-cost formulation of the maximum flow problem [5, pp. 430–433], with a parent arc being saturated corresponding to a parent node's demand being met. Each element of $I$ is an ordered pair $(ij, kl) \in E \times E$ whose first element represents a parent and whose second element represents the corresponding child. We assume a one-to-one correspondence between parents and children. For each interdependence in $I$ we are also given constants $\alpha_{ij}^{kl}$ and $\beta_{ij}^{kl}$ which define the linear interdependence. The resulting model, which we will refer to as LP-I, is

$$\min \sum_{ij \in E} c_{ij} x_{ij} \tag{1}$$
$$\text{s.t.} \quad \sum_{j : ij \in E} x_{ij} - \sum_{j : ji \in E} x_{ji} = b_i \quad \forall i \in V \tag{2}$$
$$0 \le x_{ij} \le u_{ij} \quad \forall ij \in E \tag{3}$$
$$x_{kl} \le \alpha_{ij}^{kl} x_{ij} + \beta_{ij}^{kl} \quad \forall (ij, kl) \in I \tag{4}$$

Note that (1)–(3) are exactly the objective and constraints of the MCNF. The new side constraints (4) describe the interdependencies, bounding the flow through a child arc $x_{kl}$ by a linear function $\alpha_{ij}^{kl} x_{ij} + \beta_{ij}^{kl}$ of the flow through its corresponding parent arc $x_{ij}$.

The binary input dependence model from Lee et al.
2007, which we will refer to as MILP-I, also takes the form of the MCNF with side constraints. It makes use of a binary linking variable $y_{ij}^{kl}$ for each interdependence $(ij, kl) \in I$. Additional constraints and slack variables force $y_{ij}^{kl} = 0$ when $x_{ij} < u_{ij}$ and $y_{ij}^{kl} = 1$ when $x_{ij} = u_{ij}$, allowing $y_{ij}^{kl}$ to act as an indicator of whether the parent is saturated. The interdependence itself is then enforced by a constraint of the form $x_{kl} \le u_{kl} y_{ij}^{kl}$, forcing the child's flow to be bounded by $u_{kl}$ if the parent is saturated and 0 otherwise.

The LP relaxation of MILP-I involves allowing the linking variables $y_{ij}^{kl}$ to take values in the interval $[0, 1]$ instead of the set $\{0, 1\}$. Doing so allows the linking variables to be eliminated via substitution. The resulting linking constraint takes the form $x_{kl} \le \frac{u_{kl}}{u_{ij}} x_{ij}$, which is the special case of LP-I for which $\alpha_{ij}^{kl} = \frac{u_{kl}}{u_{ij}}$ and $\beta_{ij}^{kl} = 0$ for all $(ij, kl) \in I$. For this reason, throughout the remainder of the paper we will refer to LP-I as the LP relaxation of MILP-I.

Linear input dependencies have several modeling applications of their own beyond simply approximating binary input dependencies. While binary input dependence renders a child arc completely unusable if there is any shortfall at its parent arc, linear input dependence still permits partial functionality from partial delivery, which may be more appropriate for modeling certain systems. In addition, by placing a sequence of parents and children in series, linear input dependencies as modeled in LP-I can be used to generate a piecewise linear concave bound on the child's capacity as a function of the parent's saturation, which can model input dependencies in which the delivery of more material has diminishing returns.

3 Approximating Solutions of MILP-I

Although, as discussed above, a linear input dependency model like LP-I is important in its own right, in this section we will investigate how solutions of LP-I can be applied to find good approximations to the solutions of MILP-I.
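The elimination of the linking variables mentioned above can be written out explicitly; this is a sketch assuming the relaxed logical constraints reduce to $y_{ij}^{kl} \le x_{ij}/u_{ij}$:

```latex
% Relaxed indicator bound chained with the child capacity constraint:
\[
  x_{kl} \;\le\; u_{kl}\, y_{ij}^{kl}
         \;\le\; u_{kl}\, \frac{x_{ij}}{u_{ij}}
         \;=\;   \frac{u_{kl}}{u_{ij}}\, x_{ij},
\]
% which is the LP-I special case
\[
  \alpha_{ij}^{kl} = \frac{u_{kl}}{u_{ij}}, \qquad \beta_{ij}^{kl} = 0.
\]
```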
As MILP-I is NP-complete, it is not in general computationally tractable to solve large instances of it exactly; however, its linear relaxation LP-I can be solved quickly, and those solutions can be used to build a solution for MILP-I. In the following sections we describe randomized rounding algorithms for obtaining a near-optimal, feasible solution to MILP-I based on its linear relaxation, and then use computational experiments to obtain empirical results for the method's effectiveness.
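The resample-until-feasible loop described here can be sketched in pure Python; the function and its feasibility callback are illustrative stand-ins, not the authors' implementation, and the rounding probabilities would come from one of the schemes defined below.

```python
import random

def randomized_rounding(probs, is_feasible, max_iters=1000, seed=0):
    """Draw binary linking variables y[e] = 1 with probability probs[e],
    re-drawing until the resulting fixed-y LP is feasible (per the
    supplied callback) or the iteration limit is hit.  Returns the
    assignment, or None on failure."""
    rng = random.Random(seed)
    for _ in range(max_iters):
        y = {e: 1 if rng.random() < p else 0 for e, p in probs.items()}
        if is_feasible(y):
            return y
    return None

# Hypothetical interdependence pairs; probabilities of exactly 0 or 1
# make the draw deterministic (random() lies in [0, 1)).
probs = {("a", "b"): 1.0, ("c", "d"): 0.0}
y = randomized_rounding(probs, is_feasible=lambda y: True)
```

In practice `is_feasible` would attempt to solve the LP that results from fixing the linking variables, which is why a retry limit is needed.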
In MILP-I, the binary linking variable $y_{ij}^{kl}$ for each interdependent pair $(ij, kl) \in I$ either forces the parent arc to be saturated (when $y_{ij}^{kl} = 1$) or the child arc to be unused (when $y_{ij}^{kl} = 0$). A common approximation technique for such a program with binary variables would be to solve the linear relaxation (restricting $y_{ij}^{kl}$ to $[0, 1]$ instead of $\{0, 1\}$) and then deterministically round the relaxed binary variables to 1 if above some threshold or 0 if below some threshold, fixing the rounded variables and then finally solving the resulting pure LP. Unfortunately there is no deterministic rounding rule that could be applied in this case that would guarantee the existence of a feasible solution. As an alternative we consider the related idea of randomized rounding algorithms.

Let $x^*$ be the optimal flow for the LP relaxation of MILP-I. For each interdependence $(ij, kl) \in I$ we set $y_{ij}^{kl}$ to 1 with probability $P_{ij}^{kl}$ and 0 otherwise, where $P_{ij}^{kl}$ describes the relative saturation of either the parent or the child. If the resulting binary variables define an infeasible LP then the random binary variables can be re-realized, repeating as necessary until a feasible solution is found. As it would be reasonable to base the value of $y_{ij}^{kl}$ on the saturation of either the parent or the child, we define three basic types of randomized rounding algorithm: randomized rounding based on child flow (RRC) in which $P_{ij}^{kl} := x_{kl}^*/u_{kl}$, randomized rounding based on parent flow (RRP) in which $P_{ij}^{kl} := x_{ij}^*/u_{ij}$, and the control test fair randomized rounding (RRF) in which $P_{ij}^{kl} := 0.5$.

The rounding process can fail when $P_{ij}^{kl}$ is allowed to equal exactly 0 or 1, as shown in the examples in Appendix B. To prevent this we define modified versions of the RRC and RRP schemes with alternate definitions for $P_{ij}^{kl}$. Let RRC0 be the version of RRC described above, for which $P_{ij}^{kl}$ is set to exactly $x_{kl}^*/u_{kl}$. We define RRC1 in the same way, but with its value of $P_{ij}^{kl}$ restricted to the interval $[0.01, 0.99]$ (that is, equal to $\max\{\min\{x_{kl}^*/u_{kl}, 0.99\}, 0.01\}$) in order to avoid probabilities too close to 0 or 1. We define RRC5 analogously but restricted to the interval $[0.05, 0.95]$.

We used a modified version of NETGEN [14], a random MCNF problem generator, to create our test cases, adding a procedure to generate pairs of interdependent arcs alongside the standard MCNF problem parameters and using these to define instances of MILP-I, as well as a preliminary check to ensure that all problem instances were feasible. We tested two main types of problem: those in which parent arcs corresponded to demand nodes, and those in which parent arcs were distributed randomly throughout the network. We will refer to these problems as Type 1 and Type 2, respectively. For Type 1 problems, we applied a transformation to convert parent demand nodes into equivalent parent arcs, relaxing the demand values at these nodes as well as the supply values. The full source code for the computational trial generation can be viewed online [27].

For each problem type, we generated test networks on 256 and 512 nodes. For each test network the number of arcs was either 4 or 8 times the number of nodes. For Type 1 problems we randomly selected either 2%, 5%, 10%, or 15% of the demand nodes as parents, with their corresponding children being chosen uniformly at random from the arc set. For Type 2 problems we randomly selected either 1%, 2%, 5%, or 10% of the arcs within the network as parents and children. For each combination of parameters, 60 test cases were generated. In all cases, 20% of the nodes were sources and 20% were sinks (in Type 1 problems, this refers to the original number of sink nodes, before some were converted into transshipment nodes during the parent arc transformation). Each arc's cost was chosen uniformly at random from the interval [1, …].

3.3 Computational Results

In this section we summarize the results of the computational trials described above. Full data tables can be found in Appendix E.
Across all groups of trials the RRC and RRP schemes performed so similarly that only the RRC and RRF results will be discussed here.
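The clamped rounding probabilities used by the modified RRC schemes can be sketched as follows; the clamp bounds [0.01, 0.99] and [0.05, 0.95] are inferred from the scheme names RRC1 and RRC5, so treat them as assumptions.

```python
def rrc_probability(x_child, u_child, lo=0.0, hi=1.0):
    """Rounding probability based on child saturation x_child / u_child,
    clamped to [lo, hi].  lo=0, hi=1 gives the unclamped RRC0 scheme;
    the clamp bounds used below are assumed values."""
    return max(min(x_child / u_child, hi), lo)

# RRC0: raw saturation; RRC1 and RRC5: clamped away from 0 and 1.
p_rrc0 = rrc_probability(0.0, 10.0)                    # 0.0
p_rrc1 = rrc_probability(0.0, 10.0, lo=0.01, hi=0.99)  # 0.01
p_rrc5 = rrc_probability(9.9, 10.0, lo=0.05, hi=0.95)  # 0.95
```

Clamping guarantees every draw has a nonzero chance of flipping, which is what prevents the degenerate non-terminating cases mentioned above.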
LP Relaxation:
For the LP relaxation the primary value of interest is the relative error, which was extremely small across all trials. Among the Type 1 trial sets, all showed a mean error of less than 0.2% with a maximum of less than 1.7%. More than half of the Type 1 trial errors were exactly zero. Among the Type 2 trials, all mean errors were less than 1.7% with a maximum of less than 6.3% and a median of less than 0.2%.
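The error metric is not restated in this section; a minimal sketch, assuming "relative error" means the relative optimality gap against the exact MILP objective:

```python
def relative_error(z_approx, z_opt):
    """Relative optimality gap of an approximate objective value z_approx
    against the true optimal value z_opt (assumed nonzero)."""
    return abs(z_approx - z_opt) / abs(z_opt)

# A solution costing 101.7 against an optimum of 100 has about 1.7% error.
err = relative_error(101.7, 100.0)
```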
Randomized Rounding, Type 1 Problems:
For the randomized rounding schemes there are two values of interest: the relative error and the number of iterations required to reach a feasible solution. For Type 1 problems, all randomized rounding schemes except for one were able to find a feasible solution after a single iteration. The one exception occurred on a test network with 512 nodes, 4 arcs per node, and 10% of sinks acting as interdependencies, for which RRC0 failed to find a solution within 1000 iterations, although the remaining randomized rounding schemes all succeeded.
Figure 1: Relative error in randomized rounding solutions for Type 1 trials. Each point displays mean relative error versus interdependence density (fraction of sink nodes converted into parent arcs), calculated from the 60 Type 1 trials on networks with 512 nodes and 4 arcs per node, ignoring the single trial for which RRC0 failed to find a feasible solution.

Figure 1 shows the mean relative errors for the three RRC schemes and the single RRF scheme applied to Type 1 problem instances, although all other Type 1 test groups showed a similar trend. All schemes show a general trend of the mean error increasing as the interdependence density increases, with the largest errors occurring for the networks with the lowest arc density. The RRC0 scheme achieved the smallest relative error for most trial groups, followed very closely by the RRC1 scheme, with all relative errors being less than 6.5%. The RRC5 trials showed larger errors, but still all less than 9.4%. The RRF errors were by far the largest, with a maximum error of approximately 18.7%.
Randomized Rounding, Type 2 Problems:
Type 1 problems require relaxing some demand constraints while Type 2 problems do not, making it more difficult to obtain a feasible solution. Figure 2a shows the fraction of each type of trial in which each randomized rounding scheme failed to find a feasible solution within 1000 attempts. As expected, in most cases failure rates increase as the number of interdependencies increases, since this introduces additional side constraints and makes it more difficult to find a feasible solution. Other network sizes displayed less of a clear trend for small numbers of interdependencies, but in all cases the largest failure rates coincided with the largest number of interdependencies.

Figure 2: Randomized rounding schemes for Type 2 trials. (a) Randomized rounding scheme failure rates for Type 2 trials. (b) Randomized rounding relative errors for Type 2 trials, with statistics restricted only to successful trials. Statistics calculated from the 60 Type 2 trials on networks with 512 nodes and 4 arcs per node. The RRF series have been cut off after the first two iterations since their failure rates increase to 100% for the remaining values.

More important is the comparison between the failure rates of the different randomized rounding schemes. In all trial groups except for one (256 nodes, 4 arcs per node, and 2% of arcs interdependent) the RRF scheme showed by far the largest failure rate, followed by RRC0, with RRC1 and RRC5 showing by far the lowest failure rates. For the trial groups in which 10% of arcs were interdependent, all RRC failure rates fell below 15% while all RRF failure rates were above 95%. Among the RRC schemes, as seen in Figure 2a, RRC0 failed significantly more often than RRC1 and RRC5. The RRC0 failure rate generally increased with the number of interdependencies, while the RRC1 and RRC5 failure rates were all exactly zero except for one trial group (256 nodes, 4 arcs per node, and 10% of arcs interdependent), in which both failure rates were still below 5%.

Figure 2b shows the mean relative error of each randomized rounding scheme for Type 2 problems, restricted only to instances for which a feasible solution was obtained within 1000 attempts. The trend appears extremely similar to that of the Type 1 trials. All mean errors increase with the number of interdependencies. In all cases RRF produces by far the largest error, followed by RRC5, then RRC1, and finally RRC0. While RRC1 and RRC0 performed almost identically in the Type 1 trials there is a more pronounced difference in the Type 2 trials, for which RRC0 produces clearly smaller errors than RRC1.
Across all trial sets the largest mean error for the RRC5 solutions was approximately 16.8%, while for RRC1 it was 4.1% and for RRC0 it was 0.8%.
The computational results of the previous section show that the LP relaxation and the resulting randomized rounding solutions typically produce reasonable bounds for the MILP solution. The relative error of the LP relaxation had a mean of less than 1.7% for all trial groups. For Type 1 problems, with one exception the RRC0 scheme was able to find a feasible solution at a relative error of less than 6.5%, while RRC1 achieved nearly the same accuracy with a better success rate. For Type 2 problems the RRC0 scheme achieved the smallest relative errors and the RRC1 scheme achieved the smallest failure rate, but both performed well, especially for the smallest trials. These results illustrate the value in being able to quickly solve LP-I.

The remainder of the paper will be dedicated to developing and proving the correctness of a generalized version of network simplex capable of solving an arbitrary instance of the MCNFLI. This problem is technically a special case of the MCNF problem with side constraints studied in Mathies and Mevert 1998 [17], but we can exploit the structure of our side constraints to define a simpler solution algorithm. Similar algorithms have been studied in Calvete 2003 [6] for the general equal flow problem and in Bahçeci and Feyzioğlu 2012 [4] for the proportional flow problem.
4 Notation and Theoretical Results

In this section we present some of the notation and theoretical results that will be used throughout the rest of the paper. We prove some important properties of the matrix form that will be used in Section 5 to develop a modified network simplex algorithm, most importantly the characterization of a basis for the MCNFLI in Section 4.2.

Before moving on, it will be useful to express LP-I in matrix form. To this end we introduce a slack variable $s_{ij}^{kl}$ for each $(ij, kl) \in I$ and rewrite the linking constraint (4) as the equality

$$x_{kl} - \alpha_{ij}^{kl} x_{ij} + s_{ij}^{kl} = \beta_{ij}^{kl} \tag{5}$$

Let $m = |V|$, $n = |E|$, and $p = |I|$, and assume without loss of generality that $n > m > p$. Let $x \in \mathbb{R}^n$ be the vector of flow variables, $s \in \mathbb{R}^p$ be the vector of slack variables, $c \in \mathbb{R}^n$ be the vector of unit flow costs, $u \in \mathbb{R}^n$ be the vector of arc capacities, $b \in \mathbb{R}^m$ be the vector of supply values, $\beta \in \mathbb{R}^p$ be the vector of linear inequality constants, and $\hat{A} \in \mathbb{R}^{m \times n}$ be the node-arc incidence matrix of $G$ [5, p. 277]. Let $\hat{Q} \in \mathbb{R}^{p \times n}$ be a matrix with one column for each arc $ij \in E$ and one row for each interdependence $(ij, kl) \in I$. Within the row corresponding to interdependence $(ij, kl)$, element $ij$ is $-\alpha_{ij}^{kl}$, element $kl$ is 1, and all other elements are zero. Then the block matrix form of LP-I, which we will refer to as LP-II, is

$$\min \; c' x \tag{6}$$
$$\text{s.t.} \quad \begin{bmatrix} \hat{A} & 0 \\ \hat{Q} & I \end{bmatrix} \begin{bmatrix} x \\ s \end{bmatrix} = \begin{bmatrix} b \\ \beta \end{bmatrix} \tag{7}$$
$$0 \le x \le u \tag{8}$$
$$0 \le s \tag{9}$$

where $c'$ denotes the transpose of $c$, $I$ is a $p \times p$ identity matrix, and $0$ is a matrix or vector of zeros of the appropriate dimensions. Let $A \in \mathbb{R}^{(m+p) \times (n+p)}$ be the matrix on the left-hand side of (7), which constitutes the constraint matrix of LP-II.

For brevity we will use the terms "arc", "flow variable", and "column" interchangeably to refer to an arc, an arc's associated flow variable, and a flow variable's associated column of $A$.
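As a concrete illustration (the three-node network and the single interdependence below are invented for this sketch), the constraint matrix of (7) can be assembled in pure Python:

```python
def build_constraint_matrix(nodes, arcs, interdeps):
    """Assemble A = [[Ahat, 0], [Qhat, I]] from (7): Ahat is the node-arc
    incidence matrix, and each linking row of Qhat encodes
    x_kl - alpha * x_ij + s = beta for one interdependence."""
    m, n, p = len(nodes), len(arcs), len(interdeps)
    node_idx = {v: r for r, v in enumerate(nodes)}
    arc_idx = {a: c for c, a in enumerate(arcs)}
    A = [[0.0] * (n + p) for _ in range(m + p)]
    for (i, j), col in arc_idx.items():       # incidence block Ahat
        A[node_idx[i]][col] = 1.0             # arc leaves node i
        A[node_idx[j]][col] = -1.0            # arc enters node j
    for row, (parent, child, alpha) in enumerate(interdeps):
        A[m + row][arc_idx[parent]] = -alpha  # linking block Qhat
        A[m + row][arc_idx[child]] = 1.0
        A[m + row][n + row] = 1.0             # slack identity block
    return A

# Toy instance: nodes 1, 2, 3 with arcs (1,2), (2,3), (1,3); arc (2,3)
# is the child of parent (1,2) with an assumed alpha of 0.5.
A = build_constraint_matrix([1, 2, 3],
                            [(1, 2), (2, 3), (1, 3)],
                            [((1, 2), (2, 3), 0.5)])
```

Note that every column of the incidence block sums to zero, a fact used repeatedly in the rank arguments below.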
Similarly we will use the terms "node", "flow conservation constraint", and "row" interchangeably to refer to a node, a node's associated flow conservation constraint, and a constraint's associated row of $A$. During each iteration of simplex, let $\mathcal{B}$ denote the set of basic variables. For such $\mathcal{B}$, let $B$ be the submatrix of $A$ containing only basic columns. Let $L$ and $U$ be the sets of nonbasic variables at their lower or upper bound, respectively. Let a variable be called interdependent if it is involved in any interdependence as either a parent arc, child arc, or slack variable, and independent otherwise. For each interdependence $(ij, kl) \in I$, we say that flow variables $x_{ij}$ and $x_{kl}$ and slack variable $s_{ij}^{kl}$ are all linked to each other. We call arcs $ij$ and $kl$ partners of each other. For any interdependent arc $ij$, let $\hat{a}_{ij}$ be defined as the coefficient of variable $x_{ij}$ in its corresponding linking constraint (5), which is $-\alpha_{ij}^{kl}$ if $ij$ is a parent and 1 if it is a child.

4.2 Basis Characterization

A basis of LP-II consists of a set $\mathcal{B}$ of basic variables whose corresponding columns of $A$ are full-rank, plus a partition of all nonbasic variables into sets $L$ and $U$. We begin by observing the rank of $A$.

Observation 1.
The coefficient matrix $A$ of LP-II has rank $m + p - 1$.

The proof of this follows from the fact that, as an $m \times n$ incidence matrix, submatrix $\hat{A}$ has rank $m - 1$, the $p \times p$ identity matrix $I$ in the lower right has rank $p$, and the block structure of $A$ makes it clear that the last $p$ rows are linearly independent when taken together with any full-rank subset of the first $m$ rows. This tells us that our basis must always contain $m + p - 1$ variables. We assume that $G$ contains at least one spanning tree made up entirely of independent arcs. As in the MCNF, the subgraph of basic arcs must contain only a single component.

Observation 2.
Given any basis $\mathcal{B}$, the subgraph of $G$ containing only the arcs in $\mathcal{B}$ must be connected.

The proof of this relies on the fact that the basis matrix $B$ can be rearranged into a block form with one block for each component of the basic subgraph. Each block is an incidence matrix and thus rank-deficient by 1, meaning that adding all rows corresponding to one component results in a zero row. Since $B$ is rank-deficient by only 1, only one zero row should be possible, which implies that only one component is possible.

Unlike the standard MCNF it is possible for the MCNFLI basis to contain cycles of arcs; however, no such cycle can consist entirely of independent arcs.

Observation 3.
Given any basis $\mathcal{B}$, the subgraph of $G$ containing only the independent arcs in $\mathcal{B}$ must be free of cycles.

The proof of this is almost identical to that of the MCNF problem [5, p. 283]. Observations 2 and 3, combined, imply that the subgraph of basic independent arcs must form a spanning forest of $G$. Let a spanning forest with $k$ components be referred to as a $k$-spanning forest.

For the remainder of this paper we will let $r$ represent the number of basic interdependent variables, which could include a combination of interdependent arcs and slack variables. Note that it is always the case that $r \ge p$, or else Observation 3 would be violated. This allows us to define the structure of the basic independent arcs.

Lemma 1.
The basic independent arcs must form an $(r - p + 1)$-spanning forest of $G$.

This follows from Observations 2 and 3. Then the collection of basic variables consists of a spanning forest of independent arcs, plus possibly some interdependent arcs linking the components of the forest, plus possibly some slack variables. For the remainder of the paper we will use $F = \{T^1, \ldots, T^{r-p+1}\}$ to refer to the spanning $(r - p + 1)$-forest of $G$ formed by the basic independent arcs, where $T^h$, $h = 1, \ldots, r - p + 1$, are the subgraphs of $G$ that form the components of the forest. Define matrix $D \in \mathbb{R}^{r \times r}$ as

$$D := \begin{bmatrix} \delta_1^1 & \cdots & \delta_{r_a}^1 & \\ \vdots & \ddots & \vdots & 0 \\ \delta_1^{r-p} & \cdots & \delta_{r_a}^{r-p} & \\ & \bar{Q} & & \bar{I} \end{bmatrix} \tag{10}$$

where $r_a$ is the number of basic interdependent arcs, $r_s$ is the number of basic slack variables, $\bar{I} \in \mathbb{R}^{p \times r_s}$ and $\bar{Q} \in \mathbb{R}^{p \times r_a}$ are the submatrices of $I$ and $\hat{Q}$ containing only basic columns, and $\delta_{ij}^h$ is defined as

$$\delta_{ij}^h := \begin{cases} 1 & i \in T^h \text{ and } j \notin T^h \\ -1 & i \notin T^h \text{ and } j \in T^h \\ 0 & \text{otherwise} \end{cases} \quad \forall ij \in E, \; \forall h = 1, \ldots, r - p + 1 \tag{11}$$

This matrix proves to be the key for determining the rank of $B$.

Lemma 2.
The rank of the basis matrix $B$ is $m + p - 1$ if and only if matrix $D$ as defined in (10) has rank $r$.

Proof. We can reorder the columns and rows of any basis matrix $B$ to obtain the block structure

$$B = \begin{bmatrix} T^1 & & & B^1 \\ & \ddots & & \vdots \\ & & T^{r-p+1} & B^{r-p+1} \\ & & & \bar{Q} \;\; \bar{I} \end{bmatrix} \tag{12}$$

where $T^h$ is the incidence matrix of $T^h$ and $B^h$ is the submatrix of $B$ whose columns correspond to interdependent and slack variables and whose rows correspond to the nodes of $T^h$. For notational convenience, for the remainder of the proof we will refer to arcs by a single index rather than the indices of both of their endpoints. Let the basic interdependent arcs be labeled with indices $e = 1, \ldots, r_a$ corresponding to the order of the columns in each submatrix $B^h$ from (12).

Consider one of the submatrices $T^h$. As the incidence matrix of a tree, it has exactly one more row than it has columns and it is rank-deficient by exactly 1 [5, p. 280]. Summing all rows of $T^h$ yields the zero vector. Transform $B$ into an equivalent matrix by adding all rows corresponding to $T^h$ to the first row of $T^h$, for each tree $h$. After this addition, the element corresponding to the first row of $T^h$ in column $e$ will become $\sum_{k \in T^h} a_{ke}$, which is exactly $\delta_e^h$. The result is

$$B \sim \begin{bmatrix} 0 & & & \delta_1^1 \cdots \delta_{r_a}^1 \;\; 0 \\ \tilde{T}^1 & & & \tilde{B}^1 \\ & \ddots & & \vdots \\ 0 & & & \delta_1^{r-p+1} \cdots \delta_{r_a}^{r-p+1} \;\; 0 \\ & & \tilde{T}^{r-p+1} & \tilde{B}^{r-p+1} \\ & & & \bar{Q} \;\; \bar{I} \end{bmatrix} \tag{13}$$

where $\tilde{T}^h$ and $\tilde{B}^h$ are exactly $T^h$ and $B^h$, respectively, with their first row removed. All matrices $\tilde{T}^h$ are full-rank [5, p. 281], and since they occupy disjoint sets of columns, the rows associated with $\tilde{T}^h$ must all be linearly independent. Their total rank when taken together is $m - r + p - 1$ (one row for each of the $m$ nodes, minus one deleted row for each of the $r - p + 1$ trees). Then $B$ has rank $m + p - 1$ if and only if the remaining $r + 1$ rows of (13) are linearly independent when taken together with the rows of $\tilde{T}^h$, $h = 1, \ldots, r - p + 1$. Since the first row associated with each matrix $T^h$ is zero in (13) outside of the columns of the interdependent variables, we need only examine the remaining rows for columns 1 through $r$.
The submatrix defined by this subset is

$$\begin{bmatrix} \delta_1^1 & \cdots & \delta_{r_a}^1 & \\ \vdots & \ddots & \vdots & 0 \\ \delta_1^{r-p+1} & \cdots & \delta_{r_a}^{r-p+1} & \\ & \bar{Q} & & \bar{I} \end{bmatrix} \tag{14}$$

The dimensions of this matrix are $(r + 1) \times r$, since it has one column for each basic interdependent variable and one row for each of the $r - p + 1$ trees in the spanning forest, plus one for each of the $p$ interdependencies. Consider the effects of summing rows 1 through $r - p + 1$. In column $e$, from our earlier conclusion that $\delta_e^h = \sum_{k \in T^h} a_{ke}$, this is $\sum_{h=1}^{r-p+1} \delta_e^h = \sum_{h=1}^{r-p+1} \sum_{k \in T^h} a_{ke} = \sum_{k \in V} a_{ke}$, which is zero since all columns of an incidence matrix sum to zero. Because of this we may drop one row, say the one corresponding to tree $T^{r-p+1}$, without affecting the rank of this matrix. Doing so gives exactly the $r \times r$ matrix $D$ defined earlier in (10).

Adopting the terminology of [2, 4, 6], we refer to an $(r - p + 1)$-spanning forest of $G$ as a good $(r - p + 1)$-forest with respect to a set $S$ of $r$ non-independent variables if $D$ is full-rank. Then the preceding results allow us to finally characterize the basis of LP-II.

Theorem 1.
A basis of
LP-II consists of a set B of m + p − basic variables, containing a good ( r − p + 1) -spanning forest of G verified by a set S of r interdependent variables, plus an assignmentof all nonbasic variables to a category L or U , containing variables at their lower or upper bounds,respectively. The proof of Theorem 1 follows directly from Lemma 2 and the definition of a basis. It shouldalso be noted that the basis characterization presented in Theorem 1 can be modified to dependon only a submatrix of D , as shown in Appendix A. In this section we will explain how the network simplex algorithm can be modified to solve themore general problem of the MCNFLI. Details about the implementation of network simplex arewell known [1, pp. 402–421] and so this section will focus primarily on the elements of networksimplex that need to be modified. In Section 5.1 we describe the modified reduced cost calculationprocedure (which searches for candidates to enter the basis). In Section 5.2 we describe the modifiedchange of basis procedure (consisting of pivoting to bring a candidate into the basis while anothervariable leaves). We conclude in Section 5.3 with an analysis of of their computational complexity.See also Appendix D for an illustrative numerical example of the algorithm carried out on a testnetwork.
The vector of reduced costs c^π ∈ R^{n+p} is defined by (c^π)' := c' − π'A, where π ∈ R^{m+p} is the potential vector [5, p. 84]. There is one reduced cost c^π_ij for each flow variable x_ij and one (c^π)^kl_ij for each slack variable s^kl_ij. There is one potential π_i for each node i ∈ V and one π^kl_ij for each interdependence (ij, kl) ∈ I. Given our definitions for c^π and A, we have the relationships

$$c^\pi_{ij} = c_{ij} - (\pi_i - \pi_j) \qquad \text{if } ij \text{ is independent} \qquad (15)$$
$$c^\pi_{ij} = c_{ij} - (\pi_i - \pi_j - \alpha^{kl}_{ij}\pi^{kl}_{ij}) \qquad \text{if } (ij, kl) \in I \qquad (16)$$
$$c^\pi_{ij} = c_{ij} - (\pi_i - \pi_j + \pi^{ij}_{kl}) \qquad \text{if } (kl, ij) \in I \qquad (17)$$
$$(c^\pi)^{kl}_{ij} = -\pi^{kl}_{ij} \qquad \text{if } (ij, kl) \in I \qquad (18)$$

The reduced cost of a basic variable is zero, allowing us to use equations (15)–(18) to solve for the potentials by adopting a two-step strategy based on that of Calvete 2003 [6] and Bahçeci and Feyzioğlu 2012 [4]. First we make an initial guess π for the potential vector. To do this, within each tree T_h of F, arbitrarily select a root and arbitrarily assign it a potential. Assign a potential of zero to all interdependencies of basic slack variables and arbitrarily assign a potential to all remaining interdependencies. Then equations (15) can be used to solve for all remaining node potentials within each tree in order from root to leaf, since c^π_ij = 0 for all ij ∈ T_h.

Second, we test whether the guessed potentials were correct by checking whether they produce reduced costs of zero for all basic variables when substituted into (15)–(18). If so, then π is the correct potential vector and we can move on. If not, then we must calculate a "corrected potential" vector π̃ defined by

$$\tilde{\pi}_i := \pi_i + \sigma_h \qquad \forall i \in T_h, \ \forall h = 1, \dots, r - p \qquad (19)$$
$$\tilde{\pi}^{kl}_{ij} := \sigma^{kl}_{ij} \qquad \forall (ij, kl) \in I \qquad (20)$$

The vector σ of correction terms is defined as the solution of the linear system D'σ = c^π, where c^π is the reduced cost vector resulting from the initial guess π.

Proposition 1.
If σ is the solution to the system D'σ = c^π, then the vector of corrected potentials π̃ calculated by equations (19)–(20) will yield a reduced cost of 0 for all basic variables calculated by equations (15)–(18).

Proof. Let σ be the solution to D'σ = c^π. From the first part of the two-step strategy described above it is clear that π results in c^π_ij = 0 for all ij ∈ T_h, and since the same constant offset σ_h is applied to all node potentials in T_h, this remains the case after applying the correction terms. From the structure of D we find that the row of the system D'σ = c^π corresponding to interdependence (ij, kl) is simply the equation σ^kl_ij = (c^π)^kl_ij. For a basic slack variable s^kl_ij we have (c^π)^kl_ij = 0, in which case the corrected potential and thus the new reduced cost are also zero.

All that remains is to show that the corrected potentials also result in zero reduced cost for the basic interdependent arcs. For convenience define σ_{r−p+1} ≡ 0. For any basic parent arc ij with (ij, kl) ∈ I, where i ∈ T_h and j ∈ T_g, substituting the corrected potentials into the righthand side of (16) and simplifying gives c^π_ij − (σ_h − σ_g − α^kl_ij σ^kl_ij), which is zero if and only if σ_h − σ_g − α^kl_ij σ^kl_ij = c^π_ij. To show this, note that the row of D'σ corresponding to arc ij is Σ_{q=1}^{r−p} δ^q_ij σ_q − α^kl_ij σ^kl_ij. If h ≠ g, then δ^q_ij is 1 for q = h, −1 for q = g, and 0 for all other q, resulting in Σ_{q=1}^{r−p} δ^q_ij σ_q − α^kl_ij σ^kl_ij = σ_h − σ_g − α^kl_ij σ^kl_ij. The same result holds even if h = g, since in this case δ^h_ij = δ^g_ij = 0 and σ_h − σ_g = 0. In either case, this row of D'σ is set equal to c^π_ij, meaning that σ must satisfy σ_h − σ_g − α^kl_ij σ^kl_ij = c^π_ij. As noted above, this implies that π̃ produces c^π_ij = 0 when substituted into (16). A similar argument holds for the basic child arcs, thus the overall claim holds.

At the end of the two-step process the resulting corrected potential vector π̃ can be applied to equations (15)–(18) to compute the reduced cost of any nonbasic variable. As noted at the end of Section 4.2, there is a way to characterize the basis using only a submatrix of D. This reduced version of D also gives rise to a reduced version of the above algorithm, explained in Appendix A. In addition, note that the full version of the above algorithm only needs to be used during the initial iteration. Between consecutive iterations of simplex, within most components of F the previous potential vector will still satisfy (15) after the change of basis. As a result the previous potential vector π can be used to calculate the correction vector σ without the need to recalculate all node potentials within each component of F.

To explain, if a component T_h of F remains unchanged between iterations (meaning that no arc within T_h leaves the basis and no independent arc incident to a node in T_h enters the basis), or if T_h loses an arc and splits into two separate components, then all previous node potentials within T_h still satisfy (15). Only if an independent arc enters the basis, causing two components of F to merge, can one of these equations become violated.
In this event equations (15) can be satisfied by increasing all node potentials within one of the merged components by an appropriate constant. If independent arc ij, with endpoints in separate components T_h and T_g of F respectively, enters the basis, then either (π_i − c_ij) − π_j can be added to all node potentials in T_g, or (π_j + c_ij) − π_i can be added to all node potentials in T_h. All basic slack potentials can then be set to 0 as before, after which the correction vector can be calculated and used to calculate the corrected potential vector.

In any case, if all reduced costs of variables in L are nonnegative and all reduced costs of variables in U are nonpositive, then the current basic solution is optimal and the algorithm may terminate. Otherwise we choose some variable in L with a negative reduced cost, or some variable in U with a positive reduced cost, to enter the basis and conduct a change of basis.

After deciding on a variable to enter the basis we conduct pivoting and update the solution vector. Before describing this process it will be helpful to first discuss how to calculate the values of the basic variables given a particular basis.
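The correction step of the reduced cost procedure amounts to a single dense linear solve. The sketch below uses NumPy's generic solver in place of the Gaussian elimination mentioned later in the complexity analysis; the 3 × 3 matrix D and the reduced-cost vector are illustrative stand-ins, not values derived from a particular network.

```python
import numpy as np

# Hypothetical r x r matrix D and reduced costs c_pi obtained from an
# initial potential guess (illustrative values only).
D = np.array([
    [1.0, -1.0,  0.0],
    [0.0,  1.0, -0.5],
    [0.0,  0.0,  1.0],
])
c_pi = np.array([2.0, -1.0, 3.0])

# The correction terms sigma solve D' sigma = c_pi.
sigma = np.linalg.solve(D.T, c_pi)

# Applying the corrections as in (19)-(20) zeroes the basic reduced costs,
# which here amounts to D' sigma reproducing c_pi exactly.
assert np.allclose(D.T @ sigma, c_pi)
```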
Let (B, L, U) be the current basis. The variables in L and U are fixed at their lower or upper bound, respectively, and the variables in B must satisfy the transshipment constraints (2) and the linking constraints (5). For h = 1, …, r − p + 1, define E^−_h := {ij | x_ij ∈ U, i ∈ T_h, j ∉ T_h} and E^+_h := {ij | x_ij ∈ U, i ∉ T_h, j ∈ T_h}.

Adopting the terminology of Calvete 2003 [6], for each tree T_h of F we define a net requirement b(T_h) := Σ_{i∈T_h} b_i + Σ_{ij∈E^+_h} u_ij − Σ_{ij∈E^−_h} u_ij, which represents the net inflow to T_h through nonbasic arcs and supply values. Let x̃ ∈ R^r be the vector of basic interdependent flow variables and all slack variables, arranged in the same order as the columns of D. For interdependence (ij, kl) ∈ I corresponding to row t of D, define its net requirement b(t) as

$$b(t) := \begin{cases} \beta^{kl}_{ij} & \text{if } x_{ij}, x_{kl} \notin U \\ \beta^{kl}_{ij} + \alpha^{kl}_{ij} u_{ij} & \text{if } x_{ij} \in U,\ x_{kl} \notin U \\ \beta^{kl}_{ij} - u_{kl} & \text{if } x_{ij} \notin U,\ x_{kl} \in U \\ \beta^{kl}_{ij} + \alpha^{kl}_{ij} u_{ij} - u_{kl} & \text{if } x_{ij}, x_{kl} \in U \end{cases} \qquad (21)$$

Let b̃ ∈ R^r be a vector of the form [b(T_1) ⋯ b(T_{r−p}) b(1) ⋯ b(p)]', containing all tree net requirement values b(T_h) for h = 1, …, r − p followed by all interdependence net requirement values b(t) for t = 1, …, p. These net requirement values can be used to compute the values of the basic variables.

Proposition 2.
Given a basis (B, L, U), the values of the basic interdependent variables and the slack variables are given by the solution x̃ of the system D x̃ = b̃.

Proof. Suppose that x̃ satisfies D x̃ = b̃. Our goal is to show that the values in x̃ satisfy constraints (2) and (5). Let E_ind be the set of basic independent arcs, E_int the set of basic interdependent arcs, Ẽ_ind the set of nonbasic independent arcs at their upper bound, and Ẽ_int the set of nonbasic interdependent arcs at their upper bound. Define

$$\hat{b}_i := b_i - \sum_{j:\, ij \in \tilde{E}_{ind}} u_{ij} + \sum_{j:\, ji \in \tilde{E}_{ind}} u_{ji} - \sum_{j:\, ij \in \tilde{E}_{int}} u_{ij} + \sum_{j:\, ji \in \tilde{E}_{int}} u_{ji} \qquad \forall i \in V \qquad (22)$$

This is the outflow from node i required to satisfy (2) after all nonbasic arc flows are taken into account. The basic flow variables must carry this amount of net outflow from node i, giving

$$\sum_{j:\, ij \in E_{ind}} x_{ij} - \sum_{j:\, ji \in E_{ind}} x_{ji} + \sum_{j:\, ij \in E_{int}} x_{ij} - \sum_{j:\, ji \in E_{int}} x_{ji} = \hat{b}_i \qquad \forall i \in V \qquad (23)$$

For any tree T_h, consider the result of summing equations (23) over all i ∈ T_h. All basic independent arcs have either both or neither endpoint in T_h, thus for all ij ∈ E_ind with both endpoints in T_h the terms x_ij and −x_ij each appear in one equation, causing the two to cancel. The same cancellation occurs when summing equations (22) over all i ∈ T_h. Equating the two gives

$$\sum_{i \in T_h} \Big( \sum_{j:\, ij \in E_{int}} x_{ij} - \sum_{j:\, ji \in E_{int}} x_{ji} \Big) = \sum_{i \in T_h} \Big( b_i - \sum_{j:\, ij \in \tilde{E}_{int}} u_{ij} + \sum_{j:\, ji \in \tilde{E}_{int}} u_{ji} \Big) \qquad \forall h = 1, \dots, r - p \qquad (24)$$

Both sides of (24) represent the net outflow of an entire tree T_h after taking nonbasic flows into account. The only arcs capable of transporting a variable amount of flow between trees are the basic interdependent arcs, and so equations (24) define a necessary and sufficient condition for the basic interdependent flow variables to satisfy (2).
On the lefthand side, for any arc ij ∈ E_int, the term x_ij is present if ij exits T_h, the term −x_ij is present if ij enters T_h, and otherwise no term of the form x_ij is present. Then the lefthand side is equal to Σ_{ij∈E_int} δ^h_ij x_ij. Similarly, on the righthand side, for any arc ij ∈ Ẽ_int, the term u_ij is present if ij exits T_h, the term −u_ij is present if ij enters T_h, and otherwise no term of the form u_ij is present. Then the righthand side is equal to Σ_{i∈T_h} b_i − Σ_{ij∈E^−_h} u_ij + Σ_{ij∈E^+_h} u_ij, which is exactly the definition of b(T_h). Combining these two gives Σ_{ij∈E_int} δ^h_ij x_ij = b(T_h), which is row h of the system D x̃ = b̃.

All that remains is to show that the slack variables in x̃ satisfy constraints (5). Each linking constraint has the form x_kl − α^kl_ij x_ij + s^kl_ij = β^kl_ij. If either flow variable is nonbasic then its value is either zero or its upper bound (depending on whether it is in L or U), and placing the resulting constant value on the righthand side produces the net requirement b(t) as defined in (21). Then linking constraint t is exactly row t of the system D x̃ = b̃. Combining this with our earlier result we may conclude that x̃ consists of the values of the basic interdependent variables and slack variables.

After obtaining the values of all basic interdependent variables and slack variables, the values of the basic independent variables can be computed in the same way as in standard network simplex [1, pp. 413–415] within each tree. With this procedure in mind, we are finally ready to describe the change of basis procedure.

5.2.2 Change of Basis

We consider here only entering variables that begin at their lower bound, but an analogous procedure may be described for those at their upper bound. Pivoting consists of increasing the entering variable by θ units while some of the basic variables change in response in order to maintain feasibility.
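The computation in Proposition 2 can be sketched as a small dense solve: assemble D and the net requirement vector b̃, then recover the basic interdependent and slack values. All entries below are illustrative stand-ins rather than a worked example from the text.

```python
import numpy as np

# Hypothetical 3x3 system D x = b for a small basis: two tree rows of
# delta values plus one interdependence row of the form [-alpha, 1, 1]
# (from x_kl - alpha * x_ij + s = beta). Values are illustrative only.
D = np.array([
    [ 1.0, -1.0, 0.0],   # tree row for T_1
    [ 0.0,  1.0, 0.0],   # tree row for T_2
    [-0.5,  1.0, 1.0],   # interdependence row
])
b = np.array([4.0, 2.0, 1.0])  # net requirements b(T_1), b(T_2), b(1)

# Values of the basic interdependent flow and slack variables.
x = np.linalg.solve(D, b)

# Each row of D x = b reproduces its defining balance or linking equation.
assert np.allclose(D @ x, b)
```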
Assuming finite capacities, at some increase θ = θ* the entering variable or a basic variable will reach one of its bounds, becoming a blocking variable (with infinite capacities there may be no blocking variable, in which case the algorithm can terminate with an objective value of −∞). A standard pivoting rule [5, p. 111] can be used to select a blocking variable to leave the basis into L or U while the entering variable takes its place in B, at which point the structure of the basic forest F is updated.

We will use Δx_ij (and Δs^kl_ij) to refer to the increase in a given variable as a result of increasing the entering variable by θ units. Our main goal is to determine the vector of increases Δx. The process for doing so depends on whether changing the entering variable causes flow to be transferred between different components of F.

Case 1.
If increasing the entering variable causes no flow to be transferred between components of F, then pivoting can be conducted similarly to standard network simplex [5, p. 284]. This case can occur when the entering variable is an independent arc, or an interdependent arc with a basic slack variable, and both endpoints of the arc lie within the same tree, in which case θ units of flow are circulated around the unique cycle defined by the arc in the tree (and Δs^kl_ij = −â_ij θ for the basic linked slack variable, if it exists). It can also occur when the entering variable is interdependent with a basic linked arc whose endpoints both lie in the same component of F, in which case additional flow is circulated around the unique cycle defined by the entering arc's partner. The amount of flow circulated is −θ/â_ij if the entering variable is slack, α^kl_ij θ if it is the parent of a child kl, and θ/α^ij_kl if it is the child of a parent kl.

Case 2.
If increasing the entering variable does transfer flow between different components of F, we apply a two-step process: first determine the flow increases Δx̃ of the basic interdependent variables, and then determine the flow increases of the basic independent arcs independently within each component of F. For the first step, if the entering variable is an arc ij with i ∈ T_h, j ∈ T_g, and h ≠ g, then increasing its flow effectively changes the net requirements b(T_h) and b(T_g) by −θ and θ, respectively. If ij is interdependent, then the value b(t) associated with its interdependence also changes by −â_ij θ. These changes in net requirements can be accommodated by solving the following perturbed version of the system D x̃ = b̃:

$$D\tilde{x} = \big[\, b(T_1) \;\cdots\; b(T_h) - \theta \;\cdots\; b(T_g) + \theta \;\cdots\; b(T_{r-p}) \;\; b(1) \;\cdots\; b(t) - \hat{a}_{ij}\theta \;\cdots\; b(p) \,\big]' \qquad (25)$$

The instance of θ in row t of the above system may be ignored if ij is independent. If the entering variable is slack then we may ignore the instances of θ in rows T_h and T_g and replace â_ij with −1. In any case solving system (25) gives a vector x̃ describing the values of the basic interdependent variables after the entering variable has increased by θ units, from which Δx̃ can be calculated. These are then used in the second step to calculate net requirement increases for all i ∈ V, which can in turn be used to calculate the basic independent arc increases using the procedure described in Section 5.2.1 to process the arcs within each component of F.

In either case, upon completion we obtain a vector Δx of basic (and entering) variable increases. These can be used to calculate the maximum increase θ* as well as the blocking variables as in network simplex [5, p. 290]. We then decide on a leaving variable and update the basis structure (B, L, U), the solution vector x, the basic forest F, and the matrix D.

As a specialized form of the simplex algorithm, this modified network simplex algorithm goes through exactly the same number of basis changes as standard simplex: exponentially many in the worst case, but empirically only polynomially many [5, pp. 127–128]. For this reason a more useful point of comparison is the time complexity of a single iteration of each algorithm. Table 1 shows the time complexities and memory requirements of two implementations of standard simplex [5, p. 107] alongside our modified network simplex algorithm, as applied to LP-II.
                             Memory Requirement     Worst-Case Time
Simplex (Tableau)            O((m+p)(n+p))          O((m+p)(n+p))
Simplex (Revised)            O((m+p)^2)             O((m+p)(n+p))
Modified Network Simplex     O(m+n+p^2)             O(m+n+p^3)

Table 1: Memory requirements and worst-case time complexities (in number of elementary operations) for the full tableau implementation of simplex, revised simplex, and our modified network simplex algorithm, applied to the (m + p) × (n + p) system in LP-II.

The memory requirement for our algorithm is O(m + n + p^2), dominated by the O(m) node-level attributes, O(n) arc-level attributes, and O(p^2) elements of D. The time complexity is O(m + n + p^3), dominated by the O(m) operations to set node potentials and net requirements, the O(n) operations to set flow values, and the O(p^3) operations to solve the systems D'σ = c^π and D x̃ = b̃ by Gaussian elimination. For the applications of the MCNFLI described earlier in Section 3 it is reasonable to assume that a relatively small portion of the network's arcs are interdependent, in which case p ≪ n and the O(p^2) and O(p^3) terms become insignificant. If this occurs then both versions of simplex have memory and time requirements that are quadratic in m and n, while our specialized algorithm's requirements are only linear in m and n. As with standard network simplex, this allows it to become significantly more efficient than standard simplex for large networks.

The MCNFLI model introduced in Section 2 began as simply an LP relaxation of the binary input dependence model developed by Lee et al. [16]. From our discussion, we see that the MCNFLI has some interesting modeling applications, beyond its original purpose, to interdependent networks where the flows through certain arcs are bounded by piecewise linear concave functions of the flows through other arcs. In Section 3 we also see how it can be used to obtain approximate solutions and near-optimal feasible mixed integer solutions to the original MILP.
Due to its special basis structure, the modified network simplex algorithm described in Section 5 allows it to be solved much faster than with general LP solution techniques. If the number of interdependencies is sufficiently small in comparison to the size of the overall network, we can even achieve a running time that approaches that of network simplex, in spite of the fact that network simplex cannot be directly applied to this problem.

From the computational results presented in Section 3.2 it is clear that the linear relaxation LP-I of the binary interdependence model MILP-I typically produces extremely similar objective values while being significantly less computationally expensive to solve. These computational savings would become even more pronounced within a larger model that requires solving instances of MILP-I repeatedly during its solution process, for example in the disaster recovery models of Cavdaroglu et al. [7, 8, 19, 20, 30]. Of particular interest are applications for which only the objective value of MILP-I is significant within the overall model and not the flow vector itself, since in this case the objective of LP-I can be used as a direct substitute for that of MILP-I without requiring a randomized rounding algorithm to obtain a feasible solution. Rumpf 2020 [28] explored applications of the MCNFLI model to network interdiction games for planning interdependent network defenses against intelligent attacks, but further computational experiments in this area are needed.
Acknowledgments
The authors would like to thank Dr. Robert Ellis, Dr. Michael Pelsmajer, Dr. Lili Du, and Dr. Zongzhi Li for their valuable input, as well as Susanne Mathies for assistance regarding their previous research. The random problem instance generator mentioned in Section 3.2 is based on a NETGEN implementation programmed in C by Norbert Schlenker. Thank you also to Daniel Simmons, Dr. Qipeng Phil Zheng, and Dr. Leandro C. Coelho for their CPLEX guides, which proved invaluable in setting up our computational trials.
References

[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Englewood Cliffs, NJ, 1993.
[2] R. K. Ahuja, J. B. Orlin, G. M. Sechi, and P. Zuddas. Algorithms for the simple equal flow problem. Management Science, 45(10):1440–1455, 1999.
[3] Y. Almoghathawi, K. Barker, and L. A. Albert. Resilience-driven restoration model for interdependent infrastructure networks. Reliability Engineering and System Safety, 185:12–23, 2019.
[4] U. Bahçeci and O. Feyzioğlu. A network simplex based algorithm for the minimum cost proportional flow problem with disconnected subnetworks. Optimization Letters, 6:1173–1184, 2012.
[5] D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, Belmont, MA, 1997.
[6] H. I. Calvete. Network simplex algorithm for the general equal flow problem. European Journal of Operational Research, 150:585–600, 2003.
[7] B. Cavdaroglu, S. G. Nurre, J. E. Mitchell, T. C. Sharkey, and W. A. Wallace. Decomposition methods for restoring infrastructure systems. In B. M. Ayyub, editor, Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management, pages 171–179. American Society of Civil Engineers, 2011.
[8] B. Cavdaroglu, E. Hammel, J. E. Mitchell, T. C. Sharkey, and W. A. Wallace. Integrating restoration and scheduling decisions for disrupted interdependent infrastructure systems. Annals of Operations Research, 203:279–294, 2013.
[9] R. Z. Farahani, E. Miandoabchi, W. Y. Szeto, and H. Rashidi. A review of urban transportation network design problems. European Journal of Operational Research, 229:281–302, 2013.
[10] J. Gao, S. V. Buldyrev, H. E. Stanley, and S. Havlin. Networks formed from interdependent networks. Nature Physics, 8:40–48, 2012.
[11] S. Havlin, D. Y. Kenett, A. Bashan, J. Gao, and H. E. Stanley. Vulnerability of network of networks. European Physical Journal Special Topics, 223:2087–2106, 2014.
[12] R. Holden, D. V. Val, R. Burkhard, and S. Nodwell. A network flow model for interdependent infrastructures at the local scale. Safety Science, 53:51–60, 2013.
[13] R. Kinney, P. Crucitti, R. Albert, and V. Latora. Modeling cascading failures in the North American power grid. European Physical Journal B, 46:101–106, 2005.
[14] D. Klingman, A. Napier, and J. Stutz. NETGEN: A program for generating large scale capacitated assignment, transportation, and minimum cost flow network problems. Management Science, 20(5):814–821, 1974.
[15] C. Y. Lam and K. Tai. Modeling infrastructure interdependencies by integrating network and fuzzy set theory. International Journal of Critical Infrastructure Protection, 22:51–61, 2018.
[16] E. E. Lee, II, J. E. Mitchell, and W. A. Wallace. Restoration of services in interdependent infrastructure systems: A network flows approach. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(6):1303–1317, 2007.
[17] S. Mathies and P. Mevert. A hybrid algorithm for solving network flow problems with side constraints. Computers & Operations Research, 25(9):745–756, 1998.
[18] D. T. Nguyen, Y. Shen, and M. T. Thai. Detecting critical nodes in interdependent power networks for vulnerability assessment. IEEE Transactions on Smart Grid, 4(1):151–159, 2013.
[19] S. G. Nurre and T. C. Sharkey. Integrated network design and scheduling problems with parallel identical machines: Complexity results and dispatching rules. Networks, 63:306–326, 2014.
[20] S. G. Nurre, B. Cavdaroglu, J. E. Mitchell, T. C. Sharkey, and W. A. Wallace. Restoring infrastructure systems: An integrated network design and scheduling (INDS) problem. European Journal of Operational Research, 223:794–806, 2012.
[21] M. Ouyang. Review on modeling and simulation of interdependent critical infrastructure systems. Reliability Engineering and System Safety, 121:43–60, 2014.
[22] M. Ouyang. A mathematical framework to optimize resilience of interdependent critical infrastructure systems under spatial localized attacks. European Journal of Operational Research, 262(3):1072–1084, 2017.
[23] M. Ouyang and L. Dueñas-Osorio. An approach to design interface topologies across interdependent urban infrastructure systems. Reliability Engineering and System Safety, 96:1462–1473, 2011.
[24] M. Ouyang and Z. Wang. Resilience assessment of interdependent infrastructure systems: With a focus on joint restoration modeling and analysis. Reliability Engineering and System Safety, 141:74–82, 2015.
[25] M. Ouyang, L. Hong, Z.-J. Mao, M.-H. Yu, and F. Qi. A methodological approach to analyze vulnerability of interdependent infrastructures. Simulation Modelling Practice and Theory, 17:817–828, 2009.
[26] M. Parandehgheibi and E. Modiano. Robustness of interdependent networks: The case of communication networks and the power grid. In IEEE Global Communications Conference, GLOBECOM, pages 2164–2169, Atlanta, GA, December 2013.
[27] A. Rumpf. MCNFLI computational trials. GitHub repository, URL https://github.com/adam-rumpf/mcnfli-trials, 2019.
[28] A. Rumpf. Mathematics of Civil Infrastructure Network Optimization. PhD thesis, Illinois Institute of Technology, 2020.
[29] A. Sen, A. Mazumder, J. Banerjee, A. Das, and R. Compton. Identification of K most vulnerable nodes in multi-layered network using a new model of interdependency. In Proceedings – IEEE INFOCOM, pages 831–836. Institute of Electrical and Electronics Engineers Inc., 2014.
[30] T. C. Sharkey, B. Cavdaroglu, H. Nguyen, J. Holman, J. E. Mitchell, and W. A. Wallace. Interdependent network restoration: Modeling restoration interdependencies and evaluating the value of information-sharing. European Journal of Operational Research, 244(1):309–321, 2015.
[31] J. Zhong, F.-M. Zhang, S. Yang, and D. Li. Restoration of interdependent network against cascading overload failure. Physica A: Statistical Mechanics and its Applications, 514:884–891, 2019.
A Simplified Basis Characterization
In this appendix we describe an alternative way to characterize the basis that does not require storing the entire matrix D as defined in (10). For the purposes of this explanation we will call a basic interdependent arc loose if its linked slack variable is basic, and tight otherwise. Suppose that the columns are arranged so that all tight arcs are listed before all loose arcs, and that the rows are arranged so that all interdependencies with nonbasic slack variables are listed above all interdependencies with basic slack variables. Then Q has the block structure

$$Q = \begin{bmatrix} Q_t & \\ & Q_l \end{bmatrix} \qquad (26)$$

where Q_t contains only the columns and rows corresponding to tight arcs, and Q_l contains only the columns and rows corresponding to loose arcs. Similarly, each submatrix B^h has the block structure B^h = [B^h_t  B^h_l]. Then (12) becomes

$$B = \begin{bmatrix} T^1 & & & B^1_t & B^1_l & \\ & \ddots & & \vdots & \vdots & \\ & & T^{r-p+1} & B^{r-p+1}_t & B^{r-p+1}_l & \\ & & & Q_t & & \\ & & & & Q_l & I \\ & & & & & I \end{bmatrix} \qquad (27)$$

where the two I blocks represent identity matrices of the appropriate dimensions. Starting from this block structure and going through the same steps as in the proof of Lemma 2 results in the following block structure for D:

$$D = \begin{bmatrix} \delta^1_1 & \cdots & \delta^1_{r_t} & \delta^1_{r_t+1} & \cdots & \delta^1_{r_t+r_l} & \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \\ \delta^{r-p}_1 & \cdots & \delta^{r-p}_{r_t} & \delta^{r-p}_{r_t+1} & \cdots & \delta^{r-p}_{r_t+r_l} & \\ & Q_t & & & & & \\ & & & Q_l & & & I \\ & & & & & & I \end{bmatrix} \qquad (28)$$

Here, r_t is the number of tight arcs and r_l is the number of loose arcs. This matrix D ∈ R^{r×r} is identical to the version shown in (10), simply arranged into a different block structure. By Lemma 2, B has rank m + p − 1 if and only if D is full-rank. Consider the submatrix D̂ of D defined by

$$\hat{D} := \begin{bmatrix} \delta^1_1 & \cdots & \delta^1_{r_t} & \delta^1_{r_t+1} & \cdots & \delta^1_{r_t+r_l} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \delta^{r-p}_1 & \cdots & \delta^{r-p}_{r_t} & \delta^{r-p}_{r_t+1} & \cdots & \delta^{r-p}_{r_t+r_l} \\ & Q_t & & & & \end{bmatrix} \qquad (29)$$

This matrix D̂ ∈ R^{(r_t+r_l)×(r_t+r_l)} is created by dropping from D all columns corresponding to basic slack variables and all rows corresponding to interdependencies whose associated slack variable is basic.
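The rank equivalence that this construction relies on (Observation 4: dropping the basic-slack columns and their matching rows does not change whether the matrix is full-rank) can be checked numerically on a toy matrix. The 3 × 3 example below is hypothetical: its third column plays the role of a basic slack column, with its single nonzero entry in the third row.

```python
import numpy as np

# Hypothetical D whose third column mimics a basic slack column: all zeros
# except a single 1 in the row of its interdependence.
D = np.array([
    [ 2.0, 1.0, 0.0],
    [-1.0, 3.0, 0.0],
    [ 0.5, 0.0, 1.0],   # row of the interdependence whose slack is basic
])

# D_hat drops that row and column, as in (29).
D_hat = D[:2, :2]

# The dropped row holds the only nonzero of the dropped column, so it is
# automatically independent of the rest: D is full-rank iff D_hat is.
full_rank_D = np.linalg.matrix_rank(D) == D.shape[0]
full_rank_D_hat = np.linalg.matrix_rank(D_hat) == D_hat.shape[0]
assert full_rank_D == full_rank_D_hat
```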
Note that it is possible to have r_t = r_l = 0, in which case D̂ is 0 × 0 (this occurs when all p slack variables are basic). For convenience we consider a 0 × 0 matrix to be full-rank.

Observation 4.
Matrix D as defined in (10) is full-rank if and only if D̂ as defined in (29) is full-rank.

The proof of this relies on the fact that the rows dropped from D to obtain D̂ each contain the only nonzero entry in one of the columns that is also dropped. Thus the dropped rows are always linearly independent when taken together with the remaining rows, and their content does not affect the rank of D. Then we have the following corollary:

Corollary 1.
The rank of matrix B is m + p − 1 if and only if matrix D̂ as defined in (29) is full-rank.

This follows immediately from Lemma 2 and Observation 4. The advantage of maintaining matrix D̂ instead of D is that D̂ is smaller (its dimensions are between 0 × 0 and 2p × 2p, in contrast to the dimensions of D, which are between p × p and 3p × 3p), and it can thus be used to reduce the sizes of the linear systems that need to be solved during the modified network simplex algorithm presented in Section 5. This requires some minor modifications to the procedures defined above.

Within the reduced cost calculation algorithm described in Section 5.1, the system D'σ = c^π is solved to obtain a potential correction term for each component of F and each slack variable. Let σ̂ and ĉ^π be the subvectors of σ and c^π, respectively, which exclude the elements corresponding to basic slack variables. The proof of Proposition 1 shows that the system D'σ = c^π results in σ^kl_ij = 0 for all basic slack variables s^kl_ij, and so it suffices to solve the system D̂'σ̂ = ĉ^π in place of D'σ = c^π and then set σ^kl_ij ≡ 0 for each basic slack variable.

Similarly, within the change of basis procedure the system D x̃ = b̃ is solved to determine the values of the basic interdependent variables. Let x̂ be the subvector of x̃ which excludes elements corresponding to basic slack variables, and let b̂ be the subvector of b̃ which excludes elements corresponding to the interdependencies of loose arcs. Then the system D̂ x̂ = b̂ can be solved to obtain the flows x̂ of all basic interdependent arcs, which along with the definitions of L and U gives the flows of all interdependent arcs. These flows, along with the linking equations (5), give the values of all basic slack variables. Because x̂ excludes basic slack variables, additional steps are required to calculate the entries of Δx̃ corresponding to slack variables. If the incoming variable has a basic linked slack variable, say s^kl_ij, then Δs^kl_ij = â_ij θ.
If s^kl_ij is basic and both linked arcs are also basic, then Δs^kl_ij = α^kl_ij Δx_ij − Δx_kl (if ij is nonbasic then we can set Δx_ij ≡ 0, and likewise if kl is nonbasic we can set Δx_kl ≡ 0). Using D̂ in place of D does not change the worst-case time complexity of the modified network simplex algorithm, which remains O(m + n + p^3) since in the worst case D̂ still has dimensions of O(p). However in practice D̂ is much smaller than D, resulting in a practical reduction in computational time as well as improving the best-case time complexity from O(n + m + p^3) to O(n + m), which occurs when D̂ is 0 × 0.

B Examples of Randomized Rounding Failure
The RRC0 and RRP0 randomized rounding schemes described in Section 3.1 carry the potential of failing to terminate for certain problem instances. This can occur if P^kl_ij takes a value of either 0 or 1, in which case the scheme will choose the same value for y^kl_ij in every iteration, even if that decision can never lead to a feasible MILP solution. Figures 3a and 3b show two different ways in which this can occur.

Figure 3a shows an example in which RRC0 and RRP0 both attempt in each iteration to force the saturation of an arc which is unusable in any feasible MILP solution. In this network the optimal LP solution saturates both (7,8) and (9,10), resulting in both values of P^kl_ij equal to 1. However, child arcs (5,7) and (9,10) cannot be used in any feasible MILP solution, as their parent arcs (2,3) and (7,8) cannot be saturated due to bottlenecks. As a result, RRC0 and RRP0 both create an infeasible MILP in each iteration.

(a) Failure due to the LP solution saturating arcs that no feasible MILP solution can use. Interdependencies: x(5,7) ≤ x(2,3), x(9,10) ≤ x(7,8). (b) Failure due to the LP solution excluding arcs required by every feasible MILP solution. Interdependencies: x(4,·) ≤ x(2,·), x(8,·) ≤ x(6,·).

Figure 3: Example networks for which RRC0 and RRP0 both fail in every iteration. The number next to each node indicates the supply value at that node, while the ordered pair next to each arc indicates its cost and capacity, respectively. Unlisted supply values are zero, while unlisted cost/capacity pairs are (0, ∞). Dashed and dotted arcs indicate interdependent pairs, with the bolder arc indicating the parent and the thinner arc indicating the child in each pair. Interdependencies are listed under each network.
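The nontermination mechanism can be sketched directly (a toy illustration, not the paper's code; `round_once` and the probabilities are hypothetical):

```python
import random

def round_once(P, rng):
    """One randomized rounding pass: fix each binary y[k] to 1 with probability P[k]."""
    return [1 if rng.random() < p else 0 for p in P]

# If the LP solution yields degenerate probabilities P = 1 (or P = 0) for
# interdependencies whose forced choice is MILP-infeasible, every rounding
# pass makes the identical choice, so re-rounding never escapes it.
rng = random.Random(0)
P = [1.0, 1.0]
draws = {tuple(round_once(P, rng)) for _ in range(1000)}
# draws contains only the single outcome (1, 1): no amount of re-rounding
# explores any other assignment of the binary variables.
```

The same argument applies symmetrically when P = 0, in which case every pass fixes y = 0.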
Similarly, Figure 3b shows an example in which RRC0 and RRP0 both attempt in each iteration to disallow the use of an arc which is required in any feasible MILP solution. In this network the optimal LP solution sends no flow through either (6,7) or (8,·), resulting in both values of P^kl_ij equal to 0. However, in this case all feasible MILP solutions require the use of both (6,7) and (8,·), and so RRC0 and RRP0 again produce an infeasible MILP in each iteration.

C Examples of Arbitrarily Bad Theoretical Bounds
As noted in Section 3.1, the LP relaxation of MILP-I provides a lower bound for its optimal value while any randomized rounding solution provides an upper bound, but both of these bounds may be arbitrarily far away from the true value. Figures 4a and 4b show examples for which each of these is the case.

(a) Example for which the LP relaxation of MILP-I produces an arbitrarily bad lower bound for the optimal cost. (b) Example for which the randomized rounding solutions of MILP-I can produce arbitrarily bad upper bounds for the optimal cost.

Figure 4: Example networks showing that the LP relaxation of MILP-I and the randomized rounding solutions can produce arbitrarily bad lower and upper bounds, respectively, for the optimal cost. The labeling system is the same as used in Figure 3. The arc cost M is an arbitrary constant.

In the network shown in Figure 4a, the optimal LP solution sends 1 unit of flow through node 2 to half-saturate the parent arc (2,·), while any feasible MILP solution must instead route its flow into node 4 along an arc of cost M. Since M can be arbitrarily large, the LP objective can be arbitrarily far below the MILP objective.
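Anticipating the Figure 4b example below, the upper-bound failure can also be quantified in expectation; this toy calculation assumes only the costs 4 and 4M and the 0.5 rounding probability given in that example:

```python
def expected_rounding_cost(M):
    """Expected MILP objective after one rounding pass on a Figure 4b-style
    instance: with probability 0.5 the child arc is disallowed and the only
    remaining feasible solution costs 4*M; otherwise the cost-4 solution
    through node 2 survives."""
    return 0.5 * 4 * M + 0.5 * 4

# The MILP optimum is 4, so the expected rounding objective 2*M + 2
# exceeds it by an amount that grows linearly in M.
```

For instance, `expected_rounding_cost(100)` is 202.0 against an optimal cost of 4, so the expected gap is unbounded in M.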
Similarly, in the network shown in Figure 4b the optimal LP solution splits the flow equally between the path through node 2 and the path through node 3 by half-saturating the parent/child pair (2,4) and (3,4). For M > 1, the optimal MILP solution sends all flow through node 2 at a total cost of 4. Because both the parent and the child are half-saturated in the LP solution, any randomized rounding scheme has a probability of 0.5 of either forcing the use of (2,4) or disallowing the use of (3,4). If (3,4) is disallowed, then the only remaining feasible MILP solution sends all flow through (1,4) at a cost of 4M, and so the randomized rounding objective can be arbitrarily far above the MILP objective.

D Numerical Example
In order to illustrate how our proposed solution algorithm works in practice, we will conduct a few iterations on the example network shown in Figure 5. This network G is based on the example network from Calvete 2003 [6]. It has m = 11 nodes, n = 24 arcs, and p = 4 interdependencies, meaning that its basis must contain m + p − 1 = 14 basic variables.

Figure 5: Example network G on which to conduct the modified network simplex algorithm. All arc capacities are 15. Supply values are listed next to each node (no value listed means zero). Arc costs are listed below the network. Interdependent arcs are highlighted, with the specific interdependencies (including x(11,·) ≤ x(4,·) + 1 and x(5,·) ≤ x(3,·) + 2) listed below the network.

Initial Basis
The initial basis contains the arcs shown in Figure 6a as well as the slack variables s(11,·), s(5,·), and s(5,·). Most nonbasic variables are in L and currently have a value of zero, while x(2,·) and x(5,·) are in U, putting them at their upper bound of 15. The initial values of the basic variables are

x(1,·) = 6, x(1,·) = 8, x(2,·) = 6, x(3,·) = ·, x(3,5) = ·, x(4,·) = 4, x(5,·) = ·, x(6,·) = ·, x(7,·) = 0, x(7,·) = 0, x(8,·) = 1, s(11,·) = 3, s(5,·) = 2, s(5,·) = 2.

Figure 6: Basis structure of the example network after each iteration ((a) initial basis, (b) iteration 1, (c) iteration 2). Only the basic arcs are shown. Solid lines represent basic independent arcs, while dashed lines represent basic interdependent arcs. All bases contain the slack variables s(11,·), s(5,·), and s(5,·).

The basic forest F contains four components T1, T2, T3, and T4, containing 2, 6, 1, and 2 nodes, respectively. All components combined contain the 7 basic independent arcs, while the remaining 7 basic variables are interdependent: x(4,·), x(3,·), x(1,·), x(5,·), s(11,·), s(5,·), and s(5,·). This means that r = 7, and thus D is 7 × 7. Arranging the basic interdependent variables and trees in this order gives the 7 × 7 matrix D described in (10).

Iteration 1
To calculate potentials using the method described in Section 5.1, we begin by tentatively assigning a potential of zero to each interdependence. For the node potentials we select an arbitrary root for each component of the basic forest and assign it a potential of zero, in this case selecting 1 as the root of T1, 2 as the root of T2, 5 as the root of T3, and 8 as the root of T4. We then use equations (15) to calculate the remaining node potentials from root to leaf, obtaining a complete potential vector π. Using these values in equations (16)–(18) for the basic interdependent variables yields reduced costs c^π, several of which are nonzero (for example c^π(4,·) = 13 and c^π(1,·) = 5), and so we must calculate a correction term for each forest component and interdependence. Solving D′σ = c^π gives a correction vector σ, and applying it to the initial potentials π using (19)–(20) gives a corrected potential vector π̃.
Equipped with the potential values, we can use equations (15)–(18) to calculate the reduced cost of any nonbasic variable. In this case we find that c^π(1,5) = c(1,5) − (π̃1 − π̃5) is negative. Since x(1,5) ∈ L and it has a negative reduced cost, it is a candidate to enter the basis, and so we proceed with a change of basis.

Arc (1,5) is an independent arc and bridges the gap between two components of the basic forest, making this a Case 2 change of basis. We first determine the net requirement values b̃ as defined in Section 5.2.1. For each component of F we find the total supply value and the inflow from arcs in U, giving the component requirement values b(T) (for instance b(T1) = 10). Interdependencies t = 1, 2, 3 have no linked arcs in U, and so by (21) we have b(1) = β(11,·) = 1 and b(2) = β(5,·) = 2, with b(3) computed similarly. Interdependent arc (5,9) is in U, so b(4) = β(5,·) + α(5,·)u(5,·).

We use these values to form the perturbed system Dx̃ = b̃, adding θ to the requirement of the component that the incoming arc (1,5) leaves and −θ to the requirement of the component it enters. Solving the perturbed system and subtracting the initial basic feasible solution gives the change vector Δx̃.

Next we must determine which basic independent variables change. We may ignore the single-node component because it contains no arcs, and one other component because it is not incident to any of the affected arcs. Within the two remaining components, we must calculate the changes in the modified net requirements b̊(i) of each affected node i as referenced in Section 5.2.1. We need only calculate the modified requirement values of nodes 1, 2, 3, and 5, since these are the only nodes incident to affected arcs. This gives Δb̊(1) = 0, Δb̊(2) = θ, Δb̊(3) = −θ, and Δb̊(5) = 0, so the only nonzero changes occur at nodes 2 and 3. Finally, we can scan through each tree from leaf to root, calculating the change in each independent flow variable as described in Section 5.2.1. There are only two nonzero changes: Δx(2,·) = −θ and Δx(3,·) = θ.

Having determined the full change vector Δx, we use the method outlined in Section 5.2.2 to calculate θ* and the blocking variables. The first variable to reach a bound as θ increases is x(3,5), which reaches zero and thereby determines θ*.
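The θ* computation is a standard minimum-ratio test over the variables whose flows change with θ; a minimal generic sketch (illustrative names, not the paper's implementation):

```python
def ratio_test(changes, values, lowers, uppers):
    """Largest step theta before some changing variable hits a bound.

    changes[i] is the coefficient of theta in variable i's flow change: a
    variable moving down blocks at its lower bound, and a variable moving
    up blocks at its upper bound. Returns (theta_star, blocking index).
    """
    theta_star, blocking = float("inf"), None
    for i, d in enumerate(changes):
        if d < 0:
            limit = (values[i] - lowers[i]) / -d   # distance to lower bound
        elif d > 0:
            limit = (uppers[i] - values[i]) / d    # distance to upper bound
        else:
            continue                               # variable does not move
        if limit < theta_star:
            theta_star, blocking = limit, i
    return theta_star, blocking
```

For example, with changes (−θ, +θ), current values (6, 2), and bounds [0, 15], the first variable blocks at θ* = 6, so `ratio_test([-1, 1], [6, 2], [0, 0], [15, 15])` returns `(6.0, 0)`.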
The change of basis then consists of (1,5) entering the basis from L, (3,5) leaving the basis into L, and a change increment of θ*. The new basis is shown in Figure 6b, and the values of the basic variables are

x(1,·) = ·, x(1,·) = 8, x(2,·) = ·, x(1,5) = ·, x(3,·) = 6, x(4,·) = 4, x(5,·) = ·, x(6,·) = ·, x(7,·) = 0, x(7,·) = 0, x(8,·) = 1, s(11,·) = 3, s(5,·) = 2, s(5,·) = 2.

Iteration 2
Due to the previous basis change, two components of the basic forest merge into a single component. We relabel the new components T1, T2, and T3, containing 3, 6, and 2 nodes, respectively. Under this naming scheme, and keeping the interdependencies in their previous order, the matrix D is updated accordingly. Going through the same steps as in the previous iteration yields a corrected potential vector π̃. From this we can calculate c^π(10,·) = −2, and since x(10,·) ∈ L, it is a candidate to enter the basis. Arc (10,·) bridges two components of the basic forest, which leads to another instance of change of basis Case 2. The perturbed net requirement vector b̃ is formed as before, and solving Dx̃ = b̃ gives the perturbed values x̃. The relevant supply value changes are Δb̊(4) = −θ, Δb̊(5) = θ, Δb̊(8) = θ, Δb̊(9) = −θ, and Δb̊(10) = 0. The only nonzero changes occur for nodes in two of the components, and so only arcs in those trees need be considered. Calculating the changes in their arcs as before, we find three nonzero values: Δx(1,·) = −θ, Δx(1,·) = θ, and Δx(8,·) = −θ.

A total of eight variables change as θ increases. The first to reach a bound is x(8,·), which becomes zero when θ* = 1. Then x(10,·) moves from L into B, x(8,·) moves from B into L, and the remaining variables are adjusted by substituting θ = 1 into their change terms. The new basis is shown in Figure 6c, and the values of the basic variables are now

x(1,·) = ·, x(1,·) = 7, x(2,·) = ·, x(1,5) = ·, x(3,·) = 6, x(4,·) = 3, x(5,·) = ·, x(6,·) = ·, x(7,·) = 0, x(7,·) = 0, x(10,·) = 1, s(11,·) = ·, s(5,·) = 2, s(5,·) = 1.

Iteration 3
If the new components are labeled as T1, T2, and T3, containing 3, 7, and 1 nodes, respectively, then D remains exactly the same as in the previous iteration, since all basic independent arcs still bridge the same tree indices. Applying the same potential calculation technique gives the corrected potential vector π̃. Using these potentials to calculate reduced costs, we find that no variable in L has a negative reduced cost and no variable in U has a positive reduced cost. This implies that the current solution is optimal. We may terminate the algorithm and output the current solution vector x, which has a total cost of 189.

E Computational Trial Data Tables
This section contains the full data tables for all trials described in Section 3.2. Tables 2 and 3 show the results of the LP relaxation trials for Type 1 and Type 2 problems, respectively. Within each table, columns correspond to different percentages of sink nodes (for Type 1) or arcs (for Type 2) acting as parents, while rows correspond to different network densities. Each table entry gives the mean and standard deviation of the relative error for all successful trials of the given network type, as well as the number (n) of trials on which the statistics are based. All tables throughout this section are organized in the same format.

Tables 4–7 show the results of the RRC0, RRC1, RRC5, and RRF trials for Type 1 problems, respectively, and correspond to Figure 1. All results displayed are based on the full set of 60 trials, except for the single failed case of RRC0 for 512 nodes and 10% of sinks interdependent. Tables 8–11 show the failure rates of the RRC0, RRC1, RRC5, and RRF trials for Type 2 problems, respectively, in terms of the percentage of the 60 trials in each category that failed to reach a feasible solution within 1000 attempts. Tables 12–15 show the relative errors for the RRC0, RRC1, RRC5, and RRF trials for Type 2 problems, respectively, limited to only the successful trials. Tables 8–11 correspond to Figure 2a, while Tables 12–15 correspond to Figure 2b.

Type 1, LP
Nodes  Arcs/Node              2%        5%        10%       15%
256    4         Mean         0.03218%  0.09867%  0.08198%  0.06282%
                 Std. Dev.    0.09196%  0.29471%  0.20290%  0.12985%
                 n            60        60        60        60
       8         Mean         0.02036%  0.03139%  0.06379%  0.05054%
                 Std. Dev.    0.06776%  0.13905%  0.24593%  0.14242%
                 n            60        60        60        60
512    4         Mean         0.01396%  0.06195%  0.07669%  0.13231%
                 Std. Dev.    0.03793%  0.11006%  0.10339%  0.18200%
                 n            60        60        60        60
       8         Mean         0.00692%  0.00984%  0.03386%  0.02207%
                 Std. Dev.    0.02401%  0.03239%  0.06100%  0.06443%
                 n            60        60        60        60

Table 2: Relative error for LP relaxation in Type 1 trials. Columns indicate percentage of sink nodes acting as parents.
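Each cell above (and in the tables that follow) is an ordinary summary over the successful trials; a minimal sketch of the computation (illustrative only; whether the population or sample standard deviation was used is not stated, so the population form is assumed here, with failed trials recorded as None):

```python
import math

def summarize(relative_errors):
    """Mean, (population) standard deviation, and count of the successful
    trials in one table cell; failed trials appear as None and are excluded."""
    ok = [e for e in relative_errors if e is not None]
    n = len(ok)
    mean = sum(ok) / n
    var = sum((e - mean) ** 2 for e in ok) / n
    return mean, math.sqrt(var), n
```

For example, `summarize([2.0, 4.0, None])` excludes the failed trial and returns `(3.0, 1.0, 2)`.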
Type 2, LP
Nodes  Arcs/Node              1%        2%        5%        10%
256    4         Mean         0.18053%  0.33397%  0.29048%  1.60084%
                 Std. Dev.    0.37410%  0.46781%  0.33056%  1.48633%
                 n            60        60        60        60
       8         Mean         0.09824%  0.13669%  0.25430%  0.71868%
                 Std. Dev.    0.24346%  0.31658%  0.43304%  1.06470%
                 n            60        60        60        60
512    4         Mean         0.09306%  0.28175%  0.38826%  1.43813%
                 Std. Dev.    0.14561%  0.36669%  0.36483%  0.92611%
                 n            60        60        60        60
       8         Mean         0.07098%  0.12283%  0.29450%  0.82939%
                 Std. Dev.    0.27000%  0.33859%  0.44518%  0.95601%
                 n            60        60        60        60

Table 3: Relative error for LP relaxation in Type 2 trials. Columns indicate percentage of arcs acting as parents.

Type 1, RRC0
Nodes  Arcs/Node              2%        5%        10%       15%
256    4         Mean         0.08396%  0.12902%  0.30359%  0.47040%
                 Std. Dev.    0.45237%  0.30627%  0.69299%  0.91533%
                 n            60        60        60        60
       8         Mean         0.04560%  0.05072%  0.12079%  0.20614%
                 Std. Dev.    0.20377%  0.19954%  0.39895%  0.64658%
                 n            60        60        60        60
512    4         Mean         0.11863%  0.15723%  0.22745%  0.53016%
                 Std. Dev.    0.33121%  0.36941%  0.42859%  0.67835%
                 n            60        60        59        60
       8         Mean         0.05038%  0.09224%  0.20583%  0.48200%
                 Std. Dev.    0.14546%  0.29561%  0.47322%  0.70649%
                 n            60        60        60        60

Table 4: Mean and standard deviation of relative error for the RRC0 scheme solutions over all Type 1 computational trials. Note that the statistics for 512 nodes, 4 arcs per node, and 10% of sinks interdependent were calculated based only on the 59 successful trials.
Type 1, RRC1
Nodes  Arcs/Node              2%        5%        10%       15%
256    4         Mean         0.15995%  0.17999%  0.42908%  0.51373%
                 Std. Dev.    0.73355%  0.50584%  1.06876%  0.93875%
                 n            60        60        60        60
       8         Mean         0.08235%  0.05400%  0.16980%  0.20614%
                 Std. Dev.    0.34519%  0.20031%  0.53964%  0.64658%
                 n            60        60        60        60
512    4         Mean         0.12555%  0.16074%  0.24860%  0.61767%
                 Std. Dev.    0.33302%  0.36889%  0.43587%  0.89859%
                 n            60        60        60        60
       8         Mean         0.08089%  0.10822%  0.22707%  0.51905%
                 Std. Dev.    0.25132%  0.32182%  0.47803%  0.75221%
                 n            60        60        60        60

Table 5: Mean and standard deviation of relative error for the RRC1 scheme solutions over all Type 1 computational trials.

Type 1, RRC5
Nodes  Arcs/Node              2%        5%        10%       15%
256    4         Mean         0.15995%  0.27764%  0.83326%  0.99746%
                 Std. Dev.    0.73355%  0.70453%  1.78568%  1.82337%
                 n            60        60        60        60
       8         Mean         0.17594%  0.14096%  0.24590%  0.26612%
                 Std. Dev.    0.64976%  0.53061%  0.62247%  0.70883%
                 n            60        60        60        60
512    4         Mean         0.29846%  0.32916%  0.51934%  1.02361%
                 Std. Dev.    0.63220%  0.63879%  0.77623%  1.32727%
                 n            60        60        60        60
       8         Mean         0.15358%  0.17735%  0.40364%  0.63427%
                 Std. Dev.    0.39285%  0.43903%  0.63889%  0.89402%
                 n            60        60        60        60

Table 6: Mean and standard deviation of relative error for the RRC5 scheme solutions over all Type 1 computational trials.
Type 1, RRF
Nodes  Arcs/Node              2%        5%        10%       15%
256    4         Mean         1.69196%  1.68475%  3.71021%  5.60403%
                 Std. Dev.    2.51891%  1.88534%  3.31362%  4.90118%
                 n            60        60        60        60
       8         Mean         1.30573%  1.33120%  2.03500%  2.76798%
                 Std. Dev.    2.21282%  1.83162%  2.67148%  3.29595%
                 n            60        60        60        60
512    4         Mean         0.93481%  1.80408%  3.24969%  5.04507%
                 Std. Dev.    0.98142%  1.50691%  2.20683%  3.20177%
                 n            60        60        60        60
       8         Mean         0.92729%  1.27037%  1.27321%  1.92335%
                 Std. Dev.    1.00769%  1.40316%  1.14424%  1.81029%
                 n            60        60        60        60

Table 7: Mean and standard deviation of relative error for the RRF scheme solutions over all Type 1 computational trials.
Type 2, RRC0
Nodes  Arcs/Node    1%      2%      5%      10%
256    4            0.0%    1.7%    0.0%    8.3%
       8            0.0%    0.0%    0.0%    5.0%
512    4            0.0%    1.7%    5.0%    15.0%
       8            0.0%    1.7%    0.0%    1.6%

Table 8: Failure rates for RRC0 in Type 2 trials. Percentages indicate the fraction of trials for which the randomized rounding scheme was unable to obtain a feasible solution within 1000 iterations.

Type 2, RRC1
Nodes  Arcs/Node    1%      2%      5%      10%
256    4            0.0%    0.0%    0.0%    3.3%
       8            0.0%    0.0%    0.0%    0.0%
512    4            0.0%    0.0%    0.0%    5.0%
       8            0.0%    0.0%    0.0%    1.6%

Table 9: Failure rates for RRC1 in Type 2 trials. Percentages indicate the fraction of trials for which the randomized rounding scheme was unable to obtain a feasible solution within 1000 iterations.
Type 2, RRC5
Nodes  Arcs/Node    1%      2%      5%      10%
256    4            0.0%    0.0%    0.0%    1.7%
       8            0.0%    0.0%    0.0%    0.0%
512    4            0.0%    0.0%    0.0%    1.7%
       8            0.0%    0.0%    0.0%    1.6%

Table 10: Failure rates for RRC5 in Type 2 trials. Percentages indicate the fraction of trials for which the randomized rounding scheme was unable to obtain a feasible solution within 1000 iterations.
Type 2, RRF
Nodes  Arcs/Node    1%      2%      5%       10%
256    4            0.0%    0.0%    10.0%    96.7%
       8            0.0%    8.3%    95.0%    100.0%
512    4            0.0%    11.7%   100.0%   100.0%
       8            15.0%   71.7%   100.0%   100.0%

Table 11: Failure rates for RRF in Type 2 trials. Percentages indicate the fraction of trials for which the randomized rounding scheme was unable to obtain a feasible solution within 1000 iterations.
Type 2, RRC0
Nodes  Arcs/Node              1%        2%        5%        10%
256    4         Mean         0.07359%  0.09553%  0.29521%  0.78186%
                 Std. Dev.    0.31944%  0.36233%  0.61692%  1.10045%
                 n            60        59        60        55
       8         Mean         0.00632%  0.08292%  0.40856%  0.17906%
                 Std. Dev.    0.02826%  0.34117%  1.55011%  0.49942%
                 n            60        60        60        57
512    4         Mean         0.08997%  0.10265%  0.26172%  0.49078%
                 Std. Dev.    0.35509%  0.25362%  0.53550%  0.57372%
                 n            60        59        57        51
       8         Mean         0.01221%  0.09384%  0.09704%  0.34873%
                 Std. Dev.    0.05460%  0.30559%  0.20834%  0.60217%
                 n            60        59        60        60

Table 12: Mean and standard deviation of relative error for RRC0 scheme solutions over all successful Type 2 computational trials.

Type 2, RRC1
Nodes  Arcs/Node              1%        2%        5%        10%
256    4         Mean         0.24676%  0.27409%  0.68012%  1.50022%
                 Std. Dev.    0.82156%  0.72796%  1.47352%  2.16678%
                 n            60        60        60        58
       8         Mean         0.72814%  1.09969%  2.20014%  3.54538%
                 Std. Dev.    1.68054%  2.93195%  4.44797%  5.15958%
                 n            60        60        60        60
512    4         Mean         0.11146%  0.35749%  0.41304%  1.65035%
                 Std. Dev.    0.36230%  0.92922%  0.77179%  2.41296%
                 n            60        60        60        57
       8         Mean         0.49182%  0.71963%  1.22240%  4.06781%
                 Std. Dev.    1.44531%  1.38286%  1.90552%  3.92718%
                 n            60        60        60        60

Table 13: Mean and standard deviation of relative error for RRC1 scheme solutions over all successful Type 2 computational trials.
Type 2, RRC5
Nodes  Arcs/Node              1%        2%        5%        10%
256    4         Mean         1.22894%  1.33160%  2.65336%  5.33097%
                 Std. Dev.    2.20261%  2.25426%  3.94643%  5.02341%
                 n            60        60        60        59
       8         Mean         1.25221%  3.44290%  9.62586%  16.80720%
                 Std. Dev.    2.00615%  4.67988%  9.13473%  10.32456%
                 n            60        60        60        60
512    4         Mean         0.47022%  1.13194%  1.65448%  4.94240%
                 Std. Dev.    1.07715%  1.66150%  2.01748%  3.47922%
                 n            60        60        60        59
       8         Mean         1.19089%  3.87011%  8.87171%  15.76693%
                 Std. Dev.    2.17442%  3.59570%  5.58055%  8.10469%
                 n            60        60        60        60

Table 14: Mean and standard deviation of relative error for RRC5 scheme solutions over all successful Type 2 computational trials.

Type 2, RRF
Nodes  Arcs/Node              1%         2%         5%         10%
256    4         Mean         6.52553%   12.06161%  20.14598%  57.94037%
                 Std. Dev.    8.15366%   7.11377%   7.33234%   0.78205%
                 n            60         60         35         2
       8         Mean         20.74206%  38.37295%  86.34556%  –
                 Std. Dev.    9.68122%   18.65668%  14.60570%  –
                 n            60         55         7          0
512    4         Mean         6.75030%   10.59501%  22.63133%  –
                 Std. Dev.    3.57433%   5.17104%   8.17028%   –
                 n            60         53         5          0
       8         Mean         19.25974%  37.17727%  –          –
                 Std. Dev.    6.78381%   11.84658%  –          –

Table 15: Mean and standard deviation of relative error for RRF scheme solutions over all successful Type 2 computational trials.