The Simplex Algorithm is NP-mighty
Yann Disser and Martin Skutella

September 28, 2018
Abstract
We propose to classify the power of algorithms by the complexity of the problems that they can be used to solve. Instead of restricting to the problem a particular algorithm was designed to solve explicitly, however, we include problems that, with polynomial overhead, can be solved ‘implicitly’ during the algorithm’s execution. For example, we allow a decision problem to be solved by suitably transforming the input, executing the algorithm, and observing whether a specific bit in its internal configuration ever switches during the execution. We show that the Simplex Method, the Network Simplex Method (both with Dantzig’s original pivot rule), and the Successive Shortest Path Algorithm are NP-mighty, that is, each of these algorithms can be used to solve any problem in NP. This result casts a more favorable light on these algorithms’ exponential worst-case running times. Furthermore, as a consequence of our approach, we obtain several novel hardness results. For example, for a given input to the Simplex Algorithm, deciding whether a given variable ever enters the basis during the algorithm’s execution and determining the number of iterations needed are both NP-hard problems. Finally, we close a long-standing open problem in the area of network flows over time by showing that earliest arrival flows are NP-hard to obtain.
Understanding the complexity of algorithmic problems is a central challenge in the theory of computing. Traditionally, complexity theory operates from the point of view of the problems we encounter in the world, by considering a fixed problem and asking how nice an algorithm the problem admits with respect to running time, memory consumption, robustness to uncertainty in the input, determinism, etc. In this paper we advocate a different perspective by considering a particular algorithm and asking how powerful (or mighty) the algorithm is, i.e., what the most difficult problems are that the algorithm can be used to solve ‘implicitly’ during its execution.
Related literature.
A traditional approach to capturing the mightiness of an algorithm is to ask how difficult the exact problem is that the algorithm was designed to solve, i.e., what is the complexity of predicting the final output of the algorithm. For optimization problems, however, if there are multiple optimum solutions to an instance, predicting which optimum solution a specific algorithm will produce might be more difficult than finding an optimum solution in the first place. If this is the case, the algorithm can be considered to be mightier than the problem it is solving suggests. A prominent example for this phenomenon is given by search algorithms for problems in the complexity class PLS (for polynomial local search), introduced by Johnson, Papadimitriou, and Yannakakis [11]. Many problems in PLS are complete with respect to so-called tight reductions, which implies that finding any optimum solution reachable from a specific starting solution via local search is PSPACE-complete [20]. Any local search algorithm for such a problem can thus be considered to be PSPACE-mighty. Recently, in a remarkable paper by Goldberg, Papadimitriou, and Savani [8], similar PSPACE-completeness results were established for algorithms solving search problems in the complexity class PPAD (for polynomial parity argument in directed graphs [19]), and in particular for the well-known Lemke-Howson algorithm [16] for finding Nash equilibria in bimatrix games.
A novel approach.
We take the analysis of the power of algorithms one step further and argue that the mightiness of an algorithm should not only be classified by the complexity of the exact problem the algorithm is solving, but rather by the most complex problem that the algorithm can be made to solve implicitly. In particular, we do not consider the algorithm as a pure black box that turns a given input into a well-defined output. Instead, we are interested in the entire process of computation (i.e., the sequence of the algorithm’s internal states) that leads to the final output, and ask how meaningful this process is in terms of valuable information that can be drawn from it. As we show in this paper, sometimes very limited information on an algorithm’s process of computation can be used to solve problems that are considerably more complex than the problem the algorithm was actually designed for.

We define the mightiness of an algorithm via the problem of greatest complexity that it can solve implicitly in this way, and, in particular, we say that an algorithm is NP-mighty if it implicitly solves all problems in NP (precise definitions are given below). Note that in order to make mightiness a meaningful concept, we need to make sure that mindless exponential algorithms like simple counters do not qualify as being NP-mighty, while algorithms that explicitly solve hard problems do. This goal is achieved by carefully restricting the allowed computational overhead as well as the access to the algorithm’s process of computation.
Considered algorithms.
For an algorithm’s mightiness to lie beyond the complexity class of the problem it was designed to solve, its running time must be excessive for this complexity class. Most algorithms that are inefficient in this sense would quickly be disregarded as wasteful and not meriting further investigation. Dantzig’s Simplex Method [3] is a famous exception to this rule. Empirically, it belongs to the most efficient methods for solving linear programs. However, Klee and Minty [15] showed that the Simplex Algorithm with Dantzig’s original pivot rule exhibits exponential worst-case behavior. Similar results are known for many other popular pivot rules; see, e.g., Amenta and Ziegler [1]. On the other hand, by the work of Khachiyan [13, 14] and later Karmarkar [12], it is known that linear programs can be solved in polynomial time. Spielman and Teng [22] developed the concept of smoothed analysis in order to explain the practical efficiency of the Simplex Method despite its poor worst-case behavior.

Minimum-cost flow problems form a class of linear programs featuring a particularly rich combinatorial structure allowing for numerous specialized algorithms. The first such algorithm is Dantzig’s Network Simplex Method [4], which is an interpretation of the general Simplex Method applied to this class of problems. In this paper, we consider the primal (Network) Simplex Method together with Dantzig’s pivot rule, which selects the nonbasic variable with the most negative reduced cost. We refer to this variant of the (Network) Simplex Method as the (Network) Simplex Algorithm. One of the simplest and most basic algorithms for minimum-cost flow problems is the Successive Shortest Path Algorithm, which iteratively augments flow along paths of minimum cost in the residual network [2, 9]. According to Ford and Fulkerson [5], the underlying theorem stating that such an augmentation step preserves optimality “may properly be regarded as the central one concerning minimal cost flows”.
Zadeh [25] presented a family of instances forcing the Successive Shortest Path Algorithm and also the Network Simplex Algorithm into exponentially many iterations. On the other hand, Tardos [23] proved that minimum-cost flows can be computed in strongly polynomial time, and Orlin [18] gave a polynomial variant of the Network Simplex Method.

Main contribution.
We argue that the exponential worst-case running time of the (Network) Simplex Algorithm and the Successive Shortest Path Algorithm is not purely a waste of time. While these algorithms sometimes take longer than necessary to reach their primary objective (namely, to find an optimum solution to a particular linear program), they collect meaningful information on their detours and implicitly solve difficult problems. To make this statement more precise, we introduce a definition of ‘implicitly solving’ that is as minimalistic as possible with regard to the extent to which we are permitted to use the algorithm’s internal state. The following definition refers to the complete configuration of a Turing machine, i.e., a binary representation of the machine’s internal state, the contents of its tape, and the position of its head.
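The idea of watching a single bit of such a complete configuration during an execution can be sketched minimally as follows; the integer encoding of configurations and the names `bit_ever_flips` and `step` are ours, not the paper's:

```python
def bit_ever_flips(step, config, b, max_steps):
    """Run `step` on an integer-encoded configuration until it returns
    None (halt) or `max_steps` steps have passed; report whether bit `b`
    of the configuration ever takes a value different from its initial one."""
    initial = (config >> b) & 1
    for _ in range(max_steps):
        config = step(config)
        if config is None:
            return False  # halted without bit b ever flipping
        if (config >> b) & 1 != initial:
            return True
    return False

# Toy machine: decrement the configuration until it reaches zero.
decrement = lambda c: c - 1 if c > 0 else None

print(bit_ever_flips(decrement, 4, 2, 100))  # bit 2 flips on 4 -> 3: True
print(bit_ever_flips(decrement, 4, 3, 100))  # bit 3 stays 0: False
```

A decision procedure in the sense sketched here transforms an instance into a starting configuration and a bit index, and then only observes that one bit.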
Definition 1.
An algorithm given by a Turing machine T implicitly solves a decision problem P if, for a given instance I of P, it is possible to compute in polynomial time an input I′ for T and a bit b in the complete configuration of T, such that I is a yes-instance if and only if b flips at some point during the execution of T for input I′.

An algorithm that implicitly solves a particular NP-hard decision problem implicitly solves all problems in NP. We call such algorithms NP-mighty.

Definition 2.
An algorithm is NP-mighty if it implicitly solves every decision problem in NP.

Note that every algorithm that explicitly solves an NP-hard decision problem, by definition, also implicitly solves this problem (assuming, w.l.o.g., that a single bit indicates if the Turing machine has reached an accepting state) and thus is NP-mighty.

The above definitions turn out to be sufficient for our purposes. We remark, however, that slightly more general versions of Definition 1, involving constantly many bits or broader/free access to the algorithm’s output, seem reasonable as well. In this context, access to the exact number of iterations needed by the algorithm also seems reasonable, as it may provide valuable information. In fact, our results below still hold if the number of iterations is all we may use of an algorithm’s behavior. Most importantly, our definitions have been formulated with some care in an attempt to distinguish ‘clever’ exponential-time algorithms from those that rather ‘waste time’ on less meaningful operations. We discuss this critical point in some more detail.

Constructions of exponential-time worst-case instances for algorithms usually rely on gadgets that somehow force an algorithm to count, i.e., to enumerate over exponentially many configurations. Such counting behavior by itself cannot be considered ‘clever’, and, consequently, an algorithm should certainly exhibit more elaborate behavior to qualify as being NP-mighty. As an example, consider the simple counting algorithm (Turing machine) that counts from a given positive number down to zero, i.e., the Turing machine iteratively reduces the binary number on its tape by one until it reaches zero.
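That every tape bit of this counter eventually flips can be checked directly; the following sketch (function name ours) tracks, via XOR, which bit positions change during the countdown:

```python
def flipped_bits(start):
    """Decrement `start` down to zero and return a mask of all bit
    positions that change at least once along the way."""
    flipped, x = 0, start
    while x > 0:
        flipped |= x ^ (x - 1)  # bits changed by this single decrement
        x -= 1
    return flipped

# Every bit up to the most significant bit of the start value flips,
# because the countdown passes through the transition 2^m -> 2^m - 1:
print(bin(flipped_bits(6)))  # '0b111'
```

So observing any single counter bit conveys no information beyond the length of the input, which is why the definitions above are calibrated to exclude such machines.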
To show that this algorithm is not NP-mighty, we need to assume that P ≠ NP, as otherwise the polynomial-time transformation of inputs can already solve NP-hard problems. Since, for sufficiently large inputs, every state of the simple counting algorithm is reached, and since every bit on its tape flips at some point, our definitions are meaningful in the following sense.

Proposition 1.
Unless P = NP, the simple counting algorithm is not NP-mighty, while every algorithm that solves an NP-hard problem is NP-mighty.

Our main result explains the exponential worst-case running time of the following algorithms in terms of their computational power.
Theorem 1.
The Simplex Algorithm, the Network Simplex Algorithm (both with Dantzig’s pivot rule), and the Successive Shortest Path Algorithm are NP-mighty.
We prove this theorem by showing that the algorithms implicitly solve the NP-complete Partition problem (cf. [7]). To this end, we show how to turn a given instance of Partition in polynomial time into a minimum-cost flow network with a distinguished arc e, such that the Network Simplex Algorithm (or the Successive Shortest Path Algorithm) augments flow along arc e in one of its iterations if and only if the Partition instance has a solution. Under the mild assumption that in an implementation of the Network Simplex Algorithm or the Successive Shortest Path Algorithm fixed bits are used to store the flow variables of arcs, this implies that these algorithms implicitly solve Partition in terms of Definition 1.

A central part of our network construction is a recursively defined family of counting gadgets on which these minimum-cost flow algorithms take exponentially many iterations. These counting gadgets are, in some sense, simpler than Zadeh’s 40-year-old ‘bad networks’ [25] and thus interesting in their own right. By slightly perturbing the costs of the arcs according to the values of a given Partition instance, we manage to force the considered minimum-cost flow algorithms into enumerating all possible solutions. In contrast to mindless counters, we show that the algorithms are self-aware in the sense that whether or not they have encountered a valid Partition solution is reflected in their internal state (in the sense of Definition 1).
Further results.
We mention interesting consequences of our main results discussed above. Proofs of the following corollaries can be found in Appendix C. We first state complexity results that follow from our proof of Theorem 1.
Corollary 1.
Determining the number of iterations needed by the Simplex Algorithm, the Network Simplex Algorithm, and the Successive Shortest Path Algorithm for a given input is NP-hard.

Corollary 2.
Deciding for a given linear program whether a given variable ever enters the basis during the execution of the Simplex Algorithm is NP-hard.

Another interesting implication is for parametric flows and, more generally, parametric linear programming.
Corollary 3.
Determining whether a parametric minimum-cost flow uses a given arc (i.e., assigns it a positive flow value for any parameter value) is NP-hard. In particular, determining whether the solution to a parametric linear program uses a given variable is NP-hard. Also, determining the number of different basic solutions over all parameter values is NP-hard.

We also obtain the following complexity result on 2-dimensional projections of polyhedra.

Corollary 4.
Given a d-dimensional polytope P by a system of linear inequalities, determining the number of vertices of P’s projection onto a given 2-dimensional subspace is NP-hard.

We finally mention a result for a long-standing open problem in the area of network flows over time (see, e.g., [21] for an introduction to this area). The goal in earliest arrival flows is to find an s-t-flow over time that simultaneously maximizes the amount of flow that has reached the sink node t at any point in time [6]. It has been known since the early 1970s that the Successive Shortest Path Algorithm can be used to obtain such an earliest arrival flow [17, 24]. All known encodings of earliest arrival flows, however, suffer from exponential worst-case size, and ever since it has been an open problem whether there is a polynomial encoding which can be found in polynomial time. The following corollary implies that, in a certain sense, earliest arrival flows are NP-hard to obtain.

Corollary 5.
Determining the average arrival time of flow in an earliest arrival flow is NP-hard.

Note that an s-t-flow over time is an earliest arrival flow if and only if it minimizes the average arrival time of flow [10].

Outline.

After establishing some minimal notation in Section 2, we proceed to proving Theorem 1 for the Successive Shortest Path Algorithm in Section 3. In Section 4, we adapt the construction for the Network Simplex Algorithm. Finally, Section 5 highlights interesting open problems for future research. All proofs are deferred to the appendix.
In the following sections we show that the Successive Shortest Path Algorithm and the Network Simplex Algorithm implicitly solve the classical Partition problem. An instance of Partition is given by a vector of positive numbers ~a = (a_1, . . . , a_n) ∈ Q^n, and the problem is to decide whether there is a subset I ⊆ {1, . . . , n} with Σ_{i∈I} a_i = Σ_{i∉I} a_i. This problem is well known to be NP-complete (cf. [7]). Throughout this paper we consider an arbitrary fixed instance ~a of Partition. Without loss of generality, we assume A := Σ_{i=1}^n a_i < 1/4 and that all values a_i, i ∈ {1, . . . , n}, are multiples of ε for some constant ε > 0.

Let ~v = (v_1, . . . , v_n) ∈ Q^n and k ∈ N, with k_j ∈ {0, 1}, j ∈ Z_{≥0}, being the j-th bit in the binary representation of k, i.e., k_j := ⌊k/2^j⌋ mod 2. We define ~v[k]_{i_0,i} := Σ_{j=i_0+1}^{i} (−1)^{k_{j−1}} v_j, as well as ~v[k]_i := ~v[k]_{0,i} and ~v[k]_{i,i} := 0.

The following characterization will be useful later.

Proposition 2.
The Partition instance ~a admits a solution if and only if there is a k ∈ {0, . . . , 2^n − 1} for which ~a[k]_n = 0.

Consider a network N with a source node s, a sink node t, and non-negative arc costs. The Successive Shortest Path Algorithm starts with the zero flow and iteratively augments flow along a minimum-cost s-t-path in the current residual network, until a maximum s-t-flow has been found. Notice that the residual network is a sub-network of N’s bidirected network, where the cost of a backward arc is the negative of the cost of the corresponding forward arc.

In this section we construct a family of networks for which the Successive Shortest Path Algorithm takes an exponential number of iterations. Assume we have a network N_{i−1} with source s_{i−1} and sink t_{i−1} which requires 2^{i−1} iterations that each augment one unit of flow. We can obtain a new network N_i with only two additional nodes s_i, t_i for which the Successive Shortest Path Algorithm takes 2^i iterations. To do this, we add two arcs (s_i, s_{i−1}), (t_{i−1}, t_i) with capacity 2^{i−1} and cost 0, and two arcs (s_i, t_{i−1}), (s_{i−1}, t_i) with capacity 2^{i−1} and very high cost. The idea is that in the first 2^{i−1} iterations one unit of flow is routed along the arcs of cost 0 and through N_{i−1}. After 2^{i−1} iterations, both the arcs (s_i, s_{i−1}), (t_{i−1}, t_i) and the subnetwork N_{i−1} are completely saturated, and the Successive Shortest Path Algorithm starts to use the expensive arcs (s_i, t_{i−1}), (s_{i−1}, t_i). Each of the next 2^{i−1} iterations adds one unit of flow along the expensive arcs and removes one unit of flow from the subnetwork N_{i−1}.

We tune the cost of the expensive arcs to 2^{i−1} − 1/2, which turns out to be just expensive enough (cf. Figure 1, with v_i = 0). This leads to a particularly nice progression of the costs of shortest paths, where the shortest path in iteration j = 0, 1, . . . , 2^i − 1 simply has cost j.

Figure 1: Recursive definition of the counting gadget N^~v_i for the Successive Shortest Path Algorithm and ~v ∈ {~a, −~a}. Arcs are labeled by their cost and capacity, in this order: the arcs (s_i, s_{i−1}) and (t_{i−1}, t_i) have cost v_i and capacity 2^{i−1}; the arcs (s_i, t_{i−1}) and (s_{i−1}, t_i) have cost 2^{i−1} − 1/2 − v_i and capacity 2^{i−1}. The cost of the shortest s_i-t_i-path in iteration j = 0, . . . , 2^i − 1 is j + ~v[j]_i.

Our goal is to use this counting gadget to iterate over all candidate solutions for a Partition instance ~v (we later use the gadget for ~v ∈ {~a, −~a}). Motivated by Proposition 2, we perturb the costs of the arcs in such a way that the shortest path in iteration j has cost j + ~v[j]_i. We achieve this by adding v_i to the cheap arcs (s_i, s_{i−1}), (t_{i−1}, t_i) and subtracting v_i from the expensive arcs (s_i, t_{i−1}), (s_{i−1}, t_i). If the value of v_i is small enough, this modification does not affect the overall behavior of the gadget. The first 2^{i−1} iterations now have an additional cost of v_i, while the next 2^{i−1} iterations have an additional cost of −v_i, which leads to the desired cost when the modification is applied recursively.

Figure 1 shows the recursive construction of our counting gadget N^~v_n that encodes the Partition instance ~v. The following lemma formally establishes the crucial properties of the construction.

Lemma 1.
For ~v ∈ {~a, −~a} and i = 1, . . . , n, the Successive Shortest Path Algorithm applied to the network N^~v_i with source s_i and sink t_i needs 2^i iterations to find a maximum s_i-t_i-flow of minimum cost. In each iteration j = 0, 1, . . . , 2^i − 1, the algorithm augments one unit of flow along a path of cost j + ~v[j]_i in the residual network.
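Proposition 2, which drives this construction, can be checked by brute force on small instances: the signed sums ~a[k]_n, with signs taken from the bits of k, vanish for some k exactly when a balanced partition exists. A sketch (the function name is ours):

```python
def partition_has_solution(a):
    """Check whether some k in {0, ..., 2^n - 1} yields
    a[k]_n = sum_j (-1)^(k_{j-1}) * a_j = 0, as in Proposition 2."""
    n = len(a)
    for k in range(2 ** n):
        signed_sum = sum((-1) ** ((k >> (j - 1)) & 1) * a[j - 1]
                         for j in range(1, n + 1))
        if signed_sum == 0:
            return True
    return False

print(partition_has_solution([1, 2, 3]))  # {1, 2} vs {3}: True
print(partition_has_solution([1, 2, 4]))  # odd total: False
```

The bit k_{j−1} of k decides on which side of the partition element a_j is placed, so enumerating k enumerates all candidate partitions.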
Solving Partition.

We use the counting gadget of the previous section to prove Theorem 1 for the Successive Shortest Path Algorithm. Let G^~a_ssp be the network consisting of the two gadgets N^~a_n, N^{−~a}_n, connected to a new source node s and a new sink t (cf. Figure 2). For both of the gadgets, we add the arcs (s, s_n) and (t_n, t) with capacity 2^n and cost 0. We introduce one additional arc e (dashed in the figure) of capacity 1 and cost 0 from node s_0 of gadget N^~a_n to node t_0 of gadget N^{−~a}_n. Finally, we increase the costs of the arcs (s_0, t_0) in both gadgets from 0 to ε/2. Recall that ε > 0 is related to ~a by the fact that all a_i’s are multiples of ε, i.e., a cost smaller than ε is insignificant compared to all other costs.

Lemma 2.
The Successive Shortest Path Algorithm on network G^~a_ssp augments flow along arc e if and only if the Partition instance ~a has a solution.

Figure 2: Illustration of network G^~a_ssp. The subnetworks N^~a_n and N^{−~a}_n are advanced independently by the Successive Shortest Path Algorithm without using arc e, unless the Partition instance ~a has a solution.

We assume that a single bit of the complete configuration of the Turing machine corresponding to the Successive Shortest Path Algorithm can be used to distinguish whether arc e carries a flow of 0 or a flow of 1 during the execution of the algorithm, and that the identity of this bit can be determined in polynomial time. Under this natural assumption, we get the following result, which implies Theorem 1 for the Successive Shortest Path Algorithm.

Corollary 6.

The Successive Shortest Path Algorithm implicitly solves Partition.
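For small n, the counting behavior of Lemma 1 can be reproduced with a compact Successive Shortest Path implementation (Bellman-Ford on the residual network; all names below are ours). To stay integral, all arc costs are scaled by a factor of 2, so the expensive arcs of level i get cost 2^i − 1 and the shortest path in iteration j has scaled cost 2j; the unperturbed gadget (all v_i = 0) then takes exactly 2^n iterations:

```python
def build_gadget(n):
    """Unperturbed counting gadget N_n with costs scaled by 2.
    Arcs are mutable records [tail, head, cost, capacity, flow]."""
    arcs = [[('s', 0), ('t', 0), 0, 1, 0]]
    for i in range(1, n + 1):
        cheap, expensive, cap = 0, 2 ** i - 1, 2 ** (i - 1)
        arcs += [[('s', i), ('s', i - 1), cheap, cap, 0],
                 [('t', i - 1), ('t', i), cheap, cap, 0],
                 [('s', i), ('t', i - 1), expensive, cap, 0],
                 [('s', i - 1), ('t', i), expensive, cap, 0]]
    return arcs

def successive_shortest_paths(arcs, s, t):
    """Run the Successive Shortest Path Algorithm and return the cost
    of the augmenting path found in each iteration."""
    nodes = {a[0] for a in arcs} | {a[1] for a in arcs}
    path_costs = []
    while True:
        dist, pred = {s: 0}, {}
        for _ in range(len(nodes)):  # Bellman-Ford on the residual network
            for a in arcs:
                u, v, c, cap, f = a
                if f < cap and u in dist and dist[u] + c < dist.get(v, float('inf')):
                    dist[v], pred[v] = dist[u] + c, (a, 1)
                if f > 0 and v in dist and dist[v] - c < dist.get(u, float('inf')):
                    dist[u], pred[u] = dist[v] - c, (a, -1)
        if t not in dist:
            return path_costs  # no augmenting path: maximum flow reached
        path, v = [], t      # reconstruct the shortest path backwards
        while v != s:
            a, d = pred[v]
            path.append((a, d))
            v = a[0] if d == 1 else a[1]
        bottleneck = min(a[3] - a[4] if d == 1 else a[4] for a, d in path)
        for a, d in path:
            a[4] += d * bottleneck
        path_costs.append(dist[t])

n = 3
costs = successive_shortest_paths(build_gadget(n), ('s', n), ('t', n))
print(costs)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Each augmentation moves one unit (the bottleneck arc (s_0, t_0) has capacity 1 on every shortest path), and the marginal costs increase strictly, matching the progression asserted in Lemma 1.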
In this section we adapt our construction for the Simplex Algorithm and, in particular, for its interpretation for the minimum-cost flow problem, the Network Simplex Algorithm. In this specialized version of the Simplex Algorithm, a basic feasible solution is specified by a spanning tree T such that the flow value on each arc of the network not contained in T is either zero or equal to its capacity. We refer to this tree simply as the basis or the spanning tree. The reduced cost of a residual non-tree arc e equals the cost of sending one unit of flow in the direction of e around the unique cycle obtained by adding e to T. For a pair of nodes, the unique path connecting these nodes in the spanning tree T is referred to as the tree-path between the two nodes. Note that while we set up the initial basis and flow manually in the constructions of the following sections, determining the initial feasible flow algorithmically via the algorithm of Edmonds and Karp, ignoring arc costs, yields the same result.

Our construction ensures that all intermediate solutions of the Network Simplex Algorithm are non-degenerate. Moreover, in every iteration there is a unique non-tree arc of minimum reduced cost which is used as a pivot element.

We design a counting gadget for the Network Simplex Algorithm (cf. Figure 3), similar to the gadget N^~v_i of Section 3.1 for the Successive Shortest Path Algorithm. Since the Network Simplex Algorithm augments flow along cycles obtained by adding one arc to the current spanning tree, we assume that the tree always contains an external tree-path from the sink of the gadget to its source with a very low (negative) cost. This assumption will be justified below in Section 4.2 when we embed the counting gadget into a larger network.

Figure 3: Recursive definition of the counting gadget S^{~v,r}_i for the Network Simplex Algorithm, ~v ∈ {~a, −~a}, and a parameter r ∈ (2A, 1 − A), r ≠ 1/2. The capacities of the arcs of S^{~a,r}_i \ S^{~a,r}_{i−1} are x_i + 1 = 3 · 2^{i−1}. If we guarantee that there always exists a tree-path from t_i to s_i with sufficiently negative cost outside of the gadget, the cost of iteration k, k = 0, . . . , 2^{i−1} − 1, within the gadget is 3k + ~v[k]_i. Bold arcs are in the initial basis and carry a flow of at least 1 throughout the execution of the algorithm.

The main challenge when adapting the gadget N^~v_i is that the spanning trees in consecutive iterations of the Network Simplex Algorithm differ in one arc only, since in each iteration a single arc may enter the basis. However, successive shortest paths in N^~v_i differ by exactly two tree-arcs between consecutive iterations. We obtain a new gadget S^~v_i from N^~v_i by modifying arc capacities in such a way that we get two intermediate iterations for every two successive shortest paths in N^~v_i that serve as a transition between the two paths and their corresponding spanning trees. Recall that in N^~v_i the capacities of the arcs of N^~v_i \ N^~v_{i−1} are exactly the same as the capacity of the subnetwork N^~v_{i−1}. In S^~v_i, we increase the capacity by one unit relative to the capacity of S^~v_{i−1}. The resulting capacities of the arcs in S^~v_i \ S^~v_{i−1} are x_i (for the moment), where x_i = 2x_{i−1} + 1 and x_1 = 2, i.e., x_i = 3 · 2^{i−1} − 1.

Similar to before, after x_{i−1} iterations the subnetwork S^~v_{i−1} is saturated. In contrast, however, at this point the arcs (s_i, s_{i−1}), (t_{i−1}, t_i) are not saturated yet. Instead, in the next two iterations, the arcs (s_i, t_{i−1}), (s_{i−1}, t_i) enter the basis and one unit of flow gets sent via the paths s_i, s_{i−1}, t_i and s_i, t_{i−1}, t_i, which saturates the arcs (s_i, s_{i−1}), (t_{i−1}, t_i) and eliminates them from the basis. Afterwards, in the next x_{i−1} iterations, flow is sent via (s_i, t_{i−1}), (s_{i−1}, t_i) and through S^~v_{i−1} as before (cf. Figure 4 for an example execution of the Network Simplex Algorithm on S^~v_2).

For the construction to work, we need that, in every non-intermediate iteration, arc (s_0, t_0) not only enters the basis but, more importantly, is also the unique arc to leave the basis. In other words, we want to ensure that no other arc becomes tight in these iterations. For this purpose, we add an initial flow of 1 along the paths s_i, s_{i−1}, . . . , s_0 and t_0, t_1, . . . , t_i by adding supply 1 to s_i, t_0 and demand 1 to s_0, t_i and increasing the capacities of the affected arcs by 1. The arcs of these two paths are the only arcs from the gadget that are contained in the initial spanning tree. We also increase the capacities of the arcs (s_i, t_{i−1}), (s_{i−1}, t_i) by one to ensure that these arcs are never saturated.

Finally, we also make sure that in every iteration the arc entering the basis is unique. To achieve this, we introduce a parameter r ∈ (2A, 1 − A), r ≠ 1/2, and replace the costs 2^{i−1} − 1/2 − v_i of the arcs (s_i, t_{i−1}), (s_{i−1}, t_i) by the new costs 2^{i−1} − r − v_i and 2^{i−1} − (1 − r) − v_i, respectively.

We later use the final gadget S^{~v,r}_n as part of a larger network G by connecting the nodes s_n, t_n to nodes in G \ S^{~v,r}_n. The following lemma establishes the crucial properties of the gadget used in such a way as a part of a larger network G.

Lemma 3.
Let S^{~v,r}_i, ~v ∈ {~a, −~a}, be part of a larger network G and assume that before every iteration of the Network Simplex Algorithm on G where flow is routed through S^{~v,r}_i there is a tree-path from t_i to s_i in the residual network of G that has cost smaller than −2^{i+1} and capacity greater than 1. Then, there are exactly x_i + 1 = 3 · 2^{i−1} iterations in which one unit of flow is routed from s_i to t_i along arcs of S^{~v,r}_i. Moreover:

1. In iteration j = 3k, k = 0, . . . , 2^{i−1} − 1, arc (s_0, t_0) enters the basis carrying flow k mod 2 and immediately exits the basis again carrying flow (k + 1) mod 2. The cost incurred by arcs of S^{~v,r}_i is 3k + ~v[k]_i.

2. In iterations j = 3k + 1, 3k + 2, k = 0, . . . , 2^{i−1} − 1, for some 1 ≤ i′ ≤ i, the cost incurred by arcs of S^{~v,r}_i is 3k + r + ~v[k]_{i′,i} and 3k + (1 − r) + ~v[k]_{i′,i}, in order of increasing cost. One of the arcs (s_{i′}, s_{i′−1}), (s_{i′−1}, t_{i′}) and one of the arcs (s_{i′}, t_{i′−1}), (t_{i′−1}, t_{i′}) each enter and leave the basis in these iterations.

Figure 4: Illustration of the iterations performed by the Network Simplex Algorithm on the counting gadget S^{~a,r}_2 for r < 1/2. The external tree-path from t_2 to s_2 is not shown. Bold arcs are in the basis before each iteration, the red arc enters the basis, and the dashed arc exits the basis. Arcs are oriented in the direction in which they are used next. Note that after x_2 + 1 = 6 iterations the configuration is the same as in the beginning if we switch the roles of s_2 and t_2.

Figure 5: Illustration of network G^~a_ns. The subnetworks S^{~a,r}_n and S^{−~a,r}_n are advanced independently by the Network Simplex Algorithm without using the dashed arc e, unless the Partition instance ~a has a solution. Bold arcs are in the initial basis and carry a flow of at least 1 throughout the execution of the algorithm.
Solving Partition.

We construct a network G^~a_ns similar to the network G^~a_ssp of Section 3.2. Without loss of generality, we assume that a_1 = 0. The network G^~a_ns consists of the two gadgets S^{~a,r}_n, S^{−~a,r}_n (for a fixed admissible parameter r), connected to a new source node s and a new sink t (cf. Figure 5). Let s^+_i, t^+_i denote the nodes of S^{~a,r}_n and s^−_i, t^−_i denote the nodes of S^{−~a,r}_n. We introduce the arcs (s, s^+_n), (s, s^−_n), (t^+_n, t), (t^−_n, t), each with capacity ∞ and cost 0. The supply of s^+_n and s^−_n is moved to s, and the initial flow on the arcs (s, s^+_n) and (s, s^−_n) is set to 1. Similarly, the demand of t^+_n and t^−_n is moved to t, and the initial flow on the arcs (t^+_n, t) and (t^−_n, t) is set to 1. Finally, we add an infinite-capacity arc (s, t) of cost 2^{n+1}, increase the supply of s and the demand of t by x_n, and set the initial flow on (s, t) to x_n.

In addition, we add two new nodes c^+, c^− and replace the arc (s^+_0, t^+_0) by the two arcs (s^+_0, c^+), (c^+, t^+_0) of capacity 1 and cost 0 (for the moment), and analogously for the arc (s^−_0, t^−_0) and c^−. Finally, we move the demand of 1 from s^+_0 to c^+ and the supply of 1 from t^−_0 to c^−. The arcs (s^+_0, c^+) and (c^−, t^−_0) carry an initial flow of 1 and are part of the initial basis. Observe that these modifications do not change the behavior of the gadgets. In addition to the properties of Lemma 3, we have that whenever the arc (s_0, t_0) previously carried a flow of 1, now the arc (c^+, t^+_0) or (s^−_0, c^−) is in the basis, and whenever (s_0, t_0) previously did not carry flow, now the arc (s^+_0, c^+) or (c^−, t^−_0) is in the basis.

We slightly increase the costs of the arcs (c^+, t^+_0) and (s^−_0, c^−) from 0 to ε/2, again without affecting the behavior of the gadgets (note that we can perturb all costs in S^{−~a,r}_n further to ensure that every pivot step is unique). Finally, we add one more arc e = (c^+, c^−) with cost 0 and capacity 1.

Lemma 4.
Arc e enters the basis in some iteration of the Network Simplex Algorithm on network G^~a_ns if and only if the Partition instance ~a has a solution.

Again, we assume that a single bit of the complete configuration of the Turing machine corresponding to the Simplex Algorithm can be used to detect whether a variable is in the basis and that the identity of this bit can be determined in polynomial time. Under this natural assumption, we get the following result, which implies Theorem 1 for the Network Simplex Algorithm and thus the Simplex Algorithm.
Corollary 7.
The Network Simplex Algorithm implicitly solves Partition.

We have introduced the concept of NP-mightiness as a novel means of classifying the computational power of algorithms. Furthermore, we have given a justification for the exponential worst-case behavior of the Successive Shortest Path Algorithm and the (Network) Simplex Method (with Dantzig’s pivot rule): these algorithms can implicitly solve any problem in NP.

A natural open problem is whether the studied algorithms are perhaps even more powerful than our results suggest. Maybe, similarly to the result of Goldberg et al. [8] for the Lemke-Howson algorithm, the Simplex Algorithm can be shown to implicitly solve even PSPACE-hard problems. In line with this question, it would be interesting to investigate how difficult it is to predict which optimum solution the Simplex Algorithm will produce for a fixed pivot rule, i.e., how difficult is the problem the Simplex Algorithm is explicitly solving?

We hope that our approach will turn out to be useful in developing a better understanding of other algorithms that suffer from poor worst-case behavior. In particular, we believe that our results can be carried over to the Simplex Method with other pivot rules. Furthermore, even polynomial-time algorithms with a super-optimal worst-case running time are an interesting subject. Such algorithms might implicitly solve problems that are presumably more difficult than the problems they were designed for. In order to achieve meaningful results in this context, our definition of ‘implicitly solving’ (Definition 1) would need to be modified by further restricting the running time of the transformation of instances.
References

[1] N. Amenta and G. M. Ziegler. Deformed products and maximal shadows of polytopes. In Advances in Discrete and Computational Geometry, pages 57–90. American Mathematical Society, 1996.
[2] R. G. Busacker and P. J. Gowen. A procedure for determining a family of minimum-cost network flow patterns. Technical Paper ORO-TP-15, Operations Research Office, The Johns Hopkins University, Bethesda, Maryland, 1960.
[3] G. B. Dantzig. Maximization of a linear function of variables subject to linear inequalities. In Tj. C. Koopmans, editor, Activity Analysis of Production and Allocation – Proceedings of a Conference, pages 339–347. Wiley, 1951.
[4] G. B. Dantzig. Linear Programming and Extensions. Princeton University Press, 1962.
[5] L. R. Ford and D. R. Fulkerson. Flows in Networks. Princeton University Press, 1962.
[6] D. Gale. Transient flows in networks. Michigan Mathematical Journal, 6:59–63, 1959.
[7] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979.
[8] P. W. Goldberg, C. H. Papadimitriou, and R. Savani. The complexity of the homotopy method, equilibrium selection, and Lemke-Howson solutions. ACM Transactions on Economics and Computation, 1(2):1–25, 2013.
[9] M. Iri. A new method of solving transportation-network problems. Journal of the Operations Research Society of Japan, 3:27–87, 1960.
[10] J. J. Jarvis and H. D. Ratliff. Some equivalent objectives for dynamic network flow problems. Management Science, 28:106–108, 1982.
[11] D. S. Johnson, C. H. Papadimitriou, and M. Yannakakis. How easy is local search? Journal of Computer and System Sciences, 37:79–100, 1988.
[12] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4:373–395, 1984.
[13] L. G. Khachiyan. A polynomial algorithm in linear programming. Soviet Mathematics Doklady, 20:191–194, 1979.
[14] L. G. Khachiyan. Polynomial algorithms in linear programming. U.S.S.R. Computational Mathematics and Mathematical Physics, 20:53–72, 1980.
[15] V. Klee and G. J. Minty. How good is the simplex algorithm? In O. Shisha, editor, Inequalities III, pages 159–175. Academic Press, New York, 1972.
[16] C. E. Lemke and J. T. Howson. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12(2):413–423, 1964.
[17] E. Minieka. Maximal, lexicographic, and dynamic network flows. Operations Research, 21:517–527, 1973.
[18] J. B. Orlin. A polynomial time primal network simplex algorithm for minimum cost flows. Mathematical Programming, 78:109–129, 1997.
[19] C. H. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48:498–532, 1994.
[20] C. H. Papadimitriou, A. A. Schäffer, and M. Yannakakis. On the complexity of local search. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing (STOC), pages 438–445, 1990.
[21] M. Skutella. An introduction to network flows over time. In W. Cook, L. Lovász, and J. Vygen, editors, Research Trends in Combinatorial Optimization, pages 451–482. Springer, 2009.
[22] D. A. Spielman and S.-H. Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51:385–463, 2004.
[23] É. Tardos. A strongly polynomial minimum cost circulation algorithm. Combinatorica, 5:247–255, 1985.
[24] W. L. Wilkinson. An algorithm for universal maximal dynamic flows in a network. Operations Research, 19:1602–1612, 1971.
[25] N. Zadeh. A bad network problem for the simplex method and other minimum cost flow algorithms. Mathematical Programming, 5:255–266, 1973.
A Omitted proofs of Section 3
Lemma 1. For ~v ∈ {~a, −~a} and i = 1, ..., n, the Successive Shortest Path Algorithm applied to network N_i^~v with source s_i and sink t_i needs 2^i iterations to find a maximum s_i-t_i-flow of minimum cost. In each iteration j = 0, 1, ..., 2^i − 1, the algorithm augments one unit of flow along a path of cost j + ~v[j]_i in the residual network.

Proof. We prove the lemma by induction on i, together with the additional property that after 2^i iterations none of the arcs in N_{i−1}^~v carries any flow, while the arcs in N_i^~v \ N_{i−1}^~v are fully saturated.

First consider the network N_1^~v. In each iteration where N_0^~v does not carry flow, one unit of flow can be routed from s_0 to t_0. Conversely, when N_0^~v is saturated, one unit of flow can be routed from t_0 to s_0. In either case the associated cost is 0. With this in mind, it is clear that on N_1^~v the Successive Shortest Path Algorithm terminates after two iterations. In the first, one unit of flow is sent along the path s_1, s_0, t_0, t_1 of cost v_1 = ~v[0]_1. In the second iteration, one unit of flow is sent along the path s_1, t_0, s_0, t_1 of cost 1 − v_1 = 1 + ~v[1]_1. Afterwards, the arc (s_0, t_0) does not carry any flow, while all other arcs are fully saturated.

Now assume the claim holds for N_{i−1}^~v and consider network N_i^~v, i > 1. Observe that every path using either of the arcs (s_i, t_{i−1}) or (s_{i−1}, t_i) has a cost of more than 2^{i−1} − 3/4. To see this, note that the costs of these two arcs sum to 2^i − 1 − v_i, each of them individually exceeding 2^{i−1} − 3/4, since |v_i| < A < 1/4. On the other hand, it can be seen inductively that the shortest t_{i−1}-s_{i−1}-path in the bidirected network associated with N_{i−1}^~v has cost at least −2^{i−1} + 1 − A > −2^{i−1} + 3/4. Hence, using both (s_i, t_{i−1}) and (s_{i−1}, t_i) in addition to a path from t_{i−1} to s_{i−1} incurs cost of more than 2^{i−1} − 3/4 as well.
By induction, in every iteration j < 2^{i−1}, the Successive Shortest Path Algorithm thus does not use the arcs (s_i, t_{i−1}) or (s_{i−1}, t_i) but instead augments one unit of flow along the arcs (s_i, s_{i−1}), (t_{i−1}, t_i) and along an s_{i−1}-t_{i−1}-path of cost j + ~v[j]_{i−1} < 2^{i−1} − 3/4 through the subnetwork N_{i−1}^~v. The total cost of this s_i-t_i-path is v_i + (j + ~v[j]_{i−1}) = j + ~v[j]_i, since j < 2^{i−1}.

After 2^{i−1} iterations, the arcs (s_i, s_{i−1}) and (t_{i−1}, t_i) are both fully saturated, as well as (by induction) the arcs in N_{i−1}^~v \ N_{i−2}^~v, while all other arcs are without flow. Consider the residual network of N_{i−1}^~v at this point. If we increase the costs of the four residual arcs in N_{i−1}^~v \ N_{i−2}^~v by 2^{i−1} − 1 and switch the roles of s_{i−1} and t_{i−1}, we obtain back the original subnetwork N_{i−1}^~v. The shift of the residual costs effectively makes every t_{i−1}-s_{i−1}-path more expensive by 2^{i−1} − 1, but does not otherwise affect the behavior of the network. We can thus use induction again to infer that in every iteration j = 2^{i−1}, ..., 2^i − 1 the Successive Shortest Path Algorithm augments one unit of flow along a path via s_i, t_{i−1}, N_{i−1}^~v, s_{i−1}, t_i. Accounting for the shift in cost by 2^{i−1} − 1, we obtain that this path has a total cost of (2^i − 1 − v_i) + (j − 2^{i−1} + ~v[j − 2^{i−1}]_{i−1}) − (2^{i−1} − 1) = j + ~v[j]_i, where we used ~v[j − 2^{i−1}]_{i−1} = ~v[j]_{i−1} and ~v[j]_{i−1} − v_i = ~v[j]_i for j ∈ [2^{i−1}, 2^i). After 2^i iterations the arcs in N_i^~v \ N_{i−1}^~v are fully saturated and all other arcs carry no flow.

Lemma 2.
The Successive Shortest Path Algorithm on network G_ssp^~a augments flow along arc e if and only if the Partition instance ~a has a solution.

Proof. First observe that our slight modification of the cost of arc (s_0, t_0) in both gadgets N_n^~a and N_n^{−~a} does not affect the behavior of the Successive Shortest Path Algorithm. This is because the cost of any path in G is perturbed by at most ε, and hence the shortest path remains the same in every iteration. The only purpose of the modification is tie-breaking.

Consider the behavior of the Successive Shortest Path Algorithm on the network G_ssp^~a with arc e removed. In each iteration, the shortest s-t-path goes via one of the two gadgets. By Lemma 1, each gadget can be in one of 2^n + 1 states, and we number these states increasingly from 0 to 2^n by the order of their appearance during the execution of the Successive Shortest Path Algorithm. The shortest s-t-path through either gadget in state j = 0, ..., 2^n − 1 has a cost in the range [j − A, j + A], and hence it is cheaper to use a gadget in state j than the other gadget in state j + 1. This means that after every two iterations both gadgets are in the same state.

Now consider the network G_ssp^~a with arc e put back. We show that, as before, if the two gadgets are in the same state j before iteration 2j, j = 0, ..., 2^n − 1, then they are again in the same state two iterations later. More importantly, arc e is used in iterations 2j and 2j + 1 if and only if ~a[j]_n = 0. This proves the lemma since, by Proposition 2, ~a[j]_n = 0 for some j < 2^n if and only if the Partition instance ~a has a solution.

To prove our claim, assume that both gadgets are in the same state j before iteration 2j. Let P^+ be the shortest s-t-path that does not use any arc of N_n^{−~a}, let P^− be the shortest s-t-path that does not use any arc of N_n^~a, and let P be the shortest s-t-path using arc e. Note that one of these paths is the overall shortest s-t-path. We distinguish two cases, depending on whether the arc (s_0, t_0) currently carries flow or not in both gadgets.

If (s_0, t_0) carries no flow, then P^+, P^− use arc (s_0, t_0) in forward direction. Therefore, by Lemma 1, the cost of P^+ is j + ~a[j]_n + ε, while the cost of P^− is j − ~a[j]_n + ε. On the other hand, path P follows P^+ to node s_0 of N_n^~a, then uses arc e, and finally follows P^− to t. The cost of this path is exactly j. If ~a[j]_n ≠ 0, then one of P^+, P^− is cheaper than P, and the next two iterations augment flow along paths P^+ and P^−. Otherwise, if ~a[j]_n = 0, then P is the shortest path, followed in the next iteration by the path from s to node t_0 of N_n^{−~a} along P^−, along arc e in backwards direction to node s_0 of N_n^~a, and finally to t along P^+, for a total cost of j + ε.

If (s_0, t_0) carries flow, then P^+, P^− use arc (s_0, t_0) in backward direction. By Lemma 1, the cost of P^+ is j + ~a[j]_n − ε, while the cost of P^− is j − ~a[j]_n − ε. On the other hand, path P follows P^+ to node s_0 of N_n^~a, then uses arc e, and finally follows P^− to t. The cost of this path is j − 2ε. If ~a[j]_n ≠ 0, then one of P^+, P^− is cheaper than P, and the next two iterations augment flow along paths P^+ and P^−. Otherwise, if ~a[j]_n = 0, then P is the shortest path, followed in the next iteration by the path from s to node t_0 of N_n^{−~a} along P^−, along arc e in backwards direction to node s_0 of N_n^~a, and finally to t along P^+, for a total cost of j.

B Omitted proofs of Section 4
Lemma 3. Let S_i^{~v,r}, ~v ∈ {~a, −~a}, be part of a larger network G, and assume that before every iteration of the Network Simplex Algorithm on G where flow is routed through S_i^{~v,r} there is a tree-path from t_i to s_i in the residual network of G that has cost smaller than −2^{i+1} and capacity greater than 1. Then there are exactly x_i = 3 · 2^{i−1} − 1 iterations in which one unit of flow is routed from s_i to t_i along arcs of S_i^{~v,r}. Moreover:

1. In iteration j = 3k, k = 0, ..., 2^i − 1, arc (s_0, t_0) enters the basis carrying flow k mod 2 and immediately exits the basis again carrying flow (k + 1) mod 2. The cost incurred by arcs of S_i^{~v,r} is k + ~v[k]_i.

2. In iterations j = 3k + 1, 3k + 2, k = 0, ..., 2^i − 1, for some 1 ≤ i′ ≤ i, the cost incurred by arcs of S_i^{~v,r} is k + r + ~v[k]_{i′,i} and k + (1 − r) + ~v[k]_{i′,i}, in order of increasing cost. One of the arcs (s_{i′}, s_{i′−1}), (s_{i′−1}, t_{i′}) and one of the arcs (s_{i′}, t_{i′−1}), (t_{i′−1}, t_{i′}) each enter and leave the basis in these iterations.

Proof. First observe that throughout the execution of the Network Simplex Algorithm on G, one unit of flow must always be routed along both of the paths s_i, s_{i−1}, ..., s_0 and t_0, t_1, ..., t_i. This is because there is an initial flow of one along these paths, all of s_0, ..., s_{n−1} have in-degree 1, and all of t_0, ..., t_{n−1} have out-degree 1, which means that the flow cannot be rerouted.

We prove the lemma by induction on i, together with the additional property that after x_i iterations the arcs in S_{i−1}^{~v,r} carry their initial flow values, while the arcs in S_i^{~v,r} \ S_{i−1}^{~v,r} all carry x_i additional units of flow (which implies that (s_i, s_{i−1}) and (t_{i−1}, t_i) are saturated). Also, the configuration of the basis is identical to the initial configuration, except that the membership in the basis of arcs in S_i^{~v,r} \ S_{i−1}^{~v,r} is inverted.
In the following, we assume that r ∈ (2A, 1/2); the case where r ∈ (1/2, 1 − 2A) is analogous. In each iteration j, let P_j denote the tree-path outside of S_i^{~v,r} from t_i to s_i of cost c_j < −2^{i+1} and capacity greater than 1.

For i = 1, the Network Simplex Algorithm performs the following four iterations involving S_1^{~v,r} (cf. Figure 4 for an illustration embedded in S_2^{~v,r}). In the first iteration, (s_0, t_0) enters the basis and one unit of flow is routed along the cycle s_1, s_0, t_0, t_1, P_0 of cost v_1 + c_0 = ~v[0]_1 + c_0. This saturates arc (s_0, t_0), which is the unique arc to become tight (since P_0 has capacity greater than 1) and thus exits the basis again. In the second iteration, (s_0, t_1) enters the basis and one unit of flow is routed along the cycle s_1, s_0, t_1, P_1 of cost r + c_1 = r + ~v[0]_{1,1} + c_1, thus saturating (together with the initial flow of 1) arc (s_1, s_0) of capacity x_1 + 1 = 3. Since P_1 has capacity greater than 1, this is the only arc to become tight and it thus exits the basis. In the third iteration, (s_1, t_0) enters the basis and one unit of flow is routed along the cycle s_1, t_0, t_1, P_2 of cost (1 − r) + c_2 = (1 − r) + ~v[0]_{1,1} + c_2. Similar to before, (t_0, t_1) is the only arc to become tight and thus exits the basis. In the fourth and final iteration, (s_0, t_0) enters the basis and one unit of flow is routed along the cycle s_1, t_0, s_0, t_1, P_3 of cost 1 − v_1 + c_3 = 1 + ~v[1]_1 + c_3, which causes (s_0, t_0) to become empty and leave the basis. Thus, after four iterations, arc (s_0, t_0) in S_1^{~v,r} carries its initial flow of value 0, while the arcs in S_1^{~v,r} \ S_0^{~v,r} all carry x_1 additional units of flow.
Also, the arcs (s_0, t_1), (s_1, t_0) replaced the arcs (s_1, s_0), (t_0, t_1) in the basis.

To see, for i > 1, that S_i^{~v,r} is saturated after x_i units of flow have been routed from s_i to t_i, consider the directed s_i-t_i-cut in S_i^{~v,r} induced by {s_i, t_{i−1}} containing the arcs (s_i, s_{i−1}), (t_{i−1}, t_i). The capacity of this cut is exactly x_i + 2 and the initial flow over the cut is 2.

Now assume our claim holds for S_{i−1}^{~v,r} and S_{i−1}^{~v,1−r} and consider S_i^{~v,r}. Consider the first 2x_{i−1} iterations j = 0, ..., 2x_{i−1} − 1 and set k := ⌊j/3⌋ < 2^{i−1}. It can be seen inductively that the shortest path from t_{i−1} to s_{i−1} in the bidirected network associated with S_{i−1}^{~v,r} has cost at least −2^{i−1} + 1 − A > −2^{i−1} + 1 − r. Hence, every path from s_i to t_i using either or both of the arcs (s_i, t_{i−1}) or (s_{i−1}, t_i) has cost greater than 2^{i−1} − (1 − r) − A > 2^{i−1} − 1 + A. By induction, we can thus infer that none of these arcs enters the basis in iterations j < 2x_{i−1}, and instead an arc of S_{i−1}^{~v,r} enters (and exits) the basis and one unit of flow gets routed from s_i to t_i via the arcs (s_i, s_{i−1}), (t_{i−1}, t_i). We may use induction here since, before iteration j, the path t_{i−1}, t_i, P_j, s_i, s_{i−1} has cost v_i + c_j < v_i − 2^{i+1} < −2^i and its capacity is greater than 1, since both (s_i, s_{i−1}), (t_{i−1}, t_i) have capacity x_i + 1 = 2x_{i−1} + 2, leaving one unit of spare capacity even after a flow of 2x_{i−1} has been routed along them in addition to the initial unit of flow. The additional cost contributed by arcs (s_i, s_{i−1}), (t_{i−1}, t_i) is v_i, which is in accordance with our claim since ~v[k]_{ℓ,i−1} + v_i = ~v[k]_{ℓ,i} for all ℓ ∈ {1, ..., i − 1} and k ∈ {0, ..., 2^{i−1} − 1}.

Because S_{i−1}^{~v,r} is fully saturated after 2x_{i−1} iterations, in the next iteration j = 2x_{i−1} = 3 · 2^{i−1} − 2, k := ⌊j/3⌋ = 2^{i−1} − 1, arc (s_{i−1}, t_i) is added to the basis and one unit of flow is sent along the path s_i, s_{i−1}, t_i, thus saturating the capacity x_i + 1 = 2x_{i−1} + 2 of arc (s_i, s_{i−1}) and incurring a cost of 2^{i−1} − (1 − r) = k + r + ~v[k]_{i,i}. Note that this cost is higher than the cost of each of the previous iterations. The saturated arc has to exit the basis since, by assumption, P_j has capacity greater than 1. Similarly, in the following iteration j = 2x_{i−1} + 1 = 3 · 2^{i−1} − 1, k := ⌊j/3⌋ = 2^{i−1} − 1, the cost is 2^{i−1} − r = k + (1 − r) + ~v[k]_{i,i} and arc (t_{i−1}, t_i) is replaced by (s_i, t_{i−1}) in the basis.

By induction, at this point (s_{i−1}, t_{i−2}) and (s_{i−2}, t_{i−1}) are in the basis, the arcs of S_{i−1}^{~v,r} \ S_{i−2}^{~v,r} carry a flow of x_{i−1} in addition to their initial flow, and S_{i−2}^{~v,r} is back to its initial configuration. To be able to apply induction on the residual network of S_{i−1}^{~v,r}, we shift the costs of the arcs at s_{i−1} by −(2^{i−1} − r) and the costs of the arcs at t_{i−1} by −(2^{i−1} − (1 − r)) in the residual network of S_{i−1}^{~v,r}. Since we shift costs uniformly across cuts, this only affects the costs of paths but not the structural behavior of the gadget. Specifically, the costs of all paths from t_{i−1} to s_{i−1} in the residual network are increased by exactly 2^i − 1. If we switch the roles of s_{i−1} and t_{i−1}, say ˜s_{i−1} := t_{i−1} and ˜t_{i−1} := s_{i−1}, we obtain the residual network of S_{i−1}^{~v,1−r} with its initial flow. This allows us to use induction again for the next 2x_{i−1} iterations.

To apply the induction hypothesis, we need the tree-path from ˜t_{i−1} = s_{i−1} to ˜s_{i−1} = t_{i−1} to maintain cost smaller than −2^i and capacity greater than 1. This is fulfilled since P_j has cost smaller than −2^{i+1}, which is sufficient even with the additional cost of 2^i − 1 − v_i incurred by arcs (s_i, ˜s_{i−1}), (˜t_{i−1}, t_i). The residual capacity of (t_i, ˜t_{i−1}) and (˜s_{i−1}, s_i) is x_i = 2x_{i−1} + 1 > 2x_{i−1} and thus sufficient as well. By induction for S_{i−1}^{~v,1−r}, we may thus conclude that in iterations j = 2x_{i−1} + 2, ..., 2x_i − 1, k := ⌊j/3⌋ ≥ 2^{i−1}, one unit of flow is routed via (s_i, t_{i−1}), S_{i−1}^{~v,r}, (s_{i−1}, t_i). The cost of (s_i, ˜s_{i−1}) and (˜t_{i−1}, t_i) together is 2^i − 1 − v_i. The cost of iteration j′ = j − 2x_{i−1} − 2, k′ := ⌊j′/3⌋ = k − 2^{i−1}, in S_{i−1}^{~v,1−r} is k′ + y + ~v[k′]_{ℓ,i−1}, for y ∈ {0, r, 1 − r} and ℓ ∈ {1, ..., i − 1} chosen according to the different cases of the lemma. Accounting for the shift of the cost compared with the residual network of S_{i−1}^{~v,r}, the incurred total cost in S_i^{~v,r} is (2^i − 1 − v_i) + (k′ + y + ~v[k′]_{ℓ,i−1}) − (2^i − 1 − 2^{i−1}) = 2^{i−1} + k′ + y − v_i + ~v[k′]_{ℓ,i−1} = k + y + ~v[k]_{ℓ,i}, where we used −v_i + ~v[k′]_{ℓ,i−1} = ~v[k′ + 2^{i−1}]_{ℓ,i} since k′ < 2^{i−1}. This concludes the proof.

Lemma 4.
Arc e enters the basis in some iteration of the Network Simplex Algorithm on network G_ns^~a if and only if the Partition instance ~a has a solution.

Proof. First observe that ~a[2k]_n = ~a[2k+1]_n for k ∈ {0, ..., 2^{n−1} − 1} since, by assumption, a_n = 0. Similar to the proof of Lemma 2, in isolation each of the two gadgets can be in one of x_n + 1 states (Lemma 3), which we label by the number of iterations needed to reach each state. Assuming that both gadgets are in state k after some number of iterations, we show that both gadgets will reach state k + 12 together as well. In addition, we show that, in the iterations in-between, arc e enters the basis if and only if ~a[4k]_n = 0 (and thus ~a[4k+1]_n = 0) or ~a[4k+2]_n = 0 (and thus ~a[4k+3]_n = 0). Consider the situation where both gadgets are in state k. Note that in this state the arcs in S_{n−1}^{~a,1/2} and S_{n−1}^{−~a,1/2} are back in their original configuration.

Let P^± denote the tree-path from t^± to s^±, and let P^{±∓} denote the tree-path from t^∓ to s^±. We refer to these paths as the outer paths. Observe that, since the gadgets are in the same state, the costs of the outer paths differ by at most A < 1/4. In the next iterations, flow is sent along a cycle containing one of the outer paths, and we analyze only the part of each cycle without the outer path. Let P_0^±, P_1^±, P_2^±, P_3^± be the four successive shortest paths within the gadget S^{±~a,1/2}. The costs of these paths are ε, 1/2, 1/2, 1 − ε, respectively. Note that, since A < 1/4, the costs of the paths stay in the same relative order within each gadget throughout the algorithm.

If ~a[4k]_n < 0, then P^+ is the cheapest of the outer paths by a margin of more than ε/2. Thus, in the first iteration, (c^+, t_0^+) replaces (s_0^+, c^+) in the basis, closing the path P_0^+. In the next five iterations, the paths P_0^−, P_1^+, P_1^−, P_2^+, P_2^− are closed in this order. The final two iterations are P_3^+, P_3^−, similar to the first two iterations, as ~a[4k+1]_n = ~a[4k]_n < 0. At this point, both gadgets are in state k + 6.

If ~a[4k]_n > 0, then P^− is the cheapest of the outer paths by a margin of more than ε/2. Thus, the first iteration closes the path P_0^−. The next five iterations are via P_0^+, P_1^−, P_1^+, P_2^−, P_2^+, in this order. The final two iterations are P_3^−, P_3^+, similar to the first two iterations, as ~a[4k+1]_n = ~a[4k]_n > 0. At this point, both gadgets are in state k + 6.

If ~a[4k]_n = 0, then all four outer paths have the same cost. The first iteration is via the path s_1^+, s_0^+, c^+, c^−, t_0^−, t_1^−, i.e., arc e enters and leaves the basis, for a cost of 0 and an additional flow of 1/2. The next two iterations are via P_0^±, each for a cost of ε and an additional flow of 1/2. The fourth iteration is via the path s_1^−, s_0^−, c^−, c^+, t_0^+, t_1^+, i.e., arc e enters and leaves the basis again, for a cost of ε and an additional flow of 1/2. The next iterations are as before: via P_1^+, P_1^−, P_2^−, P_2^+, in this order. The final four iterations are similar to the first four iterations, again twice using e, as ~a[4k+1]_n = ~a[4k]_n = 0. At this point, both gadgets are in state k + 6.

The next four iterations (two for each gadget) do not involve the subnetworks S_{n−1}^{~a,1/2} and S_{n−1}^{−~a,1/2}, and thus do not use e. The iterations going from state k + 6 to state k + 12 are analogous to the above if we exchange the roles of s^± and t^±. This concludes the proof.

C Omitted proofs of Corollaries
Corollary 1. Determining the number of iterations needed by the Simplex Algorithm, the Network Simplex Algorithm, and the Successive Shortest Path Algorithm for a given input is NP-hard.

Proof. We first show that determining the number of iterations needed by the Successive Shortest Path Algorithm for a given minimum-cost flow instance is NP-hard. We replace the arc e in G_ssp^~a of Section 3 by two parallel arcs, each with a capacity of 1/2 and slightly perturbed costs. This way, every execution of the Successive Shortest Path Algorithm that previously did not use arc e is unaffected, while executions using e require additional iterations. Thus, by Lemma 2, the Successive Shortest Path Algorithm on network G_ssp^~a takes more than 2^{n+1} iterations if and only if the Partition instance ~a has a solution.

The proof for the Network Simplex Algorithm (and thus the Simplex Algorithm) follows from the proof of Lemma 4, observing that the Network Simplex Algorithm takes more than 2x_n iterations for network G_ns^~a if and only if the Partition instance ~a has a solution.

Corollary 2.
Deciding for a given linear program whether a given variable ever enters the basis during the execution of the Simplex Algorithm is NP-hard.

Proof. The proof is immediate via Lemma 4 and the fact that Partition is NP-hard.

Corollary 3.
Determining whether a parametric minimum-cost flow uses a given arc (i.e., assigns it a positive flow value for any parameter value) is NP-hard. In particular, determining whether the solution to a parametric linear program uses a given variable is NP-hard. Also, determining the number of different basic solutions over all parameter values is NP-hard.

Proof. This follows from the fact that the Successive Shortest Path Algorithm solves a parametric minimum-cost flow problem, together with Lemma 2 and Corollary 1.

Corollary 4. Given a d-dimensional polytope P by a system of linear inequalities, determining the number of vertices of P's projection onto a given 2-dimensional subspace is NP-hard.

Proof. Let P be the polytope of all feasible s-t-flows in network G_ssp^~a of Section 3.2. Consider the 2-dimensional subspace S defined by flow value and cost of a flow. Let P′ be the projection of P onto S. The lower envelope of P′ is the parametric minimum-cost flow curve for G_ssp^~a, while the upper envelope is the parametric maximum-cost flow curve for G_ssp^~a.

The s-t-paths of maximum cost in G_ssp^~a are the four paths via s_n, s_{n−1}, t_n or via s_n, t_{n−1}, t_n in both of the gadgets. Each of these paths has cost 2^n − 1, and the total capacity of all paths together is 2^{n+1}, which is equal to the maximum flow value from s to t. Therefore, the upper envelope of P′ consists of a single edge.

The number of edges on the lower envelope of P′ is equal to the number of different costs among all successive shortest paths in G_ssp^~a. If we slightly perturb the costs of the two arcs in G_ssp^~a with cost ε, we can ensure that each successive shortest path has a unique cost. The claim then follows by Corollary 1.

Corollary 5.
Determining the average arrival time of flow in an earliest arrival flow is NP-hard.

Sketch. The average arrival time can be obtained from the parametric minimum-cost flow curve considered in the proof of Corollary 4. By slightly perturbing the cost of arc e in network G_ssp^~a, the value of the average arrival time discloses whether e