About the Complexity of Two-Stage Stochastic IPs
Kim-Manuel Klein
University of Kiel
[email protected]
Abstract. We consider so-called 2-stage stochastic integer programs (IPs) and their generalized form, multi-stage stochastic IPs. A 2-stage stochastic IP is an integer program of the form $\max\{c^T x \mid \mathcal{A}x = b,\ l \leq x \leq u,\ x \in \mathbb{Z}^{s+nt}\}$, where the constraint matrix $\mathcal{A}$ consists roughly of $n$ repetitions of a block matrix $A \in \mathbb{Z}^{r \times s}$ on the vertical line and $n$ repetitions of a matrix $B \in \mathbb{Z}^{r \times t}$ on the diagonal.

In this paper we improve upon an algorithmic result by Hemmecke and Schultz from 2003 [17] for solving 2-stage stochastic IPs. The algorithm is based on the Graver augmentation framework, where our main contribution is to give an explicit doubly exponential bound on the size of the augmenting steps. The previous bound for the size of the augmenting steps relied on non-constructive finiteness arguments from commutative algebra, and therefore only an implicit bound was known that depends on the parameters $r, s, t$ and $\Delta$, where $\Delta$ is the largest entry of the constraint matrix. Our new improved bound, by contrast, is obtained by a novel theorem which argues about the intersection of paths in a vector space.

As a result of our new bound, we obtain an algorithm to solve 2-stage stochastic IPs in time $poly(n,t) \cdot f(r,s,\Delta)$, where $f$ is a doubly exponential function. To complement our result, we also prove a doubly exponential lower bound for the size of the augmenting steps.

∗ This work was mostly done during the author's time at EPFL. The project was supported by the Swiss National Science Foundation (SNSF) within the project Convexity, geometry of numbers, and the complexity of integer programming (Nr. 163071).

Introduction
Integer programming is one of the most fundamental problems in algorithm theory. Many problems in combinatorial optimization and other areas can be modeled as an integer program. An integer program (IP) is of the form

$$\max\{c^T x \mid Ax = b,\ l \leq x \leq u,\ x \in \mathbb{Z}^n\}$$

for some matrix $A \in \mathbb{Z}^{m \times n}$, a right-hand side $b \in \mathbb{Z}^m$, a cost vector $c \in \mathbb{Z}^n$ and lower and upper bounds $l, u \in \mathbb{Z}^n$. The famous algorithm of Kannan [21] computes an optimal solution of the IP in time of roughly $n^{O(n)} \cdot poly(m, \log\Delta)$, where $\Delta$ is the largest entry of $A$ and $b$.

In recent years there was significant progress in the development of algorithms for IPs when the constraint matrix $A$ has a specific structure. Consider for example the class of integer programs with a constraint matrix $N$ of the form

$$N = \begin{pmatrix} A & A & \cdots & A \\ B & 0 & \cdots & 0 \\ 0 & B & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & B \end{pmatrix}$$

for some block matrices $A \in \mathbb{Z}^{r \times t}$ and $B \in \mathbb{Z}^{s \times t}$. An IP of this specific structure is called an $n$-fold IP. This class of IPs has found numerous applications in the area of string algorithms [23], social choice games [12, 24] and scheduling [18, 22]. State-of-the-art algorithms compute a solution of an $n$-fold IP in time $poly(n,t) \cdot \Delta^{O(rs)}$ [9, 19, 26], where $\Delta$ is the largest entry in the matrices $A$ and $B$.

Stochastic programming deals with uncertainty in decision making over time [20]. One of the basic models in stochastic programming is 2-stage stochastic programming. In this model one has to decide on a solution in the first stage, and in the second stage there is an uncertainty where $n$ possible scenarios can happen. Each of the $n$ possible scenarios might have a different optimal solution, and the goal is to minimize the costs of the solution of the first stage plus the expected costs of the solution of the second stage. In the case that said scenarios can be modeled by an (integer) linear program, we speak of 2-stage stochastic (integer) linear programs.
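As a concrete aside, the $n$-fold block structure defined above is easy to assemble programmatically. The following numpy sketch is our own illustration (the toy blocks are assumptions chosen for the example, not taken from the paper):

```python
import numpy as np

def nfold_matrix(A, B, n):
    """Assemble the n-fold constraint matrix: n copies of A side by side
    in the top block row, and n copies of B on the diagonal below."""
    r, t = A.shape
    s, t2 = B.shape
    assert t == t2, "A and B must have the same number of columns"
    top = np.hstack([A] * n)                     # shape (r, n*t)
    diag = np.kron(np.eye(n, dtype=int), B)      # shape (n*s, n*t)
    return np.vstack([top, diag])

# Toy blocks with r = 1, s = 1, t = 2 (our own choice, for illustration only).
A = np.array([[1, 2]])
B = np.array([[1, -1]])
N = nfold_matrix(A, B, 3)
print(N.shape)   # (4, 6): r + n*s rows and n*t columns
```

Note the use of `np.kron` with an identity matrix, a compact idiom for placing copies of $B$ on the diagonal.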
2-stage stochastic linear programs that do not contain any integer variables are well understood (we refer to standard textbooks [3, 20]). In contrast, 2-stage stochastic programs that contain integer variables are hard to solve and are a topic of ongoing research. Typically, those IPs are investigated in the context of decomposition based methods (we refer to a tutorial [27] or a survey [31] on the topic). For recent progress on 2-stage stochastic programs we refer to [1, 4, 31]. The interest in solving 2-stage stochastic (I)LPs efficiently stems from their wide range of applications, for example in modeling manufacturing processes [8] or energy planning [16].

In this paper we consider 2-stage stochastic IPs with only integral variables. These so-called pure integral 2-stage stochastic IPs have also been considered in the literature from a practical perspective (see [13, 33]). The considered IP is then of the form

$$\max\ c^T x \qquad (1)$$
$$\mathcal{A}x = b$$
$$\ell \leq x \leq u$$
$$x \in \mathbb{Z}^{s+nt}$$

for a given objective vector $c \in \mathbb{Z}^{s+nt}$ and upper and lower bounds $\ell, u \in \mathbb{Z}^{s+nt}$. The constraint matrix $\mathcal{A}$ has the shape

$$\mathcal{A} = \begin{pmatrix} A^{(1)} & B^{(1)} & 0 & \cdots & 0 \\ A^{(2)} & 0 & B^{(2)} & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ A^{(n)} & 0 & \cdots & 0 & B^{(n)} \end{pmatrix}$$

for given block matrices $A^{(1)}, \ldots, A^{(n)} \in \mathbb{Z}^{r \times s}$ and $B^{(1)}, \ldots, B^{(n)} \in \mathbb{Z}^{r \times t}$. Typically, 2-stage stochastic IPs are written in a slightly different (equivalent) form that explicitly involves the scenarios and the probability distribution over the scenarios of the second stage. In the form presented here, roughly speaking, the solution for the first stage is encoded in the variables corresponding to the vertical block matrices. A solution for each of the second stage scenarios is encoded in the variables corresponding to one of the diagonal block matrices, and the expectation over the second stage scenarios can be encoded in a linear objective function.
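The 2-stage block shape of $\mathcal{A}$ can likewise be assembled mechanically. The following sketch is our own illustration with tiny example blocks (an assumption for demonstration, not an instance from the paper):

```python
import numpy as np

def two_stage_matrix(A_blocks, B_blocks):
    """Assemble the 2-stage stochastic constraint matrix: the A^(i) stacked
    in the first s columns, the B^(i) placed block-diagonally to the right."""
    n = len(A_blocks)
    r, s = A_blocks[0].shape
    t = B_blocks[0].shape[1]
    M = np.zeros((n * r, s + n * t), dtype=int)
    for i, (A_i, B_i) in enumerate(zip(A_blocks, B_blocks)):
        M[i * r:(i + 1) * r, :s] = A_i                          # vertical line
        M[i * r:(i + 1) * r, s + i * t:s + (i + 1) * t] = B_i   # diagonal line
    return M

# Toy instance with n = 2, r = 1, s = 1, t = 2 (values chosen for illustration).
A_blocks = [np.array([[1]]), np.array([[1]])]
B_blocks = [np.array([[2, 0]]), np.array([[0, 3]])]
M = two_stage_matrix(A_blocks, B_blocks)
print(M)
# [[1 2 0 0 0]
#  [1 0 0 0 3]]
```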
Since we do not rely on known techniques of stochastic programming in this paper, we omit the technicalities surrounding 2-stage stochastic IPs and simply refer to a survey for further details [31].

Despite their similarity, it seems that 2-stage stochastic IPs are significantly harder to solve than $n$-fold IPs. While it is known that 2-stage stochastic IPs with constraint matrix $\mathcal{A}$ can be solved in a running time of the form $poly(n) \cdot f(r,s,t,\Delta)$ for some computable function $f$, due to an algorithm developed by Hemmecke and Schultz [17], the actual dependency on the parameters $r, s, t, \Delta$ was unknown (we elaborate on this further in the coming section). Their algorithm is based on the augmentation framework, which we also discuss in the following section.

Suppose we have an initial feasible solution $z$ of an IP $\max\{c^T x \mid Ax = b,\ l \leq x \leq u,\ x \in \mathbb{Z}^n\}$ and our goal is to find an optimal solution. The idea behind the augmentation framework (see [7]) is to compute an augmenting (integral) vector $y$ in the kernel, i.e. $y \in \ker(A)$ with $c^T y >$
$0$. A new solution $z'$ with improved objective can then be defined by $z' = z + \lambda y$ for a suitable $\lambda \in \mathbb{Z}_{>0}$. This procedure is iterated until a solution with optimal objective is eventually obtained.

We call an integer vector $y \in \ker(A)$ a cycle. A cycle can be decomposed if there exist integral vectors $u, v \in \ker(A) \setminus \{0\}$ with $y = u + v$ and $u_i \cdot v_i \geq 0$ for every index $i$ (i.e. the vectors are sign-compatible with $y$). An integral vector $y \in \ker(A)$ that can not be decomposed is called a Graver element [14], or we simply say that it is indecomposable. The set of all indecomposable elements is called the
Graver basis.

The power of the augmentation framework is based on the observation that the size of Graver elements, and therefore also the size of the Graver basis, can be bounded. With the help of these bounds, good augmenting steps can be computed by a dynamic program, and finally the corresponding IP can be solved efficiently.

In the case that the constraint matrix has a very specific structure, one can sometimes show improved bounds. Specifically, if the constraint matrix $\mathcal{A}$ has a 2-stage stochastic shape with identical block matrices on the vertical and on the diagonal line, then Hemmecke and Schultz [17] were able to prove a bound for the size of Graver elements that only depends on the parameters $r, s, t$ and $\Delta$. The presented bound is an existential result and uses so-called saturation results from commutative algebra. As Maclagan's theorem is used in the proof of the bound, no explicit function can be derived. It is only known that the dependency on the parameters is lower bounded by Ackermann's function [28]. This implies that the implicit bound for the size of Graver elements by Hemmecke and Schultz can not be improved beyond an Ackermannian dependency in the parameters $r, s, t$ and $\Delta$. In a very recent paper it was even conjectured that an algorithm with an explicit bound on the parameters $r, s, t$ and $\Delta$ in the running time to solve IPs of the form (1) does not exist [25].

Very recently, improved bounds for Graver elements of general matrices and of matrices with specific structure like $n$-fold [9] or 4-block structure [5] were developed. They are based on the Steinitz Lemma, which was previously also used by Eisenbrand and Weismantel [10] in the context of integer programming.

Lemma 1 (Steinitz [15, 32]). Let $v_1, \ldots, v_n \in \mathbb{R}^m$ be vectors with $\|v_i\|_\infty \leq \Delta$ for $1 \leq i \leq n$. Assuming that $\sum_{i=1}^{n} v_i = 0$, there is a permutation $\Pi$ such that for each $k \in \{1, \ldots, n\}$ the norm of the partial sum $\|\sum_{i=1}^{k} v_{\Pi(i)}\|_\infty$ is bounded by $m\Delta$.

The Steinitz Lemma was used by Eisenbrand, Hunkenschröder and Klein [9] to bound the size of Graver elements of a given matrix $A$. As we use the following theorem and its technique in this paper, we give a brief sketch of its proof.

Theorem 1 (Eisenbrand, Hunkenschröder, Klein [9]). Let $A \in \mathbb{Z}^{m \times n}$ be an integer matrix where every entry of $A$ is bounded by $\Delta$ in absolute value. Let $g \in \mathbb{Z}^n$ be an element of the Graver basis of $A$. Then $\|g\|_1 \leq (2m\Delta + 1)^m$.

Proof. Consider the sequence of vectors $v_1, \ldots, v_{\|g\|_1}$ consisting of $g_i$ copies of the $i$-th column of $A$ if $g_i$ is positive and $|g_i|$ copies of the negative of the $i$-th column of $A$ if $g_i$ is negative. As $g$ lies in the kernel of $A$, we obtain that $v_1 + \ldots + v_{\|g\|_1} = 0$. Using the Steinitz Lemma above, there exists a reordering $u_1 + \ldots + u_{\|g\|_1}$ of the vectors such that each partial sum $p_k = \|\sum_{i=1}^{k} u_i\|_\infty$ is bounded by $m\Delta$ for each $k \leq \|g\|_1$.

Suppose by contradiction that $\|g\|_1 > (2m\Delta + 1)^m$. Then by the pigeonhole principle there exist two partial sums that sum up to the same value. However, this means that $g$ can be decomposed and hence can not be a Graver element.

The main result of this paper is to prove a new structural lemma that enhances the toolset of the augmentation framework. We show that this lemma can be directly used to obtain an explicit bound for Graver elements of the constraint matrix of 2-stage stochastic IPs. But we think that it might also be of independent interest, as it provides interesting structural insights into vector sets.
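To make these notions tangible, here is a small brute-force computation of ours (not the paper's method, which never enumerates Graver bases explicitly): it lists the kernel vectors of a toy matrix inside a box, keeps the indecomposable ones, and checks Theorem 1's bound on them.

```python
import itertools

def kernel_vectors(A, bound):
    """Nonzero integer vectors y with A y = 0 and |y_j| <= bound."""
    n = len(A[0])
    for y in itertools.product(range(-bound, bound + 1), repeat=n):
        if any(y) and all(sum(r[j] * y[j] for j in range(n)) == 0 for r in A):
            yield y

def is_sign_compatible_part(u, y):
    """u lies between 0 and y componentwise: same signs, no larger magnitude."""
    return all(ui * yi >= 0 and abs(ui) <= abs(yi) for ui, yi in zip(u, y))

def graver_basis(A, bound):
    """Kernel vectors in the box that admit no proper sign-compatible
    decomposition y = u + (y - u) into two nonzero kernel vectors."""
    ker = list(kernel_vectors(A, bound))
    return [y for y in ker
            if not any(u != y and is_sign_compatible_part(u, y) for u in ker)]

A = [[1, 1, -1]]                 # toy matrix with m = 1, Delta = 1
G = graver_basis(A, 2)
print(len(G))                    # 6: the elements ±(1,0,1), ±(0,1,1), ±(1,-1,0)
# Theorem 1 check: ||g||_1 <= (2*m*Delta + 1)^m = 3 for every Graver element.
assert all(sum(map(abs, g)) <= 3 for g in G)
```

Note that a sign-compatible part $u$ of $y$ always yields a full decomposition, since $y - u$ is then automatically a sign-compatible kernel vector as well.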
Lemma 2.
Given multisets $T_1, \ldots, T_n \subset \mathbb{Z}^d_{\geq 0}$ where all elements $t \in T_i$ have bounded size $\|t\|_\infty \leq \Delta$. Assuming that the total sum of all elements in each set is equal, i.e.

$$\sum_{t \in T_1} t = \ldots = \sum_{t \in T_n} t,$$

then there exist nonempty submultisets $S_1 \subseteq T_1, \ldots, S_n \subseteq T_n$ of bounded size $|S_i| \leq (d\Delta)^{O(d(\Delta^d))}$ such that

$$\sum_{s \in S_1} s = \ldots = \sum_{s \in S_n} s.$$

Note that this lemma only makes sense when we consider the $T_i$ to be multisets, as the number of different sets without allowing multiplicity of vectors would be bounded by $2^{(\Delta+1)^d}$.

A geometric interpretation of the lemma is given in the following figure. On the left side we have $n$ paths consisting of sets of vectors, and all paths end at the same point $b$.

[Figure: $n$ vector paths from $0$ to a common endpoint $b$; after permuting the vectors of each path, all paths additionally pass through a common intermediate point $b'$ of bounded size.]

The lemma shows that there always exist permutations of the vectors of each path such that all paths meet at a point $b'$ of bounded size. The bound depends only on $\Delta$ and the dimension $d$, and is thus independent of the number of paths $n$ and the size of $b$. For the proof of the lemma we need basic properties of the intersection of integer cones. We show that those properties can be obtained by using the Steinitz Lemma.

We show that Lemma 2 has strong implications in the context of integer programming. Using the lemma, we can show that the size of Graver elements of the matrix $\mathcal{A}$ is bounded by $(rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$. Within the framework of Graver augmenting steps, this bound implies that 2-stage stochastic IPs can be solved in time $nt\,\varphi \log(nt) \cdot (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$, where $\varphi$ is the encoding length of the instance. With this we improve upon the implicit bound for the size of the Graver elements of 2-stage stochastic constraint matrices due to Hemmecke and Schultz [17].

Furthermore, we show that our lemma can also be applied to bound the size of Graver elements of constraint matrices that have a multi-stage stochastic structure. Multi-stage stochastic IPs are a well known generalization of 2-stage stochastic IPs.
By this, we improve upon a result of Aschenbrenner and Hemmecke [2].

To complement our results for the upper bound, we also present a lower bound for the size of Graver elements of matrices that have a 2-stage stochastic structure. The given lower bound is for the case $r = 1$. In this case we present a matrix whose Graver elements have a size of $2^{\Omega(\Delta^s)}$.

First, we argue about the application of our main Lemma 2. In the following we show that the infinity-norm of Graver elements of matrices with a 2-stage stochastic structure can be bounded by using the lemma. Given the block structure of IP (1), we define for a vector $y \in \mathbb{Z}^{s+nt}$ with $\mathcal{A}y = 0$ the vector $y^{(0)} \in \mathbb{Z}^s$, which consists of the entries of $y$ that belong to the vertical block matrices $A^{(i)}$, and we define $y^{(i)} \in \mathbb{Z}^t$ to be the entries of $y$ that belong to the diagonal block matrix $B^{(i)}$.

Theorem 2.
Let $y$ be a Graver element of the constraint matrix $\mathcal{A}$ of IP (1). Then $\|y\|_\infty$ is bounded by $(rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$. More precisely, $\|y^{(i)}\|_1 \leq (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$ for every $0 \leq i \leq n$.

Proof. Let $y \in \mathbb{Z}^{s+nt}$ be a cycle of IP (1), i.e. $\mathcal{A}y = 0$. Consider the submatrix $(A^{(i)}\ B^{(i)}) \in \mathbb{Z}^{r \times (s+t)}$ of the matrix $\mathcal{A}$, consisting of the block matrix $A^{(i)}$ of the vertical line and the block matrix $B^{(i)}$ of the diagonal line. Consider further the corresponding variables $v^{(i)} = \binom{y^{(0)}}{y^{(i)}} \in \mathbb{Z}^{s+t}$ of the respective matrices $A^{(i)}$ and $B^{(i)}$. Since $\mathcal{A}y = 0$, we also have $(A^{(i)}\ B^{(i)})\, v^{(i)} = 0$. Hence, we can decompose $v^{(i)}$ into a multiset $C_i$ of indecomposable elements, i.e. $v^{(i)} = \sum_{c \in C_i} c$. By Theorem 1 we obtain the bound $\|c\|_1 \leq (2r\Delta+1)^r$ for each $c \in C_i$.

Since all matrices $(A^{(i)}\ B^{(i)})$ share the same set of variables in the overlapping block matrices $A^{(i)}$, we can not choose indecomposable elements independently in each block to obtain a cycle of smaller size for the entire matrix $\mathcal{A}$. Let $p : \mathbb{Z}^{s+t} \to \mathbb{Z}^s$ be the projection that maps a cycle $c$ of a block matrix $(A^{(i)}\ B^{(i)})$ to the variables of the overlapping part, i.e. $p(c) = p(\binom{c^{(0)}}{c^{(i)}}) = c^{(0)}$. In the case that $\|y\|_\infty$ is large, we will show that we can find a cycle $\bar{y}$ of smaller length with $\bar{y} \leq y$. In order to obtain this cycle $\bar{y}$ for the entire matrix $\mathcal{A}$, we have to find a multiset of cycles $\bar{C}_i \subseteq C_i$ in each block matrix $(A^{(i)}\ B^{(i)})$ such that the sums of the projected parts are identical, i.e. $\sum_{c \in \bar{C}_1} p(c) = \ldots = \sum_{c \in \bar{C}_n} p(c)$. We apply Lemma 2 to the multisets $p(C_1), \ldots, p(C_n)$, where $p(C_i) = \{p(c) \mid c \in C_i\}$ is the multiset of projected elements of $C_i$ with $\|p(c)\|_1 \leq (2r\Delta+1)^r$. Note that $\sum_{x \in p(C_1)} x = \ldots = \sum_{x \in p(C_n)} x = y^{(0)}$, and hence the conditions to apply Lemma 2 are fulfilled.
Since every $v^{(i)}$ is decomposed in a sign-compatible way, the vectors in $p(C_i)$ agree in sign in every entry. Hence we can flip the negative signs in order to apply Lemma 2, which is stated for nonnegative vectors.

By the lemma, there exist submultisets $S_1 \subseteq p(C_1), \ldots, S_n \subseteq p(C_n)$ such that $\sum_{x \in S_1} x = \ldots = \sum_{x \in S_n} x$ and $|S_i| \leq (s\|c\|_1)^{O(s(\|c\|_1^s))} = (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$. As there exist submultisets $\bar{C}_1 \subseteq C_1, \ldots, \bar{C}_n \subseteq C_n$ with $p(\bar{C}_1) = S_1, \ldots, p(\bar{C}_n) = S_n$, we can use those submultisets $\bar{C}_i$ to define a solution $\bar{y} \leq y$. For $i > 0$ let $\bar{y}^{(i)} = \sum_{c \in \bar{C}_i} \bar{p}(c)$, where $\bar{p}$ is the projection that maps a cycle $c \in \bar{C}_i$ to the part that belongs to matrix $B^{(i)}$, i.e. $\bar{p}(\binom{c^{(0)}}{c^{(i)}}) = c^{(i)}$. And let $\bar{y}^{(0)} = \sum_{c \in \bar{C}_i} p(c)$ for an arbitrary $i >$
$0$, which is well defined as the sum is identical for all multisets $\bar{C}_i$. As the cardinality of the multisets $\bar{C}_i$ is bounded, we know by construction of $\bar{y}$ that the one-norm of every $\bar{y}^{(i)}$ is bounded by

$$\|\bar{y}^{(i)}\|_1 \leq (2r\Delta+1)^r \cdot (rs\Delta)^{O(rs(2r\Delta+1)^{rs})} = (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}.$$

A Graver element $y$ exceeding these bounds would decompose into the sign-compatible parts $\bar{y}$ and $y - \bar{y}$, a contradiction. This directly implies the stated infinity-norm bound for $y$ as well.

Computing the Augmenting Step
As a direct consequence of the bound on the size of the Graver elements, we obtain via the framework of augmenting steps an efficient algorithm to compute an optimal solution of a 2-stage stochastic IP. For this we can use the algorithm by Hemmecke and Schultz [17], or a more recent result by Koutecky, Levin and Onn [26] which gives a strongly polynomial algorithm. Using these algorithms directly would result in an algorithm with a running time of the form $poly(n) \cdot f(r,s,t,\Delta)$ for some doubly exponential function $f$ involving the parameters $r, s, t$ and $\Delta$. In the following, however, we briefly explain how the augmenting step can be computed in order to obtain an algorithm with a running time that is polynomial in $t$.

Suppose we are given a feasible solution $z \in \mathbb{Z}^{s+nt}$ of IP (1) and a multiple $\lambda \in \mathbb{Z}_{>0}$ (which can be guessed). A core ingredient in the augmenting framework is to find an augmenting step. For this, we have to compute a Graver element $y \in \ker(\mathcal{A})$ such that $z + \lambda y$ is a feasible solution of IP (1) and the objective $\lambda c^T y$ is maximal over all Graver elements.

Let $L = (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$ be the bound for $\|y^{(i)}\|_1$ that we obtain from Theorem 2. To find the optimal augmenting step, it is sufficient to solve the IP $\max\{c^T x \mid \mathcal{A}x = 0,\ \bar{\ell} \leq x \leq \bar{u},\ \|x\|_\infty \leq L\}$ for modified upper and lower bounds $\bar{\ell}, \bar{u}$ according to the multiple $\lambda$ and the feasible solution $z$. Having the best augmenting step at hand, one can show that the objective value improves by a factor of $1 - \frac{1}{n}$. This is due to the fact that the difference $z - z^*$ between $z$ and an optimal solution $z^*$ can be represented by

$$z - z^* = \sum_{i=1}^{n} \lambda_i g_i$$

for Graver elements $g_1, \ldots, g_n$ and multiplicities $\lambda_1, \ldots, \lambda_n \in \mathbb{Z}_{\geq 0}$ [6].

In the following we briefly show how to solve the IP $\max\{c^T x \mid \mathcal{A}x = 0,\ \bar{\ell} \leq x \leq \bar{u},\ \|x\|_\infty \leq L\}$ in order to compute the augmenting step.
The algorithm works as follows:

• Compute for every $y^{(0)}$ with $\|y^{(0)}\|_1 \leq L$ the objective value of the cycle $y$ consisting of $y^{(0)}, \bar{y}^{(1)}, \ldots, \bar{y}^{(n)}$, where $\bar{y}^{(i)}$ for $i > 0$ is an optimal solution of the IP

$$\max\ (c^{(i)})^T \bar{y}^{(i)}$$
$$B^{(i)} \bar{y}^{(i)} = -A^{(i)} y^{(0)}$$
$$\bar{\ell}^{(i)} \leq \bar{y}^{(i)} \leq \bar{u}^{(i)}$$

where $\bar{\ell}^{(i)}, \bar{u}^{(i)}$ are the upper and lower bounds for the variables $\bar{y}^{(i)}$ and $c^{(i)}$ is their corresponding objective vector. Note that the first set of constraints of the IP ensures that $\mathcal{A}y = 0$. The IPs can be solved with the algorithm of Eisenbrand and Weismantel [10] in time $\Delta^{O(r)}$ each.

• Return the cycle with maximum objective.

As the number of different vectors $y^{(0)}$ with 1-norm $\leq L$ is bounded by $(L+1)^s = (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$, step 1 of the algorithm takes time $\Delta^{O(r)} \cdot (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$.

Putting all things together, we obtain the following theorem regarding the worst-case complexity of solving 2-stage stochastic IPs. For details regarding the remaining parts of the augmenting framework, like finding an initial feasible solution or a bound on the number of required augmenting steps, we refer to [9] and [26].

Theorem 3. A 2-stage stochastic IP of the form (1) can be solved in time $nt\,\varphi \log(nt) \cdot (rs\Delta)^{O(rs(2r\Delta+1)^{rs})}$, where $\varphi$ is the encoding length of the IP.

Before we are ready to prove our main Lemma 2, we need two helpful observations about the intersection of integer cones. An integer cone is defined for a given (finite) generating set $B \subset \mathbb{Z}^d_{\geq 0}$ of elements by

$$int.cone(B) = \{\sum_{b \in B} \lambda_b b \mid \lambda \in \mathbb{Z}^B_{\geq 0}\}.$$

An element $b$ of an integer cone $int.cone(B)$ is indecomposable if there do not exist elements $b_1, b_2 \in int.cone(B) \setminus \{0\}$ such that $b = b_1 + b_2$.
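Returning to the two-step enumeration described above, it can be sketched in code. This is a toy of ours: the tiny instance is an assumption, and the inner solver is a brute-force stand-in for the Eisenbrand-Weismantel algorithm that the paper actually uses.

```python
from itertools import product

# Toy 2-stage data (our own example, n = 2 scenarios, r = s = t = 1):
A_blocks = [[[1]], [[1]]]        # A^(i) blocks
B_blocks = [[[2]], [[3]]]        # B^(i) blocks
c0, c = [1], [[1], [1]]          # objective for y^(0) and for each y^(i)
L = 6                            # illustrative norm bound on augmenting steps

def best_block_solution(B, rhs, c_i, bound):
    """Brute-force stand-in for the inner IP solver:
    maximize c_i^T v subject to B v = rhs and |v_j| <= bound."""
    best = None
    t = len(B[0])
    for v in product(range(-bound, bound + 1), repeat=t):
        if all(sum(B[k][j] * v[j] for j in range(t)) == rhs[k]
               for k in range(len(B))):
            val = sum(ci * vi for ci, vi in zip(c_i, v))
            if best is None or val > best[0]:
                best = (val, v)
    return best

def best_augmenting_cycle():
    """Enumerate all y^(0) with ||y^(0)||_1 <= L; for each, solve one small
    IP per block with right-hand side -A^(i) y^(0); keep the best cycle."""
    best = None
    for y0 in product(range(-L, L + 1), repeat=len(c0)):
        if not any(y0) or sum(abs(v) for v in y0) > L:
            continue
        total, parts, feasible = sum(a * b for a, b in zip(c0, y0)), [], True
        for A_i, B_i, c_i in zip(A_blocks, B_blocks, c):
            rhs = [-sum(A_i[k][j] * y0[j] for j in range(len(y0)))
                   for k in range(len(A_i))]
            sol = best_block_solution(B_i, rhs, c_i, L)
            if sol is None:
                feasible = False
                break
            total += sol[0]
            parts.append(sol[1])
        if feasible and (best is None or total > best[0]):
            best = (total, y0, parts)
    return best

best = best_augmenting_cycle()
print(best)   # (1, (6,), [(-3,), (-2,)]): the cycle y^(0)=6, y^(1)=-3, y^(2)=-2
```

The design point is that only $y^{(0)}$ couples the scenarios: once it is fixed, the $n$ block IPs decouple and can be solved independently.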
We can assume that the generating set $B$ of an integer cone consists only of its indecomposable elements, as any decomposable element can be removed from the generating set. In the following we use a vector set $B$ interchangeably as a matrix and vice versa, where the elements of the set $B$ are the columns of the matrix $B$. This way we can multiply $B$ with a vector, i.e. $B\lambda = \sum_{b \in B} \lambda_b b$ for some $\lambda \in \mathbb{Z}^B$.

Lemma 3.
Given two integer cones $int.cone(B^{(1)})$ and $int.cone(B^{(2)})$ for some generating sets $B^{(1)}, B^{(2)} \subset \mathbb{Z}^d$, where each element $x \in B^{(1)} \cup B^{(2)}$ has bounded norm $\|x\|_\infty \leq \Delta$. Consider the integer cone of the intersection

$$int.cone(\hat{B}) = int.cone(B^{(1)}) \cap int.cone(B^{(2)})$$

for some generating set of elements $\hat{B}$. Then for each generating element $b \in \hat{B}$ of the intersection cone with $b = B^{(1)}\lambda = B^{(2)}\gamma$ for some $\lambda \in \mathbb{Z}^{B^{(1)}}_{\geq 0}$ and $\gamma \in \mathbb{Z}^{B^{(2)}}_{\geq 0}$, we have that $\|\lambda\|_1, \|\gamma\|_1 \leq (2d\Delta+1)^d$. Furthermore, the size of $b$ is bounded by $\|b\|_\infty \leq \Delta(2d\Delta+1)^d$.

Proof.
Consider the representation of a point $b = B^{(1)}\lambda = B^{(2)}\gamma$ in the intersection of $int.cone(B^{(1)})$ and $int.cone(B^{(2)})$. The sum $v_1 + \ldots + v_{\|\lambda\|_1 + \|\gamma\|_1}$, consisting of $\lambda_i$ copies of the $i$-th element of $B^{(1)}$ and $\gamma_i$ copies of the negative of the $i$-th element of $B^{(2)}$, equals zero. Using the Steinitz Lemma, there exists a reordering $u_1 + \ldots + u_{\|\lambda\|_1 + \|\gamma\|_1}$ of the vectors such that every partial sum satisfies $\|\sum_{i=1}^{\ell} u_i\|_\infty \leq d\Delta$ for each $\ell \leq \|\lambda\|_1 + \|\gamma\|_1$.

If $\|\lambda\|_1 + \|\gamma\|_1 > (2d\Delta+1)^d$, then by the pigeonhole principle there exist two partial sums of the same value. Hence, there are two subsequences that each sum up to zero, i.e. there exist non-zero vectors $\lambda', \lambda'' \in \mathbb{Z}^{B^{(1)}}_{\geq 0}$ with $\lambda = \lambda' + \lambda''$ and $\gamma', \gamma'' \in \mathbb{Z}^{B^{(2)}}_{\geq 0}$ with $\gamma = \gamma' + \gamma''$ such that $B^{(1)}\lambda' - B^{(2)}\gamma' = 0$ and $B^{(1)}\lambda'' - B^{(2)}\gamma'' = 0$. Hence $B^{(1)}\lambda' = B^{(2)}\gamma'$ and $B^{(1)}\lambda'' = B^{(2)}\gamma''$ are elements of the intersection cone. This implies that $b$ can be decomposed in the intersection cone.

Using a similar argumentation as in the previous lemma, we can consider the intersection of several integer cones. Note that we can not simply use the above lemma inductively, as this would lead to worse bounds.

Lemma 4.
Consider integer cones $int.cone(B^{(1)}), \ldots, int.cone(B^{(\ell)})$ for some generating sets $B^{(1)}, \ldots, B^{(\ell)} \subset \mathbb{Z}^d_{\geq 0}$ with $\|x\|_\infty \leq \Delta$ for each $x \in B^{(i)}$. Consider the integer cone of the intersection

$$int.cone(\hat{B}) = \bigcap_{i=1}^{\ell} int.cone(B^{(i)})$$

for some generating set of elements $\hat{B}$. Then for each generating element $b \in \hat{B}$ with $B^{(i)}\lambda^{(i)} = b$ for some $\lambda^{(i)} \in \mathbb{Z}^{B^{(i)}}_{\geq 0}$ in the intersection cone, we have that $\|\lambda^{(i)}\|_1 \leq O((d\Delta)^{d(\ell-1)})$ for all $1 \leq i \leq \ell$.

Proof. Given vectors $\lambda^{(1)}, \ldots, \lambda^{(\ell)}$ with $\lambda^{(k)} \in \mathbb{Z}^{B^{(k)}}_{\geq 0}$ and $B^{(k)}\lambda^{(k)} = b$ for each $k \leq \ell$, consider for each $1 \leq k \leq \ell$ the sum of vectors $v^{(k)}_1 + \ldots + v^{(k)}_{\|\lambda^{(k)}\|_1}$ consisting of $\lambda^{(k)}_j$ copies of the $j$-th element of $B^{(k)}$. By adding $0$ vectors to the sums we can assume without loss of generality that every sequence has the same number of summands $L = \max_{i=1,\ldots,\ell} \|\lambda^{(i)}\|_1$.

Claim:
There exists a reordering $u^{(k)}_1 + \ldots + u^{(k)}_L$ of each of these sums such that each partial sum $p^{(k)}_m = \sum_{i=1}^{m} u^{(k)}_i$ is close to the line between $0$ and $b$, more precisely

$$\left\| p^{(k)}_m - \frac{m}{L} b \right\|_\infty \leq 4\Delta(d+1)$$

for each $m \leq L$ and each $k \leq \ell$. To see this, we construct for each $k$ a sequence that consists of the vectors from $B^{(k)}$ and subtracts $L$ fractional parts of the vector $b$. To count the number of vectors we use an additional component with weight $\Delta$ and define

$$\bar{v}^{(k)}_i = \begin{pmatrix} \Delta \\ v^{(k)}_i \end{pmatrix} \quad \text{and} \quad \bar{b} = \begin{pmatrix} L\Delta \\ b \end{pmatrix}.$$

Note that $\|\bar{v}^{(k)}_i\|_\infty, \|\frac{1}{L}\bar{b}\|_\infty \leq \Delta$ and that $\bar{v}^{(k)}_1 + \ldots + \bar{v}^{(k)}_L - \frac{1}{L}\bar{b} - \ldots - \frac{1}{L}\bar{b}$ sums up to zero, as $v^{(k)}_1 + \ldots + v^{(k)}_L = b$. Hence we can apply the Steinitz Lemma to obtain a reordering $\bar{u}_1 + \ldots + \bar{u}_{2L}$ of each sequence such that every partial sum satisfies $\|\sum_{i=1}^{m} \bar{u}_i\|_\infty \leq (d+1)\Delta$ for each $m \leq 2L$. A partial sum up to index $m$ contains $p$ vectors $\bar{v}^{(k)}_j$ and $q$ vectors $-\frac{1}{L}\bar{b}$ for some $p, q \in \mathbb{Z}_{\geq 0}$ with $m = p + q$. Hence $\|\sum_{i=1}^{p} v_i - \frac{q}{L}b\|_\infty \leq (d+1)\Delta$. Furthermore, the $\Delta$-entry of each vector guarantees that $|p - q| \leq d+1$, and since $\|\frac{1}{L}b\|_\infty \leq \Delta$ we obtain $\|\sum_{i=1}^{p} v_i - \frac{p}{L}b\|_\infty \leq 2\Delta(d+1) \leq 4\Delta(d+1)$, which implies the statement of the claim.

Now consider the difference of a partial sum $p^{(k)}_m$ with $p^{(1)}_m$. Using the claim from above, we can argue that $\|p^{(1)}_m - p^{(k)}_m\|_\infty \leq 8\Delta(d+1)$ for each $m \leq L$ and $k \leq \ell$, as each $p^{(k)}_m$ is close to $\frac{m}{L}b$. Therefore the number of different values of $p^{(1)}_m - p^{(k)}_m$ is bounded by $(16\Delta(d+1)+1)^d$, and the number of different $(\ell-1)$-tuples $(p^{(1)}_m - p^{(2)}_m, \ldots, p^{(1)}_m - p^{(\ell)}_m)$ by $(16\Delta(d+1)+1)^{d(\ell-1)}$. Assuming that $L > (16\Delta(d+1)+1)^{d(\ell-1)}$, by the pigeonhole principle there exist indices $m'$ and $m''$ with $m' > m''$ such that $p^{(1)}_{m'} - p^{(k)}_{m'} = p^{(1)}_{m''} - p^{(k)}_{m''}$ for each $k \leq \ell$. Hence $p^{(1)}_{m'} - p^{(1)}_{m''} = \ldots$
$= p^{(\ell)}_{m'} - p^{(\ell)}_{m''} =: b'$ and $b', b - b' \in \bigcap_{i=1}^{\ell} int.cone(B^{(i)})$. This implies that $b$ can be decomposed and is therefore not a generating element of $\bigcap_{i=1}^{\ell} int.cone(B^{(i)})$.

Using the results from the previous section, we are now finally able to prove our main Lemma 2. To get an intuition for the problem, however, we start by giving a sketch of the proof for the 1-dimensional case. In this case, the multisets $T_i$ consist solely of natural numbers, i.e. $T_1, \ldots, T_n \subset \mathbb{Z}_{\geq 0}$. Suppose first that each set $T_i$ consists only of many copies of a single integral number $x_i \in \{1, \ldots, \Delta\}$. Then it is easy to find a common multiple, as $\frac{\Delta!}{1} \cdot 1 = \frac{\Delta!}{2} \cdot 2 = \ldots = \frac{\Delta!}{\Delta} \cdot \Delta$. Hence one can choose the subsets consisting of $\frac{\Delta!}{x_i}$ copies of $x_i$. Now suppose that the multisets $T_i$ can be arbitrary. If $|T_i| \leq \Delta \cdot \Delta! = \Delta^{O(\Delta)}$ we are done. But on the other hand, if $|T_i| \geq \Delta \cdot \Delta!$, then by the pigeonhole principle there exists for every $T_i$ a single element $x_i \in \{1, \ldots, \Delta\}$ that appears at least $\Delta!$ times. Then we can argue as in the previous case, where we needed at most $\Delta!$ copies of a number $x_i \in \{1, \ldots, \Delta\}$. This proves the lemma in the case $d = 1$.

In the case of higher dimensions, the lemma seems much harder to prove. But in principle we use generalizations of the above techniques. Instead of single natural numbers, however, we have to work with the bases of corresponding basic feasible LP solutions and the intersection of the integer cones generated by those bases. In the proof we need the notion of a cone, which is simply the relaxation of an integer cone. For a generating set $B \subset \mathbb{Z}^d_{\geq 0}$, a cone is defined by

$$cone(B) = \{\sum_{b \in B} \lambda_b b \mid \lambda \in \mathbb{R}^B_{\geq 0}\}.$$

Proof.
First, we describe the multisets $T_1, \ldots, T_n \subset \mathbb{Z}^d_{\geq 0}$ by multiplicity vectors $\lambda^{(1)}, \ldots, \lambda^{(n)} \in \mathbb{Z}^P_{\geq 0}$, where $P \subset \mathbb{Z}^d_{\geq 0}$ is the set of integer points $p$ with $\|p\|_\infty \leq \Delta$. Each $\lambda^{(i)}_p$ thereby states the multiplicity of the vector $p$ in $T_i$. Hence $\sum_{t \in T_i} t = \sum_{p \in P} \lambda^{(i)}_p p$, and our objective is to find vectors $y^{(1)}, \ldots, y^{(n)} \in \mathbb{Z}^P_{\geq 0}$ with $y^{(i)} \leq \lambda^{(i)}$ such that $\sum_{p \in P} y^{(1)}_p p = \ldots = \sum_{p \in P} y^{(n)}_p p$.

Consider the linear program

$$\sum_{p \in P} x_p p = b \qquad (2)$$
$$x \in \mathbb{R}^P_{\geq 0}.$$

Let $x^{(1)}, \ldots, x^{(\ell)} \in \mathbb{R}^d_{\geq 0}$ be all possible basic feasible solutions of the LP, corresponding to bases $B^{(1)}, \ldots, B^{(\ell)} \in \mathbb{Z}^{d \times d}_{\geq 0}$, i.e. $B^{(i)} x^{(i)} = b$.

In the following we prove two claims that correspond to the two cases of the one-dimensional sketch described above. First, we consider the case that essentially each multiset $T_i$ corresponds to one of the basic feasible solutions $x^{(j)}$. In the 1-dimensional case this would mean that each set consists only of a single number. Note that the intersection of integer cones in dimension 1 is just given by the least common multiple, i.e. $int.cone(z_1) \cap int.cone(z_2) = int.cone(lcm(z_1, z_2))$ for some $z_1, z_2 \in \mathbb{Z}_{\geq 0}$.

Claim 1:
If for all $i$ we have $\|x^{(i)}\|_1 > d \cdot O((d\Delta)^{d(\ell-1)})$, then there exist non-zero vectors $y^{(1)}, \ldots, y^{(\ell)} \in \mathbb{Z}^d_{\geq 0}$ with $y^{(1)} \leq x^{(1)}, \ldots, y^{(\ell)} \leq x^{(\ell)}$ and $\|y^{(i)}\|_1 \leq d \cdot O((d\Delta)^{d(\ell-1)})$ such that $B^{(1)} y^{(1)} = \ldots = B^{(\ell)} y^{(\ell)}$.

Proof of the claim:
Note that $B^{(i)} x^{(i)} = b$ and hence $b \in cone(B^{(i)})$. In the following, our goal is to find a non-zero point $q \in \mathbb{Z}^d_{\geq 0}$ such that $q = B^{(1)} y^{(1)} = \ldots = B^{(\ell)} y^{(\ell)}$ for some vectors $y^{(1)}, \ldots, y^{(\ell)} \in \mathbb{Z}^d_{\geq 0}$. This means that $q$ has to lie in the integer cone $int.cone(B^{(i)})$ for every $1 \leq i \leq \ell$, and therefore in the intersection of all the integer cones, i.e. $q \in \bigcap_{i=1}^{\ell} int.cone(B^{(i)})$. By Lemma 4 there exists a set of generating elements $\hat{B}$ such that

• $int.cone(\hat{B}) = \bigcap_{i=1}^{\ell} int.cone(B^{(i)})$, and $int.cone(\hat{B}) \neq \{0\}$ as $b \in cone(\hat{B})$, and

• each generating vector $p \in \hat{B}$ can be represented by $p = B^{(i)} \lambda$ for some $\lambda \in \mathbb{Z}^d_{\geq 0}$ with $\|\lambda\|_1 \leq O((d\Delta)^{d(\ell-1)})$ for each basis $B^{(i)}$.

As $b \in cone(\hat{B})$, there exists a vector $\hat{x} \in \mathbb{R}^{\hat{B}}_{\geq 0}$ with $\hat{B}\hat{x} = b$. Our goal is to show that there exists a non-zero vector $q \in \hat{B}$ with $\hat{x}_q \geq$
$1$. In this case $b$ can simply be written as $b = q + q'$ for some $q' \in cone(\hat{B})$. As $q$ and $q'$ are contained in the intersection of all cones, there exist for each generating set $B^{(j)}$ vectors $y^{(j)} \in \mathbb{Z}^{B^{(j)}}_{\geq 0}$ and $z^{(j)} \in \mathbb{R}^{B^{(j)}}_{\geq 0}$ such that $B^{(j)} y^{(j)} = q$ and $B^{(j)} z^{(j)} = q'$. Hence $x^{(j)} = y^{(j)} + z^{(j)}$, and we finally obtain that $x^{(j)} \geq y^{(j)}$ for $y^{(j)} \in \mathbb{Z}^{B^{(j)}}_{\geq 0}$, which shows the claim.

Therefore it only remains to prove the existence of the point $q$ with $\hat{x}_q \geq$
$1$. By Lemma 4, each vector $p \in \hat{B}$ can be represented by $p = B^{(i)} x^{(p)}$ for some $x^{(p)} \in \mathbb{Z}^{B^{(i)}}_{\geq 0}$ with $\|x^{(p)}\|_1 \leq O((d\Delta)^{d(\ell-1)})$ for every basis $B^{(i)}$.

As $B^{(i)} x^{(i)} = b = \sum_{p \in \hat{B}} \hat{x}_p p = \sum_{p \in \hat{B}} \hat{x}_p (B^{(i)} x^{(p)})$, every $x^{(i)}$ can be written as $x^{(i)} = \sum_{p \in \hat{B}} x^{(p)} \hat{x}_p$, and assuming that $\hat{x}_p < 1$ for every $p \in \hat{B}$ we obtain the bound

$$\|x^{(i)}\|_1 \leq \sum_{p \in \hat{B}} \|x^{(p)}\|_1 \hat{x}_p < \sum_{p \in \hat{B}} \|x^{(p)}\|_1 \leq d \cdot O((d\Delta)^{d(\ell-1)}).$$

The last inequality follows as we can assume by Carathéodory's theorem [30] that the number of non-zero components of $\hat{x}$ is at most $d$. Hence if $\|x^{(i)}\|_1 \geq d \cdot O((d\Delta)^{d(\ell-1)})$, then there has to exist a vector $q \in \hat{B}$ with $\hat{x}_q \geq 1$.

Claim 2:
For every vector $\lambda^{(i)} \in \mathbb{Z}^P_{\geq 0}$ with $\sum_{p \in P} \lambda^{(i)}_p p = b$ there exists a basic feasible solution $x^{(j)}$ of LP (2) with basis $B^{(j)}$ such that $\frac{1}{\ell} x^{(j)} \leq \lambda^{(i)}$, in the sense that $\frac{1}{\ell} x^{(j)}_p \leq \lambda^{(i)}_p$ for every $p \in B^{(j)}$.

Proof of the claim:
The proof of the claim can be easily seen, as each multiplicity vector $\lambda^{(i)}$ is also a solution of the linear program (2). By standard LP theory, we know that each solution of the LP is a convex combination of the basic feasible solutions $x^{(1)}, \ldots, x^{(\ell)}$. Hence, each multiplicity vector $\lambda^{(i)}$ can be written as a convex combination of $x^{(1)}, \ldots, x^{(\ell)}$, i.e. for each $\lambda^{(i)}$ there exists a $t \in \mathbb{R}^\ell_{\geq 0}$ with $\|t\|_1 = 1$ such that $\lambda^{(i)} = \sum_{j=1}^{\ell} t_j \bar{x}^{(j)}$, where $\bar{x}^{(j)} \in \mathbb{R}^P_{\geq 0}$ denotes $x^{(j)}$ extended by zero entries for the points $p \notin B^{(j)}$. Since $\|t\|_1 = 1$, there exists an index $j$ with $t_j \geq \frac{1}{\ell}$, which proves the claim.

Using the above two claims, we can now prove the statement of the lemma by showing that for each $\lambda^{(i)}$ there exists a vector $y^{(i)} \leq \lambda^{(i)}$ with bounded 1-norm such that $\sum_{p \in P} y^{(1)}_p p = \ldots = \sum_{p \in P} y^{(n)}_p p$.

By Claim 2 we know that for each $\lambda^{(i)}$ ($1 \leq i \leq n$) we find one of the basic feasible solutions $x^{(j)}$ ($1 \leq j \leq \ell$) with $\frac{1}{\ell} x^{(j)} \leq \lambda^{(i)}$. Applying the first claim to the vectors $\frac{1}{\ell} x^{(1)}, \ldots, \frac{1}{\ell} x^{(\ell)}$ with $\frac{1}{\ell} b = \frac{1}{\ell} B^{(1)} x^{(1)} = \ldots = \frac{1}{\ell} B^{(\ell)} x^{(\ell)}$, we obtain vectors $y^{(1)} \leq \frac{1}{\ell} x^{(1)}, \ldots, y^{(\ell)} \leq \frac{1}{\ell} x^{(\ell)}$ with $B^{(1)} y^{(1)} = \ldots = B^{(\ell)} y^{(\ell)}$. Hence, we find for each $\lambda^{(i)}$ a vector $y^{(j)} \in \mathbb{Z}^{B^{(j)}}_{\geq 0}$ with $y^{(j)} \leq \lambda^{(i)}$.

As $\sum_{p \in P} \lambda^{(i)}_p p = b = \sum_{p \in B^{(j)}} x^{(j)}_p p$ and every $p \in P$ is bounded by $\|p\|_\infty \leq \Delta$, we know that

$$\|\lambda^{(i)}\|_1 \leq d\Delta \|x^{(j)}\|_1 \qquad (3)$$

for every $i \leq n$ and every $j \leq \ell$. Hence, if $\|\lambda^{(i)}\|_1 \geq d\Delta\ell \cdot O((d\Delta)^{d(\ell-1)})$, we know that $\|\frac{1}{\ell} x^{(j)}\|_1 \geq d \cdot O((d\Delta)^{d(\ell-1)})$. Therefore, Claim 1 can be applied to find $y^{(j)} \leq \frac{1}{\ell} x^{(j)}$ of smaller 1-norm.

Note that $\ell$ is bounded by $\binom{|P|}{d} \leq |P|^d$ and $|P| \leq (\Delta+1)^d$, and we obtain that

$$\|y^{(j)}\|_1 \leq d\Delta\ell \cdot O((d\Delta)^{d(\ell-1)}) = (d\Delta)^{O(d(\Delta^d))}.$$
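The statement just proved can be sanity-checked by brute force on tiny instances. The following is our own toy illustration (the data is made up; the real content of the lemma is of course the size bound, which only matters for huge multisets):

```python
from itertools import combinations

def submultiset_sums(T):
    """Map each achievable nonempty submultiset sum of T to one witness."""
    sums = {}
    for k in range(1, len(T) + 1):
        for comb in combinations(range(len(T)), k):
            s = tuple(map(sum, zip(*(T[i] for i in comb))))
            sums.setdefault(s, [T[i] for i in comb])
    return sums

def common_subsums(sets):
    """All vectors realizable as a nonempty submultiset sum in every T_i."""
    tables = [submultiset_sums(T) for T in sets]
    common = set(tables[0])
    for tab in tables[1:]:
        common &= set(tab)
    return common, tables

# Two multisets in Z^2 with equal total sum (4, 4) -- toy data of ours.
T1 = [(1, 0), (0, 1), (1, 2), (2, 1)]
T2 = [(2, 2), (1, 1), (1, 1), (0, 0)]
common, tables = common_subsums([T1, T2])
print(sorted(common))   # Lemma 2 guarantees this set is never empty
```

Here, for instance, the value $(1,1)$ is realized by $\{(1,0),(0,1)\} \subseteq T_1$ and by $\{(1,1)\} \subseteq T_2$.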
In this section we prove a lower bound on the size of Graver elements for a matrix where the overlapping parts contain only a single variable, i.e. $r = 1$.

First, consider the matrix
$$A = \begin{pmatrix} 1 & -2 & 0 & \cdots & 0 \\ 1 & 0 & -3 & & \vdots \\ \vdots & \vdots & & \ddots & 0 \\ 1 & 0 & \cdots & 0 & -\Delta \end{pmatrix}.$$
This matrix is of 2-stage stochastic structure with $r = 1$ and $s = 1$. We will argue that every element in $\ker(A) \cap (\mathbb{Z}^{\Delta} \setminus \{0\})$ is large and therefore the Graver elements of the matrix are large as well. We call the variable corresponding to the $i$-th column of the matrix $x_i$, where $x_1$ is the variable corresponding to the first column, and for $i > 1$ the $i$-th column has the single entry $-i$ in component $i-1$ and $0$ everywhere else. Clearly, for $x \in \mathbb{Z}^{\Delta}$ to be in $\ker(A)$, we know by the first row of matrix $A$ that $x_1$ has to be a multiple of $2$. By the second row of the matrix, we know that $x_1$ has to be a multiple of $3$, and so on. Hence the variable $x_1$ has to be a multiple of all numbers $1, \ldots, \Delta$. Thus $x_1$ is a multiple of the least common multiple of the numbers $1, \ldots, \Delta$, which is divisible by the product of all primes in $\{1, \ldots, \Delta\}$. By known bounds for the product of all primes $\leq \Delta$ [11], this implies that $|x_1| \in 2^{\Omega(\Delta)}$, which shows that the size of the Graver elements of matrix $A$ is in $2^{\Omega(\Delta)}$.

The disadvantage of the matrix above is that its entries are rather big. In the following we reduce the largest entry of the overall matrix by encoding each of the numbers $1, \ldots, \Delta^{s+1}-1$ into a submatrix. For the encoding we use the matrix
$$C = \begin{pmatrix} \Delta & -1 & & \\ & \Delta & -1 & \\ & & \ddots & \ddots \\ & & & \Delta & -1 \end{pmatrix},$$
having $s$ rows and $s+1$ columns. For a vector $x \in \ker(C) \cap \mathbb{Z}^{s+1}$ we know by the $i$-th row of the matrix that $x_i = x_{i-1} \cdot \Delta$. Hence $x_i = \Delta^{i-1} x_1$. Now we can encode each number $z \in \{1, \ldots, \Delta^{s+1}-1\}$ in an additional row by $z = \sum_{i=0}^{s} a_i(z) \Delta^i$, where $a_i(z)$ is the $i$-th digit in the representation of $z$ in base $\Delta$. Hence, we consider the following matrix:
$$A' = \begin{pmatrix} 1 & -a_0(2) \cdots -a_s(2) & & \\ & C & & \\ 1 & & -a_0(3) \cdots -a_s(3) & \\ & & C & \\ \vdots & & & \ddots \end{pmatrix}$$
By the same argumentation as for matrix $A$ above, we know that $x_1$ has to be a multiple of each of the numbers $2, \ldots, \Delta^{s+1}-1$. This implies that every non-zero integer vector of $\ker(A')$ has infinity-norm of at least $2^{\Omega(\Delta^s)}$, where $s+1$ is the number of rows of each block matrix. This shows the doubly exponential lower bound for the Graver complexity of 2-stage stochastic IPs.

In this section we show that Lemma 2 can also be used to obtain a bound on the Graver elements of matrices with a multi-stage stochastic structure. Multi-stage stochastic IPs are a well known generalization of 2-stage stochastic IPs. For the stochastic programming background on multi-stage stochastic IPs we refer to [29]. Here we simply show how to solve the deterministic equivalent IP with a large constraint matrix. Regarding the augmentation framework for multi-stage stochastic IPs, it was previously known that an implicit bound similar to the one for 2-stage stochastic IPs also holds for multi-stage stochastic IPs. This was shown by Aschenbrenner and Hemmecke [2], who built upon the bound for 2-stage stochastic IPs.

In the following we define the shape of the constraint matrix $M$ of a multi-stage stochastic IP. The constraint matrix consists of given block matrices $A^{(1)}, \ldots, A^{(\ell)}$ for some $\ell \in \mathbb{Z}_{\geq 1}$, where each block matrix $A^{(i)}$ uses a unique set of columns of $M$. For a given block matrix, let $rows(A^{(i)})$ be the set of rows of $M$ which are used by $A^{(i)}$. A matrix $M$ is of multi-stage stochastic shape if the following conditions are fulfilled:

• There is a block matrix $A^{(1)}$ such that for every $1 \leq i \leq \ell$ we have $rows(A^{(i)}) \subseteq rows(A^{(1)})$.

• For two matrices $A^{(i)}, A^{(j)}$, either $rows(A^{(i)}) \subseteq rows(A^{(j)})$, $rows(A^{(j)}) \subseteq rows(A^{(i)})$, or $rows(A^{(i)}) \cap rows(A^{(j)}) = \emptyset$ holds.

An example of a matrix of multi-stage stochastic structure is given in the following:

[Figure: a matrix with nested blocks $A^{(1)}, \ldots, A^{(8)}$, where $A^{(1)}$ spans all rows, $A^{(2)}$ and $A^{(3)}$ span disjoint row intervals within it, and $A^{(4)}, \ldots, A^{(8)}$ span subintervals of those.]

Intuitively, the constraint matrix is of multi-stage stochastic shape if the block matrices, with the relation $\subseteq$ on the rows, form a tree (see figure below).
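The two shape conditions above say precisely that the row supports of the blocks form a laminar family with a global root, and they can be checked mechanically. A small sketch (the function name and the block row-supports are hypothetical, chosen to mimic the example):

```python
def is_multistage_shape(row_supports):
    """Check the two conditions for a multi-stage stochastic shape:
    some block's row support contains every other block's support, and
    any two supports are nested or disjoint (a laminar family).
    `row_supports` maps a block name to the set of rows of M it uses;
    columns are assumed disjoint by construction."""
    supports = list(row_supports.values())
    # Condition 1: one block contains the rows of every block.
    has_root = any(all(s <= root for s in supports) for root in supports)
    # Condition 2: pairwise nested or disjoint.
    laminar = all(a <= b or b <= a or not (a & b)
                  for a in supports for b in supports)
    return has_root and laminar

# Row supports mimicking the example tree rooted at A(1):
blocks = {
    "A(1)": {1, 2, 3, 4}, "A(2)": {1, 2}, "A(3)": {3, 4},
    "A(4)": {1}, "A(5)": {2}, "A(6)": {3}, "A(7)": {3}, "A(8)": {4},
}
```

Two overlapping but non-nested supports, such as $\{1,2\}$ and $\{2,3\}$, violate the laminarity condition and hence do not correspond to any tree.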
[Figure: the corresponding tree, with root $A^{(1)}$, children $A^{(2)}$ and $A^{(3)}$, and leaves $A^{(4)}, \ldots, A^{(8)}$.]

Let $s_i$ be the number of columns that are used by block matrices in the $i$-th level of the tree (starting from level 0 at the leaves). Here we assume that the numbers of columns of the block matrices in the same level of the tree are all identical. Let $r$ be the number of rows that are used by the block matrices that correspond to the leaves of the tree. In the following theorem we show that Lemma 2 can be applied inductively to bound the size of an augmenting step of multi-stage stochastic IPs. The proof is similar to that of Theorem 2.

Theorem 4.
Let $y$ be an indecomposable cycle of matrix $M$. Then $\|y\|_\infty$ is bounded by a function $T(s_1, \ldots, s_t, r, \Delta)$, where $t$ is the depth of the tree. The function $T$ involves a tower of $t+1$ exponentials and is recursively defined by
$$T(r, \Delta) = (\Delta r)^{O(r)},$$
$$T(s_1, \ldots, s_i, r, \Delta) = 2^{T(s_1, \ldots, s_{i-1}, r, \Delta)^{O(s_i)}}.$$

Proof.
Consider a submatrix $\mathcal{A}$ of the constraint matrix $M$ corresponding to a subtree of the tree with depth $t$. Hence, $\mathcal{A}$ itself is of multi-stage stochastic structure. Let submatrix $A \in \{A^{(1)}, \ldots, A^{(\ell)}\}$ be the root of the corresponding subtree of $\mathcal{A}$ and let the submatrices $B^{(1)}, \ldots, B^{(n)}$ be the submatrices corresponding to the subtrees of $\mathcal{A}$ with $rows(B^{(i)}) \subseteq rows(A)$ for all $1 \leq i \leq n$.

Let $\bar{A}^{(i)}$ be the submatrix of $A$ which consists only of the rows that are used by $B^{(i)}$ (recall that $rows(B^{(i)}) \subseteq rows(A)$). Now suppose that $y$ is a cycle of $\mathcal{A}$, i.e. $\mathcal{A} y = 0$, and let $y^{(0)}$ be the subvector of $y$ consisting only of the entries that belong to matrix $A$. Symmetrically, let $y^{(i)}$ be the entries of vector $y$ that belong only to the matrix $B^{(i)}$ for $i > 0$. Since $\mathcal{A} y = 0$, we also know that
$$\bar{A}^{(i)} y^{(0)} + B^{(i)} y^{(i)} = \begin{pmatrix} \bar{A}^{(i)} & B^{(i)} \end{pmatrix} \begin{pmatrix} y^{(0)} \\ y^{(i)} \end{pmatrix} = 0$$
for every $1 \leq i \leq n$. Each vector $\binom{y^{(0)}}{y^{(i)}}$ can be decomposed into a multiset of indecomposable cycles $C_i$, i.e.
$$\begin{pmatrix} y^{(0)} \\ y^{(i)} \end{pmatrix} = \sum_{c \in C_i} c,$$
where each cycle $c \in C_i$ is a vector $c = \binom{c^{(0)}}{c^{(i)}}$ consisting of a subvector $c^{(0)}$ of entries that belong to matrix $A$ and a subvector $c^{(i)}$ of entries that belong to the matrix $B^{(i)}$. Note that the matrix $(\bar{A}^{(i)} \; B^{(i)})$ has a multi-stage stochastic structure with a corresponding tree of depth $t-1$. Hence, by induction we can assume that each indecomposable cycle $c \in C_i$ is bounded by $\|c\|_\infty \leq T(s_1, \ldots, s_{t-1}, r, \Delta)$ for all $1 \leq i \leq n$, where $T$ is a function that involves a tower of $t$ exponentials. In the base case that $t = 0$ and the matrix $\mathcal{A}$ consists of only one block matrix, we can bound $\|c\|_\infty$ by $(2\Delta r + 1)^r$ using Theorem 1.

Let $p$ be the projection that maps a cycle to the entries that belong to the matrix $A$, i.e. $p(c) = p\big(\binom{c^{(0)}}{c^{(i)}}\big) = c^{(0)}$. For each vector $\binom{y^{(0)}}{y^{(i)}}$ and its decomposition into cycles $C_i$, let $p(C_i) = \{ p(c) \mid c \in C_i \}$. Since
$$y^{(0)} = \sum_{c \in C_1} p(c) = \ldots = \sum_{c \in C_n} p(c),$$
we can apply Lemma 2 to obtain submultisets $S_i \subseteq p(C_i)$ of bounded size $|S_i| \leq (s_t T)^{O(s_t T^{s_t})}$ with $T = T(s_1, \ldots, s_{t-1}, r, \Delta)$ such that $\sum_{x \in S_1} x = \ldots = \sum_{x \in S_n} x$. As $T(s_1, \ldots, s_{t-1}, r, \Delta)$ is a function with a tower of $t$ exponentials, the bound on $|S_i|$ is given by a function with a tower of $t+1$ exponentials.

There exist submultisets $\bar{C}_1 \subseteq C_1, \ldots, \bar{C}_n \subseteq C_n$ with $p(\bar{C}_1) = S_1, \ldots, p(\bar{C}_n) = S_n$. Hence, we can define the solution $\bar{y} \leq y$ by $\bar{y}^{(i)} = \sum_{c \in \bar{C}_i} \bar{p}(c)$ for every $i > 0$, where $\bar{p}$ is the function that projects a vector to the entries that belong to the matrix $B^{(i)}$, i.e. $\bar{p}(c) = \bar{p}\big(\binom{c^{(0)}}{c^{(i)}}\big) = c^{(i)}$. For $i = 0$ we define $\bar{y}^{(0)} = \sum_{c \in \bar{C}_1} p(c)$. As the sum $\sum_{c \in \bar{C}_i} p(c)$ is identical for every $1 \leq i \leq n$, the vector $\bar{y}$ is well defined.

Let $K$ be the constant derived from the $O$-notation of Lemma 2 and let $T = T(s_1, \ldots, s_{t-1}, r, \Delta)$. Then the size of $\bar{y}$ can be bounded by
$$\|\bar{y}\|_\infty \leq T \cdot \max_i |\bar{C}_i| = T \cdot (s_t T)^{K s_t T^{s_t}} \leq 2^{K s_t \log(s_t T) \, T^{s_t}} \leq 2^{T^{O(s_t)}}.$$

Computing the Augmenting Step
As a consequence of the bound on the Graver elements of the constraint matrix $M$ of multi-stage stochastic IPs, we obtain via the augmentation framework an algorithm to solve multi-stage stochastic IPs. As explained in Section 2, the core difficulty is to compute an augmenting step $y \in \ker(M)$ such that $z + \lambda y$ is a feasible solution for a given initial feasible solution $z$ and a multiple $\lambda$. Therefore, we have to solve the IP $\max\{ c^T x \mid Mx = 0, \bar{\ell} \leq x \leq \bar{u}, \|x\|_\infty \leq T \}$ for some lower and upper bounds $\bar{\ell}, \bar{u}$ and a constant $T = T(s_1, \ldots, s_t, r, \Delta)$ that is derived from the bound of Theorem 4. This IP can be solved similarly to the case of 2-stage stochastic IPs. However, since we have multiple layers, we have to apply the algorithm recursively: at each recursive call, we guess the values of the variables of the corresponding block matrix and then apply the algorithm recursively. For further details on the algorithmic side and the running time we refer to [2] or [26].

As a final result we obtain the following theorem for multi-stage stochastic IPs:

Theorem 5.
A multi-stage stochastic IP with a constraint matrix $M$ that corresponds to a tree of depth $t$ can be solved in time $n s \, \varphi \log(ns) \cdot T(s_1, \ldots, s_t, r, \Delta)$, where $\varphi$ is the encoding length of the IP and $T$ is a function depending only on the parameters $s_1, \ldots, s_t, r, \Delta$ and involving a tower of $t+1$ exponentials.

References

[1] S. Ahmed, M. Tawarmalani, and N. V. Sahinidis. A finite branch-and-bound algorithm for two-stage stochastic integer programs.
Mathematical Programming, 100(2):355–377, 2004.

[2] M. Aschenbrenner and R. Hemmecke. Finiteness theorems in stochastic integer programming. Foundations of Computational Mathematics, 7(2):183–227, 2007.

[3] J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer Science & Business Media, 2011.

[4] C. C. Carøe and J. Tind. L-shaped decomposition of two-stage stochastic programs with integer recourse. Mathematical Programming, 83(1-3):451–464, 1998.

[5] L. Chen, L. Xu, and W. Shi. On the Graver basis of block-structured integer programming. arXiv preprint arXiv:1805.03741, 2018.

[6] W. Cook, J. Fonlupt, and A. Schrijver. An integer analogue of Carathéodory's theorem. Journal of Combinatorial Theory, Series B, 40(1):63–70, 1986.

[7] J. A. De Loera, R. Hemmecke, and M. Köppe. Algebraic and Geometric Ideas in the Theory of Discrete Optimization. SIAM, 2012.

[8] M. A. H. Dempster, M. Fisher, L. Jansen, B. Lageweg, J. K. Lenstra, and A. Rinnooy Kan. Analytical evaluation of hierarchical planning systems. Operations Research, 29(4):707–716, 1981.

[9] F. Eisenbrand, C. Hunkenschröder, and K. Klein. Faster algorithms for integer programs with block structure. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), pages 49:1–49:13, 2018.

[10] F. Eisenbrand and R. Weismantel. Proximity results and faster algorithms for integer programming using the Steinitz lemma. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 808–816. SIAM, 2018.

[11] P. Erdős. Ramanujan and I. In Number Theory, Madras 1987, pages 1–17. Springer, 1989.

[12] P. Faliszewski, R. Gonen, M. Koutecký, and N. Talmon. Opinion diffusion and campaigning on society graphs. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, pages 219–225, 2018.

[13] D. Gade, S. Küçükyavuz, and S. Sen. Decomposition algorithms with parametric Gomory cuts for two-stage stochastic integer programs. Mathematical Programming, 144(1-2):39–64, 2014.

[14] J. E. Graver. On the foundations of linear and integer linear programming I. Mathematical Programming, 9(1):207–226, 1975.

[15] V. S. Grinberg and S. V. Sevast'yanov. Value of the Steinitz constant. Functional Analysis and Its Applications, 14(2):125–126, 1980.

[16] W. K. K. Haneveld and M. H. van der Vlerk. Optimizing electricity distribution using two-stage integer recourse models. In Stochastic Optimization: Algorithms and Applications, pages 137–154. Springer, 2001.

[17] R. Hemmecke and R. Schultz. Decomposition of test sets in stochastic integer programming. Mathematical Programming, 94(2-3):323–341, 2003.

[18] K. Jansen, K. Klein, M. Maack, and M. Rau. Empowering the configuration-IP: new PTAS results for scheduling with setup times. CoRR, abs/1801.06460, 2018.

[19] K. Jansen, A. Lassota, and L. Rohwedder. Near-linear time algorithm for n-fold ILPs via color coding. arXiv preprint arXiv:1811.00950, 2018.

[20] P. Kall and S. W. Wallace. Stochastic Programming. Springer, 1994.

[21] R. Kannan. Minkowski's convex body theorem and integer programming. Mathematics of Operations Research, 12(3):415–440, 1987.

[22] D. Knop and M. Koutecký. Scheduling meets n-fold integer programming. Journal of Scheduling, 21(5):493–503, 2018.

[23] D. Knop, M. Koutecký, and M. Mnich. Combinatorial n-fold integer programming and applications. In K. Pruhs and C. Sohler, editors, 25th Annual European Symposium on Algorithms (ESA 2017), volume 87 of Leibniz International Proceedings in Informatics (LIPIcs), pages 54:1–54:14, Dagstuhl, Germany, 2017. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.

[24] D. Knop, M. Koutecký, and M. Mnich. Voting and bribing in single-exponential time. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017), pages 46:1–46:14, 2017.

[25] D. Knop, M. Pilipczuk, and M. Wrochna. Tight complexity lower bounds for integer linear programming with few constraints. arXiv preprint arXiv:1811.01296, 2018.

[26] M. Koutecký, A. Levin, and S. Onn. A parameterized strongly polynomial algorithm for block structured integer programs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), pages 85:1–85:14, 2018.

[27] S. Küçükyavuz and S. Sen. An introduction to two-stage stochastic mixed-integer programming. In Leading Developments from INFORMS Communities, pages 1–27. INFORMS, 2017.

[28] F. Pelupessy and A. Weiermann. Ackermannian lower bounds for lengths of bad sequences of monomial ideals over polynomial rings in two variables. Mathematical Theory and Computational Practice, page 276, 2009.

[29] W. Römisch and R. Schultz. Multistage stochastic integer programs: An introduction. In Online Optimization of Large Scale Systems, pages 581–600. Springer, 2001.

[30] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, 1998.

[31] R. Schultz, L. Stougie, and M. H. van der Vlerk. Two-stage stochastic integer programming: a survey. Statistica Neerlandica, 50(3):404–416, 1996.

[32] E. Steinitz. Bedingt konvergente Reihen und konvexe Systeme. Journal für die reine und angewandte Mathematik, 143:128–176, 1913.

[33] M. Zhang and S. Küçükyavuz. Finitely convergent decomposition algorithms for two-stage stochastic pure integer programs.