Complexity Estimates for Fourier-Motzkin Elimination
Rui-Juan Jing, Marc Moreno Maza and Delaram Talaashrafi

May 14, 2019
Abstract
In this paper, we propose a new method for removing all the redundant inequalities generated by Fourier-Motzkin elimination. This method is based on an improved version of Balas' work and can also be used to remove all the redundant inequalities in the input system. Moreover, our method only uses arithmetic operations on matrices and avoids resorting to linear programming techniques. Algebraic complexity estimates and experimental results show that our method outperforms alternative approaches, in particular those based on linear programming and on the simplex algorithm.
Polyhedral sets play an important role in computational sciences. For instance, they are used to model, analyze, transform and schedule for-loops of computer programs; we refer to the articles [2-4, 10, 11, 14, 32]. Of prime importance are the following operations on polyhedral sets: (i) conversion between H-representation and V-representation; and (ii) projection, namely Fourier-Motzkin elimination and block elimination.

Fourier-Motzkin elimination is an algorithmic tool for projecting a polyhedral set onto a linear subspace. It was proposed independently by Joseph Fourier and Theodore Motzkin, in 1827 and in 1936, respectively. The original version of this algorithm produces large amounts of redundant inequalities and has a double exponential algebraic complexity. Removing all these redundancies is equivalent to giving a minimal representation of the projection of the polyhedron. Leonid Khachiyan explained in [22] how linear programming (LP) could be used to remove all redundant inequalities, thereby reducing the cost of Fourier-Motzkin elimination to singly exponential time; Khachiyan did not, however, give any running time estimate. As we shall prove in this paper, rather than using linear programming, one may use only matrix arithmetic, increasing the theoretical and practical efficiency of Fourier-Motzkin elimination while still producing an irredundant representation of the projected polyhedron.

As mentioned above, the so-called block elimination method is another algorithmic tool to project a polyhedral set. This method requires the enumeration of the extreme rays of a cone. Many authors have been working on this topic, see Natálja V. Chernikova [8], Hervé Le Verge [25] and Komei Fukuda [13]. Other algorithms for projecting polyhedral sets remove some (but not all) redundant inequalities with the help of extreme rays: see the work of David A. Kohler [23]. As observed by Jean-Louis Imbert in [17], the method he proposed in that paper and that of Sergei N. Chernikov in [7] are equivalent. These methods are very effective in practice, but none of them can remove all redundant inequalities generated by Fourier-Motzkin elimination.

Egon Balas proposed in [1] a method to overcome this latter limitation. We found flaws, however, in both his construction and its proof. A detailed account is included in Section 7. In this paper, we show how to remove all the redundant inequalities generated by Fourier-Motzkin elimination based on an improved version of Balas' work. To be more specific, a so-called redundancy test cone is generated by solving a projection problem for a cone which has only one more inequality and one more variable than the system of inequalities defining the input polyhedron. This latter projection is carried out by means of block elimination. This initial redundancy test cone is used to remove all the redundant inequalities in the input polyhedron. Moreover, our method has a better algebraic complexity estimate than the approaches using linear programming; see [18, 19] for estimates of those approaches. For an input pointed polyhedron Q ⊆ Q^n, given by a system of m linear inequalities of height h, we show (see Theorem 7) that eliminating the variables from that system, one after another (thus performing Fourier-Motzkin elimination), can be done within O(m^{2n} n^{θ+1+ǫ} h^{1+ǫ}), for any ǫ > 0, where θ is the exponent of linear algebra. Our algorithm is stated in Section 4 and follows a revisited version of Balas' algorithm presented in Section 3. Since the maximum number of facets of any standard projection of Q is O(m^{⌊n/2⌋}), our running time for Fourier-Motzkin elimination is satisfactory; the other factors in our estimate come from the cost of linear algebra operations for testing redundancy. We have implemented the algorithms proposed in Section 4 using the BPAS library [6], which is publicly available. We have compared our code against other implementations of Fourier-Motzkin elimination, including the CDD library [12]. Our experimental results, reported in Section 6, show that our proposed method can solve all of the test-cases that we used, while the counterpart software failed to solve some of them.

Section 2 provides background materials about polyhedral sets and polyhedral cones, together with the original version of Fourier-Motzkin elimination. Section 3 contains a revisited version of Balas' method and detailed proofs of its correctness. Based on this, Section 4 presents a new algorithm producing a minimal projected representation for a given full-dimensional pointed polyhedron. Complexity results are established in Section 5. In Section 6, we report on our experimentation and in Section 7 we discuss related work. Finally, Section 8 shows an application of Fourier-Motzkin elimination: solving parametric linear programming (PLP) problems, which is a core routine in the analysis, transformation and scheduling of for-loops of computer programs.

In this section, we review the basics of polyhedral geometry. Section 2.1 is dedicated to the notions of polyhedral sets and polyhedral cones. Sections 2.2.1 and 2.2.2 review the double description method and Fourier-Motzkin elimination, which are two of the most important algorithms for operating on polyhedral sets.
We conclude this section with the cost model that we shall use for complexity analysis; see Section 2.3. We omit most proofs. For more details, please refer to [13, 29, 31]. For the sake of simplicity in the complexity analysis of the presented algorithms, we restrict our coefficient field to the field Q of rational numbers. However, all of the results in this paper generalize to polyhedral sets with coefficients in the field R of real numbers. Notation 1
We use bold letters, e.g. v, to denote vectors and we use capital letters, e.g. A, to denote matrices. Also, we assume that vectors are column vectors. For row vectors, we use the transposition notation, that is, A^t for the transpose of a matrix A. For a matrix A and an integer k, A_k is the row of index k in A. Also, if K is a set of integers, A_K denotes the sub-matrix of A with row indices in K. We begin this section with the fundamental theorem of linear inequalities.
Theorem 1 ( [29])
Let a_1, . . . , a_m be a set of linearly independent vectors in Q^n. Also, let b be a vector in Q^n. Then, exactly one of the following holds:

(i) the vector b is a non-negative linear combination of a_1, . . . , a_m; in other words, there exist non-negative numbers y_1, . . . , y_m such that we have b = y_1 a_1 + · · · + y_m a_m, or,

(ii) there exists a vector d ∈ Q^n such that both d^t b < 0 and d^t a_i ≥ 0 hold for all 1 ≤ i ≤ m.

Definition 1 (Convex cone) A subset of points C ⊆ Q^n is called a cone if for each x ∈ C and each real number λ ≥ 0 we have λx ∈ C. A cone C ⊆ Q^n is called convex if for all x, y ∈ C, we have x + y ∈ C. If C ⊆ Q^n is a convex cone, then its elements are called the rays of C. For two rays r and r′ of C, we write r′ ≃ r whenever there exists λ > 0 such that we have r′ = λr. Definition 2 (Hyperplane)
A subset H ⊆ Q^n is called a hyperplane if H = {x ∈ Q^n | a^t x = 0} for some non-zero vector a ∈ Q^n. Definition 3 (Half-space) A half-space is a set of the form {x ∈ Q^n | a^t x ≤ 0} for some vector a ∈ Q^n. Definition 4 (Polyhedral cone)
A cone C ⊆ Q^n is a polyhedral cone if it is the intersection of finitely many half-spaces, that is, C = {x ∈ Q^n | Ax ≤ 0} for some matrix A ∈ Q^{m×n}. Definition 5 (Finitely generated cone)
Let {x_1, . . . , x_m} be a set of vectors in Q^n. The cone generated by {x_1, . . . , x_m}, denoted by Cone(x_1, . . . , x_m), is the smallest convex cone containing those vectors. In other words, we have Cone(x_1, . . . , x_m) = {λ_1 x_1 + · · · + λ_m x_m | λ_1 ≥ 0, . . . , λ_m ≥ 0}. A cone obtained in this way is called a finitely generated cone. With the following theorem, which is a consequence of the fundamental theorem of linear inequalities, we can say that the two concepts of polyhedral cones and finitely generated cones are equivalent; see [29].
Theorem 2 (Minkowski-Weyl theorem)
A convex cone is polyhedral if and only if it is finitely generated.
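To make the two representations concrete, here is a small sketch (a hypothetical example, not taken from the paper) that checks membership in the first quadrant of Q^2 from both its H-representation and its V-representation, using exact rational arithmetic:

```python
from fractions import Fraction

# H-representation of the first quadrant of Q^2: C = {x | A x <= 0}.
A = [[-1, 0],   # -x1 <= 0
     [0, -1]]   # -x2 <= 0

# V-representation: the same cone is generated by the extreme rays e1, e2.
R = [[1, 0], [0, 1]]   # columns r1 = (1, 0) and r2 = (0, 1)

def in_cone_h(A, x):
    """Membership test from the H-representation: check A x <= 0."""
    return all(sum(Fraction(a) * xi for a, xi in zip(row, x)) <= 0 for row in A)

def in_cone_v(R, x):
    """Membership test from the V-representation of this particular cone:
    x = l1*r1 + l2*r2 with l1, l2 >= 0; for the quadrant generated by e1, e2
    the coefficients are just the coordinates of x themselves."""
    return all(xi >= 0 for xi in x)

# The two tests agree on every sample point, as Theorem 2 predicts.
for x in ([2, 3], [1, 0], [-1, 2]):
    assert in_cone_h(A, x) == in_cone_v(R, x)
```

Converting between the two representations in general is exactly the task of the double description method reviewed in Section 2.2.1.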
Definition 6 (Convex polyhedron)
A set of vectors P ⊆ Q^n is called a convex polyhedron if P = {x | Ax ≤ b}, for a matrix A ∈ Q^{m×n} and a vector b ∈ Q^m. Moreover, the polyhedron P is called a polytope if P is bounded. From now on, we always use the notation P = {x | Ax ≤ b} to represent a polyhedron in Q^n. We call the system of linear inequalities {Ax ≤ b} a representation of P. Definition 7 (Minkowski sum)
For two subsets P and Q of Q^n, their Minkowski sum, denoted by P + Q, is the subset of Q^n defined as {p + q | (p, q) ∈ P × Q}. Lemma 1 (Decomposition theorem for convex polyhedra)
A subset P of Q^n is a convex polyhedron if and only if it can be written as the Minkowski sum of a finitely generated cone and a polytope. Another consequence of the fundamental theorem of linear inequalities is the famous Farkas lemma. This lemma has different variants. Here we only mention a variant from [29].
Lemma 2 (Farkas’ lemma)
Let A ∈ Q^{m×n} be a matrix and b ∈ Q^m be a vector. Then, there exists a vector t ∈ Q^n, t ≥ 0, satisfying At = b if and only if y^t b ≥ 0 holds for each vector y ∈ Q^m such that we have y^t A ≥ 0. A consequence of Farkas' lemma is the following criterion for testing whether an inequality c^t x ≤ c_0 is redundant w.r.t. a polyhedron representation Ax ≤ b, that is, whether c^t x ≤ c_0 is implied by Ax ≤ b. Lemma 3 (Redundancy test criterion)
Let c ∈ Q^n, c_0 ∈ Q, A ∈ Q^{m×n} and b ∈ Q^m. Then, the inequality c^t x ≤ c_0 is redundant w.r.t. the system of inequalities Ax ≤ b if and only if there exist a vector t ≥ 0 and a number λ ≥ 0 satisfying c^t = t^t A and c_0 = t^t b + λ. Definition 8 (Implicit equation)
An inequality a^t x ≤ b_0 (with a ∈ Q^n and b_0 ∈ Q) is an implicit equation of the inequality system Ax ≤ b if a^t x = b_0 holds for all x ∈ P. Definition 9 (Minimal representation)
A representation of a polyhedron is minimal if no inequality of that representation is implied by the other inequalities of that representation.
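The criterion of Lemma 3 reduces redundancy testing to matrix arithmetic: a pair (t, λ) acts as a certificate of redundancy. The sketch below verifies such a certificate on a hand-picked example (the polyhedron, the certificate and the helper code are illustrative, not from the paper):

```python
from fractions import Fraction as F

# Hypothetical example (not from the paper).
# Polyhedron P: x1 <= 1, x2 <= 1, i.e. A x <= b with A the identity matrix.
A = [[F(1), F(0)],
     [F(0), F(1)]]
b = [F(1), F(1)]

# Candidate inequality c^t x <= c0:  x1 + x2 <= 3.
c, c0 = [F(1), F(1)], F(3)

# Certificate in the sense of Lemma 3: a vector t >= 0 and a number
# lambda >= 0 such that c^t = t^t A and c0 = t^t b + lambda.
t, lam = [F(1), F(1)], F(1)

tA = [sum(t[i] * A[i][j] for i in range(2)) for j in range(2)]
tb = sum(t[i] * b[i] for i in range(2))

assert all(ti >= 0 for ti in t) and lam >= 0
assert tA == c and tb + lam == c0   # the certificate checks out
print("x1 + x2 <= 3 is redundant w.r.t. P")
```

Producing such a certificate in general is where LP-based methods spend their time; the point of the present paper is to obtain the same conclusion by matrix arithmetic only.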
Definition 10 (Characteristic (recession) cone of a polyhedron)
The characteristic cone of P is the polyhedral cone denoted by CharCone(P) and defined by CharCone(P) := {y ∈ Q^n | x + y ∈ P, ∀x ∈ P} = {y | Ay ≤ 0}. Definition 11 (Linearity space and pointed polyhedron)
The linearity space of the polyhedron P is the linear space denoted by LinearSpace(P) and defined as CharCone(P) ∩ −CharCone(P) = {y | Ay = 0}, where −CharCone(P) is the set of the −y for y ∈ CharCone(P). The polyhedron P is pointed if its linearity space is {0}. Lemma 4 (Pointed polyhedron criterion) The polyhedron P is pointed if and only if the matrix A has full column rank. Definition 12 (Dimension of a polyhedron)
The dimension of the polyhedron P, denoted by dim(P), is n − r, where n is the dimension of the ambient space (that is, Q^n) and r is the maximum number of implicit equations defined by linearly independent vectors. We say that P is full-dimensional whenever dim(P) = n holds. In other words, P is full-dimensional if and only if it does not have any implicit equations. Definition 13 (Face of a polyhedron)
A subset F of the polyhedron P is called a face of P if F equals {x ∈ P | A_sub x = b_sub} for a sub-matrix A_sub of A and the corresponding sub-vector b_sub of b. Remark 1
It is obvious that every face of a polyhedron is also a polyhedron. Moreover, the intersection of two faces F_1 and F_2 of P is another face F, which is either F_1, or F_2, or a face with a dimension less than min(dim(F_1), dim(F_2)). Note that P and the empty set are faces of P. Definition 14 (Facet of a polyhedron)
A face of P, distinct from P and of maximal dimension, is called a facet of P. Remark 2
It follows from the previous remark that P has at least one facet and that the dimension of any facet of P is equal to dim(P) − 1. When P is full-dimensional, there is a one-to-one correspondence between the inequalities in a minimal representation of P and the facets of P. From this latter observation, we deduce that the minimal representation of a full-dimensional polyhedron is unique up to multiplying each of the defining inequalities by a positive constant. Definition 15 (Minimal face)
A non-empty face that does not contain any other face of a polyhedron is called a minimal face of that polyhedron. Specifically, if the polyhedron P is pointed, each minimal face of P is just a point and is called an extreme point or vertex of P. Definition 16 (Extreme rays)
Let C be a cone such that dim(LinearSpace(C)) = t. Then, a face of C of dimension t + 1 is called a minimal proper face of C. (This notion of dimension coincides with the topological one, that is, the maximum dimension of a ball contained in the set.) In the special case of a pointed cone, that is, whenever t = 0 holds, the dimension of a minimal proper face is 1 and such a face is called an extreme ray. We call an extreme ray of the polyhedron P any extreme ray of its characteristic cone CharCone(P). We say that two extreme rays r and r′ of the polyhedron P are equivalent, and denote it by r ≃ r′, if one is a positive multiple of the other. When we consider the set of all extreme rays of the polyhedron P (or the polyhedral cone C), we will only consider one ray from each equivalence class. Lemma 5 (Generating a cone from its extreme rays)
A pointed cone C can be generated by its extreme rays, that is, we have C = {x ∈ Q^n | (∃ c ≥ 0) x = Rc}, where the columns of R are the extreme rays of C. Remark 3
From the previous definitions and lemmas, we derive the following observations:
1. the number of extreme rays of each cone is finite,
2. the set of all extreme rays is unique up to multiplication by a scalar, and,
3. all members of a cone are non-negative linear combinations of its extreme rays.
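As a small illustration of the third observation (a hypothetical example, not from the paper), the sketch below recovers the non-negative coefficients expressing a member of a pointed cone of Q^2 in terms of its two extreme rays:

```python
from fractions import Fraction as F

# A pointed cone of Q^2 generated by its two extreme rays (Lemma 5).
r1, r2 = (F(1), F(0)), (F(1), F(1))

def coefficients(x):
    """Solve [r1 r2] * (l1, l2)^t = x exactly, by Cramer's rule
    (the 2x2 ray matrix is invertible here)."""
    det = r1[0] * r2[1] - r2[0] * r1[1]
    l1 = (x[0] * r2[1] - r2[0] * x[1]) / det
    l2 = (r1[0] * x[1] - x[0] * r1[1]) / det
    return l1, l2

l1, l2 = coefficients((F(3), F(2)))
assert l1 >= 0 and l2 >= 0   # non-negative, so (3, 2) lies in the cone
assert (l1 * r1[0] + l2 * r2[0], l1 * r1[1] + l2 * r2[1]) == (F(3), F(2))
print(l1, l2)  # 1 2
```

A point outside the cone would yield a negative coefficient, so the same computation doubles as a membership test for this cone.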
We denote by ExtremeRays(C) the set of extreme rays of the cone C. Recall that all cones considered here are polyhedral. The following lemma, see [26, 31], is helpful in the analysis of algorithms manipulating extreme rays of cones and polyhedra. Lemma 6 (Maximum number of extreme rays)
Let E(C) be the number of extreme rays of a polyhedral cone C ⊆ Q^n with m facets. Then, we have:

E(C) ≤ (m − ⌊(n+1)/2⌋ choose m − n) + (m − ⌊(n+2)/2⌋ choose m − n) ≤ m^{⌊n/2⌋}. (1)

From Remark 3, it appears that extreme rays are important characteristics of polyhedral cones. Therefore, two algorithms have been developed in [13] to check whether a member of a cone is an extreme ray or not. To explain these algorithms, we need the following definition. Definition 17 (Zero set of a cone)
For a cone C = {x ∈ Q^n | Ax ≤ 0} and t ∈ C, we define the zero set ζ_A(t) as the set of row indices i such that A_i t = 0, where A_i is the i-th row of A. For simplicity, we use ζ(t) instead of ζ_A(t) when there is no ambiguity. In the following two lemmas, we consider a cone C = {x ∈ Q^n | A′x = 0, A″x ≤ 0}, where A′ and A″ are two matrices such that the system A″x ≤ 0 has no implicit equations. The proofs of the following lemmas are straightforward and can be found in [13] and [31]. Lemma 7 (Algebraic test for extreme rays)
Let r ∈ C. Then, the ray r is an extreme ray of C if and only if we have rank([A′; A″_{ζ(r)}]) = n − 1, where [A′; A″_{ζ(r)}] denotes the matrix obtained by stacking A′ on top of A″_{ζ(r)}. Lemma 8 (Combinatorial test for extreme rays)
Let r ∈ C. Then, the ray r is an extreme ray of C if and only if, for any ray r′ of C such that ζ(r) ⊆ ζ(r′) holds, we have r′ ≃ r. Definition 18 (Polar cone)
For the given polyhedral cone C ⊆ Q^n, the polar cone induced by C is denoted by C∗ and given by: C∗ = {y ∈ Q^n | y^t x ≤ 0, ∀x ∈ C}. The following lemma shows an important property of the polar cone of a polyhedral cone. The proof can be found in [29].
Lemma 9 (Polarity property)
For a given cone C ⊆ Q^n, there is a one-to-one correspondence between the faces of C of dimension k and the faces of C∗ of dimension n − k. In particular, there is a one-to-one correspondence between the facets of C and the extreme rays of C∗. Each polyhedron P can be embedded in a higher-dimensional cone, called the homogenized cone associated with P. Definition 19 (Homogenized cone of a polyhedron)
The homogenized cone of the polyhedron P = {x ∈ Q^n | Ax ≤ b} is denoted by HomCone(P) and defined by:

HomCone(P) = {(x, x_last) ∈ Q^{n+1} | C [x^t, x_last]^t ≤ 0}, where C = [A −b; 0^t −1]

is an ((m + 1) × (n + 1))-matrix, if A is an (m × n)-matrix. Lemma 10 (H-representation correspondence) An inequality A_i x ≤ b_i is redundant in P if and only if the corresponding inequality A_i x − b_i x_last ≤ 0 is redundant in HomCone(P). Theorem 3 (Extreme rays of the homogenized cone)
Every extreme ray of the homogenized cone HomCone(P) associated with the polyhedron P is either of the form (x, 0), where x is an extreme ray of P, or of the form (x, 1), where x is an extreme point of P.

In this section, we review two of the most important algorithms for polyhedral computations: the double description algorithm (DD for short) and the Fourier-Motzkin elimination algorithm (FME for short). A polyhedral cone C can be represented either as an intersection of finitely many half-spaces (thus using the so-called H-representation of C) or by its extreme rays (thus using the so-called V-representation of C); the DD algorithm produces one representation from the other. We shall explain the version of the DD algorithm which takes as input the H-representation of C and returns as output the V-representation of C. The FME algorithm performs a standard projection of a polyhedral set onto a lower-dimensional subspace. In algebraic terms, this algorithm takes as input a polyhedron P given by a system of linear inequalities (thus an H-representation of P) in n variables x_1 < x_2 < · · · < x_n and computes the H-representation of the projection of P on x_1 < · · · < x_k for some 1 ≤ k < n.

We know from Theorem 2 that any polyhedral cone C = {x ∈ Q^n | Ax ≤ 0} can be generated by finitely many vectors, say x_1, . . . , x_q ∈ Q^n. Moreover, from Lemma 5 we know that if C is pointed, then it can be generated by its extreme rays, that is, C = Cone(R) where R = [x_1, . . . , x_q]. Therefore, we have two possible representations for the pointed polyhedral cone C:

H-representation: as the intersection of finitely many half-spaces, or equivalently, with a system of linear inequalities Ax ≤ 0;

V-representation: as the set of non-negative linear combinations of finitely many vectors, namely Cone(R), where R is a matrix the columns of which are the extreme rays of C.

We say that the pair (A, R) is a Double Description Pair, or simply a DD pair, of C. We call A a representation matrix of C and R a generating matrix of C. We call R (resp. A) a minimal generating (resp. representing) matrix when no proper sub-matrix of R (resp. A) is generating (resp. representing) C. It is important to notice that, for some queries in polyhedral computations, the output can be calculated in polynomial time using one representation (either a representation matrix or a generating matrix), while it would require exponential time using the other representation. For example, we can compute in polynomial time the intersection of two cones when they are given in H-representation, but the same problem is harder to solve when the same cones are given in V-representation. Therefore, it is important to have a procedure to convert between these two representations, which is the focus of the articles [8] and [31]. We will explain this procedure, which is known as the double description method as well as Chernikova's algorithm. This algorithm takes a cone in H-representation as input and returns a V-representation of the same cone as output. In other words, this procedure finds the extreme rays of a polyhedral cone given by its representation matrix. It has been proven that this procedure runs in single exponential time. To the best of our knowledge, the most practically efficient variant of this procedure has been proposed by Fukuda in [13] and is implemented in the CDD library. We shall explain his approach here and analyze its algebraic complexity. Before presenting Fukuda's algorithm, we need a few more definitions and results. In this section, we assume that the input cone C is pointed.

The double description method works in an incremental manner. Denoting by H_1, . . . , H_m the half-spaces corresponding to the inequalities of the H-representation of C, we have C = H_1 ∩ · · · ∩ H_m. Let 1 < i ≤ m and assume that we have computed the extreme rays of the cone C_{i−1} := H_1 ∩ · · · ∩ H_{i−1}. Then the i-th iteration of the DD method deduces the extreme rays of C_i from those of C_{i−1} and H_i. Assume that the half-spaces H_1, . . . , H_m are numbered such that H_i is given by A_i x ≤ 0, where A_i is the i-th row of the representing matrix A. We consider the following partition of Q^n:

H_i^+ = {x ∈ Q^n | A_i x > 0}, H_i^0 = {x ∈ Q^n | A_i x = 0} and H_i^− = {x ∈ Q^n | A_i x < 0}.

Assume that we have found the DD-pair (A^{i−1}, R^{i−1}) of C_{i−1}, where A^{i−1} denotes the sub-matrix of A consisting of its first i − 1 rows. Let J be the set of the column indices of R^{i−1}. We use the above partition {H_i^+, H_i^0, H_i^−} to partition J as follows:

J_i^+ = {j ∈ J | r_j ∈ H_i^+}, J_i^0 = {j ∈ J | r_j ∈ H_i^0} and J_i^− = {j ∈ J | r_j ∈ H_i^−},

where {r_j | j ∈ J} is the set of the columns of R^{i−1}, hence the set of the extreme rays of C_{i−1}. For future reference, let us denote by partition(J, A_i) the function which returns J^+, J^0, J^− as defined above. The proof of the following lemma can be found in [13]. Lemma 11 (Double description method)
Let J′ := J^− ∪ J^0 ∪ (J^+ × J^−). Let R^i be the (n × |J′|)-matrix consisting of

• the columns of R^{i−1} with index in J^− ∪ J^0, followed by

• the vectors r′_{(j,j′)} for (j, j′) ∈ J^+ × J^−, where r′_{(j,j′)} = (A_i r_j) r_{j′} − (A_i r_{j′}) r_j.

Then, the pair (A^i, R^i) is a DD pair of C_i. The most efficient way to start the incremental process is to choose the largest sub-matrix of A with linearly independent rows; we call this matrix A^0. Indeed, denoting by C^0 the cone with A^0 as representation matrix, the matrix A^0 is invertible and its inverse gives the extreme rays of C^0, that is: ExtremeRays(C^0) = (A^0)^{−1}. Therefore, the first DD-pair that the above incremental step should take as input is (A^0, (A^0)^{−1}). The next key point towards a practically efficient DD method is to observe that most of the vectors r′_{(j,j′)} in Lemma 11 are redundant. Indeed, Lemma 11 leads to a construction of a generating matrix of C (in fact, this would be Algorithm 2 with its adjacency test suppressed) producing a double exponential number of rays (w.r.t. the ambient dimension n), whereas Lemma 6 guarantees that the number of extreme rays of a polyhedral cone is singly exponential in its ambient dimension. To deal with this issue of redundancy, we need the notion of adjacent extreme rays. Definition 20 (Adjacent extreme rays)
Two distinct extreme rays r and r′ of the polyhedral cone C are called adjacent if they span a 2-dimensional face of C. The following proposition shows how we can test whether two extreme rays are adjacent or not; the proof can be found in [13]. (We do not use the notion of minimal face here, as is done in the main reference, in order to avoid confusion.)

Proposition 1 (Adjacency test) Let r and r′ be two distinct rays of C. Then, the following statements are equivalent:
1. r and r′ are adjacent extreme rays,
2. r and r′ are extreme rays and rank(A_{ζ(r) ∩ ζ(r′)}) = n − 2,
3. if r″ is a ray of C with ζ(r) ∩ ζ(r′) ⊆ ζ(r″), then r″ is a positive multiple of either r or r′.

It should be noted that the second statement is related to the algebraic test for extreme rays, while the third one is related to the combinatorial test. Based on Proposition 1, we have Algorithm 1 for testing whether two extreme rays are adjacent or not.
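Before the pseudo-code, here is a direct sketch of the rank-based test of statement 2, on a hypothetical example cone (not taken from the paper), with the rank computed exactly over Q:

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix over Q, by Gaussian elimination in exact arithmetic."""
    M = [[F(x) for x in row] for row in M]
    if not M:
        return 0
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r

def adjacent(A, r1, r2):
    """Rank-based adjacency test (statement 2 of Proposition 1) for the
    pointed cone {x | A x <= 0} and two of its extreme rays r1, r2."""
    n = len(A[0])
    zeta = [i for i, row in enumerate(A)
            if sum(a * x for a, x in zip(row, r1)) == 0
            and sum(a * x for a, x in zip(row, r2)) == 0]
    return rank([A[i] for i in zeta]) == n - 2

# Hypothetical example: the cone over a square,
# C = {x in Q^3 | x1 <= x3, -x1 <= x3, x2 <= x3, -x2 <= x3},
# whose extreme rays are (+-1, +-1, 1).
A = [[1, 0, -1], [-1, 0, -1], [0, 1, -1], [0, -1, -1]]
r1, r2, r3 = (1, 1, 1), (1, -1, 1), (-1, -1, 1)
assert adjacent(A, r1, r2)        # share the facet x1 = x3
assert not adjacent(A, r1, r3)    # opposite corners of the square
```

The pseudo-code of Algorithm 1 follows the same steps, with the zero sets computed from s := Ar and s′ := Ar′.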
Algorithm 1
AdjacencyTest
Input: (A, r, r′), where A ∈ Q^{m×n} is the representation matrix of the cone C, and r and r′ are two extreme rays of C.
Output: true if r and r′ are adjacent, false otherwise.
  s := A r, s′ := A r′
  let ζ(r) and ζ(r′) be the sets of indices of the zero entries of s and s′, respectively
  ζ := ζ(r) ∩ ζ(r′)
  if rank(A_ζ) = n − 2 then return true else return false

The following lemma explains how to obtain (A^i, R^i) from (A^{i−1}, R^{i−1}), where A^{i−1} (resp. A^i) is the sub-matrix of A consisting of its first i − 1 (resp. i) rows. The double description method is a direct application of this lemma; see [13] for details. Lemma 12
As above, let (A^{i−1}, R^{i−1}) be a DD-pair and denote by J the set of indices of the columns of R^{i−1}. Assume that rank(A^{i−1}) = n holds. Let J′ := J^− ∪ J^0 ∪ Adj, where Adj is the set of the pairs (j, j′) ∈ J^+ × J^− such that r_j and r_{j′} are adjacent as extreme rays of C_{i−1}, the cone with A^{i−1} as representing matrix. Let R^i be the (n × |J′|)-matrix consisting of

• the columns of R^{i−1} with index in J^− ∪ J^0, followed by

• the vectors r′_{(j,j′)} for (j, j′) ∈ Adj, where r′_{(j,j′)} = (A_i r_j) r_{j′} − (A_i r_{j′}) r_j.

Then, the pair (A^i, R^i) is a DD pair of C_i. Furthermore, if R^{i−1} is a minimal generating matrix for the representation matrix A^{i−1}, then R^i is also a minimal generating matrix for the representation matrix A^i.

Using Proposition 1 and Lemma 12, we can obtain Algorithm 2 for computing the extreme rays of a cone.

Algorithm 2 DDmethod
Input: a matrix A ∈ Q^{m×n}, a representation matrix of a pointed cone C.
Output: R, the minimal generating matrix of C.
  let K be a set of indices of linearly independent rows of A of maximum cardinality
  A^0 := A_K
  R := (A^0)^{−1}
  let J be the set of column indices of R
  while K ≠ {1, . . . , m} do
    select a row index i of A with i ∉ K
    J^+, J^0, J^− := partition(J, A_i)
    let R′ consist of the columns of R with index in J^− ∪ J^0
    for p ∈ J^+ do
      for n ∈ J^− do
        if AdjacencyTest(A_K, r_p, r_n) = true then
          r_new := (A_i r_p) r_n − (A_i r_n) r_p
          add r_new as a column to R′
        end if
      end for
    end for
    R := R′; K := K ∪ {i}; let J be the set of column indices of R
  end while

(In this algorithm, A_K denotes the representation matrix of the current step, following Notation 1.)

2.2.2 Fourier-Motzkin elimination

Definition 21 (Projection of a polyhedron) Let A ∈ Q^{m×p} and B ∈ Q^{m×q} be matrices. Let c ∈ Q^m be a vector. Consider the polyhedron P ⊆ Q^{p+q} defined by P = {(u, x) ∈ Q^{p+q} | Au + Bx ≤ c}. We denote by proj(P; x) the projection of P on x, that is, the subset of Q^q defined by proj(P; x) = {x ∈ Q^q | ∃u ∈ Q^p, (u, x) ∈ P}.

Fourier-Motzkin elimination computes a representation of proj(P; x) by successively eliminating the u-variables from the inequality system Au + Bx ≤ c. This process shows that proj(P; x) is also a polyhedron. Definition 22 (Inequality combination)
Let ℓ_1, ℓ_2 be two inequalities: a_1 x_1 + · · · + a_n x_n ≤ d_1 and b_1 x_1 + · · · + b_n x_n ≤ d_2. Let 1 ≤ i ≤ n be such that the coefficients a_i and b_i of x_i in ℓ_1 and ℓ_2 are respectively positive and negative. The combination of ℓ_1 and ℓ_2 w.r.t. x_i, denoted by Combine(ℓ_1, ℓ_2, x_i), is:

−b_i (a_1 x_1 + · · · + a_n x_n) + a_i (b_1 x_1 + · · · + b_n x_n) ≤ −b_i d_1 + a_i d_2.

Theorem 4 shows how to compute proj(P; x) when u consists of a single variable x_i. When u consists of several variables, FME obtains the projection proj(P; x) by repeated applications of Theorem 4. Theorem 4 (Fourier-Motzkin theorem [23])
Let A ∈ Q^{m×n} be a matrix and let b ∈ Q^m be a vector. Consider the polyhedron P = {x ∈ Q^n | Ax ≤ b}. Let S be the set of inequalities defined by Ax ≤ b. Also, let 1 ≤ i ≤ n. We partition S according to the sign of the coefficient of x_i:

S^+ = {ℓ ∈ S | coeff(ℓ, x_i) > 0}, S^− = {ℓ ∈ S | coeff(ℓ, x_i) < 0} and S^0 = {ℓ ∈ S | coeff(ℓ, x_i) = 0}.

We construct the following system of linear inequalities:

S′ = {Combine(s_p, s_n, x_i) | (s_p, s_n) ∈ S^+ × S^−} ∪ S^0.

Then, S′ is a representation of proj(P; x \ {x_i}).

With the notations of Theorem 4, assume that each of S^+ and S^− counts m/2 inequalities. Then, the set S′ counts (m/2)^2 inequalities. After eliminating p variables, the projection would be given by O((m/2)^{2^p}) inequalities. Thus, FME is double exponential in p. On the other hand, from [27] and [19], we know that the maximum number of facets of the projection on Q^{n−p} of a polyhedron in Q^n with m facets is O(m^{⌊n/2⌋}). Hence, it can be concluded that most of the inequalities generated by FME are redundant. Eliminating these redundancies is the main subject of the subsequent sections.

We use the notion of height of an algebraic number as defined by Michel Waldschmidt in Chapter 3 of [33]. In particular, for any rational number a/b, thus with b ≠ 0, we define the height of a/b, denoted as height(a/b), as log max(|a|, |b|). For a given matrix A ∈ Q^{m×n}, let ‖A‖ denote the infinite norm of A, that is, the maximum absolute value of a coefficient in A. We define the height of A, denoted by height(A) := height(‖A‖), as the maximal height of a coefficient in A. For the rest of this section, our main reference is the PhD thesis of Arne Storjohann [30]. Let k be a non-negative integer.
We denote by M(k) an upper bound for the number of bit operations required for performing any of the basic operations (addition, multiplication, division with remainder) on input a, b ∈ Z with |a|, |b| < 2^k. Using the multiplication algorithm of Arnold Schönhage and Volker Strassen [28], one can choose M(k) ∈ O(k log k log log k).

We also need complexity estimates for some matrix operations. For positive integers a, b, c, let us denote by MM(a, b, c) an upper bound for the number of arithmetic operations (on the coefficients) required for multiplying an (a × b)-matrix by a (b × c)-matrix. In the case of square matrices of order n, we simply write MM(n) instead of MM(n, n, n). We denote by θ the exponent of linear algebra, that is, the smallest positive real number such that MM(n) ∈ O(n^θ).

In the following, we give complexity estimates in terms of M(k) ∈ O(k log k log log k) and B(k) = M(k) log k ∈ O(k (log k)^2 log log k). We replace every term of the form (log k)^p (log log k)^q (log log log k)^r (where p, q, r are positive real numbers) with O(k^ǫ), where ǫ is a (positive) infinitesimal. Furthermore, in the complexity estimates of algorithms operating on matrices and vectors over Z, we use a parameter β, which is a bound on the magnitude of the integers occurring during the algorithm. Our complexity estimates are measured in terms of machine word operations.

Let A ∈ Z^{m×n} and B ∈ Z^{n×p}. Then, the product of A by B can be computed within O(MM(m, n, p) log β + (mn + np + mp) B(log β)) word operations, where β = n ‖A‖ ‖B‖ and ‖A‖ (resp. ‖B‖) denotes the maximum absolute value of a coefficient in A (resp. B). Neglecting log factors, this estimate becomes O(max(m, n, p)^θ max(h_A, h_B)), where h_A = height(A) and h_B = height(B).
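As a small sketch of the height function used throughout these estimates (the helper names are illustrative, not from the paper):

```python
from fractions import Fraction
from math import log

def height(q):
    """height(a/b) = log max(|a|, |b|) for a rational number a/b in lowest
    terms (natural logarithm; Fraction normalizes to lowest terms)."""
    q = Fraction(q)
    return log(max(abs(q.numerator), abs(q.denominator)))

def height_matrix(M):
    """height(A) = height(||A||), where ||A|| is the maximum absolute value
    of a coefficient of A (the infinite norm used in the text)."""
    norm = max(abs(Fraction(x)) for row in M for x in row)
    return height(norm)

print(height(Fraction(3, 7)))   # log 7, the larger of |3| and |7|
```

The bit size of the coefficients occurring in a computation is thus tracked through the heights h_A, h_B of the operands.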
For a matrix A ∈ Z^{m×n}, a cost estimate of the Gauss-Jordan transform is O(nmr^{θ−2} M(log β) + nm (log r) B(log β)) word operations, where r is the rank of the input matrix A and β = (√r ‖A‖)^r. For a matrix A ∈ Z^{m×n} with height h, computing the rank of A is done within O(mn^{θ+ǫ} h^{1+ǫ}) word operations, and computing the inverse of A (when this matrix is invertible over Q and m = n) is done within O(m^{θ+1+ǫ} h^{1+ǫ}) word operations. Let A ∈ Z^{n×n} be an integer matrix which is invertible over Q. Then, the absolute value of any coefficient in A^{−1} (the inverse of A) can be bounded above by (√(n − 1) ‖A‖)^{n−1}.

As recalled in Section 2, FME produces a representation of the projection of a polyhedron by eliminating one variable after another. However, this procedure generates lots of redundant inequalities, limiting its use in practice to polyhedral sets with a handful of variables only. In this section, we propose an efficient algorithm which generates a minimal representation of a full-dimensional pointed polyhedron, as well as of its projections. Throughout this section, we use Q to denote a full-dimensional pointed polyhedron in Q^n, where

Q = {(u, x) ∈ Q^p × Q^q | Au + Bx ≤ c}, (2)

with A ∈ Q^{m×p}, B ∈ Q^{m×q} and c ∈ Q^m. Thus, Q has no implicit equations in its representation and the coefficient matrix [A, B] has full column rank. Our goal in this section is to compute the minimal representation of the projection proj(Q; x) given by

proj(Q; x) := {x | ∃u, s.t. (u, x) ∈ Q}. (3)

We call the cone

C := {y ∈ Q^m | y^t A = 0 and y ≥ 0} (4)

the projection cone of Q w.r.t. u. When there is no ambiguity, we simply call C the projection cone of Q. Using the following so-called projection lemma, we can compute a representation for the projection proj(Q; x): Lemma 13 ( [7])
The projection proj ( Q ; x ) of the polyhedron Q can be represented by S := { y t B x ≤ y t c , ∀ y ∈ ExtremeRays ( C ) } , where C is the projection cone of Q defined by Equation (4). Lemma 13 provides the main idea of the block elimination method. However, the rep-resention produced in this way may have redundant inequalities. The following examplefrom [16] shows this point.
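Before the example, the mechanics of Lemma 13 can be checked on a tiny instance of our own (one variable u to eliminate, two kept variables): any ray y of the projection cone turns the system A u + B x ≤ c into a u-free inequality y^T B x ≤ y^T c that is valid for the projection.

```python
# A toy system: u + x1 <= 2 and -u + x2 <= 3, eliminating u.
A = [[1], [-1]]          # coefficients of the eliminated variable u
B = [[1, 0], [0, 1]]     # coefficients of the kept variables x1, x2
c = [2, 3]

# y lies in the projection cone: y >= 0 and y^T A = 0.
y = [1, 1]
assert all(v >= 0 for v in y)
assert sum(y[i] * A[i][0] for i in range(2)) == 0

# The induced inequality y^T B x <= y^T c, here: x1 + x2 <= 5.
lhs = [sum(y[i] * B[i][j] for i in range(2)) for j in range(2)]
rhs = sum(y[i] * c[i] for i in range(2))
assert (lhs, rhs) == ([1, 1], 5)

# Any point (u, x) of Q satisfies it, e.g. u = 1, x = (1, 3):
u, x = 1, (1, 3)
assert u + x[0] <= 2 and -u + x[1] <= 3
assert lhs[0] * x[0] + lhs[1] * x[1] <= rhs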
Example 1
Let P be the polyhedron represented by the system (5) of six linear inequalities in the variables x_1, x_2, x_3, x_4 given in [16]. The projection cone (6) of P w.r.t. u := {x_1, x_2} is defined by two linear equations in y together with the sign conditions y ≥ 0. This cone C has six extreme rays and, by Lemma 13, these extreme rays generate a representation (7) of proj(P; {x_3, x_4}) consisting of six inequalities. One can check that, in this system of linear inequalities, one of the six inequalities is redundant.

In [1], Balas observed that if the matrix B is invertible, then one can define a cone whose extreme rays are in one-to-one correspondence with the facets of the projection of the polyhedron (the proof of this fact is similar to the proof of our Theorem 5). Using this fact, Balas developed an algorithm to find all redundant inequalities in all cases, including the cases where B is singular. It should be noted that, although we are using his idea, we have found some flaws in Balas' paper. In this section, we explain the corrected form of Balas' algorithm. To achieve this, we lift the polyhedron Q to a higher-dimensional space by constructing the following objects.

Construction of B_0. Assume that the first q rows of B, denoted by B_1, are independent. Denote the last m − q rows of B by B_2. Add m − q columns e_{q+1}, ..., e_m to B, where e_i is the i-th vector in the canonical basis of Q^m, thus with 1 in the i-th position and 0's everywhere else. The matrix B_0 has the following form:

B_0 = [ B_1  0 ; B_2  I_{m−q} ].

To maintain consistency in the notation, let A_0 = A and c_0 = c.

Construction of Q_0. We define:

Q_0 := {(u, x') ∈ Q^p × Q^m | A_0 u + B_0 x' ≤ c_0, x_{q+1} = ··· = x_m = 0}.

Here and after, we use x' to represent the vector x ∈ Q^q augmented with the m − q variables (x_{q+1}, ..., x_m). Since the extra variables (x_{q+1}, ..., x_m) are set to zero, we note that proj(Q; x) and proj(Q_0; x') are "isomorphic" by means of the bijection Φ:

Φ : proj(Q; x) → proj(Q_0; x'),   (x_1, ..., x_q) ↦ (x_1, ..., x_q, 0, ..., 0).

Therefore, we may regard proj(Q; x) and proj(Q_0; x') as the same polyhedron when there is no ambiguity.

Construction of W_0. Define W_0 to be the set of all (v, w, v_0) ∈ Q^q × Q^{m−q} × Q satisfying

[v^T, w^T] B_0^{-1} A_0 = 0,  [v^T, w^T] B_0^{-1} ≥ 0,  −[v^T, w^T] B_0^{-1} c_0 + v_0 ≥ 0.    (8)

This construction of W_0 is slightly different from the one in Balas' work [1]. Indeed, we changed −[v^T, w^T] B_0^{-1} c_0 + v_0 = 0 to −[v^T, w^T] B_0^{-1} c_0 + v_0 ≥ 0. Similarly to the discussion in Balas' work, the extreme rays of the cone proj(W_0; {v, v_0}) are used to construct a minimal representation of the projection proj(Q; x). To prove this relation, we need a preliminary observation.

Lemma 14 The operations "computing the characteristic cone" and "computing projections" commute. To be precise, we have:

CharCone(proj(Q; x)) = proj(CharCone(Q); x).

Proof ✄ By the definition of the characteristic cone, we have CharCone(Q) = {(u, x) | A u + B x ≤ 0}, whose representation has the same left-hand side as that of Q. The lemma is valid if we can show that the representation of proj(CharCone(Q); x) has the same left-hand side as that of proj(Q; x). This is obvious with the Fourier-Motzkin elimination procedure. ✁

Theorem 5 shows that the extreme rays of the cone \overline{proj}(W_0; {v, v_0}), defined as

\overline{proj}(W_0; {v, v_0}) := {(v, −v_0) | (v, v_0) ∈ proj(W_0; {v, v_0})},

are in one-to-one correspondence with the facets of HomCone(proj(Q; x)); as a result, its extreme rays can be used to find a minimal representation of HomCone(proj(Q; x)).

Theorem 5 The polar cone of HomCone(proj(Q; x)) is equal to \overline{proj}(W_0; {v, v_0}).

Proof ✄ By definition, the polar cone (HomCone(proj(Q; x)))^* is equal to

{(y, y_0) | [y^T, y_0] [x^T, x_last]^T ≤ 0, ∀ (x, x_last) ∈ HomCone(proj(Q; x))}.

The claim follows immediately from (HomCone(proj(Q; x)))^* = \overline{proj}(W_0; {v, v_0}). We shall prove this latter equality in two steps.

(⊇) For any (v, −v_0) ∈ \overline{proj}(W_0; {v, v_0}), we need to show that [v^T, −v_0] [x^T, x_last]^T ≤ 0 holds for all (x, x_last) ∈ HomCone(proj(Q; x)). Remember that we assume that Q is pointed. Observe that HomCone(proj(Q; x)) is then also pointed. Therefore, we only need to verify the desired property for the extreme rays of HomCone(proj(Q; x)), which either have the form (s, 1) or are equal to (s, 0) (Theorem 3). Before continuing, we should notice that, since (v, v_0) ∈ proj(W_0; {v, v_0}), there exists w such that [v^T, w^T] B_0^{-1} A_0 = 0, −[v^T, w^T] B_0^{-1} c_0 + v_0 ≥ 0 and [v^T, w^T] B_0^{-1} ≥ 0. Cases 1 and 2 below conclude that (v, −v_0) ∈ (HomCone(proj(Q; x)))^* holds.

Case 1: For the form (s, 1), we have s ∈ proj(Q; x); indeed, s is an extreme point of proj(Q; x). Hence, there exists u ∈ Q^p such that A u + B s ≤ c. By construction of Q_0, we have A_0 u + B_0 s' ≤ c_0, where s' = [s^T, s_{q+1}, ..., s_m]^T with s_{q+1} = ··· = s_m = 0. Therefore, we have [v^T, w^T] B_0^{-1} A_0 u + [v^T, w^T] B_0^{-1} B_0 s' ≤ [v^T, w^T] B_0^{-1} c_0. This leads us to v^T s = [v^T, w^T] s' ≤ [v^T, w^T] B_0^{-1} c_0 ≤ v_0. Therefore, we have [v^T, −v_0] [s^T, 1]^T ≤ 0, as desired.

Case 2: For the form (s, 0), we have s ∈ CharCone(proj(Q; x)) = proj(CharCone(Q); x). Thus, there exists u ∈ Q^p such that A u + B s ≤ 0. Similarly to Case 1, we have [v^T, w^T] B_0^{-1} A_0 u + [v^T, w^T] B_0^{-1} B_0 s' ≤ 0. Therefore, we have v^T s = [v^T, w^T] s' ≤ 0, and thus [v^T, −v_0] [s^T, 0]^T ≤ 0, as desired.

(⊆) For any (y, y_0) ∈ (HomCone(proj(Q; x)))^*, we have [y^T, y_0] [x^T, x_last]^T ≤ 0 for all (x, x_last) ∈ HomCone(proj(Q; x)). For any x ∈ proj(Q; x), we have y^T x ≤ −y_0, since (x, 1) ∈ HomCone(proj(Q; x)). Therefore, y^T x ≤ −y_0 holds for all x ∈ proj(Q; x), which makes the inequality y^T x ≤ −y_0 redundant in the system {A u + B x ≤ c}. By Farkas' Lemma (see Lemma 3), there exist p ≥ 0, p ∈ Q^m, and λ ≥ 0 such that p^T A = 0, y^T = p^T B and −y_0 = p^T c + λ. Remember that A_0 = A, B_0 = [B, B'] and c_0 = c, where B' consists of the last m − q columns e_{q+1}, ..., e_m of B_0. Let w^T = p^T B'. We then have {p^T A_0 = 0, [y^T, w^T] = p^T B_0, −y_0 ≥ p^T c_0, p ≥ 0}, which is equivalent to {p^T = [y^T, w^T] B_0^{-1}, [y^T, w^T] B_0^{-1} A_0 = 0, −y_0 ≥ [y^T, w^T] B_0^{-1} c_0, [y^T, w^T] B_0^{-1} ≥ 0}. Therefore, (y, w, −y_0) ∈ W_0, which leads us to (y, −y_0) ∈ proj(W_0; {v, v_0}). From this, we deduce that (y, y_0) ∈ \overline{proj}(W_0; {v, v_0}) holds. ✁

Theorem 6
The minimal representation of proj(Q; x) is given exactly by {v^T x ≤ v_0 | (v, v_0) ∈ ExtremeRays(proj(W_0; {v, v_0})) \ {(0, 1)}}.

Proof ✄ By Theorem 5, a minimal representation of the homogenized cone HomCone(proj(Q; x)) is given exactly by {v^T x − v_0 x_last ≤ 0 | (v, v_0) ∈ ExtremeRays(proj(W_0; {v, v_0}))}. By Lemma 10, any minimal representation of HomCone(proj(Q; x)) has at most one more inequality than any minimal representation of proj(Q; x). This extra inequality would be x_last ≥ 0, in which case proj(W_0; {v, v_0}) would have the extreme ray (0, 1). Therefore, a minimal representation of proj(Q; x) is given by {v^T x ≤ v_0 | (v, v_0) ∈ ExtremeRays(proj(W_0; {v, v_0})) \ {(0, 1)}}. ✁

For simplicity, we call the cone proj(W_0; {v, v_0}) the redundancy test cone of Q w.r.t. u and denote it by P_u(Q). When u is empty, we define P(Q) := P_u(Q) and we call it the initial redundancy test cone. If there is no ambiguity, we use only P_u and P to denote the redundancy test cone and the initial redundancy test cone, respectively. It should be noted that P(Q) can be used to detect redundant inequalities in the input system, as shown in Steps 3 to 8 of Algorithm 5.

In this section, we present our algorithm for removing all the redundant inequalities generated during Fourier-Motzkin elimination. Our algorithm detects and eliminates redundant inequalities right after their generation, using the redundancy test cone introduced in Section 3. Intuitively, we would need to construct the cone W_0 and obtain a representation of the redundancy test cone proj(W_0; {v, v_0}) each time we eliminate a variable during FME. This method is time-consuming because it requires computing the projection of W_0 onto the {v, v_0} space at each step. However, as we prove in Lemma 15, we only need to compute the initial redundancy test cone, using Algorithm 3; the redundancy test cones used in the subsequent variable eliminations can be found incrementally without any extra cost.

Note that a byproduct of this algorithm is the minimal projected representation of the input system, according to the specified variable ordering. This representation is useful for finding solutions of linear inequality systems.
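The lifting used in the constructions of Section 3 (appending canonical basis columns to B to obtain the invertible matrix B_0, and likewise for A_0 in Algorithm 3 below) can be illustrated in exact rational arithmetic; the helper names are ours:

```python
from fractions import Fraction

def lift(Bmat, m, q):
    # B_0 := [B | e_{q+1} ... e_m]: append the last m - q canonical basis
    # columns of Q^m, so that B_0 is square of order m (and invertible
    # whenever the first q rows of B are linearly independent).
    B0 = [row[:] for row in Bmat]
    for i in range(m):
        for j in range(q, m):
            B0[i].append(Fraction(1) if i == j else Fraction(0))
    return B0

def is_invertible(M):
    # Fraction-exact Gaussian elimination; True iff det(M) != 0.
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return False
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n):
                M[r][k] -= f * M[col][k]
    return True

# A 3 x 2 matrix B whose first two rows are independent.
B = [[Fraction(v) for v in row] for row in [[1, 0], [0, 1], [2, 3]]]
B0 = lift(B, m=3, q=2)
assert B0 == [[1, 0, 0], [0, 1, 0], [2, 3, 1]]
assert is_invertible(B0)
```

The block shape [B_1 0; B_2 I_{m−q}] is visible in the last row of B0, and invertibility follows from that of B_1.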
The projected representation was introduced in [18, 19] and will be reviewed in Definition 23. For convenience, we rewrite the input polyhedron Q defined in Equation (2) as Q = {y ∈ Q^n | A y ≤ c}, where A = [A, B] ∈ Q^{m×n}, n = p + q and y = [u^T, x^T]^T ∈ Q^n. We assume that the first n rows of A are linearly independent.

Remark 4 There are two important points about Algorithm 3. First, we only need a representation of the initial redundancy test cone; this representation need not be minimal. Therefore, calling Algorithm 3 in Algorithm 5 (which computes a minimal projected representation of a polyhedron) does not lead to a recursive call to Algorithm 5. Second, to compute the projection proj(W_0; {v, v_0}), we need to eliminate m − n variables from m + 1 inequalities. The block elimination method is applied to achieve this. As shown in Lemma 13, the block elimination method requires computing the extreme rays of the

Algorithm 3 Generate initial redundancy test cone
Input: S = {A y ≤ c}: a representation of the input polyhedron Q;
Output: P: a representation of the initial redundancy test cone of Q.

Construct A_0 in the same way we constructed B_0, that is, A_0 := [A, A'], where A' := [e_{n+1}, ..., e_m], with e_i being the i-th vector of the canonical basis of Q^m;
Let W_0 := {(v, w, v_0) ∈ Q^n × Q^{m−n} × Q | −[v^T, w^T] A_0^{-1} c + v_0 ≥ 0, [v^T, w^T] A_0^{-1} ≥ 0};
P := proj(W_0; {v, v_0});
return P

projection cone (denoted by C), which contains m + 1 inequalities and m + 1 variables. However, considering the structural properties of the coefficient matrix of the representation of C, we found that computing the extreme rays of C is equivalent to computing the extreme rays of another, simpler cone, which still has m + 1 inequalities but only n + 1 variables. For more details, please refer to Step 3 in the proof of Lemma 18.

Lemma 15 shows how to obtain the redundancy test cone P_u of the polyhedron Q w.r.t. u from its initial redundancy test cone P. This gives a very cheap way to generate all the redundancy test cones of Q once its initial redundancy test cone has been generated; this will be used in Algorithm 5. To distinguish from the construction of P, we rename the variables v, w, v_0 as v_u, w_u, v_{0u} when constructing W_0 and computing the test cone P_u. That is, we have P_u = proj(W_0; {v_u, v_{0u}}), where W_0 is the set of all (v_u, w_u, v_{0u}) ∈ Q^q × Q^{m−q} × Q satisfying

{[v_u^T, w_u^T] B_0^{-1} A_0 = 0, −[v_u^T, w_u^T] B_0^{-1} c_0 + v_{0u} ≥ 0, [v_u^T, w_u^T] B_0^{-1} ≥ 0},

while we have P = proj(W_0; {v, v_0}), where W_0 is the set of all (v, w, v_0) ∈ Q^n × Q^{m−n} × Q satisfying

{−[v^T, w^T] A_0^{-1} c + v_0 ≥ 0, [v^T, w^T] A_0^{-1} ≥ 0}.

Lemma 15 A representation of the redundancy test cone P_u can be obtained from P by setting the coefficients of the corresponding p variables of v to 0 in the representation of P.

Proof ✄ By Step 1 of Algorithm 3, [v^T, w^T] A_0^{-1} [A, B] = v^T holds whenever (v, w, v_0) ∈ W_0. Rewrite v as v^T = [v_1^T, v_2^T], where v_1 and v_2 consist of the first p and the last n − p variables of v, respectively. We have [v^T, w^T] A_0^{-1} A = v_1^T and [v^T, w^T] A_0^{-1} B = v_2^T. Similarly, we have [v_u^T, w_u^T] B_0^{-1} A = 0 and [v_u^T, w_u^T] B_0^{-1} B = v_u^T whenever (v_u, w_u, v_{0u}) ∈ W_0. The lemma holds if we can show P_u = P|_{v_1 = 0}. We prove this in two steps.

(⊆) For any (v_u, v_{0u}) ∈ P_u, there exists w_u ∈ Q^{m−q} satisfying (v_u, w_u, v_{0u}) ∈ W_0. Let [v^T, w^T] := [v_u^T, w_u^T] B_0^{-1} A_0, where v^T = [v_1^T, v_2^T] with v_1 ∈ Q^p, v_2 ∈ Q^{n−p} and w ∈ Q^{m−n}. Then, v_1^T = [v_u^T, w_u^T] B_0^{-1} A = 0 and v_2^T = [v_u^T, w_u^T] B_0^{-1} B = v_u^T, due to (v_u, w_u, v_{0u}) ∈ W_0. Letting v_0 = v_{0u}, it is easy to verify that (v, w, v_0) ∈ W_0. Therefore, (0, v_u, v_{0u}) = (v, v_0) ∈ P.

(⊇) For any (0, v_2, v_0) ∈ P, there exists w ∈ Q^{m−n} satisfying (0, v_2, w, v_0) ∈ W_0. Let [v_u^T, w_u^T] := [0, v_2^T, w^T] A_0^{-1} B_0. We have v_u^T = [0, v_2^T, w^T] A_0^{-1} B = v_2^T. Letting v_{0u} = v_0, it is easy to verify that (v_u, w_u, v_{0u}) ∈ W_0. Therefore, (v_2, v_0) = (v_u, v_{0u}) ∈ P_u. ✁

For the polyhedron Q, given a variable order y_1 > ··· > y_n, for 1 ≤ i ≤ n, we denote by Q^{(y_i)} the set of inequalities in the representation of Q whose largest variable is y_i.

Definition 23 (Projected representation)
The projected representation of Q w.r.t. the variable order y_1 > ··· > y_n, denoted by ProjRep(Q; y_1 > ··· > y_n), is the linear system given by Q^{(y_1)} if n = 1, and is the conjunction of Q^{(y_1)} and ProjRep(proj(Q; {y_2, ..., y_n}); y_2 > ··· > y_n) otherwise. We say that P := ProjRep(Q; y_1 > ··· > y_n) is a minimal projected representation if, for all 1 ≤ k ≤ n, every inequality of P with y_k as largest variable is not redundant among all the inequalities of P with variables among y_k, ..., y_n.

We can generate the minimal projected representation of a polyhedron by Algorithm 5.
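The membership test that Algorithm 4 below formalizes, namely that an inequality is irredundant exactly when its coefficient vector is an extreme ray of the redundancy test cone, i.e. when the constraints of the cone it saturates have rank n, can be sketched in exact rational arithmetic. This is a sketch with our own helper names (not the BPAS implementation), assuming the candidate already lies in the cone:

```python
from fractions import Fraction

def rank(rows):
    # Exact rank of a rational matrix via Gaussian elimination.
    M = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rk, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for r in range(rk + 1, len(M)):
            f = M[r][col] / M[rk][col]
            for k in range(col, len(M[0])):
                M[r][k] -= f * M[rk][k]
        rk += 1
    return rk

def is_redundant(M, a, c, n):
    # With P = {(v, v0) : M [v, v0]^T <= 0}, the inequality a^T y <= c is
    # irredundant iff [a, c] is an extreme ray of P, i.e. iff the rows of M
    # vanishing on [a, c] have rank n.
    v = a + [c]
    s = [sum(row[k] * v[k] for k in range(n + 1)) for row in M]
    zero_rows = [M[i] for i in range(len(M)) if s[i] == 0]
    return not (zero_rows and rank(zero_rows) == n)

# Toy test cone {(v, v0) : v >= 0, v <= v0}, extreme rays (0,1) and (1,1).
M = [[-1, 0], [1, -1]]
assert is_redundant(M, [1], 1, 1) is False  # (1, 1) is extreme: a facet
assert is_redundant(M, [1], 2, 1) is True   # (1, 2) is interior: redundant
```

Exact rationals matter here: a floating-point rank computation could misclassify a nearly saturated constraint.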
Algorithm 4 RedundancyTest

Input: (P, ℓ), where: (i) P := {(v, v_0) ∈ Q^n × Q | M [v^T, v_0]^T ≤ 0}, with M ∈ Q^{m×(n+1)}; (ii) ℓ: an inequality a^T y ≤ c, with a ∈ Q^n and c ∈ Q;
Output: false if [a^T, c]^T is an extreme ray of P, true otherwise.

Let M be the coefficient matrix of P;
Let s := M [a^T, c]^T;
Let ζ(s) be the index set of the zero coefficients of s;
if rank(M_{ζ(s)}) = n then return false else return true end if

We analyze the computational complexity of Algorithm 5, which computes the minimal projected representation of a given polyhedron. This computation is equivalent to eliminating all the variables, one after another, by Fourier-Motzkin elimination. We prove that, using our algorithm, finding a minimal projected representation of a polyhedron is singly exponential in the dimension n of the ambient space. The most time-consuming procedure in Algorithm 5 is finding the initial redundancy test cone, which requires another polyhedron projection in higher dimension. As it is shown

Algorithm 5 Minimal Projected Representation of Q

Input: S = {A y ≤ c}: a representation of the input polyhedron Q;
Output:
A minimal projected representation of Q.

Generate the initial redundancy test cone P by Algorithm 3;
S_0 := {};
for i from 1 to m do
  if RedundancyTest(P, A_i y ≤ c_i) = false then
    S_0 := S_0 ∪ {A_i y ≤ c_i};
  end if
end for
for i from 0 to n − 1 do
  S_{i+1} := {};
  for ℓ_pos ∈ S_i with positive coefficient of y_{i+1} do
    for ℓ_neg ∈ S_i with negative coefficient of y_{i+1} do
      ℓ_new := Combine(ℓ_pos, ℓ_neg, y_{i+1});
      if RedundancyTest(P, ℓ_new) = false then
        S_{i+1} := S_{i+1} ∪ {ℓ_new};
      end if
    end for
  end for
  for ℓ ∈ S_i with zero coefficient of y_{i+1} do
    if RedundancyTest(P, ℓ) = false then
      S_{i+1} := S_{i+1} ∪ {ℓ};
    end if
  end for
  P := P|_{v_{i+1} = 0};
end for
return S_0 ∪ S_1 ∪ ··· ∪ S_n.

in Remark 4, we can use the block elimination method to perform this task efficiently. This requires computing the extreme rays of the projection cone. The double description method is an efficient way to solve this problem. We begin this section by computing the bit complexity of the double description algorithm.

Lemma 16 (Coefficient bound of extreme rays) Let S = {x ∈ Q^n | A x ≤ 0} be a minimal representation of a cone C ⊆ Q^n, where A ∈ Q^{m×n}. Then, the absolute value of a coefficient in any extreme ray of C is bounded above by (√(n−1) ‖A‖)^{n−1}.

Proof ✄ From the properties of extreme rays (see Section 2.1), by Lemma 7, we know that when r is an extreme ray, there exists a sub-matrix A' ∈ Q^{(n−1)×n} of A such that A' r = 0. This means that r is in the null-space of A'. Thus, the claim follows by Proposition 6.6 of [30]. ✁

Lemma 17 Let S = {x ∈ Q^n | A x ≤ 0} be the minimal representation of a cone C ⊆ Q^n, where A ∈ Q^{m×n}. The double description method, as specified in Algorithm 2, requires O(m^{n+2} n^{θ+ε} h^{1+ε}) bit operations, where h is the height of the matrix A.

Proof ✄ To analyze the complexity of the DD method after adding t inequalities, with n ≤ t ≤ m, the first step is to partition the extreme rays obtained at step t − 1: there are at most (t − 1)^{⌊n/2⌋} extreme rays (Lemma 6), whose coefficients can be bounded above by (√(n−1) ‖A‖)^{n−1} (Lemma 16). This partitioning step requires at most C_1 := (t − 1)^{⌊n/2⌋} × n × M(log((√(n−1) ‖A‖)^{n−1})) ≤ O(t^{⌊n/2⌋} n^{2+ε} h^{1+ε}) bit operations. After partitioning the vectors, the next step is to check adjacency for each pair of vectors. The cost of this step is equivalent to computing the rank of a sub-matrix A' ∈ Q^{(t−1)×n} of A. This should be done for at most t^n pairs of vectors. This step needs at most C_2 := t^n × O((t − 1) n^{θ+ε} h^{1+ε}) ≤ O(t^{n+1} n^{θ+ε} h^{1+ε}) bit operations. By Lemma 6, we know that there are at most t^{⌊n/2⌋} pairs of adjacent extreme rays. The next step is to combine every pair of adjacent vectors in order to obtain a new extreme ray. This step consists of n multiplications in Q of coefficients with absolute value bounded above by (√(n−1) ‖A‖)^{n−1} (Lemma 16), and this should be done for at most t^{⌊n/2⌋} vectors. Therefore, the bit complexity of this step is no more than C_3 := t^{⌊n/2⌋} × n × M(log((√(n−1) ‖A‖)^{n−1})) ≤ O(t^{⌊n/2⌋} n^{2+ε} h^{1+ε}). Finally, the complexity of step t of the algorithm is C := C_1 + C_2 + C_3. The claim follows after simplifying m · C. ✁

Lemma 18 (Complexity of constructing the initial redundancy test cone)
Let h be the maximum height of A and c in the input system. Then, generating the initial redundancy test cone (Algorithm 3) requires at most O(m^{n+3+ε} (n + 1)^{θ+ε} h^{1+ε}) bit operations. Moreover, proj(W_0; {v, v_0}) can be represented by O(m^{⌊(n+1)/2⌋}) inequalities, each with a height bound of O(m^ε n^{2+ε} h).

Proof ✄ We analyze Algorithm 3 step by step.

Step 1: construction of A_0 from A. The cost of this step can be neglected. However, it should be noticed that the matrix A_0 has a special structure. Without loss of generality, we can assume that the first n rows of A are linearly independent. The matrix A_0 has the following structure:

A_0 = ( A_1  0 ; A_2  I_{m−n} ),

where A_1 is a full-rank matrix in Q^{n×n} and A_2 ∈ Q^{(m−n)×n}.

Step 2: construction of the cone W_0. Using the structure of the matrix A_0, its inverse can be expressed as

A_0^{-1} = ( A_1^{-1}  0 ; −A_2 A_1^{-1}  I_{m−n} ).

Also, from Section 2.3, we have ‖A_1^{-1}‖ ≤ (√(n−1) ‖A‖)^{n−1}. Therefore, ‖A_0^{-1}‖ ≤ n^{(n+1)/2} ‖A‖^n, and ‖A_0^{-1} c‖ ≤ n^{(n+3)/2} ‖A‖^n ‖c‖ + (m − n) ‖c‖. That is, height(A_0^{-1}) ∈ O(n^{1+ε} h) and height(A_0^{-1} c) ∈ O(m^ε + n^{1+ε} h). As a result, the height of the coefficients of W_0 can be bounded above by O(m^ε + n^{1+ε} h). To estimate the bit complexity, we need the following consecutive steps:

- Computing A_0^{-1}, which requires O(n^{θ+1+ε} h^{1+ε}) + O((m − n) n^2 M(max(height(A), height(A_1^{-1})))) ≤ O(m n^{θ+1+ε} h^{1+ε}) bit operations;

- Constructing W_0 := {(v, w, v_0) | −[v^T, w^T] A_0^{-1} c + v_0 ≥ 0, [v^T, w^T] A_0^{-1} ≥ 0}, which requires at most C_1 := O(m^ε n^{θ+1+ε} h^{1+ε}) + O(m n M(height(A_0^{-1}, c))) + O((m − n) h) ≤ O(m^{1+ε} n^{θ+1+ε} h^{1+ε}) bit operations.

Step 3: projecting W_0 and finding the initial redundancy test cone. Following Lemma 13, we obtain a representation of proj(W_0; {v, v_0}) by finding the extreme rays of the corresponding projection cone. Let E = (−A_2 A_1^{-1})^T ∈ Q^{n×(m−n)} and let g^T consist of the last m − n elements of (A_0^{-1} c)^T. Then, the projection cone can be represented by:

C = { y ∈ Q^{m+1} | y^T [ E ; g^T ; I_{m−n} ] = 0, y ≥ 0 }

(the rows of the three blocks being stacked). Note that y_{n+2}, ..., y_{m+1} can be solved from the system of equations in the representation of C. We substitute them into the inequalities and obtain a representation of the cone C', given by:

C' = { y' ∈ Q^{n+1} | y'^T [ E ; g^T ] ≤ 0, y' ≥ 0 }.

In order to find the extreme rays of the cone C, we can find the extreme rays of the cone C' and then back-substitute them into the equations to find the extreme rays of C. Applying Algorithm 2 to C', we can obtain all the extreme rays of C', and subsequently the extreme rays of C. The cost estimate of this step is bounded above by the complexity of Algorithm 2 with C' as input. This operation requires at most C_2 := O(m^{n+3} (n + 1)^{θ+ε} max(height(E), height(g))^{1+ε}) ≤ O(m^{n+3+ε} (n + 1)^{θ+ε} h^{1+ε}) bit operations. The overall complexity of the algorithm can thus be bounded above by C_1 + C_2 ≤ O(m^{n+3+ε} (n + 1)^{θ+ε} h^{1+ε}). Also, by Lemma 16 and Lemma 17, we know that the cone C has at most O(m^{⌊(n+1)/2⌋}) distinct extreme rays, each with height no more than O(m^ε n^{2+ε} h). That is, proj(W_0; {v, v_0}) can be represented by at most O(m^{⌊(n+1)/2⌋}) inequalities, each with a height bound of O(m^ε n^{2+ε} h). ✁

Lemma 19
Algorithm 4 runs within O(m^n n^{θ+ε} h^{1+ε}) bit operations.

Proof ✄ The first step is to multiply the matrix M and the vector [a^T, c]^T. Let d_M and c_M be the number of rows and columns of M, respectively, so that M ∈ Q^{d_M × c_M}. We know that M is the coefficient matrix of proj(W_0; {v, v_0}). Therefore, after eliminating p variables, c_M = q + 1, where q = n − p, and d_M ≤ m^n. Also, we have height(M) ∈ O(m^ε n^{2+ε} h). With these specifications, the multiplication step and the rank computation step need O(m^n n^{1+ε} h^{1+ε}) and O(m^n (q + 1)^{θ+ε} h^{1+ε}) bit operations, respectively, and the claim follows after simplification. ✁

Using Algorithms 3 and 4, we can find the minimal projected representation of a polyhedron in singly exponential time w.r.t. the number of variables n.

Theorem 7 Algorithm 5 is correct. Moreover, a minimal projected representation of Q can be produced within O(m^{3n} n^{θ+1+ε} h^{1+ε}) bit operations.

Proof ✄ Correctness of the algorithm follows from Theorem 6 and Lemma 15. By [17, 23], we know that, after eliminating p variables, the projection of the polyhedron has at most m^{p+1} facets. For eliminating the next variable, there will be at most (m^{p+1})^2 pairs of inequalities to be considered, and each pair generates a new inequality which should be checked for redundancy. Therefore, the overall complexity of the algorithm is

O(m^{n+3+ε} (n + 1)^{θ+ε} h^{1+ε}) + Σ_{p=0}^{n−1} m^{2p+2} O(m^n n^{θ+ε} h^{1+ε}) = O(m^{3n} n^{θ+1+ε} h^{1+ε}). ✁

This section reports on our software implementation of the algorithms presented in the previous sections. Our code is part of the BPAS library and is written in the C programming language. We tested our algorithm in terms of effectiveness for removing redundant inequalities and also in terms of running time. The first thirteen test cases (t1 to t13) are linear inequality systems with random coefficients; moreover, each of these systems is consistent, that is, has a non-empty solution set. The systems S24 and S35 are the 24-simplex and 35-simplex polytopes, C56 and C510 are cyclic polytopes in dimension five with six and ten vertices, C68 is a cyclic polytope in dimension six with eight vertices, and C1011 is a cyclic polytope in dimension ten with eleven vertices [15]. Our test cases are distributed with our implementation. In our implementation, each system of linear inequalities is encoded by an unrolled linked list, where each cell stores an inequality in a dense representation.

Table 1 illustrates the effectiveness of each redundancy elimination method. Two of its columns specify the number of variables and inequalities of each input system, respectively. The last two columns show the maximum number of inequalities appearing in the process of the FME algorithm.
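Before discussing the tables, we sketch the Combine operation that drives the inequality counts above. This is a naive sketch with our own names, performing no redundancy test; integer data keeps the arithmetic exact:

```python
def eliminate(ineqs, j):
    # One Fourier-Motzkin step: eliminate variable j from a system of
    # inequalities, each given as (coeffs, rhs) and meaning coeffs . y <= rhs.
    pos = [t for t in ineqs if t[0][j] > 0]
    neg = [t for t in ineqs if t[0][j] < 0]
    out = [t for t in ineqs if t[0][j] == 0]
    for ap, bp in pos:
        for an, bn in neg:
            # Combine(l_pos, l_neg, y_j): the positive multiple -an[j] of
            # the "pos" row plus the positive multiple ap[j] of the "neg"
            # row cancels variable j.
            coeffs = [ap[j] * an[k] - an[j] * ap[k] for k in range(len(ap))]
            rhs = ap[j] * bn - an[j] * bp
            out.append((coeffs, rhs))
    return out

# y1 + y2 <= 3, -y1 <= 0, y2 <= 2; eliminating y1 yields y2 <= 2 and the
# combined inequality y2 <= 3, which a redundancy test would discard.
system = [([1, 1], 3), ([-1, 0], 0), ([0, 1], 2)]
assert eliminate(system, 0) == [([0, 1], 2), ([0, 1], 3)]
```

Each step multiplies the number of inequalities by up to a quadratic factor, which is exactly why pruning with Algorithm 4 after every elimination is essential in practice.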
The column check1 corresponds to the case where Kohler's algorithm is the only method used for redundancy detection, and the column check2 to the case where Balas' algorithm is used. In Table 2, the column MinProjRep gives the running times of our algorithm for computing a minimal projected representation. The Maple column shows the running time of the Projection function of the PolyhedralSets package in Maple. The last two columns show the running times of the Fourier elimination function of the CDD library: the CDD1 column is the running time of this function when it uses an LP method for redundancy elimination, while the CDD2 column is its running time when it uses Clarkson's algorithm [9]. Because the running time for eliminating all the variables exceeds one hour in some cases, we only eliminate some of the variables in those cases; the numbers in parentheses show the number of variables that could be eliminated within one hour of running the program.

During our study of Fourier-Motzkin elimination, we found many related works. As discussed above, removing redundant inequalities during the execution of Fourier-Motzkin elimination is the central issue towards efficiency. To our knowledge, all available implementations of Fourier-Motzkin elimination rely on linear programming for removing all the redundant inequalities, an idea suggested in [22]. However, and as mentioned above, there are alternative algorithmic approaches relying on linear algebra. In [7], Chernikov proposed a redundancy test with little added work, which greatly improves the practical efficiency of Fourier-Motzkin elimination. Kohler proposed a method in [23] which only uses matrix arithmetic operations to test the redundancy of inequalities. As observed by Imbert in his work [17], the method he proposed in that paper as well as those of Chernikov

Table 1: Maximum number of inequalities
Case    MinProjRep   Maple     CDD1          CDD2
t1      8.042        7974      142           47
t2      107.377      3321217   122245        7925
t3      2.193        736       4             1
t4      5.960        2579      48            17
t5      3.946        3081      32            13
t6      26.147       117021    core dump     wrong result
t7      353.588      > 1h      1177807       57235
t8      4.893        4950      124           22
t9      8.858        8229      75            39
t10     24998.501    > 1h      > 1h (2)      > 1h (3)
t11     191191.909   > 1h      > 1h (2)      > 1h (2)
t12     21665.704    > 1h      > 1h (2)      746581
t13     1264.289     > 1h      77372         30683
S24     39.403       6485      334           105
S35     158.286      57992     1827          431
C56     1.389        825       11            3
C68     4.782        20154     682           75
C1011   85.309       > 1h      > 1h (4)      76861
C510    23.973       6173      6262          483
Table 2: Running time comparison (ms)

and Kohler are essentially equivalent. Even though these works are very effective in practice, none of them can remove all the redundant inequalities generated by Fourier-Motzkin elimination.

Besides Fourier-Motzkin elimination, block elimination is another algorithmic tool for projecting polyhedra onto a lower-dimensional subspace. This method relies on the extreme rays of the so-called projection cone. Although there exist efficient methods to enumerate the extreme rays of this projection cone, like the double description method [13] (also known as Chernikova's algorithm [8, 25]), this method cannot remove all the redundant inequalities either.

In [1], Balas shows that if certain invertibility conditions are satisfied, then the extreme rays of the redundancy test cone exactly define a minimal representation of the projection of a polyhedron. As Balas mentioned in his paper, this method can be extended to any polyhedron. Through experimentation, we found that the results and constructions in Balas' paper had some flaws. First of all, in Balas' work, the redundancy test cone is defined as the projection of the cone W_0 := {(v, w, v_0) ∈ Q^q × Q^{m−q} × Q | [v^T, w^T] B_0^{-1} A_0 = 0, −[v^T, w^T] B_0^{-1} c_0 + v_0 = 0, [v^T, w^T] B_0^{-1} ≥ 0} onto the (v, v_0) space. The author claimed that a^T x ≤ c_0 defines a facet of the projection proj(Q; x) if and only if (a, c_0) is an extreme ray of the redundancy test cone proj(W_0; {v, v_0}). However, we have a counter-example to this claim. In this example, when we eliminate two variables, the cone proj(W_0; {v, v_0}) has 19 extreme rays while proj(Q; x) has 18 facets; 18 of the 19 extreme rays of proj(W_0; {v, v_0}) yield the 18 facets of proj(Q; x), while the remaining extreme ray yields an inequality that is redundant w.r.t. the 18 facets. The main reason leading to this situation is a misuse of Farkas' lemma in the proof of Balas' paper.
We improved this situation by changing −[v^T, w^T] B_0^{-1} c_0 + v_0 = 0 to −[v^T, w^T] B_0^{-1} c_0 + v_0 ≥ 0 in the construction of W_0, which restores the one-to-one correspondence between the extreme rays of proj(W_0; {v, v_0}) and the facets of proj(Q; x); for the details, please refer to Theorems 5 and 6. In fact, with our change in the construction of W_0, we have at most one extra extreme ray, which is always (0, 1) and which can simply be discarded; this avoids testing each generated inequality for redundancy against a representation of proj(Q; x), which is time-consuming. Our algorithm tests the redundancy of an inequality a^T x ≤ c by checking whether (a, c) is an extreme ray of the redundancy test cone or not.

After revisiting Balas' method, we found another cone, called the subsumption cone [16, 24], which, as we prove below, equals the initial redundancy test cone P := proj(W_0; {v, v_0}) of the previous section. Consider the polyhedron Q given in Equation (2) and denote

T := {(λ, α, β) | λ^T A = α^T, λ^T c ≤ β, λ ≥ 0},

where λ and α are column vectors of dimension m and n, respectively, and β is a variable. The subsumption cone of Q is obtained by eliminating λ in T, that is, proj(T; {α, β}). Remember that we can obtain the initial test cone P = proj(W_0; {v, v_0}) by Algorithm 3; here W_0 := {(v, w, v_0) | −[v^T, w^T] A_0^{-1} c + v_0 ≥ 0, [v^T, w^T] A_0^{-1} ≥ 0}.

Lemma 20 The subsumption cone of Q equals its initial redundancy test cone P.

Proof ✄ Let λ^T := [v^T, w^T] A_0^{-1} and β = v_0. We prove the lemma in two steps.

(⊆) For any (α, β) in the subsumption cone proj(T; {α, β}), there exists λ ∈ Q^m satisfying (λ, α, β) ∈ T. Remember that A_0 = [A, A'], where A' = [e_{n+1}, ..., e_m] with e_i being the i-th vector of the canonical basis of Q^m for n + 1 ≤ i ≤ m. We have A_0^{-1} A = [e_1, ..., e_n], with e_i being the i-th vector of the canonical basis of Q^m for 1 ≤ i ≤ n. Hence, setting [v^T, w^T] := λ^T A_0, we have α^T = λ^T A = [v^T, w^T] A_0^{-1} A = v^T. Also, we have [v^T, w^T] A_0^{-1} c = λ^T c ≤ β = v_0 and [v^T, w^T] A_0^{-1} = λ^T ≥ 0. Therefore, (α, β) = (v, v_0) ∈ proj(W_0; {v, v_0}).

(⊇) For any (v, v_0) in the initial redundancy test cone proj(W_0; {v, v_0}), there exists w ∈ Q^{m−n} satisfying (v, w, v_0) ∈ W_0. Let α = v. Then, α^T = v^T = [v^T, w^T] A_0^{-1} A = λ^T A, λ^T c = [v^T, w^T] A_0^{-1} c ≤ v_0 = β and λ^T = [v^T, w^T] A_0^{-1} ≥ 0. Therefore, (v, v_0) = (α, β) ∈ proj(T; {α, β}). ✁

In Section 4, we have shown how to use the initial test cone to remove all the redundant inequalities and to give a minimal representation of the projections of a given pointed polyhedron; the detailed proofs were given in the previous sections. The same applies to the subsumption cone. In [16, 24], the authors mentioned that the subsumption cone cannot detect all the redundant inequalities. However, their objects are full-dimensional polyhedra, while ours are pointed polyhedra. Notice that any full-dimensional polyhedron can be transformed into a pointed polyhedron by a coordinate transformation.

Based on the improved version of Balas' method, we obtain an algorithm to remove all the redundant inequalities produced by Fourier-Motzkin elimination.
Even though this algorithm still has exponential complexity, which is expected, it is very effective in practice, as we have shown in Section 6. The projection of polyhedra is a useful tool for solving problem instances in parametric linear programming, which plays an important role in the analysis, transformation and scheduling of for-loops of computer programs; see for instance [5, 20, 21].
Solving parametric linear programming problems with Fourier-Motzkin elimination
In this section, we show how to use Fourier-Motzkin elimination for solving parametric linear programming (PLP) problem instances. A PLP problem instance has the form:

    z(Θ) = min { cx : Ax ≤ BΘ + b }        (9)

where A ∈ Z^{m×n}, B ∈ Z^{m×p} and b ∈ Z^m; here x ∈ Q^n is the vector of variables and Θ ∈ Q^p is the vector of parameters.
To solve this problem, we first apply the following preprocessing step. Let g > 0 be the greatest common divisor of the entries of c. Via Gaussian elimination, we can obtain a unimodular matrix U ∈ Z^{n×n} satisfying [0, ..., 0, g] = cU. Letting t = U^{-1}x, the above PLP problem is transformed into the equivalent form:

    z(Θ) = min { g t_n : AUt ≤ BΘ + b }        (10)

Applying Algorithm 5 to the constraints AUt ≤ BΘ + b with the variable order t_1 > ··· > t_n > Θ, we obtain ProjRep(Q; t_1 > ··· > t_n > Θ), where Q ⊆ Q^{n+p} is the polyhedron represented by AUt ≤ BΘ + b. We extract the representation of the projection proj(Q; {t_n, Θ}), denoted by Φ := Φ_1 ∪ Φ_2, where Φ_1 is the set of inequalities with a non-zero coefficient of t_n and Φ_2 is the set of inequalities free of t_n. Since g >
0, to solve (10), we only need to consider the lower bounds of t_n, which are easy to deduce from Φ_1.
Consider Example 3.3 in [5]:

    min −2x_1 − x_2
    s.t. x_1 + 3x_2 ≤ 9 − 2θ_1 + θ_2,
         2x_1 + x_2 ≤ 8 + θ_1 − 2θ_2,
         x_1 ≤ 4 + θ_1 + θ_2,
         −x_1 ≤ 0, −x_2 ≤ 0.

Here c = (−2, −1), g = 1 and cU = (0, 1) for the unimodular matrix U = ( 1 0 ; −2 −1 ). Let (t_1, t_2)^T = U^{-1}(x_1, x_2)^T; the above PLP problem is equivalent to

    min t_2
    s.t. −5t_1 − 3t_2 ≤ 9 − 2θ_1 + θ_2,
         −t_2 ≤ 8 + θ_1 − 2θ_2,
         t_1 ≤ 4 + θ_1 + θ_2,
         −t_1 ≤ 0,
         2t_1 + t_2 ≤ 0.

Let P denote the polyhedron represented by the above constraints. Applying Algorithm 5 to P with the variable order t_1 > t_2 > θ_1 > θ_2, we obtain the projected representation ProjRep(P; t_1 > t_2 > θ_1 > θ_2), from which we can easily extract the representation of the projected polyhedron proj(P; {t_2, θ_1, θ_2}):

    proj(P; {t_2, θ_1, θ_2}) := { −t_2 − θ_1 + 2θ_2 ≤ 8, −t_2 − θ_1 − 2θ_2 ≤ 29/3, −t_2 + 4θ_1 − 2θ_2 ≤ 18, t_2 ≤ 0, −θ_1 − θ_2 ≤ 4, −θ_1 + 2θ_2 ≤ 8, −θ_2 ≤ 17/3, θ_2 ≤ 25/3 }.

Thus t_2 has three lower bounds:
    t_2 = −8 − θ_1 + 2θ_2, valid for { −θ_2 ≤ 5/12, −θ_1 − θ_2 ≤ 4, −θ_1 + 2θ_2 ≤ 8, θ_1 − (4/5)θ_2 ≤ 2 };
    t_2 = −θ_1 − 2θ_2 − 29/3, valid for { θ_2 ≤ −5/12, θ_1 ≤ 5/3, −θ_1 − θ_2 ≤ 4 };
    t_2 = 4θ_1 − 2θ_2 − 18, valid for { −θ_1 ≤ −5/3, −θ_1 + (4/5)θ_2 ≤ −2, θ_1 − (1/2)θ_2 ≤ 9/2 }.
On each of these three regions of the parameter space, the optimal value z(Θ) equals the corresponding lower bound of t_2.

References

[1] Egon Balas. Projection with a minimal system of inequalities.
Computational Optimization and Applications, 10(2):189–193, 1998.
[2] C. Bastoul. Code generation in the polyhedral model is easier than you think. In Proceedings of the 13th International Conference on Parallel Architectures and Compilation Techniques, PACT '04, pages 7–16, Washington, DC, USA, 2004. IEEE Computer Society.
[3] M. Benabderrahmane, L. Pouchet, A. Cohen, and C. Bastoul. The polyhedral model is more widely applicable than you think. In Proceedings of the 19th Joint European Conference on Theory and Practice of Software, International Conference on Compiler Construction, CC'10/ETAPS'10, pages 283–303, Berlin, Heidelberg, 2010. Springer-Verlag.
[4] U. Bondhugula, A. Hartono, J. Ramanujam, and P. Sadayappan. A practical automatic polyhedral parallelizer and locality optimizer. SIGPLAN Not., 43(6):101–113, June 2008.
[5] Francesco Borrelli, Alberto Bemporad, and Manfred Morari. Geometric algorithm for multiparametric linear programming. Journal of Optimization Theory and Applications, 118(3):515–540, 2003.
[6] Changbo Chen, Svyatoslav Covanov, Farnam Mansouri, Marc Moreno Maza, Ning Xie, and Yuzhen Xie. The basic polynomial algebra subprograms. In Mathematical Software - ICMS 2014 - 4th International Congress, Seoul, South Korea, August 5-9, 2014. Proceedings, pages 669–676, 2014.
[7] Sergei Nikolaevich Chernikov. Contraction of systems of linear inequalities. Doklady Akademii Nauk SSSR, 131(3):518–521, 1960.
[8] Natal'ja V. Chernikova. Algorithm for finding a general formula for the non-negative solutions of a system of linear inequalities. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 5(2):334–337, 1965.
[9] Kenneth L. Clarkson. More output-sensitive geometric algorithms. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, pages 695–702. IEEE, 1994.
[10] P. Feautrier. Dataflow analysis of array and scalar references. International Journal of Parallel Programming, 20, 1991.
[11] Paul Feautrier. Automatic parallelization in the polytope model. In The Data Parallel Programming Model: Foundations, HPF Realization, and Scientific Applications, pages 79–103, London, UK, 1996. Springer-Verlag. http://dl.acm.org/citation.cfm?id=647429.723579.
[12] Komei Fukuda. The CDD and CDDplus homepage.
[13] Komei Fukuda and Alain Prodon. Double description method revisited. In Combinatorics and Computer Science, pages 91–111. Springer, 1996.
[14] T. Grosser, H. Zheng, R. Aloor, A. Simbürger, A. Größlinger, and L. Pouchet. Polly - polyhedral optimization in LLVM. In First International Workshop on Polyhedral Compilation Techniques (IMPACT'11), Chamonix, France, April 2011.
[15] Martin Henk, Jürgen Richter-Gebert, and Günter M. Ziegler. Basic properties of convex polytopes. Handbook of Discrete and Computational Geometry, pages 255–382, 2004.
[16] Tien Huynh, Catherine Lassez, and Jean-Louis Lassez. Practical issues on the projection of polyhedral sets. Annals of Mathematics and Artificial Intelligence, 6(4):295–315, 1992.
[17] Jean-Louis Imbert. Fourier's elimination: Which to choose? In PPCP, pages 117–129, 1993.
[18] Rui-Juan Jing and Marc Moreno Maza. Computing the integer points of a polyhedron, I: algorithm. In Computer Algebra in Scientific Computing - 19th International Workshop, CASC 2017, Beijing, China, September 18-22, 2017, Proceedings, pages 225–241, 2017.
[19] Rui-Juan Jing and Marc Moreno Maza. Computing the integer points of a polyhedron, II: complexity estimates. In Computer Algebra in Scientific Computing - 19th International Workshop, CASC 2017, Beijing, China, September 18-22, 2017, Proceedings, pages 242–256, 2017.
[20] Colin Jones. Polyhedral tools for control. Technical report, University of Cambridge, 2005.
[21] Colin N. Jones, Eric C. Kerrigan, and Jan M. Maciejowski. On polyhedral projection and parametric programming. Journal of Optimization Theory and Applications, 138(2):207–220, 2008.
[22] Leonid Khachiyan. Fourier-Motzkin elimination method. In Christodoulos A. Floudas and Panos M. Pardalos, editors, Encyclopedia of Optimization, Second Edition, pages 1074–1077. Springer, 2009.
[23] David A. Kohler. Projections of convex polyhedral sets. Technical report, California Univ. at Berkeley, Operations Research Center, 1967.
[24] Jean-Louis Lassez. Querying constraints. In Proceedings of the Ninth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 288–298. ACM, 1990.
[25] Hervé Le Verge. A note on Chernikova's algorithm. PhD thesis, INRIA, 1992.
[26] Peter McMullen. The maximum numbers of faces of a convex polytope. Mathematika, 17(2):179–184, 1970.
[27] David Monniaux. Quantifier elimination by lazy model enumeration. In International Conference on Computer Aided Verification, pages 585–599. Springer, 2010.
[28] Arnold Schönhage and Volker Strassen. Schnelle Multiplikation großer Zahlen. Computing, 7(3-4):281–292, 1971.
[29] Alexander Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, Inc., New York, NY, USA, 1986.
[30] Arne Storjohann. Algorithms for Matrix Canonical Forms. PhD thesis, Swiss Federal Institute of Technology Zurich, 2000.
[31] Marco Terzer. Large Scale Methods to Enumerate Extreme Rays and Elementary Modes. PhD thesis, ETH Zurich, 2009.
[32] S. Verdoolaege, J. Carlos Juega, A. Cohen, J. Ignacio Gómez, C. Tenllado, and F. Catthoor. Polyhedral parallel code generation for CUDA. TACO, 9(4):54, 2013.
[33] Michel Waldschmidt.