A Simpler Approach to Linear Programming
Jean-Louis Lassez
IBM Research Center (Retired)
[email protected]
Abstract
Dantzig and Eaves claimed that fundamental duality theorems of linear programming are a trivial consequence of Fourier elimination. Another property of Fourier elimination is considered here, regarding the existence of implicit equalities rather than solvability. This leads to a different interpretation of duality theory which allows us to use Gaussian elimination to decide the solvability of systems of linear inequalities, for bounded systems.
Keywords: Linear programming, Solutions at Infinity, Elementary Dual, Implicit Equalities, Fourier elimination, Gaussian elimination, Primal cone, Smale problem number 9.
Dantzig and Eaves [DE73] make an important statement regarding Fourier elimination: "From it one can derive easily, by trivial algebraic manipulations, the fundamental theorem of linear programming, Farkas lemma, the various theorems of the alternatives and the well known Motzkin transposition theorem."

Because Fourier's algorithm is about elimination, and Tarski showed that any theorem in elementary algebra and geometry can be obtained via elimination, this remark about the capacity of Fourier elimination to provide proofs of major theorems is not accidental. What is more striking is the use of the word "trivial". The essence of Dantzig and Eaves' remark is that theories of duality and associated theorems of the alternative are based on the fact that Fourier's algorithm generates a contradiction [0] ≤ −1 when the primal is not solvable. In [LM92a] it was shown that when Fourier's algorithm generates a tautology [0] ≤ 0, it tells us that the primal has implicit equalities. This gives us a parallel theory of duality with associated theorems of the alternative, also obtained by trivial operations. In particular, the primal is solvable or solvable at infinity if and only if the elementary dual has implicit equalities. And the strong duality theorem has a new interpretation which leads to an algorithm to decide if a bounded set is solvable. The algorithm uses Gaussian elimination until the last step; then it uses a trivial case of Fourier elimination. It has three interesting aspects. First, from a theorem proving point of view, the algorithm comes very naturally and its correctness is straightforward, in the same line as Dantzig and Eaves' statement. Second, it solves an important problem, known as Smale's problem number nine, about finding a strongly polynomial algorithm. Third, one could have found a solution for Smale's problem number nine with a polynomial of arbitrarily high degree, making it unsuitable for practical applications.
Here the complexity is based on Gaussian elimination, for which there are many efficient implementations. Consequently, this new algorithm may have significant practical use, a topic we are exploring. Now simplicity is in the eye of the beholder. Here we claim simplicity as all the operations are, as noted by Dantzig and Eaves, "trivial" manipulations whose correctness is an immediate consequence of Fourier's algorithm. The exception is the use of Gaussian elimination, which is not a trivial operation, but can be considered a simple one.

A trivial remark gives us a different view of duality. We need a definition first: a constraint L ≤ r is an implicit equality if and only if the constraint −L ≤ −r holds, that is, if and only if adding the two constraints gives us the constraint [0] ≤ 0, where [0] denotes the constraint with all coefficients set to 0. This obvious remark can be generalized easily, as was shown in [LM92a]: the constraints { Li ≤ ri } are implicit equalities if and only if there exists a set of positive coefficients { λi }, called multipliers, giving rise to the linear combinations Σ λi Li = [0] and Σ λi ri = 0. It might not immediately appear so but, as we will see, this is in fact an obvious statement of a theorem of the alternative.

The elementary dual of a set S of constraints AX ≤ b ∪ { xj ≥ 0 } is A^T Λ ≥ 0 ∪ { −Σ λi ri ≥ 0 } ∪ { λi ≥ 0 }, where A^T represents the transpose of A, and the ri are the right hand side constants that form the vector b.
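The multiplier criterion above can be checked mechanically. The following is a minimal sketch, not code from the paper: the function name and the tiny two-constraint instance are ours. Positive multipliers λi certify implicit equalities exactly when Σ λi Li = [0] and Σ λi ri = 0.

```python
# Sketch (ours): check whether positive multipliers lam certify that the
# constraints L_i <= r_i are implicit equalities, i.e. that
# sum(lam_i * L_i) = [0] and sum(lam_i * r_i) = 0.

def certifies_implicit_equalities(L, r, lam):
    """L: list of coefficient rows, r: right hand sides, lam: multipliers."""
    n = len(L[0])
    combo = [sum(lam[i] * L[i][j] for i in range(len(L))) for j in range(n)]
    rhs = sum(lam[i] * r[i] for i in range(len(L)))
    return all(m > 0 for m in lam) and all(c == 0 for c in combo) and rhs == 0

# x <= 1 and -x <= -1 together force x = 1: both are implicit equalities,
# certified by the multipliers (1, 1).
print(certifies_implicit_equalities([[1], [-1]], [1, -1], [1, 1]))  # True
```

Adding the two scaled constraints reproduces [0] ≤ 0, which is exactly the definition given above.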
The constraint −Σ λi ri ≥ 0 is called the extension. Once we remark that a constraint Li ≤ ri can be written equivalently as Li + si − ri = 0, where si is a non negative variable called a slack variable, the following becomes obvious: the coefficients xj and si of a solution of S, together with the coefficient 1 for the extension, form a set of multipliers for a set of implicit equalities in the elementary dual. In other words, S has a solution if and only if the extension in the elementary dual is an implicit equality. Equivalently, S has no solution if and only if the extension is not an implicit equality, that is, the elementary dual has a solution such that the extension is > 0; this is Farkas' lemma. This is because the solution in the elementary dual corresponds to multipliers in the primal giving us [0] ≤ −1.

In fact we have a more complete version of the lemma, which tells us that if the elementary dual has a solution, the primal is unsolvable or has an implicit equality. This implies also that if Farkas' lemma is a trivial consequence of Fourier's algorithm, the previous remark tells us that Farkas' lemma implies the correctness of Fourier's algorithm. That is because an unsolvable set remains unsolvable when Fourier elimination is applied. A non solvable set may have implicit equalities, with an appropriate set of multipliers. For instance it may have a solvable subset. But also, as it is not solvable, Fourier elimination will produce a constraint [0] ≤ −1. If we add the tautology [0] ≤ 1, we have multipliers that produce [0] ≤ 0.

A set AX ≤ b is said to have a solution at infinity if and only if the set AX ≤ 0 has a solution different from the origin. A set without solutions at infinity is called bounded. Solutions at infinity correspond to implicit equalities in the elementary dual independent of the extension.

Example: The set

−x + y ≤ 2
x − y ≤ −1
−x ≤ 0
−y ≤ 0

can be rewritten as:

−x + y + s1 − 2 = 0
x − y + s2 + 1 = 0
−x ≤ 0
−y ≤ 0
It has the elementary dual below. The primal solution x = 0, y = 1 (slacks s1 = 1, s2 = 0) and the solution at infinity x = 1, y = 1 (slacks s1 = 0, s2 = 0) provide multipliers:

                        solution        solution at infinity
                        multipliers     multipliers
−λ1 + λ2 ≥ 0            x = 0           x = 1
 λ1 − λ2 ≥ 0            y = 1           y = 1
−2λ1 + λ2 ≥ 0           1               0
 λ1 ≥ 0                 s1 = 1          s1 = 0
 λ2 ≥ 0                 s2 = 0          s2 = 0

If the right hand side coefficients 2 and −1 are respectively replaced by −2 and 1, the extension is not an implicit equality, as λ1 = λ2 = 1 makes the extension > 0. We can reword the theorem of the alternative in the following way:
Theorem 1.
Solutions and solutions at infinity in the primal are multipliers in the elementary dual.
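Theorem 1 can be checked numerically on the example above. This is our own sketch, with the dual rows taken from the reconstructed example {−x + y ≤ 2, x − y ≤ −1, x ≥ 0, y ≥ 0}: the primal solution, its slacks, and the coefficient 1 for the extension combine the dual constraints into the zero constraint.

```python
# Sketch (ours): a primal solution (x, y), its slacks, and multiplier 1 for the
# extension give multipliers for implicit equalities in the elementary dual.
# Dual constraints as coefficient rows over (lambda1, lambda2), all ">= 0":
dual_rows = [
    [-1, 1],   # -l1 + l2  >= 0   multiplier: x
    [1, -1],   #  l1 - l2  >= 0   multiplier: y
    [-2, 1],   # -2l1 + l2 >= 0   (extension) multiplier: 1
    [1, 0],    #  l1 >= 0         multiplier: s1
    [0, 1],    #  l2 >= 0         multiplier: s2
]
x, y = 0, 1          # primal solution
s1 = 2 - (-x + y)    # slack of -x + y <= 2   -> 1
s2 = -1 - (x - y)    # slack of  x - y <= -1  -> 0
mults = [x, y, 1, s1, s2]
combo = [sum(m * row[j] for m, row in zip(mults, dual_rows)) for j in range(2)]
print(combo)  # [0, 0]: the combination is the zero constraint [0]
```

Since all right hand sides in the dual are 0, the zero combination of left hand sides is exactly the implicit-equality certificate.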
The fundamental theorem on strong duality states that for a set of constraints AX ≤ b ∪ { xj ≥ 0 }, the maximum of Σ cj xj, with c being the vector of the cj and b the vector of right hand side constants ri, is obtained when the following system is solvable:

AX ≤ b ∪ { xj ≥ 0 }
A^T Λ ≥ c
Σ λi ri = Σ cj xj
{ λi ≥ 0 }.

Replacing the equality by an inequality gives the equivalent system:

AX ≤ b ∪ { xj ≥ 0 }
A^T Λ ≥ c
−Σ λi ri ≥ −Σ cj xj
{ λi ≥ 0 }.

Let us also call this last constraint the extension. Because a solution to the system will give multipliers such that Σ cj xj − Σ cj xj = 0, we have that the maximum is reached for the extension to be an implicit equality with multiplier equal to 1. Example:
Max(x + y)

x + y ≤ 2
x − y ≤ 0
−x + y ≤ 0
−x ≤ −1
−y ≤ −1
−x ≤ 0
−y ≤ 0

In the standard duality we add the constraints:

λ1 + λ2 − λ3 − λ4 ≥ 1
λ1 − λ2 + λ3 − λ5 ≥ 1
2λ1 − λ4 − λ5 = x + y

together with sign constraints. Let us write it this way:

                              Multipliers
λ1 + λ2 − λ3 − λ4 ≥ 1         x
λ1 − λ2 + λ3 − λ5 ≥ 1         y
2λ1 − λ4 − λ5 ≥ x + y
−2λ1 + λ4 + λ5 ≥ −x − y       1

together with sign constraints. If this last constraint is an implicit equality, this formulation is equivalent to the standard one when there is a solution. And when there is a solution it satisfies the last constraint as an equality, as the multipliers give us [0] ≥ 0.

The following properties of multipliers are as immediate as they are fundamental. It is however a delicate situation, because we have to be very careful when we use the words solution or solution at infinity, multipliers or implicit equalities. The following statements are straightforward when we use them in the right context, and we realize that they follow directly from the two properties of Fourier's algorithm regarding solvability and implicit equalities. If the primal is solvable it has a set of multipliers with the multiplier for the extension equal to 1. However multipliers can be multiplied by an arbitrary positive scalar, so the requirement that the extension's multiplier be equal to 1 can be replaced by the requirement that the multiplier be positive.

Let AX ≤ b together with non negativity constraints for the variables be the primal. Let z be the name of a new variable. The primal cone is defined by adding the column −bz to AX, setting the right hand side constants to 0, with the same non negativity constraints for the variables as well as for the variable z. If the primal is bounded, then the primal cone is reduced to the origin if and only if the primal is unsolvable. Indeed, if the primal cone has a solution X different from the origin, z is > 0 as the primal is bounded, and the primal is solvable by dividing the X solution by z.
And if the primal has a solution, obviously the primal cone has one different from the origin. This is important, as we know that to determine solvability we can restrict ourselves to the bounded case [Ach84]. And also the immediate: if the primal cone has multipliers, the primal is either non solvable or non full dimensional. This depends on whether the multiplier for −z ≤ 0 is zero or not. If it is positive, it tells us that when all variables are eliminated we are left with a contradiction in the primal.

If the primal is bounded and unsolvable, its elementary dual has no implicit equalities; equivalently, all multipliers in the elementary dual are equal to zero. If the primal is bounded and solvable, all sets of multipliers have a positive multiplier for the extension. If the primal is solvable and full dimensional, all the constraints in the elementary dual are implicit equalities and the elementary dual is reduced to the origin. Conversely, if the elementary dual is reduced to the origin, the primal is solvable and full dimensional. So an algorithm to decide if a cone is reduced to the origin can be used to decide if a set, bounded or not, is full dimensional solvable. Or more directly, a set is solvable full dimensional if and only if its primal cone is full dimensional.

Assuming the input to be bounded, it may be solvable full dimensional, solvable with implicit equalities, or not solvable. In the first case its primal cone is full dimensional and its dual cone is reduced to the origin. In the second case its primal cone has implicit equalities but is not reduced to the origin. In the third case, its primal cone is reduced to the origin and its dual is a full dimensional cone.

Consider the set: x + y ≤ 1, x − y ≤ −1, −x + 2y ≤ 2. If we eliminate x using Fourier elimination, we obtain: 3y ≤ 3 and y ≤ 1; the multipliers for this set are 0 and 0, as it has no implicit equality. If we eliminate x by Gaussian elimination using the first constraint, setting x = 1 − y, we obtain: −2y ≤ −2 and 3y ≤ 3.
This set admits the multipliers 3 and 2. We call these parasite multipliers. However, as we see now, if the initial set has multipliers, eliminating a variable by Gaussian elimination, setting a symbol = instead of ≥ or ≤ in a main constraint with a positive multiplier, gives a new set which also has multipliers that are derived from those of the initial set. We call these legitimate multipliers. It may also generate parasite multipliers. When the new set has multipliers we can decide if they are legitimate or parasite. From the legitimate ones we can retrieve multipliers for the initial set. The parasite multipliers tell us that the constraint used to eliminate a variable in the initial set is redundant, as a positive linear combination of some other constraints in the set. It is straightforward, but there are a lot of details we have to consider. And of course, if we eliminate a variable by such Gaussian elimination using a constraint that is not an implicit equality, we may generate an unsolvable set.

Formally: we consider a cone with ≤ symbols. This cone can be a primal cone. We choose a variable x0, the other variables being labelled xv with the index v positive, and a main constraint a0 x0 + L0 ≤ 0. The requirements are that a0 be different from 0 and that L0 not be reduced to [0]. Otherwise we are in a trivial case where all main constraints are [0] ≤ 0; and if a main constraint is such that L is reduced to [0], then it can be eliminated as redundant or the variable set to 0. In such cases multipliers will be adapted to the situation.

The notation is simplified if we first scale the coefficients of the variable x0 in such a way that they become equal to 1, −1, or remain 0. If we have a constraint a x0 + L ≤ 0 with a multiplier μ, it becomes x0 + (1/a) L ≤ 0 if a is positive, or −x0 + (1/(−a)) L ≤ 0 if a is negative. The multiplier μ becomes aμ or −aμ.
The multiplier for the constraint −x0 ≤ 0 will not be modified, as we see below. The constraints are classified in the following way:

                          Multipliers
a0 x0 + L0 ≤ 0            μ0
ai x0 + Li ≤ 0            μi
−aj x0 + Lj ≤ 0           μj
Lk ≤ 0                    μk
−x0 ≤ 0                   s0
−xp ≤ 0                   sp

The sets of indices i, j, k are distinct sets and do not contain the index 0, and all indices p are different from 0. All the a0, ai, aj are positive. After scaling:

                          Multipliers
x0 + (1/a0) L0 ≤ 0        a0 μ0
x0 + (1/ai) Li ≤ 0        ai μi
−x0 + (1/aj) Lj ≤ 0       aj μj
Lk ≤ 0                    μk
−x0 ≤ 0                   s0
−xp ≤ 0                   sp

We eliminate x0 = −(1/a0) L0 and obtain a new set of constraints with multipliers:

                               Multipliers
−(1/a0) L0 + (1/ai) Li ≤ 0     ai μi
(1/a0) L0 + (1/aj) Lj ≤ 0      aj μj
Lk ≤ 0                         μk
(1/a0) L0 ≤ 0                  s0
−xp ≤ 0                        sp

It is straightforward to show that the multipliers of the initial set are transferred in the way described. The important point is that s0, the slack multiplier of a sign constraint, becomes the multiplier of a main constraint. If this multiplier is positive, the main constraint is an implicit equality. The process is reversible, as a0 μ0 = −Σ ai μi + Σ aj μj + s0. Parasite multipliers will fail this reversal.
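The elimination step and the multiplier transfer above can be sketched in code. This is our own illustration, not the paper's: the function name, the pivot choice, and the tiny two-variable cone are assumptions. Exact rational arithmetic keeps the transfer and its reversal checkable.

```python
from fractions import Fraction

def eliminate(constraints, mults, pivot):
    """Eliminate variable 0 from a cone of constraints `row . vars <= 0`,
    using `constraints[pivot]` as the equality x0 = -(1/a0) L0.
    Multipliers transfer as described: a row with x0-coefficient a gets |a|*mu;
    rows without x0 keep their multiplier."""
    a0 = constraints[pivot][0]
    L0 = [Fraction(c, a0) for c in constraints[pivot][1:]]  # (1/a0) L0
    new_cons, new_mults = [], []
    for idx, (row, mu) in enumerate(zip(constraints, mults)):
        if idx == pivot:
            continue
        a = row[0]
        # substitute x0 = -(1/a0) L0 into a*x0 + L <= 0
        new_cons.append([row[k + 1] - a * L0[k] for k in range(len(L0))])
        new_mults.append(abs(a) * mu if a != 0 else mu)
    return new_cons, new_mults

# Cone in variables (x0, x1); the multipliers (1, 1, 0, 2) certify that all
# constraints are implicit equalities (the cone is reduced to the origin).
cons = [[1, 1],    #  x0 + x1 <= 0   mu0 = 1  (pivot, a0 = 1)
        [-1, 1],   # -x0 + x1 <= 0   mu  = 1
        [-1, 0],   # -x0 <= 0        s0  = 0
        [0, -1]]   # -x1 <= 0        s1  = 2
mults = [1, 1, 0, 2]
new_cons, new_mults = eliminate(cons, mults, pivot=0)

# The transferred multipliers still combine the new rows into [0]:
print(sum(m * row[0] for m, row in zip(new_mults, new_cons)) == 0)  # True
# Reversal a0*mu0 = -sum(ai*mui) + sum(aj*muj) + s0 holds here: 1 = 0 + 1 + 0.
```

Note how s0 = 0, the multiplier of the sign constraint −x0 ≤ 0, becomes the multiplier of the new main constraint (1/a0) L0 ≤ 0, as in the table above.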
We know that a bounded system is solvable if and only if its primal cone is not reduced to the origin. So in a first step we have as input a bounded system and we go to its primal cone. A constraint Σ xi ≤ 2 is added to the primal cone. Call this system the primal. We then use the new version of the strong duality theorem of linear programming to compute max(Σ xi). We illustrate the algorithm with an example. The new system consists of two parts, the primal and its associated strong elementary dual.

Max(x + y)

x + y ≤ 2
x − y ≤ 0
x − y ≤ 0
x − y ≤ 0
−x ≤ 0
−y ≤ 0

                                      Multipliers
λ1 + λ2 + λ3 + λ4 ≥ 1                 x
λ1 − λ2 − λ3 − λ4 ≥ 1                 y
−2λ1 ≥ −x − y = −σ, σ = 0 or 2        1
λ1 ≥ 0                                s1
λ2 ≥ 0                                s2
λ3 ≥ 0                                s3
λ4 ≥ 0                                s4

There is always a solution: either the origin with Σ xi = 0, or a solution such that the maximum is reached for Σ xi = 2. Assume the maximum is 2 and set σ = 2. The new system is such that Σ xi ≤ 2 is an implicit equality; it is solvable. The multiplier for the extension is equal to 1, and the strong elementary dual is solvable. The new system has a remarkable property: as λ1 is equal to 1, it is solvable for all coefficients of the other λ's, including the situation where they are all 0. And the strong elementary dual is equal to the elementary dual of the original cone.

Now we eliminate a λi different from λ1, using a main constraint in the strong elementary dual, different from the extension, where the inequality is replaced by an equality: λ1 − λ2 − λ3 − λ4 = 1, that is, λ2 = λ1 − λ3 − λ4 − 1.

                          Multipliers
2λ1 ≥ 2                   u > 0
−2λ1 ≥ −x − y = −2        1
λ1 − λ3 − λ4 ≥ 1          s2 > 0
λ1 ≥ 0                    s1 = 0
λ3 ≥ 0                    s3
λ4 ≥ 0                    s4

We are in fact working on the elementary dual of the original cone, and this operation can set to 0 several if not all λ's, as parasite multipliers can be created. But that does not affect the solvability of the system, as λ1 remains equal to 1, and the extension remains an implicit equality. What happens is that a solvable system becomes even more solvable. If the original cone is reduced to the origin, we have λ1 = 0; having set σ = 2 renders the new system unsolvable.
Either the strong elementary dual is not solvable, or the extension is not an implicit equality. Eliminating a variable as was done preserves unsolvability. If the strong elementary dual is solvable and eliminating a variable still gives a solvable set, all solutions are such that λ1 = 0. We cannot have the extension be an implicit equality, which would imply λ1 = 1.

We eliminate in the same way all the other variables but the variable λ1:

                          Multipliers
2λ1 ≥ 2                   1/2 > 0
−2λ1 ≥ −x − y = −2        1/2 > 0
λ1 ≥ 0                    s1 = 0

This last system is solvable for x + y = 2, and the extension is still an implicit equality. And we find that λ1 is equal to 1. Example:
The input cone is x + y ≤ 0, −x − y ≤ 0, with x, y non negative. It is reduced to the origin. The process gives us the two constraints λ1 ≤ 0 and λ1 ≥ 1, while the only two possible solutions are λ1 = 1 or λ1 = 0, and they are mutually exclusive. So we find λ1 = 1 if and only if the original bounded system is solvable.

The references [Ach84, Fou27, DE73] and [LM92a] are used to establish the results in this paper. The other references [CL03, BLLM99, HJCL91, Sma98, HN94, YL82, LM92b, LHM92] provide background information on various aspects of Fourier elimination.

References

[Ach84] S. Achmanov. Programmation linéaire, traduction française.
MIR, Moscou, 1984.

[BLLM99] Alexander Brodsky, Catherine Lassez, Jean-Louis Lassez, and Michael J. Maher. Separability of polyhedra for optimal filtering of spatial and constraint data. Journal of Automated Reasoning, 23(1):83–104, 1999.

[CL03] Vijay Chandru and Jean-Louis Lassez. Qualitative theorem proving in linear constraints. In Verification: Theory and Practice, pages 395–406. Springer, 2003.

[DE73] George B. Dantzig and B. C. Eaves. Fourier-Motzkin elimination and its dual. Journal of Combinatorial Theory, Ser. A, page 255, 1973.

[Fou27] J. B. J. Fourier. Analysis of the work of the Royal Academy of Sciences during the year 1824. Mathematics Part, 1827. Partial English translation by D. A. Kohler, "Translation of a Report by Fourier on his work on Linear Inequalities," Opsearch 10.

[HJCL91] T. Huynh, L. Joskowicz, Catherine Lassez, and Jean-Louis Lassez. Practical tools to reason about linear constraints. Fundamenta Informaticae, 15(3-4), 1991.

[HN94] Dorit S. Hochbaum and Joseph Naor. Simple and fast algorithms for linear and integer programs with two variables per inequality. SIAM Journal on Computing, 23(6):1179–1192, 1994.

[LHM92] Jean-Louis Lassez, T. Huynh, and K. McAloon. Simplification and elimination of redundant linear arithmetic constraints. Constraint Logic Programming, Selected Research, 1992.

[LM92a] Jean-Louis Lassez and Michael J. Maher. On Fourier's algorithm for linear constraints. Journal of Automated Reasoning, 9(1), 1992.

[LM92b] Jean-Louis Lassez and K. McAloon. A canonical form for generalized linear constraints. Journal of Symbolic Computation, 1992.

[Sma98] Steve Smale. Mathematical problems for the next century. The Mathematical Intelligencer, 20(2):7–15, 1998.

[YL82] Boris Yamnitsky and Leonid A. Levin. An old linear programming algorithm runs in polynomial time. In 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982), 1982.